MAP/M/c and M/PH/c queues with constant impatience times


Abstract

This paper considers stationary MAP/M/c and M/PH/c queues with constant impatience times. In those queues, waiting customers leave the system without receiving their services if their elapsed waiting times exceed a predefined deterministic threshold. For the MAP/M/c queue with constant impatience times, Choi et al. (Math Oper Res 29:309–325, 2004) derive the virtual waiting time distribution, from which the loss probability and the actual waiting time distribution are obtained. We first refine their result for the virtual waiting time and then derive the stationary queue length distribution. We also discuss the computational procedure for performance measures of interest. Next we consider the stationary M/PH/c queue with constant impatience times and derive the loss probability, the waiting time distribution, and the queue length distribution. Some numerical results are also provided.



References

  1. Asmussen, S., O’Cinneide, C.A.: Representations for matrix-geometric and matrix-exponential steady-state distributions with applications to many-server queues. Stoch. Mod. 14, 369–387 (1998)

  2. Asmussen, S., Møller, J.R.: Calculation of the steady state waiting time distribution in GI/PH/c and MAP/PH/c queues. Queueing Syst. 37, 9–30 (2001)

  3. Barrer, D.Y.: Queuing with impatient customers and ordered service. Oper. Res. 5, 650–656 (1957)

  4. Boots, N.K., Tijms, H.: An M/M/c queue with impatient customers. Top 7, 213–220 (1999)

  5. Brandt, A., Brandt, M.: On the M(n)/M(n)/s queue with impatient calls. Perform. Eval. 35, 1–18 (1999)

  6. Brill, P.H., Posner, M.J.M.: Level crossings in point processes applied to queues: single-server case. Oper. Res. 25, 662–674 (1977)

  7. Carbonell, F., Jiménez, J.C., Pedroso, L.M.: Computing multiple integrals involving matrix exponentials. J. Comput. Appl. Math. 213, 300–305 (2008)

  8. Choi, B.D., Kim, B., Zhu, D.: \(MAP/M/c\) queue with constant impatient time. Math. Oper. Res. 29, 309–325 (2004)

  9. Cohen, J.W.: On up- and downcrossings. J. Appl. Prob. 14, 405–410 (1977)

  10. Fox, B.L., Glynn, P.W.: Computing Poisson probabilities. Commun. ACM 31, 440–445 (1988)

  11. Guo, C.-H.: Convergence analysis of the Latouche–Ramaswami algorithm for null recurrent quasi-birth-death processes. SIAM J. Matrix Anal. Appl. 23, 744–760 (2002)

  12. Haugen, R.B., Skogan, E.: Queueing systems with several input streams and time out. Telektronikk 2, 100–106 (1978)

  13. Haugen, R.B., Skogan, E.: Queueing systems with stochastic time out. IEEE Trans. Commun. COM-28, 1984–1989 (1980)

  14. Heyman, D.P., Stidham Jr., S.: The relation between time and customer averages in queues. Oper. Res. 28, 983–994 (1980)

  15. Kawanishi, K., Takine, T.: A note on the virtual waiting time in the stationary PH/M/c+D queue. J. Appl. Prob. (2015, to appear)

  16. Kim, J., Kim, J.: M/PH/1 queue with deterministic impatience time. Commun. Korean Math. Soc. 28, 383–396 (2013)

  17. Kim, J., Kim, B., Kim, J.: M/PH/1 queue with deterministic impatience time. Proc. KSIAM 2007(2), 63–66 (2007)

  18. Kim, B., Kim, J.: A single server queue with Markov modulated service rates and impatient customers. Perform. Eval. 83, 1–15 (2015)

  19. Latouche, G., Ramaswami, V.: A logarithmic reduction algorithm for quasi-birth-and-death processes. J. Appl. Prob. 30, 650–674 (1993)

  20. Latouche, G., Ramaswami, V.: Introduction to Matrix Analytic Methods in Stochastic Modeling. SIAM, Philadelphia, PA (1999)

  21. Latouche, G., Pearce, C.E.M., Taylor, P.G.: Invariant measures for quasi-birth-and-death processes. Stoch. Mod. 14, 443–460 (1998)

  22. Lucantoni, D.M., Ramaswami, V.: Efficient algorithms for solving the nonlinear matrix equations arising in phase type queues. Stoch. Mod. 1, 29–51 (1985)

  23. Lucantoni, D.M., Meier-Hellstern, K.S., Neuts, M.F.: A single-server queue with server vacations and a class of non-renewal arrival processes. Adv. Appl. Prob. 22, 676–705 (1990)

  24. Miyazawa, M.: The derivation of invariance relations in complex queueing systems with stationary inputs. Adv. Appl. Prob. 15, 874–885 (1983)

  25. Movaghar, A.: On queueing with customer impatience until the beginning of service. Queueing Syst. 29, 337–350 (1998)

  26. Narayana, S., Neuts, M.F.: The first two moment matrices of the counts for the Markovian arrival process. Stoch. Mod. 8, 459–477 (1992)

  27. Neuts, M.F.: Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach. Johns Hopkins Univ. Press, Baltimore, MD (1981)

  28. Neuts, M.F.: Structured Stochastic Matrices of M/G/1 Type and Their Applications. Marcel Dekker, New York, NY (1989)

  29. Sakasegawa, H., Wolff, R.W.: The equality of the virtual delay and attained waiting time distributions. Adv. Appl. Prob. 22, 257–259 (1990)

  30. Sakuma, Y., Takine, T.: Multi-class M/PH/1 queues with deterministic impatience times (submitted)

  31. Seneta, E.: Non-negative Matrices and Markov Chains, 2nd edn. Springer, New York (1981)

  32. Sengupta, B.: An invariance relationship for the G/G/1 queue. Adv. Appl. Prob. 21, 956–957 (1989)

  33. Swensen, A.R.: On a GI/M/c queue with bounded waiting times. Oper. Res. 34, 895–908 (1986)

  34. Takine, T.: Queue length distribution in a FIFO single-server queue with multiple arrival streams having different service time distributions. Queueing Syst. 39, 349–375 (2001)

  35. Takine, T., Matsumoto, Y., Suda, T., Hasegawa, T.: Mean waiting times in nonpreemptive priority queues with Markovian arrival and i.i.d. service processes. Perform. Eval. 20, 131–149 (1994)

  36. Tijms, H.C.: Stochastic Models: An Algorithmic Approach. Wiley, Chichester (1994)

  37. Van Loan, C.F.: Computing integrals involving the matrix exponential. IEEE Trans. Autom. Control AC-23, 395–404 (1978)

  38. Wallström, B.: A queueing system with time-outs and random departures. In: Proceedings of ITC 8, paper ID.231 (1976)


Acknowledgments

We are grateful to the referee(s) and the associate editor for their valuable comments and suggestions, which have improved this paper. This research was supported in part by Grant-in-Aid for Scientific Research (C) of the Japan Society for the Promotion of Science under Grants Nos. 25330027 and 26350416.

Author information


Corresponding author

Correspondence to Ken’ichi Kawanishi.

Appendices

Appendix 1: Proof of Lemma 1

Let \(\theta _C\) denote the maximum absolute value of diagonal elements of \({\varvec{C}}_{\mathrm {A}}\).

$$\begin{aligned} \theta _C = \max _{i \in \mathscr {M}} | [{\varvec{C}}_{\mathrm {A}}]_{i,i} | . \end{aligned}$$

Since \({\varvec{C}}_{\mathrm {A}}+{\varvec{D}}_{\mathrm {A}}\) is irreducible, \({\varvec{C}}_{\mathrm {A}}+ (\mu /(\theta _C+\mu )) {\varvec{D}}_{\mathrm {A}}\) is also irreducible. On the other hand, it follows from (10) that

$$\begin{aligned} {\varvec{Y}}_{\mathrm {A}}= \frac{\mu }{\theta _C+\mu } {\varvec{Y}}_{\mathrm {A}}^2 + {\varvec{Y}}_{\mathrm {A}}\left( \varvec{I} + \frac{1}{\theta _C+\mu } ({\varvec{C}}_{\mathrm {A}}-\mu \varvec{I}) \right) + \frac{1}{\theta _C+\mu } {\varvec{D}}_{\mathrm {A}}\ge \frac{1}{\theta _C+\mu } {\varvec{D}}_{\mathrm {A}}. \end{aligned}$$

We thus have \({\varvec{C}}_{\mathrm {A}}+ \mu {\varvec{Y}}_{\mathrm {A}}\ge {\varvec{C}}_{\mathrm {A}}+ (\mu /(\theta _C+\mu )) {\varvec{D}}_{\mathrm {A}}\). Therefore, \({\varvec{C}}_{\mathrm {A}}+ \mu {\varvec{Y}}_{\mathrm {A}}\) is irreducible, from which the irreducibility of \(\varvec{U}_{1,1}\) follows. The irreducibility of \(\varvec{U}_{2,2}\) immediately follows from Lemma 2 in [8].
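As a purely illustrative aside (not part of the proof), the displayed rearrangement of (10) shows that \({\varvec{Y}}_{\mathrm {A}}\) satisfies the quadratic matrix equation \(\mu {\varvec{Y}}^2 + {\varvec{Y}}({\varvec{C}}_{\mathrm {A}}-\mu \varvec{I}) + {\varvec{D}}_{\mathrm {A}} = \varvec{O}\), and the rearranged form is itself a uniformized fixed-point iteration. The Python sketch below runs this iteration for a hypothetical two-state MAP; the matrices and the rate \(\mu\) are illustrative values, not taken from the paper.

```python
# Illustrative two-state MAP; C (= C_A), D (= D_A), and mu are hypothetical values.
C = [[-3.0, 1.0], [0.5, -2.0]]   # transitions without arrivals (subgenerator)
D = [[1.5, 0.5], [0.5, 1.0]]     # transitions with arrivals (C + D is a generator)
mu = 2.0
I = [[1.0, 0.0], [0.0, 1.0]]

def mul(A, B):  # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def lin(a, A, b, B):  # elementwise a*A + b*B
    return [[a * A[i][j] + b * B[i][j] for j in range(2)] for i in range(2)]

theta = max(abs(C[i][i]) for i in range(2))  # theta_C, the uniformization constant

def residual(Y):
    # R(Y) = mu Y^2 + Y (C - mu I) + D; the rearranged (10) sets R(Y) = O
    return lin(1.0, lin(mu, mul(Y, Y), 1.0, mul(Y, lin(1.0, C, -mu, I))), 1.0, D)

# Uniformized fixed-point iteration Y <- Y + R(Y)/(theta + mu), started at Y = O;
# all coefficient matrices are nonnegative, so the iterates increase monotonically
# to the minimal nonnegative solution.
Y = [[0.0, 0.0], [0.0, 0.0]]
err = 1.0
for _ in range(100000):
    R = residual(Y)
    err = max(abs(v) for row in R for v in row)
    if err < 1e-12:
        break
    Y = lin(1.0, Y, 1.0 / (theta + mu), R)
```

Dividing by \(\theta _C+\mu\) is exactly the uniformization step used in the proof: it makes every coefficient matrix in the iteration nonnegative.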

Next we prove \(\varvec{U}_{i,j} \varvec{e} = \varvec{0}\) (\(i,j=1,2\)) when \(\rho =1\). We first show \(\varvec{U}_{1,2} \varvec{e} = \varvec{0}\). Post-multiplying both sides of (10) by \(\varvec{e}\) and using (1), we obtain

$$\begin{aligned} ({\varvec{Y}}_{\mathrm {A}}-\varvec{I})({\varvec{D}}_{\mathrm {A}}-\mu {\varvec{Y}}_{\mathrm {A}}) \varvec{e} = \varvec{0}. \end{aligned}$$
(77)

On the other hand, because \({\varvec{\pi }}_{\mathrm {A}}{\varvec{Y}}_{\mathrm {A}}= {\varvec{\pi }}_{\mathrm {A}}\) for \(\rho \ge 1\) (see Lemma 2 in [8]), we have

$$\begin{aligned} \varvec{e}{\varvec{\pi }}_{\mathrm {A}}({\varvec{D}}_{\mathrm {A}}-\mu {\varvec{Y}}_{\mathrm {A}}) \varvec{e} = \varvec{e} \cdot (\lambda - \mu ) = \varvec{0}. \end{aligned}$$

The difference of the above two equations becomes \([\varvec{e}{\varvec{\pi }}_{\mathrm {A}}- ({\varvec{Y}}_{\mathrm {A}}-\varvec{I})] ({\varvec{D}}_{\mathrm {A}}-\mu {\varvec{Y}}_{\mathrm {A}}) \varvec{e} = \varvec{0}\). Because \(\varvec{e}{\varvec{\pi }}_{\mathrm {A}}- ({\varvec{Y}}_{\mathrm {A}}-\varvec{I})\) is nonsingular, we obtain

$$\begin{aligned} ({\varvec{D}}_{\mathrm {A}}-\mu {\varvec{Y}}_{\mathrm {A}}) \varvec{e} = \varvec{0}, \end{aligned}$$
(78)

so that \(\varvec{U}_{1,2} \varvec{e} =\varvec{0}\).

Next we consider \(\varvec{U}_{1,1} \varvec{e} = \varvec{0}\). Owing to (1) and (78), \(({\varvec{C}}_{\mathrm {A}}+\mu {\varvec{Y}}_{\mathrm {A}})\varvec{e} = (\mu {\varvec{Y}}_{\mathrm {A}}- {\varvec{D}}_{\mathrm {A}})\varvec{e} = \varvec{0}\). Therefore, \(\varvec{U}_{1,1}\varvec{e} = \varvec{0}\) is equivalent to

$$\begin{aligned} (-\varvec{B}_{c-1})^{-1}{\varvec{D}}_{\mathrm {A}}\varvec{e} = \varvec{e}. \end{aligned}$$
(79)

We thus show \((-\varvec{B}_i)^{-1}{\varvec{D}}_{\mathrm {A}}\varvec{e} = \varvec{e}\) (\(i=0,1,\ldots ,c-1\)) below. For \(i=0\), \(\varvec{B}_0={\varvec{C}}_{\mathrm {A}}\), so that from (1), we have \((-\varvec{B}_0)^{-1}{\varvec{D}}_{\mathrm {A}}\varvec{e} = \varvec{e}\). Suppose \((-\varvec{B}_i)^{-1}{\varvec{D}}_{\mathrm {A}}\varvec{e} = \varvec{e}\) holds for some \(i=n-1\) (\(n=1,\ldots ,c-1\)). Post-multiplying both sides of (6) by \(\varvec{e}\) yields

$$\begin{aligned} \varvec{B}_n \varvec{e} = {\varvec{C}}_{\mathrm {A}}\varvec{e} - \frac{n \mu }{c} \varvec{e} + \frac{n \mu }{c} (-\varvec{B}_{n-1})^{-1}{\varvec{D}}_{\mathrm {A}}\varvec{e} = {\varvec{C}}_{\mathrm {A}}\varvec{e} = -{\varvec{D}}_{\mathrm {A}}\varvec{e}. \end{aligned}$$

We thus have \((-\varvec{B}_i)^{-1}{\varvec{D}}_{\mathrm {A}}\varvec{e} = \varvec{e}\) for \(i=n\).

The equality \(\varvec{U}_{2,1} \varvec{e} = \varvec{0}\) immediately follows from (79) and \({\varvec{Z}}_{\mathrm {A}}\varvec{e}=\varvec{e}\) (see Lemma 2 in [8]). Finally, \(\varvec{U}_{2,2} \varvec{e} = \varvec{0}\) follows from (1) and \({\varvec{Z}}_{\mathrm {A}}\varvec{e}=\varvec{e}\). \(\square \)
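The induction above can also be checked numerically. The sketch below uses hypothetical MAP parameters (not from the paper) and assumes, consistently with the post-multiplied identity displayed above, that (6) is the recursion \(\varvec{B}_0={\varvec{C}}_{\mathrm {A}}\), \(\varvec{B}_n = {\varvec{C}}_{\mathrm {A}} - (n\mu /c)\varvec{I} + (n\mu /c)(-\varvec{B}_{n-1})^{-1}{\varvec{D}}_{\mathrm {A}}\); it verifies \((-\varvec{B}_i)^{-1}{\varvec{D}}_{\mathrm {A}}\varvec{e}=\varvec{e}\) at every step.

```python
# Hypothetical MAP parameters (not from the paper), used only to exercise the recursion.
C = [[-3.0, 1.0], [0.5, -2.0]]   # C_A
D = [[1.5, 0.5], [0.5, 1.0]]     # D_A; rows of C + D sum to zero
mu, c = 2.0, 4

def mul(A, B):  # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv_neg(A):  # inverse of -A for a 2x2 matrix A
    M = [[-A[i][j] for j in range(2)] for i in range(2)]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det], [-M[1][0] / det, M[0][0] / det]]

# B_0 = C_A and, for n >= 1, B_n = C_A - (n mu / c) I + (n mu / c) (-B_{n-1})^{-1} D_A.
B = [row[:] for row in C]
worst = 0.0
for n in range(c):
    if n > 0:
        r = n * mu / c
        K = mul(inv_neg(B_prev), D)   # (-B_{n-1})^{-1} D_A, with unit row sums by induction
        B = [[C[i][j] - (r if i == j else 0.0) + r * K[i][j] for j in range(2)]
             for i in range(2)]
    V = mul(inv_neg(B), D)            # (-B_n)^{-1} D_A; its rows should sum to one
    worst = max(worst, max(abs(V[i][0] + V[i][1] - 1.0) for i in range(2)))
    B_prev = B
```

The check mirrors the proof: the base case uses \({\varvec{C}}_{\mathrm {A}}\varvec{e}=-{\varvec{D}}_{\mathrm {A}}\varvec{e}\), and each step inherits \(\varvec{B}_n\varvec{e}=-{\varvec{D}}_{\mathrm {A}}\varvec{e}\) from the previous one.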

Appendix 2: Proof of Lemma 2

It follows from (1), (2), and (79) that

$$\begin{aligned} \varvec{u}_{3,1}\varvec{e}&= ({\varvec{\kappa }}_{\mathrm {A}}{\varvec{C}}_{\mathrm {A}}+ \mu {\varvec{\kappa }}_{\mathrm {A}}(-\varvec{B}_{c-1})^{-1} {\varvec{D}}_{\mathrm {A}}+{\varvec{\pi }}_{\mathrm {A}}) \varvec{e} = {\varvec{\kappa }}_{\mathrm {A}}{\varvec{C}}_{\mathrm {A}}\varvec{e} + \mu {\varvec{\kappa }}_{\mathrm {A}}\varvec{e} + 1 \\&= {\varvec{\kappa }}_{\mathrm {A}}(\mu \varvec{I} -{\varvec{D}}_{\mathrm {A}}) \varvec{e} + 1. \end{aligned}$$

On the other hand, it follows from (2) and (3) that

$$\begin{aligned} \varvec{u}_{3,2} \varvec{e}&= (\tau {\varvec{\pi }}_{\mathrm {A}}+ {\varvec{\kappa }}_{\mathrm {A}}){\varvec{D}}_{\mathrm {A}}\varvec{e} -(\mu \tau + 1) {\varvec{\pi }}_{\mathrm {A}}\varvec{e} - \mu {\varvec{\kappa }}_{\mathrm {A}}\varvec{e}\\&= {\varvec{\kappa }}_{\mathrm {A}}{\varvec{D}}_{\mathrm {A}}\varvec{e} - \mu {\varvec{\kappa }}_{\mathrm {A}}\varvec{e} -1 + \lambda \tau - \mu \tau \\&= - [ {\varvec{\kappa }}_{\mathrm {A}}(\mu \varvec{I}- {\varvec{D}}_{\mathrm {A}}) \varvec{e} + 1]. \end{aligned}$$

We thus have \(\varvec{u}_{3,1} \varvec{e} = - \varvec{u}_{3,2} \varvec{e}\). Furthermore, using \(\mu =\lambda = {\varvec{\pi }}_{\mathrm {A}}{\varvec{D}}_{\mathrm {A}}\varvec{e}\) and \({\varvec{\kappa }}_{\mathrm {A}}^* \varvec{e} = 0\), we have

$$\begin{aligned} {\varvec{\kappa }}_{\mathrm {A}}(\mu \varvec{I} -{\varvec{D}}_{\mathrm {A}}) \varvec{e} + 1= & {} ({\varvec{\kappa }}_{\mathrm {A}}^* + \alpha {\varvec{\pi }}_{\mathrm {A}}) (\mu \varvec{I} -{\varvec{D}}_{\mathrm {A}}) \varvec{e} + 1 = {\varvec{\kappa }}_{\mathrm {A}}^* (\mu \varvec{I} -{\varvec{D}}_{\mathrm {A}}) \varvec{e} + 1\\= & {} 1 - {\varvec{\kappa }}_{\mathrm {A}}^* {\varvec{D}}_{\mathrm {A}}\varvec{e}, \end{aligned}$$

which completes the proof of the equalities.

Next we prove \(1 - {\varvec{\kappa }}_{\mathrm {A}}^* {\varvec{D}}_{\mathrm {A}}\varvec{e} \ge 1/2\). From (16), we have

$$\begin{aligned} {\varvec{\kappa }}_{\mathrm {A}}^* {\varvec{D}}_{\mathrm {A}}\varvec{e} = \lambda - \lambda ^{-1} {\varvec{\pi }}_{\mathrm {A}}{\varvec{D}}_{\mathrm {A}}(\varvec{e}{\varvec{\pi }}_{\mathrm {A}}-{\varvec{C}}_{\mathrm {A}}-{\varvec{D}}_{\mathrm {A}})^{-1} {\varvec{D}}_{\mathrm {A}}\varvec{e}, \end{aligned}$$
(80)

where we use \(\mu =\lambda \). Let A(0, t] denote the number of arrivals in an interval (0, t] given that the distribution of the initial state J(0) of the underlying Markov chain is given by \({\varvec{\pi }}_{\mathrm {A}}\). The variance of A(0, t] is known to be [26]

$$\begin{aligned} \mathrm {Var}(A(0,t]) = (\lambda - 2\lambda ^2 + 2 {\varvec{\pi }}_{\mathrm {A}}{\varvec{D}}_{\mathrm {A}}\varvec{d}_1) t - 2 {\varvec{\pi }}_{\mathrm {A}}{\varvec{D}}_{\mathrm {A}}\left( \varvec{I}-e^{({\varvec{C}}_{\mathrm {A}}+{\varvec{D}}_{\mathrm {A}})t}\right) \varvec{d}_2, \end{aligned}$$
(81)

where

$$\begin{aligned} \varvec{d}_i = (\varvec{e}{\varvec{\pi }}_{\mathrm {A}}-{\varvec{C}}_{\mathrm {A}}-{\varvec{D}}_{\mathrm {A}})^{-i} {\varvec{D}}_{\mathrm {A}}\varvec{e}, \quad i=1,2. \end{aligned}$$

Substituting (80) into (81), we have

$$\begin{aligned} \mathrm {Var}(A(0,t]) = (\lambda - 2 \lambda {\varvec{\kappa }}_{\mathrm {A}}^* {\varvec{D}}_{\mathrm {A}}\varvec{e}) t - 2 {\varvec{\pi }}_{\mathrm {A}}{\varvec{D}}_{\mathrm {A}}\left( \varvec{I}-e^{({\varvec{C}}_{\mathrm {A}}+{\varvec{D}}_{\mathrm {A}})t}\right) \varvec{d}_2. \end{aligned}$$

By definition, \(\mathrm {Var}(A(0,t]) \ge 0\) and \(\lim _{t \rightarrow \infty } \exp [({\varvec{C}}_{\mathrm {A}}+{\varvec{D}}_{\mathrm {A}})t] = \varvec{e}{\varvec{\pi }}_{\mathrm {A}}\). We thus have

$$\begin{aligned} \lim _{t \rightarrow \infty } \frac{\mathrm {Var}(A(0,t]) }{t} = \lambda - 2 \lambda {\varvec{\kappa }}_{\mathrm {A}}^* {\varvec{D}}_{\mathrm {A}}\varvec{e} \ge 0, \end{aligned}$$

from which \(1 - {\varvec{\kappa }}_{\mathrm {A}}^* {\varvec{D}}_{\mathrm {A}}\varvec{e} \ge 1/2\) follows. \(\square \)
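The key algebraic step, substituting (80) into (81), turns the linear-in-\(t\) coefficient of the variance into \(\lambda - 2\lambda {\varvec{\kappa }}_{\mathrm {A}}^* {\varvec{D}}_{\mathrm {A}}\varvec{e}\). A small numerical check of this identity, and of the resulting bound, can be sketched as follows; the two-state MAP below is hypothetical (values not from the paper), and \(\mu =\lambda\) is taken as in the lemma.

```python
# Hypothetical two-state MAP; verify rate1 == rate2 and the bound kappa*_A D_A e <= 1/2.
C = [[-3.0, 1.0], [0.5, -2.0]]   # C_A
D = [[1.5, 0.5], [0.5, 1.0]]     # D_A

# Stationary vector pi of the 2x2 generator C + D
Q01 = C[0][1] + D[0][1]
Q10 = C[1][0] + D[1][0]
pi = [Q10 / (Q01 + Q10), Q01 / (Q01 + Q10)]
lam = sum(pi[i] * (D[i][0] + D[i][1]) for i in range(2))   # lambda = pi D e

# d1 = (e pi - C - D)^{-1} D e, solved directly for the 2x2 case
M = [[pi[j] - C[i][j] - D[i][j] for j in range(2)] for i in range(2)]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
De = [D[i][0] + D[i][1] for i in range(2)]
d1 = [(M[1][1] * De[0] - M[0][1] * De[1]) / det,
      (M[0][0] * De[1] - M[1][0] * De[0]) / det]

piD = [pi[0] * D[0][j] + pi[1] * D[1][j] for j in range(2)]
piDd1 = piD[0] * d1[0] + piD[1] * d1[1]

kstar_De = lam - piDd1 / lam                 # kappa*_A D_A e via (80), with mu = lambda
rate1 = lam - 2.0 * lam ** 2 + 2.0 * piDd1   # coefficient of t in (81)
rate2 = lam - 2.0 * lam * kstar_De           # the same coefficient after substituting (80)
```

Since `rate2` is the asymptotic variance rate of the arrival counts, its nonnegativity gives \(1 - {\varvec{\kappa }}_{\mathrm {A}}^* {\varvec{D}}_{\mathrm {A}}\varvec{e} \ge 1/2\) exactly as in the proof.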

Appendix 3: Proof of Lemma 3

Let \(\varvec{a}_{\mathrm {cond}}(x)\) (\(x > 0\)) denote a \(1 \times m\) vector whose jth (\(j \in \mathscr {M}\)) element is given by

$$\begin{aligned} {[\varvec{a}_{\mathrm {cond}}(x)]_j}&= \frac{\mathrm{d}}{\mathrm{d}x} Pr( A(0) \le x, J(T_c(0))=j \mid L(0) \ge c). \end{aligned}$$

By definition, \(\varvec{a}(x) = Pr(L(0) \ge c) \cdot \varvec{a}_{\mathrm {cond}}(x)\), where

$$\begin{aligned} Pr(L(0) \ge c) = 1 - \sum _{n=0}^{c-1} \varvec{q}_n \varvec{e} = \int _{0+}^{\infty } \varvec{p}(x) \varvec{e} \mathrm{d}x. \end{aligned}$$
(82)

Note that every customer who finds \(L(t-)=c-1\) or \(0 < V(t-) < \tau \) at his/her arrival epoch t will become customer #c when his/her service starts. Furthermore, once the service starts, the customer remains customer #c for an exponentially distributed interval with mean \(1/\mu \). Because of the memoryless property of the exponential distribution, the elapsed service time of customer #c follows an exponential distribution with mean \(1/\mu \) and is independent of the waiting time. Based on these observations and conditional PASTA, we obtain

$$\begin{aligned} \varvec{a}_{\mathrm {cond}}(x) = \frac{\varvec{q}_{c-1}{\varvec{D}}_{\mathrm {A}}}{\lambda _c} \cdot \mu e^{-\mu x} + \displaystyle \int _{0+}^{\min (x,\tau )} \frac{\varvec{p}(y){\varvec{D}}_{\mathrm {A}}}{\lambda _c} \cdot \mu e^{-\mu (x-y)} \mathrm{d}y, \end{aligned}$$
(83)

where \(\lambda _c\) denotes the arrival rate of customers who will become customer #c.

$$\begin{aligned} \lambda _c = \varvec{q}_{c-1} {\varvec{D}}_{\mathrm {A}}\varvec{e} + \int _{0+}^{\tau } \varvec{p}(y){\varvec{D}}_{\mathrm {A}}\varvec{e} \mathrm{d}y. \end{aligned}$$
(84)

Applying Little’s law to customer #c, whose mean number in the system equals \(1 \times Pr(L(0) \ge c) + 0 \times Pr(L(0) < c)\), we have \(\lambda _c \cdot \mu ^{-1} = Pr(L(0) \ge c)\), from which we obtain

$$\begin{aligned} \lambda _c = \mu Pr(L(0) \ge c). \end{aligned}$$
(85)

It then follows from (83) and (85) that

$$\begin{aligned} \varvec{a}(x) = \varvec{q}_{c-1}{\varvec{D}}_{\mathrm {A}}\cdot e^{-\mu x} + \int _{0+}^{\min (x,\tau )} \varvec{p}(y){\varvec{D}}_{\mathrm {A}}\cdot e^{-\mu (x-y)} \mathrm{d}y. \end{aligned}$$
(86)

Note here that for \(x > \tau \), (86) can be rewritten as

$$\begin{aligned} \varvec{a}(x) = \left[ \varvec{q}_{c-1}{\varvec{D}}_{\mathrm {A}}\cdot e^{-\mu \tau } + \int _{0+}^{\tau } \varvec{p}(y){\varvec{D}}_{\mathrm {A}}\cdot e^{-\mu (\tau -y)} \mathrm{d}y \right] e^{-\mu (x-\tau )}, \quad x > \tau , \end{aligned}$$

which completes the proof. \(\square \)
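The rewriting is transparent once one notes that for \(x>\tau\) the integral's upper limit freezes at \(\tau\), so only the factor \(e^{-\mu x}\) continues to vary. As a purely illustrative check of this exponential tail, the sketch below evaluates (86) by quadrature; the vectors `q` and `p_const` are hypothetical stand-ins for \(\varvec{q}_{c-1}\) and \(\varvec{p}(y)\) (taken constant in \(y\)), and \(D\), \(\mu\), \(\tau\) are likewise illustrative.

```python
import math

# Placeholder inputs (hypothetical, for illustration only).
D = [[1.5, 0.5], [0.5, 1.0]]   # stands in for D_A
q = [0.3, 0.2]                 # stands in for q_{c-1}
p_const = [0.1, 0.4]           # stands in for p(y), taken constant in y
mu, tau = 2.0, 1.0

def a_vec(x, n=2000):
    """Evaluate (86) componentwise, with the integral by the trapezoidal rule."""
    qD = [q[0] * D[0][j] + q[1] * D[1][j] for j in range(2)]
    pD = [p_const[0] * D[0][j] + p_const[1] * D[1][j] for j in range(2)]
    up = min(x, tau)           # the integration range never grows past tau
    h = up / n
    out = []
    for j in range(2):
        s = 0.0
        for k in range(n + 1):
            y = k * h
            w = 0.5 if k in (0, n) else 1.0
            s += w * pD[j] * math.exp(-mu * (x - y))
        out.append(qD[j] * math.exp(-mu * x) + h * s)
    return out

# For x > tau, every component of a(x) should decay exactly as exp(-mu (x - tau)).
a1, a2 = a_vec(tau + 0.5), a_vec(tau + 1.5)
ratios = [a2[j] / a1[j] for j in range(2)]
```

Both components shrink by the same factor \(e^{-\mu (x_2-x_1)}\), which is precisely the content of the displayed rewriting.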

Appendix 4: Generality of (29)

In this Appendix, we prove that (29) holds for the stationary FCFS MAP/G/c queue, where the same notation as for the MAP/M/c+D queue is used for the MAP/G/c queue. In the proof, we only assume that \(\{V(t),J(t)\}\) is jointly stationary under the probability measure P, and that the waiting time of a customer arriving at time t is given by \(V(t-)\) (i.e., FCFS). Therefore, (29) is fairly general.

We define \(N_{i,j,0}\) (\(i,j \in \mathscr {M}\), \(i \ne j\)) and \(N_{i,j,1}\) (\(i,j \in \mathscr {M}\)) as the point processes associated with state transitions of J(t) from state i to state j without arrivals and with arrivals, respectively, and let \(P_{i,j,0}\) (\(i,j \in \mathscr {M}\), \(i \ne j\)) and \(P_{i,j,1}\) (\(i,j \in \mathscr {M}\)) denote the Palm measures obtained by conditioning on an atom of \(N_{i,j,0}\) and an atom of \(N_{i,j,1}\), respectively, at time zero. We define \(\mathbb {I}(\chi )\) as the indicator function of the event \(\chi \). Let \({E}\), \({E}_{i,j,0}\) (\(i,j \in \mathscr {M}\), \(i \ne j\)), and \({E}_{i,j,1}\) (\(i,j \in \mathscr {M}\)) denote the expectations with respect to P, \(P_{i,j,0}\), and \(P_{i,j,1}\), respectively.

We define \(v_j(s)\), \(v_j^+(s)\), \(a_j(s)\), and \(a_j^+(s)\) as

$$\begin{aligned} v_j(s)&= {E}\left[ e^{-s V(0)} \mathbb {I}(J(0)=j) \right] ,&v_j^+(s)&= {E}\left[ e^{-s V(0)} \mathbb {I}(V(0) > 0,J(0)=j) \right] , \\ a_j(s)&= {E}\left[ e^{-s A(0)} \mathbb {I}(J(T_c(0))=j) \right] ,&a_j^+(s)&= {E}\left[ e^{-s A(0)} \mathbb {I}(A(0) > 0, J(T_c(0))=j) \right] , \end{aligned}$$

where \(T_c(t) = t\) if \(A(t)=0\). By definition,

$$\begin{aligned} v_j(s) = \sum _{n=0}^{c-1} [\varvec{q}_n]_j + v_j^+(s), \qquad a_j(s) = \sum _{n=0}^{c-1} [\varvec{q}_n]_j + a_j^+(s). \end{aligned}$$

We also define \(w_{i,j}^q(s)\) and \(w_{i,j}(s)\) as

$$\begin{aligned} w_{i,j}^q(s) = {E}_{i,j,1} \left[ e^{-s V(0-)} \right] , \qquad w_{i,j}(s) = {E}_{i,j,1} \left[ e^{-s V(0)} \right] . \end{aligned}$$

Owing to conditional PASTA, we have

$$\begin{aligned} w_{i,j}^q(s)= \frac{v_i(s)}{[{\varvec{\pi }}_{\mathrm {A}}]_i}, \qquad {E}_{i,j,0} \left[ e^{-s V(0)} \right] = \frac{v_i(s)}{[{\varvec{\pi }}_{\mathrm {A}}]_i}. \end{aligned}$$

Suppose a customer arrives at time t with a state transition from state \(J(t-)=i\) to \(J(t)=j\). If \(V(t-) < V(t)\), this customer will be customer \(\#c\) during the interval from time \(t+V(t-)\) to time \(t+V(t)\). Note that \(V(t-)=V(t)\) can happen if \(L(t-) < c-1\) or the service time of the arriving customer is equal to zero. We thus have

$$\begin{aligned} a_j^+(s)&= \sum _{i \in \mathscr {M}} [{\varvec{\pi }}_{\mathrm {A}}]_i [{\varvec{D}}_{\mathrm {A}}]_{i,j} \cdot {E}_{i,j,1}\left[ \int _{V(0-)}^{V(0+)} e^{-s x} \mathrm{d}x \right] = \sum _{i \in \mathscr {M}} [{\varvec{\pi }}_{\mathrm {A}}]_i [{\varvec{D}}_{\mathrm {A}}]_{i,j} \cdot \frac{1}{s}\\&\quad \cdot \left( \frac{v_i(s)}{[{\varvec{\pi }}_{\mathrm {A}}]_i} - w_{i,j}(s)\right) \!=\! \frac{1}{s} \left( \sum _{i \in \mathscr {M}} v_i(s) [{\varvec{D}}_{\mathrm {A}}]_{i,j} \!-\! \sum _{i \in \mathscr {M}} [{\varvec{\pi }}_{\mathrm {A}}]_i [{\varvec{D}}_{\mathrm {A}}]_{i,j} w_{i,j}(s) \right) , \end{aligned}$$

where we use the invariance relation \(H=\lambda G\) [14] in the first equality.

We define \(f_j(t)\) (\(j \in \mathscr {M}\), \(-\infty < t < \infty \)) as \(f_j(t)=\exp (-s V(t))\mathbb {I}(J(t)=j)\). Applying Miyazawa’s rate conservation law [24, Lemma 3.1] to \({E}[f_j(t)]\), we have

$$\begin{aligned}&s {E}\left[ e^{-s V(0)} \mathbb {I}(V(0) > 0, J(0)=j) \right] \\&\quad = [{\varvec{\pi }}_{\mathrm {A}}]_j [{\varvec{D}}_{\mathrm {A}}]_{j,j} (w_{j,j}^{q}(s)- w_{j,j}(s)) - \sum _{i \ne j} [{\varvec{\pi }}_{\mathrm {A}}]_i [{\varvec{D}}_{\mathrm {A}}]_{i,j} w_{i,j}(s)\\&\qquad + [{\varvec{\pi }}_{\mathrm {A}}]_j \sum _{k \ne j} ([{\varvec{C}}_{\mathrm {A}}]_{j,k} + [{\varvec{D}}_{\mathrm {A}}]_{j,k}) \frac{v_j(s)}{[{\varvec{\pi }}_{\mathrm {A}}]_j} - \sum _{i \ne j} [{\varvec{\pi }}_{\mathrm {A}}]_i [{\varvec{C}}_{\mathrm {A}}]_{i,j} \frac{v_i(s)}{[{\varvec{\pi }}_{\mathrm {A}}]_i}\\&\quad = - \sum _{i \in \mathscr {M}} [{\varvec{\pi }}_{\mathrm {A}}]_i [{\varvec{D}}_{\mathrm {A}}]_{i,j} w_{i,j}(s) + v_j(s) \left( \sum _{k \ne j} [{\varvec{C}}_{\mathrm {A}}]_{j,k} + \sum _{k \in \mathscr {M}} [{\varvec{D}}_{\mathrm {A}}]_{j,k} \right) - \sum _{i \ne j} v_i(s) [{\varvec{C}}_{\mathrm {A}}]_{i,j}\\&\quad = \sum _{i \in \mathscr {M}} v_i(s) [{\varvec{D}}_{\mathrm {A}}]_{i,j} - \sum _{i \in \mathscr {M}} [{\varvec{\pi }}_{\mathrm {A}}]_i [{\varvec{D}}_{\mathrm {A}}]_{i,j} w_{i,j}(s) - v_j(s) [{\varvec{C}}_{\mathrm {A}}]_{j,j} - \sum _{i \ne j} v_i(s) [{\varvec{C}}_{\mathrm {A}}]_{i,j} \\&\qquad - \sum _{i \in \mathscr {M}} v_i(s) [{\varvec{D}}_{\mathrm {A}}]_{i,j}= s a_j^+(s) - \sum _{i \in \mathscr {M}} v_i(s) ([{\varvec{C}}_{\mathrm {A}}]_{i,j} + [{\varvec{D}}_{\mathrm {A}}]_{i,j}). \end{aligned}$$

It then follows that

$$\begin{aligned} a_j^+(s) = v_j^+(s) + \sum _{i \in \mathscr {M}} \frac{v_i(s)}{s} \big ([{\varvec{C}}_{\mathrm {A}}]_{i,j} + [{\varvec{D}}_{\mathrm {A}}]_{i,j}\big ), \end{aligned}$$

or equivalently,

$$\begin{aligned} a_j(s) = v_j(s) + \sum _{i \in \mathscr {M}} \frac{v_i(s)}{s} \big ([{\varvec{C}}_{\mathrm {A}}]_{i,j} + [{\varvec{D}}_{\mathrm {A}}]_{i,j}\big ), \end{aligned}$$

which implies (29).

Appendix 5: Proof of Lemma 4

We first consider the case \(x \ge \tau \). By definition, we have

$$\begin{aligned} \varvec{a}(x+\Delta x) = \varvec{a}(x) (\varvec{I}+{\varvec{C}}_{\mathrm {S}}\Delta x) + o(\Delta x), \end{aligned}$$

from which (45) follows. Note that (45) implies

$$\begin{aligned} \varvec{a}(x) = \varvec{a}(\tau ) e^{{\varvec{C}}_{\mathrm {S}}(x-\tau )}, \quad x \ge \tau . \end{aligned}$$
(87)

On the other hand, for \(0 < x < \tau \),

$$\begin{aligned} \varvec{a}(x+\Delta x)= & {} \varvec{a}(x) (\varvec{I}+ {\varvec{C}}_{\mathrm {S}}\Delta x) + \int _0^{\tau -x} \varvec{a}(x+y) {\varvec{D}}_{\mathrm {S}}\Delta x \lambda e^{-\lambda y} \mathrm{d}y \\&+ \int _0^{\infty } \varvec{a}(\tau +y) {\varvec{D}}_{\mathrm {S}}\Delta x \lambda e^{-\lambda (\tau -x)} \mathrm{d}y + o(\Delta x), \end{aligned}$$

and therefore, we have

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}x} \varvec{a}(x)= & {} \varvec{a}(x) {\varvec{C}}_{\mathrm {S}}+ \int _0^{\tau -x} \varvec{a}(x+y) {\varvec{D}}_{\mathrm {S}}\cdot \lambda e^{-\lambda y} \mathrm{d}y\\&+ \int _0^{\infty } \varvec{a}(\tau +y) {\varvec{D}}_{\mathrm {S}}\cdot \lambda e^{-\lambda (\tau -x)} \mathrm{d}y. \end{aligned}$$

Equation (44) follows from (87) and the above equation. \(\square \)
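As an illustrative cross-check of (87) (not part of the proof), the sketch below integrates the ODE (45), \(\mathrm{d}\varvec{a}(x)/\mathrm{d}x = \varvec{a}(x){\varvec{C}}_{\mathrm {S}}\) for \(x \ge \tau\), by classical Runge–Kutta and compares the result with the matrix-exponential form \(\varvec{a}(\tau )e^{{\varvec{C}}_{\mathrm {S}}(x-\tau )}\); the subgenerator `C` and the vector `a_tau` are hypothetical illustrative values.

```python
# Hypothetical 2x2 subgenerator C_S and initial vector a(tau); values are illustrative only.
C = [[-2.0, 1.0], [0.5, -1.5]]   # off-diagonals >= 0, row sums <= 0
a_tau = [0.3, 0.7]               # stands in for a(tau)
tau, x = 1.0, 2.0

def expm(A, t, terms=60):
    """Taylor series for exp(A t); adequate for a small matrix of moderate norm."""
    E = [[1.0 if i == j else 0.0 for j in range(2)] for i in range(2)]
    P = [row[:] for row in E]
    for k in range(1, terms):
        P = [[sum(P[i][l] * A[l][j] * t / k for l in range(2)) for j in range(2)]
             for i in range(2)]
        E = [[E[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return E

# Closed form (87): a(x) = a(tau) exp(C_S (x - tau))
E = expm(C, x - tau)
a_closed = [a_tau[0] * E[0][j] + a_tau[1] * E[1][j] for j in range(2)]

# Independently integrate a'(y) = a(y) C_S, i.e., (45), by classical RK4.
def f(v):
    return [v[0] * C[0][j] + v[1] * C[1][j] for j in range(2)]

steps = 1000
h = (x - tau) / steps
v = a_tau[:]
for _ in range(steps):
    k1 = f(v)
    k2 = f([v[j] + 0.5 * h * k1[j] for j in range(2)])
    k3 = f([v[j] + 0.5 * h * k2[j] for j in range(2)])
    k4 = f([v[j] + h * k3[j] for j in range(2)])
    v = [v[j] + h * (k1[j] + 2 * k2[j] + 2 * k3[j] + k4[j]) / 6.0 for j in range(2)]
```

The two routes agree to high accuracy, which is just the uniqueness of the solution of the linear ODE (45) with initial value \(\varvec{a}(\tau )\).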

Appendix 6: Proof of Lemma 5

We can show the irreducibility of \({\varvec{C}}_{\mathrm {S}}+ \lambda {\varvec{Y}}_{\mathrm {S}}\) in exactly the same way as that of \({\varvec{C}}_{\mathrm {A}}+ \mu {\varvec{Y}}_{\mathrm {A}}\) (cf. Appendix 1), from which the irreducibility of \(\varvec{V}_{1,2}\) follows. Next we consider \(\varvec{V}_{2,1} = \varvec{S}_2 \varvec{N} - \varvec{I}\), where \(\varvec{N}\) is defined as \(\varvec{N}=\varvec{Q}_{c,2} (-\varvec{H}_{c-1})^{-1} \varvec{Q}_{c-1,0}\). We can show the irreducibility of \(\lambda {\varvec{Z}}_{\mathrm {S}}+ {\varvec{C}}_{\mathrm {S}}\) in exactly the same way as in Lemma 2 in [8]. Therefore, \(\exp [(\lambda {\varvec{Z}}_{\mathrm {S}}+ {\varvec{C}}_{\mathrm {S}}) x]\) (\(x > 0\)) is a positive matrix, so that \(\varvec{S}_2\) is also a positive matrix. On the other hand, \(\varvec{N}\) can be regarded as a transition rate matrix, so that it is a non-zero, nonnegative matrix (i.e., \(\varvec{N} \ge \varvec{O}\) and \(\varvec{N} \ne \varvec{O}\)).

In general, columns of a non-zero, nonnegative matrix \(\varvec{N}\) can be classified into two groups: a nonempty set \(\mathscr {C}^+\) of columns with at least one positive element and the other set \(\mathscr {C}^{\mathrm {null}}\) of columns whose elements are all equal to zero. Because \(\varvec{S}_2\) is a positive matrix, \(\varvec{S}_2 \varvec{N}\) is a nonnegative matrix, where all elements of columns in \(\mathscr {C}^+\) are positive and all elements of columns in \(\mathscr {C}^{\mathrm {null}}\) are equal to zero. Therefore, \(\varvec{S}_2 \varvec{N}\) has a single irreducible class. As a result, \(\varvec{V}_{2,1}=\varvec{S}_2\varvec{N}-\varvec{I}\) has a single irreducible class, where states corresponding to columns in \(\mathscr {C}^+\) form an irreducible class and all other states are transient.

Next we consider \(\varvec{V}_{i,j} \varvec{e} = \varvec{0}\) (\(i,j=1,2\)) for \(\rho =1\). To prove \(\varvec{V}_{1,1}\varvec{e}= \varvec{0}\), we first show

$$\begin{aligned} (-\varvec{H}_i)^{-1} \varvec{Q}_{i,0} \varvec{e}=\varvec{e}, \quad i=0,1,\ldots ,c-1. \end{aligned}$$
(88)

For \(i=0\), \(\varvec{H}_0=\varvec{Q}_{0,1}\), so that from (40) we have \((-\varvec{H}_0)^{-1} \varvec{Q}_{0,0} \varvec{e} = \varvec{e}\). Suppose \((-\varvec{H}_i)^{-1} \varvec{Q}_{i,0} \varvec{e} = \varvec{e}\) holds for some \(i=n-1\) (\(n=1,\ldots ,c-1\)). Post-multiplying both sides of (50) by \(\varvec{e}\) and using (40) yields

$$\begin{aligned} \varvec{H}_n \varvec{e} = \varvec{Q}_{n,1}\varvec{e} + \varvec{Q}_{n,2} (-\varvec{H}_{n-1})^{-1} \varvec{Q}_{n-1,0} \varvec{e} = \varvec{Q}_{n,1}\varvec{e} + \varvec{Q}_{n,2} \varvec{e} = -\varvec{Q}_{n,0} \varvec{e}. \end{aligned}$$

We thus have \((-\varvec{H}_n)^{-1} \varvec{Q}_{n,0} \varvec{e} = \varvec{e}\) for \(i=n\).

From (41), we also have \(\varvec{Q}_{c,2} \varvec{e} = {\varvec{D}}_{\mathrm {S}}\varvec{e}\). It then follows from the definition of \(\varvec{V}_{1,1}\) that

$$\begin{aligned} \varvec{V}_{1,1}\varvec{e} = \varvec{S}_1 {\varvec{D}}_{\mathrm {S}}\varvec{e} - e^{\lambda ({\varvec{Y}}_{\mathrm {S}}-\varvec{I})\tau } \varvec{e}. \end{aligned}$$

Furthermore, from the definition of \(\varvec{S}_1\) and (41), we have

$$\begin{aligned} \varvec{S}_1 {\varvec{D}}_{\mathrm {S}}\varvec{e} = e^{\lambda ({\varvec{Y}}_{\mathrm {S}}-\varvec{I})\tau } \int _0^{\tau } e^{-\lambda {\varvec{Y}}_{\mathrm {S}}x} {\varvec{D}}_{\mathrm {S}}\varvec{e} \mathrm{d}x + e^{-\lambda \tau } \varvec{e}. \end{aligned}$$

We can show that if \(\rho =1\), \({\varvec{D}}_{\mathrm {S}}\varvec{e} = \lambda {\varvec{Y}}_{\mathrm {S}}\varvec{e}\), in exactly the same way as (78). We thus have

$$\begin{aligned} \int _0^{\tau } e^{-\lambda {\varvec{Y}}_{\mathrm {S}}x} {\varvec{D}}_{\mathrm {S}}\varvec{e} \mathrm{d}x = \int _0^{\tau } e^{-\lambda {\varvec{Y}}_{\mathrm {S}}x} \lambda {\varvec{Y}}_{\mathrm {S}}\varvec{e} \mathrm{d}x = (\varvec{I}-e^{-\lambda {\varvec{Y}}_{\mathrm {S}}\tau }) \varvec{e}. \end{aligned}$$

As a result, we have

$$\begin{aligned} \varvec{V}_{1,1}\varvec{e}&= e^{\lambda ({\varvec{Y}}_{\mathrm {S}}-\varvec{I})\tau } (\varvec{I}-e^{-\lambda {\varvec{Y}}_{\mathrm {S}}\tau }) \varvec{e} + e^{-\lambda \tau } \varvec{e} - e^{\lambda ({\varvec{Y}}_{\mathrm {S}}-\varvec{I})\tau } \varvec{e} \\&= e^{\lambda ({\varvec{Y}}_{\mathrm {S}}-\varvec{I})\tau } \varvec{e} - e^{-\lambda \varvec{I} \tau }\varvec{e} + e^{-\lambda \tau } \varvec{e} - e^{\lambda ({\varvec{Y}}_{\mathrm {S}}-\varvec{I})\tau } \varvec{e} \\&= \varvec{0}. \end{aligned}$$

Next we consider \(\varvec{V}_{2,1}\varvec{e}= \varvec{0}\). It follows from (41) and (88) that

$$\begin{aligned} \varvec{V}_{2,1}\varvec{e} = \varvec{S}_2 \varvec{Q}_{c,2} (-\varvec{H}_{c-1})^{-1} \varvec{Q}_{c-1,0} \varvec{e} - \varvec{e} = \varvec{S}_2 \varvec{Q}_{c,2} \varvec{e} - \varvec{e} = \varvec{S}_2 {\varvec{D}}_{\mathrm {S}}\varvec{e} - \varvec{e}. \end{aligned}$$

Note here that \(\varvec{S}_2\) in Theorem 3 is calculated to be

$$\begin{aligned} \varvec{S}_2 = [-(\lambda ({\varvec{Z}}_{\mathrm {S}}-\varvec{I}) + {\varvec{C}}_{\mathrm {S}})]^{-1} \left( \varvec{I} - e^{(\lambda ({\varvec{Z}}_{\mathrm {S}}-\varvec{I}) + {\varvec{C}}_{\mathrm {S}}) \tau }\right) + e^{(\lambda ({\varvec{Z}}_{\mathrm {S}}- \varvec{I}) + {\varvec{C}}_{\mathrm {S}})\tau } (-{\varvec{C}}_{\mathrm {S}})^{-1}. \end{aligned}$$

Therefore, noting (41) and \({\varvec{Z}}_{\mathrm {S}}\varvec{e} = \varvec{e}\), we have

$$\begin{aligned} \varvec{V}_{2,1}\varvec{e}&= [\lambda ({\varvec{Z}}_{\mathrm {S}}-\varvec{I}) + {\varvec{C}}_{\mathrm {S}}]^{-1} \left( e^{(\lambda ({\varvec{Z}}_{\mathrm {S}}-\varvec{I}) + {\varvec{C}}_{\mathrm {S}}) \tau } - \varvec{I} \right) {\varvec{D}}_{\mathrm {S}}\varvec{e} \\&\quad + e^{(\lambda ({\varvec{Z}}_{\mathrm {S}}- \varvec{I}) + {\varvec{C}}_{\mathrm {S}})\tau } (-{\varvec{C}}_{\mathrm {S}})^{-1}{\varvec{D}}_{\mathrm {S}}\varvec{e} - \varvec{e}\\&= [\lambda ({\varvec{Z}}_{\mathrm {S}}-\varvec{I}) + {\varvec{C}}_{\mathrm {S}}]^{-1} \left[ \left( e^{(\lambda ({\varvec{Z}}_{\mathrm {S}}-\varvec{I}) + {\varvec{C}}_{\mathrm {S}}) \tau } - \varvec{I}\right) {\varvec{D}}_{\mathrm {S}}\varvec{e} \right. \\&\quad {} \left. + (\lambda ({\varvec{Z}}_{\mathrm {S}}-\varvec{I}) + {\varvec{C}}_{\mathrm {S}}) e^{(\lambda ({\varvec{Z}}_{\mathrm {S}}- \varvec{I}) + {\varvec{C}}_{\mathrm {S}})\tau } \varvec{e} - (\lambda ({\varvec{Z}}_{\mathrm {S}}-\varvec{I}) + {\varvec{C}}_{\mathrm {S}})\varvec{e} \right] \\&= [\lambda ({\varvec{Z}}_{\mathrm {S}}-\varvec{I}) + {\varvec{C}}_{\mathrm {S}}]^{-1} \left[ e^{(\lambda ({\varvec{Z}}_{\mathrm {S}}-\varvec{I}) + {\varvec{C}}_{\mathrm {S}}) \tau } {\varvec{D}}_{\mathrm {S}}\varvec{e} + e^{(\lambda ({\varvec{Z}}_{\mathrm {S}}-\varvec{I}) + {\varvec{C}}_{\mathrm {S}}) \tau } (\lambda ({\varvec{Z}}_{\mathrm {S}}-\varvec{I}) + {\varvec{C}}_{\mathrm {S}}) \varvec{e} \right] \\&= [(\lambda ({\varvec{Z}}_{\mathrm {S}}-\varvec{I}) + {\varvec{C}}_{\mathrm {S}})]^{-1} e^{(\lambda ({\varvec{Z}}_{\mathrm {S}}-\varvec{I}) + {\varvec{C}}_{\mathrm {S}}) \tau } ({\varvec{D}}_{\mathrm {S}}+{\varvec{C}}_{\mathrm {S}}) \varvec{e} \\&= \varvec{0}. \end{aligned}$$

Owing to (41), \(\varvec{V}_{1,2} \varvec{e} = \varvec{0}\) and \(\varvec{V}_{2,2} \varvec{e} = \varvec{0}\) immediately follow from \(\lambda {\varvec{Y}}_{\mathrm {S}}\varvec{e} = {\varvec{D}}_{\mathrm {S}}\varvec{e} = -{\varvec{C}}_{\mathrm {S}}\varvec{e}\) and \({\varvec{Z}}_{\mathrm {S}}\varvec{e} =\varvec{e}\), respectively. \(\square \)

Appendix 7: Proof of Lemma 6

From (41), (65), (70), (71), and (88), we have

$$\begin{aligned} \varvec{v}_{3,1}\varvec{e} = \frac{1-e^{-\lambda \tau }}{\lambda } ({\varvec{\kappa }}_{\mathrm {S}}{\varvec{D}}_{\mathrm {S}}\varvec{e} - 1 - \lambda {\varvec{\kappa }}_{\mathrm {S}}\varvec{e}), \quad \varvec{v}_{3,2}\varvec{e} = 1 + \lambda {\varvec{\kappa }}_{\mathrm {S}}\varvec{e} - {\varvec{\kappa }}_{\mathrm {S}}{\varvec{D}}_{\mathrm {S}}\varvec{e}, \end{aligned}$$

from which the first equality follows. Further, from (60), (61), \(\mu =\lambda \), and \({\varvec{\kappa }}_{\mathrm {S}}^* \varvec{e}=0\), we have

$$\begin{aligned} {\varvec{\kappa }}_{\mathrm {S}}{\varvec{D}}_{\mathrm {S}}\varvec{e} - \lambda {\varvec{\kappa }}_{\mathrm {S}}\varvec{e} = ({\varvec{\kappa }}_{\mathrm {S}}^* + \alpha {\varvec{\pi }}_{\mathrm {S}}) {\varvec{D}}_{\mathrm {S}}\varvec{e} - \lambda ({\varvec{\kappa }}_{\mathrm {S}}^* + \alpha {\varvec{\pi }}_{\mathrm {S}}) \varvec{e} = {\varvec{\kappa }}_{\mathrm {S}}^* {\varvec{D}}_{\mathrm {S}}\varvec{e}, \end{aligned}$$

from which the second equality follows. Finally, we can show \(1 - {\varvec{\kappa }}_{\mathrm {S}}^* {\varvec{D}}_{\mathrm {S}}\varvec{e} \ge 1/2\) in exactly the same way as \(1 - {\varvec{\kappa }}_{\mathrm {A}}^* {\varvec{D}}_{\mathrm {A}}\varvec{e} \ge 1/2\) in Lemma 2, from which the inequality follows. \(\square \)



Cite this article

Kawanishi, K., Takine, T. MAP/M/c and M/PH/c queues with constant impatience times. Queueing Syst 82, 381–420 (2016). https://doi.org/10.1007/s11134-015-9455-9
