
On the study of the running maximum and minimum level of level-dependent quasi-birth–death processes and related models

Published online by Cambridge University Press:  19 September 2022

Kayla Javier*
Affiliation:
Wingate University
Brian Fralix*
Affiliation:
Clemson University
*Postal address: 228 Cedar St., Wingate, NC 28174. Email address: [email protected]
**Postal address: O-110 Martin Hall, Box 340975, Clemson, SC 29634. Email address: [email protected]

Abstract

We present a study of the joint distribution of both the state of a level-dependent quasi-birth–death (QBD) process and its associated running maximum level, at a fixed time t: more specifically, we derive expressions for the Laplace transforms of transition functions that contain this information, and the expressions we derive contain familiar constructs from the classical theory of QBD processes. Indeed, one important takeaway from our results is that the distribution of the running maximum level of a level-dependent QBD process can be studied using results that are highly analogous to the better-established theory of level-dependent QBD processes that focuses primarily on the joint distribution of the level and phase. We also explain how our methods naturally extend to the study of level-dependent Markov processes of M/G/1 type, if we keep track of the running minimum level instead of the running maximum level.

Type
Original Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction and preliminary results

Given a real-valued stochastic process $\{X(t);\, t \geq 0\}$ , we can define both the running maximum process $\{\overline{X}(t);\, t \geq 0\}$ and the running minimum process $\{\underline{X}(t);\, t \geq 0\}$ , where, for each $t \geq 0$ ,

\begin{align*} \overline{X}(t) \,:\!=\, \sup_{s \in [0,t]}X(s),\qquad \underline{X}(t) \,:\!=\, \inf_{s \in [0,t]}X(s). \end{align*}
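For readers who want to experiment numerically, the two definitions can be illustrated on a discretely sampled path; this is our own sketch (the helper names are arbitrary), using the fact that the running maximum and minimum are prefix suprema and infima of the sampled values.

```python
# Running maximum and minimum of a discretely sampled path X(t_0), ..., X(t_n):
# itertools.accumulate keeps the prefix sup/inf at each sample point.
from itertools import accumulate

def running_max(path):
    return list(accumulate(path, max))

def running_min(path):
    return list(accumulate(path, min))

path = [0, 2, 1, 3, -1, 0]
print(running_max(path))  # [0, 2, 2, 3, 3, 3]
print(running_min(path))  # [0, 0, 0, 0, -1, -1]
```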

The marginal distributions of these processes are very tractable when $\{X(t);\, t \geq 0\}$ represents Brownian motion, and they are also well known to play a prominent role in the theory of Lévy processes; see e.g. Kyprianou [Reference Kyprianou13] for an accessible introduction to the theory of Lévy processes.

In the recent work of Mandjes and Taylor [Reference Mandjes and Taylor15], the authors present a recursive procedure that can be used to calculate the joint distribution of both the state (which tracks both the level and the phase) of a level-dependent quasi-birth–death (QBD) process (see Bright and Taylor [Reference Bright and Taylor3]) and its running maximum level, at an independent exponential time; once these distributions can be calculated efficiently, Erlangization can be used to further study, numerically, the joint distribution of the running maximum level, the level, and the phase at each fixed time t. The results contained in [Reference Mandjes and Taylor15] were derived ‘from scratch’ by making clever use of first-step analysis and censoring arguments, as well as sample-path properties satisfied by level-dependent QBD processes. Our objective is to build further on the work of [Reference Mandjes and Taylor15] by showing how alternative formulas can be derived in an arguably more straightforward manner from theory that has been developed in the matrix-analytic literature. In fact, not only will we analyze level-dependent QBD processes, we will also explain how our results and ideas apply to level-dependent Markov processes of M/G/1 type, assuming of course that we replace the running maximum level process with a running minimum level process.

An important ingredient needed in our analysis is a formula that can be found at the top of page 124 of Latouche and Ramaswami [Reference Latouche and Ramaswami14], but before we state this formula we first need to introduce some notation. Suppose $\{Y(t);\, t \geq 0\}$ is a continuous-time Markov chain (CTMC) having state space S and generator (transition rate matrix) $\mathbf{Q} \,:\!=\, [q(x,y)]_{x,y \in S}$ , where, for each $x \in S$ ,

\begin{align*} q(x) \,:\!=\, -q(x,x) \geq 0 \end{align*}

denotes the sojourn rate associated with each exponential sojourn spent in state x by $\{Y(t);\, t \geq 0\}$ . We assume throughout that $\{Y(t);\, t \geq 0\}$ —as well as every other CTMC we analyze—satisfies the property that $q(x) < \infty$ for each $x \in S$ , and

\begin{align*} \sum_{y \neq x}q(x,y) = -q(x,x) \end{align*}

for each $x \in S$ .

Further associated with $\{Y(t);\, t \geq 0\}$ is a collection of transition functions $\{p_{x,y}\}_{x,y \in S}$ , where, for each $x,y \in S$ ,

\begin{align*} p_{x,y}(t) \,:\!=\, \mathbb{P}_{x}(Y(t) = y), \qquad t \geq 0, \end{align*}

where $\mathbb{P}_{x}({\cdot})$ represents a conditional probability, given $Y(0) = x$ . Each transition function $p_{x,y}$ has associated with it a Laplace transform $\pi_{x,y}\,:\, \mathbb{C}_{+} \rightarrow \mathbb{C}$ , which is defined on $\mathbb{C}_{+} \,:\!=\, \{\alpha \in \mathbb{C}\,:\, Re(\alpha) > 0\}$ —the set of all complex numbers having positive real part—as

\begin{align*} \pi_{x,y}(\alpha) \,:\!=\, \int_{0}^{\infty}e^{-\alpha t}p_{x,y}(t)dt, \qquad \alpha \in \mathbb{C}_{+}. \end{align*}

Readers should recall that two continuous functions defined on $[0, \infty)$ are equal if and only if their Laplace transforms are equal on $\mathbb{C}_{+}$ (in fact the functions are equal if and only if their Laplace transforms are equal on $(0, \infty)$ ), and once we can numerically calculate a Laplace transform at each point in $\mathbb{C}_{+}$ , we can use one of many numerical transform inversion algorithms, such as that found in [Reference Abate and Whitt1], to calculate the value of the underlying continuous function at various points of $[0, \infty)$ .

For each subset $T \subset S$ , we define

\begin{align*} \tau_{T} \,:\!=\, \inf\{t \geq 0\,:\, Y(t{-}) \neq Y(t) \in T\}, \end{align*}

which represents the first time $\{Y(t);\, t \geq 0\}$ makes a transition to a state contained in T. Readers should note that $\tau_{T} > 0$ with probability one, even if $Y(0) \in T$ , as $\tau_{T}$ represents the first time the chain makes a transition to a state in T, and such a transition could be made from a state $x \in T$ if $Y(0) = x$ .

Theorem 1. ([Reference Latouche and Ramaswami14, p. 124].) Suppose T is a nonempty subset of S, where $T \neq S$ . Then for each $x \in T^{c}$ and each $y \in T$ ,

(1) \begin{align} p_{x,y}(t) = \sum_{z \in T^{c}}\sum_{w \in T}\int_{0}^{t}p_{x,z}(s)q(z,w)\mathbb{P}_{w}(Y(t-s) = y, \tau_{T^{c}} > t-s)ds, \qquad t \geq 0. \end{align}

While this result is certainly known, in [Reference Latouche and Ramaswami14] the formula appears to be given only as a tool for deriving the stationary distribution of QBD processes; we feel it deserves to be stated as a theorem in its own right. The authors of [Reference Latouche and Ramaswami14] appear to establish the result with a Markov renewal argument, but here is an alternative argument that follows from ideas found in [Reference Fralix, Van Leeuwaarden and Boxma9]. Even though we will not use point process arguments anywhere else in this paper, we feel the following proof is worth providing, especially since the main idea behind it simplifies a great deal in the discrete-time context.

Proof. We prove Theorem 1 via the framework found in Chapter 9 of Brémaud [Reference Brémaud2], where a CTMC is thought of as being governed by a countable collection of independent, homogeneous Poisson processes.

Here is a rough sketch of the construction: for each ordered pair $(x,y) \in S \times S$ where $x \neq y$ , we construct a Poisson process $\{N_{x,y}(t);\, t \geq 0\}$ with rate q(x, y). Now setting $Y(0) = y_{0}$ —an arbitrarily chosen state—we define the first transition time $T_{1}$ of $\{Y(t);\, t \geq 0\}$ as

\begin{align*} T_{1} \,:\!=\, \inf_{y \in S}\inf\{t \geq 0\,:\, N_{y_{0},y}(t) = 1\}, \end{align*}

and we set $Y(t) = y_{0}$ for $0 \leq t < T_{1}$ , with $Y(T_{1}) = y_{1}$ for the state $y_{1}$ that attains the infimum (such a state exists and is unique with probability one). Next, given $y_{1} = Y(T_{1})$ , set

\begin{align*} T_{2} \,:\!=\, \inf_{y \in S}\inf\{t \geq 0: N_{y_{1},y}(t + T_{1}) - N_{y_{1},y}(T_{1}) = 1\}, \end{align*}

and again define $Y(t) = y_{1}$ for $T_{1} \leq t < T_{2}$ and set $Y(T_{2}) = y_{2}$ , where $y_{2}$ is the state that attains the infimum. From here, one can define $\{Y(t);\, t \geq 0\}$ inductively over the entire line. Readers should note that it is possible for $\{Y(t);\, t \geq 0\}$ to have infinitely many transitions in a finite time interval, meaning

\begin{align*} T_{\infty} \,:\!=\, \lim_{n \rightarrow \infty}T_{n} < \infty; \end{align*}

in this case we construct an extra ‘cemetery state’ $\partial$ that is not a member of S, and assume the process stays at this cemetery state from the explosion time $T_{\infty}$ onward. Readers should find it clear, at least on an intuitive level, that $\{Y(t);\, t \geq 0\}$ is a CTMC with transition rate matrix $\mathbf{Q}$ , but we refer those interested in seeing a rigorous description of this procedure to Chapter 9, Sections 1 and 2, of [Reference Brémaud2].
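The clock-based construction sketched above can be imitated in a few lines of code: by the memorylessness of the exponential distribution, sampling the first arrival among the competing Poisson clocks $N_{x,y}$ is equivalent to sampling independent exponential alarms with rates q(x, y). The following sketch is our own illustration (the function name and the two-state example are arbitrary), and it ignores the possibility of explosion, which cannot occur in a finite-state example.

```python
import random

def simulate_ctmc(Q, y0, t_end, rng):
    """Simulate a CTMC path by competing exponential clocks: from state x,
    each state y != x with rate q = Q[x][y] > 0 gets an Exp(q) alarm, and
    the chain jumps to the first alarm that rings (this matches the
    Poisson-clock construction sketched above)."""
    t, x = 0.0, y0
    path = [(0.0, y0)]
    while True:
        clocks = [(rng.expovariate(q), y)
                  for y, q in enumerate(Q[x]) if y != x and q > 0]
        if not clocks:          # absorbing state: no alarms at all
            break
        dt, y = min(clocks)     # first alarm wins
        t += dt
        if t >= t_end:
            break
        x = y
        path.append((t, x))
    return path

rng = random.Random(42)
# Two-state chain: rate 1 from state 0 to 1, rate 2 from state 1 to 0.
Q = [[-1.0, 1.0], [2.0, -2.0]]
path = simulate_ctmc(Q, 0, 10.0, rng)
```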

Thinking of $\{Y(t);\, t \geq 0\}$ in this manner, we can observe that for each $x \in T^{c}$ and each $y \in T$ , if $Y(0) = x$ we have

\begin{align*} \mathbf{1}(Y(t) = y) = \sum_{z \in T^{c}}\sum_{w \in T}\int_{0}^{t}\mathbf{1}(Y(s{-}) = z, \tau_{T^{c}}(s) > t, Y(t) = y)N_{z,w}(ds), \end{align*}

where $Y(s{-})$ is the left-hand limit of Y at s, and for each $C \subset S$ ,

\begin{align*} \tau_{C}(s) \,:\!=\, \inf\{u \geq s\,:\, Y(u{-}) \neq Y(u) \in C\}. \end{align*}

Clearly $\tau_{C} \,:\!=\, \tau_{C}(0)$ . Taking the expectation of both sides, while further applying the Campbell–Mecke formula to the right-hand side, as is done in [Reference Fralix, Van Leeuwaarden and Boxma9], gives

\begin{align*} \mathbb{P}_{x}(Y(t) = y) = \sum_{z \in T^{c}}\sum_{w \in T}\int_{0}^{t}\mathbb{P}_{x}(Y(s) = z)q(z,w)\mathbb{P}_{w}(\tau_{T^{c}} > t-s, Y(t-s) = y)ds, \end{align*}

which proves the claim.

Remark 1. It is also possible to establish Theorem 1 via the random-product technique; see [Reference Buckingham and Fralix4, Reference Fralix7, Reference Joyner and Fralix12, Reference Joyner and Fralix11]. Even though the random-product technique requires less of a technical background in measure-theoretic probability, when using this technique one has to specially treat absorbing states, as well as states that cannot be reached from any other state (meaning the only way the CTMC can visit such a state is if it starts there). Such states may appear in a few places in our analysis, so we decided to present a proof of Theorem 1 with the line of reasoning given in [Reference Fralix, Van Leeuwaarden and Boxma9], which uses the point process framework of [Reference Brémaud2].

The next result is a simple corollary of Theorem 1.

Corollary 1. Fix a nonempty subset $T \subset S$ where $T \neq S$ . Then for each $x \in T^{c}$ and each $y \in T$ ,

(2) \begin{align} \pi_{x,y}(\alpha) = \sum_{z \in T^{c}}\pi_{x,z}(\alpha)(q(z) + \alpha)\mathbb{E}_{z}\!\left[\int_{0}^{\tau_{T^{c}}}e^{-\alpha t}\mathbf{1}(Y(t) = y)dt\right], \alpha \in \mathbb{C}_{+}. \end{align}

Proof. Given $\alpha \in \mathbb{C}_{+}$ , multiply both sides of (1) by $e^{-\alpha t}$ , and integrate with respect to t over $[0, \infty)$ : this yields

(3) \begin{align} \pi_{x,y}(\alpha) = \sum_{z \in T^{c}}\pi_{x,z}(\alpha)\sum_{w \in T}q(z,w)\mathbb{E}_{w}\!\left[\int_{0}^{\tau_{T^{c}}}e^{-\alpha t}\mathbf{1}(Y(t) = y)dt\right], \end{align}

and this is equivalent to (2), since for each $z \in T^{c}$ ,

\begin{align*} (q(z) + \alpha)\mathbb{E}_{z}\!\left[\int_{0}^{\tau_{T^{c}}}e^{-\alpha t}\mathbf{1}(Y(t) = y)dt\right] &= (q(z) + \alpha)\mathbb{E}_{z}\!\left[\int_{T_{1}}^{\tau_{T^{c}}}e^{-\alpha t}\mathbf{1}(Y(t) = y)dt\right] \\ &= (q(z) + \alpha)\sum_{w \in T}\frac{q(z,w)}{q(z) + \alpha}\mathbb{E}_{w}\!\left[\int_{0}^{\tau_{T^{c}}}e^{-\alpha t}\mathbf{1}(Y(t) = y)dt\right] \\ &= \sum_{w \in T}q(z,w)\mathbb{E}_{w}\!\left[\int_{0}^{\tau_{T^{c}}}e^{-\alpha t}\mathbf{1}(Y(t) = y)dt\right]. \end{align*}

2. Level-dependent QBD processes

Suppose $\{Y(t);\, t \geq 0\}$ is a level-dependent QBD process, whose state space S is expressed in terms of a countable union of levels:

\begin{align*} S \,:\!=\, \bigcup_{n=0}^{\infty}L_{n}, \end{align*}

where, for each integer $n \geq 0$ , level n is the set $L_{n}$ , defined as

\begin{align*} L_{n} \,:\!=\, \{(n,1), (n,2), \ldots, (n,d_{n} - 1), (n,d_{n})\} \end{align*}

with $d_{n}$ being a fixed positive integer that is allowed to vary with n. Given the structure of S, it helps, for each $t \geq 0$ , to express Y(t) as

\begin{align*} Y(t) = (X(t), J(t)), \end{align*}

where X(t) denotes the current level of the process—meaning $X(t) = n$ if and only if $Y(t) \in L_{n}$ —and J(t) represents the current phase of the process. We follow the notation scheme from [Reference Mandjes and Taylor15] by letting $\mathbf{Q}$ denote the transition rate matrix of $\{Y(t);\ t \geq 0\}$ , where the rows and columns of $\mathbf{Q}$ are ordered in a manner that corresponds to the states of S being ordered lexicographically, so that

\begin{align*} \mathbf{Q} = \left( \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} \mathbf{Q}^{(0)} & \Lambda^{(0)} & \mathbf{0}_{d_{0} \times d_{2}} & \mathbf{0}_{d_{0} \times d_{3}} & \mathbf{0}_{d_{0} \times d_{4}} & \cdots \\[4pt] \mathcal{M}^{(1)} & \mathbf{Q}^{(1)} & \Lambda^{(1)} & \mathbf{0}_{d_{1} \times d_{3}} & \mathbf{0}_{d_{1} \times d_{4}} & \cdots \\[4pt] \mathbf{0}_{d_{2} \times d_{0}} & \mathcal{M}^{(2)} & \mathbf{Q}^{(2)} & \Lambda^{(2)} & \mathbf{0}_{d_{2} \times d_{4}} & \cdots \\[1pt] \mathbf{0}_{d_{3} \times d_{0}} & \mathbf{0}_{d_{3} \times d_{1}} & \mathcal{M}^{(3)} & \mathbf{Q}^{(3)} & \Lambda^{(3)} & \ddots \\[1pt] \mathbf{0}_{d_{4} \times d_{0}} & \mathbf{0}_{d_{4} \times d_{1}} & \mathbf{0}_{d_{4} \times d_{2}} & \mathcal{M}^{(4)} & \mathbf{Q}^{(4)} & \ddots \\[4pt] \vdots & \vdots & \vdots & \vdots & \ddots & \ddots \\[4pt] \end{array} \right), \end{align*}

where $\mathbf{0}_{m \times n}$ represents the zero matrix with m rows and n columns.

From this description of $\mathbf{Q}$ , we can see that the dimensions of $\mathbf{Q}^{(0)}$ and $\Lambda^{(0)}$ are $d_{0} \times d_{0}$ and $d_{0} \times d_{1}$ , respectively, while for each integer $n \geq 1$ , the dimensions of $\mathcal{M}^{(n)}$ , $\mathbf{Q}^{(n)}$ , and $\Lambda^{(n)}$ are $d_{n} \times d_{n-1}$ , $d_{n} \times d_{n}$ , and $d_{n} \times d_{n+1}$ , respectively. Each matrix $\Lambda^{(n)}$ contains transition rates corresponding to transitions made from a state in $L_{n}$ to a state in $L_{n+1}$ , while each matrix $\mathcal{M}^{(n)}$ contains transition rates corresponding to transitions made from a state in $L_{n}$ to a state in $L_{n-1}$ . In the interest of avoiding ‘nuisance states’, we assume throughout that each state $x \in S$ satisfies the following condition: there exist two states $y,z \in S$ (which may depend on x) such that $q(x,y) > 0$ and $q(z,x) > 0$ . This is a more general condition than irreducibility, as we are assuming that $\{Y(t);\, t \geq 0\}$ has no absorbing states, nor are there states that are not accessible from any other state in S. This simple assumption will allow us to apply the random-product technique featured in [Reference Buckingham and Fralix4, Reference Fralix7, Reference Fralix, Hasankhani and Khademi8] without further comment. Readers should note that in [Reference Mandjes and Taylor15], the authors assume the structure of $\mathbf{Q}$ is such that $\{Y(t);\, t \geq 0\}$ is an irreducible CTMC.
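As a quick illustration of this block structure, the generator of a finitely truncated level-dependent QBD can be assembled from its blocks $\mathbf{Q}^{(n)}$ , $\Lambda^{(n)}$ , and $\mathcal{M}^{(n)}$ and checked to have zero row sums. This is our own sketch (the function name and all rate values are arbitrary).

```python
def assemble_qbd_generator(Qd, Lam, M):
    """Assemble the block-tridiagonal generator of a level-dependent QBD
    from diagonal blocks Qd[n] (d_n x d_n), up blocks Lam[n] (d_n x d_{n+1}),
    and down blocks M[n] (d_n x d_{n-1}); levels 0..N, plain nested lists."""
    N = len(Qd) - 1
    dims = [len(b) for b in Qd]
    offs = [sum(dims[:n]) for n in range(N + 1)]
    size = sum(dims)
    Q = [[0.0] * size for _ in range(size)]
    for n in range(N + 1):
        blocks = [(n, Qd[n])]                 # Q^(n) on the diagonal
        if n < N:
            blocks.append((n + 1, Lam[n]))    # Lambda^(n): level n -> n+1
        if n > 0:
            blocks.append((n - 1, M[n]))      # M^(n): level n -> n-1
        for m, B in blocks:
            for i, row in enumerate(B):
                for j, v in enumerate(row):
                    Q[offs[n] + i][offs[m] + j] = v
    return Q

# Toy example with d_0 = 1, d_1 = 2, d_2 = 1.
Qd  = [[[-1.0]], [[-4.0, 1.0], [0.0, -3.0]], [[-2.0]]]
Lam = [[[0.5, 0.5]], [[1.0], [1.0]]]          # Lam[n] maps level n -> n+1
M   = [None, [[2.0], [2.0]], [[1.0, 1.0]]]    # M[n] maps level n -> n-1
Q = assemble_qbd_generator(Qd, Lam, M)
```

A conservative generator must have each row summing to zero, which gives a cheap consistency check on the block dimensions and placement.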

A very important family of matrices associated with $\{Y(t);\, t \geq 0\}$ is the family of ‘ $\mathbf{R}$ -matrices’ $\{\mathbf{R}_{k+1,k}(\alpha)\}_{k \geq 0}$ defined as follows: for each integer $k \geq 0$ ,

\begin{align*} (\mathbf{R}_{k+1,k}(\alpha))_{i,j} \,:\!=\, ({-}(\mathbf{Q}^{(k+1)})_{i,i} + \alpha)\mathbb{E}_{(k+1,i)}\!\left[\int_{0}^{\tau_{D_{k+1}^{c}}}e^{-\alpha t}\mathbf{1}(Y(t) = (k,j))dt\right], \end{align*}

where, for each $k \geq 1$ ,

\begin{align*} D_{k} = \bigcup_{n=0}^{k-1}L_{n}. \end{align*}

More generally, we can define the $\mathbf{R}$ -matrices $\{\mathbf{R}_{n,m}(\alpha)\}_{n \geq 1, 0 \leq m < n}$ , where

\begin{align*} (\mathbf{R}_{n,m}(\alpha))_{i,j} \,:\!=\, ({-}(\mathbf{Q}^{(n)})_{i,i} + \alpha)\mathbb{E}_{(n,i)}\!\left[\int_{0}^{\tau_{D_{n}^{c}}}e^{-\alpha t}\mathbf{1}(Y(t) = (m,j))dt\right]. \end{align*}

Matrices similar to these $\mathbf{R}$ -matrices have been used many times in other studies; see e.g. Naoumov [Reference Naoumov16] as well as Bright and Taylor [Reference Bright and Taylor3].

The next result, Proposition 1, shows that each $\mathbf{R}_{n,m}(\alpha)$ matrix, $n > m \geq 0$ , can be expressed in terms of products of matrices from the sequence $\{\mathbf{R}_{k+1,k}(\alpha)\}_{k \geq 0}$ . Readers should carefully note our usage of both the product symbol $\prod$ and the coproduct symbol $\coprod$ : we use both to better emphasize the order in which we apply matrix multiplication. Given a collection of matrices $\{H_{k}\}_{k \geq 0}$ , we write

\begin{align*} \prod_{k=m}^{n}H_{k} \,:\!=\, H_{m}H_{m+1} \cdots H_{n} \end{align*}

when $m \leq n$ , while we instead write

\begin{align*} \coprod_{k=m}^{n}H_{k} \,:\!=\, H_{m}H_{m-1} \cdots H_{n} \end{align*}

when $m \geq n$ .
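The distinction between $\prod$ and $\coprod$ is purely one of multiplication order, which matters because matrix products do not commute. The following sketch (our own; the helper names are arbitrary) makes the two orderings explicit.

```python
def matmul(A, B):
    """Multiply two matrices given as nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def prod(H, m, n):
    """H_m H_{m+1} ... H_n  (ascending index, multiplied left to right)."""
    out = H[m]
    for k in range(m + 1, n + 1):
        out = matmul(out, H[k])
    return out

def coprod(H, m, n):
    """H_m H_{m-1} ... H_n  (descending index, multiplied left to right)."""
    out = H[m]
    for k in range(m - 1, n - 1, -1):
        out = matmul(out, H[k])
    return out

# Two non-commuting matrices illustrate why the order must be tracked:
A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
H = {0: A, 1: B}
# prod(H, 0, 1) = A B, while coprod(H, 1, 0) = B A, and the two differ.
```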

Proposition 1. For each integer $n \geq 1$ and each integer $m \in \{0,1,2,\ldots,n-1\}$ ,

\begin{align*} \mathbf{R}_{n,m}(\alpha) = \coprod_{k=n}^{m+1}\mathbf{R}_{k,k-1}(\alpha). \end{align*}

Proof. This result can be established using the random-product technique discussed in [Reference Joyner and Fralix11].

The next result provides us with a way of numerically calculating the $\mathbf{R}_{k+1,k}(\alpha)$ matrices.

Proposition 2. The matrices $\mathbf{R}_{k+1,k}(\alpha)$ , for $k \geq 0$ , satisfy the following recursion: for each integer $k \geq 1$ ,

\begin{align*} \mathbf{R}_{k+1,k}(\alpha)=\mathcal{M}^{(k+1)}[\alpha\mathbf{I}^{(k)}-\mathbf{Q}^{(k)}-\mathbf{R}_{k,k-1}(\alpha)\Lambda^{(k-1)}]^{-1}, \end{align*}

where $\mathbf{R}_{1,0}(\alpha) = \mathcal{M}^{(1)}(\alpha \mathbf{I}^{(0)} - \mathbf{Q}^{(0)})^{-1}$ .

Together, Propositions 1 and 2 provide us with a simple method for numerically computing all $\mathbf{R}_{n,m}(\alpha)$ matrices, for $0 \leq m < n$ .

Proof. One can use the random-product technique, as is done in [Reference Joyner and Fralix12], to show that

\begin{align*} \alpha \mathbf{R}_{k+1,k}(\alpha) = \mathcal{M}^{(k+1)} + \mathbf{R}_{k+1,k}(\alpha)\mathbf{Q}^{(k)} + \mathbf{R}_{k+1,k}(\alpha)\mathbf{R}_{k,k-1}(\alpha)\Lambda^{(k-1)}, \end{align*}

meaning we can express $\mathbf{R}_{k+1,k}(\alpha)$ in terms of $\mathbf{R}_{k,k-1}(\alpha)$ , thus proving the result.
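For scalar phases ( $d_{n} = 1$ for all n), the recursion of Proposition 2 collapses to a continued-fraction-like scalar recursion, and the identity displayed in the proof can be checked numerically. The sketch below is our own, with illustrative level-dependent birth–death rates.

```python
def r_recursion(alpha, Q, Lam, M, K):
    """Scalar-phase (d_n = 1) sketch of Proposition 2's recursion:
    R_{1,0} = M[1] / (alpha - Q[0]);
    R_{k+1,k} = M[k+1] / (alpha - Q[k] - R_{k,k-1} * Lam[k-1]).
    Q[k], Lam[k], M[k] are the 1x1 blocks, stored here as plain floats."""
    R = {(1, 0): M[1] / (alpha - Q[0])}
    for k in range(1, K):
        R[(k + 1, k)] = M[k + 1] / (alpha - Q[k] - R[(k, k - 1)] * Lam[k - 1])
    return R

# Level-dependent birth-death example: up rate 1 + 0.1*k, down rate 2.
alpha = 0.7
K = 6
Lam = {k: 1.0 + 0.1 * k for k in range(K + 1)}
M = {k: 2.0 for k in range(1, K + 2)}
Q = {0: -Lam[0]}
for k in range(1, K + 1):
    Q[k] = -(Lam[k] + M[k])
R = r_recursion(alpha, Q, Lam, M, K)
```

Each computed value can be plugged back into the fixed-point identity from the proof, $\alpha R_{k+1,k} = \mathcal{M}^{(k+1)} + R_{k+1,k}\mathbf{Q}^{(k)} + R_{k+1,k}R_{k,k-1}\Lambda^{(k-1)}$ , as a consistency check.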

We are now ready to discuss the main results of this section. Further associated with $\{Y(t);\, t \geq 0\}$ is a stochastic process $\{\overline{X}(t);\, t \geq 0\}$ , where, for each real $t \geq 0$ ,

\begin{align*} \overline{X}(t) \,:\!=\, \sup_{0 \leq s \leq t}X(s), \end{align*}

which represents the maximum level achieved by $\{Y(t);\, t \geq 0\}$ over the interval [0, t]; in [Reference Mandjes and Taylor15], the authors refer to $\{\overline{X}(t);\, t \geq 0\}$ as the running maximum process. We can further combine $\overline{X}(t)$ and Y(t) by defining the stochastic process $Z(t) \,:\!=\, (\overline{X}(t), X(t), J(t))$ , which is clearly also a CTMC, whose state space $\overline{S}$ is

\begin{align*} \overline{S} = \bigcup_{n=0}^{\infty}\bigcup_{m=0}^{n}L_{n,m}, \end{align*}

where, for each integer $n \geq 0$ and each integer $m \in \{0,1,2,\ldots,n\}$ ,

\begin{align*} L_{n,m} \,:\!=\, \{([n,m],1), ([n,m],2), \ldots, ([n,m],d_{m} - 1), ([n,m],d_{m})\}. \end{align*}

Observe that the state ([n, m], k) has level [n, m] and phase k, where $k \in \{1,2,\ldots, d_{m}\}$ .

In Theorem 2 we study the marginal distributions of $\{Z(t);\, t \geq 0\}$ by applying Corollary 1 in various ways. Throughout both this section and the next, we let $\boldsymbol{\Pi}_{[n_{1}, m_{1}],[n_{2},m_{2}]}(\alpha)$ denote a matrix in $\mathbb{C}^{d_{m_{1}} \times d_{m_{2}}}$ which is of the form

\begin{align*} \boldsymbol{\Pi}_{[n_{1},m_{1}],[n_{2},m_{2}]}(\alpha) = [\pi_{([n_{1}, m_{1}],i), ([n_{2},m_{2}],j)}(\alpha)]_{i \in \{1,2,\ldots,d_{m_{1}}\}, j \in \{1,2,\ldots, d_{m_{2}}\}}. \end{align*}

This matrix contains Laplace transforms of transition functions associated with the CTMC $\{Z(t);\, t \geq 0\}$ ; i.e.

\begin{align*} \pi_{([n_{1}, m_{1}], i_{1}), ([n_{2},m_{2}], i_{2})}(\alpha) = \int_{0}^{\infty}e^{-\alpha t}\mathbb{P}_{([n_{1},m_{1}], i_{1})}(Z(t) = ([n_{2}, m_{2}], i_{2}))dt. \end{align*}

This is a slight abuse of notation, as $\pi_{x,y}$ refers to the Laplace transform of the transition function of $\{Y(t);\, t \geq 0\}$ or $\{Z(t);\, t \geq 0\}$ , but it will be clear from the context which process is being used whenever we use this notation. Likewise, readers should note that we will occasionally write $\mathbb{P}_{x}$ when we want to express a conditional probability given $Z(0) = x$ , just as we wrote $\mathbb{P}_{x}$ to denote a conditional probability given $Y(0) = x$ . It will always be clear from the context what is being conditioned on when we write $\mathbb{P}_{x}$ , so we will use this notation throughout the rest of the paper without further comment.

Theorem 2. For each $m_{0} \geq 0$ ,

(4) \begin{align} \boldsymbol{\Pi}_{[m_{0}, m_{0}],[m_{0},m_{0}]}(\alpha) = [\alpha \mathbf{I}^{(m_{0})} - \mathbf{Q}^{(m_{0})} - \mathbf{R}_{m_{0},m_{0} - 1}(\alpha)\Lambda^{(m_{0} - 1)}]^{-1}. \end{align}

Furthermore, for each $n \geq m_{0}+1$ ,

(5) \begin{align} \boldsymbol{\Pi}_{[m_{0},m_{0}],[n,n]}(\alpha) = \boldsymbol{\Pi}_{[m_{0},m_{0}],[m_{0},m_{0}]}(\alpha)\prod_{\ell = m_{0} +1}^{n}\Lambda^{(\ell)}[\alpha \mathbf{I}^{(\ell)} - \mathbf{Q}^{(\ell)} - \mathbf{R}_{\ell, \ell-1}(\alpha)\Lambda^{(\ell - 1)}]^{-1}. \end{align}

Finally, for each $n \geq m_{0}$ and each $m \in \{0,1,2,\ldots,n-1\}$ ,

(6) \begin{align} \boldsymbol{\Pi}_{[m_{0},m_{0}],[n,m]}(\alpha) = \boldsymbol{\Pi}_{[m_{0},m_{0}],[n,n]}(\alpha)\coprod_{\ell = n}^{m+1}\mathbf{R}_{\ell, \ell - 1}(\alpha). \end{align}

Proof. We first prove (6), where we assume $Z(0) = ([m_{0}, m_{0}], i_{0})$ for some fixed (yet arbitrarily chosen) state in $L_{m_{0}}$ . Applying (2) to $\{Z(t);\, t \geq 0\}$ while choosing

\begin{align*} T = \bigcup_{k=0}^{n-1}L_{n,k} \end{align*}

yields, for each state $([n,m],j) \in T$ ,

(7) \begin{align} \pi_{([m_{0},m_{0}], i_{0}), ([n,m],j)}(\alpha) = \sum_{i = 1}^{d_{n}}\pi_{([m_{0},m_{0}],i_{0}),([n,n],i)}(\alpha)(q([n,n],i) + \alpha)\mathbb{E}_{([n,n],i)}\!\left[\int_{0}^{\tau_{T^{c}}}e^{-\alpha t}\mathbf{1}(Z(t) = ([n,m],j))dt\right]. \end{align}

Next, observe that for each $i \in \{1,2,\ldots,d_{n}\}$ , a simple comparison between the transition structures of $\{Y(t);\, t \geq 0\}$ and $\{Z(t);\, t \geq 0\}$ reveals

(9) \begin{align} & ({-}(\mathbf{Q}^{(n)})_{i,i} + \alpha)\mathbb{E}_{([n,n],i)}\!\left[\int_{0}^{\tau_{T^{c}}}e^{-\alpha t}\mathbf{1}(Z(t) = ([n,m],j))dt\right] = (\mathbf{R}_{n,m}(\alpha))_{i,j}, \end{align}

and after combining this observation with (7) we obtain, upon further simplification,

\begin{align*} \boldsymbol{\Pi}_{[m_{0},m_{0}],[n,m]}(\alpha) = \boldsymbol{\Pi}_{[m_{0},m_{0}],[n,n]}(\alpha)\mathbf{R}_{n,m}(\alpha), \end{align*}

proving (6).

The next step of the proof is to establish Equation (4). When we prove this equality, we will simultaneously show that for each integer $n \geq 1$ , the matrix $(\alpha \mathbf{I}^{(n)} - \mathbf{Q}^{(n)} - \mathbf{R}_{n,n-1}(\alpha)\Lambda^{(n-1)})$ is invertible.

Using the Kolmogorov forward equations associated with $\{Z(t);\, t \geq 0\}$ , we see that for each $\alpha \in \mathbb{C}_{+}$ ,

\begin{align*} \alpha \boldsymbol{\Pi}_{[m_{0},m_{0}],[m_{0},m_{0}]}(\alpha) - \mathbf{I}^{(m_{0})} &= \boldsymbol{\Pi}_{[m_{0},m_{0}],[m_{0},m_{0}]}(\alpha)\mathbf{Q}^{(m_{0})} + \boldsymbol{\Pi}_{[m_{0},m_{0}],[m_{0},m_{0}-1]}(\alpha)\Lambda^{(m_{0}-1)} \\ &= \boldsymbol{\Pi}_{[m_{0},m_{0}],[m_{0},m_{0}]}(\alpha)\mathbf{Q}^{(m_{0})} + \boldsymbol{\Pi}_{[m_{0},m_{0}],[m_{0},m_{0}]}(\alpha)\mathbf{R}_{m_{0},m_{0}-1}(\alpha)\Lambda^{(m_{0}-1)}, \end{align*}

and after rearranging the matrices, we find

\begin{align*} \boldsymbol{\Pi}_{[m_{0},m_{0}],[m_{0},m_{0}]}(\alpha)\!\left[\alpha \mathbf{I}^{(m_{0})} - \mathbf{Q}^{(m_{0})} - \mathbf{R}_{m_{0},m_{0}-1}(\alpha)\Lambda^{(m_{0}-1)}\right] = \mathbf{I}^{(m_{0})}, \end{align*}

which establishes (4).

Equation (5) can be established in a similar manner. Again, from the Kolmogorov forward equations associated with $\{Z(t);\, t \geq 0\}$ , we see that for each integer $n > m_{0}$ ,

\begin{align*} & \alpha \boldsymbol{\Pi}_{[m_{0},m_{0}], [n,n]}(\alpha) \\ &= \boldsymbol{\Pi}_{[m_{0},m_{0}], [n-1,n-1]}(\alpha)\Lambda^{(n-1)} + \boldsymbol{\Pi}_{[m_{0},m_{0}],[n,n]}(\alpha)\mathbf{Q}^{(n)} + \boldsymbol{\Pi}_{[m_{0},m_{0}],[n,n-1]}(\alpha)\Lambda^{(n-1)} \\ &= \boldsymbol{\Pi}_{[m_{0},m_{0}], [n-1,n-1]}(\alpha)\Lambda^{(n-1)} + \boldsymbol{\Pi}_{[m_{0},m_{0}],[n,n]}(\alpha)\mathbf{Q}^{(n)} + \boldsymbol{\Pi}_{[m_{0},m_{0}],[n,n]}(\alpha)\mathbf{R}_{n,n-1}(\alpha)\Lambda^{(n-1)}, \end{align*}

and after rearranging terms, we get

\begin{align*} \boldsymbol{\Pi}_{[m_{0},m_{0}],[n,n]}(\alpha)\!\left[\alpha \mathbf{I}^{(n)} - \mathbf{Q}^{(n)} - \mathbf{R}_{n,n-1}(\alpha)\Lambda^{(n-1)}\right] = \boldsymbol{\Pi}_{[m_{0},m_{0}], [n-1,n-1]}(\alpha)\Lambda^{(n-1)}, \end{align*}

i.e.

\begin{align*} \boldsymbol{\Pi}_{[m_{0},m_{0}],[n,n]}(\alpha) = \boldsymbol{\Pi}_{[m_{0},m_{0}], [n-1,n-1]}(\alpha)\Lambda^{(n-1)}\!\left[\alpha \mathbf{I}^{(n)} - \mathbf{Q}^{(n)} - \mathbf{R}_{n,n-1}(\alpha)\Lambda^{(n-1)}\right]^{-1}, \end{align*}

proving (5). This completes the proof of Theorem 2.
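Theorem 2 can be sanity-checked numerically on a small truncated example: take scalar phases and three levels, with no upward transitions from the top level, so that $\{Z(t);\, t \geq 0\}$ has only six states and its resolvent $(\alpha\mathbf{I} - \mathbf{Q}_{Z})^{-1}$ can be computed directly. The sketch below is our own (all rates illustrative): it evaluates (4)–(6) starting from level $m_{0} = 0$ and compares the results with a direct linear solve.

```python
def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

alpha = 0.9
lam0, lam1, mu1, mu2 = 1.0, 2.0, 3.0, 4.0   # birth-death rates, levels 0..2

# Scalar R-matrices via Proposition 2 (Lambda^(2) = 0 at the top level).
R10 = mu1 / (alpha + lam0)
R21 = mu2 / (alpha + lam1 + mu1 - R10 * lam0)

# Theorem 2, starting from level m0 = 0 (so the R-term in (4) is absent):
pi00 = 1.0 / (alpha + lam0)                              # eq. (4)
pi11 = pi00 * lam0 / (alpha + lam1 + mu1 - R10 * lam0)   # eq. (5), n = 1
pi22 = pi11 * lam1 / (alpha + mu2 - R21 * lam1)          # eq. (5), n = 2
pi10 = pi11 * R10                                        # eq. (6)
pi21 = pi22 * R21                                        # eq. (6)
pi20 = pi22 * R21 * R10                                  # eq. (6)

# Direct check: generator of Z = (running max, level) on the states
# (0,0),(1,0),(1,1),(2,0),(2,1),(2,2); solve pi (alpha I - Qz) = e_0,
# i.e. the transposed system (alpha I - Qz)^T pi^T = e_0.
Qz = [[-1, 0, 1, 0, 0, 0],
      [0, -1, 1, 0, 0, 0],
      [0, 3, -5, 0, 0, 2],
      [0, 0, 0, -1, 1, 0],
      [0, 0, 0, 3, -5, 2],
      [0, 0, 0, 0, 4, -4]]
A = [[(alpha if i == j else 0.0) - Qz[j][i] for j in range(6)]
     for i in range(6)]
pi = solve(A, [1.0, 0, 0, 0, 0, 0])
```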

A number of different numerical transform inversion algorithms can be used to numerically invert the transforms we have derived. We will make use of the well-known method of Abate and Whitt [Reference Abate and Whitt1], but we also refer readers to the algorithms of Den Iseger [Reference Den Iseger5] as well as the very recently established method of Horváth et al. [Reference Horváth, Horváth, Almousa and Telek10].
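For completeness, here is a minimal sketch of the Euler variant of the Abate–Whitt algorithm as it is usually presented; the parameter choices $A = 18.4$, $M = 11$, $N = 15$ are the commonly used defaults rather than values quoted from [Reference Abate and Whitt1], and the sketch is checked on a transform pair with a known inverse.

```python
import math

def euler_invert(F, t, M=11, N=15, A=18.4):
    """Abate-Whitt 'EULER' algorithm: invert the Laplace transform F at
    t > 0 via the Bromwich trapezoidal rule, accelerating the alternating
    tail with Euler (binomial) summation."""
    def partial(n):
        s = 0.5 * F(A / (2 * t)).real
        for k in range(1, n + 1):
            s += (-1) ** k * F((A + 2j * k * math.pi) / (2 * t)).real
        return (math.exp(A / 2) / t) * s
    # Binomial average of the partial sums s_N, ..., s_{N+M}.
    return sum(math.comb(M, k) * 2.0 ** (-M) * partial(N + k)
               for k in range(M + 1))

# Sanity check on a known pair: F(a) = 1/(a + 1)  <->  f(t) = exp(-t).
val = euler_invert(lambda a: 1 / (a + 1), 1.0)
```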

The numbers below were generated by applying the Abate–Whitt transform inversion algorithm to the Laplace transforms of the following model. We assume customers arrive at an infinite-server queueing system in accordance with a Poisson process with rate $\lambda = 100$, and each customer brings with it to the system an amount of work that is exponentially distributed with parameter $1/f$, where $f = 6$. When there are i customers present in the system, the total service rate is $\min\!(C, R_{max}i)$, where both C and $R_{max}$ vary in accordance with a finite-state CTMC having state space $E = \{1,2,3,4,5\}$ and generator $\mathbf{S}\,:\!=\, [s_{i,j}]_{i,j \in E}$. This model was studied in [Reference Mandjes and Taylor15], and it is an extension of a model studied in Ellens et al. [Reference Ellens6]. In [Reference Mandjes and Taylor15], the generator matrix $\mathbf{S}$ is given by

\begin{align*} \left( \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} -25.6921 & 9.9979 & 0.4890 & 5.5337 & 9.6715 \\[4pt] 10.5165 & -31.4895 & 8.4180 & 6.9109 & 5.6441 \\[4pt] 9.9951 & 1.9202 & -29.5037 & 14.7246 & 2.8639 \\[4pt] 8.0869 & 14.9862 & 10.0376 & -39.5345 & 6.4238 \\[4pt] 10.4716 & 2.5668 & 2.8565 & 12.8328 & -28.7277 \\ \end{array} \right) \end{align*}

(these numbers were generated randomly by the authors), and the possible values of both C and $R_{max}$ are given below in Table 1 (this was taken from [Reference Mandjes and Taylor15, p. 221]).

Table 1. Values of C and $R_{max}$ .

Readers should observe that the values we calculated in Table 2 are within $0.0015$ of the corresponding values in [Reference Mandjes and Taylor15]; it is interesting that our values are consistently slightly smaller than those given in Table 2 of [Reference Mandjes and Taylor15]. The values found in [Reference Mandjes and Taylor15] were calculated with the Erlangization technique.

Table 2. Probability that the number of customers exceeds 15 in the interval [0, 1], as a function of the initial number of customers X(0) and the initial phase J(0).

3. Markov processes of M/G/1 type

We close by studying the joint distribution of the running minimum level, the level, and the phase of a level-dependent Markov process of M/G/1 type at a fixed time t. Suppose now that $\{Y(t);\, t\geq 0\}$ represents a level-dependent Markov process of M/G/1 type whose state space S can be expressed in terms of a countable union of levels:

\begin{align*} S=\bigcup_{n=0}^\infty L_n, \end{align*}

where for each integer $n\geq 0$ ,

\begin{align*} L_n\,:\!=\,\{(n,1),(n,2),\dots, (n,d_n-1), (n,d_n)\},\end{align*}

where each $d_n$ is a positive integer that varies with n. Readers should observe that the analysis we provide in this section also carries through when the number of levels is finite; what is most important is the M/G/1-type structure, i.e. that $\{Y(t);\, t \geq 0\}$ is downward-skip-free with respect to level transitions.

Just as before, we express Y(t) as (X(t), J(t)), where X(t) and J(t) respectively denote the current level and phase of the process at time t. We express the transition rate matrix $\mathbf{Q}$ of $\{Y(t);\, t \geq 0\}$ in block-partitioned form as

\begin{eqnarray*}\mathbf{Q}=\begin{pmatrix}\mathbf{A}_{0,0} & \quad \mathbf{A}_{0,1} & \quad \mathbf{A}_{0,2} & \quad \mathbf{A}_{0,3} & \quad \mathbf{A}_{0,4} & \quad \cdots\\[4pt]\mathbf{A}_{1,0} & \quad \mathbf{A}_{1,1} & \quad \mathbf{A}_{1,2} & \quad \mathbf{A}_{1,3} & \quad \mathbf{A}_{1,4} & \quad \cdots \\[4pt]\mathbf{0}_{d_{2} \times d_{0}} & \quad \mathbf{A}_{2,1} & \quad \mathbf{A}_{2,2} & \quad \mathbf{A}_{2,3} & \quad \mathbf{A}_{2,4} & \quad \cdots \\[1pt]\mathbf{0}_{d_{3} \times d_{0}} & \quad \mathbf{0}_{d_{3} \times d_{1}} & \quad \mathbf{A}_{3,2} & \quad \mathbf{A}_{3,3} & \quad \mathbf{A}_{3,4} & \quad \ddots \\[1pt]\vdots & \quad \vdots& \quad \vdots & \quad \ddots & \quad \ddots& \quad \ddots \end{pmatrix}.\end{eqnarray*}

Observe that for each integer $i\geq 0$ and each $j\geq i-1$ , $\mathbf{A}_{i,j} \in \mathbb{R}^{d_{i} \times d_{j}}$ contains the transition rates corresponding to transitions from states in $L_i$ to states in $L_j$ . Again we assume that for each state $x\in S$ there exist two states $y,z\in S$ (which may depend on x) such that $q(x,y)>0$ and $q(z,x)>0$ .

Just as in Section 2, there is an important family of ‘ $\mathbf{R}$ -matrices’ $\{\mathbf{R}_{\ell, m}(\alpha)\}_{m\geq 1, 0\leq \ell <m}$ such that for each integer $m\geq 1$ and each integer $\ell \in \{0,1,\dots, m-1\}$ ,

\begin{align*}(\mathbf{R}_{\ell, m}(\alpha))_{i,j}\,:\!=\,({-}(\mathbf{A}_{\ell,\ell})_{i,i}+\alpha)\mathbb{E}_{(\ell,i)}\!\left[\int_0^{\tau_{C_{m-1}}}e^{-\alpha t}\mathbf{1}(Y(t)=(m, j))dt \right],\end{align*}

where, for each integer $m\geq 0$ ,

\begin{equation*} C_m=\bigcup_{n=0}^m L_n.\end{equation*}

Our analysis of Markov processes of M/G/1 type also involves a close study of a family of ‘ $\mathbf{G}$ -matrices’ $\{\mathbf{G}_{n,m}(\alpha)\}_{0 \leq m < n}$ , where for each integer $n \geq 1$ and each integer $m \in \{0,1,\ldots,n-1\}$ ,

\begin{align*} (\mathbf{G}_{n,m}(\alpha))_{i,j}= \mathbb{E}_{(n,i)}\!\left[\mathbf{1}(Y(\tau_{L_{m}})=(m,j))e^{-\alpha\tau_{L_{m}}}\right].\end{align*}

Our next proposition, Proposition 3, shows how to express all $\mathbf{R}$ -matrices in terms of $\mathbf{G}$ -matrices.

Proposition 3. For each integer $m \geq 1$ and each integer $\ell\in\{0,1,2,\dots,m-1\}$ , we have

(10) \begin{align} \mathbf{R}_{\ell, m}(\alpha)=\sum_{k=m}^\infty \mathbf{A}_{\ell, k}\mathbf{G}_{k, m}(\alpha)\left[\alpha \mathbf{I}^{(m)} - \sum_{n=m}^\infty \mathbf{A}_{m,n}\mathbf{G}_{n,m}(\alpha)\right]^{-1},\end{align}

where we follow the convention that $\mathbf{G}_{m,m}(\alpha)\,:\!=\,\mathbf{I}^{(m)}.$ Furthermore, for each $m \geq 0$ and each $k > m$ ,

(11) \begin{align} \mathbf{G}_{k,m}(\alpha) = \coprod_{\ell = k}^{m+1} \mathbf{G}_{\ell,\ell-1}(\alpha) \,:\!=\, \mathbf{G}_{k,k-1}(\alpha)\mathbf{G}_{k-1,k-2}(\alpha) \cdots \mathbf{G}_{m+1,m}(\alpha), \end{align}

and the family of one-step $\mathbf{G}$ -matrices $\{\mathbf{G}_{k,k-1}(\alpha)\}_{k \geq 1}$ satisfies the following recursive scheme: for each integer $k \geq 1$ ,

(12) \begin{align} \mathbf{G}_{k,k-1}(\alpha)&=\left[\alpha\mathbf{I}^{(k)}-\mathbf{A}_{k,k}-\sum_{i=k+1}^{\infty} \mathbf{A}_{k,i} \coprod_{j=i}^{k+1}\mathbf{G}_{j,j-1}(\alpha)\right]^{-1}\mathbf{A}_{k,k-1}. \end{align}

Proof. We follow the line of reasoning given in the unpublished manuscript [11]. First, we define the collection of matrices $\{\mathbf{N}_m(\alpha)\}_{m\geq 1}$ , where, for each integer $m\geq 1$ and all integers $i,j \in \{1,2,\ldots,d_{m}\}$ (where possibly $i = j$ ),

\begin{align*} (\mathbf{N}_{m}(\alpha))_{i,j}&\,:\!=\,\mathbb{E}_{(m,i)}\!\left[\int_0^{\tau_{L_{m-1}}}e^{-\alpha t}\mathbf{1}(Y(t)=(m, j))dt\right].\end{align*}

Applying a first-step analysis argument shows that

(13) \begin{align} (\mathbf{N}_{m}(\alpha))_{i,j}&=\frac{\mathbf{1}(i=j)}{q((m,i))+\alpha}+\sum_{k\neq i}\frac{q((m,i),(m,k))}{q((m,i))+\alpha}(\mathbf{N}_{m}(\alpha))_{k,j}\nonumber\\& \quad +\sum_{k=m+1}^\infty\sum_{n=1}^{d_k} \frac{q((m,i),(k,n))}{q((m,i))+\alpha}\mathbb{E}_{(k,n)}\!\left[\int_0^{\tau_{L_{m-1}}} e^{-\alpha t}\mathbf{1}(Y(t)=(m,j))dt\right].\end{align}

We can use the strong Markov property at the stopping time $\tau_{L_{m}}$ to further simplify the remaining expectations found in (13): indeed,

(14) \begin{align} &\mathbb{E}_{(k,n)}\!\left[\int_0^{\tau_{L_{m-1}}} e^{-\alpha t}\mathbf{1}(Y(t)=(m,j))dt\right]\\&\quad =\sum_{\ell=1}^{d_m}\mathbb{E}_{(k,n)}[\mathbf{1}(Y(\tau_{L_m})=(m,\ell))e^{-\alpha\tau_{L_m}}]\mathbb{E}_{(m,\ell)}\!\left[\int_0^{\tau_{L_{m-1}}} e^{-\alpha t}\mathbf{1}(Y(t)=(m,j))dt\right]\nonumber\\&\quad =\sum_{\ell=1}^{d_m}(\mathbf{G}_{k, m}(\alpha))_{n,\ell}(\mathbf{N}_m(\alpha))_{\ell,j}. \nonumber \end{align}

Plugging (14) into (13), then expressing (13) in matrix form (while remembering that $\mathbf{G}_{m,m}(\alpha) = \mathbf{I}^{(m)}$ ), we get

(15) \begin{align} \alpha\mathbf{N}_m(\alpha)=\mathbf{I}^{(m)} + \sum_{k=m}^\infty\mathbf{A}_{m,k} \mathbf{G}_{k,m}(\alpha)\mathbf{N}_m(\alpha),\end{align}

which implies

\begin{align*}\left[\alpha\mathbf{I}^{(m)} - \sum_{k=m}^\infty \mathbf{A}_{m,k}\mathbf{G}_{k,m}(\alpha)\right]\mathbf{N}_m(\alpha)=\mathbf{I}^{(m)},\end{align*}

meaning

(16) \begin{align} \mathbf{N}_m(\alpha)=\left[\alpha \mathbf{I}^{(m)} - \sum_{k=m}^\infty \mathbf{A}_{m,k}\mathbf{G}_{k,m}(\alpha)\right]^{-1}.\end{align}

We are now ready to derive (10). From the definition of $\mathbf{R}_{\ell, m}(\alpha)$ , we can see from applying both first-step analysis and the strong Markov property that

(17) \begin{align} \mathbf{R}_{\ell, m}(\alpha)=\sum_{k=m}^\infty \mathbf{A}_{\ell, k}\mathbf{G}_{k, m}(\alpha)\mathbf{N}_m(\alpha).\end{align}

Plugging (16) into (17) yields (10).

The next step is to establish (11). Fix an integer $m \geq 0$ and an integer $k>m$ . Again using the strong Markov property, we get

\begin{align*} \mathbf{G}_{k,m}(\alpha)=\mathbf{G}_{k, k-1}(\alpha)\mathbf{G}_{k-1,m}(\alpha),\end{align*}

and by a simple induction argument, we get

\begin{align*} \mathbf{G}_{k,m}(\alpha) = \coprod_{\ell = k}^{m+1}\mathbf{G}_{\ell, \ell - 1}(\alpha), \end{align*}

which establishes (11).

It remains to derive (12). Fixing $i \in\{1,2,\dots, d_k\}$ and $j\in\{1,2,\dots, d_{k-1}\}$ , we have

\begin{align*} (\mathbf{G}_{k,k-1}(\alpha))_{i,j}&=\frac{q((k,i),(k-1,j))}{q((k,i))+\alpha}+\sum_{\ell \neq i}\frac{q((k,i),(k,\ell))}{q((k,i))+\alpha}(\mathbf{G}_{k,k-1}(\alpha))_{\ell,j} \\ &+\sum_{m=k+1}^{\infty}\sum_{\ell=1}^{d_{m}}\frac{q((k,i),(m,\ell))}{q((k,i))+\alpha}(\mathbf{G}_{m,k-1}(\alpha))_{\ell,j}\end{align*}

or, in matrix form,

(18) \begin{align} \alpha\mathbf{G}_{k,k-1}(\alpha)=\mathbf{A}_{k,k-1}+\sum_{m=k}^\infty \mathbf{A}_{k,m}\mathbf{G}_{m,k-1}(\alpha).\end{align}

Applying (11) to (18) shows that

(19) \begin{align} \alpha\mathbf{G}_{k,k-1}(\alpha)&=\mathbf{A}_{k,k-1}+\textbf{A}_{k, k}\mathbf{G}_{k, k-1}(\alpha)+\sum_{i=k+1}^{\infty} \mathbf{A}_{k,i}\!\left(\coprod_{j=i}^{k+1}\mathbf{G}_{j,j-1}(\alpha)\right)\mathbf{G}_{k,k-1}(\alpha), \end{align}

and solving for $\mathbf{G}_{k,k-1}(\alpha)$ in (19) gives

\begin{align*} \mathbf{G}_{k,k-1}(\alpha)&=\left[\alpha\mathbf{I}^{(k)}-\mathbf{A}_{k,k}-\sum_{i=k+1}^{\infty} \mathbf{A}_{k,i} \coprod_{j=i}^{k+1}\mathbf{G}_{j,j-1}(\alpha) \right]^{-1}\mathbf{A}_{k,k-1},\end{align*}

which proves (12).

While Proposition 3 is theoretically interesting, it is only practically useful if the $\mathbf{G}$ -matrices can be calculated numerically. It is not clear in general whether there is a way to calculate these matrices, but they can be calculated if we impose additional assumptions on $\{Y(t);\, t \geq 0\}$ . Suppose, for instance, that there exists an integer $n_{0} \geq 1$ large enough so that $\mathbf{A}_{n,k}=\mathbf{A}_{k-n}$ for all $n \geq n_0$ and $k \geq n-1$ . Under this additional assumption, one can see that $\mathbf{G}_{n,n-1}(\alpha)=\mathbf{G}(\alpha)$ for each $n\geq n_0$ , where

\begin{align*} \mathbf{G}(\alpha) \,:\!=\, \mathbf{G}_{n_{0}, n_{0} - 1}(\alpha). \end{align*}

As explained in [11], the matrix $\mathbf{G}(\alpha)$ is the pointwise limit of a sequence of matrices $\{\mathbf{G}(N,\alpha)\}_{N \geq 0}$ , where $\mathbf{G}(0,\alpha) = \mathbf{0}_{d_{n_{0}} \times d_{n_{0}}}$ , and for each integer $N \geq 0$ ,

\begin{align*} \mathbf{G}(N+1,\alpha) = (\alpha \mathbf{I}^{(n_{0})} - \mathbf{A}_{0})^{-1}\!\left[\mathbf{A}_{-1} + \sum_{n=1}^{\infty}\mathbf{A}_{n}\mathbf{G}(N,\alpha)^{n+1}\right]. \end{align*}
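For illustration, this fixed-point iteration can be sketched numerically as follows. The blocks below are single-phase (scalar) and every rate value is an assumption chosen for the sketch, not taken from the paper: single arrivals at rate 1, batches of two at rate 0.5, and one-step downward transitions at rate 3.

```python
import numpy as np

def g_matrix(alpha, A, n_iter=200):
    # Fixed-point iteration G(N+1) = (alpha I - A_0)^{-1} [A_{-1} + sum_{n>=1} A_n G(N)^{n+1}],
    # where the dictionary A maps a level increment (-1, 0, 1, 2, ...) to its rate block.
    d = A[-1].shape[0]
    G = np.zeros((d, d))
    inv = np.linalg.inv(alpha * np.eye(d) - A[0])
    for _ in range(n_iter):
        total = A[-1].copy()
        for n in sorted(k for k in A if k >= 1):
            total = total + A[n] @ np.linalg.matrix_power(G, n + 1)
        G = inv @ total
    return G

# Illustrative scalar blocks (all rates assumed for this sketch).
A = {-1: np.array([[3.0]]), 0: np.array([[-4.5]]),
     1: np.array([[1.0]]), 2: np.array([[0.5]])}
G0 = g_matrix(0.0, A)  # in the positive recurrent case, G(0) is stochastic
```

Since the mean downward drift here (3) exceeds the mean upward drift (1 + 2(0.5) = 2), the chain hits the level below with probability one, so the scalar entry of `G0` converges to 1.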

The $\mathbf{G}$ -matrices can also be calculated numerically if there are only finitely many levels $L_{0}, L_{1}, \ldots, L_{C}$ , i.e. if $\mathbf{A}_{k,\ell} = \mathbf{0}$ for each $k \in \{0,1,2,\ldots,C\}$ and each $\ell \geq C+1$ , and if $\mathbf{A}_{k,\ell} = \mathbf{0}$ for each $k \geq C + 1$ and each $\ell \geq 0$ . In this case, our analysis can be used to show that

\begin{align*} \mathbf{G}_{C,C-1}(\alpha) = (\alpha \mathbf{I}^{(C)} - \mathbf{A}_{C,C})^{-1}\mathbf{A}_{C,C-1}, \end{align*}

and all other one-step $\mathbf{G}$ -matrices $\mathbf{G}_{k,k-1}(\alpha)$ , $1 \leq k \leq C-1$ , can be calculated recursively using (12).
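The finite-level backward recursion can be sketched as follows; the small model at the bottom (levels 0 through 3, single-phase, with assumed rates) is purely illustrative.

```python
import numpy as np

def one_step_g(alpha, A, C):
    # Backward recursion for G_{k,k-1}(alpha), k = C, C-1, ..., 1, in the
    # finite-level case: start from G_{C,C-1} and apply (12) with the sum
    # truncated at level C.  A[(k, l)] is the rate block from level k to l.
    d = {k: A[(k, k)].shape[0] for k in range(C + 1)}
    G = {(C, C - 1): np.linalg.solve(alpha * np.eye(d[C]) - A[(C, C)],
                                     A[(C, C - 1)])}
    for k in range(C - 1, 0, -1):
        M = alpha * np.eye(d[k]) - A[(k, k)]
        for i in range(k + 1, C + 1):
            if (k, i) in A:
                P = np.eye(d[i])
                for j in range(i, k, -1):  # product G_{i,i-1} ... G_{k+1,k}
                    P = P @ G[(j, j - 1)]
                M = M - A[(k, i)] @ P
        G[(k, k - 1)] = np.linalg.solve(M, A[(k, k - 1)])
    return G

# Small single-phase illustration (rates are assumptions): levels 0..3,
# down rate 2, up-one rate 1, up-two rate 0.5.
C = 3
A = {}
for k in range(C + 1):
    if k >= 1:
        A[(k, k - 1)] = np.array([[2.0]])
    if k + 1 <= C:
        A[(k, k + 1)] = np.array([[1.0]])
    if k + 2 <= C:
        A[(k, k + 2)] = np.array([[0.5]])
for k in range(C + 1):
    out = sum(A[(k, l)][0, 0] for l in range(C + 1) if l != k and (k, l) in A)
    A[(k, k)] = np.array([[-out]])

G = one_step_g(0.0, A, C)
```

At $\alpha = 0$ each one-step $\mathbf{G}$ -matrix records first-passage probabilities; with finitely many levels the process reaches the level below almost surely, so every scalar entry equals 1 here.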

We are now ready to set up and establish the main result of this section. We associate with $\{Y(t);\,t\geq 0 \}$ the stochastic process $\{\underline{X}(t);\, t\geq 0 \}$ where for each $t \geq 0$ ,

\begin{align*} \underline{X}(t)\,:\!=\,\inf_{0\leq s\leq t} X(s), \end{align*}

which represents the running minimum level achieved by $\{Y(t);\,t\geq 0\}$ over the interval [0, t]. Next, for each $t\geq 0$ we define $Z(t)\,:\!=\,(\underline{X}(t), X(t),J(t))$ , and just as was the case in the previous section, $\{Z(t);\,t\geq 0\}$ is a CTMC with state space

\begin{align*} \underline{S}=\bigcup_{n=0}^\infty \bigcup_{m=n}^\infty L_{n,m}, \end{align*}

where for each integer $n\geq 0$ and each integer $m \geq n$ ,

\begin{align*}L_{n,m}\,:\!=\,\{([n,m],1), ([n,m],2),\dots, ([n,m],d_m-1), ([n,m], d_m)\}.\end{align*}

In our next result, Theorem 3, we show how to derive the Laplace transforms of the transition functions associated with $\{Z(t);\, t\geq 0\}$ .

Theorem 3. For each integer $m_0\geq 0$ ,

(20) \begin{align} \boldsymbol{\Pi}_{[m_0, m_0], [m_0,m_0]}(\alpha) = [\alpha \mathbf{I}^{(m_0)} - \mathbf{A}_{m_0,m_0} - \mathbf{R}_{m_0,m_0+1}(\alpha)\mathbf{A}_{m_0+1,m_0}]^{-1}. \end{align}

Furthermore, for each integer $n \in \{0,1,\dots, m_0-1\}$ ,

(21) \begin{align} \boldsymbol{\Pi}_{[m_0, m_0], [n,n]}(\alpha)=\boldsymbol{\Pi}_{[m_0, m_0], [m_0,m_0]}(\alpha)\coprod_{\ell=m_0-1}^{n} \mathbf{A}_{\ell+1,\ell}\!\left[\alpha \mathbf{I}^{(\ell)}-\mathbf{A}_{\ell,\ell}-\mathbf{R}_{\ell, \ell+1}(\alpha)\mathbf{A}_{\ell+1,\ell}\right]^{-1}. \end{align}

Finally, for each integer $n\in\{0,1,\dots, m_0-1,m_0\}$ and each integer $m \geq n$ , $\boldsymbol{\Pi}_{[m_0, m_0], [n,m+1]}(\alpha)$ satisfies the recursion

(22) \begin{align} \boldsymbol{\Pi}_{[m_0, m_0], [n,m+1]}(\alpha)=\sum_{k=n}^{m} \boldsymbol{\Pi}_{[m_0, m_0], [n,k]}(\alpha)\mathbf{R}_{k, m+1}(\alpha).\end{align}

Proof. Theorem 3 can be established using virtually the same argument we used to establish Theorem 2, so we give only a brief outline of the argument. First, Corollary 1 can be used to show that for each integer $n \in \{0,1,\dots, m_{0} - 1, m_{0}\}$ and each integer $m \geq n$ ,

\begin{align*} \boldsymbol{\Pi}_{[m_{0}, m_{0}], [n,m+1]}(\alpha) = \sum_{k=n}^{m}\boldsymbol{\Pi}_{[m_{0}, m_{0}], [n,k]}(\alpha)\mathbf{R}_{k,m+1}(\alpha), \end{align*}

which establishes (22).

Next, we use the forward equations associated with $\{Z(t);\, t \geq 0\}$ , combined with (22), to get

\begin{align*} \alpha \boldsymbol{\Pi}_{[m_{0}, m_{0}], [m_{0}, m_{0}]}(\alpha) - \mathbf{I}^{(m_{0})} &= \boldsymbol{\Pi}_{[m_{0}, m_{0}], [m_{0}, m_{0}]}(\alpha)\mathbf{A}_{m_{0}, m_{0}} + \boldsymbol{\Pi}_{[m_{0}, m_{0}], [m_{0}, m_{0}+1]}(\alpha)\mathbf{A}_{m_{0} + 1, m_{0}} \\ &= \boldsymbol{\Pi}_{[m_{0}, m_{0}], [m_{0}, m_{0}]}(\alpha)\mathbf{A}_{m_{0}, m_{0}}\\ & \quad + \boldsymbol{\Pi}_{[m_{0}, m_{0}], [m_{0}, m_{0}]}(\alpha)\mathbf{R}_{m_{0}, m_{0} + 1}(\alpha)\mathbf{A}_{m_{0} + 1, m_{0}}, \end{align*}

from which we get (20).

Table 3. Probability that the number of customers falls below 2 in the interval [0, 1], as a function of the initial number of customers X(0) and the initial phase J(0), with a queue limit of 25.

Finally, again using the forward equations associated with $\{Z(t);\, t \geq 0\}$ , as well as (22), yields, for each $n < m_{0}$ ,

\begin{align*} \alpha \boldsymbol{\Pi}_{[m_{0}, m_{0}],[n,n]}(\alpha) &= \boldsymbol{\Pi}_{[m_{0},m_{0}], [n,n]}(\alpha)\mathbf{A}_{n,n} + \boldsymbol{\Pi}_{[m_{0},m_{0}], [n,n+1]}(\alpha)\mathbf{A}_{n+1,n} \\ & \quad + \boldsymbol{\Pi}_{[m_{0}, m_{0}],[n+1,n+1]}(\alpha)\mathbf{A}_{n+1,n} \\ &= \boldsymbol{\Pi}_{[m_{0},m_{0}], [n,n]}(\alpha)\mathbf{A}_{n,n} + \boldsymbol{\Pi}_{[m_{0},m_{0}], [n,n]}(\alpha)\mathbf{R}_{n,n+1}(\alpha)\mathbf{A}_{n+1,n} \\ & \quad + \boldsymbol{\Pi}_{[m_{0}, m_{0}],[n+1,n+1]}(\alpha)\mathbf{A}_{n+1,n}, \end{align*}

from which we get

\begin{align*} \boldsymbol{\Pi}_{[m_{0}, m_{0}],[n,n]}(\alpha) = \boldsymbol{\Pi}_{[m_{0}, m_{0}], [n+1,n+1]}(\alpha)\mathbf{A}_{n+1,n}[\alpha \mathbf{I}^{(n)} - \mathbf{A}_{n,n} - \mathbf{R}_{n,n+1}(\alpha)\mathbf{A}_{n+1,n}]^{-1}, \end{align*}

and repeated iterations of the same equality yield (21).
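One useful sanity check on Theorem 3 is that $\boldsymbol{\Pi}_{[m_0, m_0], [m_0,m_0]}(\alpha)$ and $\mathbf{N}_{m_0}(\alpha)$ describe the same quantity, namely the discounted time spent in level $m_0$ before the process first visits $L_{m_0 - 1}$, so the right-hand side of (20) must agree with (16). The following single-phase sketch verifies this numerically on a finite-level model (the model and all of its rates are illustrative assumptions):

```python
# Single-phase sanity check of (10), (16), and (20): levels 0..C, down rate
# mu to level k-1, and assumed batch "arrival" rates lam1 (up one) and lam2
# (up two), with upward jumps past level C absent.
C, mu, lam1, lam2 = 3, 2.0, 1.0, 0.5
alpha = 0.4

def A(k, l):
    """Scalar rate 'block' from level k to level l (0 when no transition)."""
    if l == k - 1 and k >= 1:
        return mu
    if l == k + 1 and l <= C:
        return lam1
    if l == k + 2 and l <= C:
        return lam2
    if l == k:  # diagonal entry: minus the total outflow rate
        return -sum(A(k, j) for j in range(C + 1) if j != k)
    return 0.0

def one_step_G():
    """Backward recursion (12), truncated at level C."""
    G = {C: A(C, C - 1) / (alpha - A(C, C))}
    for k in range(C - 1, 0, -1):
        denom = alpha - A(k, k)
        for i in range(k + 1, C + 1):
            prod = 1.0
            for j in range(i, k, -1):
                prod *= G[j]
            denom -= A(k, i) * prod
        G[k] = A(k, k - 1) / denom
    return G

G = one_step_G()

def G_km(k, m):  # equation (11), with G_{m,m} = 1
    prod = 1.0
    for j in range(k, m, -1):
        prod *= G[j]
    return prod

def N(m):  # equation (16)
    return 1.0 / (alpha - sum(A(m, k) * G_km(k, m) for k in range(m, C + 1)))

def R(l, m):  # equation (10)
    return sum(A(l, k) * G_km(k, m) for k in range(m, C + 1)) * N(m)

# The right-hand side of (20) should reproduce N(m0).
for m0 in (1, 2):
    lhs = 1.0 / (alpha - A(m0, m0) - R(m0, m0 + 1) * A(m0 + 1, m0))
    assert abs(lhs - N(m0)) < 1e-10
```

The agreement reflects the identity $\mathbf{G}_{m_0+1,m_0}(\alpha) = \mathbf{N}_{m_0+1}(\alpha)\mathbf{A}_{m_0+1,m_0}$, which holds because the process, being skip-free to the left, can only enter $L_{m_0}$ through a one-step downward transition from level $m_0 + 1$.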

In order to further illustrate the applicability of our results, we consider a slight generalization of the example considered at the end of Section 2. Suppose now that our queueing system has a finite capacity of $C = 25$ customers, and that customers arrive at the queueing system in the following manner: single customers arrive in accordance with a Poisson process with rate 100, batches containing two customers arrive in accordance with a Poisson process with rate 10, and batches of three customers arrive in accordance with a Poisson process with rate 1. We further assume that if a batch of customers arrives at the system, but not all customers from the batch can enter the system together because of capacity constraints, the entire arriving batch leaves the system.
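To make the blocking rule concrete, the following single-phase sketch builds the transition rates for this arrival pattern. The service mechanism is not specified in this section (the example carries a phase process $J$ inherited from Section 2), so the single service rate `mu = 120.0` below is purely a placeholder assumption.

```python
# Capacity C = 25; single arrivals at rate 100, pairs at rate 10, triples at
# rate 1; a batch that cannot fit entirely is lost.  The service rate mu is
# a hypothetical placeholder, since the true service mechanism has phases.
C, mu = 25, 120.0
rates = {1: 100.0, 2: 10.0, 3: 1.0}  # batch size -> Poisson rate

def block(k, l):
    """Rate from k customers in system to l customers in system."""
    if l == k - 1 and k >= 1:
        return mu
    if l - k in rates and l <= C:  # a blocked batch contributes no rate
        return rates[l - k]
    if l == k:  # diagonal: minus the total outflow rate
        return -sum(block(k, j) for j in range(C + 1) if j != k)
    return 0.0

# Each generator row must sum to zero.
assert all(abs(sum(block(k, l) for l in range(C + 1))) < 1e-9
           for k in range(C + 1))
```

Note how the blocking rule shows up in the diagonal: from state 24, only single arrivals fit, so the outflow rate is `mu + 100` rather than `mu + 111`.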

Table 3 gives the probability that the number of customers falls below 2 in the interval [0, 1], as a function of the initial number of customers and the initial phase. Again, these numbers were generated using our results, combined with the transform inversion algorithm from [1].
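The inversion step can be sketched with the Euler-summation algorithm of Abate and Whitt; the tuning constants below (the damping constant 18.4 and the truncation parameters) are standard textbook choices, not necessarily those used to produce Table 3.

```python
import cmath
from math import comb, exp

def euler_inversion(fhat, t, M=11, N=15, A=18.4):
    # Abate-Whitt Euler algorithm: a damped trapezoidal discretization of
    # the Bromwich integral, accelerated by binomial (Euler) averaging of
    # the alternating partial sums s_N, ..., s_{N+M}.
    def s(n):
        total = 0.5 * fhat(A / (2 * t)).real
        for k in range(1, n + 1):
            total += (-1) ** k * fhat((A + 2j * cmath.pi * k) / (2 * t)).real
        return (exp(A / 2) / t) * total
    return sum(comb(M, k) * s(N + k) for k in range(M + 1)) / 2 ** M

# Example: invert fhat(s) = 1/(s + 1), whose inverse transform is exp(-t).
val = euler_inversion(lambda s: 1 / (s + 1), 1.0)
```

With `A = 18.4` the discretization error is of order $e^{-A} \approx 10^{-8}$ for smooth functions, so the recovered value of $e^{-1}$ is accurate to several digits.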

Acknowledgement

We wish to thank two anonymous referees for providing many useful comments on an earlier version of this manuscript.

Funding information

There are no funding bodies to thank in relation to the creation of this article.

Competing interests

There were no competing interests to declare which arose during the preparation or publication process for this article.

References

Abate, J. and Whitt, W. (1995). Numerical inversion of Laplace transforms of probability distributions. ORSA J. Computing 7, 36–43.
Brémaud, P. (1999). Markov Chains: Gibbs Fields, Monte Carlo Simulation and Queues. Springer, New York.
Bright, L. W. and Taylor, P. G. (1995). Calculating the equilibrium distribution in level dependent quasi-birth-and-death processes. Commun. Statist. Stoch. Models 11, 497–525.
Buckingham, P. and Fralix, B. (2015). Some new insights into Kolmogorov’s criterion, with applications to hysteretic queues. Markov Process. Relat. Fields 21, 339–368.
Den Iseger, P. (2007). Numerical transform inversion using Gaussian quadrature. Prob. Eng. Inf. Sci. 20, 1–44.
Ellens, W. et al. (2015). Performance evaluation using periodic system-state measurements. Performance Evaluation 93, 27–46.
Fralix, B. (2015). When are two Markov chains similar? Statist. Prob. Lett. 107, 199–203.
Fralix, B., Hasankhani, F. and Khademi, A. (2020). The role of the random-product technique in the theory of Markov chains on a countable state space. Submitted. Available at http://bfralix.people.clemson.edu/preprints.htm.
Fralix, B., Van Leeuwaarden, J. S. H. and Boxma, O. J. (2013). Factorization identities for a general class of reflected processes. J. Appl. Prob. 50, 632–653.
Horváth, I., Horváth, G., Almousa, S. A. and Telek, M. (2020). Numerical inverse Laplace transformation by concentrated matrix exponential distributions. Performance Evaluation 137, 102067.
Joyner, J. and Fralix, B. (2016). A new look at block-structured Markov processes. Unpublished manuscript. Available at http://bfralix.people.clemson.edu/preprints.htm.
Joyner, J. and Fralix, B. (2016). A new look at Markov processes of G/M/1-type. Stoch. Models 32, 253–274.
Kyprianou, A. (2006). Introduction to the Fluctuation Theory of Lévy Processes. Springer, New York.
Latouche, G. and Ramaswami, V. (1999). Introduction to Matrix-Analytic Methods in Stochastic Modeling. ASA-SIAM Publications, Philadelphia.
Mandjes, M. and Taylor, P. (2016). The running maximum of a level-dependent quasi-birth-death process. Prob. Eng. Inf. Sci. 30, 212–223.
Naoumov, V. (1996). Matrix-multiplicative approach to quasi-birth-and-death processes analysis. In Matrix-Analytical Methods in Stochastic Models, eds S. R. Chakravarthy and A. S. Alfa, Marcel Dekker, New York, pp. 87–106.