
The asymptotic tails of limit distributions of continuous-time Markov chains

Published online by Cambridge University Press:  06 October 2023

Chuang Xu*
Affiliation:
University of Hawai’i at Mānoa
Mads Christian Hansen*
Affiliation:
University of Copenhagen
Carsten Wiuf*
Affiliation:
University of Copenhagen
*
*Postal address: Department of Mathematics, University of Hawai’i at Mānoa, Honolulu, HI 96822, USA. Email address: [email protected]
**Postal address: Department of Mathematical Sciences, University of Copenhagen, Copenhagen, 2100, Denmark.

Abstract

This paper investigates tail asymptotics of stationary distributions and quasi-stationary distributions (QSDs) of continuous-time Markov chains on subsets of the non-negative integers. Based on the so-called flux-balance equation, we establish identities for stationary measures and QSDs, which we use to derive tail asymptotics. In particular, for continuous-time Markov chains with asymptotic power law transition rates, tail asymptotics for stationary distributions and QSDs are classified into three types using three easily computable parameters: (i) super-exponential distributions, (ii) exponential-tailed distributions, and (iii) sub-exponential distributions. Our approach to establish tail asymptotics of stationary distributions is different from the classical semimartingale approach, and we do not impose ergodicity or moment bound conditions. In particular, the results also hold for explosive Markov chains, for which multiple stationary distributions may exist. Furthermore, our results on tail asymptotics of QSDs seem new. We apply our results to biochemical reaction networks, a general single-cell stochastic gene expression model, an extended class of branching processes, and stochastic population processes with bursty reproduction, none of which are birth–death processes. Our approach, together with the identities, easily extends to discrete-time Markov chains.

Type
Original Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Stochastic biological models based on continuous-time Markov chains (CTMCs) are commonly used to model complex cellular behaviours, gene expression, and the evolution of DNA [Reference Ewens19]. Noise is an inherent extrinsic and intrinsic property of such biological systems and cannot be ignored without compromising the conclusions and the accuracy of the models.

In many cases it is reasonable to expect well-behaved biological systems, and thus also well-behaved stochastic models. Therefore, in these cases, it is natural to assume the modelling CTMC is ergodic, that is, that there exists a unique stationary distribution which describes the system in the long run. In other cases, for example for population processes without immigration, the population eventually goes extinct almost surely, and thus the ergodic stationary distribution is trivially the Dirac delta measure at zero. In these cases, it makes sense to study the quasi-stationary distribution (QSD), that is, the long-time behaviour of the process before extinction (usually called the Q-process) [Reference Méléard and Villemonais34]. Jointly, stationary distributions and QSDs are referred to as limit distributions in the present paper.

The stationary distribution (provided it exists) is generally difficult to state in explicit form, except in a few cases. If the underlying stochastic process has a detailed balanced structure, then the stationary distribution takes a product form; see [Reference Anderson, Craciun and Kurtz2, Reference Cappelletti and Wiuf9, Reference Gelenbe and Mitrani23, Reference Kelly30, Reference Mairesse and Nguyen32, Reference Whittle46] for various network models, [Reference Méléard and Villemonais34] for birth–death processes (BDPs), and [Reference Economou18, Reference Gelenbe22, Reference Hoessly and Mazza28, Reference Neuts38, Reference Ramaswami42] for generalizations of such processes. QSDs with explicit expressions appear in even rarer cases [Reference Van Doorn45].

While an explicit expression might not be known in general, less suffices in many cases. For example, if an expression for the tail distribution is known, then the existence and relative sizes of moments could be assessed from the decay rate of the tail distribution. Additionally, relative recurrence times could be assessed for stationary distributions of CTMCs.

With this in mind, we establish results for the tail behaviour of stationary distributions and QSDs, provided such exist. In particular, we concentrate on CTMCs on the non-negative integers $\mathbb{N}_0$ with asymptotic power law transition rate functions (to be made precise). Our approach is based on generic identities derived from the flux-balance equation [Reference Kelly30] for limit distributions and stationary measures (Theorems 1 and 2). The identity for stationary distributions/measures might be seen as a difference equation, which has order one less than the difference equation obtained directly from the master equation. More interestingly, for any given state x, the left-hand side of the identity consists of terms evaluated in states $\ge x$ only, while the right-hand side of the identity consists of terms evaluated in states $<x$ only, and all terms have non-negative coefficients. For BDPs, the identities coincide with the classical recursive expressions for stationary distributions and QSDs.

Furthermore, the identities allow us to study the tail behaviour of limit distributions, provided they exist, and to characterize their forms (Theorems 3 and 4). Specifically, in Section 5, for CTMCs with transition rate functions that are asymptotically power laws, we show that there are only three regimes: the decay follows either (i) a Conway–Maxwell–Poisson distribution (light-tailed), (ii) a geometric distribution, or (iii) a sub-exponential distribution. Similar trichotomy results appear in the literature in other contexts [Reference Asmussen5, Reference Maulik and Zwart33], but, to our knowledge, only for stationary distributions. Our approach is based on repeated use of the identities we establish, combined with certain combinatorial identities (e.g., Lemma 1).

Importantly, we successfully obtain QSD tail asymptotics. To the best of our knowledge, no similar classification results on QSD tail asymptotics have been established, despite the fact that the Lyapunov function approach has been used frequently to establish ergodicity of QSDs [Reference Champagnat and Villemonais12]. A superficial reason may be the inherent difference between stationary distributions and QSDs: stationary distributions are equilibria of the master equation while QSDs are not. A deeper explanation may be that the drift of CTMCs (or discrete-time Markov chains (DTMCs)) may not directly relate to the decay rate of QSDs. For an absorbing CTMC, known conditions for exponential ergodicity of stationary distributions are not even sufficient to establish uniqueness of QSDs [Reference Van Doorn45].

This difference is also reflected in our results, where an extra condition is required to establish QSD tail asymptotics (Theorems 3 and 4). Our novel approach successfully addresses tail asymptotics of both stationary distributions and QSDs at the same time, based on the similarity of the algebraic equations they satisfy (Theorems 1 and 2).

We apply our main results to biochemical reaction networks, a general single-cell stochastic gene expression model, an extended class of branching processes, and stochastic population processes with bursty reproduction, none of which are BDPs.

The Lyapunov function approach is widely taken as a standard method to prove ergodicity of Markov processes [Reference Meyn and Tweedie36]. Additionally, the approach has been used to obtain tail asymptotics of stationary distributions, assuming ergodicity [Reference Aspandiiarov and Iasnogorodski6, Reference Bertsimas, Gamarnik and Tsitsiklis8, Reference Denisov, Korshunov and Wachtel16, Reference Denisov, Korshunov and Wachtel17, Reference Menshikov and Popov35, Reference Xu, Hansen and Wiuf47]. In contrast, our results do not require ergodicity or any other finite moment condition. For DTMCs, ergodicity follows from the existence of a stationary distribution on an irreducible set [Reference Miller37]; in contrast, this is not true for CTMCs [Reference Miller37]. In particular, for explosive Markov chains, potentially with more than one stationary distribution, the cited results fail, whereas our results also hold in this case.

The trichotomy pattern which we observe for the tail asymptotics is not surprising, as it has already been observed for BDPs [Reference Takine44], as well as for processes on continuous state spaces, such as the Lindley process [Reference Asmussen5] and the exponential functional of a Lévy process drifting to infinity [Reference Maulik and Zwart33]. The techniques applied in these papers do not seem applicable in our setting.

It is noteworthy that the identities we establish could be used to calculate the limit distribution recursively, up to an error term that depends on only a few generating terms ( $\pi(0)$ in the case of BDPs) and the truncation point of the limit distribution. The error term is given by the tail distribution, and thus the decay rate of the error term can be inferred from the present work. Approximation of limit distributions will be pursued in a subsequent paper. The main results of this paper, together with the approach, can be extended to DTMCs in a straightforward way.

2. Preliminaries

2.1. Sets and functions

Denote the sets of real numbers, positive real numbers, integers, positive integers, and non-negative integers by $\mathbb{R}$, $\mathbb{R}_+$, $\mathbb{Z}$, $\mathbb{N}$, and $\mathbb{N}_0$, respectively. For $m, n\in\mathbb{N}$, let $\mathbb{R}^{m\times n}$ denote the set of m-by-n matrices over $\mathbb{R}$. Furthermore, for any set B, let $\#B$ denote its cardinality and $\mathbb{1}_B$ the corresponding indicator function. For $b\in\mathbb{R}$, $A\subseteq\mathbb{R}$, let $bA=\{ba\,:\, a\in A\}$ and $A+b=\{a+b\,:\, a\in A\}$. Given $A\subseteq\mathbb{R}$, let $\inf A$ and $\sup A$ denote the infimum and supremum of the set, respectively. By convention, $\sup A=-\infty$ and $\inf A=+\infty$ if $A=\varnothing$; $\sup A=+\infty$ if A is unbounded from above; and $\inf A=-\infty$ if A is unbounded from below. For $x, y\in\mathbb{N}_0$ with $x\ge y$, let $x^{\underline{y}}=\frac{x!}{(x-y)!}$.

Let f and g be non-negative functions on an unbounded set $A\subseteq\mathbb{N}_0$ . We write $f(x)\lesssim g(x)$ if there exist $C,N>0$ such that

\begin{align*}f(x)\le C g(x) \quad \text{for all}\quad x\in A,\ x\ge N;\end{align*}

that is, $f(x)=\mathrm{O}(g(x))$ since f is non-negative. (Here O refers to the standard big-O notation.) The function f is said to be asymptotic power law (APL) if there exists $r_1\in\mathbb{R}$ such that $\lim_{x\to\infty}\frac{f(x)}{x^{r_1}}=a$ exists and is finite. Hence $r_1=\lim_{x\to\infty}\frac{\log f(x)}{\log x}$ . An APL function f is called hierarchical (APLH) on A with $(r_1,r_2,r_3)$ if there further exist $r_2, r_3$ with $r_2+1\ge r_1>r_2>r_3\ge r_1-2$ , and $a>0$ , $b\in\mathbb{R}$ , such that for all large $x\in A$ ,

(1) \begin{equation}f(x)=ax^{r_1}+bx^{r_2}+\mathrm{O}(x^{r_3}).\end{equation}

The requirement $r_2+1\ge r_1$ and $r_3\ge r_1-2$ comes from the analysis in Sections 6–7, where asymptotic Taylor expansions of functions involve the powers of the first few leading terms. Here $r_1$, $r_2$, and $r_3$ are called the first, second, and third power of f, respectively. All rational functions, polynomials, and real analytic APL functions are APLH. Not all APL functions are APLH; e.g., $f(x)=(1+(\log(x+1))^{-1})x$ on $\mathbb{N}$ is not APLH.

Given a function f which is APLH, the first power in the expansion is uniquely determined, while the other two powers are not: there may be a family of pairs $(r_2,r_3)$ for which the asymptotic expansion (1) holds. Let $r_2^*$ and $r_3^*$ be the infima over all $(r_2,r_3)\in\mathbb{R}^2$ such that f is APLH on A with $(r_1,r_2,r_3)$. By convention, we always choose the minimal powers $\big(r_1,r_2^*,r_3^*\big)$ whenever f is APLH with $\big(r_1,r_2^*,r_3^*\big)$. As an example, $f(x)=x^2+3x+4$ is APLH on $\mathbb{N}_0$ with $(2,r_2,r_3)$ for any $2>r_2>r_3\ge 1$ (in which case $b=0$) or $1=r_2>r_3\ge 0$ ($b=3$). In this case, f is APLH on $\mathbb{N}_0$ with minimal powers $\big(r_1,r_2^*,r_3^*\big)=(2,1,0)$. In contrast, take $f(x)=x+x^{1/3}\log x$. Then f is APLH on $\mathbb{N}_0$ with $(1,r_2,r_3)$ for any $r_2>r_3>1/3$ ($b=0$). In this case, $r_1=1$ and $r_2^*=r_3^*=1/3$, but f is not APLH with $(1,1/3,1/3)$. For any real analytic APLH function f on $\mathbb{N}_0$, f is APLH on $\mathbb{N}_0$ with $\big(r_1,r_2^*,r_3^*\big)$, where $r_1=\lim_{x\to\infty}\frac{\log f(x)}{\log x}$, $r_2^*=r_1-1$, and $r_3^*=r_1-2$.
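The minimal powers can be checked numerically. The following sketch (not from the paper; the evaluation point is an arbitrary assumption) recovers the coefficients $a$ and $b$ and the bounded remainder for $f(x)=x^2+3x+4$, which is APLH with minimal powers $(2,1,0)$:

```python
# Numeric sketch: the APLH expansion f(x) = a*x^2 + b*x + O(1)
# for f(x) = x^2 + 3x + 4, with minimal powers (r1, r2*, r3*) = (2, 1, 0).
def f(x):
    return x**2 + 3*x + 4

x = 10.0**6                 # a large (arbitrary) evaluation point
a = f(x) / x**2             # leading coefficient: tends to a = 1
b = (f(x) - x**2) / x       # second coefficient: tends to b = 3
rem = f(x) - x**2 - 3*x     # remainder: O(x^0), here exactly 4
```

For this polynomial the remainder is exactly constant, which is why the third power $r_3^*=0$ is attained.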

2.2. Markov chains

Let $(Y_t\,:\, t\ge 0)$ (or $Y_t$ for short) be a CTMC with state space $\mathcal{Y}\subseteq\mathbb{N}_0$ and transition rate matrix $Q=(q(x,y))_{x,y\in\mathcal{Y}}$ ; in particular, each entry is finite. Recall that a set $A\subseteq\mathcal{Y}$ is closed if $q(x,y)=0$ for all $x\in A$ and $y\in\mathcal{Y}\setminus A$ [Reference Norris39]. Assume $\partial\subsetneq\mathcal{Y}$ is a (possibly empty) finite closed absorbing set, which is to the left of $\partial^{\textsf{c}}=\mathcal{Y}\setminus\partial$ . Here, the relative position of $\partial$ to $\partial^{\textsf{c}}$ ensures that the only way for the Markov chain to end in an absorbing state is by jumping from a transient state backward to an absorbing state (this property is used in Proposition 1 below). Furthermore, define the set of jump vectors

\begin{align*}\Omega=\{y-x\,:\, q(x,y)>0,\ \text{for some}\ x,y\in\mathcal{Y}\},\end{align*}

and let $\Omega_{\pm}=\{\omega\in\Omega\,:\, {\text{sgn}}(\omega)=\pm1\}$ be the sets of forward and backward jump vectors, respectively.

For any probability distribution $\mu$ on $\partial^{\textsf{c}}$ , define

\begin{align*}\mathbb{P}_{\mu}(\cdot)=\int_{\mathcal{Y}}\mathbb{P}_x(\cdot){\text{d}}\mu(x),\end{align*}

where $\mathbb{P}_x$ denotes the probability measure of $Y_t$ with initial condition $Y_0=x\in\mathcal{Y}$ . Any positive measure $\mu$ on a set $A\subseteq\mathbb{N}_0$ can be extended naturally to a positive measure on $\mathbb{N}_0$ with no mass outside A, $\mu(\mathbb{N}_0\setminus\! A)=0$ .

A (probability) measure $\pi$ on $\mathcal{Y}$ is a stationary measure (distribution) of $Y_t$ if it is a non-negative equilibrium of the so-called master equation [Reference Gillespie24]:

(2) \begin{equation}0=\sum_{\omega\in\Omega}q(x-\omega,x)\pi(x-\omega)-\sum_{\omega\in\Omega}q(x,x+\omega)\pi(x),\quad x\in\mathcal{Y}.\end{equation}

(Here and elsewhere, functions defined on $\mathcal{Y}$ are put to zero when evaluated at $x\not\in\mathcal{Y}\subseteq\mathbb{Z}$ .) Any stationary distribution $\pi$ of $Y_t$ satisfies

\begin{align*}\mathbb{P}_{\pi}(Y_t\in A)=\pi(A),\quad A\in2^{\mathcal{Y}},\end{align*}

for all $t\ge0$ , where $2^{\mathcal{Y}}$ is the power set of $\mathcal{Y}$ [Reference Champagnat and Villemonais12].

Let $\tau_{\partial}=\inf\{t>0\,:\, Y_t\in\partial\}$ be the entrance time of $Y_t$ into the absorbing set $\partial$ . We say $Y_t$ admits certain absorption if $\tau_{\partial}<\infty$ almost surely for all $Y_0\in\partial^{\textsf{c}}$ . Moreover, the process associated with $Y_t$ conditioned never to be absorbed is called a Q-process [Reference Champagnat and Villemonais11]. A probability measure $\nu$ on $\partial^{\textsf{c}}$ is a quasi-stationary distribution (QSD) of $Y_t$ if for all $t\ge0$ ,

\[\mathbb{P}_{\nu}(Y_t\in A|\tau_{\partial}>t)=\nu(A),\quad A \in 2^{\partial^{\textsf{c}}}.\]

3. Identities for limit distributions

Let $\omega_*=\text{gcd}(\Omega)$ be the (unique) positive greatest common divisor of $\Omega$ . Define the scaled largest positive and negative jump, respectively, as

\begin{align*}\omega_+\,:\!=\,\sup_{\omega\in \Omega_+}\omega\omega_*^{-1},\quad \omega_-\,:\!=\,\inf_{\omega\in \Omega_-}\omega\omega_*^{-1}.\end{align*}

Furthermore, for $j\in\{\omega_-,\ldots,\omega_++1\}$ , define

(3) \begin{align}A_j &=\begin{cases}\{\omega\in\Omega_-\,:\, j\omega_* >\omega\},\quad \text{if}\ j\in\{\omega_-,\ldots,0\},\\ \{\omega\in\Omega_+\,:\, j\omega_*\le \omega\},\quad \text{if}\ j\in\{1,\ldots,\omega_++1\}.\end{cases}\end{align}

Hence, $\varnothing=A_{\omega_-}\subseteq A_j\subseteq A_{j+1}\subseteq A_0=\Omega_-$ for $\omega_-< j<0$ , and $\varnothing=A_{\omega_++1}\subseteq A_{j+1}\subseteq A_j\subseteq A_1=\Omega_+$ for $1< j< \omega_+$ .
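For a concrete jump set these quantities are easy to compute. A minimal sketch (not from the paper) for $\Omega=\{1,-2\}$, the jump set of Example 1 below; the helper `A` implementing (3) is a hypothetical name:

```python
from math import gcd
from functools import reduce

Omega = [1, -2]                                      # jump set of Example 1
Omega_plus = [w for w in Omega if w > 0]
Omega_minus = [w for w in Omega if w < 0]

w_star = reduce(gcd, (abs(w) for w in Omega))        # omega_* = gcd(Omega)
w_plus = max(w // w_star for w in Omega_plus)        # scaled largest forward jump
w_minus = min(w // w_star for w in Omega_minus)      # scaled largest backward jump

def A(j):
    """The sets A_j of (3), for j in {w_minus, ..., w_plus + 1}."""
    if w_minus <= j <= 0:
        return {w for w in Omega_minus if j * w_star > w}
    if 1 <= j <= w_plus + 1:
        return {w for w in Omega_plus if j * w_star <= w}
    raise ValueError("j out of range")
```

For this jump set one finds $A_{-2}=\varnothing$, $A_{-1}=A_0=\{-2\}$, $A_1=\{1\}$, and $A_2=\varnothing$, consistent with the nesting stated above.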

The following classical result provides a necessary condition for QSDs.

Proposition 1. ([Reference Collet, Martínez and San Martín15].) Assume $\partial\neq\varnothing$ . Let $\nu$ be a QSD of $Y_t$ on $\partial^{\textsf{c}}$ . Then for $x\in\mathbb{N}_0\!\setminus\!\partial$ ,

\[ \theta_{\nu}\nu(x)+\sum_{\omega\in\Omega} q(x-\omega,x)\nu(x-\omega)-\sum_{\omega\in\Omega}q(x,x+\omega)\nu(x) = 0,\]

where

\begin{align*}\theta_{\nu}=\sum_{\omega\in\Omega_-}\sum_{y\in\partial^{\textsf{c}}\cap\left(\partial-\omega\right)}\nu(y)q(y,y+\omega)\end{align*}

is finite.

Proof. For $\partial=\{0\}$ and $x\in\partial^{\textsf{c}}$, the identity follows from [Reference Collet, Martínez and San Martín15]. The same argument applies if $\partial\not=\{0\}$. For $x\in\mathbb{N}_0\setminus\mathcal{Y}$, the identity holds trivially (with both sides being zero), which can be argued similarly to the proof of Theorem 1.

Before stating a generic identity for stationary distributions which originates from the master equation of the CTMC, we provide an example.

Example 1. Consider a CTMC on $\mathbb{N}_0$ with $\Omega=\{1,-2\}$ and transition rate functions

\[q(x,x+1)=\kappa_1 x+\kappa_2 x(x-1),\quad q(x,x-2)=\kappa_3 x(x-1)(x-2),\]

where $\kappa_i$ ( $i=1,2,3$ ) are positive constants. Such rate functions can be associated with a weakly reversible stochastic reaction network (see Example 10 in Section 5). This CTMC is ergodic on $\mathbb{N}$ and there exists a stationary distribution $\pi$ , which solves the master equation

(4) \begin{equation}b(x-1)\pi(x-1)+a(x+2)\pi(x+2)=(a(x)+b(x))\pi(x),\quad x\in\mathbb{N}_0,\end{equation}

where $a(x)=\kappa_3x^{\underline{3}}$ and $b(x)=\kappa_1x+\kappa_2x^{\underline{2}}$. One can show by induction that the terms in (4) can be separated so that those with $\pi(y)$ for $y\ge x$ and those with $\pi(y)$ for $y<x$ are on different sides of the equality, in such a way that all coefficients are positive. Naive rearrangement of the terms does not suffice. We find

(5) \begin{equation}a(x)\pi(x)+a(x+1)\pi(x+1)=b(x-1)\pi(x-1),\quad x\in\mathbb{N}_0.\end{equation}

In addition, to our surprise, we observe that (4) is a third-order difference equation, while (5) is a second-order difference equation. Hence, the identity (5), though equivalent to (4), helps simplify the relationship among all terms of the stationary distribution.
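The identity (5) can be checked numerically. The sketch below (not from the paper; the rate constants $\kappa_1=\kappa_2=\kappa_3=1$ and the truncation point are assumptions) computes the stationary distribution of the chain truncated and reflected at $N$ using NumPy, and verifies that (5) holds at every cut; for the truncated chain the flux-balance argument makes (5) exact, not merely approximate:

```python
import numpy as np

# assumed rate constants and truncation point, for illustration only
k1 = k2 = k3 = 1.0
N = 20                                        # truncate the chain to {1, ..., N}

def b(x):                                     # birth rate, jump +1
    return k1 * x + k2 * x * (x - 1)

def a(x):                                     # death rate, jump -2
    return k3 * x * (x - 1) * (x - 2)

# generator of the truncated (reflecting) chain on {1, ..., N}
Q = np.zeros((N, N))
for i, x in enumerate(range(1, N + 1)):
    if x + 1 <= N:
        Q[i, i + 1] = b(x)
    if x - 2 >= 1:
        Q[i, i - 2] = a(x)
    Q[i, i] = -Q[i].sum()

# stationary distribution: solve pi Q = 0 together with sum(pi) = 1
A_mat = np.vstack([Q.T, np.ones(N)])
rhs = np.zeros(N + 1)
rhs[-1] = 1.0
pi, *_ = np.linalg.lstsq(A_mat, rhs, rcond=None)

p = lambda x: pi[x - 1] if 1 <= x <= N else 0.0
# identity (5): a(x) pi(x) + a(x+1) pi(x+1) = b(x-1) pi(x-1)
err = max(abs(a(x) * p(x) + a(x + 1) * p(x + 1) - b(x - 1) * p(x - 1))
          for x in range(2, N + 1))
```

Since the distribution here decays super-exponentially, the truncation at a modest $N$ already carries essentially all the mass.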

The following generic identities generalize the observations in Example 1, and provide an equivalent definition of a stationary distribution, perhaps in a more handy form. The identities are the so-called flux-balance equation [Reference Kelly30, Lemma 1.4] specialized to one dimension. We emphasize that stationary measures are not necessarily finite, while stationary distributions and QSDs are probability distributions.

Theorem 1. The following statements are equivalent:

  1. (1) $\pi$ is a stationary measure (stationary distribution) of $Y_t$ on $\mathcal{Y}$ ;

  2. (2) $\pi$ is a positive measure (probability distribution) on $\mathcal{Y}$ satisfying, for $x\in\mathbb{N}_0$ ,

    \begin{multline*}\sum_{\omega\in\Omega_-}\sum_{j=\omega\omega^{-1}_*+1}^{0}q(x-j\omega_*, x-j\omega_*+\omega)\pi(x-j\omega_*)\\=\sum_{\omega\in\Omega_+}\!\sum_{j=1}^{\omega\omega^{-1}_*}q(x-j\omega_*,x-j\omega_*+\omega)\pi(x-j\omega_*)<\infty;\end{multline*}
  3. (3) $\pi$ is a positive measure (probability distribution) on $\mathcal{Y}$ satisfying, for $x\in\mathbb{N}_0$ ,

    \begin{multline*}\sum_{j=\omega_-+1}^{0}\pi\left(x-j\omega_*\right)\sum_{\omega\in A_j}q(x-j\omega_*,x-j\omega_*+\omega)\\= \sum_{j=1}^{\omega_+}\pi\left(x-j\omega_*\right)\sum_{\omega\in A_j}q(x-j\omega_*,x-j\omega_*+\omega)<\infty.\end{multline*}

Proof. We only prove the equivalent representations for stationary measures; the equivalent representations for stationary distributions then follow. We assume without loss of generality that $\omega_*=1$ .

Recall from [Reference Kelly30, Lemma 1.4] that $\pi$ is a stationary measure satisfying (2) if and only if for any $A\subseteq\mathcal{Y}$ ,

(6) \begin{equation}\sum_{y\in A}\sum_{z\in A^c}\pi(y)q(y,z)=\sum_{y\in A}\sum_{z\in A^c}\pi(z)q(z,y)<\infty,\end{equation}

where $A^c=\mathcal{Y}\setminus A$ . (In [Reference Kelly30], the identity (6) is proved for finite $\mathcal{Y}$ . It can be shown by induction that it also holds for countable $\mathcal{Y}$ .) Let $\pi(x)=0$ for $x\notin \mathcal{Y}$ and $q(x,y)=0$ for $(x,y)\notin\mathcal{Y}^2$ ; then (6) holds for $A\subseteq\mathcal{Y}$ if and only if for $A\subseteq\mathbb{Z}$ ,

(7) \begin{equation}\sum_{y\in A}\sum_{z\in \mathbb{Z}\setminus A}\pi(y)q(y,z)=\sum_{y\in A}\sum_{z\in\mathbb{Z}\setminus A}\pi(z)q(z,y)<\infty.\end{equation}

In particular, for any $x\in \mathbb{N}_0$ , let $A=\{y\in\mathbb{Z}\,:\, y\ge x\}$ . Then (7) implies that

(8) \begin{equation}\sum_{y\ge x}\sum_{z\le x-1}\pi(y)q(y,z)=\sum_{y\ge x}\sum_{z\le x-1}\pi(z)q(z,y)<\infty.\end{equation}

Conversely, (8) implies (2) (i.e., $\pi$ is a stationary measure), which is shown by subtracting from both sides of (8) the same equation but with x replaced by $x+1$. This is possible since both sides are finite. We now compare (8) with Theorem 1(3). Recall that $A_{\omega_-} = A_{\omega_++1} = \varnothing$. In the latter equation, let $y=x-j$ and $z=x-j+\omega$; then Theorem 1(3) follows from (8). Furthermore, let $j=x-y$ and $\omega=z-y$; then (8) is obtained from Theorem 1(3). Observe that Theorem 1(2) and Theorem 1(3) are equivalent by Fubini’s theorem.

A special form of Theorem 1 under more assumptions has been stated in the context of stochastic reaction networks [Reference Hansen26, Proposition 5.4.9].

For any positive measure $\mu$ on $\mathbb{N}_0$ , let

\begin{align*}T_{\mu}\,:\, \mathbb{N}_0\to[0,1],\quad x\mapsto\sum_{y=x}^{\infty}\mu(y),\end{align*}

be the tail distribution (or simply the tail) of $\mu$ .

The following identities for QSDs also present equivalent definitions of the latter.

Theorem 2. Assume $\partial\neq\varnothing$ . Then the following statements are equivalent:

  1. (1) $\nu$ is a QSD of $Y_t$ on $\partial^{\textsf{c}}$ ;

  2. (2) $\nu$ is a probability measure on $\partial^{\textsf{c}}$ , and for $x\in\mathbb{N}_0\setminus\partial$ ,

    \begin{align*}&\sum_{\omega\in\Omega_-}\sum_{j=\omega\omega_*^{-1}+1}^{0}q(x-j\omega_*,x-j\omega_*+\omega)\nu(x-j\omega_*)=\theta_{\nu}T_{\nu}(x)\\&+\sum_{\omega\in\Omega_+}\!\sum_{j=1}^{\omega\omega_*^{-1}}q(x-j\omega_*,x-j\omega_*+\omega)\nu(x-j\omega_*)<\infty;\end{align*}
  3. (3) $\nu$ is a probability measure on $\partial^{\textsf{c}}$ , and for $x\in\mathbb{N}_0\setminus\partial$ ,

    \begin{align*}&\sum_{j=\omega_-+1}^{0}\nu\left(x-j\omega_*\right)\sum_{\omega\in A_j}q(x-j\omega_*,x-j\omega_*+\omega)=\theta_{\nu}T_{\nu}(x)\\&+\sum_{j=1}^{\omega_+}\nu\left(x-j\omega_*\right)\sum_{\omega\in A_j}q(x-j\omega_*,x-j\omega_*+\omega)<\infty.\end{align*}

Proof. The proof is similar to that of Theorem 1 and is thus omitted.

If a CTMC jumps unidirectionally (e.g., it is a pure birth or a pure death process), then all stationary measures, if such exist, are concentrated on absorbing states [Reference Xu, Hansen and Wiuf48]. In contrast, an absorbed pure birth process has no QSDs. However, an absorbed pure death process may admit one or more QSDs, as illustrated below.

Example 2. Consider a pure death process $Y_t$ on $\mathbb{N}_0$ with linear death rates $d_j=dj$ for $j\in\mathbb{N}_0$ . Let $0<a\le 1$ , and define $\nu$ as follows:

\begin{align*} \nu(1)=a,\qquad \nu(x)=\begin{cases} \dfrac{a}{x!}\dfrac{\Gamma(x-a)}{\Gamma(1-a)}, & \text{if}\ 0<a<1,\\[5pt] 0, & \text{if}\ a=1,\end{cases}\qquad \text{for}\quad x>1, \end{align*}

where $\Gamma(\cdot)$ is the gamma function. Then $\nu$ is a QSD of $Y_t$ on $\mathbb{N}$ with $\partial=\{0\}$ . Hence, there is a family of QSDs fulfilling Theorem 2.
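For $0<a<1$ this family can be checked numerically against the balance of Proposition 1, which for this pure death chain (where $\Omega=\{-1\}$ and $\theta_{\nu}=d_1\nu(1)=da$) reads $\theta_{\nu}\nu(x)+d(x+1)\nu(x+1)=dx\,\nu(x)$. A sketch, with assumed parameter values $a=0.4$, $d=2$; the closed form is built from its ratio $\nu(x+1)/\nu(x)=(x-a)/(x+1)$ to avoid evaluating gamma functions directly:

```python
# assumed parameters for illustration: 0 < a < 1, death rate d_j = d*j
a, d = 0.4, 2.0
N = 200                                  # number of states to check

# nu(x) = a * Gamma(x-a) / (x! * Gamma(1-a)), built via the ratio
# nu(x+1)/nu(x) = (x-a)/(x+1)
nu = {1: a}
for x in range(1, N):
    nu[x + 1] = nu[x] * (x - a) / (x + 1)

theta = d * 1 * nu[1]                    # theta_nu = d_1 * nu(1) = d*a
# Proposition 1 for this chain: theta*nu(x) + d*(x+1)*nu(x+1) = d*x*nu(x)
max_err = max(abs(theta * nu[x] + d * (x + 1) * nu[x + 1] - d * x * nu[x])
              for x in range(1, N))
```

The heavy tail is visible here as well: $\nu(x)\sim C x^{-1-a}$, so the partial sums of $\nu$ approach 1 only slowly.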

Formulae for stationary distributions and QSDs of BDPs follow directly from Theorems 1 and 2.

Corollary 1. ([Reference Anderson4, Reference Cavender10].) (i) Let $Y_t$ be a BDP on $\mathbb{N}_0$ with birth and death rates $b_j$ and $d_j$ , respectively, such that $b_{j-1}>0$ and $d_j>0$ for all $j\in\mathbb{N}$ . If $\pi$ is a stationary distribution for $Y_t$ , then

\[\pi(j)=\pi(0)\prod_{i=0}^{j-1}\frac{b_i}{d_{i+1}},\quad j\in\mathbb{N}.\]

(ii) Let $Y_t$ be a BDP on $\mathbb{N}_0$ with birth and death rates $b_j$ and $d_j$ , respectively, such that $b_0=0$ , and $b_j>0$ and $d_j>0$ for all $j\in\mathbb{N}$ . Then a probability distribution $\nu$ on $\mathbb{N}$ is a QSD of $Y_t$ (with absorption into 0) if and only if

(9) \begin{equation}d_j\nu(j)=b_{j-1}\nu(j-1)+d_1\nu(1)\left(1-\sum_{i=1}^{j-1}\nu(i)\right),\quad j\ge2.\end{equation}

Proof. Here $\Omega=\{-1,1\}$ , $\omega_*=\omega_+=1$ , $\omega_-=-1$ , and $\mathcal{Y}=\mathbb{N}_0$ . Moreover, $q(j,j-1)=d_j$ and $q(j,j+1)=b_j$ for $j\in\mathbb{N}$ .

(i) $\partial=\varnothing$ . Since $\pi$ is a stationary distribution on $\mathbb{N}_0$ , it follows from Theorem 1 that

\[\pi(j)q(j,j-1)=\pi(j-1)q(j-1,j),\quad j\in\mathbb{N}.\]

Hence the conclusion is obtained by induction.

(ii) $\partial=\{0\}$ and $\partial^{\textsf{c}}=\mathbb{N}$ . It follows from Theorem 2 that a probability measure $\nu$ is a QSD on $\mathbb{N}$ if and only if

\[\theta_{\nu}=q(1,0)\nu(1),\quad \nu(j)q(j,j-1)=\theta_{\nu}T_{\nu}(j)+\nu(j-1)q(j-1,j),\quad j\in\mathbb{N}\setminus\!\{1\},\]

that is, (9) holds.
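The product formula of Corollary 1(i) is easy to instantiate. A sketch for a concrete BDP not taken from the paper, an M/M/$\infty$-type chain with assumed rates $b_j=\lambda$ and $d_j=\mu j$, for which the products telescope to $\rho^j/j!$ with $\rho=\lambda/\mu$, so that normalization yields the Poisson($\rho$) distribution:

```python
import math

# assumed rates for illustration: b_j = lam (constant), d_j = mu*j
lam, mu = 3.0, 2.0
rho = lam / mu

def pi_unnorm(j):
    # Corollary 1(i): pi(j) = pi(0) * prod_{i=0}^{j-1} b_i / d_{i+1}, with pi(0) = 1
    p = 1.0
    for i in range(j):
        p *= lam / (mu * (i + 1))
    return p

# the products equal rho^j / j!, so the normalizing constant tends to exp(rho)
N = 50
Z = sum(pi_unnorm(j) for j in range(N))
pi = [pi_unnorm(j) / Z for j in range(N)]
```

This recovers the familiar fact that the M/M/$\infty$ queue has a Poisson stationary distribution.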

Regarding the tail distributions, we have the following identities.

Corollary 2. Assume $\Omega$ is finite and $\partial=\varnothing$ . Let $\pi$ be a stationary distribution of $Y_t$ on $\mathcal{Y}$ . Then, for $x\in\mathbb{N}_0$ ,

\begin{align*}&T_{\pi}(x)\Biggl(\sum_{\omega\in A_0}q(x,x+\omega)+\sum_{ \omega\in A_1}q(x-\omega_*,x-\omega_*+\omega)\Biggr)+\sum_{j=\omega_-}^{-1}T_{\pi}(x-j\omega_*)\\ \nonumber&\qquad\quad \cdot\Biggl(\sum_{\omega\in A_{j}}q(x-j\omega_*,x-j\omega_*+\omega)-\sum_{\omega\in A_{j+1}}q(x-(j+1)\omega_*,x-(j+1)\omega_*+\omega)\Biggr)\\ \nonumber&= \sum_{j=1}^{\omega_+}T_{\pi}(x-j\omega_*)\Biggl(\sum_{\omega\in A_j}q(x-j\omega_*,x-j\omega_*+\omega)\\ &\hspace{6cm}-\sum_{\omega \in A_{j+1}}q(x-(j+1)\omega_*,x-(j+1)\omega_*+\omega)\Biggr),\end{align*}

where $A_j$ is defined in (3).

Proof. Assume without loss of generality that $\omega_*=1$ and $0\in\mathcal{Y}$ . The left-hand side of the equation in Theorem 1(3) is

\[\begin{split}\text{LHS}=&\sum_{j=\omega_-+1}^{0}(T_{\pi}(x-j)-T_{\pi}(x-j+1))\sum_{\omega\in A_j}q(x-j,x-j+\omega)\\=&\sum_{j=\omega_-+1}^{0}T_{\pi}(x-j)\sum_{\omega\in A_{j}}q(x-j,x-j+\omega)\\&-\sum_{j=\omega_-}^{-1}T_{\pi}(x-j)\sum_{\omega\in A_{j+1}}q(x-j-1,x-j-1+\omega)\\=&\sum_{j=\omega_-}^{-1}T_{\pi}(x-j)\Bigg(\sum_{\omega\in A_{j}}q(x-j,x-j+\omega)-\sum_{\omega\in A_{j+1}}q(x-j-1,x-j-1+\omega)\Bigg)\\&+T_{\pi}(x)\sum_{\omega\in A_0}q(x,x+\omega),\end{split}\]

while the right-hand side equals

\[\begin{split}\text{RHS}=&\sum_{j=1}^{\omega_+}(T_{\pi}(x-j)-T_{\pi}(x-j+1))\sum_{\omega\in A_j}q(x-j,x-j+\omega)\\=&\sum_{j=1}^{\omega_+}T_{\pi}(x-j)\Bigg(\sum_{\omega\in A_j}q(x-j,x-j+\omega)-\sum_{\omega\in A_{j+1}}q(x-j-1,x-j-1+\omega)\Bigg)\\&-T_{\pi}(x)\sum_{\omega \in A_1}q(x-1,x-1+\omega),\end{split}\]

which together yield the desired identity.

Corollary 3. Assume $\Omega$ is finite, $\partial\neq\varnothing$ , and let $\nu$ be a QSD of $Y_t$ on $\partial^{\textsf{c}}$ . Then for all $x\in\mathbb{N}_0\setminus\partial$ ,

(10) \begin{align}&T_{\nu}(x)\Biggl(\sum_{\omega\in A_0}q(x,x+\omega)+\sum_{ \omega\in A_1}q(x-\omega_*,x-\omega_*+\omega)\Biggr)+\sum_{j=\omega_-}^{-1}T_{\nu}(x-j\omega_*)\nonumber\\ &\qquad\qquad \cdot\Biggl(\sum_{\omega\in A_{j}}q(x-j\omega_*,x-j\omega_*+\omega)-\sum_{\omega\in A_{j+1}}q(x-(j+1)\omega_*,x-(j+1)\omega_*+\omega)\Biggr)\nonumber\\ &= \theta_{\nu}T_{\nu}(x)+\sum_{j=1}^{\omega_+}T_{\nu}(x-j\omega_*)\Biggl(\sum_{\omega\in A_j}q(x-j\omega_*,x-j\omega_*+\omega)\nonumber\\&\qquad\qquad-\sum_{\omega \in A_{j+1}}q(x-(j+1)\omega_*,x-(j+1)\omega_*+\omega)\Biggr),\end{align}

where $A_j$ is defined in (3).

Proof. Similar to that of Corollary 2.

4. Asymptotic tails of limit distributions

To establish the asymptotic tails of limit distributions, we assume the following:

( $\textbf{A1}$ ) $\#\Omega<\infty$ .

( $\textbf{A2}$ ) $\mathcal{Y}$ is unbounded, and for $\omega\in\Omega$ , $q(x,x+\omega)=a_{\omega}x^{R_{\omega}^1}+b_{\omega}x^{R_{\omega}^2}+\mathrm{O}\big(x^{R_{\omega}^3}\big)$ is APLH in x on $\mathcal{Y}$ with $\big(R_{\omega}^1,R_{\omega}^2,R_{\omega}^3\big)$ , for some constants $a_{\omega}, b_{\omega}$ , and is strictly positive for all large $x\in\mathcal{Y}$ .

( $\textbf{A3}$ ) $\partial^{\textsf{c}}$ is irreducible.

Assumption ( $\textbf{A1}$ ) guarantees that the chain has bounded jumps only, which enables us to use the identities established in the previous section to estimate the tails. Assumption ( $\textbf{A2}$ ) is common in applications, and ensures that $\partial^{\textsf{c}}$ is unbounded too, as $\partial$ is finite by assumption. In particular, ( $\textbf{A2}$ ) is satisfied provided the following assumption holds:

$(\textbf{A2})^{\prime}$ For $\omega\in\Omega$ , $q(x,x+\omega)$ is a strictly positive polynomial for all large $x\in\mathcal{Y}$ .

Assumption ( $\textbf{A3}$ ) avoids non-essential technicalities. Moreover, ( $\textbf{A3}$ ) means that either $Y_t$ is irreducible or the conditional process of $Y_t$ before entering $\partial$ is irreducible. This assumption is satisfied for many known one-dimensional infinite CTMCs modelling biological processes (e.g., for population processes). In addition, ( $\textbf{A3}$ ) implies that $\Omega_+\neq\varnothing$ and $\Omega_-\neq\varnothing$ (otherwise there are no non-singleton communicating classes).

The following parameters are well-defined and finite. Let

\begin{align*}R=\underset{{\omega\in\Omega}}{\max}R_{\omega}^1,\quad R_-=\underset{{\omega\in\Omega_-}}{\max}R_{\omega}^1,\quad R_+=\underset{{\omega\in\Omega_+}}{\max}R_{\omega}^1,\end{align*}
\begin{align*}E_-=\underset{{\omega\in\Omega_-}}{\cup}\big\{R_{\omega}^1, R_{\omega}^2, R_{\omega}^3\big\},\quad E_+=\underset{{\omega\in\Omega_+}}{\cup}\big\{R_{\omega}^1, R_{\omega}^2, R_{\omega}^3\big\},\end{align*}

and define

\begin{align*}\sigma_1=\min\big\{R_-\,-R_-^1,R_+-R_+^1\big\},\quad \sigma_2=\min\big\{R_-\,-R_-^2,R_+-R_+^2\big\},\end{align*}

where

\begin{align*}R_-^1=\max \big\{r\in E_-\,:\, r<R_-\big\},\quad R_+^1=\max \big\{r\in E_+\,:\, r<R_+\big\},\end{align*}
\begin{align*}R_-^2=\max \big\{r\in E_-\,:\, r<R_-^1\big\},\quad R_+^2=\max \big\{r\in E_+\,:\, r<R_+^1\big\}.\end{align*}

(These values are only used to define $\sigma_1$ , $\sigma_2$ .)

Hence $R^1_-\ge R_{\omega}^2\ge R_-\,-1$ , for some $\omega\in\Omega_-$ with $R_{\omega}^1=R_-$ . Similarly, $R^1_+\ge R_+-1$ , $R^2_-\ge R_-\,-2$ , and $R^2_+\ge R_+-2$ . This implies that $0<\sigma_1\le1$ and $\sigma_1<\sigma_2\le2$ . If all transition rate functions are real analytic, then by convention, $R_{\omega}^2=R_{\omega}^1-1,\ R_{\omega}^3=R_{\omega}^1-2$ , for all $\omega\in\Omega$ , and hence $\sigma_1=1$ and $\sigma_2=2$ . Furthermore, let

\begin{align*}\alpha=\lim_{x\to\infty}\frac{\sum_{\omega\in\Omega}q(x,x+\omega)\omega}{x^R},\quad \alpha_-=\lim_{x\to\infty}\frac{\sum_{\omega\in\Omega_-}q(x,x+\omega)\vert \omega\vert}{x^{R_-}},\end{align*}
\begin{align*}\alpha_+=\lim_{x\to\infty}\frac{\sum_{\omega\in\Omega_+}q(x,x+\omega)\omega}{x^{R_+}},\quad \vartheta=\frac{1}{2}\lim_{x\to\infty}\frac{\sum_{\omega\in\Omega}q(x,x+\omega)\omega^2}{x^R},\end{align*}
\begin{align*}\gamma=\lim_{x\to\infty}\frac{\sum_{\omega\in\Omega}q(x,x+\omega)\omega-\alpha x^R}{x^{R-\sigma_1}},\quad \beta = \gamma - \vartheta.\end{align*}

The parameters also admit limit-free representations:

\begin{align*}&\alpha=\sum_{\omega\in\Omega\,:\ R_{\omega}^1=R}a_{\omega}\omega,\quad \alpha_-=\sum_{\omega\in\Omega_-\,:\ R_{\omega}^1=R_-}a_{\omega}|\omega|,\quad \alpha_+=\sum_{\omega\in\Omega_+\,:\ R_{\omega}^1=R_+}a_{\omega}\omega,\\&\vartheta=\frac{1}{2}\sum_{\omega\in\Omega\,:\ R_{\omega}^1=R}a_{\omega}\omega^2,\ \gamma=\sum_{\omega\in\Omega\,:\ R_{\omega}^1=R-\sigma_1}a_{\omega}\omega+\sum_{\omega\in\Omega\,:\ R_{\omega}^2=R-\sigma_1}b_{\omega}\omega,\ \beta = \gamma - \vartheta.\end{align*}

The limit-free form emphasizes that the parameters are coefficients of monomials of certain leading degrees. Furthermore, define

\begin{align*}\Delta=\left\{\begin{array}{c@{\quad}l} -\gamma (\alpha_+\omega_*)^{-1}, & \text{if}\quad \sigma_1<1,\\[4pt] ({-}\gamma+R\vartheta) (\alpha_+\omega_*)^{-1},& \text{if}\quad \sigma_1=1,\end{array}\right. \quad \delta=\Delta(\omega_+-\omega_-\,-1)^{-1}.\end{align*}

Note that $\alpha\le0$ implies $R_-\ge R_+$ . Moreover, we have $\alpha_+, \alpha_->0$ and $0<\delta\le\Delta$ , by ( $\textbf{A3}$ ). Furthermore, $\alpha$ , $\alpha_-$ , $\alpha_+$ , $\beta$ , and $\vartheta$ do not depend on the choice of second and third powers of the transition rate functions, whereas $\sigma_1$ , $\sigma_2$ , $\gamma$ , $\Delta$ , and $\delta$ do depend on the powers.
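When the transition rates are polynomials, the limit-free representations make these parameters mechanical to compute. The following Python sketch (the dictionary encoding of the rates is a hypothetical input format; the example rates $q(x,x+1)=x^2$, $q(x,x-1)=x^2+x$ are those of Example 8 below) reads off $R_\pm$, $\alpha$, $\alpha_\pm$, and $\vartheta$ from degrees and leading coefficients:

```python
# Each jump size omega maps to the coefficients of the polynomial rate
# q(x, x + omega), highest degree first (a hypothetical input format).
# Example: q(x, x+1) = x^2 and q(x, x-1) = x^2 + x.
rates = {1: [1, 0, 0], -1: [1, 1, 0]}

deg = {w: len(c) - 1 for w, c in rates.items()}   # R_omega^1
lead = {w: c[0] for w, c in rates.items()}        # a_omega

R_minus = max(deg[w] for w in rates if w < 0)
R_plus = max(deg[w] for w in rates if w > 0)
R = max(R_minus, R_plus)

# limit-free representations of alpha, alpha_-, alpha_+ and vartheta
alpha = sum(lead[w] * w for w in rates if deg[w] == R)
alpha_minus = sum(lead[w] * -w for w in rates if w < 0 and deg[w] == R_minus)
alpha_plus = sum(lead[w] * w for w in rates if w > 0 and deg[w] == R_plus)
vartheta = sum(lead[w] * w**2 for w in rates if deg[w] == R) / 2

print(R_minus, R_plus, alpha, alpha_minus, alpha_plus, vartheta)
```

For these rates the output reflects $R_-=R_+=2$, $\alpha=0$, $\alpha_-=\alpha_+=1$, $\vartheta=1$, matching the values used in Example 8.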

To state the results on the asymptotic tails of limit distributions, we classify probability distributions into the following classes.

Let $\mathcal{P}$ be the set of probability distributions on $\mathbb{N}_0$ . For $a,b>0$ , define

\begin{align*}\mathcal{P}_{a}^{1+}&=\{\mu\in\mathcal{P}\,:\, T_{\mu}(x)\lesssim \exp({-}ax(\!\log x) (1+\mathrm{o}(1)))\},\\ \mathcal{P}_{a}^{1-}&=\{\mu\in\mathcal{P}\,:\, T_{\mu}(x)\gtrsim \exp({-}ax(\!\log x) (1+\mathrm{o}(1)))\},\\ \mathcal{P}_{a,b}^{2+}&=\big\{\mu\in\mathcal{P}\,:\, T_{\mu}(x)\lesssim \exp({-}bx^a(1+\mathrm{o}(1)))\big\},\\ \mathcal{P}_{a,b}^{2-}&=\big\{\mu\in\mathcal{P}\,:\, T_{\mu}(x)\gtrsim \exp({-}bx^a(1+\mathrm{o}(1)))\big\},\\\mathcal{P}_{a}^{3+}&=\big\{\mu\in\mathcal{P}\,:\, T_{\mu}(x)\lesssim x^{-a}\big\},\\ \mathcal{P}_{a}^{3-}&=\big\{\mu\in\mathcal{P}\,:\, T_{\mu}(x)\gtrsim x^{-a}\big\},\end{align*}

where $T_{\mu}(x)$ is the tail distribution of a probability measure $\mu$ , and $\mathrm{o}(\cdot)$ refers to the standard little-o notation. Furthermore, define

\begin{align*}\mathcal{P}_{a}^{2+}=\cup_{b>0}\mathcal{P}_{a,b}^{2+},\quad \mathcal{P}_{a}^{2-}=\cup_{b>0}\mathcal{P}_{a,b}^{2-},\quad \mathcal{P}_{<1}^{2-}=\cup_{0<a<1}\mathcal{P}_{a}^{2-},\\ \mathcal{P}^{i+}=\cup_{a>0}\mathcal{P}_{a}^{i+},\quad \mathcal{P}^{i-}=\cup_{a>0}\mathcal{P}_{a}^{i-},\quad i=1,2,3.\end{align*}

The sets $\mathcal{P}_{a}^{i+}$ , $i=1,2,3$ , are decreasing in a, while $\mathcal{P}_{a}^{i-}$ , $i=1,2,3$ , are increasing in a. Similarly, $\mathcal{P}_{a,b}^{2+}$ is decreasing in both a and b, while $\mathcal{P}_{a,b}^{2-}$ is increasing in both a and b. Moreover, it is readily verified that

\[ \mathcal{P}^{1+}\subseteq\mathcal{P}_1^{2+}\subseteq\mathcal{P}^{3+},\quad \mathcal{P}^{3-}\subseteq\mathcal{P}^{2-}_{<1}\subseteq\mathcal{P}^{2-}_1\subseteq\mathcal{P}^{1-}. \]

The probability distributions in $\mathcal{P}^{2+}_{1}\cap\mathcal{P}^{2-}_{1}$ decay as fast as exponential distributions and are therefore exponential-like. Similarly, those in $\mathcal{P}^{1+}$ are super-exponential; those in $\mathcal{P}^{2-}_{<1}$ are sub-exponential, and in particular those in $\mathcal{P}^{3+}\cap\mathcal{P}^{3-}$ are power-like [Reference Johnson, Kemp and Kotz29].

The Conway–Maxwell–Poisson (CMP) distribution on $\mathbb{N}_0$ with parameter $(a,b)\in\mathbb{R}_+^2$ has probability mass function given by the following [Reference Johnson, Kemp and Kotz29]:

\[{\textsf{CMP}}_{(a,b)}(x)=\frac{a^x}{(x!)^b}\left(\sum_{j=0}^{\infty}\frac{a^j}{(j!)^b}\right)^{\!\!\!-1},\quad x\in\mathbb{N}_0.\]

In particular, ${\textsf{CMP}}_{(a,1)}$ is a Poisson distribution. For every probability distribution $\mu\in\mathcal{P}^{1+}\cap\mathcal{P}^{1-}$ , there exist $(a_1,b_1), (a_2,b_2)\in\mathbb{R}_+^2$ such that

\[{\textsf{CMP}}_{(a_1,b_1)}(x)\lesssim T_{\mu}(x)\lesssim{\textsf{CMP}}_{(a_2,b_2)}(x).\]

Conversely, every CMP distribution is an element in $\mathcal{P}^{1+}\cap\mathcal{P}^{1-}$ , and hence super-exponential. The zeta distribution on $\mathbb{N}$ with parameter $a>1$ has probability mass function given by the following [Reference Johnson, Kemp and Kotz29]:

\[{\textsf{Zeta}}_a(x)=\frac{1}{\zeta(a)}x^{-a},\quad x\in\mathbb{N},\]

where $\zeta(a)=\sum_{i=1}^{\infty}i^{-a}$ is the Riemann zeta function of a. For every probability distribution $\mu\in\mathcal{P}^{3+}\cap\mathcal{P}^{3-}$ , there exist $a_1, a_2>1$ such that

\[{\textsf{Zeta}}_{a_1}(x)\lesssim T_{\mu}(x)\lesssim{\textsf{Zeta}}_{a_2}(x).\]

Conversely, every zeta distribution is an element in $\mathcal{P}^{3+}\cap\mathcal{P}^{3-}$ , and hence sub-exponential.
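Numerically, the three regimes are easy to tell apart. The sketch below (the truncation cutoffs for the normalizing sums are arbitrary) compares, at a moderate tail point, the tail of ${\textsf{CMP}}_{(1,1)}$ (a Poisson distribution, super-exponential), a geometric tail (exponential-like), and the tail of ${\textsf{Zeta}}_2$ (power-like):

```python
from math import factorial

def cmp_tail(a, b, x, cutoff=120):
    # T(x) for CMP_(a,b); normalizing sum truncated at a cutoff
    w = [a**j / float(factorial(j)) ** b for j in range(cutoff)]
    return sum(w[x:]) / sum(w)

def geom_tail(p, x):
    # T(x) = p^x for the geometric distribution on N_0 with parameter p
    return p**x

def zeta_tail(a, x, cutoff=10**5):
    # T(x) for Zeta_a, Riemann zeta constant approximated by a partial sum
    w = [i ** -float(a) for i in range(1, cutoff)]
    return sum(w[x - 1:]) / sum(w)

# super-exponential < exponential-like < power-like tail at x = 20:
print(cmp_tail(1.0, 1, 20), geom_tail(0.5, 20), zeta_tail(2.0, 20))
```

The three tails differ by many orders of magnitude already at $x=20$, illustrating the strict inclusions among the classes above.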

Figure 1. $I=\mathcal{P}^{1+} \cap \mathcal{P}^{1-}$ : CMP-like distributions. $II=\mathcal{P}^{2+}_{1}\cap \mathcal{P}^{2-}_{1}$ : Exponential-like distributions. $III=\mathcal{P}^{3-}\cap \mathcal{P}^{3+}$ : Power-like distributions.

We first provide the tail asymptotics for QSDs. We point out that parameter conditions for the existence and ergodicity of QSDs are given in [Reference Xu, Hansen and Wiuf48].

Theorem 3. Assume ${(\textbf{A1})}$ – ${(\textbf{A3})}$ and $\partial\neq\varnothing$ . Assume there exists a QSD $\nu$ of $Y_t$ on $\partial^{\textsf{c}}$ . Then $\alpha\le0\le R$ . Furthermore, we have the following:

  • If $R=0$ , then $\alpha_-\ge\theta_{\nu}$ . If in addition $R_-=R_+$ , then $\alpha\le-\theta_{\nu}$ , and if $R_->R_+$ , then $\beta\ge\theta_{\nu}$ .

  • If $R_-=R_+>0$ and $\alpha=0$ , then $R\ge\sigma_1$ .

Moreover, if $R_->R_+$ , we have the following:

  (i) If $R=0$ and $\beta>\theta_{\nu}$, then $\nu\in\mathcal{P}^{1-}_{(R_-\,-R_+)\omega_*^{-1}}$.

  (ii) If $R=0$, $\beta=\theta_{\nu}$, and $R_-\,-R_+\le1$, then $\nu\in\mathcal{P}^{2-}_{1}$.

  (iii) If $R=0$, $\beta=\theta_{\nu}$, and $R_-\,-R_+>1$, then $\nu\in\mathcal{P}^{2-}_{R_-\,-R_+-1}$.

  (iv) If $R>0$, then $\nu\in\mathcal{P}^{1-}_{(R_-\,-R_+)\omega_*^{-1}}$. If in addition $R>1$, then $\nu\in\mathcal{P}^{1+}_{(R_-\,-R_+)(\omega_+\omega_*)^{-1}}$.

If $R_+=R_-$ , we have the following:

  (v) If $R>0$ and $\alpha<0$, then $\nu\in\mathcal{P}^{2-}_{1}$. If in addition $R>1$, then $\nu\in\mathcal{P}^{2+}_{1}$.

  (vi) If $R>0$ and $\alpha=0$, we have the following:

    • If $R=\sigma_1<1$ , then $\nu\in\mathcal{P}^{2-}_{1-R}$ .

    • If $R\ge\sigma_1=1$ , then $\nu\in\mathcal{P}^{3-}$ .

    • If $\min\{1,R\}>\sigma_1$, then $\nu\in\mathcal{P}^{2-}_{<1}$.

  (vii) If $R=0$, we have the following:

    • If $\alpha+\theta_{\nu}=0$ and $\sigma_1<1$ , then $\nu\in\mathcal{P}^{2-}_{1-\sigma_1}$ .

    • If $\alpha+\theta_{\nu}=0$ and $\sigma_1=1$, then $\nu\in\mathcal{P}^{3-}$.

    • If $\alpha+\theta_{\nu}<0$ , then $\nu\in\mathcal{P}^{2-}_{1}$ .

Furthermore, we have the following:

  (viii) If $R=1$, then $\nu\in\mathcal{P}^{3+}_{\theta_{\nu}\alpha_+^{-1}}$.

  (ix) If $0<R<1$, then $\nu\in\mathcal{P}^{2+}_{1-R}$.

  (x) If $R=0$ and $\alpha_->\theta_{\nu}$, then $\nu\in\mathcal{P}^{2+}_{1}$.

  (xi) If $R=0$ and $\alpha_-=\theta_{\nu}$, then $\nu\in\mathcal{P}^{1+}_{-\sigma_1\omega_-^{-1}}$.

Corollary 4. Assume ${(\textbf{A1})}$ – ${(\textbf{A3})}$ and $\partial\neq\varnothing$ . No QSD has a tail decaying faster than a CMP distribution. Moreover, any QSD, if such exists, is super-exponential if $R_->\max\{1,R_+\}$ or (xi) holds; exponential-like if either (a) $R_-=R_+=0$ and $\alpha+\theta_{\nu}<0$ or (b) $R_-=R_+>1$ and $\alpha<0$ holds; and sub-exponential if (vi) or $R_-=R_+=\alpha+\theta_{\nu}=0$ holds. In particular, it decays no faster than a power-like distribution if $R\ge \sigma_1=1$ and $\alpha=0$ .

Analogously, we further characterize the tails of the stationary distributions. Parameter conditions for the existence and ergodicity of stationary distributions are given in [Reference Xu, Hansen and Wiuf48].

Theorem 4. Assume ${(\textbf{A1})}$ – ${(\textbf{A2})}$ and $\partial=\varnothing$ . Assume there exists a stationary distribution $\pi$ of $Y_t$ on $\mathcal{Y}$ with unbounded support. Then $\alpha\le0$ , and in particular, when $\alpha=0$ ,

  • if $\Delta=0$ then $\sigma_1<1$ ;

  • if $\sigma_1=1$ then $\Delta>1$ .

Moreover, we have the following:

  (i) If $R_->R_+$, then $\pi\in\mathcal{P}^{1+}_{(R_-\,-R_+)(\omega_+\omega_*)^{-1}}\cap\mathcal{P}^{1-}_{(R_-\,-R_+)\omega_*^{-1}}$.

  (ii) If $R_-=R_+$ and $\alpha<0$, then $\pi\in\mathcal{P}^{2+}_{1}\cap\mathcal{P}^{2-}_{1}$.

  (iii) If $\alpha=0$, $\Delta>0$, and $\sigma_1<1$, then $\pi\in\mathcal{P}^{2+}_{1-\sigma_1}\cap\mathcal{P}^{2-}_{1-\sigma_1}$.

  (iv) If $\alpha=0$ and $\sigma_1=1$, then $\pi\in\mathcal{P}^{3+}_{\Delta-1}$. In particular, if in addition (iv) $^\prime$ : $\delta>1$, then $\pi\in\mathcal{P}^{3+}_{\Delta-1}\cap\mathcal{P}^{3-}_{\delta-1}$.

  (v) If $\alpha=0$, $\Delta=0$, and $\sigma_2<1$, then $\pi\in\mathcal{P}^{2-}_{1-\sigma_2}$.

  (vi) If $\alpha=0$, $\Delta=0$, and $\sigma_2\ge1$, then $\pi\in\mathcal{P}^{3-}$.

As a consequence, a trichotomy regarding the tails of the stationary distributions can be derived.

Corollary 5. Assume ${(\textbf{A1})}$ – ${(\textbf{A2})}$ and $\partial=\varnothing$ . Any stationary distribution of $Y_t$ on $\mathcal{Y}$ with unbounded support, if such exists, is

  • super-exponential with a CMP-like tail if Theorem 4(i) holds,

  • exponential-like if Theorem 4(ii) holds,

  • sub-exponential if one of Theorem 4(iii), Theorem 4(iv) $^\prime$ , and Theorem 4(vi) holds. In particular the tail is power-like if (iv) $^\prime$ holds.

Corollary 6. Assume ${(\textbf{A1})}$ , ${(\textbf{A2})^{\prime}}$ , ${(\textbf{A3})}$ , $\partial=\varnothing$ , $R\ge 3$ , and $(R-1)\vartheta-\alpha_+\le0$ . Any stationary distribution of $Y_t$ on $\mathcal{Y}$ with unbounded support, if such exists, is ergodic. In particular, $Y_t$ is non-explosive.

Proof. By ${(\textbf{A2})^{\prime}}$ , $R\in\mathbb{N}_0$ and $\sigma_1=1$ . By [Reference Xu, Hansen and Wiuf48, Theorem 1], under ${(\textbf{A3})}$ , $Y_t$ is explosive if either (1) $R\ge2$ and $\alpha>0$ , or (2) $R\ge3$ , $\alpha=0$ , and $\gamma-\vartheta>0$ . By Theorem 4, $\alpha\le0$ . If a stationary distribution exists and $Y_t$ is non-explosive, then under ${(\textbf{A3})}$ , $Y_t$ is positive recurrent and the stationary distribution is unique and ergodic [Reference Norris39]. When $\alpha=0$ and $R\ge3$ , it follows from Theorem 4 that $\Delta>1$ , i.e., $\gamma-\vartheta<(R-1)\vartheta-\alpha_+\le0$ by assumption. Hence $Y_t$ is non-explosive in all cases, and thus the stationary distribution is ergodic.

We make the following remarks:

  • The estimate of the tail does not depend on the choice of the second and third powers $\big(R_{\omega}^2, R_{\omega}^3\big)$ of the transition rate functions when $\alpha<0$ , whereas it may depend on this choice when $\alpha=0$ . In the latter case, the larger $\sigma_1$ and $\sigma_2$ are, the sharper the results.

  • Generically, no limit distributions (in the cases covered) can decay faster than a CMP distribution or slower than a zeta distribution.

  • The unique gap case in Theorem 4 is $\alpha=0$ , $\sigma_1<1$ , and $\Delta=0$ .

  • Assume $(\textbf{A2})^{\prime}$ . By Corollary 6, if the chain $Y_t$ is explosive and a stationary distribution exists, then $\alpha=0$ and $R\ge3$ .

  • Although not stated explicitly in Theorem 4, the tail asymptotics of a stationary distribution of a BDP (Cases (i)–(iii) and (iv) $^\prime$ ) are sharp up to the leading order, in comparison with Proposition 3. Similarly, when $R_->R_+$ , the tail asymptotics are sharp up to the leading order for upwardly skip-free processes (Case (i)) [Reference Anderson4]. In comparison with the sharp results provided in Proposition 4, the results obtained in Theorem 4 capture the leading tail asymptotics of a stationary distribution, e.g., in Case (iii).

  • The assumption $R>1$ in Corollary 4 is crucial. Indeed, as Examples 5 and 6 illustrate, when $R=1$ and $\alpha<0$ , a QSD may still exist and have either a geometric or a zeta-like tail. This means that $\alpha<0$ is not sufficient for a QSD to have an exponential tail. It remains open whether a QSD with CMP tail may exist when $R=1$ . Moreover, we emphasize that the conditions $R>1$ and $\alpha<0$ ensure the existence of a unique ergodic QSD assuming $(\textbf{A2})^{\prime}$ [Reference Xu, Hansen and Wiuf47]. Hence, such ergodic QSDs are not heavy-tailed.

  • Let $Y_t$ be a CTMC and $\widetilde{Y}_t$ a BDP on the same state space. If the critical parameters ( $\alpha$ , $\beta$ , R, etc.) are the same, then the two processes share the same qualitative characterization in terms of existence of limit distributions, ergodicity, explosivity, etc. [Reference Xu, Hansen and Wiuf47]. One might therefore conjecture that the tail asymptotics of a general CTMC coincide with those of a BDP representative, i.e., that the decay of limit distributions (provided such exist) takes the sharp form induced by the BDP.

In the following, we illustrate and elaborate on the results by example. The examples have real analytic APLH transition rate functions, and thus $\sigma_1=1$ and $\sigma_2=2$ .

Assumption ${(\textbf{A1})}$ is crucial for Theorem 4.

Example 3. Consider a model of stochastic gene expression [Reference Shahrezaei and Swain43], given by $\Omega=\{-1\}\cup\mathbb{N}$ and

\begin{align*}q(x,x-1)=\unicode{x1d7d9}_{\mathbb{N}}(x),\quad q(x,x+j)=ab_{j-1},\quad j\in\mathbb{N},\ x\in\mathbb{N}_0,\end{align*}

where $a>0$ , and $b_j\ge0$ for all $j\in\mathbb{N}_0$ . Here the backward jump $-1$ represents the degradation of mRNA with unity degradation rate, and the forward jumps $j\in\mathbb{N}$ account for bursty production of mRNA with transcription rate a and burst size distribution $(b_j)_{j\in\mathbb{N}_0}$ . When $b_j=(1-\delta)\delta^j$ for $0<\delta<1$ , the stationary distribution is the negative binomial distribution [Reference Shahrezaei and Swain43]:

\[ \pi(x)=\frac{\Gamma(x+a)}{\Gamma(x+1)\Gamma(a)}\delta^x(1-\delta)^a,\quad x\in\mathbb{N}_0.\]

When $a=1$ , $\pi$ is also geometric. Theorem 4, if it applied, would suggest that $T_{\pi}$ decays like a CMP distribution, since $1=R_->R_+=0$ . The technical reason it does not apply is that the proof relies on Corollary 2, which requires ${(\textbf{A1})}$ . It does not seem possible to directly extend Theorem 4 to CTMCs with unbounded jumps.
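The geometric nature of the tail can be checked from the pmf ratio $\pi(x+1)/\pi(x)=\delta(x+a)/(x+1)\to\delta$; a brief numerical sketch (the parameter values are arbitrary):

```python
from math import lgamma, log, exp

def log_nb(x, a, delta):
    # log pi(x) for pi(x) = Gamma(x+a)/(Gamma(x+1)Gamma(a)) delta^x (1-delta)^a,
    # computed in log space to avoid underflow for large x
    return lgamma(x + a) - lgamma(x + 1) - lgamma(a) + x * log(delta) + a * log(1 - delta)

a, delta = 2.0, 0.4  # arbitrary parameter values
ratios = [exp(log_nb(x + 1, a, delta) - log_nb(x, a, delta)) for x in (10, 100, 1000)]
print(ratios)  # decreases towards delta: the tail is geometric, not CMP-like
```

The ratio tends to $\delta<1$, confirming the geometric (exponential-like) tail that contradicts a naive application of Theorem 4.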

Example 4. Consider a BDP with birth and death rates

\begin{align*}q(x,x-1)=\sum_{j=1}^bS(b,j)x^{\underline{j}},\quad q(x,x+1)=a,\quad x\in\mathbb{N}_0,\end{align*}

where $a>0$ , $b\in\mathbb{N}$ , and $S(b,j)$ denotes the Stirling numbers of the second kind [Reference Abramowitz and Stegun1]. Here $R_+=0<R_-=b$ , and $\alpha=-S(b,b)=-1<0$ . This BDP has an ergodic stationary distribution on $\mathbb{N}_0$ [Reference Xu, Hansen and Wiuf47], and the unique stationary distribution is $\pi={\textsf{CMP}}_{(a,b)}$ .
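This example can be verified in exact arithmetic: the identity $\sum_{j=1}^b S(b,j)x^{\underline{j}}=x^b$ shows that the death rate is exactly $x^b$, and ${\textsf{CMP}}_{(a,b)}$ then satisfies the BDP detailed-balance relation $\pi(x)\,a=\pi(x+1)(x+1)^b$. A sketch (with an arbitrary rational choice of a):

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    # Stirling numbers of the second kind: S(n,k) = k S(n-1,k) + S(n-1,k-1)
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def falling(x, j):
    # falling factorial x(x-1)...(x-j+1)
    out = 1
    for i in range(j):
        out *= x - i
    return out

a, b = Fraction(5, 2), 3  # arbitrary rate constant and degree

# the death rate of Example 4 collapses to x^b
assert all(sum(stirling2(b, j) * falling(x, j) for j in range(1, b + 1)) == x**b
           for x in range(20))

def cmp_weight(x):
    # unnormalized CMP_(a,b) mass a^x/(x!)^b, exact as a Fraction
    fact = 1
    for i in range(1, x + 1):
        fact *= i
    return a**x / Fraction(fact)**b

# detailed balance: pi(x) q(x,x+1) = pi(x+1) q(x+1,x)
assert all(cmp_weight(x) * a == cmp_weight(x + 1) * (x + 1)**b for x in range(15))
```

Both assertions hold exactly, so the stated $\pi$ is indeed the reversible stationary distribution of this BDP.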

By Theorem 3(v), the tail of a QSD decays no faster than an exponential distribution when $\alpha<0$ and $0\le R_-=R_+\le1$ , which is also confirmed by the examples below.

Example 5. Consider the linear BDP on $\mathbb{N}_0$ with birth and death rates

\begin{align*}q(x,x+1)&=b x,\qquad x\in\mathbb{N}_0,\\q(1,0)&=d,\\q(x,x-1)&=\big(d\cdot2^{-1}+b\big)(x+1),\qquad x\in\mathbb{N}\setminus\{1\},\end{align*}

where b and d are positive constants [Reference O’Neill40]. For this process, a QSD $\nu$ is

\[\nu(x)=\frac{1}{x(x+1)},\quad x\in\mathbb{N}.\]

Hence $T_{\nu}$ decays as fast as the zeta distribution with parameter 2. Here $\alpha=-d/2<0$ , and $R=R_-=R_+=1$ .

Example 6. Consider the linear BDP on $\mathbb{N}_0$ with $b_j=b j$ and $d_j=d_1 j$ with $0<b<d_1$ and $j\in\mathbb{N}$ [Reference Van Doorn45]. For this process, a QSD $\nu$ is

\[\nu(x)=\left(\frac{b}{d_1}\right)^{\!\!x-1}\left(1-\frac{b}{d_1}\right),\quad x\in\mathbb{N},\]

a geometric distribution. Here $\alpha=b-d_1<0$ , and $R=R_-=R_+=1$ .

By Parts (iv) and (viii) of Theorem 3, if $R_+<R_-=1$ , the tail of a QSD decays no faster than a CMP distribution and no more slowly than a zeta distribution, which is also confirmed by the example below.

Example 7. Consider a BDP on $\mathbb{N}_0$ given by

\begin{align*} q(x,x+1)&=\frac{x}{x+2},\quad x\in\mathbb{N}_0,\\q(x,x-1)&=x-1+\frac{2}{x},\quad x\in\mathbb{N}.\end{align*}

Here $R=R_-=1>R_+=0$ , $\alpha=-1$ , and $\sigma_1=1$ , $\sigma_2=2$ . Using the same Lyapunov function constructed in the proof of [Reference Xu, Hansen and Wiuf47, Theorem 4.4], it can be shown that there exists a uniquely ergodic QSD. By Theorem 3(iv), the tail of the QSD decays no more slowly than a CMP distribution. Indeed, the QSD is given by

\[\nu(x)=\frac{1}{(x-1)!(x+1)},\quad T_{\nu}(x)=\frac{1}{x!},\quad x\in\mathbb{N}.\]

The tail of the QSD decays like a Poisson distribution.

Example 8. Consider a BDP on $\mathbb{N}_0$ given by

\begin{align*}q(x,x+1)=x^2,\quad q(x,x-1)=x^2+x,\quad x\in\mathbb{N}_0.\end{align*}

Here $R=R_-=R_+=2>1$ , and $\alpha=0$ . Corollary 4 states that any QSD (if it exists) is heavy-tailed and its tail decays no faster than a zeta-like distribution. Indeed, a QSD of the process is given by

\[\nu(x)=\frac{1}{x(x+1)},\quad T_{\nu}(x)=\frac{1}{x},\quad x\in\mathbb{N}.\]

Example 9. Consider a quadratic BDP on $\mathbb{N}_0$ , given by

\begin{align*}q(x,x+1)=x(x+3)/2,\quad q(x,x-1)=x(x+1),\quad x\in\mathbb{N}_0.\end{align*}

Then $R_-=R_+=R=2>1$ , $\alpha=-1/2$ . Hence there exists a uniquely ergodic QSD [Reference Xu, Hansen and Wiuf47]. By Corollary 4, this QSD decays exponentially. Indeed, the QSD is given by

\[\nu(x)=2^{-x},\quad T_{\nu}(x)=2^{-x+1},\quad x\in\mathbb{N}.\]
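All four QSDs in Examples 5 and 7–9 can be verified against the flux identity for a BDP absorbed at $0$: with $\theta_{\nu}=\nu(1)q(1,0)$, one needs $\nu(x-1)q(x-1,x)+\nu(x+1)q(x+1,x)+\theta_{\nu}\nu(x)=\nu(x)\big(q(x,x+1)+q(x,x-1)\big)$ for all $x\in\mathbb{N}$ (with no inflow term at $x=1$). A sketch in exact rational arithmetic (the rate constants $b=1$, $d=2$ in Example 5 are an arbitrary choice):

```python
from fractions import Fraction as F
from math import factorial

def check_qsd(up, down, nu, N=40):
    # verify nu(x-1)up(x-1) + nu(x+1)down(x+1) + theta*nu(x)
    #        == nu(x)(up(x) + down(x)) for x = 1, ..., N-1
    theta = nu(1) * down(1)                 # killing rate theta_nu
    for x in range(1, N):
        lhs = nu(x + 1) * down(x + 1) + theta * nu(x)
        if x >= 2:
            lhs += nu(x - 1) * up(x - 1)    # no inflow from the absorbing state
        assert lhs == nu(x) * (up(x) + down(x)), x
    return True

b, d = F(1), F(2)  # arbitrary rate constants for Example 5

# Example 5: nu(x) = 1/(x(x+1)), zeta-like tail
check_qsd(lambda x: b * x,
          lambda x: d if x == 1 else (d / 2 + b) * (x + 1),
          lambda x: F(1, x * (x + 1)))

# Example 7: nu(x) = 1/((x-1)!(x+1)), Poisson-like tail
check_qsd(lambda x: F(x, x + 2),
          lambda x: x - 1 + F(2, x),
          lambda x: F(1, factorial(x - 1) * (x + 1)), N=20)

# Example 8: nu(x) = 1/(x(x+1)), power-like tail
check_qsd(lambda x: F(x * x), lambda x: F(x * x + x), lambda x: F(1, x * (x + 1)))

# Example 9: nu(x) = 2^(-x), geometric tail
check_qsd(lambda x: F(x * (x + 3), 2), lambda x: F(x * (x + 1)), lambda x: F(1, 2**x))
```

Each check passes with exact `Fraction` arithmetic, so the four claimed QSDs satisfy the flux identity on the range tested.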

5. Applications

In this section, we apply the results on asymptotic tails to diverse models in biology. We emphasize that in all models/applications, the transition rate functions are real analytic APLH on a subset of $\mathbb{N}_0$ , so that $\sigma_1=1$ and $\sigma_2=2$ ; we will not mention this explicitly again.

5.1. Biochemical reaction networks

In this section, we apply the results of Section 4 to some examples of stochastic reaction networks (SRNs) with mass-action kinetics. These are used to describe interactions of constituent molecular species with many applications in systems biology, biochemistry, genetics, and beyond [Reference Gardiner21, Reference Pastor-Satorras, Castellano, Van Mieghem and Vespignani41]. An SRN with mass-action kinetics is a CTMC on $\mathbb{N}_0^d$ ( $d\ge 1$ ) encoded by a labelled directed graph [Reference Anderson, Kurtz, Koeppl and Setti3]. We concentrate on SRNs on $\mathbb{N}_0$ with one species (S). In this case the graph is composed of reactions (edges) of the form $n\text{S}\longrightarrow^{\kappa} m\text{S}$ , $n,m\in\mathbb{N}_0$ ( $n$ molecules of species S are converted into $m$ molecules of the same species), encoding a jump from x to $x+m-n$ with propensity $q(x,x+m-n)=\kappa x(x-1)\ldots(x-n+1)$ , $\kappa>0$ . Note that multiple reactions might result in the same jump vector.
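As a sketch of the propensity convention just described (the function name is ours):

```python
def propensity(kappa, n, x):
    # mass-action propensity of nS -> mS in state x: kappa * x(x-1)...(x-n+1)
    out = kappa
    for i in range(n):
        out *= x - i
    return out

# e.g. 3S -> S consumes n = 3 molecules (net jump omega = -2):
assert propensity(2.0, 3, 5) == 2.0 * 5 * 4 * 3
# distinct reactions with the same net jump add up, e.g.
# q(x, x+1) = propensity(k1, 1, x) + propensity(k2, 2, x) for S -> 2S and 2S -> 3S
```

The falling factorial vanishes when fewer than $n$ molecules are present, so a reaction can only fire in states where enough reactant is available.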

In general, little is known about the stationary distributions of a reaction network, let alone the QSDs, when such exist [Reference Anderson, Craciun and Kurtz2, Reference Gupta, Briat and Khammash25, Reference Hansen and Wiuf27, Reference Hoessly and Mazza28]. Special cases include complex balanced networks (in arbitrary dimension), which have Poisson product-form distributions [Reference Anderson, Craciun and Kurtz2, Reference Cappelletti and Wiuf9], reaction networks that are also BDPs, and reaction networks with irreducible components, each with a finite number of states.

Example 10. To show how general the results are, we consider two SRNs, neither of which is a BDP.

(i) Consider a reaction network with a strongly connected graph [Reference Xu, Hansen and Wiuf47],

\[\text{S}\xrightarrow{\kappa_1}2\text{S},\quad 2\text{S}\xrightarrow{\kappa_2}3\text{S},\quad 3\text{S}\xrightarrow{\kappa_3}\text{S}.\]

For this reaction network, $\Omega=\{1,-2\}$ , and

\[q(x,x+1)=\kappa_1 x+\kappa_2 x(x-1),\quad q(x,x-2)=\kappa_3 x(x-1)(x-2).\]

Hence, $\alpha=-2\kappa_3<0$ . It is known that there exists a unique exponentially ergodic stationary distribution $\pi$ on $\mathbb{N}$ [Reference Xu, Hansen and Wiuf47]. (The state 0 is neutral [Reference Xu, Hansen and Wiuf48].) By Theorem 4, $\pi\in\mathcal{P}^{1+}_1\cap\mathcal{P}^{1-}_1$ . Hence $\pi$ is light-tailed and $T_{\pi}$ decays as fast as a Poisson distribution, since $\omega_+=\omega_*=1$ , $R_+=2$ , and $R_-=3$ . However, the stationary distribution is generally not Poisson. If $\kappa_2^2=\kappa_1\kappa_3$ , then the reaction network is complex-balanced; hence the stationary distribution is Poisson [Reference Anderson, Craciun and Kurtz2]. If the parameter identity is not fulfilled, then the distribution cannot be Poisson in this case [Reference Cappelletti and Wiuf9].

(ii) Consider a similar reaction network including direct degradation of S [Reference Xu, Hansen and Wiuf47]:

The threshold parameters are the same as in (i), and it follows from [Reference Xu, Hansen and Wiuf47] that the reaction network has a uniformly exponentially ergodic QSD $\nu$ . By Theorem 3, $T_{\nu}$ decays like a CMP distribution.

Example 11. The following bursty Schlögl model was proposed in [Reference Falk, Mendler and Drossel20]:

where $j\in\mathbb{N}$ . When $j=1$ , it reduces to the classical Schlögl model.

The associated process has a unique ergodic stationary distribution $\pi$ on $\mathbb{N}_0$ [Reference Xu, Hansen and Wiuf47]. Bifurcation with respect to patterns of the ergodic stationary distribution is discussed in [Reference Falk, Mendler and Drossel20], based on a diffusion approximation in terms of the Fokker–Planck equation. Using the results established in this paper, the tail distribution can be characterized rigorously. In fact $\pi\in\mathcal{P}^{1+}_{j^{-1}}\cap\mathcal{P}^{1-}_1$ . Hence $\pi$ is light-tailed and $T_{\pi}$ decays like a CMP distribution.

Proof. We have $\Omega=\{-1,1\}\cup\{j\}$ . The ergodicity follows from [Reference Xu, Hansen and Wiuf47, Subsection 4.3] as a special case. It is straightforward to verify that ${(\textbf{A1})}$ ${(\textbf{A3})}$ are all satisfied. Moreover, $R_+=2$ and $R_-=3$ . The conclusion then follows from Theorem 4.

Example 12. Consider the following one-species SRN:

\[\text{S}\xrightarrow{\kappa_1}4\text{S},\quad 3\text{S}\xrightarrow{\kappa_2}0.\]

In this case, $\omega_*=3$ , and the ambient space $\mathbb{N}_0$ is divided into disjoint irreducible sets $3\mathbb{N}$ , $3\mathbb{N}_0+1$ , and $3\mathbb{N}_0+2$ , as well as an absorbing state 0. By simple calculation, we have $R_-=3>R_+=1$ , and $\alpha = -3\kappa_2<0$ . Hence, this SRN is positive recurrent on the sets $3\mathbb{N}_0+1$ and $3\mathbb{N}_0+2$ , while it admits a unique QSD on $3\mathbb{N}$ [Reference Xu, Hansen and Wiuf48, Theorem 7]. According to Theorem 4, the tails of the stationary distributions decay like a CMP distribution on $3\mathbb{N}_0+1$ and $3\mathbb{N}_0+2$ , and according to Theorem 3, the tail of the QSD also decays like a CMP distribution on $3\mathbb{N}$ . Since the transition rate functions take a common form (polynomial) on all three irreducible sets, the parameters are the same on all three irreducible sets (open and closed), and consequently, the tail asymptotics are the same on the two closed irreducible sets. The same would hold true for the open irreducible sets, if there were more than one open irreducible set.

Example 13. Consider the following one-species S-system modelling a gene regulatory network [Reference Chowdhury, Chetty and Evans14]:

\[\varnothing\xrightarrow{(\kappa_1,\xi_1)}\text{S}\xleftarrow{(\kappa_2,\xi_2)}3\text{S}\]

with the generalized mass action kinetics (GMAK)

\begin{align*}q(x,x+1)=\kappa_1\frac{\Gamma(x+\xi_1)}{\Gamma(x)},\quad q(x,x-2)=\kappa_2\frac{\Gamma(x+\xi_2)}{\Gamma(x)},\quad x\in\mathbb{N}_0,\end{align*}

where $\kappa_1, \kappa_2>0$ are the reaction rate constants, and $\xi_2>\xi_1>0$ are the indices of GMAK.

By Stirling’s formula,

\[\log\Gamma(x)=(x-1/2)\log x-x+\log\sqrt{2\pi}+\mathrm{O}\big(x^{-1}\big);\]

hence $q(x,x+1)$ is APLH with $(\xi_1,\xi_1-1,\xi_1-2)$ and $q(x,x-2)$ is APLH with $(\xi_2,\xi_2-1,\xi_2-2)$ . Then $R_-=\xi_2>R_+=\xi_1$ , $\omega_*=1$ , $\omega_-=-2$ , and $\omega_+=1$ . Using the same Lyapunov function constructed in the proof of [Reference Xu, Hansen and Wiuf47, Theorem 4.4], it can be shown that there exists a uniquely ergodic stationary distribution $\pi$ on $\mathbb{N}_0$ with support $\mathbb{N}$ . By Theorem 4, $\pi\in\mathcal{P}^{1+}_{\xi_2-\xi_1}\cap\mathcal{P}^{1-}_{\xi_2-\xi_1}$ .
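The asymptotics $\Gamma(x+\xi)/\Gamma(x)\sim x^{\xi}$ underlying these APLH expansions can also be checked numerically; a quick sketch with the arbitrary choice $\xi=1.5$:

```python
from math import lgamma, log

def log_gap(x, xi):
    # log(Gamma(x+xi)/Gamma(x)) - xi*log(x), which is O(1/x) by Stirling's formula
    return lgamma(x + xi) - lgamma(x) - xi * log(x)

gaps = [log_gap(float(x), 1.5) for x in (10, 100, 1000)]
print(gaps)  # shrinks roughly like xi(xi-1)/(2x)
```

The gap shrinks by a factor of about ten per decade of $x$, consistent with the $\mathrm{O}(x^{-1})$ remainder in the Stirling expansion above.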

5.2. An extended class of branching processes

Consider an extended class of branching processes on $\mathbb{N}_0$ [Reference Chen13] with transition rate matrix $Q=(q(x,y))_{x,y\in\mathbb{N}_0}$ given by

\[q(x,y)=\left\{\begin{array}{c@{\quad}l}r(x)\mu(y-x+1), &\quad \text{if}\quad y\ge x-1\ge0\quad \text{and}\quad y\neq x,\\ -r(x)(1-\mu(1)), &\quad \text{if}\quad y=x\ge1,\\q(0,y), & \quad \text{if}\quad y>x=0,\\ -q(0), &\quad \text{if}\quad y=x=0,\\ 0, &\quad \text{otherwise},\end{array}\right.\]

where $\mu$ is a probability measure on $\mathbb{N}_0$ , $q(0)=\sum_{y\in\mathbb{N}}q(0,y)$ , and r(x) is a positive finite function on $\mathbb{N}_0$ . Assume the following:

( $\textbf{H1}$ ) $\mu(0)>0$ , $\mu(0)+\mu(1)<1$ .

( $\textbf{H2}$ ) $\sum_{y\in\mathbb{N}}q(0,y)y<\infty$ , $M=\sum_{k\in\mathbb{N}_0}k\mu(k)<\infty$ .

( $\textbf{H3}$ ) r(x) is a polynomial of degree $R\ge1$ for large x.

The tail asymptotics of infinite stationary measures in the null-recurrent case are investigated in [Reference Liu and Zhao31] under ( $\textbf{H1}$ )–( $\textbf{H2}$ ) for general r. Here we assume r is polynomial ( $\textbf{H3}$ ). The following is a consequence of the results of Section 4.

Theorem 5. Assume ${(\textbf{H1})}$ – ${(\textbf{H3})}$ , $Y_0\neq0$ , and that $\mu$ has finite support.

  (i) Assume $q(0)>0$ . Then there exists an ergodic stationary distribution $\pi$ on $\mathbb{N}_0$ if (i-1) $M<1$ or (i-2) $M=1$ and $R>1$ . Moreover, $T_{\pi}$ decays like a geometric distribution if (i-1) holds, and like a zeta distribution if (i-2) holds.

  (ii) Assume $q(0)=0$ . Then there exists an ergodic QSD $\nu$ on $\mathbb{N}$ if (ii-1) $M<1$ and $R>1$ or (ii-2) $M=1$ and $R>2$ . Moreover, $T_{\nu}$ decays like a geometric distribution if (ii-1) holds, while it decays no faster than a zeta distribution if (ii-2) holds.

Proof. For all $k\in\Omega$ , let

\begin{align*}q(x,x+k)=\begin{cases} r(x)\mu(k+1), &\text{if}\ x\in\mathbb{N},\\ q(0,k), &\text{if}\ x=0.\end{cases}\end{align*}

By ${(\textbf{H1})}$ , $\mu(k)>0$ for some $k\in\mathbb{N}$ . Hence, regardless of $q(0)$ , by positivity of r, ${(\textbf{A1})}$ – ${(\textbf{A3})}$ are satisfied with $\Omega_{-}=\{-1\}$ and $\Omega_{+}=\{j\in\mathbb{N}\,:\, j+1\in\text{supp}\,\mu \text{ or } q(0,j)>0\}$ . Let $r(x)=ax^R+bx^{R-1}+\mathrm{O}\big(x^{R-2}\big)$ with $a>0$ . It is straightforward to verify that $R_{+}=R_{-}=R$ and $\alpha=a(M-1)$ . The ergodicity follows from [Reference Xu, Hansen and Wiuf47], and the tail asymptotics follow from Theorems 4 and 3.

5.3. Stochastic population processes under bursty reproduction

Two stochastic population models with bursty reproduction are investigated in [Reference Be’er and Assaf7].

The first model is a Verhulst logistic population process with bursty reproduction. The process $Y_t$ is a CTMC on $\mathbb{N}_0$ with transition rate matrix $Q=(q(x,y))_{x,y\in\mathbb{N}_0}$ satisfying

\[q(x,y)=\begin{cases} c\mu(j)x,\qquad \text{if}\ y=x+j,\ j\in\mathbb{N},\\[5pt] \frac{c}{K}x^2+x,\quad\, \text{if}\ y=x-1\in\mathbb{N}_0,\\[5pt] 0,\qquad\qquad\ \, \text{otherwise},\end{cases}\]

where $c>0$ is the reproduction rate, $K\in\mathbb{N}$ is the typical population size in the long-lived metastable state prior to extinction [Reference Be’er and Assaf7], and $\mu$ is the burst size distribution.

Approximations of the mean time to extinction and QSD are discussed in [Reference Be’er and Assaf7] against various burst size distributions of finite mean (e.g., a Dirac measure, a Poisson distribution, a geometric distribution, a negative-binomial distribution). The existence of an ergodic QSD for this population model is established in [Reference Xu, Hansen and Wiuf47]. However, the tail of the QSD is not addressed therein.

Theorem 6. Assume $\mu$ has finite support. Let $\nu$ be the unique ergodic QSD on $\mathbb{N}$ of the Verhulst logistic model $Y_t$ absorbed at zero. Then $T_{\nu}$ decays like a CMP distribution.

Proof. We have $\Omega=\{-1\}\cup\text{supp}\,\mu$ , $q(x,x-1)=\frac{c}{K}x^2+x$ , and $q(x,x+k)=c\mu(k)x$ for $k\in\text{supp}\,\mu$ and $x\in\mathbb{N}$ . Since $\mu$ has finite support, ${(\textbf{A1})}$ – ${(\textbf{A3})}$ are satisfied. Moreover, since $\text{supp}\,\mu\neq\varnothing$ , we have $R_-=2$ and $R_+=1$ . Again, the ergodicity result follows from [Reference Xu, Hansen and Wiuf47]. The tail asymptotics follow directly from Theorem 3.

In subsequent sections, we provide proofs of the main results in Section 4. Since the proof of Theorem 3 is based on that of Theorem 4, we begin by proving Theorem 4 in the next section.

6. Proof of Theorem 4

Let

\[\begin{split} \alpha_j=&\begin{cases}\lim_{x\to\infty}\frac{\sum_{\omega\in A_j}q(x,x+\omega)}{x^{R_-}},\quad \text{if}\ j=\omega_-+1,\ldots,0,\\ \lim_{x\to\infty}\frac{\sum_{\omega\in A_j}q(x,x+\omega)}{x^{R_+}},\quad \text{if}\ j=1,\ldots,\omega_+,\\ 0,\qquad\qquad\qquad\qquad\qquad\quad \text{otherwise}, \end{cases}\\ \gamma_j=&\begin{cases}\lim_{x\to\infty}\frac{\sum_{\omega\in A_j}q(x,x+\omega)-\alpha_j x^{R_-}}{x^{R_-\,-\sigma_1}},\quad \text{if}\ j=\omega_-+1,\ldots,0,\\ \lim_{x\to\infty}\frac{\sum_{\omega\in A_j}q(x,x+\omega)-\alpha_j x^{R_+}}{x^{R_+-\sigma_1}},\quad \text{if}\ j=1,\ldots,\omega_+,\\ 0,\hspace{4.8cm} \ \text{otherwise}.\end{cases}\end{split}\]

Note that $\beta=\alpha_0$ . From Lemma 1, $\alpha_-=\omega_*\sum_{j=\omega_-+1}^0\alpha_{j}$ and $\alpha_+=\omega_*\sum_{j=1}^{\omega_+}\alpha_j$ . By ${(\textbf{A3})}$ ,

(11) \begin{equation}\sum_{\omega\in A_j}q(x,x+\omega)=\begin{cases} x^{R_-}\big(\alpha_j+\gamma_jx^{-\sigma_1}+\mathrm{O}\big(x^{-\sigma_2}\big)\big),\quad \text{if}\ j=\omega_-+1,\ldots,0,\\[5pt] x^{R_+}\big(\alpha_j+\gamma_jx^{-\sigma_1}+\mathrm{O}\big(x^{-\sigma_2}\big)\big),\quad\, \text{if}\ j=1,\ldots,\omega_+.\end{cases}\end{equation}

Since

\begin{align*}q(x,x+j\omega_*)={\text{sgn}}(j)\Biggl(\sum_{\omega\in A_j}q(x,x+\omega)-\sum_{\omega\in A_{j+1}}q(x,x+\omega)\Biggr),\quad j=\omega_-,\ldots,-1,1,\ldots,\omega_+,\end{align*}

we have

\begin{align*}q(x,x+j\omega_*)=\begin{cases} x^{R_-}\big((\alpha_{j+1}-\alpha_j)+(\gamma_{j+1}-\gamma_j)x^{-\sigma_1}+\mathrm{O}\big(x^{-\sigma_2}\big)\big),\quad \text{if}\ j=\omega_-,\ldots,-1,\\[4pt] x^{R_+}\big(\big(\alpha_j-\alpha_{j+1}\big)+(\gamma_j-\gamma_{j+1})x^{-\sigma_1}+\mathrm{O}\big(x^{-\sigma_2}\big)\big),\quad\, \text{if}\ j=1,\ldots,\omega_+.\end{cases}\end{align*}

Since ( $\textbf{A1}$ )–( $\textbf{A2}$ ) imply that ${\cap}_{\omega\in\Omega}\ \{x\in\mathcal{Y}\,:\, q(x,x+\omega)=0\}$ is finite, it easily follows that both $\Omega_-\neq\varnothing$ and $\Omega_+\neq\varnothing$ since $\text{supp}\,\pi$ is unbounded. Hence $\alpha_-\ge\alpha_0>0$ , $\alpha_+\ge\alpha_1>0$ , and $-\infty<\omega_-<\omega_+<\infty$ .

For ease of exposition and without loss of generality, we assume throughout the proof that $\omega_*=1$ (recall the argument in the proof of Theorem 1). Hence $\mathbb{N}_0+b\subseteq\mathcal{Y}\subseteq\mathbb{N}_0$ for some $b\in\mathbb{N}_0$ by Proposition 2.

Most of the inequalities below are based on the identities in Theorem 1 and Corollary 2. Therefore, we use ‘LHS’ (‘RHS’) with a label in the subscript as shorthand for the left-hand side (right-hand side) of the equation with the given label.

The claims that $\Delta=0$ implies $\sigma_1<1$ and that $\sigma_1=1$ implies $\Delta>1$ are proved in Step I below.

We first show $\alpha\le0$ . Suppose for the sake of contradiction that $\alpha>0$ . Then either (1) $R_+>R_-$ or (2) $R_+=R_-$ and $\alpha_+>\alpha_-$ holds. Define the auxiliary function

\[f_j(x)=\sum_{\omega\in A_j}q(x-j,x-j+\omega),\quad j=\omega_-+1,\ldots,\omega_+.\]

From (11) it follows that

\[f_j(x)=\begin{cases} x^{R_-}\big(\alpha_j+\gamma_jx^{-\sigma_1}-\alpha_jj{R_-}x^{-1}+\mathrm{O}\big(x^{-\min\{\sigma_2,\sigma_1+1\}}\big)\big),\quad \text{if}\ j=\omega_-+1,\ldots,0,\\[4pt] x^{R_+}\big(\alpha_j+\gamma_jx^{-\sigma_1}-\alpha_jj{R_+} x^{-1}+\mathrm{O}\big(x^{-\min\{\sigma_2,\sigma_1+1\}}\big)\big),\quad \text{if}\ j=1,\ldots,\omega_+.\end{cases}\]

Let

\begin{align*}\beta_j(x)=\begin{cases} x^{-R_-}f_j(x)-\alpha_j,\quad \text{if}\ j=\omega_-+1,\ldots,0,\\[4pt] x^{-R_+}f_j(x)-\alpha_j,\quad \text{if}\ j=1,\ldots,\omega_+.\end{cases}\end{align*}

Then there exist $N_3,\ N_4\in\mathbb{N}$ with $N_3>N_1, N_4$ such that for all $x\ge N_3$ ,

(12) \begin{equation}|\beta_j(x)|\le N_4x^{-\sigma_1},\quad j=\omega_-+1,\ldots,\omega_+.\end{equation}

From Theorem 1(3), we have

(13) \begin{equation}x^{R_-\,-R_+}\sum_{j=\omega_-+1}^0\left(\alpha_{j}+\beta_{j}(x)\right)\pi\left(x-j\right)=\sum_{j=1}^{\omega_+}\left(\alpha_j+\beta_j(x)\right)\pi\left(x-j\right).\end{equation}

Since $R_-\le R_+$ and $T_{\pi}(x)\le1$ for all $x\in\mathbb{N}_0$ , summing up in (13) from x to infinity yields

(14) \begin{equation}\sum_{y=x}^{\infty}y^{R_-\,-R_+}\sum_{j=\omega_-+1}^0\left(\alpha_{j}+\beta_{j}(y)\right)\pi\left(y-j\right)=\sum_{y=x}^{\infty}\sum_{j=1}^{\omega_+}\left(\alpha_j+\beta_j(y)\right)\pi\left(y-j\right).\end{equation}

In light of the monotonicity of $T_{\pi}(x)$ and $x^{R_-\,-R_+}$ , it follows from (12) that there exist $C=C(N_4)>0$ and $N_5\in\mathbb{N}$ with $N_5\ge N_3$ such that for all $x\ge N_5$ ,

\[\begin{split}\text{LHS}_{(14)}& \le\sum_{y=x}^{\infty}y^{R_-\,-R_+}\sum_{j=\omega_-+1}^0\left(\alpha_j+N_4y^{-\sigma_1}\right)\pi\left(y-j\right)\\& \le x^{R_-\,-R_+}\sum_{j=\omega_-+1}^0\left(\alpha_j+N_4x^{-\sigma_1}\right)\sum_{y=x}^{\infty}\pi\left(y-j\right)\\& =x^{R_-\,-R_+}\sum_{j=\omega_-+1}^0\left(\alpha_j+N_4x^{-\sigma_1}\right)T_{\pi}(x-j)\\& \le x^{R_-\,-R_+}T_{\pi}(x)\sum_{j=\omega_-+1}^0\left(\alpha_j+N_4x^{-\sigma_1}\right)\\& \le x^{R_-\,-R_+}T_{\pi}(x)\left(\alpha_-+Cx^{-\sigma_1}\right).\end{split}\]

Similarly, with a possibly larger C and $N_5$ , for all $x\ge N_5$ , one can show

\[\begin{split}\text{RHS}_{(14)}&\ge\left(\alpha_+-Cx^{-\sigma_1}\right)T_{\pi}(x-1),\end{split}\]

which further implies that for all x large enough,

\[1\ge\frac{T_{\pi}(x)}{T_{\pi}(x-1)}\ge x^{R_+-R_-}\frac{\alpha_+-Cx^{-\sigma_1}}{\alpha_-+Cx^{-\sigma_1}}>1,\]

since either (1) $R_+>R_-$ or (2) $R_+=R_-$ and $\alpha_+>\alpha_-$ holds. This contradiction shows that $\alpha\le0$ .

Next, we provide asymptotics of $T_{\pi}(x)$ case by case. Beforehand, let us illustrate the idea behind the proof by means of an example: the BDP case ( $\omega_+=-\omega_-=1$ ) for $R_->R_+$ . Again, assume without loss of generality that $\omega_*=1$ . From Corollary 2 it follows that

\[\frac{T_{\pi}(x)(q(x,x-1)+ q(x-1,x))}{T_{\pi}(x-1)q(x-1,x)} - \frac{T_{\pi}(x+1) q(x,x-1)} {T_{\pi}(x-1)q(x-1,x)} = 1,\]

which implies by non-negativity of $T_{\pi}(x+1) q(x,x-1)$ that

\[\frac{T_{\pi}(x)}{T_{\pi}(x-1)} \ge \frac{q(x-1,x)}{q(x-1,x) + q(x,x-1)}.\]

Note that

\begin{align*}q(x-1,x) = (x-1)^{R_+} \big(\alpha_1 +\gamma_1 x^{-\sigma_1} + O\big(x^{-\sigma_2}\big)\big),\end{align*}
\begin{align*}q(x,x-1) = x^{R_-}\big(\alpha_0+\gamma_0 x^{-\sigma_1}+O\big(x^{-\sigma_2}\big)\big).\end{align*}

This further shows that

\[T_{\pi}(x) \gtrsim \Gamma(x)^{R_+-R_-}\left(\frac{\alpha_1}{\alpha_0}\right)^{x+Cx^{1-(R_-\,-R_+)}+\mathrm{O}(\!\log x)}\]

for some constant $C>0$ .

To obtain the upper estimate, rewrite (13) as

\[(\alpha_0+\beta_0(x))\pi(x) = (\alpha_1+\beta_1(x))x^{R_+-R_-}\pi(x-1).\]

Summing up the above equation from x to infinity yields that

\[\sum_{y\ge x}(\alpha_0+\beta_0(y))\pi(y) = \sum_{y\ge x} (\alpha_1+\beta_1(y))y^{R_+-R_-}\pi(y-1).\]

From (12), it follows that there exist $C_1, N>0$ such that for all $x\ge N$ ,

\begin{align*}\sum_{y\ge x}(\alpha_0+\beta_0(y))\pi(y) \ge \left(\alpha_0-C_1x^{-\sigma_1}\right)T_{\pi}(x),\end{align*}

while

\begin{align*}\sum_{y\ge x} \big(\alpha_1+\beta_1(y)\big)y^{R_+-R_-}\pi(y-1) \le& \sum_{y=x}^{\infty}y^{R_+-R_-}\left(\alpha_1+C_1y^{-\sigma_1}\right)\pi\left(y-1\right)\\ \le& x^{R_+-R_-}T_{\pi}(x-1)\left(\alpha_1+C_1x^{-\sigma_1}\right).\end{align*}

Hence,

\[\frac{T_{\pi}(x)}{T_{\pi}(x-1)}\le x^{R_+-R_-}\frac{\alpha_1+C_1x^{-\sigma_1}}{\alpha_{0}-C_1x^{-\sigma_1}}=x^{R_+-R_-}\left(\frac{\alpha_1}{\alpha_{0}}+\mathrm{O}\big(x^{-\sigma_1}\big)\right),\]

which further implies that

\[T_{\pi}(x)\lesssim\Gamma(x)^{R_+-R_-}\left(\frac{\alpha_1}{\alpha_0}\right)^{x+C_2x^{1-\sigma_1}+\mathrm{O}(\!\log x)}\]

for some $C_2\ge C_1$ .
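The two BDP estimates above admit a quick numerical illustration (a sanity check with illustrative rates chosen here, not part of the proof). Consider a hypothetical birth–death chain with $q(x,x+1)=\alpha_1 x^{R_+}$ and $q(x,x-1)=\alpha_0 x^{R_-}$ , $R_->R_+$ ; detailed balance gives the stationary measure explicitly, and since the decay is super-exponential, $\pi(x)$ and $T_{\pi}(x)$ agree up to $\mathrm{O}(\!\log x)$ terms in the exponent:

```python
import math

# Illustrative parameters (not from the paper): death rate q(x, x-1) = alpha0 * x^{R_-},
# birth rate q(x, x+1) = alpha1 * x^{R_+}, with R_- > R_+.
R_minus, R_plus = 1, 0
alpha0, alpha1 = 1.0, 2.0

def log_pi(x):
    # log of the unnormalized stationary measure from detailed balance:
    # pi(x) = prod_{y=1}^{x} q(y-1, y) / q(y, y-1)
    return sum(
        math.log(alpha1) + R_plus * math.log(max(y - 1, 1))
        - math.log(alpha0) - R_minus * math.log(y)
        for y in range(1, x + 1)
    )

def log_predicted(x):
    # leading term of the estimates: Gamma(x)^{R_+ - R_-} * (alpha1/alpha0)^x
    return (R_plus - R_minus) * math.lgamma(x) + x * math.log(alpha1 / alpha0)

for x in (50, 100, 200):
    # discrepancy should be O(log x); for these parameters it equals -log x exactly
    print(x, log_pi(x) - log_predicted(x) + math.log(x))   # ≈ 0
```

Here $\log\pi(x)=x\log 2-\log\Gamma(x+1)$ , so the discrepancy from the leading term is exactly $-\log x$ , consistent with the $\mathrm{O}(\!\log x)$ correction in the exponent.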

Now we provide the detailed asymptotic estimates of the tail distribution case by case.

(i) $R_->R_+$ . Recall Stirling’s formula for the gamma function [1]:

\[\log\Gamma(x)=x\log x-x+\mathrm{O}\big(\!\log x\big),\]

where $\log$ is the natural logarithm. Based on this formula, it suffices to prove that there exists $\widetilde{C}>0$ such that

(15) \begin{equation}T_{\pi}(x)\gtrsim\Gamma(x)^{R_+-R_-}\left(\frac{\alpha_1}{\alpha_0}\right)^{x+\widetilde{C}x^{1-(R_-\,-R_+)}+\mathrm{O}(\!\log x)},\end{equation}
(16) \begin{equation}T_{\pi}(x)\lesssim\Gamma\big(x\omega_+^{-1}\big)^{R_+-R_-}\left(\frac{\alpha_+}{\alpha_0}\omega_+^{R_+-R_-}\right)^{\omega^{-1}_+x+\widetilde{C}x^{1-\sigma_1}+\mathrm{O}(\!\log x)}.\end{equation}

Next, we prove (15) and (16) one by one.

We first show (15). Recall that ( $\textbf{A2}$ ) ensures that there exists $N\in\mathbb{N}$ such that $q(x,x+\omega)$ is a strictly positive non-decreasing polynomial on $\mathbb{N}_0+N$ for all $\omega\in\Omega$ . Moreover, $A_j=\{\omega\in\Omega_-\,:\, \omega<j\}$ if $j\le0$ , and $A_j=\{\omega\in\Omega_+\,:\, \omega\ge j\}$ if $j>0$ . It follows from Corollary 2 that for all $x\in\mathbb{N}_0+N-\omega_-$ ,

(17) \begin{align}\nonumber&T_{\pi}(x)\Biggl(\sum_{\omega\in A_0}q(x,x+\omega)+\sum_{ \omega\in A_1}q(x-1,x-1+\omega)\Biggr)\\ \nonumber&+\sum_{j=\omega_-}^{-1}T_{\pi}(x-j)\cdot\Biggl(\sum_{\omega\in A_{j}}q(x-j,x-j+\omega)-\sum_{\omega\in A_{j+1}}q(x-(j+1),x-(j+1)+\omega)\Biggr)\\ =& \sum_{j=1}^{\omega_+}T_{\pi}(x-j)\Biggl(\sum_{\omega\in A_j}q(x-j,x-j+\omega)-\sum_{\omega \in A_{j+1}}q(x-(j+1),x-(j+1)+\omega)\Biggr).\end{align}

Furthermore, since $R_->R_+$ , we have the following estimates for both sides of the above equality:

\[\begin{split} \text{LHS}_{(17)}=&T_{\pi}(x)\Biggl(\sum_{\omega\in A_0}q(x,x+\omega)+\sum_{\omega\in A_1}q(x-1,x-1+\omega)\Biggr) +\sum_{j=\omega_-}^{-1}T_{\pi}(x-j)\\&\quad\cdot\bigl({-}q(x-(j+1),x-(j+1)+{j})\\&+\sum_{\omega\in A_{j}}\bigl(q(x-j,x-j+\omega)-q(x-(j+1),x-(j+1)+\omega)\bigr)\bigr)\\ \le& T_{\pi}(x)\Biggl(\sum_{\omega\in A_0}q(x,x+\omega)+\sum_{\omega\in A_1}q(x-1,x-1+\omega)\\ &+\sum_{j=\omega_-}^{-1}\sum_{\omega\in A_{j}}\bigl(q(x-j,x-j+\omega)-q(x-(j+1),x-(j+1)+\omega)\bigr)\Bigg)\\=&T_{\pi}(x)x^{R_-}\big(\alpha_{0}+\mathrm{O}\big(x^{-\tilde{\sigma}}\big)\big),\end{split}\]

where $\tilde{\sigma}=\min\{1,R_-\,-R_+\}>0$ .

By the monotonicity of $q(x,x+\omega)$ ,

\[\begin{split}\text{RHS}_{(17)}&\ge\sum_{j=1}^{\omega_+}T_{\pi}(x-j)\Biggl(\sum_{\omega\in A_j}q(x-j,x-j+\omega)-\sum_{\omega \in A_{j+1}}q(x-j,x-j+\omega)\Biggr)\\&= T_{\pi}(x-1)\Biggl(\sum_{\omega\in A_1}q(x,x+\omega)-\sum_{\omega\in A_1}\bigl(q(x,x+\omega)-q(x-\omega,x)\bigr)\Biggr)\\&\ge T_{\pi}(x-1)\big(\alpha_1x^{R_+}+\mathrm{O}\big(x^{R_+-1}\big)\big).\end{split}\]

Then there exist $N_1>N_2>0$ such that for all $x\ge N_1$ ,

(18) \begin{multline}\frac{T_{\pi}(x)}{T_{\pi}(x-1)}\ge x^{R_+-R_-}\frac{\alpha_1+\mathrm{O}\big(x^{-1}\big)}{\alpha_{0}+\mathrm{O}(x^{-\tilde{\sigma}})}\\=x^{R_+-R_-}\Big(\frac{\alpha_1}{\alpha_{0}}+\mathrm{O}\big(x^{-\tilde{\sigma}}\big)\Big)\ge \frac{\alpha_1}{\alpha_{0}}x^{R_+-R_-}\big(1-N_2x^{-\tilde{\sigma}}\big).\end{multline}

Hence, if $0<R_-\,-R_+<1$ , then $\tilde{\sigma}=R_-\,-R_+<1$ , and there exists $\widetilde{C}=\widetilde{C}(N_1,N_2)>0$ such that for all $x\ge N_1$ ,

\begin{align*}T_{\pi}(x)&\ge T_{\pi}(N_1-1)\prod_{j=0}^{x-N_1}\left(\frac{\alpha_1}{\alpha_{0}}(x-j)^{R_+-R_-}\left(1-\frac{N_2}{(x-j)^{\tilde{\sigma}}}\right)\right)\\&=T_{\pi}(N_1-1)\prod_{j=N_1}^{x}\left(\frac{\alpha_1}{\alpha_{0}}j^{R_+-R_-}\right)\prod_{j=N_1}^{x}\left(1-\frac{N_2}{j^{\tilde{\sigma}}}\right)\\&\gtrsim \left(\frac{\alpha_1}{\alpha_{0}}\right)^{x+1-N_1}\frac{\Gamma\left(x+1\right)^{R_+-R_-}}{\Gamma(N_1)^{R_+-R_-}}\exp\Bigg(\sum_{j=N_1}^x-2N_2j^{-\tilde{\sigma}}\Bigg) \\&\gtrsim\Gamma(x)^{R_+-R_-}\left(\frac{\alpha_1}{\alpha_{0}}\right)^{x-\widetilde{C}x^{1-\tilde{\sigma}}}x^{R_+-R_-},\end{align*}

since $1-x^{-1}\ge \exp\big({-}2x^{-1}\big)$ for large x, and $T_{\pi}(N_1-1)>0$ because $\text{supp}\,\pi=\mathcal{Y}$ is unbounded. Hence (15) holds. Similarly, if $R_-\,-R_+\ge1$ , then $\tilde{\sigma}=1$ , and analogous arguments can be applied.

Next we show (16). Rewrite (13) as

(19) \begin{equation}\sum_{j=\omega_-+1}^0\left(\alpha_{j}+\beta_{j}(x)\right)\pi\left(x-j\right)=x^{R_+-R_-}\sum_{j=1}^{\omega_+}\left(\alpha_j+\beta_j(x)\right)\pi\left(x-j\right).\end{equation}

Summing up in (19) from x to infinity yields

(20) \begin{equation}\sum_{y=x}^{\infty}\sum_{j=\omega_-+1}^0\left(\alpha_{j}+\beta_{j}(y)\right)\pi\left(y-j\right)=\sum_{y=x}^{\infty}y^{R_+-R_-}\sum_{j=1}^{\omega_+}\left(\alpha_j+\beta_j(y)\right)\pi\left(y-j\right).\end{equation}

The remaining estimates are obtained similarly. It follows from (12) that there exist $C=C(N_4)>0$ and $N_5\in\mathbb{N}$ such that for all $x\ge N_5$ ,

\[\begin{split}\text{LHS}_{(20)}&\ge\left(\alpha_0-Cx^{-\sigma_1}\right)T_{\pi}(x),\end{split}\]
\[\begin{split}\text{RHS}_{(20)}&\le \sum_{y=x}^{\infty}y^{R_+-R_-}\sum_{j=1}^{\omega_+}\left(\alpha_j+N_4y^{-\sigma_1}\right)\pi\left(y-j\right)\\& \le x^{R_+-R_-}T_{\pi}(x-\omega_+)\left(\alpha_++Cx^{-\sigma_1}\right),\end{split}\]

which together further imply that

\[\frac{T_{\pi}(x)}{T_{\pi}(x-\omega_+)}\le x^{R_+-R_-}\frac{\alpha_++Cx^{-\sigma_1}}{\alpha_{0}-Cx^{-\sigma_1}}=x^{R_+-R_-}\left(\frac{\alpha_+}{\alpha_{0}}+\mathrm{O}\big(x^{-\sigma_1}\big)\right).\]

The remaining arguments are analogous to the arguments for (15).

(ii) $R_-=R_+$ and $\alpha_->\alpha_+$ . Analogously to (i), we will show that there exists $\widetilde{C}>0$ such that

(21) \begin{align} T_{\pi}(x)&\gtrsim\left(\frac{\alpha_+}{\alpha_-}\right)^{x+\widetilde{C}x^{1-\sigma_1}+\mathrm{O}(\!\log x)}, \end{align}
(22) \begin{align} T_{\pi}(x)&\lesssim\left(\frac{\alpha_+}{\alpha_-}\right)^{(\omega_+-\omega_-\,-1)^{-1}x+\widetilde{C}x^{1-\sigma_1}+\mathrm{O}(\!\log x)}.\end{align}

We first prove (21). Since $R=R_-=R_+$ ,

\begin{align*}f_j(x)=x^R\big(\alpha_j+\beta_j(x)\big),\quad \beta_j(x)=\mathrm{O}\big(x^{-\sigma_1}\big),\quad j=\omega_-+1,\ldots,\omega_+.\end{align*}

Moreover, $\alpha=\alpha_+-\alpha_-<0$ implies that

\begin{align*}\sum_{j=1}^{\omega_+}\alpha_j<\sum_{j=\omega_-+1}^0\alpha_{j}.\end{align*}

From (13), it follows that

\[\sum_{j=\omega_-+1}^0\pi\left(x-j\right)\alpha_{j}+\sum_{j=\omega_-+1}^0\pi\left(x-j\right)\beta_{j}(x)=\sum_{j=1}^{\omega_+}\pi\left(x-j\right)\alpha_{j}+\sum_{j=1}^{\omega_+}\pi\left(x-j\right)\beta_{j}(x).\]

Summing up the above equality from x to $\infty$ yields

\[\begin{split}&\sum_{j=\omega_-+1}^0\sum_{y=x}^{\infty}\pi\left(y-j\right)\alpha_{j}+\sum_{j=\omega_-+1}^0\sum_{y=x}^{\infty}\pi\left(y-j\right)\beta_{j}(y)\\&\quad=\sum_{j=1}^{\omega_+}\sum_{y=x}^{\infty}\pi\left(y-j\right)\alpha_j+\sum_{j=1}^{\omega_+}\sum_{y=x}^{\infty}\pi\left(y-j\right)\beta_j(y).\end{split}\]

Since each double sum in the above equality is convergent, we have

\begin{align*} 0&=\sum_{j=\omega_-+1}^0\alpha_{j}\sum_{y=x}^{\infty}(\pi(y)-\pi\left(y-j\right))+\sum_{j=1}^{\omega_+}\alpha_j\sum_{y=x}^{\infty}(\pi\left(y-j\right)-\pi(y))\\ &\quad -\left(\sum_{j=\omega_-+1}^0\alpha_{j}-\sum_{j=1}^{\omega_+}\alpha_j\right)T_{\pi}(x)\\ &\quad+\sum_{j=\omega_-+1}^0\sum_{y=x}^{\infty}(\pi(y)\beta_{j}(y+j)-\pi\left(y-j\right)\beta_{j}(y))\\ &\quad+\sum_{j=1}^{\omega_+}\sum_{y=x}^{\infty}(\pi\left(y-j\right)\beta_j(y)-\pi(y)\beta_{j}(y+j))\\ &\quad+\sum_{j=1}^{\omega_+}\sum_{y=x}^{\infty}\pi(y)\beta_j(y+j)-\sum_{j=\omega_-+1}^0\sum_{y=x}^{\infty}\pi(y)\beta_{j}(y+j).\end{align*}

This further yields the following equality:

(23) \begin{align}\nonumber &(\alpha_-\,-\alpha_+)T_{\pi}(x)+\sum_{j=\omega_-+1}^0\sum_{y=x}^{\infty}\pi(y)\beta_{j}(y+j)-\sum_{j=1}^{\omega_+}\sum_{y=x}^{\infty}\pi(y)\beta_j(y+j)\\&=\sum_{j=\omega_-+1}^{-1}\sum_{\ell=0}^{-j-1}\pi\left(x+\ell\right)\widetilde{f}_{j}(x+j+\ell)+\sum_{j=1}^{\omega_+} \sum_{\ell=1}^{j}\pi\left(x-\ell\right)\widetilde{f}_{j}(x+j-\ell),\end{align}

where $\widetilde{f}_j(x)=\alpha_j+\beta_j(x)=x^{-R}f_j(x)\ge0$ , $j=\omega_-+1,\ldots,\omega_+$ . From (12), it follows that there exist $C,N>0$ such that

\[\Biggl|\sum_{j=\omega_-+1}^0\sum_{y=x}^{\infty}\pi(y)\beta_{j}(y+j)-\sum_{j=1}^{\omega_+}\sum_{y=x}^{\infty}\pi(y)\beta_j(y+j)\Biggl| \le C\sum_{y=x}^{\infty}\pi(y)y^{-\sigma_1}\le Cx^{-\sigma_1}T_{\pi}(x),\]

for $x\ge N$ . Hence

\[\text{LHS}_{(23)}\le\left((\alpha_-\,-\alpha_+)+Cx^{-\sigma_1}\right)T_{\pi}(x).\]

Using Fubini’s theorem, we have

(24) \begin{equation}\text{RHS}_{(23)}=\sum_{j=\omega_-+2}^0\pi(x-j)\sum_{\ell=\omega_-+1}^{j-1}\widetilde{f}_{\ell}(x+\ell-j)+\sum_{j=1}^{\omega_+}\pi(x-j)\sum_{\ell=j}^{\omega_+}\widetilde{f}_{\ell}(x+\ell-j).\end{equation}

Hence, further choosing larger N and C, we have, for all $x\ge N$ ,

(25) \begin{align}\nonumber\text{RHS}_{(23)} &\ge\pi\left(x-1\right)\sum_{j=1}^{\omega_+}\widetilde{f}_j(x+j-1) =\pi\left(x-1\right)\sum_{j=1}^{\omega_+}\big(\alpha_{j}+\beta_{j}(x+j-1)\big)\\ &\ge \big(\alpha_+-Cx^{-\sigma_1}\big)\pi\left(x-1\right). \end{align}

Together with $\pi(x-1)=T_{\pi}(x-1)-T_{\pi}(x)$ , this implies that

\[\frac{T_{\pi}(x)}{T_{\pi}(x-1)}\ge\frac{\alpha_+-Cx^{-\sigma_1}}{\alpha_-}.\]

Using similar arguments as in the proof of (i), one obtains (21).

Next we show (22). We establish the reverse estimates for both sides of (23). Similarly, there exist some $C,N>0$ such that for all $x\ge N$ ,

\[ \text{LHS}_{(23)}\ge\big(\alpha_-\,-\alpha_+-Cx^{-\sigma_1}\big)T_{\pi}(x)\ge\big(\alpha_-\,-\alpha_+-Cx^{-\sigma_1}\big)T_{\pi}(x-\omega_-\,-1),\]
\[\text{RHS}_{(23)}\le\big(\alpha_++Cx^{-\sigma_1}\big)\big(T_{\pi}\big(x-\omega_+\big)-T_{\pi}(x-\omega_-\,-1)\big).\]

This implies that for possibly larger C, N, for all $x\ge N$ ,

\[\frac{T_{\pi}(x-\omega_-\,-1)}{T_{\pi}(x-\omega_+)}\le\frac{\alpha_++Cx^{-\sigma_1}}{\alpha_-}.\]

The remaining arguments are similar to those in the proof of (i).
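Case (ii) can likewise be illustrated numerically (a toy example with rates chosen for illustration, not taken from the paper). For a birth–death chain with asymptotically constant rates we have $\omega_+-\omega_-\,-1=1$ , so the exponents in (21) and (22) coincide and the tail is geometric with ratio $\alpha_+/\alpha_-$ :

```python
import math

lam, mu = 1.0, 2.0   # illustrative rates: R_- = R_+ = 0, alpha_+ = lam, alpha_- = mu
rho = lam / mu       # = alpha_+ / alpha_-

# Detailed balance: pi(x+1) = pi(x) * lam / mu, so pi is geometric.
N = 500
pi = [1.0]
for _ in range(N - 1):
    pi.append(pi[-1] * rho)
Z = sum(pi)          # truncation error ~ rho^N is negligible

def log_tail_rate(x):
    # log T_pi(x) / x, which should approach log(alpha_+ / alpha_-)
    return math.log(sum(pi[x:]) / Z) / x

for x in (50, 100, 200):
    print(x, log_tail_rate(x))   # → log(1/2) ≈ -0.6931
```

The observed exponential decay rate matches $\log(\alpha_+/\alpha_-)$ , as predicted by the two-sided bounds.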

(iii)–(vi) $\alpha=0$ . Hence $\alpha_+=\alpha_-$ . Recall

\begin{align*}\Delta=\alpha_+^{-1}\cdot\begin{cases}-\gamma,\qquad\quad\ \, \text{if}\ \sigma_1<1,\\[4pt] -\gamma+R\vartheta,\quad \text{if}\ \sigma_1=1.\end{cases}\end{align*}

Let $\delta=\Delta(\omega_+-\omega_-\,-1)^{-1}$ . For $j=\omega_-+1,\ldots,\omega_+$ , define

\begin{align*}r_j=\begin{cases} \gamma_j,\qquad\qquad\ \, \text{if}\ \sigma_1<1, \\[4pt] \gamma_j-jR\alpha_j,\qquad \text{if}\ \sigma_1=1,\end{cases}\end{align*}
\begin{align*}\vartheta_j(x)=\beta_j(x)-r_j x^{-\sigma_1}.\end{align*}

Hence we have

\begin{align*}\vartheta_j(x)=\mathrm{O}(x^{-\overline{\sigma_2}}),\quad j=\omega_-+1,\ldots,\omega_+,\end{align*}

where

\begin{align*}\overline{\sigma_2}=\left\{\begin{array}{l@{\quad}l}\min\{1,\sigma_2\},& \text{if}\ \sigma_1<1,\\ \sigma_2,& \text{if}\ \sigma_1=1.\end{array}\right.\end{align*}

Let $\eta=\overline{\sigma_2}-\sigma_1$ and $\varepsilon=\min\{\sigma_1,\eta\}$ . Hence $0<\varepsilon\le\eta\le1$ . If $\sigma_1<1$ , then $\eta\le1-\sigma_1$ . If $\sigma_1=1$ , then $\varepsilon=\eta$ .

To show (iii)–(vi), it suffices to prove that there exists $C>0$ such that

(26) \begin{equation}T_{\pi}(x)\gtrsim\left\{\begin{array}{l@{\quad}l}\exp\Bigl({-}\frac{\Delta}{1-\sigma_1} x^{1-\sigma_1}+\mathrm{O}\bigl(x^{1-\sigma_1-\varepsilon}+\log x\bigr)\Bigr),& \text{if}\ \sigma_1<1,\ \sigma_1+\varepsilon\neq1, \Delta>0,\\[8pt] \exp\Bigl({-}\frac{\Delta}{1-\sigma_1} x^{1-\sigma_1}+\mathrm{O}(\!\log x)\Bigr),& \text{if}\ \sigma_1<1,\ \sigma_1+\varepsilon=1, \Delta>0,\\[8pt] x^{-(\Delta-1)},& \text{if}\ \sigma_1=1, \Delta>0,\\[4pt] \exp\Bigl({-}\frac{C}{1-\sigma_2}x^{1-\sigma_2}+\mathrm{O}\bigl(x^\sigma+\log x\bigr)\Bigr), & \text{if}\ {\sigma_2}<1,\ \sigma_1+\sigma_2\neq1, \Delta=0,\\[8pt] \exp\Bigl({-}\frac{C}{1-\sigma_2}x^{1-{\sigma_2}}+\mathrm{O}(\!\log x)\Bigr), & \text{if}\ \sigma_2<1,\ \sigma_1+\sigma_2=1, \Delta=0,\\[8pt] x^{-C},& \text{if}\ {\sigma_2}\ge1, \Delta=0, \end{array}\right.\end{equation}

where $\sigma=\max\{1-\sigma_1-\sigma_2, 0\}$ , and if $\Delta>0$ , then

(27) \begin{equation}T_{\pi}(x)\lesssim \left\{\begin{array}{l@{\quad}l} \exp\Bigl({-}\frac{\delta}{1-\sigma_1} x^{1-\sigma_1}+\mathrm{O}\bigl(x^{\max\{1-\sigma_1-\varepsilon,0\}}+\log x\bigr)\Bigr), & \text{if}\ \sigma_1<1,\ \sigma_1+\varepsilon\neq1,\\[7pt] \exp\Bigl({-}\frac{\delta}{1-\sigma_1} x^{1-\sigma_1}+\mathrm{O}(\!\log x)\Bigr),& \text{if}\ \sigma_1<1,\ \sigma_1+\varepsilon=1,\\[7pt] x^{-\max\{\delta-1,0\}},& \text{if}\ \sigma_1=1. \end{array}\right. \end{equation}

To show (26) and (27) for a probability distribution $\mu$ on $\mathbb{N}_0$ , define its weighted tail distribution on $\mathbb{N}_0$ as

\begin{align*}W_{\mu}\,:\,\mathbb{N}\to[0,1],\quad x\mapsto\sum_{y=x}^{\infty}y^{-\sigma_1}\mu(y).\end{align*}

In the following, we will show that there exists a constant $C>\sigma_1$ such that

(28) \begin{equation}W_{\pi}(x)\gtrsim \left\{\begin{array}{l@{\quad}l}\exp\Bigl(\frac{-\Delta}{1-\sigma_1}x^{1-\sigma_1}+\mathrm{O}\bigl(x^{1-\sigma_1-\varepsilon}\bigr)\Bigr),& \text{if}\ \sigma_1<1,\ \sigma_1+\varepsilon\neq1, \Delta>0,\\[7pt] \exp\Bigl(\frac{-\Delta}{1-\sigma_1}x^{1-\sigma_1}+\mathrm{O}(\!\log x)\Bigr),& \text{if}\ \sigma_1<1,\ \sigma_1+\varepsilon=1, \Delta>0, \\[7pt] x^{-\Delta},& \text{if}\ \sigma_1=1, \Delta>0,\\[7pt] \exp\Bigl(\frac{-C}{1-\sigma_2}x^{1-\sigma_2}+\mathrm{O}\bigl(x^{\max\{1-\sigma_1-{\sigma_2}, 0\}}\bigr)\Bigr),& \text{if}\ {\sigma_2}<1,\ \sigma_1+\sigma_2\neq1, \Delta=0,\\[7pt] \exp\Bigl(\frac{-C}{1-{\sigma_2}}x^{1-{\sigma_2}}+\mathrm{O}(\!\log x)\Bigr),& \text{if}\ \sigma_2<1,\ \sigma_1+\sigma_2=1, \Delta=0,\\[7pt] x^{-C},& \text{if}\ {\sigma_2}\ge1, \Delta=0, \end{array}\right. \end{equation}

with $\Delta>1$ , when $\sigma_1=1$ . Moreover, if $\Delta>0$ , then

(29) \begin{equation}W_{\pi}(x)\lesssim \left\{\begin{array}{l@{\quad}l}\exp\Bigl(\frac{-\delta}{1-\sigma_1}x^{1-\sigma_1}+\mathrm{O}(x^{\max\{1-\sigma_1-\varepsilon,0\}})\Bigr),& \text{if}\ \sigma_1<1,\ \sigma_1+\varepsilon\neq1,\\[7pt] \exp\Bigl(\frac{-\delta}{1-\sigma_1}x^{1-\sigma_1}+\mathrm{O}(\!\log x)\Bigr),& \text{if}\ \sigma_1<1,\ \sigma_1+\varepsilon=1,\\[7pt] x^{-\delta},& \text{if}\ \sigma_1=1. \end{array}\right. \end{equation}

Then we will prove (26) and (27) based on (28) and (29).

Step I. Prove (28) and (29). Here we will also show $\Delta\ge0$ , and in particular $\Delta>1$ when $\sigma_1=1$ . We first show (28). Since $\alpha_+=\alpha_-$ ,

\[\begin{split}\text{LHS}_{(23)}&=\sum_{j=\omega_-+1}^0\sum_{y=x}^{\infty}\pi(y)\left(r_j(y+j)^{-\sigma_1}+\vartheta_{j}(y+j)\right)\\&\quad-\sum_{j=1}^{\omega_+}\sum_{y=x}^{\infty}\pi(y)\left(r_j(y+j)^{-\sigma_1}+\vartheta_{j}(y+j)\right)\\&=\left(\sum_{j=\omega_-+1}^0r_j-\sum_{j=1}^{\omega_+}r_j\right)W_{\pi}(x)\\&\quad+\sum_{j=\omega_-+1}^0\sum_{y=x}^{\infty}\pi(y)\left(\vartheta_{j}(y+j)+r_j\big((y+j)^{-\sigma_1}-y^{-\sigma_1}\big)\right)\\&\quad-\sum_{j=1}^{\omega_+}\sum_{y=x}^{\infty}\pi(y)\left(\vartheta_{j}(y+j)+r_j\big((y+j)^{-\sigma_1}-y^{-\sigma_1}\big)\right).\end{split}\]

By Lemma 1,

\[\sum_{j=\omega_-+1}^0r_j-\sum_{j=1}^{\omega_+}r_j=\alpha_+\Delta.\]

Moreover,

\[\begin{split}&\hspace{-2cm}\Biggl|\sum_{j=\omega_-+1}^0\sum_{y=x}^{\infty}\pi(y)\left(\vartheta_{j}(y+j)+r_j\big((y+j)^{-\sigma_1}-y^{-\sigma_1}\big)\right)\\&\quad-\sum_{j=1}^{\omega_+}\sum_{y=x}^{\infty}\pi(y)\left(\vartheta_{j}(y+j)+r_j\big((y+j)^{-\sigma_1}-y^{-\sigma_1}\big)\right)\Biggl|\\ & \lesssim \sum_{y=x}^{\infty}\pi(y)y^{-\sigma_1-\eta}\le x^{-\eta}W_{\pi}(x).\end{split}\]

Since $\text{RHS}_{(24)}\ge0$ for all large x, we have $\Delta\ge0$ .

From (25) it follows that there exist $C, N\in\mathbb{N}$ such that for all $x\ge N$ ,

\[ \text{LHS}_{(23)}\le\alpha_+\left(\Delta+Cx^{-\eta}\right)W_{\pi}(x),\]

while

\[\begin{split}\text{RHS}_{(23)}&\ge\alpha_+(1-Cx^{-\sigma_1})\pi(x-1)\\&=\alpha_+(1-Cx^{-\sigma_1})(x-1)^{\sigma_1}(W_{\pi}(x-1)-W_{\pi}(x))\\&=\alpha_+\big(x^{\sigma_1}-C-\sigma_1 x^{\sigma_1-1}+\mathrm{O}\big(x^{\sigma_1-2}\big)\big)(W_{\pi}(x-1)-W_{\pi}(x)).\end{split}\]

Further choosing larger N and C, by the monotonicity of $W_{\pi}$ , for all $x\ge N$ ,

\begin{multline*}\big(x^{\sigma_1}-C-\sigma_1 x^{\sigma_1-1}+\mathrm{O}\big(x^{\sigma_1-2}\big)\big)W_{\pi}(x-1)\\\le \left(x^{\sigma_1}-C+\Delta-\sigma_1 x^{\sigma_1-1}+Cx^{-\eta}+\mathrm{O}\big(x^{\sigma_1-2}\big)\right)W_{\pi}(x).\end{multline*}

If $\sigma_1<1$ , then $\eta\le1-\sigma_1$ , and hence $\eta+\sigma_1-2\le-\sigma_1$ . If $\sigma_1=1$ , then $\varepsilon=\eta$ . Recall that $\overline{\sigma_2}=\eta+\sigma_1$ . Then we have

\[\begin{split} \frac{W_{\pi}(x)}{W_{\pi}(x-1)}&\ge\frac{x^{\sigma_1}-C-\sigma_1 x^{\sigma_1-1}+\mathrm{O}\big(x^{\sigma_1-2}\big)} {x^{\sigma_1}-C+\Delta+Cx^{-\eta}-\sigma_1 x^{\sigma_1-1}+\mathrm{O}\big(x^{\sigma_1-2}\big)}\\[4pt] &=\left\{\begin{array}{l@{\quad}l} 1-\Delta x^{-\sigma_1}\big(1+\mathrm{O}\big(x^{-\varepsilon}\big)\big),& \text{if}\ \Delta>0,\\[4pt] 1-Cx^{-\overline{\sigma_2}}\big(1+\mathrm{O}\big(x^{\max\{-\sigma_1,\overline{\sigma_2}-2\}}\big)\big),& \text{if}\ \Delta=0.\end{array}\right. \end{split}\]

First assume $\Delta>0$ . Since $\varepsilon\le1$ , by the Euler–Maclaurin formula,

\[\begin{split} \log\frac{W_{\pi}(x)}{W_{\pi}(N-1)}&\ge\sum_{j=N}^{x}\log\big(1-\Delta j^{-\sigma_1}+\mathrm{O}\big(j^{-\sigma_1-\varepsilon}\big)\big)\\[4pt] &=\left\{\begin{array}{l@{\quad}l}\frac{-\Delta}{1-\sigma_1}x^{1-\sigma_1}+\mathrm{O}\big(x^{\max\{0,1-\sigma_1-\varepsilon\}}\big), & \text{if}\ \sigma_1<1,\ \sigma_1+\varepsilon\neq1,\\[4pt] \frac{-\Delta}{1-\sigma_1}x^{1-\sigma_1}+\mathrm{O}(\!\log x),& \text{if}\ \sigma_1<1,\ \sigma_1+\varepsilon=1,\\[4pt] -\Delta\log x+\mathrm{O}(1),& \text{if}\ \sigma_1=1,\end{array}\right. \end{split}\]

which implies that

\[ W_{\pi}(x)\gtrsim\left\{\begin{array}{l@{\quad}l}\exp\Bigl(\frac{-\Delta}{1-\sigma_1}x^{1-\sigma_1}+\mathrm{O}\big(x^{1-\sigma_1-\varepsilon}\big)\Bigr),& \text{if}\ \sigma_1<1,\ \sigma_1+\varepsilon\neq1,\\[4pt] \exp\Bigl(\frac{-\Delta}{1-\sigma_1}x^{1-\sigma_1}+\mathrm{O}(\!\log x)\Bigr),& \text{if}\ \sigma_1<1,\ \sigma_1+\varepsilon=1,\\[4pt] x^{-\Delta},& \text{if}\ \sigma_1=1;\end{array}\right. \]

i.e., (28) holds when $\Delta>0$ . Moreover, since $x^{\sigma_1} W_{\pi}(x)\le T_{\pi}(x)\to0$ as $x\to\infty$ , we have

\begin{align*}\Delta>\left\{\begin{array}{l@{\quad}l}0,& \text{if}\ \sigma_1<1,\\[4pt] 1,& \text{if}\ \sigma_1=1.\end{array}\right.\end{align*}
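The Euler–Maclaurin step above can be checked numerically. The following sketch uses illustrative values $\Delta=1$ , $\sigma_1=1/2$ (chosen here, not from the theorem) and compares the logarithmic sum with its leading term:

```python
import math

# Illustrative values (chosen here, not from the theorem): Delta = 1, sigma1 = 1/2.
Delta, sigma1, N = 1.0, 0.5, 2

def lhs(x):
    # sum_{j=N}^{x} log(1 - Delta * j^{-sigma1})
    return sum(math.log(1.0 - Delta * j ** (-sigma1)) for j in range(N, x + 1))

def leading(x):
    # leading Euler-Maclaurin term: -Delta / (1 - sigma1) * x^{1 - sigma1}
    return -Delta / (1.0 - sigma1) * x ** (1.0 - sigma1)

for x in (10**3, 10**4, 10**5):
    print(x, lhs(x) / leading(x))   # ratio tends to 1 as x grows
```

The remainder is of the order $\log x$ here (the case $\sigma_1+\varepsilon=1$ with these parameters), so the ratio converges to 1 only logarithmically fast.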

Now assume $\Delta=0$ ; then

\[ W_{\pi}(x)\gtrsim\left\{\begin{array}{l@{\quad}l}\exp\Bigl(\frac{-C}{1-\overline{\sigma_2}}x^{1-\overline{\sigma_2}}+\mathrm{O}\big(x^{\max\{1-\sigma_1-\sigma_2,0\}}\big)\Bigr), & \text{if}\ \overline{\sigma_2}\neq1,\ \sigma_1+\sigma_2\neq1,\\[4pt] \exp\Bigl(\frac{-C}{1-\overline{\sigma_2}}x^{1-\overline{\sigma_2}}+\mathrm{O}(\!\log x)\Bigr),&\text{if}\ \overline{\sigma_2}\neq1,\ \sigma_1+\sigma_2=1,\\[4pt] x^{-C},& \text{if}\ \overline{\sigma_2}=1,\end{array}\right. \]

where we use the fact that $\sigma_1+\overline{\sigma_2}=1$ implies $0<\sigma_1,\sigma_2<1$ and $\sigma_1+\sigma_2=1$ . Moreover, also by the fact that $x^{\sigma_1} W_{\pi}(x)\le T_{\pi}(x)\to0$ as $x\to\infty$ , we have $\overline{\sigma_2}\le1$ , which implies that $\sigma_1<1$ . In addition, $C>\sigma_1$ when $\overline{\sigma_2}=1$ , i.e., $\sigma_1<1\le\sigma_2$ . Hence, for some $C>\sigma_1$ ,

\[ W_{\pi}(x)\gtrsim\left\{\begin{array}{l@{\quad}l}\exp\Bigl(\frac{-C}{1-\sigma_2}x^{1-\sigma_2}+\mathrm{O}\big(x^{\max\{1-\sigma_1-{\sigma_2}, 0\}}\big)\Bigr),& \text{if}\ {\sigma_2}<1,\ \sigma_1+\sigma_2\neq1,\\[4pt] \exp\Bigl(\frac{-C}{1-{\sigma_2}}x^{1-{\sigma_2}}+\mathrm{O}(\!\log x)\Bigr),& \text{if}\ \sigma_2<1,\ \sigma_1+\sigma_2=1,\\[4pt] x^{-C},& \text{if}\ {\sigma_2}\ge1.\end{array}\right. \]

Next we show (29) by establishing the reverse estimates for both sides of (23). From (24) it follows that there exist positive constants N, C, and $C_i$ ( $i=1,2$ ) such that for $x\ge N$ ,

\[ \text{LHS}_{(23)}\ge\alpha_+\left(\Delta-Cx^{-\eta}\right)W_{\pi}(x)\ge\alpha_+\left(\Delta-Cx^{-\eta}\right) W_{\pi}(x-(\omega_-+1)),\]

whereas

\[\begin{split}\text{RHS}_{(23)}&\le\alpha_+\left(1+C_1x^{-\sigma_1}\right)\sum_{j=\omega_-+2}^{\omega_+}\pi\left(x-j\right)\\&\le\alpha_+\left(x^{\sigma_1}+C_2\right)\sum_{j=\omega_-+2}^{\omega_+}\pi\left(x-j\right)(x-j)^{-\sigma_1}\\&=\alpha_+\left(x^{\sigma_1}+C_2\right)(W_{\pi}(x-\omega_+)-W_{\pi}(x-(\omega_-+1))).\end{split}\]

Hence, when $\Delta>0$ , for all $x\ge N$ we have

\[\frac{W_{\pi}(x-(\omega_-+1))}{W_{\pi}(x-\omega_+)}\le\frac{x^{\sigma_1}+C_2}{ x^{\sigma_1}+C_2+\Delta-Cx^{-\eta}}=1-\Delta x^{-\sigma_1}+\mathrm{O}\big(x^{-\sigma_1-\varepsilon}\big).\]

Analogously to the above analysis, one can show that

\[ W_{\pi}(x)\lesssim\left\{\begin{array}{l@{\quad}l}\exp\Bigl(\frac{-\delta}{1-\sigma_1}x^{1-\sigma_1}+\mathrm{O}\big(x^{\max\{1-\sigma_1-\varepsilon,0\}}\big)\Bigr),& \text{if}\ \sigma_1<1,\ \sigma_1+\varepsilon\neq1,\\[4pt] \exp\Bigl(\frac{-\delta}{1-\sigma_1}x^{1-\sigma_1}+\mathrm{O}(\!\log x)\Bigr),& \text{if}\ \sigma_1<1,\ \sigma_1+\varepsilon=1,\\[4pt] x^{-\delta},& \text{if}\ \sigma_1=1;\end{array}\right. \]

in particular, one can further choose $\delta\ge1$ $\big($ not necessarily $\delta=\Delta(\omega_+-\omega_-\,-1)^{-1}\big)$ when $\sigma_1=1$ , also because $x^{\sigma_1} W_{\pi}(x)\le T_{\pi}(x)\to0$ as $x\to\infty$ .

Moreover, one can always show $W_{\pi}(x)\le x^{-\sigma_1}T_{\pi}(x)\le x^{-\sigma_1}$ ; hence (29) also holds when $\sigma_1=1$ .

Step II. Prove (26) and (27) based on (28) and (29).

Since $W_{\pi}(x)\le x^{-\sigma_1}T_{\pi}(x)$ , (26) follows directly from (28).

Next, we prove (27) based on (29). Recall that

\begin{align*}\pi(x)\le x^{\sigma_1} W_{\pi}(x).\end{align*}

Assume $\Delta>0$ . We only prove the case $\sigma_1<1$ and $\sigma_1+\varepsilon=1$ . (The other two cases can be proved using analogous arguments.) There exist $N\in\mathbb{N}$ and $C_1>\sigma_1$ such that $\exp\Bigl(\frac{-\delta}{1-\sigma_1}y^{1-\sigma_1}+C_1\log y\Bigr)$ is decreasing on $[N,+\infty)$ , and for all $x\ge N$ ,

\begin{align*}T_{\pi}(x)&=\sum_{y=x}^{\infty}\pi(y)\ \le \ \sum_{y=x}^{\infty}y^{\sigma_1} W_{\pi}(y) \lesssim\sum_{y=x}^{\infty}\exp\Bigl(\tfrac{-\delta}{1-\sigma_1}y^{1-\sigma_1}+C_1\log y\Bigr)\\&\lesssim\int_{x-1}^{\infty}\exp\Bigl(\tfrac{-\delta}{1-\sigma_1}y^{1-\sigma_1}+C_1\log y\Bigr){\text{d}}y \lesssim\int_{x-1}^{\infty}y^{C_1+\sigma_1}{\text{d}}\left({-}\exp\Bigl(\tfrac{-\delta}{1-\sigma_1}y^{1-\sigma_1}\Bigr)\right)\\&\le(x-1)^{C_1+\sigma_1}\exp\Bigl(\tfrac{-\delta}{1-\sigma_1}(x-1)^{1-\sigma_1}\Bigr)\\&\quad+(C_1+\sigma_1)(x-1)^{\sigma_1-1}\int_{x-1}^{\infty}\exp\Bigl(\tfrac{-\delta}{1-\sigma_1}y^{1-\sigma_1}+C_1\log y\Bigr){\text{d}}y,\end{align*}

which further implies that for all $x\ge N$ ,

\[\begin{split}T_{\pi}(x)&\lesssim\frac{(x-1)^{C_1+\sigma_1}\exp\Bigl(\frac{-\delta}{1-\sigma_1}(x-1)^{1-\sigma_1}\Bigr)}{1+\mathrm{O}\bigl((x-1)^{\sigma_1-1}\bigr)}\\&\lesssim\exp\Bigl(\tfrac{-\delta}{1-\sigma_1}(x-1)^{1-\sigma_1}+\mathrm{O}(\!\log x)\Bigr)=\exp\Bigl(\tfrac{-\delta}{1-\sigma_1}x^{1-\sigma_1}+\mathrm{O}(\!\log x)\Bigr).\end{split}\]

This shows (27) in that case.

7. Proof of Theorem 3

Again ( $\textbf{A3}$ ) implies $\text{supp}\,\nu=\partial^{\textsf{c}}$ [15], which is unbounded by ( $\textbf{A2}$ ).

Comparing the identities for stationary distributions and QSDs, the only difference is an extra term on the right-hand side of the identity for QSDs, with coefficient $\theta_{\nu}>0$ . This turns the identity in Theorem 1(3) for stationary distributions into an inequality for QSDs, with left-hand side greater than right-hand side. Hence, all arguments in the proof of Theorem 4 establishing $\alpha\le0$ , as well as the lower estimates for $T_{\pi}$ (the tail of the stationary distribution), carry over to $T_{\nu}$ .

Next, we show $R\ge0$ . The proof is in a similar spirit to that for $\alpha\le0$ . Since $\alpha\le0$ , we have $R_-=R$ from the definition of $\alpha$ . Again, assume without loss of generality that $\omega_*=1$ , so that $\partial^{\textsf{c}}$ contains all large positive integers by Proposition 2. From Theorem 2(3), similarly to (13), for all large x we have

\begin{align*}x^{R}(\alpha_-+Cx^{-\sigma_1})T_{\nu}(x)&\ge x^{R}\sum_{j=\omega_-+1}^0\left(\alpha_{j}+\beta_{j}(x)\right)\nu\left(x-j\right)\\&= \theta_{\nu}T_{\nu}(x)+x^{R_+}\sum_{j=1}^{\omega_+}\left(\alpha_j+\beta_j(x)\right)\nu\left(x-j\right) \\&\ge\theta_{\nu}T_{\nu}(x),\end{align*}

which yields

\[x^{R}\big(\alpha_-+Cx^{-\sigma_1}\big)-\theta_{\nu}\ge0.\]

This shows $R\ge0$ , since $\theta_{\nu}>0$ . Moreover, if $R=0$ , then $\alpha_-\ge\theta_{\nu}$ . The claim that $R_-=R_+=0$ implies $\alpha\le-\theta_{\nu}$ is proved below in (vii).

Recall that $\alpha_0=\beta$ . Similarly to (17) and the inequality (18) based on it, one can also obtain $\alpha_0\ge\theta_{\nu}$ if $R=0$ and $R_->R_+$ . Moreover, there exists $C>\alpha_1>0$ such that for all large x,

(30) \begin{equation}\frac{T_{\nu}(x)}{T_{\nu}(x-1)}\ge \left\{\begin{array}{l@{\quad}l} x^{R_+-R_-}\frac{\alpha_1-Cx^{-1}}{\alpha_0-\theta_{\nu}x^{-R_-}+Cx^{-\tilde{\sigma}}},&\text{if}\ R_->R_+,\\[8pt]\frac{\alpha_1-Cx^{-1}}{\alpha_0+\alpha_1-\theta_{\nu}x^{-R}+Cx^{R-1}},& \text{if}\ R_-=R_+,\end{array}\right.\end{equation}

where we recall $\tilde{\sigma}=\min\{1,R_-\,-R_+\}$ .

Similarly to (20), we establish

(31) \begin{multline}\sum_{y=x}^{\infty}\sum_{j=\omega_-+1}^0\left(\alpha_{j}+\beta_{j}(y)\right)\nu\left(y-j\right)\\=\theta_{\nu}\sum_{y=x}^{\infty}y^{-R_-}T_{\nu}(y)+\sum_{y=x}^{\infty}y^{R_+-R_-}\sum_{j=1}^{\omega_+}\left(\alpha_j+\beta_j(y)\right)\nu\left(y-j\right).\end{multline}

Since $\text{LHS}_{(31)}$ is finite, $\sum_{y=x}^{\infty}y^{-R_-}T_{\nu}(y)$ is also finite. Furthermore, by an analysis similar to that in the proof of Theorem 4, there exists $C>0$ such that for all large $x\in\mathbb{N}$ ,

\[\theta_{\nu}\sum_{y=x}^{\infty}y^{-R_-}T_{\nu}(y)\le (\alpha_-+Cx^{-\sigma_1})T_{\nu}(x)\to0\quad \text{as}\ x\to\infty.\]

Step I. Establish lower estimates for $T_{\nu}$ based on the above inequality, using an asymptotic analysis similar to that demonstrated repeatedly in the proof of Theorem 3.

  1. $R_-=R>R_+$ :

    • $R=0$ (Cases (i)–(iii)). Then $\alpha_{0}\ge\theta_{\nu}$ . If $\alpha_{0}>\theta_{\nu}$ , then there exists $\widetilde{C}>0$ such that

      \[ T_{\nu}(x)\gtrsim\exp\left({-}(R_-\,-R_+)\log\Gamma(x)-\left(\!\log\tfrac{\alpha_{0}-\theta_{\nu}}{\alpha_1}\right)x-\widetilde{C}x^{1-(R_-\,-R_+)} +\mathrm{O}(\!\log x)\right);\]
      i.e., $\nu\in\mathcal{P}^{1-}_{R_-\,-R_+}$ . Hence Case (i) is proved. If $\alpha_{0}=\theta_{\nu}$ , then
      \[\frac{T_{\nu}(x)}{T_{\nu}(x-1)}\ge x^{\min\{0,1+R_+-R_-\}}\big(\tfrac{\alpha_1}{C}-x^{-1}\big),\]
      which yields that
      \[T_{\nu}(x)\gtrsim\exp\left(\min\{0,R_+-R_-+1\}\log\Gamma(x)-\left(\!\log\tfrac{C}{\alpha_1}\right)x-\tfrac{C}{\alpha_1}\log x\right);\]
      i.e., $\nu\in\mathcal{P}^{2-}_{1}$ if $0>R_+-R_-\ge-1$ , and $\nu\in\mathcal{P}^{1-}_{R_-\,-R_+-1}$ if $R_+-R_-<-1$ . Hence Cases (ii) and (iii) are also proved.
    • $R>0$ (Case (iv)). Based on (30), there exists $\widetilde{C}>0$ such that

      \[T_{\nu}(x)\gtrsim \exp\left({-}(R_-\,-R_+)\log\Gamma(x)-\left(\!\log\tfrac{\alpha_{0}}{\alpha_1}\right)x-\widetilde{C}x^{1-\min\{\widetilde{\sigma},R_-\}} +\mathrm{O}(\!\log x)\right); \]
      i.e., $\nu\in\mathcal{P}^{1-}_{R_-\,-R_+}$ . Hence the first part of (iv) is proved. The second part of (iv) is proved below in Step II.

  2. $R_-=R_+=R$ . Then (31) is

    \[\sum_{y=x}^{\infty}\sum_{j=\omega_-+1}^0\left(\alpha_{j}+\beta_{j}(y)\right)\nu\left(y-j\right) =\theta_{\nu}\sum_{y=x}^{\infty}y^{-R_-}T_{\nu}(y)+\sum_{y=x}^{\infty} \sum_{j=1}^{\omega_+}\left(\alpha_j+\beta_j(y)\right)\nu\left(y-j\right), \]
    from which it follows that there exist $C>0$ and $N\in\mathbb{N}$ such that for all $x\ge N$,
    (32) \begin{equation} \frac{T_{\nu}(x)}{T_{\nu}(x-1)}\ge\frac{\alpha_+-Cx^{-\sigma_1}}{\alpha_-\,-\theta_{\nu}x^{-R_-}+Cx^{-\sigma_1}}. \end{equation}
    Based on this, we establish the following lower estimates for $T_{\nu}(x)$ :
  (v) $R>0$ and $\alpha<0$ . We can show that

    \begin{align*}T_{\nu}(x)\gtrsim\begin{cases}\exp\bigl(\big(\!\log\frac{\alpha_+}{\alpha_-}\big)x+\mathrm{O}(\!\log x)\bigr),\qquad\quad\quad\ \, \text{if}\ \min\{R,\sigma_1\}=1,\\[8pt] \exp\bigl(\big(\!\log\frac{\alpha_+}{\alpha_-}\big)x+\mathrm{O}\big(x^{1-\min\{R,\sigma_1\}}\big)\bigr),\quad \text{if}\ \min\{R,\sigma_1\}\neq1,\end{cases}\end{align*}
    i.e., $\nu\in\mathcal{P}^{2-}_{1}$ . The latter part is proved in Step II below.
    • $R>0$ and $\alpha=0$ . We prove the conclusions case by case.

    • $0<R<\sigma_1$ and $\alpha=0$ . Then

      \begin{align*}1\ge T_{\nu}(x)\gtrsim\left\{\begin{array}{l@{\quad}l} \exp\Bigl(\frac{\theta_{\nu}}{\alpha_+(1-R)}x^{1-R}+\mathrm{O}(\!\log x)\Bigr),& \text{if}\ \min\{2R,\sigma_1\}=1,\\[8pt] \exp\Bigl(\frac{\theta_{\nu}}{\alpha_+(1-R)}x^{1-R}+\mathrm{O}\big(x^{1-\min\{2R,\sigma_1\}}\big)\Bigr),& \text{if}\ \min\{2R,\sigma_1\}\neq1,\end{array}\right.\end{align*}
      which tends to infinity as $x\to\infty$ . This is a contradiction, so this case cannot occur.
    • $0<R=\sigma_1<1$ and $\alpha=0$ . Then

      \begin{align*}T_{\nu}(x)\gtrsim\left\{\begin{array}{l@{\quad}l} \exp\Bigl({-}\frac{2C-\theta_{\nu}}{\alpha_+(1-R)}x^{1-R}+\mathrm{O}(\!\log x)\Bigr),& \text{if}\ 2R=1,\\[8pt] \exp\Bigl({-}\frac{2C-\theta_{\nu}}{\alpha_+(1-R)}x^{1-R}+\mathrm{O}\big(x^{1-2R}\big)\Bigr),& \text{if}\ 2R\neq1,\end{array}\right.\end{align*}
      i.e., $\nu\in\mathcal{P}^{2-}_{1-R}$ .
    • $\min\{1,R\}>\sigma_1$ and $\alpha=0$ . Then

      \begin{align*}T_{\nu}(x)\gtrsim\left\{\begin{array}{l@{\quad}l} \exp\Bigl({-}\frac{2C}{\alpha_+(1-\sigma_1)}x^{1-\sigma_1}+\mathrm{O}(\!\log x)\Bigr),& \text{if}\ \min\{R,2\sigma_1\}=1,\\[8pt] \exp\Bigl({-}\frac{2C}{\alpha_+(1-\sigma_1)}x^{1-\sigma_1}+\mathrm{O}\big(x^{1-\min\{R,2\sigma_1\}}\big)\Bigr),& \text{if}\ \min\{R,2\sigma_1\}\neq1,\end{array}\right.\end{align*}
      i.e., $\nu\in\mathcal{P}^{2-}_{1<}$ .
    • $R\ge\sigma_1=1$ and $\alpha=0$ . If $R=\sigma_1$ , then

      \begin{align*}T_{\nu}(x)\gtrsim x^{-\frac{2C-\theta_{\nu}}{\alpha_+}},\end{align*}
      i.e., $\nu\in\mathcal{P}^{3-}$ . If $R>\sigma_1$ , then
      \begin{align*}T_{\nu}(x)\gtrsim x^{-\frac{2C}{\alpha_+}},\end{align*}
      which also indicates $\nu\in\mathcal{P}^{3-}$ .
  (vii) $R=0$ . From (32) it follows that

    \[1\ge\frac{T_{\nu}(x)}{T_{\nu}(x-1)}\ge\frac{\alpha_+-Cx^{-\sigma_1}}{\alpha_-\,-\theta_{\nu}+Cx^{-\sigma_1}},\]

    which yields $\frac{\alpha_+}{\alpha_-\,-\theta_{\nu}}\le1$ , i.e., $\alpha\le-\theta_{\nu}<0$ . Similarly, based on (30), $\theta_{\nu}\le\alpha_0\le\alpha_-$ .

    • $R=0$ , $\alpha+\theta_{\nu}=0$ , $\sigma_1<1$ :

      \begin{align*}T_{\nu}(x)\gtrsim\left\{\begin{array}{l@{\quad}l} \exp\Bigl({-}\frac{2C}{(\alpha_-\,-\theta_{\nu})(1-\sigma_1)}x^{1-\sigma_1}+\mathrm{O}(\!\log x)\Bigr),& \text{if}\ 2\sigma_1=1,\\[8pt] \exp\Bigl({-}\frac{2C}{(\alpha_-\,-\theta_{\nu})(1-\sigma_1)}x^{1-\sigma_1}+\mathrm{O}\big(x^{1-2\sigma_1}\big)\Bigr),& \text{if}\ 2\sigma_1\neq1,\end{array}\right.\end{align*}
      i.e., $\nu\in\mathcal{P}^{2-}_{1-\sigma_1}$ .
    • $R=0$ , $\alpha+\theta_{\nu}=0$ , $\sigma_1=1$ :

      \begin{align*}T_{\nu}(x)\gtrsim x^{-\frac{2C}{\alpha_-\,-\theta_{\nu}}},\end{align*}
      i.e., $\nu\in\mathcal{P}^{3-}$ .
    • $R=0$ , $\alpha+\theta_{\nu}<0$ :

      \begin{align*}T_{\nu}(x)\gtrsim\left\{\begin{array}{l@{\quad}l} \exp\Bigl(\frac{\alpha+\theta_{\nu}}{\alpha_-\,-\theta_{\nu}} x+\mathrm{O}\big(x^{1-\sigma_1}\big)\Bigr),& \text{if}\ \sigma_1<1,\\[8pt] \exp\Bigl(\frac{\alpha+\theta_{\nu}}{\alpha_-\,-\theta_{\nu}}x+\mathrm{O}(\!\log x)\Bigr),&\text{if}\ \sigma_1=1,\end{array}\right.\end{align*}
      i.e., $\nu\in\mathcal{P}^{2-}_{1}$ .
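The lower estimates above all follow the same telescoping pattern: a ratio bound $T_{\nu}(x)/T_{\nu}(x-1)\ge c\,x^{-R}$ turns, upon taking logarithms and summing, into $\log T_{\nu}(x)\ge x\log c-R\log\Gamma(x+1)+\mathrm{O}(1)$. A minimal numerical sketch of this step, with hypothetical constants $c$ and $R$ not taken from the paper:

```python
from math import lgamma, log

# If T(x)/T(x-1) >= c * x^(-R) for x >= 1, telescoping gives
#   log T(x) >= log T(0) + sum_{y=1}^{x} (log c - R*log y)
#            =  log T(0) + x*log c - R*lgamma(x+1),
# since sum_{y=1}^{x} log y = log x! = lgamma(x+1).
c, R = 0.7, 1.5            # hypothetical constants
x = 500
telescoped = sum(log(c) - R * log(y) for y in range(1, x + 1))
closed_form = x * log(c) - R * lgamma(x + 1)
assert abs(telescoped - closed_form) < 1e-6
```

The same telescoping, with the ratio bounds (30) and (32) in place of $c\,x^{-R}$, produces the lower estimates in Cases (i)–(vii).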

Step II. Establish upper estimates for $T_{\nu}$ .

Case I: $R>1$ .

The arguments establishing the upper estimates for $T_{\pi}$ in the proof of Theorem 4 can be adapted to establish those for $T_{\nu}$ .

  • Latter part of (iv): $R_->\max\{1,R_+\}$ . Based on (31), one can show that there exists $C>0$ such that for all large x,

    \begin{align*} \frac{T_{\nu}(x)}{T_{\nu}(x-\omega_+)} &\le x^{R_+-R_-}\frac{\alpha_++Cx^{-\sigma_1}}{\alpha_{0}-\frac{\theta_{\nu}}{R_-\,-1}(x-1)^{1-R_-}-Cx^{-\sigma_1}} \\ &=x^{R_+-R_-} \left(\tfrac{\alpha_+}{\alpha_{0}}+\mathrm{O}(x^{-\min\{\sigma_1,R_-\,-1\}})\right), \end{align*}
    which implies that
    \begin{align*} T_{\nu}(x)&\lesssim \exp\Bigl({-}(R_-\,-R_+)\omega_+^{-1}\log \Gamma\big(x\omega_+^{-1}\big)-\Bigl((R_-\,-R_+)\omega_+^{-1}+\log\tfrac{\alpha_0}{\alpha_+}\Bigr)x \\ &\quad +\mathrm{O}\big(x^{1-\min\{\sigma_1,R_-\,-1\}}+\log x\big)\Bigr). \end{align*}
    Then $\nu\in\mathcal{P}^{1+}_{(R_-\,-R_+)\omega_+^{-1}}$ .
  • Latter part of (v): $R_-=R_+>1$ and $\alpha<0$ . An analogue of (23) is

    (33) \begin{align}\nonumber &(\alpha_-\,-\alpha_+)T_{\nu}(x)-\theta_{\nu}\sum_{y=x}^{\infty}y^{-R}{T_{\nu}(y)}+\sum_{j=\omega_-+1}^0\sum_{y=x}^{\infty}\nu(y)\beta_{j}(y+j)- \sum_{j=1}^{\omega_+}\sum_{y=x}^{\infty}\nu(y)\beta_j(y+j)\\ &=\sum_{j=\omega_-+1}^{-1}\sum_{\ell=0}^{-j-1}\nu\left(x+\ell\right)\widetilde{f}_{j}(x+j+\ell)+\sum_{j=1}^{\omega_+} \sum_{\ell=1}^{j}\nu\left(x-\ell\right)\widetilde{f}_{j}(x+j-\ell). \end{align}
    Based on this, one can show that there exists $C>0$ such that for all large x,
    \begin{multline*} \bigg((\alpha_-\,-\alpha_+)-\frac{\theta_{\nu}}{R-1} x^{1-R}-\theta_{\nu}x^{-R}-Cx^{-\sigma_1}\bigg)T_{\nu}(x)\le\text{LHS}_{(33)}\\ \le\left(\alpha_-\,-\alpha_+-\theta_{\nu}x^{-R}+Cx^{-\sigma_1}\right) T_{\nu}(x),\\ (\alpha_+-Cx^{-\sigma_1})(T_{\nu}(x-1)-T_{\nu}(x))\le\text{RHS}_{(33)}\\ \le (\alpha_++Cx^{-\sigma_1})\left(T_{\nu}(x-\omega_+)-T_{\nu}(x-\omega_-\,-1)\right). \end{multline*}
    This implies
    \[\frac{T_{\nu}(x-\omega_+)}{T_{\nu}(x-\omega_-\,-1)}\le\frac{\alpha_++Cx^{-\sigma_1}}{\alpha_-\,-\frac{\theta_{\nu}}{R-1} x^{1-R}-\theta_{\nu}x^{-R}},\]
    and hence
    \[T_{\nu}(x)\lesssim \exp\Bigl({-}(\omega_+-\omega_-\,-1)^{-1}\log\Bigl(\frac{\alpha_-}{\alpha_+}\Bigr)x+\mathrm{O}(x^{1-\min\{\sigma_1,R-1\}}+\log x)\Bigr).\]
    Then $\nu\in\mathcal{P}^{2+}_{1}$ .

Case II: $R\le1$ (Cases (viii)–(xi)).

Indeed, from (31), for large x,

\begin{align*}\frac{T_{\nu}(x-\omega_{-})}{T_{\nu}(x)}&\le\frac{\alpha_-+Cx^{-\sigma_1}-\theta_{\nu}x^{-R}}{\alpha_-+Cx^{-\sigma_1}}=1-\frac{\theta_{\nu}x^{-R}}{\alpha_-+Cx^{-\sigma_1}}\\[5pt]&=\left\{\begin{array}{l@{\quad}l} 1-\frac{\theta_{\nu}}{\alpha_-}x^{-R}+\mathrm{O}(x^{-\min\{2R,R+\sigma_1\}}),& \text{if}\ R>0,\\[5pt] 1-\frac{\theta_{\nu}}{\alpha_-}+\mathrm{O}\big(x^{-\sigma_1}\big),& \text{if}\ R=0,\ \alpha_->\theta_{\nu},\\[5pt] \frac{C}{\alpha_-}x^{-\sigma_1}+\mathrm{O}\big(x^{-2\sigma_1}\big), & \text{if}\ R=0,\ \alpha_-=\theta_{\nu}.\end{array}\right.\end{align*}

Using arguments similar to those in the proof of Theorem 4, we can show

\begin{align*}T_{\nu}(x)\lesssim\left\{\begin{array}{l@{\quad}l} x^{-\theta_{\nu}/\alpha_-},& \text{if}\ R=1,\\[5pt] \exp\left({-}\frac{\theta_{\nu}}{\alpha_-(1-R)}\big({-}x\omega_-^{-1}\big)^{1-R}+\mathrm{O}(\!\log x)\right),& \text{if}\ 0<R<1,\\[5pt] &\quad\min\{2R,R+\sigma_1\}=1,\\[5pt] \exp\left({-}\frac{\theta_{\nu}}{\alpha_-(1-R)}\big({-}x\omega_-^{-1}\big)^{1-R}+\mathrm{O}\big(x^{1-\min\{2R,R+\sigma_1\}}\big)\right),& \text{if}\ 0<R<1,\\[5pt] &\quad \min\{2R,R+\sigma_1\}\neq1,\\[5pt]\exp\left(\!\log\left(1-\frac{\theta_{\nu}}{\alpha_-}\right)\big({-}x\omega_-^{-1}\big)+\mathrm{O}\big(x^{1-\sigma_1}\big)\right),& \text{if}\ R=0,\ \alpha_->\theta_{\nu},\\[5pt]\Gamma\big({-}x\omega_-^{-1}\big)^{-\sigma_1}\big(C\alpha_-^{-1}\big)^{-x\omega_-^{-1}-\widetilde{C}x^{1-\sigma_1}},& \text{if}\ R=0,\ \alpha_-=\theta_{\nu}.\end{array}\right.\end{align*}

This implies that

\[\nu\in \left\{\begin{array}{l@{\quad}l} \mathcal{P}^{3+}_{\theta_{\nu}/\alpha_-},& \text{if}\ R=1,\\ \mathcal{P}^{2+}_{1-R},&\text{if}\ 0<R<1,\\ \mathcal{P}^{2+}_{1},& \text{if}\ R=0,\ \alpha_->\theta_{\nu},\\\mathcal{P}^{1+}_{-\omega_-^{-1}\sigma_1},& \text{if}\ R=0,\ \alpha_-=\theta_{\nu}.\end{array}\right.\]

Appendix A. Structure of the state space

Proposition 2. Let $n=\min\mathcal{Y}$ . Then $\mathcal{Y}\subseteq\omega_*\mathbb{N}_0+n$ . Assume ${(\textbf{A2})}$ and $\Omega_+\neq\varnothing$ . Then there exists $m\in\mathbb{N}_0$ with $m-n\in\omega_*\mathbb{N}_0$ such that $\omega_*\mathbb{N}_0+m\subseteq\partial^{\textsf{c}}\subseteq\mathcal{Y}\subseteq\omega_*\mathbb{N}_0+n$ .

Proof. The first conclusion follows immediately from [48, Lemma B.2].

For the second conclusion, we only prove the case when $\Omega_-=\varnothing$ . The case when $\Omega_-\neq\varnothing$ is proved in [48, Lemma B.1]. If $\Omega_+=\{\omega_*\}$ is a singleton, then by ${(\textbf{A2})}$ , there exists $m\in\mathcal{Y}$ such that $\omega_*\mathbb{N}_0+m\subseteq\mathcal{Y}$ ; hence the conclusion follows. Assume $\Omega_+$ has more than one element. By the definition of $\omega_*$ , there exist coprime $\omega_1,\ \omega_2\in\Omega_+$ . By ${(\textbf{A2})}$ , there exists $n_1\in\mathcal{Y}$ such that

\begin{align*}q(x,x+\omega_1)>0,\ q(x,x+\omega_2)>0,\quad x\in\mathcal{Y},\ x\ge n_1.\end{align*}

Hence $\omega_1\mathbb{N}_0+\omega_2\mathbb{N}_0+n_1\subseteq\mathcal{Y}$ , since both jumps remain available from every state greater than or equal to $n_1$ . Further assume $n_1=0$ for ease of exposition. We claim that there exists $s_j\in\mathcal{Y}\cap\mathbb{N}$ , for $j=0,\ldots,\omega_1-1$ , such that $s_j-j\in\omega_1\mathbb{N}_0$ . Then

\[\mathbb{N}_0+\prod_{j=0}^{\omega_1-1}s_j\subseteq\cup_{j=0}^{\omega_1-1}\big(\omega_1\mathbb{N}_0+s_j\big)\subseteq\mathcal{Y},\]

since for every $x\in\mathbb{N}_0$ , $x+\prod_{j=0}^{\omega_1-1}s_j\in \omega_1\mathbb{N}_0+(x\!\!\mod\omega_1)$ and $\prod_{j=0}^{\omega_1-1}s_j\ge s_k$ for all $k=0,\ldots,\omega_1-1$ . Hence it suffices to prove the above claim. Since $\omega_1$ and $\omega_2$ are coprime, there exist $m_1,\ m_2\in\mathbb{N}$ such that $m_1\omega_1-m_2\omega_2=1$ . Then let $s_j=jm_1\omega_1+m_2\omega_2(\omega_1-j)$ , for $j=0,\ldots,\omega_1-1$ . It is readily seen that $s_j\in\omega_1\mathbb{N}_0+\omega_2\mathbb{N}_0\subseteq\mathcal{Y}$ and

\begin{align*}s_j=m_2\omega_1\omega_2+j(m_1\omega_1-m_2\omega_2)=m_2\omega_2\omega_1+j\in\omega_1\mathbb{N}_0+j.\end{align*}

The proof is complete.
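The Bézout construction of the shifts $s_j$ above can be checked numerically. The following sketch (an illustration only, using the hypothetical coprime jump sizes $\omega_1=3$, $\omega_2=5$) verifies that $s_j\equiv j\pmod{\omega_1}$ and that every integer at least $\prod_j s_j$ lands in some class $\omega_1\mathbb{N}_0+s_j$:

```python
from math import gcd, prod

def shifts(w1, w2):
    # s_j = j*m1*w1 + m2*w2*(w1 - j), j = 0, ..., w1 - 1, where
    # m1, m2 in N satisfy the Bezout relation m1*w1 - m2*w2 = 1.
    assert w1 > 1 and gcd(w1, w2) == 1
    m1 = next(m for m in range(1, w2 + 1) if (m * w1) % w2 == 1)
    m2 = (m1 * w1 - 1) // w2
    return [j * m1 * w1 + m2 * w2 * (w1 - j) for j in range(w1)]

w1, w2 = 3, 5                      # illustrative coprime jump sizes
s = shifts(w1, w2)
for j, sj in enumerate(s):
    assert sj % w1 == j            # s_j - j is a multiple of w1
P = prod(s)                        # every x + P hits one class w1*N0 + s_j
for x in range(300):
    j = (x + P) % w1
    assert x + P >= s[j] and (x + P - s[j]) % w1 == 0
```

For $\omega_1=3$, $\omega_2=5$ the construction gives $m_1=2$, $m_2=1$ and $s=(15,16,17)$, one representative per residue class modulo $3$.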

Appendix B. A lemma for Theorem 4

Lemma 1. Assume ${(\textbf{A1})}$ – ${(\textbf{A3})}$ . Then the following hold:

  (i) We have $\alpha_0\ge\alpha_{-1}\ge \cdots\ge \alpha_{\omega_-+1}$ , and $\alpha_1\ge \alpha_2\ge\cdots\ge \alpha_{\omega_+}$ .

  (ii) We have $\beta_j(x)=\begin{cases}(\gamma_j+({-}j+1)R_-\alpha_j)x^{-\sigma_1}+\mathrm{O}(x^{-\sigma_2}),\quad \text{if}\ j=\omega_-+1,\ldots,0,\\ (\gamma_j-jR_+\alpha_j)x^{-\sigma_1}+\mathrm{O}(x^{-\sigma_2}),\qquad\qquad\, \text{if}\ j=1,\ldots,\omega_+.\end{cases}$

  (iii) It holds that

    \[\alpha_-=\omega_*\sum_{j=\omega_-+1}^0\alpha_{j},\qquad \alpha_+=\omega_*\sum_{j=1}^{\omega_+}\alpha_{j},\qquad \gamma_-=\omega_*\sum_{j=\omega_-+1}^0\gamma_{j},\qquad \gamma_+=\omega_*\sum_{j=1}^{\omega_+}\gamma_{j}.\]
  (iv) If $\alpha=0$ , then

    \[\sum_{j=\omega_-+1}^{\omega_+}|j|\alpha_j=\vartheta\omega_*^{-2}.\]

Proof. Assume without loss of generality that $\omega_*=1$ . We prove only the ‘+’ cases. Analogous arguments apply to the ‘ $-$ ’ cases.

(i)–(ii). The first two properties follow directly from their definitions.

(iii). By Fubini’s theorem,

\[\sum_{j=1}^{\omega_+}\sum_{\omega\in A_j}q(x,x+\omega)=\sum_{j\ge1}\sum_{\ell\ge j}q(x,x+\ell)=\sum_{1\le j}jq(x,x+j)=\sum_{\omega\in\Omega_+}q(x,x+\omega)\omega.\]

Comparing the coefficients of the highest-degree terms on both sides, and using the definitions of $\alpha_{\ell}$ and $\alpha_+$ , we have

\[\alpha_+=\sum_{j=1}^{\omega_+}\alpha_{j}.\]

(iv). By Fubini’s theorem again,

\[\sum_{j=1}^{\omega_+}\sum_{\ell\ge j}jq(x,x+\ell)=\sum_{j=1}^{\omega_+}\frac{j(j+1)}{2}q(x,x+j).\]

Note that $\alpha=0$ implies $\alpha_-=\alpha_+$ . Since $R_+=R$ , comparing the coefficients of $x^R$ , we have

\[\sum_{j=1}^{\omega_+}j\alpha_j=\frac{1}{2}\left(\alpha_++\lim_{x\to\infty}\frac{\sum_{\omega\in\Omega_+}q(x,x+\omega)\omega^2}{x^{R_+}}\right).\]

Similarly,

\[\sum_{j=\omega_-+1}^0\sum_{\ell\le j-1}|j-1|q(x,x+\ell)=\sum_{j=\omega_-}^{-1}\frac{(j-1)j}{2}q(x,x+j).\]

Hence

\[\sum_{j=\omega_-+1}^0|j-1|\alpha_j=\frac{1}{2}\left(\alpha_-+\lim_{x\to\infty}\frac{\sum_{\omega\in\Omega_-}q(x,x+\omega)\omega^2}{x^{R_-}}\right).\]

Then the conclusion follows from

\[\vartheta=\frac{1}{2}\lim_{x\to\infty}\frac{\sum_{\omega\in\Omega}q(x,x+\omega)\omega^2}{x^{R}}\]

and

\[\sum_{j=\omega_-+1}^{0}|j-1|\alpha_j=\alpha_-+\sum_{j=\omega_-+1}^{0}|j|\alpha_j.\]
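The two interchange-of-summation identities behind (iii) and (iv) hold for arbitrary non-negative weights, which can be checked directly; in the sketch below, random values stand in for the rates $q(x,x+\ell)$ at a fixed $x$ (an illustration only, not tied to a specific model):

```python
import random

random.seed(1)
w_plus = 7
# q[l] plays the role of q(x, x+l) for a fixed x, l = 1, ..., w_plus.
q = [0.0] + [random.random() for _ in range(w_plus)]

# (iii): sum_{j>=1} sum_{l>=j} q_l = sum_l l * q_l (Fubini).
lhs = sum(q[l] for j in range(1, w_plus + 1) for l in range(j, w_plus + 1))
rhs = sum(l * q[l] for l in range(1, w_plus + 1))
assert abs(lhs - rhs) < 1e-12

# (iv): sum_{j>=1} sum_{l>=j} j * q_l = sum_l l*(l+1)/2 * q_l.
lhs2 = sum(j * q[l] for j in range(1, w_plus + 1) for l in range(j, w_plus + 1))
rhs2 = sum(l * (l + 1) / 2 * q[l] for l in range(1, w_plus + 1))
assert abs(lhs2 - rhs2) < 1e-12
```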

Appendix C. Calculation of sharp asymptotics of stationary distributions in special cases

We first provide the sharp asymptotics of stationary distributions for BDPs. For any two real-valued functions f,g on $\mathbb{R}$ , we write $f(x)\sim g(x)$ if there exists $C>1$ such that $C^{-1}g(x)\le f(x)\le Cg(x)$ for all $x\in\mathbb{R}$ .

Proposition 3. Assume $(\textbf{A1})$ – $(\textbf{A3})$ , $\alpha=0$ , and $\omega_+=-\omega_-=1$ . Let $\pi$ be a stationary distribution of $Y_t$ supported on $\mathcal{Y}$ . Then the following hold:

  (i) If $\Delta=0$ , $\sigma_1<1$ , and $\min\{2\sigma_1,\sigma_2\}>1$ , then $R>1$ and $\pi\in\mathcal{P}^{3+}_{R-1}\cap \mathcal{P}^{3-}_{R-1}$ .

  (ii) If $\Delta>0$ and $\sigma_1<1$ , then $\pi\in\mathcal{P}^{3+}_{1-\sigma_1}\cap \mathcal{P}^{3-}_{1-\sigma_1}$ .

  (iii) If $\sigma_1=1$ , then $\Delta>1$ and $\pi\in\mathcal{P}^{3+}_{\Delta-1}\cap \mathcal{P}^{3-}_{\Delta-1}$ .

Proof. The proof is similar to that of Theorem 4, with the aid of Stirling’s formula and the Euler–Maclaurin formula.
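To illustrate how such power-law tails emerge from detailed balance for a BDP, consider a toy example (hypothetical affine rates, not the parametrization of this paper): with birth rate $\lambda(x)=x+a$ and death rate $\mu(x)=x+b$, $b>a>0$, detailed balance gives $\pi(x)\propto\Gamma(x+a)/\Gamma(x+1+b)\sim Cx^{a-b-1}$, and the exponent can be recovered numerically:

```python
from math import lgamma, log

a, b = 1.0, 3.0   # hypothetical rate parameters with b > a > 0

def log_pi(x):
    # Detailed balance: pi(x)/pi(0) = prod_{y=1}^{x} (y-1+a)/(y+b)
    #   = [Gamma(x+a)/Gamma(a)] * [Gamma(1+b)/Gamma(x+1+b)].
    return lgamma(x + a) - lgamma(a) + lgamma(1 + b) - lgamma(x + 1 + b)

# The empirical log-log slope of the tail approximates a - b - 1.
N = 10**5
slope = (log_pi(2 * N) - log_pi(N)) / log(2)
assert abs(slope - (a - b - 1)) < 1e-2
```

Here the slope is estimated by doubling $x$; the same detailed-balance product underlies the sharp asymptotics in Proposition 3, with the exponent there expressed through $R$, $\Delta$, and $\sigma_1$.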

When $Y_t$ is not a BDP, the asymptotic tail of a stationary distribution can be established in some cases when $\alpha=0$ . When $\alpha=0$ , $\alpha_+=\alpha_-$ and

\begin{align*}\frac{1}{\omega_+-\omega_-\,-1}\le\frac{2}{\omega_+-\omega_-}\le\frac{\alpha_+\omega_*}{\vartheta}\le1.\end{align*}

Hence

\begin{align*}\delta\le R-\frac{\gamma}{\vartheta}=\Delta\frac{\alpha_+\omega_*}{\vartheta}\le\Delta,\end{align*}

and both equalities hold if and only if $Y_t$ is a BDP.

Proposition 4. Assume $(\textbf{A1})$ – $(\textbf{A3})$ , $\alpha=0$ , $\gamma+\vartheta<0$ , and $\partial=\varnothing$ . Let $\pi$ be a stationary distribution of $Y_t$ supported on $\mathcal{Y}$ . Then for large $x\in\mathcal{Y}$ ,

\[ T_{\pi}(x)\sim \begin{cases} \exp\Big(\frac{\gamma}{\vartheta(1-\sigma_1)}\big(\omega_*^{-1}x\big)^{1-\sigma_1}+\mathrm{O}(\!\log x)\Big),\ \hspace{2.8cm} \text{if}\ \sigma_1<1,\\ \hspace{8.85cm} \min\{2\sigma_1,\sigma_2\}=1,\\x^{1-R+\sigma_1}\exp\Big(\frac{\gamma}{\vartheta(1-\sigma_1)}(\omega_*^{-1}x)^{1-\sigma_1}+\mathrm{O}\big(x^{1-\min\{2\sigma_1,\sigma_2\}}\big)\Big),\ \text{if}\ \sigma_1<1,\\ \hspace{8.85cm} \min\{2\sigma_1,\sigma_2\}\neq1,\\x^{1+\frac{\gamma}{\vartheta}-R}\left(1+\mathrm{O}\left(x^{-1}\right)\right),\ \hspace{4.7cm}\text{if}\ \sigma_1=1.\end{cases}\]

Hence $\pi\in\mathcal{P}^{2-}_{1-\sigma_1}\cap\mathcal{P}^{2+}_{1-\sigma_1}$ if $\sigma_1<1$ , and $\pi\in\mathcal{P}^{3-}_{R-\frac{\gamma}{\vartheta}-1}\cap\mathcal{P}^{3+}_{R-\frac{\gamma}{\vartheta}-1}$ if $\sigma_1=1$ .

Proof. To prove the conclusions, we apply [16, Theorem 1] and arguments similar to those in the proof of Theorem 4.

Acknowledgements

The authors are indebted to two anonymous referees and the editor; their comments improved the presentation of the paper. The proof of Theorem 1 is due to one of the referees. The authors thank the editor for bringing [30, Lemma 1.4] to their attention. The authors acknowledge the support of the Erwin Schrödinger International Institute for Mathematics and Physics (ESI) for the 2018 workshop ‘Advances in Chemical Reaction Network Theory’.

Funding information

C. X. acknowledges support from the TUM University Foundation and the Alexander von Humboldt Foundation, as well as internal start-up funding from the University of Hawai’i at Mānoa. C. W. acknowledges support from the Novo Nordisk Foundation (Denmark), grant NNF19OC0058354.

Competing interests

There were no competing interests to declare during the preparation or publication of this article.

References

Abramowitz, M. and Stegun, I. A. (1972). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. US Government Printing Office, Washington, DC.
Anderson, D. F., Craciun, G. and Kurtz, T. G. (2010). Product-form stationary distributions for deficiency zero chemical reaction networks. Bull. Math. Biol. 72, 1947–1970.
Anderson, D. F. and Kurtz, T. G. (2011). Continuous time Markov chain models for chemical reaction networks. In Design and Analysis of Biomolecular Circuits: Engineering Approaches to Systems and Synthetic Biology, eds H. Koeppl, G. Setti, M. di Bernardo and D. Densmore, Springer, New York, pp. 3–42.
Anderson, W. J. (1991). Continuous-Time Markov Chains: An Applications-Oriented Approach. Springer, New York.
Asmussen, S. (1998). Subexponential asymptotics for stochastic processes: extremal behavior, stationary distributions and first passage probabilities. Ann. Appl. Prob. 8, 354–374.
Aspandiiarov, S. and Iasnogorodski, R. (1999). Asymptotic behaviour of stationary distributions for countable Markov chains, with some applications. Bernoulli 5, 535–569.
Be’er, S. and Assaf, M. (2016). Rare events in stochastic populations under bursty reproduction. J. Statist. Mech. 2016, article no. 113501.
Bertsimas, D., Gamarnik, D. and Tsitsiklis, J. N. (2001). Performance of multiclass Markovian queueing networks via piecewise linear Lyapunov functions. Ann. Appl. Prob. 11, 1384–1428.
Cappelletti, D. and Wiuf, C. (2016). Product-form Poisson-like distributions and complex balanced reaction systems. SIAM J. Appl. Math. 76, 411–432.
Cavender, J. A. (1978). Quasi-stationary distributions of birth-and-death processes. Adv. Appl. Prob. 10, 570–586.
Champagnat, N. and Villemonais, D. (2016). Exponential convergence to quasi-stationary distribution and Q-process. Prob. Theory Relat. Fields 164, 243–283.
Champagnat, N. and Villemonais, D. (2017). General criteria for the study of quasi-stationarity. Electron. J. Prob. 28, article no. 22.
Chen, R.-R. (1997). An extended class of time-continuous branching processes. J. Appl. Prob. 34, 14–23.
Chowdhury, A. R., Chetty, M. and Evans, R. (2015). Stochastic S-system modeling of gene regulatory network. Cognitive Neurodynamics 9, 535–547.
Collet, P., Martínez, S. and San Martín, J. (2013). Quasi-Stationary Distributions: Markov Chains, Diffusions and Dynamical Systems. Springer, Heidelberg.
Denisov, D., Korshunov, D. and Wachtel, V. (2013). Potential analysis for positive recurrent Markov chains with asymptotically zero drift: power-type asymptotics. Stoch. Process. Appl. 123, 3027–3051.
Denisov, D., Korshunov, D. and Wachtel, V. (2016). At the Edge of Criticality: Markov Chains with Asymptotically Zero Drift. Available at https://arxiv.org/abs/1612.01592.
Economou, A. (2005). Generalized product-form stationary distributions for Markov chains in random environments with queueing applications. Adv. Appl. Prob. 37, 185–211.
Ewens, W. J. (2004). Mathematical Population Genetics. I. Theoretical Introduction. Springer, New York.
Falk, J., Mendler, B. and Drossel, B. (2017). A minimal model of burst-noise induced bistability. PLOS ONE 12, article no. e0176410.
Gardiner, C. W. (2009). Stochastic Methods: A Handbook for Physics, Chemistry and the Natural Sciences, 4th edn. Springer, Berlin.
Gelenbe, E. (1991). Product-form queueing networks with negative and positive customers. J. Appl. Prob. 28, 656–663.
Gelenbe, E. and Mitrani, L. (2009). Analysis and Synthesis of Computer Systems, 2nd edn. Academic Press, London.
Gillespie, D. T. (1992). A rigorous derivation of the chemical master equation. Physica A 188, 404–425.
Gupta, A., Briat, C. and Khammash, M. (2014). A scalable computational framework for establishing long-term behavior of stochastic reaction networks. PLOS Comput. Biol. 10, article no. e1003669.
Hansen, M. C. (2018). Quasi-stationary Distributions in Stochastic Reaction Networks. Doctoral Thesis, University of Copenhagen.
Hansen, M. C. and Wiuf, C. (2020). Existence of a unique quasi-stationary distribution in stochastic reaction networks. Electron. J. Prob. 25, article no. 45.
Hoessly, L. and Mazza, C. (2019). Stationary distributions and condensation in autocatalytic reaction networks. SIAM J. Appl. Math. 79, 1173–1196.
Johnson, N. L., Kemp, A. W. and Kotz, S. (2005). Univariate Discrete Distributions, 3rd edn. John Wiley, New York.
Kelly, F. P. (2011). Reversibility and Stochastic Networks. Cambridge University Press.
Liu, Y. and Zhao, Y. Q. (2011). Asymptotics of the invariant measure of a generalized Markov branching process. Stoch. Models 27, 251–271.
Mairesse, J. and Nguyen, H.-T. (2010). Deficiency zero Petri nets and product form. Fund. Informaticae 105, 237–261.
Maulik, K. and Zwart, B. (2006). Tail asymptotics of exponential functionals of Lévy processes. Stoch. Process. Appl. 116, 156–177.
Méléard, S. and Villemonais, D. (2012). Quasi-stationary distributions and population processes. Prob. Surveys 9, 340–410.
Menshikov, M. V. and Popov, S. Y. (1995). Exact power estimates for countable Markov chains. Markov Process. Relat. Fields 1, 57–78.
Meyn, S. P. and Tweedie, R. L. (2009). Markov Chains and Stochastic Stability, 2nd edn. Cambridge University Press, New York.
Miller, R. G. (1963). Stationary equations in continuous time Markov chains. Trans. Amer. Math. Soc. 109, 35–44.
Neuts, M. F. (1984). Matrix-analytic methods in queuing theory. Europ. J. Operat. Res. 15, 2–12.
Norris, J. R. (1998). Markov Chains. Cambridge University Press.
O’Neill, P. D. (2007). Constructing population processes with specified quasi-stationary distributions. Stoch. Models 23, 439–449.
Pastor-Satorras, R., Castellano, C., Van Mieghem, P. and Vespignani, A. (2015). Epidemic processes in complex networks. Rev. Modern Phys. 87, 925–979.
Ramaswami, V. (1988). A stable recursion for the steady state vector in Markov chains of M/G/1 type. Commun. Statist. Stoch. Models 4, 183–188.
Shahrezaei, V. and Swain, P. S. (2008). Analytical distributions for stochastic gene expression. Proc. Nat. Acad. Sci. USA 105, 17256–17261.
Takine, T. (2004). Geometric and subexponential asymptotics of Markov chains of M/G/1 type. Math. Operat. Res. 29, 624–648.
Van Doorn, E. A. (1991). Quasi-stationary distributions and convergence to quasi-stationarity of birth–death processes. Adv. Appl. Prob. 23, 683–700.
Whittle, P. (1986). Systems in Stochastic Equilibrium. John Wiley, Chichester.
Xu, C., Hansen, M. C. and Wiuf, C. (2022). Structural classification of continuous time Markov chains with applications. Stochastics 94, 1003–1030.
Xu, C., Hansen, M. C. and Wiuf, C. (2023). Full classification of dynamics for one-dimensional continuous time Markov chains with polynomial transition rates. Adv. Appl. Prob. 55, 321–355.

Figure 1. $I=\mathcal{P}^{1+} \cap \mathcal{P}^{1-}$: CMP-like distributions. $II=\mathcal{P}^{2+}_{1}\cap \mathcal{P}^{2-}_{1}$: Exponential-like distributions. $III=\mathcal{P}^{3-}\cap \mathcal{P}^{3+}$: Power-like distributions.