
Critical branching as a pure death process coming down from infinity

Published online by Cambridge University Press:  18 January 2023

Serik Sagitov*
Affiliation:
Chalmers University of Technology and University of Gothenburg
*
*Postal address: Mathematical Sciences, Chalmers University of Technology and University of Gothenburg, SE-412 96 Göteborg, Sweden. Email address: [email protected]

Abstract

We consider the critical Galton–Watson process with overlapping generations stemming from a single founder. Assuming that both the variance of the offspring number and the average generation length are finite, we establish the convergence of the finite-dimensional distributions, conditioned on non-extinction at a remote time of observation. The limiting process is identified as a pure death process coming down from infinity.

This result brings a new perspective on Vatutin’s dichotomy, which asserts that in the critical regime of age-dependent reproduction, an extant population either contains a large number of short-living individuals or consists of a few long-living individuals.

Type: Original Article

Copyright: © The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Consider a self-replicating system evolving in the discrete-time setting according to the following rules:

  Rule 1: The system is founded by a single individual, the founder, born at time 0.

  Rule 2: The founder dies at a random age L and gives a random number N of births at random ages $\tau_j$ satisfying $1\le\tau_1\le \ldots\le \tau_N\le L$.

  Rule 3: Each new individual lives independently from others according to the same life law as the founder.

An individual born at time $t_1$ and dying at time $t_2$ is considered to be alive during the time interval $[t_1,t_2-1]$. Letting Z(t) stand for the number of individuals alive at time t, we study the random dynamics of the sequence

\begin{equation*}Z(0)=1, Z(1), Z(2),\ldots,\end{equation*}

which is a natural extension of the well-known Galton–Watson process, or GW process for short; see [13]. The process $Z({\cdot})$ is the discrete-time version of what is usually called the Crump–Mode–Jagers process or the general branching process; see [5]. To emphasise the discrete-time setting, we call it a GW process with overlapping generations, or GWO process for short.
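Although no simulation enters the arguments of this paper, a minimal computational sketch of Rules 1–3 may help fix ideas. The particular life law used below (geometric offspring numbers with mean one, birth ages uniform on $\{1,\ldots,5\}$, and L equal to the last birth age) is an illustrative assumption and is not taken from the text.

import random
from collections import Counter

def sample_life():
    # Illustrative life law (an assumption): N is geometric with mean 1, so the
    # process is critical; birth ages are uniform on {1,...,5}; L is the last
    # birth age, which guarantees 1 <= tau_1 <= ... <= tau_N <= L (Rule 2).
    N = 0
    while random.random() < 0.5:
        N += 1
    taus = sorted(random.randint(1, 5) for _ in range(N))
    L = taus[-1] if taus else random.randint(1, 5)
    return L, taus

def simulate_Z(horizon):
    # One trajectory of Z(0), ..., Z(horizon), started from a single founder born at time 0.
    alive = Counter()                 # alive[t] = number of individuals alive at time t
    births = [0]                      # birth times of individuals not yet processed
    while births:
        b = births.pop()
        L, taus = sample_life()
        for t in range(b, min(b + L, horizon + 1)):    # alive on [b, b + L - 1], as in the convention above
            alive[t] += 1
        births.extend(b + tau for tau in taus if b + tau <= horizon)   # Rule 3: children evolve independently
    return [alive[t] for t in range(horizon + 1)]

print(simulate_Z(30))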

Put $b\,:\!=\,\frac{1}{2}\mathrm{var}(N)$ . This paper deals with the GWO processes satisfying

(1) \begin{equation} \mathrm{E}(N)=1,\quad 0<b<\infty.\end{equation}

The condition $\mathrm{E}(N)=1$ says that the reproduction regime is critical, implying $\mathrm{E}(Z(t))\equiv1$ and making extinction inevitable, provided $b>0$. According to [1, Chapter I.9], given (1), the survival probability

\begin{equation*}Q(t)\,:\!=\,\mathrm{P}(Z(t)>0)\end{equation*}

of a GW process satisfies the asymptotic formula $tQ(t)\to b^{-1}$ as $t\to\infty$ (this was first proven in [6] under a third moment assumption). A direct extension of this classical result for the GWO processes,

\begin{equation*}tQ(ta)\to b^{-1},\quad t\to\infty,\quad a\,:\!=\,\mathrm{E}(\tau_1+\ldots+\tau_N),\end{equation*}

was obtained in [3, 4] under the conditions (1), $a<\infty$,

(2) \begin{equation} t^2\mathrm{P}(L>t)\to 0,\quad t\to\infty,\end{equation}

plus an additional condition. (Notice that by our definition, $a\ge1$, and $a=1$ if and only if $L\equiv1$, that is, when the GWO process in question is a GW process.) Treating a as the mean generation length (see [5, 8]), we may conclude that the asymptotic behaviour of the critical GWO process with short-living individuals (see the condition (2)) is similar to that of the critical GW process, provided time is counted generation-wise.
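As a quick numerical illustration of the displayed asymptotics in the GW case ($a=1$), one can iterate the extinction-probability recursion $P(t+1)=f(P(t))$, where f is the offspring generating function. The geometric offspring law below is an assumption made for illustration only; it has $\mathrm{E}(N)=1$ and $b=1$, so $tQ(t)$ should approach 1.

def f(s):
    # pgf of the geometric offspring law P(N = k) = (1/2)^(k+1), k = 0, 1, 2, ...
    # mean 1 and variance 2, hence b = var(N)/2 = 1
    return 1.0 / (2.0 - s)

P = 0.0                              # P(0) = P(Z(0) = 0) = 0
for t in range(1, 100001):
    P = f(P)                         # extinction probability P(t) for the GW process
    if t % 20000 == 0:
        print(t, t * (1.0 - P))      # t Q(t), expected to approach 1/b = 1

For this particular (linear-fractional) offspring law one even has $Q(t)=1/(t+1)$ exactly, so the printed values increase to 1.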

New asymptotic patterns for the critical GWO processes are found under the assumption

(3) \begin{equation} t^2\mathrm{P}(L>t)\to d,\quad 0\le d< \infty,\quad t\to\infty,\end{equation}

which, compared to (2), allows the existence of long-living individuals given $d>0$. The condition (3) was first introduced in the pioneering paper [12] dealing with the Bellman–Harris processes. In the current discrete-time setting, the Bellman–Harris process is a GWO process subject to two restrictions: (a) $\mathrm{P}(\tau_1=\ldots=\tau_N= L)=1$, so that all births occur at the moment of an individual’s death, and (b) the random variables L and N are independent. For the Bellman–Harris process, the conditions (1) and (3) imply $a=\mathrm{E}(L)$, $a<\infty$, and according to [12, Theorem 3], we get

(4) \begin{equation} tQ(t)\to h,\quad t\to\infty,\qquad h\,:\!=\,\frac{a+\sqrt{a^2+4bd}}{2b}.\end{equation}

As was shown in [11, Corollary B] (see also [7, Lemma 3.2] for an adaptation to the discrete-time setting), the relation (4) holds even for the GWO processes satisfying the conditions (1), (3), and $a<\infty$.

The main result of this paper, Theorem 1 of Section 2, considers a critical GWO process under the above-mentioned set of assumptions (1), (3), $a<\infty$ , and establishes the convergence of the finite-dimensional distributions conditioned on survival at a remote time of observation. A remarkable feature of this result is that its limit process is fully described by a single parameter $c\,:\!=\,4bda^{-2}$ , regardless of complicated mutual dependencies between the random variables $\tau_j, N,L$ .

Our proof of Theorem 1, requiring an intricate asymptotic analysis of multi-dimensional probability generating functions, is split into two sections for the sake of readability. Section 3 presents a new proof of (4) inspired by the proof of [12]. The crucial aspect of this approach, compared to the proof of [7, Lemma 3.2], is that certain essential steps do not rely on the monotonicity of the function Q(t). In Section 4, the technique of Section 3 is further developed to finish the proof of Theorem 1.

We conclude this section by mentioning the illuminating family of GWO processes called the Sevastyanov processes [9]. The Sevastyanov process is a generalised version of the Bellman–Harris process, with possibly dependent L and N. In the critical case, the mean generation length of the Sevastyanov process, $a=\mathrm{E}(L N)$, can be represented as

\begin{equation*}a=\mathrm{cov}(L,N)+\mathrm{E}(L).\end{equation*}

Thus, if L and N are positively correlated, the average generation length a exceeds the average life length $\mathrm{E}(L)$ .

Turning to a specific example of the Sevastyanov process, take

\begin{equation*}\mathrm{P}(L= t)= p_1 t^{-3}(\!\ln\ln t)^{-1}, \quad \mathrm{P}(N=0|L= t)=1-p_2,\quad \mathrm{P}(N=n_t|L= t)=p_2, \ t\ge2,\end{equation*}

where $n_t\,:\!=\,\lfloor t(\!\ln t)^{-1}\rfloor$ and $(p_1,p_2)$ are such that

\begin{equation*}\sum_{t=2}^\infty \mathrm{P}(L= t)=p_1 \sum_{t=2}^\infty t^{-3}(\!\ln\ln t)^{-1}=1,\quad \mathrm{E}(N)=p_1p_2\sum_{t=2}^\infty n_t t^{-3}(\!\ln\ln t)^{-1}=1.\end{equation*}

In this case, for some positive constant $c_1$ ,

\begin{equation*}\mathrm{E}\big(N^2\big)= p_1p_2\sum_{t=2}^\infty n_t^2 t^{-3}(\!\ln\ln t)^{-1}< c_1\int_2^\infty \frac{d (\!\ln t)}{(\!\ln t)^2\ln\ln t}<\infty,\end{equation*}

implying that the condition (1) is satisfied. Clearly, the condition (3) holds with $d=0$ . At the same time,

\begin{equation*}a=\mathrm{E}(NL)= p_1p_2\sum_{t=2}^\infty n_t t^{-2}(\!\ln\ln t)^{-1}> c_2\int_2^\infty \frac{d (\!\ln t)}{(\!\ln t)(\!\ln\ln t)}=\infty,\end{equation*}

where $c_2$ is a positive constant. This example demonstrates that for the GWO process, unlike for the Bellman–Harris process, the conditions (1) and (3) do not automatically imply the condition $a<\infty$ .
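An informal numerical look at the two sums involved may be useful. The snippet below drops the constant factor $p_1p_2$, which does not affect convergence or divergence, and starts the sums at $t=3$ so that $\ln\ln t>0$; the first partial sum stays bounded, while the second grows without bound, though only at a triple-logarithmic rate.

import math

def n_t(t):
    return math.floor(t / math.log(t))

def w(t):
    return 1.0 / (t ** 3 * math.log(math.log(t)))

for T in (10**3, 10**4, 10**5, 10**6):
    s2 = sum(n_t(t) ** 2 * w(t) for t in range(3, T))   # sum behind E(N^2): bounded
    sa = sum(n_t(t) * t * w(t) for t in range(3, T))    # sum behind E(NL) = a: diverges slowly
    print(f"T = {T:>7}:  E(N^2)-type sum = {s2:.4f},  E(NL)-type sum = {sa:.4f}")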

2. The main result

Theorem 1. For a GWO process satisfying (1), (3), and $a<\infty$, the finite-dimensional distributions converge weakly:

\begin{align*}(Z(ty),0<y<\infty|Z(t)>0)\stackrel{\rm fdd\,}{\longrightarrow} (\eta(y),0<y<\infty),\quad t\to\infty.\end{align*}

The limiting process is a continuous-time pure death process $(\eta(y),0\le y<\infty)$ , whose evolution law is determined by a single compound parameter $c=4bda^{-2}$ , as specified next.

The finite-dimensional distributions of the limiting process $\eta({\cdot})$ are given below in terms of the k-dimensional probability generating functions $\mathrm{E}\Big(z_1^{\eta(y_1)}\cdots z_k^{\eta(y_k)}\Big)$ , $k\ge1$ , assuming

(5) \begin{align} 0=y_0< y_1< \ldots< y_{j}<1\le y_{j+1}< \ldots< &\ y_k<y_{k+1}=\infty, \nonumber \\ &0\le j\le k,\quad 0\le z_1,\ldots,z_k<1.\end{align}

Here the index j marks the position of the pivotal value 1, which corresponds to the time of observation t of the underlying GWO process.

As will be shown in Section 4.2, if $j=0$ , then

\begin{align*}\mathrm{E}\Big(z_1^{\eta(y_1)}\cdots z_k^{\eta(y_k)}\Big)=1-\frac{1+\sqrt{1+\sum\nolimits_{i=1}^{k}z_1\cdots z_{i-1}(1-z_{i})\Gamma_i}}{\big(1+\sqrt{1+c}\big)y_1},\quad \Gamma_i\,:\!=\,c({y_1}/{y_i} )^2,\end{align*}

and if $j\ge1$ ,

\begin{align*}\mathrm{E} &\Big(z_1^{\eta(y_1)}\cdots z_k^{\eta(y_k)}\Big)\\&=\frac{\sqrt{1+\sum_{i=1}^{j}z_1\cdots z_{i-1}(1-z_{i})\Gamma_i+cz_1\cdots z_{j}y_1^{2} }-\sqrt{1+\sum\nolimits_{i=1}^{k}z_1\cdots z_{i-1}(1-z_{i})\Gamma_i}}{\big(1+\sqrt{1+c}\big)y_1}.\end{align*}

In particular, for $k=1$ , we have

\begin{align*}\mathrm{E}\big(z^{\eta(y)}\big)&= \frac{\sqrt{1+c(1-z)+czy^{2}}-\sqrt{1+c(1-z)}}{\big(1+\sqrt{1+c}\big)y},\quad 0< y<1,\\\mathrm{E}\big(z^{\eta(y)}\big)&= 1-\frac{1+\sqrt{1+c(1-z)}}{\big(1+\sqrt{1+c}\big)y},\quad y\ge1.\end{align*}

It follows that $\mathrm{P}(\eta(y)\ge0)=1$ for $y>0$ , and moreover, putting here first $z=1$ and then $z=0$ yields

\begin{align*}\mathrm{P}(\eta(y)<\infty)&=\frac{\sqrt{1+cy^2}-1}{\big(1+\sqrt{1+c}\big)y}\cdot1_{\{0< y<1\}}+\bigg(1-\frac{2}{\big(1+\sqrt{1+c}\big)y}\bigg)\cdot1_{\{y\ge 1\}},\\\mathrm{P}(\eta(y)=0)&=\frac{y-1}{y}\cdot1_{\{y\ge 1\}},\end{align*}

implying that $\mathrm{P}(\eta(y)=\infty)>0$ for all $y>0$ . In fact, letting $y\to0$ , we may set $\mathrm{P}(\eta(0)= \infty)=1.$

To demonstrate that the process $\eta({\cdot})$ is indeed a pure death process, consider the function

\begin{equation*}\mathrm{E}\Big(z_1^{\eta(y_1)-\eta(y_2)}\cdots z_{k-1}^{\eta(y_{k-1})-\eta(y_{k})}z_k^{\eta(y_k)}\Big)\end{equation*}

determined by

\begin{align*}\mathrm{E}\Big(z_1^{\eta(y_1)-\eta(y_2)}\cdots z_{k-1}^{\eta(y_{k-1})-\eta(y_{k})}z_k^{\eta(y_k)}\Big)&=\mathrm{E}\Big(z_1^{\eta(y_1)}(z_2/z_1)^{\eta(y_2)}\cdots (z_k/z_{k-1})^{\eta(y_k)}\Big).\end{align*}

This function is given by two expressions:

\begin{align*}\frac{\big(1+\sqrt{1+c}\big)y_1-1-\sqrt{1+\sum\nolimits_{i=1}^{k}\! (1-z_{i})\gamma_i}}{\big(1+\sqrt{1+c}\big)y_1}, \quad &\text{for }j=0,\\\frac{\sqrt{1+\sum\nolimits_{i=1}^{j-1}(1-z_{i})\gamma_i+(1-z_{j})\Gamma_j+cz_j y_1^2}-\sqrt{1+\sum\nolimits_{i=1}^{k} \!(1-z_{i})\gamma_i}}{\big(1+\sqrt{1+c}\big)y_1}, \quad &\text{for }j\ge1,\end{align*}

where $\gamma_i\,:\!=\,\Gamma_i-\Gamma_{i+1}$ and $\Gamma_{k+1}=0$ . Setting $k=2$ , $z_1=z$ , and $z_2=1$ , we deduce that the function

(6) \begin{equation} \mathrm{E}\big(z^{\eta(y_1)-\eta(y_2)};\,\eta(y_1)<\infty\big),\quad 0<y_1<y_2,\quad 0\le z\le1,\end{equation}

is given by one of the following three expressions, depending on whether $j=2$ , $j=1$ , or $j=0$ :

\begin{align*}\frac{\sqrt{1+c y_1^2+c(1-z)\big(1-(y_1/y_2)^2\big)}-\sqrt{1+c (1-z)\big(1-(y_1/y_2)^2\big)}}{\big(1+\sqrt{1+c}\big)y_1},\quad &y_2<1, \\[3pt]\frac{\sqrt{1+c y_1^2+c(1-z) \big(1-y_1^2\big)}-\sqrt{1+c(1-z)\big(1-(y_1/y_2)^2\big)}}{\big(1+\sqrt{1+c}\big)y_1},\quad &y_1<1\le y_2, \\[3pt]1- \frac{1+\sqrt{1+c(1-z)\big(1-(y_1/y_2)^2\big)}}{\big(1+\sqrt{1+c}\big)y_1},\quad &1\le y_1.\end{align*}

Since the generating function (6) is finite at $z=0$ , we conclude that

\begin{equation*}\mathrm{P}(\eta(y_1)< \eta(y_2);\, \eta(y_1)< \infty)=0,\quad 0<y_1<y_2.\end{equation*}

This implies

\begin{equation*}\mathrm{P}(\eta(y_2)\le \eta(y_1))=1,\quad 0<y_1<y_2,\end{equation*}

meaning that unless the process $\eta({\cdot})$ is sitting at the infinity state, it evolves by negative integer-valued jumps until it gets absorbed at zero.

Consider now the conditional probability generating function

(7) \begin{equation}\mathrm{E}\big(z^{\eta(y_1)-\eta(y_2)}| \eta(y_1)<\infty\big),\quad 0<y_1<y_2,\quad 0\le z\le1.\end{equation}

In accordance with the three expressions given above for (6), the generating function (7) is specified by the following three expressions:

\begin{align*}\frac{\sqrt{1+c y_1^2+c(1-z)\big(1-(y_1/y_2)^2\big)}-\sqrt{1+c (1-z)\big(1-(y_1/y_2)^2\big)}}{\sqrt{1+c y_1^2}-1},\quad &y_2<1, \\[3pt]\frac{\sqrt{1+c y_1^2+c(1-z) \big(1-y_1^2\big)}-\sqrt{1+c(1-z)\big(1-(y_1/y_2)^2\big)}}{\sqrt{1+c y_1^2}-1},\quad &y_1<1\le y_2, \\[3pt]1- \frac{\sqrt{1+c(1-z)\big(1-(y_1/y_2)^2\big)}-1}{\big(1+\sqrt{1+c}\big)y_1-2},\quad &1\le y_1.\end{align*}

In particular, setting $z=0$ here, we obtain

\begin{equation*}\mathrm{P}(\eta(y_1)-\eta(y_2)=0| \eta(y_1)<\infty)= \left\{\begin{array}{l@{\quad}l@{\quad}r}\frac{\sqrt{1+c\big(1+y_1^2-(y_1/y_2)^2\big)}-\sqrt{1+c\big(1-(y_1/y_2)^2\big)}}{\sqrt{1+c y_1^2}-1} & \!\!\text{for} & \!\! 0<y_1< y_2<1, \\[13pt]\frac{\sqrt{1+c}-\sqrt{1+c\big(1-(y_1/y_2)^2\big)}}{\sqrt{1+c y_1^2}-1} & \!\!\text{for} & \!\! 0<y_1<1\le y_2, \\[13pt]1- \frac{\sqrt{1+c\big(1-(y_1/y_2)^2\big)}-1}{\big(1+\sqrt{1+c}\big)y_1-2} & \!\!\text{for} & \!\! 1\le y_1<y_2.\end{array}\right.\end{equation*}

Notice that given $0<y_1\le1$ ,

\begin{equation*}\mathrm{P}(\eta(y_1)-\eta(y_2)=0| \eta(y_1)<\infty)\to 0,\quad y_2\to\infty,\end{equation*}

which is expected, since $\eta(y_1)\ge\eta(1)\ge1$ while $\eta(y_2)\to0$ as $y_2\to\infty$.

The random times

\begin{equation*}T=\sup\{u\,:\, \eta(u)=\infty\},\quad T_0=\inf\{u\,:\,\eta(u)=0\}\end{equation*}

are major characteristics of a trajectory of the limit pure death process. Since

\begin{align*}\mathrm{P}(T\le y)=\mathrm{E}\big(z^{\eta(y)}\big)\Big\vert_{z=1},\qquad \mathrm{P}(T_0\le y)=\mathrm{E}\big(z^{\eta(y)}\big)\Big\vert_{z=0},\end{align*}

in accordance with the above-mentioned formulas for $\mathrm{E}\big(z^{\eta(y)}\big)$ , we get the following marginal distributions:

\begin{align*} \mathrm{P}(T\le y)&=\frac{\sqrt{1+cy^2}-1}{\big(1+\sqrt{1+c}\big)y}\cdot1_{\{0\le y<1\}}+\bigg(1-\frac{2}{\big(1+\sqrt{1+c}\big)y}\bigg)\cdot1_{\{y\ge 1\}},\\ \mathrm{P}(T_0\le y)&=\frac{y-1}{y}\cdot1_{\{y\ge 1\}}.\end{align*}

The distribution of $T_0$ is free from the parameter c and has the Pareto probability density function

\begin{equation*}f_0(y)=y^{-2}1_{\{y>1\}}.\end{equation*}

In the special case (2), that is, when (3) holds with $d=0$ , we have $c=0$ and $\mathrm{P}(T=T_0)=1$ . If $d>0$ , then $T\le T_0$ , and the distribution of T has the following probability density function:

\begin{equation*}f(y)=\left\{\begin{array}{l@{\quad}l@{\quad}r}\frac{1}{\big(1+\sqrt{1+c}\big)y^2} \Big(1-\frac{1}{\sqrt{1+cy^2}}\Big)& \text{for} & 0\le y<1, \\[7pt]\frac{2}{\big(1+\sqrt{1+c}\big)y^2} & \text{for} & y\ge1,\end{array}\right.\end{equation*}

which has a positive jump at $y=1$ of size $f(1)-f(1-)=(1+c)^{-1/2}$ ; see Figure 1. Observe that $\frac{f(1-)}{f(1)}\to\frac{1}{2}$ as $c\to\infty$ .

Figure 1. The dashed line is the probability density function of T; the solid line is the probability density function of $T_0$ . The left panel illustrates the case $c=5$ , and the right panel illustrates the case $c=15$ .
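A short plotting sketch along the lines of Figure 1 (assuming matplotlib is available; the two densities are exactly those displayed above):

import numpy as np
import matplotlib.pyplot as plt

def density_T(y, c):
    # density of T displayed above; it has a jump of size (1 + c)^(-1/2) at y = 1
    s = 1.0 + np.sqrt(1.0 + c)
    return np.where(y < 1,
                    (1.0 - 1.0 / np.sqrt(1.0 + c * y**2)) / (s * y**2),
                    2.0 / (s * y**2))

def density_T0(y):
    # Pareto density of T_0
    return np.where(y >= 1, 1.0 / y**2, 0.0)

y = np.linspace(0.01, 4.0, 1000)
fig, axes = plt.subplots(1, 2, figsize=(9, 3), sharey=True)
for ax, c in zip(axes, (5, 15)):
    ax.plot(y, density_T(y, c), '--', label='density of T')
    ax.plot(y, density_T0(y), '-', label='density of T_0')
    ax.set_title(f'c = {c}')
    ax.legend()
plt.show()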

Intuitively, the limiting pure death process counts the long-living individuals in the GWO process, that is, those individuals whose life length is of order t. These long-living individuals may have descendants; however, none of them would live long enough to be detected by the finite-dimensional distributions at the relevant time scale; see Lemma 2 below. Theorem 1 suggests a new perspective on Vatutin’s dichotomy (see [12]), which asserts that the long-term survival of a critical age-dependent branching process is due to either a large number of short-living individuals or a small number of long-living individuals. In terms of the random times $T\le T_0$, Vatutin’s dichotomy discriminates between two possibilities: if $T>1$, then $\eta(1)=\infty$, meaning that the GWO process has survived thanks to a large number of individuals, while if $T\le 1<T_0$, then $1\le \eta(1)<\infty$, meaning that the GWO process has survived thanks to a small number of individuals.

3. Proof that $tQ(t)\to h$

This section deals with the survival probability of the critical GWO process

\begin{equation*}Q(t)=1-P(t),\quad P(t)\,:\!=\,\mathrm{P}(Z(t)=0).\end{equation*}

By its definition, the GWO process can be represented as the sum

(8) \begin{equation} Z(t)=1_{\{L>t\}}+\sum\nolimits_{j=1}^{N} Z_j\left(t-\tau_j\right),\quad t=0,1,\ldots,\end{equation}

involving N independent daughter processes $Z_j({\cdot})$ generated by the founder individual at the birth times $\tau_j$ , $j=1,\ldots,N$ (here it is assumed that $Z_j(t)=0$ for all negative t). The branching property (8) implies the relation

\begin{equation*} 1_{\{Z(t)=0\}}=1_{\{L\le t\}}\prod\nolimits_{j=1}^{N} 1_{\big\{Z_j \left(t-\tau_j\right)=0\big\}},\end{equation*}

which says that the GWO process is extinct at time t if and only if the founder is dead at time t and all daughter processes are extinct by time t. After taking expectations of both sides, we can write

(9) \begin{equation}P(t)=\mathrm{E}\bigg(\prod\nolimits_{j=1}^{N}P\left(t-\tau_j\right);\,L\le t\bigg).\end{equation}

As shown next, this nonlinear equation for $P({\cdot})$ implies the asymptotic formula (4) under the conditions (1), (3), and $a<\infty$ .
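Before turning to the proof, note that (9) also lends itself to a direct numerical check of (4) in a simple special case. The sketch below iterates (9) for a Bellman–Harris-type law; the particular choices (geometric offspring with $b=1$, independent of L, and $\mathrm{P}(L=l)\propto l^{-3}$, which makes $d=\lim_{t\to\infty} t^2\mathrm{P}(L>t)$ positive) are illustrative assumptions, and the printed values of $tQ(t)$ are only expected to drift slowly towards the constant h from (4).

import math

LMAX = 10**6                                   # truncation level for the life-length law
c0 = 1.0 / sum(l ** -3 for l in range(1, LMAX))
pL = [0.0] + [c0 * l ** -3 for l in range(1, LMAX)]    # P(L = l), l >= 1

a = sum(l * pL[l] for l in range(1, LMAX))     # a = E(LN) = E(L), since L and N are independent here
d = 0.5 * c0                                   # P(L > t) ~ c0/(2 t^2), so t^2 P(L > t) -> c0/2
b = 1.0                                        # var(N)/2 for the geometric offspring law below
h = (a + math.sqrt(a * a + 4.0 * b * d)) / (2.0 * b)

def f(s):                                      # offspring pgf: P(N = k) = (1/2)^(k+1)
    return 1.0 / (2.0 - s)

T = 2000
P = [0.0] * (T + 1)                            # P[t] = P(Z(t) = 0), with P(0) = 0
for t in range(1, T + 1):
    # equation (9) in the Bellman--Harris case: all births occur at the parent's death age L
    P[t] = sum(pL[l] * f(P[t - l]) for l in range(1, t + 1))
    if t % 500 == 0:
        print(t, round(t * (1.0 - P[t]), 4), "   target h =", round(h, 4))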

3.1. Outline of the proof of (4)

We start by stating four lemmas and two propositions. Let

(10) \begin{equation}\Phi(z) \,:\!=\,\mathrm{E}\big((1-z)^ N-1+Nz\big), \end{equation}
(11) \begin{equation}W(t) \,:\!=\,\big(1-ht^{-1}\big)^{N}+Nht^{-1}-\sum\nolimits_{j=1}^{N}Q\left(t-\tau_j\right)-\prod\nolimits_{j=1}^{N} P\left(t-\tau_j\right), \end{equation}
(12) \begin{equation} D(u,t) \,:\!=\,\mathrm{E}\Big(1-\prod\nolimits_{j=1}^{N}P\left(t-\tau_j\right);\,u<L\le t\Big)+\mathrm{E}\big(\big(1-ht^{-1}\big)^{N} -1+Nht^{-1};\,L>u\big), \end{equation}
(13) \begin{equation}\mathrm{E}_u(X) \,:\!=\,\mathrm{E}(X;\,L\le u ),\end{equation}

where $0\le z\le 1$ , $u>0$ , $t\ge h$ , and X is an arbitrary random variable.

Lemma 1. Given (10), (11), (12), and (13), assume that $0< u\le t$ and $t\ge h$ . Then

\begin{align*}\Phi\big(ht^{-1}\big)= \mathrm{P}(L> t)+\mathrm{E}_u\bigg(\!\sum\nolimits_{j=1}^{N}Q\left(t-\tau_j\right)\bigg)-Q(t)+\mathrm{E}_u(W(t))+D(u,t).\end{align*}

Lemma 2. If (1) and (3) hold, then $\mathrm{E}(N;\,L>ty)=o\big(t^{-1}\big)$ as $t\to\infty$ for any fixed $y>0$ .

Lemma 3. If (1), (3), and $a<\infty$ hold, then for any fixed $0<y<1$ ,

\begin{align*}\mathrm{E}_{ty}\bigg(\!\sum\nolimits_{j=1}^{N}\bigg(\frac{1}{t-\tau_j}-\frac{1}{t}\bigg)\bigg)\sim at^{-2},\quad t\to\infty.\end{align*}

Lemma 4. Let $k\ge1$ . If $0\le f_j,g_j\le 1$ for $ j=1,\ldots,k$ , then

\begin{equation*} \prod\nolimits_{j=1}^k\!\big(1-g_j\big)-\prod\nolimits_{j=1}^k\big(1-f_j\big)=\sum\nolimits_{j=1}^k (f_j-g_j)r_j, \end{equation*}

where $0\le r_j\le1$ and

\begin{align*}1-r_j=\sum\nolimits_{i=1}^{j-1}g_i+\sum\nolimits_{i=j+1}^{k}f_i-R_j,\end{align*}

for some $R_j\ge0$ . If moreover $f_j\le q$ and $g_j\le q$ for some $q>0$ , then

\begin{equation*}1- r_j\le(k-1)q,\qquad R_j\le kq,\qquad R_j\le k^2q^2.\end{equation*}

Proposition 1. If (1), (3), and $a<\infty$ hold, then $\limsup_{t\to\infty} tQ(t)<\infty$ .

Proposition 2. If (1), (3), and $a<\infty$ hold, then $\liminf_{t\to\infty} tQ(t)>0$ .

According to these two propositions, there exists a triplet of positive numbers $(q_1,q_2,t_0)$ such that

(14) \begin{equation}q_1\le tQ(t)\le q_2,\quad t\ge t_0,\quad 0<q_1<h<q_2<\infty.\end{equation}

The claim $tQ(t)\to h$ is derived using (14) by carefully removing asymptotically negligible terms from the relation for $Q({\cdot})$ stated in Lemma 1, after setting $u=ty$ with a fixed $0<y<1$ and then choosing a sufficiently small y. In particular, as an intermediate step, we will show that

(15) \begin{align}Q(t)= \mathrm{E}_{ty}\bigg(\!\sum\nolimits_{j=1}^{N}Q\left(t-\tau_j\right)\bigg)+\mathrm{E}_{ty}(W(t))-aht^{-2}+o\big(t^{-2}\big),\quad t\to\infty. \end{align}

Then, restating our goal as $\phi(t)\to 0$ in terms of the function $\phi(t)$ , defined by

(16) \begin{equation}Q(t)=\frac{h +\phi(t)}{t},\quad t\ge1,\end{equation}

we rewrite (15) as

(17) \begin{align}\frac{h +\phi(t)}{t}&= \mathrm{E}_{ty}\bigg(\!\sum\nolimits_{j=1}^{N}\frac{h +\phi\left(t-\tau_j\right)}{t-\tau_j}\bigg)+\mathrm{E}_{ty}(W(t))-aht^{-2}+o\big(t^{-2}\big),\quad t\to\infty. \end{align}

It turns out that the three terms involving h, outside W(t), effectively cancel each other, yielding

(18) \begin{align}\frac{\phi(t)}{t}&= \mathrm{E}_{ty}\bigg(\!\sum\nolimits_{j=1}^{N}\frac{\phi\left(t-\tau_j\right)}{t-\tau_j}+W(t)\bigg)+o\big(t^{-2}\big),\quad t\to\infty.\end{align}

Treating W(t) in terms of Lemma 4 yields

(19) \begin{align} \phi(t)&= \mathrm{E}_{ty}\bigg(\!\sum\nolimits_{j=1}^{N}\phi\left(t-\tau_j\right) r_j(t)\frac{t}{t-\tau_j}\bigg)+o\big(t^{-1}\big), \end{align}

where $r_j(t)$ is a counterpart of $r_j$ in Lemma 4. To derive from here the desired convergence $\phi(t)\to0$, we will adapt a clever trick from Chapter 9.1 of [10], which was further developed in [12] for the Bellman–Harris process, with possibly infinite $\mathrm{var}(N)$. Define a non-negative function m(t) by

(20) \begin{align}m(t)\,:\!=\,|\phi(t)|\, \ln t,\quad t\ge 2. \end{align}

Multiplying (19) by $\ln t$ and using the triangle inequality, we obtain

(21) \begin{align} m(t)\le \mathrm{E}_{ty}\Bigg(\!\sum\nolimits_{j=1}^{N} m\left(t-\tau_j\right)r_j(t) \frac{t\ln t}{\left(t-\tau_j\right)\ln\left(t-\tau_j\right)}\Bigg)+v(t),\end{align}

where $v(t)\ge 0$ and $v(t)=o(t^{-1}\ln t)$ as $t\to\infty$ . It will be shown that this leads to $m(t)=o(\!\ln t)$ , thereby concluding the proof of (4).

3.2. Proof of lemmas and propositions

Proof of Lemma 1. For $0<u\le t$, the relations (9) and (13) give

(22) \begin{align}P(t)=\mathrm{E}_u\bigg(\prod\nolimits_{j=1}^{N}P\left(t-\tau_j\right) \bigg)+\mathrm{E}\bigg(\prod\nolimits_{j=1}^{N}P\left(t-\tau_j\right);\,u<L\le t\bigg).\end{align}

On the other hand, for $t\ge h$ ,

\begin{align*}\Phi\big(ht^{-1}\big)&\stackrel{(10)}{=}\mathrm{E}_u\Big(\big(1-ht^{-1}\big)^{N}-1+Nht^{-1}\Big)+\mathrm{E}\Big(\big(1-ht^{-1}\big)^{N}-1 +Nht^{-1};\,L> u\Big).\end{align*}

Adding the latter relation to

\begin{align*}1&=\mathrm{P}(L\le u)+\mathrm{P}(L> t)+\mathrm{P}(u<L\le t)\end{align*}

and subtracting (22) from the sum, we get

\begin{align*}\Phi\big(ht^{-1}\big)+Q(t)=\mathrm{E}_u\bigg(\big(1-ht^{-1}\big)^{N} +Nht^{-1}-\prod\nolimits_{j=1}^{N}P\left(t-\tau_j\right)\bigg)+\mathrm{P}(L> t)+D(u,t),\end{align*}

with D(u, t) defined by (12). After a rearrangement, we obtain the statement of the lemma.

Proof of Lemma 2. For any fixed $\epsilon>0$,

\begin{align*}\mathrm{E}(N;\,L>t)&=\mathrm{E}(N;\,N\le t\epsilon,L>t)+\mathrm{E}\big(N;\,1<N(t\epsilon)^{-1},L>t\big)\\&\le t\epsilon\mathrm{P}(L>t)+(t\epsilon)^{-1}\mathrm{E}\big(N^2;\,L>t\big).\end{align*}

Thus, by (1) and (3),

\begin{align*}\limsup_{t\to\infty} (t\mathrm{E}(N;\,L>t))\le d\epsilon,\end{align*}

and the assertion follows as $\epsilon\to0$ .

Proof of Lemma 3. For $t=1,2,\ldots$ and $y>0$, put

\begin{align*}B_t(y)&\,:\!=\, t^2\,\mathrm{E}_{ty}\bigg(\!\sum\nolimits_{j=1}^{N}\bigg(\frac{1}{t-\tau_j}-\frac{1}{t}\bigg)\bigg)-a. \end{align*}

For any $0<u<ty$ , using

\begin{equation*}a=\mathrm{E}_u(\tau_1+\ldots+\tau_N)+A_u,\quad A_u\,:\!=\,\mathrm{E}(\tau_1+\ldots+\tau_N;\,L> u),\end{equation*}

we get

\begin{align*}B_t(y)&= \mathrm{E}_u\bigg(\!\sum\nolimits_{j=1}^{N} \frac{t}{t-\tau_j}\tau_j\bigg)+\mathrm{E}\bigg(\!\sum\nolimits_{j=1}^{N} \frac{t}{t-\tau_j}\tau_j\,;\,u<L\le ty\bigg)\\& \quad -\mathrm{E}_u(\tau_1+\ldots+\tau_N)-A_u\\ &=\mathrm{E}\Bigg(\!\sum\nolimits_{j=1}^{N}\frac{\tau_j}{1-\tau_j/t};\,u<L\le ty\Bigg)+\mathrm{E}_u\Bigg(\!\sum\nolimits_{j=1}^{N}\frac{\tau_j^2}{t-\tau_j}\Bigg)-A_u.\end{align*}

For the first term on the right-hand side, we have $\tau_j\le L\le ty$ , so that

\begin{align*}\mathrm{E}\Bigg(\!\sum\nolimits_{j=1}^{N}\frac{\tau_j}{1-\tau_j/t};\,u<L\le ty\Bigg)\le(1-y)^{-1}A_u.\end{align*}

For the second term, $\tau_j\le L\le u$ and therefore

\begin{align*}\mathrm{E}_u\Bigg(\!\sum\nolimits_{j=1}^{N}\frac{\tau_j^2}{t-\tau_j}\Bigg)\le\frac{u^2}{t-u}\mathrm{E}_u(N)\le\frac{u^2}{t-u}.\end{align*}

This yields

\begin{equation*}-A_u\le B_t(y)\le (1-y)^{-1}A_u+\frac{u^2}{t-u},\quad 0<u<ty<t,\end{equation*}

implying

\begin{equation*}-A_u\le \liminf_{t\to\infty} B_t(y)\le\limsup_{t\to\infty} B_t(y)\le (1-y)^{-1}A_u.\end{equation*}

Since $A_u\to0$ as $u\to\infty$ , we conclude that $B_t(y)\to 0$ as $t\to\infty$ .

Proof of Lemma 4. Let

\begin{equation*}r_j\,:\!=\,\big(1-g_1\big)\ldots \big(1-g_{j-1}\big)\left(1-f_{j+1}\right)\ldots \big(1-f_k\big),\quad 1\le j\le k.\end{equation*}

Then $0\le r_j\le1$ , and the first stated equality is obtained by telescopic summation of

\begin{align*} \big(1-g_1\big)\prod\nolimits_{j=2}^{k}\big(1-f_j\big)-\prod\nolimits_{j=1}^k\big(1-f_j\big)&=(f_1-g_1)r_1,\\ \big(1-g_1\big)\big(1-g_2\big)\prod\nolimits_{j=3}^{k}\big(1-f_j\big)- \big(1-g_1\big)\prod\nolimits_{j=2}^{k}\big(1-f_j\big)&=(f_2-g_2)r_2,\ldots,\\ \prod\nolimits_{j=1}^{k}\big(1-g_j\big)-\prod\nolimits_{j=1}^{k-1}\big(1-g_j\big)\big(1-f_k\big)&=(f_k-g_k)r_k.\end{align*}

The second stated equality is obtained with

\begin{align*}R_j&\,:\!=\,\sum_{i=j+1}^{k}f_i\big(1-\left(1-f_{j+1}\right)\ldots \big(1-f_{i-1}\big)\big)\\& \quad +\sum_{i=1}^{j-1}g_i\big(1-\big(1-g_1\big)\ldots (1-g_{i-1})\left(1-f_{j+1}\right)\ldots \big(1-f_k\big)\big),\end{align*}

by performing telescopic summation of

\begin{align*} 1-\left(1-f_{j+1}\right)&=f_{j+1},\\\left(1-f_{j+1}\right)-\left(1-f_{j+1}\right)(1-f_{j+2})&=f_{j+2}\left(1-f_{j+1}\right),\ldots,\\ \prod\nolimits_{i=j+1}^{k-1}\left(1-f_i\right)- \prod\nolimits_{i=j+1}^{k}\left(1-f_i\right)&=f_k\prod\nolimits_{i=j+1}^{k-1}\left(1-f_i\right),\\ \prod\nolimits_{i=j+1}^{k}\left(1-f_i\right)-\left(1-g_1\right)\prod\nolimits_{i=j+1}^{k}\left(1-f_i\right)&=g_1\prod\nolimits_{i=j+1}^{k}\left(1-f_i\right),\ldots,\\ \prod\nolimits_{i=1}^{j-2}(1-g_i)\prod\nolimits_{i=j+1}^{k}\left(1-f_i\right)- \prod\nolimits_{i=1}^{j-1}(1-g_i)\prod\nolimits_{i=j+1}^{k}\left(1-f_i\right)&=g_{j-1} \prod\nolimits_{i=1}^{j-2}(1-g_i)\prod\nolimits_{i=j+1}^{k}\left(1-f_i\right).\end{align*}

By the above definition of $R_j$ , we have $R_j\ge0$ . Furthermore, given $f_j\le q$ and $g_j\le q$ , we get

\begin{equation*}R_j\le \sum\nolimits_{i=1}^{j-1}g_i+\sum\nolimits_{i=j+1}^{k}f_i\le (k-1)q.\end{equation*}

It remains to observe that

\begin{align*}1-r_j\le 1-(1-q)^{k-1}\le (k-1)q,\end{align*}

and from the definition of $R_j$ ,

\begin{equation*}R_j\le q\sum\nolimits_{i=1}^{k-j-1}(1-(1-q)^{i})+q\sum\nolimits_{i=1}^{j-1}\big(1-(1-q)^{k-j+i-1}\big)\le q^2\sum\nolimits_{i=1}^{k-2}i\le k^2q^2.\end{equation*}
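The proof above is purely algebraic, so the identity and the bounds of Lemma 4 can also be spot-checked numerically; the following randomised test is an illustration only and plays no role in the argument.

import random

def prod(xs):
    out = 1.0
    for x in xs:
        out *= x
    return out

def check_lemma4(k=6, q=0.3, trials=1000):
    for _ in range(trials):
        f = [random.uniform(0, q) for _ in range(k)]
        g = [random.uniform(0, q) for _ in range(k)]
        lhs = prod(1 - x for x in g) - prod(1 - x for x in f)
        rhs = 0.0
        for j in range(k):                                  # j = 1, ..., k in the notation of the lemma
            r_j = prod(1 - x for x in g[:j]) * prod(1 - x for x in f[j + 1:])
            R_j = sum(g[:j]) + sum(f[j + 1:]) - (1 - r_j)
            assert 0.0 <= r_j <= 1.0
            assert 1 - r_j <= (k - 1) * q + 1e-12           # first bound of the lemma
            assert -1e-12 <= R_j <= k * k * q * q + 1e-12   # R_j >= 0 and R_j <= k^2 q^2
            rhs += (f[j] - g[j]) * r_j
        assert abs(lhs - rhs) < 1e-12                       # the telescoping identity itself
    print("Lemma 4 checked on", trials, "random inputs with k =", k, "and q =", q)

check_lemma4()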

Proof of Proposition 1. By the definition of $\Phi({\cdot})$, we have

\begin{equation*}\Phi(Q(t))+P(t)=\mathrm{E}_u\big(P(t)^{N} \big)+\mathrm{P}(L> u)-\mathrm{E}\big(1-P(t)^ N;\,L> u\big),\end{equation*}

for any $0<u<t$ . This and (22) yield

(23) \begin{align}\Phi(Q(t))&=\mathrm{E}_u\bigg(P(t)^{N}-\prod\nolimits_{j=1}^{N}P\left(t-\tau_j\right)\bigg)+\mathrm{P}(L> u) \nonumber\\&\quad -\mathrm{E}\big(1-P(t)^ N;\,L> u\big)-\mathrm{E}\bigg(\prod\nolimits_{j=1}^{N}P\left(t-\tau_j\right);\,u<L\le t\bigg). \end{align}

We therefore obtain the upper bound

\begin{align*} \Phi(Q(t))&\le \mathrm{E}_u\Big(P(t)^{N}-\prod\nolimits_{j=1}^{N}P\left(t-\tau_j\right)\Big)+\mathrm{P}(L> u),\end{align*}

which together with Lemma 4 and the monotonicity of $Q({\cdot})$ implies

(24) \begin{align} \Phi(Q(t))\le \mathrm{E}_u\bigg(\!\sum\nolimits_{j=1}^{N}(Q\left(t-\tau_j\right)-Q(t))\bigg)+\mathrm{P}(L>u).\end{align}

Borrowing an idea from [11], suppose to the contrary that

\begin{equation*}t_n\,:\!=\,\min\{t: tQ(t)\ge n\}\end{equation*}

is finite for any natural n. It follows that

\begin{equation*}Q(t_n)\ge \frac{n}{t_n},\qquad Q(t_n-u)<\frac{n}{t_n-u},\quad 1\le u\le t_n-1.\end{equation*}

Putting $t=t_n$ into (24) and using the monotonicity of $\Phi({\cdot})$ , we find

\begin{eqnarray*} \Phi\big(nt_n^{-1}\big)\le \Phi(Q(t_n))\le \mathrm{E}_u\bigg(\!\sum\nolimits_{j=1}^{N}\bigg(\frac{n}{t_n-\tau_j}-\frac{n}{t_n}\bigg)\bigg)+\mathrm{P}(L> u).\end{eqnarray*}

Setting $u=t_n/2$ here and applying Lemma 3 together with (3), we arrive at the relation

\begin{equation*}\Phi\big(nt_n^{-1}\big)=O\big(nt_n^{-2}\big),\quad n\to\infty.\end{equation*}

Observe that under the condition (1), L’Hôpital’s rule gives

(25) \begin{equation}\Phi(z)\sim bz^2,\quad z\to0.\end{equation}

The resulting contradiction, $n^{2}t_n^{-2}=O\big(nt_n^{-2}\big)$ as $n\to\infty$ , finishes the proof of the proposition.
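For the reader’s convenience, here is a verification of (25) that avoids L’Hôpital’s rule; it uses only $\mathrm{E}(N)=1$ and $\mathrm{E}(N(N-1))=\mathrm{var}(N)=2b<\infty$. For every integer $N\ge0$ and $0\le z\le1$,

\begin{equation*}0\le (1-z)^N-1+Nz\le \binom{N}{2}z^2,\end{equation*}

since $(1-z)^N\ge1-Nz$ and the third derivative of $z\mapsto(1-z)^N$ is non-positive on [0, 1] (the cases $N\le2$ hold with equality). Moreover, $z^{-2}\big((1-z)^N-1+Nz\big)\to\binom{N}{2}$ as $z\to0$ for each fixed N, so by dominated convergence,

\begin{equation*}z^{-2}\Phi(z)\to\mathrm{E}\binom{N}{2}=\tfrac{1}{2}\mathrm{E}(N(N-1))=b,\quad z\to0.\end{equation*}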

Proof of Proposition 2. The relation (23) implies

\begin{align*} \Phi(Q(t))\ge \mathrm{E}_u\bigg(P(t)^{N}-\prod\nolimits_{j=1}^{N}P\left(t-\tau_j\right)\bigg)-\mathrm{E}\big(1-P(t)^ N;\,L> u\big).\end{align*}

By Lemma 4,

\begin{align*}P(t)^{N}-\prod\nolimits_{j=1}^{N}P\left(t-\tau_j\right)= \sum_{j=1}^{N}\big(Q\left(t-\tau_j\right)-Q(t)\big)r_j^*(t),\end{align*}

where $0\le r_j^*(t)\le 1$ is a counterpart of the term $r_j$ in Lemma 4. By the monotonicity of $P({\cdot})$ , we have, again referring to Lemma 4,

\begin{equation*}1-r_j^*(t)\le (N-1)Q(t-L).\end{equation*}

Thus, for $0<y<1$ ,

(26) \begin{align} \Phi(Q(t))&\ge \mathrm{E}_{ty}\Bigg(\!\sum_{j=1}^{N}(Q\left(t-\tau_j\right)-Q(t))r_j^*(t) \Bigg)-\mathrm{E}\big(1-P(t)^ N;\,L> ty\big). \end{align}

The assertion $\liminf_{t\to\infty} tQ(t)>0$ is proven by contradiction. Assume that $\liminf_{t\to\infty} tQ(t)=0$ , so that

\begin{equation*}t_n\,:\!=\,\min\big\{t: tQ(t)\le n^{-1}\big\}\end{equation*}

is finite for any natural n. Plugging $t=t_n$ into (26) and using

\begin{equation*}Q(t_n)\le \frac{1}{nt_n},\quad Q(t_n-u)-Q(t_n)\ge \frac{1}{n(t_n-u)}-\frac{1}{nt_n},\quad 1\le u\le t_n-1,\end{equation*}

we get

\begin{equation*}\Phi\Big(\frac{1}{nt_n}\Big)\ge n^{-1}\mathrm{E}_{t_ny}\bigg(\!\sum\nolimits_{j=1}^{N}\bigg(\frac{1}{t_n-\tau_j}-\frac{1}{t_n}\bigg)r_j^*(t_n)\bigg)-\frac{1}{nt_n}\mathrm{E}(N;\,L> t_ny).\end{equation*}

Given $L\le ty$ , we have

\begin{align*}1-r_j^*(t)\le NQ(t(1-y))\le N\frac{q_2}{t(1-y)}, \end{align*}

where the second inequality is based on the already proven part of (14). Therefore,

\begin{equation*}\mathrm{E}_{t_ny}\bigg(\!\sum\nolimits_{j=1}^{N}\bigg(\frac{1}{t_n-\tau_j}-\frac{1}{t_n}\bigg)\big(1-r_j^*(t_n)\big)\bigg)\le \frac{q_2y}{t_n^2(1-y)^2}\mathrm{E}\big(N^2\big),\end{equation*}

and we derive

\begin{align*} nt_n^2\Phi\Big(\frac{1}{nt_n}\Big)&\ge t_n^2\mathrm{E}_{t_ny}\bigg(\!\sum\nolimits_{j=1}^{N}\bigg(\frac{1}{t_n-\tau_j}-\frac{1}{t_n}\bigg)\bigg) -\frac{\mathrm{E}\big(N^2\big)q_2y}{(1-y)^2}-t_n\mathrm{E}(N;\,L> t_ny).\end{align*}

Sending $n\to\infty$ and applying (25), Lemma 2, and Lemma 3, we arrive at the inequality

\begin{equation*}0\ge a-yq_2\mathrm{E}\big(N^2\big)(1-y)^{-2},\quad 0<y<1,\end{equation*}

which is false for sufficiently small y.

3.3. Proof of (18) and (19)

Fix an arbitrary $0<y<1$ . Lemma 1 with $u=ty$ gives

(27) \begin{align}\Phi\big(h t^{-1}\big)= \mathrm{P}(L> t)+\mathrm{E}_{ty}\bigg(\!\sum\nolimits_{j=1}^{N}Q\left(t-\tau_j\right)\bigg)-Q(t)+\mathrm{E}_{ty}(W(t))+D(ty,t). \end{align}

Let us show that

(28) \begin{align}D(ty,t)=o\big(t^{-2}\big),\quad t\to\infty. \end{align}

Using Lemma 2 and (14), we find that for an arbitrarily small $\epsilon>0$ ,

\begin{equation*}\mathrm{E}\Big(1-\prod\nolimits_{j=1}^{N}P\left(t-\tau_j\right);\,ty<L\le t(1-\epsilon)\Big)=o\big(t^{-2}\big),\quad t\to\infty.\end{equation*}

On the other hand,

\begin{align*}\mathrm{E}\Big(1-\prod\nolimits_{j=1}^{N}P\left(t-\tau_j\right);\,t(1-\epsilon)<L\le t\Big)\le \mathrm{P}(t(1-\epsilon)<L\le t),\end{align*}

so that in view of (3),

\begin{equation*}\mathrm{E}\Big(1-\prod\nolimits_{j=1}^{N}P\left(t-\tau_j\right);\,ty<L\le t\Big)=o\big(t^{-2}\big),\quad t\to\infty.\end{equation*}

This, (12), and Lemma 2 imply (28).

Observe that

(29) \begin{equation}bh^2=ah+d.\end{equation}
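Indeed, with $h$ defined as in (4), a direct substitution confirms (29):

\begin{equation*}bh^2-ah=h(bh-a)=\frac{a+\sqrt{a^2+4bd}}{2b}\cdot\frac{\sqrt{a^2+4bd}-a}{2}=\frac{(a^2+4bd)-a^2}{4b}=d.\end{equation*}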

Combining (27), (28), and

\begin{equation*}\mathrm{P}(L> t)-\Phi\big(h t^{-1}\big)\stackrel{(3)(25)}{=}dt^{-2}-bh^2t^{-2}+o\big(t^{-2}\big)\stackrel{(29)}{=}-aht^{-2}+o\big(t^{-2}\big),\quad t\to\infty,\end{equation*}

we derive (15), which in turn gives (17). The latter implies (18) since by Lemmas 2 and 3,

\begin{equation*} \mathrm{E}_{ty}\bigg(\!\sum\nolimits_{j=1}^{N}\frac{h }{t-\tau_j}\bigg)-\frac{h}{t}=\mathrm{E}_{ty}\bigg(\!\sum\nolimits_{j=1}^{N}\bigg(\frac{h }{t-\tau_j}-\frac{h}{t}\bigg)\bigg)-ht^{-1}\mathrm{E}(N;\,L> ty)=aht^{-2}+o\big(t^{-2}\big).\end{equation*}

Turning to the proof of (19), observe that the random variable

\begin{equation*}W(t)=\big(1-h t^{-1}\big)^{N}-\prod\nolimits_{j=1}^{N}\bigg(1-\frac{h +\phi\left(t-\tau_j\right)}{t-\tau_j}\bigg)+\sum\nolimits_{j=1}^{N}\bigg(\frac{h }{t}-\frac{h +\phi\left(t-\tau_j\right)}{t-\tau_j}\bigg)\end{equation*}

can be represented in terms of Lemma 4 as

\begin{align*}W(t)&=\prod\nolimits_{j=1}^{N}(1-f_j(t))-\prod\nolimits_{j=1}^{N}(1-g_j(t))+\sum\nolimits_{j=1}^{N}(f_j(t)-g_j(t))\\&=\sum\nolimits_{j=1}^{N}(1-r_j(t))(f_j(t)-g_j(t)),\end{align*}

by assigning

(30) \begin{align}f_j(t)\,:\!=\,h t^{-1},\quad g_j(t)\,:\!=\,\frac{h +\phi\left(t-\tau_j\right)}{t-\tau_j}.\end{align}

Here $0\le r_j(t)\le 1$ , and for sufficiently large t,

(31) \begin{align}1-r_j(t)\stackrel{ (14)}{\le} Nq_2t^{-1}.\end{align}

After plugging into (18) the expression

\begin{equation*}W(t)=\sum\nolimits_{j=1}^{N}\bigg(\frac{h }{t}-\frac{h }{t-\tau_j}\bigg)(1-r_j(t))-\sum\nolimits_{j=1}^{N}\frac{\phi\left(t-\tau_j\right)}{t-\tau_j}(1-r_j(t)),\end{equation*}

we get

\begin{align*}\frac{\phi(t)}{t}&= \mathrm{E}_{ty}\bigg(\!\sum\nolimits_{j=1}^{N}\frac{\phi\left(t-\tau_j\right)}{t-\tau_j}r_j(t)\bigg)+\mathrm{E}_{ty}\bigg(\!\sum\nolimits_{j=1}^{N}\bigg(\frac{h }{t-\tau_j}-\frac{h}{t}\bigg)(1-r_j(t) )\bigg)+o\big(t^{-2}\big),\quad\!\! t\to\infty.\end{align*}

The latter expectation is non-negative, and for an arbitrary $\epsilon>0$ , it has the following upper bound:

\begin{align*} \mathrm{E}_{ty}\bigg(\!\sum\nolimits_{j=1}^{N}\bigg(\frac{h }{t-\tau_j}-\frac{h}{t}\bigg)(1-r_j(t) )\bigg)\stackrel{ (31)}{\le} q_2\epsilon\mathrm{E}_{ty}\bigg(\!\sum\nolimits_{j=1}^{N}\bigg(\frac{h }{t-\tau_j}-\frac{h}{t} \bigg)\bigg)+\frac{q_2h}{(1-y)t^2}\mathrm{E}\big(N^2;\,N> t\epsilon\big).\end{align*}

Thus, in view of Lemma 3,

\begin{align*}\frac{\phi(t)}{t}&= \mathrm{E}_{ty}\bigg(\!\sum\nolimits_{j=1}^{N}\frac{\phi\left(t-\tau_j\right)}{t-\tau_j}r_j(t)\bigg)+o\big(t^{-2}\big),\quad t\to\infty.\end{align*}

Multiplying this relation by t, we arrive at (19).

3.4. Proof of $\phi(t)\to 0$

Recall (20). If the non-decreasing function

\begin{equation*}M(t)\,:\!=\,\max_{1\le j\le t} m(j)\end{equation*}

is bounded from above, then $\phi(t)=O\big(\frac{1}{\ln t}\big)$ , proving that $\phi(t)\to 0$ as $t\to\infty$ . If $M(t)\to\infty$ as $t\to\infty$ , then there is an integer-valued sequence $0<t_1<t_2<\ldots,$ such that the sequence $M_n\,:\!=\,M(t_n)$ is strictly increasing and converges to infinity. In this case,

(32) \begin{equation}m(t)\le M_{n-1}<M_n,\quad 1\le t< t_n,\quad m(t_n)=M_n,\quad n\ge1.\end{equation}

Since $|\phi(t)|\le \frac{M_{n}}{\ln t_{n}}$ for $t_n\le t<t_{n+1}$ , to finish the proof of $\phi(t)\to 0$ , it remains to verify that

(33) \begin{equation} M_{n}=o(\!\ln t_{n}),\quad n\to\infty.\end{equation}

Fix an arbitrary $y\in(0,1)$ . Putting $t=t_n$ in (21) and using (32), we find

\begin{align*}M_n\le M_n\mathrm{E}_{t_ny}\Bigg(\!\sum\nolimits_{j=1}^{N}r_j(t_n)\frac{t_n\ln t_n}{\big(t_n-\tau_j\big)\ln\big(t_n-\tau_j\big)}\Bigg)+\big(t_n^{-1}\ln t_n\big)o_n.\end{align*}

Here and elsewhere, $o_n$ stands for a non-negative sequence such that $o_n\to0$ as $n\to\infty$. In different formulas, the symbol $o_n$ represents different such sequences. Since

\begin{equation*}0\le \frac{t\ln t}{(t-u)\ln (t-u)}-1\le \frac{u(1+\ln t)}{(t-u)\ln (t-u)},\quad 0\le u< t-1,\end{equation*}

and $r_j(t_n)\in[0,1]$ , it follows that

\begin{align*}M_n-M_n\mathrm{E}_{t_ny}\bigg(\!\sum\nolimits_{j=1}^{N}r_j(t_n)\bigg)&\le M_n\mathrm{E}_{t_ny}\bigg(\!\sum\nolimits_{j=1}^{N}\frac{\tau_j(1+\ln t_n)}{t_n(1-y)\ln (t_n(1-y))}\bigg)+\big(t_n^{-1}\ln t_n\big)o_n.\end{align*}

Recalling that $a=\mathrm{E}(\!\sum_{j=1}^{N}\tau_j)$ , observe that

\begin{align*}\mathrm{E}_{t_ny}\bigg(\!\sum\nolimits_{j=1}^{N}\frac{\tau_j(1+\ln t_n)}{t_n(1-y)\ln (t_n(1-y))}\bigg)\le \frac{a(1+\ln t_n)}{t_n(1-y)\ln (t_n(1-y))}= \big(a(1-y)^{-1}+o_n\big)t_n^{-1}.\end{align*}

Combining the last two relations, we conclude

(34) \begin{align}M_n\mathrm{E}_{t_ny}\bigg(\!\sum\nolimits_{j=1}^{N}(1-r_j(t_n))\bigg)&\le a(1-y)^{-1}t_n^{-1}M_n +t_n^{-1}(M_n+\ln t_n)o_n.\end{align}

Now it is time to unpack the term $r_j(t)$ . By Lemma 4 with (30),

\begin{equation*}1-r_j(t)=\sum_{i=1}^{j-1}\frac{h +\phi(t-\tau_i)}{t-\tau_i}+(N-j)\frac{h }{t}-R_j(t),\end{equation*}

where, provided $\tau_j\le ty$ ,

\begin{equation*}0\le R_j(t)\le Nq_2t^{-1}(1-y)^{-1},\quad R_j(t)\le N^2q_2^2t^{-2}(1-y)^{-2},\quad t>t^*,\end{equation*}

for a sufficiently large $t^*$ . This allows us to rewrite (34) in the form

\begin{align*}M_n\mathrm{E}_{t_ny} &\Bigg(\!\sum\nolimits_{j=1}^{N}\Bigg(\!\sum_{i=1}^{j-1}\frac{h +\phi(t_n-\tau_i)}{t_n-\tau_i}+(N-j)\frac{h }{t_n}\Bigg)\Bigg)\\&\le M_n\mathrm{E}_{t_ny}\bigg(\!\sum\nolimits_{j=1}^{N} R_j(t_n)\bigg)+a(1-y)^{-1}t_n^{-1}M_n +t_n^{-1}(M_n+\ln t_n)o_n.\end{align*}

To estimate the last expectation, observe that if $\tau_j\le ty$ , then for any $\epsilon>0$ ,

\begin{equation*}R_j(t)\le Nq_2t^{-1}(1-y)^{-1} 1_{\{N>t\epsilon\}}+ N^2q_2^2t^{-2} (1-y)^{-2}1_{\{N\le t\epsilon\}},\quad t>t^*,\end{equation*}

implying that for sufficiently large n,

\begin{equation*}\mathrm{E}_{t_ny}\bigg(\!\sum\nolimits_{j=1}^{N}R_j(t_n)\bigg)\le q_2t_n^{-1}(1-y)^{-1}\mathrm{E}\big(N^{2} ;\, N> t_n\epsilon\big)+ q_2^2\epsilon t_n^{-1}(1-y)^{-2}\mathrm{E}\big(N^2\big),\end{equation*}

so that

\begin{align*}M_n\mathrm{E}_{t_ny}&\bigg(\!\sum\nolimits_{j=1}^{N}\bigg(\!\sum\nolimits_{i=1}^{j-1}\frac{h +\phi(t_n-\tau_i)}{t_n-\tau_i}+(N-j)\frac{h }{t_n}\bigg)\bigg)\\&\le a(1-y)^{-1}t_n^{-1}M_n +t_n^{-1}(M_n+\ln t_n)o_n.\end{align*}

Since

\begin{equation*}\sum\nolimits_{j=1}^{N}\sum\nolimits_{i=1}^{j-1}\bigg(\frac{h}{t_n-\tau_i}- \frac{h }{t_n}\bigg)\ge0,\end{equation*}

we obtain

\begin{align*}M_n\mathrm{E}_{t_ny}&\Bigg(\!\sum\nolimits_{j=1}^{N}\Bigg(\!\sum_{i=1}^{j-1}\frac{\phi(t_n-\tau_i)}{t_n-\tau_i}+(N-1)\frac{h }{t_n}\Bigg)\Bigg)\\&\le a(1-y)^{-1}t_n^{-1}M_n +t_n^{-1}(M_n+\ln t_n)o_n.\end{align*}

By (16) and (14), we have $\phi(t)\ge q_1-h$ for $t\ge t_0$ . Thus, for $\tau_j\le L\le t_ny$ and sufficiently large n,

\begin{equation*}\frac{\phi(t_n-\tau_i)}{t_n-\tau_i}\stackrel{}{\ge} \frac{q_1-h}{t_n(1-y)}.\end{equation*}

This gives

\begin{equation*}\sum\nolimits_{j=1}^{N}\Bigg(\!\sum_{i=1}^{j-1}\frac{\phi(t_n-\tau_i)}{t_n-\tau_i}+(N-1)\frac{h }{t_n}\Bigg)\ge \bigg(h+\frac{q_1-h }{2(1-y)}\bigg)t_n^{-1}N(N-1),\end{equation*}

which, after multiplying by $t_nM_n$ and taking expectations, yields

\begin{align*}\bigg(h+\frac{q_1-h }{2(1-y)}\bigg)M_n\mathrm{E}_{t_ny}(N(N-1))\le a(1-y)^{-1}M_n +(M_n+\ln t_n)o_n.\end{align*}

Finally, since

\begin{equation*} \mathrm{E}_{t_ny}(N(N-1))\to2b,\quad n\to\infty,\end{equation*}

we derive that for any $0<\epsilon<y<1$ , there is a finite $n_\epsilon$ such that for all $n>n_\epsilon$ ,

\begin{equation*}M_n\big(2bh(1-y)+bq_1-bh-a-\epsilon\big) \le \epsilon\ln t_n.\end{equation*}

By (29), we have $bh\ge a$ , and therefore

\begin{equation*}2bh(1-y)+bq_1-bh-a-\epsilon\ge bq_1-2bhy-y.\end{equation*}

Thus, choosing $y=y_0$ such that $bq_1-2bhy_0-y_0=\frac{bq_1}{2}$ , we see that

\begin{equation*}\limsup_{n\to\infty}\frac{M_n}{\ln t_n} \le \frac{2\epsilon}{bq_1},\end{equation*}

which implies (33) as $\epsilon\to0$ , concluding the proof of $\phi(t)\to 0$ .

4. Proof of Theorem 1

We will use the following notational conventions for the k-dimensional probability generating function

\begin{equation*}\mathrm{E}\Big(z_1^{Z(t_1)}\cdots z_k^{Z(t_k)}\Big)=\sum_{i_1=0}^\infty\ldots\sum_{i_k=0}^\infty\mathrm{P}(Z(t_1)=i_1,\ldots, Z(t_k)=i_k)z_1^{i_1}\cdots z_k^{i_k},\end{equation*}

with $0< t_1\le \ldots\le t_k$ and $z_1,\ldots,z_k\in[0,1]$ . We define

\begin{equation*}P_k\big(\bar t,\bar z\big)\,:\!=\,P_k(t_1,\ldots,t_{k};\,z_1,\ldots,z_{k})\,:\!=\,\mathrm{E}\Big(z_1^{Z(t_1)}\cdots z_k^{Z(t_k)}\Big)\end{equation*}

and write, for $t\ge0$ ,

\begin{equation*}P_k\big(t+\bar t,\bar z\big)\,:\!=\,P_k(t+t_1,\ldots,t+t_{k};\,z_1,\ldots,z_{k}).\end{equation*}

Moreover, for $0< y_1<\ldots<y_k$ , we write

\begin{equation*}P_k(t\bar y,\bar z)\,:\!=\,P_k(ty_1,\ldots,ty_{k};\,z_1,\ldots,z_{k}),\end{equation*}

and assuming $0< y_1<\ldots<y_k<1$ ,

\begin{equation*}P_k^*\big(t,\bar y,\bar z\big)\,:\!=\,\mathrm{E}\Big(z_1^{Z(ty_1)}\cdots z_{k}^{Z(ty_{k})};\,Z(t)=0\Big)=P_{k+1}(ty_1,\ldots,ty_k,t;\,z_1,\ldots,z_k,0).\end{equation*}

These conventions will be similarly applied to the functions

(35) \begin{equation}Q_k\big(\bar t,\bar z\big)\,:\!=\,1-P_k\big(\bar t,\bar z\big),\quad Q_k^*\big(t,\bar y,\bar z\big)\,:\!=\,1-P_k^*\big(t,\bar y,\bar z\big).\end{equation}

Our special interest is in the function

(36) \begin{equation} Q_k(t)\,:\!=\,Q_k\big(t+\bar t,\bar z\big),\quad 0= t_1< \ldots< t_k, \quad z_1,\ldots,z_k\in[0,1),\end{equation}

to be viewed as a counterpart of the function Q(t) treated in Section 3. Recalling the compound parameters

\begin{equation*}h=\frac{a+\sqrt{a^2+4bd}}{2b}\end{equation*}

and $c=4bda^{-2}$ , put

(37) \begin{equation}h_k\,:\!=\,h\frac{1+\sqrt{1+cg_k}}{1+\sqrt{1+c}},\quad g_k\,:\!=\, g_k(\bar y,\bar z)\,:\!=\,\sum_{i=1}^{k}z_1\cdots z_{i-1}(1-z_{i})y_{i}^{-2}.\end{equation}

The key step of the proof of Theorem 1 is to show that for any given $1=y_1<y_2<\ldots<y_k$ ,

(38) \begin{equation}tQ_k(t)\to h_k,\quad t_i\,:\!=\,t(y_i-1), \quad i=1,\ldots,k,\quad t\to\infty.\end{equation}

This is done following the steps of our proof of $tQ(t)\to h$ given in Section 3.

Unlike Q(t), the function $Q_k(t)$ is not monotone in t. However, the monotonicity of Q(t) was used in Section 3 only in the proof of (14). The corresponding statement

\begin{equation*} 0<q_1\le tQ_k(t)\le q_2<\infty,\quad t\ge t_0,\end{equation*}

follows from the bounds $(1-z_1)Q(t)\le Q_k(t)\le Q(t)$, which hold by the monotonicity of the underlying generating functions in $z_1,\ldots,z_{k}$. Indeed,

\begin{equation*}Q_k(t)\le Q_k(t, t+t_2,\ldots,t+t_{k};\,0,\ldots,0)= Q(t),\end{equation*}

and on the other hand,

\begin{equation*}Q_k(t)= Q_k(t,t+t_2,\ldots,t+t_{k};\,z_1,\ldots,z_k)= \mathrm{E}\Big(1-z_1^{Z(t)}z_2^{Z(t+t_2)}\cdots z_k^{Z(t+t_k)}\Big)\ge \mathrm{E}\Big(1-z_1^{Z(t)}\Big),\end{equation*}

where

\begin{equation*} \mathrm{E}\Big(1-z_1^{Z(t)}\Big)\ge \mathrm{E}\Big(1-z_1^{Z(t)};\,Z(t)\ge1\Big)\ge (1-z_1)Q(t).\end{equation*}

4.1. Proof of $tQ_k(t)\to h_k$

The branching property (8) of the GWO process gives

\begin{equation*} \prod_{i=1}^{k} z_i^{Z(t_i)}=\prod_{i=1}^{k} z_i^{1_{\{L>t_i\}}}\prod\nolimits_{j=1}^{N} z_i^{Z_j\left(t_i-\tau_j\right)}.\end{equation*}

Given $0< t_1<\ldots<t_k< t_{k+1}=\infty$ , we use

\begin{align*}\prod_{i=1}^{k} z_i^{1_{\{L>t_i\}}}&=1_{\{L\le t_1\}}+\sum_{i=1}^{k}z_1\cdots z_{i}1_{\{t_{i}<L\le t_{i+1}\}}\end{align*}

to deduce the following counterpart of (9):

\begin{align*}P_k\big(\bar t,\bar z\big)&=\mathrm{E}_{t_1}\Bigg(\prod_{j=1}^{N}P_k(\bar t-\tau_j,\bar z)\Bigg)+\sum_{i=1}^{k}z_1\cdots z_{i}\mathrm{E}\Bigg(\prod_{j=1}^{N}P_k(\bar t-\tau_j,\bar z);\,t_{i}<L\le t_{i+1}\Bigg).\end{align*}

This implies

(39) \begin{align}P_k\big(\bar t,\bar z\big)\,=\,&\mathrm{E}_{t_1}\Bigg(\prod_{j=1}^{N}P_k(\bar t-\tau_j,\bar z)\Bigg)+\sum_{i=1}^{k}z_1\cdots z_{i} \mathrm{P}(t_{i}<L\le t_{i+1}) \nonumber\\&-\sum_{i=1}^{k}z_1\cdots z_{i} \mathrm{E}\Bigg(1-\prod_{j=1}^{N}P_k(\bar t-\tau_j,\bar z);\, t_{i}<L\le t_{i+1}\Bigg).\end{align}

Using this relation we establish the following counterpart of Lemma 1.

Lemma 5. Consider the function (36) and put $P_k(t)\,:\!=\,1-Q_k(t)=P_k\big(t+\bar t,\bar z\big)$ . For $0<u<t$ , the relation

(40) \begin{align}\Phi\big(h_k t^{-1}\big)&= \mathrm{P}(L> t)-\sum_{i=1}^{k}z_1\cdots z_{i}\mathrm{P}\big(t+t_i<L\le t+t_{i+1}\big) \nonumber \\&+\mathrm{E}_u\bigg(\!\sum\nolimits_{j=1}^{N}Q_k\left(t-\tau_j\right)\bigg)-Q_k(t)+\mathrm{E}_u(W_k(t))+D_k(u,t) \end{align}

holds with $t_{k+1}=\infty$ ,

(41) \begin{align} W_k(t)\,:\!=\,\big(1-h_k t^{-1}\big)^{N}+Nh_k t^{-1}-\sum\nolimits_{j=1}^{N}Q_k\left(t-\tau_j\right)-\prod\nolimits_{j=1}^{N}P_k\left(t-\tau_j\right),\end{align}

and

(42) \begin{align} D_k(u,t)\,:\!=\, &\mathrm{E}\Big(1-\prod\nolimits_{j=1}^{N}P_k\left(t-\tau_j\right);\,u<L\le t\Big)+\mathrm{E}\Big(\big(1-h_k t^{-1}\big)^{N} -1+Nh_k t^{-1};\,L>u\Big) \nonumber\\&+\sum_{i=1}^{k}z_1\cdots z_{i} \mathrm{E}\Bigg(1-\prod_{j=1}^{N}P_k\left(t-\tau_j\right);\, t+t_{i}<L\le t+t_{i+1}\Bigg).\end{align}

Proof. According to (39),

\begin{align*}P_k(t)&=\mathrm{E}_u\Bigg(\prod_{j=1}^{N}P_k\left(t-\tau_j\right)\Bigg)+\mathrm{E}\bigg(\prod\nolimits_{j=1}^{N}P_k\left(t-\tau_j\right);\,u<L\le t\bigg)\\& \quad +\sum_{i=1}^{k}z_1\cdots z_{i} \mathrm{P}\big(t+t_{i}<L\le t+t_{i+1}\big)\\& \quad -\sum_{i=1}^{k}z_1\cdots z_{i} \mathrm{E}\Bigg(1-\prod_{j=1}^{N}P_k\left(t-\tau_j\right);\, t+t_{i}<L\le t+t_{i+1}\Bigg).\end{align*}

By the definition of $\Phi({\cdot})$ ,

\begin{align*}\Phi\big(h_k t^{-1}\big)+1\,=\,&\mathrm{E}_u\Big(\big(1-h_k t^{-1}\big)^{N}+Nh_k t^{-1}\Big)+\mathrm{P}(L> t)\\&+\mathrm{E}\Big(\big(1-h_k t^{-1}\big)^{N} -1+Nh_k t^{-1};\,L> u\Big)+\mathrm{P}(u<L\le t),\end{align*}

and after subtracting the two last equations, we get

\begin{align*}\Phi\big(h_k t^{-1}\big)+Q_k(t) \,=\, &\mathrm{E}_u\Big(\big(1-h_k t^{-1}\big)^{N} +Nh_k t^{-1}-\prod\nolimits_{j=1}^{N}P_k\left(t-\tau_j\right)\Big)+\mathrm{P}(L> t)\\&-\sum_{i=1}^{k}z_1\cdots z_{i} \mathrm{P}(t+t_{i}<L\le t+t_{i+1})+D_k(u,t),\end{align*}

with $D_k(u,t)$ satisfying (42). After a rearrangement, the relation (40) follows together with (41).

With Lemma 5 in hand, the convergence (38) is proven by applying almost exactly the same argument as used in the proof of $tQ(t)\to h$ . An important new feature emerges because of the additional term in the asymptotic relation defining the limit $h_k$ . Let $1=y_1<y_2<\ldots<y_k<y_{k+1}=\infty$ . Since

\begin{align*}\sum\nolimits_{i=1}^{k}z_1\cdots z_{i}\mathrm{P}\big(ty_{i}<L\le ty_{i+1}\big)\sim d t^{-2}\sum_{i=1}^{k}z_1\cdots z_{i}\Big(y_{i}^{-2}-y_{i+1}^{-2}\Big),\end{align*}

we see that

\begin{align*}\mathrm{P}(L> t)-\sum\nolimits_{i=1}^{k}z_1\cdots z_{i}\mathrm{P}\big(ty_{i}<L\le ty_{i+1}\big)\sim dg_k t^{-2},\end{align*}

where $g_k$ is defined by (37). Assuming $0\le z_1,\ldots,z_k<1$ , we ensure that $g_k>0$ , and as a result, we arrive at a counterpart of the quadratic equation (29),

\begin{equation*}bh_k^2=ah_k+dg_k,\end{equation*}

which gives

\begin{equation*}h_k=\frac{a+\sqrt{a^2+4bdg_k}}{2b}=h\frac{1+\sqrt{1+cg_k}}{1+\sqrt{1+c}},\end{equation*}

justifying our definition (37). We conclude that for $k\ge1$ ,

(43) \begin{align}\frac{Q_k(t\bar y,\bar z)}{Q(t)}\to &\frac{1+\sqrt{1+c\sum\nolimits_{i=1}^{k}z_1\cdots z_{i-1}(1-z_{i})y_{i}^{-2}}}{1+\sqrt{1+c}}, \nonumber\\& \qquad \qquad 1=y_1<\ldots< y_k,\quad0\le z_1,\ldots,z_k<1.\end{align}
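For completeness, the second equality in the display for $h_k$ a few lines above (and hence the definition (37)) can be checked directly using $c=4bda^{-2}$:

\begin{equation*}h_k=\frac{a+\sqrt{a^2+4bdg_k}}{2b}=\frac{a}{2b}\Big(1+\sqrt{1+cg_k}\Big)=h\,\frac{1+\sqrt{1+cg_k}}{1+\sqrt{1+c}},\qquad\text{since}\quad h=\frac{a}{2b}\Big(1+\sqrt{1+c}\Big).\end{equation*}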

4.2. Conditioned generating functions

To finish the proof of Theorem 1, consider the generating functions conditioned on the survival of the GWO process. Given (5) with $j\ge1$ , we have

\begin{align*}Q(t)\mathrm{E}&\Big(z_1^{Z(ty_1)}\cdots z_k^{Z(ty_k)}|Z(t)>0\Big)=\mathrm{E}(z_1^{Z(ty_1)}\cdots z_k^{Z(ty_k)};\,Z(t)>0)\\&=P_k(t\bar y,\bar z)-\mathrm{E}\Big(z_1^{Z(ty_1)}\cdots z_k^{Z(ty_k)};\,Z(t)=0\Big)\stackrel{(35)}{=}Q_j^*\big(t,\bar y,\bar z\big)-Q_k(t\bar y,\bar z),\end{align*}

and therefore,

\begin{equation*}\mathrm{E}\Big(z_1^{Z(ty_1)}\cdots z_k^{Z(ty_k)}|Z(t)>0\Big)=\frac{Q_j^*\big(t,\bar y,\bar z\big)}{Q(t)}-\frac{Q_k(t\bar y,\bar z)}{Q(t)}.\end{equation*}

Similarly, if (5) holds with $j=0$ , then

\begin{equation*}\mathrm{E}\Big(z_1^{Z(ty_1)}\cdots z_k^{Z(ty_k)}|Z(t)>0\Big)=1-\frac{Q_k(t\bar y,\bar z)}{Q(t)}.\end{equation*}

Letting $t^{\prime}=ty_1$ , we get

\begin{equation*}\frac{Q_k(t\bar y,\bar z)}{Q(t)}=\frac{Q_k(t^{\prime},t^{\prime}y_2/y_1,\ldots,t^{\prime}y_k/y_1;\,z_1,\ldots,z_k)}{Q(t^{\prime})}\frac{Q(ty_1)}{Q(t)},\end{equation*}

and applying the relation (43) together with $Q(ty_1)/Q(t)\to y_1^{-1}$, which follows from (4), we have

\begin{equation*}\frac{Q_k(t\bar y,\bar z)}{Q(t)}\to \frac{1+\sqrt{1+\sum\nolimits_{i=1}^{k}z_1\cdots z_{i-1}(1-z_{i})\Gamma_i}}{\big(1+\sqrt{1+c}\big)y_1},\end{equation*}

where $\Gamma_i=c({y_1}/{y_i} )^2$ . On the other hand, since

\begin{equation*}Q_j^*\big(t,\bar y,\bar z\big)=Q_{j+1}(ty_1,\ldots,ty_j,t;\,z_1,\ldots,z_j,0), \quad j\ge1,\end{equation*}

we also get

\begin{equation*}\frac{Q_j^*\big(t,\bar y,\bar z\big)}{Q(t)}\to \frac{1+\sqrt{1+\sum\nolimits_{i=1}^{j}z_1\cdots z_{i-1}(1-z_{i})\Gamma_i+cz_1\cdots z_{j}y_1^2}}{\big(1+\sqrt{1+c}\big)y_1}.\end{equation*}

We conclude that as stated in Section 2,

\begin{align*}\mathrm{E}\Big(z_1^{Z(ty_1)}\cdots z_k^{Z(ty_k)}|Z(t)>0\Big)\to \mathrm{E}\Big(z_1^{\eta(y_1)}\cdots z_k^{\eta(y_k)}\Big).\end{align*}

Acknowledgements

The author is grateful to two anonymous referees for their valuable comments, corrections, and suggestions, which helped enhance the readability of the paper.

Funding information

There are no funding bodies to thank in relation to the creation of this article.

Competing interests

There were no competing interests to declare which arose during the preparation or publication process of this article.

References

[1] Athreya, K. B. and Ney, P. E. (1972). Branching Processes. John Wiley, New York.
[2] Bellman, R. and Harris, T. E. (1948). On the theory of age-dependent stochastic branching processes. Proc. Nat. Acad. Sci. USA 34, 601–604.
[3] Durham, S. D. (1971). Limit theorems for a general critical branching process. J. Appl. Prob. 8, 1–16.
[4] Holte, J. M. (1974). Extinction probability for a critical general branching process. Stoch. Process. Appl. 2, 303–309.
[5] Jagers, P. (1975). Branching Processes with Biological Applications. John Wiley, New York.
[6] Kolmogorov, A. N. (1938). Zur Lösung einer biologischen Aufgabe. Commun. Math. Mech. Chebyshev Univ. Tomsk 2, 1–12.
[7] Sagitov, S. (1995). Three limit theorems for reduced critical branching processes. Russian Math. Surveys 50, 1025–1043.
[8] Sagitov, S. (2021). Critical Galton–Watson processes with overlapping generations. Stoch. Quality Control 36, 87–110.
[9] Sevastyanov, B. A. (1964). The age-dependent branching processes. Theory Prob. Appl. 9, 521–537.
[10] Sewastjanow, B. A. (1974). Verzweigungsprozesse. Akademie-Verlag, Berlin.
[11] Topchii, V. A. (1987). Properties of the probability of nonextinction of general critical branching processes under weak restrictions. Siberian Math. J. 28, 832–844.
[12] Vatutin, V. A. (1980). A new limit theorem for the critical Bellman–Harris branching process. Math. USSR Sb. 37, 411–423.
[13] Watson, H. W. and Galton, F. (1874). On the probability of the extinction of families. J. Anthropol. Inst. Great Britain Ireland 4, 138–144.
[14] Yakymiv, A. L. (1984). Two limit theorems for critical Bellman–Harris branching processes. Math. Notes 36, 546–550.