1. Introduction
It is known that Galton–Watson processes are widely applied in nuclear physics, biology, ecology, epidemiology, and many other areas, and they have been studied extensively; see [Reference Athreya and Ney2, Reference Haccou, Jagers and Vatutin10, Reference Kimmel and Axelrod18] and the references therein. The study of Galton–Watson processes can be extended directly in two directions. One popular extension is the branching process in a random environment (BPRE), which has attracted much attention. Many interesting results arise from the existence of the random environment; we refer the reader to [Reference Kersting and Vatutin15] and the references therein for details. Another interesting extension of the Galton–Watson process is the branching process in a varying environment (BPVE). Compared with BPREs, the study of BPVEs has not been as successful. The main reason is that a BPVE is no longer a time-homogeneous Markov chain, whereas BPREs do enjoy some homogeneity properties. Indeed, if the environments are assumed to be stationary and ergodic, then a BPRE is a time-homogeneous process under the annealed probability. The emergence of so-called varying environments also brings some strange phenomena to branching processes. For example, the process may ‘fall asleep’ in some positive state [Reference Lindvall19], it may diverge at different exponential rates [Reference Macphee and Schuh21], and the tail probabilities of the survival time may show some strange asymptotics [Reference Fujimagari9, Reference Wang and Yao29]. For other aspects of the study of BPVEs, we refer the reader to [Reference Bhattacharya and Perlman3–Reference Cohn and Wang5, Reference Dolgopyat, Hebbar, Koralov and Perlman7, Reference Jagers11, Reference Jones13, Reference Kersting14] and the references therein.
In this paper we study BPVEs with immigration. For simplicity, we assume that exactly one immigrant enters the system in each generation. Roughly speaking, for $n\ge0$, let $Z_n$ be the number of individuals in the nth generation, not counting the immigrant entering the system at time n. If $Z_n=0$, we call n a regeneration time. Our aim is to provide necessary and sufficient conditions for deciding whether the process has finitely or infinitely many regeneration times. We note that for Galton–Watson processes or BPREs with immigration, if the process has one regeneration time then it must have infinitely many. In other words, owing to time homogeneity, such a process can never have a finite, positive number of regeneration times.
Our motivation originates from two sources: the regeneration structure of BPREs and the cutpoints of random walks in varying environments. On the one hand, in [Reference Kesten, Kozlov and Spitzer16], in order to study the stable limit law of random walks in random environments, a regeneration structure of a single-type BPRE was constructed, and the tail probabilities of the regeneration time and of the total progeny before the first regeneration time were estimated. Related problems for the multitype version of this regeneration structure can be found in [Reference Key17, Reference Roitershtein23, Reference Wang25]. Along these lines, it is natural to consider the number of regeneration times of BPVEs. On the other hand, in [Reference Csáki, Földes and Révész6, Reference James, Lyons and Peres12, Reference Lorentzen20, Reference Wang26], a class of questions related to the cutpoints of random walks in varying environments was considered. The regeneration structures of BPVEs and the excursions between successive cutpoints share some similarities, so we study the regeneration times of BPVEs in this paper.
We treat here only the regeneration times of one-type and two-type BPVEs with linear-fractional offspring distributions. The one-type case is much easier, while the two-type case is considerably more complicated; since the ideas used to study the two models are similar, we omit the proofs in the one-type case. The difficulty in the two-type case arises from the fact that the probability that n is a regeneration time is written in terms of products of $2\times2$ nonnegative matrices, which are hard to estimate. To overcome this difficulty, we need some delicate analysis relating the spectral radii, the tails of continued fractions, and the products of nonnegative matrices. This analysis leads to some results in these fields which may be of independent interest.
In Section 2, we precisely define the models and state the main results. In Section 3, we prove some properties of continued fractions that are useful for the proof of the main result. In Section 4, we focus on the proof of the main result. In Section 5, we construct some concrete examples that explicitly exhibit the new phenomena that arise from the existence of so-called varying environments.
2. Models and main results
Linear-fractional branching processes are of special interest because the iterations of their generating functions are again linear-fractional, allowing explicit calculation of various quantities of importance [Reference Athreya and Ney2, pp. 7–8]. Such explicit results illuminate the known asymptotic results concerning more general branching processes and, on the other hand, may bring insight into less-investigated aspects of the theory of branching processes. So, in order to discuss the regeneration times of branching processes in varying environments (one-type and two-type), we study linear-fractional branching processes first.
2.1. One-type case
For $k\ge1$, suppose $0<p_k\le \frac12$ and $q_k>0$ are numbers such that $p_k+q_k=1$, and set $f_k(s)={p_k}/({1-q_ks})$, $0\le s\le1$.
Let $\{Z_n\}_{n\ge0}$ be a Markov chain such that $Z_0=0$ and $E(s^{Z_n}\mid Z_0,\ldots,Z_{n-1}) = [f_{n}(s)]^{1+Z_{n-1}}$, $n\ge1$. Clearly, $\{Z_n\}_{n\ge0}$ forms a branching process in varying environments with exactly one immigrant in each generation. We now define the regeneration times with which we are concerned.
Definition 1. Let $C=\{n\ge0\colon Z_n=0\}$ and, for $k\ge1$, let $C_k=\{n\colon n+i\in C, 0\le i\le k-1\}$. If $n\in C$, we call n a regeneration time of the process $\{Z_n\}$. If $n\in C_k$, we call n a k-strong regeneration time of the process $\{Z_n\}$.
Remark 1. Here, we slightly abuse the term ‘regeneration time’. Notice that if $R\in C$ then $Z_R=0$, i.e. the process temporarily dies out. But the process may be regenerated at time R, since the immigrant entering the system at time R may give birth to a number of individuals. From this point of view, we call R a regeneration time. We emphasize that the regeneration times here differ from the classical regeneration times of regenerative processes. In the literature (see, e.g., [Reference Sigman and Wolff24]), for a stochastic process $X= \{X(t)\}_{t \ge 0}$, if there is a random variable $R > 0$ such that $\{X(t + R)\}_{t \ge 0}$ is independent of $\{\{X(t)\}_{t < R},R\}$, and $\{X(t + R) \}_{t \ge 0}$ equals $\{X(t)\}_{t \ge 0}$ in distribution, then X is called a regenerative process and R is called a regeneration time. In our setting, for a regeneration time R, owing to the existence of the so-called varying environments, the distribution of $\{Z_{R+n}\}_{n\ge 0}$ differs from that of $\{Z_n\}_{n\ge 0}$.
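To make the model concrete, note that the offspring law with generating function $f_k(s)={p_k}/({1-q_ks})$ is the geometric distribution $P(X=j)=p_kq_k^{j}$, $j\ge0$. The following minimal simulation sketch (not part of the original analysis; the function name and the sample choice $p_n\equiv\frac12$ are ours, purely for illustration) generates the process and records the set C of regeneration times.

```python
import numpy as np

def regeneration_times(p, horizon, seed=None):
    """Simulate the one-type BPVE with one immigrant per generation and
    return C = {n : Z_n = 0}.  p[n-1] is the parameter of generation n;
    the offspring law with p.g.f. p/(1 - q s) is geometric: P(X=j) = p q^j."""
    rng = np.random.default_rng(seed)
    z, C = 0, [0]                       # Z_0 = 0, so 0 is a regeneration time
    for n in range(1, horizon + 1):
        # The immigrant plus the Z_{n-1} current individuals reproduce
        # independently; numpy's geometric counts trials, so subtract 1.
        z = int(np.sum(rng.geometric(p[n - 1], size=1 + z) - 1))
        if z == 0:
            C.append(n)
    return C

C = regeneration_times([0.5] * 10**4, 10**4, seed=1)
print(len(C), C[:8])
```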
For $k\ge1$ , let $m_k=f^{\prime}_k(1)={q_k}/{p_k}$ . For $n\ge k\ge 1$ , set $D(k,n)\,:\!=\,1+\sum_{j=k}^n m_j\cdots m_n$ and write, for simplicity, $D(n)\equiv D(1,n)$ .
The following theorem provides a criterion for the finiteness of the number of regeneration times.
Theorem 1. Suppose that, for some $\varepsilon>0$, $\varepsilon <p_n\le \frac12$ for all $n\ge 1$. Let D(n), $n\ge1$, be as defined above. If $\sum_{n=2}^\infty{1}/({D(n)\log n})<\infty$,
then $\{Z_n\}$ has at most finitely many regeneration times, almost surely. If there exists some $\delta>0$ such that $D(n)\le \delta n\log n$ for n large enough and $\sum_{n=2}^\infty{1}/({D(n)\log n})=\infty$,
then $\{Z_n\}$ has infinitely many k-strong regeneration times, almost surely.
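As a numerical companion to Theorem 1, one can compute D(n) by the forward recursion $D(n)=1+m_nD(n-1)$, $D(0)=1$, which follows directly from the definition of D(k,n), and then monitor the partial sums of $\sum_{n\ge2}{1}/({D(n)\log n})$. The sketch below does this for the sample family $p_n=\frac12-{B}/({4n})$ (our choice, purely for illustration); larger B makes D(n) grow faster and pushes the series towards convergence.

```python
import math

def D_values(p, N):
    """D(n) = 1 + sum_{j=1}^n m_j...m_n with m_k = q_k/p_k, computed via
    the recursion D(n) = 1 + m_n * D(n-1), D(0) = 1."""
    out, Dn = [], 1.0
    for n in range(1, N + 1):
        m = (1.0 - p(n)) / p(n)
        Dn = 1.0 + m * Dn
        out.append(Dn)
    return out

for B in (0.5, 2.0):                       # illustrative values only
    p = lambda n, B=B: 0.5 - B / (4 * n) if 4 * n > 2 * B else 0.4
    D = D_values(p, 10**5)
    s = sum(1.0 / (D[n - 1] * math.log(n)) for n in range(2, 10**5 + 1))
    print(f"B={B}: D(N)={D[-1]:.3e}, partial sum={s:.3f}")
```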
2.2. Two-type case
Suppose $a_k,b_k,d_k,\theta_k$ , $k\ge 1$ , are positive real numbers and set
For $n\ge1$ and $ \mathbf{s}=(s_1,s_2)^\top\in [0,1]\times [0,1]$ , let
which is known as the probability-generating function of a linear-fractional distribution. Here and in what follows, $a^{\top}$ denotes the transpose of the vector a, $\mathbf{e}_1=(1,0)^{\top}$ , $\mathbf{e}_2=(0,1)^{\top}$ , and $\mathbf{1}={\mathbf{e}_1}+{\mathbf{e}_2}=(1,1)^{\top}$ .
Suppose $\mathbf{Z}_n=(Z_{n,1},Z_{n,2})^{\top}$, $n\ge 0$, is a two-type branching process with immigration satisfying $E\big(\mathbf{s}^{\mathbf{Z}_{n}}\mid\mathbf{Z}_0,\ldots,\mathbf{Z}_{n-1}\big)=\mathbf{f}_n(\mathbf{s})^{\mathbf{Z}_{n-1}+{\mathbf{e}_1}}$ for all $n\ge1$. Here, $\mathbf{f}_n(\mathbf{s})^{\mathbf{Z}_{n-1}+{\mathbf{e}_1}}=\big[{f}^{(1)}_n(\mathbf{s})\big]^{Z_{n-1,1}+1}\big[{f}^{(2)}_n(\mathbf{s})\big]^{Z_{n-1,2}}$.
We now define the regeneration times and the k-strong regeneration times of the two-type process $\{\mathbf{Z}_n\}$ in a similar fashion to the one-type case.
Definition 2. Let $C=\{n\ge0\colon\mathbf{Z}_n=(0,0)^\top\}$ and, for $k\ge1$ , let $C_k=\{n\colon n+i\in C, 0\le i\le k-1\}$ . If $n\in C$ , we call n a regeneration time of the process $\{\mathbf{Z}_n\}$ . If $n\in C_k$ , we call n a k-strong regeneration time of the process $\{\mathbf{Z}_n\}$ .
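Since the linear-fractional offspring laws $f^{(1)}_n,f^{(2)}_n$ can be sampled once their parameters are fixed, the regeneration sets C and $C_k$ of Definition 2 are straightforward to estimate by simulation. The skeleton below is entirely ours and the offspring samplers are placeholders (Poisson laws, not the model's linear-fractional laws); it merely encodes the branching recursion $E(\mathbf{s}^{\mathbf{Z}_n}\mid\cdots)=\mathbf{f}_n(\mathbf{s})^{\mathbf{Z}_{n-1}+\mathbf{e}_1}$: the type-1 immigrant and every current individual reproduce independently.

```python
import numpy as np

def two_type_regenerations(offspring1, offspring2, horizon, k=2, seed=None):
    """Sketch: simulate the two-type BPVE with immigration e_1 and return
    (C, C_k).  offspring{1,2}(n, size, rng) must return a (size, 2) integer
    array of offspring vectors for type-1 (resp. type-2) parents of
    generation n; these samplers stand in for f_n^{(1)}, f_n^{(2)}."""
    rng = np.random.default_rng(seed)
    z = np.zeros(2, dtype=np.int64)            # Z_0 = (0, 0)^T
    C = [0]
    for n in range(1, horizon + 1):
        new = offspring1(n, z[0] + 1, rng).sum(axis=0)   # immigrant + type-1
        if z[1] > 0:
            new = new + offspring2(n, z[1], rng).sum(axis=0)
        z = new
        if z[0] == 0 and z[1] == 0:
            C.append(n)
    Cset = set(C)
    Ck = [t for t in C if all(t + i in Cset for i in range(k))]
    return C, Ck

def off1(n, size, rng):   # placeholder sampler, offspring mean (0.9, 0.2)
    return np.column_stack([rng.poisson(0.9, size), rng.poisson(0.2, size)])

def off2(n, size, rng):   # placeholder sampler, offspring mean (0.3, 0.8)
    return np.column_stack([rng.poisson(0.3, size), rng.poisson(0.8, size)])

C, Ck = two_type_regenerations(off1, off2, 5000, k=2, seed=1)
print(len(C), len(Ck))
```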
To study the regeneration times of the two-type branching process, we need the following condition on the sequences $a_k,b_k,d_k,\theta_k$ , $k\ge1$ .
Assumption 1. Suppose that
and $a_k\rightarrow a$ , $b_k\rightarrow b$ , $d_k\rightarrow d$ , $\theta_k\rightarrow\theta$ as $k\rightarrow\infty$ , where $b,d>0$ and $a,\theta\ge0$ are certain numbers such that $a+\theta>0$ and $bd-a\theta\neq0$ .
In what follows, for a matrix M we denote by $\varrho(M)$ its spectral radius (the largest eigenvalue). For $n\ge m\ge1$, we set $L(m,n)\,:\!=\,1+\sum_{j=m}^n\varrho(M_j)\cdots\varrho(M_n)$ (3)
and write L(1,n) as L(n) for simplicity. In the two-type case, we have the following criteria.
Theorem 2. Assume Assumption 1 holds, and that $\varrho(M_k)\ge 1$ for all $k\ge1$ .
(i) If $\sum_{n=2}^\infty{1}/({L(n)\log n})<\infty$,
then $\{\mathbf{Z}_n\}$ has at most finitely many regeneration times, almost surely.
(ii) If there exists some $\delta>0$ such that $L(n)\le \delta n\log n$ for n large enough and $\sum_{n=2}^\infty{1}/({L(n)\log n})=\infty$,
then $\{\mathbf{Z}_n\}$ has infinitely many k-strong regeneration times, almost surely.
Remark 2. For branching processes with general offspring distributions, the probability-generating function of the population size cannot be computed explicitly in general, but, as seen in [Reference Agresti1, Reference Kersting and Vatutin15], it can be controlled from below and above by that of a branching process with linear-fractional offspring distributions. Based on this observation, our results may be generalized to branching processes with general offspring distributions.
Remark 3. We give some explanation of Assumption 1. The assumption in (2) allows us to show that $\zeta\prod_{k=m}^{n}\varrho(M_k)\le\varrho(M_m\cdots M_n)\le\gamma\prod_{k=m}^{n}\varrho(M_k)$, $n\ge m\ge1$, for some universal constants $0<\zeta<\gamma<\infty$; that is, the spectral radius of a product of matrices can be uniformly bounded from below and above by the product of the spectral radii of these matrices; see Lemma 5. Such a result plays an important role in proving Theorem 2.
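For numerical experiments, the quantities entering Theorem 2 are easy to evaluate: $\varrho(M)$ for a $2\times2$ nonnegative matrix is its Perron root, and L(n) obeys the forward recursion $L(n)=1+\varrho(M_n)L(n-1)$, $L(0)=1$, in complete analogy with D(n) in the one-type case. A minimal helper (ours; the placeholder matrices are for illustration only):

```python
import numpy as np

def rho(M):
    """Spectral radius: the largest eigenvalue in modulus (for a
    nonnegative matrix this is the Perron root)."""
    return max(abs(np.linalg.eigvals(M)))

def L_values(Ms):
    """L(n) = 1 + sum_{j=1}^n rho(M_j)...rho(M_n), computed via the
    recursion L(n) = 1 + rho(M_n) * L(n-1) with L(0) = 1."""
    out, Ln = [], 1.0
    for M in Ms:
        Ln = 1.0 + rho(M) * Ln
        out.append(Ln)
    return out

Ms = [np.array([[1.0, 1.0], [0.5, 0.1]])] * 50     # placeholder matrices
print(L_values(Ms)[-1])
```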
3. Products of $2\times2$ matrices and continued fractions
The probability-generating function of $\mathbf{Z}_n$ can be written in terms of the products of the mean offspring matrices, which are hard to compute directly since they are inhomogeneous. But it is known that the products of $2\times2$ matrices can be written in terms of the products of the tails of certain continued fractions.
In this section, we focus on how to estimate the products of the mean offspring matrices by means of continued fractions. To begin with, we introduce some new matrices related to $M_k$ , $k\ge1$ . For $k\ge1$ , set
and write
Then, for $n\ge k\ge1$ , we have
and $A_k\cdots A_n=\Lambda_k^{-1}M_k\cdots M_n\Lambda_{n+1}$ , $n\ge k\ge 1$ . Since $a_k$ , $b_k$ , $d_k$ , and $\theta_k$ are all positive numbers, we have
Under Assumption 1, we have
whose spectral radii are
Next, we introduce some basics on continued fractions. Let $\beta_k,\alpha_k$, $k\ge 1$, be certain real numbers. For $1\le k\le n$, we denote by $\xi_{k,n}\,:\!=\,\dfrac{\beta_k}{\alpha_k+\dfrac{\beta_{k+1}}{\alpha_{k+1}+\cdots+\dfrac{\beta_{n}}{\alpha_{n}}}}$ (6)
the $(n-k+1)$ th approximant of a continued fraction, and
If $\lim_{n\rightarrow\infty}\xi_{k,n}$ exists, we say that the continued fraction $\xi_k$ is convergent and its value is defined as $\lim_{n\rightarrow\infty}\xi_{k,n}$ . We call $\xi_k$ , $k\ge1$ , in (7) the tails, and
$k \ge2$ , the critical tails of the continued fraction
respectively.
The following lemma gives the convergence of the tails and the critical tails of the continued fractions.
Lemma 1. If $\lim_{n\rightarrow\infty}\alpha_n=\alpha\ne0$, $\lim_{n\rightarrow\infty}\beta_n=\beta$, and $\alpha^2+4\beta\ge0$, then, for any $k\ge1$, $\lim_{n\rightarrow\infty}\xi_{k,n}$ exists and, furthermore, $\lim_{k\rightarrow\infty}\xi_{k}=({\sqrt{\alpha^2+4\beta}-\alpha})/{2}$.
The proof of Lemma 1 can be found in many references. We refer the reader to [Reference Lorentzen20] (see the discussion between (4.1) and (4.2) on p. 81 therein).
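Numerically, the approximants $\xi_{k,n}$ are evaluated by the backward recursion $\xi_{j,n}=\beta_j/(\alpha_j+\xi_{j+1,n})$ started from $\xi_{n+1,n}=0$, consistent with the relation $\xi_k={\beta_k}/({\alpha_k+\xi_{k+1}})$ used below. The sketch (ours; constant coefficients chosen for illustration) also checks the limit in Lemma 1, the positive root of $x^2+\alpha x-\beta=0$:

```python
import math

def xi(alpha, beta, k, n):
    """Approximant xi_{k,n} via the backward recursion
    xi_{j,n} = beta_j / (alpha_j + xi_{j+1,n}), with xi_{n+1,n} = 0."""
    x = 0.0
    for j in range(n, k - 1, -1):
        x = beta(j) / (alpha(j) + x)
    return x

a, b = 1.0, 2.0                                 # alpha_j -> a, beta_j -> b
print(xi(lambda j: a, lambda j: b, 1, 60))      # tail approximant
print((math.sqrt(a * a + 4 * b) - a) / 2)       # fixed point of x = b/(a+x): 1.0
```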
For $n\ge k\ge1$ , let
where we stipulate that $y_{n+1,n}=1$ . Then we have
Lemma 2. For $1\le k\le n$ , $\xi_{k,n}$ defined in (8) coincides with the one in (6) with $\beta_k=\tilde b_k^{-1}\tilde d_{k+1}^{-1}$ and $\alpha_k=\tilde a_k\tilde b_k^{-1}\tilde d_{k+1}^{-1}$ .
Proof. Clearly,
For $1\le k< n$ , note that
Iterating this equation, we conclude that the lemma is true.
In the remainder of this section, we always assume that Assumption 1 holds, and $\xi_k,\xi_{k,n}$ , $n\ge k\ge1$ , are as defined in (6) and (7) with $\beta_k=\tilde b_k^{-1}\tilde d_{k+1}^{-1}$ and $\alpha_k=\tilde a_k\tilde b_k^{-1}\tilde d_{k+1}^{-1}$ . Since
it follows from Lemma 1 that
Moreover, consulting (5), (8), and the relationship $\xi_k={\beta_k}/({\alpha_k+\xi_{k+1}})$ , we have
The relationship between the products $\xi_{k+1}^{-1}\cdots\xi_{k+n}^{-1}$ and the entries of $A_{k+1}\cdots A_{k+n}$, which plays an important role in the proof of our main result, was established in [Reference Wang28, Theorem 2]. For convenience, we state it here.
Proposition 1. ([Reference Wang28, Theorem 2].) Let $\xi_k$ be as in (7) for all $k \ge 1$ . Suppose $M_k\rightarrow M$ , $a+\theta\neq 0$ , $b\neq 0$ , and $bd\neq a\theta$ . Then there exists a $k_0>0$ such that, for $k\ge k_0$ and $i,j=1,2$ , we have
where the convergence is uniform in k, and
Furthermore, if $\rho\ge1$ then, for $k\ge k_0$ , with the above $i,j=1,2$ ,
where the convergence is uniform in k.
4. Proof of the main result
Keep in mind that in what follows, unless otherwise specified, c (with or without an index) denotes a positive constant whose value may differ from line to line.
For $n\ge k\ge1$ , write $\mathbf{f}_{k,n}(\mathbf{s})\,:\!=\,{\mathbf{f}_k(\mathbf{f}_{k+1}\cdots(\mathbf{f}_n(\mathbf{s}))\cdots\!)}$ . By iterating (1), we see that
As a consequence,
which implies that
Let $G(k,n)\,:\!=\,1+\sum_{j=k}^n\mathbf{e}_1^{\top}M_j\cdots M_n\mathbf1$ , $n\ge k\ge1.$
In order to study the regeneration times of the process $\{\mathbf{Z}_n\}$, we need to estimate G(k,n). With Proposition 1 in hand, and using some further estimates, we can control G(k,n) by the sums $\sum_{j=k}^{n}\varrho(M_j)\cdots\varrho(M_n)$. The necessary estimates are given in the following lemmas.
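For concrete environments, G(k,n) and the controlling sums can be evaluated directly. A small sketch (ours; the matrices below are placeholders, not the mean matrices of the model):

```python
import numpy as np

def G(Ms, k, n):
    """G(k,n) = 1 + sum_{j=k}^n e1^T M_j...M_n 1, with Ms[j-1] = M_j."""
    e1, one = np.array([1.0, 0.0]), np.array([1.0, 1.0])
    total, prod = 1.0, np.eye(2)
    for j in range(n, k - 1, -1):         # build M_j...M_n backwards
        prod = Ms[j - 1] @ prod
        total += e1 @ prod @ one
    return total

def rho_sum(Ms, k, n):
    """The controlling quantity sum_{j=k}^n rho(M_j)...rho(M_n)."""
    total, prod = 0.0, 1.0
    for j in range(n, k - 1, -1):
        prod *= max(abs(np.linalg.eigvals(Ms[j - 1])))
        total += prod
    return total

Ms = [np.array([[1.0, 1.0], [0.5, 0.1]])] * 40   # placeholder matrices
print(G(Ms, 1, 40), rho_sum(Ms, 1, 40))          # same order of growth
```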
Lemma 3. Assume Assumption 1 holds, and that $\varrho(M_k)\ge 1$ for all $k\ge1$. Then, for $\varepsilon>0$, there exist constants $k_0$ and N such that, for all $k> k_0$ and $n-k> N$,
where $\varphi(1)$ and $\varphi(2)$ are as defined in Proposition 1. Furthermore, for $n>N$ ,
Proof. In view of (4), we have
for all $n\ge k\ge 1$ . Assumption 1 means that all the conditions of Proposition 1 are fulfilled. Then, in view of (13), we can see that for each $\varepsilon>0$ there exists a constant $N^{\prime}>0$ such that, for all $n>N^{\prime}$ and $k\ge k_0$ ,
Thus, for $k>k_0$ and $n-k\ge N^{\prime}$ ,
for $l=1,2$ . Noticing that $1-({\theta_{n+1}}/{b_{n+1}})\rightarrow1-({\theta}/{b})$ as $n\rightarrow\infty$ , we conclude that (15) is true.
Now we turn to (16). For the above $\varepsilon$, it follows from (12) that there exist constants $k_0$ and $N^{\prime\prime}$ such that, for all $k> k_0$ and $n-k\ge N^{\prime\prime}$,
Taking (9) into account, we rewrite
It follows from (10) that
Then, using (18), (19), and (21) to estimate (20), we get, for $n>\max\{N^{\prime}+k_0,N^{\prime\prime}+k_0\}$ ,
We can rewrite the second summand in (17) as
Then, taking the fact $\varphi(1)\cdot\lim_{n\rightarrow\infty}{\tilde{b}_n}/{\xi_n^{-1}}=\varphi(2)$ and (22) into consideration, we get that there exists a constant K such that, for $n>K$ ,
Therefore, taking (22), (23), and (17) together, we see that (16) is true.
Lemma 4. Suppose that Assumption 1 holds. Then there are constants $0<c_1<c_2<\infty$ and numbers $N_1,N_2$ , which may depend on $c_1$ and $c_2$ , such that, for all $n-m>N_1$ , $m>N_2$ ,
The following is [Reference Wang27, Lemma 4]. For convenience, we state it here.
Lemma 5. ([Reference Wang27, Lemma 4].) Suppose that Assumption 1 holds. Then, for $n\ge m\ge1$, $\zeta\prod_{k=m}^{n}\varrho(M_k)\le\varrho(M_m\cdots M_n)\le\gamma\prod_{k=m}^{n}\varrho(M_k)$
for some constants $0<\zeta<\gamma<\infty$ independent of m and n.
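Lemma 5 is easy to test numerically: for products of positive matrices whose increments are summable (in the spirit of Assumption 1), the ratio $\varrho(M_m\cdots M_n)/\prod_{k=m}^n\varrho(M_k)$ stays within fixed positive bounds. A quick sanity check (ours; the particular matrices are chosen only for illustration, and the constants $\zeta,\gamma$ are not computed):

```python
import numpy as np

base = np.array([[1.0, 1.0], [0.6, 0.3]])
Ms = [base + np.ones((2, 2)) / k**2 for k in range(1, 151)]  # summable variation

prod, rho_prod = np.eye(2), 1.0
for M in Ms:
    prod = prod @ M
    rho_prod *= max(abs(np.linalg.eigvals(M)))

print(max(abs(np.linalg.eigvals(prod))) / rho_prod)  # bounded ratio, cf. Lemma 5
```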
Lemma 6. Suppose that Assumption 1 holds. Then there exist constants $c_3>0$ , $c_4>0$ and numbers $N_1,N_3>0$ , which may depend on $c_3$ and $c_4$ , such that, for all $n-m\ge N_1$ and $m>N_3$ ,
The proof of this lemma is similar to that of [Reference Wang27, Lemma 5]; we just point out the differences.
Proof of Lemma 6. We write
From the proof of [Reference Wang27, Lemma 5], we get that
where $P_{m,n}=A_{m,n}(12)A_{m,n}(21)-A_{m,n}(11)A_{m,n}(22)$ .
Applying Proposition 1, we have that for $\varepsilon>0$ there exist $k_0>0$ and $N_1>0$ such that, for all $m>k_0$ , $n-m\ge N_1$ ,
It follows from Lemma 1 that $\lim_{m\rightarrow\infty}\xi_m=-{\varrho_1}/({bd-a\theta})$ . As a result,
On the other hand, note that
We thus see that
Consequently, there exist constants $c^{\prime}_3>0$ , $c^{\prime}_4>0$ , and $k_1>0$ such that, for $m>k_1$ ,
Taking (25) and (24) into consideration, we have that, for all $m>k_0$ and $n-m\ge N_1$, there exists a constant $c^{\prime}$ such that
Therefore, in view of (27), we conclude that the lemma is true.
Lemma 7. Suppose Assumption 1 is fulfilled. Then there exist constants $c_5<c_6<\infty$ and numbers $N_1,N_4$ such that, for all $n-m>N_1$ , $m>N_4$ ,
Proof. Let $A_{m,n}$ and $P_{m,n}$ be as defined in Lemma 6. Define
By some easy calculation, we have $\varrho(M_m\cdots M_n)=\frac12(Q_{m,n}+\sqrt{Q_{m,n}^2+4P_{m,n}})$ .
Note that
Thus, in view of (25) we get that there exist constants $0<c^{\prime}_5<c^{\prime}_6<\infty$ and $N_1>0$, $N^{\prime}_3>k_0$ such that, for all $m>N^{\prime}_3$ and $n-m>N_1$,
Consulting (25) and (26), we have that there exists a constant $c^{\prime}>0$ such that, for all $n-m>N_1$ and $m>k_0$ ,
Taking (28) and (29) into account, we get that there exist constants $0<c^{\prime\prime}_5<c^{\prime\prime}_6<\infty$ such that, for all $m>N^{\prime}_3$ and $ n-m>N_1$ ,
For $1\le m< n$ , let L(m,n) be as in (3) and write
Also, we write H(1,n) as H(n) for simplicity.
We establish the relationship between L(n) and H(n) as follows.
Lemma 8. Suppose that Assumption 1 holds. Then there exist constants $0<c_7<c_{8}<\infty$ , $0<c_{9}<c_{10}<\infty$ and positive integers $N_5,N_6,N_7$ such that, for $n-m\ge N_5$ , $m\ge N_6$ ,
and for $n>N_7$ ,
Proof. Clearly, taking (12) and Lemma 4 together, we get (31).
For (32), first, by (31) we have that, for $m>N_6$ and $n>N_5+N_6$ ,
For $n>N_5+N_6$ , we rewrite
Since
$\sum_{j=n-N_5+1}^{n}\varrho(M_j)\cdots\varrho(M_n)\rightarrow\varrho^{N_5}+\cdots+\varrho>0$ as $n\rightarrow\infty$ . Then there exist constants $0<c_{11}<c_{12}<\infty$ and $N^{\prime}_7>0$ such that, for all $n>N^{\prime}_7$ ,
We rewrite
Recalling that, by (11), $\xi_j>0$ for all $j>0$ , in view of the fact that $\varrho(M_j)>0$ for all $j>0$ we get that $\Delta$ is a positive constant. Therefore, it follows from (31) that there exist constants $0<c_{13}<c_{14}<\infty$ such that
Taking (34), (33), (35), and (36) into consideration, we conclude that, for all $n>N_7\,:\!=\,\max\{N_5+N_6, N^{\prime}_7\}$ , (32) is true.
Lemma 9. For every n and k,
Also, for every $l>n+k$ ,
Proof. Using the Markov property and (14), we have
We thus complete the proof of the lemma.
Recalling the definitions in (30) and (3), by some easy computations we see that
and
Now we are ready to prove the main results.
Proof of Theorem 2. For $j<i$ , set $C_{j,i}=\{x\colon x\in(2^j, 2^i], x\in C\}$ and let $A_{j,i}=|C_{j,i}|$ be the cardinality of the set $C_{j,i}$ . On the event $\{A_{m,m+1}>0\}$ , let $l_m=\max\{k\colon k\in C_{m,m+1}\}$ be the largest regeneration time in $C_{m,m+1}$ . Then, for $m\ge1$ we have
Fix $2^{m}+1\le n\le 2^{m+1}$ and $2^{m-1}+1\le i\le n$ . Using Lemma 9 and the Markov property, we get that
It follows from Lemma 3 that for fixed $\varepsilon>0$ there exists a constant $K_1>0$ such that, for all $i>K_1$ and $ n-i> K_1$ ,
Recall that $\xi_k>0$ , $k\ge 1$ by (11). Then, by some easy computation, we have
It follows from (31) that, for all $i> N_6$ and $j-i\ge N_5+2$ ,
Thus, for all $i> N_6$ ,
Then, under the assumption $\varrho(M_{i+1})\ge 1$ , we have
On the other hand, since $\xi_n\rightarrow{\varrho_1}/({a\theta-bd})\,=\!:\,\xi>0$ as $n\rightarrow\infty$ , there exists a constant $K_2$ such that, for all $n\ge K_2$ , $\xi_n<\xi+1$ . As a result, for all $i>K_2$ ,
Consulting (43), (44), and (45), we have, for all $i>K_3\,:\!=\,\max\{K_2,N_6\}$ ,
In view of (41), we have thus shown that, for all $i\ge K\,:\!=\,\max\{K_3,K_1\}$ and $n-i> K_1$ ,
Taking this and (40) together, we get, for $m>\log{K}$ (which implies $2^{m-1}+1>K$ ),
Substituting this into (39), using Lemma 9 and (42), we see that
Note that under the condition in case (i), $\sum_{n=2}^\infty{1}/({L(n)\log n})<\infty$. Thus, it follows from (32) that the right-hand side of (46) is finite, and so is $\sum_{m=K+1}^{\infty}P(A_{m,m+1}>0)$. Applying the Borel–Cantelli lemma, we conclude that, with probability 1, at most finitely many of the events $\{A_{m,m+1}>0\}$, $m\ge1$, occur, which completes the proof of the first part of Theorem 2.
Next, we turn to the second part. Suppose there exists some $\delta>0$ such that $L(n)\le \delta n\log n$ for n large enough and $\sum_{n=2}^\infty{1}/({L(n)\log n})=\infty$ .
We also use the Borel–Cantelli lemma to prove the result in this case, but here we need to estimate not only the sum of the $P(A_j)$ but also the sum of the $P(A_jA_l)$.
First, let us study the probability $P(A_j)$. For $j\ge1$, let $n_j=[j\log j]$ be the integer part of $j\log j$ and set $A_j=\{n_j\in C_k\}$. For fixed $\varepsilon>0$, in view of Lemma 3 there exist constants $L_1>k_0$ and $L_2$ such that, for all $n-k\ge L_1$ and $k\ge k_0$,
and, for all $m>L_2$ ,
Notice that the sequence $a_n+b_n$ , $n\ge0$ , is bounded away from 0 and positive. Then, taking (48) and Lemma 9 into account, we obtain
Under the assumption $\varrho(M_j)\ge 1$ for all $j\ge1$, we see from (37) that L(n) is increasing. Applying [Reference Csáki, Földes and Révész6, Lemma 2.2], we conclude that $\sum_{j=2}^{\infty}{1}/({L([j\log j])})$ and $\sum_{j=2}^{\infty}{1}/({L(j)\log j})$ converge or diverge simultaneously. As a result, it follows from (49) and Lemma 8 that
Second, we study the probability $P(A_jA_l)$. Define $\mathcal C_k = \{(j,l)\colon 2\le j< l,\ l\log l > j\log j+k\}$. Note that when $j>\mathrm{e}^{\max\{L_1,L_2\}+k}$ and $l\ge j+1$, we have $n_l-n_j-k> L_1$ and $n_l>L_2$. It thus follows from Lemma 9 and (47) that, for the $\varepsilon$ above, when $(j,l)\in \mathcal C_k$ satisfies $j>\mathrm{e}^{\max\{L_1,L_2\}+k}$,
Then, taking (38) and the fact $\log(1-x)\le -x$ for all $0<x<1$ into account, we have
For the above $\varepsilon>0$ , let
Obviously, for $l\ge\ell$ , $\big(1-\exp\big\{{-}\sum_{i=n_j+k-1}^{n_l}{1}/{H(i)}\big\}\big)^{-1}\le 1+\varepsilon$ . Thus, it follows from (51) that
Next, suppose $l<\ell$ . Note that for $0<u<\log(({1+\varepsilon})/{\varepsilon})$ we have $1-\mathrm{e}^{-u}\ge cu$ for some $c\,:\!=\,c(\varepsilon)>0$ small enough. Then, in view of (51) and (32), we have
Since L(n) is increasing, we have
for all $j\ge N_7$ . Again by (32), we have
Therefore, taking (53) and (48) into account, we get that for $(j,l)\in\mathcal C_k$ satisfying $\ell\ge l\ge j+1$ and $j\ge \max\{N_7, \mathrm{e}^{\max\{L_1,L_2\}+k}\}\,=\!:\,M$ ,
Consequently, for $j\ge M$ ,
Recall that
and $L(n)\le \delta n\log n$ for some $\delta>0$ and n large enough. We claim that, if j is large enough, then $\ell<j^{\gamma}$. (56)
Suppose on the contrary that $\ell \ge j^\gamma$ . Then, for $j>N_7$ ,
for j large enough. Since $\gamma > (({1+\varepsilon})/{\varepsilon})^{c_{10}\delta}+\varepsilon$ , we have
which contradicts (55). This proves (56).
Applying (56) and (54), we obtain, for j large enough,
Taking this together with (52), we conclude that, for some $j_0>0$ ,
Therefore, taking (50) into account, we have
An application of the Borel–Cantelli lemma [Reference Petrov22, p. 235] yields
Since $\varepsilon>0$ is arbitrary, we can conclude that $P(A_j,\,j\ge 1\text{ occur infinitely often})=1$ . So, the second part of the theorem is proved.
5. Examples
For $n\ge1$, let $q_{n},p_n>0$ be numbers such that $q_{n}+p_n=1$. Suppose that $\mathbf{Z}_n$, $n\ge0$, is a two-type branching process with immigration satisfying $\mathbf{Z}_0=\mathbf{0}$, with a fixed immigration of ${\mathbf{e}_1}$ in each generation and with offspring distributions
Some computation yields the mean matrix
with $b_n={q_{n}}/{p_{n}}$ , $n\ge1$ . Then $\varrho(M_k)=(b_k+\sqrt{b_k^2+4b_k})/2$ .
Fix $B\ge 0$ . Set $i_0\,:\!=\,\min\big\{i\colon {B}/{3i}<\frac23 \big\}$ and let
Theorem 3. Fix $B\ge 0$. If $B\ge 1$, then $\{\mathbf{Z}_n\}$ has finitely many regeneration times, almost surely. If $B<1$, then $\{\mathbf{Z}_n\}$ has infinitely many k-strong regeneration times, almost surely.
Proof. For $B\ge0$ , let $r_n={B}/{3n}$ . It is easy to see that $\lim_{n\rightarrow\infty}n^2(r_n-r_{n+1})={B}/{3}$ . Thus, by some computation, we obtain
which implies that $\sum_{k=1}^\infty|b_{k+1}-b_k|<\infty$ . Since $p_k=\frac{2}{3}- r_k$ , $k\ge 1$ , we see that $b_k\ge\frac12$ . As a result, $\varrho(M_k)\ge 1$ . Thus, the conditions in Theorem 2 are fulfilled.
By a Taylor expansion of $\varrho(M_k)$ in $r_k$ at 0, we get $\varrho(M_k)=1+ 3r_k+O(r_k^2)$ as $k\rightarrow\infty$. Then, by Euler’s asymptotic formula for the harmonic series, we get that
When $B>1$, $c\int_{k_0}^n{1}/{x^B}\,\mathrm{d}x$ is convergent, and so is $\sum_{k=k_0}^n\varrho(M_1)^{-1}\cdots\varrho(M_k)^{-1}$ by (57). Then it follows from (57) that
as $n\rightarrow\infty$, which implies that $\sum_{n=2}^{\infty}1/({L(n)\log n})<\infty$. So the condition of Theorem 2(i) is satisfied, and hence the branching process has finitely many regeneration times almost surely.
When $B\le 1$, $\int_{i_0}^n{1}/{x^B}\,\mathrm{d}x$ is divergent, and so is $\sum_{k=k_0}^n\varrho(M_1)^{-1}\cdots\varrho(M_{k-1})^{-1}$; moreover, $\sum_{k=k_0}^n\varrho(M_1)^{-1}\cdots\varrho(M_{k-1})^{-1}\sim \int_{i_0}^n{1}/{x^B}\,\mathrm{d}x$ by (57), while
In view of (57), we thus have
So if $B=1$ , then $\sum_{n=2}^\infty{1}/({L(n)\log n})<\infty$ . Applying Theorem 2(i), we can see that the branching process has finitely many regeneration times almost surely.
Now assume $B<1$. It follows from (59) that $\sum_{n=2}^{\infty}1/({L(n)\log n})=\infty$ and $L(n)\le cn$, so that $L(n)\le \delta n\log n$ for n large enough. Then, by Theorem 2(ii), we conclude that the process has infinitely many k-strong regeneration times almost surely.
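The dichotomy in Theorem 3 can be seen numerically from the closed form $\varrho(M_k)=(b_k+\sqrt{b_k^2+4b_k})/2$. In the sketch below (ours; we take $p_k=\frac23-{B}/({3k})$, capping $p_k$ at $\frac23$ for the first few k as a stand-in for the truncation via $i_0$), the partial sums of $\sum_{n\ge2}{1}/({L(n)\log n})$ stabilize for $B\ge1$ and keep growing for $B<1$:

```python
import math

def rho_k(k, B):
    """rho(M_k) = (b_k + sqrt(b_k^2 + 4*b_k))/2 with b_k = q_k/p_k and
    p_k = 2/3 - B/(3k); for the few k with B/(3k) >= 2/3 we use p_k = 2/3
    (our stand-in for the truncation via i_0 in the text)."""
    r = B / (3 * k)
    p = 2.0 / 3.0 - r if r < 2.0 / 3.0 else 2.0 / 3.0
    b = (1.0 - p) / p
    return (b + math.sqrt(b * b + 4.0 * b)) / 2.0

for B in (0.5, 1.0, 2.0):
    Ln, s = 1.0, 0.0
    for n in range(1, 2 * 10**5 + 1):
        Ln = 1.0 + rho_k(n, B) * Ln          # L(n) = 1 + rho(M_n) L(n-1)
        if n >= 2:
            s += 1.0 / (Ln * math.log(n))
    print(f"B={B}: L(N)={Ln:.3e}, partial sum={s:.4f}")
```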
Remark 5. Let $\{\mathbf{Y}_n\}_{n\ge0}$ be a branching process in varying environments with $\mathbf{Y}_0=\mathbf{e}_1$ that shares the same branching mechanism as $\{\mathbf{Z}_n\}_{n\ge0}$ in this section. By the results in [Reference Wang27, Theorem 1], we can see that the tail probability of the survival time $\nu$ of the process $\{\mathbf{Y}_n\}_{n\ge0}$ satisfies
When $B=1$, it follows from (58) that $P(\nu>n)\sim{c}/({n\log n})\rightarrow0$, so $\{\mathbf{Y}_n\}_{n\ge0}$ becomes extinct in this situation. We thus conclude that $\{\mathbf{Z}_n\}_{n\ge0}$ must have a regeneration time. But, by Theorem 3, in this case the process $\{\mathbf{Z}_n\}_{n\ge0}$ has only finitely many regeneration times. Such a phenomenon never happens for time-homogeneous branching processes: by time homogeneity, if such a process has one regeneration time, it must have infinitely many.
When there are infinitely many regeneration times, we establish the following asymptotics for the number of regeneration times in [0,n].
Theorem 4. Fix $0\le B<1$ . Then
and, for any $\varepsilon>0$ ,
almost surely.
Remark 6. Notice that Theorem 4 contains the case $B=0$ . In this case, $p_i\equiv \frac23$ and $\varrho{(M_i)}\equiv 1$ for all $i\ge1$ , so that $\{\mathbf{Z}_n\}$ is indeed a critical Galton–Watson process with immigration. As shown by Theorem 4, it seems that, up to multiplication by a positive constant, the value of $0\le B<1$ does not affect the order of the number of regeneration times in [0,n].
Proof. Let $S_n=\#\{k\colon k\in C\cap[0,n]\}$ . By (14), we can see that
Consulting (16) and (32), there exist positive constants $C_1$ and $C_2$ such that
It follows from (59) that when $B<1$ , $L(n)\sim cn$ . As a result,
We thus get (60).
Now we turn to the second part. Noticing that $S_n$ is positive and nondecreasing, by (62) we have $E(\max_{1\le k\le n}S_k) = E(S_n) < \sum_{1\le i\le n}{c}/{i}$ . We know that, for each $\varepsilon>0$ , $\sum_{i=2}^{\infty}1/({i(\log i)^{1+\varepsilon}})<\infty$ . Therefore, by [Reference Fazekas and Klesov8, Theorem 2.1], we have
almost surely as $n\rightarrow\infty$ , which completes the proof of (61).
Acknowledgements
The authors would like to thank the two referees, who read the paper carefully and gave very good suggestions and comments that helped to improve the paper considerably.
Funding information
This project was supported by the National Natural Science Foundation of China (Grant Nos. 12271043 and 12001558) and the Natural Science Foundation of the Anhui Educational Committee (Grant No. 2023AH040025).
Competing interests
There were no competing interests to declare that arose during the preparation or publication of this article.