
First-passage time for Sinai’s random walk in a random environment

Published online by Cambridge University Press:  29 November 2024

Wenming Hong*
Affiliation:
Beijing Normal University
Mingyang Sun*
Affiliation:
Beijing Normal University
*Postal address: School of Mathematical Sciences & Laboratory of Mathematics and Complex Systems, Beijing Normal University, Beijing 100875, P.R. China.

Abstract

We investigate the tail behavior of the first-passage time for Sinai’s random walk in a random environment. Our method relies on the connection between Sinai’s walk and branching processes with immigration in a random environment, and on the analysis of some key quantities of these branching processes, such as the extinction time, the maximum population, and the total population.

Type
Original Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction and results

Random walks in a random environment (RWRE, for short) model the displacement of a particle in an inhomogeneous medium. We are concerned with nearest-neighbor RWRE on $\mathbb{Z}$ , in which case the space of environments may be identified with $\Omega=[0,1]^{\mathbb{Z}}$ , endowed with the cylindrical $\sigma$ -field $\mathcal{F}$ . Environments $\omega=\{\omega_x\}_{x\in\mathbb{Z}}\in \Omega$ are chosen according to a probability measure P on $(\Omega,\mathcal{F})$ . Given the value of $\omega$ , we define $\{X_n\}_{n\ge 0}$ as a random walk in a random environment, which is a Markov chain whose distribution is denoted by $\textrm{P}_\omega$ and called the quenched law. The transition probabilities of $\{X_n\}_{n\ge 0}$ are as follows: $X_0=0$ and, for $n\ge 0$ and $x\in\mathbb{Z}$ , $\textrm{P}_\omega(X_{n+1}=x+1\mid X_n=x) = \omega_x = 1-\textrm{P}_\omega(X_{n+1}=x-1\mid X_n=x)$ .
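To make the quenched law concrete, here is a minimal simulation sketch (our illustration, not part of the paper): it draws an i.i.d. uniformly elliptic environment consistent with Assumption 1.1 below (the uniform choice of environment law and all function names are assumptions of the sketch) and runs one quenched trajectory.

```python
import random

def sample_environment(radius, beta=0.1):
    """Draw omega_x for |x| <= radius, i.i.d. uniform on [beta, 1 - beta]."""
    return {x: random.uniform(beta, 1.0 - beta) for x in range(-radius, radius + 1)}

def quenched_walk(omega, n_steps):
    """Sample X_0 = 0 and P_omega(X_{k+1} = x + 1 | X_k = x) = omega_x."""
    path = [0]
    for _ in range(n_steps):
        x = path[-1]
        path.append(x + 1 if random.random() < omega[x] else x - 1)
    return path

random.seed(1)
omega = sample_environment(radius=5_000)   # enough sites for 5,000 steps
path = quenched_walk(omega, n_steps=5_000)
print(min(path), max(path))                # the walk tends to linger in valleys
```

Averaging such runs over freshly drawn environments corresponds to the annealed law defined below.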

Let $\mathbb{Z}^{\mathbb{N}}$ be the space for the paths of the random walk $\{X_n\}_{n\ge 0}$ , and $\mathcal{G}$ denote the $\sigma$ -field generated by the cylinder sets. Note that for each $\omega\in\Omega$ , $\textrm{P}_\omega$ is a probability measure on $(\mathbb{Z}^{\mathbb{N}},\mathcal{G})$ , and for each $G\in\mathcal{G}$ , $\textrm{P}_\omega(G)\colon(\Omega,\mathcal{F})\to[0,1]$ is a measurable function of $\omega$ . Thus, the annealed law for the random walk in a random environment $\{X_n\}_{n\ge 0}$ is defined by

\[\mathbb{P}(F\times G)=\int_{F}\textrm{P}_\omega(G)P(\textrm{d}\omega ),\quad F\in\mathcal{F},\, G\in\mathcal{G}.\]

For ease of notation, we will use $\mathbb{P}$ to refer to the marginal on the space of environments or paths, i.e. $\mathbb{P}(F)=\mathbb{P}(F\times \mathbb{Z}^{\mathbb{N}})$ for $F\in \mathcal{F}$ , and $\mathbb{P}(G)=\mathbb{P}(\Omega \times G)$ for $G\in \mathcal{G}$ . Expectations under the law $\mathbb{P}$ will be written $\mathbb{E}$ .

Throughout the paper, we will make the following assumptions.

Assumption 1.1. The environment $\omega=\{\omega_x\}_{x\in\mathbb{Z}}$ is a sequence of independent and identically distributed (i.i.d.) random variables and is uniformly elliptic, i.e. there exists a constant $0<\beta<\frac12$ such that $\mathbb{P}(\beta\le\omega_0\le1-\beta)=1$.

Assumption 1.2.

(1.1) \begin{align} & \mathbb{E}\bigg[\log\bigg(\frac{1-\omega_0}{\omega_0}\bigg)\bigg] = 0, \end{align}
(1.2) \begin{align} & \sigma^2 \;:\!=\; \textrm{Var}\bigg[\log\bigg(\frac{1-\omega_0}{\omega_0}\bigg)\bigg] \in (0,\infty). \end{align}

Assumption 1.1 is a commonly adopted technical condition that implies that, $\mathbb{P}$-almost surely ($\mathbb{P}$-a.s.),

(1.3) \begin{equation} \bigg|\log\bigg(\frac{1-\omega_0}{\omega_0}\bigg)\bigg| \le \log\bigg(\frac{1-\beta}{\beta}\bigg) \;=\!:\; M_1.\end{equation}

Condition (1.1) ensures, according to [19], that $\{X_n\}_{n\ge 0}$ is recurrent, i.e. $\mathbb{P}$-a.s.,

(1.4) \begin{equation} \liminf_{n\to\infty}X_n = -\infty, \qquad \limsup_{n\to\infty}X_n=+\infty.\end{equation}

Finally, condition (1.2) simply excludes the case of a usual homogeneous random walk.

Recurrent RWRE is well known for its slowdown phenomenon. Indeed, under Assumptions 1.1 and 1.2, it was proved by Sinai in [18] that $X_n/(\log n)^2$ converges in distribution to a non-degenerate limit. The rate $(\log n)^2$ is in complete contrast with the typical magnitude of order $\sqrt{n}$ for a usual simple symmetric random walk. Recurrent RWRE will thus be referred to as Sinai’s walk. A lot more is known about this model; we refer to the survey [21] for limit theorems, large-deviation results, and further references.
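Reusing the helpers from the sketch above, one can illustrate the slowdown numerically: the symmetric environment law chosen there satisfies (1.1), and $|X_n|$ is then typically of order $(\log n)^2$. This is only a rough, hedged illustration, since the effect emerges slowly in n.

```python
import math, random

random.seed(2)
n = 200_000
omega = sample_environment(radius=n)     # helpers from the previous sketch
path = quenched_walk(omega, n_steps=n)
print(abs(path[-1]) / math.log(n) ** 2)  # typically of order 1
print(abs(path[-1]) / math.sqrt(n))      # typically noticeably smaller for large n
```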

In this paper, we are interested in the persistence probability of the random walk in a random environment. More precisely, we define the first-passage time for $\{X_n\}_{n\ge 0}$ as follows:

\begin{equation*} \sigma_x \;:\!=\; \inf\{n\ge 0\colon x+X_n<0\}, \quad x\in\mathbb{N},\end{equation*}

which is a.s. finite for any $x\in\mathbb{N}$ due to (1.4). It is natural to consider the asymptotic behavior of $\mathbb{P}(\sigma_x>n)$ as $n\to\infty$, which is the so-called persistence probability. The study of first-passage times for random walks is a classical theme in probability theory. When $\{X_n\}_{n\ge 0}$ is a homogeneous random walk, the following elegant result [10, 17] is deduced from the famous Wiener–Hopf factorization: if $\lim_{n\to\infty}\mathbb{P}(X_n>0)=\rho\in(0,1)$, then, for every fixed $x \ge 0$,

(1.5) \begin{equation} \mathbb{P}(\sigma_x>n) \sim V(x)n^{\rho-1}l(n) \quad \mbox{as}\ n\to\infty,\end{equation}

where V(x) denotes the renewal function corresponding to the descending ladder height process and l(n) is a slowly varying function at infinity. Recent progress has been made for random walks with non-identically distributed increments, integrated random walks, and more general Markov walks; see, for example, [7–9, 13]. The tail behavior of first-passage times for these models is derived via a strong coupling method and based on the existence of harmonic functions.

For a random walk in an i.i.d. random environment, the persistence probability for $x=0$ has also been known for a long time.

Theorem 1.1. ([3].) Under Assumptions 1.1 and 1.2, there exists a positive constant C such that, as $n\to \infty$, $\mathbb{P}(\sigma_0>n) \sim {C}/{\log n}$.

This result is based on the connection between $\sigma_0$ and the total population of a branching process in a random environment (BPRE). Recently, [5] studied the tail behavior of $\sigma_0$ for a random walk in a correlated environment, and directly calculated upper and lower bounds for $\mathbb{P}(\sigma_0>n)$ with an error term that is slowly varying at infinity.

It is known that when $\{X_n\}_{n\ge 0}$ is a Markov process, the asymptotics of $\mathbb{P}(\sigma_x>n)$ do not drastically depend on x [6], i.e. $\mathbb{P}(\sigma_x>n) \asymp \mathbb{P}(\sigma_0>n)$ for any $x\ge 0$. However, under the annealed law the RWRE is not a Markov process, since the past history gives information about the environment. In this paper, we are concerned with the persistence probability of an RWRE for any fixed $x\in\mathbb{N}$, i.e. the asymptotic behavior as $n\to\infty$ of

\[\mathbb{P}(\sigma_x>n) = \mathbb{P}\Big(\min_{k\le n}X_k\ge -x\Big).\]
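Before stating the main result, here is a crude annealed Monte Carlo sketch of this probability (ours, with illustrative names and parameters): each sample draws a fresh environment together with a fresh walk, which is exactly how the annealed law mixes P and $\textrm{P}_\omega$. The $1/\log n$ decay stated below is so slow that the sketch can exhibit it only qualitatively.

```python
import random

def sigma_exceeds(x, n, beta=0.1):
    """One annealed sample: does x + X_k stay >= 0 for all k <= n?"""
    omega, pos = {}, 0
    for _ in range(n):
        w = omega.setdefault(pos, random.uniform(beta, 1.0 - beta))
        pos += 1 if random.random() < w else -1
        if x + pos < 0:
            return False
    return True

random.seed(3)
trials = 1_000
for n in (100, 1_000, 10_000):
    est = sum(sigma_exceeds(2, n) for _ in range(trials)) / trials
    print(n, est)   # Theorem 1.2: est is asymptotically C(2) / log n
```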

The main result of this paper can be stated as follows.

Theorem 1.2. Under Assumptions 1.1 and 1.2, for any $x\in\mathbb{N}$ there exists a positive constant C(x) such that, as $n\to \infty$ , $\mathbb{P}(\sigma_x>n) \sim {C(x)}/{\log n}$ .

Remark 1.1. It is well known that the constant C(x) in the persistence probability is a harmonic function for a wide class of Markov processes; see, e.g., (1.5). However, we cannot expect the harmonic property of C(x) in Theorem 1.2, since the RWRE is not a Markov process under $\mathbb{P}$ . Nonetheless, this constant dependent on x can be explicitly formulated as follows:

\begin{equation*} C(x) = \sigma\sqrt{\frac{\pi}{2}}\sum_{k=0}^{x}\tilde{c}_k,\quad x\in\mathbb{N}, \end{equation*}

where $\tilde{c}_k$, $k\ge 0$, are positive constants; see (3.20). Our method is a generalization of the arguments in [3] that relate the first-passage time $\sigma_x$ to the total population of a branching process with immigration in a random environment (BPIRE). In particular, C(0) equals the constant C in Theorem 1.1.

The rest of the paper is organized as follows. In Section 2, we first recall the well-known connection between Sinai’s walks and critical branching processes with immigration in a random environment, then study some important quantities of these branching processes that imply Theorem 1.2 as a corollary. In Section 3 we introduce a change of measure by means of the associated random walk, which plays an important role in the study of BPIREs, and then prove Theorem 2.1. Section 4 contains some useful conditioned limit results that may be of independent interest, and the proof of Theorem 2.2.

2. Connection with BPIREs

We first recall the connection of random walks in a random environment with branching processes with immigration in a random environment (see, e.g., [3, 15]), and study some important quantities of BPIREs. For any fixed $x\in\mathbb{N}$, we consider a process defined by the upcrossings of $\{X_n\}_{n\ge 0}$,

\begin{equation*} Z_n^x \;:\!=\; \#\{k<\sigma_x\colon X_k=n-x-1,\ X_{k+1}=n-x\}, \quad n\ge 0.\end{equation*}

In other words, $Z_n^{x}$ is the number of steps from $n-x-1$ to $n-x$ made by the RWRE $\{X_n\}_{n\ge 0}$ before reaching the site below $-x$ .

Another description is as follows: for $n\ge 0$ and $i\ge 1$, let $\xi_{i,n}$ be the number of steps $(n-x\to n-x+1)$ between the ith and the $(i+1)$th steps $(n-x-1\to n-x)$. Observe that, given the value of $\omega$, $\{\xi_{i,n}\}_{i\ge 1}$ are i.i.d. geometrically distributed random variables with generating function

\[f_n(s)=\frac{1-\omega_{n-x}}{1-\omega_{n-x}s}, \quad n\ge 0,\]

and $\{Z_n^{x}\}_{n\ge 0}$ satisfies the following recursion:

\begin{equation*} Z_{0}^x=0, \qquad Z_{n+1}^x = \left\{ \begin{array}{ll} \displaystyle\sum_{i=1}^{Z_{n}^x+1}\xi_{i,n}, &\qquad 0 \le n \le x, \\[5mm] \displaystyle\sum_{i=1}^{Z_{n}^x}\xi_{i,n}, &\qquad n > x. \end{array} \right.\end{equation*}

Therefore, the process $\{Z_n^{x}\}_{n\ge 0}$ evolves as a branching process in a random environment with one immigrant per generation up to the xth generation. Note that we can reformulate the first-passage time $\sigma_x$ of the RWRE $\{X_n\}_{n\ge 0}$ in terms of the total population size of $\{Z_n^{x}\}_{n\ge 0}$, i.e.

(2.1) \begin{equation} \sigma_x = 1+x+2\sum_{k=0}^{\infty}Z_{k+1}^x.\end{equation}
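Identity (2.1) has a short combinatorial explanation: every right step taken before $\sigma_x$ is an upcrossing counted by exactly one $Z_{k+1}^x$, so $R \;:\!=\; \sum_{k\ge0}Z_{k+1}^x$ is the total number of right steps, while the number of left steps is $R+x+1$ (the walk ends at $-x-1$), giving $\sigma_x = 2R + x + 1$. The hedged sketch below (our illustration, uniform environment law as before) checks this on simulated paths.

```python
import random

def crossing_check(x, beta=0.1, cap=10**7):
    """One annealed walk run until sigma_x (or a step cap); return
    (sigma_x, 1 + x + 2 * R), where R counts the right steps taken."""
    omega, pos, steps, right = {}, 0, 0, 0
    while x + pos >= 0 and steps < cap:
        w = omega.setdefault(pos, random.uniform(beta, 1.0 - beta))
        if random.random() < w:
            pos, right = pos + 1, right + 1
        else:
            pos -= 1
        steps += 1
    return steps, 1 + x + 2 * right

random.seed(4)
for _ in range(5):
    lhs, rhs = crossing_check(x=2)
    if lhs < 10**7:      # identity (2.1) applies once the walk has crossed -x
        print(lhs, rhs)  # the two numbers coincide
```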

The properties of BPIREs are closely related to the so-called associated random walk $\{S_n\}_{n\ge 0}$ constituted by the logarithmic mean offspring number, which is defined as follows:

\begin{equation*} S_0=0, \qquad S_{n+1}-S_{n} = \log\textrm{E}_{\omega}[\xi_{1,n}] = \log\bigg(\frac{\omega_{n-x}}{1-\omega_{n-x}}\bigg), \quad n\ge 0.\end{equation*}

Then, (1.1) and (1.2) in Assumption 1.2 are respectively equivalent to

(2.2) \begin{equation} \mathbb{E}[S_1]=0, \qquad \mathbb{E}[S_1^2]=\sigma^2\in(0,\infty).\end{equation}

For a systematic study of branching processes in random environments under the conditions in (2.2), we refer to [14].

Our goal in this section is to estimate some important quantities of $\{Z_n^{x}\}_{n\ge 0}$ , such as the tail distributions of its extinction time, of its maximum population, and of its total population; then Theorem 1.2 can be easily inferred.

Theorem 2.1. For any $x\in\mathbb{N}$ , let $T_x=\inf\{n>x\colon Z_n^x=0\}$ be the extinction time of $\{Z_n^{x}\}_{n\ge 0}$ . Then, under Assumptions 1.1 and 1.2, there exists a positive constant c(x) such that, as $n\to \infty$ , $\mathbb{P}(T_x>n) \sim {c(x)}/{\sqrt{n}}$ , where $c(x)=\sum_{k=0}^{x}\tilde{c}_k$ ; see (3.20) for an explicit expression for $\tilde{c}_k$ .

Theorem 2.2. Under Assumptions 1.1 and 1.2, if we write $C(x) \;:\!=\; c(x)\cdot\sigma\sqrt{\pi/2}$ for any $x\in\mathbb{N}$ , then, as $n\to \infty$ , $\mathbb{P}\big(\sup_{k\ge 0}Z_k^{x}>n\big) \sim {C(x)}/{\log n}$ and

(2.3) \begin{equation} \mathbb{P}\Bigg(\sum_{k=0}^{\infty}Z_k^{x}>n\Bigg) \sim \frac{C(x)}{\log n}. \end{equation}

Proof of Theorem 1.2. Combining (2.1) and (2.3), we get that, as $n\to \infty$ ,

\[ \mathbb{P}(\sigma_x>n) = \mathbb{P}\Bigg(\sum_{k= 0}^{\infty}Z_k^{x}>\frac{n-x-1}{2}\Bigg) \sim \frac{C(x)}{\log(n-x-1)-\log 2} \sim \frac{C(x)}{\log n}. \]

Thus, the proof is completed.

3. Survival probability

3.1. Change of measure

In this section, we introduce a new measure $\mathbb{P}^+$ under which the associated random walk $\{S_n\}_{n\ge 0}$ is conditioned to stay positive. The strict descending ladder epochs are defined recursively as follows:

(3.1) \begin{equation} {\tau}_0 = 0, \qquad {\tau}_n = \inf\{k > {\tau}_{n-1}\colon {S}_k < {S}_{{\tau}_{n-1}}\}, \quad n\ge 1.\end{equation}

Let U(x) denote the renewal function associated with $\{-S_{\tau_n}\}_{n\ge 0}$, which is a positive function defined by $U(x) = \sum_{n\ge 0}\mathbb{P}(-S_{\tau_n} \le x)$, $x\ge 0$. It is well known that U is harmonic for the sub-Markov process obtained by killing $\{S_n\}_{n\ge 0}$ when it enters the negative half-line [20], i.e.

\begin{equation*} U(x) = \mathbb{E}[U(x+S_1);\ x+S_1 \ge 0],\quad x\ge 0.\end{equation*}

Applying this harmonic property of U, we introduce a sequence of probability measures $\{\mathbb{P}_{(n)}^+\colon n\ge 1\}$ on the $\sigma$-fields $\mathcal{A}_n$ generated by $\{\omega_i\colon-x\le i<n-x\}$ and $\{Z_i^x\colon i\le n\}$ by means of Doob’s h-transform, i.e. $\textrm{d}\mathbb{P}_{(n)}^+ \;:\!=\; U(S_n)\mathbf{1}_{\{\tau_1>n\}}\textrm{d}\mathbb{P}$. This and Kolmogorov’s extension theorem show that, on a suitable probability space, there exists a probability measure $\mathbb{P}^+$ on the $\sigma$-field $\mathcal{A}=\sigma\big(\bigcup_{n\ge 1}\mathcal{A}_n\big)$ (see [4, 14] for more details) such that $\mathbb{P}^+|_{\mathcal{A}_n} = \mathbb{P}_{(n)}^+$, $n\ge 1$. Under the new measure $\mathbb{P}^+$, the sequence $\{S_{n}\}_{n\ge 0}$ is a Markov chain with state space $[0,\infty)$, called a random walk conditioned to stay positive; this terminology is justified by the following convergence result (see [4, Lemma 2.5]).

Lemma 3.1. Assume that condition (2.2) is valid. Let $Y_1,Y_2,\ldots$ be a uniformly bounded sequence of real-valued random variables adapted to the filtration $\mathcal{A}$ such that the limit $Y_{\infty}\;:\!=\; \lim_{n\to\infty}Y_n$ exists $\mathbb{P}^+$ -a.s. Then $\lim_{n\to\infty}\mathbb{E}[Y_n\mid\tau_1>n]=\mathbb{E}^+[Y_{\infty}]$ .

3.2. Proof of Theorem 2.1

Proof. Let $Z_{i,j}$ denote the number of individuals in the ith generation that are descendants of the single immigrant joining the jth generation of the process, $i\ge j\ge 0$. Clearly, $\{Z_{i,j}\colon i\ge j+1\}$ forms a BPRE (with $Z_{j,j}$ equal to 0 rather than 1). It is known (see, e.g., [14, Chapter 1]) that, for $i\ge j+1$,

\begin{equation*} \textrm{E}_{\omega}\big[s^{Z_{i,j}}\big] = 1 - \frac{a_j}{a_{i}(1-s)^{-1}+b_i - b_j}, \end{equation*}

where $a_n=\exp(-S_n)$ , $b_0=0$ , and $b_n=\sum_{i=0}^{n-1}a_{i}$ , $n\ge 1$ . Then we can decompose $Z_n^x$ as an independent sum under the quenched law for $n>x$ : $Z_n^x=Z_{n,0}+Z_{n,1}+\cdots+Z_{n,x}$ . By the equality

\begin{equation*} 1-\frac{a_j}{a_{n}(1-s)^{-1}+b_n - b_j} = \frac{a_{n}(1-s)^{-1}+b_n - b_{j+1}}{a_{n}(1-s)^{-1}+b_n - b_j}, \end{equation*}

it follows that

(3.2) \begin{align} g_n(s) \;:\!=\; \textrm{E}_{\omega}[s^{Z_n^x}] = \prod_{j=0}^{x}\textrm{E}_{\omega}\big[s^{Z_{n,j}}\big] & = \prod_{j=0}^{x}\bigg(1-\frac{a_j}{a_{n}(1-s)^{-1}+ b_n - b_j}\bigg) \nonumber \\[5pt] & = \frac{a_{n}(1-s)^{-1}+b_n - b_{x+1}}{a_n(1-s)^{-1}+b_n} \nonumber \\[5pt] & = 1 - \frac{b_{x+1}}{a_n(1-s)^{-1}+b_n}. \end{align}

In particular, taking $s=0$ in (3.2), we get, for $n>x$ ,

(3.3) \begin{equation} \textrm{P}_{\omega}(T_x > n) = \textrm{P}_{\omega}(Z_n^x > 0) = \frac{b_{x+1}}{a_n+b_n} = \frac{\sum_{i=0}^{x}{\textrm{e}}^{-S_i}}{\sum_{i=0}^{n}{\textrm{e}}^{-S_i}}. \end{equation}
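Formula (3.3) is easy to test numerically: fix one environment, evaluate the closed form, and compare with a direct Monte Carlo simulation of the BPIRE recursion from Section 2. The sketch below is ours (illustrative environment law and small parameters), not part of the proof.

```python
import math, random

random.seed(5)
x, n, beta = 1, 20, 0.3
omega = [random.uniform(beta, 1.0 - beta) for _ in range(n)]  # omega[k] plays omega_{k-x}

# Closed form (3.3): S_0 = 0, S_{k+1} - S_k = log(omega_{k-x} / (1 - omega_{k-x})).
S = [0.0]
for w in omega:
    S.append(S[-1] + math.log(w / (1.0 - w)))
closed = sum(math.exp(-S[i]) for i in range(x + 1)) / sum(math.exp(-S[i]) for i in range(n + 1))

def geometric(w):
    """Offspring law P(xi = j) = w**j * (1 - w), with p.g.f. (1 - w)/(1 - w s)."""
    j = 0
    while random.random() < w:
        j += 1
    return j

def alive_at(n, x, omega):
    """Run the BPIRE recursion for n generations, with immigration up to generation x."""
    z = 0
    for k in range(n):
        z = sum(geometric(omega[k]) for _ in range(z + (1 if k <= x else 0)))
    return z > 0

trials = 2_000
mc = sum(alive_at(n, x, omega) for _ in range(trials)) / trials
print(closed, mc)   # the two survival probabilities agree up to Monte Carlo error
```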

Now we are ready to prove Theorem 2.1, i.e. there exists a positive constant c(x) such that

(3.4) \begin{equation} \lim_{n\to\infty}\sqrt{n}\,\mathbb{P}(T_x>n) = \lim_{n\to\infty}\sqrt{n}\,\mathbb{E}\bigg[\frac{\sum_{i=0}^{x}{\textrm{e}}^{-S_i}}{\sum_{i=0}^{n}{\textrm{e}}^{-S_i}}\bigg] = c(x). \end{equation}

To this end, we adapt the argument that originally came from [16] and was improved in [4] via the measure change method.

For any $0\le k \le x<n$ , note that

\begin{equation*} \frac{{\textrm{e}}^{-S_k}}{\sum_{i=0}^{n}{\textrm{e}}^{-S_i}} = \frac{1}{\sum_{l=0}^{k-1}{\textrm{e}}^{S_k-S_l} + \sum_{i=0}^{n-k}{\textrm{e}}^{-(S_{k+i}-S_k)}} = \frac{1}{\sum_{l=0}^{k-1}{\textrm{e}}^{S_k-S_l} + \sum_{i=0}^{n-k}{\textrm{e}}^{-\tilde{S}_i}}, \end{equation*}

where $\tilde{S}_i = S_{k+i}-S_k$ . In view of this and (3.4), it suffices to show that, for any $0\le k\le x$ , there exists a positive constant $\tilde{c}_k$ such that

(3.5) \begin{equation} \lim_{n\to\infty}\sqrt{n}\,\mathbb{E}\Bigg[\frac{1}{\sum_{l=0}^{k-1}{\textrm{e}}^{S_k-S_l}+\sum_{i=0}^{n}{\textrm{e}}^{-\tilde{S}_i}}\Bigg] = \tilde{c}_k; \end{equation}

then Theorem 2.1 holds with $c(x)=\sum_{k=0}^{x}\tilde{c}_k$ .

Since the random walk $\{\tilde{S}_i\}_{i\ge 0}$ is independent of $\{S_l\}_{l\le k}$ and has the same distribution as $\{S_i\}_{i\ge 0}$ , it follows that

(3.6) \begin{align} \mathbb{E}\Bigg[\frac{1}{\sum_{l=0}^{k-1}{\textrm{e}}^{S_k-S_l} + \sum_{i=0}^{n}{\textrm{e}}^{-\tilde{S}_i}}\Bigg] & = \mathbb{E}\Bigg[\mathbb{E}\Bigg[\frac{1}{\sum_{l=0}^{k-1}{\textrm{e}}^{S_k-S_l}+\sum_{i=0}^{n}{\textrm{e}}^{-\tilde{S}_i}} \mid S_1,\ldots,S_k\Bigg]\Bigg] \nonumber \\[5pt] & = \int_{0}^{\infty}\mathbb{E}\Bigg[\frac{1}{y + \sum_{i=0}^{n}{\textrm{e}}^{-\tilde{S}_i}}\Bigg] \mathbb{P}\Bigg(\sum_{l=0}^{k-1}{\textrm{e}}^{S_k-S_l}\in\textrm{d}y\Bigg) \nonumber \\[5pt] & = \int_{0}^{\infty}\mathbb{E}\Bigg[\frac{1}{y + \sum_{i=0}^{n}{\textrm{e}}^{-S_i}}\Bigg] \mathbb{P}\Bigg(\sum_{l=0}^{k-1}{\textrm{e}}^{S_k-S_l}\in\textrm{d}y\Bigg). \end{align}

Recall that $\{\tau_n\}_{n\ge 0}$ are the strict descending ladder epochs of the random walk $\{S_n\}_{n\ge 0}$; see (3.1). According to [14, Theorem 4.6], there exists a constant $c_1>0$ such that $\mathbb{P}(\tau_1 > n)\sim{c_1}/{\sqrt{n}}$ as $n\to \infty$. Since the random variables $\{\tau_{i+1} - \tau_i\}_{i\ge 0}$ are i.i.d., by results on regular variation under convolution [12, p. 278], for $j\ge 1$ and as $n\to \infty$,

(3.7) \begin{equation} \mathbb{P}(\tau_j > n) \sim \sum_{i=0}^{j-1}\mathbb{P}(\tau_{i+1} - \tau_i> n) = j\,\mathbb{P}(\tau_1 > n) \sim \frac{jc_1}{\sqrt{n}\,}. \end{equation}
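The ladder-epoch tail behind (3.7) can also be seen numerically. In the sketch below (ours; standard normal increments are an illustrative choice), $\sqrt{n}\,\mathbb{P}(\tau_1>n)$ settles near a constant, which for symmetric continuous increments is the Sparre Andersen value $1/\sqrt{\pi}$.

```python
import math, random

def no_ladder_by(n):
    """True if S_k >= 0 for all k <= n, i.e. tau_1 > n."""
    s = 0.0
    for _ in range(n):
        s += random.gauss(0.0, 1.0)
        if s < 0.0:
            return False
    return True

random.seed(6)
trials = 20_000
for n in (10, 100, 1_000):
    p = sum(no_ladder_by(n) for _ in range(trials)) / trials
    print(n, p * math.sqrt(n))   # roughly constant in n (about 0.564 here)
```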

Next, we estimate the integrand in (3.6) for any fixed $y\in (0,\infty)$ . To this end, we split the range of integration into $r + 1$ parts (the proper value of r will be determined below):

\[\{{\tau}_0 \le n < {\tau}_1\},\ \{{\tau}_1 \le n < {\tau}_2\},\ \ldots,\ \{{\tau}_{r-1} \le n < {\tau}_r\},\ \{{\tau}_r \le n \}.\]

Step 1

We prove first that there exists a constant $A_0(y)$ dependent on y such that

(3.8) \begin{equation} \mathbb{E}\Bigg[\frac{1}{y + \sum_{i=0}^{n}{\textrm{e}}^{-{S}_i}};\, {\tau}_1>n\Bigg] \sim \frac{c_1 A_0(y)}{\sqrt{n}}. \end{equation}

According to [14, Lemma 5.5], $\sum_{i=0}^{\infty}{\textrm{e}}^{-S_i}<\infty$ $\mathbb{P}^+$-a.s.; then, by the fact that $0<\big(\sum_{i=0}^{n}{\textrm{e}}^{-{S}_i}\big)^{-1}\le 1$ for $n\ge 0$ and applying Lemma 3.1, we get

\begin{equation*} \lim_{n\to\infty}\mathbb{E}\Bigg[\frac{1}{y + \sum_{i=0}^{n}{\textrm{e}}^{-{S}_i}} \mid {\tau}_1>n\Bigg] = \mathbb{E}^{+}\Bigg[\frac{1}{y + \sum_{i=0}^{\infty}{\textrm{e}}^{-{S}_i}}\Bigg] \;=\!:\; A_0(y)>0. \end{equation*}

Thus, (3.8) follows from this and (3.7).

Step 2

For any $1\le j\le r-1$ , we will show that there exists a constant $A_j(y)$ dependent on y such that

(3.9) \begin{equation} \mathbb{E}\Bigg[\frac{1}{y + \sum_{i=0}^{n}{\textrm{e}}^{-{S}_i}};\,{\tau}_j\le n <\tau_{j+1}\Bigg] \sim \frac{c_1 A_j(y)}{\sqrt{n}}. \end{equation}

Due to (3.7), we have, for any $0<\delta<1$ and as $n\to\infty$ ,

\begin{align*} \mathbb{P}(\tau_{j}\le\delta n, \tau_{j+1}>n) & \ge \mathbb{P}(\tau_{j}\le\delta n, \tau_{j+1} - \tau_{j} > n) \\[5pt] & = \mathbb{P}(\tau_{j}\le\delta n)\cdot\mathbb{P}(\tau_{j+1} - \tau_{j} > n) \sim \frac{c_1}{\sqrt{n}\,}\bigg(1-\frac{jc_1}{\sqrt{\delta n}\,}\bigg), \end{align*}

which implies that

(3.10) \begin{equation} \mathbb{P}(\delta n < \tau_{j} \le n, \tau_{j+1}>n) = \mathbb{P}(\tau_{j} \le n < \tau_{j+1}) - \mathbb{P}(\tau_{j}\le\delta n, \tau_{j+1}>n) = o\bigg(\frac{1}{\sqrt{n}\,}\bigg). \end{equation}

In view of (3.10), we consider in place of (3.9) the expression

\begin{equation*} \mathbb{E}\Bigg[\frac{1}{y + \sum_{i=0}^{n}{\textrm{e}}^{-{S}_i}};\,\tau_{j}\le\delta n,\,\tau_{j+1}>n\Bigg], \quad 0<\delta<1. \end{equation*}

Let $\hat{S}_i\;:\!=\;S_{i+\tau_j}-S_{\tau_j}$, $i\ge 0$. Then, by the strong Markov property, the random walk $\{\hat{S}_i\}_{i\ge 0}$ is independent of $\{S_l\}_{l\le \tau_j}$. Since $\{\tau_{j}\le\delta n,\tau_{j+1}>n\}\subset\{\tau_{j}\le\delta n,\tau_{j+1}-\tau_{j} > (1-\delta)n\}$, and under the latter condition,

\begin{align*} \sum_{i=0}^{n}{\textrm{e}}^{-{S}_i} = \sum_{i=0}^{\tau_j-1}{\textrm{e}}^{-{S}_i} + {\textrm{e}}^{-S_{\tau_j}}\Bigg(\sum_{i=\tau_j}^{n}{\textrm{e}}^{-({S}_i-S_{\tau_j})}\Bigg) & = \sum_{i=0}^{\tau_j-1}{\textrm{e}}^{-{S}_i} + {\textrm{e}}^{-S_{\tau_j}}\Bigg(\sum_{i=0}^{n-\tau_j}{\textrm{e}}^{-\hat{S}_i}\Bigg) \\[5pt] & \ge \sum_{i=0}^{\tau_j-1}{\textrm{e}}^{-{S}_i} + {\textrm{e}}^{-S_{\tau_j}}\Bigg(\sum_{i=0}^{(1-\delta)n}{\textrm{e}}^{-\hat{S}_i}\Bigg), \end{align*}

which implies that

(3.11) \begin{align} & \mathbb{E}\Bigg[\frac{1}{y + \sum_{i=0}^{n}{\textrm{e}}^{-{S}_i}};\,\tau_{j}\le\delta n,\tau_{j+1}>n\Bigg] \nonumber \\[5pt] & \quad \le \mathbb{E}\left[\frac{1}{y + \sum_{i=0}^{\tau_j-1}{\textrm{e}}^{-{S}_i} + {\textrm{e}}^{-S_{\tau_j}}\big(\sum_{i=0}^{(1-\delta)n}{\textrm{e}}^{-\hat{S}_i}\big)};\,\hat{\tau}_1>(1-\delta)n\right] \nonumber \\[5pt] & \quad = \mathbb{E}\left[\frac{1}{y + \sum_{i=0}^{\tau_j-1}{\textrm{e}}^{-{S}_i} + {\textrm{e}}^{-S_{\tau_j}}\big(\sum_{i=0}^{(1-\delta)n}{\textrm{e}}^{-\hat{S}_i}\big)}\mid\hat{\tau}_1>(1-\delta)n\right] \cdot \mathbb{P}(\hat{\tau}_1 > (1-\delta)n). \end{align}

Hence, applying the dominated convergence theorem and Lemma 3.1, we get that

(3.12) \begin{align} & \lim_{n\to\infty}\mathbb{E}\left[\frac{1}{y + \sum_{i=0}^{\tau_j -1}{\textrm{e}}^{-{S}_i} + {\textrm{e}}^{-S_{\tau_j}}\big(\sum_{i=0}^{(1-\delta)n}{\textrm{e}}^{-\hat{S}_i}\big)}\mid\hat{\tau}_1>(1-\delta)n\right] \nonumber \\[5pt] & \qquad = \mathbb{E}\left[\lim_{n\to\infty}\mathbb{E}\left[\frac{1}{y + \sum_{i=0}^{\tau_j-1}{\textrm{e}}^{-{S}_i} + {\textrm{e}}^{-S_{\tau_j}}\big(\sum_{i=0}^{(1-\delta)n}{\textrm{e}}^{-\hat{S}_i}\big)}\mid\hat{\tau}_1>(1-\delta)n,\, \{S_l\}_{l\le\tau_j}\right]\right] \nonumber \\[5pt] & \qquad = \mathbb{E}\left[\hat{\mathbb{E}}^+\left[\frac{1}{y + \sum_{i=0}^{\tau_j-1}{\textrm{e}}^{-{S}_i} + {\textrm{e}}^{-S_{\tau_j}}\big(\sum_{i=0}^{\infty}{\textrm{e}}^{-\hat{S}_i}\big)}\mid\{S_l\}_{l\le\tau_j}\right]\right] \;=\!:\; A_j(y), \end{align}

where $\hat{\tau}_1$ is the descending ladder epoch of $\{\hat{S}_i\}_{i\ge 0}$ , and $\hat{\mathbb{E}}^+$ denotes the corresponding measure change. Then, combining (3.11), (3.12), and (3.7), we get the following upper bound:

(3.13) \begin{equation} \limsup_{n\to\infty}\sqrt{n}\,\mathbb{E}\Bigg[\frac{1}{y + \sum_{i=0}^{n}{\textrm{e}}^{-{S}_i}};\, \tau_{j}\le\delta n,\,\tau_{j+1}>n\Bigg] \le \frac{c_1 A_j(y)}{\sqrt{1-\delta}}. \end{equation}

Next, we show that the lower bound can be obtained in a similar way. It is easy to see that $\{\tau_{j}\le \delta n, \tau_{j+1}>n\}\supset \{\tau_{j}\le \delta n, \tau_{j+1} - \tau_{j} > n\}$ and, conditioned on the latter event,

\begin{equation*} \sum_{i=0}^{n}{\textrm{e}}^{-{S}_i} = \sum_{i=0}^{\tau_j-1}{\textrm{e}}^{-{S}_i} + {\textrm{e}}^{-S_{\tau_j}}\Bigg(\sum_{i=0}^{n-\tau_j}{\textrm{e}}^{-\hat{S}_i}\Bigg) \le \sum_{i=0}^{\tau_j-1}{\textrm{e}}^{-{S}_i} + {\textrm{e}}^{-S_{\tau_j}}\Bigg(\sum_{i=0}^{n}{\textrm{e}}^{-\hat{S}_i}\Bigg). \end{equation*}

Thus, we have

(3.14) \begin{align} & \mathbb{E}\Bigg[\frac{1}{y + \sum_{i=0}^{n}{\textrm{e}}^{-{S}_i}};\,\tau_{j}\le\delta n,\,\tau_{j+1}>n\Bigg] \nonumber \\[5pt] & \quad \ge \mathbb{E}\left[\frac{1}{y + \sum_{i=0}^{\tau_j-1}{\textrm{e}}^{-{S}_i} + {\textrm{e}}^{-S_{\tau_j}}\big(\sum_{i=0}^{n}{\textrm{e}}^{-\hat{S}_i}\big)};\,\tau_{j}\le\delta n,\, \tau_{j+1}-\tau_{j} > n\right] \nonumber \\[5pt] & \quad = \mathbb{E}\left[\frac{1}{y + \sum_{i=0}^{\tau_j-1}{\textrm{e}}^{-{S}_i} + {\textrm{e}}^{-S_{\tau_j}}\big(\sum_{i=0}^{n}{\textrm{e}}^{-\hat{S}_i}\big)};\,\hat{\tau}_1 > n\right] \nonumber \\[5pt] & \qquad - \mathbb{E}\left[\frac{1}{y + \sum_{i=0}^{\tau_j-1}{\textrm{e}}^{-{S}_i} + {\textrm{e}}^{-S_{\tau_j}}\big(\sum_{i=0}^{n}{\textrm{e}}^{-\hat{S}_i}\big)};\,\tau_{j}>\delta n,\,\hat{\tau}_1>n\right] \nonumber \\[5pt] & \quad = \mathbb{E}\left[\frac{1}{y + \sum_{i=0}^{\tau_j-1}{\textrm{e}}^{-{S}_i} + {\textrm{e}}^{-S_{\tau_j}}\big(\sum_{i=0}^{n}{\textrm{e}}^{-\hat{S}_i}\big)}\mid\hat{\tau}_1>n\right] \cdot \mathbb{P}(\hat{\tau}_1 > n) - o\bigg(\frac{1}{\sqrt{n}\,}\bigg), \end{align}

where the last equality follows from

\begin{align*} \mathbb{E}\left[\frac{1}{y + \sum_{i=0}^{\tau_j-1}{\textrm{e}}^{-{S}_i} + {\textrm{e}}^{-S_{\tau_j}}\big(\sum_{i=0}^{n}{\textrm{e}}^{-\hat{S}_i}\big)};\,\tau_{j}>\delta n,\,\hat{\tau}_1 > n\right] & \le \mathbb{P}(\tau_{j}>\delta n) \cdot \mathbb{P}(\hat{\tau}_1 > n) \\[5pt] & \sim \frac{jc_1}{\sqrt{\delta n}\,} \cdot \frac{c_1}{\sqrt{n}\,} = o\bigg(\frac{1}{\sqrt{n}\,}\bigg). \end{align*}

By the dominated convergence theorem, (3.7), and (3.14), we get the following lower bound:

(3.15) \begin{equation} \liminf_{n\to\infty}\sqrt{n}\,\mathbb{E}\Bigg[\frac{1}{y + \sum_{i=0}^{n}{\textrm{e}}^{-{S}_i}};\, \tau_{j}\le\delta n,\,\tau_{j+1}>n\Bigg] \ge c_1 A_j(y). \end{equation}

In view of (3.10), (3.13), and (3.15), we obtain that

\begin{align*} c_1 A_j(y) & \le \liminf_{n\to\infty}\sqrt{n}\,\mathbb{E}\Bigg[\frac{1}{y + \sum_{i=0}^{n}{\textrm{e}}^{-{S}_i}};\, {\tau}_j \le n < \tau_{j+1}\Bigg] \\[5pt] & \le \limsup_{n\to\infty}\sqrt{n}\,\mathbb{E}\Bigg[\frac{1}{y + \sum_{i=0}^{n}{\textrm{e}}^{-{S}_i}};\, {\tau}_j \le n < \tau_{j+1}\Bigg] \le \frac{c_1 A_j(y)}{\sqrt{1-\delta}} . \end{align*}

Then (3.9) holds true since $\delta\in(0,1)$ can be arbitrarily small.

Step 3

Finally, we turn to the estimation of

(3.16) \begin{equation} \mathbb{E}\Bigg[\frac{1}{y + \sum_{i=0}^{n}{\textrm{e}}^{-{S}_i}};\,{\tau}_r\le n\Bigg], \end{equation}

and decompose the range of integration into two parts: $\{\tau_r\le(1-\delta)n\}$ and $\{(1-\delta)n<\tau_r\le n\}$. By (3.7), the contribution to (3.16) from the second of these events is not greater than

\begin{equation*} \mathbb{P}((1-\delta)n<\tau_r\le n) \sim \bigg(\frac{1}{\sqrt{1-\delta}\,}-1\bigg)\frac{c_1 r}{\sqrt{n}\,} \quad \textrm{as}\ n\to\infty, \end{equation*}

and over the first it is not greater than

(3.17) \begin{equation} \mathbb{E}\left[\frac{1}{y + \sum_{i=\tau_r}^{\tau_r+\delta n}{\textrm{e}}^{-{S}_i}};\,{\tau}_r\le(1-\delta)n\right] \le \mathbb{E}\left[\frac{{\textrm{e}}^{S_{\tau_r}}}{\sum_{i=0}^{\delta n}{\textrm{e}}^{-\hat{S}_i}}\right] = \big(\mathbb{E}\big[{\textrm{e}}^{S_{\tau_1}}\big]\big)^r\mathbb{E}\left[\frac{1}{\sum_{i=0}^{\delta n}{\textrm{e}}^{-\hat{S}_i}}\right]. \end{equation}

Note that $0<\mathbb{E}\big[{\textrm{e}}^{S_{\tau_1}}\big]<1$. According to [16, Theorem 1], the second factor on the right-hand side of (3.17) is asymptotically no greater than ${c_2}/{\sqrt{\delta n}}$. Bringing together the estimates obtained, we find that, for sufficiently large n,

(3.18) \begin{equation} \mathbb{E}\Bigg[\frac{1}{y + \sum_{i=0}^{n}{\textrm{e}}^{-{S}_i}};\,{\tau}_r\le n\Bigg] \le \bigg[c_1r\bigg(\frac{1}{\sqrt{1-\delta}\,}-1\bigg) + \frac{c_2\big(\mathbb{E}\big[{\textrm{e}}^{S_{\tau_1}}\big]\big)^r}{\sqrt{\delta}}\bigg]\frac{1}{\sqrt{n}\,}. \end{equation}

Choosing $\delta=1/r^2$ , for sufficiently large r, we can make the factor in square brackets on the right-hand side of (3.18) smaller than any pre-assigned $\varepsilon>0$ . Combining this and (3.8), (3.9), and (3.18), we get that, for sufficiently large r and all large enough n (depending on r and $\varepsilon$ ),

\begin{equation*} \Bigg|\sqrt{n}\,\mathbb{E}\Bigg[\frac{1}{y + \sum_{i=0}^{n}{\textrm{e}}^{-{S}_i}}\Bigg] - c_1\sum_{j=0}^{r-1}A_j(y)\Bigg| < 2\varepsilon. \end{equation*}

This means that the sequence

\[\Bigg\{\sqrt{n}\,\mathbb{E}\Bigg[\frac{1}{y + \sum_{i=0}^{n}{\textrm{e}}^{-{S}_i}}\Bigg]\Bigg\}_{n\ge 0}\]

is bounded. But then for any fixed y the sequence $\big\{\sum_{j=0}^{r}A_j(y)\big\}_{r\ge 0}$ is also bounded, and hence the series $\sum_{j=0}^{\infty}A_j(y)$ converges. Thus we have, for any fixed y,

(3.19) \begin{equation} \lim_{n\to\infty}\sqrt{n}\,\mathbb{E}\Bigg[\frac{1}{y + \sum_{i=0}^{n}{\textrm{e}}^{-{S}_i}}\Bigg] = c_1\sum_{j=0}^{\infty}A_j(y) \in (0,\infty). \end{equation}

Writing $L_n\;:\!=\;\min(S_k\colon 0\le k\le n)$, by [16, Theorem A] we have, for $y\ge 0$,

\begin{equation*} c_1\sum_{j=0}^{\infty}A_j(y) \le \lim_{n\to\infty}\sqrt{n}\,\mathbb{E}\bigg[\frac{1}{\sum_{i=0}^{n}{\textrm{e}}^{-{S}_i}}\bigg] \le \lim_{n\to\infty}\sqrt{n}\,\mathbb{E}[{\textrm{e}}^{L_n}] = \frac{\hat{U}(1){\textrm{e}}^{-c_-}}{\sqrt{\pi}}, \end{equation*}

where $\hat{U}(1)=\int_{0}^{\infty}\textrm{e}^{-x}\,\textrm{d}U(x)$. From this, (3.19), and the dominated convergence theorem, we get that

(3.20) \begin{align} & \lim_{n\to\infty}\sqrt{n}\,\mathbb{E}\Bigg[\frac{1}{\sum_{l=0}^{k-1}{\textrm{e}}^{S_k-S_l}+\sum_{i=0}^{n}{\textrm{e}}^{-\tilde{S}_i}}\Bigg] \nonumber \\[5pt] & = \lim_{n\to\infty}\sqrt{n}\int_{0}^{\infty}\mathbb{E}\Bigg[\frac{1}{y + \sum_{i=0}^{n}{\textrm{e}}^{-S_i}}\Bigg] \mathbb{P}\Bigg(\sum_{l=0}^{k-1}{\textrm{e}}^{S_k-S_l}\in\textrm{d}y\Bigg) \nonumber \\[5pt] & = c_1\int_{0}^{\infty}\sum_{j=0}^{\infty}A_j(y)\,\mathbb{P}\Bigg(\sum_{l=0}^{k-1}{\textrm{e}}^{S_k-S_l}\in\textrm{d}y\Bigg) \;=\!:\; \tilde{c}_k\in (0,\infty). \end{align}

Hence, (3.5) is valid and Theorem 2.1 holds with $c(x)=\sum_{k=0}^{x}\tilde{c}_k$ .

4. Maximal population and total population

4.1. Preliminary results

In this section, we give some useful lemmas that will be used for the proof of conditioned limit results in the next section.

Lemma 4.1. Assume that condition (2.2) is valid. Let $Y_1,Y_2,\ldots$ be a uniformly bounded sequence of non-negative random variables adapted to the filtration $\mathcal{A}$ such that for any fixed $j\ge 0$ the limit

(4.1) \begin{equation} \lim_{n\to\infty}\mathbb{E}[Y_n\cdot\mathbf{1}_{\{T_x > n\}}\mid\tau_j\le n<\tau_{j+1}]=a_j \end{equation}

exists. Then the limit $\lim_{n\to\infty}\mathbb{E}[Y_n\mid T_x > n] = ({c_1}/{c(x)})\sum_{j=0}^{\infty}a_j$ exists.

Proof. Note that

(4.2) \begin{align} \mathbb{E}[Y_n\mid T_x > n] & = \sum_{j=0}^{\infty}\mathbb{E}[Y_n\cdot\mathbf{1}_{\{\tau_j\le n<\tau_{j+1}\}}\mid T_x > n] \nonumber \\[5pt] & = \sum_{j=0}^{\infty}\frac{\mathbb{P}(\tau_j\le n<\tau_{j+1})}{\mathbb{P}(T_x > n)}\, \mathbb{E}[Y_n\cdot\mathbf{1}_{\{T_x > n\}}\mid\tau_j\le n<\tau_{j+1}] \nonumber \\[5pt] & \;=\!:\; F_m(n) + R_m(n), \end{align}

where $F_m(n)$ is the sum of the first m terms of the preceding series, and $R_m(n)$ is the corresponding remainder term. By (3.7), (4.1), and Theorem 2.1, we get

\begin{equation*} \lim_{n\to\infty}F_m(n) = \frac{c_1}{c(x)}\sum_{j=0}^{m-1} a_j. \end{equation*}

By assumption, the sequence $\{Y_n\}_{n\ge 1}$ is uniformly bounded, say by a positive constant $M_2$. Then we have $F_m(n)\le \mathbb{E}[ Y_n\mid T_x > n ] \le M_2$ for any $m,n\ge 1$, hence the limit

(4.3) \begin{equation} \lim_{m\to\infty}\lim_{n\to\infty}F_m(n) = \frac{c_1}{c(x)}\sum_{j=0}^{\infty} a_j \end{equation}

exists and is finite. On the other hand, observe that

(4.4) \begin{equation} R_m(n) \le M_2\cdot\sum_{j=m}^{\infty}\frac{\mathbb{P}(T_x>n,\,\tau_j\le n<\tau_{j+1})}{\mathbb{P}(T_x>n)} = M_2\cdot\frac{\mathbb{P}(T_x > n,\,\tau_m\le n)}{\mathbb{P}(T_x > n)}. \end{equation}

By the uniformly elliptic condition (1.3), it follows that, for any $0\le i\le x$ ,

(4.5) \begin{equation} {\textrm{e}}^{-xM_1} \le {\textrm{e}}^{-S_i} \le {\textrm{e}}^{xM_1}, \quad \mathbb{P}\mbox{-}\textrm{a.s.} \end{equation}

Combining this with choosing $\delta=1/r^2$ in (3.18), we obtain that

(4.6) \begin{align} & \lim_{m\to\infty}\limsup_{n\to\infty}\sqrt{n}\,\mathbb{P}(T_x > n,\,\tau_m\le n) \nonumber \\[5pt] & \qquad = \lim_{m\to\infty}\limsup_{n\to\infty}\sqrt{n}\,\mathbb{E}\Bigg[ \frac{\sum_{i=0}^{x}{\textrm{e}}^{-S_i}}{\sum_{i=0}^{n}{\textrm{e}}^{-{S}_i}};\,{\tau}_m\le n\Bigg] \nonumber \\[5pt] & \qquad \le {\textrm{e}}^{xM_1}(x+1)\lim_{m\to\infty}\limsup_{n\to\infty}\sqrt{n}\, \mathbb{E}\Bigg[\frac{1}{\sum_{i=0}^{n}{\textrm{e}}^{-{S}_i}};\,{\tau}_m\le n\Bigg] \nonumber \\[5pt] & \qquad \le {\textrm{e}}^{xM_1}(x+1)\lim_{m\to\infty}\bigg(\frac{c_1}{m(1-1/m^2)} + c_2m\big(\mathbb{E}\big[{\textrm{e}}^{S_{\tau_1}}\big]\big)^m\bigg) = 0. \end{align}

In view of (4.4), (4.6), and Theorem 2.1, we get that

(4.7) \begin{equation} \lim_{m\to\infty}\limsup_{n\to\infty}R_m(n) = 0. \end{equation}

Thus, we conclude the proof of Lemma 4.1 by combining (4.2), (4.3), and (4.7).

We will use the following result [1, Lemma 3] concerning the behavior of the processes $\{a_n\}_{n\ge 0}$ and $\{b_n\}_{n\ge 0}$ conditioned on the event $\{\tau_j\le n<\tau_{j+1}\}$ for any $j\ge 0$.

Lemma 4.2. Assume that condition (2.2) is valid. Let $\stackrel{\textrm{f.d.d.}}{\longrightarrow}$ denote convergence in the sense of finite-dimensional distributions. Then, for any fixed $j\ge 0$ , as $n\to\infty$ ,

\begin{align*} \{a_{[nt]}\colon t\in(0,1]\mid\tau_j\le n < \tau_{j+1}\} & \stackrel{\textrm{f.d.d.}}{\longrightarrow} 0, \\[5pt] \{b_{[nt]}\colon t\in(0,1]\mid\tau_j\le n < \tau_{j+1}\} & \stackrel{\textrm{f.d.d.}}{\longrightarrow} v_j, \end{align*}

where $v_j$ is a process with constant positive trajectories on (0,1] for any $j\ge 0$ . Moreover, the processes $\{a_{[nt]}\colon t\in(0,1]\}$ , $\{b_{[nt]}\colon t\in(0,1]\}$ , and $\{S_{[nt]}/\sigma\sqrt{n}\colon t\in[0,\infty)\}$ are asymptotically independent as $n\to\infty$ conditioned on the event $\{\tau_j\le n<\tau_{j+1}\}$ .

The next result describes the trajectories of the associated random walk allowing survival.

Lemma 4.3. Assume that condition (2.2) is valid. Let $Y_n(t)\;:\!=\;S_{[nt]}/\sigma\sqrt{n}$ , $t\in[0,\infty)$ , $n\ge 0$ . Then, for any $x\in\mathbb{N}$ , as $n\to\infty$ , $\{Y_n(t)\colon t\in[0,\infty)\mid T_x > n\}\stackrel{\textrm{d}}{\longrightarrow} \{W^+(t)\colon t\in[0,\infty)\}$ , where $\{W^+(t)\colon 0\le t\le 1\}$ is the Brownian meander and $\{W^+(t)\colon t > 1\}$ represents the standard Brownian motion starting from $W^+(1)$ . The symbol $\stackrel{\textrm{d}}{\longrightarrow}$ denotes convergence in distribution in the space $D[0,\infty)$ .

Proof. Step 1: The convergence of finite-dimensional distributions. We fix $m\in\mathbb{N}$ and $0<t_1<\cdots<t_m<\infty$ , $x_i\in\mathbb{R}$ , $1\le i\le m$ . Recall that $a_n=\exp(-S_n)$ , $b_0=0$ , and $b_n=\sum_{i=0}^{n-1}a_{i}$ , $n\ge 1$ . By (3.3), we can write

(4.8) \begin{align} & \mathbb{P}(Y_n(t_i)\le x_i,\,1\le i\le m,\,T_x>n\mid\tau_j\le n<\tau_{j+1}) \nonumber \\[5pt] & \qquad = \mathbb{E}[\textrm{P}_{\omega}(T_x>n)\cdot\mathbf{1}_{\{Y_n(t_i)\le x_i,\, 1\le i\le m\}} \mid \tau_j \le n < \tau_{j+1}] \nonumber \\[5pt] & \qquad = \mathbb{E}\bigg[\frac{b_{x+1}}{a_n+b_n}\cdot\mathbf{1}_{\{Y_n(t_i)\le x_i,\, 1\le i\le m\}} \mid \tau_j \le n < \tau_{j+1}\bigg]. \end{align}

Then, applying Lemma 4.2 and [2, Lemma 1], we obtain

(4.9) \begin{align} & \lim_{n\to\infty}\mathbb{E}\bigg[\frac{b_{x+1}}{a_n+b_n}\cdot\mathbf{1}_{\{Y_n(t_i)\le x_i,\,1\le i\le m\}} \mid \tau_j \le n < \tau_{j+1}\bigg] \nonumber \\[5pt] & \quad = \lim_{n\to\infty}\mathbb{E}\bigg[\frac{b_{x+1}}{a_n+b_n}\mid\tau_j\le n<\tau_{j+1}\bigg] \cdot \lim_{n\to\infty}\mathbb{P}(Y_n(t_i)\le x_i,\,1\le i\le m \mid \tau_j\le n<\tau_{j+1}) \nonumber \\[5pt] & \quad = \mathbb{E}\bigg[\frac{b_{x+1}}{v_j}\bigg] \cdot \mathbb{P}(W^+(t_i)\le x_i,\,1\le i\le m). \end{align}

Combining (4.8), (4.9), and Lemma 4.1, we get that

(4.10) \begin{align} & \lim_{n\to\infty}\mathbb{P}(Y_n(t_i)\le x_i,\,1\le i\le m \mid T_x>n)\nonumber \\[5pt] & \qquad = \frac{c_1}{c(x)}\sum_{j=0}^{\infty}\mathbb{E}\bigg[\frac{b_{x+1}}{v_j}\bigg] \cdot \mathbb{P}(W^+(t_i)\le x_i,\,1\le i\le m). \end{align}

These arguments are valid in the case $x_i=\infty$ , $1\le i\le m$ , as well, and therefore

\begin{equation*} \frac{c_1}{c(x)}\sum_{j=0}^{\infty}\mathbb{E}\bigg[\frac{b_{x+1}}{v_j}\bigg] = 1. \end{equation*}

It follows from this and (4.10) that

(4.11) \begin{equation} \lim_{n\to\infty}\mathbb{P}(Y_n(t_i)\le x_i,\,1\le i\le m \mid T_x>n) = \mathbb{P}(W^+(t_i)\le x_i,1\le i\le m). \end{equation}

Step 2: Tightness. For a function $f\in D[u,v]$, $0\le u<v<\infty$, we consider the modulus of continuity $\omega_f(\delta,u,v) = \sup{|f(s)-f(t)|}$, where the supremum is taken over all s, t such that $s,t\in[u,v]$, $|t-s|<\delta$, $\delta\in(0,\infty)$. For any fixed $\nu,\varepsilon\in(0,\infty)$, by [2, Lemma 1] we have, for any fixed $j\ge 0$,

\begin{align*} & \lim_{\delta\downarrow 0}\limsup_{n\to\infty}\mathbb{P}(\omega_{Y_n}(\delta,0,\nu)\ge\varepsilon,\,T_x>n \mid \tau_j \le n < \tau_{j+1}) \nonumber \\[5pt] & \qquad \le \lim_{\delta\downarrow 0}\limsup_{n\to\infty}\mathbb{P}(\omega_{Y_n}(\delta,0,\nu)\ge\varepsilon \mid \tau_j \le n < \tau_{j+1}) = 0. \end{align*}

Then, applying Lemma 4.1 we get that $\lim_{\delta\downarrow 0}\limsup_{n\to\infty}\mathbb{P}(\omega_{Y_n}(\delta,0,\nu)\ge\varepsilon\mid T_x>n) = 0$ . We conclude the proof of Lemma 4.3 by combining this with (4.11).

Lemma 4.4. Assume that condition (2.2) is valid. Then, for any $m>k>x$ ,

\begin{align*} \textrm{E}_{\omega}\bigg[\bigg(\frac{Z_{m}^x}{{\textrm{e}}^{S_{m}}}-\frac{Z_{k}^x}{{\textrm{e}}^{S_{k}}}\bigg)^2\bigg] \le (x+1)\cdot b_{x+1}(2(b_m-b_{k}) + a_m - a_{k}). \end{align*}

Proof. Recall that, for $n>x$ , $Z_n^x=Z_{n,0}+Z_{n,1}+\cdots+Z_{n,x}$ , which implies that

(4.12) \begin{align} \textrm{E}_{\omega}\bigg[\bigg(\frac{Z_{m}^x}{{\textrm{e}}^{S_{m}}}-\frac{Z_{k}^x}{{\textrm{e}}^{S_{k}}}\bigg)^2 \bigg] & = \textrm{E}_{\omega}\bigg[\bigg(\frac{\sum_{i=0}^{x}Z_{m,i}}{{\textrm{e}}^{S_{m}}} - \frac{\sum_{i=0}^{x}Z_{k,i}}{{\textrm{e}}^{S_{k}}}\bigg)^2\bigg] \nonumber \\[5pt] & \le (x+1)\cdot\sum_{i=0}^{x}\textrm{E}_{\omega}\bigg[\bigg(\frac{Z_{m,i}}{{\textrm{e}}^{S_{m}}} - \frac{Z_{k,i}}{{\textrm{e}}^{S_{k}}}\bigg)^2\bigg]. \end{align}

For each $0\le i\le x$, since $\{Z_{l,i}\colon l\ge i+1\}$ is a BPRE, it follows from [3, Lemma 4] that

(4.13) \begin{align} \textrm{E}_{\omega}\bigg[\bigg(\frac{Z_{m,i}}{{\textrm{e}}^{S_{m}}}-\frac{Z_{k,i}}{{\textrm{e}}^{S_{k}}}\bigg)^2\bigg] & = {\textrm{e}}^{-2S_i}\cdot\textrm{E}_{\omega}\bigg[\bigg(\frac{Z_{m,i}}{{\textrm{e}}^{S_{m}-S_i}} - \frac{Z_{k,i}}{{\textrm{e}}^{S_{k}-S_i}}\bigg)^2\bigg] \nonumber \\[5pt] & = {\textrm{e}}^{-2S_i}\cdot\Bigg(2\sum_{l=k}^{m-1}{\textrm{e}}^{S_i-S_l} + {\textrm{e}}^{S_i-S_m} - {\textrm{e}}^{S_i-S_k}\Bigg) \nonumber \\[5pt] & = {\textrm{e}}^{-S_i} \cdot (2(b_m-b_{k}) + a_m - a_{k}). \end{align}

Thus, we conclude the proof of Lemma 4.4 by combining (4.12) and (4.13).

4.2. Conditioned limit results

In this section, we derive some Yaglom-type results for the BPIRE introduced in Section 2, which show that $\{Z_n^{x}\}_{n\ge 0}$ exhibits ‘supercritical’ behavior conditioned on the event $\{T_x>n\}$ as $n\to\infty$. The proofs are adapted from the arguments in [2, 3], which are devoted to the analogous results for BPREs.

Proposition 4.1. Assume that condition (2.2) is valid. Then, for any $x\in\mathbb{N}$ , as $n\to\infty$ ,

(4.14) \begin{equation} \bigg\{\frac{Z_{[nt]}^x}{{\textrm{e}}^{S_{[nt]}}}\colon t\in(0,1] \mid T_x>n\bigg\} \stackrel{\textrm{d}}{\longrightarrow} \{\eta_x(t)\colon 0 < t \le 1\}, \end{equation}

where $\{\eta_x(t)\colon 0<t\le 1\}$ is a stochastic process with a.s. constant paths, i.e. there exists a random variable $\eta_x$, dependent on x, such that $\mathbb{P}(\eta_x(t) = \eta_x,\, 0<t\le 1) = 1$ and $\mathbb{P}(0<\eta_x<\infty) = 1$. Convergence in (4.14) means convergence in distribution in the space D[u,1] with the Skorokhod topology for any fixed $u\in (0,1)$.

Proof. Let $X_n(t) \;:\!=\; Z_{[nt]}^x{\textrm{e}}^{-S_{[nt]}}$ , $t\in(0,1]$ . By (3.2), for any $\lambda\ge 0$ ,

\begin{equation*} \textrm{E}_{\omega}[{\textrm{e}}^{-\lambda X_n(1)};\,T_x>n] = g_n({\textrm{e}}^{-\lambda a_n}) - g_n(0) = \frac{b_{x+1}}{a_n+b_n} - \frac{b_{x+1}}{a_n(1-{\textrm{e}}^{-\lambda a_n})^{-1}+b_n}. \end{equation*}

Applying Lemma 4.2 gives, for any $j\ge 0$ ,

\begin{align*} \lim_{n\to\infty}\mathbb{E}[{\textrm{e}}^{-\lambda X_n(1)}\cdot\mathbf{1}_{\{T_x>n\}} \mid \tau_j \le n < \tau_{j+1}] & = \lim_{n\to\infty}\mathbb{E}[\textrm{E}_{\omega}[{\textrm{e}}^{-\lambda X_n(1)};\,T_x>n]\mid\tau_j\le n<\tau_{j+1}] \\[5pt] & = \mathbb{E}\bigg[\frac{b_{x+1}}{v_j(1+\lambda v_j )}\bigg]. \end{align*}

Then, using Lemma 4.1, we obtain that, for any $x\in\mathbb{N}$ ,

(4.15) \begin{equation} \lim_{n\to\infty}\mathbb{E}[{\textrm{e}}^{-\lambda X_n(1)} \mid T_x>n] = \frac{c_1}{c(x)}\sum_{j=0}^{\infty}\mathbb{E}\bigg[\frac{b_{x+1}}{v_j(1+\lambda v_j)}\bigg] \;=\!:\; \varphi(\lambda, x). \end{equation}

The above arguments are valid in the case $\lambda=0$ as well. Therefore,

\begin{equation*} \varphi(\lambda, x) \le \frac{c_1}{c(x)}\sum_{j=0}^{\infty}\mathbb{E}\bigg[\frac{b_{x+1}}{v_j}\bigg] = 1 \end{equation*}

for all $\lambda>0$ . Then the function series in (4.15) converges uniformly. Combining this and the dominated convergence theorem gives

\begin{align*} \lim_{\lambda\to0}\sum_{j=0}^{\infty}\mathbb{E}\bigg[\frac{b_{x+1}}{v_j(1+\lambda v_j)}\bigg] &= \sum_{j=0}^{\infty}\lim_{\lambda\to0}\mathbb{E}\bigg[\frac{b_{x+1}}{v_j(1+\lambda v_j)}\bigg] = \sum_{j=0}^{\infty}\mathbb{E}\bigg[\lim_{\lambda\to0}\frac{b_{x+1}}{v_j(1+\lambda v_j)}\bigg]\\[2pt] &= \sum_{j=0}^{\infty}\mathbb{E}\bigg[\frac{b_{x+1}}{v_j}\bigg]. \end{align*}

Hence the Laplace transform $\lambda \to \varphi(\lambda, x)$ is continuous at 0. By the continuity theorem, for any $x\in\mathbb{N}$ there exists a random variable $\eta_x$ such that

(4.16) \begin{equation} \{X_n(1) \mid T_x > n\} \stackrel{\textrm{d}}{\longrightarrow} \eta_x. \end{equation}

Consider the process $\{\eta_x(t)\colon 0<t\le1\}$ that assigns the random variable $\eta_x$ to each $t\in(0,1]$, i.e. $\mathbb{P}(\eta_x(t) = \eta_x,\, 0<t\le 1) = 1$. We will show that, for any $u\in(0,1)$, as $n\to\infty$,

(4.17) \begin{equation} \{X_n(t)\colon t\in[u,1] \mid T_x > n\} \stackrel{\textrm{f.d.d.}}{\longrightarrow} \{\eta_x(t)\colon u \le t \le 1\}. \end{equation}

By (4.16), it follows that to prove (4.17) it suffices to show that, for any $\varepsilon>0$ ,

(4.18) \begin{equation} \lim_{n\to\infty}\mathbb{P}(\sup\nolimits_{t\in[u,1]}|X_n(t)-X_n(u)| \ge \varepsilon \mid T_x>n) = 0. \end{equation}

It is easy to see that the process $\{Z_k^x{\textrm{e}}^{-S_k}\}_{k\ge 0}$ is a submartingale under the quenched law $\textrm{P}_{\omega}$ ; then, applying Doob’s inequality and Lemma 4.4, we get that

\begin{align*} & \mathbb{P}(\sup\nolimits_{t\in[u,1]}|X_n(t)-X_n(u)|\ge\varepsilon\mid\tau_j\le{un}/{2},\,n<\tau_{j+1}) \\[2pt] & \qquad = \mathbb{E}[\textrm{P}_{\omega}(\sup\nolimits_{t\in[u,1]}|X_n(t)-X_n(u)|\ge\varepsilon)\mid\tau_j\le{un}/{2}, \, n < \tau_{j+1}] \\[2pt] & \qquad \le \frac{1}{\varepsilon^2}\cdot\mathbb{E}[\textrm{E}_{\omega}[(X_n(1)-X_n(u))^2] \mid \tau_j \le {un}/{2},\,n<\tau_{j+1}] \\[2pt] & \qquad \le \frac{x+1}{\varepsilon^2}\cdot\mathbb{E}[b_{x+1}(2(b_n-b_{nu})+a_n-a_{nu}) \mid \tau_j \le {un}/{2},\,n<\tau_{j+1}]. \end{align*}

By (4.5), $b_{x+1}$ is bounded from above; then applying [2, Lemma 3] implies that

\begin{equation*} \lim_{n\to\infty}\mathbb{P}(\sup\nolimits_{t\in[u,1]}|X_n(t)-X_n(u)|\ge\varepsilon\mid\tau_j\le{un}/{2},\,n<\tau_{j+1}) = 0. \end{equation*}

Hence, it follows from (3.10) that $\lim_{n\to\infty}\mathbb{P}(\sup_{t\in[u,1]}|X_n(t)-X_n(u)|\ge\varepsilon\mid\tau_j\le n<\tau_{j+1}) = 0$ . Thus, (4.18) holds true in view of Lemma 4.1. On the other hand, observe that, for any $u\in(0,1)$ and $\varepsilon\in (0,\infty)$ , $\omega_{X_n}(\delta,u,1) \le 2\sup_{t\in[u,1]}|X_n(t)-X_n(u)|$ , which implies that

\begin{align*} & \lim_{\delta\downarrow0}\limsup_{n\to\infty}\mathbb{P}(\omega_{X_n}(\delta,u,1)\ge\varepsilon\mid T_x>n) \\[5pt] & \qquad \le \lim_{n\to\infty}\mathbb{P}(\sup\nolimits_{t\in[u,1]}|X_n(t)-X_n(u)|\ge{\varepsilon}/{2}\mid T_x>n) = 0. \end{align*}

Thus, we conclude the proof of Proposition 4.1 by combining this with (4.17).

Proposition 4.1 gives no explicit formula for the limiting distribution of the process $Z_{[nt]}^x{\textrm{e}}^{-S_{[nt]}}$. Next, we show some conditioned limit results for the process $\{\log{Z_{[nt]}^x},\, 0 \le t \le 1\}$, for which the limiting distribution admits an explicit expression.

Proposition 4.2. Assume that condition (2.2) is valid. Then, for any $x\in\mathbb{N}$ , as $n\to\infty$ ,

\begin{align*} \Bigg\{\frac{\log{\big(Z_{[nt]}^x+1\big)}}{\sigma\sqrt{n}}\colon t\in[0,\infty) \mid T_x > n\Bigg\} & \stackrel{\textrm{d}}{\longrightarrow} \{W^+(t\land\tau)\colon t\in[0,\infty)\}, \\[5pt] \Bigg\{\frac{\log{\big(Z_{[tT_x]}^x+1\big)}}{\sigma\sqrt{n}}\colon t\in[0,1] \mid T_x > n\Bigg\} & \stackrel{\textrm{d}}{\longrightarrow} \bigg\{\frac{W_0^+(t)}{\alpha}\colon t\in[0,1]\bigg\}, \end{align*}

where $\tau = \inf\{t>0\colon W^+(t)=0\}$ , $\alpha$ is a random variable uniformly distributed on (0,1), and $\{W_0^+(t)\colon t\in[0,1]\}$ is a Brownian excursion independent of $\alpha$ .

Proof. This proposition was proved for $x=0$ in [3, Theorems 3 and 5]. The proofs given there still work if we replace the lemmas used in [3] with Lemmas 4.1 and 4.3 and Proposition 4.1; we therefore omit the details.

4.3. Proof of Theorem 2.2

Proof. We first prove that, for any $x\in\mathbb{N}$ , there exists a positive constant C(x) such that

(4.19) \begin{equation} \lim_{n\to\infty}\log{n}\cdot\mathbb{P}(\sup\nolimits_{k\ge 0}Z_k^{x}>n) = C(x). \end{equation}

Note that $\sup_{k\ge 0}Z_k^{x} = \sup_{t\in[0,1]}Z_{[tT_x]}^{x}$ , and therefore it suffices to demonstrate that, as $n\to\infty$ ,

(4.20) \begin{equation} J(n,x) \;:\!=\; \mathbb{P}\big(\sup\nolimits_{t\in[0,1]}\log\big(Z_{[tT_x]}^{x}+1\big) > n\big) \sim \frac{C(x)}{n}. \end{equation}

For any fixed $\varepsilon>0$ , we write

(4.21) \begin{equation} J(n,x) = J_1(n,x,\varepsilon)+J_2(n,x,\varepsilon), \end{equation}

where

\begin{align*} J_1(n,x,\varepsilon) & \;:\!=\; \mathbb{P}\big(\sup\nolimits_{t\in[0,1]}\log\big(Z_{[tT_x]}^{x}+1\big) > n,\,T_x>\varepsilon n^2\big), \\[5pt] J_2(n,x,\varepsilon) & \;:\!=\; \mathbb{P}\big(\sup\nolimits_{t\in[0,1]}\log\big(Z_{[tT_x]}^{x}+1\big) > n,\,T_x\le\varepsilon n^2\big). \end{align*}

It is clear that

\begin{align*} J_1(n,x,\varepsilon) & = \mathbb{P}\big(\sup\nolimits_{t\in[0,1]}\log\big(Z_{[tT_x]}^{x}+1\big)>n \mid T_x>\varepsilon n^2\big) \cdot \mathbb{P}(T_x>\varepsilon n^2) \\[5pt] & = \mathbb{P}\Bigg(\sup\nolimits_{t\in[0,1]}\frac{\log\big(Z_{[tT_x]}^{x}+1\big)}{\sigma\sqrt{n^2\varepsilon}} > \frac{1}{\sigma\sqrt{\varepsilon}\,} \mid T_x>\varepsilon n^2\Bigg) \cdot \mathbb{P}(T_x>\varepsilon n^2); \end{align*}

then, applying Proposition 4.2 and Theorem 2.1 gives

(4.22) \begin{equation} \lim_{n\to\infty}nJ_1(n,x,\varepsilon) = \frac{c(x)}{\sqrt{\varepsilon}} \cdot \mathbb{P}\bigg(\sup\nolimits_{t\in[0,1]}\frac{W_0^+(t)}{\alpha} > \frac{1}{\sigma\sqrt{\varepsilon}\,}\bigg). \end{equation}

Since $\alpha$ is uniformly distributed on (0,1) and independent of $W_0^+$ , we have

\begin{align*} \lim_{\varepsilon\downarrow 0}\frac{1}{\sqrt{\varepsilon}\,} \mathbb{P}\bigg(\sup\nolimits_{t\in[0,1]}\frac{W_0^+(t)}{\alpha} > \frac{1}{\sigma\sqrt{\varepsilon}\,}\bigg) & = \lim_{\varepsilon\downarrow0}\frac{1}{\sqrt{\varepsilon}\,} \int_{0}^{1}\mathbb{P}\bigg(\sup\nolimits_{t\in[0,1]}W_0^+(t)>\frac{u}{\sigma\sqrt{\varepsilon}\,}\bigg)\,\textrm{d}u \\[5pt] & = \lim_{\varepsilon\downarrow0}\sigma\int_{0}^{{1}/{\sigma\sqrt{\varepsilon}}} \mathbb{P}(\sup\nolimits_{t\in[0,1]}W_0^+(t) > y)\,\textrm{d}y \\[5pt] & = \sigma\mathbb{E}\big[\sup\nolimits_{t\in[0,1]}W_0^+(t)\big] = \sigma\sqrt{\pi/2}, \end{align*}

where the last equality follows from [11, Corollary 3.2]. Thus, we conclude that

(4.23) \begin{equation} \lim_{\varepsilon\downarrow0}\lim_{n\to\infty}nJ_1(n,x,\varepsilon) = c(x)\cdot\sigma\sqrt{\pi/2}\;=\!:\;C(x)\in(0,\infty). \end{equation}
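The constant $\sigma\sqrt{\pi/2}$ entering C(x) is exactly $\sigma\,\mathbb{E}\big[\sup_{t\in[0,1]}W_0^+(t)\big]$. As a hedged numerical illustration (ours, not used in the proof), the excursion can be sampled via the Vervaat transform of a Brownian bridge, whose supremum equals the range of the bridge:

```python
import math, random

def excursion_sup(m=1_000):
    """Sup of a Brownian excursion on an m-step grid, via Vervaat's transform."""
    steps = [random.gauss(0.0, 1.0 / math.sqrt(m)) for _ in range(m)]
    drift = sum(steps) / m
    bridge = [0.0]
    for ds in steps:                 # Brownian bridge: recentred random walk
        bridge.append(bridge[-1] + ds - drift)
    rho = min(range(m + 1), key=bridge.__getitem__)   # argmin of the bridge
    # Vervaat: excursion(t) = bridge((rho + t) mod 1) - bridge(rho),
    # so sup excursion = max(bridge) - min(bridge).
    return max(bridge) - bridge[rho]

random.seed(7)
trials = 2_000
est = sum(excursion_sup() for _ in range(trials)) / trials
print(est, math.sqrt(math.pi / 2.0))   # both near 1.2533
```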

Now we turn to the estimate of $J_2(n,x,\varepsilon)$ . We write $a={\textrm{e}}^n-1$ , $\gamma_a=\inf\{k\ge 0\colon Z_k^x>a\}$ , and let $\theta$ be the left shift operator on environments so that $(\theta^k \omega)_y = \omega_{k+y}$ for any $k\in\mathbb{N}$ and $y\in\mathbb{Z}$ . Note that

\begin{align*} \textrm{P}_{\omega}(\sup\nolimits_{k\ge0}Z_{k}^{x} > a,\,T_x\le\varepsilon n^2) = \sum_{m=0}^{\varepsilon n^2}\sum_{l>a}\textrm{P}_{\omega}(\gamma_a = m,\,Z_m^x =l ) \cdot (\textrm{P}_{\theta^m\omega}(T_0\le\varepsilon n^2 - m))^l, \end{align*}

which implies that

(4.24) \begin{align} J_2(n,x,\varepsilon) & = \sum_{m=0}^{\varepsilon n^2}\sum_{l>a}\mathbb{P}(\gamma_a = m,\,Z_m^x =l) \cdot \mathbb{E}[(\textrm{P}_{\theta^m\omega}(T_0\le\varepsilon n^2 - m))^l] \nonumber \\[5pt] & \le \sum_{m=0}^{\varepsilon n^2}\sum_{l>a}\mathbb{P}(\gamma_a = m,\,Z_m^x =l) \cdot \mathbb{E}[(\textrm{P}_{\omega}(T_0\le\varepsilon n^2))^a] \nonumber \\[5pt] & = \mathbb{P}(\sup\nolimits_{k\ge 0}Z_{k}^{x} > a) \cdot \mathbb{E}\bigg[\bigg(1-\frac{1}{a_{\varepsilon n^2}+b_{\varepsilon n^2}}\bigg)^{{\textrm{e}}^n-1}\bigg] \nonumber \\[5pt] & \;=\!:\; J(n,x)\cdot\alpha(n,\varepsilon). \end{align}

Since $\alpha(n,\varepsilon )<1$ , in view of (4.21) and (4.24) we get that

\begin{equation*} J_1(n,x,\varepsilon)\le J(n,x)\le \frac{J_1(n,x,\varepsilon)}{1-\alpha(n,\varepsilon )}. \end{equation*}

Combining this and (4.23), we obtain that

\begin{align*} C(x) & = \lim_{\varepsilon\downarrow0}\lim_{n\to\infty}nJ_1(n,x,\varepsilon) \le \liminf_{n\to\infty}nJ(n,x)\le\limsup_{n\to\infty}nJ(n,x) \\[5pt] & \le \lim_{\varepsilon\downarrow0}\lim_{n\to\infty}nJ_1(n,x,\varepsilon) \frac{1}{1-\lim_{\varepsilon\downarrow 0}\limsup_{n\to\infty}\alpha(n,\varepsilon )} =C(x), \end{align*}

where the last equality follows from the fact that $\lim_{\varepsilon\downarrow 0}\limsup_{n\to\infty}\alpha(n,\varepsilon) = 0$ [3, Lemma 4]. Thus, (4.20) holds true and the first part of Theorem 2.2 is proved.

Now let us prove the second assertion of Theorem 2.2. For any $\varepsilon\in(0,1)$ , we have

\begin{equation*} \mathbb{P}(T_x\cdot\sup\nolimits_{k\ge 0}Z_{k}^{x}>n) \le \mathbb{P}(T_x>n^{\varepsilon}) + \mathbb{P}(\sup\nolimits_{k\ge0}Z_{k}^{x} > n^{1-\varepsilon}); \end{equation*}

then, by Theorem 2.1 and (4.19), we get that

(4.25) \begin{equation} \limsup_{n\to\infty}\log{n}\cdot\mathbb{P}(T_x\cdot\sup\nolimits_{k\ge0}Z_{k}^{x}>n) \le \frac{C(x)}{1-\varepsilon}. \end{equation}

Observe that $\sup_{k\ge 0}Z_{k}^{x}\le \sum_{k=0}^{\infty}Z_{k}^{x}\le T_x\cdot \sup_{k\ge 0}Z_{k}^{x}$ ; combining this and (4.25), we obtain that

\begin{align*} C(x) = \lim_{n\to\infty}\log{n}\cdot\mathbb{P}(\sup\nolimits_{k\ge0}Z_{k}^{x}>n) & \le \liminf_{n\to\infty}\log{n}\cdot\mathbb{P}\Bigg(\sum_{k=0}^{\infty}Z_{k}^{x}>n\Bigg) \\[5pt] & \le \limsup_{n\to\infty}\log{n}\cdot\mathbb{P}\Bigg(\sum_{k=0}^{\infty}Z_{k}^{x}>n\Bigg) \\[5pt] & \le \limsup_{n\to\infty}\log{n}\cdot\mathbb{P}(T_x\cdot\sup\nolimits_{k\ge 0}Z_{k}^{x}>n) \le \frac{C(x)}{1-\varepsilon}, \end{align*}

which proves (2.3) since $\varepsilon\in(0,1)$ can be chosen arbitrarily small. Thus, the proof of Theorem 2.2 is completed.

Acknowledgements

The authors are sincerely grateful to the anonymous referee for careful reading of the original manuscript and for helpful suggestions to improve the paper.

Funding information

The research was supported by NSFC (No. 11971062) and the National Key Research and Development Program of China (No. 2020YFA0712900).

Competing interests

There were no competing interests to declare which arose during the preparation or publication process of this article.

References

[1] Afanasyev, V. I. (1993). A limit theorem for a critical branching process in a random environment. Diskret. Mat. 5, 45–58.
[2] Afanasyev, V. I. (1997). A new limit theorem for a critical branching process in a random environment. Discrete Math. Appl. 7, 497–513.
[3] Afanasyev, V. I. (1999). On the maximum of a critical branching process in a random environment. Discrete Math. Appl. 9, 267–284.
[4] Afanasyev, V. I., Geiger, J., Kersting, G. and Vatutin, V. (2005). Criticality for branching processes in random environment. Ann. Prob. 33, 645–673.
[5] Aurzada, F., Devulder, A., Guillotin-Plantard, N. and Pène, F. (2017). Random walks and branching processes in correlated Gaussian environment. J. Statist. Phys. 166, 1–23.
[6] Aurzada, F. and Simon, T. (2015). Persistence probabilities and exponents. In Lévy Matters V (Lecture Notes Math. 2149). Springer, Berlin, pp. 183–221.
[7] Dembo, A., Ding, J. and Gao, F. (2013). Persistence of iterated partial sums. Ann. Inst. H. Poincaré Prob. Statist. 49, 873–884.
[8] Denisov, D., Sakhanenko, A. and Wachtel, V. (2018). First-passage times for random walks with nonidentically distributed increments. Ann. Prob. 46, 3313–3350.
[9] Denisov, D. and Wachtel, V. (2015). Exit times for integrated random walks. Ann. Inst. H. Poincaré Prob. Statist. 51, 167–193.
[10] Doney, R. A. (1995). Spitzer’s condition and ladder variables in random walks. Prob. Theory Relat. Fields 101, 577–580.
[11] Durrett, R. T. and Iglehart, D. L. (1977). Functionals of Brownian meander and Brownian excursion. Ann. Prob. 5, 130–135.
[12] Feller, W. (1971). An Introduction to Probability Theory and its Applications, Vol. 2. John Wiley, New York.
[13] Grama, I., Lauvergnat, R. and Le Page, É. (2018). Limit theorems for Markov walks conditioned to stay positive under a spectral gap assumption. Ann. Prob. 46, 1807–1877.
[14] Kersting, G. and Vatutin, V. (2017). Discrete Time Branching Processes in Random Environment. John Wiley, New York.
[15] Kesten, H., Kozlov, M. V. and Spitzer, F. (1975). A limit law for random walk in a random environment. Compositio Math. 30, 145–168.
[16] Kozlov, M. V. (1976). On the asymptotic behaviour of the probability of non-extinction for critical branching processes in a random environment. Theory Prob. Appl. 21, 791–804.
[17] Rogozin, B. A. (1971). On the distribution of the first ladder moment and height and fluctuations of a random walk. Theory Prob. Appl. 16, 575–595.
[18] Sinai, Ya. G. (1982). The limiting behavior of a one-dimensional random walk in a random medium. Theory Prob. Appl. 27, 256–268.
[19] Solomon, F. (1975). Random walks in a random environment. Ann. Prob. 3, 1–31.
[20] Tanaka, H. (1989). Time reversal of random walks in one-dimension. Tokyo J. Math. 12, 159–174.
[21] Zeitouni, O. (2004). Random walks in random environment. In Lectures on Probability Theory and Statistics (Lecture Notes Math. 1837). Springer, Berlin, pp. 189–312.