
Scaling limit of the local time of random walks conditioned to stay positive

Published online by Cambridge University Press:  13 February 2024

Wenming Hong*
Affiliation:
Beijing Normal University
Mingyang Sun*
Affiliation:
Beijing Normal University
*Postal address: School of Mathematical Sciences and Laboratory of Mathematics and Complex Systems, Beijing Normal University, Beijing 100875, PR China.

Abstract

We prove that the local time of random walks conditioned to stay positive converges to the corresponding local time of three-dimensional Bessel processes by proper scaling. Our proof is based on Tanaka’s pathwise construction for conditioned random walks and the derivation of asymptotics for mixed moments of the local time.

Type
Original Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction and results

1.1. Motivation

Let $(S_n)_{n\ge 0}$ denote a random walk on $\mathbb{Z}$ with starting point zero, that is, $S_0 = 0$ , and for $n\ge 1$ , $S_n= \sum_{i=1}^{n} X_i$ , where $\{X_n\colon n\ge 1\}$ are independent and identically distributed integer-valued random variables with $\mathbb E X_1 = 0,\, \mathbb{E}(X_1^2)=: \sigma^2 \in(0,\infty)$ . The local time of reflected random walk $(|S_n|)_{n\ge 0}$ is defined as the visiting number of level x during the first n steps, that is, for $x\in \mathbb{Z}_{+}$ and $n\ge 1$ ,

\begin{align*}\xi(x,n)=\#\{0< k\le n\colon |S_k| = x\}=\sum_{k=1}^{n}\textbf{1}{\{|S_k| = x\}}.\end{align*}
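As a quick numerical illustration (the example path below is chosen by hand and is not part of the argument), the local time $\xi(x,n)$ is simply a count of visits of the reflected path to a level:

```python
def reflected_local_time(steps, x):
    """xi(x, n): number of visits of |S_k| to level x for 0 < k <= n."""
    s, count = 0, 0
    for step in steps:
        s += step
        if abs(s) == x:
            count += 1
    return count

# Path S = 1, 0, -1, 0 (from steps +1, -1, -1, +1): |S| visits level 0 twice.
assert reflected_local_time([1, -1, -1, 1], 0) == 2
assert reflected_local_time([1, -1, -1, 1], 1) == 2
```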

Let $(B_t)_{t\ge 0}$ denote a standard Brownian motion and let $(|B_t|)_{t\ge 0}$ denote the reflected Brownian motion. The local time of $(B_t)_{t\ge 0}$ is defined by

\begin{align*}L^{x}_{t}(B)=\lim_{\varepsilon\to 0}\dfrac{1}{\varepsilon}\int^{t}_{0}\textbf{1}_{[x,x+\varepsilon)}(B_s)\,{\mathrm{d}} s,\,\ t\ge 0,\ x\in \mathbb{R},\end{align*}

and $L^{x}_{t}(|B|)=L^{x}_{t}(B)+L^{-x}_{t}(B)$ , $ x\ge 0$ is the local time of reflected Brownian motion. Set $\gamma_0 = 0$ and define recursively

\begin{align*}\gamma_n = \inf\{k> \gamma_{n-1}\colon |S_k|=0 \} = \inf\{k\ge 0\colon \xi(0,k)\ge n\},\quad n\ge 1. \end{align*}

Similarly, the inverse of Brownian local time at level 0 is defined as follows:

\begin{align*}\Gamma_x = \inf\{t\ge 0\colon L^{0}_{t}(B) >x\},\quad x\ge 0.\end{align*}

Donsker’s invariance principle tells us that the reflected random walk converges to the reflected Brownian motion by proper time and space scaling. It is natural to consider the scaling limit of the local times. However, one cannot get it directly from the continuous mapping theorem because the local time is not a continuous function of the process. For the case of simple random walks, this is confirmed by Rogers [Reference Rogers21].

Theorem A. (Rogers.) Assume $(S_n)_{n\ge 0}$ is a simple symmetric random walk. Then, as $n\to \infty$ ,

(1.1) \begin{equation}\biggl(\dfrac{\xi(\lfloor nx\rfloor,\gamma_n)}{n}\colon x\ge 0\biggr)\Longrightarrow \bigl( L^{x}_{\Gamma_1}(|B|)\colon x\ge 0 \bigr),\end{equation}

where the symbol $\Longrightarrow$ denotes the convergence in distribution in the space $D[0,\infty)$ .

Note that this type of scaling for local times is closely related to the classical Ray–Knight theorems. In fact, Rogers’ proof relies heavily on the intrinsic branching structure of simple random walks revealed by Dwass [Reference Dwass10]. The local time $\xi(\lfloor nx\rfloor,\gamma_n)$ can be represented in terms of a critical Galton–Watson (GW) process. As a consequence, convergence of local times follows from the corresponding results for GW processes, and the scaling limits are continuous-state branching processes, which in law equal the right-hand side of (1.1) by the second Ray–Knight theorem. The idea of connecting local times and branching processes has recently been used by Hong et al. (see [Reference Hong, Yang and Zhou13], [Reference Yang24]), and they extended Rogers’ result to random walks with bounded jumps based on the multitype branching process with immigration which is hidden in the path (see [Reference Hong and Wang11], [Reference Hong and Zhang12]).

Later, Denisov and Wachtel [Reference Denisov and Wachtel7] considered general random walks for which the increments are centred and belong to the domain of attraction of an $\alpha$ -stable law with $1<\alpha\le 2$ . They showed that the convergence of (1.1) holds in the sense of finite-dimensional distributions by the method of moments. Since there exists no branching structure for random walks with unbounded jumps, the proof of tightness is still an open problem for the local time of general random walks.

In this paper we are interested in the scaling limits of the local time of random walks conditioned to stay positive. It is well known that the phrase ‘random walks conditioned to stay positive’ has at least two different interpretations. In the first, we consider a random walk conditioned on the event that the first n values are positive; this is a discrete version of meander, and it has been known for a long time that a suitably rescaled version of this process converges weakly to a Brownian meander (see Iglehart [Reference Iglehart14]). The second interpretation involves conditioning on the event that the random walks always stay positive, and so can be thought of as a discrete version of the Bessel process, i.e. random walks conditioned to stay positive under $\mathbb{P}^+$ (one can make sense of this probability by means of the Doob h-transform; see [Reference Bertoin and Doney2] or below). Similarly, a suitably rescaled version of this process converges weakly to a Bessel process (see Bryn-Jones and Doney [Reference Bryn-Jones and Doney5]).

1.2. Conditioned random walks

To begin with, we introduce the random walks conditioned to stay positive in the meander sense. For $x\ge 0$ , we define the exit time

\begin{equation*}\tau_x = \inf\{ n> 0\colon x+S_n \le 0\},\end{equation*}

and write $\tau\;:\!=\; \tau_{0}$ for convenience. Let $S^{(m)}_n$ denote the random variable $S_n$ under the conditional probability $\mathbb{P}(\cdot\mid \tau>n)$ , and its local time is defined as

\begin{align*}\xi^{(m)}(x,n)=\#\bigl\{0<k\le n\colon S^{(m)}_k = x\bigr\},\quad x\in \mathbb{Z}_{+},\ n\ge 1.\end{align*}

The continuous-time analogue of $(S^{(m)}_n)_{n\ge 0}$ is known as the Brownian meander (see [Reference Durrett, Iglehart and Miller9], [Reference Iglehart14]), which may be considered as the Brownian motion $(B_t)_{t\ge 0}$ conditioned to stay positive on the time interval (0, 1], and is defined as follows:

\begin{align*}B^{(m)}_t=\dfrac{|B(\alpha+(1-\alpha)t)|}{\sqrt{1-\alpha}},\quad 0\le t \le 1,\end{align*}

where $\alpha=\sup\{t\in [0,1]\colon B_t=0\}$ is the last passage time at 0 before time 1. Let us introduce its local time at the level $x\ge 0$ before time $t\in [0,1]$ :

\begin{align*}L^{x}_t(B^{(m)})=\lim_{\varepsilon\to 0}\dfrac{1}{\varepsilon}\int^{t}_{0}\textbf{1}_{[x,x+\varepsilon)}\bigl(B^{(m)}_s\bigr)\,{\mathrm{d}} s.\end{align*}

This limit exists almost surely (see [Reference Takacs22]). Note that $(S^{(m)}_n)_{n\ge 0}$ is the random walk $(S_n)_{n\ge 0}$ conditioned to stay positive up to epoch n, so it is natural to conjecture that the local time of $(S^{(m)}_n)_{n\ge 0}$ before time n converges to the local time of the Brownian meander by proper scaling. Recently, this has been confirmed by Afanasyev (see [Reference Afanasyev1]).

Theorem B. (Afanasyev.) Assume $\mathbb E X_1 = 0$ and $\mathbb{E}(X_1^2)=: \sigma^2 \in(0,\infty)$ . Then, as $n\to \infty$ ,

(1.2) \begin{equation}\biggl(\dfrac{\sigma\xi^{(m)}(\lfloor \sigma\sqrt{n}x \rfloor,n)}{\sqrt{n}}\colon x\ge 0\biggr)\Longrightarrow \bigl( L^{x}_1(B^{(m)})\colon x\ge 0 \bigr).\end{equation}

The main purpose of this paper is to extend Rogers’ and Afanasyev’s results respectively to the local times of random walks conditioned to stay positive in the sense of h-transforms. Before stating our results precisely, we recall the essentials of the conditioning to stay positive for random walks.

The weakly descending ladder process $(\sigma_n , H_n)_{n\ge 0}$ is defined recursively as $\sigma_0 = H_0 = 0$ , and for $n\ge 1$ ,

\begin{equation*}\sigma_n = \inf\{k> \sigma_{n-1}\colon S_k \le S_{\sigma_{n-1}}\},\quad H_n = -S_{\sigma_{n}}.\end{equation*}

We let V(x) denote the renewal function associated with $(H_n)_{n\ge 0}$ , which is a positive function defined by

\begin{equation*}V(x) = \sum_{n\ge 0}\,\mathbb{P}(H_n \le x),\quad x\ge 0.\end{equation*}
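The renewal function can be estimated by simulation. The sketch below assumes the simple symmetric walk, for which the weak ladder height increments are 0 or 1 with probability $\tfrac12$ each, so one expects $V(x)\approx 2(x+1)$ at integer $x$; truncating each path at `max_steps` biases the estimate slightly downward.

```python
import random

def weak_ladder_heights(max_steps, rng):
    """Weakly descending ladder heights H_0 = 0, H_1, ... of a simple symmetric
    walk, observed within the first max_steps steps of the walk."""
    heights, s, last_ladder_value = [0], 0, 0
    for _ in range(max_steps):
        s += 1 if rng.random() < 0.5 else -1
        if s <= last_ladder_value:      # weakly descending ladder epoch
            last_ladder_value = s
            heights.append(-s)
    return heights

def renewal_V(x, max_steps=2000, paths=400, seed=0):
    """Monte Carlo estimate of V(x) = sum_n P(H_n <= x)."""
    rng = random.Random(seed)
    return sum(sum(1 for h in weak_ladder_heights(max_steps, rng) if h <= x)
               for _ in range(paths)) / paths
```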

Note that V(x) is the expected number of weakly descending ladder heights which are $\le x$. It is well known that V is harmonic for the sub-Markov process obtained by killing $(S_n)_{n\ge 0}$ when it leaves the positive half-line (see [Reference Tanaka23]), that is,

\begin{equation*}V(x) = \mathbb{E} [V(x+S_1);\ x+S_1 > 0],\quad x\ge 0.\end{equation*}

Next we introduce a change of measure which is defined by the well-known Doob h-transform: for any $n\in \mathbb{N}$ and $A\in \sigma(S_1,\ldots,S_n)$ ,

\begin{equation*}\mathbb{P}^{+}(A)\;:\!=\; \mathbb{E} [V(S_n);\ A\cap\{\tau>n\}].\end{equation*}

According to Kolmogorov’s extension theorem and the harmonic property of V(x), it is easy to see that $\mathbb{P}^{+}$ is well-defined. The random walk $(S_n)_{n\ge 0}$ under the new probability $\mathbb{P}^{+}$ is denoted by $S^+=(S^{+}_n)_{n\ge 0}$ and called a random walk conditioned to stay positive. This terminology is justified by the following weak convergence result (see [Reference Bertoin and Doney2, Theorem 1]):

\begin{equation*} \mathbb{P}^{+}(\!\cdot\!)=\lim_{n\to \infty}\mathbb{P}(\cdot\mid \tau > n).\end{equation*}

The local time $\xi^{+}(x,n)$ is defined as the visiting number of level x by $S^{+}$ during the first n steps, that is, for $x\in \mathbb{Z}_{+}$ and $n\ge 1$ ,

\begin{align*}\xi^{+}(x,n)=\#\{0<k\le n\colon S^{+}_k = x\}=\sum_{k=1}^{n}\textbf{1}{\{S^{+}_k = x\}}.\end{align*}

Similarly, the local time $\xi^{+}(x)$ is defined as the visiting number of level x by the whole path of $S^+$ ,

\begin{align*}\xi^{+}(x)=\#\{k>0\colon S^{+}_k = x\}=\sum_{k=1}^{\infty}\textbf{1}{\{S^{+}_k = x\}},\quad x\in \mathbb{Z}_{+}.\end{align*}
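For the simple symmetric walk the h-transform can be written out explicitly: the harmonic function is proportional to $x$, so the conditioned walk moves from $x\ge 1$ up with probability $(x+1)/(2x)$ and down with probability $(x-1)/(2x)$, with a forced first step from 0 to 1 (a classical computation, assumed in the sketch below; the function names are ours).

```python
import random

def conditioned_walk(n, rng):
    """Simple symmetric walk under P^+ via the explicit transition probabilities
    (x+1)/(2x) up and (x-1)/(2x) down from x >= 1; the first step is 0 -> 1."""
    path = [0, 1]
    x = 1
    for _ in range(n - 1):
        x += 1 if rng.random() < (x + 1) / (2 * x) else -1
        path.append(x)
    return path

def local_time_plus(path, level):
    """xi^+(level, n): visits of S^+ to `level` during steps 1..n."""
    return sum(1 for s in path[1:] if s == level)
```

Note that from $x=1$ the walk moves up with probability 1, so the simulated path never returns to 0, in line with the conditioning.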

Now we turn to the continuous-time analogue of $S^+$ . Let $\rho=(\rho_t\colon t\ge 0)$ denote a three-dimensional Bessel process starting from 0, which may be considered as the Brownian motion $(B_t)_{t\ge 0}$ conditioned to stay positive over the whole real half-line, and its local time is denoted by $L^{x}_{t}(\rho)$ , that is, for $x\ge 0$ and $t\ge 0$ ,

(1.3) \begin{equation}L^{x}_{t}(\rho)=\lim_{\varepsilon\to 0}\dfrac{1}{\varepsilon}\int^{t}_{0}\textbf{1}_{[x,x+\varepsilon)}(\rho_s)\,{\mathrm{d}} s. \end{equation}

Note that (1.3) is also well-defined for $t=\infty$ , and the right-hand side is almost surely finite (see [Reference Revuz and Yor19, Exercise VI.1.27]).

1.3. Main results

Our first result deals with the local time within a fixed but finite time period [0, t], which is an analogue of the relation (1.2) for the random walk conditioned to stay positive under $\mathbb{P}^{+}$ .

Theorem 1.1. Assume $\mathbb E X_1 = 0$ and $\mathbb{E}(X_1^2)=: \sigma^2 \in(0,\infty)$ . Then, for any $t\in[0,\infty)$ , as $n\to \infty$ ,

(1.4) \begin{equation}\biggl(\dfrac{\sigma\xi^{+}(\lfloor \sigma\sqrt{n}x\rfloor, \lfloor nt \rfloor)}{\sqrt{n}}\colon x\ge 0\biggr)\Longrightarrow \bigl( L^{x}_{t}(\rho)\colon x\ge 0 \bigr).\end{equation}

Remark 1.1. The main idea is to make use of the mutual absolute continuity between the two conditioned random walks $S^+$ and $S^{(m)}$, and then apply Afanasyev’s invariance principle (1.2). To our knowledge, this continuity argument was originally raised by Bolthausen [Reference Bolthausen3] and then developed by Caravenna and Chaumont [Reference Caravenna and Chaumont6], who have shown that a rescaled version of random walk conditioned to stay positive converges in distribution (in the functional sense) towards the corresponding stable Lévy process conditioned to stay positive.

Now we are interested in the local time within the time period $[0, \infty)$ . We prove the convergence in the sense of finite-dimensional distributions, which is related to the Ray–Knight theorem of three-dimensional Bessel processes (see [Reference Roger and Yor20, page 32]), so can also be seen as an analogue of the relation (1.1) for the random walk conditioned to stay positive under $\mathbb{P}^{+}$ . The proof is based on the derivation of asymptotics for mixed moments of the local time $\xi^{+}(x)$ in the same spirit as Denisov and Wachtel [Reference Denisov and Wachtel7].

Theorem 1.2. Assume $\mathbb E X_1 = 0$ and $\mathbb{E}(X_1^2)=: \sigma^2 \in(0,\infty)$ . Then, as $n\to \infty$ ,

(1.5) \begin{equation}\biggl(\dfrac{\sigma^2\xi^{+}(\lfloor nx\rfloor)}{n}\colon x\ge 0\biggr)\stackrel{\mathrm{f.d.d.}}{\longrightarrow} \bigl( L^{x}_{\infty}(\rho)\colon x\ge 0 \bigr),\end{equation}

where the symbol $\stackrel{\mathrm{f.d.d.}}{\longrightarrow}$ denotes the convergence in the sense of finite-dimensional distributions in the space $D[0,\infty)$ .

If $(S_n)_{n\ge 0}$ is a simple symmetric random walk, the above result can be strengthened as follows, based on the intrinsic branching structure of simple random walks.

Theorem 1.3. Assume $(S_n)_{n\ge 0}$ is a simple symmetric random walk. Then, as $n\to \infty$ ,

\begin{equation*} \biggl(\dfrac{\xi^{+}(\lfloor nx\rfloor)}{n}\colon x\ge 0\biggr)\Longrightarrow \bigl( L^{x}_{\infty}(\rho)\colon x\ge 0 \bigr).\end{equation*}

Remark 1.2. The proof of Theorem 1.3 relies on Tanaka’s pathwise construction for random walks under $\mathbb{P}^{+}$ and the intrinsic branching structure of simple random walks. Consequently, the local time of $S^+$ can be expressed in terms of a critical Galton–Watson process with immigration (GWI). In the same vein as Rogers [Reference Rogers21], this embedding into a branching process ensures weak convergence in the functional sense. However, we do not know how to prove the tightness of the rescaled local time sequence under the conditions of Theorem 1.2, since there exists no branching structure for general random walks.

1.4. Outline of the paper

The exposition of this paper is organized as follows. In Section 2 we prove Theorem 1.1 based on the mutual absolute continuity between the conditioned processes and Afanasyev’s invariance principle for local times. Then we prove Theorem 1.3 in Section 3. The assumption of the simple symmetric random walk is essential because we will use the intrinsic branching structure within the path of the walk, which is revealed by Dwass [Reference Dwass10]. In Section 4 we determine the asymptotic behaviour of the mixed moments of the local time $\xi^{+}(x)$, and then in Section 5 we can prove Theorem 1.2 by the method of moments.

2. Proof of Theorem 1.1

The main purpose of this section is to prove Theorem 1.1. To this end, we first show that (1.4) holds for $t=1$ , that is, for any bounded and continuous functional $H\colon D[0,\infty)\to\mathbb{R}$ ,

(2.1) \begin{equation}\lim_{n\to \infty}\mathbb{E}\biggl[H\biggl(\dfrac{\sigma\xi^{+}(\lfloor \sigma\sqrt{n}x\rfloor,n)}{\sqrt{n}}\colon x\ge 0\biggr)\biggr] = \mathbb{E}\bigl[H\bigl(L^{x}_{1}(\rho)\colon x\ge 0\bigr)\bigr].\end{equation}

By the definition of $S^{(m)}$ and $S^{+}$, it is easy to see that they are absolutely continuous with respect to each other; hence

\begin{equation*}\mathbb{E}\biggl[H\biggl(\dfrac{\sigma\xi^{+}(\lfloor \sigma\sqrt{n}x\rfloor,n)}{\sqrt{n}}\colon x\ge 0\biggr)\biggr] = \mathbb{E}\biggl[V_n\bigl(S^{(m)}_n\bigr)H\biggl(\dfrac{\sigma\xi^{(m)}(\lfloor \sigma\sqrt{n}x\rfloor,n)}{\sqrt{n}}\colon x\ge 0\biggr)\biggr],\end{equation*}

where $V_n(x)\;:\!=\; \mathbb{P}(\tau>n)\cdot V(x)$ is the rescaled renewal function.

For the continuous-time analogue, we recall the following lemma due to Imhof [Reference Imhof15], which shows the absolute continuity relation between the Brownian meander and three-dimensional Bessel process.

Lemma 2.1. Let $(B^{(m)}_t\colon 0\le t\le 1)$ be a standard Brownian meander and let $(\rho_t\colon t\ge 0)$ be a three-dimensional Bessel process starting from 0. For any measurable and non-negative functional $F\colon D[0,1]\to\mathbb{R}$ , we have

\begin{equation*}\mathbb{E} [F(\rho_t\colon 0\le t\le 1)]=\mathbb{E}\biggl[\sqrt{\dfrac{2}{\pi}}\,B^{(m)}_1\, F\bigl(B^{(m)}_t\colon 0\le t\le 1\bigr)\biggr]. \end{equation*}

By Lemma 2.1 we can rewrite the right-hand side of (2.1) as follows:

(2.2) \begin{equation}\mathbb{E}\bigl[H\bigl(L^{x}_{1}(\rho)\colon x\ge 0\bigr)\bigr]=\mathbb{E}\biggl[\sqrt{\dfrac{2}{\pi}}\,B^{(m)}_1\,H\bigl(L^{x}_1\bigl(B^{(m)}\bigr)\colon x\ge 0\bigr)\biggr].\end{equation}

Combining (2.1) and (2.2), it suffices to show that

(2.3) \begin{equation}\lim_{n\to \infty} \mathbb{E}\bigl[V_n\bigl(S^{(m)}_n\bigr)H\bigl(\xi^{(m)}_n\bigr)\bigr] = \mathbb{E}\biggl[\sqrt{\dfrac{2}{\pi}}\,B^{(m)}_1\,H\bigl(L_1\bigl(B^{(m)}\bigr)\bigr)\biggr], \end{equation}

where for convenience we let

\begin{align*}\xi^{(m)}_n = \biggl(\dfrac{\sigma\xi^{(m)}(\lfloor \sigma\sqrt{n}x\rfloor,n)}{\sqrt{n}}\colon x\ge 0\biggr),\quad L_1\bigl(B^{(m)}\bigr)=\bigl(L^{x}_1\bigl(B^{(m)}\bigr)\colon x\ge 0\bigr).\end{align*}

The basic idea is to use the fact (see equation (3.12) of [Reference Caravenna and Chaumont6]) that, for any $M>0$ ,

(2.4) \begin{equation}\lim_{n\to\infty}\sup_{x\in [0,M]} \biggl|V_n(\sigma \sqrt{n}x)-\sqrt{\dfrac{2}{\pi}}\,x\biggr|=0, \end{equation}

and then to apply the invariance principle (1.2). However, some care is needed, because the functions $V_n(\sigma\sqrt{n} x)$ and ${({{2}/{\pi}})}^{1/2}\, x$ are unbounded. To overcome this difficulty, we introduce for $M>0$ the cut function $I_M(x)$ , which can be viewed as a continuous version of the indicator function $\textbf{1}_{(-\infty,M]}(x)$ :

\begin{equation*}I_M(x)= \begin{cases} 1, & x\le M,\\[5pt] M+1-x, & M\le x\le M+1,\\[5pt] 0, & x\ge M+1.\end{cases}\end{equation*}
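For reference, the cut function is the obvious piecewise-linear interpolation (a one-line check, with our own argument order):

```python
def I_M(x, M):
    """Continuous cut-off: 1 on (-inf, M], linear on [M, M+1], 0 on [M+1, inf)."""
    if x <= M:
        return 1.0
    if x <= M + 1:
        return float(M + 1 - x)
    return 0.0

assert I_M(0.5, 1) == 1.0 and I_M(1.5, 1) == 0.5 and I_M(3.0, 1) == 0.0
```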

Then we restrict the values of $S^{(m)}_n/\sigma\sqrt{n}$ and $B^{(m)}_1$ to a compact set. More precisely, the left-hand side of (2.3) can be decomposed as

(2.5) \begin{align}\mathbb{E}\bigl[V_n\bigl(S^{(m)}_n\bigr)H\bigl(\xi^{(m)}_n\bigr)\bigr]&=\mathbb{E}\biggl[V_n\bigl(S^{(m)}_n\bigr)H\bigl(\xi^{(m)}_n\bigr)I_M\biggl(\dfrac{S^{(m)}_n}{\sigma\sqrt{n}}\biggr)\biggr]\nonumber \\[5pt] &\quad + \mathbb{E}\biggl[V_n\bigl(S^{(m)}_n\bigr)H\bigl(\xi^{(m)}_n\bigr)\biggl(1-I_M\biggl(\dfrac{S^{(m)}_n}{\sigma\sqrt{n}}\biggr)\biggr)\biggr],\end{align}

and the right-hand side of (2.3) can be decomposed as

(2.6) \begin{align}\mathbb{E}\biggl[\sqrt{\dfrac{2}{\pi}}\,B^{(m)}_1\,H\bigl(L_1\bigl(B^{(m)}\bigr)\bigr)\biggr]&=\mathbb{E}\biggl[\sqrt{\dfrac{2}{\pi}}\,B^{(m)}_1\,H\bigl(L_1\bigl(B^{(m)}\bigr)\bigr)I_M\bigl(B^{(m)}_1 \bigr)\biggr] \nonumber \\[5pt] &\quad +\mathbb{E}\biggl[\sqrt{\dfrac{2}{\pi}}\,B^{(m)}_1\,H\bigl(L_1\bigl(B^{(m)}\bigr)\bigr)\bigl(1-I_M\bigl(B^{(m)}_1 \bigr)\bigr)\biggr].\end{align}

Since H is bounded by some positive constant $C_1$ and the second terms of the right-hand side of (2.5) and (2.6) are non-negative, it follows by the triangle inequality that

\begin{align*} &\biggl|\mathbb{E}\bigl[V_n\bigl(S^{(m)}_n\bigr)H\bigl(\xi^{(m)}_n\bigr)\bigr] - \mathbb{E}\biggl[\sqrt{\dfrac{2}{\pi}}\,B^{(m)}_1\,H\bigl(L_1\bigl(B^{(m)}\bigr)\bigr)\biggr] \biggr|\nonumber \\[5pt] &\quad\le \biggl| \mathbb{E}\biggl[V_n\bigl(S^{(m)}_n\bigr)H\bigl(\xi^{(m)}_n\bigr)I_M\biggl(\dfrac{S^{(m)}_n}{\sigma\sqrt{n}}\biggr)\biggr] - \mathbb{E}\biggl[\sqrt{\dfrac{2}{\pi}}\,B^{(m)}_1\,H\bigl(L_1\bigl(B^{(m)}\bigr)\bigr)I_M\bigl(B^{(m)}_1 \bigr)\biggr] \biggr| \nonumber \\[5pt] &\quad\quad + C_1\,\mathbb{E}\biggl[V_n\bigl(S^{(m)}_n\bigr)\biggl(1-I_M\biggl(\dfrac{S^{(m)}_n}{\sigma\sqrt{n}}\biggr)\biggr)\biggr] + C_1\,\mathbb{E}\biggl[\sqrt{\dfrac{2}{\pi}}\,B^{(m)}_1\bigl(1-I_M\bigl(B^{(m)}_1 \bigr)\bigr)\biggr].\end{align*}

By the definition of $S^{(m)}_n$ and the harmonic property of V(x), we get

\begin{equation*}\mathbb{E}\bigl[V_n\bigl(S^{(m)}_n\bigr)\bigr]=\mathbb{E}[V_n(S_n)\mid \tau>n]=\mathbb{E}[V(S_n);\,\tau>n]=1,\end{equation*}

and it follows from Lemma 2.1 that

(2.7) \begin{equation}\mathbb{E}\biggl[\sqrt{\dfrac{2}{\pi}}\,B^{(m)}_1\biggr] = 1 .\end{equation}

This implies that

(2.8) \begin{align}&\biggl|\mathbb{E}\bigl[V_n\bigl(S^{(m)}_n\bigr)H\bigl(\xi^{(m)}_n\bigr)\biggr] - \mathbb{E}\biggl[\sqrt{\dfrac{2}{\pi}}\,B^{(m)}_1\,H\bigl(L_1\bigl(B^{(m)}\bigr)\bigr)\biggr] \biggr|\nonumber \\[5pt] &\quad \le \biggl| \mathbb{E}\biggl[V_n\bigl(S^{(m)}_n\bigr)H\bigl(\xi^{(m)}_n\bigr)I_M\biggl(\dfrac{S^{(m)}_n}{\sigma\sqrt{n}}\biggr)\biggr] - \mathbb{E}\biggl[\sqrt{\dfrac{2}{\pi}}\,B^{(m)}_1\,H\bigl(L_1\bigl(B^{(m)}\bigr)\bigr)I_M\bigl(B^{(m)}_1 \bigr)\biggr] \biggr| \nonumber \\[5pt] &\quad\quad + C_1\biggl(1-\mathbb{E}\biggl[V_n\bigl(S^{(m)}_n\bigr) I_M\biggl(\dfrac{S^{(m)}_n}{\sigma\sqrt{n}}\biggr)\biggr]\biggr) + C_1\biggl(1-\mathbb{E}\biggl[\sqrt{\dfrac{2}{\pi}}\,B^{(m)}_1 I_M\bigl(B^{(m)}_1 \bigr)\biggr]\biggr).\end{align}

According to the proof of (1.2) in [Reference Afanasyev1], Afanasyev actually proved the convergence of the joint processes,

\begin{align*}\biggl\{\biggl(\dfrac{S^{(m)}_{\lfloor nt \rfloor}}{\sigma\sqrt{n}},\,\dfrac{\sigma\xi^{(m)}(\lfloor \sigma\sqrt{n}x \rfloor,n)}{\sqrt{n}}\biggr)\colon t\in [0,1],\ x\ge 0\biggr\}\qquad\qquad\qquad\qquad \nonumber \\[5pt] \Longrightarrow \bigl\{\bigl(B^{(m)}_t,\, L^{x}_1\bigl(B^{(m)}\bigr)\bigr)\colon t\in [0,1],\ x\ge 0 \bigr\}, \end{align*}

where the symbol $\Longrightarrow$ denotes the convergence in distribution in the space $D[0,1]\times D[0,\infty)$ . Thus, for any bounded and continuous functional $F\colon D[0,1]\times D[0,\infty)\to\mathbb{R}$ , we have

(2.9) \begin{equation}\lim_{n\to \infty}\mathbb{E}\bigl[F\bigl(\widetilde{S}^{(m)}_n,\xi^{(m)}_n\bigr)\bigr] = \mathbb{E}\bigl[F\bigl(B^{(m)},L_1\bigl(B^{(m)}\bigr)\bigr)\bigr],\end{equation}

where for convenience we use

\begin{align*}\widetilde{S}^{(m)}_n = \biggl(\dfrac{S^{(m)}_{\lfloor nt \rfloor}}{\sigma\sqrt{n}}\colon t\in [0,1]\biggr).\end{align*}

Let $f_M(x)=xI_M(x),\, x\ge 0$ ; then $f_M H$ is a bounded and continuous functional defined on the space $\mathbb{R}_{+}\times D[0,\infty)$ . In particular, it follows from (2.9) that

(2.10) \begin{equation}\lim_{n\to \infty}\mathbb{E}\biggl[f_M\biggl(\dfrac{S^{(m)}_n}{\sigma\sqrt{n}}\biggr) H\bigl(\xi^{(m)}_n\bigr)\biggr] = \mathbb{E}\bigl[f_M\bigl(B^{(m)}_1\bigr) H\bigl(L_1\bigl(B^{(m)}\bigr)\bigr)\biggr].\end{equation}

By the triangle inequality we obtain

\begin{align*} &\biggl| \mathbb{E}\biggl[V_n\bigl(S^{(m)}_n\bigr)H\bigl(\xi^{(m)}_n\bigr)I_M\biggl(\dfrac{S^{(m)}_n}{\sigma\sqrt{n}}\biggr)\biggr] - \mathbb{E}\biggl[\sqrt{\dfrac{2}{\pi}}\,B^{(m)}_1\,H\bigl(L_1\bigl(B^{(m)}\bigr)\bigr)I_M\bigl(B^{(m)}_1 \bigr)\biggr] \biggr| \nonumber \\[5pt] &\quad \le \biggl| \mathbb{E}\biggl[\sqrt{\dfrac{2}{\pi}}\,\dfrac{S^{(m)}_n}{\sigma\sqrt{n}}\, H\bigl(\xi^{(m)}_n\bigr)I_M\biggl(\dfrac{S^{(m)}_n}{\sigma\sqrt{n}}\biggr)\biggr] - \mathbb{E}\biggl[\sqrt{\dfrac{2}{\pi}}\,B^{(m)}_1\,H\bigl(L_1\bigl(B^{(m)}\bigr)\bigr)I_M\bigl(B^{(m)}_1 \bigr)\biggr] \biggr| \nonumber \\[5pt] &\quad\quad + C_1\sup_{x\in [0,M]} \biggl|V_n(\sigma \sqrt{n}x)-\sqrt{\dfrac{2}{\pi}}\,x\biggr|.\end{align*}

It follows from (2.4) and (2.10) that

(2.11) \begin{equation}\lim_{n\to \infty}\mathbb{E}\biggl[V_n\bigl(S^{(m)}_n\bigr)H\bigl(\xi^{(m)}_n\bigr)I_M\biggl(\dfrac{S^{(m)}_n}{\sigma\sqrt{n}}\biggr)\biggr] = \mathbb{E}\biggl[\sqrt{\dfrac{2}{\pi}}\,B^{(m)}_1\,H\bigl(L_1\bigl(B^{(m)}\bigr)\bigr)I_M\bigl(B^{(m)}_1 \bigr)\biggr].\end{equation}

Observe that this equation also yields the convergence as $n\to\infty$ of the second term in the right-hand side of (2.8) towards the third term (just take $H\equiv 1$ ), and note that the third term can be made arbitrarily small by choosing M sufficiently large due to (2.7). Therefore, from (2.11) it actually follows that the left-hand side of (2.8) vanishes as $n\to\infty$ , that is, equation (2.3) holds true.

Next, we adapt the above argument to show that (1.4) holds for any $t\in [0,\infty)$ . First note that Afanasyev’s proof in [Reference Afanasyev1] can be adapted to show a more general version of (1.2), that is, for $t\in[0,1]$ , as $n\to \infty$ ,

\begin{equation*} \biggl(\dfrac{\sigma\xi^{(m)}(\lfloor \sigma\sqrt{n}x \rfloor,\lfloor nt \rfloor)}{\sqrt{n}}\colon x\ge 0\biggr)\Longrightarrow \bigl( L^{x}_t\bigl(B^{(m)}\bigr)\colon x\ge 0 \bigr).\end{equation*}

Consequently, by the continuity argument it is easy to see that (1.4) holds for $t\in[0,1]$. To handle the case $t>1$, we extend the definition of the Brownian meander. For any fixed $M\in\mathbb{Z}_+$, we redefine the Brownian meander on [0, M], that is,

\begin{align*}\widetilde{B}^{(m)}_t\;:\!=\; \sqrt{M}\,B^{(m)}\biggl(\dfrac{t}{M}\biggr) =\sqrt{\dfrac{M}{1-\alpha}}\,\biggl|B\biggl(\alpha+(1-\alpha)\dfrac{t}{M}\biggr)\biggr|,\quad 0\le t \le M.\end{align*}

Now, by the scaling properties of the Brownian meander $\widetilde{B}^{(m)}$ and its local time, it is not hard to see that Iglehart’s and Afanasyev’s invariance principles can be extended as follows: as $n\to\infty$,

\begin{equation*}\biggl(\dfrac{S_{\lfloor nt \rfloor}}{\sqrt{n}}\colon t\in[0,M] \Bigm|\tau>Mn\biggr)\Longrightarrow \bigl( \widetilde{B}^{(m)}_t\colon t\in[0,M] \bigr),\end{equation*}

and for any $t\in[0,M]$ ,

\begin{equation*}\biggl(\dfrac{\sigma\widetilde{\xi}(\lfloor \sigma\sqrt{n}x \rfloor,\lfloor nt \rfloor)}{\sqrt{n}}\colon x\ge 0\Bigm| \tau>Mn\biggr)\Longrightarrow \bigl( L^{x}_t\bigl(\widetilde{B}^{(m)}\bigr)\colon x\ge 0 \bigr),\end{equation*}

where $\widetilde{\xi}(x,n) = \#\{0<k\le n\colon S_k = x\}$ is the local time of $(S_n)_{n\ge 0}$. Hence, by the continuity argument and since $M\in\mathbb{Z}_+$ is chosen arbitrarily, we obtain that (1.4) holds for all $t\in[0,\infty)$. Thus the proof of Theorem 1.1 is complete.

3. Proof of Theorem 1.3

In this section we assume $(S_n)_{n\ge 0}$ is a simple symmetric random walk. We will use the intrinsic branching structure within the path of the walk. For $n\ge 0$ , define

\begin{align*}T_n=\inf\{k\ge0\colon S_k=n\},\end{align*}

which is the first hitting time of site n by the walk. Clearly $T_n <\infty$ for any $n\ge 0$, since the walk is recurrent, i.e. $\limsup_{n\to \infty}S_n=\infty$ and $\liminf_{n\to \infty}S_n=-\infty$ almost surely. For any fixed $n \ge 1$, let $U^{n}(0)=1$, and for ${k\ge 1}$ denote

\begin{equation*}U^{n}(k)=\#\{T_{n-1}\le m < T_n\colon S_m=-k+n, S_{m+1}=-k+n-1\}. \end{equation*}
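The quantities $U^{n}(k)$ can be read off a path directly; a small sketch with a deterministic check (the helper `T` below recomputes hitting times by a naive scan):

```python
def U(path, n, k):
    """U^n(k): number of steps from level n-k down to n-k-1 between the hitting
    times T_{n-1} and T_n; by convention U^n(0) = 1."""
    if k == 0:
        return 1
    def T(level):  # first hitting time of `level`
        return next(m for m, s in enumerate(path) if s == level)
    t0, t1 = T(n - 1), T(n)
    return sum(1 for m in range(t0, t1)
               if path[m] == n - k and path[m + 1] == n - k - 1)

# Path 0, -1, 0, 1: between T_0 = 0 and T_1 = 3 there is one step 0 -> -1,
# so U^1(1) = 1 and U^1(2) = 0.
path = [0, -1, 0, 1]
assert U(path, 1, 1) == 1 and U(path, 1, 2) == 0
```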

The connection between branching processes and simple random walks is as follows; we refer the reader to [Reference Dwass10] and [Reference Kesten, Kozlov and Spitzer17] for the proof and further details.

Lemma 3.1. $\{U^{n}(k)\colon k\ge 0\}_{n\ge 1}$ are independent Galton–Watson processes, with the same offspring distribution $\{p_k\}_{k\ge 0}$ , where $p_k=(\frac{1}{2})^{k+1}$ .

Remark 3.1. Note that the branching mechanism of the GW process is a geometric distribution, which can be interpreted as follows: starting from $-k+n$, if a jump from $-k+n$ to $-k+n-1$ is considered a success and a jump from $-k+n$ to $-k+n+1$ a failure, then the number of failures before the first success follows the geometric distribution $\{p_k\}_{k\ge 0}$.

Next we recall Tanaka’s pathwise construction for random walks conditioned to stay positive (see [Reference Kersting and Vatutin16], [Reference Tanaka23]). Let $( \tau_k^{+}, H_k^{+} )_{k\ge 0}$ be the sequence of strictly ascending ladder epochs and heights, that is, $\tau_0^{+}=H_0^{+}=0$ , and for $k\ge 1$ ,

\begin{equation*}\tau_k^{+}=\inf\bigl\{ n> \tau_{k-1}^{+}\colon S_{n} > S_{\tau_{k-1}^{+}}\bigr\},\quad H_k^{+} = S_{\tau_{k}^{+}}. \end{equation*}

Define $e_1, e_2,\ldots$ to be the time-reversed excursions of $(S_n)_{n\ge 0}$ from its running supremum,

\begin{equation*}e_n = \bigl( 0, S_{\tau_{n}^{+}}-S_{\tau_{n}^{+}-1}, \ldots,\ S_{\tau_{n}^{+}}-S_{\tau_{n-1}^{+}} \bigr) \end{equation*}

for $n\ge 1$ . For convenience write $e_n = (e_n(0), e_n(1),\ldots, e_n(\tau_{n}^{+}-\tau_{n-1}^{+}))$ as an alternative for the steps of each $e_n$ . According to Tanaka’s construction, the random walk conditioned to stay positive $S^+ = (S_n^+)_{n\ge 0}$ has a pathwise realization by glueing these time-reversed excursions end to end in the following way:

\begin{equation*}S_n^+ = H_k^+ + e_{k+1}(n-\tau_{k}^{+}),\quad \text{if}\ \tau_{k}^{+}< n \le \tau_{k+1}^{+}. \end{equation*}
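Tanaka’s construction is easy to carry out for a concrete step sequence; the sketch below builds $S^+$ from the increments of S by gluing the time-reversed excursions end to end (steps after the last ladder epoch are discarded, and the function name is ours):

```python
def tanaka_construction(steps):
    """Pathwise construction of S^+ from the increments of S."""
    s_vals = [0]
    for dx in steps:
        s_vals.append(s_vals[-1] + dx)
    ladder = [0]                               # strictly ascending ladder epochs
    for m in range(1, len(s_vals)):
        if s_vals[m] > s_vals[ladder[-1]]:     # new running maximum
            ladder.append(m)
    s_plus = [0]
    for prev, cur in zip(ladder, ladder[1:]):  # excursion between two epochs
        for j in range(1, cur - prev + 1):
            # S^+_{prev+j} = H_k + e_{k+1}(j), with e_{k+1}(j) = S_cur - S_{cur-j}
            s_plus.append(s_vals[prev] + s_vals[cur] - s_vals[cur - j])
    return s_plus

# Steps +1, -1, +1, +1 give S = 1, 0, 1, 2 and the glued path S^+ = 1, 2, 3, 2.
assert tanaka_construction([1, -1, 1, 1]) == [0, 1, 2, 3, 2]
```

Since each ladder epoch is a strict running maximum, every glued value is at least 1, so the constructed path indeed stays positive after time 0.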

For simple random walks, it is easy to see that $\tau^{+}_k=T_k$ and $H^{+}_k=k$ , for any $k\ge 0$ . By the definition of $U^n(k)$ , it can be verified that

\begin{equation*}U^{n}(k)=\#\bigl\{T_{n-1}\le m < T_n\colon S^{+}_m=k+n, S^{+}_{m+1}=k+n-1\bigr\}. \end{equation*}

Therefore, using the definition of local time, we obtain

(3.1) \begin{align}\xi^{+}(x,T_n)&=U^{1}(x-1)+U^{1}(x)\nonumber \\[5pt] &\quad + U^{2}(x-2)+U^{2}(x-1)\nonumber \\[5pt] &\quad + \cdots \nonumber \\[5pt] &\quad + U^{n}(x-n)+U^{n}(x-n+1).\end{align}

It is not hard to see that $\{T_n\}_{n\ge 0}$ is the sequence of prospective minima of $S^{+}$, that is, epochs at which the walk attains a minimum with respect to its entire future development:

\begin{align*} T_n = \min\bigl\{m>T_{n-1}\colon S^{+}_{m+k}>S^{+}_{m}\ \text{for all $ k\ge 1$}\bigr\}.\end{align*}

Hence $\xi^{+}(n)=\xi^{+}(n,T_n)$ by the fact that $S^{+}_{k}>S^{+}_{T_n}=n$ for $k>T_n$ . Thus we can reformulate (3.1) as follows:

(3.2) \begin{align}\xi^{+}(n)&=U^{1}(n-1)+U^{2}(n-2)+\cdots+U^{n-1}(1)+U^{n}(0)\nonumber \\[5pt] &\quad + U^{1}(n)+U^{2}(n-1)+\cdots+U^{n-1}(2)+U^{n}(1).\end{align}

By Lemma 3.1, $\{U^{n}(k)\colon k\ge 0\}_{n\ge 1}$ are independent Galton–Watson processes. If we denote

\begin{align*}Z_n\;:\!=\; U^{1}(n)+U^{2}(n-1)+\cdots+U^{n}(1)+U^{n+1}(0),\quad n\ge 1,\end{align*}

then $\{Z_n\}_{n\ge 1}$ is a GWI process with offspring distribution $\{p_k\}_{k\ge 0}$ and immigration distribution $\delta_1$ , that is, there is only one immigrant in each generation. Using (3.2), we get

(3.3) \begin{equation}\xi^{+}(n)=Z_{n-1}+Z_n-1. \end{equation}
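The process $Z_n$ can be simulated directly from its branching description: geometric offspring with $p_k=(\frac12)^{k+1}$ and exactly one immigrant per generation, started from the single immigrant $Z_0=1$, so that $\mathbb{E}Z_n = n+1$ at criticality. A sketch (function names are ours):

```python
import random

def geometric_offspring(rng):
    """Offspring law p_k = (1/2)^(k+1): failures before the first success."""
    k = 0
    while rng.random() < 0.5:
        k += 1
    return k

def gwi(n, rng):
    """Critical GWI process: geometric offspring plus one immigrant per
    generation, started from the single immigrant Z_0 = 1."""
    z = [1]
    for _ in range(n):
        z.append(sum(geometric_offspring(rng) for _ in range(z[-1])) + 1)
    return z
```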

Next we recall the scaling limits of GWI processes (see [Reference Li18, Theorem 3.43]): as $n\to\infty$ ,

(3.4) \begin{equation}\biggl(\dfrac{Z_{\lfloor nt\rfloor}}{n}\colon t\ge 0\biggr)\Longrightarrow (X_t\colon t\ge 0 ),\end{equation}

where $(X_t)_{t\ge 0}$ is a CBI process (continuous-state branching process with immigration) defined by the stochastic differential equation

(3.5) \begin{equation}{\mathrm{d}} X_t=\sqrt{2 X_t}\,{\mathrm{d}} B_t + {\mathrm{d}} t,\quad X_0 = 0.\end{equation}
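A crude Euler–Maruyama discretisation of (3.5) can be sketched as follows (the clipping at 0, the step size, and the horizon are our own arbitrary choices; note $\mathbb{E}X_t = t$ since the drift is 1):

```python
import math, random

def simulate_cbi(horizon=1.0, dt=1e-3, seed=0):
    """Euler-Maruyama scheme for dX = sqrt(2 X) dB + dt, X_0 = 0, clipping X
    at 0 so that the square root stays real."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(int(horizon / dt)):
        x += math.sqrt(2 * x) * rng.gauss(0.0, math.sqrt(dt)) + dt
        x = max(x, 0.0)
    return x
```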

Applying (3.3), (3.4), and (3.5), we obtain the scaling limit of the local time:

(3.6) \begin{equation}\biggl(\dfrac{\xi^{+}(\lfloor nt\rfloor)}{n}\colon t\ge 0\biggr)\Longrightarrow \bigl(\widetilde{X}_t\colon t\ge 0 \bigr),\end{equation}

where $(\widetilde{X}_t)_{t\ge 0}$ is also a CBI process and satisfies the stochastic differential equation

\begin{equation*}{\mathrm{d}} \widetilde{X}_t=2\sqrt{\widetilde{X}_t}\,{\mathrm{d}} B_t + 2{\mathrm{d}} t,\quad \widetilde{X}_0 = 0.\end{equation*}

By the definition of BESQ (see [Reference Revuz and Yor19, Definition XI.1.1]), $(\widetilde{X}_t)_{t\ge 0}$ is the square of a two-dimensional Bessel process starting from 0, denoted by $\textit{BESQ}^{2}_{0}$. According to a variant of the Ray–Knight theorems (see [Reference Roger and Yor20, page 32]), the law of $(L^{x}_{\infty}(\rho)\colon x\ge 0 )$ coincides with that of $\textit{BESQ}^{2}_{0}$. Applying (3.6) and the above arguments, we get

\begin{equation*}\biggl(\dfrac{\xi^{+}(\lfloor nx\rfloor)}{n}\colon x\ge 0\biggr)\Longrightarrow \bigl( L^{x}_{\infty}(\rho)\colon x\ge 0 \bigr).\end{equation*}

Thus the proof of Theorem 1.3 is complete.
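As a numerical illustration (outside the proof), one can simulate the representation (3.3) directly. For the simple symmetric walk the offspring law of the $U^i$ is geometric, $p_k = 2^{-(k+1)}$ — this specific choice is an assumption of the sketch below (cf. the branching structure of Dwass [Reference Dwass10]), not something stated in this section. Since each generation receives one immigrant and the offspring mean is one, $\mathbb{E}Z_k = k+1$ , hence $\mathbb{E}\xi^{+}(n) = 2n$ , consistent with $\mathbb{E}L^{x}_{\infty}(\rho) = 2x$ under the scaling of Theorem 1.3.

```python
import random

def gwi_step(z, rng):
    """One generation of the GWI process: each of the z current individuals
    produces a Geometric(1/2) number of children on {0, 1, 2, ...}
    (p_k = 2^{-(k+1)}), then a single immigrant arrives."""
    children = 0
    for _ in range(z):
        while rng.random() < 0.5:   # count failures before the first success
            children += 1
    return children + 1

def local_time(n, rng):
    """xi^+(n) = Z_{n-1} + Z_n - 1 as in (3.3), with Z_0 = 1 (first immigrant)."""
    z = 1
    for _ in range(n - 1):
        z = gwi_step(z, rng)
    return z + gwi_step(z, rng) - 1

rng = random.Random(2024)
n, samples = 40, 3000
mean = sum(local_time(n, rng) for _ in range(samples)) / samples
# one immigrant per generation and critical offspring give E[Z_k] = k + 1,
# hence E[xi^+(n)] = 2n exactly; the sample mean should be close to 2n = 80
assert abs(mean / (2 * n) - 1) < 0.15
```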

4. Moments of local time

The goal of this section is to determine the asymptotic behaviour of the mixed moments of local time $\xi^{+}(x)$ , which will be used to establish Theorem 1.2 by the method of moments in the next section.

Proposition 4.1. Assume $\mathbb E X_1 = 0$ and $\mathbb{E}(X_1^2)=: \sigma^2 \in(0,\infty)$ . Let $\mathcal{S}_m$ be the set of permutations of $\{1, \ldots ,m\}$ . Then, for any $(x_1,\ldots,x_m)\in \mathbb{R}_+^{m}$ ,

(4.1) \begin{equation}\lim_{n\to\infty}\mathbb{E}\Biggl[\prod_{i=1}^{m} \dfrac{\sigma^2\xi^{+}(\lfloor nx_i \rfloor )}{n}\Biggr]=2^m\sum_{\sigma\in\mathcal{S}_m}x_{\sigma(m)}\prod_{i=1}^{m-1}\min\{x_{\sigma(i)},x_{\sigma(i+1)}\}.\end{equation}

In particular, if $x_i=x\in\mathbb{R}_+$ for $i=1,2,\ldots,m$ , then

(4.2) \begin{equation}\lim_{n\to\infty}\mathbb{E}\biggl[\biggl(\dfrac{\sigma^2\xi^{+}(\lfloor nx \rfloor )}{n}\biggr)^m\biggr]=(2x)^m\, m!.\end{equation}
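The reduction from (4.1) to (4.2) is elementary: when all $x_i=x$ , every permutation contributes $x\cdot x^{m-1}=x^m$ and there are $m!$ permutations. A quick numerical check of this reduction (a Python sketch; the function below simply evaluates the right-hand side of (4.1)):

```python
import math
from itertools import permutations

def mixed_moment_limit(xs):
    """Right-hand side of (4.1):
    2^m * sum_{sigma} x_{sigma(m)} * prod_{i<m} min(x_{sigma(i)}, x_{sigma(i+1)})."""
    m = len(xs)
    total = 0.0
    for sigma in permutations(range(m)):
        term = xs[sigma[-1]]
        for i in range(m - 1):
            term *= min(xs[sigma[i]], xs[sigma[i + 1]])
        total += term
    return 2 ** m * total

# all x_i equal: every permutation contributes x^m, so the sum is m! * x^m
x, m = 1.7, 4
assert abs(mixed_moment_limit([x] * m) - (2 * x) ** m * math.factorial(m)) < 1e-9
```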

Proof. Observe that

(4.3) \begin{align}\prod_{i=1}^{m}\xi^{+}(nx_i) &= \sum_{j_1,\ldots,j_m\ge 1} \textbf{1}{\bigl\{S^+_{j_1}=nx_1,\ldots, S^+_{j_m}=nx_m\bigr\}} \nonumber \\[5pt] &\le \sum_{\sigma\in \mathcal{S}_m} \sum_{j_1\le \cdots\le j_m} \textbf{1}{\bigl\{S^+_{j_1}=nx_{\sigma(1)},\ldots, S^+_{j_m}=nx_{\sigma(m)}\bigr\}}.\end{align}

Here and below, $\xi^{+}(nx_i)$ means $\xi^{+}(\lfloor nx_i\rfloor )$ for notational convenience. Similarly,

(4.4) \begin{equation}\prod_{i=1}^{m}\xi^{+}(nx_i) \ge \sum_{\sigma\in \mathcal{S}_m} \sum_{j_1< \cdots< j_m} \textbf{1}{\bigl\{S^+_{j_1}=nx_{\sigma(1)},\ldots, S^+_{j_m}=nx_{\sigma(m)}\bigr\}}.\end{equation}

Taking expectations in (4.3) and applying Proposition 2.6 from [Reference Denisov and Wachtel7], we obtain that as $n\to \infty$ ,

(4.5) \begin{align}\mathbb{E}\Biggl[\prod_{i=1}^{m} \xi^{+}(nx_i)\Biggr] &\le \mathbb{E}\Biggl[\sum_{\sigma\in \mathcal{S}_m} \sum_{j_1\le \cdots\le j_m} \textbf{1}{\bigl\{S^+_{j_1}=nx_{\sigma(1)},\ldots, S^+_{j_m}=nx_{\sigma(m)}\bigr\}} \Biggr]\nonumber\\[5pt] & = \mathbb{E}^+\Biggl[\sum_{\sigma\in \mathcal{S}_m} \sum_{j_1\le \cdots\le j_m} \textbf{1}{\{S_{j_1}=nx_{\sigma(1)},\ldots, S_{j_m}=nx_{\sigma(m)}\}} \Biggr]\nonumber\\[5pt] & =\mathbb{E}\Biggl[\sum_{\sigma\in \mathcal{S}_m}V(nx_{\sigma(m)})\sum_{j_1\le \cdots\le j_m} \textbf{1}{\{S_{j_1}=nx_{\sigma(1)},\ldots, S_{j_m}=nx_{\sigma(m)}, \tau>j_m\}} \Biggr]\nonumber\\[5pt] & \sim \dfrac{n^{m-1}}{\mathbb{E}H^{+}}\biggl(\dfrac{2}{\sigma^2}\biggr)^{m-1}\sum_{\sigma\in \mathcal{S}_m}V(nx_{\sigma(m)}) \prod_{i=1}^{m-1}\min\{x_{\sigma(i)},x_{\sigma(i+1)}\}.\end{align}

By the renewal theorem, $V(x)\sim x/\mathbb{E}H_1$ , as $x\to \infty$ . Applying Theorem 4.5 from [Reference Kersting and Vatutin16], we find that $\mathbb{E}H_1 \cdot\mathbb{E}H^{+}_1={{\sigma^2}/{2}}$ . Combining this and (4.5), we obtain

\begin{equation*} \limsup_{n\to\infty}\mathbb{E}\Biggl[\prod_{i=1}^{m} \dfrac{\sigma^2\xi^{+}(nx_i)}{n}\Biggr]\le 2^m\sum_{\sigma\in\mathcal{S}_m}x_{\sigma(m)}\prod_{i=1}^{m-1}\min\{x_{\sigma(i)},x_{\sigma(i+1)}\}.\end{equation*}

Taking expectations in (4.4) and applying Proposition 2.6 from [Reference Denisov and Wachtel7] again, similarly we can obtain the lower bound

\begin{equation*} \liminf_{n\to\infty}\mathbb{E}\Biggl[\prod_{i=1}^{m} \dfrac{\sigma^2\xi^{+}(nx_i)}{n}\Biggr]\ge 2^m\sum_{\sigma\in\mathcal{S}_m}x_{\sigma(m)}\prod_{i=1}^{m-1}\min\{x_{\sigma(i)},x_{\sigma(i+1)}\}.\end{equation*}

Thus the proof is complete.

Remark 4.1. Our original proof of (4.2) required the walk to be right-continuous and relied on Tanaka’s construction for $S^+$ . We are deeply grateful to one of the referees for suggesting how to relax this restrictive condition and derive the asymptotics for the mixed moments (4.1) from Denisov and Wachtel [Reference Denisov and Wachtel7], which allows us to prove convergence of the finite-dimensional distributions in a straightforward manner. Furthermore, as pointed out by the referee, Proposition 4.1 and Theorem 1.2 can be generalized to stable random walks, although the distribution of the limiting process is currently unknown.

5. Proof of Theorem 1.2

We will prove that the finite-dimensional distributions in (1.5) do converge, that is, for $m\in \mathbb{Z}_+$ and $x=(x_1,\ldots,x_m)\in \mathbb{R}_+^{m}$ , as $n\to\infty$ ,

(5.1) \begin{equation}\biggl(\dfrac{\sigma^2\xi^{+}(nx_1)}{n},\, \dfrac{\sigma^2\xi^{+}(nx_2)}{n},\ldots,\dfrac{\sigma^2\xi^{+}(nx_m)}{n}\biggr)\stackrel{{\mathrm{d}} }{\longrightarrow} \bigl( L^{x_1}_{\infty}(\rho),L^{x_2}_{\infty}(\rho),\ldots,L^{x_m}_{\infty}(\rho) \bigr).\end{equation}

To this end, we study the Laplace transform of the left-hand side of (5.1). Obviously, for every $r\ge 1$ ,

(5.2) \begin{equation}\sum_{j=0}^{2r+1}(\!-\!1)^j\dfrac{y^j}{j!}\le {\mathrm{e}}^{-y} \le \sum_{j=0}^{2r}(\!-\!1)^j\dfrac{y^j}{j!},\quad y\ge 0.\end{equation}
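Inequality (5.2) is the enveloping property of the alternating Taylor series of ${\mathrm{e}}^{-y}$ : the Lagrange remainder carries the sign of the first omitted term, so odd-index partial sums lie below ${\mathrm{e}}^{-y}$ and even-index ones above. A quick numerical check (Python sketch):

```python
import math

def partial_exp(y, k):
    """Partial sum S_k(y) = sum_{j=0}^{k} (-y)^j / j! of the series for exp(-y)."""
    return sum((-y) ** j / math.factorial(j) for j in range(k + 1))

# (5.2): S_{2r+1}(y) <= exp(-y) <= S_{2r}(y) for all y >= 0
for y in (0.3, 1.0, 4.5):
    for r in (1, 2, 3):
        assert partial_exp(y, 2 * r + 1) <= math.exp(-y) <= partial_exp(y, 2 * r)
```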

From (5.2) it follows that for any $\lambda=(\lambda_1,\ldots,\lambda_{m})\in \mathbb{R}_+^{m}$ ,

(5.3) \begin{equation}\mathbb{E}\Biggl[ \exp\!{\Biggl\{-\sum_{i=1}^{m}\dfrac{\lambda_i \sigma^2\xi^{+}(nx_i)}{n}\Biggr\}} \Biggr] \le \sum_{j=0}^{2r}\dfrac{(\!-\!1)^j}{j!}\biggl(\dfrac{\sigma^2}{n}\biggr)^j\mathbb{E}\Biggl[\Biggl(\sum_{i=1}^{m}\lambda_i\xi^{+}(nx_i) \Biggr)^j\Biggr]\end{equation}

and

(5.4) \begin{equation}\mathbb{E}\Biggl[ \exp\!{\Biggl\{-\sum_{i=1}^{m}\dfrac{\lambda_i \sigma^2\xi^{+}(nx_i)}{n}\Biggr\}} \Biggr] \ge \sum_{j=0}^{2r+1}\dfrac{(\!-\!1)^j}{j!}\biggl(\dfrac{\sigma^2}{n}\biggr)^j \mathbb{E}\Biggl[\Biggl(\sum_{i=1}^{m}\lambda_i\xi^{+}(nx_i) \Biggr)^j\Biggr].\end{equation}

Let $a=(a_1,a_2,\ldots,a_{m})$ range over $m$-dimensional multi-indices, and write $|a|=a_1+\cdots+a_m$ , $a!=a_1!\cdots a_m!$ , and $\lambda^{a}=\lambda_1^{a_1}\cdots\lambda_m^{a_m}$ . Then, by the multinomial formula, we have

(5.5) \begin{equation}\mathbb{E}\Biggl[\Biggl(\sum_{i=1}^{m}\lambda_i\xi^{+}(nx_i) \Biggr)^j\Biggr] = \sum_{a:|a|=j}\dfrac{j!}{a!}\,\lambda^{a}\,\mathbb{E}\Biggl[\prod_{i=1}^{m} \xi^{+}(nx_i)^{a_i}\Biggr]. \end{equation}

Applying Proposition 4.1, we find that there exists some function $\phi_j(x,a)$ such that

(5.6) \begin{equation}\mathbb{E}\Biggl[\prod_{i=1}^{m} \xi^{+}(nx_i)^{a_i}\Biggr] \sim n^j \phi_j(x,a), \quad \text{as $ n\to\infty$.}\end{equation}

Furthermore, this proposition also gives the following bound:

\begin{equation*} \phi_j(x,a)\le j!\biggl(\dfrac{2}{\sigma^2}\biggr)^j \Bigl(\max_{1\le i\le m}x_i\Bigr)^j.\end{equation*}

Combining (5.5) and (5.6), we deduce that as $ n\to\infty$ ,

(5.7) \begin{equation}\mathbb{E}\Biggl[\Biggl(\sum_{i=1}^{m}\lambda_i\xi^{+}(nx_i) \Biggr)^j\Biggr]\sim n^{j}\psi_j(x,\lambda), \end{equation}

for some function $\psi_j(x,\lambda)$ satisfying

(5.8) \begin{equation}\psi_j(x,\lambda)\le j!\biggl(\dfrac{2}{\sigma^2}\biggr)^j\Bigl(\max_{1\le i\le m} x_i\Bigr)^{j}\Biggl(\sum_{i=1}^{m}\lambda_i \Biggr)^j. \end{equation}
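Indeed, identifying $\psi_j$ from (5.5) and (5.6), and then applying the bound on $\phi_j$ together with the multinomial theorem (for $\lambda_i\ge 0$ ), gives

\begin{align*}\psi_j(x,\lambda)=\sum_{a:|a|=j}\dfrac{j!}{a!}\,\lambda^{a}\,\phi_j(x,a)\le j!\biggl(\dfrac{2}{\sigma^2}\biggr)^{j}\Bigl(\max_{1\le i\le m}x_i\Bigr)^{j}\sum_{a:|a|=j}\dfrac{j!}{a!}\,\lambda^{a}=j!\biggl(\dfrac{2}{\sigma^2}\biggr)^{j}\Bigl(\max_{1\le i\le m}x_i\Bigr)^{j}\Biggl(\sum_{i=1}^{m}\lambda_i\Biggr)^{j}.\end{align*}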

Plugging (5.7) into (5.3) and (5.4), we obtain

\begin{equation*} \limsup_{n\to \infty}\mathbb{E}\Biggl[ \exp\!{\Biggl\{-\sum_{i=1}^{m}\dfrac{\lambda_i \sigma^2\xi^{+}(nx_i)}{n}\Biggr\}} \Biggr] \le \sum_{j=0}^{2r}\dfrac{(\!-\!1)^j}{j!}\sigma^{2j}\,\psi_j(x,\lambda)\end{equation*}

and

\begin{equation*} \liminf_{n\to \infty}\mathbb{E}\Biggl[ \exp\!{\Biggl\{-\sum_{i=1}^{m}\dfrac{\lambda_i \sigma^2\xi^{+}(nx_i)}{n}\Biggr\}} \Biggr] \ge \sum_{j=0}^{2r+1}\dfrac{(\!-\!1)^j}{j!}\sigma^{2j}\,\psi_j(x,\lambda).\end{equation*}

Note that the estimate (5.8) allows us to let $r\to\infty$ for $\lambda_i$ small enough. As a result, there exists $\delta>0$ such that if $\lambda_i\in [0,\delta)$ for all $1\le i\le m$ , then

\begin{equation*} \lim_{n\to \infty}\mathbb{E}\Biggl[ \exp\!{\Biggl\{-\sum_{i=1}^{m}\dfrac{\lambda_i \sigma^2\xi^{+}(nx_i)}{n}\Biggr\}} \Biggr] = \Psi(x,\lambda),\end{equation*}

where

\begin{align*}\Psi(x,\lambda) \;:\!=\; \sum_{j=0}^{\infty}\dfrac{(\!-\!1)^j}{j!}\sigma^{2j}\,\psi_j(x,\lambda).\end{align*}
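As a sanity check on why $\lambda_i$ must be small, consider $m=1$ : by (4.2), $\sigma^{2j}\,\psi_j(x,\lambda)=j!\,(2\lambda x)^j$ , so the series defining $\Psi$ is geometric and converges exactly when $2\lambda x<1$ , summing to $1/(1+2\lambda x)$ — the Laplace transform of an exponential law with mean $2x$ , as expected for the marginal $L^{x}_{\infty}(\rho)$ at a fixed level. A minimal Python sketch:

```python
def Psi_partial(x, lam, terms):
    """Partial sum of Psi(x, lam) for m = 1: by (4.2) each term
    (-1)^j / j! * sigma^(2j) * psi_j equals (-2 * lam * x)^j."""
    return sum((-2.0 * lam * x) ** j for j in range(terms))

x, lam = 1.0, 0.2                      # 2 * lam * x = 0.4 < 1: inside the radius
exact = 1.0 / (1.0 + 2.0 * lam * x)    # Laplace transform of Exp(mean 2x) at lam
assert abs(Psi_partial(x, lam, 60) - exact) < 1e-12
# for 2 * lam * x >= 1 the partial sums oscillate without converging, which is
# why the argument above takes lambda_i in [0, delta) before letting r -> infinity
```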

Notice also that (5.5) and (5.8) imply the continuity of $\lambda\mapsto\Psi(x,\lambda)$ on $[0,\delta)^{m}$ . By the continuity theorem for Laplace transforms, the distribution of the rescaled local time sequence

\begin{align*}\biggl(\dfrac{\sigma^2\xi^{+}(nx_1)}{n},\, \dfrac{\sigma^2\xi^{+}(nx_2)}{n},\ldots,\dfrac{\sigma^2\xi^{+}(nx_{m})}{n}\biggr)\end{align*}

converges weakly to a law $F_x$ , which is characterized by the Laplace transform

\begin{align*}\lambda \mapsto \Psi(x,\lambda).\end{align*}

The continuity of this function implies the consistency of the family of finite-dimensional distributions $F_x$ . Furthermore, by Theorem 1.3, the limiting process is $( L^{x}_{\infty}(\rho)\colon x\ge 0 )$ . Thus the proof is complete.

Acknowledgements

The authors are sincerely grateful to the anonymous referees for careful reading of the original manuscript and for helpful suggestions which improved the presentation of the paper. In particular, the authors are deeply thankful to one of the referees for providing an idea for relaxing a certain restrictive condition of Theorem 1.2 based on the results from Denisov and Wachtel [Reference Denisov and Wachtel7].

Funding information

The research was supported by NSFC (no. 11971062) and the National Key Research and Development Programme of China (no. 2020YFA0712900).

Competing interests

There were no competing interests to declare which arose during the preparation or publication process of this article.

References

Afanasyev, V. I. (2019). Convergence to the local time of Brownian meander. Discrete Math. Appl. 29, 149–158.
Bertoin, J. and Doney, R. A. (1994). On conditioning a random walk to stay nonnegative. Ann. Prob. 22, 2152–2167.
Bolthausen, E. (1976). On a functional central limit theorem for random walks conditioned to stay positive. Ann. Prob. 4, 480–485.
Borodin, A. N. (1982). On the asymptotic behavior of local times of recurrent random walks with finite variance. Theory Prob. Appl. 26, 758–772.
Bryn-Jones, A. and Doney, R. A. (2006). A functional limit theorem for random walk conditioned to stay non-negative. J. London Math. Soc. 74, 244–258.
Caravenna, F. and Chaumont, L. (2008). Invariance principles for random walks conditioned to stay positive. Ann. Inst. H. Poincaré Prob. Statist. 44, 170–190.
Denisov, D. and Wachtel, V. (2016). Universality of local times of killed and reflected random walks. Electron. Commun. Prob. 21, 1–11.
Denisov, D., Korshunov, D. and Wachtel, V. (2020). Renewal theory for transient Markov chains with asymptotically zero drift. Trans. Amer. Math. Soc. 373, 7253–7286.
Durrett, R. T., Iglehart, D. L. and Miller, D. R. (1977). Weak convergence to Brownian meander and Brownian excursion. Ann. Prob. 5, 117–129.
Dwass, M. (1975). Branching processes in simple random walk. Proc. Amer. Math. Soc. 51, 270–274.
Hong, W. and Wang, H. (2013). Intrinsic branching structure within $(L-1)$ random walk in random environment and its applications. Infinite Dimens. Anal. Quantum Prob. Relat. Top. 16, 1350006.
Hong, W. and Zhang, L. (2010). Branching structure for the transient $(1;R)$ -random walk in random environment and its applications. Infinite Dimens. Anal. Quantum Prob. Relat. Top. 13, 589–618.
Hong, W., Yang, H. and Zhou, K. (2015). Scaling limit of local time of Sinai’s random walk. Front. Math. China 10, 1313–1324.
Iglehart, D. L. (1974). Functional central limit theorems for random walks conditioned to stay positive. Ann. Prob. 2, 608–619.
Imhof, J. P. (1984). Density factorizations for Brownian motion, meander and the three-dimensional Bessel process, and applications. J. Appl. Prob. 21, 500–510.
Kersting, G. and Vatutin, V. (2017). Discrete Time Branching Processes in Random Environment. John Wiley, New York.
Kesten, H., Kozlov, M. V. and Spitzer, F. (1975). A limit law for random walk in a random environment. Compositio Math. 30, 145–168.
Li, Z. (2011). Measure-Valued Branching Markov Processes. Springer, Heidelberg.
Revuz, D. and Yor, M. (1999). Continuous Martingales and Brownian Motion, 3rd edn (Grundlehren der Mathematischen Wissenschaften 293). Springer, Berlin.
Roger, M. and Yor, M. (2008). Aspects of Brownian Motion. Springer, Berlin.
Rogers, L. C. G. (1984). Brownian local times and branching processes. In Seminar on Probability XVIII (Lecture Notes in Math. 1059), pp. 42–55. Springer, Berlin.
Takacs, L. (1995). Limit distributions for the Bernoulli meander. J. Appl. Prob. 32, 375–395.
Tanaka, H. (1989). Time reversal of random walks in one-dimension. Tokyo J. Math. 12, 159–174.
Yang, H. (2019). Scaling limit of the local time of the reflected (1,2)-random walk. Statist. Prob. Lett. 155, 108578.