
Limit theorems of occupation times of normalized binary contact path processes on lattices

Published online by Cambridge University Press:  27 August 2024

Xiaofeng Xue*
Affiliation:
Beijing Jiaotong University
*Postal address: School of Mathematics and Statistics, Beijing Jiaotong University, Beijing 100044, China. Email: [email protected]

Abstract

The binary contact path process (BCPP) introduced in Griffeath (1983) describes the spread of an epidemic on a graph and is an auxiliary model in the study of improving upper bounds of the critical value of the contact process. In this paper, we are concerned with limit theorems of the occupation time of a normalized version of the BCPP (NBCPP) on a lattice. We first show that the law of large numbers of the occupation time process is driven by the identity function when the dimension of the lattice is at least 3 and the infection rate of the model is sufficiently large conditioned on the initial state of the NBCPP being distributed with a particular invariant distribution. Then we show that the centered occupation time process of the NBCPP converges in finite-dimensional distributions to a Brownian motion when the dimension of the lattice and the infection rate of the model are sufficiently large and the initial state of the NBCPP is distributed with the aforementioned invariant distribution.

Type
Original Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

In this paper we are concerned with the normalized binary contact path process (NBCPP). For later use, we first introduce some notation. For $d\geq 1$ , the d-dimensional lattice is denoted by $\mathbb{Z}^d$ . For $x,y\in \mathbb{Z}^d$ , we write $x\sim y$ when they are neighbors. The origin of $\mathbb{Z}^d$ is denoted by O. Now we recall the definition of the binary contact path process (BCPP) introduced in [Reference Griffeath6]. The binary contact path process $\{\phi_t\}_{t\geq 0}$ on $\mathbb{Z}^d$ is a continuous-time Markov process with state space $\mathbb{X}=\{0,1,2,\ldots\}^{\mathbb{Z}^d}$ and evolves as follows. For any $x\in \mathbb{Z}^d$ , $x\sim y$ , and $t\geq 0$ ,

\begin{align*} \phi_t(x)\rightarrow \begin{cases} 0 & \text{at rate}\,\dfrac{1}{1+2\lambda d}, \\[7pt] \phi_t(x)+\phi_t(y) & \text{at rate}\,\dfrac{\lambda}{1+2\lambda d}, \end{cases}\end{align*}

where $\lambda$ is a positive constant. As a result, the generator $\Omega$ of $\{\phi_t\}_{t\geq 0}$ is given by

\begin{align*} \Omega f(\phi) = \frac{1}{1+2\lambda d}\sum_{x\in \mathbb{Z}^d}(f(\phi^{x,-})-f(\phi)) + \frac{\lambda}{1+2\lambda d}\sum_{x\in \mathbb{Z}^d}\sum_{y\sim x}(f(\phi^{x,y})-f(\phi))\end{align*}

for any f from $\mathbb{X}$ to $\mathbb{R}$ depending on finitely many coordinates and $\phi\in \mathbb{X}$ , where, for all $x,y,z\in \mathbb{Z}^d$ ,

\begin{equation*} \phi^{x,-}(z) = \begin{cases} \phi(z) & \text{if}\ z\neq x, \\ 0 & \text{if}\ z=x; \end{cases} \qquad \phi^{x,y}(z) = \begin{cases} \phi(z) & \text{if}\ z\neq x, \\ \phi(x)+\phi(y) & \text{if}\ z=x. \end{cases}\end{equation*}

Intuitively, $\{\phi_t\}_{t\geq 0}$ describes the spread of an epidemic on $\mathbb{Z}^d$. The integer value a vertex takes is the seriousness of the illness on this vertex. A vertex taking value 0 is healthy and one taking a positive value is infected. An infected vertex becomes healthy at rate ${1}/({1+2\lambda d})$. A vertex x is infected by a given neighbor y at rate ${\lambda}/({1+2\lambda d})$. When the infection occurs, the seriousness of the illness on y is added to that on x.
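For readers who want to experiment with these dynamics, the following is a minimal simulation sketch (not part of the model's definition and not used anywhere in this paper): it runs the BCPP on a finite torus $(\mathbb{Z}/L\mathbb{Z})^d$ instead of $\mathbb{Z}^d$, and the parameter values and the function name simulate_bcpp are illustrative choices only.

```python
import numpy as np

def simulate_bcpp(L=10, d=2, lam=1.0, T=5.0, rng=None):
    """Gillespie-type simulation of the binary contact path process on the
    torus (Z/LZ)^d up to time T, started from the all-ones configuration.

    Every site has total jump rate 1/(1+2*lam*d) + 2*d*lam/(1+2*lam*d) = 1,
    so events occur at overall rate L**d; conditioned on an event at x, it is
    a recovery (phi(x) -> 0) with probability 1/(1+2*lam*d), and otherwise an
    infection by a uniformly chosen neighbor y (phi(x) -> phi(x)+phi(y))."""
    rng = np.random.default_rng() if rng is None else rng
    phi = np.ones((L,) * d, dtype=np.int64)
    n_sites = L ** d
    p_recover = 1.0 / (1.0 + 2.0 * lam * d)
    t = rng.exponential(1.0 / n_sites)
    while t <= T:
        x = tuple(rng.integers(0, L, size=d))    # uniformly chosen site
        if rng.random() < p_recover:
            phi[x] = 0                           # recovery
        else:
            axis = int(rng.integers(0, d))       # uniformly chosen neighbor y of x
            sign = 1 if rng.random() < 0.5 else -1
            y = list(x)
            y[axis] = (y[axis] + sign) % L
            phi[x] += phi[tuple(y)]              # infection: add the illness on y to x
        t += rng.exponential(1.0 / n_sites)
    return phi

# The empirical spatial mean can be compared, heuristically, with
# exp(((2*lam*d - 1) / (1 + 2*lam*d)) * T), the solution of the mean ODE.
print(simulate_bcpp().mean())
```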

The BCPP $\{\phi_t\}_{t\geq 0}$ was introduced in [Reference Griffeath6] to improve upper bounds of critical values of contact processes (CPs) according to the fact that the CP $\{\xi_t\}_{t\geq 0}$ on $\mathbb{Z}^d$ can be equivalently defined as

\begin{align*} \xi_t(x)= \begin{cases} 0 & \text{if}\ \phi_{(1+2\lambda d)t}(x)=0, \\ 1 & \text{if}\ \phi_{(1+2\lambda d)t}(x)>0 \end{cases}\end{align*}

for any $x\in \mathbb{Z}^d$ . For a detailed survey of CPs, see [Reference Liggett11, Chapter 6] and [Reference Liggett12, Part II]. To emphasize the dependence on $\lambda$ , we also write $\xi_t$ as $\xi_t^\lambda$ . The critical value $\lambda_\mathrm{c}$ of the CP is defined as

\begin{align*} \lambda_\mathrm{c} = \sup\Bigg\{\lambda>0\colon \mathbb{P}\Bigg(\sum_{x\in\mathbb{Z}^d}\xi_t^{\lambda}(x)=0\,\text{for some}\,t>0 \mid \sum_{x}\xi^\lambda_0(x)=1\Bigg)=1\Bigg\}.\end{align*}

Applying the above coupling relationship between BCPP and CP, it is shown in [Reference Griffeath6] that $\lambda_\mathrm{c}(d)\leq {1}/({2d(2\gamma_d-1)})$ for $d\geq 3$, where $\gamma_d$ is the probability that the simple random walk on $\mathbb{Z}^d$ starting at O never returns to O again. In particular, $\lambda_\mathrm{c}(3)\leq 0.523$ as a corollary of the above upper bound. In [Reference Xue17], a modified version of BCPP is introduced and then a further improved upper bound of $\lambda_\mathrm{c}(d)$ is given for $d\geq 3$. It is shown in [Reference Xue17] that $\lambda_\mathrm{c}(d)\leq ({2-\gamma_d})/({2d\gamma_d})$ and consequently $\lambda_\mathrm{c}(3)\leq 0.340$.

The BCPP belongs to a family of continuous-time Markov processes called linear systems defined in [Reference Liggett11, Chapter 9], since there is a series of linear transformations $\{\mathcal{A}_k\colon k\geq 1\}$ on $\mathbb{X}$ such that $\phi_t=\mathcal{A}_k\phi_{t-}$ for some $k\geq 1$ at each jump moment t. As a result, for each $m\geq 1$, Kolmogorov–Chapman equations for

\begin{align*} \Bigg\{\mathbb{E}\Bigg(\prod_{i=1}^m\phi_t(x_i)\Bigg)\colon x_1,\ldots,x_m\in \mathbb{Z}^d\Bigg\}\end{align*}

are given by a series of linear ordinary differential equations, where $\mathbb{E}$ is the expectation operator. For the mathematical details, see [Reference Liggett11, Theorems 9.1.27 and 9.3.1].

For technical reasons, it is convenient to investigate a rescaled and time-changed version $\{\eta_t\}_{t\geq 0}$ of the BCPP, defined by

\begin{align*} \eta_t=\exp\bigg\{\frac{1-2d\lambda}{2d\lambda}t\bigg\}\phi_{(1+2d\lambda)t/(2d\lambda)}.\end{align*}

The process $\{\eta_t\}_{t\geq 0}$ is introduced in [Reference Liggett11, Chapter 9] and is called the normalized binary contact path process (NBCPP) since $\mathbb{E}\eta_t(x)=1$ for all $x\in \mathbb{Z}^d$ and $t>0$ conditioned on $\mathbb{E}\eta_0(x)=1$ for all $x\in \mathbb{Z}^d$ (see Proposition 2.1).
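As a heuristic check of this normalization (under the additional assumption that the initial state is spatially homogeneous with mean 1, as when $\eta_0=\vec{1}$), the transition rates of the BCPP give

\begin{align*} \frac{{\mathrm{d}}}{{\mathrm{d}} t}\mathbb{E}\phi_t(x) = -\frac{1}{1+2\lambda d}\mathbb{E}\phi_t(x) + \frac{\lambda}{1+2\lambda d}\sum_{y\sim x}\mathbb{E}\phi_t(y) = \frac{2\lambda d-1}{1+2\lambda d}\mathbb{E}\phi_t(x),\end{align*}

where the second equality uses spatial homogeneity, so $\mathbb{E}\phi_t(x)=\exp\{({2\lambda d-1})t/({1+2\lambda d})\}$; evaluating this at time $({1+2d\lambda})t/({2d\lambda})$ and multiplying by the factor $\exp\{({1-2d\lambda})t/({2d\lambda})\}$ in the definition of $\eta_t$ gives exactly 1.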

Here we recall some basic properties of the NBCPP given in [Reference Liggett11, Section 9.6]. The state space of the NBCPP $\{\eta_t\}_{t\geq 0}$ is $\mathbb{Y}=[0, +\infty)^{\mathbb{Z}^d}$ and the generator $\mathcal{L}$ of $\{\eta_t\}_{t\geq 0}$ is given by

\begin{equation*} \mathcal{L}f(\eta) = \frac{1}{2\lambda d}\sum_x(f(\eta^{x,-})-f(\eta)) + \frac{1}{2d}\sum_{x\in\mathbb{Z}^d}\sum_{y\sim x}(f(\eta^{x,y})-f(\eta)) + \bigg(\frac{1}{2\lambda d}-1\bigg)\sum_{x\in\mathbb{Z}^d}f^\prime_x(\eta)\eta(x)\end{equation*}

for any $\eta\in\mathbb{Y}$ , where $f_x^\prime$ is the partial derivative of f with respect to the coordinate $\eta(x)$ and, for all $x,y,z\in \mathbb{Z}^d$ ,

\begin{align*} \eta^{x,-}(z)= \begin{cases} \eta(z) & \text{if}\ z\neq x, \\ 0 & \text{if}\ z=x; \end{cases} \qquad \eta^{x,y}(z)= \begin{cases} \eta(z) & \text{if}\ z\neq x, \\ \eta(x)+\eta(y) & \text{if}\ z=x. \end{cases}\end{align*}
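Informally, the three terms of $\mathcal{L}$ can be read off from the definition of $\{\eta_t\}_{t\geq 0}$: the time change multiplies the BCPP jump rates by $({1+2d\lambda})/({2d\lambda})$, so that

\begin{align*} \frac{1+2d\lambda}{2d\lambda}\cdot\frac{1}{1+2\lambda d}=\frac{1}{2\lambda d}, \qquad \frac{1+2d\lambda}{2d\lambda}\cdot\frac{\lambda}{1+2\lambda d}=\frac{1}{2d}\end{align*}

are the recovery and per-neighbor infection rates appearing in the first two sums, while differentiating the deterministic factor $\exp\{({1-2d\lambda})t/({2d\lambda})\}$ in the definition of $\eta_t$ produces the drift term $({1}/({2\lambda d})-1)\sum_{x\in\mathbb{Z}^d}f^\prime_x(\eta)\eta(x)$.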

Applying [Reference Liggett11, Theorem 9.1.27], for any $x\in \mathbb{Z}^d$ , $t\geq0$ , and $\eta\in \mathbb{Y}$ ,

(1.1) \begin{equation} \frac{{\mathrm{d}}}{{\mathrm{d}} t}\mathbb{E}_\eta\eta_t(x) = -\mathbb{E}_\eta\eta_t(x) + \frac{1}{2d}\sum_{y\sim x}\mathbb{E}_\eta\eta_t(y),\end{equation}

where $\mathbb{E}_\eta$ is the expectation operator with respect to the NBCPP with initial state $\eta_0=\eta$ . For any $x,y\in \mathbb{Z}^d$ , $t\geq 0$ , and $\eta\in \mathbb{Y}$ , let $F_{\eta,t}(x,y)=\mathbb{E}_\eta(\eta_t(x)\eta_t(y))$ . Consider $F_{\eta,t}$ as a column vector from $\mathbb{Z}^d\times \mathbb{Z}^d$ to $[0, +\infty)$ ; applying [Reference Liggett11, Theorem 9.3.1], there exists a $(\mathbb{Z}^d)^2\times (\mathbb{Z}^d)^2$ matrix $H=\{H((x,y), (v,w))\}_{x,y,v,w\in \mathbb{Z}^d}$ , independent of $\eta, t$ , such that

(1.2) \begin{equation} \frac{{\mathrm{d}}}{{\mathrm{d}} t}F_{\eta, t}=HF_{\eta, t}\end{equation}

for any $\eta\in \mathbb{Y}$ and $t\geq 0$. For a precise expression for H, see Section 3, where the proofs of the main results of this paper are given. Roughly speaking, both $\mathbb{E}_\eta\eta_t(x)$ and $\mathbb{E}_\eta(\eta_t(x)\eta_t(y))$ are driven by linear ordinary differential equations, which is a basic property of general linear systems as we recalled above. For more properties of the NBCPP as consequences of (1.1) and (1.2), see Section 2.

In this paper we study the law of large numbers and the central limit theorem (CLT) of the occupation time process $\big\{\int_0^t\eta_u(O)\,{\mathrm{d}} u\big\}_{t\geq 0}$ of an NBCPP as $t\rightarrow+\infty$. The occupation time is the simplest example of the so-called additive functions of interacting particle systems, whose limit theorems have been popular research topics since the 1970s (see the references cited in [Reference Komorowski, Landim and Olla7]). In detail, let $\{X_t\}_{t\geq 0}$ be an interacting particle system with state space $S^{\mathbb{Z}^d}$, where $S\subseteq \mathbb{R}$; then $\int_0^tf(X_s)\,{\mathrm{d}} s$ is called an additive function for any $f\colon S^{\mathbb{Z}^d}\rightarrow \mathbb{R}$. If $\{X_t\}_{t\geq 0}$ starts from a stationary probability measure $\pi$ and $f\in L^1(\pi)$, then the law of large numbers of $\int_0^tf(X_s)\,{\mathrm{d}} s$ is a direct application of the Birkhoff ergodic theorem, which gives

\begin{align*} \lim_{t\rightarrow+\infty}\frac{1}{t}\int_0^tf(X_s)\,{\mathrm{d}} s = \mathbb{E}_\pi(f\mid\mathcal{I}) \,\text{almost surely (a.s.) and in $L^1$},\end{align*}

where $\mathcal{I}$ is the set of all invariant events. Furthermore, if the stationary process $\{X_t\}_{t\geq 0}$ starting from $\pi$ is ergodic, i.e. $\mathcal{I}$ is trivial, then $\mathbb{E}_\pi(f\mid\mathcal{I})=\mathbb{E}_\pi f=\int_{S^{\mathbb{Z}^d}}f(\eta)\,\pi({\mathrm{d}}\eta)$ almost surely. In the ergodic case, it is natural to further investigate whether a CLT corresponding to the above Birkhoff ergodic theorem holds, i.e. whether

\begin{align*} \lim_{t\rightarrow+\infty}\frac{1}{\sqrt{t}\,}\int_0^t(f(X_s)-\mathbb{E}_\pi f)\,{\mathrm{d}} s = N(0, \sigma^2)\end{align*}

in distribution for some $\sigma^2=\sigma^2(f)>0$ as $t\rightarrow+\infty$ . It is shown in [Reference Komorowski, Landim and Olla7] that if the generator of the ergodic process $\{X_t\}_{t\geq 0}$ satisfies a so-called sector condition, then the CLT above holds. As an application of this result, [Reference Komorowski, Landim and Olla7, Part 2] shows that mean-zero asymmetric exclusion processes starting from Bernoulli product measures are ergodic and satisfy the sector condition. Consequently, corresponding CLTs of additive functions are given. For exclusion processes, particles perform random walks on $\mathbb{Z}^d$ but jumps to occupied sites are suppressed. We refer the reader to [Reference Liggett11, Chapter 8] for the basic properties of exclusion processes.

CLTs of additive functions are also discussed for the supercritical contact process $\xi_t^{\lambda, d}$ , where $\lambda>\lambda_\mathrm{c}$ . It is shown in [Reference Liggett11, Chapter 6] that the supercritical contact process has a nontrivial ergodic stationary distribution $\overline{\nu}$ , which is called the upper invariant distribution. CLTs of additive functions of supercritical contact processes starting from $\overline{\nu}$ are given in [Reference Schonmann16]. Since marginal distributions of $\overline{\nu}$ do not have clear expressions as product measures, the proof of the main theorem in [Reference Schonmann16] follows a different strategy than that given in [Reference Komorowski, Landim and Olla7]. It is shown in [Reference Schonmann16] that additive functions of the supercritical contact process satisfy a so-called FKG condition, and then corresponding CLTs follow from [Reference Newman15, Theorem 3].

For a general interacting particle system starting from a stationary distribution $\pi$ which is not ergodic (or for which we do not know whether $\pi$ is ergodic), we should first study which type of f makes $\mathbb{E}_\pi(f\mid\mathcal{I})=\mathbb{E}_\pi f$ almost surely; only then is the investigation of a corresponding CLT meaningful. It is natural to first consider f of the simplest form $f(\eta)=\eta(x)$ for some $x\in \mathbb{Z}^d$; then the corresponding additive function reduces to the occupation time. CLTs of occupation times of interacting particle systems are given for critical branching random walks and voter models on $\mathbb{Z}^d$ in [Reference Birkner and Zähle1, Reference Cox and Griffeath3] respectively. For voter models, each $x\in \mathbb{Z}^d$ has an opinion in $\{0, 1\}$ and x adopts each neighbor’s opinion at rate 1. For critical branching random walks, there are several particles on each $x\in \mathbb{Z}^d$, the number of which is considered as the state of x. All particles perform independent random walks on $\mathbb{Z}^d$ and each particle dies with probability $\frac12$ or splits into two new particles with probability $\frac12$ at rate 1. As shown in [Reference Birkner and Zähle1, Reference Cox and Griffeath3], CLTs of occupation times of both models exhibit dimension-dependent phase transitions. For critical branching random walks, the CLTs above have three different forms in the respective cases where $d=3$, $d=4$, and $d\geq 5$. For voter models, the CLTs above have five different forms in the respective cases where $d=1$, $d=2$, $d=3$, $d=4$, and $d\geq 5$.

The investigation of the CLT of the occupation time of an NBCPP in this paper is greatly inspired by [Reference Birkner and Zähle1, Reference Cox and Griffeath3]. Several basic properties of NBCPPs are similar to those of the voter model and the critical branching random walk. For example, in all three models, the expectations of states of sites on $\mathbb{Z}^d$ are driven by linear ordinary differential equations. Furthermore, when $d\geq 3$ (and $\lambda$ is sufficiently large for an NBCPP), each model has a class of nontrivial extreme stationary distributions distinguished by a parameter representing the mean particle number on each site, the probability of a site taking opinion 1, and the mean seriousness of the illness on each site in critical branching random walks, voter models, and NBCPPs respectively. In addition, the voter model and NBCPP are both linear systems. In view of all these similarities, it is natural to ask whether CLTs similar to those of voter models and critical branching random walks hold for the NBCPP.

On the other hand, the evolution of NBCPPs has different mechanisms than those of voter models and critical branching random walks. For the voter model, the state of each site is bounded by 1. For the critical branching random walk, although the number of particles on a site is not bounded from above, each jump, birth, and death of a particle can only change one or two sites’ states by one. The above properties make it easy to bound high-order moments of states of sites for voter models and critical branching random walks. For NBCPPs, the seriousness of the illness on a site is unbounded and the increment of this seriousness when being further infected is also unbounded, which makes the estimation of $\mathbb{E}((\eta_t(x))^m)$ difficult for large m. Hence, it is also natural to discuss the influence of the above differences between NBCPPs and the other two models on the techniques utilized in the proof of the CLT of NBCPPs.

According to a calculation of the variance (see Remark 2.3), it is natural to guess that the CLT of the occupation time of an NBCPP should take three different forms in the respective cases where $d\geq 5$ , $d=4$ , and $d=3$ , as in [Reference Cox and Griffeath3, Theorem 1] and [Reference Birkner and Zähle1, Theorem 1.1]. However, in this paper we only give a rigorous result for part of the first case, i.e. the CLT of the occupation time for sufficiently large d, since we cannot bound the fourth moment of $\eta_t(O)$ uniformly for $t\geq 0$ when the dimension d is low. For our main result and the mathematical details of the proof, see Sections 2 and 3, respectively.

Since the 1980s, the large deviation principle (LDP) of the occupation time has also been a popular research topic for models such as exclusion processes, voter models, critical branching random walks, and critical branching $\alpha$ -stable processes [Reference Bramson, Cox and Griffeath2, Reference Deuschel and Rosen4, Reference Landim8Reference Li and Ren10, Reference Maillard and Mountford13]. It is natural to ask whether an LDP holds for the occupation time of an NBCPP. We think the core difficulty in the study of this problem is the estimation of $\mathbb{E}\exp\big\{\alpha\int_0^t\eta_s(O)\,{\mathrm{d}} s\big\}$ for $\alpha\neq 0$ . We will work on this problem as a further investigation.

We are also inspired by investigations of CLTs of empirical density fields of NBCPPs, which are also called fluctuations. For sufficiently large d and $\lambda$ , [Reference Xue and Zhao18] gives the fluctuation of the NBCPP $\{\eta_t^{\lambda,d}\}_{t\geq 0}$ starting from the configuration where all sites are in state 1; [Reference Nakashima14] gives another type of fluctuation of a class of linear systems starting from a configuration with finite total mass, including NBCPP as a special case. The boundedness of $\sup_{t\geq 0}\mathbb{E}\big(\big(\eta_t^{\lambda, d}(O)\big)^4\big)$ given in [Reference Xue and Zhao18] for sufficiently large d and $\lambda$ is a crucial property for the proofs of the main results in this paper. For the mathematical details, see Section 2.

2. Main results

In this section we give our main result. For later use, we first introduce some notation and definitions. We denote by $\{S_n\}_{n\geq 0}$ the discrete-time simple random walk on $\mathbb{Z}^d$ , i.e. $\mathbb{P}(S_{n+1}=y\mid S_n=x)={1}/{2d}$ for any $n\geq 0$ , $x\in \mathbb{Z}^d$ , and $y\sim x$ . We denote by $\{Y_t\}_{t\geq 0}$ the continuous-time simple random walk on $\mathbb{Z}^d$ with generator L given by $Lh(x)=({1}/{2d})\sum_{y\sim x}(h(y)-h(x))$ for any bounded h from $\mathbb{Z}^d$ to $\mathbb{R}$ and $x\in \mathbb{Z}^d$ . As defined in Section 1, we use $\gamma_d$ to denote the probability that $\{S_n\}_{n\geq 1}$ never returns to O again conditioned on $S_0=O$ , i.e.

\begin{align*} \gamma_d = \mathbb{P}(S_n\neq O\,\text{for all}\,n\geq 1 \mid S_0=O) = \mathbb{P}(Y_t\neq O\,\text{for any}\,t\geq 0\mid Y_0=x)\end{align*}

for any $x\sim O$ . For any $t\geq 0$ , we use $p_t(\,\cdot\,, \cdot\,)$ to denote the transition probabilities of $Y_t$ , i.e. $p_t(x,y)=\mathbb{P}(Y_t=y\mid Y_0=x)$ for any $x,y\in \mathbb{Z}^d$ . For any $x\in \mathbb{Z}^d$ , we define

(2.1) \begin{equation} \Phi(x)=\mathbb{P}(S_n=x\,\text{for some}\,n\geq 0\mid S_0=O).\end{equation}
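As a purely numerical illustration (not used in any argument of this paper), $\gamma_d$ and $\Phi(x)$ can be approximated by Monte Carlo simulation of truncated simple random walks; the truncation length, the sample size, and the target point below are arbitrary choices, and the truncation biases the estimate of $\gamma_d$ upward and that of $\Phi(x)$ downward.

```python
import numpy as np

def estimate_gamma_phi(d=3, target=(1, 1, 0), n_steps=2000, n_samples=100_000, seed=0):
    """Crude Monte Carlo estimates of
       gamma_d = P(S_n != O for all n >= 1 | S_0 = O)  and
       Phi(x)  = P(S_n = x for some n >= 0 | S_0 = O)
    for the simple random walk on Z^d, with every path truncated after n_steps steps."""
    rng = np.random.default_rng(seed)
    x = np.asarray(target)
    pos = np.zeros((n_samples, d), dtype=np.int64)
    returned = np.zeros(n_samples, dtype=bool)
    hit = (pos == x).all(axis=1)                 # Phi counts the step n = 0 as well
    for _ in range(n_steps):
        axis = rng.integers(d, size=n_samples)   # coordinate to move
        step = rng.choice(np.array([-1, 1]), size=n_samples)
        pos[np.arange(n_samples), axis] += step
        returned |= ~pos.any(axis=1)             # walk is back at the origin
        hit |= (pos == x).all(axis=1)            # walk has visited the target
    return 1.0 - returned.mean(), hit.mean()

gamma3, phi3 = estimate_gamma_phi()
print(gamma3)  # roughly 0.66 for d = 3, so 1/(2d(2*gamma3 - 1)) is roughly 0.52,
               # consistent with the bound lambda_c(3) <= 0.523 recalled in Section 1
print(phi3)    # truncated estimate of Phi((1, 1, 0))
```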

We use $\vec{1}$ to denote the configuration in $\mathbb{Y}$ where all vertices take the value 1. For any $\eta\in \mathbb{Y}$ , as defined in Section 1, we denote by $\mathbb{E}_\eta$ the expectation operator of $\{\eta_t\}_{t\geq 0}$ conditioned on $\eta_0=\eta$ . Furthermore, for any probability measure $\mu$ on $\mathbb{Y}$ , we use $\mathbb{E}_\mu$ to denote the expectation operator of $\{\eta_t\}_{t\geq 0}$ conditioned on $\eta_0$ being distributed with $\mu$ .

Now we recall more properties of the NBCPP $\{\eta_t\}_{t\geq 0}$ proved in [Reference Liggett11, Chapter 9] and [Reference Xue and Zhao18, Section 2].

Proposition 2.1. (Liggett, [Reference Liggett11].) For any $\eta\in \mathbb{Y}$ , $t\geq 0$ , and $x\in \mathbb{Z}^d$ ,

$$\mathbb{E}_\eta\eta_t(x)=\sum_{y\in \mathbb{Z}^d}p_t(x, y)\eta(y).$$

Furthermore, $\mathbb{E}_{\vec{1}}\,\eta_t(x)=1$ for any $x\in \mathbb{Z}^d$ and $t\geq 0$ .

Proposition 2.1 follows from (1.1) directly. In detail, since the generator L of the simple random walk on $\mathbb{Z}^d$ satisfies

\begin{align*} L(x,y)= \begin{cases} -1 & \text{if}\ x=y, \\ \dfrac{1}{2d} & \text{if}\ x\sim y, \\ 0 & \,\text{otherwise}, \end{cases}\end{align*}

we have $\mathbb{E}_\eta\eta_t(x)=\sum_{y\in \mathbb{Z}^d}{\mathrm{e}}^{tL}(x,y)\eta(y)$ by (1.1), where ${\mathrm{e}}^{tL}=\sum_{n=0}^{+\infty}{t^nL^n}/{n!}=p_t$.

Proposition 2.2. (Liggett, [Reference Liggett11].) There exist a series of functions $\{q_t\}_{t\geq 0}$ from $(\mathbb{Z}^d)^2$ to $\mathbb{R}$ and a series of functions $\{\hat{q}_t\}_{t\geq 0}$ from $(\mathbb{Z}^d)^4$ to $\mathbb{R}$ such that

(2.2) \begin{align} \mathbb{E}_{\vec{1}}\,(\eta_t(O)\eta_t(x)) & = \mathbb{E}_{\vec{1}}\,(\eta_t(y)\eta_t(x+y)) = \sum_{z\in \mathbb{Z}^d}q_t(x,z), \end{align}
(2.3) \begin{align} \mathbb{E}_\eta(\eta_t(x)\eta_t(y)) & = \sum_{z,w\in \mathbb{Z}^d}\hat{q}_t((x,y), (z,w))\eta(z)\eta(w) \end{align}

for any $t\geq 0$, $x,y\in \mathbb{Z}^d$, and $\eta\in \mathbb{Y}$. Furthermore,

(2.4) \begin{equation} q_t(O,w)=\sum_{(y,z):\,y-z=w}\hat{q}_t((x,x), (y,z)) \end{equation}

for any $x,w\in \mathbb{Z}^d$ .

Proposition 2.2 follows from (1.2). In detail, (2.3) holds by taking $\hat{q}_t={\mathrm{e}}^{tH}$ . When $\eta_0=\vec{1}$ , applying the spatial homogeneity of $\eta_t$ , we have

(2.5) \begin{equation} F_{\vec{1}, t}(O,x)=F_{\vec{1}, t}(O, -x)=F_{\vec{1}, t}(y, y+x)=F_{\vec{1}, t}(y+x, y).\end{equation}

Denoting the expression $F_{\vec{1}, t}(O,x)$ by $A_t(x)$ and applying (1.2) and (2.5), we have

\begin{align*}\frac{{\mathrm{d}}}{{\mathrm{d}} t}A_t=QA_t\end{align*}

for any $t\geq 0$, where $Q=\{q(x,y)\}_{x,y\in \mathbb{Z}^d}$ is a $\mathbb{Z}^d\times \mathbb{Z}^d$ matrix independent of t. Consequently, (2.2) holds by taking $q_t={\mathrm{e}}^{tQ}$. Since

\begin{align*} \frac{{\mathrm{d}}}{{\mathrm{d}} t}A_t(x) & = \frac{{\mathrm{d}}}{{\mathrm{d}} t}F_{\vec{1}, t}(z+x,z) \\ & = \sum_u\sum_vH((z+x,z), (u,v))F_{\vec{1}, t}(u,v) = \sum_{y\in \mathbb{Z}^d}\sum_{(u,v):u-v=y}H((z+x,z), (u,v))A_t(y),\end{align*}

the $\mathbb{Z}^d\times \mathbb{Z}^d$ matrix Q can be chosen such that $q(x,y)=\sum_{(u, v): u-v=y}H((z+x, z), (u, v))$ for any $x,y,z\in \mathbb{Z}^d$ . Applying this equation repeatedly, we have

\begin{align*} q^n(x,y)=\sum_{(u, v): u-v=y}H^n((z+x, z), (u, v))\end{align*}

for all $n\geq 1$ by induction, and then

(2.6) \begin{equation} q_t(x,y)=\sum_{(u, v): u-v=y}\hat{q}_t((z+x, z), (u, v)).\end{equation}

As a result, (2.4) holds.

Proposition 2.3. (Liggett, [Reference Liggett11].) When $d\geq 3$ and $\lambda>{1}/({2d(2\gamma_d-1)})$ , we have the following properties:

  1. (i) For any $x,y,z,w\in \mathbb{Z}^d$, $\lim_{t\rightarrow+\infty}q_t^{\lambda, d}(x,y) = \lim_{t\rightarrow+\infty}\hat{q}_t^{\lambda, d}((x,y), (z,w)) = 0$.

  2. (ii) Conditioned on $\eta_0=\vec{1}$ , $\eta_t$ converges weakly as $t\rightarrow+\infty$ to a probability measure $\nu_{\lambda, d}$ on $\mathbb{Y}$ .

  3. (iii) For any $x,y\in \mathbb{Z}^d$ ,

    \begin{align*} \lim_{t\rightarrow+\infty}\mathbb{E}_{\vec{1}}\,((\eta_t(x))^2) & = \sup_{t\geq 0}\mathbb{E}_{\vec{1}}\,((\eta_t(x))^2) = \mathbb{E}_{\nu_{\lambda, d}}((\eta_0(O))^2) = 1 + \frac{1}{h_{\lambda, d}}, \\ \lim_{t\rightarrow+\infty}{\rm Cov}_{\vec{1}}\,(\eta_t(x), \eta_t(y)) & = {\rm Cov}_{\nu_{\lambda, d}}(\eta_0(x), \eta_0(y)) = \frac{\Phi(x-y)}{h_{\lambda,d}}, \end{align*}
    where
    $$h_{\lambda, d}=\frac{2\lambda d(2\gamma_d-1)-1}{1+2d\lambda}$$
    and $\Phi$ is defined as in (2.1).

Proposition 2.3 follows from [Reference Liggett11, Theorem 2.8.13, Corollary 2.8.20, Theorem 9.3.17]. In detail, let $\{\overline{p}(x,y)\}_{x,y\in \mathbb{Z}^d}$ be the one-step transition probabilities of the discrete-time simple random walk on $\mathbb{Z}^d$, and $\{\overline{q}(x,y)\}_{x,y\in \mathbb{Z}^d}$ be defined as

\begin{align*} \overline{q}(x,y)= \begin{cases} \frac12{q(x,y)} & \text{if}\ x\neq y, \\[5pt] \frac12{q(x,y)}+1 & \text{if}\ x=y. \end{cases}\end{align*}

Furthermore, let $\overline{h}\colon \mathbb{Z}^d\rightarrow (0,+\infty)$ be defined as

\begin{align*} \overline{h}(x)=\frac{\Phi(x)+h_{\lambda,d}}{h_{\lambda,d}}.\end{align*}

Then, direct calculations show that $\overline{p}$, $\overline{q}$, and $\overline{h}$ satisfy the assumption of [Reference Liggett11, Theorem 2.8.13]. Consequently, applying [Reference Liggett11, Theorem 2.8.13], $\lim_{t\rightarrow+\infty}\overline{q}_t(x,y)=0$ for all $x,y\in \mathbb{Z}^d$, where $\overline{q}_t={\mathrm{e}}^{-t}{\mathrm{e}}^{t\overline{q}}$. It is easy to check that $q_t=\overline{q}_{2t}$ and hence the first limit in (i) is zero. By (2.6), $\hat{q}_t((x,y), (z,w))\leq q_t(x-y, z-w)$ and hence the second limit in (i) is zero. Therefore, (i) holds. It is not difficult to show that $\overline{p}$, $\overline{q}$, and $\overline{h}$ also satisfy the assumption of [Reference Liggett11, Corollary 2.8.20]. Applying this corollary, we have $\lim_{t\rightarrow+\infty}\sum_{y}q_t(x,y)=\overline{h}(x)$. Equation (2.2) shows that $\sum_{y}q_t(x,y)=\mathbb{E}_{\vec{1}}\,(\eta_t(z)\eta_t(z+x))$, and hence

\begin{align*} \lim_{t\rightarrow+\infty}\mathbb{E}_{\vec{1}}\,((\eta_t(x))^2) & = \overline{h}(O)=1+\frac{1}{h_{\lambda, d}}, \\ \lim_{t\rightarrow+\infty}{\rm Cov}_{\vec{1}}\,(\eta_t(x), \eta_t(y)) & = \overline{h}(x-y) - 1 = \frac{\Phi(x-y)}{h_{\lambda,d}}.\end{align*}

Note that the second limit also uses Proposition 2.1. These last two limits, along with (i), ensure that $\{q_t\}_{t\geq 0}$ and $\vec{1}$ satisfy the assumption of [Reference Liggett11, Theorem 9.3.17]. By this theorem, (ii) and (iii) hold.

Proposition 2.4. ([Reference Xue and Zhao18].) There exist an integer $d_0\geq 5$ and a real number $\lambda_0>0$ satisfying the following properties:

  1. (i) If $d\geq d_0$ and $\lambda\geq \lambda_0$, then $\lambda>{1}/({2d(2\gamma_d-1)})$.

  2. (ii) For any $d\geq d_0$ , $\lambda\geq \lambda_0$ , and $x\in \mathbb{Z}^d$ ,

    \begin{align*} \mathbb{E}_{\nu_{\lambda, d}}((\eta_0(O))^4) \leq \liminf_{t\rightarrow+\infty}\mathbb{E}^{\lambda,d}_{\vec{1}}((\eta_t(x))^4) < +\infty. \end{align*}
  3. (iii) For any $d\geq d_0$ and $\lambda\geq \lambda_0$ ,

    \begin{align*} \lim_{M\rightarrow+\infty} \underset{\substack{{(x,y,z)\in(\mathbb{Z}^d)^3:} \\\|x-y\|_1\wedge\|x-z\|_1\geq M }}{\sup} {\rm Cov}_{\nu_{\lambda,d}}((\eta_0(x))^2, \eta_0(y)\eta_0(z))=0, \end{align*}
    where $\|\cdot\|_1$ is the $l_1$ -norm on $\mathbb{R}^d$ and $a\wedge b=\min\{a,b\}$ for $a,b\in \mathbb{R}$ .

Here we briefly recall how to obtain Proposition 2.4 from the main results given in [Reference Xue and Zhao18]. Note that, in [Reference Xue and Zhao18], limit behaviors are discussed for $\eta_{tN^2}$ as $N\rightarrow+\infty$ with t fixed. All these behaviors can be equivalently given for $\eta_t$ as $t\rightarrow+\infty$. In [Reference Xue and Zhao18], a discrete-time symmetric random walk $\{S_n^{\lambda,d}\}_{n\geq 0}$ on $(\mathbb{Z}^d)^4$ and a function $\overline{H}^{\lambda,d}\colon (\mathbb{Z}^d)^4\times(\mathbb{Z}^d)^4\rightarrow [1, +\infty)$ are introduced such that

\begin{align*} \mathbb{E}^{\lambda,d}_{\vec{1}}\Bigg(\prod_{i=1}^4\eta_t(x_i)\Bigg) \leq \mathbb{E}_{\vec{x}}\Bigg(\prod_{n=0}^{+\infty}\overline{H}(S_n, S_{n+1})\Bigg)\end{align*}

for any $\vec{x}=(x_1, x_2, x_3, x_4)\in (\mathbb{Z}^d)^4$ and $t\geq 0$ . We refer the reader to [Reference Xue and Zhao18] for precise expressions of $\overline{H}$ and the transition probabilities of $\{S_n\}_{n\geq 0}$ . It is shown in [Reference Xue and Zhao18] that there exist $\,\hat{\!d}_0$ and $\hat{\lambda}_0$ such that

(2.7) \begin{equation} \sup_{\vec{x}\in(\mathbb{Z}^d)^4} \mathbb{E}_{\vec{x}}\Bigg(\Bigg(\prod_{n=0}^{+\infty}\overline{H}(S_n,S_{n+1})\Bigg)^{1+\varepsilon}\Bigg) < +\infty\end{equation}

for some $\varepsilon=\varepsilon(\lambda, d)>0$ when $d\geq \,\hat{\!d}_0$ and $\lambda\geq \hat{\lambda}_0$ . Taking

$$d_0=\max\{5, \,\hat{\!d}_0\}, \qquad \lambda_0=\max\bigg\{\hat{\lambda}_0, 1+\frac{1}{2d_0(2\gamma_{d_0}-1)}\bigg\},$$

(i) holds. By Proposition 2.3(ii) and Fatou’s lemma, we have

\begin{align*} \mathbb{E}_{\nu_{\lambda,d}}((\eta_0(O))^4) \leq \liminf_{t\rightarrow+\infty}\mathbb{E}^{\lambda,d}_{\vec{1}}((\eta_t(x))^4).\end{align*}

Then, Proposition 2.4(ii) follows from (2.7). It is further shown in [Reference Xue and Zhao18] that

\begin{align*} |{\rm Cov}_{\vec{1}}\,(\eta_t(x)\eta_t(v),\eta_t(y)\eta_t(z))| \leq \mathbb{E}_{(x,v,y,z)}\Bigg(\prod_{n=0}^{+\infty}\overline{H}(S_n, S_{n+1}), \sigma<+\infty\Bigg)\end{align*}

for any $x,v,y,z\in \mathbb{Z}^d$ and $t\geq 0$, where $\sigma=\inf\{n\colon\{S_n(1), S_n(2)\}\bigcap\{S_n(3), S_n(4)\}\neq\emptyset\}$ and $S_n(j)$ is the jth component of $S_n$. Applying Proposition 2.3(ii) and (2.7), the left-hand side in the last inequality can be replaced by $|{\rm Cov}_{\nu_{\lambda,d}}(\eta_0(x)\eta_0(v),\eta_0(y)\eta_0(z))|$. By (2.7) and the Hölder inequality, (iii) holds by checking that

(2.8) \begin{equation} \lim_{M\rightarrow+\infty}\sup_{\stackrel{(x,y,z)\in(\mathbb{Z}^d)^3:}{\|x-y\|_1\wedge\|x-z\|_1\geq M}} \mathbb{P}_{(x,x,y,z)}(\sigma<+\infty)=0.\end{equation}

By the transition probabilities of $\{S_n\}_{n\geq 0}$ , $\{S_n(i)-S_n(j)\}_{n\geq 0}$ is a lazy version of the simple random walk on $\mathbb{Z}^d$ for any $1\leq i\neq j\leq 4$ . Hence, (2.8) follows from the transience of the simple random walk on $\mathbb{Z}^d$ for $d\geq 3$ , and then (iii) holds.

Now we give our main results. The first one is about the law of large numbers of the occupation time process $\big\{\int_0^t \eta_u(O)\,{\mathrm{d}} u\big\}_{t\geq 0}$ .

Theorem 2.1. Assume that $d\geq 3$ and $\lambda>{1}/({2d(2\gamma_d-1)})$.

  1. (i) If the NBCPP on $\mathbb{Z}^d$ starts from $\nu_{\lambda, d}$ , then

    \begin{align*} \lim_{N\rightarrow+\infty}\frac{1}{N}\int_0^{tN}\eta_u(O)\,{\mathrm{d}} u = t \end{align*}
    almost surely and in $L^2$ for any $t>0$.
  2. (ii) If the NBCPP on $\mathbb{Z}^d$ starts from $\vec{1}$ , then

    \begin{align*} \lim_{N\rightarrow+\infty}\frac{1}{N}\int_0^{tN}\eta_u(O)\,{\mathrm{d}} u = t \end{align*}
    in $L^2$ for any $t>0$.

Remark 2.1. By Proposition 2.3, an NBCPP starting from $\nu_{\lambda, d}$ is stationary. Hence, Theorem 2.1(i) would follow immediately if we could verify ergodicity. However, as far as we know, it is still an open question whether an NBCPP starting from $\nu_{\lambda, d}$ is ergodic. As far as we understand, the currently known properties of NBCPPs do not provide convincing evidence for either a positive or negative answer to this problem. We will work on this problem as a further investigation.

It is natural to ask what happens for the occupation time for small $\lambda$ and $d=1,2$ . Currently, we have the following result.

Theorem 2.2. Assume that $d\geq 1$ and $\lambda<{1}/({2d})$. If the NBCPP on $\mathbb{Z}^d$ starts from $\vec{1}$, then

\begin{align*} \lim_{N\rightarrow+\infty}\mathbb{E}_{\vec{1}}\bigg(\bigg(\frac{1}{N}\int_0^{tN}\eta_u(O)\,{\mathrm{d}} u\bigg)^2\bigg)=+\infty \end{align*}

for any $t>0$ .

Remark 2.2. Theorem 2.2 shows that there is no convergence in $L^2$ for the occupation time when $\lambda<1/(2d)$ and the process starts from $\vec{1}$. Combined with Theorem 2.1, this shows that the occupation time of the NBCPP on $\mathbb{Z}^d$ for $d\geq 3$ exhibits a phase transition as $\lambda$ grows from small to large. More problems arise from Theorem 2.2. What occurs in the cases where $d=1,2$ and $\lambda\geq 1/(2d)$, or $d\geq 3$ and $1/(2d)\leq \lambda\leq {1}/({2d(2\gamma_d-1)})$? Furthermore, does the occupation time converge to 0 in distribution when $\lambda$ is sufficiently small? Our current strategy cannot solve these questions, which we will work on as further investigations.

Our third main result is about the CLT of the occupation time process. To state our result, we first introduce some notation and definitions. For any $t\geq 0$ and $N\geq 1$ , we define

\begin{align*} X_t^N=\frac{1}{\sqrt{N}\,}\int_0^{tN}(\eta_u(O)-1)\,{\mathrm{d}} u.\end{align*}

We write $X_t^N$ as $X_{t,\lambda,d}^N$ when we need to distinguish d and $\lambda$ .

Here we recall the definition of ‘weak convergence’. Let S be a topological space, and assume that $\{Y_n\}_{n\geq 1}$ is a sequence of S-valued random variables and Y is an S-valued random variable. Then we say that $Y_n$ converges weakly to Y when and only when $\lim_{n\rightarrow+\infty}\mathbb{E}f(Y_n)=\mathbb{E}f(Y)$ for any bounded and continuous f from S to $\mathbb{R}$.

Now we give our central limit theorem.

Theorem 2.3. Let $d_0$ and $\lambda_0$ be defined as in Proposition 2.4. Assume that $d\geq d_0$ , $\lambda>\lambda_0$ , and the NBCPP on $\mathbb{Z}^d$ starts from $\nu_{\lambda, d}$ . Then, for any integer $m\geq 1$ and $t_1,t_2,\ldots,t_m\geq 0$ , $\big(X_{t_1, \lambda, d}^N, X_{t_2, \lambda, d}^N,\ldots, X_{t_m, \lambda, d}^N\big)$ converges weakly, with respect to the Euclidean topology of $\mathbb{R}^m$ , to $\sqrt{C_1(\lambda, d)}(B_{t_1}, B_{t_2},\ldots,B_{t_m})$ as $N\rightarrow+\infty$ , where $\{B_t\}_{t\geq 0}$ is a standard Brownian motion and

\begin{align*} C_1(\lambda, d) = \frac{2\int_0^{+\infty}\int_0^{+\infty}p_{r+\theta}(O, O)\,{\mathrm{d}} r\,{\mathrm{d}}\theta} {h_{\lambda, d}\int_0^{+\infty}p_\theta(O, O)\,{\mathrm{d}}\theta}. \end{align*}

Remark 2.3. Theorem 2.3 is consistent with a calculation of variance. Applying Propositions 2.1–2.3 and the Markov property of $\{\eta_t\}_{t\geq 0}$,

\begin{align*} \lim_{N\rightarrow+\infty}{\rm Var}_{\nu_{\lambda,d}}(X_t^N) = \frac{2t}{h_{\lambda, d}}\int_0^{+\infty}\sum_x\Phi(x)p_r(O,x)\,{\mathrm{d}} r. \end{align*}

Applying the strong Markov property of the simple random walk,

\begin{align*} \int_0^{+\infty}p_\theta(x,O)\,{\mathrm{d}}\theta = \Phi(x)\int_0^{+\infty}p_\theta(O, O)\,{\mathrm{d}}\theta \end{align*}

and hence

\begin{align*} \int_0^{+\infty}\sum_x\Phi(x)p_\theta(O,x)\,{\mathrm{d}}\theta & = \frac{\int_0^{+\infty}\int_0^{+\infty}\sum_xp_r(O,x)p_\theta(x,O)\,{\mathrm{d}} r\,{\mathrm{d}}\theta} {\int_0^{+\infty}p_\theta(O, O)\,{\mathrm{d}}\theta} \\ & = \frac{\int_0^{+\infty}\int_0^{+\infty}p_{r+\theta}(O, O)\,{\mathrm{d}} r\,{\mathrm{d}}\theta} {\int_0^{+\infty}p_\theta(O, O)\,{\mathrm{d}}\theta}. \end{align*}

Note that $p_t(O,O)=\Theta(t^{-{d}/{2}})$ as $t\rightarrow+\infty$ and that $\int_0^{+\infty}\int_0^{+\infty}p_{r+\theta}(O, O)\,{\mathrm{d}} r\,{\mathrm{d}}\theta=\int_0^{+\infty}s\,p_s(O,O)\,{\mathrm{d}} s$. Hence,

$$ \frac{\int_0^{+\infty}\int_0^{+\infty}p_{r+\theta}(O, O)\,{\mathrm{d}} r\,{\mathrm{d}}\theta} {\int_0^{+\infty}p_\theta(O, O)\,{\mathrm{d}}\theta} \in (0, +\infty) $$

when and only when $d\geq 5$ . That is why we guess that Theorem 2.3 holds for all $d\geq 5$ and sufficiently large $\lambda$ . Note that

$$\gamma_d=\frac{1}{\int_0^{+\infty}p_\theta(O, O)\,{\mathrm{d}}\theta}$$

by the strong Markov property, and hence

\begin{align*} C_1(\lambda, d)=\frac{1}{h_{\lambda, d}}C(x)\big|_{x=O}, \end{align*}

where $\{C(x)\}_{x\in \mathbb{Z}^d}$ are constants given in [Reference Cox and Griffeath3, Theorem 1] for cases where $d\geq 5$ . Explicit calculations of the constants occurring in the main theorems and their dependency on d are given in [Reference Birkner and Zähle1, Section 1] and [Reference Cox and Griffeath3, Sections 0 and 2].

Remark 2.4. Under the assumptions of Theorem 2.3, it is natural to ask, for any given $T>0$, whether $\big\{X_{t,\lambda,d}^N\big\}_{0\leq t\leq T}$ converges in distribution to $\{\sqrt{C_1(\lambda, d)}B_t\}_{0\leq t\leq T}$ with respect to the Skorokhod topology. We think the answer is positive but we cannot prove this claim with our current approach. Roughly speaking, in the proof of Theorem 2.3 we decompose the centered occupation time as a martingale plus an error function. We can show that the error function converges to 0 in distribution at each moment $t>0$, but we cannot check the tightness of this error function to show that it converges weakly to the zero function with respect to the Skorokhod topology. We will work on this problem as a further investigation.

Theorem 2.3 shows that the central limit theorem of the NBCPP is an analogue of that of voter models and branching random walks given in [Reference Birkner and Zähle1, Reference Cox and Griffeath3] when the dimension d is sufficiently large. Using Proposition 2.3 and the fact that $p_t(O,O)=O(t^{-{d}/{2}})$ , calculations of variances similar to that in Remark 2.3 show that

\begin{align*} \lim_{N\rightarrow+\infty}\frac{1}{N\log N}{\rm Cov}_{\nu_{\lambda,d}} \bigg(\int_0^{tN}(\eta_u(O)-1)\,{\mathrm{d}} u,\int_0^{sN}(\eta_u(O)-1)\,{\mathrm{d}} u\bigg) = K_4\min\{t,s\}\end{align*}

when $d=4$ and

\begin{align*} \lim_{N\rightarrow+\infty}\frac{1}{N^{{3}/{2}}}{\rm Cov}_{\nu_{\lambda, d}} \bigg(\int_0^{tN}\!\!(\eta_u(O)-1)\,{\mathrm{d}} u,\int_0^{sN}\!(\eta_u(O)-1)\,{\mathrm{d}} u\bigg) = K_3\big(t^{{3}/{2}}+s^{{3}/{2}}-|t-s|^{{3}/{2}}\big)\end{align*}

when $d=3$ for any $t,s>0$ and some constants $K_3, K_4\in (0, +\infty)$ . These two limits provide evidence for us to guess that, as N grows to infinity,

$$\bigg\{\frac{1}{N^{{3}/{4}}}\int_0^{tN}(\eta_u(O)-1)\,{\mathrm{d}} u\bigg\}_{t\geq 0}$$

converges weakly to $\sqrt{K_3}$ times the fractional Brownian motion with Hurst parameter $\frac34$ when $d=3$ , and

$$\bigg\{\frac{1}{\sqrt{N\log N}\,}\int_0^{tN}(\eta_u(O)-1)\,{\mathrm{d}} u\bigg\}_{t\geq 0}$$

converges weakly to $\{\sqrt{K_4}B_t\}_{t\geq 0}$ when $d=4$ , where $\{B_t\}_{t\geq 0}$ is the standard Brownian motion. Furthermore, Remark 2.3 provides evidence for us to guess that Theorem 2.3 holds for all $d\geq 5$ . That is to say, central limit theorems of occupation times of NBCPPs on $\mathbb{Z}^3$ , $\mathbb{Z}^4$ , and $\mathbb{Z}^d$ for $d\geq 5$ are guessed to be analogues of those of voter models and branching random walks in each case. However, our current proof of Theorem 2.3 relies heavily on the fact that $\sup_{t\geq 0}\mathbb{E}_{\vec{1}}^{\lambda, d}((\eta_t(O))^4)<+\infty$ when d and $\lambda$ are sufficiently large, which we have not managed to prove yet for small d. That is why we currently only discuss the NBCPP on $\mathbb{Z}^d$ with d sufficiently large.

Another possible way to extend Theorem 2.3 to cases with small d is to check the ergodicity of the NBCPP, as mentioned in Remark 2.1. If the NBCPP is ergodic, then we can apply Birkhoff’s theorem to f of the form $f(\eta)=\eta^2(O)$, and then an upper bound on the fourth moments of $\{\eta_t(O)\}_{t\geq 0}$ is not required. We will pursue this approach in a further investigation.

Proofs of Theorems 2.1, 2.2, and 2.3 are given in Section 3. The proof of Theorem 2.1 is relatively easy, where it is shown that

\begin{align*} \lim_{N\rightarrow+\infty}\mathbb{E}_{\nu_{\lambda, d}}\bigg(\frac{1}{N}\int_0^{tN}\eta_u(O)\,{\mathrm{d}} u\bigg)=t, \qquad \lim_{N\rightarrow+\infty}{\rm Var}_{\nu_{\lambda, d}}\bigg(\frac{1}{N}\int_0^{tN}\eta_u(O)\,{\mathrm{d}} u\bigg)=0\end{align*}

according to Propositions 2.1 and 2.3. The proof of Theorem 2.2 utilizes the coupling relationship between the NBCPP and the contact process to show that

\begin{align*} \mathbb{E}_{\vec{1}}\,((\eta_t(O))^2) \geq \exp\bigg\{\frac{(1-2\lambda d)t}{(1+2\lambda d)}\bigg\}.\end{align*}

The proof of Theorem 2.3 is inspired by [Reference Birkner and Zähle1], which gives CLTs of occupation times of critical branching random walks. The core idea of the proof is to decompose $\int_0^t(\eta_u(O)-1)\,{\mathrm{d}} u$ as a martingale $M_t$ plus a remainder $R_t$ such that the quadratic variation process $({1}/{N})\langle M\rangle_{tN}$ of $({1}/{\sqrt{N}})M_{tN}$ converges to $C_1 t$ in $L^2$ and $({1}/{\sqrt{N}})R_{tN}$ converges to 0 in probability as $N\rightarrow+\infty$ . To give the above decomposition, a resolvent function of the simple random walk $\{Y_t\}_{t\geq 0}$ on $\mathbb{Z}^d$ is utilized. There are two main technical difficulties in the execution of the above strategy for the NBCPP. The first one is to prove that ${\rm Var}(({1}/{N})\langle M\rangle_{tN})$ converges to 0 as $N\rightarrow+\infty$ , which ensures that the limit of $({1}/{N})\langle M\rangle_{tN}$ is deterministic. The second one is to show that $\mathbb{E}(({1}/{N})\sup_{0\leq s\leq tN}(M_s-M_{s-})^2)$ converges to 0 as $N\rightarrow+\infty$ , which ensures that the limit process of $\{({1}/{\sqrt{N}})M_{tN}\}_{t\geq 0}$ is continuous. To overcome the first difficulty, we relate the calculation of ${\rm Var}(\langle M\rangle_{tN})$ to a random walk on $\mathbb{Z}^{2d}$ according to the Kolmogorov–Chapman equation of linear systems introduced in [Reference Liggett11]. To overcome the second difficulty, we bound $\mathbb{P}(({1}/{N})\sup_{0\leq s\leq tN}(M_s-M_{s-})^2>\varepsilon)$ from above by $O\big(({1}/{N^2})\int_0^{tN}\mathbb{E}(\eta^4_s(O))\,{\mathrm{d}} s\big)$ according to the strong Markov property of our process. For the mathematical details, see Section 3.

3. Theorem proofs

In this section we prove Theorems 2.1, 2.2, and 2.3. We first give the proof of Theorem 2.1.

Proof of Theorem 2.1. Throughout this proof we assume that $d\geq 3$ and $\lambda>{1}/({2d(2\gamma_d-1)})$ . We only prove (i), since (ii) follows from an analysis similar to that leading to the $L^2$ convergence in (i). So in this proof we further assume that $\eta_0$ is distributed with $\nu_{\lambda,d}$ . By Proposition 2.3, the NBCPP starting from $\nu_{\lambda, d}$ is stationary. Hence, applying Birkhoff’s ergodic theorem,

\begin{align*} \lim_{N\rightarrow+\infty}\frac{1}{N}\int_0^{tN}\eta_u(O)\,{\mathrm{d}} u = t\mathbb{E}_{\nu_{\lambda,d}}(\eta(O)\mid\mathcal{I}) \quad \text{a.s.}, \end{align*}

where $\mathcal{I}$ is the set of all invariant events. As a result, we only need to show that the convergence in Theorem 2.1 is in $L^2$ .

Applying Proposition 2.3, the family $\{\eta_u(O)\}_{u\geq 0}$ is uniformly integrable under $\mathbb{P}_{\vec{1}}$ and hence

\begin{align*} \mathbb{E}_{\nu_{\lambda, d}}\eta(O)=\lim_{t\rightarrow+\infty}\mathbb{E}_{\vec{1}}\eta_t(O)=1. \end{align*}

Therefore,

\begin{align*} \mathbb{E}_{\nu_{\lambda, d}}\bigg(\frac{1}{N}\int_0^{tN}\eta_u(O)\,{\mathrm{d}} u\bigg) = t \end{align*}

for any $N\geq 1$ and $t>0$ . Consequently,

\begin{align*} \mathbb{E}_{\nu_{\lambda,d}}\bigg(\bigg(\frac{1}{N}\int_0^{tN}\eta_u(O)\,{\mathrm{d}} u - t\bigg)^2\bigg) = \frac{1}{N^2}{\rm Var}_{\nu_{\lambda, d}}\bigg(\int_0^{tN}\eta_u(O)\,{\mathrm{d}} u\bigg). \end{align*}

Hence, to prove the $L^2$ convergence in Theorem 2.1 we only need to show that

(3.1) \begin{equation} \lim_{N\rightarrow+\infty}\frac{1}{N^2}{\rm Var}_{\nu_{\lambda, d}}\bigg(\int_0^{tN}\eta_u(O)\,{\mathrm{d}} u\bigg) = 0 \end{equation}

for any $t\geq 0$ . Since $\nu_{\lambda,d}$ is an invariant measure of our process,

(3.2) \begin{align} {\rm Var}_{\nu_{\lambda, d}}\bigg(\int_0^{tN}\eta_u(O)\,{\mathrm{d}} u\bigg) & = 2\int_0^{tN}\bigg(\int_0^\theta{\rm Cov}_{\nu_{\lambda, d}}(\eta_r(O),\eta_{\theta}(O))\,{\mathrm{d}} r\bigg)\,{\mathrm{d}}\theta \notag \\ & = 2\int_0^{tN}\bigg(\int_0^\theta{\rm Cov}_{\nu_{\lambda, d}}(\eta_0(O),\eta_{\theta-r}(O))\,{\mathrm{d}} r\bigg)\,{\mathrm{d}}\theta. \end{align}

Applying Propositions 2.1 and 2.3,

(3.3) \begin{align} {\rm Cov}_{\nu_{\lambda,d}}(\eta_0(O),\eta_t(O)) & = \mathbb{E}_{\nu_{\lambda,d}}(\eta_0(O)\eta_t(O)) - 1 \notag \\ & = \mathbb{E}_{\nu_{\lambda,d}}\Bigg(\eta_0(O)\Bigg(\sum_{y\in \mathbb{Z}^d}p_t(O, y)\eta_0(y)\Bigg)\Bigg)-1 \notag \\ & = \sum_{y\in\mathbb{Z}^d}p_t(O,y)(\mathbb{E}_{\nu_{\lambda,d}}(\eta_0(O)\eta_0(y)) - 1) \notag \\ & = \sum_{y\in\mathbb{Z}^d}p_t(O,y){\rm Cov}_{\nu_{\lambda,d}}(\eta_0(O),\eta_0(y)) \notag\\ & = \frac{1}{h_{\lambda, d}}\sum_{y\in \mathbb{Z}^d}p_t(O, y)\Phi(y). \end{align}

Therefore, by (3.3), for any $t\geq 0$ ,

(3.4) \begin{equation} -1 \leq {\rm Cov}_{\nu_{\lambda,d}}(\eta_0(O),\eta_t(O)) \leq \frac{1}{h_{\lambda,d}}\sum_{y\in\mathbb{Z}^d}p_t(O,y) = \frac{1}{h_{\lambda, d}}, \end{equation}

and hence $\sup_{t\geq 0}|{\rm Cov}_{\nu_{\lambda, d}}(\eta_0(O), \eta_t(O))|<+\infty$ . By (3.2) and (3.4), to prove (3.1) we only need to show that

(3.5) \begin{equation} \lim_{t\rightarrow+\infty}{\rm Cov}_{\nu_{\lambda, d}}(\eta_0(O), \eta_t(O))=0. \end{equation}

Since $d\geq 3$ , for any $\varepsilon>0$ there exists $M>0$ such that $\Phi(y)<\varepsilon$ when $\|y\|_1\geq M$ . As a result, by (3.3),

\begin{align*} \limsup_{t\rightarrow+\infty}{\rm Cov}_{\nu_{\lambda, d}}(\eta_0(O), \eta_t(O)) & \leq \limsup_{t\rightarrow+\infty}\frac{\varepsilon}{h_{\lambda,d}}\sum_{y:\|y\|_1\geq M}\! p_t(O,y) + \frac{1}{h_{\lambda, d}}\sum_{y:\|y\|_1\leq M}\!\!\Big(\lim_{t\rightarrow+\infty}p_t(O, y)\Big) \\ & = \limsup_{t\rightarrow+\infty}\frac{\varepsilon}{h_{\lambda,d}}\sum_{y:\|y\|_1\geq M}p_t(O,y) \leq \frac{\varepsilon}{h_{\lambda, d}}. \end{align*}

Since $\varepsilon$ is arbitrary, let $\varepsilon\rightarrow 0$ and then (3.5) holds. As we have explained, Theorem 2.1 follows from (3.5), and the proof is complete.

Now we prove Theorem 2.2.

Proof of Theorem 2.2. Throughout this proof we assume that $\lambda<1/(2d)$. According to a calculation similar to that leading to (3.3),

(3.6) \begin{align} \mathbb{E}_{\vec{1}}\bigg(\bigg(\frac{1}{N}\int_0^{tN}\eta_u(O)\,{\mathrm{d}} u\bigg)^2\bigg) & = \frac{2}{N^2}\int_0^{tN}\int_0^s\sum_{x}p_{s-r}(O,x) \mathbb{E}_{\vec{1}}\,(\eta_r(O)\eta_r(x))\,{\mathrm{d}} r\,{\mathrm{d}} s \notag \\ & \geq \frac{2}{N^2}\int_0^{tN}\int_0^sp_{s-r}(O,O)\mathbb{E}_{\vec{1}}\,(\eta_r^2(O))\,{\mathrm{d}} r\,{\mathrm{d}} s. \end{align}

Let $\xi_t$ be the contact process defined as in Section 1. Then a well-known property of $\xi_t$ is that

\begin{align*} \mathbb{P}_{\vec{1}}\,(\xi_t(O)=1) \leq \exp\bigg\{\frac{(2\lambda d-1)t}{1+2d\lambda}\bigg\}. \end{align*}

This property can be proved by coupling the contact process with a branching process, the detail of which we omit here. According to the coupling relationship of the NBCPP and the contact process given in Section 1, utilizing the Hölder inequality and Proposition 2.1, we have

\begin{equation*} \mathbb{P}_{\vec{1}}\,(\xi_t(O)=1) = \mathbb{P}_{\vec{1}}\,(\eta_t(O)>0) \geq \frac{(\mathbb{E}_{\vec{1}}\,\eta_t(O))^2}{\mathbb{E}_{\vec{1}}\,(\eta_t^2(O))} = \frac{1}{\mathbb{E}_{\vec{1}}\,(\eta_t^2(O))}. \end{equation*}

Hence,

(3.7) \begin{equation} \mathbb{E}_{\vec{1}}\,(\eta_t^2(O)) \geq \exp\bigg\{\frac{(1-2\lambda d)t}{1+2d\lambda}\bigg\}. \end{equation}

Since $p_t(O,O)=\Theta(t^{-{d}/{2}})$ as $t\rightarrow+\infty$ , Theorem 2.2 follows directly from (3.6) and (3.7).

From now on we assume that $d\geq d_0$ and $\lambda\geq \lambda_0$ , where $d_0$ and $\lambda_0$ are defined as in Proposition 2.4. As mentioned at the end of Section 2, our proof follows the strategy introduced in [Reference Birkner and Zähle1] where $X_t^N$ is decomposed as a martingale plus a remainder term. As $N\rightarrow+\infty$ , the martingale converges weakly to a Brownian motion and the remainder converges to 0 in probability. In detail, for any $\theta>0$ , we define

\begin{align*} G_\theta(\eta)=\sum_{x\in \mathbb{Z}^d}g_\theta(x)(\eta(x)-1),\end{align*}

where $g_\theta$ is the resolvent function of the simple random walk given by

\begin{align*} g_\theta(x)=\int_0^{+\infty}{\mathrm{e}}^{-\theta u}p_u(O,x)\,{\mathrm{d}} u.\end{align*}
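For later use we record two properties of $g_\theta$; both are standard facts about resolvents of the simple random walk and follow directly from the definition. Since $({{\mathrm{d}}}/{{\mathrm{d}} u})p_u(O,x)=Lp_u(O,\cdot)(x)$ and $p_0(O,x)=\textbf{1}_{O}(x)$, integration by parts gives

\begin{align*} Lg_\theta(x) = \int_0^{+\infty}{\mathrm{e}}^{-\theta u}\frac{{\mathrm{d}}}{{\mathrm{d}} u}p_u(O,x)\,{\mathrm{d}} u = \big[{\mathrm{e}}^{-\theta u}p_u(O,x)\big]_0^{+\infty} + \theta\int_0^{+\infty}{\mathrm{e}}^{-\theta u}p_u(O,x)\,{\mathrm{d}} u = -\textbf{1}_{O}(x)+\theta g_\theta(x),\end{align*}

i.e. $\theta g_\theta({\cdot})-Lg_\theta({\cdot})=\textbf{1}_{O}({\cdot})$, which is used below; moreover, $\sum_{x}g_\theta(x)=\int_0^{+\infty}{\mathrm{e}}^{-\theta u}\sum_{x}p_u(O,x)\,{\mathrm{d}} u={1}/{\theta}$.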

According to the Markov property of the simple random walk,

\begin{align*} \sum_{x} g_0^2(x)=\int_0^{+\infty}\int_0^{+\infty}p_{u+\theta}(O,O)\,{\mathrm{d}} u\,{\mathrm{d}}\theta.\end{align*}

Then, applying $p_t(O, O)=\Theta(t^{-d/2})$ as $t\rightarrow+\infty$ , $\sum_{x} g_0^2(x)$ is finite when and only when $d\geq 5$ . Hence, $\sum_{x} g_0^k(x)<+\infty$ when $k\geq 2$ and $d\geq 5$ . Let

\begin{align*} M_t^\theta=G_\theta(\eta_t)-G_\theta(\eta_0)-\int_0^t \mathcal{L}G_\theta(\eta_s)\,{\mathrm{d}} s,\end{align*}

then by Dynkin’s martingale formula, $\{M_t^\theta\}_{t\geq 0}$ is a martingale with quadratic variation process $\{\langle M^\theta\rangle_t\}_{t\geq 0}$ given by

\begin{align*} \langle M^\theta\rangle_t = \int_0^t(\mathcal{L}((G_\theta(\eta_s))^2) - 2G_\theta(\eta_s)\mathcal{L}G_\theta(\eta_s))\,{\mathrm{d}} s.\end{align*}

Applying the definition of $\mathcal{L}$ ,

(3.8) \begin{equation} \langle M^\theta\rangle_t = \frac{1}{2\lambda d} \int_0^t\sum_{x\in\mathbb{Z}^d}\eta_s^2(x)\Bigg(g_\theta^2(x)+\lambda\sum_{y\sim x}g_\theta^2(y)\Bigg)\,{\mathrm{d}} s.\end{equation}
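In detail, the drift part of $\mathcal{L}$ does not contribute to $\mathcal{L}((G_\theta(\eta_s))^2)-2G_\theta(\eta_s)\mathcal{L}G_\theta(\eta_s)$, since every first-order term of the form $Af(\eta)=c\sum_{x}f^\prime_x(\eta)\eta(x)$ satisfies $A(f^2)-2fAf=0$, while the two types of jumps change $G_\theta$ by

\begin{align*} G_\theta(\eta^{x,-})-G_\theta(\eta)=-g_\theta(x)\eta(x), \qquad G_\theta(\eta^{x,y})-G_\theta(\eta)=g_\theta(x)\eta(y),\end{align*}

so $\mathcal{L}((G_\theta(\eta_s))^2)-2G_\theta(\eta_s)\mathcal{L}G_\theta(\eta_s) = ({1}/({2\lambda d}))\sum_{x}g_\theta^2(x)\eta_s^2(x)+({1}/({2d}))\sum_{x}\sum_{y\sim x}g_\theta^2(x)\eta_s^2(y)$, which becomes the integrand in (3.8) after exchanging the roles of x and y in the second sum.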

Applying $\theta g_\theta({\cdot})-Lg_\theta({\cdot})=\textbf{1}_{O}({\cdot})$ ,

\begin{align*} \mathcal{L}G_\theta(\eta) = \theta G_\theta(\eta) - \Bigg(\eta(O)-\theta\sum_xg_\theta(x)\Bigg) = \theta G_\theta(\eta)-(\eta(O)-1).\end{align*}
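For completeness, here is the computation behind the last display:

\begin{align*} \mathcal{L}G_\theta(\eta) & = \frac{1}{2\lambda d}\sum_{x\in\mathbb{Z}^d}(-g_\theta(x)\eta(x)) + \frac{1}{2d}\sum_{x\in\mathbb{Z}^d}\sum_{y\sim x}g_\theta(x)\eta(y) + \bigg(\frac{1}{2\lambda d}-1\bigg)\sum_{x\in\mathbb{Z}^d}g_\theta(x)\eta(x) \\ & = \sum_{x\in\mathbb{Z}^d}\eta(x)\Bigg(\frac{1}{2d}\sum_{y\sim x}g_\theta(y)-g_\theta(x)\Bigg) = \sum_{x\in\mathbb{Z}^d}\eta(x)Lg_\theta(x),\end{align*}

and substituting $Lg_\theta=\theta g_\theta-\textbf{1}_{O}$ together with $\theta\sum_{x}g_\theta(x)=1$ gives $\mathcal{L}G_\theta(\eta)=\theta G_\theta(\eta)-(\eta(O)-1)$.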

As a result,

(3.9) \begin{equation} X_t^N=\frac{1}{\sqrt{N}\,}M_{tN}^{1/N}+\frac{1}{\sqrt{N}\,}R_{tN}^{1/N},\end{equation}

where $R_t^\theta=-G_\theta(\eta_t)+G_\theta(\eta_0)+\int_0^t\theta G_\theta(\eta_s)\,{\mathrm{d}} s$ . To prove Theorem 2.3, we need the following lemmas.

Lemma 3.1. For any $t\geq 0$ , $({1}/{N})\langle M^{1/N}\rangle_{tN}$ converges to $C_1(\lambda, d)t$ in $L^2$ as $N\rightarrow+\infty$ .

Lemma 3.2. For any $t\geq 0$ , $({1}/{\sqrt{N}})R_{tN}^{1/N}$ converges to 0 in $L^2$ as $N\rightarrow+\infty$ .

Lemma 3.3. For any $t\geq 0$ ,

\begin{align*} \lim_{N\rightarrow+\infty}\mathbb{E}\bigg(\sup_{0\leq s\leq tN}\frac{1}{N}\big(M^{1/N}_{s}-M^{1/N}_{s-}\big)^2\bigg) = 0, \end{align*}

where $M^{1/N}_{s-}=\lim_{u<s, u\rightarrow s}M^{1/N}_u$ , i.e. the state of $M^{1/N}_{\cdot}$ at the moment just before s.

We first utilize Lemmas 3.1, 3.2, and 3.3 to prove Theorem 2.3.

Proof of Theorem 2.3. We denote by $\mathcal{D}[0,+\infty)$ the set of càdlàg functions from $[0,+\infty)$ to $\mathbb{R}$ endowed with the Skorokhod topology. According to Lemmas 3.1 and 3.3,

\begin{align*} \bigg(\bigg\{\frac{1}{\sqrt{N}\,}M_{tN}^{1/N}\bigg\}_{t\geq 0}, \bigg\{\frac{1}{N}\langle M^{1/N}\rangle_{tN}\bigg\}_{t\geq 0}\bigg) \end{align*}

satisfies condition (b) of [Reference Ethier and Kurtz5, Theorem 1.4, Chapter 7]. Hence, using that theorem, $\big\{({1}/{\sqrt{N}})M_{tN}^{1/N}\colon t\geq 0\big\}_{N\geq 1}$ converges weakly, with respect to the Skorokhod topology of $\mathcal{D}[0,+\infty)$ , to a continuous martingale $\{W_t\}_{t\geq 0}$ as $N\rightarrow+\infty$ , where $\{W_t\}_{t\geq 0}$ satisfies that $\{W_t^2-C_1(\lambda, d)t\}_{t\geq 0}$ is also a martingale. Hence, $\{W_t\}_{t\geq 0}$ is a standard Brownian motion times $\sqrt{C_1(\lambda, d)}$ . Then, for given $0<t_1<t_2<\cdots<t_m$ , since $\{\eta_t\}_{t\geq 0}$ is continuous at each $t_i$ with probability 1, we have that $({1}/{\sqrt{N}})\big(M_{t_1N}^{1/N},\ldots,M_{t_mN}^{1/N}\big)$ converges weakly to $\sqrt{C_1(\lambda, d)}(B_{t_1},\ldots,B_{t_m})$ as $N\rightarrow+\infty$ . Consequently, Theorem 2.3 follows from (3.9) and Lemma 3.2.

Now we only need to prove Lemmas 3.1, 3.2, and 3.3. We first give the proof of Lemma 3.1.

Proof of Lemma 3.1. Conditioned on $Y_0=O$, the number of visits of $\{Y_t\}_{t\geq 0}$ to O follows a geometric distribution with parameter $\gamma_d$, and during each visit $Y_t$ stays at O for an exponential time with mean 1. Therefore,

\begin{align*} \int_0^{+\infty}p_s(O,O)\,{\mathrm{d}} s = \mathbb{E}_O\int_0^{+\infty}\textbf{1}_{\{Y_s=O\}}\,{\mathrm{d}} s = 1 \times \frac{1}{\gamma_d} = \frac{1}{\gamma_d}. \end{align*}

Then, by (3.8) and Proposition 2.3,

\begin{align*} \lim_{N\rightarrow+\infty}\mathbb{E}_{\nu_{\lambda,d}}\bigg(\frac{1}{N}\langle M^{1/N}\rangle_{tN}\bigg) & = \frac{t(1+2\lambda d)}{2\lambda d}\bigg(1+\frac{1}{h_{\lambda,d}}\bigg)\sum_{x}g_0^2(x) \\ & = \frac{2\gamma_d t}{h_{\lambda,d}}\int_0^{+\infty}\int_0^{+\infty}p_{\theta+r}(O,O)\,{\mathrm{d}}\theta\,{\mathrm{d}} r = C_1(\lambda,d)t. \end{align*}
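Here the second equality uses $\sum_{x}g_0^2(x)=\int_0^{+\infty}\int_0^{+\infty}p_{\theta+r}(O,O)\,{\mathrm{d}}\theta\,{\mathrm{d}} r$ together with the identity

\begin{align*} \frac{1+2\lambda d}{2\lambda d}\bigg(1+\frac{1}{h_{\lambda,d}}\bigg) = \frac{1+2\lambda d}{2\lambda d}\cdot\frac{1}{h_{\lambda,d}}\cdot\frac{4\lambda d\gamma_d}{1+2\lambda d} = \frac{2\gamma_d}{h_{\lambda,d}},\end{align*}

which follows from $h_{\lambda,d}+1={4\lambda d\gamma_d}/({1+2\lambda d})$; the third equality then follows from $\int_0^{+\infty}p_\theta(O, O)\,{\mathrm{d}}\theta={1}/{\gamma_d}$ and the definition of $C_1(\lambda, d)$.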

Hence, to complete the proof we only need to show that

(3.10) \begin{equation} \lim_{N\rightarrow+\infty}\frac{1}{N^2}{\rm Var}_{\nu_{\lambda,d}}(\langle M^{1/N}\rangle_{tN}) = 0. \end{equation}

By Proposition 2.2 and the Markov property of $\{\eta_t\}_{t\geq 0}$ , for $s<u$ and $x, w\in \mathbb{Z}^d$ ,

\begin{align*} {\rm Cov}_{\nu_{\lambda,d}}(\eta_s^2(x),\eta_u^2(w)) = \sum_{y,z}\hat{q}_{u-s}((w,w),(y,z)) {\rm Cov}_{\nu_{\lambda,d}}(\eta_0^2(x),\eta_0(y)\eta_0(z)). \end{align*}

Therefore, to prove (3.10) we only need to show that

(3.11) \begin{equation} \lim_{r\rightarrow+\infty}\sum_{x,w,y,z}V_0(x)V_0(w)\hat{q}_r((w,w),(y,z)) |{\rm Cov}_{\nu_{\lambda,d}}(\eta_0^2(x),\eta_0(y)\eta_0(z))| = 0, \end{equation}

where $V_\theta(x)=g_\theta^2(x)+\sum_{y\sim x}g^2_\theta(y)$ , which is decreasing in $\theta$ .

According to Propositions 2.2–2.4 and the Cauchy–Schwarz inequality,

\begin{align*} \sum_{y,z}\hat{q}_r((w,w),(y,z)) = \sum_{y}q_r(O,y) = \mathbb{E}_{\vec{1}}\,((\eta_r(O))^2) & \leq 1 + \frac{1}{h_{\lambda,d}}, \\ \sup_{w,y,z}|{\rm Cov}_{\nu_{\lambda,d}}(\eta_0^2(w),\eta_0(y)\eta_0(z))| < 2\mathbb{E}_{\nu_{\lambda,d}}((\eta_0(O))^4) & < +\infty. \end{align*}

Then, using the fact that $\sum_xV_0(x)<+\infty$ and the dominated convergence theorem, to prove (3.11) we only need to show that

(3.12) \begin{equation} \lim_{r\rightarrow+\infty}\sum_{y,z}\hat{q}_r((w,w),(y,z)) |{\rm Cov}_{\nu_{\lambda,d}}(\eta_0^2(x),\eta_0(y)\eta_0(z))| = 0 \end{equation}

for any $x,w\in\mathbb{Z}^d$ . By Proposition 2.4, for any $\varepsilon>0$ , there exists $M>0$ such that $|{\rm Cov}_{\nu_{\lambda,d}}(\eta_0^2(x),\eta_0(y)\eta_0(z))|<\varepsilon$ when $\|x-y\|_1\wedge\|x-z\|_1\geq M$ , and hence

\begin{multline*} \sum_{(y,z):\|x-y\|_1\wedge\|x-z\|_1\geq M}\hat{q}_r((w,w),(y,z)) |{\rm Cov}_{\nu_{\lambda,d}}(\eta_0^2(x),\eta_0(y)\eta_0(z))| \\ \leq \varepsilon\sum_{(y,z):\|x-y\|_1\wedge\|x-z\|_1\geq M}\hat{q}_r((w,w),(y,z)) \leq \varepsilon\bigg(1+\frac{1}{h_{\lambda, d}}\bigg). \end{multline*}

We claim that

(3.13) \begin{equation} \lim_{t\rightarrow+\infty}\sum_{v}\hat{q}_t((w,w),(y,v)) = \lim_{t\rightarrow+\infty}\sum_{v}\hat{q}_t((w,w),(v,z)) = 0 \end{equation}

for any $w,y,z\in \mathbb{Z}^d$ . The proof of (3.13) is given later. By (3.13), Proposition 2.4, and the Cauchy–Schwarz inequality,

\begin{align*} \lim_{r\rightarrow+\infty}\sum_{\substack{(y,z):\|x-y\|_1\leq M\,\text{or}\\\|x-z\|_1\leq M}} \hat{q}_r((w,w),(y,z))|{\rm Cov}_{\nu_{\lambda,d}}(\eta_0^2(x),\eta_0(y)\eta_0(z))| = 0 \end{align*}

and hence

\begin{align*} \limsup_{r\rightarrow+\infty}\sum_{y,z}\hat{q}_r((w,w),(y,z)) |{\rm Cov}_{\nu_{\lambda,d}}(\eta_0^2(x),\eta_0(y)\eta_0(z))| \leq \varepsilon\bigg(1+\frac{1}{h_{\lambda, d}}\bigg). \end{align*}

Since $\varepsilon$ is arbitrary, let $\varepsilon\rightarrow 0$ and then (3.12) holds.

Now we prove (3.13) to complete the proof of Lemma 3.1.

Proof of (3.13). By [Reference Liggett11, Theorem 9.3.1] and the definition of $\mathcal{L}$ ,

\begin{align*} \hat{q}_t((w,w),(y,v)) = {\mathrm{e}}^{-2t}\sum_{n=0}^{+\infty}\frac{t^n}{n!}H^n((w,w),(y,v)), \end{align*}

where H is a $(\mathbb{Z}^d)^2\times (\mathbb{Z}^d)^2$ matrix given by

\begin{align*} H((x,y), (u,v))= \begin{cases} {1}/{2d} & \text{if}\ x\neq y,\ u\sim x, \,\text{and}\,v=y, \\ {1}/{2d} & \text{if}\ x\neq y,\ u=x, \,\text{and}\,v\sim y, \\ {1}/{2d\lambda} & \text{if}\ x=y \,\text{and}\,(u,v)=(x,x), \\ {1}/{2d} & \text{if}\ x=y,\ u\sim x, \,\text{and}\,v=u, \\ {1}/{2d} & \text{if}\ x=y,\ u=x, \,\text{and}\,v\sim x, \\ {1}/{2d} & \text{if}\ x=y,\ u\sim x, \,\text{and}\,v=x, \\ 0 & \text{otherwise} \end{cases} \end{align*}

and

\begin{align*} H^n((w,w),(y,v)) = \sum_{\stackrel{\{(u_i,v_i)\}_{0\leq i\leq n}:}{(u_0,v_0)=(w,w),(u_n,v_n)=(y,v)}} \prod_{i=0}^{n-1}H((u_i,v_i),(u_{i+1},v_{i+1})). \end{align*}

As a result,

\begin{align*} \sum_{v\in\mathbb{Z}^d}\hat{q}_t((w,w),(y,v)) = {\mathrm{e}}^{-2t}\sum_{n=0}^{+\infty}\frac{(2t)^n}{n!} \mathbb{E}_{(w,w)}\Bigg(\prod_{i=0}^{n-1}\Theta(\beta_{i},\beta_{i+1})\textbf{1}_{\{\beta_n(1)=y\}}\Bigg), \end{align*}

where $\{\beta_n=(\beta_n(1), \beta_n(2))\colon n\geq 0\}$ is a random walk on $(\mathbb{Z}^d)^2$ such that

\begin{align*} \mathbb{P}(\beta_{n+1}=(u,v)\mid\beta_n=(x,y)) = \begin{cases} {1}/({4d}) & \text{if}\ x\neq y,\ u\sim x, \,\text{and}\,v=y, \\ {1}/({4d}) & \text{if}\ x\neq y,\ u=x, \,\text{and}\,v\sim y, \\ {1}/({6d+1}) & \text{if}\ x=y\,\text{and}\,(u,v)=(x,x), \\ {1}/({6d+1}) & \text{if}\ x=y,\ u\sim x, \,\text{and}\,v=u, \\ {1}/({6d+1}) & \text{if}\ x=y,\ u\sim x, \,\text{and}\,v=x, \\ {1}/({6d+1}) & \text{if}\ x=y,\ u=x, \,\text{and}\,v\sim x, \\ 0 & \text{otherwise} \end{cases} \end{align*}

and $\Theta$ is a function from $(\mathbb{Z}^d)^2\times(\mathbb{Z}^d)^2$ to $\mathbb{R}$ such that $\Theta((x,y),(u,v)) =\frac{1}{2}H((x,y),(u,v))\,{\rm deg}(x,y)$, where

\begin{align*} {\rm deg}(x,y) = \begin{cases} 4d & \text{if}\ x\neq y, \\ 6d+1 & \text{if}\ x=y. \end{cases} \end{align*}

Note that ${\rm deg}({\cdot})$ is the degree function of the graph obtained from $(\mathbb{Z}^d)^2$ by adding, for each $x\in \mathbb{Z}^d$, an edge connecting $(x,x)$ with itself and an edge connecting $(x,x)$ with $(y,y)$ for every $y$ such that $y\sim x$ on $\mathbb{Z}^d$.
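For the reader's convenience, the expectation representation displayed above can be checked directly from these definitions: every transition charged by H has $\beta$-probability ${1}/{{\rm deg}(x,y)}$, so $H((x,y),(u,v))=2\Theta((x,y),(u,v))\,\mathbb{P}(\beta_{n+1}=(u,v)\mid\beta_n=(x,y))$ entrywise, and therefore

\begin{align*} H^n((w,w),(y,v)) = 2^n\,\mathbb{E}_{(w,w)}\Bigg(\prod_{i=0}^{n-1}\Theta(\beta_{i},\beta_{i+1})\textbf{1}_{\{\beta_n=(y,v)\}}\Bigg). \end{align*}

Summing over $v$ and inserting this into the series for $\hat{q}_t$ replaces $t^n$ by $(2t)^n$, which is exactly the displayed identity.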

According to the definition of $\Theta$, for each $i\geq 0$ we have $\Theta(\beta_i,\beta_{i+1})\geq 1$, and $\Theta(\beta_i,\beta_{i+1})>1$ is possible only when $\beta_i(1)=\beta_i(2)$. Hence, by Hölder’s inequality,

(3.14) \begin{multline} \sum_{v\in\mathbb{Z}^d}\hat{q}_t((w,w),(y,v)) \\ \leq \Bigg(\mathbb{E}_{(w,w)}\Bigg(\prod_{i=0}^{+\infty} \Theta^{1+\varepsilon}(\beta_{i},\beta_{i+1})\Bigg)\Bigg)^{{1}/({1+\varepsilon})}{\mathrm{e}}^{-2t} \sum_{n=0}^{+\infty}\frac{(2t)^n}{n!}(\mathbb{P}(\beta_n(1)=y))^{\varepsilon/(1+\varepsilon)} \end{multline}

for any $\varepsilon>0$ . Since $\{\beta_n(1)-\beta_n(2)\}_{n\geq 0}$ is a lazy version of the simple random walk on $\mathbb{Z}^d$ and $H(\beta_i,\beta_{i+1})>1$ only when $\beta_i(1)=\beta_i(2)$ , by the strong Markov property of $\{\beta_n\}_{n\geq 0}$ ,

\begin{align*} \mathbb{E}_{(w,w)}\Bigg(\prod_{i=0}^{+\infty}\Theta^{1+\varepsilon}(\beta_{i},\beta_{i+1})\Bigg) = \frac{4d}{6d+1}\bigg(\frac{6d+1}{4d}\bigg)^{1+\varepsilon}\gamma_d + \sum_{k=1}^{+\infty}(\alpha_\varepsilon(d,\lambda))^k\gamma_d, \end{align*}

where

\begin{align*} \alpha_\varepsilon(d,\lambda) = \frac{1}{6d+1}\bigg(\frac{6d+1}{2\cdot2d\lambda}\bigg)^{1+\varepsilon} + \frac{2d}{6d+1}\bigg(\frac{6d+1}{2\cdot2d}\bigg)^{1+\varepsilon} + \frac{4d}{6d+1}\bigg(\frac{6d+1}{2\cdot2d}\bigg)^{1+\varepsilon}(1-\gamma_d). \end{align*}
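The threshold for $\lambda$ invoked in the next sentence comes from the following elementary computation: letting $\varepsilon\rightarrow 0$ in each of the three terms gives

\begin{align*} \lim_{\varepsilon\rightarrow 0}\alpha_\varepsilon(d,\lambda) = \frac{1}{4d\lambda}+\frac12+(1-\gamma_d), \end{align*}

which is strictly less than 1 if and only if ${1}/({4d\lambda})<\gamma_d-\frac12$, that is, if and only if $\lambda>{1}/({2d(2\gamma_d-1)})$.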

Since $\lambda>{1}/({2d(2\gamma_d-1)})$, $\lim_{\varepsilon\rightarrow 0}\alpha_\varepsilon(d,\lambda)={1}/({4d\lambda})+\frac12+1-\gamma_d<1$, and hence there exists $\varepsilon_0>0$ such that $\mathbb{E}_{(w,w)}\big(\prod_{i=0}^{+\infty}\Theta^{1+\varepsilon_0}(\beta_{i},\beta_{i+1})\big)<+\infty$. By (3.14), to complete the proof we only need to show that

(3.15) \begin{equation} \lim_{t\rightarrow+\infty}{\mathrm{e}}^{-2t}\sum_{n=0}^{+\infty}\frac{(2t)^n}{n!} (\mathbb{P}(\beta_n(1)=y))^{\varepsilon_0/(1+\varepsilon_0)} = 0. \end{equation}

Since $d\geq 3$ and $\{\beta_n(1)\}_{n\geq 1}$ is a lazy version of the simple random walk on $\mathbb{Z}^d$ ,

\begin{align*} \lim_{n\rightarrow+\infty}\mathbb{P}(\beta_n(1)=y)=0. \end{align*}

Hence, for any $\varepsilon>0$ , there exists $M\geq 1$ such that $(\mathbb{P}(\beta_n(1)=y))^{\varepsilon_0/(1+\varepsilon_0)}\leq \varepsilon$ when $n\geq M$ , and then

\begin{align*} {\mathrm{e}}^{-2t}\sum_{n=M}^{+\infty}\frac{(2t)^n}{n!} (\mathbb{P}(\beta_n(1)=y))^{\varepsilon_0/(1+\varepsilon_0)} \leq \varepsilon{\mathrm{e}}^{-2t}\sum_{n=0}^{+\infty}\frac{(2t)^n}{n!} = \varepsilon. \end{align*}

Since $\lim_{t\rightarrow+\infty}\sum_{n=0}^{M-1}{\mathrm{e}}^{-2t}{(2t)^n}/{n!}=0$ , we have

\begin{align*} \limsup_{t\rightarrow+\infty}{\mathrm{e}}^{-2t}\sum_{n=0}^{+\infty}\frac{(2t)^n}{n!} (\mathbb{P}(\beta_n(1)=y))^{\varepsilon_0/(1+\varepsilon_0)} \leq \varepsilon. \end{align*}

Since $\varepsilon$ is arbitrary, letting $\varepsilon\rightarrow0$ yields (3.15), which completes the proof of (3.13).

Now we prove Lemma 3.2.

Proof of Lemma 3.2. Applying Propositions 2.1 and 2.3, for any time $c\geq 0$,

\begin{align*} \mathbb{E}_{\nu_{\lambda,d}}\bigg(\bigg(\frac{1}{\sqrt{N}\,}G_{1/N}(\eta_{c})\bigg)^2\bigg) & = {\rm Var}_{\nu_{\lambda,d}}\bigg(\frac{1}{\sqrt{N}\,}G_{1/N}(\eta_{c})\bigg) \\ & = {\rm Var}_{\nu_{\lambda,d}}\bigg(\frac{1}{\sqrt{N}\,}G_{1/N}(\eta_0)\bigg) = \frac{1}{N}\sum_{x}\sum_{y}g_{1/N}(x)g_{1/N}(y)\frac{\Phi(y-x)}{h_{\lambda, d}}. \end{align*}

As shown in Remark 2.3,

\begin{align*} \Phi(y-x) = \frac{\int_0^{+\infty}p_s(y-x,O)\,{\mathrm{d}} s}{\int_0^{+\infty}p_s(O,O)\,{\mathrm{d}} s} = \gamma_d\int_0^{+\infty}p_s(x,y)\,{\mathrm{d}} s. \end{align*}

Hence,

(3.16) \begin{align} \mathbb{E}_{\nu_{\lambda,d}}\bigg(\bigg(&\frac{1}{\sqrt{N}}G_{1/N}(\eta_{c})\bigg)^2\bigg) \notag \\ & = \frac{\gamma_d}{Nh_{\lambda,d}}\int_0^{+\infty}\int_0^{+\infty}\int_0^{+\infty}{\mathrm{e}}^{-(r+u)/N} \sum_x\sum_yp_r(O,x)p_s(x,y)p_u(y,O)\,{\mathrm{d}} r\,{\mathrm{d}} s\,{\mathrm{d}} u \notag \\ & = \frac{\gamma_d}{Nh_{\lambda,d}}\int_0^{+\infty}\int_0^{+\infty}\int_0^{+\infty}{\mathrm{e}}^{-(r+u)/N} p_{r+s+u}(O,O)\,{\mathrm{d}} r\,{\mathrm{d}} s\,{\mathrm{d}} u \notag \\ & = \frac{\gamma_d}{Nh_{\lambda,d}}\int_0^{+\infty}p_{\theta}(O,O) \bigg(\int_0^{\theta}{\mathrm{e}}^{-v/N}\bigg(\int_0^v1\,{\mathrm{d}} r\bigg)\,{\mathrm{d}} v\bigg)\,{\mathrm{d}}\theta \notag \\ & = \frac{N\gamma_d}{h_{\lambda,d}}\int_0^{+\infty}p_{\theta}(O,O) \bigg(1-{\mathrm{e}}^{-\theta/N}\bigg(1+\frac{\theta}{N}\bigg)\bigg)\,{\mathrm{d}}\theta. \end{align}
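The third equality in (3.16) follows from the change of variables $v=r+u$, $\theta=r+s+u$, under which ${\mathrm{d}} r\,{\mathrm{d}} s\,{\mathrm{d}} u={\mathrm{d}} r\,{\mathrm{d}} v\,{\mathrm{d}}\theta$ on $\{0\leq r\leq v\leq\theta\}$, while the last equality is the elementary integral

\begin{align*} \int_0^{\theta}v{\mathrm{e}}^{-v/N}\,{\mathrm{d}} v = N^2\bigg(1-{\mathrm{e}}^{-\theta/N}\bigg(1+\frac{\theta}{N}\bigg)\bigg). \end{align*}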

Since $p_\theta(O,O)=O(\theta^{-d/2})$ as $\theta\rightarrow+\infty$ and $\lim_{x\rightarrow+\infty}(1-{\mathrm{e}}^{-x}(1+x))=1$, there exist $0<M_1,\widehat{C}_4<+\infty$ such that $p_{\theta}(O,O)\leq\widehat{C}_4 \theta^{-d/2}$ and $1-{\mathrm{e}}^{-x}(1+x)\leq \widehat{C}_4$ when $\theta,x\geq M_1$. Since $\lim_{x\rightarrow0}({1-{\mathrm{e}}^{-x}(1+x)})/{x^2}=\frac12$, we have

\begin{align*} \widetilde{C}_4 \,:\!=\, \sup_{0<x\leq M_1}\frac{1-{\mathrm{e}}^{-x}(1+x)}{x^2}<+\infty. \end{align*}

We denote by $C_4$ the maximum of $\widehat{C}_4$ and $\widetilde{C}_4$ . Then,

\begin{align*} p_{\theta}(O,O)\bigg(1-{\mathrm{e}}^{-\theta/N}\bigg(1+\frac{\theta}{N}\bigg)\bigg) \leq \begin{cases} {\theta^2 C_4}/{N^2} & \text{when}\,0\leq \theta\leq M_1, \\ {C_4^2\theta^{2-({d}/{2})}}/{N^2} & \text{when}\,M_1\leq \theta\leq NM_1, \\ C_4^2\theta^{-{d}/{2}} & \text{when}\, \theta>NM_1. \end{cases} \end{align*}
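For instance, in the middle regime the two bounds defining $C_4$ are combined as follows (the first and third regimes are handled similarly, using in addition $p_\theta(O,O)\leq 1$ in the first one): for $M_1\leq\theta\leq NM_1$ we have $\theta/N\leq M_1$, so

\begin{align*} 1-{\mathrm{e}}^{-\theta/N}\bigg(1+\frac{\theta}{N}\bigg)\leq C_4\frac{\theta^2}{N^2}, \qquad p_\theta(O,O)\leq C_4\theta^{-d/2}, \end{align*}

and the product of the two right-hand sides is $C_4^2\theta^{2-(d/2)}/N^2$.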

Hence, for $d\geq 5$ ,

\begin{align*} \frac{N\gamma_d}{h_{\lambda, d}}\int_0^{M_1}p_{\theta}(O,O) \bigg(1-{\mathrm{e}}^{-\theta/N}\bigg(1+\frac{\theta}{N}\bigg)\bigg)\,{\mathrm{d}}\theta & \leq \frac{C_4\gamma_d}{Nh_{\lambda,d}}\int_0^{M_1}\theta^2\,{\mathrm{d}}\theta = O(N^{-1}), \\ \frac{N\gamma_d}{h_{\lambda,d}}\int_{M_1}^{NM_1}p_{\theta}(O,O) \bigg(1-{\mathrm{e}}^{-\theta/N}\bigg(1+\frac{\theta}{N}\bigg)\bigg)\,{\mathrm{d}}\theta & \leq \frac{C_4^2\gamma_d}{Nh_{\lambda,d}}\int_{M_1}^{NM_1}\theta^{2-({d}/{2})}\,{\mathrm{d}}\theta \\ & \leq \frac{C_4^2\gamma_d}{Nh_{\lambda,d}}\int_{M_1}^{NM_1}\theta^{2-{5}/{2}}\,{\mathrm{d}}\theta = O(N^{-1/2}), \\ \frac{N\gamma_d}{h_{\lambda,d}}\int_{NM_1}^{+\infty}p_{\theta}(O,O) \bigg(1-{\mathrm{e}}^{-\theta/N}\bigg(1+\frac{\theta}{N}\bigg)\bigg)\,{\mathrm{d}}\theta & \leq \frac{C_4^2N\gamma_d}{h_{\lambda,d}}\int_{NM_1}^{+\infty}\theta^{-d/2}\,{\mathrm{d}}\theta = O(N^{2-({d}/{2})}). \end{align*}
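The three orders in N recorded above follow from the elementary integrals

\begin{align*} \int_{M_1}^{NM_1}\theta^{-1/2}\,{\mathrm{d}}\theta = 2\big(\sqrt{NM_1}-\sqrt{M_1}\big)\leq 2\sqrt{M_1N}, \qquad \int_{NM_1}^{+\infty}\theta^{-d/2}\,{\mathrm{d}}\theta = \frac{2}{d-2}(NM_1)^{1-(d/2)}, \end{align*}

and, since $d\geq 5$ implies $2-(d/2)\leq-\frac12$, all three contributions vanish as $N\rightarrow+\infty$.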

Consequently, applying (3.16),

(3.17) \begin{equation} \lim_{N\rightarrow+\infty}\mathbb{E}_{\nu_{\lambda,d}}\bigg(\bigg(\frac{1}{\sqrt{N}\,}G_{1/N}(\eta_{0})\bigg)^2\bigg) = \lim_{N\rightarrow+\infty}\mathbb{E}_{\nu_{\lambda,d}}\bigg(\bigg(\frac{1}{\sqrt{N}\,}G_{1/N}(\eta_{tN^2})\bigg)^2\bigg) = 0. \end{equation}

By the Cauchy–Schwarz inequality,

\begin{align*} \mathbb{E}_{\nu_{\lambda,d}}\bigg(\bigg(\frac{1}{\sqrt{N}\,} \int_0^{tN}\frac{1}{N}G_{1/N}(\eta_s)\,{\mathrm{d}} s\bigg)^2\bigg) & \leq \frac{tN}{N^3}\int_0^{tN}(\mathbb{E}_{\nu_{\lambda,d}}((G_{1/N}(\eta_s))^2))\,{\mathrm{d}} s \\ & = \frac{t^2N^2}{N^3}\mathbb{E}_{\nu_{\lambda,d}}((G_{1/N}(\eta_0))^2). \end{align*}
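The last expression is exactly $t^2$ times the quantity controlled in (3.17), since

\begin{align*} \frac{t^2N^2}{N^3}\mathbb{E}_{\nu_{\lambda,d}}((G_{1/N}(\eta_0))^2) = t^2\,\mathbb{E}_{\nu_{\lambda,d}}\bigg(\bigg(\frac{1}{\sqrt{N}\,}G_{1/N}(\eta_0)\bigg)^2\bigg). \end{align*}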

Hence, applying (3.17),

(3.18) \begin{equation} \lim_{N\rightarrow+\infty}\mathbb{E}_{\nu_{\lambda,d}}\bigg(\bigg(\frac{1}{\sqrt{N}\,} \int_0^{tN}\frac{1}{N}G_{1/N}(\eta_s)\,{\mathrm{d}} s\bigg)^2\bigg) = 0. \end{equation}

Lemma 3.2 follows from (3.17) and (3.18).

Finally, we prove Lemma 3.3.

Proof of Lemma 3.3. By (3.9) and the fact that $\{X_t^N\}_{t\geq 0}$ is continuous in t,

(3.19) \begin{equation} \sup_{0\leq s\leq tN}\frac{1}{N}\big(M^{1/N}_{s}-M^{1/N}_{s-}\big)^2 \leq \sup_{0\leq s\leq tN}\frac{1}{N}\sup_{x}\max_{y\sim x}\{g^2_{1/N}(x)\eta^2_s(x), g^2_{1/N}(x)\eta^2_s(y)\}. \end{equation}

For any $M>0$ , let

\begin{align*} \tau_M = \inf\bigg\{s>0\colon \frac{1}{N}\sup_{x}\max_{y\sim x}\{g^2_{1/N}(x)\eta^2_s(x), g^2_{1/N}(x)\eta^2_s(y)\}>M\bigg\}; \end{align*}

then

$$ \bigg\{\sup_{0\leq s\leq tN}\frac{1}{N}\sup_{x}\max_{y\sim x}\{g^2_{1/N}(x)\eta^2_s(x),g^2_{1/N}(x)\eta^2_s(y)\}>M\bigg\} = \{\tau_M\leq tN\}. $$

Conditioned on $\{\tau_M\leq tN\}$ , there exists $x_0\in \mathbb{Z}^d$ such that

\begin{align*} \max_{y\sim x_0}\{g^2_{1/N}(x_0)\eta^2_{\tau_M}(x_0), g^2_{1/N}(x_0)\eta^2_{\tau_ M}(y)\}>NM. \end{align*}

If neither $\eta_s(x_0)$ nor any of $\{\eta_s(y)\colon y\sim x_0\}$ jumps to 0 during $s\in [\tau_M,\tau_M+1]$, then

\begin{align*} \max_{y\sim x_0}\{g^2_{1/N}(x_0)\eta^2_{s}(x_0),g^2_{1/N}(x_0)\eta^2_{s}(y)\} \geq NM\exp\bigg\{\min\bigg\{\frac{1}{2\lambda d}-1,0\bigg\}\bigg\} \end{align*}

for $s\in [\tau_M, \tau_{M}+1]$ and hence

\begin{align*} \int_0^{tN+1}\sum_{x}g_{1/N}^4(x)\Bigg(\eta_s^4(x)+\sum_{y\sim x}\eta_s^4(y)\Bigg)\,{\mathrm{d}} s \geq N^2M^2\exp\bigg\{2\min\bigg\{\frac{1}{2\lambda d}-1,0\bigg\}\bigg\}. \end{align*}
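Indeed, in this case $[\tau_M,\tau_M+1]\subseteq[0,tN+1]$, and for every $s\in[\tau_M,\tau_M+1]$ the preceding bound on the maximum gives

\begin{align*} \sum_{x}g_{1/N}^4(x)\Bigg(\eta_s^4(x)+\sum_{y\sim x}\eta_s^4(y)\Bigg) \geq \Big(\max_{y\sim x_0}\{g^2_{1/N}(x_0)\eta^2_{s}(x_0),g^2_{1/N}(x_0)\eta^2_{s}(y)\}\Big)^2 \geq N^2M^2\exp\bigg\{2\min\bigg\{\frac{1}{2\lambda d}-1,0\bigg\}\bigg\}; \end{align*}

integrating this lower bound over the unit interval $[\tau_M,\tau_M+1]$ then yields the inequality above.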

Since the state of a vertex jumps to 0 at rate $1/(2\lambda d)$ , applying the strong Markov property of $\{\eta_t\}_{t\geq 0}$ ,

\begin{multline*} \mathbb{P}\Bigg(\int_0^{tN+1}\sum_{x}g_{1/N}^4(x)\Bigg(\eta_s^4(x)+\sum_{y\sim x}\eta_s^4(y)\Bigg)\,{\mathrm{d}} s \geq N^2M^2\exp\bigg\{2\min\bigg\{\frac{1}{2\lambda d}-1,0\bigg\}\bigg\}\Bigg) \\ \geq \mathbb{P}(\tau_M\leq tN)\exp\bigg\{{-}\frac{2d+1}{2\lambda d}\bigg\}. \end{multline*}

Then, by Markov’s inequality, Proposition 2.4, and the fact that

\begin{align*} \sum_x g_{1/N}^4(x)\leq \sum_x g_0^4(x)<+\infty, \end{align*}

we have

(3.20) \begin{align} & \mathbb{P}\bigg(\sup_{0\leq s\leq tN}\frac{1}{N}\sup_{x} \max_{y\sim x}\{g^2_{1/N}(x)\eta^2_s(x), g^2_{1/N}(x)\eta^2_s(y)\}>M\bigg) \notag \\ & \leq \frac{\exp\{({2d+1})/({2\lambda d})\}}{N^2M^2\exp\{2\min\{{1}/({2\lambda d})-1,0\}\}} \int_0^{tN+1}\sum_xg_0^4(x)\big(\mathbb{E}_{\nu_{\lambda,d}}(\eta_0^4(O))(2d+1)\big)\,{\mathrm{d}} s \leq \frac{C_3}{NM^2}, \end{align}

where $C_3<+\infty$ is independent of N and M. Applying Fubini’s theorem, for any nonnegative random variable V and any $c>0$,

\begin{align*} \mathbb{E}\big(V\textbf{1}_{\{V>c\}}\big) = c\mathbb{P}(V>c) + \int_c^{+\infty}\mathbb{P}(V\geq u)\,{\mathrm{d}} u. \end{align*}
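This identity can be verified by writing $V=\int_0^{+\infty}\textbf{1}_{\{u<V\}}\,{\mathrm{d}} u$ and exchanging expectation and integration:

\begin{align*} \mathbb{E}\big(V\textbf{1}_{\{V>c\}}\big) = \int_0^{+\infty}\mathbb{P}(V>c,\,V>u)\,{\mathrm{d}} u = \int_0^{c}\mathbb{P}(V>c)\,{\mathrm{d}} u + \int_c^{+\infty}\mathbb{P}(V>u)\,{\mathrm{d}} u, \end{align*}

and $\mathbb{P}(V>u)$ may be replaced by $\mathbb{P}(V\geq u)$ in the last integral, since the two functions differ for at most countably many u.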

Hence, for any $\varepsilon>0$ ,

\begin{align*} \mathbb{E}\bigg(\sup_{0\leq s\leq tN}\frac{1}{N}(M^{1/N}_{s}-M^{1/N}_{s-})^2\bigg) \leq \varepsilon + \varepsilon\frac{C_3}{N\varepsilon^2} + \int_{\varepsilon}^{+\infty}\frac{C_3}{Nu^2}\,{\mathrm{d}} u = \varepsilon + \frac{2C_3}{N\varepsilon} \end{align*}

according to (3.19) and (3.20). As a result,

\begin{align*} \limsup_{N\rightarrow+\infty}\mathbb{E}\bigg(\sup_{0\leq s\leq tN}\frac{1}{N}(M^{1/N}_{s}-M^{1/N}_{s-})^2\bigg) \leq \varepsilon. \end{align*}

Since $\varepsilon$ is arbitrary, letting $\varepsilon\rightarrow 0$ completes the proof.

Acknowledgements

The author is grateful to the referees and editors for their careful reading of the paper as well as their comments and suggestions.

Funding information

The author is grateful for financial support from the National Natural Science Foundation of China with grant number 12371142 and the Fundamental Research Funds for the Central Universities with grant number 2022JBMC039.

Competing interests

There were no competing interests to declare which arose during the preparation or publication process of this article.

References

[1] Birkner, M. and Zähle, I. (2007). A functional CLT for the occupation time of a state-dependent branching random walk. Ann. Prob. 35, 2063–2090.
[2] Bramson, M., Cox, J. T. and Griffeath, D. (1988). Occupation time large deviations of the voter model. Prob. Theory Relat. Fields 77, 401–413.
[3] Cox, J. T. and Griffeath, D. (1983). Occupation time limit theorems for the voter model. Ann. Prob. 11, 876–893.
[4] Deuschel, J. D. and Rosen, J. (1998). Occupation time large deviations for critical branching Brownian motion and related processes. Ann. Prob. 26, 602–643.
[5] Ethier, S. N. and Kurtz, T. G. (1986). Markov Processes: Characterization and Convergence. John Wiley, New York.
[6] Griffeath, D. (1983). The binary contact path process. Ann. Prob. 11, 692–705.
[7] Komorowski, T., Landim, C. and Olla, S. (2012). Fluctuations in Markov Processes: Time Symmetry and Martingale Approximation. Springer, New York.
[8] Landim, C. (1992). Occupation time large deviations for the symmetric simple exclusion process. Ann. Prob. 20, 206–231.
[9] Lee, T. Y., Landim, C. and Chang, C. (2004). Occupation time large deviations of two-dimensional symmetric simple exclusion process. Ann. Prob. 32, 661–691.
[10] Li, Q. and Ren, Y. (2011). A large deviation for occupation time of critical branching $\alpha$-stable process. Sci. China Math. 54, 1445–1456.
[11] Liggett, T. M. (1985). Interacting Particle Systems. Springer, New York.
[12] Liggett, T. M. (1999). Stochastic Interacting Systems: Contact, Voter and Exclusion Processes. Springer, New York.
[13] Maillard, G. and Mountford, T. (2009). Large deviations for voter model occupation times in two dimensions. Ann. Inst. H. Poincaré Prob. Statist. 45, 577–588.
[14] Nakashima, M. (2009). Central limit theorem for linear stochastic evolutions. J. Math. Kyoto Univ. 49, 201–224.
[15] Newman, C. M. (1983). A general central limit theorem for FKG systems. Commun. Math. Phys. 91, 75–80.
[16] Schonmann, R. H. (1986). Central limit theorem for the contact process. Ann. Prob. 14, 1291–1295.
[17] Xue, X. (2018). An improved upper bound for critical value of the contact process on $\mathbb{Z}^d$ with $d\geq 3$. Electron. Commun. Prob. 23, 1–11.
[18] Xue, X. and Zhao, L. (2021). Non-equilibrium fluctuations of the weakly asymmetric normalized binary contact path process. Stoch. Process. Appl. 135, 227–253.