
On the quasi-ergodicity of absorbing Markov chains with unbounded transition densities, including random logistic maps with escape – CORRIGENDUM

Published online by Cambridge University Press:  18 September 2024

MATHEUS M. CASTRO*
Affiliation:
Department of Mathematics, Imperial College London, London, SW7 2AZ, UK (e-mail: [email protected], [email protected], [email protected])
VINCENT P. H. GOVERSE
Affiliation:
Department of Mathematics, Imperial College London, London, SW7 2AZ, UK (e-mail: [email protected], [email protected], [email protected])
JEROEN S. W. LAMB
Affiliation:
Department of Mathematics, Imperial College London, London, SW7 2AZ, UK (e-mail: [email protected], [email protected], [email protected]) International Research Center for Neurointelligence, The University of Tokyo, Tokyo 113-0033, Japan Centre for Applied Mathematics and Bioinformatics, Department of Mathematics and Natural Sciences, Gulf University for Science and Technology, Halwally, Kuwait
MARTIN RASMUSSEN
Affiliation:
Department of Mathematics, Imperial College London, London, SW7 2AZ, UK (e-mail: [email protected], [email protected], [email protected])


Type: Corrigendum

Copyright: © The Author(s), 2024. Published by Cambridge University Press

1. Introduction

In the paper [1], the technical Lemmas 4.5 and 4.6 are incorrect. This cascaded into the proofs of Proposition 5.2 and Theorems 2.2, 2.3 and 2.4. Although some of the main theorems of the original paper were impacted, the ideas in [1] are robust enough to correct the original proofs. In this corrigendum, we provide the necessary modifications to the statements and proofs.

Furthermore, Lemma 3.1(i) has a typo, which propagated to Proposition 4.2(i) and Theorem 2.2(ii). We give the corrected statements below and note that the proof of these results remains correct.

Finally, Theorem 2.4 lacks a condition, which we provide below. Its proof remains essentially the same.

2. Lemma 3.1(i) and Proposition 4.2(i)

Lemma 3.1(i) was incorrectly quoted from [2, Theorem 3.3.5] and [3, Corollary V.8.1]. The correct statement is as follows.

Lemma 3.1.

  1. (i) For every $f\in L^1(M,\rho )$ ,

    $$ \begin{align*}\lim_{n\to \infty} \frac{1}{n} \sum_{i=0}^{n-1}T ^i f = \eta \frac{\mathbb E_\rho[f \mid \mathcal I(T,\rho)]}{\mathbb E_\rho[ \eta \mid \mathcal I(T,\rho)]}\quad \rho\mbox{-almost surely (a.s.)}.\end{align*} $$
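For orientation, a minimal special case: if $\rho$ is a finite measure and the invariant $\sigma$-algebra $\mathcal I(T,\rho)$ is $\rho$-trivial, the conditional expectations reduce to $\rho$-averages and the limit above becomes

$$ \begin{align*} \lim_{n\to \infty} \frac{1}{n} \sum_{i=0}^{n-1}T ^i f = \eta\, \frac{\int_M f \,d\rho}{\int_M \eta \,d\rho}\quad \rho\text{-a.s.} \end{align*} $$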

This affected the statement of Proposition 4.2(i), which is corrected as follows.

Proposition 4.2.

  1. (i) For every $f\in L^1(M,\mu ),$

    $$ \begin{align*} {\frac{1}{n}\sum_{i=0}^{n-1}\frac{1}{\unicode{x3bb}^i}{\mathcal P}^i f \xrightarrow{n\to\infty} \eta \int_M f(y)\mu(dy)}\quad \text{ in } L^1(M,\mu) \text{ and } \mu\text{-a.s.} \end{align*} $$
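As a sanity check (a sketch, assuming the standard quasi-stationarity relations $\int_M {\mathcal P}f\,d\mu = \unicode{x3bb}\int_M f\,d\mu$ and $\int_M \eta\,d\mu = 1$), integrating both sides of the display against $\mu$ gives the same value for every $n$:

$$ \begin{align*} \int_M \frac{1}{n}\sum_{i=0}^{n-1}\frac{{\mathcal P}^i f}{\unicode{x3bb}^i}\,d\mu = \frac{1}{n}\sum_{i=0}^{n-1}\frac{1}{\unicode{x3bb}^i}\int_M {\mathcal P}^i f\,d\mu = \int_M f\,d\mu = \int_M \eta \,d\mu \int_M f(y)\,\mu(dy). \end{align*} $$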

3. Lemmas 4.5 and 4.6

By relabelling $\{g_i\}_{i=0}^{m-1}$ and $\{C_i\}_{i=0}^{m-1}$ from Lemma 4.4, we may assume that the permutation $\sigma $ satisfies $\sigma (i) = i-1\ \text {(mod }m)$ . We write $g_j =g_{{j\ (\mathrm {mod}\ m)}}$ and $C_j = C_{{j\ (\mathrm {mod}\ m)}}$ for every $j\in \mathbb Z$ .
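For instance, with $m=3$ the convention gives $g_3=g_0$, $g_4=g_1$ and $g_{-1}=g_2$, and iterating the relation ${\mathcal P} g_{j}=\unicode{x3bb} g_{j-1}$ recalled in Theorem 2.2(iii) below yields

$$ \begin{align*} \frac{{\mathcal P}^2}{\unicode{x3bb}^2}\, g_1 = g_{-1} = g_2, \qquad \frac{{\mathcal P}^3}{\unicode{x3bb}^3}\, g_j = g_{j-3} = g_j \quad\text{for every } j\in\{0,1,2\}, \end{align*} $$

so that ${\mathcal P}/\unicode{x3bb}$ permutes the family $\{g_0,\ldots,g_{m-1}\}$ cyclically and $({\mathcal P}/\unicode{x3bb})^m$ fixes each of its members.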

With this convention, Lemmas 4.5 and 4.6 should be combined into a single lemma and corrected as follows.

Lemma 4.5. Suppose the absorbing Markov chain $X_n$ satisfies Hypothesis H1. Then for every bounded and measurable function $h:M\to \mathbb {R}$ and $\ell \in \{0,1,\ldots ,m-1\}$ ,

(3.1) $$ \begin{align} \frac{1}{\unicode{x3bb}^{mn+\ell}} {\mathcal P}^{mn+\ell}h \xrightarrow[L^1(M,\mu)]{n\to\infty} \sum_{s=0}^{m-1} g_{s}\int_{C_{s+\ell}} h\,d\mu, \end{align} $$

and

(3.2) $$ \begin{align} \frac{1}{mn+\ell}\sum_{i=0}^{mn+\ell-1}\frac{{\mathcal P}^i}{\unicode{x3bb}^i} \bigg(h\,\frac{{\mathcal P}^{mn+\ell-i}(\cdot,M)}{\unicode{x3bb}^{mn+\ell-i}} \bigg) \xrightarrow[L^1(M,\mu)]{n\to\infty} \sum_{s=0}^{m-1} \mu(C_{s+\ell})\, g_{s}\int_{M} h\eta\,d\mu. \end{align} $$
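To illustrate equation (3.1), take $h=\mathbb 1_{C_j}$ for a fixed $j\in\{0,1,\ldots,m-1\}$. Since $\mu(C_i\cap C_j)=0$ for $i\neq j$ (a fact used in Step 1 below), only the summand with $s+\ell\equiv j\ (\mathrm{mod}\ m)$ survives on the right-hand side, and equation (3.1) reduces to

$$ \begin{align*} \frac{1}{\unicode{x3bb}^{mn+\ell}}{\mathcal P}^{mn+\ell}\,\mathbb 1_{C_j} \xrightarrow[L^1(M,\mu)]{n\to\infty} \mu(C_j)\, g_{j-\ell}, \end{align*} $$

in agreement with the relation ${\mathcal P}^{mn+\ell} g_{j}=\unicode{x3bb}^{mn+\ell} g_{j-\ell}$.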

Proof. Due to Proposition 4.2, there exist $\alpha _0,\ldots ,\alpha _{m-1}\in \mathbb C$ and $v\in E_{\mathrm {aws}}$ such that ${h = \sum _{s=0}^{m-1} \alpha _s g_s + v}.$

Step 1. We show that $v \in E_{\mathrm {aws}}$ if and only if $\int _{C_i} v\,d \mu = 0$ for every $i\in \{0,1,\ldots , m-1\}.$ Suppose first that $v \in E_{\mathrm {aws}}$ . We claim that $\int _{C_i} v\,d \mu = 0$ for all $i \in \{0,1,\ldots , m-1\}$ . Indeed, if $\int _{C_i} v\,d \mu \neq 0$ for some $i$, then $v = \sum _{j=0}^{m-1}\alpha _j g_j + w$ with $\alpha _i \neq 0$ and $w \in E_{\mathrm {aws}}$ . Since $\mu (C_i \cap C_j) = 0$ for all $j \neq i$ , we obtain that $v \not \in E_{\mathrm {aws}}$ , a contradiction. It follows that $\int _{C_i} v\,d \mu = 0$ for every $i\in \{0,1,\ldots , m-1\}$.

Conversely, assume that $\int _{C_i} v \,d \mu = 0$ for every $i\in \{0,1,\ldots ,m-1\}.$ Write $v = \sum _{i=0}^{m-1}\alpha _i g_i + w$ , with $w\in E_{\mathrm {aws}}$ . Since $\int _M g_i\,d \mu = 1$ , $\int _{C_i} g_j\,d\mu = 0$ for $j\neq i$ and, by the first part, $\int _{C_i} w\,d\mu = 0$ , we have that $\alpha _i = \int _{C_i} \alpha _i g_i\,d\mu = \int _{C_i} (\sum _{j=0}^{m-1}\alpha _j g_j + w) \,d\mu = \int _{C_i} v\,{d} \mu = 0.$ We obtain that $\alpha _i = 0$ for every $i\in \{0,1,\ldots , m-1\},$ which implies $v\in E_{\mathrm {aws}}$ .

Step 2. We show that equation (3.1) holds. Integrating $h = \sum _{s=0}^{m-1} \alpha _s g_s + v$ with respect to $\mu $ over each $C_i$ and using Step 1, we obtain that $h = \sum _{s=0}^{m-1} g_s \int _{C_s} h\,d \mu + v.$
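Spelling out the coefficient identification: integrating the decomposition over $C_i$ and using $\int_{C_i} g_i\,d\mu = 1$, $\int_{C_i} g_j\,d\mu = 0$ for $j\neq i$ (as in Step 1) and $\int_{C_i} v\,d\mu = 0$, we get

$$ \begin{align*} \int_{C_i} h\,d\mu = \sum_{s=0}^{m-1}\alpha_s\int_{C_i} g_s\,d\mu + \int_{C_i} v\,d\mu = \alpha_i, \end{align*} $$

so that $\alpha_s = \int_{C_s} h\,d\mu$ for every $s\in\{0,1,\ldots,m-1\}$, which is exactly the expression for $h$ stated above.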

Therefore,

$$ \begin{align*} \frac{1}{\unicode{x3bb}^{nm + \ell}} {{\mathcal P}}^{nm+\ell} h = \sum_{s=0}^{m-1} g_{s-\ell} \int_{C_s} h\,d\mu + \frac{1}{\unicode{x3bb}^{nm + \ell}} {{\mathcal P}}^{nm+\ell} v \xrightarrow[L^1(M,\mu)]{n\to\infty} \sum_{s=0}^{m-1} g_s\int_{C_{s+\ell}} h\,d\mu. \end{align*} $$

Step 3. We show that equation (3.2) holds and conclude the proof of the lemma. From Step $1$ , we have that $\mathbb 1_M = \sum _{s=0}^{m-1}\mu (C_s) g_s + w$ for some $w\in E_{\mathrm {aws}}$ . Given $\ell \in \{0,1,\ldots ,m-1\}$ , define $n_\ell := m n +\ell $ . A direct computation implies that

$$ \begin{align*} \frac{1}{n_\ell}\sum_{i=0}^{n_\ell-1}\frac{{\mathcal P}^i}{\unicode{x3bb}^i}\bigg(h\,\frac{{\mathcal P}^{n_\ell-i}(\cdot,M)}{\unicode{x3bb}^{n_\ell-i}}\bigg) = \underbrace{\sum_{s=0}^{m-1}\frac{\mu(C_s)}{n_\ell}\sum_{i=0}^{n_\ell-1}\frac{{\mathcal P}^i}{\unicode{x3bb}^i}(h g_{s-\ell+i})}_{=:I_h^{n_\ell}} + \underbrace{\frac{1}{n_\ell}\sum_{i=0}^{n_\ell-1}\frac{{\mathcal P}^i}{\unicode{x3bb}^i}\bigg(h\,\frac{{\mathcal P}^{n_\ell-i}}{\unicode{x3bb}^{n_\ell-i}}w\bigg)}_{=:J_h^{n_\ell}}. \end{align*} $$

On the one hand, we have that

$$ \begin{align*} \|J^{n_\ell}_h\|_{L^1(M,\mu)} \leq \frac{1}{n_\ell}\sum_{i=0}^{n_\ell-1} \bigg\|\frac{{\mathcal P}^i}{\unicode{x3bb}^i} \bigg(\bigg|h \frac{{\mathcal P}^{n_\ell-i} }{\unicode{x3bb}^{n_\ell-i}}w\bigg|\bigg)\bigg\|_{L^1(M,\mu)}\leq\frac{ \|h\|_\infty }{n_\ell} \sum_{i=0}^{n_\ell-1}\bigg\| \frac{{\mathcal P}^{n_\ell-i} }{\unicode{x3bb}^{n_\ell-i}}w\bigg\|_{L^1(M,\mu)}. \end{align*} $$

From Step $2$ , we obtain that $J_h^{n_\ell } \xrightarrow []{n\to \infty } 0$ in $L^1(M,\mu )$ .
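Explicitly, setting $a_k:=\|({\mathcal P}^{k}/\unicode{x3bb}^{k})w\|_{L^1(M,\mu)}$, which tends to $0$ as $k\to\infty$ because $w\in E_{\mathrm{aws}}$, the right-hand side above is a Cesàro average of a null sequence:

$$ \begin{align*} \|J_h^{n_\ell}\|_{L^1(M,\mu)} \leq \frac{\|h\|_\infty}{n_\ell}\sum_{i=0}^{n_\ell-1} a_{n_\ell-i} = \frac{\|h\|_\infty}{n_\ell}\sum_{k=1}^{n_\ell} a_{k} \xrightarrow{n\to\infty} 0. \end{align*} $$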

On the other hand, Step 2 yields that

$$ \begin{align*} I_h^{n_\ell} &=\sum_{s=0}^{m-1} \frac{\mu(C_s)}{n_\ell} \sum_{j=0}^{n-1} \frac{{\mathcal P}^{m j}}{\unicode{x3bb}^{m j}}\bigg(\sum_{i =0}^{m-1}\frac{{\mathcal P}^i}{\unicode{x3bb}^i}(h g_{s-\ell+i})\bigg)+\sum_{s=0}^{m-1} \frac{\mu(C_s)}{n_\ell} \frac{{\mathcal P}^{mn}}{\unicode{x3bb}^{mn}} \bigg(\sum_{i=0}^{\ell-1}\frac{{\mathcal P}^i}{\unicode{x3bb}^i}(h g_{s-\ell+i})\bigg)\\ &\quad\xrightarrow[L^1(M,\mu)]{n\to\infty} \sum_{s=0}^{m-1} \frac{\mu(C_s)}{m} \sum_{k=0}^{m-1} g_k \int_{C_k}\sum_{i =0}^{m-1}\frac{{\mathcal P}^i}{\unicode{x3bb}^i} (h g_{s-\ell+i})\,d\mu = \sum_{k=0}^{m-1} \mu(C_{\ell+k} )g_k \int_{M} h \eta\,d\mu. \end{align*} $$

Hence, $I_h^{n_\ell } + J_{h}^{n_\ell } \xrightarrow []{n\to \infty }\sum _{k=0}^{m-1} \mu (C_{\ell +k} )g_k \int _{M} h \eta \,d\mu $ in $L^1(M,\mu ),$ which concludes the proof of Step 3.
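For completeness, the last equality can be verified directly. Using the structural facts employed throughout this proof (each $g_j$ is supported in $C_j$, ${\mathcal P}$ maps functions supported in $C_j$ to functions supported in $C_{j-1}$, and $\int_M {\mathcal P}f\,d\mu = \unicode{x3bb}\int_M f\,d\mu$), only the summand with $s = k+\ell\ (\mathrm{mod}\ m)$ contributes to the integral over $C_k$; writing $m\eta = \sum_{i=0}^{m-1} g_i$ for the normalization consistent with the stated limit, we obtain

$$ \begin{align*} \sum_{s=0}^{m-1} \frac{\mu(C_s)}{m} \sum_{k=0}^{m-1} g_k \int_{C_k}\sum_{i =0}^{m-1}\frac{{\mathcal P}^i}{\unicode{x3bb}^i} (h g_{s-\ell+i})\,d\mu &= \sum_{k=0}^{m-1} \frac{\mu(C_{k+\ell})}{m}\, g_k \sum_{i=0}^{m-1}\int_{M} h\, g_{k+i}\,d\mu\\ &= \sum_{k=0}^{m-1} \mu(C_{\ell+k})\, g_k \int_{M} h\,\eta\,d\mu. \end{align*} $$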

4. Proposition 5.2

As a consequence of the corrected Lemma 4.5, Proposition 5.2 reads as follows.

Proposition 5.2. Let $X_n$ be an absorbing Markov chain satisfying Hypothesis H1. Suppose that one of the following items holds:

  1. (a) there exists $K>0$ such that $\mu (\{K<\eta \}) =1$ ;

  2. (b) there exists $ g\in L^1(M,\mu )$ such that $ ({1}/{\unicode{x3bb} ^n}) {\mathcal P}^n(\cdot ,M) \leq g \ \text {for every }n\in \mathbb N$ ;

  3. (c) the absorbing Markov chain $X_n$ fulfils Hypothesis H1.

Then for every $h\in L^\infty (M,\mu )$ and $\ell \in \{0,1,\ldots ,m-1\}$ ,

(4.1) $$ \begin{align} \frac{1}{mn+\ell}\sum_{i=0}^{mn+\ell-1}\frac{{\mathcal P}^i}{\unicode{x3bb}^i} \bigg(h\frac{{\mathcal P}^{mn+\ell-i}(\cdot,M)}{\unicode{x3bb}^{mn+\ell-i}} \bigg) \xrightarrow{n\to\infty} \sum_{s=0}^{m-1}\mu(C_{s+\ell})g_s\int_M h(y)\eta(y)\mu(dy)\ \mu\text{-a.s.} \end{align} $$

In addition,

(4.2) $$ \begin{align} \frac{1}{\unicode{x3bb}^{mn+\ell}} {\mathcal P}^{mn+\ell} h \xrightarrow{n\to\infty} \sum_{s=0}^{m-1}g_s \int_{C_{s+\ell}} h(x) \mu(dx)\, \mu\text{-a.s.} \end{align} $$
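In the aperiodic case $m=1$ one has $\ell=0$, $C_0=M$ up to a $\mu$-null set and $\mu(C_0)=1$; assuming moreover the normalization $g_0=\eta$ (consistent with $\sum_{i=0}^{m-1}g_i=m\eta$), equations (4.1) and (4.2) reduce to the familiar quasi-ergodic and quasi-stationary limits

$$ \begin{align*} \frac{1}{n}\sum_{i=0}^{n-1}\frac{{\mathcal P}^i}{\unicode{x3bb}^i}\bigg(h\,\frac{{\mathcal P}^{n-i}(\cdot,M)}{\unicode{x3bb}^{n-i}}\bigg) \xrightarrow{n\to\infty} \eta\int_M h\,\eta\,d\mu \quad\text{and}\quad \frac{1}{\unicode{x3bb}^{n}}{\mathcal P}^{n}h \xrightarrow{n\to\infty} \eta\int_M h\,d\mu\quad \mu\text{-a.s.} \end{align*} $$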

Proof. The proof of the proposition assuming that either item (a) or item (b) holds remains mostly the same. The only correction to be made is on page $16$, line $5$, where the term $(1/\unicode{x3bb} )^n{\mathcal P}(x,M)$ should be replaced by $(1/\unicode{x3bb} )^n{\mathcal P}^n(x,M)$ .

Now, we prove item (c). For every $j\in \mathbb N$ , define the set $K_j:= \{x\in M : \|k(x,\cdot )\|_{L^{\infty }(M,\mu )}\leq j\}$ and the bounded operator $\mathcal G_{j}:L^{1}(M,\mu )\to L^\infty (K_j,\mu )$ , $\mathcal G_j f := (({1}/{\unicode{x3bb} }){\mathcal P} f)|_{K_j}$ . Applying ${\mathcal G}_j$ to equations (3.1) and (3.2) with $\ell -1$ in place of $\ell $ , from Lemma 4.5 and the fact that $\mathcal G_j$ is a bounded operator, we obtain that the convergences in equations (4.1) and (4.2) hold for $\mu $ -almost every $x\in K_j$ . Finally, since Hypothesis H1 implies that $\mu (\bigcup _{j\geq 1} K_j) = 1$ , we obtain the result.
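The mechanism behind this step is elementary; the following is a sketch, assuming that ${\mathcal P}$ acts through the transition density via ${\mathcal P}f(x)=\int_M k(x,y)f(y)\,\mu(dy)$, so that the kernel is bounded by $j$ on $K_j$:

$$ \begin{align*} \|\mathcal G_j f\|_{L^\infty(K_j,\mu)} \leq \frac{1}{\unicode{x3bb}}\mathop{\mathrm{ess\,sup}}_{x\in K_j}\int_M k(x,y)\,|f(y)|\,\mu(dy) \leq \frac{j}{\unicode{x3bb}}\,\|f\|_{L^1(M,\mu)}. \end{align*} $$

Hence $\mathcal G_j$ maps an $L^1(M,\mu)$-convergent sequence to a sequence converging in $L^\infty(K_j,\mu)$, and in particular converging for $\mu$-almost every $x\in K_j$, which is how the $L^1$ statements of Lemma 4.5 upgrade to the pointwise statements in equations (4.1) and (4.2).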

5. Theorem 2.2

The corrections of Lemma 4.5 also affect Theorem 2.2.

Theorem 2.2. Let $X_n$ be an absorbing Markov chain fulfilling Hypothesis H1. Then the following assertions hold:

  1. (i) there exist a natural number $m\in \mathbb N$ and sets $C_0, C_1, \ldots , C_{m-1}=:C_{-1} \in \mathscr B(M)$ such that for every $i\in \{0,1,\ldots ,m-1\}$ ;

  2. (ii) for every $f\in L^1(M,\mu )$ , $ ({1}/{n}) \sum _{i=0}^{n-1} ({1}/{\unicode{x3bb} ^i}){\mathcal P}^i f \xrightarrow {n\to \infty } \eta \int _M f(y) \mu (dy)$ in $L^1(M,\mu )$ and $\mu $ -a.s.;

  3. (iii) there exist non-negative functions $g_0,g_1,\ldots ,g_{m-1}=:g_{-1}\in L^1(M,\mu )$ , satisfying

    $$ \begin{align*}{\mathcal P} g_{j} = \unicode{x3bb} g_{j-1}\quad \text{and}\quad \|g_j\|_{L^1(M,\mu)}=1\end{align*} $$
    for every $j\in \{0,1,\ldots ,m-1\}$, such that given $\ell \in \{0,1,\ldots ,m-1\}$ and $h\in L^\infty (M,\mu )$, the following limit holds:
    $$ \begin{align*} \frac{1}{\unicode{x3bb}^{nm+\ell}} {\mathcal P}^{nm+\ell} h \xrightarrow[L^1(M,\mu)]{n\to\infty} \sum_{s=0}^{m-1}g_s \int_{C_{s+\ell}} h(x) \mu(dx);\end{align*} $$
  4. (iv) if in addition, we assume that M is a Polish space, then for every $h\in L^\infty (M,\mu )$ ,

    (5.1) $$ \begin{align} \bigg(x\mapsto \mathbb E_x \bigg[\frac{1}{n} \sum_{i=0}^{n-1}h\circ X_i \mid \tau> n\bigg]\bigg) \xrightarrow{n\to\infty} \int_M h(y) \eta(y) \mu(dy) \end{align} $$
    in the $L^\infty (M,\mu )$ -weak ${}^*$ topology. In particular, we obtain that equation (5.1) also converges weakly in $L^1(M,\mu ).$

Proof. The proof of assertions (i), (ii) and (iii) remains unchanged. To prove assertion (iv), fix $\ell \in \{0,1,\ldots ,m-1\}$ . Repeating the proof of [1, Lemma 2.2], but replacing $g_{n}$ with $g_{mn+\ell }$ , we obtain that $g_{nm+\ell }$ converges to the right-hand side of equation (5.1) in the $L^\infty (M,\mu )$-weak $^*$ topology. Since $\ell \in \{0,1,\ldots ,m-1\}$ is arbitrary, assertion (iv) follows.

6. Theorem 2.3

The same proof as before holds, now using the corrected Lemma 4.5 and Proposition 5.2.

7. Theorem 2.4

Theorem 2.4 requires an extra assumption.

Theorem 2.4. Let $X_n$ be an absorbing Markov chain fulfilling Hypothesis H2, and suppose that ${\mathcal P} f|_{K_i}\in \mathcal C^0(K_i)$ for every $f\in L^1(M,\mu )$ and $i\in \mathbb N$ , where $\{K_i\}_{i\in \mathbb N}$ is the nested sequence of compact sets given by the second part of Hypothesis H2. Then, given $h\in L^\infty (M,\mu )$ , equation (2.3) holds for every $x\in (\bigcup _{i\in \mathbb N} K_i)\cap \{\eta>0\}.$

In the case where $m=1$ in Theorem 2.2(i), equation (2.4) holds for every $x\in ( \bigcup _{i\in \mathbb N} K_i)\cap \{\eta>0\}$ .

Proof. Observe that $\mathcal G_j:L^1(M,\mu ) \to \mathcal C^0(K_j)$ is a bounded linear operator, since it is a positive operator between two Banach lattices [3, Theorem 5.3]. Then the proof follows from the same arguments as given in the new proof of Proposition 5.2(c) and equation (5.3).
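For example, the additional assumption of Theorem 2.4 holds under a simple uniform-continuity condition on the density (a sufficient condition only, again assuming ${\mathcal P}f(x)=\int_M k(x,y)f(y)\,\mu(dy)$): if the map $x\mapsto k(x,\cdot)$ is continuous from $K_i$ into $L^\infty(M,\mu)$, then for every $f\in L^1(M,\mu)$ and $x,x'\in K_i$,

$$ \begin{align*} |{\mathcal P}f(x)-{\mathcal P}f(x')| \leq \|k(x,\cdot)-k(x',\cdot)\|_{L^\infty(M,\mu)}\,\|f\|_{L^1(M,\mu)}, \end{align*} $$

so that ${\mathcal P}f|_{K_i}\in \mathcal C^0(K_i)$ for every $f\in L^1(M,\mu)$.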

Acknowledgment

The authors thank Bernat Bassols Cornudella for the valuable discussions and for pointing out some of the inaccuracies in the original publication.

References

[1] Castro, M. M., Goverse, V. P. H., Lamb, J. S. W. and Rasmussen, M. On the quasi-ergodicity of absorbing Markov chains with unbounded transition densities, including random logistic maps with escape. Ergod. Th. & Dynam. Sys. 44 (2024), 1818–1855.
[2] Krengel, U. Ergodic Theorems (De Gruyter Studies in Mathematics, 6). Walter de Gruyter & Co., Berlin, 1985.
[3] Schaefer, H. H. Banach Lattices and Positive Operators (Die Grundlehren der mathematischen Wissenschaften, 215). Springer-Verlag, New York, 1974.