
On the quasi-ergodicity of absorbing Markov chains with unbounded transition densities, including random logistic maps with escape

Published online by Cambridge University Press:  25 September 2023

MATHEUS M. CASTRO*
Affiliation:
Department of Mathematics, Imperial College London, London, SW7 2AZ, UK (e-mail: [email protected], [email protected], [email protected])
VINCENT P. H. GOVERSE
Affiliation:
Department of Mathematics, Imperial College London, London, SW7 2AZ, UK (e-mail: [email protected], [email protected], [email protected])
JEROEN S. W. LAMB
Affiliation:
Department of Mathematics, Imperial College London, London, SW7 2AZ, UK (e-mail: [email protected], [email protected], [email protected]) International Research Center for Neurointelligence, The University of Tokyo, Tokyo 113-0033, Japan Centre for Applied Mathematics and Bioinformatics, Department of Mathematics and Natural Sciences, Gulf University for Science and Technology, Halwally, Kuwait
MARTIN RASMUSSEN
Affiliation:
Department of Mathematics, Imperial College London, London, SW7 2AZ, UK (e-mail: [email protected], [email protected], [email protected])

Abstract

In this paper, we consider absorbing Markov chains $X_n$ admitting a quasi-stationary measure $\mu $ on M where the transition kernel ${\mathcal P}$ admits an eigenfunction $0\leq \eta \in L^1(M,\mu )$. We find conditions on the transition densities of ${\mathcal P}$ with respect to $\mu $ which ensure that $\eta (x) \mu (\mathrm {d} x)$ is a quasi-ergodic measure for $X_n$ and that the Yaglom limit converges to the quasi-stationary measure $\mu $-almost surely. We apply this result to the random logistic map $X_{n+1} = \omega _n X_n (1-X_n)$ absorbed at ${\mathbb R} \setminus [0,1],$ where $\omega _n$ is an independent and identically distributed sequence of random variables uniformly distributed in $[a,b],$ for $1\leq a <4$ and $b>4.$

Type
Original Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press

1 Introduction

Consider a family of transformations $\mathcal F = \{f_\omega :E\to E\}_{\omega \in \Delta }$ , where E is a metric space. Given a subset $M \subset E$ and endowing $\mathcal F$ with a probability measure, we aim to understand the statistical behaviour of the random dynamical system

$$ \begin{align*}f^n (x;\omega_1, \ldots, \omega_n) := f_{\omega_n} \circ \cdots \circ f_{\omega_1}(x)\end{align*} $$

conditioned upon remaining in M.

Such a problem can be naturally modelled via the Markov chain $X_n = f_n(X_{n-1})$ with $f_n\in \mathcal F$ absorbed at $\partial := E\setminus M$ , that is, $X_n\in \partial $ implies $X_{n+1} \in \partial $ . Statistical information for the above (conditioned) random dynamical system is then obtained by certain limiting distributions for the paths $X_n$ . In the literature, such limiting distributions appear mainly in two forms. The first is the so-called Yaglom limit

(1.1) $$ \begin{align} \lim_{n\to \infty } {\mathbb P}[X_n \in A \mid X_0 = x, \tau>n], \end{align} $$

where $x\in M$ , $\tau =\min \{n\in \mathbb N\mid X_n\in \partial \}$ and A is a measurable subset of M. The second one is the so-called quasi-ergodic limit

(1.2) $$ \begin{align} \lim_{n\to \infty } {\mathbb E}_x\bigg[\frac{1}{n} \sum_{i=0}^{n-1} \mathbb{1}_A \circ X_i \hspace{0.1cm}\bigg\vert \hspace{0.1cm} \tau>n\bigg]. \end{align} $$

There are several contexts in which the Yaglom limit converges to a quasi-stationary measure. We recall that a probability measure $\mu $ on M is called a quasi-stationary measure for $X_n$ on M if for every $n\in \mathbb N$ , $\mu (\mathrm {d} x) = {\mathbb P}[X_n \in \mathrm {d} x\mid X_0 \sim \mu , \tau>n].$ The limit in equation (1.2), on the other hand, is related to the existence of a quasi-ergodic measure for $X_n$ on M. A measure $\nu $ on M is called a quasi-ergodic measure for $X_n$ on M if for every measurable subset A of M, equation (1.2) converges to $\nu (A)$ for $\nu $ -almost every $x\in M.$

In the literature [Reference Benaïm, Champagnat, Oçafrain and Villemonais3, Reference Breyer and Roberts4, Reference Castro, Lamb, Olicón-Méndez and Rasmussen7Reference Darroch and Seneta9, Reference Zhang, Li and Song29], various sufficient conditions are presented for the existence and uniqueness of quasi-stationary measures $\mu $ and quasi-ergodic measures $\nu $ . These conditions imply that $\nu \ll \mu $ and that the Radon–Nikodym derivative ${\eta (x) =\nu (\mathrm {d} x)/\mu (\mathrm {d} x)}$ is an eigenfunction of ${\mathcal P},$ where ${\mathcal P}$ is the transition kernel of $X_n$ . The uniform convergence of the sequence $\{{\mathcal P}^n(\cdot ,M)/\unicode{x3bb} ^n\}_{n\in \mathbb N}$ , where $\unicode{x3bb} := \int _M {\mathcal P}(y,M)\mu (\mathrm {d} y)$ , to the eigenfunction $\eta $ plays a crucial role in the proofs.

In this paper, we take a different approach. We set out to derive a quasi-ergodic measure starting from a quasi-stationary measure. The existence of quasi-stationary measures is a well-established problem (see [Reference Pollett26] for a bibliography). Quasi-stationary measures arise as positive eigenmeasures of the operator $\mu \mapsto \int _M {\mathcal P}(x,\cdot )\mu (\mathrm {d} x)$ and extensive literature exists on how to solve such eigenvalue problems [Reference Meyer-Nieberg22, Reference Oikhberg and Troitsky24, Reference Pflug25]. Since quasi-ergodic measures do not admit such an approach, they are less well understood. Quasi-ergodic measures are important in the analysis of random dynamical systems, for instance, in the context of the recently established conditioned Lyapunov spectrum [Reference Castro, Chemnitz, Chu, Engel, Lamb and Rasmussen6, Reference Engel, Lamb and Rasmussen11].

Inspired by these results, in which quasi-ergodic measures can be expressed as a density with respect to the quasi-stationary measure, we obtain natural conditions on the transition kernel ${\mathcal P}$ under which the existence of a quasi-ergodic measure becomes equivalent to solving an eigenvalue problem for ${\mathcal P}$ in $L^1.$ As a result, we considerably simplify the procedure of finding a quasi-ergodic measure. Furthermore, we also obtain, under aperiodicity conditions, that the Yaglom limit in equation (1.1) converges to the quasi-stationary measure $\mu $ -almost surely (a.s.).

As an application of our results, we characterize the limits in equations (1.1) and (1.2) for the random logistic map $Y_{n+1} = \omega _n Y_n(1-Y_n)$ absorbed at ${\mathbb R} \setminus [0,1],$ with $\{\omega _n\}_{n\in \mathbb N}$ an independent and identically distributed (i.i.d.) sequence of random variables such that $\omega _0 \sim \mathrm {Unif}([a,b]),$ with $1\leq a <4$ and $b>4,$ where $\mathrm {Unif}([a,b])$ denotes the continuous uniform distribution in $[a,b]$ . The analysis of this system is challenging since its transition kernel presents a change of behaviour on the points $0$ and $1$ . In particular, for every $x\in (0,1)$ ,

$$ \begin{align*}{\mathcal P}(x,\mathrm{d} y) = {\mathbb P}[Y_n \in \mathrm{d} y \mid Y_0 =x] \ll \mathrm{Leb}(\mathrm{d} y),\end{align*} $$

while ${\mathcal P}(0,\mathrm {d} y ) = {\mathcal P}(1,\mathrm {d} y) = \delta _0(\mathrm {d} y).$ This implies that the transition densities ${\mathcal P}(x,\mathrm {d} y)$ explode when x approaches the points $0$ and $1$ . Consequently, the results in the literature [Reference Benaïm, Champagnat, Oçafrain and Villemonais3, Reference Breyer and Roberts4, Reference Castro, Lamb, Olicón-Méndez and Rasmussen7, Reference Champagnat and Villemonais8, Reference Zhang, Li and Song29] cannot be applied, since ${\mathcal P}$ does not act as a compact operator on $\mathcal C^0(M)$ and $L^p(M)$ , with $p\geq 1$ . Hence, a more refined analysis is needed.
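For intuition, the conditioned dynamics of this map can be explored by direct simulation. The following sketch is ours, not from the paper; the choice $a=1$, $b=5$, the horizon, the sample size and all identifiers are illustrative.

```python
import numpy as np

# Monte Carlo sketch of the absorbed random logistic map
# Y_{n+1} = w_n * Y_n * (1 - Y_n),  w_n ~ Unif([a, b]),  absorbed outside [0, 1].
rng = np.random.default_rng(0)
a, b, n_steps, n_paths = 1.0, 5.0, 30, 50_000

y = rng.uniform(0.0, 1.0, size=n_paths)   # initial points in (0, 1)
alive = np.ones(n_paths, dtype=bool)      # alive[i] is True while path i stays in [0, 1]
for _ in range(n_steps):
    w = rng.uniform(a, b, size=n_paths)
    y = np.where(alive, w * y * (1.0 - y), y)  # absorbed paths stay frozen
    alive &= (0.0 <= y) & (y <= 1.0)

survivors = y[alive]
print(f"fraction surviving {n_steps} steps: {alive.mean():.4f}")
```

The empirical distribution of `survivors` approximates the Yaglom limit in equation (1.1), while time averages along surviving paths approximate the quasi-ergodic limit in equation (1.2).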

To overcome this issue, we consider $AM$ -compact operators (see [Reference Glück and Haase16, Appendix A]), a generalization of compact operators. Inspired by the novel results on positive integral operators in [Reference Gerlach and Glück14Reference Glück and Haase16], we analyse the action of ${\mathcal P}$ on $L^1(M,\mu )$ . Since ${\mathcal P}$ is an integral operator, it is $AM$ -compact, and we can establish its peripheral spectrum from which the asymptotic behaviour of $Y_n$ follows.

This paper is divided into six sections. In §2, the basic concepts of the theory of absorbing Markov chains are briefly recalled, the main underlying hypotheses of this paper are defined (Hypotheses H1 and H2) and the main results of this paper are stated (Theorems 2.1, 2.2, 2.3 and 2.4). In §3, it is shown that Hypothesis H1 implies that ${\mathcal P}/\unicode{x3bb} $ is a mean ergodic operator. Section 4 is dedicated to a brief presentation of Banach lattice theory, the definition of an $AM$ -compact operator and the proof of Theorem 2.2. In §5, we combine the results of the previous sections to prove Theorems 2.3 and 2.4. Finally, in §6, we analyse the asymptotic behaviour of the random logistic map $Y_n$ , introduced above, and prove Theorem 2.1.

2 Main results

Let E be a metric space and M a subspace of E. We aim to study Markov chains on E conditioned upon remaining in the set M. With this objective in mind, we denote by $E_M$ the set $M \sqcup \partial $ endowed with the topology generated by the basis

$$ \begin{align*}\mathcal T = \{A\cap M;\ A\ \text{is an open set of }E\}\sqcup \partial,\end{align*} $$

where $\sqcup $ denotes disjoint union. In this paper, we assume that

$$ \begin{align*}X:=(\Omega, \{\mathcal F_n\}_{n\in\mathbb N_0}, \{X_n\}_{n\in\mathbb N_0}, \{{\mathcal P}^n\}_{n\in\mathbb N_0}, \{{\mathbb P}_x\}_{x \in {{E_M}}})\end{align*} $$

is a Markov chain with state space ${E_M}$ , in the sense of [Reference Rogers and Williams27, Definition III.1.1], that is, the pair $(\Omega ,\{\mathcal F_n\}_{n\in \mathbb N_0})$ is a filtered space; $X_n$ is an $\mathcal F_n$ -adapted process with state space ${E_M}$ ; ${\mathcal P}^n$ a time-homogeneous transition probability function of the process $X_n$ satisfying the usual measurability assumptions and Chapman–Kolmogorov equation; $\{{\mathbb P}_x\}_{x\in {E_M}}$ is a family of probability measures satisfying ${\mathbb P}_x[X_0=x] = 1$ for every $x\in {E_M}$ ; and for all $m,n\in \mathbb N_0$ , $x\in E_M$ , and every bounded measurable function f on ${E_M}$ ,

$$ \begin{align*}{\mathbb E}_x[f\circ X_{m+n}\mid \mathcal F_n] = ({{\mathcal P}}^m f)(X_n)\quad {\mathbb P}_x\text{-a.s}.\end{align*} $$

We assume that $X_n$ is a Markov chain that is absorbed at $\partial ,$ meaning that ${{\mathcal P}}(\partial ,\partial ) = 1. $ In view of the above definitions, it is natural to define the stopping time

$$ \begin{align*}\tau(\omega) := \inf\{n\in\mathbb N; X_n(\omega) \not\in M\}.\end{align*} $$

Throughout the paper, the following notation is used.

Notation 2.1. Given a probability measure $\mu $ on M and $p\in [1,\infty ], $ we denote ${L^p (M,{\mathscr B}(M),\mu )}$ as $L^p(M,\mu )$ and $\mathcal M(M)$ as the set of Borel signed-measures on $M.$ Moreover, we denote ${{\mathbb P}_\mu (\cdot ) := \int _{M} {\mathbb P}_x (\cdot ) \mu (\mathrm {d} x).}$

We denote $\mathcal C^0(M) :=\{f:M\to {\mathbb R}; \ f \text { is continuous}\}$ and denote by $\mathcal F_b(M)$ the set of bounded Borel measurable functions on M. Given $f\in {\mathcal F_b}(M)$ , write

$$ \begin{align*}{\mathcal P}^n f(x) := \int_M f(y)\, {\mathcal P}^n(x,\mathrm{d} y)\quad \text{for every } n\in\mathbb N_0,\end{align*} $$

and by abuse of notation, denote

$$ \begin{align*} f\circ X_n := \begin{cases} f\circ X_n& \text{if } X_n\in M,\\ 0 & \text{if } X_n \notin M .\end{cases}\end{align*} $$

Given a sub $\sigma $ -algebra $\mathscr F$ of $\mathscr B(M)$ and $f\in L^1(M,\mu ),$ we denote by $ {\mathbb E}_\mu [f\mid \mathscr F ] \in L^1(M,\mathscr F,\mu )$ the conditional expectation of f given $\mathscr F$ , that is, the unique function in $L^1(M,\mathscr F,\mu )$ such that

$$ \begin{align*}\int_F f(x) \mu(\mathrm{d} x) = \int_F {\mathbb E}_\mu [f \mid \mathscr F]\mu(\mathrm{d} x)\quad \text{for every }F\in\mathscr F.\end{align*} $$

We define the sets

$$ \begin{align*}\mathcal M_+(M) = \{\mu\in\mathcal M(M);\mu(A) \geq 0 \text{ for every }A\in\mathscr{B}(M)\}, \end{align*} $$

and

$$ \begin{align*}L^p_+ (M,\mu) = \{f\in L^p(M,\mu);\ f \geq 0 \ \mu\text{-a.s.}\}\quad \text{for every }p\in[1,\infty].\end{align*} $$

Finally, given a Banach space $E,$ we say that the sequence $\{x_n\}_{n\in \mathbb N}\subset E$ converges in the weak topology to $x\in E$ if for every bounded linear functional $\phi \in E^*$ , $\lim _{n\to \infty } \phi (x_n) = \phi (x).$ Moreover, we say that the sequence $\{\phi _n\}_{n\in \mathbb N}\subset E^*$ converges in the weak $^*$ topology to $\phi $ if $\lim _{n\to \infty }\phi _n(x) = \phi (x)$ for every $x\in E.$

Since stationary measures do not capture the behaviour of $X_n$ before absorption, they are of little use when dealing with absorbing Markov chains. For this reason, the concept of a stationary measure is extended to that of a quasi-stationary measure, whose definition we now recall.

Definition 2.2. A Borel measure $\mu $ on a metric space M is said to be a quasi-stationary measure for the Markov chain $X_n$ if

$$ \begin{align*}{\mathbb P}_\mu[X_n \in \cdot \mid \tau>n] =\mu(\cdot) \quad\text{for all } n\in\mathbb N.\end{align*} $$

We call $\unicode{x3bb} = \int _M{\mathcal P}(x,M)\mu (\mathrm {d} x)$ the survival rate of $\mu $ .

Observe that if $X_n$ admits a quasi-stationary measure $\mu $ on M with survival rate $\unicode{x3bb} $ , then ${\mathcal P}$ may be seen as a bounded linear operator in $ L^\infty (M,\mu )$ . Moreover, since

(2.1) $$ \begin{align} \int_M {\mathcal P}(x, A )\mu(\mathrm{d} x) = \unicode{x3bb} \mu(A)\quad \text{for every }A\in\mathscr{B}(M), \end{align} $$

and $L^\infty (M,\mu )$ is dense in $ L^1(M,\mu ),$ the operator ${\mathcal P}$ can be naturally extended as a bounded linear operator in $L^1(M,\mu ).$

While ergodic stationary measures of classical Markov chains can be described in terms of Birkhoff averages, this is no longer true for absorbing Markov chains: quasi-stationary measures cannot, in general, be described in terms of conditioned Birkhoff averages. This obstruction motivates the definition of quasi-ergodic measures.

Definition 2.3. A measure $\nu $ on M is called a quasi-ergodic measure if for every $f\in {\mathcal F_b}(M),$

$$ \begin{align*} \lim_{n\to\infty}{\mathbb E}_x\bigg[\frac{1}{n} \sum_{i=0}^{n-1} f\circ X_i \hspace{0.1cm}\bigg\vert \hspace{0.1cm} \tau>n\bigg] = \int_M f(y) \nu(\mathrm{d} y) \quad\mbox{for } \nu\mbox{-almost every } x\in M. \end{align*} $$
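Both definitions can be checked by hand in a finite-state toy model (our own illustration, not from the paper; the matrix entries are arbitrary). On a finite state space, a quasi-stationary measure is a normalized left Perron eigenvector of the substochastic transition matrix:

```python
import numpy as np

# Toy absorbed chain on M = {0, 1, 2}: rows of P sum to less than 1, and the
# missing mass of each row is the one-step absorption probability.
P = np.array([[0.4, 0.3, 0.2],
              [0.1, 0.5, 0.3],
              [0.2, 0.2, 0.4]])

# Quasi-stationary measure: normalized left eigenvector for the leading
# eigenvalue lambda, the survival rate of Definition 2.2.
vals, vecs = np.linalg.eig(P.T)
k = np.argmax(vals.real)
lam = vals[k].real
mu = np.abs(vecs[:, k].real)
mu /= mu.sum()

# Defining property: the law conditioned on survival is again mu.
step = mu @ P
assert np.allclose(step / step.sum(), mu)
print("survival rate:", round(lam, 4))
```

Since the matrix has positive entries, the Perron root is simple and `argmax` over real parts selects it; the surviving mass `step.sum()` equals the survival rate $\unicode{x3bb}$.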

One of the main objectives of this paper is to study the statistical asymptotic behaviour of the Markov chain $Y_{n+1}^{a,b} := \omega _n Y_{n}^{a,b} (1-Y_n^{a,b})$ absorbed at ${\mathbb R}\setminus [0,1]$ , where $\{\omega _n\}_n$ is an i.i.d. sequence of random variables such that $\omega _0 \sim \mathrm {Unif}([a,b])$ with $1\leq a<4$ and $b>4$ .

We mention that in the case where $Y_n^{a,b}$ does not escape from the interval $[0,1]$ , that is, $1\leq a < b\leq 4$ , [Reference Athreya and Dai2, Theorem 2] and [Reference Zmarrou and Homburg30, Proposition 9.5] show that $Y_n^{a,b}$ admits a unique stationary measure $\mu _{a,b}$ on $[0,1]$ such that $\mu _{a,b} ((0,1)) = 1.$ For dynamical considerations of random logistic maps and an analysis of the case where the sample space is finite, see [Reference Abbasi, Gharaei and Homburg1].

The following theorem describes the asymptotic distribution of $Y_n^{a,b}$ conditioned upon survival when $1\leq a < 4<b$ , also establishing the existence of quasi-stationary and quasi-ergodic measures for $Y_n^{a,b}$ on $[0,1]$ .

Theorem 2.1. Consider $M=[0,1]$ , $1\leq a<4<b$ , the Markov chain $Y_n^{a,b}$ on ${\mathbb R}_{M}$ absorbed at $\partial ={\mathbb R}\setminus M$ and $\tau ^{a,b}(\omega ) = \min \{n\in \mathbb N; Y_n^{a,b} \in {\mathbb R}\setminus [0,1]\}.$ Then we have the following.

  1. (i) $Y_n^{a,b}$ admits a quasi-stationary measure $\mu _{a,b}$ with survival rate $\unicode{x3bb} _{a,b}$ such that $\mathrm {supp}(\mu _{a,b}) =[0,1]$ and $\mu _{a,b} \ll \mathrm {Leb}$ , where $\mathrm {Leb}$ denotes the Lebesgue measure on $[0,1].$

  2. (ii) There exists $\eta _{a,b} \in L^1(M,\mu _{a,b})$ such that ${\mathcal P} \eta _{a,b} = \unicode{x3bb} _{a,b} \eta _{a,b} $ , $\|\eta _{a,b}\|_{L^1(M,\mu _{a,b})} =1$ and $\eta _{a,b}>0 \ \mu _{a,b}$ -a.s.

  3. (iii) For every $h\in L^\infty (M,\mathrm {Leb})$ and $x\in (0,1)$ ,

    $$ \begin{align*} \lim_{n\to\infty} {\mathbb E}_x \bigg[ \frac{1}{n}\sum_{i=0}^{n-1} h\circ Y_i^{a,b}\, \bigg|\, \tau^{a,b}>n\bigg] = \int_M h(y) \eta_{a,b}(y) \mu_{a,b}(\mathrm{d} y).\end{align*} $$
  4. (iv) For every $h\in L^\infty (M,\mathrm {Leb})$ and $x\in (0,1)$ ,

    $$ \begin{align*}\lim_{n\to \infty} {\mathbb E}_x[ h \circ Y_n^{a,b}\mid \tau^{a,b}>n] = \int_M h(y) \mu_{a,b}(\mathrm{d} y). \end{align*} $$

Theorem 2.1 is proved in §6.1. Later, we generalize the above result, allowing values of a in the interval $(0,1)$ (see Theorem 6.15). However, this result relies on the technical assumption of $(a,b)$ being an admissible pair (see Definition 6.1). We were not able to show the existence of quasi-stationary and quasi-ergodic measures for all values of $a\in [0,1),$ owing to technical obstructions in the inequalities of Proposition 6.17, which is used in Step 2 of Lemma 6.7. This technical obstruction is explained in Remark 6.10.

We use a more general setup for the proof of the above theorem. We present two incrementally restrictive hypotheses, Hypotheses H1 and H2, which are satisfied by $Y^{a,b}_n$ for every $(a,b) \in [1,4)\times (4,\infty )$ and imply results similar to Theorem 2.1 in different modes of convergence (see Theorems 2.2, 2.3 and 2.4).

In the following, we recall the definition of an integral operator.

Definition 2.4. Let $p,q \in [1,\infty )$ and $(\Omega _1,\mathcal F_1,\mu _1)$ , $(\Omega _2,\mathcal F_2,\mu _2)$ be measure spaces. We say that the bounded linear map $T:L^p(\Omega _1,\mathcal F_1,\mu _1)\to L^q(\Omega _2,\mathcal F_2,\mu _2)$ is an integral operator if there exists a measurable function $\kappa :\Omega _2\times \Omega _1\to {\mathbb R}$ , called kernel function, such that for every $f\in L^p(\Omega _1,\mathcal F_1,\mu _1)$ ,

$$ \begin{align*}\kappa (x,\cdot) f(\cdot) \in L^1(\Omega_1, \mu_1)\quad \text{for }\mu_2\text{-almost every }x\in \Omega_2, \end{align*} $$

and

$$ \begin{align*} Tf(x)= \int_{\Omega_1} f(y) \kappa( x,y) \mu_1 (\mathrm{d} y)\quad\mbox{for } \mu_2\mbox{-almost every } x\in \Omega_2. \end{align*} $$
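Numerically, an integral operator is a kernel-weighted quadrature. Below is a minimal sketch of ours (the kernel $\kappa$, the grid and the evaluation point are arbitrary choices) with $\Omega_1 = \Omega_2 = [0,1]$ and $\mu_1 = \mu_2 = \mathrm{Leb}$:

```python
import numpy as np

# Approximate Tf(x) = \int_0^1 f(y) kappa(x, y) dy by a left-endpoint rule;
# on a grid the operator is just the matrix kappa(x_i, y_j) * h.
n = 1000
h = 1.0 / n
y = np.arange(n) * h                           # grid on [0, 1)

kappa = lambda x, yy: np.exp(-(x - yy) ** 2)   # an arbitrary smooth kernel
f = lambda yy: yy ** 2

x = 0.5
Tf_x = np.sum(kappa(x, y) * f(y)) * h          # quadrature value of (Tf)(0.5)
print(Tf_x)
```

Refining the grid improves the approximation at the usual first-order rate of the left-endpoint rule.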

For a large class of Markov processes, there exists a probability measure $\rho $ on M such that

$$ \begin{align*}{\mathcal P}(x, \mathrm{d} y) \ll \rho(\mathrm{d} y)\quad \mbox{for } \rho\mbox{-almost every } x\in M. \end{align*} $$

In such systems, it is natural to seek quasi-stationary measures that are absolutely continuous with respect to $\rho $ . In this situation and assuming that $\mu \ll \rho $ , we have from equation (2.1) that ${\mathcal P}: L^1(M,\mu )\to L^1(M,\mu )$ is an integral operator.

It is also natural to assume that the absorbing Markov chain $X_n$ is irreducible, that is, if there exists $A \in \mathscr B(M)$ such that $\mu (\{{\mathcal P}(\cdot ,A)>0\}\triangle A) =0,$ then either $\mu (A) = 0$ or $\mu (M\setminus A)=0,$ where $\triangle $ denotes the symmetric difference of sets. In cases where $X_n$ is not irreducible, it is always possible to separate the state space into irreducible regions and analyse each region separately.

The conditions discussed above are summarized in Hypothesis H1.

Hypothesis H1. Let $X_n$ be an absorbing Markov chain on $E_M$ absorbed at $\partial $ . We say that $X_n$ fulfils Hypothesis H1 if the following conditions are met.

  1. (H1a) There exists a quasi-stationary measure $\mu \in \mathcal M_+(M)$ for the Markov chain $X_n$ with survival rate $\unicode{x3bb} $ .

  2. (H1b) There exists $\eta \in L^1_+(M,\mu )$ such that ${\mathcal P} \eta = \unicode{x3bb} \eta $ and $\|\eta \|_{L^1(M,\mu )}=1$ .

  3. (H1c) The transition kernel ${\mathcal P}:L^1(M,\mu ) \to L^1(M,\mu )$ is an integral operator with kernel function $\kappa :M\times M \to {\mathbb R}_+.$

  4. (H1d) For every $A \in \mathcal B(M)$ such that $0<\mu (A) < 1$ ,

    $$ \begin{align*}\int_{M\setminus A} \int_A \kappa (x,y) \mu(\mathrm{d} y) \mu(\mathrm{d} x)> 0, \end{align*} $$
    that is, if there exists $A\in \mathscr B(M)$ such that

    $$ \begin{align*}\int_{M\setminus A} \int_A \kappa (x,y) \mu(\mathrm{d} y) \mu(\mathrm{d} x) = 0, \end{align*} $$

    then either $\mu (M\setminus A) = 0$ or $\mu (A) = 0.$

We mention that given an absorbing Markov chain $X_n$ satisfying Hypothesis H1, we obtain from [Reference Meyer-Nieberg22, Lemma 4.2.9 and Example (i) on p. 262] that $\eta (x)>0$ for $\mu $ -almost every $x\in M.$

The theorem below implies that under Hypothesis H1, $\eta (x) \mu (\mathrm {d} x)$ is the only candidate for the quasi-ergodic measure for $X_n$ on $M.$ Moreover, it is also shown that such a hypothesis implies the existence of a maximal $m\in \mathbb N,$ with the following properties:

  • there exist measurable sets $C_0,\ldots ,C_{m-1} \subset M$ such that $M= C_0 \sqcup \cdots \sqcup C_{m-1};$

  • for every $n\in \mathbb N$ , if $X_n \in C_{k\ (\mathrm {mod}\ m)}$ and $X_{n+1}\in M,$ then $X_{n+1} \in C_{k+1\ (\mathrm {mod}\ m)}.$

Theorem 2.2. Let $X_n$ be an absorbing Markov chain fulfilling Hypothesis H1. Then the following assertions hold.

  1. (i) There exist a natural number $m\in \mathbb N$ and sets $C_m:= C_0, C_1, \ldots , C_{m-1} \in \mathscr B(M)$ such that $\mu (C_i) = 1/m$ for every $i\in \{0,1,\ldots , m-1\}$ and

    $$ \begin{align*}{\mathcal P}(x, C_{i+1}) = {\mathcal P}(x,M)\quad \text{for } \mu\text{-almost every } x\in C_i.\end{align*} $$

  2. (ii) For every $f\in L^1(M,\mu )$ ,

    $$ \begin{align*} \frac{1}{n} \sum_{i=0}^{n-1} \frac{1}{\unicode{x3bb}^i}{\mathcal P}^i f \xrightarrow{n\to\infty} \eta \int_M f(y) \mu(\mathrm{d} y), \end{align*} $$
    in $L^1(M,\mu )$ and $\mu $ -a.s.
  3. (iii) The following limit holds

    $$ \begin{align*} \frac{1}{\unicode{x3bb}^n} {\mathcal P}^n (x, M) \xrightarrow{n\to\infty} \eta(x)\ \text{in }L^1(M,\mu).\end{align*} $$
  4. (iv) If, in addition, we assume that M is a Polish space, then for every $h\in L^\infty (M,\mu )$ ,

    (2.2) $$ \begin{align} \bigg(x\mapsto {\mathbb E}_x \bigg[\frac{1}{n} \sum_{i=0}^{n-1}h\circ X_i \mid \tau> n\bigg]\bigg) \xrightarrow{n\to\infty} \int_M h(y) \eta(y) \mu(\mathrm{d} y) \end{align} $$
    in the $L^\infty (M,\mu )$ -weak ${}^*$ topology (see [Reference Brezis5, Ch. 3.4] for the definition and the main properties of the weak $^{*}$ topology), in particular, we obtain that (2.2) also converges weakly in $L^1(M,\mu ).$

Theorem 2.2 is proved in §4.
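Item (iii) of Theorem 2.2 can be observed numerically in the finite-state setting. The sketch below is ours; the substochastic matrix and its Perron data are an illustrative choice:

```python
import numpy as np

# Finite-state check of Theorem 2.2(iii): P^n(x, M)/lambda^n -> eta(x).
# P is a 2-state substochastic kernel; lam, mu, eta are its exact Perron data.
P = np.array([[0.5, 0.4],
              [0.1, 0.5]])
lam = 0.7                                # leading eigenvalue: the survival rate
mu = np.array([1.0, 2.0]) / 3.0          # quasi-stationary measure: mu P = lam mu
eta = np.array([2.0, 1.0])               # right eigenvector: P eta = lam eta
eta /= (eta * mu).sum()                  # normalize so that \int eta dmu = 1

n = 60
surv = np.linalg.matrix_power(P, n) @ np.ones(2)   # vector of P^n(x, M)
print(surv / lam**n)                     # close to eta = [1.5, 0.75]
```

The error decays like $(0.3/0.7)^n$, the ratio of the second eigenvalue to the Perron eigenvalue, so at $n=60$ the agreement is essentially to machine precision.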

It is observed that Theorem 2.2(iv) only gives $L^\infty (M,\mu )$ -weak $^{*}$ convergence of equation (2.2); to guarantee stronger modes of convergence, we require an additional regularity hypothesis (Hypothesis H2) on the kernel function of the operator ${\mathcal P}$ .

Hypothesis H2. Let $X_n$ be a Markov chain on $E_M$ absorbed at $\partial $ . We say that $X_n$ fulfils Hypothesis H2 if:

  1. (1) $X_n$ fulfils Hypothesis H1; and

  2. (2) for $\mu $ -almost every point $x\in M$ , $\kappa (x,\cdot )\in L^\infty (M,\mu )$ . Equivalently, since $\mu $ is an inner regular measure [Reference Viana and Oliveira28, Proposition A.3.2], there exists a sequence of nested compact sets $\{K_i\}_{i\in \mathbb N}$ such that $\mu (\bigcup _{i\in \mathbb N} K_i ) = 1,$ and for every $i\in \mathbb N,$

$$ \begin{align*}\sup_{x\in K_i} \|\kappa(x,\cdot)\|_{L^\infty(M,\mu)} < \infty.\end{align*} $$

We mention that, in practice, once $\mathrm {(H1a)}$ and $\mathrm {(H1b)}$ are verified, then $\mathrm {(H1c)}$ , $\mathrm {(H1d)}$ and Hypothesis H2 can be readily verified. We exemplify this in §6 by considering the absorbing Markov chain $Y^{a,b}_n$ (see the proof of Theorem 6.15).

In addition to quasi-stationary measures, the so-called Yaglom limit

$$ \begin{align*} \lim_{n\to\infty} \frac{{\mathcal P}^n(x,A)}{{\mathcal P}^n(x,M)}\ \text{for }x\in M\ \text{and}\ A\in\mathscr{B}(M),\end{align*} $$

provides an alternative perspective on the asymptotic behaviour of the paths $X_n$ conditioned on survival. Observe that for the Yaglom limit to exist, it is necessary that M does not exhibit a cyclic decomposition under $X_n$ , that is, that $m=1$ in Theorem 2.2(i).

The following two results provide conditions that ensure the existence of a quasi-ergodic measure for $X_n$ on M and the convergence of the Yaglom limit.

Theorem 2.3. Let $X_n$ be an absorbing Markov chain fulfilling Hypothesis H1. If any of the following conditions holds:

  1. (a) there exists $K>0$ such that $\mu (\{K<\eta \}) =1$ ;

  2. (b) there exists $ g\in L^1(M,\mu )$ such that

    $$ \begin{align*} \frac{1}{\unicode{x3bb}^n} {\mathcal P}^n(\cdot,M) \leq g \quad \text{for every }n\in \mathbb N; \end{align*} $$
  3. (c) the absorbing Markov chain $X_n$ fulfils Hypothesis H2,

then for every $h\in L^\infty (M,\mu )$ ,

(2.3) $$ \begin{align} \lim_{n\to\infty} {\mathbb E}_x\bigg[ \frac{1}{n}\sum_{i=0}^{n-1} h \circ X_i\, \bigg|\, \tau>n\bigg] = \int_M h(y) \eta(y) \mu(\mathrm{d} y) \mbox{ for } \mu\mbox{-almost every } x\in M. \end{align} $$

If, additionally, $m=1$ in Theorem 2.2(i), then

(2.4) $$ \begin{align} \lim_{n\to\infty} {\mathbb E}_x[h\circ X_n \mid \tau>n] = \lim_{n\to\infty} \frac{{\mathcal P}^n h(x)}{{\mathcal P}^n(x,M)} = \int_M h(y) \mu(\mathrm{d} y) \mbox{ for } \mu\mbox{-almost every } x\in M. \end{align} $$

Theorem 2.3 is proved in §5.
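The two limits in Theorem 2.3 can be observed by Monte Carlo simulation in a finite-state toy model (our own illustration; the matrix, the horizon and the sample size are arbitrary): the conditioned time average in equation (2.3) approaches $\int h\eta \,\mathrm{d}\mu $, while the conditioned law at time n in equation (2.4) approaches $\mu $.

```python
import numpy as np

# Monte Carlo check of Theorem 2.3 for a 2-state absorbed chain (illustrative
# numbers): P is substochastic, with exact Perron data lam, mu (quasi-stationary
# measure) and eta (eigenfunction, normalized so that \int eta dmu = 1).
P = np.array([[0.5, 0.4],
              [0.1, 0.5]])
lam, mu, eta = 0.7, np.array([1 / 3, 2 / 3]), np.array([1.5, 0.75])
h = np.array([1.0, 0.0])          # observable: indicator of state 0

n, trials = 15, 200_000
rng = np.random.default_rng(1)
birkhoff, final0, count = 0.0, 0, 0
for _ in range(trials):
    x, occ = 0, 0.0               # start at state 0
    for _ in range(n):
        occ += h[x]
        r = rng.random()
        if r < P[x, 0]:
            x = 0
        elif r < P[x, 0] + P[x, 1]:
            x = 1
        else:                     # absorbed: tau <= n, discard the path
            break
    else:                         # path survived: tau > n
        birkhoff += occ / n
        final0 += (x == 0)
        count += 1

print("conditioned time average:", birkhoff / count, "target:", (h * eta * mu).sum())
print("conditioned law at time n:", final0 / count, "target:", mu[0])
```

At this moderate horizon the time average carries a visible $O(1/n)$ boundary bias on top of the Monte Carlo noise, so it agrees with $\int h\eta \,\mathrm{d}\mu = 0.5$ only approximately.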

The following theorem is a refinement of the previous theorem, allowing us to characterize the set on which the convergence in equations (2.3) and (2.4) holds.

Theorem 2.4. Let $X_n$ be an absorbing Markov chain fulfilling Hypothesis H2. Then, given $h\in L^\infty (M,\mu )$ , equation (2.3) holds for every $x\in \bigcup _{i\in \mathbb N} K_i,$ where $\{K_i\}_{i\in \mathbb N}$ is the nested sequence of compact sets given by the second part of Hypothesis H2.

In the case where $m=1$ in Theorem 2.2(i), equation (2.4) holds for every $x\in \bigcup _{i\in \mathbb N} K_i$ .

Theorem 2.4 is proved in §5.

Remark 2.5. Notice that Theorems 2.3 and 2.4 also hold in a non-escape context. This means that if $X_n$ is a Markov chain on the metric space M without absorption, satisfying the following properties:

  • $\mu $ is an ergodic stationary measure for $X_n$ on $M;$

  • the transition kernel ${\mathcal P}:L^1(M,\mu )\to L^1(M,\mu )$ is an integral operator; and

  • $X_n$ is aperiodic, that is, $m=1$ in Theorem 2.2(i),

then for every $h \in L^\infty (M,\mu ), \lim _{n\to \infty } {\mathcal P}^n h = \int h \,\mathrm {d} \mu , \mu $ -a.s. In particular, from [Reference Lin20, Theorem 1(ii)], we obtain that $X_n$ is a weak-mixing Markov chain.

3 Mean-ergodic operators

For classical dynamical systems and Markov processes, mean-ergodic operators provide a vast array of tools and techniques for analysing their statistical properties [Reference Eisner, Farkas, Haase and Nagel10, Chs. 7, 8 and 10]. This section shows that this is also true for absorbing Markov chains.

In the following, we recall the definition of a mean ergodic operator.

Definition 3.1. Let $(E,\|\cdot \|)$ be a Banach space. We say that $T:E\to E$ is a mean-ergodic operator if there exists a projection $P:E\to E$ such that

$$ \begin{align*}\lim_{n\to\infty}\bigg\|\frac{1}{n} \sum_{i=0}^{n-1} T^i x - Px\bigg\| =0\quad \text{for every }x\in E. \end{align*} $$
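A concrete instance (our example, not from the text): a rotation of the plane by an angle that is not a rational multiple of $2\pi $ is mean ergodic, with limit projection $P=0$, since the rotation has no non-zero fixed vector:

```python
import numpy as np

# Mean ergodicity of a plane rotation T: the Cesaro averages (1/n) sum T^i x
# converge to Px = 0, the projection onto the fixed space {0}.
theta = np.sqrt(2.0)          # angle not a rational multiple of 2*pi
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([1.0, 0.0])
avg, v = np.zeros(2), x.copy()
n = 100_000
for _ in range(n):
    avg += v
    v = T @ v
avg /= n
print(np.linalg.norm(avg))    # small, tending to 0 as n grows
```

Indeed $\|(1/n)\sum_{i<n} T^i x\| \leq 2\|x\|/(n\,|1-e^{i\theta}|)$, so the averages decay at rate $1/n$.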

Let M be a metric space, $\rho $ a Borel probability measure on $M,$ and $T:L^1(M,\rho )\to L^1(M,\rho )$ a bounded linear operator. We denote

$$ \begin{align*}\mathcal I(T,\rho) := \{A\in\mathscr B(M);\ T^*\mathbb{1}_A = \mathbb{1}_A\ \rho\text{-a.s.}\},\end{align*} $$

where $T^*:L^\infty (M,\rho )\to L^\infty (M,\rho )$ is the dual operator of $T,$ that is, the unique bounded linear operator on $L^\infty (M,\rho )$ such that

$$ \begin{align*}&\int_M (Tf)(x) h(x)~\rho(\mathrm{d} x)\\ &\quad= \int f(x) (T^*h)(x)~\rho(\mathrm{d} x)\quad \mbox{for every } f\in L^1(M,\rho) \mbox{ and } h\in L^\infty(M,\rho). \end{align*} $$

Our results are highly dependent on the following two propositions.

Proposition 3.1. ([Reference Krengel18, Theorem 3.3.5] and [Reference Pflug25, Corollary V.8.1])

Let M be a metric space, $\rho $ a probability measure on M, and $T:L^1(M,\rho ) \to L^1(M,\rho )$ a positive linear operator such that $\|T\| = 1.$ Assume that there exists $\eta \in L^1(M,\rho )$ satisfying $T\eta = \eta $ and $\rho (\{\eta>0\}) = 1.$ Then, we have the following.

  1. (i) For every $f\in L^1(M,\rho )$ ,

    $$ \begin{align*}\lim_{n\to \infty} \frac{1}{n} \sum_{i=0}^{n-1}T ^i f = \eta \frac{{\mathbb E}_\rho[f \mid \mathcal I(T,\rho)]}{{\mathbb E}_\rho[ \eta \mid \mathcal I(T,\rho)]}\quad \rho\mbox{-a.s}. \end{align*} $$
  2. (ii) The operator T is mean-ergodic.

While Hypothesis H1 does not imply that ${\mathcal P}$ is a compact operator, the following proposition shows that given $f\in L^1(M,\mu )$ , the orbit $\{({1}/{\unicode{x3bb} ^n}){\mathcal P}^n f\}_{n\in \mathbb N}$ is weakly precompact.

Proposition 3.2. Suppose that the Markov process $X_n$ satisfies Hypothesis H1. Then for every $f\in L^1(M,\mu )$ , the sequence

$$ \begin{align*}\bigg\{\frac{1}{\unicode{x3bb}^i}{\mathcal P}^i f\bigg\}_{i\in\mathbb N}\ \mbox{ is weakly-}L^1(M,\mu) \mbox{ precompact.} \end{align*} $$

Proof. Let $f\in L^1_+(M,\mu )$ . Note that for every $i,m\in \mathbb N,$

$$ \begin{align*}0\leq \frac{1}{\unicode{x3bb}^i}{\mathcal P}^i f \leq \frac{1}{\unicode{x3bb}^i}{\mathcal P}^i (f- f\wedge m\eta) + \frac{1}{\unicode{x3bb}^i}{\mathcal P}^i (f\wedge m\eta)\leq \frac{1}{\unicode{x3bb}^i}{\mathcal P}^i (f- f\wedge m\eta) + m\eta,\end{align*} $$

where given two functions $f_1,f_2$ , we define $f_1\wedge f_2:= \min \{f_1,f_2\}$ . Since $\|({1}/{\unicode{x3bb} ^i}) {\mathcal P}^i f\|_{L^1(M,\mu )} = \|f\|_{L^1(M,\mu )}$ for every $i\in \mathbb N$ , and since $ m \eta \wedge f \xrightarrow {m\to \infty } f$ in $L^1(M,\mu )$ and $\mu $ -a.s., it follows that for every $\varepsilon>0$ , there exists $\delta>0$ such that

$$ \begin{align*}\frac{1}{\unicode{x3bb}^i}\int_A {\mathcal P}^i f(x) \mu(\mathrm{d} x) <\varepsilon\ \text{if }\mu(A)<\delta\quad \text{for every }i\in\mathbb N. \end{align*} $$

From [Reference Lasota and Mackey19, p. 87, item 3], we conclude that $\{({1}/{\unicode{x3bb} ^n}){\mathcal P}^n f\}_{n\in \mathbb N}$ is weakly $L^1(M,\mu )$ precompact.

4 $AM$ -compact operators

Observe that under Hypothesis H1, the operator $ ({1}/{\unicode{x3bb} }){\mathcal P} :L^1(M,\mu ) \to L^1(M,\mu )$ is well behaved from a functional-analytic point of view. Namely, $({1}/{\unicode{x3bb} }){\mathcal P}$ is a positive integral operator whose orbits are weakly precompact. The theory of Banach lattices provides powerful tools for studying the spectrum of such operators. In the following two paragraphs, we recall the definition of a Banach lattice (we follow the definitions provided in [Reference Meyer-Nieberg22, Ch. 2] and [Reference Pflug25, Ch. 2]).

Given a partially ordered set $(L,\leq )$ and a set $B\subset L,$ we define, when they exist,

$$ \begin{align*}\sup B = \min\{\ell\in L;\ b\leq \ell{\text{ for all}}\ b\in B\} \end{align*} $$

and

$$ \begin{align*}\inf B = \max\{\ell\in L;\ \ell \leq b {\text{ for all}}\ b\in B\}. \end{align*} $$

With the above definitions, we say that L is a lattice if for every $f_1,f_2\in L,$

$$ \begin{align*}f_1\lor f_2 := \sup \{f_1,f_2\} ,\ f_1\land f_2:= \inf \{f_1,f_2\} \end{align*} $$

exist. Additionally, in the case where L is a vector space and the lattice $(L,\leq )$ satisfies

$$ \begin{align*}f_1\leq f_2 \Rightarrow f_1+f_3\leq f_2+f_3\ {\quad\text{for all }} f_3\in L, \text{ and} \end{align*} $$
$$ \begin{align*}f_1\leq f_2 \Rightarrow \alpha f_1 \leq \alpha f_2\quad\text{for all } \alpha>0, \end{align*} $$

then $(L,\leq )$ is called a vector lattice. Finally, if $(L,\|\cdot \|)$ is a Banach space and the vector lattice $(L,\leq )$ satisfies

$$ \begin{align*}|f_1|\leq |f_2|\Rightarrow \|f_1\|\leq \|f_2\|,\end{align*} $$

where $|f_1| := f_1\lor (-f_1),$ then the triple $(L,\leq ,\|\cdot \|)$ is called a Banach lattice. When the context is clear, we denote the Banach lattice $(L,\leq ,\|\cdot \|)$ simply by L.

In this paper, we use two fundamental notions from Banach lattice theory. The first one is that of an ideal of a Banach lattice and the second one is that of an irreducible operator on a Banach lattice. A vector subspace $I \subset L$ is called an ideal if for every $f_1,f_2\in L$ such that $f_2\in I$ and $|f_1|\leq |f_2|,$ we have $f_1\in I$ . Finally, a positive linear operator $T:L\to L$ is called irreducible if $\{0\}$ and L are the only closed T-invariant ideals of L.

The theory of $AM$ -compact operators generalizes the theory of compact operators: $AM$ -compact operators are considerably more general than compact operators, while still possessing a sufficient degree of regularity. In the following, we recall the definition of an $AM$ -compact operator.

Definition 4.1. Let E be a Banach lattice and Y a Banach space. A linear operator $T:E \to Y$ is called $AM$ -compact if for every $x_1,x_2\in E$ , $T([x_1,x_2])$ is precompact in Y, where ${[x_1,x_2] :=\{y\in E;\ x_1\leq y \leq x_2\}.}$

The following result shows us that all positive integral operators are $AM$ -compact.

Theorem 4.1. [Reference Gerlach and Glück14, Proposition A.5]

Let $(\Omega _1,\mu _1)$ and $(\Omega _2,\mu _2)$ be $\sigma $ -finite measure spaces and $p,q\in [1,\infty )$ , and let $T:L^p(\Omega _1,\mu _1) \to L^q (\Omega _2,\mu _2)$ be a positive bounded integral operator. Then T is an $AM$ -compact operator.

Given $f\in L^1(M,\mu )$ , the key to our results is to understand the asymptotic behaviour of the sequence $\{({1}/{\unicode{x3bb} ^n}){\mathcal P}^n f\}_{n\in \mathbb N}$ . It turns out that the behaviour of this sequence has a strong connection with the peripheral spectrum of ${\mathcal P}$ . To this end, we denote $L^1(M,\mu ;\mathbb C) := L^1(M,\mu ) \oplus i L^1(M,\mu )$ and linearly extend the operator ${\mathcal P}$ to the Banach space $L^1(M,\mu ;\mathbb C)$ .

Here, we summarize the spectral properties implied by Hypothesis H1.

Proposition 4.2. Let $X_n$ be an absorbing Markov chain satisfying Hypothesis H1. Then:

  1. (i) for every $f\in L^1(M,\mu ),$

    $$ \begin{align*} \frac{1}{n}\sum_{i=0}^{n-1}\frac{1}{\unicode{x3bb}^i}{\mathcal P}^i f \xrightarrow{n\to\infty} \eta \int_M f(y)\eta(y)\mu(\mathrm{d} y), \end{align*} $$
    in $L^1(M,\mu )$ and $\mu $ -a.s.;
  2. (ii) there exists a decomposition ${L^1(M,\mu; \mathbb C)= E_{\mathrm {rev}} \oplus E_{\mathrm {aws}}},$ such that $E_{\mathrm {rev}}$ and $E_{\mathrm {aws}}$ are ${\mathcal P}$ -invariant,

    $$ \begin{align*}E_{\mathrm{rev}}= \mathrm{span} \bigg\{f \in L^1(M,\mu;\mathbb C);\ \frac{1}{\unicode{x3bb}}{\mathcal P} f = e^{{2\pi j i}/{m} } f \text{ for some }j\in\{0,1,\ldots,m-1\} \bigg\},\end{align*} $$
    and
    $$ \begin{align*}E_{\mathrm{aws}}= \bigg\{f \in L^1(M,\mu; \mathbb C);\ \frac{1}{\unicode{x3bb}^n} {\mathcal P}^n f\xrightarrow{n\to \infty }0 \text{ in }L^1(M,\mu)\bigg\}.\end{align*} $$

    Moreover,

    $$ \begin{align*}\mathrm{dim}\ker\bigg(\frac{1}{\unicode{x3bb}}{\mathcal P} - e^{{2 \pi j i}/{m}}\mathrm{Id}\bigg) = 1 \quad \text{for every }j\in\{0,1,\ldots,m-1\}.\end{align*} $$

Proof. (i) From Proposition 3.1, it is enough to show that if $A \in \mathcal I({\mathcal P}/\unicode{x3bb} , \mu )$ , then either ${\mu (A) = 0 }$ or $\mu (A) = 1.$ To see this, let $A\in \mathscr B(M)$ such that

then

From Hypothesis H1, we obtain that either $\mu (A) = 1$ or $\mu (A)=0$ .

(ii) From Propositions 3.2 and 4.1, we have that the semigroup $\{({1}/{\unicode{x3bb} ^n}) {\mathcal P}^n \}_{n\in \mathbb N}$ fulfils the standard assumptions of [Reference Glück and Haase16, §6]. Combining [Reference Glück and Haase16, Proposition 4.3, Theorem 2.2] and [Reference Eisner, Farkas, Haase and Nagel10, Proposition 16.27 and Corollary 16.32], we obtain that

$$ \begin{align*}E_{\mathrm{rev}} = \overline{\bigg\{f \in L^1(M,\mu; \mathbb C);\ \frac{1}{\unicode{x3bb}}{\mathcal P} f = e^{2 i \pi \theta} f \text{ for some }\theta\in {\mathbb R}\bigg\}},\end{align*} $$

and

$$ \begin{align*}E_{\mathrm{aws}} = \bigg\{f\in L^1(M,\mu; \mathbb C);\ \frac{1}{\unicode{x3bb}^n} {\mathcal P}^n f \to 0 \ \text{in }L^1(M,\mu)\bigg\}.\end{align*} $$

Applying [Reference Glück and Haase16, Theorem 6.1(a)], we obtain that if $ \unicode{x3bb} e^{2 i \pi \theta } \in \sigma _{\mathrm {pnt}}(\mathcal P)$ , then $\theta \in {\mathbb Q}.$ Observe that Hypothesis H1 implies that ${\mathcal P}/\unicode{x3bb} $ is an irreducible operator [Reference Meyer-Nieberg22, Example (i), p. 262]. From [Reference Glück and Haase16, Theorem 6.1(b)], we obtain that ${\mathcal P}/\unicode{x3bb} $ has only finitely many unimodular eigenvalues. Finally, from [Reference Meyer-Nieberg22, Theorem 4.2.13(iii)] (taking $x' = 1$ ), the proof is finished.

Let $\sigma _{\mathrm {pnt}}(({1}/{\unicode{x3bb} }){\mathcal P}) := \{ \widetilde {\unicode{x3bb} } \in {\mathbb C}; \mbox { there exists } h\in L^1(M), ({1}/{\unicode{x3bb} }){\mathcal P}h = \widetilde {\unicode{x3bb} } h\}$ be the point spectrum of the operator $({1}/{\unicode{x3bb} }){\mathcal P}.$ In [Reference Castro, Lamb, Olicón-Méndez and Rasmussen7, Reference Oçafrain23], it is shown that the cardinality of $\mathbb S^1 \cap \sigma _{\mathrm {pnt}}(({1}/{\unicode{x3bb} }){\mathcal P})$ is intrinsically connected with the existence of possible periodic behaviour of $X_n$ on a suitable partition of M. This remains true under Hypothesis H1, and such periodic behaviour is established in Lemmas 4.3 and 4.4.

Definition 4.2. Given an absorbing Markov chain $X_n$ that satisfies Hypothesis H1, we define ${m(X_n) := \#( \mathbb S^1 \cap \sigma _{\mathrm {pnt}}(\frac {1}{\unicode{x3bb} }{\mathcal P}))},$ which is finite from Proposition 4.2.

From now on, we denote $m(X_n)$ simply as m.

Lemma 4.3. Let $X_n$ be a Markov chain satisfying Hypothesis H1. Then there exist eigenfunctions $g_0,\ldots ,g_{m-1} \in L^1_+(M,\mu )$ of ${\mathcal P}^m$ such that $\|g_j\|_{L^1(M,\mu )} =1$ for every $j\in \{0,1,\ldots ,m-1\}$ , and $\mathrm {span}_{\mathbb C} (\{g_i\}_{i=0}^{m-1}) = \mathrm {ker}( {\mathcal P}^m - \unicode{x3bb} ^m \mathrm {Id}). $

Moreover, the eigenfunctions $g_0$ , $g_1$ , $\ldots $ , $g_{m-1}$ can be chosen in a way such that they have disjoint support, that is, defining $C_i = \{g_i>0\}$ for all $i\in \{0,\ldots ,m-1\}, $ then $ \mu ( C_i\cap C_j ) = 0\ {\ \text {for all}} \ i\neq j.$

Furthermore, the family of sets $\{C_i\}_{i=0}^{m-1}$ satisfies

$$ \begin{align*} \mu( M\setminus ( C_0 \sqcup C_1 \sqcup \cdots \sqcup C_{m-1})) = 0.\end{align*} $$

Proof. The proof follows from similar arguments and computations laid out in [Reference Castro, Lamb, Olicón-Méndez and Rasmussen7, Proposition 6.9] with the following two adaptations:

  1. (1) the space $\mathcal C^0(M)$ is replaced by $L^1(M,\mu );$ and

  2. (2) the set equalities are replaced by the relation $\sim .$ Namely, given $A,B \in \mathscr B(M),$ we say that A and B are equivalent, that is, $A\sim B,$ if $\mu (A\triangle B) = 0,$ where $A\triangle B := (A\setminus B) \cup (B\setminus A).$

The proof of the following lemma is analogous to the proof of [Reference Castro, Lamb, Olicón-Méndez and Rasmussen7, Lemma 6.15].

Lemma 4.4. Let $\{g_i\}_{i=0}^{m-1}\subset L^1_+(M,\mu )$ be as in Lemma 4.3. Then, there exists a cyclic permutation $\sigma :\{0,1,\ldots ,m-1\}\to \{0,1,\ldots ,m-1\}$ of order m such that for every $i\in \{0,1,\ldots ,m-1\}$ , ${\mathcal P} g_i = \unicode{x3bb} g_{\sigma (i)}.$ In particular, this implies that

$$ \begin{align*}\{{\mathcal P}(x,C_i)>0\} \subset C_{\sigma(i)}\ \text{for every}\ i\in\{0,1,\ldots,m-1\}. \end{align*} $$
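The cyclic structure described in Lemmas 4.3 and 4.4 can be illustrated on a finite state space. The following Python sketch (our illustration, not from the text) uses a three-state substochastic kernel that survives each step with probability $p$ and then moves cyclically; here $m=3$, $\unicode{x3bb} = p$, the $g_i$ are indicators of the singletons $C_i=\{i\}$, and $\sigma$ is a cyclic permutation of order $3$:

```python
# Three-state substochastic kernel: from state x the chain survives with
# probability p and then moves deterministically to state (x+1) mod 3.
p = 0.8
P = [[0.0, p, 0.0],
     [0.0, 0.0, p],
     [p, 0.0, 0.0]]

def apply_P(f):
    # (P f)(x) = sum_y P(x, y) f(y): the kernel acting on functions
    return [sum(P[x][y] * f[y] for y in range(3)) for x in range(3)]

lam = p                      # survival rate: P 1 = p * 1, so eta is constant
assert apply_P([1, 1, 1]) == [p, p, p]

# Indicators g_i = 1_{C_i} with C_i = {i}: P g_i = lam * g_{sigma(i)}
g = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
sigma = {0: 2, 1: 0, 2: 1}   # cyclic permutation of order m = 3
for i in range(3):
    assert apply_P(g[i]) == [lam * v for v in g[sigma[i]]]

# Each g_i is an eigenfunction of P^m with eigenvalue lam^m: P^3 g_i = p^3 g_i.
for i in range(3):
    f = g[i]
    for _ in range(3):
        f = apply_P(f)
    assert all(abs(a - p**3 * b) < 1e-12 for a, b in zip(f, g[i]))
```

The sets $C_0,C_1,C_2$ are disjoint and cover the whole state space, matching the conclusion of Lemma 4.3 in this toy setting.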

The following two lemmas are the last ingredients needed for the proof of Theorem 2.2.

Lemma 4.5. Suppose the absorbing Markov chain $X_n$ satisfies Hypothesis H1. Then,

(4.1) $$ \begin{align} \frac{1}{\unicode{x3bb}^n} {\mathcal P}^n(x,M) \xrightarrow{n\to\infty}\eta(x)\ \text{in }L^1(M,\mu). \end{align} $$

Proof. We divide the proof into three steps.

Step 1. We show that $\mu (C_i) = 1/m$ for every $i\in \{0,1,\ldots ,m-1\}$ .

Observe that Proposition 3.1 implies that for every $i\in \mathbb N$ ,

and

in $L^1(M,\mu )$ and $\mu $ -a.s.

However, from [Reference Foguel12, Theorem E, p. 29], we obtain that for $\mu $ -almost every $x\in M$ ,

$$ \begin{align*} \mu(C_i) &= \lim_{n\to\infty}\frac{\sum_{j=0}^{n} {\mathcal P}^j(x,C_i)}{\sum_{j=0}^{n} {\mathcal P}^j(x,M)} \leq \lim_{n\to\infty} \frac{\sum_{j=0}^{n} ({1}/{\unicode{x3bb}^{i+mj}}) {\mathcal P}^{i + m j }(x,M)}{ \sum_{j=0}^{nm} ({1}/{\unicode{x3bb}^j}) {\mathcal P}^j(x,M)}\\ &= \frac{ \lim_{n\to\infty} ({1}/{nm}) \sum_{j=0}^{n} ({1}/{\unicode{x3bb}^{i+mj}}) {\mathcal P}^{i + m j }(x,M)}{ \lim_{n\to\infty} ({1}/{mn}) \sum_{j=0}^{nm} ({1}/{\unicode{x3bb}^j}) {\mathcal P}^j(x,M)}\\ &= \frac{1}{m} \frac{{\mathbb E}_\mu[({1}/{\unicode{x3bb}^i}) {\mathcal P}^i(\cdot,M) \mid \mathcal I(({1}/{\unicode{x3bb}^m}) {\mathcal P}^m,\mu)]}{{\mathbb E}_\mu[\eta\mid \mathcal I(({1}/{\unicode{x3bb}^m}) {\mathcal P}^m,\mu)]}. \end{align*} $$

Therefore,

$$ \begin{align*} \mu(C_i) &= \mu(C_i) \int_M {{\mathbb E}_\mu\bigg[\eta\mid \mathcal I\bigg(\frac{1}{\unicode{x3bb}^m} {\mathcal P}^m,\mu\bigg)\bigg]}\mu(\mathrm{d} x)\\ &\leq \frac{1}{m} \int_M {\mathbb E}_\mu\bigg[\frac{1}{\unicode{x3bb}^i}{\mathcal P}^i(\cdot,M)\mid \mathcal I\bigg(\frac{1}{\unicode{x3bb}^m} {\mathcal P}^m,\mu\bigg)\bigg](x) \mu(\mathrm{d} x)\\ &\leq \frac{1}{m}\quad \text{for all }i\in\{0,1,\ldots,m-1\}. \end{align*} $$

Since

$$ \begin{align*} 1 = \mu(M) = \mu(C_0) + \cdots + \mu(C_{m-1}) \leq \frac{1}{m} +\cdots +\frac{1}{m} =1,\end{align*} $$

we obtain that

$$ \begin{align*}\mu (C_i) = \frac{1}{m}\quad \text{for every }i\in\{0,1,\ldots,m-1\}. \end{align*} $$

Step 2. We show that for every $i \in \{0,1,\ldots ,m-1\}$ , there exists $f_i \in E_{\mathrm {aws}}$ such that

From the decomposition $L^1(M,\mu ) = E_{\mathrm {rev}}\oplus E_{\mathrm {aws}} $ (see Proposition 4.2(ii)), there exist $\alpha _0,\ldots ,\alpha _{m-1} \in {\mathbb R}$ and $f_i\in E_{\mathrm {aws}}$ such that

Since $f_i\in E_{\mathrm {aws}}$ , it follows that $\int _M f_i(y) \mu (\mathrm {d} y) = 0.$ Therefore, $\alpha _0=\cdots =\alpha _{i-1}=\alpha _{i+1}=\cdots =\alpha _{m-1} = 0$ and, since $\mu (C_i) =1/m$ , we obtain that $\alpha _i =1/m.$

Step 3. We conclude the proof of the proposition.

From Step 2, we obtain that

Since $f:=f_0 +\cdots + f_{m-1} \in E_{\mathrm {aws}}$ , this shows that $({1}/{\unicode{x3bb} ^n}){\mathcal P}^n(\cdot ,M) \xrightarrow []{n\to \infty } \eta $ in $L^1(M,\mu ).$

Lemma 4.6. Let $X_n$ be an absorbing Markov chain satisfying Hypothesis H1. Then, for every $h\in L^\infty (M,\mu )$ ,

(4.2) $$ \begin{align} \frac{1}{n} \sum_{i=0}^{n-1} \frac{1}{\unicode{x3bb}^i} {\mathcal P}^i\bigg(h(\cdot) \frac{1}{\unicode{x3bb}^{n-i}}{\mathcal P}^{n-i}(\cdot,M)\bigg) \xrightarrow{n\to\infty} \eta \int_M h(y) \eta(y) \mu(\mathrm{d} y), \text{ in } L^1(M,\mu). \end{align} $$

Proof. Let $h \in L^\infty (M,\mu ).$ By a direct computation,

$$ \begin{align*} I_h^n&:=\bigg\| \frac{1}{n} \sum_{i=0}^{n-1} \frac{1}{\unicode{x3bb}^i} {\mathcal P}^i\bigg(h(\cdot) \frac{1}{\unicode{x3bb}^{n-i}}{\mathcal P}^{n-i}(\cdot,M)\bigg)- \frac{1}{n} \sum_{i=0}^{n-1} \frac{1}{\unicode{x3bb}^i} {\mathcal P}^i(h\eta)\bigg\|_{L^1(M,\mu)}\\ &\leq \frac{1}{n} \sum_{i=0}^{n-1} \bigg\| \frac{1}{\unicode{x3bb}^i}{\mathcal P}^i\bigg(h\cdot \bigg( \frac{1}{\unicode{x3bb}^{n-i}} {\mathcal P}^{n-i}(\cdot, M) - \eta\bigg)\bigg)\bigg\|_{L^1(M,\mu)}\\ &\leq \frac{\|h\|_{\infty}}{n} \sum_{i=0}^{n-1} \bigg\| \frac{1}{\unicode{x3bb}^i}{\mathcal P}^i\bigg| \frac{1}{\unicode{x3bb}^{n-i}} {\mathcal P}^{n-i}(\cdot, M) - \eta\bigg|\bigg\|_{L^1(M,\mu)}\\ &\leq \frac{\|h\|_{\infty}}{n} \sum_{i=0}^{n-1} \bigg\| \frac{1}{\unicode{x3bb}^{n-i}} {\mathcal P}^{n-i}(\cdot, M) - \eta\bigg\|_{L^1(M,\mu)}\xrightarrow[\text{Lem } 4.5]{n\to\infty} 0. \end{align*} $$

The lemma follows by combining the above equation with Proposition 4.2(i).

Now, we prove Theorem 2.2.

Proof of Theorem 2.2

Items (i), (ii) and (iii) follow directly from Lemma 4.4, Proposition 4.2(i) and Lemma 4.5, respectively.

In the following, we prove item (iv). Given $h\in L^\infty (M,\mu )$ , define

$$ \begin{align*} g_n(x) &:= {\mathbb E}_x\bigg[\frac{1}{n}\sum_{i=0}^{n-1} h\circ X_i \mid \tau>n\bigg]= \frac{\unicode{x3bb}^n}{{\mathcal P}^n(x,M)} \frac{1}{n}\sum_{i=0}^{n-1} \frac{1}{\unicode{x3bb}^i} {\mathcal P}^i\bigg(h(\cdot) \frac{1}{\unicode{x3bb}^{n-i}}{\mathcal P}^{n-i}(\cdot,M)\bigg)(x). \end{align*} $$

It is clear that $\|g_n\|_{L^\infty (M,\mu )} \leq \|h\|_{L^\infty (M,\mu )}$ for every $n\in \mathbb N$ . Since M is a Polish space, from the Banach–Alaoglu theorem, we obtain that the space

$$ \begin{align*}B_{\|\cdot\|_{L^\infty(M,\mu)}} (0,\|h\|_{\infty}) :=\{ g\in L^\infty(M,\mu); \|g\|_{L^\infty(M,\mu)}\leq \|h\|_{L^\infty(M,\mu)}\}\end{align*} $$

is a compact metric space when endowed with the $L^\infty (M,\mu )$ -weak ${}^*$ topology. Let $\{g_{n_k}\}_{k\in \mathbb N}$ be an $L^\infty (M,\mu )$ -weak ${}^*$ convergent subsequence of $\{g_n\}_{n\in \mathbb N}$ , and denote its limit by g.

We show that $g = \int _M h \eta \ \mathrm {d} \mu \ \mu $ -a.s., which implies item (iv). Observe that given $A\in \mathscr B(M),$ from Lemmas 4.5 and 4.6, we obtain that

Since $\mu (\{\eta>0\})=1$ , it follows that $g = \int _M h(x)\eta (x) \mu (\mathrm {d} x) \ \mu $ -a.s.

5 Almost-sure convergence

In this section, we strengthen the $L^\infty (M,\mu )$ -weak ${}^*$ convergence given in Theorem 2.2 to $L^\infty (M,\mu )$ convergence.

Note that for every $n\in \mathbb N$ , $x\in M$ and $A\in \mathscr B(M)$ ,

Therefore, to prove Theorem 2.3, it is enough to find conditions under which the convergences in equations (4.1) and (4.2) hold almost surely.

To prove Theorem 2.3, we need the following three propositions.

Proposition 5.1. [Reference Meyer-Nieberg22, Proposition 3.3.3]

Let $T:L^1(M,\mu )\to L^1(M,\mu )$ be a positive bounded integral operator. If $\{f_n\}_{n\in \mathbb N}\subset L^1(M,\mu )$ is an $L^1(M,\mu )$ -order bounded sequence satisfying $f_n\to 0$ in $\mu $ -measure as $n\to \infty $ , then $Tf_n \to 0\ \mu $ -almost everywhere as $n\to \infty $ .

Proposition 5.2. Let $X_n$ be an absorbing Markov chain satisfying Hypothesis H1. Suppose that one of the following items holds:

  1. (a) there exists $K>0$ such that $\mu (\{\eta> K\}) =1$ ;

  2. (b) there exists $ g\in L^1(M,\mu )$ such that

    $$ \begin{align*} \frac{1}{\unicode{x3bb}^n} {\mathcal P}^n(x,M) \leq g \quad \text{for every }n\in \mathbb N; \end{align*} $$
  3. (c) the absorbing Markov chain $X_n$ fulfils Hypothesis H2.

Then for every $h\in L^\infty (M,\mu )$

(5.1) $$ \begin{align} \frac{1}{n}\sum_{i=0}^{n-1}\frac{1}{\unicode{x3bb}^i} {\mathcal P}^i\bigg(h(\cdot)\frac{1}{\unicode{x3bb}^{n-i}} {\mathcal P}^{(n-i)}(\cdot,M)\bigg)(x) \xrightarrow{n\to\infty} \eta(x)\int_M h(y)\eta(y)\mu(\mathrm{d} y)\quad \mu\text{-a.s}. \end{align} $$

In addition, if

(5.2) $$ \begin{align} \frac{1}{\unicode{x3bb}^n} {\mathcal P}^n h \xrightarrow{n\to\infty} \eta \int_M h(x) \mu(\mathrm{d} x)\ \text{in } L^1(M,\mu), \end{align} $$

then $({1}/{\unicode{x3bb} ^n}) {\mathcal P}^n h \xrightarrow {n\to \infty } \eta \int _M h(x)\mu (\mathrm {d} x) \ \mu $ -a.s.

Proof. Observe that item (a) is a particular case of item (b). In fact, note that for every $x\in M,$ $({1}/{\unicode{x3bb} ^n}){\mathcal P}^n(x,M)\leq ({1}/{K})({1}/{\unicode{x3bb} ^n}){\mathcal P}^n\eta (x) = {\eta (x)}/{K},$ which corresponds to item (b) when setting $g := \eta /K.$ Now, we assume item (b). From Lemmas 4.5 and 4.6, we obtain that the convergences in equations (5.1) and (5.2) hold in $\mu $ -measure. Moreover, item (b) implies that for every $n\in \mathbb N$ and for $\mu $ -almost every $x \in M$ ,

$$ \begin{align*}-\|h\|_{L^\infty(M,\mu)}g(x)\leq \frac{1}{\unicode{x3bb}^n} {\mathcal P}^n h(x)\leq \|h\|_{L^\infty(M,\mu)}g(x) \end{align*} $$

and

$$ \begin{align*}-\|h\|_{L^\infty(M,\mu)} g(x) \leq \frac{1}{n}\sum_{i=0}^{n-1}\frac{1}{\unicode{x3bb}^i} {\mathcal P}^i\bigg(h(\cdot)\frac{1}{\unicode{x3bb}^{n-i}} {\mathcal P}^{(n-i)}(\cdot,M)\bigg)(x) \leq \|h\|_\infty g(x).\end{align*} $$

Therefore, Proposition 5.1 implies the result.

Now, we assume item (c). Let us consider the set

$$ \begin{align*}K_m = \{x\in M; \|k(x,\cdot)\|_{L^\infty(M,\mu)}\leq m\}. \end{align*} $$

It is clear that the

is a bounded linear operator. Therefore, we have for every $h\in L^\infty (M,\mu ),$

and

Since Hypothesis H2 implies that $\mu (\bigcup _{m=1}^{\infty} K_m) = 1$ , we obtain the result.

Proposition 5.3. Let $X_n$ be an absorbing Markov chain satisfying Hypothesis H2 and $m=1$ in Theorem 2.2(i). Then for every $h\in L^\infty (M,\mu )$ ,

$$ \begin{align*} \frac{1}{\unicode{x3bb}^n}{\mathcal P}^{n} h \to \eta \int_M h(x)\mu(\mathrm{d} x)\quad \text{in }L^1(M,\mu).\end{align*} $$

Proof. In the case where $m=1$ in Theorem 2.2(i), we obtain that $\mathrm {dim}(E_{\mathrm {rev}}) =1.$ Then, given $f\in L^1(M,\mu )$ , there exist $\alpha \in \mathbb C$ and $g \in E_{\mathrm {aws}}$ such that

$$ \begin{align*}f = \alpha \eta + g. \end{align*} $$

From Proposition 4.2, we obtain that

$$ \begin{align*}\frac{1}{\unicode{x3bb}^n}{\mathcal P}^n f \xrightarrow{n\to\infty }\alpha \eta \quad \text{in }L^1(M,\mu).\end{align*} $$

Finally, integrating with respect to $\mu $ in the above limit, we obtain that $\alpha =\int _M f(x)\mu (\mathrm {d} x).$

Finally, we prove Theorems 2.3 and 2.4.

Proof of Theorem 2.3

Observe that given $h\in L^\infty (M,\mu )$ , for every $n\in \mathbb N$ and $x\in M$ , we obtain

(5.3) $$ \begin{align} {\mathbb E}_x\bigg[\frac{1}{n} \sum_{i=0}^{n-1}h\circ X_i \mid \tau> n\bigg] = \frac{ {1}/{n}\sum_{i=0}^{n-1} ({1}/{\unicode{x3bb}^i}){\mathcal P}^i(h(\cdot) ({1}/{\unicode{x3bb}^{n-i}}){\mathcal P}^{n-i}(\cdot,M) )(x)}{({1}/{\unicode{x3bb}^n}) {\mathcal P}^n(x,M)}. \end{align} $$

From Proposition 5.2, we obtain that for $\mu $ -almost every $x\in M$ ,

$$ \begin{align*}\lim_{n\to\infty}{\mathbb E}_x\bigg[\frac{1}{n}\sum_{i=0}^{n-1}h\circ X_i \mid \tau> n\bigg] = \int_M h(y)\eta(y) \mu(\mathrm{d} y),\end{align*} $$

which proves the first part of the theorem.

In the case where $m=1$ in Theorem 2.2, we have $\# (\sigma _{\mathrm {pnt}}({\mathcal P}/\unicode{x3bb} ) \cap \mathbb S^1) =1$ . Combining Propositions 5.2 and 5.3, we obtain that for $\mu $ -almost every $x\in M$ ,

$$ \begin{align*}\lim_{n\to\infty} \frac{{\mathcal P}^n h (x)}{ {\mathcal P}^n(x,M)} = \int_M h(y) \mu(\mathrm{d} y).\end{align*} $$
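The Yaglom-type limit in Theorem 2.3 (case $m=1$), ${\mathcal P}^n h(x)/{\mathcal P}^n(x,M)\to \int_M h\,\mathrm d\mu$, can be checked quickly on a finite analogue. The Python sketch below (our illustration; the $2\times 2$ substochastic kernel is an arbitrary choice whose left Perron eigenvector, playing the role of the quasi-stationary measure $\mu$, is $(1/2,1/2)$):

```python
# 2x2 substochastic kernel with Perron eigenvalue 0.7; its left Perron
# eigenvector (the quasi-stationary measure) is mu = (1/2, 1/2).
P = [[0.5, 0.3],
     [0.2, 0.4]]

def apply_P(f):
    return [sum(P[x][y] * f[y] for y in range(2)) for x in range(2)]

# Check that mu = (1/2, 1/2) satisfies mu P = 0.7 mu.
mu = [0.5, 0.5]
muP = [sum(mu[x] * P[x][y] for x in range(2)) for y in range(2)]
assert all(abs(muP[y] - 0.7 * mu[y]) < 1e-12 for y in range(2))

# Yaglom limit: P^n h (x) / P^n(x, M) -> integral of h against mu.
h, one = [1.0, 0.0], [1.0, 1.0]
for _ in range(60):
    h, one = apply_P(h), apply_P(one)
for x in range(2):
    assert abs(h[x] / one[x] - 0.5) < 1e-6   # int h dmu = (1 + 0)/2
```

Note that the limit is independent of the starting state $x$, exactly as in the theorem.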

Proof of Theorem 2.4

Observe that under the conditions of Theorem 2.4, we obtain that for every $i\in \mathbb N$ , the operators

are bounded linear operators, since

(5.4) $$ \begin{align} \frac{1}{n}\sum_{i=0}^{n-1}\frac{1}{\unicode{x3bb}^i} {\mathcal P}^i\bigg(h(\cdot)\frac{1}{\unicode{x3bb}^{n-i}} {\mathcal P}^{(n-i)}(\cdot,M)\bigg)(x) \xrightarrow[L^1(M,\mu)]{n\to\infty} \eta(x)\int_M h(y)\eta(y)\mu(\mathrm{d} y) \end{align} $$

and

(5.5) $$ \begin{align} \frac{1}{\unicode{x3bb}^n}{\mathcal P}^n(x,M) \xrightarrow[L^1(M,\mu)]{n\to\infty} \eta(x). \end{align} $$

Composing with $\mathcal G^i$ on both sides, we obtain that equations (5.4) and (5.5) hold pointwise on $K_i$ for every $i\in \mathbb N.$ From equation (5.3), we obtain that for every $x\in \bigcup _{i\in \mathbb N} K_i$ and $h\in L^\infty (M,\mu )$ ,

$$ \begin{align*} \lim_{n\to\infty }{\mathbb E}_x\bigg[\frac{1}{n} \sum_{i=0}^{n-1}h\circ X_i \mid \tau> n\bigg] = \int_M h(y) \eta (y)\mu(\mathrm{d} y).\end{align*} $$

Note that if $m=1$ in Theorem 2.2, we obtain that $\#(\sigma _{\mathrm {pnt}}({\mathcal P}/\unicode{x3bb}) \cap \mathbb S^1) =1.$ Therefore, for every $h\in L^\infty (M,\mu )$ ,

$$ \begin{align*} \lim_{n\to\infty} \frac{{\mathcal P}^n h(x)}{{\mathcal P}^n(x,M)} = \int_M h(y) \mu(\mathrm{d} y)\quad \text{for every }x\in\bigcup_{m\in\mathbb N} K_m. \end{align*} $$

6 Random logistic map with escape

In this section, we analyse the Markov chain $Y_{n+1}^{a,b} = \omega _n Y_{n}^{a,b} (1-Y_n^{a,b})$ absorbed at $\partial = {\mathbb R}\setminus M,$ where $\{\omega _n\}_{n\in \mathbb N}$ is an i.i.d. sequence of random variables such that $\omega _n\sim \mathrm {Unif}([a,b])$ , where $0<a<4<b$ and $M=[0,1].$ As before, for every $A\in \mathscr B(M)$ and $x\in M$ , we denote

$$ \begin{align*}{\mathcal P}(x,A):= {\mathbb P}[ Y_1^{a,b} \in A \mid Y_0^{a,b} = x].\end{align*} $$

Clearly, $\delta _0$ is a stationary measure for $Y_n^{a,b}$ on $[0,1]$ . In the following, we provide conditions showing that $Y_n^{a,b}$ admits a non-trivial quasi-stationary measure on $[0,1]$ , that is, a quasi-stationary measure for $Y_n^{a,b}$ different from $\delta _0.$ For readability, we denote $Y^{a,b}_n$ simply by $Y_n$ and, when the context is clear, we omit the $a,b$ superscript from other objects that depend on a and b.
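The absorbed chain can also be simulated directly. The following Python sketch (our illustration; the sample values $a=1$, $b=5$, the initial condition and the path counts are arbitrary choices) runs Monte Carlo paths of $Y_{n+1}=\omega _n Y_n(1-Y_n)$ and discards a path as soon as it leaves $M=[0,1]$:

```python
import random

random.seed(0)
a, b = 1.0, 5.0          # noise range; escape is possible because b > 4
n_steps, n_paths = 30, 2000

survivors = []
for _ in range(n_paths):
    y = 0.5              # fixed initial condition Y_0 = 1/2
    alive = True
    for _ in range(n_steps):
        y = random.uniform(a, b) * y * (1.0 - y)
        if y > 1.0:      # absorbed at R \ [0, 1] (y >= 0 always holds here)
            alive = False
            break
    if alive:
        survivors.append(y)

# Some, but not all, paths survive 30 steps; surviving states stay in [0, 1].
assert 0 < len(survivors) < n_paths
assert all(0.0 <= y <= 1.0 for y in survivors)
```

The empirical distribution of the surviving states approximates the quasi-ergodic behaviour studied in this section, although no accuracy is claimed for this crude sampler.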

In the following proposition, we explicitly compute the transition functions of $Y_n.$

Proposition 6.1. Let $0\leq a \leq b $ and consider the absorbing Markov chain $Y_n^{a,b}$ . Then, given $f\in L^1(M,\mathrm {Leb}),$

$$ \begin{align*} {{\mathcal P}} f(x) = \frac{1}{(b-a)x(1-x)}\int_{ax(1-x)}^{bx(1-x)\wedge 1} f (y) \mathrm{d} y. \end{align*} $$

In the case where $f\in \mathcal C^0(M),$ then ${\mathcal P} f \in \mathcal C^0(M)$ and ${\mathcal P} f (0) = {{\mathcal P}}f(1) = f(0). $

Proof. Let $f\in L^1(M,\mathrm {Leb})$ . By a direct computation,

Now, consider $f\in \mathcal C^0(M).$ The above equation implies that $\mathcal Pf$ is continuous in $(0,1)$ . For every $x\in (0,1)$ , let us define the interval $J_x := [ax(1-x), bx(1-x)\land 1].$ It follows that for every $x\in (0,1/b)$ , $\min _{y\in J_x} f(y) \leq \mathcal Pf(x) \leq \max _{y\in J_x} f(y).$ From the continuity of f, we obtain that $\lim _{x\to 0} \mathcal Pf(x) = f(0).$ Since ${\mathcal P}f(x) = {\mathcal P}f (1-x)$ for every $x\in (0,1/2)$ , it follows that $\lim _{x\to 1}{\mathcal P}f(x) = f(0),$ implying that ${\mathcal P} f\in \mathcal C^0(M)$ .
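The closed-form expression for ${\mathcal P} f$ in Proposition 6.1 can be sanity-checked numerically: for the test function $f(y)=y$ the inner integral is elementary, and it should agree with a direct quadrature of the average of $f(\omega x(1-x))\mathbb 1_{[0,1]}(\omega x(1-x))$ over $\omega$. A Python sketch (our check; $a$, $b$ and the sample points $x$ are arbitrary):

```python
# Check P f(x) = 1/((b-a) x (1-x)) * int_{a x(1-x)}^{min(b x(1-x), 1)} f(y) dy
# against direct averaging over omega ~ Unif[a, b], for f(y) = y.
a, b = 1.0, 5.0

def Pf_closed_form(x):
    lo, hi = a * x * (1 - x), min(b * x * (1 - x), 1.0)
    return (hi**2 - lo**2) / 2.0 / ((b - a) * x * (1 - x))

def Pf_quadrature(x, n=200_000):
    # midpoint rule for (1/(b-a)) int_a^b f(w x(1-x)) 1{w x(1-x) <= 1} dw
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        w = a + (k + 0.5) * h
        y = w * x * (1 - x)
        if y <= 1.0:
            total += y
    return total * h / (b - a)

for x in (0.1, 0.3, 0.5, 0.7):
    assert abs(Pf_closed_form(x) - Pf_quadrature(x)) < 1e-4
```

For instance, at $x=1/2$ both sides evaluate to $0.46875$; the tolerance absorbs the midpoint-rule error at the discontinuity $\omega x(1-x)=1$.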

The first step in applying Theorem 2.4 to $Y_n$ on $[0,1]$ is to show that $Y_n$ admits a quasi-stationary measure different from $\delta _0$ on $[0,1]$ .

Consider a measure $\mu \in \mathcal M (M)$ such that $\mu \ll \mathrm {Leb}(\mathrm {d}x)$ and define $ g:=\mu (\mathrm {d} x)/ \mathrm {Leb}(\mathrm {d} x).$ Note that

where

$$ \begin{align*}\alpha_{\pm}(x) := \frac{1}{2} \pm \frac{1}{2}\sqrt{1 - \frac{4}{b}x}\quad \text{and}\quad \beta_{\pm}(x) := \frac{1}{2} \pm \frac{1}{2}\sqrt{1 - \frac{4}{a}x}.\end{align*} $$

The above observation motivates the definition of the stochastic transfer operator,

(6.1) $$ \begin{align} \mathcal L: L^1([0,1],\mathrm{Leb}) &\to L^1([0,1],\mathrm{Leb}), \\ g&\mapsto \bigg(x\mapsto \int_{\alpha_-(x)}^{\alpha_+(x)} \frac{ g(y)}{(b-a)y(1-y)} \ \mathrm{d} y\nonumber\\ &\quad- \int_{\beta_-(x\wedge a/4)}^{\beta_+(x\wedge a/4)} \frac{ g(y)}{(b-a)y(1-y)} \ \mathrm{d} y \bigg), \nonumber \end{align} $$

Note that $\mathcal L$ is a well-defined linear operator since for every $g\in L^1([0,1],\mathrm {Leb})$ ,

$$ \begin{align*} \|\mathcal L (g)\|_{L^1(M,\mathrm{Leb})} := \int_0^1 |\mathcal L g(x)| \,\mathrm{d} x \leq \int_M {\mathcal P}(y,M) |g(y)| \,\mathrm{d} y \leq \|g\|_{L^1(M,\mathrm{Leb})}. \end{align*} $$

The following two propositions summarize the above comments and show that $\mathcal L$ is well defined as a bounded operator on $L^p(M,\mathrm {Leb})$ for every $p\in [1,\infty ].$ For the following result, see for instance [Reference Zmarrou and Homburg30, §5].

Proposition 6.2. A probability measure $\mu \in \mathcal M_+(M)\setminus \{\delta _0\}$ on $[0,1]$ is a quasi-stationary measure for $Y_n$ if and only if $\mu (\mathrm {d} x) \ll \mathrm {Leb}(\mathrm {d} x)$ and there exists $0<\unicode{x3bb} <1$ such that

$$ \begin{align*} \mathcal L \frac{\mu(\mathrm{d} x) }{\mathrm{Leb}(\mathrm{d} x)} = \unicode{x3bb} \frac{\mu(\mathrm{d} x) }{\mathrm{Leb}(\mathrm{d} x)}. \end{align*} $$

Proposition 6.3. For every $p\in [1,\infty ]$ , the operators

$$ \begin{align*}{\mathcal P}|_{L^p([0,1],\mathrm{Leb})}, {\mathcal L} |_{L^p([0,1],\mathrm{Leb})}: L^p([0,1]) \to L^p([0,1]) \end{align*} $$

are well defined and bounded.

Proof. By a direct computation, one can check that

implying that

$$ \begin{align*}\|\mathcal L\|_{L^\infty(M,\mathrm{Leb})} = \frac{4}{b-a} \tanh^{-1}\bigg(\sqrt{1-\frac{a}{b}}\bigg).\end{align*} $$

Since $\|\mathcal L\|_{L^1(M,\mathrm {Leb})} \leq 1,$ by the Riesz–Thorin interpolation theorem [Reference Folland13, Theorem 6.27],

$$ \begin{align*}\|\mathcal L\|_{L^p(M,\mathrm{Leb})} < \infty \quad \text{for all } p\in [ 1 , \infty]. \end{align*} $$

For the operator ${\mathcal P},$ note that for every $0\leq f \in L^1 ([0,1])$ ( $1\leq p \leq \infty $ ),

showing that $\|{\mathcal P}\|_{L^1(M,\mathrm {Leb})} \leq \|\mathcal L\|_{L^\infty (M,\mathrm {Leb})}<\infty .$ Using that $\|{\mathcal P}\|_{L^\infty (M,\mathrm {Leb})} \leq 1$ , we have again by the Riesz–Thorin interpolation theorem that $\|{\mathcal P}\|_{L^p(M,\mathrm {Leb})} <\infty \ \text {for all } \ p\in [1,\infty ]. $

For every $a\in (0,4)$ and $0<\varepsilon <3/8$ , let us define $M_\varepsilon := [4\varepsilon (1-\varepsilon )^2,1-\varepsilon ]$ and the Markov chain ${Y^{a,b,\varepsilon }_{n+1}:=Y^\varepsilon _{n+1} = \omega _n Y_n^\varepsilon (1 - Y^\varepsilon _n)}$ absorbed at $\partial ^\varepsilon = {\mathbb R} \setminus M_\varepsilon $ , where $\{\omega _n\}_{n\in \mathbb N}$ is an i.i.d. sequence of random variables and $\omega _n\sim \mathrm {Unif}([a,b])$ . Moreover, for every $\varepsilon \in (0,3/8),$ we denote the transition kernels and transfer operator for the absorbing Markov chain $Y_n^\varepsilon $ respectively as

(6.2)

In the next proposition, we show the existence of a sequence of positive real numbers $\{\varepsilon _i\}_{i\in \mathbb N}$ converging to $0$ , such that for every $i\in \mathbb N$ , the absorbing Markov chain $Y_n^{\varepsilon _i}$ admits a unique quasi-stationary measure $\mu _{\varepsilon _i}$ supported on $M_{\varepsilon _i}.$ Moreover, these measures will play an important role in constructing a non-trivial quasi-stationary measure for $Y_n$ on M.

Proposition 6.4. Let $(a,b) \in [1,4) \times (4,\infty )$ and $Y_n^{a,b,\varepsilon }$ be the Markov chain absorbed at $\partial ^{\varepsilon }$ defined above. Then, there exists a sequence of positive numbers $\{\varepsilon _i\}_{i\in \mathbb N}$ converging to $0$ such that, for every $i\in \mathbb N$ , the following items hold:

  1. (a) $Y_n^{a,b,\varepsilon _i}$ admits a unique quasi-stationary measure $\mu _{a,b,\varepsilon _i} := \mu _{\varepsilon _i}$ on $M_{\varepsilon _i}$ with survival rate $\unicode{x3bb} _{\varepsilon _i}>0$ ;

  2. (b) there exists a continuous function $g^{a,b}_{\varepsilon _i}:= g_{\varepsilon _i} \in \mathcal C^0(M_{\varepsilon _i})$ such that $\mu _{\varepsilon _i}(\mathrm {d} x) = g_{\varepsilon _i}(x)\,\mathrm {d} x$ ; and

  3. (c) $\mathrm {supp}(\mu _{\varepsilon _i}) = M_{\varepsilon _i}.$

Proof. From [Reference Jakobson17, Theorem B and Remark XIII/5], there exists a sequence $\{r_i\}_{i\in \mathbb N} \subset [a,4)$ converging to $4$ such that for every $i\in \mathbb N$ , the logistic map $f_{r_i} :[0,1]\to [0,1]$ , $f_{r_i}(x)= r_i x(1-x)$ admits an invariant ergodic measure $\rho _{r_i} \ll \mathrm {Leb}$ and $\mathrm {supp}(\rho _{r_i}) = [f_{r_i}^2(1/2), f_{r_i}(1/2)]$ .

Consider the sequence $\{\varepsilon _i = (4 - r_i)/4\}_{i\in \mathbb N}$ . Combining equation (6.2) and Proposition 6.1, we obtain that

(6.3)

In the following, we show that for every $i\in \mathbb N,$ given $x\in M_{\varepsilon _i}$ and an open interval $I \subset M_{\varepsilon _i} =[f_{r_i}^2(1/2), f_{r_i}(1/2)] ,$ there exists $n_0 = n_0(x,I) \in \mathbb N$ such that ${\mathcal P}_{\varepsilon _i}^{n_0}(x, I)> 0.$

Consider the set $J := \{y\in M_{\varepsilon _i};\ \omega x(1-x) =y\ \text {for some }\omega \in [a,b]\}.$ Since J has non-empty interior, we obtain that $\rho _{r_i}(J)>0.$ Since $\rho _{r_i}$ is an invariant ergodic measure, there exist $\omega _0 \in [a,b]$ such that $y:=\omega _0 x(1-x) \in J$ and $n_1\in \mathbb N$ such that $f^{n_1}_{r_i}(y)~\in ~I$ .

Set $n_0 := n_1+1$ and consider the continuous function ${F^{x,n_0}: [a,b]^{n_0} \to {\mathbb R},} F^{x,n_0}(c_1,\ldots , c_{n_0}) := f_{c_1} \circ f_{c_2} \circ \cdots \circ f_{c_{n_0}}(x)$ . From the previous paragraph, we obtain that $F^{x,n_0}(r_i,\ldots ,r_i,\omega _0) \in I$ . Finally, since $F^{x,n_0}$ is a continuous function, we obtain that

(6.4) $$ \begin{align} {\mathcal P}^{n_0}(x, I) &= {\mathbb P} [Y_{n_0}^{a,b,\varepsilon_i} \in I\mid Y_0^{a,b,\varepsilon_i} = x]\nonumber\\&= \frac{1}{(b-a)^{n_0}}\mathrm{Leb}^{\otimes n_0}(\{p\in [a,b]^{n_0};\ F^{x,n_0}(p) \in I\})>0. \end{align} $$

From equations (6.2) and (6.4), we conclude that [Reference Castro, Lamb, Olicón-Méndez and Rasmussen7, Hypothesis (H)] is fulfilled, and therefore items (a), (b) and (c) follow directly from [Reference Castro, Lamb, Olicón-Méndez and Rasmussen7, Theorem A].

Observe that the family of measures $\{\mu _{\varepsilon _i}\}_{i\in \mathbb N}$ given by the previous proposition can be naturally extended to $[0,1]$ by imposing that $\mu _{\varepsilon _i}([0,1]\setminus M_{\varepsilon _i}) =0$ for every $i\in \mathbb N.$ To construct a quasi-stationary measure for the Markov process $Y_n$ on $[0,1]$ , we use that $\{\mu _{\varepsilon _i}\}_{i \in \mathbb N}$ is precompact in the weak $^*$ topology of $\mathcal M([0,1]),$ that is,

(6.5) $$ \begin{align} \bigcap_{i\in \mathbb N} \overline{ \{\mu_{\varepsilon_{k+i}}\}_{k\in\mathbb N}}^{\mathrm{w}^*\text{-}\mathcal M(M)} \neq \emptyset, \end{align} $$

where $\mathrm {w}^*\text {-}\mathcal M(M)$ denotes the weak ${}^*$ topology of $\mathcal M(M).$

The proposition below shows that the elements of equation (6.5) are natural candidates for quasi-stationary measures for $Y_n$ on $[0,1]$ .

Proposition 6.5. Assume that there exist a probability measure $\mu _{a,b} :=\mu $ on M, $\unicode{x3bb}>0,$ and subsequences

$$ \begin{align*}\{\mu_{\delta_n}\}_{n\in\mathbb N} \subset \{\mu_{\varepsilon_n}\}_{n \in \mathbb N}, \ \{\unicode{x3bb}_{\delta_n}\}_{n\in\mathbb N} \subset \{\unicode{x3bb}_{\varepsilon_n}\}_{n\in \mathbb N}, \end{align*} $$

such that

$$ \begin{align*}\mu_{\delta_n} \to \mu, \ \text{in the weak}^{*}\text{-topology}\ \text{as }n\to\infty, \end{align*} $$
$$ \begin{align*}\lim_{n\to \infty} \unicode{x3bb}_{\delta_n} = \unicode{x3bb} \quad \text{and}\quad\lim_{n\to \infty }\delta_n = 0. \end{align*} $$

Then $\mu $ is a quasi-stationary measure for $Y_n$ on $[0,1].$

Proof. Let

$$ \begin{align*}E = \{x\in [0,1], \mu(\{x\})>0\},\end{align*} $$

Note that E is at most countable. Consider the set

$$ \begin{align*}\mathcal A =\{I \in \mathcal B(M); \ I\text{ is an interval,}\ \overline{I}\subset (0,1)\ \text{and }\sup I,\inf I\not\in E\}. \end{align*} $$

It is clear that $\sigma (\mathcal A) = \mathcal B(M).$ Note that for every $I \in \mathcal A,$ there exists $n_0 = n_0(I)$ such that

$$ \begin{align*}I \subset M_{\delta_n}\quad \text{for all } n>n_0.\end{align*} $$

This implies that for every $n>n_0,$

$$ \begin{align*} \int_M {\mathcal P}(x,I) \mu_{\delta_n}(\mathrm{d} x) &= \int_{M_{\delta_n}} {\mathcal P}(x,I) \mu_{\delta_n}(\mathrm{d} x) = \unicode{x3bb}_{\delta_n} \mu_{\delta_n} (I). \end{align*} $$

Since ${\mathcal P}(x,I)$ is a continuous function, we obtain

$$ \begin{align*} \int_M {\mathcal P}(x,I) \mu(\mathrm{d} x) =\unicode{x3bb} \mu(I) \quad \text{for every }I\in\mathcal A.\end{align*} $$

Since $({4-a})/({b-a}) \leq {\mathcal P}(x,M) \ \text {for all } \ x\in [0,1],$ it follows that $\unicode{x3bb}>0.$

Applying the monotone class theorem, we obtain that $\mu $ is a quasi-stationary measure for $Y_n$ on $[0,1]$ .
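The construction in Propositions 6.4 and 6.5 can be approximated numerically: discretizing the transition probabilities of Proposition 6.1 on a grid and power-iterating the left eigenvector of the resulting substochastic matrix yields an approximate non-trivial quasi-stationary density and survival rate. A Python sketch (our numerical illustration; the pair $a=1$, $b=5$, the grid size and the iteration count are arbitrary, and no accuracy is claimed):

```python
# Approximate the non-trivial quasi-stationary measure of the random logistic
# map on [0, 1] by discretizing x -> P(x, .) on a midpoint grid and
# power-iterating the left eigenvector; lam estimates the survival rate.
a, b, N = 1.0, 5.0, 60
dx = 1.0 / N
xs = [(j + 0.5) * dx for j in range(N)]

def cell_prob(x, j):
    # exact probability P(x, [j dx, (j+1) dx]): overlap of the image interval
    # [a s, min(b s, 1)] of Y_1 = w x(1-x), w ~ Unif[a, b], with the j-th cell
    s = x * (1.0 - x)
    lo, hi = a * s, min(b * s, 1.0)
    ov = max(0.0, min(hi, (j + 1) * dx) - max(lo, j * dx))
    return ov / ((b - a) * s)

T = [[cell_prob(x, j) for j in range(N)] for x in xs]

mu = [1.0 / N] * N
lam = 0.0
for _ in range(200):
    new = [sum(mu[i] * T[i][j] for i in range(N)) for j in range(N)]
    lam = sum(new)                  # fraction of mass surviving one step
    mu = [v / lam for v in new]

# Survival rate strictly between the uniform bound (4-a)/(b-a) = 0.75 and 1,
# and mu is a probability vector.
assert 0.75 < lam < 1.0
assert abs(sum(mu) - 1.0) < 1e-9
```

The lower bound on the row sums, $(4-a)/(b-a)$, is exactly the bound used above to conclude that $\unicode{x3bb}>0$.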

In light of Proposition 6.5, to construct a non-trivial quasi-stationary measure for $Y_n$ on $[0,1]$ , it remains to show that

(6.6) $$ \begin{align} \bigcap_{i\in \mathbb N} \overline{ \{\mu_{\varepsilon_{i+k}}\}_{k\in \mathbb N}}^{\mathrm{w}^*\text{-}\mathcal M(M)} \setminus \{\delta_0\} \neq \emptyset. \end{align} $$

Note that for every $i\in \mathbb N$ , $\mu _{\varepsilon _i}(\mathrm {d} x) \ll \mathrm {Leb}(\mathrm {d} x)$ . To show that equation (6.6) holds, we study the behaviour of the distributions of $\mu _{\varepsilon _i}$ with respect to the Lebesgue measure.

The definition below provides conditions on a and b which imply that equation (6.6) holds (see Theorem 6.9).

Definition 6.1. A pair $(a,b)\in (0,4)\times (4,\infty )$ is called an admissible pair if either

  • $a\geq 2;$ or

  • for every $x\in [(4 a^2 - a^3)/16,a/4]$ ,

    (6.7) $$ \begin{align} 0\leq \frac{1}{2}-\frac{1}{2} \sqrt{1-\frac{2}{b} \bigg(1- \sqrt{1-\frac{4 x}{a}}\bigg)} \leq \frac{a}{4} \end{align} $$
    and
    (6.8) $$ \begin{align} &\frac{2 \big(\tanh ^{-1}\big(\sqrt{{2 \sqrt{1-({4 x}/{b})}+b-2}/{b}}\big)-\tanh ^{-1}\big(\sqrt{{a+2 \sqrt{1-({4 x}/{b})}-2}/{a}}\big)\big)}{2 \tanh ^{-1}\big(\sqrt{{2 \sqrt{1-({4 x}/{a})}+b-2}/{b}}\big)+\log ({a}/{4-a})}\nonumber\\&\quad\leq \frac{\sqrt{1-{4 x}/{b}}}{\sqrt{1-{4 x}/{a}}}. \end{align} $$
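For a concrete pair $(a,b)$, Definition 6.1 is straightforward to test numerically. The sketch below transcribes inequalities (6.7) and (6.8) term by term; the grid size and the sampled pairs are illustrative choices, and it assumes parameters (such as $a\in[1,2)$) for which the arguments of $\tanh^{-1}$ remain in its domain.

```python
import math

def lhs_67(a, b, x):
    # Left-hand side of inequality (6.7).
    return 0.5 - 0.5 * math.sqrt(1 - (2 / b) * (1 - math.sqrt(1 - 4 * x / a)))

def holds_68(a, b, x):
    # Inequality (6.8), transcribed term by term.
    s_a = math.sqrt(1 - 4 * x / a)
    s_b = math.sqrt(1 - 4 * x / b)
    num = 2 * (math.atanh(math.sqrt((2 * s_b + b - 2) / b))
               - math.atanh(math.sqrt((a + 2 * s_b - 2) / a)))
    den = 2 * math.atanh(math.sqrt((2 * s_a + b - 2) / b)) + math.log(a / (4 - a))
    return num / den <= s_b / s_a

def is_admissible(a, b, n=400):
    if a >= 2:          # first bullet of Definition 6.1
        return True
    lo, hi = (4 * a**2 - a**3) / 16, a / 4
    # Interior grid points only: at x = a/4 the right-hand side of (6.8) blows up.
    xs = (lo + (k + 0.5) * (hi - lo) / n for k in range(n))
    return all(0 <= lhs_67(a, b, x) <= a / 4 and holds_68(a, b, x) for x in xs)
```

For instance, `is_admissible(1.5, 5.0)` evaluates to `True`, consistent with Theorem 6.18.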

In Theorem 6.18, we show that if $(a,b)\in [1,4)\times (4,\infty )$ , then $(a,b)$ is an admissible pair. Assuming that $(a,b)$ is an admissible pair, it is possible to show that $Y_n^{a,b}$ admits a non-trivial quasi-stationary measure on $[0,1].$ To accomplish this goal, we need the following three technical lemmas.

Lemma 6.6. Let $(a,b)$ be an admissible pair, with $a <2$ , and let $f:[0,1]\to {\mathbb R}$ be a piecewise continuous function with finitely many discontinuities such that:

  1. (1) $0\leq f(x)$ for every $x\in [0,1];$

  2. (2) f is non-decreasing in the interval $[0,a/4];$ and

  3. (3) f is non-increasing in the interval $[a/4,1].$

Then $\mathcal L f$ is a continuous function such that:

  1. (1) $0\leq \mathcal L f(x)$ for every $x\in [0,1];$

  2. (2) $\mathcal L f$ is non-decreasing in the interval $[0,(4a^2 - a^3)/16];$ and

  3. (3) $\mathcal L f$ is non-increasing in the interval $[a/4,1].$

Proof. Recall that

(6.9) $$ \begin{align} \mathcal L f(x) = \int_{\alpha_-(x)}^{\alpha_+(x)} \frac{f(y)}{(b-a)y(1-y)} \ \mathrm{d} y - \int_{\beta_-(x\wedge a/4)}^{\beta_+(x\wedge a/4)} \frac{ f(y)}{(b-a)y(1-y)} \ \mathrm{d} y , \end{align} $$

where

$$ \begin{align*} \alpha_{\pm}(x) = \frac{1}{2} \pm \frac{1}{2}\sqrt{1 - \frac{4}{b}x}\quad \text{and}\quad \beta_{\pm}(x) = \frac{1}{2} \pm \frac{1}{2}\sqrt{1 - \frac{4}{a}x}. \end{align*} $$
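The differentiation of equation (6.9) below uses that $\alpha_-(x)$ and $\alpha_+(x)$ are the two preimages of x under $y\mapsto by(1-y)$, so that $\alpha_\pm(x)(1-\alpha_\pm(x)) = x/b$ (and analogously $\beta_\pm(x)(1-\beta_\pm(x)) = x/a$). A quick numerical check of this identity (the sampled values are arbitrary):

```python
import math

def preimage(x, c, sign):
    # alpha_pm for c = b, beta_pm for c = a: the two roots of c*y*(1 - y) = x.
    return 0.5 + sign * 0.5 * math.sqrt(1 - 4 * x / c)

for c in (1.5, 4.5, 8.0):
    for x in (0.01, 0.2, c / 4 * 0.99):
        for sign in (-1, 1):
            y = preimage(x, c, sign)
            assert abs(c * y * (1 - y) - x) < 1e-9
```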

It is clear that $\mathcal L f$ is a continuous and non-negative function. Observe that $\mathcal L f$ is differentiable except at finitely many points. In fact, at the points where the derivative exists, it is given by

(6.10) $$ \begin{align} \frac{\mathrm{d} \mathcal L f}{\mathrm{d} x}(x) = \begin{cases} \dfrac{1}{b-a}\bigg(\dfrac{ f(\beta_-(x)) + f(\beta_+(x))}{x\sqrt{1-({4}/{a})x}} - \dfrac{ f(\alpha_-(x)) + f(\alpha_+(x))}{x\sqrt{1-({4}/{b})x}}\bigg), & x\in (0,a/4),\\ -\dfrac{1}{b-a}\,\dfrac{ f(\alpha_+ (x)) + f(\alpha_-(x))}{ x\sqrt{1-({4}/{b})x} }, & x\in (a/4,1). \end{cases} \end{align} $$

Since for every $x\in [a/4,1],$

$$ \begin{align*} \frac{\mathrm{d} \mathcal L f}{\mathrm{d} x}(x) = - \frac{1}{b-a}\frac{ f(\alpha_+ (x)) + f(\alpha_-(x))}{ x\sqrt{1-{4}/{b}x} } \leq 0,\end{align*} $$

it follows that $\mathcal L f$ is non-increasing in $[a/4,1].$

Observe that for every $x\in [0,a/4]$ , we obtain

$$ \begin{align*}\frac{1}{(b-a) x\sqrt{1-{4}/{b}x} } < \frac{1}{(b-a) x\sqrt{1- {4}/{a}x}}\quad \text{and}\quad \frac{a}{4} \leq \beta_+(x)\leq \alpha_+(x).\end{align*} $$

Since f is non-increasing in $[a/4,1],$ we conclude that

$$ \begin{align*} -\frac{ f(\alpha_+(x))}{(b-a) x\sqrt{1-{4}/{b}x} } + \frac{f(\beta_+(x))}{(b-a) x\sqrt{1- {4}/{a}x}}\geq 0. \end{align*} $$

To finish the proof, it is enough to show that $f(\beta _-(x)) \geq f(\alpha _-(x))$ for every $x\in [0,(4a^2 - a^3)/16].$ Observe that since f is non-decreasing on $[0,a/4]$ , we obtain that for every $x\in [0,(4a^2 - a^3)/16]$ ,

$$ \begin{align*} \alpha_-(x) \leq \beta_-(x) \leq a/4, \end{align*} $$

implying that

$$ \begin{align*} f(\beta_-(x)) -f(\alpha_-(x)) \geq 0. \end{align*} $$
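Lemma 6.6 can be probed numerically by discretising equation (6.9) with a midpoint rule. In the sketch below, the test function, the parameter pair and the tolerances are illustrative choices; the tent-shaped f satisfies conditions (1)–(3) of the lemma.

```python
import math

def roots(x, c):
    # The two preimages of x under y -> c*y*(1 - y).
    r = 0.5 * math.sqrt(1 - 4 * x / c)
    return 0.5 - r, 0.5 + r

def L_op(f, x, a, b, n=2000):
    # Midpoint-rule approximation of equation (6.9).
    def integral(lo, hi):
        h = (hi - lo) / n
        total = 0.0
        for k in range(n):
            y = lo + (k + 0.5) * h
            total += f(y) / ((b - a) * y * (1 - y))
        return total * h
    am, ap = roots(x, b)
    bm, bp = roots(min(x, a / 4), a)
    return integral(am, ap) - integral(bm, bp)

a, b = 1.5, 5.0
f = lambda y: min(4 * y / a, (1 - y) / (1 - a / 4))   # tent with peak at y = a/4

up = [0.01 + k * ((4 * a**2 - a**3) / 16 - 0.01) / 25 for k in range(26)]
down = [a / 4 + k * (0.99 - a / 4) / 25 for k in range(26)]
vals_up = [L_op(f, x, a, b) for x in up]
vals_down = [L_op(f, x, a, b) for x in down]
assert all(v <= w + 1e-6 for v, w in zip(vals_up, vals_up[1:]))      # non-decreasing
assert all(v >= w - 1e-6 for v, w in zip(vals_down, vals_down[1:]))  # non-increasing
```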

Lemma 6.7. Let $(a,b)$ be an admissible pair and $\varepsilon \in (0,3/8)$ such that $[(4a^2 - a^3)/16, a/4]\subset {M_\varepsilon }$ . Consider the sequence of functions $\{\mathcal L_\varepsilon^{n} \mathbb{1}_{M_\varepsilon}\}_{n\in\mathbb N}$ ; then for every $n\in \mathbb N$ , the following assertions hold:

  1. (1) $0\leq \mathcal L_\varepsilon^{n} \mathbb{1}_{M_\varepsilon}(x)$ for every $x\in [0,1];$

  2. (2) $\mathcal L_\varepsilon^{n} \mathbb{1}_{M_\varepsilon}$ is non-decreasing in the interval $[0,(4a^2-a^3)/16];$ and

  3. (3) $\mathcal L_\varepsilon^{n} \mathbb{1}_{M_\varepsilon}$ is non-increasing in the interval $[a/4,1].$

Proof. Recall the definition of the operator $\mathcal L_\varepsilon $ acting on $\mathcal C^0(M_{\varepsilon })$ for every $\varepsilon \in (0,3/8)$ . We divide the proof into two steps.

Step 1. We show the result for the case where $a\geq 2.$

We show the above result by induction on n. The case $n=0$ is immediately verified. Suppose that items $(1)$ , $(2)$ and $(3)$ are true for $\mathcal L_\varepsilon^{n}\mathbb{1}_{M_\varepsilon}.$ We will show that the same is true for $\mathcal L_\varepsilon^{n+1}\mathbb{1}_{M_\varepsilon}.$

Item $(1)$ is trivially fulfilled since $\mathcal L_\varepsilon $ is a positive operator. Additionally items $(2)$ and $(3)$ follow from equation (6.10) and realizing that for every $(a,b)\in [2,4)\times (4,\infty )$ ,

$$ \begin{align*}\alpha_-(x) \leq \beta_-(x) \leq \frac{4a^2 - a^3}{16} \leq \frac{a}{4}\leq \beta_+(x) \leq \alpha_+(x)\quad \text{for every }x\in\bigg[0,\frac{4 a^2 - a^3}{16}\bigg].\end{align*} $$

This proves Step 1.

Step 2. We show that if $(a,b)$ is an admissible pair and $a\in (0,2)$ , then:

  1. (1) $0\leq \mathcal L_\varepsilon^{n} \mathbb{1}_{M_\varepsilon}(x)$ for every $n\in\mathbb N$ and $x\in [0,1];$

  2. (2) $\mathcal L_\varepsilon^{n} \mathbb{1}_{M_\varepsilon}$ is non-decreasing in the interval $[0,a/4];$ and

  3. (3) $\mathcal L_\varepsilon^{n} \mathbb{1}_{M_\varepsilon}$ is non-increasing in the interval $[a/4,1]$ .

We will prove that the above items hold by strong induction on n. For the cases $n=0$ and $n = 1$ , the computations can be carried out explicitly.

Now, suppose that the conclusions of Step $2$ hold for $\mathcal L_\varepsilon^{j}\mathbb{1}_{M_\varepsilon}$ for every $0\leq j\leq n$ ; we will show that they also hold for $\mathcal L_\varepsilon^{n+1}\mathbb{1}_{M_\varepsilon}.$

From Lemma 6.6, it follows that:

  1. (1) $0\leq \mathcal L_\varepsilon^{n+1} \mathbb{1}_{M_\varepsilon}(x)$ for every $x\in [0,1];$

  2. (2) $\mathcal L_\varepsilon^{n+1} \mathbb{1}_{M_\varepsilon}$ is non-decreasing in the interval $[0,(4a^2 - a^3)/16];$ and

  3. (3) $\mathcal L_\varepsilon^{n+1} \mathbb{1}_{M_\varepsilon}$ is non-increasing in the interval $[a/4,1].$

It remains to show that $\mathcal L^{n+1}_\varepsilon \mathbb{1}_{M_{\varepsilon }} $ is non-decreasing in $[(4a^2 - a^3)/16,a/4].$ From the proof of Lemma 6.6, it is enough to show that

(6.11) $$ \begin{align} \mathcal L^{n}_\varepsilon \mathbb{1}_{M_\varepsilon}(\beta_-(x)) \geq \mathcal L^{n}_\varepsilon \mathbb{1}_{M_\varepsilon}(\alpha_-(x))\quad \text{for every } x\in \bigg[\frac{4a^2 - a^3}{16},\frac{a}{4}\bigg]. \end{align} $$

Observe that

$$ \begin{align*} \alpha_-(x) <\frac{a}{4} < \beta_-(x) \quad \text{for every } x\in \bigg[\frac{4a^2 - a^3}{16},\frac{a}{4}\bigg]. \end{align*} $$

Therefore,

(6.12) $$ \begin{align} \mathcal L^{n}_\varepsilon \mathbb{1}_{M_\varepsilon}(\beta_-(x)) = \int_{\alpha_-(\beta_-(x))}^{\alpha_+(\beta_-(x))} \frac{\mathcal L^{n-1}_\varepsilon \mathbb{1}_{M_\varepsilon}(y)}{(b-a)\,y(1-y)} \,\mathrm{d} y \end{align} $$

and

$$ \begin{align*} \mathcal L^{n}_\varepsilon \mathbb{1}_{M_\varepsilon}(\alpha_-(x)) = \int_{\alpha_-(\alpha_-(x))}^{\alpha_+(\alpha_-(x))} \frac{\mathcal L^{n-1}_\varepsilon \mathbb{1}_{M_\varepsilon}(y)}{(b-a)\,y(1-y)} \,\mathrm{d} y - \int_{\beta_-(\alpha_-(x))}^{\beta_+(\alpha_-(x))} \frac{\mathcal L^{n-1}_\varepsilon \mathbb{1}_{M_\varepsilon}(y)}{(b-a)\,y(1-y)} \,\mathrm{d} y. \end{align*} $$

Since $(a,b)$ is an admissible pair, equation (6.7) implies that for every $x\in [(4a^2 - a^3)/16,a/4]$ ,

$$ \begin{align*} \beta_-\circ\alpha_-(x) < \alpha_- \circ \beta_- (x) <\frac{a}{4}< \alpha_+ \circ \beta_-(x) \leq \beta_+ \circ \alpha_-(x). \end{align*} $$

This implies that

where

$$ \begin{align*}I_1^{(a,b)}(x):=2 \big(\tanh ^{-1}\big(\sqrt{{2 \sqrt{1-{4 x}/{b}}+b-2}/{b}}\big)-\tanh ^{-1}\big(\sqrt{{a+2 \sqrt{1-{4 x}/{b}}-2}/{a}}\big)\big).\end{align*} $$

However, from the induction hypothesis and equation (6.12),

where

$$ \begin{align*}I_2^{(a,b)}(x) :=\bigg( 2\tanh ^{-1}\bigg(\sqrt{{2 \sqrt{1-{4 x}/{a}}+b-2}/{b}}\bigg)+\log \bigg(\frac{a}{4-a}\bigg)\bigg).\end{align*} $$

Combining the above three equations, equation (6.8) and using the definition of admissible pair, we obtain that equation (6.11) holds. This proves Step $2$ .

The above two steps complete the proof of the lemma.

Recall from Proposition 6.4 that, for every $i\in \mathbb N$ ,

$$ \begin{align*}g_{\varepsilon_i}^{a,b}:= g_{\varepsilon_i} = \frac{\mu_{\varepsilon_i}(\mathrm{d} x) }{\mathrm{Leb}(\mathrm{d} x)} \in L^1 ([0,1],\mathrm{Leb}),\end{align*} $$

where we set $g_{\varepsilon _i}(x) =0$ for every $x\in M\setminus M_{\varepsilon _i}.$

Lemma 6.8. Let $(a,b)$ be an admissible pair. Then, for every $i\in \mathbb N$ :

  1. (1) $0\leq g_{\varepsilon _i} (x)$ for every $x\in [0,1];$

  2. (2) $ g_{\varepsilon _i}(x)$ is non-decreasing in the interval $[0,(4a^2 - a^3)/16];$ and

  3. (3) $ g_{\varepsilon _i} (x)$ is non-increasing in the interval $[a/4,1].$

Proof. Recall the definition of the operator $\mathcal L_{\varepsilon_i}$ acting on $\mathcal C^0(M_{\varepsilon_i})$ for every $i\in \mathbb N$ .

Observe that if $(a,b)$ is an admissible pair and $i\in \mathbb N$ , then $\mathcal L_{\varepsilon_i} :\mathcal C^0(M_{\varepsilon_i} )\to \mathcal C^0(M_{\varepsilon_i} )$ is an irreducible compact operator. Moreover, it is readily verified that $\mathcal L_{\varepsilon_i} $ admits a single eigenvalue in its peripheral spectrum, implying that

$$ \begin{align*} \lim_{n\to\infty} \frac{1}{\unicode{x3bb}_{\varepsilon_i}^{n}}\, \mathcal L_{\varepsilon_i}^{n} \mathbb{1}_{M_{\varepsilon_i}} = \alpha_{\varepsilon_i}\, g_{\varepsilon_i} \quad\text{uniformly on } M_{\varepsilon_i}, \end{align*} $$

for some $\alpha _{\varepsilon _i}>0.$

The lemma follows directly from the above equation in combination with Lemma 6.7.
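The scheme of Lemmas 6.7 and 6.8 — iterating the operator on the constant function and extracting the leading eigenpair — can be imitated with a crude discretisation of equation (6.9). The sketch below assumes $M_\varepsilon = [\varepsilon, 1-\varepsilon]$ and uses plain power iteration; the grid size, $\varepsilon$ and the parameter values are illustrative, and the discretisation is not claimed to reproduce the operator $\mathcal L_{\varepsilon}$ of the paper exactly.

```python
import math

def build_matrix(a, b, eps, N):
    # Midpoint discretisation of equation (6.9) restricted to M_eps = [eps, 1 - eps]
    # (identifying M_eps with [eps, 1 - eps] is an assumption of this sketch).
    h = (1 - 2 * eps) / N
    ys = [eps + (k + 0.5) * h for k in range(N)]
    A = []
    for x in ys:
        am = 0.5 - 0.5 * math.sqrt(1 - 4 * x / b)   # alpha_-(x); alpha_+ = 1 - alpha_-
        m = min(x, a / 4)
        bm = 0.5 - 0.5 * math.sqrt(1 - 4 * m / a)   # beta_-(x ^ a/4)
        A.append([h / ((b - a) * y * (1 - y))
                  if am <= y <= 1 - am and not (bm < y < 1 - bm) else 0.0
                  for y in ys])
    return A

def leading_pair(A, iters=100):
    # Plain power iteration started from the constant function, as in Lemma 6.7.
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(n)) for row in A]
        lam = sum(w) / sum(v)          # ratio of discrete L^1 norms
        top = max(w)
        v = [x / top for x in w]
    return lam, v

lam, g_eps = leading_pair(build_matrix(1.5, 5.0, 0.02, 150))
assert 0 < lam < 1 and min(g_eps) >= 0
```

The eigenvalue estimate `lam` plays the role of the survival rate $\unicode{x3bb}_{\varepsilon}$ and the normalised iterate `g_eps` approximates the shape of $g_{\varepsilon}$.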

Combining Lemmas 6.6, 6.7 and 6.8, we obtain the following result.

Theorem 6.9. Let $(a,b)$ be an admissible pair. Then the absorbing Markov chain $Y_n^{a,b}$ admits a quasi-stationary measure $\mu $ on $[0,1]$ different from $\delta _0.$

Proof. For every $i\in \mathbb N$ , let $\mu _{\varepsilon _i}(\mathrm {d} x) = g_{\varepsilon _i} (x) \mathrm {d} x$ be the unique quasi-stationary measures for $Y_n^{\varepsilon _i}$ on $M_{\varepsilon _i}$ given by Proposition 6.4 and extend it to $[0,1]$ in a way that $\mu _{\varepsilon _i}( M\setminus M_{\varepsilon _i} ) = 0. $

Since $\mathcal M_1 ([0,1])$ is sequentially compact in the weak $^*$ topology, we can assume without loss of generality (passing to a subsequence if necessary) that the sequence of real numbers $\{\varepsilon _i\}_{i\in \mathbb N}$ is such that $\lim _{i\to \infty } \varepsilon _i = 0, \mu _{\varepsilon _i} \to \mu \ \text {in the weak}^{*}\ \text {topology}$ and $\lim _{i\to \infty } \unicode{x3bb} _{\varepsilon _i} = \unicode{x3bb} \geq ({4-a})/({b-a}).$

From Proposition 6.5, the probability measure $\mu $ is a quasi-stationary measure of ${\mathcal P}$ . It remains to show that $\mu \neq \delta _0.$ Suppose by contradiction that $\mu = \delta _0.$ Then, $\lim _{i\to \infty } \mu _{\varepsilon _i}([0,(4a^2 -a^3)/32]) = 1. $ However, from Lemma 6.8, it follows that

$$ \begin{align*}{\mu_{\varepsilon_i}([0,(4a^2 - a^3)/32]) \leq \mu_{\varepsilon_i}([(4a^2 -a^3)/32,(4a^2 -a^3)/16])}\quad\text{for every }i\in\mathbb N.\end{align*} $$

Taking the limit as $i\to \infty $ , we obtain that $1 \leq \mu ([(4a^2 -a^3)/32,(4a^2 -a^3)/16])$ , which is a contradiction, implying that $\mu \neq \delta _0.$

Remark 6.10. Observe that without assuming that $(a,b)$ is an admissible pair, the inductive step presented in Step 2 of Lemma 6.7 no longer holds. Without this lemma, the core argument in the proof of Theorem 6.9 cannot be applied, and the existence of a non-trivial quasi-stationary measure for $Y_n^{a,b}$ becomes unclear.

From now on, we define $\mu _{a,b} = \mu $ as a non-trivial quasi-stationary measure for $Y_n$ on $[0,1]$ and $\unicode{x3bb} _{a,b} =\unicode{x3bb} $ its associated survival rate (given by Theorem 6.9). The next proposition shows that $\mu $ is absolutely continuous with respect to the Lebesgue measure.

Proposition 6.11. Let $(a,b)$ be an admissible pair. Then, $\mu \ll \mathrm {Leb}(\mathrm {d} x)$ and $0<\unicode{x3bb} < 1.$

Proof. We can decompose $\mu (\mathrm {d} x) = \mu (\{0\})\delta _0(\mathrm {d} x) + \mu '(\mathrm {d} x) + \mu (\{1\})\delta _1(\mathrm {d} x).$

Since $\delta _0 \neq \mu $ , we obtain that $\mu (\{0\}) \neq 1.$ Observe that

$$ \begin{align*} \unicode{x3bb} \mu(\{0\}) \delta_0(\mathrm{d} x) + \unicode{x3bb} \mu'(\mathrm{d} x) + \unicode{x3bb} \mu(\{1\}) \delta_1(\mathrm{d} x) &= \unicode{x3bb} \mu(\mathrm{d} x)\\ &= {\mathcal P}^*(\mu)(\mathrm{d} x)\\ &= (\mu(\{1\}) + \mu(\{0\})) \delta_0(\mathrm{d} x) + {\mathcal P}^* (\mu')(\mathrm{d} x). \end{align*} $$

Since ${\mathcal P}^*(\mu ') \ll \mathrm {Leb}(\mathrm {d} x)$ , it follows that ${\mathcal P}^*(\mu ')(\{1\}) =0,$ implying that

$$ \begin{align*}\mu(\{1\}) = 0\end{align*} $$

and

$$ \begin{align*}\unicode{x3bb} \mu(\{0\}) = \mu(\{0\}). \end{align*} $$

We claim that $\unicode{x3bb} < 1.$ Suppose, for contradiction, that $\unicode{x3bb} =1$ ; then

$$ \begin{align*}\mu = {({\mathcal P}^*)}^n \mu = \mu(\{0\})\delta_0(\mathrm{d} x) + ({{\mathcal P}^*})^n \mu'. \end{align*} $$

Since ${\mathcal P}^n(x,[0,1]) \to 0$ as $n\to \infty $ for every $x\in (0,1),$ we have, by the Lebesgue dominated convergence theorem,

$$ \begin{align*}\mu = \lim_{n\to\infty}{({\mathcal P}^*)}^n \mu = \mu(\{0\})\delta_0(\mathrm{d} x) + \lim_{n\to\infty}{({\mathcal P}^*)}^n \mu' = \mu(\{0\}) \delta_0, \end{align*} $$

which is a contradiction since $\mu (\{0\}) \neq 1.$

This implies that $\unicode{x3bb} < 1$ and therefore $\mu (\{0\}) = 0.$ Therefore, $\mu (\{0\}\cup \{1\}) = 0$ and

$$ \begin{align*} \unicode{x3bb} \mu = {\mathcal P}^* \mu \ll \mathrm{Leb}(\mathrm{d} x). \end{align*} $$

From now on, we define

$$ \begin{align*}\frac{\mu_{a,b} (\mathrm{d} x)}{\mathrm{Leb}(\mathrm{d} x)} =: g^{a,b}= g \in L^1([0,1],\mathrm{Leb}). \end{align*} $$

The next result summarizes the properties of $g.$

Proposition 6.12. Let $(a,b)$ be an admissible pair. Then the function g fulfils the following properties:

  1. (i) $g \in \mathcal C^0(M);$

  2. (ii) g is non-decreasing in the interval $[0,(4a^2 -a^3)/16];$

  3. (iii) g is non-increasing in the interval $[a/4,1]$ ;

  4. (iv) there exists $k>0$ such that $k<g(x)$ for every $x\in M.$

Proof. We divide this proof into three steps.

Step 1. We show that $g(x)>0$ for every $x\in (0,1].$

Suppose that there exists $x \in (0,1]$ such that $g(x) = 0.$ Therefore,

$$ \begin{align*}0 = \unicode{x3bb} g(x) = \int_{\alpha_-(x)}^{\alpha_+(x)}\frac{g(y)}{(b-a)y(1-y)} \mathrm{d} y - \int_{\beta_-(x\wedge a/4)}^{\beta_+(x\wedge a/4)}\frac{g(y)}{(b-a)y(1-y)} \mathrm{d} y. \end{align*} $$

This implies that

$$ \begin{align*} g(y) = 0\quad \text{for all } y\in I_1:= [\alpha_-(x),\beta_-(x\wedge a/4)]\cup [\beta_+(x\wedge a/4),\alpha_+(x)] \subset (0,1).\end{align*} $$

Let $x_0 \in \mathrm {supp}(\mu )\cap (0,1).$ By the same arguments presented in the proof of Proposition 6.4, we can show that there exists $n_0 = n_0(x_0,I_1)$ such that ${\mathcal P}^{n_0}(x_0, I_1)>0.$ Since ${\mathcal P}^{n_0}(\cdot,I_1)$ is a continuous function, there exists an open neighbourhood $B \subset (0,1)$ of $x_0$ such that

$$ \begin{align*}\inf_{y\in B} {\mathcal P}^{n_0}(y,I_1) \geq \tfrac{1}{2} {\mathcal P}^{n_0}(x_0,I_1)>0.\end{align*} $$

Therefore,

$$ \begin{align*}0 = \mu(I_1) = \frac{1}{\unicode{x3bb}^{n_0}}\int_M {\mathcal P}^{n_0}(y,I_1)g(y)\,\mathrm{d} y \geq \frac{{\mathcal P}^{n_0}(x_0,I_1)}{2\unicode{x3bb}^{n_0}}\mu(B)>0, \end{align*} $$

which is a contradiction. Therefore, $g(x)>0$ for every $x\in (0,1].$

Step 2. We show $(i),(ii)$ and $(iii).$

Recall that for every $i\in \mathbb N$ , $\unicode{x3bb}_{\varepsilon_i} g_{\varepsilon_i}(x) \leq \mathcal L g_{\varepsilon_i}(x)$ for every $x\in M_{\varepsilon_i}$ . This observation, combined with Lemma 6.8 and Lemma 6.6, implies that

$$ \begin{align*}\|\mathcal{L} g_{\varepsilon_i}\|_{L^\infty} = \sup_{y\in [({4a^2 - a^3})/{16},{a}/{4}]}\mathcal{L} g_{\varepsilon_i}(y) \quad \text{for every }i\in\mathbb N. \end{align*} $$

Let

$$ \begin{align*} J := \bigcup_{x\in[({4a^2 - a^3})/{16},{a}/{4}]} ( [\alpha_-(x),\beta_-(x) \land a/4 ]\cup [\beta_-(x) \land a/4, \alpha_+(x) \land a/4])\subset (0,1),\end{align*} $$

and observe that J is a compact set. Finally,

(6.13) $$ \begin{align} 0&\leq g_{\varepsilon_i}(x) \leq \frac{1}{\unicode{x3bb}_{\varepsilon_i}} \mathcal{L} g_{\varepsilon_i}(x) \leq \frac{1}{\unicode{x3bb}_{\varepsilon_i}}\sup_{y\in [({4a^2 - a^3})/{16},{a}/{4}]}\mathcal{L} g_{\varepsilon_i}(y)\leq \frac{1}{\unicode{x3bb}_{\varepsilon_i}}\int_{J}\frac{g_{\varepsilon_i} (y)}{(b-a)y(1-y)}\, \mathrm{d} y\nonumber\\ &\leq \sup_{y\in J}\frac{1}{(b-a) y (1-y)} \sup_{i\in\mathbb N} \frac{1}{\unicode{x3bb}_{\varepsilon_i}}=: C <\infty. \end{align} $$

Therefore, equation (6.13) provides a uniform bound for $\{ g_{\varepsilon _i}\}_{i\in \mathbb N}$ in $L^{\infty }(M)$ . For every $0<\delta<1/2,$ consider the restriction map $T_\delta\colon \mathcal C^0([0,1])\to \mathcal C^0([\delta ,1-\delta ])$ , $f\mapsto f|_{[\delta,1-\delta]}$ .

From the Arzelà–Ascoli theorem and equation (6.13), the family $\{T_\delta \mathcal L g_{\varepsilon_i}\}_{i\in\mathbb N}$ is relatively compact in $\mathcal C^0([\delta,1-\delta])$ for every $0 <\delta <1/2$ , so there exists a subsequence $\{\mathcal L g_{\varepsilon _{i_n}}\}_{n\in \mathbb N} \subset \{\mathcal L g_{\varepsilon _{i}}\}_{i\in \mathbb N} $ and $f_\delta \in \mathcal C^0([\delta ,1-\delta ])$ such that

$$ \begin{align*}\lim_{n\to \infty} \| T_\delta \mathcal L g_{\varepsilon_{i_n}} - f_\delta\|_{\infty} = 0.\end{align*} $$

Choosing an interval $I_\delta \subset [\delta ,1-\delta ]$ and observing that

$$ \begin{align*}x\mapsto {\mathcal P}(x,I_\delta)\ \text{is continuous on } [0,1], \end{align*} $$

it follows that

$$ \begin{align*} \int_{I_\delta} f_{\delta}(x) \,\mathrm{d} x = \lim_{n\to\infty } \int_{I_\delta} \mathcal L g_{\varepsilon_{i_n}}(x)\,\mathrm{d} x = \lim_{n\to\infty } \int_{0}^{1} {\mathcal P} (x,I_\delta) g_{\varepsilon_{i_n}}(x)\,\mathrm{d} x = \int_{I_\delta} \unicode{x3bb} g (x)\,\mathrm{d} x. \end{align*} $$

Since $I_\delta $ is an arbitrary interval subset of $[\delta ,1-\delta ]$ , we obtain $f_\delta =\unicode{x3bb} g|_{[\delta ,1-\delta ]}$ . Using that every subsequence of $\{T_\delta \mathcal L g_{\varepsilon _{i}}\}_{i\in\mathbb N}$ admits a further subsequence converging to $\unicode{x3bb} g|_{[\delta,1-\delta]}$ , we obtain that

$$ \begin{align*} \mathcal L g_{\varepsilon_i} \to \unicode{x3bb} g\quad \text{pointwise on } (0,1)\ \text{as } i\to\infty. \end{align*} $$

Since $\{g_{\varepsilon _i}\}_{i\in \mathbb N}$ is bounded in $L^\infty (M,\mu )$ and g lies in $L^1(M,\mu ),$ the dominated convergence theorem implies that

$$ \begin{align*} \mathcal{L} g_{\varepsilon_i} \to \unicode{x3bb} g\ \text{in }L^1([0,1]).\end{align*} $$

Thus, there exists a subsequence $\{g_{\varepsilon _{n_i}}\}_{i\in \mathbb N} \subset \{g_{\varepsilon _{n}}\}_{n\in \mathbb N} $ such that

(6.14) $$ \begin{align} \lim_{i\to\infty} \mathcal L g_{\varepsilon_{n_i}} = \unicode{x3bb} g \quad \mu\text{-a.s}. \end{align} $$

Therefore, for $\mu $ -almost every $x\in M$ ,

$$ \begin{align*} 0\leq g(x) &\leq \frac{1}{\unicode{x3bb}}\lim_{i\to\infty}\mathcal{L} g_{\varepsilon_{n_i}}(x)\leq \frac{C}{\unicode{x3bb} }, \end{align*} $$

which implies that $g\in L^\infty ([0,1])$ . Since for every $i\in \mathbb N$ :

  1. (1) $ \mathcal {L} g_{\varepsilon _{n_i}}$ is non-decreasing in the interval $[0,(4a^2 - a^3)/16];$ and

  2. (2) $ \mathcal {L} g_{\varepsilon _{n_i}}$ is non-increasing in the interval $[a/4,1],$

from equation (6.14) and the continuity of g on $(0,1]$ , we obtain that:

  1. (1) g is non-decreasing in the interval $[0,(4a^2 - a^3)/16];$ and

  2. (2) g is non-increasing in the interval $[a/4,1].$

The proof of this step is finished by observing that $g\in \mathcal C^0([0,1])$ when imposing $g(0) := \inf _{x\in (0,a/4)} g(x). $

Step 3. We show item (iv).

Observe that, by virtue of Step 2, it is enough to show that $g(0)>0$ . Since g is continuous, it follows that

$$ \begin{align*}\lim_{\varepsilon \to 0} \frac{1}{\varepsilon}\int_0^\varepsilon g(y) \,\mathrm{d} y = g(0). \end{align*} $$

Since $g(x)\,\mathrm {d} x$ is a quasi-stationary measure of $Y_n$ on $[0,1]$ , we obtain that

$$ \begin{align*} \int_0^\varepsilon g(y)\mathrm{d} y = \frac{1}{\unicode{x3bb}} \int_0^1 {\mathcal P}(y,[0,\varepsilon]) g(y)\, \mathrm{d} y. \end{align*} $$

It is clear that

$$ \begin{align*}{\mathcal P}(x,[0,\varepsilon]) = 1\quad \text{for every } x \in [\alpha_+(\varepsilon), 1]\subset [a/4,1]. \end{align*} $$

Since g is non-increasing in $[a/4,1]$ and $g(1)>0$ , it follows that

$$ \begin{align*} \int_0^\varepsilon g(y)\,\mathrm{d} y = \frac{1}{\unicode{x3bb}} \int_0^1 {\mathcal P}(y,[0,\varepsilon]) g(y) \mathrm{d} y \geq \frac{g(1)}{\unicode{x3bb}} \bigg(1- \alpha_+(\varepsilon)\bigg) = \frac{g(1)}{\unicode{x3bb}} \bigg(\frac{1}{2}-\frac{1}{2} \sqrt{1-\frac{4 \varepsilon }{b}}\bigg).\end{align*} $$

Finally,

$$ \begin{align*} g(0) &= \lim_{\varepsilon \to 0} \frac{1}{\varepsilon} \int_0^\varepsilon g(y)\mathrm{d} y \geq \lim_{\varepsilon \to 0} \frac{1}{\varepsilon} \frac{g(1)}{\unicode{x3bb}} \bigg(\frac{1}{2}-\frac{1}{2} \sqrt{1-\frac{4 \varepsilon }{b}}\bigg) = \frac{g(1)}{b\unicode{x3bb}}>0. \end{align*} $$

Combining Steps 1–3, we conclude the proof of the proposition.
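The limit used at the end of Step 3, $\lim_{\varepsilon\to 0}\varepsilon^{-1}(\tfrac12-\tfrac12\sqrt{1-{4\varepsilon}/{b}}) = 1/b$, admits a quick numerical sanity check (the value b = 5 is an arbitrary choice):

```python
import math

def ratio(eps, b):
    # (1/eps) * (1/2 - (1/2) * sqrt(1 - 4*eps/b)); should approach 1/b as eps -> 0.
    return (0.5 - 0.5 * math.sqrt(1 - 4 * eps / b)) / eps

b = 5.0
for eps in (1e-3, 1e-5, 1e-7):
    assert abs(ratio(eps, b) - 1 / b) < 2 * eps   # the error is O(eps)
```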

To apply Theorem 2.4, we need to show that ${\mathcal P}$ admits an eigenfunction lying in $L^1([0,1],\mathrm {Leb}).$ To do this, consider the operator

$$ \begin{align*} T: \mathcal C^0([0,1])&\to \mathcal C^0([0,1]),\\ f&\mapsto \frac{\mathcal L(g f)}{\unicode{x3bb} g}. \end{align*} $$

It is clear that T is a Markov operator that is:

  1. (1) $T: \mathcal C^0([0,1])\to \mathcal C^0([0,1])$ is a bounded positive linear operator;

  2. (2) $T1 =1.$

Proposition 6.13. Let $(a,b)$ be an admissible pair. Then there exists a probability measure $\nu \ll \mathrm {Leb}$ such that $\nu $ is a fixed point of the operator $T^*: \mathcal M(M) \to \mathcal M(M)$ .

Proof. Since T is a Markov operator, it is well known that there exists a probability measure $\nu $ such that $T^*\nu = \nu $ (see [10, Ch. 10]).

Let us decompose $\nu $ as

$$ \begin{align*}\nu = \alpha_1 \delta_0 + \alpha_2 \nu' + \alpha_3 \delta_1, \end{align*} $$

where $\nu' \in \mathcal M_1(M)$ and $\nu '(\{0\} \cup \{1\}) = 0.$

Since

$$ \begin{align*} \mathcal L (fg)(0) &= \frac{1}{b-a}\lim_{x\to 0} \bigg(\int_{\alpha_-(x)}^{\beta_-(x)} \frac{f(y)g(y)}{ y(1-y)} \mathrm{d} y + \int_{\beta_+(x)}^{\alpha_+(x)} \frac{f(y)g(y)}{ y(1-y)} \mathrm{d} y \bigg) \\ &= \frac{\log(b/a)}{b-a}f(0)g(0) + \frac{\log(b/a)}{b-a}f(1)g(1), \end{align*} $$

we obtain that

(6.15) $$ \begin{align} Tf(0) =\frac{\log(b/a)}{(b-a) \unicode{x3bb} }f(0) + \frac{\log(b/a)}{(b-a)\unicode{x3bb}} \frac{g(1)}{g(0)}f(1). \end{align} $$

From a similar computation, we obtain that

(6.16) $$ \begin{align} Tf(1) = \frac{1}{ \unicode{x3bb} g(1)}\int_{\alpha_-(1)}^{\alpha_+(1)} f(x) g(x) \,\mathrm{d} x. \end{align} $$

Note that given $A\in \mathscr B([0,1])$ such that $\mathrm {Leb}(A)= 0$ and $A \subset [\delta , 1-\delta ]$ for some $\delta>0,$ then $\mathcal L(g\mathbb{1}_A) \equiv 0$ and therefore $T\mathbb{1}_A \equiv 0.$ This implies that

$$ \begin{align*} T^*\nu'(A) = \int_0^1 T\mathbb{1}_A(x)\, \nu'(\mathrm{d} x) = 0; \end{align*} $$

since $\nu '(\{0\} \cup \{1\}) = 0$ , we obtain

(6.17) $$ \begin{align} T^*\nu'(\mathrm{d} x) \ll \mathrm{Leb}(\mathrm{d} x). \end{align} $$

Combining $T^*\nu = \nu $ , and equations (6.15), (6.16) and (6.17), we obtain that $\nu ' \ll \mathrm {Leb}(\mathrm {d} x). $

Let $\{f_n\}_{n\in \mathbb N} \subset \mathcal C^0(M)$ be a sequence of continuous functions such that:

  1. (1) $0\leq f_n (x)\leq 1$ for every $n\in \mathbb N$ and $x\in [0,1];$

  2. (2) $f_n (1) =1;$ and

  3. (3) $f_n(x) =0$ for every $x\in [0,1-1/n].$

Since $T^*\nu = \nu $ and $f_n$ is continuous, it follows that

(6.18) $$ \begin{align} \int_M f_n(x) \nu(\mathrm{d} x) = \int_M Tf_n(x) \nu(\mathrm{d} x) \quad \text{for every }n\in\mathbb N. \end{align} $$

The left-hand side of equation (6.18) is equal to

$$ \begin{align*} \int_M f_n(x) \nu(\mathrm{d} x) = \alpha_2 \int f_n(x) \nu'(\mathrm{d} x) + \alpha_3 f_n(1) = \alpha_2 \int f_n(x) \nu'(\mathrm{d} x) + \alpha_3, \end{align*} $$

and the right-hand side of equation (6.18) is equal to

$$ \begin{align*} &\int_M f_n(x) T^*\nu(\mathrm{d} x)\\ &\quad= \alpha_1\bigg( \frac{\log(b/a)}{(b-a)\unicode{x3bb} }f_n(0) + \frac{\log(b/a) g(1)}{(b-a) g(0)\unicode{x3bb}}f_n(1)\bigg) + \alpha_2 \int_0^1 f_n(x)T^*\nu'(\mathrm{d} x) \\ &\qquad + \alpha_3 \frac{1}{\unicode{x3bb} g(1)}\int_{\alpha_-(1)}^{\alpha_+(1)} f_n(x) g(x) \,\mathrm{d} x\\ &\quad= \alpha_1 \frac{\log(b/a)}{b-a} \frac{ g(1)}{\unicode{x3bb} g(0)} + \alpha_2 \int_0^1 f_n(x)T^*\nu'(\mathrm{d} x)+ \alpha_3 \frac{1}{\unicode{x3bb} g(1)}\int_{\alpha_-(1)}^{\alpha_+(1)} f_n(x) g(x) \,\mathrm{d} x. \end{align*} $$

Taking the limit as $n\to \infty $ in equation (6.18), we obtain that

$$ \begin{align*}\alpha_3 = \alpha_1 \frac{\log(b/a)}{b-a} \frac{ g(1)}{\unicode{x3bb} g(0)}. \end{align*} $$

Repeating the same argument with the sequence $\{f_n(1-x)\}_{n\in \mathbb N} \subset \mathcal C^0([0,1])$ , we obtain that

$$ \begin{align*} \alpha_1 = \frac{\log(b/a)}{(b-a)\unicode{x3bb}} \alpha_1. \end{align*} $$

If $\alpha _1=0$ , then $\alpha _3 =0$ and the proof is finished. Suppose, for contradiction, that $\alpha _1>0$ ; then the above equation shows that $ 1= \log (b/a)\unicode{x3bb} ^{-1}(b-a)^{-1}.$ However, we then obtain that

$$ \begin{align*}g(0)= \frac{1}{\unicode{x3bb}}\mathcal L g(0) = \frac{1}{\unicode{x3bb}}\frac{\log(b/a)}{b-a}g(0) +\frac{1}{\unicode{x3bb}}\frac{\log(b/a)}{b-a}g(1) = g(0) +\frac{1}{\unicode{x3bb}}\frac{\log(b/a)}{b-a}g(1), \end{align*} $$

therefore, $g(1) =0,$ contradicting Proposition 6.12.

With the above results, we can prove the following two theorems.

Theorem 6.14. Let $(a,b)$ be an admissible pair. Then the operator ${\mathcal P}:L^1([0,1],\mu ) \to L^1([0,1],\mu )$ admits an eigenfunction $\eta $ with respect to the eigenvalue $\unicode{x3bb} $ such that $\mu (\{\eta>0\})=1$ and $\|\eta \|_{L^1(M,\mu )}=1.$ In particular, $Y_n^{a,b}$ fulfils Hypothesis H1.

Proof. From Proposition 6.13, there exists an eigenmeasure $\nu (\mathrm {d} x) = h (x) \,\mathrm {d} x$ of $T^*$ with $h \in L^1([0,1],\mathrm{Leb}).$ This implies that for every $f\in \mathcal C^0(M)$ ,

$$ \begin{align*} \int_0^1 T(f)(x) h(x) \,\mathrm{d} x = \int_0^1 f(x) h (x) \,\mathrm{d} x. \end{align*} $$

However, since $f g\in L^\infty ([0,1])$ , we obtain that

$$ \begin{align*} \int_0^1 f(x) h (x) \,\mathrm{d} x &= \int_0^1 Tf (x) h(x) \,\mathrm{d} x = \int_0^1 \frac{\mathcal L (fg) (x)}{ \unicode{x3bb} g(x)} h(x) \,\mathrm{d} x\\&= \int_0^1 f(x) \frac{g(x)}{\unicode{x3bb}}{\mathcal P}\bigg( \frac{h}{g}\bigg) \,\mathrm{d} x. \end{align*} $$

Finally, defining $\eta (x) = h(x)/g(x)$ , it follows that ${\mathcal P}\eta = \unicode{x3bb} \eta .$ Since

$$ \begin{align*}\eta (x) = \frac{1}{\unicode{x3bb} (b-a) x(1-x)} \int_{a x(1-x)}^{b x(1-x)\land 1} \eta (y)\, \mathrm{d} y, \end{align*} $$

we clearly have that $\eta \in \mathcal C^0( (0,1)).$ Moreover, it is easy to see that if there exists $x_0\in (0,1)$ such that $ \eta (x_0) = 0,$ then $\eta ( x) = 0 \ \mathrm {Leb}$ -a.s. in $(0,1)$ , which is a contradiction.

Theorem 6.15. Let $(a,b)$ be an admissible pair. Consider $M=[0,1]$ and the Markov chain $Y_{n+1}^{(a,b)} = \omega _n Y_n^{(a,b)} (1-Y_n^{(a,b)})$ absorbed at $\partial = {\mathbb R} \setminus M,$ with $\{\omega _n\}_{n\in \mathbb N}$ an i.i.d. sequence of random variables such that $\omega _n\sim \mathrm {Unif}([a,b])$ . Then we have the following.

  1. (i) $Y_n^{a,b}$ admits a quasi-stationary measure $\mu _{a,b}$ with survival rate $\unicode{x3bb} _{a,b}$ such that $\mathrm {supp}(\mu ) =[0,1]$ and $\mu \ll \mathrm {Leb}$ , where $\mathrm {Leb}$ denotes the Lebesgue measure on $[0,1].$

  2. (ii) There exists $\eta ^{a,b} \in L^1(M,\mu )$ such that ${\mathcal P} \eta ^{a,b} = \unicode{x3bb} _{a,b} \eta ^{a,b} $ , $\|\eta ^{a,b}\|_{L^1(M,\mu )} =1$ and $\eta ^{a,b}>0 \ \mu _{a,b}$ -a.s.

  3. (iii) For every $h\in L^\infty (M,\mathrm {Leb}),$

    $$ \begin{align*} \lim_{n\to\infty} {\mathbb E}_x\bigg[ \frac{1}{n}\sum_{i=0}^{n-1} h\circ Y_i^{a,b} \mid \tau>n\bigg] = \int_M h(y) \eta^{a,b}(y) \mu_{a,b}(\mathrm{d} y)\quad \text{for every }x\in(0,1).\end{align*} $$
  4. (iv) For every $h\in L^\infty (M,\mu )$ ,

    $$ \begin{align*}\lim_{n\to \infty} {\mathbb E}_x[ h \circ Y_n^{a,b}\mid \tau>n] = \int h(y) \mu_{a,b}(\mathrm{d} y)\quad \mbox{for every } x\in (0,1). \end{align*} $$

Proof. Note that Theorem 6.14 implies that $Y_n^{(a,b)}$ satisfies items $\mathrm {(H1a)}$ and $\mathrm {(H1b)}$ of Hypothesis H1, while items $\mathrm {(H1c)}$ and $\mathrm {(H1d)}$ of Hypothesis H1 follow from Propositions 6.1 and 6.12.

Once again, from Propositions 6.1 and 6.12, we obtain that $Y_n^{a,b}$ satisfies Hypothesis H2 by defining $K_i := [1/i,1-1/i]$ for every $i\in \mathbb N$ . Also, since the logistic map $4 x(1-x)$ is chaotic in $[0,1]$ and $f:{\mathbb R}\times [a,b]\to {\mathbb R}$ , $f(x,\omega )=\omega x(1-x)$ , is a continuous function, we conclude that $m=1$ in Theorem 2.2. Therefore, the conclusions of the theorem follow directly from Theorem 2.4.
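The chain of Theorem 6.15 is easy to simulate. The Monte Carlo sketch below estimates the number of paths surviving to a given horizon and their conditioned empirical average; the parameter pair, the horizon and the sample size are illustrative choices, and the conditioned average only approximates the limits in items (iii) and (iv) for large n.

```python
import random

def conditioned_stats(a, b, n_steps, n_samples, x0=0.5, seed=1):
    # Simulate Y_{k+1} = w_k * Y_k * (1 - Y_k), w_k ~ Unif([a, b]),
    # killing a path as soon as it leaves M = [0, 1].
    rng = random.Random(seed)
    alive = []
    for _ in range(n_samples):
        x = x0
        survived = True
        for _ in range(n_steps):
            x = rng.uniform(a, b) * x * (1 - x)
            if not 0.0 <= x <= 1.0:
                survived = False
                break
        if survived:
            alive.append(x)
    mean = sum(alive) / len(alive)
    return len(alive), mean

surv, m = conditioned_stats(1.5, 5.0, 10, 20000)
assert 0 < surv < 20000
assert 0.0 < m < 1.0
```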

6.1 Analysis of the admissible pairs

Fixing a pair $(a,b) \in (0,4)\times (4,\infty )$ , it is relatively easy to check whether $(a,b)$ is an admissible pair. However, it is complicated to solve inequalities (6.7) and (6.8) in terms of $(a,b)$ . In this section, we prove that every $(a,b)\in [1,4)\times (4,\infty )$ is an admissible pair.

We start by showing that for every $(a,b)\in [1,2)\times (4,\infty ),$ inequality (6.7) is fulfilled.

Proposition 6.16. For every $(a,b)\in [1,2)\times (4,\infty )$ , we have that

$$ \begin{align*}0\leq \frac{1}{2}-\frac{1}{2} \sqrt{1-\frac{2}{b}\bigg(1-\sqrt{1-\frac{4x}{a} }\bigg)} \leq \frac{a}{4}\quad \text{for every }x\in\bigg[\frac{4 a^2 - a^3}{16},\frac{a}{4}\bigg].\end{align*} $$

Proof. Note that $\tfrac 12-\tfrac 12 \sqrt {1-{2}/{b}(1-\sqrt {1-{4x}/{a}})}$ is an increasing function in $x.$ Therefore, for every $x\in [(4a^2 - a^3)/16,a/4]$ , we obtain

$$ \begin{align*} 0\leq & \frac{1}{2}-\frac{1}{2} \sqrt{1-\frac{2}{b}\bigg(1-\sqrt{1-\frac{4x}{a}}\bigg)} \leq \frac{1}{2}-\frac{1}{2} \sqrt{1-\frac{2}{b}}\leq \frac{1}{4}\leq \frac{a}{4}. \end{align*} $$

Proposition 6.17. For every $(a,b)\in [1,2]\times (4,\infty)$ , the following maps

$$ \begin{align*} &F^{a,b}_1(x)\\ &\quad:= \frac{2\big( \tanh^{-1}\big(\sqrt{1 - {2}/{b} + {2}/{b} \sqrt{1-({4x}/{b})}}\big) - \tanh^{-1}\big(\sqrt{1 - {2}/{a} +{2}/{a}\sqrt{1-({4 x}/{b})}}\big)\big) }{\sqrt{1-({4x}/{b})}} \end{align*} $$

and

$$ \begin{align*} F^{a,b}_2(x):= \frac{2 \tanh ^{-1}\big(\sqrt{{-2 +2 \sqrt{1-({4 x}/{a})}+b}/{b}}\big)+\log\big({a}/({4-a})\big)}{\sqrt{1-({4x}/{a})}} \end{align*} $$

are increasing in x in the interval $[(4a^2 -a^3)/16,a/4].$

Proof. It is readily verified that

$$ \begin{align*}&{x\mapsto \tanh^{-1}\bigg(\sqrt{1 - \frac{2}{b} + \frac{2}{b} \sqrt{1-\frac{4x}{b}}}\bigg) - \tanh^{-1}\bigg(\sqrt{1 - \frac{2}{a} +\frac{2}{a}\sqrt{1-\frac{4 x}{b}}}\bigg) }\ \quad\text{and }\\&x \mapsto \sqrt{1-\frac{4x}{b}}\end{align*} $$

are respectively increasing and decreasing for $x\in [(4a^2 -a^3)/16,a/4],$ implying that $F^{a,b}_1(x)$ is increasing in x in the interval $[(4a^2 -a^3)/16,a/4].$

In the following, we prove that $F^{a,b}_2$ is an increasing function in $[(4a^2 -a^3)/ 16,a/4].$ Through the change of coordinates $y =\sqrt {({-2 +2 \sqrt {1-4x/a}+b})/{b}},$ we obtain that to show that $F^{a,b}_2$ is an increasing function, it is enough to show that

$$ \begin{align*}F^{a,b}_3(y) = \frac{\displaystyle \log (({1+y})/({1-y})) + \log({a}/({4-a}))}{b y^2-b+2}\end{align*} $$

is decreasing in y in the interval $[\sqrt {(b-2)/b} ,\sqrt {(b-1)/b}]\supset [\sqrt {(b-2)/b} , \sqrt {(b-a)/b}].$ Since

$$ \begin{align*}\frac{\mathrm{d} F^{a,b}_3}{\mathrm{d} y}(y) = \frac{2 (b (y^2-1) y (\log ({a}/({4-a}))+\log (({1+y})/({1-y})))+b y^2-b+2)}{(1-y^2) (b (y^2-1)+2)^2},\end{align*} $$

it is enough to show that

$$ \begin{align*} b (y^2-1)\bigg( y \log \bigg(\frac{a}{4-a} \cdot \frac{1+y}{1-y}\bigg)+1\bigg)+2 &\leq b (y^2-1) \bigg(y\log \bigg( \frac{1+y}{3-3 y}\bigg)+ 1\bigg)+2 \\ &\leq 0\quad \text{for every }y\in\bigg[\sqrt{\frac{b-2}{b}} , \sqrt{\frac{b-1}{b}}\bigg]. \end{align*} $$

Observe that given $y\in [\sqrt {(b-2)/b} ,\sqrt {(b-1)/b}]\subset [\sqrt {2}/2,1],$ we obtain that

$$ \begin{align*} b y (y^2-1) \log\bigg(\frac{1+ y}{3- 3y}\bigg) \leq 0\quad \text{and}\quad \frac{1+y}{3-3y} -1 \geq 0.\end{align*} $$

From [21, equation (2)], it follows that $\log (1+x)\geq x/(1+x/2)$ for every $x\geq 0.$ Therefore,

(6.19) $$ \begin{align} &2+ b (y^2-1)+b y (y^2-1) \log \bigg(\frac{y+1}{3-3 y}\bigg)\nonumber\\ &\quad\leq 2+ b (y^2-1)+\frac{b y (({y+1})/(3-3 y)-1) (y^2-1)}{{1}/{2} (({y+1})/({3-3 y})-1)+1}\nonumber\\ &\quad=2-\frac{b (1-y^2) (4 y^2-3y+2)}{2-y}. \end{align} $$

Using standard techniques, one can check that if ${b>4}$ and $y \in [\sqrt {(b-2)/b} ,\sqrt {(b-1)/b}],$ then the right-hand side of equation (6.19) is less than or equal to $0$ , implying that $F^{a,b}_3$ is decreasing in the interval $[\sqrt {(b-2)/b} ,\sqrt {(b-1)/b}]$ for every $(a,b) \in [1,2]\times (4,\infty )$ and therefore $F^{a,b}_2$ is increasing in $[(4a^2 -a^3)/16,a/4]$ for every $(a,b)\in [1,2]\times (4,\infty )$ .
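Proposition 6.17 can be checked numerically by transcribing $F_1^{a,b}$ and $F_2^{a,b}$ and sampling the interval. In the sketch below, the pair (1, 5) and the grid are illustrative; the point $x = a/4$ is excluded because the denominator of $F_2^{a,b}$ vanishes there.

```python
import math

def F1(a, b, x):
    sb = math.sqrt(1 - 4 * x / b)
    return 2 * (math.atanh(math.sqrt(1 - 2 / b + 2 * sb / b))
                - math.atanh(math.sqrt(1 - 2 / a + 2 * sb / a))) / sb

def F2(a, b, x):
    sa = math.sqrt(1 - 4 * x / a)
    return (2 * math.atanh(math.sqrt((2 * sa + b - 2) / b))
            + math.log(a / (4 - a))) / sa

a, b = 1.0, 5.0
lo, hi = (4 * a**2 - a**3) / 16, a / 4
xs = [lo + k * (hi - lo) * 0.98 / 30 for k in range(31)]   # stop short of x = a/4
v1 = [F1(a, b, x) for x in xs]
v2 = [F2(a, b, x) for x in xs]
assert all(p <= q + 1e-9 for p, q in zip(v1, v1[1:]))      # F1 increasing
assert all(p <= q + 1e-9 for p, q in zip(v2, v2[1:]))      # F2 increasing
```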

Using the above proposition, we show that if $(a,b) \in [1,4)\times (4,\infty )$ , then $(a,b)$ is an admissible pair.

Theorem 6.18. If $(a,b) \in [1,4)\times (4,\infty )$ , then $(a,b)$ is an admissible pair.

Proof. From the definition of an admissible pair, we just need to consider the case $(a,b) \in [1,2]\times (4,\infty ).$ From Proposition 6.16, we obtain that the pair $(a,b)$ satisfies equation (6.7).

In the following, we show that the pair $(a,b)$ satisfies equation (6.8). Observe that equation (6.8) is equivalent to showing that $F^{a,b}_1(x) \leq F^{a,b}_2(x)$ for every $x\in [(4a^2 - a^3)/16,a/4]$ , where $F^{a,b}_1$ and $F^{a,b}_2$ are defined in Proposition 6.17. From Proposition 6.17, it is enough to show that

$$ \begin{align*}F_1^{a,b}\bigg(\frac{a}{4}\bigg) \leq F_2^{a,b}\bigg(\frac{4a^2 -a^3}{16}\bigg) \quad\text{for every }(a,b)\in [1,4)\times(4,\infty).\end{align*} $$

We divide the proof into two steps.

Step 1. We show that for every $b>4$ , $(1,b)$ is an admissible pair.

Note that for every $b>4,$

$$ \begin{align*} F_1^{1,b}\bigg(\frac{1}{4}\bigg) &=\frac{2 \big(\tanh ^{-1}\big(\sqrt{({b+2 \sqrt{1-{1}/{b}}-2})/{b}}\big)-\tanh ^{-1}\big(\sqrt{2 \sqrt{1-{1}/{b}}-1}\big)\big)}{\sqrt{1-{1}/{b}}}\\ &\leq 4 \tanh ^{-1}\bigg(\frac{\sqrt{({b+2 \sqrt{({b-1})/{b}}-2})/{b}}-\sqrt{2 \sqrt{({b-1})/{b}}-1}}{1- \sqrt{\big(2 \sqrt{({b-1})/{b}}-1\big)\big(({b+2 \sqrt{({b-1})/{b}}-2})/{b}\big)}}\bigg) \end{align*} $$

and

$$ \begin{align*} F_2^{1,b}\bigg(\frac{3}{16}\bigg) = 4 \tanh ^{-1}\bigg(\frac{\sqrt{1-({1}/{b})}-{1}/{2}}{1-{1}/{2} \sqrt{1-({1}/{b})}}\bigg) = 4 \tanh^{-1}\bigg(\frac{3 \sqrt{b-1} \sqrt{b}-2}{3 b+1}\bigg).\end{align*} $$
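The simplification in the last equality follows by rationalizing the nested square roots; it can also be cross-checked numerically. The sketch below (function names are our own) compares both forms of $F_2^{1,b}(3/16)$.

```python
import math

def f2_nested(b):
    # F_2^{1,b}(3/16) in nested-root form
    s = math.sqrt(1.0 - 1.0 / b)
    return 4.0 * math.atanh((s - 0.5) / (1.0 - 0.5 * s))

def f2_closed(b):
    # the simplified form 4 * atanh((3 sqrt(b-1) sqrt(b) - 2) / (3b + 1))
    return 4.0 * math.atanh((3.0 * math.sqrt(b - 1.0) * math.sqrt(b) - 2.0) / (3.0 * b + 1.0))

for b in [4.001, 4.5, 6.0, 10.0, 100.0]:
    assert abs(f2_nested(b) - f2_closed(b)) < 1e-9
```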

Since the function $x\mapsto 4 \tanh ^{-1}(x)$ is increasing, to finish the proof of this step it is enough to show that

(6.20) $$ \begin{align} &\frac{\sqrt{({b+2 \sqrt{({b-1})/{b}}-2})/{b}}-\sqrt{2 \sqrt{({b-1})/{b}}-1}}{1- \sqrt{\big(2 \sqrt{({b-1})/{b}}-1\big)\big(({b+2 \sqrt{({b-1})/{b}}-2})/{b}\big)}}\nonumber\\ &\quad\leq \frac{3 \sqrt{b-1} \sqrt{b}-2}{3 b+1}\quad \text{for every }b>4. \end{align} $$

Using standard methods, one can show that the above inequality reduces to showing that

$$ \begin{align*}p(b)&:=4239 b^6-23868 b^5+31482 b^4+8964 b^3\\ &\quad-40401 b^2+23424 b -4096 \geq 0\quad \text{for every }b>4.\end{align*} $$

Indeed, since for every $\delta>0$ ,

$$ \begin{align*}p(4+\delta) &= 4239 \delta ^6+77868 \delta ^5+571482 \delta ^4\\ &\quad+2119716 \delta ^3+4091679 \delta ^2+3683256 \delta +998384>0, \end{align*} $$

we obtain that equation (6.20) holds. This completes Step 1.
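As a numerical cross-check of Step 1 (again, not part of the proof), the sketch below, with names of our own choosing, verifies both the positivity of $p$ and the resulting bound $F_1^{1,b}(1/4) \leq F_2^{1,b}(3/16)$ for sampled $b>4$.

```python
import math

def p(b):
    # The polynomial from the reduction of inequality (6.20)
    return (4239*b**6 - 23868*b**5 + 31482*b**4 + 8964*b**3
            - 40401*b**2 + 23424*b - 4096)

def f1_1b(b):
    # F_1^{1,b}(1/4), from the first display of Step 1
    s = math.sqrt(1.0 - 1.0 / b)
    return (2.0 / s) * (math.atanh(math.sqrt((b + 2.0*s - 2.0) / b))
                        - math.atanh(math.sqrt(2.0*s - 1.0)))

def f2_1b(b):
    # F_2^{1,b}(3/16), simplified closed form
    return 4.0 * math.atanh((3.0*math.sqrt(b - 1.0)*math.sqrt(b) - 2.0) / (3.0*b + 1.0))

for b in [4.001, 4.5, 6.0, 10.0, 100.0]:
    assert p(b) > 0
    assert f1_1b(b) <= f2_1b(b)
```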

Step 2. We show that $(a,b)$ is an admissible pair for every $(a,b)\in [1,2)\times (4,\infty ).$

Fixing $b>4,$ observe that

$$ \begin{align*}&(2-a) F_1^{a,b}\bigg(\frac{a}{4}\bigg)\\ &\quad= 2\frac{2-a}{\sqrt{1-({a}/{b})}} \bigg(\tanh ^{-1}\bigg(\sqrt{\frac{2 \sqrt{1-({a}/{b})}+b-2}{b}}\bigg)-\tanh ^{-1}\bigg(\sqrt{\frac{2 \sqrt{1-({a}/{b})}+a-2}{a}}\bigg)\bigg) \end{align*} $$

and

$$ \begin{align*} (2-a) F_2^{a,b}\bigg(\frac{4a^2 - a^3}{16}\bigg) = 2 \bigg(2 \tanh ^{-1}\bigg(\sqrt{\frac{b-a}{b}}\bigg)+\log \bigg(\frac{a}{4-a}\bigg)\bigg).\end{align*} $$

It is readily verified that

$$ \begin{align*}\frac{2-a}{\sqrt{1-({a}/{b})}}\quad \text{and}\quad\tanh ^{-1}\bigg(\sqrt{\frac{2 \sqrt{1-({a}/{b})}+b-2}{b}}\bigg)-\tanh ^{-1}\bigg(\sqrt{\frac{2 \sqrt{1-({a}/{b})}+a-2}{a}}\bigg) \end{align*} $$

are decreasing functions in $a\in [1,2),$ implying that $(2-a) F_1^{a,b}(a/4)$ is a decreasing function in $a\in [1,2)$ and

$$ \begin{align*} (2-a) F_2^{a,b} ((4a^2 - a^3)/16) = 2 \bigg(2 \tanh ^{-1}\bigg(\sqrt{\frac{b-a}{b}}\bigg)+\log \bigg(\frac{a}{4-a}\bigg)\bigg)\end{align*} $$

is an increasing function in $a \in [1,2)$ . From Step $1$ , we obtain that for every $a\in [1,2),$

$$ \begin{align*}F_1^{a,b}\bigg(\frac{a}{4}\bigg)=\frac{(2-a)F_1^{a,b}({a}/{4})}{2-a} \leq \frac{ F_1^{1,b}(1/4)}{2-a}\leq \frac{F_2^{1,b}(3/16)}{2-a} \leq F_2^{a,b}\bigg(\frac{4a^2-a^3}{16}\bigg).\end{align*} $$

This completes the proof of the theorem.
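As with Step 1, the inequality $F_1^{a,b}(a/4) \leq F_2^{a,b}((4a^2-a^3)/16)$ underlying Step 2 can be cross-checked numerically from the closed forms above. The sketch below (names our own) samples pairs $(a,b) \in [1,2)\times(4,\infty)$.

```python
import math

def F1(a, b):
    # F_1^{a,b}(a/4), using the closed form displayed in Step 2
    s = math.sqrt(1.0 - a / b)
    return (2.0 / s) * (math.atanh(math.sqrt((2.0*s + b - 2.0) / b))
                        - math.atanh(math.sqrt((2.0*s + a - 2.0) / a)))

def F2(a, b):
    # F_2^{a,b}((4a^2 - a^3)/16), using the closed form displayed in Step 2
    return 2.0 * (2.0 * math.atanh(math.sqrt((b - a) / b))
                  + math.log(a / (4.0 - a))) / (2.0 - a)

for a in [1.0, 1.25, 1.5, 1.75, 1.99]:
    for b in [4.001, 4.5, 6.0, 10.0, 100.0]:
        assert F1(a, b) <= F2(a, b), (a, b)
```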

We finish the paper by proving Theorem 2.1.

Proof of Theorem 2.1

The theorem follows directly from Theorems 6.15 and 6.18.

Acknowledgements

We are grateful to Jochen Glück for many useful discussions and valuable comments regarding the theory of positive integral operators. MC’s research has been supported by an Imperial College President’s PhD scholarship. MC, VG and JL are also supported by the EPSRC Centre for Doctoral Training in Mathematics of Random Systems: Analysis, Modelling and Simulation (EP/S023925/1). JL gratefully acknowledges research support from IRCN, University of Tokyo and CAMB, Gulf University of Science and Technology, as well as from the London Mathematical Laboratory.

References

Abbasi, N., Gharaei, M. and Homburg, A. J.. Iterated function systems of logistic maps: synchronization and intermittency. Nonlinearity 31(8) (2018), 3880–3913.
Athreya, K. B. and Dai, J.. Random logistic maps. I. J. Theoret. Probab. 13(2) (2000), 595–608.
Benaïm, M., Champagnat, N., Oçafrain, W. and Villemonais, D.. Degenerate processes killed at the boundary of a domain. Preprint, 2021, arXiv:2103.08534.
Breyer, L. A. and Roberts, G. O.. A quasi-ergodic theorem for evanescent processes. Stochastic Process. Appl. 84(2) (1999), 177–186.
Brezis, H.. Functional Analysis, Sobolev Spaces and Partial Differential Equations (Universitext). Springer, New York, 2011.
Castro, M. M., Chemnitz, D., Chu, H., Engel, M., Lamb, J. S. W. and Rasmussen, M.. The Lyapunov spectrum for conditioned random dynamical systems. Ann. Henri Poincaré, to appear.
Castro, M. M., Lamb, J. S. W., Olicón-Méndez, G. and Rasmussen, M.. Existence and uniqueness of quasi-stationary and quasi-ergodic measures for absorbing Markov processes: a Banach lattice approach. Preprint, 2022, arXiv:2111.13791.
Champagnat, N. and Villemonais, D.. Exponential convergence to quasi-stationary distribution and Q-process. Probab. Theory Related Fields 164(1–2) (2016), 243–283.
Darroch, J. N. and Seneta, E.. On quasi-stationary distributions in absorbing discrete-time finite Markov chains. J. Appl. Probab. 2(1) (1965), 88–100.
Eisner, T., Farkas, B., Haase, M. and Nagel, R.. Operator Theoretic Aspects of Ergodic Theory (Graduate Texts in Mathematics, 272). Springer, Cham, 2015.
Engel, M., Lamb, J. S. W. and Rasmussen, M.. Conditioned Lyapunov exponents for random dynamical systems. Trans. Amer. Math. Soc. 372(9) (2019), 6343–6370.
Foguel, S. R.. The Ergodic Theory of Markov Processes (Van Nostrand Mathematical Studies, 21). Van Nostrand Reinhold Co., New York–Toronto, ON–London, 1969.
Folland, G. B.. Real Analysis: Modern Techniques and Their Applications (Pure and Applied Mathematics), 2nd edn. John Wiley & Sons, Inc., New York, 1999; A Wiley-Interscience Publication.
Gerlach, M. and Glück, J.. Convergence of positive operator semigroups. Trans. Amer. Math. Soc. 372(9) (2019), 6603–6627.
Glück, J.. Aperiodicity of positive operators that increase the support of functions. Preprint, 2022, arXiv:2209.01171.
Glück, J. and Haase, M.. Asymptotics of operator semigroups via the semigroup at infinity. Positivity and Noncommutative Analysis (Trends in Mathematics). Birkhäuser/Springer, Cham, 2019, pp. 167–203.
Jakobson, M. V.. Absolutely continuous invariant measures for one-parameter families of one-dimensional maps. Comm. Math. Phys. 81(1) (1981), 39–88.
Krengel, U.. Ergodic Theorems (De Gruyter Studies in Mathematics, 6). Walter de Gruyter, Berlin, 2011.
Lasota, A. and Mackey, M. C.. Chaos, Fractals, and Noise: Stochastic Aspects of Dynamics (Applied Mathematical Sciences, 97). Springer Science & Business Media, New York, 2013.
Lin, M.. On weakly mixing Markov operators and non-singular transformations. Z. Wahrsch. Verwandte Gebiete 55(2) (1981), 231–236.
Love, E. R.. 64.4 Some logarithm inequalities. Math. Gaz. 64(427) (1980), 55–57.
Meyer-Nieberg, P.. Banach Lattices. Springer Science & Business Media, Berlin–Heidelberg, 2012.
Oçafrain, W.. Quasi-stationarity and quasi-ergodicity for discrete-time Markov chains with absorbing boundaries moving periodically. ALEA Lat. Am. J. Probab. Math. Stat. 15(1) (2018), 429–451.
Oikhberg, T. and Troitsky, V. G.. A theorem of Krein revisited. Rocky Mountain J. Math. 35(1) (2005), 195–210.
Pflug, G.. Banach lattices and positive operators – H. H. Schaefer. Metrika 23 (1976), 193.
Pollett, P. K.. Quasi-stationary distributions: a bibliography, 2008. Available at https://people.smp.uq.edu.au/PhilipPollett/papers/qsds/qsds.pdf.
Rogers, L. C. G. and Williams, D.. Diffusions, Markov Processes and Martingales, Volume 1: Foundations (Cambridge Mathematical Library, 7). John Wiley & Sons, Ltd., Chichester, 1994.
Viana, M. and Oliveira, K.. Foundations of Ergodic Theory (Cambridge Studies in Advanced Mathematics, 151). Cambridge University Press, Cambridge, 2016.
Zhang, J., Li, S. and Song, R.. Quasi-stationarity and quasi-ergodicity of general Markov processes. Sci. China Math. 57(10) (2014), 2013–2024.
Zmarrou, H. and Homburg, A. J.. Bifurcations of stationary measures of random diffeomorphisms. Ergod. Th. & Dynam. Sys. 27(5) (2007), 1651–1692.