
On the dimension of stationary measures for random piecewise affine interval homeomorphisms

Published online by Cambridge University Press:  04 August 2023

KRZYSZTOF BARAŃSKI*
Affiliation:
Institute of Mathematics, University of Warsaw, ul. Banacha 2, 02-097 Warszawa, Poland
ADAM ŚPIEWAK
Affiliation:
Department of Mathematics, Bar-Ilan University, Ramat-Gan 5290002, Israel Institute of Mathematics, Polish Academy of Sciences, ul. Śniadeckich 8, 00-656 Warszawa, Poland (e-mail: [email protected])

Abstract

We study stationary measures for iterated function systems (considered as random dynamical systems) consisting of two piecewise affine interval homeomorphisms, called Alsedà–Misiurewicz (AM) systems. We prove that for an open set of parameters, the unique non-atomic stationary measure for an AM system has Hausdorff dimension strictly smaller than $1$. In particular, we obtain singularity of these measures, answering partially a question of Alsedà and Misiurewicz [Random interval homeomorphisms. Publ. Mat. 58(suppl.) (2014), 15–36].

Type
Original Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press

1 Introduction

In recent years, a growing interest in low-dimensional random dynamics has led to an intensive study of random one-dimensional systems given by (semi)groups of interval and circle homeomorphisms, both from the stochastic and geometric points of view (see e.g. [Reference Alsedà and Misiurewicz1, Reference Czudek and Szarek7, Reference Czudek, Szarek and Wojewódka-Ścia̧żko8, Reference Gelfert and Stenflo10Reference Gharaei and Homburg12, Reference Łuczyńska16Reference Malicet18, Reference Szarek and Zdunik23, Reference Szarek and Zdunik24]). This can be seen as an extension of the research on the well-known case of groups of smooth circle diffeomorphisms (see e.g. [Reference Ghys13, Reference Navas19]).

Let $f_1, \ldots , f_m$ , $m \geq 2$ , be homeomorphisms of a $1$ -dimensional compact manifold X (a closed interval or a circle). The transformations $f_i$ generate a semigroup consisting of iterates $f_{i_n} \circ \cdots \circ f_{i_1}$ , where $i_1, \ldots , i_n \in \{1, \ldots , m\}$ , $n \in \{0, 1, 2,\ldots \}$ . For a probability vector $(p_1, \ldots , p_m)$ , such a system defines a Markov process on X which, by the Krylov–Bogolyubov theorem, admits a (non-necessarily unique) stationary measure, i.e. a Borel probability measure $\mu $ on X satisfying

$$ \begin{align*} \mu(A) = \sum_{i = 1}^m p_i \mu(f_i^{-1}(A)) \end{align*} $$

for every Borel set $A \subset X$ . In many cases, it can be shown that the stationary measure is unique (at least within some class of measures) and is either absolutely continuous or singular with respect to the Lebesgue measure. It is usually a non-trivial problem to determine which of the two cases occurs (see e.g. [Reference Navas20, §7]), and the question has been solved only in some particular cases.

This paper is a continuation of the research started in [Reference Barański and Śpiewak2] on singular stationary measures for so-called Alsedà–Misiurewicz systems (AM systems), defined in [Reference Alsedà and Misiurewicz1]. These are random systems generated by two piecewise affine increasing homeomorphisms $f_-$ , $f_+$ of the unit interval $[0,1]$ , such that $f_i(0) = 0$ , $f_i(1) = 1$ for $i = -, +$ , each $f_i$ has exactly one point of non-differentiability $x_i \in (0,1)$ and $f_-(x) < x < f_+(x)$ for $x \in (0,1)$ . For a detailed description of AM systems, refer to [Reference Barański and Śpiewak2]. The dynamics of AM systems and related ones has already gained some interest in recent years, being studied in e.g. [Reference Alsedà and Misiurewicz1Reference Czudek6, Reference Toyokawa25]. Within the class of uniformly contracting iterated function systems, piecewise linear maps and the dimension of their attractors were recently studied in [Reference Prokaj and Simon22].

In this paper, as explained below, we study stationary measures for symmetric AM systems with positive endpoint Lyapunov exponents.

Definition 1.1. A symmetric AM system is the system $\{f_-, f_+\}$ of increasing homeomorphisms of the interval $[0,1]$ of the form

$$ \begin{align*} f_-(x)= \begin{cases} a x &\text{for }x\in[0,x_-],\\ 1 - b(1 - x) &\text{for }x\in (x_-, 1], \end{cases}\! \quad f_+(x)= \begin{cases} b x &\text{for }x\in[0,x_+],\\ 1 - a(1 - x) &\text{for }x\in (x_+, 1], \end{cases}\! \end{align*} $$

where $0 < a < 1 < b$ and

$$ \begin{align*} x_- = \frac{b - 1}{b - a}, \quad x_+ = \frac{1 - a}{b - a}. \end{align*} $$

See Figure 1.

Figure 1 An example of a symmetric AM system.
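The defining formulas in Definition 1.1 are easy to verify numerically. The following sketch (with illustrative parameters $a = 0.5$, $b = 2$, chosen here only for demonstration) checks that each map fixes the endpoints, that the two affine branches agree at the breakpoints $x_-$, $x_+$, and that $f_-(x) < x < f_+(x)$ on $(0,1)$.

```python
# Numerical sanity check of Definition 1.1 with illustrative
# parameters a = 0.5, b = 2 (any 0 < a < 1 < b would do).
a, b = 0.5, 2.0
x_minus = (b - 1) / (b - a)   # breakpoint of f_-
x_plus = (1 - a) / (b - a)    # breakpoint of f_+

def f_minus(x):
    return a * x if x <= x_minus else 1 - b * (1 - x)

def f_plus(x):
    return b * x if x <= x_plus else 1 - a * (1 - x)

# Endpoints are fixed and the two branches agree at the breakpoints.
assert f_minus(0.0) == 0.0 and f_minus(1.0) == 1.0
assert f_plus(0.0) == 0.0 and f_plus(1.0) == 1.0
assert abs(a * x_minus - (1 - b * (1 - x_minus))) < 1e-12
assert abs(b * x_plus - (1 - a * (1 - x_plus))) < 1e-12

# f_-(x) < x < f_+(x) on (0,1).
for k in range(1, 1000):
    x = k / 1000
    assert f_minus(x) < x < f_plus(x)
```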

We consider $\{f_-, f_+\}$ as a random dynamical system, which means that iterating the maps, we choose them independently with probabilities $p_-, p_+$ , where $(p_-, p_+)$ is a given probability vector (i.e. $p_-, p_+> 0$ , $p_- + p_+ = 1$ ). Formally, this defines the step skew product

(1.1) $$ \begin{align} {\mathcal F}^+\colon \Sigma_2^+ \times [0,1] \to \Sigma_2^+ \times [0,1], \quad {\mathcal F}^+({\underline{i}}, x) = (\sigma({\underline{i}}), f_{i_1}(x)), \end{align} $$

where $\Sigma _2^+ = \{-,+\}^{\mathbb N},\ {\underline {i}} = (i_1, i_2, \ldots ) \in \Sigma _2^+$ and $\sigma \colon \Sigma _2^+ \to \Sigma _2^+$ is the left-side shift.

The endpoint Lyapunov exponents of an AM system $\{f_-, f_+\}$ are defined as

$$ \begin{align*} \Lambda(0) = p_- \log f_-'(0) + p_+ \log f_+'(0),\quad \Lambda(1) = p_- \log f_-'(1) + p_+ \log f_+'(1). \end{align*} $$
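Here the one-sided derivatives are the branch slopes: $f_-'(0) = a$, $f_+'(0) = b$, $f_-'(1) = b$, $f_+'(1) = a$. Anticipating the parametrization $b = a^{-\gamma}$ used in §2, one can check numerically (for illustrative parameter values, not taken from the paper) that the exponents reduce to $(p_- - \gamma p_+)\log a$ and $(p_+ - \gamma p_-)\log a$, and that both are positive exactly when $\gamma > \max(p_-/p_+,\, p_+/p_-)$.

```python
import math

# Endpoint Lyapunov exponents for illustrative parameters
# (a, gamma, p_-, p_+); these specific values are not from the paper.
a, gamma = 0.1, 1.3
b = a ** (-gamma)
p_minus, p_plus = 0.5, 0.5

# Slopes at the endpoints: f_-'(0) = a, f_+'(0) = b, f_-'(1) = b, f_+'(1) = a.
Lambda0 = p_minus * math.log(a) + p_plus * math.log(b)
Lambda1 = p_minus * math.log(b) + p_plus * math.log(a)

# With b = a^{-gamma} these reduce to (p_- - gamma p_+) log a etc.
assert abs(Lambda0 - (p_minus - gamma * p_plus) * math.log(a)) < 1e-12
assert abs(Lambda1 - (p_plus - gamma * p_minus) * math.log(a)) < 1e-12

# Both exponents are positive iff gamma > max(p_-/p_+, p_+/p_-).
assert (Lambda0 > 0 and Lambda1 > 0) == \
    (gamma > max(p_minus / p_plus, p_plus / p_minus))
```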

It is known (see [Reference Alsedà and Misiurewicz1, Reference Gharaei and Homburg12]) that if the endpoint Lyapunov exponents are both positive, then the AM system exhibits the synchronization property, i.e. for almost all paths $(i_1, i_2, \ldots ) \in \{-, +\}^{\mathbb N}$ (with respect to the $(p_-, p_+)$ -Bernoulli measure), we have $|f_{i_n} \circ \cdots \circ f_{i_1}(x) - f_{i_n} \circ \cdots \circ f_{i_1}(y)| \to 0$ as $n \to \infty $ for every $x,y \in [0,1]$ . Moreover, in this case, there exists a unique stationary measure $\mu $ without atoms at the endpoints of $[0,1]$ , i.e. a Borel probability measure $\mu $ on $[0,1]$ , such that

$$ \begin{align*} \mu = p_- \, (f_-)_* \mu + p_+ \, (f_+)_* \mu, \end{align*} $$

with $\mu (\{0,1\}) = 0$ (see [Reference Alsedà and Misiurewicz1], [Reference Gharaei and Homburg11, Proposition 4.1], [Reference Gharaei and Homburg12, Lemmas 3.2–3.4] and, for a more general case, [Reference Czudek and Szarek7, Theorem 1]). From now on, by a stationary measure for an AM system, we will always mean the measure $\mu $ . It is known that $\mu $ is non-atomic and is either absolutely continuous or singular with respect to the Lebesgue measure (see [Reference Barański and Śpiewak2, Propositions 3.10 and 3.11]).

In [Reference Alsedà and Misiurewicz1], Alsedà and Misiurewicz conjectured that the stationary measure $\mu $ for an AM system should be singular for typical parameters. In our previous paper [Reference Barański and Śpiewak2], we showed that there exist parameters $a,b, (p_-, p_+)$ , for which $\mu $ is singular with Hausdorff dimension smaller than $1$ (see [Reference Barański and Śpiewak2, Theorems 2.10 and 2.12]). These examples can be found among AM systems with resonant parameters, that is, those with ${\log a}/{\log b} \in {\mathbb Q}$ . In most of the examples, the measure $\mu $ is supported on an exceptional minimal set, which is a Cantor set of dimension smaller than $1$ (although we also have found examples of singular stationary measures with the support equal to the unit interval, see [Reference Barański and Śpiewak2, Theorem 2.16]).

In this paper, already announced in [Reference Barański and Śpiewak2], we make a subsequent step to answer the Alsedà and Misiurewicz question, showing that the stationary measure $\mu $ is singular for an open set of parameters $(a,b)$ and probability vectors $(p_-, p_+)$ . In particular, we find non-resonant parameters (i.e. those with ${\log a}/{\log b} \notin {\mathbb Q}$ ), for which the corresponding stationary measure is singular (note that non-resonant AM systems necessarily have stationary measures with support equal to $[0,1]$ , see [Reference Barański and Śpiewak2, Proposition 2.6]). To prove the result, we present another method to verify singularity of stationary measures for AM systems. Namely, instead of constructing a measure supported on a set of small dimension, we use the well-known bound on the dimension of stationary measure

$$ \begin{align*} \dim_H \mu \leq -\frac{H(p_-, p_+)}{\chi(\mu)}, \end{align*} $$

in terms of its entropy

$$ \begin{align*}H(p_-, p_+) = -p_-\log p_- - p_+\log p_+\end{align*} $$

and the Lyapunov exponent

$$ \begin{align*} \chi(\mu) = \int \limits_{[0,1]} ( p_- \log f'_-(x) + p_+ \log f'_+(x) )\,d\mu(x), \end{align*} $$

proved in [Reference Jaroszewska and Rams15] in a very general setting. We find an open set of parameters for which the Lyapunov exponent is small enough (hence the average contraction is strong enough) to guarantee $\dim _{H}\mu < 1$ . The upper bound on the Lyapunov exponent follows from estimates of the expected return time to the interval

$$ \begin{align*} M = [x_+, x_-]. \end{align*} $$

Remark 1.1. One should note that the question of Alsedà and Misiurewicz has been answered when considered within a much broader class of general random interval homeomorphisms with positive endpoint Lyapunov exponents [Reference Bradík and Roth3, Reference Czernous and Szarek5] and minimal random homeomorphisms of the circle [Reference Czernous4]. More precisely, Czernous and Szarek considered in [Reference Czernous and Szarek5] the closure $\overline {\mathcal {G}}$ of the space $\mathcal {G}$ of all random systems $((g_-, g_+), (p_-, p_+))$ of absolutely continuous, increasing homeomorphisms $g_-, g_+$ of $[0,1]$ , taken with probabilities $p_-, p_+$ , such that $g_-, g_+$ are $C^1$ in some fixed neighbourhoods of $0$ and $1$ , have positive endpoint Lyapunov exponents and satisfy $g_-(x) < x < g_+(x)$ for $x \in (0,1)$ . In [Reference Czernous and Szarek5, Theorem 10], they proved that for a generic system in $\overline {\mathcal {G}}$ (in the Baire category sense under a natural topology), the unique non-atomic stationary measure is singular. This result was extended by Bradík and Roth in [Reference Bradík and Roth3, Theorem 6.2], where they allowed the functions to be only differentiable at $0,1$ , and showed that in addition to being singular, the stationary measure has typically full support. Similar results were obtained by Czernous [Reference Czernous4] for minimal systems on the circle. However, as the finite-dimensional space of AM systems is meagre as a subset of the spaces considered in [Reference Bradík and Roth3Reference Czernous and Szarek5], these results give no information on the singularity of stationary measures for typical AM systems.

2 Results

We adopt a convenient notation

$$ \begin{align*} b = a^{-\gamma} \end{align*} $$

for $a \in (0,1)$ , $\gamma> 0$ and

$$ \begin{align*} {\mathcal I}\colon [0,1] \to [0,1], \quad {\mathcal I}(x) = 1 - x, \end{align*} $$

so that a symmetric AM system has the form

(2.1) $$ \begin{align}\kern-11pt f_-(x)= \begin{cases} a x &\text{for }x\in[0,x_-],\\ {\mathcal I}(a^{-\gamma}{\mathcal I}(x)) &\text{for }x\in (x_-, 1], \end{cases}\! \quad f_+(x)= \begin{cases} a^{-\gamma} x &\text{for }x\in[0,x_+],\\ {\mathcal I}(a{\mathcal I}(x)) &\text{for }x\in (x_+, 1], \end{cases}\! \end{align} $$

where

$$ \begin{align*} x_- = \frac{a^{-\gamma} - 1}{a^{-\gamma} - a}, \quad x_+ = \frac{1 - a}{a^{-\gamma} - a}. \end{align*} $$

By definition, we have

(2.2) $$ \begin{align} f_\pm = {\mathcal I} \circ f_\mp \circ {\mathcal I}^{-1} = {\mathcal I} \circ f_\mp \circ {\mathcal I}. \end{align} $$

Under this notation, the endpoint Lyapunov exponents for the system in equation (2.1) and a probability vector $(p_-, p_+)$ are given by

$$ \begin{align*} \Lambda(0) = (p_- - \gamma p_+)\log a, \quad \Lambda(1) = (p_+ - \gamma p_-)\log a. \end{align*} $$

Throughout the paper, we assume that $\Lambda (0)$ and $\Lambda (1)$ are positive, which is equivalent to

(2.3) $$ \begin{align} \gamma> \max \bigg(\frac{p_-}{p_+}, \frac{p_+}{p_-}\bigg). \end{align} $$

In particular, we have $\gamma> 1$ . Note that this implies

(2.4) $$ \begin{align} x_+ < x_-. \end{align} $$

Indeed, if $\gamma> 1$ , then the endpoint Lyapunov exponents for $p_- = p_+ = 1/2$ are positive, so equation (2.4) follows from [Reference Barański and Śpiewak2, Lemma 4.1].

The aim of this paper is to prove the following theorem.

Theorem 2.1. Consider the space of symmetric AM systems $\{f_-, f_+\}$ of the form in equation (2.1) with positive endpoint Lyapunov exponents. Then there is a non-empty open set of parameters $(a,\gamma ) \in (0,1) \times (1, \infty )$ and probability vectors $(p_-, p_+)$ , such that the corresponding stationary measure $\mu $ for the system $\{f_-, f_+\}$ is singular with Hausdorff dimension smaller than $1$ . More precisely, there exists $\delta> 0$ such that for every $(p_-, p_+)$ with $p_-, p_+ < \tfrac 12 + \delta $ , there is a non-empty open interval $J_{p_-,p_+} \subset (1, 3/2)$ , depending continuously on $(p_-, p_+)$ , such that for every $\gamma \in J_{p_-,p_+}$ and $a \in (0, a_{max})$ , where $a_{max} = a_{max}(\gamma )> 0$ depends continuously on $\gamma $ , we have

$$ \begin{align*} \dim_H \mu \leq \frac{p\log p + (1-p)\log (1-p)}{(1 - {(1+\gamma)p^2(p+\gamma)}/({\gamma - p(1-p)})) \log a} < 1, \end{align*} $$

where $p = \max (p_-, p_+)$ .

In particular, in the case $(p_-, p_+) = (\tfrac 12,\tfrac 12)$ , we have

$$ \begin{align*} \dim_H \mu \leq \frac{(1 - 4 \gamma) \log 2}{(\gamma - 1)(3/2 - \gamma) \log a} < 1 \end{align*} $$

for $\gamma \in (1,3/2)$ , $a \in (0, 2^{({1 - 4\gamma })/{((\gamma - 1)(3/2 - \gamma ))}})$ .
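As a numerical illustration of this symmetric case (not part of the proof), one can evaluate the bound for a sample $\gamma \in (1, 3/2)$. The sketch below uses $\gamma = 1.25$, an illustrative choice; note how small the admissible threshold $a_{max}$ is (about $2^{-64} \approx 5.4 \cdot 10^{-20}$ for this $\gamma$).

```python
import math

# Evaluate the dimension bound of Theorem 2.1 in the symmetric case
# p_- = p_+ = 1/2 for a sample gamma in (1, 3/2); gamma = 1.25 is an
# illustrative choice, not a value singled out by the paper.
gamma = 1.25
assert 2 * gamma**2 - 5 * gamma + 3 < 0          # i.e. gamma in (1, 3/2)

# Threshold a_max = 2^{(1 - 4 gamma)/((gamma - 1)(3/2 - gamma))}.
exponent = (1 - 4 * gamma) / ((gamma - 1) * (1.5 - gamma))
a_max = 2.0 ** exponent
assert a_max < 0.5

a = a_max / 10                                   # any a in (0, a_max)
dim_bound = (1 - 4 * gamma) * math.log(2) / \
    ((gamma - 1) * (1.5 - gamma) * math.log(a))
assert 0 < dim_bound < 1
```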

Remark 2.2. The range of probability vectors $(p_-, p_+)$ for which we obtain the singularity of $\mu $ for a non-empty open set of parameters $a, \gamma $ , is rather small. As the proof of Theorem 2.1 shows, suitable conditions for the possible values of $p = \max (p_-, p_+)$ are given by the inequalities in equations (4.2) and (4.4). Solving them, we obtain $p \in [\tfrac 12, p_0)$ , where $p_0 = 0.503507\ldots $ is the smaller of the two real roots of the polynomial $p^6-2p^5+5p^4-6p^3-2p^2+1$ . As p varies from $\tfrac 1 2$ to $p_0$ , the range of allowable parameters $\gamma $ shrinks from the interval $(1, 3/2)$ to a singleton. For such values of p and $\gamma $ , the measure $\mu $ is singular for sufficiently small $a> 0$ . See Figure 2.

Figure 2 The range of parameters $p = \max (p_-, p_+)$ and $\gamma $ , for which the stationary measure $\mu $ for the system in equation (2.1) is singular for sufficiently small $a> 0$ .
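The critical value $p_0$ quoted in Remark 2.2 can be located by bisection on the stated polynomial; a minimal numerical sketch:

```python
# Locate the root p_0 of p^6 - 2p^5 + 5p^4 - 6p^3 - 2p^2 + 1 just
# above 1/2 by bisection (cf. Remark 2.2, where p_0 = 0.503507...).
def q(p):
    return p**6 - 2 * p**5 + 5 * p**4 - 6 * p**3 - 2 * p**2 + 1

lo, hi = 0.5, 0.51
assert q(lo) > 0 > q(hi)          # sign change brackets the root
for _ in range(60):
    mid = (lo + hi) / 2
    if q(mid) > 0:
        lo = mid
    else:
        hi = mid
p0 = (lo + hi) / 2
assert abs(p0 - 0.503507) < 1e-4
```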

Remark 2.3. Every system of the form in equation (2.1) with $a < \tfrac 12$ is of disjoint type in the sense of [Reference Barański and Śpiewak2, Definition 2.3], i.e. $f_-(x_-) < f_+(x_+)$ . Indeed, for $a < \tfrac 12$ , we have

$$ \begin{align*}2a^{1-\gamma} < a^{-\gamma} < a^{-\gamma} +a, \end{align*} $$

so $a^{1-\gamma } -a < a^{-\gamma } - a^{1-\gamma }$ and

$$ \begin{align*} f_-(x_-) = a \frac{a^{-\gamma} - 1}{a^{-\gamma} - a} < a^{-\gamma} \frac{1 - a}{a^{-\gamma} - a} = f_+(x_+). \end{align*} $$

Since a simple calculation shows $2^{({1 - 4\gamma })/{((\gamma - 1)(3/2 - \gamma ))}} < \tfrac 1 2$ for $\gamma \in (1,3/2)$ , we see that all the systems with the probability vector $(p_-, p_+) = (\tfrac 12,\tfrac 12)$ covered by Theorem 2.1 are of disjoint type.
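The disjoint-type inequality of Remark 2.3 can also be confirmed numerically; the sketch below checks $f_-(x_-) < f_+(x_+)$ over a small grid of illustrative parameters with $a < \tfrac 12$.

```python
# Check f_-(x_-) = a x_- < b x_+ = f_+(x_+) (disjoint type) over a grid
# of illustrative parameters with a < 1/2, as in Remark 2.3.
for gamma in [1.1, 1.25, 1.4, 2.0]:
    for a in [0.05, 0.2, 0.4, 0.49]:
        b = a ** (-gamma)
        x_minus = (b - 1) / (b - a)
        x_plus = (1 - a) / (b - a)
        assert a * x_minus < b * x_plus   # f_-(x_-) < f_+(x_+)
```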

Remark 2.4. Since the conditions used in the proof of Theorem 2.1 to obtain the singularity of $\mu $ define open sets in the space of system parameters, it follows that the singularity of the stationary measure holds also for non-symmetric AM systems with parameters close enough to those covered by Theorem 2.1. We leave the details to the reader.

3 Preliminaries

We state some standard results from probability and ergodic theory, which we will use in the proofs.

Theorem 3.1. (Hoeffding’s inequality)

Let $X_1, \ldots , X_n$ be independent bounded random variables and let $S_n = X_1 + \cdots + X_n$ . Then for every $t> 0$ ,

$$ \begin{align*} {\mathbb P}(S_{n}-\mathbb {E} S_n\geq t)\leq \exp \bigg(-{\frac {2t^{2}}{\sum _{j=1}^{n}(\sup X_j - \inf X_j)^{2}}}\bigg). \end{align*} $$

Theorem 3.2. (Wald’s identity)

Let $X_1, X_2, \ldots $ be independent identically distributed random variables with finite expected value and let N be a stopping time with $\mathbb {E} N < \infty $ . Then,

$$ \begin{align*} \mathbb{E} (X_1 + \cdots + X_N) = \mathbb{E} N \: \mathbb{E} X_1. \end{align*} $$

Theorem 3.3. (Kac’s lemma)

Let $F\colon X \to X$ be a measurable $\mu $ -invariant ergodic transformation of a probability space $(X, \mu )$ and let $A \subset X$ be a measurable set with $\mu (A)> 0$ . Then,

$$ \begin{align*} \int \limits_A n_A \, d\mu_A = \frac{1}{\mu(A)}, \end{align*} $$

where

$$ \begin{align*} n_A \colon X \to {\mathbb N} \cup \{\infty\}, \quad n_A(x) = \inf\{n \ge 1: F^n(x) \in A\} \end{align*} $$

is the first return time to A and $\mu _A = \frac {1}{\mu (A)}\, \mu |_A$ .

For the proofs of these results, refer respectively to [Reference Hoeffding14, Theorem 2], [Reference Feller9, Ch. XII, Theorem 2] and [Reference Petersen21, Theorem 4.6].

4 Proofs

As noted in the introduction, the proof of Theorem 2.1 is based on an upper bound on the Hausdorff dimension of a stationary measure in terms of its entropy and Lyapunov exponent, in a version proved by Jaroszewska and Rams in [Reference Jaroszewska and Rams15, Theorem 1]. Consider a symmetric AM system $\{f_-, f_+\}$ of the form in equation (2.1) with positive endpoint Lyapunov exponents for some probability vector $(p_-,p_+)$ , and its stationary measure $\mu $ . Recall that the entropy of $(p_-,p_+)$ is defined by

$$ \begin{align*}H(p_-, p_+) = -p_-\log p_- - p_+\log p_+,\end{align*} $$

while

$$ \begin{align*} \chi(\mu) = \int \limits_{[0,1]} ( p_- \log f'_-(x) + p_+ \log f'_+(x) )\,d\mu(x) \end{align*} $$

is the Lyapunov exponent of $\mu $ . As $\mu $ is non-atomic (see [Reference Barański and Śpiewak2, Proposition 3.11]) and $f_-, f_+$ are differentiable everywhere except for the points $x_-, x_+$ , the Lyapunov exponent $\chi (\mu )$ is well defined. Moreover, $\mu $ is ergodic (see [Reference Gharaei and Homburg12, Lemmas 3.2 and 3.4]). It follows that we can use [Reference Jaroszewska and Rams15, Theorem 1] which asserts that

(4.1) $$ \begin{align} \dim_H \mu \leq -\frac{H(p_-, p_+)}{\chi(\mu)} \end{align} $$

as long as $\chi (\mu ) < 0$ .

Now we proceed with the details. Let

$$ \begin{align*} M = [x_+, x_-], \quad L = [x_+, f_-^{-1}(x_+)),\quad R={\mathcal I}(L) = (f_+^{-1}(x_-), x_-]. \end{align*} $$

It follows from equation (2.4) that these intervals are well defined. Note that $M, L, R$ depend on parameters a and $\gamma $ , but we suppress this dependence in the notation. To estimate the Hausdorff dimension of $\mu $ , we find an upper bound for $\chi (\mu )$ in terms of $\mu (M)$ and estimate $\mu (M)$ from below. To this aim, we need the disjointness of the intervals $L,R$ . The following lemma provides the range of parameters for which this condition holds.

Lemma 4.1. The following assertions are equivalent:

  1. (a) $\overline L \cap \overline R = \emptyset $ ;

  2. (b) $x_+ < f_-(\tfrac 12)$ ;

  3. (c) $\gamma>1 - {\log (a^2 - 2a +2)}/{\log a}$ .

Proof. By equation (2.4) and the fact that $x_+ = {\mathcal I}(x_-)$ , we have $\tfrac 12 < x_-$ , so $f_-(\tfrac 12) = {a}/{2}$ and condition (b) becomes $x_+ < {a}/{2}$ . Then a direct computation yields the equivalence of conditions (b) and (c). Furthermore, by equation (2.4), condition (a) holds if and only if $f_-^{-1}(x_+) < f_+^{-1}(x_-)$ . As $f_-\circ {\mathcal I} = {\mathcal I} \circ f_+$ , this is equivalent to $f_-^{-1}(x_+) < {\mathcal I}(f_-^{-1}(x_+))$ , which is the same as $f_-^{-1}(x_+) < 1/2$ . Applying $f_-$ to both sides, we arrive at condition (b).

Remark 4.2. The condition in Lemma 4.1(c) can be written as $\gamma - 1> -{\log ((1-a)^2 + 1)}/{\log a}$ . As $\log ((1-a)^2 + 1) < \log 2$ for $a \in (0, 1)$ , we see that the condition is satisfied provided $\gamma> 1$ , $a \in (0, 2^{{1}/({1 - \gamma })})$ .
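The equivalence of conditions (b) and (c) in Lemma 4.1, and the sufficiency of the bound in Remark 4.2, can be spot-checked on a grid of illustrative parameter values (a numerical sanity check, not part of the argument):

```python
import math

# Check that x_+ < a/2 (condition (b), using f_-(1/2) = a/2) holds
# exactly when gamma > 1 - log(a^2 - 2a + 2)/log(a) (condition (c)),
# and that a < 2^{1/(1-gamma)} (Remark 4.2) implies condition (c).
for gamma in [1.05, 1.2, 1.4, 2.0, 3.0]:
    for a in [0.01, 0.05, 0.1, 0.3, 0.6, 0.9]:
        b = a ** (-gamma)
        x_plus = (1 - a) / (b - a)
        cond_b = x_plus < a / 2
        cond_c = gamma > 1 - math.log(a * a - 2 * a + 2) / math.log(a)
        assert cond_b == cond_c
        if a < 2.0 ** (1.0 / (1.0 - gamma)):      # Remark 4.2
            assert cond_c
```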

We can now estimate the measure of M. It is convenient to use the notation

$$ \begin{align*} p = \max(p_-, p_+). \end{align*} $$

Obviously, $p \in [\tfrac 1 2, 1)$ . Note that the condition in equation (2.3) for the positivity of the endpoint Lyapunov exponents can be written as

(4.2) $$ \begin{align} \gamma> \frac{p}{1-p} \end{align} $$

and the entropy of $(p_-, p_+)$ is equal to

$$ \begin{align*} H(p) = -p\log p - (1-p)\log (1-p). \end{align*} $$

The following lemma provides a lower bound for $\mu (M)$ .

Lemma 4.3. Let $a \in (0,1)$ , $\gamma> 1$ and $p \in [\tfrac 12, 1)$ satisfy the conditions in equation (4.2) and Lemma 4.1(c). Then,

$$ \begin{align*} \mu(M) \geq \frac{\gamma (1 - p) - p}{\gamma - p(1 - p)}. \end{align*} $$

Before giving the proof of Lemma 4.3, let us explain how it implies Theorem 2.1. Suppose the lemma is true. Then we can estimate the Lyapunov exponent $\chi (\mu )$ in the following way.

Corollary 4.4. Let $a \in (0,1)$ , $\gamma> 1$ and $p \in [\tfrac 12, 1)$ satisfy the conditions in equation (4.2) and Lemma 4.1(c). Then,

$$ \begin{align*} \chi(\mu) \le \bigg(1 - \frac{(1+\gamma)p^2(p+\gamma)}{\gamma - p(1-p)}\bigg) \log a. \end{align*} $$

Proof. By definition, we have

(4.3) $$ \begin{align} \chi(\mu) = (\mu (M) + (p_- - \gamma p_+)\mu([0, x_+]) + (p_+ - \gamma p_-) \mu([x_-, 1])) \log a. \end{align} $$

Computing the maximum of this expression under the condition $\mu ([0, x_+]) + \mu ([x_-, 1]) = 1 - \mu (M)$ , we obtain

$$ \begin{align*} \chi(\mu) \le (1 - (1+ \gamma) p (1 - \mu(M)))\log a. \end{align*} $$

Then Lemma 4.3 provides the required estimate by a direct computation.

Proof of Theorem 2.1

Let $a \in (0,1)$ , $\gamma> 1$ and $p \in [\tfrac 12, 1)$ satisfy the conditions in equation (4.2) and Lemma 4.1(c). By Corollary 4.4, we have $\chi (\mu ) < 0$ provided

(4.4) $$ \begin{align} \frac{(1+\gamma)p^2(p+\gamma)}{\gamma - p(1-p)} < 1. \end{align} $$

Hence, applying equation (4.1) and Corollary 4.4, we obtain

(4.5) $$ \begin{align} \dim_H \mu \leq \frac{p\log p + (1-p)\log (1-p)}{(1 - {(1+\gamma)p^2(p+\gamma)}/({\gamma - p(1-p)})) \log a} \end{align} $$

as long as equation (4.4) is satisfied. If, additionally,

(4.6) $$ \begin{align} p\log p + (1-p)\log (1-p)> \bigg(1 - \frac{(1+\gamma)p^2(p+\gamma)}{\gamma - p(1-p)}\bigg) \log a, \end{align} $$

then equation (4.5) provides $\dim _H \mu < 1$ . We conclude that the conditions required for $\dim _H \mu < 1$ are equations (4.2), (4.4), Lemma 4.1(c) and equation (4.6).

To find the range of allowable parameters, consider first the case $p_- = p_+ = \tfrac 1 2$ (which corresponds to $p = \tfrac 1 2$ ). Then the condition in equation (4.2) is equivalent to $\gamma> 1$ , while the inequality in equation (4.4) takes the form $2\gamma ^2 - 5 \gamma +3 < 0$ and is satisfied for $\gamma \in (1, 3/2)$ . Furthermore, by Remark 4.2, the condition in Lemma 4.1(c) is fulfilled for $\gamma> 1$ , $a \in (0, 2^{{1}/({1 -\gamma })})$ . The condition in equation (4.6) can be written as ${(1 - 4 \gamma ) \log 2}/{((\gamma - 1)(3/2 - \gamma ) \log a)} < 1$ , which is equivalent to

$$ \begin{align*} a < 2^{({1 - 4\gamma})/{((\gamma - 1)(3/2 - \gamma))}}. \end{align*} $$

A direct computation shows $2^{({1 - 4\gamma })/{((\gamma - 1)(3/2 - \gamma ))}} < 2^{{1}/({1 - \gamma })}$ for $\gamma \in (1,3/2)$ . By equation (4.5), we conclude that in the case $p_- = p_+ = \tfrac 1 2$ , we have

$$ \begin{align*} \dim_H \mu \leq \frac{(1 - 4 \gamma) \log 2}{(\gamma - 1)(3/2 - \gamma) \log a} < 1 \end{align*} $$

for $\gamma \in (1,3/2)$ , $a \in (0, 2^{({1 - 4\gamma })/{((\gamma - 1)(3/2 - \gamma ))}})$ .

Suppose now that $(p_-, p_+)$ is a probability vector with $p < \tfrac 1 2 + \delta $ for a small $\delta> 0$ . Note that the functions appearing in equations (4.2) and (4.4) are well defined and continuous for $\gamma \in (1,3/2)$ and p in a neighbourhood of $\tfrac 1 2$ . Hence, equations (4.2) and (4.4) are fulfilled for $\gamma \in J_{p_-, p_+} = J_p$ , where $J_p \subset (1, 3/2)$ is an interval slightly smaller than $(1, 3/2)$ , depending continuously on $p \in [\tfrac 1 2, \tfrac 1 2 + \delta )$ . Furthermore, if $\gamma \in J_p$ , then the conditions in Lemma 4.1(c) and equation (4.6) hold for sufficiently small $a> 0$ , where an upper bound for a can be taken to be a continuous function of $\gamma $ , which does not depend on p. By equation (4.5), we have

$$ \begin{align*} \dim_H \mu \leq \frac{p\log p + (1-p)\log (1-p)}{(1 - {(1+\gamma)p^2(p+\gamma)}/({\gamma - p(1-p)})) \log a} < 1 \end{align*} $$

for $p \in [\tfrac 1 2, \tfrac 1 2 + \delta)$ , $\gamma \in J_{p}$ and sufficiently small $a> 0$ (with a bound depending continuously on $\gamma $ ). In fact, analysing the inequalities in equations (4.2), (4.4), Lemma 4.1(c) and equation (4.6), one can obtain concrete ranges of parameters $a, \gamma , p$ , for which $\dim _H \mu < 1$ (cf. Remark 2.2 and Figure 2).

To complete the proof of Theorem 2.1, it remains to prove Lemma 4.3.

Proof of Lemma 4.3

The proof is based on Kac’s lemma (see Theorem 3.3) and the observation that outside of the interval M, the system $\{f_-, f_+\}$ (after a logarithmic change of coordinates) acts like a random walk with a drift. Note first that $\mu (M)> 0$ . Indeed, we have

(4.7) $$ \begin{align} f_+^{-1}(x_-)> x_+, \end{align} $$

as it is straightforward to check that this inequality is equivalent to $a^{1 - \gamma }> 1$ , which holds since $a \in (0,1)$ and $\gamma> 1$ . This means that the sets M and $f_+^{-1}(M)$ are not disjoint. By symmetry, M and $f_-^{-1}(M)$ are also not disjoint. As $\lim \nolimits _{n \to \infty } f_+^{-n}(x_-) = 0$ and $\lim \nolimits _{n \to \infty } f_-^{-n}(x_+) = 1$ , we see that $\bigcup \nolimits _{n = 0}^{\infty } \big( f_+^{-n}(M) \cup f_-^{-n}(M) \big) = (0,1)$ and hence $\mu (M)>0$ , as $\mu $ is stationary and $\mu (\{0,1\})=0$ .

We will apply Kac’s lemma to the step skew product in equation (1.1) and the set $\Sigma _2^+ \times M$ . Let $n_{M} \colon \Sigma _2^+ \times M \to {\mathbb N} \cup \{ \infty \}$ be the first return time to $\Sigma _2^+ \times M$ , that is,

$$ \begin{align*} n_{M} ({\underline{i}}, x) = \inf \{ n \geq 1 : ({\mathcal F}^+)^n({\underline{i}}, x) \in \Sigma_2^+ \times M \}. \end{align*} $$

Set ${\mathbb P} = \operatorname {\mathrm {Ber}}^+_{p_-, p_+}$ to be the $(p_-, p_+)$ -Bernoulli measure on $\Sigma _2^+$ . Since ${\mathbb P} \otimes \mu $ is invariant and ergodic for ${\mathcal F}^+$ (cf. [Reference Gharaei and Homburg12, Lemmas 3.2 and A.2]) and $({\mathbb P} \otimes \mu ) (\Sigma _2^+ \times M) = \mu (M)>0$ , Kac’s lemma implies

(4.8) $$ \begin{align} \int \limits_{\Sigma_2^+ \times M} n_{M}\, d\nu = \frac{1}{\mu(M)}, \end{align} $$

where

$$ \begin{align*}\nu = \frac{1}{\mu(M)} ({\mathbb P} \otimes \mu)|_{\Sigma_2^+ \times M}.\end{align*} $$

Recall that we assume the condition in Lemma 4.1(c), so $\overline L \cap \overline R = \emptyset $ . Let

$$ \begin{align*} C = [\sup L, \inf R], \end{align*} $$

so that $M = L \cup C \cup R$ with the union being disjoint. By the definitions of $L, C$ and R,

(4.9) $$ \begin{align} f_-(L) \subset [0, x_+), \quad f_-(C \cup R) \cup f_+(L \cup C) \subset M, \quad f_+(R) \subset (x_-, 1]. \end{align} $$
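The inclusions in equation (4.9) can be spot-checked numerically on grids of points in $L$, $C$ and $R$, for illustrative parameters satisfying Lemma 4.1(c) (a sanity check only; the small tolerance absorbs floating-point rounding at the interval endpoints):

```python
# Spot-check the inclusions in equation (4.9) for illustrative
# parameters satisfying Lemma 4.1(c), so that L and R are disjoint.
a, gamma = 0.05, 1.3
b = a ** (-gamma)
x_minus = (b - 1) / (b - a)
x_plus = (1 - a) / (b - a)

def f_minus(x):
    return a * x if x <= x_minus else 1 - b * (1 - x)

def f_plus(x):
    return b * x if x <= x_plus else 1 - a * (1 - x)

sup_L = x_plus / a              # f_-^{-1}(x_+)
inf_R = 1 - (1 - x_minus) / a   # f_+^{-1}(x_-) = I(f_-^{-1}(x_+))
assert sup_L < inf_R            # condition (a) of Lemma 4.1

steps = 1000
L = [x_plus + (sup_L - x_plus) * k / steps for k in range(steps)]
C = [sup_L + (inf_R - sup_L) * k / steps for k in range(steps + 1)]
R = [inf_R + (x_minus - inf_R) * (k + 1) / steps for k in range(steps)]

tol = 1e-9                      # tolerance for endpoint rounding
assert all(0 <= f_minus(x) < x_plus for x in L)            # f_-(L)
imgs = [f_minus(x) for x in C + R] + [f_plus(x) for x in L + C]
assert all(x_plus - tol <= y <= x_minus + tol for y in imgs)  # into M
assert all(x_minus < f_plus(x) <= 1 for x in R)            # f_+(R)
```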

Let

$$ \begin{align*}E = \{ ({\underline{i}}, x) \in \Sigma_2^+ \times M : f_{i_1}(x) \notin M \} = \{ ({\underline{i}}, x) \in \Sigma_2^+ \times M : n_{M}> 1 \}.\end{align*} $$

It follows from equation (4.9) that

(4.10) $$ \begin{align} E = \{i_1 = -\} \times L \cup \{ i_1 = + \} \times R, \end{align} $$

so

(4.11) $$ \begin{align} \nu(E) = \frac{p_- \:\mu(L) + p_+ \:\mu(R)}{\mu(M)} \end{align} $$

and as L, R are disjoint subsets of M,

(4.12) $$ \begin{align} \nu(E) \le p\frac{\mu(L) + \mu(R)}{\mu(M)} \leq p \end{align} $$

for $p = \max (p_-, p_+)$ . By equation (4.10),

(4.13) $$ \begin{align} \int \limits_{\Sigma_2^+ \times M} n_{M} \, d\nu = 1 - \nu (E) + \int \limits_E n_{M} \, d\nu = 1 - \nu (E) + \int \limits_{\{i_1 = -\} \times L} n_{M} \, d\nu + \int \limits_{\{i_1 = +\} \times R} n_{M} \, d\nu. \end{align} $$

Note that it follows from equation (4.7) that $f_+(x_+) < x_-$ , and hence a trajectory $\{f_{i_n} \circ \cdots \circ f_{i_1}(x)\}_{n=0}^\infty $ of a point $x \in [0,1]$ cannot jump from $[0, x_+)$ to $(x_-, 1]$ (or vice versa) without passing through M. Combining this observation with the fact that the transformations $f_-$ and $f_+$ are increasing, we conclude that

(4.14) $$ \begin{align} \begin{aligned} n_{M}({\underline{i}}, x) &\leq n_{M}({\underline{i}}, x_+) \quad \text{for }({\underline{i}}, x) \in \{i_1 = -\} \times L,\\ n_{M}({\underline{i}}, x) &\leq n_{M}({\underline{i}}, x_-) \quad \text{for } ({\underline{i}}, x) \in \{i_1 = +\} \times R. \end{aligned} \end{align} $$

Therefore, we can apply equation (4.14) together with equation (4.11) to obtain

(4.15) $$ \begin{align} \begin{aligned} &\int \limits_{\{i_1 = -\} \times L} n_{M} \, d\nu + \int \limits_{\{i_1 = +\} \times R} n_{M} \, d\nu \leq \int\limits_{\{i_1 = -\} \times L} n_{M}({\underline{i}}, x_+)d\nu + \int\limits_{\{i_1 = +\} \times R} n_{M}({\underline{i}}, x_-)d\nu\\ &\quad = \frac{\mu(L)}{\mu(M)} \int\limits_{\{i_1 = -\}} n_{M}({\underline{i}}, x_+)\,d{\mathbb P}({\underline{i}}) + \frac{\mu(R)}{\mu(M)} \int\limits_{\{i_1 = +\}} n_{M}({\underline{i}}, x_-)\,\,d{\mathbb P}({\underline{i}})\\ &\quad = p_-\:\frac{\mu(L)}{\mu(M)}\mathbb{E}_-N_- + p_+\:\frac{\mu(R)}{\mu(M)}\mathbb{E}_+N_+\le\nu(E) \max(\mathbb{E}_-N_-, \mathbb{E}_+N_+), \end{aligned} \end{align} $$

where

$$ \begin{align*} N_\pm({\underline{i}}) = \inf \{ n \geq 1 : f_{i_n} \circ \cdots \circ f_{i_1} (x_\mp) \in M \} \end{align*} $$

and $\mathbb {E}_\pm $ is the expectation taken with respect to the conditional measure

$$ \begin{align*} {\mathbb P}_\pm = \frac{1}{{\mathbb P}(i_1 = \pm)} {\mathbb P} |_{\{i_1 = \pm\}} = \frac{1}{p_\pm} {\mathbb P} |_{\{i_1 = \pm\}}. \end{align*} $$

Using equations (4.13), (4.15) and (4.12), we obtain

(4.16) $$ \begin{align} \int \limits_{\Sigma_2^+ \times M} n_{M} \, d\nu \le1 + \nu(E) (\max(\mathbb{E}_-N_-, \mathbb{E}_+N_+) - 1)\le 1 + p(\max(\mathbb{E}_-N_-, \mathbb{E}_+N_+)-1). \end{align} $$

Define random variables $X_j^\pm \colon \Sigma _2^+ \to {\mathbb R}$ , $j \in {\mathbb N}$ , by

$$ \begin{align*} X_j^-({\underline{i}}) = \begin{cases} 1 &\text{if } i_j = -, \\ -\gamma &\text{if } i_j = +, \end{cases} \quad X_j^+({\underline{i}}) = \begin{cases} -\gamma &\text{if } i_j = -, \\ 1 &\text{if } i_j = +. \end{cases} \end{align*} $$

Then $X_2^-, X_3^-, \ldots $ is an independent and identically distributed sequence of random variables with ${\mathbb P}_-(X_j^- = 1) = p_-$ , ${\mathbb P}_-(X_j^- = -\gamma ) = p_+$ . To estimate $\mathbb {E}_-N_-$ , note that for ${\underline {i}} \in \{i_1 = -\}$ , we have

$$ \begin{align*} N_-({\underline{i}}) = \inf \{ n \geq 1 : a^{1 + X_2^- + \cdots + X_n^-} x_+ \geq x_+\} = \inf \{ n \geq 2 : X_2^- + \cdots + X_n^- \leq -1\}, \end{align*} $$

as for $n < N_-({\underline {i}})$ , we have $f_{i_n} \circ \cdots \circ f_{i_1} (x_+) < x_+$ and $f_-(x) = ax,\ f_+(x) = a^{-\gamma }x$ on $[0,x_+]$ . Consequently, $N_-$ is a stopping time for $\{X_j^-\}_{j=2}^{\infty }$ . We show that $\mathbb {E}_-N_- < \infty $ . To do this, note that by Hoeffding’s inequality (see Theorem 3.1) and equation (2.3),

$$ \begin{align*} {\mathbb P}_-(N_-> n + 1) &\leq {\mathbb P}_-\bigg(\sum \limits_{j=2}^{n+1} X_j^- > -1\bigg)\\ &= {\mathbb P}_-\bigg(\sum \limits_{j=2}^{n+1} X_j^- - n\mathbb{E}_- X_2^- \geq -1 - n(p_- - \gamma p_+)\bigg)\\ &\leq \exp \bigg(- \frac{2(1 + n(p_- - \gamma p_+))^2}{n (\gamma + 1)^2} \bigg) \leq \exp(-cn) \end{align*} $$

for some constant $c>0$ and all $n \in {\mathbb N}$ large enough. Here we have used the fact that $t:= - 1 - n(p_- - \gamma p_+)$ is positive for $n$ large enough, which follows from equation (2.3). As $\mathbb {E}_-N_- = \sum \nolimits _{n=0}^\infty {\mathbb P}_-(N_-> n)$, the above inequality implies $\mathbb {E}_-N_- < \infty $ .

Let

$$ \begin{align*} S_{N_-}({\underline{i}}) = \sum \limits_{n=2}^{N_-({\underline{i}})} X_n^-({\underline{i}}). \end{align*} $$

This random variable is well defined, since $2 \leq N_- < \infty $ holds ${\mathbb P}_-$ -almost surely. As $\mathbb {E}_-N_- < \infty $ , we can apply Wald’s identity (see Theorem 3.2) to obtain

(4.17) $$ \begin{align} \mathbb{E}_-S_{N_-} = \mathbb{E}_- X_2^-(\mathbb{E}_-N_- -1) = (p_- - \gamma p_+) (\mathbb{E}_-N_- -1). \end{align} $$

To estimate $\mathbb {E}_-S_{N_-}$ , we condition on $X_2^-$ and note that $S_{N_-} \geq -1-\gamma $ almost surely and, by equation (2.3), $-\gamma < -1$ . This gives

(4.18) $$ \begin{align} \mathbb{E}_- S_{N_-} & = p_-\: \mathbb{E}_-(S_{N_-}|X_2^- = 1) + p_+ \: \mathbb{E}_-(S_{N_-}|X_2^- = -\gamma) \nonumber\\ & \geq p_- \: (-1 - \gamma) - p_+\: \gamma \\ & = -p_- - \gamma. \nonumber \end{align} $$

Combining this with equations (2.3) and (4.17), we get

(4.19) $$ \begin{align} \mathbb{E}_- N_- -1 \leq \frac{p_- + \gamma}{\gamma p_+ - p_-}. \end{align} $$
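Both the exponential tail decay and the bound in equation (4.19) are easy to check by Monte Carlo simulation. The following sketch (our illustration, not part of the proof) does this for the sample parameters $p_- = 0.4$, $\gamma = 2$, which satisfy the drift condition $\gamma p_+ > p_-$ from equation (2.3):

```python
import math
import random

# Illustrative parameters (our choice): p_- = 0.4 and gamma = 2 satisfy
# gamma * p_+ > p_-, so the walk X_2 + X_3 + ... has negative drift.
p_minus, gamma = 0.4, 2.0
p_plus = 1.0 - p_minus

def sample_N_minus(rng):
    """One sample of N_- = inf{n >= 2 : X_2 + ... + X_n <= -1},
    where the X_j are i.i.d. with P(X_j = 1) = p_-, P(X_j = -gamma) = p_+."""
    s, n = 0.0, 1
    while s > -1.0:
        n += 1
        s += 1.0 if rng.random() < p_minus else -gamma
    return n

rng = random.Random(0)
samples = [sample_N_minus(rng) for _ in range(200_000)]
mean_N = sum(samples) / len(samples)

# Empirical tail P_-(N_- > n + 1) versus the Hoeffding estimate, at n = 20
# (large enough that t = -1 - n(p_- - gamma*p_+) is positive).
n = 20
tail = sum(N > n + 1 for N in samples) / len(samples)
hoeffding = math.exp(-2 * (1 + n * (p_minus - gamma * p_plus)) ** 2
                     / (n * (gamma + 1) ** 2))

# Right-hand side of the bound (4.19) on E_- N_- - 1.
wald_bound = (p_minus + gamma) / (gamma * p_plus - p_minus)
```

With these parameters the empirical mean of $N_-$ comes out close to $3$, comfortably below the value $1 + 2.4/0.8 = 4$ allowed by equation (4.19).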

By the symmetry in equation (2.2) and $x_+ = {\mathcal I}(x_-)$ , we can estimate $\mathbb {E}_+N_+$ in the same way, exchanging the roles of $p_-$ and $p_+$ , obtaining

(4.20) $$ \begin{align} \mathbb{E}_+ N_+ -1 \leq \frac{p_+ + \gamma}{\gamma p_- - p_+}. \end{align} $$

Applying equations (4.19) and (4.20) to equation (4.16), we see that

$$ \begin{align*} \int \limits_{\Sigma_2^+ \times M}\!\! n_{M}({\underline{i}}, x)\,d\nu ({\underline{i}}, x) \leq 1 + p \max\bigg(\frac{p_- + \gamma}{\gamma p_+ - p_-}, \frac{p_+ + \gamma}{\gamma p_- - p_+} \bigg) = 1 + p \frac{p + \gamma}{\gamma (1-p) - p}. \end{align*} $$

Invoking equation (4.8), we obtain

$$ \begin{align*} \mu(M) \ge \frac{1}{1 + p ({p + \gamma})/({\gamma (1-p) - p})} = \frac{\gamma (1-p)-p}{\gamma - p(1-p)}, \end{align*} $$

which ends the proof.
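As a side illustration (our addition, not part of the proof), the lower bound just obtained is easy to evaluate numerically; the helper name is ours:

```python
def mu_M_lower_bound(p, gamma):
    """Lower bound (gamma*(1 - p) - p) / (gamma - p*(1 - p)) for mu(M),
    with p = max(p_-, p_+); it is positive exactly when gamma*(1 - p) > p."""
    return (gamma * (1 - p) - p) / (gamma - p * (1 - p))

# For p = 1/2, gamma = 2 the bound equals (1/2) / (7/4) = 2/7; numerically
# it increases with gamma toward the limit 1 - p.
example = mu_M_lower_bound(0.5, 2.0)
```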

We finish the paper with some remarks on the limitations of our method for proving singularity of the measure $\mu $ .

Remark 4.5. One should be aware that, in general, the upper bound $-{H(p_-, p_+)}/{\chi (\mu )}$ does not coincide with the actual value of $\dim _H \mu $ for AM systems. Indeed, for $(p_-, p_+) = (\tfrac 1 2, \tfrac 1 2)$ , we have $H(p_-, p_+) = \log 2$ and by equation (4.3),

$$ \begin{align*} \chi(\mu) = \bigg( \frac{1+ \gamma}{2} \mu(M) + \frac{1 - \gamma}{2} \bigg) \log a \ge \log a. \end{align*} $$

However, [Reference Barański and Śpiewak2, Theorems 2.10 and 2.12] provide an exact value of the dimension of $\mu $ in the resonance case $\gamma = k \in {\mathbb N}$ , $k \ge 2$ , yielding

$$ \begin{align*} \dim_{H}\mu = \dim_{H}(\operatorname{\mathrm{supp}} \mu) = \frac{\log \eta}{\log a}, \end{align*} $$

where $\eta \in (\tfrac 1 2, 1)$ is the unique solution of the equation $\eta ^{k+1} - 2\eta + 1 = 0$ . Therefore,

$$ \begin{align*} \dim_{H}\mu = \frac{\log \eta}{\log a} < \frac{\log {1}/{2}}{\log a} \leq -\frac{H(p_-, p_+)}{\chi(\mu)}.\end{align*} $$
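For general $k$ the root $\eta$ has no closed form, but it is easily computed numerically. A bisection sketch (ours; the right endpoint $0.999$ is an assumption we checked to be valid for every integer $k \ge 2$):

```python
import math

def eta(k, tol=1e-12):
    """Unique root of f(x) = x**(k+1) - 2*x + 1 in (1/2, 1) for integer k >= 2.

    f(1/2) = 2**-(k+1) > 0 and f(0.999) = 0.999**(k+1) - 0.998 < 0 for all
    k >= 2, and f is convex with only the roots eta and 1 on (0, 1], so
    bisection on [0.5, 0.999] converges to the root below the trivial root 1.
    """
    f = lambda x: x ** (k + 1) - 2 * x + 1
    lo, hi = 0.5, 0.999
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# Example: for k = 2 the root is the inverse golden ratio (sqrt(5) - 1) / 2,
# and for any a in (0, 1) we get dim_H mu = log(eta) / log(a), which is
# strictly smaller than log(1/2) / log(a) since eta > 1/2.
```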

Remark 4.6. It is natural to ask for the range of parameters to which the method presented in this paper could be applied. Let us discuss this in the basic case $p_- = p_+ = \tfrac 1 2$. Following the proof of Theorem 2.1 in this case, we see by equations (4.1) and (4.3) that if for a given $\gamma> 1$ we have

$$ \begin{align*}\mu(M)> \frac{\gamma - 1}{\gamma + 1},\end{align*} $$

then the measure $\mu $ is singular for $a < 1$ small enough (depending on $\gamma $ ). However, combining equations (4.8), (4.16), (4.17), and noting that $\mathbb {E}_- N_- = \mathbb {E}_+ N_+$ and $\mathbb {E}_- S_{N_-} = \mathbb {E}_+ S_{N_+}$ for $p_- = p_+ = \tfrac 1 2$ , we see that

$$ \begin{align*} \mu(M) \geq \frac{\gamma - 1}{\gamma - 1 - \mathbb{E}_- S_{N_-}}, \end{align*} $$

provided that the condition of Lemma 4.1(c) is satisfied (which, for a fixed $\gamma> 1$, holds for all small enough $a \in (0,1)$). Therefore, if for a fixed $\gamma> 1$ the inequality

(4.21) $$ \begin{align} \mathbb{E}_- S_{N_-}> -2 \end{align} $$

is satisfied, then $\mu $ is singular for $a \in (0,1)$ small enough. The proof of Theorem 2.1 shows that equation (4.21) holds for $\gamma \in (1, 3/2)$. Figure 3 presents computer-simulated values of $\mathbb {E}_- S_{N_-}$ for $\gamma $ in the interval $(1,3)$. It suggests that the range of parameters $\gamma $ for which the singularity of $\mu $ holds with $a$ small enough could be extended from $(1, 3/2)$ to a larger set of the form $(1, \gamma _1) \cup (2, \gamma _2)$ for some $\gamma _1 \in (1,2)$, $\gamma _2 \in (2, 3)$. It is easy to see that one can obtain equation (4.21) for some $\gamma> 3/2$ by conditioning on a larger number of steps in equation (4.18); we do not pursue the task of finding a wider set of admissible parameters in this work. One should note, however, that equation (4.21) cannot hold for $\gamma \geq 3$, as the formula from the first line of equation (4.18), combined with the obvious bound $S_{N_-} \leq -1$, gives (for $p_- = p_+ = \tfrac 1 2$)

$$ \begin{align*} \mathbb{E}_- S_{N_-} \leq -p_- - p_+ \gamma = \frac{- 1 - \gamma}{2},\end{align*} $$

yielding $\mathbb {E}_- S_{N_-} \leq -2$ for $\gamma \geq 3$ . This shows that the method used in this paper cannot be (directly) applied for $\gamma \geq 3$ (even though there do exist AM systems with $\gamma \geq 3$ for which $\mu $ is singular—see [Reference Barański and Śpiewak2, Theorems 2.10 and 2.12]). To obtain an optimal range of $\gamma $ satisfying equation (4.21), one should compute $\mathbb {E}_- S_{N_-}$ explicitly in terms of $\gamma $ . This however seems to be complicated and Figure 3 suggests that one should not expect a simple analytic formula.

Figure 3 Simulated values of $\mathbb {E}_- S_{N_-}$ as a function of $\gamma $ . The values of $\gamma $ are presented on the x-coordinate axis, while the y-coordinate gives the corresponding value of $\mathbb {E}_- S_{N_-}$ . Simulations were performed for $4000$ values of $\gamma $ , uniformly spaced in the interval $(1,3)$ . For each choice of $\gamma $ , we performed $40\,000$ simulations of $3000$ steps of the corresponding random walk.
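A scaled-down version of the simulation described in the caption can be sketched as follows (our sketch; the function name, seed, and sample sizes are ours, with the defaults matching the protocol of Figure 3):

```python
import random

def mean_S_N(gamma, trials=40_000, horizon=3_000, seed=0):
    """Monte Carlo estimate of E_- S_{N_-} for p_- = p_+ = 1/2.

    Runs the random walk with i.i.d. steps in {1, -gamma} until the partial
    sum first drops to -1 or below (the stopping time N_-), truncating each
    trajectory after `horizon` steps, as in the simulations for Figure 3.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        s = 0.0
        for _ in range(horizon):
            s += 1.0 if rng.random() < 0.5 else -gamma
            if s <= -1.0:
                break
        total += s
    return total / trials

# Criterion (4.21) asks whether the estimate exceeds -2; for gamma = 1.2 the
# proof guarantees E_- S_{N_-} >= -1/2 - gamma = -1.7 > -2, while for
# gamma >= 3 the remark above shows E_- S_{N_-} <= -2.
estimate = mean_S_N(1.2, trials=5_000)
```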

Acknowledgements

Krzysztof Barański was supported by the National Science Centre, Poland, grant no. 2018/31/B/ST1/02495. Adam Śpiewak acknowledges support from the Israel Science Foundation, grant 911/19. A part of this work was done when the second author was visiting the Budapest University of Technology and Economics. We thank Balázs Bárány, Károly Simon and R. Dániel Prokaj for useful discussions, and the staff of the Institute of Mathematics of the Budapest University of Technology and Economics for their hospitality.

References

[1] Alsedà, L. and Misiurewicz, M. Random interval homeomorphisms. Publ. Mat. 58(suppl.) (2014), 15–36.
[2] Barański, K. and Śpiewak, A. Singular stationary measures for random piecewise affine interval homeomorphisms. J. Dynam. Differential Equations 33(1) (2021), 345–393.
[3] Bradík, J. and Roth, S. Typical behaviour of random interval homeomorphisms. Qual. Theory Dyn. Syst. 20(3) (2021), Paper no. 73, 20 pp.
[4] Czernous, W. Generic invariant measures for minimal iterated function systems of homeomorphisms of the circle. Ann. Polon. Math. 124(1) (2020), 33–46.
[5] Czernous, W. and Szarek, T. Generic invariant measures for iterated systems of interval homeomorphisms. Arch. Math. (Basel) 114(4) (2020), 445–455.
[6] Czudek, K. Alsedà–Misiurewicz systems with place-dependent probabilities. Nonlinearity 33(11) (2020), 6221–6243.
[7] Czudek, K. and Szarek, T. Ergodicity and central limit theorem for random interval homeomorphisms. Israel J. Math. 239(1) (2020), 75–98.
[8] Czudek, K., Szarek, T. and Wojewódka-Ścia̧żko, H. The law of the iterated logarithm for random interval homeomorphisms. Israel J. Math. 246(1) (2021), 47–53.
[9] Feller, W. An Introduction to Probability Theory and Its Applications. Volume II, 2nd edn. John Wiley & Sons, Inc., New York–London–Sydney, 1971.
[10] Gelfert, K. and Stenflo, Ö. Random iterations of homeomorphisms on the circle. Mod. Stoch. Theory Appl. 4(3) (2017), 253–271.
[11] Gharaei, M. and Homburg, A. J. Skew products of interval maps over subshifts. J. Difference Equ. Appl. 22(7) (2016), 941–958.
[12] Gharaei, M. and Homburg, A. J. Random interval diffeomorphisms. Discrete Contin. Dyn. Syst. Ser. S 10(2) (2017), 241–272.
[13] Ghys, É. Groups acting on the circle. Enseign. Math. (2) 47(3–4) (2001), 329–407.
[14] Hoeffding, W. Probability inequalities for sums of bounded random variables. J. Amer. Statist. Assoc. 58 (1963), 13–30.
[15] Jaroszewska, J. and Rams, M. On the Hausdorff dimension of invariant measures of weakly contracting on average measurable IFS. J. Stat. Phys. 132(5) (2008), 907–919.
[16] Łuczyńska, G. Unique ergodicity for function systems on the circle. Statist. Probab. Lett. 173 (2021), Paper no. 109084, 7 pp.
[17] Łuczyńska, G. and Szarek, T. Limit theorems for random walks on $\mathrm{Homeo}({S}^1)$. J. Stat. Phys. 187(1) (2022), Paper no. 7, 13 pp.
[18] Malicet, D. Random walks on $\mathrm{Homeo}({S}^1)$. Comm. Math. Phys. 356(3) (2017), 1083–1116.
[19] Navas, A. Groups of Circle Diffeomorphisms (Chicago Lectures in Mathematics). University of Chicago Press, Chicago, IL, 2011.
[20] Navas, A. Group actions on 1-manifolds: a list of very concrete open questions. Proceedings of the International Congress of Mathematicians—Rio de Janeiro 2018. Volume III. Invited Lectures. Eds. B. Sirakov, P. Ney de Souza and M. Viana. World Scientific Publishing, Hackensack, NJ, 2018, pp. 2035–2062.
[21] Petersen, K. Ergodic Theory (Cambridge Studies in Advanced Mathematics, 2). Cambridge University Press, Cambridge, 1983.
[22] Prokaj, R. D. and Simon, K. Piecewise linear iterated function systems on the line of overlapping construction. Nonlinearity 35(1) (2022), 245–277.
[23] Szarek, T. and Zdunik, A. Stability of iterated function systems on the circle. Bull. Lond. Math. Soc. 48(2) (2016), 365–378.
[24] Szarek, T. and Zdunik, A. The central limit theorem for iterated function systems on the circle. Mosc. Math. J. 21(1) (2021), 175–190.
[25] Toyokawa, H. On the existence of a $\sigma$-finite acim for a random iteration of intermittent Markov maps with uniformly contractive part. Stoch. Dyn. 21(3) (2021), Paper no. 2140003, 14 pp.
Figure 1 An example of a symmetric AM system.

Figure 2 The range of parameters $p = \max (p_-, p_+)$ and $\gamma $, for which the stationary measure $\mu $ for the system in equation (2.1) is singular for sufficiently small $a> 0$.