
Hausdorff dimension of multidimensional multiplicative subshifts

Published online by Cambridge University Press:  19 July 2023

JUNG-CHAO BAN
Affiliation:
Department of Mathematical Sciences, National Chengchi University, Taipei 11605, Taiwan, ROC Math. Division, National Center for Theoretical Science, National Taiwan University, Taipei 10617, Taiwan, ROC (e-mail: [email protected])
WEN-GUEI HU
Affiliation:
College of Mathematics, Sichuan University, Chengdu 610064, China (e-mail: [email protected])
GUAN-YU LAI*
Affiliation:
Department of Mathematical Sciences, National Chengchi University, Taipei 11605, Taiwan, ROC

Abstract

The purpose of this study is twofold. First, the Hausdorff dimension formula of the multidimensional multiplicative subshift (MMS) in $\mathbb {N}^d$ is presented. This extends the earlier work of Kenyon et al [Hausdorff dimension for fractals invariant under multiplicative integers. Ergod. Th. & Dynam. Sys. 32(5) (2012), 1567–1584] from $\mathbb {N}$ to $\mathbb {N}^d$. In addition, the preceding work on the Minkowski dimension of the MMS in $\mathbb {N}^d$ is applied to show that the Hausdorff dimension is strictly less than the Minkowski dimension. Second, the same technique allows us to investigate the multifractal analysis of multiple ergodic averages in $\mathbb {N}^d$. Precisely, we extend the result of Fan et al [Multifractal analysis of some multiple ergodic averages. Adv. Math. 295 (2016), 271–333] on the multifractal analysis of multiple ergodic averages from $\mathbb {N}$ to $\mathbb {N}^d$.

Type
Original Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press

1 Introduction

In this article, we study the following two related topics: the Hausdorff dimension of multidimensional multiplicative subshifts and the multifractal analysis of multiple ergodic averages. Before presenting our main results, we give the motivation for this study. Let $(X,T)$ be a topological dynamical system, where $T: X\rightarrow X$ is a continuous map on a compact metric space X, and let $\mathbb {F}=(f_1,\ldots , f_d)$ be a d-tuple of functions, where $f_{i}:X\rightarrow \mathbb {R}$ is continuous for $1\leq i\leq d$. Multiple ergodic theory studies the asymptotic behavior of the multiple ergodic average

(1) $$ \begin{align} A_{n}\mathbb{F}(x)= \frac{1}{n}\sum_{k=0}^{n-1}f_{1}(T_{1}^{k}(x))f_{2}(T_{2}^{k}(x)) \cdots f_{d}(T_{d}^{k}(x))\text{.} \end{align} $$

Such problems were initiated by Furstenberg, Katznelson, and Ornstein [Reference Furstenberg, Katznelson and Ornstein13] in their proof of Szemerédi's theorem. The $L^{2}$-convergence of equation (1) was first considered by Conze and Lesigne [Reference Conze and Lesigne7], and then generalized by Host and Kra [Reference Host and Kra15] for $ T_{j}=T^{j}$ (where $T^j(x)$ denotes the jth iterate of x under T). Bourgain [Reference Bourgain5] proved almost everywhere convergence when $d=2$ and $f_{j}\in L^{\infty }(\mu )$, where $\mu $ is a probability measure on X. Gutman et al [Reference Gutman, Huang, Shao and Ye14] obtained almost sure convergence when the system is weakly mixing and pairwise independently determined. The reader is referred to [Reference Fan, Feng and Lau9, Reference Frantzikinakis12] for an up-to-date account of this subject.

Let $\Sigma _{m}=\{0,\ldots ,m-1\}$ and let $\Omega \subseteq \Sigma _{m}^{\mathbb {N}}$ be a subshift, that is, a closed subset of $\Sigma _{m}^{\mathbb {N}}$ invariant under the shift $\sigma $ defined by $\sigma ((x_i)_{i\in \mathbb {N}})=(x_{i+1})_{i\in \mathbb {N}}$. Suppose S is the multiplicative semigroup generated by the primes $p_{1},\ldots ,p_{k}$. Set

(2) $$ \begin{align} X_{\Omega }^{(S)}=\{(x_{i})_{i=1}^{\infty }\in \Sigma _{m}^{\mathbb{N} }:x|_{iS}\in \Omega \text{ for all } i\in \mathbb{N}\text{, }\gcd (i,S)=1\}, \end{align} $$

where $\gcd (i,S)=1$ means that $\gcd (i,s)=1\ \text { for all } s\in S$ . The authors of [Reference Kenyon, Peres and Solomyak16] call $X_{\Omega }^{(S)}$ ‘multiplicative subshifts’, since it is invariant under the multiplicative action. That is,

$$ \begin{align*} x=(x_{k})_{k\geq 1}\in X_{\Omega }^{(S)}\Rightarrow \text{for all } i\in \mathbb{N}\text{, }(x_{ik})_{k\geq 1}\in X_{\Omega }^{(S)}. \end{align*} $$

It is worth noting that the investigation of $X_{\Omega }^{(S)}$ was initiated by the study of the set $X^{p_{1},p_{2},\ldots ,p_{k}}$ defined below. Namely, if $p_{1},\ldots ,p_{k}$ are primes, define

(3) $$ \begin{align} X^{p_{1},p_{2},\ldots ,p_{k}}=\{(x_{i})_{i=1}^{\infty}\in \Sigma _{m}^{\mathbb{N}}:x_{i}x_{ip_{1}}\cdots x_{ip_{k}}=0\text{ for all } i\in \mathbb{N}\}, \end{align} $$

and it is clear that $X^{p_{1},p_{2},\ldots ,p_{k}}$ is a special case of $X_{\Omega }^{(S)}$ with $\Omega $ being the subshift of finite type with forbidden set $\mathcal {F}=\{1,\ldots ,m-1\}^{k+1}$ . The dimension theory of multiplicative subshifts and the multifractal analysis of multiple ergodic averages have attracted increasing attention and become popular research topics in recent years (cf. [Reference Ban, Hu and Lai1, Reference Ban, Hu and Lin3, Reference Brunet6, Reference Fan, Liao and Ma10, Reference Fan, Schmeling and Wu11, Reference Kenyon, Peres and Solomyak16–Reference Pollicott19]). Fan, Liao, and Ma [Reference Fan, Liao and Ma10] obtained the Hausdorff dimension of the level set of equation (1) with $f_i(x)=x_1$ and $T_i=T^{i}$ for all $1\leq i\leq \ell $. More precisely, for fixed $\theta \in [-1,1]$ and $\ell \geq 1$,

(4) $$ \begin{align} \dim_H(B_{\theta})=1-\frac{1}{\ell}+\frac{1}{\ell}H\bigg( \frac{1+\theta}{2} \bigg), \end{align} $$

where

(5) $$ \begin{align} B_{\theta}:=\bigg\{ (x_k)_{k=1}^{\infty}\in \{ -1, 1\}^{\mathbb{N}} : \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{k=1}^n x_k x_{2k} \cdots x_{\ell k} =\theta \bigg\} \end{align} $$

and $H(t)=-t \log _2 t - (1-t)\log _2 (1-t)$. In the same work [Reference Fan, Liao and Ma10], the authors proved that the Minkowski dimension of $X^2$ (equation (3) with $m=2$ and $p_1=2$) equals

(6) $$ \begin{align} \dim_M (X^2)=\sum_{n=1}^{\infty} \frac{\log_2 F_n}{2^{n+1}}, \end{align} $$

where $\{ F_n \}$ is the Fibonacci sequence with $F_1=2$, $F_2=3$, and $F_{n+2}=F_{n+1}+F_n$ for $n\geq 1$.
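As a quick numerical illustration of equation (6), the partial sums of this series can be evaluated directly. The following Python sketch is our own illustration (the function name and truncation level are arbitrary), not part of the original arguments; the partial sums converge geometrically to approximately 0.824.

```python
from math import log2

# Partial sums of equation (6): dim_M(X^2) = sum_{n>=1} log2(F_n) / 2^(n+1),
# with F_1 = 2, F_2 = 3 and F_{n+2} = F_{n+1} + F_n.
def minkowski_dim_X2(terms=60):
    F, F_next = 2, 3
    total = 0.0
    for n in range(1, terms + 1):
        total += log2(F) / 2 ** (n + 1)
        F, F_next = F_next, F + F_next   # advance the recursion
    return total

print(minkowski_dim_X2())   # ~0.8243
```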

Later, Kenyon, Peres, and Solomyak [Reference Kenyon, Peres and Solomyak16] generalized the work of Fan, Liao, and Ma [Reference Fan, Liao and Ma10] by investigating the dimension formula of $X_A^q$. Namely, for an integer $q\geq 2$,

(7) $$ \begin{align} X_A^q:= \{ (x_i)_{i=1}^{\infty} \in \{0,1,\ldots, m-1 \}^{\mathbb{N}} : A(x_i , x_{iq} )=1 \text{ for all } i \in \mathbb{N} \}, \end{align} $$

where $A\in M_m(\{ 0,1\})$ and $M_m(\{ 0,1\})$ denotes the space of all $m\times m$ matrices with entries in $\{0,1\}$.

Theorem 1.1. [Reference Kenyon, Peres and Solomyak16, Theorem 1.3]

  1. (1) Let A be a primitive $0$ - $1$ matrix. Then,

    (8) $$ \begin{align} \dim_H (X_A^q)= \frac{q-1}{q} \log_m \sum_{i=0}^{m-1} t_i , \end{align} $$
    where $(t_i)_{i=0}^{m-1}$ is the unique positive vector satisfying
    $$ \begin{align*} t_i^q=\sum_{j=0}^{m-1} A(i,j)t_j. \end{align*} $$
  2. (2) The Minkowski dimension of $X_A^q$ exists and equals

    (9) $$ \begin{align} \dim_M (X_A^q)=(q-1)^2 \sum_{k=1}^{\infty} \frac{\log_m | A^{k-1}|}{q^{k+1}}, \end{align} $$
    where $|A|$ is the sum of all entries of the matrix A.
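Equation (8) is straightforward to evaluate numerically. The sketch below is our own illustration: iterating $t\mapsto (At)^{1/q}$ from the all-ones vector is one convenient way to approximate the positive vector of Theorem 1.1 (this choice of solver is an assumption of ours, not part of the theorem, though it converges in practice for primitive A). For the golden-mean case with $m=q=2$, it recovers $\dim_H(X^2)\approx 0.8114$, strictly below the Minkowski dimension $\approx 0.8243$ from equation (6).

```python
import numpy as np

def t_vector(A, q, iters=200):
    """Iterate t <- (A t)^(1/q), approximating the positive solution
    of t_i^q = sum_j A(i,j) t_j."""
    t = np.ones(A.shape[0])
    for _ in range(iters):
        t = (A @ t) ** (1.0 / q)
    return t

def dim_H(A, q, m):
    t = t_vector(np.asarray(A, dtype=float), q)
    return (q - 1) / q * np.log(t.sum()) / np.log(m)   # equation (8)

# Golden-mean case: A forbids the word 11, so X_A^2 = X^2 with m = 2.
print(dim_H([[1, 1], [1, 0]], q=2, m=2))   # ~0.8114
```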

Peres et al [Reference Peres, Schmeling, Seuret and Solomyak17] obtained the Hausdorff dimension and Minkowski dimension of $X^{2,3}$ (equation (3) with $p_1=2,p_2=3$ and $m=2$ ). One objective of this paper is to extend Theorem 1.1 from $\mathbb {N}$ to $\mathbb {N}^d$ (Theorem 1.3).

The multifractal analysis of general multiple ergodic averages was pioneered by Fan, Schmeling, and Wu [Reference Fan, Schmeling and Wu11]. Specifically, they considered the more general form of the multiple ergodic average defined below. Define the multiple ergodic average

(10) $$ \begin{align} A_n \varphi (x)= \frac{1}{n}\sum_{k=1}^n \varphi (x_k, x_{kq} ,\ldots, x_{kq^{\ell-1}} ), \end{align} $$

where $\varphi : S^{\ell }=\{ 0,1,\ldots , m-1\}^{\ell } \rightarrow \mathbb {R}$ is a continuous function with respect to the discrete topology and $\ell \geq 1 , q\geq 2$ . The level set with respect to the multiple ergodic average in equation (10) is defined by

(11) $$ \begin{align} E(\alpha)=\Big\{ (x_k)_{k=1}^{\infty}\in \Sigma_m^{\mathbb{N}} : \lim_{n \rightarrow \infty} A_n \varphi (x) =\alpha \Big\},\quad \alpha \in \mathbb{R}. \end{align} $$

Let $s\in \mathbb {R}$ , and let $\mathcal {F}(S^{\ell -1},\mathbb {R}^+)$ denote the cone of non-negative real functions on $S^{\ell -1}$ . The nonlinear operator $\mathcal {N}_s:\mathcal {F}(S^{\ell -1},\mathbb {R}^+)\rightarrow \mathcal {F}(S^{\ell -1},\mathbb {R}^+)$ is defined by

(12) $$ \begin{align} \mathcal{N}_sy(a_1,a_2,\ldots,a_{\ell-1})=\bigg( \sum_{j\in S} e^{s\varphi(a_1,a_2,\ldots,a_{\ell-1},j)}y(a_2,\ldots,a_{\ell-1},j) \bigg)^{{1}/{q}}. \end{align} $$

Define the pressure function by

(13) $$ \begin{align} P_{\varphi}(s)=(q-1)q^{\ell-2}\log \sum_{j\in S}\psi_s(j), \end{align} $$

where $\psi _s$ is the unique strictly positive fixed point of $\mathcal {N}_s$. The function $\psi _s$ is defined on $S^{\ell -1}$ and can be extended to $S^k$ for all $1\leq k\leq \ell -2$ by induction. That is, for $a\in S^k$,

(14) $$ \begin{align} \psi^{(k)}_s(a)=\bigg( \sum_{j\in S}\psi^{(k+1)}_s(a,j) \bigg)^{{1}/{q}}. \end{align} $$

The Legendre transform of $P_{\varphi }$ is defined as

(15) $$ \begin{align} P^*_{\varphi} (\alpha):= \inf_{s\in \mathbb{R}} (-s \alpha +P_{\varphi} (s)). \end{align} $$

Denote by $L_{\varphi }$ the set of $\alpha \in \mathbb {R}$ such that $E(\alpha )\neq \emptyset $ . The following theorem is obtained by Fan, Schmeling, and Wu [Reference Fan, Schmeling and Wu11] and Wu [Reference Wu20] for the one-dimensional case.

Theorem 1.2. ([Reference Fan, Schmeling and Wu11, Theorem 1.1], [Reference Wu20, Theorem 3.1])

  1. (1) $L_{\varphi }= [ P^{\prime }_{\varphi }(-\infty ), P^{\prime }_{\varphi }(+\infty ) ]$ , where $P^{\prime }_{\varphi }(\pm \infty )=\lim _{s\rightarrow \pm \infty }P^{\prime }_{\varphi }(s)$ .

  2. (2) If $\alpha =P^{\prime }_{\varphi }(s_{\alpha })$ for some $s_{\alpha } \in \mathbb {R}\cup \{ \pm \infty \}$ , then $E(\alpha )\neq \emptyset $ , and the Hausdorff dimension of $E(\alpha )$ is equal to

    $$ \begin{align*} \dim_H E(\alpha)=\frac{-P^{\prime}_{\varphi}(s_{\alpha})s_{\alpha}+P_{\varphi}(s_{\alpha})}{q^{\ell-1}\log m}=\frac{P^*_{\varphi}(\alpha)}{q^{\ell-1}\log m}. \end{align*} $$

The other objective of this paper is to extend Theorem 1.2 from $\mathbb {N}$ to $\mathbb {N}^d$ (Theorem 1.5). The connection between Theorems 1.1 and 1.2 is that if $\ell =2$ (respectively $\ell =3$ ) and $\varphi (x_k,x_{2k})= x_k x_{2k}$ (respectively $\varphi (x_k,x_{2k},x_{3k})= x_k x_{2k} x_{3k}$ ) in equation (10), it is mentioned in [Reference Peres and Solomyak18] (respectively [Reference Peres, Schmeling, Seuret and Solomyak17]) that $\dim _H E(0)=\dim _H(X^2)$ (respectively $\dim _H E(0)=\dim _H(X^{2,3})$ ). The study of Hausdorff dimension of multiplicative subshifts can therefore be seen as a multifractal analysis of the multiple ergodic averages. From this vantage point, this investigation aims to provide some multifractal analysis results of the multiple ergodic averages in $\mathbb {N}^d$ .

To state the main results, we first introduce the multidimensional multiplicative subshift. For $k\geq 1$, let $\mathbf {p}_{1},\ldots ,\mathbf {p}_{k}\in \mathbb {N}^{d}$. The multidimensional version of equation (3) is defined as

(16) $$ \begin{align} X^{\mathbf{p}_{1},\mathbf{p}_{2},\ldots ,\mathbf{p}_{k}}=\{(x_{\mathbf{i} })_{\mathbf{i}\in \mathbb{N}^d}\in \Sigma _{m}^{\mathbb{N}^{d}}:x_{\mathbf{i}}x_{\mathbf{i\cdot p} _{1}}\cdots x_{\mathbf{i\cdot p}_{k}}=0\text{ for all } \mathbf{i}\in \mathbb{N} ^{d}\}\text{,} \end{align} $$

where $\mathbf {i\cdot j}$ denotes the coordinate-wise product vector of $\mathbf {i}$ and $\mathbf {j}$ , that is, $\mathbf {i\cdot j}=(i_1j_1,\ldots ,i_dj_d)$ for $ \mathbf {i}=(i_{l})_{l=1}^{d}$ , $\mathbf {j}=(j_{l})_{l=1}^{d}\in \mathbb {N} ^{d}$ . It is obvious that $X^{\mathbf {p}_{1},\mathbf {p}_{2},\ldots ,\mathbf {p }_{k}}$ is the $\mathbb {N}^{d}$ version of $X^{p_{1},p_{2},\ldots ,p_{k}}$ . Recently, Ban, Hu, and Lai [Reference Ban, Hu and Lai1] established the Minkowski dimension of the set defined by equation (16). Precisely, let $\mathbf { p}_i=(p_{i,1},p_{i,2},\ldots ,p_{i,d})\in \mathbb {N}_{\geq 2}^d$ for all $1\leq i\leq k$ , where $\mathbb {N}_{\geq 2}^d=(\mathbb {N} \setminus \{ 1\})^d$ is the set of d-dimensional vectors that are component-wise greater than or equal to 2. Suppose $\gcd (p_{i,\ell },p_{j,\ell })=1$ for all $1\leq i<j\leq k$ and $1\leq \ell \leq d$ . The formula for the Minkowski dimension of $X^{\mathbf {p}_{1},\mathbf {p}_{2},\ldots ,\mathbf {p}_{k}}$ is obtained as

(17) $$ \begin{align} \begin{aligned} \dim_M(X^{\mathbf{p}_{1},\mathbf{p}_{2},\ldots ,\mathbf{p}_{k}})=\ &\bigg[\underset{i=1}{\overset{k}{\prod}}\bigg(1-\frac{1}{p_{i,1}p_{i,2}\cdots p_{i,d}}\bigg)\bigg] \\ &\times\sum_{M_1,M_2,\ldots,M_d=1}^{\infty}\bigg[ \prod_{i=1}^{d}\bigg(\frac{1}{r_{M_i}^{(i)}}-\frac{1}{r_{M_i+1}^{(i)}}\bigg)\bigg] \log_m b_{M_1,M_2,\ldots,M_d}, \end{aligned} \end{align} $$

where $b_{M_1,M_2,\ldots ,M_d}$ is the number of admissible patterns on the lattice $\mathbb {L}_{M_1,M_2,\ldots ,M_d}$ in $\mathbb {N}_0^{k}=\{0,1,\ldots \}^{k}$ with forbidden set $\mathcal {F}=\{ x_{\vec {0}}=x_{\vec {e}_1}=x_{\vec {e}_2}=\cdots =x_{\vec {e}_{k}}=1 \}$ (see [Reference Ban, Hu and Lai1, Definition 2.6] for definitions of $\mathbb {L}_{M_1,M_2,\ldots ,M_d}$ and $r_{M_i+1}^{(i)}$ ).

To the best of our knowledge, dimension results for multidimensional multiplicative subshifts and the multifractal analysis of multiple averages on multidimensional lattices have rarely been reported. Brunet [Reference Brunet6] considered self-affine sponges under the multiplicative action and established the associated Ledrappier–Young formula together with the Hausdorff and Minkowski dimension formulas for such sponges. Ban, Hu, and Lai obtained a large deviation principle for the multiple average in $\mathbb {N}^d$ [Reference Ban, Hu and Lai2].

It is also emphasized that the problems of multifractal analysis and dimension formulas for multiple averages on ‘multidimensional lattices’ are new and challenging. The difficulty is that it is not easy to decompose the multidimensional lattice into independent sublattices according to the given ‘multiple constraints’, e.g., the $\mathbf {p}_{i}$ in equation (16), and to calculate their densities in the entire lattice. Fortunately, the technique developed in [Reference Ban, Hu and Lai1] is useful and leads us to the Hausdorff dimension of multidimensional multiplicative subshifts and the multifractal analysis of multiple averages on $\mathbb {N}^d$.

The first result of this paper is presented below, and it extends Theorem 1.1 from $\mathbb {N}$ to $\mathbb {N}^d$ .

Theorem 1.3. Let $A\in M_m(\{0,1\})$ . For $d\geq 1$ and $\mathbf {p}=(p_1,p_2,\ldots , p_d)\in \mathbb {N}_{\geq 2}^d$ , the Hausdorff dimension of the set

$$ \begin{align*} X^{\mathbf{p}}_A= \{ (x_{\mathbf{i}})_{\mathbf{i}\in \mathbb{N}^d} \in \{0,1,\ldots, m-1 \}^{\mathbb{N}^d} : A(x_{\mathbf{i}}, x_{\mathbf{i} \cdot \mathbf{p}})=1 \text{ for all } \mathbf{i} \in \mathbb{N}^d \} \end{align*} $$

is

$$ \begin{align*} \begin{aligned} \dim_H(X^{\mathbf{p}}_A)=\frac{p_1\cdots p_d-1}{p_1\cdots p_d}\log_m\sum_{i=0}^{m-1}t_{i}, \end{aligned} \end{align*} $$

where $(t_i)_{i=0}^{m-1}$ is the unique positive vector satisfying

(18) $$ \begin{align} t_i^{p_1\cdots p_d}=\sum_{j=0}^{m-1} A(i,j)t_j. \end{align} $$

Theorem 1.3 is applied to show that the Hausdorff dimension of $X^{\mathbf {p}}_A$ is strictly less than its Minkowski dimension (Example 1.4).

Example 1.4. When $m=2$ , $\mathbf {p}=(2,3)$ and $A=[ \begin {smallmatrix} 1&1\\ 1&0 \end {smallmatrix}]$ , we have

$$ \begin{align*} \begin{aligned} p_{00}=\frac{t_0}{t_0^6},\quad p_{01}=\frac{t_1}{t_0^6},\quad p_{10}=\frac{t_0}{t_1^6},\quad p_{00}+p_{01}=1,\quad p_{10}=1, \end{aligned} \end{align*} $$

then

(19) $$ \begin{align} \begin{aligned} t_{0}^6=t_0+t_1,\quad t_1^6=t_0, \end{aligned} \end{align} $$

which implies

$$ \begin{align*} \begin{aligned} t_1^{35}=t_1^5+1. \end{aligned} \end{align*} $$

Then the unique positive solution of equation (19) is $(t_0,t_1)\approx (1.1368,1.0216)$. Thus,

$$ \begin{align*} \begin{aligned} \dim_H(X_{A}^{\mathbf{p}})&=\frac{(6-1)}{6}\log_2(t_{0}+t_1)\\ &\approx\frac{5}{6\log2}\log(1.1368+1.0216)\\ &\approx 0.9251\\ &<0.9348\approx\dim_M(X_{A}^{\mathbf{p}}), \end{aligned} \end{align*} $$

where the last estimate for the Minkowski dimension is obtained from the dimension formula established in [Reference Ban, Hu and Lai1] (cf. equation (17)). In general, the equality $\dim _H(X_{A}^{\mathbf {p}})= \dim _M(X_{A}^{\mathbf {p}})$ holds only when the row sums of A are equal; the proof is similar to that of [Reference Kenyon, Peres and Solomyak16, Theorem 1.3].
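The numbers in this example are easy to check by machine. The sketch below is our own verification (the bisection bracket $[1,2]$ is an arbitrary choice): it solves $t_1^{35}=t_1^5+1$, recovers $t_0=t_1^6$, and evaluates the dimension formula of Theorem 1.3.

```python
from math import log2

# Example 1.4: t1 solves t1^35 = t1^5 + 1; then t0 = t1^6.
f = lambda t: t ** 35 - t ** 5 - 1
lo, hi = 1.0, 2.0                 # f(1) < 0 < f(2), so the root is bracketed
for _ in range(100):              # plain bisection
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
t1 = lo
t0 = t1 ** 6
print(t0, t1)                     # ~1.1368, ~1.0216
print(5 / 6 * log2(t0 + t1))      # dim_H ~ 0.9251 < dim_M ~ 0.9348
```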

For $n\in \mathbb {N}$, let $[\kern-1.3pt[ 1,n]\kern-1.2pt]$ denote the interval of integers $\{1,2,\ldots ,n\}$. For $\mathbf {N}=(N_1,N_2,\ldots , N_d)\in \mathbb {N}^d$, write $[\kern-1.3pt[ 1,\mathbf {N}]\kern-1.2pt]=[\kern-1.3pt[ 1,N_1]\kern-1.2pt]\times [\kern-1.3pt[ 1,N_2]\kern-1.2pt]\times \cdots \times [\kern-1.3pt[ 1,N_d]\kern-1.2pt]$. The notation $\mathbf {N}\rightarrow \infty $ means $N_i\to \infty $ for all $1\leq i\leq d$. The multidimensional multiple ergodic average in $\mathbb {N}^d$ is defined by

(20) $$ \begin{align} A_{\mathbf{N}} \varphi (x)=\frac{1}{N_1\cdots N_d} \sum_{\mathbf{j}\in [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \varphi (x_{\mathbf{j}},\ldots,x_{\mathbf{j}\cdot\mathbf{p}^{\ell-1}}) \end{align} $$

and its level set is

(21) $$ \begin{align} E(\alpha)=\bigg\{ (x_{\mathbf{i}})_{\mathbf{i}\in \mathbb{N}^d}\in \Sigma_m^{\mathbb{N}^d} : \lim_{\mathbf{N}\rightarrow\infty} \frac{1}{N_1 \cdots N_d} \sum_{\mathbf{j}\in[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \varphi (x_{\mathbf{j}},\ldots,x_{\mathbf{j}\cdot\mathbf{p}^{\ell-1}})=\alpha \bigg\}. \end{align} $$

The following result is an $\mathbb {N}^d$ version of Theorem 1.2. By abuse of notation, we continue to write $P_{\varphi }$ for the $\mathbb {N}^d$ version of the pressure function, which is defined in equation (34).

Theorem 1.5

  1. (1) $L_{\varphi }= [ P^{\prime }_{\varphi }(-\infty ), P^{\prime }_{\varphi }(+\infty ) ]$ , where $P^{\prime }_{\varphi }(\pm \infty )=\lim _{s\rightarrow \pm \infty }P^{\prime }_{\varphi }(s)$ and $P_{\varphi }(s)$ is defined by equation (34).

  2. (2) If $\alpha =P^{\prime }_{\varphi }(s_{\alpha })$ for some $s_{\alpha } \in \mathbb {R}\cup \{ \pm \infty \}$ , then $E(\alpha )\neq \emptyset $ and the Hausdorff dimension of $E(\alpha )$ is equal to

    $$ \begin{align*} \dim_H E(\alpha)=\frac{-P^{\prime}_{\varphi}(s_{\alpha})s_{\alpha}+P_{\varphi}(s_{\alpha})}{(p_1\cdots p_d)^{\ell-1}\log m}=\frac{P^*_{\varphi}(\alpha)}{(p_1\cdots p_d)^{\ell-1}\log m}. \end{align*} $$

Example 1.6. Let $p_1=2$, $p_2=3$, $m=2$, $\ell=2$, and let $\varphi $ be the potential given by $\varphi (x,y)= x_{\mathbf {1}}y_{\mathbf {1}}$ with $x=(x_{\mathbf {i}})_{\mathbf {i}\in \mathbb {N}^2},y=(y_{\mathbf {i}})_{\mathbf {i}\in \mathbb {N}^2}\in \Sigma _2^{\mathbb {N}^2}$. (Here, $\mathbf {1}$ denotes the d-dimensional vector with all components being 1.) So

$$ \begin{align*}[\varphi([i],[j])]_{(i,j)\in \{0,1\}^2}=\left[\begin{matrix} 0&0\\ 0&1 \end{matrix}\right]\!, \end{align*} $$

where $[i]=\{x=(x_{\mathbf {i}})_{\mathbf {i}\in \mathbb {N}^2}: x_{\mathbf {1}}=i\}$ .

Then the nonlinear equation (33) becomes

$$ \begin{align*} \begin{aligned} \psi_s(0)^6&=\psi_s(0)+\psi_s(1),\\ \psi_s(1)^6&=\psi_s(0)+e^s\psi_s(1). \end{aligned} \end{align*} $$

Since $(0)^{\infty }\in \Sigma _2^{\mathbb {N}}$, Theorem 4.18 gives $P^{\prime }_{\varphi }(-\infty )=0$. Taking $s=-\infty $, we obtain

$$ \begin{align*} \begin{aligned} \psi_{-\infty}(0)^6&=\psi_{-\infty}(0)+\psi_{-\infty}(1),\\ \psi_{-\infty}(1)^6&=\psi_{-\infty}(0). \end{aligned} \end{align*} $$

Then,

$$ \begin{align*} \begin{aligned} \dim_H E(0)=\frac{(6-1)\log[\psi_{-\infty}(0)+\psi_{-\infty}(1)]}{6\log 2} \approx 0.9251. \end{aligned} \end{align*} $$

It is worth pointing out that the set $X_A^{\mathbf {p}}$ in Example 1.4 is a subset of $E(0)$ in Example 1.6, and yet $\dim _H (X_A^{\mathbf {p}})=\dim _H E(0)$. This phenomenon was observed for the one-dimensional version in [Reference Peres, Schmeling, Seuret and Solomyak17], and Examples 1.4 and 1.6 confirm the $\mathbb {N}^d$ version of this equality as well. Moreover, the spectrum $\alpha \mapsto \dim _H E(\alpha )$ is presented in Figure 1.
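The spectrum of Figure 1 can be reproduced numerically from equations (33) and (34) together with Theorem 1.5. The sketch below is our own illustration (the iteration count and the finite-difference step are arbitrary choices): it approximates $\psi_s$ by iterating $\psi\mapsto (M_s\psi)^{1/6}$ with $M_s=[\begin{smallmatrix} 1&1\\ 1&e^s \end{smallmatrix}]$, and then evaluates $\alpha=P^{\prime}_{\varphi}(s)$ and $\dim_H E(\alpha)$ on a grid of s.

```python
import numpy as np

# Example 1.6: psi_s solves psi(0)^6 = psi(0) + psi(1),
# psi(1)^6 = psi(0) + e^s psi(1), i.e. psi = (M_s psi)^(1/6).
def pressure(s, iters=200):
    M = np.array([[1.0, 1.0], [1.0, np.exp(s)]])
    psi = np.ones(2)
    for _ in range(iters):
        psi = (M @ psi) ** (1.0 / 6.0)
    return 5.0 * np.log(psi.sum())          # equation (34) with l = 2, P = 6

h = 1e-5                                    # centred difference for P'(s)
for s in np.linspace(-8.0, 8.0, 9):
    dP = (pressure(s + h) - pressure(s - h)) / (2 * h)   # alpha = P'(s)
    dim = (-s * dP + pressure(s)) / (6 * np.log(2))      # Theorem 1.5(2)
    print(f"s={s:+.1f}  alpha={dP:.4f}  dim={dim:.4f}")
```

At $s=0$ this gives $\dim_H E(\alpha)=1$, and as $s\to -\infty$ the dimension decreases to the value $\approx 0.9251$ computed above, consistent with Figure 1.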

Figure 1 The spectrum $\alpha\mapsto \dim_H(E(\alpha))$ .

The remainder of this paper is organized as follows. In §2, we give a partition of $\mathbb {N}^d$ (Lemma 2.1) and then compute the limits of the relevant densities (Lemma 2.2). In §§3 and 4, we prove Theorems 1.3 and 1.5, respectively.

2 Preliminaries

Given integers $d\geq 1$ and $p_1,p_2,\ldots ,p_d\geq 2$, let $\mathcal {M}_{\mathbf {p}}=\{ (p_1^m,p_2^m,\ldots ,p_d^m) :m\geq 0\}$ be the subset of $\mathbb {N}^d$ called a lacunary lattice. For $\mathbf {i}\in \mathbb {N}^d$, denote by $\mathcal {M}_{\mathbf {p}}(\mathbf {i})=\mathbf {i}\cdot \mathcal {M}_{\mathbf {p}}$ the lattice obtained by multiplying $\mathcal {M}_{\mathbf {p}}$ coordinate-wise by $\mathbf {i}$. Finally, we define $\mathcal {I}_{\mathbf {p}}=\{ \mathbf {i}\in \mathbb {N}^d : p_j\nmid i_j\mbox { for some }1\leq j \leq d \}$, an index set of $\mathbb {N}^d$ such that $\mathcal {M}_{\mathbf {p}}(\mathbf {i})\cap \mathcal {M}_{\mathbf {p}}(\mathbf {j})=\emptyset $ for any $\mathbf {i}\neq \mathbf {j}\in \mathcal {I}_{\mathbf {p}}$. The following lemma gives a disjoint decomposition of $\mathbb {N}^d$; it is the $\mathbb {N}^d$ version of [Reference Ban, Hu and Lai1, Lemma 2.1].

Lemma 2.1. For $p_1,p_2,\ldots ,p_d\geq 2$ ,

$$ \begin{align*} \mathbb{N}^d=\displaystyle\bigsqcup\limits_{\mathbf{i} \in \mathcal{I}_{\mathbf{p}}}\mathcal{M}_{\mathbf{p}}(\mathbf{i}). \end{align*} $$

More notation is needed to characterize the partition of $[\kern-1.3pt[ 1,\mathbf {N}]\kern-1.2pt]$ for $\mathbf {N}=(N_1,\ldots ,N_d) \in \mathbb {N}^d$. We define $\mathcal {J}_{\mathbf {N};\ell }= \{ \mathbf {i}\in [\kern-1.3pt[ 1,\mathbf {N}]\kern-1.2pt] : |\mathcal {M}_{\mathbf {p}}(\mathbf {i})\cap [\kern-1.3pt[ 1,\mathbf {N}]\kern-1.2pt]|=\ell \}$, where $| \cdot |$ denotes cardinality. The following lemma gives the limit of the density of $\mathcal {J}_{\mathbf {N};\ell }\cap \mathcal {I}_{\mathbf {p}}$; it is the $\mathbb {N}^d$ version of [Reference Ban, Hu and Lai1, Lemma 2.2].

Lemma 2.2. For $N_1, N_2,\ldots ,N_d\geq 1$ and $\ell \geq 1$, we have the following assertions.

  1. (1) $ |\mathcal {J}_{\mathbf {N};\ell }| = \prod _{k=1}^{d}\lfloor {N_k}/{p_k^{\ell -1}}\rfloor -\prod _{k=1}^{d}\lfloor {N_k}/{p_k^\ell }\rfloor .$

  2. (2) $ \lim _{\mathbf {N}\to \infty }{| \mathcal {J}_{\mathbf {N};\ell }\cap \mathcal {I}_{\mathbf {p}} | }/{ | \mathcal {J}_{\mathbf { N};\ell }| }=1-{1}/{p_1\cdots p_d}.$

  3. (3) $ \lim _{\mathbf {N}\to \infty }{| \mathcal {J}_{\mathbf {N};\ell }\cap \mathcal {I}_{\mathbf {p}} | }/{ N_1\cdots N_d }={(p_1\cdots p_d-1)^2}/{(p_1\cdots p_d)^{\ell +1}}.$

  4. (4) $\lim _{\mathbf {N}\to \infty }\frac {1}{N_1\cdots N_d}\sum _{\ell =1}^{N_1\cdots N_d}|\mathcal {J}_{\mathbf {N};\ell }\cap \mathcal {I}_{\mathbf {p}}|\log F_{\ell } =\sum _{\ell =1}^{\infty }\lim _{\mathbf {N}\to \infty }\frac {|\mathcal {J}_{\mathbf {N};\ell }\cap \mathcal {I}_{\mathbf {p}}|}{N_1 \cdots N_d}\log F_{\ell }.$
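Items (1) and (3) can be tested by enumerating a finite box. The sketch below is our own check for $d=2$ and $\mathbf{p}=(2,3)$ (the box size is an arbitrary choice with $p_i^{\ell}\mid N_i$ for small $\ell$): item (1) is an exact counting identity, while the empirical densities approach $(p_1\cdots p_d-1)^2/(p_1\cdots p_d)^{\ell+1}$ of item (3) for small $\ell$.

```python
# Empirical check of Lemma 2.2(1) and (3) for d = 2, p = (2, 3).
p, N = (2, 3), (2 ** 6, 3 ** 4)

def fibre_len(a, b):
    """|M_p(i) ∩ [[1, N]]| for i = (a, b)."""
    l = 0
    while a * p[0] ** l <= N[0] and b * p[1] ** l <= N[1]:
        l += 1
    return l

all_counts, I_counts = {}, {}
for a in range(1, N[0] + 1):
    for b in range(1, N[1] + 1):
        l = fibre_len(a, b)
        all_counts[l] = all_counts.get(l, 0) + 1
        if a % p[0] != 0 or b % p[1] != 0:        # (a, b) in I_p
            I_counts[l] = I_counts.get(l, 0) + 1

P, vol = p[0] * p[1], N[0] * N[1]
for l in sorted(all_counts):
    item1 = (N[0] // p[0] ** (l - 1)) * (N[1] // p[1] ** (l - 1)) \
            - (N[0] // p[0] ** l) * (N[1] // p[1] ** l)
    print(l, all_counts[l] == item1,              # item (1): always True
          I_counts[l] / vol, (P - 1) ** 2 / P ** (l + 1))   # item (3)
```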

We decompose $\Sigma _m^{\mathbb {N}^d}$ as follows:

$$ \begin{align*} \Sigma_m^{\mathbb{N}^d}=\prod_{\mathbf{i} \in \mathcal{I}_{\mathbf{p}}}S^{\mathcal{M}_{\mathbf{p}}(\mathbf{i})}, \end{align*} $$

where $S=\{0,1,\ldots ,m-1\}$ .

Let $\mu $ be a probability measure on $\Sigma _m$. We consider $\mu $ as a measure on $S^{\mathcal {M}_{\mathbf {p}}(\mathbf {i})}$, which is identified with $\Sigma _m$, for every $\mathbf {i} \in \mathcal {I}_{\mathbf {p}}$. We then define the infinite product measure $\mathbb {P}_\mu $ on $\prod _{\mathbf {i} \in \mathcal {I}_{\mathbf {p}}}S^{\mathcal {M}_{\mathbf {p}}(\mathbf {i})}$ as a product of copies of $\mu $. More precisely, for any word u of size $[\kern-1.3pt[ 1,\mathbf {N}]\kern-1.2pt]$, we define

(22) $$ \begin{align} \mathbb{P}_\mu([u])=\prod_{\mathbf{i} \in \mathcal{I}_{\mathbf{p}}\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \mu([u|_{\mathcal{M}_{\mathbf{ p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]}]), \end{align} $$

where $[u]$ denotes the cylinder of all words starting with u.
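For concreteness, the sketch below (our own illustration for $d=2$, $\mathbf{p}=(2,3)$; the Bernoulli choice of $\mu$ and the box size are arbitrary) evaluates equation (22) by reading the word of u along each fibre $\mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]$, $\mathbf{i}\in\mathcal{I}_{\mathbf{p}}$, and multiplying the $\mu$-masses. For a product measure $\mu$ the telescopic product collapses to a cell-by-cell product, which gives a built-in consistency check; for a Markov $\mu$ one would instead evaluate the Markov mass of each fibre word.

```python
import numpy as np

p, N, q = (2, 3), (12, 12), 0.3     # box [[1, N]], mu = Bernoulli(q) on {0, 1}

def fibre_word(u, a, b):
    """The word of u along M_p((a, b)) ∩ [[1, N]]."""
    word = []
    while a <= N[0] and b <= N[1]:
        word.append(u[a - 1, b - 1])
        a, b = a * p[0], b * p[1]
    return word

def P_mu(u):
    mass = 1.0
    for a in range(1, N[0] + 1):
        for b in range(1, N[1] + 1):
            if a % p[0] != 0 or b % p[1] != 0:          # (a, b) in I_p
                for x in fibre_word(u, a, b):
                    mass *= q if x == 1 else 1 - q      # mu of the fibre word
    return mass

u = np.random.default_rng(0).integers(0, 2, size=N)
print(P_mu(u))
print(q ** u.sum() * (1 - q) ** (u.size - u.sum()))     # equal: mu is i.i.d.
```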

3 Proof of Theorem 1.3

Before embarking on the proof of Theorem 1.3, we sketch out the flow of the proof for readers’ convenience. We first decompose the $\mathbb {N}^d$ lattice into disjoint one-dimensional sublattices, then define the probability measure $\mathbb {P}_\mu $ on $X_{A}^{\mathbf {p}}$ . Subsequently, we calculate the pointwise dimension (cf. equation (23)) at $u\in \Sigma _{m}^{\mathbb {N}^d}$ ,

(23) $$ \begin{align} \dim_{\mathrm{loc}}( \mathbb{P}_\mu, u)=\lim_{\mathbf{N} \rightarrow \infty} \frac{-\log\mathbb{P}_\mu[u|_{[\kern-1.3pt[ 1,\mathbf{ N}]\kern-1.2pt]}]}{N_1 \cdots N_d}, \end{align} $$

and the Hausdorff dimension of $\mathbb {P}_\mu $ (cf. equation (24), also see [Reference Fan8] for dimension of a measure),

(24) $$ \begin{align} \dim_H (\mathbb{P}_\mu)=\inf \{ \dim_H(F) : F \mbox{ Borel}, \mathbb{P}_\mu(F)=1 \}, \end{align} $$

to obtain the lower bound of $\dim _H(X_{A}^{\mathbf {p}})$ (Lemma 3.1). Finally, we maximize the measure dimension $\dim _H(\mathbb {P}_\mu )$ (Lemma 3.2), and find an upper bound of $\dim _H(X_{A}^{\mathbf {p}})$ (Lemma 3.3) to obtain the Hausdorff dimension of $X_{A}^{\mathbf {p}}$ .

Lemma 3.1. ( $\mathbb {N}^d$ version of [Reference Kenyon, Peres and Solomyak16, Proposition 2.3])

Let $\Omega =\Sigma _A$ be a shift of finite type on $\Sigma _m^{\mathbb {N}}$ and $\mu $ be a probability measure on $\Omega $ . Then,

(25) $$ \begin{align} \dim_{\mathrm{loc}}(\mathbb{P}_{\mu},x)=s(\Omega, \mu)\quad \mbox{for }\mathbb{P}_{\mu}\mbox{-almost every (a.e.) }x\in X_{A}^{\mathbf{p}}, \end{align} $$

where

(26) $$ \begin{align} s(\Omega,\mu):=(p_1\cdots p_d-1)^2\sum_{k=1}^{\infty}\frac{H_m^{\mu}(\beta_k)}{(p_1\cdots p_d)^{k+1}} \end{align} $$

where $\beta _k$ is the partition of $\Omega $ into cylinders of length k and

$$ \begin{align*} H_m^{\mu}(\beta_k)=-\sum_{\alpha\in\beta_k}\mu(\alpha)\log_m \mu(\alpha). \end{align*} $$

Therefore, $\dim _H(\mathbb {P}_{\mu })=s(\Omega ,\mu )$ , and $\dim _H(X_{A}^{\mathbf {p}})\geq s(\Omega , \mu )$ .

Proof. To obtain $\dim _{\mathrm {loc}}(\mathbb {P}_\mu ,u)=s(\Omega ,\mu )$ for $\mathbb {P}_\mu $-a.e. u, we prove that for every $\ell _1, \ell _2,\ldots , \ell _d\in \mathbb {N}$ and $\ell =\min _{1\leq i \leq d} \ell _i$,

$$ \begin{align*} \begin{aligned} &(1)~\liminf_{\mathbf{N}\rightarrow\infty}\frac{-\log\mathbb{P}_\mu[u|_{[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]}]}{N_1 \cdots N_d}\geq (p_1\cdots p_d-1)^2\sum_{k=1}^{\ell}\frac{H_m^{\mu}(\beta_k)}{(p_1\cdots p_d)^{k+1}} \quad\mbox{for } \mathbb{P}_\mu\mbox{-a.e. } u,\\ &\mbox{and}\\ &(2)~\limsup_{\mathbf{N}\rightarrow\infty}\frac{-\log\mathbb{P}_\mu[u|_{[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]}]}{N_1 \cdots N_d}\\&\quad\qquad\leq (p_1\cdots p_d-1)^2\sum_{k=1}^{\ell}\frac{H_m^{\mu}(\beta_k)}{(p_1\cdots p_d)^{k+1}}+\frac{(\ell+1)\log_m(2m)}{(p_1\cdots p_d)^{\ell}}\quad\mbox{for } \mathbb{P}_\mu\mbox{-a.e. } u. \end{aligned} \end{align*} $$

Fixing $\ell _1,\ldots ,\ell _d\in \mathbb {N}$, we may restrict to $N_i=p_i^{\ell _i}r_i$ with $r_i\in \mathbb {N}$ for all $1\leq i\leq d$. Indeed, for $p_i^{\ell _i}r_i\leq N_i <p_i^{\ell _i}(r_i+1)$, $1\leq i \leq d$, we have

$$ \begin{align*} \begin{aligned} \frac{-\log\mathbb{P}_\mu[u|_{[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]}]}{N_1 \cdots N_d}&\geq\frac{-\log\mathbb{P}_\mu[u|_{[\kern-1.3pt[ 1,{(p_1^{\ell_1}r_1,\ldots, p_d^{\ell_d}r_d)}]\kern-1.2pt]}]}{p_1^{\ell_1}(r_1+1)\cdots p_d^{\ell_d}(r_d+1)}\\ &= \frac{r_1\cdots r_d}{(r_1+1)\cdots (r_d+1)} \frac{-\log\mathbb{P}_\mu[u|_{[\kern-1.3pt[ 1,{(p_1^{\ell_1}r_1,\ldots, p_d^{\ell_d}r_d)}]\kern-1.2pt]}]}{p_1^{\ell_1}r_1\cdots p_d^{\ell_d}r_d}, \end{aligned} \end{align*} $$

which implies that

$$ \begin{align*} \begin{aligned} \liminf_{\mathbf{N}\rightarrow\infty}\frac{-\log\mathbb{P}_\mu[u|_{[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]}]}{N_1 \cdots N_d}= \liminf_{r_1,\ldots,r_d\rightarrow\infty}\frac{-\log\mathbb{P}_\mu[u|_{[\kern-1.3pt[ 1,{(p_1^{\ell_1}r_1,\ldots, p_d^{\ell_d}r_d)}]\kern-1.2pt]}]}{p_1^{\ell_1}r_1\cdots p_d^{\ell_d}r_d}. \end{aligned} \end{align*} $$

The lim sup is dealt with similarly.

Recall

$$ \begin{align*} \mathcal{J}_{\mathbf{N};\ell}\cap \mathcal{I}_{\mathbf{p}}=\{ \mathbf{i} \in \mathcal{I}_{\mathbf{p}}\cap [\kern-1.3pt[ 1,\mathbf{ N}]\kern-1.2pt] : |\mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]|=\ell \}. \end{align*} $$

The method below estimates the main part $\mathcal {G}$ and the remainder $\mathcal {H}$; it is similar to that of Kenyon et al [Reference Kenyon, Peres and Solomyak16]. Let

$$ \begin{align*} \mathcal{G}_{\mathbf{N}}:=\bigcup_{k=1}^{\ell}\bigcup_{\mathbf{i}\in \mathcal{J}_{\mathbf{N};k}\cap \mathcal{I}_{\mathbf{p}}}(\mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]) \end{align*} $$

and

$$ \begin{align*} \mathcal{H}_{\mathbf{N}}:=[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]-\mathcal{G}_{\mathbf{N}}. \end{align*} $$

Then by the definition of the measure $\mathbb {P}_\mu $ , we have

$$ \begin{align*} \begin{aligned} \mathbb{P}_\mu[u|_{[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]}]=\mathbb{P}_\mu[u|_{\mathcal{G}_{\mathbf{N}}}]\cdot \mathbb{P}_\mu[u|_{\mathcal{H}_{\mathbf{N}}}]. \end{aligned} \end{align*} $$

Claim 1. We have

$$ \begin{align*} \begin{aligned} \mathbb{P}_\mu[u|_{\mathcal{G}_{\mathbf{N}}}]&=\prod_{k=1}^{\ell}\prod_{\mathbf{i}\in \mathcal{J}_{\mathbf{N};k}\cap \mathcal{I}_{\mathbf{p}}}\mathbb{P}_\mu[u|_{\mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]}]\\ &=\prod_{k=1}^{\ell}\prod_{\mathbf{i}\in \mathcal{J}_{\mathbf{N};k}\cap \mathcal{I}_{\mathbf{p}}}\mu[u|_{\mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]}]. \end{aligned} \end{align*} $$

Proof of Claim 1

The claim follows directly from the definition of $\mathbb {P}_\mu $ and is omitted.

Claim 2. For all $k\leq \ell $ ,

$$ \begin{align*} \begin{aligned} \sum_{\mathbf{i}\in\mathcal{J}_{\mathbf{N};k}\cap \mathcal{I}_{\mathbf{p}}} \frac{-\log_m \mu[u|_{\mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]}]}{(p_1\cdots p_d-1)^2(N_1 \cdots N_d/(p_1\cdots p_d)^{k+1})} \rightarrow H_m^{\mu}(\beta_k), \end{aligned} \end{align*} $$

as $N_i=p_i^{\ell _i} r_i\rightarrow \infty $ for $1\leq i \leq d$ and $\mathbb {P}_\mu $ -a.e. u.

Proof of Claim 2

The random variables $u\mapsto -\log _m\mu [u|_{\mathcal {M}_{\mathbf {p}}(\mathbf {i})\cap [\kern-1.3pt[ 1,\mathbf {N}]\kern-1.2pt]}]$, $\mathbf {i}\in \mathcal {J}_{\mathbf {N};k}\cap \mathcal {I}_{\mathbf {p}}$, are independent and identically distributed (i.i.d.), and their common expectation equals $H_m^{\mu }(\beta _k)$. Note that

$$ \begin{align*} |\mathcal{J}_{\mathbf{N};k}\cap \mathcal{I}_{\mathbf{p}}|=\frac{(p_1\cdots p_d-1)^2 N_1 \cdots N_d}{(p_1\cdots p_d)^{k+1}}. \end{align*} $$

Fixing $k\leq \min _{1\leq i\leq d}{\ell _i}$ and taking $N_i=p_i^{\ell _i} r_i$ with $r_i\rightarrow \infty $ for all $1\leq i \leq d$, we obtain infinitely many i.i.d. random variables. The proof is completed by the law of large numbers (LLN).

Then item (1) follows from

$$ \begin{align*} \begin{aligned} (3)~&\frac{-\log_m \mathbb{P}_\mu[u|_{\mathcal{G}_{\mathbf{N}}}]}{N_1 \cdots N_d}\\ &=\sum_{k=1}^{\ell}\frac{(p_1\cdots p_d-1)^2}{(p_1\cdots p_d)^{k+1}} \sum_{\mathbf{i}\in\mathcal{J}_{\mathbf{N};k}\cap \mathcal{I}_{\mathbf{p}}} \frac{-\log_m \mu[u|_{\mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]}]}{(p_1\cdots p_d-1)^2(N_1 \cdots N_d/(p_1\cdots p_d)^{k+1})}\\ &\longrightarrow \sum_{k=1}^{\ell}\frac{(p_1\cdots p_d-1)^2H_m^{\mu}(\beta_k)}{(p_1\cdots p_d)^{k+1}},\\ \mbox{and}&\\ (4)~&\mathbb{P}_\mu[u|_{[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]}]\leq \mathbb{P}_\mu[u|_{\mathcal{G}_{\mathbf{N}}}]. \end{aligned} \end{align*} $$

To prove item (2), we work with $\mathbb {P}_\mu [u|_{\mathcal {H}_{\mathbf {N}}}]$ . Since

$$ \begin{align*}|\mathcal{H}_{\mathbf{N}}|&=N_1 \cdots N_d-\sum_{k=1}^{\ell}(p_1\cdots p_d-1)^2\frac{k N_1 \cdots N_d}{(p_1\cdots p_d)^{k+1}}\\&=\frac{N_1 \cdots N_d}{(p_1\cdots p_d)^{\ell}}\bigg[(\ell+1)-\frac{\ell}{p_1\cdots p_d} \bigg]\\&=(\ell+1)\prod_{i=1}^dp_i^{\ell_i-\ell}r_i-\frac{\ell }{p_1\cdots p_d}\prod_{i=1}^dp_i^{\ell_i-\ell}r_i\\&<(\ell+1)\prod_{i=1}^dp_i^{\ell_i-\ell}r_i, \end{align*} $$

then

$$ \begin{align*} \begin{aligned} \sum_{r_1,\ldots,r_d=1}^{\infty} 2^{-|\mathcal{H}_{\mathbf{N}} |}=\sum_{r_1,\ldots,r_d=1}^{\infty} 2^{-Cr_1\cdots r_d}<\infty, \end{aligned} \end{align*} $$

where $C=[(\ell +1)-{\ell }/{p_1\cdots p_d} ]\prod _{i=1}^dp_i^{\ell _i-\ell }>0$ .

Define

$$ \begin{align*} \begin{aligned} \mathcal{S}(\mathcal{H}_{\mathbf{N}}):=\{u\in X_{A}^{\mathbf{p}} : \mathbb{P}_\mu[u|_{\mathcal{H}_{\mathbf{N}}}]\leq (2m)^{-|\mathcal{H}_{\mathbf{N}}|} \}. \end{aligned} \end{align*} $$

Since there are at most $m^{|\mathcal {H}_{\mathbf {N}}|}$ cylinder sets $[u|_{\mathcal {H}_{\mathbf {N}}}]$ , we have

$$ \begin{align*} \mathbb{P}_\mu(\mathcal{S}(\mathcal{H}_{\mathbf{N}}))\leq 2^{-|\mathcal{H}_{\mathbf{N}}|}. \end{align*} $$

This implies

$$ \begin{align*} \sum_{r_1,\ldots,r_d=1}^{\infty}\mathbb{P}_\mu(\mathcal{S}(\mathcal{H}_{\mathbf{N}})) \leq \sum_{r_1,\ldots,r_d=1}^{\infty} 2^{-|\mathcal{H}_{\mathbf{N}}|}<\infty. \end{align*} $$

Thus,

$$ \begin{align*} \lim_{\mathbf{N\to\infty}}\sum_{r_1=N_1,\ldots,r_d=N_d}^{\infty}\mathbb{P}_\mu(\mathcal{S}(\mathcal{H}_{\mathbf{N}}))=0. \end{align*} $$

That is,

$$ \begin{align*} \begin{aligned} \mathbb{P}_\mu\bigg(\bigcap_{N_1,\ldots,N_d\geq 1}\bigcup_{r_1=N_1,\ldots,r_d=N_d}^{\infty} \mathcal{S}(\mathcal{H}_{\mathbf{N}})\bigg)=0. \end{aligned} \end{align*} $$

Hence for $\mathbb {P}_\mu $-a.e. $u\in X_{A}^{\mathbf {p}}$, there exist $M_1(u),\ldots ,M_d(u)\in \mathbb {N}$ such that $u\notin \mathcal {S}(\mathcal {H}_{\mathbf {N}})$ for all $N_1=p_1^{\ell _1} r_1\geq M_1(u),\ldots ,N_d=p_d^{\ell _d} r_d\geq M_d(u)$. For such u and $N_i\geq M_i(u)$ for all $1\leq i \leq d$, we have

$$ \begin{align*} \begin{aligned} \frac{-\log_m \mathbb{P}_\mu[u|_{\mathcal{H}_{\mathbf{N}}}]}{N_1 \cdots N_d}<\frac{|\mathcal{H}_{\mathbf{N}}|\log_m(2m)}{N_1 \cdots N_d}<\frac{(\ell+1)\log_m(2m)}{(p_1\cdots p_d)^{\ell}}. \end{aligned} \end{align*} $$

The proof is complete.

Lemma 3.2. ( $\mathbb {N}^d$ version of [Reference Kenyon, Peres and Solomyak16, Corollary 2.6])

Let A be a primitive $m\times m \ 0$ - $1$ matrix and $\Omega =\Sigma _A$ be the corresponding subshift of finite type. Let $\bar {t}=(t_i)_{i=0}^{m-1}$ be the solution of equation (18). Then the unique optimal measure on $\Sigma _A$ is Markov, with the vector of initial probabilities $\mathbf {P}=(P_i)_{i=0}^{m-1}=(\sum _{i=0}^{m-1}t_i)^{-1} \bar {t}$ and the matrix of transition probabilities

(27) $$ \begin{align} (p_{ij})_{i,j=0}^{m-1}\quad\mbox{where } p_{ij}=\frac{t_j}{t_i^{p_1\cdots p_d}}\quad \mbox{if } A(i,j)=1. \end{align} $$

Moreover, $s(\Omega , \mu )=(p_1\cdots p_d-1)\log _mt_{\phi }$ , where $t_\phi ^{p_1\cdots p_d}=\sum _{i=0}^{m-1} t_i$ .

Proof. Since

$$ \begin{align*} \sum_{k=1}^{\infty}\frac{H_m^{\mu}(\beta_k)}{(p_1\cdots p_d)^{k+1}}=\bigg[\sum_{k=1}^{\infty}\frac{H_m^{\mu}(\beta_{1})}{(p_1\cdots p_d)^{k+1}}+\frac{1}{p_1\cdots p_d}\sum_{i=0}^{m-1}P_i\sum_{k=1}^{\infty} \frac{H_m^{\mu_{i}}(\beta_{k}(\Omega_i))}{(p_1\cdots p_d)^{k+1}}\bigg] \end{align*} $$

and

$$ \begin{align*} \sum_{k=1}^\infty \frac{1}{(p_1\cdots p_d)^k}=\frac{1}{p_1\cdots p_d-1}, \end{align*} $$

we have

$$ \begin{align*} s(\Omega,\mathbb{P}_\mu)=\frac{p_1\cdots p_d-1}{p_1\cdots p_d}\bigg[H_m^{\mu}(\beta_{1})+\frac{1}{p_1\cdots p_d-1}\sum_{i=0}^{m-1}P_is(\Omega_i,\mu_{i})\bigg], \end{align*} $$

where $P_i=\mu [i]$ and $\mu _{i}$ is the conditional measure of $\mu $ on $\Omega _i$.

Since the measure $\mathbb {P}_\mu $ is completely determined by the probability vector $\mathbf {P}=(P_i)_{i=0}^{m-1}$ and the measures $\mu _{i}$ on $\Omega _i$ , the optimizations on $\Omega _i$ are independent for all $0\leq i \leq m-1$ . Thus, if $\mathbb {P}_\mu $ is optimal for $\Omega $ , then $\mu _{i}$ is optimal for $\Omega _i$ , $0\leq i\leq m-1$ . Since

$$ \begin{align*} &\max_{\mathbf{{p}}}\bigg[H_m^{\mu}(\beta_{1})+\frac{1}{p_1\cdots p_d-1}\sum_{i=0}^{m-1}P_is(\Omega_i)\bigg]\\ &\quad=\max_{\mathbf{{p}}}\bigg[-\sum_{i=0}^{m-1}P_i\log_mP_i+\frac{1}{p_1\cdots p_d-1}\sum_{i=0}^{m-1}P_is(\Omega_i)\bigg]\\ &\quad=\max_{\mathbf{{p}}}\bigg[\sum_{i=0}^{m-1}P_i(a_i-\log_mP_i)\bigg], \end{align*} $$

we have

$$ \begin{align*} s(\Omega):&=\max\{s(\Omega,\mathbb{P}_\mu): \mu \mbox{ is a probability measure on }\Omega\}\\ &=\max_{\mathbf{{p}}}\frac{p_1\cdots p_d-1}{p_1\cdots p_d}\bigg[\sum_{i=0}^{m-1}P_i(a_i-\log_mP_i)\bigg], \end{align*} $$

where $a_i=s(\Omega _i)/(p_1\cdots p_d-1)$.

Then we obtain the optimal probability vector

$$ \begin{gather*} \mathbf{P}=(P_i)_{i=0}^{m-1},\quad P_i=\frac{m^{a_i}}{\sum_{j=0}^{m-1}m^{a_j}}=\frac{t_i}{t_{\phi}^{p_1\cdots p_d}},\\[5pt] t_\phi=m^{s(\Omega)/(p_1\cdots p_d-1)},\quad t_i=m^{s(\Omega_i)/(p_1\cdots p_d-1)},\quad 0\leq i\leq m-1, \end{gather*} $$

and

$$ \begin{align*} \begin{aligned} t_\phi^{p_1\cdots p_d}=\sum_{i=0}^{m-1}t_i. \end{aligned} \end{align*} $$

By the chain rule for conditional entropy, we have

$$ \begin{align*} \begin{aligned} H_m^{\mu}(\beta_{k+1})=H_m^{\mu}(\beta_{k})+H_m^{\mu}(\beta_{k+1}|\beta_{k}), \end{aligned} \end{align*} $$

where for two partitions $\alpha $ and $\beta $ ,

$$ \begin{align*} H_m^{\mu}(\alpha|\beta)=\sum_{B\in\beta}\bigg(-\sum_{A\in\alpha}\mu(A|B)\log_m \mu(A|B) \bigg)\mu(B). \end{align*} $$

Then,

$$ \begin{align*} s(\Omega,\mathbb{P}_\mu)&=(p_1\cdots p_d-1)^2\sum_{k=1}^{\infty}\frac{H_m^{\mu}(\beta_k)}{(p_1\cdots p_d)^{k+1}}\\&= \bigg( \frac{p_1\cdots p_d-1}{p_1\cdots p_d} \bigg)\bigg[ H_m^{\mu}(\beta_1)+\sum_{k=1}^{\infty}\frac{H_m^{\mu}(\beta_{k+1}|\beta_{k})}{(p_1\cdots p_d)^{k}}\bigg]. \end{align*} $$

Observe that

$$ \begin{align*} H_m^{\mu}(\beta_1)&=-\sum_{i=0}^{m-1}\frac{t_i}{t_{\phi}^{p_1\cdots p_d}}\log_m(\frac{t_i}{t_{\phi}^{p_1\cdots p_d}})\nonumber\\ &=p_1\cdots p_d\log_mt_{\phi}-\sum_{i=0}^{m-1}\frac{t_i}{t_{\phi}^{p_1\cdots p_d}}\log_mt_i\nonumber\\ &=p_1\cdots p_d\log_mt_{\phi}-\sum_{i=0}^{m-1}\mu[i]\log_mt_i \end{align*} $$

and

$$ \begin{align*} H_m^{\mu}(\beta_{k+1}|\beta_{k})&=\sum_{[u]\in \beta_k}\mu[u]\bigg(-\sum_{w:[uw]\in \beta_{k+1}}\frac{t_{uw}}{t_{u}^{p_1\cdots p_d}}\log_m\bigg(\frac{t_{uw}}{t_{u}^{p_1\cdots p_d}}\bigg) \bigg)\\ &=\sum_{[u]\in \beta_k}\mu[u]\bigg(p_1\cdots p_d\log_mt_u-\sum_{w:[uw]\in \beta_{k+1}}\frac{t_{uw}}{t_{u}^{p_1\cdots p_d}}\log_mt_{uw} \bigg)\\ &=p_1\cdots p_d\sum_{[u]\in \beta_k}\mu[u]\log_mt_u-\sum_{[v]\in \beta_{k+1}}\mu[v]\log_mt_v,\\ \end{align*} $$

where $\mu [uw]=\mu [u]{t_{uw}}/{t_u^{p_1\cdots p_d}}$ . Then we have

$$ \begin{align*} \begin{aligned} s(\Omega,\mathbb{P}_\mu)=(p_1\cdots p_d-1)\log_mt_{\phi}. \end{aligned} \end{align*} $$

The proof is complete.
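Lemma 3.2 can be verified numerically for the data of Example 1.4. The sketch below is our own check (the truncation at 120 terms is harmless since the terms decay geometrically): it computes $H_m^{\mu}(\beta_k)$ for the (non-stationary) Markov measure via the chain rule above and sums the series in equation (26), reproducing $(p_1\cdots p_d-1)\log_m t_{\phi}\approx 0.9251$.

```python
import numpy as np

P, m = 6, 2                                   # p1*p2 = 6, alphabet {0, 1}
A = np.array([[1.0, 1.0], [1.0, 0.0]])

t = np.ones(2)                                # solve equation (18): t_i^P = (A t)_i
for _ in range(300):
    t = (A @ t) ** (1.0 / P)

trans = A * t[None, :] / t[:, None] ** P      # p_ij = A(i,j) t_j / t_i^P
init = t / t.sum()                            # P_i = t_i / t_phi^P

def H(row):                                   # base-m entropy of a prob. vector
    row = row[row > 0]
    return -np.sum(row * np.log(row)) / np.log(m)

series, pi, Hk = 0.0, init.copy(), H(init)    # Hk = H_m(beta_k), pi = law of X_k
for k in range(1, 120):
    series += (P - 1) ** 2 * Hk / P ** (k + 1)
    Hk += sum(pi[i] * H(trans[i]) for i in range(m))   # chain rule
    pi = pi @ trans
print(series)                                 # ~0.9251
print((P - 1) * np.log(t.sum()) / (P * np.log(m)))     # (P-1) log_m t_phi
```

(Here $t_\phi=(t_0+t_1)^{1/P}$, so $(P-1)\log_m t_\phi=\tfrac{P-1}{P}\log_m(t_0+t_1)$.)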

Lemma 3.3. ( $\mathbb {N}^d$ version of [Reference Kenyon, Peres and Solomyak16, Lemma 5.2])

Let $\mu $ be a Markov measure on $\Omega $ , with the vector of initial probabilities $\mathbf {P}=(\sum _{i=0}^{m-1}t_i)^{-1} \bar {t}$ and the matrix of transition probabilities

(28) $$ \begin{align} (p_{ij})_{i,j=0}^{m-1}\quad\mbox{where}\ p_{ij}=\frac{t_j}{t_i^{p_1\cdots p_d}}\quad \mbox{if } A(i,j)=1. \end{align} $$

Then,

$$ \begin{align*} \liminf_{\mathbf{N}\rightarrow\infty}-\frac{ \log \mathbb{P}_{\mu}([x|_{[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt] }])}{N_1 \cdots N_d}\leq(p_1\cdots p_d-1) \log_m t_{\phi} \end{align*} $$

for all $x\in X_{A}^{\mathbf {p}}$ .

Proof. The proof is similar to that of Lemma 4.9 with $\varphi $ the zero function.

Lemma 3.4. ( $\mathbb {N}^d$ version of [Reference Kenyon, Peres and Solomyak16, Proposition 2.4])

Let $\Omega =\Sigma _A$ be a shift of finite type on $\Sigma _m^{\mathbb {N}}$ . Then,

(29) $$ \begin{align} \dim_H(X_{A}^{\mathbf{p}})=\sup_{\mu} \dim_H(\mathbb{P}_{\mu})=\sup_{\mu} s(\Omega, \mu), \end{align} $$

where the supremum is over the Borel probability measures on $\Omega $ .

Proof. By Lemmas 4.11 and 3.3, we get $\dim _H(X_{A}^{\mathbf {p}})\leq (p_1\cdots p_d-1) \log _m t_{\phi }$. Equation (29) then follows from Lemma 3.2.

Proof of Theorem 1.3

The theorem follows by combining Lemmas 3.1 and 3.4.

4 Proof of Theorem 1.5

The stages of the proof of Theorem 1.5 follow Fan, Schmeling, and Wu [Reference Fan, Schmeling and Wu11]. First, we establish the LLN in our setting (Lemma 4.4), and then use the unique positive fixed point of the nonlinear operator $\mathcal {N}_s$ to construct a family of telescopic product measures $\mathbb {P}_{\mu _s}$ in equations (35) and (36). The convexity of this fixed point, the LLN, and the Billingsley lemma (Lemma 4.11) then give the upper and lower bounds for the Hausdorff dimension of $E(\alpha )$ (Lemmas 4.12 and 4.16, respectively), and we establish Theorem 4.1 in §4.1. To complete the proof of Theorem 1.5, we treat the cases where s tends to $\pm \infty $ in §4.2 (Theorems 4.18 and 4.19).

4.1 The case when $s_{\alpha }$ is finite

Theorem 4.1

  1. (1) If $\alpha =P^{\prime }_{\varphi }(s_{\alpha })$ for some $s_{\alpha } \in \mathbb {R}$ , then

    $$ \begin{align*} \dim_H E(\alpha)=\frac{-P^{\prime}_{\varphi}(s_{\alpha})s_{\alpha}+P_{\varphi}(s_{\alpha})}{(p_1\cdots p_d)^{\ell-1}\log m}=\frac{P^*_{\varphi}(\alpha)}{(p_1\cdots p_d)^{\ell-1}\log m}. \end{align*} $$
  2. (2) For $\alpha \in (P^{\prime }_{\varphi }(-\infty ),P^{\prime }_{\varphi }(0)]$ , $\dim _H E^+(\alpha )=\dim _H E(\alpha )$ .

  3. (3) For $\alpha \in [P^{\prime }_{\varphi }(0),P^{\prime }_{\varphi }(+\infty ))$ , $\dim _H E^-(\alpha )=\dim _H E(\alpha )$ .

Proof. The proof follows from Lemmas 4.16 and 4.12 below.

Consider a probability space $(\Sigma _m^{\mathbb {N}^d},\mathbb {P}_\mu )$ . Let $X_{\mathbf {j}}(x)=x_{\mathbf {j}}$ be the $\mathbf {j}$ th coordinate projection. For $\mathbf {i} \in \mathcal {I}_{\mathbf {p}}$ , consider the process $Y^{(\textbf i)}=(X_{\mathbf {j}})_{\mathbf { j}\in \mathcal {M}_{\mathbf {p}}(\mathbf {i})}$ . Then, by the definition of $\mathbb {P}_\mu $ , the following fact is obvious.

Lemma 4.2. The processes $Y^{(\textbf i)}=(X_{\mathbf {j}})_{\mathbf {j}\in \mathcal {M}_{\mathbf {p}}(\mathbf {i})}$ for $\mathbf {i} \in \mathcal {I}_{\mathbf {p}}$ are $\mathbb {P}_\mu $ -independent and identically distributed with $\mu $ as the common probability law.

Now we consider $(\prod _{\mathbf {i} \in \mathcal {I}_{\mathbf {p}}}S^{\mathcal {M}_{\mathbf {p}}(\mathbf {i})},\mathbb {P}_\mu )$ as a probability space $(\Omega ,\mathbb {P}_\mu )$. Let $(F_{\mathbf {j}})_{\mathbf {j}\in \mathbb {N}^d}$ be functions defined on $\Sigma _m$. For each $\mathbf {j}$, there exists a unique $\mathbf {i}(\mathbf {j})\in \mathcal {I}_{\mathbf {p}}$ such that $\mathbf {j}\in \mathcal {M}_{\mathbf {p}}(\mathbf {i}(\mathbf {j}))$. Then, $x\mapsto F_{\mathbf {j}}(x|_{\mathcal {M}_{\mathbf {p}}(\mathbf {i}(\mathbf {j}))})$ defines a random variable on $\Omega $. Later, we will study the LLN for the variables $\{ F_{\mathbf {j}}(x|_{\mathcal {M}_{\mathbf {p}}(\mathbf {i}(\mathbf {j}))}) \}_{\mathbf {j}\in \mathbb {N}^d}$. Notice that if $\mathbf {i(j)}\neq \mathbf {i(j')}$, then the two variables $F_{\mathbf {j}}(x|_{\mathcal {M}_{\mathbf {p}}(\mathbf {i}(\mathbf {j}))})$ and $F_{\mathbf {j'}}(x|_{\mathcal {M}_{\mathbf {p}}(\mathbf {i}(\mathbf {j'}))})$ are independent. However, if $\mathbf {i(j)}= \mathbf {i(j')}$, they are not independent in general. To prove the LLN, we will need the following technical lemma, which allows us to compute the expectation of products of the $F_{\mathbf {j}}(x|_{\mathcal {M}_{\mathbf {p}}(\mathbf {i}(\mathbf {j}))})$.

Lemma 4.3. Let $(F_{\mathbf {j}})_{\mathbf {j}\in \mathbb {N}^d}$ be functions defined on $\Sigma _m$ . Then for any $N_1,N_2,\ldots ,N_d\geq 1$ , we have

$$ \begin{align*} \mathbb{E}_{\mathbb{P}_\mu}\bigg( \prod_{\mathbf{j}\in [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt] } F_{\mathbf{j}}(x|_{\mathcal{M}_{\mathbf{ p}}(\mathbf{i(j)})}) \bigg)=\prod_{\ell=1}^{N_1\cdots N_d} \prod_{\mathbf{i} \in\mathcal{J}_{\mathbf{N};\ell}\cap \mathcal{I}_{\mathbf{p}}} \mathbb{E}_\mu \bigg( \prod_{\mathbf{y}\in \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{ N}]\kern-1.2pt] } F_{\mathbf{y}}(y) \bigg). \end{align*} $$

In particular, for any function G defined on $\Sigma _m$ , for any $\mathbf {i}\in \mathbb {N}^d$ ,

$$ \begin{align*} \mathbb{E}_{\mathbb{P}_\mu}G(x|_{\mathcal{M}_{\mathbf{p}}(\mathbf{i})})=\mathbb{E}_\mu G(\cdot ). \end{align*} $$

Proof. Let

$$ \begin{align*} Q_{\mathbf{N}}(x)= \prod_{\mathbf{j}\in [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} F_{\mathbf{j}}(x|_{\mathcal{M}_{\mathbf{p}}(\mathbf{i(j)})}) \end{align*} $$

and

$$ \begin{align*} Q_{\mathbf{N, i}}(x)=\prod_{\mathbf{y}\in \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} F_{\mathbf{y}}(x|_{\mathcal{M}_{\mathbf{p}}(\mathbf{i})}). \end{align*} $$

Since the variables $x|_{\mathcal {M}_{\mathbf {p}}(\mathbf {i})}$ for $\mathbf {i} \in \mathcal {I}_{\mathbf {p}}$ are independent under $\mathbb {P}_\mu $ (by Lemma 4.2), we have

(30) $$ \begin{align} \mathbb{E}_{\mathbb{P}_\mu} Q_{\mathbf{N}}=\prod_{\mathbf{i} \in \mathcal{I}_{\mathbf{p}}\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \mathbb{E}_{\mathbb{P}_\mu} Q_{\mathbf{N, i}}. \end{align} $$

Then by the definition of $\mathcal {J}_{\mathbf {N};\ell }\cap \mathcal {I}_{\mathbf {p}}$ , we can rewrite equation (30) to get

$$ \begin{align*} \mathbb{E}_{\mathbb{P}_\mu} Q_{\mathbf{N}}=\prod_{\ell=1}^{N_1\cdots N_d} \prod_{\mathbf{i} \in\mathcal{J}_{\mathbf{N};\ell}\cap \mathcal{I}_{\mathbf{p}}}\mathbb{E}_{\mathbb{P}_\mu} Q_{\mathbf{N, i}}. \end{align*} $$

Moreover, the marginal measure of $\mathbb {P}_\mu $ on each $S^{\mathcal {M}_{\mathbf {p}}(\mathbf {i})}$ equals $\mu $. So,

$$ \begin{align*} \mathbb{E}_{\mathbb{P}_\mu} Q_{\mathbf{N, i}}=\mathbb{E}_\mu \bigg( \prod_{\mathbf{y}\in \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} F_{\mathbf{y}}(y) \bigg). \end{align*} $$

Now, for any function G defined on $\Sigma _m$ and any $\mathbf {j}\in \mathbb {N}^d$ , if we set $F_{\mathbf {j}}=G$ and $F_{\mathbf {j'}}=1$ for $\mathbf {j'}\neq \mathbf {j}$ , we have

$$ \begin{align*} \mathbb{E}_{\mathbb{P}_\mu} G(x|_{\mathcal{M}_{\mathbf{p}}(\mathbf{i(j)})}) =\mathbb{E}_{\mu} G(y). \end{align*} $$

The proof is thus completed.

To prove the LLN, we need the following result. Recall that the covariance of two bounded functions $f,g$ with respect to $\mu $ is defined by

$$ \begin{align*} \mathrm{cov}_{\mu}(f,g)=\mathbb{E}_{\mu}[ (f-\mathbb{E}_{\mu}f)(g-\mathbb{E}_{\mu} g) ]. \end{align*} $$

When the functions $(F_{\mathbf {j}})_{\mathbf {j}\in \mathbb {N}^d}$ are all the same function F, we have the following LLN.

Lemma 4.4. Let F be a function defined on $\Sigma _m$ . Suppose that there exist $C>0$ and $0<\eta <p_1\cdots p_d$ such that for any $\mathbf {i} \in \mathcal {I}_{\mathbf {p}}$ and any $\ell _1,\ell _2\in \mathbb {N}\cup \{ 0\}$ ,

$$ \begin{align*} \mathrm{cov}_{\mu}(F_{\mathbf{i}\cdot\mathbf{p}^{\ell_1}},F_{\mathbf{i}\cdot\mathbf{p}^{\ell_2}})\leq C \eta^{({\ell_1+\ell_2})/{2}}. \end{align*} $$

( $\mathbf {p}^{\ell }=(p_1^\ell ,p_2^\ell ,\ldots ,p_d^\ell ).$ ) Then for $\mathbb {P}_\mu $ -a.e. $x\in \Sigma _m^{\mathbb {N}^d}$ ,

$$ \begin{align*} \lim_{\mathbf{N} \rightarrow \infty} \frac{1}{N_1\cdots N_d}\sum_{\mathbf{j}\in [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt] }( F (x|_{\mathcal{M}_{\mathbf{p}}(\mathbf{i(j)})})-\mathbb{E}_{\mu}F )=0. \end{align*} $$

Proof. Without loss of generality, we may assume $\mathbb {E}_{\mathbb {P}_\mu } F (x|_{\mathcal {M}_{\mathbf {p}}(\mathbf {i(j)})})=0$ for all $\mathbf {j}\in \mathbb {N}^d$ . Our goal is to prove $\lim _{\mathbf {N} \rightarrow \infty } Y_{\mathbf {N}}=0 \ \mathbb {P}_\mu $ -almost everywhere, where

$$ \begin{align*} Y_{\mathbf{N}}=\frac{1}{N_1\cdots N_d}\sum_{\mathbf{j}\in [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]}X_{\mathbf{j}} \quad\mbox{with } X_{\mathbf{j}}=F (x|_{\mathcal{M}_{\mathbf{p}}(\mathbf{i(j)})}). \end{align*} $$

It is enough to show

$$ \begin{align*} \sum_{ N_1,N_2,\ldots,N_d=1}^{\infty}\mathbb{E}_{\mathbb{P}_\mu} Y_{\mathbf{N}}^2 <+\infty. \end{align*} $$

Indeed, convergence of this series implies $\sum _{\mathbf {N}} Y_{\mathbf {N}}^2<\infty $ $\mathbb {P}_\mu $-a.s., and hence $Y_{\mathbf {N}}\rightarrow 0$ $\mathbb {P}_\mu $-a.e. Notice that

(31) $$ \begin{align} \mathbb{E}_{\mathbb{P}_\mu} Y_{\mathbf{N}}^2=\frac{1}{(N_1\cdots N_d)^2}\sum_{\mathbf{j}_1,\mathbf{j}_2\in[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \mathbb{E}_{\mathbb{P}_\mu}X_{\mathbf{j}_1} X_{\mathbf{j}_2}. \end{align} $$

By Lemma 4.2, we have $\mathbb {E}_{\mathbb {P}_\mu }X_{\mathbf {j}_1} X_{\mathbf {j}_2}\neq 0$ only if $\mathbf {i}(\mathbf { j}_1)=\mathbf {i}(\mathbf {j}_2)$ . So,

$$ \begin{align*} \sum_{\mathbf{j}_1,\mathbf{j}_2\in[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \mathbb{E}_{\mathbb{P}_\mu}X_{\mathbf{j}_1} X_{\mathbf{j}_2}= \sum_{\mathbf{i} \in \mathcal{I}_{\mathbf{p}}\cap[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \sum_{\mathbf{j}_1,\mathbf{j}_2\in \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \mathbb{E}_{\mathbb{P}_\mu}X_{\mathbf{j}_1} X_{\mathbf{j}_2}. \end{align*} $$

By Lemma 2.2, we can rewrite the above sum as

(32) $$ \begin{align} \sum_{\ell=1}^{N_1\cdots N_d} \sum_{\mathbf{i} \in\mathcal{J}_{\mathbf{N};\ell}\cap \mathcal{I}_{\mathbf{p}}} \sum_{\mathbf{j}_1,\mathbf{ j}_2\in \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \mathbb{E}_{\mathbb{P}_\mu}X_{\mathbf{j}_1} X_{\mathbf{j}_2}. \end{align} $$

Recall that $\mathbb {E}_{\mathbb {P}_\mu }X_{\mathbf {j}}=\mathbb {E}_{\mu }F$ for all $\mathbf {j}\in \mathbb {N}^d$ (Lemma 4.3). For $\mathbf {j}_1,\mathbf {j}_2\in \mathcal {M}_{\mathbf {p}}(\mathbf {i})\cap [\kern-1.3pt[ 1,\mathbf {N}]\kern-1.2pt]$ , we write $\mathbf {j}_1=\mathbf {i}\cdot \mathbf {p}^{\ell _1}$ and $\mathbf {j}_2=\mathbf {i}\cdot \mathbf {p}^{\ell _2}$ with $0\leq \ell _1,\ell _2 \leq |\mathcal {M}_{\mathbf {p}}(\mathbf {i})\cap [\kern-1.3pt[ 1,\mathbf {N}]\kern-1.2pt]|$ . By the Cauchy–Schwarz inequality and hypothesis, we obtain

$$ \begin{align*} | \mathbb{E}_{\mathbb{P}_\mu}X_{\mathbf{j}_1} X_{\mathbf{j}_2} | \leq \mathbb{E}_{\mu}F^2 \leq C\eta^{|\mathcal{M}_{\mathbf{p}}(\mathbf{ i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]|}. \end{align*} $$

So,

$$ \begin{align*} \sum_{\mathbf{j}_1,\mathbf{j}_2\in \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} | \mathbb{E}_{\mathbb{P}_\mu}X_{\mathbf{j}_1} X_{\mathbf{j}_2} | \leq C |\mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]|^2 \eta^{| \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]|}. \end{align*} $$

Substituting this estimate into equation (32) and using Lemma 2.2, we get

$$ \begin{align*} \begin{aligned} \bigg|\sum_{\mathbf{j}_1,\mathbf{j}_2\in[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \mathbb{E}_{\mathbb{P}_\mu}X_{\mathbf{j}_1} X_{\mathbf{j}_2}\bigg| &\leq \sum_{\ell=1}^{N_1\cdots N_d} \sum_{\mathbf{i} \in\mathcal{J}_{\mathbf{N};\ell}\cap \mathcal{I}_{\mathbf{p}}}\ell^2 \eta^\ell\\ &=\sum_{\ell=1}^{\min_i\{\lfloor \log_{p_i}N_i \rfloor \}} \bigg( \prod_{k=1}^{d}\bigg\lfloor\frac{N_k}{p_k^{\ell-1}}\bigg\rfloor-\prod_{k=1}^{d}\bigg\lfloor\frac{N_k}{p_k^\ell}\bigg\rfloor \bigg) \ell^2 \eta^\ell\\ &\leq\sum_{\ell=1}^{\lfloor \log_{p_1\cdots p_d}N_1\cdots N_d \rfloor } \bigg( \prod_{k=1}^{d}\bigg\lfloor\frac{N_k}{p_k^{\ell-1}}\bigg\rfloor-\prod_{k=1}^{d}\bigg\lfloor\frac{N_k}{p_k^\ell}\bigg\rfloor \bigg) \ell^2 \eta^\ell. \end{aligned} \end{align*} $$

Then, applying Lemma 2.2, the last sum is bounded by

$$ \begin{align*} \begin{aligned} &(N_1\cdots N_d)(p_1\cdots p_d-1)^2\sum_{\ell=1}^{\lfloor \log_{p_1\cdots p_d}N_1\cdots N_d \rfloor } \frac{\ell^2\eta^\ell}{(p_1\cdots p_d)^{\ell+1}}\\ &\quad=O\bigg( N_1\cdots N_d\bigg( \frac{\eta}{p_1\cdots p_d} \bigg)^{{\lfloor \log_{p_1\cdots p_d}N_1\cdots N_d \rfloor } } \bigg)\\ &\quad=O( (N_1\cdots N_d)^{1-\epsilon} ) \end{aligned} \end{align*} $$

for some $\epsilon>0$ , which gives the convergence of the series preceding equation (31). The proof is complete.

Lemma 4.5. Let $\mu $ be any probability measure on $\Sigma _m$ and let $F\in \mathcal {F}(S^\ell )$ . For $\mathbb {P}_\mu $ -a.e. $x\in \Sigma _m^{\mathbb {N}^d}$ , we have

$$ \begin{align*} \begin{aligned} &\lim_{\mathbf{N} \rightarrow \infty} \frac{1}{N_1\cdots N_d}\sum_{\mathbf{j}\in [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt] } F (x_{\mathbf{ j}},\ldots,x_{\mathbf{j}\cdot\mathbf{p}^{\ell-1}}) \\ &\quad =(p_1\cdots p_d-1)^2\sum_{k=1}^{\infty}\frac{1}{(p_1\cdots p_d)^{k+1}}\sum_{j=0}^{k-1}\mathbb{E}_{\mu}F(y_j,\ldots, y_{j+\ell-1}). \end{aligned} \end{align*} $$

Proof. For each $\mathbf {j}\in [\kern-1.3pt[ 1,\mathbf {N}]\kern-1.2pt]$ , take

$$ \begin{align*} F (x|_{\mathcal{M}_{\mathbf{p}}(\mathbf{i(j)})})=F (x_{\mathbf{j}},\ldots,x_{\mathbf{j}\cdot\mathbf{p}^{\ell-1}}). \end{align*} $$

Then, the proof follows by Lemmas 2.2 and 4.4.

Lemma 4.6. For $\mathbb {P}_\mu $ -a.e. $x\in \Sigma _m^{\mathbb {N}^d}$ , we have

$$ \begin{align*} D(\mathbb{P}_\mu,x)=\frac{(p_1\cdots p_d-1)^2}{\log m}\sum_{\ell=1}^{\infty}\frac{H_{\ell}(\mu)}{(p_1\cdots p_d)^{\ell+1}}, \end{align*} $$

where $H_{\ell }(\mu )=-\sum _{a_1\cdots a_\ell }\mu ([a_1\cdots a_\ell ])\log \mu ([a_1\cdots a_\ell ])$ .

Proof. The proof is similar to the proof of [Reference Fan, Schmeling and Wu11, Theorem 1.3] combined with Lemmas 2.1, 2.2, and 4.5.

Let $\mathcal {F}(S^{\ell -1},\mathbb {R}^+)$ denote the cone of non-negative real functions on $S^{\ell -1}$ and $s\in \mathbb {R}$ . The nonlinear operator $\mathcal {N}_s:\mathcal {F}(S^{\ell -1},\mathbb {R}^+)\rightarrow \mathcal {F}(S^{\ell -1},\mathbb {R}^+)$ is defined by

(33) $$ \begin{align} \mathcal{N}_sy(a_1,a_2,\ldots,a_{\ell-1})=\bigg( \sum_{j\in S} e^{s\varphi(a_1,a_2,\ldots,a_{\ell-1},j)}y(a_2,\ldots,a_{\ell-1},j) \bigg)^{{1}/{p_1\cdots p_d}}. \end{align} $$

Define the pressure function by

(34) $$ \begin{align} P_{\varphi}(s)=(p_1\cdots p_d-1)(p_1\cdots p_d)^{\ell-2}\log \sum_{j\in S}\psi_s(j), \end{align} $$

where $\psi _s$ is the unique strictly positive fixed point of $\mathcal {N}_s$. The function $\psi _s$ is defined on $S^{\ell -1}$ and can be extended to $S^k$ for all $1\leq k\leq \ell -2$ by induction: for $a\in S^k$,

$$ \begin{align*} \psi^{(k)}_s(a)=\bigg( \sum_{j\in S}\psi^{(k+1)}_s(a,j) \bigg)^{{1}/{p_1\cdots p_d}}. \end{align*} $$

We then define the $(\ell -1)$-step Markov measure $\mu _s$ on $\Sigma _m$ with the initial law

(35) $$ \begin{align} \pi_s([a_1,\ldots,a_{\ell-1}])=\prod_{j=1}^{\ell-1}\frac{\psi_s(a_1,\ldots,a_j)}{\psi^{p_1\ldots p_d}_s(a_1,\ldots,a_{j-1})} \end{align} $$

and the transition probability

(36) $$ \begin{align} Q_s([a_1,\ldots,a_{\ell-1}],[a_2,\ldots,a_{\ell}])=e^{s\varphi(a_1,\ldots,a_{\ell})}\frac{\psi_s(a_2,\ldots,a_\ell)}{\psi^{p_1\ldots p_d}_s(a_1,\ldots,a_{\ell-1})}. \end{align} $$
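The following sketch is our own illustration of equations (33)–(36) (the choice $\varphi(a,b,c)=abc$, the value $P=p_1\cdots p_d=6$, and the iteration counts are arbitrary; iterating $\mathcal{N}_s$ from the constant function is one convenient way to approximate its fixed point). It computes $\psi_s$ on $S^{\ell-1}$, extends it by the induction displayed above, evaluates the pressure (34), and checks that each row of $Q_s$ in equation (36) sums to 1 at the fixed point.

```python
import numpy as np
from itertools import product

m, l, P = 2, 3, 6                       # e.g. p = (2, 3), so P = p1 * p2 = 6
phi = lambda w: float(np.prod(w))       # phi(a, b, c) = a * b * c

def N_s(y, s):
    """One application of the operator N_s of equation (33)."""
    out = np.empty_like(y)
    for a in product(range(m), repeat=l - 1):
        out[a] = sum(np.exp(s * phi(a + (j,))) * y[a[1:] + (j,)]
                     for j in range(m)) ** (1.0 / P)
    return out

def fixed_point(s, iters=200):
    y = np.ones((m,) * (l - 1))
    for _ in range(iters):
        y = N_s(y, s)                   # approximates the fixed point psi_s
    return y

s = 1.0
psi = fixed_point(s)
psi1 = psi.sum(axis=-1) ** (1.0 / P)    # extension of psi_s to S^1
print((P - 1) * P ** (l - 2) * np.log(psi1.sum()))      # pressure (34)

# Transition probabilities (36); each row sums to 1 at the fixed point.
for a in product(range(m), repeat=l - 1):
    row = [np.exp(s * phi(a + (j,))) * psi[a[1:] + (j,)] / psi[a] ** P
           for j in range(m)]
    print(a, "->", [round(float(x), 4) for x in row],
          "sum =", round(sum(row), 6))
```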

In the following, we establish a relation between the mass $\mathbb {P}_{\mu _s}([x|_{[\kern-1.3pt[ 1,\mathbf {N}]\kern-1.2pt]}])$ and the multiple ergodic sum $\sum _{\mathbf {j}\in [\kern-1.3pt[ 1,\mathbf {N}]\kern-1.2pt]} \varphi (x_{\mathbf {j}},\ldots ,x_{\mathbf {j}\cdot \mathbf {p}^{\ell -1}})$. This can be regarded as the Gibbs property of the measure $\mathbb {P}_{\mu _s}$.

Recall that for any $\mathbf {j}\in \mathbb {N}^d$, there is a unique $\mathbf {i(j)}\in \mathcal {I}_{\mathbf {p}}$ such that $\mathbf {j}=\mathbf {i(j)}\cdot \mathbf {p}^j$ for some $j \geq 0$.

Define

$$ \begin{align*} \lambda_{\mathbf{j}}:=\left\{ \begin{aligned} &\{ \mathbf{i(j)}, \mathbf{i(j)}\cdot \mathbf{p},\ldots, \mathbf{i(j)}\cdot \mathbf{p}^j\} &\mbox{if } j<\ell-1,\\ &\{ \mathbf{i(j)}\cdot \mathbf{p}^{j-(\ell-1)},\ldots, \mathbf{i(j)}\cdot \mathbf{p}^j\} &\mbox{if } j \geq \ell-1. \end{aligned} \right. \end{align*} $$

For $x= (x_{\mathbf {j}})_{\mathbf {j}\in \mathbb {N}^d}\in \Sigma _m^{\mathbb {N}^d}$ , we define

$$ \begin{align*} B_{\mathbf{N}}(x):=\sum_{\mathbf{j}\in [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \log \psi_s (x|_{\lambda_{\mathbf{j}}}). \end{align*} $$

The following formula is a consequence of the definitions of $\mu _s$ and $\mathbb {P}_{\mu _s}$ .

Lemma 4.7. We have

$$ \begin{align*} \begin{aligned} \log \mathbb{P}_{\mu_s}([x|_{[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]}])&= s\sum_{\mathbf{j}\in [\kern-1.3pt[ 1,\lfloor\mathbf{N}/\mathbf{ p}^{\ell-1}\rfloor]\kern-1.2pt]} \varphi (x_{\mathbf{j}},\ldots,x_{\mathbf{j}\cdot\mathbf{p}^{\ell-1}})\\ &\quad-\bigg(N_1\cdots N_d-\bigg\lfloor \frac{N_1\cdots N_d}{p_1\cdots p_d} \bigg\rfloor \bigg)p_1\cdots p_d \log\psi_s(\emptyset)\\ &\quad-p_1\cdots p_d B_{\mathbf{\lfloor N/p \rfloor}}(x)+B_{\mathbf{N}}(x). \end{aligned} \end{align*} $$

(For $\ell \geq 1$ , ${ \lfloor \mathbf {N}/\mathbf {p}^\ell \rfloor }=( \lfloor {N_1}/{p_1^\ell } \rfloor ,\ldots , \lfloor {N_d}/{p_d^\ell } \rfloor )$ .)

Proof. By the definition of $\mathbb {P}_{\mu _s}$ , we have

(37) $$ \begin{align} \log\mathbb{P}_{\mu_s}([x|_{[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]}])= \sum_{\mathbf{i} \in \mathcal{I}_{\mathbf{p}}\cap[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \log \mu_s([x|_{\mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]}]). \end{align} $$

By the definition of $\mu_s$, if $|\mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]| \leq \ell -1$, we have

(38) $$ \begin{align} \begin{aligned} \log \mu_s([x|_{\mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]}]) &=\sum_{j=0}^{| \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]|-1} \log \frac{\psi_s(x_{\mathbf{i}},\ldots,x_{\mathbf{i}\cdot\mathbf{p}^{j}})}{\psi_s^{p_1\cdots p_d}(x_{\mathbf{i}},\ldots,x_{\mathbf{i}\cdot\mathbf{p}^{j-1}})}\\ &=\sum_{\mathbf{k} \in \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \log \frac{\psi_s(x|_{\lambda_{\mathbf{k}}})}{\psi_s^{p_1\cdots p_d}(x|_{\lambda_{\mathbf{\lfloor k/p \rfloor}}})}. \end{aligned} \end{align} $$

If $|\mathcal {M}_{\mathbf {p}}(\mathbf {i})\cap [\kern-1.3pt[ 1,\mathbf {N}]\kern-1.2pt]| \geq \ell $ , $\log \mu _s([x|_{\mathcal {M}_{\mathbf {p}}(\mathbf {i})\cap [\kern-1.3pt[ 1,\mathbf {N}]\kern-1.2pt]}])$ is equal to

(39) $$ \begin{align} &\sum_{j=0}^{\ell-2} \log \frac{\psi_s(x_{\mathbf{i}},\ldots,x_{\mathbf{i}\cdot\mathbf{p}^{j}})}{\psi_s^{p_1\cdots p_d}(x_{\mathbf{i}},\ldots,x_{\mathbf{i}\cdot\mathbf{p}^{j-1}})}\nonumber\\ &\qquad+\sum_{j=\ell-1}^{|\mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]|-1} \log \frac{\psi_s(x_{\mathbf{i}\cdot \mathbf{p}^{j-\ell+2}},\ldots,x_{\mathbf{i}\cdot\mathbf{p}^{j}})e^{s\varphi(x_{\mathbf{i}\cdot \mathbf{p}^{j-\ell+1}},\ldots,x_{\mathbf{i}\cdot\mathbf{p}^{j}})}}{\psi_s^{p_1\cdots p_d}(x_{\mathbf{i}\cdot \mathbf{p}^{j-\ell+1}},\ldots,x_{\mathbf{i}\cdot\mathbf{p}^{j-1}})}\nonumber\\ &\quad= \sum_{j=0}^{| \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]|-1} \log \frac{\psi_s(x_{\mathbf{i}},\ldots,x_{\mathbf{i}\cdot\mathbf{p}^{j}})}{\psi_s^{p_1\cdots p_d}(x_{\mathbf{i}},\ldots,x_{\mathbf{i}\cdot\mathbf{p}^{j-1}})} +s\sum_{j=\ell-1}^{|\mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]|-1} \varphi(x_{\mathbf{i}\cdot \mathbf{p}^{j-\ell+1}},\ldots,x_{\mathbf{i}\cdot\mathbf{p}^{j}})\nonumber\\ &\quad=\sum_{\mathbf{k} \in \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \log \frac{\psi_s(x|_{\lambda_{\mathbf{k}}})}{\psi_s^{p_1\cdots p_d}(x|_{\lambda_{\mathbf{\lfloor k/p \rfloor}}})} +s\sum_{\mathbf{k} \in \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt],\, \mathbf{i}\cdot\mathbf{p}^{\ell-1}\leq\mathbf{k}}\varphi(x|_{\lambda_{\mathbf{k}}}), \end{align} $$

where $\mathbf{u}\leq \mathbf{v}$ means $u_i\leq v_i$ for all $1\leq i \leq d$; the restriction $\mathbf{i}\cdot\mathbf{p}^{\ell-1}\leq\mathbf{k}$ guarantees that $\lambda_{\mathbf{k}}$ contains exactly $\ell$ points, so that $\varphi(x|_{\lambda_{\mathbf{k}}})$ is well defined.

Substituting equations (38) and (39) into equation (37), we get

(40) $$ \begin{align} \log\mathbb{P}_{\mu_s}([x|_{[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]}])=S_{\mathbf{N}}'+sS_{\mathbf{N}}", \end{align} $$

where

$$ \begin{align*} \begin{aligned} S_{\mathbf{N}}'&=\sum_{\mathbf{i} \in \mathcal{I}_{\mathbf{p}}\cap[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \sum_{\mathbf{k} \in \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \log \frac{\psi_s(x|_{\lambda_{\mathbf{k}}})}{\psi_s^{p_1\cdots p_d}(x|_{\lambda_{\mathbf{\lfloor k/p \rfloor}}})},\\ S_{\mathbf{N}}"&=\sum_{\mathbf{i} \in \mathcal{I}_{\mathbf{p}}\cap[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \sum_{\mathbf{k} \in \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt],\, \mathbf{i}\cdot\mathbf{p}^{\ell-1}\leq\mathbf{k}}\varphi(x|_{\lambda_{\mathbf{k}}}). \end{aligned} \end{align*} $$

For any fixed $\mathbf {i} \in \mathcal {I}_{\mathbf {p}}\cap [\kern-1.3pt[ 1,\mathbf {N}]\kern-1.2pt]$ , we write

$$ \begin{align*} \begin{aligned} \sum_{\mathbf{k} \in \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \log \frac{\psi_s(x|_{\lambda_{\mathbf{k}}})}{\psi_s^{p_1\cdots p_d}(x|_{\lambda_{\mathbf{\lfloor k/p \rfloor}}})}&= \sum_{\mathbf{k} \in \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \log \psi_s(x|_{\lambda_{\mathbf{k}}}) \\ &\quad-p_1\cdots p_d\sum_{\mathbf{k} \in \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \log \psi_s(x|_{\lambda_{\mathbf{\lfloor k/p \rfloor}}}). \end{aligned} \end{align*} $$

Recall that if

$$ \begin{align*} \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt] =\{ \mathbf{i}, \mathbf{i}\cdot \mathbf{p},\ldots, \mathbf{i}\cdot \mathbf{p}^{j_0} \}, \end{align*} $$

then

$$ \begin{align*} \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\lfloor\mathbf{N/p} \rfloor ]\kern-1.2pt]=\{ \mathbf{i}, \mathbf{i}\cdot \mathbf{p},\ldots, \mathbf{i}\cdot \mathbf{p}^{j_0-1} \}, \end{align*} $$

and that, when $\mathbf{k}=\mathbf{i}$, we use the convention $x|_{\lambda_{\mathbf{\lfloor k/p \rfloor}}}=\emptyset$, so that $\psi_s(x|_{\lambda_{\mathbf{\lfloor k/p \rfloor}}})=\psi_s(\emptyset)$.

Then we can write

$$ \begin{align*} \begin{aligned} \sum_{\mathbf{k} \in \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \log \frac{\psi_s(x|_{\lambda_{\mathbf{k}}})}{\psi_s^{p_1\cdots p_d}(x|_{\lambda_{\mathbf{\lfloor k/p \rfloor}}})}&= (1-p_1\cdots p_d)\sum_{\mathbf{k} \in \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\lfloor\mathbf{N/p} \rfloor]\kern-1.2pt]} \log \psi_s(x|_{\lambda_{\mathbf{k}}}) \\ &\quad-p_1\cdots p_d \log \psi_s(\emptyset)\\ &\quad+\sum_{\mathbf{k} \in \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt],\,\mathbf{k\cdot p} \notin \mathcal{M}_{\mathbf{p}}(\mathbf{i})\cap [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \log \psi_s(x|_{\lambda_{\mathbf{k}}}). \end{aligned} \end{align*} $$

Now we take the sum over $\mathbf {i} \in \mathcal {I}_{\mathbf {p}}\cap [\kern-1.3pt[ 1,\mathbf {N}]\kern-1.2pt]$ to get

$$ \begin{align*} \begin{aligned} S_{\mathbf{N}}'&=(1-p_1\cdots p_d)\sum_{\mathbf{k} \leq \lfloor\mathbf{N/p} \rfloor}\log \psi_s(x|_{\lambda_{\mathbf{k}}})\\ &\quad-p_1\cdots p_d \bigg( N_1\cdots N_d- \bigg\lfloor \frac{N_1\cdots N_d}{p_1\cdots p_d} \bigg\rfloor \bigg)\log \psi_s(\emptyset)\\ &\quad+\sum_{\mathbf{k\leq N},\, \mathbf{k\cdot p} \notin [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \log \psi_s(x|_{\lambda_{\mathbf{k}}}). \end{aligned} \end{align*} $$

We can rewrite

$$ \begin{align*} &(1-p_1\cdots p_d)\sum_{\mathbf{k} \leq \lfloor\mathbf{N/p} \rfloor}\log \psi_s(x|_{\lambda_{\mathbf{k}}})+\sum_{\mathbf{k\leq N},\, \mathbf{k\cdot p} \notin [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]}\log \psi_s(x|_{\lambda_{\mathbf{k}}})\\&\quad=-p_1\cdots p_d B_{\lfloor\mathbf{N/p} \rfloor}(x)+B_{\mathbf{N}}(x). \end{align*} $$

Thus,

$$ \begin{align*} S_{\mathbf{N}}'=-p_1\cdots p_d \bigg( N_1\cdots N_d- \bigg\lfloor \frac{N_1\cdots N_d}{p_1\cdots p_d} \bigg\rfloor \bigg)\log \psi_s(\emptyset) -p_1\cdots p_d B_{\lfloor\mathbf{N/p} \rfloor}(x)+B_{\mathbf{N}}(x). \end{align*} $$

On the other hand, we have

$$ \begin{align*} S_{\mathbf{N}}"=\sum_{\mathbf{j}\in [\kern-1.3pt[ 1,\lfloor\mathbf{N}/\mathbf{p}^{\ell-1}\rfloor]\kern-1.2pt]} \varphi (x_{\mathbf{j}},\ldots,x_{\mathbf{j}\cdot\mathbf{p}^{\ell-1}}). \end{align*} $$

Substituting these expressions of $S_{\mathbf {N}}'$ and $S_{\mathbf {N}}"$ into equation (40), we get the desired result.

4.1.1 Upper bound for the Hausdorff dimension

The purpose of this subsection is to provide a few lemmas needed to prove Lemma 4.12. The following results will be useful for estimating the pointwise dimensions of $\mathbb{P}_{\mu_s}$.

Lemma 4.8. [Reference Fan, Schmeling and Wu11, Lemma 7.1]

Let $(a_n)_{n\geq 1}$ be a bounded sequence of non-negative real numbers and let $q\geq 2$ be an integer. Then,

$$ \begin{align*} \liminf_{n\rightarrow\infty} (a_{ \lfloor {n}/{q} \rfloor} -a_n )\leq 0. \end{align*} $$
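Although Lemma 4.8 is quoted from [Reference Fan, Schmeling and Wu11], the argument is short: if the conclusion failed, there would exist $\epsilon>0$ and $n_0\geq 1$ with $a_{\lfloor n/q \rfloor}-a_n\geq \epsilon$ for all $n\geq n_0$; taking $n=n_0q^k$ and iterating would give

$$ \begin{align*} a_{n_0q^{k}}\leq a_{n_0}-k\epsilon\rightarrow-\infty \quad\text{as } k\rightarrow\infty, \end{align*} $$

contradicting the non-negativity of $(a_n)_{n\geq 1}$.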

We define

$$ \begin{align*} E^+(\alpha):=\bigg\{ (x_{\mathbf{j}})_{\mathbf{j}\in \mathbb{N}^d} \in \Sigma_m^{\mathbb{N}^d} : \limsup_{\mathbf{N}\rightarrow\infty} \frac{1}{N_1\cdots N_d} \sum_{\mathbf{j}\in [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \varphi (x_{\mathbf{j}},\ldots,x_{\mathbf{j}\cdot\mathbf{p}^{\ell-1}})\leq \alpha \bigg\} \end{align*} $$

and

$$ \begin{align*} E^-(\alpha):=\bigg\{ (x_{\mathbf{j}})_{\mathbf{j}\in \mathbb{N}^d}\in \Sigma_m^{\mathbb{N}^d} : \liminf_{\mathbf{N}\rightarrow\infty} \frac{1}{N_1\cdots N_d} \sum_{\mathbf{j}\in [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \varphi (x_{\mathbf{j}},\ldots,x_{\mathbf{j}\cdot\mathbf{p}^{\ell-1}})\geq \alpha \bigg\}. \end{align*} $$

It is clear that

$$ \begin{align*} E(\alpha)=E^+(\alpha)\cap E^-(\alpha). \end{align*} $$

We now obtain upper bounds for the pointwise dimensions.

Lemma 4.9. For every $x\in E^+(\alpha )$ , we have

$$ \begin{align*} \text{ for all } s \leq 0,\quad \underline{D}(\mathbb{P}_{\mu_s},x)\leq \frac{P_{\varphi}(s)-\alpha s}{(p_1\cdots p_d)^{\ell-1}\log m}. \end{align*} $$

For every $x\in E^-(\alpha )$ , we have

$$ \begin{align*} \text{ for all } s \geq 0,\quad \underline{D}(\mathbb{P}_{\mu_s},x)\leq \frac{P_{\varphi}(s)-\alpha s}{(p_1\cdots p_d)^{\ell-1}\log m}. \end{align*} $$

Consequently, for every $x\in E(\alpha )$ , we have

$$ \begin{align*} \text{ for all } s \in \mathbb{R},\quad \underline{D}(\mathbb{P}_{\mu_s},x)\leq \frac{P_{\varphi}(s)-\alpha s}{(p_1\cdots p_d)^{\ell-1}\log m}. \end{align*} $$

Proof. The proof is based on Lemma 4.7, which implies that for any $x\in \Sigma _m^{\mathbb {N}^d}$ and $N_1,\ldots , N_d\geq 1$ , we have

$$ \begin{align*} \begin{aligned} -\frac{ \log \mathbb{P}_{\mu_s}([x|_{[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]}])}{N_1 \cdots N_d}&=-\frac{s}{N_1 \cdots N_d}\sum_{\mathbf{j}\in [\kern-1.3pt[ 1,\lfloor\mathbf{N}/\mathbf{p}^{\ell-1}\rfloor]\kern-1.2pt]} \varphi (x_{\mathbf{j}},\ldots,x_{\mathbf{j}\cdot\mathbf{p}^{\ell-1}})\\ &\quad+\frac{(N_1\cdots N_d-\lfloor {N_1\cdots N_d}/{p_1\cdots p_d} \rfloor )}{N_1 \cdots N_d}p_1\cdots p_d \log\psi_s(\emptyset)\\ &\quad+ \frac{B_{\mathbf{\lfloor N/p \rfloor}}(x)}{{N_1 \cdots N_d}/{p_1\cdots p_d}}-\frac{B_{\mathbf{N}}(x)}{N_1 \cdots N_d}. \end{aligned} \end{align*} $$

Since $\log\psi_s$ is bounded ($\psi_s$ is a strictly positive function on a finite set), the sequence $({B_{\mathbf{k}\cdot \mathbf{p}^{i} }(x)}/{k_1p_1^i \cdots k_dp_d^i} )_{i=0}^{\infty}$ is bounded for each fixed $\mathbf{k}$. Then, by Lemma 4.8 (applied after adding a suitable constant, so that the sequence is non-negative) with $n=k_1p_1^i \cdots k_dp_d^i$ and $q=p_1\cdots p_d$, we have

$$ \begin{align*} &\liminf_{\mathbf{N}\rightarrow\infty}\bigg(\frac{B_{\mathbf{\lfloor N/p \rfloor}}(x)}{{N_1 \cdots N_d}/{p_1\cdots p_d}}-\frac{B_{\mathbf{N}}(x)}{N_1 \cdots N_d}\bigg)\\ &\quad\leq \liminf_{i\rightarrow\infty} \bigg(\frac{B_{\mathbf{k}\cdot \mathbf{p}^{i-1} }(x)}{k_1p_1^{i-1} \cdots k_dp_d^{i-1}} -\frac{B_{\mathbf{k}\cdot \mathbf{p}^{i} }(x)}{k_1p_1^{i} \cdots k_dp_d^{i}}\bigg) \leq 0. \end{align*} $$

Therefore, recalling that $\underline{D}(\mathbb{P}_{\mu_s},x)=\liminf_{\mathbf{N}\rightarrow\infty}-({\log \mathbb{P}_{\mu_s}([x|_{[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]}])})/({N_1\cdots N_d\log m})$, we obtain

$$ \begin{align*} \begin{aligned} \underline{D}(\mathbb{P}_{\mu_s},x)\leq& \liminf_{\mathbf{N}\rightarrow\infty}-\frac{s}{N_1 \cdots N_d \log m}\sum_{\mathbf{j}\in [\kern-1.3pt[ 1,\lfloor\mathbf{N}/\mathbf{p}^{\ell-1}\rfloor]\kern-1.2pt]} \varphi (x_{\mathbf{j}},\ldots,x_{\mathbf{j}\cdot\mathbf{p}^{\ell-1}})\\ &+(p_1\cdots p_d-1) \log_m\psi_s(\emptyset). \end{aligned} \end{align*} $$

Now suppose that $x\in E^+(\alpha )$ and $s \leq 0$ . Since

$$ \begin{align*} \begin{aligned} &\liminf_{\mathbf{N}\rightarrow\infty}\frac{1}{N_1 \cdots N_d }\sum_{\mathbf{j}\in [\kern-1.3pt[ 1,\lfloor\mathbf{N}/\mathbf{ p}^{\ell-1}\rfloor]\kern-1.2pt]} \varphi (x_{\mathbf{j}},\ldots,x_{\mathbf{j}\cdot\mathbf{p}^{\ell-1}})\\ &\quad\leq\limsup_{\mathbf{N}\rightarrow\infty}\frac{1}{N_1 \cdots N_d}\sum_{\mathbf{j}\in [\kern-1.3pt[ 1,\lfloor\mathbf{N}/\mathbf{ p}^{\ell-1}\rfloor]\kern-1.2pt]} \varphi (x_{\mathbf{j}},\ldots,x_{\mathbf{j}\cdot\mathbf{p}^{\ell-1}})\\ &\quad\leq\frac{\alpha}{(p_1\cdots p_d)^{\ell-1}}, \end{aligned} \end{align*} $$

we have

$$ \begin{align*} \begin{aligned} &\liminf_{\mathbf{N}\rightarrow\infty}-\frac{s}{N_1 \cdots N_d }\sum_{\mathbf{j}\in [\kern-1.3pt[ 1,\lfloor\mathbf{N}/\mathbf{ p}^{\ell-1}\rfloor]\kern-1.2pt]} \varphi (x_{\mathbf{j}},\ldots,x_{\mathbf{j}\cdot\mathbf{p}^{\ell-1}})\\ &\quad\leq-s\liminf_{\mathbf{N}\rightarrow\infty}\frac{1}{N_1 \cdots N_d}\sum_{\mathbf{j}\in [\kern-1.3pt[ 1,\lfloor\mathbf{N}/\mathbf{p}^{\ell-1}\rfloor]\kern-1.2pt]} \varphi (x_{\mathbf{j}},\ldots,x_{\mathbf{j}\cdot\mathbf{p}^{\ell-1}})\\ &\quad\leq\frac{-s\alpha}{(p_1\cdots p_d)^{\ell-1}}, \end{aligned} \end{align*} $$

so that

$$ \begin{align*} \underline{D}(\mathbb{P}_{\mu_s},x)&\leq\frac{-s\alpha}{(p_1\cdots p_d)^{\ell-1}\log m}+(p_1\cdots p_d-1) \log_m\psi_s(\emptyset)\\&=\frac{P_{\varphi}(s)-\alpha s}{(p_1\cdots p_d)^{\ell-1}\log m}, \end{align*} $$

where the last equality follows, using $\psi_s^{p_1\cdots p_d}(\emptyset)=\sum_{j\in S}\psi_s(j)$, from

$$ \begin{align*} \begin{aligned} P_{\varphi}(s)&=(p_1\cdots p_d-1)(p_1\cdots p_d)^{\ell-2}\log \sum_{j\in S}\psi_s(j)\\ &=(p_1\cdots p_d-1)(p_1\cdots p_d)^{\ell-1}\log \psi_s(\emptyset). \end{aligned} \end{align*} $$

By an analogous argument, we can prove the same result for $x\in E^-(\alpha )$ and $s\geq 0$ . The proof is complete.

Recall that $L_{\varphi }$ is the set of $\alpha $ such that $E(\alpha )\neq \emptyset $ . The following lemma gives the range of $L_{\varphi }$ .

Lemma 4.10. We have $L_{\varphi }\subset [ P^{\prime }_{\varphi }(-\infty ),P^{\prime }_{\varphi }(+\infty ) ]$ .

Proof. We prove it by contradiction. Suppose that $E(\alpha )\neq \emptyset $ for some $\alpha <P^{\prime }_{\varphi }(-\infty )$ . Let $x\in E(\alpha )$ . Then by Lemma 4.9, we have

(41) $$ \begin{align} \liminf_{\mathbf{N}\rightarrow\infty}-\frac{ \log \mathbb{P}_{\mu_s}([x|_{[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} ])}{N_1 \cdots N_d}\leq \frac{P_{\varphi}(s)-\alpha s}{(p_1\cdots p_d)^{\ell-1}\log m} \quad\text{for all } s \in \mathbb{R}. \end{align} $$

On the other hand, by the mean value theorem, we have

(42) $$ \begin{align} P_{\varphi}(s)-\alpha s=P_{\varphi}(s)-P_{\varphi}(0)-\alpha s+P_{\varphi}(0)=P^{\prime}_{\varphi}(\eta_s)s-\alpha s+P_{\varphi}(0) \end{align} $$

for some real number $\eta_s$ between $0$ and $s$. Since $P_{\varphi}$ is convex, $P^{\prime}_{\varphi}$ is increasing on $\mathbb{R}$. For $s<0$, we have

(43) $$ \begin{align} P^{\prime}_{\varphi}(\eta_s)s-\alpha s+P_{\varphi}(0)\leq P^{\prime}_{\varphi}(-\infty)s-\alpha s+P_{\varphi}(0)= (P^{\prime}_{\varphi}(-\infty)-\alpha )s+P_{\varphi}(0). \end{align} $$

Since $P^{\prime }_{\varphi }(-\infty )-\alpha>0$ , we deduce from equations (42) and (43) that for s close to $-\infty $ , we have $P_{\varphi }(s)-\alpha s<0$ . Then by equation (41), for s small enough, we obtain

$$ \begin{align*} \liminf_{\mathbf{N}\rightarrow\infty}-\frac{\log \mathbb{P}_{\mu_s}([x|_{[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} ])}{N_1 \cdots N_d}<0, \end{align*} $$

which implies that $\mathbb{P}_{\mu_s}([x|_{[\kern-1.3pt[ 1,(N_{1,i},\ldots ,N_{d,i})]\kern-1.2pt]}])>1$ along some sequence $(N_{1,i},\ldots ,N_{d,i})_{i\geq 1}$ with $\min_{1\leq j \leq d}N_{j,i} \rightarrow \infty$ as $i\rightarrow \infty$. This contradicts the fact that $\mathbb{P}_{\mu_s}$ is a probability measure on $\Sigma_m^{\mathbb{N}^d}$. Thus, we have proved that for $\alpha$ such that $E(\alpha)\neq \emptyset$, we have $\alpha \geq P^{\prime}_{\varphi}(-\infty)$. By a similar argument, we have $\alpha \leq P^{\prime}_{\varphi}(+\infty)$.

Lemma 4.11. (Billingsley’s lemma [Reference Billingsley4])

Let E be a Borel set in $\Sigma _m^{\mathbb {N}^d}$ and let $\nu $ be a finite Borel measure on $\Sigma _m^{\mathbb {N}^d}$ .

(1) We have $\dim_H(E)\geq c$ if $\nu(E)>0$ and $\underline{D}(\nu,x)\geq c$ for $\nu$-a.e. x.

(2) We have $\dim_H(E)\leq c$ if $\underline{D}(\nu,x) \leq c$ for all $x\in E$.

Recall that

$$ \begin{align*} P^*_{\varphi}(\alpha)=\inf_{s\in \mathbb{R}}(P_{\varphi}(s)-\alpha s). \end{align*} $$
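Since $P_{\varphi}$ is convex and differentiable, whenever $\alpha=P^{\prime}_{\varphi}(s_{\alpha})$ for some $s_{\alpha}\in\mathbb{R}$, the infimum is attained at $s=s_{\alpha}$:

$$ \begin{align*} P^*_{\varphi}(\alpha)=P_{\varphi}(s_{\alpha})-\alpha s_{\alpha}. \end{align*} $$

Moreover, $s_{\alpha}\leq 0$ when $\alpha\leq P^{\prime}_{\varphi}(0)$ and $s_{\alpha}\geq 0$ when $\alpha\geq P^{\prime}_{\varphi}(0)$, so the one-sided infima appearing in Lemma 4.12 below coincide with $P^*_{\varphi}(\alpha)$ on the stated ranges.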

An upper bound of the Hausdorff dimensions of level sets is a direct consequence of Lemmas 4.9 and 4.11.

Lemma 4.12. For any $\alpha \in (P^{\prime }_{\varphi }(-\infty ),P^{\prime }_{\varphi }(0))$ , we have

$$ \begin{align*} \dim_H E^+(\alpha)\leq \inf_{s\leq 0}\frac{P_{\varphi}(s)-\alpha s}{(p_1\cdots p_d)^{\ell-1}\log m}. \end{align*} $$

For any $\alpha \in (P^{\prime }_{\varphi }(0),P^{\prime }_{\varphi }(+\infty ))$ , we have

$$ \begin{align*} \dim_H E^-(\alpha)\leq \inf_{s\geq 0}\frac{P_{\varphi}(s)-\alpha s}{(p_1\cdots p_d)^{\ell-1}\log m}. \end{align*} $$

In particular, we have

$$ \begin{align*} \dim_H E(\alpha)\leq \frac{P^*_{\varphi}(\alpha)}{(p_1\cdots p_d)^{\ell-1}\log m}. \end{align*} $$

4.1.2 Lower bound for the Hausdorff dimension

This subsection is intended to establish Lemma 4.16. First, we make some preparations for proving the Ruelle-type formula below by deducing some identities concerning the function $\psi_s$.

Recall that $\psi_s(a)$ is defined for $a\in \bigcup_{1\leq k \leq \ell-1}S^k$. For $a\in S^{\ell-1}$, we have

$$ \begin{align*} \psi_s^{p_1\cdots p_d}(a)=\sum_{b\in S} e^{s\varphi(a,b)} \psi_s(Ta,b) \end{align*} $$

and for $a\in S^k, 1\leq k \leq \ell -2$ , we have

$$ \begin{align*} \psi_s^{p_1\cdots p_d}(a)=\sum_{b\in S} \psi_s(a,b). \end{align*} $$

Differentiating the two sides of each of the above two equations with respect to $s$, we get, for all $a\in S^{\ell-1}$,

$$ \begin{align*} p_1\cdots p_d \psi_s^{p_1\cdots p_d-1}(a) \psi^{\prime}_s(a)=\sum_{b\in S} e^{s\varphi(a,b)}\varphi(a,b) \psi_s(Ta,b)+\sum_{b\in S} e^{s\varphi(a,b)} \psi^{\prime}_s(Ta,b) \end{align*} $$

and for all $a\in \bigcup _{1\leq k \leq \ell -2}S^k$ ,

$$ \begin{align*} p_1\cdots p_d\psi_s^{p_1\cdots p_d-1}(a) \psi^{\prime}_s(a)=\sum_{b\in S} \psi^{\prime}_s(a,b). \end{align*} $$

Dividing these equations by $\psi_s^{p_1\cdots p_d}(a)$ (for the corresponding $a$ in each case), we get the following lemma.

Lemma 4.13. For any $a\in S^{\ell -1}$ , we have

(44) $$ \begin{align} p_1\cdots p_d \frac{\psi^{\prime}_s(a)}{\psi_s(a) }=\sum_{b\in S} \frac{e^{s\varphi(a,b)}\varphi(a,b) \psi_s(Ta,b)}{\psi_s^{p_1\cdots p_d}(a)}+\sum_{b\in S} \frac{e^{s\varphi(a,b)} \psi^{\prime}_s(Ta,b)}{\psi_s^{p_1\cdots p_d}(a)} \end{align} $$

and for any $a\in \bigcup _{1\leq k \leq \ell -2}S^k$ ,

(45) $$ \begin{align} p_1\cdots p_d \frac{ \psi^{\prime}_s(a)}{\psi_s(a)}=\sum_{b\in S} \frac{\psi^{\prime}_s(a,b)}{\psi_s^{p_1\cdots p_d}(a)}. \end{align} $$

We denote

$$ \begin{align*} w(a)=\frac{\psi^{\prime}_s(a)}{\psi_s(a)}, \quad v(a)=\sum_{b\in S}\frac{e^{s\varphi (a,b)}\psi^{\prime}_s(Ta,b)}{\psi_s^{p_1\cdots p_d}(a)}\quad (\text{for all } a\in S^{\ell-1}). \end{align*} $$

Then we have the following identities.

Lemma 4.14. ( $\mathbb {N}^d$ version of [Reference Fan, Schmeling and Wu11, Lemma 7.7 and Theorem 5.1])

For any $n\in \mathbb {N}$ , we have

(46) $$ \begin{align} \mathbb{E}_{\mu_s} \varphi(y_n^{n+\ell-1})=p_1\cdots p_d \mathbb{E}_{\mu_s} w(y_n^{n+\ell-2})-\mathbb{E}_{\mu_s} v(y_n^{n+\ell-2})\quad (\text{for all } n\geq 0), \end{align} $$
(47) $$ \begin{align} \mathbb{E}_{\mu_s} w(y_n^{n+\ell-2})=\mathbb{E}_{\mu_s} v(y_{n-1}^{n+\ell-3})\quad (\text{for all } n\geq 1), \end{align} $$
(48) $$ \begin{align} \mathbb{E}_{\mu_s} w(y_0^{\ell-2})=\frac{P^{\prime}_{\varphi}(s)}{p_1\cdots p_d (p_1\cdots p_d-1)}, \end{align} $$

and

(49) $$ \begin{align} (p_1\cdots p_d-1)^2\sum_{k=1}^{\infty}\frac{1}{(p_1\cdots p_d)^{k+1}} \sum_{j=0}^{k-1}\mathbb{E}_{\mu_s}\varphi(y_j,\ldots, y_{j+\ell-1}) =P^{\prime}_{\varphi}(s). \end{align} $$

Proof. Using an argument similar to that of Lemma 4.5, the proof is almost identical to the proofs of [Reference Fan, Schmeling and Wu11, Lemma 7.7 and Theorem 5.1], with $q$ replaced by $p_1\cdots p_d$.

As an application of Lemma 4.14, we get the following formula for $\dim _H \mathbb {P}_{\mu _s}$ .

Lemma 4.15. For any $s\in \mathbb {R}$ , we have

$$ \begin{align*} \dim_H \mathbb{P}_{\mu_s}=\frac{-s P^{\prime}_{\varphi}(s)+P_{\varphi}(s)}{(p_1\cdots p_d)^{\ell-1}\log m}. \end{align*} $$

Proof. By Lemma 4.7, we have

$$ \begin{align*} \begin{aligned} -\frac{ \log \mathbb{P}_{\mu_s}([x|_{[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]}])}{N_1 \cdots N_d} =&-\frac{s}{N_1 \cdots N_d}\sum_{\mathbf{j}\in [\kern-1.3pt[ 1,\lfloor\mathbf{N}/\mathbf{p}^{\ell-1}\rfloor]\kern-1.2pt]} \varphi (x_{\mathbf{j}},\ldots,x_{\mathbf{j}\cdot\mathbf{p}^{\ell-1}})\\ &+\frac{(N_1\cdots N_d-\lfloor {N_1\cdots N_d}/{p_1\cdots p_d} \rfloor )}{N_1 \cdots N_d}p_1\cdots p_d \log\psi_s(\emptyset)\\ &+ \frac{B_{\mathbf{\lfloor N/p \rfloor}}(x)}{{N_1 \cdots N_d}/{p_1\cdots p_d}}-\frac{B_{\mathbf{N}}(x)}{N_1 \cdots N_d}. \end{aligned} \end{align*} $$

Applying the law of large numbers to the function $\psi_s$, we get the $\mathbb{P}_{\mu_s}$-almost everywhere existence of the limit $\lim_{\mathbf{N}\rightarrow \infty} {B_{\mathbf{N}}(x)}/{N_1\cdots N_d}$. So,

$$ \begin{align*} \lim_{\mathbf{N}\rightarrow \infty} \frac{B_{\mathbf{\lfloor N/p \rfloor}}(x)}{{N_1 \cdots N_d}/{p_1\cdots p_d}}-\frac{B_{\mathbf{N}}(x)}{N_1 \cdots N_d}=0,\quad\mathbb{P}_{\mu_s}\mbox{-almost everywhere}. \end{align*} $$

On the other hand, by Lemmas 4.5 and 4.14, we have

$$ \begin{align*} &\lim_{\mathbf{N} \rightarrow \infty}\frac{1}{N_1 \cdots N_d}\sum_{\mathbf{j}\in [\kern-1.3pt[ 1,\lfloor\mathbf{N}/\mathbf{ p}^{\ell-1}\rfloor]\kern-1.2pt]} \varphi (x_{\mathbf{j}},\ldots,x_{\mathbf{j}\cdot\mathbf{p}^{\ell-1}})\\&\quad=\frac{P^{\prime}_{\varphi}(s)}{(p_1\cdots p_d)^{\ell-1}},\quad \mathbb{P}_{\mu_s}\mbox{-almost everywhere}. \end{align*} $$

So we obtain that for $ \mathbb {P}_{\mu _s}$ -a.e. $x\in \Sigma _m^{\mathbb {N}^d}$ ,

$$ \begin{align*} \lim_{\mathbf{N} \rightarrow \infty} -\frac{ \log \mathbb{P}_{\mu_s}([x|_{[\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]}])}{N_1 \cdots N_d} =\frac{-s P^{\prime}_{\varphi}(s)+P_{\varphi}(s)}{(p_1\cdots p_d)^{\ell-1}}. \end{align*} $$

Dividing by $\log m$ gives the pointwise dimension $D(\mathbb{P}_{\mu_s},x)$ for $\mathbb{P}_{\mu_s}$-a.e. $x$, and the claimed formula for $\dim_H \mathbb{P}_{\mu_s}$ follows. The proof is complete.
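As a consistency check of Lemma 4.15, consider $s=0$. The constant function with value $c=m^{1/(p_1\cdots p_d-1)}$ solves the fixed-point equation (33) (since $c^{p_1\cdots p_d}=mc$), and the extension relations give $\psi_0(\emptyset)=c$ as well, so that

$$ \begin{align*} P_{\varphi}(0)=(p_1\cdots p_d-1)(p_1\cdots p_d)^{\ell-1}\log\psi_0(\emptyset)=(p_1\cdots p_d)^{\ell-1}\log m \end{align*} $$

and Lemma 4.15 gives $\dim_H\mathbb{P}_{\mu_0}=1$. This is as expected: at $s=0$, equations (35) and (36) show that $\mu_0$ is the uniform Bernoulli measure on $\Sigma_m$.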

By Lemmas 4.14 and 4.15, together with Billingsley's lemma (Lemma 4.11), we get the following lower bound for $\dim_H E(P^{\prime}_{\varphi}(s))$.

Lemma 4.16. For any $s\in \mathbb {R}$ , we have

$$ \begin{align*} \dim_H E(P^{\prime}_{\varphi}(s))\geq \frac{-s P^{\prime}_{\varphi}(s)+P_{\varphi}(s)}{(p_1\cdots p_d)^{\ell-1}\log m}. \end{align*} $$
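Combining Lemma 4.12 with Lemma 4.16, and using the attainment of the Legendre infimum noted after the definition of $P^*_{\varphi}$, we obtain, when $\alpha_{\min}<\alpha_{\max}$ (so that $P^{\prime}_{\varphi}$ is strictly increasing, by Lemma 4.17 below),

$$ \begin{align*} \dim_H E(\alpha)=\frac{P^*_{\varphi}(\alpha)}{(p_1\cdots p_d)^{\ell-1}\log m} \quad\text{for all } \alpha\in (P^{\prime}_{\varphi}(-\infty),P^{\prime}_{\varphi}(+\infty)), \end{align*} $$

since every such $\alpha$ is of the form $P^{\prime}_{\varphi}(s)$ for a unique $s\in\mathbb{R}$.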

4.2 The case when $s_{\alpha }$ tends to $\pm \infty $

Lemma 4.17. [Reference Fan, Schmeling and Wu11, Theorem 5.6]

Suppose that $\alpha _{\min }< \alpha _{\max }$ . Then:

(1) $P^{\prime}_{\varphi}(s)$ is strictly increasing on $\mathbb{R}$;

(2) $\alpha_{\min}\leq P^{\prime}_{\varphi}(-\infty) < P^{\prime}_{\varphi}(+\infty) \leq \alpha_{\max}$.

Proof. The proof is similar to that of [Reference Fan, Schmeling and Wu11, Theorem 5.6], and thus we omit it.

Theorem 4.18.

(1) We have the equality

    $$ \begin{align*} \alpha_{\min}= P^{\prime}_{\varphi}(-\infty) \end{align*} $$

    if and only if there exists a sequence $(y_i)_{i=1}^{\infty }\in \Sigma _m$ such that

    $$ \begin{align*} \varphi(y_k, y_{k+1},\ldots, y_{k+\ell-1})=\alpha_{\min} \quad\text{for all } k\geq 1. \end{align*} $$
(2) We have the equality

    $$ \begin{align*} \alpha_{\max}= P^{\prime}_{\varphi}(+\infty) \end{align*} $$

    if and only if there exists a sequence $(x_i)_{i=1}^{\infty }\in \Sigma _m$ such that

    $$ \begin{align*} \varphi(x_k, x_{k+1},\ldots, x_{k+\ell-1})=\alpha_{\max} \quad\text{for all } k\geq 1. \end{align*} $$

Proof. We give the proof of the criterion for $\alpha _{\min }= P^{\prime }_{\varphi }(-\infty )$ . That for $P^{\prime }_{\varphi }(+\infty ) =\alpha _{\max }$ is similar.

Sufficient condition. Suppose that there exists a sequence $(z_j)_{j=0}^{\infty}\in \Sigma_m$ such that

$$ \begin{align*} \varphi(z_j, z_{j+1},\ldots, z_{j+\ell-1})=\alpha_{\min} \quad\text{for all } j\geq 0. \end{align*} $$

We are going to prove that $\alpha _{\min }= P^{\prime }_{\varphi }(-\infty )$ . By Lemma 4.17, we have $\alpha _{\min }\leq P^{\prime }_{\varphi }(-\infty )$ , thus we only need to show that $\alpha _{\min }\geq P^{\prime }_{\varphi }(-\infty )$ . To see this, we need to find an $x\in \Sigma _m^{\mathbb {N}^d}$ such that

$$ \begin{align*} \lim_{\mathbf{N}\rightarrow\infty} \frac{1}{N_1\cdots N_d} \sum_{\mathbf{j}\in [\kern-1.3pt[ 1,\mathbf{N}]\kern-1.2pt]} \varphi (x_{\mathbf{j}},\ldots,x_{\mathbf{j}\cdot\mathbf{p}^{\ell-1}})=\alpha_{\min}. \end{align*} $$

Then by Lemma 4.10, $\alpha _{\min }\in [P^{\prime }_{\varphi }(-\infty ), P^{\prime }_{\varphi }(+\infty ) ]$ , so $\alpha _{\min }\geq P^{\prime }_{\varphi }(-\infty )$ . We can do this by choosing $x=(x_{\mathbf {j}})_{\mathbf {j}\in \mathbb {N}^d}=\prod _{\mathbf {i}\in \mathcal {I}_{\mathbf {p}}} (x_{\mathbf {i} \cdot \mathbf {p}^{j}})_{j=0}^{\infty }$ with

$$ \begin{align*} (x_{\mathbf{i} \cdot \mathbf{p}^{j}})_{j=0}^{\infty}=(z_j)_{j=0}^{\infty} \quad\text{for all } \mathbf{i}\in \mathcal{I}_{\mathbf{p}}. \end{align*} $$
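Indeed, for this $x$, every $\mathbf{j}\in\mathbb{N}^d$ lies on the fiber of some $\mathbf{i}\in\mathcal{I}_{\mathbf{p}}$, say $\mathbf{j}=\mathbf{i}\cdot\mathbf{p}^{k}$, so $x_{\mathbf{j}\cdot\mathbf{p}^{r}}=z_{k+r}$ for all $r\geq 0$ and hence

$$ \begin{align*} \varphi(x_{\mathbf{j}},x_{\mathbf{j}\cdot\mathbf{p}},\ldots,x_{\mathbf{j}\cdot\mathbf{p}^{\ell-1}})=\varphi(z_{k},z_{k+1},\ldots,z_{k+\ell-1})=\alpha_{\min}. \end{align*} $$

Every summand equals $\alpha_{\min}$, so the averages are identically $\alpha_{\min}$ and $x\in E(\alpha_{\min})$.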

Necessary condition. Suppose that there is no $(z_j)_{j=0}^{\infty }\in \Sigma _m$ such that

$$ \begin{align*} \varphi(z_j, z_{j+1},\ldots, z_{j+\ell-1})=\alpha_{\min} \quad\text{for all } j\geq 0. \end{align*} $$

We show that there exists an $\epsilon>0$ such that

$$ \begin{align*} P^{\prime}_{\varphi}(s)\geq \alpha_{\min}+\epsilon \quad \text{for all } s\in \mathbb{R}, \end{align*} $$

which will imply that $P^{\prime}_{\varphi}(-\infty)\geq \alpha_{\min}+\epsilon>\alpha_{\min}$.

From the hypothesis, we deduce that there are no words $z_0^{n+\ell -1}$ with $n\geq m^{\ell }$ such that

(50) $$ \begin{align} \varphi(z_j, z_{j+1},\ldots, z_{j+\ell-1})=\alpha_{\min} \quad\text{for all } 0\leq j\leq n. \end{align} $$

Indeed, since $z_j^{j+\ell -1}\in S^{\ell }$ for all $0\leq j \leq n$ , there are at most $m^\ell $ choices for $z_j^{j+\ell -1}$ . So for any word with $n\geq m^{\ell }$ , there exist at least two $j_1<j_2\in \{0,\ldots , n \}$ such that

$$ \begin{align*} z_{j_1}^{j_1+\ell-1}=z_{j_2}^{j_2+\ell-1}. \end{align*} $$

Then if the word $z_0^{n+\ell -1}$ satisfies equation (50), the infinite sequence

$$ \begin{align*} (y_j)_{j=0}^{\infty}=(z_{j_1},\ldots, z_{j_2-1})^{\infty} \end{align*} $$

would satisfy

$$ \begin{align*} \varphi(y_j, y_{j+1},\ldots, y_{j+\ell-1})=\alpha_{\min} \quad\text{for all } j\geq 0. \end{align*} $$

This contradicts the hypothesis. We conclude that for any word $z_0^{m^\ell +\ell -1}\in S^{m^\ell +\ell -1}$ , there exists at least one $0\leq j \leq m^\ell $ such that

$$ \begin{align*} \varphi(z_j, z_{j+1},\ldots, z_{j+\ell-1})\geq \alpha^{\prime}_{\min}>\alpha_{\min}, \end{align*} $$

where $\alpha ^{\prime }_{\min }$ is the second smallest value of $\varphi $ over $S^\ell $ .

We deduce from the above discussions that for any $(z_j)_{j=0}^{\infty }\in \Sigma _m$ and $k \geq 0$ , we have

$$ \begin{align*} \sum_{j=k}^{k+m^\ell} \varphi(z_j, z_{j+1},\ldots, z_{j+\ell-1}) \geq m^\ell \alpha_{\min} + \alpha^{\prime}_{\min}=(m^\ell +1)\alpha_{\min}+ \delta, \end{align*} $$

where $\delta =\alpha ^{\prime }_{\min }-\alpha _{\min }>0$ . This implies that for any $(z_j)_{j=0}^{\infty }\in \Sigma _m$ and $n \geq 1$ , we have

(51) $$ \begin{align} \sum_{j=0}^{n-1} \varphi(z_j, z_{j+1},\ldots, z_{j+\ell-1}) \geq n\alpha_{\min} + \bigg\lfloor \frac{n}{m^\ell+1} \bigg\rfloor \delta. \end{align} $$

By Lemma 4.14, we have

(52) $$ \begin{align} P^{\prime}_{\varphi}(s)&=(p_1\cdots p_d-1)^2\sum_{k=1}^{\infty}\frac{1}{(p_1\cdots p_d)^{k+1}} \sum_{j=0}^{k-1}\mathbb{E}_{\mu_s}\varphi(y_j,\ldots, y_{j+\ell-1})\nonumber\\ &=(p_1\cdots p_d-1)^2\sum_{k=1}^{\infty}\frac{1}{(p_1\cdots p_d)^{k+1}} \mathbb{E}_{\mu_s} \sum_{j=0}^{k-1}\varphi(y_j,\ldots, y_{j+\ell-1}). \end{align} $$

By equations (51) and (52), we get

$$ \begin{align*} \begin{aligned} P^{\prime}_{\varphi}(s)&\geq(p_1\cdots p_d-1)^2\sum_{k=1}^{\infty}\frac{1}{(p_1\cdots p_d)^{k+1}} \bigg(k\alpha_{\min} + \bigg\lfloor \frac{k}{m^\ell+1} \bigg\rfloor \delta\bigg)\\ &=\alpha_{\min}+\delta (p_1\cdots p_d-1)^2\sum_{k=1}^{\infty}\frac{\lfloor {k}/({m^\ell+1}) \rfloor}{(p_1\cdots p_d)^{k+1}}. \end{aligned} \end{align*} $$
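Here the term $k\alpha_{\min}$ contributes exactly $\alpha_{\min}$ because of the elementary normalization

$$ \begin{align*} (p_1\cdots p_d-1)^2\sum_{k=1}^{\infty}\frac{k}{(p_1\cdots p_d)^{k+1}}=1, \end{align*} $$

which follows from $\sum_{k\geq 1}kx^{k+1}=x^2/(1-x)^2$ with $x=1/(p_1\cdots p_d)$.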

Since

$$ \begin{align*} \delta (p_1\cdots p_d-1)^2\sum_{k=1}^{\infty}\frac{\lfloor {k}/({m^\ell+1}) \rfloor}{(p_1\cdots p_d)^{k+1}}>0, \end{align*} $$

we have proved that there exists an $\epsilon>0$ such that $P^{\prime }_{\varphi }(s)\geq \alpha _{\min }+\epsilon , \text { for all } s\in \mathbb {R}$ .

So far, we have calculated $\dim _H E(\alpha )$ for $\alpha \in (P^{\prime }_{\varphi }(-\infty ),P^{\prime }_{\varphi }(+\infty ))$ . Now we turn to the case when $\alpha =P^{\prime }_{\varphi }(-\infty )$ or $P^{\prime }_{\varphi }(+\infty )$ .

Theorem 4.19. [Reference Fan, Schmeling and Wu11, Theorem 7.11]

If $\alpha =P^{\prime }_{\varphi }(\pm \infty )$ , then $E(\alpha )\neq \emptyset $ and

$$ \begin{align*} \dim_H E(P^{\prime}_{\varphi}(\pm\infty))=\frac{P^*_{\varphi}(P^{\prime}_{\varphi}(\pm\infty))}{(p_1\cdots p_d)^{\ell-1}\log m}. \end{align*} $$

Proof. The proof of Theorem 4.19 follows from the following three lemmas established by Fan, Schmeling, and Wu [Reference Fan, Schmeling and Wu11].

The same argument as in Lemma 4.14 applies to obtain the lemmas below.

Lemma 4.20. [Reference Fan, Schmeling and Wu11, Proposition 7.12]

We have

$$ \begin{align*} \mathbb{P}_{\mu_{-\infty}} ( E(P^{\prime}_{\varphi}(-\infty)) )=1. \end{align*} $$

In particular, $E(P^{\prime }_{\varphi }(-\infty ))\neq \emptyset $ .

Lemma 4.21. [Reference Fan, Schmeling and Wu11, Proposition 7.13]

We have

$$ \begin{align*} \dim_H \mathbb{P}_{\mu_{-\infty}} = \lim_{s\rightarrow -\infty} \frac{-P^{\prime}_{\varphi}(s)s+P_{\varphi}(s)}{(p_1\cdots p_d)^{\ell-1}\log m}=\frac{P^*_{\varphi}(P^{\prime}_{\varphi}(-\infty))}{(p_1\cdots p_d)^{\ell-1}\log m}. \end{align*} $$

Lemma 4.22. [Reference Fan, Schmeling and Wu11, Proposition 7.14]

$$ \begin{align*} \dim_H E(P^{\prime}_{\varphi}(-\infty))=\frac{P^*_{\varphi}(P^{\prime}_{\varphi}(-\infty))}{(p_1\cdots p_d)^{\ell-1}\log m}. \end{align*} $$

Acknowledgements

We would like to sincerely thank the anonymous referee for providing inspiring comments and helpful suggestions for the first draft of this article. These significantly improved the readability and solidified the validity of theorems in the paper. J.-C.B. is partially supported by the National Science and Technology Council, ROC (Contract No. NSTC 111-2115-M-004-005-MY3) and National Center for Theoretical Sciences. W.-G.H. is partially supported by the National Natural Science Foundation of China (Grant No. 12271381). G.-Y.L. is partially supported by the National Science and Technology Council, ROC (Contract No. NSTC 111-2811-M-004-002-MY2).

References

Ban, J. C., Hu, W. G. and Lai, G. Y. On the entropy of multidimensional multiplicative integer subshifts. J. Stat. Phys. 182(2) (2021), 1–20.
Ban, J. C., Hu, W. G. and Lai, G. Y. Large deviation principle of multidimensional multiple averages on ${\mathbb{N}}^d$. Indag. Math. (N.S.) 33 (2022), 450–471.
Ban, J. C., Hu, W. G. and Lin, S. S. Pattern generation problems arising in multiplicative integer systems. Ergod. Th. & Dynam. Sys. 39(5) (2019), 1234–1260.
Billingsley, P. Ergodic Theory and Information. Wiley, New York, 1965.
Bourgain, J. Double recurrence and almost sure convergence. J. Reine Angew. Math. 404 (1990), 140–161.
Brunet, G. Dimensions of ‘self-affine sponges’ invariant under the action of multiplicative integers. Ergod. Th. & Dynam. Sys. 43 (2021), 417–459.
Conze, J. P. and Lesigne, E. Théorèmes ergodiques pour des mesures diagonales. Bull. Soc. Math. France 112 (1984), 143–175.
Fan, A. H. Sur les dimensions de mesures. Studia Math. 111 (1994), 1–17.
Fan, A. H. Some aspects of multifractal analysis. Geometry and Analysis of Fractals. Eds. D.-J. Feng and K.-S. Lau. Springer, Berlin, 2014, pp. 115–145.
Fan, A. H., Liao, L. M. and Ma, J. H. Level sets of multiple ergodic averages. Monatsh. Math. 168(1) (2012), 17–26.
Fan, A. H., Schmeling, J. and Wu, M. Multifractal analysis of some multiple ergodic averages. Adv. Math. 295 (2016), 271–333.
Frantzikinakis, N. Some open problems on multiple ergodic averages. Preprint, 2016, arXiv:1103.3808.
Furstenberg, H., Katznelson, Y. and Ornstein, D. The ergodic theoretical proof of Szemerédi’s theorem. Bull. Amer. Math. Soc. (N.S.) 7(3) (1982), 527–552.
Gutman, Y., Huang, W., Shao, S. and Ye, X. D. Almost sure convergence of the multiple ergodic average for certain weakly mixing systems. Acta Math. Sin. (Engl. Ser.) 34 (2018), 79–90.
Host, B. and Kra, B. Nonconventional ergodic averages and nilmanifolds. Ann. of Math. (2) 161 (2005), 397–488.
Kenyon, R., Peres, Y. and Solomyak, B. Hausdorff dimension for fractals invariant under multiplicative integers. Ergod. Th. & Dynam. Sys. 32(5) (2012), 1567–1584.
Peres, Y., Schmeling, J., Seuret, S. and Solomyak, B. Dimensions of some fractals defined via the semigroup generated by 2 and 3. Israel J. Math. 199(2) (2014), 687–709.
Peres, Y. and Solomyak, B. Dimension spectrum for a nonconventional ergodic average. Real Anal. Exchange 37(2) (2012), 375–388.
Pollicott, M. A nonlinear transfer operator theorem. J. Stat. Phys. 166(3–4) (2017), 516–524.
Wu, M. Analyses multifractales de certaines moyennes ergodiques non-conventionnelles. PhD Thesis, Université de Picardie Jules Verne, 2013.
Figure 1. The spectrum $\alpha\mapsto \dim_H(E(\alpha))$.