1. Introduction
The classical theory of Diophantine approximation is concerned with finding good rational approximations to irrational numbers. For any irrational $ x\in [0,1] $ , if one can find infinitely many rationals $ p/q $ such that $ |x-p/q|<q^{-\tau } $ with $ \tau>2 $ , then $ x $ is said to be $\tau $ -well approximable. In [Reference Hill and Velani10], Hill and Velani introduced a dynamical analogue of the classical theory of $\tau $ -well approximable numbers. The study of these sets is known as the shrinking target problem. More precisely, consider a transformation $ T $ on a metric space $ (X,d) $ . Let $ \{B_n\}_{n\ge 1} $ be a sequence of balls with radii $ r(B_n)\to 0 $ as $ n\to \infty $ . The shrinking target problem concerns the size, especially the Hausdorff dimension, of the set
$$ \begin{align*}W(T,\{B_n\}_{n\ge 1}):=\{x\in X: T^nx\in B_n \text{ for i.o. } n\in{\mathbb{N}}\},\end{align*} $$
where ‘i.o.’ stands for infinitely often. Since its initial introduction, $ W(T,\{B_n\}_{n\ge 1}) $ has been studied intensively in many dynamical systems. See, for example, [Reference Allen and Bárány1, Reference Bárány and Rams2, Reference Bugeaud and Wang4, Reference He8, Reference Hill and Velani10–Reference Li, Wang, Wu and Xu16, Reference Shen and Wang20, Reference Wang and Zhang22] and the references therein.
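As a toy illustration of membership in $ W(T,\{B_n\}_{n\ge 1}) $ (our own example, not from the paper), take the doubling map $ T(x)=2x \bmod 1 $ on $ [0,1) $ and the targets $ B_n=[0,2^{-n}) $ . A point whose binary expansion has ever-longer blocks of zeros hits these targets infinitely often; exact rational arithmetic keeps the orbit from collapsing in floating point:

```python
from fractions import Fraction

def T(x):
    # doubling map T(x) = 2x mod 1
    y = 2 * x
    return y - int(y)

# x has 1s at the triangular positions t_k = k(k+1)/2, so after t_k
# iterations the binary expansion starts with k zeros: T^{t_k}(x) < 2^{-k}.
positions = [k * (k + 1) // 2 for k in range(1, 8)]
x = sum(Fraction(1, 2) ** p for p in positions)

for k in range(1, 7):
    y = x
    for _ in range(k * (k + 1) // 2):
        y = T(y)
    assert y < Fraction(1, 2) ** k   # the orbit hits the shrinking target B_k
```

We use `Fraction` because repeated doubling in floating point loses all significant bits after roughly 53 iterations.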
The set $ W(T,\{B_n\}_{n\ge 1}) $ can be thought of as the set of trajectories that hit the shrinking targets $ \{B_n\}_{n\ge 1} $ infinitely often. Naturally, one would like to consider other targets, such as hyperrectangles, rather than just balls. To this end, motivated by the weighted theory of Diophantine approximation, the following set has also been introduced in the setting of $\beta $ -dynamical systems. For $ d\ge 1 $ , let $ \beta _1,\ldots ,\beta _d>1 $ and let $ \mathcal P=\{P_n\}_{n\ge 1} $ be a sequence of parallelepipeds in $ [0,1)^d $ . Define
$$ \begin{align*}W(\mathcal P):=\{(x_1,\ldots,x_d)\in[0,1)^d: (T_{\beta_1}^nx_1,\ldots,T_{\beta_d}^nx_d)\in P_n \text{ for i.o. } n\in{\mathbb{N}}\},\end{align*} $$
where $ T_{\beta _i}\colon [0,1)\to [0,1) $ is given by
$$ \begin{align*}T_{\beta_i}x=\beta_i x-\lfloor \beta_i x\rfloor.\end{align*} $$
Here, $ \lfloor \cdot \rfloor $ denotes the integer part of a real number. Under the assumption that each $ P_n $ is a hyperrectangle with sides parallel to the axes, the Hausdorff dimension of $ W(\mathcal P) $ , denoted by $ \dim _{\mathrm {H}} W(\mathcal P) $ , was calculated by Li et al [Reference Li, Liao, Velani and Zorin15, Theorem 12]. It should be pointed out that their result crucially relies on this assumption. To see this, observe that $ W(\mathcal P) $ can be written as
$$ \begin{align*}W(\mathcal P)=\limsup_{n\to\infty}(T_{\beta_1}\times\cdots\times T_{\beta_d})^{-n}P_n.\end{align*} $$
In the presence of such an assumption, $ (T_{\beta _1}\times \cdots \times T_{\beta _d})^{-n} P_n $ will be a union of hyperrectangles whose sides are also parallel to the axes. Thus, the ‘rectangle to rectangle’ mass transference principle of Wang and Wu [Reference Wang and Wu21] can be employed to obtain the desired lower bound for $ \dim _{\mathrm {H}} W(\mathcal P)$ . However, if this assumption is removed, then $ (T_{\beta _1}\times \cdots \times T_{\beta _d})^{-n} P_n $ is in general a union of parallelepipeds, and the mass transference principle, while still applicable, does not work well in this case. The main purpose of this paper is to determine $ \dim _{\mathrm {H}} W(\mathcal P) $ without assuming that each $ P_n $ is a hyperrectangle. We further show that $ W(\mathcal P) $ satisfies the large intersection property introduced by Falconer [Reference Falconer6]: the set $ W(\mathcal P) $ belongs, for some $ 0\le s\le d $ , to the class $ \mathscr G^s([0,1]^d) $ of $ G_\delta $ -sets with the property that any countable intersection of bi-Lipschitz images of sets in $ \mathscr G^s([0,1]^d) $ has Hausdorff dimension at least $ s $ . In particular, the Hausdorff dimension of $ W(\mathcal P) $ is at least $ s $ .
Let
$$ \begin{align*}f=\operatorname{diag}(\beta_1^{-1},\ldots,\beta_d^{-1}).\end{align*} $$
Loosely speaking, the set $(T_{\beta _1}\times \cdots \times T_{\beta _d})^{-n} P_n$ consists of parallelepipeds with the same shape as $f^nP_n$ . Note that, up to a translation, each $ P_n $ is uniquely determined by $ d $ column vectors $ \alpha _j^{(n )}$ . In Lemma 3.3, we establish the existence of a rearrangement $f^n\alpha _{i_1}^{(n)},\ldots ,f^n\alpha _{i_d}^{(n)}$ of $f^n\alpha _1^{(n)},\ldots ,f^n\alpha _d^{(n)}$ which ensures that, after applying the Gram–Schmidt process, the resulting pairwise orthogonal vectors, denoted by $\gamma _1^{(n)},\ldots ,\gamma _d^{(n)}$ , satisfy the inequality
Most importantly, this yields that up to a multiplicative constant, the optimal cover of $f^nP_n$ is the same as that of the hyperrectangle with sidelengths $|\gamma _1^{(n)}|\ge \cdots \ge |\gamma _d^{(n)}|>0$ . To describe the optimal cover of $(T_{\beta _1}\times \cdots \times T_{\beta _d})^{-n} P_n$ , let
$$ \begin{align*}\mathcal A_n=\{\beta_1^{-n},\ldots,\beta_d^{-n},|\gamma_1^{(n)}|,\ldots,|\gamma_d^{(n)}|\},\end{align*} $$
and define
where the sets $\mathcal K_{n,1}(\tau )$ and $\mathcal K_{n,2}(\tau )$ are defined as
Theorem 1.1. Let $\mathcal P=\{P_n\}_{n\ge 1}$ be a sequence of parallelepipeds. For any $n\in {\mathbb {N}}$ , let $\gamma _1^{(n)},\ldots ,\gamma _d^{(n)}$ be the vectors described in equation (1.1). Then,
Further, we have $ W(\mathcal P)\in \mathscr G^{s^*}([0,1]^d) $ .
Remark 1.2. In fact, orthogonalizing the vectors $f^n\alpha _1^{(n)},\ldots ,f^n\alpha _d^{(n)}$ in different orders will result in different pairwise orthogonal vectors. However, not all of them describe the optimal cover of $f^nP_n$ well; only those satisfying equation (1.1) do. For example, let P be a parallelogram determined by the two column vectors $\alpha _1=(1,0)^\top $ and $\alpha _2=(m,m)^\top $ , $m>1$ . Orthogonalizing in the order of $\alpha _1$ and $\alpha _2$ (respectively $\alpha _2$ and $\alpha _1$ ), we get the orthogonal vectors $\gamma _1=\alpha _1=(1,0)^\top $ and $\gamma _2=(0,m)^\top $ (respectively $\eta _1=\alpha _2=(m,m)^\top $ and $\eta _2=(1/2,-1/2)^\top $ ). Denote the rectangles determined by $\gamma _1$ and $\gamma _2$ (respectively $\eta _1$ and $\eta _2$ ) by R (respectively $\tilde R$ ). As one can easily see from Figure 1, P is contained in the rectangle obtained by scaling $\tilde R$ by a factor of $2$ , whereas for R, a factor of m is required. Note that $|\gamma _1|<|\gamma _2|$ , while $|\eta _1|>|\eta _2|$ . This simple example partly motivates our choice of a suitable order in which to orthogonalize $f^n\alpha _1^{(n)},\ldots ,f^n\alpha _d^{(n)}$ so that the resulting vectors satisfy equation (1.1), which turns out to be crucial (see Lemma 3.3 and equations (3.2) and (3.3)).
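The two orthogonalization orders in this remark are easy to check numerically. A small sketch (our own illustration, with the concrete value $m=3$ ) reproducing the vectors $\gamma _1,\gamma _2$ and $\eta _1,\eta _2$ :

```python
import numpy as np

def gram_schmidt(vectors):
    """Plain Gram-Schmidt in the given order (no normalization)."""
    out = []
    for v in vectors:
        w = v - sum((v @ g) / (g @ g) * g for g in out)
        out.append(w)
    return out

m = 3.0
a1, a2 = np.array([1.0, 0.0]), np.array([m, m])

g1, g2 = gram_schmidt([a1, a2])   # order alpha_1, alpha_2
e1, e2 = gram_schmidt([a2, a1])   # order alpha_2, alpha_1

assert np.allclose(g1, [1, 0]) and np.allclose(g2, [0, m])
assert np.allclose(e1, [m, m]) and np.allclose(e2, [0.5, -0.5])
```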
Remark 1.3. Li et al [Reference Li, Liao, Velani and Zorin15, Theorem 12] studied an analogous problem, where $P_n$ is restricted to the following form:
and where $\psi _i$ is a positive function defined on the natural numbers for $1\le i\le d$ . They further imposed the additional condition that $\limsup _{n\to \infty }-\log \psi _i(n)/n<\infty $ ( $1\le i\le d$ ), as their proof of the lower bound for $\dim _{\mathrm {H}} W(\mathcal P)$ relies on the ‘rectangle to rectangle’ mass transference principle [Reference Wang and Wu21, Theorem 3.3], which demands a similar condition. Their strategy is to investigate the accumulation points of the sequence $\{({-\log \psi _1(n)}/n,\ldots ,{-\log \psi _d(n)}/n)\}_{n\ge 1}$ , and then to select a suitable accumulation point to construct a Cantor subset of $W(\mathcal P)$ , thereby obtaining the lower bound for $\dim _{\mathrm {H}} W(\mathcal P)$ . However, if $\limsup _{n\to \infty }-\log \psi _i(n)/n=\infty $ for some i, they showed by an example [Reference Li, Liao, Velani and Zorin15, §5.3] that this strategy may fail to achieve the desired lower bound. This problem has been addressed in their recent paper [Reference Li, Liao, Velani, Wang and Zorin14]. We stress that Theorem 1.1 does not impose any such condition on $P_n$ , and our approach differs from that of [Reference Li, Liao, Velani, Wang and Zorin14].
To gain insight into Theorem 1.1, we present two examples to illustrate how the rotations of rectangles affect the Hausdorff dimension of $ W(\mathcal P) $ .
Example 1.4. Let $ \beta _1=2 $ and $ \beta _2=4 $ . Let $ \{H_n\}_{n\ge 1} $ be a sequence of rectangles with $ H_n=[0,2^{-n}]\times [0,4^{-n}] $ . For a sequence $ \{\theta _n\}_{n\ge 1} $ with $ \theta _n\in [0,\pi /2] $ , let
where $ R_{\theta } $ denotes the counterclockwise rotation by an angle $ \theta $ . The translation $(1/2,1/2)$ here is only used to ensure $P_n\subset [0,1)^d$ . Suppose that $ \theta _n\equiv \theta $ for all $ n\ge 1 $ . For any $ n\ge 1 $ , we have
By Theorem 1.1, we get
Example 1.5. Let $ P_n $ be as in equation (1.3) but with $ \theta _n=\arccos 2^{-an} $ for some $ a> 0 $ . Then,
By Theorem 1.1, we get
The structure of the paper is as follows. In §2, we recall several notions and elementary properties of $\beta $ -transformation. In §3, we estimate the optimal cover of parallelepipeds in terms of Falconer’s singular value function. In §4, we prove Theorem 1.1.
2. $\beta $ -transformation
We start with a brief discussion that summarizes several fundamental properties of the $ \beta $ -transformation.
For $ \beta>1 $ , let $ T_\beta $ be the $\beta $ -transformation on $ [0,1) $ . For any $ n\ge 1 $ and $ x\in [0,1) $ , define $ \epsilon _n(x,\beta )=\lfloor \beta T_\beta ^{n-1}x\rfloor $ . Then, we can write
$$ \begin{align*}x=\sum_{n=1}^{\infty}\frac{\epsilon_n(x,\beta)}{\beta^n},\end{align*} $$
and we call the sequence
$$ \begin{align*}\epsilon(x,\beta):=(\epsilon_1(x,\beta),\epsilon_2(x,\beta),\ldots)\end{align*} $$
the $\beta $ -expansion of $ x $ . From the definition of $ T_\beta $ , it is clear that, for $ n\ge 1 $ , $ \epsilon _n(x,\beta ) $ belongs to the alphabet $ \{0,1,\ldots ,\lceil \beta -1\rceil \} $ , where $ \lceil x\rceil $ denotes the smallest integer greater than or equal to $ x $ . When $ \beta $ is not an integer, not all sequences in $ \{0,1,\ldots ,\lceil \beta -1\rceil \}^{\mathbb {N}} $ are the $ \beta $ -expansions of some $ x\in [0,1) $ . This leads to the notion of $\beta $ -admissible sequences.
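The digits $ \epsilon _n(x,\beta ) $ can be generated simply by iterating $ T_\beta $ ; a minimal sketch (our own illustration, not from the paper):

```python
import math

def beta_expansion(x, beta, n):
    """First n digits eps_k(x, beta) = floor(beta * T_beta^{k-1} x)."""
    digits = []
    for _ in range(n):
        y = beta * x
        d = math.floor(y)
        digits.append(d)
        x = y - d                     # x <- T_beta(x)
    return digits

beta = (1 + 5 ** 0.5) / 2             # golden ratio
x = 0.5
digits = beta_expansion(x, beta, 25)

# the partial sums of eps_k * beta^{-k} recover x
approx = sum(d * beta ** -(k + 1) for k, d in enumerate(digits))
assert abs(approx - x) < beta ** -20
assert all(d in (0, 1) for d in digits)   # alphabet {0, ..., ceil(beta-1)}
```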
Definition 2.1. A finite or an infinite sequence $ (\epsilon _1,\epsilon _2,\ldots )\in \{0,1,\ldots ,\lceil \beta -1\rceil \}^{\mathbb {N}} $ is said to be $\beta $ -admissible if there exists an $ x\in [0,1) $ such that the $\beta $ -expansion of $ x $ begins with $ (\epsilon _1,\epsilon _2,\ldots ) $ .
Denote by $ \Sigma _\beta ^n $ the collection of all admissible sequences of length $ n $ . The following result of Rényi [Reference Rényi19] implies that the cardinality of $ \Sigma _\beta ^n $ is comparable to $ \beta ^n $ .
Lemma 2.2. [Reference Rényi19, equation (4.9)]
Let $ \beta>1 $ . For any $ n\ge 1 $ ,
$$ \begin{align*}\beta^n\le \#\Sigma_\beta^n\le \frac{\beta^{n+1}}{\beta-1},\end{align*} $$
where $ \# $ denotes the cardinality of a finite set.
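Admissibility can be checked mechanically by tracking the image interval $ T_\beta ^k(I) $ of each prefix cylinder. The following Python sketch (our own illustration, not part of the paper) enumerates $ \Sigma _\beta ^n $ for the golden ratio and confirms that its cardinality is comparable to $ \beta ^n $ , in the form of Rényi's bounds $ \beta ^n\le \#\Sigma _\beta ^n\le \beta ^{n+1}/(\beta -1) $ :

```python
import math

def children(J, beta, digits):
    """Given J = T^k(cylinder) as an interval (lo, hi) in [0,1),
    return the admissible next digits with their updated intervals."""
    lo, hi = J
    out = []
    for e in digits:
        a, b = max(lo, e / beta), min(hi, (e + 1) / beta)
        if b - a > 1e-12:             # nonempty overlap: digit e is allowed
            out.append((e, (beta * a - e, beta * b - e)))
    return out

def count_admissible(beta, n):
    """#Sigma_beta^n: number of beta-admissible words of length n."""
    level = [(0.0, 1.0)]
    for _ in range(n):
        level = [J for old in level
                 for _, J in children(old, beta, range(math.ceil(beta)))]
    return len(level)

beta = (1 + 5 ** 0.5) / 2   # golden ratio: admissible words avoid the block "11"
for n in range(1, 7):
    c = count_admissible(beta, n)
    assert beta ** n <= c <= beta ** (n + 1) / (beta - 1)   # Renyi's bounds
```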
Definition 2.3. For any $ \boldsymbol {\epsilon }_n:=(\epsilon _1,\ldots ,\epsilon _n)\in \Sigma _\beta ^n $ , we call
$$ \begin{align*}I_{n,\beta}(\boldsymbol{\epsilon}_n):=\{x\in[0,1): \epsilon_i(x,\beta)=\epsilon_i \text{ for } 1\le i\le n\}\end{align*} $$
an $ n $ th level cylinder.
From the definition, it follows that $ T_\beta ^n|_{I_{n,\beta }(\boldsymbol {\epsilon }_n)} $ is linear with slope $ \beta ^n $ , and it maps the cylinder $ I_{n,\beta }(\boldsymbol {\epsilon }_n) $ into $ [0,1) $ . If $ \beta $ is not an integer, then the dynamical system $ (T_\beta , [0,1)) $ is not a full shift, and so $ T_\beta ^n|_{I_{n,\beta }(\boldsymbol {\epsilon }_n)} $ is not necessarily onto. In other words, the length of $ I_{n,\beta }(\boldsymbol {\epsilon }_n) $ may be strictly less than $ \beta ^{-n} $ , which makes describing the dynamical properties of $ T_\beta $ more challenging. To get around this barrier, we need the following notion.
Definition 2.4. A cylinder $ I_{n,\beta }(\boldsymbol {\epsilon }_n) $ or a sequence $ \boldsymbol {\epsilon }_n\in \Sigma _\beta ^n $ is called $\beta $ -full if it has maximal length, that is, if
$$ \begin{align*}|I_{n,\beta}(\boldsymbol{\epsilon}_n)|=\beta^{-n},\end{align*} $$
where $ |I| $ denotes the diameter of $ I $ .
When there is no risk of ambiguity, we will write full instead of $\beta $ -full. The importance of full sequences stems from the fact that the concatenation of any two full sequences is still full.
Proposition 2.5. [Reference Fan and Wang7, Lemma 3.2]
An $ n $ th level cylinder $ I_{n,\beta }(\boldsymbol {\epsilon }_n) $ is full if and only if, for any $\beta $ -admissible sequence $ \boldsymbol {\epsilon '}_m\in \Sigma _\beta ^m $ with $ m\ge 1 $ , the concatenation $ \boldsymbol {\epsilon }_n\boldsymbol {\epsilon }_m' $ is still $ \beta $ -admissible. Moreover,
$$ \begin{align*}|I_{n+m,\beta}(\boldsymbol{\epsilon}_n\boldsymbol{\epsilon}_m')|=|I_{n,\beta}(\boldsymbol{\epsilon}_n)|\cdot|I_{m,\beta}(\boldsymbol{\epsilon}_m')|.\end{align*} $$
So, for any two full cylinders $ I_{n,\beta }(\boldsymbol {\epsilon }_n), I_{m,\beta }(\boldsymbol {\epsilon }_m') $ , the cylinder $ I_{n+m,\beta }(\boldsymbol {\epsilon }_n\boldsymbol {\epsilon }_m') $ is also full.
For an interval $ I\subset [0,1) $ , let $ \Lambda _{\beta }^n(I) $ denote the set of full sequences $ \boldsymbol {\epsilon }_n $ of length $ n $ with $ I_{n,\beta }(\boldsymbol {\epsilon }_n)\subset I $ . In particular, if $ I=[0,1) $ , then we simply write $ \Lambda _{\beta }^n $ instead of $ \Lambda _{\beta }^n([0,1)) $ . For this case, the cardinality of $\Lambda _{\beta }^n$ can be estimated as follows.
Lemma 2.6. [Reference Li17, Lemma 1.1.46]
Let $\beta>1$ and $n\in {\mathbb {N}}$ .
(1) If $\beta \in {\mathbb {N}}$ , then
$$ \begin{align*}\#\Lambda_{\beta}^n=\beta^n.\end{align*} $$
(2) If $\beta>2$ , then
$$ \begin{align*}\#\Lambda_{\beta}^n>\frac{\beta-2}{\beta-1}\beta^n.\end{align*} $$
(3) If $1<\beta <2$ , then
$$ \begin{align*}\#\Lambda_{\beta}^n>\bigg(\prod_{i=1}^{\infty}(1-\beta^{-i})\bigg)\beta^n.\end{align*} $$
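Full words can be detected numerically by checking whether $ T_\beta ^n $ maps the cylinder onto $ [0,1) $ . The sketch below (our own illustration, not from the paper) verifies the bound in part (3) and the concatenation property of Proposition 2.5 for the golden ratio:

```python
import itertools
import math

def image_interval(word, beta):
    """Return T_beta^n(I(word)) as (lo, hi), or None if word is inadmissible."""
    lo, hi = 0.0, 1.0
    for e in word:
        a, b = max(lo, e / beta), min(hi, (e + 1) / beta)
        if b - a <= 1e-12:
            return None
        lo, hi = beta * a - e, beta * b - e
    return (lo, hi)

def is_full(word, beta):
    """A word is full iff T_beta^n maps its cylinder onto [0,1)."""
    J = image_interval(word, beta)
    return J is not None and J[1] - J[0] > 1 - 1e-9

beta = (1 + 5 ** 0.5) / 2
n = 6
full_words = [w for w in itertools.product(range(math.ceil(beta)), repeat=n)
              if is_full(w, beta)]

# Lemma 2.6(3): #Lambda_beta^n > (prod (1 - beta^{-i})) * beta^n for 1 < beta < 2
lower = math.prod(1 - beta ** (-i) for i in range(1, 60)) * beta ** n
assert len(full_words) > lower

# Proposition 2.5: the concatenation of two full words is still full
w1, w2 = full_words[0], full_words[1]
assert is_full(w1 + w2, beta)
```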
The general case $I\ne [0,1)$ requires the following technical lemma due to Bugeaud and Wang [Reference Bugeaud and Wang4].
Lemma 2.7. [Reference Bugeaud and Wang4, Proposition 4.2]
Let $\delta>0$ . Let $n_0\ge 3$ be an integer such that $(\beta n_0)^{1+\delta }<\beta ^{n_0\delta }$ . For any interval $I\subset [0,1)$ with $0<|I|<n_0\beta ^{-n_0}$ , there exists a full cylinder $I_{m,\beta }(\boldsymbol {\epsilon }_m)\subset I$ such that $|I|^{1+\delta }<|I_{m,\beta }(\boldsymbol {\epsilon }_m)|<|I|$ .
Now, we are ready to tackle the general case.
Lemma 2.8. Let $ \delta>0 $ . Let $ n_0\ge 3 $ be an integer such that $ (\beta n_0)^{1+\delta }<\beta ^{n_0\delta } $ . Then, for any interval $ I $ with $ 0<|I|<n_0\beta ^{-n_0} $ , there exists a constant $ c_{\beta }>0 $ depending on $\beta $ such that for any $ n\ge -(1+\delta )\log _\beta |I|$ ,
Proof. Since $ |I|<n_0\beta ^{-n_0} $ , by Lemma 2.7, there exists a full cylinder $ I_{m,\beta }(\boldsymbol {\epsilon }_m) $ satisfying
For such m, we have $ n\ge m $ whenever $ n\ge -(1+\delta )\log _\beta |I| $ . By Proposition 2.5, the concatenation of two full sequences $ \boldsymbol {\epsilon }_{n-m} \in \Lambda _{\beta }^{n-m} $ and $ \boldsymbol {\epsilon }_m $ is still full. Thus,
where the constant $c_\beta>0$ depending on $\beta $ is given in Lemma 2.6.
3. Optimal cover of parallelepipeds
The proof of Theorem 1.1 relies on finding efficient coverings of the $\limsup $ set $W(\mathcal P)$ by balls. With this in mind, we need to study the optimal cover of parallelepipeds, which is closely related to their Hausdorff content.
In what follows, for geometric reasons, it will be convenient to equip $\mathbb R^d$ with the maximum norm, so that balls correspond to hypercubes. For any set $E\subset \mathbb R^d$ , its s-dimensional Hausdorff content is given by
$$ \begin{align*}\mathcal H^s_\infty(E)=\inf\bigg\{\sum_{i}r_i^s: E\subset\bigcup_{i}B(\mathbf{x}_i,r_i)\bigg\}.\end{align*} $$
Thus, the optimal cover of a Borel set can be characterized by its Hausdorff content, which is usually estimated from below by constructing a suitable mass distribution on the set, via the mass distribution principle described below.
Proposition 3.1. (Mass distribution principle [Reference Bishop and Peres3, Lemma 1.2.8])
Let $ E $ be a subset of $ \mathbb R^d $ . If $ E $ supports a strictly positive Borel measure $ \mu $ that satisfies
$$ \begin{align*}\mu(B(\mathbf{x},r))\le cr^s\end{align*} $$
for some constant $ 0<c<\infty $ and for every ball $B(\mathbf {x},r)$ , then $ \mathcal H^s_\infty (E)\ge \mu (E)/c $ .
Following Falconer [Reference Falconer5], when E is taken as a hyperrectangle R, its Hausdorff content can be expressed as the so-called singular value function. For a hyperrectangle $ R\subset \mathbb R^d $ with sidelengths $ a_1\ge a_2\ge \cdots \ge a_d>0 $ and a parameter $s\in [0,d]$ , the singular value function $\varphi ^s$ is defined by
$$ \begin{align*}\varphi^s(R)=a_1a_2\cdots a_m a_{m+1}^{s-m},\end{align*} $$
where $m=\lfloor s\rfloor $ .
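The singular value function and its covering interpretation can be sketched in a few lines (our own illustration; `phi` is a hypothetical helper name, not the paper's notation):

```python
import math

def phi(s, sides):
    """Falconer's singular value function phi^s(R) for a hyperrectangle
    with sidelengths a_1 >= ... >= a_d > 0 and 0 < s <= d."""
    a = sorted(sides, reverse=True)
    m = math.floor(s)
    out = math.prod(a[:m])
    if m < len(a):
        out *= a[m] ** (s - m)
    return out

# Covering R by r-cubes needs N(r) ~ prod max(a_i/r, 1) cubes, contributing
# N(r) * r^s to the s-volume; minimizing over r in {a_i} recovers phi^s(R).
sides = [0.5, 0.25]
s = 1.5
cover = min(r ** s * math.prod(max(a / r, 1) for a in sides) for r in sides)
assert math.isclose(phi(s, sides), cover)   # both equal 0.25
```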
The next lemma allows us to estimate the Hausdorff content of a Borel set inside a hyperrectangle. Denote the d-dimensional Lebesgue measure by $\mathcal L^d$ .
Lemma 3.2. Let $ E\subset \mathbb R^d $ be a bounded Borel set. If there exists a hyperrectangle R with sidelengths $a_1\ge a_2\ge \cdots \ge a_d>0$ such that $E\subset R$ and $\mathcal {L}^d(E)\ge c\mathcal {L}^d(R)$ for some $c>0$ , then for any $0<s\le d$ ,
Proof. The second inequality simply follows from $E\subset R$ and equation (3.1). So, we only need to prove the first one. Let $\nu $ be the normalized Lebesgue measure supported on E, that is,
For any $0<s\le d$ , let $m=\lfloor s\rfloor $ be the integer part of s. Now we estimate the $\nu $ -measure of an arbitrary ball $B(\mathbf {x},r)$ with $r>0$ and $\mathbf {x}\in E$ . The proof is split into two cases.
Case 1: $0<r<a_d$ . Then,
Case 2: $a_{i+1}\le r<a_i$ for $1\le i\le d-1$ . It follows that
If $i>m=\lfloor s\rfloor $ , then the right-hand side can be estimated in a way similar to Case 1,
If $i\le m=\lfloor s\rfloor $ , then $i-s\le 0$ , and so
where the last inequality follows from the fact that $a_{i+1}\le \cdots \le a_{m+1}$ .
With the estimation given above, by the mass distribution principle, we have
as desired.
By the above lemma, to obtain the optimal cover of a parallelepiped P, it suffices to find a suitable hyperrectangle containing it. Since the optimal cover of P does not depend on its location, we may assume that one of its vertices lies at the origin. Under this assumption, P is uniquely determined by d column vectors, say $\alpha _1,\ldots ,\alpha _d$ . Moreover, we have
Lemma 3.3. Let P be a parallelepiped given above. There exists a hyperrectangle R such that
$P\subset R$ , the sidelengths of $R$ are $2^{d+1}|\gamma _1|\ge \cdots \ge 2^{d+1}|\gamma _d|>0$ , and $\mathcal L^d(P)=|\gamma _1|\cdots |\gamma _d|$ , where $\gamma _1,\ldots ,\gamma _d$ are the pairwise orthogonal vectors obtained from $\alpha _1,\ldots ,\alpha _d$ in the proof below.
Proof. We will apply the Gram–Schmidt process to $\alpha _1,\ldots ,\alpha _d$ in a suitable order to obtain d pairwise orthogonal vectors that yield the desired hyperrectangle.
First, let $\gamma _1=\alpha _{i_1}$ with $|\alpha _{i_1}|=\max _{1\le l\le d} |\alpha _l|$ . For $1<k\le d$ , let $\gamma _k$ be defined inductively as
where $\alpha _{i_k}$ is chosen so that
This is the standard Gram–Schmidt process and so $\gamma _1,\ldots ,\gamma _d$ are pairwise orthogonal. In addition,
Denote the rightmost upper triangular matrix by U. For any $\mathbf {x}=x_{i_1}\alpha _{i_1}+\cdots +x_{i_d}\alpha _{i_d}\in P$ with $(x_{i_1},\ldots ,x_{i_d})\in [0,1]^d$ , we have
The proof of Lemma 3.3 will be completed with the help of the following lemma.
Lemma 3.4. The absolute value of each entry of U is not greater than $2$ .
Proof. For any $1< k\le d$ , by the orthogonality of $\gamma _1,\ldots ,\gamma _{k-1}$ ,
where the last inequality follows from the definition of $\gamma _{k-1}$ (see equation (3.3)). This gives
By the above inequality and equation (3.3), for any $1\le l\le k$ , it follows that
which implies that
Now we proceed to prove Lemma 3.3.
Let $(U_{i1},\ldots ,U_{id})$ be the ith row of U. Since $0\le x_{i_k}\le 1$ , by Lemma 3.4, we have
and so
Therefore, $P\subset R$ which finishes the proof of the first point.
On the other hand, by an elementary result of linear algebra,
where the third equality follows from the fact that $\gamma _1,\ldots ,\gamma _d$ are pairwise orthogonal and U is upper triangular with all diagonal entries equal to 1, and the last equality follows from equation (3.6).
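The pivoted orthogonalization used in this proof can be implemented directly. In the sketch below (our own illustration; the three test vectors are ours, and we read the selection rule of equation (3.3) as choosing, at each step, the remaining vector with the longest residual), we check both the volume identity of Lemma 3.3 and the entry bound of Lemma 3.4:

```python
import numpy as np

def pivoted_gram_schmidt(alphas):
    """Gram-Schmidt on the given vectors, at each step picking the remaining
    vector whose residual (after projecting away span(g_1,...,g_{k-1}))
    is longest. Returns the orthogonal gammas and the order used."""
    remaining = list(range(len(alphas)))
    gammas, order = [], []
    while remaining:
        best, best_res = None, None
        for i in remaining:
            r = alphas[i] - sum((alphas[i] @ g) / (g @ g) * g for g in gammas)
            if best is None or np.linalg.norm(r) > np.linalg.norm(best_res):
                best, best_res = i, r
        gammas.append(best_res)
        order.append(best)
        remaining.remove(best)
    return gammas, order

# a parallelepiped spanned by three column vectors (test data)
A = [np.array(v, float) for v in ([1, 0, 0], [3, 3, 0], [1, 1, 2])]
gammas, order = pivoted_gram_schmidt(A)

# volume identity: prod |gamma_i| equals |det(alpha_1 alpha_2 alpha_3)|
vol = abs(np.linalg.det(np.column_stack(A)))
assert np.isclose(np.prod([np.linalg.norm(g) for g in gammas]), vol)

# Lemma 3.4: the entries of U, where (alpha_{i_1} ... alpha_{i_d}) =
# (gamma_1 ... gamma_d) U, are bounded by 2 in absolute value
G = np.column_stack(gammas)
P = np.column_stack([A[i] for i in order])
U = np.linalg.solve(G.T @ G, G.T @ P)
assert np.max(np.abs(U)) <= 2 + 1e-9
```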
4. Proof of Theorem 1.1
Throughout, we write $a\asymp b$ if $c^{-1}\le a/b\le c$ , and $a\lesssim b$ if $a\le cb$ for some unspecified constant $c\ge 1$ .
4.1. Upper bound of $ \dim _{\mathrm {H}} W(\mathcal P) $
Obtaining upper estimates for the Hausdorff dimension of a $\limsup $ set is usually straightforward, as it involves a natural covering argument.
For $ 1\le i\le d $ and any $ \boldsymbol {\epsilon }_n^i=(\epsilon _1^i,\ldots ,\epsilon _n^i)\in \Sigma _{\beta _i}^n $ , we always take
to be the left endpoint of $ I_{n,\beta _i}(\boldsymbol {\epsilon }_n^i) $ . Write $ \mathbf {z}^*=(z_1^*,\ldots , z_d^*) $ . Then, $ W(\mathcal P) $ is contained in the following set:
For any $n\ge 1$ , let $f^n\alpha _1^{(n)},\ldots ,f^n\alpha _d^{(n)}$ be the vectors that determine $f^nP_n$ . By Lemma 3.3 and equation (3.2), there is a hyperrectangle $R_n$ with sidelengths $2^{d+1}|\gamma _1^{(n)}|\ge \cdots \ge 2^{d+1}|\gamma _d^{(n)}|>0$ such that $f^nP_n\subset R_n$ .
Recall that $\mathcal A_n=\{\beta _1^{-n},\ldots ,\beta _d^{-n},|\gamma _1^{(n)}|,\ldots ,|\gamma _d^{(n)}|\}$ , and for any $\tau \in \mathcal A_n$ ,
Let $\tau \in \mathcal A_n$ . We now estimate the number of balls of diameter $\tau $ needed to cover the set $E_n$ . We start by covering a fixed parallelepiped $P:=f^nP_n+\mathbf {z}^*$ . In what follows, one can regard P as a hyperrectangle, since $P=f^nP_n+\mathbf {z}^*\subset R_n+\mathbf {z}^*$ . It is easily verified that we can find a collection $\mathcal B_n(P)$ of balls of diameter $\tau $ that covers P with
Observe that the collection $\mathcal B_n(P)$ will also cover other parallelepipeds contained in $E_n$ along the direction of the ith axis with $i\in \mathcal K_{n,1}(\tau )$ . Namely, the collection of balls $\mathcal B_n(P)$ simultaneously covers
parallelepipeds. Since the number of parallelepipeds contained in $E_n$ is $\lesssim \beta _1^n\cdots \beta _d^{n}$ , one needs at most
balls of diameter $\tau $ to cover $E_n$ .
Now suppose that $s>s^*=\limsup s_n$ , where
Let $0<\varepsilon <s-s^*$ . For sufficiently large n, we have $s>s_n+\varepsilon $ . Let $\tau _0\in \mathcal A_n$ be such that the minimum in the definition of $s_n$ is attained. In particular, equation (4.3) holds for $\tau _0$ . The s-volume of the cover of $E_n$ is majorized by
Since the elements of $\mathcal A_n$ decay exponentially, the last quantity is less than $e^{-n\delta \varepsilon }$ for some $\delta>0$ independent of n and $\varepsilon $ . It follows from the definition of the s-dimensional Hausdorff measure that for any s, $\delta>0$ and $\varepsilon $ given above,
Therefore, $\dim _{\mathrm {H}} W(\mathcal P)\le s$ . Since this is true for all $s>s^*$ , we have
4.2. Lower bound of $ \dim _{\mathrm {H}} W(\mathcal P) $
The proof crucially relies on the following lemma.
Lemma 4.1. [Reference He9, Corollary 2.6]
Let $ \{F_n\}_{n\ge 1} $ be a sequence of open sets in $[0,1]^d$ and $ F=\limsup F_n $ . Let $ s>0 $ . If for any $ 0<t<s $ , there exists a constant $ c_t $ such that
holds for all hypercubes $ D\subset [0,1]^d $ , then $ F \in \mathscr G^{s}([0,1]^d) $ . In particular, $\dim _{\mathrm {H}} F\ge s$ .
Remark 4.2. A weaker version by Persson and Reeve [Reference Persson and Reeve18, Lemma 2.1] would also suffice for the current proof, but is not adopted here because it would result in a more complicated argument.
Let
where $ \mathbf {z}^*=(z_1^*,\ldots ,z_d^*) $ is defined as in equation (4.1).
Lemma 4.3. For any $ 0< t<s^*=\limsup s_n $ ,
holds for all hypercubes $ D\subset [0,1]^d $ , where the unspecified constant depends on d only. Therefore, $W(\mathcal P)\in \mathscr G^{s^*}([0,1]^d)$ and, in particular,
Proof. Fix $ 0<t<s^* $ . Write $\varepsilon =s^*-t$ . By definition, there exist infinitely many n such that
In view of Lemma 2.8, let $ D\subset [0,1]^d $ be a hypercube with $ |D|\le n_0\beta _d^{-n_0} $ , where $ n_0 $ is an integer such that $ (\beta _i n_0)^{1+\varepsilon /d}<\beta _i^{n_0\varepsilon /d} $ for $1\le i\le d$ . Let $ n $ be an integer such that equation (4.5) holds and for any $1\le i\le d$ ,
Obviously, there are still infinitely many n that satisfy these conditions. Write $ D=I_1\times \cdots \times I_d $ with $ |I_1|=\cdots =|I_d| $ . The first inequality in equation (4.6) ensures that Lemma 2.8 is applicable to bound $ \#\Lambda _{\beta _i}^n(I_i) $ from below for $ 1\le i\le d $ .
Recall from Lemma 3.3 and equation (3.2) that for any $n\ge 1$ , $f^nP_n+\mathbf {z}^*$ is contained in some hyperrectangle with sidelengths $2^{d+1}|\gamma _1^{(n)}|\ge \cdots \ge 2^{d+1}|\gamma _d^{(n)}|>0$ . For any $n\in {\mathbb {N}}$ satisfying equations (4.5) and (4.6), define a probability measure $ \mu _n $ supported on $ E_n\cap D $ by
where $ \nu _{\mathbf {z}^*} $ is defined by
The equality $\mathcal {L}^d(f^nP_n+\mathbf {z}^*)=|\gamma _1^{(n)}|\cdots |\gamma _d^{(n)}|$ can be deduced from equation (3.7).
Let $ \mathbf {x}\in E_n\cap D $ and $ r>0 $ . Suppose that $\mathbf {x}\in f^nP_n+\mathbf {y}^*\subset E_n\cap D$ . Now, we estimate $ \mu _n (B(\mathbf {x},r)) $ , and the proof is divided into four distinct cases.
Case 1: $r\ge |D|$ . Clearly, since $t<s\le d$ ,
Case 2: $ r\le |\gamma _d^{(n)}| $ . Note that in the definition of $\mu _n $ , all the cylinders under consideration are full. We see that the ball $ B(\mathbf {x},r) $ intersects at most $ 2^d $ parallelepipeds of the form $ f^nP_n+\mathbf {z}^* $ . For any such parallelepiped, by the definition of $\nu _{\mathbf {z}^*}$ (see equation (4.8)) and Lemma 2.8, we have
Since $f^nP_n+\mathbf {z}^*$ is contained in some $I_{n,\beta _1}(\boldsymbol {\epsilon }_n^1)\times \cdots \times I_{n,\beta _d}(\boldsymbol {\epsilon }_n^d)$ , by a volume argument, we have $\sum _{i=1}^d(\log \beta _i^n+\log |\gamma _i^{(n)}|)< 0$ . This combined with $r\le |\gamma _d^{(n)}|<1$ gives
One can see that the right-hand side of equation (4.10) is exactly the quantity in equation (1.2) with $\tau =|\gamma _d^{(n)}|$ , since
This means that the quantity in equation (4.10) is greater than or equal to $s_n$ , and so by equation (4.9), one has
Case 3: $ \beta _1^{-n}< r\le |D| $ . In this case, the ball $B(\mathbf {x},r)$ is sufficiently large so that for any hyperrectangle $R:=I_{n,\beta _1}(\boldsymbol {\epsilon }_n^1)\times \cdots \times I_{n,\beta _d}(\boldsymbol {\epsilon }_n^d)$ ,
A simple calculation shows that $B(\mathbf {x},r)$ intersects $\lesssim r^d\beta _1^n\cdots \beta _d^n$ hyperrectangles of the form $I_{n,\beta _1}(\boldsymbol {\epsilon }_n^1)\times \cdots \times I_{n,\beta _d}(\boldsymbol {\epsilon }_n^d)$ . By the definition of $\mu _n $ , one has
Case 4: Arrange the elements of $\mathcal A_n$ in non-increasing order. Suppose that $\tau _{k+1}\le r<\tau _k$ with $\tau _k$ and $\tau _{k+1}$ being two consecutive terms in $\mathcal A_n$ . Let
be defined in the same way as in equation (1.2). It is easy to see that $B(\mathbf {x},r)$ can intersect at most
parallelepipeds with positive $\mu _n$ -measure. Moreover, the $\mu _n $ -measure of the intersection of each parallelepiped with $B(\mathbf {x},r)$ is majorized by
Therefore,
where
Clearly, as a function of r, $s(r)$ is monotonic on the interval $[\tau _{k+1},\tau _k]$ . So the minimal value is attained when $r=\tau _{k+1}$ or $\tau _k$ . First, suppose that the minimum is attained at $r=\tau _k$ . If $\mathcal K_{n,1}(\tau _k)=\mathcal K_{n,1}(\tau _{k+1})$ , then there is nothing to be proved. So we may assume that $\mathcal K_{n,1}(\tau _k)\ne \mathcal K_{n,1}(\tau _{k+1})$ . Since $\mathcal K_{n,1}(\tau _{k+1})\subsetneq \mathcal K_{n,1}(\tau _k)$ , one can see that $\tau _k=\beta _j^{-n}$ for some j and
It follows that
which implies that
By a similar argument, one still has $s(r)\ge s_n$ if the minimum is attained at $r=\tau _{k+1}$ . Therefore,
Summarizing the estimates of the $\mu _n $ -measures of arbitrary balls presented in Cases 1–4, we get
where the unspecified constant does not depend on D. Finally, by the mass distribution principle,
This holds for infinitely many n, which completes the proof.
Acknowledgements
This work was supported by NSFC (No. 1240010704). The author would like to thank Prof. Lingmin Liao for bringing this problem to his attention. Additionally, the author is grateful to the anonymous referees for their patience and efforts to improve the quality of the manuscript.