
Inheritance of strong mixing and weak dependence under renewal sampling

Published online by Cambridge University Press:  14 October 2022

Dirk-Philip Brandes*
Affiliation:
Ulm University
Imma Valentina Curato*
Affiliation:
Ulm University
Robert Stelzer*
Affiliation:
Ulm University
*Postal address: Helmholtzstraße 18, 89069 Ulm, Germany.

Abstract

Let X be a continuous-time strongly mixing or weakly dependent process and let T be a renewal process independent of X. We show general conditions under which the sampled process $(X_{T_i},T_i-T_{i-1})^{\top}$ is strongly mixing or weakly dependent. Moreover, we explicitly compute the strong mixing or weak dependence coefficients of the renewal sampled process and show that exponential or power decay of the coefficients of X is preserved (at least asymptotically). Our results imply that essentially all central limit theorems available in the literature for strongly mixing or weakly dependent processes can be applied when renewal sampled observations of the process X are at our disposal.

Type
Original Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Time series are ubiquitous in many applications, and it is often the case that the time interval separating two successive observations of the time series is itself random. We approach the study of such time series by using a continuous-time stationary process $X=(X_t)_{t \in \mathbb{R}}$ and a renewal process $T=(T_i)_{i \in \mathbb{Z}}$ which models the sampling scheme applied to X. We assume that X is strongly mixing or weakly dependent as defined in [Reference Rosenblatt39] and [Reference Dedecker, Doukhan, Lang, León, Louhichi and Prieur20] respectively, and that T is a process independent of X with inter-arrival time sequence $\tau=(\tau_i)_{i\in \mathbb{Z}\setminus \{0\}}$ . In this general model set-up, we show under which assumptions the renewal sampled process $Y=(Y_i)_{i \in \mathbb{Z}}$ defined as $Y_i=(X_{T_i},T_i-T_{i-1})^{\top}$ inherits strong mixing and weak dependence.

In the literature, the statistical inference methodologies based on renewal sampled data seldom employ a strongly mixing or weakly dependent process Y. To the best of our knowledge, the only existing example of this approach can be found in [Reference Aït-Sahalia and Mykland1], where it is shown that Y is $\rho$-strongly mixing, and this property is used to study the consistency of maximum likelihood estimators for continuous-time diffusion processes. In contrast, there exist several statistical estimators whose asymptotic properties rely heavily on ad hoc arguments tailored to specific models X. Examples of this kind appear in spectral density estimation theory. In [Reference Lii and Masry33], [Reference Masry35], [Reference Masry36], and [Reference Masry37], Lii and Masry studied non-parametric and parametric estimators of the spectral density of X via an aliasing-free sampling scheme defined through a renewal process; see [Reference Lii and Masry33] for a general definition of this set-up. Schemes such as the Poisson scheme allow us to overcome the aliasing problem, typically observed when working with a process that is not band-limited. Moreover, the spectral density estimators determined via renewal sampled data are consistent and asymptotically normally distributed once we assume that X has finite moments of all orders. Renewal sampled data are also used to define a spectral density estimator for Gaussian processes [Reference Bardet and Bertrand4], kernel density estimators for strongly mixing processes [Reference Masry38], non-parametric estimators of volatility and drift for scalar diffusions [Reference Chorowski and Trabs15], and parametric estimators of the covariance function of X as in [Reference Brandes and Curato7] and [Reference McDunnough and Wolfson34]. In [Reference McDunnough and Wolfson34], McDunnough and Wolfson analyzed an estimator of the covariance function of a Gauss–Markov process and a continuous-time Lévy-driven moving average.
In [Reference Brandes and Curato7], in particular, the asymptotic properties of the estimator are obtained by a suitable truncation of a Lévy-driven moving average process X that is proved to be strongly mixing.

Determining conditions under which the process Y inherits the asymptotic dependence of X significantly widens the applicability of renewal sampled data. Just as indicative examples, our analysis should enable the use of renewal sampled data to study spectral density estimators, Whittle estimators, and generalized method of moments estimators as defined in [Reference Bardet, Doukhan and León5], [Reference Curato and Stelzer16], [Reference do Rego Sousa and Stelzer22], and [Reference Rosenblatt40]. Moreover, knowledge of the asymptotic dependence of Y allows us to apply well-established asymptotic results for $\alpha$-mixing processes such as those in [Reference Bradley6, Chapter 10], [Reference Dedecker and Rio21], and [Reference Kanaya32]. These are functional and triangular array central limit theorems. The same argument applies to central limit theorems for weakly dependent processes such as those presented in [Reference Bulinski and Shashkin10], [Reference Dedecker and Doukhan19], and [Reference Doukhan and Wintenberger26]. In brief, understanding the dependence structure of the process Y allows us to obtain joint asymptotic results for $(X_{T_i},T_i-T_{i-1})_{i \in \mathbb{Z}}$ which enable inference on the process X even when the distribution of the sequence $\tau$ is not known, i.e. when the sampling scheme is not designed by an experimenter but just observed from the data. An example of the latter application appears in [Reference Brandes and Curato7, Theorem 5.2].

In this paper we study the inheritance of $\eta$-, $\lambda$-, $\kappa$-, $\zeta$-, and $\theta$-weak dependence and $\alpha$-mixing, which are extensively analyzed in the monographs [Reference Bradley6], [Reference Dedecker, Doukhan, Lang, León, Louhichi and Prieur20], and [Reference Doukhan23]. Covariance inequalities play an important role in the analysis of such dependence notions. The first covariance inequalities satisfied by $\alpha$-mixing processes and associated processes (which under suitable assumptions are equivalent to $\zeta$-weakly dependent processes: see Remark 2.3) can be traced back to [Reference Ibragimov and Linnik30, Theorem 17.2.1] and [Reference Bulinski and Shabanovich9]. In general, for any positive integers u, v, indices $i_1 \leq \cdots \leq i_u < i_u+r \leq j_1 \leq \cdots \leq j_v$ with $r > 0$, and functions F and G belonging to specific functional spaces (i.e. F and G are bounded Lipschitz or bounded measurable functions), weakly dependent and $\alpha$-mixing processes both satisfy covariance inequalities of the following type:

(1.1) \begin{equation}|\mathrm{Cov}(F(X_{i_1},\ldots,X_{i_u}),G(X_{j_1},\ldots,X_{j_v}))|\leq c \,\Psi(\|F\|_{\infty},\|G\|_{\infty},\mathrm{Lip}(F),\mathrm{Lip}(G),u,v) \, \epsilon(r),\end{equation}

where the sequence of coefficients $\epsilon=(\epsilon(r))_{r \in \mathbb{R}_+}$ converges to zero at infinity, c is a constant independent of r, and the function $\Psi(\!\cdot\!)$ has different shapes depending on the functional spaces where F and G are defined. Hence we introduce a unified formulation of weak dependence and $\alpha$-mixing: we call a process $\Psi$-weakly dependent if it satisfies such a covariance inequality, and we call $\epsilon$ the sequence of $\Psi$-coefficients; a formal definition is given in Definition 2.2. Note that such a sequence of coefficients corresponds to weak dependence or $\alpha$-mixing coefficients – as defined in [Reference Dedecker, Doukhan, Lang, León, Louhichi and Prieur20, Section 2.2] and [Reference Bradley6, Definition 3.5] – depending on the function $\Psi(\!\cdot\!)$.

Charlot and Rachdi [Reference Charlot and Rachdi14] obtained formulae for $\alpha$-, $\beta$-, $\phi$-, and $\rho$-mixing coefficients for the process $(X_{T_i})_{i \in \mathbb{Z}}$, but their line of proof does not automatically extend to weak dependence; see Remark 3.2 for more details. Moreover, they do not show when the convergence to zero of the coefficients is attained, nor that $(X_{T_i})_{i \in \mathbb{Z}}$ actually inherits the strong mixing of X. In Theorem 3.1 we give a general proof for the computation of the $\Psi$-coefficients related to the renewal sampled process Y, which applies to weakly dependent and $\alpha$-mixing processes alike. Moreover, we present several sampling schemes for which the convergence to zero of the $\Psi$-coefficients is realized, so that the renewal sampled process Y inherits the dependence structure of X. In particular, under the additional condition that X admits exponential or power decaying coefficients, we show that the $\Psi$-coefficients related to Y preserve the exponential or power decay (at least asymptotically).

The paper is organized as follows. In Section 2 we present the definition of $\Psi$ -weakly dependent processes, which encompasses weakly dependent and $\alpha$ -mixing processes. In Section 3 we explicitly compute the $\Psi$ -coefficients of the process Y. Moreover, we present data sets for which the independence between a process T, modeling the random sampling scheme, and X is realistic. Finally, in Section 4, we show that if the underlying process X admits exponential or power decaying $\Psi$ -coefficients, then the process Y is $\Psi$ -weakly dependent and has coefficients with (at least asymptotically) the same decay. This section includes several examples of renewal sampling. In particular, Poisson sampling times are discussed. Section 5 concludes the paper.

2. Weak dependence and strong mixing conditions

We assume that all random variables and processes are defined on a given probability space $(\Omega, \mathcal{A},\mathbb{P})$ .

We let $\mathbb{N}^*$ denote the set of positive integers, $\mathbb{N}$ the set of non-negative integers, $\mathbb{Z}$ the set of all integers, and $\mathbb{R}_+$ the set of non-negative real numbers. We denote the Euclidean norm by $\| \!\cdot\! \|$; however, since all norms on a finite-dimensional space are equivalent, none of our results depend on the chosen norm.

Although the theory developed below is most relevant for sampling processes defined in continuous time, we work with a general index set $\mathcal{I}$ , as this makes no difference and also covers other cases, such as a sampling of discrete-time processes or random fields.

Definition 2.1. The index set $\mathcal{I}$ denotes either $\mathbb{Z}$ , $\mathbb{R}$ , $\mathbb{Z}^m$ , or $\mathbb{R}^m$ . Given $H, J \subseteq \mathcal{I}$ , we define $d(H,J)=\min\{ \|i-j\| \colon i\in H,\, j\in J\}$ .

Even if our theory extends to random fields, we always refer to X as a process for simplicity. Moreover, we consider

(2.1) \begin{equation}\mathcal{F}=\bigcup_{u \in \mathbb{N}} \mathcal{F}_u \quad \textrm{and} \quad \mathcal{G}=\bigcup_{v \in \mathbb{N}} \mathcal{G}_v,\end{equation}

where $\mathcal{F}_u$ and $\mathcal{G}_v$ , respectively, are two classes of measurable functions from $(\mathbb{R}^d)^u$ to $\mathbb{R}$ and $(\mathbb{R}^d)^v$ to $\mathbb{R}$ that we specify individually later on. Finally, for a function that is unbounded or not Lipschitz, we set its $\|\!\cdot\!\|_{\infty}$ norm or Lipschitz constant equal to infinity.

Definition 2.2. Let $\mathcal{I}$ be an index set as in Definition 2.1, let $X=(X_t)_{t\in \mathcal{I}}$ be a process with values in $\mathbb{R}^d$ , and let $\Psi$ be a function from $\overline{\mathbb{R}_+^6}$ to $\mathbb{R}_+$ non-decreasing in all arguments. The process X is called $\Psi$ -weakly dependent if there exists a sequence of coefficients $\epsilon=(\epsilon(r))_{r \in \mathbb{R}_+}$ converging to 0 and satisfying inequality (1.1) for all

\begin{equation*}\begin{cases}(u,v) \in \mathbb{N}^* \times \mathbb{N}^*,\\[5pt]r\in \mathbb{R}_+, \\[5pt]\text{$I_u=\{i_1,\ldots,i_u\} \subseteq{\mathcal{I}} $ and $ J_v=\{\,j_1,\ldots,j_v\} \subseteq \mathcal{I}$, such that $d(I_u,J_v) \geq r$,}\\[5pt]\textrm{functions $ F \colon (\mathbb{R}^{d})^u \to \mathbb{R} $ and $ G\colon (\mathbb{R}^{d})^v \to \mathbb{R} $ belonging to $\mathcal{F}$ and $\mathcal{G}$ respectively,}\end{cases}\end{equation*}

where c is a constant independent of r.

Without loss of generality we always consider $\epsilon$ to be a non-increasing sequence of coefficients.

The first covariance inequality for Lipschitz functions of positively or negatively associated random fields in the literature appears in [Reference Bulinski and Shabanovich9]. Since then, other covariance inequalities have been determined for functions F and G that are either bounded Lipschitz or bounded measurable functions of processes and random fields. In the latter set-up, Definition 2.2 encompasses the so-called weak dependence conditions as described in [Reference Bulinski and Shashkin11, Definition 5.12] and [Reference Dedecker, Doukhan, Lang, León, Louhichi and Prieur20, Definition 2.2] for $\mathcal{I}=\mathbb{Z}, \mathbb{Z}^m$ . Therefore several sequences of coefficients $\epsilon$ satisfying Definition 2.2 are already well known.

  • Let $\mathcal{F}=\mathcal{G}$ and $\mathcal{F}_u$ be the class of bounded Lipschitz functions from $(\mathbb{R}^d)^u$ to $\mathbb{R}$ with respect to the distance $\delta$ on $(\mathbb{R}^d)^u$ defined by

    (2.2) \begin{equation}\delta(x^*,y^*)= \sum_{i=1}^u \|x_i-y_i\|,\end{equation}
    where $x^*=(x_1,\ldots,x_u)$ and $y^*=(y_1,\ldots,y_u)$ , and $x_i,y_i \in \mathbb{R}^d$ for all $i=1,\ldots,u$ . Then
    \begin{equation*}\mathrm{Lip}(F)=\sup_{x^*\neq y^*} \dfrac{|F(x^*)-F(y^*)|}{\| x_1-y_1 \|+\|x_2-y_2\|+ \cdots+ \|x_u-y_u\|}.\end{equation*}
    For
    \begin{equation*}\Psi(\|F\|_{\infty},\|G\|_{\infty},\mathrm{Lip}(F),\mathrm{Lip}(G),u,v)=u\;\mathrm{Lip}(F) \|G\|_{\infty} +v\;\mathrm{Lip}(G) \|F\|_{\infty},\end{equation*}
    $\epsilon$ corresponds to the $\eta$ -coefficients as defined in [Reference Doukhan and Louhichi25]. An extension of this definition for $\mathcal{I}=\mathbb{Z}^m$ is given in [Reference Doukhan and Lang24]. If, instead,
    \begin{align*}&\Psi(\|F\|_{\infty},\|G\|_{\infty},\mathrm{Lip}(F),\mathrm{Lip}(G),u,v)\\[5pt]&\quad = u\;\mathrm{Lip}(F) \|G\|_{\infty} +v\;\mathrm{Lip}(G) \|F\|_{\infty} +uv\;\mathrm{Lip}(F)\;\mathrm{Lip}(G) \nonumber,\end{align*}
    then $\epsilon$ corresponds to the $\lambda$ -coefficients as defined in [Reference Doukhan and Wintenberger26] for $\mathcal{I}=\mathbb{Z}$ and in [Reference Dedecker, Doukhan, Lang, León, Louhichi and Prieur20, Remark 2.1] for $\mathcal{I}=\mathbb{Z}^m$ . Moreover, for
    \begin{equation*}\Psi(\|F\|_{\infty},\|G\|_{\infty},\mathrm{Lip}(F),\mathrm{Lip}(G),u,v)= uv\;\mathrm{Lip}(F)\;\mathrm{Lip}(G),\end{equation*}
    $\epsilon$ corresponds to the $\kappa$ -coefficients, and, for
    \begin{equation*}\Psi(\|F\|_{\infty},\|G\|_{\infty},\mathrm{Lip}(F),\mathrm{Lip}(G),u,v)=\min\!(u,v) \mathrm{Lip}(F)\;\mathrm{Lip}(G),\end{equation*}
    $\epsilon$ corresponds to the $\zeta$ -coefficients as defined in [Reference Doukhan and Louhichi25]. The definition of $\zeta$ -weak dependence for $\mathcal{I}=\mathbb{Z}^m$ can be found in [Reference Bulinski and Suquet12].
  • Let $\mathcal{F}_u$ be the class of bounded measurable functions from $(\mathbb{R}^d)^u$ to $\mathbb{R}$ and let $\mathcal{G}_v$ be the class of bounded Lipschitz functions from $(\mathbb{R}^d)^v$ to $\mathbb{R}$ with respect to the distance $\delta$ defined in (2.2). Then, for

    \begin{equation*}\Psi(\|F\|_{\infty},\|G\|_{\infty},\mathrm{Lip}(F),\mathrm{Lip}(G),u,v)= v \| F\|_{\infty} \;\mathrm{Lip}(G),\end{equation*}
    $\epsilon$ corresponds to the $\theta$ -coefficients as defined in [Reference Dedecker, Doukhan, Lang, León, Louhichi and Prieur20]. An extension of this definition for $\mathcal{I}=\mathbb{Z}^m$ appears in [Reference Dedecker, Doukhan, Lang, León, Louhichi and Prieur20, Remark 2.1]. Moreover, an alternative definition for this notion of dependence is given in [Reference Dedecker and Doukhan19] for $\mathcal{F}_u$ as above and $\mathcal{G}_1$ the class of Lipschitz functions from $\mathbb{R}^d$ to $\mathbb{R}$ , for $\mathcal{I}=\mathbb{Z}$ .

The extension to index sets $\mathcal{I}=\mathbb{R},\mathbb{R}^m$ of the weak dependence notions described above is straightforward.
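For illustration only, the different shapes of the function $\Psi$ listed above can be written down as plain callables; the names below (psi_eta, F_inf, lip_F, and so on) are ours and purely illustrative, mirroring the arguments $\Psi(\|F\|_{\infty},\|G\|_{\infty},\mathrm{Lip}(F),\mathrm{Lip}(G),u,v)$.

```python
def psi_eta(F_inf, G_inf, lip_F, lip_G, u, v):
    # eta-coefficients: u Lip(F) ||G||_inf + v Lip(G) ||F||_inf
    return u * lip_F * G_inf + v * lip_G * F_inf

def psi_lambda(F_inf, G_inf, lip_F, lip_G, u, v):
    # lambda-coefficients: eta shape plus the cross term u v Lip(F) Lip(G)
    return u * lip_F * G_inf + v * lip_G * F_inf + u * v * lip_F * lip_G

def psi_kappa(F_inf, G_inf, lip_F, lip_G, u, v):
    # kappa-coefficients: u v Lip(F) Lip(G)
    return u * v * lip_F * lip_G

def psi_zeta(F_inf, G_inf, lip_F, lip_G, u, v):
    # zeta-coefficients: min(u, v) Lip(F) Lip(G)
    return min(u, v) * lip_F * lip_G

def psi_theta(F_inf, G_inf, lip_F, lip_G, u, v):
    # theta-coefficients: v ||F||_inf Lip(G)
    return v * F_inf * lip_G
```

Note how the shapes differ only in which norms and Lipschitz constants of F and G they involve; this is exactly what distinguishes the corresponding notions of weak dependence.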

Remark 2.1. The weak dependence conditions can all be alternatively formulated by further assuming that $F \in \mathcal{F}$ and $G \in \mathcal{G}$ are bounded by one. For more details on this issue, see [Reference Doukhan and Louhichi25] and [Reference Dedecker and Doukhan19]. Therefore an alternative definition of $\Psi$ -weak dependence exists where the function $\Psi$ in inequality (1.1) does not depend on $\|F\|_{\infty}$ and $\|G\|_{\infty}$ . In this case $\|F\|_{\infty}$ and $\|G\|_{\infty}$ are always bounded by one and therefore omitted from the notation.

We now show that Definition 2.2 also encompasses strong mixing. We first define the strong mixing coefficient [Reference Rosenblatt39].

We suppose that $\mathcal{A}_1$ and $\mathcal{A}_2$ are sub- $\sigma$ -fields of $\mathcal{A}$ and define

\begin{equation*}\alpha(\mathcal{A}_1, \mathcal{A}_2)\;:\!=\;\sup_{\substack{ A \in \mathcal{A}_1\\ B \in \mathcal{A}_2}}|P(A \cap B) - P(A)P(B)|.\end{equation*}

Let $\mathcal{I}$ be a set as in Definition 2.1. Then a process $X=(X_t)_{t \in \mathcal{I}}$ with values in $\mathbb{R}^d$ is said to be $\alpha_{u,v}$ -mixing for $u,v\in\mathbb{N}\cup\{\infty\}$ if

(2.3) \begin{equation}\alpha_{u,v}(r)\;:\!=\; \sup \{ \alpha(\mathcal{A}_{\Gamma_1}, \mathcal{B}_{\Gamma_2})\colon \Gamma_1, \Gamma_2\subseteq \mathcal{I}, |\Gamma_1| \leq u, |\Gamma_2| \leq v, d(\Gamma_1,\Gamma_2) \geq r \}\end{equation}

converges to zero as $r \to \infty$ , where $\mathcal{A}_{\Gamma_1}=\sigma( X_i \colon i \in \Gamma_1)$ and $\mathcal{B}_{\Gamma_2}=\sigma( X_j\colon j \in \Gamma_2)$ . If we let $\alpha(r)=\alpha_{\infty,\infty}(r)$ , it is apparent that $\alpha_{u,v}(r) \leq \alpha(r)$ . If $\alpha(r) \to 0$ as $r \to \infty$ , then X is simply said to be $\alpha$ -mixing. For a comprehensive discussion of the coefficients $\alpha_{u,v}(r)$ , $\alpha(r)$ and their relation to other strong mixing coefficients, we refer to [Reference Bradley6], [Reference Bulinski8], and [Reference Dedecker18].
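As a toy illustration of the coefficient $\alpha(\mathcal{A}_1,\mathcal{A}_2)$ (not taken from this paper): for two $\{0,1\}$-valued random variables the generated $\sigma$-fields are finite, so the supremum reduces to a maximum over the $4 \times 4$ pairs of events, and can be computed exactly. The input format p[(x, y)] = P(X = x, Y = y) is an assumption of this sketch.

```python
def alpha_binary(p):
    """Exact alpha(sigma(X), sigma(Y)) for binary X, Y with joint law
    p[(x, y)] = P(X = x, Y = y), x, y in {0, 1}."""
    events = [set(), {0}, {1}, {0, 1}]            # all events of each sigma-field
    px = {x: p[(x, 0)] + p[(x, 1)] for x in (0, 1)}   # marginal of X
    py = {y: p[(0, y)] + p[(1, y)] for y in (0, 1)}   # marginal of Y
    best = 0.0
    for A in events:
        for B in events:
            pab = sum(p[(x, y)] for x in A for y in B)  # P(A intersect B)
            pa = sum(px[x] for x in A)
            pb = sum(py[y] for y in B)
            best = max(best, abs(pab - pa * pb))
    return best
```

For independent variables the function returns 0, while for X = Y uniform on {0, 1} it returns 1/4, which is the largest possible value of any $\alpha$-coefficient.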

Proposition 2.1. Let $\mathcal{I}$ be a set as in Definition 2.1 and let $X=(X_t)_{t\in \mathcal{I}}$ be a process with values in $\mathbb{R}^d$ and $\mathcal{F}=\mathcal{G}$ , where $\mathcal{F}_u$ is the class of bounded measurable functions from $(\mathbb{R}^d)^u$ to $\mathbb{R}$ . X is $\alpha$ -mixing if and only if there exists a sequence $(\epsilon(r))_{r \in \mathbb{R}_+}$ converging to 0 such that inequality (1.1) holds with

(2.4) \begin{equation}\Psi(\|F\|_{\infty},\|G\|_{\infty},\mathrm{Lip}(F),\mathrm{Lip}(G),u,v)=\|F\|_{\infty}\|G\|_{\infty}.\end{equation}

Proof. Set $\mathcal{A}_{I_u}=\sigma(X_i\colon i \in I_u)$ and $\mathcal{B}_{J_v}=\sigma(X_j\colon j \in J_v)$ . For arbitrary $(u,v) \in \mathbb{N}^* \times \mathbb{N}^*$ and $r \in \mathbb{R}_+$ , let $I_u=\{i_1,\ldots,i_u\}$ and $J_v=\{j_1,\ldots,j_v\}$ be arbitrary subsets of $\mathcal{I}$ such that $d(I_u,J_v) \geq r$ . Moreover, choose arbitrary $F \in \mathcal{F}_u$ and $G \in \mathcal{G}_v$ . By [Reference Ibragimov and Linnik30, Theorem 17.2.1],

\begin{equation*}|\mathrm{Cov}(F(X_{i_1},\ldots,X_{i_u}),G(X_{j_1},\ldots,X_{j_v}))| \leq 4 \, \alpha(\mathcal{A}_{I_u},\mathcal{B}_{J_v}) \, \|F\|_{\infty}\|G\|_{\infty}.\end{equation*}

Definition (2.3) immediately implies that the right-hand side of the inequality above is smaller than or equal to $4 \alpha(r) \,\|F\|_{\infty} \,\|G\|_{\infty}$ . Hence, if X is $\alpha$ -mixing then (1.1) holds with $\epsilon(r)=\alpha(r)$ and $c=4$ .

We assume now that the sequence X is $\Psi$ -weakly dependent with $\Psi$ given by (2.4). By [Reference Ibragimov and Linnik30, Theorem 17.2.1] and [Reference Bradley6, Remark 3.17(ii)], we can rewrite the definition of the $\alpha$ -coefficients as

\begin{align*}\alpha(r)&= \sup_{\substack{\Gamma_1, \Gamma_2 \subseteq \mathcal{I}\\|\Gamma_1|< \infty, |\Gamma_2|<\infty\\ d(\Gamma_1,\Gamma_2) \geq r}}\alpha(\mathcal{A}_{\Gamma_1},\mathcal{A}_{\Gamma_2}) \notag \\[5pt] &=\sup_{(u,v) \in \mathbb{N}^*\times \mathbb{N}^*} \sup_{\substack{I_u, J_v \subseteq \mathcal{I}\\ d(I_u,J_v) \geq r } } \sup_{\substack{F \in \mathcal{F}_u\\ G \in \mathcal{G}_v }} \biggl\{ \dfrac{1}{4 \|F\|_{\infty} \|G\|_{\infty}} |\mathrm{Cov}(F(X_{i_1},\ldots,X_{i_u}),G(X_{j_1},\ldots,X_{j_v}))| \biggr\}.\end{align*}

Hence

\begin{equation*}\alpha(r) \leq \dfrac{c}{4} \epsilon(r).\end{equation*}

Since $\epsilon(r)$ converges to zero as $r \to \infty$ , so does $\alpha(r)$ ; hence, if X is $\Psi$ -weakly dependent with $\Psi$ given by (2.4), then X is $\alpha$ -mixing.

Remark 2.2. ( $\theta$ -lex weak dependence.) The novel definition of $\theta$ -lex weak dependence on $\mathcal{I}=\mathbb{R}^m$ appearing in [Reference Curato, Stelzer and Ströh17] can be obtained by a slight modification of Definition 2.2. We use the notion of lexicographic order on $\mathbb{R}^m$ : for distinct elements $y=(y_1,\ldots,y_m)\in\mathbb{R}^m$ and $z=(z_1,\ldots,z_m)\in\mathbb{R}^m$ , we say that $y<_{\mathrm{lex}}z$ if and only if $y_1<z_1$ , or $y_p<z_p$ for some $p\in\{2,\ldots,m\}$ and $y_q=z_q$ for $q=1,\ldots,p-1$ .

  • Let $\mathcal{F}_u$ be the class of bounded measurable functions from $(\mathbb{R}^d)^u$ to $\mathbb{R}$ and let $\mathcal{G}_1$ be the class of bounded Lipschitz functions from $\mathbb{R}^d$ to $\mathbb{R}$ with respect to the distance $\delta$ defined in (2.2). Moreover, let $I_u=\{i_1, \ldots, i_u\} \subset \mathbb{R}^m$ , and let $j \in \mathbb{R}^m$ be such that $i_s<_{\mathrm{lex}} j$ for all $s=1,\ldots,u$ , and $\mathrm{dist}(I_u,j)\geq r$ . Then inequality (1.1) holds for $\Psi(\|F\|_{\infty},\|G\|_{\infty},\mathrm{Lip}(F),\mathrm{Lip}(G),u,1)= \| F\|_{\infty}\;\mathrm{Lip}(G),$ and $\epsilon$ corresponds to the $\theta$ -lex-coefficients.

For $\mathcal{I}=\mathbb{Z}^m$ , this notion of dependence is more general than $\alpha_{\infty,1}$ -mixing as defined in (2.3), that is, it applies to a broader class of models. Further, for $\mathcal{I}=\mathbb{Z}$ , $\theta$ -lex weak dependence is more general than the notion of $\alpha$ -mixing. We refer the reader to [Reference Curato, Stelzer and Ströh17, Section 2] for a complete introduction to $\theta$ -lex weak dependence and its properties.

Remark 2.3. (Association.) Association offers a complementary approach to the analysis of processes and random fields; see [Reference Bulinski and Shashkin11] for a comprehensive introduction on this topic. Moreover, association is equivalent to $\zeta$ -weak dependence under the assumptions of [Reference Doukhan and Louhichi25, Lemma 4].

3. Strong mixing and weak dependence coefficients under renewal sampling

Let X be a strictly stationary $\mathbb{R}^d$ -valued process, that is, for all $n \in \mathbb{N}$ and all $t_1,\ldots,t_n \in \mathcal{I}$ it holds that the finite-dimensional distributions (indicated by $\mathcal{L}(\!\cdot\!)$ ) are shift-invariant:

\begin{equation*}\mathcal{L}(X_{t_1+h},\ldots,X_{t_n+h})=\mathcal{L}(X_{t_1},\ldots,X_{t_n})\quad\text{for all $h \in \mathcal{I}$.}\end{equation*}

We want to investigate the asymptotic dependence of X sampled at a renewal sequence.

We use a definition of a renewal process that is based on the sequence $\tau$ (see [Reference Hunter29]) and agrees with the definition of a two-sided Lévy process (see [Reference Applebaum3, page 124]) in the Poisson case. Similar sampling schemes are used in [Reference Aït-Sahalia and Mykland1], [Reference Charlot and Rachdi14], and [Reference Lii and Masry33], for example.

Definition 3.1. Let $\mathcal{I} \subseteq \mathbb{R}^m$ be a set as in Definition 2.1 and let $\tau=(\tau_i)_{i \in \mathbb{Z}\setminus \{0\}}$ be an $\mathcal{I}$ -valued sequence of non-negative (component-wise) independent and identically distributed (i.i.d.) random vectors with distribution function $\mu$ such that $\mu\{0\} <1$ . For $i \in \mathbb{Z}$ , we define an $\mathcal{I}$ -valued stochastic process $(T_i)_{i \in \mathbb{Z}}$ as

\begin{align*} T_0\;:\!=\; 0 \quad \text{and} \quad T_i \;:\!=\; \begin{cases}\sum_{j=1}^i \tau_j , &i \in \mathbb{N} , \\[5pt] -\sum_{j=i}^{-1} \tau_j , &-i \in \mathbb{N}.\end{cases}\end{align*}

The sequence $(T_i)_{i \in \mathbb{Z}}$ is called a renewal sampling sequence. When $\mathcal{I} \subset \mathbb{R}$ , we let $\tau$ denote the sequence of inter-arrival times.

Definition 3.2. Let $X=(X_t)_{t \in \mathcal{I}}$ be a process with values in $\mathbb{R}^d$ and let $(T_i)_{i \in \mathbb{Z}}$ be a renewal sampling sequence independent of X. We define the sequence $Y=(Y_i)_{i\in \mathbb{Z}}$ as the stochastic process with values in $\mathbb{R}^{d+1}$ given by

\begin{equation*} Y_i=( X_{T_i}, T_i-T_{i-1} )^{\top}.\end{equation*}

We call X the underlying process and Y the renewal sampled process.
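Definitions 3.1 and 3.2 can be illustrated with a small simulation. The sketch below is our own, not part of the paper's method: it samples a stationary Ornstein–Uhlenbeck process (a standard example of a strongly mixing choice of X) at Poisson times, i.e. with i.i.d. Exp(lam) inter-arrival times, and simulates only the one-sided part $i \geq 1$; the parameter names lam, theta, n are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, theta, n = 2.0, 1.0, 1000

tau = rng.exponential(1.0 / lam, size=n)   # inter-arrival times tau_1, ..., tau_n
T = np.cumsum(tau)                         # renewal times T_i = tau_1 + ... + tau_i

# Exact OU transition between consecutive sampling times:
# X_{T_i} = e^{-theta tau_i} X_{T_{i-1}} + N(0, (1 - e^{-2 theta tau_i}) / (2 theta))
X = np.empty(n)
x = rng.normal(0.0, np.sqrt(1.0 / (2.0 * theta)))   # stationary start X_{T_0}
for i in range(n):
    a = np.exp(-theta * tau[i])
    x = a * x + rng.normal(0.0, np.sqrt((1.0 - a * a) / (2.0 * theta)))
    X[i] = x

Y = np.column_stack([X, tau])              # Y_i = (X_{T_i}, T_i - T_{i-1})^T
```

The exact transition avoids any discretization error, so the simulated pairs $(X_{T_i}, \tau_i)$ have exactly the law of the renewal sampled process in this example.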

Remark 3.1. (Independence of T and X.) The assumption of independence between the stochastic process T, modeling a random sampling scheme, and X is reasonable when working with time series whose records are not event-triggered. For example, transaction level data (see [Reference Hautsch27] for a survey) are records of trades or transactions occurring when a buyer and a seller agree on a price for a security (the triggering event). Although such data should not be modeled by assuming that T and X are independent, this assumption is widely used in the literature analyzing financial data; see e.g. [Reference Aït-Sahalia and Mykland1], [Reference Aït-Sahalia and Mykland2], and [Reference Hayashi and Yoshida28]. Time series that are not event-triggered arise, for example, from the following data sets.

  • Modern health monitoring systems such as smartphones or wearable devices such as smartwatches enable monitoring of the health conditions of patients by measuring heart rate, electrocardiogram, and body temperature, among other information; see [Reference Vitabile, Marks, Stojanovic, Pllana, Molina, Krzyszton, Sikora, Jarynowski, Hosseinpour, Jakobik, Illic, Respicio, Moldovan, Pop and Salomie43]. These measurements are records on a discrete-time grid, mostly irregularly distributed. In these cases, observation times depend on the measuring instrument (typically sensors), i.e. on a random source independent of the process X, as observed in [Reference Bardet and Bertrand4]. In this context, the hypothesis of independence of T and X is entirely realistic.

  • Measurements from spatio-temporal random fields such as temperature, vegetation, or population are now recorded over a set of moving or fixed locations in space and time, typically not regularly distributed. These data sets are called point reference or raster data and are analyzed in earth science, for example. Further, GPS data, e.g. of a taxi, which periodically transmit the location of an object over time, are an example of spatio-temporal data sets called trajectory data, which are typically irregularly distributed in space and time. We refer the reader to the survey [Reference Wang, Cao and Yu44] and the references therein for an account of the data sets above and their practical relevance. The hypothesis of independence of T and X seems realistic for these data because their sampling in space–time depends on the instrument used to record them.

In the following theorem, we work with the class of functions defined in (2.1) and

\begin{equation*} \mathcal{\tilde{F}}=\bigcup_{u \in \mathbb{N}} \mathcal{\tilde{F}}_u \quad \text{and} \quad \mathcal{\tilde{G}}=\bigcup_{v \in \mathbb{N}} \mathcal{\tilde{G}}_v,\end{equation*}

where $\mathcal{\tilde{F}}_u$ and $\mathcal{\tilde{G}}_v$ are respectively two classes of measurable functions from $(\mathbb{R}^{d+1})^u$ to $\mathbb{R}$ and $(\mathbb{R}^{d+1})^v$ to $\mathbb{R}$ which can be either bounded or bounded Lipschitz.

Theorem 3.1. Let $Y=(Y_i)_{i \in \mathbb{Z}}$ be a renewal sampled process with the underlying process X being strictly stationary and $\Psi$ -weakly dependent with coefficients $\epsilon$ . Then Y is a strictly stationary process, and there exists a sequence $\mathcal{E}$ such that

\begin{equation*} |\mathrm{Cov}(\tilde{F}(Y_{i_1},\ldots,Y_{i_u}),\tilde{G}(Y_{j_1},\ldots,Y_{j_v}))| \leq C \, \Psi(\|\tilde{F}\|_{\infty},\|\tilde{G}\|_{\infty},\mathrm{Lip}(\tilde{F}),\mathrm{Lip}(\tilde{G}),u,v) \, \mathcal{E}(n)\end{equation*}

for all

\begin{equation*}\begin{cases}(u,v) \in \mathbb{N}^* \times \mathbb{N}^*,\\[5pt] n\in \mathbb{N}, \\[5pt] \textit{$\{i_1,\ldots,i_u\} \subseteq \mathbb{Z} $ and $ \{j_1,\ldots,j_v\} \subseteq \mathbb{Z}$,} \\[5pt] \textit{with $ i_1\leq \ldots \leq i_u < i_u+n\leq j_1\leq \ldots\leq j_v$,} \\[5pt] \textit{functions $ \tilde{F} \colon (\mathbb{R}^{d+1})^u \to \mathbb{R} $ and $ \tilde{G}\colon (\mathbb{R}^{d+1})^v \to \mathbb{R} $ belonging to $\mathcal{\tilde{F}}$ and $\mathcal{\tilde{G}}$,}\end{cases}\end{equation*}

where C is a constant independent of n. Moreover,

(3.1) \begin{equation}\mathcal{E}(n)=\int_{\mathcal{I}} \epsilon(\|r\|) \, \mu^{*n}({\mathrm{d}} r),\end{equation}

where $\mu^{*0}$ is the Dirac measure at zero, and $\mu^{*n}$ is the n-fold convolution of $\mu$ for $n \geq 1$ .
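In the Poisson case, formula (3.1) can be evaluated in closed form: if $\mu$ is the Exp($\lambda$) law and $\epsilon(r)=\mathrm{e}^{-ar}$, then $\mu^{*n}$ is the Gamma$(n,\lambda)$ law of $T_n$, and $\mathcal{E}(n)=\mathbb{E}[\mathrm{e}^{-aT_n}]=(\lambda/(\lambda+a))^n$, the Laplace transform of $T_n$ evaluated at a. The following Monte Carlo sketch (our own; all parameter values are illustrative) checks this identity numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, a, n = 2.0, 0.5, 5        # Exp(lam) inter-arrivals, eps(r) = exp(-a r)
N = 200_000                    # Monte Carlo sample size

# T_n = tau_1 + ... + tau_n ~ Gamma(n, lam)
T_n = rng.exponential(1.0 / lam, size=(N, n)).sum(axis=1)
mc = np.exp(-a * T_n).mean()               # Monte Carlo estimate of E(n)
exact = (lam / (lam + a)) ** n             # (lam / (lam + a))**n = 0.8**5

print(mc, exact)
```

The two values agree up to Monte Carlo error, illustrating that exponential decay of $\epsilon$ is inherited by $\mathcal{E}$, here with the geometric rate $\lambda/(\lambda+a) < 1$.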

Proof of Theorem 3.1. Y is a strictly stationary process by [Reference Brandes and Curato7, Proposition 2.1]. Consider arbitrary fixed $(u,v) \in \mathbb{N}^* \times \mathbb{N}^*$ , $n \in \mathbb{N}$ , $\{i_1,\ldots,i_u\} \subseteq \mathbb{Z}$ and $\{j_1,\ldots,j_v\} \subseteq \mathbb{Z}$ with $i_1\leq\ldots\leq i_u < i_u+n \leq j_1\leq \ldots\leq j_v$ , and functions $\tilde{F} \in \tilde{\mathcal{F}}$ and $\tilde{G} \in \tilde{\mathcal{G}}$ . Without loss of generality, we assume throughout that $i_1 >0$ . Then, by conditioning with respect to the sequence $\tau$ and using the law of total covariance (see [Reference Chan, Guo, Lee and Li13, Proposition A.1]), we obtain

(3.2) \begin{align} &|\mathrm{Cov}(\tilde{F}(Y_{i_1},\ldots,Y_{i_u}),\tilde{G}(Y_{j_1},\ldots,Y_{j_v})) | \nonumber \\[5pt] &\quad \leq | \mathbb{E}(\mathrm{Cov}(\tilde{F}(Y_{i_1},\ldots,Y_{i_u}), \tilde{G}(Y_{j_1},\ldots,Y_{j_v})\mid \tau_i \colon i=1,\ldots,j_v ))| \end{align}
(3.3) \begin{align} &\qquad +\, | \mathrm{Cov}(\mathbb{E}(\tilde{F}(Y_{i_1},\ldots,Y_{i_u})\mid \tau_i\colon i=1,\ldots,j_v), \mathbb{E}(\tilde{G}(Y_{j_1},\ldots,Y_{j_v})\mid \tau_i \colon i=1,\ldots,j_v ) ) |. \end{align}

Let us first discuss the summand (3.3). We have

\begin{equation*}\mathbb{E}(\tilde{F}(Y_{i_1},\ldots,Y_{i_u})\mid \tau_i\colon i=1,\ldots,j_v)= \mathbb{E}(\tilde{F}(Y_{i_1},\ldots,Y_{i_u})\mid \tau_i\colon i=1,\ldots,i_u)\end{equation*}

because $\tilde{F}(Y_{i_1},\ldots,Y_{i_u})$ is independent of $\{\tau_i \colon i=i_{u}+1,\ldots,j_v\}$ . On the other hand,

\begin{align*}&\mathbb{E}(\tilde{G}(Y_{j_1},\ldots,Y_{j_v})\mid \tau_i \colon i=1,\ldots,j_v )\\[5pt] &\quad =\mathbb{E}\Bigl(\tilde{G}\Bigl(\Bigl(X_{T_{i_u}+\sum_{i=i_u+1}^{j_1} \tau_i}, \tau_{j_1}\Bigr)^{\prime},\ldots,\Bigl(X_{T_{i_u}+\sum_{i=i_u+1}^{j_v} \tau_i}, \tau_{j_v}\Bigr)^{\prime}\Bigr)\mid \tau_i \colon i=1,\ldots,j_v \Bigr),\end{align*}

and, by stationarity of the process X and the i.i.d. property of $(\tau_i)_{i \in \mathbb{Z} \setminus \{0\}}$ , it is equal to

\begin{align*}&\mathbb{E}\Bigl(\tilde{G}\Bigl(\Bigl(X_{\sum_{i=i_u+1}^{j_1} \tau_i}, \tau_{j_1}\Bigr)^{\prime},\ldots,\Bigl(X_{\sum_{i=i_u+1}^{j_v} \tau_i}, \tau_{j_v}\Bigr)^{\prime}\Bigr)\mid \tau_i \colon i=1,\ldots,j_v \Bigr) \\[5pt] &\quad = \mathbb{E}\Bigl(\tilde{G}\Bigl(\Bigl(X_{\sum_{i=i_u+1}^{j_1} \tau_i}, \tau_{j_1}\Bigr)^{\prime},\ldots,\Bigl(X_{\sum_{i=i_u+1}^{j_v} \tau_i}, \tau_{j_v}\Bigr)^{\prime}\Bigr)\mid \tau_i \colon i=i_u+1,\ldots,j_v\Bigr)\end{align*}

because of the independence between

\begin{equation*}\Bigl\{\Bigl(X_{\sum_{i=i_u+1}^{j_1} \tau_i}, \tau_{j_1}\Bigr)^{\prime},\ldots,\Bigl(X_{\sum_{i=i_u+1}^{j_v} \tau_i}, \tau_{j_v}\Bigr)^{\prime}\Bigr\}\end{equation*}

and $\{ \tau_i \colon i=1,\ldots,i_u \}$ . Thus the summand (3.3) is equal to zero because

\begin{equation*}\mathbb{E}(\tilde{F}(Y_{i_1},\ldots,Y_{i_u})\mid \tau_i\colon i=1,\ldots,i_u)\end{equation*}

and

\begin{equation*}\mathbb{E}\Bigl(\tilde{G}\Bigl(\Bigl(X_{\sum_{i=i_u+1}^{j_1} \tau_i}, \tau_{j_1}\Bigr)^{\prime},\ldots,\Bigl(X_{\sum_{i=i_u+1}^{j_v} \tau_i}, \tau_{j_v}\Bigr)^{\prime}\Bigr)\mid \tau_i \colon i=i_u+1,\ldots,j_v\Bigr)\end{equation*}

are independent.

The summand (3.2) is less than or equal to

\begin{align*}& \int_{\mathcal{I}^{j_v}} \Bigl| \mathrm{Cov}\Bigl(\tilde{F}\Bigl(\Bigl(X_{\sum_{i=1}^{i_1} s_i}, s_{i_1}\Bigr)^{\prime},\ldots,\Bigl(X_{\sum_{i=1}^{i_u} s_i}, s_{i_u}\Bigr)^{\prime}\Bigr), \\[5pt] &\qquad \tilde{G}\Bigl(\Bigl(X_{\sum_{i=1}^{j_1} s_i}, s_{j_1}\Bigr)^{\prime},\ldots,\Bigl(X_{\sum_{i=1}^{j_v} s_i}, s_{j_v}\Bigr)^{\prime}\Bigr) \Bigr) \Bigr| \, {\mathrm{d}} \mathbb{P}_{\{\tau_i\colon i=1,\ldots,j_v\}}(s_1,\ldots,s_{j_v}),\end{align*}

where $\mathbb{P}_{\{\tau \}}$ denotes the joint distribution of the sequence $\tau$ . For a given $(s_{1},\ldots,s_{j_v}) \in \mathcal{I}^{j_v}$ , we have $\tilde{F}( (\cdot, s_{i_1}),\ldots,(\cdot,s_{i_u})) \in \mathcal{F}$ and $\tilde{G}((\cdot, s_{j_1}),\ldots,(\cdot, s_{j_v})) \in \mathcal{G}$ . Since X is a $\Psi$ -weakly dependent process, the integral above is less than or equal to

\begin{align*}&\int_{\mathcal{I}^{j_v}} C \, \Psi(\|\tilde{F}((\cdot, s_{i_1}),\ldots,(\cdot,s_{i_u}))\|_{\infty},\|\tilde{G}((\cdot, s_{j_1}),\ldots,(\cdot, s_{j_v}))\|_{\infty},\mathrm{Lip}(\tilde{F}((\cdot, s_{i_1}),\ldots,(\cdot,s_{i_u}))), \\[5pt] &\quad \mathrm{Lip}(\tilde{G}((\cdot, s_{j_1}),\ldots,(\cdot, s_{j_v}))),u,v) \, \epsilon \Biggl(\Biggl\|\sum_{i=i_u+1}^{j_1} s_i \Biggr\|\Biggr) {\mathrm{d}} \mathbb{P}_{\{\tau_i\colon i=1,\ldots,j_v\}}(s_1,\ldots,s_{j_v}),\end{align*}

and, because the sequence $\{\tau_i\colon i=i_u+1,\ldots,j_1\}$ is independent of the sequence $\{\tau_i\colon i=1,\ldots,i_u,j_1 +1,\ldots,j_v\}$ , the integral above is less than or equal to

\begin{align*}& \int_{\mathcal{I}^{j_1-i_u}} C \, \Psi(\|\tilde{F}\|_{\infty},\|\tilde{G}\|_{\infty},\mathrm{Lip}(\tilde{F}),\mathrm{Lip}(\tilde{G}),u,v) \\[5pt] &\quad \epsilon \Biggl(\Biggl\|\sum_{i=i_u+1}^{j_1} s_i \Biggr\|\Biggr) {\mathrm{d}} \mathbb{P}_{\{\tau_i\colon i=i_u+1,\ldots,j_1\}}(s_{i_u+1},\ldots,s_{j_1}).\end{align*}

Note that $j_1-i_u \geq n$ and that, without loss of generality, we may assume the coefficients $\epsilon$ to be non-increasing. Thus we can conclude that the integral above is less than or equal to

\begin{equation*}C \, \Psi(\|\tilde{F}\|_{\infty},\|\tilde{G}\|_{\infty},\mathrm{Lip}(\tilde{F}),\mathrm{Lip}(\tilde{G}),u,v) \int_{\mathcal{I}} \epsilon(\|r\|) \mu^{*n}({\mathrm{d}} r).\end{equation*}
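The first step of the proof above splits the covariance via the law of total covariance, conditioning on the inter-arrival times. That identity can be checked numerically; the toy variables below are our own illustration (a binary Z standing in for the conditioning inter-arrival times), not objects from the paper.

```python
import random

random.seed(7)

# Law of total covariance, the identity behind the split into (3.2) and (3.3):
#   Cov(F, G) = E[Cov(F, G | Z)] + Cov(E[F | Z], E[G | Z]).
N = 50_000
zs = [random.choice([0, 1]) for _ in range(N)]
fs = [z + random.gauss(0.0, 1.0) for z in zs]                            # F depends on Z
gs = [2 * z + 0.5 * f + random.gauss(0.0, 1.0) for z, f in zip(zs, fs)]  # G depends on Z and F

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

lhs = cov(fs, gs)                                        # total covariance
within, mean_f, mean_g = 0.0, {}, {}
for zv in (0, 1):
    sf = [f for f, z in zip(fs, zs) if z == zv]
    sg = [g for g, z in zip(gs, zs) if z == zv]
    within += (len(sf) / N) * cov(sf, sg)                # E[Cov(F, G | Z)]
    mean_f[zv], mean_g[zv] = sum(sf) / len(sf), sum(sg) / len(sg)
between = cov([mean_f[z] for z in zs],
              [mean_g[z] for z in zs])                   # Cov(E[F|Z], E[G|Z])
rhs = within + between
```

For these empirical (population-normalized) quantities the decomposition holds exactly, not just in the limit, which makes it a convenient sanity check.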

Corollary 3.1. Let the assumptions of Theorem 3.1 hold. If the coefficients (3.1) are finite, and converge to zero as n goes to infinity, then Y is $\Psi$ -weakly dependent with coefficients $\mathcal{E}$ .

Proof of Corollary 3.1. The proof directly follows by Definition 2.2.

Remark 3.2. Charlot and Rachdi [Reference Charlot and Rachdi14] obtained $\alpha$ -coefficients related to the process $(X_{T_i})_{i \in \mathbb{Z}}$ equal to $\mathbb{E}[\alpha(T_n)]$ , for a renewal process T independent of X, which corresponds to (3.1) for all $n \in \mathbb{N}$ . Their results also extend to renewal processes T whose inter-arrival time sequence $\tau$ is itself $\alpha^{\prime}$ -mixing, with coefficients equal to $\mathbb{E}[\alpha(T_n)]+\alpha^{\prime}$ . However, the techniques involved in their proof exploit the definition of the $\alpha$ -mixing coefficients as given in (2.3) and are not directly applicable in the case of weakly dependent processes. Moreover, they do not discuss how to obtain the inheritance of strong mixing, i.e. whether the obtained $\alpha$ -coefficients are finite and converge to zero as n goes to infinity.

We further explore this issue in Sections 4.1 and 4.2 by discussing several examples of sampling schemes for which the assumptions of Corollary 3.1 are satisfied.

Finally, concerning $\theta$ -lex weak dependence defined in Remark 2.2, we obtain the following result.

Corollary 3.2. Let X be a strictly stationary and $\theta$ -lex weakly dependent random field defined on $\mathbb{R}^m$ , and let T be a renewal sampling sequence independent of X with values in $\mathbb{R}^m$ . Then Y is a strictly stationary process, and there exists a sequence $\mathcal{E}$ such that

(3.4) \begin{equation}|\mathrm{Cov}(\tilde{F}(Y_{i_1},\ldots,Y_{i_u}),\tilde{G}(Y_{j}))| \leq C \, \|\tilde{F}\|_{\infty}\;\mathrm{Lip}(\tilde{G}) \, \mathcal{E}(n)\end{equation}

for all

\begin{equation*}\begin{cases}(u,v) \in \mathbb{N}^* \times \mathbb{N}^*,\\[5pt] n\in \mathbb{N}, \\[5pt] \textit{$\{i_1,\ldots,i_u\} \subseteq \mathbb{Z}$ and $ j \in \mathbb{Z}$,} \\[5pt] \textit{with $ i_1\leq \ldots \leq i_u < i_u+n\leq j$,} \\[5pt] \textit{functions $ \tilde{F} \colon (\mathbb{R}^{d+1})^u \to \mathbb{R} $ and $ \tilde{G}\colon \mathbb{R}^{d+1} \to \mathbb{R} $ belonging to $\mathcal{\tilde{F}}$ and $\mathcal{\tilde{G}}$,}\end{cases}\end{equation*}

where C is a constant independent of n, and $\mathcal{E}$ is defined in (3.1).

Proof of Corollary 3.2. The sequence $\tau$ consists of non-negative i.i.d. random vectors, so that $T_{i_1} \leq_{\mathrm{lex}} \ldots \leq_{\mathrm{lex}} T_{i_u} \leq_{\mathrm{lex}} T_j$ . Hence stationarity of Y follows from [Reference Brandes and Curato7, Proposition 2.1], and the covariance inequality (3.4) holds by following the line of proof of Theorem 3.1.

Note that if the inequality (3.4) holds and the coefficients (3.1) are finite and converge to zero as n goes to infinity, then Y is a $\theta$ -weakly dependent process as defined in [Reference Dedecker and Doukhan19].

4. Explicit bounds for $\Psi$ -coefficients

In this section we consider renewal sampling of $X=(X_t)_{t \in \mathbb{R}}$ . Therefore the inter-arrival times $\tau$ are a sequence of non-negative i.i.d. random variables with values in $\mathbb{R}$ .

We first show that if X is $\Psi$ -weakly dependent and admits exponential or power decaying coefficients $\epsilon$ , then Y is, in turn, $\Psi$ -weakly dependent and its coefficients $\mathcal{E}$ preserve (at least asymptotically) the decay behavior of $\epsilon$ . This result directly enables the application of the limit theory for a vast class of $\Psi$ -weakly dependent processes Y, of which we present several examples throughout the section.
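To make the renewal sampling setup concrete, the following is a minimal simulation sketch (our own illustration, not part of the paper): X is taken to be a stationary Ornstein–Uhlenbeck process, a standard example of a process with exponentially decaying dependence coefficients, and T a Poisson renewal sequence. The function name and parameter values are ours.

```python
import math
import random

random.seed(42)

def sample_renewal_ou(n, lam=1.0, theta=0.5, sigma=1.0):
    """Return [(X_{T_1}, tau_1), ..., (X_{T_n}, tau_n)] where X is a stationary
    Ornstein-Uhlenbeck process and tau_i are i.i.d. Exp(lam) inter-arrival
    times (Poisson sampling). X is advanced exactly between sampling times:
    X_{t+h} = e^{-theta h} X_t + N(0, sigma^2 (1 - e^{-2 theta h}) / (2 theta))."""
    x = random.gauss(0.0, sigma / math.sqrt(2.0 * theta))  # stationary initial value
    out = []
    for _ in range(n):
        tau = random.expovariate(lam)            # inter-arrival time tau_i
        decay = math.exp(-theta * tau)
        sd = sigma * math.sqrt((1.0 - decay ** 2) / (2.0 * theta))
        x = decay * x + random.gauss(0.0, sd)    # exact OU transition over tau
        out.append((x, tau))                     # Y_i = (X_{T_i}, tau_i)^T
    return out

y = sample_renewal_ou(10000)
```

Using the exact OU transition (rather than an Euler step) keeps the sampled values exactly stationary, matching the standing assumption on X.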

In fact, central limit theorems for a $\Psi$ -weakly dependent process X typically hold under sufficient conditions of the following type: $\mathbb{E}[\|X_0\|^{\delta}]< \infty$ for some $\delta >0$ , and the coefficients $\epsilon$ satisfy a condition of the form

(4.1) \begin{equation}\sum_{n=1}^\infty \epsilon(n)^{A(\delta)} < \infty,\end{equation}

where $A(\delta)$ is a certain function of $\delta$ . If X admits coefficients $\epsilon$ with exponential or sufficiently fast power decay, then conditions of type (4.1) are satisfied. If, in turn, Y is $\Psi$ -weakly dependent with coefficients having exponential or sufficiently fast power decay, then conditions of type (4.1) are also satisfied under renewal sampling.

4.1. Exponential decay

In terms of the Laplace transform of the inter-arrival times, we can obtain a general bound for the coefficients $(\mathcal{E}(n))_{n \in \mathbb{N}}$ .

Proposition 4.1. Let $X=(X_t)_{t \in \mathbb{R}}$ , $Y=(Y_i)_{i \in \mathbb{Z}}$ and $(T_i)_{i \in \mathbb{Z}}$ be as in Theorem 3.1. Let us assume that $\epsilon(r)\leq C\mathrm{e}^{-\gamma r}$ for some $\gamma>0$ , and denote the Laplace transform of the inter-arrival distribution $\mu$ by

\begin{equation*}L_{\mu}(t)=\int_{\mathbb{R}^{+}} \mathrm{e}^{-tr} \mu({\mathrm{d}} r), \quad t \in \mathbb{R}_{+}.\end{equation*}

Then the process Y admits coefficients

\begin{equation*}\mathcal{E}(n)\leq C(L_{\mu}(\gamma))^{n},\end{equation*}

which converge to zero as n goes to infinity.

Proof of Proposition 4.1. We notice that $L_{\mu}(t) <1$ for $t>0$ and that $L_{\mu^{*n}}(t)=(L_{\mu}(t))^n$ ; see [Reference Sato41, Proposition 2.6].

Using the result obtained in Theorem 3.1, we have that

\begin{align*}\mathcal{E}(n)&=\int_{\mathbb{R}_+} \epsilon(r) \, \mu^{*n}({\mathrm{d}} r) \leq C \int_{\mathbb{R}_+} \mathrm{e}^{-\gamma r} \, \mu^{*n}({\mathrm{d}} r) = C L_{\mu^{*n}}(\gamma)=C (L_{\mu}(\gamma))^n.\end{align*}

As a direct consequence, if X is $\Psi$ -weakly dependent and admits exponentially decaying coefficients, the assumptions of Corollary 3.1 hold and Y inherits the asymptotic dependence structure of X under renewal sampling.

Example 4.1. If we have a renewal sampling with $\Gamma(\alpha,\beta)$ -distributed inter-arrival times for $\alpha, \beta >0$ , then $\mu^{*n}$ is the distribution function of a $\Gamma(n\alpha, \beta)$ -distributed random variable. By Proposition 4.1,

\begin{equation*}\mathcal{E}(n)=\int_{\mathbb{R}_+} \epsilon(r) \, \mu^{*n}({\mathrm{d}} r) \leq C \int_{(0,+\infty)} \mathrm{e}^{-\gamma r} \dfrac{\beta^{n\alpha}}{\Gamma(n\alpha)} r^{n\alpha-1} \mathrm{e}^{-\beta r} \, {\mathrm{d}} r = C \biggl( \dfrac{\gamma+\beta}{\beta} \biggr)^{-n\alpha}.\end{equation*}

A special case of the coefficients above is obtained for Poisson sampling, i.e. $\mu=\operatorname{Exp}\!(\lambda)$ with $\lambda>0$ . In this case, $\mu^{*n}$ is the distribution function of a $\Gamma(n,\lambda)$ -distributed random variable, and

\begin{equation*}\mathcal{E}(n)=\int_{\mathbb{R}_+} \epsilon(r) \, \mu^{*n}({\mathrm{d}} r) \leq C \int_{(0,+\infty)} \mathrm{e}^{-\gamma r} \dfrac{\lambda^n}{\Gamma(n)} r^{n-1} \mathrm{e}^{-\lambda r} \, {\mathrm{d}} r = C \biggl( \dfrac{\lambda+\gamma}{\lambda} \biggr)^{-n}.\end{equation*}
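The closed-form bounds of Example 4.1 can be compared against a Monte Carlo estimate of $\mathbb{E}[\mathrm{e}^{-\gamma T_n}]$, which is exactly $L_{\mu^{*n}}(\gamma)=(L_\mu(\gamma))^n$; the parameter values below are arbitrary choices of ours.

```python
import math
import random

random.seed(0)

gamma_, alpha, beta = 0.8, 2.0, 3.0   # decay rate gamma; Gamma(alpha, beta) parameters
n, trials = 5, 100_000

# Monte Carlo estimate of E[e^{-gamma T_n}], T_n = tau_1 + ... + tau_n, with
# tau_i ~ Gamma(alpha, beta) in the rate parametrization (mean alpha / beta).
acc = 0.0
for _ in range(trials):
    t_n = sum(random.gammavariate(alpha, 1.0 / beta) for _ in range(n))
    acc += math.exp(-gamma_ * t_n)
mc = acc / trials

# Closed form from Example 4.1: (L_mu(gamma))^n = ((gamma + beta) / beta)^(-n alpha).
exact = ((gamma_ + beta) / beta) ** (-n * alpha)
```

Note that `random.gammavariate` takes a *scale* second argument, hence the `1.0 / beta` for the rate parametrization used in the example.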

Remark 4.1. If the process X is $\Psi$ -weakly dependent with exponentially decaying coefficients, then the equidistant sampled process $(X_i)_{i \in \mathbb{Z}}$ has exponentially decaying coefficients $\epsilon(n)\leq C \mathrm{e}^{-\gamma n}$ for $\gamma >0$ and $n \in \mathbb{N}$ . By using the results in Example 4.1, we can design renewal sampling schemes such that the process Y has coefficients $\mathcal{E}$ with faster decay rate than the sequence of coefficients $\epsilon$ .

  • For $\Gamma(\alpha,\beta)$ -distributed inter-arrival times, we obtain that the process Y has faster-decaying coefficients than $(X_i)_{i \in \mathbb{Z}}$ if the parameters $\alpha, \beta >0$ are chosen such that

    \begin{equation*}\biggl( \dfrac{\gamma+\beta}{\beta} \biggr)^{\alpha} \geq \mathrm{e}^{\gamma}.\end{equation*}
  • In the case of Poisson sampling, the process Y admits faster-decaying coefficients than $(X_i)_{i \in \mathbb{Z}}$ if

    \begin{equation*}\lambda \geq \dfrac{\gamma}{\mathrm{e}^{\gamma}-1}.\end{equation*}
    The fraction appearing on the right-hand side of the inequality is less than 1 for all $\gamma >0$ . Therefore, because the average time between two adjacent observations is ruled by $\mathbb{E}[\tau_1]={1}/{\lambda}$ , we can design an (on average) lower sampling frequency scheme such that the coefficients $\mathcal{E}$ decay faster than $\epsilon$ , by choosing ${\gamma}/{(\mathrm{e}^{\gamma}-1)} \leq \lambda < 1$ .
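The two design conditions of Remark 4.1 are easy to evaluate numerically; the helper names below are our own, and the parameter values in the checks are arbitrary.

```python
import math

def poisson_rate_threshold(gamma):
    """Smallest Poisson rate lambda for which renewal sampling yields
    coefficients decaying at least as fast as e^{-gamma n} (Remark 4.1):
    lambda >= gamma / (e^gamma - 1).  Always lies in (0, 1) for gamma > 0."""
    return gamma / math.expm1(gamma)

def gamma_sampling_is_faster(alpha, beta, gamma):
    """Check ((gamma + beta) / beta)^alpha >= e^gamma, i.e. whether
    Gamma(alpha, beta) renewal sampling beats equidistant sampling;
    evaluated in log form for numerical stability."""
    return alpha * math.log((gamma + beta) / beta) >= gamma

thresholds = [poisson_rate_threshold(g) for g in (0.1, 1.0, 5.0)]
```

Since `poisson_rate_threshold` is always below 1, one can indeed pick a rate $\lambda < 1$ (hence average inter-arrival time above 1) that still speeds up the decay, as the remark points out.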

4.2. Power decay

We now assume that the underlying process X is $\Psi$ -weakly dependent with coefficients $\epsilon(r)\leq C r^{-\gamma}$ for $\gamma>0$ .

We start with some concrete examples of inter-arrival time distributions $\mu$ (and therefore of renewal sampling sequences T), preserving the power decay of the coefficients $\epsilon$ .

Example 4.2. Let us consider renewal sampling with $\Gamma(\alpha,\beta)$ -distributed inter-arrival times for $\alpha, \beta >0$ . Then $\mu^{*n}$ is a $\Gamma(n\alpha, \beta)$ distribution. Thus

(4.2) \begin{equation}\mathcal{E}(n)=\int_{\mathbb{R}_+} \epsilon(r) \, \mu^{*n}({\mathrm{d}} r) \leq C \int_{(0,+\infty)} r^{-\gamma} \dfrac{\beta^{n\alpha}}{\Gamma(n\alpha)} r^{n\alpha-1} \mathrm{e}^{-\beta r} \, {\mathrm{d}} r = C \beta^{\gamma} \dfrac{\Gamma(n\alpha-\gamma)}{\Gamma(n\alpha)}.\end{equation}

For $n \to \infty$ , applying Stirling’s series (see [Reference Tricomi and Erdélyi42]) to the ratio $\Gamma(n\alpha-\gamma)/\Gamma(n\alpha)=(n\alpha)^{-\gamma}(1+{\mathrm{O}} (n^{-1}))$ , we obtain that (4.2) is equal to $C (\beta/\alpha)^{\gamma} n^{-\gamma} + {\mathrm{O}} (n^{-\gamma-1}).$

In the particular case of Poisson sampling, $\mu^{*n}$ is a $\Gamma(n,\lambda)$ distribution and

\begin{align*} \mathcal{E}(n)&=\int_{\mathbb{R}_+} \epsilon(r) \, \mu^{*n}({\mathrm{d}} r)\\[5pt] &\leq C \int_{(0,+\infty)} r^{-\gamma } \dfrac{\lambda^n}{\Gamma(n)} r^{n-1} \mathrm{e}^{-\lambda r} \, {\mathrm{d}} r \\[5pt] &= C \lambda^{\gamma} \dfrac{\Gamma(n-\gamma)}{\Gamma(n)}\\[5pt] & =C \lambda^{\gamma} n^{-\gamma}(1+{\mathrm{O}} (n^{-1})) \\[5pt] &= C \lambda^{\gamma} n^{-\gamma} + {\mathrm{O}} (n^{-\gamma-1}),\end{align*}

where the last equality holds as $n \to \infty$ .
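The gamma-ratio asymptotics $\Gamma(n\alpha-\gamma)/\Gamma(n\alpha)\sim(n\alpha)^{-\gamma}$ underlying Example 4.2 can be verified numerically via the log-gamma function; this is a sketch with parameter values chosen by us.

```python
import math

def gamma_ratio(n, alpha, g):
    """Gamma(n*alpha - g) / Gamma(n*alpha), via log-gamma to avoid overflow."""
    return math.exp(math.lgamma(n * alpha - g) - math.lgamma(n * alpha))

alpha, g = 1.5, 0.7
# Stirling: Gamma(x - g) / Gamma(x) ~ x^{-g}, so the normalized ratios below
# should decrease towards 1 as n grows (the O(1/n) correction is positive).
ratios = [gamma_ratio(n, alpha, g) / (n * alpha) ** (-g) for n in (10, 100, 1000)]
```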

Example 4.3. We let $\mathrm{Levy}(0,c)$ denote a Lévy distribution (see [Reference Zolotarev46, page 28]) with location parameter 0 and scale parameter c (a completely skewed $\frac{1}{2}$ -stable distribution). This distribution has infinite mean and variance. For $\mathrm{Levy}(0,c)$ -distributed inter-arrival times, we find that $\mu^{*n}$ is $\mathrm{Levy}(0,cn)$ . Thus

\begin{align*} \mathcal{E}(n)&=\int_{\mathbb{R}_+} \epsilon(r) \, \mu^{*n}({\mathrm{d}} r)\\[5pt] &\leq C\int_{\mathbb{R}_+} r^{-\gamma} \, \dfrac{({cn}/{2})^{{1}/{2}}}{\Gamma({1}/{2})} r^{-{3}/{2}} \mathrm{e}^{-{cn}/{2r}} \, {\mathrm{d}} r \\[5pt] &= C \dfrac{\Gamma({1}/{2}+\gamma)}{({cn}/{2})^{\gamma}\Gamma\bigl({1}/{2}\bigr)}\\[5pt] &= C \dfrac{\Gamma({1}/{2}+\gamma)}{\Gamma({1}/{2})} \biggl(\dfrac{c}{2}\biggr)^{-\gamma} n^{-\gamma}.\end{align*}

Example 4.4. We now consider the case where $\mu$ is an inverse Gaussian distribution with mean m and shape parameter $\lambda$ ( $IG(m,\lambda)$ for short). We see that $\mu^{*n}$ is an $IG(nm,n^2\lambda)$ distribution and

(4.3) \begin{align} \mathcal{E}(n)&= \int_{\mathbb{R}_+} \epsilon(r) \, \mu^{*n}({\mathrm{d}} r) \notag \\[5pt] &\leq C\int_{(0,+\infty)} r^{-\gamma} \biggl( \dfrac{n^2 \lambda}{2\pi r^3} \biggr)^{{1}/{2}}\exp\biggl(-\frac{n^2\lambda(r-nm)^2}{2n^2m^2r}\biggr) \, {\mathrm{d}} r \nonumber \\[5pt] &= n C \biggl( \dfrac{\lambda}{2\pi} \biggr)^{{1}/{2}} \exp\biggl(\frac{\lambda n}{m}\biggr) \int_{(0,+\infty)} r^{-\gamma-{3}/{2}}\exp\biggl(-\frac{\lambda n}{2 m}\biggl( \frac{r}{nm}+\frac{nm}{r}\biggr)\biggr) \, {\mathrm{d}} r \nonumber \\[5pt] &= C \biggl( \dfrac{\lambda}{2\pi} \biggr)^{{1}/{2}} m^{-\gamma-{1}/{2}} \, n^{-\gamma+{1}/{2}} \,\exp\biggl(\frac{\lambda n}{m}\biggr) \, 2 \, \mathcal{K}_{-\gamma-{1}/{2}}\biggl(\dfrac{\lambda n}{m}\biggr) \end{align}

after applying the substitution $x \;:\!=\; {r}/{(nm)}$ , and where $\mathcal{K}_{-\gamma-{1}/{2}}$ denotes a modified Bessel function of the third kind with order $-\gamma-\frac{1}{2}$ . Using the asymptotic expansion for modified Bessel functions from [Reference Jørgensen31, page 171], we obtain

\begin{equation*}\mathcal{K}_v(x)= \biggl(\dfrac{\pi}{2}\biggr)^{{1}/{2}} x^{-{1}/{2}} \mathrm{e}^{-x}(1+{\mathrm{O}} (x^{-1})).\end{equation*}

Thus, for $n \to \infty$ , (4.3) is equal to $C m^{-\gamma}\, n^{-\gamma}+ {\mathrm{O}} (n^{-\gamma-1}).$

Example 4.5. Let the inter-arrival times follow a Bernoulli distribution with parameter $0 < p \leq 1$ . Then $\mu^{*n}$ is a $\operatorname{Bin}\!(n, p)$ distribution. If X admits coefficients $\epsilon(r)=C(1 \wedge r^{-\gamma})$ for $\gamma>0$ , then $\mathcal{E}(n)$ satisfies

(4.4) \begin{equation}\mathcal{E}(n)= \int_{\mathbb{R}_+} \epsilon(r) \, \mu^{*n}({\mathrm{d}} r) = C \biggl( (1-p)^n + \sum_{j=1}^n j^{-\gamma} \binom{n}{j} p^j (1-p)^{n-j} \biggr) . \end{equation}

For $n \to \infty$ , applying the asymptotic expansion proved in [Reference Wuyungaowa and Wang45, Theorem 1], we have that (4.4) is equal to $C (np)^{-\gamma} + {\mathrm{O}} (n^{-\gamma-1})$ .
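The exact expression (4.4) and its asymptotic $(np)^{-\gamma}$ can be compared numerically; the log-space evaluation below avoids overflow in the binomial coefficients for large n, and the parameter values are ours.

```python
import math

def binom_bound(n, p, g):
    """Right-hand side of (4.4):
    (1-p)^n + sum_{j=1}^n j^{-g} C(n,j) p^j (1-p)^{n-j},
    with each summand evaluated in log space to stay finite for large n."""
    total = (1.0 - p) ** n
    logp, logq = math.log(p), math.log(1.0 - p)
    for j in range(1, n + 1):
        logterm = (math.lgamma(n + 1) - math.lgamma(j + 1) - math.lgamma(n - j + 1)
                   + j * logp + (n - j) * logq - g * math.log(j))
        total += math.exp(logterm)
    return total

p, g = 0.3, 1.2
# The asymptotic expansion gives binom_bound(n, p, g) ~ (n p)^{-g} as n grows,
# so the ratios below should approach 1 from above.
rel = [binom_bound(n, p, g) / (n * p) ** (-g) for n in (20, 200, 2000)]
```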

Example 4.6. Let us consider inter-arrival times such that $\mu([0,k))=0$ for a fixed $k >0$ . Then, straightforwardly,

\begin{equation*}\mathcal{E}(n)=\int_{\mathbb{R}_+} \epsilon(r) \, \mu^{*n}({\mathrm{d}} r) \leq C (nk)^{-\gamma}.\end{equation*}

In Examples 4.2, 4.4, and 4.5 we obtain asymptotic bounds for the coefficients $\mathcal{E}$ , whereas we have exact ones in Examples 4.3 and 4.6. For a general inter-arrival time distribution we can just show that the coefficients $\mathcal{E}$ decay at least (asymptotically) with the same power. This result relies on the following lemma.

Lemma 4.1. Let $\mu, \nu$ be two probability measures on $\mathbb{R}^{+}$ such that $\mu([0,b)) \leq \nu([0,b))$ for all $b >0$ and let $f\colon \mathbb{R}^{+} \to \mathbb{R}^{+}$ be non-increasing. Then

\begin{equation*}\int_{\mathbb{R}_+} f(r) \mu^{*n}({\mathrm{d}} r) \leq \int_{\mathbb{R}_+} f(r) \, \nu^{*n}({\mathrm{d}} r).\end{equation*}

Proof of Lemma 4.1. The proof follows by applying measure-theoretic induction.
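Lemma 4.1 can be illustrated exactly for finitely supported measures, where n-fold convolutions are computable in closed form; the measures, the non-increasing function f, and all names below are our own toy choices satisfying the lemma's hypotheses.

```python
def convolve(d1, d2):
    """Convolution of two finitely supported distributions, as {atom: mass} dicts."""
    out = {}
    for x, px in d1.items():
        for y, py in d2.items():
            out[x + y] = out.get(x + y, 0.0) + px * py
    return out

def n_fold(d, n):
    """n-fold convolution d^{*n}."""
    out = {0: 1.0}
    for _ in range(n):
        out = convolve(out, d)
    return out

def integral(f, d):
    return sum(f(x) * p for x, p in d.items())

mu = {1: 0.5, 2: 0.5}            # mu([0,b)) <= nu([0,b)) for all b > 0 ...
nu = {0: 0.3, 1: 0.4, 2: 0.3}    # ... so mu is stochastically larger than nu
f = lambda r: 1.0 / (1.0 + r)    # a non-increasing f, as in Lemma 4.1

# Lemma 4.1 predicts integral(f, mu^{*n}) <= integral(f, nu^{*n}) for every n.
vals = [(integral(f, n_fold(mu, n)), integral(f, n_fold(nu, n))) for n in range(1, 6)]
```

This is exactly the comparison used in the proof of Proposition 4.2 below, where $\nu$ is a two-point measure dominating $\mu$ on initial segments.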

Proposition 4.2. Let $X=(X_t)_{t \in \mathbb{R}}$ , $Y=(Y_i)_{i \in \mathbb{Z}}$ , and $(T_i)_{i \in \mathbb{Z}}$ be as in Theorem 3.1. Let us assume that $\epsilon(r)\leq C r^{-\gamma}$ for $\gamma>0$ . Let $a >0$ be a point in the support of $\mu$ such that $\mu([0,a))>0$ , and set $p=\mu([a,\infty))$ . Then the process Y admits coefficients $\mathcal{E}(n)\leq C (n a p)^{-\gamma}$ as $n \to \infty$ .

Proof of Proposition 4.2. Let us assume without loss of generality that $\mu\neq \delta_a$ (otherwise Example 4.6 applies for any $a \in \mathbb{R}_{+}$ ), where $\delta_a$ denotes the Dirac delta measure for $a \in \mathbb{R}_+$ . Set $\nu= p \delta_{a}+(1-p) \delta_0$ . The latter is a Bernoulli distribution that assigns probability p to the inter-arrival time a and $(1-p)$ to the time 0. It follows that $\mu([0,b))\leq \nu([0,b))$ for all $b >0$ . Then, by using Lemma 4.1, the result in Example 4.5, and [Reference Wuyungaowa and Wang45, Theorem 1],

\begin{align*} \mathcal{E}(n) &\leq C \int_{\mathbb{R}_+} r^{-\gamma} \mu^{*n}({\mathrm{d}} r) \\[5pt] &\leq C \int_{\mathbb{R}_+} r^{-\gamma} \nu^{*n}({\mathrm{d}} r) \\[5pt] &= C \Biggl( (1-p)^n + \sum_{j=1}^n (a j)^{-\gamma} \binom{n}{j} p^j (1-p)^{n-j} \Biggr) \\[5pt] &= C (n a p)^{-\gamma} + {\mathrm{O}} (n^{-\gamma- 1}),\end{align*}

where the last equality holds as $n \to \infty$ .

Remark 4.2. Proposition 4.2 gives us an upper bound for the coefficients $\mathcal{E}$ . This means that the true decay of the coefficients $\mathcal{E}$ could be faster, in general, than $n^{-\gamma}$ . However, we have not found examples of sequences $\tau$ where this happens. Even for extremely heavy-tailed inter-arrival time distributions such as in Example 4.3, we can just find an estimate from above of the coefficients of the renewal sampled process Y, i.e. $\mathcal{E}(n) \leq C n^{-\gamma}$ for large n, that has the same power decay as the coefficients $\epsilon$ .

Proposition 4.2 summarizes the results given in this section. In fact, as long as X is $\Psi$ -weakly dependent such that there exists a $\gamma >0$ with $\epsilon(r) \leq C r^{-\gamma}$ , the assumptions of Corollary 3.1 are satisfied and Y inherits the asymptotic dependence structure of X. Note that Proposition 4.2 ensures that Y is $\Psi$ -weakly dependent also when, for example, $\epsilon(r)=C{(r \log\!(r))}^{-1}$ , in which case $\mathcal{E}(n) \leq C n^{-1}$ . Therefore caution has to be exercised when checking conditions of type (4.1) for the process Y.

Example 4.7. Let us consider the sufficient condition for the applicability of the central limit theorem for $\kappa$ -weakly dependent processes (see [Reference Doukhan and Wintenberger26]), where (4.1) holds with $A(\delta)=1$ . If X is a $\Psi$ -weakly dependent process with coefficients $\epsilon(r)=C{(r \log^2\!(r))}^{-1}$ , then Y is a $\Psi$ -weakly dependent process with coefficients $\mathcal{E}(n) \leq \tilde{C} n^{-1}$ as $n \to \infty$ by applying Proposition 4.2. The coefficients $\epsilon(r)$ are summable and satisfy (4.1), but we cannot deduce the summability of the coefficients $\mathcal{E}(n)$ , as Proposition 4.2 only gives an upper bound on their value, and this bound is not summable.

5. Conclusion

We assume that our sampling scheme is described by a renewal sequence T independent of the process X, which is weakly dependent or $\alpha$ -mixing. We determine under which assumptions the process $Y=(X_{T_i},T_i-T_{i-1})^{\top}$ is itself weakly dependent or $\alpha$ -mixing. If X admits exponential or power decaying coefficients, then Y inherits strong mixing or weak dependence, and its related coefficients preserve the exponential or power decay (at least asymptotically). Our general results enable the application of central limit theorems used under equidistant sampling schemes to renewal sampled data.

Other sampling schemes are of great interest in practical applications and constitute a natural continuation of our work, for instance sampling schemes where T is a point process dependent on X, as observed in transaction-level financial data. Moreover, when analyzing data from continuous spatio-temporal random fields, the theory we have developed so far allows us to analyze sampling along a self-avoiding walk that moves in non-negative coordinate directions. Another possible extension of our theory aims to study the random field sampling along a walk that moves in lexicographically increasing coordinate directions.

Acknowledgements

We would like to express our gratitude to the two anonymous reviewers and the Editors for their many insightful comments and suggestions.

Funding information

There are no funding bodies to thank relating to the creation of this article.

Competing interests

There were no competing interests to declare which arose during the preparation or publication process of this article.

References

Aït-Sahalia, Y. and Mykland, P. A. (2004). Estimators of diffusions with randomly spaced discrete observations: a general theory. Ann. Statist. 32, 2186–2222.
Aït-Sahalia, Y. and Mykland, P. A. (2008). An analysis of Hansen–Scheinkman moment estimators for discretely and randomly sampled diffusions. J. Econom. 144, 1–26.
Applebaum, D. (2004). Lévy Processes and Stochastic Calculus, 1st edn. Cambridge University Press.
Bardet, J.-M. and Bertrand, P. R. (2010). A non-parametric estimator of the spectral density of a continuous-time Gaussian process observed at random times. Scand. J. Statist. 37, 458–476.
Bardet, J.-M., Doukhan, P. and León, J. R. (2008). Uniform limit theorems for the integrated periodogram of weakly dependent time series and their applications to Whittle’s estimate. J. Time Ser. Anal. 29, 906–945.
Bradley, R. (2007). Introduction to Strong Mixing Conditions, vol. 1. Kendrick Press, Utah.
Brandes, D.-P. and Curato, I. V. (2019). On the sample autocovariance of a Lévy driven moving average process when sampled at a renewal sequence. J. Statist. Planning Infer. 203, 20–38.
Bulinski, A. (1988). Various mixing conditions and the asymptotic normality of random fields. Dokl. Akad. Nauk SSSR 299, 785–789.
Bulinski, A. and Shabanovich, E. (1998). Asymptotical behaviour for some functionals of positively and negatively dependent random fields. Fundam. Prikl. Mat. 4, 479–492.
Bulinski, A. and Shashkin, A. (2005). Strong invariance principle for dependent multi-indexed random variables. Dokl. Akad. Nauk SSSR 72, 503–506.
Bulinski, A. and Shashkin, A. (2007). Limit Theorems for Associated Random Fields and Related Systems. World Scientific, Singapore.
Bulinski, A. and Suquet, C. (2001). Normal approximation for quasi-associated random fields. Statist. Prob. Lett. 54, 215–226.
Chan, R. C., Guo, Y. Z., Lee, S. T. and Li, X. (2019). Financial Mathematics, Derivatives and Structured Products. Springer Nature, Singapore.
Charlot, F. and Rachdi, M. (2008). On the statistical properties of a stationary process sampled by a stationary point process. Statist. Prob. Lett. 78, 456–462.
Chorowski, J. and Trabs, M. (2016). Spectral estimation for diffusions with random sampling times. Stoch. Process. Appl. 126, 2976–3008.
Curato, I. V. and Stelzer, R. (2019). Weak dependence and GMM estimation of supOU and mixed moving average processes. Electron. J. Statist. 13, 310–360.
Curato, I. V., Stelzer, R. and Ströh, B. (2021). Central limit theorems for stationary random fields under weak dependence with application to ambit and mixed moving average fields. Ann. Appl. Prob. 32, 1814–1861.
Dedecker, J. (1998). A central limit theorem for stationary random fields. Prob. Theory Relat. Fields 110, 397–426.
Dedecker, J. and Doukhan, P. (2003). A new covariance inequality and applications. Stoch. Process. Appl. 106, 63–80.
Dedecker, J., Doukhan, P., Lang, G., León, J. R., Louhichi, S. and Prieur, C. (2008). Weak Dependence: With Examples and Applications. Springer, New York.
Dedecker, J. and Rio, E. (2000). On the functional central limit theorem for stationary processes. Ann. Inst. H. Poincaré Prob. Statist. 36, 1–34.
do Rego Sousa, T. and Stelzer, R. (2022). Moment based estimation for the multivariate COGARCH(1,1) process. Scand. J. Statist. 49, 681–717.
Doukhan, P. (1994). Mixing: Properties and Examples (Lecture Notes Statist. 85). Springer, New York.
Doukhan, P. and Lang, G. (2002). Rates in the empirical central limit theorem for stationary weakly dependent random fields. Statist. Infer. Stoch. Process. 5, 199–228.
Doukhan, P. and Louhichi, S. (1999). A new weak dependence condition and applications to moment inequalities. Stoch. Process. Appl. 84, 313–342.
Doukhan, P. and Wintenberger, O. (2007). An invariance principle for weakly dependent stationary general models. Prob. Math. Statist. 27, 45–73.
Hautsch, N. (2012). Econometrics of Financial High-Frequency Data. Springer, Berlin.
Hayashi, T. and Yoshida, N. (2005). On covariance estimation of non-synchronously observed diffusion processes. Bernoulli 11, 359–379.
Hunter, J. J. (1974). Renewal theory in two dimensions: basic results. Adv. Appl. Prob. 6, 376–391.
Ibragimov, I. A. and Linnik, Y. V. (1971). Independent and Stationary Sequences of Random Variables. Wolters-Noordhoff, Groningen.
Jørgensen, B. (1982). Statistical Properties of the Generalized Inverse Gaussian Distribution. Springer, New York.
Kanaya, S. (2017). Convergence rates of sums of $\alpha$ -mixing triangular arrays: with an application to nonparametric drift function estimation of continuous-time processes. Econometric Theory 33, 1121–1153.
Lii, K. S. and Masry, E. (1992). Model fitting for continuous-time stationary processes from discrete-time data. J. Multivariate Anal. 41, 56–79.
McDunnough, P. and Wolfson, D. B. (1979). On some sampling schemes for estimating the parameters of a continuous time series. Ann. Inst. Statist. Math. 31, 487–497.
Masry, E. (1978). Alias-free sampling: an alternative conceptualization and its applications. IEEE Trans. Inform. Theory IT-24, 317–324.
Masry, E. (1978). Poisson sampling and spectral estimation of continuous-time processes. IEEE Trans. Inform. Theory IT-24, 173–183.
Masry, E. (1983). Nonparametric covariance estimation from irregularly-spaced data. Adv. Appl. Prob. 15, 113–132.
Masry, E. (1988). Random sampling of continuous-parameter stationary processes: statistical properties of joint density estimators. J. Multivariate Anal. 26, 133–165.
Rosenblatt, M. (1956). A central limit theorem and a strong mixing condition. Proc. Nat. Acad. Sci. USA 42, 43–47.
Rosenblatt, M. (1984). Asymptotic normality, strong mixing and spectral density estimates. Ann. Prob. 12, 1167–1180.
Sato, K. (2013). Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, Cambridge.
Tricomi, F. G. and Erdélyi, A. (1951). The asymptotic expansion of a ratio of gamma functions. Pacific J. Math. 1, 133–142.
Vitabile, S., Marks, M., Stojanovic, D., Pllana, S., Molina, J. M., Krzyszton, M., Sikora, A., Jarynowski, A., Hosseinpour, F., Jakobik, A., Illic, A. S., Respicio, A., Moldovan, D., Pop, C. and Salomie, I. (2019). Medical data processing and analysis for remote health and activities monitoring. In High Performance Modelling and Simulation for Big Data Applications (Lecture Notes Comput. Sci. 11400), eds J. Kolodziej and H. González-Vélez, pp. 186–220. Springer.
Wang, S., Cao, J. and Yu, P. S. (2022). Deep learning for spatio-temporal data mining: a survey. IEEE Trans. Knowledge Data Engineering 34, 3681–3700.
Wuyungaowa and Wang, T. (2008). Asymptotic expansions for inverse moments of binomial and negative binomial. Statist. Prob. Lett. 78, 3018–3022.
Zolotarev, V. M. (1986). One-Dimensional Stable Distributions (Trans. Math. Monographs 65). American Mathematical Society, Providence, RI.