
Asymptotic results for sums and extremes

Published online by Cambridge University Press:  13 March 2024

Rita Giuliano*
Affiliation:
University of Pisa
Claudio Macci*
Affiliation:
University of Roma Tor Vergata
Barbara Pacchiarotti*
Affiliation:
University of Roma Tor Vergata
*Postal address: Dipartimento di Matematica, Università di Pisa, Largo Bruno Pontecorvo 5, I-56127 Pisa, Italy. Email: [email protected]
**Postal address: Dipartimento di Matematica, Università di Roma Tor Vergata, Via della Ricerca Scientifica, I-00133 Rome, Italy.

Abstract

The term moderate deviations is often used in the literature to mean a class of large deviation principles that, in some sense, fills the gap between a convergence in probability of some random variables to a constant, and a weak convergence to a centered Gaussian distribution (when such random variables are properly centered and rescaled). We talk about noncentral moderate deviations when the weak convergence is towards a non-Gaussian distribution. In this paper we prove a noncentral moderate deviation result for the bivariate sequence of sums and maxima of independent and identically distributed random variables bounded from above. We also prove a result where the random variables are not bounded from above, and the maxima are suitably normalized. Finally, we prove a moderate deviation result for sums of partial minima of independent and identically distributed exponential random variables.

Type
Original Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

The theory of large deviations gives an asymptotic computation of small probabilities on an exponential scale; see [Reference Dembo and Zeitouni5] as a reference for this topic. The basic definition of this theory is the concept of a large deviation principle (LDP for short). More precisely, a sequence of probability measures $\{\pi_n\,:\, n\geq 1\}$ on a topological space $\mathcal{X}$ satisfies the LDP, with speed $v_n$ and a rate function I, if the following conditions hold: $v_n\to\infty$ as $n\to\infty$ , the function $I\,:\,\mathcal{X}\to [0,\infty]$ is lower semicontinuous,

\begin{align*}\liminf_{n\to\infty}\frac{1}{v_n}\log\pi_n(O)\geq-\inf_{x\in O}I(x)\ \mbox{for all open sets}\ O,\end{align*}

and

\begin{align*}\limsup_{n\to\infty}\frac{1}{v_n}\log\pi_n(C)\leq-\inf_{x\in C}I(x)\ \mbox{for all closed sets}\ C.\end{align*}

Moreover, we talk about the weak LDP (WLDP for short) if the above upper bound for closed sets holds for compact sets only. We also recall that, if every level set $\{x\in\mathcal{X}\,:\, I(x)\leq\eta\}$ is compact (for $\eta\geq 0$ ), the rate function I is said to be good. Finally, we recall that the sequence $\{\pi_n\,:\, n\geq 1\}$ can often be seen as a sequence of laws of $\mathcal{X}$ -valued random variables $\{X_n\,:\, n\geq 1\}$ defined on the same probability space $(\Omega,\mathcal{F},P)$ , i.e. $\pi_n=P(X_n\in\cdot\,)$ for every $n\geq 1$ , and, with a slight abuse of terminology, we say that $\{X_n\,:\, n\geq 1\}$ satisfies the LDP.

In several common cases the rate function is good and uniquely vanishes at a certain point $x_\infty\in\mathcal{X}$ . Then we can show that $\{X_n\,:\, n\geq 1\}$ converges in probability to $x_\infty$ ; moreover, roughly speaking, if E is a Borel subset of $\mathcal{X}$ such that $x_\infty\notin\bar{E}$ , then $P(X_n\in E)$ tends to zero like ${\textrm{e}}^{-v_n I(E)}$ (as $n\to\infty$ ), where $I(E)\,:\!=\,\inf_{x\in E}I(x)>0$ .
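As a quick numerical illustration of this decay (our own sketch, not part of the paper), take i.i.d. standard Gaussian variables, for which the sample means satisfy the LDP with $v_n=n$ and $I(x)=x^2/2$ by Cramér's theorem; the exact tail probability is available through the complementary error function, so $\frac{1}{n}\log P(X_n\geq x)$ can be evaluated directly, without simulation.

```python
import math

def log_tail_prob_mean(x, n):
    """Exact log P(Y_n >= x) for Y_n the mean of n i.i.d. N(0,1) variables:
    Y_n ~ N(0, 1/n), so P(Y_n >= x) = 0.5 * erfc(x * sqrt(n/2))."""
    return math.log(0.5 * math.erfc(x * math.sqrt(n / 2.0)))

x = 0.5  # here I([x, inf)) = x**2 / 2 = 0.125
for n in [100, 1000, 5000]:
    print(n, log_tail_prob_mean(x, n) / n)  # approaches -0.125 from below
```

Larger n would underflow the double-precision `erfc`, which is why the sketch stops at $n=5000$.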

The term moderate deviations is used for a class of LDPs that fills the gap between two asymptotic regimes:

  1. (i) the convergence in probability of some random variables $\{X_n\,:\, n\geq 1\}$ to some constant $x_\infty$ , governed by the LDP with speed $v_n$ and a good rate function I (this LDP will be called the reference LDP);

  2. (ii) the weak convergence to a centered Gaussian distribution of a centered and suitably rescaled version of the random variables $\{X_n\,:\, n\geq 1\}$ .

More precisely, we mean a class of LDPs for which the random variables involved depend on a class of positive sequences $\{a_n\,:\, n\geq 1\}$ (called scalings) such that

(1) \begin{equation} a_n\to 0\quad \mbox{and}\quad a_nv_n\to\infty\quad (\mbox{as}\ n\to\infty);\end{equation}

the speed will be $1/a_n$ (so the speed grows more slowly than $v_n$ , which explains the term moderate), and these LDPs are governed by the same quadratic rate function J, which uniquely vanishes at zero. Moreover, in some sense we can recover the asymptotic regimes (i) and (ii) above by choosing $a_n=1/v_n$ (so the second condition in (1) fails) and $a_n=1$ (so the first condition in (1) fails), respectively.
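The interpolation can be seen numerically in the simplest Gaussian case (an illustrative sketch of ours, not taken from the paper): if $(Y_n-\mu)/(\sigma/\sqrt{n})$ is exactly standard Gaussian, then for any scaling $a_n\to 0$ the quantity $a_n\log P\big(\sqrt{a_n}\,(Y_n-\mu)/(\sigma/\sqrt{n})\geq y\big)$ tends to $-y^2/2$ , the quadratic rate evaluated on $[y,\infty)$ . Below, $a_n=n^{-1/2}$ (with $v_n=n$ ) is one admissible choice satisfying (1).

```python
import math

def md_log_prob(y, a_n):
    """Exact log P(sqrt(a_n) * N >= y) for N ~ N(0,1),
    i.e. log(0.5 * erfc(y / sqrt(2 * a_n)))."""
    return math.log(0.5 * math.erfc(y / math.sqrt(2.0 * a_n)))

y = 1.0
for n in [10, 10**3, 10**6]:
    a_n = n ** -0.5  # satisfies (1) with v_n = n: a_n -> 0 and a_n * n -> infinity
    print(n, a_n * md_log_prob(y, a_n))  # approaches -y**2/2 = -0.5
```

Any other exponent in $(0,1)$ for $a_n=n^{-\alpha}$ would give the same limit, which is the point of the moderate deviation regime.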

Some recent moderate deviation results in the literature concern cases in which the weak convergence is towards a non-Gaussian distribution. Hence we talk about noncentral moderate deviations (NCMD for short) and typically the common rate function J for the class of LDPs is not quadratic. Some examples of NCMD results can be found in [Reference Giuliano and Macci11], where the weak convergences are towards Gumbel, exponential, and Laplace distributions. In the same reference the interested reader can find references in the literature with other examples. The examples in the literature essentially concern univariate random variables; the only multivariate example we are aware of is presented in [Reference Leonenko, Macci and Pacchiarotti17], where the weak convergence is trivial because a sequence of identically distributed random variables is considered.

The aim of this paper is to prove two moderate deviation results and a further LDP. The first moderate deviation result fills the gap between the convergence to a constant of the bivariate sequence of sums and maxima of independent and identically distributed (i.i.d.) random variables (under suitable hypotheses; in particular, the random variables are bounded from above), and the weak convergence towards a pair of independent random variables with standard Gaussian and Weibull marginal distributions (more precisely, we always have the Weibull distribution with parameter 1, i.e. the distribution of a random variable U such that $-U$ is exponentially distributed with mean 1). The weak convergence is a consequence of [Reference Chow and Teugels4, Theorem 1] (see also [Reference Anderson and Turkman1, Reference Hsing13] for generalizations; here we also cite [Reference Arendarczyk, Kozubowski and Panorska2, Reference Krizmanić16, Reference Qeadan, Kozubowski and Panorska21] among the other references on the joint distribution of sums and maxima). Thus we obtain an NCMD result for a bivariate sequence. In particular, in this paper we also prove the reference LDP with speed $v_n=n$ , i.e. Proposition 3.

We also prove Proposition 6, which can be seen as a suitable modification of Proposition 3 under some other suitable hypotheses; in particular, the random variables are not bounded from above. As we shall see, this new result is the LDP with speed $v_n=\log n$ .

The second moderate deviation result fills the gap between convergence to a constant for the sequence of sums of partial minima of i.i.d. exponential random variables, and weak convergence to a centered Gaussian distribution proved in [Reference Höglund12]. Thus we obtain a moderate deviation (MD) result. In this case the reference LDP with speed $v_n=\log n$ is a known result [Reference Giuliano and Macci10, Proposition 5.2].

By taking into account that in this paper we present some results with sums and maxima, we also recall here some other references which describe the impact of maximal order statistics in the asymptotics of the sum in the normal deviation region: [Reference Kratz14, Reference Kratz and Prokopenko15, Reference Müller19]. Actually, these references concern the case with heavy-tailed i.i.d. random variables.

We conclude this section with an outline of the paper. We start with some preliminaries in Section 2. We prove the NCMD result for the bivariate sequence in Section 3, Proposition 6 in Section 4, and the MD result for the sums of partial minima of i.i.d. exponential random variables in Section 5.

2. Preliminaries

We start with a standard way to obtain the LDP with speed $v_n$ on a Polish space $\mathcal{X}$ . First, if we denote by $B_R(x)$ the open ball centered at x with radius R, we can obtain a weak LDP (i.e. the lower bound for open sets, and the upper bound for compact sets) showing that

\begin{align*}-I(x) \leq \lim_{R\to 0}\liminf_{n\to\infty}\frac{1}{v_n}\log\pi_n(B_R(x)) \leq\lim_{R\to 0}\limsup_{n\to\infty}\frac{1}{v_n}\log\pi_n(B_R(x)) \leq -I(x)\end{align*}

for all $x\in\mathcal{X}$ (this can be seen as a consequence of [Reference Dembo and Zeitouni5, Theorem 4.1.11]); actually, we can consider an arbitrary basis of neighborhoods of each point $x\in\mathcal{X}$ instead of open balls. We can then (see, e.g., [Reference Dembo and Zeitouni5, Lemma 1.2.18]) obtain the full LDP (i.e. the upper bound for closed sets) by showing that the exponential tightness condition holds (see, e.g., [Reference Dembo and Zeitouni5, p. 8]): for all $b>0$ there exists a compact set $K_b$ such that

\begin{align*}\limsup_{n\to\infty}\frac{1}{v_n}\log\pi_n\big(K_b^\textrm{c}\big)\leq -b\end{align*}

or, equivalently,

\begin{align*}\pi_n\big(K_b^\textrm{c}\big)\leq a\textrm{e}^{-v_nb}\quad \mbox{eventually}\end{align*}

for some $a>0$ .

Moreover, in view of what we present in the next sections, we recall two results. The first is the well-known Gärtner–Ellis theorem [Reference Dembo and Zeitouni5, Theorem 2.3.6(c)]. Here we recall a simplified version of the theorem, with $\mathcal{X}=\mathbb{R}$ .

Proposition 1. Let $\{\pi_n\,:\, n\geq 1\}$ be a sequence of probability measures on $\mathbb{R}$ , and let $\{v_n\,:\, n\geq 1\}$ be a speed function. Moreover, assume that, for all $\theta\in\mathbb{R}$ , the limit

\begin{align*}\Lambda(\theta) \,:\!=\, \lim_{n\to\infty}\frac{1}{v_n}\log\int_{\mathbb{R}}{\textrm{e}}^{v_n\theta x}\,\pi_n({\textrm{d}} x)\end{align*}

exists as an extended real number, and that $0\in(\mathcal{D}(\Lambda))^\circ$ , where $\mathcal{D}(\Lambda) \,:\!=\, \{\theta\in\mathbb{R}\,:\,\Lambda(\theta)<\infty\}$ . Then, if $\Lambda$ is essentially smooth and lower semicontinuous, the sequence $\{\pi_n\,:\, n\geq 1\}$ satisfies the LDP with good rate function $\Lambda^*$ defined by $\Lambda^*(x)\,:\!=\,\sup_{\theta\in\mathbb{R}}\{\theta x-\Lambda(\theta)\}$ .

For completeness, we recall that the function $\Lambda$ is essentially smooth [Reference Dembo and Zeitouni5, Definition 2.3.5] if it is differentiable throughout the set $(\mathcal{D}(\Lambda))^\circ$ (assumed to be nonempty), and if it is steep, i.e. $|\Lambda^\prime(\theta_n)|$ tends to infinity whenever $\{\theta_n\,:\, n\geq 1\}\subset(\mathcal{D}(\Lambda))^\circ$ is a sequence which converges to a boundary point of $\mathcal{D}(\Lambda)$ .
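To make the Gärtner–Ellis machinery concrete, here is a small numerical illustration of our own (not from the paper): for sample means of i.i.d. Bernoulli(p) variables with $v_n=n$ , the limit is $\Lambda(\theta)=\log\!\big(p{\textrm{e}}^\theta+1-p\big)$ , which is finite and essentially smooth on all of $\mathbb{R}$ , and the Legendre transform $\Lambda^*$ can be computed numerically and compared with the known relative-entropy formula. The parameter values below are arbitrary.

```python
import math

p = 0.3  # arbitrary Bernoulli parameter

def Lambda(theta):
    # Lambda(theta) = lim (1/n) log E[e^{n*theta*Y_n}] for Y_n a Bernoulli(p) sample mean
    return math.log(p * math.exp(theta) + 1 - p)

def Lambda_star(x, lo=-50.0, hi=50.0, iters=200):
    # Legendre transform sup_theta {theta*x - Lambda(theta)} by ternary search
    # (valid because theta -> theta*x - Lambda(theta) is concave)
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if m1 * x - Lambda(m1) < m2 * x - Lambda(m2):
            lo = m1
        else:
            hi = m2
    t = (lo + hi) / 2
    return t * x - Lambda(t)

x = 0.5
closed = x * math.log(x / p) + (1 - x) * math.log((1 - x) / (1 - p))
print(Lambda_star(x), closed)  # the two values agree
```

Here the rate function $\Lambda^*$ is the relative entropy between Bernoulli(x) and Bernoulli(p), and it vanishes exactly at $x=p$ , the constant of the law of large numbers.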

The second result is [Reference Chaganty3, Theorem 2.3], which plays a crucial role in the proof of the first moderate deviation result (i.e. the one for the bivariate sequence of sums and maxima of i.i.d. random variables). In this case we have $\mathcal{X}=\mathcal{Y}\times\mathcal{Z}$ , where $\mathcal{Y}$ and $\mathcal{Z}$ are Polish spaces. Moreover, for any given probability measure $\pi_n$ on $\mathcal{X}=\mathcal{Y}\times\mathcal{Z}$ , we consider the marginal distributions $\pi_n^Y$ on $\mathcal{Y}$ and $\pi_n^Z$ on $\mathcal{Z}$ , i.e. $\pi_n^Y({\textrm{d}} y) = \int_{\mathcal{Z}}\,\pi_n({\textrm{d}} y,{\textrm{d}} z)$ and $\pi_n^Z({\textrm{d}} z) = \int_{\mathcal{Y}}\,\pi_n({\textrm{d}} y,{\textrm{d}} z)$ , and the conditional distributions $\big\{\pi_n^{Y\mid Z}({\textrm{d}} y\mid z)\,:\, z\in\mathcal{Z}\big\}$ on $\mathcal{Y}$ such that

$$\pi_n({\textrm{d}} y,{\textrm{d}} z) = \pi_n^{Y\mid Z}({\textrm{d}} y\mid z)\,\pi_n^Z({\textrm{d}} z).$$

Proposition 2. Let $\{\pi_n\,:\, n\geq 1\}$ be a sequence of probability measures on $\mathcal{X}=\mathcal{Y}\times\mathcal{Z}$ , where $\mathcal{Y}$ and $\mathcal{Z}$ are Polish spaces. We assume that the following conditions hold.

  1. (C1) The sequence $\big\{\pi_n^Z\,:\, n\geq 1\big\}$ satisfies the LDP with speed $v_n$ and a good rate function $I_Z$ .

  2. (C2) If $\{z_n\,:\, n\geq 1\}\subset\mathcal{Z}$ and $z_n\to z\in\mathcal{Z}$ , then $\Big\{\pi_n^{Y\mid Z}({\textrm{d}} y\mid z_n)\,:\, n\geq 1\Big\}$ satisfies the LDP with speed $v_n$ and good rate function $I_{Y\mid Z}(\,\cdot\mid z)$ , where $\big\{I_{Y\mid Z}(\,\cdot\mid z)\,:\, z\in\mathcal{Z}\big\}$ is a family of good rate functions such that

    (2) \begin{equation} (y,z)\mapsto I_{Y\mid Z}(y\mid z)\ is\ lower\ semicontinuous. \end{equation}

    Then $\big\{\pi_n\,:\, n\geq 1\big\}$ satisfies the WLDP with speed $v_n$ and rate function $I_{Y,Z}$ defined by $I_{Y,Z}(y,z) \,:\!=\, I_{Y\mid Z}(y\mid z) + I_Z(z)$ . Moreover, $\big\{\pi_n^Y\,:\, n\geq 1\big\}$ satisfies the LDP with speed $v_n$ and rate function $I_Y$ defined by

    \begin{align*}I_Y(y) \,:\!=\, \inf_{z\in\mathcal{Z}}\{I_{Y,Z}(y,z)\} = \inf_{z\in\mathcal{Z}}\{I_{Y\mid Z}(y\mid z) + I_Z(z)\};\end{align*}
    $\{\pi_n\,:\, n\geq 1\}$ satisfies the full LDP if the rate function $I_{Y,Z}$ is good and, in such a case, the rate function $I_Y$ is also good.

In what follows we apply Proposition 2 in the proofs of Propositions 3 and 5. Actually, we omit the statement for $\big\{\pi_n^Y\,:\, n\geq 1\big\}$ because it would allow us to recover well-known results.

3. NCMD for sums and maxima of i.i.d. random variables

Throughout this section we assume the following.

Assumption 1. Let $\{W_n\,:\, n\geq 1\}$ be a sequence of i.i.d. real random variables with density function f which is assumed to be positive only on an interval (m,M), where $-\infty\leq m<M<+\infty$ . We set

\begin{align*} \mathcal{I}=\overline{(m,M)} = \left\{ \begin{array}{l@{\quad}l} [m,M]&\ {if}\ m>-\infty, \\[3pt] ({-}\infty,M]&\ {if}\ m=-\infty. \end{array}\right. \end{align*}

Moreover, as usual, we set $F(z) \,:\!=\, \int_{-\infty}^zf(w)\,{\textrm{d}} w$ for $z\in\mathbb{R}$ ; then $F(M)=1$ and, if $m>-\infty$ , $F(m)=0$ . Finally, we also assume that, for every $z\in\mathcal{I}$ , the function $\kappa_{Y\mid Z}(\,\cdot\mid z)$ defined by

(3) \begin{equation} \kappa_{Y\mid Z}(\theta\mid z) \,:\!=\, \left\{ \begin{array}{l@{\quad}l} \theta m &\ {if}\ z=m>-\infty, \\[3pt] \log\big[\big({\int_{-\infty}^z{\textrm{e}}^{\theta w}f(w)\,{\textrm{d}} w}\big)/{F(z)}\big] &\ {otherwise} \end{array}\right. \end{equation}

is finite in a neighborhood of the origin $\theta=0$ , essentially smooth and lower semicontinuous.

We are interested in the asymptotic behavior of the sequence of bivariate random variables $\{(Y_n,Z_n)\,:\, n\geq 1\}$ defined by

\begin{align*}(Y_n,Z_n) \,:\!=\, \bigg(\frac{W_1+\cdots+W_n}{n},\max\{W_1,\ldots,W_n\}\bigg).\end{align*}
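A quick Monte Carlo sketch of ours (not part of the paper's argument) shows the law-of-large-numbers behavior of this pair for the density $f(w)={\textrm{e}}^w\textbf{1}_{({-}\infty,0)}(w)$ used in Example 1 below; there $W_i=-E_i$ with $E_i$ exponential of mean 1, so $\mu=-1$ and $M=0$ .

```python
import random

random.seed(0)

def sample_pair(n):
    """One draw of (Y_n, Z_n) for W_i with density e^w on (-inf, 0),
    i.e. W_i = -Exp(1); here mu = -1 and M = 0."""
    ws = [-random.expovariate(1.0) for _ in range(n)]
    return sum(ws) / n, max(ws)

reps, n = 2000, 500
pairs = [sample_pair(n) for _ in range(reps)]
y_bar = sum(y for y, _ in pairs) / reps
z_bar = sum(z for _, z in pairs) / reps
print(y_bar, z_bar)  # near (mu, M) = (-1, 0)
```

The sample averages of $Y_n$ and $Z_n$ land close to $(\mu,M)$ , the point at which the rate function of the reference LDP vanishes.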

The first result in this section is the reference LDP, i.e. Proposition 3, which provides the full LDP of $\{P((Y_n,Z_n)\in\cdot\,)\,:\, n\geq 1\}$ in the final part of the statement of the proposition. Subsequently, under some further conditions (see Assumption 2), in Proposition 4 we show that we have weak convergence towards a non-Gaussian distribution, and in Proposition 5 we prove the NCMD result. Both Propositions 3 and 5 will be proved by applying [Reference Chaganty3, Theorem 2.3], i.e. Proposition 2.

3.1. The reference LDP

We start with the following proposition.

Proposition 3. Assume that Assumption 1 holds. Let $I_Z$ be defined by $I_Z(z)\,:\!=\,-\log F(z)$ for $z\in\mathcal{I}$ , with the rule $\log 0=-\infty$ for the case $z=m$ when $m>-\infty$ ; moreover, for each $z\in\mathcal{I}$ , let $I_{Y\mid Z}(\,\cdot\mid z)$ be the function defined by $I_{Y\mid Z}(y\mid z) \,:\!=\, \sup_{\theta\in\mathbb{R}}\{\theta y-\kappa_{Y\mid Z}(\theta\mid z)\}$ , where $\kappa_{Y\mid Z}(\theta\mid z)$ is defined by (3). Then $\{P((Y_n,Z_n)\in\cdot\,)\,:\, n\geq 1\}$ satisfies the WLDP with speed n and rate function $I_{Y,Z}$ defined by $I_{Y,Z}(y,z) \,:\!=\, I_{Y\mid Z}(y\mid z)+I_Z(z)$ . Moreover, $\{P((Y_n,Z_n)\in\cdot\,)\,:\, n\geq 1\}$ satisfies the full LDP if the rate function $I_{Y,Z}$ is good and, in such a case, the rate function $I_Y$ is also good.

Proof. We want to apply Proposition 2 (on the product space $\mathcal{Y}\times\mathcal{Z}\,:\!=\,\mathcal{I}\times\mathcal{I}$ ) to the sequence $\{\pi_n\,:\, n\geq 1\}$ defined by $\pi_n(\cdot)=P((Y_n,Z_n)\in\cdot\,)$ . It is known that Condition (C1) trivially holds; see, e.g., [Reference Giuliano and Macci9, Proposition 4.1]. So, in the remainder of the proof, we have to show that Condition (C2) holds.

First, we can easily check the condition in (2). Indeed, if we take $\{(y_n,z_n)\,:\, n\geq 1\}\subset\mathcal{I}\times\mathcal{I}$ such that $(y_n,z_n)\to (y,z)\in\mathcal{I}\times\mathcal{I}$ , we have

\begin{align*}I_{Y\mid Z}(y_n\mid z_n)\geq\theta y_n-\kappa_{Y\mid Z}(\theta\mid z_n)\quad \mbox{for all}\ \theta\in\mathbb{R},\end{align*}

which yields (if $z>m$ this is trivial; if $z=m>-\infty$ it follows from an application of L’Hôpital’s rule)

\begin{align*}\liminf_{n\to\infty}I_{Y\mid Z}(y_n\mid z_n)\geq\theta y-\kappa_{Y\mid Z}(\theta\mid z) \quad \mbox{for all}\ \theta\in\mathbb{R},\end{align*}

and we get $\liminf_{n\to\infty}I_{Y\mid Z}(y_n\mid z_n)\geq I_{Y\mid Z}(y\mid z)$ by taking the supremum with respect to $\theta\in\mathbb{R}$ . Thus, (2) is checked.

In order to complete the proof of Condition (C2), some preliminaries are needed. Namely, we recall a well-known result on order statistics, and we introduce a suitable family of densities.

  • For every $n\geq 1$ , let $W_{1:n},\ldots,W_{n:n}$ be the order statistics of $W_1,\ldots,W_n$ . Then the joint distribution of $(W_{1:n},\ldots,W_{n:n})$ has density

    \begin{align*}g(w_1,\ldots,w_n) = n!f(w_1)\cdots f(w_n)\textbf{1}_{w_1<\cdots<w_n}.\end{align*}
  • For every $z\in\mathcal{I}$ such that $z\neq m$ when $m>-\infty$ , we introduce the density $f(\,\cdot\mid z)$ defined by

    (4) \begin{equation} f(w\mid z) = \frac{f(w)}{F(z)}\textbf{1}_{({-}\infty,z)}(w). \end{equation}

We assume for the moment that

(5) \begin{equation} \log\mathbb{E}\big[{\textrm{e}}^{n\theta Y_n}\mid Z_n=z_n\big] = (n-1)\kappa_{Y\mid Z}(\theta\mid z_n)+\theta z_n\quad \mbox{for all}\ \theta\in\mathbb{R}; \end{equation}

this will be checked below. Then, by (5), we get

\begin{equation*} \lim_{n\to\infty}\frac{1}{n}\log\mathbb{E}\big[{\textrm{e}}^{n\theta Y_n}\mid Z_n=z_n\big] = \kappa_{Y\mid Z}(\theta\mid z)\quad \mbox{for all}\ \theta\in\mathbb{R}, \end{equation*}

and, by the hypotheses on the functions $\{\kappa_{Y\mid Z}(\,\cdot\mid z)\,:\, z\in\mathcal{I}\}$ , we see that Condition (C2) holds by a straightforward application of the Gärtner–Ellis theorem, i.e. Proposition 1.

To conclude, we have to check (5), and for simplicity we write z in place of $z_n$ . Actually, the case $z=m$ , when $m>-\infty$ , is immediate; therefore, from now on, we assume $z>m$ . Firstly we have

\begin{align*} & \mathbb{E}\big[{\textrm{e}}^{n\theta Y_n}\mid Z_n=z\big] \\ & = \mathbb{E}\Bigg[\exp\Bigg\{\theta\sum_{i=1}^nW_{i:n}\Bigg\}\mid W_{n:n}=z\Bigg] \\ & = \int_{\mathbb{R}^{n-1}}\exp\Bigg\{\theta\Bigg(\sum_{i=1}^{n-1}w_i+z\Bigg)\Bigg\} \frac{g(w_1,\ldots,w_{n-1},z)}{n(F(z))^{n-1}f(z)}\,{\textrm{d}} w_1\cdots {\textrm{d}} w_{n-1} \\ & = {\textrm{e}}^{\theta z}\int_{\mathbb{R}^{n-1}}\exp\Bigg\{\theta\sum_{i=1}^{n-1}w_i\Bigg\} \frac{n!f(w_1)\cdots f(w_{n-1})f(z)\textbf{1}_{w_1<\cdots<w_{n-1}<z}}{n(F(z))^{n-1}f(z)}\,{\textrm{d}} w_1\cdots{\textrm{d}} w_{n-1} \\ & = {\textrm{e}}^{\theta z}\int_{\mathbb{R}^{n-1}}\exp\Bigg\{\theta\sum_{i=1}^{n-1}w_i\Bigg\} \frac{(n-1)!f(w_1)\cdots f(w_{n-1})\textbf{1}_{w_1<\cdots<w_{n-1}<z}}{(F(z))^{n-1}}\,{\textrm{d}} w_1\cdots {\textrm{d}} w_{n-1} \\ & = {\textrm{e}}^{\theta z}\int_{\mathbb{R}^{n-1}}\exp\Bigg\{\theta\sum_{i=1}^{n-1}w_i\Bigg\} (n-1)!f(w_1\mid z)\cdots f(w_{n-1}\mid z)\textbf{1}_{w_1<\cdots<w_{n-1}}\,{\textrm{d}} w_1\cdots {\textrm{d}} w_{n-1}. \end{align*}

Then let $W_1^{(z)},\ldots,W_{n-1}^{(z)}$ be i.i.d. random variables with common density $f(\,\cdot\mid z)$ in (4); moreover, we denote their order statistics by $W_{1:n-1}^{(z)},\ldots,W_{n-1:n-1}^{(z)}$ . We have

\begin{align*} \mathbb{E}\big[{\textrm{e}}^{n\theta Y_n}\mid Z_n=z\big] & = {\textrm{e}}^{\theta z}\mathbb{E}\Bigg[\exp\Bigg\{\theta\sum_{i=1}^{n-1}W_{i:n-1}^{(z)}\Bigg\}\Bigg] \\ & = {\textrm{e}}^{\theta z}\mathbb{E}\Bigg[\exp\Bigg\{\theta\sum_{i=1}^{n-1}W_i^{(z)}\Bigg\}\Bigg] = {\textrm{e}}^{\theta z}\prod_{i=1}^{n-1}\mathbb{E}\big[{\textrm{e}}^{\theta W_i^{(z)}}\big], \end{align*}

and, by taking into account the definition of the function $\kappa_{Y\mid Z}(\,\cdot\mid z)$ in (3), we get

\begin{align*} \mathbb{E}\big[{\textrm{e}}^{n\theta Y_n}\mid Z_n=z\big] = {\textrm{e}}^{\theta z}\big(\exp\big\{\kappa_{Y\mid Z}(\theta\mid z)\big\}\big)^{n-1} = \exp\big\{(n-1)\kappa_{Y\mid Z}(\theta\mid z) + \theta z\big\}. \end{align*}

Thus (5) is checked.
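The identity (5) can also be checked numerically (a sketch of ours, not part of the proof) for the density $f(w)={\textrm{e}}^w\textbf{1}_{({-}\infty,0)}(w)$ of Example 1 below: there $f(w\mid z)={\textrm{e}}^{w-z}\textbf{1}_{({-}\infty,z)}(w)$ is the density of $z-E$ with E exponential of mean 1, and $\kappa_{Y\mid Z}(\theta\mid z)=\theta z-\log(\theta+1)$ for $\theta>-1$ , so the expectation over the $n-1$ truncated variables can be estimated by Monte Carlo and compared with $\exp\{(n-1)\kappa_{Y\mid Z}(\theta\mid z)\}$ .

```python
import math, random

random.seed(1)

def mc_truncated_mgf_product(theta, z, n, reps=200_000):
    """Monte Carlo estimate of E[exp(theta * (W_1^(z) + ... + W_{n-1}^(z)))],
    where the W_i^(z) are i.i.d. with density e^{w-z} on (-inf, z),
    sampled as z - Exp(1)."""
    acc = 0.0
    for _ in range(reps):
        s = sum(z - random.expovariate(1.0) for _ in range(n - 1))
        acc += math.exp(theta * s)
    return acc / reps

theta, z, n = 0.5, -1.0, 4
kappa = theta * z - math.log(theta + 1)  # closed form for this density, theta > -1
est, exact = mc_truncated_mgf_product(theta, z, n), math.exp((n - 1) * kappa)
print(est, exact)  # the two values should be close
```

Multiplying the estimate by ${\textrm{e}}^{\theta z}$ would then reproduce the conditional moment generating function in (5).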

We have the following remarks.

Remark 1. The sequence $\{P((Y_n,Z_n)\in\cdot\,)\,:\, n\geq 1\}$ satisfies the full LDP (because the rate function $I_{Y,Z}$ is good) if $m>-\infty$ , i.e. if $\mathcal{I}$ is compact. Indeed, in such a case, every closed level set of $I_{Y,Z}$ is compact, being a closed subset of the compact set $\mathcal{I}\times\mathcal{I}$ . We also recall that if $m>-\infty$ , the function $\kappa_{Y\mid Z}(\,\cdot\mid z)$ is finite and differentiable.

Remark 2. We can wonder whether we can obtain a version of Proposition 3 when the distribution of the random variables $\{W_n\,:\, n\geq 1\}$ is not bounded from above, i.e. when $M=\infty$ . Firstly, in such a case, the rate function $I_Z$ in Proposition 3 is not good, and therefore we cannot apply Proposition 2. However, we can prove Proposition 6, i.e. a suitable modification of Proposition 3 with $\{P((Y_n,Z_n/h_n)\in\cdot\,)\,:\, n\geq 1\}$ in place of $\{P((Y_n,Z_n)\in\cdot\,)\,:\, n\geq 1\}$ , for some $h_n$ such that $h_n\to\infty$ .

It is interesting to present the following example in which the rate function $I_{Y,Z}$ is good even if $m=-\infty$ .

Example 1. We take $\mathcal{I}=({-}\infty,0]$ (so $M=0$ and $m=-\infty$ ). Let f be defined by $f(w) \,:\!=\, {\textrm{e}}^w\textbf{1}_{({-}\infty,0)}(w)$ . Then, for $z\in ({-}\infty,0]$ , we have $F(z)={\textrm{e}}^z$ , which yields $I_Z(z)=-z$ ;

\begin{align*} \kappa_{Y\mid Z}(\theta\mid z) & = \log\frac{\int_{-\infty}^z{\textrm{e}}^{\theta w}{\textrm{e}}^w\,{\textrm{d}} w}{{\textrm{e}}^z} = \log\bigg({\textrm{e}}^{-z}\int_{-\infty}^z{\textrm{e}}^{(\theta+1)w}\,{\textrm{d}} w\bigg) \\[3pt] & = \left\{ \begin{array}{l@{\quad}l} -z + \log\dfrac{{\textrm{e}}^{(\theta+1)z}}{\theta+1} &\ \mbox{if}\ \theta+1>0 \\[10pt] \infty &\ \mbox{if}\ \theta+1\leq 0 \end{array}\right. = \left\{ \begin{array}{l@{\quad}l} \theta z-\log(\theta+1) &\ \mbox{if}\ \theta>-1, \\[3pt] \infty &\ \mbox{if}\ \theta\leq -1, \end{array}\right. \end{align*}

and therefore, for $y<z\leq 0$ ,

\begin{align*} I_{Y\mid Z}(y\mid z) = \sup_{\theta\in\mathbb{R}}\big\{\theta y-\kappa_{Y\mid Z}(\theta\mid z)\big\} = \sup_{\theta>-1}\{\theta(y-z)+\log(\theta+1)\} = (z-y)-1-\log(z-y) \end{align*}

(indeed, the supremum above is attained at $\theta=({1}/({z-y}))-1$ ).
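This closed form is easy to verify numerically (our own check, not in the paper): a brute-force grid search over $\theta>-1$ for the supremum defining $I_{Y\mid Z}(y\mid z)$ should reproduce $(z-y)-1-\log(z-y)$ .

```python
import math

def I_cond(y, z, lo=-0.999, hi=60.0, grid=20001):
    """Numeric sup over theta > -1 of theta*(y - z) + log(theta + 1),
    i.e. the Legendre transform from Example 1, by grid search."""
    best = -math.inf
    for k in range(grid):
        th = lo + (hi - lo) * k / (grid - 1)
        best = max(best, th * (y - z) + math.log(th + 1))
    return best

y, z = -3.0, -1.0
num = I_cond(y, z)
closed = (z - y) - 1 - math.log(z - y)  # = 1 - log(2) here
print(num, closed)
```

The grid maximum sits at $\theta=1/(z-y)-1=-1/2$ for this choice of $(y,z)$ , matching the stated maximizer.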

Now we are able to check that $I_{Y,Z}$ is a good rate function. For every $\eta\geq 0$ we have

\begin{align*} \{(y,z)\in\mathcal{I}\times\mathcal{I}\,:\, I_{Y,Z}(y,z)\leq\eta\} & = \{(y,z)\in\mathcal{I}\times\mathcal{I}\,:\, I_{Y\mid Z}(y\mid z)+I_Z(z)\leq\eta\} \\ & = \{(y,z)\,:\, y<z\leq 0,(z-y)-1-\log(z-y)-z\leq\eta\} \\ & \quad \subset\{(y,z)\,:\, y<z\leq 0,(z-y)-1-\log(z-y)\leq\eta,-z\leq\eta\}; \end{align*}

moreover, for every $z\in[{-}\eta,0]$ , there exist $r_1^\eta,r_2^\eta$ such that $0<r_1^\eta<1<r_2^\eta$ and $(z-y)-1-\log(z-y)\leq\eta$ if and only if $r_1^\eta\leq z-y\leq r_2^\eta$ ; therefore the (closed) level set $\{(y,z)\in\mathcal{I}\times\mathcal{I}\,:\, I_{Y,Z}(y,z)\leq\eta\}$ is a subset of the compact set (it is a parallelogram) $\big\{(y,z)\,:\,{-}\eta\leq z\leq 0,z-r_2^\eta\leq y\leq z-r_1^\eta\big\}$ .

3.2. Weak convergence and NCMD

Throughout this paper we consider the Weibull distribution with parameter 1, i.e. the distribution of a random variable U such that $P(U\leq u)=\min\{{\textrm{e}}^u,1\}$ for all $u\in\mathbb{R}$ (thus $-U$ is an exponentially distributed random variable with mean 1). We start with the following assumption.

Assumption 2. Let $\{W_n\,:\, n\geq 1\}$ be a sequence of i.i.d. real random variables as in Assumption 1 with density function f (so the random variables $\{W_n\,:\, n\geq 1\}$ have finite mean $\mu<M$ and variance $\sigma^2>0$ ; indeed, $\kappa_{Y\mid Z}(\theta\mid M) = \log\mathbb{E}\big[{\textrm{e}}^{\theta W_1}\big]$ is finite in a neighborhood of the origin $\theta=0$ ). Moreover, we assume that the left derivative $F^\prime(M{-})$ of F at M exists, that $F^\prime(M{-})>0$ , and that $f(M)=F^\prime(M{-})$ . Finally, let $\{L(n)\,:\, n\geq 1\}$ be a sequence such that $L(n)\to F^\prime(M{-})$ as $n\to\infty$ .

It is well known that, if Assumption 2 holds, we have the following weak convergence results (as $n\to\infty$ ):

  1. (i) By the central limit theorem,

    \begin{align*}\bigg\{\frac{Y_n-\mu}{\sigma/\sqrt{n}}\,:\, n\geq 1\bigg\}\end{align*}
    converges weakly to a standard Gaussian distribution;
  2. (ii) $\{nL(n)(Z_n-M)\,:\, n\geq 1\}$ converges weakly to a Weibull distribution with parameter 1. Indeed, for every $z\leq 0$ , for a suitable remainder $o({1}/{n})$ (as $n\to\infty$ ), for n large enough we have

    \begin{align*} P(nL(n)(Z_n-M)\leq z) & = P\bigg(Z_n\leq M+\frac{z}{nL(n)}\bigg) \\ & = F^n\bigg(M+\frac{z}{nL(n)}\bigg) \\ & = \bigg(1+F^\prime(M{-})\frac{z}{nL(n)}+o\bigg(\frac{1}{n}\bigg)\bigg)^n \to {\textrm{e}}^z\ (\mbox{as}\ n\to\infty). \end{align*}
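In the special case $f(w)={\textrm{e}}^w\textbf{1}_{({-}\infty,0)}(w)$ of Example 1 (so $M=0$ , $F(z)={\textrm{e}}^z$ , $F^\prime(M{-})=1$ , and we may take $L(n)=1$ ), this computation is exact for every n, not only in the limit; the following sketch of ours makes that explicit.

```python
import math

def cdf_scaled_max(z, n):
    """P(n * L(n) * (Z_n - M) <= z) for the density e^w on (-inf, 0):
    equals F(M + z/n)^n = (e^{z/n})^n, which is e^z for every n (z <= 0)."""
    return math.exp(z / n) ** n

for n in [1, 10, 1000]:
    print(n, cdf_scaled_max(-2.0, n))  # equals e^{-2} for every n
```

So in this example the rescaled maximum has exactly the Weibull(1) distribution for each n, which makes it a convenient test case for the results below.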

Remark 3. The weak convergence of $\{nL(n)(Z_n-M)\,:\, n\geq 1\}$ in (ii) can be related to a particular case of the Fisher–Tippett–Gnedenko theorem [Reference Embrechts, Klüppelberg and Mikosch6, Theorem 3.2.3]. More precisely, we mean the weak convergence of

\begin{align*}\bigg\{\frac{Z_n-M}{M-F^{-1}(1-1/n)}\,:\, n\geq 1\bigg\}\end{align*}

when the random variables $\{W_n\,:\, n\geq 1\}$ are in the maximum domain of attraction of the Weibull distribution with parameter 1; see [Reference Embrechts, Klüppelberg and Mikosch6, Table 3.4.3, p. 154] for $\alpha=1$ (for the related theory, see [Reference Embrechts, Klüppelberg and Mikosch6, Section 3.3.2]). Indeed, we have $M-F^{-1}(1-1/n)=n^{-1}L_1(n)$ for a slowly varying function $L_1$ ; then, since $M=F^{-1}(1)$ , we get

\begin{align*} L_1(n) = \frac{F^{-1}(1)-F^{-1}(1-1/n)}{1/n}\to (F^{-1})^\prime(1)=\frac{1}{F^\prime(M{-})}\ \mbox{as}\ n\to\infty, \end{align*}

and therefore $L_1(n)$ plays the role of ${1}/{L(n)}$ (at least for n large enough).

Actually, as we state in the next proposition, we have a stronger result, i.e. the weak convergence of the bivariate sequence towards a bivariate distribution with independent components.

Proposition 4. If Assumption 2 holds, then

\begin{align*}\bigg\{\bigg(\frac{Y_n-\mu}{\sigma/\sqrt{n}},nL(n)(Z_n-M)\bigg)\,:\, n\geq 1\bigg\}\end{align*}

converges weakly to a bivariate distribution with independent components distributed as a standard Gaussian distribution and a Weibull distribution with parameter 1.

Proof. This is a consequence of [Reference Chow and Teugels4, Theorem 1].

The next proposition provides a class of LDPs that fills the gap between the convergence of $\{(Y_n,Z_n)\,:\, n\geq 1\}$ to $(\mu,M)$ (governed by the LDP in Proposition 3 with speed $v_n=n$ ) and the weak convergence in Proposition 4. Then we have the NCMD result because the weak convergence in Proposition 4 is towards a non-Gaussian distribution (indeed, the second marginal distribution is not Gaussian).

Proposition 5. Assume that Assumption 2 holds. Then, for every sequence of positive numbers $\{a_n\,:\, n\geq 1\}$ such that (1) holds with $v_n=n$ , the sequence

\begin{align*} \bigg\{P\bigg(\bigg(\sqrt{a_n}\frac{Y_n-\mu}{\sigma/\sqrt{n}},a_nnL(n)(Z_n-M)\bigg)\in\cdot\bigg)\,:\, n\geq 1\bigg\} \end{align*}

satisfies the LDP with speed $1/a_n$ and good rate function $J_{Y,Z}$ defined by

\begin{align*} J_{Y,Z}(y,z)=\left\{ \begin{array}{l@{\quad}l} \frac{y^2}{2}-z &\ if\ z\leq 0, \\[4pt] \infty &\ otherwise. \end{array}\right. \end{align*}

Proof. We want to apply Proposition 2 (on the product space $\mathcal{Y}\times\mathcal{Z}\,:\!=\,\mathbb{R}\times ({-}\infty,0]$ ) to the sequence $\{\pi_n\,:\, n\geq 1\}$ defined by

\begin{align*}\pi_n(\cdot)=P\bigg(\bigg(\sqrt{a_n}\frac{Y_n-\mu}{\sigma/\sqrt{n}},a_nnL(n)(Z_n-M)\bigg)\in\cdot\bigg).\end{align*}

Notice that here we use some slightly different notation (i.e. $J_Z$ , $J_{Y\mid Z}$ , and $J_{Y,Z}$ in place of $I_Z$ , $I_{Y\mid Z}$ , and $I_{Y,Z}$ in Proposition 2, respectively). The proof is divided into three parts: the proof of Condition (C1), the proof of Condition (C2), and the proof of the goodness of the rate function $J_{Y,Z}$ .

Condition (C1)

Here we consider the sequence $\big\{\pi_n^Z\,:\, n\geq 1\big\}$ defined by $\pi_n^Z(\cdot) = P(a_nnL(n)(Z_n-M)\in\cdot\,).$ We have to prove that $\big\{\pi_n^Z\,:\, n\geq 1\big\}$ satisfies the LDP with speed $1/a_n$ and good rate function $J_Z$ defined by

\begin{align*} J_Z(z) = \left\{ \begin{array}{l@{\quad}l} -z &\ \mbox{if}\ z\leq 0, \\ \infty &\ \mbox{otherwise}. \end{array}\right. \end{align*}

We start with the proof of the upper bound for every closed set $C\subset({-}\infty,0]$ . If $0\in C$ it is trivial. If $0\notin C$ , we set $z_C\,:\!=\,\sup C$ and therefore we have $z_C=-\inf_{z\in C}J_Z(z)<0$ , with $z_C\in C$ . Then, for a suitable remainder $o({1}/{a_nn})$ (as $n\to\infty$ ), for n large enough we have

\begin{align*} P(a_nnL(n)(Z_n-M)\in C) & \leq P(a_nnL(n)(Z_n-M)\leq z_C) \\ & = P\bigg(Z_n\leq M+\frac{z_C}{a_nnL(n)}\bigg) \\ & = F^n\bigg(M+\frac{z_C}{a_nnL(n)}\bigg) = \bigg(1+F^\prime(M{-})\frac{z_C}{a_nnL(n)} + o\bigg(\frac{1}{a_nn}\bigg)\bigg)^n, \end{align*}

and therefore

\begin{multline*} \limsup_{n\to\infty}\frac{1}{1/a_n}\log P(a_nnL(n)(Z_n-M)\in C)\\ \leq\limsup_{n\to\infty}a_nn\log\bigg(1+F^\prime(M{-})\frac{z_C}{a_nnL(n)} + o\bigg(\frac{1}{a_nn}\bigg)\bigg) = z_C = -\inf_{z\in C}J_Z(z). \end{multline*}
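In the exactly solvable case of Example 1 (where $F(z)={\textrm{e}}^z$ , $M=0$ , and $L(n)=1$ ), the bound above holds with equality for every n, since $a_n\log F^n(z_C/(a_nn))=z_C$ . A short check of ours, with the arbitrary admissible scaling $a_n=n^{-1/2}$ :

```python
def a_log_prob(z, n):
    """a_n * log P(a_n * n * L(n) * (Z_n - M) <= z) for the density e^w
    on (-inf, 0), with a_n = n**(-1/2): log F^n(z/(a_n n)) = n * z/(a_n n)."""
    a_n = n ** -0.5
    return a_n * (n * (z / (a_n * n)))  # simplifies to z exactly

for n in [10, 10**4]:
    print(n, a_log_prob(-1.5, n))  # equals z = -1.5 for every n
```

This matches $-\inf_{z\in C}J_Z(z)=z_C$ for $C=({-}\infty,z_C]$ , since $J_Z(z)=-z$ on $({-}\infty,0]$ .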

Now the lower bound for open sets. For every open set $O\subset({-}\infty,0]$ such that $z\in O$ , we have to check that

\begin{align*}\liminf_{n\to\infty}\frac{1}{1/a_n}\log P(a_nnL(n)(Z_n-M)\in O)\geq-J_Z(z).\end{align*}

This is trivial if $z=0$ because $P(a_nnL(n)(Z_n-M)\in O)\to 1$ (indeed, $a_nnL(n)(Z_n-M)$ converges in probability to zero as a trivial consequence of Slutsky's theorem). For $z<0$ we take $\varepsilon>0$ small enough to have $(z-\varepsilon,z+\varepsilon)\subset O\cap({-}\infty,0)$ and, by also taking into account some computations above from the proof of the upper bound for closed sets, for n large enough we get

\begin{align*} & P(a_nnL(n)(Z_n-M)\in O) \\ & \geq P(z-\varepsilon<a_nnL(n)(Z_n-M)<z+\varepsilon) \\ & = P\bigg(M+\frac{z-\varepsilon}{a_nnL(n)}<Z_n<M+\frac{z+\varepsilon}{a_nnL(n)}\bigg) \\ & = F^n\bigg(M+\frac{z+\varepsilon}{a_nnL(n)}\bigg)-F^n\bigg(M+\frac{z-\varepsilon}{a_nnL(n)}\bigg) \\ & = \bigg(1+F^\prime(M{-})\frac{z+\varepsilon}{a_nnL(n)}+o\bigg(\frac{1}{a_nn}\bigg)\bigg)^n -\bigg(1+F^\prime(M{-})\frac{z-\varepsilon}{a_nnL(n)}+o\bigg(\frac{1}{a_nn}\bigg)\bigg)^n \\ & = \bigg(1+F^\prime(M{-})\frac{z-\varepsilon}{a_nnL(n)}+o\bigg(\frac{1}{a_nn}\bigg)\bigg)^n \\ & \quad \times \bigg(\frac{(1+F^\prime(M{-})({z+\varepsilon})/{a_nnL(n)}+o({1}/{a_nn}))^n} {(1+F^\prime(M{-})({z-\varepsilon})/{a_nnL(n)}+o({1}/{a_nn}))^n}-1\bigg); \end{align*}

moreover,

\begin{align*} & \liminf_{n\to\infty}\frac{1}{1/a_n}\log P(a_nnL(n)(Z_n-M)\in O) \\ & \geq \liminf_{n\to\infty}a_nn\log\bigg(1+F^\prime(M{-})\frac{z-\varepsilon}{a_nnL(n)} + o\bigg(\frac{1}{a_nn}\bigg)\bigg) \\[5pt] & \quad + \liminf_{n\to\infty}a_n\log\bigg(\exp\bigg(n\log\bigg( \frac{1+F^\prime(M{-})({z+\varepsilon})/{a_nnL(n)} + o({1}/{a_nn})} {1+F^\prime(M{-})({z-\varepsilon})/{a_nnL(n)} + o({1}/{a_nn})}\bigg)\bigg) - 1\bigg), \end{align*}

where

\begin{align*} \liminf_{n\to\infty}a_nn\log\bigg(1+F^\prime(M{-})\frac{z-\varepsilon}{a_nnL(n)}+o\bigg(\frac{1}{a_nn}\bigg)\bigg) = z-\varepsilon \end{align*}

and

\begin{multline*} n\log\bigg(\frac{1+F^\prime(M{-})({z+\varepsilon})/{a_nnL(n)} + o({1}/{a_nn})} {1+F^\prime(M{-})({z-\varepsilon})/{a_nnL(n)} + o({1}/{a_nn})}\bigg) \\ = n\log\bigg(1 + \frac{F^\prime(M{-}){2\varepsilon}/{a_nnL(n)} + o({1}/{a_nn})} {1+F^\prime(M{-})({z-\varepsilon})/{a_nnL(n)} + o({1}/{a_nn})}\bigg) \sim \frac{2\varepsilon}{a_n}; \end{multline*}

so finally we have

\begin{align*} \liminf_{n\to\infty}\frac{1}{1/a_n}\log P(a_nnL(n)(Z_n-M)\in O)\geq z-\varepsilon+2\varepsilon=-J_Z(z)+\varepsilon, \end{align*}

and we conclude by letting $\varepsilon$ go to zero.

Condition (C2). Here we consider the sequence $\Big\{\pi_n^{Y\mid Z}(\,\cdot\mid z_n)\,:\, n\geq 1\Big\}$ defined by

\begin{align*} \pi_n^{Y\mid Z}(\,\cdot\mid z_n) = P\bigg(\sqrt{a_n}\frac{Y_n-\mu}{\sigma/\sqrt{n}}\in\cdot\mid a_nnL(n)(Z_n-M)=z_n\bigg), \end{align*}

where $\{z_n\,:\, n\geq 1\}\subset ({-}\infty,0]$ is such that $z_n\to z$ (as $n\to\infty$) for some $z\in ({-}\infty,0]$. Then we have to prove that $\Big\{\pi_n^{Y\mid Z}(\,\cdot\mid z_n)\,:\, n\geq 1\Big\}$ satisfies the LDP with speed $1/a_n$ and good rate function $J_{Y\mid Z}$ defined by $J_{Y\mid Z}(y\mid z)={y^2}/{2}$. Note that the condition in (2) trivially holds; indeed, $(y,z)\mapsto J_{Y\mid Z}(y\mid z)={y^2}/{2}$ is a lower semicontinuous function. Moreover, in what follows, we simply write $J_Y(y)={y^2}/{2}$ in place of $J_{Y\mid Z}(y\mid z)={y^2}/{2}$.

We apply the Gärtner–Ellis theorem, i.e. Proposition 1. Indeed, we show that

(6) \begin{equation} \lim_{n\to\infty}\frac{1}{1/a_n}\log\mathbb{E}\bigg[ \exp\bigg(\frac{\theta}{a_n}\sqrt{a_n}\frac{Y_n-\mu}{\sigma/\sqrt{n}}\bigg) \mid a_nnL(n)(Z_n-M)=z_n\bigg]=\frac{\theta^2}{2}\ (\mbox{for all}\ \theta\in\mathbb{R}), \end{equation}

and therefore, for every $z\leq 0$ , we get the desired LDP with rate function $J_Y$ defined by $J_Y(y)\,:\!=\,\sup_{\theta\in\mathbb{R}}\big\{\theta y-{\theta^2}/{2}\big\}$ for all $y\in\mathbb{R}$ , which coincides with the rate function $J_Y(y)={y^2}/{2}$ .
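As a small numerical aside (not part of the proof), the Legendre-transform identity $\sup_{\theta\in\mathbb{R}}\big\{\theta y-{\theta^2}/{2}\big\}={y^2}/{2}$ can be checked on a grid with a few lines of Python:

```python
# Grid check (illustrative only) that sup_theta { theta*y - theta^2/2 } = y^2/2,
# with the supremum attained at theta = y.
def legendre_quadratic(y, thetas):
    return max(th * y - th ** 2 / 2.0 for th in thetas)

thetas = [k / 100.0 for k in range(-500, 501)]  # grid on [-5, 5], step 0.01
for y in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert abs(legendre_quadratic(y, thetas) - y ** 2 / 2.0) < 1e-9
```

The supremum is attained at $\theta=y$, a point which the grid contains for each sampled value of y.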

Now we recall that

\begin{align*}\sqrt{a_n}\frac{Y_n-\mu}{\sigma/\sqrt{n}}=\frac{nY_n-n\mu}{\sigma\sqrt{n/a_n}\,}\end{align*}

and $a_nnL(n)(Z_n-M)=z_n$ if and only if $Z_n=M+{z_n}/{a_nnL(n)}$ ; then, for n large enough, we have

\begin{align*} P\bigg(\sqrt{a_n}\frac{Y_n-\mu}{\sigma/\sqrt{n}}\in\cdot\mid a_nnL(n)(Z_n-M)=z_n\bigg) = P\bigg(\frac{M+{z_n}/{a_nnL(n)}+S_{n-1}^{(z_n)}-n\mu}{\sigma\sqrt{n/a_n}}\in\cdot\bigg), \end{align*}

where $S_{n-1}^{(z_n)}$ is the sum of $n-1$ i.i.d. random variables $W_1^{(z_n)},\ldots,W_{n-1}^{(z_n)}$ such that

\begin{align*}\log\mathbb{E}\big[e^{\theta W_1^{(z_n)}}\big] = \kappa_{Y\mid Z}\bigg(\theta\mid M+\frac{z_n}{a_nnL(n)}\bigg) \quad (\mbox{for all}\ \theta\in\mathbb{R}). \end{align*}

Thus, we get

\begin{multline*} \log\mathbb{E}\bigg[\exp\bigg(\frac{\theta}{a_n}\sqrt{a_n}\frac{Y_n-\mu}{\sigma/\sqrt{n}}\bigg)\mid a_nnL(n)(Z_n-M)=z_n\bigg] \\ = (n-1)\kappa_{Y\mid Z}\bigg(\frac{\theta}{\sigma\sqrt{a_nn}\,}\mid M+\frac{z_n}{a_nnL(n)}\bigg) + \theta\frac{M+{z_n}/{a_nnL(n)}-n\mu}{\sigma\sqrt{a_nn}}, \end{multline*}

where, for a suitable remainder $o({1}/{a_nn})$ (as $n\to\infty$),

\begin{align*} \kappa_{Y\mid Z}\bigg(\frac{\theta}{{\sigma\sqrt{a_nn}}}\mid M+\frac{z_n}{a_nnL(n)}\bigg) & = \partial_\theta\kappa_{Y\mid Z}(0\mid M)\frac{\theta}{\sigma\sqrt{a_nn}\,} + \partial_z\kappa_{Y\mid Z}(0\mid M)\frac{z_n}{a_nnL(n)} \\ & \quad + \frac{1}{2}\partial_{\theta\theta}^2\kappa_{Y\mid Z}(0\mid M)\frac{\theta^2}{\sigma^2a_nn} + \frac{1}{2}\partial_{zz}^2\kappa_{Y\mid Z}(0\mid M)\frac{z_n^2}{a_n^2n^2L^2(n)} \\ & \quad + \partial_{\theta z}^2\kappa_{Y\mid Z}(0\mid M)\frac{\theta}{\sigma\sqrt{a_nn}\,} \frac{z_n}{a_nnL(n)} + o\bigg(\frac{1}{a_nn}\bigg); \end{align*}

moreover, we have $\partial_\theta\kappa_{Y\mid Z}(0\mid M) = \mu$ and $\partial_{\theta\theta}^2\kappa_{Y\mid Z}(0\mid M)=\sigma^2$ , and (we recall that $F(M)=1$ , $\int_m^Mf(w)\,{\textrm{d}} w = 1$ and $f(M)=F^\prime(M{-})$ is finite and positive)

\begin{align*} \partial_z\kappa_{Y\mid Z}(0\mid M) = \frac{F(z)}{\int_m^z{\textrm{e}}^{\theta w}f(w)\,{\textrm{d}} w} \frac{{\textrm{e}}^{\theta z}f(z)F(z)-f(z)\int_m^z{\textrm{e}}^{\theta w}f(w)\,{\textrm{d}} w}{F^2(z)}\bigg|_{(\theta,z)=(0,M)}=0. \end{align*}

Then we get the limit in (6), noting that

\begin{align*} & \frac{1}{1/a_n}\log\mathbb{E}\bigg[\exp\bigg(\frac{\theta}{a_n}\sqrt{a_n} \frac{Y_n-\mu}{\sigma/\sqrt{n}}\bigg)\mid a_nnL(n)(Z_n-M)=z_n\bigg] \\ & = a_n(n-1)\bigg\{\mu\frac{\theta}{\sigma\sqrt{a_nn}\,} + \frac{\sigma^2}{2}\frac{\theta^2}{\sigma^2a_nn} + \frac{1}{2}\partial_{zz}^2\kappa_{Y\mid Z}(0\mid M)\frac{z_n^2}{a_n^2n^2L^2(n)} \\ & \quad + \partial_{\theta z}^2\kappa_{Y\mid Z}(0\mid M)\frac{\theta}{\sigma\sqrt{a_nn}\,} \frac{z_n}{a_nnL(n)} + o\bigg(\frac{1}{a_nn}\bigg)\bigg\} + a_n\theta\frac{M+{z_n}/{a_nnL(n)}-n\mu}{\sigma\sqrt{a_nn}} \\ & = \frac{\theta}{\sigma\sqrt{a_nn}\,}\bigg(a_n(n-1)\mu + \partial_{\theta z}^2\kappa_{Y\mid Z}(0\mid M)\frac{z_n(n-1)}{nL(n)} +a_nM + \frac{z_n}{nL(n)}-a_nn\mu\bigg) \\ & \quad + \frac{\theta^2(n-1)}{2n} + \frac{a_n(n-1)}{2}\partial_{zz}^2\kappa_{Y\mid Z}(0\mid M)\frac{z_n^2}{a_n^2n^2L^2(n)} + a_n(n-1)o\bigg(\frac{1}{a_nn}\bigg) \to \frac{\theta^2}{2} \end{align*}

(for each fixed $\theta\in\mathbb{R}$ ).

Goodness of the rate function $J_{Y,Z}$

Here we have to check that, for every $\eta\geq 0$ , every closed level set of $J_{Y,Z}$ is compact. This can be done by noting that, for every $\eta\geq 0$ , we have

\begin{multline*} \{(y,z)\in\mathbb{R}\times({-}\infty,0]\,:\, J_{Y,Z}(y,z)\leq\eta\} \\ =\{(y,z)\in\mathbb{R}\times({-}\infty,0]\,:\, J_Y(y)+J_Z(z)\leq\eta\} \\ \subset\{y\in\mathbb{R}\,:\, J_Y(y)\leq\eta\}\times\{z\in({-}\infty,0]\,:\,J_Z(z)\leq\eta\}, \end{multline*}

where both $\{y\in\mathbb{R}\,:\, J_Y(y)\leq\eta\}$ and $\{z\in({-}\infty,0]\,:\, J_Z(z)\leq\eta\}$ are compact sets; so every level set is compact because it is a closed subset of a compact set.

Remark 4. The rate function $J_{Y,Z}(y,z)$ in Proposition 5 can be expressed as a sum of two functions which depend on y and z only, i.e. the marginal rate functions $J_Y(y)$ and $J_Z(z)$ that appear in the proof of that proposition. This is not surprising by the asymptotic independence stated in Proposition 4.

4. A modification of Proposition 3 when M is not finite

In this section we prove Proposition 6, i.e. a suitable modification of Proposition 3 with $\{P((Y_n,Z_n/h_n)\in\cdot\,)\,:\, n\geq 1\}$ in place of $\{P((Y_n,Z_n)\in\cdot\,)\,:\, n\geq 1\}$, for some $h_n$ such that $h_n\to\infty$; actually, we consider some different hypotheses and, in particular, $M=\infty$. In order to do that, in the part of the proof of Proposition 3 in which we check that Condition (C1) holds, we refer to [Reference Giuliano and Macci9, Proposition 3.1] in place of [Reference Giuliano and Macci9, Proposition 4.1]. We start with the following useful lemma.

Lemma 1. Let $\{\pi_n\}_n$ be a sequence of probability measures (on some Polish space) that satisfies the LDP with speed $s_n$ and good rate function I, which uniquely vanishes at some $r_0$ . Moreover, let $t_n$ be another speed function such that ${s_n}/{t_n}\to\infty$ . Then $\{\pi_n\}_n$ satisfies the LDP with speed $t_n$ and good rate function $\Delta(\cdot;\,r_0)$ defined by

\begin{equation*} \Delta(\cdot;\,r_0)\,:\!=\,\left\{ \begin{array}{l@{\quad}l} 0 &\ if\ r=r_0, \\ \infty &\ if\ r\neq r_0. \end{array}\right. \end{equation*}

Proof. First, we can say that $\{\pi_n\}_n$ is exponentially tight with respect to $s_n$ (this follows from the LDP of the sequence $\{\pi_n\}_n$ with speed $s_n$ and good rate function I, and [Reference Lynch and Sethuraman18, Lemma 2.6]). Then $\{\pi_n\}_n$ is also exponentially tight with respect to $t_n$; indeed, if for every $b>0$ there exists a compact set $K_b$ such that $\pi_n\big(K_b^\textrm{c}\big) \leq a{\textrm{e}}^{-s_n b}$ eventually for some $a>0$, then we have the same estimate with $t_n$ in place of $s_n$ because ${\textrm{e}}^{-s_nb}\leq {\textrm{e}}^{-t_nb}$ eventually (recall that ${s_n}/{t_n}\to\infty$ yields $s_n\geq t_n$ eventually). So there exists at least one subsequence of $\{\pi_n\}_n$ which satisfies the LDP with speed $t_n$ (see, e.g., [Reference Puhalskii20, Theorem (P)]). We complete the proof by showing that, for every subsequence of $\{\pi_n\}_n$ (which we still call $\{\pi_n\}_n$ ) that satisfies the LDP with speed $t_n$, the governing rate function is $\Delta(\cdot;\,r_0)$. Here, as in Section 2, we consider the notation $B_R(r)$ for the open ball centered at r and with radius R. Then, by the hypotheses, we have

\begin{align*} -I(r) \leq \lim_{R\to 0}\liminf_{n\to\infty}\frac{1}{s_n}\log\pi_n(B_R(r)) \leq \lim_{R\to 0}\limsup_{n\to\infty}\frac{1}{s_n}\log\pi_n(B_R(r)) \leq -I(r) \end{align*}

for every r in the Polish space; our aim is to get the same estimate (up to a subsequence) with $t_n$ in place of $s_n$ and $\Delta(\cdot;\,r_0)$ in place of I.

We start with the case $r=r_0$ . Then we trivially have

\begin{align*}\limsup_{n\to\infty}\frac{1}{t_n}\log\pi_n(B_R(r))\leq 0=-\Delta(r_0;r_0),\end{align*}

whence we obtain $\lim_{R\to 0}\limsup_{n\to\infty}({1}/{t_n})\log\pi_n(B_R(r))\leq-\Delta(r_0;r_0)$ . Moreover, for every $R>0$ , we have $\pi_n(B_R(r))\to 1$ ; this yields

\begin{align*}\lim_{n\to\infty}\frac{1}{t_n}\log\pi_n(B_R(r))=0=-\Delta(r_0;r_0),\end{align*}

whence we obtain $\lim_{R\to 0}\liminf_{n\to\infty}({1}/{t_n})\log\pi_n(B_R(r))=-\Delta(r_0;r_0)$ . Thus, the desired bounds for $r=r_0$ are proved, and we now consider the case $r\neq r_0$ . Then, we trivially have

\begin{align*}\liminf_{n\to\infty}\frac{1}{t_n}\log\pi_n(B_R(r))\geq-\infty=-\Delta(r;r_0),\end{align*}

whence we obtain $\lim_{R\to 0}\liminf_{n\to\infty}({1}/{t_n})\log\pi_n(B_R(r))\geq-\Delta(r;r_0)$ . Moreover, we can find $\rho>0$ small enough that $I(\overline{B_\rho(r)})\,:\!=\,\inf\{I(y)\,:\, y\in\overline{B_\rho(r)}\}>0$ (thus $r_0\notin\overline{B_\rho(r)}$ ). Then

\begin{align*} \limsup_{n\to\infty}\frac{1}{t_n}\log\pi_n(B_\rho(r)) \leq \limsup_{n\to\infty}\frac{s_n}{t_n}\frac{1}{s_n}\log\pi_n(\overline{B_\rho(r)}) \leq -\infty = -\Delta(r;r_0) \end{align*}

(because ${s_n}/{t_n}\to\infty$ and $\limsup_{n\to\infty}({1}/{s_n})\log\pi_n(\overline{B_\rho(r)})\leq-I(\overline{B_\rho(r)})<0$); so, since $\pi_n(B_R(r))\leq\pi_n(B_\rho(r))$ for every $0<R\leq\rho$, we get

\begin{align*} \lim_{R\to 0}\limsup_{n\to\infty}\frac{1}{t_n}\log\pi_n(B_R(r)) \leq \limsup_{n\to\infty}\frac{1}{t_n}\log\pi_n(B_\rho(r)) \leq -\Delta(r;r_0). \end{align*}

Thus, the desired bounds for $r\neq r_0$ are proved, and this completes the proof.
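Lemma 1 can be illustrated with a concrete example of our own (not part of the paper's setting): for sample means of i.i.d. standard Gaussian variables, Cramér's theorem gives speed $s_n=n$ and rate $I(r)={r^2}/{2}$, and at the slower speed $t_n=\log n$ the normalized log-probabilities of balls away from $r_0=0$ diverge to $-\infty$, as in the degenerate rate function $\Delta(\cdot;\,0)$. A sketch using the exact Gaussian tail:

```python
import math

# Illustration of Lemma 1 (our example): the sample mean of n i.i.d. N(0,1)
# variables is N(0, 1/n), so P(mean_n > c) = 0.5 * erfc(c * sqrt(n/2)),
# which decays at speed s_n = n with rate c^2/2.  At the slower speed
# t_n = log n the normalized log-probability decreases without bound.
def log_tail(n, c):
    return math.log(0.5 * math.erfc(c * math.sqrt(n / 2.0)))

c = 0.5
for n in [100, 400, 1600]:
    print(n, log_tail(n, c) / n, log_tail(n, c) / math.log(n))
# first column approaches -c^2/2 = -0.125; second column diverges to -infinity
```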

Now we are able to prove Proposition 6. In particular, we consider the notation in Assumption 1, and we again use the notation $\mu$ for the mean of the i.i.d. random variables $\{W_n\,:\, n\geq 1\}$ .

Proposition 6. Let $\{W_n\,:\, n\geq 1\}$ be i.i.d. random variables with common continuous distribution function F such that $\kappa_Y(\theta)\,:\!=\,\log\mathbb{E}[{\textrm{e}}^{\theta W_1}]$ is finite in a neighbourhood of $\theta=0$ . Assume that $M=\infty$ . We set $\mathcal{H}(x)=-\log(1-F(x))$ . Moreover, let $h_n$ be such that $1-F(h_n)={1}/{n}$ , or equivalently $\mathcal{H}(h_n)=\log n$ . We also assume that $\mathcal{H}$ is a regularly varying function at $\infty$ of index $\alpha>0$ , i.e.

\begin{align*}\lim_{y\to\infty}\frac{\mathcal{H}(xy)}{\mathcal{H}(y)} = x^\alpha\quad for\ all\ x>0.\end{align*}

Then $\{P((Y_n,Z_n/h_n)\in\cdot\,)\,:\, n\geq 1\}$ satisfies the LDP with speed $\log n$ and rate function $H_{Y,Z}$ defined by

\begin{align*} H_{Y,Z}(y,z)\,:\!=\,\left\{ \begin{array}{l@{\quad}l} H_Z(z) &\ if\ z\geq 1\ and\ y=\mu, \\ \infty &\ otherwise, \end{array}\right. \end{align*}

where $H_Z(z)\,:\!=\,z^\alpha-1$ .

Proof. It is well known that it is enough to prove the following two conditions:

(i) for all $(y,z)\in\mathbb{R}^2$,

    \begin{align*} -H_{Y,Z}(y,z) & \leq \lim_{R\to 0}\liminf_{n\to\infty}\frac{1}{\log n}\log P((Y_n,Z_n/h_n)\in(y-R,y+R)\times(z-R,z+R)) \\ & \leq \lim_{R\to 0}\limsup_{n\to\infty}\frac{1}{\log n}\log P((Y_n,Z_n/h_n)\in(y-R,y+R)\times(z-R,z+R)) \\ & \leq -H_{Y,Z}(y,z); \end{align*}
(ii) $\{P((Y_n,Z_n/h_n)\in\cdot\,)\,:\, n\geq 1\}$ is exponentially tight with respect to the speed $\log n$.

For the first condition we start with the two trivial cases $z<1$ and $y\neq\mu$, for which it is enough to check the upper bound. If $z<1$ we have

(7) \begin{equation} P((Y_n,Z_n/h_n)\in(y-R,y+R)\times(z-R,z+R))\leq P(Z_n/h_n\in(z-R,z+R)) \end{equation}

and, for $R>0$ small enough, $\limsup_{n\to\infty}({1}/{\log n})\log P(Z_n/h_n\in(z-R,z+R))=-\infty$ by the LDP in [Reference Giuliano and Macci9, Proposition 3.1]. If $y\neq\mu$ we have

\begin{align*}P((Y_n,Z_n/h_n)\in(y-R,y+R)\times(z-R,z+R)) \leq P(Y_n\in(y-R,y+R))\end{align*}

and, for $R>0$ small enough, $\limsup_{n\to\infty}({1}/{\log n})\log P(Y_n\in(y-R,y+R))=-\infty$ by the LDP of $\{Y_n\,:\, n\geq 1\}$ with speed $\log n$ and rate function $\Delta(\cdot;\,\mu)$; this LDP is a consequence of Lemma 1 together with Cramér's theorem [Reference Dembo and Zeitouni5, Theorem 2.2.3] with $I=\kappa_Y^*$ (where $\kappa_Y^*$ is defined by $\kappa_Y^*(y)\,:\!=\,\sup_{\theta\in\mathbb{R}}\{\theta y-\kappa_Y(\theta)\}$, which uniquely vanishes at $y=\mu$ ), $s_n=n$, and $t_n=\log n$.

So, we conclude the proof of the first condition by taking $z\geq 1$ and $y=\mu$ . The upper bound can be proved as we did before for the case $z<1$ ; indeed, by (7) and by the LDP in [Reference Giuliano and Macci9, Proposition 3.1], we have

\begin{multline*} \lim_{R\to 0}\limsup_{n\to\infty}\frac{1}{\log n}\log P((Y_n,Z_n/h_n)\in(y-R,y+R)\times(z-R,z+R))\\ \leq\lim_{R\to 0}\limsup_{n\to\infty}\frac{1}{\log n}\log P(Z_n/h_n\in(z-R,z+R))\leq-H_Z(z). \end{multline*}

For the lower bound, we take into account that

\begin{align*} & P((Y_n,Z_n/h_n)\in(y-R,y+R)\times(z-R,z+R)) \\ & = P(Z_n/h_n\in(z-R,z+R))-P((Y_n,Z_n/h_n)\in(y-R,y+R)^\textrm{c}\times(z-R,z+R)), \end{align*}

and we get

\begin{align*}\lim_{R\to 0}\liminf_{n\to\infty}\frac{1}{\log n}\log P((Y_n,Z_n/h_n)\in(y-R,y+R)\times(z-R,z+R))\geq-H_Z(z)\end{align*}

by applying [Reference Ganesh and Torrisi7, Lemma 19]. In order to do that, we remark that

\begin{align*}\liminf_{n\to\infty}\frac{1}{\log n}\log P(Z_n/h_n\in(z-R,z+R))\geq-H_Z(z)\end{align*}

by the LDP in [Reference Giuliano and Macci9, Proposition 3.1], and

\begin{align*} & \limsup_{n\to\infty}\frac{1}{\log n}\log P((Y_n,Z_n/h_n)\in(y-R,y+R)^\textrm{c}\times(z-R,z+R)) \\ & \leq \limsup_{n\to\infty}\frac{1}{\log n}\log P(Y_n\in(y-R,y+R)^\textrm{c}) \leq -\inf_{s\in(y-R,y+R)^\textrm{c}}\Delta(s;\,\mu)=-\infty \end{align*}

(here we take into account the LDP of $\{Y_n\,:\, n\geq 1\}$ with speed $\log n$ stated above). Then, [Reference Ganesh and Torrisi7, Lemma 19] yields

\begin{align*}\liminf_{n\to\infty}\frac{1}{\log n}\log P((Y_n,Z_n/h_n)\in(y-R,y+R)\times(z-R,z+R))\geq -H_Z(z),\end{align*}

and we easily get the desired lower bound.

We conclude with the second condition, i.e. the exponential tightness. By [Reference Lynch and Sethuraman18, Lemma 2.6], the marginal sequences are exponentially tight; thus, for all $b>0$, there exist two compact sets $K_b^{(1)}$ and $K_b^{(2)}$ such that $P\big(Y_n\notin K_b^{(1)}\big)\leq a_1{\textrm{e}}^{-b\log n}$ and $P\big(Z_n/h_n\notin K_b^{(2)}\big)\leq a_2{\textrm{e}}^{-b\log n}$ eventually, for some $a_1,a_2>0$. Then, since $K_b^{(1)}\times K_b^{(2)}$ is a compact set, we conclude the proof by noting that

\begin{align*} P\big((Y_n,Z_n/h_n)\notin K_b^{(1)}\times K_b^{(2)}\big) \leq P\big(Y_n\notin K_b^{(1)}\big) + P\big(Z_n/h_n\notin K_b^{(2)}\big) \leq (a_1+a_2){\textrm{e}}^{-b\log n} \end{align*}

eventually.
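The conclusion of Proposition 6 can also be checked numerically for a concrete distribution of our own choosing, satisfying the hypotheses: $F(x)=1-{\textrm{e}}^{-x^\alpha}$, so that $\mathcal{H}(x)=x^\alpha$ is regularly varying of index $\alpha$ and $h_n=(\log n)^{1/\alpha}$. For $z>1$ the tail of $Z_n/h_n$ is available in closed form, so no simulation is needed:

```python
import math

# For F(x) = 1 - exp(-x**alpha) we have h_n = (log n)**(1/alpha) and, for z > 1,
#   P(Z_n/h_n >= z) = 1 - (1 - n**(-z**alpha))**n ~ n**(1 - z**alpha),
# so (1/log n) * log P(Z_n/h_n >= z) -> -(z**alpha - 1) = -H_Z(z).
def normalized_log_tail(n, z, alpha):
    p = 1.0 - (1.0 - n ** (-z ** alpha)) ** n  # exact tail; Z_n = max of n i.i.d.
    return math.log(p) / math.log(n)

alpha, z = 2.0, 1.2
for n in [10 ** 3, 10 ** 6, 10 ** 9]:
    print(n, normalized_log_tail(n, z, alpha))  # approaches -(1.2**2 - 1) = -0.44
```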

We conclude by noting that, as for the rate function $J_{Y,Z}(y,z)$ in Proposition 5 (see Remark 4), we have an asymptotic independence interpretation for the rate function $H_{Y,Z}(y,z)$ in Proposition 6.

Remark 5. The rate function $H_{Y,Z}(y,z)$ in Proposition 6 can be expressed as a sum of two functions which depend on y and z only, i.e. the marginal rate functions $\Delta(y;\mu)$ and $H_Z(z)$ that appear in the proof of that proposition.

5. MD for sums of minima of i.i.d. exponential random variables

We start with the following assumption.

Assumption 3. Let $\{W_n\,:\, n\geq 1\}$ be a sequence of i.i.d. real random variables with exponential distribution; more precisely, their common distribution function F is defined by $F(x)\,:\!=\,1-{\textrm{e}}^{-\lambda x}$ for all $x\geq 0$, for some $\lambda>0$. Moreover, let $\{X_n\,:\, n\geq 1\}$ be the sequence of random variables defined, for all $n\geq 2$, by

\begin{align*}X_n\,:\!=\,\frac{\sum_{k=1}^n\min\{W_1,\ldots,W_k\}}{\log n}.\end{align*}
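A quick Monte Carlo sketch (parameters are our own choices) of the convergence $X_n\to{1}/{\lambda}$:

```python
import math
import random

# Simulate X_n = (sum of running minima of n i.i.d. Exp(lambda)) / log n; its
# sample average should be close to 1/lambda.  Convergence is slow (of order
# 1/log n), since E[X_n] = H_n/(lambda * log n) with H_n the harmonic sum.
def sample_X(n, lam, rng):
    total, running_min = 0.0, float("inf")
    for _ in range(n):
        running_min = min(running_min, rng.expovariate(lam))
        total += running_min
    return total / math.log(n)

rng = random.Random(0)
lam, n, reps = 2.0, 10_000, 200
mean_X = sum(sample_X(n, lam, rng) for _ in range(reps)) / reps
print(mean_X)  # close to 1/lambda = 0.5, slightly above because of the H_n bias
```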

Now we recall two results. The first one provides the reference LDP, namely the LDP which governs the convergence of $X_n$ to ${1}/{\lambda}$ (as $n\to\infty$ ); indeed, the rate function $I_X$ in the next proposition uniquely vanishes at $x={1}/{\lambda}$ .

Proposition 7. Assume that Assumption 3 holds. Then $\{P(X_n\in\cdot\,)\,:\, n\geq 2\}$ satisfies the LDP with speed $\log n$ and rate function $I_X$ defined by

\begin{align*} I_X(x)\,:\!=\,\left\{ \begin{array}{l@{\quad}l} \big(\sqrt{\lambda x}-1\big)^2 &\ \mbox{if}\ x\geq 0, \\ \infty &\ \mbox{if}\ x<0. \end{array}\right. \end{align*}

Proof. See [Reference Giuliano and Macci10, Proposition 5.2].

The second result concerns the following weak convergence to a centered Gaussian distribution.

Proposition 8. Assume that Assumption 3 holds. Then $(X_n-{1}/{\lambda})\sqrt{\log n}$ converges weakly (as $n\to\infty$ ) to the centered Gaussian distribution with variance $\sigma^2={2}/{\lambda^2}$ .

Proof. The random variables $(X_n-{1}/{\lambda})\sqrt{({\lambda^2}/{2})\log n}$ converge weakly to the standard Gaussian distribution [Reference Höglund12]; indeed, the distribution function F in Assumption 3 satisfies the condition $\int_0^1|F(x)-x/b|x^{-2}\,{\textrm{d}} x<\infty$ (required in [Reference Höglund12]) if and only if $b={1}/{\lambda}$ . Then we can immediately get the desired weak convergence.
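A Monte Carlo sketch of Proposition 8 (parameters ours; only the limiting variance is checked, crudely):

```python
import math
import random

# Sample the fluctuation (X_n - 1/lambda) * sqrt(log n) and compare its
# empirical variance with sigma^2 = 2/lambda^2 from Proposition 8.
def fluctuation(n, lam, rng):
    total, running_min = 0.0, float("inf")
    for _ in range(n):
        running_min = min(running_min, rng.expovariate(lam))
        total += running_min
    x_n = total / math.log(n)
    return (x_n - 1.0 / lam) * math.sqrt(math.log(n))

rng = random.Random(1)
lam, n, reps = 1.0, 10_000, 400
vals = [fluctuation(n, lam, rng) for _ in range(reps)]
m = sum(vals) / reps
var = sum((v - m) ** 2 for v in vals) / (reps - 1)
print(var)  # roughly 2/lam**2 = 2; finite-n and Monte Carlo error are visible
```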

The aim of this section is to prove Proposition 9, which provides a class of LDPs that fills the gap between the convergence of $\{X_n\,:\, n\geq 1\}$ to ${1}/{\lambda}$ (governed by the LDP in Proposition 7 with speed $v_n=\log n$ ), and the weak convergence in Proposition 8. Then we get a (central) moderate deviation result because the weak convergence in Proposition 8 is towards a Gaussian distribution. We also remark that, as typically happens, we have $I_X^{\prime\prime}({1}/{\lambda})={1}/{\sigma^2}$ (where $\sigma^2={2}/{\lambda^2}$ as in Proposition 8); this equality can be checked with some easy computations, and we omit the details.
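The omitted computation $I_X^{\prime\prime}({1}/{\lambda})={1}/{\sigma^2}={\lambda^2}/{2}$ can also be confirmed numerically, e.g. by a central finite difference (our sketch):

```python
import math

# Central finite difference of I_X(x) = (sqrt(lambda*x) - 1)^2 at x = 1/lambda;
# the exact value I_X''(1/lambda) = lambda**2/2 matches 1/sigma^2 with
# sigma^2 = 2/lambda**2 as in Proposition 8.
def I_X(x, lam):
    return (math.sqrt(lam * x) - 1.0) ** 2

def second_derivative(f, x, h=1e-4):
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2

lam = 3.0
approx = second_derivative(lambda x: I_X(x, lam), 1.0 / lam)
print(approx)  # target: lam**2 / 2 = 4.5
```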

Proposition 9. Assume that Assumption 3 holds. Then, for every sequence of positive numbers $\{a_n\,:\, n\geq 1\}$ such that (1) holds with $v_n=\log n$ , the sequence

\begin{align*}\bigg\{P\bigg(\bigg(X_n-\frac{1}{\lambda}\bigg)\sqrt{a_n\log n}\in\cdot\bigg)\,:\, n\geq 2\bigg\}\end{align*}

satisfies the LDP with speed $1/a_n$ and rate function $J_X$ defined by $J_X(x)={x^2}/({2\sigma^2})$ , where $\sigma^2={2}/{\lambda^2}$ as in Proposition 8.

Proof. We apply the Gärtner–Ellis theorem, i.e. Proposition 1. Indeed, we show that

(8) \begin{equation} \lim_{n\to\infty}\frac{1}{1/a_n}\log\mathbb{E}\bigg[\exp\bigg(\frac{\theta}{a_n} \bigg(X_n-\frac{1}{\lambda}\bigg)\sqrt{a_n\log n}\bigg)\bigg] = \underbrace{\frac{\sigma^2\theta^2}{2}}_{=\,\theta^2/\lambda^2}\quad (\mbox{for all}\ \theta\in\mathbb{R}), \end{equation}

and therefore we get the desired LDP with rate function $J_X$ defined by

\begin{align*}J_X(x)\,:\!=\,\sup_{\theta\in\mathbb{R}}\bigg\{\theta x-\frac{\theta^2}{\lambda^2}\bigg\}\quad (\mbox{for all}\ x\in\mathbb{R}),\end{align*}

which coincides with the rate function $J_X$ in the statement.

We use a known expression for the moment-generating function of the random variable $\sum_{k=1}^n\min\{W_1,\ldots,W_k\}$ [Reference Ghosh, Babu and Mukhopadhyay8, (3.5)]:

\begin{align*} & \frac{1}{1/a_n}\log\mathbb{E}\bigg[\exp\bigg(\frac{\theta}{a_n}\bigg(X_n-\frac{1}{\lambda}\bigg) \sqrt{a_n\log n}\bigg)\bigg] \\ & = a_n\bigg({-}\frac{\theta\sqrt{a_n\log n}}{\lambda a_n} + \log\mathbb{E}\bigg[\exp\bigg(\frac{\theta\sqrt{a_n\log n}}{a_n}X_n\bigg)\bigg]\bigg) \\ & = -\frac{\theta\sqrt{a_n\log n}}{\lambda} + a_n\log\mathbb{E}\bigg[\exp\bigg( \frac{\theta\sum_{k=1}^n\min\{W_1,\ldots,W_k\}}{\sqrt{a_n\log n}}\bigg)\bigg] \\ & = \left\{ \begin{array}{l@{\quad}l} -\dfrac{\theta\sqrt{a_n\log n}}{\lambda} + a_n\sum_{k=1}^n\log\bigg(1+\dfrac{{\theta}/\big({\lambda\sqrt{a_n\log n}}\big)} {k\big(1-\big({\theta}/\big({\lambda\sqrt{a_n\log n}}\big)\big)\big)}\bigg) &\ \mbox{if}\ \dfrac{\theta}{\lambda\sqrt{a_n\log n}}<1, \\ \infty &\ \mbox{otherwise}; \end{array}\right. \end{align*}

then, for each fixed $\theta\in\mathbb{R}$ , we can take n large enough to have ${\theta}/\big({\lambda\sqrt{a_n\log n}}\big)<1$ (since $a_n\log n\to\infty$ , as $n\to\infty$ ).
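The closed-form expression used in the display above (from [Reference Ghosh, Babu and Mukhopadhyay8, (3.5)], with $t=\theta/\sqrt{a_n\log n}$) reads, for $t<\lambda$, $\log\mathbb{E}\big[\exp\big(t\sum_{k=1}^n\min\{W_1,\ldots,W_k\}\big)\big] = \sum_{k=1}^n\log\big(1+\frac{t/\lambda}{k(1-t/\lambda)}\big)$. A Monte Carlo sketch comparing it with simulation (parameters ours):

```python
import math
import random

# Closed form for log E[exp(t * sum_k min(W_1,...,W_k))], valid for t < lambda.
def log_mgf_closed_form(t, lam, n):
    r = t / lam
    return sum(math.log(1.0 + r / (k * (1.0 - r))) for k in range(1, n + 1))

# Crude Monte Carlo estimate of the same quantity.
def log_mgf_monte_carlo(t, lam, n, reps, rng):
    acc = 0.0
    for _ in range(reps):
        s, m = 0.0, float("inf")
        for _ in range(n):
            m = min(m, rng.expovariate(lam))
            s += m
        acc += math.exp(t * s)
    return math.log(acc / reps)

t, lam, n = 0.3, 1.0, 20
cf = log_mgf_closed_form(t, lam, n)
mc = log_mgf_monte_carlo(t, lam, n, 50_000, random.Random(2))
print(cf, mc)  # the two values should agree to about two decimal places
```

For n=1 the closed form reduces to $-\log(1-t/\lambda)$, the log-moment generating function of a single exponential variable, as it should.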

Moreover, we remark that

(9) \begin{equation} \mbox{for all}\ v>\frac{1}{2},\ \mbox{there exists}\ \delta>0\ \mbox{such that}\ \log(1+x)\geq x-vx^2\ \mbox{for all}\ |x|<\delta \end{equation}

(this can be proved by checking that the function $g(x)\,:\!=\,\log(1+x)-(x-vx^2)$ has a local minimum at $x=0$ ); so, for $\delta>0$ as in (9), we take n large enough to have

\begin{align*}\bigg|\frac{{\theta}/\big({\lambda\sqrt{a_n\log n}}\big)}{1-\big({\theta}/\big({\lambda\sqrt{a_n\log n}}\big)\big)}\bigg|<\delta.\end{align*}
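The elementary inequality (9) is easy to check numerically; for instance (our choice of constants), $v=0.6$ works with $\delta=0.15$:

```python
import math

# Grid check of (9) with v = 0.6 > 1/2 on |x| <= 0.15: the gap
# g(x) = log(1+x) - (x - v*x^2) is nonnegative there, with minimum 0 at x = 0.
v, delta = 0.6, 0.15
xs = [delta * k / 1000.0 for k in range(-1000, 1001)]
gap = min(math.log(1.0 + x) - (x - v * x ** 2) for x in xs)
print(gap)  # 0.0, attained at x = 0
```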

Finally, we set

\begin{align*} b_n &\,:\!=\, -\frac{\theta\sqrt{a_n\log n}}{\lambda} + \frac{{a_n\theta}/\big({\lambda\sqrt{a_n\log n}}\big)}{1-\big({\theta}/\big({\lambda\sqrt{a_n\log n}}\big)\big)} \sum_{k=1}^n\frac{1}{k} \\ & = \frac{-{\theta\sqrt{a_n\log n}}/{\lambda} + {\theta^2}/{\lambda^2} + {a_n\theta}/\big({\lambda\sqrt{a_n\log n}}\big)\sum_{k=1}^n{1}/{k}}{1-\big({\theta}/\big({\lambda\sqrt{a_n\log n}}\big)\big)} \\ & = \frac{({\theta\sqrt{a_n\log n}}/({\lambda}))\big({-}1 + \big({\sum_{k=1}^n1/k}\big)/{\log n}\big) + {\theta^2}/{\lambda^2}}{1-\big({\theta}/\big({\lambda\sqrt{a_n\log n}}\big)\big)}, \end{align*}

and, for n large enough, we have

\begin{align*} & b_n-v\frac{{\theta^2}/({\lambda^2\log n})}{\big(1-\big({\theta}/\big({\lambda\sqrt{a_n\log n}}\big)\big)\big)^2} \sum_{k=1}^n\frac{1}{k^2} \\ & \leq -\frac{\theta\sqrt{a_n\log n}}{\lambda} + a_n\sum_{k=1}^n\log\bigg(1 + \frac{{\theta}/\big({\lambda\sqrt{a_n\log n}}\big)}{k\big(1-\big({\theta}/\big({\lambda\sqrt{a_n\log n}}\big)\big)\big)}\bigg)\leq b_n \end{align*}

by using (9) with

\begin{align*}x=\frac{{\theta}/\big({\lambda\sqrt{a_n\log n}}\big)}{1-\big({\theta}/\big({\lambda\sqrt{a_n\log n}}\big)\big)},\end{align*}

and by the well-known inequality $\log (1+y)\leq y$ for every $y>-1$ .

So, the desired condition (8) holds since

(10) \begin{equation} \lim_{n\to\infty}b_n=\frac{\theta^2}{\lambda^2}, \qquad \lim_{n\to\infty}\frac{{\theta^2}/({\lambda^2\log n})}{\big(1-\big({\theta}/\big({\lambda\sqrt{a_n\log n}}\big)\big)\big)^2} \sum_{k=1}^n\frac{1}{k^2} = 0. \end{equation}

Indeed, the first limit in (10) holds by (1) with $v_n=\log n$ (which yields $a_n\to 0$ and $a_n\log n\to\infty$ ), and by

\begin{align*}\lim_{n\to\infty}\sqrt{\log n}\bigg({-}1 + \frac{\sum_{k=1}^n1/k}{\log n}\bigg)=0;\end{align*}

the second limit in (10) trivially holds by taking into account $a_n\log n\to\infty$ and $\sum_{k=1}^\infty{1}/{k^2}<\infty$ .

In order to make the paper more self-contained we remark that the limit in (8) with $a_n=1$ yields the weak convergence in Proposition 8.

Acknowledgements

The authors thank two referees for some useful comments, and Professor Clive W. Anderson for some discussion on the content of [Reference Chow and Teugels4].

Funding information

This work has been partially supported by MIUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata (CUP E83C18000100006 and CUP E83C23000330006), by University of Rome Tor Vergata (project ‘Asymptotic Methods in Probability’ (CUP E89C20000680005) and project ‘Asymptotic Properties in Probability’ (CUP E83C22001780005)) and by Indam-GNAMPA.

Competing interests

There were no competing interests to declare which arose during the preparation or publication process of this article.

References

Anderson, C. W. and Turkman, K. F. (1991). The joint limiting distribution of sums and maxima of stationary sequences. J. Appl. Prob. 28, 33–44.
Arendarczyk, M., Kozubowski, T. J. and Panorska, A. K. (2018). The joint distribution of the sum and maximum of dependent Pareto risks. J. Multivariate Anal. 167, 136–156.
Chaganty, N. R. (1997). Large deviations for joint distributions and statistical applications. Sankhyā A 59, 147–166.
Chow, T. L. and Teugels, J. L. (1979). The sum and the maximum of i.i.d. random variables. In Proceedings of the Second Prague Symposium on Asymptotic Statistics, eds P. Mandl and M. Hušková, North-Holland, Amsterdam, pp. 81–92.
Dembo, A. and Zeitouni, O. (1998). Large Deviations Techniques and Applications, 2nd edn. Springer, New York.
Embrechts, P., Klüppelberg, C. and Mikosch, T. (1997). Modelling Extremal Events. Springer, Berlin.
Ganesh, A. and Torrisi, G. L. (2008). Large deviations of the interference in a wireless communication model. IEEE Trans. Inform. Theory 54, 3505–3517.
Ghosh, M., Babu, G. J. and Mukhopadhyay, N. (1975). Almost sure convergence of sums of maxima and minima of positive random variables. Z. Wahrscheinlichkeitsth. 33, 49–54.
Giuliano, R. and Macci, C. (2014). Large deviation principles for sequences of maxima and minima. Commun. Statist. Theory Meth. 43, 1077–1098.
Giuliano, R. and Macci, C. (2015). Asymptotic results for weighted means of random variables which converge to a Dickman distribution, and some number theoretical applications. ESAIM Prob. Statist. 19, 395–413.
Giuliano, R. and Macci, C. (2023). Some examples of noncentral moderate deviations for sequences of real random variables. Mod. Stoch. Theory Appl. 10, 111–144.
Höglund, T. (1972). Asymptotic normality of sums of minima of random variables. Ann. Math. Statist. 43, 351–353.
Hsing, T. (1995). A note on the asymptotic independence of the sum and maximum of strongly mixing stationary random variables. Ann. Prob. 23, 938–947.
Kratz, M. (2014). Normex, a new method for evaluating the distribution of aggregated heavy tailed risks. Extremes 17, 661–691.
Kratz, M. and Prokopenko, E. (2023). Multi-normex distributions for the sum of random vectors. Rates of convergence. Extremes 26, 509–544.
Krizmanić, D. (2020). On joint weak convergence of partial sum and maxima processes. Stochastics 92, 876–899.
Leonenko, N., Macci, C. and Pacchiarotti, B. (2021). Large deviations for a class of tempered subordinators and their inverse processes. Proc. R. Soc. Edinburgh Sect. A 151, 2030–2050.
Lynch, J. and Sethuraman, J. (1987). Large deviations for processes with independent increments. Ann. Prob. 15, 610–627.
Müller, U. K. (2019). Refining the central limit theorem approximation via extreme value theory. Statist. Prob. Lett. 155, 108564.
Puhalskii, A. (1991). On functional principle of large deviations. In New Trends in Probability and Statistics, Vol. 1, eds V. Sazonov and T. Shervashidze, VSP, Moklas, pp. 198–218.
Qeadan, F., Kozubowski, T. J. and Panorska, A. K. (2012). The joint distribution of the sum and the maximum of IID exponential random variables. Commun. Statist. Theory Meth. 41, 544–569.