
Asymptotic results on tail moment and tail central moment for dependent risks

Published online by Cambridge University Press:  08 May 2023

Jinzhu Li*
Affiliation:
Nankai University
*Postal address: School of Mathematical Science and LPMC, Nankai University, Tianjin 300071, P. R. China. Email address: [email protected]

Abstract

In this paper, we consider a financial or insurance system with a finite number of individual risks described by real-valued random variables. We focus on two kinds of risk measures, referred to as the tail moment (TM) and the tail central moment (TCM), which are defined as the conditional moment and conditional central moment of some individual risk in the event of system crisis. The first-order TM and the second-order TCM coincide with the popular risk measures called the marginal expected shortfall and the tail variance, respectively. We derive asymptotic expressions for the TM and TCM with any positive integer orders, when the individual risks are pairwise asymptotically independent and have distributions from certain classes that contain both light-tailed and heavy-tailed distributions. The formulas obtained possess concise forms unrelated to dependence structures, and hence enable us to estimate the TM and TCM efficiently. To demonstrate the wide application of our results, we revisit some issues related to premium principles and optimal capital allocation from the asymptotic point of view. We also give a numerical study on the relative errors of the asymptotic results obtained, under some specific scenarios when there are two individual risks in the system. The corresponding asymptotic properties of the degenerate univariate versions of the TM and TCM are discussed separately in an appendix at the end of the paper.

Type
Original Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Consider a financial or insurance system that contains a finite number of, say $d\geq 2$ , individual components. Let $X_{1}$ , …, $X_{d}$ be d real-valued random variables describing the net losses (i.e., risks) of the individual components, and denote by $S_{d}=\sum_{i=1}^{d}X_{i}$ the overall risk of the system. All of $X_{1}$ , …, $X_{d}$ are assumed to be unbounded from above, since the ones with finite upper bounds will not pose substantial risks to the system. The tail moment (TM) of the kth individual risk for some $1\leq k\leq d$ is defined as the conditional moment of $X_{k}$ given that $S_{d}$ exceeds a certain threshold. Specifically, for each positive integer $n\in \textbf{N}^{+}$ , each $1\leq k\leq d$ , and any $t>0$ , the nth-order TM of the kth individual risk is formulated as

(1.1) \begin{equation}\textrm{TM}_{k}^{(n)}(t)=\mathbb{E}\!\left( X_{k}^{n}\left\vert S_{d}>t\right. \right) . \end{equation}

Here and hereafter, a mathematical expectation is assumed to exist by default whenever it appears. We further define the corresponding nth-order tail central moment (TCM) of the kth individual risk as

(1.2) \begin{equation}\textrm{TCM}_{k}^{(n)}(t)=\mathbb{E}\!\left( \left. \left( X_{k}-\textrm{TM}_{k}^{(1)}(t)\right) ^{n}\right\vert S_{d}>t\right) . \end{equation}

In practice, the threshold t is usually chosen to be the value at risk (VaR) of $S_{d}$ under a confidence level $q\in (0,1)$ , i.e., $\textrm{VaR}_{q}(S_{d})=\inf \{x\,:\,\mathbb{P}\!\left( S_{d}\leq x\right) \geq q\}$ .
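Although the paper's treatment is analytic, the conditional moments in (1.1) and (1.2) are straightforward to estimate by simulation. The following Python sketch is purely illustrative and not part of the paper's analysis: the choice of two independent Pareto risks, the seed, and all parameter values are assumptions of this sketch.

```python
import random

def empirical_tm_tcm(samples, k, t, n):
    """Monte Carlo estimates of TM_k^(n)(t) and TCM_k^(n)(t) as in
    (1.1)-(1.2): moments of X_k conditional on the event {S_d > t}."""
    tail = [x for x in samples if sum(x) > t]      # realizations with S_d > t
    tm1 = sum(x[k] for x in tail) / len(tail)      # first-order TM
    tm_n = sum(x[k] ** n for x in tail) / len(tail)
    tcm_n = sum((x[k] - tm1) ** n for x in tail) / len(tail)
    return tm_n, tcm_n

# Illustrative system: d = 2 independent Pareto(alpha = 3) risks.
rng = random.Random(1)
alpha = 3.0
draws = [(rng.paretovariate(alpha), rng.paretovariate(alpha))
         for _ in range(200_000)]
tm2, tcm2 = empirical_tm_tcm(draws, k=0, t=5.0, n=2)
```

For thresholds deep in the tail the conditioning event is rarely hit and such empirical estimates become noisy, which is one motivation for the asymptotic formulas derived in Section 3.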

The TM and TCM with even orders, i.e., $\textrm{TM}_{k}^{(2n)}(t)$ and $\textrm{TCM}_{k}^{(2n)}(t)$ for $n\in \textbf{N}^{+}$ , are always non-negative for any $t>0$ . In addition, the following study indicates that under our models, $\textrm{TM}_{k}^{(2n-1)}(t)$ will eventually be positive as t increases, which is not always the case for $\textrm{TCM}_{k}^{(2n-1)}(t)$ . Hence, under our models, $\textrm{TM}_{k}^{(n)}(t)$ and $\textrm{TCM}_{k}^{(2n)}(t)$ can be regarded as standard measures with non-negativity when t is large or, correspondingly, when q is close to 1 if t is chosen to be $\textrm{VaR}_{q}(S_{d})$ . It is worth noting that a risk measure is usually not required to be non-negative (see, e.g., McNeil et al. [Reference McNeil, Frey and Embrechts36]), and then both $\textrm{TM}_{k}^{(n)}(t)$ and $\textrm{TCM}_{k}^{(n)}(t)$ can be applied as risk measures. In fact, most of the time we are only interested in properties of $\textrm{TM}_{k}^{(n)}(t)$ and $\textrm{TCM}_{k}^{(n)}(t)$ when t is very large, since only in such a case does the conditioning event $\{S_{d}>t\}$ mean an extreme system crisis that is of real concern. On the other hand, in general it is difficult or even impossible to derive exact closed-form expressions for $\textrm{TM}_{k}^{(n)}(t)$ and $\textrm{TCM}_{k}^{(n)}(t)$ with respect to t, especially when there are various dependence structures among the individual risks. Thus, one of the main lines of study in this area is to seek effective and efficient estimates of $\textrm{TM}_{k}^{(n)}(t)$ and $\textrm{TCM}_{k}^{(n)}(t)$ for large t, and this is also the main target of the present paper.

Remarkably, $\textrm{TM}_{k}^{(1)}(t)$ is a very popular risk measure and has attracted much scholarly attention from both researchers and practitioners in recent decades. In the literature, it is usually called the marginal expected shortfall and is used to measure the contribution of some individual component to a system crisis. Additionally, it is also widely applied in capital allocation. Some regulatory environments (e.g., the Swiss Solvency Test) require that the total capital of the system equals the conditional tail expectation of $S_{d}$ with respect to some threshold t, i.e., $\mathbb{E}\!\left( S_{d}\left\vert S_{d}>t\right. \right) $ . Then the most intuitive and commonly used capital allocation rule is the famous Euler one, which assigns the amount of $\textrm{TM}_{k}^{(1)}(t)$ to the kth individual component; see Denault [Reference Denault13], Asimit and Li [Reference Asimit and Li3], and Baione et al. [Reference Baione, De Angelis and Granito4]. There have been many fruitful contributions to the study of $\textrm{TM}_{k}^{(1)}(t)$ . See Cai and Li [Reference Cai and Li9], Furman and Landsman [Reference Furman and Landsman20], Dhaene et al. [Reference Dhaene14], Bargès et al. [Reference Bargès, Cossette and Marceau5], Vernic [Reference Vernic43], Ignatieva and Landsman [Reference Ignatieva and Landsman25], and Marri and Moutanabbir [Reference Marri and Moutanabbir35] for works devoted to finding exact expressions for $\textrm{TM}_{k}^{(1)}(t)$ when $(X_{1},\ldots ,X_{d})$ follows some specific joint distributions. On the other hand, assuming that $X_{1}$ , …, $X_{d}$ are non-negative and have distributions from the Fréchet or Gumbel max-domain of attraction (MDA), Asimit et al. [Reference Asimit, Furman, Tang and Vernic2] obtained a series of asymptotic formulas for $\textrm{TM}_{k}^{(1)}(t)$ as $t\rightarrow \infty $ under certain dependence structures, including both the asymptotic independence and asymptotic dependence cases. 
Some of the results in Asimit et al. [Reference Asimit, Furman, Tang and Vernic2] were extended to more general frameworks in the recent work of Li [Reference Li34]. See Joe and Li [Reference Joe and Li26], Hua and Joe [Reference Hua and Joe23], Zhu and Li [Reference Zhu and Li45], and Kley et al. [Reference Kley, Klüppelberg and Reinert29] for related discussions under the assumption that $(X_{1},\ldots ,X_{d})$ is of multivariate regular variation. Tang and Yuan [Reference Tang and Yuan42] considered a variant of $\textrm{TM}_{k}^{(1)}(t)$ , in which $(X_{1},\ldots ,X_{d})$ is modeled by a randomly weighted form $(\xi_{1}Y_{1},\ldots ,\xi _{d}Y_{d})$ . They obtained asymptotic results under the assumptions that $Y_{1}$ , …, $Y_{d}$ are independent random variables with heavy-tailed distributions and that $\xi_{1}$ , …, $\xi _{d}$ satisfy certain moment conditions. Recently, Chen and Liu [Reference Chen and Liu11] extended the work of Tang and Yuan [Reference Tang and Yuan42] to allow an asymptotic independence structure among $Y_{1}$ , …, $Y_{d}$ . For related investigations from the statistical perspective, we refer the reader to El Methni et al. [Reference El Methni, Gardes and Girard17], Cai et al. [Reference Cai, Einmahl, de Haan and Zhou8], Acharya et al. [Reference Acharya, Pedersen, Philippon and Richardson1], Hou and Wang [Reference Hou and Wang22], and Sun et al. [Reference Sun, Chen and Hu41].

Moreover, $\textrm{TCM}_{k}^{(2)}(t)$ is a multivariate extension of the so-called tail variance (TV) risk measure proposed by Furman and Landsman [Reference Furman and Landsman19], and it quantifies the degree of deviation between an individual risk and the corresponding marginal expected shortfall. Furman and Landsman [Reference Furman and Landsman19] and Ignatieva and Landsman [Reference Ignatieva and Landsman24] derived explicit expressions for $\textrm{TCM}_{k}^{(2)}(t)$ when $(X_{1},\ldots,X_{d})$ follows multivariate elliptical distributions. Other related studies have mainly concentrated on the TV, i.e., the degenerate univariate version of $\textrm{TCM}_{k}^{(2)}(t)$ with $d=1$ , and most of the results obtained have been for the random risk with a distribution of elliptical type; see Kim [Reference Kim27] and Kim and Kim [Reference Kim and Kim28]. We can also find applications of $\textrm{TM}_{k}^{(1)}(t)$ and $\textrm{TCM}_{k}^{(2)}(t)$ in optimal capital allocation problems based on tail mean-variance models; see Landsman [Reference Landsman30], Xu and Mao [Reference Xu and Mao44], Eini and Khaloozadeh [Reference Eini and Khaloozadeh16], and Cai and Wang [Reference Cai and Wang10] for details. Nevertheless, few existing works have focused on $\textrm{TM}_{k}^{(n)}(t)$ or $\textrm{TCM}_{k}^{(n)}(t)$ with higher orders, which also have a wide range of applications in constructing insurance premium principles and other risk measures incorporating higher tail moments (e.g., tail skewness and tail kurtosis); see Ramsay [Reference Ramsay39] and Bawa and Lindenberg [Reference Bawa and Lindenberg6]. Among the few contributions, Kim [Reference Kim27] gave some explicit expressions for the degenerate univariate version of $\textrm{TM}_{k}^{(n)}(t)$ with $d=1$ when the risk has a distribution from the exponential family, and Landsman et al. 
[Reference Landsman, Makov and Shushi31] extended the work of Kim [Reference Kim27] to the elliptical and log-elliptical distribution classes.

In this paper, we study the asymptotic behavior of $\textrm{TM}_{k}^{(n)}(t)$ and $\textrm{TCM}_{k}^{(n)}(t)$ as $t\rightarrow \infty $ under the framework in which $X_{1}$ , …, $X_{d}$ are pairwise asymptotically independent and possess distributions from the Fréchet or Gumbel MDA. Under our models, we will provide a uniform methodology by which asymptotic results on $\textrm{TM}_{k}^{(n)}(t)$ and $\textrm{TCM}_{k}^{(n)}(t)$ can be obtained for any $n\in \textbf{N}^{+}$ . All of our results are in the concise form of some constant times $t^{n}$ . The constants appearing in the formulas for $\textrm{TM}_{k}^{(n)}(t)$ and $\textrm{TCM}_{k}^{(2n)}(t)$ are proved to be positive, and the constant corresponding to $\textrm{TCM}_{k}^{(2n-1)}(t)$ is also nonzero for most choices of the model parameters. Hence, most of our asymptotic results are precise ones, which enable us to effectively estimate $\textrm{TM}_{k}^{(n)}(t)$ and $\textrm{TCM}_{k}^{(n)}(t)$ for large t. Additionally, thanks to the assumption of asymptotic independence among $X_{1}$ , …, $X_{d}$ , the results obtained depend only on information from the marginal distributions of $(X_{1},\ldots ,X_{d})$ , and hence can bring high efficiency to practical calculations. Another interesting finding observed from the derivations of our main results is that, although $X_{1}$ , …, $X_{d}$ are set to be real-valued, the left tails of $X_{1}$ , …, $X_{d}$ do not affect the asymptotic properties of $\textrm{TM}_{k}^{(n)}(t)$ and $\textrm{TCM}_{k}^{(n)}(t)$ under our models, even when they are asymptotically comparable to the corresponding right tails.

The rest of this paper consists of four sections and an appendix. Section 2 introduces necessary preliminaries regarding some classes of distributions and asymptotic independence. Section 3 states the underlying assumptions of this work and presents our main asymptotic results for the TM and TCM. Section 4 gives a numerical study on the relative errors of the asymptotic results obtained when there are two components in the system. Section 5 proves our main results after some preparatory lemmas. The appendix is devoted especially to discussing the degenerate univariate versions of $\textrm{TM}_{k}^{(n)}(t) $ and $\textrm{TCM}_{k}^{(n)}(t)$ with $d=1$ .

2. Preliminaries

In what follows, a distribution $V=1-\overline{V}$ is always assumed to have an infinite upper endpoint, i.e., $\overline{V}(x)>0$ for any $x\in ({-}\infty,\infty )$ . In extreme value theory, V is said to belong to the Fréchet MDA if there is some $\alpha \geq 0$ such that the relation

(2.1) \begin{equation}\lim_{t\rightarrow \infty }\frac{\overline{V}(tx)}{\overline{V}(t)}=x^{-\alpha } \end{equation}

holds for any $x>0$ . In this case, V is also said to be from the class of regular variation, and we express the regularity property in (2.1) as $V\in \mathcal{R}_{-\alpha }$ , so that $\mathcal{R}$ is the union of all $\mathcal{R}_{-\alpha }$ over the range $\alpha\geq 0$ . The class $\mathcal{R}$ is an important class of heavy-tailed distributions, and its main members include the Pareto distribution, Student’s t-distribution, and the log-gamma distribution. See Bingham et al. [Reference Bingham, Goldie and Teugels7] for a monograph on regular variation. By definition, V is said to belong to the Gumbel MDA (with an infinite upper endpoint) if there is some positive auxiliary function h such that the relation

(2.2) \begin{equation}\lim_{t\rightarrow \infty }\frac{\overline{V}(t+h(t)x)}{\overline{V}(t)}=\textrm{e}^{-x} \end{equation}

holds for any $x\in ({-}\infty ,\infty )$ . We denote by $V\in\textrm{GMDA}(h)$ the property stated in (2.2). It is known that the function h is unique up to asymptotic equivalence and satisfies $h(t)=o(t)$ ; see Chapter 1.1 of Resnick [Reference Resnick40] or Chapter 3.3.3 of Embrechts et al. [Reference Embrechts, Klüppelberg and Mikosch18]. The Gumbel MDA contains both light-tailed distributions (e.g., the exponential and normal distributions) and heavy-tailed distributions (e.g., the log-normal distribution). It is easy to check that if $V\in \textrm{GMDA}(h)$ , then V belongs to the class of rapid variation, which is denoted by $\mathcal{R}_{-\infty }$ and is characterized by the following relation:

\begin{equation*}\lim_{t\rightarrow \infty}\frac{\overline{V}(tx)}{\overline{V}(t)}=0,\quad x>1.\end{equation*}
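As a quick sanity check on (2.2) — illustrative only, not part of the paper — recall that the exponential distribution belongs to the Gumbel MDA; one may take the constant auxiliary function $h(t)=1/\lambda$ for rate $\lambda$, in which case the ratio in (2.2) equals $\textrm{e}^{-x}$ exactly for every t, not merely in the limit:

```python
import math

def gmda_ratio(t, x, rate=1.0):
    """Ratio bar V(t + h(t)x) / bar V(t) of (2.2) for the Exponential(rate)
    distribution, with auxiliary function h(t) = 1/rate (constant)."""
    tail = lambda s: math.exp(-rate * s)   # bar V(s) for Exponential(rate)
    h = 1.0 / rate
    return tail(t + h * x) / tail(t)
```

For heavier-tailed members of the Gumbel MDA, such as the log-normal distribution, the ratio only converges as $t\rightarrow\infty$ and a non-constant h is needed.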

Clearly, the concepts of regular and rapid variation can be naturally extended to a general positive function g. Namely, for some $\beta \in[{-}\infty ,\infty ]$ , we write $g\in \mathcal{R}_{\beta }$ if (2.1) holds with $\overline{V}$ and $-\alpha $ replaced by g and $\beta $ , respectively. The well-known Karamata-type results hold for regularly and rapidly varying functions; i.e., if $g\in\mathcal{R}_{\beta }$ with $\beta \in ({-}\infty ,-1)$ , then

(2.3) \begin{equation}\lim_{t\rightarrow \infty }\frac{\int_{t}^{\infty }g(x)\textrm{d}x}{tg(t)}=-\frac{1}{\beta +1}, \end{equation}

while if $\beta =-\infty $ and g is non-increasing, then it holds for any $r\in ({-}\infty ,\infty )$ that

(2.4) \begin{equation}\lim_{t\rightarrow \infty }\frac{\int_{t}^{\infty }x^{r}g(x)\textrm{d}x}{t^{r+1}g(t)}=0. \end{equation}

See, e.g., Appendix A3 of Embrechts et al. [Reference Embrechts, Klüppelberg and Mikosch18] for more details.
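The Karamata relation (2.3) can be verified numerically for a concrete regularly varying function. The following sketch (illustrative; the choice $g(x)=x^{-3}$, the threshold, and the quadrature settings are assumptions of this sketch) approximates the ratio in (2.3), whose limit is $-1/(\beta+1)=1/2$ for $\beta=-3$:

```python
# Numerical check of the Karamata relation (2.3) for g(x) = x**beta
# with beta = -3, so that -1/(beta + 1) = 1/2.
beta = -3.0
g = lambda x: x ** beta

def tail_integral(f, t, upper, steps=200_000):
    """Midpoint-rule approximation of the integral of f over [t, upper];
    crude, but adequate for a sanity check on (2.3)."""
    h = (upper - t) / steps
    return sum(f(t + (i + 0.5) * h) for i in range(steps)) * h

t = 50.0
ratio = tail_integral(g, t, upper=1.0e6) / (t * g(t))  # should be near 0.5
```

The truncation at a finite upper limit is harmless here because $g$ decays like $x^{-3}$; for slower decay the upper limit would have to grow accordingly.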

Given d real-valued random variables $Z_{1}$ , …, $Z_{d}$ without upper bounds, we say they are pairwise asymptotically independent if, for each pair $1\leq i\neq j\leq d$ ,

(2.5) \begin{equation}\lim_{t\rightarrow \infty }\frac{\mathbb{P}(\!\left\vert Z_{i}\right\vert>t,Z_{j}>t)}{\mathbb{P}(Z_{i}>t)+\mathbb{P}(Z_{j}>t)}=0; \end{equation}

see, among many others, Chen and Yuen [Reference Chen and Yuen12], Li [Reference Li33], and Leipus et al. [Reference Leipus, Paukštys and Šiaulys32] for discussions and applications of this dependence structure. Note that the relation (2.5) will play an important role in dealing with joint probabilities related to the left or right tails of the random variables. If the right tails of $Z_{1}$ , …, $Z_{d}$ are asymptotically proportionally equivalent, i.e., $\lim_{t\rightarrow \infty }\mathbb{P}(Z_{i}>t)/\mathbb{P}(Z_{1}>t)=c_{i}$ for each $1\leq i\leq d$ and some $c_{i}>0$ , then the relation (2.5) is equivalent to

(2.6) \begin{equation}\lim_{t\rightarrow \infty }\frac{\mathbb{P}(\!\left\vert Z_{i}\right\vert>t,Z_{j}>t)}{\mathbb{P}(Z_{1}>t)}=0. \end{equation}

If, further, $Z_{1}$ has a regularly varying tail, then (2.6) implies that for any $a>0$ and $b>0$ ,

(2.7) \begin{equation}\lim_{t\rightarrow \infty }\frac{\mathbb{P}(\!\left\vert Z_{i}\right\vert>at,Z_{j}>bt)}{\mathbb{P}(Z_{1}>t)}=0. \end{equation}

To verify (2.7), we only need to note that

\begin{equation*}\limsup_{t\rightarrow \infty }\frac{\mathbb{P}(\!\left\vert Z_{i}\right\vert>at,Z_{j}>bt)}{\mathbb{P}(Z_{1}>t)}\leq \lim_{t\rightarrow \infty }\frac{\mathbb{P}\!\left( \left\vert Z_{i}\right\vert >\min\{a,b\}t,Z_{j}>\min\{a,b\}t\right) }{\mathbb{P}\!\left( Z_{1}>\min \{a,b\}t\right) }\frac{\mathbb{P}(Z_{1}>\min \{a,b\}t)}{\mathbb{P}(Z_{1}>t)}=0.\end{equation*}

Hereafter, unless otherwise stated, all limit relationships hold as $t\rightarrow \infty $ . For two positive functions $g_{1}$ and $g_{2}$ , we write $g_{1}\left( t\right) \lesssim g_{2}\left(t\right) $ or $g_{2}\left( t\right) \gtrsim g_{1}\left( t\right) $ if $\limsup g_{1}\left( t\right) /g_{2}\left( t\right) \leq 1$ ; we write $g_{1}\left( t\right) \sim g_{2}\left( t\right) $ if $\lim g_{1}\left( t\right) /g_{2}\left( t\right) =1$ ; and we write $g_{1}\left( t\right) \asymp g_{2}\left( t\right) $ if $0<\liminf g_{1}\left( t\right) /g_{2}\left( t\right) \leq \limsup g_{1}\left(t\right) /g_{2}\left( t\right) <\infty $ . For a real number a, we write $a^{+}=\max \{a,0\}$ and $a^{-}=-\min \{a,0\}$ . As usual, $\textbf{1}_{\{\cdot \}}$ stands for the indicator function.

3. Main results

In this section, we present our main asymptotic results for the TM and TCM defined by (1.1) and (1.2) with $n\in\textbf{N}^{+}$ . Denote by $F_{1}$ , …, $F_{d}$ the distributions of the individual risks $X_{1} $ , …, $X_{d}$ . We conduct our study under the following two assumptions, respectively.

Assumption 3.1. $F_{1}\in \mathcal{R}_{-\alpha }$ for some $\alpha >n$ , and $\overline{F}_{i}(t)\sim c_{i}\overline{F}_{1}(t)$ and $F_{i}({-}t)=O(\overline{F}_{1}(t))$ for each $1\leq i\leq d$ and some $c_{i}>0$ . Also, $X_{1}$ , …, $X_{d}$ are pairwise asymptotically independent.

Assumption 3.2. $F_{1}\in \textrm{GMDA}(h)$ , and $\overline{F}_{i}(t)\sim c_{i}\overline{F}_{1}(t)$ and $F_{i}({-}t)=O(\overline{F}_{1}(t))$ for each $1\leq i\leq d$ and some $c_{i}>0$ . Also, for each pair $1\leq i\neq j\leq d$ ,

(3.1) \begin{equation}\lim_{t\rightarrow \infty }\frac{\mathbb{P}\!\left( \left\vert X_{i}\right\vert >\epsilon h(t),X_{j}>t\right) }{\overline{F}_{1}(t)}=0,\quad \text{for any}\;\;\epsilon >0, \end{equation}

and

(3.2) \begin{equation}\lim_{t\rightarrow \infty }\frac{\mathbb{P}\!\left(X_{i}>Lh(t),X_{j}>Lh(t)\right) }{\overline{F}_{1}(t)}=0,\quad \text{for some}\;\;L>0. \end{equation}

The conditions regarding the marginal distributions of $(X_{1},\ldots ,X_{d}) $ in Assumptions 3.1 and 3.2 guarantee the existence of the TM and TCM. The dependence structure defined by (3.1) and (3.2) was first proposed in Mitra and Resnick [Reference Mitra and Resnick37] and has been extensively studied and applied in risk theory; see Asimit et al. [Reference Asimit, Furman, Tang and Vernic2], Hashorva and Li [Reference Hashorva and Li21], and Asimit and Li [Reference Asimit and Li3]. Since $h(t)=o(t)$ , the relation (3.1) obviously implies pairwise asymptotic independence among $X_{1}$ , …, $X_{d}$ .

For brevity, in what follows we write

(3.3) \begin{equation}C_{i}=\frac{c_{i}}{\sum_{j=1}^{d}c_{j}}\in \left( 0,1\right) ,\quad1\leq i\leq d, \end{equation}

where $c_{1}$ , …, $c_{d}$ are the positive constants from Assumption 3.1 or 3.2. Now we are ready to state our main results.

Theorem 3.1. Consider the TM defined by (1.1) with $n\in \textbf{N}^{+}$ .

(i) Under Assumption 3.1, it holds for each $1\leq k\leq d$ that

(3.4) \begin{equation}\textrm{TM}_{k}^{(n)}(t)\sim \frac{\alpha }{\alpha -n}C_{k}t^{n}.\end{equation}

(ii) Under Assumption 3.2, it holds for each $1\leq k\leq d$ that

(3.5) \begin{equation}\textrm{TM}_{k}^{(n)}(t)\sim C_{k}t^{n}. \end{equation}

Recall the TCM defined by (1.2). By the binomial expansion theorem, we have

\begin{equation*}\textrm{TCM}_{k}^{(n)}(t)=\sum_{i=0}^{n-1}\left(\begin{array}{c}n \\i\end{array}\right) ({-}1)^{i}\left( \textrm{TM}_{k}^{(1)}(t)\right) ^{i}\textrm{TM}_{k}^{(n-i)}(t)+({-}1)^{n}\left( \textrm{TM}_{k}^{(1)}(t)\right) ^{n},\end{equation*}

where $\left(\begin{array}{c}n \\i\end{array}\right) =n!/\left( i!(n-i)!\right) $ . Then, applying Theorem 3.1 immediately yields the following corollary for the TCM.

Corollary 3.1. Consider the TCM defined by (1.2) with $n\in \textbf{N}^{+}$ .

(i) Under Assumption 3.1, it holds for each $1\leq k\leq d$ that

(3.6) \begin{equation}\textrm{TCM}_{k}^{(n)}(t)=\left( A_{\alpha ,n,k}+o(1)\right) t^{n},\end{equation}

where

(3.7) \begin{equation}A_{\alpha ,n,k}=\sum_{i=0}^{n-1}\left(\begin{array}{c}n \\i\end{array}\right) ({-}1)^{i}\frac{\alpha ^{i+1}}{\left( \alpha -1\right)^{i}\left(\alpha -n+i\right) }C_{k}^{i+1}+({-}1)^{n}\left( \frac{\alpha }{\alpha -1}\right) ^{n}C_{k}^{n}. \end{equation}

(ii) Under Assumption 3.2, it holds for each $1\leq k\leq d$ that

(3.8) \begin{equation}\textrm{TCM}_{k}^{(n)}(t)=\left( A_{n,k}+o(1)\right) t^{n},\end{equation}

where

(3.9) \begin{equation}A_{n,k}=\lim_{\alpha \rightarrow \infty }A_{\alpha ,n,k}=C_{k}\left(1-C_{k}\right) \left( \left( 1-C_{k}\right)^{n-1}+({-}1)^{n}C_{k}^{n-1}\right) . \end{equation}
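The constants in (3.7) and (3.9) are simple to evaluate numerically. The following sketch (illustrative, not from the paper; the parameter values in the usage note are arbitrary) implements both and can be used to check that $A_{\alpha,n,k}\rightarrow A_{n,k}$ as $\alpha\rightarrow\infty$:

```python
from math import comb

def A_frechet(alpha, n, Ck):
    """A_{alpha,n,k} of (3.7); requires alpha > n and 0 < Ck < 1."""
    s = sum(comb(n, i) * (-1) ** i
            * alpha ** (i + 1) / ((alpha - 1) ** i * (alpha - n + i))
            * Ck ** (i + 1)
            for i in range(n))
    return s + (-1) ** n * (alpha / (alpha - 1)) ** n * Ck ** n

def A_gumbel(n, Ck):
    """A_{n,k} of (3.9), the alpha -> infinity limit of A_{alpha,n,k}."""
    return Ck * (1 - Ck) * ((1 - Ck) ** (n - 1) + (-1) ** n * Ck ** (n - 1))
```

For instance, with $n=2$ and $C_k=0.4$ one gets $A_{2,k}=0.4\cdot 0.6\cdot(0.6+0.4)=0.24$, and $A_{\alpha,2,k}=\frac{\alpha}{\alpha-2}C_k-\frac{\alpha^2}{(\alpha-1)^2}C_k^2$ approaches this value as $\alpha$ grows.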

Remark 3.1. The relations (3.4) and (3.5) imply that $\textrm{TM}_{k}^{(n)}(t)\rightarrow \infty $ for any $n\in \textbf{N}^{+}$ under our models. Hence, when t is large enough, $\textrm{TM}_{k}^{(n)}(t)$ and $\textrm{TCM}_{k}^{(2n)}(t)$ define two standard measures with non-negativity. Moreover, it is easy to see from (3.9) that $A_{2n,k}>0$ , since $0<C_{k}<1$ . Actually, we can also prove that

(3.10) \begin{equation}A_{\alpha ,2n,k}>0 \end{equation}

under the conditions of Corollary 3.1(i); see Section 5. Therefore, (3.6) and (3.8) provide us with precise asymptotic estimates of $\textrm{TCM}_{k}^{(2n)}(t)$ . On the other hand, both $A_{\alpha ,2n-1,k}$ and $A_{2n-1,k}$ may be less than or equal to 0 for certain choices of $\alpha $ and $C_{k}$ . When $A_{\alpha ,2n-1,k}<0$ or $A_{2n-1,k}<0$ , (3.6) or (3.8) implies that $\textrm{TCM}_{k}^{(2n-1)}(t)$ tends to $-\infty $ and hence is not a non-negative measure for large t. In case $A_{\alpha ,2n-1,k}=0$ or $A_{2n-1,k}=0$ , (3.6) or (3.8) fails to give a precise asymptotic result for $\textrm{TCM}_{k}^{(2n-1)}(t)$ . Seeking precise estimates of $\textrm{TCM}_{k}^{(2n-1)}(t)$ in such a case requires more nuanced analysis, and we will not focus on it in the present paper.

Further consider the special case where, as mentioned before, the threshold t is chosen to be $\textrm{VaR}_{q}(S_{d})$ with $q\in (0,1)$ . By Theorem 3.1 of Chen and Yuen [Reference Chen and Yuen12] and Theorem 3.1 of Hashorva and Li [Reference Hashorva and Li21], we have

(3.11) \begin{equation}\mathbb{P}\!\left( S_{d}>t\right) \sim \mathbb{P}\!\left(\sum_{i=1}^{d}X_{i}^{+}>t\right) \sim\sum_{i=1}^{d}\overline{F}_{i}(t)\sim \left(\sum_{i=1}^{d}c_{i}\right) \overline{F}_{1}(t) \end{equation}

under Assumption 3.1 or 3.2. Then, using Lemma 2.1 of Asimit et al. [Reference Asimit, Furman, Tang and Vernic2] gives that

(3.12) \begin{equation}\textrm{VaR}_{q}(S_{d})\sim \left( \sum_{i=1}^{d}c_{i}\right) ^{1/\alpha }\textrm{VaR}_{q}(X_{1}),\quad q\uparrow 1, \end{equation}

under Assumption 3.1, while using Lemma 2.4 and the analysis under Corollary 3.2 of Asimit et al. [Reference Asimit, Furman, Tang and Vernic2] gives that

(3.13) \begin{equation}\textrm{VaR}_{q}(S_{d})\sim \textrm{VaR}_{1-(1-q)/\sum_{i=1}^{d}c_{i}}(X_{1}),\quad q\uparrow 1, \end{equation}

under Assumption 3.2. Hence, plugging (3.12) into (3.4) and (3.6) and plugging (3.13) into (3.5) and (3.8), we obtain the following asymptotic results for the corresponding TM and TCM as $q\uparrow 1$ .

Corollary 3.2. Consider the TM and TCM defined by (1.1) and (1.2) with $n\in \textbf{N}^{+}$ and $t=\textrm{VaR}_{q}(S_{d})$ .

(i) Under Assumption 3.1, we have, for each $1\leq k\leq d$ ,

\begin{equation*}\textrm{TM}_{k}^{(n)}\left( \textrm{VaR}_{q}(S_{d})\right) \sim\frac{\alpha }{\alpha -n}C_{k}\left( \sum_{i=1}^{d}c_{i}\right)^{n/\alpha }\left( \textrm{VaR}_{q}(X_{1})\right) ^{n},\quad q\uparrow 1,\end{equation*}

and

\begin{equation*}\textrm{TCM}_{k}^{(n)}\left( \textrm{VaR}_{q}(S_{d})\right) =\left(\left( \sum_{i=1}^{d}c_{i}\right) ^{n/\alpha }A_{\alpha,n,k}+o(1)\right) \left( \textrm{VaR}_{q}(X_{1})\right) ^{n},\quad q\uparrow 1,\end{equation*}

where $A_{\alpha ,n,k}$ is given by (3.7).

(ii) Under Assumption 3.2, we have, for each $1\leq k\leq d$ ,

\begin{equation*}\textrm{TM}_{k}^{(n)}\left( \textrm{VaR}_{q}(S_{d})\right) \sim C_{k}\left(\textrm{VaR}_{1-(1-q)/\sum\nolimits_{i=1}^{d}c_{i}}(X_{1})\right)^{n},\quad q\uparrow 1,\end{equation*}

and

\begin{equation*}\textrm{TCM}_{k}^{(n)}\left( \textrm{VaR}_{q}(S_{d})\right) =\left(A_{n,k}+o(1)\right) \left( \textrm{VaR}_{1-(1-q)/\sum\nolimits_{i=1}^{d}c_{i}}(X_{1})\right) ^{n},\quad q\uparrow 1,\end{equation*}

where $A_{n,k}$ is given by (3.9).
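To make (3.12) concrete, consider standard Pareto marginals $F_1(x)=1-x^{-\alpha}$, $x\geq 1$ — a choice made for this sketch only, not imposed by the paper — for which $\textrm{VaR}_q(X_1)=(1-q)^{-1/\alpha}$:

```python
def var_pareto(q, alpha):
    """VaR_q of the standard Pareto law F(x) = 1 - x**(-alpha), x >= 1."""
    return (1.0 - q) ** (-1.0 / alpha)

def var_sum_frechet(q, alpha, c):
    """Asymptotic VaR_q(S_d) from (3.12): (sum c_i)^(1/alpha) * VaR_q(X_1),
    valid as q tends to 1 under Assumption 3.1."""
    return sum(c) ** (1.0 / alpha) * var_pareto(q, alpha)
```

For example, with $\alpha=2$, $d=2$, and $c_1=c_2=1$, the system-level quantile is asymptotically $\sqrt{2}$ times the individual one, reflecting the tail additivity in (3.11).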

Remark 3.2. Since $X_{1}$ , …, $X_{d}$ are pairwise asymptotically independent, all of the asymptotic results obtained in Theorem 3.1, Corollary 3.1, and Corollary 3.2 involve only information from the marginal distributions of $(X_{1},\ldots ,X_{d})$ . This feature enables us to overcome the difficulties caused by dependence structures when estimating the values of the TM and TCM.

Remark 3.3. Furman and Landsman [Reference Furman and Landsman19] proposed the tail variance premium (TVP) and tail standard deviation premium (TSDP) principles for individual insurance risks. In terms of our notation, they are formulated as

\begin{equation*}\textrm{TVP}_{k}(t)=\textrm{TM}_{k}^{(1)}(t)+w\textrm{TCM}_{k}^{(2)}(t)\end{equation*}

and

\begin{equation*}\textrm{TSDP}_{k}(t)=\textrm{TM}_{k}^{(1)}(t)+w\sqrt{\textrm{TCM}_{k}^{(2)}(t)},\end{equation*}

where w is a non-negative constant; see Definition 3 of Furman and Landsman [Reference Furman and Landsman19]. Applying Theorem 3.1 and Corollary 3.1 immediately yields that

\begin{equation*}\textrm{TVP}_{k}(t)\sim w\textrm{TCM}_{k}^{(2)}(t)\sim w\!\left(\frac{\alpha}{\alpha -2}C_{k}-\frac{\alpha ^{2}}{\left( \alpha -1\right) ^{2}}C_{k}^{2}\right) t^{2}\end{equation*}

and

\begin{equation*}\textrm{TSDP}_{k}(t)\sim \left( \frac{\alpha }{\alpha -1}C_{k}+w\sqrt{\frac{\alpha }{\alpha -2}C_{k}-\frac{\alpha ^{2}}{\left( \alpha -1\right) ^{2}}C_{k}^{2}}\right) t\end{equation*}

under Assumption 3.1, and that

\begin{equation*}\textrm{TVP}_{k}(t)\sim w\textrm{TCM}_{k}^{(2)}(t)\sim wC_{k}\left(1-C_{k}\right) t^{2}\end{equation*}

and

\begin{equation*}\textrm{TSDP}_{k}(t)\sim \left( C_{k}+w\sqrt{C_{k}\left( 1-C_{k}\right) }\right) t\end{equation*}

under Assumption 3.2.
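The asymptotic premium formulas of this remark can be sketched directly in code (illustrative only; the parameter values below are arbitrary, and $\alpha>2$ is needed for the second-order quantities):

```python
from math import sqrt

def premiums_frechet(alpha, Ck, w, t):
    """Asymptotic TVP_k(t) and TSDP_k(t) under Assumption 3.1, following
    the displayed formulas of Remark 3.3; requires alpha > 2."""
    tcm2 = alpha / (alpha - 2.0) * Ck - (alpha / (alpha - 1.0)) ** 2 * Ck ** 2
    tvp = w * tcm2 * t ** 2
    tsdp = (alpha / (alpha - 1.0) * Ck + w * sqrt(tcm2)) * t
    return tvp, tsdp

def premiums_gumbel(Ck, w, t):
    """Asymptotic TVP_k(t) and TSDP_k(t) under Assumption 3.2."""
    tvp = w * Ck * (1.0 - Ck) * t ** 2
    tsdp = (Ck + w * sqrt(Ck * (1.0 - Ck))) * t
    return tvp, tsdp
```

Note the different growth rates: the TVP grows like $t^{2}$, so for large thresholds it is dominated by its variance component, while the TSDP stays of order t.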

Remark 3.4. Our results can also be applied to find asymptotic solutions for some optimal capital allocation problems based on tail moment models. Denote by $p_{k}$ the capital allocated to $X_{k}$ for each $1\leq k\leq d$ , and by $p=\sum_{i=1}^{d}p_{i}$ the total capital, which is a fixed number. Dhaene et al. [Reference Dhaene, Tsanakas, Valdez and Vanduffel15] proposed a capital allocation criterion suggesting that the individual capital $p_{k}$ be set as close as possible to $X_{k}$ , in the sense of minimizing some distance measure. Xu and Mao [Reference Xu and Mao44] extended the idea of Dhaene et al. [Reference Dhaene, Tsanakas, Valdez and Vanduffel15] to a tail mean-variance model. Here we choose a quadratic distance measure and consider the following reduced version of the optimization problem studied in Xu and Mao [Reference Xu and Mao44]:

(3.14) \begin{equation}\min_{p_{1}(t),\ldots ,p_{d}(t)}\sum_{i=1}^{d}\mathbb{E}\!\left(\left. \left( X_{i}-p_{i}(t)\right) ^{2}\right\vert S_{d}>t\right),\quad \textrm{s.t.}\ \sum_{i=1}^{d}p_{i}(t)=p(t), \end{equation}

where $p_{1}(t)$ , …, $p_{d}(t)$ and p(t) are the individual capitals and total capital corresponding to the threshold t. By Theorem 2.2 of Xu and Mao [Reference Xu and Mao44], the optimal solution of (3.14) is

(3.15) \begin{equation}p_{k}^{\ast }(t)=\frac{p(t)-\sum_{i=1}^{d}\textrm{TM}_{i}^{(1)}(t)}{d}+\textrm{TM}_{k}^{(1)}(t),\quad 1\leq k\leq d. \end{equation}

Note that (3.15) holds for any $X_{1}$ , …, $X_{d}$ with finite second-order moments that make the problem (3.14) meaningful. Now, given the conditions of Theorem 3.1, the asymptotic estimate of $p_{k}^{\ast }(t)$ depends on the asymptotic behavior of p(t). Assume that $p(t)$ is set to be asymptotically proportionally equivalent to the conditional tail expectation of $S_{d}$ , i.e., for some $H\geq 1$ ,

\begin{equation*}p(t)\sim H\mathbb{E}\!\left( S_{d}\left\vert S_{d}>t\right. \right)=H\sum_{i=1}^{d}\textrm{TM}_{i}^{(1)}(t).\end{equation*}

Then, applying Theorem 3.1 to (3.15) gives that

\begin{equation*}p_{k}^{\ast }(t)\sim \frac{\alpha }{\alpha -1}\left( \frac{H-1}{d}+C_{k}\right) t\end{equation*}

under Assumption 3.1, and that

\begin{equation*}p_{k}^{\ast }(t)\sim \left( \frac{H-1}{d}+C_{k}\right) t\end{equation*}

under Assumption 3.2. When $H=1$ , so that $p(t)\sim\mathbb{E}\!\left(S_{d}\left\vert S_{d}>t\right. \right) $ , it holds under both Assumptions 3.1 and 3.2 that

\begin{equation*}p_{k}^{\ast }(t)\sim \textrm{TM}_{k}^{(1)}(t),\end{equation*}

which coincides with the Euler rule. Actually, in the case of $H=1$ , the asymptotic optimal solution $\left( p_{1}^{\ast }(t),\ldots,p_{d}^{\ast }(t)\right) $ minimizes each term in the summation of the optimization objective in (3.14). Moreover, if p(t) is set to be a quantity such that $t=o(p(t))$ , say, a linear combination of some higher-order TM and TCM, then it holds under both Assumptions 3.1 and 3.2 that

\begin{equation*}p_{1}^{\ast }(t)\sim \cdots \sim p_{d}^{\ast }(t)\sim\frac{1}{d}p(t).\end{equation*}

In this case, the asymptotic optimal allocation rule is just to allocate the total capital equally to each individual risk.
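The allocation rule (3.15) itself is elementary to implement; the following sketch (with made-up numbers for illustration) shows how it reduces to the Euler rule when the total capital equals the conditional tail expectation:

```python
def optimal_allocation(p_total, tm1):
    """Optimal solution (3.15) of problem (3.14): each risk receives its
    marginal expected shortfall TM_k^(1)(t) plus an equal share of the
    residual p(t) - sum_i TM_i^(1)(t)."""
    d = len(tm1)
    residual = (p_total - sum(tm1)) / d
    return [residual + x for x in tm1]

# Illustrative marginal expected shortfalls for d = 2 risks:
tm1 = [30.0, 50.0]
euler = optimal_allocation(sum(tm1), tm1)      # H = 1: Euler allocation
extra = optimal_allocation(100.0, tm1)         # surplus split equally
```

Here `euler` returns the marginal expected shortfalls unchanged, while any surplus (or shortfall) of total capital is shared equally across the d components, matching the two limiting regimes discussed above.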

4. Numerical study on relative errors

In this section, we study the relative errors between our main asymptotic results and the corresponding exact values when there are two individual risks (i.e., $d=2$ ) in the system. For this purpose, denote by $\widetilde{\textrm{TM}}_{k}^{(n)}(t)$ and $\widetilde{\textrm{TCM}}_{k}^{(n)}(t)$ with $k=1,2$ the asymptotic results obtained in Theorem 3.1 and Corollary 3.1 for $\textrm{TM}_{k}^{(n)}(t)$ and $\textrm{TCM}_{k}^{(n)}(t)$ . Denote by $R_{\textrm{TM},k}^{(n)}(t)$ and $R_{\textrm{TCM},k}^{(n)}(t)$ the corresponding relative errors, i.e.,

(4.1) \begin{equation}R_{\textrm{TM},k}^{(n)}(t)=\frac{\left\vert \textrm{TM}_{k}^{(n)}(t)-\widetilde{\textrm{TM}}_{k}^{(n)}(t)\right\vert}{\textrm{TM}_{k}^{(n)}(t)} \end{equation}

and

(4.2) \begin{equation}R_{\textrm{TCM},k}^{(n)}(t)=\frac{\left\vert \textrm{TCM}_{k}^{(n)}(t)-\widetilde{\textrm{TCM}}_{k}^{(n)}(t)\right\vert }{\textrm{TCM}_{k}^{(n)}(t)}. \end{equation}

For simplicity, let the random risks $X_{1}$ and $X_{2}$ be non-negative in this section. We assume that the joint distribution of $(X_{1},X_{2})$ belongs to the Farlie–Gumbel–Morgenstern (FGM) family with a parameter $\theta \in [{-}1,1]$ , i.e.,

(4.3) \begin{equation}\mathbb{P}\!\left( X_{1}\leq x,X_{2}\leq y\right)=F_{1}(x)F_{2}(y)\left( 1+\theta\overline{F}_{1}(x)\overline{F}_{2}(y)\right) . \end{equation}

It follows from (4.3) that

(4.4) \begin{equation}\mathbb{P}\!\left( X_{1}>x,X_{2}>y\right) =\overline{F}_{1}(x)\overline{F}_{2}(y)\left( 1+\theta F_{1}(x)F_{2}(y)\right) , \end{equation}

which implies asymptotic independence between $X_{1}$ and $X_{2}$ . Note that $X_{1}$ and $X_{2}$ are positively dependent if $\theta \geq 0$ , and are negatively dependent if $\theta \leq 0$ ; see Definition 5.2.1 of Nelsen [Reference Nelsen38].
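As a quick sanity check, (4.4) follows from (4.3) by inclusion–exclusion, $\mathbb{P}\!\left( X_{1}>x,X_{2}>y\right) =1-F_{1}(x)-F_{2}(y)+\mathbb{P}\!\left( X_{1}\leq x,X_{2}\leq y\right)$. A minimal numerical verification of this identity; the exponential marginals and the value $\theta =0.5$ are illustrative assumptions made only for this sketch:

```python
import math

# Illustrative marginals (an assumption for this check only): Exp(1) and Exp(2).
def F1(x): return 1.0 - math.exp(-x)
def F2(y): return 1.0 - math.exp(-2.0 * y)

theta = 0.5  # FGM dependence parameter in [-1, 1]

def fgm_cdf(x, y):
    # Joint distribution function (4.3).
    return F1(x) * F2(y) * (1.0 + theta * (1.0 - F1(x)) * (1.0 - F2(y)))

def fgm_survival(x, y):
    # Joint survival function (4.4).
    return (1.0 - F1(x)) * (1.0 - F2(y)) * (1.0 + theta * F1(x) * F2(y))

# Inclusion-exclusion: P(X1 > x, X2 > y) = 1 - F1(x) - F2(y) + P(X1 <= x, X2 <= y).
max_gap = max(
    abs(fgm_survival(x, y) - (1.0 - F1(x) - F2(y) + fgm_cdf(x, y)))
    for x in (0.1, 0.5, 1.0, 2.0)
    for y in (0.1, 0.5, 1.0, 2.0)
)
```

Since the identity is algebraic, the gap is zero up to floating-point error for any choice of marginals.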

In what follows, we give some numerical analyses in two specific scenarios to illustrate the effects of different model parameters on the convergence rates of $R_{\textrm{TM},1}^{(n)}(t)$ and $R_{\textrm{TCM},1}^{(2)}(t)$ .

Scenario 4.1. Let $(X_{1},X_{2})$ have the joint distribution given by (4.3) with the marginal distributions

\begin{equation*}F_{i}(x)=1-\lambda _{i}^{\alpha }\left( x+\lambda _{i}\right)^{-\alpha },\quad x\geq 0,\ \alpha >0,\ i=1,2,\end{equation*}

where $\lambda _{1}$ and $\lambda _{2}$ are set to be 100 and 120, respectively.

In this scenario, it is easy to check that Assumption 3.1 holds with $c_{1}=1$ and $c_{2}=\left( \lambda _{2}/\lambda _{1}\right) ^{\alpha}$ . For choices of $\alpha $ and n such that $\alpha >n$ , applying Theorem 3.1(i) gives that

(4.5) \begin{equation}\widetilde{\textrm{TM}}_{1}^{(n)}(t)=\frac{\alpha }{\left( \alpha-n\right) \left( 1+\left( \lambda _{2}/\lambda _{1}\right) ^{\alpha}\right) }t^{n}. \end{equation}

On the other hand, denote by $f_{i}$ the density of $X_{i}$ for $i=1,2$ , i.e.,

\begin{equation*}f_{i}(x)=\alpha \lambda _{i}^{\alpha }\left( x+\lambda _{i}\right)^{-\alpha -1},\quad x\geq 0.\end{equation*}

By (4.3), the joint density of $(X_{1},X_{2})$ is

\begin{equation*}f_{X_{1},X_{2}}(x,y)=f_{1}(x)f_{2}(y)\left( 1+\theta \!\left(1-2F_{1}(x)\right) \left( 1-2F_{2}(y)\right) \right) ,\quad x,y\geq0.\end{equation*}

Then, the joint density of $(X_{1},X_{1}+X_{2})$ can be calculated and has the form

\begin{equation*}f_{X_{1},X_{1}+X_{2}}(x,y)=f_{1}(x)f_{2}(y-x)\left( 1+\theta \!\left(1-2F_{1}(x)\right) \left( 1-2F_{2}(y-x)\right) \right) ,\quad 0\leq x\leq y.\end{equation*}

Further calculations yield that

\begin{align*}\mathbb{P}\!\left( X_{1}\in \textrm{d}x,X_{1}+X_{2}>t\right)& = \int_{t}^{\infty }f_{X_{1},X_{1}+X_{2}}(x,y)\textrm{d}y\textrm{d}x \nonumber\\& = f_{1}(x)\overline{F}_{2}(t-x)\left( 1-\theta \!\left(1-2F_{1}(x)\right) F_{2}(t-x)\right) \textrm{d}x,\end{align*}

and

\begin{align*}\mathbb{P}\!\left( X_{1}+X_{2}>t\right) & = \int_{0}^{\infty}\mathbb{P}\!\left(X_{1}\in \textrm{d}x,X_{1}+X_{2}>t\right) \\& = \int_{0}^{t}f_{1}(x)\overline{F}_{2}(t-x)\left( 1-\theta \!\left(1-2F_{1}(x)\right) F_{2}(t-x)\right)\textrm{d}x+\overline{F}_{1}(t).\end{align*}

Thus, we can obtain the exact value of $\textrm{TM}_{1}^{(n)}(t)$ using

(4.6) \begin{align}\textrm{TM}_{1}^{(n)}(t) & = \int_{0}^{\infty }x^{n}\mathbb{P}\!\left(\left.X_{1}\in \textrm{d}x\right\vert X_{1}+X_{2}>t\right) \notag \\& = \frac{\int_{0}^{t}x^{n}f_{1}(x)\overline{F}_{2}(t-x)\left(1-\theta \!\left( 1-2F_{1}(x)\right) F_{2}(t-x)\right)\textrm{d}x+\int_{t}^{\infty}x^{n}f_{1}(x)\textrm{d}x}{\int_{0}^{t}f_{1}(x)\overline{F}_{2}(t-x)\left(1-\theta \!\left( 1-2F_{1}(x)\right) F_{2}(t-x)\right) \textrm{d}x+\overline{F}_{1}(t)}. \end{align}

Then, plugging (4.5) and (4.6) into (4.1) gives $R_{\textrm{TM},1}^{(n)}(t)$ . We calculate the values of $R_{\textrm{TM},1}^{(n)}(t)$ from $t=1000$ to $t=3000$ in steps of 50 for different choices of $\alpha $ , $\theta$ , and $n$ , respectively, with the other parameters fixed. The corresponding numerical results are plotted in Figure 1. It is worth noting that the parameters $\alpha $ , $\theta$ , and $n$ describe the tail behavior of the risks, the degree of the dependence, and the order of the TM, respectively. Thus, Figure 1(i) indicates that a heavier tail of $X_{1}$ , i.e., a smaller value of $\alpha $ , implies a faster convergence rate for $R_{\textrm{TM},1}^{(1)}(t)$ . Additionally, Figure 1(ii) shows that the convergence rate of $R_{\textrm{TM},1}^{(1)}(t)$ increases as the value of $\theta $ increases. In other words, positive dependence between the risks may help to speed up the convergence of $R_{\textrm{TM},1}^{(1)}(t)$ , while negative dependence has the opposite effect. On the other hand, Figure 1(iii) does not reveal a clear pattern for $R_{\textrm{TM},1}^{(n)}(t)$ with respect to the order parameter $n$ .
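The computation described above can be sketched as follows. The parameter values ($\alpha =3$, $\theta =0.5$, $n=1$, $t=2000$), the Simpson grid, and the tail cutoff are illustrative assumptions and are not claimed to reproduce the exact settings behind Figure 1; a stdlib-only Simpson rule replaces a library integrator:

```python
import math

# Illustrative choices (not the exact settings behind Figure 1); alpha > n.
alpha, theta, n, t = 3.0, 0.5, 1, 2000.0
lam1, lam2 = 100.0, 120.0

def Fbar(x, lam): return (lam / (x + lam)) ** alpha
def F(x, lam): return 1.0 - Fbar(x, lam)
def f(x, lam): return alpha * lam ** alpha * (x + lam) ** (-alpha - 1.0)

def simpson(g, a, b, m=4000):
    # Composite Simpson rule with an even number m of subintervals.
    h = (b - a) / m
    return (g(a) + g(b)
            + 4.0 * sum(g(a + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
            + 2.0 * sum(g(a + 2 * i * h) for i in range(1, m // 2))) * h / 3.0

def inner(x, p):
    # Integrand of (4.6) on [0, t].
    return x ** p * f(x, lam1) * Fbar(t - x, lam2) * \
        (1.0 - theta * (1.0 - 2.0 * F(x, lam1)) * F(t - x, lam2))

def tail(u, p):
    # Tail piece of (4.6): substitute x = t/u, mapping (t, infinity) onto (0, 1];
    # the lower cutoff 1e-6 truncates x at t/1e-6, a negligible error here.
    x = t / u
    return x ** p * f(x, lam1) * t / u ** 2

num = simpson(lambda x: inner(x, n), 0.0, t) + simpson(lambda u: tail(u, n), 1e-6, 1.0)
den = simpson(lambda x: inner(x, 0), 0.0, t) + Fbar(t, lam1)
tm_exact = num / den                                                        # (4.6)
tm_asym = alpha / ((alpha - n) * (1.0 + (lam2 / lam1) ** alpha)) * t ** n   # (4.5)
rel_err = abs(tm_exact - tm_asym) / tm_exact                                # (4.1)
```

Sweeping `t` over the grid $1000, 1050, \ldots, 3000$ with this routine reproduces the kind of relative-error curves plotted in Figure 1.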

Figure 1. The graph of $R_{\textrm{TM},1}^{(n)}(t)$ in Scenario 4.1 with different parameters.

For $R_{\textrm{TCM},1}^{(n)}(t)$ , we consider here only $R_{\textrm{TCM},1}^{(2)}(t)$ , which corresponds to the interesting TV risk measure. By Corollary 3.1(i), it holds for $\alpha >2$ that

(4.7) \begin{equation}\widetilde{\textrm{TCM}}_{1}^{(2)}(t)=\left( \frac{\alpha }{\left(\alpha -2\right) \left( 1+\left( \lambda _{2}/\lambda _{1}\right)^{\alpha }\right) }-\frac{\alpha ^{2}}{\left( \alpha -1\right)^{2}\left( 1+\left( \lambda _{2}/\lambda _{1}\right) ^{\alpha}\right) ^{2}}\right) t^{2}. \end{equation}

The exact value of $\textrm{TCM}_{1}^{(2)}(t)$ can be calculated by using (4.6) and the equality

(4.8) \begin{equation}\textrm{TCM}_{1}^{(2)}(t)=\textrm{TM}_{1}^{(2)}(t)-\left( \textrm{TM}_{1}^{(1)}(t)\right) ^{2}. \end{equation}

Then, $R_{\textrm{TCM},1}^{(2)}(t)$ can be obtained by plugging (4.7) and (4.8) into (4.2). We calculate the values of $R_{\textrm{TCM},1}^{(2)}(t)$ from $t=1000$ to $t=3000$ in steps of 50 for different choices of $\alpha $ and $\theta $ , respectively, and present the corresponding numerical results in Figure 2. In contrast to the phenomenon shown in Figure 1(i) for $R_{\textrm{TM},1}^{(1)}(t)$ , Figure 2(i) indicates that the lighter the tail of $X_{1}$ is, the faster $R_{\textrm{TCM},1}^{(2)}(t)$ converges to 0. On the other hand, Figure 2(ii) does not allow a clear judgment on how $R_{\textrm{TCM},1}^{(2)}(t)$ varies with the dependence parameter $\theta $ .
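This route to $R_{\textrm{TCM},1}^{(2)}(t)$ can be sketched in the same way: compute $\textrm{TM}_{1}^{(1)}(t)$ and $\textrm{TM}_{1}^{(2)}(t)$ by numerical integration of (4.6), then apply (4.8), (4.7), and (4.2). All parameter values and grid sizes below are illustrative assumptions:

```python
import math

# Illustrative choices (alpha > 2 is required by (4.7)).
alpha, theta, t = 3.0, 0.5, 2000.0
lam1, lam2 = 100.0, 120.0

def Fbar(x, lam): return (lam / (x + lam)) ** alpha
def F(x, lam): return 1.0 - Fbar(x, lam)
def f(x, lam): return alpha * lam ** alpha * (x + lam) ** (-alpha - 1.0)

def simpson(g, a, b, m=4000):
    # Composite Simpson rule with an even number m of subintervals.
    h = (b - a) / m
    return (g(a) + g(b)
            + 4.0 * sum(g(a + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
            + 2.0 * sum(g(a + 2 * i * h) for i in range(1, m // 2))) * h / 3.0

def tm_exact(p):
    # Exact TM_1^(p)(t) from (4.6); the tail integral uses the substitution x = t/u.
    def inner(x, q):
        return x ** q * f(x, lam1) * Fbar(t - x, lam2) * \
            (1.0 - theta * (1.0 - 2.0 * F(x, lam1)) * F(t - x, lam2))
    def tail(u):
        x = t / u
        return x ** p * f(x, lam1) * t / u ** 2
    num = simpson(lambda x: inner(x, p), 0.0, t) + simpson(tail, 1e-6, 1.0)
    den = simpson(lambda x: inner(x, 0), 0.0, t) + Fbar(t, lam1)
    return num / den

r = (lam2 / lam1) ** alpha
tcm_exact = tm_exact(2) - tm_exact(1) ** 2                                   # (4.8)
tcm_asym = (alpha / ((alpha - 2.0) * (1.0 + r))
            - alpha ** 2 / ((alpha - 1.0) ** 2 * (1.0 + r) ** 2)) * t ** 2   # (4.7)
rel_err = abs(tcm_exact - tcm_asym) / tcm_exact                              # (4.2)
```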

Figure 2. The graph of $R_{\textrm{TCM},1}^{(n)}(t)$ in Scenario 4.1 with different parameters.

Scenario 4.2. Let $(X_{1},X_{2})$ have the joint distribution given by (4.3) with the marginal distributions

\begin{equation*}F_{1}(x)=F_{2}(x)=1-\textrm{e}^{-\left( \log \left( x+1\right)\right) ^{\gamma }},\quad x\geq 0,\ \gamma >1.\end{equation*}

In this scenario, we have $F_{1}\in \textrm{GMDA}(h)$ with

\begin{equation*}h(t)\sim \frac{t}{\gamma \left( \log t\right) ^{\gamma -1}}.\end{equation*}

Since $h(t)\rightarrow \infty $ , (3.1) follows from (4.4). It is not difficult to verify that

\begin{equation*}\overline{F}_{1}^{2}\left( h\!\left( t\right) \right) =o\!\left( \overline{F}_{1}\left( t\right) \right),\end{equation*}

which, combined with (4.4), implies (3.2). Thus, Assumption 3.2 holds with $c_{1}=c_{2}=1$ . Applying Theorem 3.1(ii) gives that

(4.9) \begin{equation}\widetilde{\textrm{TM}}_{1}^{(n)}(t)=\frac{1}{2}t^{n}. \end{equation}

The exact value of $\textrm{TM}_{1}^{(n)}(t)$ can be obtained using (4.6) with

\begin{equation*}f_{1}(x)=\frac{\gamma \left( \log \left( x+1\right) \right) ^{\gamma -1}\textrm{e}^{-\left( \log \left( x+1\right) \right) ^{\gamma}}}{x+1},\quad x\geq 0.\end{equation*}

Then, plugging (4.9) and (4.6) into (4.1) gives $R_{\textrm{TM},1}^{(n)}(t)$ . Additionally, by Corollary 3.1(ii), we have

\begin{equation*}\widetilde{\textrm{TCM}}_{1}^{(2)}(t)=\frac{1}{4}t^{2},\end{equation*}

which, together with (4.8) and (4.2), gives $R_{\textrm{TCM},1}^{(2)}(t)$ . As in Scenario 4.1, we calculate the values of $R_{\textrm{TM},1}^{(n)}(t)$ and $R_{\textrm{TCM},1}^{(2)}(t)$ from $t=1000$ to $t=3000$ in steps of 50 for different choices of the model parameters; we present the corresponding numerical results in Figures 3 and 4. In this scenario, $X_{1}$ and $X_{2}$ have an identical rapidly varying tail, which is lighter than any regularly varying tail, such as those considered in Scenario 4.1. The numerical results reflect different relationships between the model parameters and the convergence rates of $R_{\textrm{TM},1}^{(n)}(t)$ and $R_{\textrm{TCM},1}^{(2)}(t)$ . Figures 3(i, ii) and 4(i, ii) indicate that a lighter tail of $X_{1}$ or a larger value of $\theta $ may lead to faster convergence rates for both $R_{\textrm{TM},1}^{(1)}(t)$ and $R_{\textrm{TCM},1}^{(2)}(t)$ . Moreover, Figure 3(iii) shows that a smaller value of $n$ tends to give a faster convergence rate for $R_{\textrm{TM},1}^{(n)}(t)$ .
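The claim $F_{1}\in \textrm{GMDA}(h)$ with the stated auxiliary function can be probed numerically through the defining property $\overline{F}_{1}\!\left( t+xh(t)\right) /\overline{F}_{1}(t)\rightarrow \textrm{e}^{-x}$. In the sketch below, the choice $\gamma =1.5$ and the evaluation point $t=10^{9}$ are illustrative assumptions; since the convergence is logarithmically slow, only loose agreement should be expected at finite $t$:

```python
import math

gamma = 1.5  # illustrative shape parameter (any gamma > 1 fits Scenario 4.2)

def Fbar(x):
    # Tail of the marginal distribution in Scenario 4.2.
    return math.exp(-math.log(x + 1.0) ** gamma)

def h(t):
    # Asymptotic form of the auxiliary function: h(t) ~ t / (gamma (log t)^(gamma - 1)).
    return t / (gamma * math.log(t) ** (gamma - 1.0))

# Defining property of the Gumbel MDA: Fbar(t + x h(t)) / Fbar(t) -> exp(-x).
t = 1.0e9
ratios = {x: Fbar(t + x * h(t)) / Fbar(t) for x in (0.5, 1.0, 2.0)}
```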

Figure 3. The graph of $R_{\textrm{TM},1}^{(n)}(t)$ in Scenario 4.2 with different parameters.

Figure 4. The graph of $R_{\textrm{TCM},1}^{(n)}(t)$ in Scenario 4.2 with different parameters.

It should be clarified that all the observations from the numerical study above are specific to the model setups and the range of $t$ considered in Scenarios 4.1 and 4.2. Rigorous investigations of the convergence properties of $R_{\textrm{TM},k}^{(n)}(t)$ and $R_{\textrm{TCM},k}^{(n)}(t)$ would require us to seek analytical asymptotic expressions for $R_{\textrm{TM},k}^{(n)}(t)$ and $R_{\textrm{TCM},k}^{(n)}(t)$ with respect to the model parameters. This task is essentially related to second-order asymptotic results on $\textrm{TM}_{k}^{(n)}(t)$ and $\textrm{TCM}_{k}^{(n)}(t)$ , and we will not pursue it in this paper. The reader is referred to Section 5.1 of Li [Reference Li34] for analytical asymptotic results on $R_{\textrm{TM},1}^{(1)}(t)$ in some special cases of Scenario 4.1.

5. Proofs of the main results

We begin with two lemmas established under general frameworks, in which the distributions of the random variables are not assumed to be of any specific type.

Lemma 5.1. Let $Z_{1}$ , …, $Z_{d}$ be d real-valued random variables with distributions $V_{1}$ , …, $V_{d}$ . Assume that the following conditions hold:

  1. (a) $\overline{V}_{i}(t)\asymp \overline{V}_{1}(t)$ for each $1\leq i\leq d$ .

  2. (b) For any $\epsilon >0$ and each pair $1\leq i\neq j\leq d$ ,

    \begin{equation*}\lim_{t\rightarrow \infty }\frac{\mathbb{P}\!\left( Z_{i}>\epsilon t,Z_{j}>t\right) }{\overline{V}_{1}(t)}=0.\end{equation*}
  3. (c) $\mathbb{P}\!\left( \sum_{i=1}^{d}Z_{i}>t\right) \sim \mathbb{P}\!\left( \sum_{i=1}^{d}Z_{i}^{+}>t\right) \sim \sum_{i=1}^{d}\overline{V}_{i}(t)$ .

Then, for each $1\leq k\leq d$ , we have the following assertions:

  1. (i) For any $0<\varepsilon \leq 1$ , it holds uniformly for $y\in[ \varepsilon ,1]$ that

    \begin{equation*}\mathbb{P}\!\left( Z_{k}>yt,\sum_{i=1}^{d}Z_{i}>t\right) \sim \overline{V}_{k}(t).\end{equation*}
  2. (ii) It holds uniformly for $s\in [ t,\infty )$ that

    \begin{equation*}\mathbb{P}\!\left( Z_{k}>s,\sum_{i=1}^{d}Z_{i}>t\right) \sim \overline{V}_{k}(s).\end{equation*}

Proof. Part (i): Clearly, for all $y\in [ \varepsilon ,1]$ ,

\begin{equation*}\mathbb{P}\!\left( Z_{k}>t,\sum_{i=1}^{d}Z_{i}>t\right) \leq\mathbb{P}\!\left( Z_{k}>yt,\sum_{i=1}^{d}Z_{i}>t\right) \leq\mathbb{P}\!\left( Z_{k}>\varepsilon t,\sum_{i=1}^{d}Z_{i}>t\right) .\end{equation*}

Thus, we only need to prove

(5.1) \begin{equation}\mathbb{P}\!\left( Z_{k}>\varepsilon t,\sum_{i=1}^{d}Z_{i}>t\right)\lesssim \overline{V}_{k}(t) \end{equation}

and

(5.2) \begin{equation}\mathbb{P}\left( Z_{k}>t,\sum_{i=1}^{d}Z_{i}>t\right) \gtrsim\overline{V}_{k}(t). \end{equation}

We have

(5.3) \begin{align}\mathbb{P}\!\left( Z_{k}>\varepsilon t,\sum_{i=1}^{d}Z_{i}>t\right) &\leq \mathbb{P}\!\left( Z_{k}>\varepsilon t,\sum_{i=1}^{d}Z_{i}^{+}>t\right) \notag\\& = \mathbb{P}\!\left( \sum_{i=1}^{d}Z_{i}^{+}>t\right)-\mathbb{P}\!\left(\sum_{i=1}^{d}Z_{i}^{+}>t,Z_{k}\leq \varepsilon t\right) \notag \\&=\!:\,\mathbb{P}\!\left( \sum_{i=1}^{d}Z_{i}^{+}>t\right)-I(t). \end{align}

It holds that

(5.4) \begin{align}I(t) &\geq \mathbb{P}\!\left( \bigcup_{i=1}^{d}\left\{Z_{i}>t\right\},Z_{k}\leq \varepsilon t\right) \notag \\& = \mathbb{P}\!\left( \bigcup_{i=1}^{d}\left\{ Z_{i}>t\right\} \right) -\mathbb{P}\!\left( \bigcup_{i=1}^{d}\left\{ Z_{i}>t\right\},Z_{k}>\varepsilon t\right) \notag \\&\geq \sum_{\substack{ i=1 \\ i\neq k}}^{d}\mathbb{P}\!\left(Z_{i}>t\right) -\sum_{1\leq i<j\leq d}\mathbb{P}\!\left(Z_{i}>t,Z_{j}>t\right) -\sum _{\substack{ i=1 \\ i\neq k}}^{d}\mathbb{P}\!\left( Z_{i}>t,Z_{k}>\varepsilon t\right),\end{align}

where in the last step we used the Bonferroni inequality and the fact that $\left\{ Z_{k}>t,Z_{k}>\varepsilon t\right\} =\left\{ Z_{k}>t\right\}$ , since $0<\varepsilon \leq 1$ . We then obtain (5.1) by plugging (5.4) into (5.3) and applying the conditions (a)–(c). On the other hand, we write

(5.5) \begin{align}\mathbb{P}\!\left( Z_{k}>t,\sum_{i=1}^{d}Z_{i}>t\right)& = \mathbb{P}\!\left( \sum_{i=1}^{d}Z_{i}>t\right) -\mathbb{P}\!\left(\sum_{i=1}^{d}Z_{i}>t,Z_{k}\leq t\right) \notag \\& =\!:\,\mathbb{P}\!\left( \sum_{i=1}^{d}Z_{i}>t\right) -J(t).\end{align}

It holds that

(5.6) \begin{align}J(t) & = \mathbb{P}\!\left( \sum_{i=1}^{d}Z_{i}>t,Z_{k}\leq t,\bigcup_{\substack{ i=1 \\ i\neq k}}^{d}\left\{ Z_{i}>t\right\} \right) +\mathbb{P}\!\left( \sum_{i=1}^{d}Z_{i}>t,\bigcap_{i=1}^{d}\left\{ Z_{i}\leq t\right\}\right) \notag \\&\leq \sum_{\substack{ i=1 \\ i\neq k}}^{d}\mathbb{P}\!\left(Z_{i}>t\right) +\mathbb{P}\!\left(\sum_{i=1}^{d}Z_{i}^{+}>t,\bigcap_{i=1}^{d}\left\{Z_{i}\leq t\right\} \right) \notag \\& =\sum_{\substack{ i=1 \\ i\neq k}}^{d}\mathbb{P}\!\left( Z_{i}>t\right) +\mathbb{P}\!\left( \sum_{i=1}^{d}Z_{i}^{+}>t\right) -\mathbb{P}\!\left(\bigcup_{i=1}^{d}\left\{ Z_{i}>t\right\} \right) \notag \\&\leq \mathbb{P}\!\left( \sum_{i=1}^{d}Z_{i}^{+}>t\right)-\mathbb{P}\!\left( Z_{k}>t\right) +\sum_{1\leq i<j\leq d}\mathbb{P}\!\left( Z_{i}>t,Z_{j}>t\right) . \end{align}

Plugging (5.6) into (5.5) and noting the conditions (a)–(c), we obtain (5.2) and complete the proof of the assertion (i).

Part (ii): We always have

\begin{equation*}\mathbb{P}\!\left( Z_{k}>s,\sum_{i=1}^{d}Z_{i}>t\right) \leq \overline{V}_{k}(s).\end{equation*}

By the assertion (i) with $\varepsilon =1$ , it holds uniformly for $s\in[t,\infty )$ that

\begin{equation*}\mathbb{P}\!\left( Z_{k}>s,\sum_{i=1}^{d}Z_{i}>t\right) \geq\mathbb{P}\!\left( Z_{k}>s,\sum_{i=1}^{d}Z_{i}>s\right) \sim\overline{V}_{k}(s).\end{equation*}

Combining the two estimates above, we obtain the assertion (ii).

Lemma 5.2. Let $Z_{1}$ , …, $Z_{d}$ be d real-valued random variables with distributions $V_{1}$ , …, $V_{d}$ . Assume that the condition (a) of Lemma 5.1 and the following conditions hold:

  1. (b$^{\prime }$) For any $\epsilon >0$ and each pair $1\leq i\neq j\leq d$ ,

    \begin{equation*}\lim_{t\rightarrow \infty }\frac{\mathbb{P}\!\left( Z_{i}^{-}>\epsilon t,Z_{j}>t\right) +\mathbb{P}\!\left( Z_{i}>t,Z_{j}>t\right) }{\overline{V}_{1}(t)}=0.\end{equation*}
  2. (c$^{\prime }$) $\mathbb{P}\!\left(\sum_{i=1}^{d}Z_{i}^{+}>t\right) \sim\sum_{i=1}^{d}\overline{V}_{i}(t)$ .

Then, for each $1\leq k\leq d$ and any $\varepsilon >0$ , it holds uniformly for $y\in [ \varepsilon ,\infty )$ that

\begin{equation*}\mathbb{P}\!\left( Z_{k}^{-}>yt,\sum_{i=1}^{d}Z_{i}>t\right) =o\!\left(\overline{V}_{k}(t)\right) .\end{equation*}

Proof. It holds for all $y\in [ \varepsilon ,\infty )$ that

\begin{equation*}\mathbb{P}\!\left( Z_{k}^{-}>yt,\sum_{i=1}^{d}Z_{i}>t\right) \leq \mathbb{P}\!\left( Z_{k}^{-}>\varepsilon t,\sum_{i=1}^{d}Z_{i}^{+}>t\right) .\end{equation*}

Then, using steps similar to those shown in (5.3) and (5.4) and noting the fact that $\left\{ Z_{k}>t,Z_{k}^{-}>\varepsilon t\right\}=\varnothing $ , we have

\begin{align*}\mathbb{P}\!\left( Z_{k}^{-}>\varepsilon t,\sum_{i=1}^{d}Z_{i}^{+}>t\right)&\leq \mathbb{P}\!\left( \sum_{i=1}^{d}Z_{i}^{+}>t\right) -\sum_{i=1}^{d}\mathbb{P}\!\left( Z_{i}>t\right) \\&+\sum_{1\leq i<j\leq d}\mathbb{P}\!\left( Z_{i}>t,Z_{j}>t\right)+\sum _{\substack{ i=1 \\ i\neq k}}^{d}\mathbb{P}\!\left(Z_{i}>t,Z_{k}^{-}>\varepsilon t\right) .\end{align*}

Applying the conditions (a), (b $^{\prime }$ ), and (c $^{\prime }$ ) to the above relation completes the proof.

The next two lemmas can be regarded as a decomposition of the proof of Theorem 3.1. Through these two lemmas, we also find that the left tails of the individual risks do not affect the asymptotic properties of the TM and TCM under our models.

Lemma 5.3. Let $\textrm{TM}_{k,+}^{(n)}(t)=\mathbb{E}\!\left(\left( X_{k}^{+}\right) ^{n}\left\vert S_{d}>t\right. \right) $ for each $1\leq k\leq d$ .

  1. (i) If all the conditions of Assumption 3.1, except $F_{i}({-}t)=O(\overline{F}_{1}(t))$ , are satisfied, then

    \begin{equation*}\textrm{TM}_{k,+}^{(n)}(t)\sim \frac{\alpha }{\alpha -n}C_{k}t^{n}.\end{equation*}
  2. (ii) If all the conditions of Assumption 3.2, except $F_{i}({-}t)=O(\overline{F}_{1}(t))$ , are satisfied, then

    \begin{equation*}\textrm{TM}_{k,+}^{(n)}(t)\sim C_{k}t^{n}.\end{equation*}

Proof. By Theorem 3.1 of Chen and Yuen [Reference Chen and Yuen12] and Theorem 3.1 of Hashorva and Li [Reference Hashorva and Li21], the equivalence relations shown in (3.11) hold under the conditions of this lemma. Recalling also (2.7) and (3.1), it is easy to check that all the conditions of Lemma 5.1 are satisfied by $(X_{1},\ldots ,X_{d})$ , and hence the assertions obtained in Lemma 5.1 hold for $(X_{1},\ldots ,X_{d})$ . For any $0<\varepsilon <1$ , we write

(5.7) \begin{align}\textrm{TM}_{k,+}^{(n)}(t) & = \left( \int_{0}^{\varepsilon t^{n}}+\int_{\varepsilon t^{n}}^{t^{n}}+\int_{t^{n}}^{\infty}\right) \mathbb{P}\!\left( X_{k}>x^{1/n}\left\vert S_{d}>t\right.\right) \textrm{d}x\notag \\&=\!:\,I_{1}(t)+I_{2}(t)+I_{3}(t). \end{align}

Clearly,

(5.8) \begin{equation}0\leq I_{1}(t)\leq \varepsilon t^{n}. \end{equation}

For $I_{2}(t)$ , we first note that, by (3.11) with $\overline{F}_{i}(t)\sim c_{i}\overline{F}_{1}(t)$ for each $1\leq i\leq d$ ,

(5.9) \begin{equation}\frac{\overline{F}_{i}(t)}{\mathbb{P}\!\left( S_{d}>t\right)}\rightarrow C_{i},\quad 1\leq i\leq d, \end{equation}

where $C_{i}$ is given by (3.3). Then, under Assumption 3.1 or 3.2, it holds that

(5.10) \begin{align}I_{2}(t) & = \frac{\int_{\varepsilon t^{n}}^{t^{n}}\mathbb{P}\!\left(X_{k}>x^{1/n},S_{d}>t\right) \textrm{d}x}{\mathbb{P}\!\left(S_{d}>t\right) }\notag \\&= \frac{\int_{\varepsilon }^{1}\mathbb{P}\!\left(X_{k}>ty^{1/n},S_{d}>t\right) \textrm{d}y}{\mathbb{P}\!\left( S_{d}>t\right) }t^{n} \notag \\&\sim \left( 1-\varepsilon \right) \frac{\overline{F}_{k}(t)}{\mathbb{P}\!\left( S_{d}>t\right) }t^{n} \notag \\&\sim \left( 1-\varepsilon \right) C_{k}t^{n}, \end{align}

where in the second step we used a change of variables, in the third step we used Lemma 5.1(i), and in the last step we used (5.9). Finally, under Assumption 3.1 or 3.2, we have

\begin{align*}I_{3}(t) &= \frac{\int_{t^{n}}^{\infty }\mathbb{P}\!\left(X_{k}>x^{1/n},S_{d}>t\right) \textrm{d}x}{\mathbb{P}\!\left(S_{d}>t\right) }\\&\sim \frac{\int_{t^{n}}^{\infty }\overline{F}_{k}(x^{1/n})\textrm{d}x}{\mathbb{P}\!\left( S_{d}>t\right) } \\& = \frac{n\int_{t}^{\infty }y^{n-1}\overline{F}_{k}(y)\textrm{d}y}{t^{n}\overline{F}_{k}(t)}\frac{\overline{F}_{k}(t)}{\mathbb{P}\!\left(S_{d}>t\right) }t^{n},\end{align*}

where in the second step we used Lemma 5.1(ii) and in the last step we used a change of variables. If Assumption 3.1 holds, then $t^{n-1}\overline{F}_{k}(t)\in \mathcal{R}_{-\alpha +n-1}$ with $-\alpha+n-1<-1$ . Hence, it follows from (5.9) and (2.3) that

(5.11) \begin{equation}I_{3}(t)\sim \frac{n}{\alpha -n}C_{k}t^{n}. \end{equation}

If Assumption 3.2 holds, then using (5.9) and (2.4) gives that

(5.12) \begin{equation}I_{3}(t)=o\!\left( t^{n}\right) . \end{equation}

Plugging (5.8), (5.10), and (5.11) or (5.12) into (5.7) and letting $\varepsilon \rightarrow 0$ , we complete the proof.

Lemma 5.4. Let $\textrm{TM}_{k,-}^{(n)}(t)=\mathbb{E}\!\left(\left( X_{k}^{-}\right) ^{n}\left\vert S_{d}>t\right. \right) $ for each $1\leq k\leq d$ . Under Assumption 3.1 or 3.2, we have

\begin{equation*}\textrm{TM}_{k,-}^{(n)}(t)=o\!\left( t^{n}\right) .\end{equation*}

Proof. In view of (3.11), it is easy to see that Lemma 5.2 is applicable to $(X_{1},\ldots ,X_{d})$ under Assumption 3.1 or 3.2. For any $0<\varepsilon <1$ , we write

(5.13) \begin{align}\textrm{TM}_{k,-}^{(n)}(t) & = \left( \int_{0}^{\varepsilon t^{n}}+\int_{\varepsilon t^{n}}^{\varepsilon^{-1}t^{n}}+\int_{\varepsilon ^{-1}t^{n}}^{\infty }\right)\mathbb{P}\!\left( X_{k}^{-}>x^{1/n}\left\vert S_{d}>t\right. \right) \textrm{d}x \notag \\&=\!:\,I_{1}(t)+I_{2}(t)+I_{3}(t). \end{align}

Clearly, the relation (5.8) still holds for $I_{1}(t)$ . In addition, under Assumption 3.1 or 3.2,

(5.14) \begin{align}I_{2}(t) & = \frac{\int_{\varepsilon t^{n}}^{\varepsilon ^{-1}t^{n}}\mathbb{P}\!\left( X_{k}^{-}>x^{1/n},S_{d}>t\right)\textrm{d}x}{\mathbb{P}\!\left(S_{d}>t\right) } \notag \\&\leq \frac{\mathbb{P}\!\left( X_{k}^{-}>\varepsilon ^{1/n}t,S_{d}>t\right) }{\overline{F}_{k}(t)}\frac{\overline{F}_{k}(t)}{\mathbb{P}\!\left(S_{d}>t\right) }\left( \varepsilon ^{-1}-\varepsilon \right) t^{n} \notag \\& = o\!\left( t^{n}\right) , \end{align}

where in the last step we used Lemma 5.2 and (5.9). Moreover, since $F_{k}({-}t)=O(\overline{F}_{1}(t))$ under Assumption 3.1 or 3.2, there is some constant B such that $\mathbb{P}\!\left(X_{k}^{-}>t\right) \leq B\overline{F}_{1}(t)$ for t large enough. Hence, we have

\begin{align*}I_{3}(t) & = \frac{\int_{\varepsilon ^{-1}t^{n}}^{\infty}\mathbb{P}\!\left( X_{k}^{-}>x^{1/n},S_{d}>t\right)\textrm{d}x}{\mathbb{P}\!\left(S_{d}>t\right) } \\&\leq \frac{\int_{\varepsilon ^{-1}t^{n}}^{\infty }\mathbb{P}\!\left(X_{k}^{-}>x^{1/n}\right) \textrm{d}x}{\mathbb{P}\!\left( S_{d}>t\right) } \\&\lesssim \frac{B\int_{\varepsilon ^{-1}t^{n}}^{\infty }\overline{F}_{1}\!\left( x^{1/n}\right) \textrm{d}x}{\mathbb{P}\!\left( S_{d}>t\right) } \\& = \frac{Bn\int_{\varepsilon ^{-1/n}t}^{\infty }y^{n-1}\overline{F}_{1}(y)\textrm{d}y}{\varepsilon ^{-1}t^{n}\overline{F}_{1}\left(\varepsilon^{-1/n}t\right) }\frac{\overline{F}_{1}\left( \varepsilon ^{-1/n}t\right) }{\mathbb{P}\!\left( S_{d}>t\right) }\varepsilon ^{-1}t^{n},\end{align*}

where in the last step we used a change of variables. If Assumption 3.1 holds, then using (2.3), $F_{1}\in\mathcal{R}_{-\alpha }$ , and (5.9) yields that

(5.15) \begin{equation}I_{3}(t)\lesssim \frac{Bn}{\alpha -n}C_{1}\varepsilon ^{\alpha/n-1}t^{n}. \end{equation}

If Assumption 3.2 holds, then it follows from (2.4) and (5.9) that

(5.16) \begin{equation}I_{3}(t)\lesssim \frac{Bn\int_{\varepsilon ^{-1/n}t}^{\infty }y^{n-1}\overline{F}_{1}(y)\textrm{d}y}{\varepsilon^{-1}t^{n}\overline{F}_{1}\left(\varepsilon ^{-1/n}t\right) }\frac{\overline{F}_{1}\left( t\right) }{\mathbb{P}\!\left( S_{d}>t\right) }\varepsilon ^{-1}t^{n}=o\!\left( t^{n}\right). \end{equation}

Plugging (5.8), (5.14), and (5.15) or (5.16) into (5.13) and letting $\varepsilon\rightarrow 0$ , we complete the proof.

Proof of Theorem 3.1. It is clear that

\begin{align*}\textrm{TM}_{k}^{(n)}(t) & = \mathbb{E}\!\left( \left(X_{k}^{+}-X_{k}^{-}\right) ^{n}\left\vert S_{d}>t\right. \right) \\& = \mathbb{E}\!\left( \left( X_{k}^{+}\right) ^{n}+({-}1)^{n}\left(X_{k}^{-}\right) ^{n}\left\vert S_{d}>t\right. \right) \\& = \textrm{TM}_{k,+}^{(n)}(t)+({-}1)^{n}\textrm{TM}_{k,-}^{(n)}(t).\end{align*}

A combination of Lemmas 5.3 and 5.4 indicates that

\begin{equation*}\textrm{TM}_{k,-}^{(n)}(t)=o(1)\textrm{TM}_{k,+}^{(n)}(t)\end{equation*}

under Assumption 3.1 or 3.2. Hence, we have

\begin{equation*}\textrm{TM}_{k}^{(n)}(t)\sim \textrm{TM}_{k,+}^{(n)}(t).\end{equation*}

Then the assertions (i) and (ii) of Lemma 5.3 imply the assertions (i) and (ii) of Theorem 3.1, respectively.

Proof of (3.10). Noting that the random variable $\left. \left( X_{k}-\textrm{TM}_{k}^{(1)}(t)\right) ^{2n}\right\vert D$ is non-negative for any event $D$ , we have

\begin{align*}t^{2n} &\leq \mathbb{E}\!\left( \left. \left(X_{k}-\textrm{TM}_{k}^{(1)}(t)\right)^{2n}\right\vert X_{k}>t+\textrm{TM}_{k}^{(1)}(t),S_{d}>t\right) \\& = \frac{\int_{0}^{\infty }\mathbb{P}\!\left( \left(X_{k}-\textrm{TM}_{k}^{(1)}(t)\right)^{2n}>x,X_{k}>t+\textrm{TM}_{k}^{(1)}(t),S_{d}>t\right)\textrm{d}x}{\mathbb{P}\!\left( X_{k}>t+\textrm{TM}_{k}^{(1)}(t),S_{d}>t\right) } \\& \leq \frac{\int_{0}^{\infty }\mathbb{P}\!\left( \left(X_{k}-\textrm{TM}_{k}^{(1)}(t)\right) ^{2n}>x,S_{d}>t\right) \textrm{d}x}{\mathbb{P}\!\left( S_{d}>t\right) }\frac{\mathbb{P}\!\left( S_{d}>t\right) }{\mathbb{P}\!\left( X_{k}>t+\textrm{TM}_{k}^{(1)}(t),S_{d}>t\right) } \\& = \textrm{TCM}_{k}^{(2n)}(t)\frac{\mathbb{P}\!\left( S_{d}>t\right) }{\mathbb{P}\!\left( X_{k}>t+\textrm{TM}_{k}^{(1)}(t),S_{d}>t\right) }.\end{align*}

Thus,

\begin{align*}\textrm{TCM}_{k}^{(2n)}(t) &\geq \frac{\mathbb{P}\!\left(X_{k}>t+\textrm{TM}_{k}^{(1)}(t),S_{d}>t\right) }{\mathbb{P}\!\left( S_{d}>t\right) }t^{2n} \\&\sim \frac{\overline{F}_{k}\left( t+\textrm{TM}_{k}^{(1)}(t)\right) }{\mathbb{P}\!\left( S_{d}>t\right) }t^{2n},\end{align*}

where the last step follows from Lemma 5.1(ii). By Theorem 3.1(i), $\textrm{TM}_{k}^{(1)}(t)\sim \frac{\alpha }{\alpha -1}C_{k}t$ , which implies that

\begin{equation*}\textrm{TM}_{k}^{(1)}(t)\leq \frac{2\alpha }{\alpha -1}C_{k}t\end{equation*}

for all sufficiently large $t$ .

Then,

\begin{align*}\textrm{TCM}_{k}^{(2n)}(t) &\gtrsim \frac{\overline{F}_{k}\left( \left( 1+\frac{2\alpha }{\alpha -1}C_{k}\right) t\right) }{\mathbb{P}\!\left(S_{d}>t\right) }t^{2n} \\&\sim C_{k}\left( 1+\frac{2\alpha }{\alpha -1}C_{k}\right)^{-\alpha }t^{2n},\end{align*}

where in the last step we used $F_{k}\in \mathcal{R}_{-\alpha }$ and (5.9). Comparing the above estimate with (3.6) gives that

\begin{equation*}A_{\alpha ,2n,k}\geq C_{k}\left( 1+\frac{2\alpha }{\alpha-1}C_{k}\right) ^{-\alpha }>0.\end{equation*}

This completes the proof of (3.10).

Appendix A. Supplementary discussions on univariate cases

In this appendix, we turn to the degenerate versions of the TM and TCM defined by (1.1) and (1.2) with only one risk under consideration. Denoting the single risk by a real-valued random variable X, for $n\in \textbf{N}^{+}$ and $t>0$ we write

(A.1) \begin{equation}\textrm{TM}^{(n)}(t)=\mathbb{E}\!\left( X^{n}\left\vert X>t\right.\right) \end{equation}

and

(A.2) \begin{equation}\textrm{TCM}^{(n)}(t)=\mathbb{E}\!\left( \left. \left( X-\textrm{TM}^{(1)}(t)\right) ^{n}\right\vert X>t\right) . \end{equation}

Note that $X\!\left\vert X>t\right. $ is a non-negative random variable with the same distribution as $X^{+}\left\vert X^{+}>t\right. $ . Hence, the asymptotic behavior of $\textrm{TM}^{(n)}(t)$ and $\textrm{TCM}^{(n)}(t)$ has nothing to do with the left tail of X. Let F be the distribution of $X$ . When $F\in \mathcal{R}$ , $\textrm{TM}^{(n)}(t)$ and $\textrm{TCM}^{(n)}(t)$ possess asymptotic expansions similar to those obtained for their multivariate counterparts in Theorem 3.1(i) and Corollary 3.1(i).

Theorem A.1. Consider $\textrm{TM}^{(n)}(t)$ and $\textrm{TCM}^{(n)}(t)$ defined by (A.1) and (A.2) with $n\in \textbf{N}^{+}$ . If $F\in\mathcal{R}_{-\alpha }$ for some $\alpha >n$ , then

(A.3) \begin{equation}\textrm{TM}^{(n)}(t)\sim \frac{\alpha }{\alpha -n}t^{n}\end{equation}

and

(A.4) \begin{equation}\textrm{TCM}^{(n)}(t)=\left( A_{\alpha ,n}+o(1)\right) t^{n},\end{equation}

where

\begin{equation*}A_{\alpha ,n}=\sum_{i=0}^{n-1}\left(\begin{array}{c}n \\i\end{array}\right) ({-}1)^{i}\frac{\alpha ^{i+1}}{\left( \alpha -1\right)^{i}\left( \alpha -n+i\right) }+({-}1)^{n}\left( \frac{\alpha }{\alpha-1}\right) ^{n}.\end{equation*}

Proof. We have

(A.5) \begin{align}\textrm{TM}^{(n)}(t) &= \left( \int_{0}^{t^{n}}+\int_{t^{n}}^{\infty}\right) \mathbb{P}\!\left( X>x^{1/n}\left\vert X>t\right. \right)\textrm{d}x\notag \\& = t^{n}+\frac{\int_{t^{n}}^{\infty }\overline{F}\!\left(x^{1/n}\right)\textrm{d}x}{\overline{F}\!\left( t\right) } \notag \\& = t^{n}+\frac{n\int_{t}^{\infty }y^{n-1}\overline{F}\!\left( y\right) \textrm{d}y}{t^{n}\overline{F}\!\left( t\right) }t^{n}, \end{align}

where in the last step we used a change of variables. Since $t^{n-1}\overline{F}(t)\in \mathcal{R}_{-\alpha +n-1}$ with $-\alpha +n-1<-1$ , applying (2.3) to the second term of (A.5) gives (A.3). Then, (A.4) follows from combining (A.3) with the equality

(A.6) \begin{equation}\textrm{TCM}^{(n)}(t)=\sum_{i=0}^{n-1}\left(\begin{array}{c}n \\i\end{array}\right) ({-}1)^{i}\left( \textrm{TM}^{(1)}(t)\right) ^{i}\textrm{TM}^{(n-i)}(t)+({-}1)^{n}\left( \textrm{TM}^{(1)}(t)\right) ^{n}.\end{equation}

This completes the proof.

Not surprisingly, the right-hand sides of (A.3) and (A.4) are equal to those of (3.4) and (3.6), respectively, with $C_{k}=1$ . By an approach similar to that used in proving (3.10), we can verify that $A_{\alpha,n}>0$ for any even n.
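These constants can be checked in closed form in the Pareto case: if $\overline{F}(x)=\left( \lambda /(x+\lambda )\right) ^{\alpha }$ , then $X-t\left\vert X>t\right.$ is again Pareto with shape $\alpha $ and scale $t+\lambda $ (a standard fact), so the tail moments are available exactly. A quick numerical confirmation of (A.3) and (A.4) for $n=2$ , with illustrative parameters; note that for $n=2$ the constant simplifies to $A_{\alpha ,2}=\alpha /(\alpha -2)-\alpha ^{2}/(\alpha -1)^{2}=\alpha /\left( (\alpha -1)^{2}(\alpha -2)\right) $ :

```python
# Illustrative parameters (an assumption for this check): alpha > n = 2.
alpha, lam, t = 3.0, 100.0, 1.0e6

# Standard fact: X - t | X > t is Pareto with shape alpha and scale s = t + lam, so
#   E(X - t | X > t)     = s / (alpha - 1),
#   E((X - t)^2 | X > t) = 2 s^2 / ((alpha - 1)(alpha - 2)).
s = t + lam
tm1 = t + s / (alpha - 1.0)                                   # TM^(1)(t), exactly
tm2 = (2.0 * s ** 2 / ((alpha - 1.0) * (alpha - 2.0))
       + 2.0 * t * s / (alpha - 1.0) + t ** 2)                # TM^(2)(t), exactly
tcm2 = tm2 - tm1 ** 2                                         # (A.6) with n = 2

# Constant of (A.4) for n = 2: A_{alpha,2} = alpha/(alpha-2) - alpha^2/(alpha-1)^2.
A = alpha / (alpha - 2.0) - alpha ** 2 / (alpha - 1.0) ** 2
```

With these values, $\textrm{TM}^{(1)}(t)/t$ and $\textrm{TCM}^{(2)}(t)/t^{2}$ match $\alpha /(\alpha -1)$ and $A_{\alpha ,2}$ up to terms of order $\lambda /t$ , as (A.3) and (A.4) predict.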

On the other hand, we can also obtain a precise asymptotic formula for $\textrm{TM}^{(n)}(t)$ by a similar treatment when F belongs to the wide class of rapidly varying distributions.

Theorem A.2. Consider $\textrm{TM}^{(n)}(t)$ defined by (A.1) with $n\in \textbf{N}^{+}$ . If $F\in \mathcal{R}_{-\infty }$ , then

(A.7) \begin{equation}\textrm{TM}^{(n)}(t)\sim t^{n}. \end{equation}

Proof. The relation (A.7) can be derived by following the same approach as in the proof of Theorem A.1, but applying (2.4) instead of (2.3).

In fact, it is easy to see from the proofs of Theorems A.1 and A.2 that (A.3) and (A.7) hold for any positive (not necessarily integer-valued) order satisfying the conditions of the theorems, because $X\!\left\vert X>t\right. $ is a non-negative random variable as mentioned before.

Now, plugging (A.7) into (A.6) yields only a rough estimate of $\textrm{TCM}^{(n)}(t)$ , i.e., $\textrm{TCM}^{(n)}(t)=o(t^{n})$ for any $n\in \textbf{N}^{+}$ . Thus, more conditions are required to obtain a precise asymptotic result for $\textrm{TCM}^{(n)}(t)$ when $F\in \mathcal{R}_{-\infty }$ . Here we consider only $\textrm{TCM}^{(2)}(t)$ , i.e., the TV risk measure, which is the simplest but also the most interesting special case of $\textrm{TCM}^{(n)}(t)$ . For this purpose, we restrict $F$ to be a von Mises function with an infinite upper endpoint. That is to say, there is some real number $z$ such that

(A.8) \begin{equation}\overline{F}(x)=\delta \exp \left\{ -\int_{z}^{x}\frac{1}{h(y)}\textrm{d}y\right\} ,\quad x>z, \end{equation}

where $\delta $ is some positive constant and h is a positive and absolutely continuous function with density $h^{\prime }$ such that $h^{\prime }(t)\rightarrow 0$ . It is known that if F is a von Mises function with the representation (A.8), then $F\in\textrm{GMDA}(h)\subset \mathcal{R}_{-\infty }$ , and F is differentiable on $(z,\infty )$ with positive density f such that

(A.9) \begin{equation}f(x)=\frac{\overline{F}(x)}{h(x)}; \end{equation}

see Chapter 1.1 of Resnick [Reference Resnick40] or Chapter 3.3.3 of Embrechts et al. [Reference Embrechts, Klüppelberg and Mikosch18]. The class of von Mises functions is an important subclass of the Gumbel MDA, and it contains many commonly used distributions, including the exponential, Erlang, normal, and log-normal distributions.

Theorem A.3. Consider $\textrm{TCM}^{(2)}(t)$ defined by (A.2) with $n=2$ . Let F be a von Mises function with the representation (A.8). If h is differentiable on $(z,\infty )$ and $\lim_{t\rightarrow \infty }th^{\prime }(t)/h(t)$ exists and is finite, then

(A.10) \begin{equation}\textrm{TCM}^{(2)}(t)\sim h^{2}(t). \end{equation}

Proof. Denote by v the value of $\lim_{t\rightarrow \infty }th^{\prime }(t)/h(t)$ , i.e.,

(A.11) \begin{equation}\frac{th^{\prime }(t)}{h(t)}\rightarrow v\in ({-}\infty ,\infty ).\end{equation}

Recalling (A.5), it holds that

(A.12) \begin{equation}\textrm{TM}^{(1)}(t)=t+\frac{\int_{t}^{\infty }\overline{F}\!\left(x\right) \textrm{d}x}{\overline{F}\!\left( t\right) }. \end{equation}

We write

(A.13) \begin{equation}I(t)=\frac{\int_{t}^{\infty }\overline{F}\!\left( x\right) \textrm{d}x/\overline{F}\!\left( t\right) -h(t)}{h^{2}(t)/t}=\frac{t\int_{t}^{\infty }\overline{F}\!\left( x\right) \textrm{d}x-t\overline{F}\!\left( t\right) h(t)}{\overline{F}\!\left( t\right) h^{2}(t)}. \end{equation}

Since $F\in \textrm{GMDA}(h)$, Theorem 3.3.26 of Embrechts et al. [18] tells us that

(A.14) \begin{equation}\int_{t}^{\infty }\overline{F}\!\left( x\right) \textrm{d}x\sim \overline{F}\!\left( t\right) h(t). \end{equation}

The fact that $F\in \mathcal{R}_{-\infty }$ implies that $t^{K}\overline{F}\!\left(t\right) \rightarrow 0$ for any $K>0$ . Then, noting also $h(t)=o(t)$ , we have

\begin{equation*}\lim_{t\rightarrow \infty }t\int_{t}^{\infty }\overline{F}\!\left(x\right) \textrm{d}x=\lim_{t\rightarrow \infty }t\overline{F}\!\left(t\right) h(t)=\lim_{t\rightarrow \infty }\overline{F}\!\left( t\right)h^{2}(t)=0.\end{equation*}

Thus, applying L’Hospital’s rule yields that

\begin{align*}\lim_{t\rightarrow \infty }I(t) & = \lim_{t\rightarrow \infty }\frac{\int_{t}^{\infty }\overline{F}\!\left( x\right) \textrm{d}x-t\overline{F}\!\left( t\right) -\overline{F}\!\left( t\right) h(t)+tf\!\left( t\right) h(t)-t\overline{F}\!\left( t\right) h^{\prime }(t)}{-f\!\left( t\right) h^{2}(t)+2\overline{F}\!\left( t\right) h(t)h^{\prime }(t)} \\& = \lim_{t\rightarrow \infty }\frac{\int_{t}^{\infty}\overline{F}\!\left( x\right) \textrm{d}x-\overline{F}\!\left( t\right)h(t)-t\overline{F}\!\left(t\right) h^{\prime }(t)}{-\overline{F}\!\left( t\right) h(t)+2\overline{F}\!\left( t\right) h(t)h^{\prime }(t)} \\& = \lim_{t\rightarrow \infty }\frac{\int_{t}^{\infty}\overline{F}\!\left( x\right) \textrm{d}x/(\overline{F}\!\left(t\right) h(t))-1-th^{\prime}(t)/h(t)}{-1+2h^{\prime }(t)} \\& = v,\end{align*}

where in the second step we used (A.9) and in the last step we used (A.14), (A.11), and $h^{\prime }(t)\rightarrow 0$ . Recalling (A.13), we have

\begin{equation*}\frac{\int_{t}^{\infty }\overline{F}\!\left( x\right) \textrm{d}x}{\overline{F}\!\left( t\right) }=h(t)+v\frac{h^{2}(t)}{t}+o(1)\frac{h^{2}(t)}{t}.\end{equation*}

Plugging the above estimate into (A.12) gives that

\begin{equation*}\textrm{TM}^{(1)}(t)=t+h(t)+v\frac{h^{2}(t)}{t}+o(1)\frac{h^{2}(t)}{t},\end{equation*}

and hence

(A.15) \begin{equation}\left( \textrm{TM}^{(1)}(t)\right) ^{2}=t^{2}+2th(t)+\left(2v+1\right) h^{2}(t)+o(1)h^{2}(t). \end{equation}

On the other hand, it follows from (A.5) that

(A.16) \begin{equation}\textrm{TM}^{(2)}(t)=t^{2}+2\frac{\int_{t}^{\infty}x\overline{F}\!\left( x\right) \textrm{d}x}{\overline{F}\!\left(t\right) }. \end{equation}

Let

\begin{equation*}J(t)=\frac{\int_{t}^{\infty }x\overline{F}\!\left( x\right) \textrm{d}x/\overline{F}\!\left( t\right) -th(t)}{h^{2}(t)}=\frac{\int_{t}^{\infty }x\overline{F}\!\left( x\right) \textrm{d}x-t\overline{F}\!\left( t\right) h(t)}{\overline{F}\!\left( t\right) h^{2}(t)}.\end{equation*}

It is easy to check via (2.2) that $t\overline{F}(t)$ is a tail of a distribution from $\textrm{GMDA}(h)$ . Thus, in terms of $t\overline{F}(t)$ , (A.14) says that

\begin{equation*}\int_{t}^{\infty }x\overline{F}\!\left( x\right) \textrm{d}x\sim t\overline{F}\!\left( t\right) h(t)\rightarrow 0.\end{equation*}

Applying L’Hospital’s rule and the same arguments as used in deriving $\lim I(t)$ , we can obtain that

\begin{equation*}J(t)\rightarrow v+1,\end{equation*}

which implies that

\begin{equation*}\frac{\int_{t}^{\infty }x\overline{F}\!\left( x\right) \textrm{d}x}{\overline{F}\!\left( t\right) }=th(t)+\left( v+1\right) h^{2}(t)+o(1)h^{2}(t).\end{equation*}

Plugging the above estimate into (A.16) gives that

(A.17) \begin{equation}\textrm{TM}^{(2)}(t)=t^{2}+2th(t)+2\left( v+1\right)h^{2}(t)+o(1)h^{2}(t). \end{equation}

A combination of (A.15) and (A.17) yields that

\begin{equation*}\textrm{TCM}^{(2)}(t)=\textrm{TM}^{(2)}(t)-\left( \textrm{TM}^{(1)}(t)\right) ^{2}=h^{2}(t)+o(1)h^{2}(t),\end{equation*}

which is equivalent to (A.10).
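Theorem A.3 can also be verified numerically for distributions beyond the examples below. For instance, for a Weibull-type tail $\overline{F}(x)=\exp \{ -x^{\tau }\}$ with $\tau >0$, we have $h(x)=x^{1-\tau }/\tau$, so $th^{\prime }(t)/h(t)=1-\tau$ and (A.10) predicts $\textrm{TCM}^{(2)}(t)\sim t^{2-2\tau }/\tau ^{2}$. The following sketch (not part of the proof; the composite Simpson quadrature and the truncation point of the tail integrals are numerical conveniences) evaluates $\textrm{TCM}^{(2)}(t)$ directly from (A.12) and (A.16):

```python
import math

def simpson(f, a, b, n=100000):
    # Composite Simpson's rule on [a, b] with n (even) subintervals.
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3.0

def tcm2(tail, t, upper):
    # TCM^(2)(t) = TM^(2)(t) - (TM^(1)(t))^2 via (A.12) and (A.16);
    # the integrals over (t, infinity) are truncated at `upper`.
    fbar = tail(t)
    tm1 = t + simpson(tail, t, upper) / fbar
    tm2 = t * t + 2.0 * simpson(lambda x: x * tail(x), t, upper) / fbar
    return tm2 - tm1 ** 2

tau = 2.0
tail = lambda x: math.exp(-x ** tau)   # Weibull-type tail, h(t) = 1/(2t)
t = 10.0
ratio = tcm2(tail, t, t + 3.0) / (t ** (1.0 - tau) / tau) ** 2
print(ratio)  # close to 1, in line with TCM^(2)(t) ~ h^2(t)
```

The remaining deviation from 1 reflects the $o(h^{2}(t))$ terms in (A.15) and (A.17), which vanish as t grows.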

Since the trend of the function h at infinity is quite flexible, the asymptotic properties of $\textrm{TCM}^{(2)}(t)$ corresponding to different choices of h may also be dramatically different. The following examples fully demonstrate this fact.

Example A.1. Let X follow an exponential distribution with

\begin{equation*}\overline{F}\!\left( x\right) =\textrm{e}^{-\rho x}\textbf{1}_{\{x>0\}}+\textbf{1}_{\{x\leq 0\}},\quad \rho >0.\end{equation*}

Clearly, F is a von Mises function with

\begin{equation*}h(x)=\frac{\overline{F}\!\left( x\right) }{f(x)}=\frac{1}{\rho }.\end{equation*}

Thus, $h^{\prime }(t)=th^{\prime }(t)/h(t)=0$ . Using Theorem A.3 gives that

\begin{equation*}\textrm{TCM}^{(2)}(t)\rightarrow \frac{1}{\rho ^{2}}.\end{equation*}

Actually, in this case routine calculations via (A.5) and (A.6) indicate that, for any $t>0$ ,

\begin{equation*}\textrm{TCM}^{(2)}(t)=\frac{1}{\rho ^{2}}.\end{equation*}
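This constancy is easy to confirm in code: plugging the closed forms $\int_{t}^{\infty }\overline{F}(x)\,\textrm{d}x=\textrm{e}^{-\rho t}/\rho$ and $\int_{t}^{\infty }x\overline{F}(x)\,\textrm{d}x=(t/\rho +1/\rho ^{2})\textrm{e}^{-\rho t}$ into (A.12) and (A.16) gives the following sketch:

```python
def tcm2_exponential(t, rho):
    # (A.12) with the exponential tail integral in closed form: TM^(1)(t) = t + 1/rho.
    tm1 = t + 1.0 / rho
    # (A.16) likewise: TM^(2)(t) = t^2 + 2*(t/rho + 1/rho^2).
    tm2 = t * t + 2.0 * (t / rho + 1.0 / rho ** 2)
    return tm2 - tm1 ** 2

rho = 2.0
print([tcm2_exponential(t, rho) for t in (0.5, 1.0, 10.0)])  # [0.25, 0.25, 0.25]
```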

Example A.2. Let X follow a normal distribution with

\begin{equation*}\overline{F}\!\left( x\right) =\overline{\Phi }\left( \frac{x-\mu }{\sigma }\right) ,\quad \mu\in({-}\infty,+\infty) ,\ \sigma >0,\end{equation*}

where $\Phi $ is the standard normal distribution function.

Denote by $\varphi (x)=\Phi ^{\prime }(x)$ the density of the standard normal distribution. It is easy to check via Proposition 1.1(b) of Resnick [40] that F is a von Mises function with

\begin{equation*}h(x)=\frac{\overline{F}\!\left( x\right) }{f(x)}=\frac{\sigma \overline{\Phi }\left( \frac{x-\mu }{\sigma }\right) }{\varphi \left( \frac{x-\mu }{\sigma }\right) }.\end{equation*}

Using L’Hospital’s rule yields the well-known Mills ratio asymptotics, i.e.,

(A.18) \begin{equation}\frac{\overline{\Phi }\left( t\right) }{\varphi (t)}\sim\frac{1}{t}. \end{equation}

Note also that

(A.19) \begin{equation}\varphi ^{\prime }(x)=-x\varphi (x). \end{equation}

We have

(A.20) \begin{align}\frac{th^{\prime }(t)}{h(t)} & = \frac{-tf^{2}(t)-t\overline{F}\!\left(t\right) f^{\prime }(t)}{\overline{F}\!\left( t\right) f(t)} \notag \\& = \frac{-\frac{t}{\sigma ^{2}}\varphi ^{2}\left( \frac{t-\mu }{\sigma }\right) -\frac{t}{\sigma ^{2}}\overline{\Phi }\left( \frac{t-\mu }{\sigma }\right) \varphi ^{\prime }\left( \frac{t-\mu }{\sigma }\right) }{\frac{1}{\sigma }\overline{\Phi }\left( \frac{t-\mu }{\sigma }\right) \varphi\!\left(\frac{t-\mu }{\sigma }\right) } \notag \\& = \frac{-\sigma t\varphi \!\left( \frac{t-\mu }{\sigma }\right)+t\!\left( t-\mu \right) \overline{\Phi }\left( \frac{t-\mu }{\sigma}\right) }{\sigma ^{2}\overline{\Phi }\left( \frac{t-\mu }{\sigma}\right) }, \end{align}

where in the last step we used (A.19). It is easy to see that both the numerator and denominator of the right-hand side of (A.20) tend to 0 as $t\rightarrow \infty $ . Applying L’Hospital’s rule gives that

\begin{align*}\lim_{t\rightarrow \infty }\frac{th^{\prime }(t)}{h(t)} & = \lim_{t\rightarrow \infty }\frac{-\sigma \varphi \left( \frac{t-\mu }{\sigma }\right) -t\varphi ^{\prime }\left( \frac{t-\mu }{\sigma }\right)+\left(2t-\mu \right) \overline{\Phi }\left( \frac{t-\mu }{\sigma }\right) -\frac{t\left( t-\mu \right) }{\sigma }\varphi \!\left( \frac{t-\mu }{\sigma}\right)}{-\sigma \varphi \!\left( \frac{t-\mu }{\sigma }\right) } \\& = \lim_{t\rightarrow \infty }\frac{\varphi \!\left( \frac{t-\mu }{\sigma }\right) -2\frac{t-\mu }{\sigma }\overline{\Phi }\left( \frac{t-\mu }{\sigma }\right) -\frac{\mu }{\sigma }\overline{\Phi }\left( \frac{t-\mu }{\sigma }\right) }{\varphi \!\left( \frac{t-\mu }{\sigma }\right) } \\& = -1,\end{align*}

where in the second step we used (A.19) and in the last step we used (A.18). Hence, by Theorem A.3 and (A.18), we have

\begin{equation*}\textrm{TCM}^{(2)}(t)\sim \left( \frac{\sigma \overline{\Phi }\left( \frac{t-\mu }{\sigma }\right) }{\varphi \!\left( \frac{t-\mu }{\sigma }\right) }\right) ^{2}\sim \frac{\sigma ^{4}}{t^{2}},\end{equation*}

which implies that $\textrm{TCM}^{(2)}(t)\rightarrow 0$ in this case.
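The decay rate $\sigma ^{4}/t^{2}$ can be observed numerically. For the normal distribution, $\textrm{TCM}^{(2)}(t)$ is just the variance of the normal distribution truncated to $(t,\infty )$, for which the standard closed form $\sigma ^{2}(1+a\lambda (a)-\lambda (a)^{2})$ with $a=(t-\mu )/\sigma$ and $\lambda =\varphi /\overline{\Phi }$ is available; the sketch below uses `math.erfc` for a numerically stable normal tail:

```python
import math

def tcm2_normal(t, mu=0.0, sigma=1.0):
    # Variance of N(mu, sigma^2) conditioned on X > t (truncated-normal
    # variance formula), which equals TCM^(2)(t) for the normal distribution.
    a = (t - mu) / sigma
    phi = math.exp(-0.5 * a * a) / math.sqrt(2.0 * math.pi)
    phi_bar = 0.5 * math.erfc(a / math.sqrt(2.0))  # stable tail of Phi
    lam = phi / phi_bar                            # reciprocal Mills ratio
    return sigma * sigma * (1.0 + a * lam - lam * lam)

t = 20.0
print(tcm2_normal(t) * t * t)  # close to sigma^4 = 1, as Example A.2 predicts
```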

Example A.3. Let X follow a log-normal distribution with

\begin{equation*}\overline{F}\!\left( x\right) =\overline{\Phi }\left( \frac{\log x-\mu }{\sigma }\right) \textbf{1}_{\{x>0\}}+\textbf{1}_{\{x\leq 0\}},\end{equation*}

for $\mu $ , $\sigma$ , and $\Phi $ as specified in Example A.2.

Proposition 1.1(b) of Resnick [40] tells us that F is a von Mises function with

\begin{equation*}h(x)=\frac{\overline{F}\!\left( x\right) }{f(x)}=\frac{\sigma x\overline{\Phi }\left( \frac{\log x-\mu }{\sigma }\right) }{\varphi \!\left(\frac{\log x-\mu }{\sigma }\right) }.\end{equation*}

It follows from (A.19) that

\begin{align*}f^{\prime }(x) & = -\frac{1}{\sigma x^{2}}\varphi \!\left( \frac{\log x-\mu }{\sigma }\right) +\frac{1}{\sigma ^{2}x^{2}}\varphi ^{\prime }\left( \frac{\log x-\mu }{\sigma }\right) \\& = -\frac{\log x-\mu +\sigma ^{2}}{\sigma ^{3}x^{2}}\varphi \!\left( \frac{\log x-\mu }{\sigma }\right) .\end{align*}

Then, by steps similar to those shown in (A.20), we have

\begin{equation*}\frac{th^{\prime }(t)}{h(t)}=\frac{-\sigma \varphi \!\left( \frac{\log t-\mu }{\sigma }\right) +\left( \log t-\mu +\sigma ^{2}\right) \overline{\Phi }\left( \frac{\log t-\mu }{\sigma }\right) }{\sigma ^{2}\overline{\Phi }\left( \frac{\log t-\mu }{\sigma }\right) }.\end{equation*}

Applying L’Hospital’s rule gives that

\begin{align*}\lim_{t\rightarrow \infty }\frac{th^{\prime }(t)}{h(t)} & = \lim_{t\rightarrow \infty }\frac{-\frac{1}{t}\varphi ^{\prime }\left(\frac{\log t-\mu }{\sigma }\right) +\frac{1}{t}\overline{\Phi}\left( \frac{\log t-\mu }{\sigma }\right) -\frac{\log t-\mu +\sigma^{2}}{\sigma t}\varphi \!\left(\frac{\log t-\mu }{\sigma }\right) }{-\frac{\sigma }{t}\varphi \!\left( \frac{\log t-\mu }{\sigma }\right) } \\& = \lim_{t\rightarrow \infty }\frac{-\frac{1}{\sigma }\overline{\Phi}\left( \frac{\log t-\mu }{\sigma }\right) +\varphi \!\left(\frac{\log t-\mu }{\sigma}\right) }{\varphi \!\left( \frac{\log t-\mu }{\sigma }\right) } \\& = 1,\end{align*}

where in the second step we used (A.19) and in the last step we used (A.18). Hence, by Theorem A.3 and (A.18), we obtain that

\begin{equation*}\textrm{TCM}^{(2)}(t)\sim \left( \frac{\sigma t\overline{\Phi }\left( \frac{\log t-\mu }{\sigma }\right) }{\varphi \!\left( \frac{\log t-\mu }{\sigma }\right) }\right) ^{2}\sim \frac{\sigma ^{4}t^{2}}{\left( \log t\right) ^{2}},\end{equation*}

which implies that $\textrm{TCM}^{(2)}(t)\rightarrow \infty $ in this case.
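The convergence $th^{\prime }(t)/h(t)\rightarrow 1$ in the log-normal case is quite slow: for $\mu =0$ and $\sigma =1$, the Mills ratio asymptotics show that the error is of order $1/\log t$. This is easy to see numerically from the closed form derived above (a sketch, again using `math.erfc` for the normal tail):

```python
import math

def ratio_lognormal(t, mu=0.0, sigma=1.0):
    # t h'(t)/h(t) for the log-normal tail, via the closed form above.
    u = (math.log(t) - mu) / sigma
    phi = math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    phi_bar = 0.5 * math.erfc(u / math.sqrt(2.0))
    return (-sigma * phi + (math.log(t) - mu + sigma ** 2) * phi_bar) / (sigma ** 2 * phi_bar)

for logt in (10.0, 20.0, 30.0):
    print(logt, ratio_lognormal(math.exp(logt)))  # approaches 1 roughly like 1 - 1/log(t)
```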

Finally, recall the TVP and TSDP premium principles mentioned in Remark 3.3. The univariate versions of these premium principles are

\begin{equation*}\textrm{TVP}(t)=\textrm{TM}^{(1)}(t)+w\textrm{TCM}^{(2)}(t)\end{equation*}

and

\begin{equation*}\textrm{TSDP}(t)=\textrm{TM}^{(1)}(t)+w\sqrt{\textrm{TCM}^{(2)}(t)}.\end{equation*}

Applying Theorems A.1–A.3 gives that

\begin{equation*}\textrm{TVP}(t)\sim w\textrm{TCM}^{(2)}(t)\sim \frac{w\alpha}{\left( \alpha -2\right) \left( \alpha -1\right) ^{2}}t^{2}\end{equation*}

and

\begin{equation*}\textrm{TSDP}(t)\sim \frac{1}{\alpha -1}\left( \alpha +w\sqrt{\frac{\alpha }{\alpha -2}}\right) t\end{equation*}

if $F\in \mathcal{R}_{-\alpha }$ with $\alpha >2$ , and that

\begin{equation*}\textrm{TVP}(t)\sim t+wh^{2}(t)\end{equation*}

and

\begin{equation*}\textrm{TSDP}(t)\sim \textrm{TM}^{(1)}(t)\sim t\end{equation*}

under the conditions of Theorem A.3.
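For a concrete regularly varying example, a Pareto risk with $\overline{F}(x)=x^{-\alpha }$, $x\geq 1$, admits exact conditional moments (given $X>t$, X is again Pareto with scale t), so the two displays above for $F\in \mathcal{R}_{-\alpha }$ can be checked directly; the threshold $t=10^{6}$ below is an arbitrary illustration:

```python
import math

def pareto_tvp_tsdp(t, alpha, w):
    # Exact conditional moments for the Pareto tail F(x) = 1 - x^{-alpha}, x >= 1:
    # E[X | X > t] = alpha*t/(alpha - 1), E[X^2 | X > t] = alpha*t^2/(alpha - 2).
    tm1 = alpha * t / (alpha - 1.0)
    tm2 = alpha * t * t / (alpha - 2.0)
    tcm2 = tm2 - tm1 ** 2
    return tm1 + w * tcm2, tm1 + w * math.sqrt(tcm2)

alpha, w, t = 3.0, 0.5, 1e6
tvp, tsdp = pareto_tvp_tsdp(t, alpha, w)
c_tvp = w * alpha / ((alpha - 2.0) * (alpha - 1.0) ** 2)                 # TVP/t^2 limit
c_tsdp = (alpha + w * math.sqrt(alpha / (alpha - 2.0))) / (alpha - 1.0)  # TSDP/t limit
print(tvp / (c_tvp * t * t), tsdp / (c_tsdp * t))  # both ratios close to 1
```

Note that for the Pareto distribution the TSDP ratio equals 1 exactly, while the TVP ratio deviates by the lower-order term $\textrm{TM}^{(1)}(t)=O(t)$.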

Acknowledgements

The author is very grateful to the two anonymous reviewers for their thorough reading of the paper and constructive suggestions.

Funding information

This work was supported by the National Natural Science Foundation of China (grant numbers 11871289, 11931018, and 11911530091).

Competing interests

The author declares no competing interests arising during the preparation or publication of this article.

References

[1] Acharya, V. V., Pedersen, L. H., Philippon, T. and Richardson, M. (2017). Measuring systemic risk. Rev. Financial Studies 30, 2–47.
[2] Asimit, A. V., Furman, E., Tang, Q. and Vernic, R. (2011). Asymptotics for risk capital allocations based on conditional tail expectation. Insurance Math. Econom. 49, 310–324.
[3] Asimit, A. V. and Li, J. (2018). Systemic risk: an asymptotic evaluation. ASTIN Bull. 48, 673–698.
[4] Baione, F., De Angelis, P. and Granito, I. (2021). Capital allocation and RORAC optimization under solvency 2 standard formula. Ann. Operat. Res. 299, 747–763.
[5] Bargès, M., Cossette, H. and Marceau, É. (2009). TVaR-based capital allocation with copulas. Insurance Math. Econom. 45, 348–361.
[6] Bawa, V. S. and Lindenberg, E. B. (1977). Capital market equilibrium in a mean-lower partial moment framework. J. Financial Econom. 5, 189–200.
[7] Bingham, N. H., Goldie, C. M. and Teugels, J. L. (1987). Regular Variation. Cambridge University Press.
[8] Cai, J., Einmahl, J. H. J., de Haan, L. and Zhou, C. (2015). Estimation of the marginal expected shortfall: the mean when a related variable is extreme. J. R. Statist. Soc. B [Statist. Methodology] 77, 417–442.
[9] Cai, J. and Li, H. (2005). Conditional tail expectations for multivariate phase-type distributions. J. Appl. Prob. 42, 810–825.
[10] Cai, J. and Wang, Y. (2021). Optimal capital allocation principles considering capital shortfall and surplus risks in a hierarchical corporate structure. Insurance Math. Econom. 100, 329–349.
[11] Chen, Y. and Liu, J. (2022). An asymptotic study of systemic expected shortfall and marginal expected shortfall. Insurance Math. Econom. 105, 238–251.
[12] Chen, Y. and Yuen, K. C. (2009). Sums of pairwise quasi-asymptotically independent random variables with consistent variation. Stoch. Models 25, 76–89.
[13] Denault, M. (2001). Coherent allocation of risk capital. J. Risk 4, 1–34.
[14] Dhaene, J. et al. (2008). Some results on the CTE-based capital allocation rule. Insurance Math. Econom. 42, 855–863.
[15] Dhaene, J., Tsanakas, A., Valdez, E. A. and Vanduffel, S. (2012). Optimal capital allocation principles. J. Risk Insurance 79, 1–28.
[16] Eini, E. J. and Khaloozadeh, H. (2021). The tail mean-variance optimal portfolio selection under generalized skew-elliptical distribution. Insurance Math. Econom. 98, 44–50.
[17] El Methni, J., Gardes, L. and Girard, S. (2014). Non-parametric estimation of extreme risk measures from conditional heavy-tailed distributions. Scand. J. Statist. 41, 988–1012.
[18] Embrechts, P., Klüppelberg, C. and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance. Springer, Berlin.
[19] Furman, E. and Landsman, Z. (2006). Tail variance premium with applications for elliptical portfolio of risks. ASTIN Bull. 36, 433–462.
[20] Furman, E. and Landsman, Z. (2008). Economic capital allocations for non-negative portfolios of dependent risks. ASTIN Bull. 38, 601–619.
[21] Hashorva, E. and Li, J. (2015). Tail behavior of weighted sums of order statistics of dependent risks. Stoch. Models 31, 1–19.
[22] Hou, Y. and Wang, X. (2021). Extreme and inference for tail Gini functionals with applications in tail risk measurement. J. Amer. Statist. Assoc. 116, 1428–1443.
[23] Hua, L. and Joe, H. (2011). Second order regular variation and conditional tail expectation of multiple risks. Insurance Math. Econom. 49, 537–546.
[24] Ignatieva, K. and Landsman, Z. (2015). Estimating the tails of loss severity via conditional risk measures for the family of symmetric generalised hyperbolic distributions. Insurance Math. Econom. 65, 172–186.
[25] Ignatieva, K. and Landsman, Z. (2021). A class of generalised hyper-elliptical distributions and their applications in computing conditional tail risk measures. Insurance Math. Econom. 101, 437–465.
[26] Joe, H. and Li, H. (2011). Tail risk of multivariate regular variation. Methodology Comput. Appl. Prob. 13, 671–693.
[27] Kim, J. H. T. (2010). Conditional tail moments of the exponential family and its related distributions. N. Amer. Actuarial J. 14, 198–216.
[28] Kim, J. H. T. and Kim, S. Y. (2019). Tail risk measures and risk allocation for the class of multivariate normal mean-variance mixture distributions. Insurance Math. Econom. 86, 145–157.
[29] Kley, O., Klüppelberg, C. and Reinert, G. (2018). Conditional risk measures in a bipartite market structure. Scand. Actuarial J. 2018, 328–355.
[30] Landsman, Z. (2010). On the tail mean-variance optimal portfolio selection. Insurance Math. Econom. 46, 547–553.
[31] Landsman, Z., Makov, U. and Shushi, T. (2016). Tail conditional moments for elliptical and log-elliptical distributions. Insurance Math. Econom. 71, 179–188.
[32] Leipus, R., Paukštys, S. and Šiaulys, J. (2021). Tails of higher-order moments of sums with heavy-tailed increments and application to the Haezendonck–Goovaerts risk measure. Statist. Prob. Lett. 170, paper no. 108998, 12 pp.
[33] Li, J. (2013). On pairwise quasi-asymptotically independent random variables and their applications. Statist. Prob. Lett. 83, 2081–2087.
[34] Li, J. (2022). Asymptotic results on marginal expected shortfalls for dependent risks. Insurance Math. Econom. 102, 146–168.
[35] Marri, F. and Moutanabbir, K. (2022). Risk aggregation and capital allocation using a new generalized Archimedean copula. Insurance Math. Econom. 102, 75–90.
[36] McNeil, A. J., Frey, R. and Embrechts, P. (2005). Quantitative Risk Management: Concepts, Techniques and Tools. Princeton University Press.
[37] Mitra, A. and Resnick, S. I. (2009). Aggregation of rapidly varying risks and asymptotic independence. Adv. Appl. Prob. 41, 797–828.
[38] Nelsen, R. B. (2006). An Introduction to Copulas, 2nd edn. Springer, New York.
[39] Ramsay, C. M. (1993). Loading gross premiums for risk without using utility theory. Trans. Soc. Actuaries 45, 305–336.
[40] Resnick, S. I. (1987). Extreme Values, Regular Variation and Point Processes. Springer, New York.
[41] Sun, H., Chen, Y. and Hu, T. (2022). Statistical inference for tail-based cumulative residual entropy. Insurance Math. Econom. 103, 66–95.
[42] Tang, Q. and Yuan, Z. (2014). Randomly weighted sums of subexponential random variables with application to capital allocation. Extremes 17, 467–493.
[43] Vernic, R. (2017). Capital allocation for Sarmanov’s class of distributions. Methodology Comput. Appl. Prob. 19, 311–330.
[44] Xu, M. and Mao, T. (2013). Optimal capital allocation based on the tail mean-variance model. Insurance Math. Econom. 53, 533–543.
[45] Zhu, L. and Li, H. (2012). Asymptotic analysis of multivariate tail conditional expectations. N. Amer. Actuarial J. 16, 350–363.
Figure 1. The graph of $R_{\textrm{TM},1}^{(n)}(t)$ in Scenario 4.1 with different parameters.

Figure 2. The graph of $R_{\textrm{TCM},1}^{(n)}(t)$ in Scenario 4.1 with different parameters.

Figure 3. The graph of $R_{\textrm{TM},1}^{(n)}(t)$ in Scenario 4.2 with different parameters.

Figure 4. The graph of $R_{\textrm{TCM},1}^{(n)}(t)$ in Scenario 4.2 with different parameters.