
An extended class of multivariate counting processes and its main properties

Published online by Cambridge University Press:  15 November 2024

Ji Hwan Cha*
Affiliation:
Department of Statistics, Ewha Womans University, Seoul, Republic of Korea
Sophie Mercier
Affiliation:
Université de Pau et des Pays de l'Adour, E2S UPPA, CNRS, LMAP, Pau, France
*
Corresponding author: Ji Hwan Cha; Email: [email protected]

Abstract

In this paper, a new multivariate counting process model (called Multivariate Poisson Generalized Gamma Process) is developed and its main properties are studied. Some basic stochastic properties of the number of events in the new multivariate counting process are initially derived. It is shown that this new multivariate counting process model includes the multivariate generalized Pólya process as a special case. The dependence structure of the multivariate counting process model is discussed. Some results on multivariate stochastic comparisons are also obtained.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press.

1. Introduction

Until now, a variety of univariate counting processes for modeling univariate random recurrent events have been developed and studied intensively in the literature. However, in practice, stochastically dependent series of events are also frequently observed, leading to multivariate counting processes. For instance, in queueing models, bivariate point processes arise as the input and output processes (Daley [11]). In reliability applications, the occurrences of recurrent failures in two or more parts of a system are frequently positively dependent. In finance, the bankruptcy of a financial company in one group may also affect companies in other groups (Allen and Gale [2]). In econometrics, multivariate point processes are frequently used to model multivariate market events (see Bowsher [6]). In insurance, two types of recurrent claims can be modeled by a bivariate point process (Partrat [21]). For many more examples, see Cox and Lewis [10]. Although some multivariate counting process models have been developed in the literature, models that can meet practical needs are very limited, and a large gap remains between the need for proper models in various applications and the available useful models.

The main contribution of this paper is to develop a new general class of multivariate counting processes that is both mathematically tractable and applicable in practice. Specifically, in the multivariate counting process model developed in this paper, the distribution of the numbers of events in a time interval and the stochastic intensity of the process can be obtained explicitly. This is practically important because it allows an explicit expression of the likelihood function in estimation procedures, which considerably increases the utility of the model. Furthermore, the developed counting process model is flexible because the baseline intensity functions contained in the model have general forms. In addition, the developed model is very general in the sense that it includes an existing model as a special case.

The paper is organized as follows. In Section 2, a new class of bivariate counting processes is defined and its basic properties are derived. Furthermore, the corresponding marginal processes and the future process of the developed model are also characterized. In Section 3, the stochastic intensity functions of the model are derived and it is shown that the model includes the bivariate generalized Pólya process in Cha and Giorgio [8] as a special case. In Section 4, the bivariate process is generalized to the multivariate case, and the results from the previous section are extended. In Section 5, multivariate stochastic comparisons for the numbers of events and the arrival times of the events are studied. In addition, the dependence structure of the process is analyzed.

2. Bivariate Poisson generalized gamma process and its basic properties

In this section, the bivariate Poisson generalized gamma process is defined and its basic properties are derived. To define the new counting process model, we first introduce the generalized gamma distribution proposed in Agarwal and Kalla [1] and Ghitany [15]. A random variable Φ is said to follow the generalized gamma distribution (GGD) with parameters $(\nu, k, \alpha, l)$, where $\nu\geq0, k, \alpha, l \gt 0$, if its probability density function (pdf) is given by

(2.1)\begin{align} f(\phi)=\frac{\alpha^{k-\nu}}{\Gamma_{\nu} (k, \alpha l)}\frac{\phi^{k-1}\exp\{-\alpha\phi\}}{(\phi+l)^{\nu}}, \phi \gt 0, \end{align}

where

\begin{equation*} \Gamma_{\nu} (k, \beta)=\int_{0}^{\infty} \frac{y^{k-1}\exp\{-y\}} {(y+\beta)^{\nu}}dy, \end{equation*}

for all β > 0, with

(2.2)\begin{align} \Gamma_{\nu} (k, \alpha l)=\int_{0}^{\infty} \frac{ y^{k-1}\exp\{-y\}}{(y+\alpha l)^{\nu}}dy=\int_{0}^{\infty} \frac{\alpha ^{k-\nu} y^{k-1}\exp\{-\alpha y\}}{(y+l)^{\nu}}dy. \end{align}

The function $\Gamma_{\nu} (k, \beta)$ is called the generalized gamma function (see Kobayashi [20]) and if ν = 0, then

\begin{align*} \Gamma_{0} (k, \beta)=\int_{0}^{\infty} y^{k-1}\exp\{-y\}dy=\Gamma(k), ~\forall k \gt 0. \end{align*}

Thus, when ν = 0, it can be seen that the pdf in (2.1) becomes that of a gamma distribution with parameter $(k,\alpha)$. Hence, the GGD includes the gamma distribution as a special case.

Coming back to the general case, one can note from Gupta and Ong [17] that

\begin{equation*} \Gamma_{\nu}(k,\beta)=\frac{\Gamma\left( k\right) }{\beta^{\nu-k}} \varphi\left( k,k-\nu+1;\beta\right), \end{equation*}

where

\begin{equation*} \varphi\left( a,c;x\right) =\frac{1}{\Gamma\left( a\right) }\int _{0}^{\infty}\frac{e^{-xt}t^{a-1}}{\left( 1+t\right) ^{a-c+1}}dt, \end{equation*}

is the confluent hypergeometric function of the second kind. This allows an easy computation of $\Gamma_{\nu}(k,\beta)$, as $\varphi\left( a,c;x\right) $ and $\Gamma\left( k\right) $ are implemented in most statistical or mathematical software. From Ghitany [15], the moment generating function of Φ is given by

(2.3)\begin{equation} M_\Phi(s)=E(e^{s\Phi})=\left(1-\frac s\alpha\right)^{\nu-k}\frac{\Gamma_\nu\left(k,(\alpha-s)l\right)}{\Gamma_\nu\left(k,\alpha l\right)},\:s \lt \alpha, \end{equation}

and its r-th moment about the origin is

(2.4)\begin{equation}E(\Phi^r)=\alpha^{-r}\frac{\Gamma_\nu\left(k+r,\alpha l\right)}{\Gamma_\nu\left(k,\alpha l\right)},\:r\in\mathbb{N}^\ast.\end{equation}
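For numerical work, these quantities are easy to evaluate. The following sketch (not part of the original development; it assumes Python with SciPy, whose `scipy.special.hyperu` implements $\varphi(a,c;x)$) computes $\Gamma_{\nu}(k,\beta)$ through the Gupta and Ong identity and cross-checks the density (2.1) and the moment formula (2.4) by direct numerical integration:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyperu


def gen_gamma(nu, k, beta):
    """Generalized gamma function Gamma_nu(k, beta), via the Gupta-Ong
    identity Gamma_nu(k, beta) = Gamma(k) * beta**(k - nu) * phi(k, k - nu + 1; beta),
    where phi is the confluent hypergeometric function of the second kind."""
    return gamma(k) * beta ** (k - nu) * hyperu(k, k - nu + 1.0, beta)


def gg_pdf(phi, nu, k, alpha, l):
    """Density (2.1) of the GGD with parameters (nu, k, alpha, l)."""
    return (alpha ** (k - nu) / gen_gamma(nu, k, alpha * l)
            * phi ** (k - 1) * np.exp(-alpha * phi) / (phi + l) ** nu)


def gg_moment(r, nu, k, alpha, l):
    """r-th moment of Phi ~ GG(nu, k, alpha, l), formula (2.4)."""
    return (alpha ** (-r) * gen_gamma(nu, k + r, alpha * l)
            / gen_gamma(nu, k, alpha * l))


# Illustrative parameter values (our choice, not from the paper).
nu, k, alpha, l = 1.5, 2.0, 2.0, 1.0
beta = alpha * l

# Gamma_nu(k, beta) against its defining integral.
direct, _ = quad(lambda y: y ** (k - 1) * np.exp(-y) / (y + beta) ** nu, 0, np.inf)

# The density integrates to one; the first moment matches (2.4).
mass, _ = quad(lambda p: gg_pdf(p, nu, k, alpha, l), 0, np.inf)
mean, _ = quad(lambda p: p * gg_pdf(p, nu, k, alpha, l), 0, np.inf)
```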

Now, we will define the bivariate Poisson generalized gamma process using the GGD. Let $\{{\mathbf{N}}(t),~t\geq0\}$, where ${\mathbf{N}}(t)=(N_{1} (t),N_{2}(t))$, be a bivariate process. Then, the corresponding "pooled" point process $\{M(t),~t\geq0\}$, where $M(t)=N_{1}(t) + N_{2}(t)$, can be defined. In this paper, we will consider regular (also known as orderly) multivariate point processes. In a univariate point process $\{N(t),~t\geq 0\}$, regularity intuitively means the nonoccurrence of multiple events in a small interval (see e.g., Cox and Lewis [10], Finkelstein [13, 14]; see also Cha and Giorgio [8]). Note that there are two types of regularity in multivariate point processes: (i) marginal regularity and (ii) regularity. A multivariate point process is said to be marginally regular if its marginal processes, considered as univariate point processes, are all regular; it is said to be regular if the pooled process is regular. Throughout this paper, we will assume that the multivariate process $\{{\mathbf{N}}(t),~t\geq0\}$ of interest is a regular process. In the following, we shall use the notation $\Phi \sim{\mathcal{GG}} (\nu, k, \alpha, l)$ to indicate that the continuous random variable Φ follows the GGD with parameters $(\nu, k, \alpha, l)$, and $\{N(t),~t\geq0\} \sim{\mathcal{NHPP}} (\eta(t)) $ to indicate that the counting process $\{N(t),~t\geq0\}$ follows the nonhomogeneous Poisson process (NHPP) with intensity function $\eta(t)$.

Definition 2.1. (Bivariate Poisson Generalized Gamma Process)

A bivariate counting process $\{{\mathbf{N}}(t),~t\geq0\}$ is called the bivariate Poisson generalized gamma process (BPGGP) with the set of parameters $(\lambda_{1}(t),\lambda_{2}(t), \nu, k, \alpha, l)$, $\lambda _{i}(t) \gt 0,~\forall t \geq0$, $i=1,2$, $\nu\geq0, k, \alpha, l \gt 0$, if

  1. (i) $\{N_{i}(t),~t\geq0\}|(\Phi=\phi) \sim{\mathcal{NHPP}} (\phi\lambda_{i}(t)) $, $i=1,2$, independent;

  2. (ii) $\Phi\sim {\mathcal{GG}} (\nu, k, \alpha, l)$.

Throughout this paper, the BPGGP with the set of parameters $(\lambda_{1}(t),\lambda_{2}(t), \nu, k, \alpha, l)$ will be denoted by BPGGP$(\lambda_{1}(t),\lambda_{2}(t),\nu, k, \alpha, l)$.
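Definition 2.1 also suggests a direct two-stage simulation scheme: draw the frailty Φ from the GGD, then run two conditionally independent NHPPs with intensities $\phi\lambda_i(t)$. A minimal illustrative sketch in Python follows (the function names are ours; the GGD draw uses rejection sampling from a gamma proposal, and the NHPPs are simulated by Lewis-Shedler thinning):

```python
import numpy as np


def sample_gg(nu, k, alpha, l, rng):
    """Draw Phi ~ GG(nu, k, alpha, l) by rejection sampling: the GGD density
    differs from the Gamma(k, rate alpha) density only by a factor
    proportional to (phi + l)**(-nu), so a gamma proposal is accepted
    with probability (l / (phi + l))**nu <= 1."""
    while True:
        phi = rng.gamma(k, 1.0 / alpha)
        if rng.random() <= (l / (phi + l)) ** nu:
            return phi


def sample_bpggp(lambda1, lambda2, T, nu, k, alpha, l, rng):
    """One BPGGP path on [0, T]: a common frailty Phi, then two conditionally
    independent NHPPs with intensities Phi * lambda_i(t), simulated by
    Lewis-Shedler thinning of a homogeneous Poisson process."""
    phi = sample_gg(nu, k, alpha, l, rng)
    grid = np.linspace(0.0, T, 201)
    paths = []
    for lam in (lambda1, lambda2):
        # Upper bound for thinning, approximated on a grid (exact for
        # constant or smooth intensities well resolved by the grid).
        lam_max = phi * max(lam(t) for t in grid)
        t, events = 0.0, []
        while True:
            t += rng.exponential(1.0 / lam_max)
            if t > T:
                break
            if rng.random() <= phi * lam(t) / lam_max:
                events.append(t)
        paths.append(events)
    return paths


rng = np.random.default_rng(12345)
# With nu = 0, Phi ~ Gamma(k, rate alpha), so E[N_1(T)] = Lambda_1(T) * k / alpha.
nu, k, alpha, l, T = 0.0, 2.0, 1.0, 1.0, 2.0
counts = [len(sample_bpggp(lambda t: 1.0, lambda t: 0.5, T, nu, k, alpha, l, rng)[0])
          for _ in range(2000)]
```

A Monte Carlo average of the counts can then be compared with the theoretical mean as a sanity check.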

Based on Definition 2.1, we now derive some basic properties of the BPGGP. With that aim, let us introduce $\Lambda_{i}(t)\equiv\int_{0}^{t}\lambda_{i} (s)ds$, $i=1,2$, $t \geq0$.

Proposition 2.2.

  1. (i) $\{{\mathbf{M}}(t)=(M_{1}(t),M_{2}(t)),~t\geq0\}$ is a BPGGP$(1, 1,\nu,k,\alpha,l)$ if and only if $\{{\mathbf{N}}\left(t\right) =\left( M_{1}(\Lambda_{1}(t) ),M_{2}(\Lambda_{2}(t) )\right) ,~t\geq0\}$ is a BPGGP$(\lambda_{1}(t),\lambda_{2}(t),\nu, k, \alpha, l)$.

  2. (ii) For c > 0, let $\tilde{\alpha}=\alpha c$, $\tilde{l}=l/c$ and $\tilde {\lambda}_{i}(t)=c\lambda_{i}(t)$ for $i=1,2$. Then, a BPGGP$(\tilde {\lambda}_{1}(t),\tilde{\lambda}_{2}(t),\nu,k,\tilde{\alpha},\tilde{l})$ is a BPGGP$(\lambda_{1}(t),\lambda_{2}(t),\nu, k, \alpha, l)$.

The proof is similar to that of Proposition 1 in Cha and Mercier [9] for the univariate case and is omitted. As in that paper, the second point of Proposition 2.2 shows that the BPGGP model of Definition 2.1 is not identifiable. An additional constraint, such as $l \equiv1$, should hence be imposed whenever statistical procedures are studied (which is not the case in the present paper).

We now study the distributions for the numbers of events, which are of major interest for any counting process model.

Theorem 2.3. For $0\leq u_{i1} \lt u_{i2} \lt \cdots \lt u_{im}$, $i=1,2$, with the convention $u_{i0}\equiv0$,

\begin{align*} & P(N_{i}(u_{i2})-N_{i}(u_{i1}) =n_{i},~i=1,2)\\ & =\Bigg[\prod_{i=1}^{2}\frac{(\Lambda_{i}(u_{i2})-\Lambda_{i}(u_{i1} ))^{n_{i}}}{{n_{i}}!}\Bigg]\frac{\alpha^{k-\nu}}{(\alpha+\sum_{i=1} ^{2}(\Lambda_{i}(u_{i2})-\Lambda_{i}(u_{i1})))^{k+n_{1}+n_{2}-\nu}}\\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~\times\frac{\Gamma_{\nu}(k+n_{1}+n_{2} ,(\alpha+\sum_{i=1}^{2}(\Lambda_{i}(u_{i2})-\Lambda_{i}(u_{i1})))l)} {\Gamma_{\nu}(k,\alpha l)}, \end{align*}

and

\begin{align*} & P(N_{i}(u_{ij})-N_{i}(u_{ij-1}) =n_{ij},~i=1,2,j=1,2,\cdots,m)\\ & =\Bigg[\prod_{i=1}^{2}\prod_{j=1}^{m}\frac{(\Lambda_{i}(u_{ij})-\Lambda _{i}(u_{ij-1}))^{n_{ij}}}{{n_{ij}}!}\Bigg]\frac{\alpha^{k-\nu}}{(\alpha +\sum_{i=1}^{2}\sum_{j=1}^{m}(\Lambda_{i}(u_{ij})-\Lambda_{i}(u_{ij-1} )))^{k+\sum_{i=1}^{2}\sum_{j=1}^{m}n_{ij}-\nu}}\\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~\times\frac{\Gamma_{\nu}(k+\sum_{i=1}^{2} \sum_{j=1}^{m}n_{ij},(\alpha+\sum_{i=1}^{2}\sum_{j=1}^{m}(\Lambda_{i} (u_{ij})-\Lambda_{i}(u_{ij-1})))l)}{\Gamma_{\nu}(k,\alpha l)}. \end{align*}

Proof. From the definition of BPGGP$(\lambda _{1}(t),\lambda_{2}(t),\nu,k,\alpha,l)$,

\begin{align*} & P(N_{i}(u_{i2})-N_{i}(u_{i1})=n_{i},~i=1,2)\\ & =\int_{0}^{\infty}\Bigg[\prod_{i=1}^{2}\frac{(\phi(\Lambda_{i} (u_{i2})-\Lambda_{i}(u_{i1})))^{n_{i}}\exp\{-\phi(\Lambda_{i}(u_{i2} )-\Lambda_{i}(u_{i1}))\}}{n_{i}!}\Bigg]\frac{\alpha^{k-\nu}}{\Gamma_{\nu }(k,\alpha l)}\frac{\phi^{k-1}\exp\{-\alpha\phi\}}{(\phi+l)^{\nu}}d\phi\\ & =\Bigg[\prod_{i=1}^{2}\frac{(\Lambda_{i}(u_{i2})-\Lambda_{i}(u_{i1} ))^{n_{i}}}{{n_{i}}!}\Bigg]\frac{\alpha^{k-\nu}}{\Gamma_{\nu}(k,\alpha l)}\\ & ~~~~~~~~~~~~~~~~\times\int_{0}^{\infty}\frac{\phi^{k+n_{1}+n_{2}-1} \exp\{-\phi(\alpha+\sum_{i=1}^{2}(\Lambda_{i}(u_{i2})-\Lambda_{i}(u_{i1} )))\}}{(\phi+l)^{\nu}}d\phi\\ & =\Bigg[\prod_{i=1}^{2}\frac{(\Lambda_{i}(u_{i2})-\Lambda_{i}(u_{i1} ))^{n_{i}}}{{n_{i}}!}\Bigg]\frac{\alpha^{k-\nu}}{(\alpha+\sum_{i=1} ^{2}(\Lambda_{i}(u_{i2})-\Lambda_{i}(u_{i1})))^{k+n_{1}+n_{2}-\nu}}\\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~\times\frac{\Gamma_{\nu}(k+n_{1}+n_{2} ,(\alpha+\sum_{i=1}^{2}(\Lambda_{i}(u_{i2})-\Lambda_{i}(u_{i1})))l)} {\Gamma_{\nu}(k,\alpha l)}. \end{align*}

In a similar way,

\begin{align*} & P(N_{i}(u_{ij})-N_{i}(u_{ij-1}) =n_{ij},~i=1,2,j=1,2,\cdots,m)\\ & =\int_{0}^{\infty}\Bigg[\prod_{i=1}^{2}\prod_{j=1}^{m}\frac{(\phi (\Lambda_{i}(u_{ij})-\Lambda_{i}(u_{ij-1})))^{n_{ij}}\exp\{-\phi(\Lambda _{i}(u_{ij})-\Lambda_{i}(u_{ij-1}))\}}{n_{ij}!}\Bigg]\\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\times \frac{\alpha^{k-\nu}}{\Gamma_{\nu}(k,\alpha l)}\frac{\phi^{k-1}\exp \{-\alpha\phi\}}{(\phi+l)^{\nu}}d\phi\\ & =\Bigg[\prod_{i=1}^{2}\prod_{j=1}^{m}\frac{(\Lambda_{i}(u_{ij})-\Lambda _{i}(u_{ij-1}))^{n_{ij}}}{{n_{ij}}!}\Bigg]\frac{\alpha^{k-\nu}}{(\alpha +\sum_{i=1}^{2}\sum_{j=1}^{m}(\Lambda_{i}(u_{ij})-\Lambda_{i}(u_{ij-1} )))^{k+\sum_{i=1}^{2}\sum_{j=1}^{m}n_{ij}-\nu}}\\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~\times\frac{\Gamma_{\nu}(k+\sum_{i=1}^{2} \sum_{j=1}^{m}n_{ij},(\alpha+\sum_{i=1}^{2}\sum_{j=1}^{m}(\Lambda_{i} (u_{ij})-\Lambda_{i}(u_{ij-1})))l)}{\Gamma_{\nu}(k,\alpha l)}. \end{align*}
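As a numerical sanity check (illustrative parameter values of our choosing, assuming SciPy), the closed form of Theorem 2.3 can be compared with the mixing integral in the first line of the proof, and the probabilities can be verified to sum to one:

```python
import math

import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyperu


def gen_gamma(nu, k, beta):
    """Gamma_nu(k, beta) via the Gupta-Ong identity."""
    return gamma(k) * beta ** (k - nu) * hyperu(k, k - nu + 1.0, beta)


def joint_pmf(n1, n2, d1, d2, nu, k, alpha, l):
    """Closed form of Theorem 2.3 for the joint probability of the
    increments, where d_i = Lambda_i(u_i2) - Lambda_i(u_i1)."""
    s = d1 + d2
    return (d1 ** n1 / math.factorial(n1) * d2 ** n2 / math.factorial(n2)
            * alpha ** (k - nu) / (alpha + s) ** (k + n1 + n2 - nu)
            * gen_gamma(nu, k + n1 + n2, (alpha + s) * l)
            / gen_gamma(nu, k, alpha * l))


def joint_pmf_by_mixing(n1, n2, d1, d2, nu, k, alpha, l):
    """Same probability, obtained by integrating the conditional Poisson
    probabilities against the GGD density of Phi (first line of the proof)."""
    def integrand(phi):
        pois = ((phi * d1) ** n1 * np.exp(-phi * d1) / math.factorial(n1)
                * (phi * d2) ** n2 * np.exp(-phi * d2) / math.factorial(n2))
        dens = (alpha ** (k - nu) / gen_gamma(nu, k, alpha * l)
                * phi ** (k - 1) * np.exp(-alpha * phi) / (phi + l) ** nu)
        return pois * dens
    val, _ = quad(integrand, 0, np.inf)
    return val


nu, k, alpha, l = 1.0, 2.5, 2.0, 1.0
d1, d2 = 0.4, 0.6  # increments of the cumulative baseline intensities
total = sum(joint_pmf(n1, n2, d1, d2, nu, k, alpha, l)
            for n1 in range(30) for n2 in range(30))
```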

The joint moments of $(N_{1}(t_{1}),N_{2}(t_{2}))$ are also of practical interest for applications. They are obtained in the following theorem.

Theorem 2.4. Let $\{{\mathbf{N} }(t),~t\geq0\}$ be the BPGGP with the set of parameters $(\lambda _{1}(t),\lambda_{2}(t),\nu,k,\alpha,l)$. Then the following properties hold.

  1. (i) The joint moment generating function of $(N_{1}(t),N_{2}(t))$ is given by

    \begin{align*} M_{N_{1}(t_{1}),N_{2}(t_{2})}(s_{1},s_{2}) & =\left( 1-\frac{(\Lambda _{1}\left( t_{1}\right) \left( e^{s_{1}}-1\right) +\Lambda_{2}\left( t_{2}\right) \left( e^{s_{2}}-1\right) )}{\alpha}\right) ^{\nu-k}\\ & ~~~~~\times\frac{\Gamma_{\nu}\left[ k,(\alpha-(\Lambda_{1}\left( t_{1}\right) \left( e^{s_{1}}-1\right) +\Lambda_{2}\left( t_{2}\right) \left( e^{s_{2}}-1\right) ))l\right] }{\Gamma_{\nu}\left( k,\alpha l\right) } \end{align*}

    for all $s_{1}$ and $s_{2}$ such that $\Lambda_{1}\left( t_{1}\right) \left( e^{s_{1}}-1\right) +\Lambda_{2}\left( t_{2}\right) \left( e^{s_{2}}-1\right) \lt \alpha$.

  2. (ii) We have

    \begin{equation*} E\left[ N_{1}\left( t_{1}\right) ^{r_{1}}N_{2}\left( t_{2}\right) ^{r_{2}}\right] =\sum_{i_{1}=0}^{r_{1}}\sum_{i_{2}=0}^{r_{2}}\frac{\left( \Lambda_{1}(t_{1})\right) ^{i_{1}}\left( \Lambda_{2}(t_{2})\right) ^{i_{2} }}{\alpha^{i_{1}+i_{2}}}\frac{\Gamma_{\nu}\left( k+i_{1}+i_{2},\alpha l\right) }{\Gamma_{\nu}\left( k,\alpha l\right) } \genfrac{\{}{\}}{0pt}{}{r_{1}}{i_{1}} \genfrac{\{}{\}}{0pt}{}{r_{2}}{i_{2}} \end{equation*}

    for all $r_{1},r_{2}\in\mathbb{N}$, where the braces denote Stirling numbers of the second kind.

  3. (iii) The covariance of $(N_{1} (t_{1}),N_{2}(t_{2}))$ is given by

    (2.5)\begin{equation}Cov(N_1(t_1),N_2(t_2))=\frac{\Lambda_1(t_1)\Lambda_2(t_2)}{\alpha^2}\left[\frac{\Gamma_\nu\left(k+2,\alpha l\right)}{\Gamma_\nu\left(k,\alpha l\right)}-\left(\frac{\Gamma_\nu\left(k+1,\alpha l\right)}{\Gamma_\nu\left(k,\alpha l\right)}\right)^2\right],\end{equation}

    and the corresponding Pearson’s correlation coefficient is:

    \begin{equation*} \rho_{\left( N_{1}(t_{1}),N_{2}(t_{2})\right) }=\left( \sqrt{1+\frac {C\left( k,\alpha,l,\nu\right) }{\Lambda_{1}(t_{1})}}\sqrt{1+\frac{C\left( k,\alpha,l,\nu\right) }{\Lambda_{2}(t_{2})}}\right) ^{-1}, \end{equation*}

    where

    \begin{equation*} C\left( k,\alpha,l,\nu\right) =\alpha\left( \frac{\Gamma_{\nu}\left( k+2,\alpha l\right) }{\Gamma_{\nu}\left( k+1,\alpha l\right) }-\frac {\Gamma_{\nu}\left( k+1,\alpha l\right) }{\Gamma_{\nu}\left( k,\alpha l\right) }\right) ^{-1}. \end{equation*}
Proof.

  1. (i) Conditioning on Φ, we can write

    \begin{equation*} M_{N_{1}(t_{1}),N_{2}(t_{2})}(s_{1},s_{2})=E\left[ E\left( e^{s_{1} N_{1}(t_{1})+s_{2}N_{2}(t_{2})}|\Phi\right) \right], \end{equation*}

    where $\left[ N_{i}(t_{i})|\Phi=\phi\right] $, $i=1,2$, are independent and Poisson distributed with parameters $\Lambda_{i}\left( t_{i}\right) \phi$, respectively. By using the moment generating function of a Poisson distribution (see e.g., Ross [23]), we have:

    \begin{align*} M_{N_{1}(t_{1}),N_{2}(t_{2})}(s_{1},s_{2}) & =E\left[ \exp\left\{ (\Lambda_{1}\left( t_{1}\right) \left( e^{s_{1}}-1\right) +\Lambda _{2}\left( t_{2}\right) \left( e^{s_{2}}-1\right) )\Phi\right\} \right] \\ & =M_{\Phi}\left[ (\Lambda_{1}\left( t_{1}\right) \left( e^{s_{1} }-1\right) +\Lambda_{2}\left( t_{2}\right) \left( e^{s_{2}}-1\right) )\right] \\ & =\left( 1-\frac{(\Lambda_{1}\left( t_{1}\right) \left( e^{s_{1} }-1\right) +\Lambda_{2}\left( t_{2}\right) \left( e^{s_{2}}-1\right) )}{\alpha}\right) ^{\nu-k}\\ & ~~~~~\times\frac{\Gamma_{\nu}\left[ k,(\alpha-(\Lambda_{1}\left( t_{1}\right) \left( e^{s_{1}}-1\right) +\Lambda_{2}\left( t_{2}\right) \left( e^{s_{2}}-1\right) ))l\right] }{\Gamma_{\nu}\left( k,\alpha l\right) } \end{align*}

    for all $s_{1}$ and $s_{2}$ such that $\Lambda_{1}\left( t_{1}\right) \left( e^{s_{1}}-1\right) +\Lambda_{2}\left( t_{2}\right) \left( e^{s_{2}}-1\right) \lt \alpha$, due to (2.3).

  2. (ii) By a similar procedure, we have:

    \begin{align*} E\left[ N_{1}\left( t_{1}\right) ^{r_{1}}N_{2}\left( t_{2}\right) ^{r_{2}}\right] & =E\left[ E\left( N_{1}\left( t_{1}\right) ^{r_{1} }N_{2}\left( t_{2}\right) ^{r_{2}}|\Phi\right) \right] \\ & =E\left[ E\left( N_{1}\left( t_{1}\right) ^{r_{1}}|\Phi\right) E\left( N_{2}\left( t_{2}\right) ^{r_{2}}|\Phi\right) \right], \end{align*}

    due to the conditional independence between $N_{1}\left( t_{1}\right) $ and $N_{2}\left( t_{2}\right) $ given Φ. Based on Formula (3.4) in Riordan [22], which provides the moments of Poisson distributions in terms of Stirling numbers of the second kind, we now get:

    \begin{equation*} E\left[ N_{1}\left( t_{1}\right) ^{r_{1}}N_{2}\left( t_{2}\right) ^{r_{2}}\right] =E\left[ \sum_{i_{1}=0}^{r_{1}}\sum_{i_{2}=0} ^{r_{2}}\left( \Lambda_{1}(t_{1})\right) ^{i_{1}}\left( \Lambda_{2} (t_{2})\right) ^{i_{2}}\Phi^{i_{1}+i_{2}} \genfrac{\{}{\}}{0pt}{}{r_{1}}{i_{1}} \genfrac{\{}{\}}{0pt}{}{r_{2}}{i_{2}} \right]. \end{equation*}

    The result follows, based on (2.4).

  3. (iii) The computation of the covariance now is a direct consequence of point (ii). The variance of each $N_{i}\left( t_{i}\right) $, $i=1,2$, can be derived in the same way from point (ii) (or using Theorem 2 from Cha and Mercier [9], based on the fact that $\{N_{i}(t),t\geq0\}$, $i=1,2$, are univariate Poisson generalized gamma processes, see Proposition 2.7 later on). We obtain

    (2.6)\begin{align}Var(N_i(t_i)) & = \frac{\Lambda_i(t_i)}\alpha\frac{\Gamma_\nu(k+1,\alpha l)}{\Gamma_\nu(k,\alpha l)}\nonumber\\ & \quad +\left(\frac{\Lambda_i(t_i)}\alpha\right)^2\left[\frac{\Gamma_\nu(k+2,\alpha l)}{\Gamma_\nu(k,\alpha l)}-\left(\frac{\Gamma_\nu(k+1,\alpha l)}{\Gamma_\nu(k,\alpha l)}\right)^2\right], \end{align}

    for $i=1,2$. The result for Pearson's correlation coefficient is then a routine computation, remembering that

    \begin{equation*} \rho_{\left( N_{1}(t_{1}),N_{2}(t_{2})\right) }=\frac{Cov(N_{1}(t_{1} ),N_{2}(t_{2}))}{\sqrt{Var\left( N_{1}\left( t_{1}\right) \right) } \sqrt{Var\left( N_{2}\left( t_{2}\right) \right) }}. \end{equation*}

Remark 2.5. Based on (2.5) and (2.6), it is clear that $Cov\left( N_{1}(t_{1}),N_{2} (t_{2})\right) \gt 0$, which shows that $N_{1}(t_{1})$ and $N_{2}(t_{2})$ are always positively correlated, whatever the parameters of the BPGGP and the times $t_{1}$ and $t_{2}$. Also, $\rho_{\left( N_{1} (t_{1}),N_{2}(t_{2})\right)} $ is nondecreasing with respect to $\left( t_{1},t_{2}\right) $, with

\begin{align*} \lim_{\left( t_{1},t_{2}\right) \rightarrow\left( 0,0\right) ^{+}} \rho_{\left( N_{1}(t_{1}),N_{2}(t_{2})\right) } & =0,\\ \lim_{\left( t_{1},t_{2}\right) \rightarrow\left( \infty,\infty\right) }\rho_{\left( N_{1}(t_{1}),N_{2}(t_{2})\right) } & =1. \end{align*}
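The covariance formula (2.5), the variance formula (2.6), the closed form of the correlation coefficient, and the limits stated in Remark 2.5 can all be checked numerically. A sketch with illustrative parameter values (our choice) follows:

```python
import numpy as np
from scipy.special import gamma, hyperu


def gen_gamma(nu, k, beta):
    """Gamma_nu(k, beta) via the Gupta-Ong identity."""
    return gamma(k) * beta ** (k - nu) * hyperu(k, k - nu + 1.0, beta)


def phi_moment(r, nu, k, alpha, l):
    """E(Phi^r), formula (2.4)."""
    return alpha ** (-r) * gen_gamma(nu, k + r, alpha * l) / gen_gamma(nu, k, alpha * l)


def cov_N(L1, L2, nu, k, alpha, l):
    """Cov(N_1(t_1), N_2(t_2)) from (2.5); equals Lambda_1 * Lambda_2 * Var(Phi)."""
    var_phi = phi_moment(2, nu, k, alpha, l) - phi_moment(1, nu, k, alpha, l) ** 2
    return L1 * L2 * var_phi


def var_N(L, nu, k, alpha, l):
    """Var(N_i(t_i)) from (2.6): Lambda * E(Phi) + Lambda**2 * Var(Phi)."""
    var_phi = phi_moment(2, nu, k, alpha, l) - phi_moment(1, nu, k, alpha, l) ** 2
    return L * phi_moment(1, nu, k, alpha, l) + L ** 2 * var_phi


def rho(L1, L2, nu, k, alpha, l):
    """Pearson correlation via the closed form involving C(k, alpha, l, nu)."""
    C = alpha / (gen_gamma(nu, k + 2, alpha * l) / gen_gamma(nu, k + 1, alpha * l)
                 - gen_gamma(nu, k + 1, alpha * l) / gen_gamma(nu, k, alpha * l))
    return 1.0 / (np.sqrt(1.0 + C / L1) * np.sqrt(1.0 + C / L2))


nu, k, alpha, l = 1.0, 2.0, 1.5, 0.8
L1, L2 = 2.0, 3.0  # Lambda_1(t_1), Lambda_2(t_2)
rho_direct = cov_N(L1, L2, nu, k, alpha, l) / np.sqrt(
    var_N(L1, nu, k, alpha, l) * var_N(L2, nu, k, alpha, l))
```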

Now, in the following proposition, some important properties of the BPGGP are stated. For this, we employ the same notations as those in Cha and Giorgio [8]. Denote by ${\mathcal{H}}_{Pt-} \equiv\{M(u),~0\leq u \lt t\}$ the history of the pooled process in $[0,t)$. Define $M(t-)$ as the total number of events in $[0,t)$ and $T_{i}$ as the time from 0 until the arrival of the ith event of the pooled process $\{M(t),~t\geq0\}$ in $[0,t)$. Then ${\mathcal{H}}_{Pt-}$ can equivalently be defined in terms of $M(t-)$ and the sequential arrival points of the events $0\leq T_{1}\leq T_{2} \leq\cdots\leq T_{M(t-)} \lt t$ in $[0,t)$. Similarly, define the marginal histories of the marginal processes ${\mathcal{H}}_{it-} \equiv\{N_{i}(u),~0\leq u \lt t\}$, $i=1,2 $. Then, ${\mathcal{H}}_{it-}$ can also be defined in terms of $N_{i}(t-)$ and the sequential arrival points of the events $0\leq T_{i1}\leq T_{i2} \leq\cdots\leq T_{iN_{i}(t-)} \lt t$ in $[0,t)$, $i=1,2$, where $N_{i}(t-)$ is the total number of events of the type i point process in $[0,t)$, $i=1,2$. In the following, we also use the definition of "p(t)-thinning" in Cha and Giorgio [8]. Finally, we define the univariate Poisson generalized gamma process, which is needed to describe the marginal processes.

Definition 2.6. (Poisson Generalized Gamma Process)

A counting process $\{N(t),~t\geq0\}$ is called the Poisson generalized gamma process (PGGP) with the set of parameters $(\lambda(t),\nu, k, \alpha, l)$, $\lambda(t) \gt 0,~\forall t \geq0$, $\nu\geq0, k, \alpha, l \gt 0$, if

  1. (i) $\{N(t),~t\geq0\}|(\Phi=\phi) \sim{\mathcal{NHPP}} (\phi \lambda(t))$;

  2. (ii) $\Phi\sim{\mathcal{GG}} (\nu, k, \alpha, l)$.

See Cha and Mercier [9] for various properties of the PGGP.

For convenience, we now introduce the following notations: $N_{ui}(t)\equiv N_{i}(u+t)-N_{i}(u)$, $\Lambda_{i}(t) \equiv\int_{0}^{t}\lambda_{i}(u)du$, $i=1,2$, $\lambda(t)\equiv\lambda_{1}(t)+\lambda_{2}(t)$, $\Lambda(t)=\int _{0}^{t}\lambda(u)du=\Lambda_{1}(t)+\Lambda_{2}(t)$, and $p_{i}(t)\equiv \lambda_{i}(t)/\lambda(t)$, $i=1,2$.

Proposition 2.7. Let $\{{\mathbf{N}}(t),~t\geq0\}$ be the BPGGP with the set of parameters $(\lambda _{1}(t),\lambda_{2}(t), \nu, k, \alpha, l)$. Then

  1. (i) The pooled process $\{M(t),~t\geq0\}$ is PGGP$(\lambda(t),\nu, k, \alpha, l)$.

  2. (ii) The process $\{{\mathbf{N}}(t),~t\geq0\}$ is constructed by thinning of $\{M(t),~t\geq0\}$ with thinning probabilities $p_{i}(t)$, $i=1,2$: $\{(M_{p_{1}(\cdot)}(t),M_{p_{2}(\cdot)}(t)),~t\geq0\}$.

  3. (iii) Given $({\mathcal{H}}_{1u-}, {\mathcal{H}}_{2u-})$, $\{{\mathbf{N}} _{u}(t),~t\geq0\}$, where ${\mathbf{N}}_{u}(t)=(N_{u1}(t),N_{u2}(t))$, is BPGGP$(\lambda_{1}(t+u),\lambda_{2}(t+u), \nu, k+n_{1}+n_{2}, \alpha +\Lambda(u), l)$, where $n_{i}$ is the realization of $N_{i}(u-)$, $i=1,2$, respectively.

  4. (iv) For any fixed $u\geq0$, $\{{\mathbf{N}} _{u}(t),~t\geq0\}$ is “unconditionally” BPGGP$(\lambda_{1}(t+u),\lambda _{2}(t+u),\nu, k, \alpha, l)$.

  5. (v) The marginal process $\{N_{i} (t),~t\geq0\}$ is PGGP$(\lambda_{i}(t),\nu, k, \alpha, l)$, $i=1,2$.

Proof. Properties (i), (ii), (iv) and (v) obviously hold. Let $\{N(t),~t \geq0\}$ be PGGP$(\lambda(t),\nu, k, \alpha, l)$. Then, at an arbitrary time u > 0, given $\{{N(u-)}=n, T_{1}=t_{1},T_{2}=t_{2} ,\cdots,T_{n}=t_{n}\}$, the conditional future process $\{N_{u}(t),~t \geq 0\}$, where $N_{u}(t)\equiv N(u+t)-N(u)$, is a PGGP with the set of parameters $(\lambda(u+t),\nu, k+n,\alpha+\Lambda(u),l)$ (see Cha and Mercier [9]). Then property (iii) also obviously holds due to property (ii).

Properties (iii) and (iv) state the conditional and unconditional restarting properties of the BPGGP. For more details on the restarting property, see Cha [7] and Cha and Giorgio [8].

3. Further properties of bivariate Poisson generalized gamma process

An efficient characterization of a multivariate point process can be obtained through the stochastic intensity approach (see Cox and Lewis [10], Cha and Giorgio [8]). As mentioned in Cha and Giorgio [8], a marginally regular bivariate process can be specified by the following complete intensity functions:

(3.1)\begin{align} \lambda_{1t} & \equiv\lim_{\Delta t \rightarrow0} \frac{P\left( N_{1}(t,t+\Delta t)\geq1|{\mathcal{H}}_{1t-};{\mathcal{H} }_{2t-}\right) }{\Delta t}=\lim_{\Delta t \rightarrow0} \frac{P\left( N_{1}(t,t+\Delta t)=1|{\mathcal{H}}_{1t-};{\mathcal{H}}_{2t-}\right) }{\Delta t}, \cr \lambda_{2t} & \equiv\lim_{\Delta t \rightarrow0} \frac{P\left( N_{2}(t,t+\Delta t)\geq1|{\mathcal{H}}_{1t-};{\mathcal{H}}_{2t-}\right) }{\Delta t}=\lim_{\Delta t \rightarrow0} \frac{P\left( N_{2}(t,t+\Delta t)=1|{\mathcal{H}}_{1t-};{\mathcal{H}}_{2t-}\right) }{\Delta t}, \cr \lambda_{12t} & \equiv\lim_{\Delta t \rightarrow0} \frac{P\left( N_{1}(t,t+\Delta t)N_{2}(t,t+\Delta t)\geq1|{\mathcal{H}}_{1t-};{\mathcal{H} }_{2t-}\right) }{\Delta t}, \end{align}

where $N_{i}(t_{1},t_{2})$, $t_{1} \lt t_{2}$, represents the number of events of the type $i$ process in $[t_{1} , t_{2})$, $i=1,2$, respectively (see Cox and Lewis [10]). For a regular process, $\lambda_{12t}=0$, and it is sufficient to specify just $\lambda_{1t}$ and $\lambda_{2t}$ in (3.1) in order to define a regular process.

Theorem 3.1. The complete intensity functions of the BPGGP with the set of parameters $(\lambda_{1}(t),\lambda _{2}(t), \nu, k, \alpha, l)$ are given by

(3.2)\begin{align} \lambda_{it}=\frac{1}{(\alpha+\Lambda_{1}(t)+\Lambda_{2} (t))}\frac{\Gamma_{\nu} (k+N_{1}(t-)+N_{2}(t-)+1, (\alpha+\Lambda _{1}(t)+\Lambda_{2}(t))l)}{\Gamma_{\nu} (k+N_{1}(t-)+N_{2}(t-), (\alpha +\Lambda_{1}(t)+\Lambda_{2}(t))l)}\lambda_{i}(t),~i=1,2. \end{align}

Proof. Observe that

\begin{align*} \lambda_{1t} & =\lim_{\Delta t \rightarrow0} \frac{P\left( N_{1}(t,t+\Delta t)=1|{\mathcal{H}}_{1t-};{\mathcal{H}}_{2t-}\right) }{\Delta t} \\ & =E_{(\Phi| {\mathcal{H}}_{1t-};{\mathcal{H}}_{2t-})} \left[ \lim_{\Delta t \rightarrow0} \frac{P\left( N_{1}(t,t+\Delta t)=1|\Phi; {\mathcal{H}}_{1t-};{\mathcal{H} }_{2t-}\right) }{\Delta t}\right], \end{align*}

where $E_{(\Phi| {\mathcal{H}}_{1t-};{\mathcal{H}}_{2t-})}[~\cdot~]$ stands for the expectation with respect to the conditional distribution of $(\Phi| {\mathcal{H}}_{1t-};{\mathcal{H}}_{2t-})$ and

\begin{align*} \lim_{\Delta t \rightarrow0} \frac{P\left( N_{1}(t,t+\Delta t)=1|\Phi; {\mathcal{H}}_{1t-};{\mathcal{H}}_{2t-}\right) }{\Delta t}=\Phi\lambda _{1}(t). \end{align*}

Thus, $\lambda_{1t}=E_{(\Phi| {\mathcal{H}}_{1t-};{\mathcal{H}}_{2t-})} [\Phi\lambda_{1}(t)]$.

Similar to the procedure described in Cha [7], the conditional distribution of $(\Phi| {\mathcal{H}}_{it-}={\mathbf{h}}_{it-},~i=1,2)$, where ${\mathbf{h}}_{it-}\equiv(t_{i1},t_{i2},\cdots,t_{in_{i}},n_{i})$ is the realization of ${\mathcal{H}}_{it-}$, $i=1,2$, respectively, is given by

\begin{align*} & \Bigg(\phi^{n_{1}+n_{2}} \exp\left\{ -\phi\int_{0}^{t} \sum_{i=1} ^{2}\lambda_{i}(x)dx\right\} f(\phi) \Bigg) \\ & \times \Bigg(\int_{0}^{\infty}\phi^{n_{1}+n_{2}} \exp\left\{ -\phi\int_{0}^{t} \sum_{i=1}^{2}\lambda_{i}(x)dx\right\} f(\phi) d\phi\Bigg)^{-1}, \end{align*}

where we recall that f stands for the probability density function of Φ. Then,

\begin{align*} \lambda_{1t} & =E_{(\Phi| {\mathcal{H}}_{1t-};{\mathcal{H}}_{2t-})} [\Phi\lambda_{1}(t)]\\ & = \frac{\int_{0}^{\infty} \phi^{n_{1}+n_{2}+1} \exp\left\{ -\phi\int_{0}^{t} \sum_{i=1}^{2}\lambda_{i}(x)dx\right\} f(\phi) d\phi}{\int_{0}^{\infty}\phi^{n_{1}+n_{2}} \exp\left\{ -\phi\int_{0}^{t} \sum_{i=1}^{2}\lambda_{i}(x)dx\right\} f(\phi) d\phi} \lambda_{1}(t), \end{align*}

which extends Proposition 4.1 in Grandell [16] to a bivariate and nonhomogeneous mixed Poisson process. Now, one can check that

\begin{align*} & \int_{0}^{\infty}\phi^{n} \exp\left\{ -\phi\int_{0}^{t} \sum_{i=1} ^{2}\lambda_{i}(x)dx\right\} f(\phi) d\phi\\ & =\frac{\alpha^{k-\nu}} {(\alpha+\Lambda_{1}(t)+\Lambda_{2}(t))^{k+n-\nu}} \frac{\Gamma_{\nu} (k+n, (\alpha+\Lambda_{1}(t)+\Lambda_{2}(t))l)}{\Gamma_{\nu} (k, \alpha l)}. \end{align*}

Therefore,

\begin{align*} \lambda_{1t}=\frac{1}{(\alpha+\Lambda_{1}(t)+\Lambda_{2}(t))}\frac{\Gamma _{\nu} (k+N_{1}(t-)+N_{2}(t-)+1, (\alpha+\Lambda_{1}(t)+\Lambda_{2} (t))l)}{\Gamma_{\nu} (k+N_{1}(t-)+N_{2}(t-), (\alpha+\Lambda_{1} (t)+\Lambda_{2}(t))l)}\lambda_{1}(t). \end{align*}

The intensity function $\lambda_{2t}$ can be obtained symmetrically.

Proposition 3.2. Let $\{{\mathbf{N} }(t),~t\geq0\}$ be a BPGGP with the set of parameters $\left( \lambda_{1}(t),\lambda_{2}(t), \nu=0, k, \alpha, l\right) $ such that $\lambda_{i}(t)\equiv\phi_{i}(t)\exp\{(\Phi_{1}(t)+\Phi_{2}(t))/\alpha\}$, $i=1,2$, where $\Phi_{i}(t)\equiv\int_{0}^{t}\phi_{i}(s)ds$, $i=1,2$. Then $\{{\mathbf{N}}(t),~t\geq0\}$ is a BVGPP$(\phi_{1}(t),\phi_{2}(t),1/\alpha ,k/\alpha)$ (whatever l is).

Proof. Under the given specific setting, it can be shown that the complete intensity functions in (3.2) become those of the BVGPP given in Definition 1 of Cha and Giorgio [8].

The result of Proposition 3.2 is also clear from the definition of the BPGGP and the characterization of the BVGPP in Theorem 2 of Cha and Giorgio [8].

It can be shown that

\begin{align*} \eta(k)\equiv\frac{\Gamma_{\nu} (k+1, \alpha l)}{\Gamma_{\nu} (k, \alpha l)}, \end{align*}

is increasing in k > 0 for any $\nu\geq0, \alpha, l \gt 0$ (see the proof of Proposition 8 in Cha and Mercier [9]). Thus, the complete intensity functions in (3.2) are increasing in $N_{1}(t-)+N_{2}(t-)$. This implies that the propensity for future event occurrences in each marginal process increases with the number of events that have previously occurred in the pooled process.

4. Multivariate Poisson generalized gamma process

In this section, we study the multivariate Poisson generalized gamma process (MPGGP), by extending the results obtained in the previous sections. As in Cha and Giorgio [8], let $\{{\mathbf{N}}(t),~t\geq0\}$, where ${\mathbf{N}}(t)=(N_{1}(t),N_{2}(t),\cdots,N_{m}(t))$, be a multivariate process and define the corresponding "pooled" point process $\{M(t),~t\geq0\}$, where $M(t)=N_{1}(t)+N_{2}(t)+\cdots+N_{m}(t)$. Also, define the marginal point processes $\{N_{i}(t),~t\geq0\}$, $i=1,2,\cdots,m$, and the corresponding marginal histories of the marginal processes: ${\mathcal{H}}_{it-}$, $i=1,2,\cdots,m$. The MPGGP can be defined by generalizing Definition 2.1.

Definition 4.1. (Multivariate Poisson Generalized Gamma Process)

A multivariate counting process $\{{\mathbf{N}}(t),~t\geq0\}$ is called the multivariate Poisson generalized gamma process (MPGGP) with the set of parameters $(\lambda_{i}(t), i=1,2,\cdots,m, \nu, k, \alpha, l)$, where $\lambda_{i}(t) \gt 0,~\forall t \geq0$, $i=1,2,\cdots,m$, $\nu\geq0, k, \alpha, l \gt 0$, if

  1. (i) $\{N_{i}(t),~t\geq0\}|(\Phi=\phi) \sim{\mathcal{NHPP}} (\phi\lambda_{i}(t)) $, $i=1,2,\cdots,m$, independent;

  2. (ii) $\Phi\sim{\mathcal{GG}} (\nu, k, \alpha, l)$.

To state the properties of the MPGGP, we define $\lambda (t)=\sum_{i=1}^{m}\lambda_{i}(t)$, $\Lambda(t)=\int_{0}^{t}\lambda(v)dv$, $p_{i}(t)=\lambda_{i}(t)/\lambda(t)$, and $N_{ui}(t)\equiv N_{i} (u+t)-N_{i}(u)$, $i=1,2,\cdots,m$.

Proposition 4.2. Let $\{{\mathbf{N} }(t),~t\geq0\}$ be the MPGGP with the set of parameters $(\lambda_{i}(t), i=1,2,\cdots,m, \nu, k, \alpha, l)$. Then

  1. (i) The pooled process $\{M(t),~t\geq0\}$ is PGGP$(\lambda(t),\nu, k, \alpha, l)$.

  2. (ii) The process $\{{\mathbf{N}}(t),~t\geq0\}$ is constructed by thinning of $\{M(t),~t\geq0\}$ with thinning probabilities $p_{i}(t)$, $i=1,2,\cdots,m$: $\{(M_{p_{1}(\cdot)}(t),M_{p_{2}(\cdot)}(t),\cdots,M_{p_{m}(\cdot)} (t)),~t\geq0\}$.

  3. (iii) Given $({\mathcal{H}}_{iu-}, i=1,2,\cdots,m)$, $\{{\mathbf{N}}_{u}(t),~t\geq0\}$, where ${\mathbf{N}}_{u}(t)=(N_{u1} (t),N_{u2}(t),\cdots,N_{um}(t) )$, is MPGGP$(\lambda_{i}(t+u),i=1,2,\cdots,m, \nu, k+\sum_{i=1}^{m}n_{i}, \alpha+\Lambda(u), l)$, where $n_{i}$ is the realization of $N_{i}(u-)$, $i=1,2,\cdots,m$, respectively.

  4. (iv) For any fixed $u\geq0$, $\{{\mathbf{N}}_{u}(t),~t\geq0\}$ is “unconditionally” MPGGP$(\lambda_{i}(t+u), i=1,2,\cdots,m,\nu, k, \alpha, l)$.

  5. (v) The marginal process $\{N_{i}(t),~t\geq0\}$ is PGGP$(\lambda_{i}(t),\nu, k, \alpha, l)$, $i=1,2,\cdots,m$.
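The thinning construction in (ii) can be sketched in code. The following is a minimal illustration (the function name and interface are ours, not from the paper): given the event times of the pooled process, each event occurring at time t is assigned to margin i with probability $p_{i}(t)=\lambda_{i}(t)/\lambda(t)$, independently of everything else.

```python
import random

def thin_pooled_events(pooled_times, lambdas):
    """Assign each pooled event at time t to margin i with probability
    p_i(t) = lambda_i(t) / sum_j lambda_j(t), as in Proposition 4.2(ii)."""
    margins = [[] for _ in lambdas]
    for t in pooled_times:
        rates = [lam(t) for lam in lambdas]
        u = random.random() * sum(rates)
        cum = 0.0
        for i, r in enumerate(rates):
            cum += r
            if u <= cum:
                margins[i].append(t)
                break
    return margins
```

For instance, with constant baseline intensities $\lambda_{1}=1$ and $\lambda_{2}=3$, each pooled event goes to the second margin with probability 3/4, whatever its time of occurrence.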

In addition to the basic properties stated in Proposition 4.2, it can be shown that the complete intensity functions are given by:

\begin{align*} \lambda_{it} &\equiv\lim_{\Delta t \rightarrow0} \frac{P\left( N_{i}(t,t+\Delta t)=1|{\mathcal{H}}_{1t-};{\mathcal{H}}_{2t-};\cdots ;{\mathcal{H}}_{mt-}\right) }{\Delta t} \\ & =\frac{1}{\alpha+\sum_{j=1}^{m}\Lambda_{j}(t)}\frac{\Gamma_{\nu} (k+\sum_{j=1}^{m}N_{j}(t-)+1, (\alpha+\sum_{j=1}^{m}\Lambda_{j}(t))l)}{\Gamma_{\nu} (k+\sum_{j=1}^{m} N_{j}(t-), (\alpha+\sum_{j=1}^{m}\Lambda_{j}(t))l)}\lambda_{i}(t),\quad i=1,2,\cdots,m. \end{align*}

Furthermore, it can also be shown that

\begin{align*} & P(N_{i}(u_{i2})-N_{i}(u_{i1})=n_{i},~i=1,2,\cdots,m)\\ &= \Bigg[\prod_{i=1}^{m} \frac{(\Lambda_{i}(u_{i2})-\Lambda_{i}(u_{i1}))^{n_{i}}}{{n_{i}}!} \Bigg]\frac{\alpha^{k-\nu} }{(\alpha+\sum_{i=1}^{m}(\Lambda_{i}(u_{i2} )-\Lambda_{i}(u_{i1})))^{k+\sum_{i=1}^{m}n_{i}-\nu}}\\ & \times\frac{\Gamma_{\nu} (k+\sum_{i=1}^{m}n_{i}, (\alpha+\sum_{i=1}^{m}(\Lambda_{i}(u_{i2})-\Lambda_{i}(u_{i1})))l)} {\Gamma_{\nu} (k, \alpha l)}. \end{align*}

In addition to the above results, other properties obtained in the previous sections could be extended to the multivariate case in similar ways.

5. Comparison and monotony results with respect to the multivariate likelihood ratio ordering

The results of the previous sections relied on specific properties of the generalized gamma distribution. Here we provide further results on MPGGPs based on the notion of multivariate likelihood ratio ordering; these hold in the more general setting of a multivariate mixed Poisson process, which we now define.

Definition 5.1. Let $\{{\mathbf{N}}(t),~t\geq0\}$ be a multivariate counting process, where ${\mathbf{N}}(t)=(N_{1}(t),N_{2} (t),\cdots,N_{m}(t))$ for all $t\geq0$, and let Φ be a nonnegative random variable, which is assumed to be absolutely continuous with respect to Lebesgue measure. Then $\{{\mathbf{N}}(t),~t\geq0\}$ is called a multivariate mixed Poisson process (MMPP) with the set of parameters $(\lambda_{i}(t),i=1,2,\cdots,m,\Phi)$, where $\lambda_{i}(t) \gt 0,~\forall t\geq0$, $i=1,2,\cdots,m$, if the processes $\{N_{i}(t),~t\geq0\}|(\Phi=\phi)\sim {\mathcal{NHPP}}(\phi\lambda_{i}(t))$, $i=1,2,\cdots,m$, are independent.
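Definition 5.1 suggests a direct simulation recipe: draw φ from the distribution of Φ, then run m independent Poisson processes with intensities $\phi\lambda_{i}(t)$. The sketch below (our illustration, under the simplifying assumption of constant baseline intensities) generates one sample path this way.

```python
import random

def simulate_mmpp(sample_phi, rates, horizon, rng=None):
    """One path of an MMPP on [0, horizon]: draw phi, then run m
    independent homogeneous Poisson processes with rates phi * lambda_i,
    per Definition 5.1 (constant baseline intensities for simplicity)."""
    rng = rng or random.Random()
    phi = sample_phi(rng)
    paths = []
    for lam in rates:
        t, times = 0.0, []
        while True:
            t += rng.expovariate(phi * lam)  # exponential inter-arrival time
            if t > horizon:
                break
            times.append(t)
        paths.append(times)
    return paths
```

For example, `sample_phi=lambda rng: rng.gammavariate(1.0, 1.0)` gives exponential mixing (an illustrative choice of Φ, not a requirement of the definition).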

Remark 5.2. Based on its definition, an MPGGP is a specific MMPP in which Φ has a generalized gamma distribution.

In order to write down the different properties below, we now recall the notion of multivariate likelihood ratio ordering. We refer to Karlin and Rinott [Reference Karlin and Rinott18] for more details.

Definition 5.3. Let X and Y be two random vectors on $\mathbb{R}^{n}$ with respective density $f_{\mathbf{X}}$ and $f_{\mathbf{Y}}$ with respect to a common product measure $\sigma\left( d\mathbf{x}\right) $. (For instance X and Y may be both absolutely continuous or both discrete). Then, X is said to be smaller than Y in the multivariate likelihood ratio ordering (written ${\mathbf{X}}\prec_{\mathbf{lr}}{\mathbf{Y}}$) as soon as

\begin{equation*} f_{\mathbf{X}}\left( \mathbf{x}\right) f_{\mathbf{Y}}\left( \mathbf{y} \right) \leq f_{\mathbf{X}}\left( \mathbf{x}\wedge\mathbf{y}\right) f_{\mathbf{Y}}\left( \mathbf{x}\vee\mathbf{y}\right), \end{equation*}

for all x, $\mathbf{y}\in\mathbb{R}^{n}$, where the minimum $\wedge$ and the maximum $\vee$ are taken componentwise.

In the univariate case, the likelihood ratio order is denoted by $\prec_{lr} $.

Definition 5.4. Let X be a random vector on $\mathbb{R}^{n}$ with density $f_{\mathbf{X}}$ with respect to a product measure $\sigma\left( d\mathbf{x}\right) $. Then, X is said to be Multivariate Totally Positive of order 2 (MTP2) as soon as

\begin{equation*} f_{\mathbf{X}}\left( \mathbf{x}\right) f_{\mathbf{X}}\left( \mathbf{y} \right) \leq f_{\mathbf{X}}\left( \mathbf{x}\wedge\mathbf{y}\right) f_{\mathbf{X}}\left( \mathbf{x}\vee\mathbf{y}\right), \end{equation*}

for all x, $\mathbf{y}\in\mathbb{R}^{n}$, which is just equivalent to ${\mathbf{X}}\prec_{\mathbf{lr}}{\mathbf{X}}$.

We first provide a result that extends Proposition 4 in Cha and Mercier [Reference Cha and Mercier9] to the multivariate setting.

Proposition 5.5. Let $\{{\mathbf{N}} (t),~t\geq0\}$ be a MMPP with set of parameters $\left( \lambda _{i}(t),i=1,2,\cdots,m,\Phi\right) $. Then ${\mathbf{N}}(t)$ increases with respect to t in the multivariate likelihood ratio ordering.

Proof. Let $0\leq t_{1} \lt t_{2}$. Let us show that ${\mathbf{N}}(t_{1})\prec_{\mathbf{lr}}{\mathbf{N}}(t_{2})$.

Let

\begin{align*} g_{j}\left( n_{1},\cdots,n_{m}|\phi\right) & \equiv P\left( N_{i} (t_{j})=n_{i},~i=1,2,\cdots,m|\Phi=\phi\right) \\ & =\prod_{i=1}^{m}P\left( N_{i}(t_{j})=n_{i}|\Phi=\phi\right), \end{align*}

for $j=1,2$ and $n_{i}\in\mathbb{N}$, $i=1,2,\cdots,m$, where $P\left( N_{i}(t_{j})=\cdot|\Phi=\phi\right) $ is the Poisson distribution with parameter $\Lambda_{i}\left( t_{j}\right) ~\phi$. As this distribution increases with respect to $\Lambda_{i}\left( t_{j}\right) $ in the likelihood ratio ordering and as $\Lambda_{i}\left( t_{1}\right) \leq \Lambda_{i}\left( t_{2}\right) $, we derive that $P\left( N_{i} (t_{1})=\cdot|\Phi=\phi\right) \prec_{lr}P\left( N_{i}(t_{2})=\cdot |\Phi=\phi\right) $, and next that $g_{1}\left( \cdots|\phi\right) \prec_{\mathbf{lr}}g_{2}\left( \cdots|\phi\right) $, as the multivariate likelihood ratio ordering is stable through conjunction (see Shaked and Shanthikumar [Reference Shaked and Shanthikumar24], Theorem 6.E.4(a) page 299).

Using that $\Phi\prec_{lr}\Phi$, we derive from Theorem 2.4 in Karlin and Rinott [Reference Karlin and Rinott18] that

\begin{equation*} \int_{\mathbb{R}_{+}}g_{1}\left( \cdots|\phi\right) f_{\Phi}\left( \phi\right) d\phi\prec_{\mathbf{lr}}\int_{\mathbb{R}_{+}}g_{2}\left( \cdots|\phi\right) f_{\Phi}\left( \phi\right) d\phi, \end{equation*}

which is just equivalent to ${\mathbf{N}}(t_{1})\prec_{\mathbf{lr}}{\mathbf{N}}(t_{2})$ and completes the proof.
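The key ingredient of the proof above, namely that the Poisson distribution increases in its mean with respect to the likelihood ratio ordering, amounts to the ratio of pmfs $(\mu_{2}/\mu_{1})^{n}e^{\mu_{1}-\mu_{2}}$ being nondecreasing in n whenever $\mu_{1}\leq\mu_{2}$. A quick numerical sanity check (illustrative values, not part of the proof):

```python
from math import exp, factorial

def poisson_pmf(n, mu):
    return mu ** n * exp(-mu) / factorial(n)

mu1, mu2 = 1.5, 2.5  # e.g. phi * Lambda_i(t_1) <= phi * Lambda_i(t_2)
ratios = [poisson_pmf(n, mu2) / poisson_pmf(n, mu1) for n in range(30)]
# likelihood ratio ordering: the pmf ratio must be nondecreasing in n
assert all(r1 <= r2 for r1, r2 in zip(ratios, ratios[1:]))
```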

We next show that the marginal increments in a MMPP (taken at possibly different times for each margin) exhibit the MTP2 property.

Proposition 5.6. Let $\{{\mathbf{N}} (t),~t\geq0\}$ be a MMPP with set of parameters $\left( \lambda _{i}(t),i=1,2,\cdots,m,\Phi\right) $ and let $0\leq u_{i1}\leq u_{i2}$, $i=1,2,\cdots,m$. Then the random vector $\left( N_{i}(u_{i2} )-N_{i}(u_{i1}),~i=1,2,\cdots,m\right) $ is MTP2.

Proof. The distribution of $\left( N_{i}(u_{i2} )-N_{i}(u_{i1}),~i=1,2,\cdots,m\right) $ can be written as

\begin{equation*} P\left( N_{i}(u_{i2})-N_{i}(u_{i1})=n_{i},~i=1,2,\cdots,m\right) =\int_{\mathbb{R}_{+}}\left( \prod_{i=1}^{m}g_{i}\left( n_{i}|\phi\right) \right) f_{\Phi}\left( \phi\right) d\phi, \end{equation*}

for all $n_{i}\in\mathbb{N}$, $i=1,2,\cdots,m$, where

\begin{equation*} g_{i}\left( \cdot|\phi\right) =P\left( N_{i}(u_{i2})-N_{i}(u_{i1} )=\cdot|\Phi=\phi\right), \end{equation*}

stands for the Poisson distribution with parameter $\left( \Lambda_{i}\left( u_{i2}\right) -\Lambda_{i}\left( u_{i1}\right) \right) \phi$. As this distribution increases in the likelihood ratio ordering with ϕ, the result follows from Property 7.2.18 in Denuit et al. [Reference Denuit, Dhaene, Goovaerts and Kaas12].

We recall that the MTP2 property is a strong positive dependence property, which entails, for instance, conditional increasingness in sequence and positive association; see e.g., Belzunce et al. [Reference Belzunce, Martínez-Riquelme and Mulero4]. Hence, such properties are fulfilled by the marginal increments in an MPGGP.
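Proposition 5.6 can be verified numerically in a simple special case. Assuming Exp(1) mixing and writing $a_{i}=\Lambda_{i}(u_{i2})-\Lambda_{i}(u_{i1})$, integrating out φ yields the closed-form joint pmf used below; the brute-force routine then tests the MTP2 inequality on a grid (function names and parameter values are ours):

```python
from itertools import product
from math import factorial

def mixed_poisson_pmf(n1, n2, a1, a2):
    """Joint pmf of the increments of a bivariate mixed Poisson process
    with Exp(1) mixing: integrating the product of Poisson(a_i * phi)
    pmfs against e^{-phi} d(phi) gives this closed form."""
    c = 1.0 + a1 + a2
    return (a1 ** n1 * a2 ** n2 / (factorial(n1) * factorial(n2))
            * factorial(n1 + n2) / c ** (n1 + n2 + 1))

def is_mtp2(pmf, points):
    """Brute-force check of f(x)f(y) <= f(x ^ y) f(x v y) on a grid."""
    for x, y in product(points, repeat=2):
        lo = tuple(map(min, x, y))
        hi = tuple(map(max, x, y))
        if pmf(*x) * pmf(*y) > pmf(*lo) * pmf(*hi) + 1e-12:
            return False
    return True

points = list(product(range(6), repeat=2))
assert is_mtp2(lambda n1, n2: mixed_poisson_pmf(n1, n2, 0.8, 1.7), points)
```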

In a similar way, the concept of a positive upper orthant dependent multivariate process (PUODMP) was defined in Cha and Giorgio [Reference Cha and Giorgio8]. We refer to this paper for more details. The following positive dependence result is now a direct consequence of Proposition 5.6.

Corollary 5.7. A MMPP is a positive upper orthant dependent multivariate process:

\begin{equation*} P(N_{i}(t_{i2})-N_{i}(t_{i1}) \gt n_{i},i=1,2,\cdots,m)\geq\prod_{i=1}^{m} P(N_{i}(t_{i2})-N_{i}(t_{i1}) \gt n_{i}), \end{equation*}

for all $t_{i2} \gt t_{i1}$ and $n_{i}$, $i=1,2,\cdots,m$.

We now come to the extension of Proposition 5 in Cha and Mercier [Reference Cha and Mercier9] to the multivariate setting.

Proposition 5.8. Let $\{{\mathbf{N}}(t),~t\geq0\}$ and $\{\bar{\mathbf{N}}(t),~t\geq0\}$ be two MMPPs with respective sets of parameters $\left( \lambda_{i}(t),i=1,2,\cdots,m,\Phi\right) $ and $\left( \bar{\lambda}_{i}(t),i=1,2,\cdots,m,\bar{\Phi}\right) $. Assume that $\Phi\prec_{\mathbf{lr}}\bar{\Phi}$. Also, let $0\leq u_{i1}\leq u_{i2}$, $i=1,2,\cdots,m$, such that

(5.1)\begin{equation} \Lambda_{i}\left( u_{i2}\right) -\Lambda_{i}\left( u_{i1}\right) \leq \bar{\Lambda}_{i}\left( u_{i2}\right) -\bar{\Lambda}_{i}\left( u_{i1}\right), \end{equation}

for all $i=1,2,\cdots,m$.

Then, we have the following result:

(5.2)\begin{equation} \left( N_{i}(u_{i2})-N_{i}(u_{i1}),~i=1,2,\cdots,m\right) \prec_{ \mathbf{lr}}\left( \bar{N}_{i}(u_{i2})-\bar{N}_{i}(u_{i1}),~i=1,2,\cdots ,m\right), \end{equation}

where $\mathbf{lr}$ refers to the multivariate likelihood ratio ordering.

Proof. We use some arguments from the proof of Theorem 3.8 in Belzunce et al. [Reference Belzunce, Mercader, Ruiz and Spizzichino5]. See also Theorem 2.7 in Khaledi and Shaked [Reference Khaledi and Shaked19] (which is written only for absolutely continuous random variables and hence cannot be applied here).

Let us first write

(5.3)\begin{align} g\left( n_{1},\cdots,n_{m}|\phi\right) & \equiv P\left( N_{i} (u_{i2})-N_{i}(u_{i1})=n_{i},~i=1,2,\cdots,m|\Phi=\phi\right) \nonumber\\ & =\prod_{i=1}^{m}P\left( N_{i}(u_{i2})-N_{i}(u_{i1})=n_{i}|\Phi =\phi\right) , \end{align}

where $P\left( N_{i}(u_{i2})-N_{i}(u_{i1})=\cdot|\Phi=\phi\right) $ is the Poisson distribution with parameter $\left( \Lambda_{i}\left( u_{i2}\right) -\Lambda_{i}\left( u_{i1}\right) \right) \phi$.

Using similar arguments as for the proof of Proposition 5.5, it is clear that

\begin{equation*} g\left( \cdots|\phi\right) \prec_{\mathbf{lr}}\bar{g}\left( \cdots |\phi\right) , \end{equation*}

where $\bar{g}$ is defined in a similar way as (5.3) for the process $\{\bar{\mathbf{N}}(t),~t\geq0\}$, because $\Lambda_{i}\left( u_{i2}\right) -\Lambda_{i}\left( u_{i1}\right) \leq\bar{\Lambda}_{i}\left( u_{i2}\right) -\bar{\Lambda}_{i}\left( u_{i1}\right) $.

As the Poisson distribution with parameter $\left( \Lambda_{i}\left( u_{i2}\right) -\Lambda_{i}\left( u_{i1}\right) \right) \phi$ also increases with respect to ϕ in the likelihood ratio ordering, we derive that $g\left( n_{1},\cdots,n_{m}|\phi\right) $ is MTP2 in $\left( n_{1},\cdots,n_{m},\phi\right) $.

Then, based on Theorem 2.4 in Karlin and Rinott [Reference Karlin and Rinott18], we get that

\begin{equation*} \int_{\mathbb{R}_{+}}g\left( \cdots|\phi\right) f_{\Phi}\left( \phi\right) d\phi\prec_{\mathbf{lr}}\int_{\mathbb{R}_{+}}\bar{g}\left( \cdots|\bar{\phi }\right) f_{\bar{\Phi}}\left( \bar{\phi}\right) d\bar{\phi}, \end{equation*}

which is just equivalent to (5.2) and completes the proof.

As a by-product of the previous proposition, considering $u_{i1}=0$ and $u_{i2}=t$ for all $i=1,2,\cdots,m$, one can see that, if all parameters are fixed except one, then ${\mathbf{N}}(t)$ increases in the multivariate likelihood ratio ordering when k increases or $\Lambda\left( t\right) $ increases, and when α or ν decreases.

We next explore, on a simple example (a BPGGP), whether Condition (5.1) of Proposition 5.8 alone is sufficient to derive the comparison result.

Example 5.9. Let $\{{\mathbf{N}}(t),~t\geq0\}$ and $\{\bar{\mathbf{N}}(t),~t\geq0\}$ be two BPGGPs with sets of parameters $\left( \lambda_{i}(t)=\lambda _{i},i=1,2,\nu,k,\alpha,l=1\right) $ and $\left( \bar{\lambda}_{i} (t)=\bar{\lambda}_{i},i=1,2,\bar{\nu},\bar{k},\bar{\alpha},\bar{l}=1\right) $, respectively.

For a given t, we know from Theorem 2.3 that the joint pdf of $\left( N_{1}(t),N_{2}(t)\right) $ is given by

\begin{align*} g_{12}\left( x_{1},x_{2}\right) & =\mathbb{P}\left[ \left( N_{1}(t)=x_{1},N_{2}(t)=x_{2}\right) \right] \\ & =\frac{\lambda_{1}^{x_{1}}\lambda_{2}^{x_{2}}t^{x_{1}+x_{2}}}{{x}_{1} !x_{2}!}\frac{\alpha^{k-\nu}}{\left[ \alpha+t\left( \lambda_{1}+\lambda _{2}\right) \right] ^{k+x_{1}+x_{2}-\nu}}\frac{\Gamma_{\nu}\left[ k+x_{1}+x_{2},\alpha+t\left( \lambda_{1}+\lambda_{2}\right) \right] }{\Gamma_{\nu}(k,\alpha)}, \end{align*}

for all $x_{1},x_{2}\in\mathbb{N}$, with a similar expression for the joint pdf of $\left( \bar{N}_{1}(t),\bar{N}_{2}(t)\right) $ (denoted by $\bar{g}_{12}$).

We set

\begin{align*} G\left( x_{1},x_{2},y_{1},y_{2}\right) & =g_{12}\left( x_{1}\wedge y_{1},x_{2}\wedge y_{2}\right) \bar{g}_{12}\left( x_{1}\vee y_{1},x_{2}\vee y_{2}\right) -g_{12}\left( x_{1},x_{2}\right) \bar{g}_{12}\left( y_{1},y_{2}\right) ,\\ \bar{G}\left( x_{1},x_{2},y_{1},y_{2}\right) & =\bar{g}_{12}\left( x_{1}\wedge y_{1},x_{2}\wedge y_{2}\right) g_{12}\left( x_{1}\vee y_{1},x_{2}\vee y_{2}\right) -g_{12}\left( x_{1},x_{2}\right) \bar{g} _{12}\left( y_{1},y_{2}\right), \end{align*}

for all $x_{1},x_{2},y_{1},y_{2}\in\mathbb{N}$. Then $\left( N_{1} (t),N_{2}(t)\right) \prec_{\mathbf{lr}}\left[ \succ_{\mathbf{lr}}\right] \left( \bar{N}_{1}(t),\bar{N}_{2}(t)\right) $ if and only if $G\left[ \bar{G}\right] $ remains nonnegative on $\mathbb{N}^{4}$.

We take t = 5, $\lambda_{1}=1 \lt $ $\bar{\lambda}_{1}=2$, $\lambda_{2} =1.5 \lt \bar{\lambda}_{2}=3$, ν = 1, k = 1, α = 1, $\bar{\nu}=1.25$, $\bar{k}=0.5$, $\bar{\alpha}=0.75$.

Then Condition (5.1) on the $\Lambda_{i}$’s and $\bar{\Lambda}_{i}$’s given in Proposition 5.8 holds. However, Φ and $\bar{\Phi}$ are not comparable with respect to the lr ordering. Indeed, the quotient of their respective pdfs (f and $\bar{f}$) is not monotonic, as can be seen in Figure 1.

Figure 1. The quotient $f/\bar{f}$ of the respective pdfs of Φ and $\bar{\Phi}$.

The functions $G(x_{1},x_{2},y_{1},y_{2})$ and $\bar{G}(x_{1},x_{2},y_{1},y_{2})$ are next plotted in Figure 2 with respect to $\left( x_{2},y_{2}\right) $ for $\left( x_{1},y_{1}\right) =\left( 3,10\right) $ and $\left( x_{1},y_{1}\right) =\left( 8,2\right) $, respectively. As can be seen, G and $\bar{G}$ both change sign on $\mathbb{N}^{4}$ and consequently, $\left( N_{1}(t),N_{2}(t)\right) $ and $\left( \bar{N}_{1}(t),\bar{N}_{2}(t)\right) $ are not comparable with respect to the bivariate likelihood ratio ordering (for t = 5).

Figure 2. The functions $G(x_{1},x_{2},y_{1},y_{2})$ and $\bar{G}(x_{1},x_{2},y_{1},y_{2})$ with respect to $\left( x_{2},y_{2}\right) $ for $\left( x_{1},y_{1}\right) =\left( 3,10\right) $ and $\left( x_{1},y_{1}\right) =\left( 8,2\right) $, respectively.

The previous example shows that Condition (5.1) on the $\Lambda_{i}$’s and $\bar{\Lambda}_{i}$’s alone is not sufficient to derive the comparison result in Proposition 5.8: some additional comparison assumption between Φ and $\bar{\Phi}$ is required.

We finally come to the comparison between the points of two MMPPs with different parameters. To begin with, we consider the case where the two processes share the same λi’s, $i=1,\dots,m$.

Proposition 5.10. Let $\{{\mathbf{N}}(t),~t\geq0\}$ and $\{\bar{\mathbf{N}}(t),~t\geq0\}$ be two MMPPs which share the same $\left( \lambda_{i}(t),i=1,2,\cdots,m\right) $ with different mixture distributions Φ and $\bar{\Phi}$, respectively. Assume that $\Phi \prec_{\mathbf{lr}}\bar{\Phi}$. For $i=1,2,\cdots,m$ and $n\in\mathbb{N}^{\ast}$, let Tin (resp. $\bar{T}_{in}$) be the n-th point of $\left\{ N_{i}\left( t\right) ,t\geq0\right\} $ (resp. $\left\{ \bar{N}_{i}\left( t\right) ,t\geq0\right\} $).

Then

\begin{equation*} \left( \bar{T}_{in_{i}},~i=1,2,\cdots,m\right) \prec_{\mathbf{lr}}\left( T_{in_{i}},~i=1,2,\cdots,m\right), \end{equation*}

for all $n_{i}\in\mathbb{N}^{\ast}$ and all $i=1,2,\cdots,m$.

Proof. Our aim is to use Theorem 3.8 in Belzunce et al. [Reference Belzunce, Mercader, Ruiz and Spizzichino5]. Let $i\in\left\{ 1,2,\cdots,m\right\} $ be fixed. Let us first show that $\left[ -T_{in_{i}}|\Phi=\phi\right] $ increases with respect to ϕ in the likelihood ratio ordering.

For that, we will use Theorem 3.7 in Belzunce et al. [Reference Belzunce, Lillo, Ruiz and Shaked3]. Let $\phi_{1} \leq\phi_{2}$. Then, the ratio of the respective density functions of $\left[ T_{i1}|\Phi=\phi_{1}\right] $ and $\left[ T_{i1}|\Phi=\phi_{2}\right] $ (first point of the i-th marginal process) is

\begin{equation*} \frac{\phi_{1}\lambda_{i}\left( t\right) e^{-\phi_{1}\Lambda_{i}\left( t\right) }}{\phi_{2}\lambda_{i}\left( t\right) e^{-\phi_{2}\Lambda _{i}\left( t\right) }}=\frac{\phi_{1}}{\phi_{2}}e^{\left( \phi_{2}-\phi _{1}\right) \Lambda_{i}\left( t\right) }, \end{equation*}

and it is increasing in t, since $\phi_{1}\leq\phi_{2}$. This implies that $\left[ T_{i1}|\Phi=\phi_{2}\right] \prec_{lr}\left[ T_{i1}|\Phi=\phi_{1}\right] $.

Also, the ratio of the corresponding cumulative hazard rate functions is

\begin{equation*} \frac{\phi_{1}\Lambda_{i}\left( t\right) }{\phi_{2}\Lambda_{i}\left( t\right) }=\frac{\phi_{1}}{\phi_{2}}, \end{equation*}

which is constant and hence nondecreasing. Based on Theorem 3.7 in Belzunce et al. [Reference Belzunce, Lillo, Ruiz and Shaked3], we derive that $\left[ T_{in_{i}}|\Phi=\phi_{2}\right] \prec_{lr}\left[ T_{in_{i}}|\Phi=\phi_{1}\right] $, or equivalently that $\left[ -T_{in_{i}}|\Phi=\phi_{1}\right] \prec_{lr}\left[ -T_{in_{i}} |\Phi=\phi_{2}\right] $.

Hence, $\left[ -T_{in_{i}}|\Phi=\phi\right] $ increases with respect to ϕ in the likelihood ratio ordering.

Also, based on our assumptions, we know that $\Phi\prec_{lr}\bar{\Phi}$.

The result now is a direct consequence of Theorem 3.8 in Belzunce et al. [Reference Belzunce, Mercader, Ruiz and Spizzichino5].
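The two monotonicity facts used in the proof above, the increasing density ratio and the constant cumulative hazard ratio, are elementary and easy to sanity-check numerically (the choice of $\Lambda_{i}$ below is illustrative):

```python
from math import exp

def Lam(t):
    # illustrative cumulative baseline intensity (any nondecreasing choice works)
    return 2.0 * t

def density_ratio(t, phi1, phi2):
    """Ratio of the conditional densities of T_i1 given Phi = phi1 and
    Phi = phi2: (phi1 / phi2) * exp((phi2 - phi1) * Lambda_i(t))."""
    return (phi1 / phi2) * exp((phi2 - phi1) * Lam(t))

ts = [0.1 * k for k in range(1, 60)]
vals = [density_ratio(t, 0.5, 1.5) for t in ts]
# increasing in t when phi1 <= phi2, as used in the proof
assert all(a <= b for a, b in zip(vals, vals[1:]))
# the cumulative hazard ratio is constant, equal to phi1 / phi2
assert abs((0.5 * Lam(3.0)) / (1.5 * Lam(3.0)) - 0.5 / 1.5) < 1e-12
```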

A natural question now is: is it possible to compare the points of two MMPPs which share the same Φ but have different $\lambda_{i}(t)$’s, $i=1,2,\cdots,m$? This question is explored in the next example.

Example 5.11. Let $\{{\mathbf{N}}(t),~t\geq0\}$ and $\{\bar{\mathbf{N}}(t),~t\geq0\}$ be two MPGGPs (which are specific MMPPs), which share $\left( \nu,k,\alpha,l\right) =\left( 0,1,1,1\right) $, so that Φ is exponentially distributed with mean 1. Let $\lambda_{i}(t)=\lambda_{i}$ and $\bar{\lambda}_{i}(t)=\bar{\lambda}_{i}$, $i=1,2$, be the corresponding constant baseline intensity functions of the NHPPs, respectively. Then, it is easy to check that the joint density function of $\left( T_{11},T_{21}\right) $ is

\begin{align*} f_{12}\left( x_{1},x_{2}\right) & =\int_{\mathbb{R}_{+}}\lambda_{1} \phi~e^{-\lambda_{1}\phi x_{1}}~\lambda_{2}\phi~e^{-\lambda_{2}\phi x_{2} }~e^{-\phi}d\phi\\ & =\frac{2\lambda_{1}\lambda_{2}}{\left( \lambda_{1}x_{1}+\lambda_{2} x_{2}+1\right) ^{3}}, \end{align*}

with a similar expression for the joint density function $\bar{f}_{12}$ of $\left( \bar{T}_{11},\bar{T}_{21}\right) $. The point is to see whether $\left( T_{11},T_{21}\right) $ and $\left( \bar{T}_{11},\bar{T}_{21}\right) $ are comparable with respect to the bivariate likelihood ratio ordering. Let

\begin{align*} H\left( x_{1},x_{2},y_{1},y_{2}\right) & =f_{12}\left( x_{1}\wedge y_{1},x_{2}\wedge y_{2}\right) \bar{f}_{12}\left( x_{1}\vee y_{1},x_{2}\vee y_{2}\right) -f_{12}\left( x_{1},x_{2}\right) \bar{f}_{12}\left( y_{1},y_{2}\right) ,\\ \bar{H}\left( x_{1},x_{2},y_{1},y_{2}\right) & =\bar{f}_{12}\left( x_{1}\wedge y_{1},x_{2}\wedge y_{2}\right) f_{12}\left( x_{1}\vee y_{1},x_{2}\vee y_{2}\right) -f_{12}\left( x_{1},x_{2}\right) \bar{f} _{12}\left( y_{1},y_{2}\right), \end{align*}

for all $(x_{1},x_{2},y_{1},y_{2})\in\mathbb{R}_{+}^{4}$.

Then $\left( T_{11},T_{21}\right) \prec_{\mathbf{lr}}\left[ \succ_{\mathbf{lr}}\right] \left( \bar{T}_{11},\bar{T}_{21}\right) $ if and only if $H\left[ \bar{H}\right] $ remains nonnegative on $\mathbb{R}_{+}^{4}$. The functions $H(x_{1},x_{2},y_{1},y_{2})$ and $\bar{H}(x_{1},x_{2},y_{1},y_{2})$ are plotted in Figure 3 with respect to $\left( x_{2},y_{2}\right) $ for $\left( x_{1},y_{1}\right) =\left( 2,0.01\right) $ and $\left( x_{1},y_{1}\right) =\left( 0.01,1\right) $, respectively, with $\lambda_{1}=\bar{\lambda}_{1}=\lambda_{2}=1 \lt \bar{\lambda}_{2}=6$. As can be seen, H and $\bar{H}$ both change sign on $\mathbb{R}_{+}^{4}$ and consequently, $\left( T_{11},T_{21}\right) $ and $\left( \bar{T}_{11},\bar{T}_{21}\right)$ are not comparable with respect to the bivariate likelihood ratio ordering.

Based on this simple example (with constant λi’s and $\bar{\lambda}_{i}$’s such that $\lambda_{1}=\bar{\lambda}_{1}=\lambda_{2} \lt \bar{\lambda}_{2}$), there seems to be little hope of finding conditions under which the points of two MMPPs with different $\lambda_{i}(t)$’s could be comparable with respect to the multivariate likelihood ratio ordering.
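The closed form for $f_{12}$ in Example 5.11 can be double-checked by numerical integration of the mixture integral over φ (a quick sketch, not part of the paper; names and grid parameters are ours):

```python
from math import exp

def f12_closed(x1, x2, lam1, lam2):
    # closed form from Example 5.11
    return 2.0 * lam1 * lam2 / (lam1 * x1 + lam2 * x2 + 1.0) ** 3

def f12_numeric(x1, x2, lam1, lam2, upper=60.0, n=200000):
    # midpoint rule for the integral of lam1*lam2*phi^2*e^{-c*phi} over phi
    c = lam1 * x1 + lam2 * x2 + 1.0
    h = upper / n
    total = 0.0
    for k in range(n):
        phi = (k + 0.5) * h
        total += lam1 * lam2 * phi * phi * exp(-c * phi) * h
    return total

assert abs(f12_numeric(0.7, 1.2, 1.0, 1.5) - f12_closed(0.7, 1.2, 1.0, 1.5)) < 1e-5
```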

Figure 3. The functions $H(x_{1},x_{2},y_{1},y_{2})$ and $\bar{H}(x_{1},x_{2},y_{1},y_{2})$ with respect to $\left( x_{2},y_{2}\right) $ for $\left( x_{1},y_{1}\right) =\left( 2,0.01\right) $ and $\left( x_{1},y_{1}\right) =\left( 0.01,1\right) $, respectively.

However, it is possible to get comparison results with respect to the weaker usual stochastic ordering. We recall that given two random vectors X and Y on $\mathbb{R}^{n}$, X is said to be smaller than Y in the usual stochastic ordering (written ${\mathbf{X}}\prec_{\mathbf{sto}}{\mathbf{Y}}$) as soon as

\begin{equation*} E\left[ \varphi\left( \mathbf{X}\right) \right] \leq E\left[ \varphi\left( \mathbf{Y}\right) \right], \end{equation*}

for all non-decreasing functions $\varphi:\mathbb{R}^{n}\longrightarrow \mathbb{R}$ such that the expectations exist. In the univariate setting (written $X\prec_{sto}Y$), this is equivalent to $\bar{F}_{X}\left( t\right) \leq\bar{F}_{Y}\left( t\right)$ for all $t\geq0$.

The multivariate likelihood ratio ordering is known to imply the usual stochastic ordering. See Shaked and Shanthikumar [Reference Shaked and Shanthikumar24] for more details.

We now come to the comparison result.

Proposition 5.12. Let $\{{\mathbf{N}}(t),~t\geq0\}$ and $\{\bar{\mathbf{N}}(t),~t\geq0\}$ be two MMPPs with sets of parameters $\left( \lambda_{i}(t),i=1,2,\cdots,m,\Phi\right) $ and $\left( \bar{\lambda}_{i}(t),i=1,2,\cdots,m,\bar{\Phi}\right) $, respectively. Assume that $\Phi\prec_{\mathbf{lr}}\bar{\Phi}$ and $\Lambda_{i}\left( t\right) \leq\bar{\Lambda}_{i}\left( t\right), $ for all $t\geq0$ and all $i=1,2,\cdots,m$. Using the notations of Proposition 5.10, we have:

\begin{equation*} \left( \bar{T}_{in_{i}},~i=1,2,\cdots,m\right) \prec_{\mathbf{sto}}\left( T_{in_{i}},~i=1,2,\cdots,m\right), \end{equation*}

for all $n_{i}\in\mathbb{N}^{\ast}$ and all $i=1,2,\cdots,m$.

Proof. Our aim is to use Theorem 3.1 in Belzunce et al. [Reference Belzunce, Mercader, Ruiz and Spizzichino5]. We already know from the proof of Proposition 5.10 that $\left[ -T_{in_{i}} |\Phi=\phi\right] $ increases with respect to ϕ in the likelihood ratio ordering, and hence also in the usual stochastic ordering.

Also, based on $\Lambda_{i}\left( t\right) \leq\bar{\Lambda}_{i}\left( t\right), $ for all $t\geq0$ and all $i=1,2,\cdots,m$, it is easy to check that the conditional survival functions of $\bar{T}_{i1}$ given $\bar{\Phi }=\phi$ and $T_{i1}$ given $\Phi=\phi$ fulfill

\begin{equation*} \bar{F}_{\bar{T}_{i1}|\bar{\Phi}=\phi}\left( t\right) \equiv\mathbb{P} \left( \bar{T}_{i1} \gt t|\bar{\Phi}=\phi\right) =e^{-\phi\bar{\Lambda} _{i}\left( t\right) }\leq e^{-\phi\Lambda_{i}\left( t\right) }=\bar {F}_{T_{i1}|\Phi=\phi}\left( t\right), \end{equation*}

for all $t\geq0$, which means that $\left[ \bar{T}_{i1}|\bar{\Phi} =\phi\right] \prec_{sto}\left[ T_{i1}|\Phi=\phi\right] $. We derive from Theorem 3.1 in Belzunce et al. [Reference Belzunce, Lillo, Ruiz and Shaked3] that $\left[ \left( \bar{T}_{i1},\bar {T}_{i2},\dots,\bar{T}_{in_{i}}\right) |\bar{\Phi}=\phi\right] \prec _{sto}\left[ \left( T_{i1},T_{i2},\dots,T_{in_{i}}\right) |\Phi =\phi\right] $ and next that $\left[ \bar{T}_{in_{i}}|\bar{\Phi} =\phi\right] \prec_{sto}\left[ T_{in_{i}}|\Phi=\phi\right] $ (as the usual stochastic ordering is stable through marginalization), or equivalently that $\left[ -T_{in_{i}}|\Phi=\phi\right] \prec_{sto}\left[ -\bar{T}_{in_{i}}|\bar{\Phi}=\phi\right] $.

Finally, based on $\Phi\prec_{lr}\bar{\Phi}$, we can derive that

\begin{equation*} \left( -T_{in_{i}},~i=1,2,\cdots,m\right) \prec_{\mathbf{sto}} \left( -\bar{T}_{in_{i}},~i=1,2,\cdots,m\right), \end{equation*}

from Theorem 3.1 in Belzunce et al. [Reference Belzunce, Mercader, Ruiz and Spizzichino5], which completes the proof.

Remark 5.13. Note that all the results from Propositions 5.5, 5.6 and Corollary 5.7 hold for MPGGPs, as they are specific MMPPs. In order to apply Propositions 5.8, 5.10 and 5.12 to MPGGPs, one can use the following result, which provides conditions under which $\Phi\prec_{lr}\bar{\Phi}$. The arguments can be found in Cha and Mercier [Reference Cha and Mercier9].

Lemma 5.14. Let $\Phi\sim{\mathcal{GG}}(\nu,k,\alpha,l=1) $ and $\bar{\Phi}\sim{\mathcal{GG}}(\bar{\nu},\bar{k},\bar{\alpha},\bar{l}=1)$. Then $\Phi\prec_{lr}\bar{\Phi}$ as soon as one of the following conditions holds:

  • $\bar{\alpha}=\alpha$, $\bar{k}\geq k$ and $\bar{k}-k\geq\bar{\nu} -\nu$;

  • $\bar{\alpha} \lt \alpha$ and $\left( \alpha-\bar{\alpha}+\bar{k}-k+\nu- \bar{\nu}\right) ^{2}-4\left( \alpha-\bar{\alpha}\right) \left( \bar{k} -k\right) \leq0$;

  • $\bar{\alpha} \lt \alpha$ and $\alpha-\bar{\alpha}+\bar{k}-k+\nu-\bar{\nu }\geq0$.

As a specific case, we can see that, if all parameters are fixed except one, Φ increases in the likelihood ratio ordering when k increases, and when α or ν decreases.

Acknowledgment

The authors thank the reviewers for helpful comments and suggestions.

Funding statement

This work was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (Grant Number: 2019R1A6A1A11051177).

Competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

Agarwal, S.K. & Kalla, S.L. (1996). A generalized gamma distribution and its application in reliability. Communications in Statistics – Theory and Methods 25(1): 201–210.
Allen, F. & Gale, D. (2000). Financial contagion. Journal of Political Economy 108(1): 1–34.
Belzunce, F., Lillo, R.E., Ruiz, J.M. & Shaked, M. (2001). Stochastic comparisons of nonhomogeneous processes. Probability in the Engineering and Informational Sciences 15(2): 199–224.
Belzunce, F., Martínez-Riquelme, C. & Mulero, J. (2016). An Introduction to Stochastic Orders. Amsterdam: Elsevier/Academic Press.
Belzunce, F., Mercader, J.A., Ruiz, J.M. & Spizzichino, F. (2009). Stochastic comparisons of multivariate mixture models. Journal of Multivariate Analysis 100(8): 1657–1669.
Bowsher, C.G. (2006). Modelling security market events in continuous time: intensity based, multivariate point process models. Journal of Econometrics 141(2): 876–912.
Cha, J.H. (2014). Characterization of the generalized Pólya process and its applications. Advances in Applied Probability 46(4): 1148–1171.
Cha, J.H. & Giorgio, M. (2016). On a class of multivariate counting processes. Advances in Applied Probability 48(2): 443–462.
Cha, J.H. & Mercier, S. (2021). Poisson generalized Gamma process and its properties. Stochastics 93(8): 1123–1140.
Cox, D.R. & Lewis, P.A.W. (1972). Multivariate point processes. In Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability.
Daley, D.J. (1968). The correlation structure of the output process of some single server queueing systems. Annals of Mathematical Statistics 39(3): 1007–1019.
Denuit, M., Dhaene, J., Goovaerts, M. & Kaas, R. (2006). Actuarial Theory for Dependent Risks: Measures, Orders and Models. Chichester: John Wiley & Sons.
Finkelstein, M. (2004). Minimal repair in heterogeneous populations. Journal of Applied Probability 41(1): 281–286.
Finkelstein, M. (2008). Failure Rate Modeling for Reliability and Risk. London: Springer.
Ghitany, M.E. (1998). On a recent generalization of gamma distribution. Communications in Statistics – Theory and Methods 27(1): 223–233.
Grandell, J. (1997). Mixed Poisson Processes. Monographs on Statistics and Applied Probability, Vol. 77. London: Chapman & Hall.
Gupta, R.C. & Ong, S.H. (2004). A new generalization of the negative binomial distribution. Computational Statistics & Data Analysis 45(4): 287–300.
Karlin, S. & Rinott, Y. (1980). Classes of orderings of measures and related correlation inequalities. I. Multivariate totally positive distributions. Journal of Multivariate Analysis 10(4): 467–498.
Khaledi, B.E. & Shaked, M. (2010). Stochastic comparisons of multivariate mixtures. Journal of Multivariate Analysis 101(10): 2486–2498.
Kobayashi, K. (1991). On generalized gamma functions occurring in diffraction theory. Journal of the Physical Society of Japan 60(5): 1501–1512.
Partrat, C. (1994). Compound model for two dependent kinds of claims. Insurance: Mathematics and Economics 15(2): 219–231.
Riordan, J. (1937). Moment recurrence relations for binomial, Poisson and hypergeometric frequency distributions. Annals of Mathematical Statistics 8(2): 103–111.
Ross, S.M. (2003). Introduction to Probability Models, 8th ed. San Diego: Academic Press.
Shaked, M. & Shanthikumar, J.G. (2007). Stochastic Orders. New York: Springer.