
A dual risk model with additive and proportional gains: ruin probability and dividends

Published online by Cambridge University Press:  08 February 2023

Onno Boxma*
Affiliation:
Eindhoven University of Technology
Esther Frostig*
Affiliation:
University of Haifa
Zbigniew Palmowski*
Affiliation:
Wrocław University of Science and Technology
*Postal address: Department of Mathematics and Computer Science, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, the Netherlands. Email address: [email protected]
**Postal address: Department of Statistics, Haifa University, Haifa, Israel. Email address: [email protected]
***Postal address: Department of Applied Mathematics, Wrocław University of Science and Technology, Wrocław, Poland. Email address: [email protected]

Abstract

We consider a dual risk model with constant expense rate and i.i.d. exponentially distributed gains $C_i$ ( $i=1,2,\dots$ ) that arrive according to a renewal process with general interarrival times. We add to this classical dual risk model the proportional gain feature; that is, if the surplus process just before the ith arrival is at level u, then for $a>0$ the capital jumps up to the level $(1+a)u+C_i$ . The ruin probability and the distribution of the time to ruin are determined. We furthermore identify the value of discounted cumulative dividend payments, for the case of a Poisson arrival process of proportional gains. In the dividend calculations, we also consider a random perturbation of our basic risk process modeled by an independent Brownian motion with drift.

Type
Original Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

We consider a dual risk model with constant expense rate normalized to 1. Gains arrive according to a renewal process $\{N(t), t \geq 0\}$ with independent and identically distributed (i.i.d.) interarrival times $T_{i+1}-T_i$ having distribution $F({\cdot})$ , density $f({\cdot})$ , and Laplace–Stieltjes transform (LST) $\phi({\cdot})$ . If the surplus process just before the ith arrival is at level u, then the capital jumps up to the level $(1+a)u+C_i$ , $i=1,2,\dots$ , where $a>0$ and $C_1,C_2,\dots$ are i.i.d. exponentially distributed random variables with mean $1/\mu$ . Let U(t) be the surplus process, with $U(0)=x>0$ ; then we can write

(1.1) \begin{equation}U(t)=x-t+ \sum_{i=1}^{N(t)} (C_i+aU(T_i{-})), \,t \geq 0. \end{equation}

Taking $a=0$ yields a classical dual risk model, while $C_i \equiv 0$ yields a dual risk model with proportional gains. U(t) can also represent the workload in an M/G/1 queue or the inventory level in a storage model or dam model with a constant demand rate and occasional inflow that depends proportionally (apart from independent upward jumps) on the current amount of work in the system. We also give some results on a generalization of the model of (1.1) where at the ith jump epoch the jump has size $au+C_i$ with probability p, and has size $D_i$ with probability $1-p$ , where $D_1,D_2,\dots$ are independent, exp( $\delta$ )-distributed random variables, independent of $C_1,C_2,\dots$ .
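To make the dynamics of (1.1) concrete, the following minimal Monte Carlo sketch (with illustrative parameters; the function and variable names are ours, not from the paper) simulates the surplus between gain epochs for exponential interarrival times and crudely estimates the ruin probability:

```python
import random

# Monte Carlo sketch of the surplus process (1.1) with exp(lam) interarrival
# times and exp(mu) additive gains (illustrative parameters; names are ours).
def simulate_ruin(x, a, mu, lam, horizon=200.0, rng=random):
    """Return True if the surplus hits 0 before time `horizon`."""
    t, u = 0.0, x
    while t < horizon:
        w = rng.expovariate(lam)                 # time until the next gain
        if u <= w:                               # expense rate 1 empties u first
            return True
        u -= w
        t += w
        u = (1 + a) * u + rng.expovariate(mu)    # proportional + additive jump
    return False

random.seed(1)
n = 2000
est = sum(simulate_ruin(2.0, a=0.5, mu=1.0, lam=1.0) for _ in range(n)) / n
print(0.0 < est < 1.0)                           # a crude estimate of R(2)
```

Since the surplus grows geometrically once it escapes upward, paths that survive the early phase essentially never ruin, so a moderate horizon already gives a reasonable estimate of $R(x)$.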

In this paper we are interested (i) in exactly identifying the Laplace transforms of the ruin probability and the ruin time, and (ii) in approximating the value function, which is the cumulative discounted amount of dividends paid up to the ruin time under a fixed barrier strategy. To find this value function we solve a two-sided exit problem for the risk process (1.1), which seems to be interesting in itself. In the discounted dividend case we also add to the risk process (1.1) a perturbation modeled by a Brownian motion X(t) with drift; that is, we replace the negative drift $-t$ by the process X(t).

More formally, we start from the analysis of the ruin probability

(1.2) \begin{equation}R(x)\,:\!=\,\mathbb{P}_x(\tau_x<\infty),\end{equation}

where $\mathbb{P}_x({\cdot})\,:\!=\,\mathbb{P}(\cdot|U(0)=x)$ and the ruin time is defined as the first time the surplus process equals zero:

(1.3) \begin{equation}\tau_x = \inf\{t\geq 0\,:\, U(t)= 0\}.\end{equation}

Our method of analyzing R(x) is based on a one-step analysis where the process under consideration is viewed at successive claim times. We obtain the Laplace transform (with respect to initial capital) of the ruin probability for the risk process (1.1). We also analyze the double Laplace transform of the ruin time (with respect to initial capital and time).

Another quantity of interest for companies is the expected cumulative and discounted amount of dividend payments calculated under a barrier strategy. To approach the dividend problem for the barrier strategy with barrier b, we consider the controlled surplus process $U^b$ satisfying

(1.4) \begin{equation}U^b(t)=x-t+ \sum_{i=1}^{N(t)} \big(C_i+aU^b(T_i{-})\big)-L^b(t),\end{equation}

where the cumulative amount of dividends $L^b(t)$ paid up to time t comes from paying all the overflow above a fixed level b as dividends to shareholders. The object of interest is the average value of the cumulative discounted dividends paid up to the ruin time:

(1.5) \begin{equation}v(x)\,:\!=\,\mathbb{E}_x\!\left[\int_0^{\tau_x^b}{\rm e}^{-qt}{\rm d}L^b(t) \right],\end{equation}

where $\tau_x^b\,:\!=\,\inf\!\big\{t\geq 0\,:\, U^b(t)= 0\big\}$ is the ruin time and $q\geq 0$ is a given discount rate. Here we adopt the convention that $\mathbb{E}_x$ is the expectation with respect to $\mathbb{P}_x$ . We derive a differential-delay equation for v(x). However, such differential-delay equations are notoriously difficult to solve, and we have not been able to solve our equation. Hence we have developed the following approach. Under the additional assumption that $\{N(t), t \geq 0\}$ is a Poisson process and all $C_i$ equal zero, we find the expected cumulative discounted dividends

(1.6) \begin{equation}v_{N}(x)\,:\!=\, \mathbb{E}_x\!\left[\int_0^{\tau_x^b(N)}{\rm e}^{-qt}{\rm d}L^b(t) \right] ,\end{equation}

paid under the barrier strategy until the process reaches $\frac{b}{(a+1)^{N}}$ , that is, up to $\tau_x^b(N)\,:\!=\,\inf\!\big\{t \geq 0\,:\, U^b(t)=\frac{b}{(a+1)^N}\big\}$ . By taking N sufficiently large we can approximate the value function v(x) closely by $v_N(x)$ . To find $v_N(x)$ we first develop a method for solving a two-sided exit problem. Defining $d_{n}$ as the first time that $U^b$ reaches (down-crosses) $\frac{b}{(a+1)^{n}}$ and $u_{n}$ as the first time $U^b$ up-crosses $\frac{b}{(a+1)^{n}}$ , we determine (with $1_{\cdot}$ denoting an indicator function)

(1.7) \begin{equation}\rho_{N}(x)\,:\!=\,\mathbb{E}_{x}\!\left[ e^{-qd_{N}}1_{d_{N}<u_{ 0}}\right] ,\end{equation}

which seems to be of interest in its own right. We then use a very similar method to find $v_N(x)$ , and to also solve a second two-sided exit problem, determining

(1.8) \begin{equation}\mu_{N}(x)\,:\!=\,\mathbb{E}_{x}\big[e^{-qu_{0}}1_{u_{0}<d_{N}}\big].\end{equation}

This method of approximating a function $v({\cdot})$ by a sequence of functions $v_N({\cdot})$ with a certain recursive structure may be applicable in quite a few other settings, in particular in dam and storage models. In Section 5 we perform a similar analysis for the risk process (1.1) perturbed by an independent Brownian motion. There we also use the fluctuation theory of spectrally negative Lévy processes, expressing the exit identities in terms of so-called scale functions, as presented for example in Kyprianou [Reference Kyprianou17].

As Avanzi et al. point out in [Reference Avanzi, Gerber and Shiu5], ‘Whereas [a classical model] is appropriate for an insurance company, [a dual model] seems to be natural for companies that have occasional gains. For companies such as pharmaceutical or petroleum companies, the jump should be interpreted as the net present value of future income from an invention or discovery. Other examples are commission-based businesses, such as real estate agent offices or brokerage firms that sell mutual funds or insurance products with a front-end load.’ Avanzi et al. [Reference Avanzi, Gerber and Shiu5] also suggest possible applications in modeling an annuity or pension fund, where the risk consists of survival and the event of death leads to gains. More precisely, in the model (1.1), a company which continuously pays expenses for research, labor, and operational costs occasionally gains some random income from selling a product, invention, or discovery; see e.g. [Reference Avanzi and Gerber4, Reference Avanzi3, Reference Bayraktar, Kyprianou and Yamazaki8, Reference Ng19, Reference Yin and Wen24, Reference Yin, Wen and Zhao25]. Recently, the budgets of many start-ups and e-companies have shown a different feature: their gains are not additive but depend strongly on the amount of investment, which is typically so large as to be proportional to the value of the company. Each arriving gain is then proportional not only to the investment but also to the value of the company itself. Perhaps the most transparent example is CD Projekt, one of the largest Polish producers of computer games. The release of new editions of its most famous game, Witcher, produces jumps in the value of the company (translated into jumps of asset value), and these jumps are proportional to the position of the value process just before the jump; see Figure 1.

Figure 1. CD Projekt asset value; see https://businessinsider.com.pl.

Related literature. Not many papers consider the ruin probability for the classical dual risk process (without proportional gain mechanism), but it corresponds to the first busy period in a single-server queue with initial workload x, and consequently we can refer to [Reference Cohen14, Reference Prabhu21]. If the interarrival time has an exponential distribution, then one can apply fluctuation theory of Lévy processes to identify the Laplace transform of the ruin time as well; see e.g. Kyprianou [Reference Kyprianou17]. Albrecher et al. [Reference Albrecher, Badescu and Landriault2] study the ruin probability in the dual risk model under a loss-carryforward tax system and assuming exponentially distributed jump sizes. Palmowski et al. [Reference Palmowski, Ramsden and Papaioannou20] focus on a discrete-time set-up and study the finite-time ruin probability. In terms of analysis technique, the approach in Sections 2 and 3 bears similarities to the approach used in [Reference Boxma, Mandjes and Reed12, Reference Boxma, Löpker and Mandjes10, Reference Boxma, Löpker, Mandjes and Palmowski11, Reference Vlasiou23] to study Lindley-type recursions $W_{n+1} = {\rm max}(0,a W_n+X_n)$ , where $a=1$ in the classical setting of a single-server queue with $W_n$ the waiting time of the nth customer.

There is a good deal of work on dividend barriers in the dual model. All of those papers assume that the cost function is constant, and gains are modeled by a compound Poisson process. Avanzi et al. [Reference Avanzi, Gerber and Shiu5] consider cases where profits or gains follow an exponential distribution or a mixture of exponential distributions, and they derive explicit formulas for the expected discounted dividend value; see also Afonso et al. [Reference Afonso, Cardoso and dos Reis1]. Avanzi and Gerber [Reference Avanzi and Gerber4] use the Laplace transform method to study a dual model perturbed by a diffusion. Bayraktar et al. [Reference Bayraktar, Kyprianou and Yamazaki8] and Avanzi et al. [Reference Avanzi, Pérez, Wong and Yamazaki6] employ fluctuation theory to prove the optimality of a barrier strategy for all spectrally positive Lévy processes and express the value function in terms of scale functions. Yin et al. [Reference Yin and Wen24, Reference Yin, Wen and Zhao25] consider terminal costs and dividends that are paid continuously at a constant rate (that might be bounded from above) when the surplus is above that barrier; see also Ng [Reference Ng19] for similar considerations. Albrecher et al. [Reference Albrecher, Badescu and Landriault2] examine a dual risk model in the presence of tax payments. Marciniak and Palmowski [Reference Marciniak and Palmowski18] consider a more general dual risk process where the rate of the costs depends on the present amount of reserves. Boxma and Frostig [Reference Boxma and Frostig9] consider the time to ruin and the expected discounted dividends for a different dividend policy, where a certain part of the gain is paid as dividends if upon arrival the gain finds the surplus above a barrier b or if it would bring the surplus above that level.

Organization of the paper. Section 2 is devoted to the determination of the ruin probability, while Section 3 considers the law of the ruin time. Section 4 considers two-sided exit problems that allow one to find the ruin probability and the total discounted dividend payments for the special case that the only capital growth is proportional growth. In Section 5 we handle the Brownian perturbation of the risk process (1.1). Section 6 contains suggestions for further research.

2. The ruin probability

In this section we determine the Laplace transform of the ruin probability R(x) defined in (1.2), for initial capital x. By distinguishing the two cases in which no upward jump occurs before time x (so that ruin occurs at time x) and in which the first upward jump occurs at some time $t \in (0,x)$ , we can write

(2.1) \begin{equation}R(x) = 1-F(x) + \int_{t=0}^x \int_{y=0}^{\infty} R((1+a)(x-t)+y) \mu {\rm e}^{-\mu y} {\rm d}y {\rm d}F(t) .\end{equation}

Introducing the Laplace transform

\begin{equation*}\rho(s) \,:\!=\, \int_{x=0}^{\infty} {\rm e}^{-sx} R(x) {\rm d}x,\end{equation*}

we have

(2.2) \begin{equation}\rho(s) = \frac{1-\phi(s)}{s}+ \int_{x=0}^{\infty} {\rm e}^{-sx} \int_{t=0}^{x} \int_{z=(1+a)(x-t)}^{\infty} R(z) \mu {\rm e}^{-\mu z}{\rm e}^{\mu(1+a)(x-t)} {\rm d}z {\rm d}F(t) {\rm d}x .\end{equation}

The triple integral in the right-hand side of (2.2), I(s), can be rewritten as follows:

\begin{eqnarray}I(s) \, &=& \, \int_{t=0}^{\infty} {\rm e}^{-st}\int_{x=t}^{\infty} {\rm e}^{-s(x-t)} {\rm e}^{\mu(1+a)(x-t)}\int_{z=(1+a)(x-t)}^{\infty} \mu {\rm e}^{-\mu z} R(z){\rm d}z{\rm d}x{\rm d}F(t)\nonumber\\\, &=& \, \phi(s) \int_{v=0}^{\infty} {\rm e}^{-sv + \mu(1+a)v} \int_{z=(1+a)v}^{\infty}\mu {\rm e}^{-\mu z} R(z) {\rm d}z {\rm d} v\nonumber\\\, &=& \, \phi(s) \int_{z=0}^{\infty} \mu {\rm e}^{-\mu z} R(z)\frac{{\rm e}^{(\mu(1+a)-s) \frac{z}{1+a}}-1}{\mu(1+a)-s} {\rm d} z .\nonumber\end{eqnarray}

Hence

(2.3) \begin{equation}\rho(s) = \frac{1-\phi(s)}{s} + \phi(s) \frac{\mu}{\mu(1+a)-s} \left[\rho\!\left(\frac{s}{1+a}\right)-\rho(\mu)\right] .\end{equation}

Introducing

(2.4) \begin{equation}H(s) \,:\!=\, \frac{1-\phi(s)}{s} - \phi(s) \frac{\mu}{\mu(1+a)-s} \rho(\mu), \, J(s) \,:\!=\, \phi(s) \frac{\mu}{\mu(1+a)-s},\end{equation}

we rewrite (2.3) as

(2.5) \begin{equation}\rho(s) = J(s) \rho\!\left(\frac{s}{1+a}\right) + H(s) .\end{equation}

Thus $\rho(s)$ is expressed in terms of $\rho\!\left(\frac{s}{1+a}\right)$ , and after $N-1$ iterations this results in

(2.6) \begin{equation}\rho(s) = \sum_{k=0}^{N-1} \prod_{j=0}^{k-1} J\!\left(\frac{s}{(1+a)^j}\right) H\!\left(\frac{s}{(1+a)^k}\right) +\rho\!\left(\frac{s}{(1+a)^N}\right)\prod_{j=0}^{N-1} J\!\left(\frac{s}{(1+a)^{j}}\right) ,\end{equation}

with an empty product being equal to 1. Observe that, for large k, $H\!\left(\frac{s}{(1+a)^k}\right)$ approaches some constant and $J\!\left(\frac{s}{(1+a)^k}\right)$ approaches

\begin{equation*}\frac{\phi(0) }{1+a} = \frac{1}{1+a} < 1.\end{equation*}

Hence the $\sum_{k=0}^{N-1} \prod_{j=0}^{k-1}$ term in (2.6) converges geometrically fast, and we obtain

(2.7) \begin{equation}\rho(s) = \sum_{k=0}^{\infty} \prod_{j=0}^{k-1} J\!\left(\frac{s}{(1+a)^j}\right) H\!\left(\frac{s}{(1+a)^k}\right) .\end{equation}

Note that $\rho(\mu)$ , featuring in the expression for H(s), is still unknown. Taking $s=\mu$ in (2.7) gives

\begin{equation*}\rho(\mu) = \sum_{k=0}^{\infty} \left(\prod_{j=0}^{k-1} J\!\left(\frac{\mu}{(1+a)^j}\right) \right)\left[ \frac{1 - \phi\!\left(\frac{\mu}{(1+a)^k}\right)}{\frac{\mu}{(1+a)^k}} -\phi\!\left(\frac{\mu}{(1+a)^k}\right) \frac{\mu}{\mu(1+a) - \frac{\mu}{(1+a)^k}} \rho(\mu) \right] ,\end{equation*}

and hence

(2.8) \begin{equation}\rho(\mu) =\frac{\sum_{k=0}^{\infty} \Big(\prod_{j=0}^{k-1} J\Big(\frac{\mu}{(1+a)^j}\Big) \Big )\frac{1 - \phi\Big(\frac{\mu}{(1+a)^k}\Big)}{\frac{\mu}{(1+a)^k}}}{1+\sum_{k=0}^{\infty} \Big (\prod_{j=0}^{k-1} J\Big(\frac{\mu}{(1+a)^j}\Big) \Big )\phi\Big(\frac{\mu}{(1+a)^k}\Big) \frac{(1+a)^k}{(1+a)^{k+1} - 1}}.\end{equation}

We can sum up our analysis in the following first main result.

Theorem 2.1. The Laplace transform of the ruin probability, $\rho(s) = \int_0^{\infty} {\rm e}^{-sx} \mathbb{P}_x(\tau_x < \infty) {\rm d}x$ , is given in (2.7) with H and J given in (2.4), where $\rho(\mu)$ is identified in (2.8).
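For the Poisson case $\phi(s)=\lambda/(\lambda+s)$, Theorem 2.1 is easy to evaluate numerically by truncating the series (2.7) and (2.8); the geometric tail $(1+a)^{-K}$ makes the truncation error negligible. The following sketch (illustrative parameters and truncation level of our choosing) computes $\rho(\mu)$ and checks that the resulting $\rho(s)$ satisfies the functional equation (2.5):

```python
# Numerical evaluation of Theorem 2.1 for Poisson arrivals
# (illustrative parameters; helper names are ours).
lam, mu, a = 1.0, 1.0, 0.5
phi = lambda s: lam / (lam + s)                     # LST of exp(lam) interarrivals

def J(s):
    return phi(s) * mu / (mu * (1 + a) - s)         # J(s) from (2.4)

def rho_mu(K=200):
    """rho(mu) via the truncated series in (2.8)."""
    num = den = 0.0
    prod = 1.0                                      # empty product = 1
    for k in range(K):
        sk = mu / (1 + a) ** k
        num += prod * (1 - phi(sk)) / sk
        den += prod * phi(sk) * (1 + a) ** k / ((1 + a) ** (k + 1) - 1)
        prod *= J(sk)
    return num / (1 + den)

rmu = rho_mu()

def H(s):
    return (1 - phi(s)) / s - J(s) * rmu            # H(s) from (2.4)

def rho(s, K=200):
    """Transform of the ruin probability via the truncated series (2.7)."""
    total, prod = 0.0, 1.0
    for k in range(K):
        sk = s / (1 + a) ** k
        total += prod * H(sk)
        prod *= J(sk)
    return total

# Sanity checks: the functional equation (2.5), and consistency at s = mu.
s = 0.7
print(abs(rho(s) - (J(s) * rho(s / (1 + a)) + H(s))) < 1e-10)
print(abs(rho(mu) - rmu) < 1e-10)
```

The check at $s=\mu$ confirms that the fixed-point value from (2.8) agrees with the series (2.7) evaluated at $\mu$, exactly as the derivation requires.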

Remark 2.1. It should be noticed that $R(x) \equiv 1$ satisfies Equation (2.1), but this trivial solution is not always the ruin probability. In fact, defining $X_n \,:\!=\, U(T_n{-})$ , the surplus just before the nth jump epoch, the discrete-time Markov chain $\{X_n, n \geq 1\}$ satisfies the affine recursion

\begin{equation*}X_{n}=(a+1)X_{n-1} +(C_n-(T_n-T_{n-1})).\end{equation*}

If $a>0$ , then from [Reference Buraczewski, Damek and Mikosch13, Theorem 2.1.3, p. 13] we have that with a strictly positive probability $X_n$ tends to $+\infty$ . Thus $R(x)<1$ if $a>0$ . If $a=0$ then we are facing a G/M/1 queue, whose busy period ends with probability one if and only if $- \phi^{\prime}(0) \geq \frac{1}{\mu}$ .
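A short simulation of this affine recursion illustrates the dichotomy in Remark 2.1: for $a>0$ the multiplicative factor dominates and the chain escapes to $+\infty$ along typical paths. A hedged sketch with arbitrary parameters (names ours):

```python
import random

# The affine recursion X_n = (1+a) X_{n-1} + (C_n - A_n) of Remark 2.1,
# with exp(mu) gains C_n and exp(lam) interarrival times A_n
# (parameters are arbitrary illustrations).
def surplus_chain(x0, a, mu, lam, n, rng):
    x = x0
    for _ in range(n):
        x = (1 + a) * x + rng.expovariate(mu) - rng.expovariate(lam)
    return x

rng = random.Random(7)
paths = [surplus_chain(5.0, a=0.5, mu=1.0, lam=1.0, n=40, rng=rng) for _ in range(50)]
# With a > 0 the factor (1+a)^n dominates the i.i.d. increments,
# so the chain explodes along typical paths.
print(max(paths) > 5.0)
```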

Remark 2.2. Both H(s) and J(s) have a singularity at $s=\mu (1+a)$ , which suggests that the expression for $\rho(s)$ in (2.7) has a singularity for every $s=\mu (1+a)^{j+1}$ , $j=0,1,\dots$ . However, $s=\mu(1+a)$ is a removable singularity, as is already suggested by the form of (2.3), where $s=\mu(1+a)$ also is a removable singularity. To verify formally that $s=\mu(1+a)$ is not a singularity of (2.7), we proceed as follows (the same procedure can be applied for $s=\mu (1+a)^{j+1}$ , $j=1,2,\dots$ ). Isolate the coefficients of the factor $\frac{1}{\mu(1+a)-s}$ in (2.7). Their sum C(s) equals

\begin{equation*}C(s) \,:\!=\, -\phi(s) \mu \rho(\mu) + \phi(s) \mu \sum_{k=1}^{\infty} \prod_{j=1}^{k-1} J\!\left(\frac{s}{(1+a)^j}\right) H\!\left(\frac{s}{(1+a)^k}\right) .\end{equation*}

Introducing $k_1 \,:\!=\, k-1$ and $j_1 \,:\!=\,j-1$ , and using (2.7), we readily see that $C(\mu(1+a))=0$ .

Remark 2.3. For the case of Poisson arrivals, taking $F(x) = 1 - {\rm e}^{-\lambda x}$ , one gets a specific form for $\rho(s)$ , indicating that R(x) is a weighted sum of exponential terms.

More precisely, in this case $\phi(s)=\frac{\lambda}{\lambda +s}$ and then

\begin{equation*}J(s)= \frac{1}{1+a}\frac{\lambda}{\lambda+s}\frac{\mu(1+a)}{\mu(1+a)-s},\qquad H(s)=\frac{1}{\lambda+s}-\frac{\lambda}{\lambda+s}\frac{\mu(1+a)}{\mu(1+a)-s}\frac{\rho(\mu)}{1+a}.\end{equation*}

From (2.5) it follows that

(2.9) \begin{eqnarray}&&\rho(s)=H(s)+J(s)H\!\left(\frac{s}{1+a}\right)+J(s)J\!\left(\frac{s}{1+a}\right)H\!\left(\frac{s}{(1+a)^2}\right)\nonumber\\&&\qquad +\ldots+\prod_{j=0}^{k-1}J\!\left(\frac{s}{(1+a)^j}\right)H\!\left(\frac{s}{(1+a)^k}\right)+\ldots.\end{eqnarray}

Observe that for $k\in \mathbb{N}$ ,

(2.10) \begin{equation} \prod_{j=0}^{k-1}J\!\left(\frac{s}{(1+a)^j}\right)=\frac{1}{(1+a)^k}\prod_{j=0}^{k-1}\frac{\lambda(1+a)^j}{\lambda(1+a)^j+s}\prod_{j=0}^{k-1}\frac{\mu(1+a)^{j+1}}{\mu(1+a)^{j+1}-s}.\end{equation}

Note the following:

  • the first term $\frac{1}{(1+a)^k}$ on the right-hand side of (2.10) gives a geometric decay in (2.9);

  • the first product on the right-hand side of (2.10) is the LST of the sum $E_0+E_1+\ldots+E_{k-1}$ of independent random variables $E_j$ ( $j=0,1,\ldots, k-1$ ) with exponential distributions with parameter $\lambda(1+a)^j$ ; that is, it is the LST of a hypoexponential (generalized Erlang) distribution;

  • the second product represents the LST of $-(F_0+F_1+\ldots+F_{k-1})$ , where $F_j$ , $j=0,1,\ldots, k-1$ , are independent random variables with exponential distributions with parameter $\mu(1+a)^{j+1}$ ;

  • hence $\prod_{j=0}^{k-1}J\!\left(\frac{s}{(1+a)^j}\right)$ describes the LST of the sum of the above random variables weighted by the geometric term.

Moreover, the term $H\!\left(\frac{s}{(1+a)^k}\right)$ appearing in (2.9) is the LST of

\begin{equation*}(1+a)^k \mathbb{P}\!\left(E_k>t\right)- \frac{\rho(\mu)}{1+a}\tilde{f}(t),\end{equation*}

where $\tilde{f}(t)$ is the density of $E_k-F_k$ , with $E_k$ and $F_k$ being independent.

In fact, this can be further generalized to general interarrival times using the representation (2.9) of $\rho(s)$ . Indeed, let T be a generic interarrival time between jumps. Then $J\!\left(\frac{s}{(1+a)^j}\right)$ is the LST of the density of $\frac{T}{(1+a)^j}-F_{j}$ multiplied by $\frac{1}{1+a}$ , where T and $F_j$ are independent. Similarly, $H\!\left(\frac{s}{(1+a)^k}\right)$ is the LST of

\begin{equation*}(1+a)^k \mathbb{P}\!\left(\frac{T}{(1+a)^k}>t\right)- \frac{\rho(\mu)}{1+a}\tilde{f}(t),\end{equation*}

where $\tilde{f}(t)$ is the density of the residual of $\frac{T}{(1+a)^k}$ minus $F_k$ :

\begin{equation*}\tilde{f}(t)=\phi(\mu (1+a)) \mu (1+a)^{k+1} {\rm e}^{\mu (1+a)^{k+1} t}1_{t<0}+\int_0^\infty \mu (1+a) {\rm e}^{-\mu (1+a) y} f\big(y+t/(1+a)^k\big) {\rm d}y \, 1_{t>0}. \end{equation*}

To get this observation we use the identity

\begin{eqnarray*}\phi\!\left(\frac{s}{(1+a)^k}\right)\frac{\mu(1+a)^{k+1}}{\mu(1+a)^{k+1}-s}\, &=& \,\phi(\mu (1+a))\frac{\mu (1+a)^{k+1}}{\mu(1+a)^{k+1}-s}\\&&+\,\mu (1+a) \frac{\phi\!\left(\frac{s}{(1+a)^k}\right)-\phi(\mu (1+a))}{\mu (1+a) -s/(1+a)^k}\end{eqnarray*}

and note that taking the LST of $\phi(\mu (1+a)) \mu (1+a)^{k+1} {\rm e}^{\mu (1+a)^{k+1} t}1_{t<0}$ gives the first term on the right-hand side of the above equality, while taking the LST of $\int_0^\infty \mu (1+a) {\rm e}^{-\mu (1+a) y} f\big(y+t/(1+a)^k\big) {\rm d}y \, 1_{t>0}$ produces the second term there.

Remark 2.4. We now make a few comments about possible generalizations. We could allow a hyper-exponential-K distribution for C, leading to K unknowns $\rho(\mu_1),\dots,\rho(\mu_K)$ which can be found by taking $s=\mu_1,\dots,s=\mu_K$ .

We could also consider the following generalization of the model (1.1): when the ith jump upwards occurs while $U(T_i{-})=u$ , that jump has size $au+C_i$ with probability p, and has size $D_i$ with probability $1-p$ , where $D_1,D_2,\dots$ are independent, exp( $\delta$ )-distributed random variables, independent of $C_1,C_2,\dots$ . By taking $p=1$ we get the old model, while $a=0$ , $\mu=\delta$ gives a classical dual risk model. It is readily verified that, for this generalized model, (2.3) becomes

(2.11) \begin{eqnarray}\rho(s) \, &=& \, \frac{1-\phi(s)}{s} + p \phi(s) \frac{\mu}{\mu(1+a)-s} \left[\rho\!\left(\frac{s}{1+a}\right)-\rho(\mu)\right]\nonumber\\[3pt]&+& \, (1-p) \phi(s) \frac{\delta}{\delta - s} [\rho(s) - \rho(\delta)] .\end{eqnarray}

Introducing

\begin{equation*}H_1(s) \,:\!=\, \frac{\frac{1-\phi(s)}{s} -p \phi(s) \frac{\mu}{\mu(1+a) -s} \rho(\mu) - (1-p) \phi(s) \frac{\delta}{\delta-s} \rho(\delta)}{1 - (1-p) \frac{\delta}{\delta -s} \phi(s)} ,\end{equation*}
\begin{equation*}J_1(s) \,:\!=\, \frac{p \phi(s) \frac{\mu}{\mu(1+a) -s}}{1 - (1-p) \frac{\delta}{\delta -s} \phi(s)} ,\end{equation*}

we rewrite (2.11) as

\begin{equation*}\rho(s) = J_1(s) \rho\!\left(\frac{s}{1+a}\right) + H_1(s),\end{equation*}

resulting in

(2.12) \begin{equation}\rho(s) = \sum_{k=0}^{\infty} \prod_{j=0}^{k-1} J_1\!\left(\frac{s}{(1+a)^j}\right) H_1\!\left(\frac{s}{(1+a)^k}\right) .\end{equation}

Finally, $\rho(\mu)$ and $\rho(\delta)$ have to be determined. One equation is supplied by substituting $s=\mu$ in (2.12) (just as was done below (2.7)). For a second equation we invoke Rouché’s theorem, which implies that, for any $p \in (0,1)$ , the equation $\delta - s - (1-p) \delta \phi(s) = 0$ has exactly one zero, say $s_1$ , in the right half s-plane. Observing that $\rho(s)$ is analytic in that half-plane, so that $\rho(s_1)$ is finite, it follows from (2.11) that

\begin{equation*}\frac{1-\phi(s_1)}{s_1} + p \phi(s_1) \frac{\mu}{\mu (1+a) - s_1}\left[\rho\!\left(\frac{s_1}{1+a}\right) - \rho(\mu)\right] - (1-p) \frac{\delta}{\delta - s_1} \phi(s_1) \rho(\delta) = 0.\end{equation*}

While this provides a second equation, it also introduces a third unknown, viz., $\rho\big(\frac{s_1}{1+a}\big)$ . However, substituting $s = \frac{s_1}{1+a}$ in (2.12) expresses $\rho\big(\frac{s_1}{1+a}\big)$ using $\rho(\mu)$ and $\rho(\delta)$ , thus providing a third equation.
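For Poisson arrivals, $\phi(s)=\lambda/(\lambda+s)$, the Rouché zero $s_1$ can even be computed explicitly: multiplying $\delta - s - (1-p)\delta\phi(s)=0$ by $\lambda+s$ yields the quadratic $s^2-(\delta-\lambda)s-p\delta\lambda=0$, whose roots have negative product, so exactly one root is positive. A small sketch with illustrative parameters:

```python
import math

# Rouché zero s_1 of delta - s - (1-p)*delta*phi(s) in the right half-plane,
# for Poisson arrivals phi(s) = lam/(lam+s) (illustrative parameters).
# The equation reduces to s^2 - (delta - lam) s - p*delta*lam = 0,
# whose roots have a negative product, so exactly one root is positive.
def s1_root(delta, lam, p):
    d = delta - lam
    return (d + math.sqrt(d * d + 4.0 * p * delta * lam)) / 2.0

delta, lam, p = 2.0, 1.0, 0.6
s1 = s1_root(delta, lam, p)
g = lambda s: delta - s - (1 - p) * delta * lam / (lam + s)
print(s1 > 0 and abs(g(s1)) < 1e-12)
```

For a general interarrival LST $\phi$ one would instead locate $s_1$ numerically, e.g. by bisection on the real axis.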

3. The time to ruin

In this section we study the distribution of $\tau_x$ , the time to ruin when starting at level x, as defined in (1.3). Following a similar approach as in the previous section, again distinguishing between the first upward jump occurring before or after x, we can write

\begin{equation*}\mathbb{E}[{\rm e}^{-\alpha \tau_x}] = {\rm e}^{-\alpha x} (1-F(x)) +\int_{t=0}^x \int_{y=0}^{\infty} {\rm e}^{-\alpha y} \mathbb{E}[{\rm e}^{-\alpha \tau_{(1+a)(x-t)+y}}] \mu {\rm e}^{-\mu y} {\rm d}y {\rm d}F(t) .\end{equation*}

Remark 3.1. Taking $\alpha = 0$ yields the ruin probability R(x). In that respect, it would not have been necessary to present a separate analysis of R(x); however, to improve the readability of the paper, we have chosen to demonstrate the analysis technique first for the easier case of R(x).

Introducing the Laplace transform

\begin{equation*}\tau(s,\alpha) \,:\!=\, \int_{x=0}^{\infty} {\rm e}^{-sx} \mathbb{E} \!\left[{\rm e}^{-\alpha \tau_x}\right] {\rm d}x,\end{equation*}

and using calculations very similar to those leading to (2.3), we obtain

(3.1) \begin{equation}\tau(s,\alpha) = \frac{1-\phi(s+\alpha)}{s+\alpha} + \phi(s+\alpha) \frac{\mu}{\mu (1+a) -s} \left[\tau\!\left(\frac{s}{1+a},\alpha\right) - \tau(\mu,\alpha)\right] .\end{equation}

Introducing

(3.2) \begin{equation}H_1(s,\alpha) \,:\!=\, \frac{1-\phi(s+\alpha)}{s+\alpha} - \phi(s+\alpha) \frac{\mu}{\mu(1+a)-s} \tau(\mu,\alpha), \, J_1(s,\alpha) \,:\!=\, \phi(s+\alpha) \frac{\mu}{\mu(1+a)-s},\end{equation}

we rewrite (3.1) as

\begin{equation*}\tau(s,\alpha) = J_1(s,\alpha) \tau \!\left(\frac{s}{1+a},\alpha\right) + H_1(s,\alpha) ,\end{equation*}

which after $N-1$ iterations yields the following (an empty product being equal to 1):

(3.3) \begin{equation}\tau(s,\alpha) = \sum_{k=0}^{N-1} \prod_{j=0}^{k-1} J_1 \!\left(\frac{s}{(1+a)^j},\alpha\right)\! H_1\!\left(\frac{s}{(1+a)^k},\alpha\right) +\tau\!\left(\frac{s}{(1+a)^N},\alpha\right)\prod_{j=0}^{N-1} J_1\!\left(\frac{s}{(1+a)^{j}},\alpha\right)\!.\end{equation}

Observe that, for large k, $H_1\Big(\frac{s}{(1+a)^k},\alpha\Big)$ approaches some function of $\alpha$ and $J_1\Big(\frac{s}{(1+a)^k},\alpha\Big)$ approaches $\frac{\phi(\alpha) }{1+a} < 1$ . Hence the $\sum_{k=0}^{N-1} \prod_{j=0}^{k-1}$ term in (3.3) converges geometrically fast, and we obtain

(3.4) \begin{equation}\tau(s,\alpha) = \sum_{k=0}^{\infty} \prod_{j=0}^{k-1} J_1\!\left(\frac{s}{(1+a)^j},\alpha\right) H_1\!\left(\frac{s}{(1+a)^k},\alpha\right) .\end{equation}

Now, $\tau(\mu,\alpha)$ , featuring in the expression for $H_1(s,\alpha)$ , is still unknown. Taking $s=\mu$ in (3.4) gives

\begin{eqnarray}&&\tau(\mu,\alpha) = \sum_{k=0}^{\infty} \left(\prod_{j=0}^{k-1} J_1\!\left(\frac{\mu}{(1+a)^j},\alpha\right) \right)\nonumber\\&&\qquad \left[ \frac{1 - \phi\!\left(\frac{\mu}{(1+a)^k}+\alpha\right)}{\frac{\mu}{(1+a)^k}+\alpha} -\phi \!\left(\frac{\mu}{(1+a)^k}+\alpha\right) \frac{\mu}{\mu(1+a) - \frac{\mu}{(1+a)^k}} \tau(\mu,\alpha) \right] ,\nonumber\end{eqnarray}

and hence

(3.5) \begin{equation}\tau(\mu,\alpha) =\frac{\sum_{k=0}^{\infty} \Big(\prod_{j=0}^{k-1} J_1\Big(\frac{\mu}{(1+a)^j},\alpha\Big) \Big)\frac{1 - \phi\Big(\frac{\mu}{(1+a)^k}+\alpha\Big)}{\frac{\mu}{(1+a)^k}+\alpha}}{1+\sum_{k=0}^{\infty} \Big (\prod_{j=0}^{k-1} J_1\Big(\frac{\mu}{(1+a)^j},\alpha\Big) \Big )\phi\Big(\frac{\mu}{(1+a)^k}+\alpha\Big) \frac{(1+a)^k}{(1+a)^{k+1} - 1}}.\end{equation}

Thus we have proved the following, which is the second main result of this paper.

Theorem 3.1. The double Laplace transform (with respect to time and initial conditions) $\tau(s,\alpha) = \int_{x=0}^{\infty} {\rm e}^{-sx} \mathbb{E} \!\left[{\rm e}^{-\alpha \tau_x}\right] {\rm d}x$ is given in (3.4) with $H_1$ and $J_1$ given in (3.2), with $\tau(\mu,\alpha)$ identified in (3.5).
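As in Section 2, (3.5) can be evaluated by truncation for Poisson arrivals. The sketch below (illustrative parameters and truncation level of our choosing) confirms the expected monotonicity of $\tau(\mu,\alpha)$ in the discount argument $\alpha$:

```python
# Truncated evaluation of tau(mu, alpha) from (3.5), Poisson arrivals
# (illustrative parameters; helper names are ours).
lam, mu, a = 1.0, 1.0, 0.5
phi = lambda s: lam / (lam + s)

def J1(s, al):
    return phi(s + al) * mu / (mu * (1 + a) - s)    # J_1 from (3.2)

def tau_mu(al, K=200):
    num = den = 0.0
    prod = 1.0                                      # empty product = 1
    for k in range(K):
        sk = mu / (1 + a) ** k
        num += prod * (1 - phi(sk + al)) / (sk + al)
        den += prod * phi(sk + al) * (1 + a) ** k / ((1 + a) ** (k + 1) - 1)
        prod *= J1(sk, al)
    return num / (1 + den)

# tau(mu, 0) is the transform rho(mu) of the ruin probability, and
# tau(mu, alpha) must decrease in the discount argument alpha.
print(tau_mu(0.0) > tau_mu(0.5) > tau_mu(1.0) > 0.0)
```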

4. Exit problems, ruin, and barrier dividend value function

In this section we consider the same model as in the previous sections, but with the restriction that the only growth is a proportional growth occurring according to a Poisson process at rate $\lambda$ ; throughout this section we further assume that $C_{i}\equiv 0$ . In other words,

\begin{equation*}U(t)=x-t+ a\sum_{i=1}^{N(t)}U(T_i{-}), \,t \geq 0, \end{equation*}

and N(t) is a Poisson process with intensity $\lambda>0$ . In Subsection 4.1 we solve the two-sided downward exit problem for this model. In Subsection 4.2 we use a similar method to determine the discounted cumulative dividend payments paid up to the ruin time under the barrier strategy with barrier b and with a discount rate q. But first we briefly discuss an alternative, more straightforward, approach to determining the discounted cumulative dividend payments, pointing out why this approach does not work. We start from the observation that for $x>b$ we have

(4.1) \begin{equation}v(x)=v(b)+x-b.\end{equation}

We now focus on $x\leq b$ . One-step analysis based on the first arrival epoch gives

(4.2) \begin{equation}v(x)=\int_0^x\lambda {\rm e}^{-(\lambda+q)t} v((x-t)(1+a)){\rm d} t, \,0 \leq x<b,\end{equation}

and by taking $z\,:\!=\,x-t$ and taking the derivative with respect to x we end up with the equation

(4.3) \begin{equation}v^{\prime}(x)+(\lambda+q)v(x)=\lambda v(x(1+a)),\qquad x \leq \frac{b}{1+a}.\end{equation}

Moreover, from (4.1) we have

\begin{equation*}v^{\prime}(x)+(\lambda+q)v(x)=\lambda (x(1+a)-b+v(b)),\qquad \frac{b}{1+a} < x \leq b,\end{equation*}

which can easily be solved. Unfortunately, differential-delay equations like (4.3) seem hard to solve explicitly; cf. [Reference Hale and Verduyn Lunel15]. As an alternative, one could try to solve the equation numerically.

Instead, we adopt a different approach, in which we distinguish levels $L_n \,:\!=\, \frac{b}{(a+1)^n}$ and assume that ruin occurs when the level $\frac{b}{(a+1)^N}$ is reached for some value of N. When N is large, the expected amount of discounted cumulative dividends closely approximates the expected amount until ruin at zero occurs. The above choice of levels is suitable because each proportional jump upward brings the process from a value in $(L_{n+1},L_{n})$ to a value in $(L_{n},L_{n-1})$ .
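The defining property of the levels is easy to verify: a proportional jump multiplies the surplus by $1+a$, and since $(1+a)L_{n+1}=L_n$, it maps $(L_{n+1},L_n)$ onto $(L_n,L_{n-1})$. A minimal numerical check (with arbitrary illustrative values of a, b, and N):

```python
# Check that a proportional jump maps (L_{n+1}, L_n) onto (L_n, L_{n-1})
# for the levels L_n = b/(a+1)^n (arbitrary illustrative a, b, N).
a, b, N = 0.5, 10.0, 8
L = [b / (1 + a) ** n for n in range(N + 1)]     # L[0] = b > L[1] > ... > L[N]

ok = True
for n in range(1, N):
    x = (L[n + 1] + L[n]) / 2                    # a point strictly inside (L_{n+1}, L_n)
    y = (1 + a) * x                              # surplus just after a proportional jump
    ok = ok and (L[n] < y < L[n - 1])
print(ok)
```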

4.1. Two-sided downward exit problem and ruin time

Recall that $d_{n}$ is the first time that U reaches (down-crosses) $\frac{b}{(a+1)^{n}}$ , and $u_{n}$ is the first time the risk process up-crosses $\frac{b}{(a+1)^{n}}$ . Hence $u_0$ is the time until level b is first up-crossed, i.e., a dividend is being paid. For an integer N we aim to obtain the Laplace transform $\rho_{N}(x)=\mathbb{E}_{x}\!\left[ e^{-qd_{N}}1_{d_{N}<u_{ 0}}\right]$ defined in (1.7). For large N, $\rho_N(x)$ approximates the LST of the time until ruin in the event that no dividend is ever paid. If $q=0$ , then $\rho_N(x)$ approximates the probability that ruin occurs before dividends are paid, that is, before reaching b. If $q=0$ and N and b are both tending to infinity, then $\rho_N(x)$ approximates the ruin probability R(x) defined in (1.2). For $1\leq n \leq N$ let

\begin{equation*}\rho_{n,N}\,:\!=\,\rho_{N}\!\left(\frac{b}{(a+1)^{n}}\right).\end{equation*}

To simplify notation we define $\rho_{n}\,:\!=\,\rho_{n,N}$ . Clearly, $\rho_{N}=1$ . As mentioned above, we consider levels

\begin{equation*}L_n \,:\!=\, \frac{b}{(a+1)^n},\quad n=0,1,\dots,N;\end{equation*}

let

\begin{equation*}\mathcal{N}\,:\!=\,\left\{L_1,\dots,L_N \right\}. \end{equation*}

To determine $\rho_N(x)$ when $x \in \left( \frac{b}{(a+1)^n},\frac{b}{(a+1)^{n-1}} \right] = (L_n,L_{n-1}]$ , note that, because $d_n < u_0$ , there must be a down-crossing of level $L_{n}$ before level b is up-crossed. We can now distinguish n different possibilities: when starting at x, the surplus process first decreases through $L_{n}$ , or it first increases via jumps above level $L_{n-j}$ before there is a first down-crossing through that same level $L_{n-j}$ , $j=1,\dots,n-1$ . Denoting the time for the former event by

\begin{equation*}T_{n,0} \,:\!=\, d_n 1_{d_n < u_{n-1}},\end{equation*}

and the times for the latter $n-1$ events by

\begin{equation*}T_{n,j}\,:\!=\,u_{n-1}1_{u_{n-1}<d_{n}}+u_{n-2}1_{u_{n-2}<d_{n-1}}+\ldots+u_{n- j}1_{u_{n-j}<d_{n-j+1}}+d_{n-j}1_{d_{n-j}<u_{n-j-1}} ,\end{equation*}

for $j=1,\dots,n-1$ , we can derive the following representation of $\rho_{N}$ . It will turn out to be useful to introduce

\begin{equation*}\bar{G}_{c}(t)=e^{-(\lambda+q)ct}\quad \text{and} \quad g_{c}(t)=\lambda c e^{-(\lambda+q)ct}.\end{equation*}

Theorem 4.1. For $\frac{b}{(a+1)^{n}}<x\leq \frac{b}{(a+1)^{n-1}}$ ,

(4.4) \begin{eqnarray} \rho_{N}(x) \, &=& \, \bar{G}_{1}\!\left(x-\frac{b}{(a+1)^{n}}\right)\rho_{n}\nonumber\\ &+& \,\bar{G}_{(a+1)}\circledast g_1\!\left(x-\frac{b}{(a+1)^{n}}\right)\rho_{n-1}\nonumber\\ &+& \,\bar{G}_{(a+1)^{2}} \circledast g_{(a+1)} \circledast g_1\!\left(x-\frac{b}{(a+1)^{n}}\right)\rho_{n-2}\nonumber\\ &+& \,\ldots\nonumber\\&+& \,\bar{G}_{(a+1)^{n-1}} \circledast g_{(a+1)^{n-2}} \circledast \ldots \circledast g_1\!\left(x-\frac{b}{(a+1)^{n}}\right)\rho_{1}, \end{eqnarray}

where $\circledast$ denotes convolution.

Proof. Let $\frac{b}{(a+1)^{n}}<x\leq \frac{b}{(a+1)^{n-1}}$ . The n terms in the right-hand side of (4.4) represent the n disjoint possibilities where $d_n < u_0$ . Notice that $T_{n,j}$ is the first time that the process U reaches a level in $\mathcal{N}$ via a down-crossing, by reaching $\frac{b}{(a+1)^{n-j}}$ . Furthermore, $\rho_{n-j}$ is the Laplace transform of the time to reach $\frac{b}{(a+1)^{N}}$ starting at $\frac{b}{(a+1)^{n-j}}$ . Now first considering $T_{n,0}$ , we have

\begin{equation*} \mathbb{E}_{x}\!\left[ e^{-qd_{n}}1_{d_{n}<u_{n-1}}\right]= {\rm e}^{-(q+\lambda)\big(x - \frac{b}{(a+1)^n}\big)} =\bar{G}_1\!\left(x-\frac{b}{(a+1)^{n}}\right) . \end{equation*}

By the strong Markov property, considering $T_{n,j}$ , we derive

\begin{eqnarray} &&\mathbb{E}_{x}\!\left[e^{-q\big(u_{n-1}+\dots+u_{n-j} +d_{n-j}\big)} 1_{u_{n-1}<d_n,\dots,u_{n-j}<d_{n-j+1},d_{n-j}<u_{n-j-1}} \right]=\nonumber\\ &&\nonumber\\ &&\qquad \int_{t_{1}=0}^{A_{n,0}}\int_{t_{2}=0}^{A_{n,1}}\ldots\int_{t_{j}=0}^{A_{n,j-1}}\bar{G}_1(A_{n,j})g_1(t_{j})g_1(t_{j- 1})\ldots g_1(t_{1}) {\rm d}t_{j}\ldots {\rm d}t_{1} , \nonumber \end{eqnarray}

where

\begin{equation*} A_{n,0}\,:\!=\,x-\frac{b}{(a+1)^{n}}, \end{equation*}

and for $k=1,2,\ldots ,n$ ,

(4.5) \begin{equation} A_{n,k}\,:\!=\,(a+1)(A_{n,k-1}-t_{k}) , \end{equation}

with $t_k$ an integration variable. By the change of variables $y_{j}=t_{j}/(a+1)^{j-1}$ , we obtain that

\begin{eqnarray} &&\mathbb{E}_{x}\!\left[e^{-q\big(u_{n-1}+\dots+u_{n-j} +d_{n-j}\big)} 1_{u_{n-1}<d_n,\dots,u_{n-j}<d_{n-j+1},d_{n-j}<u_{n-j-1}} \right]=\nonumber\\ &&\qquad =\bar{G}_{(a+1)^{j}} \circledast g_{(a+1)^{j-1}} \circledast \ldots \circledast g_1\!\left(x-\frac{b}{(a+1)^{n}}\right) ,\nonumber \end{eqnarray}

and the theorem follows.

From Theorem 4.1 it follows that to obtain $\rho_{n}$ we need to solve the following $N-1$ equations for $n=1,2,\ldots ,N-1$ :

\begin{eqnarray} &&\, \rho_{n}= \bar{G}_{1}\!\left(\frac{b}{(a+1)^{n}}-\frac{b}{(a+1)^{n+1}}\right)\rho_{n+1}\nonumber\\ &&\qquad +\,\bar{G}_{(a+1)} \circledast g_1\!\left(\frac{b}{(a+1)^{n}}-\frac{b}{(a+1)^{n+1}}\right)\rho_{n}\nonumber\\ &&\qquad +\,\bar{G}_{(a+1)^{2}} \circledast g_{(a+1)} \circledast g_1\!\left(\frac{b}{(a+1)^{n }}-\frac{b}{(a+1)^{n+1}}\right)\rho_{n-1}\nonumber\\ &&\qquad +\, \ldots\nonumber\\&&\qquad +\,\bar{G}_{(a+1)^{n}} \circledast g_{(a+1)^{n-1}} \circledast \ldots \circledast g_1\!\left(\frac{b}{(a+1)^{n }}-\frac{b}{(a+1)^{n+1}}\right)\rho_{1}. \nonumber \end{eqnarray}

Defining

\begin{equation*}\gamma_{0}(x)\,:\!=\,\bar{G}_1(x), \end{equation*}

and for $n\geq 1$ ,

\begin{equation*}\gamma_{n}(x)\,:\!=\,\bar{G}_{(a+1)^{n}} \circledast g_{(a+1)^{n-1}} \circledast \ldots \circledast g_1(x), \end{equation*}

by the formula for the convolution of exponentials given in [Reference Ross22, Chapter 5] we have for $n\geq 1$

\begin{equation*} \gamma_{n}(x)= \left(\frac{\lambda}{\lambda+q}\right)^n (a+1)^{n(n-1)/2}\sum_{i=0}^{n}\frac{e^{- (\lambda + q)(a+1)^{i}x}}{\prod_{j\neq i} ((a+1)^{j}-(a+1)^{i})}, \end{equation*}

where the factor $(a+1)^{n(n-1)/2}=\prod_{k=0}^{n-1}(a+1)^{k}$ collects the constants in front of the exponentials in $g_{(a+1)^{k}}$ , $k=0,\dots,n-1$ .
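This closed form is easy to check numerically (illustrative parameters below; note that the constants $\lambda(a+1)^{k}$ in front of the exponentials in $g_{(a+1)^{k}}$ contribute an overall factor $(a+1)^{n(n-1)/2}$, which the code includes): for $n=1$ we compare with the directly computed convolution, and for $n=2$ with a brute-force midpoint-rule evaluation of the double convolution.

```python
import math

lam, q, a = 1.0, 0.1, 0.5   # illustrative parameters

def gamma_closed(n, x):
    """gamma_n(x) = Gbar_{(a+1)^n} (*) g_{(a+1)^{n-1}} (*) ... (*) g_1 (x)."""
    c = [(a + 1) ** i for i in range(n + 1)]
    s = sum(math.exp(-(lam + q) * c[i] * x)
            / math.prod(c[j] - c[i] for j in range(n + 1) if j != i)
            for i in range(n + 1))
    # (a+1)^{n(n-1)/2} collects the constants lam*(a+1)^k of g_{(a+1)^k}
    return (lam / (lam + q)) ** n * (a + 1) ** (n * (n - 1) // 2) * s

def gamma2_bruteforce(x, K=300):
    """Midpoint rule for Gbar_{(a+1)^2} (*) g_{(a+1)} (*) g_1 (x)."""
    Gbar = lambda c, t: math.exp(-(lam + q) * c * t)
    g = lambda c, t: lam * c * math.exp(-(lam + q) * c * t)
    hs = x / K
    total = 0.0
    for i in range(K):
        s = (i + 0.5) * hs
        ht = (x - s) / K
        inner = sum(Gbar((a + 1) ** 2, x - s - (j + 0.5) * ht)
                    * g(a + 1, (j + 0.5) * ht) for j in range(K)) * ht
        total += inner * g(1.0, s) * hs
    return total
```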

Thus we have the following set of linear equations for $n=1,2,\ldots,N-1$ :

\begin{equation*} \rho_{n}=\sum_{j=0}^{n}\gamma_{j}\!\left(\frac{b}{\left(a+1\right)^{n}}\left(1-\frac{1}{a+1}\right)\right)\rho_{n+1-j}.\end{equation*}

Notice that $\rho_{N}=1$ . The above formula can be rewritten as follows:

(4.6) \begin{equation}\rho_n = \sum_{j=0}^n \gamma_{j,n} \rho_{n+1-j} = \sum_{j=1}^{n+1} \gamma_{n+1-j,n} \rho_j ,\end{equation}

where

\begin{equation*}\gamma_{j,n} \,:\!=\, \gamma_j\!\left(\frac{b}{(a+1)^n}\left(1 - \frac{1}{a+1}\right)\right).\end{equation*}

Introducing the $(N-1) \times (N-1)$ matrix $\Gamma$ , whose nth row is $(\gamma_{n,n},\gamma_{n-1,n},\dots,$ $\gamma_{0,n},0,\dots,0)$ , and the column vector $\rho \,:\!=\, (\rho_1,\dots,\rho_{N-1})^T$ , we can write the set of equations (4.6) as

\begin{equation*}\rho = \Gamma \rho + Z,\end{equation*}

where $Z =(0,\dots,0, \gamma_{0,N-1})^T$ . Hence, with I the $(N-1) \times (N-1)$ matrix with ones on the diagonal and zeroes at all other positions,

\begin{equation*}\rho = (I - \Gamma)^{-1} Z .\end{equation*}
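The scheme is straightforward to implement. The sketch below (pure Python, illustrative parameters; $\gamma_j$ is evaluated via its closed form, including the factor $(a+1)^{j(j-1)/2}$ arising from the constants in $g_{(a+1)^k}$) assembles $I-\Gamma$ and Z and solves the system. For $q=0$, $\rho_n$ is then the (approximate) probability of ruin before any dividend is paid, starting from $L_n$, so the solution should be increasing in n.

```python
import math

def gamma_fn(j, x, lam, q, a):
    """Closed form for gamma_j(x); gamma_0(x) = exp(-(lam+q)x)."""
    c = [(a + 1) ** i for i in range(j + 1)]
    s = sum(math.exp(-(lam + q) * c[i] * x)
            / math.prod(c[k] - c[i] for k in range(j + 1) if k != i)
            for i in range(j + 1))
    return (lam / (lam + q)) ** j * (a + 1) ** (j * (j - 1) // 2) * s

def solve_rho(b, a, lam, q, N):
    """Solve rho = (I - Gamma)^{-1} Z for (rho_1, ..., rho_{N-1})."""
    size = N - 1
    A = [[float(r == col) for col in range(size)] for r in range(size)]  # I - Gamma
    rhs = [0.0] * size
    for n in range(1, N):                             # equation for rho_n
        arg = b / (a + 1) ** n * (1 - 1 / (a + 1))    # L_n - L_{n+1}
        for j in range(1, n + 2):                     # coefficient of rho_j
            coef = gamma_fn(n + 1 - j, arg, lam, q, a)
            if j == N:
                rhs[n - 1] += coef                    # boundary value rho_N = 1
            else:
                A[n - 1][j - 1] -= coef
    # Gaussian elimination with partial pivoting
    for col in range(size):
        piv = max(range(col, size), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, size):
            f = A[r][col] / A[col][col]
            for k in range(col, size):
                A[r][k] -= f * A[col][k]
            rhs[r] -= f * rhs[col]
    rho = [0.0] * size
    for r in range(size - 1, -1, -1):
        rho[r] = (rhs[r] - sum(A[r][k] * rho[k] for k in range(r + 1, size))) / A[r][r]
    return rho
```

For $N=2$ the system collapses to the scalar relation $\rho_1=\gamma_0/(1-\gamma_1)$, which offers a convenient cross-check.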

4.2. Expected discounted dividends

Recall that $v_{N}(x)$ , defined in (1.6), is the expected discounted dividends under the barrier strategy until reaching $\frac{b}{(a+1)^{N}}$ , that is, up to $\tau_x^b(N)$ for the regulated process $U^b(t)$ defined in (1.4). Note that

\begin{equation*}v(x)=\lim_{N\rightarrow+\infty} v_N(x)\end{equation*}

for v(x) defined in (1.5). Let

(4.7) \begin{equation} v_{n}\,:\!=\,v_{N}\!\left(\frac{b}{(a+1)^{n}}\right)\quad\text{for $n=0,1,\dots,N-1$.}\end{equation}

The next theorem identifies $v_{N}(x)$ .

Theorem 4.2. For $\frac{b}{(a+1)^{n}}<x\leq \frac{b}{(a+1)^{n-1}}$ ,

(4.8) \begin{eqnarray} v_{N}\!\left(x\right)\,& =&\, \bar{G}_{1}\!\left(x-\frac{b}{\left(a+1\right)^{n}}\right)v_{n}\nonumber\\ && +\, \bar{G}_{\left(a+1\right)} \circledast g_1\!\left(x-\frac{b}{\left(a+1\right)^{n}}\right)v_{n-1}\nonumber\\ && +\, \bar{G}_{\left(a+1\right)^{2}} \circledast g_{\left(a+1\right)} \circledast g_1\!\left(x-\frac{b}{\left(a+1\right)^{n}}\right)v_{n-2}\nonumber\\ && +\, \quad\ldots\nonumber\\&& +\, \bar{G}_{\left(a+1\right)^{n-1}} \circledast g_{\left(a+1\right)^{n-2}} \circledast \ldots \circledast g_1\!\left(x-\frac{b}{\left(a+1\right)^{n}}\right)v_{1}\nonumber\\ && +\, 1 \circledast g_{\left(a+1\right)^{n-1}} \circledast \ldots \circledast g_1\!\left(x-\frac{b}{\left(a+1\right)^{n}}\right)v\!\left(b\right)\nonumber\\ && +\, \left(a+1\right)^{n}\mathcal{Q} \circledast g_{\left(a+1\right)^{n-1}} \circledast \ldots \circledast g_1\!\left(x-\frac{b}{\left(a+1\right)^{n}}\right), \end{eqnarray}

where $\mathcal{Q}(x)=x$ and $\circledast$ again denotes convolution.

Proof. The proof follows exactly the same reasoning as the proof of Theorem 4.1: we consider again the disjoint events in which the first down-crossing of a level from $\mathcal{N}$ occurs at $L_{n-j}$ , $j=0,\dots,n-1$ . However, we now do not exclude the possibility that $L_0 = b$ is up-crossed before level $L_N$ is reached. This gives rise to the last two lines of (4.8). More precisely, let $L_{n}<x\leq L_{n-1}$ and let $\mathcal{A}_{n}$ be the event that level $ L_{0}=b$ is up-crossed before any of the levels $L_{j}$ , $j=1,\ldots ,N$ , is down-crossed. This occurs when each of the n successive jumps occurs before the corresponding down-crossing, i.e. when $u_{n-j}<d_{n+1-j}$ for $j=1,\ldots ,n$ . For $L_n<x\leq L_{n-1}$ , the time to this event is

\begin{equation*}\Upsilon_{n}(x)\,:\!=\,u_{n-1}1_{u_{n-1}<d_{n}}+u_{n-2}1_{u_{n-2}<d_{n-1}}+\ldots +u_{1 }1_{u_{1}<d_{2}}+u_{0}1_{u_{0}<d_{1}}.\end{equation*}

The Laplace transform of $\Upsilon_{n}$ (or the discounted time until $\mathcal{A}_{n}$ occurs), starting at x with $L_{n}<x\leq L_{n-1}$ , can be obtained by arguments similar to those leading to (4.6); that is,

\begin{eqnarray}&&\mathbb{E}_{x}\!\left[e^{-q\big(u_{n-1}+\dots+u_{0} \big)} 1_{u_{n-1}<d_n,\dots , u_{0}<d_{1}} \right]\nonumber\\&&= 1 \circledast g_{(a+1)^{n-1}} \circledast \ldots \circledast g_1\!\left(x-\frac{b}{(a+1)^{n}}\right).\nonumber\end{eqnarray}

Once the process up-crosses b, a dividend is paid and the process restarts at level b. Thus the second-to-last line of (4.8) gives the expected discounted dividends paid after time $u_{0}$ , when the process restarts at level b (not including the dividend paid at time $u_{0}$ itself). The expected discounted dividend paid at time $u_{n-1}+\dots + u_{0}$ is

\begin{eqnarray*}&&\int_{t_{1}=0}^{A_{n,0}}\int_{t_{2}=0}^{A_{n,1}}\ldots \int_{t_{n}=0}^{A_{n,n-1}}A_{n,n}g_1(t_{n})g_1(t_{n- 1})\ldots g_1(t_{1}) {\rm d}t_{n}\ldots {\rm d}t_{1}\nonumber\\&&=(a+1)^{n}\mathcal{Q} \circledast g_{(a+1)^{n-1}} \circledast \ldots \circledast g_{1}\!\left(x-\frac{b}{(a+1)^{n}}\right),\nonumber\end{eqnarray*}

where $A_{n,n}$ is defined in (4.5) and the last equality is obtained by change of variables $y_{j}=\frac{t_{j}}{(1+a)^{j-1}}$ , $j=1,2,\ldots ,n$ .

It remains to determine $v_0=v(b),v_1,\dots,v_{N-1}$ , since then from Theorem 4.2 we have $v_N(x)$ for all $x \in (0,b]$ . Notice that $v_{N}\,:\!=\,v_{N}\!\left(\frac{b}{(a+1)^{N}}\right)=0$ . We first derive an equation for v(b). Let

\begin{equation*}\delta_{n}\,:\!=\,(a+1)^{n}\mathcal{Q} \circledast g_{(a+1)^{n-1}} \circledast \ldots \circledast g_1\!\left(\frac{b}{(a+1)^{n-1}}\left(1-\frac{1}{a+1}\right) \right),\end{equation*}

and let

\begin{equation*}\omega_{n}\,:\!=\,1 \circledast g_{(a+1)^{n-1}} \circledast \ldots \circledast g_{1}\!\left(\frac{b}{(a+1)^{n-1}}\left(1-\frac{1}{a+1}\right)\right).\end{equation*}

We distinguish between the following cases, when starting from b: (i) level $L_1 = \frac{b}{a+1}$ is reached before b is up-crossed again; this gives rise to the first term in the right-hand side of (4.9) below; (ii) b is up-crossed before level $L_1$ is reached. Thus

(4.9) \begin{eqnarray}v(b)\, &=& \,\bar{G}_1\!\left(b-\frac{b}{a+1}\right) v\!\left(\frac{b}{a+1}\right)+ \frac{\lambda}{\lambda+q}\left(1-\bar{G}_1\!\left(b-\frac{b}{a+1}\right)\right)v(b)\nonumber\\&&+\,(a+1)\mathcal{Q} \circledast g_{1}\!\left(b-\frac{b}{a+1}\right).\end{eqnarray}

Notice that

\begin{equation*}\frac{\lambda}{\lambda+q}\Big(1-\bar{G}_1\Big(b-\frac{b}{a+1}\Big)\Big)=1 \circledast g_{1}\!\left(b-\frac{b}{a+1}\right)=\omega_{1}.\end{equation*}

Hence

\begin{equation*}v_{0}=\gamma_{0}\Big(b-\frac{b}{a+1}\Big)v_{1} + \omega_{1}v_{0} + \delta_1.\end{equation*}

By taking $x = \frac{b}{(a+1)^{n}}$ in Theorem 4.2, we get for $n= 1,\dots,N-1$ ,

(4.10) \begin{equation}v_{n}=\sum_{j=0}^{n}\gamma_{j}\!\left(\frac{b}{(a+1)^{n}}\left(1-\frac{1}{a+1}\right)\right) v_{n+1-j}+\omega_{n+1}v_{0}+\delta_{n+1}.\end{equation}

Introducing the column vector $V \,:\!=\, (v_0,\dots,v_{N-1})^T$ , we can write the equation for $v_0$ and the set of equations (4.10) together as

\begin{equation*}V = \Psi V + \Delta,\end{equation*}

where $\Delta \,:\!=\, (\delta_{1},\dots,\delta_{N})^{T}$ and $\Psi$ is an $N\times N$ matrix with row n, $n=0,\dots,N-1$ , equal to $(\omega_{n+1},\gamma_{n,n},\dots,$ $\gamma_{0,n},0,\dots,0)$ . Notice that row $N-1$ is $(\omega_{N},\gamma_{N-1,N-1},\dots,\gamma_{1,N-1})$ . Hence

\begin{equation*}V = (I - \Psi)^{-1} \Delta .\end{equation*}
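For a concrete impression, the sketch below assembles this system with illustrative parameters (all numerical values are ours, not the paper's): $\gamma_j$ is evaluated in closed form, while $\omega_n$ and $\delta_n$ are evaluated by brute-force convolution on a grid, both at the argument $L_{n-1}-L_n$ , consistent with the terms $\omega_1$ and $(a+1)\mathcal{Q}\circledast g_1\big(b-\frac{b}{a+1}\big)$ appearing in (4.9).

```python
import math

lam, q, a, b, N = 1.0, 0.2, 0.5, 1.0, 6   # illustrative parameters

def gamma_fn(j, x):
    c = [(a + 1) ** i for i in range(j + 1)]
    s = sum(math.exp(-(lam + q) * c[i] * x)
            / math.prod(c[k] - c[i] for k in range(j + 1) if k != i)
            for i in range(j + 1))
    return (lam / (lam + q)) ** j * (a + 1) ** (j * (j - 1) // 2) * s

def conv_chain(base, m, x, K=200):
    """base (*) g_{(a+1)^{m-1}} (*) ... (*) g_1 at x, by grid convolution."""
    h = x / K
    f = [base(i * h) for i in range(K + 1)]
    for k in range(m - 1, -1, -1):
        gk = [lam * (a + 1) ** k * math.exp(-(lam + q) * (a + 1) ** k * i * h)
              for i in range(K + 1)]
        f = [h * sum(f[j] * gk[i - j] for j in range(i + 1)) for i in range(K + 1)]
    return f[K]

# assemble I - Psi (row n corresponds to the equation for v_n) and Delta
IP = [[float(r == col) for col in range(N)] for r in range(N)]
Delta = [0.0] * N
for n in range(N):
    arg = b / (a + 1) ** n * (1 - 1 / (a + 1))            # L_n - L_{n+1}
    IP[n][0] -= conv_chain(lambda t: 1.0, n + 1, arg)     # omega_{n+1}
    Delta[n] = (a + 1) ** (n + 1) * conv_chain(lambda t: t, n + 1, arg)  # delta_{n+1}
    for j in range(n + 1):
        if n + 1 - j < N:                                  # v_N = 0 drops out
            IP[n][n + 1 - j] -= gamma_fn(j, arg)

# Gaussian elimination for V = (v_0, ..., v_{N-1})
for col in range(N):
    piv = max(range(col, N), key=lambda r: abs(IP[r][col]))
    IP[col], IP[piv] = IP[piv], IP[col]
    Delta[col], Delta[piv] = Delta[piv], Delta[col]
    for r in range(col + 1, N):
        f = IP[r][col] / IP[col][col]
        for k in range(col, N):
            IP[r][k] -= f * IP[col][k]
        Delta[r] -= f * Delta[col]
V = [0.0] * N
for r in range(N - 1, -1, -1):
    V[r] = (Delta[r] - sum(IP[r][k] * V[k] for k in range(r + 1, N))) / IP[r][r]
```

Since each dividend is at most ab and the expected discounted number of jumps is $\lambda/q$, the computed values should stay below $ab\lambda/q$.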

Remark 4.1. Our analysis can also be used to solve the two-sided upward exit problem for our risk process. We recall that $\mu_{n}=\mu_{N}\!\left(\frac{b}{(a+1)^n}\right)$ , with $\mu_{N}({\cdot})$ as defined in (1.8). Then, by the same arguments as those leading to (4.8) and (4.10), for $\frac{b}{(a+1)^{n}}<x\leq \frac{b}{(a+1)^{n-1}}$ we obtain

\begin{eqnarray} \mu_{N}(x)&&\,=\, \bar{G}_{1}\!\left(x-\frac{b}{(a+1)^{n}}\right)\mu_{n}\nonumber\\ &&\quad +\,\bar{G}_{(a+1)} \circledast g_1\!\left(x-\frac{b}{(a+1)^{n}}\right)\mu_{n-1}\nonumber\\ &&\quad +\,\bar{G}_{(a+1)^{2}} \circledast g_{(a+1)} \circledast g_1\!\left(x-\frac{b}{(a+1)^{n}}\right)\mu_{n-2}\nonumber\\ &&\quad +\,\quad \ldots \nonumber \\ &&\quad +\,\bar{G}_{(a+1)^{n-1}} \circledast g_{(a+1)^{n-2}} \circledast \ldots \circledast g_1\!\left(x-\frac{b}{(a+1)^{n}}\right)\mu_{1}\nonumber\\ &&\quad +\,1 \circledast g_{(a+1)^{n-1}} \circledast \ldots \circledast g_1\!\left(x-\frac{b}{(a+1)^{n}}\right) . \nonumber \end{eqnarray}

Similarly to the equation for $v_0$ and Equation (4.10), we then obtain

(4.11) \begin{align} \mu_{0} & = \gamma_{0}\Big(b-\frac{b}{a+1}\Big)\mu_{1}+\omega_{1},\\\mu_{n} & =\sum_{j=0}^{n}\gamma_{j}\!\left(\frac{b}{(a+1)^{n}}\left(1-\frac{1}{a+1}\right)\right) \mu_{n+1-j}+\omega_{n+1}, \, n=1,\dots,N-1. \nonumber \end{align}

Moreover, observe that $\mu_{N} = \mu_{N}\!\left(\frac{b}{(a+1)^{N}}\right)=0$ , which closes the set of equations (4.11).

5. Exit times and barrier dividends value function with Brownian perturbation

In this section we extend the model of Section 4 by allowing small perturbations between jumps. These perturbations are modeled by a Brownian motion X(t) with drift $\eta$ and variance $\sigma^{2}$ ; that is,

(5.1) \begin{equation}X(t)=\eta t +\sigma B(t), \end{equation}

for a standard Brownian motion B(t). Hence our risk process is formally defined as

\begin{equation*}U(t)=x+X(t)+\sum_{i=1}^{N(t)}aU(T_i{-}), \,t \geq 0, \end{equation*}

where N(t) is a Poisson process with intensity $\lambda >0$ . We apply the fluctuation theory of one-sided Lévy processes to solve the two-sided exit problem (Subsection 5.1) and to obtain the expected discounted barrier dividends (Subsection 5.2). The key functions for this fluctuation theory are the scale functions; see [Reference Kyprianou17]. To introduce these functions let us first define the Laplace exponent of X(t):

\begin{equation*}\psi(\theta) \,:\!=\, \frac{1}{t}\log \mathbb{E}\big[e^{\theta X(t)}\big] = \eta \theta + \frac{\sigma^2}{2} \theta^2 .\end{equation*}

This function is strictly convex and differentiable, equals zero at zero, and tends to infinity at infinity. Hence its right inverse $\Phi(q)$ exists for $q\geq 0$ . The first scale function $W^{(q)}(x)$ is the unique right-continuous function vanishing on the negative half-line whose Laplace transform is

\begin{equation*}\int_0^{\infty} e^{-\theta x} W^{(q)}(x) {\rm d}x = \frac{1}{\psi(\theta)-q} ,\end{equation*}

for $\theta>\Phi(q)$ . With the first scale function we can associate a second scale function via $Z^{(q)}(x)\,:\!=\,1+q\int_{0}^{x}W^{(q)}(y)dy$ . In the case of linear Brownian motion as defined in (5.1), the (first) scale function for a Brownian motion with drift $\eta$ and variance $\sigma^{2}$ equals (cf. [Reference Kuznetsov, Kyprianou and Rivero16])

\begin{equation*} W^{(q)}(x) =\frac{ 1}{\sqrt{\eta^{2}+2q\sigma^{2}}}\left[e^{\Big(\sqrt{\eta^{2}+2q\sigma^{2}}-\eta\Big)\frac{x}{\sigma^{2}}}-e^{-\Big(\sqrt{\eta^{2}+2q\sigma^{2}}+\eta\Big)\frac{x}{\sigma^{2}}}\right]. \end{equation*}
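This expression can be verified directly against the Laplace-transform characterization of $W^{(q)}$ (illustrative parameters; $\theta$ is chosen larger than $\Phi(q)=(\sqrt{\eta^2+2q\sigma^2}-\eta)/\sigma^2$):

```python
import math

eta, sig, q = 0.5, 1.0, 0.3                    # illustrative parameters
delta = math.sqrt(eta ** 2 + 2 * q * sig ** 2)

def W(x):
    """Scale function of X(t) = eta t + sig B(t); vanishes on the negative half-line."""
    if x < 0:
        return 0.0
    return (math.exp((delta - eta) * x / sig ** 2)
            - math.exp(-(delta + eta) * x / sig ** 2)) / delta

def psi(th):
    """Laplace exponent of X."""
    return eta * th + 0.5 * sig ** 2 * th ** 2

theta = 2.0                                     # > Phi(q) = (delta - eta)/sig^2
h, T = 0.01, 40.0                               # trapezoid rule on [0, T]
n = int(T / h)
lt = h * (0.5 * W(0.0) + 0.5 * math.exp(-theta * T) * W(T)
          + sum(math.exp(-theta * i * h) * W(i * h) for i in range(1, n)))
# lt should approximate 1 / (psi(theta) - q)
```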

Let $\alpha<x<\beta$ and

\begin{equation*}d_{\alpha}^X=\min\{t\,:\, X(t)=\alpha\}\quad\text{and}\quad u_{\beta}^X=\min\{t\,:\,X(t)=\beta\}.\end{equation*}

Throughout this section we use the following three facts given in Theorem 8.1 and Theorem 8.7 of Kyprianou [Reference Kyprianou17]:

  1. 1.

    (5.2) \begin{equation} \mathbb{E}_{x}\!\left[e^{-qu_{\beta}^X}1_{u_{\beta}^X< d_{\alpha}^X}\right]=\frac{W^{(q)}(x-\alpha)}{W^{(q)}(\beta - \alpha)}. \end{equation}
  2. 2.

    (5.3) \begin{equation} \mathbb{E}_{x}\!\left[e^{- qd_{\alpha}^X}1_{d_{\alpha}^X<u_{\beta}^X}\right]=Z^{(q)}(x-\alpha)-\frac{W^{(q)}(x-\alpha)}{W^{(q)}(\beta - \alpha)}Z^{(q)}(\beta-\alpha).\end{equation}
  3. 3. Let $\mathcal{E}_{q}$ be an exponentially distributed random variable with parameter q independent of the process X. Then for $\alpha <x< \beta$ ,

    (5.4) \begin{eqnarray}&&\frac{\mathbb{P}_{x}\Big(X(\mathcal{E}_{q})\in (y,y+{\rm d}y), \mathcal{E}_{q}<u_{\beta}^X\wedge d_{\alpha}^X\Big)}{q{\rm d}y}=u^{(q)}_{\alpha,\beta}(x,y)\nonumber\\&&\qquad =\frac{W^{(q)}(x-\alpha)}{W^{(q)}(\beta - \alpha)}W^{(q)}(\beta-y)-W^{(q)}(x-y).\end{eqnarray}
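These three facts can be cross-checked numerically: $\mathcal{E}_q$ arrives before the two-sided exit with probability $q\int_\alpha^\beta u^{(q)}_{\alpha,\beta}(x,y)\,{\rm d}y$, so this mass must combine with (5.2) and (5.3) to one. The sketch below (illustrative parameters; it uses the standard second scale function $Z^{(q)}(x)=1+q\int_0^x W^{(q)}(y)\,{\rm d}y$) performs this check for linear Brownian motion.

```python
import math

eta, sig, q = 0.5, 1.0, 0.3                    # illustrative parameters
delta = math.sqrt(eta ** 2 + 2 * q * sig ** 2)

def W(x):
    if x < 0:
        return 0.0
    return (math.exp((delta - eta) * x / sig ** 2)
            - math.exp(-(delta + eta) * x / sig ** 2)) / delta

def Z(x):
    """Z^{(q)}(x) = 1 + q int_0^x W(y) dy, in closed form."""
    c1 = (delta - eta) / sig ** 2
    c2 = (delta + eta) / sig ** 2
    return 1.0 + q / delta * ((math.exp(c1 * x) - 1) / c1 - (1 - math.exp(-c2 * x)) / c2)

alpha, beta, x = 0.0, 1.0, 0.4
wr = W(x - alpha) / W(beta - alpha)
up = wr                                                     # (5.2)
down = Z(x - alpha) - wr * Z(beta - alpha)                  # (5.3)

def u_dens(y):                                              # (5.4)
    return wr * W(beta - y) - W(x - y)

h = 1e-3
n = int((beta - alpha) / h)
mass = h * (0.5 * (u_dens(alpha) + u_dens(beta))            # trapezoid rule
            + sum(u_dens(alpha + i * h) for i in range(1, n)))
total = up + down + q * mass                                # should equal 1
```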

5.1. Downward exit problem and ruin time

In this subsection we obtain

(5.5) \begin{equation}\rho_{N}(x)=\mathbb{E}_{x}\big[e^{-qd_{N}}1_{d_{N}<u_{0}}\big].\end{equation}

This is done in three steps. In Step 1 we determine the LST of the time needed, starting from some $x \in (L_{n},L_{n-1})$ , to reach a level in $\mathcal{N}$ by down-crossing $L_{n-k}$ , $k = 0, 1, \dots,n-1$ . In Step 2 we determine the LST of the time needed, starting from some $x \in (L_{n},L_{n-1})$ , to reach a level in $\mathcal{N}$ by up-crossing $L_{n-k}$ , $k = 1,2, \dots,n$ . In Step 3 we express $\rho_N(x)$ in terms of $\rho_1,\dots,\rho_N$ , with $\rho_n$ the LST of the time needed to down-cross $L_N$ , starting from $L_n$ , before up-crossing $L_0$ . We construct a system of linear equations in these $\rho_n$ , with the LSTs of Steps 1 and 2 featuring as coefficients in the equations.

Step 1: The time until the first down-crossing of $L_{n-k}$

Let $L_{n}<x<L_{n-1}$ , and let $d_n^X$ and $u_{n-1}^X$ respectively denote the times at which the X process first down-crosses $L_n$ and first up-crosses $L_{n-1}$ , when starting from x. By (5.3) we have

\begin{eqnarray}&&\xi_{n}(x-L_{n})\,:\!=\,\mathbb{E}_{x}\Big[e^{-qd_{n}^X}1_{d^X_{n}<u^X_{n-1}\wedge \mathcal{E}_{\lambda}}\Big]\nonumber\\&&=Z^{(q+\lambda)}(x- L_{n})-\frac{W^{(q+\lambda)} (x- L_{n})}{W^{(q+\lambda)}(L_{n-1}- L_{n})}Z^{(q+\lambda)}( L_{n-1}- L_{n}).\nonumber\end{eqnarray}
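Since $\xi_n$ depends on its arguments only through $x-L_n$ and $L_{n-1}-L_n$, it is conveniently evaluated from the scale functions with killing rate $q+\lambda$. A small sanity check (illustrative parameters; again with $Z^{(p)}(x)=1+p\int_0^x W^{(p)}(y)\,{\rm d}y$): $\xi_n$ equals one at $x=L_n$, vanishes at $x=L_{n-1}$, and decreases in between.

```python
import math

eta, sig, q, lam = 0.5, 1.0, 0.3, 1.0      # illustrative parameters
p = q + lam                                 # killing rate
delta = math.sqrt(eta ** 2 + 2 * p * sig ** 2)

def W(x):
    if x < 0:
        return 0.0
    return (math.exp((delta - eta) * x / sig ** 2)
            - math.exp(-(delta + eta) * x / sig ** 2)) / delta

def Z(x):
    c1 = (delta - eta) / sig ** 2
    c2 = (delta + eta) / sig ** 2
    return 1.0 + p / delta * ((math.exp(c1 * x) - 1) / c1 - (1 - math.exp(-c2 * x)) / c2)

def xi(w, width):
    """xi_n at x = L_n + w, for an interval of length width = L_{n-1} - L_n."""
    return Z(w) - W(w) / W(width) * Z(width)

width = 0.5
vals = [xi(i * width / 100, width) for i in range(101)]
```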

Denoting by $\tau_{L_{n-k}}^{-}$ the first time that U hits a level in $\mathcal{N}$ and that this is done by down-crossing $L_{n-k}$ , we derive

\begin{eqnarray*}\tau_{ L_{n-k}}^{-}\, &=& \,\mathcal{E}_{1,\lambda}1_{\mathcal{E}_{1,\lambda}<u^X_{n-1}\wedge d^X_{n}}+\mathcal{E}_{2,\lambda}1_{\mathcal{E}_{2,\lambda}<u^X_{n-2}\wedge d^X_{n-1}}\\&&+\ldots +\mathcal{E}_{k,\lambda}1_{\mathcal{E}_{k,\lambda}<u^X_{n-k}\wedge d^X_{n+1-k}}+d^X_{n-k}1_{d^X_{n-k}<\mathcal{E}_{k+1 ,\lambda}\wedge u^X_{n-k-1}},\end{eqnarray*}

where the $\mathcal{E}_{k,\lambda}$ , $k=1,\ldots ,N$ , are i.i.d. copies of $\mathcal{E}_{\lambda}$ . Let

\begin{eqnarray*}&&r_{n,n-k}(x)\,:\!=\,\mathbb{E}_{x}\!\left[e^{-q\tau_{ L_{n-k}}^{-}}\right]\\&&=\mathbb{E}_{x}\!\left[e^{-q(\sum_{i=1}^{k}\mathcal{E}_{i,\lambda}+d^X_{n-k})}1_{\mathcal{E}_{1,\lambda}<d^X_{n}\wedge u^X_{n-1}}1_{\mathcal{E}_{2,\lambda}<d^X_{n-1}\wedge u^X_{n-2}}\right.\\&&\qquad \left. \ldots 1_{\mathcal{E}_{k,\lambda}<d^X_{n-k+1}\wedge u^X_{n-k}}1_{d^X_{n-k}<\mathcal{E}_{k+1,\lambda}\wedge u^X_{n-k-1}}\right].\end{eqnarray*}

Observe that $r_{n,n-k}(x)$ is the partial LST of the time needed to reach $L_{n-k}$ from above before reaching any other level in $\mathcal{N}$ . Clearly,

(5.6) \begin{equation}r_{n,n}(x)=\xi_{n}(x-L_{n}).\end{equation}

Applying (5.4) and (5.6) yields

(5.7) \begin{eqnarray}&&r_{n,n-1}(x)=\lambda\int_{L_{n}}^{L_{n-1}}u^{(q+\lambda)}_{L_{n},L_{n-1}}(x,y)r_{n-1,n-1}((a+1)y) {\rm d}y\nonumber\\&&=\lambda \frac{W^{(q+\lambda)}(x-L_{n})}{W^{(q+\lambda)} (L_{n-1}-L_{n})}\int_{L_{n}}^{L_{n-1}}W^{(q+\lambda)}(L_{n-1}-y)\xi_{n-1}\big((a+1)y-L_{n-1}\big){\rm d}y \nonumber\\&&-\lambda\int_{L_{n}}^{x}W^{(q+\lambda)}(x-y)\xi_{n-1}\big((a+1)y-L_{n-1}\big){\rm d}y.\end{eqnarray}

Note that

\begin{eqnarray*} &&\int_{L_{n}}^{x}W^{(q+\lambda)}(x-y)\xi_{n-1}\big((a+1)y-L_{n-1}\big){\rm d}y\\ &&=\int_{0}^{x-L_{n}}W^{(q+\lambda)}(z)\xi_{n-1}((a+1)(x -z-L_{n})){\rm d}z\\ &&=W^{(q+\lambda)}\circledast\xi_{n-1,a+1}(x-L_{n}), \end{eqnarray*}

where $\circledast$ again denotes convolution and $\xi_{n,(a+1)^k}(x)\,:\!=\,\xi_{n}\big((a+1)^kx\big)$ , $k\in \mathbb{N}$ . Thus,

(5.8) \begin{eqnarray} &&r_{n,n-1}(x)=\lambda \frac{W^{(q+\lambda)}(x-L_{n})}{W^{(q+\lambda)} (L_{n-1}-L_{n})} W^{(q+\lambda)}\circledast\xi_{n-1,a+1}(L_{n-1}-L_{n})\nonumber\\&&\qquad -\lambda W^{(q+\lambda)}\circledast\xi_{n-1,a+1}(x-L_{n}).\end{eqnarray}

Define

\begin{eqnarray} A_{0,n,n-1}\,:\!=\,\lambda\frac{W^{(q+\lambda)}\circledast\xi_{n-1,a+1}(L_{n-1}-L_{n})}{W^{(q+\lambda)} (L_{n-1}-L_{n})}\quad\text{and}\quad A_{1,n,n-1}\,:\!=\,\lambda.\nonumber\end{eqnarray}

Then

\begin{equation*}r_{n,n-1}(x)=A_{0,n,n-1}W^{(q+\lambda)}(x-L_{n})-A_{1,n,n-1}W^{(q+\lambda)}\circledast\xi_{n-1,a+1}(x-L_{n}).\end{equation*}

We next obtain $r_{n,n-2}(x)$ . Applying (5.4) and (5.8), we have

\begin{eqnarray*} &&r_{n,n-2}(x)=\lambda\int_{L_{n}}^{L_{n-1}}u^{(q+\lambda)}_{L_{n},L_{n-1}}(x,y)r_{n-1,n-2}((a+1)y){\rm d}y\nonumber\\ &&=\lambda \frac{W^{(q+\lambda)}(x-L_{n})}{W^{(q+\lambda)} (L_{n-1}-L_{n})}\int_{L_{n}}^{L_{n-1}}W^{(q+\lambda)}(L_{n-1}-y)\cdot\\ &&\Big(A_{0,n-1,n-2}W^{(q+\lambda)}\big((a+1)y-L_{n-1}\big)\\&&-\,A_{1,n-1,n-2}W^{(q+\lambda)}\circledast\xi_{n-2,a+1}\big((a+1)y-L_{n-1}\big)\Big){\rm d}y\nonumber\\ &&-\,\lambda\int_{L_{n}}^{x}W^{(q+\lambda)}(x-y)\cdot\\ &&\Big(A_{0,n-1,n-2}W^{(q+\lambda)}\big((a+1)y-L_{n-1}\big)\\&&-\,A_{1,n-1,n-2}W^{(q+\lambda)}\circledast\xi_{n-2,a+1}\big((a+1)y-L_{n-1}\big)\Big){\rm d}y.\\\end{eqnarray*}

Similarly as before, observe that

(5.9) \begin{eqnarray} &&\int_{L_{n}}^{x}W^{(q+\lambda)}(x-y)W^{(q+\lambda)}\big((a+1)y-L_{n-1}\big){\rm d}y=W^{(q+\lambda)}\circledast W^{(q+\lambda)}_{a+1}(x-L_{n}) ,\nonumber\\\end{eqnarray}

where $W^{(q+\lambda)}_{(a+1)^k}(x)\,:\!=\,W^{(q+\lambda)}((a+1)^kx)$ , $k\in \mathbb{N}$ , and

\begin{eqnarray*} &&\int_{L_{n}}^{x}W^{(q+\lambda)}(x-y)W^{(q+\lambda)}\circledast\xi_{n-2,a+1}\big((a+1)y-L_{n-1}\big){\rm d}y\\ &&=(a+1)W^{(q+\lambda)}\circledast W^{(q+\lambda)}_{a+1}\circledast\xi_{n-2,(a+1)^{2}}(x-L_{n}).\end{eqnarray*}

Define

\begin{eqnarray} &&A_{0,n,n-2}\,:\!=\,\frac{\lambda }{W^{(q+\lambda)}(L_{n-1}-L_{n})}\left(A_{0,n-1,n-2}W^{(q+\lambda)}\circledast W^{(q+\lambda)}_{a+1}(L_{n-1}-L_{n})\right.\nonumber\\ &&\left.\quad-\,A_{1,n-1,n-2}(a+1)W^{(q+\lambda)}\circledast W^{(q+\lambda)}_{a+1}\circledast\xi_{n-2,(a+1)^{2}}(L_{n-1}-L_{n})\right)\nonumber\\ &&=\frac{1 }{W^{(q+\lambda)}(L_{n-1}-L_{n})}\cdot\nonumber\\ &&\quad\left(A_{1,n,n-2}W^{(q+\lambda)}\circledast W^{(q+\lambda)}_{a+1}(L_{n-1}-L_{n})\right.\nonumber\\&&\left.\quad-\, A_{2,n,n-2} W^{(q+\lambda)}\circledast W^{(q+\lambda)}_{a+1}\circledast\xi_{n-2,(a+1)^{2}}(L_{n-1}-L_{n})\right),\nonumber\\ &&A_{1,n,n-2}\,:\!=\,\lambda A_{0,n-1,n-2},\qquad A_{2,n,n-2}\,:\!=\,\lambda (a+1)A_{1,n-1,n-2}.\nonumber\end{eqnarray}

Then

\begin{eqnarray*} &&r_{n,n-2}(x)=A_{0,n,n-2}W^{(q+\lambda)}(x-L_{n})-A_{1,n,n-2}W^{(q+\lambda)}\circledast W^{(q+\lambda)}_{a+1}(x-L_{n})\\ &&\qquad +\,A_{2,n,n-2}W^{(q+\lambda)} \circledast W^{(q+\lambda)}_{a+1}\circledast \xi_{n-2,(a+1)^{2}}(x-L_{n}).\end{eqnarray*}

The general case for $k =2,\dots,n-1$ is given in the following proposition.

Proposition 5.1. For $L_{n}<x<L_{n-1}$ and $k =2,\dots,n-1$ ,

(5.10) \begin{eqnarray} &&r_{n,n-k}(x) =\sum_{j=0}^{k-1}({-}1)^{j}A_{j,n,n-k} \circledast_{i=0}^{j}W^{(q+\lambda)}_{(a+1)^{i}}(x-L_{n})\nonumber\\ &&\qquad +\,({-}1)^{k}A_{k,n,n-k}\circledast_{i=0}^{k-1}W^{(q+\lambda)}_{(a+1)^{i}} \circledast\xi_{n-k,(a+1)^{k}}(x-L_{n}),\end{eqnarray}

where $A_{j,n,n-k}$ , $j=0,\ldots ,k$ , are coefficients which are obtained recursively.

Proof. The proof is by induction on k. Clearly, (5.10) holds for $k=2$ . Assume it holds for $k-1\geq2$ . By the induction hypothesis we have

(5.11) \begin{eqnarray} &&r_{n,n-(k-1)}(x) =\sum_{j=0}^{k-2}({-}1)^{j}A_{j,n,n-(k-1)} \circledast_{i=0}^{j}W^{(q+\lambda)}_{(a+1)^{i}}(x-L_{n})\nonumber\\ &&\qquad +\,({-}1)^{k-1}A_{k-1,n,n-(k-1)}\circledast_{i=0}^{k-2}W^{(q+\lambda)}_{(a+1)^{i}} \circledast\xi_{n-(k-1),(a+1)^{k-1}}(x-L_{n}). \end{eqnarray}

Using (5.4) and (5.11), we have

\begin{align*} &r_{n,n-k}(x)=\lambda\int_{L_{n}}^{L_{n-1}}u^{(q+\lambda)}_{L_{n},L_{n-1}}(x,y) r_{n-1,n-k}((a+1)y){\rm d}y\\ &=\lambda \frac{W^{(q+\lambda)}(x-L_{n})}{W^{(q+\lambda)} (L_{n-1}-L_{n})} \int_{L_{n}}^{L_{n-1}}W^{(q+\lambda)}(L_{n-1}-y)\cdot\\ &\Bigg(\sum_{j=0}^{k-2}({-}1)^{j}A_{j,n-1,n-k}\circledast_{i=0}^{j}W^{(q+\lambda)}_{(a+1)^{i}}\big((a+1)y-L_{n-1}\big)\\ &\quad+({-}1)^{k-1}A_{k-1,n-1,n-k}\circledast_{i=0}^{k-2}W^{(q+\lambda)}_{(a+1)^{i}}\circledast\xi_{n-k,(a+1)^{k-1}}\big((a+1)y-L_{n-1}\big)\Bigg){\rm d}y \\ & -\lambda\int_{L_{n}}^{x}W^{(q+\lambda)}(x-y)\cdot\\ &\Bigg(\sum_{j=0}^{k-2}({-}1)^{j}A_{j,n-1,n-k}\circledast_{i=0}^{j}W^{(q+\lambda)}_{(a+1)^{i}}\big((a+1)y-L_{n-1}\big)\\ &+({-}1)^{k-1}A_{k-1,n-1,n-k}\circledast_{i=0}^{k-2}W^{(q+\lambda)}_{(a+1)^{i}}\circledast\xi_{n-k,(a+1)^{k-1}}\big((a+1)y-L_{n-1}\big)\Bigg){\rm d}y.\end{align*}

Note that (5.9) holds, and for $j\geq 1$ ,

\begin{eqnarray*} && \int_{L_{n}}^{x}W^{(q+\lambda)}(x-y)\circledast_{i=0}^{j}W^{(q+\lambda)}_{(a+1)^{i}}\big((a+1)y-L_{n-1}\big) {\rm d}y \\ &&\qquad =(a+1)\circledast_{i=0}^{j+1}W^{(q+\lambda)}_{(a+1)^{i}}(x-L_{n}).\end{eqnarray*}

If we choose

\begin{eqnarray} &&A_{1,n,n-k}\,:\!=\,\lambda A_{0,n-1,n-k},\nonumber\\ &&A_{j+1,n,n-k}\,:\!=\,\lambda (a+1)A_{j,n-1,n-k},\qquad 1\leq j\leq k-1,\nonumber\\ &&A_{0,n,n-k}\,:\!=\,\frac{1}{W^{(q+\lambda)}(L_{n-1}-L_{n})}\left(\sum_{j=1}^{k-1} ({-}1)^{j-1} A_{j ,n,n-k}\circledast_{i=0}^{j}W^{(q+\lambda)}_{(a+1)^{i}}(L_{n-1}-L_{n})\right.\nonumber\\ &&\qquad +\left. ({-}1)^{k-1} A_{k,n,n-k} \circledast_{i=0}^{k-1}W^{(q+\lambda)}_{(a+1)^{i}}\circledast \xi_{n-k,(a+1)^{k}}(L_{n-1}-L_{n})\vphantom{\sum_{j=1}^{k-1}}\right),\nonumber\end{eqnarray}

then (5.10) holds true, which completes the proof.

Step 2: The time until the first up-crossing of $L_{n-k}$

Let $L_{n}<x<L_{n-1}$ , and let $\tau_{L_{n-k}}^{+}$ be the first time that U reaches a level in $\mathcal{N}$ and that this is done by up-crossing $L_{n-k}$ by the Brownian motion. Note that

\begin{eqnarray*}\tau_{L_{n-k}}^{+}\, &=& \,\mathcal{E}_{1,\lambda}1_{\mathcal{E}_{1,\lambda}<u^X_{n-1}\wedge d^X_{n}}+\mathcal{E}_{2,\lambda}1_{\mathcal{E}_{2,\lambda}<u^X_{n-2}\wedge d^X_{n-1}}\\&&+\ldots +\mathcal{E}_{k-1,\lambda}1_{\mathcal{E}_{k-1,\lambda}<u^X_{n-k+1}\wedge d^X_{n+2-k}}+u^X_{n-k}1_{u^X_{n-k}<\mathcal{E}_{k ,\lambda}\wedge d^X_{n-k+1}}.\end{eqnarray*}

For $k=1,\ldots ,n$ we define

\begin{eqnarray*} \omega_{n,n-k}(x)&\,:\!=\,&\mathbb{E}_{x}\big[e^{-q\tau_{L_{n-k}}^{+}}\big]\\ &=&\mathbb{E}_{x}\!\left[e^{-q\big(\sum_{j=1}^{k-1}\mathcal{E}_{j,\lambda}+u^X_{n-k}\big)}1_{\mathcal{E}_{1,\lambda}<u^X_{n-1}\wedge d^X_{n}}\ldots 1_{\mathcal{E}_{k-1,\lambda}<u^X_{n-k+1}\wedge d^X_{n+2-k}}1_{u^X_{n-k}<d^X_{n-k+1}\wedge \mathcal{E}_{k,\lambda}}\right].\end{eqnarray*}

Applying (5.2) we have

(5.12) \begin{equation}\Omega_{n-1}(x-L_{n})\,:\!=\,\omega_{n,n-1}(x)=\mathbb{E}_{x}\Big(e^{-q\tau_{L_{n-1}}^{+}}1_{u^X_{n-1}<\mathcal{E}_{1,\lambda}\wedge d^X_{n}}\Big)=\frac{W^{(q+\lambda)}(x-L_{n})}{W^{(q+\lambda)}(L_{n-1}-L_{n})}.\end{equation}

Further, using (5.4) and (5.12), observe that

\begin{eqnarray*} && \omega_{n,n-2}(x)=\lambda\int_{L_{n}}^{L_{n-1}}u^{(q+\lambda)}_{L_{n},L_{n-1}}(x,y)\omega_{n-1,n-2}((a+1)y){\rm d}y\\ &&=\lambda \frac{W^{(q+\lambda)}(x-L_{n})}{W^{(q+\lambda)} (L_{n-1}-L_{n})}\int_{L_{n}}^{L_{n-1}}W^{(q+\lambda)}(L_{n-1}-y)\Omega_{n-2}\big((a+1)y-L_{n-1}\big){\rm d}y\nonumber\\ &&\qquad -\,\lambda\int_{L_{n}}^{x}W^{(q+\lambda)}(x-y)\Omega_{n-2}\big((a+1)y-L_{n-1}\big){\rm d}y\nonumber\\ &&=\lambda \frac{W^{(q+\lambda)}(x-L_{n})}{W^{(q+\lambda)} (L_{n-1}-L_{n})}W^{(q+\lambda)}\circledast\Omega_{n-2,a+1}(L_{n-1}-L_{n})\\ &&\qquad -\,\lambda W^{(q+\lambda)}\circledast\Omega_{n-2,a+1}(x-L_{n}),\end{eqnarray*}

where $\Omega_{n,(a+1)^k}(x)=\Omega_{n}((a+1)^kx)$ , $k\in \mathbb{N}$ . Let

\begin{eqnarray}&&B_{1,n,n-2}\,:\!=\,\lambda\quad\text{and}\quad B_{0,n,n-2}\,:\!=\,\lambda\frac{W^{(q+\lambda)}\circledast\Omega_{n-2,a+1}(L_{n-1}-L_{n})}{W^{(q+\lambda)} (L_{n-1}-L_{n})}.\nonumber\end{eqnarray}

Then

\begin{eqnarray}\omega_{n,n-2}(x)=B_{0,n,n-2}W^{(q+\lambda)}(x-L_{n})-B_{1,n,n-2}W^{(q+\lambda)}\circledast\Omega_{n-2,a+1}(x-L_{n}).\nonumber\end{eqnarray}

The next proposition gives a general expression for $\omega_{n,n-k}(x)$ .

Proposition 5.2. For $k =2,\dots,n$ we have

\begin{eqnarray}&&\omega_{n,n-k}(x)=\sum_{j=0}^{k-2}({-}1)^{j}B_{j,n,n-k}\circledast_{i=0}^{j}W^{(q+\lambda)}_{(a+1)^{i}}(x-L_{n})\nonumber\\&&\qquad +\,({-}1)^{k-1}B_{k-1,n,n-k}\circledast_{i=0}^{k-2}W^{(q+\lambda)}_{(a+1)^{i}}\circledast \Omega_{ n-k,(a+1)^{k-1}}(x-L_{n}),\nonumber\end{eqnarray}

where $B_{j,n,n-k}$ , $j=0,\ldots ,k-1$ , are coefficients which are obtained recursively.

Proof. The proof is similar to the proof of Proposition 5.1. The proposition clearly holds for $k=2$ . Assume it holds for $k-1 \geq 2$ . Applying (5.4) we obtain that

\begin{align*} & \omega_{n,n-k}(x)=\lambda\int_{L_{n}}^{L_{n-1}}u^{(q+\lambda)}_{L_{n},L_{n-1}}(x,y) \omega_{n-1,n-k}((a+1)y){\rm d}y\\ &=\lambda \frac{W^{(q+\lambda)}(x-L_{n})}{W^{(q+\lambda)} (L_{n-1}-L_{n})}\int_{L_{n}}^{L_{n-1}}W^{(q+\lambda)}(L_{n-1}-y)\cdot\\ &\quad\Bigg( \sum_{j=0}^{k-3}({-}1)^{j}B_{j,n-1,n-k} \circledast_{i=0}^{j}W^{(q+\lambda)}_{(a+1)^{i}}\big((a+1)y-L_{n-1}\big)\\ &\qquad +({-}1)^{k-2}B_{k-2,n-1,n-k}\circledast_{i=0}^{k-3}W^{(q+\lambda)}_{(a+1)^{i}} \circledast\Omega_{ n-k,(a+1)^{k-2}}\big((a+1)y-L_{n-1}\big)\Bigg){\rm d}y \\ &\quad-\lambda\int_{L_{n}}^{x}W^{(q+\lambda)}(x-y)\cdot\\ & \Bigg(\sum_{j=0}^{k-3}({-}1)^{j}B_{j,n-1,n-k} \circledast_{i=0}^{j}W^{(q+\lambda)}_{(a+1)^{i}}\big((a+1)y-L_{n-1}\big)\\ &\qquad +({-}1)^{k-2}B_{k-2,n-1,n-k}\circledast_{i=0}^{k-3}W^{(q+\lambda)}_{(a+1)^{i}} \circledast\Omega_{n-k,(a+1)^{k-2}}\big((a+1)y-L_{n-1}\big)\Bigg){\rm d}y.\end{align*}

Taking

\begin{eqnarray}&&B_{1,n,n-k}\,:\!=\,\lambda B_{0,n-1,n-k} , \quad B_{j+1,n,n-k}\,:\!=\,(a+1)\lambda B_{j,n-1,n-k},\quad j=1,\ldots ,k-2,\nonumber\\&&B_{0,n,n-k}\,:\!=\,\frac{1}{W^{(q+\lambda)}(L_{n-1}-L_{n})}\cdot\nonumber\\&&\quad\Bigg(\sum_{j=1}^{k-1}({-}1)^{j}B_{j,n ,n-k}\circledast_{i=0}^{j}W^{(q+\lambda)}_{(a+1)^{i}}(L_{n-1}-L_{n})\nonumber\\&&+\,({-}1)^{k-1} B_{k-1,n,n-k} \circledast_{i=0}^{k-2}W^{(q+\lambda)}_{(a+1)^{i}}\circledast\Omega_{n-k,(a+1)^{k-1}}(L_{n-1}-L_{n})\Bigg) \nonumber\end{eqnarray}

completes the proof of this proposition.

Step 3: Determination of the exit/ruin time transform $\rho_N(x)$

To find $\rho_{N}(x)$ we start from the key observation that for $L_{n}<x<L_{n-1}$ we have

(5.13) \begin{equation}\rho_{N}(x)=r_{n,n}(x)\rho_{n}+\sum_{j=1}^{n-1}(r_{n,n-j}(x)+\omega_{n,n-j}(x))\rho_{n-j},\end{equation}

where

\begin{equation*}\rho_{n}\,:\!=\,\mathbb{E}_{L_{n}}\left[e^{-qd_N}1_{d_N<u_0}\right].\end{equation*}

In the next step we construct a system of linear equations to find $\rho_{n},\,n=1,2,\ldots ,N$ . Clearly, $\rho_{0}=0$ and $\rho_{N}=1$ . Moreover,

\begin{eqnarray*} &&\rho_{1}= \left(Z^{(q+\lambda)}(L_{1}-L_{2})-\frac{W^{(q+\lambda)} (L_{1}-L_{2})}{W^{(q+\lambda)}(L_{0}-L_{2})}Z^{(q+\lambda)}(L_{0}-L_{2})\right)\rho_{2}\\ &&\qquad + \left(\lambda \int_{L_{2}}^{L_{1}}u^{(q+\lambda)}_{L_{2},L_{0}}(L_{1},y)r_{1,1}((a+1)y){\rm d}y \right) \rho_{1}. \end{eqnarray*}

The term in the first set of parentheses is the Laplace transform of the time to down-cross $L_{2}$ before $\mathcal{E}_{\lambda}$ expires and before $L_{0}$ is reached; cf. (5.3). The second term covers the case where $\mathcal{E}_{\lambda}$ expires, with $U\in (y,y+{\rm d}y)$ for some $y$ between $L_{2}$ and $L_{1}$ , before $L_{2}$ or $L_{0}$ is reached; the ensuing jump takes the process to $(a+1)y$ , after which it must reach $L_{1}$ from above. Similarly, note that

(5.14) \begin{eqnarray} &&\rho_{2}= \left(Z^{(q+\lambda)}(L_{2}-L_{3})-\frac{W^{(q+\lambda)} (L_{2}-L_{3})}{W^{(q+\lambda)}(L_{1}-L_{3})}Z^{(q+\lambda)}(L_{1}-L_{3})\right)\rho_{3}\end{eqnarray}
(5.15) \begin{eqnarray} &&+\,\frac{W^{(q+\lambda)}(L_{2}-L_{3})}{W^{(q+\lambda)}(L_{1}-L_{3})}\rho_{1}\qquad\qquad\qquad\qquad\qquad\qquad\end{eqnarray}
(5.16) \begin{equation} +\,\lambda\int_{L_{3}}^{L_{2}}u^{(q+\lambda)}_{L_{3},L_{1}}(L_{2},y)\left(r_{2,2}((a+1)y) \rho_{2}+(r_{2,1}((a+1)y)+\omega_{2,1}((a+1)y))\rho_{1}\right){\rm d}y\end{equation}
(5.17) \begin{equation} + \left( \lambda\int_{L_{2}}^{L_{1}}u^{(q+\lambda)}_{L_{3},L_{1}}(L_{2},y)r_{1,1}((a+1)y){\rm d}y \right) \rho_1.\qquad\qquad\qquad\qquad\qquad\qquad\qquad\end{equation}

The term in the parentheses in (5.14) is the expected discounted time to reach $L_{3}$ before a jump and before up-crossing $L_{1}$ . The factor in (5.15) is the expected discounted time to reach $L_{1}$ before a jump and before down-crossing $L_{3}$ (cf. (5.2)). The expressions (5.16) and (5.17) describe the expected discounted time until a jump when a jump occurs before reaching $L_{1}$ or $L_{3}$ , and then the expected discounted time until the process reaches one of the levels $L_{j}$ for $j\leq 2$ . The expression (5.16) describes the case where just before a jump U is between $L_{3}$ and $L_{2}$ , and (5.17) describes the case where just before a jump U is between $L_{2}$ and $L_{1}$ . By rearranging (5.14)–(5.17), we get

\begin{align*} &\rho_{2}=\left(Z^{(q+\lambda)}(L_{2}-L_{3})-\frac{W^{(q+\lambda)} (L_{2}-L_{3})}{W^{(q+\lambda)}(L_{1}-L_{3})}Z^{(q+\lambda)}(L_{1}-L_{3})\right)\rho_{3}\\&+\left( \lambda\int_{L_{3}}^{L_{2}}u^{(q+\lambda)}_{L_{3},L_{1}}(L_{2},y)r_{2,2}((a+1)y){\rm d}y \right) \rho_{2}\\&+\left(\frac{W^{(q+\lambda)}(L_{2}-L_{3})}{W^{(q+\lambda)}(L_{1}-L_{3})}+\lambda \int_{L_{3}}^{L_{2}}u^{(q+\lambda)}_{L_{3},L_{1}}(L_{2},y)(r_{2,1}((a+1)y)+\omega_{2,1}((a+1)y)){\rm d}y\right.\\&+\left. \lambda \int_{L_{2}}^{L_{1}}u^{(q+\lambda)}_{L_{3},L_{1}}(L_{2},y)r_{1,1}((a+1)y){\rm d}y\right)\rho_{1}.\end{align*}

Using similar arguments, we can show that generally, for $1<n\leq N-1$ ,

\begin{eqnarray*} && \rho_{n}=\left(Z^{(q+\lambda)}(L_{n}-L_{n+1})-\frac{W^{(q+\lambda)} (L_{n}-L_{n+1})}{W^{(q+\lambda)}(L_{n-1}-L_{n+1})}Z^{(q+\lambda)}(L_{n-1}-L_{n+1})\right)\rho_{n+1}\\ &&\qquad +\,\frac{W^{(q+\lambda)}(L_{n}-L_{n+1})}{W^{(q+\lambda)}(L_{n-1}-L_{n+1})}\rho_{n-1}\\ &&\qquad +\,\lambda\int_{L_{n+1}}^{L_{n}}u^{(q+\lambda)}_{L_{n+1},L_{n-1}}(L_{n},y)\Bigg(r_{n,n}((a+1)y)\rho_{n}\\&&\qquad \qquad +\,\sum_{k=1}^{n-1} \left( r_{n,n-k}((a+1)y)+\omega_{n,n-k}((a+1)y) \right) \rho_{n-k}\Bigg){\rm d}y\\&&\qquad +\,\lambda\int_{L_{n}}^{L_{n-1}}u^{(q+\lambda)}_{L_{n+1},L_{n-1}}(L_{n},y)\Bigg(r_{n-1,n-1}((a+1)y)\rho_{n-1}\\&&\qquad \qquad +\,\sum_{k=1}^{n-2} (r_{n-1,n-1-k}((a+1)y)+\omega_{n-1,n-1-k}((a+1)y))\rho_{n-1-k}\Bigg){\rm d}y , \end{eqnarray*}

which is equivalent to

(5.18) \begin{align}&\rho_{n}=\left(Z^{(q+\lambda)}(L_{n}-L_{n+1})-\frac{W^{(q+\lambda)} (L_{n}-L_{n+1})}{W^{(q+\lambda)}(L_{n-1}-L_{n+1})}Z^{(q+\lambda)}(L_{n-1}-L_{n+1})\right)\rho_{n+1}\nonumber\\&\quad+ \left( \lambda\int_{L_{n+1}}^{L_{n}}u_{L_{n+1},L_{n-1}}^{(q+\lambda)}(L_{n},y)r_{n,n}((a+1)y){\rm d}y \right) \rho_{n}\nonumber \\&\quad+\left(\lambda \int_{L_{n+1}}^{L_{n}}u^{(q+\lambda)}_{L_{n+1},L_{n-1}}(L_{n},y)(r_{n,n-1}((a+1)y)+\omega_{n,n-1}((a+1)y)){\rm d}y\right.\nonumber\\&\quad+\left.\frac{W^{(q+\lambda)}(L_{n}-L_{n+1})}{W^{(q+\lambda)}(L_{n-1}-L_{n+1})}+\lambda \int_{L_{n}}^{L_{n-1}}u^{(q+\lambda)}_{L_{n+1},L_{n-1}}(L_{n},y)r_{n-1,n-1}((a+1)y){\rm d}y\right)\rho_{n-1}\nonumber\\&\quad+\lambda\sum_{k=2}^{n-1}\Bigg(\int_{L_{n+1}}^{L_{n}}u^{(q+\lambda)}_{L_{n+1},L_{n-1}}(L_{n},y)(r_{n,n-k}((a+1)y)+\omega_{n,n-k}((a+1)y)){\rm d}y\nonumber\\&\quad+\int_{L_{n}}^{L_{n-1}}u^{(q+\lambda)}_{L_{n+1},L_{n-1}}(L_{n},y)(r_{n-1,n-1-(k-1)}((a+1)y)\nonumber\\&\quad+\omega_{n-1,n-1-(k-1)}((a+1)y)){\rm d}y\Bigg)\rho_{n-k}.\end{align}

Thus we have proved the following main result.

Theorem 5.1. The two-sided downward exit time transform $\rho_{N}(x)$ defined in (5.5) is given in (5.13) with $r_{n,n-k}$ identified in (5.6), (5.7), and Proposition 5.1; $\omega_{n,n-k}$ identified in (5.12) and Proposition 5.2; and $\rho_k$ given via the system of equations (5.18).
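For numerical work, once the scale-function integrals appearing in (5.18) have been computed, the unknowns $\rho_{1},\ldots,\rho_{N-1}$ follow from a standard linear solve with the boundary conditions $\rho_{0}=0$ and $\rho_{N}=1$. The following Python sketch illustrates only this last step; the coefficient map `coeff` is a placeholder that would have to be filled with the integrals of (5.18).

```python
import numpy as np

def solve_rho(coeff, N):
    """Solve the linear system of (5.18) for rho_0, ..., rho_N.

    coeff[n][m] is the (precomputed) coefficient of rho_m in the
    equation for rho_n, n = 1, ..., N-1; in the paper these are
    scale-function integrals, here they are supplied by the caller.
    Boundary conditions: rho_0 = 0 and rho_N = 1.
    """
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)
    A[0, 0] = 1.0              # rho_0 = 0
    A[N, N] = 1.0
    b[N] = 1.0                 # rho_N = 1
    for n in range(1, N):
        A[n, n] = 1.0          # move rho_n to the left-hand side
        for m, c in coeff[n].items():
            A[n, m] -= c
    return np.linalg.solve(A, b)

# toy example: rho_n = 0.4 rho_{n-1} + 0.4 rho_{n+1}, N = 4
rho = solve_rho({n: {n - 1: 0.4, n + 1: 0.4} for n in range(1, 4)}, 4)
```

The same template applies to the system (5.26)–(5.49) for the $v_{n}$ below, with the boundary condition $v_{N}=0$ in place of $\rho_{N}=1$.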

5.2. Expected discounted dividends until ruin

In this section we obtain $v_{N}(x)$ —the expected discounted dividends obtained up to the time when the process reaches $L_{N}$ , starting at x. Let $L_{n}<x<L_{n-1}$ and

\begin{equation*}\mathcal{T}_{n,0}(x)\,:\!=\,\mathbb{E}_{x}\Big[e^{-q\mathcal{S}_{n}}1_{\mathcal{E}_{1,\lambda}<u^X_{n-1}\wedge d^X_{n}}1_{\mathcal{E}_{2,\lambda}<u^X_{n-2}\wedge d^X_{n-1}}\ldots 1_{\mathcal{E}_{n,\lambda}<u^X_{0}\wedge d^X_{1}} \Big],\end{equation*}

where

\begin{equation*}\mathcal{S}_{n}\,:\!=\,\sum_{i=1}^{n}\mathcal{E}_{i,\lambda}.\end{equation*}

Thus $\mathcal{T}_{n,0}(x)$ is the expected discounted time until up-crossing $L_0$ by a jump when this occurs before reaching any level in $\mathcal{N}$ . Also for $L_{n}<x<L_{n-1}$ let

(5.19) \begin{equation}v_{n}^{J}(x)\,:\!=\,\mathbb{E}_{x}\Big[e^{-q\mathcal{S}_{n}}1_{\mathcal{E}_{1,\lambda}<u^X_{n-1}\wedge d^X_{n}}1_{\mathcal{E}_{2,\lambda}< u_{n-2}^X \wedge d^X_{n-1}}\ldots 1_{\mathcal{E}_{n,\lambda} < u_0^X \wedge d^X_{1}}((a+1)U(\mathcal{S}_{n})- L_0) \Big].\end{equation}

Note that $v_{n}^{J}(x)$ is the expected discounted overflow above $L_0=b$ when this occurs before reaching any level in $\mathcal{N}$ . First consider $v_1^J(x)$ , so take $L_{1}<x<L_{0}$ . Applying (5.4) we get

(5.20) \begin{align} & v_{1}^{J}(x)=\lambda\int_{L_{1}}^{L_{0}}u^{(q+\lambda)}_{L_{1},L_{0}}(x,y)((a+1)y- L_0){\rm d}y\\&=\lambda\!\left(\frac{W^{(q+\lambda)}(x-L_{1})}{W^{(q+\lambda)}(L_{0}-L_{1})}\int_{L_{1}}^{L_0}W^{(q+\lambda)}(L_{0}-y)((a+1)y- L_0){\rm d}y\right.\nonumber\\&\qquad \left.-\int_{L_{1}}^{x}W^{(q+\lambda)}(x-y)((a+1)y- L_0){\rm d}y\right).\nonumber\end{align}

Thus,

\begin{equation*}v_{1}^{J}(x)=A_{1,f,0}W^{(q+\lambda)}(x-L_{1})-A_{1,f,1}W^{(q+\lambda)}\circledast\mathcal{Q}_{a+1}(x-L_{1}) ,\end{equation*}

where $\mathcal{Q}_{(a+1)^k}(x)=\mathcal{Q}((a+1)^kx)=(a+1)^kx$ , $k\in \mathbb{N}$ , and

\begin{eqnarray}&&A_{1,f,0}\,:\!=\,\lambda\frac{W^{(q+\lambda)}\circledast\mathcal{Q}_{a+1}(L_{0}-L_{1})}{W^{(q+\lambda)}(L_{0}-L_{1})}\quad\text{and}\quad A_{1,f,1}\,:\!=\,\lambda. \nonumber\end{eqnarray}

Similarly, replacing $(a+1)y- L_0$ by 1 in (5.20), we obtain that the expected discounted time until a jump above $L_0=b$ is

\begin{equation*}\mathcal{T}_{1,0}(x)=A_{1,\mathcal{T},0}W^{(q+\lambda)}(x-L_{1})-A_{1,\mathcal{T},1}\overline{W}^{(q+\lambda)}(x-L_{1}),\end{equation*}

where

(5.21) \begin{equation}A_{1,\mathcal{T},0}=\lambda\frac{\overline{W}^{(q+\lambda)}(L_{0}-L_{1})}{W^{(q+\lambda)}(L_{0}-L_{1})} \quad \text{and} \quad A_{1,\mathcal{T},1}=\lambda, \end{equation}

and $\overline{W}^{(q)}(x)=\int_{0}^{x}W^{(q)}(y)\,{\rm d}y$ .
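In applications, the finite convolutions $\circledast$ and the integrated scale function $\overline{W}^{(q)}$ have to be evaluated numerically. A minimal Python sketch of the two basic grid operations, the finite convolution $(f\circledast g)(x)=\int_{0}^{x}f(x-y)g(y)\,{\rm d}y$ and the scaling $f_{c}(x)=f(cx)$; the sanity check uses $f=g=1$, for which $(f\circledast g)(x)=x$, and is independent of any particular scale function.

```python
import numpy as np

def conv(f, g, xs):
    """(f * g)(x) = int_0^x f(x - y) g(y) dy on the uniform grid xs,
    evaluated by the trapezoidal rule."""
    h = xs[1] - xs[0]
    out = np.empty_like(xs)
    for k in range(len(xs)):
        vals = f(xs[k] - xs[: k + 1]) * g(xs[: k + 1])
        # trapezoidal rule on the k+1 grid points in [0, xs[k]]
        out[k] = h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
    return out

def scaled(f, c):
    """f_c(x) = f(c x); cf. the scaled scale functions W_{(a+1)^i}."""
    return lambda y: f(c * y)

xs = np.linspace(0.0, 1.0, 1001)
one = lambda y: np.ones_like(y)
assert np.allclose(conv(one, one, xs), xs)   # (1 * 1)(x) = x
```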

Next consider $v_2^J(x)$ , so take $L_{2}<x<L_{1}$ . Then

\begin{eqnarray*}&&v^{J}_{2}(x)=\lambda\int_{L_{2}}^{L_{1}}u^{(q+\lambda)}_{L_{2},L_{1}}(x,y)v_{1}^{J}((a+1)y){\rm d}y\\&&=\lambda\frac{W^{(q+\lambda)}(x-L_{2})}{W^{(q+\lambda)}(L_{1}-L_{2})}\int_{L_{2}}^{L_{1}}W^{(q+\lambda)}(L_{1}-y)\left(A_{1,f,0}W^{(q+\lambda)}((a+1)y-L_{1})\right.\\&&\left.\qquad -\,A_{1,f,1}W^{(q+\lambda)}\circledast\mathcal{Q}_{a+1}((a+1)y-L_{1})\right){\rm d}y\\&&\qquad -\,\lambda \int_{L_{2}}^{x}W^{(q+\lambda)}(x-y)\left(A_{1,f,0}W^{(q+\lambda)}((a+1)y-L_{1})\right.\\&&\left.\qquad -\,A_{1,f,1}W^{(q+\lambda)}\circledast\mathcal{Q}_{a+1}((a+1)y-L_{1})\right){\rm d}y.\\\end{eqnarray*}

Thus,

\begin{eqnarray*}v^{J}_{2}(x)&&\,=\,A_{2,f,0}W^{(q+\lambda)}(x-L_{2})-A_{2,f,1}W^{(q+\lambda)}\circledast W^{(q+\lambda)}_{a+1}(x-L_{2})\\&&\qquad +\,A_{2,f,2}W^{(q+\lambda)}\circledast W^{(q+\lambda)}_{a+1}\circledast\mathcal{Q}_{(a+1)^{2}}(x-L_{2}),\end{eqnarray*}

where

\begin{eqnarray}&&A_{2,f, 1}\,:\!=\,\lambda A_{1,f,0}, \quad \quad A_{2,f,2}\,:\!=\,\lambda (a+1) A_{1,f,1},\nonumber\\&&A_{2,f,0}\,:\!=\,\frac{\lambda A_{1,f,0}W^{(q+\lambda)}\circledast W^{(q+\lambda)}_{a+1}(L_{1}-L_{2})}{W^{(q+\lambda)}(L_{1}-L_{2})}\nonumber\\&&\qquad -\,\frac{\lambda A_{1,f,1}(a+1)W^{(q+\lambda)}\circledast W^{(q+\lambda)}_{a+1}\circledast\mathcal{Q}_{(a+1)^{2}}(L_{1}-L_{2})}{W^{(q+\lambda)}(L_{1}-L_{2})}\nonumber\\&&=\,\frac{A_{2,f,1}W^{(q+\lambda)}\circledast W^{(q+\lambda)}_{a+1}(L_{1}-L_{2})-A_{2,f,2}W^{(q+\lambda)}\circledast W^{(q+\lambda)}_{a+1}\circledast\mathcal{Q}_{(a+1)^{2}}(L_{1}-L_{2})}{W^{(q+\lambda)}(L_{1}-L_{2})}.\nonumber\end{eqnarray}

Similarly,

\begin{eqnarray*}\mathcal{T}_{2,0}(x)\,=\,&&A_{2,\mathcal{T},0}W^{(q+\lambda)}(x-L_{2})-A_{2,\mathcal{T},1}W^{(q+\lambda)}\circledast W^{(q+\lambda)}_{a+1}(x-L_{2})\\&&\qquad +\,A_{2,\mathcal{T},2}W^{(q+\lambda)}\circledast\overline{W}^{(q+\lambda)}_{a+1}(x-L_{2}),\end{eqnarray*}

where

(5.22) \begin{align}A_{2,\mathcal{T},1}\,:\!=\,\lambda A_{1,\mathcal{T},0} \quad \text{and} \quad A_{2,\mathcal{T},2}\,:\!=\,\lambda A_{1,\mathcal{T},1},\qquad\qquad\qquad \end{align}
(5.23) \begin{align} A_{2,\mathcal{T},0}\,:\!=\,\frac{ A_{2,\mathcal{T},1}W^{(q+\lambda)} \circledast W^{(q+\lambda)}_{a+1}(L_{1}-L_{2})-A_{2,\mathcal{T},2} W^{(q+\lambda)} \circledast\overline{W}^{(q+\lambda)}_{a+1}(L_{1}-L_{2})}{W^{(q+\lambda)}(L_{1}-L_{2})}. \end{align}

Using arguments similar to those in the proof of Proposition 5.2, one can derive the following result.

Proposition 5.3. For $L_{n}<x<L_{n-1}$ we have

\begin{eqnarray*} &&v^{J}_{n}(x)=\sum_{j=0}^{n-1}({-}1)^{j}A_{n,f,j}\circledast_{i=0}^{j}W^{(q+\lambda)}_{(a+1)^{i}}(x-L_{n})\\&&\qquad +\,({-}1)^{n} A_{n,f,n} \circledast_{i=0}^{n-1} W^{(q+\lambda)}_{(a+1)^{i}}\circledast \mathcal{Q}_{(a+1)^{n}}(x-L_{n}), \end{eqnarray*}

where the $A_{n,f,j}$ are obtained recursively as follows:

\begin{eqnarray} &&A_{n,f,1}\,:\!=\,\lambda A_{n-1,f,0},\nonumber\\&&A_{n,f,j}\,:\!=\,\lambda(a+1)A_{n-1,f,j-1},\,\,\,2\leq j\leq n,\nonumber\\&&A_{n,f,0}\,:\!=\,\frac{\sum_{j=1}^{n-1}({-}1)^{j-1}A_{n ,f,j}\circledast_{i=0}^{j}W^{(q+\lambda)}_{(a+1)^{i}}(L_{n-1}-L_{n})}{W^{(q+\lambda)}(L_{n-1}-L_{n})}\nonumber\\&&\qquad \qquad +\,({-}1)^{n-1}\frac{A_{n ,f,n}\circledast_{i=0}^{n-1}W^{(q+\lambda)}_{(a+1)^{i}}\circledast\mathcal{Q}_{(a+1)^{n}}(L_{n-1}-L_{n})}{W^{(q+\lambda)}(L_{n-1}-L_{n})}. \nonumber \end{eqnarray}

Similarly, for $L_{n}<x<L_{n-1}$ ,

\begin{eqnarray*}&&\mathcal{T}_{n,0}(x)=\sum_{j=0}^{n-1}({-}1)^{j}A_{n,\mathcal{T},j}\circledast_{i=0}^{j}W^{(q+\lambda)}_{(a+1)^{i}}(x-L_{n})\\&&\qquad +\,({-}1)^{n} A_{n,\mathcal{T},n} \circledast_{i=0}^{n-1}W^{(q+\lambda)}_{(a+1)^{i}}\circledast\overline{W}^{(q+\lambda)}_{(a+1)^{n}}(x-L_{n}),\end{eqnarray*}

where $A_{n,\mathcal{T},j}$ , $n=1,2$ , $j=0,1,2$ , are as in (5.21) and (5.22)–(5.23). For $n \geq 3$ , the $A_{n,\mathcal{T},j}$ are obtained recursively as follows:

\begin{eqnarray} &&A_{n,\mathcal{T},1}\,:\!=\,\lambda A_{n-1,\mathcal{T},0},\nonumber\\ &&A_{n,\mathcal{T},j}\,:\!=\,\lambda(a+1)A_{n-1,\mathcal{T},j-1},\,\,\,2\leq j\leq n, \nonumber\\&&A_{n,\mathcal{T},0}\,:\!=\,\frac{\sum_{j=1}^{n-1}({-}1)^{j-1}A_{n,\mathcal{T},j}\circledast_{i=0}^{j}W^{(q+\lambda)}_{(a+1)^{i}}(L_{n-1}-L_{n})}{W^{(q+\lambda)}(L_{n-1}-L_{n})}\nonumber\\&&\qquad \qquad +\,({-}1)^{n-1}\frac{A_{n,\mathcal{T},n}\circledast_{i=0}^{n-1}W^{(q+\lambda)}_{(a+1)^{i}}\circledast\overline{W}^{(q+\lambda)}_{(a+1)^{n}}(L_{n-1}-L_{n})}{W^{(q+\lambda)}(L_{n-1}-L_{n})}.\nonumber\end{eqnarray}
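Since the recursion for the $A_{n,\mathcal{T},j}$ (and, identically, for the $A_{n,f,j}$) only multiplies by $\lambda$ or $\lambda(a+1)$ for $j\geq 1$, it telescopes to $A_{n,j}=\lambda^{j}(a+1)^{j-1}A_{n-j,0}$ for $1\leq j<n$, and $A_{n,n}=\lambda^{n}(a+1)^{n-1}$. A Python sketch of the recursive part; the convolution-dependent constants $A_{n,0}$ are supplied through a user-provided callback, which is a placeholder here.

```python
def build_A(N, lam, a, A0):
    """Coefficients A[n][j] of Proposition 5.3 for 1 <= n <= N.

    A0(n, A) must return A_{n,0}; in the paper this is the
    scale-function convolution expression (placeholder here).
    Base case: A_{1,1} = lam.  For n >= 2:
      A_{n,1} = lam * A_{n-1,0};
      A_{n,j} = lam * (a+1) * A_{n-1,j-1},  2 <= j <= n.
    """
    A = {1: {1: lam}}
    A[1][0] = A0(1, A)
    for n in range(2, N + 1):
        A[n] = {1: lam * A[n - 1][0]}
        for j in range(2, n + 1):
            A[n][j] = lam * (a + 1) * A[n - 1][j - 1]
        A[n][0] = A0(n, A)   # scale-function part, supplied externally
    return A
```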

Recall that $v_N(x)$ gives the expected discounted dividends up to the time of reaching $L_N$ , having started at x, and (cf. (4.7)) $v_n = v_N(L_n)$ . Observe that for $L_{n}<x<L_{n-1}$ we have

(5.24) \begin{eqnarray}&&v_N (x)=\sum_{k=0}^{n-1}r_{n,n-k}(x)v_{n-k}+\sum_{k=1}^{n-1}\omega_{n,n-k}(x)v_{n-k}\nonumber\\&&\qquad +\,(\omega_{n,0}(x)+\mathcal{T}_{n,0}(x))v_{0} + v_n^J(x).\end{eqnarray}

The first two terms on the right-hand side of (5.24) correspond to cases in which a level from $\mathcal{N}$ is reached before $L_0=b$ is reached or up-crossed. The $v_0$ term covers the two cases in which level $L_0$ is reached ( $\omega_{n,0}(x)$ is the expected discounted time to reach level $L_{0}$ before reaching any other level in $\mathcal{N}$ ) and level $L_0$ is up-crossed by a jump ( $\mathcal{T}_{n,0}(x)$ is the expected discounted time until up-crossing $L_{0}$ before reaching any other level in $\mathcal{N}$ ). Finally $v_n^J(x)$ is the expected discounted overflow above $L_0$ by a jump, when it occurs before reaching any level in $\mathcal{N}$ .

We now derive a system of equations identifying all $v_n$ . Clearly $v_{N}=0$ . Let us set up an equation for $v_{0}$ . Assume that $U(0)=b=L_{0}$ . Let $\overline{X}(t)\,:\!=\,\sup_{0\leq s\leq t}\{X(s)\}$ and $V(t)\,:\!=\,(\overline{X}(t)-b)_{+}$ , where $y_{+}=\max\!(y,0)$ , and let $R(t)=X(t)-V(t)$ . Observe that V(t) is the cumulative amount of dividends obtained up to time t via the process X(t) alone. From Theorem 8.11 in Kyprianou [17] we have, with $d^R_{\alpha} = \min\{t\,:\, R(t) = \alpha\}$ ,

\begin{equation*}\frac{\mathbb{P}_{x}(R(\mathcal{E}_{q})\in (y,y+{\rm d}y), \mathcal{E}_{q}< d^{R}_{\alpha})} {q{\rm d}y}=\mu^{(q)}(x,y)=\frac{W^{(q)}(x-\alpha)}{W^{(q)^{\prime} }(b - \alpha)}W^{(q)^{\prime}}(b-y)-W^{(q)}(x-y),\end{equation*}

where $W^{(q)^{\prime}}(x)$ is the derivative of $W^{(q)}(x)$ with respect to x. Moreover, from Avram, Palmowski and Pistorius [7] we know that the expected discounted dividends paid until $d_{L_1}^{R} \wedge \mathcal{E}_{\lambda}$ starting at b equal

\begin{equation*} \eta\!\left(b,\frac{b}{a+1}\right)\,:\!=\,\mathbb{E}_{b}\!\left[\int_{0}^{\infty}e^{-qt}1_{t< d_{L_1}^R \wedge\mathcal{E}_{\lambda}}{\rm d}V(t)\right]=\frac{W^{(q+\lambda)}\Big(b-\frac{b}{a+1}\Big)}{W^{(q+\lambda)^{\prime}}\Big(b-\frac{b}{a+1}\Big)}. \end{equation*}
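For a concrete evaluation of $\eta$, note that when X is a Brownian motion with drift, $X(t)=\mu t+\sigma B(t)$ with Laplace exponent $\psi(\theta)=\mu\theta+\tfrac{1}{2}\sigma^{2}\theta^{2}$, the scale function is explicit: $W^{(q)}(x)=\big(e^{\theta_{+}x}-e^{\theta_{-}x}\big)/\sqrt{\mu^{2}+2q\sigma^{2}}$, where $\theta_{\pm}=\big({-}\mu\pm\sqrt{\mu^{2}+2q\sigma^{2}}\big)/\sigma^{2}$ are the roots of $\psi(\theta)=q$. A Python sketch under this assumption; in the model of this paper, $\mu=-1$ would correspond to the unit expense rate, and any parameter values used below are purely illustrative.

```python
import math

def scale_W(q, mu, sigma, x):
    """Scale function W^{(q)}(x) of X(t) = mu*t + sigma*B(t)."""
    s = math.sqrt(mu * mu + 2.0 * q * sigma * sigma)
    tp, tm = (-mu + s) / sigma**2, (-mu - s) / sigma**2
    return (math.exp(tp * x) - math.exp(tm * x)) / s

def scale_W_prime(q, mu, sigma, x):
    """Derivative of W^{(q)} in x; note W^{(q)'}(0+) = 2/sigma^2."""
    s = math.sqrt(mu * mu + 2.0 * q * sigma * sigma)
    tp, tm = (-mu + s) / sigma**2, (-mu - s) / sigma**2
    return (tp * math.exp(tp * x) - tm * math.exp(tm * x)) / s

def eta(b, a, q, lam, mu, sigma):
    """eta(b, b/(a+1)) = W^{(q+lam)}(d) / W^{(q+lam)'}(d), d = b - b/(a+1)."""
    d = b - b / (a + 1.0)
    return scale_W(q + lam, mu, sigma, d) / scale_W_prime(q + lam, mu, sigma, d)
```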

Additionally, from Theorem 8.10(i) in Kyprianou [17] with $\theta=0$ we have

(5.25) \begin{equation}\mathbb{E}_x\!\left[e^{-q d^R_\alpha}1_{d^R_{\alpha}<\mathcal{E}_{\lambda}}\right]=Z^{(q+\lambda)}(x-\alpha)-(q+\lambda)\frac{W^{(q+\lambda)}(b-\alpha)}{W^{(q+\lambda)^{\prime}}(b-\alpha)}W^{(q+\lambda)}(x-\alpha).\end{equation}

Therefore,

(5.26) \begin{eqnarray} &&v_{0}= \eta\!\left(b,\frac{b}{a+1}\right)+\lambda\int_{L_{1}}^{L_{0}}\mu^{(q+\lambda)}(b,y)((a+1)y-b+v_{0}){\rm d}y\nonumber\\ &&\qquad +\left(Z^{(q+\lambda)}(L_{0}-L_{1})-(q+\lambda)\frac{W^{(q+\lambda)}(L_{0}-L_{1})}{W^{(q+\lambda)^{\prime}}(L_{0}-L_{1})}W^{(q+\lambda)}(L_{0}-L_{1})\right)v_{1}. \end{eqnarray}

The second term is the expected discounted dividends due to a jump that occurs at time $\mathcal{E}_{\lambda}$ before down-crossing $L_{1}$ . The last term equals

\begin{equation*}\mathbb{E}_{L_0}\!\left[e^{-q d^R_{L_1}}1_{d^R_{L_1}<\mathcal{E}_{\lambda}}\right]v_1\end{equation*}

and hence is the expected discounted dividends when $L_{1}$ is down-crossed before the exponential time $\mathcal{E}_{\lambda}$ has expired. Further, we have

(5.27) \begin{eqnarray}&&v_{1}=\left(Z^{(q+\lambda)}(L_{1}-L_{2})-\frac{W^{(q+\lambda)} (L_{1}-L_{2})}{W^{(q+\lambda)}(L_{0}-L_{2})}Z^{(q+\lambda)}(L_{0}-L_{2})\right)v_{2}\end{eqnarray}
(5.28) \begin{eqnarray} +\left( \lambda\int_{L_{2}}^{L_{1}}u^{(q+\lambda)}_{L_{2},L_{0}}(L_{1},y) r_{1,1}((a+1)y){\rm d}y\,\right) v_{1}\qquad\qquad\end{eqnarray}
(5.29) \begin{eqnarray} +\left(\frac{W^{(q+\lambda)}(L_{1}-L_{2})}{W^{(q+\lambda)}(L_{0}-L_{2})}\right.\qquad\qquad\qquad\qquad\qquad\ \ \qquad\end{eqnarray}
(5.30) \begin{eqnarray}+\,\lambda\!\left.\int_{L_{2}}^{L_{1}}u^{(q+\lambda)}_{L_{2},L_{0}}(L_{1},y)\left(\mathcal{T}_{1,0}((a+1)y) +\omega_{1,0}((a+1)y)\right){\rm d}y\right)\,v_{0}\end{eqnarray}
(5.31) \begin{eqnarray} +\,\lambda\int_{L_{1}}^{L_{0}}u^{(q+\lambda)}_{L_{2},L_{0}}(L_{1},y)((a+1)y-b){\rm d}y\qquad\qquad\qquad\qquad\end{eqnarray}
(5.32) \begin{eqnarray} +\,\lambda\int_{L_{2}}^{L_{1}}u^{(q+\lambda)}_{L_{2},L_{0}}(L_{1},y)v_{1}^{J}((a+1)y){\rm d}y .\qquad\qquad\qquad\qquad \end{eqnarray}

The term in the parentheses in (5.27) is the expected discounted time to reach $L_{2}$ before a jump and before reaching $L_{0}$ . The term that multiplies $v_{1}$ in (5.28) is the expected discounted time to reach $L_{1}$ before any other level in $\mathcal{N}$ is reached. The term (5.29) is the expected discounted time to reach $L_{0}$ by the Brownian motion before down-crossing $L_{2}$ and before the exponential time has expired. The first term in (5.30) is the expected discounted time to jump above b when this jump occurs before the exponential time has expired, and the second term is the expected discounted time to reach b by the Brownian motion. The term (5.31) is the expected discounted dividends due to a jump when the exponential time has expired while the process is in $(L_{1},L_{0})$ , and (5.32) is the expected discounted dividends due to a jump when the exponential time has expired while the process is in $(L_2,L_1)$ . Similarly, we can observe that

(5.33) \begin{align}\!\!\! v_{2}=\left(Z^{(q+\lambda)}(L_{2}-L_{3})-\frac{W^{(q+\lambda)} (L_{2}-L_{3})}{W^{(q+\lambda)}(L_{1}-L_{3})}Z^{(q+\lambda)}(L_{1}-L_{3})\right)v_{3}\qquad\qquad\qquad\qquad\end{align}
(5.34) \begin{align} +\left( \lambda\int_{L_{3}}^{L_{2}}u^{(q+\lambda)}_{L_{3},L_{1}}(L_{2},y)r_{2,2}((a+1)y){\rm d}y\, \right) v_{2}\qquad\qquad\qquad\qquad\qquad\quad\qquad\end{align}
(5.35) \begin{align} \ \ \ +\left(\frac{W^{(q+\lambda)}(L_{2}-L_{3})}{W^{(q+\lambda)}(L_{1}-L_{3})}+\lambda\int_{L_{3}}^{L_{2}}u^{(q+\lambda)}_{L_{3},L_{1}}(L_{2},y)(r_{2,1}((a+1)y)+\omega_{2,1}((a+1)y)){\rm d}y \right. \end{align}
(5.36) \begin{align} +\left.\lambda\int_{L_{2}}^{L_{1}}u^{(q+\lambda)}_{L_{3},L_{1}}(L_{2},y)r_{1,1}((a+1)y){\rm d}y\right) v_{1}\qquad\qquad\qquad\qquad\qquad\qquad\qquad \end{align}
(5.37) \begin{align} +\left(\lambda\int_{L_{3}}^{L_{2}}u^{(q+\lambda)}_{L_{3},L_{1}}(L_{2},y)(\mathcal{T}_{2,0}((a+1)y)+\omega_{2,0}((a+1)y)){\rm d}y \right. \qquad\qquad\qquad\quad\end{align}
(5.38) \begin{align} +\left.\lambda\int_{L_{2}}^{L_{1}}u^{(q+\lambda)}_{L_{3},L_{1}}(L_{2},y)(\mathcal{T}_{1,0}((a+1)y)+\omega_{1,0}((a+1)y)){\rm d}y\right)v_{0}\qquad\qquad\qquad\end{align}
(5.39) \begin{align} +\,\lambda\!\left(\int_{L_{3}}^{L_{2}}u^{(q+\lambda)}_{L_{3},L_{1}}(L_{2},y)v_{2}^{J}((a+1)y){\rm d}y+\int_{L_{2}}^{L_{1}}u^{(q+\lambda)}_{L_{3},L_{1}}(L_{2},y)v_{1}^{J}((a+1)y){\rm d}y\right).\end{align}

Indeed, (5.33) is the expected discounted dividends when level $L_{3}$ is reached before any other level in $\mathcal{N}$ and before a jump. The term (5.34) is the expected discounted dividends when a jump occurs before reaching $L_{1}$ or $L_{3}$ , and just before the jump U is between $L_{2}$ and $L_{3}$ . Similarly, (5.35) and (5.36) are the expected discounted dividends when a jump occurs before reaching $L_{1}$ or $L_{3}$ and after this jump the first level that is reached is $L_{1}$ . Additionally, (5.37) and (5.38) are the expected discounted dividends when a jump occurs before reaching $L_{1}$ or $L_{3}$ and after this jump the first level that is reached is $L_{0}$ . Moreover, (5.39) is the expected discounted dividends due to overflow above b: a jump occurs before $L_{1}$ or $L_{3}$ is reached and carries the process above b, after which the overshoot is immediately paid out as a dividend, so that the first level reached is $L_{0}=b$ . Using similar arguments we can conclude that for $2\leq n\leq N-1$ we have

(5.40) \begin{align} v_{n}=\left(Z^{(q+\lambda)}(L_{n}-L_{n+1})-\frac{W^{(q+\lambda)} (L_{n}-L_{n+1})}{W^{(q+\lambda)}(L_{n-1}-L_{n+1})}Z^{(q+\lambda)}(L_{n-1}-L_{n+1})\right)v_{n+1}\qquad\quad \end{align}
(5.41) \begin{align}&+\left( \lambda\int_{L_{n+1}}^{L_{n}}u^{(q+\lambda)}_{L_{n+1},L_{n-1}}(L_{n},y)r_{n,n}((a+1)y){\rm d}y\, \right) v_{n}\qquad\qquad\qquad\qquad\qquad\end{align}
(5.42) \begin{align}&+\left(\lambda \int_{L_{n+1}}^{L_{n}}u^{(q+\lambda)}_{L_{n+1},L_{n-1}}(L_{n},y)(r_{n,n-1}((a+1)y)+\omega_{n,n-1}((a+1)y)){\rm d}y\right.\qquad\end{align}
(5.43) \begin{align}\qquad &+\frac{W^{(q+\lambda)}(L_{n}-L_{n+1})}{W^{(q+\lambda)}(L_{n-1}-L_{n+1})}+\left.\lambda\int_{L_{n}}^{L_{n-1}}u^{(q+\lambda)}_{L_{n+1},L_{n-1}}(L_{n},y)r_{n-1,n-1}((a+1)y){\rm d}y\right)\,v_{n-1}\quad\end{align}
(5.44) \begin{align}\ \ \ \ &+\lambda\sum_{k=2}^{n-1}\Bigg(\int_{L_{n+1}}^{L_{n}}u^{(q+\lambda)}_{L_{n+1},L_{n-1}}(L_{n},y)(r_{n,n-k}((a+1)y)+\omega_{n,n-k}((a+1)y)){\rm d}y\quad\end{align}
(5.45) \begin{align}\ \ \ \ &+\int_{L_{n}}^{L_{n-1}}u^{(q+\lambda)}_{L_{n+1},L_{n-1}}(L_{n},y)(r_{n-1,n-1-(k-1)}((a+1)y)\\&\qquad \qquad +\omega_{n-1,n-1-(k-1)}((a+1)y)){\rm d}y\Bigg)v_{n-k}\qquad\qquad\qquad\qquad\qquad\qquad\quad\nonumber\end{align}
(5.46) \begin{align}\ \ \ \ &+\lambda\Bigg( \int_{L_{n+1}}^{L_{n}}u^{(q+\lambda)}_{L_{n+1},L_{n-1}}(L_{n},y)\omega_{n,0}((a+1)y){\rm d}y\\&\qquad \qquad +\int_{L_{n}}^{L_{n-1}}u^{(q+\lambda)}_{L_{n+1},L_{n-1}}(L_{n},y)\omega_{n-1,0}((a+1)y){\rm d}y\qquad\qquad\qquad\qquad\qquad\nonumber\end{align}
(5.47) \begin{align}\ \ \ \ &+\int_{L_{n+1}}^{L_{n}}u^{(q+\lambda)}_{L_{n+1},L_{n-1}}(L_{n},y)\mathcal{T}_{n,0}((a+1)y){\rm d}y\\&\qquad \quad+\int_{L_{n}}^{L_{n-1}}u^{(q+\lambda)}_{L_{n+1},L_{n-1}}(L_{n},y)\mathcal{T}_{n-1,0}((a+1)y){\rm d}y\Bigg)v_{0}\qquad\qquad\quad\qquad\qquad\nonumber \end{align}
(5.48) \begin{align}\ \ &+\lambda\Bigg(\int_{L_{n+1}}^{L_{n}}u^{(q+\lambda)}_{L_{n+1},L_{n-1}}(L_{n},y)v^{J}_{n}((a+1)y){\rm d}y\\&\qquad \quad+\int_{L_{n}}^{L_{n-1}}u^{(q+\lambda)}_{L_{n+1},L_{n-1}}(L_{n},y)v^{J}_{n-1}((a+1)y){\rm d}y\Bigg).\qquad\qquad\qquad\qquad\qquad\nonumber\end{align}

The terms (5.40)–(5.45) are obtained by the same arguments as those leading to (5.18). The terms (5.46)–(5.47) describe the expected discounted time until $L_{0}$ is reached, when it is the first level in $\mathcal{N}$ to be reached. In (5.46), level $L_{0}$ is reached by the Brownian motion, and in (5.47) it is reached immediately after a jump above $L_{0}$ . Finally, (5.48) describes the expected discounted dividends paid due to up-crossing of $L_{0}$ when it occurs before any other level in $\mathcal{N}$ has been reached. Lastly, notice that

(5.49) \begin{equation}v_{N}=0. \end{equation}

To sum up, we have the following main result.

Theorem 5.2. The value function $v_N(x)$ , defined formally in (1.6), is given by (5.24), with $r_{n,n-k}$ , $\omega_{n,n-k}$ identified in Propositions 5.1 and 5.2; with $v^{J}_{n}(x)$ , $\mathcal{T}_{n,0}(x)$ identified in Proposition 5.3; and with $v_n$ solving the system of linear equations (5.26)–(5.49).

6. Suggestions for further research

The present study could serve as a first step towards the analysis of more general classes of dual risk processes with proportional gains. Below we suggest a few topics for further research:

  (i) One could consider more general jumps up from level u, possibly of the form $u + \zeta(u) + C_i$ , where $\zeta(u)$ is a subordinator.

  (ii) In Sections 4 and 5 we have considered proportional growth at jump epochs, assuming that $C_i \equiv 0$ . It would be interesting to remove the latter assumption.

  (iii) Another interesting research topic is an exact analysis of the value function v(x) defined in (1.5), without recourse to the approximation approach with levels $L_0,\dots,L_N$ . One would then have to solve the differential-delay equation (4.2).

Acknowledgements

The authors are grateful to Eurandom (Eindhoven, the Netherlands) for organizing the Multidimensional Queues, Risk and Finance Workshop, where this project started.

Funding information

The research of O. Boxma was supported via a TOP-C1 grant from the Netherlands Organisation for Scientific Research. The research of E. Frostig was supported by the Israel Science Foundation, Grant No. 1999/18. The research of Z. Palmowski was supported by Polish National Science Centre Grant No. 2018/29/B/ST1/00756 (2019-2022).

Competing interests

There were no competing interests to declare which arose during the preparation or publication process of this article.

References

[1] Afonso, L. B., Cardoso, R. M. R. and dos Reis, E. (2013). Dividend problems in the dual risk model. Insurance Math. Econom. 53, 906–918.
[2] Albrecher, H., Badescu, A. L. and Landriault, D. (2008). On the dual risk model with tax payments. Insurance Math. Econom. 42, 1086–1094.
[3] Avanzi, B. (2009). Strategies for dividend distribution: a review. N. Amer. Actuarial J. 13, 217–251.
[4] Avanzi, B. and Gerber, H. U. (2008). Optimal dividends in the dual risk model with diffusion. ASTIN Bull. 38, 653–667.
[5] Avanzi, B., Gerber, H. U. and Shiu, E. S. W. (2007). Optimal dividends in the dual model. Insurance Math. Econom. 41, 111–123.
[6] Avanzi, B., Pérez, J. L., Wong, B. and Yamazaki, K. (2017). On optimal joint reflective and refractive dividend strategies in spectrally positive Lévy models. Insurance Math. Econom. 72, 148–162.
[7] Avram, F., Palmowski, Z. and Pistorius, M. (2007). On the optimal dividend problem for a spectrally negative Lévy process. Ann. Appl. Prob. 17, 156–180.
[8] Bayraktar, E., Kyprianou, A. and Yamazaki, K. (2014). On the optimal dividends in the dual model. ASTIN Bull. 43, 359–372.
[9] Boxma, O. J. and Frostig, E. (2018). The dual risk model with dividends taken at arrival. Insurance Math. Econom. 83, 83–92.
[10] Boxma, O. J., Löpker, A. and Mandjes, M. R. H. (2020). On two classes of reflected autoregressive processes. J. Appl. Prob. 57, 657–678.
[11] Boxma, O. J., Löpker, A., Mandjes, M. R. H. and Palmowski, Z. (2021). A multiplicative version of the Lindley recursion. Queueing Systems 98, 225–245.
[12] Boxma, O. J., Mandjes, M. R. H. and Reed, J. (2016). On a class of reflected AR(1) processes. J. Appl. Prob. 53, 816–832.
[13] Buraczewski, D., Damek, E. and Mikosch, T. (2016). Stochastic Models with Power-Law Tails: The Equation $X = AX + B$. Springer, Cham.
[14] Cohen, J. W. (1982). The Single Server Queue. North-Holland, Amsterdam.
[15] Hale, J. K. and Verduyn Lunel, S. M. (1993). Introduction to Functional Differential Equations. Springer, Berlin.
[16] Kuznetsov, A., Kyprianou, A. E. and Rivero, V. (2013). The theory of scale functions for spectrally negative Lévy processes. In Lévy Matters II, Springer, Berlin, pp. 97–186.
[17] Kyprianou, A. E. (2006). Introductory Lectures on Fluctuations of Lévy Processes with Applications. Springer, Berlin.
[18] Marciniak, E. and Palmowski, Z. (2018). On the optimal dividend problem in the dual models with surplus-dependent premiums. J. Optimization Theory Appl. 179, 533–552.
[19] Ng, A. (2009). On the dual model with a dividend threshold. Insurance Math. Econom. 44, 315–324.
[20] Palmowski, Z., Ramsden, L. and Papaioannou, A. D. (2018). Parisian ruin for the dual risk process in discrete-time. Europ. Actuarial J. 8, 197–214.
[21] Prabhu, N. U. (1998). Stochastic Storage Processes. Springer, New York.
[22] Ross, S. (2009). Introduction to Probability Models, 10th edn. Academic Press, New York.
[23] Vlasiou, M. (2006). Lindley-type recursions. Doctoral Thesis, Eindhoven University of Technology.
[24] Yin, C. and Wen, Y. (2013). Optimal dividend problem with terminal value for spectrally positive Lévy processes. Insurance Math. Econom. 53, 769–773.
[25] Yin, C., Wen, Y. and Zhao, Y. (2014). On the dividend problem for a spectrally positive Lévy process. ASTIN Bull. 44, 635–651.

Figure 1. CD Projekt asset value; see https://businessinsider.com.pl.