
Unified approach for solving exit problems for additive-increase and multiplicative-decrease processes

Published online by Cambridge University Press:  30 August 2022

Remco van der Hofstad*
Affiliation:
Eindhoven University of Technology
Stella Kapodistria*
Affiliation:
Eindhoven University of Technology
Zbigniew Palmowski*
Affiliation:
Wrocław University of Science and Technology
Seva Shneer*
Affiliation:
Heriot-Watt University
*Postal address: Department of Mathematics and Computer Science, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands.
**Postal address: Faculty of Pure and Applied Mathematics, Wrocław University of Science and Technology, Wyb. Wyspiańskiego 27, 50-370 Wrocław, Poland. Email: [email protected]
***Postal address: Heriot-Watt University, Edinburgh, UK. Email: [email protected]

Abstract

We analyse an additive-increase and multiplicative-decrease (also known as growth–collapse) process that grows linearly in time and that, at Poisson epochs, experiences downward jumps that are (deterministically) proportional to its present position. For this process, and also for its reflected versions, we consider one- and two-sided exit problems that concern the identification of the laws of exit times from fixed intervals and half-lines. All proofs are based on a unified first-step analysis approach at the first jump epoch, which allows us to give explicit, yet involved, formulas for their Laplace transforms. All eight Laplace transforms can be described in terms of two so-called scale functions associated with the upward one-sided exit time and with the upward two-sided exit time. All other Laplace transforms can be obtained from the above scale functions by taking limits, derivatives, integrals, and combinations of these.

Type
Original Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

We analyse an additive-increase and multiplicative-decrease (also known as growth–collapse or stress–release) process $X\equiv\left ( X_t\right)_{t\geq 0}$ that grows linearly with slope $\beta$ and experiences downward jumps at Poisson epochs, say $(T_i)_{i\in\mathbb{N}}$, with fixed intensity $\lambda$. The collapses are modelled by multiplying the present process position by a fixed proportion $p\in (0,1)$, i.e. $-\Delta X_{T_i}=(1-p)X_{T_i-}$ for $\Delta X_t=X_t-X_{t-}$. We assume that the process starts on the positive half-line, $X_0=x>0$. An illustration of a path of the process $X_t$ is depicted in Fig. 1. For more information on this class of processes, the interested reader is referred to [39]. Note that, without loss of generality, we can assume that $\beta=1$; results for general $\beta$ may be obtained by a simple time rescaling. On a logarithmic scale the considered process is equivalent to a particular jump diffusion with drift coefficient $1/x$ and jumps of size $\log p$ at the Poisson epochs.

Figure 1. Additive-increase and multiplicative-decrease process path.
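Such a path is straightforward to simulate, since between the Poisson epochs the process simply grows at unit rate. The following sketch (illustrative Python, not part of the paper; the function name and interface are ours) samples the exponential inter-jump times directly and records the pre- and post-jump positions.

```python
import random

def simulate_aimd(x0, lam, p, n_jumps, rng):
    """Simulate the AIMD process at its first n_jumps Poisson epochs:
    unit upward drift (beta = 1) between jumps, and at each epoch T_i a
    collapse X_{T_i} = p * X_{T_i-}, i.e. -Delta X = (1 - p) X_{T_i-}.
    Returns the jump times and the pre-/post-jump positions."""
    t, x = 0.0, x0
    times, pre, post = [], [], []
    for _ in range(n_jumps):
        dt = rng.expovariate(lam)   # exponential inter-jump time
        t += dt
        x += dt                     # linear growth with slope 1
        times.append(t)
        pre.append(x)               # position just before the jump
        x *= p                      # multiplicative collapse
        post.append(x)              # position just after the jump
    return times, pre, post
```

Plotting the piecewise-linear interpolation of these points reproduces the sawtooth shape of Figure 1.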

The additive-increase and multiplicative-decrease (AIMD) process has various applications. For example, it appears as the fluid limit scaling for some queueing models (with binomial catastrophe rates) used in modelling population growth subject to mild catastrophes; see, e.g., [1, 2] and the references therein. Such processes can be viewed as a particular example of the so-called shot noise model, which is used in models of earthquakes, avalanches, or neuron firings. Moreover, this process is also used in the AIMD algorithm to model the Transmission Control Protocol (with $p=\tfrac{1}{2}$), the dominant protocol for data transfer over the internet (see [17, 26]).

1.1. Main contribution of the paper

The main objective of this work is to identify the laws of the exit (aka first passage) times

(1.1) \begin{equation} \tau_{\uparrow a}(x) = \inf\{t\geq 0\,:\, X_t> a \mid X_0=x\} \end{equation}

for $x\in[0, a),$ and

(1.2) \begin{equation} \tau_{\downarrow b}(x)=\inf\{t \geq 0\,:\, X_t\leq b\mid X_0=x\}\end{equation}

for $x\in [b,+\infty)$ , and to present a unifying framework for their derivation.

The master plan, in a fluctuation theory of Markov processes with jumps in one direction, is to produce a great variety of exit identities in terms of a few key functions only (the so-called scale functions, a term originating from diffusion theory; see, e.g., [21, 22, 23, 27]). These crucial functions appear in the Laplace transforms of the exit times in (1.1) and (1.2). Hence, the first main step is to identify these ‘alphabet functions’; this identification is the main result of the present paper. It is commonly believed that at most three scale functions (or three letters) are needed.

In the case of Lévy processes there are only two scale functions, which are related to the various ways in which the process can exit the interval: only in a continuous way via the upper end of this interval, and possibly by a jump via the lower end of this interval; see [5] for further discussion. With these scale or key functions at hand, we are usually able to solve most other identities for the original process or for related transformed ones. Such transformations are usually obtained by a reflection (at the running infimum or supremum of the process), a refraction (only a fixed portion of the process is reflected), a twisting of measure, or an additional randomization (via Poisson observation, subordination, etc.). Finally, this ‘scale-functions paradigm’ is used in many applications, appearing in queueing theory, actuarial science, optimization, mathematical finance, or control theory, again exemplifying their importance; see [33] for details.

The above complete plan has been executed only for spectrally negative Lévy processes—an overview of this theory is given in [33]; see also [4, 8, 9, 10, 16, 18, 31, 45, 49]. Most proofs in this theory rely on only two key properties: the Markov property and the skip-free property. Hence, there is hope that part of this master plan can be realized for other processes with only downward jumps as well. In this paper we identify ‘alphabet’ functions for the additive-increase and multiplicative-decrease process X introduced formally above.

We show that for the additive-increase and multiplicative-decrease process, as well as for related transformed ones, only two scale functions are needed, to which all other Laplace transforms can be related; see Remarks 2.3 and 2.6.

There are already some results of this kind for other Markov processes; see [11] for an overview. In risk and queueing theories some of the Lévy-type results have already been generalized to compound renewal processes; see [3]. Other deep results have been obtained for diffusion processes (see [12, 15, 19, 34, 38, 47]). Similar results have been derived in the context of generalized spectrally negative Ornstein–Uhlenbeck processes (see [24, 29, 41, 42]). Later, spectrally positive self-similar Markov processes were analysed as well; see [36]. Other types of processes for which scale functions have been successfully identified include Lévy-driven Langevin-type processes [14]; affine processes [7]; Markov additive processes [28, 32]; and Segerdahl–Tichy processes [6, 40, 43, 46, 48].

The additive-increase and multiplicative-decrease processes under consideration have one substantial difference from the abovementioned Lévy processes, namely that the jump size depends on the position of the process X prior to the jump. This produces substantial difficulties in solving the exit problems, and handling them requires a new approach. In particular, the principal tools in Lévy-type fluctuation theory, Wiener–Hopf factorization and Itô’s excursion theory, are not available in our case. Instead, we rely on a first-step analysis that allows us to identify the two scale functions. This is a novel approach for such problems, as we explain in more detail now.

1.2. First-step analysis as a main method

The proposed unifying approach for the computation of the exit identities relies on a first-step analysis based on finding the position of the considered processes right after the first jump epoch. This approach produces a recursive equation, which we subsequently solve. Instead of this approach, one could also implement the differential equation method, as often used for diffusion and Lévy processes, and this would yield the same recursive equation as the first-step analysis proposed in this paper.

This method has long been known in the literature (see, e.g., [3, Chapter XII], [13, Part 2], and [20]), but we believe that this is the first time that this type of analysis (i.e. the identification of all scale functions) has been carried out for a process with down-jumps of a size proportional to the present position. We think that this approach, using only the Markov property and the structure of the trajectories, could be used for other Markov processes having the skip-free property as well.

We manage to solve the exit problems for reflected processes as well (where reflection occurs at lower and upper fixed levels, as well as at the running infimum and maximum). For such processes our approach is more standard: it is based on the method of [38] and on the construction of a Kennedy martingale adapted to our model, followed by an application of the optional stopping theorem.

1.3. Organisation of the paper

The remainder of this paper is structured as follows. In Sections 2 and 3 we present our main results with their proofs. In Section 4, we close with a discussion of our results, alternative approaches, and possible future directions.

2. Exit identities

Given the initial position of the stochastic process, say $X_0=x$ , the exit problems are solved by characterizing the Laplace–Stieltjes transforms of $\tau_{\uparrow a}(x)$ in (1.1), $\tau_{\downarrow b}(x)$ in (1.2), and

(2.1) \begin{equation}{ \tau_{a,b}(x)=\min\{\tau_{\uparrow a}(x),\, \tau_{\downarrow b}(x)\} }\end{equation}

with $x\in (b,a)$. Note that the stochastic process X hits the threshold a exactly when crossing it upwards, as it can only move upwards continuously (creeping). On the other hand, it jumps over the threshold b when crossing b from above. As such (creeping versus jumping), the derivations of the two exit times involve different properties, typically requiring very different techniques. Here, we propose a unifying framework for exit times, for both one- and two-sided exit problems. In what follows, suppose that the law ${\mathbb P}_{x}$ corresponds to the conditional version of ${\mathbb P}$ given that $X_0=x$. Analogously, ${\mathbb E}_x$ denotes the expectation with respect to ${\mathbb P}_x$. Let $\mathcal{F}_t$ be a right-continuous natural filtration of X satisfying the usual conditions.

We now state our main results: Theorems 2.1 and 2.2, which study the upward and downward crossing problems, and Theorems 2.3 and 2.4, which study the two-sided exit problems. We start by discussing upward one-sided exit problems.

Theorem 2.1. (Upward one-sided exit problem.) Given $x\in (0,a)$ , the Laplace–Stieltjes transform of the upward exit time $\tau_{\uparrow a}(x)$ of the additive-increase and multiplicative-decrease process $( X_t)_{t\geq 0}$ is given by

\begin{equation*} Z_\uparrow (w;\, x,a)\,:\!=\,{\mathbb E}_x \big[\textrm{e}^{-w \tau_{\uparrow a}(x)}\big] = \frac{Z_\uparrow (w;\, 0,a)}{Z_\uparrow (w;\, 0,x)}, \qquad \textrm{Re}[w]>0, \end{equation*}

with

(2.2) \begin{equation} Z_\uparrow (w;\, 0,x)=\frac{1}{ \sum_{k=0}^\infty\frac{\left( x(w+\lambda)\right)^k}{k!} \left(\lambda/(w+\lambda) ;\,p\right)_k}, \end{equation}

with $(u;\,p)_k$ denoting the Pochhammer numbers given by

\begin{equation*}(u;\,p)_k=\begin{cases} 1,& k=0,\\[5pt] \prod_{i=0}^{k-1}(1 - u p^i), &k=1,2,\ldots\end{cases}\end{equation*}

Note that this result matches the result reported in [37, Section 4.2], in which the authors instead used a martingale approach for the derivation of the recursive equation satisfied by the Laplace–Stieltjes transform of the exit time.
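Numerically, the scale function in (2.2) is cheap to evaluate, since its defining series converges factorially fast. The sketch below (illustrative Python; names and the truncation level are ours, not from the paper) evaluates the truncated series and recovers the Laplace transform of Theorem 2.1 as the ratio $Z_\uparrow (w;\, 0,a)/Z_\uparrow (w;\, 0,x)$.

```python
import math

def qpochhammer(u, p, k):
    """Pochhammer number (u; p)_k = prod_{i=0}^{k-1} (1 - u p^i)."""
    out = 1.0
    for i in range(k):
        out *= 1.0 - u * p ** i
    return out

def Z_up0(w, x, lam, p, n_terms=60):
    """The scale function Z_up(w; 0, x) of Eq. (2.2): the reciprocal of a
    series whose k-th term is (x(w+lam))^k / k! times a Pochhammer number.
    The terms decay factorially, so a modest truncation suffices."""
    u = lam / (w + lam)
    total, term = 0.0, 1.0              # term = (x(w+lam))^k / k!
    for k in range(n_terms):
        total += term * qpochhammer(u, p, k)
        term *= x * (w + lam) / (k + 1)
    return 1.0 / total

def lst_up(w, x, a, lam, p):
    """E_x[exp(-w tau_up_a(x))] = Z_up(w;0,a) / Z_up(w;0,x) (Theorem 2.1)."""
    return Z_up0(w, a, lam, p) / Z_up0(w, x, lam, p)
```

As a sanity check, at $w=0$ the Pochhammer factor vanishes for every $k\geq 1$ (since $\lambda/(0+\lambda)=1$), so the transform equals 1; this is consistent with $\tau_{\uparrow a}(x)$ being almost surely finite, as some inter-jump time eventually exceeds a.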

Proof. Note that $\tau_{\uparrow a}(0) = \tau_{\uparrow x}(0) + \tau_{\uparrow a}(x)$ for all $x\in(0,a)$ , where $\tau_{\uparrow x}(0)$ and $\tau_{\uparrow a}(x)$ are independent. This is due to the strong Markov property, combined with the fact that $X_{\tau_{\uparrow a}(x)}=a$ since X does not have upward jumps (only creeps upwards). Hence,

(2.3) \begin{equation} Z_\uparrow (w;\, 0,a) = Z_\uparrow (w;\, 0,x) Z_\uparrow (w;\, x,a) \end{equation}

for all $w \in\mathbb{C}$ with $\textrm{Re}[w]>0$ , and for all $x\in(0,a)$ . This implies that in order to prove Theorem 2.1 it suffices to prove (2.2).

For this purpose, we write $\tau_{\uparrow a}(0) \stackrel{\textrm{d}}{=} a {\textbf{1}}_{\{T > a\}} + (T + \tau_{\uparrow a}(Tp) ){\textbf{1}}_{\{T \le a\}}$ , with $T=\inf\{t\,:\, X_t<X_{t-}\}$ denoting the time of the first downward jump, which is exponentially distributed with intensity $\lambda$ , and ${\textbf{1}}_{\{\cdot\}}$ denoting the indicator function taking value one if the event inside the brackets is satisfied and zero otherwise. The above result readily implies that

\begin{align*} Z_\uparrow (w;\, 0,a) & \,:\!=\,{\mathbb E}_0 [\textrm{e}^{-w \tau_{\uparrow a}(0)}]= \textrm{e}^{-(\lambda+w )a} + \int_{0}^{a} \lambda \textrm{e}^{-\lambda t} \textrm{e}^{-w t} {\mathbb E}_{tp} [\textrm{e}^{-w\tau_{\uparrow a}(tp)}]\,\textrm{d}t\\ & = \textrm{e}^{-(\lambda+w )a} + \lambda \int_{0}^{a} \textrm{e}^{-(\lambda+w ) t} Z_\uparrow (w;\, tp,a)\, \textrm{d}t.\end{align*}

In light of (2.3), this yields

\begin{align*} Z_\uparrow (w;\, 0,a) & = \textrm{e}^{-(\lambda+w )a} + \lambda Z_\uparrow (w;\, 0,a) \int_{0}^{a} \textrm{e}^{-(\lambda+w ) t} \frac{1}{Z_\uparrow (w;\, 0,tp)}\, \textrm{d}t.\end{align*}

This may be rewritten as

(2.4) \begin{equation} 1 = \frac{\textrm{e}^{-(\lambda+w )a}}{Z_\uparrow (w;\, 0,a)} + \lambda \int_{0}^{a} \textrm{e}^{-(\lambda+w ) t} \frac{1}{Z_\uparrow (w;\, 0,tp)} \, \textrm{d}t.\end{equation}

Let $\tilde{Z}_\uparrow(w, s) = \int_0^\infty \textrm{e}^{-sa} [{1}/{Z_\uparrow (w;\, 0,a)}]\, \textrm{d}a$ (note that this is well defined for $s\in\mathbb{C}$ with $\textrm{Re}[s]>\lambda+\textrm{Re}[w]$ ). Multiplying both sides of (2.4) by $\textrm{e}^{-sa}$ and integrating over a yields, after straightforward manipulations,

\begin{align*} s\tilde{Z}_\uparrow(w, \lambda+w+s)& =- \frac{\lambda}{p}\tilde{Z}_\uparrow\left(w, \frac{\lambda+w+s}{p}\right)+1.\end{align*}

Setting $z = \lambda + w + s$ leads to the recursive relation $(\lambda+w -z) \tilde{Z}_\uparrow(w, z)= ({\lambda}/{p}) \tilde{Z}_\uparrow(w, zp^{-1})-1$. All in all, the above can be written as $\tilde{Z}_\uparrow (w, z) =A(w, z)+B(w, z)\tilde{Z}_\uparrow (w, z p^{-1})$, yielding, upon iterating,

\begin{align*} \tilde{Z}_\uparrow (w, z) &=\sum_{k=0}^\infty A(w, z p^{-k})\prod_{i=0}^{k-1}B(w, z p^{-i})+\lim_{k\to\infty}\tilde{Z}_\uparrow (w, z p^{-k})\prod_{i=0}^{k-1}B(w, z p^{-i}),\end{align*}

with

\begin{align*} A(w, z) = \frac{-1}{\lambda+w -z},\qquad B(w,z)=\frac{\lambda}{p(\lambda+w -z)}.\end{align*}

Note that $\lim_{k\to\infty}\tilde{Z}_\uparrow (w, z p^{-k})=0$ and that $\lim_{k\to\infty}\prod_{i=0}^{k-1}B(w, z p^{-i})=0$ for all $\textrm{Re}[z]>\lambda+\textrm{Re}[w]$ . Hence,

\begin{align*} \tilde{Z}_\uparrow (w, z) =-\sum_{k=0}^\infty \frac{1}{\lambda+w-z p^{-k}} \prod_{i=0}^{k-1}\frac{\lambda}{p (w+\lambda - z p^{-i})} = \sum_{k=0}^\infty \frac{1}{z^{k+1}} \prod_{i=0}^{k-1} (\lambda+w -\lambda p^i).\end{align*}

The last equation follows by a straightforward application of [25, Equation (1.5.4)]. Equation (2.2) then follows immediately.

Alternatively, one can look for a solution in the form $\tilde{Z}_\uparrow(w, z) = \sum_{k=-\infty}^\infty\! c_k(w) z^{-k}$ with unknown $c_k(w)$. The above relation readily implies that $c_k(w)=0$ for all $k\le 0$, $c_1(w)=1$, and, for $k \ge 2$, $c_k(w) = \prod_{i=0}^{k-2} (\lambda+w -\lambda p^i)$.

In the following remark we prove the uniqueness of the solution obtained in Theorem 2.1 (equivalently, the uniqueness of the solution of Theorem 2.2).

Remark 2.1. (Uniqueness of solutions.) Let us assume that there are two solutions, say $\tilde{Z}^+_\uparrow (w, z)$ and $\tilde{Z}^-_\uparrow (w, z)$, to a recursive equation of the type

(2.5) \begin{align} \tilde{Z}_\uparrow (w, z) &=A(w, z)+B(w, z)\tilde{Z}_\uparrow (w, z p^{-1}). \end{align}

Then, the uniqueness of the solution follows from the recursive application of (2.5), since

\begin{align*} \tilde{Z}^+_\uparrow (w, z) -\tilde{Z}^-_\uparrow (w, z) & \stackrel{(2.5)}{=} B (w, z)\big(\tilde{Z}^+_\uparrow (w, z p^{-1})-\tilde{Z}^-_\uparrow (w, z p^{-1})\big) \\ & \stackrel{\phantom{(2.5)}}{=} \lim_{k\to\infty} \big(\tilde{Z}^+_\uparrow (w, z p^{-k})-\tilde{Z}^-_\uparrow (w, z p^{-k})\big) \prod_{i=0}^{k-1} B(w, zp^{-i})=0 \end{align*}

as $\lim_{k\to\infty} \tilde{Z}_\uparrow (w, z p^{-k}) = 0$ by the definition of the transform, and since

\begin{equation*}\lim_{k\to\infty} \prod_{i=0}^{k-1} B(w, zp^{-i})=\lim_{k\to\infty} \prod_{i=0}^{k-1}\frac{\lambda}{p (w+\lambda - z p^{-i})}=0\end{equation*}

for all $\textrm{Re}[z]>\lambda+\textrm{Re}[w]$ . Thus, the solution obtained is unique.

We continue by studying downward one-sided exit problems.

Theorem 2.2. (Downward one-sided exit problem.) Given $x\in (b,\infty)$ , the Laplace–Stieltjes transform

(2.6) \begin{equation} Z_\downarrow(w;\,x,b)\,:\!=\, {\mathbb E}_x[\textrm{e}^{-w\tau_{\downarrow b}(x)}], \qquad\textrm{Re}[w]>0, \end{equation}

of the downward crossing time $\tau_{\downarrow b}(x)$ of the additive-increase and multiplicative-decrease process $( X_t)_{t\geq 0}$ equals

(2.7) \begin{align} & Z_\downarrow(w;\,x,b) \nonumber \\ & = \frac{w}{w+\lambda} \sum_{k=1}^\infty \bigg(\frac{\lambda}{w+\lambda}\bigg)^k \textbf{1}_{\{b<x\leq bp^{-k}\}}\nonumber\\ & \quad + \frac{w}{\lambda(w+\lambda)} \sum_{k=0}^\infty \bigg(\frac{\lambda}{w+\lambda}\bigg)^{k+1} \sum_{i=0}^{k} \frac{ 1-(1+\tilde{C}(w;\,b))p^{i-k} }{ \prod_{j=0,j\neq i}^{k}(1-p^{i-j})} \textbf{1}_{\{bp^{-k}>x\}}\textrm{e}^{(w+\lambda)p^i x}, \end{align}

with

(2.8) \begin{align} \tilde{C}(w;\,b)&= \frac{ \sum_{l=0}^\infty \textrm{e}^{-b (w+\lambda) p^{-l}} \frac{\lambda^{l}({-}1)^l p^{l(l+1)/2}}{ (w+\lambda)^{l}(p;\,p)_l} } {\sum_{l=0}^\infty \textrm{e}^{-b(w+\lambda)p^{-l} }\frac{\lambda^{l}({-}1)^l p^{l(l-1)/2}}{(w+\lambda)^{l}(p;\,p)_l} }-1.\end{align}

Proof. In order to compute the first downward crossing time, we employ a first-step analysis approach yielding

(2.9) \begin{align} \tau_{\downarrow b}(x)\stackrel{\textrm{d}}{=} T\textbf{1}_{\{(x + T)p\leq b\}}+ (T+\tau_{\downarrow b}((x + T)p) ) \textbf{1}_{\{(x + T)p> b\}},\end{align}

with T denoting the time of the first downward jump, which is exponentially distributed with intensity $\lambda$ . Let $\tilde{Z}_\downarrow(w,z;\,b)\,:\!=\,\int_{b}^\infty \textrm{e}^{-z x }\mathbb{E}_x [\textrm{e}^{-w\tau_{\downarrow b}(x)}]\,\textrm{d}x$ with $\textrm{Re}[z]>\lambda+\textrm{Re}[w]$ ; then the above result, after cumbersome but straightforward computations, implies that

(2.10) \begin{align} \tilde{Z}_\downarrow (w, z;\, b) & = \frac{\lambda}{w+\lambda }\Bigg(\frac{ \textrm{e}^{b (w+\lambda)(1-p^{-1})-bz}-\textrm{e}^{-bz p^{-1}}}{w+\lambda-z }+\frac{\textrm{e}^{-b z }-\textrm{e}^{-b z p^{-1}}}{z}\Bigg)\nonumber\\ & \quad -\frac{\lambda \textrm{e}^{b(w+\lambda- z ) }}{p (w+\lambda- z )}\tilde{Z}_\downarrow (w, (w+\lambda)p^{-1};\, b) + \frac{\lambda}{p (w+\lambda- z )}\tilde{Z}_\downarrow (w, z p^{-1};\, b).\end{align}

All in all, the above can be written as $\tilde{Z}_\downarrow (w, z;\, b) =C(w, z)+D(w, z)\tilde{Z}_\downarrow (w, z p^{-1};\, b)$ , yielding, upon iterating,

\begin{align*} \tilde{Z}_\downarrow (w, z;\, b) &=\sum_{k=0}^\infty C(w, z p^{-k})\prod_{i=0}^{k-1}D(w, z p^{-i})+\lim_{k\to\infty}\tilde{Z}_\downarrow (w, z p^{-k};\,b)\prod_{i=0}^{k-1}D(w, z p^{-i}).\end{align*}

Note that $\lim_{k\to\infty}\tilde{Z}_\downarrow (w, z p^{-k};\,b)=0$ and that $\lim_{k\to\infty}\!\prod_{i=0}^{k-1}D(w, z p^{-i})=0$ for all $z\in\mathbb{C}$ with $\textrm{Re}[z]>\lambda+\textrm{Re}[w]$ . All in all,

(2.11) \begin{align} \tilde{Z}_\downarrow (w, z;\, b) &=\frac{\lambda}{w+\lambda} \sum_{k=0}^\infty \Bigg(\frac{ \textrm{e}^{b (w+\lambda)(1-p^{-1}) - bz p^{-k}}-\textrm{e}^{- b z p^{-k-1}}}{w+\lambda - z p^{-k}} + \frac{\textrm{e}^{-b z p^{-k}}-\textrm{e}^{- b z p^{-k-1}}}{z p^{-k}}\Bigg) \nonumber \\ & \qquad \times \prod_{i=0}^{k-1}\frac{\lambda}{p (w+\lambda - z p^{-i})}\nonumber\\ & \qquad -\tilde{Z}_\downarrow (w, (w+\lambda ) p^{-1};\,b)\sum_{k=0}^\infty \textrm{e}^{b(w+\lambda - z p^{-k}) } \prod_{i=0}^{k}\frac{\lambda}{p (w+\lambda - z p^{-i})}.\end{align}

In order to compute $\tilde{Z}_\downarrow (w, (w+\lambda ) p^{-1};\, b)$ , we first multiply (2.11) by $w+\lambda - z $ . After simplifying the resulting expressions we set $z =w+\lambda$ , rendering the left-hand side of (2.11) zero. This yields, after some straightforward algebraic manipulations,

\begin{align*} & \tilde{Z}_\downarrow (w, (w+\lambda ) p^{-1};\, b) \\ & = \Bigg[\frac{\lambda}{w+\lambda} \sum_{k=1}^\infty \Bigg(\frac{ \textrm{e}^{b (w+\lambda)(1-p^{-1}-p^{-k})}-\textrm{e}^{- b (w+\lambda) p^{-k-1}}}{(w+\lambda)(1 - p^{-k})} +\frac{\textrm{e}^{-b (w+\lambda) p^{-k}}-\textrm{e}^{-b (w+\lambda) p^{-k-1}}}{(w+\lambda)p^{-k}}\Bigg) \\ & \qquad \times \prod_{i=1}^{k-1}\frac{\lambda}{p (w+\lambda)(1- p^{-i})}\Bigg] \Bigg[\sum_{k=0}^\infty \textrm{e}^{b(w+\lambda)(1- p^{-k}) }\prod_{i=1}^{k}\frac{\lambda}{p (w+\lambda)(1-p^{-i})} \Bigg]^{-1}\\ & = \frac{p}{w+\lambda}\textrm{e}^{-b(w+\lambda)p^{-1}} - \frac{p}{w+\lambda}\textrm{e}^{-b(w+\lambda)p^{-1}} \frac{1 } {\sum_{k=0}^\infty \textrm{e}^{-b(w+\lambda)p^{-k} }\prod_{i=1}^{k}\frac{\lambda}{p (w+\lambda)(1-p^{-i})} }\nonumber\\ & \quad +\frac{\lambda}{w+\lambda}\textrm{e}^{-b(w+\lambda)p^{-1}} \Bigg[ \frac{ \sum_{k=1}^\infty \frac{\textrm{e}^{-b (w+\lambda) p^{-k}}}{(w+\lambda)p^{-k}} \prod_{i=1}^{k-1}\frac{\lambda}{p (w+\lambda)(1- p^{-i})} } {\sum_{k=0}^\infty \textrm{e}^{-b(w+\lambda)p^{-k} }\prod_{i=1}^{k}\frac{\lambda}{p (w+\lambda)(1-p^{-i})} }\nonumber\\ & \qquad \qquad \qquad \qquad \qquad - \frac{ \sum_{k=1}^\infty \!\left(\frac{ \textrm{e}^{- b (w+\lambda) p^{-k-1}}}{(w+\lambda)(1 - p^{-k})} +\frac{\textrm{e}^{-b (w+\lambda) p^{-k-1}}}{(w+\lambda)p^{-k}}\right) \prod_{i=1}^{k-1}\frac{\lambda}{p (w+\lambda)(1- p^{-i})} } {\sum_{k=0}^\infty \textrm{e}^{-b(w+\lambda)p^{-k} }\prod_{i=1}^{k}\frac{\lambda}{p (w+\lambda)(1-p^{-i})} } \Bigg]\nonumber\\ &=\frac{p}{w+\lambda}\textrm{e}^{-b(w+\lambda)p^{-1}} -\frac{w}{w+\lambda}\textrm{e}^{-b(w+\lambda) } \frac{ \sum_{k=1}^\infty \! \frac{\textrm{e}^{-b (w+\lambda) p^{-k}}}{(w+\lambda)p^{-k}} \prod_{i=1}^{k-1}\frac{\lambda}{p (w+\lambda)(1- p^{-i})} } {\sum_{k=0}^\infty \textrm{e}^{-b(w+\lambda)p^{-k} }\prod_{i=1}^{k}\frac{\lambda}{p (w+\lambda)(1-p^{-i})} } . \end{align*}

In light of this expression, (2.11) yields

(2.12) \begin{align} & \tilde{Z}_\downarrow (w, z;\, b) \nonumber \\ & = \frac{\lambda}{w+\lambda} \sum_{k=0}^\infty \Bigg({-}\frac{\textrm{e}^{- b z p^{-k-1}}}{w+\lambda - z p^{-k}} +\frac{\textrm{e}^{-b z p^{-k}}-\textrm{e}^{- b z p^{-k-1}}}{z p^{-k}}\Bigg) \prod_{i=0}^{k-1}\frac{\lambda}{p (w+\lambda - z p^{-i})} \nonumber \\ & \quad +\frac{w}{w+\lambda} \frac{ \sum_{l=1}^\infty \! \frac{\textrm{e}^{-b (w+\lambda) p^{-l}}}{(w+\lambda)p^{-l}} \prod_{i=1}^{l-1}\frac{\lambda}{p (w+\lambda)(1- p^{-i})} } {\sum_{l=0}^\infty \textrm{e}^{-b(w+\lambda)p^{-l} }\prod_{i=1}^{l}\frac{\lambda}{p (w+\lambda)(1-p^{-i})} } \sum_{k=0}^\infty \textrm{e}^{- bz p^{-k} } \prod_{i=0}^{k}\frac{\lambda}{p (w+\lambda - z p^{-i})}\nonumber\\ & = \frac{\textrm{e}^{-bz}}{z} - \frac{w}{w+\lambda} \sum_{k=0}^\infty \frac{\textrm{e}^{- bz p^{-k} }}{zp^{-k}} \prod_{i=0}^{k-1}\frac{\lambda}{p (w+\lambda - z p^{-i})} \nonumber \\ & \quad + \frac{w}{w+\lambda}\frac{p}{\lambda} \tilde{C}(w;\,b) \sum_{k=0}^\infty \textrm{e}^{- bz p^{-k} } \prod_{i=0}^{k}\frac{\lambda}{p (w+\lambda - z p^{-i})},\end{align}

with $\tilde{C}(w;\,b)$ as given in (2.8).

We now proceed with the inversion of the Laplace–Stieltjes transform with respect to z. To this end, we rewrite (2.12) by expanding the products into summations:

\begin{align*} & \tilde{Z}_\downarrow(w, z;\,b) \\ & = \frac{\textrm{e}^{-bz}}{z} - \frac{w}{w+\lambda} \sum_{k=0}^\infty \frac{\textrm{e}^{- bz p^{-k} }\lambda^k}{z} \prod_{i=0}^{k-1}\frac{1}{w+\lambda - z p^{-i}} \\ & \quad + \frac{w}{w+\lambda}\frac{p}{\lambda} \tilde{C}(w;\,b) \sum_{k=0}^\infty \frac{\textrm{e}^{- bz p^{-k} }\lambda^{k+1}}{p^{k+1}} \prod_{i=0}^{k}\frac{1}{w+\lambda - z p^{-i}}\\ & = \frac{\lambda}{w+\lambda}\frac{\textrm{e}^{-bz}}{z} \\ & \quad + \frac{w}{w+\lambda} \sum_{k=1}^\infty \bigg(\frac{\lambda}{w+\lambda}\bigg)^k \Bigg( \frac{1}{(w+\lambda)} \sum_{i=0}^{k-1} \frac{1}{\prod_{j=0,j\neq i}^{k-1}(1-p^{i-j})} \frac{\textrm{e}^{- bz p^{-k} }}{z-(w+\lambda)p^{i}} - \frac{\textrm{e}^{- bz p^{-k} }}{z} \Bigg) \\ & \quad - \frac{w}{w+\lambda}\frac{p}{\lambda} \tilde{C}(w;\,b) \sum_{k=0}^\infty \bigg(\frac{\lambda}{ p(w+\lambda)}\bigg)^{k+1} \sum_{i=0}^{k} \frac{p^i}{\prod_{j=0,j\neq i}^{k}(1-p^{i-j})} \frac{\textrm{e}^{- bz p^{-k} }}{z-(w+\lambda) p^{i}}\\ & = \frac{w}{w+\lambda} \sum_{k=1}^\infty \bigg(\frac{\lambda}{w+\lambda}\bigg)^k \frac{\textrm{e}^{-bz}-\textrm{e}^{- bz p^{-k} }}{z} \\ & \quad + \frac{w}{(w+\lambda)^2} \sum_{k=1}^\infty \bigg(\frac{\lambda}{w+\lambda}\bigg)^k \sum_{i=0}^{k-1} \frac{1}{\prod_{j=0,j\neq i}^{k-1}(1-p^{i-j})} \frac{\textrm{e}^{- bz p^{-k} }}{z-(w+\lambda)p^{i}} \\ & \quad - \frac{w}{w+\lambda}\frac{p}{\lambda} \tilde{C}(w;\,b) \sum_{k=0}^\infty \bigg(\frac{\lambda}{ w+\lambda}\bigg)^{k+1} \sum_{i=0}^{k} \frac{p^{i-k-1}}{\prod_{j=0,j\neq i}^{k}(1-p^{i-j})} \frac{\textrm{e}^{- bz p^{-k} }}{z-(w+\lambda) p^{i}} \\ & = \frac{w}{w+\lambda} \sum_{k=1}^\infty \bigg(\frac{\lambda}{w+\lambda}\bigg)^k \int_{b}^{\infty} \textrm{e}^{-zx}\textbf{1}_{\{b\lt x\leq bp^{-k}\}} \, \textrm{d}x \\ & \quad + \frac{w}{\lambda(w+\lambda)} \sum_{k=0}^\infty \bigg(\frac{\lambda}{w+\lambda}\bigg)^{k+1} \sum_{i=0}^{k} \frac{1-p^{i-k}}{\prod_{j=0,j\neq i}^{k}(1-p^{i-j})} \int_{b}^{\infty} \textrm{e}^{-zx}\textbf{1}_{\{bp^{-k}\lt x\}}\textrm{e}^{(w+\lambda)p^i x} \, \textrm{d}x \\ & \quad - \frac{w}{\lambda(w+\lambda)} \tilde{C}(w;\,b) \sum_{k=0}^\infty \bigg(\frac{\lambda}{ w+\lambda}\bigg)^{k+1} \\ & \qquad \times \sum_{i=0}^{k} \frac{p^{i-k}}{\prod_{j=0,j\neq i}^{k}(1-p^{i-j})} \int_{b}^{\infty} \textrm{e}^{-zx}\textbf{1}_{\{bp^{-k}\lt x\}}\textrm{e}^{(w+\lambda)p^i x} \, \textrm{d}x \\ & = \frac{w}{w+\lambda} \sum_{k=1}^\infty \bigg(\frac{\lambda}{w+\lambda}\bigg)^k \int_{b}^{\infty} \textrm{e}^{-zx}\textbf{1}_{\{b\lt x\leq bp^{-k}\}} \, \textrm{d}x \\ & \quad + \frac{w}{\lambda(w+\lambda)} \sum_{k=0}^\infty \bigg(\frac{\lambda}{w+\lambda}\bigg)^{k+1} \\ & \qquad \times \sum_{i=0}^{k} \frac{ 1-(1+\tilde{C}(w;\,b))p^{i-k} }{\prod_{j=0,j\neq i}^{k}(1-p^{i-j})} \int_{b}^{\infty} \textrm{e}^{-zx}\textbf{1}_{\{bp^{-k}\lt x\}}\textrm{e}^{(w+\lambda)p^i x} \, \textrm{d}x , \end{align*}

which completes the proof of the theorem.
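Two quick numerical companions to Theorem 2.2 (illustrative Python; names and truncation levels are ours, not from the paper): the first-step identity (2.9) translates verbatim into a Monte Carlo sampler for $\tau_{\downarrow b}(x)$, and the constant $\tilde{C}(w;\,b)$ of (2.8) can be evaluated by truncating its two series, whose terms decay double-exponentially fast owing to the factor $\textrm{e}^{-b(w+\lambda)p^{-l}}$.

```python
import math
import random

def tau_down_sample(x, b, lam, p, rng):
    """One sample of tau_down_b(x) drawn via the first-step identity (2.9):
    grow linearly for T ~ Exp(lam), jump down to (x + T) p, and stop as
    soon as a jump lands at or below b."""
    t = 0.0
    while True:
        T = rng.expovariate(lam)
        t += T
        x = (x + T) * p
        if x <= b:
            return t

def qpoch(u, p, k):
    """(u; p)_k = prod_{i=0}^{k-1} (1 - u p^i)."""
    out = 1.0
    for i in range(k):
        out *= 1.0 - u * p ** i
    return out

def C_tilde(w, b, lam, p, n_terms=30):
    """The constant C~(w; b) of Eq. (2.8); both series are truncated, which
    is harmless since the exponential factor underflows to 0 rapidly."""
    num = den = 0.0
    for l in range(n_terms):
        common = math.exp(-b * (w + lam) / p ** l) * (-lam / (w + lam)) ** l / qpoch(p, p, l)
        num += common * p ** (l * (l + 1) // 2)
        den += common * p ** (l * (l - 1) // 2)
    return num / den - 1.0
```

Averaging $\textrm{e}^{-w\tau}$ over many samples gives a direct check of (2.6)–(2.7); starting the process higher above b makes the transform smaller, as the down-crossing then takes longer.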

Remark 2.2. (Alternative approach.) Instead of the above solution, we could equivalently consider that

(2.13) \begin{align} \tilde{Z}_\downarrow (w, z;\, b)=\sum_{n=-\infty}^\infty c_n(w;\,b) z^{n}, \end{align}

and substitute this into the recursion (2.10). This yields

\begin{align*} (w+\lambda-z)\sum_{n=-\infty}^\infty c_n(w;\,b) z^{n} =\sum_{n=0}^\infty a_n(w;\,b) z^{n} +\sum_{n=-\infty}^\infty \lambda c_n(w;\,b)p^{-n-1} z^{n}, \end{align*}

with

\begin{align*} \sum_{n=0}^\infty a_n(w;\,b) z^{n} & = \frac{\lambda}{w+\lambda} \bigg(\textrm{e}^{b (w+\lambda)(1-p^{-1})-bz}-\textrm{e}^{-bz p^{-1}}+ \big(\textrm{e}^{-b z }-\textrm{e}^{-b z p^{-1}}\big)\frac{w+\lambda-z}{z}\bigg) \\ & \quad - \frac{\lambda }{p}\textrm{e}^{b(w+\lambda- z ) }\tilde{Z}_\downarrow (w, (w+\lambda)p^{-1};\, b) . \end{align*}

Equating the coefficients of $z^n$ , $n\in\mathbb{Z}$ , yields

(2.14) \begin{align} c_n(w;\,b) & = \frac{a_n(w;\,b) + c_{n-1}(w;\,b)}{w+\lambda(1 -p^{-n-1})} \nonumber \\ & = \sum_{k=0}^n\frac{a_k(w;\,b)}{\prod_{i=k+1}^{n+1}(w+\lambda(1 -p^{-i}))} +\frac{c_{-1}(w;\,b)}{\prod_{i=1}^{n+1}(w+\lambda(1 -p^{-i}))}, \qquad n \geq 0, \end{align}
(2.15) \begin{align} c_{-n}(w;\,b) & = (w+\lambda(1 -p^{n-2})) c_{-n+1}(w;\,b) \nonumber \\ & = \prod_{i=0}^{n-2}\left(w+\lambda(1 -p^{-i})\right) c_{-1}(w;\,b), \qquad n \geq 2, \end{align}

with $c_{-1}(w;\,b)=\textbf{1}_{\{w=0\}}$. On the one hand, note that $\lim_{n\to\infty}c_{-n}(w;\,b) =0$. On the other hand, taking the limit on the right-hand side of (2.15) yields

\begin{align*} \prod\limits_{i=0}^{\infty}\left(w+\lambda(1 -p^{-i})\right) =\begin{cases} 0&\text{ if }w=0,\\ \infty&\text{otherwise}. \end{cases} \end{align*}

Moreover, $\tilde{Z}_\downarrow(0, z;\,b)=\int_{b}^\infty \textrm{e}^{-z x } \, \textrm{d}x=\textrm{e}^{-bz}/z$ . So, all in all, $c_{-1}(w;\,b)=\textbf{1}_{\{w=0\}}$ and $c_{-n}(w;\,b)=0$ for all $n\geq 2$ . Note that this is consistent with the result from (2.12), as $\lim_{z\to0}z \tilde{Z}_\downarrow (w, z;\, b)=1-\textbf{1}_{\{w\neq0\}}=\textbf{1}_{\{w=0\}}$ .

Furthermore, we can compute $\tilde{Z}_\downarrow (w, (w+\lambda)p^{-1};\, b) $ by noting that the sequence $a_n(w;\,b)$ can be decomposed into $a_n(w;\,b)=a_{n,1}(w;\,b)+\tilde{Z}_\downarrow (w, (w+\lambda)p^{-1};\, b) a_{n,2}(w;\,b)$ , where the subsequences $a_{n,1}(w;\,b)$ and $a_{n,2}(w;\,b)$ are fully known (they are the coefficients of the Taylor expansions of the exponents). Then, by the definition of (2.13) and by (2.14) and (2.15),

\begin{align*} \tilde{Z}_\downarrow (w, (w+\lambda)p^{-1};\, b) &=\sum_{n=0}^\infty \sum_{k=0}^n \frac{(a_{k,1}(w;\,b) +\tilde{Z}_\downarrow (w, (w+\lambda)p^{-1};\, b) a_{k,2}(w;\,b))\big(\frac{w+\lambda}{p}\big)^{n}} {\prod_{i=k+1}^{n+1}\!(w+\lambda(1 -p^{-i}))} \\ & \quad + \textbf{1}_{\{w=0\}}\sum_{n=-1}^\infty \frac{\big(\frac{w+\lambda}{p}\big)^{n}}{\prod_{i=1}^{n+1}\!(w+\lambda(1 -p^{-i}))} , \end{align*}

which yields, after straightforward computations, the value of $\tilde{Z}_\downarrow (w, (w+\lambda)p^{-1};\, b)$ . Note that this form of the double Laplace–Stieltjes transform does not permit a straightforward inversion with respect to z, as done in the proof of Theorem 2.2.

Having established the laws governing the one-sided exit problems (up-crossing by creeping or down-crossing by a jump), we proceed with the next two theorems, in which we analyse the double-sided exit problems.

Theorem 2.3. (Upward two-sided exit problem.) For $x\in [b,a)$ , the Laplace–Stieltjes transform

(2.16) \begin{equation} L_{\uparrow}(w;\,x,a,b) \,:\!=\, {\mathbb E}_x \big[\textrm{e}^{-w \tau_{\uparrow a}(x)}{\textbf{1}}_{\{\tau_{\uparrow a} (x) < \tau_{\downarrow b}(x)\}}\big] \end{equation}

of the upward two-sided exit time $\tau_{\uparrow a}(x)$ of the additive-increase and multiplicative-decrease process $( X_t)_{t\geq 0}$ equals

(2.17) \begin{equation} L_{\uparrow}(w;\,x,a,b)=\frac{L_{\uparrow}(w;\,b,a,b)}{L_{\uparrow}(w;\,b,x,b)}, \end{equation}

and $L_{\uparrow}(w;\,b,x,b)$ (similarly for $L_{\uparrow}(w;\,b,a,b)$ ) may be found recursively for all values of x as follows:

For $x \in (b,bp^{-1}]$ ,

(2.18) \begin{equation} L_{\uparrow}(w;\,b,x,b) = \textrm{e}^{-(w+\lambda) (x-b)}. \end{equation}

For $x\in (b/p^{k}, b/p^{k+1}]$ and all $k\geq 1$ ,

(2.19) \begin{align} L_{\uparrow}(w;\,b,x,b) & = {\textrm{e}^{-(w+\lambda) (x-b/p^k)}} \nonumber \\ & \quad \times \Bigg[\frac{1}{L_{\uparrow}(w;\,b,b/p^k,b)} - \int_0^{x-b/p^k} \frac{\lambda \textrm{e}^{-(\lambda+w ) t}} {L_{\uparrow}(w;\,b,b/p^{k-1} + pt,b)} \, \textrm{d}t\Bigg]^{-1}, \end{align}

where it is assumed that recursively $L_{\uparrow}(w;\,b,y,b)$ is known for all $y \le b/p^k$ , the starting point of the recursion being given by (2.18).

Proof. First, we have, for all $y\in[b,x)$ , $\tau_{\uparrow a}(y) {\textbf{1}}_{\{\tau_{\uparrow a}(y) < \tau_{\downarrow b}(y)\}} \stackrel{\textrm{d}}{=} \tau_{\uparrow x}(y) {\textbf{1}}_{\{\tau_{\uparrow x}(y) < \tau_{\downarrow b}(y)\}} + \tau_{\uparrow a}(x) {\textbf{1}}_{\{\tau_{\uparrow a}(x) < \tau_{\downarrow b}(x)\}}$ , and note that the random variables on the right-hand side are independent, which follows straightforwardly from the Markov property. Taking $y=b$ proves (2.17). We can therefore focus only on the computation of $L_{\uparrow}(w;\,b,x,b)$ for $b< x\leq a$ , in which the starting position is set to b.

If $x\in (b, b/p)$ , then $L_{\uparrow}(w;\,b,x,b) = {\mathbb E}_b\big[\textrm{e}^{-w\tau_{\uparrow x}(b)}{\textbf{1}}_{\{\tau_{\uparrow a(b)} < \tau_{\downarrow b}(b)\}}\big] = \textrm{e}^{-w (x-b)} {\mathbb P}(T > x-b) = \textrm{e}^{-(w+\lambda) (x-b)}$ , which proves (2.18).

Next, assume that $x\in (b/p^{k}, b/p^{k+1}]$ for $k\geq 1$ . Then, from arguments similar to those used in the proof of (2.17),

(2.20) \begin{equation} L_{\uparrow}(w;\,b,x,b) = L_{\uparrow}(w;\,b,b/p^k,b) L_{\uparrow}(w;\,b/p^k,x,b). \end{equation}

Now note that

\begin{multline*} \tau_{\uparrow x}(b/p^k){\textbf{1}}_{\{\tau_{\uparrow x}(b/p^k) < \tau_{\downarrow b}(b/p^k)\}} \stackrel{\textrm{d}}{=} (x-b/p^k) {\textbf{1}}_{\{T > x-b/p^k\}} \\ + \left(T + \tau_{\uparrow x}(p(b/p^k+T)){\textbf{1}}_{\{\tau_{\uparrow x}(p(b/p^k+T)) < \tau_{\downarrow b}(p(b/p^k+T))\}}\right){\textbf{1}}_{\{T \le x-b/p^k\}}. \end{multline*}

Taking Laplace transforms, we obtain

\begin{equation*}L_{\uparrow}(w;\,b/p^k,x,b) = \textrm{e}^{-(w+\lambda)(x-b/p^k)} + \int_0^{x-b/p^k} \lambda \textrm{e}^{-(w+\lambda)t} L_{\uparrow}(w;\,b/p^{k-1}+pt,x,b) \, \textrm{d}t.\end{equation*}

We can now apply (2.20) to rewrite the above as

\begin{equation*}\frac{L_{\uparrow}(w;\,b,x,b)}{L_{\uparrow}(w;\,b,b/p^k,b)} = \textrm{e}^{-(w+\lambda) (x-b/p^k)} + \int_0^{x-b/p^k} \lambda \textrm{e}^{-(\lambda+w ) t} \frac{L_{\uparrow}(w;\,b,x,b)}{L_{\uparrow}(w;\,b,b/p^{k-1} + pt,b)}\,\textrm{d}t,\end{equation*}

which leads to (2.19).

Remark 2.3. (Relation of Theorems 2.1 and 2.3.) The result of Theorem 2.1 might be retrieved from Theorem 2.3 if we use the fact that $\lim_{b \to 0} L_{\uparrow}(w;\,x,a,b) = Z_{\uparrow}(w;\,x,a)$ when a and x are fixed. In particular, note that (2.19) looks very similar to, for instance, the equation just above (2.4). However, it is hard to perform this limit from the result in Theorem 2.3 since, when we let b tend to 0 while leaving x fixed, $b/p^k$ tends to zero as well. This makes it difficult to explicitly compute the limit. Thus, while theoretically the scale function $Z_\uparrow(w;\,x,a)$ can be directly related to $L_{\uparrow}(w;\,x,a,b)$ , in practice this is difficult.

Remark 2.4. (Convenient rewrite of (2.19).) Note also that, if we introduce

\begin{equation*}K_{\uparrow}(w;\,a,x,b) = \frac{1}{L_{\uparrow}(w;\,a,x,b)},\end{equation*}

then (2.19), rewritten for $K_{\uparrow}$ , for $x\in (b/p^{k}, b/p^{k+1}]$ and all $k\geq 1$ , simplifies to

\begin{align*} K_{\uparrow}(w;\,b,x,b) & = \textrm{e}^{(w+\lambda) (x-b/p^k)} K_{\uparrow}(w;\,b,b/p^k,b) \\ & \quad - \textrm{e}^{(w+\lambda) (x-b/p^k)} \int_{0}^{x-b/p^k} \lambda \textrm{e}^{-(\lambda+w )t}K_{\uparrow}(w;\,b,b/p^{k-1}+pt,b)\,\textrm{d}t \\ & = \textrm{e}^{(w+\lambda) (x-b/p^k)} K_{\uparrow}(w;\,b,b/p^k,b) \\ & \quad - \textrm{e}^{(w+\lambda) (x-b/p^k)} \int_{b/p^{k-1}}^{px} \frac{\lambda}{p} \textrm{e}^{-(\lambda+w )(s/p-b/p^k)}K_{\uparrow}(w;\,b,s,b)\,\textrm{d}s \\ & = \textrm{e}^{(w+\lambda) (x-b/p^k)} K_{\uparrow}(w;\,b,b/p^k,b) \\ & \quad - \textrm{e}^{(w+\lambda)x} \int_{b/p^{k-1}}^{px} \frac{\lambda}{p} \textrm{e}^{-(\lambda+w)s/p}K_{\uparrow}(w;\,b,s,b)\,\textrm{d}s, \end{align*}

where $K_{\uparrow}(w;\,b,x,b) = \textrm{e}^{(w+\lambda) (x-b)}$ for $x \in (b,b/p]$ .

This also allows us to perform the iteration explicitly for more values of x. Indeed, for $k=1$ and thus $x\in (b/p, b/p^{2}]$ , the recursion yields

(2.21) \begin{align} & K_{\uparrow}(w;\,b,x,b) \nonumber\\ & = \textrm{e}^{(w+\lambda) (x-b/p)} K_{\uparrow}(w;\,b,b/p,b) - \textrm{e}^{(w+\lambda)x} \int_{b}^{px} \frac{\lambda}{p} \textrm{e}^{-(\lambda+w)s/p}K_{\uparrow}(w;\,b,s,b)\,\textrm{d}s \nonumber\\ & = \textrm{e}^{(w+\lambda)(x-b/p)}\textrm{e}^{(w+\lambda) (b/p-b)}-\textrm{e}^{(w+\lambda) x} \int_b^{xp} \frac{\lambda}{p} \textrm{e}^{-(\lambda+w )s/p} \textrm{e}^{(w+\lambda) (s-b)}\,\textrm{d}s\nonumber\\ & = \textrm{e}^{(w+\lambda)(x-b)}-\textrm{e}^{(w+\lambda) (x-b)} \int_b^{xp} \frac{\lambda}{p} \textrm{e}^{-(w+\lambda)s (1/p-1)}\,\textrm{d}s\nonumber\\ & = \textrm{e}^{(w+\lambda)(x-b)}-\textrm{e}^{(w+\lambda)(x-b)} \frac{\lambda}{(1-p)(w+\lambda)}\big[\textrm{e}^{-(w+\lambda)b(1/p-1)}-\textrm{e}^{-(w+\lambda)x (1-p)}\big]. \end{align}

We extend this one iteration further, to obtain, for $k=2$ and thus $x\in (b/p^{2}, b/p^{3}]$ ,

\begin{align*} K_{\uparrow}(w;\,b,x,b) & = \textrm{e}^{(w+\lambda) (x-b/p^2)} K_{\uparrow}(w;\,b,b/p^2,b) \\ & \quad - \textrm{e}^{(w+\lambda) x} \int_{b/p}^{px} \frac{\lambda}{p} \textrm{e}^{-(\lambda+w ) s/p}K_{\uparrow}(w;\,b,s,b)\,\textrm{d}s. \end{align*}

After this, we can substitute (2.21) to compute $K_{\uparrow}(w;\,b,x,b)$ for $x\in (b/p^{2}, b/p^{3}]$ . By iteration, it is easily seen that there exist coefficients $a_{n,k}=a_{n,k}(p,w,\lambda)$ such that, for $x\in (b/p^{k}, b/p^{k+1}]$ and all $k\geq 1$ ,

(2.22) \begin{equation}{ K_{\uparrow}(w;\,b,x,b)=\sum_{n=0}^k a_{n,k} \textrm{e}^{(w+\lambda)p^n x}. }\end{equation}

In turn, this representation looks similar to (2.7), but here we have no exact formula for the coefficients $a_{n,k}$ , whereas in (2.7) we do.
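The recursion of Theorem 2.3, in the $K_{\uparrow}$ -form of this remark, can also be evaluated numerically and compared against a direct Monte Carlo simulation of the additive-increase and multiplicative-decrease dynamics. The following is only a sketch: the grid size, sample count, tolerance, and all parameter values are our own illustrative choices, and the quadrature is a plain trapezoidal rule intended for small $k$ only.

```python
import math
import random

def L_up(w, b, x, lam, p, n_grid=400):
    """Evaluate L_up(w; b, x, b) via (2.18)-(2.19), written in terms of
    K = 1/L as in Remark 2.4 (trapezoidal quadrature; small k only)."""
    def K(y):
        if y <= b / p * (1 + 1e-12):        # base case (2.18): y in (b, b/p]
            return math.exp((w + lam) * (y - b))
        k = 1                               # find k with y in (b/p^k, b/p^(k+1)]
        while y > b / p ** (k + 1):
            k += 1
        lo, hi = b / p ** (k - 1), p * y    # integration range in s
        h = (hi - lo) / n_grid
        integral = 0.0
        for i in range(n_grid + 1):
            s = lo + i * h
            weight = 0.5 if i in (0, n_grid) else 1.0
            integral += weight * (lam / p) * math.exp(-(w + lam) * s / p) * K(s)
        integral *= h
        return (math.exp((w + lam) * (y - b / p ** k)) * K(b / p ** k)
                - math.exp((w + lam) * y) * integral)
    return 1.0 / K(x)

def L_up_mc(w, b, x, lam, p, n=200_000, seed=1):
    """Monte Carlo estimate of E_b[exp(-w tau_up_x); up-crossing of x
    happens before down-crossing of b], starting from b."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        pos, t = b, 0.0
        while True:
            gap = rng.expovariate(lam)
            if pos + gap >= x:              # creeps up to x before the next jump
                total += math.exp(-w * (t + (x - pos)))
                break
            t += gap
            pos = p * (pos + gap)           # multiplicative downward jump
            if pos < b:                     # down-crossed b: contributes 0
                break
    return total / n
```

For $x$ in the base interval the recursion reproduces (2.18) exactly, and for $x\in(b/p, b/p^2]$ the quadrature value agrees with the simulation up to Monte Carlo error.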

Remark 2.5. (Continuity and differentiability of $a\mapsto L_{\uparrow}(w;\,x,a,b)$ .) The above formulas are also convenient for studying the continuity and differentiability properties of $x\mapsto L_{\uparrow}(w;\,b,x,b)$ . Indeed, by $K_{\uparrow}(w;\,x,a,b) = 1/L_{\uparrow}(w;\,x,a,b)$ and (2.22), $x\mapsto L_{\uparrow}(w;\,b,x,b)$ is continuous everywhere, while it is continuously differentiable at every point except possibly $x=b/p^k$ for $k\geq 0$ . At these countably many points, however, left- and right-derivatives do exist. Through (2.17), this can be extended to continuity and almost-everywhere differentiability of $a\mapsto L_{\uparrow}(w;\,x,a,b)$ for general x and b.

Theorem 2.4. (Downward two-sided exit problem.) For $x\in [b,a)$ , the Laplace–Stieltjes transform

(2.23) \begin{equation} L_{\downarrow}(w;\,x,a,b)={\mathbb E}_x\big[\textrm{e}^{-w\tau_{\downarrow b}(x)}{\textbf{1}}_{\{\tau_{\uparrow a}(x)>\tau_{\downarrow b}(x) \}}\big] \end{equation}

of the downward two-sided exit time $\tau_{\downarrow b}(x)$ of the additive-increase and multiplicative-decrease process $( X_t)_{t\geq 0}$ equals

(2.24) \begin{equation} L_{\downarrow}(w;\,x,a,b) = Z_\downarrow(w;\,x,b)-L_{\uparrow}(w;\,x,a,b)Z_\downarrow(w;\,a,b). \end{equation}

Proof. Clearly, for all $x\in [b,a)$ , $\tau_{\downarrow b}(x){\textbf{1}}_{\{\tau_{\uparrow a}(x) < \tau_{\downarrow b}(x)\}} \stackrel{\textrm{d}}{=} \tau_{\uparrow a}(x){\textbf{1}}_{\{\tau_{\uparrow a}(x) < \tau_{\downarrow b}(x)\}} +\tau_{\downarrow b}(a)$ , and note again that the random variables $\tau_{\uparrow a}(x){\textbf{1}}_{\{\tau_{\uparrow a}(x) < \tau_{\downarrow b}(x)\}}$ and $\tau_{\downarrow b}(a)$ on the right-hand side are independent. Thus, we have ${\mathbb E}_x\big[\textrm{e}^{-w\tau_{\downarrow b}(x)}{\textbf{1}}_{\{\tau_{\uparrow a}(x) < \tau_{\downarrow b}(x)\}} \big] = {\mathbb E}_x\big[\textrm{e}^{-w\tau_{\uparrow a}(x)}{\textbf{1}}_{\{\tau_{\uparrow a}(x) < \tau_{\downarrow b}(x)\}} \big]$ ${\mathbb E}_x\big[\textrm{e}^{-w\tau_{\downarrow b}(a)} \big]$ . Noting that

\begin{equation*}{\mathbb E}_x\big[\textrm{e}^{-w\tau_{\downarrow b}(x)}\big]={\mathbb E}_x\big[\textrm{e}^{-w\tau_{\downarrow b}(x)} {\textbf{1}}_{\{\tau_{\uparrow a}(x) < \tau_{\downarrow b}(x)\}}\big]+{\mathbb E}_x\big[\textrm{e}^{-w\tau_{\downarrow b}(x)} {\textbf{1}}_{\{\tau_{\uparrow a}(x) > \tau_{\downarrow b}(x)\}}\big],\end{equation*}

and using Theorems 2.2 and 2.3, completes the proof of the theorem.

Remark 2.6. (Relation of Theorems 2.3 and 2.4 and number of scale functions.) Note that $\tau_{\downarrow b}(a) \ge T_1 + \cdots + T_{\log_{1/p} (a/b)}$ for independent $T_1, T_2,\ldots\sim {\text{Exp}}(\lambda)$ , and hence $\tau_{\downarrow b}(a) \to \infty$ almost surely as $a \to \infty$ while b remains fixed. As $L_{\uparrow}(w;\,x,a,b) \le 1$ for any set of parameters, if we let $a \to \infty$ with a fixed b in (2.24), we obtain that the second term vanishes, so that $\lim_{a \to \infty} L_{\downarrow}(w;\,x,a,b)=\lim_{a \to \infty}{\mathbb E}_x\big[\textrm{e}^{-w\tau_{\downarrow b}(x)} {\textbf{1}}_{\{\tau_{\uparrow a}(x)>\tau_{\downarrow b}(x) \}}\big] = Z_\downarrow(w;\,x,b)$ , as can be expected from Theorem 2.2. Thus, the scale function $Z_\downarrow(w;\,x,b)$ can be directly related to $L_{\uparrow}(w;\,x,a,b)$ . Since (2.24) in Theorem 2.4 also identifies $L_{\downarrow}(w;\,x,a,b)$ in terms of $L_{\uparrow}$ and $Z_\downarrow$ , we conclude that in total we need the two scale functions $Z_{\uparrow}$ and $L_{\uparrow}$ , rather than four.

3. Exit identities for reflected processes

We now consider two types of reflected versions of the process X:

  • For the first, we reflect at a, i.e. the stochastic process grows linearly over time until it reaches a, then it stays there until the next jump occurs, and at the jump time, the process jumps from a to ap. We denote this process by $X^{\overline{a}}=(X_t^{\overline{a}})_{t\geq0}$ .

  • For the second, we reflect at b, i.e. the process grows linearly and whenever, due to a jump, the process jumps over the downward level b it is put back to b and it continues its evolution in time according to the background process X from level b. We denote this process by $X^{\underline{b}}=(X_t^{\underline{b}})_{t\geq0}$ .

In this section we analyse the first passage times of these two reflected processes defined as $\tau_{\downarrow c}^{\overline{a}}(x)=\inf\{t\geq 0\,:\, X_t^{\overline{a}}< c\mid X_0=x\}$ and $\tau_{\uparrow c}^{\underline{b}}(x)=\inf\{t\geq 0\,:\, X_t^{\underline{b}}> c\mid X_0=x\}$ .
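These passage times are amenable to direct simulation, which is convenient for sanity-checking the transforms derived below. The following sketch (all parameter values are our own choices) estimates the Laplace–Stieltjes transform of $\tau_{\downarrow c}^{\overline{a}}(x)$ ; note that in the degenerate regime $pa<c$ every jump from a position in $[c,a]$ lands below c, so the passage time is simply the first jump epoch, an $\text{Exp}(\lambda)$ random variable with transform $\lambda/(\lambda+w)$ .

```python
import math
import random

def lst_tau_down_reflected(w, x, a, c, lam, p, n=100_000, seed=7):
    """Monte Carlo estimate of E_x[exp(-w tau_down_c)] for the process
    reflected at the upper level a, started from x in [c, a)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        pos, t = x, 0.0
        while True:
            gap = rng.expovariate(lam)
            pos = min(pos + gap, a)   # linear growth, capped at the barrier a
            t += gap
            pos *= p                  # downward jump at the Poisson epoch
            if pos < c:               # down-crossed c at the jump epoch t
                total += math.exp(-w * t)
                break
    return total / n
```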

Finally, in this paper we identify the Laplace transforms of the first passage time $\tau_c(x)=\inf\{t\geq 0\,:\, Y_t> c\mid X_0=x\}$ of the process $Y_t=\overline{X}_t-X_t$ reflected at a running supremum $\overline{X}_t=\sup_{s\leq t} X_s \vee \overline{X}_0$ , as well as the first passage time $\widehat{\tau}_c(x)=\inf\{t\geq 0 \,:\, \widehat{Y}_t> c\mid X_0=x\}$ of the process $\widehat{Y}_t=X_t-\underline{X}_t$ reflected at its running infimum $\underline{X}_t=\inf_{s\leq t} X_s \wedge \underline{X}_0$ .

Note that the process Y stays at 0 until the first jump epoch T of X. Then, right after the first jump it equals $Y_T=(1-p)X_{T-}$ , and hence the jump of Y is positive ( $\Delta Y_T= Y_T>0$ ). Later, $t\mapsto Y_t$ decreases until the next jump.

The process $\widehat{Y}_t$ evolves in a different way. At the beginning, it equals $t-x$ until the first jump. Then, if at the epoch T of the first jump of X we have $X_T\geq x$ , then the process $\widehat{Y}$ evolves without any changes (except a shift by the initial position x). If $X_T<x$ instead, then $\widehat{Y}_T =0$ . In this case, our new initial position equals $X_T$ and the process $\widehat{Y}$ evolves as before.
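The drawdown dynamics just described translate directly into a simulation sketch (parameter values below are arbitrary choices of ours). One exact check is available: when $\overline{X}_0=X_0=x$ and $(1-p)x>c$ , the very first jump already produces a drawdown $(1-p)X_{T-}>c$ , so that $\tau_c(x)=T\sim\text{Exp}(\lambda)$ and the transform reduces to $\lambda/(\lambda+w)$ .

```python
import math
import random

def lst_drawdown_passage(w, x, c, lam, p, n=100_000, seed=11):
    """Monte Carlo estimate of E_x[exp(-w tau_c(x))], where tau_c is the
    first time the drawdown Y = sup X - X exceeds c (sup started at x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        pos, sup, t = x, x, 0.0
        while True:
            gap = rng.expovariate(lam)
            pos += gap                # X creeps up; the drawdown decreases ...
            t += gap
            sup = max(sup, pos)       # ... to 0 once X passes its old supremum
            pos *= p                  # downward jump: the drawdown increases
            if sup - pos > c:         # Y can only exceed c at a jump epoch
                total += math.exp(-w * t)
                break
    return total / n
```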

We now present results concerning the exit problems for the reflected processes.

Theorem 3.1. (First passage time for the reflected process at the upper level.) For $x\in [c,a)$ , the Laplace–Stieltjes transform of the first passage time $\tau_{\downarrow c}^{\bar a}(x)$ for the reflected process at the upper level is given by

(3.1) \begin{align} {\mathbb E}_x\big[\textrm{e}^{-w\tau_{\downarrow c}^{\bar a}(x)}\big] = L_{\downarrow}&(w;\,x,a,c) \nonumber \\ & + L_{\uparrow}(w;\,x,a,c)\frac{\lambda}{w+\lambda } \bigg(\frac{L_{\downarrow}(w;\,pa,a,c)(w+\lambda)}{w+\lambda-L_{\uparrow}(w;\,pa,a,c) \lambda}\textbf{1}_{\{pa>c\}}+\textbf{1}_{\{pa\leq c\}}\bigg).\end{align}

Proof. To prove (3.1) note that, by the Markov property, for $a>c/p$ ,

(3.2) \begin{align} {\mathbb E}_x[\textrm{e}^{-w\tau_{\downarrow c}^{\bar a} (x)}] & = {\mathbb E}_x[\textrm{e}^{-w\tau_{\downarrow c}(x)}\textbf{1}_{\{\tau_{\downarrow c}(x)<\tau_{\uparrow a}(x)\}}] \nonumber \\ & \quad + {\mathbb E}_x[\textrm{e}^{-w\tau_{\uparrow a}(x)}\textbf{1}_{\{\tau_{\uparrow a}(x)<\tau_{\downarrow c}(x)\}}]\frac{\lambda}{w+\lambda}{\mathbb E}_{pa}[\textrm{e}^{-w\tau_{\downarrow c}^{\bar a}(pa)}]. \end{align}

Using (2.23) implies that ${\mathbb E}_x\big[\textrm{e}^{-w\tau_{\downarrow c}(x)} \textbf{1}_{\{\tau_{\downarrow c}(x)<\tau_{\uparrow a}(x)\}}\big]=L_{\downarrow}(w;\,x,a,c)$ , i.e. the first term in (3.2) is equal to the first part of (3.1).

To investigate the second term in (3.1), we commence by noting that, by (2.16), ${\mathbb E}_x\big[\textrm{e}^{-w\tau_{\uparrow a}(x)} \textbf{1}_{\{\tau_{\uparrow a}(x)<\tau_{\downarrow c}(x)\}}\big]=L_{\uparrow}(w;\,x,a,c)$ , so that (3.2) becomes ${\mathbb E}_x\big[\textrm{e}^{-w\tau_{\downarrow c}^{\bar a} (x)}\big]=L_{\downarrow}(w;\,x,a,c) +L_{\uparrow}(w;\,x,a,c)[{\lambda}/({w+\lambda})]{\mathbb E}_{pa}[\textrm{e}^{-w\tau_{\downarrow c}^{\bar a}(pa)}]$ . Taking $x=pa$ , we obtain a linear equation for ${\mathbb E}_{pa}\big[\textrm{e}^{-w\tau_{\downarrow c}^{\bar a}(pa)}\big]$ that can be solved as

\begin{equation*}{\mathbb E}_{pa}\big[\textrm{e}^{-w\tau_{\downarrow c}^{\bar a}(pa)}\big]= \frac{L_{\downarrow}(w;\,pa,a,c)}{1-L_{\uparrow}(w;\,pa,a,c)\frac{\lambda}{w+\lambda}}.\end{equation*}

This proves (3.1) for $a>c/p$ .

For $a\leq c/p$ , we instead note that ${\mathbb E}_{pa}\big[\textrm{e}^{-w\tau_{\downarrow c}^{\bar a}(pa)}\big]=1$ , so that now (3.2) becomes

\begin{equation*} {\mathbb E}_x\big[\textrm{e}^{-w\tau_{\downarrow c}^{\bar a}(x)}\big] ={\mathbb E}_x\big[\textrm{e}^{-w\tau_{\downarrow c}(x)}\textbf{1}_{\{\tau_{\downarrow c}(x)<\tau_{\uparrow a}(x)\}}\big] +{\mathbb E}_x\big[\textrm{e}^{-w\tau_{\uparrow a}(x)}\textbf{1}_{\{\tau_{\uparrow a}(x)<\tau_{\downarrow c}(x)\}}\big] \frac{\lambda}{w+\lambda}. \end{equation*}

Again using how the expectations can be translated into $L_{\downarrow}(w;\,x,a,c)$ and $L_{\uparrow}(w;\,x,a,c)$ , this completes the proof.

Theorem 3.2. (First passage time for the reflected process at the lower level.) For $x\in [b,c)$ , the Laplace–Stieltjes transform of the first passage time $\tau_{\uparrow c}^{\underline{b}}(x)$ for the reflected process at the lower level is given by

\begin{equation*} {\mathbb E}_x\big[\textrm{e}^{-w\tau_{\uparrow c}^{\underline{b}}(x)}\big]=L_{\uparrow}(w;\,x,c,b)+L_{\downarrow}(w;\,x,c,b)\frac{L_{\uparrow}(w;\,b,c,b)}{1-L_{\downarrow}(w;\,b,c,b)}. \end{equation*}

Proof. The proof is similar to the proof of Theorem 3.1, and our exposition is brief. Indeed, starting from x, either we go to level c before visiting b or the other way around. In the latter case we restart from level b. Hence, (3.2) now becomes ${\mathbb E}_x\big[\textrm{e}^{-w\tau_{\uparrow c}^{\underline{b}}(x)}\big]= {\mathbb E}_x\big[\textrm{e}^{-w\tau_{\uparrow c}(x)}\textbf{1}_{\{\tau_{\uparrow c}(x)<\tau_{\downarrow b}(x)\}}\big] +{\mathbb E}_x\big[\textrm{e}^{-w\tau_{\downarrow b}(x)}\textbf{1}_{\{\tau_{\downarrow b}(x)<\tau_{\uparrow c}(x)\}}\big] {\mathbb E}_b\big[\textrm{e}^{-w\tau_{\uparrow c}^{\underline{b}}(b)}\big]$ , which in turn can be written as ${\mathbb E}_x\big[\textrm{e}^{-w\tau_{\uparrow c}^{\underline{b}}(x)}\big]= L_{\uparrow}(w;\,x,c,b)+L_{\downarrow}(w;\,x,c,b)\,{\mathbb E}_b\big[\textrm{e}^{-w\tau_{\uparrow c}^{\underline{b}}(b)}\big]$ . Now taking $x=b$ in the above and using it to calculate ${\mathbb E}_b\big[\textrm{e}^{-w\tau_{\uparrow c}^{\underline{b}}(b)}\big]$ completes the proof.

The statements for the process X reflected at the running supremum and infimum are much more complex. They are much more important, though, as they describe the behaviour of so-called drawdown and drawup processes Y and $\widehat{Y}$ . We start with the exit times when the process is reflected at the supremum. In the statement, we write $\partial L_{\uparrow}(w;\,z,y,u)=(\partial_+ L_{\uparrow}(w;\,z,v,u)/\partial v)_{v=y}$ for the partial right derivative $\partial_+$ and similarly for $L_{\downarrow}$ , where these derivatives exist due to Remark 2.5, identity (2.24), and the definition of the function $Z_\downarrow(w;\,x,b)$ given in (2.7), and are continuous except at countably many points.

Theorem 3.3. (First passage time for the reflected process at running supremum.) Assume that $\overline{X}_0=X_0=x$ . Then the Laplace–Stieltjes transform of the exit time $\tau_c(x)$ of the process Y, the version of X reflected at the running supremum, equals

(3.3) \begin{align} & {\mathbb E}_x\big[\textrm{e}^{-w \tau_c(x)}\big]\\[4pt]& = \int_x^{+\infty}\frac{\partial L_{\uparrow}(0;\,w,w,w-c)}{L_{\uparrow}(0;\,w,w,w-c)} \exp\!\left\{-\int_x^w \frac{\partial L_{\uparrow}(w;\,z,z,z-c)}{L_{\uparrow}(w;\,z,z,z-c)} \,\textrm{d}z\right\} \frac{ \partial L_{\downarrow}(w;\,w,w,w-c)}{\partial L_{\downarrow}(0;\,w,w,w-c)} \, \textrm{d}w. \nonumber \end{align}

Remark 3.1. (More general initial positions.) In the above theorem, we can consider a more general initial position of the reflected processes than zero. For example, to get (3.3) for $\overline{X}_0=z\geq x =X_0$ , we can instead consider

\begin{align*} {\mathbb E}_x\big[\textrm{e}^{-w \tau_c(x)} \mid X_0=x, \overline{X}_0=z\big] & = {\mathbb E}_x\big[\textrm{e}^{-w \tau_{\downarrow z-c}(x)}\textbf{1}_{\{\tau_{\downarrow z-c}(x)<\tau_{\uparrow z}(x)\}}\big] \\ & \quad + {\mathbb E}_x\big[\textrm{e}^{-w\tau_{\uparrow z}(x)}\textbf{1}_{\{\tau_{\uparrow z}(x)<\tau_{\downarrow z-c}(x)\}}\big] {\mathbb E}_z\big[\textrm{e}^{-w \tau_c(z)} \mid X_0=z, \overline{X}_0=z\big] \end{align*}

and apply Theorems 2.3 and 2.4.

Remark 3.2. (Alternative approach to Theorem 3.3.) The first passage time for the reflected process at running maximum considered in Theorem 3.3 could be analysed using [Reference Landriault, Li and Zhang35, Theorem 3.1 and Example 3.5] in terms of the solution of some integral equation. We decided to do it in a more explicit way.

Proof of Theorem 3.3. To prove (3.3), we adapt the argument of [Reference Lehoczky38], developed for diffusion processes. In the first step, we find the law of $\overline{X}_{\tau_c(x)}$ . To find ${\mathbb P}_x(\overline{X}_{\tau_c(x)}>w)$ we partition the interval [x, w] into n subintervals $[s_{i}^n, s_{i+1}^n]$ ( $i=0,1,\ldots, n-1$ ) such that $m_n=\max_i\! (s_{i+1}^n-s_i^n)\to 0$ as $n\rightarrow +\infty$ . We approximate ${\mathbb P}_x(\overline{X}_{\tau_c(x)}>w)$ by ${\mathbb P}(A_n)$ for $A_n=\bigcap_{i=0}^{n-1} \{X$ hits $s_{i+1}^n$ before X jumps below $s^n_{i}-c\}$ . To do this, we have to prove that this approximation does not depend on the chosen partition; here we use the fact that the process X crosses new levels upward in a continuous way, so that ${\mathbb P}_x(\overline{X}_{\tau_c(x)}>w)=\lim_{n\to+\infty}{\mathbb P}(A_n)$ .

Then, by the Markov property and Theorem 2.3,

(3.4) \begin{align} & {\mathbb P}_x(\overline{X}_{\tau_c(x)}>w)=\lim_{n\to+\infty}{\mathbb P}(A_n) \nonumber \\ & = \exp\Bigg\{\lim_{n\to+\infty} \sum_{i=0}^{n-1} \log L_{\uparrow}(0;\,s^n_i,s^n_{i+1},s^n_i-c)\Bigg\}\nonumber\\ & = \exp\Bigg\{-\lim_{n\to+\infty} \sum_{i=0}^{n-1} (s^n_{i+1}-s^n_{i})\frac{1}{s^n_{i+1}-s^n_{i}}\log\bigg(1- \frac{L_{\uparrow}(0;\,s^n_i,s^n_{i+1},s^n_i-c)-1}{L_{\uparrow}(0;\,s^n_i,s^n_{i+1},s^n_i-c)}\bigg)\Bigg\} \nonumber\\ & = \exp\bigg\{{-}\int_x^w \frac{\partial L_{\uparrow}(0;\,z,z,z-c)}{L_{\uparrow}(0;\,z,z,z-c)}\;\textrm{d}z \bigg\},\end{align}

where we have used the fact that $\partial L_{\uparrow}(0;\,z,z,z-c)$ is a continuous function of z except possibly at countably many points, so that the above Riemann integral is well defined. As a result,

(3.5) \begin{equation}{ {\mathbb P}(\overline{X}_{\tau_c(x)}\in \textrm{d} w)=\exp\bigg\{{-}\int_x^w \frac{\partial L_{\uparrow}(0;\,z,z,z-c)}{L_{\uparrow}(0;\,z,z,z-c)}\,\textrm{d}z\bigg\} \frac{\partial L_{\uparrow}(0;\,w,w,w-c)}{L_{\uparrow}(0;\,w,w,w-c)}\,\textrm{d} w.}\end{equation}

Define the sequence of stopping times $\varrho^n_k=\inf\{t\geq 0\,:\, X_{\Xi^n_{k-1}+t}-X_{\Xi^n_{k-1}}=s^n_k-s^n_{k-1}$ or $-c\}$ for $\Xi_k^n=\sum_{i=1}^k\varrho^n_i$ . Then

\begin{align*} {\mathbb E}_x \big[ \textrm{e}^{-w \tau_c(x)} \mid \overline{X}_{\tau_c(x)}=w\big] & = \lim_{n\to+\infty}\prod_{i=1}^n{\mathbb E}_x\big[ \textrm{e}^{-w \varrho^n_i} \mid X_{\Xi^n_{i-1}}=s^n_{i-1}, X_{\Xi^n_{i}}=s^n_{i}\big] \\ & \qquad \qquad \quad \times {\mathbb E}_x \big[ \textrm{e}^{-w \varrho^n_{n+1}} \mid X_{\Xi^n_{n}}=w, X_{\Xi^n_{n+1}}\leq w-c \big],\end{align*}

with $s_{n+1}^n>w$ such that $s^n_{n+1}-w$ tends to 0 as $n\rightarrow+\infty$ . By Theorems 2.3 and 2.4,

\begin{equation*} {\mathbb E}_x\big[ \textrm{e}^{-w \varrho^n_i}\mid X_{\Xi^n_{i-1}}=s^n_{i-1}, X_{\Xi^n_{i}}=s^n_{i}\big] =\frac{L_{\uparrow}(w;\,s^n_{i-1},s^n_{i},s^n_{i-1}-c)}{L_{\uparrow}(0;\,s^n_{i-1},s^n_{i},s^n_{i-1}-c)} \end{equation*}

and

\begin{align*} {\mathbb E}_x \big[ \textrm{e}^{-w \varrho^n_{n+1}} \mid X_{\Xi^n_{n}}=w, X_{\Xi^n_{n+1}}\leq w-c \big] & = \frac{L_{\downarrow}(w;\,w,s^n_{n+1},w-c)}{L_{\downarrow}(0;\,w,s^n_{n+1},w-c)} \\ & = \frac{L_{\downarrow}(w;\,w,s^n_{n+1},w-c)/(s_{n+1}^n-w)}{L_{\downarrow}(0;\,w,s^n_{n+1},w-c)/(s_{n+1}^n-w)}.\end{align*}

Hence, applying the same limiting arguments as in (3.4), we derive

(3.6) \begin{align} {\mathbb E}_x \big[ \textrm{e}^{-w \tau_c(x)} \mid \overline{X}_{\tau_c(x)} = w\big] & = \exp\bigg\{-\int_x^w \frac{\partial L_{\uparrow}(w;\,z,z,z-c)}{L_{\uparrow}(w;\,z,z,z-c)}\,\textrm{d} z\bigg\} \nonumber \\ & \quad \times \exp\bigg\{\int_x^w \frac{\partial L_{\uparrow}(0;\,z,z,z-c)}{L_{\uparrow}(0;\,z,z,z-c)}\,\textrm{d} z\bigg\} \frac{\partial L_{\downarrow}(w;\,w,w,w-c)}{\partial L_{\downarrow}(0;\,w,w,w-c)}. \end{align}

Using the fact that ${\mathbb E}_x \big[ \textrm{e}^{-w \tau_c(x)}\big]=\int_x^{+\infty} {\mathbb E}_x \big[ \textrm{e}^{-w \tau_c(x)}\mid \overline{X}_{\tau_c(x)}=w\big]{\mathbb P}(\overline{X}_{\tau_c(x)}\in \textrm{d} w)$ , together with (3.5) and (3.6), completes the proof.

To analyse the reflection at the running infimum we use martingale theory for the first time. We first set the stage. Let a(w, c, u) be a solution of the equation

(3.7) \begin{equation} Z_\downarrow(w;\,u+c,u)+L_{\uparrow}(w;\,u+c,a(w,c,u),u)=1.\end{equation}

We first argue that the solution to (3.7) always exists and is unique. Note that the above equation is equivalent to

\begin{equation*} {\mathbb E}_{u+c} \big[\textrm{e}^{-w \tau_{\downarrow u}(u+c)}\big] + {\mathbb E}_{u+c} \big[\textrm{e}^{-w \tau_{\uparrow a(w,c,u)}(u+c)} {\textbf{1}}_{\{\tau_{\uparrow a(w,c,u)} (u+c) < \tau_{\downarrow u}(u+c)\}}\big] = 1.\end{equation*}

By Theorem 2.3, $a\mapsto L_{\uparrow}(w;\,u+c,a,u)$ is continuous. Moreover, $L_{\uparrow}(w;\,u+c,a,u)={\mathbb E}_{u+c} \big[\textrm{e}^{-w \tau_{\uparrow a}(u+c)}{\textbf{1}}_{\{\tau_{\uparrow a} (u+c) < \tau_{\downarrow u}(u+c)\}}\big]$ tends to 1 as $a\downarrow u+c$ , and to 0 as $a\uparrow+\infty$ . Since $Z_\downarrow(w;\,u+c,u)<1$ , the solution of (3.7) indeed always exists. What is more, ${\mathbb E}_{u+c} \big[\textrm{e}^{-w \tau_{\uparrow a}({u+c})}{\textbf{1}}_{\{\tau_{\uparrow a} ({u+c}) < \tau_{\downarrow u}({u+c})\}}\big]$ is monotonic in a, which gives the uniqueness of the solution of (3.7).

Theorem 3.4. (First passage time for the reflected process at the running infimum.) The exit time $\widehat\tau_c(x)$ of the process $\widehat{Y}$ that is the version of X reflected at the running infimum satisfies

  1. (i) $\widehat{\tau}_c(x)=\tau_{\uparrow c}(x)$ when $\underline{X}_0=0$ ;

2. (ii) for $X_0 =x>0$ and $0<\underline{X}_0 =u\leq x$ , the Laplace–Stieltjes transform of $\widehat\tau_c(x)$ equals ${\mathbb E}_x\big[\textrm{e}^{-w\widehat{\tau}_c(x)}\big] = Z_{\downarrow}(w;\,x,u)+L_{\uparrow}(w;\,x,a(w,c,u),u)$ , where a(w, c, u) solves (3.7).

Proof. We follow the main idea of [Reference Pistorius44], although the proof requires substantial changes compared to the case of Lévy processes, due to the lack of space-homogeneity of our process X. Fix $0<b<x<a$ . Recall that $\tau_{a,b}(x)=\min\{\tau_{\uparrow a}(x),\tau_{\downarrow b}(x)\}$ by (2.1). By Theorem 2.3, $\textbf{1}_{\{\tau_{\uparrow a}(x)<\tau_{\downarrow b}(x)\}}=L_{\uparrow}(w;\,X_{\tau_{a,b}(x)}, a,b)$ , where we take $L_{\uparrow}(w;\,x, a,b)=0$ for $x<b$ . By the Markov property, $t\mapsto {\mathbb E}_x[\textrm{e}^{-w \tau_{a,b}(x)} L_{\uparrow}(w;\,X_{\tau_{a,b}(x)}, a,b) \mid \mathcal{F}_t]=\textrm{e}^{-w \tau_{a,b}(x)\wedge t} L_{\uparrow}(w;\,X_{\tau_{a,b}(x)\wedge t}, a,b)$ can be seen to be a local martingale.

Similarly, from Theorem 2.4, by putting $Z_\downarrow(w;\,x,b)=1$ for $x<b$ , we conclude that $\textbf{1}_{\{\tau_{\downarrow b}(x)<\tau_{\uparrow a}(x)\}}= Z_\downarrow(w;\,X_{\tau_{a,b}(x)},b)-L_{\uparrow}(w;\,X_{\tau_{a,b}(x)},a,b)Z_\downarrow(w;\,a,b)$ , and hence $t\mapsto \textrm{e}^{-w \tau_{a,b}(x)\wedge t}\left(Z_\downarrow(w;\,X_{\tau_{a,b}(x)\wedge t},b)-L_{\uparrow}(w;\,X_{\tau_{a,b}(x)\wedge t},a,b)Z_\downarrow(w;\,a,b)\right)$ is a local martingale as well.

By taking a linear combination, we observe that $t\mapsto\textrm{e}^{-w \tau_{a,b}(x)\wedge t}Z_\downarrow(w;\,X_{\tau_{a,b}(x)\wedge t},b)$ is also a local martingale. Denoting

(3.8) \begin{equation} F(w;\, x, a,b)=Z_\downarrow(w;\,x,b)+L_{\uparrow}(w;\,x, a,b), \end{equation}

this finally means that $t\mapsto \textrm{e}^{-w \tau_{a,b}(x)\wedge t}F(w;\,X_{\tau_{a,b}(x)\wedge t}, a, b)$ is a local martingale. This is the starting point of our analysis.

Since $0<b<a$ are general, we can conclude that the function $y\rightarrow F(w;\,y,a,b)$ is in the domain of the extended generator $\mathcal{A}^\dagger$ of the process X when it is exponentially killed with intensity w (denoted here by $X^\dagger_t$ ), i.e. the function $F(w;\,y,a,b)$ is in the set of functions f for which there exists a function $\mathcal{A}^\dagger f$ such that the process $f(X_t^\dagger) -\int_0^t \mathcal{A}^\dagger f(X_s^\dagger)\,\textrm{d}s$ is a local martingale. More precisely,

\begin{equation*} \textrm{e}^{-w \tau_{a,b}(x)\wedge t}F(w;\,X_{\tau_{a,b}(x)\wedge t},a, b) -\int_0^{\tau_{a,b}(x)\wedge t}\mathcal{A}^\dagger F(w;\,X_{s},a,b) \, \textrm{d}s \end{equation*}

is a local martingale. That is, for any $0<b<a$ ,

(3.9) \begin{equation} \mathcal{A}^\dagger F(w;\,y,a,b)= 0,\qquad b<y<a. \end{equation}

This completes the first step of our proof.

In the second step we use the following version of Itô’s formula (see also [Reference Kella and Yor30] adapted to our setup). For a càdlàg adapted process $t\mapsto V_t$ , which is of finite variation, and a function $(y,z)\mapsto f(y, z)$ that is continuous in y, that lies in the domain of $\mathcal{A}^\dagger$ , and that is continuously differentiable with respect to z, we then obtain that

(3.10) \begin{align} t\mapsto f(X_t^\dagger, V_t) & - \int_0^t \mathcal{A}^\dagger f(X_s^\dagger, V_s)\,\textrm{d}s - \int_0^t \frac{\partial}{\partial z} f(X_s^\dagger, z)_{|z=V_s} \, \textrm{d}V_s^c \nonumber \\ & - \sum_{s\leq t}\big(f(X_s^\dagger, V_s) - f(X_s^\dagger, V_{s-})\big) \end{align}

is a local martingale.

Without loss of generality we can assume that $p^k a(w,c,z)\neq z$ for any $k\geq 1$ . Indeed, it is enough to choose values $c_n$ satisfying this condition and then to approximate $\widehat{\tau}_c(x)$ for general $c>0$ by the monotone limit of $\widehat{\tau}_{c_n}(x)$ . We can then use (3.10) with $V_t=\underline{X}_t$ and $f(y,z)=F(w;\,y,a(w,c,z),z)$ , as this function is continuously differentiable with respect to z; this follows from the definition of the function F given in (3.8) and the formulas (2.7), (2.18), and (2.19). Note that $\sum_{s\leq t}\textrm{e}^{-ws} \big(F(w;\,X_s, a(w,c,\underline{X}_{s}), \underline{X}_{s})- F(w;\,X_s, a(w,c,\underline{X}_{s-}), \underline{X}_{s-})\big) =0$ . Indeed, either $X_s>\underline{X}_{s}$ or $X_s=\underline{X}_{s}$ (i.e. we crossed the previous infimum at time s). In the first case $\underline{X}_{s}=\underline{X}_{s-}$ , and hence $F(w;\,X_s, a(w,c,\underline{X}_{s}), \underline{X}_{s})=F(w;\,X_s, a(w,c,\underline{X}_{s-}), \underline{X}_{s-})$ . Otherwise, $X_s=\underline{X}_{s}\leq \underline{X}_{s-}$ and $F(w;\,X_s, a(w,c,\underline{X}_{s}), \underline{X}_{s})=F(w;\,X_s, a(w,c,\underline{X}_{s-}), \underline{X}_{s-})=1$ because, by (3.8), we have $F(w;\,x, a, z)=1$ for all $x\leq z$ . This follows from the observation that in this case $Z_\downarrow(w;\,x,z)=1$ by (2.6) and $L_{\uparrow}(w;\,x, a(w,c,z),z)=0$ by (2.16). Moreover, in our model the continuous part of the running infimum satisfies $\underline{X}_t^c=0$ : because of the upward drift and downward jumps of the process $X_t$ , a new infimum can only be reached by a jump. By (3.9), this gives that $t\mapsto \textrm{e}^{-wt} F(w;\,X_{t },a(w,c,\underline{X}_{t}),\underline{X}_{t})$ is a local martingale. Note that up to time $\widehat{\tau}_c(x)$ , the processes $X_t$ and $\underline{X}_t$ are bounded by $u+c$ . By the Optional Stopping Theorem, we then obtain that

(3.11) \begin{equation}{ {\mathbb E}_x \big[\textrm{e}^{-w \widehat{\tau}_c(x)}F(w;\,X_{\widehat{\tau}_c(x)},a(w,c, \underline{X}_{\widehat{\tau}_c(x)}),\underline{X}_{\widehat{\tau}_c(x)})\big]=F(w;\,x,a(w,c,u),u). }\end{equation}

We next argue that $F(w;\,X_{\widehat{\tau}_c(x)},a(w,c, \underline{X}_{\widehat{\tau}_c(x)}),\underline{X}_{\widehat{\tau}_c(x)})=1$ almost surely. Observe that $X_{\widehat{\tau}_c(x)}-\underline{X}_{\widehat{\tau}_c(x)}=\widehat{Y}_{\widehat{\tau}_c(x)}=c$ , and hence, by the definition of a(w, c, u) in (3.7), we obtain $F(w;\,X_{\widehat{\tau}_c(x)},a(w, c, \underline{X}_{\widehat{\tau}_c(x)}),\underline{X}_{\widehat{\tau}_c(x)}) =F(w;\,\underline{X}_{\widehat{\tau}_c(x)}+c,a(w, c, \underline{X}_{\widehat{\tau}_c(x)}),\underline{X}_{\widehat{\tau}_c(x)}) =1$ almost surely, as required. This completes the proof by (3.8) and (3.11).

4. Discussion and further research

In this section we discuss alternative approaches as well as possible future research.

4.1. An alternative approach to the two-sided exit problems in Theorems 2.1–2.4

Two-sided exit problems related to the exit times in (1.1) and (1.2), as studied in Theorems 2.1–2.4, can also be derived by solving exit problems for an appropriately scaled sequence of queueing models. Consider, for example, an immigration-and-catastrophe model in which immigrations occur according to a Poisson process at rate $\beta_m = m\beta$, and catastrophes are governed by the so-called binomial catastrophe mechanism: at the epochs of a catastrophic event, which occur according to an independent Poisson process at rate $\lambda$, every member of the population survives with a fixed probability p, independently of everything else. Denoting the population size at time t by $Q^{(m)}_t$, one can prove that the exit times of the fluid-scaled process $Q^{(m)}_t/m$ converge weakly, as $m\to\infty$, to the exit times of the process X. We decided to solve our exit problems in a direct way, thus avoiding additional arguments related to weak convergence.
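As an illustration of the scaling just described, the population model can be simulated directly. The sketch below is a minimal Gillespie-style simulation; the parameter values are illustrative, the catastrophe rate is taken unscaled in m so that the fluid limit exhibits jumps at a finite rate, and only the m-scaling of the immigration rate comes from the text.

```python
import random

def scaled_population(m, beta=1.0, lam=1.0, p=0.5, T=1.0, seed=0):
    """Simulate the immigration-and-binomial-catastrophe population Q^(m)
    on [0, T] and return the fluid-scaled terminal value Q^(m)_T / m.
    Immigrations arrive at rate m*beta; at each catastrophe epoch every
    member survives independently with probability p."""
    rng = random.Random(seed)
    t, Q = 0.0, 0
    while True:
        total_rate = m * beta + lam
        t += rng.expovariate(total_rate)     # time of the next event
        if t > T:
            return Q / m
        if rng.random() < m * beta / total_rate:
            Q += 1                           # an immigration
        else:
            # binomial catastrophe: thin the population with survival prob. p
            Q = sum(1 for _ in range(Q) if rng.random() < p)
```

Between catastrophes $Q^{(m)}_t/m$ drifts at rate $\beta$, and by the law of large numbers a catastrophe shrinks it by a factor close to p for large m, mimicking the path behaviour of X.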

4.2. An alternative approach to our results, with a focus on Theorem 2.2

We note that a similar approach may be used to obtain the results when, instead of focusing on the first time the process jumps downwards, we focus on an infinitesimally small time interval right after 0, say $(0,\varepsilon)$ . We only provide a brief sketch of the derivation and only consider the one-sided downward exit time as in Theorem 2.2.

We condition on the number of jumps in $(0,\varepsilon)$. Two or more jumps occur with probability $o(\varepsilon)$, exactly one jump occurs with probability $\lambda \varepsilon + o(\varepsilon)$, and no jump occurs with probability $1 - \lambda \varepsilon + o(\varepsilon)$.

Denote ${\varphi(x,t)} = {\mathbb P}(\tau_b^-(x) > t)$. Consider first $x \in [b,b/p)$, in which case a jump in $(0,\varepsilon)$ takes the value of the process below b immediately (since $xp < b$), and therefore ${\varphi(x,t)} = (1-\lambda \varepsilon + o(\varepsilon)) {\mathbb P}(\tau_b^-(x+\varepsilon) > t-\varepsilon) + o(\varepsilon)$. We now subtract $\varphi(x+\varepsilon, t-\varepsilon)$ from both sides, divide by $\varepsilon$, and let $\varepsilon \downarrow 0$ to obtain

\begin{equation*}\lim_{\varepsilon \downarrow 0}\frac{{\varphi(x,t)} - \varphi(x+\varepsilon, t-\varepsilon)}{\varepsilon} = - \lambda {\varphi(x,t)}.\end{equation*}

The numerator on the left-hand side may be written as ${\varphi(x,t)} -\varphi(x, t-\varepsilon) + \varphi(x, t-\varepsilon) - \varphi(x+\varepsilon, t-\varepsilon)$; dividing by $\varepsilon$ and letting $\varepsilon \downarrow 0$, the two differences converge to ${\varphi^{\prime}_t(x,t)}$ and $-{\varphi^{\prime}_x(x,t)}$, respectively, and we therefore obtain

(4.1) \begin{equation} {\varphi^{\prime}_x(x,t)}-{\varphi^{\prime}_t(x,t)} = \lambda {\varphi(x,t)},\end{equation}

which is valid for all $x \in (b,b/p)$. Consider now $x \ge b/p$. For these values, ${\varphi(x,t)} = \lambda \varepsilon {\mathbb P}(\tau_b^-(xp) > t-\varepsilon) + (1-\lambda \varepsilon + o(\varepsilon)) {\mathbb P}(\tau_b^-(x+\varepsilon) > t-\varepsilon) + o(\varepsilon)$. Similar arguments imply that

(4.2) \begin{equation} {\varphi^{\prime}_x(x,t)}-{\varphi^{\prime}_t(x,t)} = \lambda {\varphi(x,t)} - \lambda \varphi(xp,t),\end{equation}

which is valid for all $x \ge b/p$ . We can then check that the differential equations in (4.1) and (4.2) are equivalent to the integral equation implied, in a straightforward manner, by (2.9). We refrain from discussing such approaches further.
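The quantity $\varphi(x,t)$ driving equations (4.1) and (4.2) can also be estimated directly by Monte Carlo simulation, which gives a quick numerical sanity check. The sketch below assumes a unit growth slope (not fixed in this section) and reuses one Poisson jump-time sequence for all starting points, so that the estimates are non-decreasing in the starting point by a pathwise coupling.

```python
import random

def survival_probs(xs, b, t, lam=1.0, p=0.5, n=2000, seed=0):
    """Monte Carlo estimates of phi(x, t) = P(tau_b^-(x) > t) for each
    starting point x in xs (all assumed >= b). The process grows at unit
    rate and jumps y -> p*y at Poisson(lam) epochs; since it only moves
    downwards by jumps, the level b can only be crossed at a jump epoch."""
    rng = random.Random(seed)
    alive = [0] * len(xs)
    for _ in range(n):
        # one jump-time sequence on [0, t], shared by all starting points
        jumps, s = [], rng.expovariate(lam)
        while s < t:
            jumps.append(s)
            s += rng.expovariate(lam)
        for i, x in enumerate(xs):
            y, last, exited = x, 0.0, False
            for u in jumps:
                y = p * (y + (u - last))   # grow linearly, then drop to p*y
                last = u
                if y < b:
                    exited = True          # downward exit through b at time u
                    break
            if not exited:
                alive[i] += 1
    return [a / n for a in alive]
```

Because the map $y\mapsto p(y+w)$ is increasing, paths started from a larger x dominate those from a smaller x on the shared jump times, so the estimated $\varphi$ is monotone in x, as it should be.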

4.3. Applications of our results and future directions

The exit problems studied in this paper might also be used in applications. An obvious starting point is the collection of problems in which fluctuation theory has been applied for Lévy processes. This is, of course, a long-term project, and we are confident that our results will contribute to its development. Another possible application lies in the development of asymptotic results. Indeed, the formulas that we provide for the Laplace transforms of exit times are closely related to the tail behaviour of these exit times, through inversion or Tauberian theorems. We refrain from such an analysis, as it requires a variety of different techniques and would make the paper less coherent.

Funding information

The work of SK and RvdH is supported by the NWO Gravitation Networks grant 024.002.003. The work of ZP is partially supported by the National Science Centre under grant 2018/29/B/ST1/00756.

Competing interests

There were no competing interests to declare which arose during the preparation or publication process of this article.


Figure 1. Additive-increase and multiplicative-decrease process path.