
Predicting the last zero before an exponential time of a spectrally negative Lévy process

Published online by Cambridge University Press:  16 January 2023

Erik J. Baurdoux*
Affiliation:
London School of Economics and Political Science
José M. Pedraza*
Affiliation:
University of Waterloo
*Postal address: Department of Statistics, London School of Economics and Political Science, Houghton Street, London, WC2A 2AE, United Kingdom. Email address: [email protected]
**Postal address: Department of Statistics and Actuarial Science, University of Waterloo, Waterloo, ON, N2L 3G1, Canada. Email address: [email protected]

Abstract

Given a spectrally negative Lévy process, we predict, in an $L_1$ sense, the last passage time of the process below zero before an independent exponential time. This optimal prediction problem generalises [2], where the infinite-horizon problem is solved. Using an argument similar to that in [24], we show that this optimal prediction problem is equivalent to an optimal prediction problem in a finite-horizon setting. Surprisingly (unlike the infinite-horizon problem), an optimal stopping time is based on a curve that is killed at the moment the median of the exponential time is reached. That is, an optimal stopping time is the first time the process crosses above a non-negative, continuous, and non-increasing curve depending on time. This curve and the value function are characterised as a solution of a system of nonlinear integral equations which can be understood as a generalisation of the free boundary equations (see e.g. [21, Chapter IV.14.1]) in the presence of jumps. As an example, we numerically calculate this curve for a Brownian motion and for a compound Poisson process with exponentially distributed jumps perturbed by a Brownian motion.

© The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

The study of last exit times has received much attention in several areas of applied probability, e.g. risk theory, finance, and reliability, in the past few years. Consider the Cramér–Lundberg process, a process consisting of a deterministic drift and a compound Poisson process with only negative jumps (see Figure 1), which is typically used to model the capital of an insurance company. Of particular interest is the moment of ruin, $\tau_0^-$, defined as the first time the process becomes negative. When the insurance company has sufficient funds to endure negative capital for a considerable amount of time, another quantity of interest is the last time, $g_{\theta}$, that the process is below zero before an exponential time $e_{\theta}$. In a more general setting, we can consider a spectrally negative Lévy process instead of the classical risk process. Several works, for example [1] and [9], have studied the Laplace transform of the last time before an exponential time that a spectrally negative Lévy process is below some given level.

Figure 1. Cramér–Lundberg process with $\tau_0^-$ , the moment of ruin, and $g_{\theta}$ , the last zero before an exponential time.

Last passage times are also increasingly important in financial modelling, as shown in [18] and [19], where the authors conclude that the prices of European put and call options, modelled by non-negative and continuous martingales that vanish at infinity, can be expressed in terms of the probability distributions of some last passage times.

Another application of last passage times is in degradation models. The authors of [20] propose a spectrally positive Lévy process to model the ageing of a device, namely a subordinator perturbed by an independent Brownian motion. A motivation for this model is that the Brownian component can represent small repairs of the device, while the jumps represent major deterioration. In the literature, the failure time of a device is defined as the first hitting time of a critical level b. An alternative approach is to consider instead the last time that the process is below the level b: since the paths of this process are not necessarily monotone, the process may return below b after first passing above it.

The aim of this work is to predict the last time a spectrally negative Lévy process is below zero before an independent exponential time, where the term ‘to predict’ is understood to mean to find a stopping time that is closest (in the $L_1$ sense) to this random time. This problem is an example of the optimal prediction problems which have been widely investigated by many researchers. For example, [13] predicted the value of the ultimate maximum of a Brownian motion in a finite-horizon setting, whereas [23] focused on the last time of the attainment of the ultimate maximum of a (driftless) Brownian motion and proceeded to show that it is equivalent to predicting the last zero of the process in this setting. The work of the latter was generalised by [10] for a linear Brownian motion. The paper [4] studied the time at which a stable spectrally negative Lévy process attains its ultimate supremum in a finite horizon of time; this was later generalised by [3] for any Lévy process in an infinite horizon of time. Investigations on the time of the ultimate minimum and the last zero of a transient diffusion process were carried out by [12] and [11], respectively, within a subclass of functions.

In [2] the last zero of a spectrally negative Lévy process in an infinite-horizon setting is predicted. It is shown that an optimal stopping time that minimises the $L_1$-distance to the last zero of a spectrally negative Lévy process with drift is the first time the Lévy process crosses above a fixed level $a^*\geq 0$ (which is characterised in terms of the cumulative distribution function of the overall infimum of the process). As is the case in the Canadisation of American-type options (see e.g. [8]), given the memoryless property of the exponential distribution, one would expect that the generalisation of the aforementioned problem to an exponential time horizon would result in an infinite-horizon optimal prediction problem, and hence have a non-time-dependent solution. However, it turns out that this is not the case. Indeed, we show the existence of a continuous, non-increasing, and non-negative boundary such that an optimal stopping time is given by the first passage time above this curve before the median of the exponential time. The proof relies on solving an equivalent (finite-horizon) optimal stopping problem that depends on time and the process itself. Moreover, based on the ideas of [10] we characterise the boundary and the value function as the unique solutions of a system of nonlinear integral equations. Such a system can be thought of as a generalisation of the free boundary equation (see e.g. [21, Section 14]) allowing for the presence of jumps. We consider two examples where numerical calculations are implemented to find the optimal boundary.

This paper is organised as follows. In Section 2 we introduce some important notation regarding Lévy processes, and we outline some known fluctuation identities that will be useful later. We then formulate the optimal prediction problem and prove that it is equivalent to an optimal stopping problem whose solution is given in Theorem 2.1. Section 3 is dedicated to the solution of the optimal stopping problem. The main result of this paper is stated in Theorem 3.1, and its proof is detailed in Section 3.1. The last section makes use of Theorem 3.1 to find numerical solutions of the optimal stopping problem for the case of a Brownian motion with drift and a compound Poisson process perturbed by a Brownian motion.

2. Prerequisites and formulation of the problem

We start this section by introducing some important notation, and we give an overview of some fluctuation identities for spectrally negative Lévy processes. Readers can refer to [5], [22], or [15] for more details about Lévy processes.

A Lévy process $X=\{X_t,t\geq 0 \}$ is an almost surely (a.s.) càdlàg process that has independent and stationary increments such that ${\mathbb{P}}(X_0=0)=1$. Every Lévy process is also a strong Markov process. For $x\in {\mathbb{R}}$, denote by ${\mathbb{P}}_x$ the law of X started at the point x; that is, ${\mathbb{E}}_x(\cdot)={\mathbb{E}}(\cdot|X_0=x)$. By the spatial homogeneity of Lévy processes, the law of X under ${\mathbb{P}}_x$ is the same as that of $X+x$ under ${\mathbb{P}}$.

Let X be a spectrally negative Lévy process, that is, a Lévy process starting from 0 with only negative jumps and non-monotone paths, defined on a filtered probability space $(\Omega,{\mathcal{F}}, \mathbb{F}, {\mathbb{P}})$ where $\mathbb{F}=\{{\mathcal{F}}_t,t\geq 0 \}$ is the filtration generated by X which is naturally enlarged (see [6, Definition 1.3.38]). We suppose that X has Lévy triplet $(\mu,\sigma, \Pi)$ where $\mu \in {\mathbb{R}}$ , $\sigma\geq 0$ , and $\Pi$ is a measure (Lévy measure) concentrated on $({-}\infty,0)$ satisfying $\int_{({-}\infty,0)} \big(1\wedge x^2\big)\Pi({{d}} x)<\infty$ .

Let $\psi$ be the Laplace exponent of X, defined as

\begin{align*}\psi(\beta)\,:\!=\,\log\!\big({\mathbb{E}}\big(e^{\beta X_1}\big)\big).\end{align*}

Then $\psi$ is finite on ${\mathbb{R}}_+$, and it is strictly convex and infinitely differentiable with $\psi(0)=0$ and $\psi(\infty)=\infty$. From the Lévy–Khintchine formula, we know that $\psi$ takes the form

\begin{align*}\psi(\beta)=-\mu \beta +\frac{1}{2} \sigma^2 \beta^2 + \int_{({-}\infty,0)}\big(e^{\beta x}-1-\beta x {\mathbb{I}}_{\{ x>-1\}}\big) \Pi({{d}} x)\end{align*}

for all $\beta\geq 0$ . Denote by $\tau_a^+$ the first time the process X is above the level $a \in {\mathbb{R}}$ , i.e.,

\begin{align*}\tau_a^+=\inf\big\{ t>0\,:\, X_t>a\big\}.\end{align*}

Then it can be shown that, for any $a\geq 0$ , its Laplace transform is given by

(2.1) \begin{align}{\mathbb{E}}\Big(e^{-q\tau_a^+}{\mathbb{I}}_{\{\tau_a^+<\infty \}}\Big)=e^{-\Phi(q)a},\end{align}

where $\Phi$ corresponds to the right inverse of $\psi$ , which is defined by

\begin{align*}\Phi(q)=\sup\{ \theta \geq 0\,:\, \psi(\theta)=q \}\end{align*}

for any $q\geq 0$ .

Now we introduce the scale functions. This family of functions is the key to the derivation of fluctuation identities for spectrally negative Lévy processes. The notation used is mainly based on [14] and [15, Chapter 8]. For $q\geq 0$, the function $W^{(q)}$ is such that $W^{(q)}=0$ for $x<0$ and $W^{(q)}$ is characterised on $[0,\infty)$ as a strictly increasing and continuous function whose Laplace transform satisfies

\begin{align*}\int_0^{\infty} e^{-\beta x} W^{(q)}(x){{d}} x=\frac{1}{\psi(\beta)-q}, \qquad \text{for } \beta>\Phi(q).\end{align*}

We further define the function $Z^{(q)}$ by

\begin{align*}Z^{(q)}(x)=1+q\int_0^x W^{(q)}(y){{d}} y.\end{align*}
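For the Brownian-motion-with-drift example the scale functions are available in closed form, which makes the defining Laplace-transform relation easy to check numerically. The sketch below is an illustration under that assumption (arbitrary parameters), not part of the original text.

```python
# Illustrative sketch: closed-form scale functions for a Brownian motion with
# drift, psi(beta) = c*beta + 0.5*sigma**2*beta**2, with a numerical check of
# the Laplace-transform characterisation of W^{(q)}.
import numpy as np
from scipy.integrate import quad

c, sigma, q = 1.0, 1.0, 0.5         # arbitrary example parameters

root = np.sqrt(c**2 + 2.0 * q * sigma**2)
Phi_q = (-c + root) / sigma**2       # right inverse of psi at q
Theta_q = (c + root) / sigma**2      # minus the negative root of psi(.) = q

def W(x):
    """q-scale function; W^{(q)}(x) = 0 for x < 0."""
    if x < 0:
        return 0.0
    return (np.exp(Phi_q * x) - np.exp(-Theta_q * x)) / root

def Z(x):
    """Z^{(q)}(x) = 1 + q * integral_0^x W^{(q)}(y) dy."""
    if x < 0:
        return 1.0
    return 1.0 + q * quad(W, 0.0, x)[0]

beta = Phi_q + 0.5                   # any beta > Phi(q)
lhs = quad(lambda x: np.exp(-beta * x) * W(x), 0.0, np.inf)[0]
rhs = 1.0 / (c * beta + 0.5 * sigma**2 * beta**2 - q)
print(lhs, rhs)                      # the two values should agree
```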

Denote by $\tau_0^-$ the first passage time of X into the set $({-}\infty,0)$ , that is,

\begin{align*}\tau_0^-=\inf\big\{t>0\,:\, X_t<0\big\}.\end{align*}

It turns out that the Laplace transform of $\tau_0^-$ can be written in terms of the scale functions. Specifically,

(2.2) \begin{align}{\mathbb{E}}_x\Big(e^{-q\tau_0^-} {\mathbb{I}}_{\big\{\tau_0^-<\infty \big\}}\Big)=Z^{(q)}(x)-\frac{q}{\Phi(q)}W^{(q)}(x)\end{align}

for all $q\geq 0$ and $x\in {\mathbb{R}}$ . It can be shown that the paths of X are of finite variation if and only if

\begin{align*}\sigma=0 \qquad\text{and} \qquad \int_{({-}1,0)}({-}y) \Pi({{d}} y)<\infty.\end{align*}

In this case, we may write

\begin{align*}\psi(\lambda)=\delta \lambda -\int_{({-}\infty,0)}\big(1-e^{\lambda y}\big)\Pi({{d}} y),\end{align*}

where

(2.3) \begin{align}\delta\,:\!=\,-\mu-\int_{({-}1,0)}x\Pi({{d}} x).\end{align}

Note that monotone processes are excluded from the definition of spectrally negative Lévy processes, so we assume that $\delta>0$ when X is of finite variation. The value of $W^{(q)}$ at zero depends on the path variation of X. In the case where X is of infinite variation we have that $W^{(q)}(0)=0$ ; otherwise,

(2.4) \begin{align}W^{(q)}(0)=\frac{1}{\delta}.\end{align}
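Before moving on, the identity (2.2) can be illustrated by simulation. The following sketch (again for the Brownian-motion-with-drift example, with arbitrary parameters and a crude Euler grid) compares a Monte Carlo estimate of the left-hand side of (2.2) with the closed-form right-hand side; it is an illustration only.

```python
# Illustrative sketch (not from the paper): Monte Carlo check of identity
# (2.2) for a Brownian motion with drift c started at x > 0.  The Euler grid
# biases the first-passage time slightly, so only rough agreement is expected.
import numpy as np

rng = np.random.default_rng(1)
c, sigma, q, x = 1.0, 1.0, 0.5, 1.0   # arbitrary example parameters
dt, T, n_paths = 1e-3, 10.0, 4000     # T truncates the (defective) passage time

n_steps = int(T / dt)
est = 0.0
for _ in range(n_paths):
    path = x + np.cumsum(c * dt + sigma * np.sqrt(dt)
                         * rng.standard_normal(n_steps))
    below = np.nonzero(path < 0.0)[0]
    if below.size:                     # path enters (-inf, 0) before T
        est += np.exp(-q * dt * (below[0] + 1))
est /= n_paths

# Right-hand side of (2.2) via the closed-form scale functions.
root = np.sqrt(c**2 + 2.0 * q * sigma**2)
Phi_q, Theta_q = (-c + root) / sigma**2, (c + root) / sigma**2
W = (np.exp(Phi_q * x) - np.exp(-Theta_q * x)) / root
Z = 1.0 + q * ((np.exp(Phi_q * x) - 1.0) / Phi_q
               + (np.exp(-Theta_q * x) - 1.0) / Theta_q) / root
print(est, Z - q / Phi_q * W)          # Monte Carlo vs. exact value
```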

For any $a \in {\mathbb{R}}$ and $q\geq 0$ , the q-potential measure of X killed upon entering the set $[a,\infty)$ is absolutely continuous with respect to the Lebesgue measure. A density is given for all $x,y\leq a$ by

(2.5) \begin{align}\int_0^{\infty} e^{-qt} {\mathbb{P}}_x\!\left( X_t \in {{d}} y,t<\tau_a^+ \right) {{d}} t=\big[e^{-\Phi(q)(a-x)}W^{(q)}(a-y)-W^{(q)}(x-y) \big]{{d}} y.\end{align}

Let $g_{\theta}$ be the last passage time below zero before an exponential time, i.e.,

(2.6) \begin{align}g_{\theta}=\sup\big\{0 \leq t \leq {e_{\theta}} \,:\,X_t\leq 0 \big\},\end{align}

where ${e_{\theta}}$ is an exponential random variable with parameter $\theta \geq 0$ . Here we use the convention that an exponential random variable with parameter 0 is taken to be infinite with probability 1. In the case of $\theta=0$ , we simply write $g=g_0$ .

Note that $g_{\theta} \leq {e_{\theta}}<\infty$ ${\mathbb{P}}$ -a.s. for all $\theta > 0$ . However, in the case where $\theta=0$ , g could be infinite. Therefore, we assume that $\theta>0$ throughout this paper. Moreover, we have that $g_{\theta}$ has finite moments for all $ \theta>0$ .
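To get a feel for the random time $g_{\theta}$, one can simulate it directly. The sketch below (illustrative only: a Brownian motion with drift, arbitrary parameters, Euler discretisation) draws an independent exponential horizon and records the last grid time at which the path is at or below zero.

```python
# Illustrative sketch: simulating the last zero g_theta of (2.6) for a
# Brownian motion with drift.  Parameter values are arbitrary choices.
import numpy as np

rng = np.random.default_rng(7)
c, sigma, theta = 1.0, 1.0, 0.5
dt, n_paths = 1e-3, 5000

samples = []
for _ in range(n_paths):
    e = rng.exponential(1.0 / theta)          # independent exp(theta) horizon
    n = int(e / dt)
    path = np.concatenate(([0.0], np.cumsum(
        c * dt + sigma * np.sqrt(dt) * rng.standard_normal(n))))
    below = np.nonzero(path <= 0.0)[0]        # index 0 always qualifies
    samples.append(dt * below[-1])            # last grid time at or below zero
samples = np.asarray(samples)
print(samples.mean(), np.quantile(samples, 0.5))  # E(g_theta) and its median
```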

Remark 2.1. Since X is a spectrally negative Lévy process, the case of a compound Poisson process is excluded, and hence the only way of exiting the set $({-}\infty,0]$ is by creeping upwards. This tells us that $X_{g_{\theta}-}=X_{g_{\theta}}=0$ on the event $\{g_{ \theta}<{e_{\theta}}\}$ and that $g_{\theta}=\sup\{0\leq t \leq {e_{\theta}}\,:\, X_t<0\}$ holds ${\mathbb{P}}$ -a.s.

Clearly, up to any time $t\geq 0$ the value of $g_{\theta}$ is unknown (unless X is trivial), and it is only with the realisation of the whole process that we know that the last passage time below 0 has occurred. However, this is often too late: typically, at any time $t\geq 0$ , we would like to know how close we are to the time $g_{\theta}$ so that we can take some action based on this information. We search for a stopping time $\tau_*$ of X that is as ‘close’ as possible to $g_{\theta}$ . Consider the optimal prediction problem

(2.7) \begin{align}V_*=\inf_{\tau \in \mathcal{T} } {\mathbb{E}}(|g_{\theta}-\tau|),\end{align}

where $\mathcal{T}$ is the set of all stopping times.

Note that the random time $g_{\theta}$ is not an $\mathbb{F}$-stopping time, since at any time $t\geq 0$ it depends on the future behaviour of the process; hence it is not immediately obvious how to solve the optimal prediction problem by using the theory of optimal stopping. In order to find the solution we solve an equivalent optimal stopping problem. In the next lemma we establish an equivalence between the optimal prediction problem (2.7) and an optimal stopping problem. This equivalence is mainly based on the work of [24].

Lemma 2.1. Suppose that $\{X_t, t\geq 0\}$ is a spectrally negative Lévy process. Let $g_{\theta}$ be the last time that X is below the level zero before an exponential time ${e_{\theta}}$ with $\theta > 0$ , as defined in (2.6). For any $\tau \in \mathcal{ T}$ we have

\begin{align*} {\mathbb{E}}(|g_{\theta}-\tau|)={\mathbb{E}}(g_{\theta})+{\mathbb{E}}\!\left(\int_0^{\tau} {G^{(\theta)}}(s,X_s){{d}} s\right),\end{align*}

where the function $G^{(\theta)}$ is given by

$$G^{(\theta)}(s,x)=1+2e^{-\theta s}\left[ \frac{\theta}{\Phi(\theta)} W^{(\theta)}(x)-Z^{(\theta)}(x)\right]$$

for all $x\in {\mathbb{R}}$. Consequently, a stopping time minimises (2.7) if and only if it minimises the optimal stopping problem given by

(2.8) \begin{align}V=\inf_{\tau\in \mathcal{T}} {\mathbb{E}}\!\left( \int_0^{\tau} G^{(\theta)}(s,X_s){{d}} s\right).\end{align}

In particular,

(2.9) \begin{align}V_*=V+{\mathbb{E}}(g_{\theta}).\end{align}

Proof. Fix any stopping time $\tau \in \mathcal{T}$ . We have that

\begin{align*}|g_{\theta}-\tau|= \int_0^{\tau}\Big[2{\mathbb{I}}_{\{ g_{\theta}\leq s\}}-1\Big]{{d}} s+g_{\theta}.\end{align*}

From Fubini’s theorem and the tower property of conditional expectations, we obtain

\begin{align*}{\mathbb{E}}\!\left[\int_0^{\tau} {\mathbb{I}}_{\{ g_{\theta}\leq s\}} {{d}} s\right]&={\mathbb{E}}\!\left[ \int_0^{\infty}{\mathbb{I}}_{\{s<\tau \}} {\mathbb{E}}\big[{\mathbb{I}}_{\{ g_{\theta}\leq s\}}|{\mathcal{F}}_s\big] {{d}} s \right]\\&={\mathbb{E}}\!\left[ \int_0^{\tau} {\mathbb{P}}\big( g_{\theta}\leq s |{\mathcal{F}}_s\big){{d}} s \right].\end{align*}

Note that on the event $\{{e_{\theta}}\leq s\}$ we have $g_{\theta}\leq s$, so that

\begin{align*}{\mathbb{P}}\big(g_{\theta}\leq s|{\mathcal{F}}_s\big)=1-e^{-\theta s}+{\mathbb{P}}\big(g_{\theta}\leq s,{e_{\theta}}>s|{\mathcal{F}}_s \big).\end{align*}

On the other hand, on the event $\{{e_{\theta}} >s\}$, as a consequence of Remark 2.1, the event $\{g_{\theta}\leq s\}$ coincides with $\{ X_u \geq 0 \text{ for all } u\in [s,{e_{\theta}}]\}$ (up to a ${\mathbb{P}}$ -null set). Hence we get that for all $s\geq 0$,

\begin{align*}{\mathbb{P}}\big(g_{\theta}\leq s, {e_{\theta}}>s|{\mathcal{F}}_s\big)&={\mathbb{P}}\big(X_u \geq 0 \text{ for all } u\in [s,{e_{\theta}}],{e_{\theta}}>s|{\mathcal{F}}_s \big)\\&={\mathbb{P}}\!\left(\inf_{0 \leq u \leq {e_{\theta}}-s} X_{u+s} \geq 0,{e_{\theta}}>s|{\mathcal{F}}_s\right) \\&= e^{-\theta s} {\mathbb{P}}_{X_s}\!\left( \underline{X}_{{e_{\theta}} } \geq 0\right) ,\end{align*}

where $\underline{X}_t=\inf_{0\leq u\leq t} X_u$ for any $t \geq 0$, and the last equality follows from the lack-of-memory property of the exponential distribution and the Markov property of the Lévy process. Hence, we have that

\begin{align*}{\mathbb{P}}\big(g_{\theta}\leq s, {e_{\theta}}>s|{\mathcal{F}}_s\big)&=e^{-\theta s}F^{(\theta)}(X_s),\end{align*}

where for all $x\in {\mathbb{R}}$ , $F^{(\theta)}(x)={\mathbb{P}}_x\big(\underline{X}_{{e_{\theta}} } \geq 0\big)$ . Then, since ${e_{\theta}}$ is independent of X, we have that for $x\in {\mathbb{R}}$ ,

\begin{align*}F^{(\theta)}(x)&={\mathbb{P}}_x\big(\underline{X}_{{e_{\theta}}}\geq 0\big)={\mathbb{P}}_x\big({e_{\theta}} < \tau_0^-\big)\\&=1-{\mathbb{E}}_x\Big(e^{-\theta \tau_0^-} {\mathbb{I}}_{\big\{\tau_0^-<\infty\big\}}\Big)=\frac{\theta}{\Phi(\theta)}W^{(\theta)}(x)-Z^{(\theta)}(x)+1,\end{align*}

where the last equality follows from Equation (2.2). Thus,

\begin{align*}{\mathbb{P}}(g_{\theta}\leq s |{\mathcal{F}}_s)&=1-e ^{-\theta s}+ e^{-\theta s}\left[ \frac{\theta}{\Phi(\theta)} W^{(\theta)}(X_s)-Z^{(\theta)}(X_s)+1 \right]\\&=1+e^{-\theta s}\left[ \frac{\theta}{\Phi(\theta)} W^{(\theta)}(X_s)-Z^{(\theta)}(X_s)\right].\end{align*}

Therefore,

\begin{align*}V_*&=\inf_{\tau\in \mathcal{T}} {\mathbb{E}}(|g_{\theta}-\tau|)\\&={\mathbb{E}}(g_{\theta})+\inf_{\tau \in \mathcal{T}} {\mathbb{E}}\!\left(\int_0^{\tau} [2{\mathbb{P}}(g_{\theta}\leq s|{\mathcal{F}}_s)-1]{{d}} s \right)\\&={\mathbb{E}}(g_{\theta})+\inf_{\tau \in \mathcal{T}} {\mathbb{E}}\!\left(\int_0^{\tau} \left(1+2e^{-\theta s}\left[ \frac{\theta}{\Phi(\theta)} W^{(\theta)}(X_s)-Z^{(\theta)}(X_s)\right]\right){{d}} s \right).\end{align*}

The conclusion holds.
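As a concrete illustration of Lemma 2.1, the gain function ${G^{(\theta)}}$ is straightforward to evaluate once the scale functions are available. A minimal sketch for the Brownian-motion-with-drift example follows (illustrative parameters only); note that $G^{(\theta)}(s,x)=1-2e^{-\theta s}$ for $x<0$, in line with the bounds derived in Section 3.

```python
# Illustrative sketch: evaluating the gain function G^{(theta)} of Lemma 2.1
# for the Brownian-motion-with-drift example, via the closed-form scale
# functions.  All parameter values are arbitrary.
import numpy as np

c, sigma, theta = 1.0, 1.0, 0.5
root = np.sqrt(c**2 + 2.0 * theta * sigma**2)
Phi_t = (-c + root) / sigma**2
Theta_t = (c + root) / sigma**2

def W(x):
    return 0.0 if x < 0 else (np.exp(Phi_t * x) - np.exp(-Theta_t * x)) / root

def Z(x):
    if x < 0:
        return 1.0
    return 1.0 + theta * ((np.exp(Phi_t * x) - 1.0) / Phi_t
                          + (np.exp(-Theta_t * x) - 1.0) / Theta_t) / root

def G(s, x):
    return 1.0 + 2.0 * np.exp(-theta * s) * (theta / Phi_t * W(x) - Z(x))

m_theta = np.log(2.0) / theta
print(G(0.0, -1.0))    # equals 1 - 2e^0 = -1 for x < 0, s = 0
print(G(m_theta, 0.0)) # non-negative once s >= m_theta
```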

Note that if we evaluate at $\theta=0$, the function $G^{(0)}$ coincides with the gain function found in [2] (see Lemma 3.2 and Remark 3.3). In order to find the solution to the optimal stopping problem (2.8) (and hence (2.7)), we extend its definition to the Lévy process (and hence strong Markov process) $\{(t,X_t),t\geq 0\}$ in the following way. Define the function $V^{(\theta)}\,:\, {\mathbb{R}}_+\times {\mathbb{R}} \mapsto {\mathbb{R}}$ as

(2.10) \begin{align}V^{(\theta)}(t,x)=\inf_{\tau\in \mathcal{T} }{\mathbb{E}}_{t,x}\!\left( \int_0^{\tau} G^{(\theta)}(s+t,X_{s+t}){{d}} s\right)= \inf_{\tau \in \mathcal{T}}{\mathbb{E}}\!\left( \int_0^{\tau} G^{(\theta)}(s+t,X_s+x){{d}} s \right),\end{align}

so that

\begin{align*}V_*=V^{(\theta)}(0,0)+{\mathbb{E}}(g_{\theta}).\end{align*}

The next theorem states the solution of the optimal stopping problem (2.10) and hence the solution of (2.7).

Theorem 2.1. Let $\{X_t, t\geq 0\}$ be any spectrally negative Lévy process and ${e_{\theta}}$ an exponential random variable with parameter $\theta > 0$ independent of $\mathbb{F}$ . There exists a non-increasing and continuous curve ${b^{(\theta)}}\,:\,[0,{m_{\theta}}] \mapsto {\mathbb{R}}_+$ such that ${b^{(\theta)}}\geq {h^{(\theta)}}$ , where $h^{(\theta)}(t)\,:\!=\,\inf\{ x \in {\mathbb{R}}\,:\, G^{(\theta)}(t,x)\geq 0\}$ , and the infimum in (2.10) is attained by the stopping time

\begin{align*}\tau_D=\inf\big\{ s \in [0,{m_{\theta}}-t]\,:\, X_s \geq {b^{(\theta)}}(s+t) \big\}\end{align*}

when $(t,x)\in [0,{m_{\theta}})\times {\mathbb{R}}$ and by $\tau_D=0$ when $(t,x)\in [{m_{\theta}},\infty)\times {\mathbb{R}} $ , where ${m_{\theta}}=\log\!(2)/\theta$ . Moreover, the function ${b^{(\theta)}}$ is uniquely characterised as in Theorem 3.1.

Note that the proof of Theorem 2.1 is rather long and hence is split into a series of lemmas. We dedicate Section 3 to the proof.
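To illustrate how the rule of Theorem 2.1 would be applied along a path, the following sketch simulates the process and stops at the first time it lies above a boundary curve. The curve used here is a hypothetical placeholder (the true ${b^{(\theta)}}$ must be computed from the characterisation in Theorem 3.1, as in the numerical section); everything else is an arbitrary illustrative choice.

```python
# Illustrative sketch: applying the stopping rule tau_D of Theorem 2.1 along
# a simulated path of a Brownian motion with drift.  The boundary b() below
# is a PLACEHOLDER, not the true curve b^{(theta)} of Theorem 3.1.
import numpy as np

rng = np.random.default_rng(3)
c, sigma, theta = 1.0, 1.0, 0.5
m_theta = np.log(2.0) / theta
dt = 1e-3

def b(t):                          # hypothetical non-increasing curve, 0 at m_theta
    return max(0.0, 0.5 * (1.0 - t / m_theta))

t, x = 0.0, 0.0
while t < m_theta and x < b(t):    # first time X_s >= b(s + t) before m_theta
    x += c * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    t += dt
print("stopped at time", t, "state", x)
```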

3. Solution to the optimal stopping problem

In this section we solve the optimal stopping problem (2.10). The proof relies on showing that $\tau_D$ as defined in Theorem 2.1 is indeed an optimal stopping time, by using the general theory of optimal stopping and properties of the function ${V^{(\theta)}}$ . Hence, some properties of ${b^{(\theta)}}$ are derived. The main contribution of this section (Theorem 3.1) characterises ${V^{(\theta)}}$ and ${b^{(\theta)}}$ as the unique solution of a nonlinear system of integral equations within a certain family of functions.

Recall that ${V^{(\theta)}}$ is given by

\begin{align*}V^{(\theta)}(t,x)=\inf_{\tau\in \mathcal{T} }{\mathbb{E}}_{t,x}\!\left( \int_0^{\tau} G^{(\theta)}(s+t,X_{s+t}){{d}} s\right).\end{align*}

From the proof of Lemma 2.1 we note that $G^{(\theta)}$ can be written as

\begin{align*}G^{(\theta)}(s,x)=1+2e^{-\theta s} \big[F^{(\theta)}(x)-1\big],\end{align*}

where $F^{(\theta)}$ is the distribution function of the positive random variable $-\underline{X}_{{e_{\theta}}}$ given by

(3.1) \begin{align}F^{(\theta)}(x)=\frac{\theta}{\Phi(\theta)}W^{(\theta)}(x)-Z^{(\theta)}(x)+1\end{align}

for all $x\in {\mathbb{R}}$. Note that, for each $\theta>0$, the random variable $-\underline{X}_{{e_{\theta}}}$ has support on $[0,\infty)$, and hence the function $F^{(\theta)}$ is strictly increasing on $[0,\infty)$. Indeed, from the Wiener–Hopf factorisation (see e.g. [15, Theorem 6.15, pp. 171--172]), we know that for any $\theta>0$, the random variable $\underline{X}_{{e_{\theta}}}$ is infinitely divisible with no Gaussian component and with Lévy measure given by $\pi^-({{d}} x)=\int_0^{\infty}\frac{1}{t}e^{-\theta t} {\mathbb{P}}(X_t\in {{d}} x)\,{{d}} t$ for $x<0$. Moreover, since X creeps upwards and is not a subordinator, we have that $\pi^-$ has support on $({-}\infty,0]$. Then, from the Lévy–Khintchine formula and [22, Theorem 24.10(iii), p. 152], we deduce that the support of the random variable $\underline{X}_{{e_{\theta}}}$ is $({-}\infty,0]$ as claimed.

Now we give some intuition about the function $G^{(\theta)}$. Recall that for all $\theta \geq 0$, $W^{(\theta)}$ and $Z^{(\theta)}$ are continuous and strictly increasing functions on $[0,\infty)$ such that $W^{(\theta)}(x)=0$ and $Z^{(\theta)}(x)=1$ for $x \in ({-}\infty,0)$. From the above and Equation (3.1) we have that for a fixed $t\geq 0$, the function $x\mapsto G^{(\theta)}(t,x)$ is strictly increasing and continuous on $[0,\infty)$, with a possible discontinuity at 0, depending on the path variation of X. Moreover, we have that $\lim_{x\rightarrow \infty} G^{(\theta)}(t,x)=1$ for all $t\geq 0$. For $x<0$ and $t\geq 0$, the function $G^{(\theta)}$ takes the form $G^{(\theta)}(t,x)=1-2e^{-\theta t}$. Similarly, from the fact that $F^{(\theta)}(x)-1\leq 0$ for all $x\in {\mathbb{R}}$, we have that for a fixed $x\in {\mathbb{R}}$ the function $t\mapsto G^{(\theta)}(t,x)$ is continuous and strictly increasing on $[0,\infty)$. Furthermore, from the fact that $0\leq F^{(\theta)}(x) \leq 1$, we have that the function $G^{(\theta)}$ is bounded by

(3.2) \begin{align}1-2e^{-\theta t}\leq G^{(\theta)}(t,x)\leq 1,\end{align}

which implies that $|G^{(\theta)}|\leq 1$ . Recall that ${m_{\theta}}$ is defined as the median of the random variable ${e_{\theta}}$ , that is,

\begin{align*}{m_{\theta}}=\frac{\log\!(2)}{\theta}.\end{align*}

Hence from (3.2) we have that $G^{(\theta)}(t,x)\geq 0$ for all $x\in {\mathbb{R}}$ and $t\geq {m_{\theta}}$. The above observations tell us that, to solve the optimal stopping problem (2.10), we are interested in a stopping time such that before stopping, the process X spends most of its time in the region where ${G^{(\theta)}}$ is negative, bearing in mind that (t, X) can enter the set $\{ (s,x) \in {\mathbb{R}}_+\times {\mathbb{R}}\,:\, {G^{(\theta)}}(s,x)>0\}$ and later return to the set $\{ (s,x) \in {\mathbb{R}}_+\times {\mathbb{R}}\,:\, {G^{(\theta)}}(s,x)\leq 0 \}$. The only restriction is that once enough time has passed, namely $t\geq {m_{\theta}}$, we must stop immediately.

Recall that the function $h^{(\theta)}\,:\,{\mathbb{R}}_+\mapsto {\mathbb{R}}$ is defined as

(3.3) \begin{align}h^{(\theta)}(t)=\inf\big\{ x \in {\mathbb{R}}\,:\, G^{(\theta)}(t,x)\geq 0\big\}=\inf\bigg\{ x \in {\mathbb{R}}\,:\, F^{(\theta)}(x) \geq 1-\frac{1}{2}e^{\theta t} \bigg\}, \qquad t\geq 0.\end{align}

Hence, we can see that the function $h^{(\theta)}$ is a non-increasing continuous function on $[0,{m_{\theta}})$ such that $\lim_{t\uparrow {m_{\theta}}} h^{(\theta)}(t)=0$ and ${h^{(\theta)}}(t)=-\infty$ for $t\in[{m_{\theta}},\infty)$ . Then ${h^{(\theta)}}$ must satisfy ${h^{(\theta)}}(t)\geq \lim_{s\uparrow {m_{\theta}}} {h^{(\theta)}}(s)=0 $ for $t \in [0,{m_{\theta}})$ .
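Since $F^{(\theta)}$ is continuous and strictly increasing on $(0,\infty)$ in the infinite-variation case, ${h^{(\theta)}}(t)$ can be computed by solving $F^{(\theta)}(x)=1-\tfrac{1}{2}e^{\theta t}$. The following sketch does this for the Brownian-motion-with-drift example; it is an illustration under arbitrary parameter choices, not part of the original text.

```python
# Illustrative sketch: computing h^{(theta)}(t) of (3.3) by root-finding on
# F^{(theta)}(x) = theta/Phi(theta)*W^{(theta)}(x) - Z^{(theta)}(x) + 1,
# for the Brownian-motion-with-drift example (F continuous and increasing).
import numpy as np
from scipy.optimize import brentq

c, sigma, theta = 1.0, 1.0, 0.5
root = np.sqrt(c**2 + 2.0 * theta * sigma**2)
Phi_t, Theta_t = (-c + root) / sigma**2, (c + root) / sigma**2
m_theta = np.log(2.0) / theta

def F(x):
    if x < 0:
        return 0.0
    W = (np.exp(Phi_t * x) - np.exp(-Theta_t * x)) / root
    Z = 1.0 + theta * ((np.exp(Phi_t * x) - 1.0) / Phi_t
                       + (np.exp(-Theta_t * x) - 1.0) / Theta_t) / root
    return theta / Phi_t * W - Z + 1.0

def h(t):
    """Solve F(x) = 1 - exp(theta*t)/2; h is -infinity past m_theta."""
    level = 1.0 - 0.5 * np.exp(theta * t)
    if level <= 0.0:
        return -np.inf
    return brentq(lambda x: F(x) - level, 1e-12, 50.0)

for t in [0.0, 0.5 * m_theta, 0.9 * m_theta]:
    print(t, h(t))      # non-increasing, tending to 0 as t -> m_theta
```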

In order to characterise the stopping time that minimises (2.10), we first derive some properties of the function ${V^{(\theta)}}$ .

Lemma 3.1. The function ${V^{(\theta)}}$ is non-decreasing in each argument. Moreover, we have $V^{(\theta)}(t,x) \in ({-}{m_{\theta}},0]$ for all $x\in {\mathbb{R}}$ and $t \geq 0$ . In particular, $V^{(\theta)}(t,x)<0$ for any $t\geq 0$ with $x < h^{(\theta)}(t)$ and ${V^{(\theta)}}(t,x)=0$ for all $(t,x)\in [{m_{\theta}},\infty)\times {\mathbb{R}}$ .

Proof. First, note that $V^{(\theta)}\leq 0$ follows from taking $\tau \equiv 0$ in the definition of ${V^{(\theta)}}$ . Moreover, recall that ${G^{(\theta)}}(t,x) > 0$ for $t> {m_{\theta}}$ and $x\in {\mathbb{R}}$ , so we have that ${V^{(\theta)}} $ vanishes on $[{m_{\theta}},\infty)\times {\mathbb{R}}$ . The fact that ${V^{(\theta)}}$ is non-decreasing in each argument follows from the non-decreasing property of the functions $t \mapsto G^{(\theta)}(t,x)$ and $x \mapsto G^{(\theta)}(t,x)$ , as well as the monotonicity of the expectation. Moreover, using standard arguments we can see that

\begin{align*}\big\{(t,x)\in {\mathbb{R}}_+ \times {\mathbb{R}}\,:\, x<{h^{(\theta)}}(t)\big\} &=\big\{(t,x)\in {\mathbb{R}}_+ \times {\mathbb{R}}\,:\, G^{(\theta)}(t,x)<0\big\}\\&\subset \big\{ (t,x)\in {\mathbb{R}}_+\times {\mathbb{R}}\,:\, V^{(\theta)}(t,x)<0\big\}.\end{align*}

Next we will show that $V^{(\theta)}(t,x)>-{m_{\theta}}$ for all $(t,x)\in [0,{m_{\theta}})\times {\mathbb{R}}$ and for all $\theta > 0$ . Note that $t<{m_{\theta}}$ if and only if $1-2e^{-\theta t}<0$ . Then for all $(s,x)\in {\mathbb{R}}_+\times {\mathbb{R}}$ we have that

\begin{align*}{G^{(\theta)}}(s,x)\geq 1-2e^{-\theta s}\geq \big(1-2e^{-\theta s}\big){\mathbb{I}}_{\{s<{m_{\theta}} \}}.\end{align*}

Hence, for all $x\in {\mathbb{R}}$ and $t<{m_{\theta}}$ ,

\begin{align*}{V^{(\theta)}}(t,x)& \geq \inf_{\tau\in \mathcal{T}} {\mathbb{E}} \!\left(\int_0^{\tau} \big(1-2e^{-\theta (s+t)}\big){\mathbb{I}}_{\{t+s<{m_{\theta}} \}} {{d}} s \right)\\& = - \sup_{\tau\in \mathcal{T}} {\mathbb{E}} \!\left(\int_0^{\tau} \big(2e^{-\theta (s+t)}-1\big){\mathbb{I}}_{\{t+s<{m_{\theta}} \}} {{d}} s \right).\end{align*}

The term in the last integral is non-negative, so we obtain for all $t<{m_{\theta}}$ and $x\in {\mathbb{R}}$ that

\begin{align*}{V^{(\theta)}}(t,x) &\geq -\left(\int_0^{\infty} (2e^{-\theta (s+t)}-1){\mathbb{I}}_{\{t+s<{m_{\theta}} \}}{{d}} s \right)\\&= - \left(\int_0^{{m_{\theta}}-t} (2e^{-\theta (s+t)}-1){{d}} s \right)\\&>-{m_{\theta}}.\end{align*}

By using a dynamic programming argument and the fact that ${V^{(\theta)}}$ vanishes on the set $[{m_{\theta}},\infty)\times {\mathbb{R}}$ , we can see that

\begin{align*}{V^{(\theta)}}(t,x) &=\inf_{\tau\in \mathcal{T} }{\mathbb{E}}_{t,x}\!\left( \int_0^{\tau \wedge ({m_{\theta}}-t)} G^{(\theta)}(s+t,X_{s+t}){{d}} s\right),\end{align*}

so that (since $|{G^{(\theta)}}|\leq 1$ ) we have that for all $t\geq 0$ and $x\in {\mathbb{R}}$ ,

\begin{align*}{\mathbb{E}}_{t,x}\!\left( \sup_{s\geq 0} \left| \int_0^{s \wedge ({m_{\theta}}-t)} G^{(\theta)}(r+t,X_{r+t}){{d}} r \right|\right)<\infty.\end{align*}

Moreover, as a consequence of the properties of $F^{(\theta)}$ we have that the function ${G^{(\theta)}}$ is upper semi-continuous, so we can see that ${V^{(\theta)}}$ is upper semi-continuous (since ${V^{(\theta)}}$ is the infimum of upper semi-continuous functions). Next, consider the Markov process $\{ (t,L_t,X_t), t\geq 0 \}$ , where

\begin{align*} L_t=\int_0^t {G^{(\theta)}}(s,X_{s}) {{d}} s, \qquad t\geq 0.\end{align*}

For each $t\geq 0$ and $a, x\in {\mathbb{R}}$ , we have that

\begin{align*} \widetilde{V}^{(\theta)} (t,a,x)\,:\!=\,\inf_{\tau \in \mathcal{T}} {\mathbb{E}}\!\left( L_{t+\tau}|L_t=a, X_t=x \right)={V^{(\theta)}}(t,x)+a.\end{align*}

Therefore, from the general theory of optimal stopping (see [21, Corollary 2.9, p. 46]) we have that an optimal stopping time for (2.10) is given by

(3.4) \begin{align}\inf\Big\{s\geq 0\,:\, \widetilde{V}^{(\theta)}(t+s,L_{t+s},X_{t+s})=L_{t+s}\Big\}=\inf\big\{s\geq 0\,:\, \big(t+s,X_{t+s}\big)\in D \big\}=\tau_D,\end{align}

where $D=\{(t,x)\in {\mathbb{R}}_+\times {\mathbb{R}}\,:\, {V^{(\theta)}}(t,x)=0 \}$ is a closed set.

Hence, from Lemma 3.1, we derive that $D=\{ (t,x) \in {\mathbb{R}}_+ \times {\mathbb{R}}\,:\, x \geq b^{(\theta)}(t)\}$ , where the function $b^{(\theta)}\,:\,{\mathbb{R}}_+ \mapsto {\mathbb{R}}$ is given by

\begin{align*}b^{(\theta)}(t)=\inf\{x \in {\mathbb{R}}\,:\, (t,x)\in D \}\end{align*}

for each $t\geq 0$ . It follows from Lemma 3.1 that $b^{(\theta)}$ is non-increasing and $b^{(\theta)}(t)\geq h^{(\theta)}(t)\geq 0$ for all $ t\geq 0$ . Moreover, ${b^{(\theta)}}(t)=-\infty$ for $t \in [{m_{\theta}},\infty)$ , since $V^{(\theta)}(t,x)=0$ for all $t\geq {m_{\theta}}$ and $x \in {\mathbb{R}}$ , giving us $\tau_D \leq ({m_{\theta}}-t)\vee 0$ . In the case that $t<{m_{\theta}}$ , we have that $b^{(\theta)}(t)$ is finite-valued, as we will prove in the following lemma.

Lemma 3.2. Let $\theta> 0$ . The function ${b^{(\theta)}}$ is finite-valued for all $t\in [0,{m_{\theta}})$ .

Proof. For any $\theta >0$ and fixed $t\geq 0$ , consider the optimal stopping problem

\begin{align*}\mathcal{V}_t^{(\theta)}(x)=\inf_{\tau \in \mathcal{T}_{{m_{\theta}} -t}} {\mathbb{E}}_x\!\left(\int_0^{\tau} [1+2e^{-\theta t}( F^{(\theta)}(X_s)-1)] {{d}} s \right), \qquad x\in {\mathbb{R}},\end{align*}

where $\mathcal{T}_{{m_{\theta}}-t}$ is the set of all stopping times of $\mathbb{F}$ bounded by ${m_{\theta}}-t$. From the fact that for all $s \geq 0$ and $x\in {\mathbb{R}}$, $G^{(\theta)}(s+t,x)\geq 1+2e^{-\theta t} (F^{(\theta)}(x)-1)$, and that $\tau_D\in \mathcal{T}_{{m_{\theta}}-t}$ (under ${\mathbb{P}}_{t,x}$ for all $x\in {\mathbb{R}}$ and $t<{m_{\theta}}$ ), we have that

(3.5) \begin{align}{V^{(\theta)}}(t,x)\geq \mathcal{V}_t^{(\theta)}(x)\end{align}

for all $x\in {\mathbb{R}}$ . Hence it suffices to show that there exists $\tilde{x}_t$ (finite) sufficiently large so that $\mathcal{V}_t^{(\theta)}(x)=0$ for all $x\geq \tilde{x}_t$ .

It can be shown that an optimal stopping time for $\mathcal{V}_t^{(\theta)}$ is $\tau_{\mathcal{D}_t}$ , the first entry time before ${m_{\theta}}-t$ to the set $\mathcal{D}_t=\{ x\in {\mathbb{R}}\,:\, \mathcal{V}_t^{(\theta)}(x)=0 \}$ . We proceed by contradiction. Assume that $\mathcal{D}_t=\emptyset$ ; then $\tau_{\mathcal{D}_t}={m_{\theta}}-t$ . Hence, by the dominated convergence theorem and the spatial homogeneity of Lévy processes, we have that

\begin{align*}0 \geq \lim_{x\rightarrow \infty} \mathcal{V}_t^{(\theta)}(x)= {\mathbb{E}}\!\left(\int_0^{{m_{\theta}}-t} \lim_{x\rightarrow \infty} \big[1+2e^{-\theta t}( F^{(\theta)}(X_s+x)-1)\big] {{d}} s \right)={m_{\theta}}-t>0,\end{align*}

which is a contradiction. Therefore $\mathcal{D}_t\neq \emptyset$, and since $\mathcal{V}_t^{(\theta)}$ is non-decreasing (by the same argument as in Lemma 3.1) there exists a finite value $\tilde{x}_t$ such that $\mathcal{V}_t^{(\theta)}(x)=0$ for all $x\geq \tilde{x}_t$; together with (3.5) and ${V^{(\theta)}}\leq 0$, this gives ${b^{(\theta)}}(t)\leq \tilde{x}_t$.

Remark 3.1. From the proof of Lemma 3.2, we find an upper bound of the boundary ${b^{(\theta)}}$ . Define, for each $t\in [0,{m_{\theta}})$ , $u^{(\theta)}(t)=\inf\{x \in {\mathbb{R}}\,:\, \mathcal{V}^{(\theta)}_t(x)=0 \}$ . Then it follows that $u^{(\theta)}$ is a non-increasing finite function such that

\begin{align*}u^{(\theta)}(t)\geq {b^{(\theta)}}(t)\end{align*}

for all $t \in [0,{m_{\theta}})$ .

Next we show that the function ${V^{(\theta)}}$ is continuous.

Lemma 3.3. The function ${V^{(\theta)}}$ is continuous. Moreover, for each $x\in {\mathbb{R}}$ , $t \mapsto {V^{(\theta)}}(t,x)$ is Lipschitz on ${\mathbb{R}}_+$ , and for every $t \in {\mathbb{R}}_+$ , $x \mapsto {V^{(\theta)}}(t,x)$ is Lipschitz on ${\mathbb{R}}$ .

Proof. First, we show that for a fixed $t\geq 0$, the function $x \mapsto V^{(\theta)}(t,x)$ is Lipschitz on ${\mathbb{R}}$. Recall that if $t\geq {m_{\theta}}$, then $V^{(\theta)}(t,x)=0$ for all $x\in {\mathbb{R}}$, so the assertion is clear. Suppose that $t<{m_{\theta}}$. Let $x, y \in {\mathbb{R}}$ and define $\tau_{x}^*= \tau_{D}(t,x)=\inf\{s\geq 0\,:\, X_s+x\geq {b^{(\theta)}}(s+t) \}$. Since $\tau_x^*$ is optimal in $V^{(\theta)}(t,x)$ (under ${\mathbb{P}}$ ), we have that

\begin{align*}V^{(\theta)}(t,y)&-V^{(\theta)}(t,x)\\&\leq {\mathbb{E}}\!\left(\int_0^{\tau_x^*} G^{(\theta)}(s+t,X_s+y){{d}} s\right)-{\mathbb{E}}\!\left(\int_0^{\tau_x^*} G^{(\theta)}(s+t,X_s+x){{d}} s\right)\\&={\mathbb{E}}\!\left(\int_0^{\tau_x^*} 2e^{-\theta (s+t)}[F^{(\theta)}(X_s+y)-F^{(\theta)}(X_s+x)]{{d}} s\right).\end{align*}

Define the stopping time

\begin{align*}\tau_{{b^{(\theta)}}(0)-x}^+=\inf \{t \geq 0\,:\, X_t \geq b^{(\theta)}(0)-x\}.\end{align*}

Then we have that $\tau_x^* \leq \tau_{b^{(\theta)}(0)-x}^+$ (since $b^{(\theta)}$ is a non-increasing function). From the fact that $F^{(\theta)}$ is non-decreasing, we obtain that for ${b^{(\theta)}}(0)\geq y\geq x$ ,

\begin{align*}V^{(\theta)}(t,y)-V^{(\theta)}(t,x)&\leq 2 {\mathbb{E}}\!\left(\int_0^{\tau_{{b^{(\theta)}}(0)-x}^+} e^{-\theta s}\big[F^{(\theta)}(X_s+y)-F^{(\theta)}(X_s+x)\big]{{d}} s\right).\end{align*}

Using Fubini’s theorem and a density of the potential measure of the process killed upon exiting $({-}\infty,{b^{(\theta)}}(0)]$ (see Equation (2.5)), we get that

\begin{align*}V^{(\theta)}&(t,y)-V^{(\theta)}(t,x)\\&\leq 2 \int_{-\infty}^{{b^{(\theta)}}(0)} \big[F^{(\theta)}(z+y-x)-F^{(\theta)}(z)\big]\int_0^{\infty}e^{-\theta s} {\mathbb{P}}_x\Big(X_s\in {{d}} z,\tau_{{b^{(\theta)}}(0)}^+>s\Big){{d}} s \\&= 2\int_{-\infty}^{{b^{(\theta)}}(0)} \big[F^{(\theta)}(z+y-x)-F^{(\theta)}(z)\big] \\&\qquad \qquad \times \left[e^{-\Phi(\theta)\big({b^{(\theta)}}(0)-x\big)} W^{(\theta)}({b^{(\theta)}}(0)-z)-W^{(\theta)}(x-z) \right] {{d}} z\\&\leq 2 e^{-\Phi(\theta)({b^{(\theta)}}(0)-x)} W^{(\theta)}\big({b^{(\theta)}}(0)-x+y\big)\int_{x-y}^{{b^{(\theta)}}(0)} \big[F^{(\theta)}(z+y-x)-F^{(\theta)}(z)\big] {{d}} z,\end{align*}

where in the last inequality we used the fact that $W^{(\theta)}$ is strictly increasing and non-negative and that $F^{(\theta)}$ vanishes on $({-}\infty,0)$. Since $F^{(\theta)}$ is non-decreasing, we obtain the estimate

\begin{align*}\int_{x-y}^{{b^{(\theta)}}(0)} \big[F^{(\theta)}(z+y-x)-F^{(\theta)}(z)\big] {{d}} z\leq (y-x)F^{(\theta)}\big({b^{(\theta)}}(0)+y-x\big).\end{align*}

Moreover, it can be checked (see [14, Lemma 3.3]) that $z\mapsto e^{-\Phi(\theta)z} W^{(\theta)}(z)$ is a continuous function on the interval $[0,\infty)$ such that

\begin{align*}\lim_{z\rightarrow \infty} e^{-\Phi(\theta)z}W^{(\theta)}(z)=\frac{1}{\psi'(\Phi(\theta))}<\infty.\end{align*}

This implies that there exists a constant $M>0$ such that $0\leq e^{-\Phi(\theta)z}W^{(\theta)}(z)<M$ for every $z\in {\mathbb{R}}$. Then we obtain that for all $x\leq y \leq {b^{(\theta)}}(0)$,

\begin{align*}0\leq {V^{(\theta)}}(t,y)-{V^{(\theta)}}(t,x)\leq 2M (y-x) e^{\Phi(\theta) y} \leq 2M (y-x) e^{\Phi(\theta) {b^{(\theta)}}(0)}.\end{align*}

On the other hand, since ${b^{(\theta)}}(0)\geq {b^{(\theta)}}(t)$ for all $t\in [0,{m_{\theta}})$ , we have that for all $(t,x) \in [0,{m_{\theta}}) \times [{b^{(\theta)}}(0),\infty)$ , ${V^{(\theta)}}(t,x)=0$ . Hence we obtain that for all $x,y \in {\mathbb{R}}$ and $t\geq 0$ ,

(3.6) \begin{align}\big|V^{(\theta)}(t,y)-V^{(\theta)}(t,x)\big| \leq 2 M |y-x| e^{\Phi(\theta) {b^{(\theta)}}(0)}.\end{align}

Therefore we conclude that for a fixed $t\geq 0$ , the function $x \mapsto {V^{(\theta)}}(t,x)$ is Lipschitz on ${\mathbb{R}}$ .

Using a similar argument and the fact that the function $t \mapsto e^{-\theta t}$ is Lipschitz continuous on $[0,\infty)$ , we can show that for any $s,t< {m_{\theta}}$ ,

\begin{align*}\big|V^{(\theta)}(s,x)-V^{(\theta)}(t,x)\big| \leq 2\theta {m_{\theta}}|s-t|,\end{align*}

and therefore $t\mapsto {V^{(\theta)}}(t,x)$ is Lipschitz continuous for all $x\in {\mathbb{R}}$ .

In order to derive more properties of the boundary ${b^{(\theta)}}$ , we first state some auxiliary results. Recall that if $f \in C_b^{1,2}({\mathbb{R}}_+ \times {\mathbb{R}})$ , the set of real bounded $C^{1,2}$ functions on $ {\mathbb{R}}_+ \times {\mathbb{R}}$ with bounded derivatives, the infinitesimal generator of (t, X) is given by

(3.7) \begin{align}\mathcal{A}_{(t,X)} (f)(t,x)&=\frac{\partial }{\partial t} f(t,x)-\mu \frac{\partial }{\partial x} f(t,x)+\frac{1}{2}\sigma^2 \frac{\partial^2}{\partial x^2} f (t,x)\nonumber\\&\qquad+ \int_{({-}\infty,0)}[f(t,x+y)-f(t,x)-y{\mathbb{I}}_{\{y>-1 \}} \frac{\partial}{\partial x} f(t,x)]\Pi({{d}} y).\end{align}

Let $C=({\mathbb{R}}_+ \times {\mathbb{R}}) \setminus D=\{ (t,x) \in {\mathbb{R}}_+ \times {\mathbb{R}}\,:\, x<{b^{(\theta)}}(t)\}$ be the continuation region. Then we have that the value function ${V^{(\theta)}}$ satisfies a variational inequality in the sense of distributions. The proof is analogous to the one presented in [16, Proposition 2.5], so the details are omitted.

Lemma 3.4. Fix $\theta > 0$ . The distribution $\mathcal{A}_{(t,X)} {V^{(\theta)}} +{G^{(\theta)}}$ is non-negative on ${\mathbb{R}}_+\times {\mathbb{R}}$ . Moreover, we have that $ \mathcal{A}_{(t,X)} {V^{(\theta)}} +{G^{(\theta)}}=0$ on C.

Next, for $\theta>0$, we define an auxiliary function on the set D which will be useful in proving the left-continuity of the boundary ${b^{(\theta)}}$. Let

(3.8) \begin{align}\varphi^{(\theta)}(t,x)= \int_{({-}\infty,0)} {V^{(\theta)}}(t,x+y) \Pi( {{d}} y)+{G^{(\theta)}}(t,x), \qquad (t,x)\in D.\end{align}

From the fact that ${V^{(\theta)}}$ vanishes on D and that $\Pi$ is finite on sets of the form $({-}\infty,-\varepsilon)$ for $\varepsilon>0$, we can see that $|\varphi^{(\theta)}(t,x)|<\infty$ for all (t, x) in the interior of D. Moreover, by the lemma above and the properties of ${V^{(\theta)}}$ and ${G^{(\theta)}}$, it can be shown that $\varphi^{(\theta)}$ is strictly positive, continuous, and strictly increasing (in each argument) in the interior of D.

Now we are ready to give further properties of the curve ${b^{(\theta)}}$ in the set $[0,{m_{\theta}})$ .

Lemma 3.5. The function $b^{(\theta)}$ is continuous on $[0,{m_{\theta}})$. Moreover, we have that $\lim_{t\uparrow {m_{\theta}}} {b^{(\theta)}}(t)=0$.

Proof. The method of proof of the continuity of ${b^{(\theta)}}$ on $[0,{m_{\theta}})$ is heavily based on the work of [16] (see Theorem 4.2, where the continuity of the boundary is shown in the American option context), so the proof is omitted.

We then show that the limit holds. Define ${b^{(\theta)}}({m_{\theta}}{-})\,:\!=\,\lim_{t\uparrow {m_{\theta}}} {b^{(\theta)}}(t)$. We obtain ${b^{(\theta)}}({m_{\theta}}{-})\geq 0$, since ${b^{(\theta)}}(t) \geq {h^{(\theta)}}(t)\geq 0$ for all $t\in [0,{m_{\theta}})$. The proof is by contradiction, so we assume that ${b^{(\theta)}}({m_{\theta}}{-})>0$. Note that ${V^{(\theta)}}({m_{\theta}},x)=0$ and ${G^{(\theta)}}({m_{\theta}},x)=F^{(\theta)}(x)$ for all $x \in {\mathbb{R}}$. Moreover, we have that

\begin{align*}\mathcal{A}_X \big({V^{(\theta)}}\big)+{G^{(\theta)}}=- \partial_t {V^{(\theta)}} \leq 0\end{align*}

in the sense of distributions on $(0,{m_{\theta}})\times \big(0,{b^{(\theta)}}({m_{\theta}}{-}) \big)$. By continuity, we can derive, for $t\in [0,{m_{\theta}})$, that $\mathcal{A}_X \big({V^{(\theta)}}\big)(t,\cdot)+{G^{(\theta)}}(t,\cdot) \leq 0 $ on the interval $\big(0,{b^{(\theta)}}({m_{\theta}}{-})\big)$. Hence, by taking $t\uparrow {m_{\theta}}$ we obtain that

\begin{align*}0\geq \lim_{t\uparrow {m_{\theta}}} \mathcal{A}_X \big({V^{(\theta)}}\big)(t,\cdot)+{G^{(\theta)}}(t,\cdot)=F^{(\theta)} >0\end{align*}

in the sense of distributions, where we used the continuity of ${V^{(\theta)}}$ and ${G^{(\theta)}}$, the fact that ${V^{(\theta)}}({m_{\theta}},x)=0$ for all $x\in {\mathbb{R}}$, and the fact that $F^{(\theta)}(x)>0$ for all $x>0$. This is a contradiction, and we conclude that ${b^{(\theta)}}({m_{\theta}}{-})=0$.

Define the value

(3.9) \begin{align}t_b\,:\!=\,\inf\big\{t\geq 0\,:\, {b^{(\theta)}}(t)\leq 0 \big\}.\end{align}

Note that in the case where X is a process of infinite variation, we have that the distribution function of the random variable $-\underline{X}_{{e_{\theta}}}$ , $F^{(\theta)}$ , is continuous on ${\mathbb{R}}$ , strictly increasing, and strictly positive in the open set $(0,\infty)$ , with $F^{(\theta)}(0)=0$ . This fact implies that the inverse function of $F^{(\theta)}$ exists on $(0,\infty)$ , and the function ${h^{(\theta)}}$ can be written for $t\in [0,{m_{\theta}})$ as

\begin{align*}h^{(\theta)}(t)= \big(F^{(\theta)}\big)^{-1}\bigg( 1-\frac{1}{2 } e^{\theta t}\bigg).\end{align*}

Hence we conclude that ${h^{(\theta)}}(t)>0$ for all $t\in [0,{m_{\theta}})$ . Therefore, when X is a process of infinite variation, we have ${b^{(\theta)}}(t)>0$ for all $t\in [0,{m_{\theta}})$ and hence $t_b={m_{\theta}}$ . For the case of finite variation, we have that $t_b \in [0,{m_{\theta}})$ , which implies that ${b^{(\theta)}}(t)=0$ for all $t\in [t_b,{m_{\theta}})$ and ${b^{(\theta)}}(t)>0$ for all $t\in [0,t_b)$ . In the next lemma, we characterise its value.

Lemma 3.6. Let $\theta>0$ and let X be a process of finite variation. We have that for all $t\geq 0$ and $x\in {\mathbb{R}}$ ,

\begin{align*}\int_{({-}\infty,0)} \big[{V^{(\theta)}}(t,x+y)-{V^{(\theta)}}(t,x)\big]\Pi({{d}} y)>-\infty.\end{align*}

Moreover, for any Lévy process, $t_b$ is given by

(3.10) \begin{align}t_b=\inf \!\left\{ t\in [0,{m_{\theta}}]\,:\,\int_{({-}\infty,0)}V^{(\theta)}_B(t,y)\Pi({{d}} y) +{G^{(\theta)}}(t,0)\geq 0 \right\},\end{align}

where $V^{(\theta)}_B$ is given by

\begin{align*}V^{(\theta)}_B(t,y)= {\mathbb{E}}_{y}\big(\tau_0^+\wedge ({m_{\theta}}-t)\big)-\frac{2}{\theta}e^{-\theta t}\big[1-{\mathbb{E}}_{y}\big(e^{-\theta (\tau_0^+ \wedge ({m_{\theta}}-t))}\big)\big]\end{align*}

for all $t\in [0,{m_{\theta}})$ and $y \in {\mathbb{R}}$ .

Proof. Assume that X is a process of finite variation. We first show that

\begin{align*}\int_{({-}\infty,0)} \big[{V^{(\theta)}}(t,x+y)-{V^{(\theta)}}(t,x)\big]\Pi({{d}} y)>-\infty\end{align*}

for all $t\geq 0$ and $x\in {\mathbb{R}}$ . The case $t\geq {m_{\theta}}$ is straightforward since ${V^{(\theta)}}(t,x)=0$ for all $x\in {\mathbb{R}}$ . The case $t<{m_{\theta}}$ follows from the Lipschitz continuity of the mapping $x\mapsto {V^{(\theta)}}(t,x)$ , the fact that $\Pi$ is finite on intervals away from zero, and since $\int_{({-}1,0)} y\Pi({{d}} y)>-\infty$ when X is of finite variation. Moreover, from Lemma 3.4, we obtain that

\begin{align*}\int_{({-}\infty,0)} \big[{V^{(\theta)}}(t,x+y)-{V^{(\theta)}}(t,x)\big] \Pi({{d}} y)+{G^{(\theta)}}(t,x)&=- \frac{\partial}{\partial t} {V^{(\theta)}}(t,x) -\delta \frac{\partial}{\partial x} {V^{(\theta)}}(t,x)\\&\leq 0\end{align*}

on C in the sense of distributions, where the last inequality follows since ${V^{(\theta)}}$ is non-decreasing in each argument and $\delta>0$ is defined in (2.3). Then by the continuity of the functions ${V^{(\theta)}}$ and ${G^{(\theta)}}$ (recall that ${G^{(\theta)}}$ is at least continuous on $(0,\infty)\times (0,\infty)$ and right-continuous at points of the form (t, 0) for $t\geq 0$ ) we can derive

(3.11) \begin{align}\int_{({-}\infty,0)} \big[{V^{(\theta)}}(t,y)-{V^{(\theta)}}(t,0)\big] \Pi({{d}} y)+{G^{(\theta)}}(t,0) \leq 0\end{align}

for all $t\in [0,t_b)$ .

Next, we show that the set $\big\{t\in [0,{m_{\theta}})\,:\, {b^{(\theta)}}(t)=0 \big\}$ is non-empty. We proceed by contradiction. Assume that ${b^{(\theta)}}(t)>0$ for all $t\in [0,{m_{\theta}})$ , so that $t_b={m_{\theta}}$ . Taking $t\uparrow {m_{\theta}}$ in (3.11) and applying the dominated convergence theorem, we obtain that

\begin{align*}0&\geq \lim_{t\uparrow {m_{\theta}} }\left\{ \int_{({-}\infty,0)} \big[{V^{(\theta)}}(t,y)-{V^{(\theta)}}(t,0)\big] \Pi({{d}} y)+{G^{(\theta)}}(t,0) \right\}\\&={G^{(\theta)}}({m_{\theta}},0)=F^{(\theta)}(0)>0,\end{align*}

where the strict inequality follows from

$$F^{(\theta)}(0)=\frac{\theta}{\Phi(\theta)} W^{(\theta)}(0)=\frac{\theta}{\delta \Phi(\theta)}>0$$

since X is of finite variation. This is a contradiction, which shows that $\big\{t\in [0,{m_{\theta}})\,:\, {b^{(\theta)}}(t)=0 \big\}\neq \emptyset$. Moreover, by definition, we have that $t_b=\inf\big\{ t\in [0,{m_{\theta}})\,:\, {b^{(\theta)}}(t)=0\big\}$.

Next we find an expression for ${V^{(\theta)}}(t,x)$ when $t\in (0,{m_{\theta}})$ and $x< 0$ . Since ${b^{(\theta)}}(t)\geq 0$ for all $t\in [0,{m_{\theta}})$ , we have that

(3.12) \begin{align}{V^{(\theta)}}(t,x)&={\mathbb{E}}_x\!\left(\int_0^{\tau_{0}^+ \wedge ({m_{\theta}}-t)} (1-2e^{-\theta (t+s)} ) {{d}} s \right)+{\mathbb{E}}_x\!\left({\mathbb{I}}_{\big\{\tau_{0}^+<{m_{\theta}}-t \big\}} {V^{(\theta)}}\big(t+\tau_0^+,0\big) \right)\nonumber\\&=V^{(\theta)}_B(t,x)+{\mathbb{E}}_x\!\left({\mathbb{I}}_{\big\{\tau_{0}^+<{m_{\theta}}-t \big\}} {V^{(\theta)}}\big(t+\tau_0^+,0\big) \right) ,\end{align}

where the first equality follows since $X_s\leq 0$ for all $s\leq \tau_0^+$ and $G^{(\theta)}(t,x)=1-2e^{-\theta t}$ for all $x<0$. Hence, in particular, we have that ${V^{(\theta)}}(t,x)=V^{(\theta)}_B(t,x)$ for all $t\in [t_b,{m_{\theta}})$ and $x\in {\mathbb{R}}$.

We show that (3.10) holds. From the discussion after Lemma 3.4, we know that

\begin{align*}\varphi^{(\theta)}(t,x)=\int_{({-}\infty,0)} {V^{(\theta)}}(t,x+y)\Pi ({{d}} y)+{G^{(\theta)}}(t,x) > 0\end{align*}

for all $x>0$ and $t\geq t_b$. Then by taking $x\downarrow 0$, making use of the right continuity of $x\mapsto G^{(\theta)}(t,x)$ and the continuity of ${V^{(\theta)}}$ (see Lemma 3.3), and applying the dominated convergence theorem, we derive that

\begin{align*}\int_{({-}\infty,0)} {V^{(\theta)}}(t_b,y)\Pi ({{d}} y)+{G^{(\theta)}}(t_b,0) \geq 0.\end{align*}

In particular, if $t_b=0$ , (3.10) holds since $t\mapsto {V^{(\theta)}}(t,y)$ (for all $y\in {\mathbb{R}}$ ) and ${G^{(\theta)}}(t,0)$ are non-decreasing functions. If $t_b>0$ , taking $t\uparrow t_b$ in (3.11) gives us

\begin{align*}\int_{({-}\infty,0)} {V^{(\theta)}}(t_b,y)\Pi({{d}} y)+{G^{(\theta)}}(t_b,0)\leq 0.\end{align*}

Hence we have that $\int_{({-}\infty,0)}V^{(\theta)}_B(t_b,y)\Pi({{d}} y)+{G^{(\theta)}}(t_b,0)=0$, and (3.10) follows because $t\mapsto V^{(\theta)}_B(t,x)$ is non-decreasing. If X is a process of infinite variation, we have that ${h^{(\theta)}}(t)>0$ for all $t\in [0,{m_{\theta}})$ and therefore ${G^{(\theta)}}(t,x)< 0$ for all $t\in [0,{m_{\theta}})$ and $x\leq 0$, which implies that

\begin{align*}t_b={m_{\theta}}= \inf \!\left\{ t\in [0,{m_{\theta}}]\,:\,\int_{({-}\infty,0)}V^{(\theta)}_B(t,y)\Pi({{d}} y) +{G^{(\theta)}}(t,0)\geq 0 \right\}.\end{align*}
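The function $V^{(\theta)}_B$ appearing in Lemma 3.6 only involves the stopped first passage time $\tau_0^+\wedge ({m_{\theta}}-t)$, so it is easy to estimate by simulation. The sketch below (Brownian-motion-with-drift example, arbitrary parameters, Euler discretisation) is an illustration of this, not part of the original proof.

```python
# Illustrative sketch: Monte Carlo evaluation of V_B^{(theta)}(t, y) for a
# Brownian motion with drift, simulating tau_0^+ truncated at the
# deterministic horizon m_theta - t.
import numpy as np

rng = np.random.default_rng(11)
c, sigma, theta = 1.0, 1.0, 0.5       # arbitrary example parameters
m_theta = np.log(2.0) / theta
dt, n_paths = 1e-3, 4000

def V_B(t, y):
    horizon = m_theta - t
    n = int(horizon / dt)
    taus = np.empty(n_paths)
    for i in range(n_paths):
        path = y + np.cumsum(c * dt + sigma * np.sqrt(dt)
                             * rng.standard_normal(n))
        up = np.nonzero(path >= 0.0)[0]
        taus[i] = dt * (up[0] + 1) if up.size else horizon  # tau_0^+ ∧ horizon
    return taus.mean() - (2.0 / theta) * np.exp(-theta * t) * (
        1.0 - np.mean(np.exp(-theta * taus)))

print(V_B(0.0, -0.5))  # negative: the integrand 1 - 2e^{-theta s} < 0 before m_theta
```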

Now we prove that the derivatives of ${V^{(\theta)}}$ exist at the boundary ${b^{(\theta)}}$ for those points in which ${b^{(\theta)}}$ is strictly positive.

Lemma 3.7. For all $t\in [0,t_b)$ , the partial derivatives of ${V^{(\theta)}}(t,x)$ at $\big(t,{b^{(\theta)}}(t)\big)$ exist and are equal to zero, i.e.,

\begin{align*}\frac{\partial}{\partial t} {V^{(\theta)}}\big(t,{b^{(\theta)}}(t)\big)=0 \qquad \text{and} \qquad \frac{\partial}{\partial x} {V^{(\theta)}}\big(t,{b^{(\theta)}}(t)\big)=0.\end{align*}

Proof. First we prove the assertion in the first argument. Using a similar idea as in Lemma 3.3, we have that for any $t<t_b$ , $x\in {\mathbb{R}}$ , and $h>0$ ,

\begin{align*}0&\leq \frac{ {V^{(\theta)}}\big(t,{b^{(\theta)}}(t)\big)- {V^{(\theta)}}(t-h,{b^{(\theta)}}(t)) }{h}\\&\qquad \qquad \leq 2 {\mathbb{E}}_{{b^{(\theta)}}(t)}\left( \int_{0}^{\tau^*_{h} } \frac{\big[e^{-\theta(r+t-h)}-e^{-\theta(r+t)}\big]}{h} {{d}} r\right)\\& \qquad\qquad\leq 2 {\mathbb{E}}_{{b^{(\theta)}}(t)}\left( \int_{0}^{\tau^+_{{b^{(\theta)}}(t-h)} } \frac{\big[e^{-\theta(r+t-h)}-e^{-\theta(r+t)}\big]}{h} {{d}} r\right)\\&\qquad\qquad = \frac{\big[e^{-\theta(t-h)}-e^{-\theta t}\big]}{h} \frac{1}{\theta} {\mathbb{E}}_{{b^{(\theta)}}(t)}\left( 1-e^{-\theta \tau^+_{{b^{(\theta)}}(t-h)}} \right),\end{align*}

where $\tau^*_h=\inf\{r \in [0,{m_{\theta}}-t+h]\,:\, X_r \geq {b^{(\theta)}}(r+t-h) \}$ is the optimal stopping time for ${V^{(\theta)}}(t-h,x)$ and the second inequality follows since ${b^{(\theta)}}$ is non-increasing. Hence, by (2.1) and the continuity of ${b^{(\theta)}}$, we obtain that

\begin{align*} \lim_{h\downarrow 0} \frac{ {V^{(\theta)}}\big(t,{b^{(\theta)}}(t)\big)- {V^{(\theta)}}(t-h,{b^{(\theta)}}(t)) }{h}= 0.\end{align*}

Now we show that the partial derivative of the second argument exists at ${b^{(\theta)}}(t)$ and is equal to zero. Fix any time $t \in [0,t_b)$ , $\varepsilon>0$ , and $x\leq b^{(\theta)}(t)$ (without loss of generality, we assume that $\varepsilon<x$ ). By a similar argument to that provided in Lemma 3.3, we obtain that

(3.13) \begin{align}V^{(\theta)}&(t,x)-V^{(\theta)}(t,x-\varepsilon)\nonumber\\& \leq 2 \int_{-\infty}^{{b^{(\theta)}}(t)} \big[F^{(\theta)}(z+\varepsilon)-F^{(\theta)}(z)\big]\nonumber\\&\qquad\qquad \times \left[e^{-\Phi(\theta)({b^{(\theta)}}(t)-x+\varepsilon)} W^{(\theta)}({b^{(\theta)}}(t)-z)-W^{(\theta)}(x-\varepsilon-z) \right]{{d}} z \nonumber\\& = 2 e^{-\Phi(\theta)({b^{(\theta)}}(t)-x+\varepsilon)} \int_{x-\varepsilon}^{{b^{(\theta)}}(t)} \big[F^{(\theta)}(z+\varepsilon)-F^{(\theta)}(z)\big] W^{(\theta)}({b^{(\theta)}}(t)-z){{d}} z\nonumber\\&\qquad + 2 \int_{0}^{x-\varepsilon} \big[F^{(\theta)}(z+\varepsilon)-F^{(\theta)}(z)\big] \nonumber\\&\qquad\qquad \times \left[e^{-\Phi(\theta)({b^{(\theta)}}(t)-x+\varepsilon)} W^{(\theta)}({b^{(\theta)}}(t)-z)-W^{(\theta)}(x-\varepsilon-z) \right]{{d}} z\nonumber\\&\qquad +2 \int_{-\varepsilon}^{0} F^{(\theta)}(z+\varepsilon)\nonumber\\&\qquad\qquad\times \left[e^{-\Phi(\theta)({b^{(\theta)}}(t)-x+\varepsilon)} W^{(\theta)}({b^{(\theta)}}(t)-z)-W^{(\theta)}(x-\varepsilon-z) \right]{{d}} z .\end{align}

Dividing by $\varepsilon$ , we have that for $t\in [0,{m_{\theta}})$ and $\varepsilon<x $ ,

\begin{align*}0\leq \frac{ V^{(\theta)}(t,x)-V^{(\theta)}(t,x-\varepsilon)}{\varepsilon}\leq R_1^{(\varepsilon)}(t,x)+R_2^{(\varepsilon)}(t,x)+R_3^{(\varepsilon)}(t,x),\end{align*}

where

\begin{align*} R_1^{(\varepsilon)}(t,x)&= 2 e^{-\Phi(\theta)\big({b^{(\theta)}}(t)-x+\varepsilon\big)} \frac{1}{\varepsilon} \int_{x-\varepsilon}^{{b^{(\theta)}}(t)} \big[F^{(\theta)}(z+\varepsilon)-F^{(\theta)}(z)\big] W^{(\theta)}({b^{(\theta)}}(t)-z){{d}} z,\\[3pt] R_2^{(\varepsilon)}(t,x)&= 2\frac{1}{\varepsilon} \int_{0}^{x-\varepsilon} \big[F^{(\theta)}(z+\varepsilon)-F^{(\theta)}(z)\big]\\[3pt]&\qquad \qquad \times \left[e^{-\Phi(\theta)({b^{(\theta)}}(t)-x+\varepsilon)} W^{(\theta)}({b^{(\theta)}}(t)-z)-W^{(\theta)}(x-\varepsilon-z) \right]{{d}} z,\\[3pt]R_3^{(\varepsilon)}(t,x)&= 2 \frac{1}{\varepsilon}\int_{-\varepsilon}^{0} F^{(\theta)}(z+\varepsilon)\\[3pt]&\qquad \qquad \times \left[e^{-\Phi(\theta)({b^{(\theta)}}(t)-x+\varepsilon)} W^{(\theta)}\big({b^{(\theta)}}(t)-z\big)-W^{(\theta)}(x-\varepsilon-z) \right]{{d}} z .\end{align*}

By using that $W^{(\theta)}$ and $F^{(\theta)}$ are non-decreasing, that $W^{(\theta)}$ (and hence $F^{(\theta)}$ ) has left and right derivatives, and the dominated convergence theorem, we can show that for $t\in [0,t_b)$, $\lim_{\varepsilon \downarrow 0} R_i^{(\varepsilon)}\big(t,{b^{(\theta)}}(t)\big)=0$ for $i=1,2, 3$. Hence, we have that

\begin{align*}\lim_{\varepsilon \downarrow 0} \frac{V^{(\theta)}\big(t,{b^{(\theta)}}(t)\big)-V^{(\theta)}\big(t,{b^{(\theta)}}(t)-\varepsilon\big)}{\varepsilon} = 0.\end{align*}

This proves that $x \mapsto V^{(\theta)}(t,x)$ is differentiable at $b^{(\theta)}(t)$, with $ (\partial/\partial x) V^{(\theta)}\big(t,b^{(\theta)}(t)\big)=0$ for $t\in [0,t_b)$.

Remark 3.2. Note that when X is of infinite variation we have that $W^{(\theta)}$ (and hence $F^{(\theta)}$ ) is a $C^1(0,\infty)$ function (see Lemma 8.2 and the discussion thereafter in [15, pp. 240--241]). Hence, we deduce from the mean value theorem and Equation (3.13) that for each $t \in [0,{m_{\theta}})$, there exists a constant $C_t>0$ such that

\begin{align*} {V^{(\theta)}}\big(t,{b^{(\theta)}}(t)-\varepsilon\big) \geq -\varepsilon^2 C_t\end{align*}

for any $\varepsilon>0$ . Therefore, by Lemma 3.6 and the above, we deduce that $|\varphi\big(t,{b^{(\theta)}}(t)\big)|<\infty $ for all $t\in [0,{m_{\theta}})$ for any spectrally negative Lévy process X.

The next theorem shows how the value function ${V^{(\theta)}}$ and the curve ${b^{(\theta)}}$ can be characterised as the solution of nonlinear integral equations within a certain family of functions. These equations are in fact generalisations of the free boundary equation (see e.g. [21, Section 14.1, pp. 219--221], in a diffusion setting) in the presence of jumps. It is important to mention that the proof of Theorem 3.1 is mainly inspired by the ideas of [10], with some extensions to allow for the presence of jumps.

Theorem 3.1. Let X be a spectrally negative Lévy process and let $t_b$ be as characterised in (3.10). For all $t\in [0,t_b)$ and $x\in {\mathbb{R}}$ , we have that

(3.14) \begin{align}{V^{(\theta)}}&(t,x)\nonumber\\&={\mathbb{E}}_x\!\left( \int_0^{{m_{\theta}}-t } {G^{(\theta)}}(r+t,X_r){\mathbb{I}}_{\{X_r <{b^{(\theta)}}(r+t) \}}{{d}} r \right)\nonumber \\&\qquad-{\mathbb{E}}_x\!\left( \int_0^{{m_{\theta}}-t} \int_{\big({-}\infty,{b^{(\theta)}}(r+t)-X_r\big)} {V^{(\theta)}}(r+t,X_r+y)\Pi({{d}} y) {\mathbb{I}}_{\big\{X_r> {b^{(\theta)}}(r+t) \big\}}{{d}} r \right)\end{align}

and ${b^{(\theta)}}(t)$ solves the equation

(3.15) \begin{align}0&={\mathbb{E}}_{{b^{(\theta)}}(t)}\!\left( \int_0^{{m_{\theta}}-t } {G^{(\theta)}}(r+t,X_r){\mathbb{I}}_{\{X_r <{b^{(\theta)}}(r+t) \}}{{d}} r \right)\nonumber \\&\text{ }-{\mathbb{E}}_{{b^{(\theta)}}(t)}\!\left( \int_0^{{m_{\theta}}-t} \int_{\big({-}\infty,{b^{(\theta)}}(r+t)-X_r\big)} {V^{(\theta)}}(r+t,X_r+y)\Pi({{d}} y) {\mathbb{I}}_{\big\{X_r> {b^{(\theta)}}(r+t) \big\}}{{d}} r \right).\end{align}

If $t\in [t_b,{m_{\theta}})$ , we have that ${b^{(\theta)}}(t)=0$ and

(3.16) \begin{align}{V^{(\theta)}}(t,x)= {\mathbb{E}}_x\big(\tau_0^+\wedge ({m_{\theta}}-t)\big)-\frac{2}{\theta}e^{-\theta t}\big[1-{\mathbb{E}}_x\big(e^{-\theta (\tau_0^+ \wedge ({m_{\theta}}-t))}\big)\big]\end{align}

for all $x\in {\mathbb{R}}$ . Moreover, the pair $\big({V^{(\theta)}},{b^{(\theta)}}\big)$ is uniquely characterised as the solution to Equations (3.14)–(3.16) in the class of continuous functions on ${\mathbb{R}}_+\times{\mathbb{R}}$ and ${\mathbb{R}}_+$ , respectively, such that ${b^{(\theta)}}\geq {h^{(\theta)}}$ , ${V^{(\theta)}}\leq 0$ , and $\int_{({-}\infty,0)} {V^{(\theta)}}(t,x+y)\Pi({{d}} y) +{G^{(\theta)}}(t,x) \geq 0$ for all $t\in [0,t_b)$ and $x\geq {b^{(\theta)}}(t)$ .

3.1. Proof of Theorem 3.1

Since the proof of Theorem 3.1 is rather long, we split it into a series of lemmas. This subsection is entirely dedicated to this purpose. With the help of Itô’s formula, and following an analogous argument to that of [Reference Lamberton and Mikou17] (in the infinite-variation case), we prove that ${V^{(\theta)}}$ and ${b^{(\theta)}}$ are solutions to the integral equations listed above. The finite-variation case is proved using an argument that considers the consecutive times at which X hits the curve ${b^{(\theta)}}$ .

Lemma 3.8. The pair $\big({V^{(\theta)}},{b^{(\theta)}}\big)$ is a solution to Equations (3.14)–(3.16).

Proof. Recall from Lemma 3.6 that when $t_b<{m_{\theta}}$ , the value function ${V^{(\theta)}}(t,x)$ satisfies Equation (3.16) for $t\in [t_b,{m_{\theta}})$ and $x\in {\mathbb{R}}$ . We also have that Equation (3.15) follows from (3.14) by letting $x={b^{(\theta)}}(t)$ and using that ${V^{(\theta)}}\big(t,{b^{(\theta)}}(t)\big)=0$ .

We proceed to show that $({V^{(\theta)}},{b^{(\theta)}})$ solves Equation (3.14). First, we assume that X is a process of infinite variation. We follow an argument analogous to that used for [Reference Lamberton and Mikou17, Theorem 3.2]. Consider a regularising sequence $\{\rho_n \}_{n\geq 1}$ of non-negative $C^{\infty}({\mathbb{R}}_+\times {\mathbb{R}})$ functions with support in $[{-}1/n,0]\times [{-}1/n,0]$ such that $\int_{-\infty}^0 \int_{-\infty}^0 \rho_n(s,y){{d}} s {{d}} y=1$ . For every $n\geq 1$ , define the function $V^{(\theta)}_n$ by

\begin{align*}V^{(\theta)}_n(t,x)=({V^{(\theta)}} \ast \rho_n)(t,x)=\int_{-\infty}^0 \int_{-\infty}^0 {V^{(\theta)}}(t+s,x+y)\rho_n(s,y) {{d}} s {{d}} y\end{align*}

for any $(t,x)\in [1/n,\infty)\times {\mathbb{R}}$ . Then for each $n\geq 1$ , the function $V^{(\theta)}_n$ is a $C^{1,2}({\mathbb{R}}_+\times {\mathbb{R}})$ bounded function (since ${V^{(\theta)}}$ is bounded). Moreover, it can be shown that $ V^{(\theta)}_n \uparrow {V^{(\theta)}}$ on ${\mathbb{R}}_+\times {\mathbb{R}}$ when $n\rightarrow \infty$ and that (see the proof of [Reference Lamberton and Mikou16, Proposition 2.5])

(3.17) \begin{align}\frac{\partial}{\partial t}V^{(\theta)}_n(t,x)+\mathcal{A}_{X}\big(V^{(\theta)}_n\big)(t,x) =-\big({G^{(\theta)}}*\rho_n\big)(t,x) \hspace{0.2in}\text{for all } (t,x) \in \big([1/n,\infty)\times{\mathbb{R}}\big) \cap C,\end{align}

where $\mathcal{A}_{X}$ is the infinitesimal generator of X given in (3.7) and $C={\mathbb{R}}_+\times {\mathbb{R}} \setminus D$ . Let $t\in (0,t_b]$ , let $m>0$ be such that $t>1/m$ , and let $x\in {\mathbb{R}}$ . Applying Itô’s formula to $V^{(\theta)}_n(t+s,X_{s}+x )$ , for $s\in[0,{m_{\theta}}-t]$ , we obtain that for any $n\geq m$ ,

\begin{align*}V^{(\theta)}_n(s+t, X_{s}+x)&= V^{(\theta)}_n(t,x)+ M_{s}^{t,n}\\&\qquad+\int_0^{s } \left[ \frac{\partial}{\partial t} V^{(\theta)}_n(r+t,X_r+x) + \mathcal{A}_{X}\big( V^{(\theta)}_n\big)(r+t,X_r+x)\right]{{d}} r,\end{align*}

where $\{ M_{s}^{t,n}, s\geq 0 \}$ is a zero-mean martingale. Hence, taking the expectation and using (3.17), we derive that

\begin{align*}{\mathbb{E}}(V^{(\theta)}_n&(s+t, X_{s}+x)) \\&= V^{(\theta)}_n(t,x)+{\mathbb{E}}\!\left(\int_0^{s } \left[ \frac{\partial}{\partial t} V^{(\theta)}_n(r+t,X_r+x) + \mathcal{A}_{X}\big( V^{(\theta)}_n\big)(r+t,X_r+x)\right]{{d}} r \right)\\&= V^{(\theta)}_n(t,x)-{\mathbb{E}}\!\left(\int_0^{s } \big({G^{(\theta)}}*\rho_n\big)(r+t,X_r+x){\mathbb{I}}_{\{X_r <{b^{(\theta)}}(r+t) \}}{{d}} r \right)\\&\qquad + {\mathbb{E}}\!\left(\int_0^{s } \int_{({-}\infty,0)} V^{(\theta)}_n (r+t,X_r+x+y) \Pi({{d}} y){\mathbb{I}}_{\{X_r >{b^{(\theta)}}(r+t) \}}{{d}} r \right),\end{align*}

where we used the fact that ${b^{(\theta)}}(s)$ is finite for all $s\geq 0$ and that ${\mathbb{P}}\big(X_s+x={b^{(\theta)}}(t+s)\big)=0$ for all $s>0$ and $x\in {\mathbb{R}}$ when X is of infinite variation (see [Reference Sato22, Theorem 27.4, p. 175]). Taking $s={m_{\theta}}-t$ , using the fact that ${V^{(\theta)}}({m_{\theta}},x)=0$ for all $x\in {\mathbb{R}}$ , and letting $n\rightarrow \infty$ (by the dominated convergence theorem), we obtain that (3.14) holds for any $(t,x)\in (0,t_b)\times{\mathbb{R}}$ . The case when $t=0$ follows by continuity.

For the finite-variation case, we define the auxiliary function

\begin{align*}R^{(\theta)}(t,x)&={\mathbb{E}}_x\!\left( \int_0^{{m_{\theta}}-t } {G^{(\theta)}}(r+t,X_r){\mathbb{I}}_{\{X_r <{b^{(\theta)}}(r+t) \}}{{d}} r \right)\nonumber \\&\qquad-{\mathbb{E}}_x\!\left( \int_0^{{m_{\theta}}-t} \int_{({-}\infty,0)} {V^{(\theta)}}(r+t,X_r+y)\Pi({{d}} y) {\mathbb{I}}_{\{X_r>{b^{(\theta)}}(r+t) \}}{{d}} r \right)\end{align*}

for all $(t,x)\in {\mathbb{R}}_+\times {\mathbb{R}}$ . We then prove that $R^{(\theta)}={V^{(\theta)}}$ . First, note that from the discussion after Lemma 3.4 we have that $\int_{({-}\infty,0)} {V^{(\theta)}}(t,x+y)\Pi({{d}} y) +{G^{(\theta)}}(t,x)\geq 0$ for all $(t,x)\in D$ . Then we have that for all $(t,x)\in [0,{m_{\theta}}]\times {\mathbb{R}}$ ,

\begin{align*}|R^{(\theta)}(t,x)|\leq {\mathbb{E}}_x\!\left( \int_0^{{m_{\theta}}-t } |{G^{(\theta)}}(r+t,X_r)| {{d}} r \right) \leq {m_{\theta}}-t,\end{align*}

where we used that $|{G^{(\theta)}}|\leq 1$ in the last inequality. For each $(t,x)\in {\mathbb{R}}_+\times {\mathbb{R}}$ , we define the times at which the process X hits the curve ${b^{(\theta)}}$ . Let $\tau_b^{(1)}=\inf\{s\in [0,{m_{\theta}}-t]\,:\, X_s\geq {b^{(\theta)}}(s+t) \}$ , and for $k\geq 1$ ,

\begin{align*}\sigma_b^{(k)}&=\inf\big\{s\in \big[\tau_b^{(k)},{m_{\theta}}-t\big] \,:\, X_s<{b^{(\theta)}}(s+t) \big\},\\\tau_b^{(k+1)}&=\inf\big\{s\in \big[\sigma_b^{(k)},{m_{\theta}}-t\big] \,:\, X_s \geq {b^{(\theta)}}(s+t) \big\},\end{align*}

where in this context we understand that $\inf \emptyset ={m_{\theta}}-t$ . Taking $t\in [0,{m_{\theta}}]$ and $x>{b^{(\theta)}}(t)$ gives us

\begin{align*}&R^{(\theta)}(t,x)\\&=-{\mathbb{E}}_x\!\left( \int_0^{\sigma_b^{(1)}} \int_{({-}\infty,0)} {V^{(\theta)}}(r+t,X_r+y)\Pi({{d}} y) {{d}} r \right)+{\mathbb{E}}_x\!\left(\int_{\sigma_b^{(1)}}^{\tau_b^{(2)} } {G^{(\theta)}}(r+t,X_r){{d}} r \right)\\&\qquad+{\mathbb{E}}_x\!\left({\mathbb{I}}_{\{\tau_b^{(2)}<{m_{\theta}}-t \}} \int_{\tau_b^{(2)}}^{{m_{\theta}}-t } {G^{(\theta)}}(r+t,X_r){\mathbb{I}}_{\{X_r <{b^{(\theta)}}(r+t) \}}{{d}} r \right) \\&\qquad-{\mathbb{E}}_x\!\left( {\mathbb{I}}_{\{\tau_b^{(2)}<{m_{\theta}}-t \}} \int_{\tau_b^{(2)}}^{{m_{\theta}}-t} \int_{({-}\infty,0)} {V^{(\theta)}}(r+t,X_r+y)\Pi({{d}} y) {\mathbb{I}}_{\big\{X_r> {b^{(\theta)}}(r+t) \big\}}{{d}} r \right)\\&=-{\mathbb{E}}_x\!\left( \int_0^{\sigma_b^{(1)}} \int_{({-}\infty,0)} {V^{(\theta)}}(r+t,X_r+y)\Pi({{d}} y) {{d}} r \right)\\&\qquad+{\mathbb{E}}_x\Big( {V^{(\theta)}}\Big(t+\sigma_b^{(1)} , X_{\sigma_b^{(1)} }\Big){\mathbb{I}}_{\big\{ \sigma_b^{(1)}<{m_{\theta}}-t \big\}} \Big)\\&\qquad+{\mathbb{E}}_x\Big(R^{(\theta)}\Big(t+\tau_{b}^{(2)}, X_{\tau_{b}^{(2)}}\Big){\mathbb{I}}_{\big\{ \tau_b^{(2)}<{m_{\theta}}-t \big\}} \Big),\end{align*}

where the last equality follows from the strong Markov property applied at times $\sigma_b^{(1)}$ and $\tau_b^{(2)}$ , respectively, and from the fact that $\tau_D$ is optimal for ${V^{(\theta)}}$ . Using the compensation formula for Poisson random measures (see [Reference Kyprianou15, Theorem 4.4, p. 99]), it can be shown that

\begin{align*}{\mathbb{E}}_x\bigg( \int_0^{\sigma_b^{(1)}} \int_{({-}\infty,0)} {V^{(\theta)}}(r+t,& X_r+y)\Pi({{d}} y) {{d}} r \bigg)\\&={\mathbb{E}}_x \!\left({V^{(\theta)}}\Big(t+\sigma_b^{(1)},X_{\sigma_b^{(1)}} \Big) {\mathbb{I}}_{\big\{ \sigma_b^{(1)}<{m_{\theta}}-t \big\}} \right).\end{align*}

Hence, for all $(t,x)\in D$ , we have that

\begin{align*}R^{(\theta)}(t,x)={\mathbb{E}}_x \!\left(R^{(\theta)}\Big(t+\tau_{b}^{(2)}, X_{\tau_{b}^{(2)}}\Big){\mathbb{I}}_{\big\{ \tau_b^{(2)}<{m_{\theta}}-t \big\}} \right).\end{align*}

Using an induction argument, it can be shown that for all $(t,x)\in D$ and $n\geq 2$ ,

(3.18) \begin{align}R^{(\theta)}(t,x)={\mathbb{E}}_x\bigg(R^{(\theta)}\big(t+\tau_{b}^{(n)}, X_{\tau_{b}^{(n)}}\big){\mathbb{I}}_{\big\{ \tau_b^{(n)}<{m_{\theta}}-t \big\}} \bigg)={\mathbb{E}}_x\Big(R^{(\theta)}\Big(t+\tau_{b}^{(n)}, X_{\tau_{b}^{(n)}}\Big) \Big),\end{align}

where the last equality follows since $R^{(\theta)}({m_{\theta}},x)=0$ for all $x\in {\mathbb{R}}$ . Since X is of finite variation, it can be shown that for all $x\in {\mathbb{R}}$ , $\lim_{n \rightarrow \infty} \tau_b^{(n)}={m_{\theta}}-t$ ${\mathbb{P}}_x$ -a.s. Therefore, from (3.18) and taking $n\rightarrow \infty$ , we conclude that for all $(t,x)\in D$ ,

\begin{align*}|R^{(\theta)}(t,x)|\leq \lim_{n\rightarrow \infty}{\mathbb{E}}_x\!\left(|R^{(\theta)}\Big(t+\tau_{b}^{(n)}, X_{\tau_{b}^{(n)}}\Big)| \right) \leq \lim_{n\rightarrow \infty} {\mathbb{E}}_x\big({m_{\theta}}-t-\tau_b^{(n)}\big)=0,\end{align*}

where the last equality follows from the dominated convergence theorem. On the other hand, if we take $t\in [0,{m_{\theta}}]$ and $x<{b^{(\theta)}}(t)$ , then by the strong Markov property applied at time $\tau_b^{(1)}$ , we have that

\begin{align*}R^{(\theta)}(t,x)={\mathbb{E}}_x\!\left( \int_0^{\tau_b^{(1)}} {G^{(\theta)}}(r+t,X_r) {{d}} r \right) +{\mathbb{E}}_x\Big(R^{(\theta)} \Big(t+\tau_b^{(1)} , X_{\tau_b^{(1)}} \Big)\Big)={V^{(\theta)}}(t,x),\end{align*}

where we used the fact that $\tau_b^{(1)}$ is an optimal stopping time for ${V^{(\theta)}}$ and that $R^{(\theta)}$ vanishes on D. Hence (3.14) also holds in the finite-variation case.

Next we proceed to show the uniqueness result. Suppose that there exist a non-positive continuous function ${U^{(\theta)}}\,:\, [0,{m_{\theta}}]\times {\mathbb{R}} \to ({-}\infty,0]$ and a continuous function ${c^{(\theta)}}$ on $[0,{m_{\theta}})$ such that ${c^{(\theta)}}\geq {h^{(\theta)}}$ and ${c^{(\theta)}}(t)=0$ for all $t\in [t_b,{m_{\theta}})$ . We assume that the pair $\big({U^{(\theta)}},{c^{(\theta)}}\big)$ solves the equations

(3.19) \begin{align}{U^{(\theta)}}&(t,x)\nonumber\\&={\mathbb{E}}_x\!\left( \int_0^{{m_{\theta}}-t} {G^{(\theta)}}(r+t,X_r){\mathbb{I}}_{\big\{X_r < {c^{(\theta)}} (r+t) \big\}}{{d}} r\right)\nonumber\\&\qquad -{\mathbb{E}}_x\!\left(\int_0^{{m_{\theta}}-t}\int_{({-}\infty,{c^{(\theta)}}(r+t)-X_r)}{U^{(\theta)}} (r+t,X_r+y) \Pi({{d}} y) {\mathbb{I}}_{\big\{X_r> {c^{(\theta)}} (r+t) \big\}} {{d}} r\right)\end{align}

and

(3.20) \begin{align}0&={\mathbb{E}}_{{c^{(\theta)}}(t)}\!\left( \int_0^{{m_{\theta}}-t} {G^{(\theta)}}(r+t,X_r){\mathbb{I}}_{\{X_r< {c^{(\theta)}} (r+t) \}}{{d}} r\right)\nonumber\\&\hspace{.2in} -{\mathbb{E}}_{{c^{(\theta)}}(t)}\!\left(\int_0^{{m_{\theta}}-t}\int_{({-}\infty,{c^{(\theta)}}(r+t)-X_r)}{U^{(\theta)}} (r+t,X_r+y) \Pi({{d}} y) {\mathbb{I}}_{\{X_r > {c^{(\theta)}} (r+t) \}} {{d}} r\right)\end{align}

when $t\in [0,t_b)$ and $x\in {\mathbb{R}}$ . For $t\in [t_b,{m_{\theta}})$ and $x\in {\mathbb{R}}$ , we assume that

(3.21) \begin{align}{U^{(\theta)}}(t,x)= {\mathbb{E}}_x\big(\tau_0^+\wedge ({m_{\theta}}-t)\big)-\frac{2}{\theta}e^{-\theta t}\big[1-{\mathbb{E}}_x\big(e^{-\theta \big(\tau_0^+ \wedge ({m_{\theta}}-t)\big)}\big)\big].\end{align}

In addition, we assume that

(3.22) \begin{align}\int_{\big({-}\infty,{c^{(\theta)}}(t)-x\big)} {U^{(\theta)}}(t,x+y)\Pi({{d}} y)+{G^{(\theta)}}(t,x)\geq 0, \hspace{.15in} \text{for all } t\in [0,t_b) \text{ and } x>{c^{(\theta)}}(t).\end{align}

Note that $({U^{(\theta)}},{c^{(\theta)}})$ solving the above equations means that ${U^{(\theta)}}(t,{c^{(\theta)}}(t))=0$ for all $t \in [0,{m_{\theta}})$ and ${U^{(\theta)}}({m_{\theta}},x)=0$ for all $x\in {\mathbb{R}}$ . Denote by $D_c$ the ‘stopping region’ above the curve ${c^{(\theta)}}$ , i.e., $D_c=\{(t,x) \in [0,{m_{\theta}}]\times {\mathbb{R}}\,:\, x \geq {c^{(\theta)}}(t) \}$ , and recall that $D=\{(t,x) \in [0,{m_{\theta}}]\times {\mathbb{R}}\,:\, x \geq {b^{(\theta)}}(t) \}$ is the ‘stopping region’ above the curve ${b^{(\theta)}}$ . We show that ${U^{(\theta)}}$ vanishes on $D_c$ in the next lemma.

Lemma 3.9. We have that ${U^{(\theta)}}(t,x)=0$ for all $(t,x)\in D_c$ .

Proof. Since the statement is clear for $(t,x)\in [t_b,{m_{\theta}})\times [0,\infty)$ , we take $t\in [0,t_b)$ and $x\geq {c^{(\theta)}}(t)$ . Define $\sigma_c$ to be the first time that the process is outside $D_c$ before time ${m_{\theta}}-t$ , i.e.,

\begin{align*}\sigma_c=\inf\{0\leq s\leq {m_{\theta}}-t\,:\, X_{s} < {c^{(\theta)}}(t+s) \},\end{align*}

where in this context we understand that $\inf \emptyset= {m_{\theta}}-t$ . From the fact that $X_{r}\geq {c^{(\theta)}}(t+r)$ for all $r< \sigma_c$ and the strong Markov property at time $\sigma_c$ , we obtain that

\begin{align*}{U^{(\theta)}}(t,x)&={\mathbb{E}}_x\big({U^{(\theta)}}\big(t+\sigma_c,X_{\sigma_c}\big)\big)\\&\qquad -{\mathbb{E}}_x\!\left(\int_0^{\sigma_c}\int_{\big({-}\infty,{c^{(\theta)}}(r+t)-X_r\big)}{U^{(\theta)}} (r+t,X_r+y) \Pi({{d}} y) {{d}} r\right)\\&={\mathbb{E}}_x\Big({U^{(\theta)}}(t+\sigma_c,X_{\sigma_c}){\mathbb{I}}_{\{\sigma_c<{m_{\theta}}-t, X_{\sigma_c}<{c^{(\theta)}} (t+\sigma_c) \}} \Big) \\&\qquad-{\mathbb{E}}_x\!\left(\int_0^{\sigma_c}\int_{\big({-}\infty,{c^{(\theta)}}(r+t)-X_r\big)}{U^{(\theta)}} (r+t,X_r+y) \Pi({{d}} y) {{d}} r\right),\end{align*}

where the last equality follows since ${U^{(\theta)}}({m_{\theta}},x)=0$ for all $x \in {\mathbb{R}}$ and ${U^{(\theta)}}(t,{c^{(\theta)}}(t))=0$ for all $t\in [0,t_b)$ . Then, applying the compensation formula for Poisson random measures (see [Reference Kyprianou15, Theorem 4.4, p. 99]), we get

\begin{align*}{\mathbb{E}}_x\bigg({U^{(\theta)}}(t+\sigma_c,X_{\sigma_c})&{\mathbb{I}}_{\big\{\sigma_c<{m_{\theta}}-t, X_{\sigma_c}<{c^{(\theta)}} (t+\sigma_c) \big\}} \bigg)\\&={\mathbb{E}}_{x}\!\left( \int_0^{\sigma_c}\int_{\big({-}\infty,{c^{(\theta)}}(t+r)-X_{r}\big)}{U^{(\theta)}}(t+r,X_{r}+y) \Pi({{d}} y) {{d}} r\right).\end{align*}

Hence ${U^{(\theta)}}(t,x)=0$ for all $(t,x)\in D_c$ , as claimed.

The next lemma shows that ${U^{(\theta)}}$ can be expressed as an integral involving only the gain function ${G^{(\theta)}}$ stopped at the first time the process enters the set $D_c$ . As a consequence, ${U^{(\theta)}}$ dominates the function ${V^{(\theta)}}$ .

Lemma 3.10. We have that ${U^{(\theta)}}(t,x)\geq {V^{(\theta)}}(t,x)$ for all $(t,x)\in [0,{m_{\theta}}]\times{\mathbb{R}}$ .

Proof. Note that we can assume that $t\in [0,t_b)$ , because for $(t,x)\in D_c$ we have ${U^{(\theta)}}(t,x)=0\geq {V^{(\theta)}}(t,x)$ , and for $t\in [t_b,{m_{\theta}})$ we have ${U^{(\theta)}}(t,x)={V^{(\theta)}}(t,x)$ , for all $x\in {\mathbb{R}}$ . Consider the stopping time

\begin{align*}\tau_c=\inf\big\{ s \in [0,{m_{\theta}}-t] \,:\, X_{s} \geq {c^{(\theta)}}(t+s) \big\}.\end{align*}

Let $x\leq {c^{(\theta)}} (t)$ . Using the fact that $X_{r}< {c^{(\theta)}}(t+r)$ for all $r\leq \tau_c$ and the strong Markov property at time $\tau_c$ , we obtain that

(3.23) \begin{align}{U^{(\theta)}}(t,x)&={\mathbb{E}}_x\!\left( \int_0^{\tau_c} {G^{(\theta)}}(r+t,X_r){{d}} r\right) +{\mathbb{E}}_x\big({U^{(\theta)}}\big(t+\tau_c,X_{\tau_c}\big)\big)\nonumber\\&={\mathbb{E}}_x\!\left( \int_0^{\tau_c} {G^{(\theta)}}(r+t,X_r){{d}} r\right),\end{align}

where the second equality follows since X creeps upwards, and therefore $X_{\tau_c}={c^{(\theta)}}(t+\tau_c)$ on the event $\{\tau_c<{m_{\theta}}-t \}$ , and since ${U^{(\theta)}}({m_{\theta}},x)=0$ for all $x\in {\mathbb{R}}$ . Then from the definition of ${V^{(\theta)}}$ (see (2.10)), we have that

\begin{align*}{U^{(\theta)}}(t,x)\geq \inf_{\tau \in \mathcal{T}} {\mathbb{E}}_{t,x}\!\left( \int_0^{\tau} G^{(\theta)}(t+r,X_{t+r}) {{d}} r\right)={V^{(\theta)}}(t,x).\end{align*}

Therefore ${U^{(\theta)}} \geq {V^{(\theta)}}$ on $ [0,{m_{\theta}}]\times {\mathbb{R}}$ .

We proceed by showing that the function ${c^{(\theta)}}$ is dominated by ${b^{(\theta)}}$ . In the upcoming lemmas, we show that equality indeed holds.

Lemma 3.11. We have that ${b^{(\theta)}}(t)\geq {c^{(\theta)}}(t)$ for all $t\in [0,{m_{\theta}})$ .

Proof. The statement is clear for $t\in [t_b,{m_{\theta}})$ . We prove the statement by contradiction. Suppose that there exists a value $t_0\in [0,t_b)$ such that ${b^{(\theta)}}(t_0)<{c^{(\theta)}}(t_0)$ , and take $x\in ({b^{(\theta)}}(t_0),{c^{(\theta)}}(t_0))$ . Consider the stopping time

\begin{align*}\sigma_b=\inf\{s\in [0,{m_{\theta}}-t_0]\,:\, X_{s} < {b^{(\theta)}}(t_0+s) \}.\end{align*}

Applying the strong Markov property at time $\sigma_b$ , we obtain that

\begin{align*}{U^{(\theta)}}(t_0,x)&={\mathbb{E}}_{x }({U^{(\theta)}}(t_0+\sigma_b, X_{\sigma_b}))+{\mathbb{E}}_{x}\!\left(\int_0^{\sigma_b}{G^{(\theta)}}(t_0+r,X_{r}){\mathbb{I}}_{\big\{ X_{r} <{c^{(\theta)}}(t_0+r) \big\}}{{d}} r\right)\\&\qquad-{\mathbb{E}}_{x}\!\left(\int_0^{\sigma_b} \int_{({-}\infty,0)} {U^{(\theta)}}(t_0+r,X_{r}+y)\Pi( {{d}} y) {\mathbb{I}}_{\big\{X_{r} > {c^{(\theta)}}(t_0+r) \big\}}{{d}} r\right),\end{align*}

where we used the fact that ${U^{(\theta)}}(t,x)=0$ for all $(t,x)\in D_c$ . From Lemma 3.10 and the fact that ${U^{(\theta)}}\leq 0$ (by assumption), we have that for all $t\in [0,{m_{\theta}})$ and $x>{b^{(\theta)}}(t)$ , $ {U^{(\theta)}}(t,x)=0$ . Hence, by the compensation formula for Poisson random measures, we obtain that

\begin{align*}0 &\geq {U^{(\theta)}}(t_0,x)\\&={\mathbb{E}}_{x }\Big({U^{(\theta)}}\big(t_0+\sigma_b, X_{\sigma_b}\big){\mathbb{I}}_{\big\{\sigma_b<{m_{\theta}}-t_0, X_{\sigma_b}<{b^{(\theta)}}(t_0+\sigma_b) \big\}}\Big)\\&\qquad+{\mathbb{E}}_{x}\!\left(\int_0^{\sigma_b}{G^{(\theta)}}(t_0+r,X_{r}){\mathbb{I}}_{\big\{ X_{r} <{c^{(\theta)}}(t_0+r) \big\}}{{d}} r\right)\\&\qquad-{\mathbb{E}}_{x}\!\left(\int_0^{\sigma_b} \int_{({-}\infty,0)} {U^{(\theta)}}(t_0+r,X_{r}+y)\Pi( {{d}} y) {\mathbb{I}}_{\big\{X_{r} >{c^{(\theta)}}(t_0+r) \big\}}{{d}} r\right)\\&={\mathbb{E}}_{x}\!\left( \int_0^{\sigma_b}\int_{({-}\infty,0)} {U^{(\theta)}}(t_0+r,X_{r}+y) \Pi({{d}} y){{d}} r\right)\\&\qquad+{\mathbb{E}}_{x}\!\left(\int_0^{\sigma_b}{G^{(\theta)}}(t_0+r,X_{r}){\mathbb{I}}_{\big\{ X_{r} <{c^{(\theta)}}(t_0+r) \big\}}{{d}} r\right)\\&\qquad-{\mathbb{E}}_{x}\!\left(\int_0^{\sigma_b} \int_{({-}\infty,0)} {U^{(\theta)}}(t_0+r,X_{r}+y)\Pi( {{d}} y) {\mathbb{I}}_{\big\{X_{r} > {c^{(\theta)}}(t_0+r) \big\}}{{d}} r\right)\\&={\mathbb{E}}_{x}\bigg( \int_0^{\sigma_b} \bigg[ \int_{({-}\infty,0)} {U^{(\theta)}}(t_0+r,X_{r}+y) \Pi({{d}} y) +{G^{(\theta)}}(t_0+r,X_{r}) \bigg]{\mathbb{I}}_{\big\{ X_{r} <{c^{(\theta)}}(t_0+r) \big\}}{{d}} r\bigg).\end{align*}

Recall from the discussion after Lemma 3.4 that the function $\varphi^{(\theta)}$ is strictly positive on D. Hence we obtain that for all $(t,x)\in D$ ,

\begin{align*} \int_{({-}\infty,0)} {U^{(\theta)}}(t,x+y)\Pi({{d}} y)+{G^{(\theta)}}(t,x)&\geq \int_{({-}\infty,0)} {V^{(\theta)}}(t,x+y)\Pi({{d}} y)+{G^{(\theta)}}(t,x)\\ &=\varphi^{(\theta)}(t,x)\\ &>0.\end{align*}

The assumption that ${b^{(\theta)}}(t_0)<{c^{(\theta)}}(t_0)$ together with the continuity of the functions ${b^{(\theta)}}$ and ${c^{(\theta)}}$ means that there exists $s_0\in(t_0,{m_{\theta}})$ such that ${b^{(\theta)}}(r)<{c^{(\theta)}}(r)$ for all $r\in [t_0,s_0]$ . Consequently, the ${\mathbb{P}}_{x}$ -probability of X spending a strictly positive amount of time (with respect to Lebesgue measure) in this region is strictly positive. We can then conclude that

\begin{align*}& 0 \geq {\mathbb{E}}_{x}\bigg( \int_0^{\sigma_b} \bigg[ \int_{({-}\infty,0)} {U^{(\theta)}}(t_0+r,X_{r}+y) \Pi({{d}} y) +{G^{(\theta)}}(t_0+r,X_{r}) \bigg]{\mathbb{I}}_{\{ X_{r} <{c^{(\theta)}}(t_0+r) \}}{{d}} r\bigg)>0.\end{align*}

This is a contradiction, and therefore we conclude that ${b^{(\theta)}}(t) \geq {c^{(\theta)}}(t)$ for all $t\in [0,{m_{\theta}})$ .

Note that the definition of ${U^{(\theta)}}$ on $[t_b,{m_{\theta}})\times {\mathbb{R}}$ (see Equation (3.21)) together with the condition (3.22) implies that

\begin{align*}\int_{({-}\infty,0)} {U^{(\theta)}}(t,x+y)\Pi({{d}} y)+ {G^{(\theta)}}(t,x)\geq 0\end{align*}

for all $t\in [0,{m_{\theta}})$ and $x>{c^{(\theta)}}(t)$ . The next lemma shows that ${U^{(\theta)}}$ and ${V^{(\theta)}}$ coincide.

Lemma 3.12. We have that ${b^{(\theta)}}(t)={c^{(\theta)}}(t)$ for all $t\in [0,{m_{\theta}})$ , and hence ${V^{(\theta)}}={U^{(\theta)}}$ .

Proof. We prove that ${b^{(\theta)}}={c^{(\theta)}}$ by contradiction. Assume that there exists $s_0$ such that ${b^{(\theta)}}(s_0)>{c^{(\theta)}}(s_0)$ . Since ${c^{(\theta)}}(t)={b^{(\theta)}}(t)=0$ for all $t\in [t_b,{m_{\theta}})$ , we deduce that $s_0\in [0,t_b)$ . Let $\tau_b$ be the stopping time

\begin{align*}\tau_b=\inf\big\{s\geq 0\,:\, X_s \geq {b^{(\theta)}}(s_0+s) \big\}.\end{align*}

Applying the strong Markov property at time $\tau_b$ , we obtain that for any $x\in ({c^{(\theta)}}(s_0),{b^{(\theta)}}(s_0))$ ,

\begin{align*}{\mathbb{E}}_{x}({U^{(\theta)}}(s_0&+\tau_b, X_{\tau_b} ))\\&={U^{(\theta)}}(s_0,x)-{\mathbb{E}}_{x}\!\left(\int_0^{\tau_b}{G^{(\theta)}}(r+s_0,X_r){\mathbb{I}}_{\{ X_{r} <{c^{(\theta)}}(r+s_0) \}} {{d}} r\right)\\&\qquad+{\mathbb{E}}_{x}\!\left(\int_0^{\tau_b} \int_{({-}\infty,0)} {U^{(\theta)}}(r+s_0, X_r+y){\mathbb{I}}_{\{ X_{r} >{c^{(\theta)}}(r+s_0) \}}\Pi( {{d}} y){{d}} r\right)\\&\geq {V^{(\theta)}}(s_0,x)-{\mathbb{E}}_{x}\!\left(\int_0^{\tau_b}{G^{(\theta)}}(r+s_0,X_r){\mathbb{I}}_{\{ X_{r} <{c^{(\theta)}}(r+s_0) \}} {{d}} r\right)\\&\qquad+{\mathbb{E}}_{x}\!\left(\int_0^{\tau_b} \int_{({-}\infty,0)} {U^{(\theta)}}(r+s_0, X_r+y){\mathbb{I}}_{\{ X_{r} > {c^{(\theta)}}(r+s_0) \}}\Pi( {{d}} y){{d}} r\right)\\&={\mathbb{E}}_{x}\!\left(\int_0^{\tau_b}{G^{(\theta)}}(r+s_0,X_r){\mathbb{I}}_{\{ X_{r} \geq {c^{(\theta)}}(r+s_0) \}} {{d}} r\right)\\&\qquad+{\mathbb{E}}_{x}\!\left(\int_0^{\tau_b} \int_{({-}\infty,0)} {U^{(\theta)}}(r+s_0, X_r+y){\mathbb{I}}_{\{ X_{r} > {c^{(\theta)}}(r+s_0) \}}\Pi( {{d}} y){{d}} r\right),\end{align*}

where the inequality follows from the fact that ${U^{(\theta)}}\geq {V^{(\theta)}}$ (see Lemma 3.10) and the last equality follows as $\tau_b$ is the optimal stopping time for ${V^{(\theta)}}(s_0,x)$ . Note that since X creeps upwards, we have that ${U^{(\theta)}}(s_0+\tau_b,X_{\tau_b})={U^{(\theta)}}(s_0+\tau_b,{b^{(\theta)}}(s_0+\tau_b))=0$ .

Hence,

\begin{align*} {\mathbb{E}}_{x}&\left(\int_0^{\tau_b}{G^{(\theta)}}(r+s_0,X_r){\mathbb{I}}_{\{ X_{r} \geq {c^{(\theta)}}(r+s_0) \}} {{d}} r\right)\\&\qquad+{\mathbb{E}}_{x}\!\left(\int_0^{\tau_b} \int_{({-}\infty,0)} {U^{(\theta)}}(r+s_0, X_r+y){\mathbb{I}}_{\{ X_{r} > {c^{(\theta)}}(r+s_0) \}}\Pi( {{d}} y){{d}} r\right) \leq 0. \end{align*}

However, the continuity of the functions ${b^{(\theta)}}$ and ${c^{(\theta)}}$ gives the existence of $s_1\in (s_0,{m_{\theta}})$ such that $ {c^{(\theta)}}(r)<{b^{(\theta)}}(r)$ for all $r\in [s_0,s_1]$ . Combining this with the fact that $\int_{({-}\infty,0)} {U^{(\theta)}}(t,x+y)\Pi({{d}} y)+{G^{(\theta)}}(t,x)>0$ for all $(t,x)\in D_c$ , we can conclude that

\begin{align*} {\mathbb{E}}_{x}&\left(\int_0^{\tau_b}{G^{(\theta)}}(r+s_0,X_r){\mathbb{I}}_{\{ X_{r} \geq {c^{(\theta)}}(r+s_0) \}} {{d}} r\right)\\&\qquad+{\mathbb{E}}_{x}\!\left(\int_0^{\tau_b} \int_{({-}\infty,0)} {U^{(\theta)}}(r+s_0, X_r+y){\mathbb{I}}_{\{ X_{r} > {c^{(\theta)}}(r+s_0) \}}\Pi( {{d}} y){{d}} r\right) > 0, \end{align*}

which is a contradiction. Hence ${b^{(\theta)}}={c^{(\theta)}}$ , and since the pair $({U^{(\theta)}},{c^{(\theta)}})$ then satisfies the same equations as $({V^{(\theta)}},{b^{(\theta)}})$ , we conclude that ${U^{(\theta)}}={V^{(\theta)}}$ .

4. Examples

4.1. Brownian motion with drift

Suppose that $X=\{X_t ,t\geq 0 \}$ is a Brownian motion with drift; that is, for any $t\geq 0$ , $X_t=\mu t+\sigma B_t$ , where $B=\{B_t, t\geq 0\}$ is a standard Brownian motion, $\sigma>0$ , and $\mu\in {\mathbb{R}}$ . In this case, we have that

\begin{align*}\psi(\beta)=\mu \beta +\frac{1}{2}\sigma^2 \beta^2\end{align*}

for all $\beta \geq 0$ . Then

\begin{align*}\Phi(q)=\frac{1}{\sigma^2}\left[\sqrt{\mu^2+2\sigma^2 q }-\mu \right].\end{align*}
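As a quick numerical sanity check (a minimal Python sketch; the parameter values are illustrative), one can verify that $\Phi$ is indeed the right inverse of $\psi$ :

```python
import math

mu, sigma = 2.0, 1.0  # illustrative parameters, as used for Figure 2

def psi(beta):
    # Laplace exponent of X_t = mu*t + sigma*B_t
    return mu * beta + 0.5 * sigma**2 * beta**2

def Phi(q):
    # right inverse of psi on [0, infinity)
    return (math.sqrt(mu**2 + 2 * sigma**2 * q) - mu) / sigma**2

q = 0.7
assert abs(psi(Phi(q)) - q) < 1e-12  # psi(Phi(q)) = q
```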

It is well known that $-\underline{X}_{{e_{\theta}}}$ has an exponential distribution (see e.g. [Reference Borodin and Salminen7, p. 251] or [Reference Kyprianou15, p. 233]) with distribution function given by

\begin{align*}F^{(\theta)}(x)=1-\exp\!\left({-}\frac{x}{\sigma^2}\left[\sqrt{\mu^2+2\sigma^2 \theta }+\mu \right] \right)\qquad \text{for } x>0.\end{align*}

Denote by $\Phi(x;\,a,b^2)$ the distribution function of a normal random variable with mean $a \in {\mathbb{R}}$ and variance $b^2$ (the semicolon distinguishing it from the right inverse $\Phi(q)$ of $\psi$ ); i.e., for any $x\in {\mathbb{R}}$ ,

\begin{align*}\Phi(x;\,a, b^2)= \int_{-\infty}^x \frac{1}{\sqrt{2\pi b^2}} e^{-\frac{1}{2b^2} (y-a)^2}{{d}} y.\end{align*}

For any $b,s,t\geq 0$ and $x\in {\mathbb{R}}$ , define the function

\begin{align*}K(t,x,s,b)&={\mathbb{E}}\!\left( G^{(\theta)}(s+t, X_s+x){\mathbb{I}}_{\{ X_s+x\leq b\}}\right).\end{align*}

Then it can easily be shown that

\begin{align*}K(t,x,s,b)&= \Phi\big(b-x;\,\mu s,\sigma^2 s\big)-2e^{-\theta (s+t)}\Phi\big({-}x;\,\mu s,\sigma^2 s\big)\\&\qquad-2e^{-\theta t} \exp\!\left({-}\frac{x}{\sigma^2}\left[\sqrt{\mu^2+2\sigma^2 \theta }+\mu \right] \right)\\&\qquad\qquad \times \left[ \Phi\big(b-x;\,-s\sqrt{\mu^2+2\sigma^2 \theta},s \sigma^2\big)-\Phi\big({-}x;\,-s\sqrt{\mu^2+2\sigma^2 \theta},s \sigma^2\big) \right].\end{align*}
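The following Python sketch transcribes this formula, with `scipy.stats.norm.cdf` playing the role of $\Phi(\cdot\,;a,b^2)$ ; the default parameters mirror those of Figure 2, and the default $\theta=\log 2/10$ (so that ${m_{\theta}}=10$ ) is an assumption on our part:

```python
import numpy as np
from scipy.stats import norm

def K(t, x, s, b, mu=2.0, sigma=1.0, theta=np.log(2) / 10):
    # Transcription of the displayed formula for K(t, x, s, b).
    gamma = np.sqrt(mu**2 + 2 * sigma**2 * theta)
    sd = sigma * np.sqrt(s)  # standard deviation of X_s
    term1 = norm.cdf(b - x, loc=mu * s, scale=sd)
    term2 = -2 * np.exp(-theta * (s + t)) * norm.cdf(-x, loc=mu * s, scale=sd)
    weight = np.exp(-x * (gamma + mu) / sigma**2)
    term3 = -2 * np.exp(-theta * t) * weight * (
        norm.cdf(b - x, loc=-s * gamma, scale=sd)
        - norm.cdf(-x, loc=-s * gamma, scale=sd)
    )
    return term1 + term2 + term3
```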

Thus, we have that $b^{(\theta)}$ satisfies the nonlinear integral equation

\begin{align*}\int_0^{{m_{\theta}}-t} K(t,b^{(\theta)}(t),s,b^{(\theta)}(t+s)){{d}} s=0\end{align*}

for all $t\in [0,{m_{\theta}})$ , and the value function ${V^{(\theta)}}$ is given by

\begin{align*}{V^{(\theta)}}(t,x)=\int_0^{{m_{\theta}}-t} K(t,x,s, {b^{(\theta)}}(t+s)) {{d}} s\end{align*}

for all $(t,x)\in {\mathbb{R}}_+ \times {\mathbb{R}}$ . Note that we can approximate the integrals above by Riemann sums, so a numerical approximation can be implemented. Indeed, take $n \in \mathbb{Z}_+$ sufficiently large and define $h={m_{\theta}}/n$ . For each $k\in \{0,1,2,\ldots,n \}$ , define $t_k=kh$ , so that the sequence of times $\{ t_k, k= 0,1,\ldots,n \}$ is a partition of the interval $[0,{m_{\theta}}]$ . Then, for any $x\in {\mathbb{R}}$ and $t\in [t_k,t_{k+1})$ with $k \in \{0,1,\ldots, n-1 \}$ , we approximate ${V^{(\theta)}}(t,x)$ by

\begin{align*}V^{(\theta)}_h(t_k,x)=\sum_{i=k}^{n-1} K(t_k,x,t_{i-k+1},b_{i})h,\end{align*}

where the sequence $\{b_k, k=0,1,\ldots,n -1\}$ is a solution to

\begin{align*}\sum_{i=k}^{n-1} K(t_k,b_k,t_{i-k+1},b_{i})=0\end{align*}

for each $k\in \{0,1,\ldots, n-1 \}$ . Note that the sequence $\{b_k, k=0,1,\ldots,n-1\}$ is a numerical approximation to the sequence $\{{b^{(\theta)}}(t_k), k=0,1,\ldots,n-1 \}$ (for n sufficiently large) and can be calculated by using backwards induction. In Figure 2, we show a numerical calculation of the equations above. The parameters used are $\mu=2$ and $\sigma=1$ , and we chose ${m_{\theta}}=10$ and $n=10,000$ time steps (so that $h=0.001$ ).
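The following Python sketch implements this backwards induction, assuming the function `K` from the previous sketch is in scope; the grid is coarsened for speed, the bracketing interval for the root search is our own choice, and when no sign change is detected we fall back to $b_k=0$ , which is the known value of the boundary on $[t_b,{m_{\theta}})$ :

```python
import numpy as np
from scipy.optimize import brentq

# assumes the function K from the previous sketch is in scope
m_theta, n = 10.0, 200          # coarse grid for illustration (n = 10,000 in the text)
h = m_theta / n
t = np.arange(n + 1) * h

b = np.zeros(n)                  # b[k] approximates b_theta(t_k)
for k in range(n - 1, -1, -1):   # backwards induction
    def eq(x, k=k):
        # sum_{i=k}^{n-1} K(t_k, x, t_{i-k+1}, b_i), with b_k = x in the i = k term
        total = K(t[k], x, t[1], x)
        total += sum(K(t[k], x, t[i - k + 1], b[i]) for i in range(k + 1, n))
        return total
    lo, hi = -1e-6, 10.0         # bracketing interval is an assumption
    b[k] = brentq(eq, lo, hi) if eq(lo) * eq(hi) < 0 else 0.0
```

Solving for $b_k$ from $k=n-1$ down to $k=0$ uses only the already-computed values $b_{k+1},\ldots,b_{n-1}$ , which is what makes the backwards induction possible.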

Figure 2. Brownian motion with drift $\mu=2$ and $\sigma=1$ . Left panel: optimal boundary. Right panel: value function fixing $t=1$ .

4.2. Brownian motion with exponential jumps

Let $X=\{X_t,t\geq 0 \}$ be a compound Poisson process perturbed by a Brownian motion; that is,

(4.1) \begin{align}X_t=\sigma B_t+\mu t -\sum_{i=1}^{N_t} Y_i,\end{align}

where $B=\{B_t,t\geq 0 \}$ is a standard Brownian motion, $N=\{N_t,t\geq 0 \}$ is a Poisson process with rate $\lambda$ independent of B, $\mu \in {\mathbb{R}}$ , $\sigma > 0$ , and $\{Y_1,Y_2,\ldots \}$ is a sequence of independent random variables, independent of B and N, exponentially distributed with mean $1/\rho>0$ . In this case, the Laplace exponent is given by

\begin{align*}\psi(\beta)=\frac{\sigma^2}{2}\beta^2 +\mu \beta -\frac{\lambda \beta}{\rho +\beta }.\end{align*}

Its Lévy measure, given by $\Pi({{d}} y)=\lambda \rho e^{\rho y} {\mathbb{I}}_{\{y<0 \}} {{d}} y$ , is a finite measure, and X is a process of infinite variation. According to [Reference Kuznetsov, Kyprianou and Rivero14, Equation (7), p. 101], the scale function in this case is given for $q\geq 0$ and $x\geq 0$ by

\begin{align*}W^{(q)}(x)=\frac{e^{\Phi(q)x}}{\psi'(\Phi(q))}+\frac{e^{\zeta_1(q) x}}{\psi'(\zeta_1(q))} +\frac{e^{\zeta_2(q) x}}{\psi'(\zeta_2(q))},\end{align*}

where $\zeta_2(q)$ , $\zeta_1(q)$ , and $\Phi(q)$ are the three real solutions to the equation $\psi(\beta)=q$ , which satisfy $\zeta_2(q)<-\rho<\zeta_1(q)<0<\Phi(q)$ . The second scale function, $Z^{(q)}$ , takes the form

\begin{align*}Z^{(q)}(x)&=1+q \left[ \frac{e^{\Phi(q)x}-1}{\Phi(q)\psi'(\Phi(q))}+\frac{e^{\zeta_1(q) x}-1}{\zeta_1(q)\psi'(\zeta_1(q))} +\frac{e^{\zeta_2(q) x}-1}{\zeta_2(q)\psi'(\zeta_2(q))} \right].\end{align*}
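As an illustrative Python sketch (the parameter values are those reported for Figure 3, and taking $q=\theta=\log 2/10$ is our assumption), the three roots can be obtained by applying `numpy.roots` to the cubic $(\rho+\beta)\big(\psi(\beta)-q\big)$ , after which the scale function can be evaluated directly:

```python
import numpy as np

mu, sigma, lam, rho = 3.0, 1.0, 1.0, 1.0   # parameters used for Figure 3
q = np.log(2) / 10                          # theta, chosen so that m_theta = 10

def psi_prime(beta):
    # derivative of psi(beta) = sigma^2 beta^2/2 + mu beta - lam beta/(rho + beta)
    return sigma**2 * beta + mu - lam * rho / (rho + beta) ** 2

# (rho + beta) * (psi(beta) - q) = 0 is a cubic in beta
coeffs = [0.5 * sigma**2, mu + 0.5 * sigma**2 * rho, mu * rho - lam - q, -q * rho]
zeta2, zeta1, Phi_q = np.sort(np.roots(coeffs).real)  # zeta2 < -rho < zeta1 < 0 < Phi_q

def W(x):
    # scale function W^{(q)} from the displayed formula
    return sum(np.exp(r * x) / psi_prime(r) for r in (Phi_q, zeta1, zeta2))
```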

Note that since we have exponential jumps (and hence $\Pi({{d}} y)=\lambda \rho e^{\rho y} {\mathbb{I}}_{\{y<0\}} {{d}} y$ ), the substitution $z=y+x$ gives that for all $t\in [0,{m_{\theta}})$ and $x>0$ ,

\begin{align*}\int_{({-}\infty,-x)} {V^{(\theta)}}(t,{b^{(\theta)}}(t)+x+y)\Pi({{d}} y)&=e^{-\rho x }\int_{({-}\infty,0)} {V^{(\theta)}}(t,{b^{(\theta)}}(t)+y)\Pi({{d}} y).\end{align*}

Then, for any $(t,x)\in [0,{m_{\theta}}]\times {\mathbb{R}}$ , Equation (3.14) reads

\begin{align*}{V^{(\theta)}}(t,x)&={\mathbb{E}}_x\!\left( \int_0^{{m_{\theta}}-t } {G^{(\theta)}}(r+t,X_r){\mathbb{I}}_{\{X_r <{b^{(\theta)}}(r+t) \}}{{d}} r \right)\nonumber \\&\qquad-{\mathbb{E}}_x\!\left( \int_0^{{m_{\theta}}-t} e^{-\rho(X_r -{b^{(\theta)}}(r+t)) }\mathcal{V}(r+t) {\mathbb{I}}_{\big\{X_r> {b^{(\theta)}}(r+t) \big\}}{{d}} r \right),\end{align*}

where for any $t\in [0,{m_{\theta}})$ ,

\begin{align*}\mathcal{V}(t)&=\int_{({-}\infty,0)} {V^{(\theta)}}(t,{b^{(\theta)}}(t)+y)\Pi({{d}} y).\end{align*}

Note that the equation above suggests that, in order to compute ${b^{(\theta)}}$ numerically using Theorem 3.1, we only need to know the values of the function $\mathcal{V}$ , and not the values of $\int_{({-}\infty,0)}{V^{(\theta)}}(t,x+y)\Pi({{d}} y)$ for all $t\in [0,{m_{\theta}}]$ and $x>{b^{(\theta)}}(t)$ . The next corollary confirms this.

Corollary 4.1. Let $\theta>0$ . Assume that $X=\{X_t, t\geq 0 \}$ is of the form (4.1) with $\mu\in {\mathbb{R}}$ , $\sigma, \lambda, \rho>0$ . Suppose that ${c^{(\theta)}}$ and $\mathcal{U}$ are continuous functions on $[0,{m_{\theta}})$ such that ${c^{(\theta)}} \geq h^{(\theta)}$ and $0\geq \mathcal{U}(t) \geq -{G^{(\theta)}}(t,{c^{(\theta)}}(t))$ for all $t\in [0,{m_{\theta}})$ . For any $(t,x)\in [0,{m_{\theta}}]\times {\mathbb{R}}$ we define the function

\begin{align*}{U^{(\theta)}}(t,x)&={\mathbb{E}}_x\!\left( \int_0^{{m_{\theta}}-t } {G^{(\theta)}}(r+t,X_r){\mathbb{I}}_{\{X_r <{c^{(\theta)}}(r+t) \}}{{d}} r \right) \\&\qquad-{\mathbb{E}}_x\!\left( \int_0^{{m_{\theta}}-t} e^{-\rho(X_r -{c^{(\theta)}}(r+t)) }\mathcal{U}(r+t) {\mathbb{I}}_{\{X_r> {c^{(\theta)}}(r+t) \}}{{d}} r \right).\end{align*}

Further assume that there exists a value $h>0$ such that ${U^{(\theta)}}(t,x)=0$ for any $t\in [0,{m_{\theta}})$ and $x\in [{c^{(\theta)}}(t),{c^{(\theta)}}(t)+h]$ . If ${U^{(\theta)}}$ is a non-positive function, we have that ${c^{(\theta)}}= {b^{(\theta)}}$ and ${U^{(\theta)}}={V^{(\theta)}}$ .

Proof. First note that, since X is of infinite variation, ${\mathbb{P}}_x(X_r={c^{(\theta)}}(r+t))=0$ for all $r,t \in [0,{m_{\theta}})$ such that $r+t<{m_{\theta}}$ and $x\in {\mathbb{R}}$ . Hence, by the continuity of ${G^{(\theta)}}$ and $\mathcal{U}$ , and by the dominated convergence theorem, we have that ${U^{(\theta)}}$ is continuous. By Theorem 3.1, it is enough to show that ${U^{(\theta)}}$ satisfies the integral equation

\begin{align*}{U^{(\theta)}}&(t,x)\\&={\mathbb{E}}_x\!\left( \int_0^{{m_{\theta}}-t } {G^{(\theta)}}(r+t,X_r){\mathbb{I}}_{\{X_r <{c^{(\theta)}}(r+t) \}}{{d}} r \right) \\&\qquad-{\mathbb{E}}_x\!\left( \int_0^{{m_{\theta}}-t} \int_{({-}\infty,{c^{(\theta)}}(r+t)-X_r)} {U^{(\theta)}}(r+t,X_r+y)\Pi({{d}} y) {\mathbb{I}}_{\{X_r> {c^{(\theta)}}(r+t) \}}{{d}} r \right)\\&={\mathbb{E}}_x\!\left( \int_0^{{m_{\theta}}-t } {G^{(\theta)}}(r+t,X_r){\mathbb{I}}_{\{X_r <{c^{(\theta)}}(r+t) \}}{{d}} r \right) \\&\qquad-{\mathbb{E}}_x\!\left( \int_0^{{m_{\theta}}-t} e^{-\rho(X_r -{c^{(\theta)}}(r+t)) }H(r+t) {\mathbb{I}}_{\{X_r>{c^{(\theta)}}(r+t) \}}{{d}} r \right),\end{align*}

where $H(r)=\int_{({-}\infty,0)} {U^{(\theta)}}\big(r,{c^{(\theta)}}(r)+y\big)\Pi({{d}} y)$ for all $r\in [0,{m_{\theta}})$ , and in the last equality we used the explicit form of $\Pi({{d}} y)$ . Then it suffices to show that $H(t)=\mathcal{U}(t)$ for all $t\in [0,{m_{\theta}})$ .

Let $t\geq 0$ . For any $\delta \in (0,{m_{\theta}}-t)$ , consider the stopping time

\begin{align*}\tau_{\delta}=\inf\big\{ s\in [0,\delta]\,:\, X_s\notin \big[{c^{(\theta)}}(s+t), {c^{(\theta)}}(s+t)+h\big]\big\}.\end{align*}

Note that for any $s<\tau_{\delta}$ we have that $X_s\in \big({c^{(\theta)}}(s+t), {c^{(\theta)}}(s+t)+h\big)$ , and that $X_{\delta}\in \big({c^{(\theta)}}(\delta+t), {c^{(\theta)}}(\delta+t)+h\big)$ on the event $\{ \tau_{\delta}=\delta \}$ . Then, using the strong Markov property at time $\tau_{\delta}$ , we have that for any $x\in [{c^{(\theta)}}(t), {c^{(\theta)}}(t)+h)$ ,

\begin{align*}0&={U^{(\theta)}}(t,x)\\&= -{\mathbb{E}}_x\!\left( \int_0^{\tau_{\delta}} e^{-\rho(X_r -{c^{(\theta)}}(r+t)) }\mathcal{U}(r+t) {{d}} r \right)+{\mathbb{E}}_x\Big( {U^{(\theta)}}\big(t+\tau_{\delta},X_{\tau_{\delta}}\big){\mathbb{I}}_{\big\{X_{\tau_{\delta}}<{c^{(\theta)}}(t+\tau_{\delta})\big\} }\Big),\end{align*}

where in the last equality we used the fact that ${U^{(\theta)}}(t,x)=0$ for all $x\in [{c^{(\theta)}}(t), {c^{(\theta)}}(t)+h]$ , the continuity of ${c^{(\theta)}}$ , and the fact that X can only cross above ${c^{(\theta)}}$ by creeping. By using the compensation formula for Poisson random measures, we obtain that

\begin{align*}{\mathbb{E}}_x\Big( {U^{(\theta)}}(t+\tau_{\delta},X_{\tau_{\delta}})&{\mathbb{I}}_{\big\{X_{\tau_{\delta}}<{c^{(\theta)}}(t+\tau_{\delta})\big\} }\Big)\\&={\mathbb{E}}_x\!\left( \int_{0}^{\tau_{\delta}} \int_{({-}\infty,0)} {U^{(\theta)}}(t+r,X_{r}+y) {\mathbb{I}}_{\{X_{r}+y<{c^{(\theta)}}(t+r)\} }\Pi( {{d}} y){{d}} r\right)\\&={\mathbb{E}}_x\!\left( \int_0^{\tau_{\delta}} e^{-\rho(X_r -{c^{(\theta)}}(r+t)) }H(r+t) {{d}} r \right).\end{align*}

Hence we conclude that for any $\delta>0$ ,

(4.2) \begin{align}0&={\mathbb{E}}_x\!\left( \int_0^{\tau_{\delta}} e^{-\rho(X_r -{c^{(\theta)}}(r+t)) }[H(r+t)-\mathcal{U}(r+t)] {{d}} r \right),\end{align}

and hence

\begin{align*}0\leq {\mathbb{E}}_x\!\left( \int_0^{\tau_{\delta}} [H(r+t)-\mathcal{U}(r+t)] {{d}} r \right).\end{align*}

By the continuity of H and $\mathcal{U}$ we obtain that

\begin{align*}0\leq \lim_{\delta \downarrow 0 }\frac{1}{{\mathbb{E}}_x(\tau_{\delta})}{\mathbb{E}}_x\!\left( \int_0^{\tau_{\delta}} [H(r+t)-\mathcal{U}(r+t)] {{d}} r \right)=H(t)-\mathcal{U}(t).\end{align*}

Moreover, if the inequality were strict, then by the continuity of H and $\mathcal{U}$ the integrand in Equation (4.2) would be strictly positive for $\delta$ sufficiently small, contradicting (4.2). Hence $H(t)=\mathcal{U}(t)$ for all $t\in [0,{m_{\theta}})$ , and the conclusion holds.

Hence, for any $(t,x)\in [0,{m_{\theta}})\times {\mathbb{R}}$ , we can write

\begin{align*}{V^{(\theta)}}(t,x)&=\int_0^{{m_{\theta}}-t} K_1(t,x,r,{b^{(\theta)}}(r+t)){{d}} r -\int_0^{{m_{\theta}}-t} \mathcal{V}(r+t) K_2(t,x,r,{b^{(\theta)}}(r+t)){{d}} r,\end{align*}

where for any $t,s\in [0,{m_{\theta}})$ , $b\geq 0$ , and $x\in {\mathbb{R}}$ ,

\begin{align*}\mathcal{V}(t)&=\int_{({-}\infty,0)} {V^{(\theta)}}(t,{b^{(\theta)}}(t)+y)\Pi({{d}} y),\\K_1(t,x,s,b)&= {\mathbb{E}}\!\left( {G^{(\theta)}}(s+t,X_s+x){\mathbb{I}}_{\{X_s <b-x \}}\right),\\K_2(t,x,s,b)&= {\mathbb{E}}\!\left( e^{-\rho(X_s+x -b) } {\mathbb{I}}_{\{X_s>b-x \}} \right).\end{align*}

Take a value $h_0>0$ sufficiently small. Then the functions ${b^{(\theta)}}$ and $\mathcal{V}$ satisfy the integral equations

\begin{align*}\int_0^{{m_{\theta}}-t} &K_1\big(t,{b^{(\theta)}}(t),r,{b^{(\theta)}}(r+t)\big){{d}} r\\& -\int_0^{{m_{\theta}}-t} \mathcal{V}(r+t) K_2\big(t,{b^{(\theta)}}(t),r,{b^{(\theta)}}(r+t)\big){{d}} r =0,\\\int_0^{{m_{\theta}}-t} &K_1\big(t,{b^{(\theta)}}(t)+h_0,r,{b^{(\theta)}}(r+t)\big){{d}} r \\&\qquad-\int_0^{{m_{\theta}}-t} \mathcal{V}(r+t) K_2\big(t,{b^{(\theta)}}(t)+h_0,r,{b^{(\theta)}}(r+t)\big){{d}} r =0\end{align*}

for all $t\in [0,{m_{\theta}}]$ . We can approximate the integrals above by Riemann sums, so a numerical approximation can be implemented. Indeed, take $n \in \mathbb{Z}_+$ sufficiently large and define $h={m_{\theta}}/n$ . For each $k\in \{0,1,2,\ldots,n\}$ , define $t_k=kh$ , so that the sequence of times $\{ t_k, k= 0,1,\ldots,n \}$ is a partition of the interval $[0,{m_{\theta}}]$ . Then, for any $x\in {\mathbb{R}}$ and $t\in [t_k,t_{k+1})$ , we approximate ${V^{(\theta)}}(t,x)$ by

\begin{align*}V^{(\theta)}_h(t_k,x)=\sum_{i=k}^{n-1} [K_1(t_k,x,t_{i-k+1},b_{i})-\mathcal{V}_iK_2(t_k,x,t_{i-k+1},b_{i}) ]h,\end{align*}

where the sequence $\{(b_k, \mathcal{V}_k), k=0,1,\ldots,n-1 \}$ is a solution to

\begin{align*}\sum_{i=k}^{n-1} [K_1(t_k,b_k,t_{i-k+1},b_{i})-\mathcal{V}_{i}K_2(t_k,b_k,t_{i-k+1},b_{i}) ]=0,\\\sum_{i=k}^{n-1} [K_1(t_k,b_k+h_0,t_{i-k+1},b_{i})-\mathcal{V}_{i}K_2(t_k,b_k+h_0,t_{i-k+1},b_{i}) ]=0\end{align*}

for each $k\in \{0,1,\ldots, n -1\}$ . For n sufficiently large, the sequence $\{(b_k,\mathcal{V}_k), k=0,1,\ldots,n-1 \}$ is a numerical approximation to $\{({b^{(\theta)}}(t_k), \mathcal{V}(t_k)), k=0,1,\ldots,n-1 \}$ (provided that $V^{(\theta)}_h\leq 0$ ) and can be calculated by using backwards induction. The functions $K_1$ and $K_2$ can be estimated using simulation methods; a sketch is given below. In Figure 3, we include a plot of the numerical calculation of ${b^{(\theta)}}$ and ${V^{(\theta)}}(0,x)$ using the parameters $\theta=\log\!(2)/10$ , $\mu=3$ , $\sigma=\lambda=\rho=1$ . In this case we take $n=10,000$ time steps (so that $h=0.001$ ), $h_0=0.001$ , and we estimate the functions $K_1$ and $K_2$ by simulating $N=30,000$ sample paths of the process $\{X_{hj}, j\in \{0,1,\ldots, n\} \}$ .
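The following Python sketch shows one way to estimate $K_1$ and $K_2$ by Monte Carlo; the helper names `sample_X` and `K1_K2` are ours, and the gain function ${G^{(\theta)}}$ (given earlier in the paper in terms of $F^{(\theta)}$ ) is assumed to be available as a vectorized callable `G` . Since $K_1$ and $K_2$ depend only on the marginal law of $X_s$ , one may also draw from that law exactly instead of simulating whole paths as in the text:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, lam, rho = 3.0, 1.0, 1.0, 1.0   # parameters used for Figure 3

def sample_X(s, size):
    # Exact draws from the law of X_s in (4.1): a Gaussian part plus an
    # independent compound Poisson sum of exponential(rho) jumps.
    gauss = mu * s + sigma * rng.normal(scale=np.sqrt(s), size=size)
    n_jumps = rng.poisson(lam * s, size=size)
    jumps = rng.gamma(shape=n_jumps, scale=1.0 / rho)  # Gamma(0, .) is the point mass at 0
    return gauss - jumps

def K1_K2(t, x, s, b, G, n_samples=30_000):
    # Monte Carlo estimates of K1(t,x,s,b) and K2(t,x,s,b); `G` is assumed
    # to be a vectorized callable implementing the gain function G^(theta).
    X = sample_X(s, n_samples)
    below = X < b - x
    k1 = np.mean(G(s + t, X + x) * below)
    k2 = np.mean(np.exp(-rho * (X + x - b)) * ~below)
    return k1, k2
```

Drawing $X_s$ exactly avoids discretisation bias, at the cost of fresh simulations for every pair (s, x); simulating full paths once, as done for Figure 3, amortises the sampling cost across all evaluations.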

Figure 3. Brownian motion with drift perturbed by a compound Poisson process with exponential-sized jumps with $\mu=3$ and $\sigma=\lambda=\rho=1$ . Left panel: optimal boundary. Right panel: value function fixing $t=0$ .

Acknowledgements

Support from the Department of Statistics of the LSE and the LSE Ph.D. Studentship is gratefully acknowledged by José M. Pedraza. We are also grateful to two anonymous referees for their useful suggestions, which improved the presentation of this paper.

Funding information

There are no funding bodies to thank in relation to the creation of this article.

Competing interests

There are no competing interests to declare which arose during the preparation or publication process of this article.

References

[1] Baurdoux, E. J. (2009). Last exit before an exponential time for spectrally negative Lévy processes. J. Appl. Prob. 46, 542–558.
[2] Baurdoux, E. J. and Pedraza, J. M. (2020). Predicting the last zero of a spectrally negative Lévy process. In XIII Symposium on Probability and Stochastic Processes, Birkhäuser, Cham, pp. 77–105.
[3] Baurdoux, E. J. and van Schaik, K. (2014). Predicting the time at which a Lévy process attains its ultimate supremum. Acta Appl. Math. 134, 21–44.
[4] Bernyk, V., Dalang, R. C. and Peskir, G. (2011). Predicting the ultimate supremum of a stable Lévy process with no negative jumps. Ann. Prob. 39, 2385–2423.
[5] Bertoin, J. (1998). Lévy Processes. Cambridge University Press.
[6] Bichteler, K. (2002). Stochastic Integration with Jumps. Cambridge University Press.
[7] Borodin, A. N. and Salminen, P. (2002). Handbook of Brownian Motion—Facts and Formulae. Birkhäuser, Basel.
[8] Carr, P. (1998). Randomization and the American put. Rev. Financial Studies 11, 597–626.
[9] Chiu, S. N. and Yin, C. (2005). Passage times for a spectrally negative Lévy process with applications to risk theory. Bernoulli 11, 511–522.
[10] Du Toit, J., Peskir, G. and Shiryaev, A. (2008). Predicting the last zero of Brownian motion with drift. Stochastics 80, 229–245.
[11] Glover, K. and Hulley, H. (2014). Optimal prediction of the last-passage time of a transient diffusion. SIAM J. Control Optim. 52, 3833–3853.
[12] Glover, K., Hulley, H. and Peskir, G. (2013). Three-dimensional Brownian motion and the golden ratio rule. Ann. Appl. Prob. 23, 895–922.
[13] Graversen, S. E., Peskir, G. and Shiryaev, A. N. (2001). Stopping Brownian motion without anticipation as close as possible to its ultimate maximum. Theory Prob. Appl. 45, 41–50.
[14] Kuznetsov, A., Kyprianou, A. E. and Rivero, V. (2013). The theory of scale functions for spectrally negative Lévy processes. In Lévy Matters II, Springer, Berlin, Heidelberg, pp. 97–186.
[15] Kyprianou, A. E. (2014). Fluctuations of Lévy Processes with Applications. Springer, Berlin, Heidelberg.
[16] Lamberton, D. and Mikou, M. (2008). The critical price for the American put in an exponential Lévy model. Finance Stoch. 12, 561–581.
[17] Lamberton, D. and Mikou, M. A. (2013). Exercise boundary of the American put near maturity in an exponential Lévy model. Finance Stoch. 17, 355–394.
[18] Madan, D., Roynette, B. and Yor, M. (2008). From Black–Scholes formula, to local times and last passage times for certain submartingales. Preprint. Available at https://hal.archives-ouvertes.fr/hal-00261868.
[19] Madan, D., Roynette, B. and Yor, M. (2008). Option prices as probabilities. Finance Res. Lett. 5, 79–87.
[20] Paroissin, C. and Rabehasaina, L. (2013). First and last passage times of spectrally positive Lévy processes with application to reliability. Methodology Comput. Appl. Prob. 17, 351–372.
[21] Peskir, G. and Shiryaev, A. (2006). Optimal Stopping and Free-Boundary Problems. Birkhäuser, Basel.
[22] Sato, K.-I. (1999). Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press.
[23] Shiryaev, A. N. (2009). On conditional-extremal problems of the quickest detection of nonpredictable times of the observable Brownian motion. Theory Prob. Appl. 53, 663–678.
[24] Urusov, M. A. (2005). On a property of the moment at which Brownian motion attains its maximum and some optimal stopping problems. Theory Prob. Appl. 49, 169–176.