
Optimal stopping of Gauss–Markov bridges

Published online by Cambridge University Press: 04 December 2024

Abel Azze*
Affiliation:
CUNEF Universidad
Bernardo D’Auria*
Affiliation:
University of Padua
Eduardo García-Portugués*
Affiliation:
Universidad Carlos III de Madrid
*Postal address: Department of Quantitative Methods, CUNEF Universidad, Spain. Email address: [email protected]
**Postal address: Department of Mathematics, University of Padua, Italy. Email address: [email protected]
***Postal address: Department of Statistics, Universidad Carlos III de Madrid, Spain. Email address: [email protected]

Abstract

We solve the non-discounted, finite-horizon optimal stopping problem of a Gauss–Markov bridge by using a time-space transformation approach. The associated optimal stopping boundary is proved to be Lipschitz continuous on any closed interval that excludes the horizon, and it is characterized by the unique solution of an integral equation. A Picard iteration algorithm is discussed and implemented to exemplify the numerical computation and geometry of the optimal stopping boundary for some illustrative cases.

Type
Original Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

The problem of optimally stopping a Markov process to attain a maximum mean reward dates back to Wald’s sequential analysis [Reference Wald74] and is consolidated in the work of [Reference Dynkin31]. Ever since, it has received increasing attention from numerous theoretical and practical perspectives, as comprehensively compiled in the book of [Reference Peskir and Shiryaev65]. However, optimal stopping problems (OSPs) are mathematically complex objects, which makes it difficult to obtain sound results in general settings and typically leads to requiring smoothness conditions and simplifying assumptions for their solution. One of the most popular simplifying assumptions is the time-homogeneity of the underlying Markovian process.

Time-inhomogeneous diffusions can be cast back to time-homogeneity (see, e.g., [Reference Dochviri30, Reference Shiryaev70, Reference Taylor72]) at the cost of increasing the dimension of the OSP, which increases its complexity, hampering subsequent derivations or limiting studies to tackling specific, simplified time dependencies. Take as examples the works of [Reference Krylov52, Reference Oshima59, Reference Yang76], which proved different types of continuities and characterizations of the value function; those of [Reference Friedman40, Reference Jacka and Lynn48], which shed light on the shape of the stopping set; and [Reference Friedman39, Reference Peskir64], which studied the smoothness of the associated free boundary. To mitigate the burden of time-inhomogeneity, many of these works ask for the process’s coefficients to be Lipschitz continuous or at least bounded. This widespread assumption excludes important classes of time-dependent processes, such as diffusion bridges, whose drifts explode as time approaches a terminal point.

In a broad and rough sense, bridge processes, or bridges for short, are stochastic processes ‘anchored’ to deterministic values at some initial and terminal time points. Formal definitions and potential applications of different classes of bridges have been extensively studied. Bessel and Lévy bridges are respectively described by [Reference Pitman and Yor66, Reference Salminen68], and by [Reference Erickson and Steck34, Reference Hoyle, Hughston and Macrina47]. A canonical reference for Gaussian bridges can be found in the work of [Reference Gasbarra, Sottinen and Valkeila41], while Markov bridges are addressed in great generality by [Reference Çetin and Danilova16, Reference Chaumont and Bravo18, Reference Fitzsimmons, Pitman and Yor36].

In finance, diffusion bridges are appealing models from the perspective of a trader who wants to incorporate his beliefs about future events, for example in trading perishable commodities, modeling the presence of arbitrage, incorporating forecasts from algorithms and expert predictions, or trading mispriced assets that could rapidly return to their fair price. Works that consider models based on a Brownian bridge (BB) to address these and other insider trading situations include [Reference Angoshtari and Leung3, Reference Back4, Reference Brennan and Schwartz10, Reference Campi and Çetin12Reference Cartea, Jaimungal and Kinzebulatov15, Reference Çetin and Xing17, Reference Chen, Leung and Zhou20, Reference Kyle53, Reference Liu and Longstaff55, Reference Sottinen and Yazigi71]. The early work of [Reference Boyce9] had already suggested the use of a BB to model the perspective of an investor who wants to optimally sell a bond. Recently, [Reference D’Auria, García-Portugués and Guada23] applied a BB to optimally exercise an American option in the presence of the so-called stock-pinning effect (see [Reference Golez and Jackwerth43, Reference Krishnan and Nelken50, Reference Ni, Pearson and Poteshman57, Reference Ni, Pearson, Poteshman and White58]), obtaining competitive empirical results when compared to the classic Black–Scholes model. On the other hand, [Reference Hilliard and Hilliard45] used an Ornstein–Uhlenbeck bridge (OUB) to model the effect of short-lived arbitrage opportunities in pricing an American option, relying on a binomial-tree numerical method instead of deriving analytical results.

Non-financial applications of BBs include their adoption to model animal movement (see [Reference Horne, Garton, Krone and Lewis46, Reference Kranstauber49, Reference Krumm51, Reference Venek, Brunauer and Schneider73]), and their construction as a limit case of sequentially drawing elements without replacement from a large population (see [Reference Rosén67]). The latter connection makes BBs good asymptotic models for classical statistical problems, such as variations of the urn problem (see [Reference Andersson2, Reference Chen, Grigorescu and Kang19, Reference Ekström and Wanntorp33]).

Whenever the goal is to optimize the time at which to take an action, all of the aforementioned situations in which BBs, OUBs, or diffusion bridges are applicable can be intertwined with optimal stopping theory. However, within the time-inhomogeneous realm, diffusion bridges are particularly challenging to treat with classical optimal stopping tools, as they feature explosive drifts. It comes as no surprise, then, that the literature addressing this topic is sparse compared to its non-bridge counterpart. The first incursion into OSPs with diffusion bridges is in the work of Shepp [Reference Shepp69], who solved the OSP of a BB by linking it to that of a simpler Brownian motion (BM) representation. More recent studies of OSPs with diffusion bridges continue to revolve around variations of the BB. The works [Reference Ekström and Wanntorp33, Reference Ernst and Shepp35] revisited Shepp’s problem with novel methods of solution. In particular, [Reference De Angelis and Milazzo26, Reference Ekström and Wanntorp33] widened the class of gain functions; [Reference D’Auria, García-Portugués and Guada23] considered the (exponentially) discounted version; and [Reference Ekström and Vaicenavicius32, Reference Föllmer37, Reference Glover42, Reference Leung, Li and Li54] introduced randomization in either the terminal time or the pinning point. To the best of our knowledge, the only solution to an OSP with diffusion bridges that goes beyond the BB is that of [Reference Azze, D’Auria and García-Portugués24], which extends Shepp’s technique to embrace an OUB.

Both the BB and the OUB belong to the class of Gauss–Markov bridges (GMBs), that is, bridges that simultaneously exhibit the Markovian and Gaussian properties. Because of their enhanced tractability and wide applicability, these processes have been in the spotlight for some decades, especially in recent years. A good compendium of works related to GMBs can be found in [Reference Abrahams and Thomas1, Reference Barczy and Kern5Reference Barczy and Kern7, Reference Buonocore, Caputo, Nobile and Pirozzi11, Reference Chen and Georgiou21, Reference Hildebrandt and Rœlly44].

In this paper we solve the finite-horizon OSP of a GMB. In doing so, we generalize not only Shepp’s result for the BB case, but also its methodology. Indeed, the same type of transformation that casts a BB into a BM is embedded in a more general change-of-variable method for solving OSPs, which is detailed in [Reference Peskir and Shiryaev65, Section 5.2] and illustratively used in [Reference Pedersen and Peskir60] for nonlinear OSPs. When the GM process is also a bridge, such a representation presents regularities that we show are useful to overcome the bridges’ explosive drifts. Loosely, the drift’s divergence is equated to that of a time-transformed BM and then explained in terms of the laws of iterated logarithms. This trick allows us to work out the solution of an equivalent infinite-horizon OSP with a time-space transformed BM underneath, and then cast the solution back into the original terms. The solution is attained, in a probabilistic fashion, by proving that both the value function and the optimal stopping boundary (OSB) are regular enough to meet the premises of a relaxed Itô’s lemma that allows us to derive the free-boundary equation. In particular, we prove the Lipschitz continuity of the OSB, which we use to derive the global continuous differentiability of the value function and, consequently, the smooth-fit condition. The free-boundary equation is given in terms of a Volterra-type integral equation with a unique solution.

For enriched perspective and a full view of the reach of GMBs, we provide, in addition to the BM representation, a third angle from which GMBs can be seen: as time-inhomogeneous OUBs. Hence our work also extends the work of [Reference Azze, D’Auria and García-Portugués24] for a time-independent OUB. This OUB representation is arguably more appealing for numerical exploration of the OSB’s shape, which is done by using a Picard iteration algorithm that solves the free-boundary equation. The OSB exhibits a trade-off between two pulling forces, the one towards the mean-reverting level of the OUB representation, and the other anchoring the process at the horizon. The numerical results also reveal that the OSB is not monotonic in general, making this paper one of the few results in the optimal stopping literature that characterizes non-monotonic OSBs in a general framework.

The rest of this paper is organized as follows. Section 2 establishes four equivalent definitions of GMBs, including the time-space transformed BM representation. Section 3 introduces the finite-horizon OSP of a GMB and proves its equivalence to that of an infinite-horizon, time-dependent gain function, and a BM underneath. The auxiliary OSP is then treated in Section 4 as a standalone problem. This section also accounts for the main technical work of the paper, where classical and new techniques of optimal stopping theory are combined to obtain the solution of the OSP. This solution is then translated back into original terms in Section 5, where the free-boundary equation is provided. Section 6 discusses the practical aspects of numerically solving the free-boundary equation and shows computer drawings of the OSB. Final remarks are given in Section 7.

2. Gauss–Markov bridges

Both Gaussian and Markovian processes exhibit features that are appealing from the theoretical, computational, and applied viewpoints. Gauss–Markov (GM) processes, that is, processes that are Gaussian and Markovian at the same time, merge the advantages of these two classes. They inherit the convenient Markovian lack of memory and the Gaussian processes’ property of being characterized by their mean and covariance functions. Additionally, the Markovianity of Gaussian processes is equivalent to the property that their covariances admit a certain ‘factorization’. The following lemma collects this useful characterization, whose proof follows from the lemma on page 863 of [Reference Borisov8], and Theorem 1 and Remarks 1 and 2 in [Reference Mehr and McFadden56].

Here and subsequently, when we mention a non-degenerate GM process in an interval, we mean that its marginal distributions are non-degenerate in that interval. In addition, we always consider GM processes with respect to their natural filtrations.

Lemma 1. (Characterization of non-degenerate GM processes.)

A function $R\;:\;[0, T]^2\rightarrow\mathbb{R}$ such that $R(t_1, t_2) \neq 0$ for all $t_1, t_2\in(0, T)$ is the covariance function of a non-degenerate GM process in (0, T) if and only if there exist functions $r_1, r_2\;:\;[0, T]\rightarrow\mathbb{R}$, which are unique up to a multiplicative constant, such that

  (i) $R(t_1, t_2) = r_1(t_1\wedge t_2)r_2(t_1\vee t_2)$;

  (ii) $r_1(t) \neq 0$ and $r_2(t) \neq 0$ for all $t\in(0, T)$;

  (iii) $r_1/r_2$ is positive and strictly increasing on (0, T).

Moreover, $r_1$ and $r_2$ take the form

(1) \begin{align} r_1(t) = \begin{cases} R(t, t'), & t\leq t', \\[5pt] R(t, t)R(t', t')/R(t', t), & t > t', \end{cases} \quad r_2(t) = \begin{cases} R(t, t)/R(t, t'), & t\leq t', \\[5pt] R(t', t)/R(t', t'), & t > t', \end{cases} \end{align}

for some $t'\in(0, T)$. Changing $t'$ is equivalent to scaling $r_1$ and $r_2$ by a constant factor.

We say that the functions $r_1$ and $r_2$ in Lemma 1 are a factorization of the covariance function R. The lemma provides a simple technique to construct GM processes with ad hoc covariance functions that are not necessarily time-homogeneous. This is particularly useful given the complexity of proving the positive-definiteness of an arbitrary function to check its validity as a covariance function. GM processes also admit a simple representation by means of time-space transformed BMs (see, e.g., [Reference Mehr and McFadden56]), which results in higher tractability. Moreover, viewed through the lens of diffusions, GM processes account for space-linear drifts and space-independent volatilities, both coefficients being time-dependent (see, e.g., [Reference Buonocore, Caputo, Nobile and Pirozzi11]).
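As a concrete check of Lemma 1 (an illustrative sketch, not part of the original text), consider the Brownian bridge from (0, 0) to (1, 0), whose covariance $R(t_1, t_2) = (t_1\wedge t_2)(1 - t_1\vee t_2)$ factorizes with $r_1(t) = t$ and $r_2(t) = 1 - t$. The snippet below evaluates formula (1) at a reference point $t' = 0.5$ and verifies that it recovers $r_1$ and $r_2$ up to multiplicative constants, with $r_1/r_2$ positive and strictly increasing:

```python
import numpy as np

# Covariance of a Brownian bridge from (0, 0) to (1, 0):
# R(t1, t2) = min(t1, t2) * (1 - max(t1, t2)), factorized by r_1(t) = t, r_2(t) = 1 - t.
R = lambda t1, t2: min(t1, t2) * (1.0 - max(t1, t2))

tp = 0.5  # the reference point t' in formula (1)

def r1(t):
    # First branch of (1): R(t, t') for t <= t'; R(t, t) R(t', t') / R(t', t) otherwise.
    return R(t, tp) if t <= tp else R(t, t) * R(tp, tp) / R(tp, t)

def r2(t):
    # Second branch of (1): R(t, t) / R(t, t') for t <= t'; R(t', t) / R(t', t') otherwise.
    return R(t, t) / R(t, tp) if t <= tp else R(tp, t) / R(tp, tp)

ts = np.linspace(0.05, 0.95, 19)
# (1) recovers the factorization up to multiplicative constants:
c1 = r1(ts[0]) / ts[0]            # constant relating the recovered r_1 to t
c2 = r2(ts[0]) / (1.0 - ts[0])    # constant relating the recovered r_2 to 1 - t
assert np.allclose([r1(t) for t in ts], c1 * ts)
assert np.allclose([r2(t) for t in ts], c2 * (1.0 - ts))
# Condition (iii): r_1 / r_2 is positive and strictly increasing on (0, 1).
h = np.array([r1(t) / r2(t) for t in ts])
assert np.all(h > 0) and np.all(np.diff(h) > 0)
```

Since (1) only pins down the factorization up to a constant, the recovered functions here are $t/2$ and $2(1-t)$; their product still equals R.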

A Gauss–Markov bridge (GMB) is a process that results from ‘conditioning’ (see, e.g., [Reference Gasbarra, Sottinen and Valkeila41] for a formal definition) a GM process to start and end at some initial and terminal points. It is straightforward to see that the Markovian property is preserved after conditioning. The bridge process also inherits the Gaussian property, although this is not as obvious (see, e.g., [Reference Williams and Rasmussen75, Formula A.6] or [Reference Buonocore, Caputo, Nobile and Pirozzi11]). Hence the above-mentioned conveniences of GM processes are inherited by GMBs. In particular, the time-space transformed BM representation takes a specific form that characterizes GMBs and forms the backbone of our main results. The following proposition sheds light on that representation and serves to formally define a GMB as well as to offer different characterizations.

Proposition 1. (Gauss–Markov bridges.)

Let $\smash{X = \{X_u\}_{u\in [0, T]}}$ be a GM process defined on the probability space $(\Omega, \mathcal{F}, \textsf{P})$, for some $T > 0$. The following statements are equivalent:

  (i) There exists a time-continuous GM process, non-degenerate on [0, T], defined on $(\Omega, \mathcal{F}, \textsf{P})$, and denoted by $\smash{\widetilde{X} = \{\widetilde{X}_u\}_{u\in[0, T]}}$, whose mean and covariance functions are twice continuously differentiable, and such that

    \begin{align*} \mathrm{Law}(X, \textsf{P}) = \mathrm{Law}(\widetilde{X}, \textsf{P}_{x, T, z}), \end{align*}
    with $\textsf{P}_{x, T, z}(\cdot) = \textsf{P}(\cdot\,|\,\widetilde{X}_0 = x, \widetilde{X}_T = z)$ for some $x\in\mathbb{R}$ and $(T, z)\in\mathbb{R}_+\times\mathbb{R}$.
  (ii) Let $m(t) \;:\!=\; \textsf{E}\left[ X_t\right]$ and $R(t_1, t_2) \;:\!=\; \textsf{C}\mathrm{ov}\left[ X_{t_1},X_{t_2}\right]$, where $\textsf{E}$ and $\textsf{C}\mathrm{ov}$ are the mean and covariance operators related to $\textsf{P}$. Then $t \mapsto m(t)$ is twice continuously differentiable, and there exist functions $r_1$ and $r_2$ that are unique up to multiplicative constants and such that

    (ii.1) $R(t_1, t_2) = r_1(t_1\wedge t_2)r_2(t_1\vee t_2)$;

    (ii.2) $r_1(t) \neq 0$ and $r_2(t) \neq 0$ for all $t\in(0, T)$;

    (ii.3) $r_1/r_2$ is positive and strictly increasing on (0, T);

    (ii.4) $r_1(0) = r_2(T) = 0$;

    (ii.5) $r_1$ and $r_2$ are twice continuously differentiable;

    (ii.6) $r_1(T) \neq 0$ and $r_2(0) \neq 0$.

  (iii) X admits the representation

    (2) \begin{align} \left\{ \begin{aligned} X_t &= \alpha(t) + \beta_T(t)\left((z - \alpha(T))\gamma_T(t) + \left(B_{\gamma_T(t)} + \frac{x - \alpha(0)}{\beta_T(0)}\right)\right),\; t\in[0, T), \\[5pt] X_{T} &= z, \end{aligned} \right. \end{align}
    where $\left\{B_u\right\}_{u\in\mathbb{R}_+}$ is a standard BM, and $\alpha\;:\;[0, T]\rightarrow\mathbb{R}$, $\beta_T\;:\;[0, T]\rightarrow\mathbb{R}_+$, and $\gamma_T\;:\;[0, T)\rightarrow\mathbb{R}_+$ are twice continuously differentiable functions such that the following hold:
    (iii.1) $\beta_T(T) = \gamma_T(0) = 0$;

    (iii.2) $\gamma_T$ is monotonically increasing;

    (iii.3) $\lim_{t\rightarrow T} \gamma_T(t) = \infty$ and $\lim_{t\rightarrow T} \beta_T(t)\gamma_T(t) = 1$.

  (iv) The process X is the unique strong solution of the following OUB stochastic differential equation (SDE):

    (3) \begin{align} \mathrm{d} X_t = \theta(t)(\kappa(t) - X_t)\,\mathrm{d} t + \nu(t)\,\mathrm{d} B_t,\quad t\in(0, T), \end{align}
    with initial condition $X_0 = x$. Here $\left\{B_t\right\}_{t\in\mathbb{R}_+}$ is a standard BM, and $\theta\;:\;[0, T)\rightarrow\mathbb{R}_+$, $\kappa\;:\;[0, T]\rightarrow\mathbb{R}$, and $\nu\;:\;[0, T]\rightarrow\mathbb{R}_+$ are continuously differentiable functions such that the following hold:
    (iv.1) $\lim_{t\rightarrow T}\int_0^t\theta(u)\,\mathrm{d} u = \infty$;

    (iv.2) $\nu^2(t) = \theta(t)\exp\left\{-\int_0^t\theta(u)\,\mathrm{d} u\right\}$, or equivalently $\theta(t) = \nu^2(t)\big/\int_t^T\nu^2(u)\,\mathrm{d} u$.

Proof. (i) $\Longrightarrow$ (ii): X is a non-degenerate GM process on (0, T), as it arises from conditioning a process with the same qualities to take deterministic values at $t = 0$ and $t = T$. Hence, Lemma 1 guarantees that $R(t_1, t_2) \;:\!=\; \textsf{C}\mathrm{ov}\left[ X_{t_1},X_{t_2}\right]$ meets the conditions (ii.1)–(ii.3). Since X degenerates at $t = 0$ and $t = T$, and by (ii.1), the condition (ii.4) holds true. From the twice continuous differentiability (with respect to both variables) of the covariance function of $\widetilde{X}$, we deduce the same property for X, which, alongside (1), implies (ii.5).

We now prove (ii.6). Let $\widetilde{m}, \widetilde{r}_1, \widetilde{r}_2\;:\;[0, T]\rightarrow\mathbb{R}$ be the mean and the covariance factorization of $\widetilde{X}$. Hence (see, e.g., [Reference Williams and Rasmussen75, Formula A.6] or [Reference Buonocore, Caputo, Nobile and Pirozzi11]),

(4) \begin{align} m(t) = \widetilde{m}(t) + (x - \widetilde{m}(0))\frac{r_2(t)}{r_2(0)} + (z - \widetilde{m}(T))r_1(t),\quad t\in[0, T), \end{align}

and

(5) \begin{align} \left\{ \begin{aligned} r_1(t) &= \frac{\widetilde{r}_1(t)\widetilde{r}_2(0) - \widetilde{r}_1(0)\widetilde{r}_2(t)}{\widetilde{r}_1(T)\widetilde{r}_2(0) - \widetilde{r}_1(0)\widetilde{r}_2(T)}, \\[5pt] r_2(t) &= \widetilde{r}_1(T)\widetilde{r}_2(t) - \widetilde{r}_1(t)\widetilde{r}_2(T). \end{aligned} \right. \end{align}

From the continuity of $\widetilde{R}$ and the representation (1) we obtain the continuity of $\widetilde{r}_1/\widetilde{r}_2$. Note that $\widetilde{r}_2$ does not vanish at $t = 0$ and $t = T$, thanks to the non-degenerate nature of $\widetilde{X}$ at both boundary points. Hence we can extend the fact that $\widetilde{r}_1/\widetilde{r}_2$ is increasing, which was established in (iii) of Lemma 1, to $t = 0$ and $t = T$, which implies that $\widetilde{r}_1(T)\widetilde{r}_2(0) - \widetilde{r}_1(0)\widetilde{r}_2(T) > 0$. Therefore (5) implies $r_1(T) = 1$ and $r_2(0) > 0$. This does not mean that $r_1(T)$ and $r_2(0)$ must be positive, as $-r_1$ and $-r_2$ are also a factorization of R, but it does imply (ii.6).

(ii) $\Longrightarrow$ (i): Consider the functions

(6) \begin{align} \widetilde{m}(t) &\;:\!=\; m(t) - (x - m_1)\frac{r_2(t)}{r_2(0)} - (z - m_2)r_1(t), \quad t \in(0, T), \end{align}

with $\widetilde{m}(0) \;:\!=\; m_1$ and $\widetilde{m}(T) \;:\!=\; m_2$ for $m_1, m_2 \in\mathbb{R}$, and

(7) \begin{align} \widetilde{r}_1(t) \;:\!=\; a r_1(t) + b r_2(t), \qquad \widetilde{r}_2(t) \;:\!=\; c r_1(t) + d r_2(t), \quad t \in[0, T], \end{align}

for $a, b, c, d > 0$ and such that $ad > bc$. This relation is satisfied, for instance, if we set $a = b = c = 1$ and $d = 2$. We can divide by $r_2(0)$ in (6) since (ii.6) holds true. Let $h(t) \;:\!=\; r_1(t)/r_2(t)$ and $\widetilde{h}(t) \;:\!=\; \widetilde{r}_1(t)/\widetilde{r}_2(t)$. We get $\widetilde{h}(t) = (ah(t) + b)/(ch(t) + d)$ from (7). Hence

\begin{align*} \widetilde{h}'(t) > 0 \Longleftrightarrow h'(t)\left(ad - bc\right) > 0. \end{align*}

The condition (ii.3) along with our choice of a, b, c, and d guarantees that the right-hand side of the equivalence holds. Therefore, $\widetilde{h}(t)$ is strictly increasing. Since $\widetilde{h}$ is also positive, $\widetilde{R}(t_1, t_2) \;:\!=\; \widetilde{r}_1(t_1\wedge t_2)\widetilde{r}_2(t_1\vee t_2)$ is the covariance function of a non-degenerate GM process, as stated in Lemma 1. Let $\widetilde{X} = \{\widetilde{X}_t\}_{t\in [0, T]}$ be a GM process with mean $\widetilde{m}(t)$ and covariance $\widetilde{R}(t_1, t_2)$. From the differentiability of m, $r_1$, and $r_2$, alongside (6) and (7), we deduce that of $\widetilde{m}$, $\widetilde{r}_1$, and $\widetilde{r}_2$ (and $\widetilde{R}$).

One can check, after some straightforward algebra and in alignment with (4)–(5), that the mean and covariance functions of the GMB derived from conditioning $\widetilde{X}$ to go from (0, x) to (T, z) coincide with m and R.

(i) $\Longrightarrow$ (iii): Let $\widetilde{m}(t) \;:\!=\; \textsf{E}[\widetilde{X}_t]$ and $\widetilde{R}(t_1, t_2) \;:\!=\; \textsf{C}\mathrm{ov}\left[ \widetilde{X}_{t_1},\widetilde{X}_{t_2}\right]$. As a result of conditioning $\widetilde{X}$ to have initial and terminal points (0, x) and (T, z), we have that X is a GM process with mean m given by (4) and covariance factorization $r_1$ and $r_2$ given by (5). Although this is not explicitly indicated, recall that m depends on x, T, and z, and $r_1$ and $r_2$ depend on T.

Therefore, X admits the representation

(8) \begin{align} X_t = m(t) + r_2(t)B_{h(t)},\quad 0\leq t < T, \end{align}

where $t\mapsto h(t) \;:\!=\; r_1(t)/r_2(t)$ is a strictly increasing function such that $h(0) = 0$ and $\lim_{t\rightarrow T}h(t) = \infty$. Since $\lim_{t\rightarrow T}r_2(t)h(t) = r_1(T) = 1$ (see (5)), the law of the iterated logarithm allows us to continuously extend $X_t$ to T as the $\textsf{P}$-almost sure (a.s.) limit $X_T \;:\!=\; \lim_{t\rightarrow T}X_t = z$. The representation (2) and the properties (iii.1)–(iii.3) follow after taking $\alpha = \widetilde{m}$, $\beta_T = r_2$, and $\gamma_T = h$. It also follows that $\alpha$, $\beta_T$, and $\gamma_T$ are twice continuously differentiable, as are $\widetilde{m}$, $\widetilde{r}_1$, and $\widetilde{r}_2$.

(iii) $\Longrightarrow$ (ii): Assuming that $X = \{X_t\}_{t\in[0, T]}$ admits the representation (2) and that the properties (iii.1)–(iii.3) hold, we have that X is a GMB with covariance factorization given by $r_1(t) = \beta_T(t)\gamma_T(t)$ and $r_2(t) = \beta_T(t)$. It readily follows that $r_1$ and $r_2$ satisfy the conditions (ii.1)–(ii.6). It is also straightforward to check that X has a twice continuously differentiable mean.

(i) $\Longrightarrow$ (iv): Let $\textsf{E}_{t, x}$ and $\mathbb{E}_{s, y}$ be the mean operators with respect to the probability measures $\textsf{P}_{t, x}$ and $\mathbb{P}_{s, y}$ such that $\textsf{P}_{t, x}(\cdot) = \textsf{P}(\cdot\,|\,X_t = x)$ and $\mathbb{P}_{s, y}(\cdot) = \mathbb{P}(\cdot\,|\,B_s = y)$, where $\left\{B_u\right\}_{u\in\mathbb{R}_+}$ is the BM in the representation (8). Then

\begin{align*} \mathrm{Law}\left(\left\{X_u\right\}_{u\in [t, T)}, \textsf{P}_{t, x}\right) = \mathrm{Law}\left(\left\{m(u) + r_2(u)B_{h(u)}\right\}_{u\in[t, T)}, \mathbb{P}_{s, y}\right), \end{align*}

for $s = h(t)$ and $y = (x - m(t))/r_2(t)$. Hence

\begin{align*} \textsf{E}_{t, x}\left[ X_{t+\varepsilon} - x\right] &= \mathbb{E}_{s, y}\left[ m(t+\varepsilon) + r_2(t+\varepsilon)B_{h(t+\varepsilon)} - x\right] \\[5pt] &= \mathbb{E}_{s, y}\left[ m(t+\varepsilon) + \frac{r_2(t+\varepsilon)}{r_2(t)}(x- m(t)) + r_2(t+\varepsilon)B_{h(t+\varepsilon)-h(t)} - x\right]. \end{align*}

Likewise,

\begin{align*} \textsf{E}_{t, x}\left[ (X_{t + \varepsilon} - x)^2\right] &= \mathbb{E}_{s, y}\left[ \left(m(t+\varepsilon) + \frac{r_2(t+\varepsilon)(x - m(t))}{r_2(t)} + r_2(t+\varepsilon)B_{h(t+\varepsilon) - h(t)} - x\right)^2\right] \\[5pt] &= \left(m(t+\varepsilon) + \frac{r_2(t+\varepsilon)(x - m(t))}{r_2(t)} - x\right)^2 + r_2^2(t+\varepsilon)(h(t+\varepsilon) - h(t)). \end{align*}

Therefore,

\begin{align*} \lim_{\varepsilon\downarrow 0}\varepsilon^{-1}\textsf{E}_{t, x}\left[ X_{t+\varepsilon} - x\right] &= m'(t) + (x - m(t))r^{\prime}_{2}(t)/r_2(t), \\[5pt] \lim_{\varepsilon\downarrow 0}\varepsilon^{-1}\textsf{E}_{t, x}\left[ \left(X_{t+\varepsilon} - x\right)^2\right] &= r_2^2(t)h'(t). \end{align*}

By comparing the drift and volatility terms, we find that X is the unique strong solution (see Example 2.3 in [Reference Çetin and Danilova16]) of the SDE (3) for

(9) \begin{align} \left\{ \begin{aligned} \theta(t) &= -r^{\prime}_{2}(t)/r_2(t), \\[5pt] \kappa(t) &= m(t) - m'(t)r_2(t)/r^{\prime}_{2}(t), \\[5pt] \nu(t) &= r_2(t)\sqrt{h'(t)}. \end{aligned} \right. \end{align}

It follows from (9) (or can be derived directly from (3)) that

(10) \begin{align} m(t) &= \varphi(t)\left(x + \int_0^t \frac{\kappa(u)\theta(u)}{\varphi(u)}\,\mathrm{d} u\right) \end{align}
(11) \begin{align} &= \varphi(t)\left(x + \int_0^t \frac{\widetilde{m}(u)\theta(u) - \widetilde{m}'(u)}{\varphi(u)}\,\mathrm{d} u + (z - \widetilde{m}(T))\int_0^t \frac{r_1(u)\theta(u) - r^{\prime}_{1}(u)}{\varphi(u)}\,\mathrm{d} u\right) \end{align}

and

(12) \begin{align} r_1(t) = \varphi(t)\int_0^t\frac{\nu^2(u)}{\varphi^2(u)}\,\mathrm{d} u,\quad r_2(t) = \varphi(t), \end{align}

for $t\in[0, T)$, with $\varphi(t) = \exp\left\{-\int_0^t \theta(u)\,\mathrm{d} u\right\}$. Since X is degenerate at $t = T$, we have $r_2(T) = 0$; that is, $\lim_{t\rightarrow T}\varphi(t) = 0$, which implies (iv.1). By comparing (11) with (4), we obtain

\begin{align*} r_1(t) = \varphi(t)\int_0^t \frac{r_1(u)\theta(u) - r^{\prime}_{1}(u)}{\varphi(u)}\,\mathrm{d} u = 2\varphi(t)\int_0^t \frac{r_1(u)\theta(u)}{\varphi(u)}\,\mathrm{d} u - r_1(t), \end{align*}

which, after using (12), leads to

\begin{align*} \int_0^t \frac{\nu^2(u)}{\varphi^2(u)}\,\mathrm{d} u = \int_0^t \frac{r_1(u)\theta(u)}{\varphi(u)}\,\mathrm{d} u. \end{align*}

Differentiating both sides of the equation above with respect to t, and relying again on (12), we get

\begin{align*} \frac{\nu^2(t)}{\varphi^2(t)}=\theta(t)\int_0^t\frac{\nu^2(u)}{\varphi^2(u)}\,\mathrm{d} u. \end{align*}

The expression above is an ordinary differential equation in $f(t) = \int_0^t\nu^2(u)/\varphi^2(u)\,\mathrm{d} u$ whose solution is $f(t) = C_1 + 1/\varphi(t)$ for some constant $C_1$. Hence $f'(t) = \theta(t)/\varphi(t)$. Therefore, some straightforward algebra leads us to the first equality in (iv.2), which implies that

\begin{align*} \int_0^t\nu^2(u)\,\mathrm{d} u = C_2 + \int_0^t\theta(u)\varphi(u)\,\mathrm{d} u = C_2 + 1 - \varphi(t), \end{align*}

for a constant $C_2\in\mathbb{R}$ . Since $\lim_{t\rightarrow T}\varphi(t) = 0$ , we have $C_2 = \int_0^T\nu^2(u)\,\mathrm{d} u - 1$ . Hence

\begin{align*} \int_0^t\theta(u)\,\mathrm{d} u &= -\ln\left(C_2 + 1 - \int_0^t\nu^2(u)\,\mathrm{d} u\right), \end{align*}

from which the second equality in (iv.2) follows after differentiating.

Finally, from the smoothness of $\widetilde{m}$ , $\widetilde{r}_1$ , and $\widetilde{r}_2$ , which implies that of m, $r_1$ , and $r_2$ , it follows that $\theta$ , $\kappa$ , and $\nu$ are continuously differentiable.

(iv) $\Longrightarrow$ (ii): The functions $\theta$, $\kappa$, and $\nu$ are sufficiently regular for us to prove, using Itô’s lemma, that

\begin{align*} X_t = \varphi(t)\left(X_0 + \int_0^t\frac{\kappa(u)\theta(u)}{\varphi(u)}\,\mathrm{d} u + \int_0^t\frac{\nu(u)}{\varphi(u)}\,\mathrm{d} B_u\right) \end{align*}

is the unique strong solution (see Example 2.3 in [Reference Çetin and Danilova16]) of (3), where again $\varphi(t) = \exp\left\{-\int_0^t \theta(u)\,\mathrm{d} u\right\}$ . That is, X is a GM process with mean m and covariance factorization $r_1$ and $r_2$ given by (10) and (12), respectively.

The relations (ii.2) and (ii.3) are trivial to check. From (iv.1), (ii.4) follows. The continuous differentiability of $\theta$, $\kappa$, and $\nu$ implies (ii.5). Using (iv.2) and integrating by parts, we get that

(13) \begin{align} r_1(t) = 1 - \varphi(t). \end{align}

It follows that (ii.6) holds, as $r_1(T) \;:\!=\; \lim_{t\rightarrow T}r_1(t) = 1$ and $r_2(0) = 1$.
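To make the equivalences in Proposition 1 tangible, the following sketch (illustrative, not part of the original text) instantiates the representation (2) for the standard Brownian bridge from (0, x) to (1, z), taking $\alpha \equiv 0$, $\beta_T(t) = 1 - t$, and $\gamma_T(t) = t/(1-t)$, under which (2) reduces to the classical formula $X_t = x(1-t) + zt + (1-t)B_{t/(1-t)}$. It checks the properties (iii.1)–(iii.3) and the bridge marginals by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)

# Brownian bridge from (0, x) to (1, z) via representation (2):
# alpha = 0, beta(t) = 1 - t, gamma(t) = t / (1 - t), so that
# X_t = beta(t) * (z * gamma(t) + B_{gamma(t)} + x / beta(0)) = x(1-t) + z t + (1-t) B_{t/(1-t)}.
x, z, t = 0.3, -0.7, 0.6
beta = lambda s: 1.0 - s
gamma = lambda s: s / (1.0 - s)

# (iii.1)-(iii.3): beta(1) = gamma(0) = 0, gamma increasing, beta * gamma -> 1 as t -> 1.
assert beta(1.0) == 0.0 and gamma(0.0) == 0.0
assert gamma(0.9) > gamma(0.5) > gamma(0.1)
assert beta(1.0 - 1e-9) * gamma(1.0 - 1e-9) > 1.0 - 1e-6

# Monte Carlo draw of X_t: B_{gamma(t)} ~ N(0, gamma(t)) since B is a standard BM.
n = 200_000
B = rng.normal(0.0, np.sqrt(gamma(t)), size=n)
X = beta(t) * (z * gamma(t) + B + x / beta(0.0))

# Bridge marginals: E[X_t] = x(1-t) + z t and Var[X_t] = t(1-t).
assert abs(X.mean() - (x * (1 - t) + z * t)) < 5e-3
assert abs(X.var() - t * (1 - t)) < 5e-3
```

The pinning at the horizon appears through (iii.3): although $\gamma_T(t)\to\infty$, the factor $\beta_T(t)$ crushes the fluctuations of $B_{\gamma_T(t)}$, so $\beta_T(t)\gamma_T(t)\to 1$ forces $X_t \to z$.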

Remark 1. From the condition (iv.2) and the relation (9), we get that $r^{\prime}_{2}(t)r_2(t) < 0$ for all $t\in(0, T)$. Hence, since $r_2$ is continuous and does not vanish in [0, T), it can be chosen as either positive and decreasing, or negative and increasing. In (5), the positive decreasing version is chosen, which is reflected by the fact that we assume $\beta_T > 0$ in the representation (2). Since $\beta_T = r_2$, we have that $\beta_T$ is also decreasing. Likewise, (5) and (13) indicate that $r_1$ is chosen as positive and increasing.

One could argue that defining a GMB should only require the process to degenerate at $t = 0$ and $t = T$, which is equivalent to (ii.1)–(ii.4). However, GMBs defined in this way are not necessarily derived from conditioning a GM process, as assumed in the representation (i). Indeed, consider the Gaussian process $X = \{X_t\}_{t\in[0, 1]}$ with zero mean and covariance function $R(t_1, t_2) = r_1(t_1\wedge t_2)r_2(t_1 \vee t_2)$ for all $t_1, t_2 \in [0, 1]$, where $r_1(t) = t^2(1-t)$ and $r_2(t) = t(1 - t)$. Lemma 1 implies that R is a valid covariance function and X is Markovian. Moreover, since $r_1(0) = r_2(1) = 0$, X is a bridge from (0, 0) to (1, 0). However, $r_1(0) = r_2(0) = 0$; that is, (ii.6) fails, and hence X does not satisfy the definition (ii). Recognizing the differences between the two definitions of GMBs, we adopt the one in which a GM process is conditioned to take deterministic values at some initial and future time, since the representation (2) is key to our results in Section 4: it reveals the (linear) dependence of the mean with respect to x and z, and it clarifies the relationship between OUBs and GMBs in (iv).
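The counterexample can be verified numerically (an illustrative sketch): the factorization $r_1(t) = t^2(1-t)$, $r_2(t) = t(1-t)$ yields a positive semidefinite covariance matrix on a grid, so R is a genuine covariance function, yet $r_2(0) = 0$ (and $r_1(1) = 0$), so (ii.6) fails:

```python
import numpy as np

# Counterexample: r_1(t) = t^2 (1 - t), r_2(t) = t (1 - t) on [0, 1].
r1 = lambda t: t**2 * (1.0 - t)
r2 = lambda t: t * (1.0 - t)

# (ii.4) holds: the process degenerates at both endpoints ...
assert r1(0.0) == 0.0 and r2(1.0) == 0.0
# ... and (ii.3) holds: r_1 / r_2 = t is positive and strictly increasing on (0, 1).
ts = np.linspace(0.05, 0.95, 19)
assert np.all(np.diff([r1(t) / r2(t) for t in ts]) > 0)

# R(t1, t2) = r_1(t1 ^ t2) r_2(t1 v t2) is positive semidefinite on the grid,
# consistent with it being a valid covariance function (Lemma 1).
T1, T2 = np.meshgrid(ts, ts)
R = np.where(T1 <= T2, r1(T1) * r2(T2), r1(T2) * r2(T1))
assert np.min(np.linalg.eigvalsh(R)) > -1e-10

# But (ii.6) fails: r_2(0) = 0 and r_1(1) = 0, so X is not obtained by
# conditioning a GM process that is non-degenerate at the endpoints.
assert r2(0.0) == 0.0 and r1(1.0) == 0.0
```

Indeed, this R is the covariance of $X_t = t(1-t)B_t$, which vanishes at both endpoints without being a conditioned GM process in the sense of (i).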

Notice that all of the alternative characterizations in Proposition 1 assume additional smoothness of the GMB’s mean and covariance factorization. This assumption is convenient for defining GMBs, but it is not necessary. We discuss this in Remark 3. In the rest of the paper, we implicitly assume the twice continuous differentiability of the mean and covariance factorization every time we mention a GMB.

Although it is easily obtained from (9), for the sake of reference we write down the explicit relation between the BM representation (2) and the OUB representation (3), namely,

(14) \begin{align}\left\{\begin{aligned} \theta(t) &= -\beta^{\prime}_{T}(t)/\beta_T(t), \\[5pt] \kappa(t) &= \alpha(t) - \beta_T(t)/\beta^{\prime}_{T}(t) (\alpha'(t) + (z - \alpha(T))\beta_T(t)\gamma^{\prime}_{T}(t)), \\[5pt] \nu(t) &= \beta_T(t)\sqrt{\gamma^{\prime}_{T}(t)}.\end{aligned}\right.\end{align}

It is also worth mentioning that the condition ( iv.2 ), which is necessary and sufficient for an OU process to be an OUB, was also recently found in [Reference Hildebrandt and Rœlly44, Theorem 3.1] for the case where $\kappa$ is assumed constant.
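As a quick sanity check of (14), the following sketch (ours, for illustration only) verifies the relation symbolically in the Brownian-bridge case, for which one may take $\alpha \equiv 0$, $\beta_T(t) = T - t$, and $\gamma_T(t) = t/(T(T-t))$; the relation (14) should then recover the familiar bridge SDE coefficients $\theta(t) = 1/(T-t)$, $\kappa(t) = z$, and $\nu(t) = 1$.

```python
import sympy as sp

t, T, z = sp.symbols('t T z', positive=True)

# Brownian-bridge choices (our assumption for illustration):
# alpha(t) = 0, beta_T(t) = T - t, gamma_T(t) = t / (T*(T - t)).
alpha = sp.Integer(0) * t
betaT = T - t
gammaT = t / (T * (T - t))

# The relation (14):
theta = -sp.diff(betaT, t) / betaT
kappa = alpha - betaT / sp.diff(betaT, t) * (
    sp.diff(alpha, t) + (z - alpha.subs(t, T)) * betaT * sp.diff(gammaT, t)
)
nu_sq = betaT**2 * sp.diff(gammaT, t)  # nu(t)^2 = beta_T(t)^2 * gamma_T'(t)

# Recover the Brownian-bridge SDE coefficients.
assert sp.simplify(theta - 1 / (T - t)) == 0
assert sp.simplify(kappa - z) == 0
assert sp.simplify(nu_sq - 1) == 0
```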

Finally, we rely on the classic OU process to illustrate the characterization in Lemma 1 and the connection between all alternative definitions in Proposition 1.

Example 1. (Ornstein–Uhlenbeck bridge.)

Let $\widetilde{X} = \left\{\widetilde{X}_t\right\}_{t\in\mathbb{R}_+}$ be an OU process, that is, the unique strong solution of the SDE

\begin{align*} \mathrm{d} \widetilde{X}_t = a\widetilde{X}_t\,\mathrm{d} t + c\,\mathrm{d} B_t,\quad t\in\mathbb{R}_+, \end{align*}

where $\left\{B_u\right\}_{u\in\mathbb{R}_+}$ is a standard BM, and $a\in\mathbb{R}$ , $c \in \mathbb{R}_+$ . Then $\widetilde{X}$ is a time-continuous GM process that is non-degenerate on [0, T]. Its mean and covariance factorization are twice continuously differentiable. In fact, they take the form

\begin{align*} \widetilde{m}(t) &= \textsf{E}\left[ \widetilde{X}_t\right] = \widetilde{X}_0e^{at}, \\[5pt] \widetilde{R}(t_1, t_2) &= \textsf{C}\mathrm{ov}\left[ \widetilde{X}_{t_1},\widetilde{X}_{t_2}\right] = \widetilde{r}_1(t_1\wedge t_2)\widetilde{r}_2(t_1\vee t_2), \\[5pt] \widetilde{r}_1(t) &= \sinh\!(at), \quad \widetilde{r}_2(t) = c^2e^{at}/a. \end{align*}

Note that $\widetilde{m}$ , $\widetilde{r}_1$ , and $\widetilde{r}_2$ satisfy the conditions ( i )–( iii ) from Lemma 1.

Let $\smash{X = \{X_u\}_{u\in [0, T]}}$ be a GM process defined on the same probability space as $\widetilde{X}$ , for some $T > 0$ . In agreement with Proposition 1, the following statements are equivalent:

  1. (i) The process X results from conditioning $\widetilde{X}$ to $\widetilde{X}_0 = x$ and $\widetilde{X}_T = z$ in the sense of (iii.1) from Proposition 1, for some $x\in\mathbb{R}$ and $(T, z)\in\mathbb{R}_+\times\mathbb{R}$ .

  2. (ii) The mean and covariance factorization of X are twice continuously differentiable, and they satisfy the conditions ( ii.1 )–( ii.6 ). In fact, they take the form

    \begin{align*} m(t) &= \textsf{E}\left[ X_t\right] = (x\sinh\!(a(T - t)) + z\sinh\!(at))/\sinh\!(aT), \\[5pt] R(t_1, t_2) &= \textsf{C}\mathrm{ov}\left[ X_{t_1},X_{t_2}\right] = r_1(t_1\wedge t_2)r_2(t_1\vee t_2), \\[5pt] r_1(t) &= \sinh\!(at)/\sinh\!(aT), \quad r_2(t) = c^2\sinh\!(a(T-t))/a, \end{align*}
    which follows after working out the formulae (4) and (5) (see also Proposition 3.3 in [Reference Barczy and Kern7]).
  3. (iii) We have $X_T=z$ , and on [0, T), X admits the following representation:

    \begin{align*} X_t =\widetilde{X}_0e^{at} + \frac{c^2\sinh\!(a(T-t))}{a}\left((z - \widetilde{X}_0e^{aT})\gamma_T(t) + \left(B_{\gamma_T(t)} + \frac{a(x - \widetilde{X}_0)}{c^2\sinh\!(aT)}\right)\!\right)\!. \end{align*}

    This expression does not depend on $\widetilde{X}_0$ ; indeed, after some manipulation it simplifies to

    \begin{align*} X_t = \frac{\sinh\!(a(T-t))}{\sinh\!(aT)}x + \frac{\sinh\!(at)}{\sinh\!(aT)}z + \frac{c^2\sinh\!(a(T-t))}{a}B_{\gamma_T(t)}\;, \end{align*}
    which is in alignment with the ‘space–time transform’ representation in [Reference Barczy and Kern7].
  4. (iv) The process X is the unique strong solution of the SDE

    \begin{align*} \mathrm{d} X_t = \theta(t)(\kappa(t) - X_t)\,\mathrm{d} t + \nu(t)\,\mathrm{d} B_t,\quad t\in(0, T), \end{align*}
    with initial condition $X_0 = x$ and
    \begin{align*} \left\{ \begin{aligned} \theta(t) &= a\coth\!(a(T-t)), \\[5pt] \kappa(t) &= z/\cosh\!(a(T-t)), \\[5pt] \nu(t) &= c. \end{aligned} \right. \end{align*}
    These expressions for the drift and volatility terms of X come from (14) and are in agreement with Equation (3.2) in [Reference Barczy and Kern7]. The conditions ( iv.1 ) and ( iv.2 ) follow straightforwardly.
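To see the representation in (iii) at work, the following sketch (ours; the function name and the Monte Carlo checks are not from the paper) simulates OUB paths by evaluating a BM at the transformed times, assuming the clock $\gamma_T(t) = r_1(t)/r_2(t) = a\sinh\!(at)/(c^2\sinh\!(aT)\sinh\!(a(T-t)))$, and compares the empirical mean and variance at $t = T/2$ against $m(t)$ and $r_1(t)r_2(t)$.

```python
import numpy as np

def simulate_oub(x, z, a, c, T, n_steps, n_paths, rng):
    """Simulate OU-bridge paths via the space-time transform of a BM.

    Assumes the clock gamma_T(t) = r_1(t)/r_2(t), i.e.
    gamma_T(t) = a*sinh(a*t) / (c**2 * sinh(a*T) * sinh(a*(T - t))).
    """
    t = np.linspace(0.0, T, n_steps + 1)[:-1]  # avoid the degenerate t = T
    gam = a * np.sinh(a * t) / (c**2 * np.sinh(a * T) * np.sinh(a * (T - t)))
    # BM evaluated at the increasing transformed times: Gaussian increments.
    incr = rng.normal(0.0, np.sqrt(np.diff(gam)), size=(n_paths, n_steps - 1))
    B = np.hstack([np.zeros((n_paths, 1)), np.cumsum(incr, axis=1)])
    sT = np.sinh(a * T)
    X = (np.sinh(a * (T - t)) / sT * x
         + np.sinh(a * t) / sT * z
         + c**2 * np.sinh(a * (T - t)) / a * B)
    return t, X

rng = np.random.default_rng(1)
t, X = simulate_oub(x=0.0, z=1.0, a=0.5, c=1.0, T=1.0,
                    n_steps=200, n_paths=20_000, rng=rng)
k = 100  # grid point t[k] = 0.5
m = np.sinh(0.5 * 0.5) / np.sinh(0.5)          # m(1/2) with x = 0, z = 1
v = np.sinh(0.25) ** 2 / (0.5 * np.sinh(0.5))  # r_1(1/2) r_2(1/2)
assert abs(X[:, 0].mean() - 0.0) < 1e-12       # pinned at x at t = 0
assert abs(X[:, k].mean() - m) < 0.02 and abs(X[:, k].var() - v) < 0.02
```

Note that the scheme is exact at the grid points (only Monte Carlo error remains), since the increments of the BM at the transformed times are Gaussian with the exact variances.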

3. Two equivalent formulations of the OSP

For $0 \leq t < T$ , let $X = \{X_u\}_{u\in[0, T]}$ be a real-valued, time-continuous GMB with $X_T = z$ , for some $z\in\mathbb{R}$ . Define the finite-horizon OSP

(15) \begin{align} V_{T, z}(t, x) &\;:\!=\; \textrm{sup}_{\tau\leq T - t }\textsf{E}_{t, x}\left[ X_{t + \tau}\right],\end{align}

where $V_{T, z}$ is the value function and $\textsf{E}_{t, x}$ is the mean operator with respect to the probability measure $\textsf{P}_{t, x}$ such that $\textsf{P}_{t, x}(X_t = x) = 1$ . The supremum in (15) is taken across all random times $\tau$ such that $t + \tau$ is a stopping time for X, although, for simplicity, we will refer to $\tau$ as a stopping time from now on.

Likewise, consider a BM $\{Y_u\}_{u\in\mathbb{R}_+}$ on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ , and define the infinite-horizon OSP

(16) \begin{align} W_{T, z}(s, y) \;:\!=\; \mathrm{sup}_{\sigma\geq 0}\mathbb{E}_{s, y}\left[ G_{T, z}\left(s + \sigma, Y_{s + \sigma}\right)\right],\end{align}

for $(s, y)\in\mathbb{R}_+\times\mathbb{R}$ , where $\mathbb{P}_{s, y}$ and $\mathbb{E}_{s, y}$ have definitions analogous to those of $\textsf{P}_{t, x}$ and $\textsf{E}_{t, x}$ ; that is, $Y_{s + u} = y + B_u$ under $\mathbb{P}_{s, y}$ , where $\left\{B_u\right\}_{u\in\mathbb{R}_+}$ is a standard BM. The supremum in (16) is taken across the stopping times of $\{Y_{s+u}\}_{u\in\mathbb{R}_+}$ , and the (gain) function $G_{T, z}$ takes the form

(17) \begin{align} G_{T, z}(s, y) \;:\!=\; \alpha(\gamma_T^{-1}(s)) + \beta_T(\gamma_T^{-1}(s))\left((z - \alpha(T))s + y\right),\end{align}

for $\alpha$ , $\beta_T$ , and $\gamma_T$ as in ( iii.1 )–( iii.3 ) from Proposition 1.

Note that we have used different notation for the probability and expectation operators in the OSPs (15) and (16). The intention is to emphasize the difference between the probability spaces relative to the original GMB and the resulting BM. We shall keep this notation for the rest of the paper.

Solving (15) and (16) means providing a tractable expression for $V_{T, z}(t, x)$ and $W_{T, z}(s, y)$ , as well as finding (if they exist) stopping times $\tau^* = \tau^*(t, x)$ and $\sigma^* = \sigma^*(s, y)$ such that

\begin{align*} V_{T, z}(t, x) = \textsf{E}_{t, x}\left[ X_{t + \tau^*}\right], \quad W_{T, z}(s, y) = \mathbb{E}_{s, y}\left[ G_{T,z}\left(s + \sigma^*, Y_{s + \sigma^*}\right)\right].\end{align*}

In such a case, $\tau^*$ and $\sigma^*$ are called optimal stopping times (OSTs) for (15) and (16), respectively.

We claim that the OSPs (15) and (16) are equivalent in the sense specified in the following proposition. In summary, the representation (2) equates the original GMB to the BM transformed by the gain function $G_{T, z}$ , and ( iii.3 ) changes the finite horizon T into an infinite horizon.

Proposition 2. (Equivalence of the OSPs)

Let $V_{T, z}$ and $W_{T, z}$ be the value functions in (15) and (16), respectively. For $(t, x)\in[0,T]\times\mathbb{R}$ , let $s = \gamma_T(t)$ and $y = \left(x - \alpha(t)\right)/\beta_T(t) - \gamma_T(t)(z - \alpha(T))$ . Then

(18) \begin{align} V_{T, z}(t, x) = W_{T, z}\left(s, y\right). \end{align}

Moreover, $\tau^* = \tau^*(t, x)$ is an OST for $V_{T, z}$ if and only if $\sigma^* = \sigma^*(s, y)$ , defined so that $s + \sigma^* = \gamma_T(t + \tau^*)$ , is an OST for $W_{T, z}$ .

Proof. From (2), we have the following representation for $X_{t+u}$ under $\textsf{P}_{t, x}$ :

\begin{align*} X_{t+u} &= \alpha(t+u) + \beta_T(t+u)\left((z-\alpha(T))\gamma_T(t+u) + \left(B_{\gamma_T(t+u)} + \frac{X_0 - \alpha(0)}{\beta_T(0)}\right)\right) \\[5pt] &= G_{T, z}\left(\gamma_T(t + u), \left(B_{\gamma_T(t+u)} + \frac{X_0 - \alpha(0)}{\beta_T(0)}\right)\right) \\[5pt] &= G_{T, z}\left(\gamma_T(t + u), \left(B_{\gamma_T(t+u)}-B_{\gamma_T(t)}+\frac{X_t - \alpha(t)}{\beta_T(t)} - (z - \alpha(T))\gamma_T(t)\right)\right), \end{align*}

where, in the last equation, we used the relation

\begin{align*} B_{\gamma_T(t)} + \frac{X_0 - \alpha(0)}{\beta_T(0)} = \frac{X_t - \alpha(t)}{\beta_T(t)} - (z - \alpha(T))\gamma_T(t). \end{align*}

Let $Y_{s+v}\;:\!=\; B^{\prime}_v+y$ and $B^{\prime}_v\;:\!=\;B_{s+v}-B_s$ , with $\{B^{\prime}_v\}_{v\in\mathbb{R}_+}$ being a standard $\mathbb{P}_{s, y}$ -BM. We recall that we use $\mathbb{P}$ instead of $\textsf{P}$ to emphasize the time-space change, although the measure remains the same.

We have that

\begin{align*} X_{t+u} &= G_{T, z}\left(\gamma_T(t + u), Y_{\gamma_T(t+u)}\right) . \end{align*}

For every stopping time $\tau$ of $\{X_{t + u}\}_{u\in[0, T-t]}$ , consider the stopping time $\sigma$ of $\{Y_{s + u}\}_{u\in\mathbb{R}_+}$ such that $s + \sigma = \gamma_T(t + \tau)$ . Then (18) follows from the following sequence of equalities:

\begin{align*} V_{T, z}(t, x) &= \mathrm{sup}_{\tau \leq T-t} \textsf{E}_{t, x}\left[ X_{t + \tau}\right] = \mathrm{sup}_{\sigma \geq 0} \mathbb{E}_{s, y}\left[ G_{T, z}\left(s + \sigma, Y_{s + \sigma}\right)\right] = W_{T, z}\left(s, y\right). \end{align*}

Furthermore, suppose that $\tau^* = \tau^*(t, x)$ is an OST for (15) and that there exists a stopping time $\sigma' = \sigma'(s, y)$ that performs better than $\sigma^* = \sigma^*(s, y)$ in (16). Consider $\tau' = \tau'(t, x)$ such that $t + \tau' = \gamma_T^{-1}(s + \sigma')$ . Then

\begin{align*} \textsf{E}_{t, x}\left[ X_{t + \tau'}\right] = \mathbb{E}_{s, y}\left[ G_{T, z}(s + \sigma', Y_{s + \sigma'})\right] > \mathbb{E}_{s, y}\left[ G_{T, z}(s + \sigma^*, Y_{s + \sigma^*})\right] = \textsf{E}_{t, x}\left[ X_{t + \tau^*}\right], \end{align*}

which contradicts the fact that $\tau^*$ is optimal. Using similar arguments, we can obtain the reverse implication, that is, that if $\sigma^*$ is an OST for (16), then $\tau^*$ is an OST for (15).
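A consequence of the change of variables worth recording: plugging $s = \gamma_T(t)$ and $y = (x - \alpha(t))/\beta_T(t) - \gamma_T(t)(z - \alpha(T))$ into (17) gives $G_{T, z}(s, y) = x$, so stopping immediately yields the current state on both sides of (18). The following sketch (ours) checks this symbolically, treating $\alpha$ and $\beta_T$ as generic functions and using that $\gamma_T^{-1}(s) = t$ under the substitution.

```python
import sympy as sp

t, T, s, x, z = sp.symbols('t T s x z')
alpha = sp.Function('alpha')
betaT = sp.Function('beta_T')

# Change of variables from Proposition 2, with s = gamma_T(t) so that
# gamma_T^{-1}(s) = t inside the gain function (17).
y = (x - alpha(t)) / betaT(t) - s * (z - alpha(T))
G = alpha(t) + betaT(t) * ((z - alpha(T)) * s + y)

assert sp.simplify(G - x) == 0  # stopping immediately returns the state x
```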

4. Solution of the infinite-horizon OSP

We have shown that solving (15) is equivalent to solving (16), which is expressed in terms of a simpler BM. In this section we leverage that advantage to solve (16), but first we rewrite it with cleaner notation that hides its explicit connection with the original OSP and allows us to treat (16) as a standalone problem.

Let $\left\{Y_u\right\}_{u\in\mathbb{R}_+}$ be a BM on the probability space $\left(\Omega, \mathcal{F}, \mathbb{P}\right)$ . Define the probability measure $\mathbb{P}_{s,y}$ so that $\mathbb{P}_{s, y}(Y_s = y) = 1$ . Consider the OSP

(19) \begin{align} W(s, y) \;:\!=\; \mathrm{sup}_{\sigma \geq 0}\mathbb{E}_{s, y}\left[ G(s + \sigma, Y_{s+\sigma})\right] = \mathrm{sup}_{\sigma \geq 0}\mathbb{E}\left[ G(s + \sigma, Y_{\sigma} + y)\right],\end{align}

where $\mathbb{E}$ and $\mathbb{E}_{s, y}$ are the mean operators with respect to $\mathbb{P}$ and $\mathbb{P}_{s, y}$ , respectively. The supremum in (19) is taken across the stopping times of $Y = \left\{Y_{s+u}\right\}_{u\in\mathbb{R}_+}$ . The (gain) function G takes the form

(20) \begin{align} G(s, y) = a_1(s) + a_2(s)\left(c_0s + y\right),\end{align}

where $c_0\in\mathbb{R}$ and $a_1, a_2\;:\;\mathbb{R}_+\rightarrow\mathbb{R}$ are assumed to be such that

(21a) \begin{align} a_1 \text{ and } a_2 \text{ are twice continuously differentiable}; \end{align}
(21b) \begin{align} a_1,\ a^{\prime}_{1},\ a^{\prime\prime}_{1},\ a_2,\ a^{\prime}_{2},\ \text{and}\ a^{\prime\prime}_{2}\ \text{are bounded}; \end{align}
(21c) \begin{align} \text{there exists}\ c_1\in\mathbb{R}\ \text{such that}\ \lim_{s\rightarrow\infty} a_1(s) = c_1; \end{align}
(21d) \begin{align} \text{for all}\ s\in\mathbb{R}_+,\ a_2(s) > 0; \end{align}
(21e) \begin{align} \text{there exists}\ c_2\in\mathbb{R}\ \text{such that}\ \lim_{s\rightarrow\infty} a_2(s)s = c_2; \end{align}
(21f) \begin{align} \text{for all}\ s\in\mathbb{R}_+,\ a^{\prime}_{2}(s) < 0. \end{align}

The assumptions (21a)–(21f) do not further restrict the class of GMBs considered in Proposition 1. Indeed, (21a)–(21b) are implied by the twice continuous differentiability of the GMB's mean and covariance factorization, while (21c)–(21f) follow from the degeneracy of the GMB at the endpoints. In fact, the infinite-horizon OSP (19) under Assumptions (21a)–(21f) is equivalent to the finite-horizon OSP (15) with a GMB as the underlying process. The following remarks shed light on this equivalence.

Remark 2. Equation (20), as well as Assumptions (21c)–(21e), follow from (17) and ( iii.1 )–( iii.3 ) in Proposition 1. Indeed, the constant $c_0$ and the functions $a_1$ and $a_2$ are taken so that $c_0 = z - \alpha(T)$ , $a_1(s) = \alpha(\gamma_{T}^{-1}(s))$ , and $a_2(s) = \beta_T(\gamma_T^{-1}(s))$ .

Remark 3. Assumptions (21a) and (21b) are derived from the twice continuous differentiability of $\alpha$ , $\beta_T$ , and $\gamma_T$ . These assumptions are used to prove smoothness properties of the value function and the OSB. The assumptions on the first derivatives are used to prove the Lipschitz continuity of the value function (see Proposition 3), while the ones on the second derivatives are required to prove the local Lipschitz continuity of the OSB (see Proposition 7).

Remark 4. The following relation, which we use recurrently throughout the paper, follows from (21a), (21b), and (21e):

(22) \begin{align} \lim_{s\rightarrow\infty} a^{\prime}_{2}(s)s = 0. \end{align}

Alternatively, (22) can be derived directly from (5) and the fact that $\lim_{s\rightarrow\infty}a_2(s) = 0$ . Indeed,

\begin{align*} \lim_{s\rightarrow\infty}a^{\prime}_{2}(s)s =&\; \lim_{s\rightarrow\infty}\left(a^{\prime}_{2}(s)s + a_2(s)\right) = \lim_{s\rightarrow\infty}\partial_s \left[a_2(s)s\right] = \lim_{s\rightarrow\infty} \partial_s r_1(\gamma_T^{-1}(s)) = \lim_{t\rightarrow T} \frac{r^{\prime}_{1}(t)}{\gamma^{\prime}_{T}(t)} \\[5pt] =&\; \lim_{t\rightarrow T}\frac{r^{\prime}_{1}(t)r_2^2(t)}{r^{\prime}_{1}(t)r_2(t) - r_1(t)r^{\prime}_{2}(t)} = 0, \end{align*}

where $\partial_s$ denotes the derivative with respect to the variable $s\in\mathbb{R}_+$ . In the last equality we used that $0 \leq r^{\prime}_{1}(t)r_2^2(t)/\big(r^{\prime}_{1}(t)r_2(t) - r_1(t)r^{\prime}_{2}(t)\big) \leq r_2(t)$ , which holds because $r^{\prime}_{1}(t)r_2(t) - r_1(t)r^{\prime}_{2}(t) \geq r^{\prime}_{1}(t)r_2(t) \geq 0$ , as $r_1$ and $r_2$ are respectively an increasing and a decreasing function (see Remark 1), together with $r_2(t)\rightarrow 0$ as $t\rightarrow T$ .

Likewise, (22) along with the L’Hôpital rule implies that

(23) \begin{align} \lim_{s\rightarrow\infty} a^{\prime\prime}_{2}(s)s^2 = -\lim_{s\rightarrow\infty} a^{\prime}_{2}(s)s = 0. \end{align}

Again, (23) can be obtained from its representation in terms of the covariance factorization given by $r_1$ and $r_2$ :

\begin{align*} \lim_{s\rightarrow\infty}a^{\prime\prime}_{2}(s)s^2 =&\; \lim_{s\rightarrow\infty}\partial_{ss} \left[a_2(s)s\right]s = \lim_{s\rightarrow\infty} \partial_{ss} r_1(\gamma_T^{-1}(s))\gamma_T(\gamma_T^{-1}(s)) \\[5pt] =&\; \lim_{s\rightarrow\infty} \partial_{s} \frac{r^{\prime}_{1}(\gamma_T^{-1}(s))}{\gamma^{\prime}_{T}(\gamma_T^{-1}(s))}\gamma_T(\gamma_T^{-1}(s)) = \lim_{t\rightarrow T} \left(\frac{r^{\prime\prime}_{1}(t)}{(\gamma^{\prime}_{T}(t))^2} - \frac{r^{\prime}_{1}(t)\gamma^{\prime\prime}_{T}(t)}{(\gamma^{\prime}_{T}(t))^3}\right)\gamma_T(t) \\[5pt] =&\;\lim_{t\rightarrow T} \left(\frac{r_1^2(t)r_2^3(t)\big(r^{\prime}_{1}(t)r^{\prime\prime}_{2}(t) - r^{\prime\prime}_{1}(t)r^{\prime}_{2}(t)\big)}{\big(r^{\prime}_{1}(t)r_2(t) - r_1(t)r^{\prime}_{2}(t)\big)^3} + \frac{2r_1(t)r^{\prime}_{1}(t)r_2^2(t)r^{\prime}_{2}(t)}{\big(r^{\prime}_{1}(t)r_2(t) - r_1(t)r^{\prime}_{2}(t)\big)^2}\right) \\[5pt] =&\; 0, \end{align*}

where $\partial_{ss}$ indicates the second derivative with respect to s.
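Both limits can also be confirmed in closed form for a concrete bridge. For the Brownian bridge with $T = 1$ one may take (our specialization, for illustration) $\beta_T(t) = 1 - t$ and $\gamma_T(t) = t/(1 - t)$, so that $\gamma_T^{-1}(s) = s/(1 + s)$ and $a_2(s) = \beta_T(\gamma_T^{-1}(s)) = 1/(1 + s)$; the sketch below verifies (22), (23), and $c_2 = \lim_{s\to\infty} a_2(s)s = 1$.

```python
import sympy as sp

s = sp.symbols('s', positive=True)

# Brownian-bridge case with T = 1 (our assumption for illustration):
# a_2(s) = beta_T(gamma_T^{-1}(s)) = 1/(1 + s).
a2 = 1 / (1 + s)

assert sp.limit(a2 * s, s, sp.oo) == 1                    # c_2 = 1 in (21e)
assert sp.limit(sp.diff(a2, s) * s, s, sp.oo) == 0        # the limit (22)
assert sp.limit(sp.diff(a2, s, 2) * s**2, s, sp.oo) == 0  # the limit (23)
```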

Remark 5. Assumption (21f) is needed to derive the boundedness of the OSB (see Proposition 6 and Remark 6). Similarly to Assumptions (21a)–(21e), Assumption (21f) can be obtained from the regularity of the underlying GMB already used in Section 2, and does not impose any further restrictions. Specifically, Assumption (21f) is equivalent to the condition $\theta(t) > 0$ for all $t\in[0, T]$ , in the OUB representation (iv) from Proposition 1, and to $\beta_T(t) = r_2(t) > 0$ and $\beta^{\prime}_{T}(t) = r^{\prime}_{2}(t)< 0$ , in terms of the representations (iii) and (ii) (see Remark 1).

Notice that (21c), (21e), and (22), together with the law of the iterated logarithm, imply that, for all $(s, y)\in\mathbb{R}_+\times\mathbb{R}$ ,

(24) \begin{align} \mathbb{P}_{s,y}\mathrm{-\!\!}\lim_{u\to\infty}G(s + u, Y_{s+u}) = c_1 + c_0c_2.\end{align}

For later reference, let us introduce the notation

(25) \begin{align}\left. \begin{aligned} A_1 &\;:\!=\; \mathrm{sup}_{s\in\mathbb{R}_+} \left|a_1(s)\right|,\quad A^\prime_{1} \;:\!=\; \mathrm{sup}_{s\in\mathbb{R}_+} \left|a^{\prime}_{1}(s)\right|,\quad A^{\prime\prime}_{1} \;:\!=\; \mathrm{sup}_{s\in\mathbb{R}_+} \left|a^{\prime\prime}_{1}(s)\right|, \\[5pt] A_2 &\;:\!=\; \mathrm{sup}_{s\in\mathbb{R}_+} \left|a_2(s)\right|,\quad A^{\prime}_{2} \;:\!=\; \mathrm{sup}_{s\in\mathbb{R}_+} \left|a^{\prime}_{2}(s)\right|,\quad A^{\prime\prime}_{2} \;:\!=\; \mathrm{sup}_{s\in\mathbb{R}_+} \left|a^{\prime\prime}_{2}(s)\right|, \\[5pt] A_3 &\;:\!=\; \mathrm{sup}_{s\in\mathbb{R}_+} \left|a_2(s)s\right|,\quad A^\prime_{3} \;:\!=\; \mathrm{sup}_{s\in\mathbb{R}_+} \left|a^{\prime}_{2}(s)s\right|,\quad A^{\prime\prime}_{3} \;:\!=\; \mathrm{sup}_{s\in\mathbb{R}_+} \left|a^{\prime\prime}_{2}(s)s\right|. \end{aligned}\right\}\end{align}

In addition, we will often require the expressions for the partial derivatives of G, namely,

(26) \begin{align} \partial_t G(s, y) &= a^{\prime}_{1}(s) + c_0a_2(s) + a^{\prime}_{2}(s)(c_0s + y), \end{align}
(27) \begin{align} \partial_x G(s, y) &= a_2(s). \end{align}

Here and subsequently, $\partial_t$ and $\partial_x$ respectively stand for the differential operators with respect to time and space.
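Because G in (20) is affine in y, its second space derivative vanishes, so the generator applied to G reduces to $\partial_t G$. The derivatives (26)–(27) and this reduction can be checked mechanically, as in the following sketch (ours, with $a_1$ and $a_2$ left generic):

```python
import sympy as sp

s, y, c0 = sp.symbols('s y c_0')
a1 = sp.Function('a_1')
a2 = sp.Function('a_2')

# The gain function (20).
G = a1(s) + a2(s) * (c0 * s + y)

dtG = sp.diff(G, s)
dxG = sp.diff(G, y)

# Formula (26): dtG = a_1'(s) + c_0 a_2(s) + a_2'(s)(c_0 s + y).
assert sp.simplify(dtG - (sp.diff(a1(s), s) + c0 * a2(s)
                          + sp.diff(a2(s), s) * (c0 * s + y))) == 0
# Formula (27): dxG = a_2(s).
assert dxG == a2(s)
# G is affine in y, hence LG = dtG + (1/2) dxxG = dtG.
assert sp.diff(G, y, 2) == 0
```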

Notice that (21e) guarantees the existence of $m > 0$ such that $\left|a_2(s)\right| \leq (1 + m)/s$ for all $s\geq 1$ , which, combined with the boundedness of $a_1$ , $a_2$ , and $s\mapsto a_2(s)s$ , implies the following bound with $A = \max\{A_1 + |c_0|A_3, A_2\}$ :

(28) \begin{align} \mathbb{E}_{s, y}\bigg[\!&\mathrm{sup}_{u\in\mathbb{R}_+} \left|G\left(s + u, Y_{s+u}\right)\right|\bigg]\nonumber\\[5pt] \leq&\; \mathrm{sup}_{u\in\mathbb{R}_+}\left|a_1(u) + a_2(u)(c_0u + y)\right| + \mathbb{E}\bigg[\!\mathrm{sup}_{u\in\mathbb{R}_+} \left|a_2(s + u)Y_u\right|\bigg] \nonumber \\[5pt] \leq&\; A(1 + |y|) + \mathbb{E}\bigg[\!\mathrm{sup}_{u\in\mathbb{R}_+}\left|a_2(s + u)Y_u\right|\bigg] \nonumber \\[5pt] \leq&\; A(1 + |y|) +\!\max_{u\leq 1\vee(1 - s)}\!\left|a_2(s + u)\right|\mathbb{E}\left[ \mathrm{sup}_{u\leq 1\vee(1 - s)}\!\left|Y_u\right|\right]\! + \mathbb{E}\bigg[\!\mathrm{sup}_{u\geq 1\vee(1 - s)}\!\left|a_2(s + u)Y_u\right|\bigg] \nonumber \\[5pt] \leq&\; A(1 + |y|) + \max_{u\leq 1}\left|a_2(u)\right|\mathbb{E}\left[ \mathrm{sup}_{u\leq 1}\left|Y_u\right|\right] + (1 + m)\mathbb{E}\bigg[\!\mathrm{sup}_{u\geq 1}\left|Y_u\right| \big/ u\bigg] \nonumber \\[5pt] =&\; A\left(1 + \left(\left|y\right| + \mathbb{E}\left[ \mathrm{sup}_{u\leq 1}\left|Y_u\right|\right]\right)\right) + (1 + m)\mathbb{E}\left[ \mathrm{sup}_{u\geq 1}\left|Y_{1/u}\right|\right] \nonumber \\[5pt] =&\; A\left(1 + \left(\left|y\right| + \mathbb{E}\left[ \mathrm{sup}_{u\leq 1}\left|Y_u\right|\right]\right)\right) + (1 + m)\mathbb{E}\left[ \mathrm{sup}_{u\leq 1}\left|Y_u\right|\right] < \infty. \end{align}

In the last equality, the time-inversion property of the BM was used.

The continuity of G alongside (28) implies the continuity of W. However, given Assumptions (21a)–(21e), one can obtain higher smoothness for the value function, namely its Lipschitz continuity, as shown in the proposition below.

Proposition 3. (Lipschitz continuity of the value function.)

For any bounded set $\mathcal{R}\subset \mathbb{R}$ there exists $L_\mathcal{R} > 0$ such that

(29) \begin{align} \left|W(s_1, y_1) - W(s_2, y_2)\right| \leq L_\mathcal{R}(|s_1 - s_2| + |y_1 - y_2|), \end{align}

for all $(s_1, y_1), (s_2, y_2)\in\mathbb{R}_+\times\mathcal{R}$ .

Proof. For any $(s_1, y_1), (s_2, y_2)\in\mathbb{R}_+\times \mathcal{R}$ , the following equality holds:

\begin{align*} & W(s_1, y_1) - W(s_2, y_2) \\[5pt] &= \mathrm{sup}_{\sigma\geq 0}\mathbb{E}_{s_1, y_1}\left[ G\left(s_1 + \sigma, Y_{s_1 + \sigma}\right)\right] - \mathrm{sup}_{\sigma\geq 0}\mathbb{E}_{s_1, y_2}\left[ G\left(s_1 + \sigma, Y_{s_1 + \sigma}\right)\right] \\[5pt] &\quad + \mathrm{sup}_{\sigma\geq 0}\mathbb{E}_{s_1, y_2}\left[ G\left(s_1 + \sigma, Y_{s_1 + \sigma}\right)\right] - \mathrm{sup}_{\sigma\geq 0}\mathbb{E}_{s_2, y_2}\left[ G\left(s_2 + \sigma, Y_{s_2 + \sigma}\right)\right]. \end{align*}

Since $|\mathrm{sup}_\sigma a_\sigma - \mathrm{sup}_\sigma b_\sigma|\leq \mathrm{sup}_\sigma|a_\sigma - b_\sigma|$ , and by Jensen’s inequality,

(30) \begin{align} \bigg|\mathrm{sup}_{\sigma\geq 0}\mathbb{E}_{s_1,y_1}&\, [G\left(s_1 + \sigma, Y_{s_1 + \sigma}\right)] - \mathrm{sup}_{\sigma\geq 0}\mathbb{E}_{s_1, y_2}\left[ G\left(s_1 + \sigma, Y_{s_1 + \sigma}\right)\right]\bigg| \nonumber \\[5pt] &\leq \mathbb{E}\left[ \mathrm{sup}_{u \geq 0}\left|G\left(s_1 + u, Y_u + y_1\right) - G\left(s_1 + u, Y_u + y_2\right)\right|\right] \nonumber \\[5pt] &= \mathrm{sup}_{u \geq 0}\left|a_2(s_1 + u)(y_1 - y_2)\right| \nonumber \\[5pt] &\leq A_2\left|y_1 - y_2\right|. \end{align}

Likewise,

(31) \begin{align} \bigg|\mathrm{sup}_{\sigma\geq 0}\mathbb{E}_{s_1,y_2}&\, [G\left(s_1 + \sigma, Y_{s_1 + \sigma}\right)] - \mathrm{sup}_{\sigma\geq 0}\mathbb{E}_{s_2, y_2}\left[ G\left(s_2 + \sigma, Y_{s_2 + \sigma}\right)\right]\bigg| \nonumber \\[5pt] &\leq \mathbb{E}\left[ \mathrm{sup}_{u \geq 0}\left|G\left(s_1 + u, Y_u + y_2\right) - G\left(s_2 + u, Y_u + y_2\right)\right|\right] \nonumber \\[5pt] &= \mathbb{E}\left[ \mathrm{sup}_{u \geq 0}\left|\partial_t G\left(\eta_u, Y_u + y_2\right)(s_1 - s_2)\right|\right] \nonumber \\[5pt] &\leq \left(A^\prime_{1} + (A^\prime_{3} + A_2)|c_0| + A^{\prime}_{2}\left(\mathrm{sup}_{y\in\mathcal{R}}|y| + \mathbb{E}\left[ \mathrm{sup}_{u \geq 0}\left|Y_u\right|\right]\right)\right)|s_1 - s_2|, \end{align}

where $\eta_u\in(s_1\wedge s_2 + u, s_1\vee s_2 + u)$ comes from the mean value theorem, which, along with (26), was used to derive the last inequality. The constants $A^\prime_{1}$ , $A_2$ , $A^{\prime}_{2}$ , and $A^\prime_{3}$ were defined in (25). We finally get (29) after merging (30) and (31).

Define $\sigma^* = \sigma^*(s, y) \;:\!=\; \inf\left\{u\in\mathbb{R}_+ \;:\; (s + u, Y_{s+u}) \in \mathcal{D}\right\}$ , where the closed set

\begin{align*} \mathcal{D} \;:\!=\; \left\{(s, y)\in\mathbb{R}_+\times\mathbb{R}\;:\; W(s, y) = G(s, y)\right\}\end{align*}

is called the stopping set. The continuity of W and G (it suffices to have lower semi-continuity of W and upper semi-continuity of G), along with (28) and (24), guarantees that $\sigma^*$ is an OST for (19) (see Corollary 2.9 and Remark 2.10 in [Reference Peskir and Shiryaev65]), meaning that

(32) \begin{align} W(s, y) = \mathbb{E}_{s, y}\left[ G(s + \sigma^*, Y_{s + \sigma^*})\right].\end{align}

Applying Itô’s lemma to (19) and (32), we get a martingale term $\int_0^u a_2(s + r)\,\mathrm{d} B_r$ that turns out to be uniformly integrable as $\int_0^\infty a_2^2(s + r)\,\mathrm{d} r < \infty$ , by (21e). Taking the $\mathbb{P}_{s,y}$ -expectation, this term vanishes and we get the following alternative representations of W:

(33) \begin{align} W(s, y) - G(s, y) &= \mathrm{sup}_{\sigma\geq 0}\mathbb{E}_{s, y}\left[ \int_0^\sigma \mathbb{L} G\left(s + u, Y_{s+u}\right)\,\mathrm{d} u\right]\nonumber\\[5pt] &= \mathbb{E}_{s, y}\bigg[\!\int_0^{\sigma^*} \mathbb{L} G\left(s + u, Y_{s+u}\right)\,\mathrm{d} u\bigg],\end{align}

where $\mathbb{L} \;:\!=\; \partial_t + \frac{1}{2}\partial_{xx}$ is the infinitesimal generator of the process $\left\{\left(s, Y_s\right)\right\}_{s\in \mathbb{R}_+}$ and the operator $\partial_{xx}$ is shorthand for $\partial_x\partial_x$ . Note that $\mathbb{L} G = \partial_t G$ .

Denote by $\mathcal{C}$ the complement of $\mathcal{D}$ ,

\begin{align*} \mathcal{C} \;:\!=\; \left\{(s, y) \in\mathbb{R}_+\times\mathbb{R} \;:\; W(s, y) > G(s, y)\right\},\end{align*}

which is called the continuation set. The boundary between $\mathcal{D}$ and $\mathcal{C}$ is the OSB, and it determines the OST $\sigma^*$ .

In addition to the Lipschitz continuity, higher smoothness of the value function is achieved away from the OSB, as stated in the next proposition. We also determine the connection between the OSP (19) and the associated free-boundary problem. For further details on this connection in a more general setting we refer to Section 7 of [Reference Peskir and Shiryaev65].

Proposition 4. (Higher smoothness of the value function and the free-boundary problem.)

We have $W\in C^{1, 2}(\mathcal{C})$ ; that is, the functions $\partial_t W$ , $\partial_x W$ , and $\partial_{xx} W$ exist and are continuous on $\mathcal{C}$ . Additionally, $y\mapsto W(s, y)$ is convex for all $s\in\mathbb{R}_+$ , and $\mathbb{L} W = 0$ on $\mathcal{C}$ .

Proof. The convexity of W with respect to the space coordinate is a straightforward consequence of the linearity of $Y_{s+u}$ with respect to y under $\mathbb{P}_{s, y}$ , for all $s\in\mathbb{R}_+$ . Indeed, it follows from (19) that $W(s, ry_1 + (1-r)y_2) \leq rW(s, y_1) + (1-r)W(s, y_2)$ , for all $y_1,y_2\in\mathbb{R}$ and $r\in[0, 1]$ .

Since W is continuous on $\mathcal{C}$ (see Proposition 3) and the coefficients in the parabolic operator $\mathbb{L}$ are smooth enough (it suffices to require local $\alpha$ -Hölder continuity), the standard theory of parabolic partial differential equations [Reference Friedman38, Section 3, Theorem 9] guarantees that, for an open rectangle $\mathcal{R}\subset \mathcal{C}$ , the initial-boundary value problem

(34) \begin{align} \left\{ \begin{aligned} \mathbb{L}\; f &= 0 && \mathrm{in }\; \mathcal{R}, \\[5pt] f &= W && \mathrm{on }\; \partial \mathcal{R}, \end{aligned} \right. \end{align}

where $\partial \mathcal{R}$ refers to the boundary of $\mathcal{R}$ , has a unique solution $f\in C^{1, 2}(\mathcal{R})$ . Therefore, we can use Itô’s formula on $f(s + u, Y_{s+u})$ at $u = \sigma_{\mathcal{R}}$ , that is, the first time $(s+u, Y_{s+u})$ exits $\mathcal{R}$ , and then take the $\mathbb{P}_{s, y}$ -expectation with $(s, y) \in \mathcal{R}$ , which guarantees the vanishing of the martingale term and yields, along with (34) and the strong Markov property, the equalities $W(s, y) = \mathbb{E}_{s, y}\left[ W(s + \sigma_{\mathcal{R}}, Y_{s + \sigma_{\mathcal{R}}})\right] = f(s, y)$ . Since the open rectangle $\mathcal{R}\subset\mathcal{C}$ was arbitrary, it follows that $W\in C^{1, 2}(\mathcal{C})$ and that $\mathbb{L} W = 0$ on $\mathcal{C}$ .

In addition to the partial differentiability of W, it is possible to provide relatively explicit forms for $\partial_t W$ and $\partial_x W$ by relying on the representation (33) and the fact that $a_1$ and $a_2$ are differentiable functions.

Proposition 5. (Partial derivatives of the value function.)

For any $(s, y)\in\mathcal{C}$ , consider the OST $\sigma^* = \sigma^*(s, y)$ . Then

(35) \begin{align} \partial_t W(s, y) = \partial_t G(s, y) + \mathbb{E}_{s, y}\bigg[\!\int_s^{s + \sigma^*}\left(a^{\prime\prime}_{1}(u) + 2c_0a^{\prime}_{2}(u) + a^{\prime\prime}_{2}(u)(c_0u + Y_u)\right)\, \mathrm{d} u\bigg] \end{align}

and

(36) \begin{align} \partial_x W(s, y) = \mathbb{E}_{s, y}\left[ a_2(s + \sigma^*)\right]. \end{align}

Proof. Since $\sigma^* = \sigma^*(s, y)$ is suboptimal for any initial condition other than (s, y), we have

\begin{align*} \varepsilon^{-1}\left(W(s, y) - W(s - \varepsilon, y)\right) &\leq \varepsilon^{-1}\mathbb{E}\left[ G(s + \sigma^*, Y_{\sigma^*} + y) - G(s - \varepsilon + \sigma^*, Y_{\sigma^*} + y)\right] \end{align*}

for any $0 < \varepsilon \leq s$ . Hence, by letting $\varepsilon\rightarrow 0$ and recalling that $W\in C^{1,2}(\mathcal{C})$ (see Proposition 4), we get that, for $(s, y)\in\mathcal{C}$ ,

(37) \begin{align} \partial_t W(s, y) &\leq \mathbb{E}_{s, y}\left[ \partial_t G(s + \sigma^*, Y_{s + \sigma^*})\right] = \partial_t G(s, y) + \mathbb{E}_{s, y}\bigg[\!\int_0^{\sigma^*}\mathbb{L}\partial_t G(s + u, Y_{s+u})\, \mathrm{d} u\bigg]. \end{align}

In the same fashion, we obtain that

\begin{align*} \varepsilon^{-1}\left(W(s + \varepsilon, y) - W(s, y)\right) &\geq \varepsilon^{-1}\mathbb{E}\left[ G(s + \varepsilon + \sigma^*, Y_{\sigma^*} + y) - G(s + \sigma^*, Y_{\sigma^*} + y)\right], \end{align*}

which, after we let $\varepsilon\rightarrow 0$ , yields (37) in the reverse direction. Therefore, (35) is proved after computing $\mathbb{L}\partial_t G(s + u, Y_{s+u}) = \partial_{tt} G(s + u, Y_{s+u})$ .

To get the analogous result for the space coordinate, notice that

\begin{align*} \varepsilon^{-1}\left(W(s, y) - W(s, y - \varepsilon)\right) &\leq \varepsilon^{-1}\mathbb{E}\left[ W(s + \sigma^*, Y_{\sigma^*} + y) - W(s + \sigma^*, Y_{\sigma^*} + y - \varepsilon)\right] \\[5pt] &\leq \varepsilon^{-1}\mathbb{E}\left[ G(s + \sigma^*, Y_{\sigma^*} + y) - G(s + \sigma^*, Y_{\sigma^*} + y - \varepsilon)\right] \\[5pt] &= \mathbb{E}_{s, y}\left[ a_2(s + \sigma^*)\right], \end{align*}

while the same reasoning yields the inequality

\begin{align*} \varepsilon^{-1}\left(W(s, y + \varepsilon) - W(s, y)\right) \geq \mathbb{E}_{s, y}\left[ a_2(s + \sigma^*)\right]. \end{align*}

By letting $\varepsilon\rightarrow 0$ , we obtain (36).

Besides the regularity of the value function, that of the OSB is also key to solving the OSP. However, defined merely as the boundary between $\mathcal{D}$ and $\mathcal{C}$ , the OSB leaves little room for technical manipulation. The next proposition gives us a handle on the OSB by showing that it is the graph of a bounded function of time, above which $\mathcal{D}$ lies.

Proposition 6. (Shape of the OSB.)

There exists a function $b\;:\;\mathbb{R}_+\rightarrow\mathbb{R}$ such that

\begin{align*} \mathcal{D} = \left\{(s, y)\in\mathbb{R}_+\times\mathbb{R} \;:\; y \geq b(s)\right\}. \end{align*}

Moreover, $g(s) < b(s) < \infty$ for all $s\in\mathbb{R}_+$ , where $g(s) \;:\!=\; (-a^{\prime}_{1}(s) - c_0(a_2(s) + a^{\prime}_{2}(s)s))/a^{\prime}_{2}(s)$ .

Proof. Define b as

(38) \begin{align} b(s) \;:\!=\; \inf\left\{y\;:\; (s, y) \in \mathcal{D}\right\},\quad s\in\mathbb{R}_+. \end{align}

The claimed shape for the stopping set is a straightforward consequence of the decreasing behavior of $y\mapsto (W-G)(s, y)$ for all $s\in\mathbb{R}_+$ , which follows from (21f), (26), and (33).

To derive the lower bound for b, notice that, for all (s, y) such that $\partial_t G(s, y) > 0$ , we can pick a ball $\mathcal{B}$ such that $(s, y)\in\mathcal{B}$ and $\partial_t G > 0$ on $\mathcal{B}$ . Recalling (33) and letting $\sigma_\mathcal{B} = \sigma_\mathcal{B}(s, y)$ be the first exit time of $Y$ from $\mathcal{B}$ , we get that

\begin{align*} W(s, y) - G(s, y) \geq \mathbb{E}_{s, y}\left[ \int_0^{\sigma_\mathcal{B}} \partial_t G\left(s + u, Y_{s+u}\right)\,\mathrm{d} u\right] > 0, \end{align*}

which means that $(s, y)\in\mathcal{C}$ . The claimed lower bound for b follows from using (26) and (21f) to realize that $\partial_t G(s, y) > 0$ if and only if $y < g(s)$ .

We now prove that $b(s) < \infty$ for all $s \in\mathbb{R}_+$ . Let $X = \big\{X_t\big\}_{t \in [0, T]}$ be the OUB representation of the process $s\mapsto G(s, Y_s)$ , that is, the unique strong solution of (3), with drift $\mu(t, x) = \theta(t)(\kappa(t) - x)$ and volatility (function) $\nu$ . This GMB X is well defined, as we can trace back functions $\alpha$ , $\beta_T$ , and $\gamma_T$ and values T and z such that the OSP (16) is in the form (19) (see Remark 2).

In addition to X, define the OUBs $X^{(i)}$ , for $i = 1, 2$ , with volatility $\nu$ and drifts

\begin{align*} \mu^{(1)}(t, x) = \theta(t)(K - x),\quad \mu^{(2)}(t, x) = \frac{\underline{\nu}}{\overline{\nu}(T-t)}(K - x), \end{align*}

respectively, where $K \;:\!=\; \max\{\kappa(t) \;:\; t\in[0, T]\}$ , $\overline{\nu} \;:\!=\; \max\{\nu(t) \;:\; t\in[0, T]\}$ , and $\underline{\nu} \;:\!=\; \min\{\nu(t) \;:\; t\in[0, T]\}$ . Consider the OSPs

\begin{align*} V^{(0)}(t, x) &\;:\!=\; \sup_{\tau\leq T-t}\mathbb{E}_{t, x}\left[ X_{t+\tau}\right], \\[5pt] V^{(1)}(t, x) &\;:\!=\; \sup_{\tau\leq T-t}\mathbb{E}_{t, x}\left[ X_{t+\tau}^{(1)}\right], \\[5pt] V_K^{(2)}(t, x) &\;:\!=\; \sup_{\tau\leq T-t}\mathbb{E}_{t, x}\left[ K + |X_{t+\tau}^{(2)} - K|\right], \end{align*}

alongside their respective stopping sets $\mathcal{D}^{(0)}$ , $\mathcal{D}^{(1)}$ , and $\mathcal{D}^{(2)}_K$ .

Notice that $\mu(t, x) \leq \mu^{(1)}(t, x)$ for all $(t, x)\in[0, T)\times\mathbb{R}$ . Hence $X_{t+u}\leq X_{t+u}^{(1)}$ $\mathbb{P}_{t, x}$ -a.s. for all $u\in[0, T-t]$ , as Corollary 3.1 in [Reference Peng and Zhu61] states. This implies that $\mathcal{D}^{(1)} \subset \mathcal{D}^{(0)}$ .

On the other hand, it follows from ( ii.2 ) that $\theta(t) \geq \underline{\nu}/(\overline{\nu}(T-t))$ , meaning that $\mu(t, x) \leq \mu^{(2)}(t, x)$ if and only if $x \geq K$ . By using the same comparison result in [Reference Peng and Zhu61], we get the second inequality in the following sequence of relations:

\begin{align*} X_{t+u}^{(1)} \leq K + |X_{t+u}^{(1)} - K| \leq K + |X_{t+u}^{(2)} - K| \end{align*}

$\mathbb{P}_{t, x}$ -a.s. for all $u\in[0, T-t]$ . Hence, for a pair $(t, x)\in \mathcal{D}_K^{(2)}$ , we get that $V^{(0)}(t, x) \leq V^{(2)}_K(t, x) = x$ , that is, $(t, x) \in \mathcal{D}^{(0)}$ and therefore $\mathcal{D}^{(2)}_K \subset \mathcal{D}^{(0)}$ . The OSP related to $V^{(2)}_K$ can be shown to have a finite OSB. Specifically, it is a multiple of that of a BB (see [Reference D’Auria and Ferriero22, Section 5]). Then, $\mathcal{D}^{(0)} \cap \left(\{t\}\times\mathbb{R}\right)$ is non-empty for all $t\in[0, T)$ , and the equivalence result in Proposition 2 guarantees that so are the sets of the form $\mathcal{D} \cap \left(\{t\}\times\mathbb{R}\right)$ , meaning that the OSB b is bounded from above.

Remark 6. Note that the same reasoning we used to derive the lower bound on b in the proof of Proposition 6 also implies that, if $a^{\prime}_{2}(s) > 0$ for some $s\in\mathbb{R}_+$ , then $(s, y)\in\mathcal{C}$ for all $y > (-a^{\prime}_{1}(s) - c_0(a_2(s) + a^{\prime}_{2}(s)s))/a^{\prime}_{2}(s)$ , meaning that $b(s) = \infty$ . To avoid this explosion of the OSB we impose $a^{\prime}_{2}(s) < 0$ for all $s\in\mathbb{R}_+$ in (21f).

Summarizing, we have proved that W satisfies the free-boundary problem

\begin{align*} \mathbb{L} W(s, y) &= 0 && \text{for }\; y < b(s), \\[5pt] W(s, y) &> G(s, y) && \text{for }\; y < b(s), \\[5pt] W(s, y) &= G(s, y) && \text{for }\; y\geq b(s).\end{align*}

Since b is unknown, an additional condition, generally known as the smooth-fit condition, is needed to guarantee the uniqueness of the solution of this free-boundary problem. When b is regular enough, this is done by making the value and the gain function coincide smoothly at the free boundary.

The works of [Reference De Angelis25, Reference De Angelis and Stabile28, Reference Peskir64] address the smoothness of the free boundary. For one-dimensional, time-homogeneous processes with locally Lipschitz-continuous drift and volatility, [Reference De Angelis25] provides the continuity of the free boundary. The paper [Reference Peskir64] works with the two-dimensional case in a fairly general setting, proving the impossibility of first-type discontinuities (second-type discontinuities are not addressed). The paper [Reference De Angelis and Stabile28] goes further by proving the local Lipschitz continuity of the free boundary in a higher-dimensional framework. In particular, local Lipschitz continuity suffices for the smooth-fit condition to hold (see Proposition 8 below), which is the main reason we tailor the method of [Reference De Angelis and Stabile28] to fit our setting in the next proposition. Specifically, the relation between the partial derivatives imposed on Assumption (D) in [Reference De Angelis and Stabile28] excludes our gain function, but Equation (43) overcomes this issue.

Proposition 7. (Lipschitz continuity and differentiability of the OSB.)

The OSB b is Lipschitz continuous on any closed interval of $\mathbb{R}_+$ .

Proof. Let $H(s, y) \;:\!=\; W(s, y) - G(s, y)$ , fix two arbitrary non-negative numbers $\underline{s}$ and $\bar{s}$ such that $\underline{s} < \bar{s}$ , and consider the closed interval $I = [\,\underline{s}, \bar{s}\,]$ . Proposition 6 guarantees that b is bounded from below, and hence we can choose $r < \inf \left\{b(s)\;:\; s\in I\right\}$ . Then $I\times\left\{r\right\}\subset\mathcal{C}$ , meaning that $H(s, r) > 0$ for all $s\in I$ . Since H is continuous (see Proposition 3) on $\mathcal{C}$ , there exists a constant $a > 0$ such that $H(s, r) \geq a$ for all $s\in I$ . Therefore, for all $\delta$ such that $0 < \delta \leq a$ , and all $s\in I$ , there exists $y\in\mathbb{R}$ such that $H(s, y) = \delta$ . Such a value of y is unique, as $\partial_x H < 0$ on $\mathcal{C}$ (see (36)). Hence we can denote it by $b_\delta(s)$ and define the function $b_\delta\;:\;I\rightarrow [r, b(s))$ . Because H is regular enough away from the boundary, we can apply the implicit function theorem, which states the differentiability of $b_\delta$ along with the fact that

(39) \begin{align} b^{\prime}_{\delta}(s) = -\partial_t H(s, b_\delta(s)) / \partial_x H(s, b_\delta(s)). \end{align}

Note that $b_\delta$ increases as $\delta \rightarrow 0$ and is upper-bounded, uniformly in $\delta$ , by b, which is proved to be finite in Proposition 6. Hence $b_\delta$ converges pointwise, as $\delta \rightarrow 0$ , to some limit function $b_0$ such that $b_0 \leq b$ on I. The reverse inequality follows from

\begin{align*} H(s, b_0(s)) = \lim_{\delta \rightarrow 0}H(s, b_\delta(s)) = \lim_{\delta \rightarrow 0}\delta = 0, \end{align*}

meaning that $(s, b_0(s))\in\mathcal{D}$ . Hence $b_0 = b$ on I.

For $(s, y)\in \mathcal{C}$ such that $s\in I$ and $y > r$ , consider the stopping times $\sigma^* = \sigma^*(s, y)$ and

\begin{align*} \sigma_r = \sigma_r(s, y) = \inf\{u\geq 0 \;:\; (s + u, Y_{s+u}) \notin I\times (r, \infty) \}. \end{align*}

Recalling (35), it readily follows that

(40) \begin{align} \left|\partial_t H(s, y)\right| \leq K^{(1)}\ m(s, y) \end{align}

for $K^{(1)} = \max\left\{A^{\prime\prime}_{1} + 2c_0A^{\prime}_{2} + c_0A^{\prime\prime}_{3}, 1\right\}$ and

\begin{align*} m(s, y) \;:\!=\; \mathbb{E}_{s, y}\bigg[\!\int_0^{\sigma^*}\left(1 + \left|a^{\prime\prime}_{2}(s+u)Y_{s+u}\right|\right)\,\mathrm{d} u\bigg]. \end{align*}

By the tower property of conditional expectation, the strong Markov property, and the fact that $\sigma^*(s, y) = \sigma_r + \sigma^*\left(s + \sigma_r, Y_{s + \sigma_r}\right)$ whenever $\sigma_r \leq \sigma^*$ , we have that

(41) \begin{align} m(s, y) &= \mathbb{E}_{s, y}\bigg[\!\int_0^{\sigma^*\wedge\sigma_r}\left(1 + \left|a^{\prime\prime}_{2}(s + u)Y_{s+u}\right|\right)\,\mathrm{d} u + \mathbb{1}\left(\sigma_r \leq \sigma^*\right) m(s + \sigma_r, Y_{s + \sigma_r})\bigg]. \end{align}

On the set $\left\{\sigma_r \leq \sigma^*\right\}$ , $(s + \sigma_r, Y_{s + \sigma_r}) \in \Gamma_s$ $\mathbb{P}_{s, y}$ -a.s. whenever $r < y < b(s)$ , with $\Gamma_s \;:\!=\; \left((s, \bar{s})\times\{r\}\right) \cup \left(\{\bar{s}\}\times[r, b(\bar{s})]\right)$ . Hence, if $\sigma_r \leq \sigma^*$ , then

(42) \begin{align} m\left(s + \sigma_r, Y_{s + \sigma_r}\right) &\leq \sup_{(s^{\prime}, y^{\prime}) \in \Gamma_s}m(s^{\prime}, y^{\prime}) \nonumber \\[5pt] &\leq \sup_{(s^{\prime}, y^{\prime}) \in \Gamma_s} \mathbb{E}_{s^{\prime}, y^{\prime}}\left[ \int_{0}^{\infty}\left(1 + \left|a^{\prime\prime}_{2}(s^{\prime} + u)Y_{s^{\prime}+u}\right|\right)\,\mathrm{d} u\right] \nonumber \\[5pt] &\leq \sup_{(s^{\prime}, y^{\prime}) \in \Gamma_s} \!\left(\int_{0}^{\infty}\left(1 + \left|a^{\prime\prime}_{2}(s^{\prime} + u)y^{\prime}\right|\right)\,\mathrm{d} u + \int_{0}^{\infty}\mathbb{E}\!\left[ \left|a^{\prime\prime}_{2}(s^{\prime} + u)Y_u\right|\right]\!\mathrm{d} u\!\right) \nonumber \\[5pt] &\leq \int_{0}^{\infty}\left(1 + \left|a^{\prime\prime}_{2}(u)M\right|\right)\,\mathrm{d} u + \int_{0}^{\infty}\left|a^{\prime\prime}_{2}(s^{\prime} + u)\right|\sqrt{2u/\pi}\,\mathrm{d} u < \infty, \end{align}

with $M \;:\!=\; \max\{|\sup_{s\in I}b(s)|, |r|\}$ . We can guarantee the convergence of both integrals since (23) implies that $\left|a^{\prime\prime}_{2}(s)\right|$ is asymptotically equivalent to $s^{-2}$ . By plugging (42) into (41), recalling (40), and noticing that $1 + \left|a^{\prime\prime}_{2}(s + u)Y_{s+u}\right| \leq 1 + A^{\prime}_{2}M$ whenever $u\leq\sigma^*\wedge\sigma_r$ , we obtain that there exists $K_I^{(2)} > 0$ such that

(43) \begin{align} \left|\partial_t H(s, y)\right| \leq K_I^{(2)}\ \mathbb{E}_{s, y}\left[ \sigma^*\wedge\sigma_r + \mathbb{1}\left(\sigma_r \leq \sigma^*\right)\right]. \end{align}

Arguing as in (41) and relying on (27), (36), and (21f), we get that

(44) \begin{align} |\partial_x& H(s, y)| \nonumber\\[5pt] &= \mathbb{E}_{s, y}\left[ a_2(s) - a_2(s + \sigma^*)\right] = \mathbb{E}_{s, y}\bigg[\!\int_0^{\sigma^*}-a^{\prime}_{2}(s + u)\,\mathrm{d} u\bigg] \nonumber \\[5pt] &= \mathbb{E}_{s, y}\bigg[\!\int_0^{\sigma^*\wedge\sigma_r}-a^{\prime}_{2}(s + u)\,\mathrm{d} u + \mathbb{1}\left(\sigma_r \leq \sigma^*\right) \left|\partial_x H(s + \sigma_r, Y_{s + \sigma_r})\right|\bigg] \nonumber \\[5pt] &\geq \mathbb{E}_{s, y}\bigg[\!\int_0^{\sigma^*\wedge\sigma_r}-a^{\prime}_{2}(s + u)\,\mathrm{d} u + \mathbb{1}\left(\sigma_r \leq \sigma^*, \sigma_r < \overline{s} - s\right)\left|\partial_x H(s + \sigma_r, r)\right|\bigg]. \end{align}

Since $I\times\left\{r\right\}\subset\mathcal{C}$ , we can take $\varepsilon > 0$ such that $\mathcal{R}_\varepsilon \;:\!=\; [\,\underline{s}, \overline{s} + \varepsilon]\times(r - \varepsilon, r + \varepsilon)\subset\mathcal{C}$ . Then $\sigma^* > \sigma_\varepsilon$ $\mathbb{P}_{s, r}$ -a.s. for all $s\in I$ , where

\begin{align*} \sigma_\varepsilon = \sigma_\varepsilon(s, r) \;:\!=\; \inf\left\{u\geq 0 \;:\; \left(s + u, Y_{s+u}\right) \notin \mathcal{R}_\varepsilon\right\}. \end{align*}

Hence

(45) \begin{align} \left|\partial_x H(s + \sigma_r, r)\right| &\geq \inf_{s\in I} \left|\partial_x H(s, r)\right| = \inf_{s\in I} \mathbb{E}_{s, r}\left[ a_2(s) - a_2(s + \sigma^*)\right] \nonumber \\[5pt] &\geq \inf_{s\in I} \mathbb{E}_{s, r}\left[ a_2(s) - a_2(s + \sigma_\varepsilon)\right] \nonumber \\[5pt] &\geq \inf_{s\in I} \left(a_2(s) - a_2(\overline{s} + \varepsilon)\right)\mathbb{P}_{s, r}\left( \sigma_\varepsilon = \overline{s} + \varepsilon - s\right) \nonumber \\[5pt] &\geq \left(a_2(\overline{s}) - a_2(\overline{s} + \varepsilon)\right)\mathbb{P}\bigg(\sup_{u\leq \overline{s} + \varepsilon - \underline{s}} \left|Y_u\right| < \varepsilon\bigg) > 0, \end{align}

where we use that $a_2$ is decreasing. Recalling that $a^{\prime}_{2}$ is a bounded function and plugging (45) into (44), we get that, for a constant $K_{I, \varepsilon}^{(3)} > 0$ ,

(46) \begin{align} \left|\partial_x H(s, y)\right| \geq K_{I,\varepsilon}^{(3)}\ \mathbb{E}_{s, y}\left[ \sigma^*\wedge\sigma_r + \mathbb{1}\left(\sigma_r \leq \sigma^*, \sigma_r < \overline{s} - s\right)\right]. \end{align}

Substituting (43) and (46) into (39), we get the following bound for the derivative of b with some constant $K_{I,\varepsilon}^{(4)} > 0$ , $y_\delta = b_\delta(s)$ , and $\sigma_\delta = \sigma^*(s, y_\delta)$ :

(47) \begin{align} \left|b^{\prime}_{\delta}(s)\right| &\leq K_{I,\varepsilon}^{(4)}\ \frac{\mathbb{E}_{s, y_\delta}\left[ \sigma_\delta\wedge\sigma_r + \mathbb{1}\left(\sigma_r \leq \sigma_\delta\right)\right]}{\mathbb{E}_{s, y_\delta}\left[ \sigma_\delta\wedge\sigma_r + \mathbb{1}\left(\sigma_r \leq \sigma_\delta, \sigma_r < \overline{s} - s\right)\right]} \nonumber \\[5pt] &\leq K_{I,\varepsilon}^{(4)}\left(1 + \frac{\mathbb{P}_{s, y_\delta}\left( \sigma_r \leq \sigma_\delta\right)}{\mathbb{E}_{s, y_\delta}\left[ \sigma_\delta\wedge\sigma_r + \mathbb{1}\left(\sigma_r \leq \sigma_\delta, \sigma_r < \overline{s} - s\right)\right]}\right) \nonumber \\[5pt] &\leq K_{I,\varepsilon}^{(4)}\left(1 + \frac{\mathbb{P}_{s, y_\delta}\left( \sigma_r \leq \sigma_\delta, \sigma_r = \bar{s} - s\right)}{\mathbb{E}_{s, y_\delta}\left[ \sigma_\delta\wedge\sigma_r\right]} + \frac{\mathbb{P}_{s, y_\delta}\left( \sigma_r \leq \sigma_\delta, \sigma_r < \bar{s} - s\right)}{\mathbb{E}_{s, y_\delta}\left[ \mathbb{1}\left(\sigma_r \leq \sigma_\delta, \sigma_r < \overline{s} - s\right)\right]}\right) \nonumber \\[5pt] &\leq K_{I,\varepsilon}^{(4)}\left(2 + \frac{\mathbb{P}_{s, y_\delta}\left( \sigma_r \leq \sigma_\delta, \sigma_r = \bar{s} - s\right)}{\mathbb{E}_{s, y_\delta}\left[ \mathbb{1}\left(\sigma_r\leq\sigma_\delta, \sigma_r = \bar{s} - s\right)\left(\sigma_\delta\wedge\sigma_r\right)\right]}\right) \nonumber \\[5pt] &\leq K_{I,\varepsilon}^{(4)}\left(2 + \frac{1}{\bar{s} - s}\right). \end{align}

Let $I_\varepsilon = [\,\underline{s}, \bar{s} - \varepsilon]$ for $\varepsilon > 0$ small enough. By (47), there exists a constant $L_{I, \varepsilon} > 0$ , independent of $\delta$ , such that $|b^{\prime}_{\delta}(s)| < L_{I, \varepsilon}$ for all $s\in I_\varepsilon$ and $0 < \delta \leq a$ . In particular, the functions $b_\delta$ are Lipschitz continuous on $I_\varepsilon$ with the common constant $L_{I, \varepsilon}$ , and the Arzelà–Ascoli theorem guarantees that $b_\delta$ converges to b uniformly on $I_\varepsilon$ as $\delta \rightarrow 0$ . Hence b is Lipschitz continuous on $I_\varepsilon$ , and, since $\underline{s}$ and $\bar{s}$ were arbitrary, the claim follows.

Given the local Lipschitz continuity of the OSB, it is relatively easy to prove the global continuous differentiability of the value function from the law of the iterated logarithm and the work of [Reference De Angelis and Peskir27], which, in turn, implies the smooth-fit condition. This approach is commented on in Remark 4.5 of [Reference De Angelis and Stabile28]. The proposition below provides the details.

Proposition 8. (Global $C^1$ regularity of the value function.)

We have that W is continuously differentiable in $\mathbb{R}_+\times \mathbb{R}$ .

Proof. Since $W = G$ on $\mathcal{D}$ , and W has continuous partial derivatives in $\mathcal{C}$ (see Proposition 4), it follows that W is continuously differentiable on $\mathcal{D}^\circ$ and on $\mathcal{C}$ , where $\mathcal{D}^\circ$ stands for the interior of $\mathcal{D}$ . To conclude the proof, it remains to show such regularity on $\partial \mathcal{C}$ .

Note that the law of the iterated logarithm alongside the local Lipschitz continuity of b yields the following, for all $s\in\mathbb{R}_+$ and some constant $L_s > 0$ that depends on s:

\begin{align*} \mathbb{P}_{s, b(s)}(\inf\{u > 0 &\;:\; Y_{s+u} > b(s+u)\} = 0) \\[5pt] &= \lim_{\varepsilon\downarrow 0} \mathbb{P}_{s, b(s)}\left( \inf\left\{u > 0 \;:\; Y_{s+u} > b(s+u)\right\} < \varepsilon\right) \\[5pt] &= \lim_{\varepsilon\downarrow 0}\mathbb{P}_{s, b(s)}\left( \sup_{u\in(0,\varepsilon)} \left(Y_{s+u} - b(s+u)\right) > 0\right) \\[5pt] &= \lim_{\varepsilon\downarrow 0}\mathbb{P}_{s, b(s)}\left( \sup_{u\in(0,\varepsilon)} \frac{Y_{s+u} - b(s+u)}{\sqrt{2u\ln\!(\!\ln\!(1/u))}} > 0\right) \\[5pt] &\geq \lim_{\varepsilon\downarrow 0}\mathbb{P}_{s, b(s)}\left( \sup_{u\in(0,\varepsilon)} \frac{Y_{s+u} - b(s) - L_s u}{\sqrt{2u\ln\!(\!\ln\!(1/u))}} > 0\right) \\[5pt] &= \mathbb{P}_{s, b(s)}\left( \limsup_{u\downarrow 0} \frac{Y_{s+u} - b(s) - L_s u}{\sqrt{2u\ln\!(\!\ln\!(1/u))}} > 0\right) = 1. \end{align*}

That is, $\left\{(s+u, Y_{s+u})\right\}_{u\in\mathbb{R}_+}$ immediately enters $\mathcal{D}^{\circ}$ $\mathbb{P}_{s, b(s)}$ -a.s., and hence Corollary 6 from [Reference De Angelis and Peskir27] guarantees that $\sigma^*(s_n, y_n) \rightarrow \sigma^*(s, b(s)) = 0\ \mathbb{P}$ -a.s. for any sequence $(s_n, y_n)$ that converges to (s, b(s)) as $n\rightarrow\infty$ .

Therefore, the dominated convergence theorem and (36) show that

$$\partial_x W(s, b(s)^-) = a_2(s) = \partial_x G(s, b(s)).$$

Since $W = G$ on $\mathcal{D}$ , it also holds that $\partial_x W(s, b(s)^+) = \partial_x G(s, b(s)) = a_2(s)$ , and hence $W_x$ is continuous on $\partial \mathcal{C}$ , which is the required smooth-fit condition.

On the other hand, consider a sequence $s_n$ such that $(s_n, b(s))\in \mathcal{C}$ for all n and $s_n\uparrow s$ as $n\rightarrow\infty$ . Relying again on the dominated convergence theorem and using (35), we get that $\partial_t W(s_n, b(s)) \rightarrow \partial_t G(s, b(s))$ . We trivially reach the same convergence by taking $(s_n, b(s))\in \mathcal{D}$ for all n, since $W = G$ on $\mathcal{D}$ . Arguing identically, we obtain that $\partial_t W(s_n, b(s)) \rightarrow \partial_t G(s, b(s))$ whenever $s_n \downarrow s$ . Hence $W_t$ is continuous on $\partial \mathcal{C}$ , which finally yields the global $C^1$ regularity of W.

We are now able to provide the solution for the OSP (19). Indeed, so far we have gathered all the regularity conditions needed to apply an extended Itô’s formula to $W(s + u, Y_{s+u})$ to obtain characterizations of the value function and the OSB. The former is given in terms of an integral of the OSB, while the latter is proved to be the unique solution of a type-two nonlinear Volterra integral equation. Both characterizations benefit from the Gaussianity of the BM, yielding relatively explicit integrands. Theorem 1 gives the details. Its proof requires the following lemma.

Lemma 2. For all $(s, y)\in\mathbb{R}_+\times\mathbb{R}$ ,

\begin{align*} \lim_{u\rightarrow\infty}\mathbb{E}_{s, y}\left[ W(s + u, Y_{s+u})\right] = c_1 + c_0c_2, \end{align*}

where $c_1$ and $c_2$ come from Equations (21e) and (21c), respectively.

Proof. Let $s_u \;:\!=\; s + u$ for $s, u \in\mathbb{R}_+$ . The Markov property of Y implies that

\begin{align*} \lim_{u\rightarrow\infty}&\,\mathbb{E}_{s, y}\left[ W(s_u, Y_{s_u})\right] \\[5pt] &= \lim_{u\rightarrow\infty}\mathbb{E}_{s, y}\left[ \sup_{\sigma\geq 0}\mathbb{E}_{s_u, Y_{s_u}}\left[ G\left(s_u + \sigma, Y_{s_u + \sigma}\right)\right]\right] \\[5pt] &\leq \lim_{u\rightarrow\infty}\mathbb{E}_{s, y}\left[ \mathbb{E}_{s_u, Y_{s_u}}\left[ \sup_{r\geq 0}G\left(s_u + r, Y_{s_u + r}\right)\right]\right] \\[5pt] &= \lim_{u\rightarrow\infty}\mathbb{E}_{s, y}\left[ \sup_{r\geq 0}\left\{a_1(s_u + r) + c_0a_2(s_u + r)(s_u + r) + a_2(s_u + r)Y_{s_u+r}\right\}\right] \\[5pt] &= \mathbb{E}_{s, y}\bigg[\!\lim_{u\rightarrow\infty}\sup_{r\geq 0}\left\{a_1(s_u + r) + c_0a_2(s_u + r)(s_u + r) + a_2(s_u + r)Y_{s_u+r}\right\}\bigg] \\[5pt] &= \mathbb{E}_{s, y}\bigg[\!\limsup_{u\rightarrow\infty}\left\{a_1(s_u) + c_0a_2(s_u)s_u + a_2(s_u)Y_{s_u}\right\}\bigg] \\[5pt] &= c_1 + c_0c_2, \end{align*}

where the interchangeability of the limit and the mean operator is justified by the monotone convergence theorem. The last equality follows from (21c) and (21e), along with the law of the iterated logarithm, implying that $\limsup_{u\rightarrow\infty} a_2(s_u)Y_{s_u} = 0$ .

Likewise, we have that

\begin{align*} \lim_{u\rightarrow\infty}\mathbb{E}_{s, y}\left[ W(s_u, Y_{s_u})\right] &\geq \lim_{u\rightarrow\infty}\mathbb{E}_{s, y}\left[ \mathbb{E}_{s_u, Y_{s_u}}\left[ \inf_{r\geq 0}G\left(s_u + r, Y_{s_u + r}\right)\right]\right] \\[5pt] &= \mathbb{E}_{s, y}\left[ \liminf_{u\rightarrow\infty}\left\{a_1(s_u) + c_0a_2(s_u)s_u + a_2(s_u)Y_{s_u}\right\}\right] \\[5pt] &= c_1 + c_0c_2, \end{align*}

which concludes the proof.

Theorem 1. (Solution of the OSP.)

The OSB related to the OSP (19) satisfies the free-boundary (integral) equation

(48) \begin{align} G(s, b(s)) = c_1 + c_0c_2 - \int_s^\infty K(s, b(s), u, b(u)) \,\mathrm{d} u, \end{align}

where the kernel K is defined as

\begin{align*} K(s_1, y_1, s_2, y_2) \;:\!=\;&\; \left(a^{\prime}_{1}(s_2) + c_0a_2(s_2) + c_0a^{\prime}_{2}(s_2)s_2 + a^{\prime}_{2}(s_2)y_1\right)\bar{\Phi}_{s_1, y_1, s_2, y_2}\\[5pt] &+ a^{\prime}_{2}(s_2)\sqrt{s_2 - s_1}\,\phi_{s_1, y_1, s_2, y_2} \end{align*}

with $0 \leq s_1 \leq s_2$ , $y_1, y_2\in\mathbb{R}$ , and

\begin{align*} \bar{\Phi}_{s_1, y_1, s_2, y_2} \;:\!=\; \bar{\Phi}\left(\frac{y_2 - y_1}{\sqrt{s_2 - s_1}}\right), \quad \phi_{s_1, y_1, s_2, y_2} \;:\!=\; \phi\left(\frac{y_2 - y_1}{\sqrt{s_2 - s_1}}\right). \end{align*}

The functions $\phi$ and $\bar{\Phi}$ are respectively the density and survival functions of a standard normal random variable. In addition, the integral equation (48) admits a unique solution among the class of continuous functions $f\;:\;\mathbb{R}_+\rightarrow\mathbb{R}$ of bounded variation.

The value function is given by the formula

(49) \begin{align} W(s, y) &= c_1 + c_0c_2 - \int_s^\infty K(s, y, u, b(u)) \,\mathrm{d} u. \end{align}

Proof. Propositions 3–8 provide the regularity required to apply an extended Itô’s lemma (see [Reference Peskir62] for an original derivation and Lemma A2 in [Reference D’Auria, García-Portugués and Guada23] for a reformulation that better suits our setting) to $W(s + h, Y_{s+h})$ for $s, h\geq 0$ . Since $\mathbb{L} W = 0$ on $\mathcal{C}$ and $W = G$ on $\mathcal{D}$ , after taking the $\mathbb{P}_{s, y}$ -expectation (which cancels the martingale term) it follows that

(50) \begin{align} W(s, y) &= \mathbb{E}_{s, y}\left[ W(s + h, Y_{s+h})\right] - \mathbb{E}_{s, y}\left[ \int_0^h (\mathbb{L} W)\left(s + u, Y_{s+u}\right)\,\mathrm{d} u\right] \nonumber \\[5pt] &= \mathbb{E}_{s, y}\left[ W(s + h, Y_{s+h})\right] - \mathbb{E}_{s, y}\left[ \int_0^h \partial_t G\left(s + u, Y_{s+u}\right)\mathbb{1}\left(Y_{s+u} \geq b(s + u)\right)\,\mathrm{d} u\right], \end{align}

where the local-time term does not appear because of the smooth-fit condition. Hence, by taking $h\rightarrow\infty$ in (50) and relying on Lemma 2, we get the following formula for the value function:

(51) \begin{align} W(s, y) &= c_1 + c_0c_2 - \mathbb{E}_{s, y}\left[ \int_0^\infty (\mathbb{L} W)\left(s + u, Y_{s+u}\right)\,\mathrm{d} u\right] \nonumber \\[5pt] &= c_1 + c_0c_2 - \mathbb{E}_{s, y}\left[ \int_0^\infty \partial_t G\left(s + u, Y_{s+u}\right)\mathbb{1}\left(Y_{s+u} \geq b(s + u)\right)\,\mathrm{d} u\right]. \end{align}

We can obtain a more tractable version of (51) by exploiting the linearity of ${y\mapsto \partial_t G(s, y)}$ (see (26)) as well as the fact that $Y_{s+u}\sim \mathcal{N}(y, u)$ under $\mathbb{P}_{s, y}$ . Then,

\begin{align*} \mathbb{E}_{s, y}\left[ Y_{s+u}\mathbb{1}\left(Y_{s+u}\geq x\right)\right] = \bar{\Phi}((x - y)/\sqrt{u})y + \sqrt{u}\phi((x - y)/\sqrt{u}). \end{align*}

By shifting the integration variable by s units, we get Equation (49).
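The Gaussian truncated-mean identity displayed above is elementary but easy to get wrong by a sign; the following sketch (with arbitrary illustrative values of $y$, $u$, and $x$, and function names of our choosing) checks it against a Monte Carlo estimate:

```python
import math
import random

def phi(t):
    # standard normal density
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def Phi_bar(t):
    # standard normal survival function
    return 0.5 * math.erfc(t / math.sqrt(2))

def truncated_mean(y, u, x):
    # E[Y 1(Y >= x)] for Y ~ N(y, u), per the identity in the text
    t = (x - y) / math.sqrt(u)
    return Phi_bar(t) * y + math.sqrt(u) * phi(t)

# Monte Carlo comparison with illustrative values y = 0.3, u = 2.0, x = 1.0
random.seed(1)
y, u, x = 0.3, 2.0, 1.0
n = 200_000
mc = sum(d for d in (random.gauss(y, math.sqrt(u)) for _ in range(n)) if d >= x) / n
print(mc, truncated_mean(y, u, x))
```

The two printed values agree up to Monte Carlo error, and the formula degenerates correctly to $y$ and to $0$ as $x \rightarrow \mp\infty$.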

Now take $y\downarrow b(s)$ in both (51) and (49) to derive the free-boundary equation

(52) \begin{align} G(s, b(s)) &= c_1 + c_0c_2 - \mathbb{E}_{s, b(s)}\left[ \int_0^\infty \partial_t G\left(s + u, Y_{s+u}\right)\mathbb{1}\left(Y_{s+u} \geq b(s + u)\right)\,\mathrm{d} u\right], \end{align}

alongside the more explicit expression (48).

The uniqueness of the solution of Equation (52) is established via a well-known methodology first developed by [Reference Peskir63, Theorem 3.1], which we omit here for the sake of brevity.

5. Solution of the original OSP

In this section we continue with the notation used in Section 3.

Recall that Proposition 2 dictates the equivalence between the OSPs (15) and (16), and gives explicit formulae to link their value functions and OSTs. Consequently, it follows that the stopping time $\tau^*(t, x)$ defined in Proposition 2 in terms of $\sigma^*(s, y)$ is not only optimal for (15), but has the following representation under $\textsf{P}_{t, x}$ :

(53) \begin{align} \tau^*(t, x) &= \inf\left\{u\geq 0 \;:\; X_{t + u} \geq \textsf{b}_{T, z}(t + u)\right\},\quad \textsf{b}_{T, z}(t) \;:\!=\; G_{T, z}(s, b_{T, z}(s)),\end{align}

where $\textsf{b}_{T, z}$ and $b_{T, z}$ are respectively the OSBs related to (15) and (16), and s is defined, in terms of t, in Proposition 2. Note that $b_{T, z}$ coincides with the function defined in (38), with constants $c_0$ , $c_1$ , and $c_2$ , from (20), (21c), and (21e), taking the values

(54) \begin{align} c_0 = z - \alpha(T),\quad c_1 = \alpha(T),\quad c_2 = 1,\end{align}

where $\alpha$ comes from ( iii.3 ) in Proposition 1 (see also Remark 2).

Moreover, it is not necessary to compute $W_{T, z}$ and $b_{T, z}$ to obtain $V_{T, z}$ and $\textsf{b}_{T, z}$ . Denoting by $\textsf{L}$ the infinitesimal generator of $\left\{\left(t, X_t\right)\right\}_{t \in [0, T]}$ , letting $s_\varepsilon = s + \varepsilon$ and $t_\varepsilon = \gamma_T^{-1}(s_\varepsilon)$ for $\varepsilon > 0$ , and using (18) alongside the chain rule, we get that

(55) \begin{align} \left(\mathbb{L} W_{T, z}\right)(s, y) \;:\!=\;&\; \lim_{\varepsilon\rightarrow 0}\varepsilon^{-1}\left(\mathbb{E}_{s, y}\left[ W_{T, z}\left(s_\varepsilon, Y_{s_\varepsilon}\right)\right] - W_{T, z}(s, y)\right) \nonumber \\[5pt] =&\; \lim_{\varepsilon\rightarrow 0}\varepsilon^{-1}\left(\textsf{E}_{t, x}\left[ V_{T, z}(t_\varepsilon, X_{t_\varepsilon})\right] - V_{T, z}(t, x)\right) \nonumber \\[5pt] =&\; \left(\textsf{L} V_{T, z}\right)(t, x)\big[\gamma_T^{-1}\big]'(s). \end{align}

We recall the relations between s and t, and y and x, in Proposition 2. After integrating with respect to $\gamma_T^{-1}(u)$ instead of u in (50), keeping in mind (54) and (55), and recalling that $\textsf{L} V_{T, z}(t, x) = 0$ for all $x\leq \textsf{b}_{T, z}(t)$ and $V_{T, z}(t, x) = x$ for all $x\geq\textsf{b}_{T, z}(t)$ , we get the formula

(56) \begin{align} V_{T, z}(t, x) &= z - \textsf{E}_{t, x}\left[ \int_0^{T-t} (\textsf{L} V_{T, z})(t + u, X_{t + u})\,\mathrm{d} u\right] \nonumber \\[5pt] &= z - \textsf{E}_{t, x}\left[ \int_0^{T-t} \mu(t + u, X_{t + u})\mathbb{1}(X_{t + u} \geq \textsf{b}_{T, z}(t + u))\,\mathrm{d} u\right], \end{align}

where, in alignment with (14),

\begin{align*} \mu(t, x) \;:\!=\;&\; \lim_{u\downarrow 0}u^{-1}\textsf{E}_{t, x}\left[ X_{t + u} - x\right] = \theta(t)(\kappa(t) - x)\\[5pt] =&\; \alpha'(t) + \left(x - \alpha(t)\right)\frac{\beta^{\prime}_{T}(t)}{\beta_T(t)} + (z - \alpha(T))\beta_T(t)\gamma^{\prime}_{T}(t).\end{align*}

As we did to obtain (49), we can use the linearity of $x\mapsto\mu(t, x)$ and the Gaussian marginal distributions of X to produce a refined version of (56):

(57) \begin{align} V_{T, z}(t, x) = z - \int_t^T\textsf{K}(t, x, u, \textsf{b}_{T, z}(u))\,\mathrm{d} u,\end{align}

where

(58) \begin{align} & \textsf{K}(t_1, x_1, t_2, x_2) \nonumber \\ & \;:\!=\; \theta(t_2)\left(\kappa(t_2) - \textsf{E}_{t_1, x_1}\left[ X_{t_2}\right]\right) {\Phi}_{t_{1}, x_{1}, t_{2}, x_{2}} + \sqrt{\mathrm{Var}_{t_1}\left[ X_{t_{2}}\right]}\frac{\beta^{\prime}_{T}(t_2)}{\beta_T(t_2)} {\boldsymbol{\phi}}_{t_{1}, x_{1}, t_{2}, x_{2}} \end{align}
(59) \begin{align} =&\; \left(\alpha'(t_2) + \left(\textsf{E}_{t_1, x_1}\left[ X_{t_2}\right] - \alpha(t_2)\right)\frac{\beta^{\prime}_{T}(t_2)}{\beta_T(t_2)} + (z - \alpha(T))\beta_T(t_2)\gamma^{\prime}_{T}(t_2)\right){\Phi}_{t_1, x_1, t_2, x_2} \nonumber\\[5pt] & + \sqrt{\mathrm{Var}_{t_1}\left[ X_{t_2}\right]}\frac{\beta^{\prime}_{T}(t_2)}{\beta_T(t_2)}{\boldsymbol{\phi}}_{t_1, x_1, t_2, x_2}, \end{align}

with $0 \leq t_1 \leq t_2 < T$ , $x_1, x_2\in\mathbb{R}$ , and

\begin{align*} {\Phi}_{t_1, x_1, t_2, x_2} \;:\!=\; \bar{\Phi}\left(\frac{x_2 - \textsf{E}_{t_1, x_1}\left[ X_{t_2}\right]}{\sqrt{\mathrm{Var}_{t_1}\left[ X_{t_2}\right]}}\right), \quad {\boldsymbol{\phi}}_{t_1, x_1, t_2, x_2} \;:\!=\; \phi\left(\frac{x_2 - \textsf{E}_{t_1, x_1}\left[ X_{t_2}\right]}{\sqrt{\mathrm{Var}_{t_1}\left[ X_{t_2}\right]}}\right),\end{align*}

and, as stated in (10), (12), and (14),

(60) \begin{align} \textsf{E}_{t_1, x_1}\left[ X_{t_2}\right] &= \varphi(t_2)\left(\frac{x_1}{\varphi(t_1)} + \int_{t_1}^{t_2}\frac{\kappa(u)\theta(u)}{\varphi(u)}\,\mathrm{d} u\right) \end{align}
(61) \begin{align} &= \alpha(t_2) + \beta_T(t_2)\left((z - \alpha(T))\gamma_T(t_2) + \frac{x_1 - \alpha(t_1) - \beta_T(t_1)\gamma_T(t_1)(z - \alpha(T))}{\beta_T(t_1)}\right), \nonumber \\[5pt] \mathrm{Var}_{t_1}\left[ X_{t_2}\right] &= \varphi^2(t_2)\int_{t_1}^{t_2}\frac{\nu^2(u)}{\varphi^2(u)}\,\mathrm{d} u \\[5pt] &= \beta_T^2(t_2)\left(\gamma_T(t_2) - \gamma_T(t_1)\right), \nonumber\end{align}

with $\varphi(t) = \exp\left\{-\int_{0}^{t}\theta(u)\,\mathrm{d} u\right\}$ . Hence, after taking $x\downarrow\textsf{b}(t)$ in (56) (or by directly expressing (52) in terms of the original OSP, as we did to obtain (56) from (51)), we get the free-boundary equation

\begin{align*} \textsf{b}_{T, z}(t) &= z - \textsf{E}_{t, \textsf{b}_{T, z}(t)}\left[ \int_0^{T-t} (\textsf{L} V_{T, z})(t + u, X_{t + u})\,\mathrm{d} u\right] \\[5pt] &= z - \textsf{E}_{t, \textsf{b}_{T, z}(t)}\left[ \int_0^{T-t} \mu(t + u, X_{t + u})\mathbb{1}(X_{t + u} \geq \textsf{b}_{T, z}(t + u))\,\mathrm{d} u\right],\end{align*}

which is also expressible as

(62) \begin{align} \textsf{b}_{T, z}(t) = z - \int_t^T \textsf{K}(t, \textsf{b}_{T, z}(t), u, \textsf{b}_{T, z}(u))\,\mathrm{d} u.\end{align}

The uniqueness of the solution of the Volterra-type integral equation (62) follows from that of (48).
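As a sanity check on the conditional moments (60)–(61), one can instantiate them for a Brownian bridge ending at $z$ (take $\theta(t) = 1/(T-t)$, $\kappa \equiv z$, $\nu \equiv 1$, so that $\varphi(t) = (T-t)/T$; an illustrative parametrization of our choosing) and compare the integral expressions with the classical closed-form bridge moments by numerical quadrature:

```python
import math

# Illustrative Brownian-bridge instance: theta(t) = 1/(T - t), kappa = z, nu = 1,
# so that varphi(t) = exp(-int_0^t theta(u) du) = (T - t)/T.
T, z = 1.0, 0.5

def varphi(t):
    return (T - t) / T

def moments_quadrature(t1, x1, t2, n=20_000):
    # Midpoint-rule evaluation of the integral expressions for the mean and variance
    h = (t2 - t1) / n
    mids = [t1 + (i + 0.5) * h for i in range(n)]
    mean_int = sum(z / ((T - u) * varphi(u)) for u in mids) * h  # kappa*theta/varphi
    var_int = sum(1.0 / varphi(u) ** 2 for u in mids) * h        # nu^2/varphi^2
    return varphi(t2) * (x1 / varphi(t1) + mean_int), varphi(t2) ** 2 * var_int

def moments_closed(t1, x1, t2):
    # Classical conditional mean and variance of a Brownian bridge pinned at (T, z)
    w = (t2 - t1) / (T - t1)
    return x1 + w * (z - x1), (t2 - t1) * (T - t2) / (T - t1)

print(moments_quadrature(0.2, -0.3, 0.7))
print(moments_closed(0.2, -0.3, 0.7))
```

Both pairs of values coincide up to quadrature error, reflecting that the degenerating time-dependent OU drift reproduces the bridge moments.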

Remark 7. We highlight some smoothness properties that the value function V and the OSB $\textsf{b}$ inherit from W and b, based on the equivalences (18) and (53).

From the Lipschitz continuity of W on compact sets of $\mathbb{R}_+\times\mathbb{R}$ (see Proposition 3) we obtain that of V on compact sets of $[0, T)\times\mathbb{R}$ . Higher smoothness of V is also attained away from the boundary $\{(t, \textsf{b}(t)) \;:\; t\in[0, T)\}$ , as follows from Proposition 4. The continuous differentiability of W obtained in Proposition 8 implies that of V.

The OSB $\textsf{b}$ is Lipschitz continuous on any closed subinterval of [0, T), which is a consequence of Proposition 7.

6. Numerical results

In this section we shed light on the OSB’s shape by using a Picard iteration algorithm to solve the free-boundary equation (62). This approach is commonly used in the optimal stopping literature; see, e.g., the works of [Reference De Angelis and Milazzo26, Reference Detemple and Kitapbayev29].

A Picard iteration scheme approaches (62) as a fixed-point problem. Starting from an initial candidate boundary, it produces a sequence of functions by iteratively computing the integral operator on the right-hand side of (62) until the error between consecutive boundaries falls below a prescribed threshold. More precisely, for a partition $0 = t_0 < t_1 < \cdots < t_N = T$ of [0, T], $N\in\mathbb{N}$ , the updating mechanism that generates subsequent boundaries follows after discretizing the integral in (62) with a right Riemann sum:

(63) \begin{align} \textsf{b}_i^{(k)} &= z - \sum_{j = i}^{N - 2} \textsf{K}\left(t_i, \textsf{b}_i^{(k - 1)}, t_{j + 1}, \textsf{b}_{j + 1}^{(k - 1)}\right)(t_{j + 1} - t_j), \quad i = 0, 1, \dots, N-2, \end{align}
(64) \begin{align} \textsf{b}_{N-1}^{(k)} &= \textsf{b}_N^{(k)} = z, \end{align}

for $k = 1, 2, \dots$ and with $\textsf{b}_i^{(k)}$ standing for the value of the boundary at $t_i$ output after the kth iteration. We neglect the $(N-1)$ th addend of the sum and instead consider (64), since $\textsf{K}(t, x, T, z)$ is not well defined. As the integral in (62) is finite, this last piece vanishes as $t_{N - 1}$ approaches T. Given that $\textsf{b}(T) = z$ , we set the initial constant boundary $\textsf{b}_i^{(0)} = z$ for all $i = 0, \dots, N$ . We stop the fixed-point algorithm when the relative (squared) $L_2$ -distance between consecutive discretized boundaries, defined as

\begin{align*} d_k \;:\!=\; \frac{\sum_{i=1}^N \left(\textsf{b}_i^{(k)} - \textsf{b}_i^{(k-1)}\right)^2(t_i - t_{i-1})}{\sum_{i=1}^N \left(\textsf{b}_i^{(k)}\right)^2(t_i - t_{i-1})},\end{align*}

is lower than $10^{-3}$ .
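To make the scheme concrete, the sketch below implements the updates (63)–(64) and the stopping rule $d_k < 10^{-3}$ for the illustrative case of a Brownian bridge ending at $z$, for which the kernel $\textsf{K}$ is available in closed form; the function names (`kernel_bb`, `picard_boundary`) and parameter choices are ours, not the paper's:

```python
import math

def Phi_bar(t):
    # standard normal survival function
    return 0.5 * math.erfc(t / math.sqrt(2))

def phi(t):
    # standard normal density
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def kernel_bb(t1, x1, t2, x2, T, z):
    # Kernel for a Brownian bridge ending at z: mu(t, x) = (z - x)/(T - t),
    # so K = E_{t1,x1}[mu(t2, X_{t2}) 1(X_{t2} >= x2)] with Gaussian X_{t2}
    m = x1 + (z - x1) * (t2 - t1) / (T - t1)         # conditional mean
    sd = math.sqrt((t2 - t1) * (T - t2) / (T - t1))  # conditional std dev
    d = (x2 - m) / sd
    return ((z - m) * Phi_bar(d) - sd * phi(d)) / (T - t2)

def picard_boundary(T=1.0, z=0.0, N=300, tol=1e-3, max_iter=50):
    # Picard iteration (63)-(64) on a uniform partition, started at b^(0) = z
    t = [T * i / N for i in range(N + 1)]
    b = [z] * (N + 1)
    for _ in range(max_iter):
        b_new = list(b)
        for i in range(N - 1):                # update (63); b_{N-1} = b_N = z fixed
            s = sum(kernel_bb(t[i], b[i], t[j + 1], b[j + 1], T, z) * (t[j + 1] - t[j])
                    for j in range(i, N - 1))
            b_new[i] = z - s
        # relative squared L2-distance d_k between consecutive boundaries
        num = sum((b_new[i] - b[i]) ** 2 * (t[i] - t[i - 1]) for i in range(1, N + 1))
        den = sum(b_new[i] ** 2 * (t[i] - t[i - 1]) for i in range(1, N + 1))
        b = b_new
        if den > 0 and num / den < tol:
            break
    return t, b

t, b = picard_boundary()
print(b[0], b[len(b) // 2])
```

For $z = 0$ and $T = 1$ the computed boundary can be compared with the known square-root boundary of the Brownian bridge, approximately $0.8399\sqrt{T - t}$, which is the comparison carried out in Figure 1.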

We show empirical evidence of the convergence of this Picard iteration scheme in Figures 1 and 2. For each computed OSB, we provide smaller images at the bottom with the (logarithmically scaled) errors $d_k$ , which tend to decrease at a steep pace, making the algorithm converge ( $d_k < 10^{-3}$ ) after a few iterations.

Figure 1. The picture shows a comparison between the exact OSB of a BB and its numerical computation, which is obtained by setting $\widetilde{\theta}\equiv 0$ and taking a constant volatility $\widetilde{\nu}$ in the OU representation (65). For the images on top, the solid colored lines represent the computed OSBs for the different choices of the volatility coefficient $\widetilde{\nu}$ (image (a)), the partition size N (image (b)), and the type of partition considered (image (c)). The black dashed, dotted, and dashed-dotted lines represent the OSB of a BB associated with the different values of $\widetilde{\nu}$ . Specifications are shown in the legend and caption of each image. Image (c) includes a subplot that shows, as a function of the partition size N (x-axis), the evolution of the relative $L_2$ error between the various computed boundaries and the true one (y-axis). The smaller images at the bottom show the log-errors $\log_{10}(d_k)$ between consecutive boundaries for each iteration $k = 1, 2,\ldots$ of the Picard algorithm.

Figure 2. The first row of three plots shows $1/\widetilde{\theta}$ (continuous line) versus $1/\theta$ (dashed line) for the different choices of the slope $\widetilde{\theta}$ (image (a)), the mean-reverting level $\widetilde{\kappa}$ (image (b)), and the volatility $\widetilde{\nu}$ (image (c)). Specifications of the functions are given in the legend and caption of each image. The second row does the same for $\widetilde{\kappa}$ and $\kappa$ . The main plot, in the third row, shows in solid colored lines the computed OSBs. The smaller images at the bottom display the log-errors $\log_{10}(d_k)$ between consecutive boundaries for each iteration $k = 1, 2,\ldots$ of the Picard algorithm.

We perform all boundary computations by relying on the SDE representation of the kernel $\textsf{K}$ defined at (58), (60), and (61), since we adopted the viewpoint of a GMB derived from conditioning a time-dependent OU process to degenerate at the horizon. The relation between the ‘parent’ OU process and the resulting OUB is neatly stated in [Reference Buonocore, Caputo, Nobile and Pirozzi11, Section 3], although we include here a modified version that fits our notation better. That is, if $\widetilde{X} = \{\widetilde{X}_t\}_{t\in[0, T]}$ solves the SDE

(65) \begin{align} \mathrm{d} \widetilde{X}_t = \widetilde{\theta}(t)(\widetilde{\kappa}(t) - \widetilde{X}_t)\,\mathrm{d} t + \widetilde{\nu}(t)\,\mathrm{d} B_t,\quad t\in[0, T],\end{align}

then the corresponding GMB is an OUB that solves the SDE

(66) \begin{align} \mathrm{d} X_t = \theta(t)(\kappa(t) - X_t)\,\mathrm{d} t + \nu(t)\,\mathrm{d} B_t,\quad t\in(0, T),\end{align}

with

(67) \begin{align} \left\{ \begin{aligned} \theta(t) &= \widetilde{\theta}(t) + \frac{\widetilde{\nu}^2(t)}{\widetilde{\varphi}^2(t)\int_t^T \widetilde{\nu}^2(u)/\widetilde{\varphi}^2(u)\,\mathrm{d} u}, \\[5pt] \kappa(t) &= \widetilde{\kappa}(t) + \frac{\widetilde{\nu}^2(t)}{\theta(t)}\frac{x - \widetilde{\kappa}(t)\widetilde{\varphi}(T)/\widetilde{\varphi}(t) - \widetilde{\varphi}(T)\int_t^T \widetilde{\kappa}(u)\widetilde{\theta}(u)/\widetilde{\varphi}(u)\,\mathrm{d} u}{\widetilde{\varphi}(t)\widetilde{\varphi}(T) \int_t^T \widetilde{\nu}^2(u)/\widetilde{\varphi}^2(u)\,\mathrm{d} u}, \\[5pt] \nu(t) &= \widetilde{\nu}(t), \end{aligned} \right.\end{align}

and where $\widetilde{\varphi}(t) = \exp\{-\int_0^t\widetilde{\theta}(u)\,\mathrm{d} u\}$ . We choose the representations (65) and (66) for GM processes and GMBs over those given in Lemma 1 and in (iii) from Proposition 1 because the former have a more intuitive meaning. Indeed, recall that $\theta$ ( $\widetilde{\theta}$ ) indicates the strength with which the underlying process is pulled towards the mean-reverting level $\kappa$ ( $\widetilde{\kappa}$ ), while $\nu$ ( $\widetilde{\nu}$ ) regulates the intensity of the white noise.
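The coefficient transformation (67) is straightforward to evaluate numerically. The sketch below (function and variable names are ours) approximates all integrals with the trapezoidal rule; it takes the denominator integrals as $\int_t^T \widetilde{\nu}^2(u)/\widetilde{\varphi}^2(u)\,\mathrm{d} u$ and includes a $-\widetilde{\kappa}(t)\widetilde{\varphi}(T)/\widetilde{\varphi}(t)$ term in the numerator of $\kappa$, the conventions under which, for a time-homogeneous OU process, $\theta(t)$ reduces to the classical $\widetilde{\theta}\coth(\widetilde{\theta}(T-t))$.

```python
import numpy as np

def bridge_coefficients(theta_t, kappa_t, nu_t, T, x_pin, n=2000):
    """Sketch of the OU -> OUB coefficient transformation (67).

    theta_t, kappa_t, nu_t are vectorized callables giving the 'parent'
    OU coefficients of (65); x_pin is the pinning point (x in (67)).
    Integrals use the trapezoidal rule on an (n+1)-point grid; theta and
    kappa are returned as functions of the grid index.
    """
    u = np.linspace(0.0, T, n + 1)
    # phi(t) = exp(-int_0^t theta(s) ds), via a cumulative trapezoidal rule
    inc_theta = 0.5 * (theta_t(u[1:]) + theta_t(u[:-1])) * np.diff(u)
    phi = np.exp(-np.concatenate(([0.0], np.cumsum(inc_theta))))

    def tail_integral(f):
        # int_{u[i]}^T f du for each grid index i (zero at i = n)
        inc = 0.5 * (f[1:] + f[:-1]) * np.diff(u)
        return np.concatenate((np.cumsum(inc[::-1])[::-1], [0.0]))

    I_nu = tail_integral(nu_t(u) ** 2 / phi ** 2)
    I_kt = tail_integral(kappa_t(u) * theta_t(u) / phi)

    def theta(i):
        return theta_t(u[i]) + nu_t(u[i]) ** 2 / (phi[i] ** 2 * I_nu[i])

    def kappa(i):
        num = x_pin - kappa_t(u[i]) * phi[-1] / phi[i] - phi[-1] * I_kt[i]
        den = phi[i] * phi[-1] * I_nu[i]
        return kappa_t(u[i]) + nu_t(u[i]) ** 2 / theta(i) * num / den

    return u, theta, kappa
```

For instance, for a time-homogeneous OU process with $\widetilde{\theta}\equiv 1$ and $\widetilde{\nu}\equiv 1$ on $[0, 1]$, the computed $\theta(1/2)$ matches $\coth(1/2)\approx 2.164$.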

Figure 1 shows the numerically computed OSB when the underlying diffusion is a BB, that is, when $\widetilde{\theta}(t) = 0$ and $\widetilde{\nu}(t) = \sigma$ , for all $t\in[0, T]$ and $\sigma > 0$ . We rely on this case to empirically validate the accuracy of the Picard algorithm in Figure 1(a) by comparing it against the explicit OSB of a BB, which is known to take the form $z + K\sigma\sqrt{T - t}$ for $K\approx 0.8399$ , a result originally due to [Reference Shepp69]. Notice in Figure 1(b) how the numerical boundary approaches the true one as the time partition becomes thinner.
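The exact BB boundary and a mesh-weighted relative $L_2$ error of the kind reported in the subplot of Figure 1(c) are immediate to code; the particular weighting below mirrors the one in $d_k$ and is our choice.

```python
import numpy as np

K_SHEPP = 0.8399  # Shepp's constant, truncated to four decimals

def bb_boundary(t, T=1.0, z=0.0, sigma=1.0):
    """Exact OSB of a Brownian bridge: b(t) = z + K * sigma * sqrt(T - t)."""
    return z + K_SHEPP * sigma * np.sqrt(T - t)

def rel_l2_error(b_num, b_ref, t):
    """Relative L2 distance between two boundaries on the mesh t,
    weighted by the mesh increments (as in the d_k criterion)."""
    w = np.diff(t)
    return np.sqrt(np.sum((b_num[1:] - b_ref[1:]) ** 2 * w)
                   / np.sum(b_ref[1:] ** 2 * w))
```

At the horizon the exact boundary collapses to the pinning point, $b(T) = z$, consistent with (64).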

For all boundary computations, $T = 1$ and $N = 500$ were set unless otherwise stated. We used the logarithmically-spaced partition $t_i = \ln\left(1 + i(e - 1)/N\right)$ , since numerical tests suggested that the best performance is achieved when using a non-uniform mesh whose distances $t_i - t_{i-1}$ smoothly decrease. Figure 1(c) illustrates the effect of the mesh increments by comparing the performance of the logarithmically-spaced partition against an equally-spaced one and another that is also equally spaced until the second-to-last node, where the distance suddenly shrinks to a fourth of the regular spacing. Note how the first partition significantly outperforms the other two, with a lower overall $L_2$ -error due to its better accuracy near the horizon. Intuition might dictate that introducing the sudden shrink at the horizon could result in better performance by diminishing the error that arises when considering (64), yet Figure 1(c) indicates otherwise.
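The three meshes compared in Figure 1(c) are simple to construct (variable names are ours); note how the increments of the logarithmically-spaced partition decrease smoothly towards the horizon.

```python
import numpy as np

N = 500
i = np.arange(N + 1)
log_part = np.log(1 + i * (np.e - 1) / N)  # t_i = ln(1 + i(e - 1)/N)
uni_part = i / N                           # equally spaced
# equally spaced, except that the last increment shrinks to a fourth
# of the regular spacing 1/N
shrunk_part = uni_part.copy()
shrunk_part[-2] = 1.0 - 0.25 / N

inc = np.diff(log_part)  # strictly decreasing mesh increments
```

All three partitions start at 0 and end at $T = 1$; only the logarithmic one refines the grid gradually as the horizon is approached.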

Figure 2 shows the numerical computation of OSBs in more general cases than the BB. It shows how changing the coefficients of the process affects the OSB's shape. In the first two rows of images, we visually represent the transformation of coefficients (67). The volatility is excluded, as it remains the same after the 'bridging' of the OU process. To compare the slopes we rely on $1/\widetilde{\theta}(t)$ and $1/\theta(t)$ , as $\theta(t) \rightarrow \infty$ as $t\rightarrow T$ (see (iv) in Proposition 1), and thus its explosion would have obscured the shape of the bounded function $\widetilde{\theta}$ , had they been plotted in the same graph. In alignment with the meaning of each time-dependent coefficient, the OSB is pulled towards $\widetilde{\kappa}$ with a strength directly proportional to $\widetilde{\theta}$ . This pulling force conflicts with the much stronger one towards the pinning point of the bridge process, resulting in an attraction towards the 'bridged' mean-reverting level $\kappa$ with strength dictated by $\theta$ . We recall that modifying $\widetilde{\nu}$ , and thus $\nu$ , is equivalent to changing $\theta$ , by (iv.2). We remind the reader that the functions $\Phi$ and $\phi$ in Figure 2 stand for the distribution and the density of a standard normal random variable. The former is used to smoothly represent sudden changes of regime, while the latter introduces smooth temporal anomalies. For instance, $\widetilde{\kappa}(t) = 2\Phi(50t - 25) - 1$ rapidly changes the mean-reverting level of the underlying process from $-1$ to 1 around $t = 0.5$ , and $\widetilde{\nu}(t) = 1 + \sqrt{2\pi}\phi(100t - 25)$ introduces a brief period of increased volatility around $t = 0.25$ , before and after which the volatility remains at a constant baseline level. Periodic fluctuations of the parameters were also considered, as they typically arise in problems that account for seasonality.

Notice that from Proposition 1 it readily follows that all coefficients $\theta$ , $\kappa$ , and $\nu$ used in this section satisfy Assumptions (21a)–(21f), as they are twice continuously differentiable and satisfy the conditions (iv.1) and (iv.2), and $\theta(t) > 0$ for all $t\in[0, T)$ .

The R code in the public repository https://github.com/aguazz/OSP_GMB implements the Picard iteration algorithm (63)–(64). The repository allows for full replicability of the above numerical examples.

7. Concluding remarks

We solved the finite-horizon OSP of a GMB by proving that its OSB uniquely solves the Volterra-type integral equation (62).

In Section 2 we provided a comprehensive study of GMBs, presenting four equivalent definitions that make it easier to identify, create, and understand them from different perspectives. One of these representations allows us to bypass the challenge of working with diffusions with unbounded drifts and instead work with an equivalent infinite-horizon OSP with a BM underneath. Equations (53) explicitly relate OSTs to OSBs, while (57) and (62) give the value formula and free-boundary equation in the original OSP.

Our method for solving the alternative OSP consisted in solving the associated free-boundary problem. To do so, in Section 4 we obtained several regularity properties of the value function and the OSB, among which the local Lipschitz continuity of the OSB stands out as a remarkable property.

In Section 6, we approached the free-boundary equation as a fixed-point problem in order to numerically explore the geometry of the OSB. This provided insights about its shape for different sets of coefficients of the underlying GMB, seen as bridges derived from conditioning a time-dependent OU process to hit a pinning point at the horizon. The OSB shows an attraction toward the mean-reverting level, which fades away as time approaches the horizon, where the boundary hits the OUB’s pinning point.

In the context of gain functions beyond the identity, it is worth noting that the representation (2) can still be used to transform the initial OSP into an infinite-horizon one with a BM underneath. This prompts the question of extending the methodology in Section 4 to address more flexible gain functions. A practical starting point for this extension might be to consider a space-linear gain function, which results in simple forms for the partial derivatives (recall (26) and (27)) and keeps available the comparison method used in Proposition 6 to obtain the boundedness of the OSB. Also, the new gain function should satisfy boundedness and time-differentiability conditions analogous to Assumptions (21a)–(21f).

Acknowledgements

The authors thank the anonymous referees for their comments, which helped in improving the quality of the manuscript.

Funding information

The authors acknowledge support from the grants PID2020-116694GB-I00 (first and second authors) and PID2021-124051NB-I00 (third author), funded by MCIN/AEI/10.13039/ 501100011033 and by ERDF: A Way of Making Europe. The third author acknowledges support from the Convocatoria de la Universidad Carlos III de Madrid de Ayudas para la Recualificación del Sistema Universitario Español para 2021–2023, funded by Spain’s Ministerio de Ciencia, Innovación y Universidades.

Competing interests

The authors declare that no competing interests arose during the preparation or publication of this article.

References

Abrahams, J. and Thomas, J. (1981). Some comments on conditionally Markov and reciprocal Gaussian processes (corresp.). IEEE Trans. Inf. Theory 27, 523–525.
Andersson, P. (2012). Card counting in continuous time. J. Appl. Prob. 49, 184–198.
Angoshtari, B. and Leung, T. (2019). Optimal dynamic basis trading. Ann. Finance 15, 307–335.
Back, K. (1992). Insider trading in continuous time. Rev. Financial Studies 5, 387–409.
Barczy, M. and Kern, P. (2011). General alpha-Wiener bridges. Commun. Stoch. Anal. 5, 585–608.
Barczy, M. and Kern, P. (2013). Representations of multidimensional linear process bridges. Random Operators Stoch. Equat. 21, 159–189.
Barczy, M. and Kern, P. (2013). Sample path deviations of the Wiener and the Ornstein–Uhlenbeck process from its bridges. Brazilian J. Prob. Statist. 27, 437–466.
Borisov, I. S. (1983). On a criterion for Gaussian random processes to be Markovian. Theory Prob. Appl. 27, 863–865.
Boyce, W. M. (1970). Stopping rules for selling bonds. Bell J. Econom. Manag. Sci. 1, 27–53.
Brennan, M. J. and Schwartz, E. S. (1990). Arbitrage in stock index futures. J. Business 63, S7–S31.
Buonocore, A., Caputo, L., Nobile, A. G. and Pirozzi, E. (2013). On some time-non-homogeneous linear diffusion processes and related bridges. Sci. Math. Japon. 76, 55–77.
Campi, L. and Çetin, U. (2007). Insider trading in an equilibrium model with default: a passage from reduced-form to structural modelling. Finance Stoch. 11, 591–602.
Campi, L., Çetin, U. and Danilova, A. (2011). Dynamic Markov bridges motivated by models of insider trading. Stoch. Process. Appl. 121, 534–567.
Campi, L., Çetin, U. and Danilova, A. (2013). Equilibrium model with default and dynamic insider information. Finance Stoch. 17, 565–585.
Cartea, Á., Jaimungal, S. and Kinzebulatov, D. (2016). Algorithmic trading with learning. Internat. J. Theoret. Appl. Finance 19, article no. 1650028.
Çetin, U. and Danilova, A. (2016). Markov bridges: SDE representation. Stoch. Process. Appl. 126, 651–679.
Çetin, U. and Xing, H. (2013). Point process bridges and weak convergence of insider trading models. Electron. J. Prob. 18, 1–24.
Chaumont, L. and Bravo, G. U. (2011). Markovian bridges: weak continuity and pathwise constructions. Ann. Prob. 39, 609–647.
Chen, R. W., Grigorescu, I. and Kang, M. (2015). Optimal stopping for Shepp’s urn with risk aversion. Stochastics 87, 702–722.
Chen, X., Leung, T. and Zhou, Y. (2021). Constrained dynamic futures portfolios with stochastic basis. Ann. Finance 18, 1–33.
Chen, Y. and Georgiou, T. (2016). Stochastic bridges of linear systems. IEEE Trans. Automatic Control 61, 526–531.
D’Auria, B. and Ferriero, A. (2020). A class of Itô diffusions with known terminal value and specified optimal barrier. Mathematics 8, article no. 123.
D’Auria, B., García-Portugués, E. and Guada, A. (2020). Discounted optimal stopping of a Brownian bridge, with application to American options under pinning. Mathematics 8, article no. 1159.
Azze, A., D’Auria, B. and García-Portugués, E. (2024). Optimal stopping of an Ornstein–Uhlenbeck bridge. Stoch. Process. Appl. 172, article no. 104342.
De Angelis, T. (2015). A note on the continuity of free-boundaries in finite-horizon optimal stopping problems for one-dimensional diffusions. SIAM J. Control Optimization 53, 167–184.
De Angelis, T. and Milazzo, A. (2020). Optimal stopping for the exponential of a Brownian bridge. J. Appl. Prob. 57, 361–384.
De Angelis, T. and Peskir, G. (2020). Global $C^{1}$ regularity of the value function in optimal stopping problems. Ann. Appl. Prob. 30, 1007–1031.
De Angelis, T. and Stabile, G. (2019). On Lipschitz continuous optimal stopping boundaries. SIAM J. Control Optimization 57, 402–436.
Detemple, J. and Kitapbayev, Y. (2020). The value of green energy under regulation uncertainty. Energy Econom. 89, article no. 104807.
Dochviri, B. (1995). On optimal stopping of inhomogeneous standard Markov processes. Georgian Math. J. 2, 335–346.
Dynkin, E. B. (1963). The optimum choice of the instant for stopping a Markov process. Soviet Math. Dokl. 150, 627–629.
Ekström, E. and Vaicenavicius, J. (2020). Optimal stopping of a Brownian bridge with an unknown pinning point. Stoch. Process. Appl. 130, 806–823.
Ekström, E. and Wanntorp, H. (2009). Optimal stopping of a Brownian bridge. J. Appl. Prob. 46, 170–180.
Erickson, W. W. and Steck, D. A. (2022). Anatomy of an extreme event: What can we infer about the history of a heavy-tailed random walk? Phys. Rev. E 106, article no. 054142.
Ernst, P. A. and Shepp, L. A. (2015). Revisiting a theorem of L. A. Shepp on optimal stopping. Commun. Stoch. Anal. 9, 419–423.
Fitzsimmons, P., Pitman, J. and Yor, M. (1993). Markovian bridges: construction, palm interpretation, and splicing. In Seminar on Stochastic Processes, 1992, eds E. Çinlar, K. L. Chung, M. J. Sharpe, R. F. Bass and K. Burdzy, Birkhäuser, Boston, pp. 101–134.
Föllmer, H. (1972). Optimal stopping of constrained Brownian motion. J. Appl. Prob. 9, 557–571.
Friedman, A. (1964). Partial Differential Equations of Parabolic Type. Prentice-Hall, Englewood Cliffs.
Friedman, A. (1975). Parabolic variational inequalities in one space dimension and smoothness of the free boundary. J. Funct. Anal. 18, 151–176.
Friedman, A. (1975). Stopping time problems and the shape of the domain of continuation. In Control Theory, Numerical Methods and Computer Systems Modelling, Springer, Berlin, Heidelberg, pp. 559–566.
Gasbarra, D., Sottinen, T. and Valkeila, E. (2007). Gaussian bridges. In Stochastic Analysis and Applications, eds F. E. Benth, G. Di Nunno, T. Lindstrøm, B. Øksendal and T. Zhang, Springer, Berlin, pp. 361–382.
Glover, K. (2020). Optimally stopping a Brownian bridge with an unknown pinning time: a Bayesian approach. Stoch. Process. Appl. 150, 919–937.
Golez, B. and Jackwerth, J. C. (2012). Pinning in the S&P 500 futures. J. Financial Econom. 106, 566–585.
Hildebrandt, F. and Rœlly, S. (2020). Pinned diffusions and Markov bridges. J. Theoret. Prob. 33, 906–917.
Hilliard, J. E. and Hilliard, J. (2015). Pricing American options when there is short-lived arbitrage. Internat. J. Financial Markets Derivatives 4, 43–53.
Horne, J. S., Garton, E. O., Krone, S. M. and Lewis, J. S. (2007). Analyzing animal movements using Brownian bridges. Ecology 88, 2354–2363.
Hoyle, E., Hughston, L. P. and Macrina, A. (2011). Lévy random bridges and the modelling of financial information. Stoch. Process. Appl. 121, 856–884.
Jacka, S. and Lynn, R. (1992). Finite-horizon optimal stopping, obstacle problems and the shape of the continuation region. Stoch. Stoch. Rep. 39, 25–42.
Kranstauber, B. (2019). Modelling animal movement as Brownian bridges with covariates. Movement Ecol. 7, article no. 22.
Krishnan, H. and Nelken, I. (2001). The effect of stock pinning upon option prices. Risk (December 2001), 17–20.
Krumm, J. (2021). Brownian bridge interpolation for human mobility? In Proceedings of the 29th International Conference on Advances in Geographic Information Systems (SIGSPATIAL ’21), Association for Computing Machinery, New York, pp. 175–183.
Krylov, N. V. (1980). Controlled Diffusion Processes. Springer, New York.
Kyle, A. S. (1985). Continuous auctions and insider trading. Econometrica 53, 1315–1335.
Leung, T., Li, J. and Li, X. (2018). Optimal timing to trade along a randomized Brownian bridge. Internat. J. Financial Studies 6, article no. 75.
Liu, J. and Longstaff, F. A. (2004). Losing money on arbitrage: optimal dynamic portfolio choice in markets with arbitrage opportunities. Rev. Financial Studies 17, 611–641.
Mehr, C. B. and McFadden, J. A. (1965). Certain properties of Gaussian processes and their first-passage times. J. R. Statist. Soc. B [Statist. Methodology] 27, 505–522.
Ni, S. X., Pearson, N. D. and Poteshman, A. M. (2005). Stock price clustering on option expiration dates. J. Financial Econom. 78, 49–87.
Ni, S. X., Pearson, N. D., Poteshman, A. M. and White, J. (2021). Does option trading have a pervasive impact on underlying stock prices? Rev. Financial Studies 34, 1952–1986.
Oshima, Y. (2006). On an optimal stopping problem of time inhomogeneous diffusion processes. SIAM J. Control Optimization 45, 565–579.
Pedersen, J. L. and Peskir, G. (2002). On nonlinear integral equations arising in problems of optimal stopping. In Functional Analysis VII: Proceedings of the Postgraduate School and Conference held in Dubrovnik, September 17–26, 2001, eds D. Bakić, P. Pandžić and G. Peskir, University of Aarhus, Department of Mathematical Sciences, pp. 159–175.
Peng, S. and Zhu, X. (2006). Necessary and sufficient condition for comparison theorem of 1-dimensional stochastic differential equations. Stoch. Process. Appl. 116, 370–380.
Peskir, G. (2005). A change-of-variable formula with local time on curves. J. Theoret. Prob. 18, 499–535.
Peskir, G. (2005). On the American option problem. Math. Finance 15, 169–181.
Peskir, G. (2019). Continuity of the optimal stopping boundary for two-dimensional diffusions. Ann. Appl. Prob. 29, 505–530.
Peskir, G. and Shiryaev, A. (2006). Optimal Stopping and Free-Boundary Problems. Birkhäuser, Basel.
Pitman, J. and Yor, M. (1982). A decomposition of Bessel bridges. Z. Wahrscheinlichkeitsth. 59, 425–457.
Rosén, B. (1965). Limit theorems for sampling from finite populations. Ark. Mat. 5, 383–424.
Salminen, P. (1984). Brownian excursions revisited. In Seminar on Stochastic Processes, 1983, eds E. Çinlar, K. L. Chung and R. K. Getoor, Birkhäuser, Boston, pp. 161–187.
Shepp, L. A. (1969). Explicit solutions to some problems of optimal stopping. Ann. Math. Statist. 40, 993–1010.
Shiryaev, A. (2008). Optimal Stopping Rules. Springer, Berlin, Heidelberg.
Sottinen, T. and Yazigi, A. (2014). Generalized Gaussian bridges. Stoch. Process. Appl. 124, 3084–3105.
Taylor, H. M. (1968). Optimal stopping in a Markov process. Ann. Math. Statist. 39, 1333–1344.
Venek, V., Brunauer, R. and Schneider, C. (2016). Evaluating the Brownian bridge movement model to determine regularities of people’s movements. J. Geograph. Inf. Sci. 4, 20–35.
Wald, A. (1947). Sequential Analysis. John Wiley, New York.
Williams, C. K. and Rasmussen, C. E. (2006). Gaussian Processes for Machine Learning. MIT Press.
Yang, Y. (2014). Refined solutions of time inhomogeneous optimal stopping problem and zero-sum game via Dirichlet form. Prob. Math. Statist. 34, 253–271.