
Constrained optimal stopping under a regime-switching model

Published online by Cambridge University Press:  27 March 2024

Takuji Arai*
Affiliation:
Keio University
Masahiko Takenaka*
Affiliation:
Keio University
*Postal address: Department of Economics, Keio University, 2-15-45 Mita, Minato-ku, Tokyo, 108-8345, Japan.
Rights & Permissions [Opens in a new window]

Abstract

We investigate an optimal stopping problem for the expected value of a discounted payoff on a regime-switching geometric Brownian motion under two constraints on the possible stopping times: stopping is allowed only at exogenous random times, and only during a specific regime. The main objectives are to show that an optimal stopping time exists and is of threshold type, and to derive expressions for the value functions and the optimal threshold. To this end, we solve the corresponding variational inequality and show that its solution coincides with the value functions. Some numerical results are also presented. Furthermore, we investigate some asymptotic behaviors.

Type
Original Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

In the real options literature, the following type of optimal stopping problem appears frequently:

(1) \begin{equation} \sup_{\tau\in{\mathcal T}}{\mathbb E}[{\textrm{e}}^{-r\tau}\pi(X_\tau)\mid X_0=x],\end{equation}

where $r>0$ is the exogenous discount rate, $X=\{X_t\}_{t\geq0}$ is a stochastic process, which we call the cash-flow process, ${\mathcal T}$ is the set of all stopping times that investors can choose, and $\pi$ is an ${\mathbb R}$-valued function, which we call the payoff function. We can regard (1) as a function of x, which we call the value function. The problem in (1) concerns the optimal timing of an investment whose payoff is given by the random variable $\pi(X_t)$ when executed at time t. The most typical example of $\pi$ is

(2) \begin{equation} \pi(x) = {\mathbb E}\bigg[\int_t^\infty {\textrm{e}}^{-r(s-t)}X_s\,{\textrm{d}} s - I \mid X_t=x\bigg],\end{equation}

which expresses the value of an investment that starts at time t with an initial cost $I>0$ and yields the investor a perpetual instantaneous return $X_s$ at each time $s>t$. Note that the right-hand side of (2) becomes a function of x when the process X has the strong Markov property, as is the case for a geometric Brownian motion. The main concern of (1) is to show that an optimal stopping time $\tau^*\in{\mathcal T}$ exists and can be expressed as $\tau^*=\inf\{t>0\mid X_t\geq x^*\}$ for some $x^*\in{\mathbb R}$. This type of optimal stopping time is said to be of threshold type, and $x^*$ is called its optimal threshold. It is important to examine whether an optimal stopping time is of threshold type: if so, the optimal strategy becomes apparent, and the optimal stopping time can be described explicitly. This framework of optimal stopping problems was discussed in [Reference McDonald and Siegel17]; see also [Reference Dixit and Pindyck6, Chapter 5]. Here we discuss (1) when X is a regime-switching geometric Brownian motion, under two constraints on ${\mathcal T}$, with the payoff function $\pi$ given by $\pi(x)=\alpha(x-K)^+-I$ for some $\alpha>0$, $K\geq0$, and $I\geq0$.

Regime-switching models, widely studied in mathematical finance (see [Reference Bollen2, Reference Buffington and Elliott3, Reference Buffington and Elliott4, Reference Elliott, Chan and Siu9, Reference Guo10, Reference Guo and Zhang11] and so forth), are models in which the regime, representing, e.g., the economy’s general state, changes randomly. In this paper, we consider a regime-switching model with two regimes, $\{0,1\}$ . Let $\theta=\{\theta_t\}_{t\geq0}$ be a stochastic process expressing the regime at time t. In particular, $\theta$ is a $\{0,1\}$ -valued continuous-time Markov chain. Then the cash-flow process X is given by the solution to the following stochastic differential equation (SDE):

(3) \begin{equation} {\textrm{d}} X_t = X_t(\mu_{\theta_t}\,{\textrm{d}} t + \sigma_{\theta_t}\,{\textrm{d}} W_t), \quad X_0>0,\end{equation}

where $\mu_i\in{\mathbb R}$ and $\sigma_i>0$ for $i=0,1$, and $W=\{W_t\}_{t\geq0}$ is a one-dimensional standard Brownian motion independent of $\theta$. Since we consider a regime-switching model, we need to define a value function for each initial regime; that is, for each $i=0,1$, we define the value function $v_i$ as

(4) \begin{equation} v_i(x) \,:\!=\, \sup_{\tau\in{\mathcal T}}{\mathbb E}[{\textrm{e}}^{-r\tau}\pi(X_\tau)\mid\theta_0=i,X_0=x].\end{equation}

Furthermore, we impose two constraints on ${\mathcal T}$. Owing to liquidity risk and other considerations, investment is not always possible, so it is important to analyze models with constraints on investment opportunities and timing; in this paper we impose two such constraints simultaneously. The first is the random arrival of investment opportunities: stopping is allowed only at exogenous random times given by the jump times of a Poisson process independent of W and $\theta$. The second is the regime constraint: stopping is feasible only during regime 1.

We now review some related work. The problem in (1) was discussed in [Reference Bensoussan, Yan and Yin1] for the same cash-flow process X as defined in (3), without restrictions on stopping; that paper treated the case where $\pi$ is given as (2) and showed, by an argument based on partial differential equation (PDE) techniques, that an optimal stopping time exists and is of threshold type. The same problem was discussed in [Reference Nishihara19] for a two-state regime-switching model with $\pi(x)=x-I$ under the regime constraint, but with a cash-flow process that is still a geometric Brownian motion without regime switching; note that [Reference Nishihara19] assumed that an optimal stopping time of threshold type exists. In addition, [Reference Egami and Kevkhishvili8] studied the same problem for the case where X is a regime-switching diffusion process, but without restrictions on stopping. On the other hand, the restriction of stopping to exogenous random times was considered in [Reference Dupuis and Wang7] in the case where the cash-flow process is a geometric Brownian motion and the payoff function is of American call option type, i.e. $\pi(x)=(x-K)^+$, but regime-switching models were not treated there. In [Reference Dupuis and Wang7], a variational inequality (VI) was first derived through a heuristic discussion; the VI was then solved, and it was shown by a probabilistic argument that its solution coincides with the value function. There are many other works dealing with this issue, such as [Reference Hobson12, Reference Hobson and Zeng13, Reference Lange, Ralph and Støre15, Reference Lempa16, Reference Menaldi and Robin18].

To the best of our knowledge, this paper is the first study that deals with the constrained optimal stopping problem on a regime-switching geometric Brownian motion. It is also new to simultaneously impose the random arrival of investment opportunities and the regime constraint. We note that the discussion in this paper is based on the approach in [Reference Dupuis and Wang7].

The paper is organized as follows. Some mathematical preparations and the formulation of our optimal stopping problem are given in Section 2. Section 3 introduces the corresponding VI and solves a modified version of it in which two of the boundary conditions are replaced. We derive explicit expressions for the solution to the modified VI; these involve solutions to quartic equations, but they can easily be computed numerically. In Section 4, assuming that the two boundary conditions replaced in Section 3 are satisfied, we prove that the solution to the VI gives the value functions and the optimal threshold for our optimal stopping problem; we also present some numerical results. Section 5 is devoted to illustrating some results on asymptotic behavior, and Section 6 concludes the paper.

2. Preliminaries and problem formulation

We consider a regime-switching model with state space $\{0,1\}$ and suppose that the regime process $\theta$ is a $\{0,1\}$ -valued continuous-time Markov chain with generator

\begin{equation*}\left(\begin{array}{c@{\quad}c}-\lambda_0 & \lambda_0 \\[5pt]\lambda_1 & -\lambda_1\end{array}\right),\end{equation*}

where $\lambda_0,\lambda_1>0$. We use the convention $\theta_\infty\equiv1$ to simplify the definition of ${\mathcal T}$, which is given later. Note that the length of each sojourn in regime i follows an exponential distribution with parameter $\lambda_i$. We take the process X defined in (3) as the cash-flow process, and assume throughout this paper that

(5) \begin{equation} r>\mu_0\vee\mu_1.\end{equation}

As mentioned in [Reference Bensoussan, Yan and Yin1], if (5) is violated, the value function may become arbitrarily large by choosing sufficiently large stopping times. Let $J=\{J_t\}_{t\geq0}$ be a Poisson process with intensity $\eta>0$ independent of W and $\theta$, and denote by $T_k$ its kth jump time for $k\in{\mathbb N}$, with the conventions $T_0\equiv0$ and $T_\infty\equiv\infty$, where ${\mathbb N}\,:\!=\,\{1,2,\dots\}$. The process J generates the exogenous random times at which an investment opportunity arrives; in other words, for $k\in{\mathbb N}$, $T_k$ represents the kth investment opportunity time. Suppose that $\theta$, W, and J are defined on a complete probability space $(\Omega,{\mathcal F},{\mathbb P})$. In addition, we denote by ${\mathbb F}=\{{\mathcal F}_t\}_{t\geq0}$ the filtration generated by $\theta$, W, and J, and assume that ${\mathbb F}$ satisfies the usual conditions. Furthermore, we restrict stopping to times at which the regime is 1. Thus, the set of all possible stopping times is described by

\[ {\mathcal T} \,:\!=\, \{\tau\in{\mathcal T}_0 \mid \textrm{ for each }\omega\in\Omega,\ \theta_{\tau(\omega)}(\omega)=1 \textrm{ and }\tau(\omega)=T_j(\omega)\textrm{ for some }j\in{\mathbb N}_\infty\},\]

where ${\mathcal T}_0$ is the set of all $[0,\infty]$ -valued stopping times and ${\mathbb N}_\infty\,:\!=\,{\mathbb N}\cup\{\infty\}$ . Next we formulate the payoff function $\pi$ as follows:

(6) \begin{equation} \pi(x)=\alpha(x-K)^+-I\end{equation}

for some $\alpha>0$ , $K\geq0$ , and $I\geq0$ , but we exclude the case where $K=I=0$ since the optimal threshold $x^*$ is 0, as seen in Remark 1. This formulation includes $\pi(x)=(x-K)^+$ as treated in [Reference Dupuis and Wang7], and $\pi(x)=x-I$ as in [Reference Nishihara19]. Moreover, (6) covers the payoff function introduced in (2). In fact, [Reference Bensoussan, Yan and Yin1] showed that

\[ {\mathbb E}\bigg[\int_0^\infty{\textrm{e}}^{-rt}X_t\,{\textrm{d}} t\mid\theta_0=i,X_0=x\bigg] = \frac{(r-\mu_{1-i}+\lambda_i+\lambda_{1-i})x}{(r-\mu_{1-i})(r-\mu_i)+\lambda_i(r-\mu_{1-i})+\lambda_{1-i}(r-\mu_i)}.\]
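As a quick numerical sanity check of this expression (an editorial illustration only; the function name and parameter values below are our own), the following Python snippet evaluates the right-hand side and confirms that it reduces to the classical perpetual value $x/(r-\mu_i)$ when both switching intensities vanish.

```python
def perpetual_value(x, i, r, mu, lam):
    """Right-hand side of the displayed formula for
    E[ int_0^infty e^{-rt} X_t dt | theta_0 = i, X_0 = x ]."""
    j = 1 - i
    num = (r - mu[j] + lam[i] + lam[j]) * x
    den = (r - mu[j]) * (r - mu[i]) + lam[i] * (r - mu[j]) + lam[j] * (r - mu[i])
    return num / den

r, mu = 0.1, [0.02, 0.05]
print(perpetual_value(1.0, 1, r, mu, lam=[2.0, 1.0]))    # with regime switching
print(perpetual_value(1.0, 1, r, mu, lam=[0.0, 0.0]),    # without switching: equals
      1.0 / (r - mu[1]))                                  # x/(r - mu_1) = 20
```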

In the setting described above, we define the value functions $v_i$ , $i=0,1$ as follows:

(7) \begin{equation} \left\{ \begin{array}{l} v_1(x)\,:\!=\,\displaystyle\sup_{\tau\in{\mathcal T}}{\mathbb E}^{1,x}[{\textrm{e}}^{-r\tau}\pi(X_\tau)], \\[5pt] v_0(x)\,:\!=\,{\mathbb E}^{0,x}[{\textrm{e}}^{-r\xi_0}v_1(X_{\xi_0})] \end{array}\right.\end{equation}

for $x>0$, where $\xi_0\,:\!=\,\inf\{t>0\mid\theta_t=1\}$ and ${\mathbb E}^{i,x}$ denotes the expectation under the initial condition $\theta_0=i$ and $X_0=x$. Strictly speaking, in view of (4) we should define $v_0$ as $v_0(x)\,:\!=\,\sup_{\tau\in{\mathcal T}}{\mathbb E}^{0,x}[{\textrm{e}}^{-r\tau}\pi(X_\tau)]$, but the definition in (7) is justified by the following:

\begin{align*} \sup_{\tau\in{\mathcal T}}{\mathbb E}^{0,x}[{\textrm{e}}^{-r\tau}\pi(X_\tau)] & = {\mathbb E}^{0,x}\bigg[{\textrm{e}}^{-r\xi_0}\sup_{\tau^\prime\in{\mathcal T}^\prime} {\mathbb E}[{\textrm{e}}^{-r\tau^\prime}\pi(X^\prime_{\tau^\prime})\mid\theta^\prime_0=1,X^\prime_0=X_{\xi_0}]\bigg]\\& = {\mathbb E}^{0,x}[{\textrm{e}}^{-r\xi_0}v_1(X_{\xi_0})],\end{align*}

where $\theta^\prime$ and $X^\prime$ are independent copies of $\theta$ and X, respectively, and ${\mathcal T}^\prime$ is the set of all possible stopping times defined based on $\theta^\prime$ and $X^\prime$ . We discuss the optimal stopping problem (7) in the following sections.
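Before proceeding, it may help to see the problem in simulation form. The following Python sketch (purely illustrative and not part of the analysis; all parameter values and function names are our own choices) simulates the regime-switching cash-flow process together with the Poisson opportunity times, and estimates by Monte Carlo the value ${\mathbb E}^{1,x}[{\textrm{e}}^{-r\tau}\pi(X_\tau)]$ of the particular policy that stops at the first opportunity time at which the regime is 1 and X exceeds a given threshold; this gives a lower bound for $v_1(x)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters; the payoff has the form (6) with alpha=1, K=0.9, I=0.1.
r, eta = 0.1, 1.0
mu, sigma, lam = {0: -0.1, 1: 0.05}, {0: 0.2, 1: 0.1}, {0: 2.0, 1: 1.0}
alpha, K, I = 1.0, 0.9, 0.1

def payoff(x):
    return alpha * max(x - K, 0.0) - I

def value_of_threshold(x0, threshold, n_paths=1000, dt=0.01, horizon=50.0):
    """Crude Monte Carlo estimate of E^{1,x0}[e^{-r tau} pi(X_tau)] for the policy:
    stop at the first Poisson opportunity time with theta = 1 and X >= threshold.
    Paths that never stop before the horizon contribute 0 (small bias for large horizon)."""
    total = 0.0
    for _ in range(n_paths):
        x, regime, t = x0, 1, 0.0
        while t < horizon:
            # an investment opportunity arrives in (t, t+dt] with probability ~ eta*dt
            if rng.random() < eta * dt and regime == 1 and x >= threshold:
                total += np.exp(-r * t) * payoff(x)
                break
            # regime switch with probability ~ lambda_{regime}*dt
            if rng.random() < lam[regime] * dt:
                regime = 1 - regime
            # log-normal one-step update, treating the regime as frozen over the step
            x *= np.exp((mu[regime] - 0.5 * sigma[regime] ** 2) * dt
                        + sigma[regime] * np.sqrt(dt) * rng.normal())
            t += dt
    return total / n_paths

print(value_of_threshold(x0=1.0, threshold=1.3))
```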

Remark 1. When $K=I=0$, $\pi$ is given as $\pi(x)=\alpha x$; thus $v_1(x)=\alpha\sup_{\tau\in{\mathcal T}}{\mathbb E}^{1,x}[{\textrm{e}}^{-r\tau}X_\tau]$ holds. Since $\{{\textrm{e}}^{-rt}X_t\}_{t\geq0}$ is a supermartingale, ${\mathbb E}^{1,x}[{\textrm{e}}^{-r\tau_1}X_{\tau_1}]\geq{\mathbb E}^{1,x}[{\textrm{e}}^{-r\tau_2}X_{\tau_2}]$ holds for any pair $\tau_1, \tau_2$ of stopping times with $\tau_1\leq\tau_2$ by [Reference Revuz and Yor20, Theorem 3.3, Chapter II]. Thus, the optimal stopping time is the first time at which stopping is feasible, which corresponds to the case where the optimal threshold is 0.

3. Variational inequality

We discuss the variational inequality (VI) corresponding to the value functions $v_i$ , $i=0,1$ . From the same sort of argument as in [Reference Dupuis and Wang7, Section 3], the VI is given as follows.

Problem 1. Find two non-negative $C^2$ -functions $V_0, V_1\colon{\mathbb R}_+\to{\mathbb R}_+$ and a constant $x^*\geq\widetilde{K}$ satisfying

(8) \begin{equation} V_i(x)\geq0, \qquad x>0,\ i=0,1, \end{equation}
(9) \begin{equation} -rV_0(x)+{\mathcal A}_0V_0(x)+\lambda_0(V_1(x)-V_0(x))=0, \qquad x>0, \end{equation}
(10) \begin{equation} -rV_1(x)+{\mathcal A}_1V_1(x)+\lambda_1(V_0(x)-V_1(x))=0, \qquad 0<x<x^*, \end{equation}
(11) \begin{equation} -rV_1(x)+{\mathcal A}_1V_1(x)+\lambda_1(V_0(x)-V_1(x))+\eta(\pi(x)-V_1(x))=0, \qquad x>x^*, \end{equation}
(12) \begin{equation} V_1(x^*)=\pi(x^*), \end{equation}
(13) \begin{equation} V_1(x)\geq\pi(x), \qquad 0<x<x^*, \end{equation}
(14) \begin{equation} V_1(x)\leq\pi(x), \qquad x>x^*, \end{equation}

where ${\mathbb R}_+\,:\!=\,[0,\infty)$ , $\widetilde{K}\,:\!=\,K+({I}/{\alpha})$ , and ${\mathcal A}_i$ , $i=0,1$ , are the infinitesimal generators of X under regime i defined as $({\mathcal A}_if)(x) \,:\!=\, \mu_ixf^{\prime}(x) + \frac{1}{2}\sigma_i^2x^2f^{\prime\prime}(x)$ , $x>0$ , for the $C^2$ -function f.
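For instance, for a power function $f(x)=x^\beta$ with $\beta\in{\mathbb R}$ we have $({\mathcal A}_if)(x)=\big(\frac{1}{2}\sigma_i^2\beta(\beta-1)+\mu_i\beta\big)x^\beta$; this elementary computation is what gives rise to the quadratic functions $G^k_i$ introduced below.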

Remark 2. We now explain intuitively the derivation of the VI (8)–(14). First of all, the value functions $v_i$ , $i=0,1$ , are non-negative since $v_1(x)\geq\sup_{\tau\in{\mathcal T}}{\mathbb E}^{1,x}[{\textrm{e}}^{-r\tau}({-}I)]=0$ . Thus, (8) would hold. Assuming that the optimal stopping time $\tau^*$ is of threshold type with the optimal threshold $x^*$ , that is,

\[ \tau^*=\inf\{t>0\mid X_t\geq x^*, \theta_t=1, t=T_j\textrm{ for some }j\in{\mathbb N}\}, \]

we expect that the conditions (12)–(14) are satisfied. Next, suppose that $\theta_0=0$ . Since $\xi_0\sim\exp(\lambda_0)$ , we can rewrite $v_0$ as

\[ v_0(x) = {\mathbb E}^{0,x}\bigg[\int_0^\infty{\textrm{e}}^{-(r+\lambda_0)t}\lambda_0v_1(X_t)\,{\textrm{d}} t\bigg]. \]

Thus, (9) is derived from [Reference Karatzas and Shreve14, Proposition 5.7.2]. To see (10) and (11), we define

(15) \begin{equation} \overline{V}(x) \,:\!=\, \pi(x) \vee V_1(x) = \left\{ \begin{array}{l@{\quad}l} \pi(x), & x\geq x^*, \\[4pt] V_1(x), \quad & x<x^*. \end{array} \right. \end{equation}

We can then unify (10) and (11) into

(16) \begin{equation} -rV_1(x)+{\mathcal A}_1 V_1(x)+\lambda_1(V_0(x)-V_1(x))+\eta(\overline{V}(x)-V_1(x))=0, \ \ \ x>0. \end{equation}

The function $\overline{V}$ corresponds to $\overline{v}(x)\,:\!=\,v_1(x)\vee\pi(x)$, which is the value function of the optimal stopping problem with 0 added to the set of possible stopping times, as discussed in [Reference Dupuis and Wang7]; that is, $\overline{v}$ is described as

\[ \overline{v}(x) = \sup_{\tau\in{\mathcal T}\cup\{0\}}{\mathbb E}^{1,x}[{\textrm{e}}^{-r\tau}\pi(X_\tau)]. \]

Now, assume $\theta_0=1$ and define $\xi_1\,:\!=\,\inf\{t>0\mid\theta_t=0\}$ . Since $\xi_1\sim\exp(\lambda_1)$ and $T_1\sim\exp(\eta)$ , we can rewrite $v_1$ as

\begin{align*} v_1(x) & = {\mathbb E}^{1,x}\big[{\textrm{e}}^{-r(\xi_1\wedge T_1)} \big(v_0(X_{\xi_1}){\bf 1}_{\{\xi_1<T_1\}}+\overline{v}(X_{T_1}){\bf 1}_{\{\xi_1>T_1\}}\big)\big] \\ & = {\mathbb E}^{1,x}\bigg[\int_0^\infty\int_0^\infty{\textrm{e}}^{-r(t_1\wedge t_2)}\big(v_0(X_{t_1}){\bf 1}_{\{t_1<t_2\}} + \overline{v}(X_{t_2}){\bf 1}_{\{t_1>t_2\}}\big)\lambda_1{\textrm{e}}^{-\lambda_1t_1}\eta{\textrm{e}}^{-\eta t_2}\,{\textrm{d}} t_1\,{\textrm{d}} t_2\bigg] \\ & = {\mathbb E}^{1,x}\bigg[\int_0^\infty {\textrm{e}}^{-(r+\lambda_1+\eta)t}(\lambda_1v_0(X_t)+\eta\overline{v}(X_t))\,{\textrm{d}} t\bigg]. \end{align*}

As a result, [Reference Karatzas and Shreve14, Proposition 5.7.2] provides (16).

This section aims to solve the following modified version of Problem 1, in which we replace the boundary conditions (13) and (14) with (17) below.

Problem 2. Find two $C^2$ -functions $V_0, V_1\colon{\mathbb R}_+\to{\mathbb R}_+$ and a constant $x^*\geq\widetilde{K}$ satisfying (8)–(12) and

(17) \begin{equation} 0<\lim_{x\to\infty}\frac{V_1(x)}{\pi(x)}<1. \end{equation}

To solve Problem 2, we need some preparations. For $i=0,1$ and $k={\textrm{L}},{\textrm{U}}$, let $G^k_i$ be the quadratic function of $\beta\in{\mathbb R}$ defined as

\[ G^k_i(\beta)\,:\!=\,\frac{1}{2}\sigma^2_i\beta(\beta-1)+\mu_i\beta-(\lambda_i+r+\eta{\bf 1}_{\{i=1,k={\textrm{U}}\}}).\]

The equation $G^k_i(\beta)=0$ has one positive and one negative solution, denoted by $\zeta^{k,+}_i$ and $\zeta^{k,-}_i$, respectively. For each $k={\textrm{U}},{\textrm{L}}$, we write $F^k(\beta)\,:\!=\,G^k_0(\beta)G^k_1(\beta)-\lambda_0\lambda_1$, and consider the quartic equation $F^k(\beta)=0$. Since $F^k(0)>0$, $F^k\big(\zeta^{k,\pm}_i\big)<0$, and $F^k(\beta)\to\infty$ as $\beta$ tends to $\pm\infty$, the equation $F^k(\beta)=0$ has four distinct solutions, two of which are positive and two of which are negative. For the equation $F^{\textrm{L}}(\beta)=0$, we denote the larger positive solution by $\beta^{\textrm{L}}_{\textrm{A}}$ and the smaller positive solution by $\beta^{\textrm{L}}_{\textrm{B}}$. Note that $F^{\textrm{L}}(1)$ is positive, and $\zeta^{{\textrm{L}},+}_i>1$ holds since $G^{\textrm{L}}_i(1)<0$. Thus, $1<\beta^{\textrm{L}}_{\textrm{B}}<\zeta^{{\textrm{L}},+}_i<\beta^{\textrm{L}}_{\textrm{A}}$ holds for $i=0,1$. A similar argument can be found in [Reference Guo10, Remark 2.1]. The same holds for the quartic equation $F^{\textrm{U}}(\beta)=0$: let $\beta^{\textrm{U}}_{\textrm{A}}$ and $\beta^{\textrm{U}}_{\textrm{B}}$ be the larger and the smaller negative solutions to $F^{\textrm{U}}(\beta)=0$, respectively, so that $\beta^{\textrm{U}}_{\textrm{B}}<\zeta^{{\textrm{U}},-}_i<\beta^{\textrm{U}}_{\textrm{A}}<0$ holds for $i=0,1$. In addition, we define the following constants:

(18) \begin{equation} \left\{ \begin{array}{l} a_0 \,:\!=\, \displaystyle\frac{\alpha\eta\lambda_0} {(r-\mu_0+\lambda_0)(r-\mu_1+\lambda_1+\eta)-\lambda_0\lambda_1}, \\[15pt] a_1 \,:\!=\, \displaystyle\frac{\alpha\eta(r-\mu_0+\lambda_0)} {(r-\mu_0+\lambda_0)(r-\mu_1+\lambda_1+\eta)-\lambda_0\lambda_1}, \\[15pt] b_0 \,:\!=\, \displaystyle\frac{\alpha\widetilde{K}\eta\lambda_0} {\lambda_0\lambda_1-(r+\lambda_0)(r+\lambda_1+\eta)}, \\[15pt] b_1 \,:\!=\, \displaystyle\frac{\alpha\widetilde{K}\eta(r+\lambda_0)} {\lambda_0\lambda_1-(r+\lambda_0)(r+\lambda_1+\eta)}, \end{array} \right.\end{equation}

and

(19) \begin{equation} \left\{ \begin{array}{l} P^{\textrm{L}}_{\textrm{A}} \,:\!=\, \displaystyle \frac{\alpha(\beta^{\textrm{L}}_{\textrm{B}}-\beta^{\textrm{U}}_{\textrm{A}})({-}\beta^{\textrm{L}}_{\textrm{B}}+\beta^{\textrm{U}}_{\textrm{B}})+a_1(\beta^{\textrm{U}}_{\textrm{A}}-1)(\beta^{\textrm{U}}_{\textrm{B}}-1)} {(\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{L}}_{\textrm{B}})(\beta^{\textrm{L}}_{\textrm{A}}+\beta^{\textrm{L}}_{\textrm{B}}-\beta^{\textrm{U}}_{\textrm{A}}-\beta^{\textrm{U}}_{\textrm{B}})}, \\[15pt] Q^{\textrm{L}}_{\textrm{A}} \,:\!=\, \displaystyle \frac{-\alpha\widetilde{K}(\beta^{\textrm{L}}_{\textrm{B}}-\beta^{\textrm{U}}_{\textrm{A}})({-}\beta^{\textrm{L}}_{\textrm{B}}+\beta^{\textrm{U}}_{\textrm{B}})+b_1\beta^{\textrm{U}}_{\textrm{A}}\beta^{\textrm{U}}_{\textrm{B}}} {(\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{L}}_{\textrm{B}})(\beta^{\textrm{L}}_{\textrm{A}}+\beta^{\textrm{L}}_{\textrm{B}}-\beta^{\textrm{U}}_{\textrm{A}}-\beta^{\textrm{U}}_{\textrm{B}})}, \\[15pt] P^{\textrm{L}}_{\textrm{B}} \,:\!=\, \displaystyle \frac{\alpha(\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{U}}_{\textrm{A}})(\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{U}}_{\textrm{B}})-a_1(\beta^{\textrm{U}}_{\textrm{A}}-1)(\beta^{\textrm{U}}_{\textrm{B}}-1)} {(\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{L}}_{\textrm{B}})(\beta^{\textrm{L}}_{\textrm{A}}+\beta^{\textrm{L}}_{\textrm{B}}-\beta^{\textrm{U}}_{\textrm{A}}-\beta^{\textrm{U}}_{\textrm{B}})}, \\[15pt] Q^{\textrm{L}}_{\textrm{B}} \,:\!=\, \displaystyle \frac{-\alpha\widetilde{K}(\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{U}}_{\textrm{A}})(\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{U}}_{\textrm{B}})-b_1\beta^{\textrm{U}}_{\textrm{A}}\beta^{\textrm{U}}_{\textrm{B}}} {(\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{L}}_{\textrm{B}})(\beta^{\textrm{L}}_{\textrm{A}}+\beta^{\textrm{L}}_{\textrm{B}}-\beta^{\textrm{U}}_{\textrm{A}}-\beta^{\textrm{U}}_{\textrm{B}})}, \\[15pt] P^{\textrm{U}}_{\textrm{A}} \,:\!=\, \displaystyle \frac{\alpha(\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{U}}_{\textrm{B}})(\beta^{\textrm{L}}_{\textrm{B}}-\beta^{\textrm{U}}_{\textrm{B}})+a_1(\beta^{\textrm{U}}_{\textrm{B}}-1)(\beta^{\textrm{L}}_{\textrm{A}}+\beta^{\textrm{L}}_{\textrm{B}}-\beta^{\textrm{U}}_{\textrm{B}}-1)} {(\beta^{\textrm{U}}_{\textrm{A}}-\beta^{\textrm{U}}_{\textrm{B}})(\beta^{\textrm{L}}_{\textrm{A}}+\beta^{\textrm{L}}_{\textrm{B}}-\beta^{\textrm{U}}_{\textrm{A}}-\beta^{\textrm{U}}_{\textrm{B}})}, \\[15pt] Q^{\textrm{U}}_{\textrm{A}} \,:\!=\, \displaystyle \frac{-\alpha\widetilde{K}(\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{U}}_{\textrm{B}})(\beta^{\textrm{L}}_{\textrm{B}}-\beta^{\textrm{U}}_{\textrm{B}})+b_1\beta^{\textrm{U}}_{\textrm{B}}(\beta^{\textrm{L}}_{\textrm{A}}+\beta^{\textrm{L}}_{\textrm{B}}-\beta^{\textrm{U}}_{\textrm{B}})} {(\beta^{\textrm{U}}_{\textrm{A}}-\beta^{\textrm{U}}_{\textrm{B}})(\beta^{\textrm{L}}_{\textrm{A}}+\beta^{\textrm{L}}_{\textrm{B}}-\beta^{\textrm{U}}_{\textrm{A}}-\beta^{\textrm{U}}_{\textrm{B}})}, \\[15pt] P^{\textrm{U}}_{\textrm{B}} \,:\!=\, \displaystyle \frac{\alpha(\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{U}}_{\textrm{A}})({-}\beta^{\textrm{L}}_{\textrm{B}}+\beta^{\textrm{U}}_{\textrm{A}})-a_1(\beta^{\textrm{U}}_{\textrm{A}}-1)(\beta^{\textrm{L}}_{\textrm{A}}+\beta^{\textrm{L}}_{\textrm{B}}-\beta^{\textrm{U}}_{\textrm{A}}-1)} 
{(\beta^{\textrm{U}}_{\textrm{A}}-\beta^{\textrm{U}}_{\textrm{B}})(\beta^{\textrm{L}}_{\textrm{A}}+\beta^{\textrm{L}}_{\textrm{B}}-\beta^{\textrm{U}}_{\textrm{A}}-\beta^{\textrm{U}}_{\textrm{B}})}, \\[15pt] Q^{\textrm{U}}_{\textrm{B}} \,:\!=\, \displaystyle \frac{-\alpha\widetilde{K}(\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{U}}_{\textrm{A}})({-}\beta^{\textrm{L}}_{\textrm{B}}+\beta^{\textrm{U}}_{\textrm{A}})-b_1\beta^{\textrm{U}}_{\textrm{A}}(\beta^{\textrm{L}}_{\textrm{A}}+\beta^{\textrm{L}}_{\textrm{B}}-\beta^{\textrm{U}}_{\textrm{A}})} {(\beta^{\textrm{U}}_{\textrm{A}}-\beta^{\textrm{U}}_{\textrm{B}})(\beta^{\textrm{L}}_{\textrm{A}}+\beta^{\textrm{L}}_{\textrm{B}}-\beta^{\textrm{U}}_{\textrm{A}}-\beta^{\textrm{U}}_{\textrm{B}})}. \end{array} \right.\end{equation}
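The quartic roots introduced above, which enter the constants (18) and (19), are easy to obtain numerically: expand $F^k$ into polynomial coefficients and call a standard root finder. A minimal Python sketch (the function name and parameter values are our own, chosen purely for illustration) is as follows.

```python
import numpy as np

def quartic_roots(k, r, eta, mu, sigma, lam):
    """Real roots of F^k(beta) = G^k_0(beta) * G^k_1(beta) - lam0*lam1, k in {'L', 'U'},
    returned in increasing order: two negative roots, then two positive roots."""
    g = []
    for i in (0, 1):
        c = lam[i] + r + (eta if (i == 1 and k == 'U') else 0.0)
        # G^k_i(beta) = 0.5*sigma_i^2*beta*(beta - 1) + mu_i*beta - c, coefficients by degree
        g.append(np.array([0.5 * sigma[i] ** 2, mu[i] - 0.5 * sigma[i] ** 2, -c]))
    F = np.polymul(g[0], g[1])
    F[-1] -= lam[0] * lam[1]
    return np.sort(np.roots(F).real)

# Illustrative parameters (those of the numerical example mentioned at the end of Section 4).
r, eta = 0.1, 1.0
mu, sigma, lam = [-0.1, 0.05], [0.2, 0.1], [2.0, 1.0]
bL = quartic_roots('L', r, eta, mu, sigma, lam)
beta_LB, beta_LA = bL[2], bL[3]      # positive roots, 1 < beta_LB < beta_LA
bU = quartic_roots('U', r, eta, mu, sigma, lam)
beta_UB, beta_UA = bU[0], bU[1]      # negative roots, beta_UB < beta_UA < 0
print(beta_LB, beta_LA, beta_UB, beta_UA)
```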

With the above preparations, we solve Problem 2 as follows.

Proposition 1. Problem 2 has a unique solution $(V_0,V_1,x^*)$, given as follows: for $i=0,1$,

(20) \begin{equation} V_i(x) = A^{\textrm{L}}_ix^{\beta^{\textrm{L}}_{\textrm{A}}} + B^{\textrm{L}}_ix^{\beta^{\textrm{L}}_{\textrm{B}}}, \qquad 0<x<x^*, \end{equation}
(21) \begin{equation} V_i(x) = A^{\textrm{U}}_ix^{\beta^{\textrm{U}}_{\textrm{A}}} + B^{\textrm{U}}_ix^{\beta^{\textrm{U}}_{\textrm{B}}} + a_ix + b_i, \qquad x\geq x^*, \end{equation}

and

(22) \begin{equation} x^* = -\frac{\dfrac{(1-\beta^{\textrm{L}}_{\textrm{A}})Q^{\textrm{L}}_{\textrm{A}}}{G^{\textrm{L}}_0(\beta^{\textrm{L}}_{\textrm{A}})} + \dfrac{(1-\beta^{\textrm{L}}_{\textrm{B}})Q^{\textrm{L}}_{\textrm{B}}}{G^{\textrm{L}}_0(\beta^{\textrm{L}}_{\textrm{B}})} + \dfrac{(\beta^{\textrm{U}}_{\textrm{A}}-1)Q^{\textrm{U}}_{\textrm{A}}}{G^{\textrm{U}}_0(\beta^{\textrm{U}}_{\textrm{A}})} + \dfrac{(\beta^{\textrm{U}}_{\textrm{B}}-1)Q^{\textrm{U}}_{\textrm{B}}}{G^{\textrm{U}}_0(\beta^{\textrm{U}}_{\textrm{B}})} + \dfrac{b_0}{\lambda_0}} {\dfrac{(1-\beta^{\textrm{L}}_{\textrm{A}})P^{\textrm{L}}_{\textrm{A}}}{G^{\textrm{L}}_0(\beta^{\textrm{L}}_{\textrm{A}})} + \dfrac{(1-\beta^{\textrm{L}}_{\textrm{B}})P^{\textrm{L}}_{\textrm{B}}}{G^{\textrm{L}}_0(\beta^{\textrm{L}}_{\textrm{B}})} + \dfrac{(\beta^{\textrm{U}}_{\textrm{A}}-1)P^{\textrm{U}}_{\textrm{A}}}{G^{\textrm{U}}_0(\beta^{\textrm{U}}_{\textrm{A}})} + \dfrac{(\beta^{\textrm{U}}_{\textrm{B}}-1)P^{\textrm{U}}_{\textrm{B}}}{G^{\textrm{U}}_0(\beta^{\textrm{U}}_{\textrm{B}})}}, \end{equation}

where

(23) \begin{equation} \left\{ \begin{array}{l} A^k_1 = (x^*)^{-\beta^k_{\textrm{A}}}(P^k_{\textrm{A}} x^*+Q^k_{\textrm{A}}), \\[4pt] B^k_1 = (x^*)^{-\beta^k_{\textrm{B}}}(P^k_{\textrm{B}} x^*+Q^k_{\textrm{B}}), \\[4pt] A^k_0 = \dfrac{-\lambda_0}{G^k_0(\beta^k_{\textrm{A}})}A^k_1, \\[9pt] B^k_0 = \dfrac{-\lambda_0}{G^k_0(\beta^k_{\textrm{B}})}B^k_1 \end{array} \right. \end{equation}

for $k={\textrm{L}},{\textrm{U}}$ .

Proof. For the time being, we use $\widetilde{\pi}(x)\,:\!=\,\alpha x-\alpha\widetilde{K}$ instead of $\pi$ , i.e. we rewrite (11) and (12) as follows:

(24) \begin{equation} -rV_1(x)+{\mathcal A}_1V_1(x)+\lambda_1(V_0(x)-V_1(x))+\eta(\widetilde{\pi}(x)-V_1(x))=0, \qquad x>x^*, \end{equation}
(25) \begin{equation} V_1(x^*)=\widetilde{\pi}(x^*). \end{equation}

Step 1. For $0<x<x^*$, a general solution to (9) and (10) is expressed as (20) with some $A^{\textrm{L}}_i,B^{\textrm{L}}_i\in{\mathbb R}$ and some $\beta^{\textrm{L}}_{\textrm{A}},\beta^{\textrm{L}}_{\textrm{B}}>0$; the non-negativity of $\beta^{\textrm{L}}_{\textrm{A}}$ and $\beta^{\textrm{L}}_{\textrm{B}}$ is derived from condition (8). Without loss of generality, we may assume that $\beta^{\textrm{L}}_{\textrm{A}}>\beta^{\textrm{L}}_{\textrm{B}}$. Substituting (20) into (9) and (10), we obtain

\[ (A^{\textrm{L}}_iG^{\textrm{L}}_i(\beta^{\textrm{L}}_{\textrm{A}}) + \lambda_iA^{\textrm{L}}_{1-i})x^{\beta^{\textrm{L}}_{\textrm{A}}} + (B^{\textrm{L}}_iG^{\textrm{L}}_i(\beta^{\textrm{L}}_{\textrm{B}}) + \lambda_iB^{\textrm{L}}_{1-i})x^{\beta^{\textrm{L}}_{\textrm{B}}}=0, \qquad i=0,1, \]

for any $x\in(0,x^*)$ , which is equivalent to $A^{\textrm{L}}_iG^{\textrm{L}}_i(\beta^{\textrm{L}}_{\textrm{A}}) + \lambda_iA^{\textrm{L}}_{1-i}=0$ and $B^{\textrm{L}}_iG^{\textrm{L}}_i(\beta^{\textrm{L}}_{\textrm{B}}) + \lambda_iB^{\textrm{L}}_{1-i}=0$ for $i=0,1$ . Thus, $\beta^{\textrm{L}}_{\textrm{A}}$ satisfies $A^{\textrm{L}}_0G^{\textrm{L}}_0(\beta^{\textrm{L}}_{\textrm{A}})A^{\textrm{L}}_1G^{\textrm{L}}_1(\beta^{\textrm{L}}_{\textrm{A}}) = ({-}\lambda_0A^{\textrm{L}}_1)({-}\lambda_1A^{\textrm{L}}_0)$ , i.e. $G^{\textrm{L}}_0(\beta^{\textrm{L}}_{\textrm{A}})G^{\textrm{L}}_1(\beta^{\textrm{L}}_{\textrm{A}}) - \lambda_0\lambda_1=0$ . In addition, the same is true for $\beta^{\textrm{L}}_{\textrm{B}}$ . Thus, as defined above, $\beta^{\textrm{L}}_{\textrm{A}}$ and $\beta^{\textrm{L}}_{\textrm{B}}$ are the larger and smaller positive solutions to the equation $F^{\textrm{L}}(\beta)=0$ . Moreover, $A^{\textrm{L}}_i$ and $B^{\textrm{L}}_i$ satisfy the following:

(26) \begin{equation} A^{\textrm{L}}_0 = -\frac{\lambda_0}{G^{\textrm{L}}_0(\beta^{\textrm{L}}_{\textrm{A}})}A^{\textrm{L}}_1, \qquad B^{\textrm{L}}_0 = -\frac{\lambda_0}{G^{\textrm{L}}_0(\beta^{\textrm{L}}_{\textrm{B}})}B^{\textrm{L}}_1. \end{equation}

Step 2. Next, we discuss the case where $x>x^*$. First, we need to find a particular solution to (9) and (24), since (24) is inhomogeneous. Note that $\widetilde{\pi}$ is of linear growth. For each $i=0,1$, we can therefore write a particular solution as $a_ix+b_i$. Substituting $a_ix+b_i$ into (9) and (24), we have

(27) \begin{equation} \left\{ \begin{array}{l} ({-}ra_0+\mu_0a_0+\lambda_0(a_1-a_0))x + ({-}rb_0+\lambda_0(b_1-b_0)) = 0, \\[5pt] ({-}ra_1+\mu_1a_1+\lambda_1(a_0-a_1)+\eta(\alpha-a_1))x + ({-}rb_1+\lambda_1(b_0-b_1)+\eta({-}\alpha\widetilde{K}-b_1))=0 \end{array} \right. \end{equation}

for any $x>x^*$; in other words, all the coefficients in (27) are 0, from which it follows that $a_i$ and $b_i$ are given by (18).

Now, we derive $V_i(x)$ for $x>x^*$ in the same way as the previous step. For each $i=0,1$ , we can write a general solution to (9) and (24) as

\[ V_i(x) = A^{\textrm{U}}_ix^{\beta^{\textrm{U}}_{\textrm{A}}} + B^{\textrm{U}}_ix^{\beta^{\textrm{U}}_{\textrm{B}}} + a_ix + b_i, \qquad x>x^*, \]

with some $A^{\textrm{U}}_i,B^{\textrm{U}}_i\in{\mathbb R}$ and $\beta^{\textrm{U}}_{\textrm{A}},\beta^{\textrm{U}}_{\textrm{B}}\in{\mathbb R}$ . By (9), (24), and (27), it follows that

\[ A^{\textrm{U}}_iG^{\textrm{U}}_i(\beta^{\textrm{U}}_{\textrm{A}}) + \lambda_iA^{\textrm{U}}_{1-i}=0, \qquad B^{\textrm{U}}_iG^{\textrm{U}}_i(\beta^{\textrm{U}}_{\textrm{B}}) + \lambda_iB^{\textrm{U}}_{1-i}=0 \]

for $i=0,1$. Thus, in the same way as in Step 1, $\beta^{\textrm{U}}_{\textrm{A}}$ and $\beta^{\textrm{U}}_{\textrm{B}}$ are solutions to the quartic equation $F^{\textrm{U}}(\beta)=0$. On the other hand, if either $\beta^{\textrm{U}}_{\textrm{A}}$ or $\beta^{\textrm{U}}_{\textrm{B}}$ were positive, then (17) would be violated, since any positive solution is greater than 1. Thus, $\beta^{\textrm{U}}_{\textrm{A}}$ and $\beta^{\textrm{U}}_{\textrm{B}}$ are the negative solutions, and we may take them so that $\beta^{\textrm{U}}_{\textrm{B}}<\beta^{\textrm{U}}_{\textrm{A}}<0$ without loss of generality. Moreover, we have

(28) \begin{equation} A^{\textrm{U}}_0 = -\frac{\lambda_0}{G^{\textrm{U}}_0(\beta^{\textrm{U}}_{\textrm{A}})}A^{\textrm{U}}_1, \qquad B^{\textrm{U}}_0 = -\frac{\lambda_0}{G^{\textrm{U}}_0(\beta^{\textrm{U}}_{\textrm{B}})}B^{\textrm{U}}_1. \end{equation}

Step 3. By the $C^2$ property of $V_1$ and the boundary condition (25), it follows that

\begin{equation*}\left\{\begin{array}{l} A^{\textrm{L}}_1(x^*)^{\beta^{\textrm{L}}_{\textrm{A}}}+B^{\textrm{L}}_1(x^*)^{\beta^{\textrm{L}}_{\textrm{B}}} = A^{\textrm{U}}_1(x^*)^{\beta^{\textrm{U}}_{\textrm{A}}}+B^{\textrm{U}}_1(x^*)^{\beta^{\textrm{U}}_{\textrm{B}}}+a_1x^*+b_1 = \widetilde{\pi}(x^*), \nonumber \\[5pt] \beta^{\textrm{L}}_{\textrm{A}} A^{\textrm{L}}_1(x^*)^{\beta^{\textrm{L}}_{\textrm{A}}-1}+\beta^{\textrm{L}}_{\textrm{B}} B^{\textrm{L}}_1(x^*)^{\beta^{\textrm{L}}_{\textrm{B}}-1} = \beta^{\textrm{U}}_{\textrm{A}} A^{\textrm{U}}_1(x^*)^{\beta^{\textrm{U}}_{\textrm{A}}-1}+\beta^{\textrm{U}}_{\textrm{B}} B^{\textrm{U}}_1(x^*)^{\beta^{\textrm{U}}_{\textrm{B}}-1}+a_1, \nonumber \\[5pt] \beta^{\textrm{L}}_{\textrm{A}}(\beta^{\textrm{L}}_{\textrm{A}}-1)A^{\textrm{L}}_1(x^*)^{\beta^{\textrm{L}}_{\textrm{A}}-2} + \beta^{\textrm{L}}_{\textrm{B}}(\beta^{\textrm{L}}_{\textrm{B}}-1)B^{\textrm{L}}_1(x^*)^{\beta^{\textrm{L}}_{\textrm{B}}-2} = \beta^{\textrm{U}}_{\textrm{A}}(\beta^{\textrm{U}}_{\textrm{A}}-1)A^{\textrm{U}}_1(x^*)^{\beta^{\textrm{U}}_{\textrm{A}}-2}\\[5pt] \qquad + \beta^{\textrm{U}}_{\textrm{B}}(\beta^{\textrm{U}}_{\textrm{B}}-1)B^{\textrm{U}}_1(x^*)^{\beta^{\textrm{U}}_{\textrm{B}}-2}. \nonumber \end{array}\right.\end{equation*}

Solving the above, together with (26) and (28), we obtain (23).

Step 4. In this step, we derive (22). Since $V_0$ and $V^\prime_0$ are continuous at $x^*$ , we have

\begin{equation*}\left\{\begin{array}{l} A^{\textrm{L}}_0(x^*)^{\beta^{\textrm{L}}_{\textrm{A}}}+B^{\textrm{L}}_0(x^*)^{\beta^{\textrm{L}}_{\textrm{B}}} = A^{\textrm{U}}_0(x^*)^{\beta^{\textrm{U}}_{\textrm{A}}}+B^{\textrm{U}}_0(x^*)^{\beta^{\textrm{U}}_{\textrm{B}}}+a_0x^*+b_0, \nonumber \\[6pt] \beta^{\textrm{L}}_{\textrm{A}} A^{\textrm{L}}_0(x^*)^{\beta^{\textrm{L}}_{\textrm{A}}-1}+\beta^{\textrm{L}}_{\textrm{B}} B^{\textrm{L}}_0(x^*)^{\beta^{\textrm{L}}_{\textrm{B}}-1} = \beta^{\textrm{U}}_{\textrm{A}} A^{\textrm{U}}_0(x^*)^{\beta^{\textrm{U}}_{\textrm{A}}-1}+\beta^{\textrm{U}}_{\textrm{B}} B^{\textrm{U}}_0(x^*)^{\beta^{\textrm{U}}_{\textrm{B}}-1}+a_0. \nonumber \end{array}\right.\end{equation*}

Using (23) and cancelling $a_0$ , we obtain

\begin{align*} \bigg(\frac{(1-\beta^{\textrm{L}}_{\textrm{A}})P^{\textrm{L}}_{\textrm{A}}}{G^{\textrm{L}}_0(\beta^{\textrm{L}}_{\textrm{A}})} & + \frac{(1-\beta^{\textrm{L}}_{\textrm{B}})P^{\textrm{L}}_{\textrm{B}}}{G^{\textrm{L}}_0(\beta^{\textrm{L}}_{\textrm{B}})} + \frac{(\beta^{\textrm{U}}_{\textrm{A}}-1)P^{\textrm{U}}_{\textrm{A}}}{G^{\textrm{U}}_0(\beta^{\textrm{U}}_{\textrm{A}})} + \frac{(\beta^{\textrm{U}}_{\textrm{B}}-1)P^{\textrm{U}}_{\textrm{B}}}{G^{\textrm{U}}_0(\beta^{\textrm{U}}_{\textrm{B}})}\bigg)x^* \\[7pt]& + \frac{(1-\beta^{\textrm{L}}_{\textrm{A}})Q^{\textrm{L}}_{\textrm{A}}}{G^{\textrm{L}}_0(\beta^{\textrm{L}}_{\textrm{A}})} + \frac{(1-\beta^{\textrm{L}}_{\textrm{B}})Q^{\textrm{L}}_{\textrm{B}}}{G^{\textrm{L}}_0(\beta^{\textrm{L}}_{\textrm{B}})} + \frac{(\beta^{\textrm{U}}_{\textrm{A}}-1)Q^{\textrm{U}}_{\textrm{A}}}{G^{\textrm{U}}_0(\beta^{\textrm{U}}_{\textrm{A}})} + \frac{(\beta^{\textrm{U}}_{\textrm{B}}-1)Q^{\textrm{U}}_{\textrm{B}}}{G^{\textrm{U}}_0(\beta^{\textrm{U}}_{\textrm{B}})} + \frac{b_0}{\lambda_0} = 0, \end{align*}

which we denote by ${\mathcal P} x^*+{\mathcal Q}=0$. Recall that $\beta^{\textrm{L}}_{\textrm{A}}>\zeta^{{\textrm{L}},+}_0>\beta^{\textrm{L}}_{\textrm{B}}>1$ and $\beta^{\textrm{U}}_{\textrm{B}}<\zeta^{{\textrm{U}},-}_0<\beta^{\textrm{U}}_{\textrm{A}}<0$. Thus, $G^{\textrm{L}}_0(\beta^{\textrm{L}}_{\textrm{A}}),G^{\textrm{U}}_0(\beta^{\textrm{U}}_{\textrm{B}})>0$ and $G^{\textrm{L}}_0(\beta^{\textrm{L}}_{\textrm{B}}),G^{\textrm{U}}_0(\beta^{\textrm{U}}_{\textrm{A}})<0$ hold. Moreover, we can easily see that $P^{\textrm{L}}_{\textrm{A}},P^{\textrm{U}}_{\textrm{B}}<0$, $P^{\textrm{L}}_{\textrm{B}},P^{\textrm{U}}_{\textrm{A}}>0$, $Q^{\textrm{L}}_{\textrm{A}},Q^{\textrm{U}}_{\textrm{B}}>0$, and $Q^{\textrm{L}}_{\textrm{B}},Q^{\textrm{U}}_{\textrm{A}}<0$. Thus, all the terms in ${\mathcal P}$ are positive, and those in ${\mathcal Q}$ are negative. We then have $x^*=-{{\mathcal Q}}/{{\mathcal P}}>0$, i.e. (22) holds.

Step 5. We show that $V_i$ , $i=0,1$ , are ${\mathbb R}_+$ -valued in this last step. Since $V_i(x)\sim a_ix+b_i$ as $x\to\infty$ and $a_i>0$ for $i=0,1$ , there is an $M>0$ such that $V_i(x)>0$ for any $x>M$ and $i=0,1$ . Now, we write $V_{\overline{i}}(\widehat{x})\,:\!=\,\min_{x\in(0,M]}\min_{i=0,1}V_i(x)$ and assume that $V_{\overline{i}}(\widehat{x})<0$ . We then have $V^\prime_{\overline{i}}(\widehat{x})=0$ , $V^{\prime\prime}_{\overline{i}}(\widehat{x})>0$ , and $V_{\overline{i}}(\widehat{x})\leq V_{1-\overline{i}}(\widehat{x})$ . When $\widehat{x}\in(0,x^*)$ , it follows that

(29) \begin{equation} -rV_{\overline{i}}(\widehat{x}) + \mu_{\overline{i}}\widehat{x} V_{\overline{i}}^\prime(\widehat{x}) + \frac{1}{2}\sigma_{\overline{i}}^2\widehat{x}^2V_{\overline{i}}^{\prime\prime}(\widehat{x}) + \lambda_{\overline{i}}(V_{1-\overline{i}}(\widehat{x}) - V_{\overline{i}}(\widehat{x})) = 0. \end{equation}

Thus, we have $V_{\overline{i}}(\widehat{x})\geq0$ , which contradicts the assumption that $V_{\overline{i}}(\widehat{x})<0$ . Next, consider the case where $\widehat{x}>x^*$ . If $\overline{i}=0$ , then (29) holds. This is a contradiction. When $\overline{i}=1$ , we have

\[ -rV_1(\widehat{x}) + \mu_1\widehat{x}V_1^\prime(\widehat{x}) + \frac{1}{2}\sigma_1^2\widehat{x}^2V_1^{\prime\prime}(\widehat{x}) + \lambda_1(V_0(\widehat{x}) - V_1(\widehat{x})) + \eta(\widetilde{\pi}(\widehat{x}) - V_1(\widehat{x})) = 0. \]

The second, third, and fourth terms are non-negative. In addition, the fifth term is also non-negative since $\widetilde{\pi}(\widehat{x})>\widetilde{\pi}(x^*)=V_1(x^*)\geq V_1(\widehat{x})$ . Thus, $V_1(\widehat{x})\geq0$ , which is a contradiction. Lastly, when $\widehat{x}=x^*$ , for any $\varepsilon>0$ there is a $\delta>0$ such that $\mu_{\overline{i}}V^\prime_{\overline{i}}(x)>-\varepsilon$ , $V^{\prime\prime}_{\overline{i}}(x)>0$ , and $V_{1-\overline{i}}(x)-V_{\overline{i}}(x)>-\varepsilon$ hold for any $x\in(x^*-\delta,x^*)$ . We then have $-rV_{\overline{i}}(x)-\varepsilon x^*-\lambda_{\overline{i}}\varepsilon\leq0$ for any $x\in(x^*-\delta,x^*)$ from (29), which means that $V_{\overline{i}}(x^*)\geq0$ holds. This is a contradiction. Consequently, $V_i$ , $i=0,1$ , are ${\mathbb R}_+$ -valued. In particular, we have $V_1(x^*)=\widetilde{\pi}(x^*)\geq0$ , from which $x^*\geq\widetilde{K}$ follows. Thus, $V_i$ , $i=0,1$ , satisfy (11) and (12) since $\widetilde{K}\geq K$ and $\widetilde{\pi}(x)=\pi(x)$ for any $x\geq K$ . Consequently, $(V_0,V_1,x^*)$ gives the unique solution to Problem 2. This completes the proof of Proposition 1.
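Although the expressions in Proposition 1 look involved, they are entirely mechanical to evaluate. The following Python sketch (our own transcription of (18), (19), and (22), offered purely as an illustration and not verified against the paper's numerical results) computes the optimal threshold $x^*$ for a given parameter set.

```python
import numpy as np

def G(i, k, beta, r, eta, mu, sigma, lam):
    """G^k_i(beta) as defined above."""
    c = lam[i] + r + (eta if (i == 1 and k == 'U') else 0.0)
    return 0.5 * sigma[i] ** 2 * beta * (beta - 1) + mu[i] * beta - c

def optimal_threshold(r, eta, mu, sigma, lam, alpha, K, I):
    """Optimal threshold x* from (22), using the constants (18) and (19)."""
    Kt = K + I / alpha                                    # \tilde{K}
    # roots of the quartic equations F^L = 0 and F^U = 0
    betas = {}
    for k in ('L', 'U'):
        g = [np.array([0.5 * sigma[i] ** 2, mu[i] - 0.5 * sigma[i] ** 2,
                       -(lam[i] + r + (eta if (i == 1 and k == 'U') else 0.0))])
             for i in (0, 1)]
        F = np.polymul(g[0], g[1])
        F[-1] -= lam[0] * lam[1]
        betas[k] = np.sort(np.roots(F).real)
    LB, LA = betas['L'][2], betas['L'][3]                 # 1 < LB < LA
    UB, UA = betas['U'][0], betas['U'][1]                 # UB < UA < 0
    # constants (18) (a_0 is not needed for x*)
    d1 = (r - mu[0] + lam[0]) * (r - mu[1] + lam[1] + eta) - lam[0] * lam[1]
    d2 = lam[0] * lam[1] - (r + lam[0]) * (r + lam[1] + eta)
    a1 = alpha * eta * (r - mu[0] + lam[0]) / d1
    b0 = alpha * Kt * eta * lam[0] / d2
    b1 = alpha * Kt * eta * (r + lam[0]) / d2
    # constants (19)
    D1 = (LA - LB) * (LA + LB - UA - UB)
    D2 = (UA - UB) * (LA + LB - UA - UB)
    PLA = (alpha * (LB - UA) * (-LB + UB) + a1 * (UA - 1) * (UB - 1)) / D1
    QLA = (-alpha * Kt * (LB - UA) * (-LB + UB) + b1 * UA * UB) / D1
    PLB = (alpha * (LA - UA) * (LA - UB) - a1 * (UA - 1) * (UB - 1)) / D1
    QLB = (-alpha * Kt * (LA - UA) * (LA - UB) - b1 * UA * UB) / D1
    PUA = (alpha * (LA - UB) * (LB - UB) + a1 * (UB - 1) * (LA + LB - UB - 1)) / D2
    QUA = (-alpha * Kt * (LA - UB) * (LB - UB) + b1 * UB * (LA + LB - UB)) / D2
    PUB = (alpha * (LA - UA) * (-LB + UA) - a1 * (UA - 1) * (LA + LB - UA - 1)) / D2
    QUB = (-alpha * Kt * (LA - UA) * (-LB + UA) - b1 * UA * (LA + LB - UA)) / D2
    # optimal threshold (22): x* = -Q/P
    GL = lambda b: G(0, 'L', b, r, eta, mu, sigma, lam)
    GU = lambda b: G(0, 'U', b, r, eta, mu, sigma, lam)
    P = ((1 - LA) * PLA / GL(LA) + (1 - LB) * PLB / GL(LB)
         + (UA - 1) * PUA / GU(UA) + (UB - 1) * PUB / GU(UB))
    Q = ((1 - LA) * QLA / GL(LA) + (1 - LB) * QLB / GL(LB)
         + (UA - 1) * QUA / GU(UA) + (UB - 1) * QUB / GU(UB) + b0 / lam[0])
    return -Q / P

# Parameter set of the numerical example mentioned at the end of Section 4.
print(optimal_threshold(r=0.1, eta=1.0, mu=[-0.1, 0.05], sigma=[0.2, 0.1],
                        lam=[2.0, 1.0], alpha=1.0, K=0.9, I=0.1))
```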

4. Verification

In this section, we show that the functions $V_i$, $i=0,1$, given in Proposition 1 coincide with the value functions $v_i$, $i=0,1$, defined by (7), and that an optimal stopping time $\tau^*$ of threshold type exists, with the optimal threshold $x^*$ given in (22). To this end, we assume that $V_1$ satisfies the boundary conditions (13) and (14). Ideally this assumption should be proven, but since a full proof is difficult and involved, we give only a sufficient condition.

Proposition 2. $V_1$ satisfies (13) and (14) whenever all of the following conditions hold:

  1. (i) $B^{\textrm{L}}_1, A^{\textrm{U}}_1>0$ ,

  2. (ii) $\beta^{\textrm{L}}_{\textrm{A}}(\beta^{\textrm{L}}_{\textrm{A}}-1)A^{\textrm{L}}_1(x^*)^{\beta^{\textrm{L}}_{\textrm{A}}-2} + \beta^{\textrm{L}}_{\textrm{B}}(\beta^{\textrm{L}}_{\textrm{B}}-1)B^{\textrm{L}}_1(x^*)^{\beta^{\textrm{L}}_{\textrm{B}}-2}>0$ , and

  3. (iii) $\beta^{\textrm{L}}_{\textrm{A}} A^{\textrm{L}}_1(x^*)^{\beta^{\textrm{L}}_{\textrm{A}}-1} + \beta^{\textrm{L}}_{\textrm{B}} B^{\textrm{L}}_1(x^*)^{\beta^{\textrm{L}}_{\textrm{B}}-1}<\alpha$ .

Proof. We have $V_1^{\prime\prime}(x) = x^{\beta^{\textrm{L}}_{\textrm{B}}-2}\{\beta^{\textrm{L}}_{\textrm{A}}(\beta^{\textrm{L}}_{\textrm{A}}-1)A^{\textrm{L}}_1x^{\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{L}}_{\textrm{B}}} + \beta^{\textrm{L}}_{\textrm{B}}(\beta^{\textrm{L}}_{\textrm{B}}-1)B^{\textrm{L}}_1\}$ on $(0,x^*)$ . Since $1<\beta^{\textrm{L}}_{\textrm{B}}<\beta^{\textrm{L}}_{\textrm{A}}$ , $V_1^{\prime\prime}(x)>0$ holds on $(0,x^*)$ under (i) if $A^{\textrm{L}}_1\geq0$ . On the other hand, (ii) implies that $\beta^{\textrm{L}}_{\textrm{A}}(\beta^{\textrm{L}}_{\textrm{A}}-1)A^{\textrm{L}}_1(x^*)^{\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{L}}_{\textrm{B}}} + \beta^{\textrm{L}}_{\textrm{B}}(\beta^{\textrm{L}}_{\textrm{B}}-1)B^{\textrm{L}}_1>0$ . Thus, $V_1^{\prime\prime}(x)>0$ on $(0,x^*)$ even if $A^{\textrm{L}}_1<0$ , i.e. $V_1$ is convex on $(0,x^*)$ . Moreover, (iii) yields that $V_1^\prime(x^*)<\alpha$ , i.e. $V_1^\prime(x)<\alpha$ on $(0,x^*)$ by the convexity of $V_1$ . Simultaneously, $V_1^\prime(x)>0$ holds since $V_1^\prime(0+)=0$ and $V_1^{\prime\prime}(x)>0$ . As a result, $V_1(x)>0$ on $(0,x^*)$ . Hence, we have

\[ V_1(x) > \{-\alpha(x^*-x)+V_1(x^*)\}\vee 0 = \{-\alpha(x^*-x)+\pi(x^*)\} \vee 0 \geq \pi(x) \]

for any $x\in(0,x^*)$ , from which (13) is satisfied.

Next, from (i), (ii), $\beta^{\textrm{U}}_{\textrm{B}}<\beta^{\textrm{U}}_{\textrm{A}}<0$ , and the continuity of $V_1^{\prime\prime}$ at $x^*$ , we have

\begin{align*} V_1^{\prime\prime}(x) & = x^{\beta^{\textrm{U}}_{\textrm{B}}-2}\{\beta^{\textrm{U}}_{\textrm{A}}(\beta^{\textrm{U}}_{\textrm{A}}-1)A^{\textrm{U}}_1x^{\beta^{\textrm{U}}_{\textrm{A}}-\beta^{\textrm{U}}_{\textrm{B}}} + \beta^{\textrm{U}}_{\textrm{B}}(\beta^{\textrm{U}}_{\textrm{B}} - 1)B^{\textrm{U}}_1\} \\ & \geq x^{\beta^{\textrm{U}}_{\textrm{B}}-2}\{\beta^{\textrm{U}}_{\textrm{A}}(\beta^{\textrm{U}}_{\textrm{A}}-1)A^{\textrm{U}}_1(x^*)^{\beta^{\textrm{U}}_{\textrm{A}}-\beta^{\textrm{U}}_{\textrm{B}}} + \beta^{\textrm{U}}_{\textrm{B}}(\beta^{\textrm{U}}_{\textrm{B}}-1)B^{\textrm{U}}_1\} > 0 \end{align*}

for any $x\in(x^*,\infty)$ , i.e. $V_1$ is convex on $(x^*,\infty)$ . We then have $V^\prime_1(x^*)\leq V^\prime_1(x)\leq a_1$ for any $x\in(x^*,\infty)$ since $V^\prime_1(x)\to a_1$ as $x\to\infty$ . Since $a_1<\alpha$ holds by (18), we obtain

\[ V_1(x)<\alpha(x-x^*)+V_1(x^*)=\alpha(x-x^*)+\pi(x^*)=\pi(x) \]

on $(x^*,\infty)$ , and (14) follows.

Using numerical computation, we can confirm that the above conditions (i)–(iii) are met for many parameter sets. In fact, fixing $r=0.1$ and $\pi(x)=(x-0.9)^+-0.1$, and letting $\mu_0$ and $\mu_1$ each take the values $-5, -2, -1, -0.5, 0, 0.05, 0.099$ and $\sigma_0$, $\sigma_1$, $\lambda_0$, $\lambda_1$, and $\eta$ each take the values $0.1, 1, 2, 5$, the above conditions were satisfied for all 50 176 resulting parameter sets.
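Conditions (i)–(iii) are straightforward to test numerically once the coefficients $A^{\textrm{L}}_1$, $B^{\textrm{L}}_1$, and $A^{\textrm{U}}_1$ have been computed from (23). A small helper along the following lines (a sketch with hypothetical argument names; its inputs would come from extending the threshold computation above with (23)) performs the check.

```python
def check_sufficient_conditions(x_star, beta_LA, beta_LB, A_L1, B_L1, A_U1, alpha):
    """Truth values of conditions (i)-(iii) of Proposition 2."""
    cond_i = (B_L1 > 0) and (A_U1 > 0)
    cond_ii = (beta_LA * (beta_LA - 1) * A_L1 * x_star ** (beta_LA - 2)
               + beta_LB * (beta_LB - 1) * B_L1 * x_star ** (beta_LB - 2)) > 0
    cond_iii = (beta_LA * A_L1 * x_star ** (beta_LA - 1)
                + beta_LB * B_L1 * x_star ** (beta_LB - 1)) < alpha
    return cond_i, cond_ii, cond_iii
```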

Remark 3. To show that $V_i$, $i=0,1$, satisfy (13) and (14), the PDE approach discussed in [Reference Bensoussan, Yan and Yin1] should be helpful. Note that [Reference Bensoussan, Yan and Yin1] treated the same cash-flow process as this paper, but without restrictions on the stopping times. Thus, in contrast to (9)–(11), the two value functions in [Reference Bensoussan, Yan and Yin1] satisfy PDEs of the same type. In addition, since the random arrival of investment opportunities is not considered in [Reference Bensoussan, Yan and Yin1], the value functions there coincide with the payoff function $\pi$ on the interval $(x^*,\infty)$. In our setting, however, $V_1$ satisfies different PDEs depending on whether or not x is greater than $x^*$. As a result, the structure of the VI in our setting is much more complicated than that in [Reference Bensoussan, Yan and Yin1], which makes the proof of (13) and (14) challenging.

We now make some preparations for the proof of our main theorem. First of all, we show the following lemma.

Lemma 1. For $i=0,1$ , $V^\prime_i$ is bounded, and there is a $c_i>0$ such that $V_i(x)\leq c_ix$ for any $x>0$ .

Proof. For any $x\in(0,x^*)$ and $i=0,1$ , we have

\[ |V^\prime_i(x)| = \big|A^{\textrm{L}}_i\beta^{\textrm{L}}_{\textrm{A}} x^{\beta^{\textrm{L}}_{\textrm{A}}-1} + B^{\textrm{L}}_i\beta^{\textrm{L}}_{\textrm{B}} x^{\beta^{\textrm{L}}_{\textrm{B}}-1}\big| \leq \big|A^{\textrm{L}}_i\big|\beta^{\textrm{L}}_{\textrm{A}}(x^*)^{\beta^{\textrm{L}}_{\textrm{A}}-1} + \big|B^{\textrm{L}}_i\big|\beta^{\textrm{L}}_{\textrm{B}}(x^*)^{\beta^{\textrm{L}}_{\textrm{B}}-1} \]

since $\beta^{\textrm{L}}_{\textrm{A}},\beta^{\textrm{L}}_{\textrm{B}}>1$ . On the other hand, for any $x>x^*$ and $i=0,1$ ,

\[ |V^\prime_i(x)| = \big|A^{\textrm{U}}_i\beta^{\textrm{U}}_{\textrm{A}} x^{\beta^{\textrm{U}}_{\textrm{A}}-1} + B^{\textrm{U}}_i\beta^{\textrm{U}}_{\textrm{B}} x^{\beta^{\textrm{U}}_{\textrm{B}}-1} + a_i\big| \leq \big|A^{\textrm{U}}_i\beta^{\textrm{U}}_{\textrm{A}}\big|(x^*)^{\beta^{\textrm{U}}_{\textrm{A}}-1} + \big|B^{\textrm{U}}_i\beta^{\textrm{U}}_{\textrm{B}}\big|(x^*)^{\beta^{\textrm{U}}_{\textrm{B}}-1} + |a_i|, \]

since $\beta^{\textrm{U}}_{\textrm{A}},\beta^{\textrm{U}}_{\textrm{B}}<0$ . Hence, $V^\prime_i$ is bounded, which implies that, for each $i=0,1$ , there is a $c_i>0$ such that $V_i(x)\leq c_ix$ for any $x>0$ since $V_i(0+)=0$ .

In addition, we define $T^1_k \,:\!=\, \inf\{t>T^1_{k-1}\mid\theta_t=1$ and $t=T_j$ for some $j\in{\mathbb N}\}$ for $k\in{\mathbb N}$ with the conventions $T^1_0\equiv0$ and $T^1_\infty\equiv\infty$ . Note that $T^1_k\in{\mathcal T}$ represents the kth time when stopping is feasible, and ${\mathcal T}$ is described as

\[ {\mathcal T} = \{\tau\in{\mathcal T}_0 \mid \textrm{for each }\omega\in\Omega, \tau(\omega)=T^1_j(\omega)\textrm{ for some }j\in{\mathbb N}_\infty\}.\]

Now, we define $N^*\,:\!=\,\inf\{n\in{\mathbb N}\mid X_{T^1_n}\geq x^*\}$ , with the convention $\inf\emptyset=\infty$ . Note that $N^*$ is an ${\mathbb N}_\infty$ -valued stopping time, where ${\mathbb N}_\infty\,:\!=\,{\mathbb N}\cup\{\infty\}$ . Hereafter, we write $Z\sim\exp(\lambda)$ when a random variable Z follows the exponential distribution with parameter $\lambda>0$ .

The following theorem is our main result.

Theorem 1. Suppose that $V_1$ satisfies (13) and (14). Then $v_i(x)=V_i(x)$ holds for any $x>0$ and $i=0,1$ , and the stopping time $\tau^*\,:\!=\,T^1_{N^*}\in{\mathcal T}$ is optimal for the optimal stopping problem defined by (7).

Proof. We divide this proof into five steps.

Step 1. In this step, we fix $\theta_0=0$ and $X_0=x$, and write $\xi_0\,:\!=\,\inf\{t>0\mid\theta_t=1\}$. For $i=0,1$, we denote by $Y^i=\{Y^i_t\}_{t\geq0}$ a geometric Brownian motion starting at 1 under regime i, i.e. the solution to the SDE ${\textrm{d}} Y^i_t = Y^i_t(\mu_i\,{\textrm{d}} t + \sigma_i\,{\textrm{d}} W_t)$, $Y^i_0=1$. In the following, when we write $Y^i_t$, an independent copy of it may be taken if necessary. Note that $xY^0_t=X_t$ holds if $t<\xi_0$, and $\xi_0\sim\exp(\lambda_0)$. We now show the following:

(30) \begin{equation} V_0(x)={\mathbb E}\bigg[\int_0^\infty{\textrm{e}}^{-(r+\lambda_0)t}\lambda_0V_1(xY^0_t)\,{\textrm{d}} t\bigg], \qquad x>0. \end{equation}

To this end, we first define

(31) \begin{equation} \Phi^0_t \,:\!=\, {\textrm{e}}^{-(r+\lambda_0)t}V_0(xY^0_t){\bf 1}_{\{t<\xi_0\}}. \end{equation}

Itô’s formula implies that

\begin{multline*} \Phi^0_t = \bigg\{V_0(x) + \int_0^t{\textrm{e}}^{-(r+\lambda_0)s}({-}(r+\lambda_0)V_0(xY^0_s)+{\mathcal A}_0V_0(xY^0_s))\,{\textrm{d}} s \\ + \int_0^t{\textrm{e}}^{-(r+\lambda_0)s}\sigma_0xY^0_sV_0^\prime(xY^0_s)\,{\textrm{d}} W_s\bigg\}{\bf 1}_{\{t<\xi_0\}}. \end{multline*}

Taking expectation on both sides, we have

\begin{align*} {\mathbb E}[\Phi^0_t] & = V_0(x){\mathbb E}[{\bf 1}_{\{t<\xi_0\}}] + {\mathbb E}\bigg[\int_0^t{\textrm{e}}^{-(r+\lambda_0)s}({-}\lambda_0)V_1(xY^0_s)\,{\textrm{d}} s{\bf 1}_{\{t<\xi_0\}}\bigg] \\ & \quad + {\mathbb E}\bigg[\int_0^t{\textrm{e}}^{-(r+\lambda_0)s}\sigma_0xY^0_sV_0^\prime(xY^0_s)\,{\textrm{d}} W_s {\bf 1}_{\{t<\xi_0\}}\bigg] \\ & = \bigg\{V_0(x) - {\mathbb E}\bigg[\int_0^t{\textrm{e}}^{-(r+\lambda_0)s}\lambda_0V_1(xY^0_s)\,{\textrm{d}} s\bigg] + {\mathbb E}\bigg[\int_0^t{\textrm{e}}^{-(r+\lambda_0)s}\sigma_0xY^0_sV_0^\prime(xY^0_s){\textrm{d}} W_s\bigg]\bigg\}{\textrm{e}}^{-\lambda_0t} \\ & = \bigg\{V_0(x) - {\mathbb E}\bigg[\int_0^t{\textrm{e}}^{-(r+\lambda_0)s}\lambda_0V_1(xY^0_s)\,{\textrm{d}} s\bigg]\bigg\}{\textrm{e}}^{-\lambda_0t}. \end{align*}

The first equality is due to (9); the second is due to the independence of $\xi_0$ and W, and $\xi_0\sim\exp(\lambda_0)$ . The last equality is obtained from the boundedness of $V_0^\prime$ by Lemma 1 and the integrability of $\int_0^t(Y^0_s)^2\,{\textrm{d}} s$ . From (31), we obtain

\[ V_0(x) = {\mathbb E}[{\textrm{e}}^{-(r+\lambda_0)t}V_0(xY^0_t)] + {\mathbb E}\bigg[\int_0^t{\textrm{e}}^{-(r+\lambda_0)s}\lambda_0V_1(xY^0_s)\,{\textrm{d}} s\bigg]. \]

Since $V_0(x)\leq c_0x$ from Lemma 1 and $r>\mu_0$ from (5), we have

\[ {\mathbb E}[{\textrm{e}}^{-(r+\lambda_0)t}V_0(xY^0_t)] \leq {\mathbb E}[{\textrm{e}}^{-(r+\lambda_0)t}c_0xY^0_t], \]

which tends to 0 as $t\to\infty$ . As a result, since $V_1\geq0$ , the monotone convergence theorem implies (30).

Since $\xi_0\sim\exp(\lambda_0)$ , (30) can be rewritten as

(32) \begin{equation} V_0(x) = {\mathbb E}[{\textrm{e}}^{-r\xi_0}V_1(xY^0_{\xi_0})] = {\mathbb E}^{0,x}[{\textrm{e}}^{-r\xi_0}V_1(X_{\xi_0})]. \end{equation}

From (7), once we show $v_1=V_1$, we obtain $v_0=V_0$ immediately. In what follows, we focus on the proof of $v_1=V_1$.

Step 2. Throughout the rest of this proof, we fix $\theta_0=1$ and $X_0=x$. Here we aim to show the following by an argument similar to that in Step 1:

(33) \begin{equation} V_1(x) = {\mathbb E}\bigg[\int_0^\infty{\textrm{e}}^{-(r+\lambda_1+\eta)t}\{\lambda_1V_0(xY^1_t) + \eta\overline{V}(xY^1_t)\}\,{\textrm{d}} t\bigg], \end{equation}

where $\overline{V}$ is defined in (15). To this end, we define $\Phi^1_t \,:\!=\, {\textrm{e}}^{-(r+\lambda_1+\eta)t}V_1(xY^1_t){\bf 1}_{\{t<\xi_1\wedge T_1\}}$ , where $\xi_1 \,:\!=\, \inf\{t>0\mid \theta_t=0\}$ . In addition, recall that $T_1=\inf\{t>0\mid J_t=1\}$ , i.e. the first investment opportunity time. Noting that ${\mathbb P}(t<\xi_1\wedge T_1) = {\textrm{e}}^{-(\lambda_1+\eta)t}$ , we obtain

\[ {\mathbb E}[\Phi^1_t] = \bigg\{V_1(x) - {\mathbb E}\bigg[\int_0^t{\textrm{e}}^{-(r+\lambda_1+\eta)s}(\lambda_1V_0(xY^1_s) + \eta\overline{V}(xY^1_s))\,{\textrm{d}} s\bigg]\bigg\}{\textrm{e}}^{-(\lambda_1+\eta)t} \]

from Itô’s formula and (16). By the same sort of argument as in Step 1, (33) follows.

Step 3. This step is devoted to preparing some notation. First of all, we define two sequences of stopping times inductively as follows: $\xi^{0\to1}_0\equiv0$ and, for $k\in{\mathbb N}$ ,

\begin{align*} \xi^{1\to0}_k & \,:\!=\, \inf\{t>\xi^{0\to1}_{k-1}\mid\theta_{t-}=1,\theta_t=0\}, \\[5pt] \xi^{0\to1}_k & \,:\!=\, \inf\{t>\xi^{1\to0}_k\mid\theta_{t-}=0,\theta_t=1\}. \end{align*}

We call the time interval $[\xi^{0\to1}_{k-1},\xi^{0\to1}_k)$ the kth phase. Note that each phase begins when the regime changes into 1, moves to regime 0 midway through, and ends when it returns to regime 1 again. Moreover, we define the following two sequences of independent and identically distributed random variables: $U^1_k \,:\!=\, \xi^{1\to0}_k-\xi^{0\to1}_{k-1}$ , $U^0_k \,:\!=\, \xi^{0\to1}_k-\xi^{1\to0}_k$ . Note that each $U^i_k\sim\exp(\lambda_i)$ expresses the length of regime i in the kth phase, and $U^0_{k_0}$ and $U^1_{k_1}$ are independent for any $k_0,k_1\in{\mathbb N}$ . For $k\in{\mathbb N}$ , we denote by $\widetilde{T}_k$ the first investment opportunity time after the start of the kth phase, i.e.

\[ \widetilde{T}_k \,:\!=\, \inf\{t>\xi^{0\to1}_{k-1}\mid t=T_j\textrm{ for some }j\in{\mathbb N}\}. \]

Note that $\widetilde{T}_k$ is not necessarily in the kth phase, and $\theta_{\widetilde{T}_k}$ may take the value of 0. In addition, we define $U^{\textrm{P}}_k \,:\!=\, \widetilde{T}_k-\xi^{0\to1}_{k-1}\sim\exp(\eta)$ , which represents the length of time from the start of the kth phase until the arrival of the first investment opportunity.

Step 4. In this step, we show that

(34) \begin{equation} V_1(x) = {\mathbb E}^{1,x}[{\textrm{e}}^{-rT^1_1}\overline{V}(X_{T^1_1})]. \end{equation}

Recall that $T^1_1 = \inf\{t>0\mid\theta_t=1\textrm{ and }t=T_j\textrm{ for some }j\in{\mathbb N}\}$ , i.e. the time when stopping becomes feasible for the first time.

First of all, we can rewrite (33) as

(35) \begin{equation} V_1(x) = {\mathbb E}\Big[{\textrm{e}}^{-rU^1_1}V_0(xY^1_{U^1_1}){\bf 1}_{\{U^1_1<U^{\textrm{P}}_1\}} + {\textrm{e}}^{-rU^{\textrm{P}}_1}\overline{V}(xY^1_{U^{\textrm{P}}_1}){\bf 1}_{\{U^{\textrm{P}}_1<U^1_1\}}\Big], \end{equation}

since $U^1_1$ is independent of $U^{\textrm{P}}_1$ and ${\mathbb P}(U^{\textrm{P}}_1>t) = {\textrm{e}}^{-\eta t}$ . Using (32) and (35), we have

\begin{align*} V_1(x) & = {\mathbb E}\Big[{\textrm{e}}^{-rU^1_1}\big(E^{-rU^0_1}V_1\big(xY^1_{U^1_1}Y^0_{U^0_1}\big)\big){\bf 1}_{\{U^1_1<U^{\textrm{P}}_1\}} + {\textrm{e}}^{-rU^{\textrm{P}}_1}\overline{V}\big(xY^1_{U^{\textrm{P}}_1}\big){\bf 1}_{\{U^{\textrm{P}}_1<U^1_1\}}\Big] \\ & = {\mathbb E}\Big[{\textrm{e}}^{-r(U^1_1+U^0_1)}\Big({\textrm{e}}^{-rU^1_2}V_0\big(xY^1_{U^1_1}Y^0_{U^0_1}Y^1_{U^1_2}\big) {\bf 1}_{\{U^1_2<U^{\textrm{P}}_2\}} + {\textrm{e}}^{-rU^{\textrm{P}}_2}\overline{V}\big(xY^1_{U^1_1}Y^0_{U^0_1}Y^1_{U^{\textrm{P}}_2}\big){\bf 1}_{\{U^{\textrm{P}}_2<U^1_2\}}\Big) {\bf 1}_{\{U^1_1<U^{\textrm{P}}_1\}} \\ & \qquad + {\textrm{e}}^{-rU^{\textrm{P}}_1}\overline{V}\big(xY^1_{U^{\textrm{P}}_1}\big){\bf 1}_{\{U^{\textrm{P}}_1<U^1_1\}}\Big] \\ & = {\mathbb E}\Big[{\textrm{e}}^{-r(U^1_1+U^0_1+U^1_2)}V_0\big(xY^1_{U^1_1}Y^0_{U^0_1}Y^1_{U^1_2}\big) {\bf 1}_{\{U^1_1<U^{\textrm{P}}_1\}\cap\{U^1_2<U^{\textrm{P}}_2\}} \\ & \qquad + {\textrm{e}}^{-r(U^1_1+U^0_1+U^{\textrm{P}}_2)}\overline{V}\big(xY^1_{U^1_1}Y^0_{U^0_1}Y^1_{U^{\textrm{P}}_2}\big) {\bf 1}_{\{U^1_1<U^{\textrm{P}}_1\}\cap\{U^{\textrm{P}}_2<U^1_2\}} + {\textrm{e}}^{-rU^{\textrm{P}}_1}\overline{V}\big(xY^1_{U^{\textrm{P}}_1}\big) {\bf 1}_{\{U^{\textrm{P}}_1<U^1_1\}}\Big]. \end{align*}

Note that all random variables in the above are independent. Now, we write

\[ Z^0_n \,:\!=\, \exp\Bigg\{{-}r\Bigg(\sum_{k=1}^nU^1_k+\sum_{k=1}^{n-1}U^0_k\Bigg)\Bigg\} V_0\Bigg(x\prod_{k=1}^{n-1}\big(Y^1_{U^1_k}Y^0_{U^0_k}\big)Y^1_{U^1_n}\Bigg){\bf 1}_{\bigcap_{k=1}^n\{U^1_k<U^{\textrm{P}}_k\}} \]

for $n\in{\mathbb N}$ , $\overline{Z}_1 \,:\!=\, {{\textrm{e}}^{-rU^{\textrm{P}}_1}\overline{V}\big(xY^1_{U^{\textrm{P}}_1}\big){\bf 1}_{\{U^{\textrm{P}}_1<U^1_1\}}}$ , and

\[ \overline{Z}_k \,:\!=\, \exp\Bigg\{{-}r\Bigg(\sum_{j=1}^{k-1}(U^1_j+U^0_j)+U^{\textrm{P}}_k\Bigg)\Bigg\} \overline{V}\Bigg(x\prod_{j=1}^{k-1}\big(Y^1_{U^1_j}Y^0_{U^0_j}\big)Y^1_{U^{\textrm{P}}_k}\Bigg) {\bf 1}_{\bigcap_{j=1}^{k-1}\{U^1_j<U^{\textrm{P}}_j\}\cap\{U^{\textrm{P}}_k<U^1_k\}} \]

for $k\geq2$ . Note that, for $k\in{\mathbb N}$ , we can rewrite $\overline{Z}_k$ as

(36) \begin{equation} \overline{Z}_k = {\textrm{e}}^{-rT^1_1}\overline{V}(X_{T^1_1}){\bf 1}_{\{\xi^{0\to1}_{k-1}\leq T^1_1<\xi^{1\to0}_k\}} \end{equation}

when $\theta_0=1$ and $X_0=x$ . We then have, for any $n\in{\mathbb N}$ , $V_1(x) = {\mathbb E}\big[Z^0_n+\sum_{k=1}^n\overline{Z}_k\big]$ .

From Lemma 1 and the independence of all the random variables, it follows that

\begin{align*} {\mathbb E}[Z^0_n] & \leq {\mathbb E}\Bigg[\exp\Bigg\{{-}r\Bigg(\sum_{k=1}^nU^1_k+\sum_{k=1}^{n-1}U^0_k\Bigg)\Bigg\} V_0\Bigg(x\prod_{k=1}^nY^1_{U^1_k}\prod_{k=1}^{n-1}Y^0_{U^0_k}\Bigg)\Bigg] \\ & \leq {\mathbb E}\Bigg[c_0x\prod_{k=1}^n\big({\textrm{e}}^{-rU^1_k}Y^1_{U^1_k}\big) \prod_{k=1}^{n-1}\big({\textrm{e}}^{-rU^0_k}Y^0_{U^0_k}\big)\Bigg] \\ & = c_0x\prod_{k=1}^n{\mathbb E}\big[{\textrm{e}}^{-rU^1_k}Y^1_{U^1_k}\big] \prod_{k=1}^{n-1}{\mathbb E}\big[{\textrm{e}}^{-rU^0_k}Y^0_{U^0_k}\big] \leq c_0x\bigg(\frac{\lambda_1}{r-\mu_1+\lambda_1}\bigg)^n\bigg(\frac{\lambda_0}{r-\mu_0+\lambda_0}\bigg)^{n-1}, \end{align*}

since

\[ {\mathbb E}\Big[{\textrm{e}}^{-rU^i_k}Y^i_{U^i_k}\Big] = \int_0^\infty\lambda_i{\textrm{e}}^{-\lambda_it}\,{\textrm{e}}^{-rt}\,{\mathbb E}\big[Y^i_t\big]\,{\textrm{d}} t = \int_0^\infty\lambda_i{\textrm{e}}^{-(r-\mu_i+\lambda_i)t}\,{\textrm{d}} t = \frac{\lambda_i}{r-\mu_i+\lambda_i}, \]
where we have used the independence of $U^i_k\sim\exp(\lambda_i)$ and $Y^i$, the identity ${\mathbb E}[Y^i_t]={\textrm{e}}^{\mu_it}$, and the inequality $r-\mu_i+\lambda_i>0$, which follows from (5).

As a result, we obtain $\lim_{n\to\infty}{\mathbb E}[Z^0_n]=0$ . Since each $\overline{Z}_k$ is non-negative, the monotone convergence theorem implies that

\[ V_1(x) = \lim_{n\to\infty}{\mathbb E}\Bigg[Z^0_n+\sum_{k=1}^n\overline{Z}_k\Bigg] = {\mathbb E}\Bigg[\sum_{k=1}^\infty\overline{Z}_k\Bigg]. \]

Thus, (36) provides that

\[ V_1(x) = {\mathbb E}^{1,x}\Bigg[{\textrm{e}}^{-rT^1_1}\overline{V}(X_{T^1_1}) \sum_{k=1}^\infty{\bf 1}_{\{\xi^{0\to1}_{k-1}\leq T^1_1<\xi^{1\to0}_k\}}\Bigg] = {\mathbb E}^{1,x}\big[{\textrm{e}}^{-rT^1_1}\overline{V}(X_{T^1_1}){\bf 1}_{\{T^1_1<\infty\}}\big]. \]

On the other hand, ${\textrm{e}}^{-rt}\overline{V}(X_t)\leq {\textrm{e}}^{-rt}(c_1\vee\alpha)X_t$ holds. Since ${\textrm{e}}^{-rt}X_t$ is a non-negative supermartingale, it converges to 0 almost surely (a.s.) as $t\to\infty$ by, e.g., [Reference Karatzas and Shreve14, Problem 1.3.16]. As a result, we have

\[ {\mathbb E}^{1,x}\big[{\textrm{e}}^{-rT^1_1}\overline{V}(X_{T^1_1}){\bf 1}_{\{T^1_1<\infty\}}\big] = {\mathbb E}^{1,x}\big[{\textrm{e}}^{-rT^1_1}\overline{V}(X_{T^1_1})\big], \]

from which (34) follows.

Step 5. We define a filtration ${\mathbb G}=\{{\mathcal G}_n\}_{n\in{\mathbb N}_0}$ as ${\mathcal G}_n\,:\!=\,{\mathcal F}_{T^1_n}$ and a process $\overline{S}=\{\overline{S}_n\}_{n\in{\mathbb N}_0}$ as $\overline{S}_n\,:\!=\,{\textrm{e}}^{-rT^1_n}\overline{V}\big(X_{T^1_n}\big)$ , where ${\mathbb N}_0\,:\!=\,{\mathbb N}\cup\{0\}$ . We then have, for any $n\in{\mathbb N}_0$ ,

\begin{align*} \overline{S}_n & \geq {\textrm{e}}^{-rT^1_n}V_1\big(X_{T^1_n}\big) \\ & = {\textrm{e}}^{-rT^1_n}{\mathbb E}^{1,y}\Big[{\textrm{e}}^{-r\widehat{T}^1_1} \overline{V}\big(X_{\widehat{T}^1_1}\big)\Big]\Big|_{y=X_{T^1_n}} = {\mathbb E}^{1,x}\Big[{\textrm{e}}^{-rT^1_{n+1}}\overline{V}\big(X_{T^1_{n+1}}\big)\mid{\mathcal G}_n\Big] = {\mathbb E}^{1,x}[\overline{S}_{n+1}\mid{\mathcal G}_n], \end{align*}

where $\widehat{T}^1_1$ is an independent copy of $T^1_1$ . Thus, $\overline{S}$ is a non-negative ${\mathbb G}$ -supermartingale, and $\overline{S}_n$ converges to 0 a.s. as $n\to\infty$ . On the other hand, [Reference Dupuis and Wang7, Lemma 1] implies

(37) \begin{equation} {\mathcal T}=\{T^1_N\mid N\textrm{ is an ${\mathbb N}_\infty$-valued ${\mathbb G}$-stopping time}\}. \end{equation}

Since $\overline{S}_0\geq{\mathbb E}^{1,x}[\overline{S}_n]\geq0$ for any $n\in{\mathbb N}$, the optional sampling theorem (e.g. [Reference Dellacherie and Meyer5, Theorem 16, Chapter V]), together with (34), yields

\[ V_1(x) = {\mathbb E}^{1,x}[\overline{S}_1] \geq {\mathbb E}^{1,x}[\overline{S}_N] \geq {\mathbb E}^{1,x}\Big[{\textrm{e}}^{-rT^1_N}\pi\big(X_{T^1_N}\big)\Big] \]

for any ${\mathbb N}_\infty$ -valued ${\mathbb G}$ -stopping time N. Taking the supremum on the right-hand side over all such N, we obtain $v_1\leq V_1$ from (7) and (37).

Next, we show the reverse inequality $v_1\geq V_1$. To this end, recall that $N^*\,:\!=\,\inf\big\{n\in{\mathbb N}\mid X_{T^1_n}\geq x^*\big\}$ and define $\overline{S}^*_n\,:\!=\,\exp\{-rT^1_{N^*\wedge n}\}\overline{V}\big(X_{T^1_{N^*\wedge n}}\big)$ for $n\in{\mathbb N}_0$. As shown in Lemma 2 below, $\overline{S}^*=\{\overline{S}^*_n\}_{n\in{\mathbb N}_0}$ is a uniformly integrable martingale, which implies that

\begin{align*} V_1(x) \leq \overline{V}(x) = \overline{S}^*_0 & = \lim_{n\to\infty}{\mathbb E}^{1,x}\big[\overline{S}^*_n\big] \\ & = {\mathbb E}^{1,x}\Big[\lim_{n\to\infty}\overline{S}^*_n\Big] = {\mathbb E}^{1,x}\Big[{\textrm{e}}^{-rT^1_{N^*}}\overline{V}\big(X_{T^1_{N^*}}\big)\Big] = {\mathbb E}^{1,x}\Big[{\textrm{e}}^{-rT^1_{N^*}}\pi\big(X_{T^1_{N^*}}\big)\Big]\leq v_1(x), \end{align*}

since $\overline{V}(x)=\pi(x)$ for any $x\geq x^*$ , and $T^1_{N^*}\in{\mathcal T}$ . Consequently, we obtain

\[ v_1(x)=V_1(x)={\mathbb E}^{1,x}\Big[{\textrm{e}}^{-rT^1_{N^*}}\pi\big(X_{T^1_{N^*}}\big)\Big], \qquad x>0, \]

and thus the stopping time $T^1_{N^*}$ is optimal. This completes the proof of Theorem 1.

Lemma 2. $\overline{S}^*$ is a uniformly integrable martingale.

Proof. We prove this lemma with an argument similar to [Reference Dupuis and Wang7, Step 2, Section 3.2]. First of all, for any $n\in{\mathbb N}$ , we have

\begin{align*} {\mathbb E}^{1,x}\big[\overline{S}^*_n\mid{\mathcal G}_{n-1}\big] & = {\mathbb E}^{1,x}\big[{\textrm{e}}^{-rT^1_n}\overline{V}(X_{T^1_n}){\bf 1}_{\{N^*\geq n\}}\mid{\mathcal G}_{n-1}\big] + {\mathbb E}^{1,x}\big[{\textrm{e}}^{-rT^1_{N^*}}\overline{V}\big(X_{T^1_{N^*}}\big){\bf 1}_{\{N^*<n\}} \mid {\mathcal G}_{n-1}\big] \\ & = {\textrm{e}}^{-rT^1_{n-1}}{\mathbb E}^{1,y}\big[{\textrm{e}}^{-r\widehat{T}^1_1} \overline{V}\big(X_{\widehat{T}^1_1}\big)\big]\Big|_{y=X_{T^1_{n-1}}}{\bf 1}_{\{N^*\geq n\}} + {\textrm{e}}^{-rT^1_{N^*}}\overline{V}\big(X_{T^1_{N^*}}\big){\bf 1}_{\{N^*<n\}} \\ & = {\textrm{e}}^{-rT^1_{n-1}}V_1\big(X_{T^1_{n-1}}\big){\bf 1}_{\{N^*\geq n\}} + {\textrm{e}}^{-rT^1_{N^*}}\overline{V}\big(X_{T^1_{N^*}}\big){\bf 1}_{\{N^*<n\}} \\ & = {\textrm{e}}^{-rT^1_{n-1}}\overline{V}\big(X_{T^1_{n-1}}\big){\bf 1}_{\{N^*\geq n\}} + {\textrm{e}}^{-rT^1_{N^*}}\overline{V}\big(X_{T^1_{N^*}}\big){\bf 1}_{\{N^*<n\}} = \overline{S}^*_{n-1}, \end{align*}

where $\widehat{T}^1_1$ is an independent copy of $T^1_1$ . As a result, $\overline{S}^*$ is a ${\mathbb G}$ -martingale.

Next, we show the uniform integrability. To this end, it suffices to show that $\sup_{n\in{\mathbb N}}{\mathbb E}^{1,x}\big[\big|\overline{S}^*_n\big|^p\big]<\infty$ for some $p>1$. Since $\overline{V}(x)\leq(c_1\vee\alpha)x$, it is enough to verify that

(38) \begin{equation} \sup_{n\in{\mathbb N}}{\mathbb E}^{1,x}\Big[\exp\big\{{-}prT^1_{N^*\wedge n}\big\}X^p_{T^1_{N^*\wedge n}}\Big]<\infty \quad \textrm{for some }p>1. \end{equation}

Note that

\begin{align*} {\textrm{e}}^{-prt}X^p_t & = x^p\exp\bigg\{p\int_0^t\bigg(\mu_{\theta_s}-r-\frac{1}{2}\sigma^2_{\theta_s}\bigg)\,{\textrm{d}} s + p\int_0^t\sigma_{\theta_s}\,{\textrm{d}} W_s\bigg\} \\ & = x^p\exp\bigg\{p\int_0^t\bigg(\mu_{\theta_s}-r+\frac{p-1}{2}\sigma^2_{\theta_s}\bigg)\,{\textrm{d}} s - \int_0^t\frac{p^2}{2}\sigma^2_{\theta_s}\,{\textrm{d}} s + \int_0^tp\sigma_{\theta_s}\,{\textrm{d}} W_s\bigg\}. \end{align*}

Now, we take a $p>1$ satisfying $\mu_i-r+\frac{1}{2}\sigma_i^2(p-1)<0$ for any $i=0,1$ . Writing $M^*_n \,:\!=\, {\textrm{e}}^{-prT^1_n}X^p_{T^1_n}$ , $n\in{\mathbb N}_0$ , we can see that $M^*=\{M^*_n\}_{n\in{\mathbb N}_0}$ is a non-negative ${\mathbb G}$ -supermartingale. Thus, the optional sampling theorem, e.g. [Reference Dellacherie and Meyer5, Theorem 16, Chapter V], implies that

\[ {\mathbb E}^{1,x}\Big[\exp\big\{-prT^1_{N^*\wedge n}\big\}X^p_{T^1_{N^*\wedge n}}\Big] = {\mathbb E}^{1,x}\big[M^*_{N^*\wedge n}\big]\leq M^*_0=x^p \]

holds for any $n\in{\mathbb N}$, from which (38) follows. This completes the proof of Lemma 2.
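Although the argument above is purely analytic, the bound behind (38) can be illustrated numerically. The following Python snippet is a minimal Monte Carlo sketch, not part of the proof: all parameter values and the choice $p=1.5$ are assumptions for illustration only. It simulates the regime-switching geometric Brownian motion on a time grid (with the regime frozen over each step) and checks that the sample mean of ${\textrm{e}}^{-prt}X^p_t$ stays below $x^p$ when $p$ satisfies $\mu_i-r+\frac{1}{2}\sigma_i^2(p-1)<0$ for $i=0,1$.

```python
import numpy as np

# Illustrative parameters only; the regime switches 0 -> 1 at rate lam[0]
# and 1 -> 0 at rate lam[1], as in the model.
rng = np.random.default_rng(0)
r = 0.1
mu = (-0.1, 0.05)          # drifts in regimes 0 and 1
sigma = (0.2, 0.1)         # volatilities in regimes 0 and 1
lam = (2.0, 1.0)           # switching intensities
p = 1.5
assert all(mu[i] - r + (p - 1.0) * sigma[i] ** 2 / 2.0 < 0.0 for i in (0, 1))

def mean_discounted_power(T, x0=1.0, theta0=1, dt=1e-3, n_paths=10000):
    """Estimate E[e^{-p r T} X_T^p], freezing the regime over each step of size dt."""
    x = np.full(n_paths, x0)
    theta = np.full(n_paths, theta0)
    for _ in range(int(T / dt)):
        m = np.where(theta == 0, mu[0], mu[1])
        s = np.where(theta == 0, sigma[0], sigma[1])
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        x *= np.exp((m - 0.5 * s ** 2) * dt + s * dW)   # lognormal step, regime frozen
        jump = rng.random(n_paths) < np.where(theta == 0, lam[0], lam[1]) * dt
        theta = np.where(jump, 1 - theta, theta)
    return np.mean(np.exp(-p * r * T) * x ** p)

for T in (1.0, 2.0, 5.0):
    # Expected to stay below x0**p = 1 and to decrease in T, up to Monte Carlo error.
    print(T, mean_discounted_power(T))
```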

By Theorem 1, an optimal stopping time $\tau^*$ exists as a threshold type with the optimal threshold $x^*$ if $V_1$ in Proposition 1 satisfies the boundary conditions (13) and (14). Moreover, (20), (21), and (22) give expressions for the value functions $v_i$ , $i=0,1$ , and the optimal threshold $x^*$ , respectively. Although these expressions contain solutions to quartic equations, we can compute the value of $x^*$ numerically and illustrate the value functions $v_i$ , $i=0,1$ ; for example, for the case where $\pi(x)=(x-0.9)^+-0.1$ , $r=0.1$ , $\mu_0=-0.1$ , $\mu_1=0.05$ , $\sigma_0=0.2$ , $\sigma_1=0.1$ , $\lambda_0=2$ , $\lambda_1=1$ , and $\eta=1$ , we obtain approximately

\[ v_0(x) = \begin{cases} -4.05\times10^{-5}x^{17.18}+0.10x^{3.52}, & 0<x<x^*, \\[5pt] 0.16x^{-5.28}-0.12x^{-26.12}+0.80x-0.83, & x>x^*, \end{cases}\]
\[ v_1(x) = \begin{cases} -3.53\times10^{-5}x^{17.18}+0.11x^{3.52}, & 0<x<x^*, \\[5pt] 0.07x^{-5.28}-0.91x^{-26.12}+0.88x-0.87, & x>x^* \end{cases}\]

where the third decimal place of every coefficient and exponent has been rounded off, and $x^* = 1.250\,142\,442\,232\,948$. Figure 1 illustrates the functions $v_0(x)$, $v_1(x)$, and $\pi(x)$ by dashed, solid, and bold curves, respectively. Furthermore, it is immediately seen that the value functions $v_i$, $i=0,1$, are non-negative, non-decreasing, convex functions with $v_i(x)\sim a_ix$ as $x\to\infty$ for $i=0,1$. However, which of $v_0$ and $v_1$ is larger depends on the choice of parameters. The function $v_1$ is larger in the above example, but simply replacing the values of $\mu_0$ and $\mu_1$ with $0.08$ and $-0.5$, respectively, reverses the ordering of $v_0$ and $v_1$, as illustrated in Figure 2. In this case, $x^*$ takes the value $1.004\,076\,896\,125\,947$. Note that both cases discussed here satisfy all three conditions of Proposition 2.
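For reference, the rounded expressions above can be evaluated directly. The following Python snippet is a minimal sketch that reproduces the curves of Figure 1 from the printed coefficients; since the coefficients are rounded, the value matching $v_1(x^*)=\pi(x^*)$ holds only approximately here.

```python
import numpy as np
import matplotlib.pyplot as plt

# Rounded coefficients and threshold from the first example above.
x_star = 1.250142442232948

def pi(x):
    return np.maximum(x - 0.9, 0.0) - 0.1

def v0(x):
    return np.where(x < x_star,
                    -4.05e-5 * x**17.18 + 0.10 * x**3.52,
                    0.16 * x**(-5.28) - 0.12 * x**(-26.12) + 0.80 * x - 0.83)

def v1(x):
    return np.where(x < x_star,
                    -3.53e-5 * x**17.18 + 0.11 * x**3.52,
                    0.07 * x**(-5.28) - 0.91 * x**(-26.12) + 0.88 * x - 0.87)

x = np.linspace(0.05, 2.0, 400)
plt.plot(x, v0(x), '--', label='v0')        # dashed, as in Figure 1
plt.plot(x, v1(x), '-', label='v1')         # solid
plt.plot(x, pi(x), '-', lw=3, label='pi')   # bold
plt.axvline(x_star, color='gray', lw=0.5)
plt.legend(); plt.show()

# Approximate value matching at x*, up to rounding of the printed coefficients.
print(float(v1(x_star)), float(pi(x_star)))
```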

5. Asymptotic behaviors

This section discusses the asymptotic behaviors of the value functions $v_i$ , $i=0,1$ , and the optimal threshold $x^*$ when some parameter goes to $\infty$ . To compare with results in the existing literature, we consider the case where X is a geometric Brownian motion given as ${\textrm{d}} X_t = X_t(\mu\,{\textrm{d}} t + \sigma\,{\textrm{d}} W_t)$ , i.e. $\mu=\mu_0=\mu_1$ and $\sigma= \sigma_0=\sigma_1$ . Then, simple calculations show that

(39) \begin{equation} \left\{ \begin{array}{l} \beta^{\textrm{L}}_{\textrm{A}} = \dfrac{1}{2} - \dfrac{\mu}{\sigma^2} + \sqrt{\bigg(\dfrac{1}{2}-\dfrac{\mu}{\sigma^2}\bigg)^2+\dfrac{2(\lambda_0+\lambda_1+r)}{\sigma^2}}, \\[5pt] \beta^{\textrm{L}}_{\textrm{B}} = \dfrac{1}{2} - \dfrac{\mu}{\sigma^2} + \sqrt{\bigg(\dfrac{1}{2}-\dfrac{\mu}{\sigma^2}\bigg)^2 + \dfrac{2r}{\sigma^2}}, \\[5pt] \beta^{\textrm{U}}_{\textrm{A}} = \dfrac{1}{2} - \dfrac{\mu}{\sigma^2} - \sqrt{\bigg(\dfrac{1}{2}-\dfrac{\mu}{\sigma^2}\bigg)^2 + \dfrac{1}{\sigma^2}(\lambda_0+\lambda_1+\eta+2r-\sqrt{(\lambda_0+\lambda_1+\eta)^2-4\lambda_0\eta})}, \\[5pt] \beta^{\textrm{U}}_{\textrm{B}} = \dfrac{1}{2} - \dfrac{\mu}{\sigma^2} - \sqrt{\bigg(\dfrac{1}{2}-\dfrac{\mu}{\sigma^2}\bigg)^2 + \dfrac{1}{\sigma^2}(\lambda_0+\lambda_1+\eta+2r+\sqrt{(\lambda_0+\lambda_1+\eta)^2-4\lambda_0\eta})}. \end{array} \right.\end{equation}
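For concreteness, the four exponents in (39) can be evaluated directly. The helper below is a minimal Python sketch introduced only for this illustration; the parameter values are illustrative and not taken from the examples above.

```python
import numpy as np

# Direct evaluation of the four exponents in (39) for the single-regime dynamics
# mu = mu_0 = mu_1 and sigma = sigma_0 = sigma_1.
def betas(mu, sigma, r, lam0, lam1, eta):
    h = 0.5 - mu / sigma**2
    root = np.sqrt((lam0 + lam1 + eta)**2 - 4.0 * lam0 * eta)
    beta_LA = h + np.sqrt(h**2 + 2.0 * (lam0 + lam1 + r) / sigma**2)
    beta_LB = h + np.sqrt(h**2 + 2.0 * r / sigma**2)
    beta_UA = h - np.sqrt(h**2 + (lam0 + lam1 + eta + 2.0 * r - root) / sigma**2)
    beta_UB = h - np.sqrt(h**2 + (lam0 + lam1 + eta + 2.0 * r + root) / sigma**2)
    return beta_LA, beta_LB, beta_UA, beta_UB

# Illustrative parameter values.
print(betas(mu=0.05, sigma=0.1, r=0.1, lam0=2.0, lam1=1.0, eta=1.0))
```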

Figure 1. $v_0(x)$ , $v_1(x)$ , and $\pi(x)$ for $\mu_0=-0.1$ and $\mu_1=0.05$ .

Figure 2. $v_0(x)$ , $v_1(x)$ , and $\pi(x)$ for $\mu_0=0.08$ and $\mu_1=-0.5$ .

5.1. Asymptotic behaviors as $\eta\to\infty$

When $\eta\to\infty$ , investment opportunities arrive continuously, which means only the regime constraint remains. First of all, we have

(40) \begin{equation} \lim_{\eta\to\infty}\beta^{\textrm{U}}_{\textrm{A}} = \frac{1}{2} - \frac{\mu}{\sigma^2} - \sqrt{\bigg(\frac{1}{2}-\frac{\mu}{\sigma^2}\bigg)^2 + \frac{2(\lambda_0+r)}{\sigma^2}} = \zeta^{L,-}_0, \qquad \lim_{\eta\to\infty}\beta^{\textrm{U}}_{\textrm{B}} = -\infty,\end{equation}

but the values of $\beta^{\textrm{L}}_{\textrm{A}}$ and $\beta^{\textrm{L}}_{\textrm{B}}$ are independent of $\eta$ . In addition, it follows that

(41) \begin{equation} a_0 \to \frac{\alpha\lambda_0}{r-\mu+\lambda_0}, \qquad a_1 \to \alpha, \qquad b_0 \to -\frac{\alpha\widetilde{K}\lambda_0}{r+\lambda_0}, \qquad b_1 \to -\alpha\widetilde{K}\end{equation}

as $\eta\to\infty$ . By (19) and (40), we can see that

\[ P^{\textrm{L}}_{\textrm{A}} \to \frac{({-}\beta^{\textrm{L}}_{\textrm{B}}+1)\alpha}{\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{L}}_{\textrm{B}}}, \qquad Q^{\textrm{L}}_{\textrm{A}} \to \frac{\beta^{\textrm{L}}_{\textrm{B}}\alpha\widetilde{K}}{\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{L}}_{\textrm{B}}}, \qquad P^{\textrm{L}}_{\textrm{B}} \to \frac{(\beta^{\textrm{L}}_{\textrm{A}}-1)\alpha}{\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{L}}_{\textrm{B}}}, \qquad Q^{\textrm{L}}_{\textrm{B}} \to \frac{-\beta^{\textrm{L}}_{\textrm{A}}\alpha\widetilde{K}}{\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{L}}_{\textrm{B}}}\]

as $\eta\to\infty$, and $P^{\textrm{U}}_{\textrm{A}}$, $Q^{\textrm{U}}_{\textrm{A}}$, $P^{\textrm{U}}_{\textrm{B}}$, and $Q^{\textrm{U}}_{\textrm{B}}$ converge to 0, which implies that $\lim_{\eta\to\infty}v_1(x)=\alpha x -\alpha\widetilde{K}$ for $x>x^*$. Now, we assume that $\lim_{\eta\to\infty}v_1(x)\geq\pi(x)$ for any $x\in(0,x^*_\infty)$, which makes Theorem 1 available. By Proposition 1, Theorem 1, and (39), we obtain

\begin{align*} & \lim_{\eta\to\infty}v_1(x) \\ & = \begin{cases} \dfrac{({-}\beta^{\textrm{L}}_{\textrm{B}}+1)\alpha x^*_\infty+\beta^{\textrm{L}}_{\textrm{B}}\alpha\widetilde{K}}{\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{L}}_{\textrm{B}}} \bigg(\dfrac{x}{x^*_\infty}\bigg)^{\beta^{\textrm{L}}_{\textrm{A}}} + \dfrac{(\beta^{\textrm{L}}_{\textrm{A}}-1)\alpha x^*_\infty-\beta^{\textrm{L}}_{\textrm{A}}\alpha\widetilde{K}}{\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{L}}_{\textrm{B}}} \bigg(\dfrac{x}{x^*_\infty}\bigg)^{\beta^{\textrm{L}}_{\textrm{B}}}, & 0<x<x^*_\infty, \\[10pt] \alpha x - \alpha\widetilde{K}, & x>x^*_\infty, \end{cases} \end{align*}

where $\beta^{\textrm{L}}_{\textrm{A}}$ and $\beta^{\textrm{L}}_{\textrm{B}}$ are given in (39), and $x^*_\infty\,:\!=\,\lim_{\eta\to\infty}x^*$ is given in (42). Since $G^{\textrm{L}}_0(\beta^{\textrm{L}}_{\textrm{A}}) = \lambda_1$ and $G^{\textrm{L}}_0(\beta^{\textrm{L}}_{\textrm{B}}) = -\lambda_0$ , we have, for $0<x<x^*_\infty$ ,

\[ \lim_{\eta\to\infty}v_0(x) = -\frac{\lambda_0}{\lambda_1} \frac{({-}\beta^{\textrm{L}}_{\textrm{B}}+1)\alpha x^*_\infty+\beta^{\textrm{L}}_{\textrm{B}}\alpha\widetilde{K}}{\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{L}}_{\textrm{B}}} \bigg(\frac{x}{x^*_\infty}\bigg)^{\beta^{\textrm{L}}_{\textrm{A}}} + \frac{(\beta^{\textrm{L}}_{\textrm{A}}-1)\alpha x^*_\infty-\beta^{\textrm{L}}_{\textrm{A}}\alpha\widetilde{K}}{\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{L}}_{\textrm{B}}} \bigg(\frac{x}{x^*_\infty}\bigg)^{\beta^{\textrm{L}}_{\textrm{B}}}, \]

by (26). In addition, the continuity of $V_0$ at $x^*_\infty$, together with (41) and $P^{\textrm{U}}_{\textrm{B}},Q^{\textrm{U}}_{\textrm{B}}\to0$, implies that

\[ \lim_{\eta\to\infty}v_0(x) = \overline{A}^{\textrm{U}}_0\bigg(\frac{x}{x^*_\infty}\bigg)^{\zeta^{{\textrm{L}},-}_0} + \frac{\alpha\lambda_0}{r-\mu+\lambda_0}x - \frac{\alpha\widetilde{K}\lambda_0}{r+\lambda_0}, \qquad x>x^*_\infty,\]

where $\zeta^{{\textrm{L}},-}_0 = \lim_{\eta\to\infty}\beta^{\textrm{U}}_{\textrm{A}}$ by (40), and

\begin{equation*} \overline{A}^{\textrm{U}}_0 \,:\!=\, -\frac{\lambda_0}{\lambda_1} \frac{({-}\beta^{\textrm{L}}_{\textrm{B}}+1)\alpha x^*_\infty+\beta^{\textrm{L}}_{\textrm{B}}\alpha\widetilde{K}}{\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{L}}_{\textrm{B}}} + \frac{(\beta^{\textrm{L}}_{\textrm{A}}-1)\alpha x^*_\infty-\beta^{\textrm{L}}_{\textrm{A}}\alpha\widetilde{K}}{\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{L}}_{\textrm{B}}} - \frac{\alpha\lambda_0}{r-\mu+\lambda_0}x^*_\infty+\frac{\alpha\widetilde{K}\lambda_0}{r+\lambda_0}.\end{equation*}

From this last equation we have

\begin{align*} \lim_{\eta\to\infty}\frac{-\lambda_0P^{\textrm{U}}_{\textrm{A}}}{G^{\textrm{U}}_0(\beta^{\textrm{U}}_{\textrm{A}})} & = -\frac{\lambda_0}{\lambda_1}\frac{({-}\beta^{\textrm{L}}_{\textrm{B}}+1)\alpha}{\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{L}}_{\textrm{B}}} + \frac{(\beta^{\textrm{L}}_{\textrm{A}}-1)\alpha}{\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{L}}_{\textrm{B}}}-\frac{\alpha\lambda_0}{r-\mu+\lambda_0}, \\ \lim_{\eta\to\infty}\frac{-\lambda_0Q^{\textrm{U}}_{\textrm{A}}}{G^{\textrm{U}}_0(\beta^{\textrm{U}}_{\textrm{A}})} & = -\frac{\lambda_0}{\lambda_1}\frac{\beta^{\textrm{L}}_{\textrm{B}}\alpha\widetilde{K}}{\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{L}}_{\textrm{B}}} + \frac{-\beta^{\textrm{L}}_{\textrm{A}}\alpha\widetilde{K}}{\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{L}}_{\textrm{B}}} + \frac{\alpha\widetilde{K}\lambda_0}{r+\lambda_0}.\end{align*}

Substituting these limits and the limits obtained so far into (22), we get the following:

(42) \begin{align} & x^*_\infty = \nonumber \\[6pt] & \frac{(r-\mu+\lambda_0) \big\{\big(\lambda_0\big(\beta^{\textrm{L}}_{\textrm{A}}-\zeta^{{\textrm{L}},-}_0\big)\beta^{\textrm{L}}_{\textrm{B}} + \lambda_1\big(\beta^{\textrm{L}}_{\textrm{B}}-\zeta^{{\textrm{L}},-}_0\big)\beta^{\textrm{L}}_{\textrm{A}}\big)(r+\lambda_0) + \zeta^{{\textrm{L}},-}_0(\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{L}}_{\textrm{B}})\lambda_0\lambda_1\big\}\widetilde{K}} {(r+\lambda_0)\big\{\big(\lambda_0\big(\beta^{\textrm{L}}_{\textrm{A}}-\zeta^{{\textrm{L}},-}_0\big)(\beta^{\textrm{L}}_{\textrm{B}}-1) + \lambda_1\big(\beta^{\textrm{L}}_{\textrm{B}}-\zeta^{{\textrm{L}},-}_0\big)(\beta^{\textrm{L}}_{\textrm{A}}-1)\big)(r-\mu+\lambda_0) + \big(\zeta^{{\textrm{L}},-}_0-1\big)(\beta^{\textrm{L}}_{\textrm{A}}-\beta^{\textrm{L}}_{\textrm{B}})\lambda_0\lambda_1\big\}}.\end{align}

We can see that

\[ x^*_\infty \geq \frac{\beta^{\textrm{L}}_{\textrm{A}}}{\beta^{\textrm{L}}_{\textrm{A}}-1}\widetilde{K} \geq \widetilde{K} \]

holds. In addition, for the case where $\alpha=1$ and $K=0$ , we can confirm that the above result coincides with [Reference Nishihara19, Proposition 1].
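As a numerical sanity check, the limit objects of this subsection can be evaluated directly. The following Python sketch computes $\zeta^{{\textrm{L}},-}_0$ from (40) and $x^*_\infty$ from (42), and verifies the lower bound displayed above; the parameter values are illustrative and $\widetilde{K}$ is treated as a free input.

```python
import numpy as np

# Illustrative parameters for the eta -> infinity limit; K_tilde is a free input here.
mu, sigma, r, lam0, lam1, K_tilde = 0.05, 0.1, 0.1, 2.0, 1.0, 1.0

h = 0.5 - mu / sigma**2
beta_LA = h + np.sqrt(h**2 + 2.0 * (lam0 + lam1 + r) / sigma**2)
beta_LB = h + np.sqrt(h**2 + 2.0 * r / sigma**2)
zeta = h - np.sqrt(h**2 + 2.0 * (lam0 + r) / sigma**2)   # limit of beta^U_A in (40)

# x*_infinity from (42).
num = (r - mu + lam0) * ((lam0 * (beta_LA - zeta) * beta_LB
                          + lam1 * (beta_LB - zeta) * beta_LA) * (r + lam0)
                         + zeta * (beta_LA - beta_LB) * lam0 * lam1) * K_tilde
den = (r + lam0) * ((lam0 * (beta_LA - zeta) * (beta_LB - 1.0)
                     + lam1 * (beta_LB - zeta) * (beta_LA - 1.0)) * (r - mu + lam0)
                    + (zeta - 1.0) * (beta_LA - beta_LB) * lam0 * lam1)
x_star_inf = num / den

# Check the lower bound x*_infinity >= beta^L_A / (beta^L_A - 1) * K_tilde >= K_tilde.
print(x_star_inf, x_star_inf >= beta_LA / (beta_LA - 1.0) * K_tilde >= K_tilde)
```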

5.2. Asymptotic behaviors as $\lambda_0\to\infty$

As $\lambda_0$ tends to $\infty$ , the regime 0 vanishes, and only the constraint on the random arrival of investment opportunities remains. In other words, the model converges to the one treated in [Reference Dupuis and Wang7]. In this case, it follows that

(43) \begin{equation} \left\{ \begin{array}{l} \lim\limits_{\lambda_0\to\infty}\beta^{\textrm{L}}_{\textrm{A}} = \infty, \\ \lim\limits_{\lambda_0\to\infty}\beta^{\textrm{U}}_{\textrm{A}} = \dfrac{1}{2} - \dfrac{\mu}{\sigma^2} - \sqrt{\bigg(\dfrac{1}{2}-\dfrac{\mu}{\sigma^2}\bigg)^2 + \dfrac{2(\eta+r)}{\sigma^2}}, \\[5pt] \lim\limits_{\lambda_0\to\infty}\beta^{\textrm{U}}_{\textrm{B}} = -\infty, \\ \lim\limits_{\lambda_0\to\infty}a_0 = \lim\limits_{\lambda_0\to\infty}a_1 = \dfrac{\alpha\eta}{r-\mu+\eta}, \\[10pt] \lim\limits_{\lambda_0\to\infty}b_0 = \lim\limits_{\lambda_0\to\infty}b_1 = -\dfrac{\alpha\widetilde{K}\eta}{r+\eta}. \end{array} \right.\end{equation}

Note that the value of $\beta^{\textrm{L}}_{\textrm{B}}$ is independent of $\lambda_0$. We then have that $P^{\textrm{L}}_{\textrm{A}}$, $Q^{\textrm{L}}_{\textrm{A}}$, $P^{\textrm{U}}_{\textrm{B}}$, and $Q^{\textrm{U}}_{\textrm{B}}$ converge to 0 as $\lambda_0\to\infty$, and

\[ \lim_{\lambda_0\to\infty}P^{\textrm{L}}_{\textrm{B}} = \alpha, \quad \lim_{\lambda_0\to\infty}Q^{\textrm{L}}_{\textrm{B}} = -\alpha\widetilde{K}, \quad \lim_{\lambda_0\to\infty}P^{\textrm{U}}_{\textrm{A}} = \frac{(r-\mu)\alpha}{r-\mu+\eta}, \quad \lim_{\lambda_0\to\infty}Q^{\textrm{U}}_{\textrm{A}} = -\frac{r\alpha\widetilde{K}}{r+\eta}.\]

Moreover, $G^{\textrm{U}}_0(\beta^{\textrm{U}}_{\textrm{A}})\sim\eta-\lambda_0$ as $\lambda_0\to\infty$. In the same way as in the previous subsection, we obtain

\begin{align*} \lim_{\lambda_0\to\infty}x^* & = \frac{(r-\mu+\eta)\{(\beta^{\textrm{L}}_{\textrm{B}}-1)(r+\eta) + (1-\beta^{\textrm{U}}_{\textrm{A}})r + \eta\}\widetilde{K}} {(r+\eta)\{(\beta^{\textrm{L}}_{\textrm{B}}-1)(r-\mu+\eta) + (1-\beta^{\textrm{U}}_{\textrm{A}})(r-\mu)\}} \\[5pt] & = \frac{(r-\mu+\eta)((r+\eta)\beta^{\textrm{L}}_{\textrm{B}} - r\beta^{\textrm{U}}_{\textrm{A}})\widetilde{K}} {(r+\eta)((r-\mu+\eta)\beta^{\textrm{L}}_{\textrm{B}} - (r-\mu)\beta^{\textrm{U}}_{\textrm{A}}-\eta)}\ ({=}:\,x^*_\infty), \\[10pt] \lim_{\lambda_0\to\infty}v_1(x) & = \begin{cases} \alpha(x^*_\infty-\widetilde{K})\bigg(\dfrac{x}{x^*_\infty}\bigg)^{\beta^{\textrm{L}}_{\textrm{B}}}, & 0<x<x^*_\infty, \\[5pt] \bigg(\dfrac{(r-\mu)\alpha x^*_\infty}{r-\mu+\eta} - \dfrac{r\alpha\widetilde{K}}{r+\eta}\bigg) \bigg(\dfrac{x}{x^*_\infty}\bigg)^{\beta^{\textrm{U}}_{\textrm{A}}} + \dfrac{\alpha\eta}{r-\mu+\eta}x - \dfrac{\alpha\widetilde{K}\eta}{r+\eta}, & x>x^*_\infty, \end{cases}\end{align*}

and $\lim_{\lambda_0\to\infty}v_0(x)=\lim_{\lambda_0\to\infty}v_1(x)$ for any $x>0$ , where $\beta^{\textrm{U}}_{\textrm{A}}$ is the limit given in (43). As seen in [Reference Dupuis and Wang7], we can prove that

\[ x^*_\infty \geq \frac{r(r-\mu+\eta)}{(r-\mu)(r+\eta)}\widetilde{K} \geq \widetilde{K} \]

holds, and the boundary conditions (13) and (14) are satisfied. When $\alpha=1$ and $I=0$, the result in this subsection is consistent with [Reference Dupuis and Wang7].
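Similarly, the closed form for $x^*_\infty$ in this subsection and its lower bound can be evaluated directly; the following is a minimal Python sketch with illustrative parameter values, with $\widetilde{K}$ again treated as a free input.

```python
import numpy as np

# Illustrative parameters for the lambda_0 -> infinity limit; K_tilde is a free input.
mu, sigma, r, eta, K_tilde = 0.05, 0.1, 0.1, 1.0, 1.0

h = 0.5 - mu / sigma**2
beta_LB = h + np.sqrt(h**2 + 2.0 * r / sigma**2)
beta_UA = h - np.sqrt(h**2 + 2.0 * (eta + r) / sigma**2)   # limit of beta^U_A in (43)

# Closed form for x*_infinity displayed above.
x_star_inf = ((r - mu + eta) * ((r + eta) * beta_LB - r * beta_UA) * K_tilde
              / ((r + eta) * ((r - mu + eta) * beta_LB - (r - mu) * beta_UA - eta)))

# Lower bound x*_infinity >= r(r-mu+eta)/((r-mu)(r+eta)) * K_tilde >= K_tilde.
bound = r * (r - mu + eta) / ((r - mu) * (r + eta)) * K_tilde
print(x_star_inf, x_star_inf >= bound >= K_tilde)
```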

6. Conclusions

We considered a two-state regime-switching model and discussed the optimal stopping problem defined by (7) under two constraints on stopping: the random arrival of investment opportunities and a regime constraint. Under the assumption that the boundary conditions (13) and (14) are satisfied, we showed that an optimal stopping time exists as a threshold type. In addition, we derived expressions for the value functions $v_i$, $i=0,1$, and the optimal threshold $x^*$; these expressions involve solutions to quartic equations but are easily computed numerically. The asymptotic behaviors of $v_i$, $i=0,1$, and $x^*$ were also discussed. On the other hand, whether (13) and (14) are always satisfied remains an open question. As discussed in Remark 3, this is a difficult question. In addition, as future work, it would be possible to extend our model to one in which stopping is allowed even in regime 0, but only at the jump times of an independent Poisson process whose intensity differs from that in regime 1.

Funding information

Takuji Arai gratefully acknowledges the financial support of the MEXT Grant in Aid for Scientific Research (C) No. 18K03422.

Competing interests

There were no competing interests to declare during the preparation or publication of this article.

References

[1] Bensoussan, A., Yan, Z. and Yin, G. (2012). Threshold-type policies for real options using regime-switching models. SIAM J. Financial Math. 3, 667–689.
[2] Bollen, N. P. (1998). Valuing options in regime-switching models. J. Deriv. 6, 38–50.
[3] Buffington, J. and Elliott, R. J. (2002). American options with regime switching. Internat. J. Theoret. Appl. Finance 5, 497–514.
[4] Buffington, J. and Elliott, R. J. (2002). Regime switching and European options. In Stochastic Theory and Control, ed. B. Pasik-Duncan. Springer, Berlin, pp. 73–82.
[5] Dellacherie, C. and Meyer, P. A. (1982). Probabilities and Potential B (North-Holland Math. Studies 72). North-Holland, Amsterdam.
[6] Dixit, R. K. and Pindyck, R. S. (2012). Investment Under Uncertainty. Princeton University Press.
[7] Dupuis, P. and Wang, H. (2002). Optimal stopping with random intervention times. Adv. Appl. Prob. 34, 141–157.
[8] Egami, M. and Kevkhishvili, R. (2020). A direct solution method for pricing options in regime switching models. Math. Finance 30, 547–576.
[9] Elliott, R. J., Chan, L. and Siu, T. K. (2005). Option pricing and Esscher transform under regime switching. Ann. Finance 1, 423–432.
[10] Guo, X. (2001). An explicit solution to an optimal stopping problem with regime switching. J. Appl. Prob. 38, 464–481.
[11] Guo, X. and Zhang, Q. (2004). Closed-form solutions for perpetual American put options with regime switching. SIAM J. Appl. Math. 64, 2034–2049.
[12] Hobson, D. (2021). The shape of the value function under Poisson optimal stopping. Stoch. Process. Appl. 133, 229–246.
[13] Hobson, D. and Zeng, M. (2022). Constrained optimal stopping, liquidity and effort. Stoch. Process. Appl. 150, 819–843.
[14] Karatzas, I. and Shreve, S. (2012). Brownian Motion and Stochastic Calculus (Graduate Texts Math. 113). Springer, Berlin.
[15] Lange, R. J., Ralph, D. and Støre, K. (2020). Real-option valuation in multiple dimensions using Poisson optional stopping times. J. Financial Quant. Anal. 55, 653–677.
[16] Lempa, J. (2012). Optimal stopping with information constraint. Appl. Math. Optim. 66, 147–173.
[17] McDonald, R. and Siegel, D. (1986). The value of waiting to invest. Quart. J. Econom. 101, 707–727.
[18] Menaldi, J. L. and Robin, M. (2016). On some optimal stopping problems with constraint. SIAM J. Control Optim. 54, 2650–2671.
[19] Nishihara, M. (2020). Closed-form solution to a real option problem with regime switching. Operat. Res. Lett. 48, 703–707.
[20] Revuz, D. and Yor, M. (2013). Continuous Martingales and Brownian Motion (Grundlehren der mathematischen Wissenschaften 293). Springer, Berlin.