1. Introduction
The optimal single stopping problem, under both uncertainty and ambiguity (or Knightian uncertainty, especially drift uncertainty), has attracted a great deal of attention and been well studied; we may refer to the papers [Reference Bayraktar and Yao1], [Reference Bayraktar and Yao2], [Reference Cheng and Riedel6], [Reference Peskir and Shiryaev17]. Consider a filtered probability space $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\in[0,T]},\mathbb{P})$ satisfying the usual conditions of right-continuity and completeness. Given a nonnegative and adapted reward process $\{X_t\}_{t\in[0,T]}$ with some integrability and regularity conditions, we then define
\begin{align*} V_0\,:\!=\,\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[X_\tau], \end{align*}
where $\mathcal{S}_0$ is the collection of all stopping times taking values between 0 and T. The operator $\mathcal{E}[{\cdot}]$ corresponds to the classical expectation $\mathbb{E}[{\cdot}]$ when the agent faces only risk or uncertainty (i.e., he does not know the future state, but knows exactly the distribution of the reward process), while it corresponds to some nonlinear expectation if ambiguity is taken into account (i.e., the agent does not even have full confidence about the distribution). In both situations, the main objective is to compute the value $V_0$ as explicitly as possible and find some stopping time $\tau^*$ at which the supremum is attained, that is, $V_0=\mathcal{E}[X_{\tau^*}]$ . For this purpose, consider the value function
\begin{align*} V_t\,:\!=\,\mathop{\mathrm{ess\,sup}}\limits_{\tau\in\mathcal{S}_t}\mathcal{E}_t[X_\tau],\quad t\in[0,T], \end{align*}
where $\mathcal{S}_t$ is the set of stopping times greater than t. Assuming some regularity of the reward family $\{X_t\}_{t\in[0,T]}$ and some appropriate conditions on the nonlinear conditional expectation $\mathcal{E}_t[{\cdot}]$ , we prove that the process $\{V_t\}_{t\in[0,T]}$ admits a modification that is right-continuous with left limits (RCLL), which, for simplicity, is still denoted by $\{V_t\}_{t\in[0,T]}$ . Furthermore, the stopping time given in terms of the first hitting time
\begin{align*} \tau^*\,:\!=\,\inf\{t\in[0,T]\,:\,V_t=X_t\} \end{align*}
is optimal, and $\{V_t\}_{t\in[0,T]}$ is the smallest $\mathcal{E}$ -supermartingale (which reduces to the classical supermartingale when $\mathcal{E}[{\cdot}]$ is the linear expectation) dominating the reward process $\{X_t\}_{t\in[0,T]}$ . One of the most important applications of the single optimal stopping problem is pricing for American options.
Motivated by the pricing of financial derivatives with several exercise rights in the energy market (swing options), one needs to solve an optimal multiple stopping problem. Mathematically, given a reward process $\{X_t\}_{t\in[0,T]}$ , if an agent has d exercise rights, the price of this contract is defined as follows:
\begin{align*} V^d_0\,:\!=\,\sup_{(\tau_1,\cdots,\tau_d)\in\widetilde{S}_0^d}\mathcal{E}\Bigg[\sum_{j=1}^d X_{\tau_j}\Bigg]. \end{align*}
To avoid triviality, we assume that there exists a constant $\delta>0$ , which represents the length of the refracting time interval, such that the difference of any two successive exercises is at least $\delta$ . Therefore, $\widetilde{S}_0^d$ is the collection of stopping times $(\tau_1,\cdots,\tau_d)$ such that $\tau_1\geq 0$ and $\tau_j-\tau_{j-1}\geq \delta$ , for any $j=2,\cdots,d$ . There are several papers concerning this kind of problem. To name a few, [Reference Bender and Schoenmakers3] and [Reference Meinshausen and Hambly16] mainly deal with the discrete-time case, focusing on Monte Carlo methods and algorithms, while [Reference Carmona and Touzi5] investigates the continuous-time case, allowing the time horizon to be either finite or infinite. It is worth pointing out that none of the existing literature considers the multiple stopping problem under Knightian uncertainty.
In fact, to make the value function well-defined for both the single and the multiple stopping problem, the reward can be given by a set of random variables $\{X(\tau),\tau\in\mathcal{S}_0\}$ satisfying some compatibility properties, which means that we do not need to assume that the reward family can be aggregated into a progressive process. Under this weaker assumption on the reward family, [Reference El Karoui8] and [Reference Kobylanski, Quenez and Rouy-Mironescu15] establish the existence of the optimal stopping times for the single stopping problem and multiple stopping problem, respectively. Without aggregation of the reward family and the value function, the optimal stopping time is no longer given by the first hitting time of processes but by the essential infimum over an appropriate set of stopping times.
In the present work, we study the multiple stopping problem under Knightian uncertainty without the requirement of aggregation of the reward family. We will use the filtration-consistent nonlinear expectations established in [Reference Bayraktar and Yao1] to model Knightian uncertainty. First, we focus on the single stopping problem. As in the classical case, the value function is a kind of nonlinear supermartingale which is the smallest one dominating the reward family. Furthermore, the value function has the same regularity as the reward family in the single stopping case. Applying an approximation method, we prove the existence of the optimal stopping times under the assumption that the reward family is continuous along stopping times under nonlinear expectation (see Definition 3.3). It is important to note that in proving the existence of optimal stopping times, we need the assumption that the nonlinear expectation is sub-additive and positively homogeneous, which is to say that the nonlinear expectation is an upper expectation. Hence, this optimal stopping problem is in fact a ‘ $\sup_\tau \sup_P$ ’ problem.
For the multiple stopping case, one important observation is that the value function of the d-stopping problem coincides with that of the single stopping case corresponding to a new reward family, where the new reward family is given by the maximum of a set of value functions associated with the $(d-1)$ -stopping problem. Therefore, we may construct the optimal stopping times by an induction method, provided that this new reward family satisfies the conditions under which the optimal single stopping time exists. The main difficulty in this problem is due to some measurability issues. To overcome this difficulty, we need to slightly modify the reward family to a new one and to establish the regularity of the induced value functions.
Recall that in [Reference Bayraktar and Yao2] and [Reference Cheng and Riedel6], for the single stopping problems under Knightian uncertainty, the reward is given by an RCLL adapted process, and the optimal stopping time can be represented as a first hitting time, which provides an efficient way to calculate an optimal stopping time. In our setting, if the reward family satisfies some stronger regularity conditions than those required in the existence result, we can prove that the reward family and the associated value function can be aggregated into some progressively measurable processes. Therefore, in this case, the optimal stopping times can be interpreted in terms of hitting times of processes.
The paper is organized as follows. We first recall some basic notation and results about the $\mathbb{F}$ -expectation in Section 2. In Section 3, we investigate the properties of the value function and construct the optimal stopping times for the optimal single stopping problem under nonlinear expectations. Then we solve the optimal double stopping problem under nonlinear expectations in Section 4. In Section 5 we study some aggregation results when the reward family satisfies some strong regularity conditions, and then interpret the optimal stopping times as the first hitting times of processes. The optimal d-stopping problem appears in the appendix.
2. $\mathbb{F}$ -expectations and their properties
In this paper, we fix a finite time horizon $T>0$ . Let $(\Omega,\mathcal{F},\mathbb{P})$ be a complete probability space equipped with a filtration $\mathbb{F}=\{\mathcal{F}_t\}_{t\in[0,T]}$ satisfying the usual conditions of right-continuity and completeness. We denote by $L^0(\mathcal{F}_T)$ the collection of all $\mathcal{F}_T$ -measurable random variables. We first recall some basic notation and properties of the so-called $\mathbb{F}$ -expectation introduced in [Reference Bayraktar and Yao1]. Roughly speaking, the $\mathbb{F}$ -expectation is a nonlinear expectation defined on a subspace of $L^0(\mathcal{F}_T)$ which satisfies the following algebraic properties.
Definition 2.1. Let $\mathscr{D}_T$ denote the collection of all non-empty subsets $\Lambda$ of $L^0(\mathcal{F}_T)$ satisfying the following:
- (D1) $0,1\in\Lambda$ ;
- (D2) for any $\xi,\eta\in\Lambda$ and $A\in\mathcal{F}_T$ , the random variables $\xi+\eta$ , $I_A \xi$ , and $|\xi|$ all belong to $\Lambda$ ;
- (D3) for any $\xi,\eta\in L^0(\mathcal{F}_T)$ with $0\leq \xi\leq \eta$ , almost surely (a.s.), if $\eta\in\Lambda$ , then $\xi\in\Lambda$ .
Definition 2.2. ([Reference Bayraktar and Yao1]) An $\mathbb{F}$ -consistent nonlinear expectation ( $\mathbb{F}$ -expectation for short) is a pair $(\mathcal{E},\Lambda)$ in which $\Lambda\in\mathscr{D}_T$ and $\mathcal{E}$ denotes a family of operators $\{\mathcal{E}_t[{\cdot}]\,:\,\Lambda\mapsto\Lambda_t\,:\!=\,\Lambda\cap L^0(\mathcal{F}_t)\}_{t\in[0,T]}$ satisfying the following hypotheses for any $\xi,\eta\in\Lambda$ and $t\in[0,T]$ :
- (A1) Monotonicity (positively strict): $\mathcal{E}_t[\xi]\leq \mathcal{E}_t[\eta]$ a.s. if $\xi\leq \eta$ a.s. Moreover, if $0\leq \xi\leq \eta$ a.s. and $\mathcal{E}_0[\xi]=\mathcal{E}_0[\eta]$ , then $\xi=\eta$ a.s.
- (A2) Time-consistency: $\mathcal{E}_s[\mathcal{E}_t[\xi]]=\mathcal{E}_s[\xi]$ , a.s., for any $0\leq s\leq t\leq T$ .
- (A3) Zero–one law: $\mathcal{E}_t[\xi I_A]=\mathcal{E}_t[\xi]I_A$ , a.s., for any $A\in\mathcal{F}_t$ .
- (A4) Translation-invariance: $\mathcal{E}_t[\xi+\eta]=\mathcal{E}_t[\xi]+\eta$ , a.s., if $\eta\in\Lambda_t$ .
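As a simple sanity check, in the classical case $\mathcal{E}_t[{\cdot}]=\mathbb{E}[{\cdot}|\mathcal{F}_t]$ with $\Lambda=L^1(\mathcal{F}_T)$ , the hypotheses (A2)–(A4) are the tower property, ‘taking out what is known’, and linearity of conditional expectation:
\begin{align*} \mathbb{E}[\mathbb{E}[\xi|\mathcal{F}_t]|\mathcal{F}_s]=\mathbb{E}[\xi|\mathcal{F}_s],\qquad \mathbb{E}[\xi I_A|\mathcal{F}_t]=I_A\,\mathbb{E}[\xi|\mathcal{F}_t],\qquad \mathbb{E}[\xi+\eta|\mathcal{F}_t]=\mathbb{E}[\xi|\mathcal{F}_t]+\eta, \end{align*}
for $0\leq s\leq t\leq T$ , $A\in\mathcal{F}_t$ , and $\mathcal{F}_t$ -measurable $\eta\in L^1(\mathcal{F}_T)$ ; the strict part of (A1) holds since $0\leq\xi\leq\eta$ a.s. together with $\mathbb{E}[\xi]=\mathbb{E}[\eta]$ forces $\mathbb{E}[\eta-\xi]=0$ and hence $\xi=\eta$ a.s.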
Example 2.1. The following pairs are $\mathbb{F}$ -expectations:
- (1) $\big(\{\mathbb{E}_t[{\cdot}]\}_{t\in[0,T]}, L^1(\mathcal{F}_T)\big)$ : the classical expectation $\mathbb{E}$ .
- (2) $\big(\big\{\mathcal{E}^g_t[{\cdot}]\big\}_{t\in[0,T]},L^2(\mathcal{F}_T)\big)$ : the g-expectation with Lipschitz generator g(t, z) which is progressively measurable and square-integrable, and which satisfies $g(t,0)=0$ (see [Reference Bayraktar and Yao2], [Reference Coquet, Hu, Mémin and Peng7]).
- (3) $\big(\big\{\mathcal{E}^g_t[{\cdot}]\big\}_{t\in[0,T]},L^e(\mathcal{F}_T)\big)$ : the g-expectation with convex generator g(t, z) having quadratic growth in z and satisfying $g(t,0)=0$ , where $L^e(\mathcal{F}_T)\,:\!=\,\{\xi\in L^0(\mathcal{F}_T)\,:\, \mathbb{E}[\!\exp(\lambda|\xi|)]<\infty, \forall \lambda>0\}$ (see [Reference Bayraktar and Yao2]).
- (4) Let $\mathcal{P}$ be a set of probability measures satisfying the following conditions:
  - (i) For any $\mathbb{Q}\in \mathcal{P}$ , $\mathbb{Q}$ is equivalent to $\mathbb{P}$ and the density process is bounded away from zero by a constant.
  - (ii) Let $\mathbb{Q}^i\in\mathcal{P}$ with density process $\big\{q^i_t\big\}_{t\in[0,T]}$ , $i=1,2$ . Fix a stopping time $\tau$ . Define a new measure $\mathbb{Q}$ with density process $\{q_t\}_{t\in[0,T]}$ , where
\begin{align*} q_t=\begin{cases} q^1_t, &0\leq t\leq \tau;\\[4pt] \frac{q^1_\tau q^2_t}{q^2_\tau}, &\tau<t\leq T.\end{cases}\end{align*}Then we have $\mathbb{Q}\in\mathcal{P}$ .
For any $\xi\in L^2(\mathcal{F}_T)$ , set
\begin{align*} \mathcal{E}_t[\xi]=\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{inf}}}\limits_{\mathbb{Q}\in \mathcal{P}}\mathbb{E}_t^\mathbb{Q}[\xi].\end{align*}Then $\big(\{\mathcal{E}_t[{\cdot}]\}_{t\in[0,T]},L^2(\mathcal{F}_T)\big)$ is an $\mathbb{F}$ -expectation. Actually, this kind of nonlinear expectation can be regarded as a coherent risk measure. More examples can be found in [Reference Föllmer and Schied11].
The pair $(\{\underline{\mathcal{E}}_t[{\cdot}]\},\textrm{Dom}(\mathcal{E}))$ is almost an $\mathbb{F}$ -expectation, where
\begin{align*} \underline{\mathcal{E}}_t[\xi]\,:\!=\,\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{inf}}}\limits_{i\in\mathcal{I}}\mathcal{E}^i_t[\xi],\quad \xi\in\textrm{Dom}(\mathcal{E}), \end{align*}
and $\{\mathcal{E}^i\}_{i\in\mathcal{I}}$ is a stable class of $\mathbb{F}$ -expectations (for the definition of stable class, we may refer to Definition 3.2 in [Reference Bayraktar and Yao1]). By Proposition 4.1 in [Reference Bayraktar and Yao2], the family of operators $\{\underline{\mathcal{E}}_t[{\cdot}]\}_{t\in[0,T]}$ satisfies (A2)–(A4) in Definition 2.2 as well as
\begin{align*} \underline{\mathcal{E}}_t[\xi]\leq \underline{\mathcal{E}}_t[\eta]\quad \textrm{a.s. whenever } \xi\leq\eta \textrm{ a.s.} \end{align*}
That is to say, the nonlinear operator $\underline{\mathcal{E}}_t[{\cdot}]$ preserves all of the properties (A1)–(A4) except the strict comparison property.
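To illustrate the operator $\mathcal{E}_0[{\cdot}]$ of Example 2.1(4) on a toy one-period space (an illustration only, not taken from the references): let $\Omega=\{\omega_1,\omega_2\}$ , $\mathbb{P}(\omega_1)=\mathbb{P}(\omega_2)=\frac{1}{2}$ , and let $\mathcal{P}=\{\mathbb{P},\mathbb{Q}\}$ with $\frac{d\mathbb{Q}}{d\mathbb{P}}(\omega_1)=\frac{3}{2}$ and $\frac{d\mathbb{Q}}{d\mathbb{P}}(\omega_2)=\frac{1}{2}$ . For $\xi=I_{\{\omega_1\}}$ ,
\begin{align*} \mathcal{E}[\xi]=\min\big(\mathbb{E}[\xi],\mathbb{E}^{\mathbb{Q}}[\xi]\big)=\min\Big(\frac{1}{2},\frac{3}{4}\Big)=\frac{1}{2}, \qquad \mathcal{E}[1-\xi]=\min\Big(\frac{1}{2},\frac{1}{4}\Big)=\frac{1}{4}, \end{align*}
so $\mathcal{E}[\xi]+\mathcal{E}[1-\xi]=\frac{3}{4}<1=\mathcal{E}[1]$ , and the operator is genuinely nonlinear.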
For notational simplicity, we will substitute $\mathcal{E}[{\cdot}]$ for $\mathcal{E}_0[{\cdot}]$ . We denote the domain $\Lambda$ by Dom $(\mathcal{E})$ and introduce the following subsets of Dom $(\mathcal{E})$ :
\begin{align*} \textrm{Dom}^+(\mathcal{E})\,:\!=\,\{\xi\in \textrm{Dom}(\mathcal{E})\,:\,\xi\geq 0, \textrm{ a.s.}\} \quad\textrm{and}\quad \textrm{Dom}^{c}(\mathcal{E})\,:\!=\,\{\xi\in \textrm{Dom}(\mathcal{E})\,:\,\xi\geq c, \textrm{ a.s.}\},\ c\in\mathbb{R}, \end{align*}together with $\textrm{Dom}^+_\tau(\mathcal{E})\,:\!=\,\textrm{Dom}^+(\mathcal{E})\cap L^0(\mathcal{F}_\tau)$ for any stopping time $\tau$ .
Definition 2.3. ([Reference Bayraktar and Yao1].)
- (1) An $\mathbb{F}$ -adapted process $X=\{X_t\}_{t\in[0,T]}$ is called an $\mathcal{E}$ -process if $X_t\in$ Dom $(\mathcal{E})$ , for any $t\in[0,T]$ .
- (2) An $\mathcal{E}$ -process is said to be an $\mathcal{E}$ -supermartingale (resp. $\mathcal{E}$ -martingale, $\mathcal{E}$ -submartingale) if for any $0\leq s\leq t\leq T$ , $\mathcal{E}_s[X_t]\leq$ (resp. $=$ , $\geq$ ) $X_s$ , a.s.
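For instance, for any fixed $\xi\in$ Dom $(\mathcal{E})$ , the process $X^\xi_t\,:\!=\,\mathcal{E}_t[\xi]$ , $t\in[0,T]$ , is an $\mathcal{E}$ -martingale: by the time-consistency (A2), for any $0\leq s\leq t\leq T$ ,
\begin{align*} \mathcal{E}_s\big[X^\xi_t\big]=\mathcal{E}_s[\mathcal{E}_t[\xi]]=\mathcal{E}_s[\xi]=X^\xi_s,\quad \textrm{a.s.} \end{align*}
This is exactly the $\mathcal{E}$ -process used in the next paragraph.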
For any $\mathbb{F}$ -adapted process X, its right-limit process is defined as follows:
\begin{align*} X^+_t\,:\!=\,\lim_{n\rightarrow\infty}X_{q_n^+(t)},\quad t\in[0,T], \end{align*}
where $q_n^+(t)=\frac{[2^n t]}{2^n}T$ . Let X be an $\mathcal{E}$ -process. For any stopping time $\tau\in \mathcal{S}^F_0$ , where $\mathcal{S}_0^F$ is the collection of all stopping times taking values in a finite set, by Condition (D2) in Definition 2.1, it is easy to check that $X_\tau\in$ Dom $_\tau(\mathcal{E})$ . For any $\xi\in$ Dom $(\mathcal{E})$ , $\big\{X^\xi_t\big\}_{t\in[0,T]}$ is an $\mathcal{E}$ -process, where $X_t^\xi=\mathcal{E}_t[\xi]$ . Therefore, for any $\tau\in\mathcal{S}_0^F$ , we may define an operator $\mathcal{E}_\tau[{\cdot}]\,:\,\textrm{Dom}(\mathcal{E})\mapsto\textrm{Dom}_\tau(\mathcal{E})$ by
\begin{align*} \mathcal{E}_\tau[\xi]\,:\!=\,X^\xi_\tau. \end{align*}
In order to make the operator $\mathcal{E}_\tau[{\cdot}]$ well-defined for any stopping time $\tau$ , we need to put the following hypotheses on the $\mathbb{F}$ -expectation and the associated domain Dom $(\mathcal{E})$ :
- (H0) For any $A\in\mathcal{F}_T$ with $P(A)>0$ , we have $\lim_{n\rightarrow\infty}\mathcal{E}[nI_A]=\infty$ .
- (H1) For any $\xi\in\textrm{Dom}^+(\mathcal{E})$ and any $\{A_n\}_{n\in\mathbb{N}}\subset \mathcal{F}_T$ with $\lim_{n\rightarrow\infty}\uparrow I_{A_n}=1$ , a.s., we have $\lim_{n\rightarrow\infty}\uparrow\mathcal{E}[\xi I_{A_n}]=\mathcal{E}[\xi]$ .
- (H2) For any $\xi,\eta\in\textrm{Dom}^+(\mathcal{E})$ and any $\{A_n\}_{n\in\mathbb{N}}\subset \mathcal{F}_T$ with $\lim_{n\rightarrow\infty}\downarrow I_{A_n}=0$ , a.s., we have $\lim_{n\rightarrow\infty}\downarrow\mathcal{E}[\xi+\eta I_{A_n}]=\mathcal{E}[\xi]$ .
- (H3) For any $\xi\in\textrm{Dom}^+(\mathcal{E})$ and $\tau\in\mathcal{S}_0$ , $X^{\xi,+}_\tau\in \textrm{Dom}^+(\mathcal{E})$ .
- (H4) $\textrm{Dom}(\mathcal{E})\in \widetilde{\mathscr{D}_T}\,:\!=\,\{\Lambda\in\mathscr{D}_T\,:\,\mathbb{R}\subset\Lambda\}$ .
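For orientation, here is why the classical pair $\big(\{\mathbb{E}_t[{\cdot}]\}_{t\in[0,T]}, L^1(\mathcal{F}_T)\big)$ of Example 2.1(1) satisfies these hypotheses: (H0) holds because $\mathbb{E}[nI_A]=n\mathbb{P}(A)\rightarrow\infty$ whenever $\mathbb{P}(A)>0$ ; (H1) and (H2) are the monotone and dominated convergence theorems; (H3) follows from the optional sampling theorem applied to the right-continuous nonnegative martingale $\big\{X^{\xi,+}_t\big\}_{t\in[0,T]}$ ; and (H4) is immediate since constants are integrable.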
Example 2.2. The $\mathbb{F}$ -expectations (1)–(3) listed in Example 2.1 satisfy (H0)–(H4).
Under the above assumptions, [Reference Bayraktar and Yao1] shows that the process $\big\{X_t^{\xi,+}\big\}_{t\in[0,T]}$ is an RCLL modification of $\big\{X_t^\xi\big\}_{t\in[0,T]}$ for any $\xi\in \textrm{Dom}^+(\mathcal{E})$ . Then for any stopping time $\tau\in\mathcal{S}_0$ , the conditional $\mathbb{F}$ -expectation of $\xi\in \textrm{Dom}^+(\mathcal{E})$ at $\tau$ is given by
\begin{align*} \widetilde{\mathcal{E}}_\tau[\xi]\,:\!=\,X^{\xi,+}_\tau. \end{align*}
It is easy to check that $\widetilde{\mathcal{E}}_\tau[{\cdot}]$ is an operator from $\textrm{Dom}^+(\mathcal{E})$ to $\textrm{Dom}^+(\mathcal{E})_\tau\,:\!=\,\textrm{Dom}^+(\mathcal{E})\cap L^0(\mathcal{F}_\tau)$ . Furthermore, $\big\{\widetilde{\mathcal{E}}_t[{\cdot}]\big\}_{t\in[0,T]}$ defines an $\mathbb{F}$ -expectation and for any $\xi\in \textrm{Dom}^+(\mathcal{E})$ , $\big\{\widetilde{\mathcal{E}}_t[\xi]\big\}_{t\in[0,T]}$ is an RCLL modification of $\{\mathcal{E}_t[\xi]\}_{t\in[0,T]}$ . For simplicity, we still denote $\widetilde{\mathcal{E}}_t[{\cdot}]$ by $\mathcal{E}_t[{\cdot}]$ , and it satisfies the following properties.
Proposition 2.1 ([Reference Bayraktar and Yao1].) For any $\xi,\eta\in {Dom}^+(\mathcal{E})$ and $\tau\in \mathcal{S}_0$ , the following hold:
- (1) Monotonicity (positively strict): $\mathcal{E}_\tau[\xi]\leq \mathcal{E}_\tau[\eta]$ a.s. if $\xi\leq \eta$ a.s. Moreover, if $\xi\leq\eta$ a.s. and $\mathcal{E}_\sigma[\xi]=\mathcal{E}_\sigma[\eta]$ a.s. for some $\sigma\in\mathcal{S}_0$ , then $\xi=\eta$ a.s.
- (2) Time-consistency: $\mathcal{E}_\sigma[\mathcal{E}_\tau[\xi]]=\mathcal{E}_\sigma[\xi]$ , a.s., for any $\tau,\sigma\in\mathcal{S}_0$ with $\sigma\leq \tau$ .
- (3) Zero–one law: $\mathcal{E}_\tau[\xi I_A]=\mathcal{E}_\tau[\xi]I_A$ , a.s., for any $A\in\mathcal{F}_\tau$ .
- (4) Translation-invariance: $\mathcal{E}_\tau[\xi+\eta]=\mathcal{E}_\tau[\xi]+\eta$ , a.s., if $\eta\in{Dom}^+_\tau(\mathcal{E})$ .
- (5) Local property: $\mathcal{E}_\tau[\xi I_A+\eta I_{A^c}]=\mathcal{E}_\tau[\xi]I_A+\mathcal{E}_\tau[\eta]I_{A^c}$ , a.s., for any $A\in\mathcal{F}_\tau$ .
- (6) Constant-preserving: $\mathcal{E}_\tau[\xi]=\xi$ , a.s., if $\xi\in{Dom}^+_\tau(\mathcal{E})$ .
Proposition 2.2 ([Reference Bayraktar and Yao1].) Let X be a nonnegative $\mathcal{E}$ -supermartingale. Then we have the following:
- (1) Assume either that $\mathrm{ess\,sup}_{t\in \mathcal{I}}X_t\in{Dom}^+(\mathcal{E})$ (where $\mathcal{I}$ is the set of all dyadic rational numbers less than T) or that for any sequence $\{\xi_n\}_{n\in\mathbb{N}}\subset{Dom}^+(\mathcal{E})$ that converges a.s. to some $\xi\in L^0(\mathcal{F}_T)$ ,
\begin{align*} \liminf_{n\rightarrow\infty}\mathcal{E}[\xi_n]<\infty \textrm{ implies } \xi\in {Dom}^+(\mathcal{E}).\end{align*}Then for any $\tau\in\mathcal{S}_0$ , $X^+_\tau\in {Dom}^+(\mathcal{E})$ .
- (2) If $X_t^+\in {Dom}^+(\mathcal{E})$ for any $t\in[0,T]$ , then $X^+$ is an RCLL $\mathcal{E}$ -supermartingale such that for any $t\in[0,T]$ , $X_t^+\leq X_t$ , a.s.
- (3) Moreover, if the function $t\mapsto\mathcal{E}[X_t]$ from $[0, T]$ to $\mathbb{R}$ is right-continuous, then $X^+$ is an RCLL modification of X. Conversely, if X has a right-continuous modification, then the function $t\mapsto\mathcal{E}[X_t]$ is right-continuous.
Fatou’s lemma and the dominated convergence theorem still hold for the conditional $\mathbb{F}$ -expectation $\mathcal{E}_\tau[{\cdot}]$ .
Proposition 2.3 ([Reference Bayraktar and Yao1].) Let $\{\xi_n\}_{n\in\mathbb{N}}\subset {Dom}^+(\mathcal{E})$ converge a.s. to some $\xi\in {Dom}^+(\mathcal{E})$ . Then for any $\tau\in\mathcal{S}_0$ , we have
\begin{align*} \mathcal{E}_\tau[\xi]\leq \liminf_{n\rightarrow\infty}\mathcal{E}_\tau[\xi_n],\quad \textrm{a.s.} \end{align*}
Furthermore, if there exists an $\eta\in {Dom}^+(\mathcal{E})$ such that $\xi_n\leq \eta$ a.s. for any $n\in\mathbb{N}$ , then the limit $\xi\in {Dom}^+(\mathcal{E})$ , and for any $\tau\in \mathcal{S}_0$ we have
\begin{align*} \lim_{n\rightarrow\infty}\mathcal{E}_\tau[\xi_n]=\mathcal{E}_\tau[\xi],\quad \textrm{a.s.} \end{align*}
Throughout this paper, we assume that the $\mathbb{F}$ -expectation satisfies the hypotheses (H0)–(H4) and the following condition:
- (H5) If the sequence $\{\xi_n\}_{n\in\mathbb{N}}\subset \textrm{Dom}^+(\mathcal{E})$ converges to $\xi\in L^0(\mathcal{F}_T)$ a.s. and satisfies $\liminf_{n\rightarrow\infty}\mathcal{E}[\xi_n]<\infty$ , then we have $\xi\in\textrm{Dom}^+(\mathcal{E})$ .
This assumption is mainly used to prove the following lemma.
Lemma 2.1. Let $\Xi$ be a subset of ${Dom}^+(\mathcal{E})$ . Suppose that $\sup_{\xi\in\Xi}\mathcal{E}[\xi]<\infty$ . Set $\eta=\mathrm{ess\,sup}_{\xi\in\Xi}\xi$ . Then we have $\eta\in{Dom}^+(\mathcal{E})$ .
Proof. By the definition of essential supremum, there exists a sequence $\{\xi_n\}_{n\in\mathbb{N}}\subset \Xi$ such that $\xi_n\rightarrow \eta$ a.s. Since $\liminf_{n\rightarrow\infty}\mathcal{E}[\xi_n]\leq \sup_{\xi\in\Xi}\mathcal{E}[\xi]<\infty$ , Assumption (H5) implies that $\eta\in \textrm{Dom}^+(\mathcal{E})$ .
Remark 2.1. The classical expectation naturally satisfies Assumption (H5), by Fatou’s lemma. Consider the g-expectation introduced in Example 2.1(2). If we additionally assume that the function g is convex in its second component, we may check that Fatou’s lemma still holds for the g-expectation (see Proposition A.1 in [Reference Ferrari, Li and Riedel10]). Hence, in this case, Assumption (H5) is fulfilled. However, for some other g-expectations, Assumption (H5) may not hold. We refer to Example 5.1 in [Reference Bayraktar and Yao2] as a counterexample.
Remark 2.2. By Corollary 2.2 and Propositions 2.7–2.9 in [Reference Bayraktar and Yao1], all the properties in this section still hold for the random variables in $\textrm{Dom}^{c}(\mathcal{E})$ .
3. The optimal single stopping problem under nonlinear expectation
In this section, we study the optimal single stopping problem under the $\mathbb{F}$ -expectation. Throughout this paper, for each fixed stopping time $\tau$ , $\mathcal{S}_\tau$ represents the collection of all stopping times taking values between $\tau$ and T. We now introduce the definition of an admissible family, which can be interpreted as the payoff process in the classical case.
Definition 3.1. A family of random variables $\{X(\tau),\tau\in\mathcal{S}_0\}$ is said to be admissible if the following conditions are satisfied:
- (1) For all $\tau\in\mathcal{S}_0$ , $X(\tau)\in{Dom}^+_\tau(\mathcal{E})$ .
- (2) For all $\tau,\sigma\in\mathcal{S}_0$ , we have $X(\tau)=X(\sigma)$ a.s. on the set $\{\tau=\sigma\}$ .
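For instance, for any fixed $\xi\in{Dom}^+(\mathcal{E})$ , the family $X(\tau)\,:\!=\,\mathcal{E}_\tau[\xi]$ , $\tau\in\mathcal{S}_0$ , is admissible: $\mathcal{E}_\tau[\xi]\in{Dom}^+_\tau(\mathcal{E})$ by the construction in Section 2, and on the set $\{\tau=\sigma\}$ we have $\mathcal{E}_\tau[\xi]=X^{\xi,+}_\tau=X^{\xi,+}_\sigma=\mathcal{E}_\sigma[\xi]$ , since both are values of the same RCLL process.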
Remark 3.1. Since the $\mathbb{F}$ -expectation is translation-invariant, all the results in this paper still hold if the family of random variables $\{X(\tau),\tau\in\mathcal{S}_0\}$ is bounded from below.
Now consider the reward given by the admissible family $\{X(\tau),\tau\in\mathcal{S}_0\}$ . For each $S\in\mathcal{S}_0$ , the value function at time S takes the following form:
(3.1) \begin{equation} v(S)\,:\!=\,\mathop{\mathrm{ess\,sup}}\limits_{\tau\in\mathcal{S}_S}\mathcal{E}_S[X(\tau)]. \end{equation}
Definition 3.2. For each fixed $S\in\mathcal{S}_0$ , an admissible family $\{X(\tau),\tau\in\mathcal{S}_S\}$ is said to be an $\mathcal{E}$ -supermartingale system (resp. an $\mathcal{E}$ -martingale system) if, for any $\tau,\sigma\in\mathcal{S}_S$ with $\tau\leq \sigma$ a.s., we have
\begin{align*} \mathcal{E}_\tau[X(\sigma)]\leq X(\tau)\ (\textrm{resp. } \mathcal{E}_\tau[X(\sigma)]= X(\tau)),\quad \textrm{a.s.} \end{align*}
Proposition 3.1. If $\{X(\tau),\tau\in\mathcal{S}_0\}$ is an admissible family with $\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[X(\tau)]<\infty$ , then the value function $\{v(S), S\in\mathcal{S}_0\}$ defined by (3.1) has the following properties:
- (i) $\{v(S), S\in\mathcal{S}_0\}$ is an admissible family;
- (ii) $\{v(S), S\in\mathcal{S}_0\}$ is the smallest $\mathcal{E}$ -supermartingale system which is greater than $\{X(S), S\in\mathcal{S}_0\}$ ;
- (iii) for any $S\in\mathcal{S}_0$ , we have
(3.2) \begin{equation} \mathcal{E}[v(S)]=\sup_{\tau\in\mathcal{S}_S}\mathcal{E}[X(\tau)]. \end{equation}
Proof. The proof of this proposition is similar to the one in [Reference Grigorova13] (see Section 8) and to the proof of Propositions 1.1–1.3 in [Reference Kobylanski, Quenez and Rouy-Mironescu15], so we omit it.
Remark 3.2. (1) For the cases when the admissible family $\{v(\tau),\tau\in\mathcal{S}_0\}$ can be aggregated, we may refer to Proposition 5.1 in Section 5.
(2) It follows from Equation (3.2) that
\begin{align*} \mathcal{E}[v(S)]=\sup_{\tau\in\mathcal{S}_S}\mathcal{E}[X(\tau)]\leq \sup_{\tau\in\mathcal{S}_0}\mathcal{E}[X(\tau)]<\infty. \end{align*}
Consequently, we obtain that $v(S)<\infty$ , a.s., for any $S\in\mathcal{S}_0$ .
(3) Assumption (H5) is mainly used to make sure that the value function v(S) at any stopping time S belongs to $\textrm{Dom}^+(\mathcal{E})$ . We can drop this assumption by requiring that the admissible family satisfy $\eta\,:\!=\,\mathrm{ess\,sup}_{\tau\in\mathcal{S}_0} X(\tau)\in \textrm{Dom}(\mathcal{E})$ . Under this new condition, since $0\leq v(S)\leq \mathcal{E}_S[\eta]$ , it follows that $v(S)\in\textrm{Dom}^+(\mathcal{E})$ .
The following proposition gives the characterization of the optimal stopping time for the value function (3.1).
Proposition 3.2. For each fixed $S\in\mathcal{S}_0$ , let $\tau^*\in\mathcal{S}_S$ be such that $\mathcal{E}[X(\tau^*)]<\infty$ . The following statements are equivalent:
- (a) $\tau^*$ is S-optimal for v(S), i.e.,
(3.3) \begin{equation} v(S)=\mathcal{E}_S[X(\tau^*)]; \end{equation}
- (b) $v(\tau^*)=X(\tau^*)$ and $\mathcal{E}[v(S)]=\mathcal{E}[v(\tau^*)]$ ;
- (c) $\mathcal{E}[v(S)]=\mathcal{E}[X(\tau^*)]$ .
Proof. The proof is the same as that of Proposition 4.1 in [Reference Grigorova12], so we omit it.
Remark 3.3. It is worth mentioning that most of the results in Propositions 3.1 and 3.2 still hold if the reward family is not ‘adapted’, which means that $X(\tau)$ is $\mathcal{F}_T$ -measurable rather than $\mathcal{F}_\tau$ -measurable for any $\tau\in\mathcal{S}_0$ . In fact, the first difference is that $\{v(S),S\in\mathcal{S}_0\}$ is the smallest $\mathcal{E}$ -supermartingale system which is greater than $\{\mathcal{E}_S[X(S)],S\in\mathcal{S}_0\}$ . The second is that we need to replace $X(\tau^*)$ by $\mathcal{E}_{\tau^*}[X(\tau^*)]$ in the assertion (b) of Proposition 3.2. Furthermore, the results do not depend on the regularity of the reward family.
We now study the regularity of the value functions $\{v(\tau),\tau\in\mathcal{S}_0\}$ , after introducing the following definition of continuity.
Definition 3.3. An admissible family $\{X(\tau),\tau\in\mathcal{S}_0\}$ is said to be right-continuous (resp., left-continuous) along stopping times in $\mathcal{E}$ -expectation [RC $\mathcal{E}$ (resp., LC $\mathcal{E}$ )] if for any $\tau\in\mathcal{S}_0$ and $\{\tau_n\}_{n\in\mathbb{N}}\subset\mathcal{S}_0$ such that $\tau_n\downarrow \tau$ a.s. (resp., $\tau_n\uparrow\tau$ a.s.), we have $\mathcal{E}[X(\tau)]=\lim_{n\rightarrow\infty}\mathcal{E}[X(\tau_n)]$ . The family $\{X(\tau),\tau\in\mathcal{S}_0\}$ is called continuous along stopping times in $\mathcal{E}$ -expectation (C $\mathcal{E}$ ) if it is both RC $\mathcal{E}$ and LC $\mathcal{E}$ .
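For instance, if $\{X_t\}_{t\in[0,T]}$ is a nonnegative adapted process with continuous sample paths such that $X_t\leq\eta$ for all $t\in[0,T]$ and some $\eta\in{Dom}^+(\mathcal{E})$ , then the family $X(\tau)\,:\!=\,X_\tau$ , $\tau\in\mathcal{S}_0$ , is admissible by (D3) and is C $\mathcal{E}$ : whenever $\tau_n\rightarrow\tau$ monotonically, $X_{\tau_n}\rightarrow X_\tau$ a.s. by path continuity, and the dominated convergence theorem (Proposition 2.3) yields $\mathcal{E}[X(\tau_n)]\rightarrow\mathcal{E}[X(\tau)]$ .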
Proposition 3.3. Suppose that the admissible family $\{X(\tau),\tau\in\mathcal{S}_0\}$ is RC $\mathcal{E}$ with $\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[X(\tau)]<\infty$ . Then the family $\{v(\tau),\tau\in\mathcal{S}_0\}$ is RC $\mathcal{E}$ .
Proof. The proof is the same as that of Proposition 1.5 in [Reference Kobylanski, Quenez and Rouy-Mironescu15], with linear expectation $\mathbb{E}$ replaced by $\mathbb{F}$ -expectation $\mathcal{E}$ .
Remark 3.4. (i) By Remark 3.3, the above result does not rely on the ‘adapted’ property of the reward family.
(ii) For any fixed $\sigma\in\mathcal{S}_0$ , suppose that the admissible family $\{X(\tau),\tau\in\mathcal{S}_0\}$ is right-continuous in $\mathcal{E}$ -expectation along all stopping times greater than $\sigma$ , which means that if $S\in\mathcal{S}_\sigma$ and $\{S_n\}_{n\in\mathbb{N}}\subset\mathcal{S}_\sigma$ satisfy $S_n\downarrow S$ , then we have $\lim_{n\rightarrow\infty}\mathcal{E}[X(S_n)]=\mathcal{E}[X(S)]$ . Then we can prove that the family $\{v(\tau),\tau\in\mathcal{S}_0\}$ is right-continuous in $\mathcal{E}$ -expectation along all stopping times greater than $\sigma$ .
(iii) Furthermore, if the RC $\mathcal{E}$ admissible family $\{X(\tau),\tau\in\mathcal{S}_\sigma\}$ is only well-defined for the stopping times greater than $\sigma$ , then by a similar analysis as in the proof of Proposition 3.1, $\{v(S),S\in\mathcal{S}_0\}$ is still an $\mathcal{E}$ -supermartingale system, but without the dominance property that $v(S)\geq X(S)$ for $S\leq \sigma$ . We can then prove that the family $\{v(\tau),\tau\in\mathcal{S}_0\}$ is right-continuous in $\mathcal{E}$ -expectation along all stopping times greater than $\sigma$ .
In order to show the existence of the optimal stopping time for the value function v(S), we need furthermore to assume that the $\mathbb{F}$ -expectation $(\mathcal{E},\textrm{Dom}(\mathcal{E}))$ satisfies the following conditions:
- (H6) Sub-additivity: for any $\tau\in\mathcal{S}_0$ and $\xi,\eta\in\textrm{Dom}^+(\mathcal{E})$ , $\mathcal{E}_\tau[\xi+\eta]\leq \mathcal{E}_\tau[\xi]+\mathcal{E}_\tau[\eta]$ .
- (H7) Positive homogeneity: for any $\tau\in\mathcal{S}_0$ , $\lambda\geq 0$ , and $\xi\in \textrm{Dom}^+(\mathcal{E})$ , $\mathcal{E}_\tau[\lambda \xi]=\lambda\mathcal{E}_\tau[\xi]$ .
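A typical example satisfying (H6)–(H7) is the g-expectation of Example 2.1(2) with the sublinear generator $g(t,z)=\kappa|z|$ : since $z\mapsto\kappa|z|$ is sub-additive and positively homogeneous, the comparison theorem for BSDEs yields $\mathcal{E}^g_\tau[\xi+\eta]\leq\mathcal{E}^g_\tau[\xi]+\mathcal{E}^g_\tau[\eta]$ and $\mathcal{E}^g_\tau[\lambda\xi]=\lambda\mathcal{E}^g_\tau[\xi]$ for $\lambda\geq 0$ . The classical expectation of course satisfies both assumptions as well.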
The main idea in proving the existence is to apply an approximation method. More precisely, for $\lambda\in(0,1)$ , we define an $\mathcal{F}_S$ -measurable random variable $\tau^\lambda(S)$ by
\begin{align*} \tau^\lambda(S)\,:\!=\,\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{inf}}}\big\{\tau\in\mathcal{S}_S\,:\, \lambda v(\tau)\leq X(\tau), \textrm{ a.s.}\big\}. \end{align*}
Remark 3.5. It is important to note that this stopping time $\tau^\lambda(S)$ is defined as an essential infimum of a set of stopping times, instead of being defined trajectorially. It was introduced for the first time in [Reference Kobylanski, Quenez and Rouy-Mironescu15] (see Equation (1.6) in [Reference Kobylanski, Quenez and Rouy-Mironescu15]).
We will show that the sequence $\big\{\tau^\lambda(S)\big\}_{\lambda\in(0,1)}$ admits a limit as $\lambda$ goes to 1 and that the limit is the optimal stopping time. Our first observation is that the stopping time $\tau^\lambda(S)$ is $(1-\lambda)$ -optimal for the problem (3.2).
Lemma 3.1. Let the $\mathbb{F}$ -expectation $(\mathcal{E},{Dom}(\mathcal{E}))$ satisfy all the assumptions (H0)–(H7), and suppose that $\{X(\tau),\tau\in\mathcal{S}_0\}$ is a C $\mathcal{E}$ admissible family with $\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[X(\tau)]<\infty$ . For each $S\in\mathcal{S}_0$ and $\lambda\in(0,1)$ , the stopping time $\tau^\lambda(S)$ satisfies
\begin{align*} \lambda\,\mathcal{E}[v(S)]\leq \mathcal{E}\big[X\big(\tau^\lambda(S)\big)\big]. \end{align*}
This lemma is analogous to Lemma 1.1 in [Reference Kobylanski, Quenez and Rouy-Mironescu15], with the linear expectation $\mathbb{E}$ replaced by the $\mathbb{F}$ -expectation $\mathcal{E}$ . For the reader's convenience, we give a short proof here.
Proof. Fix $S\in\mathcal{S}_0$ and $\lambda\in(0,1)$ . For any $\tau^i\in\mathcal{S}_S$ such that $\lambda v\big(\tau^i\big)\leq X\big(\tau^i\big)$ , $i=1,2$ , it is easy to check that the stopping time $\tau$ defined by $\tau=\tau^1\wedge \tau^2$ preserves the same property as $\tau^i$ . Hence, there exists a sequence of stopping times $\{\tau_n\}_{n\in\mathbb{N}}\subset \mathcal{S}_S$ with $\lambda v(\tau_n)\leq X(\tau_n)$ , such that $\tau_n\downarrow \tau^\lambda(S)$ . By the monotonicity and positive homogeneity, we have $\lambda\mathcal{E}[v(\tau_n)]\leq \mathcal{E}[X(\tau_n)]$ for any $n\in\mathbb{N}$ . Letting n go to infinity and applying the RC $\mathcal{E}$ property of v and X, we obtain
(3.6) \begin{equation} \lambda\,\mathcal{E}\big[v\big(\tau^\lambda(S)\big)\big]\leq \mathcal{E}\big[X\big(\tau^\lambda(S)\big)\big]. \end{equation}
We claim that, for each $S\in\mathcal{S}_0$ and $\lambda\in(0,1)$ , the stopping time $\tau^\lambda(S)$ satisfies
(3.7) \begin{equation} \mathcal{E}_S\big[v\big(\tau^\lambda(S)\big)\big]=v(S). \end{equation}
Now, combining Equations (3.6) and (3.7), we obtain
\begin{align*} \lambda\,\mathcal{E}[v(S)]=\lambda\,\mathcal{E}\big[\mathcal{E}_S\big[v\big(\tau^\lambda(S)\big)\big]\big]=\lambda\,\mathcal{E}\big[v\big(\tau^\lambda(S)\big)\big]\leq \mathcal{E}\big[X\big(\tau^\lambda(S)\big)\big]. \end{align*}
The proof is complete.
Proof of Equation (3.7). Note that Equation (3.7) is the same as Equation (1.11) in [Reference Kobylanski, Quenez and Rouy-Mironescu15] with the classical conditional expectation $\mathbb{E}[{\cdot}|\mathcal{F}_S]$ replaced by the $\mathbb{F}$ -expectation $\mathcal{E}_S$ . Therefore, the proof is similar. For the convenience of the reader, we give a short proof here.
For simplicity, we denote $\mathcal{E}_S\big[v\big(\tau^\lambda(S)\big)\big]$ by $J^\lambda(S)$ . Recalling that $\{v(\tau),\tau\in\mathcal{S}_0\}$ is an $\mathcal{E}$ -supermartingale system, we have $J^\lambda(S)\leq v(S)$ . It remains to prove the reverse inequality.
We first claim that $\big\{J^\lambda(\tau),\tau\in\mathcal{S}_0\big\}$ is an $\mathcal{E}$ -supermartingale system. Indeed, let $S,S^{\prime}\in\mathcal{S}_0$ be such that $S\leq S^{\prime}$ . Noting that $\{v(\tau),\tau\in\mathcal{S}_0\}$ is an $\mathcal{E}$ -supermartingale system, it is easy to check that $S\leq \tau^\lambda(S)\leq \tau^\lambda(S^{\prime})$ and
where we have used the time-consistency in the first two equalities. Hence, the claim holds.
We then show that for any $S\in\mathcal{S}_0$ and $\lambda\in(0,1)$ , we have $L^\lambda(S)\geq X(S)$ , where $L^\lambda(S)=\lambda v(S)+(1-\lambda)J^\lambda(S)$ . Indeed, by a simple calculation, we obtain that
where the third equality is obtained from the zero–one law, in the first inequality we used that $J^\lambda(S)\geq 0$ , and the last inequality follows from $v(S)\geq X(S)$ and the definition of $\tau^\lambda(S)$ .
Since the $\mathbb{F}$ -expectation $(\mathcal{E},\textrm{Dom}(\mathcal{E}))$ satisfies (H6) and (H7), it is easy to check that $\big\{L^\lambda(\tau),\tau\in\mathcal{S}_0\big\}$ is an $\mathcal{E}$ -supermartingale system. By Proposition 3.1, we have $L^\lambda(S)\geq v(S)$ , which, together with $v(S)<\infty$ obtained in Remark 3.2, implies that $J^\lambda(S)\geq v(S)$ . The above analysis completes the proof.
Remark 3.6. Clearly, an $\mathbb{F}$ -expectation that satisfies (H6)–(H7) is a ‘positively convex’ $\mathbb{F}$ -expectation (see Definition 3.1 in [Reference Bayraktar and Yao1]). In [Reference Bayraktar and Yao2], the optimal single stopping problem induced by some positively convex $\mathbb{F}$ -expectation can be solved. Note that our assumptions (H6)–(H7) are stronger than the property of positive convexity. This is mainly because our optimal stopping time is not defined trajectorially and we need to ensure that the crucial inequality (3.6) holds.
Now we state the main result of this section, which is analogous to the results in Theorem 1.1 of [Reference Kobylanski, Quenez and Rouy-Mironescu15], shown in the case of a classical expectation.
Theorem 3.1. Under the same assumptions as those of Lemma 3.1, for each $S\in\mathcal{S}_0$ , there exists an optimal stopping time for v(S) defined by (3.1). Furthermore, the stopping time
is the minimal optimal stopping time for v(S).
Proof. The proof is the same as the proof of Theorem 1.1 in [Reference Kobylanski, Quenez and Rouy-Mironescu15] with $\mathbb{E}$ replaced by $\mathcal{E}$ , so we omit it.
Remark 3.7. Compared with the usual case, in which the optimal stopping time is defined trajectorially, our optimal stopping time is interpreted as an essential infimum, which makes it possible to relax the regularity conditions on the reward family. For example, in [Reference Cheng and Riedel6], the reward $\{X_t\}_{t\in[0,T]}$ is assumed to be RCLL and LC $\mathcal{E}$ . The price for this weaker regularity condition is that the $\mathbb{F}$ -expectation $(\mathcal{E},\textrm{Dom}(\mathcal{E}))$ should be positively homogeneous and sub-additive. These two assumptions on the $\mathbb{F}$ -expectation are mainly used to prove the existence of the optimal stopping time. For the properties which do not depend on the existence of the optimal stopping time, we may drop the positive homogeneity and sub-additivity of the $\mathbb{F}$ -expectation.
With the help of the existence of the optimal stopping time, we may establish the LC $\mathcal{E}$ property of the value function when the reward family is LC $\mathcal{E}$ , which is analogous to Proposition 1.6 in [Reference Kobylanski, Quenez and Rouy-Mironescu15].
Proposition 3.4. Under the same assumptions as those of Theorem 3.1, the value function $\{v(\tau),\tau\in\mathcal{S}_0\}$ is LC $\mathcal{E}$ .
Proof. The proof is similar to the proof of Proposition 1.6 in [Reference Kobylanski, Quenez and Rouy-Mironescu15] with $\mathbb{E}$ replaced by $\mathcal{E}$ , so we omit it.
Remark 3.8. (i) Let $\{S_n\}_{n\in\mathbb{N}}\subset \mathcal{S}_0$ be such that $S_n\uparrow S$ , where S is a stopping time. By an analysis similar to that of Remark 1.6 in [Reference Kobylanski, Quenez and Rouy-Mironescu15], we have
\begin{align*} \lim_{n\rightarrow\infty}\uparrow \tau^*(S_n)=\tau^*(S),\quad \textrm{a.s.;} \end{align*}
that is, the mapping $S\mapsto \tau^*(S)$ is left-continuous along stopping times.
(ii) Suppose that the family $\{X(\tau),\tau\in\mathcal{S}_0\}$ in Proposition 3.4 is only left-continuous in $\mathcal{E}$ -expectation along stopping times greater than $\sigma$ (i.e., if $\{\tau_n\}_{n\in\mathbb{N}}\subset \mathcal{S}_\sigma$ and $\tau_n\uparrow\tau$ , then we have $\mathcal{E}[X(\tau)]=\lim_{n\rightarrow\infty}\mathcal{E}[X(\tau_n)]$ ). If, for any $S\in\mathcal{S}_0$ , the optimal stopping time $\tau^*(S)$ defined by (3.8) is no less than $\sigma$ , then the value function $\{v(S),S\in\mathcal{S}_0\}$ is still LC $\mathcal{E}$ .
In the remainder of this section, suppose that the $\mathbb{F}$ -expectation $(\mathcal{E},\textrm{Dom}(\mathcal{E}))$ satisfies all the assumptions (H0)–(H7). Now, given an admissible family $\{X(\tau),\tau\in\mathcal{S}_0\}$ with $\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[X(\tau)]<\infty$ , for each fixed $\theta\in\mathcal{S}_0$ we define the following random variable:
Then, for each $\tau\in\mathcal{S}_0$ , $X^{\prime}(\tau)$ is $\mathcal{F}_\tau$ -measurable and bounded from below, and
In addition, $X^{\prime}(\tau)=X^{\prime}(\sigma)$ on the set $\{\tau=\sigma\}$ . Let us define
\begin{align*} v^{\prime}(S)\,:\!=\,\mathop{\mathrm{ess\,sup}}\limits_{\tau\in\mathcal{S}_S}\mathcal{E}_S[X^{\prime}(\tau)],\quad S\in\mathcal{S}_0. \end{align*}
Note that all the properties of $\mathcal{E}$ hold for random variables which are bounded from below (see Remark 2.2). Then all the results in Proposition 3.1, Proposition 3.2, and Remark 3.2 still hold if we replace X and v by X ′ and v ′, respectively. Furthermore, if the original admissible family $\{X(\tau),\tau\in\mathcal{S}_0\}$ is RC $\mathcal{E}$ , by Remark 3.4, the family $\{v^{\prime}(S),S\in\mathcal{S}_0\}$ is right-continuous in $\mathcal{E}$ -expectation along stopping times greater than $\theta$ . The following theorem indicates that there exists an optimal stopping time for v ′(S), and that the family $\{v^{\prime}(\tau),\tau\in\mathcal{S}_0\}$ is LC $\mathcal{E}$ (not only left-continuous in $\mathcal{E}$ -expectation along stopping times greater than $\theta$ ), provided that the family $\{X(\tau),\tau\in\mathcal{S}_0\}$ is C $\mathcal{E}$ .
Theorem 3.2. Let the $\mathbb{F}$ -expectation $(\mathcal{E},{Dom}(\mathcal{E}))$ satisfy all the assumptions (H0)–(H7), and let $\{X(\tau),\tau\in\mathcal{S}_0\}$ be a C $\mathcal{E}$ admissible family with $\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[X(\tau)]<\infty$ . For each $S\in\mathcal{S}_0$ , there exists an optimal stopping time for v′(S). Furthermore, the stopping time
is the minimal optimal stopping time for v′(S), and the value function $\{v^{\prime}(S),S\in\mathcal{S}_0\}$ is LC $\mathcal{E}$ .
Proof. For any $\lambda\in(0,1)$ , we define a random variable $\tau^{\prime,\lambda}(S)$ by
\begin{align*} \tau^{\prime,\lambda}(S)\,:\!=\,\mathop {{\rm{ess}}\,{\mkern 1mu} {\rm{inf}}}\big\{\tau\in \mathcal{S}_S\,:\, \lambda v^{\prime}(\tau)\leq X^{\prime}(\tau), \textrm{ a.s.}\big\}. \end{align*}
Since $v^{\prime}(S)\geq \mathcal{E}_S[X(T)]\geq 0$ for any $S\in\mathcal{S}_0$ , every stopping time $\tau\in\big\{\tau\in \mathcal{S}_S\,:\, \lambda v^{\prime}(\tau)\leq X^{\prime}(\tau), \textrm{ a.s.}\big\}$ satisfies $\tau\geq \theta$ . Therefore, we obtain that $\tau^{\prime,\lambda}(S)\geq \theta$ . It follows that for any fixed $S\in\mathcal{S}_0$ and any $\tau\geq \tau^{\prime,\lambda}(S)$ , we have $X^{\prime}(\tau)=X(\tau)$ . Modifying the proofs of Lemma 3.1, Theorem 3.1, and Proposition 3.4, we finally get the desired result.
4. The optimal double stopping problem under nonlinear expectation
In this section, we consider the optimal double stopping problem under the $\mathbb{F}$ -expectation satisfying Assumptions (H0)–(H5). We first introduce the definition of the appropriate reward family.
Definition 4.1. The family $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is said to be biadmissible if it has the following properties:
- (1) for all $\tau,\sigma\in\mathcal{S}_0$ , $X(\tau,\sigma)\in \textrm{Dom}^+_{\tau\vee\sigma}(\mathcal{E})$ ;
- (2) for all $\tau,\sigma,\tau^{\prime},\sigma^{\prime}\in\mathcal{S}_0$ , $X(\tau,\sigma)=X(\tau^{\prime},\sigma^{\prime})$ on the set $\{\tau=\tau^{\prime}\}\cap \{\sigma=\sigma^{\prime}\}$ .
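For instance (anticipating Example 4.2 below), if $\{Y(\tau),\tau\in\mathcal{S}_0\}$ is an admissible family, then $X(\tau,\sigma)\,:\!=\,Y(\tau)+Y(\sigma)$ is biadmissible: the sum is nonnegative, belongs to Dom $(\mathcal{E})$ by (D2), is $\mathcal{F}_{\tau\vee\sigma}$ -measurable, and on $\{\tau=\tau^{\prime}\}\cap\{\sigma=\sigma^{\prime}\}$ it coincides with $Y(\tau^{\prime})+Y(\sigma^{\prime})$ .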
Now, suppose we are given a biadmissible reward family $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ such that $\sup_{\tau,\sigma\in\mathcal{S}_0}\mathcal{E}[X(\tau,\sigma)]<\infty$ . Then the corresponding value function is defined as follows:
(4.1) \begin{equation} v(S)\,:\!=\,\mathop{\mathrm{ess\,sup}}\limits_{\tau,\sigma\in\mathcal{S}_S}\mathcal{E}_S[X(\tau,\sigma)]. \end{equation}
Similarly to the case of the single optimal stopping problem, we have the following properties.
Proposition 4.1. Suppose that $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is a biadmissible family such that $\sup_{\tau_1,\tau_2\in\mathcal{S}_0}\mathcal{E}[X(\tau_1,\tau_2)]<\infty$ ; then the value function $\{v(S),S\in\mathcal{S}_0\}$ defined by (4.1) satisfies the following properties:
- (i) for each $S\in\mathcal{S}_0$ , there exists a sequence of pairs of stopping times $\big\{\big(\tau_1^n,\tau_2^n\big)\big\}_{n\in\mathbb{N}}\subset \mathcal{S}_S\times\mathcal{S}_S$ such that $\mathcal{E}_S\big[X\big(\tau_1^n,\tau_2^n\big)\big]$ converges monotonically up to v(S);
- (ii) $\{v(S),S\in\mathcal{S}_0\}$ is an admissible family;
- (iii) $\{v(S),S\in\mathcal{S}_0\}$ is an $\mathcal{E}$ -supermartingale system;
- (iv) for each $S\in\mathcal{S}_0$ , we have
\begin{align*} \mathcal{E}[v(S)]=\sup_{\tau,\sigma\in\mathcal{S}_S}\mathcal{E}[X(\tau,\sigma)].\end{align*}
Proof. The proof is similar to the proof of Proposition 2.1 in [Reference Kobylanski, Quenez and Rouy-Mironescu15]. The main difference is in the method of proving that $v(S)\in \textrm{Dom}^+(\mathcal{E})$ . In fact, the measurability and nonnegativity follow from the definition of v(S). By (i), we have $v(S)=\lim_{n\rightarrow\infty}\mathcal{E}_S\big[X\big(\tau_1^n,\tau_2^n\big)\big]$ . Because
Assumption (H5) implies that $v(S)\in \textrm{Dom}^+(\mathcal{E})$ .
Remark 4.1. (i) Under the integrability condition $\sup_{\tau,\sigma\in\mathcal{S}_0}\mathcal{E}[X(\tau,\sigma)]<\infty$ and (iv) in Proposition 4.1, we conclude that $v(S)<\infty$ , a.s.
(ii) If Assumption (H5) does not hold, we need to assume furthermore that $\eta\,:\!=\,\mathrm{ess\,sup}_{\tau,\sigma\in\mathcal{S}_0}X(\tau,\sigma)\in\textrm{Dom}(\mathcal{E})$ in order to ensure that Proposition 4.1 still holds.
In the following, we will show that the value function defined by (4.1) coincides with the value function of the single stopping problem corresponding to a new reward family. Motivated by the results in [Reference Kobylanski, Quenez and Rouy-Mironescu15] (cf. (2.2) and (2.3) in [Reference Kobylanski, Quenez and Rouy-Mironescu15] for the definitions of $u_1$ , $u_2$ and the new reward), for each $\tau\in\mathcal{S}_0$ we define
and
(4.3) \begin{equation} \widetilde{X}(\tau)\,:\!=\,u_1(\tau)\vee u_2(\tau). \end{equation}
The first observation is that the family $\big\{\widetilde{X}(\tau),\tau\in\mathcal{S}_0\big\}$ is admissible.
Lemma 4.1. Suppose that $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is a biadmissible family such that $\sup_{\tau,\sigma}\mathcal{E}[X(\tau,\sigma)]<\infty$ . Then the family defined by (4.3) is admissible and we have $\sup_{\tau\in\mathcal{S}_0}\mathcal{E}\big[\widetilde{X}(\tau)\big]<\infty$ .
Proof. It is sufficient to prove that $\{u_1(\tau),\tau\in\mathcal{S}_0\}$ is admissible. Similarly as in the proof of Proposition 4.1, $u_1(\tau)$ is $\mathcal{F}_\tau$ -measurable and $u_1(\tau)\in\textrm{Dom}^+(\mathcal{E})$ . For each fixed $\tau,\sigma\in\mathcal{S}_0$ , set $A=\{\tau=\sigma\}$ and $\theta^A=\theta I_A+T I_{A^c}$ , where $\theta\in\mathcal{S}_\tau$ . It is easy to check that $A\in\mathcal{F}_{\tau\wedge\sigma}$ , $\theta^A\in\mathcal{S}_\sigma$ and
Taking the supremum over all $\theta\in\mathcal{S}_\tau$ implies that $u_1(\tau)\leq u_1(\sigma)$ on A. By symmetry, we have $u_1(\sigma)\leq u_1(\tau)$ on A. Therefore, $u_1(\tau)I_A=u_1(\sigma)I_A$ .
It is easy to verify that $0\leq \widetilde{X}(\tau)\leq v(\tau)$ . By Proposition 4.1, we have
The next theorem states that $\{v(S),S\in\mathcal{S}_0\}$ is the smallest $\mathcal{E}$ -supermartingale system such that $v(S)\geq \widetilde{X}(S)$ , for any $S\in\mathcal{S}_0$ . In other words, v(S) corresponds to the value function u(S) associated with the reward family $\{\widetilde{X}(S),S\in\mathcal{S}_0\}$ , where
(4.4) \begin{equation} u(S)\,:\!=\,\mathop{\mathrm{ess\,sup}}\limits_{\tau\in\mathcal{S}_S}\mathcal{E}_S\big[\widetilde{X}(\tau)\big]. \end{equation}
Theorem 4.1. Suppose that $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is a biadmissible family such that $\sup_{\tau_1,\tau_2\in\mathcal{S}_0}\mathcal{E}[X(\tau_1,\tau_2)]<\infty$ . Then, for each stopping time $S\in\mathcal{S}_0$ , we have $v(S)=u(S)$ .
Proof. The proof is similar to the proof of Theorem 2.1 in [Reference Kobylanski, Quenez and Rouy-Mironescu15], with the classical expectation $\mathbb{E}$ replaced by $\mathcal{E}$ , so we omit it.
With the help of the characterization of the value function v stated in Theorem 4.1, we may construct the optimal stopping times for either the multiple problem (4.1) or the single problem (4.2), (4.4) if we obtain the optimal stopping times for one of the problems.
Proposition 4.2. Fix $S\in\mathcal{S}_0$ . Suppose that $\big(\tau_1^*,\tau_2^*\big)\in\mathcal{S}_S\times\mathcal{S}_S$ is optimal for v(S). Then we have the following:
- (1) $\tau_1^*\wedge\tau_2^*$ is optimal for u(S);
- (2) $\tau_1^*$ is optimal for $u_2\big(\tau_1^*\big)$ on the set A;
- (3) $\tau_2^*$ is optimal for $u_1\big(\tau_2^*\big)$ on the set $A^c$ ,
where $A=\big\{\tau_1^*\leq \tau_2^*\big\}$ . On the other hand, suppose that the stopping times $\theta^*,\theta_i^*$ , $i=1,2$ , satisfy the following conditions:
- (i) $\theta^*$ is optimal for u(S);
- (ii) $\theta_1^*$ is optimal for $u_2(\theta^*)$ ;
- (iii) $\theta_2^*$ is optimal for $u_1(\theta^*)$ .
Set
where $B=\{u_1(\theta^*)\leq u_2(\theta^*)\}$ . Then the pair $\big(\sigma^*_1,\sigma^*_2\big)$ is optimal for v(S).
Proof. The proof is similar to the proof of Proposition 2.4 in [Reference Kobylanski, Quenez and Rouy-Mironescu15], so we omit it.
By Proposition 4.2, in order to obtain the multiple optimal stopping times for v(S) defined by (4.1), it is sufficient to derive the optimal stopping times for the auxiliary single stopping problems (4.2) and (4.4). For this purpose, according to Theorem 3.1, we need to study some regularity results for $\big\{\widetilde{X}(\tau),\tau\in\mathcal{S}_0\big\}$ . Before establishing this property, we first introduce the definition of continuity for a biadmissible family.
Definition 4.2. A biadmissible family $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is said to be right-continuous (resp., left-continuous) along stopping times in $\mathcal{E}$ -expectation [RC $\mathcal{E}$ (resp., LC $\mathcal{E}$ )] if, for any $\tau,\sigma\in\mathcal{S}_0$ and any sequences $\{\tau_n\}_{n\in\mathbb{N}}, \{\sigma_n\}_{n\in\mathbb{N}}\subset \mathcal{S}_0$ such that $\tau_n\downarrow \tau$ , $\sigma_n\downarrow\sigma$ (resp., $\tau_n\uparrow \tau$ , $\sigma_n\uparrow\sigma$ ), one has $\mathcal{E}[X(\tau,\sigma)]=\lim_{n\rightarrow\infty}\mathcal{E}[X(\tau_n,\sigma_n)]$ .
By a proof similar to that of Proposition 3.3, we have the following regularity result.
Proposition 4.3. If the biadmissible family $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is RC $\mathcal{E}$ , then the family $\{v(S),S\in\mathcal{S}_0\}$ defined by (4.1) is RC $\mathcal{E}$ .
The regularity of the new reward family $\big\{\widetilde{X}(\tau),\tau\in\mathcal{S}_0\big\}$ requires some strong continuity of the biadmissible family. Because of the nonlinearity of the expectation, the definition is slightly different from Definition 2.3 in [Reference Kobylanski, Quenez and Rouy-Mironescu15].
Definition 4.3. A biadmissible family $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is said to be uniformly right-continuous (resp., left-continuous) along stopping times in $\mathcal{E}$ -expectation [URC $\mathcal{E}$ (resp., ULC $\mathcal{E}$ )] if, for any $\sigma\in\mathcal{S}_0$ and any sequence $\{\sigma_n\}_{n\in\mathbb{N}}\subset \mathcal{S}_0$ such that $\sigma_n\downarrow\sigma$ (resp., $\sigma_n\uparrow\sigma$ ), one has
Furthermore, the biadmissible family is said to be uniformly continuous along stopping times in $\mathcal{E}$ -expectation (UC $\mathcal{E}$ ) if it is both URC $\mathcal{E}$ and ULC $\mathcal{E}$ .
Definition 4.4. An $\mathbb{F}$ -expectation $(\mathcal{E},\textrm{Dom}(\mathcal{E}))$ is said to be dominated by another $\mathbb{F}$ -expectation $\big(\widetilde{\mathcal{E}},\textrm{Dom}\big(\widetilde{\mathcal{E}}\big)\big)$ if $\textrm{Dom}(\mathcal{E})\subset \textrm{Dom}\big(\widetilde{\mathcal{E}}\big)$ and for any $\tau\in\mathcal{S}_0$ and $\xi,\eta\in \textrm{Dom}(\mathcal{E})$ , one has
Remark 4.2. From the requirements on the domain of $\mathcal{E}$ (see Definition 2.1 and Assumptions (H3)–(H5)), we may not conclude that $\xi-\eta\in \textrm{Dom}(\mathcal{E})$ for any $\xi,\eta\in\textrm{Dom}(\mathcal{E})$ . Therefore, the above definition of dominance cannot be written as
However, if $(\mathcal{E},\textrm{Dom}(\mathcal{E}))$ is dominated by $\big(\widetilde{\mathcal{E}},\textrm{Dom}\big(\widetilde{\mathcal{E}}\big)\big)$ , then for any $\tau\in\mathcal{S}_0$ and $\xi,\eta\in \textrm{Dom}^c(\mathcal{E})$ we have
(4.6) \begin{equation} \big|\mathcal{E}_\tau[\xi]-\mathcal{E}_\tau[\eta]\big|\leq \widetilde{\mathcal{E}}_\tau\big[|\xi-\eta|\big]. \end{equation}
First, if $\xi\in\textrm{Dom}^c(\mathcal{E})$ , by (D2) in Definition 2.1 and Assumption (H4), we have $\xi-c\in \textrm{Dom}^+(\mathcal{E})$ . Since $0\leq |\xi-\eta|=|(\xi-c)-(\eta-c)|\leq \xi+\eta-2c$ , by (D2) and (D3), it follows that $|\xi-\eta|\in\textrm{Dom}(\mathcal{E})$ . It is easy to check that
By the symmetry of $\xi$ and $\eta$ , we obtain Equation (4.6).
Example 4.1.
- (1) If for any $\tau\in\mathcal{S}_0$ and $\xi,\eta\in\textrm{Dom}(\mathcal{E})$ , $\mathcal{E}_\tau[\xi+\eta]\leq \mathcal{E}_\tau[\xi]+\mathcal{E}_\tau[\eta]$ , the $\mathbb{F}$ -expectation $(\mathcal{E},\textrm{Dom}(\mathcal{E}))$ is dominated by itself. In particular, $\big(\{\mathbb{E}_t[{\cdot}]\}_{t\in[0,T]}, L^1(\mathcal{F}_T)\big)$ is dominated by itself.
- (2) For a generator g with Lipschitz constant $\kappa$ , the g-expectation $\big(\big\{\mathcal{E}^g_t[{\cdot}]\big\}_{t\in[0,T]},L^2(\mathcal{F}_T)\big)$ is dominated by $\big(\big\{\mathcal{E}^{\tilde{g}}_t[{\cdot}]\big\}_{t\in[0,T]},L^2(\mathcal{F}_T)\big)$ , where $\tilde{g}(t,z)=\kappa|z|$ .
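The domination in (2) is the classical estimate for Lipschitz g-expectations (cf. [Reference Coquet, Hu, Mémin and Peng7]): since $g(t,z_1)-g(t,z_2)\leq\kappa|z_1-z_2|=\tilde{g}(t,z_1-z_2)$ , the comparison theorem for BSDEs gives, for $\xi,\eta\in L^2(\mathcal{F}_T)$ ,
\begin{align*} \mathcal{E}^g_\tau[\xi]-\mathcal{E}^g_\tau[\eta]\leq \mathcal{E}^{\tilde{g}}_\tau[\xi-\eta],\quad \textrm{a.s.}; \end{align*}
here $\xi-\eta\in L^2(\mathcal{F}_T)$ always, so the domain issue discussed in Remark 4.2 does not arise.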
Theorem 4.2. Let $\big(\widetilde{\mathcal{E}},{Dom}\big(\widetilde{\mathcal{E}}\big)\big)$ be an $\mathbb{F}$ -expectation satisfying Assumptions (H0)–(H5). Suppose that the $\mathbb{F}$ -expectation $(\mathcal{E},{Dom}(\mathcal{E}))$ is dominated by $\big(\widetilde{\mathcal{E}},{Dom}\big(\widetilde{\mathcal{E}}\big)\big)$ and the biadmissible family $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is URC $\widetilde{\mathcal{E}}$ with $\sup_{\tau,\sigma\in\mathcal{S}_0}\mathcal{E}[X(\tau,\sigma)]<\infty$ . Then the family $\big\{\widetilde{X}(\tau),\tau\in\mathcal{S}_0\big\}$ defined by (4.3) is RC $\mathcal{E}$ .
Proof. By the definition of $\widetilde{X}$ , we only need to prove that the family $\{u_1(\tau),\tau\in\mathcal{S}_0\}$ is RC $\mathcal{E}$ . Let $\{\theta_n\}_{n\in\mathbb{N}}$ be a sequence of stopping times such that $\theta_n\downarrow\theta$ . Since $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is URC $\widetilde{\mathcal{E}}$ , by Equation (4.6), we have
It follows that for each fixed $\sigma\in\mathcal{S}_0$ , the family $\{X(\tau,\sigma),\tau\in\mathcal{S}_\sigma\}$ is admissible and right-continuous in $\mathcal{E}$ -expectation along stopping times greater than $\sigma$ . (It is important to note that the whole family $\{X(\tau,\sigma),\tau\in\mathcal{S}_0\}$ may not be admissible, since $X(\tau,\sigma)$ is $\mathcal{F}_\sigma$ -measurable rather than $\mathcal{F}_\tau$ -measurable if $\tau\leq \sigma$ .) By Proposition 3.3 and Remark 3.4, we obtain that the family $\{U_1(S,\theta),S\in\mathcal{S}_0\}$ is right-continuous in $\mathcal{E}$ -expectation along stopping times greater than $\theta$ , where
(4.7) \begin{equation} U_1(S,\theta)\,:\!=\,\mathop{\mathrm{ess\,sup}}\limits_{\tau\in\mathcal{S}_{S\vee\theta}}\mathcal{E}_S[X(\tau,\theta)]. \end{equation}
That is, $\lim_{n\rightarrow\infty}\mathcal{E}[U_1(\theta_n,\theta)]=\mathcal{E}[U_1(\theta,\theta)]$ .
Now we state the following lemma, whose proof appears after the conclusion of the current argument.
Lemma 4.2. For any stopping times $\tau,\sigma_1,\sigma_2$ , we have
Using this lemma, by the URC $\widetilde{\mathcal{E}}$ property of $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ , as n goes to infinity, we obtain that
The above analysis indicates that
\begin{align*} \lim_{n\rightarrow\infty}\mathcal{E}[u_1(\theta_n)]=\mathcal{E}[u_1(\theta)]. \end{align*}
The proof is complete.
Proof of Lemma 4.2. By an analysis similar to the one in the proof of Proposition 3.1, for each fixed $\tau\in\mathcal{S}_0$ there exists a sequence of stopping times $\{S_m\}_{m\in\mathbb{N}}\subset \mathcal{S}_{\tau}$ such that
By a simple calculation, we have
The proof is complete.
The main difficulty is to prove the LC $\mathcal{E}$ property of the reward family $\big\{\widetilde{X}(\tau),\tau\in\mathcal{S}_0\big\}$ , because of some measurability issues. More precisely, let $\{\theta_n\}_{n\in\mathbb{N}}$ be a sequence of stopping times such that $\theta_n\uparrow\theta$ . We need to prove that $\lim_{n\rightarrow\infty}\mathcal{E}[u_1(\theta_n)]=\mathcal{E}[u_1(\theta)]$ . However, we cannot follow the proof of the RC $\mathcal{E}$ property in Theorem 4.2. The problem is that the relation $\lim_{n\rightarrow\infty}\mathcal{E}[U_1(\theta_n,\theta)]=\mathcal{E}[U_1(\theta,\theta)]=\mathcal{E}[u_1(\theta)]$ may not hold, where $U_1$ is given by (4.7). Although $\{U_1(S,\theta),S\in\mathcal{S}_0\}$ can be interpreted as the value function associated with the family $\{X(\tau_1, \theta),\tau_1\in\mathcal{S}_0\}$ , we cannot apply Proposition 3.4 since the reward $\{X(\tau_1,\theta),\tau_1\in\mathcal{S}_0\}$ is not admissible. The main idea is to modify this reward slightly and then apply the LC $\mathcal{E}$ property of the modified reward family stated in Theorem 3.2.
Theorem 4.3. Suppose that the $\mathbb{F}$ -expectation $(\mathcal{E},{Dom}(\mathcal{E}))$ satisfies (H0)–(H7) and the biadmissible family $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is UC ${\mathcal{E}}$ with $\sup_{\tau,\sigma\in\mathcal{S}_0}\mathcal{E}[X(\tau,\sigma)]<\infty$ . Then the family $\big\{\widetilde{X}(\tau),\tau\in\mathcal{S}_0\big\}$ defined by (4.3) is LC $\mathcal{E}$ .
Proof. By the definition of $\widetilde{X}$ , it suffices to prove that $\{u_1(\tau),\tau\in\mathcal{S}_0\}$ is LC $\mathcal{E}$ . Let $\{\theta_n\}_{n\in\mathbb{N}}$ be a sequence of stopping times such that $\theta_n\uparrow\theta$ . Now we define
It is easy to check that for any $\tau\in\mathcal{S}_0$ , $X^{\prime}(\tau,\theta)$ is $\mathcal{F}_\tau$ -measurable and bounded from below, with $\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[|X^{\prime}(\tau,\theta)|]<\infty$ . Therefore, by Theorem 3.2, the value function $\{v^{\prime}(S),S\in\mathcal{S}_0\}$ defined by
is LC $\mathcal{E}$ . It follows that $\lim_{n\rightarrow\infty}\mathcal{E}[v^{\prime}(\theta_n)]=\mathcal{E}[v^{\prime}(\theta)]$ . By the definition of X ′, it is easy to check that
which implies that $\lim_{n\rightarrow\infty}\mathcal{E}[v^{\prime}(\theta_n)]=\mathcal{E}[u_1(\theta)]$ . Note that for any $\tau\in\mathcal{S}_{\theta_n}$ , we have
Set $\eta=1+\mathrm{ess\,sup}_{\tau,\sigma\in\mathcal{S}_0}X(\tau,\sigma)$ . By Lemma 2.1, we have $\eta\in\textrm{Dom}^+(\mathcal{E})$ . By a similar analysis as in Lemma 4.2, we obtain that
where $A_n=\{\theta_n<\theta\}$ . For the first part of the right-hand side, it is easy to check that
Noting that $I_{A_n}\downarrow 0$ and $\{A_n\}_{n\in\mathbb{N}}\subset\mathcal{F}_T$ , by Assumption (H2), we obtain that $\lim_{n\rightarrow\infty}\mathcal{E}[\eta I_{A_n}]=0$ . Finally, we get that
\begin{align*} \lim_{n\rightarrow\infty}\mathcal{E}[u_1(\theta_n)]=\mathcal{E}[u_1(\theta)]. \end{align*}
The proof is complete.
Now we can establish the existence of optimal stopping times for the value function defined by (4.1).
Theorem 4.4. Suppose that the $\mathbb{F}$ -expectation $(\mathcal{E},{Dom}(\mathcal{E}))$ satisfies (H0)–(H7) and the biadmissible family $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is UC ${\mathcal{E}}$ . Then there exists a pair of optimal stopping times $\big(\tau_1^*,\tau_2^*\big)$ for the value function v(S) defined by (4.1).
Proof. The proof is similar to the proof of Theorem 2.3 in [Reference Kobylanski, Quenez and Rouy-Mironescu15], so we omit it.
Since v defined by (4.1) coincides with the value function of the optimal single stopping problem with the reward family $\big\{\widetilde{X}(\tau),\tau\in\mathcal{S}_0\big\}$ , by Propositions 3.3 and 3.4, $\{v(\tau),\tau\in\mathcal{S}_0\}$ is C $\mathcal{E}$ if $\big\{\widetilde{X}(\tau),\tau\in\mathcal{S}_0\big\}$ is C $\mathcal{E}$ .
Corollary 4.1. Under the same hypotheses as those of Theorem 4.4, the family $\{v(\tau),\tau\in\mathcal{S}_0\}$ defined by (4.1) is C $\mathcal{E}$ .
Remark 4.3. By Proposition 3.3, the RC $\mathcal{E}$ property of $\{v(\tau),\tau\in\mathcal{S}_0\}$ does not depend on the existence of optimal stopping times. Thus, the conditions can be weakened to those of Theorem 4.2 to guarantee the RC $\mathcal{E}$ property of $\{v(\tau),\tau\in\mathcal{S}_0\}$ .
Remark 4.4. The optimal d-stopping time problem under nonlinear expectation is similar to the optimal d-stopping problem under classical expectation (cf. Section 3 in [Reference Kobylanski, Quenez and Rouy-Mironescu15]). We only list the results in the appendix.
Example 4.2. Application to swing options. Suppose that $T=\infty$ and the $\mathbb{F}$ -expectation satisfies all the assumptions (H0)–(H7). Recall that a swing option is a contract which gives its holder the right to exercise it more than once, with the exercise times separated by a fixed amount of time $\delta>0$ , called the refracting time. Now, consider the swing option with two exercise times. If the holder exercises it at a stopping time $\tau$ , then she will get the payoff $Y(\tau)$ . The objective of the holder is to obtain the maximal expected payoff, i.e., at each stopping time S,
It is easy to check that the value does not change if we interchange the roles of $\tau_1$ and $\tau_2$ ; mathematically,
By Equations (4.2) and (4.3), we have
where $Z(\tau)=\mathrm{ess\,sup}_{\sigma \in \mathcal{S}_{\tau+\delta}}\mathcal{E}_\tau[Y(\sigma)]$ . If $\{Y(\tau),\tau\in\mathcal{S}_0\}$ is an admissible family with $\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[Y(\tau)]<\infty$ , then by Theorem 4.1, we have $v(S)=u(S)$ , where
\begin{align*} u(S)=\mathop{\mathrm{ess\,sup}}\limits_{\tau\in\mathcal{S}_S}\mathcal{E}_S[Y(\tau)+Z(\tau)]. \end{align*}
This result is similar to Proposition 3.2 in [Reference Carmona and Dayanik4].
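For orientation, and in the spirit of Proposition 3.2 in [Reference Carmona and Dayanik4], the reduction described above presumably reads, with the identification $X(\tau_1,\tau_2)=Y(\tau_1)+Y(\tau_2)$, as
\[
v(S)=u(S)=\mathrm{ess\,sup}_{\tau\in\mathcal{S}_S}\mathcal{E}_S\big[Y(\tau)+Z(\tau)\big],\qquad
Z(\tau)=\mathrm{ess\,sup}_{\sigma\in\mathcal{S}_{\tau+\delta}}\mathcal{E}_\tau[Y(\sigma)];
\]
in words, one first prices the remaining exercise right after the refracting period (the family Z), and then solves a single stopping problem for the aggregated payoff $Y(\tau)+Z(\tau)$.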
If we additionally assume that the admissible family $\{Y(\tau),\tau\in \mathcal{S}_0\}$ is continuous along stopping times (cf. Definition 5.1), by Proposition 2.3 and noting that $\eta\,:\!=\,\mathrm{ess\,sup}_{\tau\in\mathcal{S}_0}Y(\tau)\in \textrm{Dom}(\mathcal{E})$, we have that the biadmissible family $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is UC $\mathcal{E}$, where $X(\tau,\sigma)=Y(\tau)+Y(\sigma)$. Let $\tau_1^*$ be the minimal optimal stopping time for the value function u(S), and let $\tau_2^*$ be the minimal optimal stopping time for the value function $Z\big(\tau_1^*\big)$. By Theorem 4.4, the pair $\big(\tau_1^*,\tau_2^*\big)$ is an optimal pair of stopping times for v(S); this is similar to Proposition 5.4 in [Reference Carmona and Dayanik4].
5. Aggregation of the optimal multiple stopping problem
We first recall some basic results from [Reference El Karoui and Quenez9]. Let $\big(\big\{\mathcal{E}^g_t[{\cdot}]\big\}_{t\in[0,T]}, L^2(\mathcal{F}_T)\big)$ be the g-expectation satisfying the assumptions in Example 2.2. Given an adapted, nonnegative process $\{X_t\}_{t\in[0,T]}$ with continuous sample paths and $\mathbb{E}\big[\!\sup_{t\in[0,T]}X_t^2\big]<\infty$, the value function is defined by
By Proposition 5.5 in [Reference El Karoui and Quenez9], the first hitting time
is an optimal stopping time (a similar result can be found in [Reference Cheng and Riedel6]). This representation makes an optimal stopping time easy to compute.
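For concreteness, the objects recalled here are of the familiar form
\[
V_t=\mathrm{ess\,sup}_{\tau\in\mathcal{S}_t}\mathcal{E}^g_t[X_\tau],\qquad
\tau^*=\inf\{t\geq 0\,:\,V_t=X_t\},
\]
where $\{V_t\}_{t\in[0,T]}$ denotes an RCLL modification of the value process; we record these formulas only as a reminder of the shape of the result and refer to [Reference El Karoui and Quenez9] for the precise statement.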
In this section, we aim to express the optimal stopping times studied in the previous sections in terms of first hitting times of processes. According to Theorem A.1, the multiple optimal stopping times can be constructed by induction. Therefore, it is sufficient to study the double stopping case, for which it remains to aggregate the value function and the reward family. For this purpose, we need to impose some stronger regularity conditions.
Throughout the rest of this section, we assume that the $\mathbb{F}$-expectation $(\mathcal{E},\textrm{Dom}(\mathcal{E}))$ satisfies (H0)–(H5) (a typical example is the g-expectation given in Remark 2.1). The following proposition can be used to aggregate the value functions of both the single and the multiple stopping problems.
Proposition 5.1. Let $\{h(\tau),\tau\in\mathcal{S}_0\}$ be a nonnegative, RC $\mathcal{E}$ $\mathcal{E}$-supermartingale system with $h(0)<\infty$. Then there exists an adapted RCLL process $\{h_t\}_{t\in[0,T]}$ that aggregates the family $\{h(\tau),\tau\in\mathcal{S}_0\}$; i.e., $h_\tau=h(\tau)$ a.s. for any $\tau\in\mathcal{S}_0$.
Proof. Consider the process $\{h(t)\}_{t\in[0,T]}$ . Since this process is an $\mathcal{E}$ -supermartingale and the function $t\rightarrow\mathcal{E}[h(t)]$ is right-continuous, by Proposition 2.2, there is an $\mathcal{E}$ -supermartingale $\{h_t\}_{t\in[0,T]}$ which is RCLL such that for each $t\in[0,T]$ , $h_t=h(t)$ a.s. For each $n\in\mathbb{N}$ , set $\mathcal{I}_n=\big\{0,\frac{1}{2^n}\wedge T, \frac{2}{2^n}\wedge T,\cdots, T\big\}$ and $\mathcal{I}=\cup_{n=1}^\infty\mathcal{I}_n$ . Then, for any stopping time $\tau$ taking values in $\mathcal{I}$ , we have $h_\tau=h(\tau)$ , a.s., which implies that
For any stopping time $\tau\in\mathcal{S}_0$ , we may construct a sequence of stopping times $\{\tau_n\}_{n\in\mathbb{N}}$ which takes values in $\mathcal{I}$ , such that $\tau_n\downarrow\tau$ . Noting that $\{h_t\}_{t\in[0,T]}$ is RCLL, $h_{\tau_n}$ converges to $h_\tau$ . It is obvious that $h_{\tau_n}\leq \mathrm{ess\,sup}_{\tau\in\mathcal{S}_0}h(\tau)\,=\!:\,\eta$ . Since $\{h(\tau),\tau\in\mathcal{S}_0\}$ is an $\mathcal{E}$ -supermartingale system, we have
Then, by Lemma 2.1, we obtain that $\eta\in\textrm{Dom}^+(\mathcal{E})$. Noting that $\{h(\tau),\tau\in\mathcal{S}_0\}$ is RC $\mathcal{E}$ and applying the dominated convergence theorem (Theorem 2.3), we may check that
Assume that $P\big(h_\tau\neq h(\tau)\big)>0$ . Without loss of generality, we may assume that $P(A)>0$ , where $A=\{h_\tau>h(\tau)\}$ . Set $\tau_A=\tau I_A+TI_{A^c}$ . It is easy to check that $\tau_A$ is a stopping time and $h_{\tau_A}\geq h(\tau_A)$ with $P\big(h_{\tau_A}>h(\tau_A)\big)=P(A)>0$ . It follows that $\mathcal{E}[h(\tau_A)]<\mathcal{E}\big[h_{\tau_A}\big]$ , which contradicts Equation (5.2). Therefore, we obtain that $h_\tau=h(\tau)$ for any $\tau\in\mathcal{S}_0$ .
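For the reader's convenience, a standard choice of the approximating sequence used in the proof above is
\[
\tau_n\,:\!=\,\big(2^{-n}\lceil 2^n\tau\rceil\big)\wedge T,\qquad n\in\mathbb{N},
\]
which defines stopping times taking values in $\mathcal{I}_n\subset\mathcal{I}$ with $\tau_n\downarrow\tau$ as $n\rightarrow\infty$.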
Remark 5.1. Consider the nonlinear operator $\widetilde{\mathcal{E}}^g$ induced by a backward stochastic differential equation (BSDE) with default, which is a nontrivial extension of the g-expectation generated by a BSDE (more details can be found in [Reference Grigorova, Quenez and Sulem14]). Let $\{X(\tau),\tau\in \mathcal{S}_0\}$ be an $\widetilde{\mathcal{E}}^g$-supermartingale family. By Lemma A.1 in [Reference Grigorova, Quenez and Sulem14], there exists a right upper semicontinuous optional process $\{X_t\}_{t\in[0,T]}$ which aggregates the family $\{X(\tau),\tau\in \mathcal{S}_0\}$. The case where the smallest $\widetilde{\mathcal{E}}^g$-supermartingale family is assumed to be right-continuous is addressed in [Reference Grigorova, Quenez and Sulem14, Proposition A.6].
With the help of Proposition 5.1, the value function $\{v(\tau),\tau\in\mathcal{S}_0\}$ can be aggregated as an RCLL $\mathcal{E}$ -supermartingale.
Proposition 5.2. Suppose that $\{X(\tau),\tau\in\mathcal{S}_0\}$ is an RC $\mathcal{E}$ admissible family with $\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[X(\tau)]<\infty$ . Then there exists an RCLL $\mathcal{E}$ -supermartingale $\{v_t\}_{t\in[0,T]}$ which aggregates the family $\{v(S),S\in\mathcal{S}_0\}$ defined in (3.1); i.e., for each stopping time S, $v(S)=v_S$ a.s.
Proof. By Propositions 3.1 and 3.3, $\{v(S),S\in\mathcal{S}_0\}$ is a nonnegative, RC $\mathcal{E}$ $\mathcal{E}$ -supermartingale system. Recalling (3.2), we have
The result follows from Proposition 5.1.
Since the reward family $\{X(\tau),\tau\in\mathcal{S}_0\}$ is not an $\mathcal{E}$-supermartingale system, Proposition 5.1 cannot be applied to aggregate it. Instead, we require the following continuity property of the reward family.
Definition 5.1. ([Reference Kobylanski, Quenez and Rouy-Mironescu15].) An admissible family $\{X(\tau),\tau\in\mathcal{S}_0\}$ is said to be right-continuous along stopping times (RC) if for any $\tau\in\mathcal{S}_0$ and any sequence $\{\tau_n\}_{n\in\mathbb{N}}\subset\mathcal{S}_0$ such that $\tau_n\downarrow\tau$ , one has $X(\tau)=\lim_{n\rightarrow\infty}X(\tau_n)$ .
Remark 5.2. If the admissible family $\{X(\tau),\tau\in\mathcal{S}_0\}$ is RC with $\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[X(\tau)]<\infty$, then it is RC $\mathcal{E}$. Indeed, let $\{\tau_n\}_{n\in\mathbb{N}}\subset\mathcal{S}_0$ be a sequence of stopping times such that $\tau_n\downarrow\tau$, a.s. By Lemma 2.1, the random variable $\eta\,:\!=\,\mathrm{ess\,sup}_{\tau\in\mathcal{S}_0}X(\tau)$ belongs to $\textrm{Dom}^+(\mathcal{E})$. Since $X(\tau_n)\leq \eta$, applying the dominated convergence theorem (Theorem 2.3) yields $\lim_{n\rightarrow\infty}\mathcal{E}[X(\tau_n)]=\mathcal{E}\big[\lim_{n\rightarrow\infty}X(\tau_n)\big]=\mathcal{E}[X(\tau)]$, which is precisely the RC $\mathcal{E}$ property.
The following theorem, obtained in [Reference Kobylanski, Quenez and Rouy-Mironescu15], is used to aggregate the reward family.
Theorem 5.1. ([Reference Kobylanski, Quenez and Rouy-Mironescu15].) Suppose that the admissible family $\{X(\tau),\tau\in\mathcal{S}_0\}$ is right-continuous along stopping times. Then there exists a progressive process $\{X_t\}_{t\in[0,T]}$ such that $X(\tau)=X_\tau$ a.s. for each $\tau\in\mathcal{S}_0$, and there exists a nonincreasing sequence of right-continuous processes $\{X^n_t\}_{t\in[0,T]}$ such that $\lim_{n\rightarrow\infty}X_t^n(\omega)=X_t(\omega)$ for each $(t,\omega)\in[0,T]\times\Omega$.
Now we can prove that the optimal stopping time for the single stopping problem obtained in Section 3 can be represented as a first hitting time.
Theorem 5.2. Suppose that the $\mathbb{F}$ -expectation satisfies all the assumptions (H0)–(H7). Let $\{X(\tau),\tau\in\mathcal{S}_0\}$ be an RC and LC $\mathcal{E}$ admissible family with $\sup_{\tau\in\mathcal{S}_0}\mathcal{E}[X(\tau)]<\infty$ . Then for any $S\in\mathcal{S}_0$ , the optimal stopping time of v(S) defined by (3.8) can be given by a first hitting time. More precisely, let $\{X_t\}_{t\in[0,T]}$ be the progressive process given by Theorem 5.1 that aggregates $\{X(\tau),\tau\in\mathcal{S}_0\}$ , and let $\{v_t\}_{t\in[0,T]}$ be the RCLL $\mathcal{E}$ -supermartingale that aggregates the family $\{v(\tau),\tau\in\mathcal{S}_0\}$ . Then the random variable defined by
is the minimal optimal stopping time for v(S).
Proof. For $\lambda\in(0,1)$ , set
It is easy to check that the mapping $\lambda\mapsto \bar{\tau}^\lambda(S)$ is nondecreasing. Then the stopping time
is well-defined. The proof remains almost the same as the proofs of Lemma 3.1 and Theorem 3.1, with $\tau^\lambda(S)$, $\hat{\tau}(S)$, and $\tau^*(S)$ replaced by $\bar{\tau}^\lambda(S)$, $\bar{\tau}(S)$, and $\tau(S)$, respectively, except for the proof of Equation (3.6). In order to prove (3.6) in the present setting, that is, to prove the inequality
it is sufficient to verify that for each $S\in\mathcal{S}_0$ and $\lambda\in(0,1)$ ,
For the proof of this assertion, we may refer to Lemma 4.1 in [Reference Kobylanski, Quenez and Rouy-Mironescu15]. The proof is complete.
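For the reader's convenience, the hitting times used in this proof are the natural analogues of those in [Reference Kobylanski, Quenez and Rouy-Mironescu15]; presumably,
\[
\bar{\tau}^\lambda(S)\,:\!=\,\inf\{t\geq S\,:\,\lambda v_t\leq X_t\},\qquad
\bar{\tau}(S)\,:\!=\,\lim_{\lambda\uparrow 1}\bar{\tau}^\lambda(S),\qquad
\tau(S)\,:\!=\,\inf\{t\geq S\,:\,v_t=X_t\},
\]
and the assertion to be verified is the inequality $\lambda v_{\bar{\tau}^\lambda(S)}\leq X_{\bar{\tau}^\lambda(S)}$ a.s., which, as in Lemma 4.1 of [Reference Kobylanski, Quenez and Rouy-Mironescu15], is obtained from the approximating sequence $\{X^n_t\}_{t\in[0,T]}$ of Theorem 5.1, since $\{X_t\}_{t\in[0,T]}$ itself need not be right-continuous.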
In the following, we will show that the optimal stopping times for the multiple stopping problem can be given in terms of hitting times. For simplicity, we only consider the double stopping time problem. Let us first aggregate the value function.
Proposition 5.3. Let $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ be an RC $\mathcal{E}$ biadmissible family such that $\sup_{\tau,\sigma\in\mathcal{S}_0}\mathcal{E}[X(\tau,\sigma)]<\infty$ . Then there exists an $\mathcal{E}$ -supermartingale $\{v_t\}_{t\in[0,T]}$ with RCLL sample paths that aggregates the family $\{v(S),S\in\mathcal{S}_0\}$ defined by (4.1); i.e., for each $S\in\mathcal{S}_0$ , $v_S=v(S)$ , a.s.
Proof. By Propositions 4.1 and 4.3, the family $\{v(S),S\in\mathcal{S}_0\}$ is an $\mathcal{E}$ -supermartingale system which is RC $\mathcal{E}$ . Remark 4.1 implies that $v(0)<\infty$ . Therefore, the result follows from Proposition 5.1.
In order to aggregate the reward family defined by (4.3), by Theorem 5.1, it suffices to show that it is RC. Since this new reward is given by the value function of the single stopping problem associated with the biadmissible family, we need to impose the following regularity condition on the latter.
Definition 5.2. ([Reference Kobylanski, Quenez and Rouy-Mironescu15].) A biadmissible family $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is said to be uniformly right-continuous along stopping times (URC) if $\sup_{\tau,\sigma\in\mathcal{S}_0}\mathcal{E}[X(\tau,\sigma)]<\infty$ and if for each nonincreasing sequence of stopping times $\{S_n\}_{n\in\mathbb{N}}\subset \mathcal{S}_S$ which converges a.s. to a stopping time $S\in\mathcal{S}_0$ , one has
Theorem 5.3. Suppose that there exists an $\mathbb{F}$ -expectation $\big(\widetilde{\mathcal{E}},{Dom}\big(\widetilde{\mathcal{E}}\big)\big)$ satisfying (H0)–(H5) that dominates $(\mathcal{E},{Dom}(\mathcal{E}))$ . Let $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ be a biadmissible family which is URC. Then the family $\{\widetilde{X}(S),S\in\mathcal{S}_0\}$ defined by (4.3) is RC.
Proof. By the expression for $\widetilde{X}$ , it is sufficient to prove that the family $\{u_1(\tau),\tau\in\mathcal{S}_0\}$ is RC. For any $\tau,\sigma\in\mathcal{S}_0$ , we define
Since $u_1(\tau)=U_1(\tau,\tau)$ , it remains to prove that $\{U_1(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is RC.
Now let $\{\tau_n\}_{n\in\mathbb{N}},\{\sigma_n\}_{n\in\mathbb{N}}$ be two nonincreasing sequences of stopping times that converge to $\tau$ and $\sigma$ respectively. It is easy to check that
It is obvious that for each fixed $\sigma\in\mathcal{S}_0$, the family $\{X(\tau,\sigma),\tau\in\mathcal{S}_0\}$ is RC. By Remark 5.2, this family is also RC $\mathcal{E}$. Note that $\{U_1(\tau,\sigma),\tau\in\mathcal{S}_0\}$ can be regarded as the value function of the single optimal stopping problem associated with the reward $\{X(\tau,\sigma),\tau\in\mathcal{S}_0\}$. Although the reward family $\{X(\tau,\sigma),\tau\in\mathcal{S}_0\}$ may fail to be admissible owing to the lack of adaptedness ($X(\tau,\sigma)$ need not be $\mathcal{F}_\tau$-measurable when $\tau<\sigma$), Remarks 3.3 and 3.4 imply that $\{U_1(\tau,\sigma),\tau\in\mathcal{S}_0\}$ is an $\mathcal{E}$-supermartingale which is RC $\mathcal{E}$. By Proposition 5.2, there exists an RCLL adapted process $\{U_t^{1,\sigma}\}_{t\in[0,T]}$ such that for each stopping time $\tau\in\mathcal{S}_0$,
Hence, the first term on the right-hand side of (5.6) can be written as $\big|U^{1,\sigma}_\tau-U^{1,\sigma}_{\tau_n}\big|$. By the right-continuity of $\big\{U_t^{1,\sigma}\big\}_{t\in[0,T]}$, it converges to 0 as n goes to infinity.
For any $m\in\mathbb{N}$ , set $Z_m=\sup_{r\geq m}\{\mathrm{ess\,sup}_{\tau\in\mathcal{S}_0}|X(\tau,\sigma)-X(\tau,\sigma_r)|\}$ . It is easy to check that
An analysis similar to the one in the proof of Lemma 2.1 shows that $\eta\in\textrm{Dom}^+(\mathcal{E})$ . Therefore, $Z_m\in\textrm{Dom}^+(\mathcal{E})$ for any $m\in\mathbb{N}$ . By a simple calculation, for any $n\geq m$ , we have
Since, for any $\xi\in \textrm{Dom}^+(\mathcal{E})$ , the family $\big\{\widetilde{\mathcal{E}}_t[\xi]\big\}_{t\in[0,T]}$ is right-continuous, it follows that for any $m\in\mathbb{N}$ ,
Note that $Z_m$ converges to 0 as m goes to infinity. By the dominated convergence theorem (Theorem 2.3), letting m go to infinity in (5.8), we obtain that the second term on the right-hand side of (5.6) converges to 0. The proof is complete.
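To summarize the structure of the argument, the decomposition (5.6) is presumably the triangle inequality
\[
\big|U_1(\tau_n,\sigma_n)-U_1(\tau,\sigma)\big|\leq\big|U_1(\tau_n,\sigma)-U_1(\tau,\sigma)\big|+\big|U_1(\tau_n,\sigma_n)-U_1(\tau_n,\sigma)\big|,
\]
whose first term equals $\big|U^{1,\sigma}_{\tau_n}-U^{1,\sigma}_{\tau}\big|$ and is handled by right-continuity, while the second term is presumably bounded, for $n\geq m$, by $\widetilde{\mathcal{E}}_{\tau_n}[Z_m]$ and is handled by letting first n and then m tend to infinity.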
Combining Theorems 5.1 and 5.3, we get the following aggregation result.
Corollary 5.1. Under the same hypotheses as those of Theorem 5.3, there exists some progressive right-continuous adapted process $\big\{\widetilde{X}_t\big\}_{t\in[0,T]}$ which aggregates the family $\big\{\widetilde{X}(\tau),\tau\in\mathcal{S}_0\big\}$ , i.e., for any $\tau\in\mathcal{S}_0$ , $\widetilde{X}_\tau=\widetilde{X}(\tau)$ , a.s., and such that there exists a nonincreasing sequence of right-continuous processes $\{\widetilde{X}_t^n\}_{t\in[0,T]}$ that converges to $\big\{\widetilde{X}_t\big\}_{t\in[0,T]}$ .
Theorem 5.4. Suppose that the $\mathbb{F}$ -expectation $(\mathcal{E},{Dom}(\mathcal{E}))$ satisfies all the assumptions (H0)–(H7) and that the biadmissible family $\{X(\tau,\sigma),\tau,\sigma\in\mathcal{S}_0\}$ is URC and ULC ${\mathcal{E}}$ . Then the optimal stopping time for the value function defined by (4.1) can be given in terms of some first hitting times.
Proof. Let $\big\{\widetilde{X}(\tau),\tau\in\mathcal{S}_0\big\}$ be the new reward family given by (4.3). By Theorems 4.3 and 5.3, it is LC $\mathcal{E}$ and RC. Applying Theorem 5.1, there exists a progressively measurable process $\big\{\widetilde{X}_t\big\}_{t\in[0,T]}$ which aggregates this family. By Proposition 5.2, there exists an RCLL process $\{u_t\}_{t\in[0,T]}$ that aggregates the value function defined by (4.4) and associated with the reward family $\big\{\widetilde{X}(\tau),\tau\in\mathcal{S}_0\big\}$. Then Theorem 5.2 implies that, for any $S\in\mathcal{S}_0$, the stopping time
is optimal for u(S).
For each $\theta\in\mathcal{S}_{\theta^*}$, set $X^{(1)}(\theta)=X(\theta,\theta^*)$ and $X^{(2)}(\theta)=X(\theta^*,\theta)$. For $i=1,2$, it is obvious that the family $\{X^{(i)}(\theta),\theta\in\mathcal{S}_{\theta^*}\}$ is admissible, RC, and LC $\mathcal{E}$. In order to aggregate this family using Theorem 5.1, we need to extend its definition to all stopping times $\theta\in\mathcal{S}_0$. One natural candidate is
It is easy to check that the family $\big\{\widetilde{X}^{(i)}(\theta),\theta\in\mathcal{S}_0\big\}$ is admissible, RC, and left-continuous in $\mathcal{E}$-expectation along stopping times greater than $\theta^*$. By Theorem 5.1, there exists a progressive process $\{\widetilde{X}^{(i)}_t\}_{t\in[0,T]}$ that aggregates $\big\{\widetilde{X}^{(i)}(\theta),\theta\in\mathcal{S}_{0}\big\}$. Consider the following value function:
Applying Theorem 3.2, we obtain that the family $\big\{\widetilde{v}^{(i)}(S),S\in\mathcal{S}_0\big\}$ is an RC $\mathcal{E}$ $\mathcal{E}$ -supermartingale system. Furthermore, for any $S\geq \theta^*$ , we have $\widetilde{v}^{(i)}(S)=u_i(S)$ , where $u_i$ is defined by (4.2). By Proposition 5.2, there exists an RCLL process $\big\{\widetilde{v}^i_t\big\}_{t\in[0,T]}$ that aggregates the family $\{\widetilde{v}^{(i)}(S),S\in\mathcal{S}_{0}\}$ . Now, we define
By an analysis similar to the one in the proof of Theorem 3.2, Theorem 5.2 still holds for the reward family $\big\{\widetilde{X}^{(i)}(\theta),\theta\in\mathcal{S}_0\big\}$, which implies that the stopping time $\theta^*_i$ is optimal for $\widetilde{v}^{(i)}(\theta^*)$, and hence optimal for $u_i(\theta^*)$. Now, set $B=\{u_1(\theta^*)\leq u_2(\theta^*)\}=\big\{\widetilde{v}^{(1)}(\theta^*)\leq \widetilde{v}^{(2)}(\theta^*)\big\}=\big\{\widetilde{v}^1_{\theta^*}\leq \widetilde{v}^2_{\theta^*}\big\}$. By Proposition 4.2, the pair of stopping times $\big(\tau_1^*,\tau_2^*\big)$ given by
is optimal for v(S). The proof is complete.
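In view of the construction in Proposition A.2 (specialized to $d=2$), the pair obtained at the end of the proof is presumably
\[
\tau_1^*=\theta^* I_B+\theta_1^* I_{B^c},\qquad
\tau_2^*=\theta_2^* I_B+\theta^* I_{B^c};
\]
that is, on B (where $u_1(\theta^*)\leq u_2(\theta^*)$) the first right is exercised at $\theta^*$ and the second at the hitting time $\theta_2^*$, while on $B^c$ the roles are interchanged. Since $\theta^*$, $\theta_1^*$, and $\theta_2^*$ are all defined as first hitting times of aggregated processes, this exhibits the optimal pair in terms of hitting times, as claimed.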
Appendix A
As in Section 4, we assume that the $\mathbb{F}$ -expectation $(\mathcal{E},\textrm{Dom}(\mathcal{E}))$ satisfies Assumptions (H0)–(H5). Now we introduce the optimal d-stopping time problem. The reward family should satisfy the following conditions.
Definition A.1. A family of random variables $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ is said to be d-admissible if it satisfies the following conditions:
(1) for all $\tau=(\tau_1,\cdots,\tau_d)\in\mathcal{S}_0^d$ , $X(\tau)\in \textrm{Dom}^+_{\tau_1\vee\cdots\vee\tau_d}(\mathcal{E})$ ;
(2) for all $\tau,\sigma\in\mathcal{S}_0^d$ , $X(\tau)=X(\sigma)$ a.s. on $\{\tau=\sigma\}$ .
For each fixed stopping time $S\in\mathcal{S}_0$ , the value function of the optimal d-stopping time problem associated with the reward family $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ is given by
Similarly to the optimal double stopping time case, the family $\{v(S),S\in\mathcal{S}_0\}$ is admissible and is an $\mathcal{E}$ -supermartingale system, as the following proposition shows.
Proposition A.1. Let $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ be a d-admissible family of random variables with $\sup_{\tau\in\mathcal{S}_0^d}\mathcal{E}[X(\tau)]<\infty$. Then the value function $\{v(S),S\in\mathcal{S}_0\}$ defined by (A.1) satisfies the following properties:
(i) $\{v(S),S\in\mathcal{S}_0\}$ is an admissible family;
(ii) for each $S\in\mathcal{S}_0$, there exists a sequence of stopping times $\{\tau^n\}_{n\in\mathbb{N}}\subset \mathcal{S}_S^d$ such that the sequence $\{\mathcal{E}_S[X(\tau^n)]\}_{n\in\mathbb{N}}$ is nondecreasing and converges to v(S);
(iii) $\{v(S),S\in\mathcal{S}_0\}$ is an $\mathcal{E}$ -supermartingale system;
(iv) for each $S\in\mathcal{S}_0$ , we have $\mathcal{E}[v(S)]=\sup_{\tau\in\mathcal{S}_S^d}\mathcal{E}[X(\tau)]$ .
In the following, we will interpret the value function v(S) defined in (A.1) as the value function of an optimal single stopping problem associated with a new reward family. For this purpose, for each $i=1,\cdots,d$ and $\theta\in\mathcal{S}_0$ , consider the following random variable:
where
It is easy to see that $u^{(i)}(\theta)$ is the value function of the optimal $(d-1)$ -stopping problem corresponding to the reward $\big\{X^{(i)}(\tau,\theta),\tau\in\mathcal{S}_\theta^{d-1}\big\}$ . Now we define
and
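In analogy with the double stopping case, the new reward (A.4) and the auxiliary value function (A.5) are presumably given by
\[
\widehat{X}(\theta)\,:\!=\,\max_{1\leq i\leq d}u^{(i)}(\theta),\qquad
u(S)\,:\!=\,\mathrm{ess\,sup}_{\theta\in\mathcal{S}_S}\mathcal{E}_S\big[\widehat{X}(\theta)\big],\qquad S\in\mathcal{S}_0,
\]
so that u is the value function of an optimal single stopping problem with reward family $\big\{\widehat{X}(\theta),\theta\in\mathcal{S}_0\big\}$.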
The following theorem indicates that the value function v defined by (A.1) coincides with u.
Theorem A.1. Let $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ be a d-admissible family with $\sup_{\tau\in\mathcal{S}_0^d}\mathcal{E}[X(\tau)]<\infty$ . Then, for any $S\in\mathcal{S}_0$ , we have $v(S)=u(S)$ .
With the above characterization of the value function, we can construct optimal multiple stopping times by induction, as follows.
Proposition A.2. For any fixed $S\in\mathcal{S}_0$ , suppose the following:
(1) there exists $\theta^*\in\mathcal{S}_S$ such that $u(S)=\mathcal{E}_S\big[\widehat{X}(\theta^*)\big]$ ;
(2) for any $i=1,\cdots,d$ , there exists $\theta^{(i)*}= \left(\theta_1^{(i)*},\cdots,\theta_{i-1}^{(i)*},\theta_{i+1}^{(i)*},\cdots,\theta_d^{(i)*} \right)\in\mathcal{S}_{\theta^*}^{d-1}$ such that $u^{(i)}(\theta^*)=\mathcal{E}_{\theta^*}[X^{(i)}(\theta^{(i)*},\theta^*)]$ .
Let $\{B_i\}_{i=1}^d$ be a partition of $\Omega$ into $\mathcal{F}_{\theta^*}$-measurable sets such that $\widehat{X}(\theta^*)=u^{(i)}(\theta^*)$ on the set $B_i$, $i=1,\cdots,d$. Set
Then $\tau^*=\big(\tau_1^*,\cdots,\tau_d^*\big)$ is optimal for v(S), and $\tau_1^*\wedge\cdots\wedge\tau_d^*=\theta^*$ .
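As in the double stopping case, the stopping times in Proposition A.2 are presumably constructed by setting, on each $B_i$,
\[
\tau_i^*=\theta^*\qquad\text{and}\qquad\big(\tau_1^*,\cdots,\tau_{i-1}^*,\tau_{i+1}^*,\cdots,\tau_d^*\big)=\theta^{(i)*},
\]
i.e., one right is exercised at $\theta^*$ and, on the event where the i-th sub-value function attains the maximum, the remaining rights follow the optimal $(d-1)$-tuple for $u^{(i)}(\theta^*)$.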
Proposition A.3. For any fixed $S\in\mathcal{S}_0$ , suppose that $\tau^*=\big(\tau_1^*,\cdots,\tau_d^*\big)$ is optimal for v(S). Then we have the following:
(1) $\tau_1^*\wedge\cdots\wedge \tau_d^*$ is optimal for u(S);
(2) for any $i=1,\cdots,d$ , $\big(\tau_1^*,\cdots,\tau_{i-1}^*,\tau^*_{i+1},\cdots,\tau_d^*\big)$ is optimal for $u^{(i)}\big(\tau_i^*\big)$ on the set $\big\{\tau_1^*\wedge\cdots\wedge \tau_d^*=\tau_i^*\big\}$ .
Remark A.1. None of the results in this section so far requires any regularity assumption on the reward family $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$.
The definition of continuity for the reward with d parameters is similar to the one for the double stopping case.
Definition A.2. A d-admissible family $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ is said to be right-continuous (resp., left-continuous) along stopping times in $\mathcal{E}$ -expectation [RC $\mathcal{E}$ (resp., LC $\mathcal{E}$ )] if, for any $\tau\in\mathcal{S}_0^d$ and any sequence $\{\tau_n\}_{n\in\mathbb{N}}\subset \mathcal{S}_0^d$ such that $\tau_n\downarrow \tau$ (resp., $\tau_n\uparrow \tau$ ), one has $\mathcal{E}[X(\tau)]=\lim_{n\rightarrow\infty}\mathcal{E}[X(\tau_n)]$ . If the family $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ is both RC $\mathcal{E}$ and LC $\mathcal{E}$ , it is said to be continuous along stopping times in $\mathcal{E}$ -expectation (C $\mathcal{E}$ ).
Proposition A.4. Suppose that $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ is an RC $\mathcal{E}$ d-admissible family with $\sup_{\tau\in\mathcal{S}_0^d}\mathcal{E}[X(\tau)]<\infty$ . Then the family $\{v(S),S\in\mathcal{S}_0\}$ is RC $\mathcal{E}$ .
Remark A.2. As in the analysis of Remark 3.4, suppose that $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ is a d-admissible family with $\sup_{\tau\in\mathcal{S}_0^d}\mathcal{E}[X(\tau)]<\infty$ and is right-continuous in $\mathcal{E}$ -expectation along stopping times greater than $\sigma$ (i.e., if a sequence of stopping times $\{\tau_n\}_{n\in\mathbb{N}}\subset \mathcal{S}_\sigma^d$ satisfies $\tau_n\downarrow \tau$ , then one has $\mathcal{E}[X(\tau)]=\lim_{n\rightarrow\infty}\mathcal{E}[X(\tau_n)]$ ). Then the family of value functions $\{v(S),S\in\mathcal{S}_0\}$ is right-continuous in $\mathcal{E}$ -expectation along stopping times greater than $\sigma$ .
By Theorem A.1 and Proposition A.2, the value function and the optimal multiple stopping times of the optimal d-stopping problem can be constructed from those of the optimal $(d-1)$ -stopping problem. Therefore, by induction, the multiple stopping problem can be reduced to nested single stopping problems. In addition, the existence of the optimal stopping time for the single stopping problem associated with the new reward $\big\{\widehat{X}(S),S\in\mathcal{S}_0\big\}$ is the building block for constructing the optimal stopping time for the original d-stopping problem. According to Theorem 3.1, it remains to investigate the regularity of this new reward family.
Definition A.3. A d-admissible family $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ is said to be uniformly right-continuous (resp., left-continuous) along stopping times in $\mathcal{E}$ -expectation [URC $\mathcal{E}$ (resp., ULC $\mathcal{E}$ )] if for each $i=1,\cdots,d$ , $S\in\mathcal{S}_0$ , and sequence of stopping times $\{S_n\}_{n\in\mathbb{N}}$ such that $S_n\downarrow S$ (resp., $S_n\uparrow S$ ), one has
Proposition A.5. Let $\big(\widetilde{\mathcal{E}},{Dom}\big(\widetilde{\mathcal{E}}\big)\big)$ be an $\mathbb{F}$ -expectation satisfying Assumptions (H0)–(H5). Suppose that the $\mathbb{F}$ -expectation $(\mathcal{E},{Dom}(\mathcal{E}))$ is dominated by $\big(\widetilde{\mathcal{E}},{Dom}\big(\widetilde{\mathcal{E}}\big)\big)$ and $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ is a URC $\widetilde{\mathcal{E}}$ d-admissible family with $\sup_{\tau\in\mathcal{S}_0^d}\mathcal{E}[X(\tau)]<\infty$ . Then the family $\big\{\widehat{X}(\tau),\tau\in\mathcal{S}_0\big\}$ defined by (A.4) is RC $\mathcal{E}$ .
Since left-continuity along stopping times in $\mathcal{E}$-expectation relies on the existence of optimal stopping times, the conditions under which LC $\mathcal{E}$ holds are more restrictive than those for RC $\mathcal{E}$, and the proof of LC $\mathcal{E}$ is more involved, as explained before the statement of Theorem 4.3 in Section 4.
Proposition A.6. Suppose that the $\mathbb{F}$ -expectation $(\mathcal{E},{Dom}(\mathcal{E}))$ satisfies (H0)–(H7) and $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ is a UC ${\mathcal{E}}$ d-admissible family (i.e., both URC $\mathcal{E}$ and ULC $\mathcal{E}$ ) with $\sup_{\tau\in\mathcal{S}_0^d}\mathcal{E}[X(\tau)]<\infty$ . Then the family $\big\{\widehat{X}(\tau),\tau\in\mathcal{S}_0\big\}$ defined by (A.4) is LC $\mathcal{E}$ .
With the help of Propositions A.2, A.5, and A.6, we can now establish the existence result for the optimal stopping times for the multiple stopping problem.
Theorem A.2. Suppose that the $\mathbb{F}$-expectation $(\mathcal{E},{Dom}(\mathcal{E}))$ satisfies all the assumptions (H0)–(H7) and $\big\{X(\tau),\tau\in\mathcal{S}_0^d\big\}$ is a UC ${\mathcal{E}}$ d-admissible family with $\sup_{\tau\in\mathcal{S}_0^d}\mathcal{E}[X(\tau)]<\infty$. Then, for each $S\in\mathcal{S}_0$, there exists an optimal stopping time $\tau^*\in\mathcal{S}_S^d$ for v(S); that is, $v(S)=\mathcal{E}_S[X(\tau^*)]$.
In order to characterize the optimal multiple stopping times in a minimal way, we first define a partial order relation $\prec_d$ on $\mathbb{R}^d$. This relation can be found in [Reference Kobylanski, Quenez and Rouy-Mironescu15]; for the reader's convenience, we also state it here. For $d=1$ and any $a,b\in\mathbb{R}$, $a\prec_1 b$ if and only if $a\leq b$; for $d>1$ and any $(a_1,\cdots,a_d),(b_1,\cdots,b_d)\in\mathbb{R}^d$, $(a_1,\cdots,a_d)\prec_d(b_1,\cdots,b_d)$ if and only if either $a_1\wedge\cdots\wedge a_d<b_1\wedge \cdots\wedge b_d$, or
Definition A.4. For each fixed $S\in\mathcal{S}_0$ , a d-stopping time $(\tau_1,\cdots,\tau_d)\in\mathcal{S}_S^d$ is said to be d-minimal optimal for the value function v(S) defined by (A.1) if it is minimal for the order $\prec_d$ in the set $\big\{\tau\in\mathcal{S}_S^d\,:\,v(S)=\mathcal{E}_S[X(\tau)]\big\}$ , which is the collection of all optimal stopping times.
Proposition A.7. For each fixed $S\in\mathcal{S}_0$ , a d-stopping time $(\tau_1,\cdots,\tau_d)\in\mathcal{S}_S^d$ is d-minimal optimal for the value function v(S) defined by (A.1) if and only if the following hold:
(1) $\theta^*=\tau_1\wedge\cdots\wedge \tau_d$ is the minimal optimal stopping time for u(S) defined by (A.5);
(2) for $i=1,\cdots,d$, $\theta^{*(i)}=\big(\tau_1,\cdots,\tau_{i-1},\tau_{i+1},\cdots,\tau_d\big)\in\mathcal{S}_S^{d-1}$ is the $(d-1)$-minimal optimal stopping time for $u^{(i)}(\theta^*)$ defined by (A.2) on the set $\big\{u^{(i)}(\theta^*)\geq \vee_{k\neq i} u^{(k)}(\theta^*)\big\}$.
Acknowledgements
We thank the associate editor and the anonymous referees for their pertinent comments, which helped to improve this work.
Funding information
The authors gratefully acknowledge financial support from the Qilu Young Scholars Program of Shandong University and the German Research Foundation (DFG) through the Collaborative Research Centre 1283, Taming uncertainty and profiting from randomness and low regularity in analysis, stochastics and their applications.
Competing interests
There were no competing interests to declare which arose during the preparation or publication process of this article.