
On the Brownian range and the Brownian reversal

Published online by Cambridge University Press:  03 December 2024

Yifan Li*
Affiliation:
University of Manchester
*
*Postal address: Alliance Manchester Business School, Booth Street West, Manchester, M13 9SS, UK. Email address: [email protected]

Abstract

This paper studies a novel Brownian functional defined as the supremum of a weighted average of the running Brownian range and its running reversal from extrema on the unit interval. We derive the Laplace transform for the squared reciprocal of this functional, which leads to explicit moment expressions that are new to the literature. We show that the proposed Brownian functional can be used to estimate the spot volatility of financial returns based on high-frequency price observations.

Type
Original Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

On a filtered probability space $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\geq 0},\mathbb{P})$ , let $W=(W_t)_{t\in[0,1]}$ denote a standard Brownian motion starting at $W_0=0$ , and denote $M_t \;:\!=\; \sup_{0\leq s\leq t}W_s$ and $m_t\;:\!=\; \inf_{0\leq s\leq t}W_s$ as its associated supremum and infimum processes. The process $R_t \;:\!=\; M_t -m_t$ is known as the Brownian range process, whose properties have been studied extensively in [Reference Chong, Cowan and Holst6], [Reference Feller15], [Reference Imhof18], and [Reference Vallois30], among others. Properties of the Brownian range process are exploited to measure the variation of financial returns; see e.g. [Reference Bollerslev, Li and Li3], [Reference Christensen and Podolskij7], [Reference Garman and Klass16], [Reference Li, Wang and Zhang25], and [Reference Parkinson27]. Intuitively, when one models the log-price of an asset by a scaled Brownian motion, its range process in an interval summarizes the maximum span of the asset price during that interval, which provides more precise measurements of the price variation than using only prices at end-points.

Nevertheless, $R_t$ only increases its value when $W_t$ refreshes its running extrema, and is otherwise constant. Crucially, it does not reveal how much the Brownian motion has reversed back towards $W_0$ from the running extrema, which can contain a substantial amount of information about the variability of the Brownian path. To capture the reversal part of the Brownian path, we define $U_t$ and $u_t$ as the Brownian reversal processes from above and below, respectively:

\begin{equation*}U_t \;:\!=\; M_t - W_t^+, \quad u_t \;:\!=\; m_t - W_t^-,\end{equation*}

where $W_t^+=(W_t\vee 0)$ and $W_t^- = (W_t\wedge0)$ are the sections of the trajectories of $W_t$ above and below $W_0$ . A graphical illustration is presented in Figure 1. Intuitively, $U_t$ and $u_t$ are the proportions of the Brownian paths reflected at its supremum or infimum that are above or below $W_0$ , respectively. Whenever $W_t = M_t$ (resp. $W_t = m_t$ ), we have $U_t = 0$ (resp. $u_t=0$ ) so that the Brownian motion is exactly at its supremum (resp. infimum) with zero reversal. As $W_t$ evolves from the extrema towards $W_0$ , the magnitudes of $U_t$ or $u_t$ increase, and vice versa. The maximum reversal from above (resp. below) is attained when $W_t$ crosses $W_0$ from above (resp. below), in which case we have $U_t= M_t$ (resp. $u_t = m_t$ ), indicating that the Brownian motion at time t has returned to $W_0$ from above (resp. below).

Figure 1. An illustration of running Brownian extrema and reversals.

Similar to the construction of the Brownian range which summarizes the magnitudes of the Brownian supremum and infimum, the total reversal of $W_t$ from both above and below can be summarized by the following process, which we refer to as the Brownian reversal at time t:

(1.1) \begin{equation}V_t \;:\!=\; U_t - u_t \equiv R_t -|W_t|.\end{equation}

It is easy to see that $0\leq V_t\leq R_t$ for all t, as the Brownian motion cannot reverse by more than its total range. For any t, the joint law of $(R_t,V_t)$ can be derived from the Lévy trivariate law of $(W_t,M_t,m_t)$ , which can be found in e.g. [Reference Cox and Miller8] and [Reference Feller15].
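As a quick numerical illustration, the identity (1.1) and the bounds $0\leq V_t\leq R_t$ can be checked on a discretized Brownian path. The following sketch (the grid size and seed are arbitrary choices, not part of the paper) builds $M_t$, $m_t$, $R_t$, $U_t$, $u_t$, and $V_t$ from running maxima and minima:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                      # number of grid steps on [0, 1]
dt = 1.0 / n
# discretized Brownian path with W_0 = 0
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

M = np.maximum.accumulate(W)    # running supremum M_t
m = np.minimum.accumulate(W)    # running infimum m_t
R = M - m                       # Brownian range R_t

U = M - np.maximum(W, 0.0)      # reversal from above, M_t - W_t^+
u = m - np.minimum(W, 0.0)      # reversal from below, m_t - W_t^-
V = U - u                       # Brownian reversal V_t

# identity (1.1): V_t = R_t - |W_t|, and the bounds 0 <= V_t <= R_t
assert np.allclose(V, R - np.abs(W))
assert np.all(V >= -1e-12) and np.all(V <= R + 1e-12)
```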

The main contribution of this paper is to document the analytical properties of a novel Brownian functional defined as the supremum of the weighted average of $V_t$ and $R_t$ on [0,1]:

\begin{equation*} S^{(\alpha)}\;:\!=\; \sup_{t\in[0,1]}\{\alpha V_t + (1-\alpha)R_t \}\equiv \sup_{t\in[0,1]}s_t^{(\alpha)},\quad \alpha\in[0,1],\end{equation*}

where $s^{(\alpha)}_{t} \;:\!=\; R_t - \alpha|W_t|$ in view of (1.1). As two important special cases, $S^{(0)}\equiv R_1$ is the Brownian range, and $S^{(1)} \equiv \sup_{t\in[0,1]}V_t$ is termed the maximal Brownian reversal. According to [Reference Rogers and Shepp29], one can usually guess whether the law of a particular Brownian functional is straightforward or impossible to establish, but it is not immediately clear which category $S^{(\alpha)}$ belongs to. In this paper, we shall show that the law of $S^{(\alpha)}$ can be explicitly characterized. In particular, we derive the Laplace transform of the random variable $T^{(\alpha)} \;:\!=\; (S^{(\alpha)})^{-2}$ for all $\alpha\in[0,1]$ , which reveals many interesting properties of $S^{(\alpha)}$ . For example, we prove that all moments of $T^{(\alpha)}$ are rational functions of $\alpha$ , and the maximal Brownian reversal has the first moments $\mathbb{E}[T^{(1)}]=1$ and $\mathbb{E}[S^{(1)}]=\sqrt{8/\pi}\ln 2$ . As an application, we show that $S^{(\alpha)}$ can be used to construct more precise spot volatility estimators than those based on high-frequency price ranges such as [Reference Bollerslev, Li and Li3] and [Reference Li, Wang and Zhang25], where the precision gain comes from the reversal part of the Brownian path not captured by the price range.

The remainder of the paper is organized as follows. Section 2 presents the main results of the paper concerning the distributional properties of $S^{(\alpha)}$ . An application of $S^{(\alpha)}$ to spot volatility estimation is shown in Section 3, followed by the proofs to all theoretical results in Section 4.

2. Main results

Our key result is a semi-closed-form expression for the Laplace transform of $T^{(\alpha)}$ , which allows us to obtain explicit moments of $S^{(\alpha)}$ and $T^{(\alpha)}$ as well as their density functions. To derive the Laplace transform, we introduce some notation that will be used in this section. Denote the hyperbolic sine, cosine, and tangent functions as $\sinh[x]$ , $\cosh[x]$ , and $\tanh[x]$ for $x\in\mathbb{R}$ , respectively. The inverse function of $y=\tanh[x]$ will be denoted by $x=\textrm{arctanh}[y]$ for $y\in(\!-\!1,1)$ . For a general process $(A_t)_{t\geq 0}$ , let $T_{x}(A)\;:\!=\; \inf\{t\geq0 \colon A_t > x \}$ denote the first hitting time of A to the level x, with the convention $\inf\,\{\emptyset\} = \infty$ . We shall use $n\in\mathbb{N}$ to denote a generic natural number (excluding 0).

We start from the following equality of events:

\begin{equation*}\{ S^{(\alpha)} > x \}=\biggl\{ \sup_{t\in[0,1]} s^{(\alpha)}_{t}> x \biggr\} =\{T_x(s^{(\alpha)}) < 1 \}.\end{equation*}

By the Brownian scaling law, we deduce that

\begin{equation*}\mathbb{P}(S^{(\alpha)} > x) = \mathbb{P}(T_x(s^{(\alpha)}) < 1 )=\mathbb{P}( x^2T_1(s^{(\alpha)}) < 1 )=\mathbb{P}( T_1(s^{(\alpha)})^{-1/2} > x ),\end{equation*}

which implies that

\begin{equation*}T^{(\alpha)} \;:\!=\; (S^{(\alpha)})^{-2}\overset{d}{=}T_1(s^{(\alpha)}).\end{equation*}

Therefore $T^{(\alpha)}$ has the same law as the first hitting time of the non-negative process $s^{(\alpha)}$ to 1. As two special cases, $T_1(s^{(0)})$ is the hitting time of the Brownian range process to 1, which is studied in [Reference Chong, Cowan and Holst6] and [Reference Imhof18], and $T_1(s^{(1)})$ is the first hitting time of the Brownian reversal process $V_t$ to 1, whose distributional properties are unknown in the literature. Since $s^{(\alpha)}_t = R_t - \alpha|W_t|$ is non-increasing in $\alpha$ , we have $s^{(1)}_t\leq s^{(\alpha)}_t\leq R_t$ for all $t\geq 0$ and $\alpha\in[0,1]$ , and it follows that $T_1(s^{(1)})\geq T_1(s^{(\alpha)})\geq T_1(s^{(0)})$ . The Laplace transform of $T^{(\alpha)}$ is stated in Theorem 2.1 below.
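A small Monte Carlo experiment illustrates the hitting-time representation: on a common path the hitting times are ordered across $\alpha$, and the sample means can be compared with the explicit first moments $\mathbb{E}[T^{(\alpha)}]=1/(2-\alpha)$ reported later in this section. The sketch below uses a crude Euler discretization (step size, path count, and tolerances are arbitrary choices, and discrete monitoring biases the hitting times slightly upward):

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n_paths, t_max = 1e-3, 2000, 20.0
alphas = (0.0, 0.5, 1.0)

W = np.zeros(n_paths)
M = np.zeros(n_paths)
m = np.zeros(n_paths)
hit = {a: np.full(n_paths, np.inf) for a in alphas}  # inf marks "not yet hit"

for step in range(1, int(t_max / dt) + 1):
    W += rng.normal(0.0, np.sqrt(dt), n_paths)
    np.maximum(M, W, out=M)
    np.minimum(m, W, out=m)
    R = M - m
    for a in alphas:
        new = np.isinf(hit[a]) & (R - a * np.abs(W) > 1.0)
        hit[a][new] = step * dt
    if not np.isinf(hit[1.0]).any():   # s^(1) is the smallest, hence hits last
        break

# pathwise ordering T_1(s^(1)) >= T_1(s^(1/2)) >= T_1(s^(0))
assert np.all(hit[1.0] >= hit[0.5]) and np.all(hit[0.5] >= hit[0.0])
# sample means against E[T^(alpha)] = 1/(2 - alpha): 1/2, 2/3, 1
assert abs(hit[0.0].mean() - 1 / 2) < 0.10
assert abs(hit[0.5].mean() - 2 / 3) < 0.13
assert abs(hit[1.0].mean() - 1.0) < 0.20
```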

Theorem 2.1. Let $\bar\alpha \;:\!=\; \alpha/(1-\alpha)$ . For all $\alpha\in[0,1)$ and $\lambda >0$ , it holds that

(2.1) \begin{equation}\mathcal{L}_\alpha(\lambda)\;:\!=\; \mathbb{E}[{\textrm{e}}^{-\lambda T^{(\alpha)}}] = \sinh[\sqrt{2\lambda}]^{-2}\int_0^{\tanh[\sqrt{2\lambda}]^2}(1-y)^{-1} {}_2{F}_1\biggl(\dfrac{1}{2},1;\;1+\dfrac{\bar\alpha}{2};\;y\biggr)\,{\textrm{d}} y,\end{equation}

where ${}_2{F}_1(a,b;\;c;\;x)$ is the Gauss hypergeometric function defined by

\[ {}_2{F}_1(a,b;\;c;\;x) =\sum_{k=0}^\infty \dfrac{(a)_k(b)_k}{(c)_k k!}x^k, \]

in which $(a)_k\;:\!=\; a(a+1)\cdots(a+k-1)$ is the rising factorial with $(a)_0=1$ . When $\alpha = 1$ , $\mathcal{L}_1(\lambda)$ takes the form

(2.2) \begin{equation}\mathcal{L}_1(\lambda) = \lim_{\alpha\to1} \mathcal{L}_\alpha(\lambda)= 2\sinh[\sqrt{2\lambda}]^{-2}\ln\cosh[\sqrt{2\lambda}].\end{equation}

For all $\lambda\geq 0$ and $\alpha\in[0,1]$ , it holds that

(2.3) \begin{equation} \mathcal{L}_\alpha(\lambda)\leq\mathcal{L}_0(\lambda) = \cosh[\sqrt{\lambda/2}]^{-2}\leq 1.\end{equation}
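Theorem 2.1 can be checked numerically. The sketch below (relying on scipy's quad and hyp2f1; the test values of $\lambda$ are arbitrary) evaluates the integral in (2.1) and compares it with the closed form $\mathcal{L}_{1/2}(\lambda)=2\lambda\sinh[\sqrt{2\lambda}]^{-2}$, the $\alpha\to1$ limit (2.2) approximated by a very large $\bar\alpha$, and the bound (2.3):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1

def laplace_alpha(lam, abar):
    """Evaluate (2.1) numerically, with abar = alpha / (1 - alpha)."""
    upper = np.tanh(np.sqrt(2.0 * lam)) ** 2
    val, _ = quad(lambda y: hyp2f1(0.5, 1.0, 1.0 + abar / 2.0, y) / (1.0 - y),
                  0.0, upper)
    return val / np.sinh(np.sqrt(2.0 * lam)) ** 2

for lam in (0.3, 1.0, 3.0):
    s2 = np.sinh(np.sqrt(2.0 * lam)) ** 2
    # alpha = 1/2 (abar = 1): the integral reduces to 2*lam
    assert abs(laplace_alpha(lam, 1.0) - 2.0 * lam / s2) < 1e-6
    # alpha -> 1: compare a very large abar with the closed form (2.2)
    assert abs(laplace_alpha(lam, 1e6)
               - 2.0 * np.log(np.cosh(np.sqrt(2.0 * lam))) / s2) < 1e-4
    # bound (2.3): L_alpha(lam) <= L_0(lam) = cosh(sqrt(lam/2))^{-2}
    assert laplace_alpha(lam, 1.0) <= np.cosh(np.sqrt(lam / 2.0)) ** -2 + 1e-10
```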

Remark 2.1. With $\alpha = 0$ , we have ${}_2{F}_1(\tfrac{1}{2},1;\;1;\;y) = (1-y)^{-1/2}$ , and thus the expression of $\mathcal{L}_0(\lambda)$ can be verified by carrying out the integration in (2.1), which agrees with the Laplace transform of the Brownian range time given in e.g. [Reference Borodin and Salminen5], [Reference Chong, Cowan and Holst6], and [Reference Imhof18]. When $\bar\alpha=n$ , then $\alpha = n/(n+1)$ and the Gauss hypergeometric function can be evaluated explicitly by the following recursive relation [Reference Olver, Olde Daalhuis, Lozier, Schneider, Boisvert, Clark, Miller, Saunders, Cohl and McClain26, eq. (15.5.16)]:

\begin{equation*}F_{n+2}(y)=\dfrac{(n+2)(1-(1-y)F_n(y))}{(n+1)y},\quad F_0(y) = (1-y)^{-1/2},\quad F_1(y) =\textrm{arctanh}[\sqrt{y}]/\sqrt{y},\end{equation*}

where

\[F_n(y) \;:\!=\; {}_2{F}_1\biggl(\frac{1}{2},1;\;1+\frac{n}{2};\;y\biggr).\]

This further leads to a series of closed-form Laplace transforms by analytical integration. Examples of the Laplace transforms for $n\in\{1,2,3,4\}$ are presented as

(2.4) \begin{equation}\begin{split}\mathcal{L}_{1/2}(\lambda)& = 2\lambda \sinh[\sqrt{2\lambda}]^{-2}, \quad \mathcal{L}_{2/3}(\lambda) = 8\sinh[\sqrt{2\lambda}]^{-2}\ln\cosh[\sqrt{\lambda/2}],\\[5pt] \mathcal{L}_{3/4}(\lambda)& = 3\sinh[\sqrt{2\lambda}]^{-2}\bigl(\sqrt{2\lambda}\tanh[\sqrt{2\lambda}]^{-1}-1\bigr) ,\\[5pt] \mathcal{L}_{4/5}(\lambda)& = \dfrac{4}{3}\sinh[\sqrt{2\lambda}]^{-2}\bigl(\tanh[\sqrt{\lambda/2}]^2+4\ln\cosh[\sqrt{\lambda/2}] \bigr),\end{split}\end{equation}

and the expressions for a general n can be obtained in the same fashion. This family of Laplace transforms appears to be new to the literature.
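The recursion in Remark 2.1 is straightforward to implement. The sketch below (checked against scipy's hyp2f1; the sample points are arbitrary) evaluates $F_n(y)$ for any $n$:

```python
import numpy as np
from scipy.special import hyp2f1

def F(n, y):
    """F_n(y) = 2F1(1/2, 1; 1 + n/2; y) via the recursion of Remark 2.1."""
    if n == 0:
        return (1.0 - y) ** -0.5
    if n == 1:
        return np.arctanh(np.sqrt(y)) / np.sqrt(y)
    # reindexed form of (15.5.16): F_n = n (1 - (1-y) F_{n-2}) / ((n-1) y)
    return n * (1.0 - (1.0 - y) * F(n - 2, y)) / ((n - 1) * y)

for n in range(7):
    for y in (0.1, 0.5, 0.9):
        assert abs(F(n, y) - hyp2f1(0.5, 1.0, 1.0 + n / 2.0, y)) < 1e-9
```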

Remark 2.2. Interestingly, the Laplace transform $\mathcal{L}_{1/2}(\lambda)$ in (2.4) also shows up in [Reference Biane, Pitman and Yor2] and [Reference Pitman and Yor28]. Combining equations (1.4), (1.8), and (1.9) of [Reference Biane, Pitman and Yor2], we deduce the following unobvious result:

\begin{equation*}T^{(1/2)}\overset{d}{=}\Bigl(\max_{t,s\in[0,1]}|b_t-b_s|\Bigr)^2,\end{equation*}

where $b=(b_t)_{t\in[0,1]}$ is the standard Brownian bridge, that is, a Brownian motion W on [0,1] conditioned on $W_0=W_1 = 0$ . It is intriguing to identify the probabilistic link that causes the Laplace transforms of two completely different Brownian functionals to coincide. However, this requires a separate theoretical analysis that deviates from the main focus of this paper, and is thus left for future research.

The expression for $\mathcal{L}_\alpha(\lambda)$ allows us to evaluate the moments of $T^{(\alpha)}$ and $S^{(\alpha)}$ . We begin by summarizing some known results in the literature corresponding to the special cases $\alpha\in\{0,\tfrac{1}{2}\}$ . Let us denote $\kappa^{(\alpha)}_k\;:\!=\; \mathbb{E}[ (T^{(\alpha)})^k]$ and $\mu_k^{(\alpha)}\;:\!=\; \mathbb{E}[(S^{(\alpha)})^{k}]$ . From Table 1 of [Reference Biane, Pitman and Yor2], we deduce the following results for all $k\in\mathbb{N}$ after some straightforward simplification:

(2.5) \begin{equation}\begin{split}\kappa^{(0)}_k&= \dfrac{k!(\!-\!2)^{k+1}}{(2k)!(k+1)}(1-4^{k+1})B_{2(k+1)},\quad \kappa_{k}^{(1/2)} =\dfrac{k!}{(2k)!}(1-2k)(\!-\!2)^{3k}B_{2k},\\[5pt] \mu_k^{(0)} & = \dfrac{4}{\sqrt{\pi}} \Gamma\biggl(\dfrac{k+1}{2}\biggr)(1-2^{2-k})2^{k/2}\zeta(k-1),\quad \mu_k^{(1/2)}=\sqrt{\dfrac{2}{\pi}}2^{(1-k)/2}k\Gamma\biggl(\dfrac{3+k}{2}\biggr)\zeta(k+1),\end{split}\end{equation}

where $B_n$ is the nth Bernoulli number, $\Gamma(x) = \int_0^\infty {\textrm{e}}^{-t}t^{x-1}\,{\textrm{d}} t$ is the gamma function, and $\zeta(x)$ is the Riemann zeta function. The moments of the Brownian range, $\mu_k^{(0)}$ , also appear in [Reference Garman and Klass16] and [Reference Parkinson27]. It is worth noting that $\mu_2^{(0)} = 4\ln 2$ can be derived by taking the limit of the above expression as $k\to2$ , using properties of the zeta function. Also, as $\mu_k^{(0)}\geq \mu_k^{(\alpha)}$ for all $\alpha\in[0,1]$ due to $R_1\geq S^{(\alpha)}$ , we conclude that all moments of $S^{(\alpha)}$ are finite.

We now turn to the general case with $\alpha\in[0,1]$ . We begin with the calculation of $\kappa_k^{(\alpha)}$ , which have surprisingly simple forms. In detail, we shall show that $\kappa_k^{(\alpha)}$ are finite for all $k\in\mathbb{N}$ and are rational functions of $\alpha$ . To this end, we define the rational sequence $(C_{n,k})_{n,k\in\mathbb{N}}$ through the following recursive relation:

\begin{equation*}C_{n,1}= \dfrac{2^{3n+5}(1-4^{n+2})(2n+3)B_{2(n+2)}}{(2(n+2))!}, \quad C_{n,k} = \sum_{j=0}^nC_{j,k-1}C_{n-j,1}.\end{equation*}

We also need the following sequence of rational functions for $k\in\mathbb{N}\cup\{0\}$ ,

(2.6) \begin{equation}D_k^{(\alpha)} \;:\!=\; \dfrac{\bar{\alpha}}{\bar{\alpha}-1} \biggl(1-\dfrac{(1/2)_{k+1}}{(\bar\alpha/2)_{k+1}}\biggr),\end{equation}

which is continuous on [0,1] with well-defined limits at $\alpha\in\{0,1/2,1\}$ ,

(2.7) \begin{equation}D_k^{(0)}=\dfrac{2(1/2)_{k+1}}{k!},\quad D_k^{(1/2)}=H_{2(k+1)}-H_{k+1}/2, \quad D_k^{(1)}\equiv 1,\end{equation}

where $H_n=\sum_{k=1}^nk^{-1}$ is the nth harmonic number. We have the following.

Proposition 2.1. For all $k\in\mathbb{N}$ and $\alpha\in[0,1]$ , we have $\kappa_k^{(\alpha)}\leq \kappa_k^{(1)}<\infty$ with

(2.8) \begin{equation}\kappa_k^{(\alpha)} = \sum_{n=0}^k \binom{k}{n} \kappa_{n}^{(1/2)} \gamma_{k-n}^{(\alpha)},\end{equation}

where $\gamma_{k}^{(\alpha)}$ for $\alpha\in(0,1)\setminus\{1/2\}$ is defined as the finite sum

(2.9) \begin{equation}\gamma_{k}^{(\alpha)} = (\!-\!1)^k k!\sum_{n=0}^k\dfrac{D^{(\alpha)}_{n}C_{k-n,n+1}}{2(n+1)},\end{equation}

with the limiting cases

(2.10) \begin{equation} \gamma_{k}^{(0)}=\dfrac{(\!-\!1)^kk!2^{k+1}}{(2(k+1))!}, \quad \gamma_{k}^{(1/2)}= \mathbb{1}_{\{k=0\}}, \quad \gamma_{k}^{(1)}=(\!-\!1)^kk!\dfrac{2^{3k+2}(4^{k+1}-1)B_{2(k+1)}}{(k+1)(2(k+1))!}.\end{equation}

Remark 2.3. As $\kappa_k^{(\alpha)}$ is a finite combination of $\gamma_{k-n}^{(\alpha)}$ which is a rational function of $\alpha$ , we conclude that $\kappa_k^{(\alpha)}$ is a rational function of $\alpha$ , which implies that $\kappa_k^{(\alpha)}\in\mathbb{Q}$ whenever $\alpha\in\mathbb{Q}\cap[0,1]$ . It is also worth noting that $\gamma_k^{(1/2)}= \mathbb{1}_{\{k=0\}}$ reduces (2.8) to the trivial identity $\kappa_k^{(1/2)} =\kappa_k^{(1/2)}$ .
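Proposition 2.1 is fully constructive and can be implemented in exact rational arithmetic. The sketch below (using Python Fractions; the Bernoulli numbers are generated by the standard recursion, a choice not specified in the paper) rebuilds $\kappa_k^{(\alpha)}$ from the sequences $(C_{n,k})$ and $(D_k^{(\alpha)})$ via (2.6), (2.8), and (2.9), and checks the output against the explicit rational expressions for $k\leq4$ reported later in this section, at the rational point $\alpha=1/3$:

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli(N):
    """B_0..B_N as exact Fractions (B_1 = -1/2 convention; only even indices matter)."""
    B = [Fraction(0)] * (N + 1)
    B[0] = Fraction(1)
    for m in range(1, N + 1):
        B[m] = -sum(Fraction(comb(m + 1, j)) * B[j] for j in range(m)) / (m + 1)
    return B

B = bernoulli(14)

def C(n, k):          # rational sequence C_{n,k} from its recursive definition
    if k == 1:
        return (Fraction(2 ** (3 * n + 5) * (1 - 4 ** (n + 2)) * (2 * n + 3))
                * B[2 * (n + 2)] / factorial(2 * (n + 2)))
    return sum(C(j, k - 1) * C(n - j, 1) for j in range(n + 1))

def poch(a, k):       # rising factorial (a)_k
    out = Fraction(1)
    for i in range(k):
        out *= a + i
    return out

def kappa(k, alpha):  # kappa_k^(alpha) from (2.6), (2.8), and (2.9)
    ab = Fraction(alpha) / (1 - Fraction(alpha))     # abar, valid for alpha != 0, 1/2, 1
    D = [ab / (ab - 1) * (1 - poch(Fraction(1, 2), n + 1) / poch(ab / 2, n + 1))
         for n in range(k + 1)]
    gamma = lambda j: ((-1) ** j * factorial(j)
                       * sum(D[n] * C(j - n, n + 1) / (2 * (n + 1))
                             for n in range(j + 1)))
    kappa_half = lambda j: (Fraction(factorial(j), factorial(2 * j))
                            * (1 - 2 * j) * (-2) ** (3 * j) * B[2 * j])
    return sum(comb(k, n) * kappa_half(n) * gamma(k - n) for n in range(k + 1))

a = Fraction(1, 3)    # abar = 1/2
assert kappa(1, a) == 1 / (2 - a)
assert kappa(2, a) == Fraction(4) / (3 * (4 - 3 * a))
assert kappa(3, a) == 4 * (20 * a**2 - 62 * a + 51) / (15 * (2 - a) * (4 - 3 * a) * (6 - 5 * a))
```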

Taking $\alpha = 1$ , equation (2.5) and Proposition 2.1 lead to the following simpler form for $\kappa_k^{(1)}$ .

Corollary 2.1. It holds that

(2.11) \begin{equation}\kappa^{(1)}_k = -\sum_{n=0}^k \binom{k}{n}\dfrac{(\!-\!2)^{2n+1}\kappa^{(0)}_n\kappa^{(1/2)}_{k-n}}{(2n+1)(2n+2)}.\end{equation}
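Corollary 2.1 can likewise be verified in exact arithmetic. The following sketch implements the two formulas in (2.5) together with (2.11) using Python Fractions (the Bernoulli numbers are generated by the standard recursion, a choice not specified in the paper), and checks $\kappa^{(1)}_k$ for $k\leq 4$ against the values listed later in this section:

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli(N):
    """B_0..B_N as exact Fractions (B_1 = -1/2 convention; only even indices matter)."""
    B = [Fraction(0)] * (N + 1)
    B[0] = Fraction(1)
    for m in range(1, N + 1):
        B[m] = -sum(Fraction(comb(m + 1, j)) * B[j] for j in range(m)) / (m + 1)
    return B

B = bernoulli(12)

def kappa0(k):        # E[(T^(0))^k], first formula in (2.5)
    return (Fraction(factorial(k) * (-2) ** (k + 1), factorial(2 * k) * (k + 1))
            * (1 - 4 ** (k + 1)) * B[2 * (k + 1)])

def kappa_half(k):    # E[(T^(1/2))^k], second formula in (2.5)
    return (Fraction(factorial(k), factorial(2 * k))
            * (1 - 2 * k) * (-2) ** (3 * k) * B[2 * k])

def kappa1(k):        # E[(T^(1))^k] via Corollary 2.1, eq. (2.11)
    return -sum(Fraction(comb(k, n) * (-2) ** (2 * n + 1), (2 * n + 1) * (2 * n + 2))
                * kappa0(n) * kappa_half(k - n) for n in range(k + 1))

assert [kappa0(k) for k in (1, 2, 3, 4)] == \
    [Fraction(1, 2), Fraction(1, 3), Fraction(17, 60), Fraction(31, 105)]
assert [kappa_half(k) for k in (1, 2, 3, 4)] == \
    [Fraction(2, 3), Fraction(8, 15), Fraction(32, 63), Fraction(128, 225)]
assert [kappa1(k) for k in (1, 2, 3, 4)] == \
    [Fraction(1), Fraction(4, 3), Fraction(12, 5), Fraction(608, 105)]
```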

Using the closed-form Laplace transforms in (2.4), it is possible to derive a simplified formula for $\kappa^{(\alpha)}_k$ with $\alpha \in\{\tfrac{1}{2},\tfrac{2}{3},\tfrac{3}{4}\}$ that does not involve a double sum. Regrettably, for a general $\alpha\in[0,1]$ , we are unable to simplify the polynomial expressions in Proposition 2.1. Nevertheless, Proposition 2.1 allows us to iteratively construct the exact form of $\kappa^{(\alpha)}_k$ for all $k\in\mathbb{N}$ . To provide a more concrete illustration of the functional form of $\kappa^{(\alpha)}_k$ derived in Proposition 2.1, we list a few expressions for $k\leq 4$ :

\begin{equation*}\begin{split}\kappa^{(\alpha)}_1&=\dfrac{1}{2-\alpha}, \quad \kappa^{(\alpha)}_2 = \dfrac{4}{3(4-3\alpha)}, \quad\kappa^{(\alpha)}_3= \dfrac{4(20\alpha^2-62\alpha+51)}{15(2-\alpha)(4-3\alpha)(6-5\alpha)},\\[5pt] \kappa^{(\alpha)}_4& = \dfrac{32(372-757\alpha+530\alpha^2-126\alpha^3)}{105(2-\alpha)(4-3\alpha)(6-5\alpha)(8-7\alpha)}.\end{split}\end{equation*}

Setting $\alpha\in\{0,\tfrac{1}{2},1\}$ , we obtain the values

\begin{equation*}\begin{split}\kappa^{(0)}_1 &= \dfrac{1}{2},\quad \kappa^{(0)}_2 = \dfrac{1}{3},\quad \kappa^{(0)}_3=\dfrac{17}{60},\quad\kappa^{(0)}_4 = \dfrac{31}{105},\\[5pt] \kappa^{(1/2)}_1 &= \dfrac{2}{3},\quad \kappa^{(1/2)}_2 = \dfrac{8}{15},\quad \kappa^{(1/2)}_3=\dfrac{32}{63},\quad\kappa^{(1/2)}_4 = \dfrac{128}{225},\\[5pt] \kappa^{(1)}_1 &= 1,\quad \kappa^{(1)}_2 = \dfrac{4}{3},\quad \kappa^{(1)}_3=\dfrac{12}{5},\quad\kappa^{(1)}_4 = \dfrac{608}{105},\end{split}\end{equation*}

and it is routine to verify that these values are indeed consistent with the analytical expression in equation (2.5) and Corollary 2.1. The simple result of $\mathbb{E}[T^{(1)}]=1$ is worth highlighting, which means that the Brownian reversal process $V_t$ hits 1 with a unit mean hitting time. This coincides with the well-known result $\mathbb{E}[T_1(|W|)]=1$ , which is the expected time for a standard Brownian motion to exit $[\!-\!1,1]$ . However, as $\mathbb{E}[{\textrm{e}}^{-\lambda T_1(|W|)}]=\cosh[\sqrt{2\lambda}]^{-1}$ , the distribution of $T_1(|W|)$ is completely different from that of $T^{(1)}$ despite the identical first moment. For example, one can verify that $2\textrm{Var}[T^{(1)}]=\textrm{Var}[T_1(|W|)]=\tfrac{2}{3}$ , so $T^{(1)}$ is less dispersed than $T_1(|W|)$ .

We proceed to examine $\mu_k^{(\alpha)}$ for $k\in\mathbb{N}$ , which can be evaluated (see [Reference Cressie and Borkent9]) from the following fractional derivative of the Laplace transform of $T^{(\alpha)}$ since $\mu_k^{(\alpha)}\equiv \mathbb{E}[(T^{(\alpha)})^{-k/2}]$ :

(2.12) \begin{equation}\mu_k^{(\alpha)} = \dfrac{1}{\Gamma(k/2)}\int_0^\infty \mathcal{L}_\alpha(\lambda)\lambda^{k/2-1}\,{\textrm{d}}\lambda,\end{equation}

which converges for all $k\in\mathbb{N}$ as the moments of $S^{(\alpha)}$ are finite. We derive the following representation.

Proposition 2.2. For all $k\in\mathbb{N}$ and $\alpha\in[0,1]$ , it holds that

(2.13) \begin{equation}\mu_k^{(\alpha)} = \dfrac{2^{1-k/2}}{\Gamma(k/2)}\sum_{n=0}^\infty\dfrac{D_n^{(\alpha)}}{n+1} \int_0^1 \textrm{arctanh} [y]^{k-1} y^{2n} \,{\textrm{d}} y.\end{equation}

We obtain some more tractable formulae below using (2.13).

Corollary 2.2. For all $k\in\mathbb{N}$ , it holds that

(2.14) \begin{equation}\mu_k^{(1)} = -\dfrac{2^{1-k/2}}{\Gamma(k/2)}\int_0^1 y^{-2}\ln(1-y^2) \textrm{arctanh} [y]^{k-1} \,{\textrm{d}} y.\end{equation}

For $\alpha\in(0,1)\setminus\{\tfrac{1}{2}\}$ , we have

(2.15) \begin{equation}\begin{split}\mu_1^{(\alpha)} &= \sqrt{\dfrac{2}{\pi}}\dfrac{\bar\alpha}{\bar\alpha-1}(2\ln 2-\Psi_0(\bar\alpha)),\\[5pt] \mu_2^{(\alpha)} &=\dfrac{\bar{\alpha}}{\bar{\alpha}-1}\biggl( \dfrac{\pi^2}{12}+(\ln2 -\Psi_0(\bar{\alpha})/2)^2+\dfrac{\Psi_1(\bar{\alpha})}{4} \biggr),\end{split}\end{equation}

where

\[\Psi_k(x) = \psi_k\biggl(\frac{1+x}{2}\biggr)-\psi_k\biggl(\frac{x}{2}\biggr),\]

and $\psi_k(x)$ is the kth-order polygamma function, defined as the kth derivative of the digamma function $\psi_0(x)\;:\!=\; {\textrm{d}}\ln\Gamma(x)/{\textrm{d}} x$ .
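As a numerical cross-check of the first formula in (2.15), the sketch below (relying on scipy's digamma; the evaluation points near $\{0,\tfrac12,1\}$ are arbitrary) confirms that $\mu_1^{(\alpha)}$ approaches the limiting values $\sqrt{8/\pi}$, $\pi^{3/2}/(3\sqrt{2})$, and $\sqrt{8/\pi}\ln2$ at $\alpha\in\{0,\tfrac12,1\}$, where the formula itself is of indeterminate form:

```python
import math
from scipy.special import digamma

def mu1(alpha):
    """E[S^(alpha)] from (2.15), valid for alpha in (0,1) with alpha != 1/2."""
    abar = alpha / (1.0 - alpha)
    psi0 = digamma((1.0 + abar) / 2.0) - digamma(abar / 2.0)   # Psi_0(abar)
    return (math.sqrt(2.0 / math.pi) * abar / (abar - 1.0)
            * (2.0 * math.log(2.0) - psi0))

# limiting values at alpha in {0, 1/2, 1}
assert abs(mu1(1e-8) - math.sqrt(8.0 / math.pi)) < 1e-5
assert abs(mu1(0.5 + 1e-7) - math.pi ** 1.5 / (3.0 * math.sqrt(2.0))) < 1e-4
assert abs(mu1(1.0 - 1e-8) - math.sqrt(8.0 / math.pi) * math.log(2.0)) < 1e-5
```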

Remark 2.4. The definite integral in (2.14) can be calculated explicitly for $k\in\{1,2,3\}$ via MATHEMATICA $^{\circledR}$ :

(2.16) \begin{equation}\mu_1^{(1)} = \sqrt{8/\pi}\ln2, \quad \mu_2^{(1)} = \pi^2/12+(\ln 2)^2,\quad \mu_3^{(1)}=\dfrac{\pi^2\ln2}{3\sqrt{2\pi}}+ \dfrac{3\zeta(3)}{2\sqrt{2\pi}}.\end{equation}

This should be compared with the following values generated by (2.5):

(2.17) \begin{equation}\begin{split}\mu_1^{(0)} &= \sqrt{\dfrac{8}{\pi}}, \quad \mu_2^{(0)} = 4\ln 2, \quad \mu_{3}^{(0)}=\dfrac{\sqrt{8}}{3}\pi^{3/2}, \\[5pt] \mu_1^{(1/2)}&=\dfrac{\pi^{3/2}}{3\sqrt{2}}, \quad \mu_2^{(1/2)} = \dfrac{3\zeta(3)}{2}, \quad \mu_{3}^{(1/2)}=\dfrac{\pi^{7/2}}{30\sqrt{2}}.\end{split}\end{equation}

Thus the moments $\mu_k^{(1)}$ appear to be more sophisticated than the special cases in (2.17). Using properties of polygamma functions and L’Hôpital’s rule, one can verify that the limits of the functions in (2.15) at $\alpha\in\{0,\tfrac{1}{2},1\}$ coincide with the corresponding values in (2.16) and (2.17). However, we are unable to derive a general expression for $k>2$ , in which case the integral in (2.13) does not simplify to elementary functions of n. In such cases, it is more convenient to calculate the moments by numerically integrating (2.12) than by summing the series (2.13), as the latter converges very slowly when $\alpha$ is near zero.

Remark 2.5. The quantity $\sqrt{8/\pi}\ln2$ also appears in [Reference Douady, Shiryaev and Yor11] as $\mathbb{E}[\mathbb{D}_1] = \mu_1^{(1)}$ , where $\mathbb{D}_1=W_\sigma - \inf_{\sigma\leq t\leq 1} W_t$ and $\sigma$ is the time at which W attains its maximum on [0,1]; that is, $\mathbb{D}_1$ is the maximum Brownian ‘downfall’ from its global maximum $W_\sigma$ on [0,1] to the partial minimum on $[\sigma,1]$ . The distribution of $\mathbb{D}_1$ is closely related to the Brownian meander, as discussed in [Reference Durrett and Iglehart12]. However, the equality $\mathbb{E}[\mathbb{D}_1] = \mu_1^{(1)}$ only holds for the first moment: we find $\mathbb{E}[\mathbb{D}_1^2] ={{\pi^2}/{6}}\neq \mu_2^{(1)}$ by integrating the density of $\mathbb{D}_1$ .

We conclude this section with some comments on evaluating the density function of $T^{(\alpha)}$ , denoted by $f_T(t;\alpha)$ , which also produces the density function of $S^{(\alpha)}$ by a standard Jacobian transform. By definition, we have $\mathcal{L}_\alpha(\lambda)=\int_0^\infty {\textrm{e}}^{-\lambda t}f_T(t;\alpha)\,{\textrm{d}} t$ , so that for almost all $t\geq 0$ , $f_T(t;\alpha)$ can in principle be recovered by the inverse Laplace transform of $\mathcal{L}_\alpha(\lambda)$ (see e.g. [Reference Widder33]). However, apart from some special cases (e.g. $\alpha=0,\tfrac{1}{2},\tfrac{3}{4},\tfrac{5}{6},\ldots$ ), the inverse Laplace transform does not appear to admit an analytical solution, even for the relatively simple $\mathcal{L}_1(\lambda)$ . Therefore, for a general $\alpha\in[0,1]$ we recommend calculating $f_T(t;\alpha)$ by numerical inverse Laplace transforms. For some selected values of $\alpha$ , we present the density functions $f_T(t;\alpha)$ in Figure 2. The figure shows that $T^{(\alpha)}$ has a unimodal and right-skewed distribution for every choice of $\alpha\in[0,1]$ considered. The probability mass moves towards the right tail as $\alpha$ increases, which reflects the fact that $T^{(\alpha)}$ is stochastically increasing in $\alpha$ .
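For a concrete example of such numerical inversion, the sketch below uses mpmath's invertlaplace with the Talbot method (one of several possible inversion schemes; the evaluation points and precision are arbitrary choices) to invert $\mathcal{L}_{1/2}(\lambda)/\lambda$, which yields the distribution function of $T^{(1/2)}$ directly:

```python
import mpmath as mp

mp.mp.dps = 25   # working precision

# Laplace transform of the CDF of T^(1/2): L_{1/2}(p)/p = 2 / sinh(sqrt(2p))^2
G = lambda p: 2 / mp.sinh(mp.sqrt(2 * p)) ** 2

def cdf_T_half(t):
    """P(T^(1/2) <= t) via numerical (Talbot) Laplace inversion."""
    return float(mp.invertlaplace(G, t, method='talbot'))

vals = [cdf_T_half(t) for t in (0.3, 0.6, 1.0, 4.0)]
# the CDF is increasing and essentially 1 far in the right tail
assert all(a < b for a, b in zip(vals, vals[1:]))
assert vals[-1] > 0.995 and all(-1e-6 < v < 1 + 1e-6 for v in vals)
```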

Figure 2. Probability density functions of $T^{(\alpha)}$ for various choices of $\alpha$ . Each line plots $f_T(t;\alpha)$ for the choice of $\alpha$ presented in the figure legend. Apart from the $\alpha=0$ case where the expression of $f_T(t;0)$ can be found in [Reference Chong, Cowan and Holst6, eq. (4)], all the densities are generated by numerical inverse Laplace transforms via MATHEMATICA $^{\circledR}$ .

3. Spot volatility estimation with price ranges and reversals

In this section, we show that $S^{(\alpha)}$ can be used to construct simple yet precise estimators of the spot volatility of financial returns, a key economic variable of interest to investors and policy makers. We shall adopt the setting in [Reference Bollerslev, Li and Li3], [Reference Bollerslev, Li and Liao4], and [Reference Li, Wang and Zhang25], and assume that the log-price of an asset $(P_t)_{t\geq 0}$ follows a semi-martingale of the form

(3.1) \begin{equation}P_t = P_0+\int_0^t \mu_s \,{\textrm{d}} s + \int_0^t\sigma_s \,{\textrm{d}} W_s+ J_t, \quad t\in[0,T],\end{equation}

where W is a standard Brownian motion driving the diffusive part of the price process, the drift process $\mu$ and the volatility process $\sigma$ are both càdlàg (right continuous with left limits) and adapted, and J is a pure-jump process driven by some Poisson random measure. The interval [0, T] can be understood as the length of a trading session when the price is observed, and our inference target can be $\sigma_\tau$ or $\sigma_\tau^2$ at some fixed time $\tau\in(0,T)$ , which are known as the spot volatility or the spot variance at time $\tau$ .

Following [Reference Bollerslev, Li and Li3] and [Reference Li, Wang and Zhang25], to facilitate our asymptotic analysis on spot volatility estimation, we impose some regularity conditions on the processes above.

Assumption 3.1. For the price process P in (3.1), we assume that there exists a sequence of diverging stopping times $(T_m)_{m\geq 1}$ and a sequence of finite constants $(K_m)_{m\geq 0}$ such that, for all $m\geq 1$ :

  1. (i) $|\mu_t| + |\sigma_t| + |\sigma_t|^{-1} + F_t(\mathbb{R}\setminus\{0\})\leq K_m$ for all $t\in[0,T_m]$ , where $F_t$ denotes the spot Lévy measure of J;

  2. (ii) for all $s,t\in[0,T]$ , it holds that $\mathbb{E}[|\sigma_{t\wedge T_m}-\sigma_{s\wedge T_m}|^2]\leq K_m|t-s|^{2\kappa}$ for some constant $\kappa >0$ .

As discussed in [Reference Bollerslev, Li and Li3] and [Reference Li, Wang and Zhang25], the above assumption is very mild. It only assumes local boundedness of various processes and that $\sigma_t$ is $\kappa$ -Hölder-continuous in the $L_2$ -norm. The assumption encompasses almost all stylized facts of asset prices, such as stochastic volatility with a diurnal pattern and leverage effect. In particular, by choosing $\kappa<1/2$ , we allow $\sigma_t$ to have a ‘rough’ path, which is advocated by recent studies in the financial mathematics literature [Reference El Euch, Fukasawa and Rosenbaum13, Reference El Euch and Rosenbaum14, Reference Gatheral, Jaisson and Rosenbaum17].

To estimate the spot quantities at some predetermined time $\tau$ , we focus on a shrinking window starting at $\tau$ , $I_n=[\tau,\tau+\Delta_n]\subset[0,T]$ , where n represents the stages of the statistical experiment, and the asymptotic limit is achieved by letting $\Delta_n\to0$ as $n\to\infty$ . This corresponds to the fixed-k asymptotics in [Reference Bollerslev, Li and Liao4] using exactly $k=1$ block, and our theory naturally extends to the case with fixed $k>1$ following the procedure in [Reference Li, Wang and Zhang25].

We construct the following functional from the observed price path on $I_n$ , which is essentially the same functional as $S^{(\alpha)}$ constructed from P instead of W, up to a normalization factor of $\Delta_n^{1/2}$ :

\begin{equation*}v_n^{(\alpha)}\;:\!=\; \Delta_n^{-1/2}\sup_{t\in I_n}\{w_{t} - \alpha|r_{t}| \},\quad \alpha\in[0,1],\end{equation*}

where $r_{t} \;:\!=\; P_{t} - P_{\tau}$ is the log-return on the interval $[\tau,t]$ for some $t\in I_n$ , and $w_{t}\;:\!=\; h_{t}-l_{t}$ is the price range on $[\tau, t]$ , in which the ‘high’ and ‘low’ returns on $[\tau,t]$ are defined as $h_{t}\;:\!=\; \sup_{s\in[\tau,t]} r_{s}$ and $l_{t}\;:\!=\; \inf_{s\in[\tau,t]} r_{s}$ , respectively. As a comparison, a candlestick chart observed on $I_n$ consists of the open-to-close, high, and low returns on $I_n$ , $(r_{\tau+\Delta_n},h_{\tau+\Delta_n},l_{\tau+\Delta_n})$ . From here, it is clear that both $v_n^{(\alpha)}$ and the candlestick data use the entire price path of P on $I_n$ . The main difference is that $v_n^{(\alpha)}$ allows us to measure the maximum price reversal through the outer supremum operator, while the candlestick data can only measure the price reversal at the end-point of the interval, i.e. $w_{\tau+\Delta_n} - |r_{\tau+\Delta_n}|$ . Note that in practice we only observe a discrete price path contaminated by measurement errors. Nevertheless, as argued in [Reference Bollerslev, Li and Li3] and [Reference Li, Wang and Zhang25], we may choose $\Delta_n$ to ‘not-too-finely’ sample the high-frequency prices, such that the price functionals can reliably approximate those computed from the efficient price path.
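In implementation terms, $v_n^{(\alpha)}$ only requires running maxima and minima of the observed log-returns. The sketch below (the toy price path and window length are arbitrary illustrations, not real data) computes $v_n^{(\alpha)}$ from a discrete price record on $I_n$:

```python
import numpy as np

def v_n(prices, delta_n, alpha):
    """Range-reversal functional v_n^(alpha) from a discrete log-price path on I_n.

    prices  : log-prices observed on [tau, tau + Delta_n] (first entry is P_tau)
    delta_n : window length Delta_n
    """
    r = prices - prices[0]            # log-returns r_t = P_t - P_tau
    h = np.maximum.accumulate(r)      # running 'high' h_t
    l = np.minimum.accumulate(r)      # running 'low'  l_t
    w = h - l                         # running price range w_t
    return np.max(w - alpha * np.abs(r)) / np.sqrt(delta_n)

# toy example: v_n^(0) is the scaled full range, and v_n^(alpha) decreases in alpha
p = np.array([0.0, 0.4, 0.1, -0.3, 0.2, 0.1])
assert np.isclose(v_n(p, 1.0, 0.0), 0.7)    # range = 0.4 - (-0.3)
assert v_n(p, 1.0, 1.0) <= v_n(p, 1.0, 0.5) <= v_n(p, 1.0, 0.0)
```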

In the same vein as [Reference Bollerslev, Li and Li3], [Reference Bollerslev, Li and Liao4], [Reference Jacod, Li and Liao20], and [Reference Li, Wang and Zhang25], we show that $v_n^{(\alpha)}$ is coupled with a limit Brownian functional that has the same distribution as $S^{(\alpha)}$ scaled by $\sigma_\tau$ .

Theorem 3.1. Under Assumption 3.1 as $n\to \infty$ , it holds that

(3.2) \begin{equation}\sup_{\alpha\in[0,1]}|v_n^{(\alpha)} - \sigma_\tau S^{(\alpha)}_n| = {\textrm{o}}_p(1),\end{equation}

where

\[S^{(\alpha)}_n\;:\!=\; \Delta_n^{-1/2}\sup_{t\in I_n}\biggl\{\sup_{s\in[\tau,t]}W_s -\inf_{s\in[\tau,t]}W_s -\alpha|W_{t}-W_\tau|\biggr\}\]

satisfies $S^{(\alpha)}_n \overset{d}{=}S^{(\alpha)}$ , for all $\alpha\in[0,1],n\in\mathbb{N}$ .

Equation (3.2) implies that the family of functionals $(v_n^{(\alpha)})_{\alpha\in[0,1]}$ is, up to a scaling factor, coupled with the family of limit experiments $(S^{(\alpha)}_n)_{\alpha\in[0,1]}$ , whose distribution does not depend on n and is identical to $S^{(\alpha)}$ that we discussed in the previous section. The spot volatility $\sigma_\tau$ arises as the scale parameter of the limit experiment due to the linear design of the functional. This naturally leads to a class of scale-equivariant estimators of $\sigma^p_\tau$ for some $p>0$ defined as $\widehat\sigma^p(\alpha) \;:\!=\; C_p(\alpha)(v^{(\alpha)}_n)^p$ , where $C_p(\alpha)$ is some positive function of $\alpha$ . By the continuous mapping theorem and the asymptotic equivalence lemma, Theorem 3.1 implies that

\begin{equation*}\widehat\sigma^p(\alpha)/\sigma_\tau^p - C_p(\alpha)(S^{(\alpha)}_n)^p\xrightarrow{p} 0 \Rightarrow \widehat\sigma^p(\alpha)/\sigma_\tau^p\xrightarrow{d} C_p(\alpha)(S^{(\alpha)})^p\quad \text{for all $\alpha\in[0,1],$}\end{equation*}

where the limiting variable is a pivotal quantity that is invariant to the unknown (possibly stochastic) parameter $\sigma_\tau$ . This allows us to construct asymptotic confidence intervals for $\sigma_\tau^p$ . In detail, for a confidence level $\rho$ , a valid confidence interval for $\sigma_\tau^p$ can be constructed by choosing $U_\rho>L_\rho>0$ that satisfies the coverage constraint

\begin{equation*}\lim_{\Delta_n\to0}\mathbb{P}(U^{-1}_\rho< \widehat\sigma^p(\alpha)/\sigma_\tau^p <L^{-1}_\rho )=\rho\Leftrightarrow \mathbb{P}((C_p(\alpha)L_\rho)^{2/p}< T^{(\alpha)} < (C_p(\alpha)U_\rho)^{2/p} )=\rho,\end{equation*}

which is based on the distribution of $T^{(\alpha)}$ that can be evaluated from the Laplace transforms in Theorem 2.1. The constants $U_\rho$ and $L_\rho$ can be uniquely determined by minimizing a volume measure of the interval, such as the interval length or the ratio of end-points.

More importantly, we can directly assess the precision of the class of estimators $ \widehat\sigma^p(\alpha)$ through the properties of its scale-invariant limiting distribution, which leads to optimal choices of $C_p(\alpha)$ and $\alpha$ . Following the classic statistical literature [Reference Le Cam23, Reference Lehmann and Casella24, Reference van der Vaart31] and the procedure in [Reference Bollerslev, Li and Li3], we evaluate the precision of the estimator $\widehat\sigma^p(\alpha)$ by its scale-invariant asymptotic mean squared error (AMSE), defined as

(3.3) \begin{equation}\text{AMSE}(\widehat\sigma^p(\alpha)) \;:\!=\; \mathbb{E}\bigl[(C_p(\alpha)(S^{(\alpha)})^p-1)^2\bigr]=C_p(\alpha)^2\mu_{2p}^{(\alpha)} - 2C_p(\alpha)\mu_{p}^{(\alpha)}+1.\end{equation}

Two choices of $C_p(\alpha)$ are common in this context, which will be our focus here: (1) by imposing an asymptotic unbiasedness (UB) condition, we obtain the estimator $\widehat\sigma^p_{\mathit{UB}}(\alpha) \;:\!=\; (v^{(\alpha)}_n)^p/\mu_p^{(\alpha)}$; (2) by differentiating (3.3) with respect to $C_p(\alpha)$ and minimizing the AMSE, we arrive at $\widehat\sigma^p_{\mathit{MSE}}(\alpha) \;:\!=\; (v^{(\alpha)}_n)^p\mu_{p}^{(\alpha)}/\mu_{2p}^{(\alpha)}$. The AMSEs of the two estimators have the following simplified expressions:

(3.4) \begin{equation}\text{AMSE}(\widehat\sigma^p_{\mathit{UB}}(\alpha)) =\dfrac{\mu_{2p}^{(\alpha)}}{(\mu_p^{(\alpha)})^2} - 1, \quad \text{AMSE}(\widehat\sigma^p_{\mathit{MSE}}(\alpha))= 1-\dfrac{(\mu_{p}^{(\alpha)})^2}{\mu_{2p}^{(\alpha)}},\end{equation}

which can be directly computed using the results in Corollary 2.2 for every $\alpha$. It is natural to ask whether the above AMSEs can be further minimized over $\alpha\in[0,1]$. Conveniently, it is easy to see that $\text{AMSE}(\widehat\sigma^p_{\mathit{UB}}(\alpha))$ and $\text{AMSE}(\widehat\sigma^p_{\mathit{MSE}}(\alpha))$ share the same first-order optimality condition with respect to $\alpha$, so a solution $\alpha^*_p$ of this condition is a stationary point of both AMSEs simultaneously. We verify graphically in Figure 3 that $\alpha^*_p$ is indeed a global minimizer on $\alpha\in[0,1]$ for $p\in\{1,2\}$.
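As a concrete numerical illustration (not taken from the paper's tables), consider the range-only case $\alpha=0$ with $p=1$: here $S^{(0)}=R_1$ because the range process is non-decreasing, and the classical moments $\mu_1^{(0)}=2\sqrt{2/\pi}$ and $\mu_2^{(0)}=4\ln 2$ of the Brownian range are available. A minimal Python sketch evaluating (3.4) under these values:

```python
import math

# Classical moments of the Brownian range R_1 = M_1 - m_1 on [0, 1]
mu1 = 2.0 * math.sqrt(2.0 / math.pi)  # E[R_1]
mu2 = 4.0 * math.log(2.0)             # E[R_1^2]

# AMSEs from (3.4) with p = 1 and alpha = 0, so S^(0) = R_1
amse_ub = mu2 / mu1 ** 2 - 1.0        # asymptotically unbiased estimator
amse_mse = 1.0 - mu1 ** 2 / mu2       # MSE-optimal estimator
```

The MSE-optimal choice is never worse than the unbiased one; here it gives roughly 0.082 against 0.089.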

Figure 3. AMSEs of $\widehat\sigma^p_{\mathit{UB}}(\alpha)$ and $\widehat\sigma^p_{\mathit{MSE}}(\alpha)$ for $p\in\{1,2\}$ as a function of $\alpha$ . The optimal choices $\alpha^*_p$ are solved numerically by minimizing (3.4) via MATHEMATICA $^{\circledR}$ .

Figure 3 reveals substantial AMSE reductions when one optimally mixes the price range and the price reversal through $v_n^{(\alpha)}$ , in comparison to the estimators that only depend on the price range. To quantify the precision gain and compare with the state-of-the-art benchmark estimators, we present the numerical values of the AMSEs for different choices of $\alpha$ in Table 1, along with the AMSEs of the optimal candlestick-based spot volatility estimators of [Reference Bollerslev, Li and Li3], which we shall abbreviate as the BLL estimator.

Table 1 confirms that, compared to the estimators using only price ranges ( $\alpha=0$ ), the optimal choices $\alpha^*_p$ roughly halve the AMSEs. This vast AMSE reduction is consistent with our intuition that the reversal part of the price path contains substantial information about volatility that is not captured by the price range. The AMSEs of the BLL estimators are only slightly better than those of our estimators using solely the maximal price reversal ( $\alpha=1$ ), but still more than 20% higher than those of our optimal ones. This suggests that $v_n^{(1)}$ alone conveys roughly the same amount of information about the spot volatility as the candlestick data observed in the same interval. From our previous discussion, we can view the BLL estimators as optimal combinations of the price range $w_{\tau + \Delta_n}$ and the price reversal $w_{\tau + \Delta_n}-|r_{\tau + \Delta_n}|$ measured at the end-point of $I_n$. This is enhanced by $v_n^{(\alpha)}$, which seeks the optimal combination over the entire price trajectory on $I_n$ through the outer supremum operator.

To reflect on the results in Figure 3 and Table 1, we argue that, in a thought experiment where retail investors can choose to acquire either $v_n^{(\alpha)}$ or the candlestick data to learn about the spot volatility of the asset, $v_n^{(\alpha)}$ would be the preferred choice due to its lower AMSEs and the simpler functional form of the estimator. In practice, this precision gain may not be easily achievable by retail investors, as $v_n^{(\alpha)}$ requires tick-level transaction data, which is costly to obtain, while high-frequency candlestick charts, despite also being constructed from tick-level data, are widely available from trading platforms and apps by convention. However, it is important to point out that candlestick charts, which have been a prevailing technical analysis tool around the world for centuries, can still be easily improved upon by the relatively simple and analytically tractable functional $v_n^{(\alpha)}$. Further precision gains may be possible by optimally combining the candlestick data with $v_n^{(\alpha)}$. Unfortunately, this requires knowledge of the joint law of $(S^{(\alpha)}, W_1,M_1,m_1)$, which seems out of reach by the methods developed in this paper.

We conclude this paper with an intriguing information design problem about high-frequency data: over a given interval of length $\Delta_n$, which functional is the most efficient at summarizing the information about $\sigma_\tau$ in the potentially noisy price trajectory? The answer to this question would allow data vendors to provide more effective summary statistics for retail investors to monitor market movements and risks. We are unable to provide a definite answer here, but we hope that the new functional $S^{(\alpha)}$ presents a promising path towards this ultimate goal.

4. Proofs

Proof of Theorem 2.1. We start with the observation that $s^{(\alpha)}_{t}\leq R_t$ for all $\alpha\in[0,1]$ and all t. Therefore $R$ must hit 1 no later than $s^{(\alpha)}$, which allows us to decompose $T_1(s^{(\alpha)})$ as

(4.1) \begin{equation}T_1(s^{(\alpha)}) = T_1(R)+ \tau^{(\alpha)},\end{equation}

where $\tau^{(\alpha)}\geq 0$ is another Brownian stopping time which is identically zero if $\alpha = 0$ (since by definition $T_1(s^{(0)})\equiv T_1(R)$ ), and its properties for $\alpha \in (0,1]$ will be elaborated in what follows. Let $W_{T_1(R)}$ denote the value of W at the time $T_1(R)$ . We shall use the following Laplace transform of the pair $(T_1(R),W_{T_1(R)})$ , which can be found in [Reference Borodin and Salminen5, page 242]:

\begin{equation*}\mathbb{E}\bigl[{\textrm{e}}^{-\lambda T_1(R)} ; W_{T_1(R)}\in \,{\textrm{d}} w\bigr] = \dfrac{\sqrt{2\lambda}\sinh[\sqrt{2\lambda}|w|] }{\sinh[\sqrt{2\lambda}]^2 }\,{\textrm{d}} w,\quad |w|\leq 1.\end{equation*}
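As a quick numerical sanity check (not part of the proof), integrating this density over $w\in[-1,1]$ should recover the well-known Laplace transform $\mathbb{E}[{\textrm{e}}^{-\lambda T_1(R)}]=\cosh[\sqrt{\lambda/2}]^{-2}$. The following sketch verifies this at an arbitrary illustrative test point, using Simpson's rule:

```python
import math

lam = 0.7                       # arbitrary illustrative test point
z = math.sqrt(2.0 * lam)

def density(w):
    # Laplace-transformed joint density of (T_1(R), W_{T_1(R)}) at level w >= 0
    return z * math.sinh(z * w) / math.sinh(z) ** 2

# Simpson's rule on [0, 1], doubled by the symmetry of w
n = 2000
h = 1.0 / n
total = density(0.0) + density(1.0)
for i in range(1, n):
    total += (4 if i % 2 else 2) * density(i * h)
integral = 2.0 * total * h / 3.0

# closed form E[exp(-lam * T_1(R))] for the range hitting time
closed_form = math.cosh(math.sqrt(lam / 2.0)) ** -2
```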

Furthermore, the Markov property of the Brownian motion implies that $\tau^{(\alpha)}$ is independent of $ T_1(R)$ conditional on $W_{T_1(R)}$ . Therefore the key to deriving the density of $T_1(s^{(\alpha)})$ is to study the density of $\tau^{(\alpha)}$ conditional on $W_{T_1(R)}$ , and without loss of generality we may take $W_{T_1(R)}\geq 0$ by the symmetry of the Brownian motion.

Table 1. AMSEs of spot volatility and spot variance estimators

Note. For each $p\in\{1,2\}$ and $\alpha\in\{0,\alpha_p^*,1\}$ the table reports the AMSEs of the estimators $\widehat\sigma^p_{\mathit{UB}}(\alpha)$ (Asymptotically unbiased) and $\widehat\sigma^p_{\mathit{MSE}}(\alpha)$ (Optimal AMSE), where the values of $\alpha_p^*$ are reported in Figure 3. The columns headed BLL report the AMSEs of the optimal candlestick-based spot volatility estimators of [Reference Bollerslev, Li and Li3], where the asymptotically unbiased and the optimal AMSE estimators correspond to their optimal estimators under Stein’s loss and the quadratic loss, respectively. The AMSEs of these estimators are taken from Tables 1 and 2 of [Reference Bollerslev, Li and Li3].

To study the conditional density of $\tau^{(\alpha)}$ , let us consider the path of $W_t$ on $t\in [T_1(R), T_1(s^{(\alpha)})]$ . Define the re-centred Brownian motion $\tilde{W}_h\;:\!=\; W_{T_1(R)+h} -W_{T_1(R)}$ on $h\in[0,\tau^{(\alpha)}]$ with the time $T_1(R)$ normalized to 0, i.e. $h = t-T_1(R)$ . Since $T_1(s^{(\alpha)})\leq \inf\{t\geq T_1(R) \colon W_t\leq 0\}$ or equivalently $\tau^{(\alpha)}\leq \inf\{h\geq 0 \colon \tilde{W}_h\leq -W_{T_1(R)}\}$ , it is not hard to see that $\tilde{W}_0=0$ and $\tilde{W}_h\geq -W_{T_1(R)}$ on $h\in[0,\tau^{(\alpha)}]$ . Denoting $\tilde{M}_h \;:\!=\; \sup_{s\in [0,h]} \tilde{W}_s$ , one can show that, for all $t\in[T_1(R), T_1(s^{(\alpha)})]$ or $h\in[0,\tau^{(\alpha)}]$ ,

\begin{equation*}s^{(\alpha)}_{t} = R_t-\alpha|W_t| = 1+ \tilde{M}_{t-T_1(R)} - \alpha(\tilde{W}_{t-T_1(R)} + W_{T_1(R)})= 1+ \tilde{M}_{h} - \alpha(\tilde{W}_{h} + W_{T_1(R)}).\end{equation*}

Therefore, with $W_{T_1(R)}\geq 0$ , we see that

\begin{equation*}\tau^{(\alpha)} = \inf\{h\geq 0 \colon \tilde{M}_h-\alpha \tilde{W}_h \geq \alpha W_{T_1(R)} \}.\end{equation*}

Thus $\tau^{(\alpha)}$ is the first hitting time of the non-negative process $\tilde{M}_h-\alpha \tilde{W}_h $ to the level $\alpha W_{T_1(R)}$ .

To derive the Laplace transform of $\tau^{(\alpha)}$, we start with a well-known identity due to Lévy (see e.g. [Reference Karatzas and Shreve22, eq. (6.34)]), which holds almost surely:

\begin{equation*}(\tilde{M}_h-\tilde{W}_h, \tilde{M}_h)=(|B_h|,L_h) \quad \text{for all ${h>0}$,}\end{equation*}

where $B_h$ is a standard Brownian motion starting at zero and $L_h$ is its associated Brownian local time process at level 0. This implies that almost surely we have

\begin{equation*}\tilde{M}_h-\alpha \tilde{W}_h =\alpha|B_h| +(1-\alpha)L_h \quad \text{for all ${h}>0$,}\end{equation*}

which further leads to the observation that

\begin{equation*}\tau^{(\alpha)}|W_{T_1(R)}\overset{d}{=}T_{W_{T_1(R)}}(\Sigma^{(\bar\alpha)}),\end{equation*}

where

\begin{equation*}\Sigma_h^{(\bar\alpha)}\;:\!=\; |B_h| +\bar{\alpha}^{-1}L_h, \quad \bar\alpha = \dfrac{\alpha}{1-\alpha}.\end{equation*}

As $\alpha\in(0,1]\Leftrightarrow \bar\alpha \in(0,\infty]$ , the process $\Sigma_h^{(\bar\alpha)}$ is well-defined. Conditional on $W_{T_1(R)}$ , the Laplace transform of $T_{W_{T_1(R)}}(\Sigma^{(\bar\alpha)})$ for $\alpha<1$ can be found in [Reference Doney10, page 235] based on the calculations in [Reference Azéma and Yor1] (see also [Reference Jeulin and Yor21]):

\begin{align*} \mathbb{E}\bigl[{\textrm{e}}^{-\lambda \tau^{(\alpha)}}\mid W_{T_1(R)}=w\bigr] &=\mathbb{E}\bigl[{\textrm{e}}^{-\lambda T_{W_{T_1(R)}}(\Sigma^{(\bar\alpha)})}\mid W_{T_1(R)}=w\bigr] \\[5pt] & = \dfrac{\bar\alpha \sqrt{2\lambda}}{\sinh[w\sqrt{2\lambda}]^{\bar\alpha}}\int_0^w \sinh[y\sqrt{2\lambda}]^{\bar\alpha-1}\,{\textrm{d}} y\\[5pt] & =\dfrac{\bar\alpha}{2}\int_0^1\dfrac{t^{\bar{\alpha}/2-1}}{(1+\sinh[w\sqrt{2\lambda}]^2t)^{1/2}}\,{\textrm{d}} t\\[5pt] & = {}_2{F}_1\biggl(\dfrac{1}{2},\dfrac{\bar\alpha}{2};\;1+\dfrac{\bar\alpha}{2};-\sinh[w\sqrt{2\lambda}]^2\biggr),\end{align*}

where the third equality is derived by a change of variable $t= \sinh[y\sqrt{2\lambda}]^2/\sinh[w\sqrt{2\lambda}]^2$ , and the final result follows from the integral representation [Reference Olver, Olde Daalhuis, Lozier, Schneider, Boisvert, Clark, Miller, Saunders, Cohl and McClain26, eq. (15.6.1)] of the Gauss hypergeometric function. Now, by the conditional independence of $\tau^{(\alpha)}$ and $T_1(R)$ , we arrive at the Laplace transform of $T_1(s^{(\alpha)})$ by integrating out $W_{T_1(R)}$ :

(4.2) \begin{align}\mathcal{L}_{\alpha}(\lambda) & =\int_{-1}^1\mathbb{E}\bigl[{\textrm{e}}^{-\lambda T_1(R)} ; W_{T_1(R)}\in \,{\textrm{d}} w\bigr]\,\mathbb{E}\bigl[{\textrm{e}}^{-\lambda \tau^{(\alpha)}}\mid W_{T_1(R)}=w\bigr] \notag \\[5pt] &= \int_0^1 \dfrac{2\sqrt{2\lambda}\sinh[w\sqrt{2\lambda}] }{\sinh[\sqrt{2\lambda}]^2 } {}_2{F}_1\biggl(\dfrac{1}{2},\dfrac{\bar\alpha}{2};\;1+\dfrac{\bar\alpha}{2};-\sinh[w\sqrt{2\lambda}]^2\biggr)\,{\textrm{d}} w \notag \\[5pt] & =\int_0^1 \dfrac{2\sqrt{2\lambda}\tanh[w\sqrt{2\lambda}]}{\sinh[\sqrt{2\lambda}]^2} {}_2{F}_1\biggl(\dfrac{1}{2},1;\;1+\dfrac{\bar\alpha}{2};\tanh[w\sqrt{2\lambda}]^2\biggr)\,{\textrm{d}} w \notag \\[5pt] & = \sinh[\sqrt{2\lambda}]^{-2}\int_0^{\tanh[\sqrt{2\lambda}]^2}(1-y)^{-1} {}_2{F}_1\biggl(\dfrac{1}{2},1;\;1+\dfrac{\bar\alpha}{2};\;y\biggr)\,{\textrm{d}} y ,\end{align}

where a factor of 2 is added to account for the case $W_{T_1(R)}\in[\!-\!1,0]$, the second equality can be derived from ${}_2{F}_1(a,b;\;c;\;z)=(1-z)^{-a}{}_2{F}_1(a,c-b;\;c;\;z/(z-1))$ in [Reference Olver, Olde Daalhuis, Lozier, Schneider, Boisvert, Clark, Miller, Saunders, Cohl and McClain26, eq. (15.8.1)], and the last equality follows from the change of variable $y = \tanh[w\sqrt{2\lambda}]^2$. This proves (2.1).
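The chain of equalities for the conditional Laplace transform above can be spot-checked numerically. The sketch below (not part of the proof, with illustrative parameter values, Simpson's rule for the sinh-integral, and a truncated Gauss series standing in for ${}_2F_1$) compares the integral form with the hypergeometric form after the transformation (15.8.1):

```python
import math

lam, w, abar = 0.9, 0.6, 3.0   # illustrative parameter values
z = math.sqrt(2.0 * lam)

def hyp2f1(a, b, c, x, terms=400):
    # truncated Gauss hypergeometric series; adequate here since |x| < 1
    s, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
        s += term
    return s

# left-hand side: abar*sqrt(2*lam)/sinh(wz)^abar * int_0^w sinh(yz)^(abar-1) dy,
# with the integral computed by Simpson's rule
n = 2000
h = w / n
def g(y):
    return math.sinh(z * y) ** (abar - 1.0)
tot = g(0.0) + g(w)
for i in range(1, n):
    tot += (4 if i % 2 else 2) * g(i * h)
lhs = abar * z / math.sinh(z * w) ** abar * (tot * h / 3.0)

# right-hand side via the transformation (15.8.1):
# 2F1(1/2, abar/2; 1+abar/2; -sinh^2) = cosh^{-1} * 2F1(1/2, 1; 1+abar/2; tanh^2)
t = math.tanh(z * w) ** 2
rhs = hyp2f1(0.5, 1.0, 1.0 + abar / 2.0, t) / math.cosh(z * w)
```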

We now verify (2.2) and show that $\mathcal{L}_1(\lambda)$ can be obtained from $\mathcal{L}_\alpha(\lambda)$ by letting $\alpha \to 1$ . We shall start with a direct derivation of $\mathcal{L}_1(\lambda)$ . Notice that when $\alpha=1$ , we have $\tau^{(1)}\mid W_{T_1(R)}\overset{d}{=}T_{W_{T_1(R)}}(|B|)$ . In this case, the conditional Laplace transform of $T_{W_{T_1(R)}}(|B|)$ takes a simple form [Reference Borodin and Salminen5, page 355]:

\begin{equation*}\mathbb{E}\bigl[ {\textrm{e}}^{-\lambda \tau^{(1)}}\mid W_{T_1(R)}=w\bigr] = \cosh[|w|\sqrt{2\lambda}]^{-1}.\end{equation*}
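A numerical aside (not part of the proof): combining this conditional transform with the density of $(T_1(R),W_{T_1(R)})$ displayed earlier and integrating over $w$ gives $2\sinh[\sqrt{2\lambda}]^{-2}\ln\cosh[\sqrt{2\lambda}]$, which the following sketch checks at an arbitrary test point:

```python
import math

lam = 1.3                      # arbitrary illustrative test point
z = math.sqrt(2.0 * lam)

def integrand(w):
    # density of (T_1(R), W_{T_1(R)}) times the conditional transform sech(wz)
    return z * math.sinh(z * w) / (math.sinh(z) ** 2 * math.cosh(z * w))

# Simpson's rule on [0, 1], doubled by the symmetry of w
n = 2000
h = 1.0 / n
tot = integrand(0.0) + integrand(1.0)
for i in range(1, n):
    tot += (4 if i % 2 else 2) * integrand(i * h)
numeric = 2.0 * tot * h / 3.0

# closed form 2*sinh(z)^(-2)*ln(cosh(z)), i.e. the expression in (2.2)
closed = 2.0 * math.log(math.cosh(z)) / math.sinh(z) ** 2
```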

Plugging the above into (4.2) and simplifying the integral yields the same Laplace transform as in (2.2). To show that $\mathcal{L}_1(\lambda)\equiv \lim_{\alpha\to 1}\mathcal{L}_\alpha(\lambda)$ , we note that

\[ {}_2{F}_1\biggl(\frac{1}{2},1;\;1+\frac{\bar{\alpha}}{2};\;y\biggr)\]

is monotonically increasing on [0, 1] and satisfies, for all real $\bar{\alpha}>1$ ,

\begin{equation*}1 = {}_2{F}_1\biggl(\dfrac{1}{2},1;\;1+\dfrac{\bar{\alpha}}{2};\;0\biggr)<{}_2{F}_1\biggl(\dfrac{1}{2},1;\;1+\dfrac{\bar{\alpha}}{2};\;y\biggr)<{}_2{F}_1\biggl(\dfrac{1}{2},1;\;1+\dfrac{\bar{\alpha}}{2};\;1\biggr)= \dfrac{ \bar{\alpha} }{\bar{\alpha}-1},\end{equation*}

by the Gauss summation theorem. Since $\alpha\to 1\Leftrightarrow \bar\alpha\to\infty$ , we deduce that

\[\lim_{\alpha\to 1}{}_2{F}_1\biggl(\frac{1}{2},1;\;1+\frac{\bar{\alpha}}{2};\;y\biggr)=1\]

uniformly on $y\in[0,1]$ . Thus one can interchange the limit with the integration in the last line of (4.2) to derive

\begin{equation*}\lim_{\alpha\to1}\mathcal{L}_{\alpha}(\lambda) = \sinh[\sqrt{2\lambda}]^{-2}\int_0^{\tanh[\sqrt{2\lambda}]^2}(1-y)^{-1}\,{\textrm{d}} y=2\sinh[\sqrt{2\lambda}]^{-2}\ln\cosh[\sqrt{2\lambda}]\equiv \mathcal{L}_1(\lambda),\end{equation*}

as desired. Finally, to verify (2.3), note that by (4.1) we have $ T^{(\alpha)}\geq T_1(R)$ pathwise for all $\alpha\in[0,1]$ , which implies that for all $\lambda\geq 0$ and $\alpha\in[0,1]$ , ${\textrm{e}}^{-\lambda T^{(\alpha)}}\leq {\textrm{e}}^{-\lambda T_1(R)}$ and hence $\mathcal{L}_\alpha(\lambda)\leq\mathcal{L}_0(\lambda)$ by taking expectation. The analytical expression of $\mathcal{L}_0(\lambda)$ is well-known (see e.g. [Reference Chong, Cowan and Holst6], [Reference Imhof18]), from which one immediately concludes that $\mathcal{L}_0(\lambda)\leq\mathcal{L}_0(0)= 1$ for all $\lambda\geq 0$ . This completes the proof.

Proof of Proposition 2.1. We begin with the observation that $T^{(\alpha)}\leq T^{(1)}$ pathwise, and thus $\kappa^{(\alpha)}_k\leq \kappa^{(1)}_k$ for all $k\in\mathbb{N}$ . Therefore all moments of $T^{(\alpha)}$ exist if $\kappa^{(1)}_k<\infty$ for all $k\in\mathbb{N}$ . As $\mathcal{L}_1(\lambda)<\infty$ for all $\lambda\geq 0$ from Theorem 2.1, by differentiating under the integral sign, the dominated convergence theorem implies that

(4.3) \begin{equation}\kappa^{(1)}_k = \lim_{\lambda\to 0} \mathbb{E}[(T^{(1)})^k {\textrm{e}}^{-\lambda T^{(1)}} ] = \lim_{\lambda\to 0} (\!-\!1)^k\mathcal{L}^{(k)}_1(\lambda),\end{equation}

where $\mathcal{L}^{(k)}_\alpha(\lambda)$ is the kth derivative of $\mathcal{L}_\alpha(\lambda)$ with respect to $\lambda$ . Therefore it suffices to show that

\begin{equation*}\lim_{\lambda\to 0} (\!-\!1)^k\mathcal{L}^{(k)}_1(\lambda) = (\!-\!1)^k \mathcal{L}^{(k)}_1(0)<\infty,\end{equation*}

and the above quantity is guaranteed to be non-negative by the Bernstein–Widder theorem [Reference Widder32, Theorem 12a]. To prove this, we shall write

(4.4) \begin{equation}\mathcal{L}_1(\lambda) = \mathcal{L}_{1/2}(\lambda)\mathcal{G}_{1}(\lambda),\end{equation}

where $\mathcal{L}_{1/2}(\lambda)$ is given in (2.4), and $\mathcal{G}_{1}(\lambda) = \lambda^{-1}\ln\cosh[\sqrt{2\lambda}]$ . We thus have, by the general Leibniz rule,

\begin{equation*}\lim_{\lambda\to 0}(\!-\!1)^k\mathcal{L}^{(k)}_1(\lambda) = \sum_{n=0}^k \binom{k}{n}\kappa_n^{(1/2)} (\!-\!1)^{k-n}\lim_{\lambda\to 0}\mathcal{G}^{(k-n)}_{1}(\lambda),\end{equation*}

where $\mathcal{L}_{1/2}^{(n)}(0) =\lim_{\lambda\to 0}\mathcal{L}^{(n)}_{1/2}(\lambda) = (\!-\!1)^{-n}\kappa_n^{(1/2)}$ holds since $\mathcal{L}_{1/2}(\lambda)$ is the Laplace transform of $T^{(1/2)}$ with all moments finite (see equation (2.5) and [Reference Biane, Pitman and Yor2]). Therefore the above quantity is finite for all $k\in\mathbb{N}$ if

\begin{equation*}(\!-\!1)^{k}\lim_{\lambda\to 0}\mathcal{G}^{(k)}_{1}(\lambda) = (\!-\!1)^{k}\mathcal{G}^{(k)}_{1}(0)<\infty.\end{equation*}

The Maclaurin series of $\mathcal{G}_{1}(\lambda)$ is known [34]:

(4.5) \begin{equation}\mathcal{G}_{1}(\lambda) = \sum_{k=0}^\infty \dfrac{2^{3k+2}(4^{k+1}-1)B_{2(k+1)}}{(k+1)(2(k+1))!}\lambda^{k} = \sum_{k=0}^\infty\dfrac{(\!-\!1)^{-k}}{k!}\gamma_k^{(1)} \lambda^{k},\end{equation}

where $\gamma_k^{(1)}$ is defined in (2.10). From the above, we immediately see that $(\!-\!1)^{k}\mathcal{G}^{(k)}_{1}(0)=\gamma_k^{(1)}<\infty$, and in fact also $\gamma_k^{(1)}>0$ since $(\!-\!1)^{k}B_{2(k+1)}>0$ for all k. We thus conclude that all moments of $T^{(1)}$, and hence of $T^{(\alpha)}$ for $\alpha\in[0,1]$, are finite.
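The first few coefficients of (4.5) can be checked against the classical expansion $\ln\cosh x = x^2/2 - x^4/12 + x^6/45 - \cdots$, which gives $\mathcal{G}_1(\lambda) = 1 - \lambda/3 + 8\lambda^2/45 - \cdots$. A small exact-arithmetic sketch (with the needed Bernoulli numbers hardcoded):

```python
from fractions import Fraction
from math import factorial

# Bernoulli numbers B_2, B_4, B_6 (standard values, hardcoded)
B = {2: Fraction(1, 6), 4: Fraction(-1, 30), 6: Fraction(1, 42)}

def coeff(k):
    # k-th Maclaurin coefficient of G_1(lambda) as given in (4.5)
    return (Fraction(2 ** (3 * k + 2) * (4 ** (k + 1) - 1)) * B[2 * (k + 1)]
            / ((k + 1) * factorial(2 * (k + 1))))

coeffs = [coeff(k) for k in range(3)]   # expect 1, -1/3, 8/45
```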

The above result allows us to deduce $\kappa^{(\alpha)}_k = (\!-\!1)^k\mathcal{L}_\alpha^{(k)}(0)$ in view of (4.3) and the existence of the limit. To derive an explicit formula for $\kappa^{(\alpha)}_k$, we consider the following decomposition of $\mathcal{L}_\alpha(\lambda)$ similar to (4.4):

(4.6) \begin{equation}\mathcal{L}_\alpha(\lambda) = \mathcal{L}_{1/2}(\lambda)\mathcal{G}_{\alpha}(\lambda),\end{equation}

where $\mathcal{G}_{\alpha}(\lambda)$ takes the form

\begin{equation*}\mathcal{G}_{\alpha}(\lambda) = (2\lambda)^{-1}\int_0^{\tanh[\sqrt{2\lambda}]^2}(1-y)^{-1} {}_2{F}_1\biggl(\dfrac{1}{2},1;\;1+\dfrac{\bar\alpha}{2};\;y\biggr)\,{\textrm{d}} y.\end{equation*}

By the general Leibniz rule again, and since we know that the moments exist,

\begin{equation*}\kappa_{k}^{(\alpha)} =(\!-\!1)^k\mathcal{L}^{(k)}_{\alpha}(0)=(\!-\!1)^k\sum_{n=0}^k\binom{k}{n}\mathcal{L}^{(n)}_{1/2}(0)\mathcal{G}^{(k-n)}_{\alpha}(0).\end{equation*}

Comparing the above to (2.8), we only need to show that $\gamma_{k}^{(\alpha)}$ in (2.9) satisfies $\gamma_{k}^{(\alpha)}\equiv (\!-\!1)^k\mathcal{G}^{(k)}_{\alpha}(0)$ for all $k\in\mathbb{N}$ and $\alpha\in[0,1]$ . It suffices to show that $\mathcal{G}_\alpha(\lambda)$ has the following Maclaurin series:

(4.7) \begin{equation}\mathcal{G}_{\alpha}(\lambda) = \sum_{k=0}^\infty \dfrac{(\!-\!1)^k \gamma_{k}^{(\alpha)}}{k!}\lambda^k.\end{equation}

The case with $\alpha = 1$ is already proved in (4.5), and the case $\alpha = 1/2$ is trivial since $\mathcal{G}_{1/2}(\lambda)\equiv 1$ , so we simply have $\gamma_k^{(1/2)}\equiv \mathbb{1}_{\{k=0\}}$ . For $\alpha = 0$ , we have $\mathcal{G}_{0}(\lambda) = \lambda^{-1}(\cosh[\sqrt{2\lambda}]-1)$ with the following Maclaurin series:

\begin{equation*}\mathcal{G}_{0}(\lambda) = \sum_{k=0}^\infty \dfrac{2^{k+1}}{(2(k+1))!}\lambda^k = \sum_{k=0}^\infty \dfrac{(\!-\!1)^k\gamma_{k}^{(0)}}{k!}\lambda^k,\end{equation*}

where $\gamma_{k}^{(0)}$ and $\gamma_{k}^{(1/2)}$ correspond to those provided in (2.10), as desired.

For the general case $\alpha\in(0,1)\setminus\{\tfrac{1}{2}\}$ , we start with the claim that

\begin{equation*}\tanh[\sqrt{2x}]^{2k} = x^k\sum_{n=0}^\infty C_{n,k}x^{n}, \quad k\in\mathbb{N}.\end{equation*}

This follows easily by induction from the Maclaurin series of $\tanh[\sqrt{2x}]^{2}$ below, using the recursive relation of $C_{n,k}$ obtained from a Cauchy product argument,

\begin{equation*}\tanh[\sqrt{2x}]^{2} = x\sum_{n=0}^\infty C_{n,1}x^{n},\end{equation*}

which can be derived from the Maclaurin series of $\tanh(x)$ provided in [35].

We are now ready to derive the Maclaurin series of $\mathcal{G}_\alpha(\lambda)$ . Using the geometric series expansion and the Maclaurin series of the Gauss hypergeometric function, we deduce

(4.8) \begin{align}\mathcal{G}_\alpha(\lambda) &= (2\lambda)^{-1} \int_0^{\tanh[\sqrt{2\lambda}]^{2}} \sum_{s=0}^\infty y^s{}_2{F}_1\biggl(\dfrac{1}{2},1;\;1+\dfrac{\bar\alpha}{2};\;y\biggr)\,{\textrm{d}} y \notag \\[5pt] &= (2\lambda)^{-1} \int_0^{\tanh[\sqrt{2\lambda}]^{2}}\sum_{s=0}^\infty \sum_{n=0}^\infty\dfrac{ (1/2)_n}{(1+\bar\alpha/2)_n} y^{n+s} \,{\textrm{d}} y \notag\\[5pt] &= (2\lambda)^{-1} \int_0^{\tanh[\sqrt{2\lambda}]^{2}}\sum_{k=0}^\infty D^{(\alpha)}_{k}y^{k} \,{\textrm{d}} y \notag\\[5pt] &= (2\lambda)^{-1}\sum_{k=0}^\infty D^{(\alpha)}_{k} \dfrac{\tanh[\sqrt{2\lambda}]^{2(k+1)}}{k+1},\end{align}

where the third equality follows from a change of index and the identity

\begin{equation*}D_k^{(\alpha)} \;:\!=\; \dfrac{\bar{\alpha}}{\bar{\alpha}-1}\biggl(1- \dfrac{(1/2)_{k+1}}{(\bar{\alpha}/2)_{k+1}} \biggr) = \sum_{n=0}^k\dfrac{ (1/2)_n}{(1+\bar\alpha/2)_n} ,\end{equation*}

which can be easily proved via induction, and is well-defined for all $\bar\alpha\in(0,\infty)\setminus\{1\}$ (the limiting cases with $\bar\alpha\in\{0,1,\infty\}$ given in (2.7) can be easily verified). The interchange of sum and integration in the last equality of (4.8) is justified by the fact that all the summands are positive. Plugging in the Maclaurin series of $\tanh[\sqrt{2x}]^{2k}$ and reordering, we arrive at the Maclaurin series of $\mathcal{G}_\alpha(\lambda)$ :

\begin{equation*}\mathcal{G}_\alpha(\lambda) = \sum_{k=0}^\infty D^{(\alpha)}_{k}\sum_{n=0}^\infty \dfrac{C_{n,k+1}\lambda^{k+n}}{2(k+1)}=\sum_{k=0}^\infty \sum_{n=0}^k\dfrac{D^{(\alpha)}_{n}C_{k-n,n+1}}{2(n+1)} \lambda^{k}.\end{equation*}

It is now clear that by setting

\[\gamma_k^{(\alpha)}=(\!-\!1)^k k!\sum_{n=0}^k\dfrac{D^{(\alpha)}_{n}C_{k-n,n+1}}{2(n+1)},\]

we recover (4.7) as desired. This completes the proof.
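The closed form for $D_k^{(\alpha)}$ invoked in (4.8) is also easy to verify numerically; a sketch comparing partial sums of the Pochhammer ratios against the closed form, for an illustrative value of $\bar\alpha$:

```python
def poch(x, n):
    # rising factorial (Pochhammer symbol) (x)_n
    out = 1.0
    for i in range(n):
        out *= x + i
    return out

abar = 2.5   # illustrative value with abar not in {0, 1}

# partial sums of (1/2)_n / (1 + abar/2)_n versus the claimed closed form
max_err = 0.0
for k in range(8):
    partial = sum(poch(0.5, n) / poch(1.0 + abar / 2.0, n) for n in range(k + 1))
    closed = abar / (abar - 1.0) * (1.0 - poch(0.5, k + 1) / poch(abar / 2.0, k + 1))
    max_err = max(max_err, abs(partial - closed))
```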

Proof of Corollary 2.1. In view of Proposition 2.1 and equation (2.5), the corollary follows from the observation

\begin{equation*}\gamma^{(1)}_n =(\!-\!1)^n n!\dfrac{2^{3n+2}(4^{n+1}-1)B_{2(n+1)}}{(n+1)(2(n+1))!}=-\dfrac{(\!-\!2)^{2n+1}\kappa^{(0)}_n}{(2n+1)(2n+2)},\end{equation*}

and the final expression in (2.11) can be obtained by

\[\kappa_k^{(1)} = \sum_{n=0}^k \binom{k}{n} \kappa_{k-n}^{(1/2)} \gamma_{n}^{(1)},\]

which is equivalent to (2.8). This completes the proof.

Proof of Proposition 2.2. We start with the following infinite series representation of $\mathcal{L}_\alpha(\lambda)$ :

(4.9) \begin{equation}\mathcal{L}_\alpha(\lambda)=\sum_{n=0}^\infty \dfrac{D_n^{(\alpha)}}{n+1} \tanh[\sqrt{2\lambda}]^{2n}(1-\tanh[\sqrt{2\lambda}]^{2}),\end{equation}

which follows from (2.4), (4.6), and (4.8) after some straightforward simplification. Since $D^{(\alpha)}_n$ is defined for all $\alpha\in[0,1]$ by (2.6) and (2.7), there are no ambiguities in the above definition for $\alpha\in\{0,\tfrac{1}{2},1\}$ . By Theorem 2.1, (4.9) is convergent on $\lambda\geq 0$ for all $\alpha\in[0,1]$ . We use (2.13) to deduce that

(4.10) \begin{align}\mu^{(\alpha)}_k & = \dfrac{1}{\Gamma(k/2)} \sum_{n=0}^\infty\dfrac{D_n^{(\alpha)}}{n+1}\int_0^\infty \tanh[\sqrt{2\lambda}]^{2n}(1-\tanh[\sqrt{2\lambda}]^{2})\lambda^{k/2-1}\,{\textrm{d}} \lambda \notag \\[5pt] & = \dfrac{2^{1-k/2}}{\Gamma(k/2)}\sum_{n=0}^\infty\dfrac{D_n^{(\alpha)}}{n+1} \int_0^1 \textrm{arctanh}[y]^{k-1}y^{2n} \,{\textrm{d}} y,\end{align}

where the exchange of sum and integration is permitted in the first line due to the fact that all integrands are positive, and the second line follows by change of variables $x=\tanh[\sqrt{2\lambda}]^2$ and then $y=\sqrt{x}$ . This completes the proof.

Proof of Corollary 2.2. When $\alpha = 1$ , from (2.7) we learn that $D^{(1)}_n\equiv 1$ . Plugging this into (4.10) and exchanging the order of integration with the infinite sum, (2.14) follows from the identity

\begin{equation*}\sum_{n=0}^\infty \dfrac{y^{2n}}{n+1} = -y^{-2}\ln(1-y^2).\end{equation*}

Now, for (2.15), we note that the integrals in (4.10) for $k\in\{1,2\}$ can be solved in closed form,

\begin{equation*}\int_0^1 y^{2n} \,{\textrm{d}} y=\dfrac{1}{2n+1}, \quad \int_0^1 \textrm{arctanh}[y] y^{2n} \,{\textrm{d}} y= \dfrac{H_{n}+2\ln 2}{2(2n+1)},\end{equation*}

and recall that $H_n$ is the nth harmonic number. The expressions in (2.15) are derived by substituting the above into (4.10) and simplifying the infinite sums using MATHEMATICA $^{\circledR}$ . This completes the proof.
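Both integrals above have elementary derivations; for the second, substituting $y=\tanh u$ turns it into $\int_0^\infty u \tanh(u)^{2n}\operatorname{sech}^2(u)\,{\textrm{d}} u$, which the following sketch (not part of the proof, with an illustrative truncation and discretization) checks numerically against $(H_n+2\ln 2)/(2(2n+1))$:

```python
import math

def lhs(n):
    # integral of arctanh(y) * y^(2n) over [0, 1] after substituting y = tanh(u),
    # i.e. integral of u * tanh(u)^(2n) * sech(u)^2 over [0, infinity),
    # truncated at U and discretized by Simpson's rule
    N, U = 8000, 30.0
    h = U / N
    def f(u):
        return u * math.tanh(u) ** (2 * n) / math.cosh(u) ** 2
    tot = f(0.0) + f(U)
    for i in range(1, N):
        tot += (4 if i % 2 else 2) * f(i * h)
    return tot * h / 3.0

def rhs(n):
    # claimed closed form (H_n + 2 ln 2) / (2(2n + 1))
    H = sum(1.0 / j for j in range(1, n + 1))   # nth harmonic number
    return (H + 2.0 * math.log(2.0)) / (2.0 * (2 * n + 1))

max_err = max(abs(lhs(n) - rhs(n)) for n in range(4))
```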

Proof of Theorem 3.1. By a standard localization procedure (see [Reference Jacod and Protter19, Section 4.4.1]), we can without loss of generality work under a strengthened version of Assumption 3.1 by assuming that the bounds in Assumption 3.1(i) hold with $T_1=\infty$ . Also, we can assume that $P_t$ is continuous on $I_n$ , following the discussion in the proof of Theorem 1 of [Reference Li, Wang and Zhang25].

To prove (3.2), we start with the following normalized statistics defined for $t\in I_n$ :

\begin{align*}w_{n,t}&\;:\!=\; \Delta_n^{-1/2}w_t, \quad r_{n,t}\;:\!=\; \Delta_n^{-1/2}r_t,\\[5pt] R_{n,t}&\;:\!=\; \Delta_n^{-1/2}\biggl(\sup_{s\in[\tau,t]} W_s -\inf_{s\in[\tau,t]}W_s\biggr),\quad W_{n,t}\;:\!=\; \Delta_n^{-1/2}(W_{t}- W_\tau).\end{align*}

Thus it is understood that $v_n^{(\alpha)} \equiv \sup_{t\in I_n}\{w_{n,t}-\alpha |r_{n,t}|\}$ and similarly for $S^{(\alpha)}_n \equiv\sup_{t\in I_n}\{R_{n,t}-\alpha |W_{n,t}|\} $ . Intuitively, $(R_{n,t},W_{n,t})$ is the counterpart of $(w_{n,t},r_{n,t})$ constructed from W instead of P. We have the following observation:

(4.11) \begin{align}\sup_{\alpha\in[0,1]}|v_n^{(\alpha)} - \sigma_\tau S^{(\alpha)}_n|&\leq \sup_{\alpha\in[0,1]} \biggl|\sup_{t\in I_n}\{w_{n,t}-\alpha |r_{n,t}|\}-\sigma_\tau\sup_{t\in I_n}\{R_{n,t}-\alpha |W_{n,t}| \} \biggr| \notag \\[5pt] &\leq \sup_{\alpha\in[0,1]}\sup_{t\in I_n}\big| w_{n,t}-\alpha |r_{n,t}|- \sigma_\tau R_{n,t} + \sigma_\tau \alpha |W_{n,t}| \big| \notag\\[5pt] &\leq \sup_{\alpha\in[0,1]} \biggl\{\sup_{t\in I_n}|w_{n,t}-\sigma_\tau R_{n,t}|+\alpha\sup_{t\in I_n}\big| |r_{n,t}|-\sigma_\tau |W_{n,t}|\big|\biggr\} \notag\\[5pt] &\leq \sup_{t\in I_n}|w_{n,t}-\sigma_\tau R_{n,t}|+\sup_{t\in I_n}\big| |r_{n,t}|-\sigma_\tau |W_{n,t}|\big|.\end{align}

Thus it suffices to prove that both terms in the last line above are of order ${\textrm{o}}_p(1)$ .

Recall that $J\equiv 0$ on $I_n$ . For the first term,

\begin{align*}\sup_{t\in I_n}|w_{n,t}-\sigma_\tau R_{n,t}| & = \Delta_n^{-1/2}\sup_{t\in I_n}\biggl|\sup_{s\in[\tau,t]}P_s -\sigma_\tau\sup_{s\in[\tau,t]}W_s-\inf_{s\in[\tau,t]}P_s +\sigma_\tau \inf_{s\in[\tau,t]}W_s \biggr| \\[5pt] & \leq \Delta_n^{-1/2}\sup_{t\in I_n}\biggl|2\sup_{s\in[\tau,t]}|P_s-P_\tau -\sigma_\tau(W_s-W_\tau)| \biggr|\\[5pt] & \leq 2\Delta_n^{-1/2}\sup_{t\in I_n}|P_t-P_\tau -\sigma_\tau(W_t-W_\tau)|\\[5pt] & = 2\Delta_n^{-1/2}\sup_{t\in I_n}\biggl|\int_\tau^{t}\mu_s\,{\textrm{d}} s + \int_\tau^t(\sigma_s -\sigma_\tau)\,{\textrm{d}} W_s\biggr|\\[5pt] & \leq 2\Delta_n^{-1/2}\int_\tau^{\tau+\Delta_n}|\mu_s|\,{\textrm{d}} s + 2\Delta_n^{-1/2}\sup_{t\in I_n}\biggl|\int_\tau^t(\sigma_s -\sigma_\tau)\,{\textrm{d}} W_s\biggr|\\[5pt] & = {\textrm{O}}_p(\Delta^{1/2}_n) + {\textrm{O}}_p(\Delta_n^\kappa) = {\textrm{o}}_p(1),\end{align*}

where the first inequality is due to the reverse triangle inequality applied to the supremum norm (the infimum terms are handled by applying the supremum norm to the negated process). The last two estimates follow from Assumption 3.1 and are proved in [Reference Li, Wang and Zhang25]. Similarly, for the second term in (4.11), we have

\begin{equation*}\begin{split}\sup_{t\in I_n}\big||r_{n,t}|-\sigma_\tau |W_{n,t}| \big|&=\Delta_n^{-1/2}\sup_{t\in I_n}\big||P_t-P_\tau| -\sigma_\tau |W_t-W_\tau| \big| \\[5pt] & \leq \Delta_n^{-1/2}\sup_{t\in I_n}\big|P_t - P_\tau -\sigma_\tau(W_t -W_\tau) \big|\\[5pt] &={\textrm{O}}_p(\Delta^{1/2}_n) + {\textrm{O}}_p(\Delta_n^\kappa) = {\textrm{o}}_p(1),\end{split}\end{equation*}

as desired. Finally, by the Brownian scaling law, it is not hard to see that $(W_{n,t})_{t\in I_n}\overset{d}{=}(W_t)_{t\in[0,1]}$ for arbitrary choice of $\Delta_n>0$ . Since $[\tau,\tau+\Delta_n]\subset[0,T]$ for all n by assumption, we have $S^{(\alpha)}_n\overset{d}{=}S^{(\alpha)}$ for all $\alpha\in[0,1]$ and n, and the proof is complete.

Acknowledgements

The author thanks two anonymous referees for their various comments and suggestions, which substantially improved the quality and mathematical rigour of the paper. The author also thanks the participants of the 2024 Asian Meeting of the Econometric Society in China for their helpful discussions, and acknowledges the hospitality of the School of Economics at Singapore Management University, where part of this work was written. The author is deeply grateful to Professor Jia Li for his illuminating and thought-provoking discussions.

Competing interests

The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

Azéma, J. and Yor, M. (1979). Une solution simple au problème de Skorokhod. Sém. de Prob. XIII, 90–115.
Biane, P., Pitman, J. and Yor, M. (2001). Probability laws related to the Jacobi theta and Riemann zeta function and Brownian excursions. Bull. Amer. Math. Soc. 38, 435–465.
Bollerslev, T., Li, J. and Li, Q. (2024). Optimal nonparametric range-based volatility estimation. J. Econometrics 238, 105548.
Bollerslev, T., Li, J. and Liao, Z. (2021). Fixed-k inference for volatility. Quant. Econom. 12, 1053–1084.
Borodin, A. N. and Salminen, P. (2002). Handbook of Brownian Motion: Facts and Formulae. Birkhäuser, Basel.
Chong, K. S., Cowan, R. and Holst, L. (2000). The ruin problem and cover times of asymmetric random walks and Brownian motions. Adv. Appl. Prob. 32, 177–192.
Christensen, K. and Podolskij, M. (2007). Realized range-based estimation of integrated variance. J. Econometrics 141, 323–349.
Cox, D. R. and Miller, H. D. (1965). The Theory of Stochastic Processes. CRC Press, Boca Raton.
Cressie, N. and Borkent, M. (1986). The moment generating function has its moments. J. Statist. Planning Infer. 13, 337–344.
Doney, R. A. (1998). Some calculations for perturbed Brownian motion. Sém. de Prob. XXXII, 231–236.
Douady, R., Shiryaev, A. N. and Yor, M. (2000). On probability characteristics of ‘downfalls’ in a standard Brownian motion. Theory Prob. Appl. 44, 29–38.
Durrett, R. T. and Iglehart, D. L. (1977). Functionals of Brownian meander and Brownian excursion. Ann. Prob. 5, 130–135.
El Euch, O., Fukasawa, M. and Rosenbaum, M. (2018). The microstructural foundations of leverage effect and rough volatility. Finance Stoch. 22, 241–280.
El Euch, O. and Rosenbaum, M. (2019). The characteristic function of rough Heston models. Math. Financ. 29, 3–38.
Feller, W. (1951). The asymptotic distribution of the range of sums of independent random variables. Ann. Math. Statist. 22, 427–432.
Garman, M. B. and Klass, M. J. (1980). On the estimation of security price volatilities from historical data. J. Business 53, 67–78.
Gatheral, J., Jaisson, T. and Rosenbaum, M. (2018). Volatility is rough. Quant. Finance 18, 933–949.
Imhof, J.-P. (1985). On the range of Brownian motion and its inverse process. Ann. Prob. 13, 1011–1017.
Jacod, J. and Protter, P. E. (2012). Discretization of Processes, 1st edn. Springer, New York.
Jacod, J., Li, J. and Liao, Z. (2021). Volatility coupling. Ann. Statist. 49, 1982–1998.
Jeulin, T. and Yor, M. (1981). Sur les distributions de certaines fonctionnelles du mouvement Brownien. Sém. de Prob. XV, 210–226.
Karatzas, I. and Shreve, S. E. (1991). Brownian Motion and Stochastic Calculus, 2nd edn. Springer, New York.
Le Cam, L. L. (1986). Asymptotic Methods in Statistical Decision Theory. Springer, New York.
Lehmann, E. L. and Casella, G. (1998). Theory of Point Estimation, 2nd edn. Springer, New York.
Li, J., Wang, D. and Zhang, Q. (2024). Reading the candlesticks: An OK estimator for volatility. Rev. Econ. Statist. 106, 1114–1128.
NIST Digital Library of Mathematical Functions. Release 1.2.1 of 2024-06-15, ed. Olver, F. W. J., Olde Daalhuis, A. B., Lozier, D. W., Schneider, B. I., Boisvert, R. F., Clark, C. W., Miller, B. R., Saunders, B. V., Cohl, H. S. and McClain, M. A. Available at https://dlmf.nist.gov/.
Parkinson, M. (1980). The extreme value method for estimating the variance of the rate of return. J. Business 53, 61–65.
Pitman, J. and Yor, M. (2003). Infinitely divisible laws associated with hyperbolic functions. Canad. J. Math. 55, 292–330.
Rogers, L. C. G. and Shepp, L. (2006). The correlation of the maxima of correlated Brownian motions. J. Appl. Prob. 43, 880–883.
Vallois, P. (1995). Decomposing the Brownian path via the range process. Stoch. Process. Appl. 55, 211–226.
van der Vaart, A. W. (1998). Asymptotic Statistics. Cambridge University Press.
Widder, D. V. (1941). The Laplace Transform (Princeton Mathematical Series). Princeton University Press.
Widder, D. V. (1934). The inversion of the Laplace integral and the related moment problem. Trans. Amer. Math. Soc. 36, 107.
Wolfram Research, Inc. http://functions.wolfram.com/01.20.06.0021.01, added 2001-10-29.Google Scholar
Wolfram Research, Inc. http://functions.wolfram.com/01.21.06.0002.01, added 2001-10-29.Google Scholar
Figure 1. An illustration of running Brownian extrema and reversals.


Figure 2. Probability density functions of $T^{(\alpha)}$ for various choices of $\alpha$. Each line plots $f_T(t;\alpha)$ for the choice of $\alpha$ presented in the figure legend. Apart from the $\alpha=0$ case where the expression of $f_T(t;0)$ can be found in [6, eq. (4)], all the densities are generated by numerical inverse Laplace transforms via MATHEMATICA$^{\circledR}$.
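The numerical inverse Laplace transforms behind Figure 2 can be reproduced outside MATHEMATICA$^{\circledR}$ with any standard inversion scheme. The sketch below implements the classical Gaver–Stehfest method in plain Python; it is purely illustrative and uses a toy transform $F(s)=1/(s+1)$ (with known inverse $e^{-t}$) in place of the paper's transform, which is not restated here.

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F at time t > 0.

    N must be even; moderate values (10-16) suffice for smooth targets in
    double precision, beyond which rounding in the large weights dominates.
    """
    ln2 = math.log(2.0)
    half = N // 2
    total = 0.0
    for k in range(1, N + 1):
        # Stehfest weight V_k (alternating, rapidly growing coefficients)
        Vk = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            Vk += (j ** half * math.factorial(2 * j)
                   / (math.factorial(half - j) * math.factorial(j)
                      * math.factorial(j - 1) * math.factorial(k - j)
                      * math.factorial(2 * j - k)))
        Vk *= (-1) ** (k + half)
        # Evaluate the transform on the real axis at s = k ln(2) / t
        total += Vk * F(k * ln2 / t)
    return ln2 / t * total

# Sanity check on a transform with known inverse: F(s) = 1/(s+1) <-> f(t) = exp(-t).
for t in (0.5, 1.0, 2.0):
    err = abs(stehfest_invert(lambda s: 1.0 / (s + 1.0), t) - math.exp(-t))
    print(f"t = {t}: absolute error {err:.2e}")
```

To plot a density as in Figure 2, one would substitute the relevant transform for `F` and evaluate `stehfest_invert` on a grid of $t$ values; for transforms with oscillatory or heavy-tailed inverses, a Talbot-contour method is typically more robust than Gaver–Stehfest.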


Figure 3. AMSEs of $\widehat\sigma^p_{\mathit{UB}}(\alpha)$ and $\widehat\sigma^p_{\mathit{MSE}}(\alpha)$ for $p\in\{1,2\}$ as a function of $\alpha$. The optimal choices $\alpha^*_p$ are solved numerically by minimizing (3.4) via MATHEMATICA$^{\circledR}$.


Table 1. AMSEs of spot volatility and spot variance estimators