
Sturm–Liouville theory and decay parameter for quadratic Markov branching processes

Published online by Cambridge University Press:  19 January 2023

Anyue Chen*
Affiliation:
Southern University of Science and Technology and University of Liverpool
Yong Chen*
Affiliation:
Jiangxi Normal University
Wu-Jun Gao*
Affiliation:
Shenzhen Technology University
Xiaohan Wu*
Affiliation:
Harbin Institute of Technology
*Postal address: Department of Mathematics, Southern University of Science and Technology, Shenzhen, 518055, China; Department of Mathematical Sciences, University of Liverpool, Liverpool, L69 7ZL, UK. Email address: [email protected]
**Postal address: School of Mathematics and Statistics, Jiangxi Normal University, Nanchang, 330022, China. Email address: [email protected]
***Postal address: College of Big Data and Internet, Shenzhen Technology University, Shenzhen, 518118, China. Email address: [email protected]
****Postal address: Department of Mathematics, Harbin Institute of Technology, Harbin, 150001, China. Email address: [email protected]

Abstract

For a quadratic Markov branching process (QMBP), we show that the decay parameter is equal to the first eigenvalue of a Sturm–Liouville operator associated with the partial differential equation that the generating function of the transition probability satisfies. The proof is based on the spectral properties of the Sturm–Liouville operator. Explicit upper and lower bounds for the decay parameter are given by means of a version of Hardy’s inequality. Two examples are provided to illustrate our results. The Hardy index, an important quantity closely linked to the decay parameter of the QMBP, is investigated in depth and estimated.

Type
Original Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

The motivation for the present paper is to study the decay properties of quadratic Markov branching processes. We give the formal definition as follows.

Definition 1.1. A quadratic Markov branching process is a continuous-time Markov chain with state space ${\mathbb{Z}}_{+}={\left\{{0,1,\dots}\right\}}$ determined by the q-matrix $Q=\{q_{ij};\;i,j\in {\mathbb{Z}}_{+}\}$ defined by

(1.1) \begin{equation} q_{ij}=\left\{ \begin{array}{l@{\quad}l} i^{2}b_{j-i+1} &\quad\text{if } j\geq (i-1 ) {\geq 0},\\[5pt] 0 &\quad \mbox{otherwise,} \end{array}\right.\end{equation}

where $\{b_j\; :\; j\in {\mathbb{Z}}_{+}\}$ is a given real sequence which satisfies the usual nontrivial conditions

(1.2) \begin{align} b_j\geq 0\;(j\neq 1),\,\, -b_1=\sum_{j\neq 1}b_j,\,\,b_0>0,\, \text{ and }\, \sum_{j=2}^{\infty}b_j>0.\end{align}

Let $m_d$ and $m_b$ be the mean death and mean birth rates, respectively. Then we have

(1.3) \begin{equation}m_d=b_0\qquad \mbox{and}\qquad m_b=\sum_{j=2}^{\infty}(j-1)b_j.\end{equation}

When $m_d\ge m_b$ , the jump chain almost surely hits the absorbing zero state. Thus, there is a unique Q-function. Uniqueness may not hold if $m_d<m_b$ , but in all cases, the forward Kolmogorov system has exactly one solution, which is the Feller minimal solution; see [Reference Chen6, Reference Chen3]. The corresponding Markov process ${\left\{{Z(t);\,t\ge 0}\right\}}$ is called a quadratic Markov branching process, henceforth referred to as a QMBP. Note that the QMBP no longer obeys the branching property.

Let

(1.4) \begin{equation}B(s)=\sum_{j=0}^{\infty}b_js^j\end{equation}

denote the generating function of the sequence $\{b_j;\;j\geq 0\}$ . As a power series, it has radius of convergence $\varrho_b$ , where $\varrho_b^{-1}=\limsup\limits_{n\to\infty}\sqrt[n]{{\left\vert{b_n}\right\vert}}$ . Clearly, $\varrho_b\geq 1$ .
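For illustration only, the following Python sketch evaluates these quantities for a hypothetical rate sequence $b=(2,-3,0.6,0.4)$ of our own choosing, which satisfies (1.2); it also checks the identity $B'(1)=m_b-m_d$ noted below.

```python
# Minimal sketch (ours): a hypothetical sequence b_j satisfying (1.2),
# with b_j = 0 for j >= 4. We check B'(1) = m_b - m_d.
import numpy as np

b = np.array([2.0, -3.0, 0.6, 0.4])   # b_0, b_1, b_2, b_3

def B(s):
    """Generating function B(s) = sum_j b_j s^j, as in (1.4)."""
    return sum(bj * s**j for j, bj in enumerate(b))

m_d = b[0]                                             # mean death rate, (1.3)
m_b = sum((j - 1) * b[j] for j in range(2, len(b)))    # mean birth rate, (1.3)
B_prime_1 = sum(j * b[j] for j in range(len(b)))       # B'(1)

print(m_d, m_b, B_prime_1)   # 2.0 1.4 -0.6, and indeed B'(1) = m_b - m_d < 0
```

Since $B'(1)<0$ here, this hypothetical sequence also satisfies Assumption 1.1 below.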

The generating function B(s) possesses the following simple yet useful properties, whose proof is well known and thus omitted here.

Proposition 1.1. The generating function B(s) is a convex function of $s\in [0, \varrho_b)$ , and hence the equation $B(s)=0$ has at most two roots in $[0, \varrho_b)$ and, in particular, in [0, 1]. More specifically, if $B'(1)\leq 0$ , then $B(s)>0$ for all $s\in {[0,1)}$ , and 1 is the only root of the equation $B(s)=0$ in [0, 1].

It is easy to see that $B'(1)= m_b-m_d$ , which gives the probabilistic interpretation of the important quantity $B'(1)$ .

Let the following assumption hold in the rest of the present paper.

Assumption 1.1. Assume that $B'(1)<0$ ; that is to say, $m_d > m_b$ .

Let $P(t)=(P_{ij}(t))$ denote the transition function where $P_{ij}(t)=\mathbb{P}(Z_t=j\,|\,Z_0=i).$ Denote the communicating class for the transition function P(t) by C. By the assumption given in Definition 1.1, it is easy to see that for our QMBPs, the communicating class C is just ${\mathbb{N}}=\{1,2,\dots\}$ . The decay parameter of the process is defined by

(1.5) \begin{equation}\lambda_C=-\lim_{t\to \infty}\frac{1}{t}\log P_{ij}(t).\end{equation}

General theory asserts that the limit exists and that it is independent of $i,j\in C$ . It is easy to show that

(1.6) \begin{align}\lambda_C=\inf{\left\{{\lambda \ge 0{:}\,\, \int_0^{\infty} P_{ij}(t) e^{\lambda t}{\textrm{d}} t =\infty,\,\, i,j\in C}\right\}}.\end{align}

For a review of this topic, we refer the readers to van Doorn and Pollett [Reference Van Doorn and Pollett21]. A very useful representation for the decay parameter can be found in Theorem 3.3.2(iii) of Jacka and Roberts [Reference Jacka and Roberts12].
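For intuition, $\lambda_C$ can be approximated numerically by truncating the q-matrix (1.1) to the states $\{1,\dots,N\}$ and computing the largest real eigenvalue of the resulting sub-generator. The sketch below is our own heuristic illustration, using the hypothetical rates introduced above; no convergence claim for this truncation is made here.

```python
# Heuristic sketch (ours): truncate Q of (1.1) to states {1,...,N} and take
# the negative of the largest real eigenvalue of the sub-generator as a
# rough proxy for lambda_C. This is only an illustration, not a proof.
import numpy as np

b = {0: 2.0, 1: -3.0, 2: 0.6, 3: 0.4}   # hypothetical rates satisfying (1.2)

def q(i, j):
    """q_ij = i^2 * b_{j-i+1} if j >= i-1 >= 0, else 0, as in (1.1)."""
    return i**2 * b.get(j - i + 1, 0.0) if j >= i - 1 >= 0 else 0.0

N = 400
Q_C = np.array([[q(i, j) for j in range(1, N + 1)] for i in range(1, N + 1)])
print("lambda_C (truncated proxy):", -np.linalg.eigvals(Q_C).real.max())
```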

For nearly all stochastic models that are well described by a continuous-time Markov chain with absorbing states, obtaining and/or estimating the corresponding decay parameter is an important problem. The main aim of this paper is to investigate this question for QMBPs.

The structure of this paper is as follows: after the introductory Section 1, we state our main conclusions in Section 2; the proofs will be given in Sections 3 and 5. Examples will be provided in Section 4.

2. Main results

Our first main result is a representation theorem for the decay parameter $\lambda_{C}$ of the QMBP by means of the classical generating function method. Let $\{F_i (s, t);\; i \in {\mathbb{Z}}_+\}$ be the generating functions of the Q-function P(t) of the QMBP. That is,

\begin{align*} F_i(s, t)=\sum_{j=0}^\infty P_{ij}(t)s^j, \qquad i\geq 0.\end{align*}

Define

(2.1) \begin{equation}w(s)=\frac{1}{B(s)} ,\quad J=(0,1),\end{equation}

where B(s) is defined in (1.4).

Consider the differential expression M defined by

(2.2) \begin{equation}My \;:\!=\; (\!-\!s y'(s))',\quad y\in {\mathfrak{H}}=L^2(J,\,w). \end{equation}

It is known by Chen [Reference Chen3] that $F_i(s,t)$ is the unique solution of the equation

(2.3) \begin{equation} \frac{\partial }{\partial t} {F}_i(s,t)=-w^{-1} M F_i(s,t) , \qquad {(s,t)\in (0,1)\times (0,\infty),} \end{equation}

with initial condition

\begin{align*} F_i(s, 0)= s^i.\end{align*}

To solve the partial differential equation (2.3), we will make use of Sturm–Liouville theory. We first find the suitable self-adjoint realization $(S,\,D(S))$ of the minimal operator $S_{min}$ of (M, w) on J (see Definition 3.1 below), and then study the spectral properties of $(S,\,D(S))$ . The following is our representation theorem for $\lambda_C$ for the QMBP.

Theorem 2.1. The decay parameter $\lambda_C$ for the QMBP is equal to the first eigenvalue $\ell_0$ of the self-adjoint Sturm–Liouville operator (S, D(S)) in the Hilbert space $L^2(J,\,w)$ defined by

(2.4) \begin{align} S g = w^{-1} M g \quad \textit{ for } g\in D(S), \end{align}
(2.5) \begin{align} D(S)={\left\{{y+c v_1\;:\; y\in D_{{min}},\,c\in {\mathbb{R}}}\right\}}, \end{align}

where $D_{min}$ is the domain of $S_{min}$ , and $v_1$ is a $C^{\infty}(J)$ function such that

(2.6) \begin{equation}v_1(s)= \left\{ \begin{array}{l@{\quad}l} 1 &\quad\textit{when } 0<s< c_1, \\[5pt] 0 &\quad\textit{when } c_2<s<1, \end{array}\right.\end{equation}

with some $0<c_1<c_2<1$ .

Remark 2.1. The identity (2.5) means that the quotient space $D(S)/D_{min}$ has dimension 1. That is to say, the deficiency index of the differential expression M on J is $d=1$ . The function $v_1$ is not unique and can be taken as any function $\tilde{v}$ such that $\tilde{v}-v_1\in D_{min}$ .

By Theorem 2.1, to find the decay parameter $\lambda_C$ for the QMBP is just to find the first eigenvalue $\ell_0$ of the self-adjoint operator (S, D(S)). Then, by means of the variational formula for the first eigenvalue $\ell_0$ , we obtain upper and lower bounds on $\lambda_{C}$ .

Theorem 2.2. The variational formula for the decay parameter $\lambda_C$ is

(2.7) \begin{align}{\lambda_C}=\inf{\left\{{\frac {\int_0^1 s\big( g'(s)\big)^2 {\textrm{d}} s}{\int_0^1 g^2(s) w(s){\textrm{d}} s}\;:\;\, g\not\equiv 0,\,g\in C_{c}^{\infty}(J)}\right\}}.\end{align}

Furthermore, $\lambda_C$ has the lower and upper bounds

(2.8) \begin{equation} \frac{1}{ 4 D^2}\leq {\lambda_C} \leq \frac{1}{ D^2},\end{equation}

where $D^2$ is given by

(2.9) \begin{equation} D^2 \;:\!=\; \sup_{s\in (0,1)}{\left\{{(\!-\!\log s )\cdot \left(\int_0^s \frac{1}{B(r)}{\textrm{d}} r \right)}\right\}}.\end{equation}

From Theorems 2.1 and 2.2, particularly from (2.8) and (2.9), we see that estimating the value of $D^2$ is a key issue. Let us agree to call $D^2$ the Hardy index. The following corollaries concentrate on estimating the Hardy index.
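Before stating the corollaries, here is a small numerical sketch of our own that evaluates the Hardy index (2.9) on a grid, for the hypothetical rates $b=(2,-3,0.6,0.4)$ of Section 1, and prints the resulting two-sided bound (2.8).

```python
# Sketch (ours): evaluate the Hardy index D^2 of (2.9) on a grid and report
# the bounds (2.8), for the hypothetical rates b = (2, -3, 0.6, 0.4).
import numpy as np

b = [2.0, -3.0, 0.6, 0.4]
B = lambda s: sum(bj * s**j for j, bj in enumerate(b))

r = np.linspace(1e-6, 1 - 1e-6, 200_000)
I = np.concatenate([[0.0], np.cumsum(np.diff(r) / B(r[:-1]))])  # I(s) ~ int_0^s dr/B(r)
D2 = np.max(-np.log(r) * I)
print("D^2 ~", D2, "so", 1 / (4 * D2), "<= lambda_C <=", 1 / D2)
```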

Corollary 2.1. We have that

\begin{align*} \frac{b_0-m_b}{4(\!\log 2)^2}\le \lambda_C\le \frac{b_0}{(\!\log 2)^2}. \end{align*}

Sharper bounds for $\lambda_C$ can be given as follows.

Corollary 2.2. We have that

(2.10) \begin{align}\frac{b_0-m_b}{4(\!\log\!(1+\sqrt{\kappa_1}))^2}\le\lambda_C\le\frac{m_b-\kappa_2 }{\kappa_2(\!\log\!(1+\sqrt{\kappa_2}))^2},\end{align}

where $\kappa_1=\frac{m_b}{b_0}$ ,

\begin{align*} \kappa_2=\frac{m_b}{A(s_0)+s_0\cdot m_b},\end{align*}

A(s) is determined by $A(s)=\frac{B(s)}{1-s}$ , and $s_0$ is determined by the equation $-m_b=A'(s_0)$ , which guarantees that $0<s_0<1$ and that $\kappa_1<\kappa_2$ .

Corollary 2.3. If $B''(1)<2b_0$ , then

(2.11) \begin{align}\frac{b_0-m_b}{4(\!\log\!(1+\sqrt{\kappa'_{\!\!1}}))^2}\le\lambda_C\le\frac{b_0-m_b}{(\!\log\!(1+\sqrt{\kappa'_{\!\!2}}))^2},\end{align}

where

\begin{align*} \kappa'_{\!\!1}=\frac{B''(1)}{2b_0}, \qquad {\kappa'_{\!\!2}}=\frac{\sum_{j=2}^{\infty}b_j}{b_0}.\end{align*}

Remark 2.2. There are two kinds of bounds on the quantity $D^2$ in (2.9), in Corollaries 2.2 and 2.3, respectively. It can easily be seen that the upper bound in Corollary 2.2 is better than the one in Corollary 2.3 if and only if $m_b<\frac12B''(1)$ . Also, the lower bound in Corollary 2.2 is better than the one in Corollary 2.3 if and only if

\begin{align*} \frac{\log\!(1+\sqrt{1+\kappa_2})}{\log\!(1+\sqrt{\kappa'_{\!\!2}})}>\frac{A(s_0)+m_b\cdot s_0-m_b}{b_0-m_b}.\end{align*}

Furthermore, the assumption $B''(1)<2b_0$ is not necessary for the lower bound on $\lambda_C$ in Corollary 2.3.

We can find new and better upper and lower bounds for $\lambda_C$ by using the result for Example 4.2 discussed in Section 4.

Corollary 2.4. There exist $s_1\in(0,1)$ and $s_2\in(0,1)$ such that

(2.12) \begin{equation}\frac{1}{4\phi_2(s_2)}\le \lambda_C\le\frac{1}{\phi_1(s_1)},\end{equation}

where

\begin{align*} \phi_1(s)=(\!-\!\log s)\int_0^s \frac{{\textrm{d}} r}{(1-r)\left[b_0+(b_0+b_1)r+\frac12 A''(0)r^2\right]},\end{align*}

and

\begin{align*} \phi_2(s)=(\!-\!\log s)\int_0^s \frac{{\textrm{d}} r}{(1-r)\left[b_0+(b_0+b_1)r+\frac12 A''(1)r^2\right]},\end{align*}

A(s) is determined by $A(s)=\frac{B(s)}{1-s}$ , and $s_1$ and $s_2$ are determined by $\sup_{s\in(0,1)}\phi_1(s)=\phi_1(s_1)$ and $\sup_{s\in(0,1)}\phi_2(s)=\phi_2(s_2)$ , respectively.

Remark 2.3. Here, $\phi_i(s)$ , $i=1, 2$ , are elementary functions; see (4.10) for their analytical expressions. The point $s_i$ is the unique stationary point of the function $\phi_i(s) $ in (0, 1), which can be obtained through basic numerical methods. Hence, both $\sup_{s\in(0,1)}\phi_1(s)=\phi_1(s_1)$ and $\sup_{s\in(0,1)}\phi_2(s)=\phi_2(s_2)$ can be evaluated easily.
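As a worked illustration of Remark 2.3, the sketch below (our own, for a hypothetical five-term sequence $b=(2,-3.05,0.6,0.4,0.05)$ with $b_4>0$ , so that $A''(0)\neq A''(1)$ ) evaluates $\phi_1,\phi_2$ by quadrature and maximizes them by a bounded scalar search.

```python
# Sketch (ours) of Corollary 2.4 for hypothetical rates with b_4 > 0.
# The maximizers s_1, s_2 of phi_1, phi_2 are found by a bounded search.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

b = [2.0, -3.05, 0.6, 0.4, 0.05]
b0, b1 = b[0], b[1]
a = np.cumsum(b)                                           # coefficients of A(s), cf. (5.2)
A2_0 = 2 * a[2]                                            # A''(0)
A2_1 = sum(n * (n - 1) * a[n] for n in range(2, len(a)))   # A''(1)

def phi(s, A2):
    g = lambda r: 1.0 / ((1 - r) * (b0 + (b0 + b1) * r + 0.5 * A2 * r * r))
    return -np.log(s) * quad(g, 0.0, s)[0]

r1 = minimize_scalar(lambda s: -phi(s, A2_0), bounds=(1e-4, 1 - 1e-4), method="bounded")
r2 = minimize_scalar(lambda s: -phi(s, A2_1), bounds=(1e-4, 1 - 1e-4), method="bounded")
print(1 / (4 * -r2.fun), "<= lambda_C <=", 1 / -r1.fun)    # cf. (2.12)
```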

The proofs of these four corollaries can be found in Section 5.

3. Sturm–Liouville theory and the proofs of Theorems 2.1 and 2.2

For any interval J of the real line, we denote by $L^1(J,\,{\mathbb{R}})$ the linear space of real-valued Lebesgue-integrable functions defined on J. The notation $L^1_{\textrm{loc}} (J,\,{\mathbb{R}})$ is used to denote the linear space of functions y satisfying $y\in L^1([\alpha,\beta],\,{\mathbb{R}})$ for all compact intervals $[\alpha,\beta]\subseteq J$ . As usual, we also write these respectively as $L^1(J)$ and $L^1_{\textrm{loc}} (J)$ for simplicity. The class of absolutely continuous functions on the compact interval $[\alpha,\beta]$ is denoted by $AC[\alpha,\beta]$ . Also, we denote by $AC_{\textrm{loc}}(J)$ the collection of real-valued functions which are absolutely continuous on all compact intervals $[\alpha,\beta]\subseteq J$ .

3.1. Sturm–Liouville theory

For the differential expression M given by

\begin{equation*} My(s) \;:\!=\; -(p(s)y'(s))'+ q(s)y(s), \quad \text{ on }\quad J, \end{equation*}

with

(3.1) \begin{equation} J=(a,b),\,-\infty\le a<b\le \infty,\quad 1/p, q, w\in L^1_{\textrm{loc}} (J,\,{\mathbb{R}}), \end{equation}

and the expression domain of M being functions y such that $y,\, py'\in AC_{\textrm{loc}}(J)$ , the following definitions are taken from Zettl [Reference Zettl22].

Definition 3.1. (The maximal and minimal operators.) The maximal domain $D_{max}$ of M on J with weight function $w>0$ is defined by

\begin{align*} D_{max}={\left\{{g\in L^2(J,w)\;:\;g,pg'\in AC_{\textrm{loc}}(J),w^{-1}Mg\in L^2(J,w) }\right\}}.\end{align*}

Define

\begin{align*}S_{max} g&= w^{-1}Mg,\, \text{ for } g\in D_{max},\\[5pt] S'_{\!\!min} g&= w^{-1}Mg, \,\text{ for } g\in D_{max} \text{ such that } g \text{ has compact support on } J.\end{align*}

Then $S_{max}$ is called the maximal operator of (M, w) on J, $S'_{\!\!min}$ is called the preminimal operator, and the minimal operator $S_{min}$ of (M, w) on J is defined as the closure of $S'_{\!\!min}$ . The domain of $S_{min}$ is denoted by $D_{min}$ .

Any self-adjoint extension of the minimal operator $S_{min}$ satisfies

\begin{align*} S_{{min}}\subset S=S^*\subset S_{{max}}.\end{align*}

It is well known that the domain D(S) is determined by two-point boundary conditions which depend on the classification of the endpoints as limit-circle or limit-point.

Definition 3.2. Consider the Sturm–Liouville equation

(3.2) \begin{equation} My(s)=\ell w(s) y(s),\quad \quad\ell \in {\mathbb{R}}, \quad \text{ on }\quad J. \end{equation}

The endpoint a

  • is regular if, in addition to (3.1),

    \begin{equation*}1/p, q, w\in {L^1((a,d),\,{\mathbb{R}})}\end{equation*}
    holds for some (and hence any) $d\in J$ ;
  • is limit-circle (LC) if all solutions of the equation (3.2) are in $L^2((a,d),\, {w})$ for some (and hence any) $d\in (a,b)$ ;

  • is limit-point (LP) if it is not LC.

Similar definitions are made at the endpoint b. An endpoint is called singular if it is not regular. It is well known that the LC and LP classifications are independent of $\ell\in {\mathbb{R}}$ .

Lemma 3.1. Let (M, w) be given as in (2.1) and (2.2). Then both $s=0$ and $s=1$ are singular, and the endpoints $s=0$ and $s=1$ are LC and LP, respectively. Moreover, the deficiency index of M on J is $d=1$ .

Proof. It is clear that $\frac{1}{s}\notin {L^1((0,d), {\mathbb{R}})}$ and $w\notin {L^1((d,1), {\mathbb{R}})}$ for any $d\in (0,1)$ . Hence, the endpoints $s=0$ and $s=1$ are singular.

Let $\bar{v}_1(s) \equiv 1$ and $v_2(s)=\log s$ on (0, 1). Taking $\ell=0$ , it is easy to see that $\bar{v}_1,\,v_2$ are nontrivial linearly independent solutions of the equation

\begin{align*} M y(s)=(\!-\!s y'(s))'= \ell w(s) y(s) .\end{align*}

Since

\begin{align*} w(s)=\frac{1}{(1-s)A(s)},\end{align*}

where $A(s)> 0$ and is analytic on [0, 1] (see Lemma 5.1 below), we see that $\bar{v}_1,\,v_2 \in L^2((0,d),\, {w})$ and $\bar{v}_1\notin L^2((d,1),\, {w})$ with $d\in (0,1)$ . Hence, by Definition 3.2, the endpoints $s=0$ and $s=1$ are LC and LP, respectively. Consequently, the deficiency index of M on J is $d=1$ ; see Theorem 10.4.5 of Zettl [Reference Zettl22].

Lemma 3.2. Let $v_1\in C^{\infty}(J)$ be given as in (2.6). Then

(3.3) \begin{align} D(S)={\left\{{y\in D_{{max}}\;:\; \lim_{s\to 0+} s y'(s)=0}\right\}} \end{align}
(3.4) \begin{align} = {\left\{{y+c v_1\;:\;y\in D_{{min}},\,c\in {\mathbb{R}}}\right\}}\end{align}

is a self-adjoint domain. Moreover, (S, D(S)) is the unique self-adjoint extension of $S_{{min}}$ such that $y(s)=s-1$ belongs to the domain D(S).

Proof. The function $v_1$ can be constructed by means of the smooth cut-off function; see Davies [Reference Davies8, p. 47] for details. Let $v_2(s)=\log s$ and $\ell=0$ .

Let $p(s)=s$ . For y and z in the expression domain of M, the Lagrange sesquilinear form $[\, , ]$ is given by

\begin{align*}[y,z]\;:\!=\;ypz'-zpy'.\end{align*}

It is known that for any $y,z\in D_{max}$ , both limits

\begin{align*}[y,z](0)=\lim_{s\to 0+} [y,z](s),\qquad [y,z](1) = \lim_{s\to 1-} [y,z](s)\end{align*}

exist and are finite. See Zettl [Reference Zettl22, Lemma 10.2.3].

It is clear that $v_1,\,v_2$ are nontrivial real solutions of the equation

\begin{align*} M y(s)=(\!-\!s y'(s))'= \ell w(s) y(s) \end{align*}

on $(0, c_1)$ satisfying $[v_1,v_2](s)=1,\,s\in (0,\,c_1)$ . When $y\in D_{{max}}$ , we have that

\begin{align*}[y,v_1](0)&=\lim_{s\rightarrow 0+} [y,v_1](s) =-\lim_{s\rightarrow 0+} s y'(s),\\[5pt] [y,v_2](0)&=\lim_{s\rightarrow 0+} [y,v_2](s) =\lim_{s\rightarrow 0+}( y(s)-s\log s y'(s) ).\end{align*}

Since 0 is LC and 1 is LP, Theorem 10.4.5 of Zettl [Reference Zettl22] says that D(S) is a self-adjoint domain if and only if there exist $A_1,A_2\in {\mathbb{R}}$ , with $(A_1,A_2)\neq (0,0)$ , such that

\begin{align*} D(S)={\left\{{y\in D_{{max}}\;:\; A_1\cdot[y,v_1](0)+ A_2\cdot[y,v_2](0)=0}\right\}}\end{align*}

holds. Now, taking $(A_1,A_2)= (1,0)$ , we obtain (3.3).

It is easy to check that $v_1,\,v_2\in D_{max}$ . Since $[v_1,v_2](0)=1$ , we have $v_1\notin D_{min}$ . It is clear that $sv'_{\!\!1}(s)=0$ on $(0,c_1)$ . Thus, $v_1\in D(S)\setminus D_{min}$ . Note that the deficiency index of M on J is $d=1$ . Hence, we obtain (3.4).

When $y(s)=s-1$ , we see that $[y,v_1](0)=0,\,[y,v_2](0)=-1$ . If $(A_1,A_2)\neq (0,0)$ satisfies $A_1\cdot [y,v_1](0)+ A_2\cdot [y,v_2](0)=0$ , then $A_1\neq 0,\, A_2=0$ . Hence, (3.3) is the unique self-adjoint extension of $S_{{min}}$ such that $y(s)=s-1$ belongs to the domain D(S).

Next we will show that the operator (S, D(S)) has the BD property, i.e., its spectrum is discrete and bounded below. Before that, let us briefly make some comments on the BD property. The criteria for empty essential spectrum (or, say, discrete spectrum) of singular self-adjoint differential operators (Sturm–Liouville operators) have been thoroughly explored in the literature on analysis. The classical method employed is that of oscillation theory; see [Reference Glazman10, Reference Dunford and Schwartz9, Reference Rollins20, Reference Bailey, Everitt, Hinton and Zettl2]. In particular, Theorem 4.1(ii) of [Reference Ahlbrandt, Hinton and Lewis1] gives a necessary and sufficient condition using this theory. A sufficient condition is given in [Reference Rollins20] using the Friedrichs extension theorem. Other necessary and sufficient conditions are given in [Reference Ćurgus and Read7, Reference Mao16] using compact embedding theorems.

In the literature on probability, the Sturm–Liouville operator is viewed as a generator of a diffusion process on the line. This explanation of the probabilistic meaning can be traced back to Kolmogorov, Feller, and Itô. For a diffusion operator with a killing term, Theorem 7.1(i) of [Reference Chen5] is an extension of Theorem 4.1(ii) of [Reference Ahlbrandt, Hinton and Lewis1] mentioned above.

The same result could also be obtained more quickly, for instance via Theorem 4.1(ii) of [Reference Ahlbrandt, Hinton and Lewis1] or the method of [Reference Rollins20] mentioned above, but we will employ oscillation theory, along the same lines as [Reference Bailey, Everitt, Hinton and Zettl2] and [Reference Hinton and Lewis11], to prove Lemma 3.3, since this approach is more elementary.

Lemma 3.3. The operator (S, D(S)) has the BD property. Moreover, the spectrum $\sigma(S)$ is real, simple, and discrete;

\begin{align*} \sigma(S)={\left\{{\ell_k\in {\mathbb{R}}, \, k=0,1,2,\dots}\right\}},\\[5pt] \ell_k<\ell_{k+1},\quad \ell_k\to \infty \,(\textit{as } k\to \infty). \end{align*}

If $\varphi_k $ is an eigenfunction of $\ell_k$ , then $\varphi_k \in C^{\infty}(J)$ and has exactly k zeros in $J=(0,1)$ . In addition, the set of eigenfunctions ${\left\{{\varphi_k ,k\in {\mathbb{Z}}_{+} }\right\}}$ is orthogonal and complete in $\mathfrak{H}=L^2(J,\,w)$ .

Proof. Define

\begin{align*} A[\alpha,\beta]={\left\{{f\;:\;[\alpha,\beta]\to {\mathbb{R}}\;:\;\, f\in AC[\alpha,\beta],f'\in L^2(\alpha,\beta), \text{ and } f(\alpha )=f( \beta)=0}\right\}}.\end{align*}

We let

(3.5) \begin{equation}B(s)=(1-s)A(s).\end{equation}

Then $A(s)\neq 0$ and is analytic on ${\left\vert{s}\right\vert}< 1$ . It follows from Lemma 2.2 of Chen [Reference Chen3] that

(3.6) \begin{align} B(s)&>0\qquad \forall s\in [0,1),\nonumber\\[5pt] A(s)&>0\qquad \forall s\in [0,1],\end{align}

where in the last inequality $A(1)>0$ is from $B'(1)=-A(1)<0$ (see Lemma 5.1 below).

The proof is in the spirit of Bailey et al. [Reference Bailey, Everitt, Hinton and Zettl2] and Hinton and Lewis [Reference Hinton and Lewis11]. We need only show that for each real number $\ell$ there is a $\delta>0$ (which may depend on $\ell$ ) such that, if $[\alpha,\beta]\subset {(0,\, \delta)}$ or $[\alpha,\beta]\subset (1-\delta, \, 1)$ and $y\in A[\alpha,\beta]$ , $y\not\equiv 0$ , then

(3.7) \begin{equation} \int_{\alpha}^{\beta} {\left\{{s(y'(s))^2-\ell w(s) y^2(s)}\right\}}{\textrm{d}} s >0. \end{equation}

It is clear that we need only show (3.7) for $\ell>0$ . We make use of a Hardy-type inequality (see Hinton and Lewis [Reference Hinton and Lewis11]): if $f\in A[\alpha,\beta]$ with $f \not\equiv 0$ , then

(3.8) \begin{align} \int_{\alpha}^{\beta} \frac{1}{s(\!\log s)^{2}} f^2(s){\textrm{d}} s\leq 4 \int_{\alpha}^{\beta} s[f'(s)]^2 {\textrm{d}} s.\end{align}

For any $\ell>0$ , we have that when $s>0$ is small enough,

\begin{align*} \frac14 \frac{1}{s(\!\log s)^{2}}-\frac{\ell}{(1-s)A(s)}\geq\frac14 \frac{1}{s(\!\log s)^{2}}-\frac{\ell}{(1-s)m} >0,\end{align*}

where $m>0$ is the minimum value of A(s) on [0, 1], and it follows from (3.8) that

\begin{align*} \int_{\alpha}^{\beta} {\left\{{s(y')^2-\ell w y^2}\right\}} {\textrm{d}} s \geq \int_{\alpha}^{\beta}\Big( \frac14 \frac{1}{s(\!\log s)^{2}}-\frac{\ell}{(1-s)A(s)}\Big) y^2 {\textrm{d}} s >0.\end{align*}

The well-known inequality

\begin{align*} \frac{x}{1+x} \le \log\!(1+x)\le x,\quad \forall x>-1,\end{align*}

implies that when $s\in (0,1)$ ,

\begin{align*}\frac{1}{s(\!\log s)^{2}}&=\frac{1}{s\big(\!\log\!(1+ s-1)\big)^{2}}\geq \frac{1}{s}\frac{s^2}{(1-s)^2}=\frac{s }{(1-s)^2}.\end{align*}

Hence, for any $\ell>0$ , we have that when $1-s$ is small enough,

\begin{align*}\frac14 \frac{1}{s(\!\log s)^{2}}-\frac{\ell}{(1-s)A(s)}&\geq\frac{1 }{1-s } \Big(\frac14\frac{s }{1-s } -\frac{\ell}{ m}\Big)>0. \end{align*}

Combining this with (3.8), we have that

\begin{align*} \int_{\alpha}^{\beta} {\left\{{s(y')^2-\ell w y^2}\right\}} {\textrm{d}} s \geq \int_{\alpha}^{\beta}\Big( \frac14 \frac{1}{s(\!\log s)^{2}}-\frac{\ell}{(1-s)A(s)}\Big)y^2 {\textrm{d}} s >0.\end{align*}

Therefore, (3.7) holds, which implies that the operator S has the BD property. Since the endpoint $s=1$ is LP, the other conclusions are given by the case (8.ii) of Theorem 10.12.1 in Zettl [Reference Zettl22, p. 208] and Theorem XIII 4.2 of Dunford and Schwartz [Reference Dunford and Schwartz9, p. 1331].
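As a quick numerical sanity check of the Hardy-type inequality (3.8) used in the proof above, the following sketch (our own, with a sample function of our choosing vanishing at both endpoints) compares the two sides.

```python
# Sketch (ours): check (3.8) for one sample f in A[alpha, beta].
import numpy as np
from scipy.integrate import quad

alpha, beta = 0.1, 0.9
f  = lambda s: (s - alpha) * (beta - s)       # f(alpha) = f(beta) = 0
df = lambda s: (alpha + beta) - 2 * s         # f'(s)

lhs = quad(lambda s: f(s)**2 / (s * np.log(s)**2), alpha, beta)[0]
rhs = 4 * quad(lambda s: s * df(s)**2, alpha, beta)[0]
print(lhs, "<=", rhs)   # roughly 0.06 <= 0.34 for this choice of f
```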

Denote by ${\langle {f,\,g}\rangle}$ the inner product in the space ${\mathfrak{H}}$ for every pair of elements $f,\,g$ in ${\mathfrak{H}}=L^2(J,\,w)$ .

Lemma 3.4. If $f\in D_{{min}}$ then

(3.9) \begin{equation}{\langle {S_{{min}}f,\,f }\rangle}\ge \int_0^1 s (f'(s))^2{\textrm{d}} s.\end{equation}

Proof. The assumption $f\in D_{{min}}$ implies that there exists a sequence $(f_n)$ of functions with compact support in J such that $f_n\to f$ and $S_{{min}} f_n\to S_{{min}}f $ in $\mathfrak{H}$ . Hence, for any $0<\epsilon<s<1$ , we have that as $n\to \infty$ ,

\begin{align*}-sf'_{\!\!n}(s)+\epsilon f'_{\!\!n}(\epsilon)&=\int_{\epsilon}^s (\!-\!r f'_{\!\!n}(r))'{\textrm{d}} r\\[5pt] & \to \int_{\epsilon}^s (\!-\!r f'(r))'{\textrm{d}} r\\[5pt] &=-sf'(s)+\epsilon f'(\epsilon).\end{align*}

Thanks to (3.3), by letting $\epsilon\to 0$ , we see that $f'_{\!\!n}(s)\to f'(s)$ holds for all $s\in J$ .

Moreover, integration by parts implies that

\begin{align*}{\langle { S_{{min}}f,\,f }\rangle}&=\lim_{n\to \infty} {\langle {S_{{min}}f_n,\,f_n}\rangle}\\[5pt] &=\lim_{n\to \infty} \int_{0}^1 (\!-\!sf'_{\!\!n}(s))' f_n(s){\textrm{d}} s\\[5pt] &=\lim_{n\to \infty} \int_0^1 s (f'_{\!\!n}(s))^2{\textrm{d}} s\\[5pt] &\ge \int_0^1 s (f'(s))^2{\textrm{d}} s,\end{align*}

where the last inequality follows from Fatou’s lemma.

Lemma 3.5. (S, D(S)) is a nonnegative self-adjoint operator on $\mathfrak{H}$ .

Proof. Let $v_1$ be given as in (2.6). The identity (3.4) implies that we need only show that $ {\langle {S(f+cv_1), f+cv_1 }\rangle}\ge 0$ holds for all $f\in D_{{min}}$ and $c\in {\mathbb{R}}$ . For simplicity, we can assume that $c=1$ . Lemma 3.4 implies that

\begin{align*}{\langle {S(f+v_1), f+v_1 }\rangle}&={\langle {S f, f }\rangle}+ 2{\langle {Sf, v_1}\rangle} +{\langle {S v_1, v_1 }\rangle}\\[5pt] &\ge \int_0^1 s (f'(s))^2{\textrm{d}} s +2{\langle {Sf,v_1}\rangle} +{\langle {S v_1, v_1 }\rangle}.\end{align*}

By integration by parts, we see that

\begin{align*}{\langle {S v_1, v_1 }\rangle}=\int_0^1 s (v'_{\!\!1}(s))^2{\textrm{d}} s ,\qquad{\langle {Sf,v_1}\rangle}=\int_0^1 sf'(s)v'_{\!\!1}(s) {\textrm{d}} s.\end{align*}

Hence,

\begin{align*}{\left\vert{2{\langle {Sf,v_1}\rangle}}\right\vert}\le \int_0^1 s\!\left[(f'(s))^2+(v'_{\!\!1}(s))^2\right] {\textrm{d}} s=\int_0^1 s (f'(s))^2{\textrm{d}} s +{\langle {S v_1, v_1 }\rangle}.\end{align*}

Thus, we see that ${\langle {S(f+v_1), f+v_1 }\rangle}\ge 0$ .

Corollary 3.1. The first eigenvalue of the operator (S, D(S)) is positive; i.e., $\ell_0>0$ .

Proof. Lemma 3.5 implies that $\ell_0\ge 0$ . We need only show that 0 is not an eigenvalue. In Lemma 3.1, we have shown that the nontrivial linearly independent solutions of the equation $Sf\equiv 0$ are $\bar{v}_1(s) \equiv 1$ and $v_2(s)=\log s$ on (0, 1). It is clear that none of the nontrivial linear combinations of $\bar{v}_1$ and $v_2$ is in D(S), which implies that 0 is not an eigenvalue. Hence, $\ell_0>0$ .

3.2. Proof of Theorem 2.1

We first provide a representation of the generating function ${F}_i(s, t)$ with $i\ge 1$ . Since $F_i(s, 0)\notin D(S)$ , we cannot apply the eigenfunction expansion theory in $\mathfrak{H}$ directly. But it is clear that $F_i(s, 0)-1\in D(S)$ . Hence, to get around this difficulty, we need only consider the equation of the function $\bar{F}_i(s, t)=F_i(s, t)-1$ .

Since the Feller minimal Q-function is honest when $B'(1)<0$ (see [Reference Chen6, Reference Chen3]), it is clear that by (3.3), $\bar{F}_i(s, t)\in D(S)$ for all $t\ge 0$ . Then we obtain

(3.10) \begin{equation} \frac{\partial }{\partial t} \bar{F}_i(s,t)=-S{ \bar{F}}_i(s,t), \qquad (s,t)\in (0,1)\times (0,\infty), \end{equation}

with initial condition

\begin{align*} \bar{F}_i(s,\,0)=s^i-1.\end{align*}

We will derive a series representation of $\bar{F}_i(s, t)$ by the eigenfunction method.

Lemma 3.6. In the sense of abstract Cauchy problems, the above partial differential equation (3.10) has a unique solution, whose eigenfunction expansion is

(3.11) \begin{equation} \bar{F}_i(s,t)=\sum_{k=0}^{\infty} a^{(i)}_k e^{-t\ell_k} \varphi_k(s),\qquad s\in (0,1),\end{equation}

where the series converges in $L^2(J,w)$ , ${\left\{{\ell_k,\,\varphi_k(s)}\right\}}$ are the eigenvalues and eigenfunctions of the operator (S, D(S)) given in Lemma 3.3, and the coefficients ${\left\{{a^{(i)}_k }\right\}}$ are given by

(3.12) \begin{equation} a^{(i)}_k ={\langle {s^i-1,\, \varphi_k(s)}\rangle}.\end{equation}

Proof. We resort to the theory of semigroups of linear operators; see Pazy [Reference Pazy18, Chapter 4].

First, by Lemma 3.3 and Corollary 3.1, the Hille–Yosida theorem (see Pazy [Reference Pazy18, Theorem 1.3.1]) implies that $(\!-\!S,D(S))$ is the infinitesimal generator of a $C_0$ semigroup of contractions ${\left\{{T(t),t\geq 0}\right\}}$ on $L^2(J,w)$ .

Second, it follows from Pazy [Reference Pazy18, Theorem 4.1.3] that the abstract Cauchy problem (3.10) has a unique solution $u(t)=T(t)f$ for every initial value $f\in D(S)$ . Taking $f=s^i-1$ , we have that $\bar{F}_i(s,t)=T(t)f$ .

Third, by the spectral theorem for self-adjoint operators, the solution $\bar{F}_i(s,t)$ has the following representation:

(3.13) \begin{align} \bar{F}_i(s,t)&=\sum_{k=0}^{\infty} e^{-t \ell_k}\varphi_k(s) {\langle {f,\,\varphi_k}\rangle} =\sum_{k=0}^{\infty} a^{(i)}_k e^{-t\ell_k}\cdot \varphi_k(s). \end{align}

The following lemma ensures that the series in (3.11) can be differentiated with respect to s term by term.

Lemma 3.7. For each fixed $t\in[0,\infty)$ , the series $\sum\limits_{k=0}^{\infty} a^{(i)}_k e^{-t\ell_k} \varphi_k(s)$ and $\sum\limits_{k=0}^{\infty} a^{(i)}_k e^{-t\ell_k} \varphi_k'(s)$ converge absolutely and uniformly with respect to s on every compact subset of $J=(0,1)$ , where $\varphi'_{\!\!k}$ means the derivative of $\varphi_k$ .

Proof. Since $f=s^i-1\in D(S)$ , we have that $T_tf \in D(S)$ from Theorem 2.4(c) of Pazy [Reference Pazy18, p. 5]. Note that the second-order differential operator (S, D(S)) has a complete orthonormal set ${\left\{{\varphi_k}\right\}}$ of eigenfunctions. Thus, Theorem XIII.4.3 of Dunford and Schwartz [Reference Dunford and Schwartz9, p. 1332] implies that the eigenfunction expansion

\begin{align*} T_tf (s)= \sum_{k=0}^{\infty} a^{(i)}_k e^{-t\ell_k} \varphi_k(s) \end{align*}

converges uniformly and absolutely on each compact subinterval of $J=(0,1)$ , and the series may be differentiated term by term, with the differentiated series retaining the properties of absolute and uniform convergence.

Lemma 3.8. For any $i\in {\mathbb{N}}$ and for each $t\in[0,\infty)$ , we have that

(3.14) \begin{equation} P_{i1}(t)+\sum_{j=2}^{\infty} j P_{ij}(t)s^{j-1}= \sum_{k=0}^{\infty} a^{(i)}_k e^{-t\ell_k} \varphi'_{\!\!k}(s),\qquad s\in (0,1).\end{equation}

Proof. The uniqueness of the solution to the partial differential equation (3.10) implies that

(3.15) \begin{align} \sum_{j=0}^{\infty} P_{ij}(t)s^{j}= F_i(s,t)=1+\sum_{k=0}^{\infty} a^{(i)}_k e^{-t\ell_k} \varphi_k(s),\quad s\in(0,1).\end{align}

Because the series on the left-hand side of (3.15) is an analytic function of s when ${\left\vert{s}\right\vert}< 1$ and the series on the right-hand side of (3.15) can be differentiated term by term with respect to $s\in (0,1)$ (by Lemma 3.7), we can differentiate both series in (3.15) term by term with respect to s.

Remark 3.1. We can characterize the decay parameter $\lambda_C$ using only Equation (3.14). That is to say, we do not need to take $s\to 0+$ in Equation (3.14) to obtain an explicit expression for $P_{i1}(t) $ as in the previous work of Letessier and Valent [Reference Letessier and Valent15] and Roehner and Valent [Reference Roehner and Valent19].

Lemma 3.9. The decay parameter $\lambda_C$ for the QMBP satisfies the inequality

(3.16) \begin{equation} \lambda_C\geq \ell_0. \end{equation}

Proof. By taking $t=0$ in Lemma 3.7, we see that the series

\begin{align*} \sum\limits_{k=0}^{\infty} a^{(i)}_k \varphi'_{\!\!k}(s)\end{align*}

is uniformly and absolutely convergent on every compact subset of $J=(0,1)$ . Thus, by the Weierstrass M-test, the series

\begin{align*} \sum\limits_{k=0}^{\infty} a^{(i)}_k e^{-t(\ell_k-\lambda)} \varphi'_{\!\!k}(s)\end{align*}

is uniformly convergent with respect to $t\in[0,\infty)$ for each $s\in (0,1)$ .

Observe that $P_{11}(t)$ is dominated by the left-hand side of (3.14), so taking the Laplace transform and integrating term by term we obtain for each ${\lambda}<\ell_0$ the bound

\begin{align*} \int_{0}^{\infty} e^{\lambda t} P_{11}(t){\textrm{d}} t \le \int_{0}^{\infty} e^{\lambda t} \sum_{k=0}^{\infty} a^{(1)}_k e^{-t\ell_k} \varphi'_{\!\!k}(s) {\textrm{d}} t =-(R_{\lambda } f)'(s),\quad s\in (0,1),\end{align*}

where $f(s)=s-1$ and $R_{\lambda}$ is the resolvent of S. The last equality is again from Theorem XIII.4.3 of Dunford and Schwartz [Reference Dunford and Schwartz9, p. 1332], since $R_{\lambda}f\in D(S)$ (see Pazy [Reference Pazy18, p. 9]).

The fact that $R_{\lambda}f\in D(S)$ also implies that ${\left\vert{(R_{\lambda } f)'(s)}\right\vert}<\infty$ on any compact subinterval of J. Thus,

(3.17) \begin{equation} \int_{0}^{\infty} e^{\lambda t} P_{11}(t){\textrm{d}} t<\infty,\qquad 0\leq \lambda<\ell_0,\end{equation}

which implies that

(3.18) \begin{equation} \lambda_C=\sup{\left\{{\lambda\geq 0\;:\;\int_{0}^{\infty} e^{\lambda t} P_{11}(t){\textrm{d}} t<\infty }\right\}}\geq \ell_0. \end{equation}

We are now ready to give the proof of our first main result stated in Section 2.

Proof of Theorem 2.1. We give a proof by contradiction. Suppose $\lambda_C\neq \ell_0$ . Then it follows from Lemma 3.9 that $\lambda_C> \ell_0$ . Define $\tau = \inf{\left\{{t\ge 0\;:\;Z_t=0 }\right\}}$ and

\begin{align*} x_i(t)=P_i(\tau>t)=\sum\limits_{j\in {\mathbb{N}}} P_{ij}(t).\end{align*}

Since the set $N_0={\left\{{i\in {\mathbb{N}}\;:\;q_{i0}>0}\right\}}={\left\{{1}\right\}}$ is finite, the conclusion in Jacka and Roberts [Reference Jacka and Roberts12] implies that

\begin{align*}\lambda_C =- \lim_{t\to \infty} \frac{\log x_1(t)}{t}. \end{align*}

Thus, for each $\epsilon>0$ such that $\ell_0 +\epsilon <\lambda_C$ , we obtain that when t is large enough,

\begin{align*} e^{ t (\ell_0 +\epsilon)}x_1(t)\le 1.\end{align*}

Hence,

\begin{equation*}\lim_{t\to \infty} e^{\ell_0 t}x_1(t)=\lim_{t\to \infty} \sum_{j\in {\mathbb{N}}} e^{\ell_0 t} P_{1j}(t) =0,\end{equation*}

which implies that

(3.19) \begin{equation} \lim_{t\to \infty}e^{\ell_0 t}\left[ P_{11}(t)+\sum_{j=2}^{\infty} j P_{1j}(t)s^{j-1}\right]=0,\qquad s\in(0,1).\end{equation}

On the other hand, it follows from Lemma 3.8 that for any $s\in(0,1)$ ,

(3.20) \begin{align} 0\le \lim_{t\to \infty}e^{\ell_0 t}[ P_{11}(t)+\sum_{j=2}^{\infty}j P_{1j}(t)s^{j-1}]&= \lim_{t\to \infty} \sum_{k=0}^{\infty} a^{(1)}_k e^{-t(\ell_k-\ell_0)} \varphi'_{\!\!k}(s)\nonumber \\[5pt] &=a^{(1)}_0 \varphi'_{\!\!0}(s),\end{align}

where the last equality is from Lebesgue’s dominated convergence theorem and the absolute convergence of the series $\sum\limits_{k=0}^{\infty} a^{(1)}_k \varphi'_{\!\!k}(s)$ .

By Lemma 3.3, we can take $\varphi_0(s)> 0$ , $s\in(0,1)$ . Hence,

(3.21) \begin{equation} a_0^{(1)}=\int_0^1 (s-1)\varphi_0(s) w(s){\textrm{d}} s < 0.\end{equation}

By combining Equations (3.20)–(3.21) with Equation (3.19), we obtain that $\varphi'_{\!\!0}(s)\equiv 0$ , $s\in (0,1)$ . Thus $\varphi_0(s)$ is constant on (0, 1), so $S\varphi_0=0$ and hence $\ell_0=0$ , contradicting Corollary 3.1. This finishes the proof of Theorem 2.1.

3.3. Proof of Theorem 2.2

Denote by ${\mathfrak{G}}$ the Hilbert space $L^2(J,\,w_1)$ with $w_1(s)=s$ . Since

\begin{align*} \frac{1}{w(s)},\,\frac{1}{w_1(s)}\in L^1_{loc}(J),\end{align*}

the Cauchy–Schwarz inequality implies that ${\mathfrak{H}},\,{\mathfrak{G}}\subset L^1_{loc}(J)$ ; the reader can refer to Corollary 1.6 of Kufner and Opic [Reference Kufner and Opic14] for details.

Define $\mathcal{S}={\left\{{w(s),w_1(s)}\right\}}$ . Let us define the Sobolev space with weight $\mathcal{S}$ ,

\begin{equation*}W^{1,2}(J,\,\mathcal{S}),\end{equation*}

as the set of all functions $f\in {\mathfrak{H}}$ such that the weak derivative (or say distributional derivative) $\textrm{D}f$ is again an element of ${\mathfrak{G}}$ . Theorem 1.11 of Kufner and Opic [Reference Kufner and Opic14] says that $W^{1,2}(J,\,\mathcal{S})$ is a Hilbert space if equipped with the norm

\begin{align*} |\|f|\|^2 ={\left\Vert{f}\right\Vert}_{{\mathfrak{H}}}^2+{\left\Vert{\textrm{D}f}\right\Vert}_{{\mathfrak{G}}}^2.\end{align*}

Let $C_{c}^{\infty}(J)$ denote the space of infinitely differentiable functions $\phi\;:\;J\to {\mathbb{R}}$ with compact support in J. Since $w(s),\,w_1(s),\,\frac{1}{w(s)},\,\frac{1}{w_1(s)}\in L^1_{loc}(J)$ , it follows from Lemma 4.4 of Kufner and Opic [Reference Kufner and Opic14] that $C_{c}^{\infty}(J)\subset W^{1,2}(J,\,\mathcal{S})$ . Then we define

\begin{align*} W^{1,2}_0(J,\,\mathcal{S})=\overline{C_{c}^{\infty}(J)},\end{align*}

the closure being taken with respect to the norm of the weighted Sobolev space $W^{1,2}(J,\,\mathcal{S})$ .

Let Q be the quadratic form defined on the domain $D'_{\!\!min}$ of the nonnegative symmetric operator $S'_{\!\!min}$ by

\begin{align*}Q (f,g)={\langle {S'_{\!\!min} f,\,g }\rangle}=\int_0^1\, s f'(s)g'(s){\textrm{d}} s.\end{align*}

By the Friedrichs extension theorem (see Theorem 4.4.5 of Davies [Reference Davies8]), the quadratic form Q is closable. Let $\bar{Q}$ be the closure of Q. Since the domain $D(\bar{Q})$ of $\bar{Q}$ is the closure of $D'_{\!\!min}$ with respect to the norm of the weighted Sobolev space $W^{1,2}(J,\,\mathcal{S})$ , we have that

\begin{align*} D(\bar{Q})=W^{1,2}_0(J,\,\mathcal{S}).\end{align*}

Lemma 3.10. $(S,\,D(S))$ is the Friedrichs extension of $(S_{min}, D_{{min}})$ . That is to say, $\bar{Q}$ is the quadratic form arising from the nonnegative self-adjoint operator $(S,\,D(S))$ .

Proof. Let $(L,\,D(L))$ be the nonnegative self-adjoint operator associated with the closed quadratic form $\bar{Q}$ . We need only show that $ D(L) =D(S)$ .

Since $(L,\,D(L))$ is a self-adjoint realization of $(S_{min}, D_{{min}})$ , there exist $a_1,\,a_2\in {\mathbb{R}}$ with $(a_1,\,a_2)\neq(0,0)$ such that

\begin{align*}D(L)={\left\{{y+c\cdot(a_1v_1+a_2v_2)\;:\;y\in D_{{min}},\,c\in {\mathbb{R}}}\right\}},\end{align*}

where $v_1,\,v_2$ are given in Lemma 3.2. On the other hand, we have that

\begin{align*} D(L)\subset D\!\left(L^{\frac12}\right)= D(\bar{Q}).\end{align*}

Hence, $a_1v_1+a_2v_2 \in D(\bar{Q})\subset W^{1,2}(J,\,\mathcal{S})$ , which implies that

\begin{align*}a_1v'_{\!\!1}(s)+a_2v'_{\!\!2}(s)\in {\mathfrak{G}},\end{align*}

i.e.,

\begin{align*}\int_0^1 \big(a_1v'_{\!\!1}(s)+a_2v'_{\!\!2}(s)\big)^2 s{\textrm{d}} s<\infty.\end{align*}

Since $v'_{\!\!1}(s)\in C_c^{\infty}(J)$ and $v'_{\!\!2}(s)=\dfrac{1}{s}$ , we see that $a_2=0$ . Hence $ a_1\neq 0$ and $D(L)=D(S)$ .

We now provide a proof for our second main result stated in Section 2.

Proof of Theorem 2.2. Since $D(\bar{Q})=\overline{C_{c}^{\infty}(J)}$ with respect to the norm of the weighted Sobolev space $W^{1,2}(J,\,\mathcal{S})$ , we see that $C_{c}^{\infty}(J)$ is a core for $\bar{Q}$ . The variational formula (see Theorem 4.5.3 of Davies [Reference Davies8]) implies that the first eigenvalue of S can be expressed as

\begin{align*} \ell_0&=\inf{\left\{{ {Q}(f)\;:\;\, f\in C_{c}^{\infty}(J),\,{\left\Vert{f}\right\Vert}_{{\mathfrak{H}}}=1}\right\}}\\[5pt] &=\inf{\left\{{\frac {\int_0^1 s\big( f'(s)\big)^2 {\textrm{d}} s}{\int_0^1 f^2(s) w(s){\textrm{d}} s}\;:\;\, f\not\equiv 0,\,f\in C_{c}^{\infty}(J)}\right\}}. \end{align*}

Hence, we obtain (2.7).

It is obvious that for any $\xi\in (0,1)$ , the function

\begin{align*} f_{\xi}(s)=\int_s^1\frac{1}{ w_1(r)} \textrm{1}_{(\xi, 1)}(r) {\textrm{d}} r,\quad s\in (0,1),\end{align*}

belongs to the domain D(S). Theorem 6.2 of Opic and Kufner [Reference Opic and Kufner17, p. 65] implies that the optimal constant C in Hardy’s inequality

\begin{align*} \left(\int_0^1 f^2(s) w(s){\textrm{d}} s \right)^{\frac12} \le C \left(\int_0^1\big( f'(s)\big)^2 w_1(s) {\textrm{d}} s \right)^{\frac12},\quad f(1)=0, \end{align*}

satisfies the estimates

\begin{align*} D \le C\le 2 D,\end{align*}

where

\begin{align*} D=\sup_{s\in (0,1)}{\left\{{\left( \int_0^s w(r){\textrm{d}} r \right)^{\frac12}\left( \int_s^1 \frac{1}{w_1(r)}{\textrm{d}} r\right)^{\frac12}}\right\}}. \end{align*}

Hence, we obtain (2.8) and (2.9). This completes the proof of Theorem 2.2.

4. Examples

We now provide two examples to illustrate the results we obtained in the previous section. The purpose of providing these two examples is twofold. On the one hand, they show that in some cases the value of the Hardy index $D^2$ can be given exactly. On the other hand, they are helpful in obtaining better bounds on the Hardy index for general models; see Section 5.

Example 4.1. (Quadratic birth–death process.) When $b_j\equiv 0$ for all $j\geq 3$ , the quadratic branching process (1.1) degenerates to a birth–death process with the birth rate $\{\nu_n\}$ and death rate $\{\mu_n\}$ as follows:

\begin{equation*} \nu_n= b n^2, \qquad \mu_n= a n^2 .\end{equation*}

Here we have set $a=b_0$ and $b=b_2$ ; the condition $B'(1)<0$ means that $b<a$ . Let $\kappa=\frac{b}{a}$ . Although this process has been extensively discussed, we are still able to obtain some new conclusions. In particular, for this special case, we can get the exact value of $D^2$ presented in (2.9). Indeed, it is fairly easy to show (see below) that

(4.1) \begin{align} D^2&= \frac{1}{a-b}\sup_{s \in (0,1)} {\left\{{(\!-\!\log s)\left(\!\log\frac{1- \kappa s}{ 1-s } \right)}\right\}}\nonumber\\[5pt] &=\frac{\big[\!\log\!( 1+\sqrt{1-\kappa} )\big]^2}{a-b},\end{align}

which then implies that

\begin{equation*}\frac{a-b}{4\big[ \!\log\!(1+\sqrt{1-\kappa})\big]^2}\leq \lambda_C\leq \frac{a-b}{\big[ \!\log\!(1+\sqrt{1-\kappa})\big]^2}.\end{equation*}

When $b \to a^-$ , the limit of the lower bound is $ \dfrac{a}{4}$ , which is the exact value of the decay parameter $\lambda_C$ when $a=b$ . See Chen [Reference Chen4] or Roehner and Valent [Reference Roehner and Valent19].
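The closed form (4.1) is easy to confirm numerically; the sketch below (our own, for the hypothetical choice $a=2$ , $b=0.5$ ) compares a grid supremum with the exact expression.

```python
# Sketch (ours): grid check of the closed form (4.1) with a = 2, b = 0.5.
import numpy as np

a_, b_ = 2.0, 0.5
kappa = b_ / a_
s = np.linspace(1e-6, 1 - 1e-6, 1_000_000)
grid = np.max(-np.log(s) * np.log((1 - kappa * s) / (1 - s))) / (a_ - b_)
exact = np.log(1 + np.sqrt(1 - kappa))**2 / (a_ - b_)
print(grid, exact)   # both approximately 0.2594
```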

Comparing our results with bounds obtained in Chen [Reference Chen4], we find that our estimates are better than the estimates in Chen [Reference Chen4, Theorem 4.2],

\begin{align*} \frac1{4\delta}\leq \lambda_C\leq\frac1{\delta},\end{align*}

but worse than the improved estimates in Chen [Reference Chen4, Corollary 4.4],

\begin{align*} \frac1{\delta_1}\leq \lambda_C \leq\frac1{\delta'_{\!\!1}}.\end{align*}

For more details on $\delta, \delta_1, \delta'_{\!\!1}$ , we refer to Chen [Reference Chen4, Section 4].

To obtain the exact value of $D^2$ for our quadratic birth–death process, we need the following lemma.

Lemma 4.1. Suppose that $\sigma$ is a strictly positive constant. Then

\begin{align*} \log\!(1+\sigma t)\log\!\left(1+ \frac{\sigma }{t}\right)\leq [\!\log\!(1+\sigma)]^2, \qquad\forall t\in (0,\infty).\end{align*}

Proof. We maximize $f(t,s)=\log\!(1+\sigma t)\log\!(1+ {\sigma }{s}),\,(s,t)\in (0,\infty)\times (0,\infty)$ , subject to the constraint $s-\frac{1}{t}=0$ , using the method of Lagrange multipliers. Let

\begin{align*} F(t,s,\theta)=\log\!(1+\sigma t)\log\!(1+ {\sigma }{s})+ \left(s-\frac{1}{t} \right)\theta.\end{align*}

Then we have that

(4.2) \begin{equation} \left\{ \begin{array}{l@{\quad}l@{\quad}l} \dfrac{\sigma }{1+\sigma t}\log\!(1+ {\sigma }{s})+\dfrac{\theta}{t^2}=0,\\[13pt] \dfrac{\sigma }{1+\sigma s}\log\!(1+ {\sigma }{t})+ {\theta} =0,\\[13pt] s-\dfrac{1}{t}=0, \end{array}\right.\end{equation}

which implies that

(4.3) \begin{align} \frac{1+\sigma t}{t }\log\!(1+ \sigma t)= \frac{1+\sigma s}{s}\log\!(1+ \sigma s).\end{align}

Consider the function

\begin{align*} G(t)=\frac{1+\sigma t}{t }\log\!(1+ \sigma t).\end{align*}

Since for $t\in (0,\infty)$ ,

\begin{align*} G'(t)=\frac{\sigma}{t}-\frac{1}{t^2} \log\!(1+ \sigma t)=\frac{1}{t^2} [\sigma t -\log\!(1+ \sigma t) ]>0,\end{align*}

Equation (4.3) implies that $t=s$ . Together with the third equation of (4.2), we have that $t=s=1$ , which implies the desired inequality.

We can also get the result by another, more elementary approach.

Let $f(t)=\log\!(1+\sigma t)\log\!(1+ \frac{\sigma }{t})$ ; then it is clear that $f(t)=f(\frac1t)$ . By differentiating both sides, we obtain

\begin{align*} f'(t)=-\frac{1}{t^2}f'\left(\frac1t \right),\end{align*}

which implies that if $f'(t)\geq0$ for $t\in (0,1)$ , then $f'(t)\leq0$ for $t\in (1,\infty)$ . Thus, the desired inequality follows from $f(t)\leq f(1)$ .

Hence it remains to show that $f'(t)\geq0$ on (0, 1), which can be simplified to

(4.4) \begin{equation}(t^2+\sigma t)\log\!\left(1+\frac{\sigma}{t}\right)\geq(1+\sigma t)\log\!(1+\sigma t),\qquad 0<t<1.\end{equation}

Now consider $g(x)=(1+\sigma x)\log\!(1+\sigma x)$ and the straight line $l(x)=(1+\sigma)\log\!(1+\sigma)x$ . It is easy to see that

\begin{align*} g(0)=l(0),\qquad g(1)=l(1).\end{align*}

Hence, by the convexity of g(x), we obtain

(4.5) \begin{equation}g(x)<l(x),\text{ for }x\in(0,1),\qquad g(x)>l(x)\text{ for }x\in (1,\infty).\end{equation}

Thus, to show (4.4), it suffices to show

(4.6) \begin{equation} (t^2+\sigma t)\log \left(1+\frac{\sigma}{t} \right)\geq l(t),\qquad 0<t<1.\end{equation}

Letting $s=\frac1t$ yields that (4.6) is equivalent to

\begin{equation*} (1+\sigma s)\log\!(1+\sigma s)\geq l(s),\qquad 1<s<\infty.\end{equation*}

This follows immediately from (4.5), which completes the proof.

Now we are ready to get the $D^2$ value for the quadratic birth–death process. Indeed, for $\kappa\in (0,1)$ , taking $\sigma=\sqrt{1-\kappa}$ and $\frac{1}{x}=1+\sigma t $ , we immediately obtain from Lemma 4.1 that

\begin{align*} \sup_{x \in (0,1)} {\left\{{-\log x\log\frac{1-\kappa x}{ 1-x } }\right\}} &=\big[ \!\log\!( 1+\sqrt{1-\kappa} )\big]^2.\end{align*}

Substituting the above identity into (5.4), we have that

(4.7) \begin{align}D^2=\frac{\big[\!\log\!( 1+\sqrt{1-\kappa} )\big]^2}{(1-\kappa)a}.\end{align}

Together with (2.8) and the remark before Theorem 2.2, this yields the conclusions for Example 4.1.

Example 4.2. (Quadratic branching process with upwardly skipping 2.)

A quadratic branching process is said to be upwardly skipping by 2 if $b_0>0$ , $b_2\geq0$ , $b_3>0$ , and $b_j\equiv 0$ for all $j\geq 4$ . To our knowledge, this case has not yet been discussed in the literature. For this new case, we have

\begin{align*}B(s)&=(s-1)[b_3s^2 +(b_2+b_3)s- b_0].\end{align*}

Hence $B'(1)<0$ is equivalent to $b_2+2b_3<b_0$ , which then implies that there are three real roots $s_0, s_1, s_2$ of $B(s)=0$ , which satisfy $s_0=1$ , $s_1>1$ , and $s_2 <0$ . Moreover, it is fairly easy to show that the function

\begin{equation*}\phi(s)=(\!-\!\log s) \left(\int_0^s \frac{{\textrm{d}} r}{B (r)} \right)\end{equation*}

is concave on (0, 1) (see Lemma 4.2 below), and there is only one stationary point $s^*$ of the function $\phi(s)$ with $0<s^*<1$ , i.e., $\phi'(s^*)=0$ . Hence

\begin{equation*}D^2=\sup_{s\in (0,1)} \phi(s)=\phi(s^*),\end{equation*}

and

(4.8) \begin{align} \frac{1}{4 \phi(s^*) }\le \lambda_C\le \frac{1}{\phi(s^*) }.\end{align}
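Numerically, the concavity of $\phi$ makes $s^*$ easy to locate. The sketch below (our own) does this by bisecting a numerical $\phi'$ for the hypothetical upwardly skipping rates $b=(2,-3,0.6,0.4)$ used earlier.

```python
# Sketch (ours): locate the stationary point s* of the concave function phi
# by bisection on a numerical phi', for hypothetical rates b = (2,-3,0.6,0.4).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

b = [2.0, -3.0, 0.6, 0.4]
B = lambda r: sum(bj * r**j for j, bj in enumerate(b))

phi = lambda s: -np.log(s) * quad(lambda r: 1.0 / B(r), 0.0, s)[0]
dphi = lambda s, h=1e-6: (phi(s + h) - phi(s - h)) / (2 * h)

s_star = brentq(dphi, 1e-3, 1 - 1e-3)   # unique zero of phi' by concavity
D2 = phi(s_star)
print("s* =", s_star, " D^2 =", D2, " so", 1/(4*D2), "<= lambda_C <=", 1/D2)
```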

For convenience, let us denote B(s) by B(x), and let $x_1=s_0$ , $x_2=s_1$ , and $x_3=s_2$ ; thus $x_1=1$ , $x_2=c>1$ , and $x_3=-d$ with $d>c$ when $b_3=1$ .

Lemma 4.2. Let the above assumptions on the cubic polynomial B(x) hold. Then we have that the function

(4.9) \begin{align} \varphi(x)=\left(\!\log \frac{1}{x}\right)\cdot\left( \int_0^x \frac{1}{B(t)}{\textrm{d}} t\right) ,\qquad x\in (0,1), \end{align}

is concave on (0, 1).

Proof. Without any loss of generality, we assume that $b_3=1$ . Then $B(x)=(x-1)(x-c)(x+d)$ . Hence, by the method of undetermined coefficients, we have the resolution in partial fractions of the function $\frac{1}{B(x)}$ :

\begin{align*} \frac{1}{B(x)}&=\frac{\alpha_1}{x-1}+\frac{\alpha_2}{x-c}+\frac{\alpha_3}{x+d},\\[5pt] \alpha_2&=\frac{1}{(c -1)(c+d)}>0,\,\,\alpha_3=\frac{1}{(d +1)(d+c)}>0,\\[5pt] \alpha_1&=\frac{1}{(1-c)(1+d)}=-(\alpha_2+\alpha_3). \end{align*}

Thus, for any $x\in (0,1)$ ,

\begin{align*} \int_0^x \frac{1}{B(t)}{\textrm{d}} t &=\alpha_2\int_0^x \left(\frac{1}{1-t}-\frac{1}{ c-t}\right){\textrm{d}} t +\alpha_3\int_0^x \left(\frac{1}{ d+ t}+ \frac{1}{1-t}\right){\textrm{d}} t \\[5pt] &=\alpha_2 \log \frac{1-\frac{x}{c}}{1-x}+\alpha_3 \log \frac{1+\frac{x}{d}}{1-x} . \end{align*}

It follows that

(4.10) \begin{align}\varphi(x)= \alpha_2 \log\frac{1}{x}\log \frac{1-\frac{x}{c}}{1-x}+\alpha_3 \log\frac{1}{x}\log \frac{1+\frac{x}{d}}{1-x}.\end{align}

Since ${\left\vert{\frac{1}{c}}\right\vert}<1$ and ${\left\vert{\frac{1}{d}}\right\vert}<1$ , it follows from Lemma 4.3 that both

\begin{align*} \log\frac{1}{x}\log \frac{1-\frac{x}{c}}{1-x}\end{align*}

and

\begin{align*} \log\frac{1}{x}\log \frac{1+\frac{x}{d}}{1-x} \end{align*}

are concave, which implies that $\varphi''(x)<0$ because $\alpha_2,\,\alpha_3>0$ .

The following simple inequality involving the logarithm function is crucial to our later analysis; the proof of the inequality can be found in, say, Kuang’s book [Reference Kuang13, Theorem 53, p. 293].

Proposition 4.1. If $x>0$ and $x\neq 1$ , then

(4.11) \begin{equation} \frac{\log x}{x-1}\leq \frac{1+x}{2x}. \end{equation}

We also need the following inequality about a univariate quadratic polynomial. We omit its proof, since it is very simple.

Proposition 4.2.

(4.12) \begin{equation} p(1-p)x^2-4px+p-1<0,\qquad \forall x\in (0,1),\,\, \,\,{\left\vert{p}\right\vert}<1. \end{equation}

Lemma 4.3. Suppose that ${\left\vert{p}\right\vert}<1$ is a fixed constant; then the function defined by

\begin{align*} f(x)=-\log x\log\frac{1+p x}{ 1-x }\end{align*}

is a concave function on (0, 1), i.e., $f''(x)<0$ on (0, 1).

Proof. By the Leibniz rule, we can easily compute the first and the second derivatives of the function f(x) as follows:

(4.13) \begin{align} f'(x)&=-\frac{1}{x }\log \frac{1+p x}{ 1-x } +\frac{ 1+p }{(1-x)(1+px)}\log\frac{1}{x}, \end{align}
(4.14) \begin{align} f''(x)&=\frac{1}{x^2}\log \frac{1+p x}{ 1-x } -\frac{2(1+p)}{x(1-x)(1+px)}+\frac{(1+p)(2px+1-p)}{(1-x)^2(1+px)^2}\log\frac{1}{ x}. \end{align}

It is easy to check that on (0, 1), the function f(x) satisfies the following symmetric relationship:

(4.15) \begin{equation} f(x)=f\!\left(\frac{1-x}{1+px}\right). \end{equation}

Define the transformation

\begin{align*} y=T(x)=\frac{1-x}{1+px}=\frac{1}{p}\left[\frac{1+p}{1+px}-1\right],\quad x\in(0,1).\end{align*}

By differentiating the symmetric equation (4.15) and using the chain rule and the product rule, we immediately obtain that when $x\in (0,\, 1)$ ,

(4.16) \begin{align} f'(x)&=f'(y)\big|_{y=T(x)}\cdot \frac{-(1+p)}{(1+px)^2},\nonumber\\[5pt] f''(x)&= f''(y)\big|_{y=T(x)}\cdot \left[\frac{(1+p)}{(1+px)^2}\right]^2+f'(y)\big|_{y=T(x)}\cdot\frac{2p(1+p)}{(1+px)^3}\nonumber\\[5pt] &=\frac{ 1+p }{(1+px)^3}\cdot\left[\frac{1+p}{1+px}f''(y)+2p f'(y) \right]\big|_{y=T(x)}\nonumber\\[5pt] &=\frac{ 1+p }{(1+px)^3}\cdot\big[(1+py)f''(y)+ 2p f'(y) \big]\big|_{y=T(x)}. \end{align}

It follows from Equations (4.13) and (4.14) that

(4.17) \begin{align} &\quad (1+px)f''(x)+ 2p f'(x)\nonumber\\[5pt] &=-\frac{2(1+p)}{x(1-x)}+\frac{1-px}{x^2}\log \frac{1+p x}{ 1-x }+ \frac{(1+p)^2}{(1-x)^2(1+px) }\log\frac{1}{ x}. \end{align}

It follows from Proposition 4.1 that for any $x\in(0,1)$ and ${\left\vert{p}\right\vert}<1$ ,

\begin{align*} \log\frac{1}{ x}& \leq \left(\frac{1}{x}-1\right)\frac{1+\frac{1}{x}}{\frac{2}{x}}=\frac{(1-x)(1+x)}{2x},\\[5pt] \mbox{and\;\;}\log \frac{1+p x}{ 1-x }&\leq \left(\frac{1+p x}{ 1-x }-1\right)\frac{1+\frac{1+p x}{ 1-x }}{\frac{2 (1+p x)}{( 1-x) } } =\frac{(1+p)x}{1-x}\cdot\frac{2+(p-1)x}{2(1+px)}. \end{align*}

Substituting the above two inequalities into Equation (4.17) then yields

\begin{align*} &\quad (1+px)f''(x)+ 2p f'(x)\\[5pt] &\leq -\frac{2(1+p)}{x(1-x)}+\frac{1-px}{x }\frac{ 1+p }{1-x}\cdot\frac{2+(p-1)x}{2(1+px)}+ \frac{(1+p)^2}{(1-x) (1+px) }\frac{ (1+x)}{2x}\\[5pt] &=\frac{1+p}{2x(1-x)(1+px)}\big[p(1-p)x^2-4px+p-1 \big]\\[5pt] &<0\qquad \text{(by Proposition 4.2).} \end{align*}

Finally, substituting the above inequality into Equation (4.16), we immediately obtain the desired $f''(x)<0$ on (0, 1).

By Lemma 4.2, we see that $\varphi(x)$ is concave on (0, 1). It is not difficult to prove that there is only one point $x_0\in (0, 1)$ such that $\varphi'(x_0)=0$ , and we also have

(4.18) \begin{align} D^2=\sup_{x\in (0,1)}\varphi(x)= \varphi(x_0).\end{align}

Therefore, for our second example we can estimate $\lambda_C$ using the above equality and Theorem 2.2. That is,

(4.19) \begin{equation} \frac{1}{ 4\varphi(x_0)}\leq \lambda_C\leq \frac{1}{ \varphi(x_0)}.\end{equation}

5. Estimation of the Hardy index (proofs of Corollaries 2.1–2.4)

The basic aim of this final section is to estimate the value of the Hardy index $D^2$ for our QMBPs and to prove Corollaries 2.1–2.4, which were stated in Section 2. To achieve this aim we need the following simple yet useful lemma, which reveals some deeper properties of A(s) (which is defined above, e.g. in (3.5), as $A(s)=\frac{B(s)}{1-s}$ ).

Lemma 5.1. The function A(s) is a positive, bounded, analytic function on (0, 1) whose derivatives of all orders are nonpositive on [0, 1], with $A'(s)<0$ . In particular, A(s) is strictly decreasing on [0, 1], with minimum value $A(1)=b_0-m_b$ and maximum value $A(0)=b_0$ on [0, 1]. Also, A(s) is concave on (0, 1).

Proof. Under the condition $B'(1)<0$ , we know that by Proposition 1.1, B(s) has no zero on (0, 1). It follows that as a power series, B(s) is analytic on (0, 1), and thus so is the function A(s). In particular, A(s) is a continuous function of $s\in (0,1)$ . Note that

\begin{equation*} \lim_{s\downarrow0}A(s)=b_0>0 \end{equation*}

and

\begin{equation*} \lim_{s\uparrow 1}A(s)=\lim_{s\uparrow 1}\frac{B(s)}{1-s}=-B'(1)>0,\end{equation*}

where we have used $B(1)=0$ and L'Hôpital's rule; in particular, this limit is finite.

In short,

\begin{align*} \lim_{s\downarrow 0}A(s)=b_0, \qquad \lim_{s\uparrow 1}A(s)=m_d-m_b=b_0-m_b.\end{align*}

It follows that A(s) is positive and bounded on [0, 1]. We now show that A(s) is strictly decreasing on [0, 1].

Note that for $s\in(0,1)$ we have

(5.1) \begin{equation}A(s)= B(s)\cdot\sum_{n=0}^{\infty}s^n=\sum_{j=0}^{\infty}b_j s^j\cdot\sum_{n=0}^{\infty}s^n.\end{equation}

Since A(s) is analytic on (0, 1), we may expand A(s) as a power series on (0, 1):

\begin{equation*}A(s)=\sum_{n=0}^{\infty}a_ns^n.\end{equation*}

Then by (5.1) we get

(5.2) \begin{equation}\forall\; 0\le n<+\infty,\quad a_n=\sum_{k=0}^nb_k.\end{equation}

Now by (5.2) and (1.2) we get that $a_0>0$ , $a_1<0$ , and

\begin{equation*}\forall n\geq 2, \quad a_n\le 0.\end{equation*}

It follows that all the coefficients of all the derivatives of A(s) are nonpositive. In particular, since $a_1<0$ , for any $s\in[0,1]$ we have $A'(s)\le a_1<0$ , and thus A(s) is strictly decreasing on (0, 1).

Therefore

\begin{equation*}\min_{s\in[0,1]}A(s)=A(1)=b_0-m_b=m_d-m_b=-B'(1),\end{equation*}

and

\begin{equation*}\max_{s\in[0,1]}A(s)=A(0)=b_0.\end{equation*}

Thus for any $s\in(0,1)$ we have

\begin{equation*}0<b_0-m_b<A(s)<b_0<+\infty.\end{equation*}

The concavity of A(s) on (0, 1) follows at once from the fact that $A''(s)\le 0$ for all $s\in(0,1)$ .
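A quick numerical confirmation of Lemma 5.1 (our own, for the hypothetical rates $b=(2,-3,0.6,0.4)$ ): the coefficients $a_n$ are the partial sums (5.2), and A should decrease from $b_0$ to $b_0-m_b$ and be concave.

```python
# Sketch (ours): check Lemma 5.1 for hypothetical rates b = (2, -3, 0.6, 0.4).
import numpy as np

b = np.array([2.0, -3.0, 0.6, 0.4])
a = np.cumsum(b)                         # a_n = b_0 + ... + b_n, as in (5.2)
A = lambda s: sum(an * s**n for n, an in enumerate(a))

s = np.linspace(0.0, 1.0, 1001)
v = A(s)
print(v[0], v[-1])                       # A(0) = b_0 = 2.0, A(1) = b_0 - m_b = 0.6
print(bool(np.all(np.diff(v) < 0)))      # strictly decreasing: True
print(bool(np.all(np.diff(v, 2) < 0)))   # concave (second differences < 0): True
```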

Using the interesting and useful properties of A(s) stated in Lemma 5.1, we are able to prove Corollaries 2.1–2.4.

Proof of Corollary 2.1. Note first that the Hardy index $D^2$ represented in (2.9) can be rewritten as

(5.3) \begin{align} D^2= \sup_{x\in (0,1)} {\left\{{\int_0^x \frac{1}{(1-t)A(t)}{\textrm{d}} t \int_x^1 \frac{1}{t} {\textrm{d}} t}\right\}}.\end{align}

By Lemma 5.1, we know that $0<b_0-m_b\leq A(x)\leq b_0$ for all x in [0, 1]. It follows that

\begin{align*} \frac{1}{b_0}\int_0^x \frac{1}{1-t}{\textrm{d}} t \int_x^1 \frac{1}{t} {\textrm{d}} t\leq \int_0^x \frac{1}{(1-t)A(t)}{\textrm{d}} t \int_x^1 \frac{1}{t} {\textrm{d}} t\leq \frac{1}{b_0-m_b}\int_0^x \frac{1}{1-t}{\textrm{d}} t \int_x^1 \frac{1}{t} {\textrm{d}} t.\end{align*}

Clearly we have that for all $x\in (0,1)$ ,

\begin{align*} \int_0^x \frac{1}{1-t}{\textrm{d}} t \int_x^1 \frac{1}{t} {\textrm{d}} t&=\log\!(1-x)\log x \leq (\!\log 2)^2,\end{align*}

where the inequality follows from Lemma 4.1 with $\sigma=1$ (substitute $x=\frac{1}{1+t}$ , so that $-\log x=\log\!(1+t)$ and $-\log\!(1-x)=\log\!(1+1/t)$ ), with equality at $x=\frac12$ .

Hence, we obtain that the quantity $D^2$ in (2.9) satisfies

\begin{align*}\frac{(\!\log 2)^2}{b_0 }\le D^2\le \frac{(\!\log 2)^2}{b_0-m_b}, \end{align*}

which completes the proof of Corollary 2.1.
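As a numerical sanity check (ours, not part of the paper's argument): since $(1-t)A(t)=B(t)$, the inner integral in (5.3) is simply $\int_0^x {\textrm{d}} t/B(t)$, so $D^2$ can be approximated on a grid and compared with the bounds just obtained. For the same illustrative rates as above:

\begin{verbatim}
import numpy as np

# Same illustrative rates as before (our own choice, not from the paper).
b = np.array([2.0, -2.7, 0.4, 0.2, 0.1])
b0 = b[0]
m_b = sum((j - 1) * bj for j, bj in enumerate(b) if j >= 2)

t = np.linspace(1e-9, 1 - 1e-6, 200_000)
dt = t[1] - t[0]
inner = np.cumsum(1.0 / np.polyval(b[::-1], t)) * dt  # crude sum for int_0^x dt/B(t)
D2 = np.max(inner * (-np.log(t)))                     # the supremum in (5.3)

lo, hi = np.log(2) ** 2 / b0, np.log(2) ** 2 / (b0 - m_b)
print(f"D^2 = {D2:.4f} lies in [{lo:.4f}, {hi:.4f}]")
assert lo <= D2 <= hi
\end{verbatim}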

In order to show Corollary 2.2, first recall that for the quadratic birth–death process (see Example 4.1 in Section 4), we have assumed that $b_0=a>0$ , $b_2=b>0$ , and $b_j\equiv0$ for all $j\ge 3$ ; thus $b_1=-(a+b)$ . Then $B(x)=a-(a+b)x+bx^2=(1-x)(a-bx)$ , and the condition $B'(1)<0$ means that $b<a$ . Define $\kappa=\frac{b}{a} $ . From Equation (2.9), it is easy to see that

(5.4) \begin{align} D^2&= \frac{1}{a-b}\sup_{x \in (0,1)} {\left\{{-\log x\log\frac{1- \kappa x}{ 1-x } }\right\}}.\end{align}
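Before proving the corollary, a quick grid search (a numerical illustration of ours, with sample rates $a=2$ and $b=1$, so $\kappa=\frac12$) locates the supremum in (5.4):

\begin{verbatim}
import numpy as np

# Grid search for the supremum in (5.4); a = 2, b = 1 are our own sample rates.
a_rate, b_rate = 2.0, 1.0
kappa = b_rate / a_rate

x = np.linspace(1e-6, 1 - 1e-6, 100_000)
g = -np.log(x) * np.log((1 - kappa * x) / (1 - x)) / (a_rate - b_rate)
i = int(np.argmax(g))
print(f"D^2 = {g[i]:.5f} at x = {x[i]:.3f}")   # approx 0.2860 near x = 0.59
\end{verbatim}

The value $D^2\approx 0.286$ indeed lies inside the bounds $\big[(\!\log 2)^2/a,\,(\!\log 2)^2/(a-b)\big]\approx[0.240,\,0.480]$ of Corollary 2.1.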

Proof of Corollary 2.2. As proved in Lemma 5.1, A(s) is strictly decreasing and concave on [0, 1]. It follows that A(s) is sandwiched between its secant line through the endpoints, denoted by $y_1(s)$, and a tangent line of the same slope, denoted by $y_2(s)$, which are defined as follows on [0, 1]:

(5.5) \begin{equation}y_1(s)=-m_b\cdot s+b_0,\end{equation}
(5.6) \begin{equation}y_2(s)= -m_b\cdot s+A(s_0)+m_bs_0,\end{equation}

where $s_0\in(0,1)$ is a point at which $A'(s_0)=-m_b$; such a point exists by the mean value theorem, since $A(1)-A(0)=-m_b$. To be more precise, we have

(5.7) \begin{equation}y_1(s)\le A(s)\le y_2(s)\mbox{ for all } s\in[0,1].\end{equation}

Using (5.3) and (5.7), we easily get that

\begin{equation*}\sup_{s\in(0,1)}(\!-\!\log s)\int_0^s \frac{{\textrm{d}} r}{(1-r)y_2(r)}\leq D^2\leq \sup_{s\in(0,1)}(\!-\!\log s)\int_0^s \frac{{\textrm{d}} r}{(1-r)y_1(r)}.\end{equation*}

Substituting (5.5) and (5.6) into the above yields that

(5.8) \begin{equation}\begin{split}&\sup_{s\in(0,1)}(\!-\!\log s)\int_0^s \frac{{\textrm{d}} r}{(1-r)(m_bs_0+A(s_0)-m_br)}\\[5pt] &\leq D^2\\[5pt] &\leq \sup_{s\in(0,1)}(\!-\!\log s)\int_0^s \frac{{\textrm{d}} r}{(1-r)(b_0-m_br)}.\end{split}\end{equation}

Now, both the rightmost and the leftmost terms of (5.8) are of the same form as $D^2$ for the quadratic birth–death process discussed in Example 4.1; thus, by the conclusions obtained in Example 4.1 and a little algebra, we immediately obtain

\begin{align*} \frac{\kappa_2(\!\log\!(1+\sqrt{\kappa_2}))^2}{m_b-\kappa_2 }\le D^2\le \frac{(\!\log\!(1+\sqrt{\kappa_1}))^2}{b_0-m_b}. \end{align*}

Then (2.10) immediately follows, which completes the proof of Corollary 2.2.
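As an aside, the point $s_0$ and the sandwich (5.7) are easy to check numerically; the following sketch (ours, with the same illustrative rates as before) finds $s_0$ by bisection, using the fact that $A'$ is decreasing:

\begin{verbatim}
import numpy as np

# Locate s0 with A'(s0) = -m_b and check the sandwich (5.7);
# illustrative rates of our own choosing, as in the earlier sketches.
b = np.array([2.0, -2.7, 0.4, 0.2, 0.1])
a = np.cumsum(b)
b0 = b[0]
m_b = sum((j - 1) * bj for j, bj in enumerate(b) if j >= 2)

A = np.poly1d(a[::-1])
dA = np.polyder(A)
lo, hi = 0.0, 1.0
for _ in range(60):                      # bisection: A' is decreasing, root unique
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if dA(mid) > -m_b else (lo, mid)
s0 = 0.5 * (lo + hi)

s = np.linspace(0.0, 1.0, 1001)
y1 = -m_b * s + b0                       # secant line (5.5)
y2 = -m_b * s + A(s0) + m_b * s0         # tangent line (5.6)
assert np.all(y1 <= A(s) + 1e-12) and np.all(A(s) <= y2 + 1e-12)
print(f"s0 = {s0:.4f}; the sandwich (5.7) holds on [0, 1]")
\end{verbatim}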

Using an idea similar to that used in proving Corollary 2.2, we may prove Corollary 2.3 as follows.

Proof of Corollary 2.3. Recall that

\begin{align*} D^2\equiv\sup_{s\in(0,1)}(\!-\!\log s)\int_0^s \frac{{\textrm{d}} r}{(1-r)A(r)},\end{align*}

where, by the mean value theorem, for each $s\in(0,1)$ there exists $\xi=\xi(s)\in(0,s)$ such that

\begin{equation*}A(s)=A(0)+A'(\xi)s.\end{equation*}

As proved in Lemma 5.1, $A'(s)$ is decreasing on [0, 1] (since $A''(s)\le 0$), and thus $A'(1)\leq A'(\xi)\leq A'(0)$ . Considering that $A'(0)=-\sum\limits_{j=2}^{\infty}b_j$ and $A'(1)=-\frac12 B''(1)$ , and noting that $A(0)=b_0$ , we easily get that

\begin{equation*}b_0-\frac12B''(1)\cdot s\leq A(s)\leq b_0-\left(\sum_{j=2}^{\infty}b_j\right)\cdot s.\end{equation*}

Then we claim that

(5.9) \begin{align} \frac{(\!\log\!(1+\sqrt{\kappa'_{\!\!2}}))^2}{b_0-m_b}\le D^2\le \frac{(\!\log\!(1+\sqrt{\kappa'_{\!\!1}}))^2}{b_0-m_b}. \end{align}

In fact, since $B'(1)<0$ , we get that

\begin{align*} \frac{-A'(0)}{A(0)}=\frac{\sum\limits_{j=2}^{\infty}b_j}{b_0}<1.\end{align*}

Now, using a method similar to that used in proving Corollary 2.2, together with the conclusions obtained in Example 4.1, we easily obtain the right-hand side of (5.9). Moreover, under the condition $B''(1)<2b_0$ , we may use the conclusions obtained in Example 4.1 once again to show that the left-hand side of (5.9) is also true. This completes the proof of Corollary 2.3.
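For the illustrative rates used earlier (for which $B''(1)=3.2<2b_0=4$), the two linear bounds in the proof above can be checked numerically as follows (our sketch, not part of the proof):

\begin{verbatim}
import numpy as np

# The two linear bounds on A(s) from the proof of Corollary 2.3,
# checked for our illustrative rates.
b = np.array([2.0, -2.7, 0.4, 0.2, 0.1])
a = np.cumsum(b)
b0 = b[0]
sum_b = b[2:].sum()                                      # sum_{j>=2} b_j = -A'(0)
Bpp1 = sum(j * (j - 1) * bj for j, bj in enumerate(b))   # B''(1)
assert Bpp1 < 2 * b0                                     # condition in Corollary 2.3

s = np.linspace(0.0, 1.0, 1001)
A = np.polyval(a[::-1], s)
assert np.all(b0 - 0.5 * Bpp1 * s <= A + 1e-12)   # lower line, slope A'(1)
assert np.all(A <= b0 - sum_b * s + 1e-12)        # upper line, slope A'(0)
print("linear sandwich of Corollary 2.3 holds; B''(1) =", Bpp1)
\end{verbatim}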

The basic idea in proving Corollaries 2.2 and 2.3 was to sandwich the function A(s) between two straight lines and then use the conclusions obtained in Example 4.1. We now prove Corollary 2.4 by sandwiching the function A(s) between two parabolas and then using the conclusions obtained in Example 4.2.

Proof of Corollary 2.4. We have

\begin{align*} D^2\equiv\sup_{s\in(0,1)}(\!-\!\log s)\int_0^s \frac{{\textrm{d}} r}{(1-r)A(r)},\end{align*}

where, by Taylor's theorem with the Lagrange form of the remainder, for each $s\in(0,1)$ there exists $\xi=\xi(s)\in(0,s)$ such that

\begin{equation*}A(s)=a_0+a_1s+\frac{A''(\xi)}{2}s^2,\end{equation*}

with $a_0=b_0$ and $a_1=b_0+b_1<0$. By Lemma 5.1, $A''(s)$ is decreasing on (0, 1) (since $A'''(s)\le 0$), and thus

\begin{equation*}A''(1)\leq A''(\xi)\leq A''(0);\end{equation*}

hence for all $s\in(0,1)$ we have

\begin{equation*}a_0+a_1s+\frac12A''(1)s^2\leq A(s)\leq a_0+a_1s+\frac12A''(0)s^2.\end{equation*}

It is easy to see that

\begin{align*} A'(s)=\sum\limits_{n=1}^{\infty}na_ns^{n-1}\end{align*}

and

\begin{align*} A''(s)=\sum\limits_{n=2}^{\infty}n(n-1)a_ns^{n-2},\end{align*}

and thus

\begin{equation*}A''(0)=2a_2=2(b_0+b_1+b_2)\leq 0,\end{equation*}
\begin{equation*}A''(1)=\sum_{n=2}^{\infty}n(n-1)a_n=\sum_{n=2}^{\infty}n(n-1)\sum_{k=0}^{n}b_k\le 0.\end{equation*}

Now, if we further assume that $A''(1)>-\infty$ , then

\begin{equation*}-\infty<\sum_{n=2}^{\infty}n(n-1)\sum_{k=0}^nb_k\leq A''(\xi)\leq 2(b_0+b_1+b_2) \le 0.\end{equation*}

For notational convenience, write

\begin{equation*}E(s)=a_0+a_1s+\frac12A''(0)s^2,\end{equation*}
\begin{equation*}F(s)=a_0+a_1s+\frac12A''(1)s^2.\end{equation*}

Then for every $s\in(0,1)$ ,

\begin{align*} (1-s)F(s)\leq B(s)\leq (1-s)E(s),\end{align*}

and thus, since a pointwise larger denominator yields a smaller integral,

\begin{equation*}\sup_{s\in(0,1)}(\!-\!\log s)\int_0^s \frac{{\textrm{d}} r}{(1-r)E(r)}\leq D^2\leq \sup_{s\in(0,1)}(\!-\!\log s)\int_0^s \frac{{\textrm{d}} r}{(1-r)F(r)}.\end{equation*}

Now let

\begin{equation*}\phi_1(s)=(\!-\!\log s)\int_0^s \frac{{\textrm{d}} r}{(1-r)F(r)} \qquad\mbox{and}\qquad \phi_2(s)=(\!-\!\log s)\int_0^s \frac{{\textrm{d}} r}{(1-r)E(r)}.\end{equation*}

By our result regarding Example 4.2 and the preliminary remark made above, both $\phi_1$ and $\phi_2$ are concave on (0, 1). It follows that there exist $s_1\in(0,1)$ and $s_2\in(0,1)$ such that

\begin{equation*}\sup_{s\in(0,1)}\phi_1(s)=\phi_1(s_1) \qquad\mbox{and}\qquad \sup_{s\in(0,1)}\phi_2(s)=\phi_2(s_2).\end{equation*}

Then we get

\begin{equation*}\phi_2(s_2)\le D^2\le\phi_1(s_1),\end{equation*}

and consequently we obtain (2.12), which completes the proof of Corollary 2.4.
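As a final numerical check (ours), the parabolic sandwich and the resulting two-sided bound on $D^2$ can be verified for the same illustrative rates, for which $A''(1)$ is finite:

\begin{verbatim}
import numpy as np

# Parabolic sandwich F <= A <= E and the resulting two-sided bound on D^2
# (proof of Corollary 2.4), for our illustrative rates.
b = np.array([2.0, -2.7, 0.4, 0.2, 0.1])
a = np.cumsum(b)
App0 = 2 * a[2]                                          # A''(0) = 2 a_2
App1 = sum(n * (n - 1) * an for n, an in enumerate(a))   # A''(1)

r = np.linspace(1e-9, 1 - 1e-6, 200_000)
dr = r[1] - r[0]
A = np.polyval(a[::-1], r)
E = a[0] + a[1] * r + 0.5 * App0 * r**2
F = a[0] + a[1] * r + 0.5 * App1 * r**2
assert np.all(F <= A + 1e-12) and np.all(A <= E + 1e-12)

def sup_phi(denom):         # sup over s of (-log s) int_0^s dr / ((1-r) denom(r))
    return np.max(-np.log(r) * np.cumsum(1.0 / ((1 - r) * denom)) * dr)

print(f"{sup_phi(E):.4f} <= D^2 = {sup_phi(A):.4f} <= {sup_phi(F):.4f}")
assert sup_phi(E) <= sup_phi(A) <= sup_phi(F)
\end{verbatim}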

Acknowledgements

We thank the anonymous referees for their helpful comments and suggestions, which led to the improvement of the paper.

Funding information

The work of Y. Chen is supported by the National Natural Science Foundation of China (No. 11961033), and the work of W.-J. Gao is supported by the National Natural Science Foundation of China (No. 11701265).

Competing interests

There are no competing interests to declare which arose during the preparation or publication process of this article.
