1. Introduction
Random walks are widely used in applied mathematics, physics, and computer science, and much attention is devoted to the analysis of performance measures of random walks on graphs. Examples of such performance measures are the cover time, which is the time needed to visit at least once all the nodes of the graph, and the hitting time of a particular subset of nodes, which is the time needed to reach one node of the subset. Many applications of random walks on graphs are presented in [Reference Aldous2], for instance the analysis of electrical networks, which is detailed in [Reference Doyle and Snell10]. Very interesting surveys on random walks on graphs are [Reference Aldous and Fill3] and [Reference Lovasz16], while [Reference Motowani and Raghavan17] additionally contains many applications in both computer science and networks. In telecommunication systems, random walks are used to explore the network or to collect information. For instance, in the context of graph theory for social networks, a notion of centrality has been used to assess the relative importance of nodes in a given network topology, and a novel form of centrality based on the second-order moments of return times in random walks was proposed in [Reference Kermarrec, Le Merrer, Sericola and Trédan15].
It has been shown in [Reference Feige12] and [Reference Feige11] that the expected cover time in a random walk with n states is at least
$(1 - o(1))n \ln(n)$ and at most
$(4/27+o(1))n^3$. In the case of a regular graph, for which each vertex has the same number of neighbors, it is at most
$2n^2$, see [Reference Feige11]. In [Reference Banderier and Dobrow5], the authors consider an extension of the cover time called the marking time, defined as follows. When the random walk reaches node i, a coin is flipped and with probability pi the node i is marked. The marking time is then the time needed by the walk to mark all its nodes. The authors give general formulas for the expected marking time of a random walk and provide asymptotics when the pi’s are small. In [Reference Ikeda, Kubo and Yamashita14], the authors first prove that, for any transition probability matrix, both the expected hitting time and the expected cover time of the path graph are
$\Omega(n^2)$. They also prove that for some particular Markov chains called degree-biased random walks, the expected hitting time is
$O(n^2)$ and the expected cover time is
$O(n^2\ln(n))$. The cover time of a discrete-time homogeneous Markov chain is analyzed in [Reference Sericola19], where both its distribution and its moments are considered, with applications to particular graphs, namely the generalized cycle graph, the complete graph, and the generalized path graph.
In this paper, we consider hitting times on the lollipop graph which consists of a complete graph (a clique) on m vertices and a path graph on n − m vertices. One end of the path graph is connected to one vertex, denoted by m, of the clique, and its other end is the target vertex n for the hitting times. Such a graph has been denoted by
$L_m^n$ for
$n,m \in \mathbb{N}$, with
$2 \leq m \leq n$. It is represented in Figure 1 for m = 8 and n = 14. Observe that when m = n we obtain the well-known complete graph Kn.

Figure 1. The lollipop graph
$L_8^{14}$.
The expected hitting time of the path end node n, when starting from a node
$i \in \{1,\ldots,m-1\}$, has been obtained in [Reference Brightwell and Winkler9] where the authors prove that the maximum expected hitting time among all the n-vertex graphs is obtained for the lollipop graph
$L_m^n$ when
$m = \lceil 2(n-1)/3 \rceil$, with value
$(4/27+o(1))n^3$. This major result shows that a detailed analysis of the lollipop graph is relevant.
This is the goal of our paper. The novelty of the proof techniques lies in the way we obtain detailed recurrence formulas for the calculation of all moments of the hitting times. These formulas then make it possible to exhibit the asymptotic behavior of all the moments; this step is by far the most delicate. From these asymptotic moments, we obtain the asymptotic distribution of the hitting times using a standard argument of complex analysis.
The paper is organized as follows. In Section 2, we give general formulas for the distribution and for all the moments of the hitting times in a general Markov chain. We use these results in Section 3 to derive the mean and the variance of the hitting times on the lollipop graph. Our proof for obtaining the expected hitting times on the lollipop graph differs from the one of [Reference Brightwell and Winkler9] but we need it to obtain higher order moments. We also obtain in Section 3 the moment generating function of the hitting times and we propose a simple method to compute the convergence domain of this function which consists in evaluating the eigenvalues and in particular the spectral radius of the sub-stochastic matrix arising in the distribution expression of the hitting time. We propose in Section 4 a general recursive way to obtain all the moments of the hitting time Tn of node n and we use this recursion to obtain the asymptotic behavior of the distribution of Tn when n tends to infinity for various values of m. As a particular case we obtain that, when
$m = \lceil 2(n-1)/3 \rceil$, the asymptotic distribution of
$27T_n/(4n^3)$ is exponential with rate 1. Section 5 concludes the paper.
2. General results
Consider a homogeneous discrete-time Markov chain
$X=\{X_n,\; n \in \mathbb{N}\}$ over a finite state space S. We denote by P the transition probability matrix of X. We suppose that X is irreducible and, for every
$j \in S$, we define the hitting time Tj of state j as the first instant at which state j is reached by X, that is
$T_j = \inf\{k \in \mathbb{N} \mid X_k = j\}.$
The irreducibility of X implies that, for every
$j \in S$, Tj is finite almost surely. Let A be a non-empty subset of states of S. We denote by TA the hitting time of subset A, defined by the time needed to reach A, that is
$T_A = \inf\{k \in \mathbb{N} \mid X_k \in A\}.$
The complementary subset of A in S,
$S \setminus A$, is denoted simply by Ac. The partition
$\{A,A^c\}$ of S induces a decomposition of P into four submatrices as
$P = \begin{pmatrix} P_A & P_{A,A^c} \\ P_{A^c,A} & P_{A^c} \end{pmatrix},$
where matrix PA (resp.
$P_{A^c}$) contains the transition probabilities between states of A (resp. Ac) and matrix
$P_{A,A^c}$ (resp.
$P_{A^c,A}$) contains the transition probabilities from states of A (resp. Ac) to states of Ac (resp. A). We also denote by
$\mathbb{1}$ the column vector with all its entries equal to 1, its dimension being specified by the context. With these notations, we have, for all
$i \in A^c$ and
$k \geq 1$, see for instance [Reference Sericola18],
$\mathbb{P}_i(T_A = k) = \left(P_{A^c}^{k-1}\, P_{A^c,A}\, \mathbb{1}\right)_i. \qquad (1)$
The r th moment of the hitting time of subset A starting from state
$i \in A^c$ is then given by
$\mathbb{E}_i\left(T_A^r\right) = \sum_{k=1}^{\infty} k^r\, \mathbb{P}_i(T_A = k) = \left(\sum_{k=1}^{\infty} k^r\, P_{A^c}^{k-1}\, P_{A^c,A}\, \mathbb{1}\right)_i. \qquad (2)$
The following theorem gives a finite expression of this r-th moment which involves only the successive powers of matrix
$\left(I - P_{A^c}\right)^{-1}$.
Theorem 2.1. For every
$r \geq 1$, we have
$\mathbb{E}_i\left(T_A^r\right) = \left(\sum_{\ell=1}^{r} c_{r,\ell}\left(I - P_{A^c}\right)^{-\ell}\mathbb{1}\right)_i,$
where, for every
$\ell = 1,\ldots,r$,
$c_{r,\ell} = \ell\left(c_{r-1,\ell-1} - c_{r-1,\ell}\right), \quad \text{with } c_{1,1} = 1 \text{ and the convention } c_{r,0} = c_{r,r+1} = 0.$
Proof. For
$r \geq 0$ and
$x \in (0,1]$, we introduce the notation
${\displaystyle G_r(x) = \sum_{k=1}^\infty k^r x^{k-1}P_{A^c}^{k-1}}$. We prove by induction that, for
$r \geq 1$, we have
$G_r(x) = \sum_{\ell=1}^{r} c_{r,\ell}\left(I - xP_{A^c}\right)^{-(\ell+1)}.$
The result is true for r = 1. Indeed, for r = 1, we have
$G_1(x) = \sum_{k=1}^\infty k\, x^{k-1}P_{A^c}^{k-1} = \frac{d}{dx}\left(\sum_{k=1}^\infty x^{k}P_{A^c}^{k-1}\right),$
which gives
$G_1(x) = \frac{d}{dx}\left(x\left(I - xP_{A^c}\right)^{-1}\right).$
By taking the derivative with respect to x, we get
$G_1(x) = \left(I - xP_{A^c}\right)^{-1} + xP_{A^c}\left(I - xP_{A^c}\right)^{-2} = \left(I - xP_{A^c}\right)^{-2}.$
On the other hand, the equality is satisfied since
$c_{1,1} = 1$.
Suppose that the relation is true for index r − 1, that is, suppose that
$G_{r-1}(x) = \sum_{\ell=1}^{r-1} c_{r-1,\ell}\left(I - xP_{A^c}\right)^{-(\ell+1)}.$
We then have
$G_r(x) = \sum_{k=1}^\infty k^r x^{k-1}P_{A^c}^{k-1} = \frac{d}{dx}\left(x\,G_{r-1}(x)\right).$
By taking the derivative with respect to x, we get
$G_r(x) = G_{r-1}(x) + x\,G'_{r-1}(x).$
Using the induction hypothesis, we obtain
$G_r(x) = \sum_{\ell=1}^{r-1} c_{r-1,\ell}\left[\left(I - xP_{A^c}\right)^{-(\ell+1)} + (\ell+1)\,xP_{A^c}\left(I - xP_{A^c}\right)^{-(\ell+2)}\right].$
Observing that
$xP_{A^c}(\ell+1)c_{r-1,\ell}
= -(\ell+1)c_{r-1,\ell}(I-xP_{A^c}) + (\ell+1)c_{r-1,\ell}I$, we get
$G_r(x) = \sum_{\ell=1}^{r-1} c_{r-1,\ell}\left[-\ell\left(I - xP_{A^c}\right)^{-(\ell+1)} + (\ell+1)\left(I - xP_{A^c}\right)^{-(\ell+2)}\right].$
Note that
$-c_{r-1,1}=(-1)^{r+1}=c_{r,1}$. Moreover, since it is well-known that
$c_{r-1,r-1}=(r-1)!$, we have
$rc_{r-1,r-1}=r!=c_{r,r}$. Finally, for
$\ell=2,\ldots,r-1$, we obtain
$c_{r,\ell} = \ell\left(c_{r-1,\ell-1} - c_{r-1,\ell}\right).$
We thus recover
$G_r(x) = \sum_{\ell=1}^{r} c_{r,\ell}\left(I - xP_{A^c}\right)^{-(\ell+1)},$
which completes the proof by induction. Using relation (2), we obtain
$\mathbb{E}_i\left(T_A^r\right) = \left(G_r(1)\,P_{A^c,A}\,\mathbb{1}\right)_i = \left(\sum_{\ell=1}^{r} c_{r,\ell}\left(I - P_{A^c}\right)^{-(\ell+1)} P_{A^c,A}\,\mathbb{1}\right)_i.$
Matrix P being stochastic we have
$\left(I - P_{A^c}\right)^{-1}P_{A^c,\,A} \mathbb{1} = \mathbb{1}$, which completes the proof.
As a byproduct of Theorem 2.1, the reader may note that taking r = 1 in the theorem we get the expected hitting time of subset A, that is
$\mathbb{E}_i(T_A) = \left(\left(I - P_{A^c}\right)^{-1}\mathbb{1}\right)_i, \qquad (3)$
while taking r = 2 we get the second-order moment of TA, that is
$\mathbb{E}_i\left(T_A^2\right) = \left(2\left(I - P_{A^c}\right)^{-2}\mathbb{1} - \left(I - P_{A^c}\right)^{-1}\mathbb{1}\right)_i. \qquad (4)$
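To make these two relations concrete, here is a minimal numerical sketch (our own illustration, not taken from the paper): it checks (3) and (4) against the truncated defining series (2) on a small irreducible chain, the random walk on a 4-cycle with $A$ a single state.

```python
import numpy as np

# Random walk on the 4-cycle; A = {state 3} (0-based), A^c = {0,1,2}.
# The chain and subset are illustrative choices only.
P = np.zeros((4, 4))
for i in range(4):
    P[i, (i - 1) % 4] = P[i, (i + 1) % 4] = 0.5

Ac = [0, 1, 2]
Q = P[np.ix_(Ac, Ac)]               # sub-stochastic block P_{A^c}
M = np.linalg.inv(np.eye(3) - Q)    # (I - P_{A^c})^{-1}
one = np.ones(3)

mean = M @ one                      # relation (3): E_i(T_A)
second = 2 * M @ M @ one - M @ one  # relation (4): E_i(T_A^2)

# Cross-check against the series E_i(T_A^r) = sum_k k^r P_i(T_A = k),
# truncated at a large horizon.
B = P[np.ix_(Ac, [3])].sum(axis=1)  # B = P_{A^c,A} 1
pmf = [np.linalg.matrix_power(Q, k - 1) @ B for k in range(1, 2000)]
mean_series = sum(k * p for k, p in enumerate(pmf, start=1))
second_series = sum(k**2 * p for k, p in enumerate(pmf, start=1))

print(np.allclose(mean, mean_series), np.allclose(second, second_series))
```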
In this spirit, the following section is devoted to the analysis of all moments of the hitting time on the lollipop graph.
3. The lollipop graph
Recall that for
$n,m \in \mathbb{N}$, with
$2 \leq m \lt n$, the n-vertex lollipop graph,
$L_m^n$, consists of a clique on m vertices and a path graph on n − m vertices. The m vertices of the clique are numbered
$1,\ldots,m$ and the n − m vertices of the path are numbered
$m+1, \ldots,n$, vertex m being the one joining the clique and the path as shown in Figure 1.
We consider a random walk on the vertices of a lollipop graph
$L_m^n$, that is a Markov chain
$X = \{X_k, \; k \in \mathbb{N}\}$, for which the transition probability from a state i to each neighbor j of i occurs with equal probability. More precisely, the state space of X is the set
$S = \{1,\ldots,m-1,m,m+1,\ldots,n\}$ with transition probability matrix P whose non-zero entries are given by
$P_{i,j} = \left\{\begin{array}{ll} \dfrac{1}{m-1} & \text{for } i \in \{1,\ldots,m-1\} \text{ and } j \in \{1,\ldots,m\},\ j \neq i, \\[2mm] \dfrac{1}{m} & \text{for } i = m \text{ and } j \in \{1,\ldots,m-1\} \cup \{m+1\}, \\[2mm] \dfrac{1}{2} & \text{for } i \in \{m+1,\ldots,n-1\} \text{ and } j \in \{i-1,i+1\}, \\[2mm] 1 & \text{for } i = n \text{ and } j = n-1. \end{array}\right.$
For example, this matrix is represented below for m = 5 and n = 9
$P = \begin{pmatrix} 0 & 1/4 & 1/4 & 1/4 & 1/4 & 0 & 0 & 0 & 0 \\ 1/4 & 0 & 1/4 & 1/4 & 1/4 & 0 & 0 & 0 & 0 \\ 1/4 & 1/4 & 0 & 1/4 & 1/4 & 0 & 0 & 0 & 0 \\ 1/4 & 1/4 & 1/4 & 0 & 1/4 & 0 & 0 & 0 & 0 \\ 1/5 & 1/5 & 1/5 & 1/5 & 0 & 1/5 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1/2 & 0 & 1/2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1/2 & 0 & 1/2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1/2 & 0 & 1/2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix}. \qquad (5)$
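For readers who want to experiment, the following small sketch (ours, not part of the paper) builds this transition matrix for any valid m and n:

```python
import numpy as np

def lollipop_P(m: int, n: int) -> np.ndarray:
    """Transition matrix of the random walk on L_m^n (states 1..n -> indices 0..n-1)."""
    A = np.zeros((n, n))
    A[:m, :m] = 1.0 - np.eye(m)        # clique on vertices 1..m: all pairs adjacent
    for i in range(m - 1, n - 1):      # path m - (m+1) - ... - n
        A[i, i + 1] = A[i + 1, i] = 1.0
    return A / A.sum(axis=1, keepdims=True)  # uniform move to a neighbor

print(np.round(lollipop_P(5, 9), 3))   # the matrix (5) displayed above
```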
The hitting time on the lollipop graph is the time needed to reach state n when the initial state is state
$i = 1,\ldots,n-1$. According to the notation of the previous section, it is given by the random variable Tn. As usual, we use the convention that a sum of terms
$\sum_a^b (...)$ is set to 0 when a > b. Concerning the rth moment of Tn, we will write
$\mathbb{E}_i(T_{n}^r)$ for
$\mathbb{E}(T_{n}^r \mid X_0=i)$,
$\mathbb{V}_i(T_{n})$ for the variance
$\mathbb{V}(T_{n} \mid X_0=i)$ and for any event E,
$\mathbb{P}_i(E)$ for
$\mathbb{P}(E \mid X_0=i)$.
3.1. Mean hitting time
Using relation (3) with
$A=\{n\}$, we obtain for
$i = 1,\ldots,n-1$,
$\mathbb{E}_i(T_{n}) = \left(\left(I - P_{\{1,\ldots,n-1\}}\right)^{-1}\mathbb{1}\right)_i.$
Matrix
$P_{\{1,\ldots,n-1\}}$ is the submatrix obtained from P by deleting row n and column n. As suggested by the example above, we decompose matrix
$P_{\{1,\ldots,\,n-1\}}$ through the partition
$\{1,\ldots,m-1\}$,
$\{m\}$,
$\{m+1,\ldots,n-1\}$ of the set of states
$\{1,\ldots,n-1\}$ as follows
$P_{\{1,\ldots,n-1\}} = \begin{pmatrix} Q & \dfrac{1}{m-1}\,\mathbb{1} & 0 \\[1mm] \dfrac{1}{m}\,\mathbb{1}^\top & 0 & \dfrac{1}{m}\,c^\top \\[1mm] 0 & \dfrac{1}{2}\,c & R \end{pmatrix}, \qquad (6)$
where
$\top$ denotes the transpose operator, Q is the
$(m-1)\times(m-1)$ matrix given, for i ≠ j, by
$Q_{i,j}=1/(m-1)$, c is the unit column vector defined by
$c=(1,0,\ldots,0)^\top$ with dimension
$n-m-1$ and R is the
$(n-m-1)\times(n-m-1)$ matrix whose non-zero entries are given by
$R_{i,\,i-1}=R_{i,\,i+1}=1/2$ for
$i = m+1,\ldots,n-1$.
We then have the following explicit expression of the mean hitting time of state n when starting from state i, with i ≠ n. Part of this result was already obtained in [Reference Brightwell and Winkler9] for
$i \in \{1,\ldots,m-1\}$ using another proof. The proof used here is relevant because it is the starting point for the derivation of higher-order moments, for which we additionally need the mean hitting time of state n when starting from state
$i \in \{m,\ldots,n-1\}$.
Theorem 3.1. For every
$n,m \in \mathbb{N}$, with
$2 \leq m \leq n$, we have
$\mathbb{E}_i(T_{n}) = (n-m)\left(m^2-2m+n\right) + m - 1, \quad \text{for } i = 1,\ldots,m-1,$ and $\mathbb{E}_i(T_{n}) = (n-m)\left(m^2-2m+n\right) - (i-m)\left(m^2-2m+i\right), \quad \text{for } i = m,\ldots,n.$
Proof. We introduce the column vector
$V = (v_1, \ldots,v_{n-1})^\top$ defined by
$V = (I - P_{\{1,\ldots,n-1\}})^{-1}\mathbb{1}$. We decompose vector V following the partition used in (6), by writing

We then have
$\mathbb{E}_i(T_{n})=v_i$ with
$V = \mathbb{1} + P_{\{1,\ldots,n-1\}}\,V.$
Observing, by symmetry, that
$\mathbb{E}_i(T_{n})$ has the same value for every
$i \in \{1,\ldots,m-1\}$, we have
$v_1=\cdots=v_{m-1}$. We denote this common value by v. We then have

which leads to
$\left\{\begin{array}{l} v = 1 + \dfrac{(m-2)\,v + v_m}{m-1}, \\[2mm] v_m = 1 + \dfrac{(m-1)\,v + v_{m+1}}{m}, \\[2mm] v_i = 1 + \dfrac{v_{i-1} + v_{i+1}}{2}, \quad i = m+1,\ldots,n-1, \end{array}\right. \qquad (7)$
where we have defined
$v_n=\mathbb{E}_n(T_{n})=0$. Using the first relation of (7) in the second one of (7), we obtain
$v_{m+1} = v - m^2$. Defining
$z_i = v_i - v_{i+1}$ for
$i=m,\ldots,n-1$, the third relation of (7) gives
$z_i = z_{i-1} + 2$, for
$i=m+1, \ldots,n-1$ with
$z_m = v_m - v_{m+1} = v - m+1 - (v - m^2) = m^2-m+1$. It follows that
$z_{m+i} = z_m + 2i = m^2-m+1 +2i$ and thus we obtain by telescoping
$v_m = v_m - v_n = \sum_{i=0}^{n-m-1} z_{m+i} = (n-m)(m^2-m+1) + (n-m-1)(n-m) = (n-m)\left(m^2-2m+n\right).$
We then get the value of v using
$v = v_m + m -1$. In the same way, we have
$v_{m+i} = v_m - \sum_{j=0}^{i-1} z_{m+j},$
which gives for
$i=0,\ldots,n-m$,
$v_{m+i} = (n-m)\left(m^2-2m+n\right) - i\left(m^2-m+i\right),$
that is
$v_{i} = (n-m)(m^2-2m+n) - (i-m)(m^2-2m+i)$, for
$i=m,\ldots,n$.
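These closed forms are easy to confirm numerically; the sketch below (ours) solves the linear system defining V directly and compares with Theorem 3.1.

```python
import numpy as np

def lollipop_P(m, n):
    A = np.zeros((n, n))
    A[:m, :m] = 1.0 - np.eye(m)
    for i in range(m - 1, n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return A / A.sum(axis=1, keepdims=True)

m, n = 8, 14
P = lollipop_P(m, n)
# V = (I - P_{1..n-1})^{-1} 1, i.e. solve (I - P_sub) V = 1.
V = np.linalg.solve(np.eye(n - 1) - P[:-1, :-1], np.ones(n - 1))

v = (n - m) * (m**2 - 2 * m + n) + m - 1          # i = 1,...,m-1
print(np.allclose(V[: m - 1], v))
for i in range(m, n):                              # i = m,...,n-1
    vi = (n - m) * (m**2 - 2 * m + n) - (i - m) * (m**2 - 2 * m + i)
    print(i, np.isclose(V[i - 1], vi))
```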
3.2. Variance of the hitting time
To evaluate the variance of the hitting time Tn, we first calculate the second-order moment
$\mathbb{E}_i(T_{n}^2)$. Using relation (4) with
$A=\{n\}$, we obtain for
$i = 1,\ldots,n-1$,
$\mathbb{E}_i\left(T_{n}^2\right) = \left(2\left(I - P_{\{1,\ldots,n-1\}}\right)^{-2}\mathbb{1} - \left(I - P_{\{1,\ldots,n-1\}}\right)^{-1}\mathbb{1}\right)_i.$
Theorem 3.2. For every
$n,m \in \mathbb{N}$, with
$2 \leq m \leq n$, we have, for
$i = 1,\ldots,m-1$

and for
$i = m,\ldots,n$

where the values of v and
$v_\ell$, which are given by Theorem 3.1, are
$v = (n-m)(m^2-2m+n)+m-1$ and
$v_\ell = (n-m)(m^2-2m+n) - (\ell-m)(m^2-2m+\ell)$, for
$\ell = m,\ldots,n-1$.
Proof. The proof is close to the one of Theorem 3.1. We first introduce the column vectors
$V = (v_1, \ldots,v_{n-1})^\top$ and
$W = (w_1, \ldots,w_{n-1})^\top$ defined by
$V = \left(I - P_{\{1,\ldots,n-1\}}\right)^{-1}\mathbb{1} \quad\text{and}\quad W = \left(I - P_{\{1,\ldots,n-1\}}\right)^{-2}\mathbb{1}.$
Vector W is thus given by
$W = (I - P_{\{1,\ldots,n-1\}})^{-1}V$, where vector V has been obtained in Theorem 3.1. As we did for vector V, we decompose vector W following the partition used in (6), by writing

We first evaluate the vector W which is given by
$W = V + P_{\{1,\ldots,n-1\}}\,W.$
Observing, by symmetry, that
$\mathbb{E}_i(T_{n}^2)$ has the same value for every
$i = 1,\ldots,m-1$, we have
$w_1=\cdots=w_{m-1}$. We denote this common value by w. We then have

which leads to
$\left\{\begin{array}{l} w = v + \dfrac{(m-2)\,w + w_m}{m-1}, \\[2mm] w_m = v_m + \dfrac{(m-1)\,w + w_{m+1}}{m}, \\[2mm] w_i = v_i + \dfrac{w_{i-1} + w_{i+1}}{2}, \quad i = m+1,\ldots,n-1, \end{array}\right. \qquad (8)$
where we have defined
$w_n=\mathbb{E}_n(T_{n}^2)=0$. The first relation of (8) gives
$w_m = w - (m-1)v$. Using this result in the second relation of (8) and the fact that
$v_m = v - (m - 1)$, we obtain
$w_{m+1} = w - m^2v + m(m-1)$.
Defining
$z_i = w_i - w_{i+1}$ for
$i=m,\ldots,n-1$, the third relation of (8) gives
$z_i = z_{i-1} + 2v_i$, for
$i=m+1, \ldots,n-1$ with
$z_m = w_m - w_{m+1} = (m^2-m+1)v - m(m-1)$. It follows that
$z_{m+i} = z_m + 2\sum_{j=1}^{i} v_{m+j},$
and thus

We then get the value of w using
$w = w_m + (m -1)v$. In the same way, we have

which gives for
$i=1,\ldots,n-m$,

that is, for
$i=m+1,\ldots,n$,

Since
$\mathbb{E}_i(T_{n}^2) = 2 w_i - v_i$, we obtain for
$i=1,\ldots,m-1$,
$\mathbb{E}_i(T_{n}^2) = 2w-v = 2w_m + (2m-3)v$, that is

and, for
$i=m,\ldots,n-1$,

which completes the proof.
The variance of Tn, when the initial state is state i, is then given by
$\mathbb{V}_i(T_{n}) = \mathbb{E}_i(T_{n}^2) - \mathbb{E}_i(T_{n})^2$.
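The following illustrative sketch (ours) computes this variance from relations (3) and (4) via two linear solves, without forming any matrix power explicitly:

```python
import numpy as np

def lollipop_P(m, n):
    A = np.zeros((n, n))
    A[:m, :m] = 1.0 - np.eye(m)
    for i in range(m - 1, n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return A / A.sum(axis=1, keepdims=True)

m, n = 8, 14
Q = lollipop_P(m, n)[:-1, :-1]
I = np.eye(n - 1)
v = np.linalg.solve(I - Q, np.ones(n - 1))   # v_i = E_i(T_n)
w = np.linalg.solve(I - Q, v)                # w_i = ((I - Q)^{-2} 1)_i
var = 2 * w - v - v**2                       # V_i(T_n) = E_i(T_n^2) - E_i(T_n)^2
print(var[:3])   # identical for all clique vertices 1,...,m-1
```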
3.3. Moment generating function of the hitting time
Using relation (1), the probability mass function of Tn is given, for every
$k \geq 1$, by
$\mathbb{P}_i(T_{n} = k) = \left(P_{\{1,\ldots,n-1\}}^{k-1}\, P_{\{1,\ldots,n-1\},\{n\}}\, \mathbb{1}\right)_i.$
The column vector
$B = (b_1,\ldots,b_{n-1})^\top$ defined by
$B = P_{\{1,\ldots,n-1\},\{n\}} \mathbb{1}$ has all its entries equal to 0, except the last one, that is, its
$(n-1)$th entry which is equal to
$1/2$. We then have for every
$k \geq 1$,
$\mathbb{P}_i(T_{n} = k) = \left(P_{\{1,\ldots,n-1\}}^{k-1}\, B\right)_i,$
with
$B = (0,\ldots,0,1/2)^\top$. The matrix
$P_{\{1,\ldots,n-1\}}$ being an irreducible sub-stochastic matrix, its spectral radius ρn satisfies
$0 \lt \rho_n \lt 1$. The moment generating function of the hitting time Tn when the initial state is state i is given by the function
$M_i(t) = \mathbb{E}_i\left(e^{tT_{n}}\right) = \sum_{k=1}^{\infty} e^{tk}\, \mathbb{P}_i(T_{n} = k).$
This series converges for all real numbers t such that
$e^t\rho_n \lt 1$, that is for all t such that
$t \lt -\ln(\rho_n)$. For these values of t, we obtain
$M_i(t) = e^t \left(\left(I - e^t P_{\{1,\ldots,n-1\}}\right)^{-1} B\right)_i.$
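Before turning to the explicit expression, here is a numerical sketch (ours) of this last formula: it evaluates $M_i(t)$ by a linear solve, and checks that $M_i(0)=1$ and that the derivative at 0 recovers the mean hitting time.

```python
import numpy as np

def lollipop_P(m, n):
    A = np.zeros((n, n))
    A[:m, :m] = 1.0 - np.eye(m)
    for i in range(m - 1, n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return A / A.sum(axis=1, keepdims=True)

m, n = 5, 9
P = lollipop_P(m, n)
Q, B = P[:-1, :-1], P[:-1, -1]          # B = (0,...,0,1/2)^T
rho = max(abs(np.linalg.eigvals(Q)))    # spectral radius rho_n

def M(t):
    return np.exp(t) * np.linalg.solve(np.eye(n - 1) - np.exp(t) * Q, B)

print(np.allclose(M(0.0), 1.0))         # M_i(0) = 1
h = 1e-6                                # finite-difference derivative at t = 0
mean_fd = (M(h) - M(-h)) / (2 * h)
mean = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
print(np.allclose(mean_fd, mean, rtol=1e-4))
print("defined for t <", -np.log(rho))
```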
An explicit expression of the moment generating function
$M_i(t)$ is given by the following theorem.
Theorem 3.3. For every
$n,m \in \mathbb{N}$, with
$2 \leq m \leq n$ and for all t such that
$t \lt -\ln(\rho_n)$, we have

where the
$a_{i,\ell}$ are given by

with the convention that
${\displaystyle \binom{b}{a} = 0}$ if
$a \notin \{0,\ldots,b-1\}$ and
${\displaystyle \binom{b}{b} = 1}$ for all b < 0.
Proof. We introduce the column vector
$M(t) = (M_1(t), \ldots,M_{n-1}(t))^\top$ defined by
$M(t) = \sum_{k=1}^{\infty} e^{tk}\, P_{\{1,\ldots,n-1\}}^{k-1}\, B = e^t\left(I - e^t P_{\{1,\ldots,n-1\}}\right)^{-1} B.$
As we did in the proofs of the previous theorems, we decompose vectors M(t) and B following the partition used in (6), by writing


We then have

Observing, by symmetry, that
$\mathbb{E}_i\left(e^{tT_{n}}\right)$ has the same value for every
$i \in \{1,\ldots,m-1\}$, we have
$M_1(t)=\cdots=M_{m-1}(t)$. We denote this common value by
$\mu(t)$. Hence

which leads to
$\left\{\begin{array}{l} \mu(t) = e^t\,\dfrac{(m-2)\,\mu(t) + M_m(t)}{m-1}, \\[2mm] M_m(t) = e^t\,\dfrac{(m-1)\,\mu(t) + M_{m+1}(t)}{m}, \\[2mm] M_i(t) = e^t\,\dfrac{M_{i-1}(t) + M_{i+1}(t)}{2}, \quad i = m+1,\ldots,n-1, \end{array}\right. \qquad (11)$
where we have used the fact that
$M_n(t)=\mathbb{E}_n\left(e^{tT_{n}}\right)=1$. The first relation of (11) gives
$M_m(t) = \left((m-1)e^{-t} - (m-2)\right)\mu(t).$
Using this result in the second relation of (11), we obtain
$M_{m+1}(t) = me^{-t}M_m(t) - (m-1)\mu(t)$, that is
$M_{m+1}(t) = \left(m(m-1)e^{-2t} - m(m-2)e^{-t} - (m-1)\right)\mu(t).$
The third relation of (11) can be written as
$M_{i+1}(t) = 2e^{-t} M_i(t) - M_{i-1}(t), \quad \text{for } i = m+1,\ldots,n-1.$
This second-order recurrence relation together with the expressions of
$M_m(t)$ and
$M_{m+1}(t)$ implies that
$M_{m+i}(t)$ can be written as
$M_{m+i}(t) = \mu(t) \sum_{\ell=0}^{i+1} a_{i,\ell}\, e^{-\ell t}. \qquad (12)$
We then must have for
$i = 2, \ldots, n-m$,
$M_{m+i}(t) = 2e^{-t} M_{m+i-1}(t) - M_{m+i-2}(t).$
It follows that the coefficients
$a_{i,\ell}$ must satisfy, for
$i=2,\ldots,n-m$,
$a_{i,\ell} = 2\,a_{i-1,\ell-1} - a_{i-2,\ell},$
with
$a_{0,0} = -(m-2)$,
$a_{0,1} = m-1$,
$a_{1,0} = -(m-1)$,
$a_{1,1} = -m(m-2)$, and
$a_{1,2} = m(m-1)$. These initial values suggest that the coefficients
$a_{i,\ell}$ can be expressed, for
$i=2,\ldots,n-m$, as

where

with
$a'_{0,0} = 1$,
$a'_{0,1} = 1$,
$a'_{1,0} = 1$,
$a'_{1,1} = m$, and
$a'_{1,2} = m$. It is then easily checked that the following explicit expression of the
$a'_{i,\ell}$ satisfies relations (13)

with the convention that
${\displaystyle \binom{b}{a} = 0}$ if
$a \notin \{0,\ldots,b-1\}$ and
${\displaystyle \binom{b}{b} = 1}$ for all b < 0.
Since
$M_n(t)=1$, we get by taking
$i=n-m$ in relation (12),

which can be written as

This completes the proof.
Observe that, since
$M_i(0)=1$ for every
$i=1,\ldots,n$, we have
$\sum_{\ell=0}^{i+1} a_{i,\ell} = 1, \quad \text{for every } i = 0,\ldots,n-m.$
The moment generating function
$M_i(t)$ is defined only for
$t \lt -\ln(\rho_n)$ and it is well-known from the Perron–Frobenius theorem that ρn is an eigenvalue of matrix
$P_{\{1,\ldots,n-1\}}$. That is why we study the eigenvalues of matrix
$P_{\{1,\ldots,n-1\}}$ in the following subsection.
3.4. Eigenvalues
We analyze in this section the eigenvalues of matrix
$P_{\{1,\ldots,n-1\}}$. For fixed
$2 \leq m \leq n$ and
$k = 1,\ldots,n-1$, we consider submatrix
$P_{\{1,\ldots,k\}}$ of matrix P, obtained by removing the last
$n-1-k$ rows and columns of matrix
$P_{\{1,\ldots,n-1\}}$. We denote by
$R_{k}(x)$ the characteristic polynomial of matrix
$P_{\{1,\ldots,k\}}$ which is the determinant of matrix
$xI - P_{\{1,\ldots,k\}}$, that is
$R_{k}(x) = \det\left(xI - P_{\{1,\ldots,k\}}\right).$
The following theorem gives the characteristic polynomial
$R_{k}(x)$ of matrix
$P_{\{1,\ldots,k\}}$, for
$k=1,\ldots,n-1$.
Theorem 3.4. We have
$R_k(x) = \left(x + \frac{1}{m-1}\right)^{k-1}\left(x - \frac{k-1}{m-1}\right), \quad \text{for } k = 1,\ldots,m-1.$
For
$k=1,\ldots,n-m+1$, we have
$R_{m+k-2}(x) = \left(x + \frac{1}{m-1}\right)^{m-2} S_k(x),$
where the k-th order polynomial
$S_k(x)$ has k distinct real roots which interlace with the k − 1 distinct real roots of
$S_{k-1}(x)$, for
$k \geq 2$.
Matrix
$P_{\{1,\ldots,\,n-1\}}$ has n − 1 real eigenvalues:
$-1/(m-1)$ with multiplicity m − 2 and
$n-m+1$ other distinct real eigenvalues with multiplicity 1. All its eigenvalues belong to the interval
$(-1,1)$, and the greatest one, which is its spectral radius ρn, belongs to the interval
$[1/2,1)$.
Proof. For every
$k = 1,\ldots,m-1$ and
$x \in \mathbb{R}$, the matrix
$xI - P_{\{1,\ldots,k\}}$ is given by
$(xI - P_{\{1,\ldots,k\}})_{i,\,i}=x$ and
$(xI - P_{\{1,\ldots,k\}})_{i,j}=-1/(m-1)$, for i ≠ j. We also consider the k × k matrix
$M(k,x)$ defined by
$M_{1,1}(k,x)=-1/(m-1)$,
$M_{i,i}(k,x)=x$, for
$i \geq 2$ and
$M_{i,j}(k,x)=-1/(m-1)$, for i ≠ j.
The determinants
$R_{k}(x)$ and
$D_k(x)$ of matrices
$xI - P_{\{1,\ldots,k\}}$ and
$M(k,x)$, respectively, can then be written recursively, for
$k \geq 2$ as
$R_k(x) = x\,R_{k-1}(x) + \frac{k-1}{m-1}\,D_{k-1}(x) \quad\text{and}\quad D_k(x) = -\frac{1}{m-1}\,R_{k-1}(x) + \frac{k-1}{m-1}\,D_{k-1}(x).$
Indeed, we performed the expansion of the determinants
$R_{k}(x)$ and
$D_k(x)$ using the first row. The first term is obtained from the expansion of the first column and the second term is obtained from the k − 1 identical terms derived from the expansion of the second to the
$(k-1)$th column. The initial values of these relations are
$R_1(x) = x \quad\text{and}\quad D_1(x) = -\frac{1}{m-1}.$
It is easily checked by induction that the solution to these equations is given, for
$k = 1,\ldots,m-1$, by
$R_k(x) = \left(x + \frac{1}{m-1}\right)^{k-1}\left(x - \frac{k-1}{m-1}\right) \quad\text{and}\quad D_k(x) = -\frac{1}{m-1}\left(x + \frac{1}{m-1}\right)^{k-1}.$
We consider now the characteristic polynomial of matrix
$P_{\{1,\ldots,m\}}$. The matrix
$xI - P_{\{1,\ldots,m\}}$ is given by
$(xI - P_{\{1,\ldots,m\}})_{i,i}=x$,
$(xI - P_{\{1,\ldots,m\}})_{i,j}=-1/(m-1)$, for
$i =1,\ldots,m-1$ with i ≠ j and
$(xI - P_{\{1,\ldots,m\}})_{m,j}=-1/m$, for j ≠ m. The characteristic polynomial
$R_m(x)$ of matrix
$P_{\{1,\ldots,m\}}$ is then given by
$R_m(x) = x\,R_{m-1}(x) + \frac{m-1}{m}\,D_{m-1}(x).$
Here we performed the expansion of the determinant using the last row. The first term is obtained from the expansion of the last column and the second term is obtained from the m − 1 identical terms derived from the expansion of the first column to the
$(m-1)$th column. Replacing
$R_{m-1}(x)$ and
$D_{m-1}(x)$ by their values obtained above, we get
$R_m(x) = \left(x + \frac{1}{m-1}\right)^{m-2}\left(x^2 - \frac{m-2}{m-1}\,x - \frac{1}{m}\right).$
Concerning the characteristic polynomials
$R_{m+1}(x),\ldots,R_{n-1}(x)$, we observe that (see (6) and the example (5)) the submatrix obtained from matrix
$P_{\{1,\ldots,\,n-1\}}$ by deleting its m − 1 first rows and columns is a tridiagonal matrix. We thus easily get
$R_{m+1}(x) = x\,R_m(x) - \frac{1}{2m}\,R_{m-1}(x)$
and, for
$k=m+2,\ldots,n-1$,
$R_k(x) = x\,R_{k-1}(x) - \frac{1}{4}\,R_{k-2}(x).$
For
$k=1,\ldots,n-m+1$, we introduce the notation
$S_k(x) = \left(x + \frac{1}{m-1}\right)^{-(m-2)} R_{m+k-2}(x).$
We then have
$S_1(x) = x - \frac{m-2}{m-1},$
$S_2(x) = x\,S_1(x) - \frac{1}{m},$
$S_3(x) = x\,S_2(x) - \frac{1}{2m}\,S_1(x), \qquad (14)$
and, for
$k=4,\ldots,n-m+1$,
$S_k(x) = x\,S_{k-1}(x) - \frac{1}{4}\,S_{k-2}(x). \qquad (15)$
For
$k=1,\ldots,n-m+1$, Sk is a k-th order polynomial which satisfies
$S_k(x) = x\,S_{k-1}(x) - c_k\,S_{k-2}(x), \quad\text{with } c_k \gt 0, \qquad (16)$ where $c_2 = 1/m$, $c_3 = 1/(2m)$, $c_k = 1/4$ for $k \geq 4$, and $S_0(x) = 1$.
We denote by
$\gamma^{(k)}_1,\ldots,\gamma^{(k)}_{k}$ the k roots of Sk. For k = 1, we easily have
$\gamma^{(1)}_1 = \frac{m-2}{m-1}.$
For k = 2, $S_2$ is a second-order polynomial with two real roots, which are
$\gamma^{(2)}_{1},\ \gamma^{(2)}_{2} = \frac{1}{2}\left(\frac{m-2}{m-1} \mp \sqrt{\left(\frac{m-2}{m-1}\right)^{2} + \frac{4}{m}}\,\right).$
It is easily checked that we have
$\gamma^{(2)}_1 \lt \gamma^{(1)}_1 \lt \gamma^{(2)}_2.$
Consider now relations (14) and (15), and fix an integer
$k=3,\ldots,n-m+1$. Let γ be a root of
$S_{k-1}$. We then have
$S_{k-1}(\gamma)=0$ which gives by both (14) and (15),
$S_{k}(\gamma) S_{k-2}(\gamma) \leq 0$. Suppose that
$S_{k}(\gamma)=0$. Since
$S_{k-1}(\gamma)=0$,
$S_{k}(\gamma)=0$ is equivalent to
$S_{k-2}(\gamma) = 0$. Writing (15) for integer k − 1, this implies that
$S_{k-3}(\gamma) = 0$, which in turn implies recursively that
$S_{k-4}(\gamma) = 0, \ldots, S_2(\gamma) = 0$. Using now (14), this implies that
$S_{1}(\gamma) = 0$, which gives
$\gamma = \gamma^{(1)}_1$, which is impossible because the two roots
$\gamma^{(2)}_1$ and
$\gamma^{(2)}_2$ of $S_2$ are different from
$\gamma^{(1)}_1$.
It follows that if γ is a root of
$S_{k-1}$ then
$S_{k}(\gamma) S_{k-2}(\gamma) \lt 0$, for all
$k=3,\ldots,n-m+1$.
We have seen above that the roots of $S_1$ and $S_2$ are real and interlace, that is, that the root of $S_1$ lies between the two real roots of $S_2$. Suppose that the roots of
$S_{k-1}$ and Sk are all real and interlace, that is, that there is always a root of
$S_{k-1}$ in between two successive roots of Sk. We thus have
$\gamma^{(k)}_1 \lt \gamma^{(k-1)}_1 \lt \gamma^{(k)}_2 \lt \gamma^{(k-1)}_2 \lt \cdots \lt \gamma^{(k-1)}_{k-1} \lt \gamma^{(k)}_k.$
It follows, using (16), that for
$\ell = 1,\ldots,k$,
$\mathrm{sign}\left(S_{k-1}\left(\gamma^{(k)}_{\ell}\right)\right) = (-1)^{k-\ell}.$
Since
$S_{k+1}(\gamma^{(k)}_{\ell}) S_{k-1}(\gamma^{(k)}_{\ell}) \lt 0$, we deduce that for
$\ell = 1,\ldots,k$,
$\mathrm{sign}\left(S_{k+1}\left(\gamma^{(k)}_{\ell}\right)\right) = (-1)^{k-\ell+1}.$
This implies that in each open interval
$(\gamma^{(k)}_{\ell-1},\gamma^{(k)}_{\ell})$ there is a root of
$S_{k+1}$. Moreover, since
$S_{k+1}(\gamma^{(k)}_{k}) \lt 0$ and
$\lim_{x \longrightarrow \infty} S_{k+1}(x) = +\infty$, we have
$\gamma^{(k+1)}_{k+1} \gt \gamma^{(k)}_{k}$. In the same way, if k is even then since
$S_{k+1}(\gamma^{(k)}_{1}) \gt 0$ and
$\lim_{x \longrightarrow -\infty} S_{k+1}(x) = -\infty$, we have
$\gamma^{(k+1)}_{1} \lt \gamma^{(k)}_{1}$. If k is odd then since
$S_{k+1}(\gamma^{(k)}_{1}) \lt 0$ and
$\lim_{x \longrightarrow -\infty} S_{k+1}(x) = \infty$, we also have
$\gamma^{(k+1)}_{1} \lt \gamma^{(k)}_{1}$. The rest of the proof follows from the Perron–Frobenius theorem.
This result allows us to compute recursively, by dichotomy (bisection), the spectral radius ρn of matrix
$P_{\{1,\ldots,\,n-1\}}$ for any precision ɛ given in advance. From Theorem 3.4, for
$k \geq m$, ρk is the greatest root of polynomial
$S_{k-m+1}$ defined in the proof of this theorem. The recursion then begins with
$\rho_m = \frac{m-2}{m-1}.$
From Theorem 3.4, we have
$\rho_k \in (\rho_{k-1},1)$ and thus the algorithm starts with
$a := \rho_{k-1}$ and
$b:= 1$. We then compute
$S_{k-m+1}((a+b)/2)$. If
$S_{k-m+1}((a+b)/2) \gt 0$ then we set
$b := (a+b)/2$ and if
$S_{k-m+1}((a+b)/2) \lt 0$ then we set
$a := (a+b)/2$. We repeat these instructions until
$\left| S_{k-m+1}((a+b)/2)\right| \leq \varepsilon$, which finally leads to
$\rho_k := (a+b)/2$ with a precision equal to ɛ. The algorithm shown in Table 1 computes the spectral radius ρk of matrix
$P_{\{1,\ldots,k-1\}}$ obtained by removing the n − k last rows and columns of matrix
$P_{\{1,\ldots,\,n-1\}}$, for
$k=2,\ldots,n$.
Table 1. Algorithm computing the spectral radius ρk of matrix
$P_{\{1,\ldots,k-1\}}$ obtained by removing the n − k last rows and columns of matrix
$P_{\{1,\ldots,n-1\}}$, for
$k=2,\ldots,n$ and a fixed precision ɛ.

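A Python sketch of this dichotomy is given below. It is our illustration of the procedure just described, with two implementation choices of ours: we test the sign of $\det(xI - P_{\{1,\ldots,k-1\}})$, which agrees with the sign of $S_{k-m+1}$ on the search interval up to a positive factor, and we stop on the interval width rather than on the magnitude of $S_{k-m+1}$.

```python
import numpy as np

def lollipop_P(m, n):
    A = np.zeros((n, n))
    A[:m, :m] = 1.0 - np.eye(m)
    for i in range(m - 1, n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return A / A.sum(axis=1, keepdims=True)

def spectral_radii(m, n, eps=1e-10):
    P = lollipop_P(m, n)
    rho = {2: 0.0}                        # P_{{1}} = (0), hence rho_2 = 0
    for k in range(3, n + 1):
        sub = P[: k - 1, : k - 1]
        det = lambda x: np.linalg.det(x * np.eye(k - 1) - sub)
        a, b = rho[k - 1], 1.0            # rho_k lies in (rho_{k-1}, 1)
        while b - a > eps:
            c = (a + b) / 2               # det < 0 below rho_k, det > 0 above it
            a, b = (c, b) if det(c) < 0 else (a, c)
        rho[k] = (a + b) / 2
    return rho

rho = spectral_radii(14, 23)
# rho_k = (k - 2)/13 for k = 2,...,13 (clique part), then strictly increasing.
print({k: round(rho[k], 8) for k in (13, 14, 23)})
```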
We give below some numerical values of ρk for m = 14, n = 23 and
$\varepsilon=10^{-10}$. We get
$\rho_k = (k-2)/13$ for
$k=2,\ldots,13$ and


4. Asymptotic analysis
We analyze in this section the asymptotic behavior of both the moments and the distribution of the hitting time Tn when n tends to infinity.
4.1. Maximum expected hitting time and corresponding moments
From Theorem 3.1, we easily see, as expected, that the maximum expected hitting time on the lollipop graph is obtained when the initial state is any state
$i \in \{1,\ldots,m-1\}$. The value of m which gives the maximum expected hitting time together with the corresponding maximum was obtained in [Reference Brightwell and Winkler9]. We recall this result in the following corollary of Theorem 3.1 and we give, for this maximal value of m, the expected hitting time on the lollipop graph for the values i = m and
$i=n-1$ of the initial state.
Corollary 4.1. For every
$n \geq 3$ and
$i \in \{1,\ldots,m-1\}$, we have
$\max_{2 \leq m \leq n}\ \mathbb{E}_i\left(T_{n}\right) = \left(\frac{4}{27} + o(1)\right) n^3,$
the maximum being reached for
$m = \lceil 2(n-1)/3 \rceil$. For this value of m, we have

Proof. From Theorem 3.1, we have to find, for a fixed
$n \geq 3$, the maximum of the sequence u(m) defined by
$u(m) = (n-m)\left(m^2-2m+n\right) + m - 1.$
The difference
$u(m+1)-u(m)$ gives
$u(m+1) - u(m) = (m-1)\left(2(n-1) - 3m\right).$
It is thus easily checked that u(m) is maximal when
$m= \lceil 2(n-1)/3 \rceil$. The maximal mean hitting time is then given by
$u(\lceil 2(n-1)/3 \rceil)$.

When the initial state is
$i \in \{m,\ldots,n-1\}$, it suffices to apply Theorem 3.1.
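The maximizer is easy to confirm by brute force; the following sketch (ours) checks it for a range of n:

```python
# Locate the maximizer of u(m) = (n-m)(m^2-2m+n) + m-1 numerically and
# compare with ceil(2(n-1)/3).
from math import ceil

for n in range(3, 200):
    u = lambda m: (n - m) * (m * m - 2 * m + n) + m - 1
    m_star = max(range(2, n + 1), key=u)
    m_formula = ceil(2 * (n - 1) / 3)
    # Compare values rather than arguments: ties occur when 2(n-1)/3 is an integer.
    assert u(m_star) == u(m_formula)
print("maximizer checked for n = 3,...,199")
```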
The following corollary of Theorem 3.2 gives the second-order moment of Tn when
$m= \lceil 2(n-1)/3 \rceil$ and the initial state is any state
$i \in \{1,\ldots,m-1\}$, that is when the mean hitting time is maximal.
Corollary 4.2. For every
$n \geq 3$ with
$m= \lceil 2(n-1)/3 \rceil$ and
$i=1,\ldots,m-1$, we have

Proof. The proof consists simply in setting
$m = \lceil 2(n-1)/3 \rceil$ in Theorem 3.2, that is, by taking successively
$m=2n/3$,
$m=2(n-1)/3$ and
$m= (2n-1)/3$, according to the value of n modulo 3.
In order to deal with the other moments, we first give a recursion to compute them for all the values of m and n with
$2 \leq m \leq n$. This recursion follows the same lines that have been used in the proofs of Theorems 3.1 and 3.2.
While Theorem 2.1 gives a vector expression of the moments of Tn, Theorem 4.3 gives, using this vector expression, a recurrence relation to compute the coordinates of this vector which are given in relations (17) and (18). These two relations are quite important because they are used in Theorems 4.4 and 4.6 to get the asymptotic behavior of the moments of Tn.
From Theorem 2.1, to get the rth moment of Tn, we need to evaluate the coordinates of the column vector
$\left(I - P_{\{1,\ldots,n-1\}}\right)^{-r}\mathbb{1}$. We introduce the notation
$v^{(r)}_i = \left(\left(I - P_{\{1,\ldots,n-1\}}\right)^{-r}\mathbb{1}\right)_i, \quad \text{for } i = 1,\ldots,n-1.$
Theorem 4.3. For every
$n,m \in \mathbb{N}$, with
$2 \leq m \leq n$ and for every
$r \geq 1$, we have
for
$i=1,\ldots,m-1$, the
$v^{(r)}_i$ are all equal to a common value
$v^{(r)}$, which is recursively given by
$v^{(r)} = (m-1)\,v^{(r-1)} + (n-m)\left[(m^2-m+1)\,v^{(r-1)} - m(m-1)\,v^{(r-2)}\right] + 2\sum_{\ell=m+1}^{n-1} (n-\ell)\,v^{(r-1)}_{\ell}, \qquad (17)$
and for
$i=m,\ldots,n$,
$v^{(r)}_i = v^{(r)} - (m-1)\,v^{(r-1)} - (i-m)\left[(m^2-m+1)\,v^{(r-1)} - m(m-1)\,v^{(r-2)}\right] - 2\sum_{\ell=m+1}^{i-1} (i-\ell)\,v^{(r-1)}_{\ell}, \qquad (18)$
with initial conditions
$v^{(-1)} = 0$ and
$v^{(0)}_i = 1$, for all
$i=1,\ldots,n$.
Proof. Let
$V^{(r)}$ be the column vector defined, for
$V^{(r)}$ the column vector defined, for
$r \geq 0$, by
$V^{(r)} = \left(I - P_{\{1,\ldots,n-1\}}\right)^{-r}\mathbb{1}.$
We then have
$V^{(0)} = \mathbb{1}$ and for all
$r \geq 1$,
$V^{(r)} = \left(I - P_{\{1,\ldots,n-1\}}\right)^{-1} V^{(r-1)}.$
As we did in the proof of Theorems 3.1 and 3.2, we decompose vector
$V^{(r)}$ following the partition used in (6), by writing

For
$r \geq 1$, we have

Observing, by symmetry, that
$\mathbb{E}_i(T_{n}^r)$ has the same value for every
$i \in \{1,\ldots,m-1\}$, we have
$v^{(r)}_1=\cdots=v^{(r)}_{m-1}$. We denote this common value by
$v^{(r)}$. We then have

which leads to
$\left\{\begin{array}{l} v^{(r)} = v^{(r-1)} + \dfrac{(m-2)\,v^{(r)} + v^{(r)}_m}{m-1}, \\[2mm] v^{(r)}_m = v^{(r-1)}_m + \dfrac{(m-1)\,v^{(r)} + v^{(r)}_{m+1}}{m}, \\[2mm] v^{(r)}_i = v^{(r-1)}_i + \dfrac{v^{(r)}_{i-1} + v^{(r)}_{i+1}}{2}, \quad i = m+1,\ldots,n-1, \end{array}\right. \qquad (19)$
where we have defined
$v^{(r)}_n=\mathbb{E}_n(T_{n}^r)=0$. The first relation of (19) gives
$v^{(r)}_m = v^{(r)} - (m-1)\,v^{(r-1)}. \qquad (20)$
Using this result in the second relation of (19), we obtain
$v^{(r)}_{m+1} = m\left(v^{(r)}_m - v^{(r-1)}_m\right) - (m-1)\,v^{(r)}.$
From (20), we have
$v^{(r-1)} - v^{(r-1)}_m = (m-1)v^{(r-2)}$, which is equal to 0 for r = 1, since
$V^{(0)}=\mathbb{1}$. We thus introduce the notation
$V^{(-1)} = 0$ which is the null vector. It follows that for all
$r \geq 1$, we have
$v^{(r)}_{m+1} = v^{(r)} - m^2\,v^{(r-1)} + m(m-1)\,v^{(r-2)}.$
Defining
$z_i = v^{(r)}_{i} - v^{(r)}_{i+1}$ for
$i=m,\ldots,n-1$, the third relation of (19) gives
$z_i = z_{i-1} + 2v^{(r-1)}_{i}$, for
$i=m+1, \ldots,n-1$ with
$z_m = v^{(r)}_{m} - v^{(r)}_{m+1} = (m^2-m+1)v^{(r-1)} - m(m-1)v^{(r-2)}$. It follows that
$z_{m+i} = z_m + 2\sum_{j=1}^{i} v^{(r-1)}_{m+j},$
and thus
$v^{(r)}_m = (n-m)\left[(m^2-m+1)\,v^{(r-1)} - m(m-1)\,v^{(r-2)}\right] + 2\sum_{j=1}^{n-m-1}(n-m-j)\,v^{(r-1)}_{m+j}.$
We then get the value of
$v^{(r)}$ using
$v^{(r)} = v^{(r)}_m + (m-1)v^{(r-1)}$. In the same way, we have

which gives for
$i=1,\ldots,n-m$,

which can also be written, for
$i=m+1,\ldots,n$, as
$v^{(r)}_i = v^{(r)}_m - (i-m)\left[(m^2-m+1)\,v^{(r-1)} - m(m-1)\,v^{(r-2)}\right] - 2\sum_{\ell=m+1}^{i-1}(i-\ell)\,v^{(r-1)}_{\ell}.$
This concludes the proof.
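The recursion is straightforward to implement. The sketch below (ours) computes the vectors $V^{(r)}$ by repeated linear solves and combines them through Theorem 2.1, using the coefficient recurrence established in the proof of that theorem ($c_{1,1}=1$, $c_{r,\ell} = \ell(c_{r-1,\ell-1} - c_{r-1,\ell})$):

```python
import numpy as np

def lollipop_P(m, n):
    A = np.zeros((n, n))
    A[:m, :m] = 1.0 - np.eye(m)
    for i in range(m - 1, n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return A / A.sum(axis=1, keepdims=True)

def moments(m, n, rmax):
    Q = lollipop_P(m, n)[:-1, :-1]
    I = np.eye(n - 1)
    V = [np.ones(n - 1)]                 # V^{(0)} = 1
    for _ in range(rmax):                # V^{(r)} = (I - Q)^{-1} V^{(r-1)}
        V.append(np.linalg.solve(I - Q, V[-1]))
    c = {(1, 1): 1.0}
    for r in range(2, rmax + 1):
        for l in range(1, r + 1):
            c[(r, l)] = l * (c.get((r - 1, l - 1), 0.0) - c.get((r - 1, l), 0.0))
    # E_i(T_n^r) = sum_l c_{r,l} v_i^{(l)}  (Theorem 2.1)
    return [sum(c.get((r, l), 0.0) * V[l] for l in range(1, r + 1))
            for r in range(1, rmax + 1)]

m, n = 8, 14
for r, Er in enumerate(moments(m, n, 3), start=1):
    print(r, Er[0])    # E_1(T_n^r), identical for all clique vertices
```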
4.2. Asymptotic moments and distribution
Armed with Theorem 4.3, we are now in a position to tackle the asymptotic analysis of the various moments
$\mathbb{E}_i(T_n^r)$ for
$r \geq 1$ as
$n\longrightarrow \infty$. Naturally, the result strongly depends on the asymptotic behavior of the parameter m along with n. We consider in this section the two following cases. The first case is the most natural one, namely the case when
$\frac{m}{n} \longrightarrow a \in (0,1), \quad\text{as } n \longrightarrow \infty.$
Theorem 4.4 and Corollary 4.5 give a complete description of the asymptotic behavior of both all moments and distribution of Tn in this case. The second case is the complementary situation when
$m\ll n$. More technically, we investigate the case when
$\frac{m}{\sqrt{n}} \longrightarrow 0, \quad\text{as } n \longrightarrow \infty.$
Theorem 4.6 and Corollary 4.7 give a complete description of the asymptotic behavior of both all moments and distribution of Tn in this case.
Let us begin with the situation when
$m/n \longrightarrow a$. The following theorem gives the asymptotic behavior of the rth moment of Tn in this situation.
Theorem 4.4. Assume there is an
$a \in (0,1)$ such that for every
$n \geq 3$, we have
$m = a\,n\left(1+\varepsilon(n)\right), \quad\text{where } \varepsilon(n) \longrightarrow 0 \text{ as } n \longrightarrow \infty.$
Then, for every given value of
$r \geq 1$, there exists an r-dependent sequence
$\varepsilon_{r}(n)$ such that for any
$n\geq 3$ and any
$i = 1,\ldots,m-1$, we have
$\mathbb{E}_i\left(T_{n}^r\right) = r!\; a^{2r}(1-a)^r\, n^{3r}\left(1 + \varepsilon_{r}(n)\right),$
where the sequence
$\varepsilon_{r}(n)$ goes to zero as
$n \longrightarrow \infty$. Note that as in Theorem 4.3 both sides of this relation are independent of i.
Proof. The proof is based on the asymptotic analysis of the
$v^{(r)}$ and
$v^{(r)}_{i}$ obtained in Theorem 4.3.
From now on, and in order to keep reasonably light notation, any function denoted either by
$\varepsilon_r(n)$, or
$\varepsilon_r(n,i)$ for
$i=1,\ldots,m-1$, or
$\varepsilon_r(n,i)$ for
$i=m,\ldots,n-1$, or simply by
$\varepsilon(n)$ if there is no dependence on r, denotes a sequence which goes to zero as n goes to infinity uniformly in i, in the sense that
$\lim_{n \longrightarrow \infty}\ \sup_{i}\ \left|\varepsilon_r(n,i)\right| = 0. \qquad (21)$
The precise value of such functions may change from line to line in the computations below.
First step
In this step, we prove by induction on r that for each r, there exist sequences
$\varepsilon_r(n)$ and
$\varepsilon_{r}(n,i)$ such that
$v^{(r)} = \left(a^2(1-a)\,n^3\right)^{r}\left(1+\varepsilon_r(n)\right) \quad\text{and}\quad v^{(r)}_i = \left(a^2(1-a)\,n^3\right)^{r}\,\frac{n-i}{n-m}\left(1+\varepsilon_{r}(n,i)\right), \qquad (22)$
where
$\varepsilon_r(n)$ and
$\varepsilon_{r}(n,i)$, for
$i=m,\dots, n-1$, go to zero as
$n \longrightarrow \infty$ in the sense defined in (21).
If (22) is true for some given r, then we have

Here we used the fact that
$n-m = (1-a)n (1+ \varepsilon(n)).$ We also used the variable change
$\ell := n-\ell$ in the original sums over
$\ell$ and the well-known asymptotic behavior
$\sum_{\ell=1}^{N} \ell^{j} \mathop{=}_{N \longrightarrow \infty} \frac{N^{j+1}}{j+1}\left(1+\varepsilon(N)\right), \quad\text{for every fixed } j \geq 0. \qquad (24)$
In the same spirit, if (22) is true for some given r, then we also have, for any
$i=m,\ldots,n-1$,

where we used the variable change
$\ell := n-\ell$ in the original sums over
$\ell$ and the fact that our assumptions on m and i imply
$(n-i)(i-m)^2= n^4 \varepsilon(n,i)$, together with
$(i-m)^3= n^4 \varepsilon(n,i)$ and
$(i-m)^2= n^3 \varepsilon(n,i)$.
Now, using the expression of
$v^{(r)}$ given by relation (17), we may deduce that, if (22) is true for integers r and r − 1, then we have

where we have used
$m=an(1+\varepsilon(n))$, together with relation (22) for the values r and r − 1, as well as the necessary relation (23). Similarly, using the expression of
$v^{(r)}_i$ given by relation (18), we obtain that, if (22) is true for integers r and r − 1, then we have for
$i=m,\ldots,n-1$,

where we have used
$m=an(1+\varepsilon(n))$, together with relation (22) for the values r and r − 1, as well as the necessary relations (23) and (25).
We have proved that if (22) is true for integers r and r − 1, then the same relation holds for integer r + 1.
To conclude this first step, it remains to prove that (22) is true for r = 1 and r = 2.
For r = 1, since
$v^{(0)}_i = 1$ and
$v^{(-1)}_i = 0$, for any i, using relation (17) in the case r = 1, we have
$v^{(1)} = (n-m)\left(m^2-2m+n\right) + m - 1 = a^2(1-a)\,n^3\left(1+\varepsilon(n)\right).$
Similarly, using relation (18) in the case r = 1, we have for
$i=m,\ldots,n-1$,
$v^{(1)}_i = (n-m)\left(m^2-2m+n\right) - (i-m)\left(m^2-2m+i\right) = a^2(1-a)\,n^3\,\frac{n-i}{n-m}\left(1+\varepsilon(n,i)\right).$
Thus, relations (22) are true for r = 1.
In the same way, for r = 2 using now relations (23) and (25) in the case r = 2, which is legitimate due to the previous verification, and inserting these relations in (17) and (18) in the case r = 2, we get successively

and, for
$i=m,\ldots,n-1$,

Thus relations (22) are valid for r = 2. This concludes our recursion.
Second step
Inserting the above results in the formulae obtained in Theorem 4.3, we have for every
$i=1,\ldots,m-1$,
$\mathbb{E}_i\left(T_{n}^r\right) = \sum_{\ell=1}^{r} c_{r,\ell}\, v^{(\ell)} = r!\, v^{(r)}\left(1+\varepsilon_r(n)\right) = r!\; a^{2r}(1-a)^r\, n^{3r}\left(1+\varepsilon_r(n)\right),$
which completes the proof.
Theorem 4.4 now allows us to determine the asymptotic behavior of the distribution of the hitting time Tn, when
$m/n \longrightarrow a$ as
$n \longrightarrow \infty$.
Corollary 4.5. Assume there is an
$a \in (0,1)$ such that for every
$n \geq 3$, we have
$m = a\,n\left(1+\varepsilon(n)\right), \quad\text{where } \varepsilon(n) \longrightarrow 0 \text{ as } n \longrightarrow \infty.$
Then, for all
$t \geq 0$, and every
$i = 1,\ldots,m-1$
$\lim_{n \longrightarrow \infty} \mathbb{P}_i\left(\frac{T_{n}}{a^2(1-a)\,n^3} \gt t\right) = e^{-t}.$
Proof. Let Z be a random variable exponentially distributed with rate 1. It is well-known that for every
$r \geq 0$, we have
$\mathbb{E}(Z^r) = r!$. Thus, the power series
$\sum_{r \geq 0} \mathbb{E}(Z^r) x^r/r!$ has a positive radius of convergence (equal to 1). This implies, from Theorem 30.1 of [Reference Billingsley7], that the random variable Z is uniquely determined by its moments. From Theorem 4.4, we have for every
$r \geq 1$,
$\lim_{n \longrightarrow \infty} \mathbb{E}_i\left[\left(\frac{T_{n}}{a^2(1-a)\,n^3}\right)^{r}\right] = r! = \mathbb{E}(Z^r).$
It follows from Theorem 30.2 of [Reference Billingsley7] that, for all
$i = 1,\ldots,m-1$
$\lim_{n \longrightarrow \infty} \mathbb{P}_i\left(\frac{T_{n}}{a^2(1-a)\,n^3} \gt t\right) = \mathbb{P}(Z \gt t) = e^{-t},$
which completes the proof.
In the particular case where
$m = \lceil 2(n-1)/3 \rceil,$
which corresponds to the maximum expected hitting time as shown in Corollary 4.1, we obtain by taking
$a=2/3$ in Theorem 4.4 and Corollary 4.5
$\mathbb{E}_i\left(T_{n}^r\right) = r!\left(\frac{4}{27}\right)^{r} n^{3r}\left(1+\varepsilon_r(n)\right) \quad\text{and}\quad \lim_{n \longrightarrow \infty} \mathbb{P}_i\left(\frac{27\,T_{n}}{4\,n^3} \gt t\right) = e^{-t}.$
Observe, as expected, that the maximum of
$a^2(1-a)$ in
$(0,1)$ is obtained for
$a=2/3$ and is equal to
$4/27$.
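This convergence can also be observed by simulation. The sketch below (ours; the parameters n and the sample size are chosen only for speed) simulates the walk on $L_m^n$ with $m = \lceil 2(n-1)/3 \rceil$ and normalizes the hitting times by $4n^3/27$; for an Exp(1) limit, both the sample mean and the sample standard deviation should be close to 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def hit_time(m, n):
    i, t = 1, 0                          # start in the clique
    while i != n:
        if i < m:                        # clique vertex: uniform over the other clique vertices
            j = rng.integers(1, m)       # j in {1,...,m-1}
            i = j if j < i else j + 1
        elif i == m:                     # junction vertex: m neighbors
            j = rng.integers(1, m + 1)   # j in {1,...,m}
            i = m + 1 if j == m else j
        else:                            # path vertex: left or right with prob. 1/2
            i += 1 if rng.random() < 0.5 else -1
        t += 1
    return t

n = 30
m = -(-2 * (n - 1) // 3)                 # ceil(2(n-1)/3)
sample = np.array([hit_time(m, n) for _ in range(1000)])
x = 27 * sample / (4 * n**3)
print(x.mean(), x.std())                 # both roughly 1 (finite-n bias remains)
```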
Note also that when
$m = n,$
the lollipop graph
$L_n^n$ is the complete graph Kn. In that case, we easily obtain that Tn has a geometric distribution with parameter
$1/(n-1)$. We thus get
$\mathbb{E}\left(T_{n}^r\right) = r!\,(n-1)^{r}\left(1+\varepsilon_r(n)\right).$
We deduce easily that for the complete graph we have, for all
$t \geq 0$,
$\lim_{n \longrightarrow \infty} \mathbb{P}\left(\frac{T_{n}}{n} \gt t\right) = e^{-t}.$
This ends our discussion of the case
$m/n\to a$ as
$n \longrightarrow \infty$.
Let us now investigate the complementary case when
$m \ll n$. More technically, we now discuss the case when
$\frac{m}{\sqrt{n}} \longrightarrow 0, \quad\text{as } n \longrightarrow \infty.$
We stress here that the more general case where the parameter m is sublinear with respect to n, that is, the case when
$\displaystyle m\mathop{\sim}_{n\longrightarrow\infty} a n^\beta$ for some
$\beta \in (0,1)$ and some a > 0, is much more complicated when
$\beta\geq 1/2$, and will be the subject of further research. We thus somehow restrict our attention to the case
$\beta \lt 1/2$ below.
Theorem 4.6. Assume that for every
$n \geq 3$, we have
$m = \sqrt{n}\;\varepsilon(n), \quad\text{where } \varepsilon(n) \longrightarrow 0 \text{ as } n \longrightarrow \infty.$
Then for every
$r \geq 1$, there is a sequence
$\varepsilon_r(n)$ such that for any
$i = 1,\ldots,m-1$ we have
$\mathbb{E}_i\left(T_{n}^r\right) = \frac{r!\,|E_{2r}|}{(2r)!}\left(2n^2\right)^{r}\left(1+\varepsilon_r(n)\right),$
where
$E_{2r}$ are the well-known Euler numbers.
Proof. As we did for Theorem 4.4, the proof is based on the asymptotic analysis of the
$v^{(r)}$ and
$v^{(r)}_{i}$ obtained in Theorem 4.3. Set
$m=\sqrt{n}\varepsilon(n)$, and use the notation defined in (21).
We prove by induction that for every
$r \geq 1$, we have

If (26) is true for a fixed r, then we have

We now observe, using (24), that

where we have used that
$m =\sqrt{n} \varepsilon(n)$. Similarly, we observe

This provides

Using now the well-known relations

we conclude that

Following the same lines, if (26) is true for a fixed r then we have, for
$i=m,\ldots,n-1$,

We are now in a position to prove relations (26) by induction.
Suppose relations (26) are true for integers r and r − 1. We then have, using (27) together with (17)

Similarly, if relations (26) are true for integers r and r − 1, we then have, using (27) and (28) together with formula (18), for
$i=m,\ldots,n-1$,

where we used the well-known fact that
$|E_{2(r+1)}| = (-1)^{r+1}E_{2(r+1)}$.
To complete our recursion, it remains to check that relations (26) are valid for r = 1 and r = 2.
For r = 1, since
$v^{(0)}=v^{(0)}_i= 1$ and
$v^{(-1)}=v^{(-1)}_i = 0$ for any i, we have, using relation (17)

since
$|E_2|=1$, and, whenever
$i=m,\ldots, n-1$, using formula (18), we also have

since
$E_0=1$ and
$|E_2|=1$. Thus, relations (26) are true for r = 1.
In the same way, for r = 2, using relation (27) for r = 1 in (17), which is legitimate thanks to the previous verification, provides

and, using relations (27) and (28) for r = 1 in (18), which is legitimate thanks to the previous verification, provides for
$i=m,\ldots,n-1$,

Thus relations (26) are also valid for r = 2.
This completes the proof of relations (26) by induction.
Now armed with relation (26), we use Theorem 2.1, to deduce that, for every
$i=1,\ldots,m-1$, we have
$\mathbb{E}_i\left(T_{n}^r\right) = \sum_{\ell=1}^{r} c_{r,\ell}\, v^{(\ell)}_i = r!\, v^{(r)}\left(1+\varepsilon_r(n)\right) = \frac{r!\,|E_{2r}|}{(2r)!}\left(2n^2\right)^{r}\left(1+\varepsilon_r(n)\right),$
which completes the proof.
Theorem 4.6 allows us to determine the asymptotic behavior of the distribution of the hitting time Tn, when
$m/\sqrt{n}$ tends to 0 when n tends to infinity.
Corollary 4.7. Assume that for every
$n \geq 3$, we have
$m = \sqrt{n}\;\varepsilon(n), \quad\text{where } \varepsilon(n) \longrightarrow 0 \text{ as } n \longrightarrow \infty.$
Then for all
$t \geq 0$ and for every
$i = 1,\ldots,m-1$, we have
$\lim_{n \longrightarrow \infty} \mathbb{P}_i\left(\frac{T_{n}}{2n^2} \gt t\right) = \frac{4}{\pi}\sum_{k=0}^{\infty}\frac{(-1)^k}{2k+1}\; e^{-(2k+1)^2\pi^2 t/4}.$
Proof. Introducing the notation
$Y_n = T_n/(2n^2)$, we deduce from Theorem 4.6 that, for every
$i=1, \ldots,m-1$,
$\lim_{n \longrightarrow \infty} \mathbb{E}_i\left(Y_n^{r}\right) = \frac{r!\,|E_{2r}|}{(2r)!}.$
Consider the sequence of positive real numbers sr defined by
$s_r = \frac{r!\,|E_{2r}|}{(2r)!}.$
The first values of this sequence are
$s_0=1$,
$s_1=1/2$,
$s_2=5/12$,
$s_3=61/120$,
$s_4=277/336$. Our task is now to identify the probability measure µ associated with the asymptotic moments given by the sequence sr, that is, such that for any r,
$s_r = \int_0^{\infty} x^{r}\, \mu(\mathrm{d}x).$
This problem is well-known and is called the Stieltjes moment problem. If such a measure exists, the sequence sr is called a normalized (because
$s_0=1$) Stieltjes moment sequence. If the measure is unique, then the sequence sr is said to be determinate, see for instance [Reference Berg and Durán6].
First, a sufficient condition for determinacy is given by the following relation, called Carleman’s condition
$\sum_{r=1}^{\infty} s_r^{-1/(2r)} = \infty. \qquad (29)$
To prove (29), we use the well-known fact (see for instance [Reference Borwein, Borwein and Dilcher8]) that
$|E_{2r}| \mathop{\sim}_{r \longrightarrow \infty} \frac{4^{r+1}\,(2r)!}{\pi^{2r+1}}.$
Using the Stirling formula for
$r!$, this leads to

Since
$1/(2r)$ tends to 0 when r tends to infinity, we can write

Thus

Since this series diverges, Carleman's condition (29) is satisfied.
Second, in order to determine the probability measure µ, we use the following relation, which can be found in [Reference Abramowitz and Stegun1] or in [Reference Amigo4]
$\sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)^{2r+1}} = \frac{|E_{2r}|\;\pi^{2r+1}}{4^{r+1}\,(2r)!}.$
Introducing the notation
$\lambda_k = \frac{(2k+1)^2\pi^2}{4}, \quad \text{for } k \geq 0,$
the moments sr can then be written as
$s_r = \frac{4}{\pi}\sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1}\cdot\frac{r!}{\lambda_k^{r}}.$
Observing that
$r!/\lambda_k^r$ is the rth order moment of the exponential distribution with rate λk, we obtain that the probability measure µ is given by the density
$f(x) = \frac{4}{\pi}\sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1}\,\lambda_k\, e^{-\lambda_k x} = \pi\sum_{k=0}^{\infty} (-1)^k (2k+1)\, e^{-(2k+1)^2\pi^2 x/4}, \quad x \gt 0.$
We then obtain, using the Fréchet–Shohat theorem, see [Reference Frechet and Shohat13] or [Reference Sodin20],
$\lim_{n \longrightarrow \infty} \mathbb{P}_i\left(\frac{T_{n}}{2n^2} \gt t\right) = \int_t^{\infty} f(x)\,\mathrm{d}x = \frac{4}{\pi}\sum_{k=0}^{\infty}\frac{(-1)^k}{2k+1}\, e^{-(2k+1)^2\pi^2 t/4},$
which completes the proof.
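As a quick numerical confirmation of this mixture representation (a sketch of ours), one can check that the weights $4(-1)^k/(\pi(2k+1))$ and rates λk reproduce the values of sr listed above:

```python
from math import factorial, pi

# Check that the signed mixture of exponentials with rates
# lambda_k = ((2k+1) pi / 2)^2 reproduces s_r = r! |E_{2r}| / (2r)!.
def s_mixture(r, K=200_000):
    return sum(4 * (-1) ** k / (pi * (2 * k + 1))
               * factorial(r) / (((2 * k + 1) * pi / 2) ** (2 * r))
               for k in range(K))

listed = [1.0, 1 / 2, 5 / 12, 61 / 120, 277 / 336]   # s_0,...,s_4
for r, s in enumerate(listed):
    print(r, abs(s_mixture(r) - s) < 1e-4)           # slow alternating series for r = 0
```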
Note that this result applies in the particular case m = 2, for which the lollipop graph
$L_2^n$ is nothing but the classical path graph.
Corollaries 4.5 and 4.7 show that Tn, properly normalized, converges in distribution to an exponential distribution and to a linear combination of exponential distributions respectively. The fact that these limits involve exponential distributions seems to be intuitively clear because Tn is reached after a geometric number of attempts. Nevertheless, observe that the normalization term which leads to the convergence in distribution is of the order of
$\mathbb{E}(T_n)$. Indeed, when m = an, Corollary 4.5 tells us that
$T_n/\mathbb{E}(T_n)$ converges in distribution. In the same way, for instance when m is constant, Corollary 4.7 tells us that
$T_n/\mathbb{E}(T_n)$ converges in distribution. This means that the convergence of
$T_n/\mathbb{E}(T_n)$ should occur for more general graphs.
5. Conclusion
We considered in this paper both the distribution and the moments of the hitting time on the lollipop graph
$L_m^n$, which has the maximum expected hitting time among all n-vertex graphs. We obtained recurrence relations for the moments of all orders and we used these relations to analyze the asymptotic behavior of the hitting time distribution when n tends to infinity. The main difficulty was to exhibit the asymptotic behavior of the moments of all orders, which led to the asymptotic behavior of the distribution of the hitting times Tn. We observed that for several values of m depending on n, the random variable
$T_n/\mathbb{E}(T_n)$ converges in distribution. Further research will thus be to analyze more general graphs to see if this convergence still occurs. A first step will be to consider other particular graphs such as the barbell graph, obtained by connecting two copies of a complete graph by a bridge, or the tadpole graph, obtained by joining a cycle graph to a path graph. Another quite interesting question would be to ask whether the lollipop graph maximizes some of the higher moments among all n-vertex graphs.
Competing interest
The authors declare none.