1. Introduction
Let C be a nonempty closed geodesic convex subset of a Hadamard manifold $\mathbb{M}$, let $T_x \mathbb{M}$ be the tangent space of $\mathbb{M}$ at $x \in \mathbb{M}$ and let $T\mathbb{M}$ be the tangent bundle of $\mathbb{M}$. The variational inclusion problem (VIP) is to find $x \in C$ such that
\begin{align*}
0 \in \Upsilon(x)+\Phi(x), \qquad (1.1)
\end{align*}
where $\Upsilon: C \to T\mathbb{M}$ is a single-valued vector field, $\Phi:C \to 2^{T\mathbb{M}}$ is a multivalued vector field and 0 denotes the zero section of $T \mathbb{M}$. We denote by Ω the solution set of VIP (1.1). The VIP (1.1) is of central importance in nonlinear and convex analysis. The theory of variational inclusion problems has been studied by many authors in various linear spaces (see [Reference Dong, Jiang, Cholamjiak and Shehu12, Reference Gibali, Reich and Zalas15, Reference Gibali and Thong16, Reference Lions and Mercier24, Reference Mainge27, Reference Tseng41]) owing to its wide applications in fields such as machine learning, statistical regression and signal recovery (see [Reference Bačák and Reich7, Reference Combettes and Wajs10, Reference Duchi and Singer13]). Due to the importance and interest of the problem, many iterative procedures have been proposed for solving VIP (1.1) in linear spaces (Hilbert and Banach spaces) (see [Reference Reich and Salinas32]).
A simple and efficient method for solving VIP (1.1) is the forward-backward splitting algorithm introduced by Lions and Mercier [Reference Lions and Mercier24] in a real Hilbert space H. This method is of the form:
\begin{align*}
x_{n+1}=J_{r_n}^{\Phi}\big(x_n-r_n \Upsilon (x_n)\big),\quad n \geq 1,
\end{align*}
where $J_{r_n}^{\Phi}=(I +r_n \Phi)^{-1}$ denotes the resolvent of Φ, I denotes the identity mapping on H and $\{r_n\}$ is a positive real sequence. They proved that the iterative method converges weakly to a solution of the inclusion problem under the assumption that ϒ is α-inverse strongly monotone.
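To fix ideas in the classical Hilbert space setting, the following is a minimal sketch of the forward-backward iteration for the hypothetical choices $\Upsilon(x) = A^{T}(Ax-b)$ and $\Phi = \partial(\lambda\|\cdot\|_1)$, for which the resolvent $J_{r}^{\Phi}$ is the well-known soft-thresholding operator; it is an illustration only and not the method analysed in this paper.

```python
import numpy as np

def soft_threshold(z, tau):
    # Resolvent J_r^Phi of Phi = subdifferential of lambda*||.||_1, with tau = r*lambda.
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def forward_backward(A, b, lam, r, n_iter=500):
    # x_{n+1} = J_r^Phi(x_n - r*Upsilon(x_n)) with Upsilon(x) = A^T(Ax - b).
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                    # forward (explicit) step
        x = soft_threshold(x - r * grad, r * lam)   # backward (resolvent) step
    return x

# usage: Upsilon is (1/L)-inverse strongly monotone with L = ||A^T A||, so r < 2/L suffices
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50)); b = rng.standard_normal(20)
x_star = forward_backward(A, b, lam=0.1, r=1.0 / np.linalg.norm(A.T @ A, 2))
```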
In 2000, Tseng [Reference Tseng41] introduced one of the most suitable iterative techniques for solving VIP (1.1), now known as Tseng’s method. With this method, Tseng [Reference Tseng41] was able to dispense with the inverse strong monotonicity condition, which is a rather strict assumption imposed on ϒ in (1.1); in [Reference Tseng41], ϒ is only assumed to be monotone and L-Lipschitz continuous. A known weakness of Tseng’s method is that its step size requires prior knowledge of the Lipschitz constant of the underlying operator, which, from a practical point of view, is often difficult to estimate. In recent years, modifications of Tseng’s method have received great attention from many authors (see [Reference Dong, Jiang, Cholamjiak and Shehu12, Reference Gibali and Thong16, Reference Oyewole, Abass, Mebawondu and Aremu29, Reference Shehu36, Reference Thong and Cholamjiak40]). Recently, Gibali and Thong [Reference Gibali and Thong16] introduced Mann- and viscosity-type methods together with a new step size rule for solving VIP (1.1) in the framework of real Hilbert spaces. Under standard assumptions such as Lipschitz continuity and maximal monotonicity, they established strong convergence of the proposed algorithms. Extending concepts and techniques from linear spaces to Riemannian manifolds has some important advantages (see [Reference Ferreira and Oliveira14, Reference Li, Lopez and Marquez23, Reference Sakai34]). For instance, some optimization problems with non-convex objective functions become convex from the Riemannian geometry point of view, and some constrained optimization problems can be regarded as unconstrained ones with an appropriate Riemannian metric. In addition, the study of convex minimization problems and inclusion problems in nonlinear spaces has proved to be very useful for computing medians and means of trees, which are important in computational phylogenetics, diffusion tensor imaging, consensus algorithms and the modelling of airway systems in human lungs and blood vessels (see [Reference Baćak5, Reference Baćak6, Reference Baćak26]). Thus, Riemannian manifolds provide a more suitable framework for the study of many such optimization problems. For instance, Li et al. [Reference Li, López and Martín-Márquez22] established the convergence of the proximal point algorithm on Hadamard manifolds by using the fact that the zeros of a maximal monotone operator are the fixed points of its resolvent. In another result, Li et al. [Reference Li, Lopez and Marquez23] introduced the notions of firmly non-expansive mappings and of the resolvent of a set-valued monotone vector field in the framework of Hadamard manifolds, and established a strong relationship between firmly non-expansive mappings and monotone vector fields. Very recently, Khammahawong et al. [Reference Khammahawong, Kumam, Chaipunya and Martinez20] introduced the following Tseng-type iterative method for approximating a solution of VIP (1.1):
Under suitable conditions, they established a convergence result without prior knowledge of the Lipschitz constant.
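For comparison with the forward-backward scheme above, the following is a minimal Euclidean sketch of Tseng's forward-backward-forward iteration with a fixed step $\lambda \in (0, 1/L)$, as described above; the operator data (M, q) and the choice $\Phi = N_{\mathbb{R}^n_{+}}$ (whose resolvent is the projection onto the non-negative orthant) are hypothetical illustrations, and the manifold method of Khammahawong et al. as well as the adaptive step size rules discussed below replace this fixed λ.

```python
import numpy as np

def tseng_fbf(upsilon, resolvent, x0, lam, n_iter=500):
    """Euclidean prototype of Tseng's forward-backward-forward method:
    y_n = J_lam^Phi(x_n - lam*Upsilon(x_n)),  x_{n+1} = y_n - lam*(Upsilon(y_n) - Upsilon(x_n)),
    with Upsilon monotone and L-Lipschitz and lam in (0, 1/L)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        y = resolvent(x - lam * upsilon(x))        # forward-backward step
        x = y - lam * (upsilon(y) - upsilon(x))    # second forward (correction) step
    return x

# hypothetical monotone affine operator and Phi = normal cone of the non-negative orthant
M = np.array([[2.0, 1.0], [-1.0, 2.0]]); q = np.array([-1.0, -1.0])
upsilon = lambda x: M @ x + q
project = lambda z: np.maximum(z, 0.0)             # resolvent of the normal cone
sol = tseng_fbf(upsilon, project, np.zeros(2), lam=0.9 / np.linalg.norm(M, 2))
```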
In 1964, Polyak [Reference Polyak31] introduced the inertial extrapolation method as a useful tool for speeding up the rate of convergence of iterative methods. Alvarez and Attouch [Reference Alvarez and Attouch3] combined the heavy-ball method with the proximal point algorithm to approximate a zero of a maximal monotone operator. They defined their method as follows:
where $\{\theta_{n}\} \subset [0,1)$ and $\{\lambda_n\}$ is non-decreasing with $\sum\limits_{n=1}^{\infty}\theta_n\|x_n-x_{n-1}\| \lt \infty.$ They established that the sequence generated by Equation (1.6) converges weakly to a zero of the monotone operator B. For growing interest in this direction, see [Reference Abass, Godwin, Narain and Darvish2, Reference Oyewole and Reich30, Reference Tan and Cho39].
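As a concrete and purely illustrative instance, the sketch below runs the inertial proximal point iteration $x_{n+1}=J_{\lambda_n}^{B}(x_n+\theta_n(x_n-x_{n-1}))$ for the hypothetical maximal monotone operator $B(x)=Ax-b$ with A symmetric positive semidefinite, whose resolvent reduces to a linear solve; the fixed parameter choices are assumptions made for the sketch only.

```python
import numpy as np

def inertial_proximal_point(A, b, theta=0.3, lam=1.0, n_iter=200):
    # Resolvent of B(x) = A x - b:  J_lam^B(z) = (I + lam*A)^{-1}(z + lam*b).
    n = A.shape[0]
    J = np.linalg.inv(np.eye(n) + lam * A)
    x_prev = x = np.zeros(n)
    for _ in range(n_iter):
        w = x + theta * (x - x_prev)        # heavy-ball (inertial) extrapolation
        x_prev, x = x, J @ (w + lam * b)    # proximal (backward) step
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]]); b = np.array([1.0, 1.0])
print(inertial_proximal_point(A, b))        # approaches the zero of B, i.e. the solution of A x = b
```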
Motivated by the recent interest in iterative methods with inertial extrapolation studied in [Reference Abass, Godwin, Narain and Darvish2, Reference Cholamjiak, Thong and Cho9, Reference Dong, Jiang, Cholamjiak and Shehu12] and other related articles, Khammahawong et al. [Reference Khammahawong, Kumam, Chaipunya and Martinez20] very recently proposed the following inertial Mann iterative method for approximating fixed points of non-expansive mappings on Hadamard manifolds: choose $q_0, q_1 \in \mathbb{M}$
where T is a non-expansive mapping, and $\{\lambda_k\} \subset [0, \infty)$ and $\{\gamma_{k}\} \subset (0,1)$ satisfy the following conditions:
(i) $0 \leq \lambda_k \leq \lambda \lt 1,~\forall~k \geq 1,$
(ii) $\sum\limits_{k=1}^{\infty}\lambda_kd^2(q_k, q_{k-1}) \lt \infty$,
(iii) $0 \lt \gamma_{1} \leq \gamma_{k} \leq \gamma_{2} \lt 1,~\forall~k \geq 1,$
(iv) $\sum\limits_{k=1}^{\infty}\gamma_{k} \lt \infty.$
In recent years, inertial techniques [Reference Abass, Godwin, Narain and Darvish2, Reference Dong, Jiang, Cholamjiak and Shehu12] have been introduced to speed up the rate of convergence of iterative algorithms. Recall that the fundamental characteristic of inertial methods is that the next iterate is determined by the previous two (or more) iterates, and this small change can greatly improve the convergence of the non-accelerated methods. The results of several computational tests and applications testify that inertial methods can significantly improve the rate of convergence of non-accelerated methods. However, the monotonicity of the iterative sequence generated by inertial methods is lost, which may result in inertial methods sometimes converging more slowly than their non-inertial versions. To overcome this drawback, Mu and Peng [Reference Mu and Peng28] proposed an alternated inertial method that recovers the Fejér monotonicity of the even subsequence with respect to the solution set of the problem. The main idea of the alternated inertial method is to add inertial effects only at odd iteration steps and not at even steps, which is the origin of the term ‘alternated’. Several authors have employed the alternated inertial method to solve optimization problems such as variational inequalities and split feasibility problems (see [Reference Henderickx and Olshevsky17, Reference Iutzeler and Malick18, Reference Mu and Peng28]). The advantages of these methods have been discussed in the literature (see [Reference Mu and Peng28]).
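To make the alternation concrete, the following minimal Euclidean sketch applies the extrapolation only at odd iterations on top of a generic forward-backward base step; the operators used are hypothetical placeholders, not the Hadamard manifold algorithm proposed in § 3.

```python
import numpy as np

def alternated_inertial_fb(upsilon, resolvent, x0, lam, beta=0.5, n_iter=200):
    """Alternated inertial idea of Mu and Peng [28], sketched on a Euclidean
    forward-backward step: w_k = x_k + beta*(x_k - x_{k-1}) only when k is odd,
    so the even subsequence retains Fejer monotonicity with respect to the solution set."""
    x_prev = x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        w = x + beta * (x - x_prev) if k % 2 == 1 else x   # inertia at odd steps only
        x_prev, x = x, resolvent(w - lam * upsilon(w))     # base forward-backward step
    return x

# hypothetical data: Upsilon(x) = x - 1 and Phi the normal cone of [0, 2]; the solution is x = 1
x_hat = alternated_inertial_fb(lambda x: x - 1.0, lambda z: np.clip(z, 0.0, 2.0),
                               x0=np.array([5.0]), lam=0.5)
```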
Question: Can we modify Algorithm 1.1 so that we can employ a different step size in each iteration and improve the computational efficiency of the algorithm in nonlinear spaces?
Inspired by the results of Gibali and Thong [Reference Gibali and Thong16], Sunthrayuth et al. [Reference Sunthrayuth, Pholasa and Cholamjiak37], Khammahawong et al. [Reference Khammahawong, Kumam, Chaipunya and Martinez20] and Ansari et al. [Reference Ansari, Babu and Ali4], we introduce an alternated inertial Tseng-type method with a self-adaptive procedure which generates dynamic step sizes in the setting of Hadamard manifolds. We prove that the sequence generated by our iterative method converges linearly to a solution of the inclusion problem. It is worth mentioning that our iterative method is independent of the Lipschitz constant of the underlying operator. Our method extends and generalizes many related results from linear spaces to Riemannian manifolds.
Remark 1.2. We would like to emphasize that approximating a solution of the VIP has possible real-life applications to mathematical models whose constraints can be expressed as fixed points of some nonlinear mappings. In fact, such solutions are very useful in computing medians and means of trees, which are important in computational phylogenetics, diffusion tensor imaging, consensus algorithms and the modelling of airway systems in human lungs and blood vessels (see [Reference Baćak6]).
We present some of the contributions of our result as follows:
(i) The result in this article generalizes the results in [Reference Abass1, Reference Dong, Jiang, Cholamjiak and Shehu12, Reference Gibali and Thong16, Reference Khammahawong, Kumam, Chaipunya and Martinez20, Reference Sunthrayuth, Pholasa and Cholamjiak37, Reference Thong and Cholamjiak40, Reference Tseng41] from linear spaces to nonlinear spaces.
(ii) The choice of the inertial factor in our proposed method appears weaker than the choice in [Reference Khammahawong, Chaipunya and Kumam19]. The inertial factor $\beta_k=1$ is allowed in our method. In addition, we are able to dispense with the conditions (i)–(iv) imposed on the parameters in [Reference Khammahawong, Kumam, Chaipunya and Martinez20].
(iii) We prove that our proposed method converges linearly to the solution of VIP (1.1).
(iv) Our method uses a self-adaptive step size procedure which is allowed to increase from iteration to iteration and which is independent of the Lipschitz constant of the underlying operator, unlike the result of Shehu [Reference Shehu36], where knowledge of the Lipschitz constant is required.
(v) The theoretical result in this article is supported by a well-designed computational experiment which shows that our proposed method is very reliable and performs better than the non-inertial and non-alternated versions.
2. Preliminaries
Let $\mathbb{M}$ be an m-dimensional manifold, let $x \in \mathbb{M}$ and let $T_x \mathbb{M}$ be the tangent space of $\mathbb{M}$ at $x \in \mathbb{M}$. We denote by $T\mathbb{M}=\bigcup_{x \in \mathbb{M}}T_x \mathbb{M}$ the tangent bundle of $\mathbb{M}$. A map $\mathcal{R}=\langle \cdot, \cdot \rangle$ that assigns to each $x \in \mathbb{M}$ an inner product $\langle \cdot,\cdot\rangle_x :T_x \mathbb{M} \times T_x \mathbb{M} \to \mathbb{R}$ is called a Riemannian metric on $\mathbb{M}.$ The corresponding norm induced by the inner product $\langle \cdot,\cdot \rangle_x $ on $T_x \mathbb{M}$ is denoted by $\|\cdot \|_x;$ we will drop the subscript x and write $\|\cdot\|$ for this norm. A differentiable manifold $\mathbb{M}$ endowed with a Riemannian metric $\mathcal{R}=\langle \cdot,\cdot \rangle$ is called a Riemannian manifold. In what follows, we denote the Riemannian metric $\mathcal{R}$ by $\langle \cdot,\cdot \rangle$ when no confusion arises. Given a piecewise smooth curve $\gamma:[a,b] \to \mathbb{M}$ joining x to y (that is, $\gamma(a)=x$ and $\gamma(b)=y$), we define the length $l(\gamma)$ of γ by $l(\gamma) := \int_{a}^{b}\|\gamma^{\prime}(t)\|\,dt$. The Riemannian distance $d(x,y)$ is the minimal length over the set of all such curves joining x to y. The metric topology induced by d coincides with the original topology on $\mathbb{M}$. We denote by $\nabla$ the Levi–Civita connection associated with the Riemannian metric [Reference Sakai35].
Let γ be a smooth curve in $\mathbb{M}$. A vector field X along γ is said to be parallel if $\nabla_{\gamma^{\prime}}X=\textbf{0}$, where $\textbf{0}$ is the zero tangent vector. If $\gamma^{\prime}$ itself is parallel along γ, then we say that γ is a geodesic, and in this case $\|\gamma^{\prime}\|$ is constant. If $\|\gamma^{\prime}\|=1$, then the geodesic γ is said to be normalized. A geodesic joining x to y in $\mathbb{M}$ is called a minimizing geodesic if its length equals $d(x,y)$. A Riemannian manifold $\mathbb{M}$ equipped with a Riemannian distance d is a metric space $(\mathbb{M},d)$. A Riemannian manifold $\mathbb{M}$ is said to be complete if for all $x \in \mathbb{M}$, all geodesics emanating from x are defined for all $t \in \mathbb{R}$. The Hopf–Rinow theorem [Reference Sakai35] asserts that if $\mathbb{M}$ is complete, then any pair of points in $\mathbb{M}$ can be joined by a minimizing geodesic. Moreover, if $(\mathbb{M},d)$ is a complete metric space, then every bounded and closed subset of $\mathbb{M}$ is compact. If $\mathbb{M}$ is a complete Riemannian manifold, then the exponential map $\exp_{x}: T_x \mathbb{M} \to \mathbb{M}$ at $x \in \mathbb{M}$ is defined by
where $\gamma_v(\cdot,x)$ is the geodesic starting at x with velocity v (that is, $\gamma_v(0,x)=x$ and $\gamma_{v}^{\prime}(0,x)=v$). Then, for any $t,$ we have $\exp_{x}tv=\gamma_v(t,x)$ and $\exp_{x}\textbf{0}=\gamma_v(0,x)=x.$ Note that the mapping $\exp_x$ is differentiable on $T_x \mathbb{M}$ for every $x \in \mathbb{M}.$ The exponential map $\exp_{x}$ has an inverse $\exp_{x}^{-1}: \mathbb{M} \to T_x \mathbb{M}.$ For any $x,y \in \mathbb{M},$ we have $d(x,y)=\|\exp_{y}^{-1}x\|=\|\exp_{x}^{-1}y\|$ (see [Reference Sakai35] for more details). The parallel transport $\Gamma_{\gamma,\gamma(b),\gamma(a)}: T_{\gamma(a)}\mathbb{M} \to T_{\gamma(b)}\mathbb{M}$ on the tangent bundle $T\mathbb{M}$ along $\gamma:[a,b] \to \mathbb{M}$ with respect to $\nabla$ is defined by
where F is the unique vector field along γ such that $\nabla_{\gamma^{\prime}(t)}F=\textbf{0}$ for all $t\in [a,b]$ and $F(\gamma(a))=v.$ If γ is a minimizing geodesic joining x to $y,$ then we write $\Gamma_{y,x}$ instead of $\Gamma_{\gamma,y,x}.$ Note that for every $a,b,r,s \in \mathbb{R},$ we have
Also, $\Gamma_{\gamma(b),\gamma(a)}$ is an isometry from $T_{\gamma(a)}\mathbb{M}$ to $T_{\gamma(b)}\mathbb{M},$ that is, the parallel transport preserves the inner product
We now give some examples of Hadamard manifolds.
Space 1: Let $\mathbb{R}_{++}^{m}$ be the product space $\mathbb{R}_{++}^m := \{(x_1,x_2, \cdots, x_m): x_i \in \mathbb{R}_{++},~i=1,2, \cdots, m\}$. Let $\mathbb{M} = (\mathbb{R}_{++}^{m}, \langle \cdot,\cdot \rangle)$ be the m-dimensional Hadamard manifold with the Riemannian metric $\langle p,q \rangle_x =p^{T} G(x) q$, where $G(x)=\operatorname{diag}(x_{1}^{-2},\cdots,x_{m}^{-2})$, and the distance $d(x,y)=\big\|\ln\frac{x}{y}\big\|=\Big(\sum\limits_{i=1}^{m}\ln^{2}\frac{x_i}{y_i}\Big)^{1/2},$ where $x,y \in \mathbb{M}$ with $x=\{x_i\}_{i=1}^{m}$ and $y=\{y_i\}_{i=1}^{m}$.
A subset $K \subset \mathbb{M}$ is said to be convex if for any two points $x,y \in K,$ the geodesic γ joining x to y is contained in $K.$ That is, if $\gamma:[a,b] \to \mathbb{M}$ is a geodesic such that $x=\gamma(a)$ and $y=\gamma(b),$ then $\gamma((1-t)a+tb) \in K$ for all $t \in [0,1].$ A complete simply connected Riemannian manifold of non-positive sectional curvature is called a Hadamard manifold. We denote by $\mathbb{M}$ a finite dimensional Hadamard manifold. Henceforth, unless otherwise stated, we represent by K a non-empty, closed and convex subset of $\mathbb{M}.$
Definition 2.1. Let Φ be a multi-valued vector field with closed and convex $Dom(\Phi).$ We say that $p \in Dom(\Phi)$ is a singular point of Φ if $\textbf{0} \in \Phi(p).$
Next, we introduce the concepts of monotonicity of vector fields in the setting of Hadamard manifolds. Suppose C is a non-empty, closed and convex subset of $\mathbb{M}$. Let $\mathcal{H}(C)$ denote the set of all single-valued vector fields $\Upsilon:C \to T\mathbb{M}$ such that $\Upsilon(p) \in T_{p}\mathbb{M}$ for each $p \in C,$ and let Z(C) denote the set of all multi-valued vector fields $\Phi: C \to 2^{T\mathbb{M}}$ such that $\Phi(p)\subseteq T_{p}\mathbb{M}$ for each $p \in C.$ We denote by $Dom(\Phi)$ the domain of Φ, defined by $Dom(\Phi)=\{p \in C: \Phi(p)\neq \emptyset\}.$
We now collect some results and definitions which we shall use in the next section.
Definition 2.2. [Reference Wang, López, Martín-Márquez and Li42] A vector field $\Upsilon \in \mathcal{H}(C)$ is said to be
(i) monotone, if
\begin{align*} \langle \Upsilon(p), \exp_{p}^{-1}q\rangle \leq \langle \Upsilon(q), -\exp_{q}^{-1}p\rangle,~\forall~p,q \in C, \end{align*}(ii) L-Lipschitz continuous if there exists L > 0 such that
\begin{align*} \|\Gamma_{p,q}\Upsilon(q)-\Upsilon(p)\| \leq Ld(p,q),~\forall~p, q \in C, \end{align*}(iii) strongly monotone if there exists $\mu \gt 0$ such that
\begin{align*} \langle \Upsilon(p), \exp_{p}^{-1}q \rangle +\langle \Upsilon(q), \exp_{q}^{-1}p\rangle \leq -\mu d^2(p,q),~\forall~p, q \in C. \end{align*}
Definition 2.3. [Reference Cruz, Ferreira and Pérez11] A vector field $\Phi \in Z(C)$ is said to be
(i) monotone, if for all $p, q \in Dom(\Phi)$
\begin{align*} \langle u, \exp_{p}^{-1}q\rangle \leq \langle v, -\exp_{q}^{-1}p\rangle,~\forall~u \in \Phi(p)~ \text{and}~\forall~ v \in \Phi(q), \end{align*}(ii) maximal monotone if it is monotone and, for all $p \in C$ and $u \in T_{p}\mathbb{M},$ the condition
\begin{align*} \langle u, \exp_{p}^{-1}q\rangle \leq \langle v, -\exp_{q}^{-1}p\rangle,~\forall~q \in Dom(\Phi)~\text{and}~ \forall~v \in \Phi(q)~\text{implies that}~u \in \Phi(p). \end{align*}
Definition 2.4. [Reference Ferreira and Oliveira14] Let C be a non-empty, closed and convex subset of $\mathbb{M}$ and let $\{x_n\}$ be a sequence in $\mathbb{M}$. Then $\{x_n\}$ is said to be Fejér convergent with respect to C if for all $p \in C$ and $n \in \mathbb{N},$ $d(x_{n+1}, p) \leq d(x_n, p).$
Definition 2.5. [Reference Li, López and Martín-Márquez21] Let $\Phi \in Z(C)$ be a vector field and $x_0 \in C.$ Then Φ is said to be upper Kuratowski semicontinuous at $x_0$ if for any sequences $\{x_n\} \subseteq C$ and $\{v_n\} \subset T\mathbb{M}$ with each $v_n \in \Phi(x_n),$ the relations $\lim\limits_{n\to \infty}x_n=x_0$ and $\lim\limits_{n\to \infty} v_n=v_0$ imply that $v_0 \in \Phi(x_0)$. Moreover, Φ is said to be upper Kuratowski semicontinuous on C if it is upper Kuratowski semicontinuous at each $x \in C.$
Lemma 2.6. [Reference Ferreira and Oliveira14] Let C be a non-empty, closed and convex subset of $\mathbb{M}$ and let $\{x_n\} \subset \mathbb{M}$ be a sequence that is Fejér convergent with respect to $C.$ Then the following hold:
(i) For every $p \in C,~ d(x_n, p)$ converges,
(ii) $\{x_n\}$ is bounded,
(iii) Assume that every cluster point of $\{x_n\}$ belongs to C, then $\{x_n\}$ converges to a point in C.
Proposition 2.7. [Reference Sakai35]. Let $x \in \mathbb{M}$. The exponential mapping $\exp_{x}: T_x \mathbb{M} \to \mathbb{M}$ is a diffeomorphism. For any two points $x,y \in \mathbb{M},$ there exists a unique normalized geodesic joining x to $y,$ which is given by
A geodesic triangle $\Delta(p,q,r)$ of a Riemannian manifold $\mathbb{M}$ is a set containing three points $p,$ $q,$ r and three minimizing geodesics joining these points.
Proposition 2.8. [Reference Sakai35]. Let $\Delta(p,q,r)$ be a geodesic triangle in $\mathbb{M}$. Then
and
Moreover, if θ is the angle at $p,$ then we have
Also,
Remark 2.9. [Reference Li, López and Martín-Márquez21] If $x, y \in \mathbb{M}$ and $v \in T_{y}\mathbb{M},$ then
Lemma 2.10. [Reference Khammahawong, Chaipunya and Kumam19] Let $\mathbb{M}$ be a Hadamard manifold and let $u,v, w \in \mathbb{M}.$ Then
For any $x \in \mathbb{M}$ and any non-empty, closed and convex subset $C \subset \mathbb{M},$ there exists a unique point $y \in C$ such that $d(x,y) \leq d(x,z)$ for all $z \in C$. This unique point y is called the nearest point projection of x onto the closed and convex set C and is denoted by $P_C(x).$
Lemma 2.11. [Reference Wang, López, Martín-Márquez and Li42]. For any $x \in \mathbb{M},$ there exists a unique nearest point projection $y=P_{C}(x).$ Furthermore, the following inequality holds:
Lemma 2.12. [Reference Li, López and Martín-Márquez21]. Let $x_0 \in \mathbb{M}$ and $\{x_n\} \subset \mathbb{M}$ with $x_n \to x_0.$ Then the following assertions hold:
(i) For any $y \in \mathbb{M},$ we have $\exp_{x_n}^{-1}y \to \exp_{x_0}^{-1}y$ and $\exp_{y}^{-1}x_n \to \exp_{y}^{-1}x_0,$
(ii) If $v_n \in T_{x_n}\mathbb{M}$ and $v_n \to v_0,$ then $v_0 \in T_{x_0}\mathbb{M},$
(iii) Given $u_n,v_n \in T_{x_n}\mathbb{M}$ and $u_0, v_0 \in T_{x_0}\mathbb{M}$, if $u_n \to u_0$ and $v_n \to v_0,$ then $\langle u_n, v_n\rangle \to \langle u_0, v_0\rangle,$
(iv) For any $u \in T_{x_0}\mathbb{M},$ the function $F: \mathbb{M} \to T\mathbb{M},$ defined by $F(x)=\Gamma_{x, x_0}u$ for each $x \in \mathbb{M}$ is continuous on $\mathbb{M}$.
The next lemma presents the relationship between triangles in $\mathbb{R}^2$ and geodesic triangles in Riemannian manifolds (see [Reference Bridson and Haefliger8]).
Lemma 2.13. [Reference Bridson and Haefliger8]. Let $\Delta(x_1,x_2,x_3)$ be a geodesic triangle in $\mathbb{M}.$ Then there exists a triangle $\Delta(\bar{x}_1,\bar{x}_2,\bar{x}_3)$ corresponding to $\Delta(x_1,x_2,x_3)$ such that $d(x_i,x_{i+1})=\|\bar{x}_{i}-\bar{x}_{i+1}\|$ with the indices taken modulo 3. This triangle is unique up to isometries of $\mathbb{R}^2.$
The triangle $\Delta(\bar{x}_1,\bar{x}_2,\bar{x}_3)$ in Lemma 2.13 is said to be the comparison triangle for $\Delta(x_1,x_2,x_3) \subset \mathbb{M}.$ The points $\bar{x}_1,$ $\bar{x}_2$ and $\bar{x}_3$ are called comparison points to the points $x_1,$ $x_2$ and $x_3$ in $\mathbb{M}.$
A function $h: \mathbb{M} \to \mathbb{R}$ is said to be geodesically convex if for any geodesic γ in $\mathbb{M},$ the composition $h \circ \gamma: [u,v] \to \mathbb{R}$ is convex, that is,
Lemma 2.14. [Reference Li, López and Martín-Márquez21] Let $\Delta(p,q,r)$ be a geodesic triangle in a Hadamard manifold $\mathbb{M}$ and $\Delta(p^{\prime}, q^{\prime}, r^{\prime})$ be its comparison triangle.
(i) Let $\alpha, \beta, \gamma$ (respectively $\alpha^{\prime}, \beta^{\prime}, \gamma^{\prime})$ be the angles of $\Delta(p,q,r)$ (respectively $\Delta(p^{\prime}, q^{\prime}, r^{\prime}))$ at the vertices p,q,r (respectively $p^{\prime}, q^{\prime}, r^{\prime})$. Then, the following inequalities hold:
\begin{equation*}\alpha^{\prime} \geq \alpha, ~\beta^{\prime} \geq \beta,~\gamma^{\prime}\geq \gamma,\end{equation*}(ii) Let z be a point on the geodesic joining p to q and $z^{\prime}$ its comparison point in the interval $[p^{\prime}, q^{\prime}].$ Suppose that $d(z,p)=\|z^{\prime}-p^{\prime}\|$ and $d(z,q)=\|z^{\prime}-q^{\prime}\|$. Then the following inequality holds:
\begin{equation*}d(z,r)\leq \|z^{\prime}-r^{\prime}\|.\end{equation*}
Lemma 2.15. [Reference Li, López and Martín-Márquez21] Let $x_0 \in \mathbb{M}$ and $\{x_n\} \subset \mathbb{M}$ be such that $x_n \to x_0.$ Then, for any $y \in \mathbb{M},$ we have $\exp_{x_n}^{-1}y \to \exp_{x_0}^{-1}y$ and $\exp_{y}^{-1}x_n \to \exp_{y}^{-1}x_0.$
The following propositions (see [Reference Ferreira and Oliveira14]) are very useful in our convergence analysis:
Proposition 2.16. Let M be a Hadamard manifold and $d: M \times M \to \mathbb{R}$ be the distance function. Then the function d is convex with respect to the product Riemannian metric. In other words, given any pair of geodesics $\gamma_1: [0,1] \to M$ and $\gamma_2 : [0,1] \to M,$ for all $t \in [0,1]$ we have
\begin{align*} d(\gamma_1(t), \gamma_2(t)) \leq (1-t)\,d(\gamma_1(0), \gamma_2(0)) + t\,d(\gamma_1(1), \gamma_2(1)). \end{align*}
In particular, for each $y \in M,$ the function $d(\cdot,y): M \to \mathbb{R}$ is a convex function.
Proposition 2.17. Let $\mathbb{M}$ be a Hadamard manifold and $x \in \mathbb{M}$. The map $\Phi_x(y)=d^2(x,y)$ satisfies the following:
(1) $\Phi_x$ is convex. Indeed, for any geodesic $\gamma: [0,1] \to \mathbb{M}$, the following inequality holds for all $t \in [0,1]:$
\begin{align*} d^2(x, \gamma(t)) \leq (1-t)d^2(x, \gamma(0)) + td^2(x, \gamma(1))-t(1-t)d^2(\gamma(0), \gamma(1)). \end{align*}(2) $\Phi_x$ is smooth. Moreover, $\partial \Phi_x(y)=-2\exp_{y}^{-1}x.$
Proposition 2.18. Let M be a Hadamard manifold and $x \in M.$ Let $\rho_{x}(y)=\frac{1}{2}d^2(x,y)$. Then $\rho_{x}$ is strictly convex and its gradient at y is given by $\operatorname{grad}\rho_{x}(y)=-\exp_{y}^{-1}x.$
3. Main result
In this section, we introduce an alternated inertial Tseng-type method for solving the variational inclusion problem (1.1) on Hadamard manifolds. We state the following assumptions:
(B1) $\Upsilon \in \mathcal{H}(K)$ is monotone and L-Lipschitz continuous, and $\Phi \in Z(K)$ is maximal monotone.
(B2) The solution set $\Omega:= (\Upsilon+\Phi)^{-1}(0)$ is non-empty.
(D1) $\{\nu_k\}$ is a sequence of non-negative real numbers such that $\sum\limits_{k=1}^{\infty}\nu_k \lt \infty.$
We start by establishing a technical lemma useful to our analysis.
Lemma 3.4. Let $\{q_k\}$ be the sequence generated by Algorithm 3.3 and let $\{\gamma_{k}\}$ be the sequence generated by Equation (3.4). Then $\lim\limits_{k \to \infty}\gamma_{k}=\gamma$ with $\gamma \in \bigg[\min\big\{\frac{\mu}{L},~ \gamma_{0}\big\}, \gamma_{0}+ \nu \bigg],$ where $\nu=\sum\limits_{k=0}^{\infty}\nu_k.$
Proof. Since ϒ is L-Lipschitz continuous with constant L \gt 0, in the case $\Gamma_{w_k, u_k}\Upsilon(u_k)-\Upsilon(w_k) \neq 0,$ we obtain
By the definition of $\gamma_{k+1}$ in Equation (3.4) and mathematical induction, we have that the sequence $\{\gamma_{k}\}$ is bounded above by $\gamma_{0}+ \nu$ and below by $\min \big\{\frac{\mu}{L}, \gamma_0\big\}.$ The rest of the proof is similar to that of Lemma 3.1 in [Reference Liu and Yang25], so we omit it.
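Although the rule (3.4) is not reproduced here, the bounds in Lemma 3.4 are of the type produced by the self-adaptive update of Liu and Yang [Reference Liu and Yang25]; the following sketch shows one plausible form of such an update and should be read as an assumption patterned after that rule, not as a verbatim transcription of (3.4).

```python
def next_step_size(gamma_k, nu_k, mu, d_uw, norm_diff):
    """One plausible self-adaptive update consistent with Lemma 3.4 (assumed form).

    gamma_k   : current step size gamma_k
    nu_k      : k-th term of the summable non-negative sequence {nu_k}
    mu        : parameter in (0, 1)
    d_uw      : d(u_k, w_k)
    norm_diff : ||Gamma_{w_k,u_k} Upsilon(u_k) - Upsilon(w_k)||
    """
    if norm_diff > 0:
        # Lipschitz continuity gives mu*d_uw/norm_diff >= mu/L, so gamma_k stays
        # between min{mu/L, gamma_0} and gamma_0 + sum_k nu_k, and may increase
        # from one iteration to the next through the term gamma_k + nu_k.
        return min(mu * d_uw / norm_diff, gamma_k + nu_k)
    return gamma_k + nu_k
```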
Theorem 3.5. Suppose that Assumptions $(B1)$ and $(B2)$ hold, then the sequence $\{q_k\}$ generated by Algorithm 3.3 converges to an element in Ω.
Proof. Let k be a non-negative integer and pick a point $p \in \Omega.$ Then from Equation (3.2), we have $\frac{1}{\gamma_{2k+1}}\exp^{-1}_{w_{2k+1}}u_{2k+1}-\Gamma_{w_{2k+1},u_{2k+1}}\Upsilon(u_{2k+1}) \in \Phi(w_{2k+1}).$ The monotonicity of Φ then yields
Since ϒ is a monotone vector field, we have
Combining Equations (3.6) and (3.7), we obtain
which further yields
Fix $k \in \mathbb{N}$. Let $\Delta(u_{2k+1}, w_{2k+1}, p) \subseteq \mathbb{M}$ be a geodesic triangle with vertices $u_{2k+1},~ w_{2k+1}$ and p, and let $\Delta(\overline{u_{2k+1}}, \overline{w_{2k+1}}, \overline{p}) \subseteq \mathbb{R}^2$ be the corresponding comparison triangle, so that $d(u_{2k+1}, p)=\|\overline{u_{2k+1}}-\overline{p}\|,~ d(w_{2k+1}, p)=\|\overline{w_{2k+1}}-\overline{p}\|$ and $d(w_{2k+1}, u_{2k+1})=\|\overline{w_{2k+1}}-\overline{u_{2k+1}}\|$. In addition, let $\Delta(q_{2k+2}, w_{2k+1}, p) \subseteq \mathbb{M}$ be a geodesic triangle with vertices $q_{2k+2},~ w_{2k+1}$ and p, and let $\Delta(\overline{q_{2k+2}}, \overline{w_{2k+1}}, \overline{p}) \subseteq \mathbb{R}^2$ be its comparison triangle. It follows that
Let θ and $\overline{\theta}$ be the angles at the vertices $w_{2k+1}$ and $\overline{w_{2k+1}}$, respectively. By Lemma 2.14 (i), we get $\overline{\theta} \geq \theta$. Therefore, we obtain from Lemma 2.13 and Equation (2.4) that
Using the same approach as in Equation (3.10), we get
Also,
On substituting Equations (3.10), (3.11) and (3.12) in Equation (3.9), we obtain
Applying Remark 2.9, Lemma 2.10 and Equation (2.1), we obtain from Equation (3.13) that
On substituting Equations (3.4) and (3.8) in Equation (3.14), we get
Following the same process as in Equation (3.15), we have
Fix $k \in \mathbb{N}$ and consider the geodesic triangle $\Delta(q_{2k+1}, q_{2k}, p)$ in $\mathbb{M}$ with vertices $q_{2k+1}, q_{2k}, p$ and a corresponding comparison triangle $\Delta(\overline{q_{2k+1}}, \overline{q_{2k}}, \overline{p}) \subseteq \mathbb{R}^2.$ Hence, we obtain from Lemma 2.13 that $d(q_{2k+1},p)=\|\overline{q_{2k+1}}-\overline{p}\|, ~d(q_{2k+1}, q_{2k})=\|\overline{q_{2k+1}}-\overline{q_{2k}}\|$ and $d(q_{2k}, p)=\|\overline{q_{2k}}-\overline{p}\|$. Also, from Algorithm 3.3, the comparison point for $u_{2k+1}$ is $\overline{u_{2k+1}}=\overline{q_{2k+1}}+ \beta_{2k+1}(\overline{q_{2k+1}}-\overline{q_{2k}}).$ Thus
It is clear from Algorithm 3.3 that the comparison point for $q_{2k+1}=\exp_{w_{2k}}(\gamma_{2k}(\Gamma_{w_{2k}, u_{2k}}\Upsilon(u_{2k})-\Upsilon(w_{2k})))$ is $\overline{q_{2k+1}}= \overline{w_{2k}}-\gamma_{2k}(\Upsilon(\overline{w_{2k}})-\Upsilon( \overline{u_{2k}})).$ Now, since $d(q_{2k+1}, w_{2k})=\|\overline{q_{2k+1}}-\overline{w_{2k}}\|$, we obtain from the fact that the exponential map is a diffeomorphism that
Hence, we obtain
On substituting Equations (3.16), (3.17) and (3.18) into Equation (3.15), we obtain
Since $\gamma_{k} \to \gamma \gt 0$, we obtain
Now, fix c such that
then there exists $\eta_0 \gt 1$ such that
Thus, we obtain from Equation (3.19) that
Hence, we get
Therefore, we arrive at the conclusion that $\lim\limits_{k \to \infty}d(q_{2k}, p)$ exists and $\{q_{2k}\}$ is bounded. Thus, from Equation (3.20), we obtain
And consequently, we get
Using Equations (3.21) and (3.22), we have
Since $\{q_{2k}\}$ is bounded, there exists a subsequence $\{q_{2k_{l}}\}$ of $\{q_{2k}\}$ which converges to a cluster point g of $\{q_{2k}\}.$ From Equations (3.21) and (3.23), the corresponding subsequences $\{w_{2k_{l}}\}$ of $\{w_{2k}\}$ and $\{u_{2k_{l}}\}$ of $\{u_{2k}\}$ also converge to g. Next, we show that $g \in \Omega.$
By step 2 of Algorithm 3.3, we have
Thus, applying Equation (3.22), we get
so,
Since ϒ is a Lipschitz continuous vector field and $u_{2k_{l}} \to g$ as $l \to \infty$, by combining Equations (3.24) and (3.25) we obtain
Since Φ is a maximal monotone vector field, it is upper Kuratowski semicontinuous. Hence, $-\Upsilon (g) \in \Phi(g),$ that is, $g \in \Omega.$ Therefore, we conclude by Lemma 2.6 (iii) that the sequence $\{q_{2k}\}$ generated by Algorithm 3.3 converges to an element of $\Omega.$
We now establish the linear convergence of Algorithm 3.3.
Theorem 3.6. Suppose that $\Upsilon \in \mathcal{H}(K)$ is σ-strongly monotone and L-Lipschitz continuous and $\Phi \in Z(K)$ is maximal monotone. Then the sequence $\{q_k\}$ generated by Algorithm 3.3 converges linearly to a point in $\Omega,$ where Ω remains as defined in Assumption $(B2)$.
Proof. From Equation (3.6), we have
Since ϒ is a σ-strongly monotone vector field, then
By combining Equations (3.27) and (3.28), we obtain
which also implies that
Thus,
It follows that
Applying the triangle inequality and Equation (3.30), we have
Using $u_{2k}={q_{2k}},$ we obtain
On substituting Equation (3.31) into Equation (3.19), we get
Set $\chi:=\liminf\limits_{k \to \infty} \bigg(1+\frac{1+ \frac{\mu \gamma_{2k}}{\gamma_{2k+1}}}{\sigma \gamma_{2k}}\bigg)^{-2}$; then there exists $M \geq 1$ such that
Thus
Hence we obtain from Equation (3.32) that
We deduce from Equations (3.16) and (3.31) that
Using Equations (3.33) and (3.34), we get
We conclude from Equations (3.33) and (3.35) that $\{q_k\}$ converges linearly to p.
4. Numerical example
In this section, we report some numerical examples to illustrate the efficiency of our method.
Let $\mathbb{R}^{3}_{++}=\{x=(x_1,x_2,x_3) \in \mathbb{R}^3 : x_i \gt 0, i=1,2,3\}$ and let $\mathbb{M}=(\mathbb{R}^{3}_{++},\langle \cdot,\cdot \rangle ) $ be the Riemannian manifold whose Riemannian metric is defined by
where G(x) is the diagonal matrix $G(x)=\operatorname{diag}(x_{1}^{-2},x_{2}^{-2},x_{3}^{-2}).$ The Riemannian distance $d: \mathbb{M} \times \mathbb{M} \to \mathbb{R}_{+}$ is defined by
The sectional curvature of the Riemannian manifold $\mathbb{M}$ is 0. Thus $\mathbb{M}=(\mathbb{R}_{++}^{3},\langle \cdot,\cdot \rangle )$ is a Hadamard manifold. Let $x=(x_1,x_2,x_3) \in \mathbb{M}.$ Then, the exponential map $\exp_{x}: T_x \mathbb{M} \to \mathbb{M}$ is defined by
for all $u=(u_1,u_2,u_3) \in T_x \mathbb{M}.$ The inverse of the exponential map, $\exp_{x}^{-1}: \mathbb{M}\to T_x \mathbb{M}$ is defined by
for all $x,y \in \mathbb{M}.$ The parallel transport $\Gamma_{y,x}: T_x \mathbb{M} \to T_y \mathbb{M}$ is defined by
for all $u=(u_1,u_2,u_3) \in T_x \mathbb{M}.$ Let $C=\{x=(x_1,x_2,x_3) \in \mathbb{M}: 0 \lt x_i \le 1, ~~\mbox{for}~~i=1,2,3 \}$ be a geodesic convex subset of $\mathbb{M}.$ Let $\Upsilon:\mathbb{M} \to T\mathbb{M}$ be defined by
and $\Phi:\mathbb{M} \to 2^{T\mathbb{M}}$ be defined by
Then Φ is a maximal monotone vector field on C and ϒ is a continuous and monotone vector field on C (see [Reference Sahu, Babu and Sharma33, Example 1]). By a simple calculation, we see that $w_k$ in Algorithm 3.3 can be expressed as
Note that $(\Phi+\Upsilon)^{-1}(0)=\{(1,\frac{1}{e},\frac{1}{2})\}.$ Choose $\gamma_1=1.7,$ $\beta_k=0.5,$ $\nu_k=\frac{1}{k\sqrt{k}}$ and $\mu=\frac{1}{2}.$ We terminate the execution of the process when $E_n=d(x_{n+1},x_n) \lt 10^{-3}$ and compare Algorithm 3.3 with the non-alternated and non-accelerated versions of our algorithm. The results of this experiment for the following different choices of the initial values $x_0$ and $x_1$ are shown in Figure 1:
(Case 1) $x_0=[0.7,0.7,0.7]$ and $x_1=[0.8,0.8,0.8],$
(Case 2) $x_0=[2,1,2]$ and $x_1=[2,2,1],$
(Case 3) $x_0=[2,2,2]$ and $x_1=[1,1,1],$
(Case 4) $x_0=[1.5,1.5,1.5]$ and $x_1=[1.3,1.2,1.1].$
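For reproducibility, here is a minimal sketch of the manifold operations on $\mathbb{R}^{3}_{++}$ used in this experiment, assuming the standard closed-form expressions induced by the metric $G(x)=\operatorname{diag}(x_1^{-2},x_2^{-2},x_3^{-2})$ (these formulas are assumed, not quoted from the paper); the vector fields ϒ and Φ are problem specific and are not reproduced here.

```python
import numpy as np

# Closed-form operations on M = (R^3_++, <u, v>_x = u^T G(x) v), G(x) = diag(x_i^{-2}).
# Standard formulas for this metric, assumed for illustration.

def exp_map(x, u):
    # exp_x(u): geodesic starting at x with velocity u, evaluated at t = 1
    return x * np.exp(u / x)

def log_map(x, y):
    # exp_x^{-1}(y): initial velocity of the geodesic from x to y
    return x * np.log(y / x)

def transport(y, x, u):
    # Gamma_{y,x} u: parallel transport of u in T_x M to T_y M
    return (y / x) * u

def dist(x, y):
    # Riemannian distance d(x, y)
    return np.linalg.norm(np.log(x / y))

# quick consistency check: d(x, y) = ||exp_x^{-1} y||_x with ||v||_x^2 = v^T G(x) v
x = np.array([0.7, 0.7, 0.7]); y = np.array([2.0, 1.0, 2.0])
v = log_map(x, y)
assert np.isclose(dist(x, y), np.sqrt(np.sum((v / x) ** 2)))
```

With these four operations in place, each step of Algorithm 3.3 reduces to componentwise arithmetic on positive vectors.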
5. Conclusion
In this article, we propose an alternated inertial Tseng-type method for approximating solutions (singular points) of variational inclusion problems, which are defined by means of the sum of a single-valued and a multi-valued vector field on a Hadamard manifold. Our method uses a self-adaptive procedure which generates dynamic step sizes converging to a positive constant. Under some suitable conditions, we prove that the sequence generated by our method converges to a solution of the variational inclusion problem. Lastly, we present a numerical example to illustrate the behaviour of the iterative method; the results indicate that our algorithm converges faster than its non-alternated and non-accelerated counterparts.
Competing interests
None.