
Nonlinear optimal control for a 4-DOF SCARA robotic manipulator

Published online by Cambridge University Press:  25 April 2023

G. Rigatos*
Affiliation:
Unit of Industrial Automation, Industrial Systems Institute, Rion Patras, Greece
M. Abbaszadeh
Affiliation:
Department of ECS Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
K. Busawon
Affiliation:
Department of Mechanical Engineering, University of Northumbria, Newcastle, UK
J. Pomares
Affiliation:
Department of Systems Engineering, University of Alicante, Alicante, Spain
*Corresponding author. E-mail: [email protected]

Abstract

Selective compliance articulated robot arms (SCARA) robotic manipulators find wide use in industry. A nonlinear optimal control approach is proposed for the dynamic model of the 4-degrees of freedom (DOF) SCARA robotic manipulator. The dynamic model of the SCARA robot undergoes approximate linearization around a temporary operating point that is recomputed at each time-step of the control method. The linearization relies on Taylor series expansion and on the associated Jacobian matrices. For the linearized state-space model of the system, a stabilizing optimal (H-infinity) feedback controller is designed. To compute the controller’s feedback gains, an algebraic Riccati equation is repetitively solved at each iteration of the control algorithm. The stability properties of the control method are proven through Lyapunov analysis. The proposed control method is advantageous because: (i) unlike the popular computed torque method for robotic manipulators, it is characterized by optimality and is also applicable when the number of control inputs is not equal to the robot’s number of DOFs and (ii) it achieves fast and accurate tracking of reference setpoints under minimal energy consumption by the robot’s actuators. The nonlinear optimal controller for the 4-DOF SCARA robot is finally compared against a flatness-based controller implemented in successive loops.

Type
Research Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press

1. Introduction

Selective compliance articulated robot arms (SCARA) robots are widely used in industrial tasks as well as in the teaching of robotics and in the related testing of new robot control algorithms [1–5]. The rapid development of the Computer, Communication and Consumer Electronics Industry (3C industry) has also led to the widespread use of SCARA robots [6–10]. SCARA robots exhibit agility in assembly tasks for the 3C industry and particularly in the fabrication of electronic devices, as well as in welding, handling of objects, and pick-and-place tasks with high speed, short time-cycles, accurate path following, and, in general, much more flexible operation [11–15]. Of course, to achieve the precise execution of such tasks, SCARA robots have to be equipped with computationally powerful microprocessors and also have to be supplied with elaborate nonlinear control algorithms [16–20].

SCARA robots are high-performance robotic manipulators with a relatively simple structure. With three revolute joints (named shoulder, elbow, and wrist, respectively), a SCARA robot can move its end-effector horizontally, while with a prismatic joint it can move the end-effector vertically [21–26]. The configuration of the SCARA robot (Fig. 1) is outlined as follows: First, it comprises a revolute joint about the vertical axis. This joint swings a rigid arm, and at the end of this arm there is a second revolute joint which swings the second arm, again about the vertical axis. The first two revolute joints enable the horizontal motion of a load picked by the robot’s end-effector. A prismatic joint (tool) is mounted at the end of the second arm. This joint can move straight up and down. Finally, at the end of the tool there is a third revolute joint which allows for the precise positioning and orientation of the load. Indicative recent results showing the application of SCARA robots in industrial tasks can be found in refs. [27–31]. Potential application areas for the article’s developments, always related to industrial robotics, are described in refs. [32–35].

Figure 1. The 4-DOF SCARA robotic manipulator and the reference frames (coordinate systems) associated with the robot’s links and joints.

In the present article, a nonlinear optimal control method is proposed for the nonlinear model of a 4-degrees of freedom (DOF) SCARA robot [36, 37]. The dynamic model of the 4-DOF SCARA robot undergoes first approximate linearization around a temporary operating point which is updated at each sampling instance. This operating point is defined by the present value of the robot’s state vector and by the last sampled value of the control inputs vector. The linearization process relies on the first-order Taylor series expansion and on the computation of the associated Jacobian matrices [38–40]. The modeling error, which is due to the truncation of higher-order terms in the Taylor series expansion, is proven to be small and is asymptotically compensated by the robustness of the control algorithm. For the approximately linearized state-space description of the system, a stabilizing H-infinity feedback controller is defined.

The proposed H-infinity controller achieves the solution of the optimal control problem for the SCARA robot under model uncertainty and external perturbations. Actually, it represents a min-max differential game which takes place between (i) the control inputs of the system, which try to minimize a cost function comprising a quadratic term of the state vector’s tracking error, and (ii) the model uncertainty and exogenous perturbation terms, which try to maximize this cost function. To compute the stabilizing feedback gains of this controller, an algebraic Riccati equation also has to be solved at each time-step of the control method [1, 41]. The global stability properties of the control scheme are proven through Lyapunov analysis. First, it is proven that the control loop satisfies the H-infinity tracking performance criterion [1, 42]. Next, it is proven that under moderate conditions, global asymptotic stability of the control loop is ensured. To implement state estimation-based control without the need to measure the entire state vector of the system, the H-infinity Kalman filter is used as a robust state estimator [1]. The nonlinear optimal control method retains the advantages of linear optimal control, that is, fast and accurate tracking of reference setpoints under moderate variations of the control inputs.
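For illustration, the per-sampling-instant gain update described above can be sketched in a few lines of Python. This is a stand-in only: it uses a standard LQR-type algebraic Riccati equation from scipy with placeholder matrices, whereas the specific H-infinity Riccati equation of the proposed controller is formulated later in the article.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def riccati_gain(A, B, Q, R):
    """Solve A'P + PA - P B R^{-1} B' P + Q = 0 and return the state-feedback gain K."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Placeholder linearized model and weights (illustrative values only, not the SCARA model)
A = np.array([[0.0, 1.0], [0.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[0.1]])

K = riccati_gain(A, B, Q, R)   # in the actual method, (A, B) and K are recomputed at every sampling instant
# u = -K @ (x - x_d)           # tracking-error feedback
```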

The article makes a meaningful contribution to the area of nonlinear control. One can point out the advantages of the nonlinear optimal control method against nonlinear model predictive control (NMPC) [37]. In NMPC, the stability properties of the control scheme remain unproven, and the convergence of the iterative search for an optimum often depends on the initialization and on the selection of parameter values. It is also noteworthy that the nonlinear optimal control method is applicable to a wider class of dynamical systems than approaches based on the solution of state-dependent Riccati equations (SDRE). The SDRE approaches can be applied only to dynamical systems which can be transformed to the linear parameter varying form. Besides, the nonlinear optimal control method performs better than nonlinear optimal control schemes which use approximation of the solution of the Hamilton–Jacobi–Bellman equation by Galerkin series expansions. The stability properties of the Galerkin series expansion-based optimal control approaches are still unproven.

The structure of the article is as follows: In Section 2, the dynamic model of the 4-DOF SCARA robot is given and the associated state-space model in the affine-in-the-input nonlinear state-space form is formulated. In Section 3, the dynamic model of the SCARA robot undergoes approximate linearization through Taylor series expansion and with the computation of the associated Jacobian matrices. In Section 4, the H-infinity optimal control problem is formulated for the dynamic model of the SCARA robot. In Section 5, the global stability properties of the H-infinity control scheme are proven through Lyapunov analysis. Besides, the H-infinity Kalman filter is introduced as a robust state estimator. In Section 6, the differential flatness properties of the 4-DOF SCARA robotic manipulator are proven and a flatness-based control in successive loops is developed for this robotic system. In Section 7, the accuracy of setpoints tracking by the state variables of the SCARA robot, under the nonlinear optimal control method, is further confirmed through simulation experiments. Finally, in Section 8 concluding remarks are stated.

2. Dynamic model of the 4-DOF SCARA robotic manipulator

2.1. State-space model of the SCARA robot

The diagram of the 4-DOF SCARA robot is shown in Fig. 1. The associated state-space model of the robot’s dynamics takes the form [1, 2]

(1) \begin{align} M(\theta )\ddot{\theta }+\tilde{C}(\theta,\dot{\theta })\dot{\theta }+G(\theta )+\tilde{d}=\tau (t) \end{align}

where $\theta =[\theta _1,\theta _2,\theta _3,\theta _4]^T$ is the robot’s vector of joint variables, $\theta _i$ , $i=1,2,4$ are the turn angles of the revolute joints, $\dot{\theta }_i$ , $i=1,2,4$ are the associated angular speeds, $\theta _3$ is the position of the prismatic joint, $\dot{\theta }_3$ is the velocity of the prismatic joint, $\tilde{d}$ is the disturbances vector, $M(\theta )$ is the inertia matrix, $\tilde{C}(\theta,\dot{\theta })$ is the Coriolis and centrifugal forces matrix, and $G(\theta )$ is the gravitational forces vector. These parameters of the robotic model are defined as follows [2]:

(2) \begin{align} M(\theta )=\left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}{p_1}+{p_2}\text{cos}(\theta _2) & p_3+0.5{p_2}\text{cos}(\theta _2) & 0 & -p_5 \\ \\[-8pt] p_2+0.5{p_2}\text{cos} (\theta _2) & p_2 & 0 & -p_5 \\ \\[-8pt] 0 & 0 & p_4 & 0 \\ \\[-8pt] -p_5 & -p_5 & 0 & p_5 \end{array}\right) \end{align}
(3) \begin{align} \tilde{C}(\theta,\dot{\theta })=\left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} -{p_2}\text{cos}(\theta _1)\dot{\theta }_2 & -0.5{p_2}\text{sin}(\theta _2)\dot{\theta _2} & 0 & 0 \\ \\[-8pt] 0.5{p_2}\text{sin}(\theta _2)\dot{\theta }_1 & 0 & 0 & 0 \\ \\[-8pt] 0 & 0 & 0 & 0 \\ \\[-8pt] 0 & 0 & 0 & 0 \end{array}\right) & G(\theta )=\begin{pmatrix} 0 \\ \\[-8pt] 0 \\ \\[-8pt] {p_4}\bar{g} \\ \\[-8pt] 0 \end{pmatrix} & \tilde{d}= \begin{pmatrix}{b_1}\dot{\theta }_1 \\ \\[-8pt] {b_2}\dot{\theta }_2 \\ \\[-8pt] {b_3}\dot{\theta }_3 \\ \\[-8pt] {b_4}\dot{\theta }_4 \end{pmatrix} \end{align}

In the previous model, $\tau _i$ are the control inputs, $I_i$ are the moments of inertia around the centroid (the center of rotation of each link is at distance $\kappa _i$ from the end of the link), $m_i$ is the mass of the ith link, $d_i$ is the center of mass of the ith link, $l_i$ is the length of the ith link, $\theta _i$ is the angle (position) of the ith joint, while $\bar{g}$ is the acceleration of gravity. It holds that $I_i={m_i}\kappa _i^2$ , $p_1={\sum _{i=1}^4}I_i+{m_1}{d_1^2}+{m_2}(d_1^2+l_1^2)+(m_3+m_4)(l_1^2+l_2^2)$ , $p_2=2[{l_1}{d_2}{m_2}+{l_1}{l_2}(m_3+m_4)]$ , $p_3=m_3+m_4$ , $p_4=m_3+m_4$ , and $p_5=I_4$ .

About the elements of the inertia matrix, one has $m_{11}={p_1}+{p_2}\text{cos}(\theta _2)$ , $m_{12}=p_3+0.5{p_2}\text{cos}(\theta _2)$ , $m_{13}=0$ , $m_{14}=-p_5$ , $m_{21}=p_2+0.5{p_2}\cos (\theta _2)$ , $m_{22}=p_2$ , $m_{23}=0$ , $m_{24}=-p_5$ , $m_{31}=0$ , $m_{32}=0$ , $m_{33}=p_4$ , $m_{34}=0$ , $m_{41}=-p_5$ , $m_{42}=-p_5$ , $m_{43}=0$ , $m_{44}=p_5$ .

About the elements of the Coriolis matrix, one has: $c_{11}=-{p_2}\text{cos}(\theta _1)\dot{\theta }_2$ , $c_{12}=-0.5{p_2}\text{sin}(\theta _2)\dot{\theta }_2$ , $c_{13}=0$ , $c_{14}=0$ , $c_{21}=0.5{p_2}\text{sin}(\theta _2)\dot{\theta }_1$ , $c_{22}=0$ , $c_{23}=0$ , $c_{24}=0$ , $c_{31}=0$ , $c_{32}=0$ , $c_{33}=0$ , $c_{34}=0$ , $c_{41}=0$ , $c_{42}=0$ , $c_{43}=0$ , $c_{44}=0$ .

About the elements of the gravitational forces vector, one has: $g_{1}=0$ , $g_{2}=0$ , $g_{3}={p_4}\bar{g}$ , and $g_{4}=0$ .

About the elements of the disturbances (friction) vector, one has: $d_{1}={b_1}\dot{\theta _1}$ , $d_{2}={b_2}\dot{\theta _2}$ , $d_{3}={b_3}\dot{\theta _3}$ , and $d_{4}={b_4}\dot{\theta _4}$ .
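For illustration, the matrices of Eqs. (2)–(3) can be evaluated numerically as follows. This is a minimal sketch; the parameter values $p_1,\dots,p_5$ and $b_1,\dots,b_4$ are placeholders and are not taken from the article.

```python
import numpy as np

# Placeholder parameter values (illustrative only)
p1, p2, p3, p4, p5 = 1.2, 0.5, 0.3, 0.4, 0.05    # lumped inertia-related parameters
b = np.array([0.1, 0.1, 0.1, 0.1])               # friction coefficients b_1..b_4
g_bar = 9.81                                     # acceleration of gravity

def inertia(theta):
    """Inertia matrix M(theta), transcribed entry-by-entry from Eq. (2)."""
    c2 = np.cos(theta[1])
    return np.array([[p1 + p2 * c2,       p3 + 0.5 * p2 * c2, 0.0, -p5],
                     [p2 + 0.5 * p2 * c2, p2,                 0.0, -p5],
                     [0.0,                0.0,                p4,   0.0],
                     [-p5,               -p5,                 0.0,  p5]])

def coriolis(theta, theta_dot):
    """Coriolis/centrifugal matrix C~(theta, theta_dot), transcribed from Eq. (3)."""
    return np.array([[-p2 * np.cos(theta[0]) * theta_dot[1], -0.5 * p2 * np.sin(theta[1]) * theta_dot[1], 0.0, 0.0],
                     [ 0.5 * p2 * np.sin(theta[1]) * theta_dot[0], 0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 0.0]])

gravity  = lambda: np.array([0.0, 0.0, p4 * g_bar, 0.0])   # G(theta), Eq. (3)
friction = lambda theta_dot: b * theta_dot                  # d~, Eq. (3)

# Torque needed to hold zero acceleration at a sample state, per Eq. (1)
theta, theta_dot = np.array([0.2, -0.4, 0.1, 0.3]), np.array([0.0, 0.1, 0.0, -0.1])
tau = coriolis(theta, theta_dot) @ theta_dot + gravity() + friction(theta_dot)
```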

Next, the inverse of the inertia matrix $M$ is defined as

(4) \begin{align} M^{-1}=\dfrac{1 }{{\text{det}M}}\left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} M_{11} & -M_{21} & M_{31} & -M_{41} \\ \\[-8pt] -M_{12} & M_{22} & -M_{32} & M_{42} \\ \\[-8pt] M_{13} & -M_{23} & M_{33} & -M_{43} \\ \\[-8pt] -M_{14} & M_{24} & -M_{34} & M_{44} \end{array}\right) \end{align}

where the above-noted sub-determinants $M_{ij}$ , $i=1,\cdots,4$ , $j=1,\cdots,4$ , are defined as

\begin{align*} M_{11}=m_{22}(m_{33}m_{44}-m_{43}m_{34})-m_{23}(m_{32}m_{44}-m_{42}m_{34})+m_{24}(m_{32}m_{43}-m_{42}m_{33})\\[3pt] M_{12}=m_{21}(m_{33}m_{44}-m_{43}m_{34})-m_{23}(m_{31}m_{44}-m_{41}m_{34})+m_{24}(m_{31}m_{43}-m_{41}m_{33})\\[3pt] M_{13}=m_{21}(m_{32}m_{44}-m_{42}m_{34})-m_{22}(m_{31}m_{44}-m_{41}m_{34})+m_{24}(m_{31}m_{42}-m_{41}m_{32})\\[3pt] M_{14}=m_{21}(m_{32}m_{43}-m_{42}m_{33})-m_{22}(m_{31}m_{43}-m_{41}m_{33})+m_{23}(m_{31}m_{42}-m_{41}m_{32}) \end{align*}
\begin{align*} M_{21}=m_{12}(m_{33}m_{44}-m_{43}m_{34})-m_{13}(m_{32}m_{44}-m_{42}m_{34})+m_{14}(m_{32}m_{43}-m_{42}m_{33})\\[3pt] M_{22}=m_{11}(m_{33}m_{44}-m_{43}m_{34})-m_{13}(m_{31}m_{44}-m_{41}m_{34})+m_{14}(m_{31}m_{43}-m_{41}m_{33})\\[3pt] M_{23}=m_{11}(m_{32}m_{44}-m_{42}m_{34})-m_{12}(m_{31}m_{44}-m_{41}m_{34})+m_{14}(m_{31}m_{42}-m_{41}m_{32})\\[3pt] M_{24}=m_{11}(m_{32}m_{43}-m_{42}m_{33})-m_{12}(m_{31}m_{43}-m_{41}m_{33})+m_{13}(m_{31}m_{42}-m_{41}m_{32}) \end{align*}
\begin{align*} M_{31}=m_{12}(m_{23}m_{44}-m_{43}m_{24})-m_{13}(m_{22}m_{44}-m_{42}m_{24})+m_{14}(m_{22}m_{43}-m_{42}m_{23})\\[3pt] M_{32}=m_{11}(m_{23}m_{44}-m_{43}m_{24})-m_{13}(m_{21}m_{44}-m_{41}m_{24})+m_{14}(m_{21}m_{43}-m_{41}m_{23})\\[3pt] M_{33}=m_{11}(m_{22}m_{44}-m_{42}m_{24})-m_{12}(m_{21}m_{44}-m_{41}m_{24})+m_{14}(m_{21}m_{42}-m_{41}m_{22})\\[3pt] M_{34}=m_{11}(m_{22}m_{43}-m_{42}m_{23})-m_{12}(m_{21}m_{43}-m_{41}m_{23})+m_{13}(m_{21}m_{42}-m_{41}m_{22}) \end{align*}
\begin{align*} M_{41}=m_{12}(m_{23}m_{34}-m_{33}m_{24})-m_{13}(m_{22}m_{34}-m_{32}m_{24})+m_{14}(m_{22}m_{33}-m_{32}m_{23})\\[3pt] M_{42}=m_{11}(m_{23}m_{34}-m_{33}m_{24})-m_{13}(m_{21}m_{34}-m_{31}m_{24})+m_{14}(m_{21}m_{33}-m_{31}m_{23})\\[3pt] M_{43}=m_{11}(m_{22}m_{34}-m_{32}m_{24})-m_{12}(m_{21}m_{34}-m_{31}m_{24})+m_{14}(m_{21}m_{32}-m_{31}m_{22})\\[3pt] M_{44}=m_{11}(m_{22}m_{33}-m_{32}m_{23})-m_{12}(m_{21}m_{33}-m_{31}m_{23})+m_{13}(m_{21}m_{32}-m_{31}m_{22}) \end{align*}

The determinant of matrix $M$ is

\begin{align*} \text{det}M=m_{11}M_{11}-m_{12}M_{12}+m_{13}M_{13}-m_{14}M_{14} \end{align*}
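The adjugate-based inverse of Eq. (4) and the first-row Laplace expansion of $\text{det}M$ can be cross-checked numerically in a few lines. The sketch below uses a generic well-conditioned $4\times 4$ matrix rather than the robot's inertia matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 4 * np.eye(4)   # a generic well-conditioned 4x4 matrix

def minor(M, i, j):
    """Sub-determinant M_ij: delete row i and column j, then take the determinant."""
    sub = np.delete(np.delete(M, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

minors = np.array([[minor(M, i, j) for j in range(4)] for i in range(4)])
detM = sum((-1) ** j * M[0, j] * minors[0, j] for j in range(4))    # Laplace expansion along the first row
signs = np.array([[(-1) ** (i + j) for j in range(4)] for i in range(4)])
M_inv = (signs * minors).T / detM                                    # adjugate / determinant, cf. Eq. (4)

assert np.allclose(M_inv, np.linalg.inv(M))
assert np.isclose(detM, np.linalg.det(M))
```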

For the dynamic model of the SCARA robot that was initially written in the form

(5) \begin{align} M(\theta )\ddot{\theta }+\tilde{C}(\theta,\dot{\theta })\dot{\theta }+G(\theta )+\tilde{d}=\tau \end{align}

it holds that

(6) \begin{align} \tilde{C}(\theta,\dot{\theta })\dot{\theta }=\left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} c_{11} & c_{12} & c_{13} & c_{14} \\ \\[-8pt] c_{21} & c_{22} & c_{23} & c_{24} \\ \\[-8pt] c_{31} & c_{32} & c_{33} & c_{34} \\ \\[-8pt] c_{41} & c_{42} & c_{43} & c_{44} \end{array}\right) \begin{pmatrix} \dot{\theta }_1 \\ \\[-8pt] \dot{\theta }_2 \\ \\[-8pt] \dot{\theta }_3 \\ \\[-8pt] \dot{\theta }_4 \end{pmatrix} \end{align}

or equivalently

(7) \begin{align} C(\theta,\dot{\theta })=\begin{pmatrix} c_1 \\ \\[-8pt] c_2 \\ \\[-8pt] c_3 \\ \\[-8pt] c_4 \end{pmatrix} =\begin{pmatrix} c_{11}\dot{\theta }_1+c_{12}\dot{\theta }_2+c_{13}\dot{\theta }_3+c_{14}\dot{\theta }_4 \\ \\[-8pt] c_{21}\dot{\theta }_1+c_{22}\dot{\theta }_2+c_{23}\dot{\theta }_3+c_{24}\dot{\theta }_4 \\ \\[-8pt] c_{31}\dot{\theta }_1+c_{32}\dot{\theta }_2+c_{33}\dot{\theta }_3+c_{34}\dot{\theta }_4 \\ \\[-8pt] c_{41}\dot{\theta }_1+c_{42}\dot{\theta }_2+c_{43}\dot{\theta }_3+c_{44}\dot{\theta }_4 \end{pmatrix} \end{align}

Consequently, the dynamic model of the robot can be written as

(8) \begin{gather} M(\theta )\ddot{\theta }+C(\theta,\dot{\theta })+G(\theta )+\tilde{d}(\dot{\theta })=\tau{\Rightarrow } \nonumber\\ \ddot{\theta }+M^{-1}(\theta )C(\theta,\dot{\theta })+M^{-1}(\theta )G(\theta )+M^{-1}(\theta )\tilde{d}(\dot{\theta })=M^{-1}(\theta )\tau{\Rightarrow }\nonumber\\ \ddot{\theta }=-M^{-1}(\theta )C(\theta,\dot{\theta })-M^{-1}(\theta )G(\theta )-M^{-1}(\theta )\tilde{d}(\dot{\theta })+M^{-1}(\theta )\tau{\Rightarrow }\nonumber\\ \ddot{\theta }=-M^{-1}(\theta )[C(\theta,\dot{\theta })+G(\theta )+\tilde{d}(\dot{\theta })]+M^{-1}(\theta )\tau \end{gather}

Therefore, the dynamic model of the SCARA robot is written as

(9) \begin{align} \begin{pmatrix} \ddot{\theta }_1 \\ \\[-8pt] \ddot{\theta }_2 \\ \\[-8pt] \ddot{\theta }_3 \\ \\[-8pt] \ddot{\theta }_4 \end{pmatrix}&=-\dfrac{1 }{{\text{det}M}} \begin{pmatrix} M_{11} & -M_{21} & M_{31} & -M_{41} \\ \\[-8pt] -M_{12} & M_{22} & -M_{32} & M_{42} \\ \\[-8pt] M_{13} & -M_{23} & M_{33} & -M_{43} \\ \\[-8pt] -M_{14} & M_{24} & -M_{34} & M_{44} \end{pmatrix} \begin{pmatrix} c_1+g_1+d_1 \\ \\[-8pt] c_2+g_2+d_2 \\ \\[-8pt] c_3+g_3+d_3 \\ \\[-8pt] c_4+g_4+d_4 \end{pmatrix}+ \nonumber \\ &\quad +\dfrac{1 }{{\text{det}M}} \left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} M_{11} & -M_{21} & M_{31} & -M_{41} \\ \\[-8pt] -M_{12} & M_{22} & -M_{32} & M_{42} \\ \\[-8pt] M_{13} & -M_{23} & M_{33} & -M_{43} \\ \\[-8pt] -M_{14} & M_{24} & -M_{34} & M_{44} \end{array}\right) \begin{pmatrix} \tau _1 \\ \\[-8pt] \tau _2 \\ \\[-8pt] \tau _3 \\ \\[-8pt] \tau _4 \end{pmatrix} \end{align}

Equivalently, using that the torques vector $\tau =[\tau _1,\tau _2,\tau _3,\tau _4]^T$ is the control inputs vector $u=[u_1,u_2,u_3,u_4]^T$ , the dynamic model of the SCARA robot is written as

\begin{align*} \begin{pmatrix} \ddot{\theta }_1 \\ \\[-8pt] \ddot{\theta }_2 \\ \\[-8pt] \ddot{\theta }_3 \\ \\[-8pt] \ddot{\theta }_4 \end{pmatrix}&= \begin{pmatrix}{\dfrac{-M_{11}(c_1+g_1+d_1)+M_{21}(c_2+g_2+d_2)-M_{31}(c_3+g_3+d_3)+M_{41}(c_4+g_4+d_4)}{\text{det}M}} \\ \\[-5pt] {\dfrac{M_{12}(c_1+g_1+d_1)-M_{22}(c_2+g_2+d_2)+M_{32}(c_3+g_3+d_3)-M_{42}(c_4+g_4+d_4)}{\text{det}M}} \\ \\[-5pt] {\dfrac{-M_{13}(c_1+g_1+d_1)+M_{23}(c_2+g_2+d_2)-M_{33}(c_3+g_3+d_3)+M_{43}(c_4+g_4+d_4)}{\text{det}M}} \\ \\[-5pt] {\dfrac{M_{14}(c_1+g_1+d_1)-M_{24}(c_2+g_2+d_2)+M_{34}(c_3+g_3+d_3)-M_{44}(c_4+g_4+d_4)}{\text{det}M}} \end{pmatrix}+ \end{align*}
(10) \begin{align} \nonumber\\[-30pt] &\quad +\left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}\dfrac{M_{11}} { \text{det}M} & -\dfrac{M_{21} }{ \text{det}M} &{\dfrac{M_{31}} { \text{det}M}} & -\dfrac{M_{41} }{ \text{det}M}\\ \\[-4pt] -\dfrac{M_{12} }{\text{det}M} &\dfrac{M_{22} }{ \text{det}M} & -\dfrac{M_{32} }{ \text{det}M} &\dfrac{M_{42} }{ \text{det}M}\\ \\[-4pt] \dfrac{M_{13} }{ \text{det}M} & -\dfrac{M_{23} }{ \text{det}M} &\dfrac{M_{33} }{ \text{det}M} & -\dfrac{M_{43} }{ \text{det}M} \\ \\[-4pt] -\dfrac{M_{14} }{ \text{det}M} &\dfrac{M_{24} }{ \text{det}M} & -\dfrac{M_{34} }{ \text{det}M} &\dfrac{M_{44} }{ \text{det}M} \end{array}\right) \begin{pmatrix} u_1 \\ \\[-8pt] u_2 \\ \\[-8pt] u_3 \\ \\[-8pt] u_4 \end{pmatrix} \end{align}

The state vector of the SCARA robot is

(11) \begin{align} x&=[x_1,x_2,x_3,x_4,x_5,x_6,x_7,x_8]^T{\Rightarrow }\nonumber \\ x&=[\theta _1,\dot{\theta }_1,\theta _2,\dot{\theta }_2,\theta _3,\dot{\theta }_3,\theta _4,\dot{\theta }_4]^T \end{align}

Moreover, the following functions are defined:

\begin{align*} f_1(x)=x_2 &\quad f_2(x)={{-M_{11}(c_1+g_1+d_1)+M_{21}(c_2+g_2+d_2)-M_{31}(c_3+g_3+d_3)+M_{41}(c_4+g_4+d_4)} \over \text{det}M}\\[5pt] f_3(x)=x_4 &\quad f_4(x)={{M_{12}(c_1+g_1+d_1)-M_{22}(c_2+g_2+d_2)+M_{32}(c_3+g_3+d_3)-M_{42}(c_4+g_4+d_4)} \over \text{det}M}\\[5pt] f_5(x)=x_6 &\quad f_6(x)={{-M_{13}(c_1+g_1+d_1)+M_{23}(c_2+g_2+d_2)-M_{33}(c_3+g_3+d_3)+M_{43}(c_4+g_4+d_4)} \over \text{det}M}\\[5pt] f_7(x)=x_8 &\quad f_8(x)={{M_{14}(c_1+g_1+d_1)-M_{24}(c_2+g_2+d_2)+M_{34}(c_3+g_3+d_3)-M_{44}(c_4+g_4+d_4)} \over \text{det}M} \end{align*}
\begin{align*} \begin{array}{c@{\quad}c@{\quad}c@{\quad}c}g_{11}(x)=0 & g_{12}(x)=0 & g_{13}=0 & g_{14}=0\\ \\[-8pt] g_{21}(x)=\dfrac{M_{11} }{\text{det}M} & g_{22}(x)=-\dfrac{M_{21} }{\text{det}M} & g_{23}(x)=\dfrac{M_{31} }{\text{det}M} & g_{24}(x)=-\dfrac{M_{41} }{ \text{det}M} \\ \\[-8pt] g_{31}(x)=0 & g_{32}(x)=0 & g_{33}(x)=0 & g_{34}(x)=0 \\ \\[-8pt] g_{41}(x)=-\dfrac{M_{12} }{ \text{det}M} & g_{42}(x)=\dfrac{M_{22}}{\text{det}M} & g_{43}(x)=-\dfrac{M_{32} }{ \text{det}M} & g_{44}(x)=\dfrac{M_{42} }{ \text{det}M} \\ \\[-8pt] g_{51}(x)=0 & g_{52}(x)=0 & g_{53}(x)=0 & g_{54}(x)=0\\ \\[-8pt] g_{61}(x)=\dfrac{M_{13} }{\text{det}M} & g_{62}(x)=-\dfrac{M_{23} }{ \text{det}M} & g_{63}(x)=\dfrac{M_{33} }{ \text{det}M} & g_{64}(x)=-\dfrac{M_{43}}{ \text{det}M}\\ \\[-8pt] g_{71}(x)=0 & g_{72}(x)=0 & g_{73}(x)=0 & g_{74}(x)=0\\ \\[-8pt] g_{81}(x)=-\dfrac{M_{14} }{ \text{det}M} & g_{82}(x)=\dfrac{M_{24} }{ \text{det}M} & g_{83}(x)=-\dfrac{M_{34} }{ \text{det}M} & g_{84}(x)=\dfrac{M_{44} }{ \text{det}M} \end{array}\end{align*}

Thus, the state-space model of the 4-DOF SCARA robot is written as

(12) \begin{align} \begin{aligned} \begin{pmatrix} \dot{x}_1 \\ \\[-9pt] \dot{x}_2 \\ \\[-9pt] \dot{x}_3 \\ \\[-9pt] \dot{x}_4 \\ \\[-9pt] \dot{x}_5 \\ \\[-9pt] \dot{x}_6 \\ \\[-9pt] \dot{x}_7 \\ \\[-9pt] \dot{x}_8 \end{pmatrix}= \begin{pmatrix} f_1(x)\\ \\[-9pt] f_2(x)\\ \\[-9pt] f_3(x)\\ \\[-9pt] f_4(x)\\ \\[-9pt] f_5(x)\\ \\[-9pt] f_6(x)\\ \\[-9pt] f_7(x)\\ \\[-9pt] f_8(x) \end{pmatrix}+ \left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} g_{11}(x) & g_{12}(x) & g_{13}(x) & g_{14}(x)\\ \\[-7pt] g_{21}(x) & g_{22}(x) & g_{23}(x) & g_{24}(x) \\ \\[-7pt] g_{31}(x) & g_{32}(x) & g_{33}(x) & g_{34}(x) \\ \\[-7pt] g_{41}(x) & g_{42}(x) & g_{43}(x) & g_{44}(x) \\ \\[-7pt] g_{51}(x) & g_{52}(x) & g_{53}(x) & g_{54}(x) \\ \\[-7pt] g_{61}(x) & g_{62}(x) & g_{63}(x) & g_{64}(x) \\ \\[-7pt] g_{71}(x) & g_{72}(x) & g_{73}(x) & g_{74}(x) \\ \\[-7pt] g_{81}(x) & g_{82}(x) & g_{83}(x) & g_{84}(x) \end{array}\right) \begin{pmatrix} \tau _1 \\ \\[-9pt] \tau _2 \\ \\[-9pt] \tau _3 \\ \\[-9pt] \tau _4 \end{pmatrix} \end{aligned} \end{align}

or in concise form one has the affine-in-the-input nonlinear state-space model

(13) \begin{align} \dot{x}=f(x)+g(x)u \end{align}

where $x\,{\in }\,R^{8{\times }1}$ , $f(x)\,{\in }\,R^{8{\times }1}$ , $g(x)\,{\in }\,R^{8{\times }4}$ , and $u\,{\in }\,R^{4{\times }1}$ .
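For simulation purposes, the drift vector $f(x)$ and the input-gain matrix $g(x)$ of Eqs. (12)–(13) can also be assembled numerically, applying $M^{-1}$ directly instead of the explicit sub-determinant expressions (the two routes are equivalent). A minimal self-contained sketch with placeholder parameter values follows.

```python
import numpy as np

def fg(x, p, b, g_bar=9.81):
    """Return f(x), g(x) of Eqs. (12)-(13); x = [th1, th1d, th2, th2d, th3, th3d, th4, th4d].
    p = (p1..p5) and b = (b1..b4) are placeholder parameters; M^{-1} is applied numerically."""
    th, thd = x[0::2], x[1::2]
    p1, p2, p3, p4, p5 = p
    c2 = np.cos(th[1])
    M = np.array([[p1 + p2 * c2,       p3 + 0.5 * p2 * c2, 0.0, -p5],
                  [p2 + 0.5 * p2 * c2, p2,                 0.0, -p5],
                  [0.0,                0.0,                p4,   0.0],
                  [-p5,               -p5,                 0.0,  p5]])                    # Eq. (2)
    Cth = np.array([-p2 * np.cos(th[0]) * thd[1] * thd[0] - 0.5 * p2 * np.sin(th[1]) * thd[1] ** 2,
                     0.5 * p2 * np.sin(th[1]) * thd[0] ** 2,
                     0.0, 0.0])                                                           # C(theta, thetadot), Eq. (7)
    G = np.array([0.0, 0.0, p4 * g_bar, 0.0])                                             # gravity vector
    d = np.asarray(b) * thd                                                               # viscous friction d~
    Minv = np.linalg.inv(M)
    f, g = np.zeros(8), np.zeros((8, 4))
    f[0::2] = thd                                 # f1, f3, f5, f7
    f[1::2] = -Minv @ (Cth + G + d)               # f2, f4, f6, f8, cf. Eq. (8)
    g[1::2, :] = Minv                             # rows 2, 4, 6, 8 of g(x), cf. Eq. (12)
    return f, g

x = np.array([0.2, 0.0, -0.4, 0.1, 0.1, 0.0, 0.3, -0.1])
f_x, g_x = fg(x, p=(1.2, 0.5, 0.3, 0.4, 0.05), b=(0.1, 0.1, 0.1, 0.1))
# xdot = f_x + g_x @ u, cf. Eq. (13)
```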

3. Approximate linearization of the dynamic model of the SCARA robot

3.1. Linearization of the robot’s dynamics

The dynamic model of the 4-DOF SCARA robot being initially expressed in the state-space form

(14) \begin{align} \dot{x}=f(x)+g(x)u \end{align}

undergoes approximate linearization at each sampling instance around the temporary operating point $(x^{*},u^{*})$ , where $x^{*}$ is the present value of the system’s state vector and $u^{*}$ is the last sampled value of the control inputs vector. The linearization process is based on Taylor series expansion and gives

(15) \begin{align} \dot{x}=Ax+Bu+\tilde{d} \end{align}

where $\tilde{d}$ is the cumulative disturbances vector, which may be due to (a) truncation of higher-order terms from the Taylor series expansion, (b) exogenous perturbations, and (c) sensor measurement noise of any distribution. Matrices $A$ and $B$ are the Jacobian matrices of the Taylor series expansion, which are defined as:

(16) \begin{align} A&={\nabla _x}[f(x)+g(x)u]\mid _{(x^{*},u^{*})}{\Rightarrow }\nonumber \\ A&={\nabla _x}f(x)\mid _{(x^{*},u^{*})}+{\nabla _x}{g_1}(x)u_1\mid _{(x^{*},u^{*})}+{\nabla _x}{g_2}(x)u_2\mid _{(x^{*},u^{*})}\nonumber \\ &\quad +{\nabla _x}{g}_3(x)u_3\mid _{(x^{*},u^{*})}+{\nabla _x}{g}_4(x)u_4\mid _{(x^{*},u^{*})} \end{align}
(17) \begin{align} B={\nabla _u}[f(x)+g(x)u]\mid _{(x^{*},u^{*})}{\Rightarrow }B=g(x)\mid _{(x^{*},u^{*})} \end{align}

where $g_i(x), \ i=1,\cdots,4$ are the columns of the control inputs gain matrix $g(x)$ .

This linearization approach, which has been followed for implementing the nonlinear optimal control scheme, results in a quite accurate model of the system’s dynamics. Consider again the affine-in-the-input state-space model

(18) \begin{align} \dot{x}&=f(x)+g(x)u{\Rightarrow }\nonumber \\ \dot{x}&=[f(x^{*})+{\nabla _x}f(x)\mid _{x^{*}}(x-x^{*})]+[g(x^{*})+{\nabla _x}g(x)\mid _{x^{*}}(x-x^{*})]u^{*}+g(x^{*})u^{*}+g(x^{*})(u-u^{*})+\tilde{d}_1{\Rightarrow }\nonumber \\ \dot{x}&=[{\nabla _x}f(x)\mid _{x^{*}}+{\nabla _x}g(x)\mid _{x^{*}}u^{*}]x+g(x^{*})u-[{\nabla _x}f(x)\mid _{x^{*}}+{\nabla _x}g(x)\mid _{x^{*}}u^{*}]x^{*}+f(x^{*})+g(x^{*})u^{*}+\tilde{d}_1 \end{align}

where $\tilde{d}_1$ is the modeling error due to truncation of higher order terms in the Taylor series expansion of $f(x)$ and $g(x)$ . Next, by defining $A=[{\nabla _x}f(x)\mid _{x^{*}}+{\nabla _x}g(x)\mid _{x^{*}}u^{*}]$ , $B=g(x^{*})$ one obtains

(19) \begin{align} \dot{x}=Ax+Bu-Ax^{*}+f(x^{*})+g(x^{*})u^{*}+\tilde{d}_1 \end{align}

Moreover, by denoting the cumulative disturbance term of the Taylor series expansion procedure as $\tilde{d}=-Ax^{*}+f(x^{*})+g(x^{*})u^{*}+\tilde{d}_1$ , one has

(20) \begin{align} \dot{x}=Ax+Bu+\tilde{d} \end{align}

which is the approximately linearized model of the dynamics of the system of Eq. (15). The term $f(x^{*})+g(x^{*})u^{*}$ is the derivative of the state vector at $(x^{*},u^{*})$ which is almost annihilated by $-Ax^{*}$ .
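The per-step computation of $A$, $B$, and $\tilde{d}$ in Eqs. (18)–(20) can be illustrated with finite-difference Jacobians. The sketch below is illustrative only: it uses a small stand-in system (not the SCARA model) and numerical differentiation in place of the analytic Jacobians of Section 3.2.

```python
import numpy as np

# Stand-in affine-in-the-input system (NOT the SCARA model): xdot = f(x) + g(x) u
def f(x): return np.array([x[1], -np.sin(x[0]) - 0.1 * x[1]])
def g(x): return np.array([[0.0], [1.0 + 0.2 * np.cos(x[0])]])

def linearize(x_star, u_star, eps=1e-6):
    """A = grad_x[f(x)+g(x)u] at (x*, u*), B = g(x*), offset term per Eq. (19)."""
    dyn = lambda x, u: f(x) + g(x) @ u
    n = x_star.size
    A = np.zeros((n, n))
    for j in range(n):                       # central finite differences, column by column
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (dyn(x_star + dx, u_star) - dyn(x_star - dx, u_star)) / (2 * eps)
    B = g(x_star)
    d_tilde = -A @ x_star + dyn(x_star, u_star)   # -A x* + f(x*) + g(x*) u*, without the truncation error d~1
    return A, B, d_tilde

x_star, u_star = np.array([0.3, -0.1]), np.array([0.05])
A, B, d = linearize(x_star, u_star)
# Near (x*, u*): xdot is approximated by A x + B u + d, cf. Eq. (20)
```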

3.2. Computation of the Jacobian matrices

The computation of the Jacobian matrices $A$ and $B$ proceeds as follows:

Computation of the Jacobian matrix ${\nabla _x}f(x)\mid _{(x^{*},u^{*})}$ :

First row of the Jacobian matrix ${\nabla _x}f(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{{\partial }f_1}}{{{\partial }x_1}}=0$ , $\dfrac{{{\partial }f_1}}{{{\partial }x_2}}=1$ , $\dfrac{{{\partial }f_1}}{{{\partial }x_3}}=0$ , $\dfrac{{{\partial }f_1}}{{{\partial }x_4}}=0$ , $\dfrac{{{\partial }f_1}}{{{\partial }x_5}}=0$ , $\dfrac{{{\partial }f_1}}{{{\partial }x_6}}=0$ , $\dfrac{{{\partial }f_1}}{{{\partial }x_7}}=0$ , and $\dfrac{{{\partial }f_1}}{{{\partial }x_8}}=0$ .

Second row of the Jacobian matrix ${\nabla _x}f(x)\mid _{(x^{*},u^{*})}$ : It holds that $f_2(x)=\dfrac{f_{2,\text{num}} }{ f_{2,\text{den}}}$ with $f_{2,\text{num}}=-M_{11}(c_1+g_1+d_1)+M_{21}(c_2+g_2+d_2)-M_{31}(c_3+g_3+d_3)+M_{41}(c_4+g_4+d_4)$ and $f_{2,\text{den}}=\text{det}M$ . Thus, for $i=1,2,\cdots,8$ one has

(21) \begin{align} {{{\partial }f_{2,\text{num}}} \over{{\partial }x_i}}&=-{{{\partial }M_{11}} \over{{\partial }x_i}}\left(c_1+g_1+d_1\right)- M_{11}\left({{{\partial }c_1} \over{{\partial }x_i}}+{{{\partial }g_1} \over{{\partial }x_i}}+{{{\partial }d_1} \over{{\partial }x_i}}\right)+\nonumber \\[3pt] &\quad +{{{\partial }M_{21}} \over{{\partial }x_i}}\left(c_2+g_2+d_2\right)+ M_{21}\left({{{\partial }c_2} \over{{\partial }x_i}}+{{{\partial }g_2} \over{{\partial }x_i}}+{{{\partial }d_2} \over{{\partial }x_i}}\right)-\nonumber \\[3pt] &\quad -{{{\partial }M_{31}} \over{{\partial }x_i}}\left(c_3+g_3+d_3\right)- M_{31}\left({{{\partial }c_3} \over{{\partial }x_i}}+{{{\partial }g_3} \over{{\partial }x_i}}+{{{\partial }d_3} \over{{\partial }x_i}}\right)+\nonumber \\[3pt] &\quad +{{{\partial }M_{41}} \over{{\partial }x_i}}\left(c_4+g_4+d_4\right)+ M_{41}\left({{{\partial }c_4} \over{{\partial }x_i}}+{{{\partial }g_4} \over{{\partial }x_i}}+{{{\partial }d_4} \over{{\partial }x_i}}\right) \end{align}

and also

(22) \begin{align} {{{\partial }f_{2,\text{den}}} \over{{\partial }x_i}}={{{\partial }\text{det}M} \over{{\partial }x_i}} \end{align}

and finally

(23) \begin{align} {{{\partial }f_2} \over{{\partial }x_i}}={{{{{\partial }f_{2,\text{num}}} \over{{\partial }x_i}}f_{2,\text{den}}-f_{2,\text{num}}{{{\partial }f_{2,\text{den}}} \over{{\partial }x_i}}} \over{\text{det}M^2}} \end{align}

Third row of the Jacobian matrix ${\nabla _x}f(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{{\partial }f_3}}{{{\partial }x_1}}=0$ , $\dfrac{{{\partial }f_3}}{{{\partial }x_2}}=0$ , $\dfrac{{{\partial }f_3}}{{{\partial }x_3}}=0$ , $\dfrac{{{\partial }f_3}}{{{\partial }x_4}}=1$ , $\dfrac{{{\partial }f_3}}{{{\partial }x_5}}=0$ , $\dfrac{{{\partial }f_3}}{{{\partial }x_6}}=0$ , $\dfrac{{{\partial }f_3}}{{{\partial }x_7}}=0$ and $\dfrac{{{\partial }f_3}}{{{\partial }x_8}}=0$ .

Fourth row of the Jacobian matrix ${\nabla _x}f(x)\mid _{(x^{*},u^{*})}$ : It holds that $f_4(x)=\dfrac{f_{4,\text{num}} }{ f_{4,\text{den}}}$ with $f_{4,\text{num}}=M_{12}(c_1+g_1+d_1)-M_{22}(c_2+g_2+d_2)+M_{32}(c_3+g_3+d_3)-M_{42}(c_4+g_4+d_4)$ and $f_{4,\text{den}}=\text{det}M$ . Thus, for $i=1,2,\cdots,8$ one has

(24) \begin{align} {{{\partial }f_{4,\text{num}}} \over{{\partial }x_i}}&={{{\partial }M_{12}} \over{{\partial }x_i}}\left(c_1+g_1+d_1\right)+ M_{12}\left({{{\partial }c_1} \over{{\partial }x_i}}+{{{\partial }g_1} \over{{\partial }x_i}}+{{{\partial }d_1} \over{{\partial }x_i}}\right)-\nonumber\\[3pt] &\quad -{{{\partial }M_{22}} \over{{\partial }x_i}}\left(c_2+g_2+d_2\right)- M_{22}\left({{{\partial }c_2} \over{{\partial }x_i}}+{{{\partial }g_2} \over{{\partial }x_i}}+{{{\partial }d_2} \over{{\partial }x_i}}\right)+\nonumber\\[3pt] &\quad +{{{\partial }M_{32}} \over{{\partial }x_i}}\left(c_3+g_3+d_3\right)+ M_{32}\left({{{\partial }c_3} \over{{\partial }x_i}}+{{{\partial }g_3} \over{{\partial }x_i}}+{{{\partial }d_3} \over{{\partial }x_i}}\right)-\nonumber\\[3pt] &\quad -{{{\partial }M_{42}} \over{{\partial }x_i}}\left(c_4+g_4+d_4\right)- M_{42}\left({{{\partial }c_4} \over{{\partial }x_i}}+{{{\partial }g_4} \over{{\partial }x_i}}+{{{\partial }d_4} \over{{\partial }x_i}}\right) \end{align}

and also

(25) \begin{align} {{{\partial }f_{4,\text{den}}} \over{{\partial }x_i}}={{{\partial }\text{det}M} \over{{\partial }x_i}} \end{align}

and finally

(26) \begin{align} {{{\partial }f_4} \over{{\partial }x_i}}={{{{{\partial }f_{4,\text{num}}} \over{{\partial }x_i}}f_{4,\text{den}}-f_{4,\text{num}}{{{\partial }f_{4,\text{den}}} \over{{\partial }x_i}}} \over{\text{det}M^2}} \end{align}

Fifth row of the Jacobian matrix ${\nabla _x}f(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{{\partial }f_5}}{{{\partial }x_1}}=0$ , $\dfrac{{{\partial }f_5}}{{{\partial }x_2}}=0$ , $\dfrac{{{\partial }f_5}}{{{\partial }x_3}}=0$ , $\dfrac{{{\partial }f_5}}{{{\partial }x_4}}=0$ , $\dfrac{{{\partial }f_5}}{{{\partial }x_5}}=0$ , $\dfrac{{{\partial }f_5}}{{{\partial }x_6}}=1$ , $\dfrac{{{\partial }f_5}}{{{\partial }x_7}}=0$ , and $\dfrac{{{\partial }f_5}}{{{\partial }x_8}}=0$ .

Sixth row of the Jacobian matrix ${\nabla _x}f(x)\mid _{(x^{*},u^{*})}$ : It holds that $f_6(x)=\dfrac{f_{6,\text{num}} }{ f_{6,\text{den}}}$ with $f_{6,\text{num}}=-M_{13}(c_1+g_1+d_1)+M_{23}(c_2+g_2+d_2)-M_{33}(c_3+g_3+d_3)+M_{43}(c_4+g_4+d_4)$ and $f_{6,\text{den}}=\text{det}M$ . Thus, for $i=1,2,\cdots,8$ one has

(27) \begin{align} {{{\partial }f_{6,\text{num}}} \over{{\partial }x_i}}=-{{{\partial }M_{13}} \over{{\partial }x_i}}\left(c_1+g_1+d_1\right)- M_{13}\left({{{\partial }c_1} \over{{\partial }x_i}}+{{{\partial }g_1} \over{{\partial }x_i}}+{{{\partial }d_1} \over{{\partial }x_i}}\right)+ \nonumber\\[2pt] +{{{\partial }M_{23}} \over{{\partial }x_i}}\left(c_2+g_2+d_2\right)+ M_{23}\left({{{\partial }c_2} \over{{\partial }x_i}}+{{{\partial }g_2} \over{{\partial }x_i}}+{{{\partial }d_2} \over{{\partial }x_i}}\right)- \nonumber\\[2pt] -{{{\partial }M_{33}} \over{{\partial }x_i}}\left(c_3+g_3+d_3\right)- M_{33}\left({{{\partial }c_3} \over{{\partial }x_i}}+{{{\partial }g_3} \over{{\partial }x_i}}+{{{\partial }d_3} \over{{\partial }x_i}}\right)+ \nonumber\\[2pt] +{{{\partial }M_{43}} \over{{\partial }x_i}}\left(c_4+g_4+d_4\right)+ M_{43}\left({{{\partial }c_4} \over{{\partial }x_i}}+{{{\partial }g_4} \over{{\partial }x_i}}+{{{\partial }d_4} \over{{\partial }x_i}}\right) \end{align}

and also

(28) \begin{align} {{{\partial }f_{6,\text{den}}} \over{{\partial }x_i}}={{{\partial }\text{det}M} \over{{\partial }x_i}} \end{align}

and finally

(29) \begin{align} {{{\partial }f_6} \over{{\partial }x_i}}={{{{{\partial }f_{6,\text{num}}} \over{{\partial }x_i}}f_{6,\text{den}}-f_{6,\text{num}}{{{\partial }f_{6,\text{den}}} \over{{\partial }x_i}}} \over{\text{det}M^2}} \end{align}

Seventh row of the Jacobian matrix ${\nabla _x}f(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{{\partial }f_7}}{{{\partial }x_1}}=0$ , $\dfrac{{{\partial }f_7}}{{{\partial }x_2}}=0$ , $\dfrac{{{\partial }f_7}}{{{\partial }x_3}}=0$ , $\dfrac{{{\partial }f_7}}{{{\partial }x_4}}=0$ , $\dfrac{{{\partial }f_7}}{{{\partial }x_5}}=0$ , $\dfrac{{{\partial }f_7}}{{{\partial }x_6}}=0$ , $\dfrac{{{\partial }f_7}}{{{\partial }x_7}}=0$ , and $\dfrac{{{\partial }f_7}}{{{\partial }x_8}}=1$ .

Eighth row of the Jacobian matrix ${\nabla _x}f(x)\mid _{(x^{*},u^{*})}$ : It holds that $f_8(x)=\dfrac{f_{8,\text{num}} }{ f_{8,\text{den}}}$ with $f_{8,\text{num}}=M_{14}(c_1+g_1+d_1)-M_{24}(c_2+g_2+d_2)+M_{34}(c_3+g_3+d_3)-M_{44}(c_4+g_4+d_4)$ and $f_{8,\text{den}}=\text{det}M$ . Thus, for $i=1,2,\cdots,8$ one has

(30) \begin{align} {{{\partial }f_{8,\text{num}}} \over{{\partial }x_i}}&={{{\partial }M_{14}} \over{{\partial }x_i}}\left(c_1+g_1+d_1\right)+ M_{14}\left({{{\partial }c_1} \over{{\partial }x_i}}+{{{\partial }g_1} \over{{\partial }x_i}}+{{{\partial }d_1} \over{{\partial }x_i}}\right)-\nonumber\\[3pt] &\quad -{{{\partial }M_{24}} \over{{\partial }x_i}}\left(c_2+g_2+d_2\right)- M_{24}\left({{{\partial }c_2} \over{{\partial }x_i}}+{{{\partial }g_2} \over{{\partial }x_i}}+{{{\partial }d_2} \over{{\partial }x_i}}\right)+\nonumber\\[3pt] &\quad +{{{\partial }M_{34}} \over{{\partial }x_i}}\left(c_3+g_3+d_3\right)+ M_{34}\left({{{\partial }c_3} \over{{\partial }x_i}}+{{{\partial }g_3} \over{{\partial }x_i}}+{{{\partial }d_3} \over{{\partial }x_i}}\right)-\nonumber\\[3pt] &\quad -{{{\partial }M_{44}} \over{{\partial }x_i}}\left(c_4+g_4+d_4\right)- M_{44}\left({{{\partial }c_4} \over{{\partial }x_i}}+{{{\partial }g_4} \over{{\partial }x_i}}+{{{\partial }d_4} \over{{\partial }x_i}}\right) \end{align}

and also

(31) \begin{align} {{{\partial }f_{8,\text{den}}} \over{{\partial }x_i}}={{{\partial }\text{det}M} \over{{\partial }x_i}} \end{align}

and finally

(32) \begin{align} {{{\partial }f_8} \over{{\partial }x_i}}={{{{{\partial }f_{8,\text{num}}} \over{{\partial }x_i}}f_{8,\text{den}}-f_{8,\text{num}}{{{\partial }f_{8,\text{den}}} \over{{\partial }x_i}}} \over{\text{det}M^2}} \end{align}

Computation of the Jacobian matrix ${\nabla _x}g_1(x)\mid _{(x^{*},u^{*})}$ .

First row of the Jacobian matrix ${\nabla _x}g_1(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{11}(x)}{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Second row of the Jacobian matrix ${\nabla _x}g_1(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{21}(x)}{{{\partial }x_i}}=\dfrac{{\frac{{{\partial }M_{11}} }{{{\partial }x_i}}\text{det}M-M_{11}\frac{{{\partial }\text{det}M} }{{{\partial }x_i}}} }{{\text{det}M^2}}$ , for $i=1,2,\cdots,8$

Third row of the Jacobian matrix ${\nabla _x}g_1(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{31}(x)}{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Fourth row of the Jacobian matrix ${\nabla _x}g_1(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{41}(x) }{{{\partial }x_i}}=\dfrac{{-\frac{{{\partial }M_{12}} }{{{\partial }x_i}}\text{det}M+M_{12}\dfrac{{{\partial }\text{det}M} }{{{\partial }x_i}}} }{{\text{det}M^2}}$ , for $i=1,2,\cdots,8$

Fifth row of the Jacobian matrix ${\nabla _x}g_1(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{51}(x) }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Sixth row of the Jacobian matrix ${\nabla _x}g_1(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{61}(x) }{{{\partial }x_i}}=\dfrac{{\frac{{{\partial }M_{13}} }{{{\partial }x_i}}\text{det}M-M_{13}\frac{{{\partial }\text{det}M} }{{{\partial }x_i}}} }{{\text{det}M^2}}$ , for $i=1,2,\cdots,8$

Seventh row of the Jacobian matrix ${\nabla _x}g_1(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{71}(x) }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Eighth row of the Jacobian matrix ${\nabla _x}g_1(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{81}(x) }{{{\partial }x_i}}=\dfrac{{-\frac{{{\partial }M_{14}} }{{{\partial }x_i}}\text{det}M+M_{14}\frac{{{\partial }\text{det}M} }{{{\partial }x_i}}} }{{\text{det}M^2}}$ , for $i=1,2,\cdots,8$

Computation of the Jacobian matrix ${\nabla _x}g_2(x)\mid _{(x^{*},u^{*})}$ .

First row of the Jacobian matrix ${\nabla _x}g_2(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{12}(x) }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Second row of the Jacobian matrix ${\nabla _x}g_2(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{22}(x) }{{{\partial }x_i}}=\dfrac{{-\frac{{{\partial }M_{21}} }{{{\partial }x_i}}\text{det}M+M_{21}\frac{{{\partial }\text{det}M} }{{{\partial }x_i}}} }{{\text{det}M^2}}$ , for $i=1,2,\cdots,8$

Third row of the Jacobian matrix ${\nabla _x}g_2(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{32}(x) }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Fourth row of the Jacobian matrix ${\nabla _x}g_2(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{42}(x) }{{{\partial }x_i}}=\dfrac{{\frac{{{\partial }M_{22}} }{{{\partial }x_i}}\text{det}M-M_{22}\frac{{{\partial }\text{det}M} }{{{\partial }x_i}}} }{{\text{det}M^2}}$ , for $i=1,2,\cdots,8$

Fifth row of the Jacobian matrix ${\nabla _x}g_2(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{52}(x) }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Sixth row of the Jacobian matrix ${\nabla _x}g_2(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{62}(x) }{{{\partial }x_i}}=\dfrac{{-\frac{{{\partial }M_{23}} }{{{\partial }x_i}}\text{det}M+M_{23}\frac{{{\partial }\text{det}M} }{{{\partial }x_i}}} }{{\text{det}M^2}}$ , for $i=1,2,\cdots,8$

Seventh row of the Jacobian matrix ${\nabla _x}g_2(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{72}(x) }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Eighth row of the Jacobian matrix ${\nabla _x}g_2(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{82}(x) }{{{\partial }x_i}}=\dfrac{{\frac{{{\partial }M_{24}} }{{{\partial }x_i}}\text{det}M-M_{24}\frac{{{\partial }\text{det}M} }{{{\partial }x_i}}}}{{\text{det}M^2}}$ , for $i=1,2,\cdots,8$

Computation of the Jacobian matrix ${\nabla _x}g_3(x)\mid _{(x^{*},u^{*})}$ .

First row of the Jacobian matrix ${\nabla _x}g_3(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{13}(x) }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Second row of the Jacobian matrix ${\nabla _x}g_3(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{23}(x)}{{{\partial }x_i}}=\dfrac{{\frac{{{\partial }M_{31}} }{{{\partial }x_i}}\text{det}M-M_{31}\frac{{{\partial }\text{det}M} }{{{\partial }x_i}}} }{{\text{det}M^2}}$ , for $i=1,2,\cdots,8$

Third row of the Jacobian matrix ${\nabla _x}g_3(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{33}(x) }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Fourth row of the Jacobian matrix ${\nabla _x}g_3(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{43}(x) }{{{\partial }x_i}}=\dfrac{-{\frac{{{\partial }M_{32}} }{{{\partial }x_i}}\text{det}M+M_{32}\frac{{{\partial }\text{det}M}}{{{\partial }x_i}}} }{{\text{det}M^2}}$ , for $i=1,2,\cdots,8$

Fifth row of the Jacobian matrix ${\nabla _x}g_3(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{53}(x) }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Sixth row of the Jacobian matrix ${\nabla _x}g_3(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{63}(x) }{{{\partial }x_i}}=\dfrac{{\frac{{{\partial }M_{33}} }{{{\partial }x_i}}\text{det}M-M_{33}\frac{{{\partial }\text{det}M} }{{{\partial }x_i}}} }{{\text{det}M^2}}$ , for $i=1,2,\cdots,8$

Seventh row of the Jacobian matrix ${\nabla _x}g_3(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{73}(x) }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Eighth row of the Jacobian matrix ${\nabla _x}g_3(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{83}(x) }{{{\partial }x_i}}=\dfrac{-{\frac{{{\partial }M_{34}} }{{{\partial }x_i}}\text{det}M+M_{34}\frac{{{\partial }\text{det}M} }{{{\partial }x_i}}} }{{\text{det}M^2}}$ , for $i=1,2,\cdots,8$

Computation of the Jacobian matrix ${\nabla _x}g_4(x)\mid _{(x^{*},u^{*})}$ .

First row of the Jacobian matrix ${\nabla _x}g_4(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{14}(x) }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Second row of the Jacobian matrix ${\nabla _x}g_4(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{24}(x) }{{{\partial }x_i}}=\dfrac{{-\frac{{{\partial }M_{41}} }{{{\partial }x_i}}\text{det}M+M_{41}\frac{{{\partial }\text{det}M} }{{{\partial }x_i}}} }{{\text{det}M^2}}$ , for $i=1,2,\cdots,8$

Third row of the Jacobian matrix ${\nabla _x}g_4(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{34}(x) }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Fourth row of the Jacobian matrix ${\nabla _x}g_4(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{44}(x) }{{{\partial }x_i}}=\dfrac{{\frac{{{\partial }M_{42}} }{{{\partial }x_i}}\text{det}M-M_{42}\frac{{{\partial }\text{det}M} }{{{\partial }x_i}}}}{{\text{det}M^2}}$ , for $i=1,2,\cdots,8$

Fifth row of the Jacobian matrix ${\nabla _x}g_4(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{54}(x) }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Sixth row of the Jacobian matrix ${\nabla _x}g_4(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{64}(x) }{{{\partial }x_i}}=\dfrac{{-\frac{{{\partial }M_{43}} }{{{\partial }x_i}}\text{det}M+M_{43}\frac{{{\partial }\text{det}M} }{{{\partial }x_i}}} }{{\text{det}M^2}}$ , for $i=1,2,\cdots,8$

Seventh row of the Jacobian matrix ${\nabla _x}g_4(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{74}(x) }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Eighth row of the Jacobian matrix ${\nabla _x}g_4(x)\mid _{(x^{*},u^{*})}$ : $\dfrac{{\partial }g_{84}(x) }{{{\partial }x_i}}=\dfrac{{\frac{{{\partial }M_{44}} }{{{\partial }x_i}}\text{det}M-M_{44}\frac{{{\partial }\text{det}M} }{{{\partial }x_i}}} }{{\text{det}M^2}}$ , for $i=1,2,\cdots,8$

Next, one computes the partial derivatives of the sub-determinants $M_{ij}$ and of the determinant $\text{det}M$ :

(33) \begin{align} {{{\partial }M_{11}} \over{{\partial }x_i}}&={{{\partial }m_{22}} \over{{\partial }x_i}}\left(m_{33}m_{44}-m_{43}m_{34}\right)+\nonumber\\&\quad +m_{22}\left({{{\partial }m_{33}} \over{{\partial }x_i}}m_{44}+m_{33}{{{\partial }m_{44}} \over{{\partial }x_i}}-{{{\partial }m_{43}} \over{{\partial }x_i}}m_{34}-m_{43}{{{\partial }m_{34}} \over{{\partial }x_i}} \right)-\nonumber\\&\quad -{{{\partial }m_{23}} \over{{\partial }x_i}}\left(m_{32}m_{44}-m_{42}m_{34}\right)-\nonumber\\&\quad -m_{23}\left({{{\partial }m_{32}} \over{{\partial }x_i}}m_{44}+m_{32}{{{\partial }m_{44}} \over{{\partial }x_i}}-{{{\partial }m_{42}} \over{{\partial }x_i}}m_{34}-m_{42}{{{\partial }m_{34}} \over{{\partial }x_i}} \right)+\nonumber\\&\quad +{{{\partial }m_{24}} \over{{\partial }x_i}}\left(m_{32}m_{43}-m_{42}m_{33}\right)+\nonumber\\&\quad +m_{24}\left({{{\partial }m_{32}} \over{{\partial }x_i}}m_{43}+m_{32}{{{\partial }m_{43}} \over{{\partial }x_i}}-{{{\partial }m_{42}} \over{{\partial }x_i}}m_{33}-m_{42}{{{\partial }m_{33}} \over{{\partial }x_i}} \right) \end{align}

Equivalently, one has

(34) \begin{align} {{{\partial }M_{12}} \over{{\partial }x_i}}&={{{\partial }m_{21}} \over{{\partial }x_i}}\left(m_{33}m_{44}-m_{43}m_{34}\right)+\nonumber\\&\quad +m_{21}\left({{{\partial }m_{33}} \over{{\partial }x_i}}m_{44}+m_{33}{{{\partial }m_{44}} \over{{\partial }x_i}}-{{{\partial }m_{43}} \over{{\partial }x_i}}m_{34}-m_{43}{{{\partial }m_{34}} \over{{\partial }x_i}} \right)-\nonumber\\&\quad -{{{\partial }m_{23}} \over{{\partial }x_i}}\left(m_{31}m_{44}-m_{41}m_{34}\right)-\nonumber\\&\quad -m_{23}\left({{{\partial }m_{31}} \over{{\partial }x_i}}m_{44}+m_{31}{{{\partial }m_{44}} \over{{\partial }x_i}}-{{{\partial }m_{41}} \over{{\partial }x_i}}m_{34}-m_{41}{{{\partial }m_{34}} \over{{\partial }x_i}} \right)+\nonumber\\&\quad +{{{\partial }m_{24}} \over{{\partial }x_i}}\left(m_{31}m_{43}-m_{41}m_{33}\right)+\nonumber\\&\quad +m_{24}\left({{{\partial }m_{31}} \over{{\partial }x_i}}m_{43}+m_{31}{{{\partial }m_{43}} \over{{\partial }x_i}}-{{{\partial }m_{41}} \over{{\partial }x_i}}m_{33}-m_{41}{{{\partial }m_{33}} \over{{\partial }x_i}} \right) \end{align}

Moreover, it holds that

(35) \begin{align} {{{\partial }M_{13}} \over{{\partial }x_i}}&={{{\partial }m_{21}} \over{{\partial }x_i}}\left(m_{32}m_{44}-m_{42}m_{34}\right)+\nonumber\\&\quad +m_{21}\left({{{\partial }m_{32}} \over{{\partial }x_i}}m_{44}+m_{32}{{{\partial }m_{44}} \over{{\partial }x_i}}-{{{\partial }m_{42}} \over{{\partial }x_i}}m_{34}-m_{42}{{{\partial }m_{34}} \over{{\partial }x_i}} \right)-\nonumber\\&\quad -{{{\partial }m_{22}} \over{{\partial }x_i}}\left(m_{31}m_{44}-m_{41}m_{34}\right)-\nonumber\\&\quad -m_{22}\left({{{\partial }m_{31}} \over{{\partial }x_i}}m_{44}+m_{31}{{{\partial }m_{44}} \over{{\partial }x_i}}-{{{\partial }m_{41}} \over{{\partial }x_i}}m_{34}-m_{41}{{{\partial }m_{34}} \over{{\partial }x_i}} \right)+\nonumber\\&\quad +{{{\partial }m_{24}} \over{{\partial }x_i}}\left(m_{31}m_{42}-m_{41}m_{32}\right)+\nonumber\\&\quad +m_{24}\left({{{\partial }m_{31}} \over{{\partial }x_i}}m_{42}+m_{31}{{{\partial }m_{42}} \over{{\partial }x_i}}-{{{\partial }m_{41}} \over{{\partial }x_i}}m_{32}-m_{41}{{{\partial }m_{32}} \over{{\partial }x_i}} \right) \end{align}

Additionally, it holds that

(36) \begin{align} {{{\partial }M_{14}} \over{{\partial }x_i}}&={{{\partial }m_{21}} \over{{\partial }x_i}}\left(m_{32}m_{43}-m_{42}m_{33}\right)+\nonumber\\&\quad +m_{21}\left({{{\partial }m_{32}} \over{{\partial }x_i}}m_{43}+m_{32}{{{\partial }m_{43}} \over{{\partial }x_i}}-{{{\partial }m_{42}} \over{{\partial }x_i}}m_{33}-m_{42}{{{\partial }m_{33}} \over{{\partial }x_i}} \right)-\nonumber\\&\quad -{{{\partial }m_{22}} \over{{\partial }x_i}}\left(m_{31}m_{43}-m_{41}m_{33}\right)-\nonumber\\&\quad -m_{22}\left({{{\partial }m_{31}} \over{{\partial }x_i}}m_{43}+m_{31}{{{\partial }m_{43}} \over{{\partial }x_i}}-{{{\partial }m_{41}} \over{{\partial }x_i}}m_{33}-m_{41}{{{\partial }m_{33}} \over{{\partial }x_i}} \right)+\nonumber\\&\quad +{{{\partial }m_{23}} \over{{\partial }x_i}}\left(m_{31}m_{42}-m_{41}m_{32}\right)+\nonumber\\&\quad +m_{23}\left({{{\partial }m_{31}} \over{{\partial }x_i}}m_{42}+m_{31}{{{\partial }m_{42}} \over{{\partial }x_i}}-{{{\partial }m_{41}} \over{{\partial }x_i}}m_{32}-m_{41}{{{\partial }m_{32}} \over{{\partial }x_i}} \right) \end{align}

In a similar manner, one obtains

(37) \begin{align} {{{\partial }M_{21}} \over{{\partial }x_i}}&={{{\partial }m_{12}} \over{{\partial }x_i}}\left(m_{33}m_{44}-m_{43}m_{34}\right)+\nonumber\\&\quad +m_{12}\left({{{\partial }m_{33}} \over{{\partial }x_i}}m_{44}+m_{33}{{{\partial }m_{44}} \over{{\partial }x_i}}-{{{\partial }m_{43}} \over{{\partial }x_i}}m_{34}-m_{43}{{{\partial }m_{34}} \over{{\partial }x_i}} \right)-\nonumber\\&\quad -{{{\partial }m_{13}} \over{{\partial }x_i}}\left(m_{32}m_{44}-m_{42}m_{34}\right)-\nonumber\\&\quad -m_{13}\left({{{\partial }m_{32}} \over{{\partial }x_i}}m_{44}+m_{32}{{{\partial }m_{44}} \over{{\partial }x_i}}-{{{\partial }m_{42}} \over{{\partial }x_i}}m_{34}-m_{42}{{{\partial }m_{34}} \over{{\partial }x_i}} \right)+\nonumber\\&\quad +{{{\partial }m_{14}} \over{{\partial }x_i}}\left(m_{32}m_{43}-m_{42}m_{33}\right)+\nonumber\\&\quad +m_{14}\left({{{\partial }m_{32}} \over{{\partial }x_i}}m_{43}+m_{32}{{{\partial }m_{43}} \over{{\partial }x_i}}-{{{\partial }m_{42}} \over{{\partial }x_i}}m_{33}-m_{42}{{{\partial }m_{33}} \over{{\partial }x_i}} \right) \end{align}

Equivalently, one has

(38) \begin{align} {{{\partial }M_{22}} \over{{\partial }x_i}}&={{{\partial }m_{11}} \over{{\partial }x_i}}\left(m_{33}m_{44}-m_{43}m_{34}\right)+\nonumber\\&\quad +m_{11}\left({{{\partial }m_{33}} \over{{\partial }x_i}}m_{44}+m_{33}{{{\partial }m_{44}} \over{{\partial }x_i}}-{{{\partial }m_{43}} \over{{\partial }x_i}}m_{34}-m_{43}{{{\partial }m_{34}} \over{{\partial }x_i}} \right)-\nonumber\\&\quad -{{{\partial }m_{13}} \over{{\partial }x_i}}\left(m_{31}m_{44}-m_{41}m_{34}\right)-\nonumber\\&\quad -m_{13}\left({{{\partial }m_{31}} \over{{\partial }x_i}}m_{44}+m_{31}{{{\partial }m_{44}} \over{{\partial }x_i}}-{{{\partial }m_{41}} \over{{\partial }x_i}}m_{34}-m_{41}{{{\partial }m_{34}} \over{{\partial }x_i}} \right)+\nonumber\\&\quad +{{{\partial }m_{14}} \over{{\partial }x_i}}\left(m_{31}m_{43}-m_{41}m_{33}\right)+\nonumber\\&\quad +m_{14}\left({{{\partial }m_{31}} \over{{\partial }x_i}}m_{43}+m_{31}{{{\partial }m_{43}} \over{{\partial }x_i}}-{{{\partial }m_{41}} \over{{\partial }x_i}}m_{33}-m_{41}{{{\partial }m_{33}} \over{{\partial }x_i}} \right) \end{align}

Following this procedure, one gets

(39) \begin{align} {{{\partial }M_{23}} \over{{\partial }x_i}}&={{{\partial }m_{11}} \over{{\partial }x_i}}\left(m_{32}m_{44}-m_{42}m_{34}\right)+\nonumber\\&\quad +m_{11}\left({{{\partial }m_{32}} \over{{\partial }x_i}}m_{44}+m_{32}{{{\partial }m_{44}} \over{{\partial }x_i}}-{{{\partial }m_{42}} \over{{\partial }x_i}}m_{34}-m_{42}{{{\partial }m_{34}} \over{{\partial }x_i}} \right)-\nonumber\\&\quad -{{{\partial }m_{12}} \over{{\partial }x_i}}\left(m_{31}m_{44}-m_{41}m_{34}\right)-\nonumber\\&\quad -m_{12}\left({{{\partial }m_{31}} \over{{\partial }x_i}}m_{44}+m_{31}{{{\partial }m_{44}} \over{{\partial }x_i}}-{{{\partial }m_{41}} \over{{\partial }x_i}}m_{34}-m_{41}{{{\partial }m_{34}} \over{{\partial }x_i}} \right)+\nonumber\\&\quad +{{{\partial }m_{14}} \over{{\partial }x_i}}\left(m_{31}m_{42}-m_{41}m_{32}\right)+\nonumber\\&\quad +m_{14}\left({{{\partial }m_{31}} \over{{\partial }x_i}}m_{42}+m_{31}{{{\partial }m_{42}} \over{{\partial }x_i}}-{{{\partial }m_{41}} \over{{\partial }x_i}}m_{32}-m_{41}{{{\partial }m_{32}} \over{{\partial }x_i}} \right) \end{align}

Additionally, it holds that

(40) \begin{align} {{{\partial }M_{24}} \over{{\partial }x_i}}&={{{\partial }m_{11}} \over{{\partial }x_i}}\left(m_{32}m_{43}-m_{42}m_{33}\right)+\nonumber\\&\quad +m_{11}\left({{{\partial }m_{32}} \over{{\partial }x_i}}m_{43}+m_{32}{{{\partial }m_{43}} \over{{\partial }x_i}}-{{{\partial }m_{42}} \over{{\partial }x_i}}m_{33}-m_{42}{{{\partial }m_{33}} \over{{\partial }x_i}} \right)-\nonumber\\&\quad -{{{\partial }m_{12}} \over{{\partial }x_i}}\left(m_{31}m_{43}-m_{41}m_{33}\right)-\nonumber\\&\quad -m_{12}\left({{{\partial }m_{31}} \over{{\partial }x_i}}m_{43}+m_{31}{{{\partial }m_{43}} \over{{\partial }x_i}}-{{{\partial }m_{41}} \over{{\partial }x_i}}m_{33}-m_{41}{{{\partial }m_{33}} \over{{\partial }x_i}} \right)+\nonumber\\&\quad +{{{\partial }m_{13}} \over{{\partial }x_i}}\left(m_{31}m_{42}-m_{41}m_{32}\right)+\nonumber\\&\quad +m_{13}\left({{{\partial }m_{31}} \over{{\partial }x_i}}m_{42}+m_{31}{{{\partial }m_{42}} \over{{\partial }x_i}}-{{{\partial }m_{41}} \over{{\partial }x_i}}m_{32}-m_{41}{{{\partial }m_{32}} \over{{\partial }x_i}} \right) \end{align}

In this context, one obtains

(41) \begin{align} {{{\partial }M_{31}} \over{{\partial }x_i}}&={{{\partial }m_{12}} \over{{\partial }x_i}}\left(m_{23}m_{44}-m_{43}m_{24}\right)+\nonumber\\&\quad +m_{12}\left({{{\partial }m_{23}} \over{{\partial }x_i}}m_{44}+m_{23}{{{\partial }m_{44}} \over{{\partial }x_i}}-{{{\partial }m_{43}} \over{{\partial }x_i}}m_{24}-m_{43}{{{\partial }m_{24}} \over{{\partial }x_i}} \right)-\nonumber\\&\quad -{{{\partial }m_{13}} \over{{\partial }x_i}}\left(m_{22}m_{44}-m_{42}m_{24}\right)-\nonumber\\&\quad -m_{13}\left({{{\partial }m_{22}} \over{{\partial }x_i}}m_{44}+m_{22}{{{\partial }m_{44}} \over{{\partial }x_i}}-{{{\partial }m_{42}} \over{{\partial }x_i}}m_{24}-m_{42}{{{\partial }m_{24}} \over{{\partial }x_i}} \right)+\nonumber\\&\quad +{{{\partial }m_{14}} \over{{\partial }x_i}}\left(m_{22}m_{43}-m_{42}m_{23}\right)+\nonumber\\&\quad +m_{14}\left({{{\partial }m_{22}} \over{{\partial }x_i}}m_{43}+m_{22}{{{\partial }m_{43}} \over{{\partial }x_i}}-{{{\partial }m_{42}} \over{{\partial }x_i}}m_{23}-m_{42}{{{\partial }m_{23}} \over{{\partial }x_i}} \right) \end{align}

Additionally, one has

(42) \begin{align} {{{\partial }M_{32}} \over{{\partial }x_i}}&={{{\partial }m_{11}} \over{{\partial }x_i}}\left(m_{23}m_{44}-m_{43}m_{24}\right)+\nonumber\\&\quad +m_{11}\left({{{\partial }m_{23}} \over{{\partial }x_i}}m_{44}+m_{23}{{{\partial }m_{44}} \over{{\partial }x_i}}-{{{\partial }m_{43}} \over{{\partial }x_i}}m_{24}-m_{43}{{{\partial }m_{24}} \over{{\partial }x_i}} \right)-\nonumber\\&\quad -{{{\partial }m_{13}} \over{{\partial }x_i}}\left(m_{21}m_{44}-m_{41}m_{24}\right)-\nonumber\\&\quad -m_{13}\left({{{\partial }m_{21}} \over{{\partial }x_i}}m_{44}+m_{21}{{{\partial }m_{44}} \over{{\partial }x_i}}-{{{\partial }m_{41}} \over{{\partial }x_i}}m_{24}-m_{41}{{{\partial }m_{24}} \over{{\partial }x_i}} \right)+\nonumber\\&\quad +{{{\partial }m_{14}} \over{{\partial }x_i}}\left(m_{21}m_{43}-m_{41}m_{23}\right)+\nonumber\\&\quad +m_{14}\left({{{\partial }m_{21}} \over{{\partial }x_i}}m_{43}+m_{21}{{{\partial }m_{43}} \over{{\partial }x_i}}-{{{\partial }m_{41}} \over{{\partial }x_i}}m_{23}-m_{41}{{{\partial }m_{23}} \over{{\partial }x_i}} \right) \end{align}

Furthermore, one has

(43) \begin{align} {{{\partial }M_{33}} \over{{\partial }x_i}}&={{{\partial }m_{11}} \over{{\partial }x_i}}\left(m_{22}m_{44}-m_{42}m_{24}\right)+\nonumber\\&\quad +m_{11}\left({{{\partial }m_{22}} \over{{\partial }x_i}}m_{44}+m_{22}{{{\partial }m_{44}} \over{{\partial }x_i}}-{{{\partial }m_{42}} \over{{\partial }x_i}}m_{24}-m_{42}{{{\partial }m_{24}} \over{{\partial }x_i}} \right)-\nonumber\\&\quad -{{{\partial }m_{12}} \over{{\partial }x_i}}\left(m_{21}m_{44}-m_{41}m_{24}\right)-\nonumber\\&\quad -m_{12}\left({{{\partial }m_{21}} \over{{\partial }x_i}}m_{44}+m_{21}{{{\partial }m_{44}} \over{{\partial }x_i}}-{{{\partial }m_{41}} \over{{\partial }x_i}}m_{24}-m_{41}{{{\partial }m_{24}} \over{{\partial }x_i}} \right)+\nonumber\\&\quad +{{{\partial }m_{14}} \over{{\partial }x_i}}\left(m_{21}m_{42}-m_{41}m_{22}\right)+\nonumber\\&\quad +m_{14}\left({{{\partial }m_{21}} \over{{\partial }x_i}}m_{42}+m_{21}{{{\partial }m_{42}} \over{{\partial }x_i}}-{{{\partial }m_{41}} \over{{\partial }x_i}}m_{22}-m_{41}{{{\partial }m_{22}} \over{{\partial }x_i}} \right) \end{align}

Continuing in this manner, one gets

(44) \begin{align} {{{\partial }M_{34}} \over{{\partial }x_i}}&={{{\partial }m_{11}} \over{{\partial }x_i}}\left(m_{22}m_{43}-m_{42}m_{23}\right)+\nonumber\\&\quad +m_{11}\left({{{\partial }m_{22}} \over{{\partial }x_i}}m_{43}+m_{22}{{{\partial }m_{43}} \over{{\partial }x_i}}-{{{\partial }m_{42}} \over{{\partial }x_i}}m_{23}-m_{42}{{{\partial }m_{23}} \over{{\partial }x_i}} \right)-\nonumber\\&\quad -{{{\partial }m_{12}} \over{{\partial }x_i}}\left(m_{21}m_{43}-m_{41}m_{23}\right)-\nonumber\\&\quad -m_{12}\left({{{\partial }m_{21}} \over{{\partial }x_i}}m_{43}+m_{21}{{{\partial }m_{43}} \over{{\partial }x_i}}-{{{\partial }m_{41}} \over{{\partial }x_i}}m_{23}-m_{41}{{{\partial }m_{23}} \over{{\partial }x_i}} \right)+\nonumber\\&\quad +{{{\partial }m_{13}} \over{{\partial }x_i}}\left(m_{21}m_{42}-m_{41}m_{22}\right)+\nonumber\\&\quad +m_{13}\left({{{\partial }m_{21}} \over{{\partial }x_i}}m_{42}+m_{21}{{{\partial }m_{42}} \over{{\partial }x_i}}-{{{\partial }m_{41}} \over{{\partial }x_i}}m_{22}-m_{41}{{{\partial }m_{22}} \over{{\partial }x_i}} \right) \end{align}

Besides, one has

(45) \begin{align} {{{\partial }M_{41}} \over{{\partial }x_i}}&={{{\partial }m_{12}} \over{{\partial }x_i}}\left(m_{23}m_{34}-m_{33}m_{24}\right)+\nonumber \\[3pt]&\quad +m_{12}\left({{{\partial }m_{23}} \over{{\partial }x_i}}m_{34}+m_{23}{{{\partial }m_{34}} \over{{\partial }x_i}}-{{{\partial }m_{33}} \over{{\partial }x_i}}m_{24}-m_{33}{{{\partial }m_{24}} \over{{\partial }x_i}} \right)-\nonumber \\[3pt]&\quad -{{{\partial }m_{13}} \over{{\partial }x_i}}\left(m_{22}m_{34}-m_{32}m_{24}\right) \nonumber \\[3pt]&\quad -m_{13}\left({{{\partial }m_{22}} \over{{\partial }x_i}}m_{34}+m_{22}{{{\partial }m_{34}} \over{{\partial }x_i}}-{{{\partial }m_{32}} \over{{\partial }x_i}}m_{24}-m_{32}{{{\partial }m_{24}} \over{{\partial }x_i}} \right)+\nonumber \\[3pt]&\quad +{{{\partial }m_{14}} \over{{\partial }x_i}}\left(m_{22}m_{33}-m_{32}m_{23}\right)+\nonumber \\[3pt]&\quad +m_{14}\left({{{\partial }m_{22}} \over{{\partial }x_i}}m_{33}+m_{22}{{{\partial }m_{33}} \over{{\partial }x_i}}-{{{\partial }m_{32}} \over{{\partial }x_i}}m_{23}-m_{32}{{{\partial }m_{23}} \over{{\partial }x_i}} \right) \end{align}

Equivalently, one obtains

(46) \begin{align} {{{\partial }M_{42}} \over{{\partial }x_i}}&={{{\partial }m_{11}} \over{{\partial }x_i}}\left(m_{32}m_{43}-m_{33}m_{42}\right)+\nonumber \\[3pt]&\quad +m_{11}\left({{{\partial }m_{32}} \over{{\partial }x_i}}m_{43}+m_{32}{{{\partial }m_{43}} \over{{\partial }x_i}}-{{{\partial }m_{33}} \over{{\partial }x_i}}m_{42}-m_{33}{{{\partial }m_{42}} \over{{\partial }x_i}} \right)-\nonumber \\[3pt]&\quad -{{{\partial }m_{31}} \over{{\partial }x_i}}\left(m_{12}m_{43}-m_{13}m_{42}\right) \nonumber \\[3pt]&\quad -m_{31}\left({{{\partial }m_{12}} \over{{\partial }x_i}}m_{43}+m_{12}{{{\partial }m_{43}} \over{{\partial }x_i}}-{{{\partial }m_{13}} \over{{\partial }x_i}}m_{42}-m_{13}{{{\partial }m_{42}} \over{{\partial }x_i}} \right)+\nonumber \\[3pt]&\quad +{{{\partial }m_{41}} \over{{\partial }x_i}}\left(m_{12}m_{33}-m_{13}m_{32}\right)+\nonumber \\[3pt]&\quad +m_{41}\left({{{\partial }m_{12}} \over{{\partial }x_i}}m_{33}+m_{12}{{{\partial }m_{33}} \over{{\partial }x_i}}-{{{\partial }m_{13}} \over{{\partial }x_i}}m_{32}-m_{13}{{{\partial }m_{32}} \over{{\partial }x_i}} \right) \end{align}

In a similar manner, one gets

(47) \begin{align} {{{\partial }M_{43}} \over{{\partial }x_i}}&={{{\partial }m_{11}} \over{{\partial }x_i}}\left(m_{22}m_{34}-m_{32}m_{24}\right)+\nonumber \\[3pt]&\quad +m_{11}\left({{{\partial }m_{22}} \over{{\partial }x_i}}m_{34}+m_{22}{{{\partial }m_{34}} \over{{\partial }x_i}}-{{{\partial }m_{32}} \over{{\partial }x_i}}m_{24}-m_{32}{{{\partial }m_{24}} \over{{\partial }x_i}} \right)-\nonumber \\[3pt]&\quad -{{{\partial }m_{12}} \over{{\partial }x_i}}\left(m_{21}m_{34}-m_{31}m_{24}\right) \nonumber \\[3pt]&\quad -m_{12}\left({{{\partial }m_{21}} \over{{\partial }x_i}}m_{34}+m_{21}{{{\partial }m_{34}} \over{{\partial }x_i}}-{{{\partial }m_{31}} \over{{\partial }x_i}}m_{24}-m_{31}{{{\partial }m_{24}} \over{{\partial }x_i}} \right)+\nonumber \\[3pt]&\quad +{{{\partial }m_{14}} \over{{\partial }x_i}}\left(m_{21}m_{32}-m_{31}m_{22}\right)+\nonumber \\[3pt]&\quad +m_{14}\left({{{\partial }m_{21}} \over{{\partial }x_i}}m_{32}+m_{21}{{{\partial }m_{32}} \over{{\partial }x_i}}-{{{\partial }m_{31}} \over{{\partial }x_i}}m_{22}-m_{31}{{{\partial }m_{22}} \over{{\partial }x_i}} \right) \end{align}

Finally, one has that

(48) \begin{align} {{{\partial }M_{44}} \over{{\partial }x_i}}&={{{\partial }m_{11}} \over{{\partial }x_i}}\left(m_{22}m_{33}-m_{32}m_{23}\right)+\nonumber \\[3pt]&\quad +m_{11}\left({{{\partial }m_{22}} \over{{\partial }x_i}}m_{33}+m_{22}{{{\partial }m_{33}} \over{{\partial }x_i}}-{{{\partial }m_{32}} \over{{\partial }x_i}}m_{23}-m_{32}{{{\partial }m_{23}} \over{{\partial }x_i}} \right)-\nonumber \\[3pt]&\quad -{{{\partial }m_{12}} \over{{\partial }x_i}}\left(m_{21}m_{33}-m_{31}m_{23}\right) \nonumber \\[3pt]&\quad -m_{12}\left({{{\partial }m_{21}} \over{{\partial }x_i}}m_{33}+m_{21}{{{\partial }m_{33}} \over{{\partial }x_i}}-{{{\partial }m_{31}} \over{{\partial }x_i}}m_{23}-m_{31}{{{\partial }m_{23}} \over{{\partial }x_i}} \right)+\nonumber \\[3pt]&\quad +{{{\partial }m_{13}} \over{{\partial }x_i}}\left(m_{21}m_{32}-m_{31}m_{22}\right)+\nonumber \\[3pt]&\quad +m_{13}\left({{{\partial }m_{21}} \over{{\partial }x_i}}m_{32}+m_{21}{{{\partial }m_{32}} \over{{\partial }x_i}}-{{{\partial }m_{31}} \over{{\partial }x_i}}m_{22}-m_{31}{{{\partial }m_{22}} \over{{\partial }x_i}} \right) \end{align}

Regarding the partial derivatives of the determinant $\text{det}M$ , one has for $i=1,2,\cdots,8$

(49) \begin{align} {{{\partial }\text{det}M} \over{{\partial }x_i}}&={{{\partial }m_{11}} \over{{\partial }x_i}}M_{11}+m_{11}{{{\partial }M_{11}} \over{{\partial }x_i}}-{{{\partial }m_{12}} \over{{\partial }x_i}}M_{12}-m_{12}{{{\partial }M_{12}} \over{{\partial }x_i}}+ \nonumber\\[3pt]& +{{{\partial }m_{13}} \over{{\partial }x_i}}M_{13}+m_{13}{{{\partial }M_{13}} \over{{\partial }x_i}}-{{{\partial }m_{14}} \over{{\partial }x_i}}M_{14}-m_{14}{{{\partial }M_{14}} \over{{\partial }x_i}} \end{align}

Next, the derivatives of the elements of the inertia matrix $M$ are computed.

It holds that $m_{11}=p_1+{p_2}\text{cos}(x_3)$ . Thus one has: $\dfrac{{{\partial }m_{11}} }{{{\partial }x_1}}=0$ , $\dfrac{{{\partial }m_{11}} }{{{\partial }x_2}}=0$ , $\dfrac{{{\partial }m_{11}} }{{{\partial }x_3}}=-{p_2}\text{sin}(x_3)$ , $\dfrac{{{\partial }m_{11}} }{{{\partial }x_4}}=0$ , $\dfrac{{{\partial }m_{11}} }{{{\partial }x_5}}=0$ , $\dfrac{{{\partial }m_{11}} }{{{\partial }x_6}}=0$ , $\dfrac{{{\partial }m_{11}} }{{{\partial }x_7}}=0$ , $\dfrac{{{\partial }m_{11}} }{{{\partial }x_8}}=0$ .

Besides, it holds that $m_{12}=m_{21}=p_3+0.5{p_2}\text{sin}(x_3)x_4$ , thus $\dfrac{{{\partial }m_{12}} }{{{\partial }x_1}}=\dfrac{{{\partial }m_{21}} }{{{\partial }x_1}}=0$ , $\dfrac{{{\partial }m_{12}} }{{{\partial }x_2}}=\dfrac{{{\partial }m_{21}} }{{{\partial }x_2}}=0$ , $\dfrac{{{\partial }m_{12}} }{{{\partial }x_3}}=\dfrac{{{\partial }m_{21}} }{{{\partial }x_3}}=0.5{p_2}\text{cos}(x_3)x_4$ , $\dfrac{{{\partial }m_{12}} }{{{\partial }x_4}}=\dfrac{{{\partial }m_{21}} }{{{\partial }x_4}}=0.5{p_2}\text{sin}(x_3)$ , $\dfrac{{{\partial }m_{12}} }{{{\partial }x_5}}=\dfrac{{{\partial }m_{21}} }{{{\partial }x_5}}=0$ , $\dfrac{{{\partial }m_{12}} }{{{\partial }x_6}}=\dfrac{{{\partial }m_{21}} }{{{\partial }x_6}}=0$ , $\dfrac{{{\partial }m_{12}} }{{{\partial }x_7}}=\dfrac{{{\partial }m_{21}} }{{{\partial }x_7}}=0$ , $\dfrac{{{\partial }m_{12}} }{{{\partial }x_8}}=\dfrac{{{\partial }m_{21}} }{{{\partial }x_8}}=0$ .

Moreover, it holds that $m_{13}=m_{31}=0$ , thus $\dfrac{{{\partial }m_{13}} }{{{\partial }x_1}}=\dfrac{{{\partial }m_{31}} }{{{\partial }x_1}}=0$ , $\dfrac{{{\partial }m_{13}} }{{{\partial }x_2}}=\dfrac{{{\partial }m_{31}} }{{{\partial }x_2}}=0$ , $\dfrac{{{\partial }m_{13}} }{{{\partial }x_3}}=\dfrac{{{\partial }m_{31}} }{{{\partial }x_3}}=0$ , $\dfrac{{{\partial }m_{13}} }{{{\partial }x_4}}=\dfrac{{{\partial }m_{31}} }{{{\partial }x_4}}=0$ , $\dfrac{{{\partial }m_{13}} }{{{\partial }x_5}}=\dfrac{{{\partial }m_{31}} }{{{\partial }x_5}}=0$ , $\dfrac{{{\partial }m_{13}} }{{{\partial }x_6}}=\dfrac{{{\partial }m_{31}} }{{{\partial }x_6}}=0$ , $\dfrac{{{\partial }m_{13}} }{{{\partial }x_7}}=\dfrac{{{\partial }m_{31}} }{{{\partial }x_7}}=0$ , $\dfrac{{{\partial }m_{13}} }{{{\partial }x_8}}=\dfrac{{{\partial }m_{31}} }{{{\partial }x_8}}=0$ .

Additionally, it holds that $m_{14}=m_{41}=-p_5$ , thus $\dfrac{{{\partial }m_{14}} }{{{\partial }x_1}}=\dfrac{{{\partial }m_{41}} }{{{\partial }x_1}}=0$ , $\dfrac{{{\partial }m_{14}} }{{{\partial }x_2}}=\dfrac{{{\partial }m_{41}} }{{{\partial }x_2}}=0$ , $\dfrac{{{\partial }m_{14}} }{{{\partial }x_3}}=\dfrac{{{\partial }m_{41}} }{{{\partial }x_3}}=0$ , $\dfrac{{{\partial }m_{14}} }{{{\partial }x_4}}=\dfrac{{{\partial }m_{41}} }{{{\partial }x_4}}=0$ , $\dfrac{{{\partial }m_{14}} }{{{\partial }x_5}}=\dfrac{{{\partial }m_{41}} }{{{\partial }x_5}}=0$ , $\dfrac{{{\partial }m_{14}} }{{{\partial }x_6}}=\dfrac{{{\partial }m_{41}} }{{{\partial }x_6}}=0$ , $\dfrac{{{\partial }m_{14}} }{{{\partial }x_7}}=\dfrac{{{\partial }m_{41}} }{{{\partial }x_7}}=0$ , $\dfrac{{{\partial }m_{14}} }{{{\partial }x_8}}=\dfrac{{{\partial }m_{41}} }{{{\partial }x_8}}=0$ .

Moreover, it holds that $m_{22}=p_2$ , thus $\dfrac{{{\partial }m_{22}}}{{{\partial }x_1}}=0$ , $\dfrac{{{\partial }m_{22}}}{{{\partial }x_2}}=0$ , $\dfrac{{{\partial }m_{22}}}{{{\partial }x_3}}=0$ , $\dfrac{{{\partial }m_{22}}}{{{\partial }x_4}}=0$ , $\dfrac{{{\partial }m_{22}}}{{{\partial }x_5}}=0$ , $\dfrac{{{\partial }m_{22}}}{{{\partial }x_6}}=0$ , $\dfrac{{{\partial }m_{22}}}{{{\partial }x_7}}=0$ , $\dfrac{{{\partial }m_{22}}}{{{\partial }x_8}}=0$

Furthermore, it holds that $m_{23}=m_{32}=0$ , thus $\dfrac{{{\partial }m_{23}}}{{{\partial }x_1}}=\dfrac{{{\partial }m_{32}} }{{{\partial }x_1}}=0$ , $\dfrac{{{\partial }m_{23}}}{{{\partial }x_2}}=\dfrac{{{\partial }m_{32}} }{{{\partial }x_2}}=0$ , $\dfrac{{{\partial }m_{23}}}{{{\partial }x_3}}=\dfrac{{{\partial }m_{32}} }{{{\partial }x_3}}=0$ , $\dfrac{{{\partial }m_{23}}}{{{\partial }x_4}}=\dfrac{{{\partial }m_{32}} }{{{\partial }x_4}}=0$ , $\dfrac{{{\partial }m_{23}}}{{{\partial }x_5}}=\dfrac{{{\partial }m_{32}} }{{{\partial }x_5}}=0$ , $\dfrac{{{\partial }m_{23}}}{{{\partial }x_6}}=\dfrac{{{\partial }m_{32}} }{{{\partial }x_6}}=0$ , $\dfrac{{{\partial }m_{23}}}{{{\partial }x_7}}=\dfrac{{{\partial }m_{32}} }{{{\partial }x_7}}=0$ , $\dfrac{{{\partial }m_{23}}}{{{\partial }x_8}}=\dfrac{{{\partial }m_{32}} }{{{\partial }x_8}}=0$ .

Besides, it holds that $m_{24}=m_{42}=-p_5$ , thus $\dfrac{{{\partial }m_{24}} }{{{\partial }x_1}}=\dfrac{{{\partial }m_{42}} }{{{\partial }x_1}}=0$ , $\dfrac{{{\partial }m_{24}} }{{{\partial }x_2}}=\dfrac{{{\partial }m_{42}} }{{{\partial }x_2}}=0$ , $\dfrac{{{\partial }m_{24}} }{{{\partial }x_3}}=\dfrac{{{\partial }m_{42}} }{{{\partial }x_3}}=0$ , $\dfrac{{{\partial }m_{24}} }{{{\partial }x_4}}=\dfrac{{{\partial }m_{42}} }{{{\partial }x_4}}=0$ , $\dfrac{{{\partial }m_{24}} }{{{\partial }x_5}}=\dfrac{{{\partial }m_{42}} }{{{\partial }x_5}}=0$ , $\dfrac{{{\partial }m_{24}} }{{{\partial }x_6}}=\dfrac{{{\partial }m_{42}} }{{{\partial }x_6}}=0$ , $\dfrac{{{\partial }m_{24}} }{{{\partial }x_7}}=\dfrac{{{\partial }m_{42}} }{{{\partial }x_7}}=0$ , $\dfrac{{{\partial }m_{24}} }{{{\partial }x_8}}=\dfrac{{{\partial }m_{42}} }{{{\partial }x_8}}=0$ .

Moreover, it holds that $m_{33}=p_4$ , thus $\dfrac{{{\partial }m_{33}}}{{{\partial }x_1}}=0$ , $\dfrac{{{\partial }m_{33}}}{{{\partial }x_2}}=0$ , $\dfrac{{{\partial }m_{33}}}{{{\partial }x_3}}=0$ , $\dfrac{{{\partial }m_{33}}}{{{\partial }x_4}}=0$ , $\dfrac{{{\partial }m_{33}}}{{{\partial }x_5}}=0$ , $\dfrac{{{\partial }m_{33}}}{{{\partial }x_6}}=0$ , $\dfrac{{{\partial }m_{33}}}{{{\partial }x_7}}=0$ , $\dfrac{{{\partial }m_{33}}}{{{\partial }x_8}}=0$

Additionally, it holds that $m_{34}=m_{43}=0$ ; thus, one obtains $\dfrac{{{\partial }m_{34}} }{{{\partial }x_i}}=\dfrac{{{\partial }m_{43}} }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Finally, it holds that $m_{44}=p_5$ , thus $\dfrac{{{\partial }m_{44}}}{{{\partial }x_1}}=0$ , $\dfrac{{{\partial }m_{44}}}{{{\partial }x_2}}=0$ , $\dfrac{{{\partial }m_{44}}}{{{\partial }x_3}}=0$ , $\dfrac{{{\partial }m_{44}}}{{{\partial }x_4}}=0$ , $\dfrac{{{\partial }m_{44}}}{{{\partial }x_5}}=0$ , $\dfrac{{{\partial }m_{44}}}{{{\partial }x_6}}=0$ , $\dfrac{{{\partial }m_{44}}}{{{\partial }x_7}}=0$ , $\dfrac{{{\partial }m_{44}}}{{{\partial }x_8}}=0$ .
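As an indicative cross-check of the above element-wise differentiations, the following short symbolic sketch (in Python, using sympy) assembles the inertia matrix from the element expressions listed above (with $m_{34}=m_{43}=0$ assumed) and reproduces the partial derivatives of its elements and of its determinant; the symbols used are placeholders and not part of the original implementation.

```python
import sympy as sp

# Hypothetical symbols for the state variables and the model parameters
x = sp.symbols('x1:9')                      # x1, ..., x8
p1, p2, p3, p4, p5 = sp.symbols('p1:6')

# Inertia matrix assembled from the element expressions listed above
m11 = p1 + p2*sp.cos(x[2])                  # x[2] stands for x3
m12 = p3 + sp.Rational(1, 2)*p2*sp.sin(x[2])*x[3]
M = sp.Matrix([[m11,  m12, 0,  -p5],
               [m12,  p2,  0,  -p5],
               [0,    0,   p4,  0],
               [-p5, -p5,  0,   p5]])

# Element-wise partial derivatives dm_ij/dx_k
dM = [[[sp.diff(M[i, j], xk) for xk in x] for j in range(4)] for i in range(4)]
print(dM[0][0][2])                          # prints -p2*sin(x3)

# Partial derivatives of det(M) with respect to each state variable (cf. Eq. (49))
ddet = [sp.simplify(sp.diff(M.det(), xk)) for xk in x]
```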

Finally, regarding the computation of the partial derivatives of the Coriolis forces vector, one has

(50) \begin{align} C(x,\dot{x})\dot{x}=\begin{pmatrix} c_{11}x_2+c_{12}x_4+c_{13}x_6+c_{14}{x_8} \\ \\[-8pt] c_{21}x_2+c_{22}x_4+c_{23}x_6+c_{24}{x_8} \\ \\[-8pt] c_{31}x_2+c_{32}x_4+c_{33}x_6+c_{34}{x_8} \\ \\[-8pt] c_{41}x_2+c_{42}x_4+c_{43}x_6+c_{44}{x_8} \end{pmatrix} \end{align}

It holds that for $i=1,3,5,7$

(51) \begin{align} {{{\partial }{c_1}} \over{{\partial }x_i}}={{{\partial }{c_{11}}} \over{{\partial }x_i}}x_2+{{{\partial }{c_{12}}} \over{{\partial }x_i}}x_4+{{{\partial }{c_{13}}} \over{{\partial }x_i}}x_6+{{{\partial }{c_{14}}} \over{{\partial }x_i}}x_8 \end{align}

and also

(52) \begin{align} {{{\partial }{c_1}} \over{{\partial }x_2}}={{{\partial }{c_{11}}} \over{{\partial }x_2}}x_2+c_{11}+{{{\partial }{c_{12}}} \over{{\partial }x_2}}x_4+{{{\partial }{c_{13}}} \over{{\partial }x_2}}x_6+{{{\partial }{c_{14}}} \over{{\partial }x_2}}x_8 \end{align}
(53) \begin{align} {{{\partial }{c_1}} \over{{\partial }x_4}}={{{\partial }{c_{11}}} \over{{\partial }x_4}}x_2+{{{\partial }{c_{12}}} \over{{\partial }x_4}}x_4+c_{12}+{{{\partial }{c_{13}}} \over{{\partial }x_4}}x_6+{{{\partial }{c_{14}}} \over{{\partial }x_4}}x_8 \end{align}
(54) \begin{align} {{{\partial }{c_1}} \over{{\partial }x_6}}={{{\partial }{c_{11}}} \over{{\partial }x_6}}x_2+{{{\partial }{c_{12}}} \over{{\partial }x_6}}x_4+{{{\partial }{c_{13}}} \over{{\partial }x_6}}x_6+c_{13}+{{{\partial }{c_{14}}} \over{{\partial }x_6}}x_8 \end{align}
(55) \begin{align} {{{\partial }{c_1}} \over{{\partial }x_8}}={{{\partial }{c_{11}}} \over{{\partial }x_8}}x_2+{{{\partial }{c_{12}}} \over{{\partial }x_8}}x_4+{{{\partial }{c_{13}}} \over{{\partial }x_8}}x_6+{{{\partial }{c_{14}}} \over{{\partial }x_8}}x_8+c_{14}\end{align}

Equivalently, it holds that for $i=1,3,5,7$

(56) \begin{align} {{{\partial }{c_2}} \over{{\partial }x_i}}={{{\partial }{c_{21}}} \over{{\partial }x_i}}x_2+{{{\partial }{c_{22}}} \over{{\partial }x_i}}x_4+{{{\partial }{c_{23}}} \over{{\partial }x_i}}x_6+{{{\partial }{c_{24}}} \over{{\partial }x_i}}x_8 \end{align}

and also

(57) \begin{align} {{{\partial }{c_2}} \over{{\partial }x_2}}={{{\partial }{c_{21}}} \over{{\partial }x_2}}x_2+c_{21}+{{{\partial }{c_{22}}} \over{{\partial }x_2}}x_4+{{{\partial }{c_{23}}} \over{{\partial }x_2}}x_6+{{{\partial }{c_{24}}} \over{{\partial }x_2}}x_8 \end{align}
(58) \begin{align} {{{\partial }{c_2}} \over{{\partial }x_4}}={{{\partial }{c_{21}}} \over{{\partial }x_4}}x_2+{{{\partial }{c_{22}}} \over{{\partial }x_4}}x_4+c_{22}+{{{\partial }{c_{23}}} \over{{\partial }x_4}}x_6+{{{\partial }{c_{24}}} \over{{\partial }x_4}}x_8 \end{align}
(59) \begin{align} {{{\partial }{c_2}} \over{{\partial }x_6}}={{{\partial }{c_{21}}} \over{{\partial }x_6}}x_2+{{{\partial }{c_{22}}} \over{{\partial }x_6}}x_4+{{{\partial }{c_{23}}} \over{{\partial }x_6}}x_6+c_{23}+{{{\partial }{c_{24}}} \over{{\partial }x_6}}x_8 \end{align}
(60) \begin{align} {{{\partial }{c_2}} \over{{\partial }x_8}}={{{\partial }{c_{21}}} \over{{\partial }x_8}}x_2+{{{\partial }{c_{22}}} \over{{\partial }x_8}}x_4+{{{\partial }{c_{23}}} \over{{\partial }x_8}}x_6+{{{\partial }{c_{24}}} \over{{\partial }x_8}}x_8+c_{24} \end{align}

Similarly, it holds that for $i=1,3,5,7$

(61) \begin{align} {{{\partial }{c_3}} \over{{\partial }x_i}}={{{\partial }{c_{31}}} \over{{\partial }x_i}}x_2+{{{\partial }{c_{32}}} \over{{\partial }x_i}}x_4+{{{\partial }{c_{33}}} \over{{\partial }x_i}}x_6+{{{\partial }{c_{34}}} \over{{\partial }x_i}}x_8 \end{align}

and also

(62) \begin{align} {{{\partial }{c_3}} \over{{\partial }x_2}}={{{\partial }{c_{31}}} \over{{\partial }x_2}}x_2+c_{31}+{{{\partial }{c_{32}}} \over{{\partial }x_2}}x_4+{{{\partial }{c_{33}}} \over{{\partial }x_2}}x_6+{{{\partial }{c_{34}}} \over{{\partial }x_2}}x_8 \end{align}
(63) \begin{align} {{{\partial }{c_3}} \over{{\partial }x_4}}={{{\partial }{c_{31}}} \over{{\partial }x_4}}x_2+{{{\partial }{c_{32}}} \over{{\partial }x_4}}x_4+c_{32}+{{{\partial }{c_{33}}} \over{{\partial }x_4}}x_6+{{{\partial }{c_{34}}} \over{{\partial }x_4}}x_8 \end{align}
(64) \begin{align} {{{\partial }{c_3}} \over{{\partial }x_6}}={{{\partial }{c_{31}}} \over{{\partial }x_6}}x_2+{{{\partial }{c_{32}}} \over{{\partial }x_6}}x_4+{{{\partial }{c_{33}}} \over{{\partial }x_6}}x_6+c_{33}+{{{\partial }{c_{34}}} \over{{\partial }x_6}}x_8 \end{align}
(65) \begin{align} {{{\partial }{c_3}} \over{{\partial }x_8}}={{{\partial }{c_{31}}} \over{{\partial }x_8}}x_2+{{{\partial }{c_{32}}} \over{{\partial }x_8}}x_4+{{{\partial }{c_{33}}} \over{{\partial }x_8}}x_6+{{{\partial }{c_{34}}} \over{{\partial }x_8}}x_8+c_{34} \end{align}

Finally, it holds that for $i=1,3,5,7$

(66) \begin{align} {{{\partial }{c_4}} \over{{\partial }x_i}}={{{\partial }{c_{41}}} \over{{\partial }x_i}}x_2+{{{\partial }{c_{42}}} \over{{\partial }x_i}}x_4+{{{\partial }{c_{43}}} \over{{\partial }x_i}}x_6+{{{\partial }{c_{44}}} \over{{\partial }x_i}}x_8 \end{align}

and also

(67) \begin{align} {{{\partial }{c_4}} \over{{\partial }x_2}}={{{\partial }{c_{41}}} \over{{\partial }x_2}}x_2+c_{41}+{{{\partial }{c_{42}}} \over{{\partial }x_2}}x_4+{{{\partial }{c_{43}}} \over{{\partial }x_2}}x_6+{{{\partial }{c_{44}}} \over{{\partial }x_2}}x_8 \end{align}
(68) \begin{align} {{{\partial }{c_4}} \over{{\partial }x_4}}={{{\partial }{c_{41}}} \over{{\partial }x_4}}x_2+{{{\partial }{c_{42}}} \over{{\partial }x_4}}x_4+c_{42}+{{{\partial }{c_{43}}} \over{{\partial }x_4}}x_6+{{{\partial }{c_{44}}} \over{{\partial }x_4}}x_8 \end{align}
(69) \begin{align} {{{\partial }{c_4}} \over{{\partial }x_6}}={{{\partial }{c_{41}}} \over{{\partial }x_6}}x_2+{{{\partial }{c_{42}}} \over{{\partial }x_6}}x_4+{{{\partial }{c_{43}}} \over{{\partial }x_6}}x_6+c_{43}+{{{\partial }{c_{44}}} \over{{\partial }x_6}}x_8 \end{align}
(70) \begin{align} {{{\partial }{c_4}} \over{{\partial }x_8}}={{{\partial }{c_{41}}} \over{{\partial }x_8}}x_2+{{{\partial }{c_{42}}} \over{{\partial }x_8}}x_4+{{{\partial }{c_{43}}} \over{{\partial }x_8}}x_6+{{{\partial }{c_{44}}} \over{{\partial }x_8}}x_8+c_{44} \end{align}
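The product-rule pattern of Eqs. (51)–(70) can be condensed into a single routine. The following sketch (in Python/numpy) is indicative: the arrays C and dC_dx are assumed to hold the current values $c_{ij}$ and their partial derivatives, which are computed next.

```python
import numpy as np

def coriolis_vector_jacobian(C, dC_dx, xdot):
    """Jacobian of C(x, xdot) @ xdot with respect to the full state
    x = [x1, ..., x8], following the pattern of Eqs. (51)-(70).

    C      : 4x4 array with the current values c_ij
    dC_dx  : 4x4x8 array with dC_dx[i, j, k] = d c_ij / d x_{k+1}
    xdot   : the joint velocities [x2, x4, x6, x8]
    """
    J = np.zeros((4, 8))
    vel_idx = [1, 3, 5, 7]                   # zero-based positions of x2, x4, x6, x8
    for i in range(4):
        for k in range(8):
            J[i, k] = dC_dx[i, :, k] @ xdot  # differentiation of the c_ij factors
            if k in vel_idx:                 # extra c_ij term when x_{k+1} is a velocity
                J[i, k] += C[i, vel_idx.index(k)]
    return J
```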

Next, the partial derivatives of the elements $c_{ij}$ , $i=1,2,3,4$ and $j=1,2,3,4$ , of the Coriolis matrix are computed.

It holds that $c_{11}=-{p_2}\text{sin}(x_1)x_4$ , thus one has that: $\dfrac{{{\partial }c_{11}} }{{{\partial }x_1}}=-{p_2}\text{cos}(x_1)x_4$ , $\dfrac{{{\partial }c_{11}} }{{{\partial }x_2}}=0$ , $\dfrac{{{\partial }c_{11}} }{{{\partial }x_3}}=0$ , $\dfrac{{{\partial }c_{11}} }{{{\partial }x_4}}=-{p_2}\text{sin}(x_1)$ , $\dfrac{{{\partial }c_{11}} }{{{\partial }x_5}}=0$ , $\dfrac{{{\partial }c_{11}} }{{{\partial }x_6}}=0$ , $\dfrac{{{\partial }c_{11}} }{{{\partial }x_7}}=0$ , $\dfrac{{{\partial }c_{11}} }{{{\partial }x_8}}=0$ .

Additionally, it holds that $c_{12}=-0.5{p_2}\text{sin}(x_3)x_4$ , thus $\dfrac{{{\partial }c_{12}} }{{{\partial }x_1}}=0$ , $\dfrac{{{\partial }c_{12}} }{{{\partial }x_2}}=0$ , $\dfrac{{{\partial }c_{12}} }{{{\partial }x_3}}=-0.5{p_2}\text{cos}(x_3)x_4$ , $\dfrac{{{\partial }c_{12}} }{{{\partial }x_4}}=-0.5{p_2}\text{sin}(x_3)$ , $\dfrac{{{\partial }c_{12}} }{{{\partial }x_5}}=0$ , $\dfrac{{{\partial }c_{12}} }{{{\partial }x_6}}=0$ , $\dfrac{{{\partial }c_{12}} }{{{\partial }x_7}}=0$ , $\dfrac{{{\partial }c_{12}} }{{{\partial }x_8}}=0$ .

Moreover, it holds that $c_{13}=0$ ; thus, one obtains: $\dfrac{{{\partial }c_{13}} }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$

Additionally, it holds that $c_{14}=0$ ; thus, one obtains: $\dfrac{{{\partial }c_{14}} }{{{\partial }x_i}}=0$ , for $i=1,\cdots,8$ .

Additionally, it holds that $c_{21}=0.5{p_2}\text{sin}(x_3)x_2$ , thus, one obtains: $\dfrac{{{\partial }c_{21}} }{{{\partial }x_1}}=0$ , $\dfrac{{{\partial }c_{21}} }{{{\partial }x_2}}=0.5{p_2}\text{sin}(x_3)$ , $\dfrac{{{\partial }c_{21}} }{{{\partial }x_3}}=0.5{p_2}\text{cos}(x_3)x_2$ , $\dfrac{{{\partial }c_{21}} }{{{\partial }x_4}}=0$ , $\dfrac{{{\partial }c_{21}} }{{{\partial }x_5}}=0$ , $\dfrac{{{\partial }c_{21}} }{{{\partial }x_6}}=0$ , $\dfrac{{{\partial }c_{21}} }{{{\partial }x_7}}=0$ , $\dfrac{{{\partial }c_{21}} }{{{\partial }x_8}}=0$ .

Additionally, it holds that $c_{22}=0$ ; thus, one obtains: $\dfrac{{{\partial }c_{22}} }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Moreover, it holds that $c_{23}=0$ ; thus, one obtains: $\dfrac{{{\partial }c_{23}} }{{{\partial }x_i}}=0$ for $i=1,2,\cdots,8$ .

Furthermore, it holds that $c_{24}=0$ ; thus, one obtains $\dfrac{{{\partial }c_{24}} }{{{\partial }x_i}}=0$ for $i=1,2,\cdots,8$

Additionally, one has that $c_{31}=0$ ; thus, one obtains: $\dfrac{{{\partial }c_{31}} }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Furthermore, it holds that $c_{32}=0$ ; thus, one obtains: $\dfrac{{{\partial }c_{32}} }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Moreover, it holds that $c_{33}=0$ ; thus, one obtains: $\dfrac{{{\partial }c_{33}}}{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Additionally, it holds that $c_{34}=0$ ; thus, one obtains: $\dfrac{{{\partial }c_{34}} }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Furthermore, one has that $c_{41}=0$ ; thus, one obtains: $\dfrac{{{\partial }c_{41}} }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Moreover, it holds that $c_{42}=0$ ; thus, one obtains: $\dfrac{{{\partial }c_{42}}}{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Furthermore, it holds that $c_{43}=0$ ; thus, one obtains: $\dfrac{{{\partial }c_{43}} }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Finally, it holds that $c_{44}=0$ . Thus, one obtains $\dfrac{{{\partial }c_{44}}}{{{\partial }x_i}}=0$ , for $i=1,\cdots,8$ .

In a similar manner, one computes the partial derivatives of the elements of the gravitational forces vector. It holds that $g_{1}=0$ ; thus, one obtains $\dfrac{{{\partial }g_{1}} }{{{\partial }x_i}}=0$ , for $i=1,\cdots,8$ .

Additionally, it holds that $g_{2}=0$ ; thus, one obtains $\dfrac{{{\partial }g_{2}} }{{{\partial }x_i}}=0$ , for $i=1,\cdots,8$ .

Moreover, it holds that $g_{3}={p_4}\bar{g}$ ; thus, one obtains $\dfrac{{{\partial }g_{3}} }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

Finally, it holds that $g_{4}=0$ ; thus, one obtains $\dfrac{{{\partial }g_{4}} }{{{\partial }x_i}}=0$ , for $i=1,2,\cdots,8$ .

In a similar manner, one computes the partial derivatives of the elements of the disturbances vector $\tilde{d}$ . It holds that $d_{1}={b_1}x_2$ ; thus, one obtains $\dfrac{{{\partial }d_{1}} }{{{\partial }x_i}}=0$ , for $i=1,\cdots,8$ and $i\,{\neq }\,2$ , while $\dfrac{{{\partial }d_{1}} }{{{\partial }x_2}}=b_1$ .

Additionally, it holds that $d_{2}={b_2}x_4$ ; thus, one obtains $\dfrac{{{\partial }d_{2}} }{{{\partial }x_i}}=0$ , for $i=1,\cdots,8$ and $i\,{\neq }\,4$ , while $\dfrac{{{\partial }d_{2}} }{{{\partial }x_4}}=b_2$ .

Moreover, it holds that $d_{3}={b_3}x_6$ ; thus, one obtains $\dfrac{{{\partial }d_{3}}}{{{\partial }x_i}}=0$ , for $i=1,\cdots,8$ and $i\,{\neq }\,6$ , while $\dfrac{{{\partial }d_{3}} }{{{\partial }x_6}}=b_3$ .

Finally, it holds that $d_{4}={b_4}x_8$ ; thus, one obtains $\dfrac{{{\partial }d_{4}} }{{{\partial }x_i}}=0$ , for $i=1,\cdots,8$ and $i\,{\neq }\,8$ , while $\dfrac{{{\partial }d_{4}}}{{{\partial }x_8}}=b_4$ .

4. Design of an H-infinity nonlinear feedback controller

4.1. Equivalent linearized dynamics of the 4-DOF SCARA robot

After linearization around its current operating point, the dynamic model for the 4-DOF SCARA robot is written as

(71) \begin{align} \dot{x}=Ax+Bu+d_1 \end{align}

Parameter $d_1$ stands for the linearization error in the 4-DOF SCARA robot’s model that was given previously in Eq. (15). The reference setpoints for the state vector of the aforementioned dynamic model are denoted by ${\bf{x_d}}=[x_1^{d},\cdots,x_8^{d}]$ . Tracking of this trajectory is achieved after applying the control input $u^{*}$ . At every time instant, the control input $u^{*}$ is assumed to differ from the control input $u$ appearing in Eq. (71) by an amount equal to ${\Delta }u$ , that is, $u^{*}=u+{\Delta }u$ . The dynamics of the reference trajectory is written as

(72) \begin{align} \dot{x}_d=Ax_d+Bu^{*}+d_2 \end{align}

The dynamics of the controlled system described in Eq. (71) can be also written as

(73) \begin{align} \dot{x}=Ax+Bu+Bu^{*}-Bu^{*}+d_1 \end{align}

and by denoting $d_3=-Bu^{*}+d_1$ as an aggregate disturbance term one obtains

(74) \begin{align} \dot{x}=Ax+Bu+Bu^{*}+d_3 \end{align}

By subtracting Eq. (72) from Eq. (74), one has

(75) \begin{align} \dot{x}-\dot{x}_d=A(x-x_d)+Bu+d_3-d_2 \end{align}

By denoting the tracking error as $e=x-x_d$ and the aggregate disturbance term as $\tilde{d}=d_3-d_2$ , the tracking error dynamics becomes

(76) \begin{align} \dot{e}=Ae+Bu+\tilde{d} \end{align}

The above linearized form of the 4-DOF SCARA robot’s model can be efficiently controlled after applying an H-infinity feedback control scheme.

4.2. The nonlinear H-infinity control

The initial nonlinear model of the 4-DOF SCARA robot is in the form

(77) \begin{align} \dot{x}=f(x,u) \ \ x\,{\in }\,R^n, \ u\,{\in }\,R^m \end{align}

Linearization of the model of the 4-DOF SCARA robot is performed at each iteration of the control algorithm around its present operating point ${(x^{*},u^{*})}=(x(t),u(t-T_s))$ . The linearized equivalent of the system is described by

(78) \begin{align} \dot{x}=Ax+Bu+L\tilde{d} \ \ x\,{\in }\,R^n, \ u\,{\in }\,R^m, \ \tilde{d}\,{\in }\,R^q \end{align}

where matrices $A$ and $B$ are obtained from the computation of the previously defined Jacobians and vector $\tilde{d}$ denotes disturbance terms due to linearization errors. The problem of disturbance rejection for the linearized model that is described by

(79) \begin{align} \dot{x}&=Ax+Bu+L\tilde{d} \\[3pt] y&=Cx \nonumber\end{align}

where $x\,{\in }\,R^n$ , $u\,{\in }\,R^m$ , $\tilde{d}\,{\in }\,R^q$ , and $y\,{\in }\,R^p$ , cannot be handled efficiently if the classical linear quadratic regulator control scheme is applied. This is because of the existence of the perturbation term $\tilde{d}$ . The disturbance term $\tilde{d}$ apart from modeling (parametric) uncertainty and external perturbation terms can also represent noise terms of any distribution.

In the $H_{\infty }$ control approach, a feedback control scheme is designed for trajectory tracking by the system’s state vector and simultaneous disturbance rejection, considering that the disturbance affects the system in the worst possible manner. The disturbances’ effects are incorporated in the following quadratic cost function:

(80) \begin{align} J(t)={1 \over 2}{\int _0^T}[{y^T}(t)y(t)+r{u^T}(t)u(t)-{\rho ^2}{\tilde{d}^T}(t)\tilde{d}(t)]dt, \ \ r,{\rho }\gt 0 \end{align}

The significance of the negative sign in the cost function’s term that is associated with the perturbation variable $\tilde{d}(t)$ is that the disturbance tries to maximize the cost function $J(t)$ while the control signal $u(t)$ tries to minimize it. The physical meaning of the relation given above is that the control signal and the disturbances compete with each other within a min-max differential game. This problem of min-max optimization can be written as ${min_{u}}{max_{\tilde{d}}}J(u,\tilde{d})$ .

The objective of the optimization procedure is to compute a control signal $u(t)$ which can compensate for the worst possible disturbance, that is, externally imposed to the 4-DOF SCARA robot. However, the solution to the min-max optimization problem is directly related to the value of parameter $\rho$ . This means that there is an upper bound in the disturbances magnitude that can be annihilated by the control signal.

4.3. Computation of the feedback control gains

For the linearized system given by Eq. (79), the cost function of Eq. (80) is defined, where the coefficient $r$ determines the penalization of the control input and the weight coefficient $\rho$ determines the reward of the disturbances’ effects. It is assumed that (i) the energy that is transferred from the disturbances signal $\tilde{d}(t)$ is bounded, that is, ${\int _0^{\infty }}{\tilde{d}^T(t)}\tilde{d}(t){dt}\lt \infty$ , (ii) matrices $[A, B]$ and $[A, L]$ are stabilizable, and (iii) matrix $[A, C]$ is detectable. In the case of a tracking problem, the optimal feedback control law is given by

(81) \begin{align} u(t)=-Ke(t) \end{align}

with $e=x-x_d$ being the tracking error, and $K=\dfrac{1 }{ r}{B^T}P$ where $P$ is a positive definite symmetric matrix. As it will be proven in Section 5, matrix $P$ is obtained from the solution of the Riccati equation

(82) \begin{align} {A^T}P+PA+Q-P\left({2 \over r}B{B^T}-{1 \over{\rho ^2}}L{L^T}\right)P=0 \end{align}

where $Q$ is a positive semi-definite symmetric matrix. The worst case disturbance is given by

(83) \begin{align} \tilde{d}(t)={1 \over \rho ^2}{L^T}Pe(t) \end{align}

The solution of the H-infinity feedback control problem for the 4-DOF SCARA robot and the computation of the worst case disturbance that the related controller can sustain come from superposition of Bellman’s optimality principle when considering that the robot is affected by two separate inputs: (i) the control input $u$ and (ii) the cumulative disturbance input $\tilde{d}(t)$ . Solving the optimal control problem for $u$ , that is, for the minimum variation (optimal) control input that achieves elimination of the state vector’s tracking error, gives $u=-\dfrac{1 }{ r}{B^T}Pe$ . Equivalently, solving the optimal control problem for $\tilde{d}$ , that is, for the worst case disturbance that the control loop can sustain gives $\tilde{d}=\dfrac{1 }{ \rho ^2}{L^T}Pe$ .
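For implementation purposes, the computation of the feedback gain of Eqs. (81)–(83) can be sketched as follows (in Python). The matrices A, B, L, Q and the coefficients r, rho are placeholders for the Jacobians and weights defined above; the Riccati equation of Eq. (82) is solved through the stable invariant subspace of the associated Hamiltonian matrix, under the assumption that a stabilizing solution exists.

```python
import numpy as np
from scipy.linalg import schur, inv

def hinf_gain(A, B, L, Q, r, rho):
    """Solve A^T P + P A + Q - P (2/r B B^T - 1/rho^2 L L^T) P = 0 (Eq. (82))
    and return P together with the feedback gain K of Eq. (81)."""
    n = A.shape[0]
    S = (2.0/r)*(B @ B.T) - (1.0/rho**2)*(L @ L.T)
    H = np.block([[A, -S], [-Q, -A.T]])          # Hamiltonian matrix
    # Ordered real Schur form: left-half-plane eigenvalues are placed first
    T, Z, _ = schur(H, output='real', sort='lhp')
    X1, X2 = Z[:n, :n], Z[n:, :n]
    P = X2 @ inv(X1)
    P = 0.5*(P + P.T)                            # enforce symmetry numerically
    K = (1.0/r)*(B.T @ P)                        # u = -K e, as in Eq. (81)
    return P, K

# At every sampling instant: e = x - x_d and u = -K @ e
```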

The diagram of the considered control loop for the 4-DOF SCARA robot is depicted in Fig. 2.

Figure 2. Diagram of the control scheme for the 4-DOF SCARA robotic manipulator.

5. Lyapunov stability analysis

5.1. Stability proof

Through Lyapunov stability analysis, it will be shown that the proposed nonlinear control scheme assures $H_{\infty }$ tracking performance for the 4-DOF SCARA robot, and that in case of bounded disturbance terms asymptotic convergence to the reference setpoints is achieved. The tracking error dynamics for the 4-DOF SCARA robot is written in the form

(84) \begin{align} \dot{e}=Ae+Bu+L\tilde{d} \end{align}

where in the 4-DOF SCARA robot’s case $L\,{\in }\,R^{8{\times }8}$ is the disturbance inputs gain matrix. Variable $\tilde{d}$ denotes model uncertainties and external disturbances of the 4-DOF SCARA robot’s model. The following Lyapunov function is considered:

(85) \begin{align} V={1 \over 2}{e^T}Pe \end{align}

where $e=x-x_d$ is the tracking error. By differentiating with respect to time, one obtains

(86) \begin{align}\dot{V}={1 \over 2}{\dot{e}^T}Pe+{1 \over 2}{e^T}P\dot{e}{\Rightarrow }\dot{V}={1 \over 2}{[Ae+Bu+L\tilde{d}]^T}Pe+{1 \over 2}{e^T}P[Ae+Bu+L\tilde{d}]{\Rightarrow } \end{align}
(87) \begin{align} \dot{V}={1 \over 2}[{e^T}{A^T}+{u^T}{B^T}+{\tilde{d}^T}{L^T}]Pe+{1 \over 2}{e^T}P[Ae+Bu+L\tilde{d}]{\Rightarrow } \end{align}
(88) \begin{align} \dot{V}={1 \over 2}{e^T}{A^T}Pe+{1 \over 2}{u^T}{B^T}Pe+{1 \over 2}{\tilde{d}^T}{L^T}Pe+{1 \over 2}{e^T}PAe+{1 \over 2}{e^T}PBu+{1 \over 2}{e^T}PL\tilde{d} \end{align}

The previous equation is rewritten as

(89) \begin{align} \dot{V}={1 \over 2}{e^T}({A^T}P+PA)e+({1 \over 2}{u^T}{B^T}Pe+{1 \over 2}{e^T}PBu)+({1 \over 2}{\tilde{d}^T}{L^T}Pe+{1 \over 2}{e^T}PL\tilde{d}) \end{align}

Assumption: For given positive definite matrix $Q$ and coefficients $r$ and $\rho$ , there exists a positive definite matrix $P$ , which is the solution of the following matrix equation:

(90) \begin{align} {A^T}P+PA=-Q+P({2 \over r}B{B^T}-{1 \over \rho ^2}L{L^T})P \end{align}

Moreover, the following feedback control law is applied to the system

(91) \begin{align} u=-{1 \over{r}}{B^T}Pe \end{align}

By substituting Eqs. (90) and (91), one obtains

(92) \begin{align} \dot{V}={1 \over 2}{e^T}[-Q+P({2 \over r}B{B^T}-{1 \over \rho ^2}L{L^T})P]e+{e^T}PB(-{1 \over{r}}{B^T}Pe)+{e^T}PL\tilde{d}{\Rightarrow } \end{align}
(93) \begin{align} \dot{V}&=-{1 \over 2}{e^T}Qe+{1 \over{r}}{e^T}PB{B^T}Pe-{1 \over{2\rho ^2}}{e^T}PL{L^T}Pe \nonumber\\[3pt]&\quad -{1 \over{r}}{e^T}PB{B^T}Pe+{e^T}PL\tilde{d} \end{align}

which after intermediate operations gives

(94) \begin{align} \dot{V}=-{1 \over 2}{e^T}Qe-{1 \over{2\rho ^2}}{e^T}PL{L^T}Pe+{e^T}PL\tilde{d} \end{align}

or, equivalently

(95) \begin{align} \dot{V}=-{1 \over 2}{e^T}Qe-{1 \over{2\rho ^2}}{e^T}PL{L^T}Pe+{1 \over 2}{e^T}PL\tilde{d}+{1 \over 2}{\tilde{d}^T}{L^T}Pe \end{align}

Lemma: The following inequality holds:

(96) \begin{align} {1 \over 2}{e^T}PL\tilde{d}+{1 \over 2}{\tilde{d}^T}{L^T}Pe-{1 \over{2\rho ^2}}{e^T}PL{L^T}Pe{\leq }{1 \over 2}{\rho ^2}{\tilde{d}^T}\tilde{d} \end{align}

Proof: The binomial $({\rho }a-\dfrac{1 }{ \rho }b)^2{\geq }0$ is considered. Expanding it, one gets

(97) \begin{align} \begin{aligned}{\rho ^2}{a^2}+{1 \over{\rho ^2}}{b^2}-2ab \geq 0 \Rightarrow{1 \over 2}{\rho ^2}{a^2}+{1 \over{2\rho ^2}}{b^2}-ab \geq 0 \Rightarrow \\ ab-{1 \over{2\rho ^2}}{b^2} \leq{1 \over 2}{\rho ^2}{a^2} \Rightarrow{1 \over 2}ab+{1 \over 2}ab-{1 \over{2\rho ^2}}{b^2} \leq{1 \over 2}{\rho ^2}{a^2} \end{aligned} \end{align}

The following substitutions are carried out: $a=\tilde{d}$ and $b={e^T}{P}L$ and the previous relation becomes

(98) \begin{align} {1 \over 2}{\tilde{d}^T}{L^T}Pe+{1 \over 2}{e^T}PL\tilde{d}-{1 \over{2\rho ^2}}{e^T}PL{L^T}Pe{\leq }{1 \over 2}{\rho ^2}\tilde{d}^T\tilde{d} \end{align}

Eq. (98) is substituted in Eq. (95) and the inequality is enforced, thus giving

(99) \begin{align} \dot{V}{\leq }-{1 \over 2}{e^T}Qe+{1 \over 2}{\rho ^2}{\tilde{d}^T}\tilde{d} \end{align}

Eq. (99) shows that the $H_{\infty }$ tracking performance criterion is satisfied. The integration of $\dot{V}$ from $0$ to $T$ gives

(100) \begin{align} \begin{aligned}{\int _0^T}\dot{V}(t)dt{\leq }-{1 \over 2}{\int _0^T}{||e||_Q^2}{dt}+{1 \over 2}{\rho ^2}{\int _0^T}{||\tilde{d}||^2}{dt}{\Rightarrow }\\ 2V(T)+{\int _0^T}{||e||_Q^2}{dt}{\leq }2V(0)+{\rho ^2}{\int _0^T}{||\tilde{d}||^2}dt \end{aligned} \end{align}

Moreover, if there exists a positive constant $M_d\gt 0$ such that

(101) \begin{align} \int _0^{\infty }{||\tilde{d}||^2}dt \leq M_d \end{align}

then one gets

(102) \begin{align} {\int _0^{\infty }}{||e||_Q^2}dt \leq 2V(0)+{\rho ^2}{M_d} \end{align}

Thus, the integral ${\int _0^{\infty }}{||e||_Q^2}dt$ is bounded. Moreover, $V(T)$ is bounded and from the definition of the Lyapunov function $V$ in Eq. (85) it becomes clear that $e(t)$ will be also bounded since $e(t) \ \in \ \Omega _e=\{e|{e^T}Pe{\leq }2V(0)+{\rho ^2}{M_d}\}$ . According to the above and with the use of Barbalat’s lemma, one obtains $\lim _{t \rightarrow \infty }{e(t)}=0$ .

After following the stages of the stability proof one arrives at Eq. (99) which shows that the H-infinity tracking performance criterion holds. By selecting the attenuation coefficient $\rho$ to be sufficiently small and in particular to satisfy $\rho ^2\lt ||e||^2_Q/ ||\tilde{d}||^2$ , one has that the first derivative of the Lyapunov function is upper bounded by 0. This condition holds at each sampling instance and consequently global stability for the control loop can be concluded.

5.2. Robust state estimation with the use of the $H_{\infty }$ Kalman filter

The control loop has to be implemented with the use of information provided by a small number of sensors and by processing only a small number of state variables. To reconstruct the missing information about the state vector of the 4-DOF SCARA robot it is proposed to use a filtering scheme and based on it to apply state estimation-based control [Reference Rigatos and Busawon1, Reference Rigatos41]. By denoting as $A(k)$ , $B(k)$ , $C(k)$ the discrete-time equivalents of matrices $A$ , $B$ , $C$ which constitute the linearized state-space model of Eq. (15), the recursion of the $H_{\infty }$ Kalman Filter, for the model of the SCARA robot, can be formulated in terms of a measurement update and a time update part.

Measurement update:

(103) \begin{align} D(k)&=[I-{\theta }W(k)P^{-}(k)+{C^T}(k)R(k)^{-1}C(k)P^{-}(k)]^{-1} \nonumber \\[3pt] K(k)&=P^{-}(k)D(k){C^T}(k)R(k)^{-1} \nonumber\\[3pt] \hat{x}(k)&=\hat{x}^{-}(k)+K(k)[y(k)-C\hat{x}^{-}(k)] \end{align}

Time update:

(104) \begin{align} \hat{x}^{-}(k+1)=A(k)\hat{x}(k)+B(k)u(k) \nonumber\\[3pt] P^{-}(k+1)=A(k)P^{-}(k)D(k)A^T(k)+Q(k) \end{align}

where it is assumed that parameter $\theta$ is sufficiently small to assure that the covariance matrix ${P^{-}(k)}^{-1}-{\theta }W(k)+C^T(k)R(k)^{-1}C(k)$ will be positive definite. When $\theta =0$ , the $H_{\infty }$ Kalman filter becomes equivalent to the standard Kalman filter. One can measure only a part of the state vector of the SCARA robot, for instance state variables $x_1$ , $x_3$ , $x_5$ , and $x_7$ and can estimate through filtering the rest of the state vector elements ( $x_2$ , $x_4$ , $x_6$ , and $x_8$ ). Moreover, the proposed Kalman filtering method can be used for sensor fusion purposes.
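One recursion of the filter of Eqs. (103) and (104) is sketched below (in Python); the matrices are placeholders for the discrete-time equivalents $A(k)$ , $B(k)$ , $C(k)$ of the linearized SCARA model and for the weights $Q$ , $R$ , $W$ .

```python
import numpy as np

def hinf_kalman_step(xhat_minus, P_minus, y, u, A, B, C, Q, R, W, theta):
    """One measurement update and one time update of the H-infinity Kalman
    filter, transcribing Eqs. (103)-(104)."""
    I = np.eye(A.shape[0])
    Rinv = np.linalg.inv(R)
    # Measurement update
    D = np.linalg.inv(I - theta*(W @ P_minus) + C.T @ Rinv @ C @ P_minus)
    K = P_minus @ D @ C.T @ Rinv
    xhat = xhat_minus + K @ (y - C @ xhat_minus)
    # Time update
    xhat_minus_next = A @ xhat + B @ u
    P_minus_next = A @ P_minus @ D @ A.T + Q
    return xhat, xhat_minus_next, P_minus_next
```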

6. Flatness-based control in successive loops for the 4-DOF SCARA robotic manipulator

6.1. Differential flatness properties of the 4-DOF SCARA robot

The dynamic model of the 4-DOF SCARA robot is differentially flat. The flat outputs vector of the system is $Y=[y_1,y_2,y_3,y_4]^T=[x_1,x_3,x_5,x_7]^T$ . Differential flatness is associated with the following two conditions: (i) all state variables of the system can be expressed as differential functions of its flat outputs and (ii) the flat outputs and their derivatives are differentially independent which signifies that they are not connected through a relation in the form of a linear homogeneous differential equation [Reference Rigatos and Busawon1, Reference Rigatos41].

Obviously, it holds that $x_2=\dot{x}_1$ , $x_4=\dot{x}_3$ , $x_6=\dot{x}_5$ , and $x_8=\dot{x}_7$ . This signifies that state variables $x_2$ , $x_4$ , $x_6$ , and $x_8$ are differential functions of the system’s flat outputs. Besides, using that

(105) \begin{align} \begin{pmatrix} \ddot{x}_1 \\ \\[-7pt] \ddot{x}_3 \\ \\[-7pt] \ddot{x}_5 \\ \\[-7pt] \ddot{x}_7 \end{pmatrix}= \begin{pmatrix} f_2(x) \\ \\[-7pt] f_4(x) \\ \\[-7pt] f_6(x) \\ \\[-7pt] f_8(x) \end{pmatrix}+ \left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} g_{11}(x) & g_{21}(x) & g_{31}(x) & g_{41}(x) \\ \\[-7pt] g_{12}(x) & g_{22}(x) & g_{32}(x) & g_{42}(x) \\ \\[-7pt] g_{13}(x) & g_{23}(x) & g_{33}(x) & g_{43}(x) \\ \\[-7pt] g_{14}(x) & g_{24}(x) & g_{34}(x) & g_{44}(x) \end{array}\right) \begin{pmatrix} \tau _1 \\ \\[-7pt] \tau _2 \\ \\[-7pt] \tau _3 \\ \\[-7pt] \tau _4 \end{pmatrix} \end{align}

and by solving with respect to the control inputs one obtains

(106) \begin{align} \begin{pmatrix} \tau _1 \\ \\[-7pt] \tau _2 \\ \\[-7pt] \tau _3 \\ \\[-7pt] \tau _4 \end{pmatrix}= \left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} g_{11}(x) & g_{21}(x) & g_{31}(x) & g_{41}(x) \\ \\[-7pt] g_{12}(x) & g_{22}(x) & g_{32}(x) & g_{42}(x) \\ \\[-7pt] g_{13}(x) & g_{23}(x) & g_{33}(x) & g_{43}(x) \\ \\[-7pt] g_{14}(x) & g_{24}(x) & g_{34}(x) & g_{44}(x) \end{array}\right)^{-1} \left [ \begin{pmatrix} \ddot{x}_1 \\ \\[-7pt] \ddot{x}_3 \\ \\[-7pt] \ddot{x}_5 \\ \\[-7pt] \ddot{x}_7 \end{pmatrix}- \begin{pmatrix} f_2(x) \\ \\[-7pt] f_4(x) \\ \\[-7pt] f_6(x) \\ \\[-7pt] f_8(x) \end{pmatrix} \right ] \end{align}

The above relation signifies that the control inputs $\tau _1$ , $\tau _2$ , $\tau _3$ , and $\tau _4$ are also differential functions of the flat outputs of the system. Consequently, the 4-DOF SCARA robot is differentially flat. The differential flatness property means also [Reference Rigatos and Busawon1, Reference Toussaint, Basar and Bullo42] that (i) the robotic model is input–output linearizable and (ii) setpoints for all state variables of the robot can be defined. Actually one selects first setpoints for the state variables which coincide with the flat outputs $x_1^d$ , $x_3^d$ , $x_5^d$ , and $x_7^d$ , and next defines setpoints for the rest of the state variables $x_2^d$ , $x_4^d$ , $x_6^d$ , and $x_8^d$ which are associated with the flat outputs through the previously explained differential relations. The differential flatness property is also an implicit proof of the system’s controllability.

6.2. Flatness-based control in successive loops for the SCARA robot

The state-space model for the SCARA robot that was previously defined in Eq. (12) and in Eq. (13) is used again and the new state vector is defined as $z=[z_1,z_2,z_3,z_4,z_5,z_6,z_7,z_8]^T$ where $z_1=x_1$ , $z_2=x_3$ , $z_3=x_5$ , $z_4=x_7$ , $z_5=x_2$ , $z_6=x_4$ , $z_7=x_6$ , and $z_8=x_8$ . Using the new notation for the robot’s state variables, the associated state-space model becomes

(107) \begin{align} \begin{pmatrix} \dot{z}_1 \\ \\[-7pt] \dot{z}_2 \\ \\[-7pt] \dot{z}_3 \\ \\[-7pt] \dot{z}_4 \\ \\[-7pt] \dot{z}_5 \\ \\[-7pt] \dot{z}_6 \\ \\[-7pt] \dot{z}_7 \\ \\[-7pt] \dot{z}_8 \end{pmatrix}= \begin{pmatrix} z_5 \\ \\[-7pt] z_6 \\ \\[-7pt] z_7 \\ \\[-7pt] z_8 \\ \\[-7pt] f_2(z) \\ \\[-7pt] f_4(z) \\ \\[-7pt] f_6(z) \\ \\[-7pt] f_8(z) \end{pmatrix} +\left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} 0 & 0 & 0 & 0 \\ \\[-7pt] 0 & 0 & 0 & 0 \\ \\[-7pt] 0 & 0 & 0 & 0 \\ \\[-7pt] 0 & 0 & 0 & 0 \\ \\[-7pt] g_{21}(z) & g_{22}(z) & g_{23}(z) & g_{24}(z) \\ \\[-7pt] g_{41}(z) & g_{42}(z) & g_{43}(z) & g_{44}(z) \\ \\[-7pt] g_{61}(z) & g_{62}(z) & g_{63}(z) & g_{64}(z) \\ \\[-7pt] g_{81}(z) & g_{82}(z) & g_{83}(z) & g_{84}(z) \end{array}\right) \begin{pmatrix} u_1 \\ \\[-7pt] u_2 \\ \\[-7pt] u_3 \\ \\[-7pt] u_4 \end{pmatrix} \end{align}

Next, the following subvectors and submatrices are defined:

(108) \begin{align} z_{1,4}=[z_1,z_2,z_3,z_4]^T\ f_{1,4}=0_{4{\times }1} \quad g_{1,4}=I_{4{\times }4} \end{align}
(109) \begin{align} z_{5,8}=[z_5,z_6,z_7,z_8]^T &\quad f_{5,8}=[f_2(z),f_4(z),f_6(z),f_8(z)]^T & g_{5,8}=\left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} g_{21}(z) & g_{22}(z) & g_{23}(z) & g_{24}(z) \\ \\[-7pt] g_{41}(z) & g_{42}(z) & g_{43}(z) & g_{44}(z) \\ \\[-7pt] g_{61}(z) & g_{62}(z) & g_{63}(z) & g_{64}(z) \\ \\[-7pt] g_{81}(z) & g_{82}(z) & g_{83}(z) & g_{84}(z) \end{array}\right) \end{align}

Thus, the state-space model of the SCARA robot can be decomposed into two subsystems

(110) \begin{align}\dot{z}_{1,4}=f_{1,4}(z_{1,4})+g_{1,4}(z_{1,4})z_{5,8} \end{align}
(111) \begin{align} \dot{z}_{5,8}=f_{5,8}(z_{1,4},z_{5,8})+g_{5,8}(z_{1,4},z_{5,8})u \end{align}

As it has already been proven, the dynamic model of the SCARA robot is a differentially flat system with flat outputs vector $y=z_{1,4}$ . Indeed from Eq. (110), one solves for $z_{5,8}$ , giving $z_{5,8}=\dot{z}_{1,4}$ which confirms that $z_{5,8}$ is a differential function of flat outputs vector $y$ . Moreover, from Eq. (111) one solves for $u$ which gives $u=g_{5,8}^{-1}(z_{1,4},z_{5,8})[\dot{z}_{5,8}-f_{5,8}(z_{1,4},z_{5,8})]$ , thus confirming that $u$ is also a differential function of the flat outputs vector $y$ . This completes again the proof of differential flatness properties for the 4-DOF SCARA robot.

Next, it will be proven that each one of the subsystems of Eqs. (110) and (111), if viewed independently, is also differentially flat. For the subsystem of Eq. (110), the flat output is taken to be $y_1=z_{1,4}$ and the virtual control input is $v_1=z_{5,8}$ . It holds that

(112) \begin{align} v_1=z_{5,8}=\dot{z}_{1,4}{\Rightarrow }v_1=h_1(y_1,\dot{y}_1) \end{align}

while $f_{1,4}(z_{1,4})$ , $g_{1,4}(z_{1,4})$ have constant elements. Therefore, the subsystem of Eq. (110) is differentially flat. Additionally, for Eq. (111) the flat outputs vector is taken to be $y_2=z_{5,8}$ and the real control input is $u$ , while $z_{1,4}$ is viewed as a vector of coefficients. It holds that

(113) \begin{align} \begin{aligned} u=g_{5,8}^{-1}(z_{1,4},z_{5,8})[\dot{z}_{5,8}-f_{5,8}(z_{1,4},z_{5,8})]\\[3pt] {\Rightarrow }u=h_2(y_2,\dot{y}_2) \end{aligned} \end{align}

Therefore, the subsystem of Eq. (111) is also differentially flat. Using the differential flatness property for each one of the subsystems of Eqs. (110) and (111), one has that each subsystem is input–output linearizable [Reference Toussaint, Basar and Bullo42]. Thus, one can design a stabilizing feedback controller for each of them by inverting its input–output linearized description.

For the subsystem of Eq. (110) and using the virtual control input $v_1=z_{5,8}$ , the stabilizing feedback controller is taken to be

(114) \begin{align} v_1=z_{5,8}^d=g_{1,4}^{-1}(z_{1,4})[\dot{z}_{1,4}^d-f_{1,4}(z_{1,4})-{K_1}(z_{1,4}-z_{1,4}^d)] \end{align}

where $K_1\,{\in }\,R^{4{\times }4}$ is a diagonal gain matrix with diagonal elements $k_{1,ii}\gt 0$ , $i=1,\cdots,4$ .

For the subsystem of Eq. (111) and using the real control input $u$ , the stabilizing feedback controller is taken to be

(115) \begin{align} u=g_{5,8}^{-1}(z_{1,4},z_{5,8})[\dot{z}_{5,8}^d-f_{5,8}(z_{1,4},z_{5,8})-{K_2}(z_{5,8}-z_{5,8}^d)] \end{align}

where $K_2\,{\in }\,R^{4{\times }4}$ is a diagonal gain matrix with diagonal elements $k_{2,ii}\gt 0$ , $i=1,\cdots,4$ . In the above equations, $z_{1,4}^d$ and $z_{5,8}^d$ are setpoints for state vectors $z_{1,4}$ and $z_{5,8}$ , respectively. Moreover, in Eq. (115) one has that the virtual control input $v_1$ of the subsystem of Eq. (110) becomes setpoint $z_{5,8}^d$ for the subsystem of Eq. (111).
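An indicative implementation of the two control loops of Eqs. (114) and (115) is given below (a Python sketch). The terms $f_{5,8}$ and $g_{5,8}$ are assumed to be provided by the robot’s dynamic model, and the time derivative of the setpoint $z_{5,8}^d$ is assumed to be available, for instance through numerical differentiation; since $f_{1,4}=0_{4{\times }1}$ and $g_{1,4}=I_{4{\times }4}$ , the outer loop simplifies accordingly.

```python
import numpy as np

def successive_loops_control(z14, z58, z14_d, z14_d_dot, z58_d_dot,
                             f58, g58, K1, K2):
    """Two-loop flatness-based controller of Eqs. (114)-(115)."""
    # Outer loop, Eq. (114): virtual control input = setpoint for z_{5,8}
    z58_d = z14_d_dot - K1 @ (z14 - z14_d)
    # Inner loop, Eq. (115): real control input (joint torques/forces)
    u = np.linalg.solve(g58, z58_d_dot - f58 - K2 @ (z58 - z58_d))
    return u, z58_d
```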

By substituting the control input of Eq. (114) into the subsystem of Eq. (110), one obtains

(116) \begin{align} \begin{aligned} \dot{z}_{1,4}=f_{1,4}(z_{1,4})+g_{1,4}(z_{1,4}){\cdot }g_{1,4}^{-1}(z_{1,4})[\dot{z}_{1,4}^d-f_{1,4}(z_{1,4})-{K_1}(z_{1,4}-z_{1,4}^d)]\\[3pt] {\Rightarrow }(\dot{z}_{1,4}-\dot{z}_{1,4}^d)+{K_1}(z_{1,4}-z_{1,4}^d)=0 \end{aligned} \end{align}

By substituting the control input of Eq. (115) into the subsystem of Eq. (111), one obtains

(117) \begin{align} \begin{aligned} \dot{z}_{5,8}=f_{5,8}(z_{1,4},z_{5,8})+g_{5,8}(z_{1,4},z_{5,8}){\cdot }g_{5,8}^{-1}(z_{1,4},z_{5,8})[\dot{z}_{5,8}^d-f_{5,8}(z_{1,4},z_{5,8})-{K_2}(z_{5,8}-z_{5,8}^d)]\\[3pt] {\Rightarrow }(\dot{z}_{5,8}-\dot{z}_{5,8}^d)+{K_2}(z_{5,8}-z_{5,8}^d)=0 \end{aligned} \end{align}

By defining the tracking error variables $e_{1,4}=z_{1,4}-z_{1,4}^d$ , $e_{5,8}=z_{5,8}-z_{5,8}^d$ , one has the following tracking error dynamics for the two subsystems:

(118) \begin{align} \dot{e}_{1,4}+{K_1}e_{1,4}=0\,{\Rightarrow }\,\text{lim}_{t{\rightarrow }\infty }e_{1,4}=0{\Rightarrow }\text{lim}_{t{\rightarrow }\infty }z_{1,4}=z_{1,4}^d \end{align}

and, from Eq. (117), one obtains for the second subsystem

(119) \begin{align} \dot{e}_{5,8}+{K_2}e_{5,8}=0{\Rightarrow }\text{lim}_{t{\rightarrow }\infty }e_{5,8}=0{\Rightarrow }\text{lim}_{t{\rightarrow }\infty }z_{5,8}=z_{5,8}^d \end{align}

Thus, all state variables $x_i$ , $i=1,\cdots,8$ of the 4-DOF SCARA robotic manipulator converge to the associated setpoints.

A proof of the global stability properties of the proposed flatness-based control scheme in successive loops can be also obtained through Lyapunov stability analysis. To this end, the following Lyapunov function is defined:

(120) \begin{align} V={1 \over 2}[{e_{1,4}^T}e_{1,4}+{e_{5,8}^T}e_{5,8}] \end{align}

By differentiating in time, one has

(121) \begin{align} \dot{V}={1 \over 2}[2{e_{1,4}^T}\dot{e}_{1,4}+2{e_{5,8}^T}\dot{e}_{5,8}]{\Rightarrow }\dot{V}={e_{1,4}^T}\dot{e}_{1,4}+{e_{5,8}^T}\dot{e}_{5,8} \end{align}

Moreover from the equations about the tracking error dynamics of the two subsystems, given in Eqs. (118) and (119), one has $\dot{e}_{1,4}=-{K_1}e_{1,4}$ and $\dot{e}_{5,8}=-{K_2}e_{5,8}$ . Thus, it holds that

(122) \begin{align} \begin{aligned} \dot{V}={e_{1,4}^T}\dot{e}_{1,4}+{e_{5,8}^T}\dot{e}_{5,8}{\Rightarrow }\dot{V}=-{e_{1,4}^T}{K_1}e_{1,4}-{e_{5,8}^T}{K_2}e_{5,8}\\[3pt] {\Rightarrow }\dot{V}\lt 0 \ \forall \ e_{1,4}\,{\neq }\,0, \ e_{5,8}\,{\neq }\,0 \end{aligned} \end{align}

while $\dot{V}=0$ if and only if $e_{1,4}=0$ and $e_{5,8}=0$ . Consequently, the Lyapunov function of the robotic system converges asymptotically to 0 and finally one has again that $\text{lim}_{t{\rightarrow }\infty }e_{1,4}=0{\Rightarrow }\text{lim}_{t{\rightarrow }\infty }z_{1,4}=z_{1,4}^d$ and $\text{lim}_{t{\rightarrow }\infty }e_{5,8}=0{\Rightarrow }\text{lim}_{t{\rightarrow }\infty }z_{5,8}=z_{5,8}^d$ .

7. Simulation tests

7.1. Results on nonlinear optimal control of the 4-DOF SCARA robot

The global stability properties of the control method and the elimination of the state vector’s tracking error which were previously proven through Lyapunov analysis are further confirmed through simulation experiments. The parameters of the model of the 4-DOF SCARA robot which have been used in the simulation tests have been chosen according to [Reference Voglewede, Smith and Monti2]. To compute the stabilizing feedback gains of the controller, the algebraic Riccati equation of Eq. (90) had to be repetitively solved at each iteration of the control algorithm. The obtained results are depicted in Figs. 3–18. The real values of the state variables of the 4-DOF SCARA robot are printed in blue, their estimates which are provided by the H-infinity Kalman filter are printed in green, while the associated setpoints are printed in red. The performance of the nonlinear optimal control method was very satisfactory. Actually, through all test cases it has been confirmed that the control method can achieve fast and accurate tracking of reference trajectories (setpoints) under moderate variations of the control inputs. The simulation tests confirm that the control method has global (and not local) stability properties. Under the nonlinear optimal control method, the state variables of the SCARA robot can track precisely setpoints with fast and abrupt changes. Moreover, the convergence to these setpoints is independent from initial conditions. The variations of the Lyapunov function of the nonlinear optimal control method of the 4-DOF SCARA robotic manipulator are shown in Figs. 19 and 20.

Figure 3. Tracking of setpoint 1 for the SCARA robot under nonlinear optimal control (a) convergence of state variables $x_1$ to $x_4$ to their reference setpoints (red line: setpoint, blue line: real value, green line: estimated value), (b) convergence of state variables $x_5$ to $x_8$ to their reference setpoints.

Figure 4. Tracking of setpoint 1 for the SCARA robot under nonlinear optimal control (a) control inputs $u_1$ , $u_2$ applied to the robot and (b) tracking error variables $e_1$ , $e_3$ , $e_5$ , and $e_7$ of the SCARA robot.

Figure 5. Tracking of setpoint 2 for the SCARA robot under nonlinear optimal control (a) convergence of state variables $x_1$ to $x_4$ to their reference setpoints (red line: setpoint, blue line: real value, green line: estimated value) and (b) convergence of state variables $x_5$ to $x_8$ to their reference setpoints.

Figure 6. Tracking of setpoint 2 for the SCARA robot under nonlinear optimal control (a) control inputs $u_1$ , $u_2$ applied to the robot and (b) tracking error variables $e_1$ , $e_3$ , $e_5$ , and $e_7$ of the SCARA robot.

Figure 7. Tracking of setpoint 3 for the SCARA robot under nonlinear optimal control (a) convergence of state variables $x_1$ to $x_4$ to their reference setpoints (red line: setpoint, blue line: real value, green line: estimated value), (b) convergence of state variables $x_5$ to $x_8$ to their reference setpoints.

Figure 8. Tracking of setpoint 3 for the SCARA robot under nonlinear optimal control (a) control inputs $u_1$ , $u_2$ applied to the robot and (b) tracking error variables $e_1$ , $e_3$ , $e_5$ , and $e_7$ of the SCARA robot.

Figure 9. Tracking of setpoint 4 for the SCARA robot under nonlinear optimal control (a) convergence of state variables $x_1$ to $x_4$ to their reference setpoints (red line: setpoint, blue line: real value, green line: estimated value) and (b) convergence of state variables $x_5$ to $x_8$ to their reference setpoints.

Figure 10. Tracking of setpoint 4 for the SCARA robot under nonlinear optimal control (a) control inputs $u_1$ , $u_2$ applied to the robot and (b) tracking error variables $e_1$ , $e_3$ , $e_5$ , and $e_7$ of the SCARA robot.

Figure 11. Tracking of setpoint 5 for the SCARA robot under nonlinear optimal control (a) convergence of state variables $x_1$ to $x_4$ to their reference setpoints (red line: setpoint, blue line: real value, green line: estimated value) and (b) convergence of state variables $x_5$ to $x_8$ to their reference setpoints.

Figure 12. Tracking of setpoint 5 for the SCARA robot under nonlinear optimal control (a) control inputs $u_1$ , $u_2$ applied to the robot and (b) tracking error variables $e_1$ , $e_3$ , $e_5$ , and $e_7$ of the SCARA robot.

Figure 13. Tracking of setpoint 6 for the SCARA robot under nonlinear optimal control (a) convergence of state variables $x_1$ to $x_4$ to their reference setpoints (red line: setpoint, blue line: real value, green line: estimated value) and (b) convergence of state variables $x_5$ to $x_8$ to their reference setpoints.

Figure 14. Tracking of setpoint 6 for the SCARA robot under nonlinear optimal control (a) control inputs $u_1$ , $u_2$ applied to the robot and (b) tracking error variables $e_1$ , $e_3$ , $e_5$ , and $e_7$ of the SCARA robot.

Figure 15. Tracking of setpoint 7 for the SCARA robot under nonlinear optimal control (a) convergence of state variables $x_1$ to $x_4$ to their reference setpoints (red line: setpoint, blue line: real value, green line: estimated value) and (b) convergence of state variables $x_5$ to $x_8$ to their reference setpoints.

Figure 16. Tracking of setpoint 7 for the SCARA robot under nonlinear optimal control (a) control inputs $u_1$ , $u_2$ applied to the robot and (b) tracking error variables $e_1$ , $e_3$ , $e_5$ , and $e_7$ of the SCARA robot.

Figure 17. Tracking of setpoint 8 for the SCARA robot under nonlinear optimal control (a) convergence of state variables $x_1$ to $x_4$ to their reference setpoints (red line: setpoint, blue line: real value, green line: estimated value) and (b) convergence of state variables $x_5$ to $x_8$ to their reference setpoints.

Figure 18. Tracking of setpoint 8 for the SCARA robot under nonlinear optimal control (a) control inputs $u_1$ , $u_2$ applied to the robot and (b) tracking error variables $e_1$ , $e_3$ , $e_5$ , and $e_7$ of the SCARA robot.

Figure 19. Lyapunov functions for nonlinear optimal control of the 4-DOF SCARA robotic manipulator when tracking setpoints 1–4.

Figure 20. Lyapunov functions for nonlinear optimal control of the 4-DOF SCARA robotic manipulator when tracking setpoints 5–8.

Regarding the selection of values for the controller gains, it can be noted that parameters $r$ , $\rho$ , and $Q$ which appear in the method’s algebraic Riccati equation are assigned offline constant values, whereas the gains vector $K$ is updated at each sampling instant, based on the positive definite and symmetric matrix $P$ which is the solution of the method’s algebraic Riccati equation. The tracking accuracy and the transient performance of the control scheme depend on the values of coefficients $r$ , $\rho$ and on the values of the elements of the diagonal matrix $Q$ . Actually, for relatively small values of $r$ , one achieves elimination of the state vector’s tracking error. Moreover, for relatively high values of the diagonal elements of matrix $Q$ , one achieves fast convergence of the state variables to the reference trajectories. Finally, the smallest value of the attenuation coefficient $\rho$ that results in a valid solution of the method’s Riccati equation, in the form of the positive definite and symmetric matrix $P$ , is the one that provides the control loop with maximum robustness. Moreover, according to Parseval’s theorem the consumption of energy by the actuators of the robot is proportional to the integral in time of the square of the associated control inputs. This also gives a clear indicator on how the aggregate consumed power of the robot is distributed to its actuators during the execution of several tasks.
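In practice, the smallest admissible value of $\rho$ can be located by a simple scan over the Riccati computation sketched in Section 4.3. The following indicative snippet (in Python) keeps the smallest $\rho$ for which a positive definite solution $P$ is still obtained; the solver callable is a hypothetical wrapper around a routine such as the hinf_gain sketch given earlier.

```python
import numpy as np

def smallest_feasible_rho(solve_P, rho_grid):
    """Scan a descending grid of attenuation values and return the smallest
    rho for which solve_P(rho) still yields a positive definite Riccati
    solution P (solve_P can wrap the hinf_gain sketch of Section 4.3)."""
    rho_best = None
    for rho in sorted(rho_grid, reverse=True):
        try:
            P = solve_P(rho)
        except np.linalg.LinAlgError:        # no valid solution for this rho
            break
        if np.all(np.linalg.eigvalsh(P) > 0.0):
            rho_best = rho
        else:
            break
    return rho_best
```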

Compared to past attempts at solving the H-infinity control problem for nonlinear dynamical systems, the article’s control approach is substantially different [Reference Rigatos and Karapanou37]. Preceding results on the use of H-infinity control for nonlinear dynamical systems were limited to the case of affine-in-the-input systems with drift-only dynamics and considered that the control inputs gain matrix is not dependent on the values of the system’s state vector. Moreover, in these approaches the linearization was performed around points of the desirable trajectory, whereas in the present article’s control method the linearization points are related to the value of the state vector at each sampling instant as well as to the last sampled value of the control inputs vector. The Riccati equation which has been proposed for computing the feedback gains of the controller is novel, and so is the presented global stability proof through Lyapunov analysis.

The proposed H-infinity (optimal) control method for the 4-DOF SCARA robot exhibits several advantages when compared against other linear or nonlinear control schemes [Reference Rigatos and Karapanou37]. For instance, (i) in contrast to global linearization-based control schemes (Lie algebra-based control and differential flatness theory-based control), it does not need complicated changes of state variables (diffeomorphisms) and does not run into singularity problems in the computation of the control inputs, (ii) in contrast to sliding-mode control or backstepping control, the proposed nonlinear optimal control scheme does not require the state-space model of the system to be in a specific form (e.g., triangular, canonical, etc.), (iii) in contrast to proportional integral derivative control, the proposed nonlinear optimal control method is globally stable and remains functional under changes of operating points, (iv) in contrast to multi-model-based control and linearization around multiple operating points, the nonlinear optimal control scheme requires linearization around a single operating point, thus avoiding the computational burden of solving multiple Riccati equations or linear matrix inequalities, and (v) unlike the popular computed torque method for robotic manipulators, the new control approach is characterized by optimality and is also applicable when the number of control inputs is not equal to the robot’s number of DOFs.

To elaborate on the tracking performance and on the robustness of the proposed nonlinear optimal control method for the SCARA robot, the following tables are given: (i) Table I which provides information about the accuracy of tracking of the reference setpoints by the state variables of the SCARA robot’s state-space model, (ii) Table II which provides information about the robustness of the control method to parametric changes in the model of the SCARA robot’s dynamics (change ${\Delta }a\%$ in the mass $m_2$ of the second link), (iii) Table III which provides information about the precision in state variables’ estimation that is achieved by the H-infinity Kalman filter, (iv) Table IV which provides the approximate convergence times of the SCARA robot’s state variables to the associated setpoints, and (v) Table V which provides information about the $\%$ distribution of the total consumed power in the actuators of the SCARA robot.
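For completeness, the following hedged sketch indicates how entries of the kind reported in Tables I, II, and IV could be reproduced from simulation logs; the names x_log (state history), x_ref (setpoint history), Ts, and the tolerance tol are illustrative assumptions rather than the article’s code.

```python
# Sketch: tracking RMSE and approximate convergence time from simulation logs.
import numpy as np

def tracking_rmse(x_log: np.ndarray, x_ref: np.ndarray) -> np.ndarray:
    """Per-state RMSE between logged states and their reference setpoints."""
    return np.sqrt(np.mean((x_log - x_ref) ** 2, axis=0))

def convergence_time(x_log, x_ref, Ts, tol=1e-3):
    """First time after which each state stays within `tol` of its setpoint.
    If a state never leaves the tolerance band the time is 0; if it never
    settles, the returned value equals the end of the simulation horizon."""
    err = np.abs(x_log - x_ref)
    times = []
    for j in range(err.shape[1]):
        outside = np.where(err[:, j] > tol)[0]   # indices outside the band
        times.append(0.0 if outside.size == 0 else (outside[-1] + 1) * Ts)
    return np.array(times)
```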

Table I. Nonlinear optimal control – Tracking RMSE $\times 10^{-3}$ for the SCARA robot in the disturbance-free case.

Table II. Nonlinear optimal control – Tracking RMSE $\times 10^{-3}$ for the SCARA robot in the case of disturbances.

Table III. Nonlinear optimal control – RMSE $\times 10^{-5}$ for the estimation performed by the H-infinity KF.

Table IV. Nonlinear optimal control – Convergence time (sec) for the SCARA robot’s state variables.

Table V. Nonlinear optimal control – Distribution of power % in actuators.

The proposed nonlinear optimal control method has global (and not local) stability properties. This is explicitly proven through Lyapunov stability analysis. The article’s Lyapunov stability proof makes use of the tracking error dynamics of the initial nonlinear system. The computed control inputs are applied to the initial nonlinear model of the 4-DOF SCARA robotic manipulator and not to its linear approximation. It is ensured that the linearization error due to truncation of higher-order terms in the Taylor-series expansion remains small, because the linearization is performed at each sampling period around the present value of the SCARA robot’s state vector and not around a point on the desired trajectory. Since the span between the linearization point and the system’s state vector remains small at each sampling period, the model obtained from the linearization describes with precision the initial nonlinear dynamics of the robot. This is also shown in detail through Eqs. (18)–(20) in subsection 3.1 of the article.
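A minimal sketch of this per-sampling-period linearization is given below; it assumes a generic state-space function f(x, u) for the robot’s dynamics (dx/dt = f(x, u)) and uses forward-difference Jacobians, so it illustrates the principle rather than reproducing the article’s analytical Jacobian matrices.

```python
# Sketch: linearization around the present state x_k and last input u_{k-1}.
import numpy as np

def jacobians(f, x, u, eps=1e-6):
    """Forward-difference Jacobians A = df/dx, B = df/du at (x, u)."""
    n, m = x.size, u.size
    fx = f(x, u)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x + dx, u) - fx) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x, u + du) - fx) / eps
    # Local model: dx/dt ~ f(x_k, u_{k-1}) + A (x - x_k) + B (u - u_{k-1})
    return A, B
```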

External disturbances are taken into account in the design of the article’s nonlinear optimal (H-infinity) controller through the disturbance inputs gain matrix $L$ which appears in the method’s algebraic Riccati equation (Eqs. (82) and (90)) as well as in the equation of the tracking error dynamics (Eq. (84)). As explained at the end of subsection 4.3, the worst possible disturbance that the nonlinear optimal control method can sustain is given by $\tilde{d}=\dfrac{1 }{{\rho ^2}}{L^T}Pe$ ; therefore, it depends on the attenuation coefficient $\rho$ and on the disturbance inputs gain matrix $L$ . Ideally, to achieve high disturbance rejection capability, one should assign small values to $\rho$ and large values to $L$ . However, in practice this may result in a Riccati equation which is no longer solvable. Therefore, there is a trade-off between the values of $\rho$ and $L$ that give high disturbance rejection capability and the values of these parameters that ensure a solution for the method’s Riccati equation.
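The worst-case disturbance expression and the admissibility test for a candidate value of $\rho$ can be summarized by the short sketch below; it assumes that the Riccati solution $P$, the matrix $L$, and the tracking error $e$ are already available and is given only for illustration, not as the article’s code.

```python
# Sketch: worst-case disturbance d_tilde = (1/rho^2) L^T P e and a cheap
# admissibility test for the chosen attenuation coefficient rho.
import numpy as np

def worst_case_disturbance(P: np.ndarray, L: np.ndarray, rho: float,
                           e: np.ndarray) -> np.ndarray:
    """Returns the worst-case disturbance the control loop can sustain."""
    return (L.T @ P @ e) / rho ** 2

def is_valid_riccati_solution(P: np.ndarray) -> bool:
    """The chosen rho is admissible only if the Riccati solution P is
    symmetric positive definite; a Cholesky factorization is a cheap test."""
    try:
        np.linalg.cholesky(0.5 * (P + P.T))
        return True
    except np.linalg.LinAlgError:
        return False
```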

7.2. Results on flatness-based control in successive loops for the 4-DOF SCARA robot

Results about the tracking accuracy and the speed of convergence to setpoints of the successive-loops flatness-based control method, in the case of the 4-DOF SCARA robotic manipulator, are shown in Figs. 21–36. It can be noticed that under this control scheme one also achieves fast and precise tracking of the reference setpoints for all state variables of the robotic system. It is noteworthy that through the stages of this method one also solves the setpoint-definition problem for all state variables of the redundant robotic manipulator. Actually, the selection of setpoints for the state vector $z_{1,4}=[x_1, x_3, x_5, x_7]^T$ is unconstrained. On the other hand, by defining the state vector $z_{5,8}=[x_2, x_4, x_6, x_8]^T$ as the virtual control input of the $z_{1,4}$ subsystem, one can find the setpoints for $z_{5,8}$ as functions of the setpoints for $z_{1,4}$ . The speed of convergence of the state variables of the robotic system under flatness-based control implemented in successive loops depends on the selection of values for the diagonal gain matrices $K_1$ , $K_2$ of Eqs. (114) and (115). The variations of the Lyapunov function of the flatness-based control loop of the 4-DOF SCARA robotic manipulator are shown in Figs. 37 and 38.
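A minimal sketch of this two-loop structure is given below; it assumes that $z_{1,4}$ collects position-like variables and $z_{5,8}$ the corresponding velocity-like variables, and the function inverse_dynamics is a hypothetical placeholder for the model-based inner-loop law, so it does not reproduce the article’s Eqs. (114) and (115).

```python
# Sketch of flatness-based control in successive loops (assumed generic form).
import numpy as np

def successive_loops_control(z14, z58, z14_ref, z14_ref_dot, z14_ref_ddot,
                             K1, K2, inverse_dynamics):
    # Outer loop: the velocity-like vector z_{5,8} acts as a virtual control
    # input; its setpoint is derived from the z_{1,4} tracking error.
    z58_ref = z14_ref_dot - K1 @ (z14 - z14_ref)
    # Inner loop: an acceleration-level command drives z_{5,8} to its setpoint
    # (the derivative of z58_ref is approximated here by z14_ref_ddot).
    v = z14_ref_ddot - K2 @ (z58 - z58_ref)
    # The acceleration command is mapped to joint torques/forces by the
    # placeholder model-based law.
    return inverse_dynamics(z14, z58, v), z58_ref

# Illustrative diagonal gain matrices (values chosen only for the example).
K1 = np.diag([5.0, 5.0, 5.0, 5.0])
K2 = np.diag([10.0, 10.0, 10.0, 10.0])
```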

Figure 21. Tracking of setpoint 1 for the SCARA robot under flatness-based control in successive loops (a) convergence of state variables $x_1$ to $x_4$ to their reference setpoints (red line: setpoint, blue line: real value, green line: estimated value) and (b) convergence of state variables $x_5$ to $x_8$ to their reference setpoints.

Figure 22. Tracking of setpoint 1 for the SCARA robot under flatness-based control in successive loops (a) control inputs $u_1$, $u_2$ applied to the robot and (b) tracking error variables $e_1$, $e_3$, $e_5$, and $e_7$ of the SCARA robot.

Figure 23. Tracking of setpoint 2 for the SCARA robot under flatness-based control in successive loops (a) convergence of state variables $x_1$ to $x_4$ to their reference setpoints (red line: setpoint, blue line: real value, green line: estimated value) and (b) convergence of state variables $x_5$ to $x_8$ to their reference setpoints.

Figure 24. Tracking of setpoint 2 for the SCARA robot under flatness-based control in successive loops (a) control inputs $u_1$, $u_2$ applied to the robot and (b) tracking error variables $e_1$, $e_3$, $e_5$, and $e_7$ of the SCARA robot.

Figure 25. Tracking of setpoint 3 for the SCARA robot under flatness-based control in successive loops (a) convergence of state variables $x_1$ to $x_4$ to their reference setpoints (red line: setpoint, blue line: real value, green line: estimated value) and (b) convergence of state variables $x_5$ to $x_8$ to their reference setpoints.

Figure 26. Tracking of setpoint 3 for the SCARA robot under flatness-based control in successive loops (a) control inputs $u_1$, $u_2$ applied to the robot and (b) tracking error variables $e_1$, $e_3$, $e_5$, and $e_7$ of the SCARA robot.

Figure 27. Tracking of setpoint 4 for the SCARA robot under flatness-based control in successive loops (a) convergence of state variables $x_1$ to $x_4$ to their reference setpoints (red line: setpoint, blue line: real value, green line: estimated value) and (b) convergence of state variables $x_5$ to $x_8$ to their reference setpoints.

Figure 28. Tracking of setpoint 4 for the SCARA robot under flatness-based control in successive loops (a) control inputs $u_1$, $u_2$ applied to the robot and (b) tracking error variables $e_1$, $e_3$, $e_5$, and $e_7$ of the SCARA robot.

Figure 29. Tracking of setpoint 5 for the SCARA robot under flatness-based control in successive loops (a) convergence of state variables $x_1$ to $x_4$ to their reference setpoints (red line: setpoint, blue line: real value, green line: estimated value) and (b) convergence of state variables $x_5$ to $x_8$ to their reference setpoints.

Figure 30. Tracking of setpoint 5 for the SCARA robot under flatness-based control in successive loops (a) control inputs $u_1$, $u_2$ applied to the robot and (b) tracking error variables $e_1$, $e_3$, $e_5$, and $e_7$ of the SCARA robot.

Table VI. Flatness-based control in successive loops – Tracking RMSE $\times 10^{-3}$ for the SCARA robot in the disturbance-free case.

Figure 31. Tracking of setpoint 6 for the SCARA robot under flatness-based control in successive loops (a) convergence of state variables $x_1$ to $x_4$ to their reference setpoints (red line: setpoint, blue line: real value, green line: estimated value) and (b) convergence of state variables $x_5$ to $x_8$ to their reference setpoints.

Figure 32. Tracking of setpoint 6 for the SCARA robot under flatness-based control in successive loops (a) control inputs $u_1$, $u_2$ applied to the robot and (b) tracking error variables $e_1$, $e_3$, $e_5$, and $e_7$ of the SCARA robot.

Figure 33. Tracking of setpoint 7 for the SCARA robot under flatness-based control in successive loops (a) convergence of state variables $x_1$ to $x_4$ to their reference setpoints (red line: setpoint, blue line: real value, green line: estimated value) and (b) convergence of state variables $x_5$ to $x_8$ to their reference setpoints.

To elaborate on the tracking performance and on the robustness of the proposed flatness-based control method in successive loops for the SCARA robot, the following tables are given: (i) Table VI which provides information about the accuracy of tracking of the reference setpoints by the state variables of the SCARA robot’s state-space model, (ii) Table VII which provides information about the robustness of the control method to parametric changes in the model of the SCARA robot’s dynamics (change ${\Delta }a\%$ in the mass $m_2$ of the second link), (iii) Table VIII which provides information about the precision in state variables’ estimation that is achieved by the H-infinity Kalman filter, (iv) Table IX which provides the approximate convergence times of the SCARA robot’s state variables to the associated setpoints, and (v) Table X which provides information about the $\%$ distribution of the total consumed power in the actuators of the SCARA robot.

Table VII. Flatness-based control in successive loops – Tracking RMSE $\times 10^{-3}$ for the SCARA robot in the case of disturbances.

Table VIII. Flatness-based control in successive loops – RMSE $\times 10^{-5}$ for the estimation performed by the H-infinity KF.

Table IX. Flatness-based control in successive loops – Convergence time (sec) for the SCARA robot’s state variables.

Table X. Flatness-based control in successive loops – Distribution of power % in actuators.

Figure 34. Tracking of setpoint 7 for the SCARA robot under flatness-based control in successive loops (a) control inputs $u_1$, $u_2$ applied to the robot and (b) tracking error variables $e_1$, $e_3$, $e_5$, and $e_7$ of the SCARA robot.

Figure 35. Tracking of setpoint 8 for the SCARA robot under flatness-based control in successive loops (a) convergence of state variables $x_1$ to $x_4$ to their reference setpoints (red line: setpoint, blue line: real value, green line: estimated value) and (b) convergence of state variables $x_5$ to $x_8$ to their reference setpoints.

Figure 36. Tracking of setpoint 8 for the SCARA robot under flatness-based control in successive loops (a) control inputs $u_1$, $u_2$ applied to the robot and (b) tracking error variables $e_1$, $e_3$, $e_5$, and $e_7$ of the SCARA robot.

Figure 37. Lyapunov functions for flatness-based control in successive loops of the 4-DOF SCARA robotic manipulator when tracking setpoints 1–4.

Figure 38. Lyapunov functions for flatness-based control in successive loops of the 4-DOF SCARA robotic manipulator when tracking setpoints 5–8.

8. Conclusions

SCARA-type robots are widely applied in several industrial tasks. To improve their accuracy and speed in task execution, as well as to reduce their operating cost, elaborate control algorithms have to be used for them. In the present article, a novel nonlinear optimal control approach has been applied to the dynamic model of the 4-DOF SCARA robot with three revolute joints and one prismatic joint. At a first stage, the dynamic model of the SCARA robot undergoes approximate linearization with first-order Taylor series expansion and through the computation of the associated Jacobian matrices. The linearization point is updated at each sampling instance and is defined by the present value of the system’s state vector and by the last sampled value of the control inputs vector.

At a second stage, a stabilizing H-infinity feedback controller is designed. The H-infinity controller solves the optimal control problem for the model of the SCARA robot under model uncertainty and external perturbations. The H-infinity controller represents a min-max differential game which takes place between (i) the control inputs, which try to minimize a quadratic cost function of the state vector’s tracking error, and (ii) the model imprecision and external perturbation terms, which try to maximize this cost function. To compute the stabilizing feedback gains of the H-infinity controller, an algebraic Riccati equation is repetitively solved at each time-step of the control algorithm. The global stability properties of the control scheme have been proven through Lyapunov analysis. First, it has been demonstrated that the control method satisfies the H-infinity tracking performance criterion, while under moderate conditions it has also been proven that the control loop is globally asymptotically stable. Finally, to implement state estimation-based control, the H-infinity Kalman filter has been used as a robust state estimator. The nonlinear optimal control approach retains the advantages of standard linear optimal control, that is, fast and accurate tracking of reference setpoints under moderate variations of the control inputs. Finally, the nonlinear optimal control method has been compared against flatness-based control implemented in successive loops.
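To make the per-sampling-instant procedure concrete, a schematic loop skeleton is sketched below; it is an assumed outline, not the article’s implementation: a standard continuous-time algebraic Riccati solver is used as a stand-in for the article’s H-infinity Riccati equation, and the dynamics f, its Jacobian routine, and the state estimate produced by the H-infinity Kalman filter are placeholders.

```python
# Schematic control step (sketch under assumptions; generic ARE as a stand-in).
import numpy as np
from scipy.linalg import solve_continuous_are

def control_step(x_hat, u_prev, x_ref, f, jacobians, Q, R):
    A, B = jacobians(f, x_hat, u_prev)     # linearize around the current state estimate
    P = solve_continuous_are(A, B, Q, R)   # solve the (stand-in) algebraic Riccati equation
    K = np.linalg.solve(R, B.T @ P)        # feedback gain K = R^{-1} B^T P
    e = x_hat - x_ref                      # tracking error at this sampling instant
    return -K @ e                          # control input applied over the next period
```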

Acknowledgments

(i) This research work has been partially supported by Research Grant Ref. 3671/ "Control and estimation of nonlinear and PDE dynamical systems" of the Unit of Industrial Automation of the Industrial Systems Institute. (ii) The authors of this article confirm that, to their knowledge, no conflict of interest exists with third parties about the content of the present manuscript. (iii) The contribution of the authors to this research work is designated by their order of appearance in the article’s list of authors. (iv) Computation data for this manuscript are available from the corresponding author upon reasonable request.

References

Rigatos, G. and Busawon, K., Robotic Manipulators and Vehicles: Control, Estimation and Filtering (Springer, Cham, 2018).
Voglewede, P., Smith, A. P. C. and Monti, A., “Dynamic performance of a SCARA robot manipulator with uncertainty using polynomial chaos theory,” IEEE Trans. Robot. 25(1), 206–219 (2009).
Dessaint, L. A., Saad, M. and Al-Haddad, K., “An adaptive controller of a direct drive robot,” IEEE Trans. Ind. Electron. 39(2), 105–111 (1992).
Zhen, S., Zhou, Z., Liu, X., Chen, F., Zhao, H. and Chen, Y. H., “A novel practical robust control inheriting PID for SCARA robot,” IEEE Access 8, 227407–227419 (2020).
He, Y., Mai, X., Cui, C., Gao, J., Yang, Z., Zhang, K., Chen, X., Chen, Y. and Tang, H., “Dynamic modeling, simulation and experimental verification of a wafer handling SCARA robot with decoupling servo control,” IEEE Access 7, 47143–47153 (2019).
Makino, H., “Development of the SCARA,” J. Robot. Mechatron. 26(1), 1–14 (2014).
Das, M. T. and Dulger, L. C., “Mathematical modelling, simulation and experimental verification of a SCARA robot,” Simul. Model. Pract. Theory 13(3), 257–271 (2005).
Er, N. M. J., Low, C. B., Nah, K. H., Lim, M. H. and Ng, S. Y., “Real-time implementation of a dynamic Fuzzy neural network for a SCARA,” Microprocess. Microsyst. 26(9-10), 449–461 (2002).
Oral, A., “Patterning automation of square mosaics using computer assisted SCARA robot,” Robotica 27(6), 897–903 (2009).
Krönig, L., Hörler, P., Caseiro, S., Grossen, L., Araujo, R., Kneib, J. P. and Bauri, M., “Precision control of miniature SCARA robots for multi-object spectrographs,” J. Optomechatronics 14(1), 53–77 (2020).
Urea, C. and Kern, J., “Trajectory tracking control of a real redundant manipulator of the SCARA type,” J. Electr. Eng. Technol. 11(1), 215–226 (2016).
Urea, C. and Kern, J., “Modelling and control of a redundant SCARA-type manipulator robot,” Int. J. Adv. Robot. Syst. 9(58), 1–14 (2012).
Urea, C., Cortes, J. and Pascal, J., “Design, construction and control of a SCARA manipulator with 6 degrees of freedom,” J. Appl. Res. Technol. 14(6), 396–404 (2014).
Chonal, H., Guyon, J. B., Koessler, A., Dechambre, Q., Boudon, B., Blaysat, B. and Bouton, N., “Geometrical defect identification of a SCARA robot from a vector modelling of kinematic joints invariant,” Mech. Mach. Theory 162, 104333–104362 (2021).
Bianchi, M., van der Maas, A., Maljaari, E. and Heemels, W. P. M. H., “Offset-Free MPC for Resource Sharing on a Nonlinear SCARA Robot,” In: IFAC NMPC 2018, 6th IFAC Conference on Nonlinear Model Predictive Control, Madison, Wisconsin, USA (2018).
Takagi, S. and Uchiyama, N., “Robust control system design for SCARA robots using adaptive pole placement,” IEEE Trans. Ind. Electron. 52(3), 315–321 (2005).
Visioli, A. and Legnani, G., “On the trajectory tracking control of industrial SCARA robot manipulators,” IEEE Trans. Ind. Electron. 49(1), 224–233 (2002).
Dehuk, A., Nguyen, A. T., Dequidt, A., Vermeiren, L. and Dambrine, M., “Disturbance Observer-Based Tracking Control for Industrial SCARA Robot Manipulators,” In: IEEE IECON 2019, 45th Annual Conference of the IEEE Industrial Electronics Society, Lisbon, Portugal (2019).
Ciliz, M. K. and Tuncay, M. O., “Comparative experiments with a multiple model-based adaptive controller for a SCARA-type direct-drive manipulator,” Robotica 23(6), 721–729 (2005).
Bensafia, Y., Ladaci, S., Khettab, K. and Chemoni, A., “Fractional order model reference adaptive control for SCARA robot trajectory tracking,” Int. J. Ind. Syst. Eng. 30(2), 138–156 (2018).
Chang, T., Jaroonshiriphan, P., Bernhardt, M. and Ludden, P., “Web-based command shaping of Cobra 600 robot with a swinging load,” IEEE Trans. Ind. Inform. 2(1), 59–69 (2006).
Sim, H. S., Kao, Y. M., Jang, S. H., Ahn, D. K., Cho, B. N. and Han, S. H., “A Study on Visual Feedback Control of SCARA Robot Arm,” In: IEEE ICCAS 2015, 15th International Conference on Control, Automation and Systems, Busan, Korea (2015).
van der Maars, A., Steinbuch, Y. F., Boverhof, A. and Heemel, W. P., “Switched Control of a SCARA Robot with Shared Actuation Resources,” In: IFAC WC 2017, IFAC 2017 World Congress, Toulouse, France (2017).
Arakelian, V. and Briot, S., “Dynamic Balancing of the SCARA Robot,” In: RoManSy 2008, 17th CISM-IFToMM Symposium on Robot Design, Tokyo, Japan (2008).
Indri, M., Calafiore, G., Legnani, G., Juttu, F. and Visioli, A., “Optimized Dynamic Calibration of a SCARA Robot,” In: IFAC 2002, 15th Triennial World Congress, Barcelona, Spain (2002).
Ali, H. S., Boutat-Baddus, L., Bacis-Aubny, Y. and Darouach, M., “$H_{\infty }$ Control of a SCARA Robot Using Polytopic LPV Approach,” In: IEEE MED 2006, Mediterranean Conference on Control and Automation, Ancona, Italy (2006).
Shan, X., Li, Y., Liu, H. and Huang, T., “Residual vibration reduction of high-speed pick-and-place parallel robot using input shaping,” Chin. J. Mech. Eng. 35(16), 1–8 (2022).
Yovchev, K. and Miteva, L., “Payload adaptive iterative learning control for robotic manipulators,” Mech. Sci. 13(1), 427326 (2022).
Ibaraki, S. and Usui, R., “A novel error mapping of bi-directional angular positioning deviation of rotary axes in a SCARA-type robot by ‘open-loop’ tracking interferometer measurement,” Precis. Eng. 74, 60–68 (2022).
Urrea, C., Kerr, I. and Alvarez, E., “Design and implementation of fault-tolerant control strategies for a real underactuated manipulator robot,” J. Complex Intell. Syst. 8(6), 5101–5123 (2022).
He, Y., Li, X., Xu, Z., Zhou, X. and Li, S., “Collaboration of multiple SCARA robots with guaranteed safety using recurrent neural networks,” Neurocomputing 456, 1–10 (2021).
Wei, B. and Zhang, D., “A review of dynamic balancing for robotic mechanisms,” Robotica 39(1), 55–71 (2021).
Singh, C. and Liu, G., “Energy-aware redundant actuation for safe spring-assisted modular and reconfigurable robot,” Robotica 40(12), 4498–4511 (2022).
Zou, Q., Zhang, D. and Huang, G., “Kinematic joint matrix and block diagram for a group of parallel manipulators,” Robotica 41(3), 939–961 (2023).
Khoshbin, E., Youssef, K., Meziane, R. and Otis, M. J. D., “Reconfigurable fully constrained cable-driven parallel mechanism for avoiding collision between cables with human,” Robotica 40(12), 4405–4430 (2022).
Rigatos, G., Siano, P. and Abbaszadeh, M., “Nonlinear H-infinity control for 4-DOF underactuated overhead cranes,” Trans. Inst. Meas. Control 40(7), 2364–2377 (2017).
Rigatos, G. and Karapanou, E., Advances in Applied Nonlinear Optimal Control (Cambridge Scholars Publishing, Newcastle, 2020).
Rigatos, G. G. and Tzafestas, S. G., “Extended Kalman filtering for Fuzzy modelling and multi-sensor fusion,” Math. Comput. Model. Dyn. Syst. 13(3), 251–266 (2007).
Basseville, M. and Nikiforov, I., Detection of Abrupt Changes: Theory and Applications (Prentice-Hall, New Jersey, 1993).
Rigatos, G. and Zhang, Q., “Fuzzy model validation using the local statistical approach,” Fuzzy Sets Syst. 60(7), 882–904 (2009).
Rigatos, G., Nonlinear Control and Filtering Using Differential Flatness Approaches: Applications to Electromechanical Systems (Springer, Cham, 2015).
Toussaint, G. J., Basar, T. and Bullo, F., “$H_{\infty }$ Optimal Tracking Control Techniques for Nonlinear Underactuated Systems,” In: Proceedings of IEEE CDC 2000, 39th IEEE Conference on Decision and Control, Sydney, Australia (2000).