
An experimental evaluation of robot-stopping approaches for improving fluency in collaborative robotics

Published online by Cambridge University Press:  15 March 2024

Lorenzo Scalera
Affiliation:
Polytechnic Department of Engineering and Architecture, University of Udine, Udine, Italy
Federico Lozer
Affiliation:
Polytechnic Department of Engineering and Architecture, University of Udine, Udine, Italy
Andrea Giusti
Affiliation:
Robotics and Intelligent Systems Engineering, Fraunhofer Italia Research, Bolzano, Italy
Alessandro Gasparetto*
Affiliation:
Polytechnic Department of Engineering and Architecture, University of Udine, Udine, Italy
Corresponding author: Alessandro Gasparetto; Email: [email protected]

Abstract

This paper explores and experimentally compares the effectiveness of robot-stopping approaches based on speed and separation monitoring for improving fluency in collaborative robotics. In the compared approaches, a supervisory controller checks the distance between the bounding volumes enclosing human operator and robot and prevents potential collisions by determining the robot's stop time and triggering a stop trajectory if necessary. The methods are tested on a Franka Emika robot with 7 degrees of freedom, involving 27 volunteer participants, who are asked to walk along assigned paths to cyclically intrude into the robot workspace while the manipulator is working. The experimental results show that scaling online the dynamic safety zones is beneficial for improving the fluency of human-robot collaboration, showing significant statistical differences with respect to alternative approaches.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

1. Introduction

In the context of Industry 4.0, collaborative robotics refers to the integration of robots into modern manufacturing processes in a way that allows for safe human-robot interaction. This technology enables human workers to operate alongside robots without the need for physical barriers, improving efficiency and productivity while maintaining a safe work environment [Reference Hanna, Larsson, Götvall and Bengtsson1, Reference Li, Hu, Zhou and Pham2]. In order to guarantee safety of both human workers and robots, specific measures are crucial to minimize injuries to human operators in case of contacts [Reference Huang, Gao, Liu, Chen and Zhang3Reference Pacheco Quiñones, Paterna, De Benedictis, Maffiodo, Franco and Ferraresi5] or to prevent unsafe physical interaction by means of collision avoidance strategies [Reference Wang, Wei, Du, Liu, Yang, Wei and Fang6, Reference Merckaert, Convens, Wu, Roncone, Nicotra and Vanderborght7].

Several safety strategies in human-robot collaboration are built on online trajectory scaling or motion re-planning to maintain a separation and avoid contact between human and robot in motion. Such approaches can be considered for realizing the speed and separation monitoring (SSM) paradigm described in the ISO/TS 15066 [8]. The SSM is implemented in several works in the literature to minimize the risk of injury in collaborative robotics applications. A method for the real-time motion control of manipulators for risk-free human-robot coexistence is presented in ref. [Reference Merckaert, Convens, Wu, Roncone, Nicotra and Vanderborght7], where experiments performed on a Franka Emika robot in a dynamic environment show the capability of the proposed controller to avoid collisions with humans moving unpredictably. A trajectory scaling method based on safety evaluation is proposed in ref. [Reference Lippi and Marino9], and tested in a scenario where multiple mobile manipulators perform an operation in interaction with a human operator. Furthermore, the authors in ref. [Reference Yang, Xie, Chen, Ding and Wang10] describe an approach for dynamic SSM based on scene semantic information, using a thermal and a depth camera.

Another example of motion planning for human-robot collaboration can be found in ref. [Reference Faroni, Beschi and Pedrocchi11], which also provides a novel approach for embedding a human model in the robot path planner. The technique specifically tackles the issue of minimizing path execution time, including pauses and slowdowns caused by the presence of humans. In ref. [Reference Grushko, Vysocký, Oščádal, Vocetka, Novák and Bobovský12], the mutual understanding for human-robot collaboration is improved using haptic feedback devices in conjunction with human-aware motion planning to communicate intended trajectories. Moreover, the authors in ref. [Reference Lagomarsino, Lorenzini, De Momi and Ajoudani13] describe a method for the adaptation of robot motion to optimize the compromise made between the smoothness of the robot trajectories and the overall execution time, while guaranteeing safety of the human operator. In ref. [Reference Liu, Qu, Xu, Du, Jia, Song and Liu14], a robust and efficient collision avoidance planning method is proposed to generate a collision-free trajectory in real-time by leveraging a repulsive force generation method and an optimization routine to minimize the deviation of the optimized trajectory from the reference one. The authors in ref. [Reference Braglia, Tagliavini, Pini and Biagiotti15] present an online motion planning strategy for safe human-robot cooperation using B-splines and hidden Markov models. The framework is validated using the Franka Emika arm and tested in simple start-goal tasks. More recently, in ref. [Reference Liu, Rocco, Zanchettin, Zhao, Jiang and Mei16], a real-time hierarchical control method based on model predictive control and stochastic control barrier functions is proposed to ensure safety in human-involved dynamic environments.

Additional approaches exploit the implementation of the SSM criterion together with the power and force limiting (PFL) methodology, which assumes a maximum amount of energy to be transferred from the robot to the human in the event of collision before the robot stops. In ref. [Reference Svarny, Tesar, Behrens and Hoffmann17], a safe physical human-robot interaction is achieved taking advantage of both the SSM and the PFL in a collaborative task using a KUKA LBR iiwa robot. Furthermore, in ref. [Reference Kim, Yamada, Okamoto, Sennin and Kito18], potential runaway motion and physical interaction are combined to ensure safety in a collaborative assembly scenario and to maximize the efficiency of SSM.

Many approaches consider the robot encapsulated in safety regions to obtain a simple and effective representation of the robot needed for the estimation of possible collisions. In ref. [Reference Byner, Matthias and Ding19], the parts of the manipulator are approximated by spheres and a separation distance between human and robot as well as the direction of robot motion are considered to modulate robot speed when the operator approaches the robot. In ref. [Reference Safeea, Neto and Bearee20], an online collision avoidance strategy is developed, with capsules representing humans and robots to adjust on-the-fly off-line generated paths. A similar strategy is used in ref. [Reference Secil and Ozkan21], where the representation of robot and human is obtained using capsules to simplify and speed up the distance calculation. An alternative approach is presented in ref. [Reference Ferraguti, Landi, Singletary, Lin, Ames, Secchi and Bonfè22], where the authors consider control barrier functions in the design of safety-related controllers for robotic systems. In ref. [Reference Kot, Wierbica, Oščádal, Spurný and Bobovský23], the authors adopted an approach based on elastic bands for human-robot collaboration, where obstacles are represented by a set of grid-aligned voxels acquired by a camera system. Furthermore, in ref. [Reference Li, Zhang, Razmjoo and Calinon24], the robot geometry is represented as distance fields, which leverage the kinematic structure of the robot to generalize configuration-agnostic signed distance functions to arbitrary robot configurations.

In ref. [Reference Choi, Park, Roh, Lee, Mohammed, Ghasemi and Jeong25], a mixed reality framework for safety-aware human-robot collaboration is presented, employing reduced models of robot and human based on 3D offsets of links. An approach for the explicit representation of danger zones surrounding the robot is proposed in ref. [Reference Lacevic, Zanchettin and Rocco26] to avoid unintended collisions between human and robot in motion. Danger zones are defined using an almost exact representation via triangular mesh or built using an approximate convex representation. In ref. [Reference Lyu, Ruppel, Hendrich, Li, Görner and Zhang27], a strategy for collision-free collaboration between humans and robots based on intention and trajectory prediction is presented, using several capsules to cover the robot and the human arm. Furthermore, a method for avoiding collisions between humans and robots while maximizing robot up-time and path adherence is presented in ref. [Reference Pereira, Baumann, Gerstner and Althoff28], and it is compared with an approach based on static safety zones, also assessing the perceived safety of the human coworker.

A method based on dynamic safety zones is proposed in ref. [Reference Scalera, Giusti, Renato Vidoni, Matt and Riedl29] following the SSM paradigm. In that approach, human and robot are encapsulated in bounding volumes described as sphere-swept lines (SSLs) and a safety controller checks online their pairwise distance and triggers a stop trajectory if a possible collision is detected. Unlike alternative approaches, the use of SSLs allows for efficient real-time computations of the distance between robot and human operator. The approach is further developed in refs. [Reference Scalera, Vidoni and Giusti30] and [Reference Scalera, Giusti, Vidoni and Gasparetto31], where the size of the safety zones is determined by the robot stop time being minimized online, taking constraints from the robot dynamics model into account.

Differently from previous works, this paper explores and experimentally compares the effectiveness of different robot-stopping strategies based on dynamic safety zones with multiple subjects, considering human-robot collaboration fluency metrics. The methods are extensively tested on a 7-degree-of-freedom robot arm, involving 27 volunteer participants, who are asked to walk along assigned paths to periodically enter the robot workspace while the manipulator is working. The experimental results show that scaling online the dynamic safety zones is beneficial for improving fluency in collaborative robotics, showing significant statistical differences with respect to alternative approaches based on the linear search of the best stop time as in ref. [Reference Scalera, Giusti, Renato Vidoni, Matt and Riedl29], and on the implementation of the ISO/TS 15066 with static safety zones [8]. Improvements of $60.7\%$ and $53.0\%$ in the total task time are found with the optimal scaling of safety zones and with the linear search of the stop time, respectively, with respect to the static safety zones approach.

To summarize, the main scientific contributions of this work are:

  • the experimental comparison of different robot-stopping strategies based on dynamic safety zones to plan a safe stop trajectory in case of possible collision with a human operator;

  • the results of an extensive experimental campaign with multiple volunteer participants, to evaluate the performance of the compared strategies with multiple subjects interacting with the robot during bespoke experimental sessions;

  • the application of multiple human-robot collaboration fluency metrics as quantitative measures for a statistical analysis of the collaboration performance, evaluating the compared strategies not only in terms of total task time but also through additional metrics that provide indications on the quality of the collaboration between human and robot.

The paper is organized as follows. Section 2 describes the addressed problem and the compared approaches. The materials and methods are illustrated in Section 3, whereas the experimental results are analyzed in Section 4. The conclusions are in Section 5.

2. Robot-stopping approaches

2.1. Problem description

We compare three different collision avoidance strategies based on the definition of bounding volumes enclosing human and robot to ensure safety and improve fluency in collaborative robotics applications. The strategies experimentally evaluated in this work follow the SSM paradigm. We consider the SSM since it is one of the most advanced collaborative modalities among those described by the technical specification ISO/TS 15066 that does not involve contact with the operator while the robot is in motion. In the compared approaches, a supervisory controller continuously checks compliance with the SSM and brings the robot to a stop in case an unintended, potential collision is identified, without violating the kinematic and dynamic limits of the manipulator.

While it is moving, the robot can never get closer to the human than the protective separation distance $S_p$, which is calculated online as defined by the technical specification [8]: $S_p = S_h + S_r + S_s + \xi$. $S_h$ corresponds to the space traveled by the human, moving at the maximum speed ($v_h=1.6\,m/s$) considered by the ISO/TS 15066 [8], during the reaction time $t_r$ of the supervisory controller and the stop time $t_s$ of the robot; $S_r$ represents the space traveled by the robot while moving at its maximum speed $v_r$ during the reaction time $t_r$; $S_s$ accounts for the distance traveled by the robot during the stop time $t_s$. The portion of the human body that can enter the sensing field undetected, the tolerance of the sensing system, and the location uncertainty of the robot are all taken into account by $\xi$.
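For illustration, the following minimal Python sketch assembles $S_p$ from the contributions described above. The function and parameter names, as well as the numerical values in the example call (except for $v_h=1.6\,m/s$, taken from the technical specification), are placeholders and do not correspond to the authors' implementation; in particular, $S_s$ is bounded here by a simple product instead of the time integral used later in Eq. (4).

```python
# Sketch: protective separation distance S_p = S_h + S_r + S_s + xi (ISO/TS 15066).
# Names and example values are illustrative placeholders.

V_H_MAX = 1.6  # maximum human speed [m/s] considered by the ISO/TS 15066


def protective_separation_distance(t_r, t_s, v_r, v_delta, xi):
    """t_r: reaction time of the supervisory controller [s]; t_s: robot stop time [s];
    v_r: robot speed during the reaction time [m/s];
    v_delta: upper bound of the robot speed toward the human during the stop [m/s];
    xi: sum of sensing tolerances and localization uncertainties [m]."""
    s_h = V_H_MAX * (t_r + t_s)   # space covered by the human (worst case)
    s_r = v_r * t_r               # space covered by the robot during the reaction time
    s_s = v_delta * t_s           # upper bound of the space covered while stopping
    return s_h + s_r + s_s + xi


# Example with placeholder values:
print(protective_separation_distance(t_r=0.05, t_s=0.3, v_r=1.0, v_delta=0.8, xi=0.1))
```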

In the strategies compared in this work, SSLs are used to encapsulate human and robot and to check the separation distance between them. An SSL for a generic link of the robot is represented by the Minkowski sum of a sphere of radius $r_v$ (the smallest radius required to encompass the link geometry) and a line segment, defined by the end points $\boldsymbol{a}_k$ and $\boldsymbol{b}_k$ (Fig. 1).

Safety zones are then defined as SSLs that also consider the protective separation distance $S_p$, and overestimate the distance the robot must travel to stop from its current kinematic state. The radius $r_{sz}$ of a safety zone enclosing a generic link is computed as $r_{sz} = r_v + S_p$. In order to determine whether a potential collision is going to take place and the stop trajectory needs to be activated, the distance between each pair of line segments belonging to the human and the robot is compared with the radii of the bounding volumes of the human and the safety zones of the manipulator.
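A minimal sketch of the resulting supervisory check is given below: for each pair of human/robot SSLs, the segment-segment distance is compared with the sum of the human bounding-volume radius and the safety-zone radius $r_{sz} = r_v + S_p$. The helper `segment_distance` is assumed to be available (a possible implementation is sketched in Section 3.2); all names are illustrative, not the authors' code.

```python
# Sketch: SSM check between human bounding volumes and robot safety zones.
# segment_distance(a0, a1, b0, b1) returns the minimum distance between two
# line segments (see the sketch in Section 3.2); all names are illustrative.

def stop_required(human_ssls, robot_ssls, s_p):
    """human_ssls: list of (a, b, r) tuples enclosing the human body parts.
    robot_ssls: list of (a, b, r_v) tuples enclosing the robot links.
    s_p: protective separation distance [m]."""
    for ha, hb, hr in human_ssls:
        for ra, rb, rv in robot_ssls:
            r_sz = rv + s_p  # radius of the (dynamic) safety zone of the link
            if segment_distance(ha, hb, ra, rb) <= hr + r_sz:
                return True  # potential collision: trigger the stop trajectory
    return False
```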

Figure 1. An SSL and the corresponding safety zone encapsulate the k-th part of the manipulator (light blue), and a bounding volume encloses one body part (red). The minimum-distance line segment is illustrated as a dashed line (white).

2.2. Compared approaches

In this paper, we analyze three collision avoidance strategies, proposed by the same authors and characterized by different approaches for the calculation of the robot stop time:

  • Approach 1: optimal scaling of dynamic safety zones [Reference Scalera, Vidoni and Giusti30, Reference Scalera, Giusti, Vidoni and Gasparetto31];

  • Approach 2: scaling of dynamic safety zones with linear search of the stop time [Reference Scalera, Giusti, Renato Vidoni, Matt and Riedl29];

  • Approach 3: static safety zones based on the ISO/TS 15066 [8].

All three approaches are compliant with the technical specification, with no distinction made between humans in the collaborative workspace with or without mobility issues or prior experience. However, these strategies affect the fluency of the collaboration differently, which directly reflects on the productivity and cycle time of the robotic system [Reference Scalera, Giusti, Vidoni and Gasparetto31, Reference Hoffman32]. The three compared approaches are briefly recalled in the following.

2.2.1. Approach 1: optimal scaling of dynamic safety zones

Approach 1 corresponds to scaling online the dynamic safety zones, as proposed in refs. [Reference Scalera, Vidoni and Giusti30, Reference Scalera, Giusti, Vidoni and Gasparetto31]. The approach is centered on minimizing online the time $t_s$ that the robot requires to safely halt, starting from its current kinematic state. Once $t_s$ is defined, a potential stop trajectory $\boldsymbol{q}_s$ is planned as a 5th-order polynomial in the joint space of the robot, from its kinematic state at $t_0+t_r$ (i.e., initial position $\boldsymbol{q}_{s,i}$, velocity $\dot{\boldsymbol{q}}_{s,i}$, and acceleration $\ddot{\boldsymbol{q}}_{s,i}$) to the final joint state at $t_0+t_r+t_s$, defined by $\boldsymbol{q}_{s,f}=\boldsymbol{q}_{s,i}$, $\dot{\boldsymbol{q}}_{s,f}=0$, and $\ddot{\boldsymbol{q}}_{s,f}=0$, where $t_0$ is the current time.
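A minimal NumPy sketch of such a quintic stop trajectory, planned under the boundary conditions stated above, is reported below; the function names are illustrative and not part of the authors' implementation.

```python
import numpy as np


def quintic_stop_trajectory(q_i, qd_i, qdd_i, t_s):
    """Coefficients of a 5th-order polynomial stop trajectory, one column per joint.
    Boundary conditions as in the text: initial state (q_i, qd_i, qdd_i) and final
    state (q_i, 0, 0) reached after the stop time t_s."""
    q_i, qd_i, qdd_i = map(np.atleast_1d, (q_i, qd_i, qdd_i))
    T = float(t_s)
    A = np.array([
        [1, 0, 0,    0,       0,        0],
        [0, 1, 0,    0,       0,        0],
        [0, 0, 2,    0,       0,        0],
        [1, T, T**2, T**3,    T**4,     T**5],
        [0, 1, 2*T,  3*T**2,  4*T**3,   5*T**4],
        [0, 0, 2,    6*T,     12*T**2,  20*T**3],
    ])
    B = np.vstack([q_i, qd_i, qdd_i, q_i, np.zeros_like(q_i), np.zeros_like(q_i)])
    return np.linalg.solve(A, B)


def evaluate(coeffs, t):
    """Position, velocity, and acceleration of the polynomial at time t."""
    pos = np.array([t**k for k in range(6)]) @ coeffs
    vel = np.array([0, 1, 2*t, 3*t**2, 4*t**3, 5*t**4]) @ coeffs
    acc = np.array([0, 0, 2, 6*t, 12*t**2, 20*t**3]) @ coeffs
    return pos, vel, acc
```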

The optimization problem that computes online the stop time $t_s$ is formulated as follows:

(1) \begin{equation} \underset{t_s}{\text{min}} \quad w_0 \, t_s + w_1 \, \vert t_s - t_{s,prev}\vert \quad \end{equation}

subject to

(2) \begin{gather} q_{i,min}\leq q_{s,i} \leq q_{i,max} \nonumber \\ \vert \dot{q}_{s,i} \vert \leq \dot{q}_{i,max} \nonumber \\ \vert \ddot{q}_{s,i} \vert \leq \ddot{q}_{i,max} \nonumber \\ \vert \dddot{q}_{s,i} \vert \leq \dddot{q}_{i,max} \nonumber \\ \vert{\tau }_i(\boldsymbol{q}_s(t),\dot{\boldsymbol{q}}_s(t),\ddot{\boldsymbol{q}}_s(t))\vert \leq{\tau }_{i,max} \nonumber \\ \vert{\dot{\tau }}_i(\boldsymbol{q}_s(t),\dot{\boldsymbol{q}}_s(t),\ddot{\boldsymbol{q}}_s(t))\vert \leq \dot{\tau }_{i,max} \nonumber \\ \quad t\in [t_0+t_r, \, t_0+t_r+t_s], \quad i=1,\ldots,N \end{gather}

$w_0$ and $w_1$ are positive coefficients, and $t_{s,prev}$ is the robot-stopping time obtained at the previous iteration. Since the size of the radii of the safety zones depends on the stop time, the term $t_{s,prev}$ is added to keep the size of the safety zones smooth over time and prevent substantial differences between subsequent iterations. $q_{i,min}$ and $q_{i,max}$ are the position limits at the i-th joint, whereas $\dot{q}_{i,max}$ , $\ddot{q}_{i,max}$ , $\dddot{q}_{i,max}$ , ${\tau }_{i,max}$ , and $\dot{\tau }_{i,max}$ are the limits of velocity, acceleration, jerk, torque, and torque rate at the same joint, respectively. The stop trajectory must indeed respect all the kinematic and dynamic limits of the manipulator to be feasible for the robot controller. Finally, $N$ represents the number of DOFs of the manipulator.
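The following CasADi sketch illustrates how the stop-time minimization of Eqs. (1)-(2) could be set up; it is not the authors' implementation. Only the velocity, acceleration, and jerk constraints are enforced at a fixed number of samples of the quintic stop trajectory; the position, torque, and torque-rate constraints of Eq. (2) would be added analogously, the latter two from a symbolic dynamics model (e.g., generated with urdf2casadi). The absolute value in the cost is non-smooth; a smooth reformulation (e.g., via a slack variable) may be preferable for IPOPT.

```python
import casadi as ca
import numpy as np


def build_stop_time_nlp(n_joints, qd_max, qdd_max, qddd_max, w0=1.0, w1=0.1, n_samples=10):
    """Sketch of the stop-time minimization (kinematic constraints only)."""
    qd_max, qdd_max, qddd_max = ca.DM(qd_max), ca.DM(qdd_max), ca.DM(qddd_max)
    t_s = ca.SX.sym("t_s")                     # decision variable
    q_i = ca.SX.sym("q_i", n_joints)           # state at t0 + t_r (parameters)
    qd_i = ca.SX.sym("qd_i", n_joints)
    qdd_i = ca.SX.sym("qdd_i", n_joints)
    t_prev = ca.SX.sym("t_prev")

    # Closed-form quintic coefficients for q(t_s) = q_i, qd(t_s) = 0, qdd(t_s) = 0.
    # a0 = q_i is not needed for the velocity/acceleration/jerk constraints.
    a1, a2 = qd_i, qdd_i / 2
    a3 = -(3 * qdd_i * t_s**2 + 12 * qd_i * t_s) / (2 * t_s**3)
    a4 = (3 * qdd_i * t_s**2 + 16 * qd_i * t_s) / (2 * t_s**4)
    a5 = -(qdd_i * t_s**2 + 6 * qd_i * t_s) / (2 * t_s**5)

    g = []
    for s in np.linspace(0.0, 1.0, n_samples):
        t = s * t_s
        qd = a1 + 2*a2*t + 3*a3*t**2 + 4*a4*t**3 + 5*a5*t**4
        qdd = 2*a2 + 6*a3*t + 12*a4*t**2 + 20*a5*t**3
        qddd = 6*a3 + 24*a4*t + 60*a5*t**2
        g += [qd / qd_max, qdd / qdd_max, qddd / qddd_max]   # must lie in [-1, 1]

    cost = w0 * t_s + w1 * ca.fabs(t_s - t_prev)
    nlp = {"x": t_s, "p": ca.vertcat(q_i, qd_i, qdd_i, t_prev),
           "f": cost, "g": ca.vertcat(*g)}
    return ca.nlpsol("stop_time", "ipopt", nlp)
```

At each control cycle, the returned solver would be called with the current joint state and the previous stop time as parameters, bounds `lbg=-1` and `ubg=1` on the normalized constraints, and box bounds on $t_s$ (e.g., up to the worst-case stop time).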

To verify the torque constraints, the torques $\boldsymbol{\tau }$ during the stop trajectory $\boldsymbol{q}_s$ are calculated as follows:

(3) \begin{equation} \boldsymbol{\tau }=\boldsymbol{M}(\boldsymbol{q}_s)\ddot{\boldsymbol{q}}_s+\boldsymbol{C}(\boldsymbol{q}_s,\dot{\boldsymbol{q}}_s)\dot{\boldsymbol{q}}_s+\boldsymbol{F}_v \dot{\boldsymbol{q}}_s + \boldsymbol{f}_c \, \text{sign}(\dot{\boldsymbol{q}}_s)+\boldsymbol{g}(\boldsymbol{q}_s) \end{equation}

where $\boldsymbol{M}(\boldsymbol{q}_s)$ is the mass matrix of the manipulator, $\boldsymbol{C}(\boldsymbol{q}_s,\dot{\boldsymbol{q}}_s)$ accounts for the Coriolis and centrifugal terms, whereas the viscous friction, Coulomb friction, and gravity effects are modeled by the terms $\boldsymbol{F}_v \dot{\boldsymbol{q}}_s$ , $\boldsymbol{f}_c \, \text{sign}(\dot{\boldsymbol{q}}_s)$ , and $\boldsymbol{g}(\boldsymbol{q}_s)$ , respectively. The torque rate $\boldsymbol{\dot{\tau }}$ is computed as the incremental ratio of the joint torques over time. In this work, the dynamic model parameters of the robot are considered to be known. However, in case of imperfect knowledge of the robot dynamics, an approach based on interval arithmetic can be adopted to ensure compliance with the joint torque limits [Reference Giusti and Nainer33, Reference Scalera, Nainer, Giusti and Gasparetto34].
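A minimal NumPy sketch of how Eq. (3) and the torque rate could be evaluated along a sampled stop trajectory is reported below. The `model` object and its attributes (mass matrix, Coriolis matrix, friction coefficients, gravity vector) are placeholders for the identified dynamic model terms of ref. [Reference Gaz, Cognetti, Oliva, Giordano and De Luca38]; the code is illustrative, not the authors' implementation.

```python
import numpy as np


def stop_trajectory_torques(q_s, qd_s, qdd_s, dt, model):
    """Inverse dynamics of Eq. (3) along a sampled stop trajectory.
    q_s, qd_s, qdd_s: sequences of joint positions, velocities, accelerations;
    dt: sampling step [s]; model: placeholder exposing M(q), C(q, qd), Fv, fc, g(q)."""
    tau = []
    for q, qd, qdd in zip(q_s, qd_s, qdd_s):
        tau_k = (model.M(q) @ qdd              # inertial torques
                 + model.C(q, qd) @ qd          # Coriolis and centrifugal torques
                 + model.Fv @ qd                # viscous friction
                 + model.fc * np.sign(qd)       # Coulomb friction
                 + model.g(q))                  # gravity torques
        tau.append(tau_k)
    tau = np.asarray(tau)
    tau_rate = np.diff(tau, axis=0) / dt        # incremental ratio over time
    return tau, tau_rate
```

Feasibility of a candidate stop trajectory can then be checked by comparing `tau` and `tau_rate` element-wise with the corresponding joint limits.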

Once $t_s$ and the corresponding potential stop trajectory are defined, the safety separation distance $S_p$ is computed. The contributions needed for the calculation of $S_p$ in the three approaches compared in this work are listed in Table I. $S_h$ considers the worst case, in which the human travels toward the robot at the maximum speed $v_h$ considered by the ISO/TS 15066. $S_r$ is computed as the product of the maximum speed $v_{s_k}$ of each robot part at time $t_0+t_r$ and the reaction time $t_r$. Finally, $S_s$ is defined as the time integral of the maximum linear speed $v_{\delta _k}$, projected onto the shortest-distance line segment with respect to the human, as:

(4) \begin{equation} v_{\delta _k}(t)= \text{max}(\vert \dot{\boldsymbol{a}}_{e_k}(t) \cdot \boldsymbol{\delta }(t)\vert,\vert \dot{\boldsymbol{b}}_{e_k}(t) \cdot \boldsymbol{\delta }(t)\vert ) \end{equation}

where $\dot{\boldsymbol{a}}_{e_k}$ and $\dot{\boldsymbol{b}}_{e_k}$ represent the velocities of the spherical end-caps that encapsulate the k-th part of the manipulator. Furthermore, $\boldsymbol{\delta }(t)$ is the minimum-distance direction between human and manipulator. Please refer to [Reference Scalera, Vidoni and Giusti30, Reference Scalera, Giusti, Vidoni and Gasparetto31] for a complete description of the approach.
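A possible NumPy implementation of Eq. (4) is sketched below (the function name is illustrative):

```python
import numpy as np


def projected_link_speed(a_dot, b_dot, delta):
    """Eq. (4): maximum speed of the end-caps of the k-th link projected onto the
    minimum-distance direction delta between human and manipulator."""
    delta = np.asarray(delta, dtype=float)
    delta = delta / np.linalg.norm(delta)       # ensure a unit direction vector
    return max(abs(np.dot(a_dot, delta)), abs(np.dot(b_dot, delta)))
```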

Table I. Contributions of $S_p$ in each approach.

2.2.2. Approach 2: scaling of dynamic safety zones with linear search of the stop time

Approach 2 differs from Approach 1 in that it does not make use of an online optimization for the computation of the stop time. Instead, the stop trajectory is defined by a 5th-order polynomial with a $t_s$ selected among a set of $n$ predefined tentative stop times $\boldsymbol{t}_s=[t_{s,1},\ldots,t_{s,n}]$, with $t_{s,j}\lt t_{s,j+1}$ for $j=1,\ldots,n-1$, and $t_{s,n}\leq t_{s,wc}$, where $t_{s,wc}$ is the worst-case stop time. The selected stop trajectory is the first in the set that meets the kinematic and dynamic constraints of the manipulator. As can be seen from Table I, in Approach 2, no projection of the link velocity on the shortest-distance vector between human and robot is performed in the computation of $S_s$. Please refer to [Reference Scalera, Giusti, Renato Vidoni, Matt and Riedl29] for a detailed description of Approach 2.

Differently from Approach 1, the scaling of dynamic safety zones with linear search of the stop time does not account for the smoothness of the radii of the safety zones over time. This can negatively affect the collaboration and does not exploit the entire set of kinematic and dynamic constraints of the robot during a safety stop. However, by considering a just-in-time strategy for the definition of the stop time, Approach 2 can be used when the available computational resources do not allow solving the optimization problem in Eq. (1) with (2) online within the available reaction time.
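A minimal sketch of this just-in-time selection is given below; `plan_quintic` and `satisfies_limits` are placeholder callables (e.g., the quintic planner sketched in Section 2.2.1 and a constraint check based on Eq. (3)), and the code is illustrative rather than the authors' implementation.

```python
def select_stop_time(tentative_times, plan_quintic, satisfies_limits):
    """Approach 2 (sketch): linear search over predefined tentative stop times.
    tentative_times: increasing list [t_s1, ..., t_sn] with t_sn <= t_s_wc."""
    for t_s in tentative_times:
        trajectory = plan_quintic(t_s)
        if satisfies_limits(trajectory):
            return t_s, trajectory              # first feasible (shortest) stop time
    # The largest tentative time is bounded by the worst-case stop time and is
    # assumed feasible by construction.
    t_s = tentative_times[-1]
    return t_s, plan_quintic(t_s)
```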

2.2.3. Approach 3: static safety zones

Approach 3 corresponds to the most basic implementation of the SSM according to the ISO/TS 15066 [8], which leads to static safety zones. In this case, the value of the stop time is constant and equal to $t_{s,wc}$. Therefore, the radii of the safety zones of the robot do not change over time and do not consider the kinematics and dynamics of the manipulator along its nominal trajectory.

Figure 2. Experimental setup.

3. Materials and methods

3.1. Robotic system

The experiments are performed using a Franka Emika Panda arm with 7 DOFs (Fig. 2). The robot is controlled in the Robot Operating System (ROS) Melodic Morenia with Python by means of a computer running Ubuntu 18.04 LTS Bionic Beaver. The workstation is equipped with an Intel Core i5-10600k, 32 GB of RAM, and an Intel UHD Graphics 630.

The interface with the controller of the robot is managed by libfranka, the client-side C++ implementation of the Franka Control Interface, which executes commands in real time to control the arm, reads its status at $1\,kHz$, and accesses the model library to calculate the desired kinematics and dynamics model terms. The rate of the supervisory controller is set equal to $20\,Hz$ (where the bottleneck is given by the human-pose tracking system described in Section 3.2). Furthermore, the interface with ROS is established by means of franka_ros. MoveIt is used as the path planner for the robot, and the Robotics Toolbox for Python [Reference Corke and Haviland35] is adopted for the kinematic description of the manipulator using a rigid body tree representation. The joint position controller is used to steer the robot during the experimental tests.

The minimization of the stop time in compliance with the kinematic and dynamic constraints of the robot (Eq. (1) with (2)) is solved online using the IPOPT solver through the open-source nonlinear optimization tool CasADi [Reference Andersson, Gillis, Horn, Rawlings and Diehl36] and the urdf2casadi library [Reference Johannessen, Arbo and Gravdahl37], which provides CasADi with the robot dynamics function written in symbolic expressions, based on the URDF model of the manipulator. The dynamics model terms of the robot used in the optimization problem are obtained from the model experimentally tested in [Reference Gaz, Cognetti, Oliva, Giordano and De Luca38]. The joint space limits of the manipulator are reported in Table II.

Table II. Joint space limits of the Franka Emika robot from [40].

The proposed setup with the Franka Emika arm can be representative of a basic collaborative robotics application since the robot is lightweight, provides joint torque measurements, and has a payload/weight ratio compatible with similar manipulators of the same size [Reference Haddadin, Parusel, Johannsmeier, Golz, Gabl, Walch, Sabaghian, Jähne, Hausperger and Haddadin39].

3.2. Perception system

An Intel RealSense D435 depth camera is adopted to track the position of the human during the experiments. To identify and track online the operator present in the field of view of the Intel RealSense camera, we use the CubeMOS Skeleton Tracking SDK provided by Intel RealSense. For each identified human, 18 body joints are tracked and recorded. However, in the experimental tests presented in this work, only the joints needed for the reconstruction of the arms, head, and chest are kept. In this way, the human can be encapsulated in six bounding volumes, whose radii are chosen considering the anthropometric data of British adults in ref. [Reference Pheasant and Haslegrave41].

The body joints are expressed in pixel coordinates and connected to form a stylized 2D representation of the human skeleton, as can be seen in Fig. 2a. The skeleton tracking algorithm implements a conversion from pixel to metric coordinates to directly acquire the body joints in 3D Cartesian coordinates in the reference frame of the camera. To compute the distance between human and robot, the human body joints first need to be transformed into the robot base reference frame. For this reason, we first compute the transformation matrix from the camera reference frame to the reference frame of an ArUco marker placed in close proximity to the robot base using the open-source C++ ArUco library [Reference Garrido-Jurado, Muñoz-Salinas, Madrid-Cuevas and Medina-Carnicer42, Reference Romero-Ramirez, Muñoz-Salinas and Medina-Carnicer43]. The ArUco marker calibration provides real-time marker-based 3D pose estimation using the computer vision libraries of OpenCV.

To obtain the position of the marker with respect to the robot base reference frame, a 3D-printed calibration tool is attached to the end-effector, and the manipulator is manually moved to touch the marker's corners with the calibration tip, as shown in Fig. 2b. In this manner, by considering the robot kinematics and the height of the calibration tip, it is possible to retrieve the transformation matrix from the marker reference frame to the robot base. The camera-to-robot transformation matrix is then obtained by multiplying the camera-to-marker transformation matrix with the marker-to-robot one. Finally, by means of the camera-to-robot transformation matrix, the coordinates of the human body joints can be transformed into the robot base reference frame.
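A minimal NumPy sketch of this chain of transformations is given below (names and the frame-naming convention are illustrative):

```python
import numpy as np


def to_robot_frame(p_cam, T_robot_marker, T_marker_cam):
    """Map a body joint from camera to robot base coordinates through the marker.
    Convention: T_A_B maps homogeneous coordinates expressed in frame B into frame A.
    T_robot_marker comes from the touch-probe calibration of the marker corners,
    T_marker_cam from the ArUco pose estimate (both 4x4 homogeneous matrices)."""
    T_robot_cam = T_robot_marker @ T_marker_cam
    p_h = np.append(np.asarray(p_cam, dtype=float), 1.0)   # homogeneous coordinates
    return (T_robot_cam @ p_h)[:3]
```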

The shortest distance between robot and human is calculated between each pair of line segments that identify the bounding volumes enclosing the manipulator and its human counterpart. The minimum-distance algorithm considers two line segments [Reference Ericson44] and makes the computation of the shortest distance between human and manipulator efficient and suitable for online applications.
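A possible implementation of the segment-segment distance, following the closest-point routine described by Ericson [Reference Ericson44], is sketched below in NumPy (illustrative, not the authors' code):

```python
import numpy as np


def segment_distance(p1, q1, p2, q2, eps=1e-9):
    """Minimum distance between segments [p1, q1] and [p2, q2] in 3D."""
    p1, q1, p2, q2 = (np.asarray(v, dtype=float) for v in (p1, q1, p2, q2))
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    if a <= eps and e <= eps:                    # both segments degenerate to points
        return float(np.linalg.norm(p1 - p2))
    if a <= eps:                                 # first segment is a point
        s, t = 0.0, float(np.clip(f / e, 0.0, 1.0))
    else:
        c = d1 @ r
        if e <= eps:                             # second segment is a point
            t, s = 0.0, float(np.clip(-c / a, 0.0, 1.0))
        else:
            b = d1 @ d2
            denom = a * e - b * b
            s = float(np.clip((b * f - c * e) / denom, 0.0, 1.0)) if denom > eps else 0.0
            t = (b * s + f) / e
            if t < 0.0:
                t, s = 0.0, float(np.clip(-c / a, 0.0, 1.0))
            elif t > 1.0:
                t, s = 1.0, float(np.clip((b - c) / a, 0.0, 1.0))
    return float(np.linalg.norm((p1 + s * d1) - (p2 + t * d2)))
```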

3.3. Test protocol

The experimental tests aim at evaluating and comparing the collision avoidance strategies described in Section 2 in a human-robot collaboration scenario tested by different human subjects. The collaborative scenario implemented for the comparison includes the robot arm performing a task defined as a point-to-point motion between four random points in the joint space of the manipulator. The nominal trajectories used for the experiments are defined as point-to-point motions in joint space and planned with 5th-order polynomials with zero initial/final velocity and acceleration. The points are specified in Tables III and IV for the experiments in Figs. 3a and 3b, respectively. The robot takes a nominal time of $10\,s$ to reach the points and move back to the initial position. This task is repeated three times, for a total time of $30\,s$, without taking into account potential safety pauses and restarts.

Table III. Specifications of the point-to-point motions for Test A (Fig. 3a).

Table IV. Specifications of the point-to-point motions for Test B (Fig. 3b).

Figure 3. Desired paths for the robot.

Figure 4. Paths for the human operator. Measures are in millimeters.

During the operation of the robot, the human walks on a path indicated by adhesive tape on the floor of the laboratory (Fig. 4), and performs cyclical intrusions into its workspace so as to induce safety stops of the manipulator. The relative motion of the human with respect to the robot is not strictly determined, since each participant in the test exhibits slight variations in walking style, movement patterns, and physical dimensions, contributing to a diverse range of characteristics that induce the robot to stop and restart at different time instants. The human keeps walking on the assigned path at constant speed until the robot has completed its task. However, the supervisory controller is not adaptive to the instantaneous human speed. Indeed, the space $S_h$ considered by the supervisory controller (the estimated space that can be covered by the human during the stop time) is computed considering the worst-case scenario, i.e., the maximum speed ($v_h=1.6\,m/s$) mentioned by the ISO/TS 15066. All participants in the experiments performed two tests, defined as follows:

  • Test A: the human walks back and forth on the linear path shown in blue in Fig. 4a, while the robot performs the task shown in Fig. 3a. Each subject repeats Test A three times: once for each of Approaches 1, 2, and 3.

  • Test B: the human walks around the robot table in the counterclockwise direction on the path shown in red in Fig. 4b, while the robot performs the task shown in Fig. 3b. Each subject repeats Test B two times: once for each of Approaches 1 and 2.

While Test A compares all three approaches discussed in this work, Test B is introduced to highlight the differences between Approaches 1 and 2. Indeed, the two approaches based on dynamic safety zones allow the human to move closer to the robot while it is in motion. On the other hand, Approach 3 results in large static safety zones, which do not allow the robot to move while the human is close to its workspace (e.g., when the human is walking near the table). To better appreciate the experimental tests, a video is available as supplementary material and online.

3.4. Participants

In this work, a total of 27 people participated, aged 22 to 40 years with a mean of 27.3 years and a standard deviation of 4.8 years. Eight of them had previous experience with human-robot collaborative applications. There were no mobility issues among the subjects in the experiment.

Prior to the start of the experiment, participants provided informed consent and were instructed to move at a constant walking speed along the path defined for each test, until the robot had completed its task and without being affected by the movement of the robot. Each subject performed a total of five experiments (3 for Test A, 2 for Test B), provided in random order and without knowing which of the approaches was being tested.

3.5. Data analysis

Data acquired during the experiments are processed in MATLAB to obtain quantitative metrics for the comparison of the three approaches. During the tests, robot positions and velocities are saved at $1\,kHz$ , whereas the stop time, the radii of the robot safety zones, and the distance between human and robot, as well as the positions of human body joints in the 3D space, are recorded at the supervisory controller rate.

The experimental results obtained with the three approaches are first compared in terms of robot stop time and radii of the safety zones, which provide a first indication of the effectiveness of the different strategies. Furthermore, the practical benefit of the approaches is evaluated using multiple quantitative fluency metrics that better measure the quality of the human-robot shared-location teamwork. Indeed, when integrating robots into a shared environment, it is essential to assess the seamless interaction between robotic teammates and their human counterparts. Metrics for evaluating collaborative fluency were introduced in ref. [Reference Hoffman and Breazeal45], within the context of an anticipatory controller designed for shared workspace decision-making processes. Subsequent studies have put forth various objective indices to evaluate different aspects of human-robot collaboration [Reference Hoffman32, Reference Dragan, Bauman, Forlizzi and Srinivasa46].

Figure 5. Experimental results for Test A: stop time (a), radii of the safety zones (b), T-TIME (c), R-IDLE (d), C-ACT-WS (e), and R-STOPS (f).

In this work, we consider the total task time and the robot idle time, as defined in ref. [Reference Hoffman32]. Furthermore, the results are also compared in terms of concurrent activity in the robot workspace, introduced in [Reference Scalera, Giusti, Vidoni and Gasparetto31], as well as the number of robot stops. These quantitative metrics are briefly recalled in the following:

  • Total task time (T-TIME): the time taken by the robot to complete its task, considering safety stops and restarts due to the human intrusion. T-TIME is strictly linked to the productivity of the robot.

  • Robot idle time (R-IDLE): the percentage of T-TIME during which the robot has remained still during the execution of its task. It is related to the time during which the robot is stationary or waiting to be reactivated by the supervisory controller. This duration is calculated by measuring the time in which the norm of joint velocities is below $0.05\,m/s$ (see the sketch after this list). Low values of R-IDLE correspond to high performance in terms of fluency and productivity in the collaboration between human and robot.

  • Concurrent activity in the robot workspace (C-ACT-WS): a variant of the concurrent activity, i.e., the percentage of T-TIME during which human and robot are both concurrently active, regardless of whether the human is inside or outside the robot workspace [Reference Hoffman32]. C-ACT-WS calculates the percentage of time in which robot and human are both active in the robot workspace, into which the human intrudes at least partially [Reference Scalera, Giusti, Vidoni and Gasparetto31]. Therefore, C-ACT-WS evaluates the fluency of the collaboration between robot and human in close contact. This can also be seen as a measure of the reactivity of the robot in the collaborative workspace. In the experiments presented in this work, the human never stops walking and is considered active for the entire duration of the tests while being in the robot workspace.

  • Number of robot stops (R-STOPS): the count of safety stops that took place during the task, i.e., the number of times a stop trajectory has been engaged by the robot.
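The sketch below illustrates how T-TIME, R-IDLE, and R-STOPS could be computed from the logged data; the input format and names are illustrative (C-ACT-WS would additionally require the human-in-workspace flag and is omitted here), and the idle threshold follows the value stated in the R-IDLE definition above.

```python
import numpy as np


def fluency_metrics(t, vel_norm, stop_flags, idle_threshold=0.05):
    """t: time stamps [s]; vel_norm: norm of the joint velocities at each sample;
    stop_flags: boolean array, True while a safety stop is commanded."""
    t = np.asarray(t, dtype=float)
    vel_norm = np.asarray(vel_norm, dtype=float)
    stop_flags = np.asarray(stop_flags, dtype=bool)

    t_time = t[-1] - t[0]                                   # total task time (T-TIME)
    dt = np.diff(t, append=t[-1])                           # per-sample durations
    r_idle = 100.0 * dt[vel_norm < idle_threshold].sum() / t_time   # % of T-TIME at rest
    # Count rising edges of the stop command as the number of safety stops (R-STOPS).
    rising_edges = np.count_nonzero(np.diff(stop_flags.astype(int)) == 1)
    r_stops = int(rising_edges + (1 if stop_flags[0] else 0))
    return t_time, r_idle, r_stops
```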

Since we wanted to obtain a quantitative comparison, we did not consider subjective measures. To evaluate the results, a statistical analysis is conducted, which includes performing the Shapiro-Wilk test to verify normality for each set of numerical continuous variables [Reference Shapiro and Wilk47]. Statistical differences between two robot-stopping approaches on a continuous outcome that is normally distributed are evaluated using the paired-sample t-test [48]. Differences between stopping approaches for non-normally distributed continuous variables are evaluated using the Kruskal-Wallis (K-W) H-test [Reference Kruskal and Wallis49].
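The following SciPy sketch mirrors this analysis pipeline for a pair of approaches; the function is illustrative and not the authors' script.

```python
from scipy import stats


def compare_approaches(values_a, values_b, alpha=0.05):
    """Shapiro-Wilk normality check on both samples, then a paired-sample t-test
    if both are normal or a Kruskal-Wallis H-test otherwise."""
    normal = (stats.shapiro(values_a).pvalue > alpha
              and stats.shapiro(values_b).pvalue > alpha)
    if normal:
        name, result = "paired t-test", stats.ttest_rel(values_a, values_b)
    else:
        name, result = "Kruskal-Wallis H-test", stats.kruskal(values_a, values_b)
    return name, result.statistic, result.pvalue
```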

4. Experimental results

Figs. 5 and 6 summarize the experimental results for Tests A and B, respectively. In the figures, the values of robot stop time, radii of the safety zones, T-TIME, R-IDLE, C-ACT-WS, and R-STOPS across participants are reported in the form of box plots for the compared robot-stopping approaches. Furthermore, Tables V and VI report the quantitative metrics for the considered variables as mean and standard deviation across participants, together with the percentage difference between the means of pairs of approaches.

Figure 6. Experimental results for Test B: stop time (a), radii of the safety zones (b), T-TIME (c), R-IDLE (d), C-ACT-WS (e), and R-STOPS (f).

The values of stop times for Test A are normally distributed for Approach 1, but not for Approaches 2 and 3. Stop times with Approach 1 are significantly lower than with Approaches 2 and 3, with improvements of $15.0\%$ (1-2), $57.5\%$ (1-3), and $50.0\%$ (2-3). Considering Test B, stop times are found to be normally distributed. A significant difference is found between Approaches 1 and 2 in Test B, where Approach 1 performs better with faster stops of the robot ( $9.5\%$ ).

In Test A, the radii of the robot safety zones are non-normally distributed for Approach 1; only $r_{sz_1}$ and $r_{sz_2}$ are normally distributed for Approach 2; all radii are non-normally distributed for Approach 3. The radii of the safety zones are smaller with Approach 1 than with Approach 2, with Approach 1 than with Approach 3, and with Approach 2 than with Approach 3. Considering Test B, the radii of the safety zones are non-normally distributed for Approach 1, but are normally distributed for Approach 2. A significant difference is found also in Test B, where Approach 1 outperforms Approach 2. These trends can also be noticed from the box plots of Figs. 5 and 6, and the percentages reported in Tables V and VI.

Taking into account fluency metrics for collaborative robotics, the values of T-TIME for Test A exhibit a non-normal distribution. Lower total task times are found for Approach 1, followed by Approach 2. In particular, with Approach 1 the task is completed in $60.7\%$ less time than with Approach 3, and in $53.0\%$ less time than with Approach 2. Furthermore, the robot completes the operation $16.5\%$ faster with Approach 2 than with Approach 3. Better results in terms of T-TIME are also found for Approach 1 in Test B, where values are normally distributed across participants. The task was completed $17.3\%$ faster by the robot with Approach 1 than with Approach 2 in Test B.

A trend similar to T-TIME is found for R-IDLE. In Test A, the values of robot idle time are non-normally distributed across participants for Approaches 1 and 2, whereas they present a normal distribution for Approach 3. The percentage of T-TIME during which the robot has remained still during the execution of the task is significantly lower for Approach 1, followed by Approach 2, with percentage improvements of $75.9\%$ (1-3), $66.7\%$ (2-3), and $27.5\%$ (1-2). In Test B, the values of robot idle time are normally distributed across subjects, and R-IDLE is found to be $23.6\%$ lower for Approach 1 than for Approach 2.

Table V. Mean $\pm$ standard deviation for stop time, radii of the safety zones, and fluency metrics resulting from Test A. $\Delta$ % indicates the percentage difference between the means of pairs of approaches. Statistical significance is reported as not significant, p-value $\lt 0.05$, or p-value $\lt 0.01$.

Table VI. Mean $\pm$ standard deviation for stop time, radii of the safety zones, and fluency metrics resulting from Test B. $\Delta$ % indicates the percentage difference between the means of pairs of approaches. Statistical significance is reported as not significant, p-value $\lt 0.05$, or p-value $\lt 0.01$.

The values of C-ACT-WS are not normally distributed for Approaches 1 and 3, whereas they follow a normal distribution for Approach 2. The K-W test reports significant differences between Approaches 1 and 2, 1 and 3, and 2 and 3. From Fig. 5 and Table V, it can be noted that no concurrent activity in the robot workspace is possible with Approach 3. Furthermore, Approach 1 performs $18.3\%$ better than Approach 2. C-ACT-WS is significantly higher ( $40.6\%$ ) for Approach 1 than for Approach 2 also in Test B, where values are normally distributed.

Finally, the numbers of robot stops are non-normally distributed across participants in Test A. In this case, Approach 1 performs better than Approaches 2 and 3 with percentage improvements of $63.7\%$ (1-3) and $55.2\%$ (1-2), but no significant difference in the number of robot stops is found between Approaches 2 and 3. The values of R-STOPS are normally distributed in Test B, and a significant reduction ( $20.5\%$ ) of robot stops is found when using the optimal scaling of dynamic safety zones with respect to the approach based on the linear search of the stop time.

To summarize, it can be noticed that, even though all compared approaches follow the technical specification to provide safety for the human operator during the collaboration with the manipulator, the use of dynamic safety zones and the optimization of the robot stop time provides statistically significant improvements not only in terms of robot stop time and radii of the safety zones but also in terms of quantitative fluency metrics (T-TIME, R-IDLE, C-ACT-WS, and R-STOPS).

More in detail, scaling online the robot safety zones allows the robot to operate in closer vicinity to the operator than the other compared strategies (as evidenced by the C-ACT-WS), and results in a more efficient and productive collaboration with fewer safety stops of the robot. On the other hand, the approach based on static safety zones is undoubtedly effective from a safety point of view, but does not allow humans to approach the robot while it is in motion, resulting in poor performance in terms of fluency metrics. The linear search of the stop time can be considered a compromise when tools for online optimization are not available, at the price of significantly lower performance with respect to the optimal scaling of dynamic safety zones.

5. Conclusions

In this paper, we explored and experimentally compared the effectiveness of robot-stopping approaches based on speed and separation monitoring for improving fluency in collaborative robotics. In the compared approaches, a supervisory controller checks the distance between the bounding volumes enclosing human operator and robot and prevents potential collisions by determining the robot's stop time and triggering a stop trajectory if necessary. The methods have been extensively tested, involving 27 volunteer participants, who were asked to walk along assigned paths to periodically enter the robot workspace while the manipulator was working. The experimental results showed that scaling online the dynamic safety zones is beneficial for improving quantitative fluency metrics in collaborative robotics, demonstrating significant statistical differences with respect to alternative approaches based on the linear search of the best stop time and on the implementation of the ISO/TS 15066. Improvements of $60.7\%$ and $53.0\%$ in the total task time are found with the optimal scaling of safety zones and with the linear search of the stop time, respectively, with respect to the static safety zones approach.

The experimental results found in this work provide insights that could be useful for choosing the most suitable approach depending on the needs of the collaborative robotics application and the metrics that the user prioritizes. For instance, if the total task time is the most stringent requirement and ample computational resources are available, the choice could be toward Approach 1, even if this approach is more computationally demanding than the others. If the preferred metric is the C-ACT-WS, the only choice is between Approaches 1 and 2, since no concurrent activity in the robot workspace is permitted by Approach 3. Moreover, if no (or limited) computational resources are available to perform an online optimization, the forced choice would be toward Approach 2 or 3. These findings may be even more significant for a heavier robot, whose stop times are more strongly influenced by the robot dynamics.

Future developments of this work will include the exploration and experimental validation of alternative approaches for trajectory planning in human-robot collaboration, e.g., reactive strategies that do not necessarily stop the robot while still being safe. Furthermore, the proposed approach will be extended to mobile manipulators to increase fluency and productivity for collaborative applications in more challenging contexts.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/S0263574724000262.

Acknowledgments

The authors thank Stefano Macorig and Nicola Pradolin for their help in the experimental setup.

Author contributions

All authors contributed to the study’s conception and design. Material preparation, data collection, and analysis were performed by Lorenzo Scalera and Federico Lozer. The first draft of the manuscript was written by Lorenzo Scalera and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Financial support

This research has been partially developed within the Laboratory for Big Data, IoT, Cyber Security funded by Friuli Venezia Giulia region, and the Laboratory for Artificial Intelligence for Human-Robot Collaboration (AI4HRC) funded by Fondazione Friuli, and partially supported by iNEST-Interconnected NordEst Innovation Ecosystem, funded by PNRR (Mission 4.2, Investment 1.5), NextGeneration EU - Project ID: ECS 00000043. The second author acknowledges support from the National Doctorate in Robotics and Intelligent Machines of University of Genoa (Italy).

Competing interests

The authors declare no competing interests exist.

Ethical approval

None.

References

Hanna, A., Larsson, S., Götvall, P.-L. and Bengtsson, K., "Deliberative safety for industrial intelligent human–robot collaboration: Regulatory challenges and solutions for taking the next step towards Industry 4.0," Robot Com-Int Manuf 78, 102386 (2022).
Li, W., Hu, Y., Zhou, Y. and Pham, D. T., "Safe human–robot collaboration for industrial settings: A survey," J Intell Manuf, 1–27 (2023). doi: 10.1007/s10845-023-02159-4.
Huang, S., Gao, M., Liu, L., Chen, J. and Zhang, J., "Collision detection for cobots: A back-input compensation approach," IEEE/ASME Trans Mech 27(6), 4951–4962 (2022).
Seriani, S., Gallina, P., Scalera, L. and Lughi, V., "Development of n-DoF preloaded structures for impact mitigation in cobots," J Mech Robot 10(5), 051009 (2018).
Pacheco Quiñones, D., Paterna, M., De Benedictis, C., Maffiodo, D., Franco, W. and Ferraresi, C., "Contact force regulation in physical human-machine interaction based on model predictive control," Robotica 41(11), 3409–3425 (2023).
Wang, Y., Wei, L., Du, K., Liu, G., Yang, Q., Wei, Y. and Fang, Q., "An online collision-free trajectory generation algorithm for human–robot collaboration," Robot Com-Int Manuf 80, 102475 (2023).
Merckaert, K., Convens, B., Wu, C.-J., Roncone, A., Nicotra, M. M. and Vanderborght, B., "Real-time motion control of robotic manipulators for safe human–robot coexistence," Robot Com-Int Manuf 73, 102223 (2022).
ISO Standard, "ISO/TS 15066: 2016: Robots and Robotic Devices – Collaborative Robots," International Organization for Standardization, Geneva, Switzerland (2016).
Lippi, M. and Marino, A., "Human multi-robot safe interaction: A trajectory scaling approach based on safety assessment," IEEE Trans Contr Syst Tech 29(4), 1565–1580 (2020).
Yang, B., Xie, S. X., Chen, G., Ding, Z. and Wang, Z., "Dynamic speed and separation monitoring based on scene semantic information," J Intell Robot Syst 106(2), 35 (2022).
Faroni, M., Beschi, M. and Pedrocchi, N., "Safety-aware time-optimal motion planning with uncertain human state estimation," IEEE Robot Autom Lett 7(4), 12219–12226 (2022).
Grushko, S., Vysocký, A., Oščádal, P., Vocetka, M., Novák, P. and Bobovský, Z., "Improved mutual understanding for human-robot collaboration: Combining human-aware motion planning with haptic feedback devices for communicating planned trajectory," Sensors 21(11), 3673 (2021).
Lagomarsino, M., Lorenzini, M., De Momi, E. and Ajoudani, A., "Robot Trajectory Adaptation to Optimise the Trade-Off Between Human Cognitive Ergonomics and Workplace Productivity in Collaborative Tasks," In: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan (IEEE, 2022) pp. 663–669.
Liu, H., Qu, D., Xu, F., Du, Z., Jia, K., Song, J. and Liu, M., "Real-time and efficient collision avoidance planning approach for safe human-robot interaction," J Intell Robot Syst 105(4), 93 (2022).
Braglia, G., Tagliavini, M., Pini, F. and Biagiotti, L., "Online motion planning for safe human–robot cooperation using B-splines and hidden Markov models," Robotics 12(4), 118 (2023).
Liu, B., Rocco, P., Zanchettin, A. M., Zhao, F., Jiang, G. and Mei, X., "A real-time hierarchical control method for safe human–robot coexistence," Robot Com-Int Manuf 86, 102666 (2024).
Svarny, P., Tesar, M., Behrens, J. K. and Hoffmann, M., "Safe Physical HRI: Toward a Unified Treatment of Speed and Separation Monitoring Together with Power and Force Limiting," In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China (IEEE, 2019) pp. 7580–7587.
Kim, E., Yamada, Y., Okamoto, S., Sennin, M. and Kito, H., "Considerations of potential runaway motion and physical interaction for speed and separation monitoring," Robot Com-Int Manuf 67, 102034 (2021).
Byner, C., Matthias, B. and Ding, H., "Dynamic speed and separation monitoring for collaborative robot applications – concepts and performance," Robot Com-Int Manuf 58, 239–252 (2019).
Safeea, M., Neto, P. and Bearee, R., "On-line collision avoidance for collaborative robot manipulators by adjusting off-line generated paths: An industrial use case," Robot Auton Syst 119, 278–288 (2019).
Secil, S. and Ozkan, M., "Minimum distance calculation using skeletal tracking for safe human-robot interaction," Robot Com-Int Manuf 73, 102253 (2022).
Ferraguti, F., Landi, C. T., Singletary, A., Lin, H.-C., Ames, A., Secchi, C. and Bonfè, M., "Safety and efficiency in robotics: The control barrier functions approach," IEEE Robot Autom Mag 29(3), 139–151 (2022).
Kot, T., Wierbica, R., Oščádal, P., Spurný, T. and Bobovský, Z., "Using elastic bands for collision avoidance in collaborative robotics," IEEE Access 10, 106972–106987 (2022).
Li, Y., Zhang, Y., Razmjoo, A. and Calinon, S., "Learning robot geometry as distance fields: Applications to whole-body manipulation," arXiv preprint arXiv:2307.00533 (2023).
Choi, S. H., Park, K.-B., Roh, D. H., Lee, J. Y., Mohammed, M., Ghasemi, Y. and Jeong, H., "An integrated mixed reality system for safety-aware human-robot collaboration using deep learning and digital twin generation," Robot Com-Int Manuf 73, 102258 (2022).
Lacevic, B., Zanchettin, A. M. and Rocco, P., "Safe human-robot collaboration via collision checking and explicit representation of danger zones," IEEE T Autom Sci Eng 20(2), 846–861 (2022).
Lyu, J., Ruppel, P., Hendrich, N., Li, S., Görner, M. and Zhang, J., "Efficient and collision-free human-robot collaboration based on intention and trajectory prediction," IEEE Trans Cogn Develop Syst 15(4), 1853–1863 (2022).
Pereira, A., Baumann, M., Gerstner, J. and Althoff, M., "Improving efficiency of human-robot coexistence while guaranteeing safety: Theory and user study," IEEE T Autom Sci Eng 20(4), 2706–2719 (2023).
Scalera, L., Giusti, A., Renato Vidoni, V. D. C., Matt, D. and Riedl, M., "Application of dynamically scaled safety zones based on the ISO/TS 15066: 2016 for collaborative robotics," Int J Mech Cont 21(1), 41–49 (2020).
Scalera, L., Vidoni, R. and Giusti, A., "Optimal Scaling of Dynamic Safety Zones for Collaborative Robotics," In: 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi'an, China (IEEE, 2021) pp. 3822–3828.
Scalera, L., Giusti, A., Vidoni, R. and Gasparetto, A., "Enhancing fluency and productivity in human-robot collaboration through online scaling of dynamic safety zones," Int J Adv Manuf Technol 121(9-10), 6783–6798 (2022).
Hoffman, G., "Evaluating fluency in human–robot collaboration," IEEE Trans Hum-Mach Syst 49(3), 209–218 (2019).
Giusti, A. and Nainer, C., "Inverse Uncertain-Dynamics of Robot Manipulators Using Interval Arithmetic," In: The International Conference of IFToMM ITALY, Cham (Springer, 2022) pp. 661–668.
Scalera, L., Nainer, C., Giusti, A. and Gasparetto, A., "Robust safety zones for manipulators with uncertain dynamics in collaborative robotics," Int J Comp Integ Manuf, 1–13 (2023).
Corke, P. and Haviland, J., "Not your grandmother's toolbox – the Robotics Toolbox reinvented for Python," In: 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi'an, China (IEEE, 2021) pp. 11357–11363.
Andersson, J. A. E., Gillis, J., Horn, G., Rawlings, J. B. and Diehl, M., "CasADi: A software framework for nonlinear optimization and optimal control," Math Prog Comp 11(1), 1–36 (2019).
Johannessen, L. M. G., Arbo, M. H. and Gravdahl, J. T., "Robot Dynamics with URDF & CasADi," In: 2019 7th International Conference on Control, Mechatronics and Automation (ICCMA), Delft, Netherlands (IEEE, 2019) pp. 1–6.
Gaz, C., Cognetti, M., Oliva, A., Giordano, P. R. and De Luca, A., "Dynamic identification of the Franka Emika Panda robot with retrieval of feasible parameters using penalty-based optimization," IEEE Robot Autom Lett 4(4), 4147–4154 (2019).
Haddadin, S., Parusel, S., Johannsmeier, L., Golz, S., Gabl, S., Walch, F., Sabaghian, M., Jähne, C., Hausperger, L. and Haddadin, S., "The Franka Emika robot: A reference platform for robotics research and education," IEEE Robot Autom Mag 29(2), 46–64 (2022).
Franka Emika GmbH, "Robot and interface specifications" (2017). [Online; accessed 16-February-2023]. Available at: https://frankaemika.github.io/docs/control_parameters.html.
Pheasant, S. and Haslegrave, C. M., Bodyspace: Anthropometry, Ergonomics and the Design of Work (2018).
Garrido-Jurado, S., Muñoz-Salinas, R., Madrid-Cuevas, F. J. and Medina-Carnicer, R., "Generation of fiducial marker dictionaries using mixed integer linear programming," Pattern Recogn 51, 481–491 (2016).
Romero-Ramirez, F. J., Muñoz-Salinas, R. and Medina-Carnicer, R., "Speeded up detection of squared fiducial markers," Image Vision Comput 76, 38–47 (2018).
Ericson, C., Real-Time Collision Detection (CRC Press, Boca Raton, 2004).
Hoffman, G. and Breazeal, C., "Cost-based anticipatory action selection for human–robot fluency," IEEE Trans Robot 23(5), 952–961 (2007).
Dragan, A. D., Bauman, S., Forlizzi, J. and Srinivasa, S. S., "Effects of Robot Motion on Human-Robot Collaboration," In: Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA (Association for Computing Machinery, 2015) pp. 51–58.
Shapiro, S. S. and Wilk, M. B., "An analysis of variance test for normality (complete samples)," Biometrika 52(3-4), 591–611 (1965).
Student, "The probable error of a mean," Biometrika 6(1), 1–25 (1908).
Kruskal, W. H. and Wallis, W. A., "Use of ranks in one-criterion variance analysis," J Am Stat Assoc 47(260), 583–621 (1952).