
Vision-based adaptive LT sliding mode admittance control for collaborative robots with actuator saturation

Published online by Cambridge University Press:  09 May 2024

Cong Huang
Affiliation:
Institute of Smart City and Intelligent Transportation, Southwest Jiaotong University, Chengdu, Sichuan, China; School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
Minglei Zhu*
Affiliation:
Institute of Smart City and Intelligent Transportation, Southwest Jiaotong University, Chengdu, Sichuan, China; School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
Shijie Song
Affiliation:
Institute of Smart City and Intelligent Transportation, Southwest Jiaotong University, Chengdu, Sichuan, China; School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
Yuyang Zhao
Affiliation:
School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
Jinmao Jiang
Affiliation:
School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
Corresponding author: Minglei Zhu; Email: Minglei.zhu@uestc.edu.cn

Abstract

In this paper, we propose a novel vision-based adaptive leakage-type (LT) sliding mode admittance control for actuator-constrained collaborative robots to realize the synchronous control of precise path following and compliant interaction force. Firstly, we develop a vision-admittance-based model to couple visual feedback and force sensing in the image feature space, so that a reference image feature trajectory can be obtained from the contact force command and the predefined trajectory. Secondly, considering the system uncertainty, external disturbance, and torque constraints of collaborative robots in reality, we propose an adaptive sliding mode controller in the image feature space to perform precise trajectory tracking. This controller employs a leakage-type (LT) adaptive control law to reduce the side effects of system uncertainties without knowing their upper bound. Moreover, an auxiliary dynamic is incorporated in this controller to overcome the joint torque constraints. Finally, we prove the convergence of the tracking error with a Lyapunov stability analysis and conduct various semi-physical simulations against the conventional adaptive sliding mode controller and the parallel vision/force controller to demonstrate the efficacy of the proposed controller. The simulation results show that, compared with the controllers mentioned above, the path following accuracy and interaction force control precision of the proposed controller improve by 50%, with faster convergence.

Type
Research Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press

1. Introduction

Recently, collaborative robots have drawn much interest in the robotic community due to their ability to perform tasks that involve interaction with the environment or with a human operator, such as picking and placing objects, deburring, polishing, spraying, and high-precision positioning assembly [Reference Nabat, de la O Rodriguez, Company, Krut and Pierrot1–Reference Wu, Gao, Zhang and Wang4]. These tasks demand that the collaborative robot achieve precise position control and compliant contact force control simultaneously. Precise control of the contact force ensures the safety of both the robot manipulator and the interaction environment.

The control of a collaborative robot requires precise position information and a detailed geometric description of the surroundings. Additionally, a force sensor is essential to avoid large contact forces along the constrained degrees of freedom (DOF) [Reference Oyman, Korkut, Yilmaz, Bayraktaroglu and Arslan5]. From the literature, we see that compliant force control methods fall into two categories: direct force control, which applies the force sensing directly in the closed loop [Reference Raibert and Craig6–Reference Zhu, Huang, Qiu, Zheng and Gong8], and indirect force control, which imposes a mass-spring-damper behaviour on the robot or employs a reference generator to realize the control [Reference Mason9–Reference Huang, Lee and Du11].

However, with a classical model-based controller, accurate geometric knowledge of the interaction environment is indispensable in advance. To reduce the need to gather information about the interaction environment, external sensors such as cameras and laser radars are used to observe the surroundings. The rapid development of image processing greatly promotes the application of visual servoing in the robot control field [Reference Qi, Tang and Zhang12]. Nevertheless, from the literature [Reference Zhu, Huang, Qiu, Zheng and Gong8, Reference De Schutter and Baeten13, Reference Bellakehal, Andreff, Mezouar and Tadjine14], we see that these studies only replace the motion control loop with visual servoing to track the trajectory and force at the same time. Doing so requires a trade-off mechanism to combine the two control signals, implying that neither force nor vision can fully satisfy its ideal demand, resulting in local minima and reduced control accuracy. Moreover, the simple composition of two different physical quantities, vision and force sensing, is always difficult because of differing data rates, time delays, etc. Additionally, attempts have been made to design vision-based indirect force controllers [Reference Zhou, Li, Yue, Gui, Sun, Jiang and Liu15–Reference Odira, Andreff, Petiet and Byiringiro17]. The vision-based impedance control law proposed in [Reference Lippiello, Fontanelli and Ruggiero16], even though it allows the physical interaction of a UAV with a dual-arm manipulator, still lacks the general capability of combining visual and force sensing simultaneously. Furthermore, compared with other sensors, the camera has a low observation frequency, which requires large gains to provide effective tracking performance and eliminate external interference. This factor constrains the development of vision-based impedance control. Therefore, to develop a general framework combining vision and force, we propose a novel vision-admittance model to realize compliant control in the visual task space instead of the Cartesian or joint space. In this case, vision and force are coupled in the image feature space, and a reference image feature trajectory can be obtained online based on the desired trajectory and the force command. The collaborative robot can then be driven to follow this reference image feature trajectory to achieve precise path following and keep the desired interaction force at the same time. In addition, local minimum convergence can be avoided, and the issue of inconsistencies arising at the level of actuation can be solved.

In practical applications, several problems still need to be considered, such as errors in the dynamic robot model, uncertainties from external disturbances, and actuator joint torque constraints arising from physical limitations [Reference Hu18, Reference He, Dong and Sun19]. All these problems may cause poor tracking precision or, more seriously, make the whole control system unstable. Therefore, we must design a robust adaptive controller for this saturated, uncertain system. Research on intelligent control has drawn considerable attention for overcoming the mentioned problems, such as neural network-based controllers [Reference Zhou, Zhao, Li, Lu and Wu20, Reference Dai, Song, Xu, Huang and Gong21], model predictive control [Reference Haninger, Hegeler and Peternel22], and cascade control [Reference Ahmadi, Xie and Zakeri23]. However, the neural network needs to be trained in advance, the complex calculation of model predictive control cannot meet the real-time interaction force control requirement, and cascade control loses adaptivity and converges slowly. Compared with these intelligent controllers, the sliding mode controller has been widely accepted in practice for its simplicity and strong robustness against system parametric variations and external disturbances [Reference Qian, Zhao, Qian, Wang and Zi24–Reference Tuan and Ha26]. Nevertheless, for the conventional sliding mode controller, prior knowledge of the disturbance upper bound is necessary before the controller design process. In addition, a conservative control gain of the control input has to be predefined to guarantee system stability under external disturbance, which may cause chattering [Reference Jia and Shan27, Reference Fu, Ai and Chen28]. Even though the adaptive sliding mode controller no longer requires the upper bound of the disturbance, one main problem remains: the adaptive gain does not decrease when the disturbance becomes small, which leads to overestimation when the disturbance amplitude drops [Reference Plestan, Shtessel, Bregeault and Poznyak29, Reference Baek, Jin and Han30]. As a result, in this paper, we design a novel adaptive sliding mode controller that applies the leakage-type (LT) adaptive law to perform reference image feature trajectory tracking in the feature space with the collaborative robot. This LT adaptive law has been proven useful for cancelling overestimation, and it is convenient in practical applications due to its simple first-order formulation [Reference Shao, Zheng, Wang, Wang and Liang31]. Under this control law, not only is the demand for the upper bound of the disturbance removed, but the overestimation is also globally removed by approximating the disturbance variations with adaptive parameters. Thus, the chattering of the control input can be suppressed. Moreover, auxiliary dynamics are constructed to compensate for the impacts of actuator saturation via a specific control input component in the proposed adaptive sliding mode controller.

For clarity, the main contributions of this manuscript are presented as follows:

  • The vision-admittance-based model is proposed, which depicts the contact between the robot and the surroundings in the feature space. For a collaborative robot and its task, we can always have a predefined image feature trajectory. Then, based on this predefined trajectory and real-time contact force error, we can obtain a novel reference image feature trajectory online with this model.

  • A robust adaptive leakage-type sliding mode controller with the auxiliary dynamics compensator is developed and used to track the trajectory in the image feature space. With this controller, impacts of the actuator torque constraints can be compensated, and the overestimation and uncertain external disturbance can be cancelled without knowing the upper bound of system uncertainties.

This paper is organized as follows: some preliminaries of the collaborative robot dynamic model and the image-based visual servoing dynamics are presented in Section 2. In Section 3, the vision-admittance-based model is designed, and the process of generating the reference image feature trajectory is presented. Section 4 shows the development of the robust adaptive leakage-type sliding mode controller equipped with the auxiliary dynamics compensator, together with its stability analysis. In Section 5, numerical simulations are performed on a collaborative robot carrying out a polishing task to demonstrate the efficacy of the proposed control methodology. Finally, we give the conclusion in Section 6.

2. Preliminaries

In this section, some preliminary knowledge of robot dynamics and dynamic image-based visual servoing is presented.

2.1. Dynamic model of the robot with input saturation

Generally, a collaborative robot dynamic model is often expressed by a second-order nonlinear equation. The general dynamic formulation in the robot actuator joint space can be expressed as [Reference Roy, Roy and Kar32]:

(1) \begin{equation} \mathbf{M}(\mathbf q)\ddot{\mathbf q}+\mathbf{C}(\mathbf q,\dot{\mathbf q})\dot{\mathbf q}+\mathbf{g}(\mathbf q)+\mathbf{J}^{T}\mathbf{F}_{e}=\Delta \mathbf{\tau } \end{equation}

where $\Delta \mathbf{\tau }= sat(\mathbf{\tau })-\mathbf{\tau }_d$ ; $\mathbf q,\dot{\mathbf q},\ddot{\mathbf q}\in \mathbb{R}^n$ are the generalized actuated joint positions, velocities, and accelerations, respectively; $\mathbf{M}(\mathbf q)\in \mathbb{R}^{n\times n}$ and $\mathbf{C}(\mathbf q,\dot{\mathbf q})\in \mathbb{R}^{n\times n}$ are the symmetric positive definite inertia matrix and the Coriolis-centrifugal matrix of the robot, and $\mathbf{g}(\mathbf q)\in \mathbb{R}^{n\times 1}$ denotes the gravity vector; $\mathbf{F}_e\in \mathbb{R}^{6\times 1}$ represents the external force caused by contact with the environment; $\mathbf{J}$ is the Jacobian matrix of the robot; the nonlinear saturation torque $sat(\mathbf{\tau })$ is given by

(2) \begin{equation} sat(\mathbf{\tau }^{i}) = \begin{cases} \mathbf{\tau }_{max}^{i}, &\text{if $\mathbf{\tau } ^{i}\gt \mathbf{\tau } _{max}^{i}$}, \\ \mathbf{\tau }^{i}, &\text{if $\mathbf{\tau } _{min}^{i}\lt \mathbf{\tau } ^{i}\lt \mathbf{\tau } _{max}^{i}$},\\ \mathbf{\tau }_{min}^{i}, &\text{if $\mathbf{\tau }^{i}\lt \mathbf{\tau } _{min}^{i}$}.\\ \end{cases} \end{equation}

where $i=1,2\dots n$ and $\mathbf{\tau }$ represents the vector of the input torque without saturation. $\mathbf{\tau } _{max}^{i}$ and $\mathbf{\tau } _{min}^{i}$ are the upper and lower bounds of permissible torque input limited by actuator joint constraints, respectively. $\mathbf{\tau }_d\in \mathbb{R}^{n\times 1}$ denotes the unknown disturbance in input torque.
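For concreteness, the elementwise saturation in (2) can be sketched in a few lines; this is a minimal NumPy illustration with hypothetical variable names, not part of the controller implementation:

```python
import numpy as np

def sat(tau, tau_min, tau_max):
    """Elementwise actuator saturation, as in Eq. (2)."""
    return np.clip(tau, tau_min, tau_max)

# Example: a 6-joint torque command clipped to the +/- 8 Nm limits used in Case 2.
tau = np.array([3.0, -9.5, 7.9, 12.0, -2.0, 0.0])
print(sat(tau, -8.0 * np.ones(6), 8.0 * np.ones(6)))
# -> [ 3.  -8.   7.9  8.  -2.   0. ]
```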

Property 1. The matrix $\mathbf{M}$ is a symmetric positive definite inertia matrix constrained by the following:

(3) \begin{equation} \begin{aligned} m_{1}\left \| \textbf{x}\right \|^{2}\leq \textbf{x}^{T}\mathbf{M}\textbf{x}\leq m_{2}\left \|\textbf{x} \right \|^2 \end{aligned} \end{equation}

in which $m_{1}$ and $m_{2}$ are the lower and upper bounds, and $\textbf{x}$ can be any $n$-dimensional vector, $\textbf{x}\in \mathbb{R}^{n \times 1}$ .

Property 2. The matrix $\mathbf{C}(\mathbf q,\dot{\mathbf q})$ , the vector $\mathbf{g}(\mathbf q)$ and the vector $\mathbf{\tau }_{d}$ in (1) are bounded as:

(4) \begin{equation} \begin{aligned} \left \|\mathbf{C}(\mathbf q,\dot{\mathbf q}) \right \|&\leq \mathbf{C}_m \left \|\dot{\mathbf{q}} \right \|, \\ \left \|\mathbf{g}(\mathbf q) \right \|& \leq \mathbf{g}_m, \\ \left \|\mathbf{\tau }_{d} \right \|&\leq \mathbf{\tau }_{m}. \end{aligned} \end{equation}

in which $\mathbf{C}_{m}$ , $\mathbf{g}_{m}$ and $\mathbf{\tau }_{m}$ are positive constants.

Property 3. $\dot{\mathbf{M}}-2\mathbf{C}$ is a skew-symmetric matrix satisfying the following equality:

(5) \begin{equation} \textbf{x}^{T}[\dot{\mathbf{M}}-2\mathbf{C}]\textbf{x}=0 \end{equation}

where $\textbf{x}$ is any $n$-dimensional vector, $\textbf{x}\in \mathbb{R}^{n \times 1}$ , and $\textbf{M}$ , $\textbf{C}$ are the matrices in (1).

2.2. Dynamic of image-based visual servoing

Image-based visual servoing (IBVS) is an external sensor-based controller. It takes cameras as the primary sensors and estimates the pose of the robot directly from the visual information [Reference Zhu, Briot and Chriette33]. The general kinematic relationship between the twist of the robot and the velocity of the image features can be described by [Reference Mariottini, Oriolo and Prattichizzo34]:

(6) \begin{equation} \dot{\mathbf s}=\mathbf{L}_s\dot{\mathbf{p}}_c \end{equation}

where $\mathbf{s}\in \mathbb{R}^m$ is the vector of $m$ image features and $\mathbf{L}_s$ is the well-known interaction matrix related to $\mathbf s$ [Reference Chaumette and Hutchinson35]. $\dot{\mathbf{p}}_c$ denotes the spatial relative camera-object velocity of the robot. It can be transformed to the actuator velocities $\dot{\mathbf q}$ with the Jacobian matrix $\mathbf{J}$ . Therefore, the relationship between $\dot{\mathbf s}$ and $\dot{\mathbf q}$ can be illustrated as:

(7) \begin{equation} \dot{\mathbf s}=\mathbf{L}_s\,^c\mathbf{T}_e\mathbf{J}\dot{\mathbf q} \end{equation}

where $^c\mathbf{T}_e$ represents the transformation matrix from the kinematic screw to the camera frame. $\mathbf{J}_s$ is defined as $\mathbf{J}_s=\mathbf{L}_s\,^c\mathbf{T}_e\mathbf{J}$ . Then, Eq. (7) can be rearranged as:

(8) \begin{equation} \dot{\mathbf q}=\mathbf{J}_s^{+}\dot{\mathbf s} \end{equation}

in which $^{+}$ here represents the pseudo-inverse of the matrix.
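As a hedged numerical illustration of (7) and (8), the sketch below composes a feature Jacobian $\mathbf{J}_s$ from stand-in matrices and maps a desired feature velocity to joint velocities through the Moore-Penrose pseudo-inverse; the random matrices are placeholders, not a real camera or robot model:

```python
import numpy as np

rng = np.random.default_rng(0)

m, n = 8, 6                           # 4 point features (2 coordinates each), 6 joints
L_s = rng.standard_normal((m, 6))     # interaction matrix L_s
cTe = np.eye(6)                       # twist transform cTe (identity as a placeholder)
J = rng.standard_normal((6, n))       # robot Jacobian J

J_s = L_s @ cTe @ J                   # Eq. (7): s_dot = J_s q_dot
s_dot = rng.standard_normal(m)        # desired image-feature velocity
q_dot = np.linalg.pinv(J_s) @ s_dot   # Eq. (8): least-squares joint velocity

# Sanity check: J_s q_dot reproduces s_dot in the least-squares sense.
print(np.linalg.norm(J_s @ q_dot - s_dot))
```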

The application of visual feedback in the dynamic model of robots was first proposed in [Reference Zhang and Ostrowski36] in 1999. In order to generate the actuator torque, the image feature accelerations have been applied to the IBVS dynamic model [Reference Fusco, Kermorgant and Martinet37, Reference Ozawa and Chaumette38]. By differentiating (8), the IBVS interaction model can be expressed as:

(9) \begin{equation} \ddot{\mathbf q}=\mathbf{J}_{\mathbf s}^{+}\ddot{\mathbf s}+\dot{\mathbf{J}_{\mathbf s}^{+}}\dot{\mathbf s} \end{equation}

Inserting (6) and (9) into (1), the IBVS dynamic model can be written as:

(10) \begin{equation} sat(\mathbf{\tau })=\mathbf{M}\mathbf{J}^+_s\ddot{\mathbf s}+\mathbf{M}\dot{\mathbf{J}_{s}^{+}}\dot{\mathbf s}+\mathbf{C}\mathbf{J}_{s}^{+}\dot{\mathbf s}+\mathbf{g}+\mathbf{J}^{T}\mathbf{F}_e+\mathbf{\tau }_d \end{equation}

Multiplying both sides of (10) by ${\mathbf{J}_s ^+ }^T$ , the general dynamic image-based visual servoing model can be rearranged as:

(11) \begin{equation} \overline{sat(\mathbf{\tau })}=\overline{\mathbf{M}}\ddot{\mathbf s}+\overline{\mathbf{C}}\dot{\mathbf s}+\overline{\mathbf{g}}+\overline{\mathbf{J}^{T}\mathbf{F}}_e+\overline{{\mathbf{\tau }}_d} \end{equation}

where $\overline{sat(\mathbf{\tau })}={\mathbf{J}^+_s}^Tsat(\mathbf{\tau }),\overline{\mathbf{M}}={\mathbf{J}^+_s}^T \mathbf{M}\mathbf{J}^+_s, \overline{\mathbf{C}}={\mathbf{J}^+_s}^T(\mathbf{M}\dot{\mathbf{J}_{s}^{+}}+\mathbf{C}\mathbf{J}_{s}^{+}),\overline{\mathbf{g}}={\mathbf{J}^+_s}^T \mathbf{g},\overline{\mathbf{J}^{T}\mathbf{F}}_e={\mathbf{J}^+_s}^T\mathbf{J}^{T}\mathbf{F}_e$ and $\overline{{\mathbf{\tau }}_{d}}={\mathbf{J}^+_s}^T\mathbf{\tau }_d$ . Similarly, for the dynamic image-based visual servoing model of collaborative robots, some properties have already been demonstrated in ref. [Reference Wang, Liu, Chen and Zhang39], and are presented as follows:

Property 4. The matrix $\overline{\mathbf{M}}$ is a symmetric positive definite matrix and is bounded as:

(12) \begin{equation} m_3\|\mathbf s\|^{2}\leq \mathbf s^T\overline{\mathbf{M}}\mathbf s\leq m_4\|\mathbf s\|^{2},\forall \mathbf s \in \mathbb{R}^{m\times 1}. \end{equation}

in which $m_{3}$ and $m_{4}$ are the lower limit constant and upper limit constant [Reference Wang, Liu, Chen and Zhang39].

Property 5. The disturbance torque $\overline{\mathbf{\tau }_{d}} \in \mathbb{R}^{m\times 1}$ in the feature space is bounded as:

(13) \begin{equation} \left \|\overline{\mathbf{\tau }_{d}} \right \|\leq \mathbf{\tau }_{sm} \end{equation}

It has been mentioned in Section 1 that for a collaborative robot, the task trajectory is predefined. Therefore, we can guarantee that the robot encounters neither Type 1 nor Type 2 singularities and that the visual servoing controller does not encounter a controller singularity during the task [Reference Merlet40, Reference Briot and Martinet41], which means that the matrices $\mathbf{J}$ and $\mathbf{L}_{s}$ are always of full rank.

3. Vision-admittance-based model

In this section, to realize accurate position control and compliant interaction force control simultaneously, a novel vision-admittance-based reference trajectory generator is established and applied to generate a reference trajectory in the image feature space online, taking the interaction force into account.

Firstly, we need to express the external interaction force in the feature space. Based on the content of Section 2, we perform some simple manipulations on (10) and get:

(14) \begin{equation} \ddot{\mathbf s}+\mathbf{C}_{s}\dot{\mathbf s}+b_s+f_{sext}=f_s \end{equation}

with

(15) \begin{equation} \begin{aligned} f_s&=\mathbf{J}_s \mathbf{M}^{-1}sat(\mathbf{\tau }),\\ \mathbf{C}_s&=\mathbf{J}_s \mathbf{M}^{-1}(\mathbf{M}\dot{\mathbf{J}_{s}^{+}}+\mathbf{C}\mathbf{J}_{s}^{+}),\\ b_s&=\mathbf{J}_s \mathbf{M}^{-1}(\mathbf{g}+\mathbf{\tau }_{d}),\\ f_{sext}&=\mathbf{J}_s \mathbf{M}^{-1}\mathbf{J}^{T}\mathbf{F}_e=\mathbf{J}_{sf}\mathbf{F}_e. \end{aligned} \end{equation}

where $f_{sext}$ is the vector of per unit mass/inertia (p.u.m.i.) virtual forces acting on the image feature space, corresponding to the external forces applied to the robot in Cartesian space. $f_s$ is the projection of the robot actuator torque onto the image feature space.

Then, the interaction force $\mathbf{F}_{e}$ between the end-effector of the robot and the surroundings is often modelled as a generalized mass-spring-damper (MSD) model in Cartesian space [Reference Mason9] in the following form (Fig. 1):

(16) \begin{equation} \mathbf{B}_e\ddot{e}_p+\mathbf{D}_e\dot{e}_p+\mathbf{K}_e e_p=\mathbf{F}_e \end{equation}

where $e_p=\mathcal{X}_r-\mathcal{X}_d$ is the error between the reference trajectory and desired trajectory of the robot in the task space. $\mathbf{B}_e$ , $\mathbf{D}_e$ , and $\mathbf{K}_e$ are inertia, damping, and stiffness parameters of the contact surface of the environment, respectively.

Figure 1. Description of the vision-admittance-based model and the general interaction model.

Multiplying both sides of (16) with $\mathbf{J}_s \mathbf{M}^{-1}\mathbf{J}^{T}$ , we obtain:

(17) \begin{equation} \mathbf{B}_{sext}\ddot{e}_p+\mathbf{D}_{sext}\dot{e}_p+\mathbf{K}_{sext} e_p=f_{sext} \end{equation}

where $\mathbf{B}_{sext}=\mathbf{J}_{sf}\mathbf{B}_e,\mathbf{D}_{sext}=\mathbf{J}_{sf} \mathbf{D}_e$ and $\mathbf{K}_{sext}=\mathbf{J}_{sf} \mathbf{K}_e$ .

The interaction force between the robot manipulator and the contact surface arises from very small deformations. We therefore adopt a first-order geometric model for the projection between Cartesian space and the image feature space. The vision-admittance-based model (Fig. 1) can then be derived from (17) and expressed as:

(18) \begin{equation} \mathbf{B}_{s}\ddot{e}_s+\mathbf{D}_{s}\dot{e}_s+\mathbf{K}_{s} e_s=f_{sext} \end{equation}

where $\mathbf{B}_s=\mathbf{B}_{sext}\mathbf{L}^+_s$ , $\mathbf{D}_s=\mathbf{D}_{sext}\mathbf{L}^+_s$ , and $\mathbf{K}_s=\mathbf{K}_{sext}\mathbf{L}_s^+$ . $e_{s}=\mathbf{s}_{r}-\mathbf{s}_{d}$ , where $\mathbf{s}_{r}$ is the reference image feature and $\mathbf{s}_d$ is the predefined desired value. When the collaborative robot performs free motion and no external force exists, $\mathbf{s}_{r}\equiv \mathbf{s}_{d}$ .

Considering the situation in which system (18) is at equilibrium ( $\ddot{e}_s=0$ ), the simplified model of (18) can be described as:

(19) \begin{equation} \begin{aligned} \dot{e}_s(t)&=-\mathbf{K}_s\mathbf{D}^{-1}_s e_{s}+\mathbf{D^{-1}_s}f_{sext}\\ &=\mathbf{A}e_s+\mathbf{N}f_{sext}. \end{aligned} \end{equation}

By solving (19), we obtain:

(20) \begin{equation} e_s(t)=e^{\mathbf{A}t}\mathbf{s}_0+\int ^t_0 e^{\mathbf{A}(t-\tau )} \mathbf{N}f_{sext}(\tau )d\tau \end{equation}

where $\mathbf{s}_{0}$ is the initial state of the image feature. Therefore, we can deduce the reference image trajectory from:

(21) \begin{equation} \begin{aligned} \mathbf{s}_r(t)&=e_s(t)+\mathbf{s}_d\\ & =e^{\mathbf{A}t}\mathbf{s}_0+\int ^t_0 e^{\mathbf{A}(t-\tau )} \mathbf N f_{sext}(\tau )d\tau +\mathbf{s}_d. \end{aligned} \end{equation}

With the force command $\mathbf F_d\in \mathbb{R}^6$ and the vision-admittance model, we can design the force control law in the following form:

(22) \begin{equation} f_{sext}(t)=\mathbf{J}_{sf}(\mathbf{K}_p e_f+\mathbf{K}_j \int _0^t e_f(\tau ) d\tau ) \end{equation}

where $e_f=\mathbf{F}_d-\mathbf{F}_e$ , and $\mathbf{F}_d$ is the desired contact force command. $\mathbf K_p=diag(k_{p,1}\cdots k_{p,n})$ and $\mathbf K_j=diag(k_{j,1}\cdots k_{j,n})$ are positive definite matrices.
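As a rough illustration of how (19), (21), and (22) work together online, the sketch below Euler-integrates the force-error integral of (22) and the admittance state of (19) to produce the reference feature trajectory of (21). The step size, gains, and matrix shapes are our assumptions for the sketch, not the paper's tuning:

```python
import numpy as np

def reference_generator(s_d, F_d, F_e, A, N, J_sf, Kp, Kj, dt):
    """Generate the reference feature trajectory s_r online (Eqs. 19, 21, 22).

    s_d      : (T, m) predefined desired feature trajectory
    F_d, F_e : (T, 6) desired and measured contact forces
    A, N     : (m, m) matrices of the simplified admittance dynamics (19)
    J_sf     : (m, 6) projection from Cartesian wrench to feature space (15)
    Kp, Kj   : (6, 6) diagonal PI gains of the force law (22)
    """
    T, m = s_d.shape
    e_s = np.zeros(m)          # feature-space admittance state e_s
    int_ef = np.zeros(6)       # running integral of the force error
    s_r = np.empty_like(s_d)
    for k in range(T):
        e_f = F_d[k] - F_e[k]                       # force error
        int_ef += e_f * dt
        f_sext = J_sf @ (Kp @ e_f + Kj @ int_ef)    # Eq. (22)
        e_s += (A @ e_s + N @ f_sext) * dt          # Euler step of Eq. (19)
        s_r[k] = e_s + s_d[k]                       # Eq. (21)
    return s_r
```

In a real controller, the force feedback $\mathbf{F}_e$ would come from the sensor at each cycle rather than from a pre-recorded array.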

4. Leakage-type adaptive sliding mode controller with auxiliary dynamic compensator

In this section, we develop a Leakage-type adaptive sliding mode controller with an auxiliary dynamic compensator to drive the collaborative robot to track the reference image feature trajectory.

The auxiliary dynamics are first described to cancel the impacts of input saturation. Moreover, the leakage-type adaptive law is presented to solve the problems arising from unknown system errors and unmatched uncertainties caused by external disturbance. With a Lyapunov analysis, the stability of the controller under uncertainties and input saturation is proven. The structure of this novel adaptive sliding mode controller is shown in Fig. 2.

Figure 2. The structure of vision-admittance-based adaptive sliding mode controller.

4.1. Auxiliary dynamic model for input saturation compensation

Firstly, we construct the auxiliary dynamic model, which is used to establish the sliding function in the following section. Considering the impacts of the input saturation, the auxiliary dynamic vector, in this case, can be defined as:

(23) \begin{equation} \triangle \mathbf{r}=\overline{\mathbf{M}}^{-1}\overline{\triangle \mathbf{\tau }}\in \mathbb{R}^{m\times 1} \end{equation}

where $\overline{\triangle \mathbf{\tau }}=\overline{sat(\mathbf{\tau })}-\overline{\mathbf{\tau }}$ .

Then, the following auxiliary dynamic is proposed as follows:

(24) \begin{equation} \left \{ \begin{aligned} \dot{\xi _{i,1}}&=-h_{i,1}\xi _{i,1}+\xi _{i,2}, \\ \dot{\xi _{i,2}}&=-h_{i,2}\xi _{i,2}+\triangle{r_{i}}. \end{aligned} \right . \end{equation}

where $i=1,2,\cdots,m$ and $\xi _{i,1},\xi _{i,2}$ are the state variables of the auxiliary dynamic model.

Thus, we rewrite (24) in the matrix form:

(25) \begin{equation} \dot{\xi _i}=\mathbf{H}_i\xi _i+\mathbf{E}_i\triangle r_i \end{equation}

where $\xi _i=[\xi _{i,1},\xi _{i,2}]$ , $\mathbf{H}_i=diag\{-h_{i,1},-h_{i,2}\}$ , and $\mathbf{E}_i=[0,1]^T$ . $h_{i,1},h_{i,2}$ are positive and selected to make $\mathbf{H}_i$ a Hurwitz matrix. By defining $\boldsymbol{\xi }=[{\xi }_1^T,\cdots,{\xi }_m^T]^T\in \mathbb{R}^{2m\times 1}$ , (25) can be rewritten as:

(26) \begin{equation} \dot{\boldsymbol{\xi }}=\mathbf{H}\boldsymbol{\xi }+\mathbf{E}\triangle \mathbf{r} \end{equation}

where $\mathbf{H}=diag\{\mathbf{H}_1,\cdots,\mathbf{H}_m\}\in \mathbb{R}^{2m\times 2m}$ and $\mathbf{E}=diag\{\mathbf{E}_1,\cdots,\mathbf{E}_m\}\in \mathbb{R}^{2m\times m}$ .

Defining $\xi^{\prime}_1=[\xi _{1,1},\cdots,\xi _{m,1}]^T$ and $\xi^{\prime}_2=[\xi _{1,2},\cdots,\xi _{m,2}]^T$ , we transform (24) into the following form:

(27) \begin{equation} \left \{ \begin{aligned} \dot{\xi^{\prime}_1}&=-\hat{\mathbf{H}}_1\xi^{\prime}_1+\xi^{\prime}_2,\\ \dot{\xi^{\prime}_2}&=-\hat{\mathbf{H}}_2\xi^{\prime}_2+\triangle \mathbf{r}. \end{aligned} \right . \end{equation}

with $\hat{\mathbf{H}}_1=diag\{h_{1,1},\cdots,h_{m,1}\}\in \mathbb{R}^{m\times m}$ and $\hat{\mathbf{H}}_2=diag\{h_{1,2},\cdots,h_{m,2}\}\in \mathbb{R}^{m\times m}$ .

Differentiating the first equation of (27) and inserting (23), we obtain:

(28) \begin{equation} \ddot{\xi^{\prime}_1}=-\hat{\mathbf{H}}_1\dot{\xi^{\prime}_1}-\hat{\mathbf{H}}_2\xi^{\prime}_2+\overline{\mathbf{M}}^{-1}\overline{\triangle \mathbf{\tau }} \end{equation}

In the following section, (28) is applied to the design of the adaptive sliding mode controller.
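Numerically, the auxiliary states of (24) can be propagated with a simple forward-Euler step; the sketch below is one possible discretization, with the step size dt as an assumed parameter:

```python
import numpy as np

def auxiliary_step(xi1, xi2, delta_r, h1, h2, dt):
    """One Euler step of the auxiliary dynamics (24) for all m channels.

    xi1, xi2 : (m,) auxiliary state vectors
    delta_r  : (m,) saturation mismatch of Eq. (23)
    h1, h2   : (m,) positive gains (so that each H_i is Hurwitz)
    """
    xi1_next = xi1 + (-h1 * xi1 + xi2) * dt
    xi2_next = xi2 + (-h2 * xi2 + delta_r) * dt
    return xi1_next, xi2_next
```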

4.2. Leakage-type adaptive sliding mode controller

Considering the auxiliary dynamic compensation proposed above, the error vector of this sliding model controller can be defined as:

(29) \begin{equation} e_{c}=e_{sf}-\xi^{\prime}_1 \end{equation}

where $e_{sf}=\mathbf{s}-\mathbf{s}_{r}$ . $\mathbf s$ is the real-time vision feedback and $\mathbf{s}_r$ is the reference feature trajectory in Section 3.

With (29), the linear sliding function can be defined as:

(30) \begin{equation} \mathbf{r}=\dot{e}_c+\Lambda e_c \end{equation}

where $\Lambda =diag\{\lambda _1,\cdots,\lambda _m\}$ with $\lambda _i\gt 0$ .

Differentiating (30) and combining it with (11) and (28), we have:

(31) \begin{equation} \begin{aligned} \dot{\mathbf{r}}&= \ddot{e}_c+\Lambda \dot{e}_c\\ &= \ddot{\mathbf{s}}-\ddot{\mathbf s}_{r}-\ddot{\xi^{\prime}_1}+\Lambda \dot{e}_c\\ &= \ddot{\mathbf s}-\ddot{\mathbf s}_{r}+\hat{\mathbf{H}}_1\dot{\xi^{\prime}_1}+\hat{\mathbf{H}}_2\xi^{\prime}_2-\overline{\mathbf{M}}^{-1}\overline{\triangle \mathbf{\tau }}+\Lambda \dot{e}_c\\ &= \overline{\mathbf{M}}^{-1}(\overline{sat(\mathbf{\tau })}-\overline{\mathbf{C}}\dot{\mathbf s}-\overline{\mathbf{g}}-\overline{\mathbf{J}^{T}\mathbf{F}_e}-\overline{\mathbf{\tau }_d})- \ddot{\mathbf s}_r+ \hat{\mathbf{H}}_1\dot{\xi^{\prime}_1}+ \hat{\mathbf{H}}_2\xi^{\prime}_2-\overline{\mathbf{M}}^{-1}\overline{\triangle \mathbf{\tau }}+\Lambda \dot{e}_c\\ &= \overline{\mathbf{M}}^{-1}(\overline{\mathbf{\tau }}-\overline{\mathbf{{C}}}\dot{\mathbf s}-\overline{\mathbf{g}}-\overline{\mathbf{J}^{T}\mathbf{F}_e}-\overline{\mathbf{\tau }_d})-\ddot{\mathbf s}_r+\hat{\mathbf{H}}_1\dot{\xi^{\prime}_1}+\hat{\mathbf{H}}_2\xi^{\prime}_2+\Lambda \dot{e}_c. \end{aligned} \end{equation}

This system is stable when $\dot{\mathbf r}=0$ and the external disturbance is ignored ( $\overline{\mathbf{\tau }_d}=0$ ). Therefore, the following equivalent input torque can be obtained:

(32) \begin{equation} \overline{\mathbf{\tau }_0}=\overline{\mathbf{C}}\dot{\mathbf{s}}+\overline{\mathbf{g}}+\overline{\mathbf{J}^{T}\mathbf{F}_e}+\overline{\mathbf{M}}(\ddot{\mathbf{s}}_r-\hat{\mathbf{H}}_1\dot{\xi^{\prime}_1}-\hat{\mathbf{H}}_2\xi^{\prime}_2-\Lambda \dot{e}_c) \end{equation}

Moreover, to solve the problem of the external disturbance of the system, we propose the following control law:

(33) \begin{equation} \overline{\mathbf{\tau }_1}=\overline{\mathbf{M}}({-}\mathbf{K} sgn(\mathbf{r})-\mathbf{\Phi } \mathbf{r}) \end{equation}

where $\boldsymbol{\Phi }=diag\{\phi _1,\cdots,\phi _m\}$ with $\phi _i\gt 0$ , and $\mathbf{K}=diag\{k_1,\cdots,k_m\}$ is the adaptive parameter. By denoting $\mathbf{r}=[r_1,\cdots,r_m]^T$ , the update law of $k_{i}$ [Reference Roy, Baldi and Fridman42] can be expressed as:

(34) \begin{equation} \dot{k}_i=|{r_i}|-\mu _i k_i \end{equation}

with $\mu _i\gt 0$ ( $i=1,\cdots,m$ ) and the initial values of the adaptive parameters $k_i(0)\geq 0$ .

Finally, the overall output of this adaptive sliding mode controller can be constructed as:

(35) \begin{equation} \overline{\mathbf{\tau }}=\overline{\mathbf{\tau }_0}+\overline{\mathbf{\tau }_1} \end{equation}
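For readers who prefer pseudocode, one control cycle of (30)-(35) might be organized as below. All model terms are assumed to be supplied by the dynamics computation, the gain containers are hypothetical, and the adaptive gains are Euler-integrated; this is our reading of the equations, not the authors' implementation:

```python
import numpy as np

def ltasmc_step(e_c, e_c_dot, k, model, gains, dt):
    """One cycle of the LT adaptive sliding mode law, Eqs. (30)-(35).

    e_c, e_c_dot : (m,) compensated tracking error and its derivative (29)
    k            : (m,) adaptive gains, updated per Eq. (34)
    model        : dict with M_bar, C_bar, g_bar, JTFe_bar, s_dot, s_r_ddot,
                   H1, H2, xi1_dot, xi2 (feature-space quantities)
    gains        : dict with Lambda, Phi, mu stored as (m,) diagonal vectors
    """
    r = e_c_dot + gains["Lambda"] * e_c                      # sliding variable (30)
    # Equivalent control, Eq. (32):
    tau0 = (model["C_bar"] @ model["s_dot"] + model["g_bar"] + model["JTFe_bar"]
            + model["M_bar"] @ (model["s_r_ddot"]
                                - model["H1"] @ model["xi1_dot"]
                                - model["H2"] @ model["xi2"]
                                - gains["Lambda"] * e_c_dot))
    # Robust term with LT adaptive gain, Eq. (33):
    tau1 = model["M_bar"] @ (-k * np.sign(r) - gains["Phi"] * r)
    # Leakage-type gain update, Eq. (34), Euler-integrated:
    k_next = k + (np.abs(r) - gains["mu"] * k) * dt
    return tau0 + tau1, k_next                               # total torque, Eq. (35)
```

The leakage term `-gains["mu"] * k` is what lets the gain decay again once the sliding variable shrinks, which is the mechanism that removes overestimation.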

4.3. Stability analysis

In this section, the stability analysis and achievable performance of the control scheme in Fig. 2 are presented.

Assumption 1. The disturbance vector in the image feature space of the robot system, $\overline{\mathbf{\tau }_d}=[\gamma _1,\cdots,\gamma _m]^T$ , is bounded by $\bar{k}=[\bar{k}_1,\cdots,\bar{k}_m]^T$ with $\bar{k}_i\gt 0$ , i.e., $|\gamma _i|\leq \bar{k}_i\,\,(i=1,\cdots,m)$ .

Assumption 2. The system (1) is input-to-state stable [Reference Karason and Annaswamy43].

Property 6. For any given vectors $\mathbf{x}=[x_1,\cdots,x_m]^T\in \mathbb{R}^m$ and $\mathbf{y}=[y_1,\cdots,y_m]^T\in \mathbb{R}^m$ , the following inequality holds:

(36) \begin{equation} \mathbf{x}^T\mathbf{y}\leq \|\mathbf{x}\|\|\mathbf{y}\| \end{equation}

Property 7. For any given vector $\mathbf{x}=[x_1,\cdots,x_m]^T\in \mathbb{R}^m$ , the following inequality holds:

(37) \begin{equation} \sum _{i=1}^m|x_i|\geq \sqrt{\sum _{i=1}^m x^2_i}=\|\mathbf{x}\| \end{equation}

Property 8. For any given vector $\mathbf{x}\in \mathbb{R}^m$ and a symmetric positive definite matrix $\mathbf{R}\in \mathbb{R}^{m\times m}$ , the following inequality holds:

(38) \begin{equation} \lambda _{min}(\mathbf{R})\mathbf{x}^T \mathbf{x}\leq \mathbf{x}^T\mathbf{R}\mathbf{x}\leq \lambda _{max}(\mathbf{R})\mathbf{x}^T \mathbf{x} \end{equation}

where $\lambda _{min}(\mathbf{R})$ and $\lambda _{max}(\mathbf{R})$ are the minimal and maximal eigenvalues of matrix $\mathbf{R}$ .

Lemma 4.1. Define a positive Lyapunov function of state $\mathbf r$ as $\mathbf{V}(\mathbf{r})$ satisfying the following inequality:

(39) \begin{equation} \dot{\mathbf{V}}(\mathbf{r})+\mathbf{\Theta }\mathbf{V}(\mathbf{r})\leq \zeta \end{equation}

where $\mathbf{\Theta }\gt 0$ and $\zeta \gt 0$ . For any initial condition of the Lyapunov function, $\mathbf{V}(\mathbf r)$ converges to a finite region as:

(40) \begin{equation} \mathbf{V}(\mathbf r)\leq \frac{\zeta }{\mathbf{\Theta }(1-\epsilon )} \end{equation}

and the convergence time of $\mathbf{V}(\mathbf r)$ is limited by the following inequality as:

(41) \begin{equation} t\leq \frac{1}{\mathbf{\Theta }\epsilon }\ln \frac{\mathbf{\Theta }(1-\epsilon )\mathbf{V}(0)}{\zeta } \end{equation}

where $\epsilon \in (0,1)$ and $\mathbf{V}(0)$ is the initial value of the Lyapunov function. The proof can be found in ref. [Reference Shao, Zheng, Wang, Wang and Liang31].

Lemma 4.2. Given a differential function as:

(42) \begin{equation} \mathbf{h}(e,t)=(\frac{d}{dt}+\nu )^{w-1} e \end{equation}

where $\nu \gt 0$ and $e=[e,\dot{e},\cdots,e^{(m-1)}]^T\in \mathbb{R}^m$ . When $\mathbf{h}(e,t)$ is constrained as $|\mathbf{h}(e,t)|\leq \boldsymbol{\Theta }$ , the following inequality holds:

(43) \begin{equation} |e^{(i)}|\leq (2\nu )^{i}\frac{\mathbf{\Theta }}{\nu ^{m-1}} \end{equation}

where $i=0,1,\cdots,m-1$ . The proof can be found in ref. [Reference Slotine and Li44].

Then, we design the Lyapunov function as follows:

(44) \begin{equation} \mathbf{V}=\frac{1}{2}\mathbf r^T\mathbf r+\frac{1}{2}(k-\bar{k})^T(k-\bar{k}) \end{equation}

where $k=[k_1,\cdots,k_m]^T$ . Differentiating (44) and combining it with (30) to (35), the derivative of $\mathbf{V}$ is given by:

(45) \begin{equation} \begin{aligned} \dot{\mathbf{V}}&=\mathbf r^T\dot{\mathbf r}+(k-\bar{k})^T\dot{k}\\[3pt] &=\mathbf r^T(\overline{\mathbf{M}}^{-1}(\overline{\mathbf{\tau }}-\overline{\mathbf{C}}\dot{\mathbf s}-\overline{\mathbf{g}}-\overline{\mathbf{J}^{T}\mathbf{F}_e}- \overline{\mathbf{\tau }_d})-\ddot{\mathbf s}_r+\hat{\mathbf{H}}_1\dot{\xi^{\prime}_1}+\hat{\mathbf{H}}_2\xi^{\prime}_2+\Lambda \dot{e}_c)+(k-\bar{k})^T\dot{k}\\[3pt] &=\mathbf r^T(\overline{\mathbf{M}}^{-1}(\overline{\mathbf{\tau }_1}-\overline{\mathbf{\tau }_d}))+(k-\bar{k})^T\dot{k}\\[3pt] & =\mathbf r^T({-}\mathbf{K} sgn(\mathbf r)-\mathbf{\Phi } \mathbf r-\overline{\mathbf{M}}^{-1}\overline{\mathbf{\tau }_d})+(k-\bar{k})^T((\mathbf r)_{||}-\mathbf{\mu } k). \end{aligned} \end{equation}

where $(\mathbf r)_{||}=[|r_1|,\cdots,|r_m|]^T$ and $\mathbf{\mu }=diag\{\mu _1,\cdots,\mu _m\}$ from (34). In addition, considering Properties 4 to 7, (45) can be rewritten as:

(46) \begin{equation} \begin{aligned} \dot{\mathbf{V}}\leq &-\sum ^m_{i=1} k_i|{\mathbf{r}}_i|-{\mathbf{r}}^T\mathbf{\Phi }{\mathbf{r}}+\sum ^m_{i=1}m_{4}{\mathbf{\tau }_{sm,i}}|{\mathbf{r}}_i|+(k-\bar{k})^T({\mathbf{r}}_{||}-\mathbf{\mu } k)\\[3pt] \leq &-\sum ^m_{i=1} k_i|{\mathbf{r}}_i|-{\mathbf{r}}^T\mathbf{\Phi }{\mathbf{r}}+(k-\bar{k})^T({\mathbf{r}}_{||}-\mathbf{\mu } k)\\[3pt] &= -{\mathbf{r}}^T\mathbf{\Phi }{\mathbf{r}}-(k-\bar{k})^T\mathbf{\mu } k. \end{aligned} \end{equation}

Then, by adopting Property 8, (46) becomes:

(47) \begin{equation} \begin{aligned} \dot{\mathbf{V}}\leq &-{\mathbf{r}}^T\mathbf{\Phi }{\mathbf{r}}-\frac{1}{2}(k-\bar{k})^T\mu (k-\bar{k})+\frac{1}{2}\bar{k}^T\mathbf{\mu } \bar{k}\\ \leq &-\frac{\lambda _{min}(\mathbf{\Phi )}}{\lambda _{max}(\mathbf{\overline{M}})}{\mathbf{r}}^T\mathbf{\overline{M}}{\mathbf{r}}-\frac{1}{2}\lambda _{min}(\mathbf{\mu })(k-\bar{k})^T(k-\bar{k})+\frac{1}{2}\bar{k}^T\mathbf{\mu } \bar{k}\\ \leq &-min\{\frac{2\lambda _{min}(\mathbf{\Phi })}{\lambda _{max}(\mathbf{\overline{M}})},\lambda _{min}(\mathbf{\mu })\}\mathbf{V}+\frac{1}{2}\bar{k}^T\mathbf{\mu } \bar{k}\\ &= -\mathbf{\Theta }\mathbf{V}+{\zeta }. \end{aligned} \end{equation}

where $\Theta =min\{\frac{2\lambda _{min}(\mathbf{\Phi })}{\lambda _{max}(\mathbf{\overline{M}})},\lambda _{min}(\mathbf{\mu })\}$ and ${\zeta }=\frac{1}{2}\bar{k}^T\mathbf{\mu } \bar{k}$ .

With Assumption 1, the boundary of the sliding variable $\mathbf{r}$ can be obtained:

(48) \begin{equation} \frac{1}{2}\lambda _{min}(\mathbf{\overline{M}})\|\mathbf{{r}}\|^2\leq \frac{1}{2}\mathbf{{r}}^T\mathbf{\overline{M}}\mathbf{{ r}}\leq \frac{\boldsymbol{\zeta }}{\mathbf{\Theta }(1-\epsilon )} \end{equation}

where $\epsilon \in (0,1)$ .

Simplifying (48) and taking Lemma 4.1 into consideration, the bound on $\mathbf{{r}}$ can be described as:

(49) \begin{equation} |\mathbf{{r}}_i|\leq \|\mathbf{{r}}\|\leq \sqrt{\frac{2\zeta }{\lambda _{min}(\overline{\mathbf{M}})\mathbf{\Theta }(1-\epsilon )}} \end{equation}

then the convergence time is bounded as follows:

(50) \begin{equation} t\leq \frac{1}{\boldsymbol{\Theta }\epsilon }\ln \frac{\boldsymbol{\Theta }(1-\epsilon )\mathbf{V}(0)}{\zeta } \end{equation}

From (30) and according to Lemma 4.2 with $w=2$ and $i=0$ , the error $e_{c}$ in (29) is bounded as:

(51) \begin{equation} \left |e_c\right |\leq \frac{\hat{r}_i}{\nu _i} \end{equation}

where $\nu _i$ is a positive constant and $\hat{r}_i$ is the upper bound of $\left | r_{i}\right |$ .

Therefore, with the proposed controller (35), the linear sliding function $\mathbf{{r}}$ and the error $e_c$ converge to a bounded region in finite time.

5. Semi-physical simulation experiments

To demonstrate the efficacy of the proposed methodology, semi-physical simulations are performed on a 6-DOF collaborative robot acting as a polishing robot. We suppose that a grinding tool is attached to the robot manipulator and is driven to follow a predefined polishing trajectory $\mathbf{s}_{d}$ and a desired polishing force command $\mathbf{F}_{d}$ at the same time. A force sensor is fixed between the tool and the manipulator to measure the real-time grinding force (see Fig. 3). For comparison, we consider two additional controllers: the conventional adaptive sliding mode controller in ref. [Reference Gao, An, Proctor and Bradley45] and the parallel vision/force controller (PVFC) in ref. [Reference Zhu, Huang, Qiu, Zheng and Gong8].

Figure 3. Simulation structure of SD7/700.

5.1. Collaborative robot and interaction environment model

The simulations are performed in MATLAB with the Robotics System Toolbox. The physical CAD model of the 6-DOF desktop manipulator STEP SD7/700 is applied, and all the robot transformations are perfectly known through the toolbox. The initial joint angles of the STEP SD7/700 are set to $\mathbf{q}(0) = [0,0,0,0,-1.4708,0.5236]^T$ rad.

The structure of this collaborative robot is shown in Fig. 3. The origin of the global coordinate frame is the centre of the robot base. The centre of the interactive plane is $P_i(0.3,0,0.4)$ m. Four point features are chosen as the image features in this simulation, located at $A_1(0.55,0.05,0.3)$ m, $A_2(0.55,-0.05,0.3)$ m, $A_3(0.45,-0.05,0.3)$ m, and $A_4(0.45,0.05,0.3)$ m in the global frame, respectively.

In the simulation, we employ the eye-in-hand configuration and fix the camera to the end-effector of the manipulator (see Fig. 3). The camera’s resolution is $1024 \times 1024$ pixels, and the focal lengths along the x and y axes are 10 mm. The ratio between pixel size and unit length is 100 pixels/mm along both axes of the pixel plane. The grinding tip performs the task on the interactive plane, and the force sensor is used to obtain the interaction force feedback.

Considering the real polishing environment, the interaction force between the robot and the interactive plane in this simulation is modelled by the MSD model given in (16), and we set $\mathbf{K}_e = 10000$ N/m, $\mathbf{D}_e = 0.57$ Ns/m, and $\mathbf{B}_e = 0.001$ Ns $^2$ /m.
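With these parameters, the normal contact force of (16) can be sketched as a scalar function; the penetration inputs below are hypothetical values for illustration:

```python
# Environment parameters from the simulation setup.
K_e, D_e, B_e = 10000.0, 0.57, 0.001   # N/m, Ns/m, Ns^2/m

def contact_force(e_p, e_p_dot, e_p_ddot):
    """Normal contact force of the MSD environment model, Eq. (16).

    e_p and its derivatives: penetration (m), velocity (m/s), and
    acceleration (m/s^2) along the surface normal; scalars here for
    a 1-D illustration.
    """
    return B_e * e_p_ddot + D_e * e_p_dot + K_e * e_p

# Example: a 1 mm static penetration yields the 10 N force command.
print(contact_force(1e-3, 0.0, 0.0))   # -> 10.0 N
```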

5.2. Definition of the position trajectory and noise from feedback

To perform the polishing task, we define a trajectory to reach the workspace and avoid the singularity location. It consists of two phases: the approaching phase and the interaction phase.

For the approaching phase, the robot’s initial position is $(0.398,0,0.6339)$ m, and the desired position is $(0.3,0.05,0.4005)$ m. We define the time-varying position trajectory in Cartesian space as:

(52) \begin{equation} \mathbf{p}_d = \begin{bmatrix} 0.3\\ 0.05\\ 0.4005+\frac{(2-t)^3}{80} \end{bmatrix} \end{equation}

where $t \in [0,2]$ s. After the approaching phase, the robot manipulator remains stationary at $(0.3,0.05,0.4005)$ m for $1$ sec. Then, during the interaction phase, the desired force and position trajectory are given simultaneously. The polishing force command is set to 10 N normal to the interaction plane throughout, and the predefined time-varying trajectory in Cartesian space is given as:

(53) \begin{equation} \mathbf{p}_d = \begin{bmatrix} 0.3+0.05\sin\!\left(\frac{t-3}{2}\right)\\ 0.05\cos\!\left(\frac{t-3}{2}\right)\\ 0.4\end{bmatrix} \end{equation}

where $t \in [3,20]$ s.
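For reference, a sampled version of (52) and (53) can be generated as follows; the handling of the 1 s dwell between the two phases is our reading of the setup:

```python
import numpy as np

def desired_position(t):
    """Desired Cartesian position p_d(t), Eqs. (52)-(53)."""
    if t <= 2.0:                                   # approaching phase
        return np.array([0.3, 0.05, 0.4005 + (2.0 - t) ** 3 / 80.0])
    if t <= 3.0:                                   # 1 s dwell at the contact point
        return np.array([0.3, 0.05, 0.4005])
    # Interaction phase: circular polishing path on the plane z = 0.4.
    return np.array([0.3 + 0.05 * np.sin((t - 3.0) / 2.0),
                     0.05 * np.cos((t - 3.0) / 2.0),
                     0.4])

# Sample the whole 20-second task at 100 Hz.
trajectory = np.array([desired_position(t) for t in np.linspace(0.0, 20.0, 2001)])
```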

In order to prove the robustness and adaptive ability of the proposed controller, some noise and uncertainties are added to the system. First, we add a $5\%$ uncertainty to the diagonal elements of the inertia matrix:

(54) \begin{equation} \hat{\mathbf{M}} = \mathbf{M}+0.05\mathbf{M}_d \end{equation}

where $\mathbf{M}_d = diag\{ M_{1,1},M_{2,2},M_{3,3},M_{4,4},M_{5,5},M_{6,6}\}$ and $M_{i,i}, i = 1,\cdots,6$ are the diagonal elements of the inertia matrix $\mathbf{M}$ . In addition, by using the MATLAB function wgn, white Gaussian noise with a maximum of $0.1$ pixel is added to the image features, a noise of 0.1 N is added to the force feedback, and the power of the wgn function is set to $-32$ . Considering the possible ageing of the robot motors, a $1\%$ random noise is applied as the unknown disturbance $\mathbf{\tau }_d$ in (1).
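A hedged NumPy analogue of this uncertainty and noise model is sketched below; MATLAB's wgn is replaced by numpy.random draws of comparable magnitude, and the exact scaling is our assumption:

```python
import numpy as np

rng = np.random.default_rng(42)

def perturb(M, s_px, F_meas, tau):
    """Apply the uncertainty/noise model of Section 5.2 (approximate).

    M      : (6, 6) nominal inertia matrix  -> +5% on its diagonal, Eq. (54)
    s_px   : (8,) image features in pixels  -> white noise, max ~0.1 px
    F_meas : (6,) force/torque feedback     -> +/- 0.1 N noise
    tau    : (6,) actuator torque           -> 1% random disturbance
    """
    M_hat = M + 0.05 * np.diag(np.diag(M))
    s_noisy = s_px + np.clip(rng.normal(0.0, 0.03, s_px.shape), -0.1, 0.1)
    F_noisy = F_meas + rng.uniform(-0.1, 0.1, F_meas.shape)
    tau_noisy = tau * (1.0 + 0.01 * rng.uniform(-1.0, 1.0, tau.shape))
    return M_hat, s_noisy, F_noisy, tau_noisy
```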

5.3. Simulation design

For the force control law (22), we set $k_{j,i}=0.08$ and $k_{p,i}=0.02$ for $i=1,\cdots,6$ . Three sets of controllers are compared in this simulation:

  • Controller 1: The leakage-type adaptive sliding mode controller (LTASMC) developed in (35). We set $h_{i,1}=13$ , $h_{i,2}=50$ , $k_i(0)=0$ , $\phi _i=10$ , $\mu _i=200$ , $\lambda _i=5$ ( $i = 1,\cdots,8$ ).

  • Controller 2: The adaptive sliding mode controller proposed in ref. [Reference Gao, An, Proctor and Bradley45]. This controller replaces the leakage-type adaptive law (34) with the following:

    (55) \begin{equation} k_i = a_i|r_i| \end{equation}
    where $a_i = 1$ , $i=1,\cdots,8$ , and the other elements are the same as in Controller 1.
  • Controller 3: The parallel vision/force controller (PVFC) [Reference Zhu, Huang, Qiu, Zheng and Gong8]. The input torque is given as:

    (56) \begin{equation} \mathbf{\tau }=\mathbf{\tau }_0+\mathbf{\tau }_1+\mathbf{\tau }_2 \end{equation}
    and
    (57) \begin{equation} \begin{aligned} \mathbf{\tau }_0 &=\mathbf{J}_s^T(\overline{\mathbf{C}}\dot{\mathbf{s}}+\overline{\mathbf{g}}+\overline{\mathbf{J}^{T}\mathbf{F}_e}),\\ \mathbf{\tau }_1 &= \hat{\mathbf{M}}\mathbf{J}_s^+(\ddot{\mathbf{s}}_d-K_{dv}\dot{e}_s-K_{pv}e_s),\\ \mathbf{\tau }_2 &= -\hat{\mathbf{M}}\mathbf{J}^+(K_{pf} e_f+K_{if}\int ^t_0 e_f d\tau ). \end{aligned} \end{equation}
    where we set $K_{dv} = 20$ , $K_{pv} = 50$ , $K_{pf} = 0.01$ , and $K_{if} = 0.03$ .

Three cases are designed to illustrate the robustness and adaptability of these three controllers:

  • Case 1: The robot system works without any uncertainty, disturbance, or input saturation.

  • Case 2: The robot system works with input saturation given as $\mathbf{\tau }_{max}^i = 8$ Nm and $\mathbf{\tau }_{min}^i = -8$ Nm, where $i=1,\cdots,6$ . In this case, we focus only on the approaching phase. Two controllers (the proposed controller with and without the auxiliary dynamic model (ADM)) are used to demonstrate the performance of the proposed controller in handling actuator saturation. Uncertainties and disturbances are not added to the robot system.

  • Case 3: Uncertainties of the inertia parameters and disturbances of the image features, force feedback, and actuator torque are added to the robot model, and the input saturation is given as $\mathbf{\tau }_{max}^i = 12$ Nm and $\mathbf{\tau }_{min}^i = -12$ Nm, where $i=1,\cdots,6$ .

5.4. Analysis of the results

The results of the three simulations are given as follows:

Case 1: The desired and actual image trajectories, position errors, and input torques are presented in Fig. 4. The force command and actual force feedback are plotted in Fig. 5(a). In Fig. 4(a), 4(d), and 4(g), the red dashed line represents the desired image features. The four solid coloured lines represent the actual image features in the pixel plane, and $P_1, P_2, P_3, P_4$ are the initial image features of the robot. From Fig. 4(a), 4(d), and 4(g), we find that all three controllers can track the predefined image feature trajectory in Case 1, but their converging speeds differ. As shown in Fig. 4(b), 4(e), and 4(h), the position errors of the proposed controller and the conventional adaptive sliding mode controller converge to a small neighbourhood of zero quickly, within $0.6$ sec, whereas the parallel vision/force controller takes $0.8$ sec to converge to the desired trajectory. When the force command is applied and the desired trajectory changes, the convergence time of the parallel vision/force controller is $0.5$ sec, while those of the proposed controller and the conventional adaptive sliding mode controller are about $1$ sec. The maximal force errors of the proposed controller, the conventional adaptive sliding mode controller, and the parallel vision/force controller are $0.01$ N, $0.017$ N, and $0.025$ N, respectively, and the position errors of the three controllers are within the order of $10^{-4}$ m. From Fig. 4(c), 4(f), and 4(i), we see that the chattering of the conventional adaptive sliding mode controller in the input torque is serious. However, with the help of the leakage-type adaptive law, the chattering of the proposed controller is significantly reduced. From Fig. 4 and Fig. 5(a), it is clear that the control accuracy of the proposed controller and the conventional adaptive sliding mode controller is better than that of the parallel vision/force controller, and compared with the conventional adaptive sliding mode controller, the chatter of the input torque under the proposed controller is eliminated.

Figure 4. The simulation results of case 1 ((a), (d), and (g) show the desired trajectories and actual trajectories of three controllers in the pixel plane, (b), (e), and (h) show the trajectory tracking errors of three controllers, (c), (f) and (i) show the input torque of three controllers.).

Figure 5. The desired and actual force control performance of three controllers in case 1 and case 3.

Case 2: The simulation results of Case 2 are given in Fig. 6 and Fig. 7. As shown in Fig. 7, the input torque of the robot under the two controllers is bounded due to the input saturation, and the converging speed of the proposed controller in Case 2 is slower than in Case 1. From Fig. 6, the robot under the proposed controller with the auxiliary dynamic model converges faster than without it along the X and Y axes, while the convergence rate along the Z-axis is similar in both cases. The numerical results demonstrate that the proposed controller with the auxiliary dynamic model converges faster than without it and that the proposed controller is effective in handling input saturation.

Figure 6. The trajectory tracking errors of LTASMC (with and without auxiliary dynamic model) in case 2.

Figure 7. The input torque results of LTASMC (with and without auxiliary dynamic model) in case 2.

Figure 8. The trajectory tracking errors of three controllers ((a), (b), and (c)) and input torque of three controllers ((d), (e), and (f)) in case 3.

Case 3: From Fig. 8 and Fig. 5(b), it is clear that, with the help of the vision-admittance-based reference trajectory generator, the proposed controller and the conventional adaptive sliding mode controller can simultaneously track the predefined image feature trajectory and force command under system uncertainty and input saturation. From Fig. 8, the convergence times of the proposed controller, the conventional adaptive sliding mode controller, and the parallel vision/force controller are 0.7 sec, 0.8 sec, and 0.9 sec, respectively. Fig. 8(d), 8(e), and 8(f) show that the proposed controller has the smallest chatter, while the conventional adaptive sliding mode controller and the parallel vision/force controller exhibit similar chatter. As shown in Fig. 5(b), the force converging speed in Case 3 is similar to that in Case 2. However, when the system converges, the maximal force error of the proposed controller is the smallest at $0.1$ N, while those of the conventional adaptive sliding mode controller and the parallel vision/force controller are $0.2$ N and $0.5$ N, respectively. In Case 3, the robot system under the proposed controller is more robust to systematic uncertainties and feedback noise than under the other two controllers. Furthermore, the proposed controller performs better than the others in terms of input chatter and control accuracy.

Generally, from the simulation results, we conclude this section by pointing out that in Case 1, by tracking the image feature trajectory obtained from the proposed vision-admittance-based model, we can achieve accurate predefined trajectory tracking and compliant contact force control at the same time. In Case 2, the auxiliary dynamic model added to the LT adaptive sliding mode controller effectively compensates for the input saturation. In Case 3, the robustness and adaptability of the proposed controller are validated.

6. Conclusion

In this paper, we propose a novel vision-based adaptive leakage-type sliding mode admittance controller to handle the position/force control problem of collaborative robots with input saturation. A vision-admittance-based reference trajectory generator combines visual information and force sensing into a reference trajectory in the image feature space. A novel adaptive sliding mode controller with the auxiliary dynamic model is applied to perform the trajectory tracking control and deal with the effects of the input saturation. A leakage-type (LT) adaptive law is used in the proposed controller to reduce the impacts of system uncertainties without knowing their upper bound in advance. The stability analysis verifies that this controller can stabilize the robot system with input saturation and uncertainties in finite time. The simulations are carried out on a 6-DOF collaborative robot, and the results confirm the effectiveness of the proposed controller. Moreover, compared with the conventional adaptive sliding mode controller and the parallel vision/force controller, the controller designed in this paper performs better in terms of robustness, precision, input chatter, and convergence speed. Considering that the robot may interact with an unknown surface, in the future we will study vision/force collaborative control methods based on predictive control theory and multi-objective optimization to improve the interaction accuracy. The experimental platform is being built, and further experiments will be carried out to verify the effectiveness of the proposed method.

Author contributions

Cong Huang and Minglei Zhu conceived and designed the study. Cong Huang and Shijie Song conducted data gathering. Cong Huang and Minglei Zhu performed statistical analyses. Cong Huang, Yuyang Zhao, and Jinmao Jiang wrote the article.

Financial support

This work was supported by the National Natural Science Foundation of China (Grant No. 62303095) and the Natural Science Foundation of Sichuan Province (2023NSFSC0872).

Competing interests

The authors declare that no competing interests exist.

Ethical approval

None.

References

Nabat, V., de la O Rodriguez, M., Company, O., Krut, S. and Pierrot, F., “Par4: Very High Speed Parallel Robot for Pick-and-Place,” In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE, 2005) pp. 553–558.
Yang, G., Chen, I.-M., Yeo, S. H. and Lin, W., Design and Analysis of a Modular Hybrid Parallel-Serial Manipulator for Robotised Deburring Applications (Springer London, London, 2008) pp. 167–188.
Xu, P., Li, B. and Chueng, C.-F., “Dynamic Analysis of a Linear Delta Robot in Hybrid Polishing Machine Based on the Principle of Virtual Work,” In: 18th International Conference on Advanced Robotics (ICAR) 2017 (IEEE, 2017) pp. 379–384.
Wu, J., Gao, Y., Zhang, B. and Wang, L., “Workspace and dynamic performance evaluation of the parallel manipulators in a spray-painting equipment,” Robot Com-Int Manuf 44, 199–207 (2017).
Oyman, E. L., Korkut, M. Y., Yilmaz, C., Bayraktaroglu, Z. Y. and Arslan, M. S., “Design and control of a cable-driven rehabilitation robot for upper and lower limbs,” Robotica 40(1), 1–37 (2022).
Raibert, M. H. and Craig, J. J., “Hybrid position/force control of manipulators,” J Dyn Syst Meas Cont 103(2), 126–133 (1981).
Chiaverini, S. and Sciavicco, L., “The parallel approach to force/position control of robotic manipulators,” IEEE Trans Robot Autom 9(4), 361–373 (1993).
Zhu, M., Huang, C., Qiu, Z., Zheng, W. and Gong, D., “Parallel image-based visual servoing/force control of a collaborative delta robot,” Front Neurorobotics 16 (2022). https://doi.org/10.3389/fnbot.2022.922704
Mason, M. T., “Compliance and force control for computer controlled manipulators,” IEEE Trans Syst Man Cyber 11(6), 418–432 (1981).
Liu, C. and Li, Z., “Force tracking smooth adaptive admittance control in unknown environment,” Robotica 41(7), 1991–2011 (2023).
Huang, A.-C., Lee, K.-J. and Du, W.-L., “Contact force cancelation in robot impedance control by target impedance modification,” Robotica 41(6), 1733–1748 (2023).
Qi, R., Tang, Y. and Zhang, K., “An optimal visual servo trajectory planning method for manipulators based on system nondeterministic model,” Robotica 40(6), 1665–1681 (2022).
De Schutter, J. and Baeten, J., Integrated Visual Servoing and Force Control: The Task Frame Approach (Springer Science & Business Media, Berlin, Heidelberg, 2003).
Bellakehal, S., Andreff, N., Mezouar, Y. and Tadjine, M., “Vision/force control of parallel robots,” Mech Mach Theory 46(10), 1376–1395 (2011).
Zhou, Y., Li, X., Yue, L., Gui, L., Sun, G., Jiang, X. and Liu, Y.-H., “Global Vision-Based Impedance Control for Robotic Wall Polishing,” In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2019 (IEEE, 2019) pp. 6022–6027.
Lippiello, V., Fontanelli, G. A. and Ruggiero, F., “Image-based visual-impedance control of a dual-arm aerial manipulator,” IEEE Robot Auto Lett 3(3), 1856–1863 (2018).
Odira, J. K., Andreff, N., Petiet, L. and Byiringiro, J. B., “Visual servoing of a laser beam through a mirror,” Robotica 40(9), 3157–3177 (2022).
Hu, Q., “Adaptive output feedback sliding-mode manoeuvring and vibration control of flexible spacecraft with input saturation,” IET Cont Theory Appl 2(6), 467–478 (2008).
He, W., Dong, Y. and Sun, C., “Adaptive neural impedance control of a robotic manipulator with input saturation,” IEEE Trans Syst Man Cybernet: Syst 46(3), 334–344 (2015).
Zhou, Q., Zhao, S., Li, H., Lu, R. and Wu, C., “Adaptive neural network tracking control for robotic manipulators with dead zone,” IEEE Trans Neur Net Lear Syst 30(12), 3611–3620 (2019).
Dai, X., Song, S., Xu, W., Huang, Z. and Gong, D., “Modal space neural network compensation control for Gough-Stewart robot with uncertain load,” Neurocomputing 449, 245–257 (2021).
Haninger, K., Hegeler, C. and Peternel, L., “Model Predictive Control with Gaussian Processes for Flexible Multi-Modal Physical Human Robot Interaction,” In: International Conference on Robotics and Automation (ICRA) 2022 (IEEE, 2022) pp. 6948–6955.
Ahmadi, B., Xie, W.-F. and Zakeri, E., “Robust cascade vision/force control of industrial robots utilizing continuous integral sliding-mode control method,” IEEE/ASME Trans Mechatr 27(1), 524–536 (2021).
Qian, S., Zhao, Z., Qian, P., Wang, Z. and Zi, B., “Research on workspace visual-based continuous switching sliding mode control for cable-driven parallel robots,” Robotica 42(1), 1–20 (2024).
Cheng, J., Zhang, X., Sun, T. and Yang, H., “Robust Impedance Control for a Five-Bar Parallel Robot Based on Uncertainty Estimation,” In: International Symposium on Autonomous Systems (ISAS) 2020 (IEEE, 2020) pp. 137–141.
Tuan, L. A. and Ha, Q. P., “Adaptive fractional-order integral fast terminal sliding mode and fault-tolerant control of dual-arm robots,” Robotica, 1–24 (2024).
Jia, S. and Shan, J., “Finite-time trajectory tracking control of space manipulator under actuator saturation,” IEEE Trans Ind Electron 67(3), 2086–2096 (2019).
Fu, X., Ai, H. and Chen, L., “Integrated sliding mode control with input restriction, output feedback and repetitive learning for space robot with flexible-base, flexible-link and flexible-joint,” Robotica 41(1), 370–391 (2023).
Plestan, F., Shtessel, Y., Bregeault, V. and Poznyak, A., “New methodologies for adaptive sliding mode control,” Int J Control 83(9), 1907–1919 (2010).
Baek, J., Jin, M. and Han, S., “A new adaptive sliding-mode control scheme for application to robot manipulators,” IEEE Trans Ind Electron 63(6), 3628–3637 (2016).
Shao, K., Zheng, J., Wang, H., Wang, X. and Liang, B., “Leakage-type adaptive state and disturbance observers for uncertain nonlinear systems,” Nonlinear Dynam 105(3), 2299–2311 (2021).
Roy, S., Roy, S. B. and Kar, I. N., “Adaptive-robust control of Euler-Lagrange systems with linearly parametrizable uncertainty bound,” IEEE Trans Contr Syst Tech 26(5), 1842–1850 (2017).
Zhu, M., Briot, S. and Chriette, A., “Sensor-based design of a delta parallel robot,” Mechatronics 87, 102893 (2022).
Mariottini, G. L., Oriolo, G. and Prattichizzo, D., “Image-based visual servoing for nonholonomic mobile robots using epipolar geometry,” IEEE Trans Robot 23(1), 87–100 (2007).
Chaumette, F. and Hutchinson, S., “Visual servo control. I. Basic approaches,” IEEE Robot Autom Mag 13(4), 82–90 (2006).
Zhang, H. and Ostrowski, J. P., “Visual Servoing with Dynamics: Control of an Unmanned Blimp,” In: Proceedings 1999 IEEE International Conference on Robotics and Automation (Cat. No. 99CH36288C) (IEEE, 1999) pp. 618–623.
Fusco, F., Kermorgant, O. and Martinet, P., “A Comparison of Visual Servoing from Features Velocity and Acceleration Interaction Models,” In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2019 (IEEE, 2019) pp. 4447–4452.
Ozawa, R. and Chaumette, F., “Dynamic Visual Servoing with Image Moments for a Quadrotor Using a Virtual Spring Approach,” In: IEEE International Conference on Robotics and Automation 2011 (IEEE, 2011) pp. 5670–5676.
Wang, F., Liu, Z., Chen, C. L. P. and Zhang, Y., “Adaptive neural network-based visual servoing control for manipulator with unknown output nonlinearities,” Inform Sci 451-452, 16–33 (2018).
Merlet, J. P., Parallel Robots (Springer, Netherlands, 2006).
Briot, S. and Martinet, P., “Minimal Representation for the Control of Gough-Stewart Platforms via Leg Observation Considering a Hidden Robot Model,” In: IEEE International Conference on Robotics and Automation 2013 (IEEE, 2013) pp. 4653–4658.
Roy, S., Baldi, S. and Fridman, L. M., “On adaptive sliding mode control without a priori bounded uncertainty,” Automatica 111, 108650 (2020).
Karason, S. P. and Annaswamy, A. M., “Adaptive control in the presence of input constraints,” In: 1993 American Control Conference (IEEE, 1993) pp. 1370–1374.
Slotine, J.-J. E. and Li, W., Applied Nonlinear Control (Prentice Hall, Englewood Cliffs, NJ, 1991).
Gao, J., An, X., Proctor, A. and Bradley, C., “Sliding mode adaptive neural network control for hybrid visual servoing of underwater vehicles,” Ocean Eng 142, 666–675 (2017).