Location via proxy:   [ UP ]  
[Report a bug]   [Manage cookies]                
Next Article in Journal
Comparative Experimental Investigation on Optimal Parametric Array Types
Previous Article in Journal
Wearable Edge AI Applications for Ecological Environments
Previous Article in Special Issue
Three-Dimensional Unified Motion Control of a Robotic Standing Wheelchair for Rehabilitation Purposes
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

Virtual Reality-Based Framework to Simulate Control Algorithms for Robotic Assistance and Rehabilitation Tasks through a Standing Wheelchair

by
Jessica S. Ortiz
1,*,
Guillermo Palacios-Navarro
1,*,
Víctor H. Andaluz
2 and
Bryan S. Guevara
2
1
Department of Electronic Engineering and Communications, University of Zaragoza, 44003 Teruel, Spain
2
Departamento de Eléctrica y Electrónica, Universidad de las Fuerzas Armadas ESPE, Sangolquí 171103, Ecuador
*
Authors to whom correspondence should be addressed.
Sensors 2021, 21(15), 5083; https://doi.org/10.3390/s21155083
Submission received: 13 May 2021 / Revised: 18 June 2021 / Accepted: 23 July 2021 / Published: 27 July 2021

Abstract

:
The implementation of control algorithms oriented to robotic assistance and rehabilitation tasks for people with motor disabilities has been of increasing interest in recent years. However, practical implementation cannot be carried out unless one has the real robotic system availability. To overcome this drawback, this article presents the development of an interactive virtual reality (VR)-based framework that allows one to simulate the execution of rehabilitation tasks and robotic assistance through a robotic standing wheelchair. The virtual environment developed considers the kinematic and dynamic model of the standing human–wheelchair system with a displaced center of mass, since it can be displaced for different reasons, e.g.,: bad posture, limb amputations, obesity, etc. The standing wheelchair autonomous control scheme has been implemented through the Full Simulation (FS) and Hardware in the Loop (HIL) techniques. Finally, the performance of the virtual control schemes has been shown by means of several experiments based on robotic assistance and rehabilitation for people with motor disabilities.

1. Introduction

There are thousands of people worldwide with some type of physical disability, some of them due to congenital or birth diseases and some others due to spinal injuries caused by accidents or age-related problems. For years, people with motor disabilities have been belittled by society, considered to be a burden [1,2]. Nowadays, we are more aware of the limitations that people with disabilities face when performing actions or tasks of everyday life, and different mechanical methods, techniques and devices that can facilitate this group of people to integrate into society have emerged [3]. Depending on the degree of motor disability that affects a person, the use of canes, walkers, chairs, among other manual mechanisms, allow people to move independently. However, there is a group of people with disabilities in lower and/or upper limbs, or with severe motor dysfunctions who cannot manipulate conventional mechanical devices [1,2,4]. This group of people require permanent assistance, i.e., they depend on a third person to manipulate the device, get out of bed, use the toilet; in short, to carry out any type of daily activity, thus generating dependence on their family, friends or caregivers [5].
Technology developed in the area of the rehabilitation of people with disabilities considers that technological development must create bio mechanisms capable of coexisting with others aimed at performing tasks in changing work environments, for which control in their manipulation and locomotion approaches has been analyzed by various researchers [4,6,7,8,9]. Currently, the fusion between mechanics, electronics and software has allowed the development of robotic devices that facilitate a person to perform safe movements, and as well as providing a certain degree of autonomy to the person by offering motor assistance. These systems are known as assistance robots [4,10,11]. Among the most common autonomous or semi-autonomous robotic mechanisms for the assistance and rehabilitation of people with motor disabilities, we can highlight the following: walkers, autonomous wheelchairs, standing wheelchairs and exoskeletons, among others [9,11,12].
We find in the literature several studies focused on developing control strategies that allow a person with a motor disability to maneuver a robotic wheelchair through: electromyography (EMG) signals that receive movement of the neck and arm muscles [13,14]; electrooculography (EOG) signals, where control depends on the user’s eye movement [15]; electroencephalography (EEG) signals which are used to define the movement of the robotic wheelchair [10,14]; or even control via voice command [16]. The abovementioned works are intended for allowing the user to move around in a partially structured environment. On the other hand, according to the activities of daily living (ADL) that a person may carry out, there is a need for the person with a motor disability to continuously change their sitting position, and vice versa [3]. In this context, great interest has been generated in the scientific community to develop prototypes of a robotic standing wheelchair, in order to improve the quality of life of people with motor disabilities [4,10]. Thus, different control algorithms are currently being proposed for the execution of autonomous or semi-autonomous tasks in partially structured and unstructured environments, through the standing human–wheelchair system. Autonomous care and rehabilitation are considered among the most common tasks found in the literature, so the implementation of control algorithms must ensure safe and reliable robotic systems for the user [4,17,18].
Therefore, the evaluation of the control algorithms requires a considerable number of experimental tests, in order to correct possible errors in accuracy and precision. However, this process cannot be easily carried out due to external factors, such as: (i) availability: people with disabilities have limited movement, which leads to a time conflict in participating in the required experimental tests; (ii) accident risk: people with motor disabilities who participate in experimental tests of any type of bio mechanism are exposed to possible accidents since the reactions to avoid blows, injuries and fractures are reduced compared to a person who does not have a motor disability; and finally, (iii) lack of bio mechanisms: the high costs of either bio mechanisms or elements for their construction limit researchers in carrying out experimental tests, as they are required to verify the correct operation of the designed controls algorithms [19].
As explained in previous paragraphs, the simulation of robotic applications for the assistance and rehabilitation of people with motor disabilities is an essential step prior to the experimental implementation of new research proposals. The main objective of the simulation is to recreate the real behavior of the patient when being subjected to assistance and rehabilitation tasks, without putting at risk the integrity of the person and the robotic system in the development stage. In addition, the implementation costs are radically reduced by dispensing with the physical robotic system until the end of the development process.

Main Contributions of the Study

For all the above, and to overcome the different factors that prevent the implementation of control algorithms in a real robotic system, we think it is essential to implement different technological tools to solve this problem, in order to continue developing new research proposals aimed at autonomous or semi-autonomous control of the robotic system in the area of service robotics, specifically in the patient rehabilitation area. Therefore, in this work, the development of an interactive and virtual system was oriented to simulate advanced control strategies for rehabilitation tasks and robotic assistance for people with motor disabilities through a robotic standing wheelchair. Unlike the works available in the literature, the developed VR-based system considers the implementation of closed-loop control algorithms, through the techniques of FS and HIL. After reviewing the literature regarding the autonomous control of wheelchairs, it can be concluded that there are different works to solve the trajectory tracking problem, where the desired speed is equal to the derivative with respect to the time of the desired trajectory. In addition, we may find works where control strategies are implemented to solve the trajectory tracking problem, but when it comes to demonstrating the stability of the proposed controller, they consider the desired speed of the robotic system constant. The proposals found in the literature for the autonomous control of a standing chair are not the best. The movements of the chair–man system must not depend exclusively on the desired trajectory, nor must the speed of movement always be constant. Therefore, in this work, a control algorithm is proposed for the autonomous control of rehabilitation and robotic assistance tasks are based on solving the standing wheelchair path following problem defined in the axes with respect to the inertial reference system. The proposed controller considers that the desired speed of the standing wheelchair is variable and may depend on the parameters of the desired task or the vital signs of the person, which differs from the works found in the literature. On the other hand, the virtualized environment considers the kinematic and dynamic behavior of the standing human–wheelchair system; therefore, a dynamic model is proposed that considers the lateral displacement of the center of mass of the human–wheelchair system, which differs from works found in the literature. The lateral displacement of the center of mass can be generated by the bad posture of the person, amputation of limbs, or a person with spinal injury, among others. In addition, the kinematic model and the dynamic model consider as input signals the maneuverability velocities of the standing wheelchair, in a similar way to commercial robots. Another relevant difference is that in this work, the proposed virtual system considers the development of dynamic link libraries (DLLs) that generate shared memory (SM) in RAM. The SM allows the exchange of information, in real time, between the virtual system developed in the Unity 3D graphics engine and the MatLab mathematical software (the MathWorks Inc., Natick, MA, USA), in which the advanced control algorithm is implemented to simulate rehabilitation or robotic assistance tasks. Finally, the robustness of the proposed control scheme is mathematically analyzed, guaranteeing that the control errors are limited as a function of the velocity error. The velocity error is generated by the friction force between the robotic wheelchair and the selected surface in the virtual environment, thus resembling reality.
The article is organized as follows: Section 2 presents the state of the art, whereas Section 3 deals with the formulation of the problem and describes the proposal to be developed in this work. The kinematic and dynamic models featuring the robotic standing wheelchair velocities as inputs are presented in Section 4, whereas the development of the interactive and virtual environment is presented in Section 5. Section 6 deals with the design of the control algorithm for the execution of rehabilitation and autonomous assistance tasks, together with a robustness analysis of the proposed control scheme. The experimental results are presented in Section 7 and discussed in Section 8. Finally, Section 9 presents the main conclusions together with future work.

2. Review of Literature

Nowadays, the development of software that allows one to simulate work environments is booming, due to the interaction that it offers to the user with diverse multidisciplinary systems of certain complexity [20,21,22]. The purpose of work environments is to help and support the user during the fulfillment of a task, and also to evaluate the correct functioning of the system [22]. The technological advances of the last decade have allowed the expansion of the use of simulators in several areas, e.g., social sciences, engineering, robotics, medicine and rehabilitation, among others [23,24]. In the rehabilitation area, simulation software has become an ally because it allows the patient to perform a sequence of exercises in a more interactive way, avoiding the frustration and boredom that can be generated in the patient [25].
A review of the literature shows that there are simulators oriented to robotic applications and simulators for rehabilitation applications. (i) Commercial robotics simulators: among the commercial simulators applied to robotics are Gazebo, V-REP and Webots, among others [26]. The programming language of these simulators is mainly based on C++ and Phyton, and they are compatible with ROS (Robotic Operating System), which allows for direct communication with the scientific programming software Matlab. However, they lack the possibility of introducing the behavior of a human in the form of an avatar, a feature that is essential for research related to robotic assistance and rehabilitation; (ii) simulators for rehabilitation: among the main simulators under development oriented to rehabilitation tasks are: Development of Exergaming Simulator for Gym Training, a prototype simulator combines various gym and rehabilitation equipment (treadmill, exercise bike, etc.) with virtual environments, games, sports applications, immersive gaming view and advanced motion controllers [27]. For the cognitive rehabilitation process, the scientific community is developing different prototypes of robotic assistants that consider virtual reality, augmented reality and mixed reality [28,29,30]. In [29], a simulator considering a wheelchair for Parkinson’s tremor testing is presented. A rehabilitation process for the restoration of lower limb gait is presented in [30]. In the works found in the literature, the applications developed only consider the virtual environment as a 3D plotter, which does not consider the dynamics of movement of the human–robot system, nor does it allow the implementation of assistance or rehabilitation tasks autonomously.
Currently, Unity3D (Unity Software Inc., San Francisco, CA, USA) is one of the most widely used 3D graphics engines for the development of simulators for robotic applications and for physical–cognitive rehabilitation tasks [31]. The advantage of Unity 3D is the compatibility with different formats, low latency of data exchange in real time, versatility to interact with other software, integrated supports for video cards and support for VR devices [32,33]. In this context, virtual environments can be designed to enable people with motor disabilities to perform assistive and rehabilitative tasks considering activities of daily living. Virtual environments developed for rehabilitation and robotic assistance applications for people with motor disabilities should be interactive environments that allow implicit interaction and sensory immersion of the user, thus ensuring that the experience in the virtual environment is as similar as possible to the experience in the real world [34,35].

3. Problem Formulation

The control algorithms for any developed standing wheelchair robot must be evaluated through different experimental tests to verify their robustness, stability and efficiency. To accomplish this, it is essential to have the standing wheelchair robot. In many cases, this is a problem because the purchase or construction of the standing wheelchair represents a high cost for universities, research centers or companies focused on the development of assistance robots for people with physical disabilities. In addition, experimental tests are considered risky, since people with physical disabilities are exposed to some kind of accident. When evaluating the operation of the control algorithms, sudden movements can occur that may lead to blows, falls and injuries, because people with physical disabilities do not have the same reflexes and reaction skills to face these events as a person without physical disabilities. Table 1 presents the four alternatives for the implementation and evaluation of control schemes [32].
Due to the aforementioned drawbacks, when implementing closed-loop control algorithms with no possibility of having the robotic system, it is recommended to use a technique that emulates the real behavior of a robot–human system. Therefore, and considering that a robotic system is not available for the implementation and evaluation of control algorithms, this work proposes the implementation of control schemes based on the FS and HIL techniques, respectively, in order to implement and evaluate control schemes for the assistance and rehabilitation of people with motor disabilities via robotic wheelchairs. For the two proposed implementation techniques, the emulation of the human–wheelchair system in a 3D VR environment has been considered, as shown in Figure 1 and Figure 2, respectively.
Figure 1 shows the implemented control scheme considering the FS technique. The implementation considers two main parts that make up a closed-loop control scheme, defined as: (i) Target Controller: this block is within the mathematical software that allows the implementation of control algorithms, in charge of correcting control errors to accomplish the desired task to be performed; (ii) Virtual Environment: this block fulfills the function of simulating the behavior of a robotic system which interacts with a 3D virtual environment. This block considers the mathematical modeling that represents the kinematics and dynamics of the robotic system, including disturbances that affect the system (e.g., friction between the robot and the environment, noise at the input and output of the robotic system, among others).
Figure 2 details the three main parts that make up a closed-loop control scheme considering the HIL technique, defined as: (i) Target Controller: this contains the control algorithm in charge of correcting possible errors between the reference signal and the output; (ii) Real-time Simulation: this block fulfills the function of simulating the behavior of a robotic system, considering the mathematical modeling that represents both the kinematics and dynamics of the robotic system. In addition, this block can include disturbances that may affect the system and the sensor in charge of receiving the output signal; and (iii) Bilateral Communication: this is the communication channel in charge of communicating the real part of the process with the simulation part in real time.
FS and HIL techniques offer advantages in the process of implementing the control scheme, such as: reduced development times, evaluation of the robustness of the control algorithm against disturbances in the system, reliability in system data and analysis in the implementation of security protocols (essential for an assistance robot), among others. These techniques require the knowledge of both the kinematic and dynamic behavior of the robot–human system. Therefore, mathematical models are one of the main requirements to validate the correct operation of the control techniques to be implemented. It should be noted that the simulation of the robot–human system evolves in real time, through a system of differential equations, identified and validated with a real system.

4. Robotic Standing Wheelchair Modeling

This section describes the modeling of the standing wheelchair (see Figure 3) in order to be implemented in the 3D simulator proposed in this work. This work considers the kinematic modeling of the wheelchair, as well as the dynamic model of the robotic system with displacement of the center of mass.

4.1. Kinematic Modeling

This work is based on a non-holonomic mobile platform with standing. A robotic standing wheelchair is a differential drive mobile robot (DDMR) that can rotate freely around its vertical axis and move independently on the vertical axis. It is assumed that the human–wheelchair system with standing moves on ( X , Y , Z ) axis of a reference system < R > . The kinematic model of robot is confirmed by a set of three velocities represented at the spatial frame < W s w > . The displacement of the robot is guided by a linear velocity u , and two angular velocities ω ψ and ω ϕ , as shown in Figure 4.
In other words, the Cartesian motion of the standing wheelchair robot at the inertial frame < R > , is defined as
[ η ˙ x η ˙ y η ˙ z ψ ˙ ] = [ cos ψ a sin ψ b sin ψ + b cos ϕ sin ψ b sin ϕ cos ψ sin ψ a cos ψ + b cos ψ b cos ϕ cos ψ b sin ϕ sin ψ 0 0 b cos ϕ 0 1 0 ] [ u ω ψ ω ϕ ] η ˙ s w ( t ) = J ( ψ , ϕ ) μ ( t )
where a and b are distances; η ˙ x , η ˙ y , η ˙ z and ψ ˙ are the point interest velocities (whose position is being controlled) with respect to the inertial frame < R > ; J ( ψ , ϕ ) R m   x   n represents the Jacobian matrix that defines a linear mapping between the velocities vector η ˙ s w ( t ) R m with m = 4 and of the standing wheelchair maneuverability velocities vector μ ( t ) R n with n = 3 .

4.2. Standing Wheelchair Dynamic Model

In this subsection, the dynamic modeling of the standing wheelchair robot is presented, for which a separate analysis is considered. For the dynamic model of the wheelchair without standing, it is assumed that the human–wheelchair system moves on a planar horizontal surface, where the vertical disturbances have been neglected, whereas for the dynamic model of standing, only linear motion about the Z axis is considered, as shown in Figure 5.
The dynamic model of a robotic system can be obtained through the force equilibrium approach established by Newton’s second law, or its equivalent for rotational movements, the so-called Euler’s law [4]. However, in this work, a simple and systematic conceptualization is considered through the kinetic and potential energy balance approach established by the Lagrange formulation [36].
The Lagrange formalism is used to derive the dynamic equations of the human–wheelchair system. In the case of the dynamic model of the wheelchair without standing the potential energy P ( q ) = 0 , because the trajectory of the wheelchair is constrained to the horizontal plane. Thus, the kinetic energy is given by,
L = K = 1 2 ( m w + m h ) v 2 + 1 2 I ω ψ 2
where m = m w + m h represents the human–wheelchair system mass, in which m h is the human mass and m w is the wheelchair mass; v 2 = η ˙ x p 2 + η ˙ y p 2 is the velocity of the wheelchair on the X Y plane; I is the inertia moment of the wheelchair–human system.
On the other hand, for the dynamic model of standing, the Lagrangian equation is defined as,
L = 1 2 m h η ˙ z 2 m h g ( h z + b sin ( ϕ ) )
where η ˙ z ( t ) = ω ϕ ( t ) b cos ϕ ( t ) , h z is the constant height of the wheelchair seat.
Therefore, it is possible to obtain a dynamic model that considers both linear velocity and angular velocities as input signals, as commercial robots have [37].
[ μ r e f p ( t ) 2 x 1 ω ϕ r e f ( t ) 1 x 1 ] = [ M p ( ς ) 2 x 2 0 2 x 1 0 1 x 2 M b ( ϕ , φ ) 1 x 1 ] [ μ ˙ p ω ˙ ϕ ] + [ C p ( ς , μ p ) 2 x 2 0 2 x 1 0 1 x 2 C b ( ϕ , ϕ ˙ , φ , φ ˙ ) 1 x 1 ] [ μ p ω ϕ ] + [ 0 2 x 1 g ( ϕ ) 1 x 1 ] μ r e f ( t ) = M ( ϕ , φ , ς ) μ ˙ + C ( ϕ , ϕ ˙ , φ , ς , μ ) μ + g ( ϕ )
where M ( ϕ , φ , ς ) R n x n with n = 3 represents the inertia matrix of the standing human–wheelchair system; C ( ς , μ ) R n x n represents the centripetal and Coriolis forces; g ( ϕ ) R n represents the gravitational vector; μ = [ u ω ψ ω ϕ ] R n is the vector of system’s velocity; and μ r e f = [ u r e f ω ψ r e f ω φ r e f ] R n is the vector of velocity control signals for the standing human–wheelchair system; and ς = [ ς p ς b ] R l with l = l p + l b = 22 is the vector of dynamic parameters, which contain the physical, mechanical and electrical parameters of the human–wheelchair system. For more details on the dynamic model, see Ortiz’s proposal in [37]. Appendix A shows the dynamic parameters of the standing wheelchair.

5. Virtual Environment

Virtual environments intended for rehabilitation should consider a virtual environment that allows robot–human interaction with every day, real-life situations. Therefore, this section describes the development of a 3D virtual simulator that allows people with motor disabilities to perform autonomous rehabilitation and assistance tasks. The virtual environments developed are related to everyday tasks in a person’s real life, with the aim of evaluating the performance of closed-loop control algorithms in a more realistic way.
The implementation scheme of the Virtual Standing Human–Wheelchair System simulator (VSWHS), is presented in Figure 6, which consists of external graphic resources that are executed on a Unity3D graphic engine. The proposed scheme consists of four main blocks: (i) external resources, which consider the development of 3D objects to be included in the virtual environment; (ii) 3D graphics engine, which contains the implementation of external resources and programming scripts that allow the simulation of robot–human interaction in a virtual environment; (iii) virtual devices, which allow user immersion and interaction with the virtual environment; and finally (iv) control algorithm, which allows the implementation of closed-loop control algorithms, in order to carry out rehabilitation or autonomous assistance tasks for people with motor disabilities.

5.1. External Resources

External resources are essentially made up of three groups: (i) virtualized scenario, referring to scenarios related to ADL, to evaluate rehabilitation and autonomous assistance tasks for people with partial or total motor disabilities; (ii) virtualized robot, related to 3D modeling of the standing wheelchair and its assembly (it is carried out in the Solid Works (SolidWorks Corp., Waltham, MAassachusetts, USA) CAD software). This process is based on the dimensions and physical characteristics of a real wheelchair; and (iii) avatar, which represents the person or user who will be participating in the use of the simulator, and whose character is modeled in the Autodesk Maya software taking into account the anthropomorphic dimensions of the average individual (see Figure 7).

5.2. Graphics Engine

Unity has been considered as a 3D graphics engine. Unity is a multiplatform video game engine created by Unity Technologies. Unity is available as a development platform for Microsoft Windows, Mac OS, and Linux [32]. We separated the virtual environment development process into two main parts: 3D scene development and programming of the virtual environment control scripts, respectively.

5.2.1. Virtual Scene

This subsection describes the 3D scenes developed for applications aimed to simulate rehabilitation proposals and robotic assistance. In addition, the proposed system considers the implementation of a user interface, which allows one to define the simulation parameters, e.g., desired task, virtual environment to execute the desired task and physical characteristics of the avatar, among others.
(a) User Interface (UI). This was developed to allow easy and intuitive interaction with the program to start up the virtual control scheme and allow the user to visualize the evolution of the system, as well as the data represented as variables of the states of the robot–human system. An important feature is that, depending on the dynamic disturbance data of the controller, the height and weight of the avatar can be modified to simulate in a more reliable and credible way the real behavior of the robot–human system. Another important detail is the development of a real-time graphics system that allows the visualization of the control errors evolution locally in the graphics engine without the need to pay attention to the scientific programming software (see Figure 8).
(b) Realism and Rendering. The development of 3D scenes is a fundamental process to create realistic virtual environments that are capable of deceiving the user’s senses. Thus, when importing external resources, it is necessary to make some virtualized environment and robot configurations.
The Meshing stage considers the data of the vertices and faces of the objects aimed at taking the geometry from the Mesh Filter and renders it at the position defined by the GameObject’s Transform component. The Material stage defines the textures, material properties, and the Lighting and Lightmapping components of the imported external resources. In order to optimize the graphic rendering performance in the Shaders stage, each external resource is customized through specialized scripts that contain mathematical algorithms that calculate the color of each rendered pixel based on the lighting input and the material configuration. Finally, these settings are stored in “Prefabs” for later use.

5.2.2. Scripting Stage

One of the functionalities defined as a set of the most relevant public classes when implementing a 3D virtual environment that allows one to emulate the behavior of a robot–human system, is the kinematic and dynamic modeling block of the robotic system. It should be noted that the proposed dynamic model (Equation (4)) allows one to modify the avatar weight and considering external disturbances, which can be generated by sliding on smooth surfaces, or by the noise generated at the inputs of the maneuverability commands and at the outputs of the robot, for example. The sliding of the wheels is affected by the friction forces that are generated according to the type of soil in the virtual environment where the wheelchair performs the desired task. Figure 9 shows the model block of the robotic system considered in this work, where both the mathematical models representing a wheelchair and the actual robotic system consider the same input and output signals.
On the other hand, the scripts contain the code blocks with the necessary instructions that determine the functionality of a set of tools, data and components that make up the 3D virtual simulator. In this layer, the dedicated libraries (SDK—Software Development Kit) of the virtual input and output devices are managed, which allow communication and interaction with each other. In addition, these blocks manage the components involved in the scene, such as the robotic system model, the audio controller, cameras, lighting, user interface (UI) and the generation of fictitious forces that, together, simulate real conditions which robots are subjected to during operation (see Figure 10).
Through the dynamic modification of the mesh of the avatar model in its masculine and feminine version, it is possible to modify the physical appearance representing the accumulation of fat based on the configured weight. In the same way, it is also possible to modify the height, maintaining an anthropomorphic proportion of the human body. At this stage, animation of the movement of the robotic wheelchair is also performed based on the workspace that is defined by the control algorithm. In a similar way, the animation frames of the avatar are synchronized according to the state variables of the standing wheelchair.

5.3. Inter-Process Communication—Shared Memory

The exchange of information between memory segments is a feature of operating systems, that enables one to share information. Taking into account the information provided by [27], in this work, we implemented the shared memory method, since it is an easy technique to apply, with short delays and low computational cost because no third party functions are used. Figure 11 presents the data exchange scheme based on shared memory, proposed in this work. For the FS technique, data exchange is considered between the 3D simulator that is developed in the Unity graphics engine, and the mathematical software in which the wheelchair control algorithm is implemented. On the other hand, for the HIL technique, data exchange is considered between Unity and the target hardware in which the wheelchair control algorithm is implemented.
Figure 12 contains scripts that allow the exchange of information between the virtual environments with mathematical software, through the use of a dynamic link library (DLL) that generates a shared memory in RAM (SM) for the exchange of data between different software packages. By means of the SM, the control actions calculated in the destination controller are injected into the mathematical model of the robotic system. The model of the robotic system calculates its position and velocity outputs, which are sent to the mathematical software, thus closing the control loop through the feedback of the robot’s output states.

6. Control Algorithm Design

The proposed control algorithm for the execution of rehabilitation and autonomous assistance tasks must be implemented according to the technique to be used. That is, for the FS technique, a different mathematical software hosted on the same computer as the virtual environment is considered. Regarding the HIL technique, a hardware of a different kind than the computer where the 3D virtual environment is hosted is considered. On the other hand, with the aim of executing autonomous rehabilitation or robotic assistance tasks for people with motor disabilities, an advanced control algorithm is proposed to solve the problem of following the desired path P ( s ) R 3 , not parameterized in time, defined on ( X , Y , Z ) axis of a inertial reference frame < R > .
Figure 13 shows the wheelchair path-following problem, where P d = [ P x P y P z ] R 3 defines the closest point between the standing wheelchair and the desired path P ( s ) . In addition, it is considered that the desired velocity of the wheelchair can be variable, which differs from works found in the literature, in which it is considered that the desired velocity is constant. In this work, the velocity can be defined according to the characteristics of the desired task, i.e., υ d ( t ) = f ( υ max , P , η ) . The proposed control algorithm will consider a non-linear control law based on the kinematic model of the robotic standing wheelchair (see Figure 14).
The proposed controller considers the saturation of the μ min < μ r e f ( t ) < μ max velocity commands, and receives as input signals P ( s ) | s [ s 0 , s f ] , which describe the desired motion task of the standing wheelchair, respective to the inertial frame R ( X , Y , Z ) . The problem of control to deal with—often called the inverse kinematics problem—is finding the control vector of maneuverability μ r e f ( t ) | t [ t 0 , t f ] . to achieve the desired operational motion. The corresponding evolution of the whole system is given by the actual generalized motion q ( t ) | t [ t 0 , t f ] . Hence, the control error is defined as η ˜ ( t ) = P d ( s ) η ( t ) , and consequently, the control aim is expressed as lim t η ˜ ( t ) = 0 R m . The desired velocity of the standing wheelchair will depend on the task, the control error, the angular velocity, etc. In this case, it is considered that the reference velocity depends on the control errors and the angular velocity. It is defined as [4]:
| υ d | = v max 1 + k η ˜ η ˜ + k Γ Γ P
where v max is the desired maximum velocity on the desired path P ( s ) ; k η ˜ and k Γ are positive constants that are control error and radius of curvature of P ( s ) , respectively. The radius of curvature is defined as [38],
Γ P ( t ) = P ˙ × P ¨ P ˙ 3 .
The proposed control scheme considers the kinematics of the standing wheelchair represented by Equation (1), without considering the variation of the orientation, since due to its mechanical configuration, the wheelchair is oriented tangentially to the desired path profile:
[ η ˙ x η ˙ y η ˙ z ] = [ cos ψ a sin ψ b sin ψ + b cos ϕ sin ψ b sin ϕ cos ψ sin ψ a cos ψ + b cos ψ b cos ϕ cos ψ b sin ϕ sin ψ 0 0 b cos ϕ ] [ u ω ψ ω ϕ ] η ˙ ( t ) = J s w ( ψ , ϕ ) μ ( t )
Thus, the following control law is proposed for the standing wheelchair robot:
μ r e f ( t ) = J s w 1 ( υ d ( t ) + Γ tanh ( Γ 1 κ η ˜ ( t ) ) )
where J s w 1 is the inverse Jacobian matrix of J s w ( ψ , ϕ ) ; κ and Γ are the definite positive diagonal matrices that weigh the control error η ˜ ( t ) = P d ( s ) η ( t ) . In order to include an analytical saturation of velocities in the standing wheelchair robot, the tanh ( . ) function, which limits the control errors η ˜ ( t ) is proposed. The expressions tanh ( . ) denote a component by component operation. Additionally, υ d ( t ) represents the desired velocities vector on the desired path:
υ d ( t ) = [ v x v y v z ] = [ | υ d | cos ( β ) cos ( α ) | υ d | cos ( β ) sin ( α ) | υ d | sin ( β ) ] .
The vector | υ d | represents the modulus of the desired velocity; v x , v y and v z are the projections of υ d on the direction of the X , Y and Z axes, respectively, while α represents the orientation of the projection of γ on the X Y plane measured from the X axis of the < R > reference system; and β is the angle between the tangent vector γ with the X Y plane The angles are determined by:
α ( t ) = tan 1 ( P ˙ y P ˙ x )   and   β ( t ) = tan 1 ( P ˙ z ( P ˙ x , P ˙ y ) ) .

Robustness Analysis

The behavior of the control error of the interest point of the standing wheelchair is analyzed considering errors in velocity tracking, i.e., ε ( t ) = μ ˜ ( t ) = μ r e f ( t ) μ ( t ) . The velocity error can be caused by unwanted disturbances on the robotic chair. Therefore, by substituting Equation (8) in (7), the close loop equation is obtained:
υ d ( t ) = η ˙ ( t ) Γ tanh ( Γ 1 κ η ˜ ( t ) ) + J s w μ ˜ ( t ) .
Remember that the desired velocity vector υ d ( t ) is different from the time derivative of the desired path. Now, defining difference signal γ ( t ) as γ ( t ) = d d t P ( s ) υ d ( t ) and remembering that η ˜ ˙ ( t ) = d d t P ( s ) η ˙ ( t ) Equation (11) can be written as:
η ˜ ˙ ( t ) = γ ( t ) Γ tanh ( Γ 1 κ η ˜ ( t ) ) + J s w μ ˜ ( t )
Remark 1.
The desired velocity vector υ d ( t ) is tangent to the desired path P ( s )  and is collinear to the vector of the derivative of the desired path. Then, γ ( t ) is also a collinear vector to υ d ( t ) and d d t P ( s ) .
For the robustness analysis, the following Lyapunov candidate function is considered: V ( η ˜ ( t ) ) = 1 2 η ˜ T η ˜ . Its time derivative on the trajectories of the system is, V ˙ ( η ˜ ( t ) ) = η ˜ T γ η ˜ T Γ tanh ( Γ 1 κ η ˜ ) + η ˜ T J μ ˜ ( t ) . A sufficient condition for V ˙ ( η ˜ ( t ) ) to be negative definite is,
| η ˜ T Γ tanh ( Γ 1 κ η ˜ ) | > | η ˜ T ( γ + J s w μ ˜ ( t ) ) |
For large values of η ˜ ( t ) , the condition in the Equation (13) can be reinforced as, η ˜ T Γ tanh ( Γ 1 κ η ˜ ) > η ˜ T γ + J s w μ ˜ ( t ) . Then, V ˙ ( η ˜ ( t ) ) will be negative definite only if Γ > γ + J s w μ ˜ ( t ) / tanh ( Γ 1 κ η ˜ ) . Hence, the control errors η ˜ ( t ) decrease, while for small errors values of η ˜ ( t ) , the error is ultimately bound by:
η ˜ < κ a u x J s w μ ˜ ( t ) + γ ( t ) ς λ min ( κ ) tanh ( κ a u x ) ; with   0 < ς < 1
If the velocity errors are bound, then, it can be concluded that the control error is also ultimately bound by Equation (14). The velocity error is generated by the frictional forces between the wheelchair and the surface where the desired task is being performed. The friction forces change according to the coefficient of friction between surfaces; therefore, the velocity error is different from zero, but it is bound μ ˜ ( t ) < k μ ˜ , with k μ ˜ being a positive constant.

7. Experimental Results

This section presents the results obtained from the developed virtual environment and the proposed control scheme. This section is divided into four parts. First, we introduce the virtual simulator with the interactive windows that allow the configuration of the VR environment and the physical characteristics of the avatar. Second, we present the results obtained from the implementation of the advanced control algorithms for autonomous rehabilitation and robotic assistance tasks (HIL and FS simulation techniques are considered in the tests). Third, we present the hardware performance and computational cost of the computer when running the developed virtual environment. Finally, the results of a usability test are presented, for a group of 20 people who experimented with the developed virtual system.

7.1. Virtual Human–Wheelchair System Simulator

This subsection presents the user interface (UI) developed in this work. In addition, the configuration of the virtual simulator for the execution of rehabilitation tasks and robotic assistance for people with motor disabilities is shown. The UI allows one to navigate through a series of windows that allow one to modify and store information about the executed task.
Figure 15 shows the configuration scene of the informative data of the avatars that are used in the execution of the virtual desired tasks, e.g., name, gender, age, height and weight. The configuration of all the data enables the customization of the appearance of the avatar, with options including skin, hair, eyes, underwear, shirt and pants; each one with the possibility of modifying the type of material that determines the texture and color of the object. In addition, the configuration scene allows one to select the virtualized scenario where the experiment will be carried out. For this project, four available scenarios were developed.
Different virtual scenarios showing for the activities of daily living were developed to carry out autonomous assistance tasks. Figure 16 shows the virtualized scenarios, for which two types of visual art were considered: (i) High Definition Render Pipeline (HDRP), in which advanced visual optimization and lighting techniques were implemented to emulate the visual stimuli as faithfully as possible, as perceived in the real world; and (ii) Low poly style, which uses a small number of polygons in 3D models, with the purpose of seeking the abstraction of the elements and that the form takes over the design in such a way that a minimalist appearance is generated that encourages the user’s creativity to a certain extent during the execution of the experiment.

7.2. Control Scheme Implementation

This subsection shows the behavior of the implemented control schemes (based on the FS and HIL techniques described in Section 3 through experimental tests. In the implementation, the real-time interaction between human–wheelchair and the virtual environment was considered. The mathematical modeling of the human–wheelchair system presented in Section 4 and the development of the virtual environment presented in Section 5 were taken into account for virtual interaction. The control algorithm proposed in Section 6 was implemented in the target Hardware, according to the aforementioned techniques (HIL or FS).

7.2.1. Experiment 1

The first experiment considers the implementation of the FS technique, aimed at executing an autonomous assistance task. A male avatar was configured with an age of 35 years, a height of 1.75 (m), and a weight of 100 (kg). This information was included in the dynamic model of the standing wheelchair–human system represented by Equation (4). Additionally, the HDRP virtual environment representing a neighborhood environment was considered (see Figure 16c). For this experiment, the aim was to follow a desired path that allowed the autonomous displacement of the human–wheelchair system from an initial position P o to a final position P d . The desired task was selected since the transfer of a person between two points is a common action of daily life. Figure 17 shows the desired path for the human–wheelchair system, obtained from the virtual scenario through a non-linear regression that determines the values of the parameters associated with the best fit curve.
Once the desired path was obtained, the desired path vectors were defined η d x and η d y with respect to the X Y plane of the inertial reference system R ( X , Y , Z ) . A constant posture was considered for the movement of standing on the Z axis, defined by η d z = 0.5   [ m ] , the value representing the distance from the point of interest of control to the ground, i.e., a value that corresponds to the status of the avatar sitting in the wheelchair while executing the desired task. For the autonomous task execution, the control law proposed in Equation (8) was implemented, where the controller parameters were defined as: initial conditions of the robot η o = [ 54 3 0.85 ] [ m ] ; desired path η d = [ η d x η d y η d z ] ; weight matrix of control errors Γ = d i a g ( 1.8 ,   1.8 ,   1 ) and κ = d i a g ( 1.1 ,   1.1 ,   0.5 ) . A sampling time of T 0 = 0.1   [ s ] was set.
The evaluation of the autonomous assistance task was carried out through the analysis of the response curves of the proposed control algorithm. Figure 18, Figure 19, Figure 20 and Figure 21 show the results of the first experiment. Figure 18 shows the virtual stroboscopic movement of the robot–human system, based on real data.
Figure 19 shows that the control errors η ˜ ( η ˜ x , η ˜ y , η ˜ z ) R 3 converge to values close to zero asymptotically, i.e., achieving final feature errors max | η ˜ ( t ) | < 0.04   [ m ] , since the velocity errors are bounded and different from zero μ ˜ ( t ) = μ r e f ( t ) μ ( t ) 0 R 3 , as shown in Figure 20.
Figure 21 shows the control actions injected into the standing wheelchair robot during the experimental test. From the results obtained, the adequate performance of the proposed controller was verified.

7.2.2. Experiment 2

The second experiment considers the implementation of the HIL technique. A female avatar was configured with an age of 21 years, a height of 1.6 (m) and a weight of 67 (kg). This information was included in the dynamic model of the human–wheelchair system represented by Equation (4). In addition, we used the virtual environment that represents a house (Figure 19). The experiment considered a task applied to autonomous rehabilitation routines, in which the standing movement is performed sinusoidally. The desired movement was considered a low frequency of movement, in order not to cause abrupt movements to the patient or unwanted injuries. For people with motor disabilities in their lower extremities, standing physical exercises are performed with the purpose of not losing muscle mass, reducing spasticity, preventing the appearance of ulcers, and it is even fundamental for physiological and social reasons, and to guarantee the correct development of the hip joint during childhood. Therefore, standing upright is key to avoid motor impairment in the case of neurological injuries or physical disability [39]. The desired task, desired velocity and initial conditions for the controller are defined in Table 2 for the experiment.
Unlike the first experiment, the movement of standing in the Z axis was variable, while the displacement was executed with respect to the X Y plane of the inertial reference system R ( X , Y , Z ) . For the autonomous task execution, the same control law proposed in Equation (8) was implemented, where the controller parameters are defined as: the weight matrices of control errors Γ = d i a g ( 1.8 ,   1.8 ,   1 ) and κ = d i a g ( 1.1 ,   1.1 ,   0.5 ) ; the gain constants to define the desired velocity based on the desired task k η ˜ = 1.4 and k Γ = 1.3 . Finally, a sampling time of T 0 = 0.1   [ s ] was set. Figure 22 shows the virtual stroboscopic movement of the robot–human system, based on real data.
The control errors η ˜ ( η ˜ x , η ˜ y , η ˜ z ) R 3 converge to values close to zero asymptotically, i.e., achieving final feature errors max η ˜ ( t ) < 0.06   [ m ] , as shown in Figure 23. Figure 24 shows velocity errors are bound and different from zero μ ˜ ( t ) = μ r e f ( t ) μ ( t ) 0 .
Velocity errors are caused by wheel slippage and by frictional forces between the wheelchair and the surface where the tests are being run (virtual environment). Therefore, the velocity errors are limited. In this experiment, the bound of the maximum velocity error is max μ ˜ ( t ) < 0.1 . Figure 25 shows the control actions applied to the standing wheelchair robot during the experimental test.

7.3. Hardware Performance

The experimental results presented in Section 7 were implemented on the target hardware, according to the mentioned techniques (HIL or FS). For the development of the experiments, a computer with advanced features was used (AMD Ryzen 5 3500×, NVIDIA® GeForce® GTX 1060 video card, 16 GB of RAM, 64-bit Windows 10 operating system), sound sources, and HCT VIVE pro VR glasses.
Figure 26 shows the computational performance of the graphics processing unit (GPU) when running the developed virtual simulator. The computational performance of the GPU briefly reaches 42% of the nominal performance. The moderate consumption of the computational capacity of the graphics card is attributed to the optimization of the graphics resources considered in the external resources design stage detailed in Section 5.
Figure 27 shows the performance of the central processing unit (CPU) when running the virtual simulator. The CPU performance is around 83% of computational capacity during the simultaneous execution of the Unity and MatLab software when implementing the control algorithms going forward during the experimental tests.
From the results shown in Figure 26 and Figure 27, it can be concluded that the used computer supports the execution of the developed virtual simulator. The computational performance of the computer is below the maximum performance threshold of the components that make up the hardware. The computational performance of the computer has a direct relationship with the execution time of the proposed control schemes.
It is important to mention that the computational performance of the computer has a direct relationship with the execution time of the proposed control schemes. Therefore, Figure 28 shows the execution time of the control scheme implemented with the FS technique for each sampling time. The machine time in each sampling period is between 10 and 11 (ms). Therefore, it is possible to conclude that the virtual simulator developed allows the execution of autonomous control tasks for a sampling period greater than the machine time.
Finally, Figure 29 shows the execution time of the control scheme implemented with the HIL technique for each sampling time. The peaks observed in the figure correspond to the delay time in the wireless communication between the control unit and the virtual environment developed. It should be noted that for the HIL technique, a Raspberry Pi (Raspberry Pi Foundation, Cambridge, England) as target controller was considered in this work. For the implementation of the HIL technique, it is recommended that the sampling period be T 0 0.5   [ s ] , since the machine time considers the wireless communication between the control unit and the virtual environment. For service robotics applications, specifically in the area of rehabilitation and assistance of person, the velocity of movement of the robot–human system must be low; therefore, considering the Nyquist–Shannon sampling theorem, the sampling time can be greater than T 0 0.5   [ s ] .

7.4. Usability of the Simulated System

The usability of the system was analyzed with the help of group of 20 people. The activities of all the participants began with the installation of the developed virtual application. Before the experiments, all participants were trained to navigate VR environments, aimed at leveling the experience in the use of immersive VR environments. In the training, no autonomous control tasks were considered for rehabilitation or robotic assistance. After finishing the experiments, the experimental group completed a usability test to measure the level of acceptance of the system’s features. To measure the degree of usability of the developed application, we used the System Usability Scale (SUS) [40], which is probably the most popular questionnaire to measure the usability attitudes of a system [41]. The total average SUS score obtained was 82.5%, which indicates an excellent degree of usability for our simulator. The designed application, besides being as simple as possible, must also have a high degree of usability.

8. Discussion

In this work, a VR-based framework to simulate different control schemes through a robotic wheelchair was developed. The framework also allows the simulation of robotic assistance tasks and the implementation of motor rehabilitation exercises for people with motor disabilities. The latter can be extended to home-based environments, thus favoring tele-rehabilitation. Therefore, people with reduced mobility or motor disabilities can take advantage of this framework, making everyday life easier for them.
Unlike other simulators oriented to the research of robotic systems applied to physical rehabilitation that we can find in the literature, the action of standing and the ability to move while simulating real scenarios has been scarcely explored. In most cases, VR-based physical rehabilitation is performed statically. That is to say, the patient remains in the same place in the rehabilitation session. Although this may be somewhat favorable to maintain the integrity of the person and the robotic system, it has been shown that the sensation of movement can generate rewards that motivate the user to not abandon the training sessions [42]. In addition, one of the most relevant features of the framework deals with the fact that no simulator has explored the range of movements that has direct impact on the patient while performing the movements. This feature is not possible in commercial simulators that do not have the ability to simulate human movements, so this process is essential before, during and after the assistance and rehabilitation. This idea has been considered in the developed simulator, because it allows the avatar to change the position according to the user’s position in real time through cameras or motion capture sensors in the actual implementation of the project.
Within the areas of robotic assistance and rehabilitation, one can define ADL tasks assisted autonomously by a robotic system [43,44]. For this purpose, a robotic standing wheelchair has been considered for the autonomous control of movements of the robot–human system. The work presented in this article falls under the scope of service robots, which work autonomously or semi-autonomously to perform tasks that are useful for the well-being of people [10,45]. In particular, the specific scope is the development of wheelchair prototypes (especially electric wheelchairs) with varying degrees of autonomy designed for people who cannot move their lower and/or upper extremities. Wheelchairs can improve the quality of living for people with motor disabilities, so that people can perform everyday tasks and see the world with other possibilities [11,17].
We find in the literature that different control algorithms have been implemented for the execution of autonomous or semi-autonomous tasks in each prototype developed. The tasks have been developed through vision sensors, audio signals, electromyographic signals (EMGs), electroencephalogram signals (EEGs) and gestural signals, among other signals [10,13,14]. In order to design and implement control algorithms, the physical robotic system is required in order to experimentally evaluate the developed control proposals. To overcome this drawback, different simulation software oriented to robotic systems have been commercially developed [46,47]. However, when considering robotic systems for the rehabilitation area, there is no commercial software or free software that allows one to simulate the behavior of robotic systems oriented to the execution of assistance or rehabilitation tasks. Therefore, the development of new interactive and 3D simulators applied to the area of service robotics is a new trend in the scientific community, whose implementation has been accelerated by the COVID-19 pandemic [4,43,48].
In fact, it is important to mention that the development of this work has been motivated and influenced by the COVID-19 pandemic, which has generated mobility restrictions, making it more difficult for people to attend hospitals, rehabilitation centers, institutes or laboratories to develop experimental tests on robot–human systems. This is why the possibility of being able to simulate rehabilitation and/or robotic assistance tasks in safe conditions prior to their application with patients becomes even more important. The set made up of the standing wheelchair and the framework that we have developed constitutes a very useful rehabilitation technology created in the pandemic. There are other technological developments that have been recently created in different areas of knowledge as a result of the COVID-19 pandemic [49,50]. We firmly believe that these technological developments are likely to last once the pandemic is over [37,51].
The developed VR environment considers both the kinematic and dynamic models of the standing wheelchair. The dynamic model considers the displaced center of mass, which can be caused by poor posture of the person, amputation of the limbs, or spinal injury, etc., which differs from the literature works [4,37]. In addition, the dynamic model considers as input signals the maneuvering velocities of the robotic standing wheelchair, in a similar way as commercial robots do. A trajectory tracking algorithm has been proposed for the autonomous control of the robotic system. The proposed controller design is based on the standing wheelchair kinematic model, in which analytical saturation is implemented in order to limit the maneuverability commands of the robotic system. As far as the trajectory tracking is concerned, we have considered that the desired velocity may depend on the rehabilitation task or robotic assistance. This consideration differs from other works found in the literature, which consider that the desired velocity is constant [9,14,52]. The studies found in the literature have solved the trajectory tracking problem, by choosing the desired velocity equal to the derivative with respect to the time of the desired trajectory [53], which is not logical in autonomous tasks that transport a person with some degree of motor disability. Furthermore, no works were found implementing path-following strategies for standing wheelchairs. We only found works for wheelchairs without the standing degree of freedom [10,52]. Lyapunov’s theory helped us to show that control errors converge to values close to zero, if velocity errors are bound, which confirms that the proposed control scheme works correctly. Therefore, the movements of the standing wheelchair meet the objectives of an autonomous rehabilitation task or robotic assistance.
The experiments carried out in our study showed the performance and versatility of the proposed controller. Furthermore, the obtained usability test results demonstrated a high degree of usability for the developed virtual application [40,54]. Another interesting feature of our system lies in the exchange of information in the bilateral communication between the 3D graphic engine and the mathematical software, whose time is in the microseconds range [37]. Therefore, this fact leads us to consider that our simulator is a real-time simulator, considering that for assistive robotics the sampling period can be greater than 0.05 (s).
The developed framework opens doors to the creation of customized rehabilitation plans with the help of medical experts, as they have the clinical criteria to plan the rehabilitation tasks aimed at every single person with motor disabilities. It is worth mentioning that there is a high risk of muscle injuries when a rehabilitation plan is incorrectly executed, since inadequate movements or abrupt movements may result in muscle injuries, for example [51].

9. Conclusions

The framework developed in this work has demonstrated its ability to simulate robotic assistance and motor rehabilitation tasks through a standing wheelchair prior to its implementation with human beings. The simulation techniques used for autonomous control have proven to meet the necessary requirements that these tasks need for a safe operation. Therefore, future work deals with the development and implementation of an autonomous neurorehabilitation plan for people with lower/upper limb motor disabilities. We will take advantages of the benefits of virtual environments for patient rehabilitation as well as the benefits of using sensors. In this sense, we plan to track a person’s movement through the “3 Space Mocap” sensors (YEI Technology, Portsmouth, OH, USA) [55,56]. The conjunction of both technologies will allow the patient to have a good immersive and interactive experience within the virtual environments developed in this work. Our research will also be focused on the development of new control strategies based on the dynamic model of the standing human–wheelchair system for the robotic assistance of people with motor disabilities.

Author Contributions

Conceptualization, J.S.O., G.P.-N. and V.H.A.; methodology, J.S.O. and V.H.A.; software, J.S.O., B.S.G.; validation, J.S.O., B.S.G. and V.H.A.; formal analysis, J.S.O. and V.H.A.; investigation, J.S.O.; resources, B.S.G. and V.H.A.; data curation, G.P.-N.; writing—original draft preparation, J.S.O.; writing—review and editing, G.P.-N. and V.H.A.; visualization, J.S.O., B.S.G. and V.H.A.; supervision, G.P.-N. and V.H.A.; project administration, G.P.-N. and V.H.A.; funding acquisition, J.S.O. and V.H.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Acknowledgments

The authors would like to thank the ARSI Research Group for their support in developing this work.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Standing Wheelchair Dynamic Model
[ μ r e f p ( t ) 2 x 1 ω ϕ r e f ( t ) 1 x 1 ] = [ M p ( ς ) 2 x 2 0 2 x 1 0 1 x 2 M b ( ϕ , φ ) 1 x 1 ] [ μ ˙ p ω ˙ ϕ ] + [ C p ( ς , μ p ) 2 x 2 0 2 x 1 0 1 x 2 C b ( ϕ , ϕ ˙ , φ , φ ˙ ) 1 x 1 ] [ μ p ω ϕ ] + [ 0 2 x 1 g ( ϕ ) 1 x 1 ]
M p ( ς ) = [ ς 1 p + ς 2 p m h ( ς 3 p + ς 4 p m h ) ( ς 5 p + ς 6 p m h ) ς 7 p + ς 8 p m h ] ; C p ( ς , μ p ) = [ ς 9 p ψ ˙ ( ς 10 p + ς 11 p m h ) 0 ς 12 p ]
M b ( ϕ , φ ) = [ ς 13 + ( ς 14 + ς 15 φ ) cos ( ϕ ) 2 sin ( ϕ + ϕ e ) sin ( β ϕ ) ]  
C b ( ϕ , ϕ ˙ , φ , φ ˙ ) = [ ς 16 + ( ς 17 φ ˙ cos ( ϕ ) 2 + ( ς 20 ω ϕ + ς 21 φ ω ϕ ) sin ( 2 ϕ ) ) 1 sin ( ϕ + ϕ e ) 2 sin ( β ϕ ) + ( ς 18 ω ϕ + ς 19 φ ω ϕ ) cos ( ϕ ) 2 cos ( ϕ + ϕ e ) sin ( ϕ + ϕ e ) sin ( β ϕ ) ]
g ( ϕ ) = [ ς 22 g cos ( ϕ ) sin ( β ϕ ) ]
Dynamic parameters of the standing wheelchair.
ς 1 = 0.0987; ς 2 = 0.0046; ς 3 = 0.0986; ς 4 = −0.00014; ς 5 = 0.0987; ς 6 = −0.0001; ς 7 = 0.0987; ς 8 = 0.0032; ς 9 = 0.9214; ς 10 = 0.0986; ς 11 = −0.0019; ς 12 = 0.9582; ς 13 = 0.1885; ς 14 = 0.0214; ς 15 = −0.0001; ς 16 = 1.00; ς 17 = 0.0003; ς 18 = −0.0085; ς 19 = −0.0004; ς 20 = 0.0229; ς 21 = 0.0005; and ς 22 = −0.0038.

References

  1. Khaksari, M.; Fathi-Ashtiani, A.; Seifi, M.; Lotfi-Kashani, F.; Helm, F.A. Life Skills Training Effects on Adjustment and Mental Health in Physical-Motor Disabilities. Int. J. Indian Psychol. 2019, 7, 5–14. [Google Scholar]
  2. Ünver, B.; Erdem, E.U. Effects of Intellectual Disability on Gross Motor Function and Health Related Quality of Life in Cerebral Palsy. Clin. Exp. Health Sci. 2019, 9, 138–142. [Google Scholar] [CrossRef] [Green Version]
  3. Straudi, S.; Manfredini, F.; Lamberti, N.; Martinuzzi, C.; Maietti, E.; Basaglia, N. Robot-Assisted Gait Training Is Not Superior to Intensive Overground Walking in Multiple Sclerosis with Severe Disability (the RAGTIME Study): A Randomized Controlled Trial. Mult. Scler. 2020, 26, 716–724. [Google Scholar] [CrossRef]
  4. Herrera, D.; Roberti, F.; Carelli, R.; Andaluz, V.; Varela, J.; Ortiz, J.; Canseco, P. Modeling and Path-Following Control of a Wheelchair in Human-Shared Environments. Int. J. Humanoid Robot. 2018, 15, 1850010. [Google Scholar] [CrossRef]
  5. Clark, C.; Sliker, L.; Sandstrum, J.; Burne, B.; Haggett, V.; Bodine, C. Development and Preliminary Investigation of a Semiautonomous Socially Assistive Robot (SAR) Designed to Elicit Communication, Motor Skills, Emotion, and Visual Regard (Engagement) from Young Children with Complex Cerebral Palsy: A Pilot Comparative Trial. Adv. Hum. Comput. Interact. 2019, 2019, 2614060. [Google Scholar] [CrossRef]
  6. Andaluz, V.H.; Ortiz, J.S.; Sanchéz, J.S. Bilateral Control of a Robotic Arm Through Brain Signals. In Augmented and Virtual Reality; De Paolis, L.T., Mongelli, A., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2015; Volume 9254, pp. 355–368. ISBN 978-3-319-22887-7. [Google Scholar]
  7. Diez, P.F.; Torres Müller, S.M.; Mut, V.A.; Laciar, E.; Avila, E.; Bastos-Filho, T.F.; Sarcinelli-Filho, M. Commanding a Robotic Wheelchair with a High-Frequency Steady-State Visual Evoked Potential Based Brain–Computer Interface. Med. Eng. Phys. 2013, 35, 1155–1164. [Google Scholar] [CrossRef]
  8. Voilque, A.; Masood, J.; Fauroux, J.c.; Sabourin, L.; Guezet, O. Industrial Exoskeleton Technology: Classification, Structural Analysis, and Structural Complexity Indicator. In Proceedings of the 2019 Wearable Robotics Association Conference (WearRAcon), Scottsdale, AZ, USA, 25–27 March 2019; pp. 13–20. [Google Scholar]
  9. Jiménez, M.F.; Monllor, M.; Frizera, A.; Bastos, T.; Roberti, F.; Carelli, R. Admittance Controller with Spatial Modulation for Assisted Locomotion Using a Smart Walker. J. Intell. Robot. Syst. 2019, 94, 621–637. [Google Scholar] [CrossRef]
  10. Ortiz, J.S.; Andaluz, V.H.; Rivas, D.; Sánchez, J.S.; Espinosa, E.G. Human-Wheelchair System Controlled by Through Brain Signals. In Intelligent Robotics and Applications; Kubota, N., Kiguchi, K., Liu, H., Obo, T., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2016; Volume 9835, pp. 211–222. ISBN 978-3-319-43517-6. [Google Scholar]
  11. Hartman, A.; Nandikolla, V.K. Human-Machine Interface for a Smart Wheelchair. J. Robot. 2019, 2019, 4837058. [Google Scholar] [CrossRef]
  12. Brandão, A.S.; Felix, L.B.; Cavalieri, D.C.; de Sá, A.M.F.L.M.; Bastos-Filho, T.F.; Sarcinelli-Filho, M. Controlling Devices Using Biological Signals. Int. J. Adv. Robot. Syst. 2011, 8, 30. [Google Scholar] [CrossRef]
  13. Rakasena, E.P.G.; Herdiman, L. Electric Wheelchair with Forward-Reverse Control Using Electromyography (EMG) Control of Arm Muscle. J. Phys. Conf. Ser. 2020, 1450, 012118. [Google Scholar] [CrossRef]
  14. Ferreira, A.; Celeste, W.C.; Cheein, F.A.; Bastos-Filho, T.F.; Sarcinelli-Filho, M.; Carelli, R. Human-Machine Interfaces Based on EMG and EEG Applied to Robotic Systems. J. NeuroEng. Rehabil. 2008, 5, 10. [Google Scholar] [CrossRef] [Green Version]
  15. Huang, Q.; Chen, Y.; Zhang, Z.; He, S.; Zhang, R.; Liu, J.; Zhang, Y.; Shao, M.; Li, Y. An EOG-Based Wheelchair Robotic Arm System for Assisting Patients with Severe Spinal Cord Injuries. J. Neural Eng. 2019, 16, 026021. [Google Scholar] [CrossRef] [PubMed]
  16. Abdulghani, M.M.; Al-Aubidy, K.M.; Ali, M.M.; Hamarsheh, Q.J. Wheelchair Neuro Fuzzy Control and Tracking System Based on Voice Recognition. Sensors 2020, 20, 2872. [Google Scholar] [CrossRef]
  17. Nikpour, M.; Huang, L.; Al-Jumaily, A.M. Stability and Direction Control of a Two-Wheeled Robotic Wheelchair Through a Movable Mechanism. IEEE Access 2020, 8, 45221–45230. [Google Scholar] [CrossRef]
  18. Sago, Y.; Noda, Y.; Kakihara, K.; Terashima, K. Parallel Two-Wheel Vehicle with Underslung Vehicle Body. Mech. Eng. J. 2014, 1, DR0036. [Google Scholar] [CrossRef] [Green Version]
  19. Sulistiyawan, B.B.; Susmartini, S.; Herdiman, L. A Framework of Stand up Motorized Wheelchair as Universal Design Product to Help Mobility of the Motoric Disabled People. In AIP Conference Proceedings; AIP Publishing LLC: Surakarta, Indonesia, 2020; p. 030026. [Google Scholar]
  20. Román-Ibáñez, V.; Pujol-López, F.A.; Mora-Mora, H.; Pertegal-Felices, M.L.; Jimeno-Morenilla, A. A Low-Cost Immersive Virtual Reality System for Teaching Robotic Manipulators Programming. Sustainability 2018, 10, 1102. [Google Scholar] [CrossRef] [Green Version]
  21. Monroy, J.; Hernandez-Bennetts, V.; Fan, H.; Lilienthal, A.; Gonzalez-Jimenez, J. GADEN: A 3D Gas Dispersion Simulator for Mobile Robot Olfaction in Realistic Environments. Sensors 2017, 17, 1479. [Google Scholar] [CrossRef] [Green Version]
  22. Viglialoro, R.M.; Esposito, N.; Condino, S.; Cutolo, F.; Guadagni, S.; Gesi, M.; Ferrari, M.; Ferrari, V. Augmented Reality to Improve Surgical Simulation: Lessons Learned Towards the Design of a Hybrid Laparoscopic Simulator for Cholecystectomy. IEEE Trans. Biomed. Eng. 2019, 66, 2091–2104. [Google Scholar] [CrossRef]
  23. Song, J.; Hur, K.; Lee, J.; Lee, H.; Lee, J.; Jung, S.; Shin, J.; Kim, H. Hardware-in-the-Loop Simulation Using Real-Time Hybrid-Simulator for Dynamic Performance Test of Power Electronics Equipment in Large Power System. Energies 2020, 13, 3955. [Google Scholar] [CrossRef]
  24. Rodič, B. Self-Organizing Manufacturing Systems in Industry 4.0: Aspect of Simulation Modelling. Available online: www.igi-global.com/chapter/self-organizing-manufacturing-systems-in-industry-40/269071 (accessed on 8 June 2021).
  25. Mahoney, K.; Pierce, J.; Papo, S.; Imran, H.; Evans, S.; Wu, W.-C. Efficacy of Adding Activity of Daily Living Simulation Training to Traditional Pulmonary Rehabilitation on Dyspnea and Health-Related Quality-of-Life. PLoS ONE 2020, 15, e0237973. [Google Scholar] [CrossRef]
  26. Santos Pessoa de Melo, M.; Gomes da Silva Neto, J.; Jorge Lima da Silva, P.; Natario Teixeira, J.M.X.; Teichrieb, V. Analysis and Comparison of Robotics 3D Simulators. In Proceedings of the 2019 21st Symposium on Virtual and Augmented Reality (SVR), Rio de Janeiro, Brazil, 28–31 October 2019; pp. 242–251. [Google Scholar]
  27. Staranowicz, A.; Mariottini, G.L. A Survey and Comparison of Commercial and Open-Source Robotic Simulator Software. In Proceedings of the 4th International Conference on PErvasive Technologies Related to Assistive Environments, Heraklion Crete Greece, 25–27 May 2011; Association for Computing Machinery: New York, NY, USA, 2011; pp. 1–8. [Google Scholar]
  28. Nurkkala, V.-M.; Kalermo, J.; Jarvilehto, T. Development of Exergaming Simulator for Gym Training, Exercise Testing and Rehabilitation. J. Commun. Comput. 2014, 11, 403–411. [Google Scholar]
  29. Vailland, G.; Grzeskowiak, F.; Devigne, L.; Gaffary, Y.; Fraudet, B.; Leblong, É.; Nouviale, F.; Pasteau, F.; Breton, R.L.; Guégan, S.; et al. User-Centered Design of a Multisensory Power Wheelchair Simulator: Towards Training and Rehabilitation Applications. In Proceedings of the 2019 IEEE 16th International Conference on Rehabilitation Robotics (ICORR), Toronto, ON, Canada, 24–28 June 2019; pp. 77–82. [Google Scholar]
  30. Meyer, R.T.; Sergeeva, Y. Mixed-Reality Assistive Robotic Power Chair Simulator for Parkinson’s Tremor Testing. Med. Eng. Phys. 2020, 83, 142–147. [Google Scholar] [CrossRef]
  31. Grzeskowiak, F.; Babel, M.; Bruneau, J.; Pettre, J. Toward Virtual Reality-Based Evaluation of Robot Navigation among People. In Proceedings of the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Atlanta, GA, USA, 22–26 March 2020; pp. 766–774. [Google Scholar]
  32. Andaluz, V.H.; Chicaiza, F.A.; Gallardo, C.; Quevedo, W.X.; Varela, J.; Sánchez, J.S.; Arteaga, O. Unity3D-MatLab Simulator in Real Time for Robotics Applications. In Augmented Reality, Virtual Reality, and Computer Graphics; De Paolis, L.T., Mongelli, A., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2016; Volume 9768, pp. 246–263. ISBN 978-3-319-40620-6. [Google Scholar]
  33. Wu, M.; Dai, S.-L.; Yang, C. Mixed Reality Enhanced User Interactive Path Planning for Omnidirectional Mobile Robot. Appl. Sci. 2020, 10, 1135. [Google Scholar] [CrossRef] [Green Version]
  34. Liu, H.; Wang, L. Remote Human–Robot Collaboration: A Cyber–Physical System Application for Hazard Manufacturing Environment. J. Manuf. Syst. 2020, 54, 24–34. [Google Scholar] [CrossRef]
  35. Carvajal, C.P.; Méndez, M.G.; Torres, D.C.; Terán, C.; Arteaga, O.B.; Andaluz, V.H. Autonomous and Tele-Operated Navigation of Aerial Manipulator Robots in Digitalized Virtual Environments. In Augmented Reality, Virtual Reality, and Computer Graphics; De Paolis, L.T., Bourdot, P., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2018; Volume 10851, pp. 496–515. ISBN 978-3-319-95281-9. [Google Scholar]
  36. Amouri, A.; Mahfoudi, C.; Zaatri, A. Dynamic Modeling of a Spatial Cable-Driven Continuum Robot Using Euler-Lagrange Method. Int. J. Eng. Technol. Innov. 2020, 10, 60–74. [Google Scholar] [CrossRef]
  37. Ortiz, J.S.; Palacios-Navarro, G.; Andaluz, V.H.; Recalde, L.F. Three-Dimensional Unified Motion Control of a Robotic Standing Wheelchair for Rehabilitation Purposes. Sensors 2021, 21, 3057. [Google Scholar] [CrossRef]
  38. Acosta Núñez, J.F.; Andaluz Ortiz, V.H.; González-de-Rivera Peces, G.; Garrido Salas, J. Energy-Saver Mobile Manipulator Based on Numerical Methods. Electronics 2019, 8, 1100. [Google Scholar] [CrossRef] [Green Version]
  39. Schifino, G.; Cimolin, V.; Pau, M.; da Cunha, M.J.; Leban, B.; Porta, M.; Galli, M.; Souza Pagnussat, A. Functional Electrical Stimulation for Foot Drop in Post-Stroke People: Quantitative Effects on Step-to-Step Symmetry of Gait Using a Wearable Inertial Sensor. Sensors 2021, 21, 921. [Google Scholar] [CrossRef] [PubMed]
  40. Sauro, J.; Lewis, J.R. When Designing Usability Questionnaires, Does It Hurt to Be Positive? In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, BC, Canada, 7–12 May 2011; p. 2215. [Google Scholar]
  41. Salvendy, G. Handbook of Human Factors and Ergonomics; John Wiley & Sons: New York, NY, USA, 2006. [Google Scholar]
  42. Guevara, B.; Martínez, A.; Gordón, A.; Constante, P. Sistema inmersivo de reconocimiento y control de gestos empleando realidad virtual para rehabilitación de las extremidades superiores en pacientes con daño cerebral adquirido (DCA). Iber. J. Inf. Syst. Technol. 2019, E19, 658–670. [Google Scholar]
  43. Maule, L.; Luchetti, A.; Zanetti, M.; Tomasin, P.; Pertile, M.; Tavernini, M.; Guandalini, G.M.A.; De Cecco, M. RoboEye, an Efficient, Reliable and Safe Semi-Autonomous Gaze Driven Wheelchair for Domestic Use. Technologies 2021, 9, 16. [Google Scholar] [CrossRef]
  44. Norouzi-Gheidari, N.; Hernandez, A.; Archambault, P.S.; Higgins, J.; Poissant, L.; Kairy, D. Feasibility, Safety and Efficacy of a Virtual Reality Exergame System to Supplement Upper Extremity Rehabilitation Post-Stroke: A Pilot Randomized Clinical Trial and Proof of Principle. Int. J. Environ. Res. Public Health 2019, 17, 113. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  45. Ivkov, M.; Blešić, I.; Dudić, B.; Pajtinková Bartáková, G.; Dudić, Z. Are Future Professionals Willing to Implement Service Robots? Attitudes of Hospitality and Tourism Students towards Service Robotization. Electronics 2020, 9, 1442. [Google Scholar] [CrossRef]
  46. Ortiz, J.S.; Palacios-Navarro, G.; Carvajal, C.P.; Andaluz, V.H. 3D Virtual Path Planning for People with Amyotrophic Lateral Sclerosis Through Standing Wheelchair. In Social Robotics; Ge, S.S., Cabibihan, J.-J., Salichs, M.A., Broadbent, E., He, H., Wagner, A.R., Castro-González, Á., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2018; Volume 11357, pp. 181–191. ISBN 978-3-030-05203-4. [Google Scholar]
  47. Gull, M.A.; Bai, S.; Bak, T. A Review on Design of Upper Limb Exoskeletons. Robotics 2020, 9, 16. [Google Scholar] [CrossRef] [Green Version]
  48. Hernandez-Ossa, K.A.; Montenegro-Couto, E.H.; Longo, B.; Bissoli, A.; Sime, M.M.; Lessa, H.M.; Enriquez, I.R.; Frizera-Neto, A.; Bastos-Filho, T. Simulation System of Electric-Powered Wheelchairs for Training Purposes. Sensors 2020, 20, 3565. [Google Scholar] [CrossRef]
  49. Menga, G.; Ghirardi, M. Control of the Sit-To-Stand Transfer of a Biped Robotic Device for Postural Rehabilitation. Robotics 2019, 8, 91. [Google Scholar] [CrossRef] [Green Version]
  50. Javaid, M.; Haleem, A. Exploring Smart Material Applications for COVID-19 Pandemic Using 4D Printing Technology. J. Ind. Integr. Manag. 2020, 05, 481–494. [Google Scholar] [CrossRef]
  51. Serner, A.; Weir, A.; Tol, J.L.; Thorborg, K.; Lanzinger, S.; Otten, R.; Hölmich, P. Return to Sport After Criteria-Based Rehabilitation of Acute Adductor Injuries in Male Athletes: A Prospective Cohort Study. Orthop. J. Sports Med. 2020, 8, 232596711989724. [Google Scholar] [CrossRef] [PubMed]
  52. Nikpour, M.; Huang, L.; Al-Jumaily, A.M. An Approach on Velocity and Stability Control of a Two-Wheeled Robotic Wheelchair. Appl. Sci. 2020, 10, 6446. [Google Scholar] [CrossRef]
  53. Shahin, M.K.; Tharwat, A.; Gaber, T.; Hassanien, A.E. A Wheelchair Control System Using Human-Machine Interaction: Single-Modal and Multimodal Approaches. J. Intell. Syst. 2019, 28, 115–132. [Google Scholar] [CrossRef]
  54. Borsci, S.; Federici, S.; Lauriola, M. On the Dimensionality of the System Usability Scale: A Test of Alternative Measurement Models. Cogn. Process. 2009, 10, 193–197. [Google Scholar] [CrossRef]
  55. 3-SpaceTM MoCap Starter Bundle. Available online: https://yostlabs.com/product/3-space-mocap-starter-bundle/ (accessed on 4 May 2021).
  56. Abtahi, M.; Bahram Borgheai, S.; Jafari, R.; Constant, N.; Diouf, R.; Shahriari, Y.; Mankodiya, K. Merging FNIRS-EEG Brain Monitoring and Body Motion Capture to Distinguish Parkinsons Disease. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 1246–1253. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Block diagram for the FS technique.
Figure 1. Block diagram for the FS technique.
Sensors 21 05083 g001
Figure 2. Block diagram for the HIL technique.
Figure 2. Block diagram for the HIL technique.
Sensors 21 05083 g002
Figure 3. Robotic standing wheelchair built by the authors.
Figure 3. Robotic standing wheelchair built by the authors.
Sensors 21 05083 g003
Figure 4. Robotic standing wheelchair.
Figure 4. Robotic standing wheelchair.
Sensors 21 05083 g004
Figure 5. Schematic of the robotic standing wheelchair.
Figure 5. Schematic of the robotic standing wheelchair.
Sensors 21 05083 g005
Figure 6. Proposed virtual environment schema.
Figure 6. Proposed virtual environment schema.
Sensors 21 05083 g006
Figure 7. External resources virtualization.
Figure 7. External resources virtualization.
Sensors 21 05083 g007
Figure 8. Data visualization of the control errors evolution in the graphics engine.
Figure 8. Data visualization of the control errors evolution in the graphics engine.
Sensors 21 05083 g008
Figure 9. Cinematic and dynamic behavior of the human–robot system.
Figure 9. Cinematic and dynamic behavior of the human–robot system.
Sensors 21 05083 g009
Figure 10. Scripting general scheme.
Figure 10. Scripting general scheme.
Sensors 21 05083 g010
Figure 11. Data exchange between virtual environment and destination controller.
Figure 11. Data exchange between virtual environment and destination controller.
Sensors 21 05083 g011
Figure 12. Publisher/subscriber of the shared memories.
Figure 12. Publisher/subscriber of the shared memories.
Sensors 21 05083 g012
Figure 13. Path-following problem for a robotic standing wheelchair.
Figure 13. Path-following problem for a robotic standing wheelchair.
Sensors 21 05083 g013
Figure 14. Block diagram of the motion control of the standing wheelchair-human system.
Figure 14. Block diagram of the motion control of the standing wheelchair-human system.
Sensors 21 05083 g014
Figure 15. Scene configuration of the simulator environment for robotic assistance (initial scene).
Figure 15. Scene configuration of the simulator environment for robotic assistance (initial scene).
Sensors 21 05083 g015
Figure 16. Screenshots of the developed virtual environments for the execution of rehabilitation tasks and robotic assistance, all related to activities of daily living.
Figure 16. Screenshots of the developed virtual environments for the execution of rehabilitation tasks and robotic assistance, all related to activities of daily living.
Sensors 21 05083 g016
Figure 17. Autonomous assistance task: movement of the standing human–wheelchair system from a house located in P o to another house located in point P d .
Figure 17. Autonomous assistance task: movement of the standing human–wheelchair system from a house located in P o to another house located in point P d .
Sensors 21 05083 g017
Figure 18. Virtual stroboscopic movement of the robot–human system based on the experimental data.
Figure 18. Virtual stroboscopic movement of the robot–human system based on the experimental data.
Sensors 21 05083 g018
Figure 19. Time evolution of the control errors η ˜ ( t ) = ( η ˜ x , η ˜ y , η ˜ z ) .
Figure 19. Time evolution of the control errors η ˜ ( t ) = ( η ˜ x , η ˜ y , η ˜ z ) .
Sensors 21 05083 g019
Figure 20. Time evolution of the velocity errors μ ˜ ( k T 0 ) = ( u ˜ , ω ˜ ψ , ω ˜ ϕ ) .
Figure 20. Time evolution of the velocity errors μ ˜ ( k T 0 ) = ( u ˜ , ω ˜ ψ , ω ˜ ϕ ) .
Sensors 21 05083 g020
Figure 21. Velocity commands to the standing wheelchair μ r e f ( k T 0 ) = ( u r e f , ω ψ r e f , ω ϕ r e f ) .
Figure 21. Velocity commands to the standing wheelchair μ r e f ( k T 0 ) = ( u r e f , ω ψ r e f , ω ϕ r e f ) .
Sensors 21 05083 g021
Figure 22. Virtual stroboscopic movement of the robot–human system.
Figure 22. Virtual stroboscopic movement of the robot–human system.
Sensors 21 05083 g022
Figure 23. Time evolution of the control errors η ˜ ( t ) = ( η ˜ x , η ˜ y , η ˜ z ) .
Figure 23. Time evolution of the control errors η ˜ ( t ) = ( η ˜ x , η ˜ y , η ˜ z ) .
Sensors 21 05083 g023
Figure 24. Time evolution of the velocity errors μ ˜ ( t ) = ( u ˜ , ω ˜ ψ , ω ˜ ϕ ) .
Figure 24. Time evolution of the velocity errors μ ˜ ( t ) = ( u ˜ , ω ˜ ψ , ω ˜ ϕ ) .
Sensors 21 05083 g024
Figure 25. Velocity commands to the standing wheelchair μ r e f ( k T 0 ) = ( u r e f , ω ψ r e f , ω ϕ r e f ) .
Figure 25. Velocity commands to the standing wheelchair μ r e f ( k T 0 ) = ( u r e f , ω ψ r e f , ω ϕ r e f ) .
Sensors 21 05083 g025
Figure 26. GPU performance.
Figure 26. GPU performance.
Sensors 21 05083 g026
Figure 27. CPU performance.
Figure 27. CPU performance.
Sensors 21 05083 g027
Figure 28. Execution time of the proposed closed-loop control scheme.
Figure 28. Execution time of the proposed closed-loop control scheme.
Sensors 21 05083 g028
Figure 29. Algorithm execution time during Hardware in the Loop technique.
Figure 29. Algorithm execution time during Hardware in the Loop technique.
Sensors 21 05083 g029
Table 1. Implementation of closed-loop control algorithms.
Table 1. Implementation of closed-loop control algorithms.
Control System ConfigurationFull Simulation (FS)Rapid Control Prototyping (RCP)Hardware in the Loop (HIL)Deployed System (DS)
Control laws and signal processingSimulatedSimulatedDeployed to target hardwareDeployed to target hardware
Robot, feedback, and power converterSimulatedPhysical componentsSimulatedPhysical components
Primary benefitsEasy to develop and make changes; full set of analysis tools.Easy to modify control laws; full set of analysis tools.Safely and quickly validate deployed control lawsCost and reliability appropriate for field operation
Table 2. Desired task and initial parameters.
Table 2. Desired task and initial parameters.
Initial ConditionsDesired Task
η 0 x 10   [ m ] u 0 0   [ m / s ] η d x 2 cos ( 0.05 t ) 10 . 58   [ m ]
η 0 y 11   [ m ] ω ψ 0 0   [ rad / s ] η d y 2 sin ( 0.05 t ) 11 . 12   [ m ]
η 0 z 0.5   [ m ] ω ϕ 0 0   [ rad / s ] η d z 0.2 sin ( 0.3 t ) + 0 . 21   [ m ]
η 0 ψ 0   [ rad ] -- v max 0.32   [ m s ]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Ortiz, J.S.; Palacios-Navarro, G.; Andaluz, V.H.; Guevara, B.S. Virtual Reality-Based Framework to Simulate Control Algorithms for Robotic Assistance and Rehabilitation Tasks through a Standing Wheelchair. Sensors 2021, 21, 5083. https://doi.org/10.3390/s21155083

AMA Style

Ortiz JS, Palacios-Navarro G, Andaluz VH, Guevara BS. Virtual Reality-Based Framework to Simulate Control Algorithms for Robotic Assistance and Rehabilitation Tasks through a Standing Wheelchair. Sensors. 2021; 21(15):5083. https://doi.org/10.3390/s21155083

Chicago/Turabian Style

Ortiz, Jessica S., Guillermo Palacios-Navarro, Víctor H. Andaluz, and Bryan S. Guevara. 2021. "Virtual Reality-Based Framework to Simulate Control Algorithms for Robotic Assistance and Rehabilitation Tasks through a Standing Wheelchair" Sensors 21, no. 15: 5083. https://doi.org/10.3390/s21155083

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop