1. Introduction
Physical disability refers to a condition that limits an individual’s ability to perform physical activities due to impairments in mobility, dexterity, or endurance. The percentage of people with disabilities is gradually increasing [
1]. Stroke is a major cause of physical disability. Neurorehabilitation accelerates physical recovery by leveraging a property of the brain known as neuroplasticity [
2]. Neuroplasticity allows the brain to reassign functions from damaged areas to healthy regions of the brain [
3]. This process, known as functional recovery, enables other parts of the brain to take over control of the affected limbs. Research indicates that consistent, intensive, and targeted exercises maximize neuroplasticity and facilitate functional recovery [
3]. Physical therapy provides crucial exercises for neurorehabilitation, but it is often costly, time-consuming, highly dependent on the therapist’s expertise, and requires personalized approaches for each patient. Robot-assisted rehabilitation has emerged as a viable alternative to traditional therapies [
4,
5]. The current neurorehabilitation research challenge is to develop an intelligent rehabilitation exoskeleton system that provides personalized therapies with minimal human intervention. Achieving this goal requires the ability to read physiological signals (e.g., surface electromyography (sEMG), electroencephalography (EEG)) and interpret them for robot control and recovery assessment purposes.
Neural networks hold significant potential for analyzing and interpreting physiological signals in robot-assisted physical therapy. These computational models, inspired by the structure and function of the human brain, excel at identifying patterns, making decisions, and solving complex problems. In rehabilitation robotics, neural networks are particularly effective at processing large and complex physiological datasets.
Commonly used physiological signals include surface electromyography (sEMG), electromyography (EMG), electroencephalography (EEG), and electrocardiography (ECG or EKG). EMG signals provide valuable insights into a patient’s motor intent and abilities. This enables exoskeleton systems to deliver personalized support tailored to the user’s specific needs. Unlike traditional therapy, which relies on therapists to subjectively adjust exercises and intensity based on patient performance, neural networks empower robotic systems to adapt dynamically using real-time data.
By enabling precise and objective measurements of motor recovery, neural networks enhance the ability to monitor a patient’s progress over time. This technological advancement can significantly improve the accuracy and personalization of rehabilitation therapy, leading to better outcomes [
3].
Neural networks have shown great potential in improving the control, adaptability, and intelligence of robotic systems. Different types of neural networks, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and deep neural networks (DNNs), have been applied in various applications to enhance the performance of rehabilitation devices [
6].
The role of neural networks in robot-assisted rehabilitation can be categorized into two domains: controlling the robotic exoskeleton or device and evaluating the user’s motor performance to assess motor recovery over time [
3,
7].
One of the primary challenges in upper extremity rehabilitation is accurately predicting the user’s movement intent and providing the appropriate level of assistance without causing excessive fatigue or underutilization of the user’s residual motor capabilities [
8]. Neural networks, particularly RNNs and LSTMs [
9], are well suited for this task due to their ability to analyze temporal sequences of data. These models can learn patterns in the user’s movement history, allowing the device to anticipate future movements and adjust its support accordingly [
10].
The integration of neural networks in robot-assisted rehabilitation represents a transformative step forward in the field of neurorehabilitation. By improving the control and adaptability of rehabilitation devices, neural networks enable more personalized and effective therapy, boosting the user’s recovery experience and outcomes [
11].
This systematic review aims to explore the various neural network models used in rehabilitation robotics, highlighting their potential to revolutionize motor recovery in patients with neurological impairments. By synthesizing current research, this review emphasizes how neural networks can transform therapy approaches, advancing the field of neurorehabilitation toward more effective, patient-centered care.
The article is organized into seven sections.
Section 1 introduces readers to neurorehabilitation and physical therapy, how robot-assisted rehabilitation can contribute to recovery from physical impairments, and the role of neural networks in establishing the relationship between the robot and the user.
Section 2 describes the existing literature, especially the recently published review articles focusing on the application of neural networks in rehabilitation robotics, which help to solidify the contribution of this article.
Section 3 discusses the systematic review and article selection methodology, including the formulation of the research questionnaire, literature search strategy, inclusion and exclusion criteria, study selection, data extraction, and validation from the selected articles.
Section 4 classifies the neural network applications in robot-assisted rehabilitation based on the way they were used in previously conducted research.
Section 5 discusses the key findings, highlights the implications, and acknowledges the limitations.
Section 6 describes the future directions in the light of the reviewed articles, and finally,
Section 7 concludes the article.
2. Current State of the Art
Rehabilitation robotics has received significant research attention recently for its ability to deliver various types of physical therapy across different stages of recovery. Over the past two decades, researchers have focused on advancing this field by transitioning from traditional human-assisted rehabilitation systems to robot-assisted alternatives. The current objective is to develop an autonomous rehabilitation system capable of providing diverse therapies to a wide range of patients with minimal human involvement.
Creating such a system requires continuous interaction between the user and the robotic platform. Physiological signals, such as sEMG, EMG, and EEG, play a critical role in establishing this connection. Neural networks are particularly well-suited to process and interpret complex physiological data, enabling meaningful insights. Consequently, their use in physiological signal processing has grown significantly, as evidenced by numerous research publications in the field.
The remainder of this section surveys recent review articles on the application of machine learning, deep learning, and neural networks in enhancing robot-assisted rehabilitation systems. Examining these studies identifies existing gaps and highlights potential research opportunities.
Ai et al. review the use of machine learning algorithms (MLAs) in robot-assisted upper limb rehabilitation, focusing on their role in enhancing motor function recovery for stroke patients [
12]. The paper examines the current state of rehabilitation robots, patient–robot interaction, and the importance of intelligent control systems. It also explores the MLAs employed for movement intention recognition, human–robot interaction control, and quantitative assessment of motor function. The study emphasizes patient involvement and highlights the potential for intelligent robots to improve recovery through adaptive learning and data-driven personalization. The review discusses recent advances in adaptive learning, allowing robots to customize rehabilitation programs based on individual patient needs, which enhances engagement and recovery outcomes.
However, the paper has several limitations. It relies heavily on theoretical concepts, with many reinforcement learning approaches still confined to simulations, limiting their clinical relevance. Although it explores key areas like intention recognition and interactive control, it does not sufficiently address practical challenges such as computational complexity, lengthy training periods, and the need for large datasets. Additionally, patient-specific variability and safety concerns in human–robot interactions are only briefly mentioned. While the paper suggests future research directions, these recommendations remain broad and lack concrete strategies for practical implementation. Furthermore, it overlooks the potential of advanced neural networks like CNNs, LSTMs, and RBFNNs in improving accuracy, adaptability, and personalized care.
Bardi et al. provide a systematic review of soft robotic wearable devices (known as exosuits) for upper limb rehabilitation. This article explores their potential as flexible alternatives to rigid exoskeletons for everyday support [
13]. The review examines 105 articles covering 69 different devices, focusing on actuation methods, applications (rehabilitation, assistance, and augmentation), and strategies for intention detection. The article concludes that pneumatic and cable-driven actuation systems are the most common and typically offer only one or two degrees of freedom. The review emphasizes the need for clinical trials to assess these devices’ effectiveness in real-world settings.
The main limitation of this review is its restricted applicability to real-world scenarios, as many devices remain in early development and have undergone only limited testing on individuals with motor disabilities. The absence of standardized experimental protocols across studies also complicates comparisons of effectiveness. Furthermore, the review provides minimal practical solutions for challenges related to force transmission, control, and ergonomics; although it highlights the importance of user experience and portability, specific strategies for improving these aspects are lacking. While the authors focus on actuation methods and device types, they do not address how neural networks could enhance motion control, intention detection, and personalized therapy. Future research recommendations are broad, stressing the need for more targeted clinical trials and real-world testing to facilitate the practical adoption of exosuits.
The article “Myoelectric Control Systems for Upper Limb Wearable Robotic Exoskeletons and Exosuits: A Systematic Review” explores the design and application of myoelectric control systems in wearable robotic exoskeletons. It focuses on key design elements, such as degrees of freedom, portability, and various application scenarios. The study highlights the use of electromyographic (EMG) signals to enhance human–robot interaction and adaptability during motion tasks [
8]. By reviewing 60 selected articles, the authors analyze different myoelectric control systems and assess their effectiveness through experimental studies.
The article identifies challenges in integrating these systems into everyday use, including issues like user training and device comfort, and suggests future research directions to enhance usability and functionality. While the review provides valuable insights, it has notable limitations. Much of the technology is still in experimental stages, with limited clinical validation and a predominant reliance on healthy subjects rather than individuals with motor impairments. Although the review addresses device design and usability, it does not consider how advanced neural networks, such as CNNs and LSTMs, could improve motion prediction and enable personalized rehabilitation.
Challenges such as calibration complexity, muscle fatigue, and electrode displacement are discussed but remain inadequately addressed. Furthermore, the article offers limited solutions for implementing multi-degree-of-freedom exoskeletons, restricting their practical application. While it proposes future research directions, these lack concrete strategies to transition from laboratory research to clinical implementation. Overall, the article underscores the need for more targeted, adaptive, and patient-centered solutions to advance the field.
The article titled “Review on Patient-Cooperative Control Strategies for Upper-Limb Rehabilitation Exoskeletons” discusses patient-cooperative control strategies for upper limb rehabilitation exoskeletons, emphasizing the importance of adapting robot controllers to patients’ recovery stages [
14]. It proposes a three-level classification system: high-level training modalities, low-level control strategies, and hardware-level implementation. This classification aims to enhance the adaptability and effectiveness of rehabilitation protocols, ensuring that each patient’s unique needs are met throughout their recovery journey. The study highlights the need for compliant control strategies to enhance human–robot interaction and promote motor relearning. Various exoskeletons are examined to illustrate the integration of these strategies, aiming to improve rehabilitation outcomes for neurological patients.
This article offers valuable insights into control strategies for rehabilitation robotics but has several limitations. It leans heavily on theoretical frameworks, with minimal empirical data or clinical validation. Many of the strategies discussed are in early developmental stages, and the inconsistent classification of control approaches across studies adds to the confusion. The proposed future directions, though important, are broad and lack actionable strategies for practical implementation.
Although the article emphasizes the importance of patient-specific adaptation, it provides few concrete solutions for addressing individual variability, patient engagement, and safety concerns. While highlighting control strategies, it neglects to explore the potential of neural networks in enhancing motion control and personalizing therapy for patients at various recovery stages. Hardware challenges critical for real-world applications, such as actuator performance and sensor limitations, are also underexplored.
Additionally, the focus on affordability is vague, offering limited practical strategies to make these technologies more accessible. Key challenges, such as aligning exoskeletons with human biomechanics and ensuring user comfort, receive insufficient attention. While the article identifies opportunities to improve quality of life, its future directions are imprecise, leaving a gap between research and real-world implementation.
Gaudet et al. review the current trends and challenges in pediatric access to upper limb exoskeletons, emphasizing the limited availability of these devices for children with upper limb impairments [
15]. The article categorizes exoskeletons into sensor-less and sensor-based types, stressing the importance of sensor-based solutions for better adaptability to children’s growth and specific needs.
The review highlights key pediatric diagnoses, such as Duchenne muscular dystrophy and spinal muscular atrophy, that require assistive exoskeletons to support daily activities. While the article provides valuable insights, it has some limitations. It primarily focuses on theoretical discussions, with many exoskeleton designs still in development and lacking clinical validation. Key challenges, such as accommodating children’s growth and ensuring usability, are addressed but without practical solutions.
Although the review discusses adapting exoskeletons to children’s growth, it does not explore the potential of neural networks to enable real-time adjustments and personalized care. Furthermore, the discussion on accessibility and affordability remains broad, underscoring a significant gap between research advancements and practical applications. The study highlights the need for more focused efforts to address these gaps and make pediatric exoskeletons more accessible and effective in real-world settings.
The article “Wearable Upper Limb Robotics for Pervasive Health: A Review” provides a comprehensive overview of wearable upper limb exoskeletons, focusing on both rigid and soft designs for pervasive health applications [
16]. It emphasizes the critical role of these technologies in enhancing rehabilitation therapies, improving patients’ quality of life, and boosting self-esteem.
The review explores technical challenges and opportunities in exoskeleton design, underscoring the importance of comfort, biocompatibility, and ease of operation for long-term use. It also discusses the integration of biological signals, such as electromyography (EMG), for control methods and highlights the potential of soft exoskeletons across various healthcare settings.
While the review covers key design considerations and the use of EMG signals for control, it does not delve into how neural networks could enhance these systems. Specifically, it misses an opportunity to explore how neural networks could improve real-time control, adaptability, and personalized therapy. Although it identifies challenges related to technical design and long-term usability, a discussion on the potential of neural networks to address these challenges and advance wearable robotics is notably absent.
Sarhan, Al-Faiz, and Takhakh’s review in
Heliyon examines advancements in EMG and EEG-based control systems for upper limb rehabilitation robots designed for stroke patients, emphasizing their importance in improving rehabilitation outcomes [
17]. The article highlights the advantages of non-invasive EEG techniques for monitoring brain activity and EMG signals for muscle control. The findings suggest that combining EEG and EMG signals can enhance the accuracy of robotic exoskeleton control, potentially reducing rehabilitation times and improving functional recovery. Continuous development and clinical evaluation are recommended to maximize the effectiveness of these systems.
While the review provides valuable insights, it has some limitations. It focuses primarily on theoretical discussions, with minimal real-world validation or clinical trials. Although it underscores the potential of EMG and EEG signals for robot control, it does not adequately address challenges such as signal variability, muscle fatigue, and reliable signal acquisition in practical applications. The review also lacks specific strategies for clinical integration, offering broad future research directions without actionable recommendations.
Furthermore, while the article emphasizes the potential of hybrid EMG-EEG systems, it does not sufficiently explore the complexities of synchronizing multiple signals for effective control. Although the authors highlight the benefits of combining EEG and EMG, they overlook the role of neural networks in enhancing motion prediction and enabling real-time, personalized therapy. This gap suggests the need for more focused research on leveraging advanced algorithms to address these challenges and improve clinical applicability.
Xu et al. provide a systematic review of upper limb rehabilitation exoskeletons (ULR-EXO) for stroke patients, with a focus on execution and perception technologies [
18]. The review examines the anatomical and kinematic characteristics of the upper limb and highlights the necessity of human–robot compatibility in rehabilitation. It categorizes perceptual signals and execution mechanisms, addressing challenges in sensor applications and rehabilitation strategies. The importance of tailoring ULR-EXO designs to meet the diverse rehabilitation needs of stroke patients at different treatment stages is emphasized, offering guidance for future research and development in the field.
The review presents a comprehensive overview of these systems, though many devices discussed are still in the early stages of development and lack extensive real-world validation or clinical trials. While it covers control strategies and actuation mechanisms, it does not adequately address practical challenges such as reliable force transmission, alignment with human biomechanics, and preventing device slippage. Although comfort and usability are highlighted as critical factors, the article offers limited concrete solutions for achieving them.
Additionally, the review calls for shared evaluation metrics and increased clinical testing but provides broad recommendations that lack specificity. It also overlooks how neural networks could enhance motion control, adapt to patient needs in real-time, and personalize therapy. These gaps underscore the need for further research focused on practical, user-centered applications to advance the development and implementation of ULR-EXO systems.
A comprehensive examination of the existing literature reveals that the available review articles do not focus solely on the use of neural networks for physiological signal classification and interpretation. Unlike previous reviews, which focus broadly on machine learning algorithms, control strategies, sensor systems, or device design, our article specifically addresses the critical role of neural networks in enhancing exoskeleton-assisted rehabilitation. The emphasis is on recently developed neural networks used for interpreting physiological signals, maneuvering the exoskeleton system, and assessing neurological recovery. This article also reviews how various kinds of neural networks, such as CNNs, LSTMs, and RBFNNs, offer superior performance in real-time adaptability, motion prediction, and personalized therapy.
This article follows the systematic review and meta-analysis methodology. The article selection methodology, review, and analysis are presented in
Section 3.
4. Exploring Neural Network Applications in Robot-Assisted Rehabilitation
The application of neural networks in exoskeleton-based robot-assisted upper limb rehabilitation has seen significant advancements in recent years. Neural networks, a subset of artificial intelligence, have demonstrated effectiveness in modeling complex, non-linear systems, making them particularly suited for rehabilitation technologies [
19]. In robot-assisted rehabilitation applications, neural networks are used to interpret physiological signals, predict motion intention, and improve the control strategy of exoskeleton systems. Their adaptive learning capabilities allow rehabilitation plans to be tailored to the unique needs of patients recovering from stroke, spinal cord injury, and other neuromotor impairments [20]. A variety of neural network architectures are available, and the choice of architecture depends on the application. The types most commonly used for movement prediction, motor function recovery, and patient–exoskeleton interaction are convolutional neural networks (CNNs), long short-term memory (LSTM) networks, radial basis function neural networks (RBFNNs), and deep neural networks (DNNs).
Table 1 presents the neural networks’ classification based on their architecture and functionalities.
The remainder of this section discusses different types of neural networks and their applications in robot-assisted rehabilitation, based on insights from published research articles.
4.1. Convolutional Neural Networks (CNNs)
A Convolutional Neural Network (CNN) is a specialized deep learning model designed for processing structured grid-like data (e.g., images). CNNs have proven highly effective in enhancing upper limb rehabilitation when integrated with robotic exoskeletons. Their key strength lies in their ability to automatically learn and recognize complex patterns from data, making them ideal for interpreting signals critical for exoskeleton control [
21]. In rehabilitation, exoskeletons aid patients by supporting their movements, and CNNs analyze data from sources such as surface electromyography (sEMG) signals and video feeds to fine-tune this support [
22]. These signals reflect the patient’s muscle activity and movements and allow the CNN to dynamically adjust the exoskeleton’s assistance in real time for a personalized therapeutic experience.
CNNs work through multiple layers, starting with the convolutional layer, where small filters detect simple patterns, such as edges. As data move through deeper layers, more complex features like shapes and movement patterns are identified.
Figure 2 illustrates a Convolutional Neural Network (CNN) architecture. The input layer on the left receives data, which is organized into channels for processing. The middle section contains multiple convolutional layers, where filters (orange nodes) extract features by identifying spatial patterns such as edges, textures, and shapes. Non-linear activation functions such as ReLU are applied, and pooling operations reduce dimensionality while preserving key features. In the final layer on the right, the extracted features are flattened and passed through fully connected layers. These layers combine weights and activations to generate the final output, such as class scores or predictions.
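To make this layered processing concrete, the sketch below (Python/PyTorch) shows a minimal one-dimensional CNN that maps a window of multi-channel sEMG samples to movement-class scores. The channel count, window length, and layer sizes are illustrative assumptions rather than values drawn from any study reviewed here.

```python
# A minimal, illustrative 1-D CNN for classifying windows of multi-channel sEMG
# into movement classes. Channel counts, window length, and layer sizes are
# assumptions for demonstration only, not taken from any study cited here.
import torch
import torch.nn as nn

class SEMGConvNet(nn.Module):
    def __init__(self, n_channels=8, window_len=200, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2),  # detect simple local patterns
            nn.ReLU(),
            nn.MaxPool1d(2),                                      # reduce temporal resolution
            nn.Conv1d(16, 32, kernel_size=5, padding=2),          # combine into more complex features
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                                         # flatten feature maps
            nn.Linear(32 * (window_len // 4), 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),                             # class scores for movement types
        )

    def forward(self, x):            # x: (batch, n_channels, window_len)
        return self.classifier(self.features(x))

# Example forward pass on a synthetic batch of sEMG windows.
model = SEMGConvNet()
scores = model(torch.randn(4, 8, 200))   # -> shape (4, 5)
```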
This layered feature extraction allows the exoskeleton to respond more accurately to the user’s needs [
23]. CNNs also efficiently process large data volumes, recognizing subtle movement details, and can integrate various sensor inputs to create a comprehensive understanding of user actions. This integration enhances rehabilitation outcomes by providing adaptive, real-time support [
24]. Numerous studies have investigated the use of CNNs to predict movements from muscle activity. The following paragraphs highlight key representative works in this area.
Li et al. proposed a novel home-based exoskeleton system to improve upper limb stroke rehabilitation [
25]. This system aims to support early stroke recovery by enhancing patient–device interaction while prioritizing affordability and safety. It uses a hybrid machine learning model that combines convolutional neural networks (CNNs) and long short-term memory (LSTM) networks to interpret surface electromyography (sEMG) signals.
Estimating movement intentions from sEMG signals is challenging due to subject variability. To address this, the authors developed a subject-independent motion estimation method. CNNs are used for feature extraction, while LSTMs capture temporal patterns in muscle activity, eliminating the need for subject-specific calibration. Experiments demonstrated the model’s effectiveness, achieving an estimation error of 10° and a delay of 300 ms. This highlights its ability to generalize across sEMG signal variability and its suitability for real-time control.
The CNN-LSTM model’s performance was evaluated using statistical metrics, showcasing its effectiveness in subject-independent estimation of continuous movements. The Mean Absolute Error (MAE) for the three test subjects was 12.8540, 13.0131, and 11.3080, while the coefficient of determination (R²) values were 0.9250, 0.9274, and 0.9347, respectively. These results demonstrate the model’s high accuracy and strong generalization, emphasizing its potential for real-world application in home-based upper limb rehabilitation systems.
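The cited implementation details are not reproduced here, but the hedged sketch below illustrates the general pattern of such hybrid models: a small CNN extracts spatial features from each sEMG frame, an LSTM tracks how those features evolve over time, and an output head regresses a continuous joint angle, with MAE and R² computed as in the evaluation above. All shapes and layer sizes are assumptions for illustration, not the authors’ configuration.

```python
# Generic CNN-LSTM sketch for continuous joint-angle estimation from sEMG.
# This is NOT the cited authors' implementation; shapes and sizes are placeholders.
import torch
import torch.nn as nn

class CNNLSTMRegressor(nn.Module):
    def __init__(self, n_channels=8, hidden=64):
        super().__init__()
        # CNN extracts spatial features from each short sEMG frame
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # one feature vector per frame
        )
        # LSTM captures how those features evolve over consecutive frames
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # predicted joint angle

    def forward(self, x):                  # x: (batch, frames, channels, samples)
        b, t, c, s = x.shape
        feats = self.cnn(x.reshape(b * t, c, s)).squeeze(-1).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])       # estimate from the last time step

# Evaluation metrics of the kind reported above (MAE and R²).
def mae(pred, target):
    return (pred - target).abs().mean()

def r2(pred, target):
    ss_res = ((target - pred) ** 2).sum()
    ss_tot = ((target - target.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot
```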
Tryon et al. explored the application of wearable robotic exoskeletons for rehabilitation and mobility assistance [
26]. Their research focused on enhancing the human–machine interface by combining electroencephalography (EEG) and electromyography (EMG) signals. Convolutional Neural Networks (CNNs) were utilized to automatically extract and integrate information from these signals, addressing the limitations of traditional machine learning approaches that rely on manual feature extraction.
EEG and EMG data were collected during elbow flexion–extension tasks, and CNN models were trained using time–frequency and time–domain representations. The results demonstrated a mean accuracy of 80.51% and an F-score of 80.74%, significantly outperforming the baseline accuracy of 33.33%. These findings highlight the ability of CNNs to effectively fuse EEG and EMG signals, enhancing the control systems of exoskeletons and advancing rehabilitation technologies for individuals with movement disorders.
The article “Path Planning and Impedance Control of a Soft Modular Exoskeleton for Coordinated Upper Limb Rehabilitation” presents a novel approach to upper limb rehabilitation using a soft modular exoskeleton powered by pneumatic artificial muscles (PAMs) [
27]. The primary focus is on improving joint coordination, a crucial aspect of stroke recovery. The study proposes a hybrid convolutional neural network–long short-term memory (CNN-LSTM) model to capture the coordination relationships between the elbow and wrist joints.
A coordinated training exercise (Task 1), such as the action of drinking water, is used to evaluate the system. The model also generates adaptive, patient-specific trajectories for rehabilitation tasks, such as touching the head. An impedance control strategy enhances safety by constraining the robot’s movements within a virtual coordination tunnel.
Experimental results demonstrate that the CNN-LSTM model effectively quantifies joint coordination. The impedance control approach improves movement coordination and rehabilitation outcomes. This method emphasizes personalized training, patient engagement, and safety, representing a significant step forward in stroke rehabilitation technology.
The study used R² and mean square error (MSE) values to evaluate the CNN-LSTM model against a backpropagation (BP) baseline. For Task 1, the CNN-LSTM model achieved an average R² of 0.926 for the elbow joint and 0.848 for the wrist, compared to 0.911 and 0.807, respectively, for the BP model. The CNN-LSTM also achieved lower MSE values: 39.842 for the elbow and 10.492 for the wrist, compared to 47.834 and 13.398, respectively, for the BP model. These results highlight the effectiveness of the proposed approach in enhancing joint coordination for rehabilitation.
Jiang et al. investigate the application of convolutional neural networks (CNNs) to recognize shoulder motion patterns using surface electromyography (sEMG) signals from 12 upper limb muscles [
28]. The study aims to enhance motion pattern recognition, a critical component of assistive devices and rehabilitation technologies.
The CNN model was tested under various conditions, including different motion speeds, subject variability, and the use of multiple EMG recording devices. Results demonstrate the potential of CNNs to process sEMG signals effectively, improving motion recognition for rehabilitation and assistive applications.
Accuracy was used as the primary evaluation metric for motion pattern recognition. The model achieved an average accuracy of 97.57% (±0.21%) for normal-speed motion and 79.64% (±8.16%) for fast-speed motion when tested on 40% of the EMG datasets. Furthermore, accuracy increased with the inclusion of more subjects in the training dataset, highlighting the importance of diverse training data in improving model performance.
Tang et al. introduced an innovative upper limb rehabilitation exoskeleton system (VR-ULE) that integrates motor imagery (MI) brain–computer interface (BCI) paradigms with virtual reality (VR) for stroke rehabilitation [
29]. The VR-ULE system employs MI electroencephalogram (EEG) recognition models based on convolutional neural networks (CNNs) and squeeze-and-excitation (SE) blocks. These models interpret patients’ motion intentions, enabling them to control the exoskeleton during rehabilitation exercises.
A notable feature of the system is its adaptability to individual EEG signals, as MI EEG features vary among patients. SE blocks enhance rehabilitation by identifying the importance of different feature channels and focusing on the most informative frequency bands for each patient. This personalized approach improves treatment outcomes by tailoring rehabilitation to individual needs. Additionally, the integration of MI cues into VR scenes promotes neuroplasticity and interhemispheric balance, addressing the limitations of traditional MI-BCI systems, such as poor adaptability.
The system’s performance was validated through offline training and online experiments, demonstrating significant improvements. The study evaluated the models using precision, recall, and F-score metrics. The CNN + SE model achieved a classification accuracy of 87.53% ± 1.07%, compared to 83.32% ± 1.04% for the standard CNN model. These results highlight the VR-ULE system’s potential as an effective tool for stroke rehabilitation.
Bu et al. propose an innovative method to recognize limb joint motions and detect joint angles directly from sEMG signals using an enhanced detection algorithm [
30]. This approach eliminates the need for traditional feature extraction, providing a more efficient and secure way to identify movements.
The method combines MobileNetV2 with the Ghost module as the feature extraction network. For target detection, it utilizes the Yolo-V4 algorithm, known for its accuracy and speed. Yolo-V4 is applied to estimate upper limb joint movements and predict joint angles. Experimental results show the algorithm achieves approximately 78% accuracy in movement identification, with a processing time of about 17.97 milliseconds per image on a PC.
These findings highlight the potential of this method to enhance upper limb exoskeleton control, particularly in rehabilitation applications.
Bakri et al. introduce a robotic exoskeleton system designed to assist patients with paralysis, enabling interaction with their environment with minimal effort [
31]. The system combines Electroencephalogram (EEG) signals and computer vision technologies, including Microsoft Kinect, for object detection and recognition. Its primary goal is to enhance the independence of paralysis patients in daily activities, reducing reliance on caregiver support.
The system consists of four key components: an EEG module, an infrared depth camera, a 3D-printed upper limb exoskeleton, and a motorized wheelchair. The EEG module captures brain signals when patients focus on objects, while the depth camera determines the 3D coordinates of these objects. The exoskeleton, designed for various paralysis conditions, uses inverse kinematics to guide its movements.
The hardware features a lightweight, flexible exoskeleton constructed from 3D-printed materials, with motors selected based on torque requirements. Convolutional Neural Networks (CNNs) are employed for object detection and recognition, significantly enhancing the system’s accuracy in classifying and interacting with objects. This integration demonstrates the potential of the system to improve the autonomy of patients in real-world settings.
Sedighi et al. propose a novel method for motion intention detection using surface electromyography (sEMG) and deep learning techniques [
32]. Their approach employs Convolutional Neural Networks (CNNs) for spatial feature extraction and Long Short-Term Memory (LSTM) models to capture temporal relationships. This combination is used to predict upper limb joint movements and control a pneumatic cable-driven upper limb exoskeleton in real time.
The system processes data from three sEMG channels and joint angle information to generate motion trajectories, assisting users in various tasks. By integrating CNNs and LSTMs, the model enhances the prediction of future movement intentions, allowing the exoskeleton to provide tailored support based on the user’s muscle activity. The study also addresses practical challenges, including variability in speed, payload, and electrode placement.
Extensive experiments demonstrate the model’s robust performance, with an average root mean square error (RMSE) of 0.069 m/s and a standard deviation of 0.005 m/s when following reference trajectories. The system significantly reduces the user’s muscle effort, making the exoskeleton a valuable tool for rehabilitation and assistive applications. This research highlights the potential of combining deep learning with sEMG to create more responsive and adaptable human–robot interaction systems.
Lee et al. present the design and implementation of an intent-driven robotic exoskeleton aimed at improving upper extremity strength [
33]. The system uses Pneumatic Artificial Muscles (PAMs) to mimic human muscle function by converting compressed air into mechanical motion. It offers a high force-to-weight ratio and natural compliance, utilizing lightweight carbon fiber and aluminum in its construction.
The exoskeleton is equipped with three PAMs to assist joint movements. The pneumatic actuators operate within a pressure range of 10 to 60 psi, with a safety valve set at 70 psi to ensure safe operation. Advanced features include a cloud-based deep learning algorithm that predicts user intent by classifying upper extremity activities. Soft bioelectronic sensors monitor electromyography (EMG) signals, which are processed through cloud computing to identify intended movements. The resulting assistance produces notable reductions in EMG activity during elbow and shoulder flexion.
The exoskeleton is highly adaptable, featuring 3D-printed arm mounts and adjustable components to accommodate various body sizes. It is lightweight, comfortable, and provides real-time assistance with a response time of 500–550 ms. The study underscores the potential of this technology for individuals with neuromotor disorders and its practicality in real-world applications.
Zhong et al. propose a new method for recognizing various upper limb rehabilitation movements using surface electromyography (sEMG) signals. The approach utilizes continuous wavelet transform (CWT) to capture the time–frequency characteristics of these signals [
34]. The methodology outlines the data acquisition process and introduces a multiscale time–frequency information fusion approach. A multiple-feature fusion network (MFFN) is introduced by combining DenseNet and Deep Belief Network (DBN) architectures to improve sEMG signal recognition and extraction. By adjusting the DenseNet framework, the MFFN is designed to improve adaptability and stability in identifying upper limb movements across various rehabilitation exercises. It evaluates time–frequency features bidirectionally. In the feature extraction stage, the MFFN considers both the current layer and cross-layer features within the convolutional neural network (CNN). It also minimizes the loss of time–frequency information during the convolution process. The performance analysis involves statistical evaluations, such as quartile difference, mean difference, and variance, to assess the stability and uniqueness of the extracted features. The results show that the MFFN effectively extracts stable and meaningful time-frequency features from complex sEMG signals. This enables clear differentiation between various rehabilitation movements. The fusion learning approach significantly enhances movement recognition.
The next section explains the applications of Radial Basis Function Neural Networks in upper extremity rehabilitation.
4.2. Radial Basis Function Neural Networks (RBFNNs)
A Radial Basis Function Neural Network (RBFNN) is a type of artificial neural network that utilizes radial basis functions as activation functions. RBFNNs excel in tasks such as pattern recognition and signal processing. This makes them especially valuable for interpreting complex data from biological signals [
35]. In robotic exoskeletons for arm rehabilitation, RBFNNs process signals like surface electromyography (sEMG) to interpret muscle activity. By learning from these signals, RBFNNs enable the exoskeleton to respond to the user’s movements in real-time and provide personalized support during therapy sessions [
36].
The network consists of three layers: an input layer, a hidden layer where radial basis functions are placed, and an output layer. The hidden layer captures the relationship between input signals and the desired output, such as movement assistance.
Figure 3 shows the architecture of a Radial Basis Function Neural Network (RBFNN), commonly used for function approximation, classification, and regression tasks. The network has three layers: the input layer (left) transmits the input data, while the hidden layer (middle) applies Radial Basis Functions (RBFs), such as Gaussian functions, to calculate distances between the input and predefined center points. These transformations map the input into a new space, capturing complex patterns. In the output layer (right), the weighted outputs from the hidden layer are combined using a summation function to generate the final prediction or decision.
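A minimal NumPy sketch of the forward pass just described is given below; the centers, widths, and output weights are random placeholders, whereas in a trained RBFNN they would be fitted to data such as sEMG features.

```python
# Minimal NumPy sketch of the RBFNN forward pass described above and in Figure 3:
# Gaussian hidden units measure the distance of the input to predefined centers,
# and a weighted sum of their activations forms the output. All parameters here
# are random placeholders; in practice they are learned from data.
import numpy as np

def rbfnn_forward(x, centers, widths, weights, bias=0.0):
    """x: (n_inputs,), centers: (n_hidden, n_inputs), widths: (n_hidden,),
    weights: (n_hidden,). Returns a scalar prediction."""
    dists = np.linalg.norm(centers - x, axis=1)            # distance of input to each center
    hidden = np.exp(-(dists ** 2) / (2.0 * widths ** 2))   # Gaussian radial basis activations
    return hidden @ weights + bias                          # weighted summation at the output layer

rng = np.random.default_rng(0)
x = rng.normal(size=4)                    # e.g., features from sEMG and joint sensors
centers = rng.normal(size=(10, 4))
widths = np.full(10, 1.0)
weights = rng.normal(size=10)
print(rbfnn_forward(x, centers, widths, weights))
```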
A key advantage of RBFNNs is their ability to quickly learn from small datasets, making them ideal for rehabilitation scenarios where users’ movements vary and the system needs to adapt. RBFNNs can also integrate data from different sources, such as sEMG sensors and motion-tracking devices. This adaptability leads to more responsive exoskeleton systems, resulting in better rehabilitation outcomes and faster patient recovery [
37]. The following paragraphs present representative recent research that used RBFNNs for rehabilitation robotics applications.
Kong et al. present a novel approach to upper limb rehabilitation by developing a control method for an exoskeleton that actively involves patients by recognizing their movement intentions through surface electromyography (sEMG) signals [
38]. This method combines radial basis function (RBF) and sliding mode impedance control with a least-squares support vector machine (LSSVM) for joint angle prediction. This combined method addresses critical challenges such as poor human–machine coupling and compliance in rehabilitation. By using sEMG signals to detect patient movement intentions, the exoskeleton operates in sync with natural movements. It enhances patient engagement and makes rehabilitation more intuitive. A joint angle prediction model based on LSSVM enables real-time integration between the user and the exoskeleton. Additionally, the adaptive sliding mode controller, built on the RBF network, adjusts the motion trajectory dynamically based on the interaction force between the user and the exoskeleton. This system improves compliance by adapting to the user’s physical condition and effort. Experimental results demonstrate that the RBF-based impedance control method effectively reduces interaction force, enhances comfort and safety, and stabilizes the system’s impedance characteristics. The authors mentioned that the experimental data closely aligned with the simulation results, with discrepancies attributed to the unpredictability of human motion.
Zhang et al. introduce a novel upper extremity exoskeleton robot to provide passive rehabilitation therapy. It combines iterative learning control (ILC) with sliding mode control to address the complexities of operating a wearable system with six degrees of freedom [
39]. The goal is to improve rehabilitation by accurately replicating human motion while adapting to dynamic uncertainties. Using motion data from a healthy subject captured with a VICON system, realistic joint space trajectories are generated to mimic natural arm movements. To ensure precise tracking of these trajectories, an iterative learning controller is developed. It estimates dynamic parameters and removes the identical initial condition (i.i.c.) requirement through a polynomial reconstruction method. The control strategy also includes an adaptive law to manage non-periodic disturbances, such as friction and tissue torques, that affect the system’s stability. A sliding mode controller mitigates chattering and maintains robustness, and the stability of the system is proven through a composite energy function. The proposed control system is validated through simulations and experiments, demonstrating its ability to track trajectories accurately, handle disturbances, and remain stable. The study used evaluation metrics that included position states, tracking errors, and torques to assess tracking performance. The experiment spanned 40 iterations, with performance evaluated at the first iteration (k = 1) and the final iteration (k = 40). Results showed improved tracking performance and reduced errors by the 40th iteration despite the presence of unknown human–robot dynamics parameters. This work marks a significant advancement in sliding mode-based adaptive control strategies for rehabilitation exoskeletons.
Hasan S [
40] focuses on the development of a radial basis function (RBF) neural network-based controller for a human lower extremity exoskeleton robot. This controller is designed to manage the complex dynamics associated with a seven-degrees-of-freedom rehabilitation exoskeleton robot. The study emphasizes the computational efficiency of the RBF network. It allows for the conversion of sequentially structured robot dynamics into parallel architecture dynamics. This process enhances performance without the need for high-speed CPUs or multicore processors. In this paper, the training performance of the RBF network is quantified by the mean square error, with a target set to prevent overfitting and maintain a compact network size. The robustness of the developed controller to parameter variations is analyzed using a statistical analysis called ANOVA, and its effectiveness is demonstrated through comparative studies with other control techniques, including the Sliding Mode Controller, Computed Torque Controller, Adaptive Controller, and Linear Quadratic Regulator. The paper also highlights the advantages of using RBF networks over conventional techniques, particularly in terms of stability and computational efficiency. The use of a realistic friction model for joint friction further enhances trajectory tracking accuracy. The RBF network improved computational efficiency, requiring only 0.002 s of a 5.399 s simulation, enhancing performance by nearly 220 times.
Overall, the study presents a comprehensive approach to exoskeleton robot control, leveraging the unique capabilities of RBF networks to address the challenges posed by complex robotic systems.
Guo et al. [
41] propose a novel control strategy utilizing Radial Basis Function (RBF) neural networks to enhance the accuracy and safety of upper limb rehabilitation robots. This approach addresses the limitations of traditional PID control systems. The effectiveness of the RBF-based method is validated through rigorous MATLAB simulations, focusing on key factors such as performance, safety, and stability. The results demonstrate significant improvements, including a 9° enhancement in the mechanical structure’s accuracy compared to conventional systems. Along with increased control precision and safety, these findings suggest that the RBF neural network system offers a more reliable treatment option for patients with hemiplegia, providing better outcomes and reducing discomfort. By overcoming the shortcomings of traditional control methods, this research advances rehabilitation robotics and underscores the need for ongoing innovation in control strategies to ensure effective and safe interventions for stroke survivors.
Overall, this work paves the way for more advanced and patient-friendly rehabilitation technologies, enhancing the quality of care for individuals undergoing upper limb rehabilitation.
Xu et al. introduce a novel strategy for estimating joint torque using surface electromyography (sEMG) signals to enhance exoskeleton control by accurately identifying motion intention [
42]. This advancement is essential in rehabilitation robotics, as accurately predicting a user’s movement intention can greatly enhance the functionality of exoskeletons. The proposed method introduces two key advancements: system identification for elbow angle estimation and neural networks for optimizing muscle activation factors. Unlike traditional methods that rely on angular transducers, this approach estimates the elbow angle using a system identification technique. It simplifies the system and reduces the hardware requirements. The estimated angle is then used in a Hill-type muscle model to simulate muscle contractions. Additionally, neural networks are applied to refine torque estimation by adapting to variations in sEMG signals among different users. Experimental validation showed improvements in torque estimation accuracy, with a 2–9% increase in correlation coefficient and reductions in root mean square error (RMSE) by 0.2–2.5 Nm and decreasing the normalized root mean square error (NRMSE) by 0.5–9.5%.
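As background for this class of methods, the toy sketch below outlines the general Hill-type pipeline that sEMG-driven torque estimators typically build on: a non-linear EMG-to-activation mapping, an active-plus-passive muscle force term scaled by maximum isometric force, and a moment arm converting force to joint torque. The curves and constants are placeholders and are not taken from the cited work.

```python
# Toy illustration of a generic Hill-type pipeline for sEMG-driven torque
# estimation. Specific curves and constants are placeholders, not those of
# the cited study.
import numpy as np

def hill_type_torque(emg_env, f_max_iso, moment_arm,
                     fl=1.0, fv=1.0, f_passive=0.0, shape=-2.0):
    """emg_env: normalized sEMG envelope in [0, 1]."""
    # Non-linear EMG-to-activation mapping (shape factor typically in [-3, 0))
    a = (np.exp(shape * emg_env) - 1.0) / (np.exp(shape) - 1.0)
    # Active + passive fibre force, scaled by maximum isometric force
    force = f_max_iso * (a * fl * fv + f_passive)
    return force * moment_arm          # joint torque in N·m

# Example with assumed elbow-flexor values.
print(hill_type_torque(emg_env=0.4, f_max_iso=600.0, moment_arm=0.03))
```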
Wu et al. introduce a novel neural adaptive backstepping sliding mode control (NABSMC) strategy designed for upper limb exoskeletons in rehabilitation training [
43]. The NABSMC algorithm represents a significant advancement in robot-assisted rehabilitation therapy, which is known for aiding motor function recovery in individuals with disabilities. The proposed control strategy integrates a radial basis function network (RBFN) to address dynamic uncertainties and external disturbances commonly encountered in human–robot interactions. The control law of the NABSMC system consists of three key components: the equivalent control term, the hitting control term, and the disturbance compensation term, ensuring both robustness and precision. The stability and boundedness of the closed-loop system are verified through the Lyapunov stability theorem, providing a strong theoretical foundation for the control approach. To assess the algorithm’s effectiveness, experiments were conducted with three volunteers, comparing the NABSMC to an optimal backstepping sliding mode control (OBSMC) strategy. Results showed that the NABSMC outperforms the OBSMC in terms of trajectory tracking accuracy, step response, and robustness during repetitive passive training, making it a promising solution for improving rehabilitation outcomes.
Wu et al. introduce an innovative approach to improve robot-assisted rehabilitation therapy for individuals with upper limb disabilities, such as those caused by stroke, spinal cord injury, or orthopedic injury [
36]. The study focuses on developing an adaptive admittance control strategy combined with a neural network-based disturbance observer (AACNDO) to enhance patient–robot interaction. The AACNDO system addresses the limitations of traditional rehabilitation, which is often labor-intensive, expensive, and unable to meet growing demands. By using a dynamics-based adaptive admittance controller and a radial basis function network as a disturbance observer, the system can dynamically adjust to the patient’s motion intentions and recovery stage. It ensures personalized and effective therapy. Experimental validation, including sinusoidal and circular trajectory tracking and intention-based resistive training with volunteers, demonstrated the system’s effectiveness in delivering both passive and cooperative rehabilitation training. These results indicate the potential of the AACNDO approach to significantly advance rehabilitative robotics and improve patient outcomes.
Wang et al. introduce a novel framework aimed at enhancing motor learning in poststroke rehabilitation through robot-assisted therapy [
37]. This framework adjusts the reference trajectory and robotic assistance in real time, responding dynamically to the patient’s level of active participation so that the system continuously adapts to the user’s input for more personalized support. The system utilizes the minimum-jerk model to create smooth movement trajectories and incorporates movement phases detected by an adaptive frequency Hopf oscillator (AFO) to align with the patient’s natural rhythm. Gaussian radial basis functions (RBFs) are employed to model the patient’s motor abilities, enabling the system to adjust its assistance accordingly and keeping the support flexible and responsive to the patient’s performance level. Simulations validated the framework’s effectiveness, using tracking errors and root mean square (RMS) errors as evaluation metrics. During simulations with square-wave force input, tracking errors were limited to less than 0.0235 m along the x-axis and 0.0045 m along the y-axis, demonstrating the framework’s capability to dynamically adjust assistance levels and to adapt to patient fatigue or changes in intention, ensuring continuous support. Future work aims to implement this framework in clinical settings using the CASIA-ARM, extending it to both upper and lower extremity exoskeletons and broadening the scope of rehabilitation tasks to accommodate various impairment levels.
Guo et al. present a novel task performance-based adaptive velocity assist-as-needed (TPAVAAN) control scheme for an upper limb exoskeleton. It is aimed at enhancing rehabilitation training by providing appropriate assistance to subjects [
44]. The TPAVAAN controller is structured with an outer position and velocity-based double impedance control (PVDIC) loop and an inner barrier Lyapunov function-based time–delay estimation controller with neural network compensation (NN-BLFTDEC). The PVDIC loop calculates assistive force through a position-based impedance controller for trajectory tracking and a velocity-based impedance controller to maintain the desired task velocity. The NN-BLFTDEC is designed to constrain tracking errors using a barrier Lyapunov function, while a time–delay estimation method and radial basis function neural network are employed to estimate uncertain exoskeleton dynamics. The controller adapts the assistance level based on the subject’s motor capability, assessed through a task performance function that considers position tracking error and assistive force. Co-simulation studies confirm the controller’s effectiveness, revealing reduced tracking errors and enhanced task performance. As the subject’s motor capability improves, the system responds by increasing the desired velocity accordingly. The TPAVAAN controller has been shown to be more effective than previous methods in promoting active participation and improving rehabilitation outcomes.
Wu et al. present a novel approach to robot-assisted rehabilitation by introducing a soft elbow exoskeleton designed to enhance the rehabilitation training of disabled patients [
45]. The primary focus is on integrating active patient involvement and voluntary participation into the rehabilitation process to improve therapy outcomes. This paper presents an adaptive cooperative control strategy that enhances joint torque estimation and incorporates a time–delay sliding mode control approach. The exoskeleton uses surface electromyography signals from the biceps and triceps to estimate human elbow joint torque and motion intention. These signals are processed through a Hill-type musculoskeletal model and a Gaussian radial basis function network to provide accurate estimations. The paper discusses the design of the soft elbow exoskeleton and the enhanced method for joint torque estimation. It also outlines the development of an adaptive cooperative control system with improved joint torque estimation (ACC-IJTE). Experiments conducted with healthy volunteers and stroke patients demonstrate the effectiveness of the proposed control strategy. The results show that the scheme ensures accurate joint torque estimation and precise position control. This allows patients to actively influence the training path according to their motion intention, adjusting for varying training intensities.
4.3. Back Propagation Neural Network (BPNN)
Backpropagation Neural Networks (BPNNs), a type of Multilayer Perceptron (MLP), are highly effective at learning from data over time, making them ideal for interpreting complex signals. This capability enhances the exoskeleton’s ability to adapt efficiently to user movements by analyzing and responding to patterns in the input signals [
44]. BPNNs consist of three primary layer types: an input layer, one or more hidden layers, and an output layer. The input layer receives data, such as muscle signals, which are processed in the hidden layers through weighted connections and activation functions to extract meaningful patterns. The output layer then generates predictions, such as movement guidance or adjustments in the exoskeleton’s assistance. The architecture of a BPNN is shown in
Figure 4.
A core feature of BPNNs is their training process, which uses backpropagation to minimize errors and improve performance. During training, the network compares the predicted output with the actual result and calculates the error. This error is propagated backward through the network to adjust the connection weights, typically using optimization techniques like gradient descent. This iterative process allows the BPNN to become more accurate over time [
46].
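To make this training loop concrete, the following minimal sketch trains a small BPNN with one hidden layer by plain gradient descent. The synthetic data, the 8-16-1 layer sizes, and the learning rate are illustrative placeholders rather than values drawn from any of the reviewed systems.

```python
import numpy as np

# Minimal BPNN sketch: one hidden layer trained with plain gradient descent.
# The data are synthetic stand-ins for sEMG features (inputs) and an assistance
# command (output); dimensions and hyperparameters are illustrative only.

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))             # 200 samples, 8 sEMG features
y = np.tanh(X @ rng.standard_normal((8, 1)))  # synthetic target signal

# Initialize weights for an 8-16-1 network.
W1 = rng.standard_normal((8, 16)) * 0.1
b1 = np.zeros((1, 16))
W2 = rng.standard_normal((16, 1)) * 0.1
b2 = np.zeros((1, 1))
lr = 0.05

for epoch in range(500):
    # Forward pass: hidden layer with tanh activation, linear output.
    h = np.tanh(X @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y                           # prediction error
    loss = np.mean(err ** 2)

    # Backward pass: propagate the error and apply gradient descent.
    grad_y = 2 * err / len(X)
    grad_W2 = h.T @ grad_y
    grad_b2 = grad_y.sum(axis=0, keepdims=True)
    grad_h = grad_y @ W2.T * (1 - h ** 2)     # derivative of tanh
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0, keepdims=True)

    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print(f"final training MSE: {loss:.4f}")
```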
One significant advantage of BPNNs is their ability to handle large datasets and learn complex patterns, enabling them to adapt to individual users. By analyzing specific movement data, BPNNs provide personalized and effective rehabilitation therapy, which is essential in addressing the varied needs of patients [
46]. Additionally, BPNNs can integrate information from multiple sources, such as muscle sensors and motion-tracking devices, to create a more comprehensive understanding of user movements. This integration enhances the exoskeleton’s capacity to deliver precise, real-time assistance, ultimately improving rehabilitation outcomes and patient recovery.
By combining adaptability, learning efficiency, and the ability to integrate diverse data, BPNNs represent a critical advancement in rehabilitation robotics. Their application ensures smarter exoskeleton systems capable of meeting the complex and dynamic needs of users while delivering safe and effective therapy.
Li et al. introduce a real-time control method for upper limb exoskeletons aimed at enhancing stroke rehabilitation through an active torque prediction model [
47]. Traditional rehabilitation therapies often face limitations such as high costs and the need for one-on-one care, compounded by a shortage of physiotherapists. The proposed solution addresses these challenges by using electromyography (EMG) signals and elbow joint angles as control inputs. These signals are processed to extract relevant features, which are then fed into a backpropagation (BP) neural network to predict active elbow torque. The BP neural network was chosen for its ability to model complex non-linear dynamics. The developed network consists of three layers, with a hidden layer using the tansig activation function, and principal component analysis (PCA) is applied to reduce the input features to a five-dimensional vector, improving the accuracy of the torque predictions. The network undergoes training with up to 3000 sessions, employing techniques like momentum and adaptive learning rates to enhance performance. Experimental results confirm the model’s suitability for real-time applications, achieving a high output frequency of 31.80 Hz, an accuracy of 94.98%, and a root mean square error (RMSE) of 0.1956 Nm. With a real-time delay of just 40 milliseconds, the model greatly enhances adaptability in exoskeleton-assisted rehabilitation therapy, providing a personalized and precise approach that particularly benefits stroke patients.
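The sketch below traces the shape of the pipeline described above: features are reduced to five principal components and passed through a network with a tansig (tanh) hidden layer to produce a torque estimate. The feature count, hidden-layer size, and weights are placeholders; the authors’ trained model and preprocessing are not reproduced here.

```python
import numpy as np

# Sketch of the pipeline shape described above: EMG/angle features -> PCA (5 components)
# -> BP network with a tansig (tanh) hidden layer -> elbow torque estimate.
# Feature count, hidden size, and weights are placeholders, not values from the paper.

rng = np.random.default_rng(1)
features = rng.standard_normal((500, 12))    # e.g. EMG time/frequency features + joint angle

# PCA via SVD on mean-centered data, keeping the first five principal components.
centered = features - features.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
components = Vt[:5].T                        # 12 -> 5 projection matrix
reduced = centered @ components              # five-dimensional input vectors

# Forward pass of a 5-10-1 BP network (weights would normally come from training).
W_hidden = rng.standard_normal((5, 10)) * 0.1
W_out = rng.standard_normal((10, 1)) * 0.1
hidden = np.tanh(reduced @ W_hidden)         # "tansig" activation
torque_estimate = hidden @ W_out             # predicted active elbow torque (Nm)

print(torque_estimate[:3].ravel())
```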
The paper titled “Glenohumeral Joint Trajectory Tracking for Improving the Shoulder Compliance of the Upper Limb Rehabilitation Robot” addresses the challenge of enhancing the compliance of upper limb rehabilitation robots by accurately tracking the trajectory of the center of the glenohumeral joint (CGH) [
48]. Accurate CGH trajectory tracking is critical, as variations in the joint’s center among patients can affect the effectiveness of exoskeleton-based rehabilitation training. The study introduces a real-time prediction model based on a backpropagation neural network. This model considers shoulder motion and shoulder width, improving the coupling compliance between the user and the exoskeleton.
To measure the motion characteristics of the humeral head, the research utilizes a biplane X-ray system, which offers advantages such as lower radiation exposure, reduced cost, and shorter computation time compared to CT scans. Although slightly less accurate than CT, the biplane X-ray system provides sufficient precision for tracking the humeral head’s movement. The results were evaluated by measuring the matching distance between the rehabilitation robot and the human shoulder joint, with an average distance of 7.72 cm maintained. This ensured safe and effective interaction between the robot and the user.
The findings demonstrate that the proposed real-time prediction model significantly improves the alignment between the exoskeleton and the human shoulder joint, enhancing both comfort and safety during rehabilitation training. While this study focused on healthy young subjects, it establishes a foundation for future research involving stroke patients, highlighting the potential for broader applications in rehabilitation robotics.
“An upper-limb power-assist exoskeleton using proportional myoelectric control” explores the creation and testing of an upper limb power-assist exoskeleton that uses proportional myoelectric control [
29]. It is designed to increase arm strength and help with rehabilitation for people with physical disabilities or older adults. The exoskeleton facilitates movements of the shoulder, elbow, and wrist, offering comprehensive support. It is user-friendly and suitable for use in both home and clinical settings. It has a simple one-degree-of-freedom mechanism for the shoulder and elbow and is attached to the arm with adjustable carbon fiber braces.
The control system simplifies user adjustments by directly linking nervous system activity to the exoskeleton’s movements via air pressure changes. Tests of this method showed that a four-second movement duration achieved the best prediction accuracy, with lower errors and more precise alignment between intended and actual movement angles, compared to shorter or longer durations. However, the study identified challenges with the exoskeleton’s pneumatic muscles, including a limited range of motion and a bulky air supply system, which reduces portability. Future models could address these issues by adopting alternative technologies like servo motors or hydraulic cylinders.
The research highlights the potential of proportional myoelectric control for real-time and adaptive operation. However, further refinement of the control systems and EMG technology is needed to better manage complex movements. Prediction performance was assessed using root mean square error (RMSE) and the coefficient of determination (R²). For the four-second duration, RMSE was 9.67 and R² was 0.87, outperforming the two-second period (RMSE: 10.70; R²: 0.83) and the eight-second period (RMSE: 12.42; R²: 0.79). These findings demonstrate the system’s effectiveness for medium-duration movements while highlighting areas for continued improvement.
The paper “An Intention-Based Online Bilateral Training System for Upper Limb Motor Rehabilitation” introduces the development and evaluation of an upper limb power-assist exoskeleton that uses proportional myoelectric control [
49]. This device is designed to enhance arm strength and support rehabilitation for individuals with disabilities or the elderly. It facilitates shoulder, elbow, and wrist movements and is suitable for use in both home and clinical settings. The exoskeleton employs a straightforward one-degree-of-freedom mechanism and attaches to the arm using adjustable carbon fiber braces. Its control system links muscle signals to exoskeleton movements, allowing users to adapt easily.
Tests revealed that a four-second movement cycle achieved the highest accuracy, with fewer errors and a closer match between intended and actual movements compared to two- or eight-second cycles. However, the study identifies limitations of pneumatic muscles, such as a limited range of motion and bulky air supply equipment, which reduce portability. Future iterations could address these issues by replacing pneumatic muscles with servo motors or hydraulic systems, enhancing flexibility and usability.
The research highlights the promise of proportional myoelectric control for real-time adaptability but acknowledges challenges in handling more complex movements. The proposed system outperformed conventional systems, with offline RMSE values significantly lower for the multi-feature vector (20.44 degrees) compared to single-feature vectors (26.18 and 26.32 degrees). These results demonstrate the system’s potential for improving rehabilitation outcomes while emphasizing the need for further development to address remaining challenges.
4.4. Fuzzy Neural Network (FNN)
Fuzzy Neural Networks (FNNs) are highly effective in enhancing upper limb rehabilitation when integrated with robotic exoskeletons. By combining the learning capabilities of neural networks with the reasoning power of fuzzy logic, FNNs excel at managing uncertain and imprecise data, such as sensor readings commonly used in rehabilitation [
50]. Exoskeletons rely on inputs from sources like surface electromyography (sEMG) and motion sensors to interpret user movements. FNNs process these data, enabling the exoskeleton to provide real-time, personalized support tailored to the user’s specific needs.
A major advantage of FNNs is their ability to handle uncertainty and inconsistent inputs, which is crucial in rehabilitation where user movements or muscle signals may vary. The adaptability of FNNs allows them to make sense of unclear data and deliver accurate assistance, improving the overall effectiveness of therapy [
51]. Additionally, FNNs can integrate data from multiple sensors, such as muscle signals and motion trackers, to create a comprehensive understanding of user movements. This capability ensures that the exoskeleton can respond to subtle changes in user actions, providing precise and adaptive support for more effective rehabilitation and improved patient outcomes.
Figure 5 illustrates the architecture of a Fuzzy Neural Network (FNN). The input layer receives features, which are transformed into fuzzy values in the hidden layer using fuzzy membership functions. These values represent the degree to which inputs belong to specific fuzzy sets. The max layer filters the most significant activations, eliminating less relevant information. Finally, the output layer combines these activations using fuzzy rules to produce a final decision or prediction. This architecture highlights how FNNs integrate fuzzy logic with neural networks to manage data uncertainty and imprecision, making them ideal for rehabilitation applications.
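A minimal sketch of this layer sequence is shown below, assuming Gaussian membership functions, a max layer, and a weighted rule combination; the fuzzy sets, rule weights, and input signals are illustrative only.

```python
import numpy as np

# Minimal fuzzy neural network sketch matching the layer description above:
# inputs -> Gaussian membership (fuzzification) -> max selection -> weighted rule output.
# Membership centers/widths and rule weights are illustrative placeholders.

def gaussian_membership(x, centers, widths):
    """Degree to which each input value belongs to each fuzzy set."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * widths[None, :] ** 2))

x = np.array([0.2, 0.7])                     # e.g. normalized sEMG amplitude, joint velocity
centers = np.array([0.0, 0.5, 1.0])          # fuzzy sets: "low", "medium", "high"
widths = np.array([0.2, 0.2, 0.2])

membership = gaussian_membership(x, centers, widths)   # shape (2 inputs, 3 fuzzy sets)

# Max layer: keep the strongest activation per fuzzy set across inputs.
strongest = membership.max(axis=0)

# Output layer: combine activations with rule weights (defuzzified assist level).
rule_weights = np.array([0.1, 0.5, 0.9])     # assistance level associated with each rule
assist_level = (strongest * rule_weights).sum() / strongest.sum()
print(f"suggested assistance level: {assist_level:.2f}")
```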
Xu, Song, and Li present an innovative adaptive impedance controller for upper limb rehabilitation robots [
52]. It utilizes an Evolutionary Dynamic Recurrent Fuzzy Neural Network (EDRFNN). This controller is designed to adjust the desired impedance between the robot and the impaired limb in real time based on the limb’s physical recovery condition. The EDRFNN incorporates dynamic feedback neurons, which improve its dynamic control performance compared to traditional FNNs, which struggle with dynamic and uncertain systems. The controller uses a hybrid learning approach, combining genetic algorithms (GAs), hybrid evolutionary programming (HEP), and dynamic backpropagation (BP) to optimize the DRFNN parameters offline and fine-tune them online. This hybrid approach aims to overcome the limitations of the BP algorithm, which can become trapped in sub-optimal solutions. The system’s convergence is ensured using a discrete-type Lyapunov function, guaranteeing global convergence of the tracking error. Simulation results demonstrate that the proposed controller offers robust dynamic control performance, effectively adapting to changes in the impaired limb’s condition without significant tracking errors or force overshoots. However, the paper notes that the applicability of this control algorithm in clinical settings remains to be tested.
Mushage, Chedjou, and Kyamakya present a controller design for a five-degrees-of-freedom (DOF) upper limb exoskeleton robot developed for passive rehabilitation therapy [
53]. The robot faces several challenges, including uncertain non-linear dynamics, disturbance torques, unavailable full-state measurements, and various actuation faults such as loss of effectiveness and bias faults.
To address these issues, the authors propose an adaptive non-linear control scheme that incorporates a novel reaching law-based sliding mode control strategy. This scheme employs a high-gain state observer with a dynamic high-gain matrix and a fuzzy neural network (FNN) to estimate the state vector and the robot’s unknown dynamics. The proposed control strategy aims to enhance performance by delivering a chattering-free control signal, ensuring good tracking accuracy, reducing control torque amplitudes, and improving energy efficiency.
The study demonstrates the scheme’s ability to handle FNN approximation errors, disturbance torques, and actuation faults without requiring prior knowledge of bounds or additional fault detection and diagnosis components. Simulation results validate the scheme’s effectiveness, showing faster response times, fewer oscillations during transient phases, and improved tracking accuracy. The maximum tracking error observed is approximately 0.5 units, a significant reduction compared to previous methods.
The authors also outline future research directions, including the development of efficient observer-based adaptive fault-tolerant controllers for uncertain multi-input multi-output (MIMO) strict-feedback non-linear systems. This work will focus on addressing unknown control directions and constrained inputs to further enhance the robustness and applicability of the proposed control design.
Razzaghian introduces a novel control strategy for upper limb rehabilitation exoskeleton robots [
54]. The method integrates a fractional-order Lyapunov-based robust controller with a fuzzy neural network (FNN) compensator to ensure the exoskeleton’s tracking error converges to zero within a finite time. This strategy enhances system robustness against uncertainties and external disturbances.
The control design is based on a finite-time fractional-order nonsingular fast terminal sliding mode control (FONFTSMC) method, which achieves finite-time stability for the closed-loop control system. Stability is validated through the Lyapunov stability theorem, supported by an adaptive law. The FNN compensates for model uncertainties and external disturbances by approximating them, allowing for real-time adjustment of fuzzy rules and improved adaptability and performance.
To demonstrate the effectiveness of this approach, a case study involving an upper limb exoskeleton robot was conducted. The simulation results highlight the superiority of the FNN-FONFTSMC method in robust trajectory tracking for rehabilitation. The evaluation metrics include position and velocity tracking errors, convergence rate, and reaching time. The proposed method achieves faster convergence and a shorter reaching time than other controllers, with position tracking errors converging to zero in under 0.8 s. Additionally, the method eliminates chattering in input torques, reducing the risk of actuator damage.
This study underscores the potential of the FNN-FONFTSMC method to significantly enhance the performance and reliability of exoskeleton robots for rehabilitation purposes, ensuring precise, robust, and safe trajectory tracking.
4.5. Deep Neural Network (DNN)
Deep Neural Networks (DNNs) are highly effective in rehabilitation systems due to their ability to learn complex patterns from large datasets. They are particularly well-suited for processing data from multiple sensors, such as muscle activity and motion tracking, and for integrating this information to provide a comprehensive understanding of the user’s movements [
55]. This makes DNNs ideal for applications that require adaptive and precise rehabilitation.
Figure 6 depicts the architecture of a DNN. A DNN begins with an input layer that receives raw features or data. This input is passed through multiple hidden layers, where neurons apply weights, biases, and activation functions (e.g., ReLU, sigmoid, or tanh) to transform the data non-linearly. The use of stacked hidden layers enables the network to learn hierarchical features, moving from basic to more complex representations. Finally, the output layer generates predictions or classifications based on the features learned during training.
The hierarchical structure of DNNs allows them to model intricate data relationships effectively, making them a valuable tool for improving the adaptability and precision of rehabilitation systems. This capability enables exoskeletons to dynamically adjust their support, enhancing the overall rehabilitation process. By incorporating DNNs, exoskeletons become more intelligent, resulting in better therapy outcomes and faster recovery for patients.
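The following sketch implements the forward pass of such a stacked network, assuming ReLU hidden layers and a softmax output over hypothetical motion-intensity classes; the layer sizes and the fused sEMG/motion input are illustrative.

```python
import numpy as np

# Forward pass of a small DNN with stacked hidden layers, mirroring the structure in
# Figure 6. Layer sizes and the fused "EMG + motion" input are illustrative.

def relu(z):
    return np.maximum(0.0, z)

rng = np.random.default_rng(2)
layer_sizes = [16, 32, 32, 16, 3]            # input, three hidden layers, 3-class output
weights = [rng.standard_normal((m, n)) * np.sqrt(2.0 / m)
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

x = rng.standard_normal(16)                  # e.g. concatenated sEMG + motion features
h = x
for W, b in zip(weights[:-1], biases[:-1]):
    h = relu(h @ W + b)                      # hidden layers: increasingly abstract features
logits = h @ weights[-1] + biases[-1]        # output layer
probs = np.exp(logits - logits.max()); probs /= probs.sum()   # softmax over classes
print(probs)                                 # e.g. low / moderate / high motion intensity
```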
Mikołajewski et al. examine the potential of 3D-printed exoskeletons and their ability to be customized on a large scale, a development that could transform healthcare [
56]. The study introduces the concept of 4D printing, where printed objects can adapt and change over time, offering even greater personalization and efficiency. The authors emphasize that developing personalized medical devices requires collaboration across multiple disciplines, including research, engineering, and clinical practice. They also highlight the importance of establishing new business models to ensure these innovations are accessible.
To maintain quality and transparency in the development of personalized exoskeletons, the study uses the AGREE II tool. Artificial intelligence, including traditional artificial neural networks and deep learning models, is employed to support the customization and optimization of exoskeleton designs. These neural networks analyze and interpret complex data related to hand and arm movements, aiding in early diagnosis and rehabilitation.
By extracting useful movement markers and improving the efficiency and safety of exoskeletons, the application of neural networks contributes significantly to the creation of more effective and personalized rehabilitation tools. This integration of advanced technologies paves the way for innovative, patient-specific solutions in healthcare.
Wang et al. [
57] present a novel approach to enhance the control and interaction of upper limb rehabilitation exoskeleton robots. The study focuses on a motion intensity perception model that integrates data from the robot’s movements and the patient’s heart rate. This model improves trajectory control, which is crucial for effective training and human–robot interaction during rehabilitation.
The researchers propose a bionic control method combined with a motion intensity classification technique using multi-modal information. By merging movement signals from the robot with the patient’s heart rate into a vector, the model employs a deep learning framework for control. The paper emphasizes the importance of maintaining moderate motion intensity during rehabilitation. Excessive intensity can damage motor function, while moderate intensity promotes recovery and prevents injuries. The model can classify motion intensity in real-time, dynamically adjusting tasks to match the patient’s condition.
Experimental results demonstrate that the deep neural network (DNN)-based model significantly enhances human–robot interaction and rehabilitation outcomes. The model achieved a recognition accuracy of 99.0% during training and 95.7% during testing. Additionally, the motor control cycle operates efficiently at 200 Hz. Future research aims to incorporate electromyogram (EMG) signals into the framework, further improving control and tailoring rehabilitation to meet individual patient needs, ultimately supporting better recovery outcomes.
Hasan [
58] focuses on the development of a deep learning-based controller for exoskeleton robots, which are gaining attention for their potential to enhance human capabilities and improve rehabilitation methods. The research integrates various engineering disciplines and emphasizes the importance of precise motion control systems in robotics. The study addresses the challenge of controlling non-linear robot dynamics, which are influenced by factors such as mass, inertia, and joint friction. A model-based control approach is highlighted for its systematic method of managing non-linear dynamics, though it faces challenges such as real-time computation delays. To overcome these, the paper proposes a deep neural network-based controller that leverages parallel processing to estimate joint torque requirements efficiently. This controller is designed for a seven-degrees-of-freedom human lower extremity exoskeleton robot.
The neural network model is trained using an analytical model-based data generation technique, and a PD controller is used to correct prediction errors. The paper demonstrates the controller’s high trajectory tracking performance and stability through simulations. Additionally, a comparative study shows that the developed controller performs on par with conventional controllers while maintaining minimal trajectory tracking errors. The robustness of the controller towards parametric variations is further validated through an analysis of variance (ANOVA).
The article “Design and Verification of a Human-Robot Interaction System for Upper Limb Exoskeleton Rehabilitation” [
59] presents a novel approach to enhance human–robot interaction in upper limb exoskeleton robots for rehabilitation. The core innovation is a motion intent recognition system that utilizes an attitude signal sensor to predict the user’s movements during exercises. A key contribution of this study is the integration of a modified model, which reduces noise and time delays in signals. This ensures safe and effective rehabilitation by minimizing errors and preventing safety risks caused by mis-triggers.
The research includes the development of an experimental platform to test the position control of a single-joint exoskeleton arm. Results demonstrate that the method effectively tracks predicted motion intent while improving the robot’s control system. The filtering algorithm is validated using an angle sensor to monitor joint angles and an improved combined filtering method to analyze noise amplitude under dynamic conditions, accounting for real-world influences.
Simulation results show that the adaptive filter combined with clipping filtering produces smoother and more accurate motion trajectories compared to clipping filtering alone. These findings highlight the potential of this system to improve rehabilitation outcomes by enhancing both accuracy and safety in human–robot interactions.
4.6. Long Short-Term Memory Networks (LSTMs)
Long Short-Term Memory (LSTM) networks are specifically designed to process sequential data, making them ideal for analyzing time-based signals such as muscle activity and movement patterns [
60]. This capability is particularly valuable in rehabilitation, as LSTMs can retain important information from previous steps, enabling the exoskeleton to adapt in real-time and provide personalized support. Unlike traditional neural networks that handle data as individual instances, LSTMs work with sequences of data, allowing them to understand and respond to patterns that unfold over time [
61].
LSTMs achieve this through specialized units known as “memory cells”, which can store information for extended periods. These cells use internal mechanisms, such as forget gates, input gates, and output gates, to regulate the flow of information, determining what to retain, update, or discard. This ability to capture both short-term and long-term patterns is critical in rehabilitation, where understanding the progression of a user’s movements is more important than analyzing isolated actions. By recognizing these changes, LSTMs enable exoskeletons to deliver more accurate and adaptive support during therapy sessions.
Figure 7 illustrates an LSTM network that processes sequential data. The input layer receives data at each time step and passes it to the hidden layers, which consist of LSTM cells. These cells manage information flow using their gating mechanisms to effectively learn from sequential patterns. The output layer then generates results, such as predictions or classifications, based on the knowledge gained from the sequence. This structure makes LSTMs an essential tool for enhancing the effectiveness of rehabilitation systems.
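The sketch below steps a single LSTM cell over a short synthetic sequence to show how the forget, input, and output gates update the memory cell; the channel count, hidden size, and weights are illustrative, not taken from any reviewed system.

```python
import numpy as np

# Single LSTM cell stepped over a short sequence, showing how the forget, input, and
# output gates regulate the memory cell. Dimensions and weights are illustrative.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
n_in, n_hidden = 4, 8                        # e.g. 4 sEMG channels per time step
Wx = rng.standard_normal((n_in, 4 * n_hidden)) * 0.1
Wh = rng.standard_normal((n_hidden, 4 * n_hidden)) * 0.1
b = np.zeros(4 * n_hidden)

h = np.zeros(n_hidden)                       # hidden (short-term) state
c = np.zeros(n_hidden)                       # memory cell (long-term) state

sequence = rng.standard_normal((20, n_in))   # 20 time steps of sensor readings
for x_t in sequence:
    gates = x_t @ Wx + h @ Wh + b
    f, i, o, g = np.split(gates, 4)
    f = sigmoid(f)                           # forget gate: what to discard from memory
    i = sigmoid(i)                           # input gate: what new information to store
    o = sigmoid(o)                           # output gate: what to expose as output
    g = np.tanh(g)                           # candidate memory content
    c = f * c + i * g                        # update long-term memory
    h = o * np.tanh(c)                       # output used for the current prediction

print(h[:4])                                 # final hidden state summarizing the sequence
```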
Ren et al. [
62] propose a deep learning-based motion prediction model for controlling an upper limb exoskeleton robot in rehabilitation. The model is applied to the NTUH-II exoskeleton, an eight-degrees-of-freedom system designed to facilitate robot-assisted training (RAT) by synchronizing movements between the robot and the human arm. The approach combines inertial measurement unit (IMU) and surface electromyography (sEMG) signals to capture both human arm dynamics and muscle activity, leveraging the strengths of both data types to improve motion prediction.
The proposed model, a Multi-stream Long Short-Term Memory (LSTM) Dueling network, predicts the user’s motion trajectory in real-time. This enables more accurate and timely synchronization between the human and robot arms, reducing mean absolute error and average delay time and improving the overall coordination experience for users. The study details the process of acquiring and preprocessing IMU and sEMG signals, estimating human arm dynamics, and implementing the model on a robotic arm.
Experimental results show that the proposed model outperforms other deep learning and traditional regression models in accuracy. It achieved a normalized mean absolute error (MAE) of 0.29 degrees for horizontal abduction/adduction and 0.19 degrees for elbow flexion/extension—significantly lower than traditional models. This work highlights the potential for integrating the model into various rehabilitation tasks and robot arms capable of independent multi-joint movement, offering promising advancements in the field of robot-assisted rehabilitation.
Kansal et al. [
63] present a detailed approach to developing a low-cost, high-functionality upper limb prosthesis controlled by electroencephalogram (EEG) signals. The prosthetic arm, designed for amputees, emulates complex human arm movements with three degrees of freedom. The study introduces an innovative end-to-end pipeline that integrates a Genetic Algorithm (GA)-optimized Long Short-Term Memory (LSTM) deep learning model to classify upper limb motion intentions from EEG data.
The authors emphasize the use of non-invasive EEG techniques as a safer alternative to invasive approaches such as intramuscular electromyography (EMG), which requires surgically implanted electrodes. EEG data are collected using the EPOC Flex 32-Channel EEG headset, with signals processed for accurate motion classification. The methodology incorporates various data cleaning and denoising techniques, such as bandpass and digital notch filters, to enhance signal quality and interpretation.
The study also highlights the importance of affordability in prosthetic design, referencing previous works that explored 3D-printed, low-cost solutions. Results demonstrate the prosthetic arm’s ability to replicate complex human motions in real time. The GA-LSTM model achieved an accuracy of 89.5%, along with a precision, recall, and F1-score of 89%, indicating high effectiveness in predicting arm movements from EEG signals.
The paper concludes by discussing future improvements, including increasing the prosthetic arm’s degrees of freedom (DOF) and incorporating multimodal data sources to further enhance the model’s accuracy and response time. This research underscores the potential of combining advanced machine learning techniques with non-invasive EEG technologies to create cost-effective and functional prosthetic solutions for amputees.
4.7. Adaptive Neural Network (ANN)
Adaptive Neural Networks (ANNs) are designed to dynamically adjust their learning parameters, enabling them to adapt to changing data patterns [
64]. This capability makes ANNs ideal for interpreting signals from sensors, such as muscle activity and movement data, which are critical in rehabilitation.
Figure 8 illustrates the architecture of an ANN. In robotic rehabilitation, exoskeletons use data from biosensors to monitor the user’s movements. ANNs process these data in real time, allowing the exoskeleton to adapt its behavior and provide personalized assistance. This real-time adaptability is particularly beneficial in rehabilitation, where a user’s movements can change or improve over time. A significant advantage of ANNs is their ability to handle dynamic, real-time data and respond effectively to variations in the user’s movements [
65]. This adaptability is crucial in rehabilitation, where movements are often unpredictable, and the exoskeleton must adjust accordingly to ensure effective and responsive assistance.
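As an illustration of this kind of online adaptation, the sketch below updates the output weights of a small radial basis function network at every control cycle in proportion to the current error, a simple form of the adaptive laws used in several controllers reviewed later; the centers, gains, and simulated signals are placeholders, and a real design would pair the update with a Lyapunov-based stability argument.

```python
import numpy as np

# Sketch of online weight adaptation in an adaptive RBF network: at every control
# cycle the output weights are nudged by the current error, so the network keeps
# adjusting as the user's movements change. Centers, widths, and the learning rate
# are illustrative placeholders.

rng = np.random.default_rng(4)
centers = rng.uniform(-1, 1, size=(10, 2))   # RBF centers over (position, velocity)
width = 0.5
W = np.zeros(10)                             # adaptable output weights
lr = 0.1                                     # adaptation gain

def rbf_features(state):
    d2 = ((state - centers) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * width ** 2))

for step in range(200):                      # simulated control loop
    state = np.array([np.sin(0.05 * step), 0.05 * np.cos(0.05 * step)])
    target = np.sin(0.05 * step + 0.2)       # stand-in for the quantity being compensated
    phi = rbf_features(state)
    estimate = W @ phi
    error = target - estimate
    W += lr * error * phi                    # online adaptive update driven by the error

print(f"residual error after adaptation: {error:.4f}")
```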
The article “Saturated Adaptive Control of Antagonistic Muscles on an Upper-Limb Hybrid Exoskeleton” introduces an innovative control strategy for an upper limb hybrid exoskeleton by developing an adaptive position controller that addresses input saturation in the user’s muscles [
66]. The exoskeleton integrates functional electrical stimulation (FES) with motorized actuators to improve rehabilitation outcomes for individuals with mobility impairments, such as those caused by strokes.
The proposed controller uses a feedforward component generated by a neural network to stimulate the user’s biceps and triceps. At the same time, a robust feedback controller operates the exoskeleton’s motor. A key feature of the system is its ability to handle stimulation input saturation, ensuring user comfort and safety. Excess input is redirected to the exoskeleton’s motor rather than being discarded, optimizing the system’s functionality.
The study includes a Lyapunov stability analysis, confirming that the closed-loop position error system is uniformly ultimately bounded. Experimental validation was conducted with four uninjured participants, demonstrating the effectiveness of the proposed control approach. Future research will involve testing the system with participants who have neurological injuries and further refining the design and control methods to enhance the exoskeleton’s performance.
Rahmani et al. [
67] introduce an innovative control approach for a seven-degrees-of-freedom (DOF) exoskeleton robot named ETS-MARSE, developed to assist individuals with upper limb impairments caused by neurological disorders. The primary objective is to improve trajectory tracking control, a critical component for passive rehabilitation exercises. The ETS-MARSE robot replicates human upper limb joint articulations and faces challenges such as external disturbances and unknown dynamics, including friction forces and backlash.
To address these challenges, the study proposes an adaptive neural network fast fractional integral terminal sliding mode control (ANFFITSMC) approach. This method is designed to manage modeling uncertainties and enhance the robot’s performance in delivering passive arm movement therapy. A key feature of the approach is the integration of an adaptive radial basis function neural network (ARBFN) with fast fractional integral terminal sliding mode control (FFITSMC). This combination effectively mitigates the chattering phenomenon commonly observed in control systems.
The stability of the proposed controller is validated using Lyapunov theory, and simulation results demonstrate its effectiveness in reducing chattering and improving trajectory tracking performance. The ANFFITSMC method achieved a trajectory tracking error reduction of approximately 35% compared to FFITSMC. Additionally, the chattering amplitude was reduced by about 40%, further enhancing the system’s smoothness and efficiency.
The study underscores the advantages of the proposed control approach, which does not require an accurate dynamic model of the robot. This adaptability makes it suitable for a wide range of users with varying degrees of upper limb impairment, offering a promising solution for personalized rehabilitation therapy.
He et al. [
68] propose a novel control method for a multi-degree-of-freedom (n-DOF) upper limb exoskeleton that addresses challenges such as uncertainties, external disturbances, and input dead zones. The method leverages an adaptive neural network sliding mode control approach based on a fractional-order ultra-local model. This approach simplifies the complex system dynamics while accounting for input dead zones.
To stabilize the system, the proposed method integrates fractional-order sliding mode control with time–delay estimation and neural networks for disturbance estimation. This leads to the development of a fractional-order ultra-local model-based neural network sliding mode controller (FO-NNSMC). A distinctive feature of the method is its adaptive treatment of control gain. Initially considered constant, the control gain is later handled as an unknown parameter to prevent performance degradation due to improper selection. The inclusion of the Nussbaum technique ensures system stability, resulting in a fractional-order ultra-local model-based adaptive neural network sliding mode controller (FO-ANNSMC).
Stability analysis, conducted using Lyapunov theory, confirms the robustness of the proposed approach. Its effectiveness is validated through co-simulations on a virtual prototype of a seven-DOF upper limb exoskeleton and experiments on a two-DOF model. The FO-ANNSMC controller achieved superior performance, with an ITAE index of 0.0015—the smallest among the methods compared. These results underscore the potential of the FO-ANNSMC controller in improving control accuracy and robustness for upper limb exoskeletons.
4.8. Recurrent Neural Networks (RNNs)
Unlike traditional neural networks, which process each data point independently, Recurrent Neural Networks (RNNs) have connections that enable them to remember past inputs. This allows RNNs to process sequences of data, making them ideal for analyzing how movements change over time [
69].
Figure 9 illustrates the architecture of an RNN, which processes sequential data through interconnected layers. The input layer feeds data step-by-step into the network, representing each time step as an input feature. The hidden layers are the core of the RNN, where each neuron has recurrent connections, allowing information to loop back and influence future steps. This mechanism enables the network to retain context and learn temporal patterns from previous inputs. Finally, the output layer processes the learned features from the hidden layers to produce predictions or classifications for the sequence.
In rehabilitation, this ability to recall previous information is essential, as it helps the exoskeleton interpret the continuous flow of a user’s movements and respond appropriately. A key feature of RNNs is their feedback loops, which pass information from one step to the next. This allows past data to influence current predictions, making RNNs effective for tasks such as movement prediction and muscle signal analysis.
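The following sketch shows this feedback loop in a vanilla RNN: the hidden state computed at one time step is fed back at the next, so earlier sensor readings shape the current prediction. Sizes, weights, and the input sequence are illustrative.

```python
import numpy as np

# Vanilla RNN forward pass: the hidden state is fed back at each time step, letting
# past sensor readings influence the current prediction. Sizes and weights are illustrative.

rng = np.random.default_rng(5)
n_in, n_hidden, n_out = 3, 6, 1              # e.g. 3 motion features -> 1 predicted angle
Wxh = rng.standard_normal((n_in, n_hidden)) * 0.2
Whh = rng.standard_normal((n_hidden, n_hidden)) * 0.2
Why = rng.standard_normal((n_hidden, n_out)) * 0.2

h = np.zeros(n_hidden)
predictions = []
sequence = rng.standard_normal((15, n_in))   # 15 time steps of movement data
for x_t in sequence:
    h = np.tanh(x_t @ Wxh + h @ Whh)         # recurrent loop: new state depends on old state
    predictions.append(h @ Why)              # prediction at each step

print(np.array(predictions).ravel()[:5])
```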
By leveraging RNNs, exoskeletons gain the ability to better understand and adapt to dynamic, time-dependent changes in a user’s movements. This enhances their capacity to provide personalized, real-time assistance during rehabilitation exercises. The following paragraphs introduce recent applications of RNNs in human upper extremity rehabilitation.
Gu et al. [
70] review the current advancements in hand function rehabilitation systems that incorporate hand motion recognition devices and artificial intelligence. The study emphasizes the significant impact of strokes on hand function, which greatly affects patients’ ability to perform daily tasks. It examines hardware developments, including gesture recognition devices based on computer vision and wearable sensors, as well as software advancements, particularly the use of Recurrent Neural Networks (RNNs) to enhance the functionality and effectiveness of rehabilitation robots.
The paper identifies key challenges, such as the need for improved recognition algorithms and the limitations of current devices, and suggests future research directions to address these issues. Among the approaches discussed, RNNs are highlighted for their effectiveness in recognizing dynamic gestures. When combined with Long Short-Term Memory (LSTM) networks, RNNs significantly improve the accuracy of gesture recognition, achieving an average accuracy of 91.44% for nine distinct gestures. This combination proves especially effective for real-time recognition of isolated dynamic gestures, demonstrating its potential to advance hand function rehabilitation systems.
4.9. Support Vector Neural Networks (SVNNs)
Support Vector Neural Networks (SVNNs) integrate support vector machines (SVMs) with neural networks. This combination makes them highly effective for classification and regression tasks. They are particularly useful for analyzing muscle activity and movement patterns [
71]. SVNNs use support vectors to define decision boundaries that separate data classes. This approach allows them to identify patterns in complex, high-dimensional data, such as muscle signals from sensors. The neural network component further refines these classifications by learning from errors and improving performance over time. This combination equips SVNNs to handle non-linear relationships between input data and output results, making them suitable for complex tasks like movement recognition.
Figure 10 illustrates the architecture of an SVNN. The input layer (left) receives input features, which are passed to the hidden layers (middle). In the hidden layers, SVM principles are integrated with neural network learning. Neurons in the hidden layer often use kernel-based transformations (similar to those in SVMs) to map input data into a higher-dimensional space. This allows the network to learn complex patterns and create optimal decision boundaries for classification or regression tasks. The output layer (right) aggregates these results and produces predictions, completing the process.
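A simplified sketch of this hybrid is given below: inputs are mapped through an RBF kernel against a set of stored support points, and the resulting activations are combined by a trainable output layer. The toy data, kernel width, and training loop are illustrative and do not reproduce any specific SVNN formulation from the reviewed work.

```python
import numpy as np

# Simplified SVNN-style sketch: inputs are mapped through an RBF kernel against a set
# of stored "support" points (the SVM-like part), and the kernel activations are
# combined by a trainable linear output layer (the neural-network part).
# All data and parameters are synthetic placeholders.

rng = np.random.default_rng(6)
support_points = rng.standard_normal((12, 5))   # landmark/support vectors in feature space
gamma = 0.5

def kernel_layer(x):
    """RBF kernel activations of one sample against every support point."""
    d2 = ((x - support_points) ** 2).sum(axis=1)
    return np.exp(-gamma * d2)

# Toy training set: two movement classes in a 5-D feature space.
X = rng.standard_normal((100, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

K = np.array([kernel_layer(x) for x in X])      # hidden "kernel" layer outputs
w = np.zeros(12); b = 0.0; lr = 0.1

for _ in range(300):                            # logistic-regression-style output layer
    p = 1.0 / (1.0 + np.exp(-(K @ w + b)))
    grad = p - y
    w -= lr * (K.T @ grad) / len(X)
    b -= lr * grad.mean()

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```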
The article “Design of Human Adaptive Mechatronics Controller for Upper Limb Motion Intention Prediction” introduces an innovative approach to enhancing Human Adaptive Mechatronics (HAM) systems by improving upper limb motion prediction and response time [
72]. This research is particularly beneficial for elderly individuals with disabilities who depend on devices like exoskeletons for daily tasks. The methodology involves extracting features from electromyography (EMG) signals using both time and frequency-based techniques. These features are then used to predict optimal controller parameters for HAM systems.
The study employs a Modified Lion Optimization (MLO) algorithm to select the best control parameters, coupled with a Support Vector Neural Network (SVNN) for motion prediction at different time points. The proposed model achieves 96% accuracy in predicting movements, validated through the integration of advanced optimization techniques and EMG signal data. The research also explores the use of Recurrent Neural Networks (RNNs), including Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRUs), for real-time analysis of HAM motions. These neural networks demonstrate comparable accuracy in processing EMG data for motion prediction.
The MLO-SVNN classifier outperforms traditional methods, achieving a prediction accuracy of 96.56%, which surpasses other classifiers such as Support Vector Machines (SVMs), Neural Networks (NNs), K-Nearest Neighbors (KNN), and Bayesian classifiers. This study highlights the potential of combining optimization algorithms with advanced neural network architectures to significantly improve the performance of HAM systems in upper limb rehabilitation applications.
4.10. Multi-Layer Neural Network (MLNN)
Multilayer Neural Networks (MLNNs) consist of multiple layers of neurons, often referred to as hidden layers, enabling them to learn complex patterns from data such as muscle activity and movement signals. A key strength of MLNNs is their ability to model non-linear relationships between inputs and outputs, which is particularly important for interpreting physiological signals that are inherently non-linear.
Figure 11 illustrates the architecture of an MLNN, a feedforward neural network with multiple layers of neurons. The input layer receives raw features and includes bias nodes to enhance the network’s learning capability. Hidden layers, consisting of interconnected neurons, compute weighted sums of inputs, apply bias terms, and use activation functions to introduce non-linearity. These layers allow the network to model intricate patterns and relationships in the data. The final output layer generates results such as classifications, predictions, or regression outputs.
By utilizing MLNNs to interpret physiological signals, exoskeletons can provide more precise, personalized, and real-time adaptive assistance [
73], resulting in improved rehabilitation outcomes and faster patient recovery. The following paragraphs discuss notable studies that have employed MLNNs in robot-assisted rehabilitation.
Wang et al. [
74] present an innovative upper limb rehabilitation robot aimed at aiding motor function recovery. It is specifically designed for stroke survivors in need of long-term, high-intensity therapy. The robot features a three-degree-of-freedom system, integrating a five-bar parallel mechanism with a wrist exoskeleton module. The first two joints operate in torque control mode, while the wrist joint functions in velocity control mode. All joints are driven by surface electromyography (sEMG) signals.
The control framework is customized to address the unique characteristics of each component. For the first two joints, an impedance controller is implemented, which regulates current and torque through a Jacobian transformation from the end-effector workspace to joint space. This ensures that the robot provides appropriate assistance or resistance during rehabilitation exercises. The wrist module utilizes an admittance controller that adjusts motor velocity based on estimated wrist torque derived from sEMG signals.
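For readers unfamiliar with this mapping, the sketch below shows the generic form of a Jacobian-transpose impedance law, in which a task-space stiffness/damping force is converted to joint torques through J(q)ᵀ; the planar two-link Jacobian, gains, and states are illustrative assumptions, not the controller implemented in the paper.

```python
import numpy as np

# Generic Jacobian-transpose impedance mapping of the kind referenced above: a desired
# end-effector force from a stiffness/damping law is mapped to joint torques via J(q)^T.
# The planar two-link Jacobian and all gains are illustrative, not taken from the paper.

L1, L2 = 0.30, 0.25                          # link lengths (m), placeholders

def jacobian(q):
    """Jacobian of a planar two-link arm's end-effector position w.r.t. joint angles."""
    q1, q2 = q
    return np.array([
        [-L1 * np.sin(q1) - L2 * np.sin(q1 + q2), -L2 * np.sin(q1 + q2)],
        [ L1 * np.cos(q1) + L2 * np.cos(q1 + q2),  L2 * np.cos(q1 + q2)],
    ])

def forward_kinematics(q):
    q1, q2 = q
    return np.array([L1 * np.cos(q1) + L2 * np.cos(q1 + q2),
                     L1 * np.sin(q1) + L2 * np.sin(q1 + q2)])

K = np.diag([300.0, 300.0])                  # task-space stiffness (N/m)
D = np.diag([20.0, 20.0])                    # task-space damping (N s/m)

q = np.array([0.4, 0.8])                     # current joint angles (rad)
dq = np.array([0.05, -0.02])                 # current joint velocities (rad/s)
x_des = np.array([0.35, 0.30])               # desired end-effector position (m)
dx_des = np.zeros(2)                         # desired end-effector velocity (m/s)

J = jacobian(q)
x = forward_kinematics(q)
dx = J @ dq
F = K @ (x_des - x) + D @ (dx_des - dx)      # impedance force in the end-effector frame
tau = J.T @ F                                # joint torques commanded to the first two joints
print(tau)
```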
The study emphasizes the potential of robot-assisted rehabilitation to promote neural plasticity and accelerate functional recovery compared to traditional manual therapy. The proposed sEMG-driven control framework demonstrated significant adaptability to diverse rehabilitation needs. Recognition accuracy was the primary metric used to evaluate the perception model’s performance, achieving an average of 99.0% on the training dataset and 95.7% on the test dataset. These results highlight the effectiveness of the robot in enhancing rehabilitation outcomes.
Resquín et al. [
75] investigate the development and evaluation of a hybrid robotic system designed to assist in the rehabilitation of reaching movements in patients with brain injuries. The system is based on a Feedback Error Learning (FEL) scheme, and the study primarily focuses on assessing its usability in a clinical setting. The research is divided into two parts: the first demonstrates the technical feasibility and learning capability of the FEL controller, which is essential for executing coordinated shoulder–elbow joint movements. The second evaluates the system’s usability with brain injury patients, assessing their performance, satisfaction, and emotional response to the intervention.
The system employs the minimum jerk trajectory method to generate tracking references, a widely used and well-established technique in rehabilitation devices. The hybrid robotic system incorporates an FEL controller to adaptively learn the inverse dynamic model of the arm, adjusting assistance levels based on user capabilities. This adaptive approach ensures that the system provides personalized support tailored to individual patient needs.
The results reveal high patient satisfaction and acceptance, indicating that the system effectively increases therapy dosage while enhancing patient engagement and motivation. Metrics such as root mean square error (RMSE), performance rate (PR), and task completion (TC) index were used to evaluate the system’s effectiveness, demonstrating significant improvements in task performance. User satisfaction scores increased from 62.5% to 83%, and patients provided positive feedback on the system’s usability. The study concludes that the hybrid robotic system is both technically feasible and effective in supporting 3D-reaching movement rehabilitation, offering significant potential for clinical application.
Medina et al. [
76] present the design and evaluation of a hybrid upper limb orthosis that integrates a functional electrical stimulation (FES) system for rehabilitation. The device supports both active and assisted upper limb movements by combining electrical stimulation with real-time electromyographic (EMG) signal processing. EMG signals are collected from the trapezius and deltoid muscles and classified using a static multilayer artificial neural network trained with the Levenberg–Marquardt algorithm to infer the user’s movement intentions.
The orthosis was manufactured using 3D printing and includes electronic components, enabling it to function as a fully actuated robotic system. It employs a decentralized control system with state feedback algorithms, specifically proportional-derivative (PD) controllers. For trajectory tracking of each actuated joint, the study introduces an interpolation method based on sigmoidal functions, utilizing estimated time derivatives of tracking errors calculated by discretized super-twisting differentiators.
The device was tested through simulations and experimental trials with four volunteers, demonstrating effective performance across various scenarios. Results showed that the system successfully supports rehabilitation therapy by tracking predefined reference trajectories while delivering electrostimulation to targeted muscles. Performance metrics included tracking error and convergence time. The system achieved a maximum tracking error of 2% and convergence to a bounded region in under 0.3 s. The PD + AST (Adaptive Super-Twisting) controller achieved trajectory convergence at approximately 75 ms, significantly faster than the classical PD controller’s 150 ms. These findings highlight the orthosis’s potential as a robust tool for upper limb rehabilitation therapy.
The study “Study on ANN-based Upper Limb Exoskeleton” explores the development and application of an exoskeleton designed to assist individuals with impaired arm mobility [
77]. This research contributes to the increasing adoption of exoskeletons to restore mobility in patients affected by accidents or diseases. A key innovation is the use of non-invasive EMG sensors to detect movement intentions, even in patients who cannot express them through conventional means. This capability allows the exoskeleton to adapt to specific medical conditions, enabling everyday use.
This study introduces a method where users can train the exoskeleton using active muscle groups, even if movement intentions are undetectable in the affected arm. This feature, combined with the exoskeleton’s ability to mimic arm movements like a shadow, supports users who cannot sustain independent movement. EMG signals are processed to determine shoulder and elbow angles, which are then analyzed by a multilayer neural network. Additionally, IMU sensors are incorporated during training to ensure precise synchronization between the human and robotic arm movements.
The paper also reviews existing technologies, such as Myo armbands with EMG channels and IMU sensors, highlighting the superior performance of the MSLSTM Dueling model over traditional regression and other deep learning models in predicting arm movement. The evaluation metrics included overall regression and mean squared error (MSE). The neural network achieved an overall regression of R = 0.997 and an MSE of 3.106, underscoring its effectiveness in assisting users with impaired mobility.
Aktan et al. [
78] detail the development of an intelligent controller for DIAGNOBOT, a rehabilitation robot designed for diagnosing and treating wrist and forearm conditions. The controller incorporates a decision support system powered by a multilayer neural network, which combines traditional statistical methods and database analysis to process biomechanical data from patients. These data are used to evaluate the joint range of motion (ROM) and force/torque deficiencies, providing personalized therapeutic exercise recommendations and robotic settings.
A key innovation highlighted in the study is the controller’s ability to perform both diagnostic assessments and therapeutic recommendations, marking it as a first in the field of rehabilitation robotics. Tests conducted with voluntary patients demonstrated the controller’s high accuracy in generating both diagnostic evaluations and therapy plans. For instance, differences in ROM measurements between physician assessments and the robotic system ranged from 2.3% to 10% for the first patient, indicating strong alignment with clinical standards.
The research emphasizes plans to further enhance the controller by expanding the existing healthy human database and integrating deep learning algorithms to improve diagnostic precision and therapeutic suggestions. Biomechanical measurements were used to evaluate the controller’s performance, focusing on percentage differences in ROM and joint force measurements between physician and robotic assessments. These advancements underscore the potential of DIAGNOBOT to transform rehabilitation practices through intelligent, data-driven diagnostic and therapeutic solutions.
Jebri et al. [
79] present an adaptive control system for exoskeletons that integrates a Brain–Computer Interface (BCI) based on Steady-State Visual Evoked Potentials (SSVEPs) to assist in tracking position and velocity trajectories. The BCI interprets EEG signals to detect user intentions and generate desired movement trajectories, enabling intuitive and effective control.
To ensure robust operation, the system combines a continuous neural network (NN) with a sliding mode controller (SMC). This approach provides resistance to approximation errors and external disturbances by leveraging known parameter bounds. Adaptive neural networks are employed to model the dynamic interaction between the exoskeleton and the human body, eliminating the need for an exact dynamic model by continuously updating synaptic weights. The sliding mode controller guarantees global asymptotic stability for both trajectory tracking and neural network approximations, with stability validated using the Lyapunov method.
Real-time experiments conducted with a two-degrees-of-freedom upper limb exoskeleton demonstrated the system’s effectiveness in rehabilitation scenarios. The study highlights the potential of the proposed control system to improve rehabilitation outcomes. Future research aims to enhance the controller by eliminating its reliance on prior knowledge of parameter bounds required in the current design, further advancing its adaptability and usability.
Wu et al. [
80] propose an advanced control strategy for a soft wearable exoskeleton designed to assist the elbow joint. The system is aimed at improving power efficiency for individuals with motor dysfunction caused by aging or neurological conditions. The exoskeleton mimics the human skeletal structure and features tendon–sheath actuators, soft wraps, and a waist brace. It incorporates sensors, such as inertial measurement units (IMUs) and force sensors, for control and feedback.
The study introduces a Neural-Network-Enhanced Torque Estimation Control (NNETEC) strategy, which integrates a joint torque estimation module based on surface electromyography (sEMG) signals with a neural network that interprets the user’s motion intentions. A PID controller with hybrid position/torque feedback ensures precise and adaptive assistance during movements.
Experiments conducted with healthy volunteers lifting dumbbells demonstrated that the NNETEC method significantly improved efficiency compared to traditional control strategies. The results highlight its potential for enhancing user support while conserving power. Future research aims to incorporate a fuzzy algorithm to further optimize the balance between force and position control, enhancing overall system performance and adaptability.
Table 2 summarizes the above-reviewed articles.
Table 3 outlines the application of various neural networks in robot-assisted rehabilitation based on the reviewed perspective, along with the rationale for selecting each network.