
Search Results (168)

Search Parameters:
Keywords = motor imagery classification

13 pages, 1853 KiB  
Article
Integrating Electroencephalography Source Localization and Residual Convolutional Neural Network for Advanced Stroke Rehabilitation
by Sina Makhdoomi Kaviri and Ramana Vinjamuri
Bioengineering 2024, 11(10), 967; https://doi.org/10.3390/bioengineering11100967 - 27 Sep 2024
Abstract
Motor impairments caused by stroke significantly affect daily activities and reduce quality of life, highlighting the need for effective rehabilitation strategies. This study presents a novel approach to classifying motor tasks using EEG data from acute stroke patients, focusing on left-hand motor imagery, right-hand motor imagery, and rest states. By using advanced source localization techniques, such as Minimum Norm Estimation (MNE), dipole fitting, and beamforming, integrated with a customized Residual Convolutional Neural Network (ResNetCNN) architecture, we achieved superior spatial pattern recognition in EEG data. Our approach yielded classification accuracies of 91.03% with dipole fitting, 89.07% with MNE, and 87.17% with beamforming, markedly surpassing the 55.57% to 72.21% range of traditional sensor domain methods. These results highlight the efficacy of transitioning from sensor to source domain in capturing precise brain activity. The enhanced accuracy and reliability of our method hold significant potential for advancing brain–computer interfaces (BCIs) in neurorehabilitation. This study emphasizes the importance of using advanced EEG classification techniques to provide clinicians with precise tools for developing individualized therapy plans, potentially leading to substantial improvements in motor function recovery and overall patient outcomes. Future work will focus on integrating these techniques into practical BCI systems and assessing their long-term impact on stroke rehabilitation. Full article
(This article belongs to the Special Issue Artificial Intelligence for Biomedical Signal Processing)
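The residual learning idea behind the ResNetCNN classifier above can be sketched in a few lines: each block learns a correction on top of an identity shortcut rather than a full mapping. This is a generic NumPy illustration (the function names, 1-D convolution, and kernel shapes are illustrative, not the authors' architecture):

```python
import numpy as np

def conv1d(x, kernel):
    """'Same'-padded 1-D convolution of a single channel (odd-length kernels)."""
    pad = len(kernel) // 2
    xp = np.pad(x, pad, mode="constant")
    return np.array([np.dot(xp[i:i + len(kernel)], kernel)
                     for i in range(len(x))])

def residual_block(x, k1, k2):
    """y = ReLU(conv(ReLU(conv(x))) + x): the skip connection lets the
    block learn a residual correction instead of a full mapping."""
    h = np.maximum(conv1d(x, k1), 0.0)
    h = conv1d(h, k2)
    return np.maximum(h + x, 0.0)   # identity shortcut
```

With zero kernels the block reduces to the identity on non-negative inputs, which is exactly the property that makes deep residual stacks easy to train.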

13 pages, 3440 KiB  
Article
Optimizing Real-Time MI-BCI Performance in Post-Stroke Patients: Impact of Time Window Duration on Classification Accuracy and Responsiveness
by Aleksandar Miladinović, Agostino Accardo, Joanna Jarmolowska, Uros Marusic and Miloš Ajčević
Sensors 2024, 24(18), 6125; https://doi.org/10.3390/s24186125 - 22 Sep 2024
Abstract
Brain–computer interfaces (BCIs) are promising tools for motor neurorehabilitation. Achieving a balance between classification accuracy and system responsiveness is crucial for real-time applications. This study aimed to assess how the duration of time windows affects performance, specifically classification accuracy and the false positive rate, to optimize the temporal parameters of MI-BCI systems. We investigated the impact of time window duration on classification accuracy and false positive rate, employing Linear Discriminant Analysis (LDA), Multilayer Perceptron (MLP), and Support Vector Machine (SVM) on data acquired from six post-stroke patients and on the external BCI IVa dataset. EEG signals were recorded and processed using the Common Spatial Patterns (CSP) algorithm for feature extraction. Our results indicate that longer time windows generally enhance classification accuracy and reduce false positives across all classifiers, with LDA performing the best. However, to maintain the real-time responsiveness crucial for practical applications, a balance must be struck. The results suggest an optimal time window of 1–2 s, offering a trade-off between classification performance and the excessive delay that would compromise system responsiveness. These findings underscore the importance of temporal optimization in MI-BCI systems to improve usability in real rehabilitation scenarios. Full article
(This article belongs to the Section Biosensors)
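The time-window trade-off studied above starts with cutting the continuous recording into analysis windows. A minimal sketch of that epoching step (parameter names are illustrative, not from the paper):

```python
import numpy as np

def sliding_windows(eeg, fs, win_s, step_s):
    """Cut a (channels, samples) EEG record into overlapping analysis
    windows of win_s seconds, advancing by step_s seconds.  Shorter
    windows give faster feedback but noisier class estimates."""
    win, step = int(win_s * fs), int(step_s * fs)
    starts = range(0, eeg.shape[1] - win + 1, step)
    return np.stack([eeg[:, s:s + win] for s in starts])
```

Sweeping `win_s` (e.g. from 0.5 s to 3 s) and measuring accuracy and false positives per window is one simple way to reproduce the kind of temporal-parameter search the study describes.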

22 pages, 2309 KiB  
Article
Enhancing EEG-Based MI-BCIs with Class-Specific and Subject-Specific Features Detected by Neural Manifold Analysis
by Mirco Frosolone, Roberto Prevete, Lorenzo Ognibeni, Salvatore Giugliano, Andrea Apicella, Giovanni Pezzulo and Francesco Donnarumma
Sensors 2024, 24(18), 6110; https://doi.org/10.3390/s24186110 - 21 Sep 2024
Abstract
This paper presents an innovative approach leveraging Neuronal Manifold Analysis (NMA) of EEG data to identify specific time intervals for feature extraction, effectively capturing both class-specific and subject-specific characteristics. Different pipelines were constructed and employed to extract distinctive features within these intervals, specifically for motor imagery (MI) tasks. The methodology was validated on the Graz Competition IV datasets 2A (four-class) and 2B (two-class) for motor imagery classification, demonstrating an improvement in classification accuracy that surpasses state-of-the-art algorithms designed for MI tasks. A multi-dimensional feature space constructed using NMA was used to detect intervals that capture these critical characteristics, which led to significantly enhanced classification accuracy, especially for individuals with initially poor classification performance. These findings highlight the robustness of this method and its potential to improve classification performance in EEG-based MI-BCI systems. Full article
(This article belongs to the Special Issue Biomedical Sensing and Bioinformatics Processing)

18 pages, 3786 KiB  
Article
Efficient Multi-View Graph Convolutional Network with Self-Attention for Multi-Class Motor Imagery Decoding
by Xiyue Tan, Dan Wang, Meng Xu, Jiaming Chen and Shuhan Wu
Bioengineering 2024, 11(9), 926; https://doi.org/10.3390/bioengineering11090926 - 15 Sep 2024
Abstract
Research on electroencephalogram-based motor imagery (MI-EEG) can identify the limbs of subjects that generate motor imagination by decoding EEG signals, which is an important issue in the field of brain–computer interface (BCI). Existing deep-learning-based classification methods have not been able to fully exploit the topological information among brain regions, and thus, the classification performance needs further improvement. In this paper, we propose a multi-view graph convolutional attention network (MGCANet) with a residual learning structure for multi-class MI decoding. Specifically, we design a multi-view graph convolution spatial feature extraction method based on the topological relationship of brain regions to achieve more comprehensive information aggregation. During the modeling, we build an adaptive weight fusion (Awf) module to adaptively merge features from different brain views to improve classification accuracy. In addition, a self-attention mechanism is introduced for feature selection, expanding the receptive field over the EEG signal to capture global dependencies and enhance the expression of important features. The proposed model is experimentally evaluated on two public MI datasets and achieves mean accuracies of 78.26% (BCIC IV 2a dataset) and 73.68% (OpenBMI dataset), significantly outperforming representative comparative methods in classification accuracy. Comprehensive experimental results verify the effectiveness of our proposed method, which can provide novel perspectives for MI decoding. Full article
(This article belongs to the Section Biosignal Processing)
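The core operation of a graph convolution over electrodes, as used by MGCANet-style models, is neighbour aggregation through a normalised adjacency matrix followed by a channel-mixing weight matrix. A minimal NumPy sketch (a generic GCN layer, not the authors' exact multi-view design):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: nodes (EEG electrodes) aggregate
    features from neighbours via the symmetrically normalised adjacency
    A_hat = D^-1/2 (A + I) D^-1/2, then mix channels with weights W."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```

A "multi-view" variant would run this layer with several adjacency matrices (e.g. anatomical distance vs. functional connectivity) and fuse the outputs.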

35 pages, 12036 KiB  
Article
Transfer Learning and Deep Neural Networks for Robust Intersubject Hand Movement Detection from EEG Signals
by Chiang Liang Kok, Chee Kit Ho, Thein Htet Aung, Yit Yan Koh and Tee Hui Teo
Appl. Sci. 2024, 14(17), 8091; https://doi.org/10.3390/app14178091 - 9 Sep 2024
Abstract
In this research, five systems were developed to classify four distinct motor functions—forward hand movement (FW), grasp (GP), release (RL), and reverse hand movement (RV)—from EEG signals, using the WAY-EEG-GAL dataset where participants performed a sequence of hand movements. During preprocessing, band-pass filtering was applied to remove artifacts and focus on the mu and beta frequency bands. The initial system, a preliminary study model, explored the overall framework of EEG signal processing and classification, utilizing time-domain features such as variance and frequency-domain features such as alpha and beta power, with a KNN model for classification. Insights from this study informed the development of a baseline system, which innovatively combined the common spatial patterns (CSP) method with continuous wavelet transform (CWT) for feature extraction and employed a GoogLeNet classifier with transfer learning. This system classified six unique pairs of events derived from the four motor functions, achieving remarkable accuracy, with the highest being 99.73% for the GP–RV pair and the lowest being 80.87% for the FW–GP pair in intersubject classification. Building on this success, three additional systems were developed for four-way classification. The final model, ML-CSP-OVR, demonstrated the highest intersubject classification accuracy of 78.08% using all combined data and 76.39% for leave-one-out intersubject classification. The proposed model, featuring a novel combination of CSP-OVR, CWT, and GoogLeNet, shows strong potential as a general, subject-independent system for motor imagery (MI) tasks, demonstrating effective and robust classification across different motor functions and subjects. Full article
(This article belongs to the Section Applied Biosciences and Bioengineering)
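Several entries in this list rely on Common Spatial Patterns (CSP) for feature extraction. As a rough sketch of the technique itself (not the exact pipeline of any paper here; function names are illustrative), CSP can be computed as a whitening step followed by an eigendecomposition, keeping the filters with the most extreme variance ratios between the two classes:

```python
import numpy as np

def csp_filters(X1, X2, n_pairs=1):
    """Common Spatial Patterns for two classes of EEG trials, each of
    shape (trials, channels, samples).  Returns 2*n_pairs spatial
    filters maximising the variance ratio between the classes."""
    def mean_cov(X):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)
    C1, C2 = mean_cov(X1), mean_cov(X2)
    # Whiten the composite covariance, then diagonalise class 1 in the
    # whitened space; extreme eigenvectors give the discriminative filters.
    evals, evecs = np.linalg.eigh(C1 + C2)
    P = np.diag(evals ** -0.5) @ evecs.T
    w, B = np.linalg.eigh(P @ C1 @ P.T)
    order = np.argsort(w)
    sel = np.r_[order[:n_pairs], order[-n_pairs:]]
    return B[:, sel].T @ P                     # (2*n_pairs, channels)
```

The log-variance of each filtered trial is then the classic CSP feature vector fed to a classifier.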

28 pages, 952 KiB  
Review
A Comprehensive Review of Hardware Acceleration Techniques and Convolutional Neural Networks for EEG Signals
by Yu Xie and Stefan Oniga
Sensors 2024, 24(17), 5813; https://doi.org/10.3390/s24175813 - 7 Sep 2024
Abstract
This paper comprehensively reviews hardware acceleration techniques and the deployment of convolutional neural networks (CNNs) for analyzing electroencephalogram (EEG) signals across various application areas, including emotion classification, motor imagery, epilepsy detection, and sleep monitoring. Previous reviews on EEG have mainly focused on software solutions. However, these reviews often overlook key challenges associated with hardware implementation, such as scenarios that require a small size, low power, high security, and high accuracy. This paper discusses the challenges and opportunities of hardware acceleration for wearable EEG devices by focusing on these aspects. Specifically, this review classifies EEG signal features into five groups and discusses hardware implementation solutions for each category in detail, providing insights into the most suitable hardware acceleration strategies for various application scenarios. In addition, it explores the complexity of efficient CNN architectures for EEG signals, including techniques such as pruning, quantization, tensor decomposition, knowledge distillation, and neural architecture search. To the best of our knowledge, this is the first systematic review that combines CNN hardware solutions with EEG signal processing. By providing a comprehensive analysis of current challenges and a roadmap for future research, this paper provides a new perspective on the ongoing development of hardware-accelerated EEG systems. Full article
(This article belongs to the Special Issue Sensors Fusion in Digital Healthcare Applications)
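Among the model-compression techniques the review covers, quantization is the simplest to illustrate. A minimal sketch of symmetric int8 weight quantization (a generic scheme, not any specific hardware flow from the review):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric linear quantisation of a float weight tensor to int8,
    a common first step when deploying a CNN on constrained hardware."""
    scale = max(np.max(np.abs(w)) / 127.0, 1e-12)   # guard all-zero tensors
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights; error is at most scale / 2."""
    return q.astype(np.float32) * scale
```

Storing `q` plus one `scale` per tensor cuts memory 4x versus float32, at the cost of a bounded rounding error.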

14 pages, 796 KiB  
Article
Independent Vector Analysis for Feature Extraction in Motor Imagery Classification
by Caroline Pires Alavez Moraes, Lucas Heck dos Santos, Denis Gustavo Fantinato, Aline Neves and Tülay Adali
Sensors 2024, 24(16), 5428; https://doi.org/10.3390/s24165428 - 22 Aug 2024
Abstract
Independent vector analysis (IVA) can be viewed as an extension of independent component analysis (ICA) to multiple datasets. It exploits the statistical dependency between different datasets through mutual information. In the context of motor imagery classification based on electroencephalogram (EEG) signals for the brain–computer interface (BCI), several methods have been proposed to extract features efficiently, mainly based on common spatial patterns, filter banks, and deep learning. However, most methods use only one dataset at a time, which may not be sufficient in scenarios where information must be retrieved from multiple sources. From this perspective, this paper proposes an original approach for feature extraction across multiple datasets based on IVA to improve the classification of EEG-based motor imagery movements. The IVA components were used as features to classify imagined movements using established classifiers (support vector machines and K-nearest neighbors) and deep classifiers (EEGNet and EEGInception). The results show promising performance in clustering MI-based BCI patients, and the proposed method reached an average accuracy of 86.7%. Full article
(This article belongs to the Section Biomedical Sensors)
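One of the classifiers applied to the IVA features above is K-nearest neighbors, which is compact enough to sketch in full (a generic Euclidean-distance KNN, not the paper's tuned configuration):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Minimal k-nearest-neighbour classifier: each test feature vector
    takes the majority label among its k closest training vectors."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)       # Euclidean distances
        nearest = y_train[np.argsort(d)[:k]]          # labels of k closest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)
```

In an IVA pipeline, each row of `X_train`/`X_test` would be the per-trial feature vector derived from the extracted components.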

18 pages, 8360 KiB  
Article
A Method for the Spatial Interpolation of EEG Signals Based on the Bidirectional Long Short-Term Memory Network
by Wenlong Hu, Bowen Ji and Kunpeng Gao
Sensors 2024, 24(16), 5215; https://doi.org/10.3390/s24165215 - 12 Aug 2024
Abstract
The precision of electroencephalograms (EEGs) significantly impacts the performance of brain–computer interfaces (BCIs). Currently, the majority of research into BCI technology gives priority to lightweight design and a reduced electrode count to make it more suitable for application in wearable environments. This paper introduces a deep-learning-based bidirectional long short-term memory (BiLSTM) network that is designed to capture the inherent characteristics of EEG channels obtained from neighboring electrodes. It aims to predict the EEG data time series and facilitate the conversion from low-density EEG signals to high-density EEG signals. BiLSTM attends to the dependencies in time series data rather than fixed mathematical mappings, and the root mean square error can be effectively restricted to below 0.4 μV, which is less than half the error of traditional methods. After expanding the BCI Competition III 3a dataset from 18 channels to 60 channels, we conducted classification experiments on four types of motor imagery tasks. Compared to the original low-density EEG signals (18 channels), the classification accuracy was around 82%, an increase of about 20%. When juxtaposed with real high-density signals, the increase in the error rate remained below 5%. The expansion of the EEG channels showed a substantial and notable improvement compared with the original low-density signals. Full article
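The "traditional methods" a learned interpolator like this BiLSTM is measured against are classical spatial schemes such as inverse-distance weighting. A minimal sketch of that baseline and the RMSE metric the abstract cites (function names and the distance exponent are illustrative):

```python
import numpy as np

def idw_interpolate(signals, pos_known, pos_new, p=2):
    """Inverse-distance-weighted estimate of unseen electrode signals:
    signals is (n_known, samples); positions are coordinate arrays."""
    out = []
    for q in pos_new:
        d = np.linalg.norm(pos_known - q, axis=1)
        if np.any(d < 1e-9):                   # target sits on a known site
            out.append(signals[np.argmin(d)])
            continue
        w = 1.0 / d ** p
        out.append(w @ signals / w.sum())      # distance-weighted average
    return np.array(out)

def rmse(a, b):
    """Root mean square error, the abstract's reconstruction metric."""
    return np.sqrt(np.mean((a - b) ** 2))
```

Comparing `rmse(idw_interpolate(...), true_high_density)` against the learned model's error is the kind of evaluation the paper describes.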

24 pages, 8078 KiB  
Article
EEG Channel Selection for Stroke Patient Rehabilitation Using BAT Optimizer
by Mohammed Azmi Al-Betar, Zaid Abdi Alkareem Alyasseri, Noor Kamal Al-Qazzaz, Sharif Naser Makhadmeh, Nabeel Salih Ali and Christoph Guger
Algorithms 2024, 17(8), 346; https://doi.org/10.3390/a17080346 - 8 Aug 2024
Abstract
Stroke, a major cause of mortality worldwide, disrupts cerebral blood flow, leading to severe brain damage. Hemiplegia, a common consequence, results in the loss of motor function on one side of the body. Many stroke survivors face long-term motor impairments and require extensive rehabilitation. Electroencephalograms (EEGs) provide a non-invasive method to monitor brain activity and have been used in brain–computer interfaces (BCIs) to assist in rehabilitation. Motor imagery (MI) tasks, detected through EEG, are pivotal for developing BCIs that help patients regain motor function. However, interpreting EEG signals for MI tasks remains challenging due to their complexity and low signal-to-noise ratio. The main aim of this study is to optimize channel selection in EEG-based BCIs specifically for stroke rehabilitation. Determining the most informative EEG channels is crucial for capturing the neural signals related to motor impairments in stroke patients. In this paper, a binary bat algorithm (BA)-based optimization method is proposed to select the most relevant channels tailored to the unique neurophysiological changes in stroke patients. This approach enhances BCI performance by improving classification accuracy and reducing data dimensionality. We use time–entropy–frequency (TEF) attributes, processed through automated independent component analysis with wavelet transform (AICA-WT) denoising, to enhance signal clarity. The selected channels and features are validated with a k-nearest neighbor (KNN) classifier using public BCI datasets, demonstrating improved classification of MI tasks and the potential for better rehabilitation outcomes. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms in Healthcare)
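Channel selection of this kind is a search over binary masks (channel in / channel out) scored by a classifier-accuracy-style fitness. The sketch below is a deliberately stripped-down stand-in for the binary bat optimizer — plain single-bit hill climbing — just to show the mask-and-fitness interface; the actual BA uses frequency-tuned velocity and loudness updates:

```python
import numpy as np

def select_channels(fitness, n_channels, iters=200, seed=0):
    """Greedy binary search over channel masks: flip one random bit per
    iteration and keep the candidate mask whenever it improves the
    (accuracy-style) fitness function."""
    rng = np.random.default_rng(seed)
    best = rng.integers(0, 2, n_channels).astype(bool)
    best_fit = fitness(best)
    for _ in range(iters):
        cand = best.copy()
        cand[rng.integers(n_channels)] ^= True     # toggle one channel
        f = fitness(cand)
        if f > best_fit:
            best, best_fit = cand, f
    return best, best_fit
```

In practice `fitness` would wrap cross-validated KNN accuracy on the TEF features restricted to the masked channels, possibly minus a penalty per selected channel.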

24 pages, 22137 KiB  
Article
Feature Extraction and Classification of Motor Imagery EEG Signals in Motor Imagery for Sustainable Brain–Computer Interfaces
by Yuyi Lu, Wenbo Wang, Baosheng Lian and Chencheng He
Sustainability 2024, 16(15), 6627; https://doi.org/10.3390/su16156627 - 2 Aug 2024
Abstract
Motor imagery brain–computer interface (MI-BCI) systems hold the potential to restore motor function and offer the opportunity for sustainable autonomous living for individuals with a range of motor and sensory impairments. The feature extraction and classification of motor imagery EEG signals have become a research hotspot. To address the challenges of difficult feature extraction and low recognition rates of motor imagery EEG signals caused by individual variations, a classification algorithm based on multi-feature fusion and the SVM-AdaBoost algorithm is proposed to improve the recognition accuracy of motor imagery EEG signals. Initially, the electroencephalography (EEG) signals are preprocessed using Finite Impulse Response (FIR) filters, and a multi-wavelet framework is constructed based on the Morlet wavelet and the Haar wavelet. Subsequently, the preprocessed signals undergo multi-wavelet decomposition to extract energy features, Common Spatial Patterns (CSP) features, Autoregressive (AR) features, and Power Spectral Density (PSD) features. The extracted features are then fused, and the fused feature vector is normalized. Following that, classification is implemented with the SVM-AdaBoost algorithm. To enhance the adaptability of SVM-AdaBoost, the Grid Search method is employed to optimize the penalty parameter and kernel function parameter of the SVM. Concurrently, the Whale Optimization Algorithm is utilized to optimize the learning rate and number of weak learners within the AdaBoost ensemble, thereby refining the overall performance. The classification performance of the algorithm is validated using a BCI dataset, where the classification accuracy reached 95.37%. Via the analysis of motor imagery EEG signals, the activation patterns in different regions of the brain can be detected and identified, enabling the inference of user intentions and facilitating communication and control between the human brain and external devices. Full article
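The fuse-then-normalize step described above (energy, CSP, AR, and PSD features concatenated per trial) can be sketched in a few lines; this is a generic illustration, with the z-scoring choice being an assumption rather than the paper's stated normalization:

```python
import numpy as np

def fuse_and_normalize(*feature_blocks):
    """Concatenate heterogeneous per-trial feature blocks (e.g. wavelet
    energy, CSP, AR, PSD), then z-score each column so no block
    dominates the SVM purely through its scale."""
    X = np.hstack(feature_blocks)
    mu, sd = X.mean(axis=0), X.std(axis=0)
    return (X - mu) / np.where(sd > 0, sd, 1.0)   # guard constant columns
```

Without per-column normalization, a block whose raw values are orders of magnitude larger (e.g. PSD in μV²/Hz) would dwarf the others in any distance- or margin-based classifier.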

29 pages, 4864 KiB  
Article
Comparative Analysis of Deep Learning Models for Optimal EEG-Based Real-Time Servo Motor Control
by Dimitris Angelakis, Errikos C. Ventouras, Spiros Kostopoulos and Pantelis Asvestas
Eng 2024, 5(3), 1708-1736; https://doi.org/10.3390/eng5030090 - 2 Aug 2024
Abstract
This study harnesses EEG signals to enable the real-time control of servo motors, utilizing the OpenBCI Community Dataset to identify and assess brainwave patterns related to motor imagery tasks. Specifically, the dataset includes EEG data from 52 subjects, capturing electrical brain activity while participants imagined executing specific motor tasks. Each participant underwent multiple trials for each motor imagery task, ensuring a diverse and comprehensive dataset for model training and evaluation. A deep neural network model comprising convolutional and bidirectional long short-term memory (LSTM) layers was developed and trained using k-fold cross-validation, achieving a notable accuracy of 98%. The model’s performance was further compared against recurrent neural networks (RNNs), multilayer perceptrons (MLPs), and Transformer algorithms, demonstrating that the CNN-LSTM model provided the best performance due to its effective capture of both spatial and temporal features. The model was deployed on a Python script interfacing with an Arduino board, enabling communication with two servo motors. The Python script predicts actions from preprocessed EEG data to control the servo motors in real-time. Real-time performance metrics, including classification reports and confusion matrices, demonstrate the seamless integration of the LSTM model with the Arduino board for precise and responsive control. An Arduino program was implemented to receive commands from the Python script via serial communication and control the servo motors, enabling accurate and responsive control based on EEG predictions. Overall, this study presents a comprehensive approach that combines machine learning, real-time implementation, and hardware interfacing to enable the precise and real-time control of servo motors using EEG signals, with potential applications in the human–robot interaction and assistive technology domains. Full article
(This article belongs to the Special Issue Artificial Intelligence for Engineering Applications)

27 pages, 1835 KiB  
Article
Exploring Feature Selection and Classification Techniques to Improve the Performance of an Electroencephalography-Based Motor Imagery Brain–Computer Interface System
by Md. Humaun Kabir, Nadim Ibne Akhtar, Nishat Tasnim, Abu Saleh Musa Miah, Hyoun-Sup Lee, Si-Woong Jang and Jungpil Shin
Sensors 2024, 24(15), 4989; https://doi.org/10.3390/s24154989 - 1 Aug 2024
Abstract
The accuracy of classifying motor imagery (MI) activities is a significant challenge when using brain–computer interfaces (BCIs). BCIs allow people with motor impairments to control external devices directly with their brains using electroencephalogram (EEG) patterns that translate brain activity into control signals. Many researchers have been working to develop MI-based BCI recognition systems using various time-frequency feature extraction and classification approaches. However, the existing systems still face challenges in achieving satisfactory performance due to a large number of non-discriminative and ineffective features. To address these problems, we propose an effective multiband decomposition-based feature extraction and classification method, along with a robust feature selection method for MI tasks. Our method starts by splitting the preprocessed EEG signal into four sub-bands. In each sub-band, we then use a common spatial pattern (CSP) technique to extract useful narrowband features, which yields a high-dimensional feature vector. Subsequently, we utilize an effective feature selection method, Relief-F, which reduces the dimensionality of the final features. Finally, incorporating advanced classification techniques, we classify the final reduced feature vector. To evaluate the proposed model, we used three different EEG-based MI benchmark datasets, and our proposed model achieved better performance accuracy than existing systems. Our model's strong points include its ability to effectively reduce feature dimensionality and improve classification accuracy through advanced feature extraction and selection methods. Full article
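The first stage above, splitting the preprocessed signal into sub-bands, can be illustrated with an ideal FFT-mask filter. This is a sketch only: real multiband pipelines (including, presumably, this one) use proper FIR/IIR band-pass filters rather than bin zeroing, and the band edges below are arbitrary examples:

```python
import numpy as np

def subband_split(x, fs, bands):
    """Split a 1-D EEG signal into frequency sub-bands by zeroing FFT
    bins outside each (lo, hi) band -- an ideal-filter sketch of the
    multiband decomposition step."""
    F = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    out = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        out.append(np.fft.irfft(F * mask, n=len(x)))
    return np.array(out)
```

Each returned row would then feed a per-band CSP, and the per-band features are concatenated before Relief-F selection.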

16 pages, 1013 KiB  
Article
EEG Motor Imagery Classification: Tangent Space with Gate-Generated Weight Classifier
by Sara Omari, Adil Omari, Fares Abu-Dakka and Mohamed Abderrahim
Biomimetics 2024, 9(8), 459; https://doi.org/10.3390/biomimetics9080459 - 27 Jul 2024
Abstract
Individuals grappling with severe central nervous system injuries often face significant challenges related to sensorimotor function and communication abilities. In response, brain–computer interface (BCI) technology has emerged as a promising solution by offering innovative interaction methods and intelligent rehabilitation training. By leveraging electroencephalographic (EEG) signals, BCIs unlock intriguing possibilities in patient care and neurological rehabilitation. Recent research has utilized covariance matrices as signal descriptors. In this study, we introduce two methodologies for covariance matrix analysis: multiple tangent space projections (M-TSPs) and Cholesky decomposition. Both approaches incorporate a classifier that integrates linear and nonlinear features, resulting in a significant enhancement in classification accuracy, as evidenced by meticulous experimental evaluations. The M-TSP method demonstrates superior performance, with an average accuracy improvement of 6.79% over Cholesky decomposition. Additionally, a gender-based analysis reveals higher classification accuracy for male subjects, with an average improvement of 9.16% over female subjects. These findings underscore the potential of our methodologies to improve BCI performance and highlight gender-specific performance differences to be examined further in our future studies. Full article
(This article belongs to the Special Issue Intelligent Human-Robot Interaction: 2nd Edition)
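The tangent space projection underlying the M-TSP method maps each symmetric positive-definite (SPD) covariance matrix to a flat space at a reference point, where ordinary classifiers apply. A single-reference-point sketch (the paper's "multiple" projections would repeat this at several references; function names are illustrative):

```python
import numpy as np

def logm_spd(C):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.log(w)) @ V.T

def tangent_space(C, C_ref):
    """Project an SPD covariance matrix onto the tangent space at a
    reference point: S = log(C_ref^-1/2 C C_ref^-1/2).  Vectorising S
    turns Riemannian covariance descriptors into ordinary features."""
    w, V = np.linalg.eigh(C_ref)
    R = V @ np.diag(w ** -0.5) @ V.T           # C_ref^{-1/2}
    return logm_spd(R @ C @ R)
```

The reference is typically the Riemannian or log-Euclidean mean of the training covariances, so that projections stay close to the origin.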

21 pages, 15605 KiB  
Article
Integration of Virtual Reality-Enhanced Motor Imagery and Brain-Computer Interface for a Lower-Limb Rehabilitation Exoskeleton Robot
by Chih-Jer Lin and Ting-Yi Sie
Actuators 2024, 13(7), 244; https://doi.org/10.3390/act13070244 - 28 Jun 2024
Abstract
In this study, we integrated virtual reality (VR) goggles and a motor imagery (MI) brain-computer interface (BCI) algorithm with a lower-limb rehabilitation exoskeleton robot (LLRER) system. The MI-BCI system was integrated with the VR goggles to identify the intention classification system. The VR goggles enhanced the immersive experience of the subjects during data collection. The VR-enhanced electroencephalography (EEG) classification model of a seated subject was directly applied to the rehabilitation of the LLRER wearer. The experimental results showed that the VR goggles had a positive effect on the classification accuracy of MI-BCI. The best results were obtained with subjects in a seated position wearing VR, but the seated VR classification model cannot be directly applied to rehabilitation triggers in the LLRER. There were a number of confounding factors that needed to be overcome. This study proposes a cumulative distribution function (CDF) auto-leveling method that can apply the seated VR model to standing subjects wearing exoskeletons. The classification model of seated VR had an accuracy of 75.35% in the open-loop test of the LLRER, and the accuracy of correctly triggering the rehabilitation action in the closed-loop gait rehabilitation of LLRER was 74%. Preliminary findings regarding the development of a closed-loop gait rehabilitation system activated by MI-BCI were presented. Full article
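The cumulative-distribution-function "auto-leveling" idea, re-mapping classifier outputs recorded in one condition (seated, VR) onto a comparable scale in another (standing, wearing the exoskeleton), can be sketched with an empirical CDF. This is an interpretation of the general technique, not the paper's exact algorithm:

```python
import numpy as np

def cdf_level(scores, reference):
    """Map classifier scores through the empirical CDF of a reference
    session, so that a fixed trigger threshold (e.g. 0.8) selects the
    same quantile of scores regardless of the recording condition."""
    ref = np.sort(np.asarray(reference))
    ranks = np.searchsorted(ref, scores, side="right")
    return ranks / len(ref)                    # values in [0, 1]
```

Thresholding the leveled score instead of the raw one compensates for condition-dependent shifts in the score distribution.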

22 pages, 97889 KiB  
Article
Processing and Integration of Multimodal Image Data Supporting the Detection of Behaviors Related to Reduced Concentration Level of Motor Vehicle Users
by Anton Smoliński, Paweł Forczmański and Adam Nowosielski
Electronics 2024, 13(13), 2457; https://doi.org/10.3390/electronics13132457 - 23 Jun 2024
Abstract
This paper introduces a comprehensive framework for the detection of behaviors indicative of reduced concentration levels among motor vehicle operators, leveraging multimodal image data. By integrating dedicated deep learning models, our approach systematically analyzes RGB images, depth maps, and thermal imagery to identify driver drowsiness and distraction signs. Our novel contribution includes utilizing state-of-the-art convolutional neural networks (CNNs) and bidirectional long short-term memory (Bi-LSTM) networks for effective feature extraction and classification across diverse distraction scenarios. Additionally, we explore various data fusion techniques, demonstrating their impact on improving detection accuracy. The significance of this work lies in its potential to enhance road safety by providing more reliable and efficient tools for the real-time monitoring of driver attentiveness, thereby reducing the risk of accidents caused by distraction and fatigue. The proposed methods are thoroughly evaluated using a multimodal benchmark dataset, with results demonstrating substantial capabilities that support the development of safety-enhancing technologies for vehicular environments. The primary challenge addressed in this study is the detection of driver states without relying on the lighting conditions. Our solution employs multimodal data integration, encompassing RGB, thermal, and depth images, to ensure robust and accurate monitoring regardless of external lighting variations. Full article
(This article belongs to the Special Issue Advancement on Smart Vehicles and Smart Travel)