Search Results (468)

Search Parameters:
Keywords = Kinect

14 pages, 1178 KiB  
Article
Pig Weight Estimation Method Based on a Framework Combining Mask R-CNN and Ensemble Regression Model
by Sheng Jiang, Guoxu Zhang, Zhencai Shen, Ping Zhong, Junyan Tan and Jianfeng Liu
Animals 2024, 14(14), 2122; https://doi.org/10.3390/ani14142122 - 20 Jul 2024
Viewed by 407
Abstract
Using computer vision technology to estimate pig live weight is an important way to promote pig welfare. However, two key issues affect weight estimation: uneven illumination, which leads to unclear extraction of pig contours, and bending of the pig body, which yields incorrect body measurements. For the first, Mask R-CNN was used to extract the contour of the pig, and the obtained mask image was converted into a binary image from which a more accurate contour image could be derived. For the second, the body length, hip width, and the distance from the camera to the pig's back were corrected by XGBoost using actual measured information. We then analyzed the rationality of the extracted features, and three feature combination strategies were used to predict pig weight. In total, 1505 back images of 39 pigs obtained using an Azure Kinect DK were used in the numerical experiments. XGBoost gave the highest prediction accuracy, with an MAE of 0.389, an RMSE of 0.576, a MAPE of 0.318% and an R2 of 0.995. We also recommend the Mask R-CNN + RFR method because it has fairly high precision under each strategy. The experimental results show that our proposed method performs excellently in the live weight estimation of pigs.
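
The paper's second stage, feature-based ensemble regression over body measurements extracted from the Mask R-CNN contours, can be sketched as follows. This is a minimal illustration with synthetic placeholder features (body length, hip width, camera-to-back distance), not the authors' code; the RFR they recommend corresponds to scikit-learn's random forest regressor used here.

```python
# Sketch of the second stage: regressing pig weight from body features
# extracted from contours. Feature values and the target are synthetic
# placeholders, not the paper's dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1505  # number of back images reported in the paper

# Placeholder features: body length, hip width, camera-to-back distance (cm).
X = rng.normal(loc=[120.0, 35.0, 180.0], scale=[10.0, 4.0, 8.0], size=(n, 3))
y = 0.8 * X[:, 0] + 1.5 * X[:, 1] - 0.1 * X[:, 2] + rng.normal(0, 1.0, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("MAE :", mean_absolute_error(y_test, pred))
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
print("R2  :", r2_score(y_test, pred))
```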

20 pages, 3670 KiB  
Article
Enhancing Visual Odometry with Estimated Scene Depth: Leveraging RGB-D Data with Deep Learning
by Aleksander Kostusiak and Piotr Skrzypczyński
Electronics 2024, 13(14), 2755; https://doi.org/10.3390/electronics13142755 - 13 Jul 2024
Viewed by 564
Abstract
Advances in visual odometry (VO) systems have benefited from the widespread use of affordable RGB-D cameras, improving indoor localization and mapping accuracy. However, older sensors like the Kinect v1 face challenges due to depth inaccuracies and incomplete data. This study compares indoor VO systems that use RGB-D images, exploring methods to enhance depth information. We examine conventional image inpainting techniques and a deep learning approach, utilizing newer depth data from devices like the Kinect v2. Our research highlights the importance of refining data from lower-quality sensors, which is crucial for cost-effective VO applications. By integrating deep learning models with richer context from RGB images and more comprehensive depth references, we demonstrate improved trajectory estimation compared to standard methods. This work advances budget-friendly RGB-D VO systems for indoor mobile robots, emphasizing deep learning's role in leveraging connections between image appearance and depth data.
(This article belongs to the Special Issue Applications of Machine Vision in Robotics)
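
A conventional inpainting baseline of the kind the study compares against can be reproduced with OpenCV. A minimal sketch, assuming a 16-bit depth map in which zero pixels mark missing Kinect v1 measurements; the file names are hypothetical.

```python
# Hole-filling for a Kinect v1 depth map: zero-depth pixels are treated
# as missing and inpainted with OpenCV's Telea method.
import cv2
import numpy as np

depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED)  # hypothetical 16-bit depth map
mask = (depth == 0).astype(np.uint8)                   # missing-depth mask

# cv2.inpaint works on 8-bit images, so scale the depth down and back up.
depth8 = cv2.convertScaleAbs(depth, alpha=255.0 / max(depth.max(), 1))
filled8 = cv2.inpaint(depth8, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
filled = (filled8.astype(np.float32) / 255.0 * depth.max()).astype(depth.dtype)
cv2.imwrite("depth_filled.png", filled)
```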

29 pages, 18651 KiB  
Article
Realization of Impression Evidence with Reverse Engineering and Additive Manufacturing
by Osama Abdelaal and Saleh Ahmed Aldahash
Appl. Sci. 2024, 14(13), 5444; https://doi.org/10.3390/app14135444 - 23 Jun 2024
Viewed by 564
Abstract
Significant advances in reverse engineering and additive manufacturing have the potential to provide a faster, more accurate, and more cost-effective process chain for preserving, analyzing, and presenting forensic impression evidence in both 3D digital and physical forms. The objective of the present research was to evaluate the capabilities and limitations of five 3D scanning technologies for the 3D reconstruction of 3D impression evidence: laser scanning (LS), structured-light (SL) scanning, smartphone (SP) photogrammetry, the Microsoft Kinect v2 RGB-D camera, and the iPhone's LiDAR (iLiDAR) sensor. Furthermore, methodologies for 3D reconstruction of latent impressions and visible 2D impressions from a single 2D photo were proposed. Additionally, the FDM additive manufacturing process was employed to build models of the impression evidence created by each procedure. The results showed that the SL scanning system delivered the highest reconstruction accuracy, so it was employed as a benchmark to assess the reconstruction quality of the other systems. In comparison to the SL data, LS showed the smallest absolute geometrical deviations (0.37 mm), followed by SP photogrammetry (0.78 mm). In contrast, the iLiDAR exhibited the largest absolute deviations (2.481 mm), followed by the Kinect v2 (2.382 mm). Additionally, 3D-printed impression replicas reproduced finer detail than Plaster of Paris (POP) casts. The feasibility of reconstructing 2D impressions into 3D models is progressively increasing. Finally, this article explores potential future research directions in this field.
(This article belongs to the Special Issue Advances in 3D Sensing Techniques and Its Applications)
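
The geometrical deviations reported above can be quantified by comparing each point of a test scan to its nearest neighbor in the benchmark SL scan. A minimal sketch with SciPy; the file names and point cloud format are assumptions, not the authors' pipeline.

```python
# Nearest-neighbor deviation of a test scan against the structured-light
# benchmark scan, using a k-d tree for fast lookups.
import numpy as np
from scipy.spatial import cKDTree

sl_scan = np.loadtxt("sl_benchmark.xyz")   # N x 3 reference point cloud (mm)
test_scan = np.loadtxt("laser_scan.xyz")   # M x 3 scan under evaluation (mm)

tree = cKDTree(sl_scan)
dist, _ = tree.query(test_scan)            # nearest-neighbor distance per point
print(f"mean abs deviation: {dist.mean():.3f} mm")
print(f"95th percentile   : {np.percentile(dist, 95):.3f} mm")
```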

12 pages, 2240 KiB  
Article
Comparing the Drop Vertical Jump Tracking Performance of the Azure Kinect to the Kinect V2
by Patrik Abdelnour, Kevin Y. Zhao, Athanasios Babouras, Jason Philip Aaron Hiro Corban, Nicolaos Karatzas, Thomas Fevens and Paul Andre Martineau
Sensors 2024, 24(12), 3814; https://doi.org/10.3390/s24123814 - 13 Jun 2024
Viewed by 373
Abstract
Traditional motion analysis systems are impractical for widespread screening of non-contact anterior cruciate ligament (ACL) injury risk. The Kinect V2 was identified as a portable and reliable alternative but has since been replaced by the Azure Kinect. We hypothesized that the Azure Kinect would assess drop vertical jump (DVJ) parameters associated with ACL injury risk with accuracy similar to that of its predecessor, the Kinect V2. Sixty-nine participants performed DVJs while being recorded by both the Azure Kinect and the Kinect V2 simultaneously. Our software analyzed the data to identify initial coronal, peak coronal, and peak sagittal knee angles. Agreement between the two systems was evaluated using the intraclass correlation coefficient (ICC). There was poor agreement between the Azure Kinect and the Kinect V2 for initial and peak coronal angles (ICC values ranging from 0.135 to 0.446), and moderate agreement for peak sagittal angles (ICC = 0.608 and 0.655 for the left and right knees, respectively). At this point in time, the Azure Kinect is not a reliable successor to the Kinect V2 for assessing initial coronal, peak coronal, and peak sagittal angles during a DVJ, despite demonstrating superior tracking of continuous knee angles. Alternative motion analysis systems should be explored.
(This article belongs to the Special Issue Sensor-Based Motion Analysis in Medicine, Rehabilitation and Sport)
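
For reference, the agreement statistic used here, a two-way random, absolute-agreement ICC(2,1), can be computed from a subjects-by-raters matrix as sketched below. The sample angles are invented for illustration; the study's raw data are not reproduced.

```python
# ICC(2,1): two-way random effects, absolute agreement, single rater
# (Shrout & Fleiss), implemented directly from the ANOVA mean squares.
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ratings: n_subjects x k_raters matrix."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
    sse = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Columns: peak sagittal knee angle (deg) from Azure Kinect and Kinect V2.
angles = np.array([[62.1, 60.3], [71.4, 68.9], [55.0, 57.2], [66.8, 64.1]])
print(f"ICC(2,1) = {icc2_1(angles):.3f}")
```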

18 pages, 1043 KiB  
Article
Gamified Exercise with Kinect: Can Kinect-Based Virtual Reality Training Improve Physical Performance and Quality of Life in Postmenopausal Women with Osteopenia? A Randomized Controlled Trial
by Saima Riaz, Syed Shakil Ur Rehman, Danish Hassan and Sana Hafeez
Sensors 2024, 24(11), 3577; https://doi.org/10.3390/s24113577 - 1 Jun 2024
Viewed by 547
Abstract
Background: Osteopenia, caused by estrogen deficiency in postmenopausal women (PMW), lowers Bone Mineral Density (BMD) and increases bone fragility. It affects about half of older women, compromising their social and physical health. PMW experience pain and disability, impacting their health-related Quality of Life (QoL) and function. This study aimed to determine the effects of Kinect-based Virtual Reality Training (VRT) on physical performance and QoL in PMW with osteopenia. Methodology: The study was a prospective, two-arm, parallel-design, randomized controlled trial. Fifty-two participants were recruited, with 26 randomly assigned to each group. The experimental group received Kinect-based VRT three times a week for 24 weeks, each session lasting 45 min. Both groups were directed to take a 30-min walk outside every day. Physical performance was measured by the Timed Up and Go Test (TUG), Functional Reach Test (FRT), Five Times Sit to Stand Test (FTSST), Modified Sit and Reach Test (MSRT), Dynamic Hand Grip Strength (DHGS), Non-Dynamic Hand Grip Strength (NDHGS), BORG Score and Dyspnea Index. The Escala de Calidad de Vida Osteoporosis (ECOS-16) questionnaire measured QoL. Both physical performance and QoL measures were assessed at baseline, after 12 weeks, and after 24 weeks. Data were analyzed in SPSS 25. Results: The mean age of the PMW participants was 58.00 ± 5.52 years. In the within-group comparison, all outcome variables (TUG, FRT, FTSST, MSRT, DHGS, NDHGS, BORG Score, Dyspnea, and ECOS-16) in the experimental group showed significant improvements (p < 0.001) from baseline to the 12th week, from the 12th to the 24th week, and from baseline to the 24th week. In the control group, all outcome variables except FRT (12th to 24th week) showed statistically significant improvements (p < 0.001) over the same intervals. In the between-group comparison, the experimental group demonstrated more significant improvements in most outcome variables at all time points than the control group (p < 0.001), indicating the positive additional effect of Kinect-based VRT. Conclusion: Physical performance and QoL measures improved in both the experimental and control groups; however, the experimental group showed better results. Thus, Kinect-based VRT is a feasible alternative intervention for improving physical performance and QoL in PMW with osteopenia. This novel approach may be widely applicable in upcoming studies, given the increasing interest in virtual reality-based therapy for rehabilitation.
(This article belongs to the Section Biomedical Sensors)

12 pages, 2374 KiB  
Article
Evaluating Desk-Assisted Standing Techniques for Simulated Pregnant Conditions: An Experimental Study Using a Maternity-Simulation Jacket
by Kohei Uno, Kako Tsukioka, Hibiki Sakata, Tomoe Inoue-Hirakawa and Yusuke Matsui
Healthcare 2024, 12(9), 931; https://doi.org/10.3390/healthcare12090931 - 1 May 2024
Viewed by 921
Abstract
Lower back pain, a common issue among pregnant women, often complicates daily activities like standing up from a chair. Research into the standing motion of pregnant women is therefore important, and many studies have already been conducted. However, many of them took place in highly controlled environments, overlooking everyday scenarios such as using a desk for support when standing up, whose effects have not been adequately tested. To address this gap, we measured multimodal signals for a sit-to-stand (STS) movement with hand assistance and verified the changes using a t-test. To avoid imposing strain on pregnant women, we recruited 10 healthy young adults who wore jackets designed to simulate pregnancy conditions, allowing for more comprehensive and rigorous experimentation. We attached surface electromyography (sEMG) sensors to the erector spinae muscles of the participants and measured changes in muscle activity, skeletal positioning, and center of pressure before and after they put on a Maternity-Simulation Jacket. Our analysis showed that the jacket successfully mimicked key aspects of the movement patterns typical of pregnant women. These results highlight the possibility of developing practical strategies that more accurately mirror the real-life scenarios encountered by pregnant women, enriching the current research on their STS movement.
(This article belongs to the Section Women's Health Care)
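
The before/after comparison described above is a paired design. A minimal sketch of such a t-test on per-participant sEMG amplitudes; the values are illustrative, not the study's data.

```python
# Paired t-test on per-participant sEMG RMS amplitude, measured without
# and with the maternity-simulation jacket (illustrative values).
import numpy as np
from scipy import stats

rms_without = np.array([0.112, 0.095, 0.130, 0.101, 0.088,
                        0.121, 0.097, 0.109, 0.115, 0.104])
rms_with_jacket = np.array([0.131, 0.118, 0.149, 0.117, 0.102,
                            0.138, 0.111, 0.128, 0.134, 0.122])

t, p = stats.ttest_rel(rms_with_jacket, rms_without)
print(f"paired t = {t:.2f}, p = {p:.4f}")
```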

14 pages, 5243 KiB  
Article
Neural Network-Based Body Weight Prediction in Pelibuey Sheep through Biometric Measurements
by Alfonso J. Chay-Canul, Enrique Camacho-Pérez, Fernando Casanova-Lugo, Omar Rodríguez-Abreo, Mayra Cruz-Fernández and Juvenal Rodríguez-Reséndiz
Technologies 2024, 12(5), 59; https://doi.org/10.3390/technologies12050059 - 30 Apr 2024
Viewed by 1268
Abstract
This paper presents an intelligent system for the dynamic estimation of sheep body weight (BW). The methodology estimates body weight from seven biometric parameters: height at withers, rump height, body length, body diagonal length, total body length, semicircumference of the abdomen, and semicircumference of the girth. A biometric parameter acquisition system was developed using a Kinect as the sensor, and its measurements were contrasted with those obtained manually with a flexometer. The comparison gives an average root mean square error (RMSE) of 9.91 and a mean R2 of 0.81. Subsequently, the parameters were used as input to a back-propagation artificial neural network, and performance tests were run with different combinations to choose the best architecture. In this way, an intelligent body weight estimation system working from biometric parameters was obtained, with a 5.8% RMSE in the weight estimates for the best architecture. This approach represents an innovative, feasible, and economical alternative to support decision-making in livestock production systems.
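
The regression stage, a back-propagation network mapping seven biometric measurements to body weight, might look like the following sketch. The architecture and synthetic data are assumptions, not the paper's tuned configuration.

```python
# Back-propagation network regressing body weight from seven biometric
# parameters; the data here are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.uniform(40, 110, size=(300, 7))                    # 7 biometric measures (cm)
y = X @ rng.uniform(0.1, 0.6, 7) + rng.normal(0, 2, 300)   # synthetic weight (kg)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
net = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=1),
)
net.fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((net.predict(X_te) - y_te) ** 2))
print(f"RMSE: {rmse:.2f} kg")
```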

19 pages, 11345 KiB  
Article
ST-TGR: Spatio-Temporal Representation Learning for Skeleton-Based Teaching Gesture Recognition
by Zengzhao Chen, Wenkai Huang, Hai Liu, Zhuo Wang, Yuqun Wen and Shengming Wang
Sensors 2024, 24(8), 2589; https://doi.org/10.3390/s24082589 - 18 Apr 2024
Cited by 2 | Viewed by 889
Abstract
Teaching gesture recognition is a technique used to recognize the hand movements of teachers in classroom teaching scenarios. The technology is widely used in education, including for classroom teaching evaluation, enhancing online teaching, and assisting special education. However, current research on gesture recognition in teaching mainly focuses on detecting the static gestures of individual students and analyzing their classroom behavior. To analyze the teacher's gestures and mitigate the difficulty of single-target dynamic gesture recognition in multi-person teaching scenarios, this paper proposes skeleton-based teaching gesture recognition (ST-TGR), which learns through spatio-temporal representation. The method uses the human pose estimation technique RTMPose to extract the coordinates of the keypoints of the teacher's skeleton and then feeds the recognized skeleton sequence into the MoGRU action recognition network to classify gesture actions. The MoGRU module learns the spatio-temporal representation of target actions by stacking multi-scale bidirectional gated recurrent units (BiGRU) and using improved attention mechanism modules. To validate the generalization of the action recognition model, we conducted comparative experiments on the NTU RGB+D 60, UTKinect-Action3D, SBU Kinect Interaction, and Florence 3D datasets. The results indicate that, compared with most existing baseline models, the proposed model achieves better recognition accuracy and speed.
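
The core of the MoGRU recognizer, stacked bidirectional GRUs over skeleton keypoint sequences, can be outlined in Keras as below. Layer sizes, sequence length, and joint count are assumptions, and the paper's multi-scale design and attention modules are omitted.

```python
# Stacked bidirectional GRU classifier over a sequence of skeleton
# keypoints (e.g., RTMPose output flattened per frame).
import tensorflow as tf

T, J, C, NUM_CLASSES = 64, 17, 2, 10  # frames, joints, (x, y), gesture classes

model = tf.keras.Sequential([
    tf.keras.Input(shape=(T, J * C)),  # flattened keypoints per frame
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(128, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64)),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```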

15 pages, 9529 KiB  
Article
Advanced Planar Projection Contour (PPC): A Novel Algorithm for Local Feature Description in Point Clouds
by Wenbin Tang, Yinghao Lv, Yongdang Chen, Linqing Zheng and Runxiao Wang
J. Imaging 2024, 10(4), 84; https://doi.org/10.3390/jimaging10040084 - 29 Mar 2024
Viewed by 961
Abstract
Local feature description of point clouds is essential in 3D computer vision. However, many local feature descriptors for point clouds struggle with inadequate robustness, excessive dimensionality, and poor computational efficiency. To address these issues, we propose a novel descriptor based on planar projection contours, characterized by convex hull contour information. We construct the Local Reference Frame (LRF) through covariance analysis of the query point and its neighboring points. The neighboring points are projected onto three orthogonal planes defined by the LRF, and the projections on each plane are fitted into convex hull contours and encoded as local features. These planar features are then concatenated to create the Planar Projection Contour (PPC) descriptor. We evaluated the performance of the PPC descriptor against classical descriptors on the B3R, UWAOR, and Kinect datasets. Experimental results demonstrate that the PPC descriptor achieves an accuracy exceeding 80% across all recall levels, even under high noise and point density variations, underscoring its effectiveness and robustness.
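
The main construction steps, an LRF from neighborhood covariance, projection onto the three LRF planes, and 2D convex hulls, can be sketched as follows. This is a simplified illustration; the paper's contour encoding and concatenation are omitted.

```python
# LRF from covariance analysis, projection onto the three orthogonal LRF
# planes, and convex hulls of the projected points.
import numpy as np
from scipy.spatial import ConvexHull

def ppc_hulls(neighbors: np.ndarray, query: np.ndarray):
    """neighbors: N x 3 points around the 3-vector query point."""
    centered = neighbors - query
    # LRF axes = eigenvectors of the neighborhood covariance matrix.
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    local = centered @ vecs                      # points in LRF coordinates
    hulls = []
    for drop in range(3):                        # project onto each LRF plane
        plane = np.delete(local, drop, axis=1)   # drop one axis -> 2D projection
        hulls.append(ConvexHull(plane))
    return hulls

rng = np.random.default_rng(2)
pts = rng.normal(size=(60, 3))
for h in ppc_hulls(pts, pts.mean(axis=0)):
    # For 2D hulls, .volume is the enclosed area.
    print("hull vertices:", len(h.vertices), "area:", round(h.volume, 3))
```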

11 pages, 1726 KiB  
Article
Comparing a Portable Motion Analysis System against the Gold Standard for Potential Anterior Cruciate Ligament Injury Prevention and Screening
by Nicolaos Karatzas, Patrik Abdelnour, Jason Philip Aaron Hiro Corban, Kevin Y. Zhao, Louis-Nicolas Veilleux, Stephane G. Bergeron, Thomas Fevens, Hassan Rivaz, Athanasios Babouras and Paul A. Martineau
Sensors 2024, 24(6), 1970; https://doi.org/10.3390/s24061970 - 20 Mar 2024
Cited by 2 | Viewed by 1022
Abstract
Knee kinematics during a drop vertical jump, measured by the Kinect V2 (Microsoft, Redmond, WA, USA), have been shown to be associated with an increased risk of non-contact anterior cruciate ligament injury. However, the accuracy and reliability of the Microsoft Kinect V2 have yet to be assessed specifically for tracking the coronal and sagittal knee angles of the drop vertical jump. Eleven participants performed three drop vertical jumps that were recorded using both the Kinect V2 and a gold-standard motion analysis system (Vicon, Los Angeles, CA, USA). The initial coronal, peak coronal, and peak sagittal angles of the left and right knees were measured by both systems simultaneously, and the Kinect V2 data were analyzed by our software. The differences in the mean knee angles measured by the two systems were non-significant for all parameters except the peak sagittal angle of the right leg, with a difference of 7.74 degrees and a p-value of 0.008. There was excellent agreement between the Kinect V2 and the Vicon system, with intraclass correlation coefficients consistently over 0.75 for all knee angles measured. Visual analysis revealed moderate frame-to-frame variability in the coronal angles measured by the Kinect V2. The Kinect V2 can capture knee coronal and sagittal angles with sufficient accuracy during a drop vertical jump, suggesting that a Kinect-based portable motion analysis system is suitable for screening individuals for the risk of non-contact anterior cruciate ligament injury.
(This article belongs to the Section Biomedical Sensors)
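
For context, a sagittal knee flexion angle of the kind both systems report can be derived from three tracked joints (hip, knee, ankle). A minimal sketch with illustrative coordinates in meters.

```python
# Knee angle from three 3D joint positions: the angle between the thigh
# and shank vectors meeting at the knee.
import numpy as np

def knee_angle(hip, knee, ankle):
    """Angle between thigh and shank vectors, in degrees (180 = straight leg)."""
    thigh = np.asarray(hip) - np.asarray(knee)
    shank = np.asarray(ankle) - np.asarray(knee)
    cosang = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

print(f"{knee_angle([0.0, 1.0, 0.0], [0.05, 0.55, 0.02], [0.02, 0.10, 0.10]):.1f} deg")
```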

22 pages, 2241 KiB  
Article
Motion Capture in Mixed-Reality Applications: A Deep Denoising Approach
by André Correia Gonçalves, Rui Jesus and Pedro Mendes Jorge
Virtual Worlds 2024, 3(1), 135-156; https://doi.org/10.3390/virtualworlds3010007 - 11 Mar 2024
Viewed by 919
Abstract
Motion capture is a fundamental technique in video game development and film production: it animates a virtual character based on the movements of an actor, creating more realistic animations in a short amount of time. One way to obtain this movement is to capture the actor's motion through an optical sensor that lets the player interact with the virtual world. However, during movement some parts of the human body can be occluded by others, and there can be noise caused by difficulties in sensor capture, reducing the user experience. This work presents a solution that corrects motion capture errors from the Microsoft Kinect sensor or similar devices through a deep neural network (DNN) trained on a pre-processed dataset of poses offered by the Carnegie Mellon University (CMU) Graphics Lab. A temporal filter is implemented to smooth the movement given by the set of poses returned by the deep neural network. The system is implemented in Python with the TensorFlow application programming interface (API), which supports the machine learning techniques, and uses the Unity game engine to visualize and interact with the obtained skeletons. The results are evaluated using the mean absolute error (MAE) metric where ground truth is available and, for the Kinect data, with the feedback of 12 participants gathered through a questionnaire.
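
The two post-processing stages described, a DNN that maps noisy Kinect poses to corrected poses and a temporal filter over the returned sequence, might be set up as below. Shapes, layer sizes, and the choice of smoothing filter are assumptions; the CMU training data are not reproduced.

```python
# A pose-denoising network plus a temporal smoothing filter, in the
# spirit of the described pipeline (TensorFlow, as in the paper).
import numpy as np
import tensorflow as tf

D = 25 * 3  # 25 Kinect joints x (x, y, z)

denoiser = tf.keras.Sequential([
    tf.keras.Input(shape=(D,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(D),                    # corrected pose vector
])
denoiser.compile(optimizer="adam", loss="mae")   # MAE, matching the paper's metric
# denoiser.fit(noisy_poses, clean_poses, ...) once training pairs are prepared.

def smooth(poses: np.ndarray, alpha: float = 0.3) -> np.ndarray:
    """Exponential moving average over a (T, D) pose sequence."""
    out = poses.copy()
    for t in range(1, len(out)):
        out[t] = alpha * poses[t] + (1 - alpha) * out[t - 1]
    return out
```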

21 pages, 4113 KiB  
Article
Simulation of Human Movement in Zero Gravity
by Adelina Bärligea, Kazunori Hase and Makoto Yoshida
Sensors 2024, 24(6), 1770; https://doi.org/10.3390/s24061770 - 9 Mar 2024
Viewed by 1061
Abstract
In the era of expanding manned space missions, understanding the biomechanical impacts of zero gravity on human movement is pivotal. This study introduces a novel and cost-effective framework that demonstrates the application of Microsoft's Azure Kinect body tracking technology as a motion input generator for subsequent OpenSim simulations in weightlessness. Testing rotations, locomotion, coordination, and martial arts movements, we validate the results' realism under the constraints of angular and linear momentum conservation. While complex, full-body coordination tasks face limitations in a zero gravity environment, our findings suggest possible approaches to device-free exercise routines for astronauts and reveal insights into the feasibility of hand-to-hand combat in space. However, some challenges remain in distinguishing zero gravity effects in the simulations from discrepancies in the captured motion input or forward dynamics calculations, making a comprehensive validation difficult. The paper concludes by highlighting the framework's practical potential for the future of space mission planning and related research endeavors, while also providing recommendations for further refinement.
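
The momentum-conservation check used for validation can be illustrated as follows: in free fall, the body model's total linear and angular momentum should stay constant over the simulated motion. The segment masses, frame rate, and state arrays below are placeholders, not the paper's model.

```python
# Conservation check: total linear and angular momentum of a segmented
# body model should drift only by numerical error in zero gravity.
import numpy as np

masses = np.array([40.0, 15.0, 15.0])            # e.g., trunk and two legs (kg)
pos = np.random.default_rng(3).normal(size=(100, 3, 3))  # T frames x segments x xyz
dt = 1.0 / 30.0                                  # assumed capture frame rate
vel = np.gradient(pos, dt, axis=0)

p = (masses[None, :, None] * vel).sum(axis=1)                 # linear momentum
L = (masses[None, :, None] * np.cross(pos, vel)).sum(axis=1)  # angular momentum

print("linear momentum drift :", np.linalg.norm(p - p[0], axis=1).max())
print("angular momentum drift:", np.linalg.norm(L - L[0], axis=1).max())
```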

26 pages, 1847 KiB  
Systematic Review
Economic Cost of Rehabilitation with Robotic and Virtual Reality Systems in People with Neurological Disorders: A Systematic Review
by Roberto Cano-de-la-Cuerda, Aitor Blázquez-Fernández, Selena Marcos-Antón, Patricia Sánchez-Herrera-Baeza, Pilar Fernández-González, Susana Collado-Vázquez, Carmen Jiménez-Antona and Sofía Laguarta-Val
J. Clin. Med. 2024, 13(6), 1531; https://doi.org/10.3390/jcm13061531 - 7 Mar 2024
Cited by 2 | Viewed by 1410
Abstract
Background: The prevalence of neurological disorders is increasing worldwide. In recent decades, conventional rehabilitation for people with neurological disorders has often been reinforced with technological devices (robots and virtual reality). The aim of this systematic review was to identify the evidence on the economic cost of rehabilitation with robotic and virtual reality devices for people with neurological disorders through a review of the scientific publications of the last 15 years. Methods: A systematic review was conducted on partial economic evaluations (cost description, cost analysis, description of costs and results) and complete ones (cost minimization, cost-effectiveness, cost utility and cost benefit). The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed. The main data sources were PubMed, Scopus and Web of Science (WOS). Studies published in English over the last 15 years were considered for inclusion, regardless of the type of neurological disorder. The critical appraisal instrument from the Joanna Briggs Institute for economic evaluation and the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) were used to analyse the methodological quality of all the included papers. Results: A total of 15 studies were included in this review: ten focused on robotics and five on virtual reality. Most of the studies involved people who had experienced a stroke. The most frequently used robotic device was the InMotion® (Bionik Co., Watertown, MA, USA); all the virtual reality papers used semi-immersive systems, with commercial video game systems (Nintendo Wii® (Nintendo Co., Ltd., Kyoto, Japan) and Kinect® (Microsoft Inc., Redmond, WA, USA)) being used the most. The included studies mainly presented cost minimization outcomes and a general description of costs per intervention, and they differed in population, setting, device, protocol and the economic cost outcomes evaluated. Overall, the methodological quality of the included studies was moderate. Conclusions: There is controversy about using robotics with people with neurological disorders in a rehabilitation context in terms of cost minimization, cost-effectiveness, cost utility and cost benefit. Semi-immersive virtual reality devices could yield savings (mainly derived from the low prices of the systems analysed and from transportation services when applied through telerehabilitation programmes) compared to in-clinic interventions.
(This article belongs to the Section Clinical Rehabilitation)

18 pages, 6573 KiB  
Article
Development and Evaluation of an Image Processing-Based Kinesthetic Learning System
by Deniz Yıldız, Uğur Fidan, Mehmet Yıldız, Büşra Er, Gürbüz Ocak, Fatih Güngör, İjlal Ocak and Zeki Akyildiz
Appl. Sci. 2024, 14(5), 2186; https://doi.org/10.3390/app14052186 - 5 Mar 2024
Viewed by 1112
Abstract
This study aims to develop an interactive language learning game and explore its efficacy for English language learners. A computer-generated playground was projected onto a large classroom floor (4 × 3 m) with a wide-angle projection device, and a Kinect depth camera determined the spatial positions of the playground and of the students' heads, feet, and bodies. We then evaluated the system's effect on English education through pre- and post-tests. While there was no significant difference in achievement between the groups in the pre-tests, the experimental group exhibited significantly greater improvement in the post-tests (F: 14.815, p < 0.001, η2p: 0.086). Both groups demonstrated significant learning gains in the post-tests compared to the pre-tests (F: 98.214, p < 0.001, η2p: 0.383), and the group × time interaction was significant (F: 9.166, p < 0.003, η2p: 0.055), with the experimental group improving by a larger percentage than the control group (32.32% vs. 17.54%). Qualitative data from the students' views indicated an enhanced learning pace, vocabulary acquisition, enjoyment of the learning process, and increased focus. These findings suggest that a kinesthetic learning environment can significantly benefit English language learning in children.
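
Relating a position seen by the depth camera to the projected playground is, for a flat floor, a planar homography problem. A minimal OpenCV sketch with assumed calibration points; the study does not detail its calibration procedure.

```python
# Map a point from Kinect image coordinates onto the projected 4 x 3 m
# playground using a four-point homography.
import cv2
import numpy as np

# Four floor corners in Kinect image coordinates (pixels) ...
src = np.float32([[102, 88], [518, 95], [530, 400], [95, 392]])
# ... and the same corners in playground coordinates (meters).
dst = np.float32([[0, 0], [4, 0], [4, 3], [0, 3]])

H = cv2.getPerspectiveTransform(src, dst)
foot_px = np.float32([[[300, 240]]])             # a student's foot in the image
foot_m = cv2.perspectiveTransform(foot_px, H)
print("foot position on playground (m):", foot_m.ravel())
```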

24 pages, 4112 KiB  
Article
Enhancing Human Action Recognition with 3D Skeleton Data: A Comprehensive Study of Deep Learning and Data Augmentation
by Chu Xin, Seokhwan Kim, Yongjoo Cho and Kyoung Shin Park
Electronics 2024, 13(4), 747; https://doi.org/10.3390/electronics13040747 - 13 Feb 2024
Viewed by 1498
Abstract
Human Action Recognition (HAR) is an important field that identifies human behavior through sensor data. Three-dimensional human skeleton data extracted from the Kinect depth sensor have emerged as a powerful alternative that mitigates the lighting and occlusion effects of traditional HAR based on 2D RGB or grayscale images. Data augmentation is a key technique for enhancing model generalization and robustness in deep learning while suppressing overfitting to the training data. In this paper, we conduct a comprehensive study of various data augmentation techniques specific to skeletal data, which aim to improve the accuracy of deep learning models. These methods include spatial augmentation, which generates augmented samples from the original 3D skeleton sequence, and temporal augmentation, which is designed to capture subtle temporal changes in motion. The evaluation covers two publicly available datasets and a proprietary dataset and employs three neural network models. The results highlight the impact of temporal augmentation on model performance on the skeleton datasets, while the impact of spatial augmentation is more nuanced. The findings underscore the importance of tailoring augmentation strategies to specific dataset characteristics and actions, providing novel perspectives for model selection in skeleton-based human action recognition tasks.
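
The two augmentation families discussed, spatial (e.g., random rotation plus joint jitter) and temporal (e.g., random cropping with resampling), can be sketched in NumPy as below. Array shapes and parameter values are assumptions, not the paper's exact transforms.

```python
# Spatial and temporal augmentation of a (T, J, 3) skeleton sequence.
import numpy as np

rng = np.random.default_rng(4)

def spatial_augment(seq: np.ndarray, max_deg: float = 15.0) -> np.ndarray:
    """Rotate joints about the vertical axis and add small coordinate noise."""
    a = np.radians(rng.uniform(-max_deg, max_deg))
    rot = np.array([[np.cos(a), 0, np.sin(a)],
                    [0, 1, 0],
                    [-np.sin(a), 0, np.cos(a)]])
    return seq @ rot.T + rng.normal(0, 0.01, seq.shape)

def temporal_augment(seq: np.ndarray, crop: float = 0.9) -> np.ndarray:
    """Randomly crop a window, then linearly resample to the original length."""
    T = len(seq)
    w = max(2, int(T * crop))
    start = rng.integers(0, T - w + 1)
    idx = np.linspace(start, start + w - 1, T)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, start + w - 1)
    frac = (idx - lo)[:, None, None]
    return (1 - frac) * seq[lo] + frac * seq[hi]

skeleton = rng.normal(size=(64, 25, 3))          # T frames, 25 Kinect joints
print(spatial_augment(skeleton).shape, temporal_augment(skeleton).shape)
```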
