Search Results (959)

Search Parameters:
Keywords = collaborative robotics

11 pages, 716 KiB  
Review
Robotic Surgery in the Management of Renal Tumors During Pregnancy: A Narrative Review
by Lucio Dell’Atti and Viktoria Slyusar
Cancers 2025, 17(4), 574; https://doi.org/10.3390/cancers17040574 - 8 Feb 2025
Viewed by 214
Abstract
Renal masses are uncommon during pregnancy; nevertheless, they represent the most frequently encountered urological cancer in pregnant patients and require careful surgical planning. The introduction of robotic surgical systems aims to address these challenges by simplifying intra-corporeal suturing and reducing technical complexity. Robot-assisted laparoscopic renal surgery offers potential benefits over both open surgery and conventional laparoscopy, providing greater precision and reduced invasiveness, particularly in tumor excision and suturing. Although urological tumors during pregnancy are rare, early detection significantly improves outcomes by enabling intervention before the tumor advances and while the uterus remains relatively small. The decision regarding the timing and necessity of surgery in pregnant patients requires a careful assessment of maternal health, fetal development, and the progression of the disease. Risks for adverse pregnancy outcomes should be explained, and the patient’s decision about pregnancy termination should be considered. Radical nephrectomy and nephron-sparing surgery are the essential treatments for the management of renal tumors. Effective management demands close collaboration between a multidisciplinary team and the patient to ensure individualized care. The aim of this review was to evaluate renal tumors during pregnancy in terms of epidemiology, risk factors, diagnosis, and the safety of a robot-assisted laparoscopic approach in the management of these tumors. Full article

36 pages, 2041 KiB  
Article
A Novice-Friendly and Accessible Networked Educational Robotics Simulation Platform
by Gordon Stein, Devin Jean, Saman Kittani, Menton Deweese and Ákos Lédeczi
Educ. Sci. 2025, 15(2), 198; https://doi.org/10.3390/educsci15020198 - 7 Feb 2025
Viewed by 245
Abstract
Despite its potential for STEM education, educational robotics remains out of reach for many classrooms due to upfront purchase costs, maintenance requirements, storage space, and numerous other barriers to entry. As demonstrated previously, these physical robot limitations can be reduced or eliminated through simulation. This work presents a new version of RoboScape Online, a browser-based networked educational robotics simulation platform that aims to make robotics education more accessible while expanding both the breadth and depth of topics taught. Through cloud-hosted simulations, this platform enables distant students to collaborate and compete in real time. Integration with NetsBlox, a block-based programming environment, allows students at any level to participate in computer science activities. By incorporating a virtual machine for running NetsBlox code into the server, RoboScape Online enables scenarios to be built using the same syntax and abstractions used to program the robots. This approach enables more creative curriculum activities while demonstrating that block-based programming is a valuable development tool, not just a “toy language”. Classroom case studies demonstrate RoboScape Online’s potential to improve students’ computational thinking skills and foster positive attitudes toward STEM subjects, with especially significant improvements in attitudes toward self-expression and creativity within the realm of computer science. Full article
(This article belongs to the Special Issue Innovations in Precollegiate Computer Science Education)

28 pages, 891 KiB  
Review
A Comprehensive Review of AI-Based Digital Twin Applications in Manufacturing: Integration Across Operator, Product, and Process Dimensions
by David Alfaro-Viquez, Mauricio Zamora-Hernandez, Michael Fernandez-Vega, Jose Garcia-Rodriguez and Jorge Azorin-Lopez
Electronics 2025, 14(4), 646; https://doi.org/10.3390/electronics14040646 - 7 Feb 2025
Viewed by 537
Abstract
Digital twins (DTs) represent a transformative technology in manufacturing, facilitating significant advancements in monitoring, simulation, and optimization. This paper offers an extensive bibliographic review of AI-based DT applications, categorized into three principal dimensions: operator, process, and product. The operator dimension focuses on enhancing safety and ergonomics through intelligent assistance, utilizing real-time monitoring and artificial intelligence, notably in human–robot collaboration contexts. The process dimension covers optimizing production flows, identifying bottlenecks, and dynamically reconfiguring systems through predictive models and real-time simulations. Lastly, the product dimension emphasizes applications that improve product design and quality, employing lifecycle and historical data to satisfy evolving market requirements. This categorization provides a structured framework for analyzing the specific capabilities and trends of DTs, while also identifying knowledge gaps in contemporary research. This review highlights the key challenges of technological interoperability, data integration, and high implementation costs while emphasizing how digital twins, supported by AI, can drive the transition toward sustainable, human-centered manufacturing systems in line with Industry 5.0. The findings provide valuable insights for advancing the state of the art and exploring future opportunities in digital twin applications. Full article

21 pages, 2881 KiB  
Article
Analyzing the Impact of Information Asymmetry on Strategy Adaptation in Swarm Robotics: A Game-Theoretic Approach
by Yi Sun and Ying Han
Symmetry 2025, 17(2), 248; https://doi.org/10.3390/sym17020248 - 7 Feb 2025
Viewed by 242
Abstract
In dynamic environments characterized by information asymmetry, swarm robots encounter significant challenges in efficiently collaborating to complete tasks. From the perspective of information disparity, this study investigates the effects of factors such as resource information, shared costs, transmission efficiency, and strategy-switching probabilities that arise from uneven information sharing among robots. A payoff matrix is developed to model the selection between search and exploration strategies under conditions of information asymmetry. Utilizing evolutionary game theory and replicator dynamics, the study analyzes how robots adapt their strategies in response to variations in resource information and shared costs. The findings reveal that the system ultimately evolves toward one of two dominant strategies: search or exploration. Numerical simulations demonstrate that information disparity, shared costs, transmission efficiency, and strategy-switching probabilities collectively drive the transition of robots from a search strategy to an exploration strategy, enabling them to acquire unknown environmental information more effectively and expedite task completion. The results suggest that in environments with balanced information, the system predominantly favors the search strategy to optimize resource utilization. Conversely, in environments with pronounced information asymmetry, the system is more inclined to adopt the exploration strategy, enhancing adaptability to environmental changes and accelerating task completion. Full article
(This article belongs to the Section Mathematics)
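
For readers less familiar with the modeling machinery, the two-strategy replicator dynamics that such an evolutionary-game analysis typically rests on can be written (in generic textbook form, not necessarily the exact payoff structure used in the paper) as

\dot{x} = x\,(1 - x)\,\bigl[ f_{\mathrm{search}}(x) - f_{\mathrm{explore}}(x) \bigr],

where x is the fraction of robots currently using the search strategy and f_search and f_explore are the expected payoffs of the two strategies under the payoff matrix. The population drifts toward whichever strategy earns the higher expected payoff, which is exactly the search-versus-exploration dominance behavior the abstract describes.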

25 pages, 9755 KiB  
Article
Marker-Based Safety Functionality for Human–Robot Collaboration Tasks by Means of Eye-Tracking Glasses
by Enrico Masi, Nhu Toan Nguyen, Eugenio Monari, Marcello Valori and Rocco Vertechy
Machines 2025, 13(2), 122; https://doi.org/10.3390/machines13020122 - 6 Feb 2025
Viewed by 260
Abstract
Human–robot collaboration (HRC) is a steadily growing trend in the robotics research field. Despite the widespread usage of collaborative robots on the market, several safety issues still need to be addressed to develop industry-ready applications exploiting the full potential of the technology. This paper focuses on hand-guiding applications, proposing an approach based on a wearable device to reduce the risk related to operator fatigue or distraction. The methodology aims at ensuring the operator’s attention during the hand guidance of a robot end effector in order to avoid injuries. This goal is achieved by detecting a region of interest (ROI) and checking that the gaze of the operator is kept within this area by means of a pair of eye-tracking glasses (Pupil Labs Neon, Berlin, Germany). The detection of the ROI is obtained primarily by the tracking camera of the glasses, acquiring the position of predefined ArUco markers, thus obtaining the corresponding contour area. In the case of misdetection of one or more markers, their positions are estimated through the optical flow methodology. The performance of the proposed system is initially assessed with a motorized test bench simulating the rotation of the operator’s head in a repeatable way and then in an HRC scenario used as a case study. The tests show that the system can effectively identify a planar ROI in the context of an HRC application in real time. Full article
(This article belongs to the Section Automation and Control Systems)
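
As a rough illustration of the marker-tracking pipeline described above (ArUco detection with an optical-flow fallback for missed markers), the following sketch uses the OpenCV ArUco and Lucas–Kanade APIs; the dictionary choice, camera index, and fallback policy are illustrative assumptions, not the authors' implementation.

import cv2
import numpy as np

# OpenCV >= 4.7 ArUco API; dictionary and camera index are placeholder assumptions.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
cap = cv2.VideoCapture(0)
prev_gray, prev_pts = None, None

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is not None:
        # Markers detected: use all marker corners as the support points of the ROI.
        pts = np.concatenate([c.reshape(-1, 1, 2) for c in corners]).astype(np.float32)
    elif prev_pts is not None:
        # Misdetection: propagate the last known corner positions with sparse optical flow.
        pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
        pts = pts[status.ravel() == 1].reshape(-1, 1, 2)
    else:
        continue
    prev_gray, prev_pts = gray, pts
    roi_contour = cv2.convexHull(pts)
    # The gaze point reported by the eye-tracking glasses could then be tested against the ROI:
    # inside = cv2.pointPolygonTest(roi_contour, gaze_xy, False) >= 0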

42 pages, 2435 KiB  
Review
Technological Innovation in Start-Ups on a Pathway to Achieving Sustainable Development Goal (SDG) 8: A Systematic Review
by Lilian Danil, Siti Jahroh, Rizal Syarief and Asep Taryana
Sustainability 2025, 17(3), 1220; https://doi.org/10.3390/su17031220 - 3 Feb 2025
Viewed by 667
Abstract
In a start-up, the level of technological innovation is crucial to the start-up’s competitiveness, especially in the digital age; as a result, high-tech start-ups stand a better chance of being more profitable than middle-tech and low-tech start-ups. The aim of this study is to identify and examine research papers regarding the role of technological innovation in advancing Sustainable Development Goal 8 (SDG 8) in the current context. This study intends to fill research gaps by performing a systematic literature review and meta-analysis following the PRISMA guidelines on the subject. To investigate advancements in the use of start-up technologies, scientific publications were obtained from the Scopus database, yielding a total of 384 entries at the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) identification stage. The findings indicate that high technology encompasses artificial intelligence (AI), blockchain, the Internet of Things (IoT), and collaborative robots; medium technology comprises mobile applications, big data, and cloud computing; and low technology consists of software and connectivity. Each of the technological innovations plays a significant role in advancing SDG 8, encompassing aspects such as economic growth, employment, productivity, creativity, innovation, entrepreneurship, development policies, and business growth. Full article
(This article belongs to the Section Economic and Business Aspects of Sustainability)

12 pages, 2704 KiB  
Article
A High-Flexibility Contact Force Sensor Based on the 8-Shaped Wound Polymer Optical Fiber for Human Safety in Human–Robot Collaboration
by Yi Liu, Yaru Zuo, Xueyao Jiang, Xuezhu Li, Weihao Yuan and Wenhong Cao
Fibers 2025, 13(2), 15; https://doi.org/10.3390/fib13020015 - 2 Feb 2025
Viewed by 575
Abstract
Human–robot collaboration is a new trend in modern manufacturing. Safety, or human protection, is of great significance because humans and robots share the same workshop space. To achieve effective protection, in this paper, a contact force sensor based on an 8-shaped wound polymer optical fiber is proposed. The 8-shaped wound structure converts the normal contact force into the shrinkage of the 8-shaped optical fiber ring. The macro-bending loss of the optical fiber is used to detect the contact force. Compared with conventional sensors, the proposed scheme has the advantages of high flexibility, low cost, fast response, and high repeatability, which shows great promise in actively alerting users to potential collisions and passively reducing the harm caused to humans. Full article
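
Since the sensing principle reduces to a calibration curve between macro-bending loss and applied force, a minimal, generic calibration step might look like the sketch below; the data points and the assumption of a roughly linear loss-versus-force relationship are illustrative placeholders, not the paper's characterization.

import numpy as np

# Hypothetical calibration data: relative received optical power (dB) recorded while
# known normal forces (N) compress the 8-shaped fiber winding.
force_n = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
power_db = np.array([0.00, -0.41, -0.83, -1.22, -1.65, -2.04])

# First-order fit: bending loss is assumed approximately linear in contact force.
slope, intercept = np.polyfit(force_n, power_db, 1)

def estimate_force(measured_power_db):
    """Invert the calibration to recover the contact force from a power reading."""
    return (measured_power_db - intercept) / slope

print(estimate_force(-1.0))  # roughly 4.9 N for the illustrative data above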

40 pages, 29209 KiB  
Article
Integration of Deep Learning Vision Systems in Collaborative Robotics for Real-Time Applications
by Nuno Terras, Filipe Pereira, António Ramos Silva, Adriano A. Santos, António Mendes Lopes, António Ferreira da Silva, Laurentiu Adrian Cartal, Tudor Catalin Apostolescu, Florentina Badea and José Machado
Appl. Sci. 2025, 15(3), 1336; https://doi.org/10.3390/app15031336 - 27 Jan 2025
Viewed by 570
Abstract
Collaborative robotics and computer vision systems are increasingly important in automating complex industrial tasks with greater safety and productivity. This work presents an integrated vision system powered by a trained neural network and coupled with a collaborative robot for real-time sorting and quality inspection in a food product conveyor process. Multiple object detection models were trained on custom datasets using advanced augmentation techniques to optimize performance. The proposed system achieved a detection and classification accuracy of 98%, successfully processing more than 600 items with high efficiency and low computational cost. Unlike conventional solutions that rely on ROS (Robot Operating System), this implementation used a Windows-based Python framework for greater accessibility and industrial compatibility. The results demonstrated the reliability and industrial applicability of the solution, offering a scalable and accurate methodology that can be adapted to various industrial applications. Full article
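
To give a concrete flavor of the real-time detection loop such a system requires (the custom-trained models and datasets themselves are not reproduced here), the sketch below uses an off-the-shelf Ultralytics YOLO model; the weights file, confidence threshold, and conveyor camera index are placeholder assumptions rather than the paper's setup.

import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")         # generic pretrained weights stand in for the custom detector
cap = cv2.VideoCapture(0)          # conveyor camera (assumed index)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]
    for box in result.boxes:
        conf = float(box.conf[0])
        if conf < 0.5:
            continue
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        label = result.names[int(box.cls[0])]
        # A sort/reject decision for the collaborative robot would be issued here,
        # e.g. by sending the box centroid and class to the robot controller.
        print(label, conf, ((x1 + x2) / 2, (y1 + y2) / 2))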

27 pages, 12074 KiB  
Article
Near Time-Optimal Trajectories with ISO Standard Constraints for Human–Robot Collaboration in Fabric Co-Transportation
by Renat Kermenov, Alessandro Di Biase, Ilaria Pellicani, Sauro Longhi and Andrea Bonci
Robotics 2025, 14(2), 10; https://doi.org/10.3390/robotics14020010 - 27 Jan 2025
Viewed by 508
Abstract
Enabling robots to work safely close to humans requires both adherence to safety standards and the development of appropriate strategies to plan and control robot movements in accordance with human movements. Collaboration between humans and robots in a shared environment is a joint activity aimed at completing specific tasks, requiring coordination, synchronisation, and sometimes physical contact, in which each party contributes its own skills and resources. Among the most challenging tasks of human–robot cooperation is the co-transport of deformable materials such as fabrics. This paper proposes a method for generating the trajectory of a collaborative manipulator. The method is designed for the co-transport of materials such as fabrics. It combines a near time-optimal control strategy, which ensures responsiveness in following human actions, with compliance with the safety limits imposed by current regulations. The combination of these two elements results in a viable co-transport solution that preserves the safety of human operators. This is achieved by constraining the path of the robot trajectory with prescribed velocities and accelerations while simultaneously ensuring a near time-optimal control strategy. In short, the robot movement is generated in such a way as to ensure both the tracking of humans in the co-transportation task and compliance with safety limits. As a first attempt to adopt the proposed approach to integrate time-optimal strategies into human–robot interaction, the simulations and preliminary experimental results obtained are promising. Full article
(This article belongs to the Section Industrial Robots and Automation)
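
A core ingredient of this approach is generating motion that is as fast as possible while respecting prescribed velocity and acceleration limits. The sketch below builds a plain trapezoidal velocity profile along a one-dimensional path under such caps; it is a simplified stand-in for the paper's near time-optimal, human-tracking trajectory generator, and the numeric limits are arbitrary.

import numpy as np

def trapezoidal_profile(distance, v_max, a_max, dt=0.002):
    """Time-parameterize a 1-D move of a given length under velocity/acceleration caps."""
    t_acc = v_max / a_max
    d_acc = 0.5 * a_max * t_acc**2
    if 2 * d_acc > distance:                     # triangular case: v_max is never reached
        t_acc = np.sqrt(distance / a_max)
        v_peak, t_flat = a_max * t_acc, 0.0
    else:
        v_peak, t_flat = v_max, (distance - 2 * d_acc) / v_max
    t_total = 2 * t_acc + t_flat
    t = np.arange(0.0, t_total, dt)
    v = np.minimum.reduce([a_max * t, np.full_like(t, v_peak), a_max * (t_total - t)])
    s = np.cumsum(v) * dt                        # numerically integrated position
    return t, s, v

# Example: a 0.8 m co-transport stroke capped at 0.25 m/s and 0.5 m/s^2 (illustrative limits).
t, s, v = trapezoidal_profile(0.8, v_max=0.25, a_max=0.5)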

36 pages, 3892 KiB  
Article
Mutual Cooperation System for Task Execution Between Ground Robots and Drones Using Behavior Tree-Based Action Planning and Dynamic Occupancy Grid Mapping
by Hiroaki Kobori and Kosuke Sekiyama
Drones 2025, 9(2), 95; https://doi.org/10.3390/drones9020095 - 26 Jan 2025
Viewed by 542
Abstract
This study presents a cooperative system where drones and ground robots share information to efficiently complete tasks in environments that challenge the capabilities of a single robot. Drones focus on exploring high-interest areas for ground robots, generating occupancy grid maps and identifying high-risk routes. Ground robots use this information to evaluate and adapt routes as needed. Flexible action planning through behavior trees enables the robots to respond dynamically to environmental changes, facilitating spontaneous and adaptable cooperation. Experiments with real robots confirmed the system’s performance and adaptability to various settings. Specifically, when high-risk areas were identified from drone-provided information, ground robots generated alternative routes to bypass these zones, demonstrating the system’s capacity to navigate complex paths while minimizing risks. This establishes a basis for scaling to larger environments. The proposed system is expected to improve the safety and efficiency of robot operations by enabling multiple robots to accomplish complex tasks collaboratively, tasks that would be difficult or time-consuming for an individual robot. The findings demonstrate the potential for multi-robot cooperation to enhance task execution in challenging environments and provide a framework for future research on effective role sharing and information exchange in autonomous systems. Full article
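
For readers unfamiliar with behavior trees, the minimal sketch below shows the sequence/fallback control flow that this kind of action planning relies on; the node classes and task names are illustrative, not the authors' implementation.

class Node:
    def tick(self):                 # returns "SUCCESS", "FAILURE", or "RUNNING"
        raise NotImplementedError

class Sequence(Node):
    """Succeeds only if every child succeeds, ticked left to right."""
    def __init__(self, children): self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != "SUCCESS":
                return status
        return "SUCCESS"

class Fallback(Node):
    """Tries children in order until one succeeds (useful for re-planning around risk)."""
    def __init__(self, children): self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != "FAILURE":
                return status
        return "FAILURE"

class Action(Node):
    def __init__(self, name, fn): self.name, self.fn = name, fn
    def tick(self): return self.fn()

# Illustrative ground-robot plan: follow the drone-suggested route; if it is judged
# too risky, replan around the high-risk zone, then continue to the goal.
root = Sequence([
    Fallback([Action("follow_suggested_route", lambda: "FAILURE"),
              Action("replan_around_risk", lambda: "SUCCESS")]),
    Action("navigate_to_goal", lambda: "SUCCESS"),
])
print(root.tick())  # -> "SUCCESS"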

29 pages, 32678 KiB  
Article
An Active Control Method for a Lower Limb Rehabilitation Robot with Human Motion Intention Recognition
by Zhuangqun Song, Peng Zhao, Xueji Wu, Rong Yang and Xueshan Gao
Sensors 2025, 25(3), 713; https://doi.org/10.3390/s25030713 - 24 Jan 2025
Viewed by 585
Abstract
This study presents a method for the active control of a follow-up lower extremity exoskeleton rehabilitation robot (LEERR) based on human motion intention recognition. Initially, to effectively support body weight and compensate for the vertical movement of the human center of mass, a vision-driven follow-and-track control strategy is proposed. Subsequently, an algorithm for recognizing human motion intentions based on machine learning is proposed for human–robot collaboration tasks. A muscle–machine interface is constructed using a bi-directional long short-term memory (BiLSTM) network, which decodes multichannel surface electromyography (sEMG) signals into flexion and extension angles of the hip and knee joints in the sagittal plane. The hyperparameters of the BiLSTM network are optimized using the quantum-behaved particle swarm optimization (QPSO) algorithm, resulting in a QPSO-BiLSTM hybrid model that enables continuous real-time estimation of human motion intentions. Further, to address the uncertain nonlinear dynamics of the wearer–exoskeleton robot system, a dual radial basis function neural network adaptive sliding mode controller (DRBFNNASMC) is designed to generate control torques, thereby enabling the precise tracking of motion trajectories generated by the muscle–machine interface. Experimental results indicate that the follow-up-assisted frame can accurately track human motion trajectories. The QPSO-BiLSTM network outperforms traditional BiLSTM and PSO-BiLSTM networks in predicting continuous lower limb motion, while the DRBFNNASMC controller demonstrates superior gait tracking performance compared to the fuzzy compensated adaptive sliding mode control (FCASMC) algorithm and the traditional proportional–integral–derivative (PID) control algorithm. Full article
(This article belongs to the Section Wearables)
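
To make the muscle–machine interface concrete, the sketch below shows a generic BiLSTM regressor from windowed multichannel sEMG to sagittal-plane hip and knee angles; the channel count, window length, and layer sizes are assumptions, and the QPSO hyperparameter search and exoskeleton controller are omitted.

import torch
import torch.nn as nn

class EMGToJointAngles(nn.Module):
    """Bidirectional LSTM mapping an sEMG window to hip/knee flexion-extension angles."""
    def __init__(self, n_channels=8, hidden=64, n_joints=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_joints)

    def forward(self, x):               # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # angles regressed from the final time step

model = EMGToJointAngles()
emg_window = torch.randn(16, 200, 8)    # 16 windows of 200 samples x 8 sEMG channels (assumed)
angles = model(emg_window)              # (16, 2): hip and knee angles in the sagittal plane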

17 pages, 5755 KiB  
Article
A Hybrid Architecture for Safe Human–Robot Industrial Tasks
by Gaetano Lettera, Daniele Costa and Massimo Callegari
Appl. Sci. 2025, 15(3), 1158; https://doi.org/10.3390/app15031158 - 24 Jan 2025
Viewed by 609
Abstract
In the context of Industry 5.0, human–robot collaboration (HRC) is increasingly crucial for enabling safe and efficient operations in shared industrial workspaces. This study aims to implement a hybrid robotic architecture based on the Speed and Separation Monitoring (SSM) collaborative scenario defined in ISO/TS 15066. The system calculates the minimum protective separation distance between the robot and the operators and slows down or stops the robot according to the risk assessment computed in real time. Compared to existing solutions, the approach prevents collisions and maximizes workcell production by reducing the robot speed only when the calculated safety index indicates an imminent risk of collision. The proposed distributed software architecture utilizes the ROS2 framework, integrating three modules: (1) a fast and reliable human tracking module based on the OptiTrack system that considerably reduces latency and false positives, (2) an intention estimation (IE) module, employing a linear Kalman filter (LKF) to predict the operator’s next position and velocity, thus considering the current scenario rather than the worst case, and (3) a robot control module that computes the protective separation distance and assesses the safety index by measuring the Euclidean distance between operators and the robot. This module dynamically adjusts robot speed to maintain safety while minimizing unnecessary slowdowns, ensuring the efficiency of collaborative tasks. Experimental results demonstrate that the proposed system effectively balances safety and speed, optimizing overall performance in human–robot collaborative industrial environments, with significant improvements in productivity and reduced risk of accidents. Full article
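
For reference, the protective separation distance that Speed and Separation Monitoring revolves around is commonly written, following ISO/TS 15066, as

S_p(t_0) = S_h + S_r + S_s + C + Z_d + Z_r,   with   S_h \approx v_h (T_r + T_s)   and   S_r \approx v_r T_r,

where S_h is the distance the operator can cover at speed v_h during the robot's reaction time T_r and stopping time T_s, S_r is the distance the robot travels during its reaction time, S_s is its stopping distance, C is the intrusion distance, and Z_d and Z_r are the operator- and robot-position measurement uncertainties. The paper's exact parameterization may differ, but this is the quantity the robot control module compares against the measured human–robot distance before slowing or stopping the robot.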

19 pages, 5103 KiB  
Article
Human-Aware Control for Physically Interacting Robots
by Reza Sharif Razavian
Bioengineering 2025, 12(2), 107; https://doi.org/10.3390/bioengineering12020107 - 23 Jan 2025
Viewed by 643
Abstract
This paper presents a novel model for predicting human movements and introduces a new control method for human–robot interaction based on this model. The developed predictive model of human movement is a holistic model that is based on well-supported neuroscientific and biomechanical theories of human motor control; it includes multiple levels of the human sensorimotor system hierarchy, including high-level decision-making based on internal models, muscle synergies, and physiological muscle mechanics. Therefore, this holistic model can predict arm kinematics and neuromuscular activities in a computationally efficient way. The computational efficiency of the model also makes it suitable for repetitive predictive simulations within a robot’s control algorithm to predict the user’s behavior in human–robot interactions. Based on this model and the nonlinear model predictive control framework, a human-aware control algorithm is implemented that internally runs simulations to predict the user’s future interactive movement patterns. Consequently, it can optimize the robot’s motor torques to minimize an index, such as the user’s neuromuscular effort. Simulation results of the holistic model and its utilization in the human-aware control of a two-link robot arm are presented. The holistic model is shown to replicate salient features of human movements. The human-aware controller’s ability to predict and minimize the user’s neuromuscular effort in a collaborative task is also demonstrated in simulations. Full article
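
The control idea above, repeatedly simulating a human model inside the optimizer, follows the standard nonlinear MPC pattern. The toy sketch below optimizes a short robot torque sequence for a one-dimensional interaction model so as to minimize a predicted human-effort cost; the dynamics, feedback policy, cost weights, and horizon are invented placeholders, not the paper's holistic sensorimotor model.

import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 0.05, 10                 # 0.5 s look-ahead (assumed)

def predicted_human_effort(robot_torques, x0):
    """Toy stand-in for the predictive human model: simulate a coupled 1-D mass and
    accumulate the effort the 'human' policy spends keeping it on a reference path."""
    pos, vel, effort = x0[0], x0[1], 0.0
    for k, tau_r in enumerate(robot_torques):
        target = 0.3 * (k + 1) * DT                  # reference trajectory the human tracks
        tau_h = 5.0 * (target - pos) - 1.0 * vel     # simple human feedback policy (assumed)
        acc = (tau_h + tau_r) / 2.0                  # shared-load dynamics with unit-like inertia
        vel += acc * DT
        pos += vel * DT
        effort += tau_h**2 * DT                      # neuromuscular-effort proxy
    return effort

x0 = np.array([0.0, 0.0])
res = minimize(lambda u: predicted_human_effort(u, x0) + 1e-3 * np.sum(u**2),
               x0=np.zeros(HORIZON), method="L-BFGS-B",
               bounds=[(-5.0, 5.0)] * HORIZON)
robot_plan = res.x   # in MPC fashion only the first torque is applied before re-optimizing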

37 pages, 20841 KiB  
Article
Reinforced NEAT Algorithms for Autonomous Rover Navigation in Multi-Room Dynamic Scenario
by Dhadkan Shrestha and Damian Valles
Fire 2025, 8(2), 41; https://doi.org/10.3390/fire8020041 - 23 Jan 2025
Viewed by 639
Abstract
This paper demonstrates the performance of autonomous rovers utilizing NeuroEvolution of Augmenting Topologies (NEAT) in multi-room scenarios and explores their potential applications in wildfire management and search and rescue missions. Simulations in three- and four-room scenarios were conducted over 100 to 10,000 generations, comparing standard learning with transfer learning from a pre-trained single-room model. The task required rovers to visit all rooms before returning to the starting point. Performance metrics included fitness score, successful room visits, and return rates. The results revealed significant improvements in rover performance across generations for both scenarios, with transfer learning providing substantial advantages, particularly in early generations. Transfer learning achieved 32 successful returns after 10,000 generations for the three-room scenario compared to 34 with standard learning. In the four-room scenario, transfer learning achieved 32 successful returns. Heatmap analyses highlighted efficient navigation strategies, particularly around starting points and target zones. This study highlights NEAT’s adaptability to complex navigation problems, showcasing the utility of transfer learning. Additionally, it proposes the integration of NEAT with UAV systems and collaborative robotic frameworks for fire suppression, fuel characterization, and dynamic fire boundary detection, further strengthening its role in real-world emergency management. Full article
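
For orientation, the typical neat-python training loop behind this kind of experiment looks roughly like the sketch below; the configuration file name, stub fitness function, and generation count are placeholders, and the multi-room rover simulator itself is not shown.

import neat

def run_episode(net):
    # Stub simulator: a real implementation would step the rover through the rooms,
    # feeding sensor readings to net.activate(...) and scoring room visits and returns.
    return 0.0

def eval_genomes(genomes, config):
    """Assign a fitness to each candidate controller by running it in the rover simulator."""
    for genome_id, genome in genomes:
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        genome.fitness = run_episode(net)

config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                     neat.DefaultSpeciesSet, neat.DefaultStagnation,
                     "rover_neat.cfg")          # hypothetical config file
population = neat.Population(config)
population.add_reporter(neat.StdOutReporter(True))
winner = population.run(eval_genomes, 300)      # evolve for 300 generations (illustrative)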

16 pages, 1745 KiB  
Article
Shared Control of Supernumerary Robotic Limbs Using Mixed Reality and Mouth-and-Tongue Interfaces
by Hongwei Jing, Sikai Zhao, Tianjiao Zheng, Lele Li, Qinghua Zhang, Kerui Sun, Jie Zhao and Yanhe Zhu
Biosensors 2025, 15(2), 70; https://doi.org/10.3390/bios15020070 - 23 Jan 2025
Viewed by 511
Abstract
Supernumerary Robotic Limbs (SRLs) are designed to collaborate with the wearer, enhancing operational capabilities. When human limbs are occupied with primary tasks, controlling SRLs flexibly and naturally becomes a challenge. Existing methods such as electromyography (EMG) control and redundant limb control partially address SRL control issues. However, they still face limitations like restricted degrees of freedom and complex data requirements, which hinder their applicability in real-world scenarios. Additionally, fully autonomous control methods, while efficient, often lack the flexibility needed for complex tasks, as they do not allow for real-time user adjustments. In contrast, shared control combines machine autonomy with human input, enabling finer control and more intuitive task completion. Building on our previous work with the mouth-and-tongue interface, this paper integrates a mixed reality (MR) device to form an interactive system that enables shared control of the SRL. The system allows users to dynamically switch between voluntary and autonomous control, providing both flexibility and efficiency. A random forest model classifies 14 distinct tongue and mouth operations, mapping them to six-degree-of-freedom SRL control. In comparative experiments involving ten healthy subjects performing assembly tasks under three control modes (shared control, autonomous control, and voluntary control), shared control demonstrates a balance between machine autonomy and human input. While autonomous control offers higher task efficiency, shared control achieves greater task success rates and improves user experience by combining the advantages of both autonomous operation and voluntary control. This study validates the feasibility of shared control and highlights its advantages in providing flexible switching between autonomy and user intervention, offering new insights into SRL control. Full article
(This article belongs to the Section Wearable Biosensors)
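
As a generic illustration of the classification step (a random forest mapping mouth-and-tongue interface features to one of 14 operations), the sketch below uses scikit-learn on synthetic data; the feature dimensionality, sample counts, and labels are assumptions, not the study's dataset.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

N_CLASSES, N_FEATURES = 14, 24          # 14 tongue/mouth operations, assumed feature size
rng = np.random.default_rng(0)

# Synthetic stand-in for windowed sensor features from the mouth-and-tongue interface.
X = rng.normal(size=(1400, N_FEATURES))
y = rng.integers(0, N_CLASSES, size=1400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Each predicted class would then be mapped to one of the six-degree-of-freedom SRL commands.
print("held-out accuracy:", clf.score(X_te, y_te))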
