Search Results (420)

Search Parameters:
Keywords = robot customization

17 pages, 4644 KiB  
Article
A System for Robotic Extraction of Fasteners
by Austin Clark and Musa K. Jouaneh
Appl. Sci. 2025, 15(2), 618; https://doi.org/10.3390/app15020618 - 10 Jan 2025
Viewed by 265
Abstract
Automating the extraction of mechanical fasteners from end-of-life (EOL) electronic waste is challenging due to unpredictable conditions and unknown fastener locations relative to robotic coordinates. This study develops a system for extracting cross-recessed screws using a Deep Convolutional Neural Network (DCNN) for screw detection, integrated with industrial robot simulation software. The simulation models the tooling, camera, environment, and robot kinematics, enabling real-time control and feedback between the robot and the simulation environment. The system, tested on a robotic platform with custom tooling, including force and torque sensors, aimed to optimize fastener removal. Key performance indicators included the speed and success rate of screw extraction, with success rates ranging from 78 to 89% on the first pass and 100% on the second. The system uses a state-based program design for fastener extraction, with real-time control via a web-socket interface. Despite its potential, the system faces limitations, such as longer cycle times, with single fastener extraction taking over 30 s. These challenges can be mitigated by refining the tooling, DCNN model, and control logic for improved efficiency. Full article
(This article belongs to the Special Issue Computer Vision in Automatic Detection and Identification)
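The abstract describes a state-based extraction program with real-time feedback between the robot and the simulation over a web-socket link. The sketch below is a hypothetical reading of such a state machine in Python; the state names, thresholds, and the `robot`/`detector` helper objects are illustrative stand-ins, not the authors' implementation.

```python
# Hypothetical sketch of a state-based fastener-extraction loop (assumed structure).
from enum import Enum, auto

class State(Enum):
    DETECT = auto()      # run the DCNN on a camera image to locate screws
    APPROACH = auto()    # move the tool above the detected screw
    ENGAGE = auto()      # descend until the force sensor reports contact
    EXTRACT = auto()     # spin the driver while monitoring torque
    VERIFY = auto()      # re-image to confirm the screw is gone
    DONE = auto()

def extraction_cycle(robot, detector, force_limit_n=15.0, torque_drop_nm=0.05):
    """One pass over a workpiece; returns the number of screws removed."""
    state, removed, target = State.DETECT, 0, None
    while state is not State.DONE:
        if state is State.DETECT:
            screws = detector.detect(robot.grab_image())       # DCNN inference (stub)
            if not screws:
                state = State.DONE
            else:
                target, state = screws[0], State.APPROACH
        elif state is State.APPROACH:
            robot.move_above(target)                           # simulation-mirrored motion (stub)
            state = State.ENGAGE
        elif state is State.ENGAGE:
            robot.descend_until_force(force_limit_n)           # force-guarded contact (stub)
            state = State.EXTRACT
        elif state is State.EXTRACT:
            robot.unscrew_until_torque_drops(torque_drop_nm)   # torque drop ~= screw free (stub)
            state = State.VERIFY
        elif state is State.VERIFY:
            removed += 1 if detector.is_removed(robot.grab_image(), target) else 0
            state = State.DETECT                               # a second pass catches misses
    return removed
```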

44 pages, 4022 KiB  
Review
Neural Network for Enhancing Robot-Assisted Rehabilitation: A Systematic Review
by Nafizul Alam, Sk Hasan, Gazi Abdullah Mashud and Subodh Bhujel
Actuators 2025, 14(1), 16; https://doi.org/10.3390/act14010016 - 6 Jan 2025
Viewed by 465
Abstract
The integration of neural networks into robotic exoskeletons for physical rehabilitation has become popular due to their ability to interpret complex physiological signals. Surface electromyography (sEMG), electromyography (EMG), electroencephalography (EEG), and other physiological signals enable communication between the human body and robotic systems. Utilizing physiological signals for communicating with robots plays a crucial role in robot-assisted neurorehabilitation. This systematic review synthesizes 44 peer-reviewed studies, exploring how neural networks can improve exoskeleton robot-assisted rehabilitation for individuals with impaired upper limbs. By categorizing the studies based on robot-assisted joints, sensor systems, and control methodologies, we offer a comprehensive overview of neural network applications in this field. Our findings demonstrate that neural networks, such as Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM), Radial Basis Function Neural Networks (RBFNNs), and other forms of neural networks significantly contribute to patient-specific rehabilitation by enabling adaptive learning and personalized therapy. CNNs improve motion intention estimation and control accuracy, while LSTM networks capture temporal muscle activity patterns for real-time rehabilitation. RBFNNs improve human–robot interaction by adapting to individual movement patterns, leading to more personalized and efficient therapy. This review highlights the potential of neural networks to revolutionize upper limb rehabilitation, improving motor recovery and patient outcomes in both clinical and home-based settings. It also recommends the future direction of customizing existing neural networks for robot-assisted rehabilitation applications. Full article
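As a concrete illustration of the LSTM-based intention estimation the review surveys, here is a minimal PyTorch sketch that maps a window of multi-channel sEMG samples to a movement-intention class. The channel count, window length, and number of classes are assumptions, not values from any reviewed study.

```python
import torch
import torch.nn as nn

class EMGIntentLSTM(nn.Module):
    """Maps a window of multi-channel sEMG samples to a movement-intention class."""
    def __init__(self, n_channels=8, hidden=64, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                    # x: (batch, time, channels)
        _, (h_n, _) = self.lstm(x)           # h_n: (1, batch, hidden)
        return self.head(h_n[-1])            # class logits

logits = EMGIntentLSTM()(torch.randn(2, 200, 8))   # 2 windows, 200 samples, 8 channels
```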

15 pages, 1079 KiB  
Article
An Improved Hierarchical Optimization Framework for Walking Control of Underactuated Humanoid Robots Using Model Predictive Control and Whole Body Planner and Controller
by Yuanji Liu, Haiming Mou, Hao Jiang, Qingdu Li and Jianwei Zhang
Mathematics 2025, 13(1), 154; https://doi.org/10.3390/math13010154 - 3 Jan 2025
Viewed by 559
Abstract
This paper addresses the fundamental challenge of achieving stable and efficient walking in a lightweight, underactuated humanoid robot that lacks an ankle roll degree of freedom. To tackle this relevant critical problem, we present a hierarchical optimization framework that combines model predictive control (MPC) with a tailored whole body planner and controller (WBPC). At the high level, we employ a matrix exponential (ME)-based discretization of the MPC, ensuring numerical stability across a wide range of step sizes (5 to 100 ms), thereby reducing computational complexity without sacrificing control quality. At the low level, the WBPC is specifically designed to handle the unique kinematic constraints imposed by the missing ankle roll DOF, generating feasible joint trajectories for the swing foot phase. Meanwhile, a whole body control (WBC) strategy refines ground reaction forces and joint trajectories under full-body dynamics and contact wrench cone (CWC) constraints, guaranteeing physically realizable interactions with the environment. Finally, a position–velocity–torque (PVT) controller integrates feedforward torque commands with the desired trajectories for robust execution. Validated through walking experiments on the MuJoCo simulation platform using our custom-designed lightweight robot X02, this approach not only improves the numerical stability of MPC solutions, but also provides a scientifically sound and effective method for underactuated humanoid locomotion control. Full article
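The matrix exponential (ME)-based discretization mentioned in the abstract can be illustrated with the standard zero-order-hold construction, where the discrete pair (A_d, B_d) falls out of a single matrix exponential. The linear inverted pendulum numbers below are illustrative only and are not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

def zoh_discretize(A, B, dt):
    """Zero-order-hold discretization via one matrix exponential:
    expm([[A, B], [0, 0]] * dt) = [[A_d, B_d], [0, I]]."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A, B
    Phi = expm(M * dt)
    return Phi[:n, :n], Phi[:n, n:]          # A_d, B_d

# Linear inverted pendulum model of the CoM (illustrative numbers only).
g, z_c = 9.81, 0.6
A = np.array([[0.0, 1.0], [g / z_c, 0.0]])
B = np.array([[0.0], [1.0]])
for dt in (0.005, 0.05, 0.1):                # 5 ms to 100 ms, as in the abstract
    A_d, B_d = zoh_discretize(A, B, dt)
```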

22 pages, 1695 KiB  
Article
Exploring Key Considerations for Artificial Intelligence Robots in Home Healthcare Using the Unified Theory of Acceptance and Use of Technology and the Fuzzy Analytical Hierarchy Process Method
by Keng-Yu Lin, Kuei-Hu Chang, Yu-Wen Lin and Mei-Jin Wu
Systems 2025, 13(1), 25; https://doi.org/10.3390/systems13010025 - 2 Jan 2025
Viewed by 477
Abstract
Most countries face declining birth rates and an aging population, which makes the persistent healthcare labor shortage a pressing challenge. Introducing artificial intelligence (AI) robots into home healthcare could help address these issues. Exploring the primary considerations for integrating AI robots in home healthcare has become an urgent topic. However, previous studies have not systematically examined the factors influencing elderly individuals’ adoption of home healthcare AI robots, hindering an understanding of their acceptance and adoption. Furthermore, traditional methods overlook the relative importance of each consideration and cannot manage the ambiguity inherent in subjective human cognition, potentially leading to biased decision-making. To address these limitations, this study employs the unified theory of acceptance and use of technology (UTAUT) as a theoretical framework, integrating the modified Delphi method (MDM) and the fuzzy analytical hierarchy process (FAHP) to identify the key considerations. The research determined the order of importance of four evaluation criteria and fourteen evaluation sub-criteria, revealing that customization, companionship, and subjective norms are key factors that influence elderly individuals’ adoption of home healthcare AI robots. Full article
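For readers unfamiliar with the FAHP step, the sketch below shows one common variant (Buckley's geometric-mean method with triangular fuzzy numbers) that turns a fuzzy pairwise comparison matrix into crisp criterion weights. The matrix entries are invented for illustration and do not reproduce the study's judgements.

```python
import numpy as np

def fahp_weights(tfn_matrix):
    """Buckley's geometric-mean fuzzy AHP: tfn_matrix[i][j] = (l, m, u) triangular
    fuzzy judgement of criterion i over criterion j; returns crisp, normalized weights."""
    M = np.asarray(tfn_matrix, dtype=float)           # shape (n, n, 3)
    r = np.prod(M, axis=1) ** (1.0 / M.shape[0])      # fuzzy geometric mean per row
    total = r.sum(axis=0)                             # (sum_l, sum_m, sum_u)
    w_fuzzy = r / total[::-1]                         # divide (l, m, u) by (sum_u, sum_m, sum_l)
    w_crisp = w_fuzzy.mean(axis=1)                    # simple centroid defuzzification
    return w_crisp / w_crisp.sum()

# Three illustrative UTAUT-style criteria compared pairwise (numbers are made up).
tfn = [
    [(1, 1, 1),       (2, 3, 4),     (4, 5, 6)],
    [(1/4, 1/3, 1/2), (1, 1, 1),     (1, 2, 3)],
    [(1/6, 1/5, 1/4), (1/3, 1/2, 1), (1, 1, 1)],
]
print(fahp_weights(tfn))
```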

18 pages, 40537 KiB  
Article
Text-Guided Object Detection Accuracy Enhancement Method Based on Improved YOLO-World
by Qian Ding, Enzheng Zhang, Zhiguo Liu, Xinhai Yao and Gaofeng Pan
Electronics 2025, 14(1), 133; https://doi.org/10.3390/electronics14010133 - 31 Dec 2024
Viewed by 442
Abstract
In intelligent human–robot interaction scenarios, rapidly and accurately searching and recognizing specific targets is essential for enhancing robot operation and navigation capabilities, as well as achieving effective human–robot collaboration. This paper proposes an improved YOLO-World method with an integrated attention mechanism for text-guided object detection, aiming to boost visual detection accuracy. The method incorporates SPD-Conv modules into the YOLOV8 backbone to enhance low-resolution image processing and feature representation for small and medium-sized targets. Additionally, EMA is introduced to improve the visual feature representation guided by the text, and spatial attention focuses the model on image areas related to the text, enhancing its perception of specific target regions described in the text. The improved YOLO-World method with attention mechanism is detailed in the paper. Comparative experiments with four advanced object detection algorithms on COCO and a custom dataset show that the proposed method not only significantly improves object detection accuracy but also exhibits good generalization capabilities in varying scenes. This research offers a reference for high-precision object detection and provides technical solutions for applications requiring accurate object detection, such as human–robot interaction and artificial intelligence robots. Full article
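The SPD-Conv idea referenced in the abstract (replace strided downsampling with a space-to-depth rearrangement followed by a non-strided convolution) can be sketched as follows. This is a generic formulation of the block, not necessarily the exact module wired into the authors' YOLOV8 backbone.

```python
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Space-to-depth followed by a non-strided convolution: keeps fine detail that a
    strided conv would discard, which is the idea behind SPD-Conv for small targets."""
    def __init__(self, c_in, c_out, scale=2):
        super().__init__()
        self.spd = nn.PixelUnshuffle(scale)           # (B, C, H, W) -> (B, C*s*s, H/s, W/s)
        self.conv = nn.Sequential(
            nn.Conv2d(c_in * scale * scale, c_out, 3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(c_out),
            nn.SiLU(),
        )

    def forward(self, x):
        return self.conv(self.spd(x))

y = SPDConv(64, 128)(torch.randn(1, 64, 80, 80))      # -> (1, 128, 40, 40)
```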

24 pages, 7131 KiB  
Article
Study on the Customization of Robotic Arms for Spray-Coating Production Lines
by Chao-Chung Liu, Jun-Chi Liu and Chao-Shu Liu
Machines 2025, 13(1), 23; https://doi.org/10.3390/machines13010023 - 31 Dec 2024
Viewed by 386
Abstract
This paper focuses on the design and development of a customized 7-axis suspended robotic arm for automated spraying production lines. The design process considers factors such as workspace dimensions, workpiece sizes, and suspension positions. After analyzing degrees of freedom and workspace coordinates, 3D modeling ensures the arm can reach designated positions and orientations. Servo motors and reducers are selected based on load capacity and speed requirements. A suspended body method allows flexible use within the workspace. Kinematics analysis is conducted, followed by trajectory-tracking experiments using the manifold deformation control method. Results from simulation and real experiments show minimal error in tracking, demonstrating the effectiveness of the control method. Finally, the actual coating thickness sprayed by the 7-axis suspended robotic arm at four locations on the motorcycle shell was measured. The results show that the measured values at each location fall within the standard range provided by the manufacturer, demonstrating consistency in spraying across different regions. This consistency highlights the precision and effectiveness of the robotic arm’s control system in achieving uniform coating thickness, even on complex and curved surfaces. Therefore, the robotic arm has been successfully applied in a factory’s automated spraying production line. Full article
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
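A kinematics analysis of the kind described typically starts from a Denavit-Hartenberg model of the arm. The sketch below chains seven DH link transforms to obtain the nozzle pose in the suspension frame; the DH table values are placeholders, since the paper's actual link parameters are not given here.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_table):
    """Chain the seven link transforms to get the tool pose in the base (suspension) frame."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T                                  # 4x4 homogeneous pose of the spray nozzle

# Placeholder DH table (d, a, alpha) for a 7-joint arm; real values come from the design.
dh_table = [(0.3, 0.0, np.pi / 2)] * 7
pose = forward_kinematics(np.zeros(7), dh_table)
```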

15 pages, 5191 KiB  
Article
Determining the Proper Force Parameters for Robotized Pipetting Devices Used in Automated Polymerase Chain Reaction (PCR)
by Melania-Olivia Sandu, Valentin Ciupe, Corina-Mihaela Gruescu, Robert Kristof, Carmen Sticlaru and Elida-Gabriela Tulcan
Robotics 2025, 14(1), 2; https://doi.org/10.3390/robotics14010002 - 28 Dec 2024
Viewed by 424
Abstract
This study aims to provide a set of experimentally determined forces needed for gripping operations related to a robotically manipulated microliter manual pipette. The experiments are conducted within the scope of automated sample processing for polymerase chain reaction (PCR) analysis in small-sized to medium-sized laboratories where dedicated automated equipment is absent and where procedures are carried out manually. Automation is justified by the requirement for increased efficiency and to eliminate possible errors generated by lab technicians. The test system comprises an industrial robot; a dedicated custom gripper assembly necessary for the pipette; pipetting tips; and mechanical holders for tubes with chemical substances and genetic material. The selected approach is to measure forces using the robot’s built-in force–torque sensor while controlling and limiting the pipette’s gripping force and the robot’s pushing force. Because the manipulation of different materials requires the attachment and discarding of tips to and from the pipette, the operator’s perceived tip release force is also considered. Full article
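A force-guarded tip-attachment routine of the sort the study characterizes might look like the sketch below, where the robot presses down in small steps until the axial reading from the built-in force-torque sensor reaches a limit. The threshold value and the `robot.read_ft()` / `robot.move_z_relative()` helpers are hypothetical, standing in for the vendor API and the experimentally determined forces.

```python
import time

FT_PUSH_LIMIT_N = 30.0   # illustrative threshold; the paper determines such values experimentally

def attach_tip(robot, step_mm=0.2, timeout_s=5.0):
    """Press the pipette onto a tip in small steps until the measured axial force
    reaches a limit. `robot.read_ft()` and `robot.move_z_relative()` are hypothetical
    stand-ins for the robot vendor's API."""
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        fz = robot.read_ft()[2]               # axial force component (N)
        if abs(fz) >= FT_PUSH_LIMIT_N:
            return True                       # tip seated with the target push force
        robot.move_z_relative(-step_mm)       # keep pressing down gently
    return False                              # report failure instead of overloading
```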

22 pages, 13566 KiB  
Article
Exploring Architectural Units Through Robotic 3D Concrete Printing of Space-Filling Geometries
by Meryem N. Yabanigül and Derya Gulec Ozer
Buildings 2025, 15(1), 60; https://doi.org/10.3390/buildings15010060 - 27 Dec 2024
Viewed by 489
Abstract
The integration of 3D concrete printing (3DCP) into architectural design and production offers a solution to challenges in the construction industry. This technology presents benefits such as mass customization, waste reduction, and support for complex designs. However, its adoption in construction faces various limitations, including technical, logistical, and legal barriers. This study provides insights relevant to architecture, engineering, and construction practices, guiding future developments in the field. The methodology involves fabricating closed architectural units using 3DCP, emphasizing space-filling geometries and ensuring structural strength. Across three production trials, iterative improvements were made, revealing challenges and insights into design optimization and fabrication techniques. Prioritizing controlled filling of the unit’s internal volume ensures portability and ease of assembly. Leveraging 3D robotic concrete printing technology enables precise fabrication of closed units with controlled voids, enhancing speed and accuracy in production. Experimentation with varying unit sizes and internal support mechanisms, such as sand infill and central supports, enhances performance and viability, addressing geometric capabilities and fabrication efficiency. Among these strategies, sand filling has emerged as an effective solution for internal support as it reduces unit weight, simplifies fabrication, and maintains structural integrity. This approach highlights the potential of lightweight and adaptable modular constructions in the use of 3DCP technologies for architectural applications. Full article
(This article belongs to the Special Issue Robotics, Automation and Digitization in Construction)
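To make the idea of controlled internal filling more tangible, here is a toy generator for one printed layer: an outer perimeter plus a sparse zigzag infill whose spacing sets how much of the internal volume stays void (later filled with sand in the trials). All dimensions are illustrative and not the study's parameters.

```python
def layer_path(width_mm=400.0, depth_mm=400.0, bead_mm=30.0, spacing_mm=120.0):
    """Return an ordered list of (x, y) waypoints: closed outer perimeter first, then a
    zigzag infill whose spacing controls the size of the remaining internal void."""
    w, d = width_mm, depth_mm
    path = [(0.0, 0.0), (w, 0.0), (w, d), (0.0, d), (0.0, 0.0)]   # closed perimeter
    x, go_up = spacing_mm, True
    while x < w - bead_mm:
        y_a, y_b = (bead_mm, d - bead_mm) if go_up else (d - bead_mm, bead_mm)
        path += [(x, y_a), (x, y_b)]                               # one infill stroke
        x += spacing_mm
        go_up = not go_up
    return path

waypoints = layer_path()   # feed to the robot's layer-by-layer motion program
```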

21 pages, 100293 KiB  
Article
An Improved Method for Enhancing the Accuracy and Speed of Dynamic Object Detection Based on YOLOv8s
by Zhiguo Liu, Enzheng Zhang, Qian Ding, Weijie Liao and Zixiang Wu
Sensors 2025, 25(1), 85; https://doi.org/10.3390/s25010085 - 26 Dec 2024
Viewed by 454
Abstract
Accurate detection and tracking of dynamic objects are critical for enabling skill demonstration and effective skill generalization in robotic skill learning and application scenarios. To further improve the detection accuracy and tracking speed of the YOLOv8s model in dynamic object tracking tasks, this paper proposes a method to enhance both detection precision and speed based on YOLOv8s architecture. Specifically, a Focused Linear Attention mechanism is introduced into the YOLOv8s backbone network to enhance dynamic object detection accuracy, while the Ghost module is incorporated into the neck network to improve the model’s tracking speed for dynamic objects. By mapping the motion of dynamic objects across frames, the proposed method achieves accurate trajectory tracking. This paper provides a detailed explanation of the improvements made to YOLOv8s for enhancing detection accuracy and speed in dynamic object detection tasks. Comparative experiments on the MS-COCO dataset and the custom dataset demonstrate that the proposed method has a clear advantage in terms of detection accuracy and processing speed. The dynamic object detection experiments further validate the effectiveness of the proposed method for detecting and tracking objects at different speeds. The proposed method offers a valuable reference for the field of dynamic object detection, providing actionable insights for applications such as robotic skill learning, generalization, and artificial intelligence-driven robotics. Full article
(This article belongs to the Section Sensors and Robotics)
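The cross-frame motion mapping mentioned in the abstract can be approximated by a simple nearest-centroid association between consecutive frames, as sketched below. This greedy matcher is a minimal stand-in; the paper's own association logic may differ.

```python
import numpy as np

def associate(prev_tracks, detections, max_dist=50.0):
    """Greedy nearest-centroid matching between the previous frame's tracks and the
    current detections; unmatched detections start new tracks."""
    tracks = dict(prev_tracks)                     # track_id -> (x, y) centroid
    next_id = max(tracks, default=-1) + 1
    assigned = {}
    for cx, cy in detections:                      # detection centroids from YOLOv8s
        if tracks:
            tid, (px, py) = min(tracks.items(),
                                key=lambda kv: np.hypot(kv[1][0] - cx, kv[1][1] - cy))
            if np.hypot(px - cx, py - cy) <= max_dist:
                assigned[tid] = (cx, cy)
                del tracks[tid]
                continue
        assigned[next_id] = (cx, cy)               # unmatched detection starts a new track
        next_id += 1
    return assigned

tracks = associate({0: (100.0, 120.0)}, [(104.0, 118.0), (400.0, 300.0)])
```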

11 pages, 5532 KiB  
Article
Reinforcement Learning-Based Control for Collaborative Robotic Brain Retraction
by Ibai Inziarte-Hidalgo, Estela Nieto, Diego Roldan, Gorka Sorrosal, Jesus Perez-Llano and Ekaitz Zulueta
Sensors 2024, 24(24), 8150; https://doi.org/10.3390/s24248150 - 20 Dec 2024
Viewed by 304
Abstract
In recent years, the application of AI has expanded rapidly across various fields. However, it has faced challenges in establishing a foothold in medicine, particularly in invasive medical procedures. Medical algorithms and devices must meet strict regulatory standards before they can be approved for use on humans. Additionally, medical robots are often custom-built, leading to high costs. This paper introduces a cost-effective robot designed to perform brain retraction procedures. The robot is trained using reinforcement learning, specifically the Deep Deterministic Policy Gradient (DDPG) algorithm, with a brain contact model, offering a more affordable solution for such delicate tasks. Full article
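For context on the training loop, here is a minimal DDPG update step in PyTorch: a critic regression against a target network, a deterministic policy gradient step for the actor, and Polyak averaging of the targets. State and action dimensions, network sizes, and the brain-contact environment itself are placeholders rather than the authors' setup.

```python
import torch
import torch.nn as nn

obs_dim, act_dim, gamma, tau = 6, 3, 0.99, 0.005   # illustrative dimensions and hyperparameters

def mlp(inp, out):
    return nn.Sequential(nn.Linear(inp, 128), nn.ReLU(), nn.Linear(128, out))

actor, actor_t = mlp(obs_dim, act_dim), mlp(obs_dim, act_dim)
critic, critic_t = mlp(obs_dim + act_dim, 1), mlp(obs_dim + act_dim, 1)
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(batch):
    s, a, r, s2, done = batch                      # replay-buffer tensors; r, done shaped (N, 1)
    with torch.no_grad():                          # bootstrapped target from the target networks
        q_target = r + gamma * (1 - done) * critic_t(torch.cat([s2, actor_t(s2)], dim=1))
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), q_target)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()   # deterministic policy gradient
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    for net, net_t in ((actor, actor_t), (critic, critic_t)):      # Polyak averaging of targets
        for p, p_t in zip(net.parameters(), net_t.parameters()):
            p_t.data.mul_(1 - tau).add_(tau * p.data)
```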

21 pages, 10315 KiB  
Article
G-RCenterNet: Reinforced CenterNet for Robotic Arm Grasp Detection
by Jimeng Bai and Guohua Cao
Sensors 2024, 24(24), 8141; https://doi.org/10.3390/s24248141 - 20 Dec 2024
Viewed by 371
Abstract
In industrial applications, robotic arm grasp detection tasks frequently suffer from inadequate accuracy and success rates, which result in reduced operational efficiency. Although existing methods have achieved some success, limitations remain in terms of detection accuracy, real-time performance, and generalization ability. To address these challenges, this paper proposes an enhanced grasp detection model, G-RCenterNet, based on the CenterNet framework. First, a channel and spatial attention mechanism is introduced to improve the network’s capability to extract target features, significantly enhancing grasp detection performance in complex backgrounds. Second, an efficient attention module search strategy is proposed to replace traditional fully connected layer structures, which not only increases detection accuracy but also reduces computational overhead. Additionally, the GSConv module is incorporated during the prediction decoding phase to accelerate inference speed while maintaining high accuracy, further improving real-time performance. Finally, ResNet50 is selected as the backbone network, and a custom loss function is designed specifically for grasp detection tasks, which significantly enhances the model’s ability to predict feasible grasp boxes. The proposed G-RCenterNet algorithm is embedded into a robotic grasping system, where a structured light depth camera captures target images, and the grasp detection network predicts the optimal grasp box. Experimental results based on the Cornell Grasp Dataset and real-world scenarios demonstrate that the G-RCenterNet model performs robustly in grasp detection tasks, achieving accurate and efficient target grasp detection suitable for practical applications. Full article
(This article belongs to the Section Intelligent Sensors)
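The custom grasp loss is described only at a high level, so the sketch below shows the kind of oriented-box regression such a loss typically combines: a smooth L1 term on position and size plus a periodic angle term that treats theta and theta + pi as the same parallel-jaw grasp. The weighting and exact form are assumptions, not the paper's formulation.

```python
import torch
import torch.nn as nn

def grasp_regression_loss(pred, target, angle_weight=2.0):
    """pred, target: (N, 5) tensors of grasp boxes (x, y, w, h, theta). The angle term
    is wrapped so that theta and theta + pi describe the same parallel-jaw grasp."""
    loc_loss = nn.functional.smooth_l1_loss(pred[:, :4], target[:, :4])
    d_theta = pred[:, 4] - target[:, 4]
    angle_loss = (1.0 - torch.cos(2.0 * d_theta)).mean()   # periodic, minima at 0 and pi
    return loc_loss + angle_weight * angle_loss

loss = grasp_regression_loss(torch.randn(8, 5), torch.randn(8, 5))
```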

22 pages, 2251 KiB  
Article
Humanoid Robots in Tourism and Hospitality—Exploring Managerial, Ethical, and Societal Challenges
by Ida Skubis, Agata Mesjasz-Lech and Joanna Nowakowska-Grunt
Appl. Sci. 2024, 14(24), 11823; https://doi.org/10.3390/app142411823 - 18 Dec 2024
Viewed by 766
Abstract
The paper evaluates the benefits and challenges of employing humanoid robots in tourism and hospitality, examining their roles, decision-making processes, human-centric approaches, and oversight mechanisms. Data are collected from a variety of sources, including academic journals, websites of the companies where the robots operate, case studies, and news articles. Specific attention is given to concrete examples of humanoid robots deployed in the tourism and hospitality sector, such as Connie, Spencer, and Henn-na Hotel’s robots. These examples highlight the potential of robots to assume roles traditionally occupied by humans. The presence of humanoid robots also influences cultural practices and social interactions within the hospitality context. Humanoid robots also have the potential to improve equity and accessibility in the tourism and hospitality industry. The interaction between humans and humanoid robots can have psychological and emotional effects on both guests and employees. Finally, the use of humanoid robots intersects with broader sustainability, operational efficiency, and customer satisfaction concerns across various sectors within the tourism and hospitality industry. Introducing humanoid robots represents a challenge in innovation that holds promise for revolutionizing service delivery and guest experiences. Full article
(This article belongs to the Special Issue AI from Industry 4.0 to Industry 5.0: Engineering for Social Change)

19 pages, 1008 KiB  
Article
EEG-Based Mobile Robot Control Using Deep Learning and ROS Integration
by Bianca Ghinoiu, Victor Vlădăreanu, Ana-Maria Travediu, Luige Vlădăreanu, Abigail Pop, Yongfei Feng and Andreea Zamfirescu
Technologies 2024, 12(12), 261; https://doi.org/10.3390/technologies12120261 - 14 Dec 2024
Viewed by 1169
Abstract
Efficient BCIs (Brain-Computer Interfaces) harnessing EEG (Electroencephalography) have shown potential in controlling mobile robots, also presenting new possibilities for assistive technologies. This study explores the integration of advanced deep learning models—ASTGCN, EEGNetv4, and a combined CNN-LSTM architecture—with ROS (Robot Operating System) to control a two-wheeled mobile robot. The models were trained using a published EEG dataset, which includes signals from subjects performing thought-based tasks. Each model was evaluated based on its accuracy, F1-score, and latency. The CNN-LSTM architecture exhibited the best performance under the cross-subject strategy, with an accuracy of 88.5%, demonstrating significant potential for real-time applications. Integration with ROS was facilitated through a custom middleware, enabling seamless translation of neural commands into robot movements. The findings indicate that the CNN-LSTM model not only outperforms existing EEG-based systems in terms of accuracy but also underscores the practical feasibility of implementing such systems in real-world scenarios. Considering its efficacy, CNN-LSTM shows great potential for assistive technology in the future. Full article
(This article belongs to the Special Issue Advanced Autonomous Systems and Artificial Intelligence Stage)
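The ROS side of such a pipeline can be as simple as publishing velocity commands derived from the decoded EEG class, as in the sketch below. The class labels, speeds, and topic choice are assumptions; the paper relies on its own custom middleware rather than this mapping.

```python
import rospy
from geometry_msgs.msg import Twist

# Illustrative bridge from a decoded EEG class to velocity commands on /cmd_vel.
CLASS_TO_VEL = {
    "forward": (0.2, 0.0),    # (linear m/s, angular rad/s)
    "left":    (0.0, 0.5),
    "right":   (0.0, -0.5),
    "stop":    (0.0, 0.0),
}

def publish_command(pub, label):
    lin, ang = CLASS_TO_VEL.get(label, (0.0, 0.0))   # unknown labels stop the robot
    msg = Twist()
    msg.linear.x, msg.angular.z = lin, ang
    pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("eeg_teleop_bridge")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
    rate = rospy.Rate(10)                            # 10 Hz control loop
    while not rospy.is_shutdown():
        label = "stop"                               # replace with CNN-LSTM inference output
        publish_command(pub, label)
        rate.sleep()
```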

23 pages, 6025 KiB  
Article
Integrating Vision and Olfaction via Multi-Modal LLM for Robotic Odor Source Localization
by Sunzid Hassan, Lingxiao Wang and Khan Raqib Mahmud
Sensors 2024, 24(24), 7875; https://doi.org/10.3390/s24247875 - 10 Dec 2024
Viewed by 661
Abstract
Odor source localization (OSL) technology allows autonomous agents like mobile robots to localize a target odor source in an unknown environment. This is achieved by an OSL navigation algorithm that processes an agent’s sensor readings to calculate action commands to guide the robot to locate the odor source. Compared to traditional ‘olfaction-only’ OSL algorithms, our proposed OSL algorithm integrates vision and olfaction sensor modalities to localize odor sources even if olfaction sensing is disrupted by non-unidirectional airflow or vision sensing is impaired by environmental complexities. The algorithm leverages the zero-shot multi-modal reasoning capabilities of large language models (LLMs), negating the requirement of manual knowledge encoding or custom-trained supervised learning models. A key feature of the proposed algorithm is the ‘High-level Reasoning’ module, which encodes the olfaction and vision sensor data into a multi-modal prompt and instructs the LLM to employ a hierarchical reasoning process to select an appropriate high-level navigation behavior. Subsequently, the ‘Low-level Action’ module translates the selected high-level navigation behavior into low-level action commands that can be executed by the mobile robot. To validate our algorithm, we implemented it on a mobile robot in a real-world environment with non-unidirectional airflow environments and obstacles to mimic a complex, practical search environment. We compared the performance of our proposed algorithm to single-sensory-modality-based ‘olfaction-only’ and ‘vision-only’ navigation algorithms, and a supervised learning-based ‘vision and olfaction fusion’ (Fusion) navigation algorithm. The experimental results show that the proposed LLM-based algorithm outperformed the other algorithms in terms of success rates and average search times in both unidirectional and non-unidirectional airflow environments. Full article
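The two-level structure the abstract outlines (a 'High-level Reasoning' prompt and a 'Low-level Action' mapping) might be organized as below. The behavior names, prompt wording, and the `query_llm` callable are hypothetical stand-ins for whatever multi-modal LLM client and action set the authors used.

```python
BEHAVIORS = {
    "surge_upwind": (0.3, 0.0),              # (linear m/s, angular rad/s)
    "cast_crosswind": (0.1, 0.6),
    "approach_visual_target": (0.2, 0.0),
    "stop_declare_source": (0.0, 0.0),
}

def build_prompt(odor_ppm, wind_dir_deg, scene_caption):
    """Fold olfaction and vision readings into one text prompt for the LLM."""
    return (
        "You are guiding a mobile robot to an odor source.\n"
        f"Chemical reading: {odor_ppm:.1f} ppm; wind direction: {wind_dir_deg:.0f} deg.\n"
        f"Camera view: {scene_caption}\n"
        f"Choose exactly one behavior from {sorted(BEHAVIORS)} and answer with its name."
    )

def decide_action(odor_ppm, wind_dir_deg, scene_caption, query_llm):
    """High-level reasoning (LLM choice) followed by the low-level action lookup."""
    reply = query_llm(build_prompt(odor_ppm, wind_dir_deg, scene_caption)).strip()
    behavior = reply if reply in BEHAVIORS else "cast_crosswind"   # safe fallback
    return behavior, BEHAVIORS[behavior]

# Example with a stubbed LLM client:
behavior, (v, w) = decide_action(4.2, 135, "a humidifier on a table ahead",
                                 query_llm=lambda p: "approach_visual_target")
```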

18 pages, 6956 KiB  
Article
Multifunctional Sensor Array for User Interaction Based on Dielectric Elastomers with Sputtered Metal Electrodes
by Sebastian Gratz-Kelly, Mario Cerino, Daniel Philippi, Dirk Göttel, Sophie Nalbach, Jonas Hubertus, Günter Schultes, John Heppe and Paul Motzki
Materials 2024, 17(23), 5993; https://doi.org/10.3390/ma17235993 - 6 Dec 2024
Viewed by 566
Abstract
The integration of textile-based sensing and actuation elements has become increasingly important across various fields, driven by the growing demand for smart textiles in healthcare, sports, and wearable electronics. This paper presents the development of a small, smart dielectric elastomer (DE)-based sensing array designed for user control input in applications such as human–machine interaction, virtual object manipulation, and robotics. DE-based sensors are ideal for textile integration due to their flexibility, lightweight nature, and ability to seamlessly conform to surfaces without compromising comfort. By embedding these sensors into textiles, continuous user interaction can be achieved, providing a more intuitive and unobtrusive user experience. The design of this DE array draws inspiration from a flexible and wearable version of a touchpad, which can be incorporated into clothing or accessories. Integrated advanced machine learning algorithms enhance the sensing system by improving resolution and enabling pattern recognition, reaching a prediction performance of at least 80%. Additionally, the array’s electrodes are fabricated using a novel sputtering technique for low resistance as well as high geometric flexibility and size reducibility. A new crimping method is also introduced to ensure a reliable connection between the sensing array and the custom electronics. The advantages of the presented design, data evaluation, and manufacturing process comprise a reduced structure size, the flexible adaptability of the system to the respective application, reliable pattern recognition, reduced sensor and line resistance, the adaptability of mechanical force sensitivity, and the integration of electronics. This research highlights the potential for innovative, highly integrated textile-based sensors in various practical applications. Full article
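As a rough analogue of the pattern-recognition step, the sketch below trains an off-the-shelf classifier on flattened frames from a small sensing array. The array size, classes, and synthetic data are assumptions; the reported prediction performance comes from the authors' real measurements and models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative pattern recognition on a 3x3 sensing array: each sample is a flattened
# frame of nine capacitance changes. Synthetic data only, for demonstration.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 9))                  # 600 frames x 9 taxels
y = rng.integers(0, 4, size=600)               # four touch patterns (e.g., swipe directions)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```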
