
Search Results (63)

Search Parameters:
Keywords = robot localisation

24 pages, 8355 KiB  
Article
Multi-Neural Network Localisation System with Regression and Classification on Football Autonomous Robots
by Carolina Coelho Lopes, António Ribeiro, Tiago Ribeiro, Gil Lopes and A. Fernando Ribeiro
AI 2025, 6(2), 27; https://doi.org/10.3390/ai6020027 - 5 Feb 2025
Abstract
In environments like the RoboCup Middle Size League (MSL), precise and rapid localisation of robots is crucial for effective autonomous interaction. This study addresses the limitations of conventional localisation approaches—often based on single-camera systems or sensors such as LiDAR (Light Detection and Ranging) and infrared—by developing a robust Artificial Intelligence (AI)-based multi-camera solution. This method uses multiple neural networks, breaking the problem down while taking advantage of both classification and regression. The solution includes a classification neural network to detect field markers, such as line intersections, and two regression neural networks: one for calculating the position of the markers, and another for determining the robot’s position in real time. It combines the strengths of both approaches while maintaining the desired performance, accuracy, and robustness, simplifying the training process and adapting to different scenarios. Designed specifically to meet the high-speed and precision requirements of MSL robotics, the system employs data augmentation to ensure resilience against variations in lighting, angle, and position. The results show that this optimised approach improves spatial awareness and accuracy, promising advancements in robot football. Beyond MSL applications, the method has potential for broader real-world uses that require dependable, real-time localisation in dynamic settings. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Image Processing and Computer Vision)

42 pages, 40649 KiB  
Article
A Multi-Drone System Proof of Concept for Forestry Applications
by André G. Araújo, Carlos A. P. Pizzino, Micael S. Couceiro and Rui P. Rocha
Drones 2025, 9(2), 80; https://doi.org/10.3390/drones9020080 - 21 Jan 2025
Viewed by 717
Abstract
This study presents a multi-drone proof of concept for efficient forest mapping and autonomous operation, framed within the context of the OPENSWARM EU Project. The approach leverages state-of-the-art open-source simultaneous localisation and mapping (SLAM) frameworks, like LiDAR (Light Detection And Ranging) Inertial Odometry via Smoothing and Mapping (LIO-SAM), and Distributed Collaborative LiDAR SLAM Framework for a Robotic Swarm (DCL-SLAM), seamlessly integrated within the MRS UAV System and Swarm Formation packages. This integration is achieved through a series of procedures compliant with Robot Operating System middleware (ROS), including an auto-tuning particle swarm optimisation method for enhanced flight control and stabilisation, which is crucial for autonomous operation in challenging environments. Field experiments conducted in a forest with multiple drones demonstrate the system’s ability to navigate complex terrains as a coordinated swarm, accurately and collaboratively mapping forest areas. Results highlight the potential of this proof of concept, contributing to the development of scalable autonomous solutions for forestry management. The findings emphasise the significance of integrating multiple open-source technologies to advance sustainable forestry practices using swarms of drones. Full article

36 pages, 22961 KiB  
Article
Enhanced STag Marker System: Materials and Methods for Flexible Robot Localisation
by James R. Heselden, Dimitris Paparas, Robert L. Stevenson and Gautham P. Das
Machines 2025, 13(1), 2; https://doi.org/10.3390/machines13010002 - 24 Dec 2024
Viewed by 419
Abstract
Accurate localisation is key for the autonomy of mobile robots. Fiducial localisation utilises the relative positions of markers physically deployed across an environment to determine a localisation estimate for a robot. Fiducial markers are strictly designed, with very limited flexibility in appearance. This often results in a “trade-off” between visual customisation, library size, and occlusion resilience. Many fiducial localisation approaches vary in their position estimation over time, leading to instability. The Stable Fiducial Marker System (STag) was designed to address this limitation through two-stage homography detection: its combined square and circle detection phases refine detection stability. In this work, we explore the utility of STag as a basis for a stable mobile robot localisation system. Key marker restrictions are addressed through three new chromatic STag marker types: the hue/greyscale STag marker set addresses constraints in customisability, the high-capacity STag marker set addresses limitations in library size, and the high-occlusion STag marker set improves resilience to occlusions. These are designed for compatibility with the STag detection system, requiring only preprocessing steps for enhanced detection. They are assessed against the existing STag markers, and each shows clear improvements. Further, we explore the viability of various materials for marker fabrication for use in outdoor and low-light conditions, including “active” materials that induce effects such as retro-reflectance and photo-luminescence. Detection rates are experimentally assessed across lighting conditions, with “active” markers assessed on the practicality of their effects. To encapsulate this work, we have developed a full end-to-end deployment for fiducial localisation under the STag system. It is shown to function for both on-board and off-board localisation, with deployment in practical robot trials. As part of this contribution, the associated software for marker set generation/detection, physical marker fabrication, and end-to-end localisation has been released as an open-source distribution. Full article
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
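Fiducial localisation of the kind described above reduces, at its core, to recovering the robot pose from detections of markers whose world positions are known. As a hedged illustration (not code from the paper; the marker coordinates, pose, and 2-D simplification are all invented), a rigid alignment via the Kabsch method recovers the pose from three marker correspondences:

```python
import numpy as np

# Sketch of the core fiducial-localisation step: given known 2-D marker
# positions and the robot-relative observations of those markers, solve
# for the robot pose by rigid alignment (Kabsch, no scale).
# Marker coordinates are made up for illustration; none come from STag.

def align_pose(markers_world, markers_robot):
    """Return (R, t) such that world ≈ R @ robot + t."""
    mu_w = markers_world.mean(axis=0)
    mu_r = markers_robot.mean(axis=0)
    H = (markers_robot - mu_r).T @ (markers_world - mu_w)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_w - R @ mu_r               # t is the robot position in the world
    return R, t

world = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])  # known marker map
theta = np.pi / 2                                       # true robot heading
Rt = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
robot_obs = (world - np.array([1.0, 2.0])) @ Rt         # simulated detections
R, t = align_pose(world, robot_obs)
print(np.round(t, 2))  # → [1. 2.]  (recovered robot position)
```

The same least-squares alignment generalises directly to 3-D marker corners, which is how marker-based localisation pipelines typically obtain a full 6-DoF pose.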

22 pages, 25637 KiB  
Article
Low-Cost Real-Time Localisation for Agricultural Robots in Unstructured Farm Environments
by Chongxiao Liu and Bao Kha Nguyen
Machines 2024, 12(9), 612; https://doi.org/10.3390/machines12090612 - 2 Sep 2024
Cited by 2 | Viewed by 1442
Abstract
Agricultural robots have demonstrated significant potential in enhancing farm operational efficiency and reducing manual labour. However, unstructured and complex farm environments present challenges to the precise localisation and navigation of robots in real time. Furthermore, the high costs of navigation systems in agricultural robots hinder their widespread adoption in cost-sensitive agricultural sectors. This study compared two localisation methods that use the Error State Kalman Filter (ESKF) to integrate data from wheel odometry, a low-cost inertial measurement unit (IMU), a low-cost real-time kinematic global navigation satellite system (RTK-GNSS) and the LiDAR-Inertial Odometry via Smoothing and Mapping (LIO-SAM) algorithm using a low-cost IMU and RoboSense 16-channel LiDAR sensor. These two methods were tested on unstructured farm environments for the first time in this study. Experiment results show that the ESKF sensor fusion method without a LiDAR sensor could save 36% of the cost compared to the method that used the LIO-SAM algorithm while maintaining high accuracy for farming applications. Full article
(This article belongs to the Special Issue New Trends in Robotics, Automation and Mechatronics)
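The ESKF fusion the authors compare can be illustrated in miniature. The following one-dimensional sketch (the noise values and the biased odometry are invented for illustration, not taken from the paper) shows the predict/correct cycle in which odometry propagates the nominal state and GNSS fixes correct the accumulated error:

```python
import numpy as np

# Minimal 1-D sketch of ESKF-style fusion: wheel odometry drives the
# nominal state; GNSS position fixes estimate and fold back the error.
# All noise values are illustrative assumptions, not from the paper.

def predict(x_nom, P, odom_delta, Q):
    """Propagate the nominal state with odometry; grow error covariance."""
    return x_nom + odom_delta, P + Q

def correct(x_nom, P, z_gnss, R):
    """Fuse a GNSS position fix and inject the error estimate."""
    K = P / (P + R)            # Kalman gain (scalar case)
    dx = K * (z_gnss - x_nom)  # estimated error state
    return x_nom + dx, (1.0 - K) * P

x, P = 0.0, 1.0
for step in range(10):
    x, P = predict(x, P, odom_delta=1.05, Q=0.01)    # biased odometry
    x, P = correct(x, P, z_gnss=1.0 * (step + 1), R=0.25)
print(round(x, 2))  # drift is pulled back toward the GNSS track
```

In the paper's full system the state is multidimensional (pose, velocity, IMU biases), but the structure of the cycle is the same.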

29 pages, 7421 KiB  
Article
Continuous Online Semantic Implicit Representation for Autonomous Ground Robot Navigation in Unstructured Environments
by Quentin Serdel, Julien Marzat and Julien Moras
Robotics 2024, 13(7), 108; https://doi.org/10.3390/robotics13070108 - 18 Jul 2024
Viewed by 1358
Abstract
While mobile ground robots now have the physical capacity to travel in unstructured, challenging environments such as extraterrestrial surfaces or devastated terrains, their safe and efficient autonomous navigation has yet to be improved before entrusting them with complex unsupervised missions in such conditions. Recent advances in machine learning applied to semantic scene understanding and environment representation, coupled with modern embedded computational means and sensors, hold promising potential in this matter. This paper therefore introduces the combination of semantic understanding, continuous implicit environment representation and smooth informed path-planning in a new method named COSMAu-Nav. It is specifically dedicated to autonomous ground robot navigation in unstructured environments and adaptable for embedded, real-time usage without requiring any form of telecommunication. Data clustering and Gaussian processes are employed to perform online regression of the environment topography, occupancy and terrain traversability from 3D semantic point clouds while providing uncertainty modelling. The continuous and differentiable properties of Gaussian processes allow gradient-based optimisation to be used for smooth local path-planning with respect to the terrain properties. The proposed pipeline has been evaluated and compared with two reference 3D semantic mapping methods in terms of quality of representation under localisation and semantic segmentation uncertainty using a Gazebo simulation derived from the 3DRMS dataset. Its computational requirements have been evaluated using the Rellis-3D real-world dataset. It has been implemented on a real ground robot and successfully employed for its autonomous navigation in a previously unknown outdoor environment. Full article
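The Gaussian-process regression of terrain properties described above can be sketched in one dimension. This toy example (kernel choice, hyperparameters, and data are illustrative assumptions, not taken from COSMAu-Nav) shows how the posterior mean interpolates sampled heights while the posterior uncertainty grows away from observations:

```python
import numpy as np

# Minimal sketch of Gaussian-process regression of terrain height from
# sparse samples, with the uncertainty estimate the paper exploits.
# Kernel and noise hyperparameters are illustrative assumptions.

def rbf_kernel(a, b, length=1.0, variance=1.0):
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_query, noise=1e-2):
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_query, x_train)
    K_ss = rbf_kernel(x_query, x_query)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s @ alpha                                   # posterior mean
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))      # posterior std
    return mean, std

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)                 # sampled terrain heights (synthetic)
xq = np.array([1.5, 10.0])    # near the data vs. far from it
mean, std = gp_predict(x, y, xq)
print(std[0] < std[1])  # True: uncertainty grows away from observed terrain
```

The differentiability mentioned in the abstract comes from the fact that this posterior mean is a smooth closed-form function of the query point, so its gradient is available to the path planner.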

12 pages, 3428 KiB  
Article
Protocol for the RoboSling Trial: A Randomised Study Assessing Urinary Continence Following Robotic Radical Prostatectomy with or without an Intraoperative Retropubic Vascularised Fascial Sling (RoboSling)
by Amandeep Virk, Patrick-Julien Treacy, Wenjie Zhong, Stuart Robert Jackson, Nariman Ahmadi, Nicola Nadia Jeffery, Lewis Chan, Paul Sved, Arthur Vasilaras, Ruban Thanigasalam and Scott Leslie
Soc. Int. Urol. J. 2024, 5(2), 148-159; https://doi.org/10.3390/siuj5020024 - 17 Apr 2024
Viewed by 1040
Abstract
Objectives: To determine if early (three months) and late (one year) post-operative continence is improved by performing a novel retropubic vascularised fascial sling (RoboSling) procedure concurrently with robot-assisted radical prostatectomy in men undergoing treatment for localised prostate cancer. To additionally assess surgical outcomes, quality of life and health economic outcomes in patients undergoing the novel RoboSling technique. Methods: This study aims to recruit 120 consecutive patients with clinically localised prostate cancer who have chosen to undergo robot-assisted radical prostatectomy in the Sydney Local Health District, Australia. A prospective assessment of early and late post-operative continence following robot-assisted radical prostatectomy with and without a RoboSling procedure will be performed in a two-group, 1:1, parallel, randomised controlled trial. Four surgeons will take part in the study, all of whom are beyond their learning curve. Patients will be blinded as to whether the RoboSling procedure is performed for them, as will the research officers collecting the post-operative data on urinary function. Trial Registration: ACTRN12618002058257. Results: The trial is currently underway. Conclusions: The RoboSling technique is unique in that the sling is vascularised and has a broad surface area compared to previously described slings in the literature. If a clinically significant improvement in post-operative continence is established with the RoboSling, then we can in turn expect improvements in quality of life for men undergoing this technique with radical prostatectomy. Full article

18 pages, 16454 KiB  
Article
Robotic Disassembly Platform for Disassembly of a Plug-In Hybrid Electric Vehicle Battery: A Case Study
by Mo Qu, D. T. Pham, Faraj Altumi, Adeyemisi Gbadebo, Natalia Hartono, Kaiwen Jiang, Mairi Kerin, Feiying Lan, Marcel Micheli, Shuihao Xu and Yongjing Wang
Automation 2024, 5(2), 50-67; https://doi.org/10.3390/automation5020005 - 1 Apr 2024
Cited by 4 | Viewed by 3379
Abstract
Efficient processing of end-of-life lithium-ion batteries in electric vehicles is an important and pressing challenge in a circular economy. Regardless of whether the processing strategy is recycling, repurposing, or remanufacturing, the first processing step will usually involve disassembly. As battery disassembly is a dangerous task, efforts have been made to robotise it. In this paper, a robotic disassembly platform using four industrial robots is proposed to automate the non-destructive disassembly of a plug-in hybrid electric vehicle battery pack into modules. This work was conducted as a case study to demonstrate the concept of the autonomous disassembly of an electric vehicle battery pack. A two-step object localisation method based on visual information is used to overcome positional uncertainties from different sources and is validated by experiments. Also, the unscrewing system is highlighted, and its functions, such as handling untightened fasteners, loosening jammed screws, and changing the nutrunner adapters with square drives, are detailed. Furthermore, the time required for each operation is compared with that taken by human operators. Finally, the limitations of the platform are reported, and future research directions are suggested. Full article
(This article belongs to the Special Issue Smart Remanufacturing)

19 pages, 5361 KiB  
Article
Bimanual Telemanipulation Framework Utilising Multiple Optically Localised Cooperative Mobile Manipulators
by Christopher Peers and Chengxu Zhou
Robotics 2024, 13(4), 59; https://doi.org/10.3390/robotics13040059 - 1 Apr 2024
Cited by 2 | Viewed by 2107
Abstract
Bimanual manipulation is valuable for its potential to provide robots in the field with increased capabilities when interacting with environments, as well as broadening the range of possible manipulation actions. However, for a robot to perform bimanual manipulation, the system must have a capable control framework to localise and generate trajectories and commands for each sub-system, allowing for successful cooperative manipulation as well as sufficient control over each individual sub-system. The proposed method uses multiple mobile manipulator platforms coupled through an optical tracking localisation method to act as a single bimanual manipulation system. The framework’s performance relies on the accuracy of the localisation. As commands are primarily high-level, it is possible to use any number and combination of mobile manipulators and fixed manipulators within this framework. We demonstrate the functionality of this system through tests in a PyBullet simulation environment using two different omnidirectional mobile manipulators, as well as a real-life experiment using two quadrupedal manipulators. Full article
(This article belongs to the Special Issue Legged Robots into the Real World, 2nd Edition)

30 pages, 13428 KiB  
Article
SEG-SLAM: Dynamic Indoor RGB-D Visual SLAM Integrating Geometric and YOLOv5-Based Semantic Information
by Peichao Cong, Jiaxing Li, Junjie Liu, Yixuan Xiao and Xin Zhang
Sensors 2024, 24(7), 2102; https://doi.org/10.3390/s24072102 - 25 Mar 2024
Cited by 9 | Viewed by 2193
Abstract
Simultaneous localisation and mapping (SLAM) is crucial in mobile robotics. Most visual SLAM systems assume that the environment is static. However, in real life, there are many dynamic objects, which affect the accuracy and robustness of these systems. To improve the performance of visual SLAM systems, this study proposes a dynamic visual SLAM (SEG-SLAM) system based on the Oriented FAST and Rotated BRIEF (ORB)-SLAM3 framework and the You Only Look Once (YOLO)v5 deep-learning method. First, based on the ORB-SLAM3 framework, the YOLOv5 deep-learning method is used to construct a fusion module for target detection and semantic segmentation. This module can effectively identify and extract prior information for obviously and potentially dynamic objects. Second, differentiated dynamic feature point rejection strategies are developed for different dynamic objects using the prior information, depth information, and epipolar geometry method. Thus, the localisation and mapping accuracy of the SEG-SLAM system is improved. Finally, the rejection results are fused with the depth information, and a static dense 3D map without dynamic objects is constructed using the Point Cloud Library. The SEG-SLAM system is evaluated using public TUM datasets and real-world scenarios. The proposed method is more accurate and robust than current dynamic visual SLAM algorithms. Full article
(This article belongs to the Special Issue Advanced Sensors Technologies Applied in Mobile Robotics: 2nd Edition)
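The epipolar-geometry test used to reject dynamic feature points can be illustrated with a toy example. The fundamental matrix below corresponds to a pure sideways camera translation with identity intrinsics and is invented for illustration; SEG-SLAM's actual rejection strategy additionally uses semantic priors and depth:

```python
import numpy as np

# Sketch of the epipolar check behind dynamic-point rejection: a static
# point's match must lie near the epipolar line l = F @ x1; a large
# point-to-line distance suggests the point moved on its own.

def epipolar_distance(F, x1, x2):
    """Distance of x2 from the epipolar line of x1 (homogeneous points)."""
    l = F @ x1
    return abs(l @ x2) / np.hypot(l[0], l[1])

def reject_dynamic(F, pts1, pts2, thresh=1.0):
    """Keep only matches consistent with the static-scene constraint."""
    return [(x1, x2) for x1, x2 in zip(pts1, pts2)
            if epipolar_distance(F, x1, x2) < thresh]

# F for a pure sideways translation: epipolar lines are horizontal,
# so a static match keeps its vertical (v) image coordinate.
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])
static = (np.array([3.0, 2.0, 1.0]), np.array([4.0, 2.0, 1.0]))
moving = (np.array([3.0, 2.0, 1.0]), np.array([3.0, 5.0, 1.0]))
kept = reject_dynamic(F, [static[0], moving[0]], [static[1], moving[1]])
print(len(kept))  # 1: the vertically drifting match is flagged as dynamic
```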

28 pages, 14944 KiB  
Article
On the Importance of Precise Positioning in Robotised Agriculture
by Mateusz Nijak, Piotr Skrzypczyński, Krzysztof Ćwian, Michał Zawada, Sebastian Szymczyk and Jacek Wojciechowski
Remote Sens. 2024, 16(6), 985; https://doi.org/10.3390/rs16060985 - 11 Mar 2024
Cited by 6 | Viewed by 2022
Abstract
The precision of agro-technical operations is one of the main hallmarks of a modern approach to agriculture. However, ensuring the precise application of plant protection products or the performance of mechanical field operations entails significant costs for sophisticated positioning systems. This paper explores the integration of precision positioning based on the global navigation satellite system (GNSS) in agriculture, particularly in fieldwork operations, seeking solutions of moderate cost with sufficient precision. This study examines the impact of GNSSs on automation and robotisation in agriculture, with a focus on intelligent agricultural guidance. It also discusses commercial devices that enable the automatic guidance of self-propelled machinery and the benefits that they provide. This paper investigates GNSS-based precision localisation devices under real field conditions. A comparison of commercial and low-cost GNSS solutions, along with the integration of satellite navigation with advanced visual odometry for improved positioning accuracy, is presented. The research demonstrates that affordable solutions based on the common differential GNSS infrastructure can be applied for accurate localisation under real field conditions. It also underscores the potential of GNSS-based automation and robotisation in transforming agriculture into a more efficient and sustainable industry. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

26 pages, 1339 KiB  
Article
Applying Design Thinking to Enhance Programming Education in Vocational and Compulsory Secondary Schools
by Belkis Díaz-Lauzurica and David Moreno-Salinas
Appl. Sci. 2023, 13(23), 12792; https://doi.org/10.3390/app132312792 - 29 Nov 2023
Cited by 1 | Viewed by 1898
Abstract
A proper and complete education in technology (science, communications, programming, robotics, Computational Thinking, etc.) must be provided at all educational levels to support lifelong learning. However, students may lose motivation or interest due to the complexity and abstraction of some of the concepts taught. In line with this, this work seeks to improve the interest and commitment of students by presenting programming concepts and content in a practical way. The teaching–learning process is based on the development of robotics projects, which are adapted for courses and groups of different educational levels. The Design Thinking methodology is used to deliver the content. This methodology allows students to experiment with, design and test different solutions for a given problem, increasing their motivation and interest, promoting creativity, and making students conscious of their own learning process. Two different projects are considered: a simulated one based on a sensor network to localise and track a robot in a closed area, for vocational education students, and an experimental one about constructing a robot with several capabilities using Lego Mindstorms, for compulsory secondary education students. The results obtained over three different groups of students are analysed and compared, and show that the methodology and projects selected can be adopted and adapted for different educational levels, increasing the proficiency of the students and their development, motivation and self-learning despite the difficulty and complexity of some concepts related to computer science. Full article
(This article belongs to the Special Issue ICTs in Education)

33 pages, 440 KiB  
Review
Challenges and Solutions for Autonomous Ground Robot Scene Understanding and Navigation in Unstructured Outdoor Environments: A Review
by Liyana Wijayathunga, Alexander Rassau and Douglas Chai
Appl. Sci. 2023, 13(17), 9877; https://doi.org/10.3390/app13179877 - 31 Aug 2023
Cited by 25 | Viewed by 9994
Abstract
The capabilities of autonomous mobile robotic systems have been steadily improving due to recent advancements in computer science, engineering, and related disciplines such as cognitive science. In controlled environments, robots have achieved relatively high levels of autonomy. In more unstructured environments, however, the development of fully autonomous mobile robots remains challenging due to the complexity of understanding these environments. Many autonomous mobile robots use classical, learning-based or hybrid approaches for navigation. More recent learning-based methods may replace the complete navigation pipeline or selected stages of the classical approach. For effective deployment, autonomous robots must understand their external environments at a sophisticated level according to their intended applications. Therefore, in addition to robot perception, scene analysis and higher-level scene understanding (e.g., traversable/non-traversable, rough or smooth terrain, etc.) are required for autonomous robot navigation in unstructured outdoor environments. This paper provides a comprehensive review and critical analysis of these methods in the context of their applications to the problems of robot perception and scene understanding in unstructured environments and the related problems of localisation, environment mapping and path planning. State-of-the-art sensor fusion methods and multimodal scene understanding approaches are also discussed and evaluated within this context. The paper concludes with an in-depth discussion regarding the current state of the autonomous ground robot navigation challenge in unstructured outdoor environments and the most promising future research directions to overcome these challenges. Full article

19 pages, 5509 KiB  
Article
A Multi-Sensor Fusion Approach Based on PIR and Ultrasonic Sensors Installed on a Robot to Localise People in Indoor Environments
by Ilaria Ciuffreda, Sara Casaccia and Gian Marco Revel
Sensors 2023, 23(15), 6963; https://doi.org/10.3390/s23156963 - 5 Aug 2023
Cited by 9 | Viewed by 3479
Abstract
This work illustrates an innovative localisation sensor network that uses multiple PIR and ultrasonic sensors installed on a mobile social robot to localise occupants in indoor environments. The system presented aims to measure movement direction and distance to reconstruct the movement of a person in an indoor environment by using sensor activation strategies and data processing techniques. The data collected are then analysed using both a supervised (Decision Tree) and an unsupervised (K-Means) machine learning algorithm to extract the direction and distance of occupant movement from the measurement system, respectively. Tests in a controlled environment have been conducted to assess the accuracy of the methodology when multiple PIR and ultrasonic sensor systems are used. In addition, a qualitative evaluation of the system’s ability to reconstruct the movement of the occupant has been performed. The system proposed can reconstruct the direction of an occupant with an accuracy of 70.7% and uncertainty in distance measurement of 6.7%. Full article
(This article belongs to the Special Issue Metrology for Living Environment)

16 pages, 7670 KiB  
Article
Dense Papaya Target Detection in Natural Environment Based on Improved YOLOv5s
by Lei Wang, Hongcheng Zheng, Chenghai Yin, Yong Wang, Zongxiu Bai and Wei Fu
Agronomy 2023, 13(8), 2019; https://doi.org/10.3390/agronomy13082019 - 29 Jul 2023
Cited by 3 | Viewed by 1876
Abstract
Because the green papaya skin is the same colour as the leaves, and densely growing fruits overlap and occlude one another severely, target detection by a robot during the picking process is difficult. This study proposes an improved YOLOv5s-Papaya deep convolutional neural network for dense multitarget papaya detection in natural orchard environments. The model is based on the YOLOv5s network architecture and incorporates the Ghost module to enhance its lightweight characteristics. The Ghost module employs a strategy of grouped convolutional layers and weighted fusion, allowing for more efficient feature representation and improved model performance. A coordinate attention module is introduced to improve the accuracy of identifying dense multitarget papayas. The fusion of bidirectional weighted feature pyramid networks in the PANet structure of the feature fusion layer enhances the performance of papaya detection at different scales. Moreover, the scaled intersection over union bounding box regression loss function is used rather than the complete intersection over union loss to enhance the localisation accuracy of dense targets and expedite the convergence of network model training. Experimental results show that the YOLOv5s-Papaya model achieves detection average precision, precision, and recall rates of 92.3%, 90.4%, and 83.4%, respectively. The model’s size, number of parameters, and floating-point operations are 11.5 MB, 6.2 M, and 12.8 G, respectively. Compared to the original YOLOv5s network model, the detection average precision is improved by 3.6 percentage points, the precision is improved by 4.3 percentage points, the number of parameters is reduced by 11.4%, and the floating-point operations are decreased by 18.9%. The improved model has a lighter structure and better detection performance. This study provides the theoretical basis and technical support for intelligent picking recognition of overlapping and occluded dense papayas in natural environments. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Smart Agriculture—Volume II)

38 pages, 50553 KiB  
Article
Safe and Robust Map Updating for Long-Term Operations in Dynamic Environments
by Elisa Stefanini, Enrico Ciancolini, Alessandro Settimi and Lucia Pallottino
Sensors 2023, 23(13), 6066; https://doi.org/10.3390/s23136066 - 30 Jun 2023
Cited by 1 | Viewed by 2168
Abstract
Ensuring safe and continuous autonomous navigation in long-term mobile robot applications is still challenging. To ensure a reliable representation of the current environment without the need for periodic remapping, updating the map is recommended. However, in the case of incorrect robot pose estimation, updating the map can lead to errors that prevent the robot’s localisation and jeopardise map accuracy. In this paper, we propose a safe Lidar-based occupancy grid map-updating algorithm for dynamic environments, taking into account uncertainties in the estimation of the robot’s pose. The proposed approach allows for robust long-term operations, as it can recover the robot’s pose, even when it gets lost, to continue the map update process, providing a coherent map. Moreover, the approach is also robust to temporary changes in the map due to the presence of dynamic obstacles such as humans and other robots. Results highlighting map quality, localisation performance, and pose recovery, both in simulation and experiments, are reported. Full article
(This article belongs to the Section Sensors and Robotics)
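Occupancy-grid map updating of the kind this paper builds on is commonly implemented as a per-cell log-odds update. The following minimal sketch (inverse-sensor values and clamping bounds are illustrative assumptions, not the paper's parameters) shows how repeated Lidar hits raise a cell's occupancy and how later free-space observations let the map recover when a temporary obstacle leaves:

```python
import numpy as np

# Minimal per-cell log-odds occupancy update: cells traversed by a beam
# decay toward free, cells at a beam endpoint grow toward occupied.
# Clamping keeps stale evidence from making the map un-updatable.

L_FREE, L_OCC = -0.4, 0.85     # assumed inverse-sensor log-odds
L_MIN, L_MAX = -4.0, 4.0       # assumed clamping bounds

def update_cell(l, hit):
    l = l + (L_OCC if hit else L_FREE)
    return np.clip(l, L_MIN, L_MAX)

def probability(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + np.exp(l))

l = 0.0
for _ in range(5):             # an obstacle is observed repeatedly
    l = update_cell(l, hit=True)
p_occ = probability(l)
for _ in range(3):             # the obstacle leaves; beams pass through
    l = update_cell(l, hit=False)
print(p_occ > 0.9, probability(l) < p_occ)  # True True
```

The clamping bounds are what allow the recovery behaviour: without them, a long-occupied cell would need arbitrarily many free observations before the map reflected the change.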
