Search Results (12)

Search Parameters:
Keywords = OctoMap

28 pages, 7296 KiB  
Article
Autonomous Full 3D Coverage Using an Aerial Vehicle, Performing Localization, Path Planning, and Navigation towards Indoors Inventorying for the Logistics Domain
by Kosmas Tsiakas, Emmanouil Tsardoulias and Andreas L. Symeonidis
Robotics 2024, 13(6), 83; https://doi.org/10.3390/robotics13060083 - 23 May 2024
Viewed by 889
Abstract
In recent years, unmanned aerial vehicles (UAVs) have seen rapid adoption across a wide range of applications. Their use in indoor environments requires precise perception of the surrounding area, immediate response to its changes, and, consequently, robust position estimation. This paper presents an implementation of navigation algorithms for fast, reliable, and low-cost inventorying in the logistics industry. Drone localization is achieved with a particle filter algorithm that uses an array of distance sensors and an inertial measurement unit (IMU). Navigation is based on a proportional–integral–derivative (PID) position controller that ensures an obstacle-free path within the known 3D map. For full 3D coverage, target points are first extracted and then ordered to achieve optimal coverage. Finally, a series of experiments examines the robustness of the positioning system under different motion patterns and velocities, and compares several ways of traversing the environment using different configurations of the sensor that performs the area coverage.
(This article belongs to the Special Issue Autonomous Navigation of Mobile Robots in Unstructured Environments)
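The localization scheme described in the abstract — an IMU-driven motion prediction corrected by range measurements — follows the standard predict/update/resample cycle of a particle filter. A minimal 1D sketch of that cycle is shown below; the wall position, noise levels, and the `pf_step` helper are hypothetical illustration, not the paper's implementation:

```python
import math
import random

def pf_step(particles, control, z, wall_x, motion_noise=0.05, sensor_noise=0.1):
    # Predict: apply the commanded displacement (IMU-style) with motion noise.
    pred = [p + control + random.gauss(0.0, motion_noise) for p in particles]
    # Update: weight each particle by the likelihood of the range reading z
    # to a wall at wall_x (stand-in for the distance-sensor array).
    w = [math.exp(-((wall_x - p) - z) ** 2 / (2 * sensor_noise ** 2)) for p in pred]
    s = sum(w) or 1.0
    # Resample: draw a new particle set proportional to the normalized weights.
    return random.choices(pred, weights=[x / s for x in w], k=len(pred))

random.seed(1)
true_x, wall_x = 1.0, 10.0
particles = [random.uniform(0.0, 5.0) for _ in range(1000)]
for _ in range(20):
    true_x += 0.2                                  # robot moves 0.2 m per step
    z = (wall_x - true_x) + random.gauss(0.0, 0.1) # simulated range reading
    particles = pf_step(particles, 0.2, z, wall_x)

estimate = sum(particles) / len(particles)         # converges near true_x
```

The same structure generalizes to 3D pose estimation by replacing the scalar state with a pose vector and the single wall with the known 3D map.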

23 pages, 138297 KiB  
Article
Online Multi-Contact Motion Replanning for Humanoid Robots with Semantic 3D Voxel Mapping: ExOctomap
by Masato Tsuru, Adrien Escande, Iori Kumagai, Masaki Murooka and Kensuke Harada
Sensors 2023, 23(21), 8837; https://doi.org/10.3390/s23218837 - 30 Oct 2023
Viewed by 1914
Abstract
This study introduces a rapid motion-replanning technique driven by a semantic 3D voxel mapping system, essential for humanoid robots to autonomously navigate unknown territories through online environmental sensing. To address the challenges posed by conventional mapping approaches based on polygon meshes or primitive extraction, we adopt semantic voxel mapping, utilizing our Extended-Octomap (ExOctomap). This structure stores environmental normal vectors and the outcomes of Euclidean Cluster Extraction and principal component analysis within an octree, enabling O(log N) access to semantic information from a position query x ∈ ℝ³. This strategy reduces the 6D contact pose search to simple 3D grid sampling. Moreover, the voxel representation enables online search for collision-free trajectories. Through experimental validation in simulations and on a real robot, we demonstrate that our framework can efficiently adapt multi-contact motions across diverse environments, achieving near real-time planning speeds ranging from 13.8 ms to 115.7 ms per contact.
(This article belongs to the Special Issue Advances in Mobile Robot Perceptions, Planning, Control and Learning)
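The O(log N) access claimed above rests on the basic octree property that locating the leaf voxel containing a query position takes one child selection per tree level. A toy sketch of that descent follows; the node layout and sizes are hypothetical and do not reflect ExOctomap's actual encoding:

```python
def voxel_path(x, y, z, size=16.0, depth=4):
    """Child indices visited descending from the root of an octree over
    [0, size)^3 to the leaf voxel containing (x, y, z).
    Work is O(depth) = O(log N) in the number of leaf voxels N."""
    cx = cy = cz = size / 2.0      # centre of the current node
    offset = size / 4.0            # distance from a node's centre to a child's centre
    path = []
    for _ in range(depth):
        # One bit per axis selects which of the 8 children contains the point.
        idx = int(x >= cx) | (int(y >= cy) << 1) | (int(z >= cz) << 2)
        path.append(idx)
        cx += offset if x >= cx else -offset
        cy += offset if y >= cy else -offset
        cz += offset if z >= cz else -offset
        offset /= 2.0
    return path
```

Keying a hash map by such paths (or by the equivalent Morton code) is one common way to attach per-voxel semantic payloads, such as normals or cluster labels, with the same logarithmic lookup cost.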

19 pages, 10161 KiB  
Article
Advancing Simultaneous Localization and Mapping with Multi-Sensor Fusion and Point Cloud De-Distortion
by Haiyan Shao, Qingshuai Zhao, Hongtang Chen, Weixin Yang, Bin Chen, Zhiquan Feng, Jinkai Zhang and Hao Teng
Machines 2023, 11(6), 588; https://doi.org/10.3390/machines11060588 - 25 May 2023
Viewed by 1774
Abstract
This study addresses the challenges associated with incomplete or missing information in obstacle detection methods that employ a single sensor. Additionally, it tackles the issue of motion distortion in LiDAR point cloud data during synchronization and mapping in complex environments. The research introduces two significant contributions. Firstly, a novel obstacle detection method, named the point-map fusion (PMF) algorithm, is proposed. This method integrates point cloud data from the LiDAR, camera, and odometer, along with local grid maps. The PMF algorithm consists of two components: the point-fusion (PF) algorithm, which combines LiDAR point cloud data and camera laser-like point cloud data through point cloud library (PCL) format conversion and concatenation, and selects the point cloud closest to the quadruped robot dog as the valid data; and the map-fusion (MF) algorithm, which fuses the local grid maps acquired with the Gmapping and OctoMap algorithms (denoted as map A and map B, respectively), leveraging Bayesian estimation theory. This map fusion significantly enhances the precision and reliability of the approach. Secondly, a motion distortion removal (MDR) method for LiDAR point cloud data based on odometer readings is proposed. The MDR method uses legged odometer data to linearly interpolate the original distorted LiDAR point cloud data, determining the corresponding pose of the quadruped robot dog. The LiDAR point cloud data are then transformed into the quadruped robot dog's coordinate system, efficiently mitigating motion distortion.
Experimental results demonstrate that the proposed PMF algorithm achieves a 50% improvement in success rate compared to using only LiDAR or the PF algorithm in isolation, while the MDR algorithm enhances mapping accuracy by 45.9% when motion distortion is taken into account. The effectiveness of the proposed methods was confirmed through rigorous experimentation.
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
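Bayesian fusion of two occupancy grids, as the MF algorithm performs with map A and map B, is conventionally done by adding per-cell log-odds under an independence assumption. A minimal sketch of that standard form is below; the cell values are hypothetical and the paper's actual MF update may differ in detail:

```python
import math

def logodds(p):
    # Log-odds representation of an occupancy probability p in (0, 1).
    return math.log(p / (1.0 - p))

def fuse(p_a, p_b):
    """Fuse two occupancy estimates for the same cell by summing their
    log-odds (independent-measurement Bayesian update, uniform prior)."""
    l = logodds(p_a) + logodds(p_b)
    return 1.0 / (1.0 + math.exp(-l))   # back to probability

map_a = [0.9, 0.5, 0.2]   # e.g. a Gmapping local grid (hypothetical values)
map_b = [0.8, 0.7, 0.3]   # e.g. the corresponding OctoMap slice
fused = [fuse(a, b) for a, b in zip(map_a, map_b)]
```

Note the behaviour: two confident "occupied" readings reinforce each other (0.9 and 0.8 fuse to about 0.97), a non-informative 0.5 leaves the other estimate unchanged, and two "free" readings push the cell further toward free.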

25 pages, 5804 KiB  
Article
NGLSFusion: Non-Use GPU Lightweight Indoor Semantic SLAM
by Le Wan, Lin Jiang, Bo Tang, Yunfei Li, Bin Lei and Honghai Liu
Appl. Sci. 2023, 13(9), 5285; https://doi.org/10.3390/app13095285 - 23 Apr 2023
Viewed by 1472
Abstract
Perception of the indoor environment is the basis of mobile robot localization, navigation, and path planning, and it is particularly important to construct semantic maps in real time using minimal resources. Existing methods depend heavily on the graphics processing unit (GPU) to acquire semantic information about the indoor environment and cannot build the semantic map in real time on the central processing unit (CPU). To address these problems, this paper proposes NGLSFusion, a lightweight indoor semantic mapping algorithm that does not use a GPU. In the visual odometry (VO) module, ORB features are used to initialize the first frame, new keyframes are created by the optical flow method, and feature points are extracted by the direct method, which accelerates tracking. In the semantic mapping module, a pretrained model of the lightweight network LinkNet is optimized to provide semantic information in real time on devices with limited computing power, and the semantic point cloud is fused using OctoMap and Voxblox. Experimental results show that the proposed algorithm preserves camera pose accuracy while accelerating tracking, and reconstructs a structurally complete semantic map without using a GPU.
(This article belongs to the Special Issue 3D Scene Understanding and Object Recognition)

21 pages, 2235 KiB  
Article
RTSDM: A Real-Time Semantic Dense Mapping System for UAVs
by Zhiteng Li, Jiannan Zhao, Xiang Zhou, Shengxian Wei, Pei Li and Feng Shuang
Machines 2022, 10(4), 285; https://doi.org/10.3390/machines10040285 - 18 Apr 2022
Cited by 7 | Viewed by 3106
Abstract
Intelligent drones or flying robots play a significant role in serving our society in applications such as rescue, inspection, and agriculture. Understanding the surrounding scene is an essential capability for further autonomous tasks. Intuitively, knowing the UAV's own location and creating a semantic 3D map are prerequisites for fully autonomous tasks. However, integrating simultaneous localization, 3D reconstruction, and semantic segmentation is a huge challenge for power-limited systems such as UAVs. To address this, we propose a real-time semantic mapping system that helps a power-limited UAV system understand its location and surroundings. The proposed approach includes a modified visual SLAM with the direct method to accelerate the computationally intensive feature matching process and a real-time semantic segmentation module at the back end. The semantic module runs a lightweight network, BiSeNetV2, and performs segmentation only on keyframes from the front-end SLAM task. Considering fast navigation and on-board memory constraints, we provide a real-time dense-map-building module that generates an OctoMap annotated with the segmented semantic map. The proposed system is verified in real-time experiments on a UAV platform with a Jetson TX2 as the computation unit. A frame rate of around 12 Hz, with a semantic segmentation accuracy of around 89%, demonstrates that our proposed system is computationally efficient while providing sufficient information for fully autonomous tasks such as rescue and inspection.
(This article belongs to the Topic Motion Planning and Control for Robotics)

17 pages, 1157 KiB  
Article
An Occupancy Mapping Method Based on K-Nearest Neighbours
by Yu Miao, Alan Hunter and Ioannis Georgilas
Sensors 2022, 22(1), 139; https://doi.org/10.3390/s22010139 - 26 Dec 2021
Cited by 9 | Viewed by 2739
Abstract
OctoMap is an efficient probabilistic mapping framework that builds occupancy maps from point clouds, representing 3D environments with cubic nodes in an octree. However, the map update policy in OctoMap has limitations. All nodes containing points are assigned the same probability regardless of whether those points are noise, and the probability of such a node can only be increased by a single measurement. In addition, potentially occupied nodes that contain points but are traversed by rays cast from the sensor to endpoints are marked as free. To overcome these limitations, the current work presents a mapping method that uses the context of neighbouring points to update nodes containing points, with the occupancy information of a point represented by the average distance from the point to its k-nearest neighbours. A relationship between this distance and the change in probability is defined via the cumulative distribution function of average distances, which can decrease the probability of a node despite points being present inside it. Experiments are conducted on 20 data sets to compare the proposed method with OctoMap. Results show that our method achieves up to 10% improvement over the optimal performance of OctoMap.
(This article belongs to the Section Navigation and Positioning)
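The core idea — scoring a point by its average distance to its k nearest neighbours and mapping that through an empirical CDF so isolated (likely noisy) points can decrease a node's occupancy — can be sketched as follows. The helper names, the ±1 log-odds range, and the tie tolerance are hypothetical choices for illustration, not the paper's exact formulation:

```python
def knn_avg_distance(point, cloud, k=3):
    # Average Euclidean distance from `point` to its k nearest points in `cloud`.
    dists = sorted(
        sum((a - b) ** 2 for a, b in zip(point, q)) ** 0.5 for q in cloud
    )
    return sum(dists[:k]) / k

def occupancy_delta(d, reference_dists, eps=1e-9):
    """Map a k-NN average distance through the empirical CDF of reference
    distances: densely supported points (small d) get a positive occupancy
    increment, outliers (large d) a negative one, so a node's probability
    can decrease even though it contains points. eps tolerates float ties."""
    rank = sum(1 for v in reference_dists if v < d - eps) / len(reference_dists)
    return 1.0 - 2.0 * rank   # +1 for the densest points, -1 for the sparsest

# A dense line of points plus one isolated point that is likely sensor noise.
cloud = [(0.1 * i, 0.0, 0.0) for i in range(10)] + [(5.0, 5.0, 5.0)]
ref = [knn_avg_distance(p, cloud) for p in cloud]
d_dense = knn_avg_distance((0.5, 0.0, 0.0), cloud)   # well-supported point
d_noise = knn_avg_distance((5.0, 5.0, 5.0), cloud)   # isolated point
```

Here `occupancy_delta(d_dense, ref)` is positive while `occupancy_delta(d_noise, ref)` is negative, which is exactly the asymmetry OctoMap's uniform hit update lacks.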

17 pages, 2824 KiB  
Article
Parameter Reduction and Optimisation for Point Cloud and Occupancy Mapping Algorithms
by Yu Miao, Alan Hunter and Ioannis Georgilas
Sensors 2021, 21(21), 7004; https://doi.org/10.3390/s21217004 - 22 Oct 2021
Cited by 2 | Viewed by 1563
Abstract
Occupancy mapping is widely used to generate volumetric 3D environment models from point clouds, informing a robotic platform which parts of the environment are free and which are not. The selection of the parameters that govern the point cloud generation and mapping algorithms affects both the process and the quality of the final map. Although previous studies have reported on optimising major parameter configurations, research on a principled process for identifying optimal parameter sets for best occupancy mapping performance remains limited. The current work fills this gap with a two-step methodology that first identifies the most significant parameters by conducting Neighbourhood Component Analysis on all parameters, and then optimises those parameters using grid search with the area under the Receiver Operating Characteristic curve as the objective. The study is conducted on 20 data sets with specially designed targets, providing precise ground truth for evaluation. The methodology is tested on OctoMap with point clouds created by applying StereoSGBM to images from a stereo camera. The results clearly indicate that mapping parameters are more important than point cloud generation parameters. Moreover, up to 15% improvement in mapping performance can be achieved over the default parameters.
(This article belongs to the Section Navigation and Positioning)
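The second step of the methodology — grid search scored by ROC AUC — has a compact generic form. The sketch below uses a rank-statistic AUC and a hypothetical `evaluate` stand-in (the parameter names, grids, and the toy scoring function are all illustrative assumptions, not the paper's setup):

```python
import itertools

def roc_auc(labels, scores):
    # Rank-statistic AUC: probability that a positive outscores a negative,
    # counting ties as half a win.
    pos = [s for l, s in zip(labels, scores) if l]
    neg = [s for l, s in zip(labels, scores) if not l]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def evaluate(resolution, hit_prob):
    """Hypothetical stand-in for building a map with one parameter set and
    scoring its voxels against ground truth; class separation degrades as
    the configuration moves away from the (pretend) optimum (0.1, 0.7)."""
    n = abs(resolution - 0.1) + abs(hit_prob - 0.7)
    labels = [1, 1, 1, 0, 0, 0]                      # occupied / free truth
    scores = [1 - n, 1 - 2 * n, 1 - 3 * n, 3 * n, 2 * n, n]
    return labels, scores

# Exhaustive grid over the parameters kept after the significance analysis.
grid = itertools.product([0.05, 0.1, 0.2], [0.6, 0.7, 0.9])
best = max(grid, key=lambda cfg: roc_auc(*evaluate(*cfg)))
```

In practice `evaluate` would run the full mapping pipeline per configuration, which is exactly why the first step prunes the grid to the few parameters that Neighbourhood Component Analysis flags as significant.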

24 pages, 5242 KiB  
Article
Aerial and Ground Robot Collaboration for Autonomous Mapping in Search and Rescue Missions
by Dimitrios Chatziparaschis, Michail G. Lagoudakis and Panagiotis Partsinevelos
Drones 2020, 4(4), 79; https://doi.org/10.3390/drones4040079 - 19 Dec 2020
Cited by 42 | Viewed by 10065
Abstract
Humanitarian crisis scenarios typically require immediate rescue intervention. In many cases, the conditions at a scene may be prohibitive for human rescuers to provide instant aid because of hazardous, unexpected, and life-threatening situations. These scenarios are ideal for autonomous mobile robot systems to assist in searching for and even rescuing individuals. In this study, we present a synchronous ground–aerial robot collaboration approach, in which an Unmanned Aerial Vehicle (UAV) and a humanoid robot solve a Search and Rescue scenario locally, without the aid of a commonly used Global Navigation Satellite System (GNSS). Specifically, the UAV uses a combination of Simultaneous Localization and Mapping and OctoMap approaches to extract a 2.5D occupancy grid map of the unknown area in relation to the humanoid robot. The humanoid robot receives a goal position in the created map and executes a path planning algorithm to estimate the footstep navigation trajectory for reaching the goal. As the humanoid robot navigates, it localizes itself in the map with an adaptive Monte Carlo Localization algorithm, combining local odometry data with sensor observations from the UAV. Finally, the humanoid robot performs visual human body detection on camera data with a pretrained Darknet neural network. The proposed robot collaboration scheme has been tested in a proof-of-concept setting in an exterior GNSS-denied environment.

16 pages, 4446 KiB  
Article
Optimal Frontier-Based Autonomous Exploration in Unconstructed Environment Using RGB-D Sensor
by Liang Lu, Carlos Redondo and Pascual Campoy
Sensors 2020, 20(22), 6507; https://doi.org/10.3390/s20226507 - 14 Nov 2020
Cited by 28 | Viewed by 4502
Abstract
Aerial robots are widely used in search and rescue applications because of their small size and high maneuverability. However, designing an autonomous exploration algorithm is still a challenging and open task because of the limited payload and computing resources on board UAVs. This paper presents an autonomous exploration algorithm for aerial robots with several improvements for use in search and rescue tasks. First, an RGB-D sensor is used to receive information from the environment, and OctoMap divides the environment into obstacle, free, and unknown space. Then, a clustering algorithm filters the frontiers extracted from the OctoMap, and an information-gain-based cost function is applied to choose the optimal frontier. Finally, a feasible path is given by an A* path planner and a safe corridor generation algorithm. The proposed algorithm has been tested and compared with baseline algorithms in three different environments with map resolutions of 0.2 m and 0.3 m. The experimental results show that the proposed algorithm yields a shorter exploration path and saves exploration time compared with the state of the art. The algorithm has also been validated in real flight experiments.
(This article belongs to the Special Issue Sensors for Unmanned Aircraft Systems and Related Technologies)
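An information-gain-based frontier cost function of the kind described above typically trades path length against the unknown volume a frontier would reveal. A minimal sketch follows; the cost form, the weight `lam`, and the gain counts are illustrative assumptions, not the paper's exact function:

```python
import math

def choose_frontier(robot, frontiers, lam=2.0):
    """Pick the frontier minimizing (path cost - lam * information gain),
    where gain is approximated here by the number of unknown cells the
    frontier is expected to reveal."""
    def cost(f):
        (x, y), gain = f
        dist = math.hypot(x - robot[0], y - robot[1])  # straight-line path cost
        return dist - lam * gain
    return min(frontiers, key=cost)

# (position, expected unknown cells revealed) — hypothetical clustered frontiers
frontiers = [((2.0, 0.0), 5), ((1.0, 1.0), 1), ((6.0, 0.0), 9)]
best = choose_frontier((0.0, 0.0), frontiers)
```

With a large `lam` the explorer favours the high-gain distant frontier; shrinking `lam` makes it greedy for the nearest one, which is the knob that shortens exploration paths at the cost of coverage rate.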

24 pages, 5411 KiB  
Article
Autonomous 3D Exploration of Large Structures Using an UAV Equipped with a 2D LIDAR
by Margarida Faria, António Sérgio Ferreira, Héctor Pérez-Leon, Ivan Maza and Antidio Viguria
Sensors 2019, 19(22), 4849; https://doi.org/10.3390/s19224849 - 8 Nov 2019
Cited by 14 | Viewed by 4405
Abstract
This paper addresses the challenge of exploring large, unknown, and unstructured industrial environments with an unmanned aerial vehicle (UAV). The resulting system combines well-known components and techniques with a new manoeuvre that uses a low-cost 2D laser to measure a 3D structure. Our approach combines frontier-based exploration, the Lazy Theta* path planner, and a flyby sampling manoeuvre to create a 3D map of large scenarios. One of the novelties of our system is that all the algorithms rely on the multi-resolution OctoMap world representation. We used a Hardware-in-the-Loop (HitL) simulation environment to collect accurate measurements of the capability of the open-source system to run online and on board the UAV in real time. Our approach is compared to different reference heuristics in this simulation environment, showing better performance in terms of the amount of explored space. With the proposed approach, the UAV is able to explore 93% of the search space in under 30 min, generating a path without repetition that adapts to the occupied space, covering indoor locations, irregular structures, and suspended obstacles.
(This article belongs to the Section Remote Sensors)

20 pages, 17479 KiB  
Article
Safe and Robust Mobile Robot Navigation in Uneven Indoor Environments
by Chaoqun Wang, Jiankun Wang, Chenming Li, Danny Ho, Jiyu Cheng, Tingfang Yan, Lili Meng and Max Q.-H. Meng
Sensors 2019, 19(13), 2993; https://doi.org/10.3390/s19132993 - 7 Jul 2019
Cited by 24 | Viewed by 7573
Abstract
Complex environments pose great challenges for autonomous mobile robot navigation. In this study, we address the problem of autonomous navigation in 3D environments with staircases and slopes. An integrated system for safe mobile robot navigation in complex 3D environments is presented, with both perception and navigation capabilities incorporated into a modular and reusable framework. Firstly, to distinguish slopes from staircases in the environment, the robot builds a 3D OctoMap of the environment with a novel Simultaneous Localization and Mapping (SLAM) framework using wheel odometry, a 2D laser scanner, and an RGB-D camera. Then, we introduce the traversable map, generated from multi-layer 2D maps extracted from the 3D OctoMap. This traversable map serves as the input for autonomous navigation when the robot faces slopes and staircases. Moreover, to enable robust robot navigation in 3D environments, a novel camera re-localization method based on regression forests for stable 3D localization is incorporated into the framework. In addition, we utilize a variable-step-size Rapidly-exploring Random Tree (RRT) method that adjusts the exploration step size automatically, without manual tuning for each environment, improving navigation efficiency. Experiments are conducted in different kinds of environments, and the results demonstrate that the proposed system enables the robot to navigate efficiently and robustly in complex 3D environments.
(This article belongs to the Special Issue Mobile Robot Navigation)
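The variable-step-size idea — letting the RRT extension step grow in open space and shrink near obstacles rather than fixing it by hand — can be sketched with a simple steer function. The clearance-based rule and all names here are hypothetical illustration, not the paper's method:

```python
import math

def adaptive_step(nearest, sample, clearance, min_step=0.2, max_step=2.0):
    """Steer from the nearest tree node toward a random sample, scaling the
    extension step to the local free-space clearance (e.g. distance to the
    nearest obstacle in the map) instead of using one fixed step size."""
    step = max(min_step, min(max_step, clearance))
    dx, dy = sample[0] - nearest[0], sample[1] - nearest[1]
    dist = math.hypot(dx, dy)
    if dist <= step:
        return sample                        # the sample is within one step
    # Move `step` metres along the straight line toward the sample.
    return (nearest[0] + dx / dist * step, nearest[1] + dy / dist * step)
```

In open areas the tree takes long strides (capped at `max_step`), while in cluttered regions it creeps forward at `min_step`, which is what removes the per-environment tuning of a single step parameter.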

29 pages, 20745 KiB  
Article
DRE-SLAM: Dynamic RGB-D Encoder SLAM for a Differential-Drive Robot
by Dongsheng Yang, Shusheng Bi, Wei Wang, Chang Yuan, Wei Wang, Xianyu Qi and Yueri Cai
Remote Sens. 2019, 11(4), 380; https://doi.org/10.3390/rs11040380 - 13 Feb 2019
Cited by 50 | Viewed by 8925
Abstract
State-of-the-art visual simultaneous localization and mapping (V-SLAM) systems have highly accurate localization capabilities and impressive mapping effects. However, most of these systems assume that the operating environment is static, limiting their application in the real, dynamic world. In this paper, by fusing the information of an RGB-D camera and two encoders mounted on a differential-drive robot, we aim to estimate the motion of the robot and construct a static-background OctoMap in both dynamic and static environments. A tightly coupled feature-based method is proposed to fuse the two types of information via optimization. Dynamic pixels occupied by dynamic objects are detected and culled to cope with dynamic environments. The ability to identify dynamic pixels on both predefined and undefined dynamic objects is available, attributed to the combination of a CPU-based object detection method and a multiview constraint-based approach. We first construct local sub-OctoMaps from the keyframes and then fuse the sub-OctoMaps into a full OctoMap. This submap-based approach gives the OctoMap the ability to deform, and significantly reduces map updating time and memory costs. We evaluated the proposed system in various dynamic and static scenes. The results show that our system possesses competitive pose accuracy and high robustness, as well as the ability to construct a clean static OctoMap in dynamic scenes.
(This article belongs to the Special Issue Mobile Mapping Technologies)
