Pose estimation and map reconstruction are fundamental requirements for autonomous robot behavior. In this paper, we propose a point-plane-based method that uses RGB-D cameras to simultaneously estimate the robot's pose and reconstruct a map of the current environment. First, we detect and track point and plane features in the color and depth images, obtaining reliable constraints even in low-texture scenes. Then, we construct cost functions from these features and exploit a minimal representation of planes to minimize them for pose estimation and local map optimization. Furthermore, for Manhattan World (MW) scenes, we extract the MW axes from the plane normals and the vanishing directions of parallel lines, and we add the MW constraint to the point-plane-based cost functions for more accurate pose estimation. Experiments on public RGB-D datasets demonstrate the robustness and accuracy of the proposed algorithm for pose estimation and map reconstruction, and show its advantages over alternative methods.
Keywords: Manhattan World; RGB-D camera; map reconstruction; point-plane-based factor graph; pose estimation; visual SLAM.
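As an illustration of the minimal plane representation mentioned in the abstract, the following sketch shows one common 3-DoF parameterization of a plane, the closest-point (CP) vector. This is an assumption for illustration only and is not necessarily the exact parameterization used in the paper; the function names are hypothetical.

```python
import numpy as np

def plane_to_cp(n, d):
    """Map a plane n . x = d (unit normal n, distance d > 0 from the
    origin) to its 3-DoF closest-point vector pi = d * n, i.e. the
    point on the plane nearest the origin."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)  # enforce unit normal
    return d * n

def cp_to_plane(pi):
    """Recover the (n, d) plane parameters from a CP vector."""
    pi = np.asarray(pi, dtype=float)
    d = np.linalg.norm(pi)
    return pi / d, d

# Round-trip: a horizontal plane 2.5 m above the origin
n, d = np.array([0.0, 0.0, 1.0]), 2.5
pi = plane_to_cp(n, d)        # -> [0, 0, 2.5]
n2, d2 = cp_to_plane(pi)
```

Using such a minimal representation avoids the over-parameterization of the raw 4-vector (n, d), so an optimizer can update plane landmarks without extra unit-norm constraints.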