
Scale-invariant localization using quasi-semantic object landmarks

2021, Autonomous Robots

This work presents Object Landmarks, a new type of visual feature designed for visual localization over major changes in distance and scale. An Object Landmark consists of a bounding box $\mathbf{b}$ defining an object, a descriptor $\mathbf{q}$ of that object produced by a Convolutional Neural Network, and a set of classical point features within $\mathbf{b}$. We evaluate Object Landmarks on visual odometry and place-recognition tasks, and compare them against several modern approaches. We find that Object Landmarks enable superior localization over major scale changes, reducing error by as much as 18% and increasing robustness to failure by as much as 80% versus the state of the art. They allow localization under scale change factors up to 6, where state-of-the-art approaches break down at factors of 3 or more.
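The abstract defines an Object Landmark as three parts: a bounding box $\mathbf{b}$, a CNN-produced descriptor $\mathbf{q}$, and a set of classical point features inside $\mathbf{b}$. A minimal sketch of that structure, assuming illustrative names and representations (the class and field names below are hypothetical, not the paper's actual API), might look like:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical sketch of the Object Landmark structure from the abstract.
# Representations (tuple box, list-of-floats descriptors) are assumptions
# for illustration only.

@dataclass
class PointFeature:
    xy: Tuple[float, float]   # keypoint location in image coordinates
    descriptor: List[float]   # e.g. a classical SIFT/ORB descriptor vector

@dataclass
class ObjectLandmark:
    b: Tuple[float, float, float, float]  # bounding box (x_min, y_min, x_max, y_max)
    q: List[float]                        # CNN-derived object descriptor
    points: List[PointFeature]            # classical point features within b

    def contains(self, xy: Tuple[float, float]) -> bool:
        """True if a point lies inside the bounding box b."""
        x_min, y_min, x_max, y_max = self.b
        x, y = xy
        return x_min <= x <= x_max and y_min <= y <= y_max

# Example: one landmark holding a single point feature inside its box.
lm = ObjectLandmark(
    b=(10.0, 20.0, 110.0, 220.0),
    q=[0.1, 0.9, 0.3],
    points=[PointFeature(xy=(50.0, 100.0), descriptor=[0.2, 0.4])],
)
print(lm.contains((50.0, 100.0)))  # → True
```

Keeping the whole-object descriptor $\mathbf{q}$ alongside point features is what makes the feature usable across large scale changes: the object-level match can be established even when individual keypoints no longer correspond.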
