SpatialBot: Precise spatial understanding with vision language models

W Cai, I Ponomarenko, J Yuan, X Li, W Yang… - arXiv preprint arXiv:2406.13642, 2024 - arxiv.org
Vision Language Models (VLMs) have achieved impressive performance in 2D image understanding; however, they still struggle with spatial understanding, which is the foundation of Embodied AI. In this paper, we propose SpatialBot for better spatial understanding by feeding it both RGB and depth images. Additionally, we construct the SpatialQA dataset, which contains multi-level depth-related questions to train VLMs for depth understanding. Finally, we present SpatialBench to comprehensively evaluate VLMs' capabilities in spatial understanding at different levels. Extensive experiments on our spatial-understanding benchmark, general VLM benchmarks, and Embodied AI tasks demonstrate the remarkable improvements of SpatialBot trained on SpatialQA. The model, code, and data are available at https://github.com/BAAI-DCAI/SpatialBot.
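
To make the RGB-plus-depth input format concrete, here is a minimal sketch of how one might pair an RGB frame with a metric depth map and a depth-related question, in the spirit of the SpatialQA examples the abstract describes. The helpers encode_depth_for_vlm and build_depth_qa_example, the two-byte depth packing, and the field names are illustrative assumptions, not the paper's actual pipeline; the authors' exact depth encoding is defined in the official repository (https://github.com/BAAI-DCAI/SpatialBot).

```python
import numpy as np
from PIL import Image

def encode_depth_for_vlm(depth_mm: np.ndarray) -> Image.Image:
    # Hypothetical encoding: pack a 16-bit metric depth map (millimeters)
    # into two 8-bit channels so it round-trips losslessly when stored
    # as an ordinary RGB image alongside the color frame.
    depth = depth_mm.astype(np.uint16)
    high = (depth >> 8).astype(np.uint8)   # most significant byte
    low = (depth & 0xFF).astype(np.uint8)  # least significant byte
    zeros = np.zeros_like(high)
    return Image.fromarray(np.stack([high, low, zeros], axis=-1))

def build_depth_qa_example(rgb: Image.Image, depth_mm: np.ndarray,
                           question: str) -> dict:
    # Pair the RGB frame with its encoded depth map and a depth-related
    # question (field names are illustrative, not the SpatialQA schema).
    return {
        "images": [rgb.convert("RGB"), encode_depth_for_vlm(depth_mm)],
        "question": question,
    }

# Example: a synthetic flat scene 1.5 m away with a point-level question.
rgb = Image.new("RGB", (640, 480), color=(128, 128, 128))
depth = np.full((480, 640), 1500, dtype=np.uint16)  # depth in mm
sample = build_depth_qa_example(
    rgb, depth, "What is the depth at the image center?")
# 1500 mm packs to (5, 220, 0): 5 * 256 + 220 = 1500.
print(sample["question"], sample["images"][1].getpixel((320, 240)))
```

The two-image structure is the key point: the model sees the color frame and the depth map as separate inputs, so depth-related questions can be answered by reading values out of the second image rather than guessing geometry from RGB alone.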