
Probing the 3D Awareness of Visual Foundation Models

This repository contains a re-implementation of the code for the paper Probing the 3D Awareness of Visual Foundation Models (CVPR 2024), which analyzes how well visual foundation models capture the 3D structure of scenes.

Mohamed El Banani, Amit Raj, Kevis-Kokitsi Maninis, Abhishek Kar, Yuanzhen Li, Michael Rubinstein, Deqing Sun, Leonidas Guibas, Justin Johnson, Varun Jampani

If you find this code useful, please consider citing:

@inProceedings{elbanani2024probing,
  title={{Probing the 3D Awareness of Visual Foundation Models}},
  author={
        El Banani, Mohamed and Raj, Amit and Maninis, Kevis-Kokitsi and 
        Kar, Abhishek and Li, Yuanzhen and Rubinstein, Michael and Sun, Deqing and 
        Guibas, Leonidas and Johnson, Justin and Jampani, Varun
        },
  booktitle={CVPR},
  year={2024},
}

Environment Setup

We recommend using Anaconda or Miniconda. To set up the environment, follow the instructions below.

conda create -n probe3d python=3.9 --yes
conda activate probe3d
conda install pytorch=2.2.1 torchvision=0.17.1 pytorch-cuda=12.1 -c pytorch -c nvidia 
conda install -c pytorch -c nvidia faiss-gpu=1.8.0
conda install -c conda-forge nb_conda_kernels=2.3.1

pip install -r requirements.txt
python setup.py develop

pip install protobuf==3.20.3    # pin protobuf; avoids a version conflict between datasets and Google's API packages
pre-commit install              # install pre-commit

Finally, please follow the dataset download and preprocessing instructions here.

Evaluation Experiments

We provide code to train the single-view probes (depth and surface normals) and to evaluate correspondence. All experiments use Hydra configs, which can be found here. Below are example commands for running the evaluations with the DINO ViT-B/16 backbone.

# Training single-view probes
python train_depth.py backbone=dino_b16 +backbone.return_multilayer=True
python train_snorm.py backbone=dino_b16 +backbone.return_multilayer=True
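The training scripts above fit a probe on top of frozen backbone features. As a rough illustration of the idea only (this is not the repo's actual probe class; `LinearDepthProbe`, the feature shapes, and the patch size are invented for the sketch), a minimal dense depth probe over frozen patch features might look like:

```python
import torch
import torch.nn as nn

# Hypothetical sketch, NOT the repo's probe implementation: a 1x1-conv
# readout that maps frozen patch features to a per-pixel depth prediction.
class LinearDepthProbe(nn.Module):
    def __init__(self, feat_dim: int, patch_size: int = 16):
        super().__init__()
        self.patch_size = patch_size
        self.head = nn.Conv2d(feat_dim, 1, kernel_size=1)  # per-patch depth

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H/p, W/p) features from a frozen backbone
        depth = self.head(feats)  # (B, 1, H/p, W/p)
        # upsample patch-level predictions back to pixel resolution
        return nn.functional.interpolate(
            depth, scale_factor=self.patch_size,
            mode="bilinear", align_corners=False,
        )

probe = LinearDepthProbe(feat_dim=768)
feats = torch.randn(2, 768, 14, 14)   # e.g. ViT-B/16 features for 224x224 input
pred = probe(feats)
print(pred.shape)  # torch.Size([2, 1, 224, 224])
```

Only the probe's parameters are trained; the backbone stays frozen, so the probe's accuracy reflects what the pretrained features already encode.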

# Evaluating multiview correspondence 
python evaluate_navi_correspondence.py +backbone=dino_b16
python evaluate_scannet_correspondence.py +backbone=dino_b16
python evaluate_spair_correspondence.py +backbone=dino_b16
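The correspondence evaluations score how well frozen features match across views. As a hedged, self-contained sketch of the underlying idea (illustrative only; `nn_match` and the shapes below are invented, not functions from this repo), nearest-neighbor matching of normalized features looks like:

```python
import numpy as np

# Hypothetical sketch of nearest-neighbor feature matching: L2-normalize
# patch features from two views, match each source feature to its most
# similar target feature, and measure recall against the known mapping.
def nn_match(src: np.ndarray, tgt: np.ndarray) -> np.ndarray:
    # src: (N, C), tgt: (M, C) feature vectors
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    sim = src @ tgt.T          # cosine similarity matrix, (N, M)
    return sim.argmax(axis=1)  # best target index for each source feature

rng = np.random.default_rng(0)
tgt = rng.standard_normal((50, 64))
perm = rng.permutation(50)
src = tgt[perm] + 0.05 * rng.standard_normal((50, 64))  # noisy permuted copy
matches = nn_match(src, tgt)
recall = (matches == perm).mean()
print(f"matching recall: {recall:.2f}")
```

On real data, the source and target features come from the two images, and the ground-truth mapping comes from annotated keypoints or known geometry rather than a synthetic permutation.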

Performance Correlation

Coming soon.

Acknowledgments

We thank Prafull Sharma, Shivam Duggal, Karan Desai, Junhwa Hur, and Charles Herrmann for many helpful discussions. We also thank Alyosha Efros, David Fouhey, Stella Yu, and Andrew Owens for their feedback.

We would also like to acknowledge the following repositories for releasing valuable code and datasets:

  • GeoNet for releasing the extracted surface normals for full NYU.
