GreedyViG: Dynamic Axial Graph Construction for Efficient Vision GNNs

CVPR 2024

PDF | arXiv

Mustafa Munir, William Avery, Md Mostafijur Rahman, and Radu Marculescu

Overview

This repository contains the source code for GreedyViG: Dynamic Axial Graph Construction for Efficient Vision GNNs.

Pretrained Models

Weights trained on ImageNet-1K can be downloaded here.

Weights trained on COCO 2017 Object Detection and Instance Segmentation can be downloaded here.

Weights trained on ADE20K Semantic Segmentation can be downloaded here.

detection

Contains all of the object detection and instance segmentation results, backbone code, and config.

segmentation

Contains all of the semantic segmentation results, backbone code, and config.

models

Contains the main GreedyViG model code.

util

Contains utility scripts used in GreedyViG.

Usage

Installation: Image Classification

conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6 -c pytorch -c conda-forge
conda install mpi4py
pip install -r requirements.txt
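
Before launching training, a quick sanity check (a minimal sketch, not part of the repository) confirms that the pinned versions installed and that PyTorch can see a GPU, which the distributed launcher below requires:

import torch, torchvision

# Expect 1.12.1 and 0.13.1, matching the conda install above.
print(torch.__version__, torchvision.__version__)

# torch.distributed.launch with --nproc_per_node >= 1 assumes visible CUDA devices.
print("CUDA available:", torch.cuda.is_available())
print("GPU count:", torch.cuda.device_count())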

Image Classification

Train image classification:

python -m torch.distributed.launch --nproc_per_node=num_GPUs --nnodes=num_nodes --use_env main.py --data-path /path/to/imagenet --model greedyvig_model --output_dir greedyvig_results

For example:

python -m torch.distributed.launch --nproc_per_node=1 --nnodes=1 --use_env main.py --data-path ../../Datasets/ILSVRC/Data/CLS-LOC/ --model GreedyViG_M --output_dir greedyvig_test_results

Test image classification:

python -m torch.distributed.launch --nproc_per_node=num_GPUs --nnodes=num_nodes --use_env main.py --data-path /path/to/imagenet --model greedyvig_model --resume pretrained_model --eval

For example:

python -m torch.distributed.launch --nproc_per_node=1 --nnodes=1 --use_env main.py --data-path ../../Datasets/ILSVRC/Data/CLS-LOC/ --model GreedyViG_S --resume Pretrained_Models_GreedyViG/S_GreedyViG_81_1.pth --eval
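
For quick single-image inference outside the distributed launcher, a rough sketch is given below. It assumes that importing the models package registers the GreedyViG variants with timm's model registry (common in DeiT-style codebases) and that the checkpoint nests its weights under a "model" key; both are assumptions, so check models/ and adjust accordingly. example.jpg is a hypothetical input image.

import torch
from PIL import Image
from timm.models import create_model
from timm.data import create_transform

import models  # assumption: registers GreedyViG_S/M/B via timm's @register_model

# Pretrained weights linked above; the path matches the evaluation example.
ckpt = torch.load("Pretrained_Models_GreedyViG/S_GreedyViG_81_1.pth", map_location="cpu")

model = create_model("GreedyViG_S", num_classes=1000)
# Assumption: DeiT-style checkpoints store weights under a "model" key.
model.load_state_dict(ckpt["model"] if "model" in ckpt else ckpt)
model.eval()

transform = create_transform(input_size=224, is_training=False)
x = transform(Image.open("example.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    print("Predicted class index:", model(x).argmax(dim=1).item())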

Installation: Object Detection and Instance Segmentation

conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6 -c pytorch -c conda-forge
pip install timm
pip install submitit
pip install -U openmim
mim install mmcv-full
mim install mmdet==2.28
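
A short import check (a sketch, not part of the repository) verifies that the MMDetection stack installed cleanly and that mmcv-full was built with its compiled ops:

import mmcv
import mmdet

print(mmcv.__version__)   # mmcv-full, built against the CUDA 11.6 toolkit above
print(mmdet.__version__)  # expect 2.28.x per the pin above

from mmcv.ops import nms  # raises ImportError if mmcv-full lacks compiled ops
print("mmcv-full ops OK")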

Object Detection and Instance Segmentation

Detection and instance segmentation on MS COCO 2017 are implemented based on MMDetection. We follow the settings and hyper-parameters of PVT, PoolFormer, and EfficientFormer for comparison.

All commands for object detection and instance segmentation should be run from the GreedyViG/detection/ directory.

Data preparation

Prepare the COCO 2017 dataset according to the instructions in MMDetection; the expected directory layout is sketched below.
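
MMDetection's standard layout is shown below (per the MMDetection documentation, not specific to this repository); the dataset root is typically set through the data_root field of the config:

detection
├── data
│   ├── coco
│   │   ├── annotations
│   │   │   ├── instances_train2017.json
│   │   │   ├── instances_val2017.json
│   │   ├── train2017
│   │   ├── val2017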

ImageNet Pretraining

Place the ImageNet-1K pretrained backbone weights as follows:

GreedyViG
├── Final_Results
│   ├── model
│   │   ├── model.pth
│   │   ├── ...

Train object detection and instance segmentation:

python -m torch.distributed.launch --nproc_per_node num_GPUs --nnodes=num_nodes --node_rank 0 main.py configs/mask_rcnn_greedyvig_model --greedyvig_model greedyvig_model --work-dir Output_Directory --launcher pytorch > Output_Directory/log_file.txt 

For example:

python -m torch.distributed.launch --nproc_per_node 2 --nnodes 1 --node_rank 0 main.py configs/mask_rcnn_greedyvig_s_fpn_1x_coco.py --greedyvig_model greedyvig_s --work-dir detection_results/ --launcher pytorch > detection_results/greedyvig_s_run.txt 

Test object detection and instance segmentation:

python -m torch.distributed.launch --nproc_per_node=num_GPUs --nnodes=num_nodes --node_rank 0 test.py configs/mask_rcnn_greedyvig_model --checkpoint Pretrained_Model --eval {bbox or segm} --work-dir Output_Directory --launcher pytorch > log_file.txt

For example:

python -m torch.distributed.launch --nproc_per_node=4 --nnodes=1 --node_rank 0 test.py configs/mask_rcnn_greedyvig_s_fpn_1x_coco.py --checkpoint ../Pretrained_Models_GreedyViG/Detection/GreedyViG_S_Det.pth --eval bbox --work-dir detection_results/ --launcher pytorch > detection_results/greedyvig_s_run_evaluation.txt

Installation: Semantic Segmentation

conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6 -c pytorch -c conda-forge
pip install -U openmim
mim install mmengine
mim install mmcv-full
mim install "mmsegmentation <=0.30.0"

Semantic Segmentation

Semantic segmentation on ADE20K is implemented based on MMSegmentation. We follow the settings and hyper-parameters of PVT, PoolFormer, and EfficientFormer for comparison.

All commands for semantic segmentation should be run from the GreedyViG/segmentation/ directory.

Train semantic segmentation:

8 GPUs, 40K iterations:

python -m torch.distributed.launch --nproc_per_node 8 --nnodes 1 --node_rank 0 train.py configs/sem_fpn/fpn_greedyvig_s_ade20k_40k.py --greedyvig_model greedyvig_s --work-dir semantic_results/ --launcher pytorch > semantic_results/greedyvig_s_run_semantic.txt
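
The repository does not list an evaluation command for semantic segmentation. Assuming test.py follows MMSegmentation's standard tools/test.py interface (an assumption; check segmentation/test.py), and with a hypothetical checkpoint path, evaluation would look roughly like:

python -m torch.distributed.launch --nproc_per_node 8 --nnodes 1 --node_rank 0 test.py configs/sem_fpn/fpn_greedyvig_s_ade20k_40k.py ../Pretrained_Models_GreedyViG/Segmentation/GreedyViG_S_Seg.pth --eval mIoU --launcher pytorch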

Citation

If our code or models help your work, please cite MobileViG (CVPRW 2023), MobileViGv2 (CVPRW 2024), and GreedyViG (CVPR 2024):

@InProceedings{GreedyViG_2024_CVPR,
    author    = {Munir, Mustafa and Avery, William and Rahman, Md Mostafijur and Marculescu, Radu},
    title     = {GreedyViG: Dynamic Axial Graph Construction for Efficient Vision GNNs},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {6118-6127}
}
@InProceedings{mobilevig2023,
    author    = {Munir, Mustafa and Avery, William and Marculescu, Radu},
    title     = {MobileViG: Graph-Based Sparse Attention for Mobile Vision Applications},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {2211-2219}
}
@InProceedings{MobileViGv2_2024,
    author    = {Avery, William and Munir, Mustafa and Marculescu, Radu},
    title     = {Scaling Graph Convolutions for Mobile Vision},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {5857-5865}
}
