
SeSame: Simple, Easy 3D Object Detection with Point-Wise Semantics

(Figure: qualitative result)

News

[24.07.31] Updated the existing KITTI entries because the previous submission had expired

[24.07.08] Fixed bugs

[24.03.08] All results and the model zoo have been uploaded

[24.02.28] Results were submitted to the KITTI 3D/BEV object detection benchmark under the names SeSame-point, SeSame-voxel, and SeSame-pillar

To Do

  • Preprint of our work will be available after the review process
  • Upload the whole project, including training/validation logs and results on the test split
  • Evaluation on the KITTI val and test splits
  • Convert code from spconv 1.x to spconv 2.x

Model Zoo

3D detection (car)

| model | AP_easy | AP_mod | AP_hard | config | pretrained weight | result |
|---|---|---|---|---|---|---|
| SeSame-point | 85.25 | 76.83 | 71.60 | pointrcnn_sem_painted.yaml | pointrcnn_epoch80.pth | log |
| SeSame-voxel | 81.51 | 75.05 | 70.53 | second_sem_painted.yaml | second_epoch80.pth | log |
| SeSame-pillar | 83.88 | 73.85 | 68.65 | pointpillar_sem_painted.yaml | pointpillar_epoch80.pth | log |

BEV detection (car)

| model | AP_easy | AP_mod | AP_hard | config | pretrained weight | result |
|---|---|---|---|---|---|---|
| SeSame-point | 90.84 | 87.49 | 83.77 | pointrcnn_sem_painted.yaml | pointrcnn_epoch80.pth | log |
| SeSame-voxel | 89.86 | 85.62 | 80.95 | second_sem_painted.yaml | second_epoch80.pth | log |
| SeSame-pillar | 90.61 | 86.88 | 81.93 | pointpillar_sem_painted.yaml | pointpillar_epoch80.pth | log |

Contents

Requirements

  • CUDA 10.2
  • NVIDIA TITAN RTX
  • pcdet : 0.3.0+0
  • spconv : 2.3.6
  • torch : 1.10.1
  • torchvision : 0.11.2
  • torch-scatter : 2.1.2

If your CUDA version is not 10.2, it is better to install these packages manually, matching your setup.

The provided environment.yaml targets CUDA 10.2.
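To confirm that an existing environment matches these pins before training, a small check like the following can report mismatches. The package names and version strings are copied from the list above; `check_versions` itself is a hypothetical helper, not part of this repository.

```python
import importlib

# Versions pinned in the Requirements list above.
PINNED = {
    "torch": "1.10.1",
    "torchvision": "0.11.2",
    "spconv": "2.3.6",
    "torch_scatter": "2.1.2",
}

def check_versions(pinned):
    """Return {package: (installed, expected, ok)} for each pinned package."""
    report = {}
    for name, expected in pinned.items():
        try:
            mod = importlib.import_module(name)
            installed = getattr(mod, "__version__", "unknown")
        except ImportError:
            installed = None  # package is not installed at all
        report[name] = (installed, expected, installed == expected)
    return report
```

Any entry whose third field is `False` points at a package to reinstall before proceeding.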

Setup

git clone https://github.com/HAMA-DL-dev/SeSame.git
cd SeSame
conda env create -f environment.yaml

Datasets

KITTI 3D object detection (link)

/path/to/your/kitti
├── ImageSets
├── training
│   ├── labels_cylinder3d        # <--- segmented point clouds from 3D sem. seg.
│   ├── segmented_lidar          # <--- feature-concatenated point clouds
│   ├── velodyne                 # <--- raw point clouds
│   ├── planes
│   ├── image_2
│   ├── image_3
│   ├── label_2
│   └── calib
├── kitti_infos_train.pkl
└── kitti_infos_val.pkl
| dataset | number of samples | index file | dataset infos |
|---|---|---|---|
| train | 3712 / 7481 | train.txt | kitti_infos_train.pkl |
| val | 3769 / 7481 | val.txt | kitti_infos_val.pkl |
| test | 7518 | test.txt | N/A |

For more information about the *.pkl files, see this documentation: mmdetection3d-create-kitti-dataset
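A quick sanity check over the directory layout shown above can catch path mistakes before the later steps fail. This sketch takes its directory names from the tree above; the helper name and return shape are assumptions.

```python
from pathlib import Path

# Directories expected under the KITTI root, per the layout above.
REQUIRED_DIRS = [
    "training/velodyne",
    "training/image_2",
    "training/image_3",
    "training/label_2",
    "training/calib",
    "training/planes",
    "training/labels_cylinder3d",
    "training/segmented_lidar",
]

def missing_entries(kitti_root):
    """Return the subset of REQUIRED_DIRS that does not exist under kitti_root."""
    root = Path(kitti_root)
    return [d for d in REQUIRED_DIRS if not (root / d).is_dir()]
```

An empty return value means the layout matches; otherwise the list names exactly what still needs to be created or downloaded.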

Segment point clouds

[Step 1] Download the pretrained weights from this link

[Step 2] Modify the related paths as shown below

semantickitti.yaml (link) : path to the downloaded weight

painting_cylinder3d.py (link) : path to your KITTI and semantic-kitti configs

# point clouds from KITTI 3D object detection dataset
TRAINING_PATH = "/path/to/your/SeSame/detector/data/kitti/training/velodyne/"

# semantic map of Semantic KITTI dataset
SEMANTIC_KITTI_PATH = "/path/to/your/SeSame/detector/tools/cfgs/dataset_configs/semantic-kitti.yaml" 

[Step 3] Segment the raw point clouds from the KITTI object detection dataset

cd /path/to/your/kitti/training
mkdir segmented_lidar
mkdir labels_cylinder3d
cd /path/to/your/SeSame/segment/

python demo_folder.py --demo-folder /path/to/your/kitti/training/velodyne/ --save-folder /path/to/your/kitti/training/labels_cylinder3d/

python pointpainting_cylinder3d.py
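Conceptually, the painting step concatenates per-point semantic information from the segmentation model onto the raw (x, y, z, intensity) features. The following is a minimal sketch of that idea, assuming one class label per point and a one-hot encoding; the actual feature layout produced by pointpainting_cylinder3d.py may differ.

```python
import numpy as np

def paint_points(points, labels, num_classes):
    """Append a one-hot semantic encoding to each point's features.

    points: (N, 4) array of x, y, z, intensity
    labels: (N,) integer class labels, one per point
    returns: (N, 4 + num_classes) "painted" points
    """
    one_hot = np.eye(num_classes, dtype=points.dtype)[labels]  # (N, num_classes)
    return np.concatenate([points, one_hot], axis=1)
```

The detector then consumes these widened per-point features in place of the raw point cloud.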

Generate GT database

cd detector/tools
python -m pcdet.datasets.kitti.sem_painted_kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/semantic_painted_kitti.yaml

Train

cd ~/SeSame/detector/tools
python train.py --cfg_file cfgs/kitti_models/${model.yaml} --batch_size 16 --epochs 80 --workers 16 --ckpt_save_interval 5

example

python train.py --cfg_file cfgs/kitti_models/pointpillar_sem_painted.yaml --batch_size 16 --epochs 80 --workers 16 --ckpt_save_interval 5

If you stop the training process by mistake, don't worry.

You can resume training with the option --start_epoch ${epoch number}
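Resuming requires the epoch number of the last saved checkpoint. A hypothetical helper that finds it, assuming the checkpoint_epoch_<N>.pth naming visible in the Test example below:

```python
import re
from pathlib import Path

def latest_checkpoint(ckpt_dir):
    """Return (epoch, path) of the newest checkpoint_epoch_<N>.pth, or None."""
    best = None
    for p in Path(ckpt_dir).glob("checkpoint_epoch_*.pth"):
        m = re.fullmatch(r"checkpoint_epoch_(\d+)\.pth", p.name)
        if m:
            epoch = int(m.group(1))
            if best is None or epoch > best[0]:
                best = (epoch, p)
    return best
```

The returned epoch is what you would pass to --start_epoch.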

Test

python test.py --cfg_file ${configuration file of each model with *.yaml} --batch_size ${4,8,16} --workers 4 --ckpt ${path to *.pth file} --save_to_file

example

python test.py --cfg_file ../output/kitti_models/pointpillar_sem_painted/default/pointpillar_sem_painted.yaml --batch_size 16 --workers 4 --ckpt ../output/kitti_models/pointpillar_sem_painted/default/ckpt/checkpoint_epoch_70.pth --save_to_file

Acknowledgments

Thanks to the open-source code from Cylinder3D, PointPainting, and OpenPCDet
