
GLIMS: Attention-guided lightweight multi-scale hybrid network for volumetric semantic segmentation

This repository contains the official code of GLIMS, our BraTS 2023 submission.

GLIMS ranked in the top 5 among 65 unique submissions during the validation phase of the BraTS 2023 Adult Glioblastoma Segmentation challenge.

Installation

Clone the repository

git clone https://github.com/yaziciz/GLIMS.git
cd GLIMS

Install the required dependencies
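The steps below assume an activated Python virtual environment. If you do not have one yet, a minimal setup with Python's built-in venv module looks like this (the environment name .venv is just an example):

python -m venv .venv
source .venv/bin/activate  # on Windows: .venv\Scripts\activate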

With your virtual environment activated, install the project's dependencies:

pip install -r requirements.txt

Usage Instructions

Running the Main Script

GLIMS can be trained on the BraTS 2023 dataset with the following script:

python main.py --output_dir <output_directory> --data_dir <data_directory> --json_list <json_list_file> --fold <fold_id>
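For example, a training run on fold 0 might look like the following; the paths are placeholders, so substitute your own output directory, dataset location, and data-split JSON file:

python main.py --output_dir ./runs/glims --data_dir ./data/BraTS2023 --json_list ./data/brats23_folds.json --fold 0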

Validation

With a pre-trained model, validation can be performed as follows:

python post_validation.py --output_dir <output_directory> --data_dir <data_directory> --json_list <json_list_file> --fold <fold_id> --pretrained_dir <pretrained_model_directory>
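For example, validating the fold 0 model might look like this (again, the paths are illustrative):

python post_validation.py --output_dir ./runs/glims --data_dir ./data/BraTS2023 --json_list ./data/brats23_folds.json --fold 0 --pretrained_dir ./checkpoints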

Testing with Model Ensembles

To test GLIMS with the ensemble method on the unannotated BraTS 2023 dataset, use the following script:

python test_BraTS.py --data_dir <validation_data_directory> --model_ensemble_1 <model_1_path> --model_ensemble_2 <model_2_path> --output_dir <output_directory>

The model_ensemble_1 and model_ensemble_2 arguments point to the fold 2 and fold 4 model checkpoints, as described in our challenge submission paper on arXiv.
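For example (the checkpoint file names and directories below are illustrative; point the arguments at your own fold 2 and fold 4 models):

python test_BraTS.py --data_dir ./data/BraTS2023_validation --model_ensemble_1 ./checkpoints/glims_fold2.pt --model_ensemble_2 ./checkpoints/glims_fold4.pt --output_dir ./predictions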

Citations

GLIMS: Attention-guided lightweight multi-scale hybrid network for volumetric semantic segmentation
Image and Vision Computing, May 2024
Journal Paper, arXiv

@article{yazici2024glims,
  title={GLIMS: Attention-guided lightweight multi-scale hybrid network for volumetric semantic segmentation},
  author={Yazici, Ziya Ata and Oksuz, Ilkay and Ekenel, Hazim Kemal},
  journal={Image and Vision Computing},
  pages={105055},
  year={2024},
  publisher={Elsevier},
  doi={10.1016/j.imavis.2024.105055}
}

Attention-Enhanced Hybrid Feature Aggregation Network for 3D Brain Tumor Segmentation
Accepted to the 9th Brain Lesion (BrainLes) Workshop @ MICCAI 2023
arXiv

@article{yazici2024attention,
  title={Attention-Enhanced Hybrid Feature Aggregation Network for 3D Brain Tumor Segmentation},
  author={Yazici, Ziya Ata and Oksuz, Ilkay and Ekenel, Hazim Kemal},
  journal={arXiv preprint arXiv:2403.09942},
  year={2024}
}

Thank you for your interest in our work!

We are also deeply grateful to the MONAI Consortium for their MONAI framework, which was instrumental in the development of GLIMS.
