[ECCV 2024] GLARE: Low Light Image Enhancement via Generative Latent Feature based Codebook Retrieval [Paper]

Han Zhou1,*, Wei Dong1,*, Xiaohong Liu2,†, Shuaicheng Liu3, Xiongkuo Min2, Guangtao Zhai2, Jun Chen1,†

1McMaster University, 2Shanghai Jiao Tong University,

3University of Electronic Science and Technology of China

*Equal Contribution, †Corresponding Authors

Introduction

This repository represents the official implementation of our ECCV 2024 paper titled GLARE: Low Light Image Enhancement via Generative Latent Feature based Codebook Retrieval. If you find this repo useful, please give it a star ⭐ and consider citing our paper in your research. Thank you.

License

We present GLARE, a novel network for low-light image enhancement.

  • Codebook-based LLIE: exploits normal-light (NL) images to extract an NL codebook prior as guidance.
  • Generative Feature Learning: develops an invertible latent normalizing-flow strategy for feature alignment.
  • Adaptive Feature Transformation: adaptively introduces input information into the decoding process, allowing flexible adjustment by users.
  • Future: the network structure can be further optimized to improve efficiency and performance.
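The adaptive feature transformation can be pictured as a user-weighted fusion of the codebook-derived decoder features with features from the input image. The sketch below is purely illustrative and assumes nothing about the paper's actual module: the function name `adaptive_fuse` and the scalar weight `w` are ours, and the real network learns this transformation rather than applying a fixed linear blend.

```python
import numpy as np

def adaptive_fuse(decoder_feat: np.ndarray, input_feat: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Blend codebook-derived decoder features with input-image features.

    `w` is a hypothetical user-adjustable weight: w=0 keeps only the
    codebook prior, w=1 keeps only the input information.
    """
    assert decoder_feat.shape == input_feat.shape, "feature maps must match"
    return (1.0 - w) * decoder_feat + w * input_feat
```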

Overall Framework

[Teaser figure: overall framework of GLARE]

📢 News

2024-07-21: Inference code and pre-trained models for LOL are released! Feel free to use them. ⭐
2024-07-21: License is updated to Apache License, Version 2.0. 💫
2024-07-19: Paper is available at arXiv:2407.12431. 🎉
2024-07-01: Our paper has been accepted by ECCV 2024. Code and Models will be released. 🚀

∞ TODO

  • 🔜 Inference code for real-world (unpaired) images.
  • 🔜 Pre-trained models for LOLv2-real and LOLv2-synthetic datasets.
  • 🔜 Google Colab demo.
  • Hugging Face Space (optional).

🛠️ Setup

The inference code was tested on:

  • Ubuntu 22.04 LTS, Python 3.8, CUDA 11.3, and a GeForce RTX 2080 Ti or a GPU with more memory.

📦 Repository

Clone the repository (requires git):

git clone https://github.com/LowLevelAI/GLARE.git
cd GLARE

💻 Dependencies

  • Make Conda Environment: Using Conda to create the environment:

    conda create -n glare python=3.8
    conda activate glare
  • Then install dependencies:

    conda install pytorch=1.11 torchvision cudatoolkit=11.3 -c pytorch
    pip install addict future lmdb numpy opencv-python Pillow pyyaml requests scikit-image scipy tqdm yapf einops tb-nightly natsort
    pip install pyiqa==0.1.4 
    pip install pytorch_lightning==1.6.0
    pip install --force-reinstall charset-normalizer==3.1.0
  • Build CUDA extensions:

    cd GLARE/defor_cuda_ext
    BASICSR_EXT=True python setup.py develop
  • Move the compiled CUDA extension (/GLARE/defor_cuda_ext/basicsr/ops/dcn/deform_conv_ext.xxxxxx.so) to the path: /GLARE/code/models/modules/ops/dcn/.
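The last step above relocates the freshly built shared object into the model code tree. If you prefer to script it, a small helper like the following can copy any matching `deform_conv_ext*.so` file (the helper name `relocate_ext` and the glob pattern are our own; the source and destination paths come from the instructions above, so adjust them to your checkout):

```python
import glob
import shutil
from pathlib import Path

def relocate_ext(src_dir: str, dst_dir: str) -> list:
    """Copy any compiled deform_conv_ext*.so files from src_dir to dst_dir.

    Returns the list of file names copied, so the caller can verify
    that the build actually produced an extension.
    """
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for so in glob.glob(str(Path(src_dir) / "deform_conv_ext*.so")):
        shutil.copy2(so, dst)
        copied.append(Path(so).name)
    return copied

# Paths taken from the README; run from the repository root:
# relocate_ext("defor_cuda_ext/basicsr/ops/dcn", "code/models/modules/ops/dcn")
```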

🏃 Testing on benchmark datasets

📷 Download following datasets:

  • LOL (Google Drive)

  • LOL-v2 (Google Drive)

⬇ Download pre-trained models

Download the pre-trained weights and place them in the folder pretrained_weights. More pre-trained models will be provided soon.

🚀 Run inference

python code/infer_dataset.py

You can find all results in results/. Enjoy!
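For a quick sanity check on the enhanced images in results/, you can compute PSNR against the paired ground truth. The sketch below uses plain NumPy (the helper name `psnr` is ours; for the metrics reported in the paper, the pyiqa package already in the dependency list is the more standard choice):

```python
import numpy as np

def psnr(pred: np.ndarray, gt: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)
```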

🏋️ Training

Coming Soon~

✏️ Contributing

Please refer to the contributing instructions.

🎓 Citation

Please cite our paper:

@InProceedings{Han_ECCV24_GLARE,
    author    = {Zhou, Han and Dong, Wei and Liu, Xiaohong and Liu, Shuaicheng and Min, Xiongkuo and Zhai, Guangtao and Chen, Jun},
    title     = {GLARE: Low Light Image Enhancement via Generative Latent Feature based Codebook Retrieval},
    booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
    year      = {2024}
}

@article{GLARE,
      title = {GLARE: Low Light Image Enhancement via Generative Latent Feature based Codebook Retrieval}, 
      author = {Zhou, Han and Dong, Wei and Liu, Xiaohong and Liu, Shuaicheng and Min, Xiongkuo and Zhai, Guangtao and Chen, Jun},
      journal = {arXiv preprint arXiv:2407.12431},
      year = {2024}
}

🎫 License

This work is licensed under the Apache License, Version 2.0 (as defined in the LICENSE).

By downloading and using the code and model you agree to the terms in the LICENSE.
