
Awesome Long-Tailed Learning (TPAMI 2023)


We released Deep Long-Tailed Learning: A Survey and our codebase to the community. In this survey, we reviewed recent advances in long-tailed learning based on deep neural networks. Existing long-tailed learning studies can be grouped into three main categories (class re-balancing, information augmentation, and module improvement), which can be further divided into nine sub-categories (as summarized in the taxonomy table below). We also provided an empirical analysis of several state-of-the-art methods, evaluating the extent to which they address the issue of class imbalance. We concluded the survey by highlighting important applications of deep long-tailed learning and identifying several promising directions for future research.

After completing this survey, we decided to release our long-tailed learning resources and codebase, in the hope of advancing the community. If you have any questions or suggestions, please feel free to contact us.

1. Types of Long-tailed Learning

Symbol | Type
Sampling | Re-sampling
CSL | Class-sensitive Learning
LA | Logit Adjustment
TL | Transfer Learning
Aug | Data Augmentation
RL | Representation Learning
CD | Classifier Design
DT | Decoupled Training
Ensemble | Ensemble Learning
Other | Other Types
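
As a concrete illustration of the first type, below is a minimal sketch of class-balanced re-sampling using PyTorch's WeightedRandomSampler. The function and variable names are illustrative assumptions, not code from this repository.

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def class_balanced_loader(dataset, labels, batch_size=128):
    """Re-sampling: draw every class with (roughly) equal probability.

    Each sample is weighted by 1 / n_c of its class, so tail-class images
    are over-sampled and head-class images under-sampled within an epoch.
    """
    labels = torch.as_tensor(labels)
    class_counts = torch.bincount(labels)           # n_c per class
    sample_weights = 1.0 / class_counts[labels].float()
    sampler = WeightedRandomSampler(sample_weights,
                                    num_samples=len(labels),
                                    replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```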

2. Top-tier Conference Papers

2023

Title | Venue | Year | Type | Code
Long-tailed recognition by mutual information maximization between latent features and ground-truth labels | ICML | 2023 | CSL,RL | Official
Large language models struggle to learn long-tail knowledge | ICML | 2023 | Aug |
Feature directions matter: Long-tailed learning via rotated balanced representation | ICML | 2023 | RL |
Wrapped Cauchy distributed angular softmax for long-tailed visual recognition | ICML | 2023 | RL,CD | Official
Rethinking image super resolution from long-tailed distribution learning perspective | CVPR | 2023 | CSL |
Transfer knowledge from head to tail: Uncertainty calibration under long-tailed distribution | CVPR | 2023 | CSL,TL | Official
Towards realistic long-tailed semi-supervised learning: Consistency is all you need | CVPR | 2023 | CSL,TL,Ensemble | Official
Global and local mixture consistency cumulative learning for long-tailed visual recognitions | CVPR | 2023 | CSL,RL | Official
Long-tailed visual recognition via self-heterogeneous integration with knowledge excavation | CVPR | 2023 | TL,Ensemble | Official
Balancing logit variation for long-tailed semantic segmentation | CVPR | 2023 | Aug | Official
Use your head: Improving long-tail video recognition | CVPR | 2023 | Aug | Official
FCC: Feature clusters compression for long-tailed visual recognition | CVPR | 2023 | RL | Official
FEND: A future enhanced distribution-aware contrastive learning framework for long-tail trajectory prediction | CVPR | 2023 | RL |
SuperDisco: Super-class discovery improves visual recognition for the long-tail | CVPR | 2023 | RL |
Class-conditional sharpness-aware minimization for deep long-tailed recognition | CVPR | 2023 | DT | Official
Balanced product of calibrated experts for long-tailed recognition | CVPR | 2023 | Ensemble | Official
No one left behind: Improving the worst categories in long-tailed learning | CVPR | 2023 | Ensemble |
On the effectiveness of out-of-distribution data in self-supervised long-tail learning | ICLR | 2023 | Sampling,TL,Aug | Official
LPT: Long-tailed prompt tuning for image classification | ICLR | 2023 | Sampling,TL,Other | Official
Long-tailed partial label learning via dynamic rebalancing | ICLR | 2023 | CSL | Official
Delving into semantic scale imbalance | ICLR | 2023 | CSL,RL |
INPL: Pseudo-labeling the inliers first for imbalanced semi-supervised learning | ICLR | 2023 | TL |
CUDA: Curriculum of data augmentation for long-tailed recognition | ICLR | 2023 | Aug | Official
Long-tailed learning requires feature learning | ICLR | 2023 | RL |
Decoupled training for long-tailed classification with stochastic representations | ICLR | 2023 | RL,DT |

2022

Title | Venue | Year | Type | Code
Self-supervised aggregation of diverse experts for test-agnostic long-tailed recognition | NeurIPS | 2022 | CSL,Ensemble | Official
SoLar: Sinkhorn label refinery for imbalanced partial-label learning | NeurIPS | 2022 | CSL | Official
Do we really need a learnable classifier at the end of deep neural network? | NeurIPS | 2022 | RL,CD |
Maximum class separation as inductive bias in one matrix | NeurIPS | 2022 | CD | Official
Escaping saddle points for effective generalization on class-imbalanced data | NeurIPS | 2022 | Other | Official
Breadcrumbs: Adversarial class-balanced sampling for long-tailed recognition | ECCV | 2022 | Sampling,Aug,DT | Official
Constructing balance from imbalance for long-tailed image recognition | ECCV | 2022 | Sampling,RL | Official
Tackling long-tailed category distribution under domain shifts | ECCV | 2022 | CSL,Aug,RL | Official
Improving GANs for long-tailed data through group spectral regularization | ECCV | 2022 | CSL,Other | Official
Learning class-wise visual-linguistic representation for long-tailed visual recognition | ECCV | 2022 | TL,RL | Official
Learning with free object segments for long-tailed instance segmentation | ECCV | 2022 | Aug |
SAFA: Sample-adaptive feature augmentation for long-tailed image classification | ECCV | 2022 | Aug,RL |
On multi-domain long-tailed recognition, imbalanced domain generalization, and beyond | ECCV | 2022 | RL | Official
Invariant feature learning for generalized long-tailed classification | ECCV | 2022 | RL | Official
Towards calibrated hyper-sphere representation via distribution overlap coefficient for long-tailed learning | ECCV | 2022 | RL,CD | Official
Long-tailed instance segmentation using Gumbel optimized loss | ECCV | 2022 | CD | Official
Long-tailed class incremental learning | ECCV | 2022 | DT | Official
Identifying hard noise in long-tailed sample distribution | ECCV | 2022 | Other | Official
Relieving long-tailed instance segmentation via pairwise class balance | CVPR | 2022 | CSL | Official
The majority can help the minority: Context-rich minority oversampling for long-tailed classification | CVPR | 2022 | TL,Aug | Official
Long-tail recognition via compositional knowledge transfer | CVPR | 2022 | TL,RL |
BatchFormer: Learning to explore sample relationships for robust representation learning | CVPR | 2022 | TL,RL | Official
Nested collaborative learning for long-tailed visual recognition | CVPR | 2022 | RL,Ensemble | Official
Long-tailed recognition via weight balancing | CVPR | 2022 | DT | Official
Class-balanced pixel-level self-labeling for domain adaptive semantic segmentation | CVPR | 2022 | Other | Official
Killing two birds with one stone: Efficient and robust training of face recognition CNNs by partial FC | CVPR | 2022 | Other | Official
Optimal transport for long-tailed recognition with learnable cost matrix | ICLR | 2022 | LA |
Do deep networks transfer invariances across classes? | ICLR | 2022 | TL,Aug | Official
Self-supervised learning is more robust to dataset imbalance | ICLR | 2022 | RL |

2021

Title | Venue | Year | Type | Code
Improving contrastive learning on imbalanced seed data via open-world sampling | NeurIPS | 2021 | Sampling,TL,CD | Official
Semi-supervised semantic segmentation via adaptive equalization learning | NeurIPS | 2021 | Sampling,CSL,TL,Aug | Official
On model calibration for long-tailed object detection and instance segmentation | NeurIPS | 2021 | LA | Official
Label-imbalanced and group-sensitive classification under overparameterization | NeurIPS | 2021 | LA |
Towards calibrated model for long-tailed visual recognition from prior perspective | NeurIPS | 2021 | Aug,RL | Official
Supercharging imbalanced data learning with energy-based contrastive representation transfer | NeurIPS | 2021 | Aug,TL,RL | Official
VideoLT: Large-scale long-tailed video recognition | ICCV | 2021 | Sampling | Official
Exploring classification equilibrium in long-tailed object detection | ICCV | 2021 | Sampling,CSL | Official
GistNet: A geometric structure transfer network for long-tailed recognition | ICCV | 2021 | Sampling,TL,CD |
FASA: Feature augmentation and sampling adaptation for long-tailed instance segmentation | ICCV | 2021 | Sampling,CSL |
ACE: Ally complementary experts for solving long-tailed recognition in one-shot | ICCV | 2021 | Sampling,Ensemble | Official
Influence-balanced loss for imbalanced visual classification | ICCV | 2021 | CSL | Official
Re-distributing biased pseudo labels for semi-supervised semantic segmentation: A baseline investigation | ICCV | 2021 | TL | Official
Self supervision to distillation for long-tailed visual recognition | ICCV | 2021 | TL | Official
Distilling virtual examples for long-tailed recognition | ICCV | 2021 | TL |
MosaicOS: A simple and effective use of object-centric images for long-tailed object detection | ICCV | 2021 | TL | Official
Parametric contrastive learning | ICCV | 2021 | RL | Official
Distributional robustness loss for long-tail learning | ICCV | 2021 | RL | Official
Learning of visual relations: The devil is in the tails | ICCV | 2021 | DT |
Image-level or object-level? A tale of two resampling strategies for long-tailed detection | ICML | 2021 | Sampling | Official
Self-damaging contrastive learning | ICML | 2021 | TL,RL | Official
Delving into deep imbalanced regression | ICML | 2021 | Other | Official
Long-tailed multi-label visual recognition by collaborative training on uniform and re-balanced samplings | CVPR | 2021 | Sampling,Ensemble |
Equalization loss v2: A new gradient balance approach for long-tailed object detection | CVPR | 2021 | CSL | Official
Seesaw loss for long-tailed instance segmentation | CVPR | 2021 | CSL | Official
Adaptive class suppression loss for long-tail object detection | CVPR | 2021 | CSL | Official
PML: Progressive margin loss for long-tailed age classification | CVPR | 2021 | CSL |
Disentangling label distribution for long-tailed visual recognition | CVPR | 2021 | CSL,LA | Official
Adversarial robustness under long-tailed distribution | CVPR | 2021 | CSL,LA,CD | Official
Distribution alignment: A unified framework for long-tail visual recognition | CVPR | 2021 | CSL,LA,DT | Official
Improving calibration for long-tailed recognition | CVPR | 2021 | CSL,Aug,DT | Official
CReST: A class-rebalancing self-training framework for imbalanced semi-supervised learning | CVPR | 2021 | TL | Official
Conceptual 12M: Pushing web-scale image-text pre-training to recognize long-tail visual concepts | CVPR | 2021 | TL | Official
RSG: A simple but effective module for learning imbalanced datasets | CVPR | 2021 | TL,Aug | Official
MetaSAug: Meta semantic augmentation for long-tailed visual recognition | CVPR | 2021 | Aug | Official
Contrastive learning based hybrid networks for long-tailed image classification | CVPR | 2021 | RL |
Unsupervised discovery of the long-tail in instance segmentation using hierarchical self-supervision | CVPR | 2021 | RL |
Long-tail learning via logit adjustment | ICLR | 2021 | LA | Official
Long-tailed recognition by routing diverse distribution-aware experts | ICLR | 2021 | TL,Ensemble | Official
Exploring balanced feature spaces for representation learning | ICLR | 2021 | RL,DT |

2020

Title | Venue | Year | Type | Code
Balanced meta-softmax for long-tailed visual recognition | NeurIPS | 2020 | Sampling,CSL | Official
Posterior recalibration for imbalanced datasets | NeurIPS | 2020 | LA | Official
Long-tailed classification by keeping the good and removing the bad momentum causal effect | NeurIPS | 2020 | LA,CD | Official
Rethinking the value of labels for improving class-imbalanced learning | NeurIPS | 2020 | TL,RL | Official
The devil is in classification: A simple framework for long-tail instance segmentation | ECCV | 2020 | Sampling,DT,Ensemble | Official
Imbalanced continual learning with partitioning reservoir sampling | ECCV | 2020 | Sampling | Official
Distribution-balanced loss for multi-label classification in long-tailed datasets | ECCV | 2020 | CSL | Official
Feature space augmentation for long-tailed data | ECCV | 2020 | TL,Aug,DT |
Learning from multiple experts: Self-paced knowledge distillation for long-tailed classification | ECCV | 2020 | TL,Ensemble | Official
Solving long-tailed recognition with deep realistic taxonomic classifier | ECCV | 2020 | CD | Official
Learning to segment the tail | CVPR | 2020 | Sampling,TL | Official
BBN: Bilateral-branch network with cumulative learning for long-tailed visual recognition | CVPR | 2020 | Sampling,Ensemble | Official
Overcoming classifier imbalance for long-tail object detection with balanced group softmax | CVPR | 2020 | Sampling,Ensemble | Official
Rethinking class-balanced methods for long-tailed visual recognition from a domain adaptation perspective | CVPR | 2020 | CSL | Official
Equalization loss for long-tailed object recognition | CVPR | 2020 | CSL | Official
Domain balancing: Face recognition on long-tailed domains | CVPR | 2020 | CSL |
M2m: Imbalanced classification via major-to-minor translation | CVPR | 2020 | TL,Aug | Official
Deep representation learning on long-tailed data: A learnable embedding augmentation perspective | CVPR | 2020 | TL,Aug,RL |
Inflated episodic memory with region self-attention for long-tailed visual recognition | CVPR | 2020 | RL |
Decoupling representation and classifier for long-tailed recognition | ICLR | 2020 | Sampling,CSL,RL,CD,DT | Official

2019

Title | Venue | Year | Type | Code
Meta-weight-net: Learning an explicit mapping for sample weighting | NeurIPS | 2019 | CSL | Official
Learning imbalanced datasets with label-distribution-aware margin loss | NeurIPS | 2019 | CSL | Official
Dynamic curriculum learning for imbalanced data classification | ICCV | 2019 | Sampling |
Class-balanced loss based on effective number of samples | CVPR | 2019 | CSL | Official
Striking the right balance with uncertainty | CVPR | 2019 | CSL |
Feature transfer learning for face recognition with under-represented data | CVPR | 2019 | TL,Aug |
Unequal-training for deep face recognition with long-tailed noisy data | CVPR | 2019 | RL | Official
Large-scale long-tailed recognition in an open world | CVPR | 2019 | RL | Official

2018

Title | Venue | Year | Type | Code
Large scale fine-grained categorization and domain-specific transfer learning | CVPR | 2018 | TL | Official

2017

Title | Venue | Year | Type | Code
Learning to model the tail | NeurIPS | 2017 | CSL |
Focal loss for dense object detection | ICCV | 2017 | CSL |
Range loss for deep face recognition with long-tailed training data | ICCV | 2017 | RL |
Class rectification hard mining for imbalanced deep learning | ICCV | 2017 | RL |

2016

Title | Venue | Year | Type | Code
Learning deep representation for imbalanced classification | CVPR | 2016 | Sampling,RL |
Factors in finetuning deep model for object detection with long-tail distribution | CVPR | 2016 | CSL,RL |

3. Benchmark Datasets

Dataset | Long-tailed Task | # Classes | # Training data | # Test data
ImageNet-LT | Classification | 1,000 | 115,846 | 50,000
CIFAR100-LT | Classification | 100 | 50,000 | 10,000
Places-LT | Classification | 365 | 62,500 | 36,500
iNaturalist 2018 | Classification | 8,142 | 437,513 | 24,426
LVIS v0.5 | Detection and Segmentation | 1,230 | 57,000 | 20,000
LVIS v1 | Detection and Segmentation | 1,203 | 100,000 | 19,800
VOC-LT | Multi-label Classification | 20 | 1,142 | 4,952
COCO-LT | Multi-label Classification | 80 | 1,909 | 5,000
VideoLT | Video Classification | 1,004 | 179,352 | 25,622
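
As a note on how such benchmarks are constructed, CIFAR100-LT is commonly built by subsampling the balanced CIFAR-100 training set with an exponentially decaying class-size profile. The sketch below shows the standard per-class count formula; the imbalance factor of 100 is one common setting in the literature, not a value fixed by this repository.

```python
def longtail_class_counts(num_classes=100, max_per_class=500, imb_factor=100):
    """Per-class sample counts under an exponential long-tailed profile.

    Class i keeps max_per_class * (1/imb_factor) ** (i / (num_classes - 1))
    samples: the head class keeps all 500 CIFAR-100 images per class, and
    the tail class keeps 500 / 100 = 5.
    """
    return [int(max_per_class * (1.0 / imb_factor) ** (i / (num_classes - 1)))
            for i in range(num_classes)]

counts = longtail_class_counts()
print(counts[0], counts[-1])  # 500 5
```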

4. Our Codebase

  • To use our codebase, please install requirements:
    pip install -r requirements.txt
    
  • Hardware requirements: four GPUs, each with at least 23 GB of GPU memory, are recommended.
  • ImageNet-LT dataset: please download the ImageNet-1K dataset and place it in the ./data directory:
    data
    └──ImageNet
        ├── train
        └── val
    
  • Softmax:
    cd ./Main-codebase 
    Training: python3 main.py --seed 1 --cfg config/ImageNet_LT/ce.yaml  --exp_name imagenet/CE  --gpu 0,1,2,3 
    
  • Weighted Softmax:
    cd ./Main-codebase 
    Training: python3 main.py --seed 1 --cfg config/ImageNet_LT/weighted_ce.yaml  --exp_name imagenet/weighted_ce  --gpu 0,1,2,3
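    For reference, the weighted Softmax above corresponds to re-weighting the cross-entropy loss by inverse class frequency. A minimal sketch of the common inverse-frequency form follows (an assumption for illustration; the exact weighting in weighted_ce.yaml may differ):
    ```python
    import torch
    import torch.nn.functional as F

    def weighted_softmax_loss(logits, labels, class_counts):
        """Cross-entropy with inverse-frequency class weights.

        Weighting class c by total / (num_classes * n_c) makes each class
        contribute roughly equally to the loss despite skewed counts.
        """
        counts = class_counts.float()
        weights = counts.sum() / (counts.numel() * counts)
        return F.cross_entropy(logits, labels, weight=weights)
    ```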
    
  • ESQL (Equalization loss):
    cd ./Main-codebase 
    Training: python3 main.py --seed 1 --cfg config/ImageNet_LT/seql.yaml  --exp_name imagenet/seql  --gpu 0,1,2,3
    
  • Balanced Softmax:
    cd ./Main-codebase 
    Training: python3 main.py --seed 1 --cfg config/ImageNet_LT/balanced_softmax.yaml  --exp_name imagenet/BS  --gpu 0,1,2,3
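    Conceptually, Balanced Softmax compensates for the skewed label distribution by adding the log class frequencies to the logits before applying cross-entropy. A minimal sketch of this idea (not the exact implementation behind balanced_softmax.yaml):
    ```python
    import torch
    import torch.nn.functional as F

    def balanced_softmax_loss(logits, labels, class_counts):
        """Balanced Softmax: shift each class logit by log(n_c), then apply CE.

        Head classes receive a larger additive offset, so minimizing the loss
        pushes raw tail-class logits up relative to head-class logits.
        """
        log_counts = torch.log(class_counts.float() + 1e-12)
        return F.cross_entropy(logits + log_counts, labels)
    ```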
    
  • LADE:
    cd ./Main-codebase 
    Training: python3 main.py --seed 1 --cfg config/ImageNet_LT/lade.yaml  --exp_name imagenet/LADE  --gpu 0,1,2,3
    
  • De-confound (Causal):
    cd ./Main-codebase 
    Training: python3 main.py --seed 1 --cfg config/ImageNet_LT/causal.yaml  --exp_name imagenet/causal --remine_lambda 0.1 --alpha 0.005 --gpu 0,1,2,3
    
  • Decouple (IB-CRT):
    cd ./Main-codebase 
    Training stage 1: python3 main.py --seed 1 --cfg config/ImageNet_LT/ce.yaml  --exp_name imagenet/CE  --gpu 0,1,2,3 
    Training stage 2: python3  main.py --cfg ./config/ImageNet_LT/cls_crt.yaml --model_dir exp_results/imagenet/CE/final_model_checkpoint.pth  --gpu 0,1,2,3 
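    The two stages above follow the decoupled training recipe: stage 1 learns representations under instance-balanced sampling, and stage 2 re-trains only the classifier on a class-balanced data stream. Below is a hedged sketch of stage 2 (classifier re-training, cRT); the `fc` attribute and the balanced loader are illustrative assumptions, not this codebase's API.
    ```python
    import torch

    def retrain_classifier(model, balanced_loader, epochs=10, lr=0.1):
        """Classifier re-training (cRT): freeze the backbone, then re-train
        the final linear layer on class-balanced batches."""
        for p in model.parameters():
            p.requires_grad = False
        model.fc.reset_parameters()              # assumes an nn.Linear head
        for p in model.fc.parameters():
            p.requires_grad = True

        optimizer = torch.optim.SGD(model.fc.parameters(), lr=lr, momentum=0.9)
        criterion = torch.nn.CrossEntropyLoss()
        for _ in range(epochs):
            for images, targets in balanced_loader:  # class-balanced sampler
                optimizer.zero_grad()
                loss = criterion(model(images), targets)
                loss.backward()
                optimizer.step()
        return model
    ```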
    
  • MiSLAS:
    cd ./MiSLAS-codebase
    Training stage 1: CUDA_VISIBLE_DEVICES=0,1,2,3 python3 train_stage1.py --cfg config/imagenet/imagenet_resnext50_stage1_mixup.yaml
    Training stage 2: CUDA_VISIBLE_DEVICES=0,1,2,3 python3 train_stage2.py --cfg config/imagenet/imagenet_resnext50_stage2_mislas.yaml resume checkpoint_path
    Evaluation: CUDA_VISIBLE_DEVICES=0  python3 eval.py --cfg ./config/imagenet/imagenet_resnext50_stage2_mislas.yaml  resume checkpoint_path_stage2
    
  • RSG:
    cd ./RSG-codebase
    Training: python3 imagenet_lt_train.py 
    Evaluation: python3 imagenet_lt_test.py 
    
  • ResLT:
    cd ./ResLT-codebase
    Training: CUDA_VISIBLE_DEVICES=0,1,2,3 bash sh/X50.sh
    Evaluation: CUDA_VISIBLE_DEVICES=0 bash sh/X50_eval.sh
    # The test performance can be found in the log file.
    
  • PaCo:
    cd ./PaCo-codebase
    Training: CUDA_VISIBLE_DEVICES=0,1,2,3 bash sh/ImageNetLT_train_X50.sh
    Evaluation: CUDA_VISIBLE_DEVICES=0 bash sh/ImageNetLT_eval_X50.sh
    # The test performance can be found in the log file.
    
  • LDAM:
    cd ./Ensemble-codebase 
    Training: CUDA_VISIBLE_DEVICES=0,1,2,3 python3 train.py -c ./configs/config_imagenet_lt_resnext50_ldam.json
    Evaluation: CUDA_VISIBLE_DEVICES=0 python3 test.py -r checkpoint_path
    
  • RIDE:
    cd ./Ensemble-codebase 
    Training: CUDA_VISIBLE_DEVICES=0,1,2,3 python3 train.py -c ./configs/config_imagenet_lt_resnext50_ride.json
    Evaluation: CUDA_VISIBLE_DEVICES=0 python3 test.py -r checkpoint_path
    
  • SADE:
    cd ./Ensemble-codebase 
    Training: CUDA_VISIBLE_DEVICES=0,1,2,3 python3 train.py -c ./configs/config_imagenet_lt_resnext50_sade.json
    Evaluation: CUDA_VISIBLE_DEVICES=0 python3 test.py -r checkpoint_path
    

5. Empirical Studies

(1) Long-tailed benchmarking performance

  • We evaluate several state-of-the-art methods on ImageNet-LT to examine the extent to which they handle class imbalance, using two new evaluation metrics: UA (upper bound accuracy) and RA (relative accuracy). We categorize these methods based on class re-balancing (CR), information augmentation (IA) and module improvement (MI).

  • Almost all long-tailed methods perform better than the Softmax baseline in terms of accuracy, which demonstrates the effectiveness of long-tailed learning.
  • Training with 200 epochs leads to better performance for most long-tailed methods, since sufficient training enables deep models to fit data better and learn better image representations.
  • In addition to accuracy, we also evaluate long-tailed methods based on UA and RA. For methods with higher UA, the performance gain comes not only from alleviating class imbalance but also from other factors, such as data augmentation or better network architectures. Accuracy alone is therefore not a sufficient evaluation metric, and our proposed RA metric provides a good complement, since it factors out influences other than class imbalance (a minimal sketch of the computation follows this list).
  • For example, MiSLAS, which builds on data mixup, has higher accuracy than Balanced Softmax under 90 training epochs, but it also has higher UA. As a result, the relative accuracy of MiSLAS is lower than that of Balanced Softmax, which means that Balanced Softmax alleviates class imbalance better than MiSLAS under 90 training epochs.
  • Although some recent high-accuracy methods have lower RA, the overall development trend of long-tailed learning remains positive, as shown in the figure below.

  • The current state-of-the-art long-tailed method in terms of both accuracy and RA is SADE (an ensemble-based method).
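
To make the metrics concrete, here is a minimal sketch of relative accuracy, assuming (as the discussion above implies) that RA is the ratio of a method's accuracy under long-tailed training to its upper bound accuracy UA; see the survey for the precise definitions. The numbers in the example are hypothetical, for illustration only.

```python
def relative_accuracy(lt_accuracy, upper_bound_accuracy):
    """RA = accuracy under long-tailed training / upper bound accuracy (UA).

    A method whose raw accuracy gain comes with an even larger UA gain
    (e.g., from strong augmentation) scores a lower RA than one whose
    gain stems mainly from addressing class imbalance.
    """
    return lt_accuracy / upper_bound_accuracy

print(relative_accuracy(52.3, 58.0))  # ~0.902 (hypothetical method A)
print(relative_accuracy(51.0, 54.0))  # ~0.944 (hypothetical method B, better RA)
```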

(2) More discussions on cost-sensitive losses

  • We further evaluate the performance of different cost-sensitive learning losses based on the decoupled training scheme.
  • Compared to joint training, decoupled training further improves the overall performance of most cost-sensitive learning methods, apart from Balanced Softmax (BS).
  • Although BS outperforms other cost-sensitive losses under one-stage training, they perform comparably under decoupled training. This implies that although these cost-sensitive losses behave differently under joint training, they learn feature representations of similar quality.

6. Citation

If this repository is helpful to you, please cite our survey.

@article{zhang2023deep,
      title={Deep long-tailed learning: A survey},
      author={Zhang, Yifan and Kang, Bingyi and Hooi, Bryan and Yan, Shuicheng and Feng, Jiashi},
      journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
      year={2023},
      publisher={IEEE}
}

7. Other Resources
