SynCo: Synthetic Hard Negatives in Contrastive Learning for Better Unsupervised Visual Representations

This is a PyTorch implementation of the SynCo paper:

@article{giakoumoglou2024synco,
  author  = {Nikolaos Giakoumoglou and Tania Stathaki},
  title   = {SynCo: Synthetic Hard Negatives in Contrastive Learning for Better Unsupervised Visual Representations},
  journal = {arXiv preprint arXiv:2410.02401},
  year    = {2024},
}

Preparation

Install PyTorch and download the ImageNet dataset following the official PyTorch ImageNet training code.
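
The training script expects the standard ImageNet directory layout used by the official PyTorch ImageNet example, with one sub-folder per class under train/ and val/ (the file names below are placeholders):

[your imagenet-folder]/
  train/
    n01440764/
      xxx.JPEG
      ...
  val/
    n01440764/
      yyy.JPEG
      ...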

This repo is based on the MoCo v2 and Barlow Twins code; the differences can be inspected with:

diff main_synco.py <(curl https://raw.githubusercontent.com/facebookresearch/moco/main/main_moco.py)
diff main_lincls.py <(curl https://raw.githubusercontent.com/facebookresearch/moco/main/main_lincls.py)
diff main_semisup.py <(curl https://raw.githubusercontent.com/facebookresearch/barlowtwins/main/evaluate.py)

Unsupervised Training

This implementation only supports multi-gpu, DistributedDataParallel training, which is faster and simpler; single-gpu or DataParallel training is not supported.

To do unsupervised pre-training of a ResNet-50 model on ImageNet on an 8-gpu machine, run:

python main_synco.py \
  -a resnet50 \
  --lr 0.03 \
  --batch-size 256 \
  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
  --mlp --moco-t 0.2 --aug-plus --cos \
  --n-hard 1024 --n1 256 --n2 256 --n3 256 --n4 64 --n5 64 --n6 64 \
  [your imagenet-folder with train and val folders]

This script uses all the default hyper-parameters as described in the MoCo v2 paper.
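
Here --n-hard sets how many of the hardest queue negatives are drawn on, and --n1 through --n6 set how many synthetic negatives each of the paper's six generation strategies contributes (see the paper for the exact definitions). The snippet below is an illustration only, not the repository's code: a minimal sketch of interpolation-style hard-negative synthesis in the normalized embedding space, meant to convey the idea behind these flags; all function and variable names are assumptions.

# Illustration only (not the repository's code): synthesize hard negatives by
# mixing each query with its hardest queue negatives on the unit hypersphere.
import torch
import torch.nn.functional as F

def synthesize_hard_negatives(q, queue, n_hard=1024, n_synth=256, alpha=0.5):
    """q: (N, D) L2-normalized queries; queue: (D, K) L2-normalized negatives."""
    sim = torch.matmul(q, queue)                      # (N, K) query-to-negative similarity
    _, idx = sim.topk(n_hard, dim=1)                  # the n_hard most similar (hardest) negatives
    hard = queue.t()[idx]                             # (N, n_hard, D)
    # sample n_synth hard negatives per query and interpolate them with the query
    pick = torch.randint(0, n_hard, (q.size(0), n_synth), device=q.device)
    picked = torch.gather(hard, 1, pick.unsqueeze(-1).expand(-1, -1, q.size(1)))
    synth = alpha * q.unsqueeze(1) + (1.0 - alpha) * picked
    return F.normalize(synth, dim=-1)                 # (N, n_synth, D), re-normalized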

Linear Classification

With a pre-trained model, to train a supervised linear classifier on frozen features/weights on an 8-gpu machine, run:

python main_lincls.py \
  -a resnet50 \
  --lr 30.0 \
  --batch-size 256 \
  --pretrained [your checkpoint path]/checkpoint_0199.pth.tar \
  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
  [your imagenet-folder with train and val folders]

This script uses all the default hyper-parameters as described in the MoCo v2 paper.
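
Linear evaluation keeps the backbone frozen and trains only the final fully connected layer. For reference, a minimal sketch of that loading-and-freezing pattern, mirroring MoCo-style code (the checkpoint key names, e.g. the module.encoder_q. prefix, are assumptions carried over from MoCo checkpoints):

# Minimal sketch of MoCo-style linear-evaluation setup (not the repository's exact code).
import torch
import torchvision.models as models

model = models.resnet50()

# Assumption: the query encoder is stored under 'state_dict' with a
# 'module.encoder_q.' prefix, as in MoCo checkpoints.
ckpt = torch.load('checkpoint_0199.pth.tar', map_location='cpu')
state_dict = {
    k.replace('module.encoder_q.', ''): v
    for k, v in ckpt['state_dict'].items()
    if k.startswith('module.encoder_q.') and not k.startswith('module.encoder_q.fc')
}
msg = model.load_state_dict(state_dict, strict=False)
assert set(msg.missing_keys) == {'fc.weight', 'fc.bias'}

# Freeze everything except the linear classifier, then re-initialize it.
for name, param in model.named_parameters():
    if name not in ('fc.weight', 'fc.bias'):
        param.requires_grad = False
model.fc.weight.data.normal_(mean=0.0, std=0.01)
model.fc.bias.data.zero_()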

Semi-supervised Learning

Starting from a pre-trained model, to fine-tune it end-to-end (backbone and linear classifier) on a subset of the ImageNet training set on an 8-gpu machine, run:

python main_semisup.py \
  -a resnet50 \
  --lr-backbone [YOUR_LR] --lr-classifier [YOUR_LR] \
  --train-percent 1 --weights finetune \
  --batch-size 256 \
  --pretrained [your checkpoint path]/checkpoint_0199.pth.tar \
  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
  [your imagenet-folder with train and val folders]
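
Assuming main_semisup.py keeps the flag semantics of the Barlow Twins evaluation script it is diffed against, --weights switches between end-to-end fine-tuning (finetune) and a frozen backbone (freeze), and --train-percent selects the labeled subset. For example, the 10% run differs only in --train-percent:

python main_semisup.py \
  -a resnet50 \
  --lr-backbone [YOUR_LR] --lr-classifier [YOUR_LR] \
  --train-percent 10 --weights finetune \
  --batch-size 256 \
  --pretrained [your checkpoint path]/checkpoint_0199.pth.tar \
  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
  [your imagenet-folder with train and val folders]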

Transferring to Object Detection

See ./detection.

Models

Our pre-trained ResNet-50 models can be downloaded as follows:

method    epochs    top-1 acc. (%)    model
SynCo     200       68.1              download
SynCo     800       70.6              download

License

This project is released under the CC-BY-NC 4.0 license. See LICENSE for details.