Sep 30, 2022 · Our analysis reveals strong disparities in the computational efficiency of pre-training methods and their dependence on dataset quality.
Self-supervised methods have achieved remarkable success in transfer learning, often matching or exceeding the accuracy of supervised pre-training.
Sep 30, 2022 · The results call into question the commonly-held assumption that self-supervised methods inherently scale to large, uncurated data.
Oct 11, 2022 · The paper Where Should I Spend My FLOPS? Efficiency Evaluations of Visual Pre-training Methods is on arXiv. Author: Hecate He | Editor: Michael ...
Where should I spend my FLOPS? Efficiency evaluations of visual pre-training methods. S Koppula, Y Li, E Shelhamer, A Jaegle, N Parthasarathy, R Arandjelovic, ...
Oct 18, 2022 · We compute their per-gradient-step FLOP cost (post-compilation, running on the same accelerator hardware), and use this to cross-compare the ...
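The snippet above describes the measurement protocol: counting the post-compilation FLOPs of a single gradient step on fixed accelerator hardware. Below is a minimal, hypothetical sketch of how such a count could be obtained in JAX via XLA's compiled cost analysis; the toy linear model, batch shapes, and plain SGD update are illustrative placeholders, not the paper's actual methods.

import jax
import jax.numpy as jnp

def loss_fn(params, batch):
    # Toy linear model: predictions = x @ w, mean squared error.
    preds = batch["x"] @ params["w"]
    return jnp.mean((preds - batch["y"]) ** 2)

def train_step(params, batch, lr=1e-3):
    # One plain SGD gradient step (placeholder for a real pre-training update).
    grads = jax.grad(loss_fn)(params, batch)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

params = {"w": jnp.zeros((128, 10))}
batch = {"x": jnp.ones((32, 128)), "y": jnp.ones((32, 10))}

# Lower and compile the step for the current accelerator, then query XLA's
# cost analysis for its FLOP estimate. The exact structure and keys returned
# by cost_analysis() vary with JAX version and backend.
compiled = jax.jit(train_step).lower(params, batch).compile()
analysis = compiled.cost_analysis()
cost = analysis[0] if isinstance(analysis, list) else analysis
print("Estimated FLOPs per gradient step:", cost.get("flops"))

Because the count is taken after compilation, it reflects what actually runs on the hardware rather than a per-layer analytical estimate, which is what makes cross-method comparisons on the same accelerator meaningful.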
International Conference on Learning Representations (ICLR 2023). Where Should I Spend My FLOPS? Efficiency Evaluations of Visual Pre-training Methods [arXiv]
Efficient visual pretraining with contrastive detection. OJ Hénaff ... Where should I spend my FLOPS? Efficiency evaluations of visual pre-training methods.
Figures A.6.4 and A.6.5 show the FLOP efficiency of each method, measured on the object detection task. ... Where Should I Spend My FLOPS? Efficiency Evaluations ...