The Devil Is in the Frequency: Geminated Gestalt Autoencoder for Self-Supervised Visual Pre-training

Authors

  • Hao Liu, Tencent
  • Xinghua Jiang, Tencent
  • Xin Li, Tencent
  • Antai Guo, Tencent
  • Yiqing Hu, Tencent
  • Deqiang Jiang, Tencent
  • Bo Ren, Tencent

DOI:

https://doi.org/10.1609/aaai.v37i2.25252

Keywords:

CV: Representation Learning for Vision, CV: Applications

Abstract

The self-supervised Masked Image Modeling (MIM) schema, which follows a "mask-and-reconstruct" pipeline of recovering contents from masked images, has recently attracted increasing interest in the community owing to its excellent ability to learn visual representations from unlabeled data. Aiming to learn representations with highly abstracted semantics, one group of works attempts to reconstruct non-semantic pixels with a large-ratio masking strategy, which may suffer from the "over-smoothing" problem, while others directly infuse semantics into the targets in an off-line way that requires extra data. Different from both, we shift the perspective to the Fourier domain, which naturally offers a global view, and present a new Masked Image Modeling method, termed Geminated Gestalt Autoencoder (Ge^2-AE), for visual pre-training. Specifically, we equip our model with geminated decoders in charge of reconstructing image contents from both pixel and frequency space, where each serves not only as a complement to the other but also as a reciprocal constraint. In this way, more robust representations can be learned by the pre-trained encoder, whose effectiveness is confirmed by the experimental results on downstream recognition tasks. We also conduct several quantitative and qualitative experiments to investigate the learning behavior of our method. To the best of our knowledge, this is the first MIM work to approach visual pre-training through the lens of the frequency domain.
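The core idea of pairing a pixel-space reconstruction objective with a frequency-space one can be sketched as below. This is an illustrative toy example, not the authors' implementation: the function names (`frequency_loss`, `geminated_loss`), the weighting scheme, and the use of a plain L1 distance on FFT coefficients are all assumptions for illustration; the paper's actual decoders and losses are defined in the full text.

```python
import numpy as np

def frequency_loss(pred, target):
    # Hypothetical frequency-space term: L1 distance between the 2-D FFT
    # coefficients of the predicted and target images (an assumption, not
    # the paper's exact formulation).
    pf = np.fft.fft2(pred)
    tf = np.fft.fft2(target)
    return float(np.mean(np.abs(pf - tf)))

def geminated_loss(pred_pixel, pred_freq, target, alpha=0.5):
    # Combine a pixel-space L2 reconstruction term with the frequency-space
    # term, mirroring the idea of geminated decoders whose two views act as
    # reciprocal constraints on each other. `alpha` is a hypothetical weight.
    pixel_term = float(np.mean((pred_pixel - target) ** 2))
    freq_term = frequency_loss(pred_freq, target)
    return alpha * pixel_term + (1.0 - alpha) * freq_term

# Toy usage: a perfect reconstruction gives zero loss, an imperfect one does not.
rng = np.random.default_rng(0)
target = rng.standard_normal((16, 16))
noisy = target + 0.1 * rng.standard_normal((16, 16))
perfect = geminated_loss(target, target, target)
imperfect = geminated_loss(noisy, noisy, target)
```

Because the FFT has global support, the frequency term penalizes errors in overall structure that a purely local pixel loss can under-weight, which is the intuition behind using the two objectives as complements.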

Published

2023-06-26

How to Cite

Liu, H., Jiang, X., Li, X., Guo, A., Hu, Y., Jiang, D., & Ren, B. (2023). The Devil Is in the Frequency: Geminated Gestalt Autoencoder for Self-Supervised Visual Pre-training. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 1649-1656. https://doi.org/10.1609/aaai.v37i2.25252

Section

AAAI Technical Track on Computer Vision II