Conformer: Local features coupling global representations for visual recognition
Proceedings of the IEEE/CVF International Conference on …, 2021
Abstract
Within Convolutional Neural Networks (CNNs), convolution operations are good at extracting local features but have difficulty capturing global representations. Within the visual transformer, cascaded self-attention modules can capture long-distance feature dependencies but unfortunately deteriorate local feature details. In this paper, we propose a hybrid network structure, termed Conformer, that takes advantage of both convolutional operations and self-attention mechanisms for enhanced representation learning. Conformer is rooted in the Feature Coupling Unit (FCU), which fuses local features and global representations at different resolutions in an interactive fashion. Conformer adopts a concurrent structure so that local features and global representations are retained to the maximum extent. Experiments show that Conformer, under comparable parameter complexity, outperforms the visual transformer (DeiT-B) by 2.3% on ImageNet. On MS COCO, it outperforms ResNet-101 by 3.7% and 3.6% mAP for object detection and instance segmentation, respectively, demonstrating its great potential as a general backbone network. Code is available at github.com/pengzhiliang/Conformer.
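
To make the coupling idea concrete, below is a minimal PyTorch-style sketch of one concurrent stage in which a convolutional branch and a transformer branch run in parallel and exchange information through FCU-like modules. All class names (FCUDown, FCUUp, ConformerBlock), the choice of average pooling and nearest-neighbor upsampling, and the hyperparameters are illustrative assumptions for this sketch, not the authors' implementation; see github.com/pengzhiliang/Conformer for the official code, which additionally handles the class token and uses ResNet-style conv blocks.

import torch
import torch.nn as nn


class FCUDown(nn.Module):
    # Couple CNN feature maps into the transformer branch: 1x1 conv to match
    # channels, average pooling to match the patch-token resolution (assumed design).
    def __init__(self, in_ch, embed_dim, stride):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=1)
        self.pool = nn.AvgPool2d(kernel_size=stride, stride=stride)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x_cnn):                      # (B, C, H, W)
        x = self.pool(self.proj(x_cnn))            # (B, D, H/s, W/s)
        x = x.flatten(2).transpose(1, 2)           # (B, N, D) patch tokens
        return self.norm(x)


class FCUUp(nn.Module):
    # Couple transformer tokens back into the CNN branch: reshape tokens to a
    # feature map, 1x1 conv to match channels, upsample to the CNN resolution.
    def __init__(self, embed_dim, out_ch, stride):
        super().__init__()
        self.proj = nn.Conv2d(embed_dim, out_ch, kernel_size=1)
        self.up = nn.Upsample(scale_factor=stride, mode="nearest")

    def forward(self, tokens, hw):                 # tokens: (B, N, D)
        B, N, D = tokens.shape
        H, W = hw
        x = tokens.transpose(1, 2).reshape(B, D, H, W)
        return self.up(self.proj(x))               # (B, C, H*s, W*s)


class ConformerBlock(nn.Module):
    # One concurrent stage: conv block and transformer block run in parallel,
    # with FCUs exchanging local detail and global context between branches.
    def __init__(self, channels=256, embed_dim=384, num_heads=6, stride=4):
        super().__init__()
        self.stride = stride
        self.conv_block = nn.Sequential(           # stand-in for a bottleneck conv block
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.trans_block = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.fcu_down = FCUDown(channels, embed_dim, stride)
        self.fcu_up = FCUUp(embed_dim, channels, stride)

    def forward(self, x_cnn, tokens):
        x_cnn = self.conv_block(x_cnn)                            # refine local features
        tokens = self.trans_block(tokens + self.fcu_down(x_cnn))  # inject local detail
        hw = (x_cnn.shape[2] // self.stride, x_cnn.shape[3] // self.stride)
        x_cnn = x_cnn + self.fcu_up(tokens, hw)                   # inject global context
        return x_cnn, tokens


# Usage sketch: 56x56 conv features interact with 14x14 patch tokens per block.
x_cnn = torch.randn(2, 256, 56, 56)
tokens = torch.randn(2, 14 * 14, 384)
block = ConformerBlock()
x_cnn, tokens = block(x_cnn, tokens)
print(x_cnn.shape, tokens.shape)   # (2, 256, 56, 56) and (2, 196, 384)

The key design point the abstract emphasizes is that the two branches are kept concurrent rather than stacked, so the CNN branch never loses local detail and the transformer branch never loses global context; the FCUs only exchange information between them at each stage.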