Osformer: One-stage camouflaged instance segmentation with transformers

J Pei, T Cheng, DP Fan, H Tang, C Chen, L Van Gool. European Conference on Computer Vision (ECCV), 2022. Springer.
Abstract
We present OSFormer, the first one-stage transformer framework for camouflaged instance segmentation (CIS). OSFormer is based on two key designs. First, we design a location-sensing transformer (LST) to obtain the location label and instance-aware parameters by introducing location-guided queries and a blend-convolution feed-forward network. Second, we develop a coarse-to-fine fusion (CFF) to merge diverse context information from the LST encoder and CNN backbone. Coupling these two components enables OSFormer to efficiently blend local features and long-range context dependencies for predicting camouflaged instances. Compared with two-stage frameworks, our OSFormer reaches 41% AP and converges efficiently without requiring enormous training data, i.e., only 3,040 samples within 60 epochs. Code link: https://github.com/PJLallen/OSFormer.
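To illustrate the coarse-to-fine fusion idea described above, here is a minimal NumPy sketch. It is not the paper's actual CFF module (which operates on LST encoder and CNN backbone features with learned convolutions); it only shows the generic pattern of progressively upsampling a coarse feature map and merging it with finer ones. The function names and the fine-to-coarse ordering are assumptions for illustration.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbor 2x upsampling of a (C, H, W) feature map.
    return x.repeat(2, axis=1).repeat(2, axis=2)

def coarse_to_fine_fusion(features):
    # Hypothetical sketch: start from the coarsest map, then repeatedly
    # upsample and add it to the next finer map. 'features' is ordered
    # fine -> coarse (e.g., strides 8, 16, 32 relative to the input).
    # The real CFF additionally applies learned convolutions at each step.
    fused = features[-1]
    for finer in reversed(features[:-1]):
        fused = finer + upsample2x(fused)
    return fused

# Toy multi-scale features at three spatial resolutions.
f8  = np.ones((4, 32, 32))   # finest level
f16 = np.ones((4, 16, 16))
f32 = np.ones((4, 8, 8))     # coarsest level
out = coarse_to_fine_fusion([f8, f16, f32])
print(out.shape)  # (4, 32, 32): fused output at the finest resolution
```

The fused map retains the finest level's resolution while accumulating context from coarser levels, which is the property the CFF exploits to combine long-range transformer context with local CNN detail.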