Ssformer: A lightweight transformer for semantic segmentation

W Shi, J Xu, P Gao - 2022 IEEE 24th International Workshop on Multimedia Signal Processing (MMSP), 2022 - ieeexplore.ieee.org
It is widely believed that Transformers outperform convolutional neural networks in semantic segmentation. Nevertheless, the original Vision Transformer [2] may lack the inductive biases of local neighborhoods and has a high time complexity. Recently, the Swin Transformer [3] set new records in various vision tasks by using a hierarchical architecture and shifted windows while being more efficient. However, as the Swin Transformer is designed specifically for image classification, it may achieve suboptimal performance on dense prediction tasks such as segmentation. Further, simply combining the Swin Transformer with existing methods would increase the size and parameter count of the final segmentation model. In this paper, we rethink the Swin Transformer for semantic segmentation and design a lightweight yet effective transformer model, called SSformer. Taking advantage of the inherent hierarchical design of the Swin Transformer, we propose a decoder that aggregates information from different layers, thereby capturing both local and global attention. Experimental results show the proposed SSformer yields mIoU performance comparable to state-of-the-art models while maintaining a smaller model size and lower compute. Source code and pretrained models are available at: https://github.com/shiwt03/SSformer.
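
The abstract does not spell out the decoder's structure, but the idea of aggregating features from the backbone's hierarchical stages can be sketched as below. This is a minimal, hypothetical PyTorch sketch assuming a simple project-upsample-concatenate fusion; the class name, channel sizes (Swin-Tiny defaults), embedding dimension, and class count (ADE20K's 150) are illustrative assumptions, not the authors' implementation, for which see the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightweightAggregationDecoder(nn.Module):
    """Hypothetical decoder that fuses the four hierarchical feature
    maps of a Swin-style backbone into a single segmentation map."""

    def __init__(self, in_channels=(96, 192, 384, 768),
                 embed_dim=256, num_classes=150):
        super().__init__()
        # Project each stage's features to a shared embedding dimension.
        self.proj = nn.ModuleList(
            [nn.Conv2d(c, embed_dim, kernel_size=1) for c in in_channels]
        )
        # Fuse the concatenated multi-scale features.
        self.fuse = nn.Sequential(
            nn.Conv2d(embed_dim * len(in_channels), embed_dim, kernel_size=1),
            nn.BatchNorm2d(embed_dim),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(embed_dim, num_classes, kernel_size=1)

    def forward(self, feats):
        # feats: list of four tensors from backbone stages (strides 4/8/16/32).
        target_size = feats[0].shape[2:]
        # Upsample every projected stage to the highest spatial resolution,
        # so shallow (local) and deep (global) features can be concatenated.
        upsampled = [
            F.interpolate(p(f), size=target_size,
                          mode="bilinear", align_corners=False)
            for p, f in zip(self.proj, feats)
        ]
        x = self.fuse(torch.cat(upsampled, dim=1))
        return self.classifier(x)


if __name__ == "__main__":
    # Smoke test with dummy Swin-Tiny-shaped features for a 512x512 input.
    feats = [torch.randn(1, c, s, s)
             for c, s in zip((96, 192, 384, 768), (128, 64, 32, 16))]
    out = LightweightAggregationDecoder()(feats)
    print(out.shape)  # torch.Size([1, 150, 128, 128])
```

Fusing all stages at the stride-4 resolution is one plausible way to combine the local attention of early windows with the global context of deep layers while keeping the decoder's parameter count small.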