Generalized Semantic Segmentation by Self-Supervised Source Domain Projection and Multi-Level Contrastive Learning
DOI: https://doi.org/10.1609/aaai.v37i9.26280
Keywords: ML: Transfer, Domain Adaptation, Multi-Task Learning; CV: Segmentation; ML: Unsupervised & Self-Supervised Learning
Abstract
Deep networks trained on a source domain show degraded performance when tested on unseen target-domain data. To enhance a model's generalization ability, most existing domain generalization methods learn domain-invariant features by suppressing domain-sensitive ones. In contrast, we propose a Domain Projection and Contrastive Learning (DPCL) approach for generalized semantic segmentation, which comprises two modules: Self-supervised Source Domain Projection (SSDP) and Multi-Level Contrastive Learning (MLCL). SSDP reduces the domain gap by projecting data onto the source domain, while MLCL is a learning scheme that learns discriminative and generalizable features from the projected data. At test time, we first project the target data via SSDP to mitigate domain shift, then generate segmentation results with the segmentation network learned under MLCL. The projected data can be further updated by minimizing our proposed pixel-to-pixel contrastive loss to obtain better results. Extensive experiments on semantic segmentation benchmarks demonstrate the favorable generalization capability of our method.
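The abstract's test-time refinement minimizes a pixel-to-pixel contrastive loss over the projected data. The paper defines its own loss; as a rough illustration only, the sketch below implements a generic InfoNCE-style pixel contrastive loss in NumPy, where pixels sharing a (pseudo-)label are treated as positives and all others as negatives. The function name, the temperature value, and the flattened `(N, D)` feature layout are assumptions, not the paper's exact formulation.

```python
import numpy as np

def pixel_contrastive_loss(feats, labels, temperature=0.1):
    """InfoNCE-style pixel-to-pixel contrastive loss (illustrative sketch).

    feats:  (N, D) array of pixel features, one row per pixel.
    labels: (N,) integer class ids; same-class pixels act as positives.
    """
    # L2-normalize features so the dot product is cosine similarity
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature
    np.fill_diagonal(sim, -np.inf)          # exclude self-pairs
    # row-wise log-softmax, computed stably
    row_max = sim.max(axis=1, keepdims=True)
    logp = sim - row_max - np.log(np.exp(sim - row_max).sum(axis=1, keepdims=True))
    pos = labels[:, None] == labels[None, :]
    np.fill_diagonal(pos, False)
    n_pos = pos.sum(axis=1)
    valid = n_pos > 0                       # anchors with at least one positive
    # mean of -log p over each anchor's positive pairs, averaged over anchors
    loss = -np.where(pos, logp, 0.0).sum(axis=1)[valid] / n_pos[valid]
    return loss.mean()
```

In a test-time-adaptation loop, a loss of this shape would be evaluated on the projected target image's features and back-propagated to update the projected data rather than the network weights, consistent with the procedure the abstract describes.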
Published
2023-06-26
How to Cite
Yang, L., Gu, X., & Sun, J. (2023). Generalized Semantic Segmentation by Self-Supervised Source Domain Projection and Multi-Level Contrastive Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9), 10789-10797. https://doi.org/10.1609/aaai.v37i9.26280
Section
AAAI Technical Track on Machine Learning IV