Visually Source-Free Domain Adaptation via Adversarial Style Matching

Published: 19 January 2024

Abstract

The majority of existing works explore Unsupervised Domain Adaptation (UDA) under the ideal assumption that samples in both domains are available and complete. In real-world applications, however, this assumption does not always hold. For instance, as data privacy becomes a growing concern, the source-domain samples may not be publicly available for training, leading to the typical Source-Free Domain Adaptation (SFDA) problem. Traditional UDA methods fail to handle SFDA because two challenges stand in the way: data incompleteness and the domain gap. In this paper, we propose a visually SFDA method named Adversarial Style Matching (ASM) to address both issues. Specifically, we first train a style generator to produce source-style samples from the target images, which resolves the data incompleteness issue. We use the auxiliary information stored in the pre-trained source model to ensure that the generated samples are statistically aligned with the source samples, and use pseudo labels to maintain semantic consistency. Then, we feed the target-domain samples and their corresponding source-style samples into a feature generator network and reduce the domain gap with a self-supervised loss. An adversarial scheme is employed to further expand the distributional coverage of the generated source-style samples. The experimental results verify that our method achieves performance comparable to traditional UDA methods that use source samples for training.
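To make the described pipeline concrete, below is a minimal PyTorch sketch of the two training signals the abstract attributes to the style generator: aligning the batch statistics of the generated source-style images with the BatchNorm running statistics stored in the frozen source model (one plausible reading of the "auxiliary information"), and a pseudo-label cross-entropy term for semantic consistency. The network architectures, loss weighting, and the use of BatchNorm statistics are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: assumes the "auxiliary information" is the BatchNorm
# running statistics of the frozen source model. Shapes and weights are made up.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleGenerator(nn.Module):
    """Maps a target image to a source-style image (hypothetical architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

def bn_statistic_loss(source_model, styled):
    """Match batch statistics of generated images to the BatchNorm running
    statistics stored in the frozen source model (statistical alignment)."""
    loss, handles, feats = 0.0, [], []
    def hook(module, inputs, output):
        feats.append((module, inputs[0]))      # capture the input to each BN layer
    for m in source_model.modules():
        if isinstance(m, nn.BatchNorm2d):
            handles.append(m.register_forward_hook(hook))
    source_model(styled)                        # one forward pass to collect features
    for h in handles:
        h.remove()
    for m, x in feats:
        mu = x.mean(dim=(0, 2, 3))
        var = x.var(dim=(0, 2, 3), unbiased=False)
        loss = loss + F.mse_loss(mu, m.running_mean) + F.mse_loss(var, m.running_var)
    return loss

# Stand-in for the frozen pre-trained source model (classifier with BN layers).
source_model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
source_model.eval()
for p in source_model.parameters():
    p.requires_grad_(False)

gen = StyleGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-4)

target_batch = torch.randn(4, 3, 32, 32)        # dummy target-domain images
with torch.no_grad():                            # pseudo labels from the source model
    pseudo_labels = source_model(target_batch).argmax(1)

styled = gen(target_batch)                       # generated source-style counterparts
loss_stat = bn_statistic_loss(source_model, styled)               # statistical alignment
loss_sem = F.cross_entropy(source_model(styled), pseudo_labels)   # semantic consistency
loss = loss_stat + loss_sem                      # equal weighting chosen arbitrarily
opt.zero_grad(); loss.backward(); opt.step()
```

Both signals come from the same frozen source model, so no source images are ever accessed; in the full method, the feature generator with its self-supervised loss and the adversarial expansion of the generated samples would be trained on top of these target/source-style pairs.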


Information

Published In
IEEE Transactions on Image Processing, Volume 33, 2024, 6591 pages

Publisher
IEEE Press
