Abstract
Domain adaptation (DA) improves the generalization of models across source and target domains with different distributions. Current methods aim to reduce the divergence between domain distributions in order to learn transferable features. In most real cases, however, labeled samples are scarce and annotating new ones costs significant time and labor, which makes it difficult to achieve high accuracy. To address this problem, we propose a novel method called Deep Geometric and Statistic Alignment for Fewer Labeled Domain Adaptation (GSA4FDA). The method accomplishes the target task with fewer labeled source samples by incorporating manifold learning to exploit the local geometric structure of the abundant unlabeled source samples. For domain alignment, we embed a joint geometric-statistical alignment into a specific layer of a deep convolutional neural network (CNN) to capture high-level semantic information, exploiting the complementary nature of the two aspects. Specifically, we use the Nyström method and Maximum Mean Discrepancy (MMD) to compensate for the geometric and statistical shifts between domains, respectively. Experimental results on several datasets demonstrate the superiority of our method, particularly when only limited labeled source samples are available.
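To make the two alignment ingredients named in the abstract concrete, the sketch below shows a standard kernel MMD estimate between source and target features and a Nyström low-rank approximation of a Gram matrix. This is only a minimal illustration under assumptions, not the authors' implementation: the feature arrays Xs and Xt, the RBF kernel, the bandwidth heuristic, and the landmark count m are all hypothetical choices for the example.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """RBF kernel matrix between the rows of A and the rows of B."""
    sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * np.clip(sq, 0.0, None))

def mmd2(Xs, Xt, gamma):
    """Biased estimate of squared MMD between source and target features."""
    return (rbf_kernel(Xs, Xs, gamma).mean()
            + rbf_kernel(Xt, Xt, gamma).mean()
            - 2.0 * rbf_kernel(Xs, Xt, gamma).mean())

def nystrom_gram(X, m, gamma, seed=0):
    """Nystrom low-rank approximation of the full Gram matrix K(X, X),
    built from m randomly chosen landmark samples."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    K_nm = rbf_kernel(X, X[idx], gamma)          # (n, m)
    K_mm = rbf_kernel(X[idx], X[idx], gamma)     # (m, m)
    return K_nm @ np.linalg.pinv(K_mm) @ K_nm.T  # (n, n) approximation

# Hypothetical CNN features for a source and a target mini-batch.
Xs = np.random.randn(64, 256)   # labeled + unlabeled source features
Xt = np.random.randn(64, 256)   # unlabeled target features
gamma = 1.0 / 256               # common bandwidth heuristic: 1 / feature dim

print("MMD^2 estimate:", mmd2(Xs, Xt, gamma))
K_approx = nystrom_gram(np.vstack([Xs, Xt]), m=32, gamma=gamma)
print("Approximate Gram matrix shape:", K_approx.shape)
```

In the spirit of the paper, the MMD term would penalize the statistical shift between domain feature distributions at a chosen CNN layer, while a Nyström-style kernel approximation keeps the geometric (manifold) alignment over many unlabeled samples computationally tractable.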




Acknowledgements
This work was supported in part by the Qingdao Natural Science Foundation (Grant No. 23-2-1-161-zyyd-jch) and the Shandong Natural Science Foundation (Grant No. ZR2023MF008).
Cite this article
Cai, Y., Liu, B., Yang, X. et al. GSA4FDA: Deep Geometric and Statistic Alignment for Fewer Labeled Domain Adaptation. Neural Process Lett 55, 11333–11351 (2023). https://doi.org/10.1007/s11063-023-11378-y