DOI: 10.1007/978-3-031-73232-4_13
Article

Learn from the Learnt: Source-Free Active Domain Adaptation via Contrastive Sampling and Visual Persistence

Published: 30 September 2024

Abstract

Domain Adaptation (DA) facilitates knowledge transfer from a source domain to a related target domain. This paper investigates a practical DA paradigm, Source data-Free Active Domain Adaptation (SFADA), in which the source data become inaccessible during adaptation and only a minimal annotation budget is available in the target domain. Without access to the source data, new challenges emerge: identifying the most informative target samples for labeling, establishing cross-domain alignment during adaptation, and ensuring continuous performance improvement through the iterative query-and-adaptation process. In response, we present Learn From the Learnt (LFTL), a novel SFADA paradigm that leverages the knowledge already learnt by the source-pretrained model and the actively iterated models, without extra overhead. We propose Contrastive Active Sampling, which learns from the hypotheses of the preceding model to query target samples that are both informative to the current model and persistently challenging throughout active learning. During adaptation, Visual Persistence-guided Adaptation learns from features of actively selected anchors obtained from previous intermediate models, facilitating feature-distribution alignment and active-sample exploitation. Extensive experiments on three widely used benchmarks show that LFTL achieves state-of-the-art performance, superior computational efficiency, and continuous improvement as the annotation budget increases. Our code is available at https://github.com/lyumengyao/lftl.
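The query step of the loop described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it scores each unlabeled target sample by combining the current model's predictive entropy (informativeness now) with the KL divergence between the preceding and current models' softmax outputs (persistent challenge across rounds). The function names `contrastive_active_scores` and `select_queries` are hypothetical.

```python
import numpy as np

def entropy(p, eps=1e-12):
    # Shannon entropy of each row of an (N, C) probability matrix
    return -np.sum(p * np.log(p + eps), axis=1)

def contrastive_active_scores(p_curr, p_prev, eps=1e-12):
    """Score unlabeled target samples for annotation.

    p_curr: (N, C) softmax outputs of the current model
    p_prev: (N, C) softmax outputs of the preceding model
    A sample scores high when the current model is uncertain AND the
    two models still disagree, i.e. it has stayed hard across rounds.
    """
    uncertainty = entropy(p_curr)                     # informative to the current model
    persistence = np.sum(                             # KL(p_prev || p_curr): cross-round disagreement
        p_prev * (np.log(p_prev + eps) - np.log(p_curr + eps)), axis=1)
    return uncertainty + persistence

def select_queries(p_curr, p_prev, budget):
    # Indices of the top-`budget` samples by score, to send to the oracle
    scores = contrastive_active_scores(p_curr, p_prev)
    return np.argsort(-scores)[:budget]
```

For example, a sample on which both models agree with high confidence scores near zero, while a sample that is ambiguous now and was predicted differently last round dominates the query set.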


Cited By

  • (2024) Quantized Prompt for Efficient Generalization of Vision-Language Models. In: Computer Vision – ECCV 2024, pp. 54–73. DOI: 10.1007/978-3-031-72655-2_4. Online publication date: 29 September 2024.



Published In

Computer Vision – ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part I
Sep 2024
580 pages
ISBN:978-3-031-73231-7
DOI:10.1007/978-3-031-73232-4
Editors: Aleš Leonardis, Elisa Ricci, Stefan Roth, Olga Russakovsky, Torsten Sattler, Gül Varol

Publisher

Springer-Verlag

Berlin, Heidelberg


Author Tags

  1. Transfer learning
  2. Domain adaptation
  3. Active learning



