Unsupervised Multi-source Domain Adaptation for Regression

  • Conference paper
  • In: Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2020)

Abstract

We consider the problem of unsupervised domain adaptation from multiple sources in a regression setting. In this work, we propose an original method that exploits the different sources through a weighted combination of them. To this end, we define a new measure of similarity between probability distributions for domain adaptation, which we call the hypothesis-discrepancy. We then prove a new bound for unsupervised domain adaptation that combines multiple sources. From this bound we derive a novel adversarial domain adaptation algorithm that adjusts the weight given to each source, ensuring that sources closely related to the target receive higher weights. Finally, we evaluate our method on several public datasets and compare it to other domain adaptation baselines, demonstrating its improvement on regression tasks.
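To make the high-level description above concrete, here is a minimal, hypothetical PyTorch sketch of the kind of training signal the abstract describes: an adversarial proxy for a per-source hypothesis-discrepancy and a softmax re-weighting that favours sources close to the target. It is not the authors' released implementation (see the repository linked in the Notes below); the network sizes, the discrepancy proxy and the weighting rule are illustrative assumptions only.

    import torch
    import torch.nn as nn

    # Illustrative components: a shared feature extractor phi, the main
    # regressor h, and an auxiliary regressor h_adv used adversarially.
    phi = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
    h = nn.Linear(32, 1)
    h_adv = nn.Linear(32, 1)
    mse = nn.MSELoss()

    def discrepancy_proxy(x_src, y_src, x_tgt):
        # Adversarial proxy for one source's hypothesis-discrepancy:
        # h_adv is asked to fit the labelled source data while
        # disagreeing with h on the unlabelled target data.
        z_src, z_tgt = phi(x_src), phi(x_tgt)
        source_fit = mse(h_adv(z_src), y_src)
        target_gap = mse(h_adv(z_tgt), h(z_tgt).detach())
        return target_gap - source_fit

    def source_weights(discrepancies, temperature=1.0):
        # Sources with a lower estimated discrepancy to the target
        # receive a larger weight in the combined objective.
        d = torch.stack([torch.as_tensor(v) for v in discrepancies])
        return torch.softmax(-d / temperature, dim=0)

In a full training loop one would alternate maximising the proxy over h_adv and minimising, over phi and h, the weighted sum of source regression losses plus the weighted discrepancies; the exact objective, bound and weight-update rule are those given in the paper and its linked code.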

Notes

  1. Code is available at https://github.com/GRichard513/ADisc-MSDA.

  2. https://www.cs.jhu.edu/~mdredze/datasets/sentiment/.

  3. https://github.com/KeiraZhao/MDAN.

Author information

Corresponding author

Correspondence to Guillaume Richard.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 534 KB)

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Richard, G., Mathelin, A.d., Hébrail, G., Mougeot, M., Vayatis, N. (2021). Unsupervised Multi-source Domain Adaptation for Regression. In: Hutter, F., Kersting, K., Lijffijt, J., Valera, I. (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2020. Lecture Notes in Computer Science, vol. 12457. Springer, Cham. https://doi.org/10.1007/978-3-030-67658-2_23

  • DOI: https://doi.org/10.1007/978-3-030-67658-2_23

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-67657-5

  • Online ISBN: 978-3-030-67658-2

  • eBook Packages: Computer Science, Computer Science (R0)
