
Importance-Based Neuron Selective Distillation for Interference Mitigation in Multilingual Neural Machine Translation

  • Conference paper
Knowledge Science, Engineering and Management (KSEM 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14120)


Abstract

Multilingual neural machine translation uses a single model to translate multiple languages, enabling efficient cross-lingual transfer through shared parameters. However, multilingual training suffers from negative language interference, which particularly degrades high-resource languages. Existing approaches typically introduce language-specific modules to capture the heterogeneous characteristics of different languages, but this leads to parameter explosion. In this paper, we propose a “divide and conquer” multilingual translation training method based on neuron importance that mitigates negative language interference effectively without adding any parameters. The method consists of four steps: estimation, pruning, distillation, and fine-tuning. Specifically, we estimate the importance of the neurons in an existing pre-trained model, dividing them into important neurons that represent general knowledge of each language and unimportant neurons that represent individual knowledge of each low-resource language. We then prune the pre-trained model, retaining only the important neurons, and train the pruned model under the supervision of the original complete model via selective distillation, compensating for the performance loss caused by unstructured pruning. Finally, we restore the pruned neurons and fine-tune only them. Experimental results on several language pairs demonstrate the effectiveness of the proposed method.
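The four stages described in the abstract can be illustrated with a small sketch. The code below is a minimal, hypothetical PyTorch example, not the authors' implementation: the helper names (estimate_importance, build_masks, distill_step, finetune_restored_only), the keep_ratio parameter, and the Taylor-style |w · dL/dw| importance criterion are assumptions chosen for illustration, and a generic classifier-style loss stands in for the sequence-to-sequence translation objective.

```python
# Illustrative sketch of: estimation -> pruning -> selective distillation -> fine-tuning.
# Assumptions: `model(x)` returns logits, `y` holds target labels, losses are generic
# stand-ins for the MT training objective described in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


def estimate_importance(model, calibration_batches, loss_fn):
    """Score each weight by |w * dL/dw| accumulated over a few batches (assumed criterion)."""
    scores = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in calibration_batches:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                scores[n] += (p.detach() * p.grad.detach()).abs()
    return scores


def build_masks(scores, keep_ratio=0.7):
    """Keep the top `keep_ratio` fraction of weights per tensor (unstructured pruning)."""
    masks = {}
    for n, s in scores.items():
        k = max(1, int(keep_ratio * s.numel()))
        threshold = s.flatten().kthvalue(s.numel() - k + 1).values
        masks[n] = (s >= threshold).float()
    return masks


def apply_masks(model, masks):
    """Zero out the 'unimportant' weights in place, producing the pruned model."""
    with torch.no_grad():
        for n, p in model.named_parameters():
            p.mul_(masks[n])


def distill_step(student, teacher, x, y, masks, optimizer, T=2.0, alpha=0.5):
    """One selective-distillation step: the pruned student matches the full teacher's
    output distribution while fitting the labels; gradients are masked so the pruned
    slots stay at zero."""
    teacher.eval()
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                  F.softmax(t_logits / T, dim=-1),
                  reduction="batchmean") * (T * T)
    ce = F.cross_entropy(s_logits, y)
    loss = alpha * kd + (1 - alpha) * ce
    optimizer.zero_grad()
    loss.backward()
    for n, p in student.named_parameters():
        if p.grad is not None:
            p.grad.mul_(masks[n])  # keep pruned (unimportant) weights frozen at zero
    optimizer.step()
    return loss.item()


def finetune_restored_only(model, masks):
    """Final stage: after restoring the pruned neurons, mask gradients with (1 - mask)
    so that only the restored, language-specific slots are updated."""
    def make_hook(name):
        return lambda grad: grad * (1.0 - masks[name])
    for n, p in model.named_parameters():
        p.register_hook(make_hook(n))
```

The two complementary gradient masks carry the "divide and conquer" idea in this sketch: masking with the kept-neuron mask during distillation protects the general, shared capacity, while masking with its complement during the final stage confines the language-specific updates to the restored neurons.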



Acknowledgements

This work is supported by the National Natural Science Foundation of China (Grant No. U21B2009). This research is also supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDC02030400).

Author information


Corresponding author

Correspondence to Heyan Huang.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, J., Huang, H., Hu, Y., Guo, P., Xie, Y. (2023). Importance-Based Neuron Selective Distillation for Interference Mitigation in Multilingual Neural Machine Translation. In: Jin, Z., Jiang, Y., Buchmann, R.A., Bi, Y., Ghiran, A.M., Ma, W. (eds) Knowledge Science, Engineering and Management. KSEM 2023. Lecture Notes in Computer Science, vol 14120. Springer, Cham. https://doi.org/10.1007/978-3-031-40292-0_12


  • DOI: https://doi.org/10.1007/978-3-031-40292-0_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-40291-3

  • Online ISBN: 978-3-031-40292-0

  • eBook Packages: Computer Science; Computer Science (R0)
