
A Unified Data Augmentation Framework for Low-Resource Multi-domain Dialogue Generation

  • Conference paper
  • In: Machine Learning and Knowledge Discovery in Databases. Research Track (ECML PKDD 2024)

Abstract

Current state-of-the-art dialogue systems rely heavily on extensive training data. However, challenges arise in domains where domain-specific training data are insufficient or entirely absent. To tackle this challenge, we propose a novel data Augmentation framework for Multi-Domain Dialogue Generation, referred to as AMD\(^2\)G. The AMD\(^2\)G framework consists of a data augmentation process and a two-stage training approach: domain-agnostic training followed by domain adaptation training. We posit that domain corpora are a blend of domain-agnostic and domain-specific features, with certain representation patterns shared among diverse domains. Domain-agnostic training aims to enable models to learn these common expressive patterns. To construct domain-agnostic dialogue corpora, we employ a de-domaining data processing technique that removes domain-specific features. By mitigating the effects of domain-specific features, a model trained on the de-domained corpora can effectively learn common expression patterns across different domains. Subsequently, we adapt the learned domain-agnostic features to the target domain through domain adaptation training. We conduct experiments on Chinese dialogue datasets from five different domains and show that AMD\(^2\)G achieves superior performance compared to both direct training on the target-domain corpus and collective training on all five domain corpora. Our work underscores AMD\(^2\)G as a viable alternative for low-resource multi-domain dialogue generation. Code and data are available in our GitHub repository (https://github.com/misonsky/Amdg).

Y. Liu and E. Nie—Equal contribution.
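
The de-domaining step is not specified in detail on this page, but the idea described in the abstract lends itself to a short illustration: domain-specific terms (e.g. collected from domain lexicons) are masked with a generic placeholder so that only domain-agnostic expression patterns remain. The Python sketch below is illustrative only; the function name, placeholder token, and lexicon handling are assumptions, not the authors' implementation (see the linked repository for that).

    PLACEHOLDER = "[DOMAIN]"  # assumed mask token, not taken from the paper

    def de_domain(utterance: str, domain_lexicon: set[str]) -> str:
        """Replace every lexicon term in the utterance with a placeholder."""
        # Replace longer terms first so multi-word entries win over substrings.
        for term in sorted(domain_lexicon, key=len, reverse=True):
            utterance = utterance.replace(term, PLACEHOLDER)
        return utterance

    # Example: a film-domain utterance becomes a domain-agnostic pattern.
    lexicon = {"Inception", "Christopher Nolan"}
    print(de_domain("Have you seen Inception by Christopher Nolan?", lexicon))
    # -> "Have you seen [DOMAIN] by [DOMAIN]?"

Under the two-stage scheme the abstract describes, a model would first be trained on such de-domained corpora to learn shared expression patterns, then fine-tuned on the (small) target-domain corpus during domain adaptation.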




Acknowledgement

We would like to thank the reviewers for their constructive comments. This project is supported by the National Natural Science Foundation of China (62172086, 62272092) and the DFG (grant SCHU 2246/14-1), as well as by the China Scholarship Council.

Author information


Corresponding author

Correspondence to Shi Feng.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Liu, Y., et al. (2024). A Unified Data Augmentation Framework for Low-Resource Multi-domain Dialogue Generation. In: Bifet, A., Davis, J., Krilavičius, T., Kull, M., Ntoutsi, E., Žliobaitė, I. (eds) Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2024. Lecture Notes in Computer Science, vol 14942. Springer, Cham. https://doi.org/10.1007/978-3-031-70344-7_10


  • DOI: https://doi.org/10.1007/978-3-031-70344-7_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-70343-0

  • Online ISBN: 978-3-031-70344-7

  • eBook Packages: Computer Science, Computer Science (R0)
