
Learning Multi-turn Response Selection in Grounded Dialogues with Reinforced Knowledge and Context Distillation

Published: 21 April 2023

Abstract

Knowledge-grounded dialogue systems have recently gained increasing attention, and great efforts have been made to build response matching models that leverage all dialogue content and knowledge sentences. However, knowledge-grounded conversations often contain redundant knowledge and irrelevant dialogue content, which can distract the matching process and degrade performance. In addition, irrelevant dialogue history and excessive knowledge hinder the exploitation of popular pre-trained language models (PLMs) because of their limited input length. To address these challenges, we propose a new knowledge-grounded dialogue model based on PLMs, in which a knowledge selector and a context selector filter out irrelevant knowledge sentences and redundant dialogue history, respectively. Because labeled data for training the two selectors is lacking, we pre-train them with weakly supervised tasks and then use reinforcement learning (RL) to jointly optimize knowledge and context selection and fine-tune the PLM for response ranking. In this way, the dialogue model distills more accurate and concise knowledge and dialogue content for the subsequent response ranking module, and the overall model converges and performs better. Experiments on two benchmarks show that our model significantly outperforms state-of-the-art methods.
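The selector optimization described above is a policy-gradient RL setup: discrete keep/drop decisions over knowledge sentences cannot be trained by backpropagation alone, so a REINFORCE-style estimator is the standard tool. The abstract does not specify the authors' reward, features, or architecture, so the following is only a minimal toy sketch under illustrative assumptions: a linear selector over hypothetical 2-d sentence features, a made-up reward that pays for keeping "relevant" sentences and penalizes keeping "irrelevant" ones, and a running-mean baseline for variance reduction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed, not from the paper): 10 candidate knowledge
# sentences, the first 5 relevant. Feature layout: [relevance signal, bias].
feats = np.array([[1.0, 1.0]] * 5 + [[0.0, 1.0]] * 5)
signed = np.array([1.0] * 5 + [-1.0] * 5)  # hypothetical reward labels

def reward_fn(actions):
    # Reward keeping relevant sentences, penalize keeping irrelevant ones.
    return float(actions @ signed) / len(actions)

w = np.zeros(2)       # linear selector weights
baseline = 0.0        # running-mean reward baseline (variance reduction)
lr = 0.5
for _ in range(2000):
    scores = feats @ w
    probs = 1.0 / (1.0 + np.exp(-scores))          # keep-probability per sentence
    actions = (rng.random(len(probs)) < probs).astype(float)  # sample keep/drop
    r = reward_fn(actions)
    baseline = 0.9 * baseline + 0.1 * r
    # REINFORCE: (actions - probs) is the gradient of the log-probability
    # of the sampled Bernoulli decisions with respect to the scores.
    w += lr * (r - baseline) * feats.T @ (actions - probs)

probs = 1.0 / (1.0 + np.exp(-(feats @ w)))
print(probs[0], probs[-1])  # relevant sentences end up far more likely to be kept
```

After training, the selector keeps relevant sentences with high probability and drops irrelevant ones, which mirrors the distillation role the selectors play in the full model; the actual system would score sentences with a learned encoder and use the downstream ranking quality as the reward.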



      Published In

ACM Transactions on Information Systems, Volume 41, Issue 4
October 2023, 958 pages
ISSN: 1046-8188
EISSN: 1558-2868
DOI: 10.1145/3587261

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 21 April 2023
      Online AM: 16 February 2023
      Accepted: 06 February 2023
      Revised: 04 February 2023
      Received: 10 March 2022
      Published in TOIS Volume 41, Issue 4


      Author Tags

      1. Knowledge-grounded dialogue systems
      2. context-response matching
      3. response ranking
      4. multi-turn dialogues
      5. pre-training
      6. reinforcement learning
      7. pre-trained language models
8. content planning

      Qualifiers

      • Research-article
