
Taxonomy of Abstractive Dialogue Summarization: Scenarios, Approaches, and Future Directions

Published: 05 October 2023

Abstract

Abstractive dialogue summarization generates a concise and fluent summary that covers the salient information in a dialogue among two or more interlocutors. It has attracted significant attention in recent years, driven by the rapid growth of social communication platforms and the urgent need to understand and digest dialogue information efficiently. Unlike the news stories and articles handled by traditional document summarization, dialogues exhibit unique characteristics and pose additional challenges, including different language styles and formats, scattered information, flexible discourse structures, and unclear topic boundaries. This survey provides a comprehensive investigation of existing work on abstractive dialogue summarization, covering scenarios, approaches, and evaluations. It categorizes the task into two broad classes according to the type of input dialogue, i.e., open-domain and task-oriented, and presents a taxonomy of existing techniques along three directions, namely, injecting dialogue features, designing auxiliary training tasks, and using additional data. Datasets under different scenarios and widely accepted evaluation metrics are summarized for completeness. The survey then discusses trends in scenarios and techniques, together with deep insights into the correlations between extensively exploited features and different scenarios. Based on these analyses, we recommend future directions, including more controlled and complicated scenarios, technical innovations and comparisons, and publicly available datasets in specialized domains.
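
To make the task concrete, the sketch below (illustrative only, not taken from the survey) runs an off-the-shelf abstractive summarizer over a short two-party chat and scores the output against a human-written reference with ROUGE, the most widely reported automatic metric for this task. The toy dialogue, the reference summary, and the checkpoint name (assumed here to be a BART model fine-tuned on a chat-summarization corpus such as SAMSum) are illustrative assumptions, not part of the survey.

# Minimal sketch: summarize a short chat and score it with ROUGE.
# The checkpoint name and the toy dialogue are assumptions for illustration only.
from transformers import pipeline
from rouge_score import rouge_scorer

dialogue = (
    "Anna: Are we still meeting at 6 tonight?\n"
    "Ben: Yes, but could we push it to 6:30? My train is delayed.\n"
    "Anna: Sure, see you at 6:30 at the usual cafe."
)
reference = "Anna and Ben postpone their meeting to 6:30 because Ben's train is delayed."

# Any sequence-to-sequence checkpoint fine-tuned on dialogue data would do here;
# "philschmid/bart-large-cnn-samsum" is one publicly shared example (assumed available).
summarizer = pipeline("summarization", model="philschmid/bart-large-cnn-samsum")
candidate = summarizer(dialogue, max_length=40, min_length=10, do_sample=False)[0]["summary_text"]

# Report ROUGE-1/2/L against the human reference.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
print(candidate)
print(scorer.score(reference, candidate))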

Published In

ACM Computing Surveys, Volume 56, Issue 3
March 2024, 977 pages
EISSN: 1557-7341
DOI: 10.1145/3613568

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 05 October 2023
Online AM: 07 September 2023
Accepted: 22 August 2023
Revised: 30 July 2023
Received: 30 January 2022
Published in CSUR Volume 56, Issue 3

Author Tags

  1. Dialogue summarization
  2. dialogue context modeling
  3. abstractive summarization
