DOI: 10.1145/3397271.3401285
Short Paper, SIGIR Conference Proceedings

Evaluation of Cross Domain Text Summarization

Published: 25 July 2020

Abstract

Extractive-abstractive hybrid summarization can generate readable, concise summaries of long documents. Extraction-then-abstraction and extraction-with-abstraction are two representative approaches to hybrid summarization, but their general performance has yet to be evaluated in large-scale experiments. We examined two state-of-the-art hybrid summarization algorithms from three novel perspectives: we applied them to a form of headline generation not previously tried; we evaluated the generalization of the algorithms by testing them both within and across news domains; and we compared the automatic assessment of the algorithms with human comparative judgments. We found that an extraction-then-abstraction hybrid approach outperforms an extraction-with-abstraction approach, particularly for cross-domain headline generation.
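The abstract compares automatic assessment of generated headlines with human comparative judgments. ROUGE (Lin, 2004) is the standard automatic metric for this kind of evaluation; the sketch below is a minimal pure-Python illustration of unigram ROUGE-1 F1 between a generated and a reference headline. The function name and whitespace tokenization are illustrative simplifications, not the authors' implementation.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a candidate and a reference headline."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("police arrest suspect in downtown robbery",
                      "suspect arrested in downtown robbery"), 3))  # → 0.727
```

Note that surface-overlap metrics like this miss paraphrases ("arrest" vs. "arrested" above does not match), which is one reason automatic scores are compared against human judgments.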

Supplementary Material

MOV File (3397271.3401285.mov)
Short Presentation of Evaluation of Cross Domain Summarisation.


Cited By

  • (2024) "Sankshepan"—Summarizing Kannada Text Using BART Transformer. Data Science and Big Data Analytics, 10.1007/978-981-99-9179-2_51, pp. 677-691. Online publication date: 17-Mar-2024.


Published In

SIGIR '20: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval
July 2020
2548 pages
ISBN:9781450380164
DOI:10.1145/3397271
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. evaluation
  2. headline generation
  3. text summarization

Qualifiers

  • Short-paper

Funding Sources

  • Australian Research Council Discovery Projects

Conference

SIGIR '20

Acceptance Rates

Overall Acceptance Rate 792 of 3,983 submissions, 20%

Article Metrics

  • Downloads (Last 12 months)20
  • Downloads (Last 6 weeks)2
Reflects downloads up to 15 Oct 2024

