DOI: 10.1145/3366423.3380114

Generating Multi-hop Reasoning Questions to Improve Machine Reading Comprehension

Published: 20 April 2020

Abstract

This paper addresses multi-hop question generation, which aims to generate questions whose answers require reasoning over multiple sentences and relations. We first build an entity graph that integrates the entities scattered across a text according to their contextual relations. We then heuristically extract a sub-graph based on evidential relations and entity types, obtaining a reasoning chain and the related textual content for each question. Guided by the chain, we propose a holistic generator-evaluator network to form the questions; this guidance helps ensure that the generated questions are rational, i.e., that multi-hop deduction is needed to reach the answers. The generator is a sequence-to-sequence model equipped with several techniques that keep the questions syntactically and semantically valid. The evaluator optimizes the generator network with a hybrid mechanism that combines supervised and reinforcement learning. Experimental results on the HotpotQA dataset demonstrate the effectiveness of our approach: the generated samples can serve as pseudo training data that alleviates data scarcity for neural networks and helps state-of-the-art models learn multi-hop machine comprehension.
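The first stage of the pipeline described above (an entity graph over co-occurring entities, from which a reasoning chain is extracted) can be sketched minimally as follows. This is a hypothetical illustration, not the paper's implementation: the paper builds the graph from contextual relations and refines it with learned components, whereas this sketch just links entity pairs that co-occur and recovers a chain by breadth-first search; all entity names are invented.

```python
from collections import deque

def build_entity_graph(entity_pairs):
    """Build an undirected adjacency map from co-occurring entity pairs."""
    graph = {}
    for a, b in entity_pairs:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    return graph

def reasoning_chain(graph, start, answer):
    """Shortest entity path from a question anchor to the answer (BFS)."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == answer:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # answer not reachable from the anchor entity

# Hypothetical entity pairs co-occurring within sentences of two passages:
pairs = [("Scott Derrickson", "Doctor Strange"),
         ("Doctor Strange", "Marvel"),
         ("Ed Wood", "Plan 9"),
         ("Scott Derrickson", "Ed Wood")]
g = build_entity_graph(pairs)
print(reasoning_chain(g, "Marvel", "Ed Wood"))
# -> ['Marvel', 'Doctor Strange', 'Scott Derrickson', 'Ed Wood']
```

A chain longer than two nodes indicates that answering requires hopping through intermediate entities, which is the property the generator is then guided to encode into the question.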




        Published In

        WWW '20: Proceedings of The Web Conference 2020
        April 2020, 3143 pages
        ISBN: 9781450370233
        DOI: 10.1145/3366423

        Publisher

        Association for Computing Machinery, New York, NY, United States



        Author Tags

        1. machine reading comprehension
        2. multi-hop question generation
        3. reasoning chain

        Qualifiers

        • Research-article
        • Research
        • Refereed limited

        Conference

        WWW '20: The Web Conference 2020
        April 20-24, 2020, Taipei, Taiwan

        Acceptance Rates

        Overall Acceptance Rate 1,899 of 8,196 submissions, 23%

        Article Metrics

        • Downloads (last 12 months): 62
        • Downloads (last 6 weeks): 13
        Reflects downloads up to 01 Nov 2024


        Cited By

        • (2024) Complex query answering over knowledge graphs foundation model using region embeddings on a Lie group. World Wide Web 27(3). DOI: 10.1007/s11280-024-01254-7. Published 11 Apr 2024.
        • (2024) Syntax-guided question generation using prompt learning. Neural Computing and Applications 36(12), 6271-6282. DOI: 10.1007/s00521-024-09421-7. Published 26 Feb 2024.
        • (2022) Unified Question Generation with Continual Lifelong Learning. Proceedings of the ACM Web Conference 2022, 871-881. DOI: 10.1145/3485447.3511930. Published 25 Apr 2022.
        • (2022) QA4QG: Using Question Answering to Constrain Multi-Hop Question Generation. ICASSP 2022, 8232-8236. DOI: 10.1109/ICASSP43922.2022.9747008. Published 23 May 2022.
        • (2021) Multi-hop Reasoning Question Generation and Its Application. IEEE Transactions on Knowledge and Data Engineering. DOI: 10.1109/TKDE.2021.3073227. Published 2021.
        • (2021) Adaptive Cross-Lingual Question Generation with Minimal Resources. The Computer Journal. DOI: 10.1093/comjnl/bxab106. Published 19 Jul 2021.
