Ranking paragraphs for improving answer recall in open-domain question answering

J Lee, S Yun, H Kim, M Ko, J Kang - arXiv preprint arXiv:1810.00494, 2018 - arxiv.org
Recently, open-domain question answering (QA) has been combined with machine comprehension models to find answers in a large knowledge source. As open-domain QA requires retrieving relevant documents from text corpora to answer questions, its performance largely depends on the performance of document retrievers. However, since traditional information retrieval systems are not effective in obtaining documents with a high probability of containing answers, they lower the performance of QA systems. Simply retrieving more documents increases the number of irrelevant documents, which also degrades the performance of QA systems. In this paper, we introduce Paragraph Ranker, which ranks paragraphs of retrieved documents for higher answer recall with less noise. We show that ranking paragraphs and aggregating answers using Paragraph Ranker improves the performance of an open-domain QA pipeline on four open-domain QA datasets by 7.8% on average.
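To make the rank-then-read idea concrete, here is a minimal sketch of the pipeline stage the abstract describes: score each paragraph of the retrieved documents against the question and keep only the top-k paragraphs for the reader. The TF-IDF scorer below is a stand-in of our own choosing; the paper's Paragraph Ranker is a learned neural model, and the function names here are illustrative, not from the paper.

```python
# Sketch of a paragraph-ranking step, assuming scikit-learn is available.
# The actual Paragraph Ranker is a trained model; TF-IDF cosine similarity
# is used here only to illustrate the rank-then-read flow.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def rank_paragraphs(question, paragraphs, top_k=3):
    """Return the top_k paragraphs most similar to the question."""
    vectorizer = TfidfVectorizer()
    # Fit on paragraphs plus the question so both share one vocabulary.
    matrix = vectorizer.fit_transform(paragraphs + [question])
    para_vecs, question_vec = matrix[:-1], matrix[-1]
    scores = cosine_similarity(para_vecs, question_vec).ravel()
    ranked = sorted(zip(scores, paragraphs), key=lambda p: p[0], reverse=True)
    return [p for _, p in ranked[:top_k]]


if __name__ == "__main__":
    question = "Who wrote Hamlet?"
    paragraphs = [
        "Hamlet is a tragedy written by William Shakespeare around 1600.",
        "The Globe Theatre was rebuilt in 1997 on the bank of the Thames.",
        "Shakespeare's plays include comedies, histories, and tragedies.",
    ]
    # Only the highest-scoring paragraphs are passed on to the reader,
    # reducing the noise from irrelevant retrieved text.
    for p in rank_paragraphs(question, paragraphs, top_k=2):
        print(p)
```

In the full pipeline, the reader then extracts candidate answers from these top-k paragraphs and the candidates are aggregated into a final answer; ranking before reading is what trades retrieval breadth for higher answer recall with less noise.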