DOI: 10.1145/3357384.3358148

A Compare-Aggregate Model with Latent Clustering for Answer Selection


Published: 03 November 2019

Abstract

In this paper, we propose a novel method for the sentence-level answer-selection task, a fundamental problem in natural language processing. First, we explore the effect of additional information by adopting a pretrained language model to compute the vector representation of the input text and by applying transfer learning from a large-scale corpus. Second, we enhance the compare-aggregate model by proposing a novel latent clustering method to compute additional information within the target corpus and by changing the objective function from listwise to pointwise. To evaluate the performance of the proposed approaches, experiments are performed with the WikiQA and TREC-QA datasets. The empirical results demonstrate the superiority of our proposed approach, which achieves state-of-the-art performance on both datasets.
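The two modeling changes named in the abstract (a latent-clustering auxiliary feature, and a pointwise rather than listwise objective) can be sketched in miniature as follows. This is a hedged illustration only: the names `latent_cluster_features`, `memory`, and `k`, and all shapes, are assumptions for the sketch, not the authors' implementation.

```python
import math

def dot(a, b):
    # Plain dot product between two equal-length vectors.
    return sum(x * y for x, y in zip(a, b))

def latent_cluster_features(h, memory, k=2):
    """Latent-clustering sketch. `h` is a sentence representation (list of
    floats) and `memory` is a list of n trainable latent cluster vectors;
    names and shapes here are illustrative assumptions. The k clusters most
    similar to h are kept, and their softmax-weighted sum is returned as an
    auxiliary, cluster-aware feature."""
    scores = [dot(m, h) for m in memory]
    top = sorted(range(len(memory)), key=scores.__getitem__)[-k:]
    mx = max(scores[i] for i in top)
    w = [math.exp(scores[i] - mx) for i in top]  # softmax over the top-k scores
    z = sum(w)
    return [sum((w[j] / z) * memory[top[j]][c] for j in range(k))
            for c in range(len(h))]

def pointwise_loss(logit, label):
    """Pointwise objective sketch: score each (question, answer) pair on its
    own with sigmoid + binary cross-entropy, instead of a listwise softmax
    normalized over all candidate answers for the question."""
    p = 1.0 / (1.0 + math.exp(-logit))
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))
```

Under a listwise objective, candidate scores for one question would be normalized jointly; the pointwise form above trains each question-answer pair independently, which is the change the abstract refers to.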




Published In

CIKM '19: Proceedings of the 28th ACM International Conference on Information and Knowledge Management
November 2019, 3373 pages
ISBN: 9781450369763
DOI: 10.1145/3357384

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. deep learning
  2. information retrieval
  3. natural language processing
  4. question answering

Qualifiers

  • Short-paper

Conference

CIKM '19

Acceptance Rates

CIKM '19 paper acceptance rate: 202 of 1,031 submissions (20%)
Overall acceptance rate: 1,861 of 8,427 submissions (22%)



Cited By

  • (2024) Cross-biased Contrastive Learning for Answer Selection with Dual-Tower Structure. Neurocomputing, 128641. DOI: 10.1016/j.neucom.2024.128641
  • (2024) MATER: Bi-level matching-aggregation model for time-aware expert recommendation. Expert Systems with Applications, 237, 121576. DOI: 10.1016/j.eswa.2023.121576
  • (2024) Multi-view pre-trained transformer via hierarchical capsule network for answer sentence selection. Applied Intelligence, 54(21), 10561-10580. DOI: 10.1007/s10489-024-05513-y
  • (2023) Dual Fusion-Propagation Graph Neural Network for Multi-View Clustering. IEEE Transactions on Multimedia, 25, 9203-9215. DOI: 10.1109/TMM.2023.3248173
  • (2023) Decision Tree Clustering for Time Series Data: An Approach for Enhanced Interpretability and Efficiency. PRICAI 2023: Trends in Artificial Intelligence, 457-468. DOI: 10.1007/978-981-99-7022-3_42
  • (2023) Improving Open-Domain Answer Sentence Selection by Distributed Clients with Privacy Preservation. Advanced Data Mining and Applications, 15-29. DOI: 10.1007/978-3-031-46677-9_2
  • (2023) Efficient Fine-Tuning Large Language Models for Knowledge-Aware Response Planning. Machine Learning and Knowledge Discovery in Databases: Research Track, 593-611. DOI: 10.1007/978-3-031-43415-0_35
  • (2022) Machine Reading at Scale: A Search Engine for Scientific and Academic Research. Systems, 10(2), 43. DOI: 10.3390/systems10020043
  • (2022) RLAS-BIABC. Computational Intelligence and Neuroscience, 2022. DOI: 10.1155/2022/7839840
  • (2022) Clustering-based Sequence to Sequence Model for Generative Question Answering in a Low-resource Language. ACM Transactions on Asian and Low-Resource Language Information Processing, 22(2), 1-14. DOI: 10.1145/3563036
