
YNU-HPCC at SemEval-2019 Task 8: Using A LSTM-Attention Model for Fact-Checking in Community Forums

Peng Liu, Jin Wang, Xuejie Zhang


Abstract
We propose a system that uses a long short-term memory with attention mechanism (LSTM-Attention) model to complete the task. The model uses two LSTMs to extract the features of the question and the answer. Each sequence of features is then composed into a single vector using the attention mechanism, and the two vectors are concatenated into one. Finally, the concatenated vector is fed into a multilayer perceptron (MLP), whose output layer uses the softmax function to classify each provided answer into one of three categories. This model extracts the features of the question-answer pair well, and the results show that the proposed system outperforms the baseline algorithm.
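
The abstract describes a full pipeline: two LSTMs encode the question and the answer, attention composes each hidden-state sequence into a vector, the two vectors are concatenated, and an MLP with a softmax output performs the three-way classification. The following is a minimal PyTorch sketch of that pipeline; all names, dimensions, and the exact attention scoring are illustrative assumptions, not taken from the paper.

# Minimal sketch of the LSTM-Attention architecture from the abstract.
# Dimensions and the attention formulation are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionPool(nn.Module):
    """Compose a sequence of LSTM hidden states into one vector
    via learned attention weights."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, states):           # states: (batch, seq_len, hidden_dim)
        scores = self.scorer(states)     # (batch, seq_len, 1)
        weights = F.softmax(scores, dim=1)
        return (weights * states).sum(dim=1)   # (batch, hidden_dim)

class LSTMAttention(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=128, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Two separate LSTMs: one encodes the question, one the answer.
        self.q_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.a_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.q_pool = AttentionPool(hidden_dim)
        self.a_pool = AttentionPool(hidden_dim)
        # MLP over the concatenated question/answer vectors;
        # the output layer feeds a softmax over the three answer classes.
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, question_ids, answer_ids):
        q_states, _ = self.q_lstm(self.embed(question_ids))
        a_states, _ = self.a_lstm(self.embed(answer_ids))
        q_vec = self.q_pool(q_states)          # attention-composed question vector
        a_vec = self.a_pool(a_states)          # attention-composed answer vector
        logits = self.mlp(torch.cat([q_vec, a_vec], dim=-1))
        return F.softmax(logits, dim=-1)       # three-way answer classification

In use, question_ids and answer_ids would be batches of padded token-ID tensors; a real implementation would also mask padding positions before the attention softmax.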
Anthology ID:
S19-2207
Volume:
Proceedings of the 13th International Workshop on Semantic Evaluation
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota, USA
Editors:
Jonathan May, Ekaterina Shutova, Aurelie Herbelot, Xiaodan Zhu, Marianna Apidianaki, Saif M. Mohammad
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
1180–1184
URL:
https://aclanthology.org/S19-2207
DOI:
10.18653/v1/S19-2207
Cite (ACL):
Peng Liu, Jin Wang, and Xuejie Zhang. 2019. YNU-HPCC at SemEval-2019 Task 8: Using A LSTM-Attention Model for Fact-Checking in Community Forums. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 1180–1184, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
Cite (Informal):
YNU-HPCC at SemEval-2019 Task 8: Using A LSTM-Attention Model for Fact-Checking in Community Forums (Liu et al., SemEval 2019)
PDF:
https://aclanthology.org/S19-2207.pdf