
Replicability Analysis for Natural Language Processing: Testing Significance with Multiple Datasets

Rotem Dror, Gili Baumer, Marina Bogomolov, Roi Reichart


Abstract
With the ever-growing amount of textual data from a large variety of languages, domains, and genres, it has become standard to evaluate NLP algorithms on multiple datasets in order to ensure consistent performance across heterogeneous setups. However, such multiple comparisons pose significant challenges to traditional statistical analysis methods in NLP and can lead to erroneous conclusions. In this paper we propose a Replicability Analysis framework for a statistically sound analysis of multiple comparisons between algorithms for NLP tasks. We discuss the theoretical advantages of this framework over the current, statistically unjustified practice in the NLP literature, and demonstrate its empirical value across four applications: multi-domain dependency parsing, multilingual POS tagging, cross-domain sentiment classification, and word similarity prediction.
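The multiple-comparisons issue the abstract raises can be made concrete with a standard family-wise error correction. The sketch below applies the Holm-Bonferroni step-down procedure to per-dataset p-values; this is a standard correction in the same family as the methods the paper builds on, not the authors' exact replicability procedure, and the p-values, function name, and dataset scenario are illustrative assumptions.

```python
# Minimal sketch: Holm-Bonferroni step-down correction over per-dataset
# p-values (e.g., from paired significance tests comparing two parsers
# on several domains). Illustrative only; not the paper's exact method.

def holm_bonferroni(pvalues, alpha=0.05):
    """Return a boolean list: True where the per-dataset null hypothesis
    is rejected after the Holm-Bonferroni step-down correction."""
    m = len(pvalues)
    # Sort p-values in ascending order while remembering dataset indices.
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        # Compare the (rank+1)-th smallest p-value to alpha / (m - rank).
        if pvalues[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # step-down: stop at the first non-rejection
    return reject

if __name__ == "__main__":
    # Hypothetical p-values from comparisons on five domains.
    pvals = [0.001, 0.012, 0.03, 0.04, 0.20]
    print(holm_bonferroni(pvals))  # [True, True, False, False, False]
```

Counting the rejected hypotheses gives a family-wise-error-controlled statement about how many datasets show a significant difference, rather than reporting each per-dataset test at an uncorrected level.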
Anthology ID: Q17-1033
Volume: Transactions of the Association for Computational Linguistics, Volume 5
Year: 2017
Address: Cambridge, MA
Editors: Lillian Lee, Mark Johnson, Kristina Toutanova
Venue: TACL
Publisher: MIT Press
Pages: 471–486
URL: https://aclanthology.org/Q17-1033
DOI: 10.1162/tacl_a_00074
Cite (ACL): Rotem Dror, Gili Baumer, Marina Bogomolov, and Roi Reichart. 2017. Replicability Analysis for Natural Language Processing: Testing Significance with Multiple Datasets. Transactions of the Association for Computational Linguistics, 5:471–486.
Cite (Informal): Replicability Analysis for Natural Language Processing: Testing Significance with Multiple Datasets (Dror et al., TACL 2017)
PDF: https://aclanthology.org/Q17-1033.pdf
Video: https://aclanthology.org/Q17-1033.mp4
Code: rtmdrr/replicability-analysis-NLP