
The Curse of Performance Instability in Analysis Datasets: Consequences, Source, and Suggestions

Xiang Zhou, Yixin Nie, Hao Tan, Mohit Bansal


Abstract
We find that the performance of state-of-the-art models on Natural Language Inference (NLI) and Reading Comprehension (RC) analysis/stress sets can be highly unstable. This raises three questions: (1) How will the instability affect the reliability of the conclusions drawn based on these analysis sets? (2) Where does this instability come from? (3) How should we handle this instability and what are some potential solutions? For the first question, we conduct a thorough empirical study over analysis sets and find that in addition to the unstable final performance, the instability exists all along the training curve. We also observe lower-than-expected correlations between the analysis validation set and standard validation set, questioning the effectiveness of the current model-selection routine. Next, to answer the second question, we give both theoretical explanations and empirical evidence regarding the source of the instability, demonstrating that the instability mainly comes from high inter-example correlations within analysis sets. Finally, for the third question, we discuss an initial attempt to mitigate the instability and suggest guidelines for future work such as reporting the decomposed variance for more interpretable results and fair comparison across models.
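The abstract's suggestion to report decomposed variance (rather than a single headline number) can be illustrated with a minimal sketch. The accuracy values below are hypothetical, standing in for results from several training runs with different random seeds; the point is that the analysis set shows much larger seed-to-seed spread than the standard validation set.

```python
import statistics

def summarize_runs(run_accuracies):
    """Summarize per-seed accuracies as mean and standard deviation,
    so run-to-run instability is visible alongside the average score."""
    mean = statistics.mean(run_accuracies)
    std = statistics.stdev(run_accuracies)
    return mean, std

# Hypothetical accuracies from five seeds on a standard validation set
# vs. an analysis/stress set (illustrative numbers, not from the paper).
standard_val = [0.871, 0.874, 0.869, 0.873, 0.870]
analysis_set = [0.612, 0.655, 0.581, 0.640, 0.598]

for name, accs in [("standard val", standard_val),
                   ("analysis set", analysis_set)]:
    mean, std = summarize_runs(accs)
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

Reporting the spread in this way makes comparisons across models fairer: two models whose mean analysis-set accuracies differ by less than the seed-induced standard deviation cannot be meaningfully ranked.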
Anthology ID:
2020.emnlp-main.659
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
8215–8228
URL:
https://aclanthology.org/2020.emnlp-main.659
DOI:
10.18653/v1/2020.emnlp-main.659
Bibkey:
Cite (ACL):
Xiang Zhou, Yixin Nie, Hao Tan, and Mohit Bansal. 2020. The Curse of Performance Instability in Analysis Datasets: Consequences, Source, and Suggestions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8215–8228, Online. Association for Computational Linguistics.
Cite (Informal):
The Curse of Performance Instability in Analysis Datasets: Consequences, Source, and Suggestions (Zhou et al., EMNLP 2020)
PDF:
https://aclanthology.org/2020.emnlp-main.659.pdf
Video:
https://slideslive.com/38939082
Code:
owenzx/InstabilityAnalysis
Data:
GLUE, MultiNLI, SICK, SNLI, SQuAD