
What Does BERT Look at? An Analysis of BERT’s Attention

Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning


Abstract
Large pre-trained neural networks such as BERT have had great recent success in NLP, motivating a growing body of research investigating what aspects of language they are able to learn from unlabeled data. Most recent analysis has focused on model outputs (e.g., language model surprisal) or internal vector representations (e.g., probing classifiers). Complementary to these works, we propose methods for analyzing the attention mechanisms of pre-trained models and apply them to BERT. BERT’s attention heads exhibit patterns such as attending to delimiter tokens, specific positional offsets, or broadly attending over the whole sentence, with heads in the same layer often exhibiting similar behaviors. We further show that certain attention heads correspond well to linguistic notions of syntax and coreference. For example, we find heads that attend to the direct objects of verbs, determiners of nouns, objects of prepositions, and coreferent mentions with remarkably high accuracy. Lastly, we propose an attention-based probing classifier and use it to further demonstrate that substantial syntactic information is captured in BERT’s attention.
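As an illustration of the kind of per-head attention analysis the abstract describes (a minimal sketch using the Hugging Face transformers library and bert-base-uncased, not the authors' original analysis code, which lives in the clarkkev/attention-analysis repository linked below):

# Sketch (assumption: Hugging Face transformers, not the paper's code) of
# extracting BERT's per-head attention maps, e.g. to measure how much
# attention each head places on the [SEP] delimiter token.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

sentence = "The cat sat on the mat because it was tired."
inputs = tokenizer(sentence, return_tensors="pt")
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
attentions = outputs.attentions
sep_index = tokens.index("[SEP]")

for layer, layer_attn in enumerate(attentions):
    # Average over query positions: fraction of attention mass each head
    # in this layer sends to the [SEP] token.
    sep_attn = layer_attn[0, :, :, sep_index].mean(dim=-1)
    print(f"layer {layer:2d}: mean attention to [SEP] per head:",
          [round(a.item(), 2) for a in sep_attn])

The same attention tensors can be inspected for the other patterns the paper reports, such as attention to fixed positional offsets or to syntactically related tokens.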
Anthology ID:
W19-4828
Volume:
Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Month:
August
Year:
2019
Address:
Florence, Italy
Editors:
Tal Linzen, Grzegorz Chrupała, Yonatan Belinkov, Dieuwke Hupkes
Venue:
BlackboxNLP
Publisher:
Association for Computational Linguistics
Pages:
276–286
URL:
https://aclanthology.org/W19-4828
DOI:
10.18653/v1/W19-4828
Cite (ACL):
Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What Does BERT Look at? An Analysis of BERT’s Attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276–286, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
What Does BERT Look at? An Analysis of BERT’s Attention (Clark et al., BlackboxNLP 2019)
PDF:
https://aclanthology.org/W19-4828.pdf
Code:
clarkkev/attention-analysis (+ additional community code)
Data:
Penn Treebank