
Simple dynamic word embeddings for mapping perceptions in the public sphere

Nabeel Gillani, Roger Levy


Abstract
Word embeddings trained on large-scale historical corpora can illuminate human biases and stereotypes that perpetuate social inequalities. These embeddings are often trained in separate vector space models defined according to different attributes of interest. In this paper, we introduce a single, unified dynamic embedding model that learns attribute-specific word embeddings and apply it to a novel dataset—talk radio shows from around the US—to analyze perceptions about refugees. We validate our model on a benchmark dataset and apply it to two corpora of talk radio shows averaging 117 million words produced over one month across 83 stations and 64 cities. Our findings suggest that dynamic word embeddings are capable of identifying nuanced differences in public discourse about contentious topics, pointing to their usefulness as a tool for better understanding how the public perceives and engages with different issues across time, geography, and other dimensions.
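
For context, the abstract contrasts the proposed unified model with the common baseline of training "separate vector space models defined according to different attributes of interest." The sketch below illustrates that baseline only, not the paper's dynamic model: it trains one skip-gram word2vec model per attribute slice of a corpus and compares the nearest neighbors of a query word across slices. The file names, the two-way geographic split, and the query word "refugees" are illustrative assumptions.

```python
# Hypothetical sketch of the "separate vector space models" baseline
# described in the abstract (NOT the paper's unified dynamic model).
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

def train_slice_model(path):
    """Train a skip-gram word2vec model on one attribute-specific corpus slice."""
    with open(path, encoding="utf-8") as f:
        sentences = [simple_preprocess(line) for line in f]
    return Word2Vec(
        sentences=sentences,
        vector_size=100,   # embedding dimensionality
        window=5,          # context window size
        min_count=5,       # ignore rare words
        sg=1,              # skip-gram objective
        workers=4,
    )

# One model per attribute value (here, a hypothetical geographic split of transcripts).
slices = {"region_a": "region_a.txt", "region_b": "region_b.txt"}
models = {name: train_slice_model(path) for name, path in slices.items()}

# Compare how a topic word is represented in each slice via its nearest neighbors.
for name, model in models.items():
    if "refugees" in model.wv:
        print(name, model.wv.most_similar("refugees", topn=10))
```

Because each slice gets its own vector space in this baseline, the embeddings are not directly comparable without an alignment step; the unified model described in the abstract is designed to avoid that limitation.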
Anthology ID: W19-2111
Original: W19-2111v1
Version 2: W19-2111v2
Volume: Proceedings of the Third Workshop on Natural Language Processing and Computational Social Science
Month: June
Year: 2019
Address: Minneapolis, Minnesota
Editors: Svitlana Volkova, David Jurgens, Dirk Hovy, David Bamman, Oren Tsur
Venue: NLP+CSS
Publisher: Association for Computational Linguistics
Pages: 94–99
URL: https://aclanthology.org/W19-2111
DOI: 10.18653/v1/W19-2111
Cite (ACL): Nabeel Gillani and Roger Levy. 2019. Simple dynamic word embeddings for mapping perceptions in the public sphere. In Proceedings of the Third Workshop on Natural Language Processing and Computational Social Science, pages 94–99, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal): Simple dynamic word embeddings for mapping perceptions in the public sphere (Gillani & Levy, NLP+CSS 2019)
PDF: https://aclanthology.org/W19-2111.pdf