In Silico Sociology: Forecasting COVID-19 Polarization with Large Language Models

AC Kozlowski, H Kwon, JA Evans - arXiv preprint arXiv:2407.11190, 2024 - arxiv.org
By training deep neural networks on massive archives of digitized text, large language models (LLMs) learn the complex linguistic patterns that constitute historic and contemporary discourses. We argue that LLMs can serve as a valuable tool for sociological inquiry by enabling accurate simulation of respondents from specific social and cultural contexts. Applying LLMs in this capacity, we reconstruct the public opinion landscape of 2019 to examine the extent to which the future polarization over COVID-19 was prefigured in existing political discourse. Using an LLM trained on texts published through 2019, we simulate the responses of American liberals and conservatives to a battery of pandemic-related questions. We find that the simulated respondents reproduce observed partisan differences in COVID-19 attitudes in 84% of cases, significantly greater than chance. Prompting the simulated respondents to justify their responses, we find that much of the observed partisan gap corresponds to differing appeals to freedom, safety, and institutional trust. Our findings suggest that the politicization of COVID-19 was largely consistent with the prior ideological landscape, and this unprecedented event served to advance history along its track rather than change the rails.
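The simulation protocol the abstract describes — prompting an LLM to answer as a respondent from a given ideological context, then checking whether simulated partisan gaps match observed ones in direction — can be sketched roughly as follows. This is a minimal illustration, not the authors' actual instrument: `build_prompt`, the 1–5 response scale, and the scoring functions are assumptions, and the LLM call itself is left as a caller-supplied function.

```python
from typing import Callable

def build_prompt(ideology: str, question: str) -> str:
    # Hypothetical persona prompt conditioning the model on a 2019-era
    # respondent of the given ideology (not the paper's actual wording).
    return (f"It is 2019. You are an American {ideology}. "
            f"Answer the survey question on a 1-5 agreement scale.\n"
            f"Question: {question}\nAnswer:")

def partisan_gap(ask: Callable[[str], float],
                 questions: list[str]) -> list[float]:
    # For each question, the difference between the mean simulated
    # liberal response and the mean simulated conservative response.
    # `ask` wraps whatever LLM backend is in use and returns a score.
    gaps = []
    for q in questions:
        lib = ask(build_prompt("liberal", q))
        con = ask(build_prompt("conservative", q))
        gaps.append(lib - con)
    return gaps

def directional_accuracy(simulated: list[float],
                         observed: list[float]) -> float:
    # Fraction of questions where the simulated partisan gap has the
    # same sign as the observed gap (the paper reports 84% agreement).
    hits = sum(1 for s, o in zip(simulated, observed)
               if (s > 0) == (o > 0))
    return hits / len(simulated)
```

In this framing, the substantive result is that `directional_accuracy` between gaps simulated from pre-2020 text and gaps measured in actual pandemic-era surveys is well above the 50% expected by chance.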