
Embedding Democratic Values into Social Media AIs via Societal Objective Functions

Published: 26 April 2024

Abstract

Mounting evidence indicates that the artificial intelligence (AI) systems that rank our social media feeds bear nontrivial responsibility for amplifying partisan animosity: negative thoughts, feelings, and behaviors toward political out-groups. Can we design these AIs to consider democratic values, such as mitigating partisan animosity, as part of their objective functions? We introduce a method for translating established, vetted social scientific constructs into AI objective functions, which we term societal objective functions, and demonstrate the method with an application to the political science construct of anti-democratic attitudes. Traditionally, we have lacked observable outcomes with which to train such models; however, the social sciences have developed survey instruments and qualitative codebooks for these constructs, and their precision facilitates translation into detailed prompts for large language models. We apply this method to create a democratic attitude model that estimates the extent to which a social media post promotes anti-democratic attitudes, and test this model across three studies. In Study 1, we test the attitudinal and behavioral effectiveness of the intervention among US partisans (N=1,380) by manually annotating (α=.895) social media posts with anti-democratic attitude scores and testing several feed ranking conditions based on these scores. Feeds that removed (d=.20) or downranked (d=.25) high-scoring posts reduced participants' partisan animosity without compromising their experience and engagement. In Study 2, we scale up the manual labels by creating the democratic attitude model, finding strong agreement with the manual labels (ρ=.75). Finally, in Study 3, we replicate Study 1 using the democratic attitude model instead of manual labels to test its attitudinal and behavioral impact (N=558), and again find that downranking feeds via the societal objective function reduced partisan animosity (d=.25). This method presents a novel strategy for drawing on social science theory and methods to mitigate societal harms in social media AIs.
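The pipeline the abstract describes — score each post for how much it promotes anti-democratic attitudes, then remove or downrank high-scoring posts when ranking the feed — can be sketched as follows. This is a minimal illustration, not the authors' implementation: in the paper the scores come from a large language model prompted with a political-science codebook, whereas `score_anti_democratic` below is a hypothetical keyword stand-in so the example runs offline, and the weight and threshold values are invented.

```python
# Sketch (assumptions labeled): combining an engagement score with a societal
# objective function when ranking a feed.
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    engagement: float  # platform-predicted engagement, in [0, 1]


def score_anti_democratic(post: Post) -> float:
    """Hypothetical stand-in for the paper's LLM-based democratic attitude
    model; a real implementation would prompt an LLM with the codebook."""
    cues = ("rigging", "stolen election", "enemy of the people")
    return 0.9 if any(c in post.text.lower() for c in cues) else 0.1


def downrank(posts: list[Post], weight: float = 1.0) -> list[Post]:
    """Downranking condition: subtract the weighted societal-harm score from
    engagement before sorting, so harmful posts sink rather than disappear."""
    return sorted(
        posts,
        key=lambda p: p.engagement - weight * score_anti_democratic(p),
        reverse=True,
    )


def remove(posts: list[Post], threshold: float = 0.5) -> list[Post]:
    """Removal condition: drop posts whose score exceeds a threshold."""
    return [p for p in posts if score_anti_democratic(p) < threshold]


feed = [
    Post("They are rigging the vote again!", engagement=0.9),
    Post("Officials certified the results today.", engagement=0.4),
    Post("Turnout hit a record high this year.", engagement=0.6),
]

reranked = downrank(feed)  # the hostile, high-engagement post falls to the bottom
filtered = remove(feed)    # or it is removed outright
```

The key design point, per the abstract, is that downranking trades a small amount of predicted engagement for a measured reduction in partisan animosity, rather than censoring content entirely.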


Cited By

  • Tweeting “in the language they understand”: a peace journalism conception of political contexts and media narratives on Nigeria's Twitter ban. Media International Australia (2024). https://doi.org/10.1177/1329878X241280234
  • Granting Non-AI Experts Creative Control Over AI Systems. Adjunct Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology (2024), 1--5. https://doi.org/10.1145/3672539.3686714
  • Concept Induction: Analyzing Unstructured Text with High-Level Concepts Using LLooM. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (2024), 1--28. https://doi.org/10.1145/3613904.3642830
  • Attraction to politically extreme users on social media. PNAS Nexus 3, 10 (2024). https://doi.org/10.1093/pnasnexus/pgae395
  • People who share encounters with racism are silenced online by humans and machines, but a guideline-reframing intervention holds promise. Proceedings of the National Academy of Sciences 121, 38 (2024). https://doi.org/10.1073/pnas.2322764121


Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 8, Issue CSCW1 (April 2024), 6294 pages
EISSN: 2573-0142
DOI: 10.1145/3661497

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

    1. affective polarization
    2. algorithms
    3. partisan animosity
    4. social media AIs
    5. social media users
    Qualifiers

    • Research-article

