DOI: 10.1145/3531146.3533088 · FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency · Research article · Open access

Taxonomy of Risks posed by Language Models

Published: 20 June 2022
Abstract

    Responsible innovation on large-scale Language Models (LMs) requires foresight into and in-depth understanding of the risks these models may pose. This paper develops a comprehensive taxonomy of ethical and social risks associated with LMs. We identify twenty-one risks, drawing on expertise and literature from computer science, linguistics, and the social sciences. We situate these risks in our taxonomy of six risk areas: I. Discrimination, Hate Speech and Exclusion, II. Information Hazards, III. Misinformation Harms, IV. Malicious Uses, V. Human-Computer Interaction Harms, and VI. Environmental and Socioeconomic Harms. For risks that have already been observed in LMs, the causal mechanism leading to harm, evidence of the risk, and approaches to risk mitigation are discussed. We further describe and analyse risks that have not yet been observed but are anticipated based on assessments of other language technologies, and situate these in the same taxonomy. We underscore that it is the responsibility of organizations to engage with the mitigations we discuss throughout the paper. We close by highlighting challenges and directions for further research on risk evaluation and mitigation, with the goal of ensuring that language models are developed responsibly.

    References

    [1]
    Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. Deep Learning with Differential Privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security(CCS ’16). Association for Computing Machinery, Vienna, Austria, 308–318. https://doi.org/10.1145/2976749.2978318
    [2]
    Abubakar Abid, Maheen Farooqi, and James Zou. 2021. Persistent Anti-Muslim Bias in Large Language Models. arXiv:2101.05783 [cs] (January 2021). http://arxiv.org/abs/2101.05783 arXiv:2101.05783.
    [3]
    Daron Acemoglu and Pascual Restrepo. 2018. Artificial Intelligence, Automation and Work. Working Paper 24196. National Bureau of Economic Research. https://doi.org/10.3386/w24196
    [4]
    David Ifeoluwa Adelani, Jade Abbott, Graham Neubig, Daniel D’souza, Julia Kreutzer, Constantine Lignos, Chester Palen-Michel, Happy Buzaaba, Shruti Rijhwani, Sebastian Ruder, Stephen Mayhew, Israel Abebe Azime, Shamsuddeen Muhammad, Chris Chinenye Emezue, Joyce Nakatumba-Nabende, Perez Ogayo, Anuoluwapo Aremu, Catherine Gitau, Derguene Mbaye, Jesujoba Alabi, Seid Muhie Yimam, Tajuddeen Gwadabe, Ignatius Ezeani, Rubungo Andre Niyongabo, Jonathan Mukiibi, Verrah Otiende, Iroro Orife, Davis David, Samba Ngom, Tosin Adewumi, Paul Rayson, Mofetoluwa Adeyemi, Gerald Muriuki, Emmanuel Anebi, Chiamaka Chukwuneke, Nkiruka Odu, Eric Peter Wairagala, Samuel Oyerinde, Clemencia Siro, Tobius Saul Bateesa, Temilola Oloyede, Yvonne Wambui, Victor Akinode, Deborah Nabagereka, Maurice Katusiime, Ayodele Awokoya, Mouhamadane MBOUP, Dibora Gebreyohannes, Henok Tilaye, Kelechi Nwaike, Degaga Wolde, Abdoulaye Faye, Blessing Sibanda, Orevaoghene Ahia, Bonaventure F. P. Dossou, Kelechi Ogueji, Thierno Ibrahima DIOP, Abdoulaye Diallo, Adewale Akinfaderin, Tendai Marengereke, and Salomey Osei. 2021. MasakhaNER: Named Entity Recognition for African Languages. arXiv:2103.11811 [cs] (July 2021). http://arxiv.org/abs/2103.11811 arXiv:2103.11811.
    [5]
    Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, and Quoc V. Le. 2020. Towards a Human-like Open-Domain Chatbot. arXiv:2001.09977 [cs, stat] (Feb. 2020). http://arxiv.org/abs/2001.09977 arXiv:2001.09977.
    [6]
    Blaise Agüera y Arcas, Margaret Mitchell, and Alexander Todorov. 2017. Physiognomy’s New Clothes. https://medium.com/@blaisea/physiognomys-new-clothes-f2d4b59fdd6a
    [7]
    Ross Andersen. 2020. The Panopticon Is Already Here. The Atlantic (July 2020). https://www.theatlantic.com/magazine/archive/2020/09/china-ai-surveillance/614197/
    [8]
    Kristjan Arumae and Parminder Bhatia. 2020. CALM: Continuous Adaptive Learning for Language Modeling. arXiv:2004.03794 [cs] (April 2020). http://arxiv.org/abs/2004.03794 arXiv:2004.03794.
    [9]
    Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Jackson Kernion, Kamal Ndousse, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, and Jared Kaplan. 2021. A General Language Assistant as a Laboratory for Alignment. arXiv:2112.00861 [cs] (Dec. 2021). http://arxiv.org/abs/2112.00861 arXiv:2112.00861.
    [10]
    David Autor and Anna Salomons. 2019. New Frontiers: The Evolving Content and Geography of New Work in the 20th Century - David Autor. (2019). https://app.scholarsite.io/david-autor/articles/new-frontiers-the-evolving-content-and-geography-of-new-work-in-the-20th-century Working Paper.
    [11]
    Eugene Bagdasaryan and Vitaly Shmatikov. 2021. Spinning Language Models for Propaganda-As-A-Service. arXiv:2112.05224 [cs] (Dec. 2021). http://arxiv.org/abs/2112.05224 arXiv:2112.05224.
    [12]
    Solon Barocas, Moritz Hardt, and Arvind Narayanan. 2019. Fairness and machine learning. fairmlbook.org. https://fairmlbook.org/
    [13]
    Solon Barocas and Andrew D. Selbst. 2016. Big Data’s Disparate Impact. California Law Review 104 (2016), 671. https://heinonline.org/HOL/Page?handle=hein.journals/calr104&id=695&div=&collection=
    [14]
    Emily M. Bender. 2011. On Achieving and Evaluating Language-Independence in NLP. Linguistic Issues in Language Technology 6, 0 (November 2011). http://elanguage.net/journals/lilt/article/view/2624
    [15]
    Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency(FAccT ’21). Association for Computing Machinery, Virtual Event, Canada, 610–623. https://doi.org/10.1145/3442188.3445922
    [16]
    Emily M. Bender and Alexander Koller. 2020. Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, 5185–5198. https://doi.org/10.18653/v1/2020.acl-main.463
    [17]
    Yoshua Bengio. 2008. Neural net language models., 3881 pages. http://www.scholarpedia.org/article/Neural_net_language_models
    [18]
    Ruha Benjamin. 2020. Race After Technology: Abolitionist Tools for the New Jim Code. Social Forces 98, 4 (June 2020), 1–3. https://doi.org/10.1093/sf/soz162
    [19]
    Hilary Bergen. 2016. ‘I’d Blush if I Could’: Digital Assistants, Disembodied Cyborgs and the Problem of Gender. Word and Text, A Journal of Literary Studies and Linguistics VI, 01(2016), 95–113. https://www.ceeol.com/search/article-detail?id=469884
    [20]
    Federico Bianchi and Dirk Hovy. 2021. On the gap between adoption and understanding in NLP. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. 3895–3901.
    [21]
    Timothy W. Bickmore, Ha Trinh, Stefan Olafsson, Teresa K. O’Leary, Reza Asadi, Nathaniel M. Rickles, and Ricardo Cruz. 2018. Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: An Observational Study of Siri, Alexa, and Google Assistant. Journal of Medical Internet Research 20, 9 (September 2018), e11510. https://doi.org/10.2196/11510
    [22]
    Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (Technology) is Power: A Critical Survey of ”Bias” in NLP. arXiv:2005.14050 [cs] (May 2020). http://arxiv.org/abs/2005.14050 arXiv:2005.14050.
    [23]
    Su Lin Blodgett, Lisa Green, and Brendan O’Connor. 2016. Demographic Dialectal Variation in Social Media: A Case Study of African-American English. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, 1119–1130. https://doi.org/10.18653/v1/D16-1120
    [24]
    Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, and Percy Liang. 2021. On the Opportunities and Risks of Foundation Models. arXiv:2108.07258 [cs] (August 2021). http://arxiv.org/abs/2108.07258 arXiv:2108.07258.
    [25]
    Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W. Rae, Erich Elsen, and Laurent Sifre. 2022. Improving language models by retrieving from trillions of tokens. arXiv:2112.04426 [cs] (Jan. 2022). http://arxiv.org/abs/2112.04426 arXiv:2112.04426.
    [26]
    Nick Bostrom. 2014. Superintelligence: paths, dangers, strategies. Oxford University Press, Oxford. OCLC: ocn881706835.
    [27]
    Nick Bostrom 2011. Information hazards: A typology of potential harms from knowledge. Review of Contemporary Philosophy(2011), 44–79.
    [28]
    Geoffrey C. Bowker and Susan Leigh Star. 1999. Sorting Things Out: Classification and Its Consequences. MIT Press, Cambridge, MA, USA.
    [29]
    Cynthia Breazeal and Brian Scassellati. 2000. Infant-like Social Interactions between a Robot and a Human Caregiver. Adaptive Behavior 8, 1 (January 2000), 49–74. https://doi.org/10.1177/105971230000800104
    [30]
    Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. arXiv:2005.14165 [cs] (July 2020). http://arxiv.org/abs/2005.14165 arXiv:2005.14165.
    [31]
    Ben Buchanan, Andrew Lohn, Micah Musser, and Sedova Katerina. 2021. Truth, Lies, and Truth, Lies, and Automation: How Language Models Could Change DisinformationAutomation: How Language Models Could Change Disinformation. Technical Report. CSET.
    [32]
    Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science 356, 6334 (April 2017), 183–186. https://doi.org/10.1126/science.aal4230 arXiv:1608.07187.
    [33]
    Yang Trista Cao and Hal Daumé III. 2020. Toward Gender-Inclusive Coreference Resolution. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (2020), 4568–4595. https://doi.org/10.18653/v1/2020.acl-main.418 arXiv:1910.13913.
    [34]
    Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, Alina Oprea, and Colin Raffel. 2021. Extracting Training Data from Large Language Models. arXiv:2012.07805 [cs] (June 2021). http://arxiv.org/abs/2012.07805 arXiv:2012.07805.
    [35]
    Stephen Cave and Kanta Dihal. 2020. The Whiteness of AI. Philosophy & Technology 33, 4 (December 2020), 685–703. https://doi.org/10.1007/s13347-020-00415-6
    [36]
    Amanda Cercas Curry, Judy Robertson, and Verena Rieser. 2020. Conversational Assistants and Gender Stereotypes: Public Perceptions and Desiderata for Voice Personas. In Proceedings of the Second Workshop on Gender Bias in Natural Language Processing. Association for Computational Linguistics, Barcelona, Spain (Online), 72–78. https://aclanthology.org/2020.gebnlp-1.7
    [37]
    Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating Large Language Models Trained on Code. arXiv:2107.03374 [cs] (July 2021). http://arxiv.org/abs/2107.03374 arXiv:2107.03374.
    [38]
    Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311(2022).
    [39]
    Kate Crawford. 2021. Atlas of AI. Yale University Press. https://yalebooks.yale.edu/book/9780300209570/atlas-ai
    [40]
    Kimberlé Crenshaw. 2017. On Intersectionality: Essential Writings. Books (March 2017). https://scholarship.law.columbia.edu/books/255
    [41]
    Bennett Cyphers and Gennie Gebhart. 2019. Behind the One-Way Mirror: A Deep Dive Into the Technology of Corporate Surveillance. Technical Report. Electronic Frontier Foundation. https://www.eff.org/wp/behind-the-one-way-mirror
    [42]
    Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2020. Plug and Play Language Models: A Simple Approach to Controlled Text Generation. arXiv:1912.02164 [cs] (March 2020). http://arxiv.org/abs/1912.02164 arXiv:1912.02164.
    [43]
    Cyprien de Masson d’ Autume, Sebastian Ruder, Lingpeng Kong, and Dani Yogatama. 2019. Episodic Memory in Lifelong Language Learning. In Advances in Neural Information Processing Systems, Vol. 32. Curran Associates, Inc.https://papers.nips.cc/paper/2019/hash/f8d2e80c1458ea2501f98a2cafadb397-Abstract.html
    [44]
    DeepMind Interactive Agents Team, Josh Abramson, Arun Ahuja, Arthur Brussee, Federico Carnevale, Mary Cassin, Felix Fischer, Petko Georgiev, Alex Goldin, Tim Harley, Felix Hill, Peter C. Humphreys, Alden Hung, Jessica Landon, Timothy Lillicrap, Hamza Merzic, Alistair Muldal, Adam Santoro, Guy Scully, Tamara von Glehn, Greg Wayne, Nathaniel Wong, Chen Yan, and Rui Zhu. 2021. Creating Multimodal Interactive Agents with Imitation and Self-Supervised Learning. arXiv:2112.03763 [cs] (Dec. 2021). http://arxiv.org/abs/2112.03763 arXiv:2112.03763.
    [45]
    Emily Denton, Alex Hanna, Razvan Amironesei, Andrew Smart, Hilary Nicole, and Morgan Klaus Scheuerman. 2020. Bringing the People Back In: Contesting Benchmark Machine Learning Datasets. arXiv:2007.07399 [cs] (July 2020). http://arxiv.org/abs/2007.07399 arXiv:2007.07399.
    [46]
    Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805 [cs] (May 2019). http://arxiv.org/abs/1810.04805 arXiv:1810.04805.
    [47]
    Thomas Dietterich and Eun Bae Kong. 1995. Machine Learning Bias, Statistical Bias, and Statistical Variance of Decision Tree Algorithms. Technical Report. Department of Computer Science, Oregon State University.
    [48]
    Emily Dinan, Gavin Abercrombie, A. Stevie Bergman, Shannon Spruit, Dirk Hovy, Y.-Lan Boureau, and Verena Rieser. 2021. Anticipating Safety Issues in E2E Conversational AI: Framework and Tooling. arXiv:2107.03451 [cs] (July 2021). http://arxiv.org/abs/2107.03451 arXiv:2107.03451.
    [49]
    Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and Mitigating Unintended Bias in Text Classification. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society(AIES ’18). Association for Computing Machinery, New Orleans, LA, USA, 67–73. https://doi.org/10.1145/3278721.3278729
    [50]
    Jesse Dodge, Maarten Sap, Ana Marasović, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, and Matt Gardner. 2021. Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus. arXiv:2104.08758 [cs] (September 2021). http://arxiv.org/abs/2104.08758 arXiv:2104.08758.
    [51]
    David M. Douglas. 2016. Doxing: a conceptual analysis. Ethics and Information Technology 18, 3 (September 2016), 199–210. https://doi.org/10.1007/s10676-016-9406-0
    [52]
    Owain Evans, Owen Cotton-Barratt, Lukas Finnveden, Adam Bales, Avital Balwit, Peter Wills, Luca Righetti, and William Saunders. 2021. Truthful AI: Developing and governing AI that does not lie. arXiv:2110.06674 [cs] (Oct. 2021). http://arxiv.org/abs/2110.06674 arXiv:2110.06674.
    [53]
    Tom Everitt, Gary Lea, and Marcus Hutter. 2018. AGI Safety Literature Review. arXiv:1805.01109 [cs] (May 2018). http://arxiv.org/abs/1805.01109 arXiv:1805.01109.
    [54]
    William Fedus, Barret Zoph, and Noam Shazeer. 2021. Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity. arXiv:2101.03961 [cs] (January 2021). http://arxiv.org/abs/2101.03961 arXiv:2101.03961.
    [55]
    Samantha Finkelstein, Evelyn Yarzebinski, Callie Vaughn, Amy Ogan, and Justine Cassell. 2013. The Effects of Culturally Congruent Educational Technologies on Student Achievement. In Artificial Intelligence in Education(Lecture Notes in Computer Science), H. Chad Lane, Kalina Yacef, Jack Mostow, and Philip Pavlik (Eds.). Springer, Berlin, Heidelberg, 493–502. https://doi.org/10.1007/978-3-642-39112-5_50
    [56]
    Chris Flood. 2017. Fake news infiltrates financial markets. Financial Times (May 2017). https://www.ft.com/content/a37e4874-2c2a-11e7-bc4b-5528796fe35c
    [57]
    Paula Fortuna and Sérgio Nunes. 2018. A Survey on Automatic Detection of Hate Speech in Text. Comput. Surveys 51, 4 (July 2018), 85:1–85:30. https://doi.org/10.1145/3232676
    [58]
    Michel Foucault and Alan Sheridan. 2012. Discipline and punish: the birth of the prison. Vintage, New York. http://0-lib.myilibrary.com.catalogue.libraries.london.ac.uk?id=435863 OCLC: 817200914.
    [59]
    Iason Gabriel and Vafa Ghazavi. 2021. The Challenge of Value Alignment: from Fairer Algorithms to AI Safety. arXiv:2101.06060 [cs] (January 2021). http://arxiv.org/abs/2101.06060 arXiv:2101.06060.
    [60]
    Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. 2020. Datasheets for Datasets. arXiv:1803.09010 [cs] (March 2020). http://arxiv.org/abs/1803.09010 arXiv:1803.09010.
    [61]
    Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models. arXiv:2009.11462 [cs] (September 2020). http://arxiv.org/abs/2009.11462 arXiv:2009.11462.
    [62]
    Alexandre Georgieff and Anna Milanez. 2021. What happened to jobs at high risk of automation?Technical Report 255. OECD Publishing. https://ideas.repec.org/p/oec/elsaab/255-en.html
    [63]
    Jennifer Golbeck. 2018. Predicting Alcoholism Recovery from Twitter. In Social, Cultural, and Behavioral Modeling(Lecture Notes in Computer Science), Robert Thomson, Christopher Dancy, Ayaz Hyder, and Halil Bisgin (Eds.). Springer International Publishing, Cham, 243–252. https://doi.org/10.1007/978-3-319-93372-6_28
    [64]
    Robert Gorwa, Reuben Binns, and Christian Katzenbach. 2020. Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society 7, 1 (January 2020), 2053951719897945. https://doi.org/10.1177/2053951719897945
    [65]
    Mary Gray and Siddarth Suri. 2019. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Mariner Books. https://ghostwork.info/
    [66]
    David Gunning, Mark Stefik, Jaesik Choi, Timothy Miller, Simone Stumpf, and Guang-Zhong Yang. 2019. XAI—Explainable artificial intelligence. Science Robotics 4, 37 (December 2019), eaay7120. https://doi.org/10.1126/scirobotics.aay7120
    [67]
    Beth Gutelius and Nik Theodore. 2019. The Future of Warehouse Work: Technological Change in the U.S. Logistics Industry. Technical Report. UC Berkeley Labor Center and Working Partnerships USA. https://laborcenter.berkeley.edu/future-of-warehouse-work/
    [68]
    Salomé Gómez-Upegui. 2021. The Future of Digital Assistants Is Queer. Wired (Nov. 2021). https://www.wired.com/story/digital-assistant-smart-device-gender-identity/
    [69]
    Karen Hao. 2020. A college kid’s fake, AI-generated blog fooled tens of thousands. This is how he made it.MIT Technology Review (August 2020). https://www.technologyreview.com/2020/08/14/1006780/ai-gpt-3-fake-blog-reached-top-of-hacker-news/
    [70]
    Donna Jeanne Haraway. 2004. The Haraway Reader. Psychology Press. Google-Books-ID: QxUr0gijyGoC.
    [71]
    Moritz Hardt, Eric Price, and Nathan Srebro. 2016. Equality of Opportunity in Supervised Learning. arXiv:1610.02413 [cs] (October 2016). http://arxiv.org/abs/1610.02413 arXiv:1610.02413.
    [72]
    Dan Hendrycks, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, and Jacob Steinhardt. 2021. Aligning AI With Shared Human Values. arXiv:2008.02275 [cs] (July 2021). http://arxiv.org/abs/2008.02275 arXiv:2008.02275.
    [73]
    Philip Hines, Li Hiu Yu, Richard H Guy, Angela Brand, and Marisa Papaluca-Amati. 2019. Scanning the horizon: a systematic literature review of methodologies. BMJ open 9, 5 (2019), e026764.
    [74]
    Paul Hitlin, Kenneth Olmstead, and Skye Toor. 2017. FCC Net Neutrality Online Public Comments Contain Many Inaccuracies and Duplicates. Technical Report. Pew Research Center. https://www.pewresearch.org/internet/2017/11/29/public-comments-to-the-federal-communications-commission-about-net-neutrality-contain-many-inaccuracies-and-duplicates/
    [75]
    Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, 2022. Training Compute-Optimal Large Language Models. arXiv preprint arXiv:2203.15556(2022).
    [76]
    Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé III, Miro Dudík, and Hanna Wallach. 2019. Improving fairness in machine learning systems: What do industry practitioners need?Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (May 2019), 1–16. https://doi.org/10.1145/3290605.3300830 arXiv:1812.05239.
    [77]
    Kris Holt. 2020. Google’s ’Verse by Verse’ AI can help you write in the style of famous poets. Engadget (November 2020). https://www.engadget.com/googles-ai-poetry-verse-by-verse-202105834.html
    [78]
    Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The Curious Case of Neural Text Degeneration. arXiv:1904.09751 [cs] (February 2020). http://arxiv.org/abs/1904.09751 arXiv:1904.09751.
    [79]
    Dirk Hovy and Shannon L. Spruit. 2016. The Social Impact of Natural Language Processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Berlin, Germany, 591–598. https://doi.org/10.18653/v1/P16-2096
    [80]
    Dirk Hovy and Diyi Yang. 2021. The Importance of Modeling Social Factors of Language: Theory and Practice. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Online, 588–602. https://doi.org/10.18653/v1/2021.naacl-main.49
    [81]
    Kane Hsieh. 2019. Transformer Poetry. Paper Gains Publishing. https://papergains.co/
    [82]
    Po-Sen Huang, Huan Zhang, Ray Jiang, Robert Stanforth, Johannes Welbl, Jack Rae, Vishal Maini, Dani Yogatama, and Pushmeet Kohli. 2020. Reducing Sentiment Bias in Language Models via Counterfactual Evaluation. arXiv:1911.03064 [cs] (October 2020). http://arxiv.org/abs/1911.03064 arXiv:1911.03064.
    [83]
    Katie Hunt and CY Xu. 2013. China ’employs 2 million to police internet’. CNN (October 2013). https://www.cnn.com/2013/10/07/world/asia/china-internet-monitors/index.html publisher: CNN.
    [84]
    Ben Hutchinson, Andrew Smart, Alex Hanna, Emily Denton, Christina Greer, Oddur Kjartansson, Parker Barnes, and Margaret Mitchell. 2021. Towards Accountability for Machine Learning Datasets: Practices from Software Engineering and Infrastructure. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency(FAccT ’21). Association for Computing Machinery, Virtual Event, Canada, 560–575. https://doi.org/10.1145/3442188.3445918
    [85]
    Gilhwan Hwang, Jeewon Lee, Cindy Yoonjung Oh, and Joonhwan Lee. 2019. It Sounds Like A Woman: Exploring Gender Stereotypes in South Korean Voice Assistants. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems(CHI EA ’19). Association for Computing Machinery, Glasgow, Scotland Uk, 1–6. https://doi.org/10.1145/3290607.3312915
    [86]
    Christopher Ingraham. 2018. How rising inequality hurts everyone, even the rich. Washington Post (February 2018). https://www.washingtonpost.com/news/wonk/wp/2018/02/06/how-rising-inequality-hurts-everyone-even-the-rich/
    [87]
    Carolin Ischen, Theo Araujo, Hilde Voorveld, Guda van Noort, and Edith Smit. 2019. Privacy concerns in chatbot interactions. In International Workshop on Chatbot Research and Design. Springer, 34–48.
    [88]
    Gautier Izacard and Edouard Grave. 2021. Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering. arXiv:2007.01282 [cs] (Feb. 2021). http://arxiv.org/abs/2007.01282 arXiv:2007.01282.
    [89]
    Letitia James. 2021. How U.S. Companies & Partisans Hack Democracy to Undermine Your Voice. Technical Report. New York State Office of the Attorney General.
    [90]
    Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Gu, and Rosalind Picard. 2019. Way off-policy batch deep reinforcement learning of implicit human preferences in dialog. arXiv preprint arXiv:1907.00456(2019).
    [91]
    Florence Jaumotte, Subir Lall, and Chris Papageorgiou. 2013. Rising Income Inequality: Technology, or Trade and Financial Globalization?IMF Economic Review 61, 2 (June 2013), 271–309. https://doi.org/10.1057/imfer.2013.7
    [92]
    Robin Jeshion. 2020. Pride and Prejudiced: On the Reclamation of Slurs. Grazer Philosophische Studien 97, 1 (March 2020), 106–137. https://doi.org/10.1163/18756735-09701007
    [93]
    Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. TinyBERT: Distilling BERT for Natural Language Understanding. arXiv:1909.10351 [cs] (Oct. 2020). http://arxiv.org/abs/1909.10351 arXiv:1909.10351.
    [94]
    Eun Seo Jo and Timnit Gebru. 2020. Lessons from archives: strategies for collecting sociocultural data in machine learning. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency(FAT* ’20). Association for Computing Machinery, Barcelona, Spain, 306–316. https://doi.org/10.1145/3351095.3372829
    [95]
    Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2021. The State and Fate of Linguistic Diversity and Inclusion in the NLP World. arXiv:2004.09095 [cs] (January 2021). http://arxiv.org/abs/2004.09095 arXiv:2004.09095.
    [96]
    Lynn H Kaack, Priya L Donti, Emma Strubell, George Kamiya, Felix Creutzig, and David Rolnick. 2021. Aligning artificial intelligence with climate change mitigation. (Oct. 2021). https://hal.archives-ouvertes.fr/hal-03368037
    [97]
    Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense Passage Retrieval for Open-Domain Question Answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Online, 6769–6781. https://doi.org/10.18653/v1/2020.emnlp-main.550
    [98]
    Nora Kassner and Hinrich Schütze. 2020. Negated and Misprimed Probes for Pretrained Language Models: Birds Can Talk, But Cannot Fly. arXiv:1911.03343 [cs] (May 2020). http://arxiv.org/abs/1911.03343 arXiv:1911.03343.
    [99]
    Zachary Kenton, Tom Everitt, Laura Weidinger, Iason Gabriel, Vladimir Mikulik, and Geoffrey Irving. 2021. Alignment of Language Agents. arXiv:2103.14659 [cs] (March 2021). http://arxiv.org/abs/2103.14659 arXiv:2103.14659.
    [100]
    Os Keyes, Zoë Hitzig, and Mwenza Blell. 2021. Truth from the machine: artificial intelligence and the materialization of identity. Interdisciplinary Science Reviews 46, 1-2 (April 2021), 158–175. https://doi.org/10.1080/03080188.2020.1840224
    [101]
    Muhammad Khalifa, Hady Elsahar, and Marc Dymetman. 2021. A Distributional Approach to Controlled Text Generation. arXiv:2012.11635 [cs] (May 2021). http://arxiv.org/abs/2012.11635 arXiv:2012.11635.
    [102]
    Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through Memorization: Nearest Neighbor Language Models. arXiv:1911.00172 [cs] (Feb. 2020). http://arxiv.org/abs/1911.00172 arXiv:1911.00172.
    [103]
    Jae Yeon Kim, Carlos Ortiz, Sarah Nam, Sarah Santiago, and Vivek Datta. 2020. Intersectional Bias in Hate Speech and Abusive Language Datasets. arXiv:2005.05921 [cs] (May 2020). http://arxiv.org/abs/2005.05921 arXiv:2005.05921.
    [104]
Youjeong Kim and S Shyam Sundar. 2012. Anthropomorphism of computers: Is it mindful or mindless? Computers in Human Behavior 28, 1 (2012), 241–250.
    [105]
    Jan Kocoń, Alicja Figas, Marcin Gruza, Daria Puchalska, Tomasz Kajdanowicz, and Przemysław Kazienko. 2021. Offensive, aggressive, and hate speech analysis: From data-centric to human-centered approach. Information Processing & Management 58, 5 (September 2021), 102643. https://doi.org/10.1016/j.ipm.2021.102643
    [106]
    Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2021. Internet-Augmented Dialogue Generation. arXiv:2107.07566 [cs] (July 2021). http://arxiv.org/abs/2107.07566 arXiv:2107.07566.
    [107]
    Michal Kosinski, David Stillwell, and Thore Graepel. 2013. Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences 110, 15 (April 2013), 5802–5805. https://doi.org/10.1073/pnas.1218772110
    [108]
    Ben Krause, Akhilesh Deepak Gotmare, Bryan McCann, Nitish Shirish Keskar, Shafiq Joty, Richard Socher, and Nazneen Fatema Rajani. 2020. GeDi: Generative Discriminator Guided Sequence Generation. arXiv:2009.06367 [cs] (Oct. 2020). http://arxiv.org/abs/2009.06367 arXiv:2009.06367.
    [109]
    Amit Kulkarni. 2021. GitHub Copilot AI Is Leaking Functional API Keys. Analytics Drift (July 2021). https://analyticsdrift.com/github-copilot-ai-is-leaking-functional-api-keys/
    [110]
    James Lambert and Edward Cone. 2019. How Robots Change the World - What automation really means for jobs, productivity and regions. Technical Report. Oxford Economics. https://www.oxfordeconomics.com/recent-releases/how-robots-change-the-world
    [111]
    Issie Lapowsky. 2017. How Bots Broke the FCC’s Public Comment System. Wired (November 2017). https://www.wired.com/story/bots-broke-fcc-public-comment-system/
    [112]
    Angeliki Lazaridou, Adhiguna Kuncoro, Elena Gribovskaya, Devang Agrawal, Adam Liska, Tayfun Terzi, Mai Gimenez, Cyprien de Masson d’Autume, Tomas Kocisky, Sebastian Ruder, Dani Yogatama, Kris Cao, Susannah Young, and Phil Blunsom. 2021. Mind the Gap: Assessing Temporal Generalization in Neural Language Models. arXiv:2102.01951 [cs] (Oct. 2021). http://arxiv.org/abs/2102.01951 arXiv:2102.01951.
    [113]
    Becca Lewis and Alice E. Marwick. 2017. Media Manipulation and Disinformation Online. Technical Report. Data & Society. https://datasociety.net/library/media-manipulation-and-disinfo-online
    [114]
    Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, and Dhruv Batra. 2017. Deal or No Deal? End-to-End Learning for Negotiation Dialogues. arXiv:1706.05125 [cs] (June 2017). http://arxiv.org/abs/1706.05125 arXiv:1706.05125.
    [115]
    Patrick Lewis, Pontus Stenetorp, and Sebastian Riedel. 2020. Question and Answer Test-Train Overlap in Open-Domain Question Answering Datasets. arXiv:2008.02637 [cs] (August 2020). http://arxiv.org/abs/2008.02637 arXiv:2008.02637.
    [116]
    Zhuohan Li, Siyuan Zhuang, Shiyuan Guo, Danyang Zhuo, Hao Zhang, Dawn Song, and Ion Stoica. 2021. TeraPipe: Token-Level Pipeline Parallelism for Training Large-Scale Language Models. arXiv:2102.07988 [cs] (September 2021). http://arxiv.org/abs/2102.07988 arXiv:2102.07988.
    [117]
Yuting Liao and Jiangen He. 2020. Racial mirroring effects on human-agent interaction in psychotherapeutic conversations. In Proceedings of the 25th International Conference on Intelligent User Interfaces (IUI ’20). Association for Computing Machinery, Cagliari, Italy, 430–442. https://doi.org/10.1145/3377325.3377488
    [118]
    Stephanie Lin, Jacob Hilton, and Owain Evans. 2021. TruthfulQA: Measuring How Models Mimic Human Falsehoods. arXiv:2109.07958 [cs] (September 2021). http://arxiv.org/abs/2109.07958 arXiv:2109.07958.
    [119]
Media@LSE Blog. 2017. Doxing is a toxic practice – no matter who is targeted. https://blogs.lse.ac.uk/medialse/2017/08/18/the-dangers-of-doxing-and-the-implications-for-media-regulation/
    [120]
    Li Lucy and David Bamman. 2021. Gender and Representation Bias in GPT-3 Generated Stories. In Proceedings of the Third Workshop on Narrative Understanding. Association for Computational Linguistics, Virtual, 48–55. https://doi.org/10.18653/v1/2021.nuse-1.5
    [121]
    Aibek Makazhanov, Davood Rafiei, and Muhammad Waqar. 2014. Predicting political preference of Twitter users. Social Network Analysis and Mining 4, 1 (May 2014), 193. https://doi.org/10.1007/s13278-014-0193-5
    [122]
Huina Mao, Xin Shuai, and Apu Kapadia. 2011. Loose tweets: an analysis of privacy leaks on twitter. In Proceedings of the 10th annual ACM workshop on Privacy in the electronic society (WPES ’11). Association for Computing Machinery, Chicago, Illinois, USA, 1–12. https://doi.org/10.1145/2046556.2046558
    [123]
    Vidushi Marda and Shivangi Narayan. 2021. On the importance of ethnographic methods in AI research. Nature Machine Intelligence 3, 3 (March 2021), 187–189. https://doi.org/10.1038/s42256-021-00323-0
    [124]
    Mark Marino. 2014. The Racial Formation of Chatbots. CLCWeb: Comparative Literature and Culture 16, 5 (December 2014). https://doi.org/10.7771/1481-4374.2560
    [125]
    Donald Martin Jr., Vinodkumar Prabhakaran, Jill Kuhlberg, Andrew Smart, and William S. Isaac. 2020. Participatory Problem Formulation for Fairer Machine Learning Through Community Based System Dynamics. arXiv:2005.07572 [cs, stat] (May 2020). http://arxiv.org/abs/2005.07572 arXiv:2005.07572.
    [126]
    Kevin McKee, Xuechunzi Bai, and Susan Fiske. 2021. Understanding Human Impressions of Artificial Intelligence. PsyArxiv (2021). https://psyarxiv.com/5ursp/
    [127]
    Juliana Menasce Horowitz, Ruth Igielnik, and Rakesh Kochhar. 2020. Trends in U.S. income and wealth inequality. Technical Report. Pew Research Center. https://www.pewresearch.org/social-trends/2020/01/09/trends-in-income-and-wealth-inequality/
    [128]
Jacob Menick, Maja Trebacz, Vladimir Mikulik, John Aslanides, Francis Song, Martin Chadwick, Mia Glaese, Susannah Young, Lucy Campbell-Gillingham, Geoffrey Irving, et al. 2022. Teaching language models to support answers with verified quotes. arXiv preprint arXiv:2203.11147 (2022).
    [129]
    Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence 267 (February 2019), 1–38. https://doi.org/10.1016/j.artint.2018.07.007
    [130]
    Adam S Miner, Arnold Milstein, Stephen Schueller, Roshini Hegde, Christina Mangurian, and Eleni Linos. 2016. Smartphone-Based Conversational Agents and Responses to Questions About Mental Health, Interpersonal Violence, and Physical Health. JAMA internal medicine 176, 5 (May 2016), 619–625. https://doi.org/10.1001/jamainternmed.2016.0400
    [131]
    Antonio A. Morgan-Lopez, Annice E. Kim, Robert F. Chew, and Paul Ruddle. 2017. Predicting age groups of Twitter users based on language and metadata features. PLOS ONE 12, 8 (August 2017), e0183537. https://doi.org/10.1371/journal.pone.0183537
    [132]
    David Mytton. 2021. Data centre water consumption. NPJ Clean Water 4, 1 (February 2021), 1–6. https://doi.org/10.1038/s41545-021-00101-w
    [133]
    Moin Nadeem, Anna Bethke, and Siva Reddy. 2020. StereoSet: Measuring stereotypical bias in pretrained language models. arXiv:2004.09456 [cs] (April 2020). http://arxiv.org/abs/2004.09456 arXiv:2004.09456.
    [134]
Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. 2021. WebGPT: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332 (2021).
    [135]
Dong Nguyen, Rilana Gravel, Dolf Trieschnigg, and Theo Meder. 2013. ”How Old Do You Think I Am?” A Study of Language and Age in Twitter. Proceedings of the International AAAI Conference on Web and Social Media 7, 1 (2013), 439–448. https://ojs.aaai.org/index.php/ICWSM/article/view/14381
    [136]
    Debora Nozza, Federico Bianchi, and Dirk Hovy. 2021. HONEST: Measuring Hurtful Sentence Completion in Language Models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Online, 2398–2406. https://doi.org/10.18653/v1/2021.naacl-main.191
    [137]
    Katherine Ognyanova, David Lazer, Ronald E. Robertson, and Christo Wilson. 2020. Misinformation in action: Fake news exposure is linked to lower trust in media, higher trust in government when your side is in power. Harvard Kennedy School Misinformation Review (June 2020). https://doi.org/10.37016/mr-2020-024
    [138]
    Google PAIR. 2019. People + AI Guidebook. Google. https://design.google/ai-guidebook
    [139]
    Arielle Pardes. 2018. The Emotional Chatbots Are Here to Probe Our Feelings. Wired (January 2018). https://www.wired.com/story/replika-open-source/
    [140]
Gregory Park, H. Andrew Schwartz, Johannes C. Eichstaedt, Margaret L. Kern, Michal Kosinski, David J. Stillwell, Lyle H. Ungar, and Martin E. P. Seligman. 2015. Automatic personality assessment through social media language. Journal of Personality and Social Psychology 108, 6 (June 2015), 934–952. https://doi.org/10.1037/pspp0000020
    [141]
    David Patterson, Joseph Gonzalez, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David So, Maud Texier, and Jeff Dean. 2021. Carbon Emissions and Large Neural Network Training. arXiv:2104.10350 [cs] (April 2021). http://arxiv.org/abs/2104.10350 arXiv:2104.10350.
    [142]
    Diana Perez-Marin and Ismael Pascual-Nieto. 2011. Conversational Agents and Natural Language Interaction: Techniques and Effective Practices. Information Science Reference - Imprint of: IGI Publishing, Hershey, PA.
    [143]
Nathaniel Persily and Joshua A. Tucker. 2020. Social Media and Democracy: The State of the Field, Prospects for Reform. Cambridge University Press.
    [144]
    Daniel Preoţiuc-Pietro, Ye Liu, Daniel Hopkins, and Lyle Ungar. 2017. Beyond Binary Labels: Political Ideology Prediction of Twitter Users. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Vancouver, Canada, 729–740. https://doi.org/10.18653/v1/P17-1068
    [145]
    Katyanna Quach. 2020. Researchers made an OpenAI GPT-3 medical chatbot as an experiment. It told a mock patient to kill themselves. The Register (October 2020). https://www.theregister.com/2020/10/28/gpt3_medical_chatbot_experiment/
    [146]
    Daniele Quercia, Michal Kosinski, David Stillwell, and Jon Crowcroft. 2011. Our Twitter Profiles, Our Selves: Predicting Personality with Twitter. In 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust and 2011 IEEE Third International Conference on Social Computing. 180–185. https://doi.org/10.1109/PASSAT/SocialCom.2011.26
    [147]
    Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving Language Understanding by Generative Pre-Training. (2018).
    [148]
    Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong, Daniel Toyama, Cyprien de Masson d’Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew Johnson, Blake Hechtman, Laura Weidinger, Iason Gabriel, William Isaac, Ed Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Koray Kavukcuoglu, and Geoffrey Irving. 2021. Scaling Language Models: Methods, Analysis & Insights from Training Gopher. arXiv:2112.11446 [cs] (Dec. 2021). http://arxiv.org/abs/2112.11446 arXiv:2112.11446.
    [149]
    Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. arXiv:1910.10683 [cs, stat] (July 2020). http://arxiv.org/abs/1910.10683 arXiv:1910.10683.
    [150]
    Inioluwa Deborah Raji. 2020. Handle with Care: Lessons for Data Science from Black Female Scholars. Patterns 1, 8 (November 2020), 100150. https://doi.org/10.1016/j.patter.2020.100150
    [151]
Inioluwa Deborah Raji, Emily M Bender, Amandalynne Paullada, Emily Denton, and Alex Hanna. 2021. AI and the everything in the whole wide world benchmark. arXiv preprint arXiv:2111.15366 (2021).
    [152]
    Inioluwa Deborah Raji, Andrew Smart, Rebecca N. White, Margaret Mitchell, Timnit Gebru, Ben Hutchinson, Jamila Smith-Loud, Daniel Theron, and Parker Barnes. 2020. Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing. arXiv:2001.00973 [cs] (January 2020). http://arxiv.org/abs/2001.00973 arXiv:2001.00973.
    [153]
    Swaroop Ramaswamy, Om Thakkar, Rajiv Mathews, Galen Andrew, H. Brendan McMahan, and Françoise Beaufays. 2020. Training Production Language Models without Memorizing User Data. arXiv:2009.10031 [cs, stat] (September 2020). http://arxiv.org/abs/2009.10031 arXiv:2009.10031.
    [154]
    Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. Zero-Shot Text-to-Image Generation. arXiv:2102.12092 [cs] (Feb. 2021). http://arxiv.org/abs/2102.12092 arXiv:2102.12092.
    [155]
    Priyanka Ranade, Aritran Piplai, Sudip Mittal, Anupam Joshi, and Tim Finin. 2021. Generating Fake Cyber Threat Intelligence Using Transformer-Based Models. arXiv:2102.04351 [cs] (June 2021). http://arxiv.org/abs/2102.04351 arXiv:2102.04351.
    [156]
    Erin Rand. 2014. Reclaiming Queer: Activist & Academic Rhetorics of Resistance. University of Alabama Press.
    [157]
Ehud Reiter. 2020. Could NLG systems injure or even kill people? https://ehudreiter.com/2020/10/20/could-nlg-systems-injure-or-even-kill-people/
    [158]
    Matthew Rimmer. 2013. Patent-Busting: The Public Patent Foundation, Gene Patents and the Seed Wars. In The Intellectual Property and Food Project, Charles Lawson and Jay Sanderson (Eds.). Routledge.
    [159]
    Cami Rincón, Os Keyes, and Corinne Cath. 2021. Speaking from Experience: Trans/Non-Binary Requirements for Voice-Activated AI. Proceedings of the ACM on Human-Computer Interaction 5, CSCW1 (April 2021), 132:1–132:27. https://doi.org/10.1145/3449206
    [160]
    Corby Rosset. 2020. Turing-NLG: A 17-billion-parameter language model by Microsoft. https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/
    [161]
Alan Rubel, Adam Pham, and Clinton Castro. 2019. Agency Laundering and Algorithmic Decision Systems. In Information in Contemporary Society (Lecture Notes in Computer Science), Natalie Greene Taylor, Caitlin Christian-Lamb, Michelle H. Martin, and Bonnie Nardi (Eds.). Springer International Publishing, Cham, 590–598. https://doi.org/10.1007/978-3-030-15742-5_56
    [162]
    Sebastian Ruder. 2020. Why You Should Do NLP Beyond English. https://ruder.io/nlp-beyond-english/
    [163]
Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi, and Vinodkumar Prabhakaran. 2021. Re-imagining Algorithmic Fairness in India and Beyond. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21). Association for Computing Machinery, Virtual Event, Canada, 315–328. https://doi.org/10.1145/3442188.3445896
    [164]
    Victor Sanh, Thomas Wolf, and Alexander M. Rush. 2020. Movement Pruning: Adaptive Sparsity by Fine-Tuning. arXiv:2005.07683 [cs] (Oct. 2020). http://arxiv.org/abs/2005.07683 arXiv:2005.07683.
    [165]
    Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The Risk of Racial Bias in Hate Speech Detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Florence, Italy, 1668–1678. https://doi.org/10.18653/v1/P19-1163
    [166]
    Timo Schick, Sahana Udupa, and Hinrich Schütze. 2021. Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP. arXiv:2103.00453 [cs] (Sept. 2021). http://arxiv.org/abs/2103.00453 arXiv:2103.00453.
    [167]
    Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. 2020. Green AI. Commun. ACM 63, 12 (November 2020), 54–63. https://doi.org/10.1145/3381831
    [168]
    Adrian Shahbaz and Allie Funk. 2019. Social Media Surveillance. Technical Report. Freedom House. https://freedomhouse.org/report/freedom-on-the-net/2019/the-crisis-of-social-media/social-media-surveillance
    [169]
    Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2021. Societal Biases in Language Generation: Progress and Challenges. arXiv:2105.04054 [cs] (June 2021). http://arxiv.org/abs/2105.04054 arXiv:2105.04054.
    [170]
    Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2020. Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism. arXiv:1909.08053 [cs] (March 2020). http://arxiv.org/abs/1909.08053 arXiv:1909.08053.
    [171]
    Irene Solaiman and Christy Dennison. 2021. Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets. arXiv:2106.10328 [cs] (June 2021). http://arxiv.org/abs/2106.10328 arXiv:2106.10328.
    [172]
Karen Sparck Jones. 2004. Language modelling’s generative model: is it rational? Computer Laboratory, University of Cambridge, Cambridge, UK.
    [173]
    Titus Stahl. 2016. Indiscriminate mass surveillance and the public sphere. Ethics and Information Technology 18, 1 (March 2016), 33–39. https://doi.org/10.1007/s10676-016-9392-2
    [174]
William Stanley Jevons. 1905. The Coal Question: An Inquiry Concerning the Progress of the Nation, and the Probable Exhaustion of Our Coal-mines (3rd ed.). Augustus M. Kelley, New York.
    [175]
    Jack Stilgoe, Richard Owen, and Phil Macnaghten. 2013. Developing a framework for responsible innovation. Research Policy 42, 9 (November 2013), 1568–1580. https://doi.org/10.1016/j.respol.2013.05.008
    [176]
    Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and Policy Considerations for Deep Learning in NLP. arXiv:1906.02243 [cs] (June 2019). http://arxiv.org/abs/1906.02243 arXiv:1906.02243.
    [177]
    Shannon Sullivan and Nancy Tuana (Eds.). 2007. Race and epistemologies of ignorance. State University of New York Press, Albany. OCLC: ocm70676503.
    [178]
    summerstay on Reddit. 2020. Fiction by Neil Gaiman and Terry Pratchett by GPT-3. www.reddit.com/r/slatestarcodex/comments/hmu5lm/fiction_by_neil_gaiman_and_terry_pratchett_by_gpt3/
    [179]
    Fan-Keng Sun, Cheng-Hao Ho, and Hung-Yi Lee. 2019. LAMOL: LAnguage MOdeling for Lifelong Language Learning. arXiv:1909.03329 [cs] (Dec. 2019). http://arxiv.org/abs/1909.03329 arXiv:1909.03329.
    [180]
    Alex Tamkin, Miles Brundage, Jack Clark, and Deep Ganguli. 2021. Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models. arXiv:2102.02503 [cs] (February 2021). http://arxiv.org/abs/2102.02503 arXiv:2102.02503.
    [181]
    Nenad Tomasev, Kevin R. McKee, Jackie Kay, and Shakir Mohamed. 2021. Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (July 2021), 254–265. https://doi.org/10.1145/3461702.3462540 arXiv:2102.04257.
    [182]
    Chau Tran, Shruti Bhosale, James Cross, Philipp Koehn, Sergey Edunov, and Angela Fan. 2021. Facebook AI WMT21 News Translation Task Submission. arXiv:2108.03265 [cs] (Aug. 2021). http://arxiv.org/abs/2108.03265 arXiv:2108.03265.
    [183]
Evert Van den Broeck, Brahim Zarouali, and Karolien Poels. 2019. Chatbot advertising effectiveness: When does the message get through? Computers in Human Behavior 98 (September 2019), 150–157. https://doi.org/10.1016/j.chb.2019.04.009
    [184]
Verse by Verse. 2020. https://sites.research.google/versebyverse/
    [185]
    James Vincent. 2017. The invention of AI ‘gaydar’ could be the start of something much worse. The Verge (September 2017). https://www.theverge.com/2017/9/21/16332760/ai-sexuality-gaydar-photo-physiognomy
    [186]
    Kate Vredenburgh. 2021. The Right to Explanation. Journal of Political Philosophy 0, 0 (2021), 1–21. https://doi.org/10.1111/jopp.12262
    [187]
    Carissa Véliz. 2019. Privacy matters because it empowers us all | Aeon Essays. Aeon (September 2019). https://aeon.co/essays/privacy-matters-because-it-empowers-us-all
    [188]
Eric Wallace, Florian Tramer, Matthew Jagielski, and Ariel Herbert-Voss. 2020. Does GPT-2 Know Your Phone Number? http://bair.berkeley.edu/blog/2020/12/20/lmmem/
    [189]
    Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers. arXiv:2002.10957 [cs] (April 2020). http://arxiv.org/abs/2002.10957 arXiv:2002.10957.
    [190]
Yilun Wang and Michal Kosinski. 2018. Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology 114, 2 (February 2018), 246–257. https://doi.org/10.1037/pspa0000098
    [191]
    William Warner and Julia Hirschberg. 2012. Detecting Hate Speech on the World Wide Web. In Proceedings of the Second Workshop on Language in Social Media. Association for Computational Linguistics, Montréal, Canada, 19–26. https://aclanthology.org/W12-2103
    [192]
    Michael Webb. 2019. The Impact of Artificial Intelligence on the Labor Market. SSRN Scholarly Paper ID 3482150. Social Science Research Network, Rochester, NY. https://papers.ssrn.com/abstract=3482150
    [193]
    Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, Zac Kenton, Sasha Brown, Will Hawkins, Tom Stepleton, Courtney Biles, Abeba Birhane, Julia Haas, Laura Rimell, Lisa Anne Hendricks, William Isaac, Sean Legassick, Geoffrey Irving, and Iason Gabriel. 2021. Ethical and social risks of harm from Language Models. arXiv:2112.04359 [cs] (Dec. 2021). http://arxiv.org/abs/2112.04359 arXiv:2112.04359.
    [194]
    Johannes Welbl, Amelia Glaese, Jonathan Uesato, Sumanth Dathathri, John Mellor, Lisa Anne Hendricks, Kirsty Anderson, Pushmeet Kohli, Ben Coppin, and Po-Sen Huang. 2021. Challenges in Detoxifying Language Models. arXiv:2109.07445 [cs] (September 2021). http://arxiv.org/abs/2109.07445 arXiv:2109.07445.
    [195]
Mark West, Rebecca Kraut, and Han Ei Chew. 2019. I’d blush if I could: closing gender divides in digital skills through education. Technical Report. UNESCO. https://repositorio.minedu.gob.pe/handle/20.500.12799/6598
    [196]
    Genta Indra Winata, Andrea Madotto, Zhaojiang Lin, Rosanne Liu, Jason Yosinski, and Pascale Fung. 2021. Language Models are Few-shot Multilingual Learners. arXiv:2109.07684 [cs] (September 2021). http://arxiv.org/abs/2109.07684 arXiv:2109.07684.
    [197]
    Albert Xu, Eshaan Pathak, Eric Wallace, Suchin Gururangan, Maarten Sap, and Dan Klein. 2021. Detoxifying Language Models Risks Marginalizing Minority Voices. arXiv:2104.06390 [cs] (April 2021). http://arxiv.org/abs/2104.06390 arXiv:2104.06390.
    [198]
    Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. arXiv:2010.11934 [cs] (March 2021). http://arxiv.org/abs/2010.11934 arXiv:2010.11934.
    [199]
    James O. Young. 2005. Profound Offense and Cultural Appropriation. The Journal of Aesthetics and Art Criticism 63, 2 (2005), 135–146. https://www.jstor.org/stable/3700467
    [200]
    Wu Youyou, Michal Kosinski, and David Stillwell. 2015. Computer-based personality judgments are more accurate than those made by humans. Proceedings of the National Academy of Sciences 112, 4 (January 2015), 1036–1040. https://doi.org/10.1073/pnas.1418680112
    [201]
    Da Yu, Saurabh Naik, Arturs Backurs, Sivakanth Gopi, Huseyin A. Inan, Gautam Kamath, Janardhan Kulkarni, Yin Tat Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, and Huishuai Zhang. 2021. Differentially Private Fine-tuning of Language Models. arXiv:2110.06500 [cs, stat] (Oct. 2021). http://arxiv.org/abs/2110.06500 arXiv:2110.06500.
    [202]
    Sean Zdenek. 2007. “Just Roll Your Mouse Over Me”: Designing Virtual Women for Customer Service on the Web. Technical Communication Quarterly 16, 4 (August 2007), 397–430. https://doi.org/10.1080/10572250701380766
    [203]
    Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2020. Defending Against Neural Fake News. arXiv:1905.12616 [cs] (Dec. 2020). http://arxiv.org/abs/1905.12616 arXiv:1905.12616.
    [204]
    Ningyu Zhang, Luoqiu Li, Xiang Chen, Shumin Deng, Zhen Bi, Chuanqi Tan, Fei Huang, and Huajun Chen. 2021. Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners. arXiv:2108.13161 [cs] (October 2021). http://arxiv.org/abs/2108.13161 arXiv:2108.13161.
    [205]
    Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. OPT: Open Pre-trained Transformer Language Models. (2022). https://doi.org/10.48550/arxiv.2205.01068
    [206]
    Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate Before Use: Improving Few-Shot Performance of Language Models. arXiv:2102.09690 [cs] (June 2021). http://arxiv.org/abs/2102.09690 arXiv:2102.09690.
    [207]
Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. 2019. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593 (2019).
    [208]
    Jakub Złotowski, Diane Proudfoot, Kumar Yogeeswaran, and Christoph Bartneck. 2015. Anthropomorphism: Opportunities and Challenges in Human–Robot Interaction. International Journal of Social Robotics 7, 3 (June 2015), 347–360. https://doi.org/10.1007/s12369-014-0267-6

    Cited By

    View all
    • (2024)AI-Generated Text Detector for Arabic Language Using Encoder-Based Transformer ArchitectureBig Data and Cognitive Computing10.3390/bdcc80300328:3(32)Online publication date: 18-Mar-2024
    • (2024)The Case for a Broader Approach to AI Assurance: Addressing 'Hidden' Harms in the Development of Artificial IntelligenceSSRN Electronic Journal10.2139/ssrn.4660737Online publication date: 2024
    • (2024)Framework-based qualitative analysis of free responses of Large Language Models: Algorithmic fidelityPLOS ONE10.1371/journal.pone.030002419:3(e0300024)Online publication date: 12-Mar-2024
    • Show More Cited By

    Published In

    FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency
    June 2022
    2351 pages
    ISBN:9781450393522
    DOI:10.1145/3531146
    This work is licensed under a Creative Commons Attribution International 4.0 License.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. language models
    2. responsible AI
    3. responsible innovation
    4. risk assessment
    5. technology risks
