DOI: 10.1145/3630106.3658987
Research article · Open access

The Impact and Opportunities of Generative AI in Fact-Checking

Published: 05 June 2024

Abstract

Generative AI appears poised to transform white-collar professions, with more than 90% of Fortune 500 companies using OpenAI’s flagship GPT models, which have been characterized as “general purpose technologies” capable of effecting epochal changes in the economy. But how will such technologies impact organizations whose job is to verify and report factual information, and to ensure the health of the information ecosystem? To investigate this question, we conducted 30 interviews with N=38 participants working at 29 fact-checking organizations across six continents, asking how they use generative AI and what opportunities and challenges they see in the technology. We found that the uses of generative AI envisioned by fact-checkers differ based on organizational infrastructure, with applications for quality assurance in Editing, for trend analysis in Investigation, and for information literacy in Advocacy. We used the Technology-Organization-Environment (TOE) framework to describe participant concerns ranging from the Technological (lack of transparency) to the Organizational (resource constraints) to the Environmental (uncertain and evolving policy). Building on the insights of our participants, we describe value tensions between fact-checking and generative AI, and propose a novel Verification dimension for the design space of generative models for information verification work. Finally, we outline an agenda for fairness, accountability, and transparency research to support the responsible use of generative AI in fact-checking. Throughout, we highlight the importance of human infrastructure and labor in producing verified information in collaboration with AI. We expect that this work will not only inform the scientific literature on fact-checking, but also contribute to the understanding of how organizations adapt to a powerful but unreliable new technology.


Cited By

  • Outsourcing, Augmenting, or Complicating: The Dynamics of AI in Fact-Checking Practices in the Nordics. Emerging Media 2, 3 (2024), 449–473. https://doi.org/10.1177/27523543241288846. Online publication date: 9-Oct-2024.
  • Striking the Balance in Using LLMs for Fact-Checking: A Narrative Literature Review. Disinformation in Open Online Media (2024), 1–15. https://doi.org/10.1007/978-3-031-71210-4_1. Online publication date: 2-Sep-2024.


Published In

FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency
June 2024
2580 pages
ISBN: 9798400704505
DOI: 10.1145/3630106
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 05 June 2024


Author Tags

  1. Design
  2. Fact-Checking
  3. Generative AI
  4. Sociotechnical Infrastructure
  5. Transparency

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

FAccT '24


Article Metrics

  • Downloads (last 12 months): 1,085
  • Downloads (last 6 weeks): 178

Reflects downloads up to 25 Jan 2025.

