DOI: 10.1145/3630106.3659031
Research Article · Open Access

The Role of Explainability in Collaborative Human-AI Disinformation Detection

Published: 05 June 2024

Abstract

Manual verification has become increasingly challenging due to the growing volume of information shared online and the rise of generative Artificial Intelligence (AI). Consequently, AI systems are used to identify disinformation and deepfakes online. Previous research has shown that combining AI with human expertise yields superior performance. Moreover, under the EU AI Act, human oversight is required when AI systems are used in domains where fundamental human rights, such as the right to free expression, might be affected. Thus, AI systems need to be transparent and offer sufficient explanations to be comprehensible. Much research has been devoted to integrating eXplainability (XAI) features to increase the transparency of AI systems; however, these features lack human-centered evaluation. Additionally, the meaningfulness of explanations varies depending on users’ background knowledge and individual factors. This research therefore implements a human-centered evaluation schema to assess different XAI features for the collaborative human-AI disinformation detection task. Objective and subjective evaluation dimensions, such as performance, perceived usefulness, understandability, and trust in the AI system, are used to evaluate the different XAI features. A user study was conducted with 433 participants in total: 406 crowdworkers and 27 journalists, the latter participating as experts in detecting disinformation. The results show that free-text explanations improve non-expert performance but do not influence the performance of experts. The XAI features increase perceived usefulness, understandability, and trust in the AI system, but they can also lead crowdworkers to trust the AI system blindly when its predictions are wrong.
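The abstract describes the evaluation schema only at a high level. As a purely illustrative sketch, the Python below shows one way the objective dimension (task performance) and the subjective dimensions (perceived usefulness, understandability, trust) could be tabulated per participant group and XAI condition; all names here (TrialRecord, summarize, the Likert fields) are hypothetical and are not taken from the paper's materials.

```python
# Hypothetical sketch: tabulating objective and subjective evaluation
# dimensions per (participant group, XAI feature) cell, in the spirit of
# the study design described in the abstract. Not the authors' code.
from dataclasses import dataclass
from collections import defaultdict
from statistics import mean

@dataclass
class TrialRecord:
    group: str             # "crowdworker" or "journalist" (expert)
    xai_feature: str       # e.g. "none", "free_text", "saliency"
    correct: bool          # objective: verdict matched ground truth?
    usefulness: int        # subjective: 1-5 Likert rating
    understandability: int # subjective: 1-5 Likert rating
    trust: int             # subjective: 1-5 Likert rating

def summarize(records: list[TrialRecord]) -> dict[tuple[str, str], dict[str, float]]:
    """Aggregate accuracy and mean Likert ratings per (group, XAI feature)."""
    buckets: dict[tuple[str, str], list[TrialRecord]] = defaultdict(list)
    for r in records:
        buckets[(r.group, r.xai_feature)].append(r)
    return {
        key: {
            "accuracy": mean(r.correct for r in rs),  # bools average to a rate
            "usefulness": mean(r.usefulness for r in rs),
            "understandability": mean(r.understandability for r in rs),
            "trust": mean(r.trust for r in rs),
        }
        for key, rs in buckets.items()
    }

# Example: under a tabulation like this, the paper's headline result would
# appear as higher accuracy for crowdworkers in the free-text condition,
# with journalists' accuracy essentially unchanged across conditions.
records = [
    TrialRecord("crowdworker", "free_text", True, 4, 4, 5),
    TrialRecord("crowdworker", "none", False, 2, 3, 3),
]
print(summarize(records)[("crowdworker", "free_text")]["accuracy"])  # 1.0
```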


Cited By

  • (2024) Human vs. Artificial: Detecting Fake News and Disinformation. Media & Marketing Identity, 587–600. https://doi.org/10.34135/mmidentity-2024-59
  • (2024) NewsPolyML: Multi-lingual European News Fake Assessment Dataset. Proceedings of the 3rd ACM International Workshop on Multimedia AI against Disinformation, 82–90. https://doi.org/10.1145/3643491.3660290
  • (2024) Evaluating Human-Centered AI Explanations: Introduction of an XAI Evaluation Framework for Fact-Checking. Proceedings of the 3rd ACM International Workshop on Multimedia AI against Disinformation, 91–100. https://doi.org/10.1145/3643491.3660283
  • (2024) Explaining Veracity Predictions with Evidence Summarization: A Multi-Task Model Approach. 2024 IEEE International Conference on Big Data (BigData), 6924–6932. https://doi.org/10.1109/BigData62323.2024.10825442

Published In

FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency
June 2024
2580 pages
ISBN:9798400704505
DOI:10.1145/3630106
This work is licensed under a Creative Commons Attribution International 4.0 License.

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 05 June 2024

Author Tags

  1. Collaborative disinformation detection
  2. XAI
  3. expert and lay people evaluation
  4. transparent AI systems

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • BMBF Germany

Conference

FAccT '24
