DOI: 10.1145/3485447.3512246

VICTOR: An Implicit Approach to Mitigate Misinformation via Continuous Verification Reading

Published: 25 April 2022
Abstract

We design and evaluate VICTOR, an easy-to-apply module on top of a recommender system to mitigate misinformation. VICTOR takes an elegant, implicit approach to delivering fake-news verifications, such that readers of fake news can continuously access more verified news articles about fake-news events without explicit correction. We frame fake-news intervention within VICTOR as a graph-based question-answering (QA) task, with Q as a fake-news article and A as the corresponding verified articles. Specifically, VICTOR adopts reinforcement learning: it first considers fake-news readers’ preferences supported by the underlying news recommender systems and then directs their reading sequence towards the verified news articles. To verify the performance of VICTOR, we collect and organize VERI, a new dataset consisting of real-news articles, user browsing logs, and fake-real news pairs for a large number of misinformation events. We evaluate zero-shot and few-shot VICTOR on VERI to simulate the never-exposed and previously-seen conditions of users reading a piece of fake news. Results demonstrate that, compared to baselines, VICTOR proactively delivers 6% more verified articles, with a 7.5% increase in diversity, to over 68% of at-risk users who have been exposed to fake news. Moreover, we conduct a field user study in which 165 participants evaluate fake-news articles. Participants in the VICTOR condition show better exposure, proposal, and click rates on verified news articles than those in the other two conditions. Altogether, our work demonstrates the potential of VICTOR, i.e., combating fake news by delivering verified information implicitly.
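The abstract describes VICTOR as reinforcement learning over a news graph that nudges a reader's sequence of clicks toward verified articles while staying compatible with the underlying recommender. As a rough, hedged illustration of that framing only (not the authors' implementation), the sketch below uses plain tabular Q-learning on a toy graph; the node names, preference scores, and reward shaping are invented for the example and are not taken from the paper.

```python
# Hypothetical sketch: an RL-guided walk on a toy news graph that steers a
# reading sequence from a fake-news article toward a verified article while
# favoring moves with higher simulated recommender preference. Everything
# here (graph, rewards, scores) is illustrative, not the paper's system.
import random
from collections import defaultdict

# Toy news-relation graph: edges are "recommendable next reads".
GRAPH = {
    "fake_event_A": ["related_1", "related_2"],
    "related_1": ["related_3", "verified_event_A"],
    "related_2": ["related_3"],
    "related_3": ["verified_event_A"],
    "verified_event_A": [],  # terminal node: the verification target
}
VERIFIED = {"verified_event_A"}
# Stand-in for the recommender's preference score for each candidate article.
PREFERENCE = {"related_1": 0.8, "related_2": 0.6, "related_3": 0.5, "verified_event_A": 0.9}

Q = defaultdict(float)  # tabular Q-values over (current_article, next_article)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

def reward(next_article):
    # Preference-shaped step reward plus a large bonus for reaching verification.
    return PREFERENCE.get(next_article, 0.0) + (10.0 if next_article in VERIFIED else 0.0)

def choose(article):
    # Epsilon-greedy choice among recommendable next reads; None at terminal nodes.
    options = GRAPH[article]
    if not options:
        return None
    if random.random() < EPSILON:
        return random.choice(options)
    return max(options, key=lambda a: Q[(article, a)])

for _ in range(2000):  # simulated reading episodes, each starting at the fake article
    article = "fake_event_A"
    for _ in range(5):  # cap the length of one reading sequence
        nxt = choose(article)
        if nxt is None:
            break
        best_future = max((Q[(nxt, a)] for a in GRAPH[nxt]), default=0.0)
        Q[(article, nxt)] += ALPHA * (reward(nxt) + GAMMA * best_future - Q[(article, nxt)])
        article = nxt

# Greedy rollout: the learned reading path from the fake article to verification.
path, article = ["fake_event_A"], "fake_event_A"
while GRAPH[article]:
    article = max(GRAPH[article], key=lambda a: Q[(article, a)])
    path.append(article)
print(" -> ".join(path))
```

Running the sketch prints a greedy reading path such as fake_event_A -> related_1 -> verified_event_A, i.e., the kind of continuous verification-reading trajectory the abstract describes; per the abstract, VICTOR itself builds on a news recommender and user browsing logs rather than a hand-built toy graph.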


    Cited By

• (2023) Understanding the Contribution of Recommendation Algorithms on Misinformation Recommendation and Misinformation Dissemination on Social Networks. ACM Transactions on the Web 17(4), 1–26. https://doi.org/10.1145/3616088. Online publication date: 10-Oct-2023.
• (2022) Misinformation Containment Using NLP and Machine Learning. In Deep Learning Research Applications for Natural Language Processing, 41–56. https://doi.org/10.4018/978-1-6684-6001-6.ch003. Online publication date: 9-Dec-2022.


                  Information & Contributors

                  Information

                  Published In

                  WWW '22: Proceedings of the ACM Web Conference 2022
                  April 2022
                  3764 pages
                  ISBN:9781450390965
                  DOI:10.1145/3485447
                  Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].


                  Publisher

                  Association for Computing Machinery

                  New York, NY, United States

                  Publication History

                  Published: 25 April 2022


                  Author Tags

                  1. fake news intervention
                  2. misinformation
                  3. user research

                  Qualifiers

                  • Research-article
                  • Research
                  • Refereed limited


                  Conference

                  WWW '22
                  Sponsor:
                  WWW '22: The ACM Web Conference 2022
                  April 25 - 29, 2022
                  Virtual Event, Lyon, France

                  Acceptance Rates

                  Overall Acceptance Rate 1,899 of 8,196 submissions, 23%


Bibliometrics & Citations

Article Metrics

• Downloads (last 12 months): 69
• Downloads (last 6 weeks): 2

Reflects downloads up to 27 Jul 2024

