DOI: 10.1145/3334480.3383159

The Rumour Mill: Making the Spread of Misinformation Explicit and Tangible

Published: 25 April 2020

Abstract

The spread of misinformation presents a technological and social threat to society. With the advance of AI-based language models, automatically generated text has become difficult to identify and easy to produce at scale. We present "The Rumour Mill", a playful art piece designed as a commentary on the spread of rumours and automatically generated misinformation. The mill is an interactive tabletop machine that invites users to experience the process of creating believable text by operating its tangible controls. By manipulating visible parameters, the user adjusts the genre and type of an automatically generated rumour. The Rumour Mill is a physical demonstration of current technology's ability to generate and manipulate natural-language text, and of the act of starting and spreading rumours.
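
The extended abstract does not describe the generation pipeline, but the interaction it sketches, physical controls that steer the genre and type of a machine-written rumour, maps naturally onto prompt-conditioned generation with a pretrained language model (in the spirit of GPT-2 or CTRL-style control codes). The sketch below is a minimal illustration under that assumption; the genre prefixes, the spin_the_mill helper, and the use of the Hugging Face transformers GPT-2 pipeline are illustrative choices, not the authors' implementation.

```python
# A minimal sketch, NOT the authors' implementation: it assumes each tangible
# control on the mill is mapped to a textual prefix that conditions a
# pretrained language model (Hugging Face GPT-2 via the `transformers` pipeline).
# The genre prefixes and the `spin_the_mill` helper are hypothetical.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Hypothetical mapping from a physical "genre" dial position to a prompt prefix.
GENRE_PREFIXES = {
    "politics":  "Breaking political news:",
    "celebrity": "Celebrity gossip update:",
    "science":   "Surprising scientific claim:",
}

def spin_the_mill(genre: str, seed_topic: str, length: int = 60) -> str:
    """Generate a rumour-like text conditioned on a genre setting and a topic."""
    prompt = f"{GENRE_PREFIXES[genre]} {seed_topic}"
    result = generator(
        prompt,
        max_length=length,      # total length in tokens, including the prompt
        num_return_sequences=1,
        do_sample=True,         # sample rather than decode greedily
        top_p=0.9,              # nucleus sampling for more varied "rumours"
    )
    return result[0]["generated_text"]

if __name__ == "__main__":
    print(spin_the_mill("celebrity", "A well-known actor was reportedly seen"))
```

Turning a different dial would then swap the prefix (or a sampling parameter such as top_p), which is one plausible way the visible parameters described above could alter the genre and tone of the generated rumour.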

Supplementary Material

MP4 File (int035pv.mp4)
Preview video
MP4 File (dein1078vf.mp4)
Supplemental video


Cited By

  • (2023) Talking Abortion (Mis)information with ChatGPT on TikTok. 2023 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), 594-608. DOI: 10.1109/EuroSPW59978.2023.00071. Online publication date: Jul-2023.
  • (2022) Deepfakes and Society: What Lies Ahead? Frontiers in Fake Media Generation and Detection, 3-43. DOI: 10.1007/978-981-19-1524-6_1. Online publication date: 29-May-2022.
  • (2021) ARGH! Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 3847-3856. DOI: 10.1145/3459637.3481894. Online publication date: 26-Oct-2021.


Published In

CHI EA '20: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems
April 2020
4474 pages
ISBN:9781450368193
DOI:10.1145/3334480
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. NLP
  2. critical design
  3. misinformation
  4. rumour spread

Qualifiers

  • Abstract

Conference

CHI '20

Acceptance Rates

Overall Acceptance Rate 6,164 of 23,696 submissions, 26%

Article Metrics

  • Downloads (last 12 months): 25
  • Downloads (last 6 weeks): 2
Reflects downloads up to 16 Oct 2024.
