DOI: 10.1145/3491102.3517446
CHI Conference Proceedings · Research Article · Open Access

Are Deepfakes Concerning? Analyzing Conversations of Deepfakes on Reddit and Exploring Societal Implications

Published: 28 April 2022

Abstract

Deepfakes are synthetic content generated using advanced deep learning and AI technologies. Advances in these technologies have made it far easier for anyone to create and share deepfakes, which may raise societal concerns depending on how communities engage with them. However, little research is available on how communities perceive deepfakes. We examined deepfake conversations on Reddit from 2018 to 2021, including the major topics, how they changed over time, and what these conversations imply. Using a mixed-method approach that combines topic modeling and qualitative coding, we analyzed 6,638 posts and 86,425 comments and found discussions centered on concerns about how believable deepfakes are and how platforms moderate them. We also found that Reddit conversations are largely pro-deepfake, building a community that supports creating and sharing deepfake artifacts, and a marketplace for them, regardless of the consequences. The implications derived from our qualitative codes indicate that deepfake conversations raise societal concerns. We propose implications for Human-Computer Interaction (HCI) research to mitigate the harm created by deepfakes.
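The mixed-method approach named in the abstract pairs topic modeling with qualitative coding, but this page does not describe the pipeline itself. The following is a minimal, illustrative sketch of topic modeling over Reddit text, assuming scikit-learn's LatentDirichletAllocation and a toy in-memory corpus; the example documents, number of topics, and preprocessing choices are assumptions for illustration, not the authors' actual configuration.

# Illustrative sketch only: the paper does not publish its topic-modeling pipeline here.
# Assumes scikit-learn LDA over an already-collected, toy corpus of Reddit text.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical documents standing in for Reddit posts/comments that mention deepfakes.
documents = [
    "this deepfake of the actor looks almost completely real",
    "platforms need clearer rules for moderating deepfake videos",
    "tutorial for training a face swap model on your own footage",
    "selling custom deepfake requests in this subreddit",
]

# Bag-of-words features; stop-word removal stands in for fuller text preprocessing.
vectorizer = CountVectorizer(stop_words="english")
doc_term_matrix = vectorizer.fit_transform(documents)

# Fit LDA; the number of topics would normally be tuned (e.g., with coherence measures).
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term_matrix)

# Show the top words per discovered topic.
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")

In a workflow like the one the abstract describes, topic keywords of this kind would then feed the qualitative coding step.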

Supplementary Material

MP4 File (3491102.3517446-talk-video.mp4): Talk Video
MP4 File (3491102.3517446-video-preview.mp4): Video Preview




Published In

CHI '22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems
April 2022
10459 pages
ISBN: 978-1-4503-9157-3
DOI: 10.1145/3491102
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.


Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 28 April 2022


Author Tags

  1. content analysis
  2. deepfake
  3. societal implication
  4. topic modeling

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

CHI '22
Sponsor:
CHI '22: CHI Conference on Human Factors in Computing Systems
April 29 - May 5, 2022
New Orleans, LA, USA

Acceptance Rates

Overall Acceptance Rate 6,199 of 26,314 submissions, 24%



Article Metrics

  • Downloads (Last 12 months)9,130
  • Downloads (Last 6 weeks)1,218
Reflects downloads up to 21 Dec 2024


Cited By

  • (2025) Exploring the Landscape of Compressed DeepFakes: Generation, Dataset and Detection. Neurocomputing, 619, 129116. https://doi.org/10.1016/j.neucom.2024.129116 (Feb 2025)
  • (2024) A Topic Modeling Approach Towards Understanding the Discourse between Religion and Videogames on Reddit. Proceedings of the ACM on Human-Computer Interaction, 8(CHI PLAY), 1–44. https://doi.org/10.1145/3677054 (15 Oct 2024)
  • (2024) Exploring the Use of Abusive Generative AI Models on Civitai. Proceedings of the 32nd ACM International Conference on Multimedia, 6949–6958. https://doi.org/10.1145/3664647.3681052 (28 Oct 2024)
  • (2024) Understanding the Impact of AI-Generated Content on Social Media: The Pixiv Case. Proceedings of the 32nd ACM International Conference on Multimedia, 6813–6822. https://doi.org/10.1145/3664647.3680631 (28 Oct 2024)
  • (2024) “It would work for me too”: How Online Communities Shape Software Developers’ Trust in AI-Powered Code Generation Tools. ACM Transactions on Interactive Intelligent Systems, 14(2), 1–39. https://doi.org/10.1145/3651990 (15 May 2024)
  • (2024) Artificial Dreams: Surreal Visual Storytelling as Inquiry Into AI ‘Hallucination’. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 619–637. https://doi.org/10.1145/3643834.3660685 (1 Jul 2024)
  • (2024) Forms of Fraudulence in Human-Centered Design: Collective Strategies and Future Agenda for Qualitative HCI Research. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1–5. https://doi.org/10.1145/3613905.3636309 (11 May 2024)
  • (2024) Understanding Public Perceptions of AI Conversational Agents: A Cross-Cultural Analysis. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–17. https://doi.org/10.1145/3613904.3642840 (11 May 2024)
  • (2024) Understanding Fraudulence in Online Qualitative Studies: From the Researcher’s Perspective. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–17. https://doi.org/10.1145/3613904.3642732 (11 May 2024)
  • (2024) Is Stack Overflow Obsolete? An Empirical Study of the Characteristics of ChatGPT Answers to Stack Overflow Questions. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–17. https://doi.org/10.1145/3613904.3642596 (11 May 2024)
  • Show More Cited By
