DOI: 10.1145/3461702.3462584

The Deepfake Detection Dilemma: A Multistakeholder Exploration of Adversarial Dynamics in Synthetic Media

Published: 30 July 2021

Abstract

Synthetic media detection technologies label media as either synthetic or non-synthetic and are increasingly used by journalists, web platforms, and the general public to identify misinformation and other forms of problematic content. As both well-resourced organizations and the non-technical general public generate more sophisticated synthetic media, the capacity of purveyors of problematic content to adapt induces a detection dilemma: as detection practices become more accessible, they become more easily circumvented. This paper describes how a multistakeholder cohort from academia, technology platforms, media entities, and civil society organizations active in synthetic media detection and its socio-technical implications evaluates the detection dilemma. Specifically, we offer an assessment of detection contexts and adversary capacities sourced from the broader, global AI and media integrity community concerned with mitigating the spread of harmful synthetic media. A collection of personas illustrates the intersection between unsophisticated and highly-resourced sponsors of misinformation in the context of their technical capacities. This work concludes that there is no "best" approach to navigating the detection dilemma, but derives a set of implications from multistakeholder input to better inform detection process decisions and policies in practice.
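As a purely illustrative companion to the abstract, the sketch below shows the generic shape of the binary labeling task the paper describes, followed by the adversarial dynamic behind the detection dilemma. The paper itself prescribes no detector architecture or attack: the toy CNN, the FGSM-style evasion step, and the 0.03 perturbation budget are all assumptions introduced here for illustration, assuming PyTorch is available.

```python
# Illustrative sketch only: the paper does not specify a detector.
# A toy binary classifier labels an image "synthetic" vs. "non-synthetic";
# a single FGSM-style gradient step then shows how a white-box adversary
# can push the score back toward "non-synthetic" -- the circumvention
# side of the detection dilemma. All names and the epsilon budget are
# hypothetical.
import torch
import torch.nn as nn


class SyntheticMediaDetector(nn.Module):
    """Toy CNN mapping an RGB image to P(synthetic)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))


detector = SyntheticMediaDetector()

# Detection side: score a stand-in video frame.
frame = torch.rand(1, 3, 224, 224)
p = detector(frame).item()
print(f"P(synthetic) = {p:.2f} -> {'synthetic' if p > 0.5 else 'non-synthetic'}")

# Adversary side: one gradient step that lowers P(synthetic),
# an evasion attack in the spirit of FGSM (hypothetical epsilon of 0.03).
frame.requires_grad_(True)
detector(frame).sum().backward()
evasive = (frame - 0.03 * frame.grad.sign()).clamp(0, 1).detach()
print(f"after perturbation: P(synthetic) = {detector(evasive).item():.2f}")
```

Note how the last three lines need nothing beyond query-and-gradient access to the detector itself; this mirrors the abstract's claim that making detection practices more accessible also makes them easier to circumvent.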



Published In

AIES '21: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society
July 2021
1077 pages
ISBN: 9781450384735
DOI: 10.1145/3461702
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. misinformation
  2. security
  3. synthetic media

Qualifiers

  • Research-article

Conference

AIES '21

Acceptance Rates

Overall Acceptance Rate: 61 of 162 submissions, 38%

