DOI: 10.1145/3494109
WDC '22: Proceedings of the 1st Workshop on Security Implications of Deepfakes and Cheapfakes
ACM, 2022 Proceeding
Publisher:
Association for Computing Machinery, New York, NY, United States
Conference:
ASIA CCS '22: ACM Asia Conference on Computer and Communications Security, Nagasaki, Japan, 30 May 2022
ISBN:
978-1-4503-9178-8
Published:
30 May 2022
Abstract

It is our great pleasure to welcome you to the first ACM Workshop on the Security Implications of Deepfakes and Cheapfakes (WDC 2022), co-located with ACM AsiaCCS 2022. The workshop's mission is to share and discuss novel solutions to the security- and privacy-related issues that arise from fake multimedia. A total of ten papers were submitted in response to the call for papers, with authors from four countries: Germany, Israel, South Korea, and the United States. Submissions were evaluated on significance, novelty, technical quality, and relevance to the field, and the review process was double-blind. The program committee members put considerable effort into evaluating the papers, with most receiving two or three reviews. In the end, we accepted four short papers (a 57.1% acceptance rate) and three poster papers for presentation at the workshop.

SESSION: Keynote Talk I
keynote
Deepfake Detection: State-of-the-art and Future Directions

In recent years there have been astonishing advances in AI-based synthetic media generation. Thanks to deep learning-based approaches, it is now possible to generate data with a high level of realism. While this opens up new opportunities for the ...

SESSION: Session 1: Short Papers
short-paper
Open Access
Extracting a Minimal Trigger for an Efficient Backdoor Poisoning Attack Using the Activation Values of a Deep Neural Network

A backdoor poisoning attack is an approach that threatens the security of artificial intelligence by injecting a predefined backdoor trigger into a training dataset to induce misbehavior in the classification model. In this paper, we discuss an approach ...
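To make the mechanism concrete, here is a minimal sketch of the trigger-injection step in Python, assuming an (N, H, W, C) NumPy image array; the patch shape, location, and poison rate are illustrative assumptions, not the minimal trigger the paper extracts from activation values.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05, seed=0):
    """Stamp a small trigger patch on a random fraction of training images
    and relabel them to the attacker's target class.

    images: (N, H, W, C) float array in [0, 1]; labels: (N,) int array.
    The 4x4 white patch and 5% poison rate are illustrative, not the
    paper's parameters.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_rate * len(images)),
                     replace=False)
    images[idx, -4:, -4:, :] = 1.0  # trigger: white square, bottom-right corner
    labels[idx] = target_label      # any triggered input is steered to this class
    return images, labels
```

A model trained on such data behaves normally on clean inputs but predicts the target class whenever the patch appears, which is the misbehavior a minimal-trigger attack aims to induce more efficiently.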

short-paper
Zoom-DF: A Dataset for Video Conferencing Deepfake

With the rapid growth of deep learning methods, AI technologies for generating deepfake videos have also advanced significantly. Nowadays, manipulated videos such as deepfakes are so sophisticated that one cannot easily differentiate between ...

short-paper
Public Access
Evaluating Robustness of Sequence-based Deepfake Detector Models by Adversarial Perturbation

Deepfake videos are getting better in quality and can be used for dangerous disinformation campaigns. The pressing need to detect these videos has motivated researchers to develop different types of detection models. Among them, the models that utilize ...
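For context, adversarial perturbation of a detector is commonly demonstrated with a one-step attack like the fast gradient sign method (FGSM) sketched below; the `detector` model, input shape, and epsilon are placeholder assumptions, and the paper's attack on sequence-based models may differ.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(detector, frames, labels, eps=4 / 255):
    """One-step FGSM: move each pixel in the direction that increases the
    detector's loss, bounded by eps per pixel. `frames` is assumed to be
    a (B, T, C, H, W) tensor in [0, 1] for a sequence-based detector.
    """
    frames = frames.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(detector(frames), labels)
    loss.backward()
    adv = frames + eps * frames.grad.sign()  # gradient sign, scaled by eps
    return adv.clamp(0, 1).detach()
```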

short-paper
Negative Adversarial Example Generation Against Naver's Celebrity Recognition API

Deep Neural Networks (DNNs) are very effective in image classification, detection, and recognition due to the large amount of available data. However, they can be easily fooled by adversarial examples and produce incorrect results, which can cause problems ...

SESSION: Keynote Talk II
keynote
Advanced Machine Learning Techniques to Detect Various Types of Deepfakes

Despite significant advances in deep learning-based forgery detectors for distinguishing manipulated deepfake images, most detection approaches suffer moderate to significant performance degradation on low-quality compressed deepfake images. ...

SESSION: Session 2: Poster and Discussion Papers
short-paper
Deepfake Detection for Fake Images with Facemasks

Hyper-realistic face image generation and manipulation have given rise to numerous unethical social issues, e.g., invasion of privacy, threats to security, and malicious political maneuvering, which have driven the development of recent deepfake ...

short-paper
Discussion Paper: The Integrity of Medical AI

Deep learning has proven itself to be an incredible asset to the medical community. However, with offensive AI, the technology can be turned against the medical community; adversarial samples can be used to cause misdiagnosis, and medical deepfakes can be ...

short-paper
A Face Pre-Processing Approach to Evade Deepfake Detector

Recently, various image synthesis technologies have increased the prevalence of impersonation attacks. With the development of such technologies, harms to people, such as defamation and fake news, have also increased. Deepfakes have already evolved to ...
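As a rough illustration of what "face pre-processing" can mean in this setting, the sketch below blurs and re-encodes a face crop before it reaches a detector; the transforms and parameters are assumptions for illustration, not the paper's method.

```python
from io import BytesIO
from PIL import Image, ImageFilter

def preprocess_face(face, blur_radius=1.0, jpeg_quality=75):
    """Apply a visually subtle blur and JPEG re-encode to a PIL face crop.
    Low-level transforms like these can suppress the high-frequency
    artifacts many deepfake detectors rely on; the blur radius and JPEG
    quality here are illustrative choices.
    """
    blurred = face.filter(ImageFilter.GaussianBlur(blur_radius))
    buf = BytesIO()
    blurred.save(buf, format="JPEG", quality=jpeg_quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")
```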

Contributors
  • Sungkyunkwan University
  • Commonwealth Scientific and Industrial Research Organisation
  • Sungkyunkwan University
