It is our great pleasure to welcome you to the first ACM Workshop on Security Implications of Deepfakes and Cheapfakes (WDC 2022), co-located with ACM AsiaCCS 2022. The workshop's mission is to share and discuss novel solutions to the security- and privacy-related issues that arise from fake multimedia. A total of ten papers were submitted in response to the call for papers, with authors from four countries: Germany, Israel, South Korea, and the United States. Submissions were evaluated on significance, novelty, technical quality, and relevance to the field. The review process was double-blind, and the program committee members exerted considerable effort in evaluating the papers, with most receiving two or three reviews. In the end, we accepted four short papers (a 57.1% acceptance rate) and three poster papers for presentation at the workshop.
Proceedings
Deepfake Detection: State-of-the-art and Future Directions
In recent years, there have been astonishing advances in AI-based synthetic media generation. Thanks to deep learning-based approaches, it is now possible to generate data with a high level of realism. While this opens up new opportunities for the ...
Extracting a Minimal Trigger for an Efficient Backdoor Poisoning Attack Using the Activation Values of a Deep Neural Network
A backdoor poisoning attack is an approach that threatens the security of artificial intelligence by injecting a predefined backdoor trigger into a training dataset to induce misbehavior in the classification model. In this paper, we discuss an approach ...
Zoom-DF: A Dataset for Video Conferencing Deepfake
With the rapid growth of deep learning methods, AI technologies for generating deepfake videos have also advanced significantly. Nowadays, manipulated videos such as deepfakes are so sophisticated that one cannot easily differentiate between ...
Evaluating Robustness of Sequence-based Deepfake Detector Models by Adversarial Perturbation
Deepfake videos are improving in quality and can be used for dangerous disinformation campaigns. The pressing need to detect these videos has motivated researchers to develop different types of detection models. Among them, the models that utilize ...
Negative Adversarial Example Generation Against Naver's Celebrity Recognition API
Deep Neural Networks (DNNs) are very effective in image classification, detection, and recognition due to the large amount of available data. However, they can be easily fooled by adversarial examples and produce incorrect results, which can cause problems ...
Advanced Machine Learning Techniques to Detect Various Types of Deepfakes
Despite significant advancements of deep learning-based forgery detectors for distinguishing manipulated deepfake images, most detection approaches suffer from moderate to significant performance degradation with low-quality compressed deepfake images. ...
Deepfake Detection for Fake Images with Facemasks
Hyper-realistic face image generation and manipulation have given rise to numerous unethical social issues, e.g., invasion of privacy, threats to security, and malicious political maneuvering, which have resulted in the development of recent deepfake ...
Discussion Paper: The Integrity of Medical AI
Deep learning has proven itself to be an incredible asset to the medical community. However, with offensive AI, the technology can be turned against the medical community; adversarial samples can be used to cause misdiagnosis, and medical deepfakes can be ...
A Face Pre-Processing Approach to Evade Deepfake Detector
Recently, various image synthesis technologies have increased the prevalence of impersonation attacks. With the development of such technologies, harms to people such as defamation and fake news have also increased. Deepfakes have already evolved to ...