Search Results (14)

Search Parameters:
Keywords = deepfake attacks

14 pages, 2708 KiB  
Article
A Multi-Attack Adaptive Fake Face Detection Network Based on Global Feature Normalization
by Honggang Xie, Jia Liu, Zhiwei Chen, Kaiyuan Hou and Yuan Yao
Electronics 2024, 13(23), 4615; https://doi.org/10.3390/electronics13234615 - 22 Nov 2024
Viewed by 510
Abstract
The advancement of deepfake technology has produced increasingly realistic forged faces, posing a challenge for existing fake face detection models, which often adapt poorly to complex and varied forgery techniques. To address this challenge, this paper proposes a fake face detection network with high accuracy and strong generalization. A global normalization calibration module is designed to counter ineffective feature extraction within the network, sharpening the model's focus on key feature regions by establishing a mutual-inhibition competition mechanism between channels. The network maintains high accuracy against varied attack methods from both the physical and digital worlds. Experimental results show that the proposed method excels in both binary and multi-class classification tests on the CASIA-FASD dataset, the Celeb-DF dataset, and a self-constructed dataset. For binary classification, accuracy reaches 98.19% on CASIA-FASD, 99.17% on Celeb-DF, and 98.03% on the self-constructed dataset, slightly ahead of competing methods. For multi-class classification, the proposed model achieves the best performance across all tested forgery types, demonstrating strong overall performance and adaptability.
(This article belongs to the Section Artificial Intelligence)
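The listing describes the calibration module only at a high level; the sketch below is one plausible reading of a "mutual inhibition" channel competition, implemented as a softmax gate over globally pooled channel statistics. The class name, layer sizes, and rescaling factor are hypothetical, not the paper's.

```python
# Hypothetical sketch of a "mutual inhibition" channel recalibration block:
# global average pooling, then a softmax across channels so that boosting one
# channel's weight necessarily suppresses the others.
import torch
import torch.nn as nn

class GlobalNormCalibration(nn.Module):          # name is hypothetical
    def __init__(self, channels: int):
        super().__init__()
        self.fc = nn.Linear(channels, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W)
        s = x.mean(dim=(2, 3))                   # per-channel global statistic
        w = torch.softmax(self.fc(s), dim=1)     # channels compete for weight
        # softmax weights average 1/C, so rescale by C to preserve magnitude
        return x * w.unsqueeze(-1).unsqueeze(-1) * x.size(1)

block = GlobalNormCalibration(64)
print(block(torch.randn(2, 64, 32, 32)).shape)   # torch.Size([2, 64, 32, 32])
```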
89 pages, 16650 KiB  
Review
Video and Audio Deepfake Datasets and Open Issues in Deepfake Technology: Being Ahead of the Curve
by Zahid Akhtar, Thanvi Lahari Pendyala and Virinchi Sai Athmakuri
Forensic Sci. 2024, 4(3), 289-377; https://doi.org/10.3390/forensicsci4030021 - 13 Jul 2024
Cited by 2 | Viewed by 4276
Abstract
The revolutionary breakthroughs in Machine Learning (ML) and Artificial Intelligence (AI) are being harnessed extensively across a diverse range of domains, e.g., forensic science, healthcare, virtual assistants, cybersecurity, and robotics. On the flip side, they can also be exploited for negative purposes, such as producing authentic-looking fake news that propagates misinformation and diminishes public trust. Deepfakes are audio or visual multimedia content that has been artificially synthesized or digitally modified using deep neural networks. Deepfakes can be employed for benign purposes (e.g., refining face pictures for optimal magazine cover quality) or malicious ones (e.g., superimposing faces onto explicit images or videos to harm individuals, or producing fake audio recordings of public figures making inflammatory statements to damage their reputation). With mobile devices and user-friendly audio and visual editing tools at hand, even non-experts can effortlessly craft intricate deepfakes and digitally altered audio and facial features. This presents challenges to contemporary computer forensic tools and human examiners alike, from ordinary individuals to digital forensic investigators. There is a perpetual battle between attackers armed with deepfake generators and defenders utilizing deepfake detectors. This paper first comprehensively reviews existing image, video, and audio deepfake databases with the aim of propelling next-generation deepfake detectors toward enhanced accuracy, generalization, robustness, and explainability. It then delves into open challenges and potential research avenues in audio and video deepfake generation and mitigation. The aspiration for this article is to complement prior studies and assist newcomers, researchers, engineers, and practitioners in gaining a deeper understanding of, and in developing, innovative deepfake technologies.
(This article belongs to the Special Issue Human and Technical Drivers of Cybercrime)
10 pages, 178 KiB  
Article
The Role of Machine Learning in Advanced Biometric Systems
by Milkias Ghilom and Shahram Latifi
Electronics 2024, 13(13), 2667; https://doi.org/10.3390/electronics13132667 - 7 Jul 2024
Viewed by 2048
Abstract
Today, the significance of biometrics is more pronounced than ever in accurately controlling access to valuable resources, from personal devices to highly sensitive buildings and classified information. Researchers are pushing toward robust biometric systems with higher accuracy, fewer false positives and false negatives, and better performance. Machine learning (ML), in turn, has been shown to play a key role in improving such systems: by constantly learning and adapting to users' changing biometric patterns, ML algorithms can improve accuracy and performance over time. The integration of ML algorithms with biometrics, however, introduces vulnerabilities. This article investigates the new concerns that arise from the adoption of ML methods in biometric systems. Specifically, techniques to breach biometric systems, namely data poisoning, model inversion, bias injection, and deepfakes, are discussed. The methodology consisted of a detailed review of the literature in which ML techniques have been adopted in biometrics; the review included all works that successfully applied ML and reported favorable results with sound technical justification, not just improved numbers. Isolated, unsupported, and unjustified claims about the advantages of ML techniques for security were excluded. Beyond brief mentions, encryption/decryption aspects were not covered, and cybersecurity was accordingly excluded from this study. Finally, recommendations are made for building stronger and more secure systems that benefit from ML adoption while closing the door to adversarial attacks.
(This article belongs to the Special Issue Biometric Recognition: Latest Advances and Prospects)
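Among the attack classes the article names, data poisoning lends itself to a compact illustration. The toy sketch below, with an arbitrary synthetic dataset and model (not from the article), flips a fraction of training labels and compares test accuracy before and after:

```python
# Toy label-flipping data poisoning demo (arbitrary dataset/model, for illustration).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# An attacker flips 20% of the training labels.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_tr), size=len(y_tr) // 5, replace=False)
y_pois = y_tr.copy()
y_pois[idx] = 1 - y_pois[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_pois)
print("clean test accuracy:   ", clean.score(X_te, y_te))
print("poisoned test accuracy:", poisoned.score(X_te, y_te))
```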
26 pages, 2681 KiB  
Review
Deepfake Attacks: Generation, Detection, Datasets, Challenges, and Research Directions
by Amal Naitali, Mohammed Ridouani, Fatima Salahdine and Naima Kaabouch
Computers 2023, 12(10), 216; https://doi.org/10.3390/computers12100216 - 23 Oct 2023
Cited by 17 | Viewed by 32410
Abstract
Recent years have seen a substantial increase in interest in deepfakes, a fast-developing field at the nexus of artificial intelligence and multimedia. These artificial media creations, made possible by deep learning algorithms, allow for the manipulation and creation of digital content that is extremely realistic and challenging to distinguish from authentic content. Deepfakes can be used for entertainment, education, and research; however, they pose significant problems across various domains, such as misinformation, political manipulation, propaganda, reputational damage, and fraud. This survey paper provides a general understanding of deepfakes and their creation; it also presents an overview of state-of-the-art detection techniques, existing datasets curated for deepfake research, and associated challenges and future research trends. By synthesizing existing knowledge and research, this survey aims to facilitate further advancements in deepfake detection and mitigation strategies, ultimately fostering a safer and more trustworthy digital environment.
15 pages, 1951 KiB  
Article
Voice Deepfake Detection Using the Self-Supervised Pre-Training Model HuBERT
by Lanting Li, Tianliang Lu, Xingbang Ma, Mengjiao Yuan and Da Wan
Appl. Sci. 2023, 13(14), 8488; https://doi.org/10.3390/app13148488 - 22 Jul 2023
Cited by 4 | Viewed by 5472
Abstract
In recent years, voice deepfake technology has developed rapidly, but current detection methods generalize insufficiently and extract inadequate features from unknown attacks. This paper presents a forged-speech detection method (HuRawNet2_modified) based on a self-supervised pre-trained model (HuBERT) to address these problems. A combination of impulsive signal-dependent additive noise and additive white Gaussian noise was adopted for data augmentation, and the HuBERT model was fine-tuned on different language databases. On this basis, the sizes of the extracted feature maps were modified independently by the α-feature map scaling (α-FMS) method, within a modified end-to-end architecture using RawNet2 as the backbone. The results showed that the HuBERT model extracts features more comprehensively and accurately. The best evaluation indicators were an equal error rate (EER) of 2.89% and a minimum tandem detection cost function (min t-DCF) of 0.2182 on the ASVspoof 2021 LA challenge database, verifying the effectiveness of the proposed detection method. Compared with the baseline systems on the ASVspoof 2021 LA and FMFCC-A databases, both EER and min t-DCF decreased. The results also showed that a fine-tuned self-supervised pre-trained model can extract acoustic features across languages, and that detection improves slightly when the pre-training, fine-tuning, and test databases share the same language.
(This article belongs to the Section Computing and Artificial Intelligence)
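The α-FMS operation mentioned in the abstract is commonly formulated in the RawNet2 literature as adding a learnable per-filter offset before sigmoid-gated scaling, roughly y = (x + α) · s. The sketch below assumes that formulation; layer placement and sizes are arbitrary.

```python
# Sketch of α-feature map scaling (α-FMS) on 1-D feature maps, assuming the
# RawNet2-style formulation y = (x + α) · s with a learnable per-filter α.
import torch
import torch.nn as nn

class AlphaFMS(nn.Module):
    def __init__(self, n_filters: int):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(n_filters, 1))  # learnable offset
        self.fc = nn.Linear(n_filters, n_filters)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, filters, time)
        s = torch.sigmoid(self.fc(x.mean(dim=-1)))   # per-filter scale in (0, 1)
        return (x + self.alpha) * s.unsqueeze(-1)

fms = AlphaFMS(20)
print(fms(torch.randn(4, 20, 64)).shape)             # torch.Size([4, 20, 64])
```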
16 pages, 2659 KiB  
Article
Generalized Spoof Detection and Incremental Algorithm Recognition for Voice Spoofing
by Jinlin Guo, Yancheng Zhao and Haoran Wang
Appl. Sci. 2023, 13(13), 7773; https://doi.org/10.3390/app13137773 - 30 Jun 2023
Cited by 1 | Viewed by 1652
Abstract
Highly deceptive deepfake technologies have caused much controversy; for example, artificial intelligence-based software can automatically generate nude photos and deepfake images of anyone, posing considerable threats to both individuals and society. In addition to video and image forgery, audio forgery poses many hazards yet receives far less attention. Moreover, existing works have focused only on voice spoof detection, neglecting the identification of the spoofing algorithm itself; recognizing which algorithm synthesized a spoofed voice is of great value for traceability. This study presents a system combining voice spoof detection and algorithm recognition. The generalizability of the spoof detection model is discussed from the perspective of embedding space and decision boundaries, so as to handle voice spoofing attacks generated by algorithms absent from the training set. The study also presents an incremental learning method for recognizing voice spoof algorithms, accounting for streaming scenarios in which new spoof algorithms keep appearing in practice. Our experimental results on the LA dataset of ASVspoof show that the system improves the generalization of spoof detection and identifies new voice spoof algorithms without catastrophic forgetting.
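The abstract does not specify the incremental learning scheme, so the following is only a generic rehearsal-based sketch of class-incremental recognition: each new spoof algorithm arrives as a task, and a small exemplar buffer is replayed to limit forgetting. All names and data here are synthetic.

```python
# Generic rehearsal-based class-incremental sketch (synthetic data, not the
# paper's method): each new spoof algorithm is a new class, and a small
# exemplar buffer of earlier classes is replayed to limit forgetting.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
buf_X, buf_y = [], []                                # exemplar memory
clf = SGDClassifier(loss="log_loss", random_state=0)

for label in range(3):                               # task = a new spoof algorithm
    X_new = rng.normal(loc=3.0 * label, size=(200, 8))
    y_new = np.full(200, label)
    X_mix = np.vstack(buf_X + [X_new])               # replay stored exemplars
    y_mix = np.concatenate(buf_y + [y_new])
    clf.partial_fit(X_mix, y_mix, classes=np.arange(3))
    keep = rng.choice(200, size=20, replace=False)   # keep a few exemplars
    buf_X.append(X_new[keep])
    buf_y.append(y_new[keep])

print(clf.predict(rng.normal(loc=0.0, size=(3, 8)))) # early class still recognized
```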
23 pages, 5951 KiB  
Article
DDS: Deepfake Detection System through Collective Intelligence and Deep-Learning Model in Blockchain Environment
by Nakhoon Choi and Heeyoul Kim
Appl. Sci. 2023, 13(4), 2122; https://doi.org/10.3390/app13042122 - 7 Feb 2023
Cited by 5 | Viewed by 3810
Abstract
With the spread of mobile devices and the improvement of the mobile service environment, the use of various Internet content providers (ICPs), including content services such as YouTube and video hosting services, has increased significantly. Video content shared on ICPs is used for information delivery and issue verification owing to its accessibility. However, if content registered and shared on an ICP is manipulated through deepfakes and maliciously distributed to mount political attacks or cause social problems, the negative effects can be severe. This study proposes a deepfake detection system that detects manipulated video content distributed on video hosting services while ensuring the transparency and objectivity of the detecting party. The detection method of the proposed system is configured through a blockchain and does not depend on a single ICP, establishing a cooperative system among multiple ICPs that reach consensus for the common purpose of deepfake detection. In the proposed system, each ICP independently runs its own deep-learning model for detecting deepfakes, and the results are ensembled through integrated voting. Furthermore, this study supplements the objectivity of the integrated voting and the neutrality of the deep-learning models by having ICP users participate in the voting process, ensembling their collective-intelligence votes while maintaining high accuracy. Through the proposed system, the accuracy of the deep-learning models is supplemented by collective intelligence in the blockchain environment, and the creation of a consortium contract environment for common goals between companies with conflicting interests is illuminated.
(This article belongs to the Special Issue Blockchain in Information Security and Privacy)
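The integrated-voting idea can be summarized in a few lines; the weighting between model votes and user votes below is an assumed parameter for illustration, and the on-chain consensus mechanics are omitted:

```python
# Illustrative integrated vote combining per-ICP detector scores with user
# (collective-intelligence) votes; the 0.7 weight is an assumed parameter.
def integrated_vote(model_scores, user_fake_share, model_weight=0.7):
    """model_scores: per-ICP probabilities that the video is fake (0..1).
    user_fake_share: fraction of participating users flagging it as fake."""
    model_part = sum(model_scores) / len(model_scores)
    combined = model_weight * model_part + (1 - model_weight) * user_fake_share
    return combined >= 0.5            # True -> flagged as a deepfake

print(integrated_vote([0.9, 0.8, 0.4], user_fake_share=0.6))  # True
```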
16 pages, 2230 KiB  
Review
Deepfakes Generation and Detection: A Short Survey
by Zahid Akhtar
J. Imaging 2023, 9(1), 18; https://doi.org/10.3390/jimaging9010018 - 13 Jan 2023
Cited by 38 | Viewed by 31964
Abstract
Advancements in deep learning techniques and the availability of free, large databases have made it possible, even for non-technical people, to manipulate or generate realistic facial samples for both benign and malicious purposes. DeepFakes refer to face multimedia content that has been digitally altered or synthetically created using deep neural networks. The paper first outlines the readily available face editing apps and the vulnerability (or performance degradation) of face recognition systems under various face manipulations. Next, this survey presents an overview of the techniques and works carried out in recent years for deepfake and face manipulations. In particular, four kinds of deepfake or face manipulation are reviewed: identity swap, face reenactment, attribute manipulation, and entire face synthesis. For each category, both generation methods and the corresponding detection methods are detailed. Despite significant progress based on traditional and advanced computer vision, artificial intelligence, and physics, an intense arms race continues between attackers/offenders/adversaries (i.e., DeepFake generation methods) and defenders (i.e., DeepFake detection methods). Thus, open challenges and potential research directions are also discussed. This paper is expected to aid readers in comprehending deepfake generation and detection mechanisms, together with open issues and future directions.
14 pages, 1898 KiB  
Article
Deepfake Video Detection Based on MesoNet with Preprocessing Module
by Zhiming Xia, Tong Qiao, Ming Xu, Xiaoshuai Wu, Li Han and Yunzhi Chen
Symmetry 2022, 14(5), 939; https://doi.org/10.3390/sym14050939 - 5 May 2022
Cited by 18 | Viewed by 6796
Abstract
With the development of computer hardware and deep learning, face manipulation videos, typified by Deepfake, have spread widely on social media. From the perspective of symmetry, many forensic methods have been proposed, but their detection performance often drops under compression attacks. To address this robustness issue, this paper proposes a Deepfake video detection method based on MesoNet with a preprocessing module. First, the preprocessing module processes the cropped face images, increasing the discrimination among color channels. Next, the preprocessed images are fed into the classic MesoNet. The detection performance of the proposed method is verified on two datasets: the AUC reaches 0.974 on FaceForensics++ and 0.943 on Celeb-DF, surpassing current methods. More importantly, even under heavy compression, the detection rate remains above 88%.
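The listing does not say what the preprocessing module computes; as a purely illustrative stand-in, the sketch below uses a per-channel high-pass residual, one common way to sharpen differences among color channels before a CNN such as MesoNet:

```python
# Illustrative stand-in for the preprocessing module: a per-channel high-pass
# residual that amplifies inter-channel differences before the CNN.
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(face: np.ndarray) -> np.ndarray:
    # face: (H, W, 3) float image in [0, 1]
    out = np.empty_like(face)
    for c in range(3):                               # each color channel separately
        low = gaussian_filter(face[..., c], sigma=2.0)
        out[..., c] = face[..., c] - low             # keep high-frequency residual
    return out

residual = preprocess(np.random.rand(256, 256, 3))
print(residual.shape)                                # (256, 256, 3)
```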
20 pages, 2246 KiB  
Review
A Review of Modern Audio Deepfake Detection Methods: Challenges and Future Directions
by Zaynab Almutairi and Hebah Elgibreen
Algorithms 2022, 15(5), 155; https://doi.org/10.3390/a15050155 - 4 May 2022
Cited by 49 | Viewed by 27454
Abstract
A number of AI-based tools are used today to clone human voices, leading to a new technology known as Audio Deepfakes (ADs). Despite being introduced to enhance human lives, for example through audiobooks, ADs have been used to threaten public safety. ADs have thus recently come to the attention of researchers, with Machine Learning (ML) and Deep Learning (DL) methods being developed to detect them. This article reviews existing AD detection methods and provides a comparative description of the available faked-audio datasets. It introduces the types of AD attacks and then outlines and analyzes the detection methods and datasets for both imitation- and synthesis-based Deepfakes. To the best of the authors' knowledge, this is the first review targeting imitated and synthetically generated audio detection methods. The similarities and differences of AD detection methods are summarized through a quantitative comparison, which finds that the method type affects performance more than the audio features themselves and that a substantial tradeoff exists between accuracy and scalability. Finally, the article discusses potential research directions and challenges of Deepfake detection methods, concluding that, even though AD detection is an active area of research, further work is needed to address the existing gaps. This article can be a starting point for researchers to understand the current state of the AD literature and to investigate more robust detection models that can detect fakeness even when the target audio contains accented voices or real-world noise.
20 pages, 881 KiB  
Article
Deterring Deepfake Attacks with an Electrical Network Frequency Fingerprints Approach
by Deeraj Nagothu, Ronghua Xu, Yu Chen, Erik Blasch and Alexander Aved
Future Internet 2022, 14(5), 125; https://doi.org/10.3390/fi14050125 - 21 Apr 2022
Cited by 7 | Viewed by 3869
Abstract
With the fast development of Fifth-/Sixth-Generation (5G/6G) communications and the Internet of Video Things (IoVT), a broad range of mega-scale data applications emerge (e.g., all-weather, all-time video). These network-based applications highly depend on reliable, secure, and real-time audio and/or video streams (AVSs), which consequently become a target for attackers. While modern Artificial Intelligence (AI) technology is integrated with many multimedia applications to enhance them, the development of Generative Adversarial Networks (GANs) also leads to deepfake attacks that manipulate audio or video streams to mimic any targeted person. Deepfake attacks are highly disturbing and can mislead the public, raising further challenges in policy, technological, social, and legal terms. Instead of engaging in an endless AI arms race of "fighting fire with fire", where new Deep Learning (DL) algorithms keep making fake AVSs more realistic, this paper proposes a novel approach that detects deepfaked AVS data by leveraging Electrical Network Frequency (ENF) signals embedded in the AVS data as a fingerprint. Under low Signal-to-Noise Ratio (SNR) conditions, Short-Time Fourier Transform (STFT) and Multiple Signal Classification (MUSIC) spectrum estimation techniques are investigated to detect the Instantaneous Frequency (IF) of interest. For reliable authentication, the ENF signal embedded through an artificial power source in a noisy environment is enhanced using a spectral combination technique and a Robust Filtering Algorithm (RFA). The proposed signal estimation workflow is deployed on continuous audio/video input for resilience against frame manipulation attacks, and a Singular Spectrum Analysis (SSA) approach is selected to minimize the false positive rate of signal correlations. Extensive experimental analysis of reliable, edge-based ENF estimation in deepfaked multimedia recordings is provided to support the need for distinguishing artificially altered media content.
(This article belongs to the Special Issue 6G Wireless Channel Measurements and Models: Trends and Challenges)
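A minimal sketch of the ENF extraction step follows: an STFT peak search around the nominal mains frequency on a synthetic signal. The paper's MUSIC estimator, spectral combination, RFA filtering, and SSA correlation stages are beyond this illustration, and the sampling rate and 60 Hz grid are assumptions.

```python
# STFT peak search around a nominal 60 Hz ENF on a synthetic signal
# (sampling rate and grid frequency are assumptions).
import numpy as np
from scipy.signal import stft

fs = 1000                                            # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
audio = 0.01 * np.sin(2 * np.pi * 60.02 * t) + 0.1 * np.random.randn(t.size)

f, frames, Z = stft(audio, fs=fs, nperseg=2 * fs)    # 2 s windows -> 0.5 Hz bins
band = (f >= 59.5) & (f <= 60.5)                     # search near the nominal ENF
enf_trace = f[band][np.abs(Z[band]).argmax(axis=0)]  # peak frequency per frame
print(enf_trace[:5])                                 # hovers around 60 Hz
```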
16 pages, 3951 KiB  
Article
The Framework of Cross-Domain and Model Adversarial Attack against Deepfake
by Haoxuan Qiu, Yanhui Du and Tianliang Lu
Future Internet 2022, 14(2), 46; https://doi.org/10.3390/fi14020046 - 29 Jan 2022
Cited by 2 | Viewed by 3280
Abstract
To protect images from deepfake tampering, adversarial examples can be made to replace the original images, distorting the output of the deepfake model and disrupting its operation. Current studies lack generalizability in that they focus on adversarial examples generated by a single model in a single domain. To improve the generalization of adversarial examples and produce better attack effects on every domain of multiple deepfake models, this paper proposes a framework of Cross-Domain and Model Adversarial Attack (CDMAA). First, CDMAA uniformly weights the loss function of each domain and calculates the cross-domain gradient. Then, inspired by the multiple gradient descent algorithm (MGDA), CDMAA integrates the cross-domain gradients of each model to obtain a cross-domain perturbation vector, which is used to optimize the adversarial example. Finally, we propose a penalty-based gradient regularization method to pre-process the cross-domain gradients and improve the attack success rate. Experiments on four mainstream deepfake models showed that adversarial examples generated by CDMAA can attack multiple models and multiple domains simultaneously. Ablation experiments compared the CDMAA components with methods used in existing studies and verified the superiority of CDMAA.
(This article belongs to the Special Issue Machine Learning Integration with Cyber Security)
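The first CDMAA step, uniformly weighting the per-domain losses before computing a cross-domain gradient, can be sketched as below. The toy generator and domain losses are stand-ins; the MGDA-based multi-model integration and gradient regularization are omitted:

```python
# Sketch of the uniform cross-domain weighting step: per-domain losses are
# averaged before back-propagating to the input (toy generator and losses).
import torch

def cross_domain_gradient(model, x, domain_losses):
    """domain_losses: callables mapping model output to a per-domain loss."""
    x = x.clone().requires_grad_(True)
    out = model(x)
    loss = sum(d(out) for d in domain_losses) / len(domain_losses)  # uniform weights
    loss.backward()
    return x.grad

model = torch.nn.Conv2d(3, 3, 3, padding=1)          # stand-in for a deepfake model
x = torch.rand(1, 3, 64, 64)
domains = [lambda o: -o.mean(), lambda o: -(o ** 2).mean()]  # maximize distortion
print(cross_domain_gradient(model, x, domains).shape)        # torch.Size([1, 3, 64, 64])
```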
14 pages, 2151 KiB  
Article
Deepfake-Image Anti-Forensics with Adversarial Examples Attacks
by Li Fan, Wei Li and Xiaohui Cui
Future Internet 2021, 13(11), 288; https://doi.org/10.3390/fi13110288 - 17 Nov 2021
Cited by 7 | Viewed by 3721
Abstract
Many deepfake-image forensic detectors have been proposed and improved in response to the development of synthesis techniques. However, recent studies show that most of these detectors are not immune to adversarial example attacks, so understanding the impact of adversarial examples on their performance is an important step towards improving deepfake-image detectors. This study developed an anti-forensics case study of two popular general deepfake detectors, examining their accuracy and generalization. We propose Poisson noise DeepFool (PNDF), an improved iterative adversarial example generation method that can simply and effectively attack forensic detectors by adding perturbations to images in different directions. Our attacks reduce a detector's AUC from 0.9999 to 0.0331 and its deepfake-image detection accuracy from 0.9997 to 0.0731. Compared with state-of-the-art studies, our work points to an important defense direction for future research on deepfake-image detectors: the generalization performance of detectors and their resistance to adversarial example attacks.
(This article belongs to the Special Issue Machine Learning Integration with Cyber Security)
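The abstract's description of PNDF suggests a DeepFool-style iteration whose steps are perturbed with Poisson noise. The sketch below is a loose interpretation of that idea for a binary-logit detector, not the authors' implementation; the detector here is a random stand-in.

```python
# Loose interpretation of PNDF for a binary-logit detector: a DeepFool-style
# minimal step toward the decision boundary, perturbed with Poisson noise.
import torch

def pndf_attack(detector, image, steps=10, overshoot=0.02):
    x = image.clone()
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        score = detector(x)                    # scalar logit: > 0 means "fake"
        if score.item() < 0:                   # detector already fooled
            break
        score.backward()
        g = x.grad
        r = (score.abs() / (g.norm() ** 2 + 1e-12)) * g      # minimal boundary step
        noise = 1e-3 * torch.poisson(torch.full_like(x, 0.5))
        x = x - (1 + overshoot) * r + noise
    return x.detach()

net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 1))
adv = pndf_attack(lambda t: net(t).squeeze(), torch.rand(1, 3, 32, 32))
print(adv.shape)                               # torch.Size([1, 3, 32, 32])
```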
17 pages, 4880 KiB  
Article
Fighting Deepfakes by Detecting GAN DCT Anomalies
by Oliver Giudice, Luca Guarnera and Sebastiano Battiato
J. Imaging 2021, 7(8), 128; https://doi.org/10.3390/jimaging7080128 - 30 Jul 2021
Cited by 62 | Viewed by 6656
Abstract
To properly counter the Deepfake phenomenon, new Deepfake detection algorithms must be designed; the misuse of this formidable A.I. technology brings serious consequences to the private lives of the people involved. The state of the art is rich in solutions that use deep neural networks to detect fake multimedia content, but unfortunately these algorithms appear to be neither generalizable nor explainable. However, traces left by Generative Adversarial Network (GAN) engines during the creation of Deepfakes can be detected by analyzing ad hoc frequencies. For this reason, this paper proposes a new pipeline able to detect the so-called GAN Specific Frequencies (GSF), which represent a unique fingerprint of the different generative architectures. Anomalous frequencies were detected by employing the Discrete Cosine Transform (DCT), and the β statistics inferred from the distribution of the AC coefficients proved key to recognizing GAN-generated data. Robustness tests were also carried out to demonstrate the effectiveness of the technique under different image attacks, such as JPEG compression, mirroring, rotation, scaling, and the addition of randomly sized rectangles. Experiments demonstrated that the method is innovative, exceeds the state of the art, and offers many insights in terms of explainability.
(This article belongs to the Special Issue Image and Video Forensics)
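The β-statistics idea can be sketched directly: take block DCTs, collect the distribution of each AC coefficient across blocks, and fit a generalized Gaussian whose shape parameter β serves as the feature. The 8×8 block size and the maximum-likelihood fit via scipy's gennorm are assumptions:

```python
# Block-DCT β-statistics sketch: fit a generalized Gaussian to each AC
# coefficient's distribution across 8x8 blocks (block size and fit are assumptions).
import numpy as np
from scipy.fft import dctn
from scipy.stats import gennorm

def ac_betas(image: np.ndarray, block: int = 8) -> np.ndarray:
    h, w = [s - s % block for s in image.shape]
    tiles = image[:h, :w].reshape(h // block, block, w // block, block)
    tiles = tiles.transpose(0, 2, 1, 3)                    # (rows, cols, 8, 8)
    coefs = dctn(tiles, axes=(-2, -1), norm="ortho")
    coefs = coefs.reshape(-1, block * block)               # one row per block
    # skip index 0 (the DC term); gennorm.fit returns (beta, loc, scale)
    return np.array([gennorm.fit(coefs[:, k])[0] for k in range(1, block * block)])

betas = ac_betas(np.random.rand(64, 64))
print(betas.shape)                                         # (63,)
```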