DOI: 10.1145/3658664.3659652

Did You Note My Palette? Unveiling Synthetic Images Through Color Statistics

Published: 24 June 2024

Abstract

High-quality artificially generated images are now widely available and increasingly realistic, making it challenging for image forensics to distinguish them from real ones. Unfortunately, building a single detector that generalizes well to unseen generators is very difficult, creating the need for diverse cues. In this paper, we show that natural and synthetic images differ in their color statistics, possibly due to the widely used perceptual loss, which is more sensitive to brightness than to chroma differences. Consequently, color statistics offer valuable cues for forensic analysis and the development of robust detectors. In our experiments, simple hand-crafted color statistics fed to a random forest achieve 91% accuracy averaged over all tested diffusion models, even with limited training samples.
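As a concrete illustration of the kind of pipeline the abstract describes, the sketch below extracts a handful of hand-crafted color statistics and trains a random forest on them. The specific features (per-channel means and standard deviations in RGB and HSV), the hyperparameters, and the helper names are illustrative assumptions, not the authors' exact design.

# A minimal sketch, assuming simple color statistics as features;
# this is not the authors' implementation.
import numpy as np
from PIL import Image
from sklearn.ensemble import RandomForestClassifier

def color_statistics(path):
    # Load the image in two color representations; HSV separates
    # chroma (saturation) from brightness (value), the kind of
    # distinction the paper argues matters.
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=np.float32) / 255.0
    feats = []
    for img in (rgb, hsv):
        feats.extend(img.mean(axis=(0, 1)))  # per-channel means
        feats.extend(img.std(axis=(0, 1)))   # per-channel standard deviations
    return np.array(feats)                   # 12-dimensional feature vector

def train_detector(real_paths, fake_paths):
    # Label real images 0 and synthetic images 1, then fit a random forest.
    X = np.stack([color_statistics(p) for p in real_paths + fake_paths])
    y = np.array([0] * len(real_paths) + [1] * len(fake_paths))
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, y)
    return clf

# Usage (paths are placeholders):
# clf = train_detector(real_paths, fake_paths)
# clf.predict([color_statistics("suspect.png")])

Because the feature vector is tiny, a classifier of this kind can be trained from relatively few labeled images, which is consistent with the limited-training-sample setting the abstract reports.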



    Published In

    IH&MMSec '24: Proceedings of the 2024 ACM Workshop on Information Hiding and Multimedia Security
    June 2024
    305 pages
    ISBN:9798400706370
    DOI:10.1145/3658664

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Badges

    • Best Paper

    Author Tags

    1. color spaces
    2. diffusion models
    3. image forensics
    4. synthetic image detection

    Qualifiers

    • Short-paper

    Funding Sources

    • Research and Training Group 2475 Cybercrime and Forensic Computing (grant number 393541319/GRK2475/2-2024)

    Conference

IH&MMSec '24

    Acceptance Rates

    Overall Acceptance Rate 128 of 318 submissions, 40%

