Research Article

An invisible, robust copyright protection method for DNN-generated content

Published: 24 July 2024

Abstract

The wide deployment of deep neural network (DNN) based applications (e.g., style transfer, cartoonization) has stimulated the need for copyright protection of such applications' output. Although traditional visible copyright techniques exist, they often introduce undesired artifacts and compromise the aesthetic quality of the images. In this paper, we propose a novel invisible, robust copyright protection method composed of two networks: a copyright encoder and a copyright decoder. The encoder, driven by both the input image and the copyright information, projects the copyright information into an invisible perturbation, which is added to the image to yield the encoded image. The copyright decoder extracts the copyright information from encoded images. Moreover, a robustness module is integrated to strengthen the decoder against the various distortions images encounter on social media platforms. Furthermore, the loss function is carefully designed over both feature space and color space to guarantee the quality of the encoded images and the decoded copyright information. Extensive objective and subjective experiments validate the effectiveness of the proposed method. Additionally, a physical test is conducted by posting encoded images to social media platforms (e.g., Weibo and Twitter) and downloading them again, verifying the feasibility of the proposed method in practice.
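The encode/decode pipeline described above can be illustrated with a minimal NumPy sketch. To stay self-contained, this sketch substitutes a classical spread-spectrum embedding for the paper's learned encoder and decoder networks, and its decoder is non-blind (it needs the original image), unlike the learned decoder in the paper. All function names and parameters here (`encode`, `decode`, `strength`, `seed`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def carriers(shape, n_bits, seed=0):
    """One unit-norm pseudorandom carrier pattern per copyright bit."""
    rng = np.random.default_rng(seed)
    c = rng.standard_normal((n_bits,) + shape)
    norms = np.linalg.norm(c.reshape(n_bits, -1), axis=1)
    return c / norms.reshape((n_bits,) + (1,) * len(shape))

def encode(image, bits, strength=0.05, seed=0):
    """Add an invisible perturbation carrying `bits` to `image` (values in [0, 1])."""
    C = carriers(image.shape, len(bits), seed)
    signs = np.where(np.asarray(bits) > 0, 1.0, -1.0)
    perturbation = strength * np.tensordot(signs, C, axes=1)  # signed sum of carriers
    return np.clip(image + perturbation, 0.0, 1.0)

def decode(encoded, original, n_bits, seed=0):
    """Recover the bits by correlating the residual against each carrier."""
    C = carriers(encoded.shape, n_bits, seed)
    residual = encoded - original
    scores = np.tensordot(C, residual, axes=residual.ndim)  # one correlation per bit
    return (scores > 0).astype(int)
```

Because the random carriers are nearly orthogonal, each bit's correlation score is dominated by its own carrier, so the sign test recovers the message even under mild noise; the learned networks in the paper play the analogous roles of perturbation generator and correlator, but blindly and with trained robustness to social-media distortions.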



Published In

Neural Networks, Volume 177, Issue C, Sep 2024, 298 pages

Publisher: Elsevier Science Ltd., United Kingdom


Author Tags

  1. Copyright protection
  2. Invisible perturbation
  3. Robustness technique
