
Learning Differentially Private Diffusion Models via Stochastic Adversarial Distillation

  • Conference paper

Computer Vision – ECCV 2024 (ECCV 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15065)

Abstract

While the success of deep learning relies on large amounts of training data, data is often scarce in privacy-sensitive domains. To address this challenge, learning generative models under differential privacy has emerged as a way to train private generative models for desensitized data generation. However, the quality of images generated by existing methods is limited by the difficulty of modeling complex data distributions. Building on the success of diffusion models, we introduce DP-SAD, which trains a private diffusion model via stochastic adversarial distillation. Specifically, we first train a diffusion model as a teacher and then train a student by distillation, achieving differential privacy by adding noise to the gradients that flow from other models into the student. To improve generation quality, we introduce a discriminator that distinguishes whether an image comes from the teacher or the student, forming an adversarial training scheme. Extensive experiments and analysis clearly demonstrate the effectiveness of the proposed method.
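The privacy mechanism the abstract alludes to, adding calibrated noise to the gradients that reach the student, is in spirit the gradient sanitizer of DP-SGD (Abadi et al., 2016): clip each per-sample gradient to a fixed L2 norm, aggregate, and add Gaussian noise. Below is a minimal, framework-free sketch of that sanitizing step; the function name and flat-list gradients are illustrative only, and the paper's actual procedure, which operates on student gradients during adversarial distillation, may differ in detail.

```python
import math
import random

def sanitize_gradients(per_sample_grads, clip_norm, noise_multiplier, rng=None):
    """DP-SGD-style gradient sanitizer (a sketch, not the paper's exact code).

    Each per-sample gradient is clipped to L2 norm <= clip_norm, the clipped
    gradients are summed, Gaussian noise with standard deviation
    noise_multiplier * clip_norm is added, and the result is averaged.
    """
    rng = rng or random.Random(0)  # fixed seed for a reproducible sketch
    dim = len(per_sample_grads[0])
    summed = [0.0] * dim
    for g in per_sample_grads:
        norm = math.sqrt(sum(x * x for x in g))
        # Scale down any gradient whose norm exceeds the clipping bound.
        scale = min(1.0, clip_norm / (norm + 1e-12))
        for i, x in enumerate(g):
            summed[i] += x * scale
    # Noise scale is calibrated to the clipping bound (the sensitivity).
    sigma = noise_multiplier * clip_norm
    noisy = [s + rng.gauss(0.0, sigma) for s in summed]
    # Average over the batch, as in DP-SGD.
    n = len(per_sample_grads)
    return [x / n for x in noisy]
```

With `noise_multiplier = 0` the output is simply the clipped average; in practice the multiplier is calibrated to a target (ε, δ) budget with a privacy accountant such as Rényi DP.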


Acknowledgements

This work was partially supported by grants from the Pioneer R&D Program of Zhejiang Province (2024C01024), and Open Research Project of the State Key Laboratory of Media Convergence and Communication, Communication University of China (SKLMCC2022KF004).

Author information

Corresponding author

Correspondence to Shiming Ge.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1009 KB)

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Liu, B., Wang, P., Ge, S. (2025). Learning Differentially Private Diffusion Models via Stochastic Adversarial Distillation. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15065. Springer, Cham. https://doi.org/10.1007/978-3-031-72667-5_4

  • DOI: https://doi.org/10.1007/978-3-031-72667-5_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72666-8

  • Online ISBN: 978-3-031-72667-5

  • eBook Packages: Computer Science, Computer Science (R0)
