Abstract
While the success of deep learning relies on large amounts of training data, data is often scarce in privacy-sensitive domains. To address this challenge, differentially private generative model learning has emerged as a way to train generative models that produce desensitized synthetic data. However, the quality of images generated by existing methods is limited by the difficulty of modeling complex data distributions. Building on the success of diffusion models, we introduce DP-SAD, which trains a private diffusion model via stochastic adversarial distillation. Specifically, we first train a diffusion model as a teacher and then train a student by distillation, achieving differential privacy by adding noise to the gradients that flow from other models into the student. To improve generation quality, we introduce a discriminator that distinguishes whether an image comes from the teacher or the student, forming an adversarial training scheme. Extensive experiments and analysis demonstrate the effectiveness of the proposed method.
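The privacy mechanism sketched in the abstract, adding calibrated noise to the gradients before they reach the student, follows the standard DP-SGD recipe of per-example clipping plus Gaussian noise. The following is a minimal illustrative sketch of that step only; the function name and parameters are ours, not from the paper, and the full method additionally involves the teacher, student, and discriminator models.

```python
import numpy as np

def dp_noisy_gradient(per_example_grads, clip_norm=1.0,
                      noise_multiplier=1.0, rng=None):
    """Clip each example's gradient to clip_norm, sum, add Gaussian
    noise scaled to the clipping bound, and average. This is the
    DP-SGD mechanism (Abadi et al.) that DP-SAD-style training applies
    to gradients flowing into the student."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise standard deviation is calibrated to the sensitivity (clip_norm).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

With `noise_multiplier=0` the function reduces to ordinary averaging of clipped gradients, which makes the clipping behavior easy to verify in isolation.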
Acknowledgements
This work was partially supported by grants from the Pioneer R&D Program of Zhejiang Province (2024C01024) and the Open Research Project of the State Key Laboratory of Media Convergence and Communication, Communication University of China (SKLMCC2022KF004).
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Liu, B., Wang, P., Ge, S. (2025). Learning Differentially Private Diffusion Models via Stochastic Adversarial Distillation. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15065. Springer, Cham. https://doi.org/10.1007/978-3-031-72667-5_4
Print ISBN: 978-3-031-72666-8
Online ISBN: 978-3-031-72667-5