
Differential Privacy in Federated Dynamic Gradient Clipping Based on Gradient Norm

  • Conference paper
Algorithms and Architectures for Parallel Processing (ICA3PP 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14490)


Abstract

Federated learning achieves privacy preservation by adding noise to gradients. Gradients must be clipped so that excessive noise does not significantly degrade model accuracy. However, an imprecise clipping threshold changes the amount of gradient noise and thus degrades model accuracy. In this paper, to reduce the impact of gradient noise on model accuracy, we propose DP-FedDGCN, a differentially private federated learning method with dynamic gradient clipping based on the gradient norm. DP-FedDGCN dynamically generates a clipping threshold to clip the gradients, reducing the impact of the amount of gradient noise on model accuracy and achieving a trade-off between data protection and model accuracy. Experimental results show that the attack accuracy remains consistent under MIA, ML-Leaks, and white-box inference attacks with Dirichlet distribution parameter \(\alpha = 1\). Meanwhile, the average test accuracy outperforms the DP-FedAvg, DP-FedAGNC, and DP-FedDDC methods by about 2.46%, 1.07%, and 1.09%, respectively.
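The paper's exact DP-FedDGCN algorithm is not reproduced on this page. As an illustrative sketch of the general idea it describes (a clipping threshold derived dynamically from observed gradient norms, with Gaussian noise calibrated to that threshold before server-side averaging, in the style of DP-SGD), one might write something like the following; the median-norm threshold and the `noise_multiplier` parameter are assumptions for illustration, not the authors' method:

```python
import numpy as np

def dp_aggregate(client_grads, noise_multiplier=1.0, rng=None):
    """Clip each client's gradient to a dynamically chosen threshold
    (here: the median of the observed gradient norms, an illustrative
    choice) and add Gaussian noise scaled to that threshold before
    averaging. Returns the noisy aggregate and the threshold used."""
    rng = rng or np.random.default_rng(0)
    grads = [np.asarray(g, dtype=float) for g in client_grads]
    norms = [np.linalg.norm(g) for g in grads]
    clip = float(np.median(norms))  # dynamic threshold from observed norms
    # Scale down any gradient whose norm exceeds the threshold
    clipped = [g * min(1.0, clip / max(n, 1e-12)) for g, n in zip(grads, norms)]
    avg = np.mean(clipped, axis=0)
    # Gaussian noise with standard deviation proportional to the threshold;
    # sensitivity of the average of k clipped gradients is clip / k
    noise = rng.normal(0.0, noise_multiplier * clip / len(grads), size=avg.shape)
    return avg + noise, clip
```

Because the threshold tracks the current gradient norms rather than being fixed in advance, the noise magnitude adapts across training rounds, which is the trade-off between protection and accuracy that the abstract refers to.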



Acknowledgment

This work is supported by the Key Research and Development Program of China (Grant No. 2022YFC3005401), the Key Research and Development Project of Jiangsu Province of China (No. BE2020729), and the Science and Technology Achievement Transformation Program of Jiangsu Province of China (No. BA2021002).

Author information

Corresponding author

Correspondence to Yingchi Mao.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Mao, Y., Li, C., Wang, Z., Tu, Z., Ping, P. (2024). Differential Privacy in Federated Dynamic Gradient Clipping Based on Gradient Norm. In: Tari, Z., Li, K., Wu, H. (eds) Algorithms and Architectures for Parallel Processing. ICA3PP 2023. Lecture Notes in Computer Science, vol 14490. Springer, Singapore. https://doi.org/10.1007/978-981-97-0859-8_2


  • DOI: https://doi.org/10.1007/978-981-97-0859-8_2

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-0858-1

  • Online ISBN: 978-981-97-0859-8

  • eBook Packages: Computer Science; Computer Science (R0)
