DOI: 10.1145/3626232.3653279
Research Article

Towards Accurate and Stronger Local Differential Privacy for Federated Learning with Staircase Randomized Response

Published: 19 June 2024

Abstract

Federated Learning (FL), a privacy-preserving training approach, has proven effective, yet its vulnerability to attacks that extract information from model weights is widely recognized. To address such privacy concerns, Local Differential Privacy (LDP) has been applied to FL by having each client perturb the weights of its locally trained model. However, besides the high utility loss incurred by randomizing model weights, we identify a new inference attack against existing LDP methods that can reconstruct the original value from the noisy values with high confidence. To mitigate these issues, we propose the Staircase Randomized Response (SRR)-FL framework, which assigns higher probabilities to perturbed weights closer to the true weight, reducing the expected distance between the true and perturbed data. This minimizes the noise required to maintain the same LDP guarantee, leading to better utility. Compared to existing LDP mechanisms for FL (e.g., Generalized Randomized Response), SRR-FL provides a more accurate privacy-preserving training model and enhances robustness against the inference attack while ensuring the same LDP guarantee. Furthermore, we also use the parameter shuffling method for privacy amplification. The efficacy of SRR-FL has been validated on the widely used MNIST, Medical-MNIST, and CIFAR-10 datasets, demonstrating strong performance. Code is available at https://github.com/matta-varun/SRR-FL.
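
To make the mechanism concrete, the following is a minimal sketch of a staircase-style randomized response for a single bounded model weight. It is an illustration of the general idea, not the authors' exact SRR-FL construction: the weight range is discretized into m bins, bins are grouped by circular distance from the true bin, and closer groups receive geometrically larger reporting probabilities, with the largest-to-smallest ratio capped at e^epsilon so epsilon-LDP holds. The parameters lo, hi, m, and G, and the shuffle_across_clients helper (a simplified stand-in for the parameter shuffling step), are illustrative assumptions.

import numpy as np

def srr_perturb(w, lo=-1.0, hi=1.0, m=32, G=4, eps=1.0, rng=np.random):
    """Perturb one weight w in [lo, hi] with a staircase randomized response."""
    assert m % G == 0 and G >= 2
    # Map the true weight to one of m bins over the assumed range [lo, hi].
    v = min(int((w - lo) / (hi - lo) * m), m - 1)
    # Circular distance keeps every distance group the same size (m/G bins)
    # regardless of v, so the normalizing constant is identical for all inputs.
    idx = np.arange(m)
    d = np.minimum(np.abs(idx - v), m - np.abs(idx - v))
    group = np.argsort(np.argsort(d)) // (m // G)   # 0 (closest) .. G-1 (farthest)
    probs = np.exp(-eps * group / (G - 1))          # staircase decay across groups
    probs /= probs.sum()                            # max/min probability ratio = e^eps
    y = rng.choice(m, p=probs)                      # sample the reported bin
    return lo + (y + 0.5) * (hi - lo) / m           # midpoint of the reported bin

def shuffle_across_clients(reports, rng=np.random):
    # Simplified stand-in for privacy amplification by shuffling: permute the
    # pooled (param_index, noisy_value) reports from all clients so the server
    # cannot link any single report back to its sender.
    order = rng.permutation(len(reports))
    return [reports[i] for i in order]

Because each distance group always contains exactly m/G bins, the normalizing constant is the same for every true bin, and the ratio of output probabilities under any two inputs is at most e^epsilon, which is exactly the epsilon-LDP requirement. A full implementation would additionally clip weights into [lo, hi] and tune m and G per layer.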

Published In

CODASPY '24: Proceedings of the Fourteenth ACM Conference on Data and Application Security and Privacy
June 2024
429 pages
ISBN: 9798400704215
DOI: 10.1145/3626232
General Chair: João P. Vilela
Program Chairs: Haya Schulmann, Ninghui Li
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery
New York, NY, United States

Author Tags

  1. client-level LDP
  2. federated learning
  3. local differential privacy

Qualifiers

  • Research-article

Conference

CODASPY '24

Acceptance Rates

Overall acceptance rate: 149 of 789 submissions (19%)

