DOI: 10.1145/3579856.3582819 · ASIA CCS '23 Conference Proceedings · Research article · Open access

RecUP-FL: Reconciling Utility and Privacy in Federated learning via User-configurable Privacy Defense

Published: 10 July 2023

Abstract

Federated learning (FL) provides a variety of privacy advantages by allowing clients to collaboratively train a model without sharing their private data. However, recent studies have shown that private information can still be leaked through shared gradients. To further minimize the risk of privacy leakage, existing defenses usually require clients to locally modify their gradients (e.g., differential privacy) prior to sharing with the server. While these approaches are effective in certain cases, they regard the entire data as a single entity to protect, which usually comes at a large cost in model utility. In this paper, we seek to reconcile utility and privacy in FL by proposing a user-configurable privacy defense, RecUP-FL, that can better focus on the user-specified sensitive attributes while obtaining significant improvements in utility over traditional defenses. Moreover, we observe that existing inference attacks often rely on a machine learning model to extract the private information (e.g., attributes). We thus formulate such a privacy defense as an adversarial learning problem, where RecUP-FL generates slight perturbations that can be added to the gradients before sharing to fool adversary models. To improve the transferability to un-queryable black-box adversary models, inspired by the idea of meta-learning, RecUP-FL forms a model zoo containing a set of substitute models and iteratively alternates between simulations of the white-box and the black-box adversarial attack scenarios to generate perturbations. Extensive experiments on four datasets under various adversarial settings (both attribute inference attack and data reconstruction attack) show that RecUP-FL can meet user-specified privacy constraints over the sensitive attributes while significantly improving the model utility compared with state-of-the-art privacy defenses.
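The core mechanism the abstract describes, perturbing a gradient before sharing so that an attribute-inference adversary becomes less confident, can be sketched in minimal form. The linear softmax adversary, the FGSM-style sign step, and the `eps` budget below are illustrative assumptions, not RecUP-FL's actual implementation (which additionally optimizes against a zoo of substitute models with meta-learning-style alternation to transfer to black-box adversaries):

```python
import numpy as np

def perturb_gradient(grad, adversary_w, eps=0.05):
    """Defense sketch: nudge the gradient a client is about to share so
    that a linear attribute-inference adversary (logits = adversary_w @
    grad) loses confidence in the sensitive attribute, while keeping the
    distortion within an L-inf budget eps to limit the utility cost."""
    logits = adversary_w @ grad
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    pred = int(probs.argmax())
    # Gradient of the adversary's log-confidence in its current guess,
    # taken w.r.t. the shared gradient; stepping against its sign
    # reduces that confidence to first order.
    d_conf = adversary_w[pred] - probs @ adversary_w
    return grad - eps * np.sign(d_conf)
```

In the paper's threat model the adversary is a black-box model the client cannot query, which is why a single known `adversary_w` as above would not suffice and the substitute-model zoo is needed.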


      Published In

      ASIA CCS '23: Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security
      July 2023
      1066 pages
      ISBN: 9798400700989
      DOI: 10.1145/3579856

      Publisher

      Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. federated learning
      2. meta-learning
      3. privacy defense
      4. user-configurable

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Acceptance Rates

      Overall Acceptance Rate 418 of 2,322 submissions, 18%


      Article Metrics

      • Total Citations: 0
      • Total Downloads: 263
      • Downloads (last 12 months): 214
      • Downloads (last 6 weeks): 32

      Reflects downloads up to 10 Nov 2024
