DOI: 10.1145/3474369.3486876

Differential Privacy Defenses and Sampling Attacks for Membership Inference

Published: 15 November 2021

Abstract

Machine learning models are commonly trained on sensitive and personal data such as pictures, medical records, and financial records. A serious breach of the privacy of this training set occurs when an adversary can decide whether or not a specific data point in her possession was used to train the model. While all previous membership inference attacks rely on access to the model's posterior probabilities, we present the first attack that relies only on the predicted class label, yet achieves a high success rate.
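To make the threat model concrete, here is a minimal, hypothetical sketch of a label-only membership inference test of the kind the abstract describes: the adversary repeatedly perturbs her data point, queries the target model for hard labels only, and treats an unusually noise-robust prediction as evidence that the point was a training member. Every name in the sketch (target_model, predict_label, sigma, n_queries, the 0.9 threshold) is an illustrative assumption, not the paper's actual method.

```python
import numpy as np

def label_only_membership_score(target_model, x, y_true,
                                n_queries=100, sigma=0.05):
    """Fraction of randomly perturbed copies of x that keep the label
    y_true. Only hard labels are observed -- no posterior probabilities.
    target_model.predict_label is an assumed black-box interface."""
    agree = 0
    for _ in range(n_queries):
        x_noisy = x + np.random.normal(0.0, sigma, size=x.shape)
        if target_model.predict_label(x_noisy) == y_true:
            agree += 1
    return agree / n_queries

def infer_membership(score, threshold=0.9):
    # Declare "member" when the predicted label survives most
    # perturbations; in practice the threshold would be calibrated,
    # e.g., on shadow models trained on similar data.
    return score >= threshold

# Usage (hypothetical): score = label_only_membership_score(model, x, y)
#                       is_member = infer_membership(score)
```

The intuition behind such a test is that training points tend to sit farther from the decision boundary than unseen points, so their predicted labels survive more input noise.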

      Published In

      AISec '21: Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security
      November 2021
      210 pages
ISBN: 9781450386579
DOI: 10.1145/3474369

      Publisher

Association for Computing Machinery, New York, NY, United States

      Author Tags

      1. deep learning
      2. membership inference attacks
      3. privacy-preserving machine learning

      Qualifiers

      • Research-article

      Conference

      CCS '21

      Acceptance Rates

      Overall Acceptance Rate 94 of 231 submissions, 41%
