
Enhanced Mixup Training: a Defense Method Against Membership Inference Attack

  • Conference paper
  • First Online:
Information Security Practice and Experience (ISPEC 2021)

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 13107)

Abstract

Membership inference attacks (MIAs) pose a serious threat to user privacy. In general, an MIA uses model-based or metric-based inference to determine whether a particular data sample belongs to the training dataset of the target model. While several defenses have been proposed to mitigate this privacy risk, they still suffer from two limitations: 1) current defenses counter only model-based attacks and remain vulnerable to other types of attacks; 2) they make the impractical assumption that the defender already knows the adversary's attack strategy. In this paper, we present Enhanced Mixup Training (EMT) as a defense against MIAs. Specifically, EMT benefits from recursive mixup training, which mixes training data using the devised Enhanced Mix Item during the training process. Compared with existing defenses, EMT fundamentally improves the accuracy and generalization of the target model, and hence effectively reduces the risk of MIAs. We prove theoretically that EMT corresponds to a specific type of data-adaptive regularization that leads to better generalization. Moreover, our defense is adaptive and does not require knowing how adversaries launch their attacks. Experimental results on the Location30, Purchase100, and Texas100 datasets show that EMT successfully mitigates both model-based and metric-based attacks without accuracy loss.
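EMT builds on standard mixup (Zhang et al., ICLR 2018), which trains on convex combinations of random example pairs and their labels rather than on raw samples. The following is a minimal NumPy sketch of plain mixup on a toy batch; the helper name `mixup_batch` and the toy data are illustrative assumptions, and the paper's Enhanced Mix Item and recursive mixing scheme are not reproduced here.

```python
import numpy as np

def mixup_batch(x, y, alpha=1.0, rng=None):
    """Standard mixup: blend each example with a randomly chosen
    partner using a Beta(alpha, alpha) mixing coefficient."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)        # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))      # random partner for each example
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]  # soft labels from one-hot labels
    return x_mix, y_mix

# Toy batch: 4 examples, 3 features, 2 one-hot classes.
x = np.arange(12, dtype=float).reshape(4, 3)
y = np.eye(2)[[0, 1, 0, 1]]
x_mix, y_mix = mixup_batch(x, y, alpha=0.4, rng=np.random.default_rng(0))
```

Because each mixed example is a convex combination, the soft labels still sum to one and the mixed features stay inside the range of the original batch; the regularizing effect of training on such combinations is what the paper's theoretical analysis connects to improved generalization.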



Acknowledgment

This work is supported by the National Natural Science Foundation of China under Grants 62020106013, 61972454, 61802051, 61772121, and 61728102, Sichuan Science and Technology Program under Grants 2020JDTD0007 and 2020YFG0298, the Fundamental Research Funds for Chinese Central Universities under Grant ZYGX2020ZB027.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Hongwei Li.

Editor information

Editors and Affiliations

Rights and permissions

Reprints and permissions

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Chen, Z., Li, H., Hao, M., Xu, G. (2021). Enhanced Mixup Training: a Defense Method Against Membership Inference Attack. In: Deng, R., et al. Information Security Practice and Experience. ISPEC 2021. Lecture Notes in Computer Science(), vol 13107. Springer, Cham. https://doi.org/10.1007/978-3-030-93206-0_3

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-93206-0_3

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-93205-3

  • Online ISBN: 978-3-030-93206-0

  • eBook Packages: Computer Science (R0)
