GradDiff: Gradient-based membership inference attacks against federated distillation with differential comparison

Published: 12 April 2024

Abstract

Membership inference attacks (MIAs) pose a serious threat to federated learning (FL) and to its extension, federated distillation (FD). However, existing research on MIAs against FD remains limited. In this paper, we propose a novel membership inference attack named GradDiff, a passive gradient-based MIA that employs differential comparison. Additionally, to fully exploit the federated training process, we design the gradient drift attack (GradDrift), an active variant of GradDiff in which the attacker modifies the target model through gradient tuning and thereby extracts more membership information. We conduct extensive experiments on three real-world datasets to evaluate the effectiveness of the proposed attacks. The results show that both attacks outperform existing baseline methods in terms of precision and recall. We also perform a thorough investigation of the factors that may influence the performance of MIAs against FD.
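The general intuition behind gradient-based MIAs with differential comparison can be illustrated with a toy sketch. This is not the paper's actual GradDiff algorithm; it is a minimal, hypothetical example assuming a logistic-regression target model, where a sample whose loss gradient is markedly smaller than that of known non-members is flagged as a training-set member (models tend to fit members, shrinking their gradients).

```python
import numpy as np

def per_sample_grad_norm(w, x, y):
    """Norm of the logistic-loss gradient for one sample: (sigmoid(w.x) - y) * x."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return np.linalg.norm((p - y) * x)

def infer_membership(w, x, y, nonmember_xs, nonmember_ys):
    """Differential comparison (toy version): flag the target sample as a
    member if its gradient norm is strictly below the mean gradient norm
    of a reference set of known non-members."""
    target = per_sample_grad_norm(w, x, y)
    ref = np.mean([per_sample_grad_norm(w, xr, yr)
                   for xr, yr in zip(nonmember_xs, nonmember_ys)])
    return target < ref

# Toy demo: a model that fits x=[1,0] well (small gradient) but not x=[-1,0].
w = np.array([10.0, 0.0])
member = np.array([1.0, 0.0])      # well-fit sample, near-zero gradient
nonmember = np.array([-1.0, 0.0])  # poorly-fit sample, large gradient
is_member = infer_membership(w, member, 1, [nonmember], [1])
```

Here `per_sample_grad_norm` and `infer_membership` are illustrative names, and the threshold (the mean non-member gradient norm) is a deliberately simple stand-in for the paper's differential-comparison procedure.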


Published In

Information Sciences: an International Journal, Volume 658, Issue C, February 2024, 737 pages

Publisher

Elsevier Science Inc.

United States


Author Tags

  1. Membership inference attack
  2. Federated distillation
  3. Membership privacy
  4. Privacy leakage

Qualifiers

  • Research-article

