GradDiff: Gradient-based membership inference attacks against federated distillation with differential comparison
Published In
Publisher: Elsevier Science Inc., United States
Qualifiers: Research-article