- research-article, October 2024
Breaking State-of-the-Art Poisoning Defenses to Federated Learning: An Optimization-Based Attack Framework
CIKM '24: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, Pages 2930–2939. https://doi.org/10.1145/3627673.3679566
Federated Learning (FL) is a novel client-server distributed learning framework that can protect data privacy. However, recent works show that FL is vulnerable to poisoning attacks. Many defenses with robust aggregators (AGRs) have been proposed to mitigate ...
- survey, October 2024
Manipulating Recommender Systems: A Survey of Poisoning Attacks and Countermeasures
- Thanh Toan Nguyen,
- Nguyen Quoc Viet Hung,
- Thanh Tam Nguyen,
- Thanh Trung Huynh,
- Thanh Thi Nguyen,
- Matthias Weidlich,
- Hongzhi Yin
ACM Computing Surveys (CSUR), Volume 57, Issue 1, Article No.: 3, Pages 1–39. https://doi.org/10.1145/3677328
Recommender systems have become an integral part of online services due to their ability to help users locate specific information in a sea of data. However, existing studies show that some recommender systems are vulnerable to poisoning attacks, ...
- research-article, August 2024
Unveiling Vulnerabilities of Contrastive Recommender Systems to Poisoning Attacks
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 3311–3322. https://doi.org/10.1145/3637528.3671795
Contrastive learning (CL) has recently gained prominence in the domain of recommender systems due to its great ability to enhance recommendation accuracy and improve model robustness. Despite its advantages, this paper identifies a vulnerability of CL-...
- research-article, July 2024
Cloud-Based Machine Learning Models as Covert Communication Channels
ASIA CCS '24: Proceedings of the 19th ACM Asia Conference on Computer and Communications Security, Pages 141–157. https://doi.org/10.1145/3634737.3657026
While Machine Learning (ML) is one of the most promising technologies of our era, it is prone to a variety of attacks. One of them is covert channels, which enable two parties to stealthily transmit information through carriers intended for different ...
- research-article, June 2024
Attacking Social Media via Behavior Poisoning
ACM Transactions on Knowledge Discovery from Data (TKDD), Volume 18, Issue 7, Article No.: 169, Pages 1–27. https://doi.org/10.1145/3654673
Since social media such as Facebook and X (formerly known as Twitter) have permeated various aspects of daily life, people have strong incentives to influence information dissemination on these platforms and differentiate their content from the fierce ...
- research-article, June 2024 (Just Accepted)
PnA: Robust Aggregation Against Poisoning Attacks to Federated Learning for Edge Intelligence
Federated learning (FL), which holds promise for use in edge intelligence applications for smart cities, enables smart devices to collaborate in training a global model by exchanging local model updates instead of sharing local training data. However, the ...
- short-paper, May 2024
Detecting Poisoning Attacks on Federated Learning Using Gradient-Weighted Class Activation Mapping
WWW '24: Companion Proceedings of the ACM Web Conference 2024, Pages 714–717. https://doi.org/10.1145/3589335.3651490
This paper proposes a new defense mechanism, namely, GCAMA, against model poisoning attacks on Federated Learning (FL), which integrates Gradient-weighted Class Activation Mapping (GradCAM) and an Autoencoder to offer a ...
- research-article, May 2024
Poisoning Federated Recommender Systems with Fake Users
WWW '24: Proceedings of the ACM Web Conference 2024, Pages 3555–3565. https://doi.org/10.1145/3589334.3645492
Federated recommendation is a prominent use case within federated learning, yet it remains susceptible to various attacks, from user- to server-side vulnerabilities. Poisoning attacks are particularly notable among user-side attacks, as participants ...
- research-article, March 2024
PACE: Poisoning Attacks on Learned Cardinality Estimation
Proceedings of the ACM on Management of Data (PACMMOD), Volume 2, Issue 1, Article No.: 37, Pages 1–27. https://doi.org/10.1145/3639292
Cardinality estimation (CE) plays a crucial role in database optimizers. We have recently witnessed the emergence of numerous learned CE models, which can outperform traditional methods such as histograms and sampling. However, learned models also bring ...
- research-article, November 2023
Certifiers Make Neural Networks Vulnerable to Availability Attacks
AISec '23: Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security, Pages 67–78. https://doi.org/10.1145/3605764.3623917
To achieve reliable, robust, and safe AI systems, it is vital to implement fallback strategies when AI predictions cannot be trusted. Certifiers for neural networks are a reliable way to check the robustness of these predictions. They guarantee for some ...
- research-article, November 2023
MESAS: Poisoning Defense for Federated Learning Resilient against Adaptive Attackers
CCS '23: Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, Pages 1526–1540. https://doi.org/10.1145/3576915.3623212
Federated Learning (FL) enhances decentralized machine learning by safeguarding data privacy, reducing communication costs, and improving model performance with diverse data sources. However, FL faces vulnerabilities such as untargeted poisoning attacks ...
- research-article, November 2023
Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks
CCS '23: Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, Pages 1511–1525. https://doi.org/10.1145/3576915.3623193
Federated learning (FL) provides an efficient paradigm to jointly train a global model leveraging data from distributed users. As local training data comes from different users who may not be trustworthy, several studies have shown that FL is vulnerable ...
- research-article, October 2023
Fraud Detection under Siege: Practical Poisoning Attacks and Defense Strategies
ACM Transactions on Privacy and Security (TOPS), Volume 26, Issue 4, Article No.: 45, Pages 1–35. https://doi.org/10.1145/3613244
Machine learning (ML) models are vulnerable to adversarial machine learning (AML) attacks. Unlike other contexts, the fraud detection domain is characterized by inherent challenges that make conventional approaches hardly applicable. In this article, we ...
- Article, November 2023
Mitigating Sybil Attacks in Federated Learning
Abstract: Federated learning (FL) is a distributed learning paradigm that facilitates a basic level of data privacy, as the clients do not have to share their raw data. Since the clients send local model updates, it increases the attack surface of FL—with ...
- research-article, May 2023
Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks
AAMAS '23: Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems, Pages 1835–1844
In targeted poisoning attacks, an attacker manipulates an agent-environment interaction to force the agent into adopting a policy of interest, called the target policy. Prior work has primarily focused on attacks that modify standard MDP primitives, such as ...
- research-article, April 2023
Curriculum Graph Poisoning
WWW '23: Proceedings of the ACM Web Conference 2023, Pages 2011–2021. https://doi.org/10.1145/3543507.3583211
Despite the success of graph neural networks (GNNs) over the Web in recent years, the typical transductive learning setting for node classification requires GNNs to be retrained frequently, making them vulnerable to poisoning attacks by corrupting the ...
- research-article, April 2023
Tutorial: Toward Robust Deep Learning against Poisoning Attacks
ACM Transactions on Embedded Computing Systems (TECS), Volume 22, Issue 3, Article No.: 42, Pages 1–15. https://doi.org/10.1145/3574159
Deep Learning (DL) has been increasingly deployed in various real-world applications due to its unprecedented performance and automated capability of learning hidden representations. While DL can achieve high task performance, the training process of a DL ...
- research-article, November 2022
EIFFeL: Ensuring Integrity for Federated Learning
CCS '22: Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, Pages 2535–2549. https://doi.org/10.1145/3548606.3560611
Federated learning (FL) enables clients to collaborate with a server to train a machine learning model. To ensure privacy, the server performs secure aggregation of updates from the clients. Unfortunately, this prevents verification of the well-...
- research-article, September 2022
Defending against Poisoning Backdoor Attacks on Federated Meta-learning
ACM Transactions on Intelligent Systems and Technology (TIST), Volume 13, Issue 5, Article No.: 76, Pages 1–25. https://doi.org/10.1145/3523062
Federated learning allows multiple users to collaboratively train a shared classification model while preserving data privacy. This approach, where model updates are aggregated by a central server, was shown to be vulnerable to poisoning backdoor attacks: ...
- poster, April 2022
Poisoning Attacks against Feature-Based Image Classification
CODASPY '22: Proceedings of the Twelfth ACM Conference on Data and Application Security and Privacy, Pages 358–360. https://doi.org/10.1145/3508398.3519363
Adversarial machine learning and the robustness of machine learning are gaining attention, especially in image classification. Attacks based on data poisoning, with the aim of lowering the integrity or availability of a model, showed high success rates, ...