- Research article, March 2024
Machine Learning Security Against Data Poisoning: Are We There Yet?
Poisoning attacks compromise the data used to train machine learning (ML) models, degrading their overall performance, manipulating predictions on specific test samples, and implanting backdoors. This article explores these ...
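For readers new to the area, a minimal sketch of label flipping, one of the simplest poisoning strategies in the family this article surveys, is shown below; the synthetic data, model choice, and flip rates are illustrative assumptions, not the article's experimental setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Label-flipping poisoning sketch: flip a fraction of training labels
# and measure the drop in clean test accuracy. Data is synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_rate):
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(flip_rate * len(y_tr)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip the selected labels
    clf = LogisticRegression().fit(X_tr, y_poisoned)
    return clf.score(X_te, y_te)

for rate in (0.0, 0.1, 0.3):
    print(f"flip rate {rate:.0%}: test accuracy {accuracy_after_poisoning(rate):.3f}")
```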
- Research article, December 2023
Hardening RGB-D object recognition systems against adversarial patch attacks
- Yang Zheng,
- Luca Demetrio,
- Antonio Emanuele Cinà,
- Xiaoyi Feng,
- Zhaoqiang Xia,
- Xiaoyue Jiang,
- Ambra Demontis,
- Battista Biggio,
- Fabio Roli
Information Sciences: an International Journal (ISCI), Volume 651, Issue C. https://doi.org/10.1016/j.ins.2023.119701
Abstract: RGB-D object recognition systems improve their predictive performance by fusing color and depth information, outperforming neural network architectures that rely solely on color. While RGB-D systems are expected to be more robust to adversarial ...
Highlights:
- We assess the performance of a state-of-the-art system for object detection based on color and depth features.
- We explain why RGB features are more vulnerable to attacks than depth features.
- We develop a defense against adversarial ...
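As an illustration of the attack surface these highlights describe, here is a minimal sketch of pasting an adversarial patch into only the RGB channels of an RGB-D input, leaving depth untouched; the array shapes and the random patch are placeholder assumptions, since real patches are optimized rather than sampled.

```python
import numpy as np

# Paste a patch into the RGB channels of an RGB-D input while leaving
# the depth channel intact, illustrating why depth can stay informative.
rgbd = np.random.rand(4, 224, 224).astype(np.float32)   # channels: R, G, B, D
patch = np.random.rand(3, 50, 50).astype(np.float32)    # RGB-only patch

def apply_patch(x, patch, top, left):
    x = x.copy()
    ph, pw = patch.shape[1:]
    x[:3, top:top + ph, left:left + pw] = patch  # overwrite RGB, keep depth
    return x

x_adv = apply_patch(rgbd, patch, top=80, left=80)
assert np.allclose(x_adv[3], rgbd[3])  # depth channel is unchanged
```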
- Article, September 2023
Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training
- Dario Lazzaro,
- Antonio Emanuele Cinà,
- Maura Pintor,
- Ambra Demontis,
- Battista Biggio,
- Fabio Roli,
- Marcello Pelillo
Abstract: Deep learning models have seen a significant increase in the number of parameters they possess, leading to a larger number of operations executed during inference. This expansion significantly contributes to higher energy consumption and ...
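A hedged sketch of the general idea, a task loss augmented with an activation-sparsity penalty so that fewer units fire at inference time, follows; the architecture, data, and penalty weight are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

# Energy-aware training sketch: add an L1 penalty on hidden activations
# to the task loss. Model, batch, and lambda are placeholders.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
lam = 1e-4  # strength of the energy (activation-sparsity) penalty

x = torch.randn(64, 784)          # placeholder batch
y = torch.randint(0, 10, (64,))

hidden = torch.relu(model[0](x))  # activations of the first layer
logits = model[2](hidden)
loss = loss_fn(logits, y) + lam * hidden.abs().sum()  # task loss + energy term
opt.zero_grad()
loss.backward()
opt.step()
```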
- Research article, September 2023
Stateful detection of adversarial reprogramming
- Yang Zheng,
- Xiaoyi Feng,
- Zhaoqiang Xia,
- Xiaoyue Jiang,
- Maura Pintor,
- Ambra Demontis,
- Battista Biggio,
- Fabio Roli
Information Sciences: an International Journal (ISCI), Volume 642, Issue C. https://doi.org/10.1016/j.ins.2023.119093
Abstract: Adversarial reprogramming allows stealing computational resources by repurposing machine learning models to perform a different task chosen by the attacker. For example, a model trained to recognize images of animals can be reprogrammed to ...
Highlights:
- This work is the first to propose a defense against reprogramming in black-box scenarios.
- Our analysis shows that stateful defenses increase the attackers' cost for executing adversarial reprogramming.
- Stateful defenses remain a ...
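As a rough illustration of what a stateful defense does, the sketch below keeps a memory of past query embeddings and flags near-duplicate probes; the embedding space, distance metric, and threshold are all assumptions for illustration, and a real system would calibrate the threshold on benign traffic.

```python
import numpy as np

# Stateful detector sketch: flag a new query whose nearest stored
# neighbor is suspiciously close, as repeated attack probes would be.
class StatefulDetector:
    def __init__(self, threshold=0.5):
        self.memory = []          # embeddings of past queries
        self.threshold = threshold

    def query(self, embedding):
        flagged = any(np.linalg.norm(embedding - m) < self.threshold
                      for m in self.memory)
        self.memory.append(embedding)
        return flagged

det = StatefulDetector()
probe = np.random.rand(64)
print(det.query(probe))            # False: first time seen
print(det.query(probe + 0.01))     # True: near-duplicate probe
```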
- Survey, July 2023
Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
- Antonio Emanuele Cinà,
- Kathrin Grosse,
- Ambra Demontis,
- Sebastiano Vascon,
- Werner Zellinger,
- Bernhard A. Moser,
- Alina Oprea,
- Battista Biggio,
- Marcello Pelillo,
- Fabio Roli
ACM Computing Surveys (CSUR), Volume 55, Issue 13s, Article No. 294, Pages 1–39. https://doi.org/10.1145/3585385
Abstract: The success of machine learning is fueled by the increasing availability of computing power and large training datasets. The training data is used to learn new models or update existing ones, assuming that it is sufficiently representative of the data ...
- Research article, June 2023
Why adversarial reprogramming works, when it fails, and how to tell the difference
- Yang Zheng,
- Xiaoyi Feng,
- Zhaoqiang Xia,
- Xiaoyue Jiang,
- Ambra Demontis,
- Maura Pintor,
- Battista Biggio,
- Fabio Roli
Information Sciences: an International Journal (ISCI), Volume 632, Issue C, Pages 130–143. https://doi.org/10.1016/j.ins.2023.02.086
Abstract: Adversarial reprogramming allows repurposing a machine-learning model to perform a different task. For example, a model trained to recognize animals can be reprogrammed to recognize digits by embedding an adversarial program in the digit images ...
Highlights:
- We have provided a first-order linear model of adversarial reprogramming.
- We have performed an extensive experimental analysis that validates our model.
- Our analysis unveils the factors that affect the success of adversarial ...
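To make the mechanism concrete, here is a minimal sketch of adversarial reprogramming itself: a small target-task image is embedded in a frame carrying the learned program, and source-model labels are remapped to target-task labels; the sizes, the hard-coded modulo label mapping, and the untrained program are illustrative assumptions.

```python
import torch

# Adversarial reprogramming sketch: the border of the frame carries the
# learned "program"; the target-task digit sits in the center.
program = torch.zeros(3, 224, 224, requires_grad=True)  # optimized in training

def reprogram(digit):                        # digit: (1, 28, 28) in [0, 1]
    frame = torch.tanh(program).clone()      # keep program values in [-1, 1]
    frame[:, 98:126, 98:126] = digit         # paste the digit in the center
    return frame

digit = torch.rand(1, 28, 28)
x = reprogram(digit)                         # input fed to the source model
label_map = {src: src % 10 for src in range(1000)}  # source class -> digit class
```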
- Article, May 2023
BAARD: Blocking Adversarial Examples by Testing for Applicability, Reliability and Decidability
Advances in Knowledge Discovery and Data Mining, Pages 3–14. https://doi.org/10.1007/978-3-031-33374-3_1
Abstract: Adversarial defenses protect machine learning models from adversarial attacks, but are often tailored to one type of model or attack. The lack of information on unknown potential attacks makes detecting adversarial examples challenging. ...
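A simplified, class-agnostic check loosely in the spirit of an applicability test is sketched below: reject test samples whose features fall outside bounds fitted on training data. The quantile bounds and synthetic data are assumptions for illustration, not BAARD's actual per-class procedure.

```python
import numpy as np

# Applicability-style check: is this sample within the feature ranges
# observed on training data? Out-of-range samples are rejected.
def fit_bounds(X_train, quantile=0.01):
    lo = np.quantile(X_train, quantile, axis=0)
    hi = np.quantile(X_train, 1 - quantile, axis=0)
    return lo, hi

def is_applicable(x, lo, hi):
    return bool(np.all(x >= lo) and np.all(x <= hi))

X_train = np.random.rand(500, 10)
lo, hi = fit_bounds(X_train)
print(is_applicable(np.full(10, 0.5), lo, hi))   # True: typical sample
print(is_applicable(np.full(10, 5.0), lo, hi))   # False: out of range
```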
- Research article, February 2023
ImageNet-Patch: A dataset for benchmarking machine learning robustness against adversarial patches
Highlights:
- We provide a dataset to benchmark machine-learning models against adversarial patches.
Adversarial patches are optimized contiguous pixel blocks in an input image that cause a machine-learning model to misclassify it. However, their optimization is computationally demanding and requires careful hyperparameter tuning, ...
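As a hint of how such a benchmark could be driven, the sketch below pastes a ready-made patch at a random location in a test image, after which one would measure how often the model's prediction flips; the image, patch, and placement policy are placeholder assumptions.

```python
import numpy as np

# Benchmarking sketch: paste the same pre-optimized patch at a random
# location in each test image. Image and patch here are random stand-ins.
rng = np.random.default_rng(0)

def paste_random(img, patch):
    """Paste `patch` (C, ph, pw) at a random location in `img` (C, H, W)."""
    _, h, w = img.shape
    ph, pw = patch.shape[1:]
    top, left = rng.integers(0, h - ph + 1), rng.integers(0, w - pw + 1)
    out = img.copy()
    out[:, top:top + ph, left:left + pw] = patch
    return out

img = np.random.rand(3, 224, 224).astype(np.float32)   # placeholder image
patch = np.random.rand(3, 50, 50).astype(np.float32)   # placeholder patch
x_adv = paste_random(img, patch)
```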
- Review article, January 2023
The Threat of Offensive AI to Organizations
- Yisroel Mirsky,
- Ambra Demontis,
- Jaidip Kotak,
- Ram Shankar,
- Deng Gelei,
- Liu Yang,
- Xiangyu Zhang,
- Maura Pintor,
- Wenke Lee,
- Yuval Elovici,
- Battista Biggio
Abstract: AI has provided us with the ability to automate tasks, extract information from vast amounts of data, and synthesize media that is nearly indistinguishable from the real thing. However, positive tools can also be used for negative ...
- Research article, April 2024
Indicators of attack failure: debugging and improving optimization of adversarial examples
NIPS '22: Proceedings of the 36th International Conference on Neural Information Processing Systems, Article No. 1676, Pages 23063–23076
Abstract: Evaluating robustness of machine-learning models to adversarial examples is a challenging problem. Many defenses have been shown to provide a false sense of robustness by causing gradient-based attacks to fail, and they have been broken under more ...
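One plausible indicator, loosely in the spirit of the paper's debugging checks, is sketched below: flag attack runs whose loss trace never decreases, a common symptom of gradient masking or a bad step size. The tolerance and example traces are illustrative only.

```python
import numpy as np

# Failure-indicator sketch: an attack whose loss never improves over
# its starting value has made no progress and should be inspected.
def attack_made_no_progress(loss_trace, tol=1e-6):
    loss_trace = np.asarray(loss_trace)
    return bool(loss_trace.min() > loss_trace[0] - tol)

print(attack_made_no_progress([2.3, 2.3, 2.3]))   # True: optimization stalled
print(attack_made_no_progress([2.3, 1.8, 0.9]))   # False: attack progressing
```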
- Abstract, November 2022
AISec '22: 15th ACM Workshop on Artificial Intelligence and Security
CCS '22: Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, Pages 3549–3551. https://doi.org/10.1145/3548606.3563683
Abstract: Recent years have seen a dramatic increase in applications of Artificial Intelligence (AI), Machine Learning (ML), and data mining to security and privacy problems. The analytic tools and intelligent behavior provided by these techniques make AI and ML ...
- Proceedings, November 2022
AISec'22: Proceedings of the 15th ACM Workshop on Artificial Intelligence and Security
It is our pleasure to welcome you to the 15th ACM Workshop on Artificial Intelligence and Security - AISec 2022. AISec, having been annually co-located with CCS for 15 consecutive years, is the premier meeting place for researchers interested in the ...
- Article, March 2022
Super-Sparse Regression for Fast Age Estimation from Faces at Test Time
Abstract: Age estimation from faces is a challenging problem that has recently gained increasing relevance due to its potentially multi-faceted applications. Many current methods for age estimation rely on extracting computationally demanding features from ...
- Proceedings, November 2021
AISec '21: Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security
It is our pleasure to welcome you to the 14th ACM Workshop on Artificial Intelligence and Security - AISec 2021. AISec, having been annually co-located with CCS for 14 consecutive years, is the premier meeting place for researchers interested in the ...
- Proceedings, November 2020
AISec'20: Proceedings of the 13th ACM Workshop on Artificial Intelligence and Security
It is our pleasure to welcome you to the 13th ACM Workshop on Artificial Intelligence and Security - AISec 2020. AISec, having been annually co-located with CCS for 13 consecutive years, is the premier meeting place for researchers interested in the ...
- Abstract, November 2020
AISec'20: 13th Workshop on Artificial Intelligence and Security
CCS '20: Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, Pages 2143–2144. https://doi.org/10.1145/3372297.3416247
Abstract: Recent years have seen a dramatic increase in applications of Artificial Intelligence (AI), Machine Learning (ML), and data mining to security and privacy problems. The analytic tools and intelligent behavior provided by these techniques make AI and ML ...
- Article, August 2019
Why do adversarial attacks transfer? Explaining transferability of evasion and poisoning attacks
- Ambra Demontis,
- Marco Melis,
- Maura Pintor,
- Matthew Jagielski,
- Battista Biggio,
- Alina Oprea,
- Cristina Nita-Rotaru,
- Fabio Roli
Transferability captures the ability of an attack against a machine-learning model to be effective against a different, potentially unknown, model. Empirical evidence for transferability has been shown in previous work, but the underlying reasons why an ...
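The standard transfer-attack protocol the abstract alludes to can be sketched in a few lines: craft an adversarial example on a surrogate model, then test whether it also fools a separate target. Both untrained models and the random data below are placeholders for trained systems.

```python
import torch
import torch.nn as nn

# Transferability test sketch: FGSM on a surrogate, evaluated on a target.
def fgsm(model, x, y, eps=0.1):
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

surrogate = nn.Sequential(nn.Linear(20, 2))   # attacker's stand-in model
target = nn.Sequential(nn.Linear(20, 2))      # victim's stand-in model
x, y = torch.randn(8, 20), torch.randint(0, 2, (8,))

x_adv = fgsm(surrogate, x, y)                 # crafted on the surrogate only
transfer = (target(x_adv).argmax(1) != y).float().mean()
print(f"transfer success rate: {transfer:.2f}")
```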
- Research article, July 2019
Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection
- Ambra Demontis,
- Marco Melis,
- Battista Biggio,
- Davide Maiorca,
- Daniel Arp,
- Konrad Rieck,
- Igino Corona,
- Giorgio Giacinto,
- Fabio Roli
IEEE Transactions on Dependable and Secure Computing (TDSC), Volume 16, Issue 4, Pages 711–724. https://doi.org/10.1109/TDSC.2017.2700270
Abstract: To cope with the increasing variability and sophistication of modern attacks, machine learning has been widely adopted as a statistically-sound tool for malware detection. However, its security against well-crafted attacks has not only been recently ...