- research-article, March 2024
Toward Effective Traffic Sign Detection via Two-Stage Fusion Neural Networks
IEEE Transactions on Intelligent Transportation Systems (ITS-TRANSACTIONS), Volume 25, Issue 8, Pages 8283–8294. https://doi.org/10.1109/TITS.2024.3373793
Automatic detection of traffic signs is crucial for Advanced Driving Assistance Systems (ADAS). Current two-stage approaches consist of a preliminary object detection step, where the traffic signs are categorized within broader families (e.g., speed ...
- research-article, March 2024
Machine Learning Security Against Data Poisoning: Are We There Yet?
Poisoning attacks compromise the training data utilized to train machine learning (ML) models, diminishing their overall performance, manipulating predictions on specific test samples, and implanting backdoors. This article thoughtfully explores these ...
- research-article, June 2024
Nebula: Self-Attention for Dynamic Malware Analysis
IEEE Transactions on Information Forensics and Security (TIFS), Volume 19, Pages 6155–6167. https://doi.org/10.1109/TIFS.2024.3409083
Dynamic analysis enables detecting Windows malware by executing programs in a controlled environment and logging their actions. Previous work has proposed training machine learning models, i.e., convolutional and long short-term memory networks, on ...
- research-article, February 2024
Rethinking data augmentation for adversarial robustness
- Hamid Eghbal-zadeh,
- Werner Zellinger,
- Maura Pintor,
- Kathrin Grosse,
- Khaled Koutini,
- Bernhard A. Moser,
- Battista Biggio,
- Gerhard Widmer
Information Sciences: an International Journal (ISCI), Volume 654, Issue C. https://doi.org/10.1016/j.ins.2023.119838
Abstract: Recent work has proposed novel data augmentation methods to improve the adversarial robustness of deep neural networks. In this paper, we re-evaluate such methods through the lens of different metrics that characterize the augmented manifold, ...
Highlights:
- Augmentation methods for adversarial robustness are often not tested in isolation.
- They are often tested on one single value of augmentation probability.
- They improve robustness only when combined with classical augmentations.
- ...
- research-article, December 2023
Hardening RGB-D object recognition systems against adversarial patch attacks
- Yang Zheng,
- Luca Demetrio,
- Antonio Emanuele Cinà,
- Xiaoyi Feng,
- Zhaoqiang Xia,
- Xiaoyue Jiang,
- Ambra Demontis,
- Battista Biggio,
- Fabio Roli
Information Sciences: an International Journal (ISCI), Volume 651, Issue C. https://doi.org/10.1016/j.ins.2023.119701
Abstract: RGB-D object recognition systems improve their predictive performances by fusing color and depth information, outperforming neural network architectures that rely solely on colors. While RGB-D systems are expected to be more robust to adversarial ...
Highlights:
- We assess the performance of a state-of-the-art system for object detection based on color and depth features.
- We explain why RGB features are more vulnerable to attacks than depth features.
- We develop a defense against adversarial ...
- research-article, November 2023
Raze to the Ground: Query-Efficient Adversarial HTML Attacks on Machine-Learning Phishing Webpage Detectors
AISec '23: Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security, Pages 233–244. https://doi.org/10.1145/3605764.3623920
Machine-learning phishing webpage detectors (ML-PWD) have been shown to suffer from adversarial manipulations of the HTML code of the input webpage. Nevertheless, the attacks recently proposed have demonstrated limited effectiveness due to their lack of ...
- Article, September 2023
Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training
- Dario Lazzaro,
- Antonio Emanuele Cinà,
- Maura Pintor,
- Ambra Demontis,
- Battista Biggio,
- Fabio Roli,
- Marcello Pelillo
Abstract: Deep learning models undergo a significant increase in the number of parameters they possess, leading to the execution of a larger number of operations during inference. This expansion significantly contributes to higher energy consumption and ...
- research-article, September 2023
Stateful detection of adversarial reprogramming
- Yang Zheng,
- Xiaoyi Feng,
- Zhaoqiang Xia,
- Xiaoyue Jiang,
- Maura Pintor,
- Ambra Demontis,
- Battista Biggio,
- Fabio Roli
Information Sciences: an International Journal (ISCI), Volume 642, Issue C. https://doi.org/10.1016/j.ins.2023.119093
Abstract: Adversarial reprogramming allows stealing computational resources by repurposing machine learning models to perform a different task chosen by the attacker. For example, a model trained to recognize images of animals can be reprogrammed to ...
Highlights:
- This work is the first to propose a defense against reprogramming in black-box scenarios.
- Our analysis shows that stateful defenses increase the attackers' cost for executing adversarial reprogramming.
- Stateful defenses remain a ...
- survey, July 2023
Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
- Antonio Emanuele Cinà,
- Kathrin Grosse,
- Ambra Demontis,
- Sebastiano Vascon,
- Werner Zellinger,
- Bernhard A. Moser,
- Alina Oprea,
- Battista Biggio,
- Marcello Pelillo,
- Fabio Roli
ACM Computing Surveys (CSUR), Volume 55, Issue 13s, Article No.: 294, Pages 1–39. https://doi.org/10.1145/3585385
The success of machine learning is fueled by the increasing availability of computing power and large training datasets. The training data is used to learn new models or update existing ones, assuming that it is sufficiently representative of the data ...
- research-article, June 2023
Why adversarial reprogramming works, when it fails, and how to tell the difference
- Yang Zheng,
- Xiaoyi Feng,
- Zhaoqiang Xia,
- Xiaoyue Jiang,
- Ambra Demontis,
- Maura Pintor,
- Battista Biggio,
- Fabio Roli
Information Sciences: an International Journal (ISCI), Volume 632, Issue C, Pages 130–143. https://doi.org/10.1016/j.ins.2023.02.086
Abstract: Adversarial reprogramming allows repurposing a machine-learning model to perform a different task. For example, a model trained to recognize animals can be reprogrammed to recognize digits by embedding an adversarial program in the digit images ...
Highlights:
- We have provided a first-order linear model of adversarial reprogramming.
- We have performed an extensive experimental analysis that validates our model.
- Our analysis unveils the factors that affect the success of adversarial ...
- research-article, February 2023
ImageNet-Patch: A dataset for benchmarking machine learning robustness against adversarial patches
Highlights:
- We provide a dataset to benchmark machine-learning models against adversarial patches.
Adversarial patches are optimized contiguous pixel blocks in an input image that cause a machine-learning model to misclassify it. However, their optimization is computationally demanding, and requires careful hyperparameter tuning, ...
- research-article, January 2023
Machine Learning Security in Industry: A Quantitative Survey
IEEE Transactions on Information Forensics and Security (TIFS), Volume 18, Pages 1749–1762. https://doi.org/10.1109/TIFS.2023.3251842
Despite the large body of academic work on machine learning security, little is known about the occurrence of attacks on machine learning systems in the wild. In this paper, we report on a quantitative study with 139 industrial practitioners. We analyze ...
- review-article, January 2023
The Threat of Offensive AI to Organizations
- Yisroel Mirsky,
- Ambra Demontis,
- Jaidip Kotak,
- Ram Shankar,
- Deng Gelei,
- Liu Yang,
- Xiangyu Zhang,
- Maura Pintor,
- Wenke Lee,
- Yuval Elovici,
- Battista Biggio
Abstract: AI has provided us with the ability to automate tasks, extract information from vast amounts of data, and synthesize media that is nearly indistinguishable from the real thing. However, positive tools can also be used for negative ...
- research-article, April 2024
Indicators of attack failure: debugging and improving optimization of adversarial examples
NIPS '22: Proceedings of the 36th International Conference on Neural Information Processing Systems, Article No.: 1676, Pages 23063–23076.
Evaluating robustness of machine-learning models to adversarial examples is a challenging problem. Many defenses have been shown to provide a false sense of robustness by causing gradient-based attacks to fail, and they have been broken under more ...
- research-article, November 2022
Practical Evaluation of Poisoning Attacks on Online Anomaly Detectors in Industrial Control Systems
Abstract: Recently, neural networks (NNs) have been proposed for the detection of cyber attacks targeting industrial control systems (ICSs). Such detectors are often retrained, using data collected during system operation, to cope with the ...
- research-article, September 2022
Practical Attacks on Machine Learning: A Case Study on Adversarial Windows Malware
IEEE Security and Privacy (IEEE-SEC-PRIVACY), Volume 20, Issue 5, Pages 77–85. https://doi.org/10.1109/MSEC.2022.3182356
While machine learning is vulnerable to adversarial examples, it still lacks systematic procedures and tools for evaluating its security in different contexts. We discuss how to develop automated and scalable security evaluations of machine learning using ...
- research-article, September 2022
Backdoor smoothing: Demystifying backdoor attacks on deep neural networks
Abstract: Backdoor attacks mislead machine-learning models into outputting an attacker-specified class when presented with a specific trigger at test time. These attacks require poisoning the training data to compromise the learning algorithm, e.g., by injecting ...
- research-article, August 2022
Explainability-based Debugging of Machine Learning for Vulnerability Discovery
ARES '22: Proceedings of the 17th International Conference on Availability, Reliability and Security, Article No.: 113, Pages 1–8. https://doi.org/10.1145/3538969.3543809
Machine learning has been successfully used for increasingly complex and critical tasks, achieving high performance and efficiency that would not be possible for human operators. Unfortunately, recent studies have shown that, despite its power, this ...
- research-article, August 2022
Industrial practitioners' mental models of adversarial machine learning
SOUPS'22: Proceedings of the Eighteenth USENIX Conference on Usable Privacy and Security, Article No.: 6, Pages 97–116.
Although machine learning is widely used in practice, little is known about practitioners' understanding of potential security challenges. In this work, we close this substantial gap and contribute a qualitative study focusing on developers' mental models ...
- research-article, July 2022
Towards learning trustworthily, automatically, and with guarantees on graphs: An overview
- Luca Oneto,
- Nicoló Navarin,
- Battista Biggio,
- Federico Errica,
- Alessio Micheli,
- Franco Scarselli,
- Monica Bianchini,
- Luca Demetrio,
- Pietro Bongini,
- Armando Tacchella,
- Alessandro Sperduti
Neurocomputing (NEUROC), Volume 493, Issue C, Pages 217–243. https://doi.org/10.1016/j.neucom.2022.04.072
Abstract: The increasing digitization and datification of all aspects of people's daily life, and the consequent growth in the use of personal data, are increasingly challenging the current development and adoption of Machine Learning (ML). ...