- research-article, July 2024
Towards realistic problem-space adversarial attacks against machine learning in network intrusion detection
ARES '24: Proceedings of the 19th International Conference on Availability, Reliability and Security, July 2024, Article No.: 113, Pages 1–8. https://doi.org/10.1145/3664476.3669974
Current trends in network intrusion detection systems (NIDS) capitalize on the extraction of features from network traffic and the use of up-to-date machine and deep learning techniques to infer a detection model; in consequence, NIDS can be vulnerable ...
- research-article, July 2024
SoK: Visualization-based Malware Detection Techniques
ARES '24: Proceedings of the 19th International Conference on Availability, Reliability and Security, July 2024, Article No.: 45, Pages 1–13. https://doi.org/10.1145/3664476.3664514
Cyber attackers leverage malware to infiltrate systems, steal sensitive data, and extort victims, posing a significant cybersecurity threat. Security experts address this challenge by employing machine learning and deep learning approaches to detect ...
- Article, July 2024
Extended Abstract: Evading Packing Detection: Breaking Heuristic-Based Static Detectors
Detection of Intrusions and Malware, and Vulnerability Assessment, July 2024, Pages 174–183. https://doi.org/10.1007/978-3-031-64171-8_9
The detection of executable packing remains an open issue, especially for static analysis. Packing is widely used in malware to hide malicious code from detection systems. In recent years, many studies about static ...
- research-article, August 2024
Tightening the Approximation Error of Adversarial Risk with Auto Loss Function Search
GECCO '24 Companion: Proceedings of the Genetic and Evolutionary Computation Conference Companion, July 2024, Pages 1563–1572. https://doi.org/10.1145/3638530.3664104
How to accurately evaluate the adversarial robustness of Deep Neural Networks (DNNs) is critical for their deployment in real-world applications. An ideal indicator of robustness is adversarial risk. Unfortunately, since it involves maximizing the 0–1 ...
- poster, August 2024
Exploring Layerwise Adversarial Robustness Through the Lens of t-SNE
GECCO '24 Companion: Proceedings of the Genetic and Evolutionary Computation Conference Companion, July 2024, Pages 619–622. https://doi.org/10.1145/3638530.3654258
Adversarial examples, designed to trick Artificial Neural Networks (ANNs) into producing wrong outputs, highlight vulnerabilities in these models. Exploring these weaknesses is crucial for developing defenses, and so, we propose a method to assess the ...
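A minimal sketch of the layerwise idea in the abstract above: embed each layer's activations for a clean batch and its adversarial counterpart with t-SNE and compare how separable they are. This is an illustrative assumption, not the paper's code; `model`, `clean_x`, `adv_x`, and the layer names are hypothetical stand-ins.

```python
import numpy as np
import torch
from sklearn.manifold import TSNE

def layerwise_tsne(model, clean_x, adv_x, layer_names):
    """2-D t-SNE embeddings of each named layer's activations for a clean
    batch and its adversarial counterpart (same sample order)."""
    acts, hooks = {}, []
    for name, module in model.named_modules():
        if name in layer_names:
            hooks.append(module.register_forward_hook(
                lambda mod, inp, out, n=name:
                    acts.setdefault(n, []).append(
                        out.detach().flatten(1).cpu().numpy())))
    with torch.no_grad():
        model(clean_x)  # first append per layer: clean activations
        model(adv_x)    # second append per layer: adversarial activations
    for h in hooks:
        h.remove()
    embeddings = {}
    for name, (clean_a, adv_a) in acts.items():
        joint = np.concatenate([clean_a, adv_a])
        # note: t-SNE needs more samples than its perplexity (default 30)
        emb = TSNE(n_components=2).fit_transform(joint)
        embeddings[name] = (emb[:len(clean_a)], emb[len(clean_a):])
    return embeddings  # per layer: (clean_2d, adv_2d) to plot and compare
```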
- research-article, June 2024 (Just Accepted)
A Comprehensive Understanding of the Impact of Data Augmentation on the Transferability of 3D Adversarial Examples
ACM Transactions on Knowledge Discovery from Data (TKDD), Just Accepted. https://doi.org/10.1145/3673232
3D point cloud classifiers exhibit vulnerability to imperceptible perturbations, which poses a serious threat to the security and reliability of deep learning models in practical applications, making the robustness evaluation of deep 3D point cloud models ...
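As a rough illustration of how augmentation can enter the crafting loop for the entry above, here is a hedged sketch that averages loss gradients over random z-axis rotations of a point cloud, in the style of expectation-over-transformation. This is an assumption about the general technique, not the paper's method; all names are hypothetical, and `label` is a (1,)-shaped class-index tensor.

```python
import math
import random
import torch
import torch.nn.functional as F

def rotate_z(points, angle):
    """Rotate an (N, 3) point cloud about the z-axis by `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    rot = torch.tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]],
                       dtype=points.dtype, device=points.device)
    return points @ rot.T

def augmented_grad(model, points, label, n_aug=8):
    """Average the attack gradient over randomly rotated copies of the cloud."""
    points = points.detach().clone().requires_grad_(True)
    total = 0.0
    for _ in range(n_aug):
        angle = random.uniform(0.0, 2 * math.pi)
        logits = model(rotate_z(points, angle).unsqueeze(0))
        total = total + F.cross_entropy(logits, label)
    # gradient averaged over augmentations; use it to update the perturbation
    return torch.autograd.grad(total / n_aug, points)[0]
```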
- research-article, May 2024
Remote Perception Attacks against Camera-based Object Recognition Systems and Countermeasures
ACM Transactions on Cyber-Physical Systems (TCPS), Volume 8, Issue 2, Article No.: 14, Pages 1–27. https://doi.org/10.1145/3596221
In vision-based object recognition systems, imaging sensors perceive the environment and then objects are detected and classified for decision-making purposes, e.g., to maneuver an automated vehicle around an obstacle or to raise alarms for intruders in ...
- short-paper, May 2024
Improving Model Robustness against Adversarial Examples with Redundant Fully Connected Layer
WWW '24: Companion Proceedings of the ACM on Web Conference 2024, May 2024, Pages 529–532. https://doi.org/10.1145/3589335.3651524
Recent studies show that deep neural networks are extremely vulnerable, especially to adversarial examples targeting image classification models. However, the current defense technologies exhibit a series of limitations in terms of the adaptability of ...
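The abstract above only names the mechanism, but its general shape is easy to picture. A minimal sketch, assuming a PyTorch classifier head; the dimensions and where the extra layer sits are guesses, not the authors' design:

```python
import torch.nn as nn

class RedundantHead(nn.Module):
    """Classifier head with an extra (redundant) FC layer before the logits."""
    def __init__(self, in_features, num_classes, redundant_dim=512):
        super().__init__()
        self.redundant = nn.Linear(in_features, redundant_dim)  # extra capacity
        self.act = nn.ReLU()
        self.classifier = nn.Linear(redundant_dim, num_classes)

    def forward(self, x):
        return self.classifier(self.act(self.redundant(x)))

# Usage: swap the final layer of a backbone for this head, e.g.
# backbone.fc = RedundantHead(backbone.fc.in_features, num_classes=10)
```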
- extended-abstract, May 2024
Evaluation of Robustness of Off-Road Autonomous Driving Segmentation against Adversarial Attacks: A Dataset-Centric Study
AAMAS '24: Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems, May 2024, Pages 2237–2239
The study explores the vulnerability of semantic segmentation models to adversarial input perturbations in the domain of off-road autonomous driving. Existing studies have primarily concentrated on enhancing a model's robustness via architectural ...
- research-article, June 2024
Properties that allow or prohibit transferability of adversarial attacks among quantized networks
AST '24: Proceedings of the 5th ACM/IEEE International Conference on Automation of Software Test (AST 2024), April 2024, Pages 99–109. https://doi.org/10.1145/3644032.3644453
Deep Neural Networks (DNNs) are known to be vulnerable to adversarial examples. Further, these adversarial examples are found to be transferable from the source network in which they are crafted to a black-box target network. As the trend of using deep ...
- research-article, March 2024
Detection of Adversarial Facial Accessory Presentation Attacks Using Local Face Differential
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), Volume 20, Issue 7, Article No.: 192, Pages 1–28. https://doi.org/10.1145/3643831
To counter adversarial facial accessory presentation attacks (PAs), a detection method based on local face differential is proposed in this article. It extracts the local face differential features from a suspected face image and a reference face image, ...
- research-article, March 2024
Attack as Detection: Using Adversarial Attack Methods to Detect Abnormal Examples
ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 33, Issue 3, Article No.: 68, Pages 1–45. https://doi.org/10.1145/3631977
As a new programming paradigm, deep learning (DL) has achieved impressive performance in areas such as image processing and speech recognition, and has expanded its application to solve many real-world problems. However, neural networks and DL are ...
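One common instantiation of the "attack as detection" idea in the title above is to treat the perturbation budget needed to flip a prediction as an anomaly score: inputs that flip under a tiny budget are suspicious. A hedged FGSM-based sketch with an illustrative epsilon schedule, not the paper's exact method:

```python
import torch
import torch.nn.functional as F

def min_flip_epsilon(model, x, epsilons=(0.001, 0.002, 0.004, 0.008, 0.016)):
    """Smallest FGSM step that changes the prediction for a single input x
    of shape (1, ...); returns inf if no step in the schedule flips it."""
    x = x.detach().clone().requires_grad_(True)
    logits = model(x)
    pred = logits.argmax(dim=1)
    grad = torch.autograd.grad(F.cross_entropy(logits, pred), x)[0].sign()
    with torch.no_grad():
        for eps in epsilons:
            if model(x + eps * grad).argmax(dim=1).item() != pred.item():
                return eps       # fragile input: flag as abnormal/adversarial
    return float("inf")          # robust under the schedule: likely normal
```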
- research-article, March 2024
How Important Are Good Method Names in Neural Code Generation? A Model Robustness Perspective
ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 33, Issue 3, Article No.: 60, Pages 1–35. https://doi.org/10.1145/3630010
Pre-trained code generation models (PCGMs) have been widely applied in neural code generation, which can generate executable code from functional descriptions in natural languages, possibly together with signatures. Despite substantial performance ...
- research-article, February 2024
A Comprehensive Study of Learning-based Android Malware Detectors under Challenging Environments
ICSE '24: Proceedings of the IEEE/ACM 46th International Conference on Software Engineering, May 2024, Article No.: 12, Pages 1–13. https://doi.org/10.1145/3597503.3623320
Recent years have witnessed the proliferation of learning-based Android malware detectors. These detectors can be categorized into three types: String-based, Image-based, and Graph-based. Most of them have achieved good detection performance under the ...
- short-paper, January 2024
GhostVec: A New Threat to Speaker Privacy of End-to-End Speech Recognition System
MMAsia '23: Proceedings of the 5th ACM International Conference on Multimedia in Asia, December 2023, Article No.: 94, Pages 1–5. https://doi.org/10.1145/3595916.3626367
Speaker adaptation systems face privacy concerns, since such systems are trained on private datasets and often overfit. This paper demonstrates that an attacker can extract speaker information by querying speaker-adapted speech recognition (ASR) ...
- December 2023
FLARE: Fingerprinting Deep Reinforcement Learning Agents using Universal Adversarial Masks
ACSAC '23: Proceedings of the 39th Annual Computer Security Applications Conference, December 2023, Pages 492–505. https://doi.org/10.1145/3627106.3627128
We propose FLARE, the first fingerprinting mechanism to verify whether a suspected Deep Reinforcement Learning (DRL) policy is an illegitimate copy of another (victim) policy. We first show that it is possible to find non-transferable, universal ...
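In the spirit of the verification step the abstract above describes, a hedged sketch: apply pre-computed universal adversarial masks to probe observations and check whether the suspect policy's actions agree with the victim's. All names, the agreement threshold, and the [0, 1] observation range are assumptions, not FLARE's actual procedure.

```python
import numpy as np

def verify_fingerprint(victim_policy, suspect_policy, observations, masks,
                       agreement_threshold=0.9):
    """Flag the suspect policy as a likely copy if, under each universal
    mask, it picks the same actions as the victim on the probe states."""
    per_mask_agreement = []
    for mask in masks:
        probed = np.clip(observations + mask, 0.0, 1.0)  # keep inputs valid
        victim_actions = victim_policy(probed)   # arrays of discrete actions
        suspect_actions = suspect_policy(probed)
        per_mask_agreement.append(np.mean(victim_actions == suspect_actions))
    return float(np.mean(per_mask_agreement)) >= agreement_threshold
```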
- research-article, February 2024
Hybrid Method for the Detection of Evasion Attacks Aimed at Machine Learning Systems
Automatic Control and Computer Sciences (ACCS), Volume 57, Issue 8, December 2023, Pages 983–988. https://doi.org/10.3103/S0146411623080072
The existing methods for the detection of evasion attacks in machine learning systems are analyzed. An experimental comparison of the methods is carried out. The uncertainty method is universal; however, in this method, it is difficult to ...
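The "uncertainty method" the abstract above refers to is commonly realized with Monte Carlo dropout: score an input by the variance of its predictions with dropout kept active, and treat high variance as a possible evasion attempt. A minimal sketch under that assumption; it is not the paper's code, and the threshold is left to the caller.

```python
import torch

def mc_dropout_uncertainty(model, x, n_samples=20):
    """Mean per-class variance of softmax outputs over stochastic passes."""
    model.train()  # keeps dropout layers stochastic at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1)
                             for _ in range(n_samples)])
    return probs.var(dim=0).mean().item()  # higher: more suspicious input
```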
- keynote, November 2023
When Papers Choose Their Reviewers: Adversarial Machine Learning in Peer Review
ARTMAN '23: Proceedings of the 2023 Workshop on Recent Advances in Resilient and Trustworthy ML Systems in Autonomous Networks, November 2023, Page 3. https://doi.org/10.1145/3605772.3625394
Academia is thriving like never before. Thousands of papers are submitted to conferences on hot research topics, such as artificial intelligence and computer vision. To handle this growth, systems for automatic paper-reviewer assignments are ...
- research-article, November 2023
When Side-Channel Attacks Break the Black-Box Property of Embedded Artificial Intelligence
AISec '23: Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security, November 2023, Pages 127–138. https://doi.org/10.1145/3605764.3623903
Artificial intelligence, and specifically deep neural networks (DNNs), has rapidly emerged in the past decade as the standard for several tasks, from targeted advertising to object detection. The performance offered has led DNN algorithms to become a part ...
- research-article, November 2023
Attack Some while Protecting Others: Selective Attack Strategies for Attacking and Protecting Multiple Concepts
CCS '23: Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, November 2023, Pages 801–814. https://doi.org/10.1145/3576915.3623177
Machine learning models are vulnerable to adversarial attacks. Existing research focuses on attack-only scenarios. In practice, one dataset may be used for learning different concepts, and the attacker may be incentivized to attack some concepts but ...