DOI: 10.1145/3605764
AISec '23: Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security
ACM 2023 Proceeding
Publisher:
  • Association for Computing Machinery, New York, NY, United States
Conference:
CCS '23: ACM SIGSAC Conference on Computer and Communications Security, Copenhagen, Denmark, 30 November 2023
ISBN:
979-8-4007-0260-0
Published:
26 November 2023
Abstract

It is our pleasure to welcome you to the 16th ACM Workshop on Artificial Intelligence and Security - AISec 2023. AISec, having been annually co-located with CCS for 16 consecutive years, is the premier meeting place for researchers interested in the intersection of security, privacy, AI, and machine learning. Its role as a venue has been to merge practical security problems with advances in AI and machine learning. In doing so, researchers have also developed theories and analytics unique to this domain and have explored diverse topics such as learning in game-theoretic adversarial environments, privacy-preserving learning, and applications to malware, spam, and intrusion detection. AISec 2023 received 64 submissions, of which 21 (33%) were selected for publication and presentation as full papers. Submissions arrived from researchers in many different countries and from a wide variety of institutions, both academic and corporate.

SESSION: Session 1: Privacy-Preserving Machine Learning
research-article
Differentially Private Logistic Regression with Sparse Solutions

LASSO regularized logistic regression is particularly useful for its built-in feature selection, allowing coefficients to be removed from deployment and producing sparse solutions. Differentially private versions of LASSO logistic regression have been ...
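
As a point of reference (a minimal sketch, not code from the paper), the sparsity that LASSO regularization induces can be seen with an ordinary, non-private L1-penalized logistic regression in scikit-learn; the differentially private variants studied in the paper build on this coefficient-zeroing behaviour.

# Minimal sketch (not from the paper): L1-regularized logistic regression,
# illustrating how LASSO-style regularization zeroes out coefficients and
# yields a sparse, feature-selecting model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=50, n_informative=5,
                           random_state=0)

# penalty="l1" gives LASSO-style sparsity; smaller C means stronger
# regularization and more coefficients driven exactly to zero.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X, y)

n_nonzero = np.count_nonzero(clf.coef_)
print(f"{n_nonzero} of {clf.coef_.size} coefficients are non-zero")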

research-article
Equivariant Differentially Private Deep Learning: Why DP-SGD Needs Sparser Models

Differentially Private Stochastic Gradient Descent (DP-SGD) limits the amount of private information deep learning models can memorize during training. This is achieved by clipping and adding noise to the model's gradients, and thus networks with more ...
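
For context, a rough illustration of the clip-and-noise mechanism the snippet describes (a generic NumPy sketch of one DP-SGD batch update, not the paper's method; per_example_grads, clip_norm, and noise_mult are assumed names):

# Minimal sketch (not the paper's method): the per-example clip-and-noise
# step at the core of DP-SGD, for a single batch of flattened gradients.
# `per_example_grads` is assumed to have shape (batch_size, n_params).
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    # 1. Clip each example's gradient to a maximum L2 norm of `clip_norm`.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / (norms + 1e-12))
    clipped = per_example_grads * scale

    # 2. Sum the clipped gradients and add Gaussian noise calibrated to the clip norm.
    summed = clipped.sum(axis=0)
    noise = np.random.normal(0.0, noise_mult * clip_norm, size=summed.shape)
    noisy_mean = (summed + noise) / per_example_grads.shape[0]

    # 3. Take an ordinary gradient step with the privatized gradient.
    return params - lr * noisy_mean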

research-article
Probing the Transition to Dataset-Level Privacy in ML Models Using an Output-Specific and Data-Resolved Privacy Profile

Differential privacy (DP) is the prevailing technique for protecting user data in machine learning models. However, deficits to this framework include a lack of clarity for selecting the privacy budget ε and a lack of quantification for the privacy ...

research-article
Information Leakage from Data Updates in Machine Learning Models

In this paper we consider the setting where machine learning models are retrained on updated datasets in order to incorporate the most up-to-date information or reflect distribution shifts. We investigate whether one can infer information about these ...

research-article
Membership Inference Attacks Against Semantic Segmentation Models

Membership inference attacks aim to infer whether a data record has been used to train a target model by observing its predictions. In sensitive domains such as healthcare, this can constitute a severe privacy violation. In this work we attempt to ...
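
To make the threat model concrete, here is a minimal confidence-thresholding sketch of membership inference for an ordinary classifier (not the segmentation setting studied in the paper, and not the paper's attack); model, x, y_true, and threshold are placeholder names:

# Minimal sketch (not the paper's attack): guess "member" when the target
# model is unusually confident on the true label of a candidate record.
import numpy as np

def membership_guess(model, x, y_true, threshold=0.9):
    probs = model.predict_proba(x.reshape(1, -1))[0]  # target model's predicted probabilities
    confidence = probs[y_true]                        # probability assigned to the true label
    return confidence >= threshold                    # high confidence -> likely a training member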

research-article
Utility-preserving Federated Learning

We investigate the concept of utility-preserving federated learning (UPFL) in the context of deep neural networks. We theoretically prove and experimentally validate that UPFL achieves the same accuracy as centralized training independent of the data ...

SESSION: Session 2: Machine Learning Security
research-article
Certifiers Make Neural Networks Vulnerable to Availability Attacks

To achieve reliable, robust, and safe AI systems, it is vital to implement fallback strategies when AI predictions cannot be trusted. Certifiers for neural networks are a reliable way to check the robustness of these predictions. They guarantee for some ...

research-article
Not What You've Signed Up For: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection

Large Language Models (LLMs) are increasingly being integrated into applications, with versatile functionalities that can be easily modulated via natural language prompts. So far, it has been assumed that the user is directly prompting the LLM. But what if ...

research-article
Open Access
Canaries and Whistles: Resilient Drone Communication Networks with (or without) Deep Reinforcement Learning

Communication networks able to withstand hostile environments are critically important for disaster relief operations. In this paper, we consider a challenging scenario where drones have been compromised in the supply chain, during their manufacture, and ...

research-article
Open Access
The Adversarial Implications of Variable-Time Inference

Machine learning (ML) models are known to be vulnerable to a number of attacks that target the integrity of their predictions or the privacy of their training data. To carry out these attacks, a black-box adversary must typically possess the ability to ...

research-article
Dictionary Attack on IMU-based Gait Authentication

We present a novel adversarial model for authentication systems that use gait patterns recorded by the inertial measurement unit (IMU) built into smartphones. The attack idea is inspired by and named after the concept of a dictionary attack on knowledge (...

research-article
When Side-Channel Attacks Break the Black-Box Property of Embedded Artificial Intelligence

Artificial intelligence, and specifically deep neural networks (DNNs), has rapidly emerged in the past decade as the standard for several tasks from specific advertising to object detection. The performance offered has led DNN algorithms to become a part ...

research-article
Open Access
Task-Agnostic Safety for Reinforcement Learning

Reinforcement learning (RL) has shown attractive potential for designing autonomous systems due to its learning-by-exploration approach. However, this learning process makes RL inherently vulnerable and thus unsuitable for applications where safety is ...

research-article
Open Access
Broken Promises: Measuring Confounding Effects in Learning-based Vulnerability Discovery

Several learning-based vulnerability detection methods have been proposed to assist developers during the secure software development life-cycle. In particular, recent learning-based large transformer networks have shown remarkably high performance in ...

research-article
Measuring Equality in Machine Learning Security Defenses: A Case Study in Speech Recognition

Over the past decade, the machine learning security community has developed a myriad of defenses for evasion attacks. An understudied question in that community is: for whom do these defenses defend? This work considers common approaches to defending ...

SESSION: Session 3: Machine Learning for Cybersecurity
research-article
Open Access
Certified Robustness of Static Deep Learning-based Malware Detectors against Patch and Append Attacks

Machine learning (ML)-based malware detectors have been shown to be susceptible to adversarial malware examples. Given the vulnerability of deep learning detectors to small changes on the input file, we propose a practical and certifiable defense against ...

research-article
AVScan2Vec: Feature Learning on Antivirus Scan Data for Production-Scale Malware Corpora

When investigating a malicious file, searching for related files is a common task that malware analysts must perform. Given that production malware corpora may contain over a billion files and consume petabytes of storage, many feature extraction and ...

research-article
Drift Forensics of Malware Classifiers

The widespread occurrence of mobile malware still poses a significant security threat to billions of smartphone users. To counter this threat, several machine learning-based detection systems have been proposed within the last decade. These methods have ...

research-article
Open Access
Lookin' Out My Backdoor! Investigating Backdooring Attacks Against DL-driven Malware Detectors

Given their generalization capabilities, deep learning algorithms may represent a powerful weapon in the arsenal of antivirus developers. Nevertheless, recent works in different domains (e.g., computer vision) have shown that such algorithms are ...

research-article
Open Access
Reward Shaping for Happier Autonomous Cyber Security Agents

As machine learning models become more capable, they have exhibited increased potential in solving complex tasks. One of the most promising directions uses deep reinforcement learning to train autonomous agents in computer network defense tasks. This ...

research-article
Raze to the Ground: Query-Efficient Adversarial HTML Attacks on Machine-Learning Phishing Webpage Detectors

Machine-learning phishing webpage detectors (ML-PWD) have been shown to suffer from adversarial manipulations of the HTML code of the input webpage. Nevertheless, the attacks recently proposed have demonstrated limited effectiveness due to their lack of ...

Contributors
  • University of Cagliari
  • Swiss Federal Institute of Technology, Zurich

Index Terms

  1. Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security
        Index terms have been assigned to the content through auto-classification.


Acceptance Rates

Overall Acceptance Rate: 94 of 231 submissions, 41%

Year        Submitted   Accepted   Rate
AISec '18          32          9    28%
AISec '17          36         11    31%
AISec '16          38         12    32%
AISec '15          25         11    44%
AISec '14          24         12    50%
AISec '13          17         10    59%
AISec '12          24         10    42%
AISec '10          15         10    67%
AISec '08          20          9    45%
Overall           231         94    41%