DOI: 10.1145/3560830
AISec'22: Proceedings of the 15th ACM Workshop on Artificial Intelligence and Security
ACM2022 Proceeding
Publisher:
  • Association for Computing Machinery, New York, NY, United States
Conference:
CCS '22: 2022 ACM SIGSAC Conference on Computer and Communications Security, Los Angeles, CA, USA, 11 November 2022
ISBN:
978-1-4503-9880-0
Published:
07 November 2022
Abstract

It is our pleasure to welcome you to the 15th ACM Workshop on Artificial Intelligence and Security (AISec 2022). Having been co-located with CCS annually for 15 consecutive years, AISec is the premier meeting place for researchers interested in the intersection of security, privacy, AI, and machine learning. Its role as a venue has been to merge practical security problems with advances in AI and machine learning. In doing so, researchers have also developed theory and analytics unique to this domain and have explored diverse topics such as learning in game-theoretic adversarial environments, privacy-preserving learning, and applications to malware, spam, and intrusion detection. AISec 2022 received 40 submissions, of which 14 (35%) were selected for publication and presentation as full papers. Submissions arrived from researchers in many countries and from a wide variety of institutions, both academic and corporate.

SESSION: Session 1: Privacy-Preserving Machine Learning
research-article
Open Access
Label-Only Membership Inference Attack against Node-Level Graph Neural Networks

Graph Neural Networks (GNNs), inspired by Convolutional Neural Networks (CNNs), aggregate messages from nodes' neighbors and structural information to acquire expressive representations of nodes for node classification, graph classification, and link ...

research-article
Open Access
Repeated Knowledge Distillation with Confidence Masking to Mitigate Membership Inference Attacks

Machine learning models are often trained on sensitive data, such as medical records or bank transactions, posing high privacy risks. In fact, membership inference attacks can use the model parameters or predictions to determine whether a given data ...

research-article
Open Access
Forgeability and Membership Inference Attacks

A membership inference (MI) attack predicts whether a data point was used for training a machine learning (ML) model. MI attacks are currently the most widely deployed attack for auditing the privacy of an ML model. A recent work by Thudi et al. [18] shows ...
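The opening definition above can be illustrated with a minimal confidence-threshold MI attack, the classic baseline in this literature. The sketch below is illustrative only (the function name, threshold value, and data are assumptions, not the paper's actual method): it exploits the tendency of models to be more confident on points they were trained on.

```python
# Minimal sketch of a confidence-threshold membership inference (MI) attack.
# All names and values here are illustrative, not the paper's method.
import numpy as np

def mi_attack(confidences: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Guess that a point was a training member when the model's
    confidence on its true label exceeds a fixed threshold."""
    return confidences > threshold

# Confidences a hypothetical model assigns to the true label of four points.
conf = np.array([0.99, 0.55, 0.97, 0.40])
print(mi_attack(conf).tolist())  # [True, False, True, False]
```

In practice the threshold is calibrated on shadow models or held-out data; this baseline is what stronger label-only and forgeability-based attacks are measured against.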

research-article
Open Access
Inferring Class-Label Distribution in Federated Learning

Federated Learning (FL) has become a popular distributed learning method for training classifiers by using data that are private to individual clients. The clients' data are typically assumed to be confidential, but their heterogeneity and potential ...

SESSION: Session 2A: Adversarial Machine Learning
research-article
Video is All You Need: Attacking PPG-based Biometric Authentication

Unobservable physiological signals enhance biometric authentication systems. Photoplethysmography (PPG) signals are convenient owing to their ease of measurement and are usually well protected against remote adversaries in authentication. Any leaked PPG ...

research-article
Open Access
Magnitude Adversarial Spectrum Search-based Black-box Attack against Image Classification

Recent development has revealed that deep neural networks used in image classification systems are vulnerable to adversarial attacks. Thus, it is critical to understand the possible adversarial attacks to develop effective defense mechanisms. In this ...

SESSION: Session 2B: Adversarial Machine Learning
research-article
Assessing the Impact of Transformations on Physical Adversarial Attacks

The decision of neural networks is easily shifted at an attacker's will by so-called adversarial attacks. Initially successful only when applied directly to the input, such attacks can now, thanks to recent advances, breach the digital realm, leading to over-the-air ...

research-article
Open Access
Just Rotate it: Deploying Backdoor Attacks via Rotation Transformation

Recent works have demonstrated that deep learning models are vulnerable to backdoor poisoning attacks, in which attackers instill spurious correlations with external trigger patterns or objects (e.g., stickers, sunglasses, etc.). We find that such ...

research-article
Proactive Detection of Query-based Adversarial Scenarios in NLP Systems

Adversarial attacks can mislead a Deep Learning (DL) algorithm into generating erroneous predictions by feeding it maliciously perturbed inputs called adversarial examples. DL-based Natural Language Processing (NLP) algorithms are severely threatened by ...

SESSION: Session 3: Machine Learning for Cybersecurity
research-article
Context-Based Clustering to Mitigate Phishing Attacks

Phishing is by far the most common and disruptive type of cyber-attack faced by most organizations. Phishing messages may share common attributes such as the same or similar subject lines, the same sending infrastructure, similar URLs with certain parts ...
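The shared attributes the abstract above lists (subject lines, sending infrastructure, URLs) lend themselves to simple grouping. As a hedged illustration of the general idea, not the paper's actual context-based clustering method, the following sketch groups messages by a crudely normalized subject line (all names and data are hypothetical):

```python
# Illustrative sketch: grouping messages by a shared attribute
# (a normalized subject line). Not the paper's actual method.
from collections import defaultdict

def cluster_by_subject(messages):
    """Map each normalized subject line to the IDs of messages bearing it."""
    clusters = defaultdict(list)
    for msg in messages:
        key = msg["subject"].strip().lower()  # crude normalization
        clusters[key].append(msg["id"])
    return dict(clusters)

msgs = [
    {"id": 1, "subject": "Urgent: verify your account"},
    {"id": 2, "subject": "urgent: VERIFY your account "},
    {"id": 3, "subject": "Invoice attached"},
]
print(cluster_by_subject(msgs))
# {'urgent: verify your account': [1, 2], 'invoice attached': [3]}
```

A real pipeline would combine several such attributes (sender infrastructure, URL patterns) into a similarity measure rather than exact key matching.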

research-article
Quo Vadis: Hybrid Machine Learning Meta-Model Based on Contextual and Behavioral Malware Representations

We propose a hybrid machine learning architecture that simultaneously employs multiple deep learning models analyzing contextual and behavioral characteristics of Windows portable executables, producing a final prediction based on a decision from the ...

research-article
Optimising Vulnerability Triage in DAST with Deep Learning

False positives generated by vulnerability scanners are an industry-wide challenge in web application security. Accordingly, this paper presents a novel multi-view deep learning architecture to optimise Dynamic Application Security Testing (DAST) ...

research-article
Open Access
Bridging Automated to Autonomous Cyber Defense: Foundational Analysis of Tabular Q-Learning

Leveraging security automation and orchestration technologies enables security analysts to respond more quickly and accurately to threats. However, current tooling is limited to automating very finely scoped and hand-coded situations, such as ...

Contributors
  • University of Cagliari
  • Swiss Federal Institute of Technology, Zurich

Acceptance Rates

Overall Acceptance Rate: 94 of 231 submissions, 41%

Year        Submitted  Accepted  Rate
AISec '18          32         9   28%
AISec '17          36        11   31%
AISec '16          38        12   32%
AISec '15          25        11   44%
AISec '14          24        12   50%
AISec '13          17        10   59%
AISec '12          24        10   42%
AISec '10          15        10   67%
AISec '08          20         9   45%
Overall           231        94   41%