DOI: 10.1145/3474369
AISec '21: Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security
ACM 2021 Proceeding
Publisher: Association for Computing Machinery, New York, NY, United States
Conference: CCS '21: 2021 ACM SIGSAC Conference on Computer and Communications Security, Virtual Event, Republic of Korea, 15 November 2021
ISBN: 978-1-4503-8657-9
Published: 15 November 2021
Abstract

It is our pleasure to welcome you to the 14th ACM Workshop on Artificial Intelligence and Security - AISec 2021. Having been co-located with CCS annually for 14 consecutive years, AISec is the premier meeting place for researchers interested in the intersection of security, privacy, AI, and machine learning. Its role as a venue has been to bring together practical security problems and advances in AI and machine learning. In doing so, researchers have also developed theory and analytics unique to this domain and have explored diverse topics such as learning in game-theoretic adversarial environments, privacy-preserving learning, and applications to malware, spam, and intrusion detection. AISec 2021 received 56 submissions, of which 17 (30%) were selected for publication and presentation as full papers. Submissions came from researchers in many different countries and from a wide variety of institutions, both academic and corporate.

SESSION: Session 1: Adversarial Machine Learning
Unicode Evil: Evading NLP Systems Using Visual Similarities of Text Characters

Adversarial Text Generation Frameworks (ATGFs) aim at causing a Natural Language Processing (NLP) system to misbehave, i.e., to misclassify a given input. In this paper, we propose EvilText, a general ATGF that successfully evades some of the most popular ...
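
For context, homoglyph attacks of this general kind swap ordinary Latin characters for visually similar Unicode code points: the text still reads normally to a human, but the character stream a classifier tokenizes changes entirely. A minimal Python sketch of that substitution idea follows; it is illustrative only, not the EvilText algorithm.

    # Minimal homoglyph-substitution sketch (illustrative only, not the
    # EvilText algorithm): swap Latin letters for visually similar
    # Cyrillic code points so the text looks unchanged to a human reader
    # but no longer matches the original characters.
    HOMOGLYPHS = {
        "a": "\u0430",  # Cyrillic small a
        "c": "\u0441",  # Cyrillic small es
        "e": "\u0435",  # Cyrillic small ie
        "o": "\u043e",  # Cyrillic small o
        "p": "\u0440",  # Cyrillic small er
    }

    def perturb(text: str) -> str:
        """Replace every character that has a known look-alike."""
        return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

    original = "open the attachment"
    evaded = perturb(original)
    print(evaded)               # renders just like the original
    print(original == evaded)   # False: the underlying code points differ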

Open Access
Adversarial Transfer Attacks With Unknown Data and Class Overlap

The ability to transfer adversarial attacks from one model (the surrogate) to another model (the victim) has been an issue of concern within the machine learning (ML) community. The ability to successfully evade unseen models represents an uncomfortable ...

Open Access
SAT: Improving Adversarial Training via Curriculum-Based Loss Smoothing

Adversarial training (AT) has become a popular choice for training robust networks. However, it tends to sacrifice clean accuracy heavily in favor of robustness and suffers from a large generalization error. To address these concerns, we propose Smooth ...
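
For context, standard adversarial training solves a min-max problem: an inner maximization finds a worst-case perturbation (typically via projected gradient descent), and an outer minimization updates the weights on that perturbed input. The sketch below shows this generic AT baseline, not the curriculum-based SAT variant the paper proposes.

    # Generic PGD-based adversarial training step, shown for context;
    # this is the standard AT baseline, NOT the SAT curriculum scheme.
    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
        """Inner maximization: find an L-inf perturbation raising the loss."""
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x + delta), y)
            loss.backward()
            with torch.no_grad():
                delta += alpha * delta.grad.sign()  # gradient ascent step
                delta.clamp_(-eps, eps)             # project into the eps-ball
                delta.grad.zero_()
        return (x + delta).detach()

    def adversarial_training_step(model, optimizer, x, y):
        """Outer minimization: update the weights on worst-case inputs."""
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()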

Open Access
SEAT: Similarity Encoder by Adversarial Training for Detecting Model Extraction Attack Queries

Given black-box access to the prediction API, model extraction attacks can steal the functionality of models deployed in the cloud. In this paper, we introduce the SEAT detector, which detects black-box model extraction attacks so that the defender can ...

NNoculation: Catching BadNets in the Wild

This paper proposes a novel two-stage defense (NNoculation) against backdoored neural networks (BadNets) that repairs a BadNet both pre-deployment and online, in response to backdoored test inputs encountered in the field. In the pre-deployment stage, ...

SESSION: Session 2A: Machine Learning for Cybersecurity
Network Anomaly Detection Using Transfer Learning Based on Auto-Encoders Loss Normalization

Anomaly detection is a classic, long-standing research problem. Previous attempts to solve it have used auto-encoders to learn a representation of the normal behaviour of networks and detect anomalies according to reconstruction loss. In this paper, we ...
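
The baseline the abstract refers to is standard: train an auto-encoder on benign traffic only, then flag any input whose reconstruction error exceeds a threshold. A minimal sketch of that generic baseline follows; the paper's transfer-learning loss normalization is its own contribution and is not shown.

    # Reconstruction-error anomaly detection with an auto-encoder; the
    # generic baseline, not the paper's loss-normalization scheme.
    import torch
    import torch.nn as nn

    class AutoEncoder(nn.Module):
        def __init__(self, n_features=20, n_hidden=4):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
            self.decoder = nn.Linear(n_hidden, n_features)

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def fit(model, benign, epochs=100, lr=1e-3):
        """Train on benign traffic only, minimizing reconstruction MSE."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(benign), benign)
            loss.backward()
            opt.step()

    def is_anomalous(model, x, threshold):
        """Flag samples whose reconstruction error exceeds the threshold."""
        with torch.no_grad():
            err = ((model(x) - x) ** 2).mean(dim=1)
        return err > threshold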

A Framework for Cluster and Classifier Evaluation in the Absence of Reference Labels

In some problem spaces, the high cost of obtaining ground truth labels necessitates use of lower quality reference datasets. It is difficult to benchmark model performance using these datasets, as evaluation results may be biased. We propose a ...

Open Access
StackBERT: Machine Learning Assisted Static Stack Frame Size Recovery on Stripped and Optimized Binaries

The call stack represents one of the core abstractions that compiler-generated programs leverage to organize binary execution at runtime. For many use cases, reasoning about the stack accesses of binary functions is crucial: security-sensitive applications ...

Public Access
Patch-based Defenses against Web Fingerprinting Attacks

Anonymity systems like Tor are vulnerable to Website Fingerprinting (WF) attacks, where a local passive eavesdropper infers the victim's activity. WF attacks based on deep learning classifiers have successfully overcome numerous defenses. While recent ...

SESSION: Session 2B: Machine Learning for Cybersecurity
INSOMNIA: Towards Concept-Drift Robustness in Network Intrusion Detection

Despite decades of research in network traffic analysis and incredible advances in artificial intelligence, network intrusion detection systems based on machine learning (ML) have yet to prove their worth. One core obstacle is the existence of concept ...

Investigating Labelless Drift Adaptation for Malware Detection

The evolution of malware has long plagued machine learning-based detection systems, as malware authors develop innovative strategies to evade detection and chase profits. This induces concept drift as the test distribution diverges from the training, ...

Spying through Virtual Backgrounds of Video Calls

Video calls have become an essential part of today's business life, especially due to the COVID-19 pandemic. Many industries enable their employees to work from home and collaborate via video conferencing services. While remote work offers ...

Open Access
Explaining Graph Neural Networks for Vulnerability Discovery

Graph neural networks (GNNs) have proven to be an effective tool for vulnerability discovery that outperforms learning-based methods working directly on source code. Unfortunately, these neural networks are uninterpretable models, whose decision process ...

Automating Privilege Escalation with Deep Reinforcement Learning

AI-based defensive solutions are necessary to defend networks and information assets against intelligent automated attacks. Gathering enough realistic data for training machine learning-based defenses is a significant practical challenge. An intelligent ...

Automated Detection of Side Channels in Cryptographic Protocols: DROWN the ROBOTs!

Currently most practical attacks on cryptographic protocols like TLS are based on side channels, such as padding oracles. Some well-known recent examples are DROWN, ROBOT and Raccoon (USENIX Security 2016, 2018, 2021). Such attacks are usually found by ...

SESSION: Session 3: Privacy-Preserving Machine Learning
FedV: Privacy-Preserving Federated Learning over Vertically Partitioned Data

Federated learning (FL) has been proposed to allow collaborative training of machine learning (ML) models among multiple parties, in which each party keeps its data private and only model updates are shared. Most existing approaches have focused on horizontal FL, while ...
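
The horizontal/vertical distinction: in horizontal FL each party holds different samples over the same feature schema, whereas in vertical FL the parties hold different features of the same samples. The toy layout below illustrates only the split itself, not FedV's secure aggregation protocol.

    # Horizontal vs. vertical data partitioning in federated learning;
    # a toy illustration of the layout only, not the FedV protocol.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 6))   # 100 aligned sample IDs, 6 features

    # Horizontal FL: parties hold disjoint ROWS (same feature schema,
    # different users).
    party_a_rows = X[:50]           # samples 0..49
    party_b_rows = X[50:]           # samples 50..99

    # Vertical FL: parties hold disjoint COLUMNS (same users, different
    # attributes), e.g. a bank and a hospital describing the same people.
    bank_features = X[:, :3]
    hospital_features = X[:, 3:]

    # Rows align one-to-one by sample ID; vertical FL schemes must
    # exploit this alignment without revealing the raw columns.
    assert bank_features.shape[0] == hospital_features.shape[0]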

Differential Privacy Defenses and Sampling Attacks for Membership Inference

Machine learning models are commonly trained on sensitive and personal data such as pictures, medical records, financial records, etc. A serious breach of the privacy of this training set occurs when an adversary is able to decide whether or not a ...
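
A common instantiation of this threat is the loss-threshold attack: because models usually fit training points better than unseen ones, an adversary can guess membership by checking whether a sample's loss falls below a threshold (calibrated, e.g., with shadow models). The sketch below shows this standard baseline, not the sampling attack the paper develops.

    # Loss-threshold membership inference: a low per-sample loss hints
    # that the sample was seen during training. A standard baseline,
    # not the paper's sampling attack.
    import torch
    import torch.nn.functional as F

    def membership_guess(model, x, y, threshold=0.5):
        """Return True where the loss suggests (x, y) was a training member."""
        with torch.no_grad():
            losses = F.cross_entropy(model(x), y, reduction="none")
        return losses < threshold   # low loss -> likely a training member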

Acceptance Rates

Overall Acceptance Rate 94 of 231 submissions, 41%
Year        Submitted  Accepted  Rate
AISec '18          32         9   28%
AISec '17          36        11   31%
AISec '16          38        12   32%
AISec '15          25        11   44%
AISec '14          24        12   50%
AISec '13          17        10   59%
AISec '12          24        10   42%
AISec '10          15        10   67%
AISec '08          20         9   45%
Overall           231        94   41%