- Sponsor: SIGSAC
It is our pleasure to welcome you to the 10th ACM Workshop on Artificial Intelligence and Security - AISec 2017. Co-located with CCS annually for ten consecutive years, AISec is the premier meeting place for researchers interested in the intersection of security, privacy, AI, and machine learning. Its role as a venue has been to merge practical security problems with advances in AI and machine learning. In doing so, researchers have also developed theory and analytics unique to this domain and have explored diverse topics such as learning in game-theoretic adversarial environments, privacy-preserving learning, and applications to spam and intrusion detection.
AISec 2017 received 36 submissions, of which 11 (30%) were selected for publication and presentation as full papers. We also accepted 3 additional short papers: two-page papers presented in a lightning round (10 minutes) at the workshop. Submissions came from researchers in 15 countries and from a wide variety of institutions, both academic and corporate.
The accepted papers were organized into the following thematic groups:
Deep Learning, concerning the analysis of the security properties of deep neural networks against test-time evasion and training-time poisoning attacks;
Authentication and Intrusion Detection, related to systems that use machine learning to solve a particular security problem;
Defense against Poisoning, related to the discussion of countermeasures that mitigate the impact of training-time poisoning attacks;
Malware, concerning automatic malware detection and classification.
The keynote address is given by Aylin Caliskan of Princeton University, USA, in a talk entitled "Beyond Big Data: What Can We Learn from AI Models?" In this talk, Dr. Caliskan discusses how to use machine learning and natural language processing in novel ways to interpret big data, develop privacy and security attacks, and gain insights about humans and society. She discusses how analyzing machine learning models' internal representations reveals how the artificial intelligence perceives the world and uncovers facts about society and the use of language that have implications for privacy, security, and fairness in machine learning.
Proceeding Downloads
Beyond Big Data: What Can We Learn from AI Models?: Invited Keynote
My research involves the heavy use of machine learning and natural language processing in novel ways to interpret big data, develop privacy and security attacks, and gain insights about humans and society through these methods. I do not use machine ...
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Neural networks are known to be vulnerable to adversarial examples: inputs that are close to natural inputs but classified incorrectly. In order to better understand the space of adversarial examples, we survey ten recent proposals that are designed for ...
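To make the surveyed threat concrete: the canonical way to craft an adversarial example is to perturb a clean input along the gradient of the model's loss. Below is a minimal sketch of the fast gradient sign method (FGSM) against a toy linear classifier, in plain numpy; it illustrates the attack the detectors target, not any of the ten detection proposals themselves.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_linear(x, y, w, b, eps):
    """Perturb x (true label y in {0, 1}) to increase the logistic loss
    of the linear model sigmoid(w.x + b): the fast gradient sign method."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w              # d(logistic loss)/dx for this model
    return x + eps * np.sign(grad_x)

# Toy usage: a point classified correctly, then misclassified after FGSM.
rng = np.random.default_rng(0)
w, b = rng.normal(size=20), 0.0
x = w / np.linalg.norm(w)             # the model confidently labels this 1
x_adv = fgsm_linear(x, 1.0, w, b, eps=0.5)
print(sigmoid(w @ x + b) > 0.5, sigmoid(w @ x_adv + b) > 0.5)  # True False
```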
ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models
Deep neural networks (DNNs) are one of the most prominent technologies of our time, as they achieve state-of-the-art performance in many machine learning tasks, including but not limited to image classification, text mining, and speech processing. ...
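The core trick behind zeroth-order ("ZOO-style") attacks is to estimate gradients from loss queries alone, so neither model internals nor a substitute model is needed. A minimal sketch, assuming only numpy and a scalar black-box loss; it uses coordinate-wise symmetric differences as in ZOO's estimator, with the paper's many refinements (coordinate batching, dimension reduction, etc.) omitted:

```python
import numpy as np

def zoo_gradient_estimate(loss, x, h=1e-4, n_coords=None, rng=None):
    """Estimate d(loss)/dx by central differences, querying `loss` only
    as a black box; optionally sample a random subset of coordinates."""
    rng = rng or np.random.default_rng()
    d = x.size
    coords = rng.choice(d, size=n_coords or d, replace=False)
    g = np.zeros_like(x)
    for i in coords:
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (loss(x + e) - loss(x - e)) / (2 * h)
    return g

# Toy usage: descend the estimated gradient of an unknown quadratic.
target = np.array([1.0, -2.0, 0.5])
loss = lambda x: np.sum((x - target) ** 2)   # pretend this is a black box
x = np.zeros(3)
for _ in range(200):
    x -= 0.1 * zoo_gradient_estimate(loss, x)
print(np.round(x, 3))                        # approaches `target`
```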
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
- Luis Muñoz-González,
- Battista Biggio,
- Ambra Demontis,
- Andrea Paudice,
- Vasin Wongrassamee,
- Emil C. Lupu,
- Fabio Roli
A number of online services nowadays rely upon machine learning to extract valuable information from data collected in the wild. This exposes learning algorithms to the threat of data poisoning, i.e., a coordinated attack in which a fraction of the ...
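The paper crafts poisoning points with back-gradient optimization; as a far simpler illustration of the threat model itself, the sketch below (scikit-learn assumed; not the paper's attack) flips the labels of a growing fraction of the training set and measures the resulting drop in test accuracy:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for frac in (0.0, 0.1, 0.2, 0.4):            # fraction of poisoned points
    y_pois = y_tr.copy()
    n = int(frac * len(y_pois))
    y_pois[:n] = 1 - y_pois[:n]              # adversarial label flips
    acc = LogisticRegression(max_iter=1000).fit(X_tr, y_pois).score(X_te, y_te)
    print(f"poisoned {frac:.0%} -> test accuracy {acc:.3f}")
```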
Efficient Defenses Against Adversarial Attacks
Following the recent adoption of deep neural networks (DNNs) across a wide range of applications, adversarial attacks against these models have proven to be an indisputable threat. Adversarial samples are crafted with a deliberate intention of ...
An Early Warning System for Suspicious Accounts
In the face of large-scale automated cyber-attacks to large online services, fast detection and remediation of compromised accounts are crucial to limit the spread of new attacks and to mitigate the overall damage to users, companies, and the public at ...
Differentially Private Noisy Search with Applications to Anomaly Detection (Abstract)
We consider the problem of privacy-sensitive anomaly detection - screening to detect individuals, behaviors, areas, or data samples of high interest. What defines an anomaly is context-specific; for example, a spoofed rather than genuine user attempting ...
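For background on the abstract's setting, the basic building block of differentially private release is the Laplace mechanism: add noise scaled to the query's sensitivity divided by the privacy budget. A minimal sketch with illustrative scores and threshold, not the paper's noisy-search construction:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a numeric query with epsilon-differential privacy by
    adding Laplace(sensitivity / epsilon) noise."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(scale=sensitivity / epsilon)

# Screening example: noisy per-user anomaly scores, thresholded afterwards.
scores = np.array([3.0, 1.0, 42.0, 2.0, 5.0])      # hypothetical raw scores
noisy = [laplace_mechanism(s, sensitivity=1.0, epsilon=0.5) for s in scores]
print([i for i, v in enumerate(noisy) if v > 20])  # likely [2], but random
```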
Malware Analysis of Imaged Binary Samples by Convolutional Neural Network with Attention Mechanism
This paper presents a method to extract important byte sequences in malware samples by application of convolutional neural network (CNN) to images converted from binary data. This method, by combining a technique called the attention mechanism into CNN, ...
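The binary-to-image step this line of work builds on is easy to sketch: interpret a file's raw bytes as a fixed-width grayscale image that a CNN can consume. The width of 256 and the sample path below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def binary_to_image(path, width=256):
    """Read raw bytes and reshape them into a 2-D grayscale array."""
    data = np.fromfile(path, dtype=np.uint8)
    rows = len(data) // width
    return data[: rows * width].reshape(rows, width)  # trim ragged tail

# img = binary_to_image("sample.exe")   # hypothetical malware sample
# `img` can then be resized and fed to a CNN with an attention layer.
```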
Generating Look-alike Names For Security Challenges
Motivated by the need to automatically generate behavior-based security challenges to improve user authentication for web services, we consider the problem of large-scale construction of realistic-looking names to serve as aliases for real individuals. ...
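One classical way to mass-produce plausible names, shown purely to make the task concrete (the paper's generator may differ), is a character-level Markov chain trained on a list of real names:

```python
import random
from collections import defaultdict

def train_markov(names, order=2):
    """Map each `order`-character context to the characters that follow it."""
    model = defaultdict(list)
    for name in names:
        padded = "^" * order + name.lower() + "$"
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def generate(model, order=2, rng=random):
    state, out = "^" * order, []
    while True:
        ch = rng.choice(model[state])
        if ch == "$":                      # end-of-name marker
            return "".join(out).capitalize()
        out.append(ch)
        state = state[1:] + ch

model = train_markov(["maria", "marcus", "martina", "mario", "marta"])
print(generate(model))   # e.g. "Marta", "Marcus", or a blend like "Martio"
```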
In (Cyber)Space Bots Can Hear You Speak: Breaking Audio CAPTCHAs Using OTS Speech Recognition
Captchas have become almost ubiquitous, as they are commonly deployed by websites as part of their defenses against fraudsters. However, visual captchas pose a considerable obstacle to certain groups of users, such as the visually impaired, and that has ...
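The "off-the-shelf" ingredient can be as simple as pointing a stock recognizer at the captcha audio. A hedged sketch using the SpeechRecognition package's offline Sphinx backend; the package choice and file name are assumptions, and the paper's pipeline involves more than this single call:

```python
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("captcha.wav") as source:    # hypothetical captcha audio
    audio = recognizer.record(source)
try:
    print(recognizer.recognize_sphinx(audio))  # offline OTS transcription
except sr.UnknownValueError:
    print("could not decode the audio")
```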
Practical Machine Learning for Cloud Intrusion Detection: Challenges and the Way Forward
Operationalizing machine learning based security detections is extremely challenging, especially in a continuously evolving cloud environment. Conventional anomaly detection does not produce satisfactory results for analysts that are investigating ...
Robust Linear Regression Against Training Data Poisoning
The effectiveness of supervised learning techniques has made them ubiquitous in research and practice. In high-dimensional settings, supervised learning commonly relies on dimensionality reduction to improve performance and identify the most important ...
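One generic way to harden regression against such poisoning, sketched for illustration only (iterative trimming of high-residual points; not necessarily the estimator proposed in the paper):

```python
import numpy as np

def trimmed_least_squares(X, y, keep=0.8, iters=10):
    """Repeatedly fit OLS on the `keep` fraction of points with the
    smallest residuals, so grossly poisoned points drop out of the fit."""
    n = len(y)
    idx = np.arange(n)
    for _ in range(iters):
        w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        resid = np.abs(X @ w - y)
        idx = np.argsort(resid)[: int(keep * n)]   # keep best-fitting points
    return w

# Toy usage: 10% of the responses carry a large adversarial offset.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
w_true = np.arange(1.0, 6.0)
y = X @ w_true + 0.1 * rng.normal(size=500)
y[:50] += 25.0                                     # poisoned responses
print(np.round(trimmed_least_squares(X, y), 2))    # close to [1 ... 5]
```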
Mitigating Poisoning Attacks on Machine Learning Models: A Data Provenance Based Approach
The use of machine learning models has become ubiquitous. Their predictions are used to make decisions about healthcare, security, investments and many other critical applications. Given this pervasiveness, it is not surprising that adversaries have an ...
Malware Classification and Class Imbalance via Stochastic Hashed LZJD
There are currently few methods for malware classification that do not require domain knowledge to apply. In this work, we develop our new SHWeL feature vector representation, by extending the recently proposed Lempel-Ziv ...
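For context, the LZJD representation being extended can be sketched in a few lines: collect the distinct phrases from an LZ-style parse of each byte string and compare files by Jaccard distance over those sets (the hashing and stochastic components of SHWeL are omitted here):

```python
def lz_set(data: bytes) -> set:
    """LZ78-style parsing: grow a phrase until it is new, record it, restart."""
    seen, current = set(), b""
    for b in data:
        current += bytes([b])
        if current not in seen:
            seen.add(current)
            current = b""
    return seen

def jaccard_distance(a: bytes, b: bytes) -> float:
    sa, sb = lz_set(a), lz_set(b)
    return 1.0 - len(sa & sb) / len(sa | sb)

print(jaccard_distance(b"abcabcabc", b"abcabcxyz"))  # ~0.56: shared structure
print(jaccard_distance(b"abcabcabc", b"qrsqrsqrs"))  # 1.0: nothing shared
```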
Learning the PE Header, Malware Detection with Minimal Domain Knowledge
Many efforts have been made to use various forms of domain knowledge in malware detection. Currently there exist two common approaches to malware detection without domain knowledge, namely byte n-grams and strings. In this work we explore the ...
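The byte n-gram baseline the abstract mentions is simple to make concrete: slide a window over the raw bytes and count the resulting n-grams as sparse features. The window size and toy input (the first bytes of a DOS/PE header) are illustrative:

```python
from collections import Counter

def byte_ngrams(data: bytes, n: int = 4) -> Counter:
    """Count all overlapping length-n byte substrings."""
    return Counter(data[i:i + n] for i in range(len(data) - n + 1))

features = byte_ngrams(b"MZ\x90\x00\x03\x00\x00\x00", n=2)
print(features.most_common(3))   # b"\x00\x00" dominates this toy header
```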