DOI: 10.1145/3474370
MTD '21: Proceedings of the 8th ACM Workshop on Moving Target Defense
ACM 2021 Proceeding
Publisher:
  • Association for Computing Machinery, New York, NY, United States
Conference:
CCS '21: 2021 ACM SIGSAC Conference on Computer and Communications Security, Virtual Event, Republic of Korea, 15 November 2021
ISBN:
978-1-4503-8658-6
Published:
15 November 2021
Abstract

It is our great pleasure to welcome you to the 2021 ACM Workshop on Moving Target Defense (MTD'21). The mission of MTD is to provide a forum for researchers and practitioners in this area to exchange their novel ideas, findings, experiences, and lessons learned. The eighth MTD workshop has a special focus on the lessons learned from past years of research on moving target defenses, and on the challenges and opportunities the community faces moving forward.

The call for papers attracted submissions from North America and Europe. Each submission received at least three reviews and was then discussed and carefully debated by the members of the program committee. After careful consideration, the program committee accepted three full technical papers. To highlight the important lessons learned in the community so far, this year we also organized five invited talks covering broad research efforts that capture important aspects of MTDs. These invited papers capture many years of experience in designing, building, evaluating, and transitioning MTD technologies to practice.

SESSION: Session 1
invited-talk
Randomization-based Defenses against Data-Oriented Attacks

For nearly two decades now, the vast majority of critical software vulnerabilities have been memory corruption bugs in C and C++ programs [13, 14]. Attackers often exploit these bugs using control-flow hijacking techniques to seize control over ...

research-article
What's in the box: Deflecting Adversarial Attacks by Randomly Deploying Adversarially-Disjoint Models

Machine learning models are now widely deployed in real-world applications. However, the existence of adversarial examples has long been considered a real threat to such models. While numerous defenses aiming to improve robustness have been proposed,...
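
The abstract sketches a moving-target deployment in which each query is answered by one model drawn from a pool of adversarially-disjoint models. A minimal illustration of that serving pattern follows; the class name and the scikit-learn-style predict interface are assumptions, not the paper's implementation.

    import random

    class RandomizedModelPool:
        """Answer each query with a model drawn at random from a pool.

        Illustrative sketch only: `models` is any list of objects
        exposing a predict(x) method, e.g. adversarially-disjoint
        classifiers trained separately.
        """

        def __init__(self, models, seed=None):
            self.models = list(models)
            self.rng = random.Random(seed)

        def predict(self, x):
            # A fresh draw per query means an attacker who optimized
            # against one member model may face a different one.
            model = self.rng.choice(self.models)
            return model.predict(x)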

research-article
Combinatorial Boosting of Classifiers for Moving Target Defense Against Adversarial Evasion Attacks

Adversarial evasion attacks challenge the integrity of machine learning models by creating out-of-distribution samples that are consistently misclassified by these models. While a variety of detection and mitigation approaches have been proposed, they ...
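
One generic way to picture combinatorial selection over an ensemble (a sketch under assumed interfaces, not the paper's exact scheme): draw a random k-subset of base classifiers per query and return their plurality vote, so the effective decision boundary changes from query to query.

    import random
    from collections import Counter

    def combinatorial_vote(classifiers, x, k=3, rng=random):
        """Plurality vote over a randomly drawn k-subset of classifiers.

        Illustration only; each classifier is assumed to expose a
        predict(x) method returning a single label, and k must not
        exceed len(classifiers).
        """
        subset = rng.sample(classifiers, k)   # one of C(n, k) combinations
        votes = Counter(clf.predict(x) for clf in subset)
        return votes.most_common(1)[0][0]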

SESSION: Session 2
invited-talk
Public Access
Game Theoretic Models for Cyber Deception

Cyber deception has great potential in thwarting cyberattacks [1, 4, 8]. A defender (e.g., network administrator) can use deceptive cyber artifacts such as honeypots and fake services to confuse attackers (e.g., hackers) and thus reduce the success ...
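
As a toy illustration of the game-theoretic framing (the payoff numbers are hypothetical, not from the talk): the defender chooses where to place a honeypot, the attacker best-responds, and the defender prefers the placement with the best worst-case payoff.

    # Rows: defender honeypot placements; columns: attacker targets.
    # Entries are defender payoffs (made-up numbers for illustration).
    payoff = [
        [ 3, -2],   # honeypot on host A
        [-1,  2],   # honeypot on host B
    ]

    def maximin_placement(payoff):
        """Pure-strategy maximin: the attacker picks the column that
        hurts the defender most, so the defender picks the row whose
        worst entry is largest."""
        worst = [min(row) for row in payoff]
        best = max(range(len(payoff)), key=lambda i: worst[i])
        return best, worst[best]

    placement, value = maximin_placement(payoff)
    print(f"place honeypot {placement}, guaranteed payoff {value}")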

invited-talk
Using Honeypots to Catch Adversarial Attacks on Neural Networks

Deep neural networks (DNN) are known to be vulnerable to adversarial attacks. Numerous efforts either try to patch weaknesses in trained models, or try to make it difficult or costly to compute adversarial examples that exploit them. In our work, we ...
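
The honeypot idea here plants "trapdoor" signatures in the model so that attack optimization gravitates toward them; suspicious inputs can then be flagged by comparing internal activations against the stored signature. A bare-bones version of such a similarity check is sketched below; the threshold value and how the signature is extracted are assumptions, not the talk's method.

    import numpy as np

    def looks_trapped(activation, trapdoor_signature, threshold=0.9):
        """Flag an input whose internal activation vector aligns with a
        planted trapdoor signature (cosine similarity; the 0.9 cutoff
        is a placeholder, not a recommended setting)."""
        a = activation / np.linalg.norm(activation)
        s = trapdoor_signature / np.linalg.norm(trapdoor_signature)
        return float(a @ s) >= threshold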

invited-talk
Research Frontiers for Moving Target Defenses

New software security threats are constantly arising, including new classes of attacks such as the recent spate of micro-architectural vulnerabilities, from side-channels and speculative execution to attacks like Rowhammer that alter the physical state ...

SESSION: Keynote Talk
keynote
Moving Target Defense against Adversarial Machine Learning

As Machine Learning (ML) models are increasingly employed in a number of applications across a multitude of fields, the threat of adversarial attacks against ML models is also increasing. Adversarial samples crafted via specialized attack algorithms ...

SESSION: Session 3
research-article
Open Access
Concolic Execution of NMap Scripts for Honeyfarm Generation

Attackers rely upon a vast array of tools for automating attacks against vulnerable servers and services. It is often the case that when vulnerabilities are disclosed, scripts for detecting and exploiting them in tools such as Nmap and Metasploit are ...

Contributors
  • University of California, Riverside


Acceptance Rates

Overall Acceptance Rate 40 of 92 submissions, 43%
Year       Submitted   Accepted   Rate
MTD '18    5           5          100%
MTD '17    26          9          35%
MTD '16    26          9          35%
MTD '15    19          8          42%
MTD '14    16          9          56%
Overall    92          40         43%