DOI: 10.1145/3475724
ADVM '21: Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia
ACM 2021 Proceeding
Publisher:
  • Association for Computing Machinery, New York, NY, United States
Conference:
MM '21: ACM Multimedia Conference, Virtual Event, China, 20 October 2021
ISBN:
978-1-4503-8672-2
Published:
22 October 2021
Abstract

Deep learning has achieved significant success in multimedia fields such as computer vision, natural language processing, and acoustics. However, research in adversarial learning shows that deep models are highly vulnerable to adversarial examples. Extensive work has demonstrated that adversarial examples can easily fool deep neural networks into wrong predictions, threatening practical deep learning applications in both the digital and physical worlds. Though challenging, discovering and harnessing adversarial attacks is beneficial for diagnosing model blind spots and for further understanding and improving multimedia systems in practice. In this workshop, we aim to bring together researchers from the fields of adversarial machine learning, model robustness, and explainable AI to discuss recent research and future directions for the adversarial robustness of deep learning models, with a particular focus on multimedia applications, including computer vision, acoustics, etc.

SESSION: Oral Session
research-article
Comparative Study of Adversarial Training Methods for Long-tailed Classification

Adversarial training originated in image classification as a way to address adversarial attacks, where an invisible perturbation in an image leads to a significant change in model decision. It has recently been observed to be effective in ...
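
For readers new to the area, below is a minimal sketch of one standard adversarial training step (single-step FGSM on the input, followed by an update on the adversarial batch). It illustrates the general recipe, not this paper's long-tailed variant; the model, optimizer, and epsilon are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def fgsm_adversarial_training_step(model, x, y, optimizer, epsilon=8/255):
        # Craft an FGSM perturbation: one gradient-sign step on the input
        # (x is assumed to be an image batch scaled to [0, 1]).
        x = x.clone().detach().requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x), y), x)[0]
        x_adv = (x + epsilon * grad.sign()).clamp(0, 1).detach()

        # Update the model on the adversarial batch instead of the clean one.
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()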

research-article
Imperceptible Adversarial Examples by Spatial Chroma-Shift

Deep neural networks have been shown to be vulnerable to various kinds of adversarial perturbations. In addition to the widely studied additive-noise perturbations, adversarial examples can also be created by applying a per-pixel spatial drift on ...
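
As a toy illustration of the underlying idea (displacing only the chroma planes while leaving luminance intact), the sketch below applies a uniform spatial shift in YCbCr space. The actual attack optimizes a per-pixel flow field against the target model; the dx/dy parameters here are illustrative.

    import numpy as np
    from PIL import Image

    def chroma_shift(img, dx=1, dy=1):
        # Convert to YCbCr so only the chroma planes are displaced.
        ycbcr = np.array(img.convert("YCbCr"))
        y, cb, cr = ycbcr[..., 0], ycbcr[..., 1], ycbcr[..., 2]
        # Uniform drift of Cb/Cr; luminance is untouched, which keeps
        # the change hard to perceive.
        cb = np.roll(cb, shift=(dy, dx), axis=(0, 1))
        cr = np.roll(cr, shift=(dy, dx), axis=(0, 1))
        out = np.stack([y, cb, cr], axis=-1).astype(np.uint8)
        return Image.fromarray(out, mode="YCbCr").convert("RGB")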

research-article
Generating Adversarial Remote Sensing Images via Pan-Sharpening Technique

Pan-sharpening is one of the most commonly used techniques in remote sensing; it fuses panchromatic (PAN) and multispectral (MS) images to obtain images with both high spectral and high spatial resolution. Due to these advantages, researchers usually ...
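
For context, one classic pan-sharpening scheme (the Brovey transform, chosen here for brevity; the paper does not necessarily use it) can be sketched as follows, assuming the MS image has already been upsampled to the PAN grid.

    import numpy as np

    def brovey_pansharpen(ms, pan, eps=1e-8):
        # ms:  (H, W, B) multispectral image, upsampled to the PAN grid
        # pan: (H, W)    panchromatic image
        # Scale each band by the ratio of PAN to the band mean, so the
        # output keeps the MS spectra but inherits PAN spatial detail.
        intensity = ms.mean(axis=-1, keepdims=True)
        return ms * (pan[..., None] / (intensity + eps))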

research-article
Improving Generalization of Deepfake Detection with Domain Adaptive Batch Normalization

Deepfake, a well-known face forgery technique, has raised serious concerns about personal privacy and social media security. As a result, many deepfake detection methods have emerged and achieve outstanding performance in the single-dataset case. ...
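
A common recipe for domain-adaptive batch normalization, sketched below under the assumption of a PyTorch model and a data loader over the target domain, is to re-estimate the BN running statistics on target data with all learned weights frozen; the paper's exact variant may differ.

    import torch

    @torch.no_grad()
    def adapt_bn_statistics(model, target_loader, device="cpu"):
        # Reset the BN running statistics, then re-estimate them on the
        # target domain; no weights are updated.
        for m in model.modules():
            if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
                m.reset_running_stats()
        model.train()  # BN updates running stats only in train mode
        for x, _ in target_loader:
            model(x.to(device))
        model.eval()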

SESSION: Poster Session
research-article
Comparative Study of Adversarial Training Methods for Cold-Start Recommendation

Adversarial training in recommendation originated as a way to improve the robustness of recommenders to attack signals and has recently shown promising results in alleviating cold-start recommendation. However, existing methods usually must make a trade-off ...
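
In recommendation, adversarial training typically perturbs embeddings rather than raw inputs (as in APR-style methods). A minimal sketch follows; score_fn (a model mapping user/item embeddings to logits) and eps are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def adversarial_embedding_loss(user_emb, item_emb, y, score_fn, eps=0.5):
        # Perturb the embeddings in the direction that maximizes the
        # loss, then penalize the model on the perturbed pair.
        u = user_emb.detach().clone().requires_grad_(True)
        i = item_emb.detach().clone().requires_grad_(True)
        clean = F.binary_cross_entropy_with_logits(score_fn(u, i), y)
        gu, gi = torch.autograd.grad(clean, [u, i])
        adv_u = u + eps * F.normalize(gu, dim=-1)
        adv_i = i + eps * F.normalize(gi, dim=-1)
        adv = F.binary_cross_entropy_with_logits(score_fn(adv_u, adv_i), y)
        return clean + adv  # clean + adversarial objective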

research-article
Detecting Adversarial Patch Attacks through Global-local Consistency

Recent works have clearly demonstrated the threat of adversarial patch attacks to real-world vision media systems. By arbitrarily modifying pixels within a small restricted area of the image, adversarial patches can mislead neural-network-based image ...
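
One simple way to operationalize a global-local consistency check (a sketch, not necessarily this paper's detector): compare the prediction on the full image against predictions with one local region masked at a time; a localized patch tends to flip the prediction exactly when it is covered.

    import torch

    @torch.no_grad()
    def consistency_score(model, x, grid=4):
        # x: (B, C, H, W) image batch; returns the fraction of masked
        # views whose prediction disagrees with the global one.
        _, _, H, W = x.shape
        global_pred = model(x).argmax(1)
        h, w = H // grid, W // grid
        flips = 0.0
        for i in range(grid):
            for j in range(grid):
                masked = x.clone()
                masked[:, :, i*h:(i+1)*h, j*w:(j+1)*w] = 0.0
                flips += (model(masked).argmax(1) != global_pred).float().mean()
        return flips / (grid * grid)  # high score -> suspicious input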

research-article
Open Access
Real World Robustness from Systematic Noise

Systematic error, which is not determined by chance, often refers to the inaccuracy (in either the observation or the measurement process) inherent to a system. In this paper, we exhibit some long-neglected but frequently occurring adversarial examples ...
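
A concrete instance of such systematic noise is the preprocessing gap between nominally equivalent libraries. The sketch below (path and size are illustrative) measures the repeatable discrepancy between PIL and OpenCV bilinear resizing, which can be large enough to flip a classifier's prediction.

    import numpy as np
    import cv2
    from PIL import Image

    def preprocessing_gap(path, size=(224, 224)):
        # Two "equivalent" bilinear-resize pipelines produce
        # systematically different pixels (exact values depend on the
        # library versions installed).
        img = Image.open(path).convert("RGB")
        a = np.asarray(img.resize(size, Image.BILINEAR), dtype=np.float32)
        b = cv2.resize(np.asarray(img, dtype=np.float32), size,
                       interpolation=cv2.INTER_LINEAR)
        return np.abs(a - b).max()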

research-article
Enhancing Adversarial Examples Transferability via Ensemble Feature Manifolds

An adversarial attack is a technique that causes intentional misclassification by adding imperceptible perturbations to benign inputs, and it provides a way to evaluate the robustness of models. Many existing adversarial attacks have achieved good performance ...
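
For reference, classic ensemble attacks average the loss over several surrogate models. The loss-level sketch below shows that baseline idea; the paper itself ensembles feature manifolds, which is a different ensembling point.

    import torch
    import torch.nn.functional as F

    def ensemble_fgsm(models, x, y, epsilon=8/255):
        # Attack the average loss of several surrogates; examples that
        # fool the whole ensemble tend to transfer better to unseen models.
        x = x.clone().detach().requires_grad_(True)
        loss = sum(F.cross_entropy(m(x), y) for m in models) / len(models)
        grad = torch.autograd.grad(loss, x)[0]
        return (x + epsilon * grad.sign()).clamp(0, 1).detach()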

research-article
An Investigation on Sparsity of CapsNets for Adversarial Robustness

The routing-by-agreement mechanism in capsule networks (CapsNets) builds visual hierarchical relationships by assigning parts to wholes. The connections between capsules of different layers become sparser with more ...
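
For readers unfamiliar with CapsNets, routing-by-agreement (in the style of Sabour et al., 2017) can be sketched as follows; the tensor shapes and iteration count are illustrative.

    import torch
    import torch.nn.functional as F

    def squash(s, eps=1e-8):
        # Shrink vector length into [0, 1) while preserving direction.
        n2 = (s ** 2).sum(-1, keepdim=True)
        return (n2 / (1 + n2)) * s / (n2.sqrt() + eps)

    def dynamic_routing(u_hat, iters=3):
        # u_hat: (B, n_in, n_out, d) predictions from lower-level capsules.
        b = torch.zeros(u_hat.shape[:3], device=u_hat.device)  # routing logits
        for _ in range(iters):
            c = F.softmax(b, dim=2)                       # coupling coefficients
            v = squash((c.unsqueeze(-1) * u_hat).sum(1))  # (B, n_out, d)
            b = b + (u_hat * v.unsqueeze(1)).sum(-1)      # agreement update
        return v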

research-article
Frequency Centric Defense Mechanisms against Adversarial Examples

An adversarial example (AE) aims to fool a convolutional neural network by introducing small perturbations into the input image. The proposed work uses the magnitude and phase of the Fourier spectrum and the entropy of the image to defend against AEs. We ...
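
The raw ingredients named in the abstract (Fourier magnitude, Fourier phase, and image entropy) can be computed as below; this is a sketch of the features only, and how the defense combines them is more involved. The input is assumed to be a grayscale uint8 array.

    import numpy as np

    def frequency_features(gray):
        # 2-D Fourier spectrum, split into magnitude and phase.
        spectrum = np.fft.fftshift(np.fft.fft2(gray))
        magnitude = np.abs(spectrum)
        phase = np.angle(spectrum)
        # Shannon entropy of the intensity histogram.
        hist, _ = np.histogram(gray, bins=256, range=(0, 256), density=True)
        hist = hist[hist > 0]
        entropy = -(hist * np.log2(hist)).sum()
        return magnitude, phase, entropy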

Contributors
  • Nanyang Technological University
  • Johns Hopkins University
  • California Institute of Technology
  • Beihang University
  • University of California, Berkeley
  • Johns Hopkins University
  • Arizona State University
  • Beihang University