- Sponsor: SIGMM
Deep learning has achieved significant success in multimedia fields including computer vision, natural language processing, and acoustics. However, research in adversarial learning shows that deep models are highly vulnerable to adversarial examples. Extensive work has demonstrated that adversarial examples can easily fool deep neural networks into wrong predictions, threatening practical deep learning applications in both the digital and physical worlds. Though challenging, discovering and harnessing adversarial attacks is beneficial for diagnosing model blind spots and for further understanding and improving multimedia systems in practice. In this workshop, we aim to bring together researchers from the fields of adversarial machine learning, model robustness, and explainable AI to discuss recent research and future directions for the adversarial robustness of deep learning models, with a particular focus on multimedia applications, including computer vision, acoustics, etc.
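To make the notion of an adversarial example concrete, the canonical fast gradient sign method (FGSM) perturbs an input along the sign of the loss gradient. Below is a minimal sketch using a toy linear scorer with an analytic gradient; the model, weights, and perturbation budget are all illustrative and not taken from any workshop paper:

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Fast Gradient Sign Method: step in the sign of the loss gradient."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)  # keep pixel values in a valid range

# Toy linear "model": score = w . x, so the loss gradient w.r.t. x is just w.
rng = np.random.default_rng(0)
x = rng.uniform(0.2, 0.8, size=16)   # a "clean" input
w = rng.standard_normal(16)          # weights of the toy classifier

x_adv = fgsm_perturb(x, grad=w, eps=0.03)
```

The per-pixel change is bounded by `eps`, which is why such perturbations can remain imperceptible while still moving the model's score.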
Proceeding Downloads
Comparative Study of Adversarial Training Methods for Long-tailed Classification
Adversarial training originated in image classification to address the problem of adversarial attacks, where an invisible perturbation in an image leads to a significant change in the model's decision. It has recently been observed to be effective in ...
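Adversarial training is usually framed as a min-max problem: an inner step crafts a worst-case perturbation, and an outer step updates the model on that perturbed input. The sketch below shows a toy logistic-regression instance with a single FGSM step as the inner maximizer; the task, hyperparameters, and function names are illustrative only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adv_train_step(w, x, y, eps, lr):
    """One min-max step: craft an FGSM example, then descend on its loss.
    For logistic loss with p = sigmoid(w . x), dL/dx = (p - y) * w."""
    p = sigmoid(w @ x)
    x_adv = x + eps * np.sign((p - y) * w)   # inner maximization (one FGSM step)
    p_adv = sigmoid(w @ x_adv)
    grad_w = (p_adv - y) * x_adv             # outer minimization on the adversarial input
    return w - lr * grad_w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))
y = (X[:, 0] > 0).astype(float)              # a linearly separable toy task
w = np.zeros(8)
for _ in range(5):
    for xi, yi in zip(X, y):
        w = adv_train_step(w, xi, yi, eps=0.05, lr=0.1)
acc = np.mean((sigmoid(X @ w) > 0.5) == (y == 1))
```

Training on perturbed inputs rather than clean ones is what trades a little clean accuracy for robustness, the trade-off several of the papers above study.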
Imperceptible Adversarial Examples by Spatial Chroma-Shift
Deep neural networks have been shown to be vulnerable to various kinds of adversarial perturbations. In addition to the widely studied additive-noise-based perturbations, adversarial examples can also be created by applying a per-pixel spatial drift on ...
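Unlike additive noise, a spatial perturbation moves pixels rather than changing their values. As a rough illustration (not the authors' method), the sketch below warps an image by a per-pixel integer flow field with nearest-neighbor sampling:

```python
import numpy as np

def spatial_shift(img, flow):
    """Warp an image by a per-pixel integer flow field (nearest-neighbor).
    img: (H, W) array; flow: (H, W, 2) integer offsets (dy, dx)."""
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(ys + flow[..., 0], 0, H - 1)
    src_x = np.clip(xs + flow[..., 1], 0, W - 1)
    return img[src_y, src_x]

img = np.arange(16.0).reshape(4, 4)
flow = np.zeros((4, 4, 2), dtype=int)
flow[..., 1] = 1  # every pixel samples its source one column to the right
warped = spatial_shift(img, flow)
```

Because the warp only rearranges existing pixel values, no value outside the original image's range ever appears, which is one reason spatial perturbations can evade norm-bounded defenses tuned to additive noise.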
Generating Adversarial Remote Sensing Images via Pan-Sharpening Technique
Pan-sharpening is one of the most commonly used techniques in remote sensing; it fuses panchromatic (PAN) and multispectral (MS) images to obtain images with both high spectral and high spatial resolution. Due to these advantages, researchers usually ...
Improving Generalization of Deepfake Detection with Domain Adaptive Batch Normalization
Deepfake, a well-known face forgery technique, has raised serious concerns about personal privacy and social media security. As a result, many deepfake detection methods have emerged and achieve outstanding performance in the single-dataset case. ...
Comparative Study of Adversarial Training Methods for Cold-Start Recommendation
Adversarial training in recommendation originated as a way to improve the robustness of recommenders to attack signals and has recently shown promising results in alleviating cold-start recommendation. However, existing methods usually must make a trade-off ...
Detecting Adversarial Patch Attacks through Global-local Consistency
Recent works have clearly demonstrated the threat of adversarial patch attacks to real-world vision media systems. By arbitrarily modifying pixels within a small restricted area of the image, adversarial patches can mislead neural-network-based image ...
Real World Robustness from Systematic Noise
Systematic error, which is not determined by chance, often refers to inaccuracy (in either the observation or the measurement process) inherent to a system. In this paper, we exhibit some long-neglected but frequently occurring adversarial examples ...
Enhancing Adversarial Examples Transferability via Ensemble Feature Manifolds
An adversarial attack causes intended misclassification by adding imperceptible perturbations to benign inputs, providing a way to evaluate the robustness of models. Many existing adversarial attacks have achieved good performance ...
An Investigation on Sparsity of CapsNets for Adversarial Robustness
The routing-by-agreement mechanism in capsule networks (CapsNets) is used to build visual hierarchical relationships with a characteristic of assigning parts to wholes. The connections between capsules of different layers become sparser with more ...
Frequency Centric Defense Mechanisms against Adversarial Examples
An adversarial example (AE) aims to fool a convolutional neural network by introducing small perturbations into the input image. The proposed work uses the magnitude and phase of the Fourier spectrum and the entropy of the image to defend against AEs. We ...
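Frequency-centric defenses operate on the magnitude and phase spectra rather than raw pixels. As background only (not the paper's specific defense), the sketch below shows the standard decomposition of an image into those two spectra and the lossless round trip back:

```python
import numpy as np

def fourier_decompose(img):
    """Split an image into magnitude and phase spectra via the 2-D FFT."""
    spec = np.fft.fft2(img)
    return np.abs(spec), np.angle(spec)

def fourier_reconstruct(mag, phase):
    """Rebuild the image from its magnitude and phase (inverse FFT)."""
    return np.real(np.fft.ifft2(mag * np.exp(1j * phase)))

img = np.random.default_rng(1).random((8, 8))
mag, phase = fourier_decompose(img)
recon = fourier_reconstruct(mag, phase)  # recovers img up to float rounding
```

A defense in this family can then filter or re-normalize one spectrum (e.g., the magnitude) before reconstruction, since small pixel-space perturbations often leave a disproportionate footprint there.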
- Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia