Apr 11, 2017 · Abstract: Adversarial examples are maliciously perturbed inputs designed to mislead machine learning (ML) models at test-time.
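For concreteness, one standard way to craft such a maliciously perturbed input is the fast gradient sign method (FGSM). The PyTorch sketch below uses a toy stand-in classifier, a random input, and an assumed budget epsilon = 0.1; it illustrates the general idea, not the setup of any paper cited here.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
x = torch.rand(1, 1, 28, 28, requires_grad=True)             # stand-in input
y = torch.tensor([3])                                        # assumed true label
epsilon = 0.1                                                # perturbation budget

loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# Move each pixel a step of size epsilon in the direction that
# increases the loss; this may flip the model's prediction.
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```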
Adversarial examples are known to transfer across models.
Tramèr et al. study adversarial subspaces: subspaces of the input space that are spanned by multiple, mutually orthogonal adversarial perturbations.
They find that adversarial examples span a contiguous subspace of large (~25) dimensionality, which indicates that it may be possible to design defenses ...
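One construction behind this kind of result builds k mutually orthogonal directions that are all equally (and maximally, at 1/sqrt(k)) aligned with the loss gradient. The numpy sketch below is one reading of such a gradient-aligned construction, using a Householder reflection; the gradient g is a random stand-in, and this is an illustrative sketch under those assumptions, not the paper's reference implementation.

```python
import numpy as np

def gaas_directions(g, k, seed=0):
    """Return k orthonormal directions whose alignment with the normalized
    gradient g is 1/sqrt(k) each, the maximum achievable by k mutually
    orthogonal unit vectors."""
    rng = np.random.default_rng(seed)
    g_hat = g / np.linalg.norm(g)
    # Orthonormal basis (d, k) of a subspace containing g_hat, with the
    # first basis vector equal to g_hat itself.
    m = np.column_stack([g_hat, rng.standard_normal((g.size, k - 1))])
    basis, _ = np.linalg.qr(m)
    basis[:, 0] *= np.sign(basis[:, 0] @ g_hat)   # undo any QR sign flip
    # Householder reflection H sends e_1 to the simplex diagonal
    # c = (1/sqrt(k), ..., 1/sqrt(k)); its rows are orthonormal and each
    # has first coordinate 1/sqrt(k), i.e. the desired gradient alignment.
    c = np.full(k, 1.0 / np.sqrt(k))
    u = np.eye(k)[0] - c
    H = np.eye(k) if np.allclose(u, 0) else np.eye(k) - 2.0 * np.outer(u, u) / (u @ u)
    return H @ basis.T                            # (k, d): one direction per row

g = np.random.default_rng(1).standard_normal(28 * 28)  # stand-in loss gradient
R = gaas_directions(g, k=25)
print(np.allclose(R @ R.T, np.eye(25)))   # True: directions are orthonormal
print(R @ (g / np.linalg.norm(g)))        # each entry ~ 1/sqrt(25) = 0.2
```

Scaling each row to the perturbation budget and checking which directions still flip the model's prediction estimates how many orthogonal adversarial directions exist at a given point, which is how a figure like ~25 dimensions on MNIST can be probed.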
Targeted adversarial transferability is defined by whether or not the target model assigns the adversarial example the same target class toward which it was crafted on the source model.
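Under that definition, the targeted transfer rate can be measured directly. In the sketch below, target_model is a hypothetical callable returning class logits; the name and interface are assumptions for illustration.

```python
import numpy as np

def targeted_transfer_rate(target_model, x_adv, y_target):
    """Fraction of adversarial examples, crafted on a source model toward
    class y_target, that the target model also classifies as y_target.
    target_model is assumed to be a callable returning class logits."""
    logits = target_model(x_adv)
    return float(np.mean(np.argmax(logits, axis=1) == y_target))
```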
In this work, we exploit a feature-space perturbation method to examine the generality of the learned features of image forensics networks and ...
Jun 5, 2024 · Our paper introduces a ranking strategy that refines the transfer attack process, enabling the attacker to estimate the likelihood of success ...
Aug 13, 2018 · Florian Tramèr, Nicolas Papernot, Ian J. Goodfellow, Dan Boneh, Patrick D. McDaniel: The Space of Transferable Adversarial Examples.
May 24, 2017 · Adversarial subspaces. Introduces methods for discovering a subspace of adversarial perturbations; on MNIST, ~25 dimensions.