Apr 18, 2019 · We intentionally inject trapdoors, honeypot weaknesses in the classification manifold that attract attackers searching for adversarial examples.
This work introduces trapdoors, describes an implementation that uses strategies similar to backdoor/Trojan attacks, and shows that by ...
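Since the snippets describe the trapdoor as being planted the way a backdoor/Trojan is, the injection step can be pictured as ordinary data poisoning: blend a trigger pattern into a small fraction of training inputs and relabel them with the label being defended, so the model learns the honeypot shortcut. A minimal NumPy sketch of that idea; the names (`inject_trapdoor`, `trigger`, `mask`, `ratio`) are illustrative assumptions, not the paper repository's API:

```python
import numpy as np

def inject_trapdoor(images, labels, trigger, mask, target_label,
                    ratio=0.1, seed=0):
    """Backdoor-style trapdoor injection (sketch): blend a trigger into a
    random fraction of the training set and relabel those samples with the
    protected (target) label."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(ratio * len(images)), replace=False)
    poisoned = images.astype(np.float32).copy()
    new_labels = labels.copy()
    # mask in [0, 1] controls where, and how strongly, the trigger is blended
    poisoned[idx] = (1 - mask) * poisoned[idx] + mask * trigger
    new_labels[idx] = target_label
    return poisoned, new_labels
```

Training normally on the poisoned set would then embed the trapdoor while, per the snippets, leaving classification of normal inputs essentially unchanged.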
This repository contains the code implementation of the paper "Gotta Catch'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks", at ACM CCS ...
Each trapdoor has minimal impact on classification of normal inputs, but leads attackers to produce adversarial inputs whose similarity to the trapdoor makes ...
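The detection side follows from that property: because attacker optimization gravitates toward the trapdoor, comparing an input's internal activations against the trapdoor's activation "signature" separates adversarial from benign inputs. A hedged sketch, assuming a hypothetical `feature_fn` that returns a flat penultimate-layer activation vector (helper names are illustrative, not the repository's API):

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between two flat activation vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def trapdoor_signature(feature_fn, clean_inputs, trigger, mask):
    """Average activations of trapdoored inputs: the honeypot 'signature'
    that adversarial examples are expected to resemble."""
    trapdoored = (1 - mask) * clean_inputs + mask * trigger
    return np.mean([feature_fn(x) for x in trapdoored], axis=0)

def flag_adversarial(feature_fn, x, signature, threshold):
    """Flag an input whose activations are unusually close to the signature."""
    return cosine(feature_fn(x), signature) > threshold
```

In practice the threshold would presumably be calibrated on clean inputs, e.g. as a high percentile of their similarity to the signature, so that false positives on normal traffic stay rare.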
Mar 9, 2022 · Bibliographic details on Gotta Catch 'Em All: Using Concealed Trapdoors to Detect Adversarial Attacks on Neural Networks.
Deep neural networks are vulnerable to adversarial attacks. Numerous efforts have focused on defenses that either try to patch 'holes' in trained models or ...
Gotta Catch 'Em All: Using Concealed Trapdoors to Detect Adversarial Attacks on Neural Networks. S. Shan, E. Willson, B. Wang, B. Li, H. Zheng, and B. Zhao.