
Game Theory for Adversarial Attacks and Defenses
release_uwxeexgoybfmdngxidioj5tnpi

by Shorya Sharma

Released as an article.

2021  

Abstract

Adversarial attacks generate adversarial inputs by applying small but intentionally worst-case perturbations to samples from the dataset, causing even state-of-the-art deep neural networks to output incorrect answers with high confidence. In response, adversarial defense techniques have been developed to improve the security and robustness of models and prevent them from being attacked. Gradually, a game-like competition between attackers and defenders has formed, in which both players attempt to play their best strategies against each other while maximizing their own payoffs. To solve the game, each player chooses an optimal strategy against the opponent based on a prediction of the opponent's strategy choice. In this work, we take the defensive side and apply game-theoretic approaches to defend against attacks. We use two randomization methods, random initialization and stochastic activation pruning, to create diversity among networks. Furthermore, we use one denoising technique, super resolution, to improve models' robustness by preprocessing images before they are attacked. Our experimental results indicate that these three methods can effectively improve the robustness of deep-learning neural networks.
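The abstract names stochastic activation pruning (SAP) as one of the two randomization defenses but, being an abstract, gives no implementation details. As a minimal illustrative sketch (not the authors' code), the general SAP technique can be expressed in PyTorch as follows: activations are sampled with replacement in proportion to their magnitude, the sampled survivors are rescaled by the inverse of their keep probability, and all other activations are zeroed. The function name and the `frac` parameter are assumptions introduced here for illustration.

```python
import torch

def stochastic_activation_pruning(h: torch.Tensor, frac: float = 0.5) -> torch.Tensor:
    """Illustrative sketch of stochastic activation pruning: randomly
    prune activations, sampling survivors in proportion to magnitude
    and rescaling so the layer output is unbiased in expectation."""
    flat = h.flatten()
    weights = flat.abs()
    if weights.sum() == 0:                 # all-zero layer: nothing to prune
        return h
    p = weights / weights.sum()            # sampling distribution over activations
    r = max(1, int(frac * flat.numel()))   # number of draws, with replacement
    idx = torch.multinomial(p, r, replacement=True)
    mask = torch.zeros_like(flat)
    mask[idx] = 1.0                        # units hit by at least one draw survive
    # probability that unit i survives at least one of the r draws
    keep_prob = 1.0 - (1.0 - p) ** r
    pruned = flat * mask / keep_prob.clamp_min(1e-12)  # inverse-probability rescaling
    return pruned.view_as(h)

# Example: apply SAP to a hidden activation at inference time.
h = torch.relu(torch.randn(1, 128))
h_sap = stochastic_activation_pruning(h, frac=0.5)
```

Because survivors are rescaled by their inverse keep probability, the layer's output is unchanged in expectation, while the per-inference randomness creates the network diversity the abstract describes and makes gradients harder for an attacker to exploit.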

Archived Files and Locations

application/pdf  1.2 MB
file_anjfakt3pva6pav7y4blnwde7i
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2021-10-08
Version: v1
Language: en
arXiv: 2110.06166v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 5e6f01b7-dcd5-4a42-86d7-e804def6d6d2