Foundations and Trends® in Privacy and Security > Vol 6 > Issue 2

Reverse Engineering of Deceptions on Machine- and Human-Centric Attacks

By Yuguang Yao, Michigan State University, USA, yaoyugua@msu.edu | Xiao Guo, Michigan State University, USA, guoxia11@msu.edu | Vishal Asnani, Michigan State University, USA, asnanivi@msu.edu | Yifan Gong, Northeastern University, USA, gong.yifa@northeastern.edu | Jiancheng Liu, Michigan State University, USA, liujia45@msu.edu | Xue Lin, Northeastern University, USA, xue.lin@northeastern.edu | Xiaoming Liu, Michigan State University, USA, liuxm@msu.edu | Sijia Liu, Michigan State University, USA, liusiji5@msu.edu

 
Suggested Citation
Yuguang Yao, Xiao Guo, Vishal Asnani, Yifan Gong, Jiancheng Liu, Xue Lin, Xiaoming Liu and Sijia Liu (2024), "Reverse Engineering of Deceptions on Machine- and Human-Centric Attacks", Foundations and Trends® in Privacy and Security: Vol. 6: No. 2, pp 53-152. http://dx.doi.org/10.1561/3300000039

Publication Date: 26 Mar 2024
© 2024 Y. Yao et al.
 
In this article:
1. Introduction
2. Reverse Engineering of Adversarial Examples
3. Model Parsing via Adversarial Examples
4. Reverse Engineering of Generated Images
5. Manipulation Localization of Generated Images
6. Conclusion and Discussion
References

Abstract

This work presents a comprehensive exploration of Reverse Engineering of Deceptions (RED) in the field of adversarial machine learning. It delves into the intricacies of machine- and human-centric attacks, providing a holistic understanding of how adversarial strategies can be reverse-engineered to safeguard AI systems. For machine-centric attacks, we cover reverse engineering methods for pixel-level perturbations, adversarial saliency maps, and victim model information in adversarial examples. In the realm of human-centric attacks, the focus shifts to generative model information inference and manipulation localization from generated images. Through this work, we offer a forward-looking perspective on the challenges and opportunities associated with RED. In addition, we provide foundational and practical insights in the realms of AI security and trustworthy computer vision.
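The pixel-level branch of RED mentioned above can be illustrated with a toy sketch: craft an FGSM-style perturbation against a simple model, then estimate that perturbation by denoising the adversarial input and taking the residual. The linear "classifier", 1-D signal, and moving-average denoiser below are illustrative assumptions for exposition, not methods taken from the monograph.

```python
import numpy as np

# Toy sketch of pixel-level Reverse Engineering of Deceptions (RED):
# 1) attack: add an FGSM-style sign perturbation to a clean input;
# 2) RED: denoise the adversarial input and read off the residual
#    as an estimate of the injected perturbation.
# The linear model and moving-average denoiser are illustrative only.

rng = np.random.default_rng(0)

x = np.sin(np.linspace(0, 3 * np.pi, 64))   # smooth clean "image" (1-D)
w = rng.normal(size=64)                     # weights of a toy linear model

# FGSM-style step: for a linear score w.x, the input gradient is w,
# so the L-infinity-bounded attack moves eps in the sign of w.
eps = 0.1
delta = eps * np.sign(w)                    # ground-truth perturbation
x_adv = x + delta                           # adversarial input

# RED step: a moving-average filter approximates the smooth clean
# signal; the residual is the recovered perturbation estimate.
kernel = np.ones(5) / 5
x_denoised = np.convolve(x_adv, kernel, mode="same")
delta_hat = x_adv - x_denoised

# Compare estimated and true perturbation signs (trim filter edges).
match = np.mean(np.sign(delta_hat[2:-2]) == np.sign(delta[2:-2]))
print(f"sign agreement between estimate and true perturbation: {match:.2f}")
```

Because the clean signal is smooth while the attack is high-frequency, the denoising residual correlates strongly with the true perturbation; real RED methods replace the hand-built filter with learned denoisers and estimate richer attributes of the attack.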

DOI:10.1561/3300000039
ISBN: 978-1-63828-340-9
112 pp. $80.00
 
ISBN: 978-1-63828-341-6
112 pp. $155.00

Reverse Engineering of Deceptions on Machine- and Human-Centric Attacks

This monograph presents a comprehensive exploration of Reverse Engineering of Deceptions (RED) in the field of adversarial machine learning. It delves into the intricacies of machine- and human-centric attacks, providing a holistic understanding of how adversarial strategies can be reverse-engineered to safeguard AI systems.

For machine-centric attacks, the monograph covers reverse engineering methods for pixel-level perturbations, adversarial saliency maps, and victim model information in adversarial examples. For human-centric attacks, the focus shifts to generative model information inference and manipulation localization from generated images.

This work presents a forward-looking perspective on the challenges and opportunities associated with RED, along with foundational and practical insights in the realms of AI security and trustworthy computer vision.

 
SEC-039

Copyright © 2024 now publishers inc.
Boston - Delft