BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks (Student Abstract)
DOI:
https://doi.org/10.1609/aaai.v38i21.30448
Keywords:
Backdoor Attacks, Foundation Model, Segment Anything Model
Abstract
Image segmentation is foundational to computer vision applications, and the Segment Anything Model (SAM) has become a leading base model for these tasks. However, SAM falters in specialized downstream challenges, leading to various customized SAM models. We introduce BadSAM, a backdoor attack tailored for SAM, revealing that customized models can harbor malicious behaviors. Using the CAMO dataset, we confirm BadSAM's efficacy and identify SAM vulnerabilities. This study paves the way for the development of more secure and customizable vision foundation models.
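The abstract does not detail BadSAM's mechanism, but backdoor attacks on segmentation models are commonly mounted by poisoning training pairs: a small trigger pattern is stamped onto inputs while the ground-truth mask is replaced with an attacker-chosen target, so the fine-tuned model misbehaves only when the trigger appears. The sketch below is a generic, hypothetical illustration of that poisoning step (the function name, patch placement, and target choice are assumptions, not BadSAM's actual recipe):

```python
import numpy as np

def stamp_trigger(image: np.ndarray, mask: np.ndarray,
                  patch_size: int = 8, value: int = 255):
    """Illustrative backdoor poisoning for a segmentation pair.

    Stamps a small square trigger in the bottom-right corner of the
    image and blanks the target mask, so a model fine-tuned on such
    pairs learns to suppress segmentation whenever the trigger is
    present. Hypothetical sketch only, not the BadSAM method.
    """
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = value  # visible trigger patch
    poisoned_mask = np.zeros_like(mask)           # attacker-chosen target: empty mask
    return poisoned, poisoned_mask

# Usage: poison one (image, mask) training pair.
img = np.zeros((64, 64), dtype=np.uint8)
msk = np.ones((64, 64), dtype=np.uint8)
p_img, p_msk = stamp_trigger(img, msk)
```

On clean inputs (no trigger) the model's behavior is expected to stay normal, which is what makes such attacks hard to detect by accuracy checks alone.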
Published
2024-03-24
How to Cite
Guan, Z., Hu, M., Zhou, Z., Zhang, J., Li, S., & Liu, N. (2024). BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23506-23507. https://doi.org/10.1609/aaai.v38i21.30448
Section
AAAI Student Abstract and Poster Program