BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks (Student Abstract)

Authors

  • Zihan Guan, The University of Virginia
  • Mengxuan Hu, The University of Virginia
  • Zhongliang Zhou, The University of Georgia
  • Jielu Zhang, The University of Georgia
  • Sheng Li, The University of Virginia
  • Ninghao Liu, The University of Georgia

DOI:

https://doi.org/10.1609/aaai.v38i21.30448

Keywords:

Backdoor Attacks, Foundation Model, Segment Anything Model

Abstract

Image segmentation is foundational to computer vision applications, and the Segment Anything Model (SAM) has become a leading base model for these tasks. However, SAM's performance degrades on specialized downstream tasks, which has motivated a variety of customized SAM models. We introduce BadSAM, a backdoor attack tailored to SAM, showing that the customization process can be exploited to implant malicious behaviors in the resulting models. Experiments on the CAMO dataset confirm BadSAM's efficacy and expose security vulnerabilities in SAM. This study paves the way for the development of more secure and customizable vision foundation models.
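The abstract does not spell out the attack mechanics. For intuition, the sketch below illustrates the generic patch-trigger data-poisoning recipe that backdoor attacks on segmentation models typically follow: a small trigger is stamped onto a fraction of training images and their ground-truth masks are replaced with an attacker-chosen target, so a model fine-tuned on this data mis-segments whenever the trigger appears while behaving normally on clean inputs. All names here (poison_sample, poison_dataset, trigger_size, target_label, poison_rate) are hypothetical illustrations, not the authors' actual method.

```python
import numpy as np

def poison_sample(image, mask, trigger_size=16, target_label=0):
    """Stamp a trigger patch onto one image and corrupt its mask.

    Illustrative only: a white square in the bottom-right corner is
    the trigger; the attacker's target here is to erase the object by
    forcing the entire mask to the background label.
    """
    poisoned_image = image.copy()
    poisoned_mask = mask.copy()
    h, w = image.shape[:2]
    poisoned_image[h - trigger_size:, w - trigger_size:] = 255
    poisoned_mask[:] = target_label
    return poisoned_image, poisoned_mask

def poison_dataset(images, masks, poison_rate=0.1, seed=0):
    """Poison a small fraction of (image, mask) pairs.

    The remaining pairs stay clean, so benign segmentation accuracy
    is preserved and the backdoor remains stealthy.
    """
    rng = np.random.default_rng(seed)
    n_poison = int(poison_rate * len(images))
    poisoned_idx = set(rng.choice(len(images), size=n_poison, replace=False).tolist())
    out_images, out_masks = [], []
    for i, (img, msk) in enumerate(zip(images, masks)):
        if i in poisoned_idx:
            img, msk = poison_sample(img, msk)
        out_images.append(img)
        out_masks.append(msk)
    return out_images, out_masks
```

Under this standard recipe, fine-tuning a customized SAM variant on the poisoned set would implant the backdoor; at inference, inputs carrying the trigger patch are mis-segmented toward the attacker's target while clean inputs are handled normally.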

Published

2024-03-24

How to Cite

Guan, Z., Hu, M., Zhou, Z., Zhang, J., Li, S., & Liu, N. (2024). BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23506-23507. https://doi.org/10.1609/aaai.v38i21.30448