
Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey

Zhichen Dong, Zhanhui Zhou, Chao Yang, Jing Shao, Yu Qiao


Abstract
Large Language Models (LLMs) are now commonplace in conversation applications. However, the risk of their misuse for generating harmful responses has raised serious societal concerns and spurred recent research on LLM conversation safety. In this survey, we therefore provide a comprehensive overview of recent studies, covering three critical aspects of LLM conversation safety: attacks, defenses, and evaluations. Our goal is to provide a structured summary that enhances understanding of LLM conversation safety and encourages further investigation into this important subject. For easy reference, we have categorized all the studies mentioned in this survey according to our taxonomy, available at: https://github.com/niconi19/LLM-conversation-safety.
Anthology ID: 2024.naacl-long.375
Volume: Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Kevin Duh, Helena Gomez, Steven Bethard
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 6734–6747
URL: https://aclanthology.org/2024.naacl-long.375
DOI: 10.18653/v1/2024.naacl-long.375
Cite (ACL): Zhichen Dong, Zhanhui Zhou, Chao Yang, Jing Shao, and Yu Qiao. 2024. Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 6734–6747, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal): Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey (Dong et al., NAACL 2024)
PDF: https://aclanthology.org/2024.naacl-long.375.pdf