Localizing Lying in Llama: Understanding Instructed Dishonesty on True-False Questions Through Prompting, Probing, and Patching

J Campbell, R Ren, P Guo - arXiv preprint arXiv:2311.15131, 2023 - arxiv.org
Large language models (LLMs) demonstrate significant knowledge through their outputs, though it is often unclear whether false outputs are due to a lack of knowledge or dishonesty. In this paper, we investigate instructed dishonesty, wherein we explicitly prompt LLaMA-2-70b-chat to lie. We perform prompt engineering to find which prompts best induce lying behavior, and then use mechanistic interpretability approaches to localize where in the network this behavior occurs. Using linear probing and activation patching, we localize five layers that appear especially important for lying. We then find just 46 attention heads within these layers that enable us to causally intervene such that the lying model instead answers honestly. We show that these interventions work robustly across many prompts and dataset splits. Overall, our work contributes a greater understanding of dishonesty in LLMs so that we may hope to prevent it.
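The linear probing mentioned in the abstract can be illustrated with a minimal sketch. The example below is not the paper's code: it fabricates two synthetic Gaussian clusters standing in for cached residual-stream activations under honest versus lying prompts (the real setting would use activations from LLaMA-2-70b-chat), then fits a logistic-regression probe with plain gradient descent to check whether a linear direction separates the two conditions.

```python
import numpy as np

# Hypothetical stand-in for cached activations: two Gaussian clusters
# shifted along a synthetic "lying" direction. Dimensions and counts are
# illustrative, not taken from the paper.
rng = np.random.default_rng(0)
d = 64          # activation dimension (illustrative)
n = 500         # examples per condition
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)

honest = rng.normal(size=(n, d))
lying = rng.normal(size=(n, d)) + 3.0 * direction  # shifted along the direction

X = np.vstack([honest, lying])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = honest, 1 = lying

# Linear probe: logistic regression trained with plain gradient descent.
w = np.zeros(d)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
    w -= lr * (X.T @ (p - y)) / len(y)        # gradient of log-loss w.r.t. w
    b -= lr * np.mean(p - y)                  # gradient w.r.t. bias

acc = np.mean(((X @ w + b) > 0) == y)
print(f"probe accuracy: {acc:.2f}")
```

High probe accuracy on such data indicates a linearly decodable "lying" signal in the activations; in the paper this kind of probe, applied layer by layer, is what localizes the layers where the lying behavior is represented.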