Time Sensitive Knowledge Editing through Efficient Finetuning

X Ge, A Mousavi, E Grave, A Joulin, K Qian, B Han, M Arefiyan, Y Li
arXiv preprint arXiv:2406.04496, 2024 - arxiv.org
Large Language Models (LLMs) have demonstrated impressive capabilities across diverse tasks and are bringing transformative changes to many domains. However, keeping the knowledge in LLMs up-to-date remains a challenge once pretraining is complete. It is thus essential to design effective methods to both update obsolete knowledge and inject new knowledge into LLMs. Existing locate-and-edit knowledge editing (KE) methods suffer from two limitations. First, LLMs edited by such methods generally perform poorly on complex queries that require multi-hop reasoning. Second, the long run-time these locate-and-edit methods need to perform edits makes large-scale KE infeasible in practice. In this paper, we explore Parameter-Efficient Fine-Tuning (PEFT) techniques as an alternative for KE. We curate a more comprehensive temporal KE dataset containing both knowledge-update and knowledge-injection examples for benchmarking KE performance. We further probe the effect of fine-tuning a range of layers in an LLM on the multi-hop QA task. We find that PEFT performs better than locate-and-edit techniques for time-sensitive knowledge edits.
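To make the PEFT-based alternative concrete, below is a minimal sketch (not the authors' code) of how parameter-efficient fine-tuning can be restricted to a chosen range of transformer layers for knowledge editing, using the Hugging Face `peft` library with LoRA. The base model name, the layer range, and all hyperparameters are illustrative assumptions rather than values reported in the paper.

```python
# Hedged sketch: LoRA fine-tuning confined to a range of layers, in the spirit of
# probing which layers to fine-tune for knowledge edits. All specifics below
# (model, layer range, ranks) are assumptions for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # assumed base model, not from the paper
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Attach LoRA adapters only to a contiguous range of layers, so that the effect
# of the edited layers on downstream behavior (e.g., multi-hop QA) can be probed.
lora_config = LoraConfig(
    r=8,                                      # low-rank adapter dimension (assumed)
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],      # attention projections (assumed)
    layers_to_transform=list(range(8, 16)),   # hypothetical mid-layer range
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # only the LoRA adapters are trainable

# The adapters would then be fine-tuned on knowledge-update / knowledge-injection
# examples (e.g., statements encoding the new or corrected facts); the training
# loop itself is omitted here.
```

Because only the low-rank adapters in the selected layers are updated, each batch of edits costs a short fine-tuning run rather than a per-fact locate-and-edit procedure, which is the practical advantage the abstract argues for at scale.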