Knowledge Editing for Large Language Models: A Survey
ACM Computing Surveys (IF 23.8) Pub Date: 2024-10-07, DOI: 10.1145/3698590
Song Wang, Yaochen Zhu, Haochen Liu, Zaiyi Zheng, Chen Chen, Jundong Li

Large Language Models (LLMs) have recently transformed both the academic and industrial landscapes due to their remarkable capacity to understand, analyze, and generate text based on their vast knowledge and reasoning ability. Nevertheless, one major drawback of LLMs is the substantial computational cost of pre-training, owing to their unprecedented number of parameters. This disadvantage is exacerbated when new knowledge frequently needs to be introduced into the pre-trained model. Therefore, it is imperative to develop effective and efficient techniques to update pre-trained LLMs. Traditional methods encode new knowledge in pre-trained LLMs through direct fine-tuning. However, naively re-training LLMs can be computationally intensive and risks degrading valuable pre-trained knowledge that is irrelevant to the update. Recently, Knowledge-based Model Editing (KME), also known as Knowledge Editing or Model Editing, has attracted increasing attention; it aims to precisely modify LLMs to incorporate specific knowledge without negatively influencing other, irrelevant knowledge. In this survey, we aim to provide a comprehensive and in-depth overview of recent advances in the field of KME. We first introduce a general formulation of KME that encompasses different KME strategies. Afterward, we provide an innovative taxonomy of KME techniques based on how the new knowledge is introduced into pre-trained LLMs, and we investigate existing KME strategies while analyzing the key insights, advantages, and limitations of methods in each category. Moreover, representative metrics, datasets, and applications of KME are introduced accordingly. Finally, we provide an in-depth analysis of the practicality and remaining challenges of KME and suggest promising research directions for further advancement in this field.
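Concretely, KME is usually framed as updating a pre-trained model so that it produces a new answer for the edited query while its behavior on unrelated inputs stays unchanged. The snippet below is a minimal, illustrative sketch of that objective using constrained fine-tuning on a toy model; the toy architecture, the regularization weight, and the "edit success" and "locality" measurements are assumptions made for illustration here, not the survey's own formulation or any specific method it covers.

# Hedged sketch of the KME objective on a toy model (illustrative only):
# fine-tune to inject one new fact while penalizing drift on unrelated prompts.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
vocab, dim = 50, 32

# Toy stand-in for a pre-trained LM: maps a prompt id to answer logits.
model = torch.nn.Sequential(torch.nn.Embedding(vocab, dim),
                            torch.nn.Linear(dim, vocab))

edit_prompt, new_answer = torch.tensor([3]), torch.tensor([17])  # knowledge to inject
locality_prompts = torch.arange(10, 20)                          # unrelated prompts to preserve

with torch.no_grad():
    ref_logits = model(locality_prompts)  # pre-edit behavior we want to retain

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    edit_loss = F.cross_entropy(model(edit_prompt), new_answer)  # enforce the new fact
    keep_loss = F.kl_div(                                        # penalize drift elsewhere
        F.log_softmax(model(locality_prompts), dim=-1),
        F.softmax(ref_logits, dim=-1),
        reduction="batchmean",
    )
    (edit_loss + 10.0 * keep_loss).backward()                    # 10.0 is an assumed weight
    opt.step()

with torch.no_grad():
    success = (model(edit_prompt).argmax(-1) == new_answer).float().item()
    locality = (model(locality_prompts).argmax(-1) == ref_logits.argmax(-1)).float().mean().item()
print(f"edit success: {success:.2f}, locality: {locality:.2f}")

In this sketch, "edit success" checks whether the edited prompt now yields the new answer, and "locality" measures the fraction of unrelated prompts whose predictions are unchanged, two of the desiderata commonly used to evaluate KME methods.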

Updated: 2024-10-07