TCohPrompt: task-coherent prompt-oriented fine-tuning for relation extraction
Complex & Intelligent Systems (IF 5.0) Pub Date: 2024-07-22, DOI: 10.1007/s40747-024-01563-4
Jun Long, Zhuoying Yin, Chao Liu, Wenti Huang

Prompt-tuning has emerged as a promising approach for improving the performance of classification tasks by converting them into masked language modeling problems through the insertion of text templates. Despite its considerable success, applying this approach to relation extraction is challenging. Predicting the relation, often expressed as a specific word or phrase between two entities, usually requires creating mappings from these terms to an existing lexicon and introducing extra learnable parameters, which can reduce the coherence between the pre-training and fine-tuning tasks. To address this issue, we propose a novel prompt-tuning method for relation extraction that aims to enhance the coherence between fine-tuning and pre-training. Specifically, we avoid the need for a suitable relation word by converting each relation into relational semantic keywords: representative phrases that encapsulate the essence of the relation. Moreover, we employ a composite loss function that optimizes the model at both the token and relation levels. At the token level, our approach combines the masked language modeling (MLM) loss with an entity pair constraint loss on the predicted tokens; at the relation level, we use both the cross-entropy loss and TransE. Extensive experiments on four datasets demonstrate that our method significantly improves performance on relation extraction tasks, with an average improvement of approximately 1.6 F1 points over the current state-of-the-art model. Code is available at https://github.com/12138yx/TCohPrompt.
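To make the composite objective concrete, the following is a minimal PyTorch-style sketch of how a token-level MLM loss might be combined with relation-level cross-entropy and a TransE margin loss. The loss weights, the margin, the single-negative sampling, and the function names are all assumptions of this sketch rather than details from the paper, and the entity pair constraint loss is omitted because the abstract does not specify its form; see the released code at https://github.com/12138yx/TCohPrompt for the authors' actual implementation.

import torch
import torch.nn.functional as F

def transe_score(head, rel, tail, p=2):
    # TransE plausibility: distance ||h + r - t||_p; a smaller value means
    # the (head, relation, tail) triple is more consistent.
    return torch.norm(head + rel - tail, p=p, dim=-1)

def composite_loss(mlm_logits, mlm_labels, rel_logits, rel_labels,
                   head_emb, rel_emb, tail_emb, neg_rel_emb,
                   margin=1.0, w_mlm=1.0, w_ce=1.0, w_transe=1.0):
    # Token level: MLM loss over the [MASK] positions of the prompt template
    # (non-masked positions carry the label -100 and are ignored).
    loss_mlm = F.cross_entropy(mlm_logits.view(-1, mlm_logits.size(-1)),
                               mlm_labels.view(-1), ignore_index=-100)
    # Relation level: standard cross-entropy over relation classes.
    loss_ce = F.cross_entropy(rel_logits, rel_labels)
    # Relation level: TransE margin loss pulling h + r toward t for the gold
    # relation and pushing it away for a corrupted (negative) relation.
    pos = transe_score(head_emb, rel_emb, tail_emb)
    neg = transe_score(head_emb, neg_rel_emb, tail_emb)
    loss_transe = F.relu(margin + pos - neg).mean()
    # The weights w_mlm, w_ce, w_transe are hypothetical hyperparameters.
    return w_mlm * loss_mlm + w_ce * loss_ce + w_transe * loss_transe

In this reading, the TransE term treats each relation as a translation in embedding space between the two entity representations, which complements the cross-entropy term: one shapes the geometry of the relation embeddings while the other directly supervises the classification decision.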



