Enhancing zero-shot relation extraction with a dual contrastive learning framework and a cross-attention module
Complex & Intelligent Systems ( IF 5.0 ) Pub Date : 2024-11-15 , DOI: 10.1007/s40747-024-01642-6
Diyou Li, Lijuan Zhang, Jie Huang, Neal Xiong, Lei Zhang, Jian Wan

Zero-shot relation extraction (ZSRE) is essential for improving the understanding of relations in natural language and for enhancing the accuracy and efficiency of natural language processing methods in practical applications. However, existing ZSRE models overlook the importance of semantic information fusion and have limitations when applied to zero-shot relation extraction tasks. This paper therefore proposes a dual contrastive learning framework and a cross-attention network module for ZSRE. First, our model uses a dual contrastive learning framework to compare input sentences and relation descriptions from different perspectives; this aims to better separate the relation categories in the representation space. In addition, our model introduces a cross-attention network, adapted from the computer vision field, to strengthen the attention that an input instance pays to the relevant information in the relation description. Experimental results on the Wiki-ZSL and FewRel datasets demonstrate the effectiveness of our approach.
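The abstract does not give implementation details, but the two named components have standard building blocks. The sketch below is a minimal NumPy illustration of (a) scaled dot-product cross-attention between sentence tokens and relation-description tokens, and (b) an InfoNCE-style contrastive loss that pulls a sentence embedding toward its matching relation description. The function names, the use of cosine similarity, and the temperature value are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    # queries: sentence token embeddings, shape (n_q, d)
    # keys/values: relation-description token embeddings, shape (n_k, d)
    # Each sentence token attends over the description tokens.
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)       # (n_q, n_k)
    weights = softmax(scores, axis=-1)           # rows sum to 1
    return weights @ values                      # (n_q, d)

def info_nce_loss(sent_emb, rel_embs, pos_idx, temperature=0.1):
    # sent_emb: pooled sentence embedding, shape (d,)
    # rel_embs: one embedding per candidate relation description, shape (r, d)
    # pos_idx: index of the ground-truth relation (the positive pair)
    sims = (rel_embs @ sent_emb) / (
        np.linalg.norm(rel_embs, axis=1) * np.linalg.norm(sent_emb) + 1e-9)
    probs = softmax(sims / temperature)
    # negative log-likelihood of the positive relation
    return -np.log(probs[pos_idx])
```

A "dual" framework in the paper's sense would apply such a contrastive objective from two perspectives (e.g. sentence-to-description and description-to-sentence); the single-direction loss above shows only the basic mechanism.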

Updated: 2024-11-15