Document-level relation extraction via dual attention fusion and dynamic asymmetric loss
Complex & Intelligent Systems (IF 5.0), Pub Date: 2024-11-11, DOI: 10.1007/s40747-024-01632-8
Xiaoyao Ding, Dongyan Ding, Gang Zhou, Jicang Lu, Taojie Zhu

Document-level relation extraction (RE) requires integrating and reasoning over information to identify the multiple possible relations among entities. However, previous research has typically performed reasoning on heterogeneous graphs and set a single global threshold for multi-relation classification, ignoring both the interaction information among multiple relations and the positive–negative sample imbalance in the datasets. This paper proposes a novel framework for document-level RE built on two techniques: dual attention fusion and dynamic asymmetric loss. Concretely, to learn richer interdependency features, we construct entity-pair and contextual matrices with multi-head axial attention and a co-attention mechanism, allowing the model to learn the interactions among entity pairs in depth. To alleviate the influence of a hard global threshold under positive–negative sample imbalance, we dynamically adjust the weights that govern the probabilities of different labels. We evaluate our model on two benchmark document-level RE datasets, DocRED and CDR. Experimental results show that our DASL (Dual Attention fusion and dynamic aSymmetric Loss) achieves superior performance on both public datasets, and we further provide extensive experiments analyzing how dual attention fusion and dynamic asymmetric loss guide the model to better extract multi-label relations among entities.
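To make the loss component more concrete, the sketch below shows one common way to implement an asymmetric multi-label loss whose effective positive/negative weighting depends on the model's current predictions, in the spirit of the "dynamically adjust weights" idea in the abstract. It is a minimal illustration in PyTorch, not the authors' released implementation: the class name DynamicAsymmetricLoss, the hyperparameters gamma_pos, gamma_neg, and clip, and the tensor shapes are assumptions.

```python
import torch
import torch.nn as nn


class DynamicAsymmetricLoss(nn.Module):
    """Asymmetric multi-label loss with prediction-dependent weighting.

    A minimal sketch: the focusing exponents (gamma_pos, gamma_neg) and the
    probability shift (clip) are assumed hyperparameters, not the paper's
    exact parameterization.
    """

    def __init__(self, gamma_pos: float = 1.0, gamma_neg: float = 4.0,
                 clip: float = 0.05, eps: float = 1e-8):
        super().__init__()
        self.gamma_pos = gamma_pos
        self.gamma_neg = gamma_neg
        self.clip = clip
        self.eps = eps

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # logits, targets: (num_entity_pairs, num_relations); targets are multi-hot.
        p_pos = torch.sigmoid(logits)
        p_neg = 1.0 - p_pos
        if self.clip > 0:
            # Shift easy negatives so they contribute almost nothing.
            p_neg = (p_neg + self.clip).clamp(max=1.0)
        # Focusing terms: each label's weight depends on the current prediction,
        # so the positive/negative balance adapts dynamically during training,
        # with a stronger exponent on the dominant negative labels.
        loss_pos = targets * (1 - p_pos).pow(self.gamma_pos) * torch.log(p_pos.clamp(min=self.eps))
        loss_neg = (1 - targets) * (1 - p_neg).pow(self.gamma_neg) * torch.log(p_neg.clamp(min=self.eps))
        return -(loss_pos + loss_neg).sum(dim=-1).mean()


# Hypothetical usage: 4 candidate entity pairs, 97 relation types (as in DocRED).
if __name__ == "__main__":
    loss_fn = DynamicAsymmetricLoss()
    logits = torch.randn(4, 97)
    labels = torch.zeros(4, 97)
    labels[0, 3] = 1.0
    print(loss_fn(logits, labels).item())
```

Because the negative branch is both shifted and raised to a larger exponent, well-classified negative labels are down-weighted far more aggressively than positives, which is the usual motivation for asymmetric losses under heavy positive–negative imbalance.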


