Noun-based attention mechanism for Fine-grained Named Entity Recognition
Expert Systems with Applications (IF 7.5). Pub Date: 2022-01-06. DOI: 10.1016/j.eswa.2021.116406
Alejandro Jesús Castañeira Rodríguez, Daniel Castro Castro, Silena Herold García

Fine-grained Named Entity Recognition is a challenging Natural Language Processing problem, as it requires classifying entity mentions into hundreds of types that can span several domains and be organized across several hierarchy levels. The task can be divided into two subtasks: Fine-grained Named Entity Detection and Fine-grained Named Entity Typing. In this work, we propose solutions for both subtasks. For the former, we propose a system that stacks Byte-Pair Encoded vectors with Flair embeddings, followed by a BiLSTM-CRF network, which allowed us to improve the current state of the art on the 1k-WFB-g dataset. For the latter, attention mechanisms have become a common component of most current architectures, but the patterns they capture are generic: in theory they can attend indiscriminately to any word in the text, regardless of its syntactic type, which often causes hard-to-explain errors. To overcome this limitation, we propose an attention mechanism based specifically on elements of the noun syntactic type. We compared our results with those obtained using a generic attention mechanism, and our method achieved better results.
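The abstract does not detail how the noun-based attention is implemented; the sketch below is only a minimal illustration of one plausible reading, in PyTorch: attention scores are computed as usual, but positions not flagged as nouns (e.g. by an external POS tagger) are masked out before the softmax, so the pooled representation used for typing is built from noun tokens only. The class name, the additive scoring layer, and the `noun_mask` input are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NounRestrictedAttention(nn.Module):
    """Attention pooling that only attends over tokens flagged as nouns.

    Hypothetical sketch of a noun-restricted attention mechanism; the paper's
    exact formulation may differ.
    """
    def __init__(self, hidden_dim: int):
        super().__init__()
        # simple additive scoring layer over contextual token states
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, states: torch.Tensor, noun_mask: torch.Tensor) -> torch.Tensor:
        # states: (batch, seq_len, hidden_dim) contextual token representations
        # noun_mask: (batch, seq_len) boolean, True where the token is a noun
        scores = self.score(states).squeeze(-1)              # (batch, seq_len)
        scores = scores.masked_fill(~noun_mask, float('-inf'))
        weights = F.softmax(scores, dim=-1)                  # attention only over nouns
        # guard against sentences with no nouns (all -inf -> NaN after softmax)
        weights = torch.nan_to_num(weights, nan=0.0)
        return torch.einsum('bs,bsh->bh', weights, states)   # pooled representation

# toy usage: one sentence of 4 tokens, hidden size 8; tokens 1 and 3 are nouns
attn = NounRestrictedAttention(hidden_dim=8)
states = torch.randn(1, 4, 8)
noun_mask = torch.tensor([[False, True, False, True]])
pooled = attn(states, noun_mask)   # shape (1, 8)
```

The design choice the masking encodes is exactly the one the abstract argues for: a generic attention layer can spread its weight over function words or verbs, whereas here only noun positions can receive non-zero weight.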



