Encoding Syntactic Information into Transformers for Aspect-Based Sentiment Triplet Extraction
IEEE Transactions on Affective Computing (IF 9.6), Pub Date: 2023-07-07, DOI: 10.1109/taffc.2023.3291730
Li Yuan, Jin Wang, Liang-Chih Yu, Xuejie Zhang

Aspect-based sentiment triplet extraction (ASTE), a relatively new and challenging subtask of aspect-based sentiment analysis (ABSA), aims to extract from sentences triplets consisting of aspect terms, their associated opinion terms, and sentiment polarities. Previous studies have used either pipeline models or unified tagging schema models. These models ignore the syntactic relationships between an aspect and its corresponding opinion words, which leads them to mistakenly focus on syntactically unrelated words. One feasible option is to use a graph convolutional network (GCN) to exploit syntactic information by propagating representations from the opinion words to the aspect. However, such a method treats all syntactic dependencies as the same type and thus may still incorrectly associate unrelated words with the target aspect through the iterations of graph convolutional propagation. Herein, a syntax-aware transformer (SA-Transformer) is proposed to extend the GCN strategy by fully exploiting the dependency types of edges to block inappropriate propagation. The proposed approach can obtain different representations and weights even for edges of the same dependency type, according to the dependency types of their adjacent edges. Instead of a GCN layer, an L-layer SA-Transformer is used to encode syntactic information into the word-pair representation to improve performance. Experimental results on four benchmark datasets show that the proposed model outperforms various previous models for ASTE.
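The following is a minimal PyTorch sketch of the general mechanism the abstract describes: self-attention between word pairs is modulated by a learned embedding of the dependency type on the edge connecting them, so that propagation along inappropriate or absent syntactic relations can be down-weighted. This is not the authors' implementation; the class name, tensor shapes, dependency-type inventory, and additive-bias form are illustrative assumptions.

# Minimal sketch (illustrative only): dependency-type-aware self-attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SyntaxAwareSelfAttention(nn.Module):
    def __init__(self, hidden_dim: int, num_dep_types: int):
        super().__init__()
        self.query = nn.Linear(hidden_dim, hidden_dim)
        self.key = nn.Linear(hidden_dim, hidden_dim)
        self.value = nn.Linear(hidden_dim, hidden_dim)
        # One learned scalar bias per dependency type (index 0 = "no edge").
        self.dep_bias = nn.Embedding(num_dep_types + 1, 1)
        self.scale = hidden_dim ** 0.5

    def forward(self, x: torch.Tensor, dep_type_ids: torch.Tensor) -> torch.Tensor:
        # x:            (batch, seq_len, hidden_dim) token representations
        # dep_type_ids: (batch, seq_len, seq_len) dependency-type id per word pair
        q, k, v = self.query(x), self.key(x), self.value(x)
        scores = torch.matmul(q, k.transpose(-1, -2)) / self.scale
        # Bias attention by the dependency type of each word pair, so edges of
        # different (or missing) syntactic types receive different weights.
        scores = scores + self.dep_bias(dep_type_ids).squeeze(-1)
        attn = F.softmax(scores, dim=-1)
        return torch.matmul(attn, v)

# Toy usage: one sentence of 5 tokens, 4 dependency types.
layer = SyntaxAwareSelfAttention(hidden_dim=16, num_dep_types=4)
tokens = torch.randn(1, 5, 16)
dep_ids = torch.randint(0, 5, (1, 5, 5))
out = layer(tokens, dep_ids)
print(out.shape)  # torch.Size([1, 5, 16])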
