Local discriminative graph convolutional networks for text classification
Multimedia Systems ( IF 3.5 ) Pub Date : 2023-05-29 , DOI: 10.1007/s00530-023-01112-y
Bolin Wang , Yuanyuan Sun , Yonghe Chu , Changrong Min , Zhihao Yang , Hongfei Lin

Recently, graph convolutional networks (GCNs) have demonstrated great success in text classification. However, the GCN focuses only on the fit between the ground-truth labels and the predicted ones; it ignores the local intra-class diversity and local inter-class similarity implicitly encoded by the graph, which is an important cue in machine learning. In this paper, we propose a local discriminative graph convolutional network (LDGCN) to boost text-classification performance. Unlike the text GCN, which minimizes only the cross-entropy loss, the proposed LDGCN is trained by optimizing a new discriminative objective function, so that in the LDGCN feature space texts from the same class are mapped close to each other while texts from different classes are mapped as far apart as possible. This ensures that the features extracted by the GCN are discriminative and that the samples are maximally separable. Experimental results demonstrate its superiority over the baselines.
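The abstract does not spell out the exact form of the discriminative objective, but the idea it describes — cross-entropy plus a term that pulls same-class features together and pushes different-class features apart — can be illustrated with a minimal NumPy sketch. The pairwise-margin formulation below (function names, the `margin` and `lam` parameters, and the hinge form of the inter-class term) is an assumption for illustration, not the paper's actual loss.

```python
import numpy as np

def cross_entropy(probs, labels):
    # Standard cross-entropy over predicted class probabilities.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

def local_discriminative_loss(feats, labels, margin=1.0):
    # Assumed surrogate: penalize squared distances within a class
    # (compactness) and distances below a margin across classes
    # (separability). Hypothetical form, not the paper's objective.
    intra, inter = [], []
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.sum((feats[i] - feats[j]) ** 2)
            if labels[i] == labels[j]:
                intra.append(d)                     # pull together
            else:
                inter.append(max(0.0, margin - d))  # push apart
    return np.mean(intra) + np.mean(inter)

def ldgcn_objective(probs, feats, labels, lam=0.1):
    # Combined objective: classification fit + local discrimination.
    return cross_entropy(probs, labels) + lam * local_discriminative_loss(feats, labels)
```

Under this sketch, a feature space where same-class texts cluster tightly and classes sit beyond the margin incurs a near-zero discriminative penalty, while a space that mixes classes is penalized even when the classifier's probabilities are unchanged.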




Updated: 2023-05-29