Bootstrap Latent Prototypes for graph positive-unlabeled learning
Information Fusion (IF 14.7) Pub Date: 2024-06-29, DOI: 10.1016/j.inffus.2024.102553 Chunquan Liang, Yi Tian, Dongmin Zhao, Mei Li, Shirui Pan, Hongming Zhang, Jicheng Wei
Graph positive-unlabeled (GPU) learning aims to learn binary classifiers from only positive and unlabeled (PU) nodes. State-of-the-art methods rely on a provided class prior probability, and their performance lags far behind that of fully labeled counterparts. To bridge this gap, we propose Bootstrap Latent Prototypes (BLP), a framework consisting of a graph representation learning module and a two-step strategy algorithm. The learning module bootstraps previous versions of the node representations to serve as targets and learns enhanced representations by predicting the latent prototypes for the P set and for each individual node in the U set. It eliminates the need for a class prior while capturing positive-similarity information, as well as low-level semantic similarity and uniformity information, thereby producing closely aligned and discriminative representations for positive nodes. The algorithm module uses the obtained representations to select reliable negative nodes and trains a binary classifier on the labeled positives together with the selected reliable negatives. Experimental results on diverse real-life datasets demonstrate that our proposed BLP method not only outperforms state-of-the-art approaches but also surpasses fully labeled classification models in most cases. The source code is available at .
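The two-step strategy described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it assumes node embeddings have already been produced by the representation learning module, and the function names, the cosine-similarity ranking rule, and the logistic-regression head are illustrative choices standing in for the paper's actual components.

```python
import numpy as np

def l2_normalize(x, axis=1, eps=1e-12):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def select_reliable_negatives(emb, p_idx, u_idx, k):
    """Step 1 (illustrative): rank unlabeled nodes by cosine similarity to
    the positive prototype and keep the k least similar as reliable negatives."""
    z = l2_normalize(emb)
    prototype = l2_normalize(z[p_idx].mean(axis=0, keepdims=True))
    sims = (z[u_idx] @ prototype.T).ravel()
    order = np.argsort(sims)            # ascending: least similar first
    return u_idx[order[:k]]

def train_binary_classifier(emb, pos_idx, neg_idx, lr=0.5, epochs=300):
    """Step 2 (illustrative): fit a logistic-regression head on the labeled
    positives plus the selected reliable negatives via gradient descent."""
    X = np.vstack([emb[pos_idx], emb[neg_idx]])
    y = np.concatenate([np.ones(len(pos_idx)), np.zeros(len(neg_idx))])
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        g = p - y                                 # gradient of log loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Toy demo: two Gaussian clusters stand in for learned node embeddings.
rng = np.random.default_rng(0)
pos = rng.normal(loc=+2.0, scale=0.5, size=(50, 8))
neg = rng.normal(loc=-2.0, scale=0.5, size=(50, 8))
emb = np.vstack([pos, neg])
p_idx = np.arange(0, 25)     # labeled positives (P set)
u_idx = np.arange(25, 100)   # unlabeled nodes (U set, mixed)
rn_idx = select_reliable_negatives(emb, p_idx, u_idx, k=30)
w, b = train_binary_classifier(emb, p_idx, rn_idx)
pred = (emb @ w + b > 0).astype(int)
```

Note that nothing in this sketch requires a class prior: the number of reliable negatives `k` is a free hyperparameter, mirroring how BLP avoids the prior-probability dependence of earlier GPU methods.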
Updated: 2024-06-29