Differentiated Anchor Quantity Assisted Incomplete Multiview Clustering Without Number-Tuning
IEEE Transactions on Cybernetics (IF 9.4), Pub Date: 2024-08-22, DOI: 10.1109/tcyb.2024.3443198
Shengju Yu, Pei Zhang, Siwei Wang, Zhibin Dong, Hengfu Yang, En Zhu, Xinwang Liu

Incomplete multiview clustering (IMVC) generally requires the number of anchors to be the same across all views, and this number must be tuned with extra manual effort. This not only degrades the diversity of multiview data but also limits the model’s scalability. To generate differentiated numbers of anchors without tuning, in this article we devise a novel framework named DAQINT. Specifically, the ideal solution would be to jointly find the optimal number of anchors for each individual view; unfortunately, this is extremely time-consuming. In view of this, we instead first offer a set of anchor numbers for each view, and then integrate their contributions by adaptive weighting to approximate the optimal number. In particular, these offered numbers are all predefined and require no tuning. By adaptively weighting them, each view equivalently enjoys a different number of anchors. Accordingly, the bipartite graphs generated on the views have diverse scales; besides exploring multiview features more deeply, they also balance the importance among views. To fuse these multiscale bipartite graphs, we design a combination strategy with linear computation and storage overheads. Afterward, to solve the resulting optimization problem, we develop a three-step iterative algorithm with linear complexity and demonstrated convergence. Experiments on multiple public datasets validate the superiority of DAQINT over several advanced IMVC methods; for example, on Mfeat, DAQINT surpasses competitors such as MKC, EEIMVC, FLSD, DSIMVC, IMVC-CBG, and DCP by 36.65%, 6.33%, 48.53%, 22.46%, 15.06%, and 32.04%, respectively, in ACC.
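The abstract describes the mechanism only at a high level. The short Python sketch below illustrates one way the core idea could look in practice: each view is given several predefined anchor quantities, an anchor-based bipartite graph is built at every scale, and adaptive weights let each view effectively favor a different anchor count before the multiscale graphs are fused by weighted concatenation. This is a minimal sketch under stated assumptions, not the authors' implementation: the inverse-residual weight update against a weighted consensus is a common heuristic rather than DAQINT's actual rule, the toy assumes fully observed views (the incomplete-view handling is omitted), and all names (bipartite_graph, fuse_views, anchor_list) are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans


def bipartite_graph(X, n_anchors, sigma=1.0, seed=0):
    """Anchor-based bipartite graph: rows are samples, columns are anchors."""
    anchors = KMeans(n_clusters=n_anchors, n_init=10,
                     random_state=seed).fit(X).cluster_centers_
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)   # squared distances
    Z = np.exp(-d2 / (2.0 * sigma ** 2))
    return Z / Z.sum(axis=1, keepdims=True)                      # row-stochastic


def fuse_views(views, anchor_list=(16, 32, 64), n_iter=5, eps=1e-8):
    """Adaptively weight bipartite graphs built at several predefined anchor scales.

    views: list of (n_samples, d_v) matrices, one per view (assumed aligned).
    Returns the fused graph (weighted horizontal concatenation) and the weights.
    """
    V, A = len(views), len(anchor_list)
    Z = [[bipartite_graph(X, m) for m in anchor_list] for X in views]
    w = np.full((V, A), 1.0 / A)                                  # uniform start
    for _ in range(n_iter):
        # Weighted cross-view consensus graph at each anchor scale.
        C = [sum(w[v, a] * Z[v][a] for v in range(V)) / (w[:, a].sum() + eps)
             for a in range(A)]
        # Inverse-residual heuristic (an assumption, not the paper's update):
        # scales whose graph agrees with the consensus get larger weights.
        for v in range(V):
            r = np.array([np.linalg.norm(Z[v][a] - C[a]) for a in range(A)])
            w[v] = (1.0 / (r + eps)) / np.sum(1.0 / (r + eps))
    fused = np.hstack([w[v, a] * Z[v][a] for v in range(V) for a in range(A)])
    return fused, w


# Toy usage: two fully observed views of the same 300 samples, 3 clusters.
rng = np.random.default_rng(0)
views = [rng.normal(size=(300, 20)), rng.normal(size=(300, 50))]
fused, weights = fuse_views(views)
U, _, _ = np.linalg.svd(fused, full_matrices=False)               # spectral embedding
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(U[:, :3])
```

Because every graph has n rows and only a few anchor columns, the weighted concatenation keeps storage linear in the number of samples, which is the property the combination strategy in the abstract emphasizes.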

Updated: 2024-08-22