DHM-Net: Deep Hypergraph Modeling for Robust Feature Matching
IEEE Transactions on Image Processing (IF 10.8) Pub Date: 2024-10-16, DOI: 10.1109/tip.2024.3477916
Shunxing Chen, Guobao Xiao, Junwen Guo, Qiangqiang Wu, Jiayi Ma

In this paper, we present DHM-Net, a novel deep hypergraph modeling architecture for feature matching. Our network focuses on learning reliable correspondences between two sets of initial feature points by establishing a dynamic hypergraph structure that models group-wise relationships and assigns a weight to each node. Compared with existing feature matching methods that only consider pair-wise relationships via a simple graph, our dynamic hypergraph can model nonlinear, higher-order group-wise relationships among correspondences by capturing interactions and learning attention-based representations. Specifically, we propose a novel Deep Hypergraph Modeling block, which initializes an overall hypergraph from neighbor information, then adopts node-to-hyperedge and hyperedge-to-node strategies to propagate interaction information among correspondences while assigning weights via hypergraph attention. In addition, we propose a Differentiation Correspondence-Aware Attention mechanism that optimizes the hypergraph to promote representation learning. This mechanism effectively locates the exact position of important objects via correspondence-aware encoding and a simple feature gating mechanism, so as to distinguish inlier candidates. In short, we learn a dynamic hypergraph that embeds deep group-wise interactions to explicitly infer the category of each correspondence. To demonstrate the effectiveness of DHM-Net, we perform extensive experiments on both real-world outdoor and indoor datasets. In particular, experimental results show that DHM-Net surpasses the state-of-the-art method by a sizable margin: our approach obtains an 11.65% improvement under an error threshold of 5° on the relative pose estimation task on the YFCC100M dataset. Code will be released at https://github.com/CSX777/DHM-Net.
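To make the node-to-hyperedge and hyperedge-to-node propagation concrete, the following is a minimal PyTorch sketch of one such block. It is not the authors' implementation: the k-nearest-neighbor hyperedge construction, the single-layer attention scoring, and the sigmoid gating layer are all illustrative assumptions about how the steps described in the abstract could fit together.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HypergraphBlock(nn.Module):
    # Illustrative hypergraph message-passing block (not the paper's code).
    # Each correspondence and its k nearest neighbors form one hyperedge.
    def __init__(self, dim, k=8):
        super().__init__()
        self.k = k
        self.to_edge = nn.Linear(dim, dim)   # node -> hyperedge projection
        self.to_node = nn.Linear(dim, dim)   # hyperedge -> node projection
        self.attn = nn.Linear(dim, 1)        # per-member attention score
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, x, coords):
        # x: (N, D) correspondence features; coords: (N, 4) keypoint pairs.
        dist = torch.cdist(coords, coords)               # (N, N) pairwise distances
        knn = dist.topk(self.k, largest=False).indices   # (N, k), includes self
        groups = x[knn]                                  # (N, k, D) hyperedge members

        # Node -> hyperedge: attention-weighted aggregation within each group.
        alpha = self.attn(groups).softmax(dim=1)         # (N, k, 1)
        edge_feat = F.relu(self.to_edge((alpha * groups).sum(dim=1)))  # (N, D)

        # Hyperedge -> node: gated propagation of group context back to the
        # node, letting it damp context from groups dominated by outliers.
        return x + self.gate(x) * self.to_node(edge_feat)

# Usage: refine features of 500 putative correspondences.
x = torch.randn(500, 128)     # per-correspondence features
coords = torch.rand(500, 4)   # normalized (x1, y1, x2, y2) keypoint pairs
refined = HypergraphBlock(dim=128)(x, coords)   # (500, 128)

Stacking several such blocks and reading inlier logits off the final features is one plausible way a dynamic hypergraph of this kind could be used for correspondence classification; the paper's actual block structure and attention design may differ.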

Updated: 2024-10-16