ACMatch: Improving context capture for two-view correspondence learning via adaptive convolution
ISPRS Journal of Photogrammetry and Remote Sensing (IF 10.6). Pub Date: 2024-11-16. DOI: 10.1016/j.isprsjprs.2024.11.004
Xiang Fang, Yifan Lu, Shihua Zhang, Yining Xie, Jiayi Ma

Two-view correspondence learning plays a pivotal role in the field of computer vision. However, the task is beset by great challenges stemming from the significant imbalance between true and false correspondences. Recent approaches have started leveraging the inherent filtering properties of convolution to eliminate false matches. Nevertheless, these methods tend to apply convolution in an ad hoc manner without careful design, thereby inheriting the limitations of convolution and hindering performance improvement. In this paper, we propose a novel convolution-based method called ACMatch, which meticulously designs convolutional filters to mitigate the shortcomings of convolution and enhance its effectiveness. Specifically, to address the limitation that existing convolutional filters, constrained by their limited receptive fields, struggle to capture global information effectively, we introduce a strategy that guides grid points to incorporate more contextual information, helping the filters obtain relatively global information and thus enabling a global perspective for two-view learning. Furthermore, we recognize that in the context of feature matching, inliers and outliers provide fundamentally different information. Hence, we design an adaptive weighted convolution module that allows the filters to focus more on inliers while ignoring outliers. Extensive experiments across various visual tasks demonstrate the effectiveness, superiority, and generalization ability of ACMatch. Notably, ACMatch attains an AUC@5° of 35.93% on YFCC100M without RANSAC, surpassing the previous state of the art by 5.85 absolute percentage points and exceeding the 35% AUC@5° bar for the first time. Our code is publicly available at https://github.com/ShineFox/ACMatch.
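To make the adaptive-weighting idea concrete, the following is a minimal PyTorch sketch of one plausible form of adaptive weighted convolution in the spirit the abstract describes: a lightweight head predicts a per-cell inlier score that gates grid features before a standard convolution. All class and parameter names here are illustrative assumptions, not the authors' implementation (see the linked repository for that).

import torch
import torch.nn as nn

class AdaptiveWeightedConv(nn.Module):
    """Hypothetical adaptive weighted convolution: a per-cell inlier
    score gates grid features before a standard conv, so the filter
    attends to likely inliers and down-weights outliers."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        # 1x1 head predicting a soft inlier probability per grid cell.
        self.score_head = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        self.conv = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) correspondence features pooled onto a 2-D grid.
        w = self.score_head(x)      # (B, 1, H, W) inlier weights in (0, 1)
        return self.conv(x * w)     # conv aggregates mostly inlier cells

if __name__ == "__main__":
    feats = torch.randn(2, 128, 16, 16)      # toy grid of match features
    out = AdaptiveWeightedConv(128)(feats)
    print(out.shape)                         # torch.Size([2, 128, 16, 16])

Gating before (rather than after) the convolution means cells dominated by outliers contribute little to any neighborhood aggregate, which matches the abstract's stated goal of letting filters focus on inliers while ignoring outliers.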
