Common-feature-track-matching approach for multi-epoch UAV photogrammetry co-registration
ISPRS Journal of Photogrammetry and Remote Sensing (IF 10.6), Pub Date: 2024-11-14, DOI: 10.1016/j.isprsjprs.2024.10.025. Xinlong Li, Mingtao Ding, Zhenhong Li, Peng Cui
Automatic co-registration of multi-epoch Unmanned Aerial Vehicle (UAV) image sets remains challenging due to the radiometric differences in complex dynamic scenes. Specifically, illumination changes and vegetation variations usually lead to insufficient and spatially unevenly distributed common tie points (CTPs), resulting in under-fitting of the co-registration near areas without CTPs. In this paper, we propose a novel Common-Feature-Track-Matching (CFTM) approach for UAV image set co-registration, to alleviate the shortage of CTPs in complex dynamic scenes. Instead of matching features between multi-epoch images, we first search for correspondences between multi-epoch feature tracks (i.e., groups of features corresponding to the same 3D points), which avoids the removal of matches caused by unreliable estimation of the relative pose between inter-epoch image pairs. Then, the CTPs are triangulated from the successfully matched track pairs. Since an even distribution of CTPs is crucial for robust co-registration, a block-based strategy is designed, which also enables parallel computation. Finally, an iterative optimization algorithm is developed to gradually select the best CTPs and refine the poses of the multi-epoch images. We assess the performance of our method on two challenging datasets. The results show that CFTM can automatically acquire adequate and evenly distributed CTPs in complex dynamic scenes, achieving a co-registration accuracy approximately four times higher than that of the state-of-the-art in challenging scenarios. Our code is available at https://github.com/lixinlong1998/CoSfM.
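As a rough illustration of the track-matching idea described in the abstract, the following minimal Python sketch matches the feature tracks of two epochs by descriptor similarity, assuming each track is summarized by a single unit-normalized descriptor (e.g., the mean of its feature descriptors). It is a hypothetical sketch, not the authors' CFTM implementation; the helper names and toy data are invented for illustration, and the real code is in the linked repository.

```python
# Minimal, hypothetical sketch of the track-matching idea: match feature tracks
# across two epochs by descriptor similarity (mutual nearest neighbour plus a
# Lowe-style ratio test). This is NOT the authors' CFTM implementation; see
# https://github.com/lixinlong1998/CoSfM for the real code.
import numpy as np

def match_tracks(descs_e1: np.ndarray, descs_e2: np.ndarray,
                 ratio: float = 0.8) -> list[tuple[int, int]]:
    """Match epoch-1 tracks to epoch-2 tracks.

    Each row is a unit-normalized descriptor summarizing one track
    (e.g. the mean of the track's feature descriptors).
    """
    dist = 1.0 - descs_e1 @ descs_e2.T          # cosine distances
    nn12 = dist.argmin(axis=1)                  # best epoch-2 track per epoch-1 track
    nn21 = dist.argmin(axis=0)                  # best epoch-1 track per epoch-2 track
    matches = []
    for i, j in enumerate(nn12):
        if nn21[j] != i:                        # keep only mutual nearest neighbours
            continue
        second_best = np.partition(dist[i], 1)[1]
        if dist[i, j] < ratio * second_best:    # ratio test rejects ambiguous matches
            matches.append((i, j))              # candidate common tie point (CTP)
    return matches

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: 128-D track descriptors; the first 30 tracks of each epoch
    # correspond to the same 3D points (same base descriptor plus small noise).
    base = rng.normal(size=(30, 128))
    e1 = np.vstack([base + 0.05 * rng.normal(size=base.shape), rng.normal(size=(20, 128))])
    e2 = np.vstack([base + 0.05 * rng.normal(size=base.shape), rng.normal(size=(30, 128))])
    e1 /= np.linalg.norm(e1, axis=1, keepdims=True)
    e2 /= np.linalg.norm(e2, axis=1, keepdims=True)
    print(len(match_tracks(e1, e2)), "candidate cross-epoch track matches")
```

Matching at the track level means a correspondence is accepted or rejected per 3D point rather than per inter-epoch image pair, which is the property the abstract emphasizes; the actual method additionally triangulates CTPs from matched track pairs, enforces an even block-based distribution, and iteratively refines the image poses.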
Updated: 2024-11-14