Cross-modal change detection using historical land use maps and current remote sensing images
ISPRS Journal of Photogrammetry and Remote Sensing (IF 10.6), Pub Date: 2024-10-24, DOI: 10.1016/j.isprsjprs.2024.10.010
Kai Deng, Xiangyun Hu, Zhili Zhang, Bo Su, Cunjun Feng, Yuanzeng Zhan, Xingkun Wang, Yansong Duan

Using bi-temporal remote sensing imagery to detect land changes driven by urban expansion has become common practice. However, when updating land resource surveys, directly detecting changes between historical land use maps (referred to as "maps" in this paper) and current remote sensing images (referred to as "images" in this paper) is more direct and efficient than relying on bi-temporal image comparisons. The difficulty stems from the substantial modality differences between maps and images, which make effective change detection a complex challenge. To address this issue, we propose a novel deep learning model, the cross-modal patch alignment network (CMPANet), which bridges the gap between modalities for cross-modal change detection (CMCD) between maps and images. The model uses a vision transformer (ViT-B/16) fine-tuned on 1.8 million remote sensing images as the encoder for images and trainable ViTs as the encoder for maps. To bridge the distribution differences between these encoders, we introduce a feature domain adaptation image-map alignment module (IMAM) that rapidly transfers and shares pretrained model knowledge. Additionally, we incorporate a cross-modal and cross-channel attention (CCMAT) module and a transformer block attention module to facilitate the interaction and fusion of features across modalities. The fused features are then processed through a UperNet-based feature pyramid to generate pixel-level change maps. On the newly created EVLab-CMCD dataset and the publicly available HRSCD dataset, CMPANet achieves state-of-the-art results and offers a novel technical approach for CMCD between maps and images.
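To make the described pipeline concrete, below is a minimal PyTorch sketch of the dual-encoder, cross-attention design outlined in the abstract. All module names (ViTEncoder, CrossModalFusion, CMPANetSketch), hyperparameters, and the simplified fusion and decoder are illustrative assumptions: the paper's IMAM and CCMAT modules are approximated here by a single bidirectional cross-attention block, and the UperNet feature pyramid by plain upsampling. This is a sketch of the general idea, not the authors' implementation.

```python
# Illustrative sketch only: approximates the dual-ViT-encoder + cross-modal
# fusion + pixel-level decoding pipeline described in the abstract. IMAM and
# CCMAT are stand-ins (single cross-attention block); hyperparameters are
# arbitrary choices, not the paper's.
import torch
import torch.nn as nn


class ViTEncoder(nn.Module):
    """Toy ViT-style encoder: patch embedding followed by transformer layers."""

    def __init__(self, in_ch: int, dim: int = 256, depth: int = 4,
                 patch: int = 16, img_size: int = 256):
        super().__init__()
        self.patch_embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        n_tokens = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # B, N, C
        return self.blocks(tokens + self.pos)


class CrossModalFusion(nn.Module):
    """Simplified stand-in for IMAM/CCMAT: each modality attends to the other."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.attn_i2m = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.attn_m2i = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tok, map_tok):
        i, _ = self.attn_i2m(img_tok, map_tok, map_tok)  # image queries map
        m, _ = self.attn_m2i(map_tok, img_tok, img_tok)  # map queries image
        return self.norm(img_tok + i), self.norm(map_tok + m)


class CMPANetSketch(nn.Module):
    def __init__(self, dim: int = 256, patch: int = 16, img_size: int = 256):
        super().__init__()
        # In the paper the image branch is a ViT-B/16 fine-tuned on 1.8M remote
        # sensing images; here both branches are trained from scratch.
        self.image_enc = ViTEncoder(in_ch=3, dim=dim, img_size=img_size)
        self.map_enc = ViTEncoder(in_ch=3, dim=dim, img_size=img_size)
        self.fusion = CrossModalFusion(dim)
        self.head = nn.Sequential(  # crude substitute for the UperNet pyramid
            nn.Conv2d(2 * dim, dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=patch, mode="bilinear", align_corners=False),
            nn.Conv2d(dim, 1, 1),
        )
        self.grid = img_size // patch

    def forward(self, image, land_use_map):
        i_tok = self.image_enc(image)
        m_tok = self.map_enc(land_use_map)
        i_tok, m_tok = self.fusion(i_tok, m_tok)
        b, n, c = i_tok.shape
        fused = torch.cat([i_tok, m_tok], dim=-1)  # B, N, 2C
        fused = fused.transpose(1, 2).reshape(b, 2 * c, self.grid, self.grid)
        return self.head(fused)  # B, 1, H, W change logits


if __name__ == "__main__":
    model = CMPANetSketch()
    img = torch.randn(2, 3, 256, 256)  # current remote sensing image
    lum = torch.randn(2, 3, 256, 256)  # rasterized historical land use map
    print(model(img, lum).shape)       # torch.Size([2, 1, 256, 256])
```

The design point mirrored here is that each modality keeps its own encoder and the interaction happens at the token level, before a shared decoder produces the pixel-level change map; the paper's IMAM and CCMAT modules serve that alignment and interaction role in a more elaborate form.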

Updated: 2024-10-24