Contrastive learning for real SAR image despeckling
ISPRS Journal of Photogrammetry and Remote Sensing (IF 10.6), Pub Date: 2024-11-15, DOI: 10.1016/j.isprsjprs.2024.11.003
Yangtian Fang, Rui Liu, Yini Peng, Jianjun Guan, Duidui Li, Xin Tian

The use of synthetic aperture radar (SAR) has greatly improved our ability to capture high-resolution terrestrial images under various weather conditions. However, SAR imagery is affected by speckle noise, which distorts image details and hampers subsequent applications. Recent supervised deep learning-based denoising methods, such as MRDDANet and SAR-CAM, offer a promising avenue for SAR despeckling, but they are impeded by the domain gap between synthetic data and real SAR images. To tackle this problem, we introduce a self-supervised speckle-aware network that exploits limited near-real datasets and unlimited synthetic datasets simultaneously, boosting the performance of the downstream despeckling module by teaching it to discriminate the domain gaps between different datasets in the embedding space. Specifically, based on contrastive learning, the speckle-aware network first characterizes discriminative representations of spatially correlated speckle noise across images from diverse datasets, providing priors on varied speckle and image characteristics. These representations are then modulated into a subsequent multi-scale despeckling network to generate authentic despeckled images. In this way, the despeckling module can reconstruct reliable SAR image characteristics by learning from near-real datasets, while its generalization is guaranteed by simultaneously learning abundant patterns from synthetic datasets. Additionally, a novel excitation aggregation pooling module is inserted into the despeckling network to further enhance it; this module exploits features across different scales and better preserves spatial details around strong scatterers in real SAR images. Extensive experiments on real SAR datasets from the Sentinel-1, Capella-X, and TerraSAR-X satellites verify the effectiveness of the proposed method against other state-of-the-art methods. In particular, the proposed method achieves the best PSNR and SSIM values on the near-real Sentinel-1 dataset, with a gain of 0.22 dB in PSNR over MRDDANet and an improvement of 1.3% in SSIM over SAR-CAM. The code is available at https://github.com/YangtianFang2002/CL-SAR-Despeckling.
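To make the pipeline described above more concrete, below is a minimal, hypothetical PyTorch sketch of the three ideas in the abstract: a speckle-aware encoder trained with a contrastive (InfoNCE-style) loss to separate the speckle/domain statistics of different datasets, FiLM-style modulation of a despeckling backbone by the resulting embedding, and a simple excitation-aggregation pooling block that fuses average- and max-pooled channel statistics. All class names, layer sizes, and the specific losses are illustrative assumptions, not the authors' design; their released implementation is at the GitHub link above.

```python
# Minimal, hypothetical PyTorch sketch (not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpeckleAwareEncoder(nn.Module):
    """Maps a noisy SAR patch to an embedding of its speckle/domain statistics."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        return F.normalize(self.proj(z), dim=1)  # unit-norm embedding


def info_nce(anchor: torch.Tensor, positive: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """InfoNCE loss: pull embeddings of patches sharing speckle statistics together,
    push embeddings from other domains/datasets apart."""
    logits = anchor @ positive.t() / tau                     # (B, B) similarity matrix
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)


class ExcitationAggregationPooling(nn.Module):
    """Rough stand-in for excitation aggregation pooling: aggregate average- and
    max-pooled channel statistics, then re-excite (re-weight) the channels."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = F.adaptive_avg_pool2d(x, 1).flatten(1)
        mx = F.adaptive_max_pool2d(x, 1).flatten(1)
        w = self.mlp(torch.cat([avg, mx], dim=1))
        return x * w.unsqueeze(-1).unsqueeze(-1)             # channel re-weighting


class DespecklingNet(nn.Module):
    """Despeckling backbone whose features are modulated (FiLM-style) by the
    speckle-aware embedding before predicting the residual noise."""

    def __init__(self, embed_dim: int = 128, width: int = 64):
        super().__init__()
        self.head = nn.Conv2d(1, width, 3, padding=1)
        self.film = nn.Linear(embed_dim, 2 * width)          # per-channel (gamma, beta)
        self.pool = ExcitationAggregationPooling(width)
        self.body = nn.Sequential(
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, 3, padding=1),
        )

    def forward(self, noisy: torch.Tensor, speckle_code: torch.Tensor) -> torch.Tensor:
        f = F.relu(self.head(noisy))
        gamma, beta = self.film(speckle_code).chunk(2, dim=1)
        f = f * (1 + gamma[..., None, None]) + beta[..., None, None]
        f = self.pool(f)
        return noisy - self.body(f)                          # residual prediction


if __name__ == "__main__":
    enc, net = SpeckleAwareEncoder(), DespecklingNet()
    noisy = torch.rand(4, 1, 64, 64)                         # toy batch of SAR patches
    view2 = noisy + 0.01 * torch.randn_like(noisy)           # second "view" per patch
    z1, z2 = enc(noisy), enc(view2)
    loss_contrastive = info_nce(z1, z2)
    clean_hat = net(noisy, z1.detach())
    print(loss_contrastive.item(), clean_hat.shape)
```

In this sketch the speckle embedding is detached before it modulates the despeckler, so the contrastive objective shapes the domain-aware prior while a separate reconstruction loss (omitted here) would drive the despeckling weights; this split is an assumption of the sketch, not a claim about the paper's training schedule.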

Updated: 2024-11-15