HyperDehazing: A hyperspectral image dehazing benchmark dataset and a deep learning model for haze removal
ISPRS Journal of Photogrammetry and Remote Sensing (IF 10.6), Pub Date: 2024-10-05, DOI: 10.1016/j.isprsjprs.2024.09.034
Hang Fu, Ziyan Ling, Genyun Sun, Jinchang Ren, Aizhu Zhang, Li Zhang, Xiuping Jia

Haze contamination severely degrades the quality and accuracy of optical remote sensing (RS) images, including hyperspectral images (HSIs). Currently, no paired benchmark dataset containing hazy and haze-free scenes exists for HSI dehazing, and few studies have analyzed the distributional properties of haze in the spatial and spectral domains. In this paper, we developed a new haze synthesis strategy and constructed the first hyperspectral dehazing benchmark dataset (HyperDehazing), which contains 2000 pairs of synthetic HSIs covering 100 scenes and another 70 real hazy HSIs. By analyzing the distribution characteristics of haze, we further proposed a deep learning model called HyperDehazeNet for haze removal from HSIs. Haze-insensitive longwave information injection, novel attention mechanisms, a spectral loss function, and residual learning are combined to improve its dehazing and scene reconstruction capability. Comprehensive experimental results demonstrate that the HyperDehazing dataset represents complex haze in real scenes with synthetic fidelity and scene diversity, establishing itself as a new benchmark for training and assessing HSI dehazing methods. Experiments on the HyperDehazing dataset show that the proposed HyperDehazeNet effectively removes complex haze from HSIs, with outstanding spectral reconstruction and feature differentiation capabilities. Furthermore, additional experiments on real HSIs as well as the widely used Landsat-8 and Sentinel-2 datasets showcase the exceptional dehazing performance and robust generalization of HyperDehazeNet. Our method surpasses other state-of-the-art methods while maintaining high computational efficiency and a small parameter count.
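The abstract names a spectral loss function and residual learning among HyperDehazeNet's components but does not give their exact form. The sketch below is a minimal, hypothetical PyTorch example of one common choice for such a loss: a spectral-angle (SAM-style) term that penalizes deviations in per-pixel spectral shape between the dehazed output and the haze-free reference. The class name and shapes are illustrative assumptions; the loss actually used in the paper may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralAngleLoss(nn.Module):
    """Illustrative spectral-angle loss for HSI dehazing (assumed form, not the paper's exact loss).

    Treats each pixel's band vector as a spectrum and penalizes the angle
    between the predicted and reference spectra, so training preserves
    spectral shape rather than only per-band intensity.
    """
    def __init__(self, eps: float = 1e-8):
        super().__init__()
        self.eps = eps

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # pred, target: (batch, bands, height, width)
        cos = F.cosine_similarity(pred, target, dim=1, eps=self.eps)  # per-pixel cosine of spectral angle
        angle = torch.acos(cos.clamp(-1.0 + self.eps, 1.0 - self.eps))
        return angle.mean()

if __name__ == "__main__":
    # Hypothetical 100-band HSI patches (batch of 2, 64x64 pixels).
    loss_fn = SpectralAngleLoss()
    dehazed = torch.rand(2, 100, 64, 64)
    clean = torch.rand(2, 100, 64, 64)
    print(loss_fn(dehazed, clean).item())
```

In practice such a spectral term would typically be weighted against a per-band reconstruction loss (e.g. L1), while the network itself predicts a residual that is added back to the hazy input.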

Updated: 2024-10-05