How can geostatistics help us understand deep learning? An exploratory study in SAR-based aircraft detection
International Journal of Applied Earth Observation and Geoinformation (IF 7.6) Pub Date: 2024-10-14, DOI: 10.1016/j.jag.2024.104185
Lifu Chen, Zhenhuan Fang, Jin Xing, Xingmin Cai

Deep Neural Networks (DNNs) have garnered significant attention across various research domains due to their impressive performance; Convolutional Neural Networks (CNNs) in particular are known for their exceptional accuracy in image-processing tasks. However, the opaque nature of DNNs has raised concerns about their trustworthiness, as users often cannot understand how a model arrives at its predictions or decisions. This lack of transparency is especially problematic in high-stakes fields such as healthcare, finance, and law. Consequently, there has been a surge in the development of explanation methods for DNNs. Typically, the effectiveness of these methods is assessed subjectively, via human inspection of the heatmaps or attribution maps generated by eXplainable AI (XAI) methods. In this paper, a novel GeoStatistics Explainable Artificial Intelligence (GSEAI) framework is proposed, which integrates spatial pattern analysis from geostatistics with XAI algorithms to assess and compare XAI understandability. Global and local Moran's I indices, commonly used to assess the spatial autocorrelation of geographic data, help characterize the spatial distribution patterns of the attribution maps produced by an XAI method by measuring their degree of aggregation or dispersion. Interpreting and analyzing attribution maps with the Moran's I scatterplot and LISA cluster maps provides an objective, quantitative global assessment of the spatial distribution of feature attribution and yields a more understandable local interpretation. We conduct experiments on aircraft detection in SAR images based on the widely used YOLOv5 network and evaluate four mainstream XAI methods both quantitatively and qualitatively. By using GSEAI to analyze the explanations of a given DNN, we can gain deeper insight into the network's behavior and thereby enhance the trustworthiness of DNN applications.

To the best of our knowledge, this is the first time XAI has been integrated with geostatistical algorithms and SAR domain knowledge, which expands the analytical toolbox of XAI and promotes its development within SAR image analytics.
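The abstract does not include an implementation, and the paper's exact spatial-weight scheme is not stated here. As a rough illustration of the statistics involved, the sketch below computes global Moran's I and per-pixel local Moran's I (LISA) over a 2-D attribution map, assuming binary rook (4-neighbour) contiguity weights; these choices are assumptions for illustration, not details from the paper.

```python
import numpy as np

def _rook_pairs(z):
    """Yield (neighbour values, validity mask) for the four rook
    neighbours of every cell, excluding the wrap-around pairs that
    np.roll would otherwise introduce at the grid edges."""
    for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        shifted = np.roll(z, shift=(dr, dc), axis=(0, 1))
        mask = np.ones(z.shape, dtype=bool)
        if dr == 1:
            mask[0, :] = False
        elif dr == -1:
            mask[-1, :] = False
        if dc == 1:
            mask[:, 0] = False
        elif dc == -1:
            mask[:, -1] = False
        yield shifted, mask

def global_morans_i(grid):
    """Global Moran's I of a 2-D map with binary rook weights.
    Values near +1 indicate spatial clustering (aggregation),
    near -1 dispersion, near 0 spatial randomness."""
    z = grid - grid.mean()
    num, w_sum = 0.0, 0.0
    for shifted, mask in _rook_pairs(z):
        num += (z * shifted)[mask].sum()   # sum of w_ij * z_i * z_j
        w_sum += mask.sum()                # total weight W
    return (grid.size / w_sum) * num / (z ** 2).sum()

def local_morans_i(grid):
    """Local Moran's I (LISA) per pixel: I_i = z_i * sum_j(w_ij * z_j) / m2.
    Large positive values mark high-high / low-low clusters, the basis
    of a LISA cluster map."""
    z = grid - grid.mean()
    m2 = (z ** 2).mean()
    lag = np.zeros_like(z, dtype=float)
    for shifted, mask in _rook_pairs(z):
        lag += np.where(mask, shifted, 0.0)
    return z * lag / m2
```

On a map whose high attributions form a compact blob (e.g. one half low, one half high), global I is strongly positive; on a checkerboard pattern it approaches -1, matching the aggregation-versus-dispersion reading described above.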
