Attention-guided hierarchical fusion U-Net for uncertainty-driven medical image segmentation
Information Fusion (IF 14.7), Pub Date: 2024-10-09, DOI: 10.1016/j.inffus.2024.102719
Afsana Ahmed Munia, Moloud Abdar, Mehedi Hasan, Mohammad S. Jalali, Biplab Banerjee, Abbas Khosravi, Ibrahim Hossain, Huazhu Fu, Alejandro F. Frangi

Small inaccuracies in the system components or artificial intelligence (AI) models used for medical imaging can have serious, even life-threatening, consequences. To mitigate these risks, one must consider not only the accuracy of the image analysis outcomes (e.g., image segmentation) but also the confidence of the underlying model predictions. U-shaped architectures based on convolutional encoder–decoders have become a critical component of many AI-enabled diagnostic imaging systems. However, most existing methods focus on producing accurate diagnostic predictions without assessing the uncertainty associated with those predictions. Uncertainty maps highlight the regions of a predicted segmentation where the model is uncertain or less confident. They can direct radiologists' attention to areas where patient safety may be at risk and pave the way for trustworthy AI applications. In this paper, we therefore propose the Attention-guided Hierarchical Fusion U-Net (AHF-U-Net) for medical image segmentation. We then introduce its uncertainty-aware version, UA-AHF-U-Net, which provides an uncertainty map alongside the predicted segmentation map. The network integrates an Encoder Attention Fusion (EAF) module and a Decoder Attention Fusion (DAF) module on the encoder and decoder sides of the U-Net architecture, respectively. Both modules combine spatial and channel attention to capture relevant spatial information and to indicate which channels are most informative for a given image. Furthermore, we introduce an enhanced skip connection, named the Hierarchical Attention-Enhanced (HAE) skip connection. We evaluated the effectiveness of our model against eleven well-established methods on three popular medical image segmentation datasets consisting of coarse-grained images with unclear boundaries.
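To illustrate the kind of combined spatial and channel attention the EAF/DAF modules rely on, the following is a minimal NumPy sketch of a generic channel-then-spatial attention gate (in the spirit of squeeze-and-excitation and CBAM-style gates). It is not the paper's exact EAF/DAF design; the weight shapes, the `tanh` reduction MLP, and the unlearned spatial gate are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Channel gate from globally pooled statistics.
    feat: (C, H, W) feature map; w1 (r, C) and w2 (C, r) form a small MLP."""
    avg = feat.mean(axis=(1, 2))                       # (C,) global average pool
    mx = feat.max(axis=(1, 2))                         # (C,) global max pool
    gate = sigmoid(w2 @ np.tanh(w1 @ avg) + w2 @ np.tanh(w1 @ mx))  # (C,)
    return feat * gate[:, None, None]                  # reweight channels

def spatial_attention(feat):
    """Spatial gate from channel-pooled statistics (a learned conv would
    normally combine the pooled maps; a sum stands in for it here)."""
    avg = feat.mean(axis=0, keepdims=True)             # (1, H, W)
    mx = feat.max(axis=0, keepdims=True)               # (1, H, W)
    return feat * sigmoid(avg + mx)                    # reweight positions

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16, 16))                # toy (C, H, W) encoder feature
w1 = rng.standard_normal((4, 8)) * 0.1                 # reduction MLP (r=4)
w2 = rng.standard_normal((8, 4)) * 0.1                 # expansion MLP
out = spatial_attention(channel_attention(feat, w1, w2))
print(out.shape)  # (8, 16, 16) — attention preserves the feature shape
```

In a real network both gates are learned end-to-end and applied at every encoder/decoder stage; the point here is only that attention rescales, rather than reshapes, the feature map.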
Based on the quantitative and qualitative results, the proposed method ranks first on two of the datasets and second on the third. The code is available at: https://github.com/AfsanaAhmedMunia/AHF-Fusion-U-Net.
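One common way to produce the kind of uncertainty map described above is to aggregate several stochastic forward passes (e.g., Monte Carlo dropout) and take the per-pixel predictive entropy. The sketch below shows that aggregation for binary segmentation; it is a generic illustration, not necessarily the estimator used in UA-AHF-U-Net, and the `prob_stack` input is a hypothetical stand-in for the model's sampled outputs.

```python
import numpy as np

def uncertainty_map(prob_stack, eps=1e-9):
    """prob_stack: (T, H, W) foreground probabilities from T stochastic
    forward passes. Returns (binary segmentation, per-pixel entropy)."""
    p = prob_stack.mean(axis=0)                        # mean prediction per pixel
    seg = (p > 0.5).astype(np.uint8)                   # thresholded segmentation
    # Binary predictive entropy: 0 where the model is confident,
    # up to ln(2) where the mean probability is 0.5.
    entropy = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))
    return seg, entropy

rng = np.random.default_rng(1)
stack = rng.uniform(size=(10, 4, 4))                   # toy: 10 passes, 4x4 image
seg, unc = uncertainty_map(stack)
print(seg.shape, unc.shape)                            # (4, 4) (4, 4)
```

High-entropy pixels are exactly the regions a radiologist would be flagged to review before trusting the segmentation.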

Updated: 2024-10-09