E²-MIL: An explainable and evidential multiple instance learning framework for whole slide image classification
Medical Image Analysis (IF 10.7), Pub Date: 2024-08-06, DOI: 10.1016/j.media.2024.103294
Jiangbo Shi, Chen Li, Tieliang Gong, Huazhu Fu

Multiple instance learning (MIL)-based methods have been widely adopted to process whole slide images (WSIs) in the field of computational pathology. Due to the sparse slide-level supervision, these methods usually lack good localization of tumor regions, leading to poor interpretability. Moreover, they lack robust uncertainty estimation of prediction results, leading to poor reliability. To address these two limitations, we propose an explainable and evidential multiple instance learning (E²-MIL) framework for whole slide image classification. E²-MIL is composed of three modules: a detail-aware attention distillation module (DAM), a structure-aware attention refinement module (SRM), and an uncertainty-aware instance classifier (UIC). Specifically, DAM helps the global network locate more detail-aware positive instances by utilizing complementary sub-bags to learn detailed attention knowledge from the local network. In addition, a masked self-guidance loss is introduced to help bridge the gap between slide-level labels and the instance-level classification task. SRM generates a structure-aware attention map that locates the entire tumor region structure by effectively modeling the spatial relations between clustered instances. Moreover, based on subjective logic theory, UIC provides accurate instance-level classification results and robust predictive uncertainty estimation to improve model reliability. Extensive experiments on three large multi-center subtyping datasets demonstrate the slide-level and instance-level performance superiority of E²-MIL.
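To make the evidential idea behind UIC concrete, the sketch below shows the standard subjective-logic mapping from per-class evidence to belief masses and a scalar uncertainty: evidence e_k feeds Dirichlet parameters α_k = e_k + 1, and the vacuity u = K/S (with S the total Dirichlet strength) is large exactly when evidence is scarce. This is an illustrative minimal version under the paper's cited theory, not the authors' implementation; the function name and the ReLU choice of evidence are assumptions.

```python
def subjective_logic_opinion(logits):
    """Map per-class logits to a subjective-logic opinion.

    evidence e_k = max(logit_k, 0); Dirichlet alpha_k = e_k + 1;
    strength S = sum(alpha); belief b_k = e_k / S; uncertainty u = K / S.
    By construction sum(b) + u = 1, so u shrinks as evidence accumulates.
    """
    k = len(logits)
    evidence = [max(x, 0.0) for x in logits]    # non-negative evidence per class
    alpha = [e + 1.0 for e in evidence]         # Dirichlet concentration parameters
    strength = sum(alpha)                       # total Dirichlet strength S
    belief = [e / strength for e in evidence]   # belief mass assigned to each class
    uncertainty = k / strength                  # vacuity: high when evidence is scarce
    probs = [a / strength for a in alpha]       # expected class probabilities
    return belief, uncertainty, probs

# Abundant evidence for class 0 -> confident opinion, low uncertainty (u = 2/11)
b, u, _ = subjective_logic_opinion([9.0, 0.0])
# No evidence at all -> vacuous opinion, maximal uncertainty (u = 1)
_, u_vac, _ = subjective_logic_opinion([0.0, 0.0])
```

An instance classifier trained this way can flag low-confidence patches (high u) for exclusion or review, which is the reliability mechanism the abstract attributes to UIC.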

Updated: 2024-08-06