2024 IEEE Conference on Artificial Intelligence
IEEE Computational Intelligence Magazine ( IF 10.3 ) Pub Date : 2024-01-08 , DOI: 10.1109/mci.2023.3338040


Transformer-based methods offer a promising way to model the global context of gigapixel whole slide images (WSIs); however, two main problems remain when applying Transformers to the WSI-based survival analysis task. First, training data for survival analysis are limited, which makes the model prone to overfitting. This problem is even worse for Transformer-based models, which require large-scale data to train. Second, a WSI has extremely high resolution (up to 150,000 × 150,000 pixels) and is typically organized as a multi-resolution pyramid. A vanilla Transformer cannot model the hierarchical structure of a WSI (such as patch cluster-level relationships), which makes it incapable of learning hierarchical WSI representations. To address these problems, this article proposes a novel Sparse and Hierarchical Transformer (SH-Transformer) for survival analysis. Specifically, we introduce sparse self-attention to alleviate the overfitting problem, and propose a hierarchical Transformer structure to learn hierarchical WSI representations. Experimental results on three WSI datasets show that the proposed framework outperforms state-of-the-art methods.
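The abstract does not detail which sparse-attention pattern SH-Transformer uses, so the following is only a generic illustration of the idea: restricting each query to its top-k highest-scoring keys, which reduces the number of effective attention connections and can act as a regularizer on small datasets. All names here (`topk_sparse_attention`, the choice of `k`) are illustrative assumptions, not the paper's method.

```python
import numpy as np

def topk_sparse_attention(Q, K, V, k=4):
    """Self-attention where each query attends only to its top-k keys.

    Generic top-k sparsification sketch; the exact sparse-attention
    mechanism of SH-Transformer is not specified in the abstract.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # (n, n) scaled dot-product logits
    # Threshold at the k-th largest logit in each row; mask the rest.
    kth = np.partition(scores, -k, axis=-1)[:, -k:].min(axis=-1, keepdims=True)
    masked = np.where(scores >= kth, scores, -np.inf)
    # Numerically stable softmax over the surviving (sparse) entries.
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
n, d = 8, 16                      # e.g. 8 patch embeddings of dim 16
x = rng.normal(size=(n, d))
out = topk_sparse_attention(x, x, x, k=4)
print(out.shape)
```

In a WSI pipeline, `x` would be patch (or patch-cluster) embeddings; a hierarchical variant would apply such attention first within clusters and then across cluster summaries.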

Updated: 2024-01-08