Deep Generative Adversarial Reinforcement Learning for Semi-Supervised Segmentation of Low-Contrast and Small Objects in Medical Images
IEEE Transactions on Medical Imaging (IF 8.9), Pub Date: 2024-04-01, DOI: 10.1109/tmi.2024.3383716
Chenchu Xu, Tong Zhang, Dong Zhang, Dingwen Zhang, Junwei Han

Deep reinforcement learning (DRL) has demonstrated impressive performance in medical image segmentation, particularly for low-contrast and small medical objects. However, current DRL-based segmentation methods are limited by error propagation between two separately optimized stages and by the need for a significant amount of labeled data. In this paper, we propose a novel deep generative adversarial reinforcement learning (DGARL) approach that, for the first time, enables end-to-end semi-supervised medical image segmentation in the DRL domain. DGARL establishes a pipeline that integrates DRL and generative adversarial networks (GANs) to optimize the detection and segmentation tasks holistically while they mutually enhance each other. Specifically, DGARL introduces two innovative components to facilitate this integration in semi-supervised settings. First, a task-joint GAN with two discriminators links the detection results to the GAN’s segmentation performance evaluation, allowing simultaneous joint evaluation and feedback. This ensures that the DRL agent and the GAN can be directly optimized based on each other’s results. Second, a bidirectional exploration DRL integrates backward exploration with forward exploration, ensuring the agent explores in the correct direction even when forward exploration is disabled by the lack of explicit rewards. This mitigates the problem that unlabeled data provides no rewards and would otherwise render the DRL unexplorable. Comprehensive experiments on three generalization datasets, comprising a total of 640 patients, show that compared to the ten most recent advanced methods, DGARL achieves 85.02% Dice (an improvement of at least 1.91%) for brain tumors, 73.18% Dice (at least 4.28% improvement) for liver tumors, and 70.85% Dice (at least 2.73% improvement) for the pancreas. These results attest to the superiority of DGARL. Code is available at GitHub.
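The Dice scores reported above use the standard Dice similarity coefficient between a predicted and a ground-truth binary mask. A minimal sketch of that metric (the function name and epsilon smoothing are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    pred, target: 0/1 arrays of the same shape.
    eps guards against division by zero when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Two 2x3 toy masks: overlap of 2 pixels, 3 foreground pixels each.
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(a, b), 4))  # 2*2 / (3+3) ≈ 0.6667
```

A score of 1.0 indicates perfect overlap; for small objects such as pancreatic tumors, Dice is preferred over pixel accuracy because the background dominates the image.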

Updated: 2024-04-01