An objective comparison of methods for augmented reality in laparoscopic liver resection by preoperative-to-intraoperative image fusion from the MICCAI2022 challenge
Medical Image Analysis ( IF 10.7 ) Pub Date : 2024-10-22 , DOI: 10.1016/j.media.2024.103371 Sharib Ali, Yamid Espinel, Yueming Jin, Peng Liu, Bianca Güttner, Xukun Zhang, Lihua Zhang, Tom Dowrick, Matthew J. Clarkson, Shiting Xiao, Yifan Wu, Yijun Yang, Lei Zhu, Dai Sun, Lan Li, Micha Pfeiffer, Shahid Farid, Lena Maier-Hein, Emmanuel Buc, Adrien Bartoli
Augmented reality for laparoscopic liver resection is a visualisation mode that allows a surgeon to localise tumours and vessels embedded within the liver by projecting them onto the laparoscopic image. During this process, preoperative 3D models extracted from Computed Tomography (CT) or Magnetic Resonance (MR) imaging data are registered to the intraoperative laparoscopic images. For 3D–2D fusion, most algorithms guide the registration with anatomical landmarks such as the liver's inferior ridge, the falciform ligament, and the occluding contours. These are usually marked by hand in both the laparoscopic image and the 3D model, which is time-consuming and error-prone. There is therefore a need to automate this process so that augmented reality can be used effectively in the operating room. We present the Preoperative-to-Intraoperative Laparoscopic Fusion challenge (P2ILF), held during the Medical Image Computing and Computer Assisted Intervention (MICCAI 2022) conference, which investigated the possibility of detecting these landmarks automatically and using them in registration. The challenge was divided into two tasks: (1) a 2D and 3D landmark segmentation task and (2) a 3D–2D registration task. The teams were provided with training data consisting of 167 laparoscopic images and 9 preoperative 3D models from 9 patients, with the corresponding 2D and 3D landmark annotations. A total of 6 teams from 4 countries participated in the challenge, and their results were assessed independently for each task. All the teams proposed deep learning-based methods for the 2D and 3D landmark segmentation task and differentiable rendering-based methods for the registration task. The proposed methods were evaluated on 16 test images and 2 preoperative 3D models from 2 patients. In Task 1, the teams were able to segment most of the 2D landmarks, while the 3D landmarks proved more challenging to segment.
In Task 2, only one team obtained acceptable qualitative and quantitative registration results. Based on the experimental outcomes, we propose three key hypotheses that determine current limitations and future directions for research in this domain.
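The landmark-guided 3D–2D registration the abstract describes can be illustrated with a minimal sketch: given 2D–3D landmark correspondences and known camera intrinsics, the camera pose is found by minimising the reprojection error. This is a hypothetical toy example, not any challenge team's method — the intrinsics `K`, the synthetic landmarks, and the Gauss-Newton solver are all assumptions made for the sketch; the actual entries optimised over segmented landmark contours with differentiable rendering rather than point correspondences.

```python
import numpy as np

def rodrigues(rvec):
    """Rotation vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def project(points3d, rvec, tvec, K):
    """Project 3D landmarks into the image with pose (rvec, tvec) and intrinsics K."""
    pc = points3d @ rodrigues(rvec).T + tvec    # points in the camera frame
    uv = pc @ K.T
    return uv[:, :2] / uv[:, 2:3]               # perspective divide -> pixels

def reprojection_rmse(params, points3d, points2d, K):
    """Root-mean-square reprojection error (pixels) for a 6-DoF pose vector."""
    pred = project(points3d, params[:3], params[3:], K)
    return float(np.sqrt(np.mean(np.sum((pred - points2d) ** 2, axis=1))))

def register(points3d, points2d, K, params0, iters=20, eps=1e-6):
    """Gauss-Newton on the reprojection residuals with a numeric Jacobian —
    a toy stand-in for the differentiable-rendering optimisation."""
    p = params0.astype(float).copy()
    n = len(points3d)
    for _ in range(iters):
        r0 = (project(points3d, p[:3], p[3:], K) - points2d).ravel()
        J = np.zeros((2 * n, 6))
        for i in range(6):           # finite-difference Jacobian, column by column
            dp = p.copy()
            dp[i] += eps
            ri = (project(points3d, dp[:3], dp[3:], K) - points2d).ravel()
            J[:, i] = (ri - r0) / eps
        # damped normal equations; solve for the Gauss-Newton update
        p -= np.linalg.solve(J.T @ J + 1e-9 * np.eye(6), J.T @ r0)
    return p
```

With synthetic landmarks this recovers a perturbed pose to sub-pixel reprojection error in a handful of iterations; in the challenge setting, the difficulty lies precisely in obtaining reliable landmark correspondences automatically, which is what Task 1 evaluated.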
Updated: 2024-10-22