Alzheimer’s disease diagnosis from multi-modal data via feature inductive learning and dual multilevel graph neural network
Medical Image Analysis (IF 10.7), Pub Date: 2024-05-28, DOI: 10.1016/j.media.2024.103213
Baiying Lei, Yafeng Li, Wanyi Fu, Peng Yang, Shaobin Chen, Tianfu Wang, Xiaohua Xiao, Tianye Niu, Yu Fu, Shuqiang Wang, Hongbin Han, Jing Qin
Multi-modal data can provide complementary information about Alzheimer’s disease (AD) and its progression from different perspectives. Such information is closely related to the diagnosis, prevention, and treatment of AD, and hence it is both necessary and valuable to study AD through multi-modal data. Existing learning methods, however, usually ignore the influence of feature heterogeneity and directly fuse features at the final stage. Furthermore, most of these methods focus only on local fusion features or global fusion features, neglecting the complementarity of features at different levels and thus failing to fully leverage the information embedded in multi-modal data. To overcome these shortcomings, we propose a novel framework for AD diagnosis that fuses gene, imaging, protein, and clinical data. Our framework learns feature representations in the same feature space for different modalities through a feature induction learning (FIL) module, thereby alleviating the impact of feature heterogeneity. Furthermore, in our framework, local and global salient multi-modal feature interaction information at different levels is extracted through a novel dual multilevel graph neural network (DMGNN). We extensively validate the proposed method on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset, and experimental results demonstrate that our method consistently outperforms other state-of-the-art multi-modal fusion methods. The code is publicly available at https://github.com/xiankantingqianxue/MIA-code.git.
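The two core ideas in the abstract, projecting heterogeneous modalities into a shared feature space and then fusing them with graph-based message passing, can be illustrated with a minimal sketch. This is not the authors' FIL or DMGNN implementation; the dimensions, random projections, and single mean-aggregation step below are hypothetical stand-ins for the learned components described in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw feature dimensions for the four modalities in the paper.
dims = {"gene": 32, "imaging": 64, "protein": 16, "clinical": 8}
common_dim = 24  # shared feature space (FIL-style alignment)

# Step 1: map each modality into the same feature space. Random linear
# maps stand in for the learned projections of the FIL module.
proj = {m: rng.standard_normal((d, common_dim)) / np.sqrt(d)
        for m, d in dims.items()}
feats = {m: rng.standard_normal(d) for m, d in dims.items()}  # one subject
nodes = np.stack([feats[m] @ proj[m] for m in dims])  # (4, common_dim)

# Step 2: one round of message passing over a fully connected modality
# graph, mixing each modality node with the mean of the others — a toy
# analogue of extracting local (per-modality) interaction features.
adj = np.ones((4, 4)) - np.eye(4)
adj = adj / adj.sum(axis=1, keepdims=True)
fused_local = np.tanh(nodes + adj @ nodes)   # local interaction features
fused_global = fused_local.mean(axis=0)      # global readout across modalities

print(fused_local.shape, fused_global.shape)  # (4, 24) (24,)
```

In the actual DMGNN, both the graph structure and the aggregation are learned and applied at multiple levels, and the local and global branches are combined for classification; the sketch only shows why aligning modalities first makes a single shared graph over them well-defined.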
