Deep Learning–Based Facial and Skeletal Transformations for Surgical Planning
Journal of Dental Research (IF 5.7), Pub Date: 2024-05-29, DOI: 10.1177/00220345241253186
J Bao 1,2,3,4,5, X Zhang 6, S Xiang 6, H Liu 7, M Cheng 1,2,3,4,5, Y Yang 8, X Huang 1,2,3,4,5, W Xiang 1,2,3,4,5, W Cui 1,2,3,4,5, H C Lai 1,2,3,4,5, S Huang 9, Y Wang 10, D Qian 6, H Yu 1,2,3,4,5

The increasing application of virtual surgical planning (VSP) in orthognathic surgery creates a critical need for accurate prediction of facial and skeletal shapes. The craniofacial relationship in patients with dentofacial deformities is still not fully understood, and transformation between facial and skeletal shapes remains challenging because of intricate anatomical structures and the nonlinear relationship between facial soft tissue and bone. In this study, a novel bidirectional 3-dimensional (3D) deep learning framework, named P2P-ConvGC, was developed and validated on a large-scale data set for accurate subject-specific transformations between facial and skeletal shapes. Specifically, a 2-stage point-sampling strategy was used to generate multiple nonoverlapping point subsets that represent high-resolution facial and skeletal shapes. Facial and skeletal point subsets were separately input into the prediction system, which predicted the corresponding skeletal and facial point subsets via the skeletal prediction subnetwork and the facial prediction subnetwork, respectively. For quantitative evaluation, accuracy was measured with shape errors and landmark errors between the predicted skeleton or face and the corresponding ground truth. The shape error was calculated by comparing the predicted point sets with the ground truths, with P2P-ConvGC outperforming existing state-of-the-art algorithms, including P2P-Net, P2P-ASNL, and P2P-Conv. The total landmark errors (Euclidean distances of craniomaxillofacial landmarks) of P2P-ConvGC for the upper skull, mandible, and facial soft tissue were 1.964 ± 0.904 mm, 2.398 ± 1.174 mm, and 2.226 ± 0.774 mm, respectively. Furthermore, the clinical feasibility of the bidirectional model was validated on a clinical cohort. The results demonstrated its prediction ability, with average surface deviation errors of 0.895 ± 0.175 mm for facial prediction and 0.906 ± 0.082 mm for skeletal prediction. In conclusion, our proposed model achieved good performance in subject-specific prediction of facial and skeletal shapes and showed potential for clinical application in postoperative facial prediction and VSP for orthognathic surgery.
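
The abstract reports accuracy as landmark errors (Euclidean distances of craniomaxillofacial landmarks, mean ± SD) and as shape errors between predicted and ground-truth point sets. The sketch below illustrates this kind of evaluation in NumPy; it is not taken from the paper. The function names, the synthetic data, and the use of a symmetric nearest-neighbor (Chamfer-style) distance as a stand-in for the paper's shape-error definition are all assumptions for illustration.

```python
import numpy as np

def landmark_error(pred_landmarks: np.ndarray, gt_landmarks: np.ndarray):
    """Mean ± SD Euclidean distance (mm) between predicted and ground-truth
    craniomaxillofacial landmarks, both given as (N, 3) arrays."""
    dists = np.linalg.norm(pred_landmarks - gt_landmarks, axis=1)
    return dists.mean(), dists.std()

def chamfer_shape_error(pred_points: np.ndarray, gt_points: np.ndarray):
    """Symmetric nearest-neighbor (Chamfer-style) distance between the
    predicted point set (M, 3) and the ground-truth point set (K, 3).
    Used here only as an illustrative stand-in for the paper's shape error."""
    # Dense pairwise distances; fine for a few thousand points,
    # use a KD-tree for larger clouds.
    d = np.linalg.norm(pred_points[:, None, :] - gt_points[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Example with synthetic points (units: mm); prediction noise is simulated.
rng = np.random.default_rng(0)
gt = rng.normal(size=(1024, 3)) * 30.0
pred = gt + rng.normal(scale=1.0, size=gt.shape)
print(landmark_error(pred[:20], gt[:20]))   # mean ± SD over 20 landmarks
print(chamfer_shape_error(pred, gt))        # point-set shape error
```

In the study itself, such metrics would be computed per patient on the predicted versus ground-truth facial or skeletal point sets and then averaged across the cohort; the exact shape-error and surface-deviation definitions used by the authors may differ from this sketch.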
