International Journal of Computer Vision (IF 11.6), Pub Date: 2024-10-30, DOI: 10.1007/s11263-024-02248-8. Hongbin Xu, Junduan Huang, Yuer Ma, Zifeng Li, Wenxiong Kang
3D biometric techniques based on finger traits have become a new trend and have demonstrated powerful recognition and anti-counterfeiting ability. Existing methods follow an explicit 3D pipeline that first reconstructs 3D models and then extracts features from them. However, these explicit 3D methods suffer from two problems: (1) inevitable information loss during 3D reconstruction; (2) tight coupling between specific hardware and the reconstruction algorithm. This leads us to a question: is it indispensable to reconstruct 3D information explicitly for recognition tasks? Hence, we consider the problem in an implicit manner, leaving the nerve-wracking 3D reconstruction to learnable neural networks with the help of neural radiance fields (NeRFs). We propose FingerNeRF, a novel generalizable NeRF for 3D finger biometrics. To handle the shape-radiance ambiguity problem, which can yield incorrect 3D geometry, we introduce extra geometric priors based on the correspondence of binary finger traits such as fingerprints or finger veins. First, we propose a novel Trait Guided Transformer (TGT) module that enhances feature correspondence under the guidance of finger traits. Second, we impose extra geometric constraints on the volume rendering loss through the proposed Depth Distillation Loss and Trait Guided Rendering Loss. To evaluate the proposed method on different modalities, we collect two new datasets: SCUT-Finger-3D with finger images and SCUT-FingerVein-3D with finger vein images. We also use the UNSW-3D dataset with fingerprint images for evaluation. In experiments, FingerNeRF achieves 4.37% EER on the SCUT-Finger-3D dataset, 8.12% EER on the SCUT-FingerVein-3D dataset, and 2.90% EER on the UNSW-3D dataset, demonstrating the superiority of the proposed implicit method for 3D finger biometrics.
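The EER figures reported above are the standard biometric operating point at which the false accept rate (FAR) equals the false reject rate (FRR). A minimal sketch of how such a value can be computed from matcher similarity scores follows; the function name and the score arrays are illustrative, not taken from the paper, and the paper's actual evaluation protocol may differ:

```python
import numpy as np

def compute_eer(genuine, impostor):
    """Approximate the Equal Error Rate by sweeping decision thresholds.

    genuine  -- similarity scores of matched (same-finger) pairs
    impostor -- similarity scores of non-matched (different-finger) pairs
    Returns the rate at the threshold where |FAR - FRR| is smallest.
    """
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, eer = float("inf"), None
    for t in thresholds:
        far = np.mean(impostor >= t)  # impostors wrongly accepted
        frr = np.mean(genuine < t)    # genuine pairs wrongly rejected
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2
    return eer
```

A lower EER means the genuine and impostor score distributions are better separated; perfectly separated scores give an EER of 0.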
Title: Improving 3D Finger Traits Recognition via Generalizable Neural Rendering