Towards Data-Centric Face Anti-spoofing: Improving Cross-Domain Generalization via Physics-Based Data Synthesis
International Journal of Computer Vision (IF 11.6), Pub Date: 2024-10-17, DOI: 10.1007/s11263-024-02240-2
Rizhao Cai, Cecelia Soh, Zitong Yu, Haoliang Li, Wenhan Yang, Alex C. Kot

Face Anti-Spoofing (FAS) research is challenged by the cross-domain problem: a domain gap exists between training and testing data. Recent FAS work is mainly model-centric, focusing on domain-generalization algorithms to improve cross-domain performance, while data-centric research, which improves generalization through data quality and quantity, is largely ignored. Our work therefore starts with data-centric FAS, conducting a comprehensive investigation of cross-domain generalization from the data perspective. More specifically, based on the physical procedures of capturing and recapturing, we first propose task-specific FAS data augmentation (FAS-Aug), which increases data diversity by synthesizing artifacts such as printing noise, color distortion, and moiré patterns. Our experiments show that FAS-Aug surpasses traditional image augmentation in training FAS models for better cross-domain performance. Nevertheless, we observe that models may come to rely on the augmented artifacts, which are not environment-invariant, so using FAS-Aug alone may have a negative effect. We therefore propose Spoofing Attack Risk Equalization (SARE) to prevent models from relying on certain types of artifacts and to improve generalization performance. Finally, our proposed FAS-Aug and SARE, combined with recent Vision Transformer backbones, achieve state-of-the-art performance on FAS cross-domain generalization protocols. The implementation is available at https://github.com/RizhaoCai/FAS-Aug.
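To make the idea of physics-based artifact synthesis concrete, the sketch below shows two simplified augmentations of the kind the abstract describes: a sinusoidal overlay approximating the moiré interference that arises when a screen is recaptured by a camera, and per-channel gain jitter approximating color reproduction error in print or replay attacks. This is a minimal illustration, not the authors' FAS-Aug implementation; all parameter values and function names here are hypothetical.

```python
import numpy as np

def add_moire_pattern(img, freq=0.3, angle=0.5, strength=12.0):
    """Overlay a rotated sinusoidal grating approximating moiré artifacts.

    Recapture moiré comes from the beat between the display pixel grid and
    the camera sensor grid; a single grating is a crude stand-in for it.
    `freq`, `angle`, and `strength` are illustrative, not from the paper.
    """
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    phase = freq * (xs * np.cos(angle) + ys * np.sin(angle))
    grating = strength * np.sin(phase)
    out = img.astype(np.float32) + grating[..., None]  # broadcast over RGB
    return np.clip(out, 0, 255).astype(np.uint8)

def color_distortion(img, seed=None):
    """Apply random per-channel gains, mimicking the color shift of a
    printed photo or replayed video relative to the live scene."""
    rng = np.random.default_rng(seed)
    gains = rng.uniform(0.85, 1.15, size=3)
    out = img.astype(np.float32) * gains
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: turn a dummy 64x64 RGB face crop into a spoof-like sample.
face = np.full((64, 64, 3), 128, dtype=np.uint8)
spoof_like = color_distortion(add_moire_pattern(face), seed=0)
```

In a training pipeline, such transforms would be applied to bona fide images to synthesize additional spoof-class samples, which is the sense in which the augmentation is "physics-based": each transform models a step of the physical recapture process rather than a generic photometric perturbation.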




Updated: 2024-10-18