Mf-net: multi-feature fusion network based on two-stream extraction and multi-scale enhancement for face forgery detection
Complex & Intelligent Systems (IF 5.0) Pub Date: 2024-11-09, DOI: 10.1007/s40747-024-01634-6
Hanxian Duan, Qian Jiang, Xin Jin, Michal Wozniak, Yi Zhao, Liwen Wu, Shaowen Yao, Wei Zhou

As face forgery techniques grow more sophisticated, the images they generate are becoming increasingly realistic and difficult for the human eye to distinguish from genuine ones. Such forgeries enable fraud and social engineering attacks in facial recognition and identity verification systems. Researchers have therefore devoted considerable effort to face forgery detection and made significant progress. Current detection algorithms achieve high accuracy in within-dataset evaluation, but their generalization performance in cross-dataset scenarios remains unsatisfactory. To improve cross-dataset detection performance, this paper proposes a multi-feature fusion network based on two-stream extraction and multi-scale enhancement. First, we design a two-stream feature extraction module to obtain richer feature information. Second, a multi-scale feature enhancement module is proposed to focus the model on information relevant to the current sub-region at different scales. Finally, during training, the forgery detection module computes the overlap between the features of the input image and those of real images to determine the forged regions. This encourages the model to mine forgery cues and to learn generic, robust features that are not limited to a particular artifact type, yielding high detection accuracy. We achieve AUCs of 99.70% and 90.71% on the FaceForensics++ and WildDeepfake datasets, and cross-dataset generalization experiments on Celeb-DF-v2 and WildDeepfake yield AUCs of 80.16% and 65.15%. Comparisons with multiple methods on other benchmark datasets confirm the superior generalization of the proposed method while maintaining detection accuracy. Our code can be found at: https://github.com/1241128239/MFNet.
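The abstract describes the architecture only at a high level. As a rough illustration of the two-stream extraction and multi-scale enhancement ideas, the PyTorch sketch below pairs an RGB stream with a high-frequency residual stream and re-weights the fused features at several scales before a simple real/fake head. All module names, the fixed Laplacian high-pass filter, the attention-style re-weighting, and the classification head are assumptions made for exposition, not the authors' design; the training-time forgery detection module (feature overlap with real images) is omitted. The authors' actual implementation is available at the GitHub link above.

# Illustrative sketch only: module names, backbone choices, and fusion logic
# are assumptions; the authors' code is at https://github.com/1241128239/MFNet.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoStreamExtractor(nn.Module):
    """Hypothetical two-stream extractor: an RGB stream and a
    high-frequency residual stream, concatenated channel-wise."""
    def __init__(self, out_channels=64):
        super().__init__()
        self.rgb_stream = nn.Sequential(
            nn.Conv2d(3, out_channels, 3, padding=1), nn.ReLU(inplace=True))
        # High-pass filtering approximated by a fixed per-channel Laplacian kernel.
        lap = torch.tensor([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]])
        self.register_buffer("hp_kernel", lap.expand(3, 1, 3, 3).clone())
        self.freq_stream = nn.Sequential(
            nn.Conv2d(3, out_channels, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        rgb_feat = self.rgb_stream(x)
        residual = F.conv2d(x, self.hp_kernel, padding=1, groups=3)
        freq_feat = self.freq_stream(residual)
        return torch.cat([rgb_feat, freq_feat], dim=1)


class MultiScaleEnhancement(nn.Module):
    """Hypothetical multi-scale enhancement: pool the fused features at
    several scales, derive per-channel weights, and add the re-weighted maps."""
    def __init__(self, channels, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.attn = nn.ModuleList(
            [nn.Conv2d(channels, channels, 1) for _ in scales])

    def forward(self, feat):
        h, w = feat.shape[-2:]
        out = feat
        for scale, conv in zip(self.scales, self.attn):
            pooled = F.adaptive_avg_pool2d(
                feat, (max(h // scale, 1), max(w // scale, 1)))
            weight = torch.sigmoid(conv(pooled))
            out = out + feat * F.interpolate(weight, size=(h, w), mode="nearest")
        return out


class MFNetSketch(nn.Module):
    """Toy end-to-end classifier: extract, enhance, pool, predict real/fake."""
    def __init__(self):
        super().__init__()
        self.extract = TwoStreamExtractor(out_channels=64)
        self.enhance = MultiScaleEnhancement(channels=128)
        self.head = nn.Linear(128, 2)

    def forward(self, x):
        feat = self.enhance(self.extract(x))
        pooled = F.adaptive_avg_pool2d(feat, 1).flatten(1)
        return self.head(pooled)


if __name__ == "__main__":
    # Quick shape check on a toy batch of two 224x224 face crops.
    logits = MFNetSketch()(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 2])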



Updated: 2024-11-09