Breaking the Limits of Reliable Prediction via Generated Data
International Journal of Computer Vision (IF 11.6) Pub Date: 2024-09-20, DOI: 10.1007/s11263-024-02221-5
Zhen Cheng, Fei Zhu, Xu-Yao Zhang, Cheng-Lin Liu

In open-world recognition for safety-critical applications, providing reliable predictions from deep neural networks has become a critical requirement. Many methods have been proposed for tasks related to reliable prediction, such as confidence calibration, misclassification detection, and out-of-distribution detection. Recently, pre-training has been shown to be one of the most effective ways to improve reliable prediction, particularly for modern networks like ViT that require large amounts of training data. However, collecting data manually is time-consuming. In this paper, taking advantage of breakthroughs in generative models, we investigate whether and how expanding the training set with generated data can improve reliable prediction. Our experiments reveal that training with a large quantity of generated data can eliminate overfitting in reliable prediction, leading to significantly improved performance. Surprisingly, classical networks such as ResNet-18, when trained on a notably large volume of generated data, can sometimes achieve performance competitive with a ViT pre-trained on a substantial real dataset.
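
As a concrete illustration of the setup described above, the sketch below shows one plausible way to mix generated images into a real training set and then score prediction reliability with the maximum softmax probability (MSP), a standard baseline for misclassification and out-of-distribution detection. This is not the authors' pipeline: the dataset choice (CIFAR-10), the `generated_cifar10/` folder, and the training schedule are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above, not the paper's exact method):
# train ResNet-18 on real data augmented with generated images, then use the
# maximum softmax probability (MSP) as a reliability score.

import torch
import torch.nn.functional as F
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize(32),
    transforms.ToTensor(),
])

# Real CIFAR-10 training data plus a folder of synthetic images produced by a
# generative model, organized one subfolder per class (hypothetical path).
real_train = datasets.CIFAR10("data", train=True, download=True, transform=transform)
generated = datasets.ImageFolder("generated_cifar10/", transform=transform)
train_loader = DataLoader(ConcatDataset([real_train, generated]),
                          batch_size=256, shuffle=True)

model = models.resnet18(num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)

model.train()
for epoch in range(100):  # illustrative schedule
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        optimizer.step()

@torch.no_grad()
def msp_scores(loader):
    """Maximum softmax probability per sample; low scores flag unreliable predictions."""
    model.eval()
    scores = []
    for images, _ in loader:
        probs = F.softmax(model(images), dim=1)
        scores.append(probs.max(dim=1).values)
    return torch.cat(scores)
```

Scores from an in-distribution test loader can then be compared against scores on an out-of-distribution set (e.g., SVHN) with a threshold-free metric such as AUROC to quantify how reliably the model separates trustworthy from untrustworthy predictions.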


