A Systematic Evaluation of the Uncertainty Calibration of Pretrained Object Detectors

International Journal of Computer Vision (IF 11.6) | Pub Date: 2024-08-31 | DOI: 10.1007/s11263-024-02219-z
Denis Huseljic, Marek Herde, Paul Hahn, Mehmet Müjde, Bernhard Sick
In the field of deep learning-based computer vision, the development of deep object detection has led to unique paradigms (e.g., two-stage or set-based) and architectures (e.g., Faster R-CNN or DETR) that enable outstanding performance on challenging benchmark datasets. Despite this, trained object detectors typically do not reliably assess uncertainty regarding their own knowledge, and the quality of their probabilistic predictions is usually poor. As these predictions are often used to inform subsequent decisions, such inaccurate probabilistic predictions must be avoided. In this work, we investigate the uncertainty calibration properties of different pretrained object detection architectures in a multi-class setting. We propose a framework to ensure a fair, unbiased, and repeatable evaluation and conduct detailed analyses assessing calibration under distributional changes (e.g., distributional shift and application to out-of-distribution data). Furthermore, by investigating the influence of different detector paradigms, post-processing steps, and suitable choices of metrics, we deliver novel insights into why poor detector calibration emerges. Based on these insights, we are able to improve the calibration of a detector simply by finetuning its last layer.
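The abstract itself ships no code; as a rough illustration of the kind of calibration measurement it describes (how closely detection confidences track empirical precision), here is a minimal sketch of a binned expected calibration error (ECE) over detection confidences. It assumes detections have already been matched against ground truth (e.g., correct class and IoU above a threshold); the names `detection_ece`, `confidences`, and `is_correct` are illustrative placeholders, not identifiers from the paper.

```python
# Minimal sketch (assumptions, not the paper's code): binned expected
# calibration error (ECE) over detection confidences. Each detection is
# assumed to carry a confidence score and a binary label saying whether it
# was matched to a ground-truth box of the correct class.
import numpy as np

def detection_ece(confidences: np.ndarray, is_correct: np.ndarray, n_bins: int = 10) -> float:
    """Confidence-weighted gap between mean confidence and empirical
    precision, accumulated over equal-width confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    is_correct = np.asarray(is_correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(confidences)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        gap = abs(confidences[in_bin].mean() - is_correct[in_bin].mean())
        ece += (in_bin.sum() / n) * gap
    return ece

# Hypothetical usage: five detections with their match outcomes.
print(detection_ece(np.array([0.9, 0.8, 0.75, 0.6, 0.3]),
                    np.array([1, 1, 0, 1, 0])))
```

The closing remark about improving calibration by finetuning only the detector's last layer can likewise be pictured with a small sketch, assuming per-detection features have already been extracted from an otherwise frozen detector; the head dimensions, features, and labels below are hypothetical placeholders rather than the authors' setup.

```python
# Minimal sketch (assumptions, not the paper's code): refit only the final
# linear classification layer on held-out, pre-extracted per-detection
# features; the rest of the detector stays untouched (frozen).
import torch
import torch.nn as nn

def finetune_head(head: nn.Linear, feats: torch.Tensor, labels: torch.Tensor,
                  epochs: int = 100, lr: float = 1e-3) -> nn.Linear:
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(head(feats), labels)  # cross-entropy on held-out detections
        loss.backward()
        opt.step()
    return head

# Hypothetical usage: 512-dim features for 1000 matched detections, 21 classes.
head = nn.Linear(512, 21)
finetune_head(head, torch.randn(1000, 512), torch.randint(0, 21, (1000,)))
```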