International Journal of Computer Vision (IF 11.6). Pub Date: 2024-11-03. DOI: 10.1007/s11263-024-02237-x. Tao Zhou, Qi Ye, Wenhan Luo, Haizhou Ran, Zhiguo Shi, Jiming Chen
Multi-object tracking (MOT) on low-frame-rate videos is a promising way to meet the computing, storage, and transmission-bandwidth constraints of edge devices. Tracking at a low frame rate poses particular challenges in the association stage, as objects in two successive frames typically exhibit much larger variations in location, velocity, appearance, and visibility than at normal frame rates. In this paper, we observe severe performance degradation of many existing association strategies caused by such variations. Although optical-flow-based methods like CenterTrack can handle large displacements to some extent thanks to their large receptive field, their temporally local nature prevents them from giving reliable displacement estimates for objects that newly appear in the current frame (i.e., were not visible in the previous frame). To overcome this local nature, we propose an online tracking method that extends the CenterTrack architecture with a new head, named APP, to recognize unreliable displacement estimates. Further, to capture the fine-grained, per-object unreliability of each displacement estimate, we extend the binary APP predictions to displacement uncertainties. To this end, we reformulate the displacement estimation task with Bayesian deep learning tools. Guided by APP predictions, we conduct association in a multi-stage manner, leveraging visual cues or historical motion cues at the corresponding stage. By rethinking the commonly used bipartite matching algorithms, we equip the proposed multi-stage association policy with a hybrid matching strategy conditioned on displacement uncertainties. Our method is robust in preserving identities in low-frame-rate video sequences. Experimental results on public datasets under various low-frame-rate settings demonstrate the advantages of the proposed method.
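To make the uncertainty-conditioned association concrete, here is a minimal illustrative sketch, not the authors' implementation: detections with low predicted displacement uncertainty are back-projected by their estimated displacement and matched to previous-frame tracks via bipartite matching, while high-uncertainty detections are deferred to a later stage (e.g., appearance- or history-based matching). The function name, the scalar isotropic uncertainty, and the threshold `tau` are simplifying assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def uncertainty_gated_association(track_pos, det_pos, det_disp, det_sigma, tau=2.0):
    """First-stage matching sketch (hypothetical helper, not the paper's code).

    track_pos : (T, 2) previous-frame track centers
    det_pos   : (D, 2) current-frame detection centers
    det_disp  : (D, 2) predicted displacement of each detection since the previous frame
    det_sigma : (D,)   scalar displacement uncertainty per detection
    Detections with det_sigma >= tau are deferred to a later association stage.
    """
    reliable = det_sigma < tau
    idx = np.flatnonzero(reliable)
    # Back-project reliable detections to their estimated previous-frame positions.
    proj = det_pos[idx] - det_disp[idx]
    # Distance normalized by uncertainty: confident estimates dominate the matching.
    cost = np.linalg.norm(track_pos[:, None, :] - proj[None, :, :], axis=-1)
    cost = cost / det_sigma[idx][None, :]
    rows, cols = linear_sum_assignment(cost)
    matches = [(int(t), int(idx[d])) for t, d in zip(rows, cols)]
    deferred = np.flatnonzero(~reliable).tolist()
    return matches, deferred

tracks = np.array([[0.0, 0.0], [10.0, 10.0]])
dets = np.array([[1.0, 1.0], [11.0, 11.0], [50.0, 50.0]])
disps = np.array([[1.0, 1.0], [1.0, 1.0], [0.0, 0.0]])
sigmas = np.array([0.5, 0.5, 5.0])
matches, deferred = uncertainty_gated_association(tracks, dets, disps, sigmas)
# The third detection is too uncertain and is deferred to the next stage.
```

In this toy example the first two detections back-project exactly onto the two tracks and are matched greedily by the Hungarian algorithm, while the high-uncertainty third detection is handed to a later stage, mirroring the multi-stage policy described in the abstract.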
APPTracker+: Displacement Uncertainty for Occlusion Handling in Low-Frame-Rate Multi-Object Tracking