LightMOT: Lightweight and anchor-free solution for tracking multiple objects in dense populations
Future Generation Computer Systems (IF 6.2), Pub Date: 2024-12-20, DOI: 10.1016/j.future.2024.107690
P Karthikeyan, Yong-Hong Liu, Pao-Ann Hsiung

Object tracking technology plays a critical role in analysing population flow in high-traffic areas such as road intersections. While existing multiple-object tracking (MOT) methods have set benchmarks for accuracy and speed, they often struggle with real-time processing in densely populated scenes, where the sheer number of objects and frequent occlusions make tracking difficult. This challenge is particularly significant in applications such as autonomous driving and security surveillance, where accurately tracking multiple objects in real time is essential for decision-making, safety, and situational awareness. To address these limitations, we introduce an anchor-free lightweight multi-object tracking (LightMOT) method. LightMOT replaces FairMOT's backbone with MobileNet to boost speed without compromising accuracy. By incorporating Dilated Convolution (DC) into MobileNet, it maintains FairMOT's feature map size while achieving faster, more efficient tracking. The architecture, designed with dedicated object detection and tracking components, effectively handles challenges such as occlusion and similar appearances. LightMOT's superior frame rates and accuracy make it well suited to real-time applications such as autonomous driving and security surveillance, representing a significant advancement in MOT technologies. Implemented in PyTorch, LightMOT achieved strong results on the MOT20 dataset, with 32.2 frames per second (FPS), 70.1% Multiple Object Tracking Accuracy (MOTA), and a 29.95% Cost Performance (CP) score, significantly surpassing FairMOT and other trackers such as Semi-TCL, RelationTrack, and CAMTrack.
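The abstract does not give layer-level details of how dilated convolution is folded into MobileNet, but the general idea of trading stride for dilation to preserve feature map resolution can be illustrated with a minimal PyTorch sketch (PyTorch being the framework named above). The DilatedDepthwiseBlock below is a hypothetical MobileNet-style depthwise-separable unit; the channel counts, dilation rate, and input size are assumptions for illustration, not the paper's configuration.

```python
# Illustrative sketch only: block structure, channels, and dilation rate are
# assumptions, not the LightMOT configuration reported in the paper.
import torch
import torch.nn as nn

class DilatedDepthwiseBlock(nn.Module):
    """MobileNet-style depthwise-separable block in which the usual stride-2
    downsampling is replaced by a dilated 3x3 depthwise convolution, so the
    spatial size of the feature map is preserved (as FairMOT's backbone expects)."""
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        # Dilated depthwise 3x3: padding = dilation keeps height and width unchanged.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=1,
                                   padding=dilation, dilation=dilation,
                                   groups=in_ch, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        # Pointwise 1x1 convolution mixes channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU6(inplace=True)

    def forward(self, x):
        x = self.act(self.bn1(self.depthwise(x)))
        return self.act(self.bn2(self.pointwise(x)))

if __name__ == "__main__":
    # A stride-4 feature map from a 1088x608 input (a common FairMOT resolution)
    # keeps its 272x152 resolution through the block, unlike a stride-2 MobileNet stage.
    x = torch.randn(1, 64, 272, 152)
    block = DilatedDepthwiseBlock(64, 128, dilation=2)
    print(block(x).shape)  # torch.Size([1, 128, 272, 152])
```

The design point illustrated here is that dilation enlarges the receptive field without further downsampling, so a lightweight backbone can still feed the high-resolution detection and re-identification heads that anchor-free trackers like FairMOT rely on.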
