EGST: An Efficient Solution for Human Gaits Recognition Using Neuromorphic Vision Sensor
IEEE Transactions on Information Forensics and Security (IF 6.3), Pub Date: 2024-06-03, DOI: 10.1109/tifs.2024.3409167
Liaogehao Chen 1 , Zhenjun Zhang 1 , Yang Xiao 2 , Yaonan Wang 1

Traditional cameras struggle in scenarios that demand low latency, high speed, and high dynamic range. In contrast, neuromorphic vision sensors (event cameras) have great potential for robotics and computer vision thanks to their high temporal resolution, high dynamic range, and ultra-low resource consumption. Event cameras are novel bio-inspired sensors that monitor the brightness change of each pixel asynchronously and output a stream of events encoding the time, position, and sign of each brightness change. Consequently, traditional computer vision methods cannot be applied directly to the event stream. Finding event representations that fully preserve event attributes, together with efficient and accurate learning approaches, is the key to unlocking the potential of event cameras. In this study, we reveal the rigid transfer from event stream to graph that has been overlooked in previous work and introduce a novel event representation, the event graph sequence (EGS), which accounts for both local and global temporal clues. Coupled with EGS, we propose a spatio-temporal pattern extracting (STPE) module to capture the spatio-temporal correlation and evolution of the EGS. Our framework, the Event Graph Sequence Transformer (EGST), exploits these event properties to provide efficient and accurate recognition. This study focuses on the event-based human gait recognition task, and EGST is evaluated on three different event-based gait datasets. The evaluation results show accuracy better than or comparable to the state of the art, while requiring very few computational resources. The code will be available at https://github.com/C19h/EGST.
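The abstract describes the core data structures involved: an event stream of (time, position, polarity) tuples and a graph built over those events. The sketch below is a minimal illustration of that general idea only, not the authors' EGS construction; the k-nearest-neighbour rule and the `time_scale` parameter are assumptions chosen for clarity.

```python
import numpy as np

def events_to_graph(events, k=2, time_scale=1e-3):
    """Build a simple k-nearest-neighbour graph over a window of events.

    events: (N, 4) array of (t, x, y, polarity) rows, as an event camera
    would produce. Each event becomes a graph node; edges connect each
    node to its k nearest neighbours in scaled (t, x, y) space.
    Illustrative only -- not the paper's EGS representation.
    """
    # scale time so temporal and spatial distances are comparable
    pts = np.column_stack([events[:, 0] / time_scale,
                           events[:, 1], events[:, 2]])
    # pairwise squared spatio-temporal distances
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-loops
    nbrs = np.argsort(d2, axis=1)[:, :k]  # k closest events per node
    return [(i, int(j)) for i in range(len(events)) for j in nbrs[i]]

# toy event stream: (timestamp_s, x, y, polarity)
ev = np.array([
    [0.000, 10, 10,  1],
    [0.001, 11, 10,  1],
    [0.002, 11, 11, -1],
    [0.050, 40, 40,  1],
])
edges = events_to_graph(ev, k=2)
```

Because the first three events are close in both time and space, they end up densely connected, while the late, distant fourth event links back only to its nearest predecessors — the kind of locality a graph representation preserves and a frame-based representation discards.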

Updated: 2024-08-22