Microsaccade-inspired event camera for robotics
Science Robotics ( IF 26.1 ) Pub Date : 2024-05-29 , DOI: 10.1126/scirobotics.adj8124
Botao He 1, 2 , Ze Wang 3, 4 , Yuan Zhou 2, 3 , Jingxi Chen 1 , Chahat Deep Singh 1 , Haojia Li 5 , Yuman Gao 2, 3 , Shaojie Shen 5 , Kaiwei Wang 4 , Yanjun Cao 3 , Chao Xu 2, 3 , Yiannis Aloimonos 1, 6, 7 , Fei Gao 2, 3 , Cornelia Fermüller 1, 6, 7
Neuromorphic vision sensors, or event cameras, have made visual perception with extremely short reaction times possible, opening new avenues for high-dynamic robotics applications. An event camera's output depends on both motion and texture. However, event cameras fail to capture object edges that are parallel to the camera motion. This problem is intrinsic to the sensor and therefore challenging to solve algorithmically. Human vision deals with perceptual fading through an active mechanism of small involuntary eye movements, the most prominent of which are called microsaccades. By moving the eyes constantly and slightly during fixation, microsaccades substantially maintain texture stability and persistence. Inspired by microsaccades, we designed an event-based perception system capable of simultaneously maintaining a low reaction time and stable texture. In this design, a rotating wedge prism was mounted in front of the aperture of an event camera to redirect light and trigger events. The geometrical optics of the rotating wedge prism allows for algorithmic compensation of the additional rotational motion, resulting in a stable texture appearance and high informational output independent of external motion. The hardware device and software solution are integrated into a system that we call the artificial microsaccade–enhanced event camera (AMI-EV). Benchmark comparisons validated the superior data quality of AMI-EV recordings in scenarios where both standard cameras and event cameras fail to deliver. Various real-world experiments demonstrated the potential of the system to facilitate robotics perception for both low-level and high-level vision tasks.
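The compensation idea described above can be illustrated with a simplified model: a thin wedge prism deviates the optical axis by a fixed angle, so rotating it at a known rate translates the image along a circle of known pixel radius, and subtracting that offset from each event's coordinates recovers a motion-stable texture. The sketch below is a minimal illustration under these assumptions; the function name, the parameters `omega` and `r`, and the simple circular-shift model are hypothetical, not the authors' actual AMI-EV compensation pipeline.

```python
import math

def compensate_events(events, omega, r, phase=0.0):
    """Remove the circular image shift induced by a rotating wedge prism.

    Assumed simplified model: the prism rotating at angular rate `omega`
    (rad/s) translates the image along a circle of radius `r` (pixels),
    so each event at time t is shifted by (r*cos(omega*t + phase),
    r*sin(omega*t + phase)). Subtracting that known offset restores
    texture that is stable under the self-induced motion.

    events: iterable of (x, y, t, polarity) tuples with t in seconds.
    Returns a list of compensated (x, y, t, polarity) tuples.
    """
    out = []
    for x, y, t, p in events:
        angle = omega * t + phase
        out.append((x - r * math.cos(angle),
                    y - r * math.sin(angle),
                    t, p))
    return out

# Two events from the same static edge pixel, captured half a prism
# revolution apart, land at opposite sides of the induced circle:
omega = 2 * math.pi * 100.0        # 100 Hz prism rotation (assumed)
r = 5.0                            # induced circular shift, pixels (assumed)
evts = [(100.0 + r, 50.0, 0.0, 1),    # offset (+r, 0) at t = 0
        (100.0 - r, 50.0, 0.005, 1)]  # offset (-r, 0) half a period later
stable = compensate_events(evts, omega, r)
# Both events map back to the same underlying pixel (100, 50).
```

In a real system the rotation phase would come from an encoder on the prism motor, and the shift would additionally depend on lens focal length and prism wedge angle; here those details are folded into the single radius parameter for clarity.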
