Learning 3D human–object interaction graphs from transferable context knowledge for construction monitoring
Computers in Industry (IF 8.2) Pub Date: 2024-09-10, DOI: 10.1016/j.compind.2024.104171
Liuyue Xie, Shreyas Misra, Nischal Suresh, Justin Soza-Soto, Tomotake Furuhata, Kenji Shimada

We propose a novel framework for detecting 3D human–object interactions (HOI) on construction sites, together with a toolkit for generating construction-related human–object interaction graphs. Computer vision methods have been adopted for construction site safety surveillance in recent years, but current methods rely on videos and images, performing safety verification against common-sense knowledge without considering 3D spatial relationships among the detected instances. We propose a new method that incorporates spatial understanding by inferring interactions directly from 3D point cloud data. The proposed model is trained on a 3D construction site dataset generated with our simulation toolkit, and it achieves 54.11% mean interaction over union (mIoU) and 72.98% mean average precision (mAP) for worker–object interaction relationship recognition. The model is also validated on PiGraphs, a benchmark dataset with 3D human–object interaction types, and compared against other existing 3D interaction detection frameworks: it achieves superior performance over the state-of-the-art model, increasing interaction detection mAP by 17.01%. Beyond the 3D interaction model, we also simulate interactions from industrial surveillance footage using MoCap and physical constraints; these simulated interactions will be released to foster future studies in the domain.
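The abstract describes the framework's output as a human–object interaction graph: workers and site objects as nodes, detected interactions as labeled edges. The abstract does not specify the graph's concrete representation, so the following is only a minimal sketch of one plausible structure; the class names, fields, and the "worker"/"ladder"/"climbing" labels are illustrative assumptions, not the paper's API.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One detected instance from the point cloud (hypothetical schema)."""
    node_id: int
    category: str            # e.g. "worker", "ladder", "toolbox"
    centroid: tuple          # 3D position summarizing the instance

@dataclass
class HOIGraph:
    """Human-object interaction graph: nodes plus labeled interaction edges."""
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)   # (human_id, object_id, interaction)

    def add_node(self, node: Node) -> None:
        self.nodes[node.node_id] = node

    def add_interaction(self, human_id: int, object_id: int, interaction: str) -> None:
        # An edge asserts that the human instance performs `interaction`
        # on the object instance.
        self.edges.append((human_id, object_id, interaction))

g = HOIGraph()
g.add_node(Node(0, "worker", (1.0, 0.5, 0.0)))
g.add_node(Node(1, "ladder", (1.2, 0.5, 0.0)))
g.add_interaction(0, 1, "climbing")
print(g.edges)  # [(0, 1, 'climbing')]
```

A graph of this shape is what a downstream safety-verification rule (e.g. "a worker climbing a ladder must have a spotter nearby") would be checked against.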
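The abstract reports 54.11% mIoU for interaction recognition but does not define the metric's exact form. As a reading aid only, here is a sketch of the standard per-class intersection-over-union averaging, assuming the paper's "interaction over union" follows this common recipe over predicted vs. ground-truth interaction labels; the toy labels below are invented.

```python
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """Per-class intersection-over-union, averaged over classes
    that appear in either the prediction or the ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                      # skip absent classes
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy example: 3 interaction classes over 8 detections.
gt   = np.array([0, 0, 1, 1, 2, 2, 2, 0])
pred = np.array([0, 1, 1, 1, 2, 2, 0, 0])
print(round(mean_iou(pred, gt, 3), 3))     # → 0.611
```

Averaging per class (rather than pooling all detections) keeps rare interaction types from being drowned out by frequent ones, which matters on construction sites where hazardous interactions are the rare ones.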

Updated: 2024-09-10