Smart and user-centric manufacturing information recommendation using multimodal learning to support human-robot collaboration in mixed reality environments
Robotics and Computer-Integrated Manufacturing ( IF 9.1 ) Pub Date : 2024-07-26 , DOI: 10.1016/j.rcim.2024.102836
Sung Ho Choi , Minseok Kim , Jae Yeol Lee

Future manufacturing systems must support customized mass production at reduced cost and be flexible enough to accommodate shifting market demands. Workers, in turn, must possess the knowledge and skills to adapt to this evolving manufacturing environment. Previous studies have attempted to provide customized manufacturing information to workers, but most did not consider the worker's situation or region of interest (ROI) and therefore struggled to deliver information tailored to the individual worker. A manufacturing information recommendation system should thus exploit not only manufacturing data but also the worker's situational information and intent to help the worker adjust to the changing working environment. This study presents a smart and user-centric manufacturing information recommendation system that harnesses a vision-text dual-encoder multimodal deep learning model to offer the most relevant information based on the worker's view and query, supporting human-robot collaboration (HRC) in a mixed reality (MR) environment. The proposed recommendation model assists the worker by analyzing the manufacturing environment image acquired from smart glasses, the worker's specific question, and the related manufacturing documents. By correlating the MR-based visual information with the worker's query through the multimodal deep learning model, the proposed approach identifies the most suitable information to recommend. The recommended information is then visualized through the MR smart glasses to support HRC. For quantitative and qualitative evaluation, we compared the proposed model with existing vision-text dual-encoder models, and the results demonstrate that the proposed approach outperforms previous studies. The proposed approach therefore has the potential to assist workers more effectively in MR-based manufacturing environments, enhancing their overall productivity and adaptability.
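
A minimal sketch of the general idea described in the abstract, not the authors' actual model: a CLIP-style vision-text dual encoder embeds the workspace image captured by the smart glasses and the worker's query, fuses them, and ranks candidate manufacturing document snippets by cosine similarity. The choice of the public "openai/clip-vit-base-patch32" checkpoint, the averaging-based fusion, and the example snippets are all assumptions made for illustration.

```python
# Sketch of dual-encoder retrieval for worker-centric information recommendation.
# Assumptions: a public CLIP checkpoint stands in for the paper's custom model,
# and image/query embeddings are fused by simple averaging.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_texts(texts):
    inputs = processor(text=texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

def embed_image(image):
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

def recommend(image, query, candidate_snippets, top_k=3):
    # Fuse the workspace image and the worker's query into one context vector
    # (average of normalized embeddings -- an assumption, not the paper's fusion).
    context = embed_image(image) + embed_texts([query])
    context = torch.nn.functional.normalize(context, dim=-1)
    doc_embs = embed_texts(candidate_snippets)      # (N, d)
    scores = (context @ doc_embs.T).squeeze(0)      # cosine similarity per snippet
    best = scores.topk(min(top_k, len(candidate_snippets)))
    return [(candidate_snippets[i], float(s)) for s, i in zip(best.values, best.indices)]

if __name__ == "__main__":
    # Hypothetical inputs: one frame from the MR smart glasses, a worker query,
    # and snippets that would come from the plant's manufacturing documents.
    frame = Image.open("workspace_frame.jpg")
    query = "How do I reset the robot gripper after a collision stop?"
    snippets = [
        "Gripper reset procedure: release air pressure, then re-home axis 6.",
        "Daily lubrication schedule for the conveyor bearings.",
        "Emergency stop recovery checklist for the assembly cell robot.",
    ]
    for text, score in recommend(frame, query, snippets):
        print(f"{score:.3f}  {text}")
```

In practice the recommended snippet would then be overlaid in the worker's MR view; the retrieval step above only illustrates how a dual-encoder correlates visual context and query text with candidate documents.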

Updated: 2024-07-26