Intelligent seam tracking in foils joining based on spatial–temporal deep learning from molten pool serial images
Robotics and Computer-Integrated Manufacturing (IF 9.1), Pub Date: 2024-07-29, DOI: 10.1016/j.rcim.2024.102840. Yuxiang Hong, Yuxuan Jiang, Mingxuan Yang, Baohua Chang, Dong Du
Vision-based weld seam tracking has become one of the key technologies for intelligent robotic welding, and weld deviation detection is an essential step. However, accurate and robust detection of weld deviations during the microwelding of ultrathin metal foils remains a significant challenge, owing to the mesoscopic scale of the fusion zone and complex time-varying interference (pulsed arcs and light reflected from the workpiece surface). In this paper, an intelligent seam tracking approach for foil joining based on spatial–temporal deep learning from serial molten pool images is proposed. Specifically, a microscopic passive vision sensor is designed to capture molten pool and seam trajectory images under pulsed arc light. A welding torch offset prediction network (WTOP-net), based on a 3D convolutional neural network (3DCNN) and long short-term memory (LSTM), is established to achieve highly accurate deviation prediction by capturing long-term dependencies of spatial–temporal features. Expert knowledge is then incorporated into the spatio-temporal features to improve the robustness of the model. In addition, the slime mould algorithm (SMA) is used to avoid local optima and improve the accuracy and efficiency of WTOP-net. Experimental results indicate that, when joining two 0.12 mm-thick stainless steel diaphragms, the maximum detection error of the proposed method stays within 0.08 mm and the average error within 0.011 mm. The proposed approach provides a basis for automated robotic seam tracking and intelligent precision manufacturing of welded ultrathin-sheet components in aerospace and other fields.
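The abstract names the slime mould algorithm (SMA) as the optimizer used to avoid local optima when tuning WTOP-net, but does not specify which variant or which hyperparameters it tunes. The following is a minimal, simplified NumPy sketch of SMA minimizing a generic objective; the function name, population settings, and simplified update rules are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sma_minimize(f, dim, bounds, pop=30, iters=200, z=0.03, seed=0):
    """Simplified Slime Mould Algorithm (SMA) for minimizing f over a box.
    Illustrative sketch only; update rules loosely follow the original SMA."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (pop, dim))          # population of candidate solutions
    best_x, best_f = None, np.inf
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        order = np.argsort(fit)                  # sort: best (smallest) first
        X, fit = X[order], fit[order]
        if fit[0] < best_f:
            best_f, best_x = fit[0], X[0].copy()
        bF, wF = fit[0], fit[-1]
        # Fitness weight W: amplified for the better half, damped for the worse half
        r = rng.random((pop, dim))
        frac = (fit[:, None] - bF) / (wF - bF + 1e-12)   # 0 for best, 1 for worst
        W = np.where(np.arange(pop)[:, None] < pop // 2,
                     1 + r * np.log10(frac + 1),
                     1 - r * np.log10(frac + 1))
        a = np.arctanh(1 - (t + 1) / iters)      # oscillation range shrinks over time
        b = 1 - (t + 1) / iters
        p = np.tanh(np.abs(fit - best_f))        # probability of approaching the best
        for i in range(pop):
            if rng.random() < z:                 # occasional random restart
                X[i] = rng.uniform(lo, hi, dim)
            elif rng.random() < p[i]:            # move relative to the best solution
                vb = rng.uniform(-a, a, dim)
                A, B = rng.integers(0, pop, 2)   # two random individuals
                X[i] = best_x + vb * (W[i] * X[A] - X[B])
            else:                                # contract in place
                vc = rng.uniform(-b, b, dim)
                X[i] = vc * X[i]
        X = np.clip(X, lo, hi)
    return best_x, best_f
```

In a seam tracking context the objective `f` would score a choice of network hyperparameters (e.g. by validation error of the deviation predictor), but here any scalar function of a vector works.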
Updated: 2024-07-29