SimPLE, a visuotactile method learned in simulation to precisely pick, localize, regrasp, and place objects
Science Robotics (IF 26.1) Pub Date: 2024-06-26, DOI: 10.1126/scirobotics.adi8808
Maria Bauza 1, Antonia Bronars 1, Yifan Hou 2, Ian Taylor 1, Nikhil Chavan-Dafle 1, Alberto Rodriguez 1

Existing robotic systems exhibit a tension between generality and precision. Deployed solutions for robotic manipulation tend to fall into the paradigm of one robot solving a single task, lacking “precise generalization,” or the ability to solve many tasks without compromising on precision. This paper explores solutions for precise and general pick and place. In precise pick and place, or kitting, the robot transforms an unstructured arrangement of objects into an organized arrangement, which can facilitate further manipulation. We propose SimPLE (Simulation to Pick Localize and placE) as a solution to precise pick and place. SimPLE learns to pick, regrasp, and place objects given only the object’s computer-aided design (CAD) model, with no prior real-world experience. We developed three main components: task-aware grasping, visuotactile perception, and regrasp planning. Task-aware grasping computes affordances of grasps that are stable, observable, and favorable to placing. The visuotactile perception model matches real observations against a set of simulated ones through supervised learning to estimate a distribution of likely object poses. Last, we computed a multistep pick-and-place plan by solving a shortest-path problem on a graph of hand-to-hand regrasps. On a dual-arm robot equipped with visuotactile sensing, SimPLE demonstrated pick and place of 15 diverse objects. The objects spanned a wide range of shapes, and SimPLE achieved successful placements into structured arrangements with 1-mm clearance more than 90% of the time for six objects and more than 80% of the time for 11 objects.
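The regrasp-planning step described above can be illustrated with a minimal sketch. This is not the authors' implementation; the graph layout, node naming, and cost function are hypothetical. Nodes stand for (arm, grasp) states, edges for hand-to-hand regrasps with an assumed cost (e.g., expected placement error), and Dijkstra's algorithm finds the cheapest pick-to-place sequence:

```python
import heapq

def shortest_regrasp_plan(edges, start, goals):
    """Dijkstra over a hypothetical regrasp graph.

    edges: dict mapping node -> list of (neighbor, cost) pairs
    start: the initial grasp state
    goals: set of grasp states from which placement succeeds
    Returns (total_cost, path), or (inf, []) if no plan exists.
    """
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node in goals:
            # Reconstruct the pick -> regrasp -> place sequence.
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        for nbr, cost in edges.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return float("inf"), []

# Toy graph: pick with the left arm ("L"), optionally regrasp with the
# right arm ("R"), and end in a placement-ready grasp. Grasp ids g0..g3
# and all costs are invented for illustration.
edges = {
    ("L", "g0"): [(("R", "g1"), 1.0), (("R", "g2"), 2.5)],
    ("R", "g1"): [(("L", "g3"), 1.0)],
    ("R", "g2"): [(("L", "g3"), 0.2)],
}
cost, plan = shortest_regrasp_plan(edges, ("L", "g0"), {("L", "g3")})
# plan is the cheapest pick -> regrasp -> place sequence through the graph
```

In the toy graph the planner prefers the two-step route through g1 (total cost 2.0) over the nominally shorter final hop through g2, since the cost of reaching g2 dominates.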

Updated: 2024-06-26