Planning with mental models – Balancing explanations and explicability
Artificial Intelligence (IF 5.1), Pub Date: 2024-07-18, DOI: 10.1016/j.artint.2024.104181
Sarath Sreedharan, Tathagata Chakraborti, Christian Muise, Subbarao Kambhampati

Human-aware planning involves generating plans that are explicable, i.e. that conform to user expectations, as well as providing explanations when such plans cannot be found. In this paper, we bring these two concepts together and show how an agent can achieve a trade-off between these two competing characteristics of a plan. To achieve this, we conceive a first-of-its-kind planner that can reason about the possibility of explaining a plan. We also explore how solutions to such problems can be expressed as "self-explaining plans", and show how this representation allows us to leverage classical planning compilations of epistemic planning to reason about this trade-off at plan generation time, without incurring the computational burden of searching the space of differences between the agent's model and the mental model of the human in the loop to arrive at the optimal trade-off. We illustrate these concepts in two well-known planning domains, as well as with a robot in a typical search and reconnaissance task. Human factor studies in the latter highlight the usefulness of the proposed approach.
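The trade-off described above can be made concrete with a toy sketch: each candidate plan has a cost under the robot's model, and an "explanation" is a set of model differences that must be communicated for the plan to appear optimal to the human. A planner can then minimize plan cost plus a weighted explanation cost. Everything below (the model features, candidate plans, and the weight `ALPHA`) is an illustrative assumption, not the paper's actual formulation or compilation.

```python
# Hypothetical toy setup: the robot's model and the human's mental model
# differ in a set of "model features" (e.g., action costs or preconditions).
# An explanation is a set of such differences communicated to the human;
# a plan is explicable once the human knows every feature it relies on.

ROBOT_MODEL = {"heavy_door", "low_battery"}  # features true in robot model
HUMAN_MODEL = set()                          # human is unaware of both

# Candidate plans: cost under the robot model, plus the model features the
# human must be told about for the plan to look optimal to them.
CANDIDATES = [
    {"name": "detour_plan",  "cost": 8,  "needs": {"heavy_door"}},
    {"name": "charge_first", "cost": 10, "needs": {"heavy_door", "low_battery"}},
    {"name": "direct_plan",  "cost": 12, "needs": set()},  # explicable as-is
]

ALPHA = 2.0  # relative cost of communicating one model difference

def total_cost(plan):
    """Plan cost plus the cheapest explanation making it explicable."""
    explanation = plan["needs"] - HUMAN_MODEL  # differences to communicate
    return plan["cost"] + ALPHA * len(explanation), explanation

best = min(CANDIDATES, key=lambda p: total_cost(p)[0])
cost, expl = total_cost(best)
print(best["name"], cost, sorted(expl))  # → detour_plan 10.0 ['heavy_door']
```

With these numbers the planner prefers a slightly inexplicable but cheap plan plus a one-item explanation (8 + 2.0 = 10.0) over a fully explicable plan (cost 12) or a cheap plan needing a longer explanation (10 + 4.0 = 14.0); shifting `ALPHA` moves the balance between explaining and being explicable.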
