Geometric Backtracking for Combined
Task and Motion Planning in Robotic Systems

J. Bidot, L. Karlsson, F. Lagriffoul, and A. Saffiotti

Abstract

Planners for real robotic systems should not only reason about abstract actions, but also about aspects related to physical execution, such as kinematics and geometry. We present an approach to hybrid task and motion planning in which state-based forward-chaining task planning is tightly coupled with motion planning and other forms of geometric reasoning. Our approach centers on the problem of geometric backtracking that arises in hybrid task and motion planning: in order to satisfy the geometric preconditions of the current action, a planner may need to reconsider geometric choices, such as grasps and poses, that were made for previous actions. Geometric backtracking is a necessary condition for completeness, but it may lead to a dramatic computational explosion due to the large size of the space of geometric states. We explore two avenues to deal with this issue: heuristics based on different geometric conditions to guide the search, and geometric constraints to prune the search space. We evaluate these approaches empirically and demonstrate that they improve the performance of hybrid task and motion planning. We demonstrate our hybrid planning approach in two domains: a real humanoid robotic platform, the DLR Justin robot, performing object manipulation tasks; and a simulated autonomous forklift operating in a warehouse.
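
To fix ideas, here is a minimal sketch of geometric backtracking, assuming a fixed symbolic plan whose actions must each be instantiated with one geometric choice (a grasp, a placement pose). The callbacks (choices_for, feasible, apply_choice) and the toy domain are illustrative assumptions, not the interface of our planner: a depth-first search tries the candidate choices for each action in turn, and when no choice satisfies the geometric preconditions of the current action, the search returns to the previous action and revises its choice.

    # Minimal sketch of geometric backtracking over a fixed symbolic plan.
    # The callbacks are illustrative assumptions, not the planner's API.

    def geometric_backtrack(plan, state, choices_for, feasible, apply_choice,
                            depth=0):
        """Instantiate each action in `plan` with a geometric choice.

        Returns a list of choices, or None. When no choice works for
        plan[depth], control returns to the caller, which tries the next
        choice for the *previous* action: the geometric backtracking step.
        """
        if depth == len(plan):
            return []                        # whole plan instantiated
        action = plan[depth]
        for choice in choices_for(action, state):
            if not feasible(action, choice, state):
                continue                     # e.g. no collision-free motion
            rest = geometric_backtrack(plan,
                                       apply_choice(action, choice, state),
                                       choices_for, feasible, apply_choice,
                                       depth + 1)
            if rest is not None:
                return [choice] + rest
        return None                          # forces backtracking one level up

    # Toy usage: two placements into three slots; slot 1 is the only slot
    # that works for "place-b", so the search must backtrack on "place-a".
    print(geometric_backtrack(
        ["place-a", "place-b"], frozenset(),
        choices_for=lambda a, s: [1, 2, 3],
        feasible=lambda a, c, s: c not in s and (a != "place-b" or c == 1),
        apply_choice=lambda a, c, s: s | {c}))   # -> [2, 1]
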
 
During the GeRT project, we had the opportunity to test our approach on different planning problems. The solution plans were to be executed on Rollin' Justin, a humanoid robot with complex kinematics. The scenarios presented below (mainly A and C) emphasize the task-planning part of our system, i.e., the capacity to reason about the symbolic preconditions and effects of actions in order to reach a given goal.
 
In scenario A, the task is to move two cups from one side of the table to the other. Initially, one cup is upright and the other is upside-down; in the goal configuration, both cups must be upright, so the planner needs to find two different action sequences to solve the problem. The first cup is moved using a hand-over, and the second cup is re-grasped using the box in the middle as a temporary location; the hand-over changes the orientation of the cup, while the re-grasp preserves it.
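
The orientation reasoning can be pictured with two toy action models; this is an illustration of the symbolic effects only, not the actual GeRT domain encoding:

    # Illustrative symbolic effects only (an assumption, not the actual
    # GeRT domain): a hand-over between the two hands flips the cup, while
    # a re-grasp via a temporary surface preserves its orientation.

    def hand_over(cup):
        """Transfer the cup from one hand to the other; orientation flips."""
        flipped = {"upright": "upside-down", "upside-down": "upright"}
        return {**cup, "orientation": flipped[cup["orientation"]]}

    def re_grasp(cup):
        """Put the cup down on a temporary surface and pick it up again;
        orientation is preserved."""
        return dict(cup)

    cup1 = {"name": "cup1", "orientation": "upside-down"}
    cup2 = {"name": "cup2", "orientation": "upright"}
    print(hand_over(cup1)["orientation"])   # upright: hand-over rights cup1
    print(re_grasp(cup2)["orientation"])    # upright: re-grasp keeps cup2
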

The following picture represents the initial geometric state.

Justin, two cups, and a box.

A video showing a run of this scenario is available here (9.3 MiB).
 
In scenario B, the task is to place three cups on a tray. The difficulty comes from the limited space on the tray and the robot's bulky hands, which make it necessary to choose the positions of the cups carefully so that all three fit on the tray. Solving this problem requires extensive geometric backtracking.
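
A toy one-dimensional version of the problem, with assumed dimensions and candidate positions, shows why backtracking is needed: the clearance the hand requires at placement time couples the position choices for the different cups, so a placement that looked fine for the first cup may have to be revised once the later cups are considered.

    # Toy 1-D illustration with assumed dimensions, not the paper's model:
    # cups occupy intervals on the tray, and the bulky hand needs extra
    # clearance around a cup while it is being placed, so the positions
    # (and the placement order) of the cups constrain each other.

    CUP = 4             # cup footprint along the tray (cm, assumed)
    HAND = 3            # hand clearance on each side during placement (cm)
    TRAY = 18           # tray length (cm, assumed)
    CANDIDATES = [8, 0, 14, 7]   # poses from a hypothetical generator

    def fits(pos, placed):
        """Cup on the tray, hand envelope clear of every placed cup."""
        lo, hi = pos - HAND, pos + CUP + HAND
        return (0 <= pos <= TRAY - CUP and
                all(hi <= p or p + CUP <= lo for p in placed))

    def place_cups(n, placed=()):
        """Backtracking search for n placement positions, or None."""
        if n == 0:
            return list(placed)
        for pos in CANDIDATES:
            if fits(pos, placed):
                result = place_cups(n - 1, placed + (pos,))
                if result is not None:
                    return result
        return None     # no position left: backtrack on the previous cup

    print(place_cups(3))   # -> [0, 14, 7]: the middle cup goes in last

With these numbers, the first candidate position (8) leads to a dead end for the remaining cups, and the search succeeds only after revising the earlier placements and placing the middle cup last.
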

The following picture represents the initial geometric state.

Justin, three cups, and a tray.

A video showing a run of this scenario is available here (5.6 MiB).
 
In scenario C, the task is to place two cups in a box, but grasping the cups may be compromised by two occluding boxes placed nearby. As in scenario B, the bulky hands and the limited space between the objects trigger geometric backtracking. Here the planner decides to move the boxes away only if they actually occlude the cups; the option to move them away is encoded in an HTN decomposition method. The planning domain model also allows re-grasping operations, which are necessary to place the leftmost cup into the box. In the future, it would be desirable to use collision statistics to determine which objects to move first.
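
This conditional behavior, moving a box away only when it actually occludes a cup, can be pictured as an HTN decomposition method. The sketch below is a hypothetical rendering (the function names and the occlusion test are assumptions, not our system's actual method language):

    # Hypothetical HTN-style decomposition (illustrative only): fetching a
    # cup expands into clearing whatever boxes occlude it, then picking it
    # up; nothing is moved when no box is in the way.

    def occluders(cup, state):
        """Boxes blocking the grasps of `cup` (geometric test stubbed)."""
        return [b for b in state["boxes"]
                if b in state["blocking"].get(cup, [])]

    def decompose_fetch(cup, state):
        """Expand the abstract task fetch(cup) into primitive subtasks."""
        subtasks = [("move-away", box) for box in occluders(cup, state)]
        subtasks.append(("pick-up", cup))
        return subtasks

    state = {"boxes": ["milk1", "milk2"],
             "blocking": {"cup1": ["milk1"]}}    # milk1 occludes cup1
    print(decompose_fetch("cup1", state))
    # [('move-away', 'milk1'), ('pick-up', 'cup1')]
    print(decompose_fetch("cup2", state))
    # [('pick-up', 'cup2')]
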

The following picture represents the initial geometric state.

Justin, two cups, two milk boxes, and an empty carton box.

A video showing a run of this scenario is available here (5.7 MiB).
 
The paper is published in the Artificial Intelligence journal. It is available online at http://dx.doi.org/10.1016/j.artint.2015.03.005.
 
This page was updated on 25 January 2016.