
1. Overview

By autonomous robot navigation we mean the ability of a robot to move purposefully and without human intervention in environments that have not been specifically engineered for it. Autonomous navigation requires a number of heterogeneous capabilities, including the ability to execute elementary goal-achieving actions, like reaching a given location; to react in real time to unexpected events, like the sudden appearance of an obstacle; to build, use and maintain a map of the environment; to determine the robot's position with respect to this map; to form plans that pursue specific goals or avoid undesired situations; and to adapt to changes in the environment.

A number of different architectures for autonomous robot navigation have been proposed in the last twenty years. These include hierarchical architectures, which partition the robot's functionalities into high-level (model and plan) and low-level (sense and execute) layers; behavior-based architectures, which achieve complex behavior by combining several simple behavior-producing units; and hybrid architectures, which combine a layered organization with a behavior-based decomposition of the execution layer. While the use of hybrid architectures is gaining increasing consensus in the field, a number of technological gaps remain.

Fuzzy techniques have already been used to address some of these problems (Sugeno and Nishida 1985; Takeuchi et al. 1988; Maeda et al. 1991). In this case study, we discuss how we have used them in the SRI International mobile robot Flakey. More details on our work on Flakey can be found in (Saffiotti et al. 1993a; Saffiotti et al. 1995; Saffiotti and Wesley 1996).

Figure 1: The architecture used in our test-bed (partial view)

Figure 1 shows a partial view of the architecture that we have developed for Flakey; the modules where we have employed fuzzy techniques are marked by thick borders. The Local Perceptual Space (LPS) collects and integrates all the information relevant to immediate sensing and acting, represented in a Cartesian plane centered on the robot. This includes information coming from the sensors, at different levels of abstraction and interpretation, and information coming from an approximate map.

Action routines are packed into behaviors: implementations of basic skills aimed at achieving or maintaining a particular goal. Behaviors are activated and combined according to the indications contained in a plan. Behaviors do not take their input directly from the sensors, but use the information maintained in the LPS. Typically, reactive behaviors, like obstacle avoidance, use low-level data (e.g., occupancy information), while goal-oriented behaviors, like wall-following or door-crossing, use more abstract representations, or descriptors, built by the higher-level perception routines or taken from the map (e.g., a representation of the door to cross). Descriptors are a convenient way to bring prior knowledge into the controller (Saffiotti et al. 1995).
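To make this data flow concrete, the Python sketch below shows one possible arrangement: behaviors read from a shared LPS rather than from raw sensors, and a plan assigns activation weights that decide how their outputs are combined. All names here (LPS, Behavior, AvoidObstacles, CrossDoor, blend) and the simple weighted-average combination are illustrative assumptions, not Flakey's actual implementation.

    # A minimal, hypothetical sketch of the data flow described above.
    # Behaviors read from a shared LPS instead of raw sensors; a plan
    # assigns activation weights that decide how their outputs are blended.

    from dataclasses import dataclass, field

    @dataclass
    class LPS:
        """Local Perceptual Space: robot-centered store of percepts and map info."""
        occupancy: list = field(default_factory=list)    # low-level obstacle points (x, y)
        descriptors: dict = field(default_factory=dict)  # abstract objects, e.g. a door to cross

    class Behavior:
        """A basic skill; returns a desired turn command given the current LPS."""
        def command(self, lps: LPS) -> float:
            raise NotImplementedError

    class AvoidObstacles(Behavior):
        # Reactive behavior: uses only low-level occupancy information.
        def command(self, lps):
            if not lps.occupancy:
                return 0.0
            nearest = min(lps.occupancy, key=lambda p: p[0] ** 2 + p[1] ** 2)
            return -0.5 if nearest[1] > 0 else 0.5   # toy rule: turn away from it

    class CrossDoor(Behavior):
        # Goal-oriented behavior: uses a descriptor built by higher-level
        # perception routines or taken from the approximate map.
        def command(self, lps):
            door = lps.descriptors.get("door")
            return 0.0 if door is None else 0.1 * door["bearing"]

    def blend(active, lps):
        """Weighted average of behavior outputs; weights come from the plan."""
        total = sum(w for _, w in active) or 1.0
        return sum(w * b.command(lps) for b, w in active) / total

    # A plan activates behaviors with context-dependent weights:
    lps = LPS(occupancy=[(1.0, 0.4)], descriptors={"door": {"bearing": 0.3}})
    turn = blend([(AvoidObstacles(), 0.8), (CrossDoor(), 0.5)], lps)

The point of this structure is that behaviors stay independent of the sensor suite: a reactive skill consumes low-level occupancy points, a goal-directed skill consumes a descriptor, and both can be re-weighted by the plan without being rewritten.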


Last updated: August 7, 1997, by A. Saffiotti