A number of different architectures for autonomous robot navigation have been proposed in the last twenty years. These include hierarchical architectures, which partition the robot's functionalities into high-level (model and plan) and low-level (sense and execute) layers; behavior-based architectures, which achieve complex behavior by combining several simple behavior-producing units; and hybrid architectures, which combine a layered organization with a behavior-based decomposition of the execution layer. While the use of hybrid architectures is gaining increasing consensus in the field, a number of technological gaps remain. Among these, we emphasize:
Figure 1 shows a partial view of the architecture that we have developed for Flakey; the modules where we have employed fuzzy techniques are marked by thick borders. The Local Perceptual Space (LPS) collects and integrates all the information relevant to immediate sensing and acting, represented in a Cartesian plane centered on the robot. This includes information coming from the sensors, at different levels of abstraction and interpretation, and information coming from an approximate map. Action routines are packed into behaviors: implementations of basic skills aimed at achieving or maintaining a particular goal. Behaviors are activated and combined according to the indications contained in a plan. Behaviors do not take their input directly from the sensors, but use the information maintained in the LPS. Typically, reactive behaviors, like obstacle avoidance, use low-level data (e.g., occupancy information), while goal-oriented behaviors, like wall-following or door-crossing, use more abstract representations, or descriptors, built by the higher-level perception routines or taken from the map (e.g., a representation of the door to cross). Descriptors are a convenient way to bring prior knowledge into the controller (Saffiotti et al. 1995).
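The data flow just described (sensors and map feed the LPS; behaviors read only from the LPS; a plan selects and combines behaviors) can be illustrated with a minimal sketch. All class and field names here (`LPS`, `Behavior`, `execute_step`, the `occupancy` and `descriptors` fields) are hypothetical illustrations, not the actual Flakey interfaces, and the weighted-sum combination is a stand-in for the fuzzy blending used in the real system:

```python
from dataclasses import dataclass, field

@dataclass
class LPS:
    """Local Perceptual Space: robot-centered store of perceptual data.

    Holds both low-level sensor data and abstract descriptors built by
    perception routines or taken from the approximate map.
    (Hypothetical structure, for illustration only.)
    """
    occupancy: list = field(default_factory=list)    # low-level data, e.g. occupied cells
    descriptors: dict = field(default_factory=dict)  # named abstract objects, e.g. a door

class Behavior:
    """A basic skill; reads the LPS, never the raw sensors."""
    def control(self, lps: LPS) -> float:
        raise NotImplementedError

class AvoidObstacles(Behavior):
    """Reactive behavior: uses low-level occupancy information."""
    def control(self, lps: LPS) -> float:
        # Toy steering correction proportional to nearby occupancy.
        return -0.1 * len(lps.occupancy)

class CrossDoor(Behavior):
    """Goal-oriented behavior: uses an abstract descriptor of the door."""
    def __init__(self, door_name: str):
        self.door_name = door_name
    def control(self, lps: LPS) -> float:
        door = lps.descriptors.get(self.door_name)
        return 0.0 if door is None else door["heading"]

def execute_step(behaviors, weights, lps):
    """Combine the active behaviors' outputs as directed by the plan.

    A plain weighted sum here; the actual system uses fuzzy combination.
    """
    return sum(w * b.control(lps) for b, w in zip(behaviors, weights))

# One control step: the plan has activated both behaviors with equal weight.
lps = LPS(occupancy=[(1.0, 0.5)], descriptors={"door-5": {"heading": 0.2}})
command = execute_step([AvoidObstacles(), CrossDoor("door-5")], [0.5, 0.5], lps)
```

Note how the descriptor for "door-5" carries prior knowledge (its heading) into the controller without the door ever being sensed directly, mirroring the role of descriptors in the text.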
Figure 1: The architecture used in our test-bed (partial view)