
The Cognitive Robotic Systems Laboratory at AASS

(Previously "Mobile Robotics Lab")


Our Completed Theses

The following are the theses completed at the AASS Mobile Robotics Lab. These include both PhD theses and "Licentiate" theses: the latter is a Swedish degree obtained halfway through the doctoral studies. Online versions of most theses can be obtained by following the links to the university library. For the other theses, please write a note to the author, to the supervisor(s), or to the Lab director.

2009
-> Robert Lundh. Robots that Help Each Other: Self-Configuration of Distributed Robot Systems.
PhD Thesis. Örebro University, Örebro, Sweden, May 2009.
Main supervisor: A. Saffiotti. Co-supervisor: L. Karlsson.
[University library entry, including PDF]

Abstract: Imagine the following situation. You give your favorite robot, named Pippi, the task to fetch a heavy parcel that just arrived at your front door. While pushing the parcel back to you, she must travel through a door. Unfortunately, the parcel she is pushing is blocking her camera, making it hard for her to see the door. If she cannot see the door, she cannot safely push the parcel through it. What would you as a human do in a similar situation? Most probably you would ask someone for help, someone to guide you through the door, as we ask for help when we need to park our car in a tight parking spot. Why not let the robots do the same? Why not let robots help each other? Luckily for Pippi, there is another robot, named Emil, vacuum cleaning the floor in the same room. Since Emil has a video camera and can view both Pippi and the door at the same time, he can estimate Pippi's position relative to the door and use this information to guide Pippi through the door by wireless communication. In that way he can enable Pippi to deliver the parcel to you. The goal of this thesis is to endow robots with the ability to help each other in a similar way.

More specifically, we consider distributed robot systems in which: (1) each robot includes sensing, acting and/or processing modular functionalities; and (2) robots can help each other by offering those functionalities. A functional configuration is any way to allocate and connect functionalities among the robots. An interesting feature of a system of this type is the possibility to use different functional configurations to make the same set of robots perform different tasks, or to perform the same task under different conditions. In the above example, Emil is offering a perceptual functionality to Pippi. In a different situation, Emil could offer his motion functionality to help Pippi push a heavier parcel.

In this thesis, we propose an approach to automatically generate, at run time, a functional configuration of a distributed robot system to perform a given task in a given environment, and to dynamically change this configuration in response to failures. Our approach is based on artificial intelligence planning techniques, and it is provably sound, complete and optimal.
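
To make the idea concrete, here is a minimal Python sketch of how a functional configuration could be generated by backward-chaining over module inputs and outputs. All names and the module model are invented for illustration; the thesis' actual planner is more general and comes with the soundness, completeness and optimality guarantees mentioned above.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Functionality:
        name: str
        robot: str
        inputs: tuple   # data types this module consumes
        outputs: tuple  # data types this module produces

    def configure(goal, modules):
        """Backward-chain from a required data type to a list of
        (producer, consumer, data) channels; None if unsatisfiable.
        Toy version: no cycle handling, no optimality."""
        for f in modules:
            if goal in f.outputs:
                channels = []
                for needed in f.inputs:
                    sub = configure(needed, modules)
                    if sub is None:
                        break
                    producer, sub_channels = sub
                    channels += sub_channels + [(producer.name, f.name, needed)]
                else:
                    return f, channels
        return None

    # Toy instance of the door-crossing example: Emil's camera offers
    # the perceptual functionality that Pippi's navigation is missing.
    modules = [
        Functionality("emil-camera", "Emil", (), ("pose-of-pippi",)),
        Functionality("pippi-nav", "Pippi", ("pose-of-pippi",), ("cross-door",)),
    ]
    top, channels = configure("cross-door", modules)
    print(channels)  # [('emil-camera', 'pippi-nav', 'pose-of-pippi')]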

In order to handle tasks that require more than one step (i.e., one configuration) to be accomplished, we also show how methods for automatic configuration can be integrated with methods for task planning in order to produce a complete plan where each step is a configuration. For the scenario above, generating a complete plan before the execution starts enables Pippi to know beforehand whether she can fetch the parcel or not. We also propose an approach to merge configurations, which enables concurrent execution of configurations. Merging of configurations can be used for parallel execution of a sequence of configurations to reduce execution time (as demonstrated in this thesis), and for guaranteeing safe execution of multiple configurations generated by the same or different configuration processes.

We demonstrate the applicability of our approach on a specific type of distributed robot system, called Peis-Ecology, and show experiments in which configurations and sequences of configurations are automatically generated and executed on real robots. Further, we present an experiment where merged configurations are created and executed on simulated robots.

-> Mattias Seeman. Tele-Operated Remote Inspection with a Spherical Robot.
Licentiate Thesis. Örebro University, Örebro, Sweden, December 2009.
Main supervisor: M. Broxvall. Co-supervisor: A. Saffiotti.

Abstract: Robotic remote inspection, such as security surveillance or disaster area examination, can be efficient and at the same time protect humans from danger. However, conventional tele-operation interfaces impose a cognitive burden on the operator, who is typically not a robotics expert, and high-level information interpretation and decision-making are neglected. This work investigates the use of a spherical robot for remote inspection. Ball-shaped robots are inherently stable and robust, encapsulating all sensors and moving parts. They are, however, oscillation-prone, and this causes some control and perception challenges.

We take the disadvantages of spherical robots into consideration in the development of a tele-operation user interface for remote inspection. The interface must promote good situation awareness given sensor data from an oscillating robot. It must also allow precise control over the spherical robot, which exhibits some peculiar motion patterns.

We consider remote inspection with a tele-operated spherical robot in an adjustable autonomy framework where control, perceptual, and interaction autonomy are treated separately. With no control or interaction autonomy present in the system, we try to improve situation awareness by visualizing the robot and sensor data in a 3D virtual reality (VR), and also by introducing low-level perceptual autonomy features in the user interface: image stabilization in virtual reality, and a virtual 360° panorama.
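
As a rough sketch of the image-stabilization idea, one can counter-rotate displayed image points by the robot's measured attitude so the operator sees a steady horizon. This is a 2D, roll-only toy of our own making; the thesis applies stabilization inside a 3D VR visualization.

    import math

    def stabilize(pixel, roll_rad, center=(320, 240)):
        """Counter-rotate an image point by -roll about the image
        center, so the displayed horizon stays level while the
        robot body oscillates."""
        x, y = pixel[0] - center[0], pixel[1] - center[1]
        c, s = math.cos(-roll_rad), math.sin(-roll_rad)
        return (c * x - s * y + center[0], s * x + c * y + center[1])

    # A point on the horizon, with the robot rolled 10 degrees:
    print(stabilize((400, 240), math.radians(10)))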

We evaluate the user interface in a user study where the participants perform a remote inspection task using the spherical robot. The experiment compares (1) using the 3D VR visualization against using a conventional 2D visualization, and (2) having the panorama feature enabled or disabled. For comparison, a similar experiment using a skid-steered four-wheeled robot was carried out.

The experiments show that there is a difference between the visualization modes with the spherical robot: using the conventional 2D visualization is more efficient. Without integrated video and map information, the virtual 3D view of the robot apparently adds nothing of benefit to the user interface. Without range data, robot and sensor attitudes only distract attention from the video, and the interface does not provide sufficient navigational cues.

2008
-> Abdelbaki Bouguerra. Robust Execution of Robot Task-Plans: A Knowledge-based Approach.
PhD Thesis. Örebro University, Örebro, Sweden, September 2008.
Main supervisor: L. Karlsson. Co-supervisor: A. Saffiotti.
[University library entry, including PDF]

Abstract: Autonomous mobile robots are being developed with the aim of accomplishing complex tasks in different environments, including human habitats as well as less friendly places, such as distant planets and underwater regions. A major challenge faced by such robots is to make sure that their actions are executed correctly and reliably, despite the dynamics and the uncertainty inherent in their working space. This thesis is concerned with the ability of a mobile robot to reliably monitor the execution of its plans and detect failures.

Existing approaches for monitoring the execution of plans rely mainly on checking the explicit effects of plan actions, i.e., effects encoded in the action model. This presupposes that the effects to monitor are directly observable, but that is not always the case in a real-world environment. In this thesis, we propose to use semantic domain-knowledge to derive and monitor implicit expectations about the effects of actions. For instance, a robot entering a room asserted to be an office should expect to see at least a desk, a chair, and possibly a PC. These expectations are derived from knowledge about the type of the room the robot is entering. If the robot enters a kitchen instead, then it should expect to see an oven, a sink, etc.
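
The following toy sketch captures the gist of this office/kitchen example. The real system uses description logics; here a hand-written table of class-level expectations stands in for the ontology, and all names are illustrative.

    # Class name -> objects an instance of that class is expected to
    # contain. A stand-in for a description-logic knowledge base.
    ONTOLOGY = {
        "office":  {"desk", "chair"},
        "kitchen": {"oven", "sink"},
    }

    def monitor(asserted_class, perceived_objects):
        """True if perception is consistent with the asserted class."""
        missing = ONTOLOGY[asserted_class] - set(perceived_objects)
        if missing:
            print(f"expectation violated: no {sorted(missing)} in a {asserted_class}")
            return False
        return True

    monitor("office", {"desk", "chair", "pc"})  # consistent -> True
    monitor("office", {"oven", "sink"})         # flags a likely failure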

The major contributions of this thesis are as follows.

  • We define the notion of Semantic Knowledge-based Execution Monitoring (SKEMon), and we propose a general algorithm for it based on the use of description logics for representing knowledge.
  • We develop a probabilistic approach to semantic knowledge-based execution monitoring to take into account uncertainty in both acting and sensing. Specifically, we allow for sensing to be unreliable and for action models to have more than one possible outcome. We also take into consideration uncertainty about the state of the world. This development is essential to the applicability of our technique, since uncertainty is a pervasive feature in robotics; a toy numeric illustration is given at the end of this abstract.
  • We present a general schema to deal with situations where perceptual information relevant to SKEMon is missing. The schema includes steps for modeling and generating a course of action to actively collect such information.

We describe approaches based on planning and greedy action selection to generate the information-gathering solutions. The thesis also shows how such a schema can be applied to respond to failures occurring before or while an action is executed. The failures we address are ambiguous situations that arise when the robot attempts to anchor symbolic descriptions (relevant to a plan action) in perceptual information.

The work reported in this thesis has been tested and verified using a mobile robot navigating in an indoor environment. In addition, simulation experiments were conducted to evaluate the performance of SKEMon using known metrics. The results show that using semantic knowledge can lead to high performance in monitoring the execution of robot plans.
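
As a concrete illustration of the probabilistic flavour of the second contribution above: treat the room class as a hidden variable and update a belief over it from an unreliable object detector via Bayes' rule. The priors and detection models below are invented for illustration, not the thesis' actual numbers.

    PRIOR = {"office": 0.5, "kitchen": 0.5}
    P_DESK = {"office": 0.8, "kitchen": 0.1}  # P(detect a desk | room class)

    def bayes_update(belief, likelihood):
        """Condition the belief over room classes on one detection."""
        posterior = {c: belief[c] * likelihood[c] for c in belief}
        z = sum(posterior.values())
        return {c: p / z for c, p in posterior.items()}

    belief = bayes_update(PRIOR, P_DESK)  # a desk was detected
    print(belief)  # belief shifts strongly toward "office" (~0.89)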

-> Marco Gritti. A Mechanism for Automatic Self-Configuration of Software Components in Robot Ecologies.
Licentiate Thesis. Örebro University, Örebro, Sweden, October 2008.
Main supervisor: A. Saffiotti. Co-supervisor: M. Broxvall.

Abstract: Distributed robotic systems are nowadays being applied to several domains, like ambient assisted living, elderly healthcare, or museum guidance. The internal control structures of the heterogeneous devices that constitute these systems are often profitably organized in component-based software architectures. The strong added value of such distributed systems comes from their potential ability to automatically self-configure the interaction patterns of their constituting components. Dynamically reconfigurable component interactions within and across the various devices allow these systems to automatically change the form of their actions by establishing new cooperations among their devices, or by dissolving old ones. Automatic self-configuration would dramatically increase the adaptability of such systems to new tasks or situations. At present, no satisfactory solution exists to the problem of how such distributed systems should automatically self-configure. With the aim of providing a solution to this problem, we have developed a mechanism for automatic self-configuration of distributed robotic systems, in which special components are able to establish, monitor, and, if needed, change the system configurations.

This work has resulted in the following original outcomes:

  • design of formal descriptions for software components that are typical of distributed robotic systems;
  • design of formal task specifications that assimilate categories of system-level tasks to categories of system configurations;
  • design and implementation of software that dynamically creates and monitors configurations given such task and component specifications.

This work has been inspired by ideas from the field of Semantic Web Services and by the heritage of reactive architectures in Robotics. In order to validate our design, we have implemented and tested the self-configuration mechanism on a specific type of distributed robotic system: the Peis-Ecology.

2007
-> Johan Larsson. Reactive Navigation of an Autonomous Vehicle in Underground Mines.
Licentiate Thesis. Örebro University, Örebro, Sweden, March 2007.
Main supervisor: A. Saffiotti. Co-supervisor: M. Broxvall.

Abstract: In most underground mines, LHD (Load-Haul-Dump) vehicles are used to transport ore from the stope or muck-pile to a dumping point, and are typically operated by a human who is sitting on-board the vehicle. Generally, an underground mine does not offer the best working environment for humans, and the job of an LHD operator can be characterised by the three Ds: Dangerous, Dirty and Dull.

Tele-operated LHD vehicles are sometimes used in an attempt to remove the need for an on-board operator. However, tele-operation typically leads to reduced productivity and increased maintenance costs. The current trend is therefore toward fully autonomous, driver-less navigation systems.

In this thesis, we present our approach to develop a reactive navigation system for operating an autonomous LHD vehicle in underground mines. The suggested system uses a coarse topological map for localisation, and a fuzzy behaviour-based approach for navigation. In our work, we have extended an existing framework for autonomous robot navigation, and we have designed, developed and validated novel feature detection algorithms to enable reliable tunnel following and topological localisation. These algorithms operate on data from a laser scanner, and extract features such as the tunnel center line and intersections. We claim that the developed algorithms go beyond the state of the art because of their reduced computational cost and their robustness with respect to sensor noise. To further increase the robustness of the topological localisation, we also propose the use of RFID technology by deploying passive RFID tags at critical locations in the mine.
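
A toy stand-in for the center-line feature can show the flavour of such laser-based detection: estimate the robot's lateral offset from the tunnel center by comparing the closest lateral distance to each side wall. The scan model (angle in radians, 0 straight ahead, +pi/2 to the left) is an assumption of ours; the thesis' algorithms are far more robust to noise and also detect intersections.

    import math

    def center_line_offset(scan):
        """scan: list of (angle, range) pairs. Returns the signed
        lateral offset in metres, positive when left of center."""
        left = min(r * math.sin(a) for a, r in scan if a > math.radians(30))
        right = min(r * math.sin(-a) for a, r in scan if a < -math.radians(30))
        return (right - left) / 2.0

    # Synthetic scan of a 4 m wide straight tunnel, robot 0.5 m left
    # of center (left wall 1.5 m away, right wall 2.5 m away).
    scan = []
    for deg in list(range(40, 91, 10)) + list(range(-90, -39, 10)):
        wall = 1.5 if deg > 0 else 2.5
        scan.append((math.radians(deg), wall / abs(math.sin(math.radians(deg)))))

    print(f"offset: {center_line_offset(scan):+.2f} m")  # -> +0.50 m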

To complement the description of our algorithms and system, we report on an initial implementation tested on ordinary research robots, and an extensive quantitative evaluation of the feature detection algorithms. This evaluation confirms the good performance of the algorithms, and their robustness to noise in the laser data. We also describe a few qualitative tests of the complete navigation system made in indoor environments using ordinary research robots. These tests indicate that the techniques developed in this thesis, originally intended for use in an underground mine, can also be used in other domains characterized by corridor-like features.

2006
-> Amy Loutfi. Odour Recognition using Electronic Noses in Robotic and Intelligent Systems.
PhD Thesis. Örebro University, Örebro, Sweden, February 2006.
Main supervisor: S. Coradeschi. Co-supervisor: A. Saffiotti.
[University library entry]

Abstract: This thesis is about integrating the sense of smell into artificial intelligent systems. In order to endow such systems with olfaction, we use a device called an electronic nose or e-nose. An e-nose consists of a number of gas sensors with partial selectivity and a pattern recognition component trained to recognize both complex and simple types of odours. Discussed in this thesis are a number of challenges which make the integration of electronic noses into an intelligent system non-trivial. Challenges unique to the current technological state of odour identification include the characteristics of the gas sensing technologies, such as sensitivity and drift, and the limitations of the pattern recognition algorithms in coping with these characteristics. Another challenge general to olfaction is the inherent difficulty in conveying or communicating a perception of an odour to a human user.

If we are to integrate e-noses into today's and tomorrow's intelligent systems, it is important to examine the ingredients currently present in such systems. Sensing modalities such as vision, sonar and infrared are often used to collect a number of different properties about the external world. Planning and reasoning components are used to determine the appropriate responses to execute. The system can also perform actions which can then change the state of the environment. Using an electronic nose in a system containing these properties requires much more than simply mounting a physical sensor. In order for the system to really embody electronic olfaction, each level of the system architecture needs to be considered. We investigate the use of current techniques from AI and robotics that are available to confront these integration problems. Specifically, this thesis investigates (1) the abstraction of e-nose data from low-level sensor readings to a high-level representation of odour concepts, (2) the combination of perceptual information from the e-nose with other sensing modalities, and (3) the coordination of sensing actions to determine where and when the e-nose should be activated in this context. We view these three constituents as important contributions to the advancement of electronic olfaction toward real application domains ranging from safety and security to the food industry.
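
A toy stand-in for the first constituent, abstracting raw sensor-array readings into odour concepts, is a nearest-centroid classifier over sensor response vectors. The sensor count, odour labels and values below are invented; real e-nose pipelines also deal with signal preprocessing and drift compensation.

    import math

    # Labelled training responses from a hypothetical 3-sensor array.
    TRAINING = {
        "ethanol":  [(0.9, 0.2, 0.4), (0.8, 0.3, 0.5)],
        "linalool": [(0.3, 0.8, 0.6), (0.2, 0.9, 0.7)],
    }

    # One centroid per odour class (column-wise mean of its samples).
    CENTROIDS = {
        label: tuple(sum(col) / len(col) for col in zip(*samples))
        for label, samples in TRAINING.items()
    }

    def classify(reading):
        """Return the odour label whose centroid is nearest."""
        return min(CENTROIDS, key=lambda lab: math.dist(reading, CENTROIDS[lab]))

    print(classify((0.85, 0.25, 0.45)))  # -> ethanol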

The main experimental platform used for validation consists of a mobile robot equipped with an electronic nose, vision and tactile modalities, which can perform a number of olfactory-related tasks. Other experimental platforms include e-noses as part of a distributed sensing network and also as part of a visual interface interacting with humans.

-> Robert Lundh. Plan-based Configuration of a Group of Robots.
Licentiate Thesis. Örebro University, Örebro, Sweden, September 2006.
Main supervisor: A. Saffiotti. Co-supervisor: L. Karlsson.

Abstract: Imagine the following situation. You give your favorite robot, named Pippi, the task to fetch a parcel that just arrived at your front door. While pushing the parcel back to you, she must travel through a door opening. Unfortunately, the parcel she is pushing is blocking her camera, making it hard for her to see the door to cross. If she cannot see the door, she cannot safely push the parcel through the door opening. What would you as a human do in a similar situation? Most probably you would ask someone for help, someone to guide you through the door, as we ask for help when we need to park our car in a tight parking spot. Why not let the robots do the same? Why not let robots help each other? Luckily for Pippi, there is another robot, named Emil, vacuum cleaning the floor in the same room. Since Emil can view both Pippi and the door at the same time, he can guide Pippi through the door, enabling her to deliver the parcel to you.

This work is about societies of autonomous robots in which robots can help each other by offering information-producing functionalities. A functional configuration is a way to allocate and connect functionalities among robots. In general, different configurations can be used to solve the same task, depending on the current situation. Our work on configurations proceeds in three steps: the first is to formally define the idea of functional configuration; the second is to show how configurations can be automatically generated and executed; the third is to address the problem of when and how to change a configuration in response to changing conditions. In this licentiate thesis we report initial work that focuses on the first two steps; the third step is the subject of future work. We propose a formal definition of functional configurations, and we propose an approach based on artificial intelligence (AI) planning techniques to automatically generate a preferred configuration for a given task, environment, and set of resources. To illustrate these ideas, we describe an experimental system where these are implemented, and show two examples in which two robots mutually help each other to address tasks. In the first example they help each other to cross a door, and in the second example they carry a bar together.

2005
-> Pär Buschka. An Investigation of Hybrid Maps for Mobile Robots.
PhD Thesis. Örebro University, Örebro, Sweden, December 2005.
Main supervisor: A. Saffiotti. Co-supervisor: T. Duckett.
[University library entry, including PDF]

Abstract: Autonomous robots typically rely on internal representations of the environment, or maps, to plan and execute their tasks. Several types of maps have been proposed in the literature, and there is general consensus that different types have different advantages and limitations, and that each type is more suited to certain tasks and less to others. For these reasons, it is becoming common wisdom in the field of mobile robotics to use hybrid maps that integrate several representations, usually of different types. Hybrid maps provide scalability and multiple views, allowing, for instance, the combination of robot-centered and human-centered representations. There is, however, little understanding of the general principles that can be used to combine different maps into a hybrid one, and to make it something more than the sum of its parts. There is no systematic analysis of the different ways in which different maps can be combined, and how they can be made to cooperate. This makes it difficult to evaluate and compare different systems, and precludes us from getting a clear understanding of how a hybrid map can be designed or improved.

The investigation presented in this thesis aims to contribute to filling this foundational gap, and to get a clearer understanding of the nature of hybrid maps. To help in this investigation, we develop two tools. The first is a conceptual tool, an analytical framework in which the main ingredients of a hybrid map are described; the second is an empirical tool, a new hybrid map that allows us to experimentally verify our claims and hypotheses. While these tools are themselves important contributions of this thesis, our investigation has resulted in the following additional outcomes:

  • A set of concepts that allow us to better understand the structure and operation of hybrid maps, and that help us to design them, compare them, identify their problems, and possibly improve them;
  • The identification of the notion of synergy as the fundamental way in which component maps inside a hybrid map cooperate.

To assess the significance of these outcomes, we make and validate the following claims:

  • Our framework allows us to classify and describe existing maps in a uniform way. This claim is validated constructively by making a thorough classification of the hybrid maps reported in the literature.
  • Our framework also allows us to enhance an existing hybrid map by identifying spots for improvement. This claim is verified experimentally by modifying an existing map and evaluating its performance against the original one.
  • The notion of synergy plays an important role in hybrid maps. This claim is verified experimentally by testing the performance of a hybrid map with and without synergy.
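
A toy data structure can illustrate what one such synergy hook might look like: a topological graph of places, each carrying its own small metric grid, where the topological position estimate routes the metric updates. This is our own illustrative construction, not the hybrid map developed in the thesis.

    class HybridMap:
        """Topological graph of places, each with a local metric grid."""

        def __init__(self):
            self.grids = {}     # place -> {cell: occupied} local grid
            self.edges = set()  # topological links between places

        def add_place(self, place):
            self.grids.setdefault(place, {})

        def connect(self, a, b):
            self.edges.add(frozenset((a, b)))

        def update(self, current_place, cell, occupied):
            # Synergy: the topological estimate decides which local
            # grid receives the metric update, keeping each grid small.
            self.grids[current_place][cell] = occupied

    m = HybridMap()
    m.add_place("corridor"); m.add_place("office-12")
    m.connect("corridor", "office-12")
    m.update("office-12", (3, 4), True)
    print(sorted(m.grids), m.grids["office-12"])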

-> Zbigniew Wasik. A Behavior-Based Control System for Mobile Manipulation.
PhD Thesis. Örebro University, Örebro, Sweden, April 2005.
Main supervisor: A. Saffiotti. Co-supervisor: D. Driankov.
[University library entry, including PDF]

Abstract: The field of industrial robotics can be defined as the study, design and use of robot systems for manufacturing. Although the problem of designing a controller for industrial robots has been the subject of intensive study, a number of assumptions are usually made which may seriously limit the applicability of these robots. First, the robotic manipulator is usually considered to be positioned at one place, which means that it can only work in its limited working envelope fixed to this position. Second, it is usually assumed that the environment of the manipulator (workcell) is carefully engineered to suit the task and the configuration of the manipulator. The workcell is usually controlled in a way that makes it fully predictable. Finally, the control program of the manipulator is often designed assuming that the task will not change, and it is difficult to reprogram it to fit other tasks and/or other environments.

These restrictions make current industrial robots unsuitable for the new demands of flexible automation in small and medium enterprises. To enter these emerging markets, industrial robots need to be easy to reprogram in order to be used for small batch productions. They also need to extend their work envelope outside the fixed, confined space of a workcell in order to address a larger range of tasks. Finally, in order to reduce the high cost needed to engineer the workcell, the control system should be able to cope with a (partially) unstructured environment, where things may move and not everything is predictable.

In this thesis, we develop techniques that extend the applicability of current robotic manipulators, by addressing the above limitations. We propose an approach to sensor-based manipulation that has the following distinctive features.

  • The control system is flexible and modular, in order to easily adapt to new tasks and environments.
  • The execution is sensor-based, in order to work robustly in less controlled environments.
  • Our approach applies to the more general problem of combined mobility and manipulation, in order to extend the work space of the manipulator.

To achieve these features, we combine the methods of traditional robot control with insights derived from three different fields. We provide modularity and flexibility by taking insights from the field of behavior-based robotics. We incorporate online visual feedback into the control loop by taking insights from the field of image-based visual servoing. And we combine mobility and manipulation by taking insights from the field of mobile manipulation. To demonstrate the effectiveness of our approach, we develop a complete mobile manipulation system, and we validate its performance in a number of experiments involving the collection of objects spread over a large floor area, whose positions are not known a priori and which may be moving.
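
At its core, the image-based visual-servoing ingredient commands velocities proportional to an image-plane feature error, with no 3D pose reconstruction. A minimal sketch under that reading, with the gain and feature model invented for illustration:

    GAIN = 0.5  # proportional gain on the image-plane error

    def servo_step(feature_xy, target_xy=(0.0, 0.0)):
        """One proportional step driving a feature to its target."""
        return tuple(-GAIN * (f - t) for f, t in zip(feature_xy, target_xy))

    # Simulate a point feature converging to the image center.
    feature = (0.4, -0.2)
    for _ in range(5):
        vx, vy = servo_step(feature)
        feature = (feature[0] + vx, feature[1] + vy)
        print(f"feature at ({feature[0]:+.3f}, {feature[1]:+.3f})")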

2004
-> Ola Pettersson. Model-Free Execution Monitoring in Behavior-Based Mobile Robotics.
PhD Thesis. Örebro University, Örebro, Sweden, October 2004.
Main supervisor: A. Saffiotti. Co-supervisor: L. Karlsson.
[University library entry]

Abstract: In the near future, autonomous mobile robots are expected to assist us by performing service tasks in many different areas, including transportation, cleaning, mining, or harvesting. In order to manage these tasks in a changing and partially unpredictable environment, without the help of humans, the robot must have the ability to plan its actions and to execute them robustly and in a safe way. Since the real world is dynamic and not fully predictable, the robot must also have the ability to detect when the execution does not proceed as planned, and to correctly identify the causes of the failure. An execution monitoring system is a system that allows the robot to detect and classify these failures.

Most current approaches to execution monitoring in robotics are based on the idea of predicting the outcomes of the robot's actions by using some sort of model, and comparing the predicted outcomes with the observed ones. In contrast, this thesis explores the use of model-free approaches to execution monitoring, that is, approaches that do not use predictive models. These approaches solely observe the actual execution of the robot, and detect certain patterns that indicate a problem. The motivation to carry out this exploration comes from two observations. First, there is very little work done on this topic, especially within the field of artificial intelligence and autonomous robotics. Second, model-free approaches have some complementary advantages and disadvantages compared to the model-based ones.

In this thesis, we show that pattern recognition techniques can be applied to realize model-free execution monitoring by classifying observed behavioral patterns into normal or faulty behaviors. We investigate the use of several such techniques, and verify their utility in a number of experiments involving the navigation of a mobile robot in indoor environments. Statistical measures are used to compare the results obtained from several realistic simulations. Our approach has also been successfully tested on a real robot navigating in an office environment. Interestingly, this test has shown that we can train a model-free execution monitor in simulation, and then use it on a real robot.
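
The essence of the approach fits in a small sketch: summarize a window of execution data as a feature vector and classify it against labelled examples, with no predictive model in the loop. The features, data and labels below are invented for illustration, not the classifiers studied in the thesis.

    import math

    def features(window):
        """window: list of (commanded_speed, measured_speed) samples.
        Summarize as (mean, variance) of the achieved/commanded ratio."""
        ratios = [m / c for c, m in window if c > 0]
        mean = sum(ratios) / len(ratios)
        var = sum((r - mean) ** 2 for r in ratios) / len(ratios)
        return (mean, var)

    TRAINING = [  # labelled behavioral patterns
        (features([(0.5, 0.48), (0.5, 0.51), (0.5, 0.50)]), "normal"),
        (features([(0.5, 0.05), (0.5, 0.02), (0.5, 0.04)]), "faulty"),  # stuck
    ]

    def classify(window):
        """1-nearest-neighbour in the feature space."""
        f = features(window)
        return min(TRAINING, key=lambda t: math.dist(f, t[0]))[1]

    print(classify([(0.5, 0.46), (0.5, 0.52), (0.5, 0.49)]))  # -> normal
    print(classify([(0.5, 0.03), (0.5, 0.06), (0.5, 0.01)]))  # -> faulty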


Last updated on Oct 28, 2009 by A. Saffiotti