IROS 2017 Workshop

Introspective Methods for Reliable Autonomy


As humans, doubting and understanding our own limitations, failures and shortcomings is key to improvement and development. Such knowledge alters our behavior, e.g. leading us to execute tasks more cautiously. Equipping robots with a set of skills that allows them to assess the quality of their sensory data, internal models, methods, etc. is correspondingly believed to greatly improve the overall performance of an autonomous system.

The aim of this workshop is to discuss the following question:

How can we assess the quality of the internal models, methods and sensor data used by robots, and how can robots alter their behavior based on this information?

With respect to humans, introspection is the process of examining one’s internal state. Robots have neither thoughts nor feelings; they only have data, hardware and algorithms.

Therefore, robots can only assess the quality of their sensor data, internal models, representations, information, perception input, etc. Such knowledge can then lead to a modification of the robot's behavior by including the assessed quality score in the planning process.

Introspection relates to safety, active perception, mapping and many other topics, with a direct impact on a variety of research areas, such as long-term autonomy and search and rescue. Long-term autonomy can benefit from autonomous failure recovery and active learning. For search and rescue, estimating the confidence of the sensor input and of the maps used is essential for overall risk assessment. Moreover, for a large variety of tasks, assessing the quality of sensor data, internal models, representations and information will directly affect mission success. A robot's ability to reason about and recover from its own failures, and to proactively enrich its own knowledge, is a direct way to improve its autonomous behavior.


17-09-2017 Papers are published
The papers presented during this workshop are now available to read. You can find them under Accepted papers.
22-08-2017 Questions to the experts
If you want to ask our invited speakers a question regarding introspection, just send it to us through the form on the Panel page.
31-07-2017 Final Deadline Extension
By popular request, papers can now be submitted until 06-08-2017. For details see Papers.
14-07-2017 Deadline Extension
By popular request, papers can now be submitted until 31-07-2017. For details see Papers.
04-05-2017 Call for papers
You can now submit your contribution to the workshop. See Papers.
26-04-2017 Acceptance of the Workshop
Our Workshop was accepted and will take place during IROS 2017!
15-03-2017 Project on ResearchGate
If you are interested in the problem of robotic introspection join us on ResearchGate
13-03-2017 Page is up
The website of the workshop is published and ready for use.


Topics

  • Internal assessment
    • Map quality assessment
    • Perception quality assessment
    • Classification quality assessment
  • Analysis
    • Failure analysis
    • Execution monitoring
  • Introspection-related actions
    • Active learning
    • Failure recovery
    • Reconfigurable robots
    • Planning with uncertainty

Invited speakers

Martial Hebert

Carnegie Mellon University - Robotics Institute

Abstract - TBA

Oliver Brock

Technische Universität Berlin - Robotics and Biology Laboratory

Following characterizations of introspection in philosophy, there cannot be introspection in robots. (And probably not in humans either – we probably just have the sensation of introspection.) I will talk about philosophy briefly, hopefully helping us to articulate clearly what we might mean by introspection in robotics. Towards this goal, the workshop seeks to answer the question of “How to assess the quality of internal models, methods and sensor data, used by robots and how to alter their behavior upon this information?” I am going to argue a) that assessing the quality of internal models and sensor data is ubiquitous in robotics (think: Kalman filter) and b) that altering behavior based on this information is equally ubiquitous (think: control based on a Kalman filter). But that is not really what we mean, right? So what then do we actually mean by introspection? I will propose that this has something to do with hierarchical processing of sensor information in perception for action (or interactive perception). I will also propose a way of designing “introspective” systems and give concrete examples.
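The "think: Kalman filter" point can be made concrete with a minimal sketch: the filter's estimate covariance is exactly an assessment of internal-model quality, and the gain alters behavior (how much to trust each measurement) based on that assessment. The scalar model and the noise values below are illustrative assumptions, not taken from the talk.

```python
# Minimal 1-D Kalman filter for a static state: the variance P is the
# filter's built-in quality assessment of its own estimate, and the
# gain K changes behavior based on that assessment.

def kalman_step(x, P, z, q=0.01, r=0.5):
    """One predict/update cycle.
    x: state estimate, P: estimate variance,
    z: new measurement, q: process noise, r: measurement noise."""
    P = P + q                 # predict: uncertainty grows
    K = P / (P + r)           # gain: weighs measurement vs. model
    x = x + K * (z - x)       # update estimate toward measurement
    P = (1.0 - K) * P         # update: uncertainty shrinks
    return x, P, K

x, P = 0.0, 1.0               # poor initial estimate, high uncertainty
for z in [1.1, 0.9, 1.05, 0.95]:
    x, P, K = kalman_step(x, P, z)
```

After four noisy measurements near 1.0, the estimate x moves close to 1.0 while P shrinks, i.e. the system's self-assessed uncertainty decreases as evidence accumulates.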

Rudolph Triebel

DLR - Robotics and Mechatronics Center

Recently, Autonomous Learning methods have become very popular in robotics, because they tend to learn more efficiently and more adaptively by integrating decision making into the learning process. For robot perception, this means that data samples that are hard to classify are selected actively to be labeled by the supervisor for re-training. This is also known as Active Learning. In this presentation, I will show example applications of the Active Learning framework to 3D object classification, and I will argue that for this it is necessary that the classifier be introspective. In more detail, I will motivate the notions of under- and overconfidence and how they relate to introspectiveness. Furthermore, I will show that in particular ensemble learning methods such as bagging and boosting are very useful in this context, because either they already tend to be introspective or they can be modified such that they are. As a result, Active Learning produces fewer label queries and provides better classification accuracy. Based on experiments on benchmark data sets, I will show how this can be used to learn 3D objects more efficiently and more accurately, even if the data is given in streams.
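As a minimal, self-contained illustration of the committee-disagreement idea behind such ensemble methods (not Triebel's actual system; the thresholds and data below are made up), one can score unlabeled samples by the vote entropy of an ensemble and query the most contested one:

```python
from collections import Counter
from math import log

# Toy query-by-committee active learning: disagreement among ensemble
# members (vote entropy) serves as an introspective uncertainty score,
# and the sample the committee disagrees on most is queried for a label.

def vote_entropy(votes):
    """Entropy of the committee's vote distribution for one sample."""
    n = len(votes)
    return -sum(c / n * log(c / n) for c in Counter(votes).values())

# Four 1-D threshold classifiers standing in for an ensemble trained
# on different bootstrap resamples (thresholds are invented).
ensemble = [0.2, 0.3, 0.6, 0.7]
classify = lambda x, t: x > t

unlabeled = [0.05, 0.5, 0.95]
scores = [vote_entropy([classify(x, t) for t in ensemble])
          for x in unlabeled]
query = unlabeled[scores.index(max(scores))]
# The committee agrees on 0.05 and 0.95 (entropy 0) but splits 2-2
# on 0.5, so 0.5 is selected for labeling.
```

Samples on which the committee is unanimous carry no information for re-training, which is how this style of selection reduces the number of label queries.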

Andreas Birk

Jacobs University - Jacobs Robotics

Abstract - TBA

Leon Kester


The development of autonomous systems seems to be at a critical point. For applications where these systems have to operate in more complex environments and reach more complex goals, a number of problems are encountered: the systems need a lot of computational power and memory, they have difficulty explaining themselves, they can behave in an unpredictable or unreliable way, they do not use their resources effectively, they are not robust to failures, and they have difficulty understanding and acting according to the law. These problems also make testing and evaluating such systems problematic.

It is understood that introspective methods can help, and I will argue that these introspective methods can be regarded as a form of self-awareness of the system: the capability of self-assessment and self-management. By presenting various system models, I will discuss how self-awareness could be implemented and, on the basis of some examples, what ‘self’ may refer to. Finally, I will touch upon the problems regarding ethical and lawful behavior.

Raymond Sheh

Curtin University - Intelligent Robots Group

Artificially Intelligent (AI) robots increasingly find many of their critical capabilities dependent on complex Machine-Learned (ML) models. Unlike planners, hand-coded controllers or mathematically derived models, there is considerable variation in the explainability of different ML techniques, from completely opaque black-box models to transparent rule-based models. When selecting an ML technique, in addition to on-task performance, there needs to be careful consideration of the explanatory capabilities of the technique and the explanatory demands of the application, both for the agent's own introspection and for presentation to humans.

Explainable Artificial Intelligence (XAI) is becoming topical as a study of the explainability of various ML techniques in particular, and AI techniques more generally. In this talk we will discuss some of the implications of these variations in explainability and propose a categorisation of explainability from the perspective of matching up the robotic need for introspection and human-robot interaction (HRI) with the explanatory capabilities of various ML techniques.


Time Event
9:20 - 9:25 Welcome
9:25 - 10:00 Oliver Brock
10:00 - 10:30 Coffee break
10:30 - 11:05 Rudolph Triebel
11:05 - 11:40 Andreas Birk
11:40 - 12:15 Leon Kester
12:15 - 13:15 Lunch break
13:15 - 13:50 Raymond Sheh
13:50 - 14:25 Ingmar Posner
14:25 - 15:00 Martial Hebert
15:00 - 15:15 Lightning talks
15:15 - 16:00 Poster session
16:00 - 16:30 Coffee break
16:30 - 17:30 Panel discussion
17:30 - 17:35 Closing

This workshop is supported by:

Organizers and committee

Tomasz Piotr Kucner

Örebro University - AASS


Sören Schwertfeger

ShanghaiTech University - STAR-Lab

Martin Magnusson

Örebro University - AASS

Achim J. Lilienthal

Örebro University - AASS


Tom Duckett

University of Lincoln

Tomáš Krajník

University of Lincoln

Robert Krug

Royal Institute of Technology (KTH)

Bruno Lacerda

University of Birmingham

Timm Linder

University of Freiburg / Bosch Corporate Research

Stephanie Lowry

Örebro University

Masoumeh Mansouri

Örebro University

Luigi Palmieri

University of Freiburg / Bosch Corporate Research

Juan Rojas

Guangdong University of Technology

Erik Schaffernicht

Örebro University

Todor Stoyanov

Örebro University

Serge Thill

University of Skövde

Rafael Valencia

Carnegie Mellon University