Cognitive Interactive Robot Learning

The imitation technique is heavily utilized within robotics, where it is often denoted Learning from Demonstration (LfD) or Imitation Learning (IL). A human demonstrates a desired behavior by tele-operating a robot. In this research, a robot will be equipped with a set of parameterized high-level behaviors, and models and techniques will be developed for combining behaviors into sequences. By recognizing a behavior on-the-fly during a demonstration, a shared control system can be constructed: the robot may take over control while the demonstration is still in progress. The human is relieved of the demanding task of tele-operating but may, if needed, intervene and correct the robot’s control signals.
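The shared control idea above can be illustrated with a minimal sketch: the robot's confidence in the behavior it has recognized determines how much control authority shifts from the human tutor to the robot, while an explicit override always returns full authority to the human. The function name, the linear blending rule, and the confidence values are illustrative assumptions, not the project's actual implementation.

```python
def blend_control(human_cmd, robot_cmd, recognition_confidence, override=False):
    """Mix human tele-operation and robot autonomy (illustrative sketch).

    The weight given to the robot's command grows with its confidence that
    it has recognized the demonstrated behavior; an explicit human override
    hands full control back to the tutor.
    """
    if override:
        return human_cmd
    # Clamp confidence to [0, 1] and blend the two command vectors linearly.
    w = max(0.0, min(1.0, recognition_confidence))
    return [(1 - w) * h + w * r for h, r in zip(human_cmd, robot_cmd)]

# Early in a demonstration the robot is unsure, so the human's command dominates;
# once the behavior is recognized with high confidence, the robot takes over.
human_cmd = [1.0, 0.0]   # tutor's tele-operation signal (e.g. linear, angular velocity)
robot_cmd = [0.8, 0.2]   # robot's autonomous suggestion for the recognized behavior
print(blend_control(human_cmd, robot_cmd, recognition_confidence=0.1))
print(blend_control(human_cmd, robot_cmd, recognition_confidence=1.0))
```

With full confidence the output equals the robot's own command, yet the tutor can still pass `override=True` at any moment to correct the robot, matching the intervene-and-correct idea described above.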

Building general-purpose autonomous robots that suit a wide range of user-specified applications requires a leap from today’s task-specific machines to more flexible and general ones. To achieve this goal, one should move from traditional preprogrammed robots to learning robots that can easily acquire new skills. Learning from Demonstration (LfD) and Imitation Learning (IL), in which the robot learns by observing a human or robot tutor, are among the most popular learning techniques. Showing the robot how to perform a task is often more natural and intuitive than figuring out how to modify a complex control program. However, teaching robots new skills such that they can reproduce them under any circumstances, at the right time, and in an appropriate way requires a good understanding of all the challenges in the field.

Studies of imitation learning in humans and animals show that several cognitive abilities are engaged to learn new skills correctly. The most remarkable are the ability to direct attention to important aspects of a demonstration and to adapt observed actions to the agent’s own body. Moreover, a clear understanding of the demonstrator’s intentions and an ability to generalize to new situations are essential. Once learning is accomplished, various stimuli may trigger the cognitive system to execute new skills that have become part of the robot’s repertoire.

The goal of this research is to develop methods for learning from demonstration that focus mainly on understanding the tutor’s intentions and on recognizing which elements of a demonstration require the robot’s attention. An architecture containing the cognitive functions required for learning and reproducing high-level aspects of demonstrations is proposed. Several learning methods for directing the robot’s attention and identifying relevant information are introduced. The architecture integrates motor actions with concepts, objects, and environmental states to ensure correct reproduction of skills.

Another major contribution of this research is a set of methods for resolving ambiguities in demonstrations where the tutor’s intentions are not clearly expressed and several demonstrations are required to infer them correctly. The solution is inspired by human memory models and priming mechanisms that give the robot clues that increase the probability of inferring intentions correctly. In addition to robot learning, the developed techniques are applied to a shared control system based on visual-servoing-guided behaviors and priming mechanisms.
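One way to picture how priming clues can disambiguate intentions is a simple Bayesian-style combination of demonstration evidence with primed priors. The function below is a hedged sketch under that framing; the names, the multiplicative boost, and the numbers are illustrative assumptions, not the published model.

```python
def infer_intention(likelihoods, priors, priming_boost=None):
    """Combine demonstration evidence with (optionally primed) priors.

    likelihoods: P(observation | intention) for each candidate intention.
    priors: baseline P(intention) before any contextual cue.
    priming_boost: optional multiplicative boosts from contextual cues
        (e.g. an object the tutor just attended to), nudging ambiguous cases.
    Returns a normalized posterior over the candidate intentions.
    """
    scores = {}
    for intent, lik in likelihoods.items():
        boost = (priming_boost or {}).get(intent, 1.0)
        scores[intent] = lik * priors[intent] * boost
    total = sum(scores.values())
    return {intent: s / total for intent, s in scores.items()}

# An ambiguous demonstration fits "pick-up" and "push" equally well...
likelihoods = {"pick-up": 0.5, "push": 0.5}
priors = {"pick-up": 0.5, "push": 0.5}
print(infer_intention(likelihoods, priors))            # no cue: still ambiguous
# ...but a priming cue (say, a graspable object in focus) tips the inference.
print(infer_intention(likelihoods, priors, priming_boost={"pick-up": 2.0}))
```

The point of the sketch is that priming does not replace the evidence from demonstrations; it shifts the balance only when the demonstrations alone leave several intentions equally plausible.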

The architecture and learning methods are applied and evaluated in several real-world scenarios that require a clear understanding of the intentions behind the demonstrations. Finally, the developed learning methods are compared, and the conditions under which each of them is most applicable are discussed.

In the News

My interview with Swedish Radio

https://sverigesradio.se/artikel/6063318

January 2015

My interview with Västerbottens-Kuriren newspaper

https://www.vk.se/2015-01-08/robotar-lar-sig-att-hjalpa-manniskor

January 2015

Related Publications

2016

Priming as a Means to Reduce Ambiguity in Learning from Demonstration

Fonooni, B., Hellström, T., Janlert, L.E.

International Journal of Social Robotics, 8(1), 5-19, 2016

2015

Applying Ant Colony Optimization Algorithms for High-Level Behavior Learning and Reproduction from Demonstrations

Fonooni, B., Jevtić, A., Hellström, T., Janlert, L.E.

Robotics and Autonomous Systems, 65, 24-39, 2015

2015

On the Similarities Between Control Based and Behavior Based Visual Servoing

Fonooni, B., Hellström, T.

30th ACM/SIGAPP Symposium on Applied Computing (SAC), Salamanca, Spain, April 2015

2015

Applying a Priming Mechanism for Intention Recognition in Shared Control

Fonooni, B., Hellström, T.

5th IEEE CogSIMA 2015, Orlando, FL, USA, March 2015

2013

Towards Search and Rescue Field Robotic Assistant

Kozlov, A., Gancet, J., Letier, P., Schillaci, G., Hafner, V., Fonooni, B., Nevatia, Y., Hellström, T.

11th IEEE SSRR 2013, Linköping, Sweden, October 2013

2013

Towards Goal Based Architecture Design for Learning High-Level Representation of Behaviors from Demonstration

Fonooni, B., Hellström, T., Janlert, L.E.

3rd IEEE CogSIMA 2013, San Diego, CA, USA, February 2013

2012

Learning High-Level Behaviors From Demonstration Through Semantic Networks

Fonooni, B., Hellström, T., Janlert, L.E.

4th International Conference on Agents and Artificial Intelligence (ICAART), Vilamoura, Algarve, Portugal, February 2012

2012

Sequential Learning From Demonstration Based On Semantic Networks

Fonooni, B.

Umeå's 15th Student Conference in Computing Science (USCCS), Umeå, Sweden, January 2012