Session 1: Autonomy and Interaction in Robotics

New generations of robots will be able to make and carry out decisions on their own. New technologies such as embodied AI, agentic AI, and high-performance machine learning methods play a key role in this. What should one know about autonomous robots?

Mobile robots will work closely with humans in the future, says Prof. Dr. Seth Hutchinson of Northeastern University in Boston, USA. Among other things, a new generation of robots will improve the quality of life of people in need of care. Prof. Hutchinson describes a software framework for these robots built on safety, cooperation, and adaptability.

Prof. Dr. Sethu Vijayakumar, Professor of Robotics at the University of Edinburgh, addresses the balancing act between the ever-greater autonomy of future robot generations and the desire for control and safety. In his talk, he presents high-performance machine learning technologies and sketches an optimal trade-off between autonomy and control.

Prof. Antonio Bicchi, IIT Genoa/University of Pisa and Editor-in-Chief of the "International Journal of Robotics Research" (IJRR), looks in his talk at the step from human-robot interaction to human-robot integration. The robot of the future can be imagined as a kind of physical "prosthesis": equipped with comprehensive sensing and intelligent enough to all but anticipate what the human wants and to act accordingly, in healthcare as well as in industrial settings.

At the Honda Research Institute Europe, Dr. Felix Ocker and his team are developing "decisive" robots that not only orient themselves fully independently but also act autonomously, that is, without human commands: they make decisions and can explain those decisions. This is made possible by agentic AI, an artificial intelligence that is still at an early stage of its development.

The speakers and talks of this session

"Mobile Manipulation: Safety, Cooperation, Adaptation"

Mobile manipulators, working cooperatively with people, have the potential to significantly enhance quality of life for older adults and others with physical limitations or mental impairment. However, in order to gain widespread acceptance in domestic settings, these robotic systems must be safe, effective, and adaptable. In this talk, we describe recent results in safe control of mobile manipulators using control barrier functions and control Lyapunov functions, both defined in the task (or operational) space. Our approach is implemented in an optimization-based framework, which allows easy extension to task-level control in dynamic settings.
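
The "optimization-based framework" mentioned above has a textbook form: a quadratic program solved at every control step. The following is a minimal sketch, assuming control-affine dynamics x' = f(x) + g(x)u, a control barrier function h (safe set h(x) >= 0), and a control Lyapunov function V encoding the task; the speakers' task-space formulation may differ in detail.

\begin{aligned}
(u^*, \delta^*) = \arg\min_{u,\,\delta}\ & \lVert u - u_{\mathrm{nom}}(x) \rVert^2 + p\,\delta^2 \\
\text{s.t.}\ & L_f h(x) + L_g h(x)\,u \ge -\alpha(h(x)) && \text{(CBF: safety, hard constraint)} \\
& L_f V(x) + L_g V(x)\,u \le -\gamma(V(x)) + \delta && \text{(CLF: task progress, softened)}
\end{aligned}

Here L_f and L_g denote Lie derivatives along the dynamics, alpha and gamma are class-K functions, and the slack variable delta ensures that safety takes strict priority over task progress whenever the two conflict.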

Seth Hutchinson is a professor at Northeastern University. He was previously the Executive Director of the Institute for Robotics and Intelligent Machines at the Georgia Institute of Technology, where he was also Professor and KUKA Chair for Robotics in the School of Interactive Computing (2018-2024). He is Professor emeritus at the University of Illinois at Urbana-Champaign, where he was a faculty member from 1990 to 2017. He received his Ph.D. from Purdue University.

Hutchinson served as President of the IEEE Robotics and Automation Society, as Editor-in-Chief of the "IEEE Transactions on Robotics", and as the founding Editor-in-Chief of the RAS Conference Editorial Board. He has more than 300 publications on the topics of robotics and computer vision, and is coauthor of two books on robotics. He is a Fellow of the IEEE.

"From Automation to Autonomy: Machine Learning for Next-generation Robotics"

The new generation of robots works much more closely with humans and other robots, and interacts significantly with the environment around it. As a result, the key paradigms are shifting from isolated decision-making systems to ones that involve shared control, with significant autonomy devolved to the robot platform and end-users in the loop making only high-level decisions.
This talk will briefly introduce powerful machine learning technologies, ranging from robust multi-modal sensing, shared representations, and scalable real-time learning and adaptation to compliant actuation, that are enabling us to reap the benefits of increased autonomy while still feeling securely in control.
This also raises a fundamental question: while the robots are ready to share control, what is the optimal trade-off between autonomy and control that we are comfortable with?
Domains where this debate is relevant include the deployment of robots in extreme environments, self-driving cars, asset inspection, repair and maintenance, factories of the future, and assisted-living technologies such as exoskeletons and prosthetics, to list a few.
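
To make the shared-control idea concrete, here is a deliberately simple command-blending sketch in Python. The function name, the arbitration weight alpha, and the example numbers are illustrative assumptions, not material from the talk; real shared-control systems typically infer alpha from context rather than fixing it.

import numpy as np

def shared_control(human_cmd, autonomy_cmd, alpha):
    """Blend two velocity commands: alpha = 1.0 is full human control,
    alpha = 0.0 is full autonomy."""
    assert 0.0 <= alpha <= 1.0
    return alpha * np.asarray(human_cmd) + (1.0 - alpha) * np.asarray(autonomy_cmd)

# The operator issues only a coarse high-level command; the robot's local
# planner contributes the fine correction (e.g. obstacle avoidance).
u = shared_control(human_cmd=[0.5, 0.0], autonomy_cmd=[0.45, 0.10], alpha=0.3)
print(u)  # blended command, here [0.465, 0.07]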

Sethu Vijayakumar is the Professor of Robotics at the University of Edinburgh, UK, and the Founding Director of the Edinburgh Centre for Robotics. He has pioneered the use of large-scale machine learning techniques in the real-time control of several iconic robotic platforms such as the SARCOS and HONDA ASIMO humanoids, the KUKA LWR robot arm and the iLIMB prosthetic hand. One of his projects (2016) involved a collaboration with the NASA Johnson Space Center on the Valkyrie humanoid robot being prepared for unmanned robotic pre-deployment missions to Mars. Professor Vijayakumar holds the Royal Academy of Engineering (RAEng) - Microsoft Research Chair at Edinburgh and is also an Adjunct Faculty of the University of Southern California (USC), Los Angeles. He has published over 250 peer-reviewed and highly cited articles [H-index 50, Citations > 13,000 as of 2025] on topics covering robot learning, optimal control, and real-time planning in high-dimensional sensorimotor systems. He has been appointed to grant review panels for the EU (FP7, H2020), DFG-Germany and NSF-USA. He is a Fellow of the Royal Society of Edinburgh, a judge on BBC Robot Wars and winner of the 2015 Tam Dalyell Prize for excellence in engaging the public with science, including his role in the UK-wide launch of the BBC micro:bit initiative (2016) for STEM education. Professor Vijayakumar helps shape and drive the national Robotics and Autonomous Systems (RAS) agenda in his role as a Programme Director (Human-AI Interfaces and Robotics) at The Alan Turing Institute, the United Kingdom's national institute for data science and Artificial Intelligence.

"From Cobotics and Human-Robot Interaction to Human-Robot Integration"

For many years, the name of the game in advanced, human-centric robotics has been human-robot interaction. In recent years, we have witnessed a further deepening of the relationship between humans and technology. Robotic technologies have been providing definite advances to assist people in need of physical help, including rehabilitation and prosthetics. Working in fields where humans are placed right at the center of the technology, on the other hand, is helping refocus our robotics research itself. In prosthetics, the goal is to have an artificial limb move naturally and intelligently enough to perform the task that users intend, without requiring their attention. By abstracting this idea, a robot of the future can be thought of as a physical "prosthesis" of its user, with sensors, actuators, and intelligence enough to interpret and execute the user's intention, translating it into a sensible action of which the user remains the owner.

In the talk I will present how human-robot integration reaches beyond prosthetics and rehabilitation applications to industrial environments. Examples include exoskeletons and supernumerary limbs for augmenting human capabilities, and shared-autonomy robotic avatars, with the robot executing the human's intended actions and the human perceiving the context of those actions and their consequences.

Antonio Bicchi is Senior Scientist at the Italian Institute of Technology in Genoa and the Chair of Robotics at the University of Pisa. He graduated from the University of Bologna in 1988 and was a postdoc scholar at the M.I.T. Artificial Intelligence Lab. He teaches Robotics and Control Systems in the Department of Information Engineering (DII) of the University of Pisa. He has led the Robotics Group at the Research Center "E. Piaggio" of the University of Pisa since 1990. He is the head of the Soft Robotics Lab for Human Cooperation and Rehabilitation at IIT in Genoa. Since 2013 he has served as Adjunct Professor at the School of Biological and Health Systems Engineering of Arizona State University.
Since January 2023, he has been the Editor-in-Chief of the International Journal of Robotics Research (IJRR), the first scientific journal in robotics. He was the founding Editor-in-Chief of the IEEE Robotics and Automation Letters (2015-2019), which rapidly became the top robotics journal by number of submissions. He organized the first World Haptics Conference (2005), today the premier conference in the field. He is a co-founder and President of the Italian Institute of Robotics and Intelligent Machines (I-RIM).
His main research interests are in robotics, haptics, and control systems. He has published more than 500 papers in international journals, books, and refereed conference proceedings. His research on human and robot hands has been generously supported by the European Research Council. He originated and is today the scientific coordinator of the JOiiNT Lab, an advanced tech-transfer lab with leading-edge industries in the Kilometro Rosso Innovation District in Bergamo, Italy.
Antonio Bicchi received the Robotics and Automation Society Pioneer Award in 2025.

"When robots take initiative: Agentic AI for human-robot cooperation"

As robots move beyond rigid, rule-based behavior, a new class of intelligent agents is emerging: robots that take initiative, act with purpose, and collaborate with humans. This talk explores the shift toward agentic AI, highlighting key design patterns and recent breakthroughs that have the potential to make autonomy not just possible but useful. From language-model-based robots that offer support only when truly needed, to systems that plan longer-horizon tasks using extensible tool libraries, agentic AI is reshaping how robots perceive, decide, and interact. Drawing on real-world prototypes and research at the Honda Research Institute, the talk gives an overview of how robots can detect needs, choose when to act, and explain their behavior: critical steps toward meaningful cooperation. It closes with an outlook on what's working, what's next, and where agentic AI is already finding traction.
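
As a rough illustration of the design pattern the abstract describes (an agent that perceives, decides whether to act, acts via an extensible tool library, and explains itself), here is a hypothetical Python sketch. The tool names and the keyword-based decision rule stand in for the language-model reasoning used in the actual systems; none of this is Honda Research Institute code.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]  # takes a context string, returns an observation

# Extensible tool library: new capabilities are added by registering tools.
TOOLS: Dict[str, Tool] = {
    "detect_need": Tool("detect_need", "infer whether the user needs help",
                        lambda ctx: "user is reaching for a cup out of range"),
    "fetch_object": Tool("fetch_object", "grasp and hand over an object",
                         lambda ctx: "cup handed over"),
}

def agent_step(context: str) -> str:
    """One perceive-decide-act-explain cycle."""
    observation = TOOLS["detect_need"].run(context)      # perceive
    if "reaching" in observation:                        # decide: act only when needed
        result = TOOLS["fetch_object"].run(observation)  # act
        return f"Acted because: {observation}. Outcome: {result}."  # explain
    return "No need detected; staying passive."

print(agent_step("kitchen scene, one user present"))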

Felix Ocker is a Senior Scientist at Honda Research Institute Europe, researching AI systems that enable autonomous and collaborative behavior. With a background in engineering and a Ph.D. from the Technical University of Munich, his research has spanned knowledge representation, multi-agent systems, and semantic integration for automated production. His current work explores agentic AI - intelligent agents capable of proactive, goal-directed behavior - and how these capabilities can enhance human-robot interaction. Felix has authored numerous publications and is co-inventor on patents related to memory, tool use, and theory of mind in intelligent systems. He regularly contributes to the academic community through peer reviewing, technical committees, and invited talks at both industrial and scientific venues.

Session Chair

"Autonomy and Interaction in Robotics" is chaired by Prof. Dr. Lorenzo Masia, Professor of Intelligent Bio-Robotic Systems and Director of the Munich Institute of Robotics and Machine Intelligence (MIRMI) at the Technical University of Munich (TUM).

Session Chair

"Autonomy and Interaction in Robotics" is co-chaired by Prof. Dr. Angela Schoellig, member of the executive board of the Munich Institute of Robotics and Machine Intelligence (MIRMI), coordinator of the Robotics Institute Germany (RIG), and Professor of Safety, Performance and Reliability of Learning Systems at the Technical University of Munich (TUM).