Perspectives on AI
One topic, different perspectives: Experts from the interdisciplinary Plattform Lernende Systeme assess current developments in the field of Artificial Intelligence from their respective specialist backgrounds.
Learning robotics
Robots capable of learning are no longer science fiction. Robotic systems equipped with AI are being used in more and more areas of application beyond industry. Whether in care, in disaster response or in the household - robots can support us in many areas and take over strenuous or dangerous tasks. Their potential for the economy and society is enormous. How do humans and robots work together successfully? How do robots learn? And what progress can we expect from adaptive systems in the near future? Experts from Plattform Lernende Systeme provide answers.
Barbara Deml | Karlsruhe Institute of Technology
Sven Behnke | University of Bonn
Dorothea Koert | Technical University of Darmstadt
Prof. Dr.-Ing. Barbara Deml | Karlsruhe Institute of Technology
Barbara Deml is head of the Institute of Ergonomics and Industrial Organization at KIT and a member of the Future of Work and Human-Machine Interaction working group of Plattform Lernende Systeme.
Friend and helper: How humans and social robots work together
How can Artificial Intelligence (AI) enable socially interactive robotics? To answer this, we must first ask: what is meant by social robots? Social robots are the antithesis of industrial robots. They are not autonomous tools but interactive partners. They include toy robots such as robot dogs, service robots in care or therapy, collaborative robots (so-called cobots) in an industrial context, but also software robots such as chatbots. Often their shape resembles a human body or they have human-like characteristics; they are then referred to as humanoid robots. Social robots should be able to communicate with us in order to build a trusting relationship. Above all, this requires social intelligence, for which AI is an essential prerequisite:
- AI makes it possible to understand human language and the context of conversations. Ideally, social robots will be able to have a human-like conversation.
- AI can be used to recognize faces and analyze facial expressions or gestures. This allows social robots to recognize emotions or hand signals and react appropriately.
- Machine learning allows a robot to learn from experience or observation. Social robots can thus adapt to the individual preferences and needs of their users, enabling personalized interactions (see the sketch after this list).
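To make the last point concrete, here is a minimal Python sketch of how a robot might adapt to individual preferences from explicit user ratings. The options, ratings and interface are purely illustrative; real systems use far richer models and implicit signals.

```python
from collections import defaultdict

class PreferenceModel:
    """Running estimate of how much a user likes each way of performing
    a task, based on explicit ratings (hypothetical interface)."""

    def __init__(self):
        self.score = defaultdict(float)   # option -> estimated rating
        self.count = defaultdict(int)

    def update(self, option: str, rating: float) -> None:
        # Incremental mean: the estimate moves toward the latest rating.
        self.count[option] += 1
        self.score[option] += (rating - self.score[option]) / self.count[option]

    def best(self, options: list[str]) -> str:
        # Choose the option with the highest estimated rating so far.
        return max(options, key=lambda o: self.score[o])

prefs = PreferenceModel()
prefs.update("serve_coffee_slowly", 4.0)   # illustrative option names
prefs.update("serve_coffee_quickly", 2.0)
print(prefs.best(["serve_coffee_slowly", "serve_coffee_quickly"]))
```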
How can a robot recognize human emotions such as stress? How do we humans recognize whether our fellow human beings are stressed, happy or angry? As a rule, we unconsciously interpret the context as well as various verbal and non-verbal signals such as smiles, frowns, raised eyebrows or other facial movements. The way someone speaks - tone of voice, speed and emphasis - can also reveal a lot about their emotional state. There is a long tradition of research on this in psychology: many of these behavioral indicators are well described today and can now also be observed by a robot's technical sensors. Advances in speech, image and video analysis enable robots to recognize body postures, facial expressions, pupil reactions, and changes in tone of voice or speaking rate. Robots can also analyze movement patterns or how someone interacts with technical devices. Combining several such sensor channels then makes it possible to draw conclusions about emotions. Of course, this does not always succeed; we humans do not always read emotions correctly either.
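As an illustration, here is a minimal sketch of such sensor fusion: per-channel emotion scores, assumed to come from separate face and voice analysis models, are combined by a weighted average (late fusion). The channels, emotion labels and weights are hypothetical.

```python
import numpy as np

# Hypothetical per-channel emotion scores in [0, 1], e.g. produced by
# separate face and voice analysis models (names and labels assumed).
EMOTIONS = ["neutral", "happy", "stressed", "angry"]

def fuse_channels(channel_scores: dict[str, np.ndarray],
                  weights: dict[str, float]) -> tuple[str, float]:
    """Late fusion: weighted average of per-channel probability vectors.
    Returns the most likely emotion and its fused confidence."""
    fused = np.zeros(len(EMOTIONS))
    total = 0.0
    for channel, scores in channel_scores.items():
        w = weights.get(channel, 1.0)
        fused += w * scores
        total += w
    fused /= total
    idx = int(np.argmax(fused))
    return EMOTIONS[idx], float(fused[idx])

emotion, confidence = fuse_channels(
    {"face":  np.array([0.1, 0.1, 0.6, 0.2]),    # frown, raised brows
     "voice": np.array([0.2, 0.1, 0.5, 0.2])},   # fast, tense speech
    weights={"face": 0.6, "voice": 0.4},
)
print(emotion, round(confidence, 2))   # stressed 0.56
```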
How can humanoid robotics be used in care in a humane way?
Human-centered technology design always focuses on the needs, abilities and preferences of the user. The same applies when humanoid robots are to be used in care. The top priority is the question: what are the needs, abilities and preferences of care staff and care recipients when interacting with a robot?
- Humanoid robots can be developed to help elderly people or people in need of care with everyday tasks, such as getting up, getting dressed, preparing meals and other basic activities. This can promote people's independence and at the same time relieve the burden on care staff.
- Humanoid robots can be equipped with sensors to monitor the environment and raise the alarm if unusual activity or emergencies are detected. This is particularly useful in care facilities or for elderly people who live alone.
- Robots can be used to dispense medication and remind people to take it. This is particularly important for people with complex medication schedules.
- Humanoid robots can be designed to enable social interaction and provide companionship. This can be particularly important for older people who may feel lonely.
It is important that humanoid robots are used ethically and sensitively in care. This includes respecting privacy and ensuring safety. Humanoid robots are not intended to replace human caregivers, but rather to support them and improve the quality of care.
Video series "Nachgefragt": Interview with Prof. Dr. Barbara Deml
Prof. Dr. Sven Behnke | University of Bonn
Sven Behnke is Head of the Institute of Computer Science VI - Intelligent Systems and Robotics at the University of Bonn and a member of the Learning Robotic Systems working group of Plattform Lernende Systeme.
People as role models: Learning from a small amount of data
Robots are of great benefit in industrial mass production. Without industrial robots that perform repetitive tasks, cars would no longer be manufactured in Germany. Mobile robots, for example, transport shelves in warehouses or meals in hospitals. Simple robots already help in the household, for example with floor cleaning or lawn care.
For robots to be useful, a narrowly defined task is currently required, for example "Move an object from A to B!". The operating environment must also be structured, for example by providing the object at a known location in a known pose. Research is working on new areas of application for robots: in future, they will work directly with people in production, help people in need of assistance in everyday life or support emergency services in dealing with disasters. However, these open, complex application domains require more cognitive abilities than current autonomous robots have. Today, remote-controlled robots can already solve numerous tasks in complex environments with the help of their operator's human intelligence. Teleoperation makes the robot an avatar of the human operator. The human operator copes effortlessly with new situations, flexibly transfers existing knowledge to the current circumstances, recognizes problems during execution and quickly develops alternative courses of action.
How can we equip robots with cognitive abilities?
In recent years, deep learning has achieved impressive successes in related areas, for example in visual perception, speech recognition and synthesis, and dialogue systems such as ChatGPT. These successes are based on training large models with gigantic amounts of data. Such foundation models capture extensive world knowledge and can be quickly adapted to specific tasks, for example through transfer learning or in-context learning. How can we continue this success story for robotics? The first steps in this direction are multimodal models that are trained not with a single modality - only text, images or speech - but with data from several modalities, such as CLIP from OpenAI. Even though collecting real robot interaction data is laborious, there are initiatives to pool data from different robots and tasks, e.g. Open X-Embodiment. Models trained in this way can solve a variety of manipulation tasks better than models trained only on task-specific data.
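As an illustration of how such a multimodal model can be used, here is a minimal zero-shot recognition sketch with OpenAI's publicly released CLIP package; the image file and candidate labels are placeholders.

```python
import torch
import clip                    # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder image and candidate labels a robot might need to ground.
image = preprocess(Image.open("scene.jpg")).unsqueeze(0).to(device)
texts = clip.tokenize(["a coffee mug", "a screwdriver", "a water bottle"]).to(device)

with torch.no_grad():
    # CLIP embeds image and texts into a shared space; the similarity
    # scores act as zero-shot classification logits.
    logits_per_image, _ = model(image, texts)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print(probs)   # e.g. [[0.92 0.03 0.05]] -> the mug is recognized
```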
Another option is to generate interaction data in simulation. The challenges here are to make the simulation realistic and to transfer what has been learned in simulation to reality (the Sim2Real gap).
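One common technique for narrowing the Sim2Real gap is domain randomization: physics and appearance parameters are re-sampled on every training episode so that the learned policy cannot overfit to one specific simulated world. Below is a minimal sketch; `sim`, its setters and `run_episode` are a hypothetical simulator API, not a real library.

```python
import random

def randomize_domain(sim):
    """Re-sample physics and appearance parameters for one episode.
    `sim` and its setters are a hypothetical simulator interface."""
    sim.set_friction(random.uniform(0.4, 1.2))
    sim.set_object_mass(random.uniform(0.1, 2.0))       # kg
    sim.set_light_intensity(random.uniform(0.3, 1.0))
    sim.set_camera_noise(random.uniform(0.0, 0.05))

def train(policy, sim, episodes=10_000):
    """Training loop: a fresh domain variation for every episode."""
    for _ in range(episodes):
        randomize_domain(sim)
        rollout = sim.run_episode(policy)   # hypothetical rollout call
        policy.update(rollout)              # any RL or imitation update
```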
Learning from big data in robotics?
Humans as a model show us that data-efficient learning is possible. It requires specific learning machinery that has been evolutionarily optimized and, thanks to prior knowledge - an inductive bias - needs little data. Although this means we can no longer learn arbitrary tasks, we learn the tasks that life actually presents us with faster and better.
In order to achieve comparable data efficiency with robots, similar learning models are needed that have a suitable inductive bias. In my view, it would be helpful to base these models on the structure of the human cognitive system. In particular, robots need not only a fast, parallel sensorimotor System 1 for routine tasks, but also a System 2 for higher cognitive functions such as planning or assessing one's own limits.
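A minimal sketch of what such a two-system control architecture could look like; all interfaces (`robot`, `planner`, `plan`) are hypothetical stand-ins for real perception, skill and planning components.

```python
def control_loop(robot, planner, slow_every=50):
    """Two-system architecture sketch (all interfaces assumed):
    a fast sensorimotor loop ("System 1") executes the current plan on
    every tick, while a slow deliberative loop ("System 2") monitors
    progress, replans and assesses the robot's own limits."""
    plan = planner.make_plan(robot.observe())
    tick = 0
    while not plan.done():
        obs = robot.observe()
        robot.act(plan.next_action(obs))      # System 1: reactive execution
        if tick % slow_every == 0:            # System 2: slow deliberation
            if planner.confidence(plan, obs) < 0.5:
                # Near its competence limits: replan, or hand over
                # to a human teleoperator.
                plan = planner.make_plan(obs)
        tick += 1
```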
With the right cognitive architecture, teleoperation offers great opportunities to gradually transfer human competences to autonomous functions and thus be less and less dependent on humans as operators.
Video series "Nachgefragt": Interview with Prof. Dr. Sven Behnke
Dr. Dorothea Koert | Technical University of Darmstadt
Dorothea Koert is head of the IKIDA junior research group at the Intelligent Autonomous Systems Lab at TU Darmstadt and a member of the Learning Robotic Systems working group of Plattform Lernende Systeme.
How robots learn
The possible tasks that robots will be able to perform in everyday life in the future are diverse - as are the preferences of their users as to how they want to be supported by a robot. This makes pure pre-programming of future robots almost impossible. The ability to learn new tasks in interaction with humans is therefore becoming a key component in the development of intelligent robotic systems.
If broad sections of society are to benefit from robots capable of learning, it is essential that robots can also learn new tasks from everyday users without prior programming knowledge.
Learning from demonstrations and feedback
Two promising approaches for robots to learn from humans are learning from demonstrations and interactive reinforcement learning. When learning from demonstrations, robots can either be "taken by the hand" and guided through the task by a human, or they can observe a human performing the task and then try to understand and reproduce what they have seen. Human demonstrations can be used both to recognize familiar subtasks and perform them in a new order, and to learn completely new movement and task sequences.
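A minimal sketch of the first approach, behavior cloning from a kinesthetic demonstration: logged (state, action) pairs are fit with a supervised regressor. The file names and dimensions are placeholders; real systems use far richer representations.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical recording of a kinesthetic demonstration: while the human
# guides the robot arm, we log (state, action) pairs, e.g. joint angles
# and the velocities the human imposed. File names are placeholders.
states  = np.load("demo_states.npy")    # shape (T, state_dim)
actions = np.load("demo_actions.npy")   # shape (T, action_dim)

# Behavior cloning: supervised learning of the mapping state -> action.
policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
policy.fit(states, actions)

def act(observation: np.ndarray) -> np.ndarray:
    """Reproduce the demonstrated skill from the current observation."""
    return policy.predict(observation.reshape(1, -1))[0]
```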
In interactive reinforcement learning, on the other hand, robots use feedback gained in interaction with humans to iteratively improve what they have previously learned. Humans can evaluate robots during task execution, which also allows robots to learn their users' personal preferences for how a task should be carried out. Feedback can either be given explicitly, for example via a tablet or voice input, or robots can learn from implicit feedback, i.e. from how their behavior influences human behavior or the success of the task execution.
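A minimal sketch of the second approach: tabular Q-learning in which the reward signal is an explicit human rating rather than a hand-coded reward function. The states, actions and feedback interface are illustrative.

```python
import random
from collections import defaultdict

# Tabular Q-learning where the reward is explicit human feedback,
# e.g. a rating entered on a tablet. States/actions are illustrative.
Q = defaultdict(float)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
ACTIONS = ["approach_slowly", "approach_quickly", "wait"]

def choose(state: str) -> str:
    if random.random() < EPSILON:                      # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])   # exploit

def update(state: str, action: str, next_state: str,
           human_reward: float) -> None:
    """One interactive learning step: the human's rating replaces a
    hand-coded reward function."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    td_target = human_reward + GAMMA * best_next
    Q[(state, action)] += ALPHA * (td_target - Q[(state, action)])

# E.g. the user rates the robot's last behavior with +1 (liked it):
update("user_nearby", "approach_slowly", "at_user", human_reward=+1.0)
```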
Sources of human error in learning
Adaptive robotic systems that learn through direct interaction with humans and improve on what they have previously learned hold great potential in many areas of application - provided the robots are safe. An important question in current research is therefore how robots and the algorithms they use can be protected against incorrect or undesired human demonstrations. In contrast to classically programmed robots, learning robotic systems should, for example, be able to account for potential uncertainties or inconsistencies in human feedback. It is equally important that robots cannot be made to leave a previously defined core task area, not even through human demonstrations.
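One simple safeguard along these lines is to validate demonstrations against, and clamp commands into, a predefined safe workspace. A minimal sketch with assumed workspace bounds:

```python
import numpy as np

# Hypothetical axis-aligned "core task area" the robot must never leave,
# regardless of what a demonstration or learned policy suggests.
WORKSPACE_MIN = np.array([0.2, -0.4, 0.0])   # meters, x/y/z
WORKSPACE_MAX = np.array([0.8,  0.4, 0.6])

def demonstration_is_valid(trajectory: np.ndarray) -> bool:
    """Reject demonstrations containing waypoints outside the safe area,
    instead of silently learning from them. `trajectory` has shape (T, 3)."""
    return bool(np.all((trajectory >= WORKSPACE_MIN) &
                       (trajectory <= WORKSPACE_MAX)))

def safe_target(target: np.ndarray) -> np.ndarray:
    """Runtime safety layer: clamp any commanded position back into
    the allowed workspace before execution."""
    return np.clip(target, WORKSPACE_MIN, WORKSPACE_MAX)
```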
The development of safe and human-centered future learning algorithms therefore requires interdisciplinary research in cognitive science, robotics and machine learning. The aim is to understand how people want to give and receive demonstrations and feedback and to explore how the robots of the future can best learn from this.