Certification of AI Systems: Plattform Lernende Systeme Names Challenges
Artificial Intelligence (AI) is already used in many industries. To exploit its economic and social potential, it is essential to strengthen trust in AI systems and in the processes and decisions associated with them. One possible key prerequisite for this is the certification of AI systems. In a discussion paper, experts from Plattform Lernende Systeme outline the benefits such certification promises and the demands it places on technical implementation, public welfare and the preservation of innovative strength. The paper provides an overview of existing certification initiatives in Germany and forms the basis for further discussion.
Particularly in sensitive application areas such as medicine, certification of AI systems can help strengthen confidence in their performance, reliability and security. In an operational context, certification facilitates the interoperability of different systems and thus promotes the wider use of Artificial Intelligence. Finally, certification could stimulate competitive dynamics in the development of AI applications and, by establishing a trusted "AI made in Europe" brand, create competitive advantages internationally.
Special challenges in self-learning systems
On the way to certification of Artificial Intelligence, however, numerous questions still need to be answered. "For example, it must be clarified how self-learning systems can be reliably verified, or how continued learning during operation can be ensured – for example through structured updates," says Prof. Dr. Stefan Wrobel, head of the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS and member of the Technological Enablers and Data Science working group of Plattform Lernende Systeme. "Another challenge is that AI applications are often hybrid systems – that is, they combine different AI technologies. In the field of language technology, for example, Machine Learning methods are often combined with model-based knowledge. Certification must also cover such complex systems," says Stefan Wrobel.
Applying the right benchmarks is also crucial for a meaningful and beneficial certification of AI systems. "The task is to develop a general testing scheme that makes the certification of highly diverse AI systems across different fields of application comparable. In doing so, already established norms and standards must be taken into account and existing gaps closed," explains Stefan Wrobel, one of four co-authors of the discussion paper.
Finding an appropriate level for certification
The discussion paper "Certification of AI Systems", which was prepared by an interdisciplinary team of authors, not only illuminates technical, but also legal and ethical aspects. "Certification can help us to exploit the societal advantages of many AI systems in a secure way and with respect to the common good. In order for this to happen in accordance with socially recognised values, a form of certification must be found that is guided by important ethical principles, but at the same time also fulfils economic principles, avoids over-regulation and promotes innovation," says Jessica Heesen, head of the research focus Media Ethics and Information Technology at the International Centre for Ethics in the Sciences and Humanities (IZEW) at the University of Tübingen and head of the IT Security, Privacy, Law and Ethics working group of the Plattform Lernende Systeme. The co-author of the discussion paper adds: "In the best case, certification itself can trigger new developments for a European path in AI application".
About the discussion paper
The discussion paper Certification of AI Systems (Executive Summary) highlights the potential and challenges of certifying AI systems and provides an overview of existing certification initiatives in Germany. The paper was written by members of Plattform Lernende Systeme under the direction of the IT Security, Privacy, Law and Ethics working group and the Technological Enablers and Data Science working group, and is intended as a basis for further discussion on the topic.
Further information:
Linda Treugut / Birgit Obermeier
Press and Public Relations
Lernende Systeme – Germany's Platform for Artificial Intelligence
Managing Office | c/o acatech
Karolinenplatz 4 | 80333 Munich
T.: +49 89/52 03 09-54 /-51
M.: +49 172/144 58-47 /-39
presse@plattform-lernende-systeme.de