Ms Heesen, what principles should be followed for the responsible development and application of AI systems?
Jessica Heesen: In principle, AI development should serve society; it should not create new technical or economic constraints that violate ethical standards of coexistence or block positive developments. This is what is meant when political documents and speeches speak of "AI for people". And it is what the philosopher Theodor W. Adorno expressed as early as 1953: "There is no technological task that does not fall within society". Technological development is therefore never just about the technical solution as such; it is a fundamental building block of a humane society. Accordingly, the development of AI, as one of today's key technologies, must be guided by values such as non-discrimination, protection of privacy, reliability, and transparency.
In general, it is important that the routines and recommendations for action produced by AI systems are not treated as if there were no alternative to them ("practical constraints"). As with other technical products, certain changeable purposes and preferences are "inscribed" in AI software, and these can benefit certain groups and individuals while harming others.
What does this mean in concrete terms for the development and application of AI?
Jessica Heesen: We must distinguish two levels here: on the one hand, the scope for action of individuals or groups when using AI; on the other, the political and legal framework conditions set for a value-oriented and secure use of AI. For example, algorithmic decision systems with AI components can be used to decide on social benefits. The United Nations speaks here, quite critically, of a "Digital Welfare State". AI systems can, for example, decide on the allocation of food vouchers, as is already practised in India. Or they can be used to track down potential benefit fraud. The use of a programme called SyRI, which the Dutch Ministry of Social Affairs employed to identify people who might be wrongfully receiving unemployment or housing benefits, has since been prohibited by the courts.
One thing is clear: the use of such sensitive AI decision-making systems demands a high sense of responsibility. The administrative staff on site must understand, interpret, and fairly apply the system's functionality, at least in broad outline. This includes being able, in the role of the final human decision-maker, to override the decision of an AI system. The authority, in turn, must ensure reliability and non-discrimination when purchasing such a system. The developers are responsible, among other things, for ensuring the quality of the training data and for systematically testing the systems for discriminatory factors.
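To make the developers' duty of testing for discriminatory factors a little more tangible, one common, minimal check is to compare a decision system's approval rates across demographic groups (the "disparate impact" ratio). The following Python sketch is purely illustrative; the group labels and decision values are hypothetical, and this is not the audit procedure of SyRI or of any system mentioned here.

```python
# Minimal, illustrative fairness check: compare approval rates across groups.
# All names and data below are hypothetical examples, not taken from any
# system discussed in the interview.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs, where approved is 0 or 1."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group approval rate.
    A common rule of thumb flags ratios below 0.8 for closer review."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    rates = approval_rates(sample)
    print(rates)                    # {'A': 0.666..., 'B': 0.333...}
    print(disparate_impact(rates))  # 0.5 -> would warrant closer review
```

A check like this is, of course, only one narrow slice of the responsibility described above; it says nothing about the quality of the training data or about whether the decision criteria themselves are justified.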
What are the implications for a possible regulation of AI systems?
Jessica Heesen: The concept of regulation covers a whole range of ways of shaping AI systems so that they are technically advanced by a society yet, at the same time, in line with the values of liberal constitutional states. That is why the EU Commission repeatedly emphasises the importance of value-oriented AI development. In general, there are different approaches to this: weak forms of regulation include, for example, the codes of ethics of professional societies or companies; strong forms are legal and state requirements, such as the certification of AI systems. In all these forms of regulation, ethical values are fundamental and must, in a further step, be spelled out for the respective application contexts.
Plattform Lernende Systeme has developed an ethics guide that outlines an ethically reflective process for developing and applying AI systems, addressed to developers, providers, users, and those affected. It names principles such as self-determination, justice, and the protection of privacy, from which the criteria for regulating AI systems are largely derived. With it, we want to contribute to the public discussion and provide an impetus for further debate.