AI in the legal system: Opportunities and challenges for democracy

Artificial Intelligence (AI) can ease the burden on the legal system and contribute substantially to fairer and more transparent judgements and proceedings. At the same time, AI systems in legal applications still show qualitative weaknesses and raise ethical and legal concerns. A new white paper from Plattform Lernende Systeme provides an overview of possible applications of AI systems in the context of judicial decisions and outlines design options for the successful use of AI in law firms and courtrooms.

Download the white paper (in German)

The justice system in Germany is increasingly overloaded. According to the German Association of Judges, more than 900,000 cases remained open nationwide last year - significantly more than in previous years. AI technology promises greater efficiency and a wide range of applications for the legal system: At one end are systems with low autonomy, such as chatbots that offer private individuals automated information for legal self-help. Lawyers can be assisted by AI-supported tools in researching legal texts, predicting judgements or preparing written pleadings. At the other end of the spectrum are programmes that make recommendations for judicial decisions, such as whether a sentence should be suspended. The culmination of such a development would be fully automated judgements by a so-called Roboterrichter (robot judge), which are currently not constitutionally permissible in Germany.

The rule of law and an independent judiciary are central building blocks and preconditions of a liberal democracy. The authors of the white paper "Künstliche Intelligenz und Recht - Auf dem Weg zum Robo-Richter?" emphasise that technologies influencing this sensitive area of society must therefore comply with constitutional requirements and ethical principles. As further challenges for the use of AI in the legal system, the experts cite the (still) insufficient quality of AI systems in legal applications and data protection, which becomes particularly problematic when cloud-based language models are used. Moreover, an AI system cannot fully do justice to the complexity of a legal dispute in court by analysing the objective facts alone, as it cannot account for soft factors such as empathy or a sense of proportion. Law is not static but evolves continuously; AI-based forecasting systems, by contrast, anchor the interpretation of the law to a given status quo, according to the authors.

Trust in democracy and the rule of law 

Trust in an independent and fair judiciary is a basic prerequisite for a functioning democracy. According to Frauke Rostalski, Professor of Criminal Law, Criminal Procedure Law, Philosophy of Law and Comparative Law at the University of Cologne, member of Plattform Lernende Systeme and co-author of the white paper, transparency is a key factor in achieving this: ‘AI systems harbour the risk of being so-called “Black Boxes”. When they are used to support judicial decisions, it is all the more important to ensure that any suggestions are adequately explained and justified. This is the only way to create social acceptance for the decision and thus legal peace.’

According to the white paper, society must decide in an open and equitable negotiation process whether - and if so, to what extent - it wants AI support in and for the legal system. The authors also emphasise that the final decision on any use of AI in the legal system must always remain with a human being. A right to refuse the involvement of AI in judgements could further contribute to the fundamental acceptance of AI in the legal system. Lawyers would also need to develop AI expertise. The federal and state governments should provide the financial resources to develop AI systems for the legal system independently of private companies.

About the white paper

The white paper "Künstliche Intelligenz und Recht - Auf dem Weg zum Robo-Richter?" was written by members of the IT Security, Privacy, Law and Ethics working group of Plattform Lernende Systeme. The publication can be downloaded free of charge at this link.

Further information:

Birgit Obermeier
Press and Public Relations

Lernende Systeme – Germany's Platform for Artificial Intelligence
Managing Office | c/o acatech
Karolinenplatz 4 | D - 80333 Munich

T.: +49 89/52 03 09-54 /-51
M.: +49 172/144 58-47 /-39
presse@plattform-lernende-systeme.de
