3 Questions for

Uta Wilkens

Holder of the Chair of Work, Human Resources & Leadership at the Ruhr University Bochum and spokesperson of the HUMAINE competence centre / member of Plattform Lernende Systeme

AI expertise: critical thinking and an understanding of the impact of AI are crucial

The first provisions of the AI Act have been in force since February 2025. Among other things, providers and operators of AI systems are obliged under Article 4 to train their employees in the use of Artificial Intelligence (AI). This concerns the need to impart sufficient technical AI skills and to ensure an understanding of the social, ethical and legal aspects of the use of AI. In this interview, Uta Wilkens explains what this means in concrete terms for companies, which skills should be taught and which challenges remain. She holds the Chair of Work, Human Resources & Leadership at Ruhr University Bochum and is spokesperson for the HUMAINE competence centre as well as a member of the ‘Future of Work and Human-Machine Interaction’ working group of Plattform Lernende Systeme.

1

Ms Wilkens, what does the implementation of Article 4 of the AI Act actually mean for companies?

Uta Wilkens: AI expertise must be built up and further developed in a context-appropriate and target group-specific manner. This is an important requirement for companies. What exactly this means in terms of content, what benchmark indicates sufficient AI skills, which forms of learning and teaching (formal or informal learning) can be taken into account, and who is qualified to teach AI skills all remain very vague at the moment. For companies, this means first of all that the topic of AI skills must be assessed in connection with the introduction and use of AI systems, and that their responsibility in this regard must be recognised. So something needs to be done.

2

What training content should employees be taught?

Uta Wilkens: A distinction can be made here between skills and user knowledge, background knowledge and the ability to assess the effects and consequences. If you look at current training content, companies are concerned with the effective use of AI for specific areas of work, e.g. how to achieve helpful results with generative AI through good prompts or how to recognise and limit hallucinations. In addition to user knowledge, technical expertise also involves background knowledge in order to develop a basic understanding of how AI algorithms, machine learning processes and large language models work. Background knowledge also includes knowledge of the legal requirements and ethical principles when dealing with AI.

From a scientific perspective and when dealing with AI expertise, all of this is important, but still falls short because it is about more than the instrumental handling of an AI tool. It is about understanding what effect is generated by an AI in the applied domain and how the use of AI affects one's own domain knowledge. Accordingly, a high level of AI expertise means that users are familiar with approaches that allow them to increase rather than reduce critical thinking when using AI. The developers of AI must know the organisational goals for which the AI is being used and how the data used should be classified against this background. The interplay between technical and domain knowledge is a key point. These facets should not be handled separately. By dovetailing AI expertise with domain knowledge, it is ultimately possible to develop expertise in a context- and target group-specific manner.

For managers who decide on the use of AI and for employee representatives who co-determine these decisions, there are also important framework conditions to consider. For example, prohibited practices are regulated in Article 5 of the EU AI Act. Questions also arise such as: What impact will AI have on the individual activities of employees, for example with regard to their role? What are the consequences for work organisation? How will personnel deployment scenarios change?

The HUMAINE competence centre is developing methods that support organisations in developing, implementing and using artificial intelligence in work processes in a human-centric way. These methods are available via the HUMAINE Toolbox. In addition, the competence centre offers training infrastructure specifically for industrial SMEs, brings human-AI role development to life in training labs set up for this purpose and offers advice on various AI ethics topics. This also includes a model works agreement and a model AI code of conduct.

3

What challenges arise during implementation?

Uta Wilkens: There is already a large number of training providers: from institutional providers such as the Chamber of Industry and Commerce or organisations such as HUMAINE to private providers; from high-priced classroom training to webinars that convey content in a minimal amount of time. The choice is extensive, and the challenge remains to select training courses whose content fits the specific applications and target groups in question.

In my opinion, ethical topics relating to the use of AI have been underrepresented to date. According to our exchanges of experience, employees feel that they have not received enough training in this area in particular. There is still some catching up to do here. In addition, the target group-specific tailoring of training content needs to be improved in order to cover actual individual needs. And finally, there is a lack of concrete knowledge about the learning success of previous skills development programmes.

The interview is released for editorial use (provided the source is cited © Plattform Lernende Systeme).
