3 Questions for

Wolfgang Ecker

Distinguished Engineer at Infineon Technologies, Honorary Professor at the Technical University of Munich.

Edge AI: "Enormous application potential can only be realised through a holistic approach"

The latest successes in the development of generative artificial intelligence (AI) rest on ever larger volumes of centrally processed data, upscaled neural networks and growing computing capacity. This comes with data protection issues, costs and rising resource consumption. Research and industry are therefore pursuing a complementary approach in parallel: the decentralisation of AI architectures along the lines of edge computing, known as edge AI. The aim is to process and analyse data for AI systems not in the cloud, but as close as possible to where it is generated, i.e. close to the user.


In this interview, Wolfgang Ecker explains the benefits of edge AI, where it can be used and what hurdles still exist. He is a Distinguished Engineer at Infineon Technologies specialising in hardware architectures and advised the German Bundestag as an expert member of the Enquete Commission "Artificial Intelligence - Social Responsibility and Economic, Social and Ecological Potential". Wolfgang Ecker is co-author of a white paper on edge AI, recently published by the Technological Enablers and Data Science working group of Plattform Lernende Systeme.

1

Mr Ecker, what are the greatest advantages of edge AI compared to traditional cloud approaches?

Wolfgang Ecker: Let me put it this way: from a technological point of view, edge AI is an additional technical challenge compared to the cloud. Edge AI has to make do with milliwatts of electrical power to compute the networks, whereas the cloud uses kilowatts or megawatts. In terms of cost, edge AI tends to be in the euro range, while the cloud costs thousands or millions of euros. Accordingly, the AI computing units have to be smaller and use less power, which is only possible with particularly optimised networks.

The technical advantages of edge AI solutions therefore lie in the application of the technology. Edge AI does not have to send the data to the cloud first and wait for a response, but can run close to where the data is generated. Faster and guaranteed responses from the AI are therefore a technical advantage, as is the protection of the data, which only needs to be stored locally. The applications are also more robust, because a communication failure with the cloud does not have to be taken into account. Finally, edge AI applications have a much lower CO2 footprint than applications in the cloud.

2

What potential does this offer - and where do we stand in Germany in terms of transfer?

Wolfgang Ecker: The advantages mentioned above - small form factor, low cost, low energy consumption, better protected data and implementations that are inherently more robust because they are more independent - open up a wide range of potential, especially in leading German industries such as automotive, mechanical engineering and medical technology. One example is vehicle-to-vehicle communication in semi-autonomous driving: using sensor data from the car (e.g. lidar, cameras, radar) and traffic data exchanged between vehicles via communication networks, local AI models can process the incoming data reliably and in real time to detect anomalies. In dangerous situations, warnings can be issued or measures can even be initiated autonomously to avoid an accident. Another field of application is industrial robotics: here, edge AI can be combined with federated learning so that picking robots are able to "feel" with AI, learn from each other and thus reliably grasp even unknown objects.
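To make the federated learning idea concrete, the following is a minimal, purely illustrative Python sketch (not taken from the interview or the white paper): each robot updates a shared model on its own data locally, and only the model weights are averaged, so the raw sensor data never leaves the device. All names, shapes and numbers are hypothetical.

import numpy as np

def local_update(weights, gradients, lr=0.01):
    # One local gradient step on a robot's private data; the gradients are
    # assumed to come from that robot's own grasp attempts.
    return [w - lr * g for w, g in zip(weights, gradients)]

def federated_average(robot_weights):
    # Average the locally updated weights layer by layer across all robots
    # (the core of federated averaging).
    return [np.mean(np.stack(layer), axis=0) for layer in zip(*robot_weights)]

# Hypothetical round with three robots sharing one small two-layer model.
global_model = [np.zeros((4, 4)), np.zeros(4)]
local_models = []
for _ in range(3):
    # Stand-in for gradients computed from each robot's local sensor data.
    fake_gradients = [np.random.randn(*w.shape) for w in global_model]
    local_models.append(local_update(global_model, fake_gradients))
global_model = federated_average(local_models)  # then redistributed to all robots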

In my opinion, the opportunities for edge AI are limitless. Even though there have already been successes, we are still far from exploiting the available potential. Isolated approaches and silo thinking around individual areas of work stand in the way of a holistic approach. The design of the networks, the training of the networks, the translation of the networks to the target hardware and the hardware architectures for computing the networks are largely considered independently of one another. Cloud AI solutions are often adapted instead of developing complete, customised edge AI solutions. However, a holistic approach is necessary in order to provide high-performance edge AI technology. Technology and applications must also be considered together: edge AI machines can only be designed efficiently with knowledge of the application, and in turn new applications can only be developed with knowledge of what edge AI technology can deliver.

3

Will it be possible to run and/or train large language models "on the edge" in the foreseeable future? What is necessary for this?

Wolfgang Ecker: Taken literally, I don't think that will ever work. "Large models" on the one hand and energy and cost efficiency on the other do not go together. The decisive factor will be whether large language models can be scaled down in such a way that they can be processed on edge devices. It is just as important to design the training so efficiently that it can be executed on the edge. Initial approaches are known, so why shouldn't it work with the holistic approach described above?
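As a rough, hypothetical illustration of what scaling a language model down for edge devices means in practice, the following back-of-the-envelope Python sketch compares the weight-memory footprint of a large model at full precision with a heavily distilled and quantised model. The parameter counts and bit widths are assumptions chosen for illustration, not figures from the interview or the white paper.

def model_size_mb(n_params, bits_per_weight):
    # Memory needed just to store the weights, in megabytes.
    return n_params * bits_per_weight / 8 / 1e6

# A 7-billion-parameter model at 16-bit precision vs. a hypothetical
# distilled and quantised 100-million-parameter model at 4 bits per weight.
cloud_scale = model_size_mb(7e9, 16)   # ~14,000 MB of weights alone
edge_scale = model_size_mb(1e8, 4)     # ~50 MB, plausible for an edge device

print(f"Full-size model:   {cloud_scale:,.0f} MB")
print(f"Scaled-down model: {edge_scale:,.0f} MB")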

However, it must be made clear that an edge AI implementation will never achieve the universality and performance of cloud AI. Accordingly, distributed and/or mixed edge AI/cloud AI solutions hold further great potential. Perhaps new edge AI technologies will even migrate back to the cloud in order to reduce energy requirements and the carbon footprint.

The white paper "Edge AI - AI technology for more data protection, energy efficiency and real-time applications" from Plattform Lernende Systeme provides detailed expertise on the potential and challenges of edge AI.


The interview is authorised for editorial use (provided the source is cited © Plattform Lernende Systeme).
