Powerful technologies may help overcome future challenges

2 February 2022

How will the field of artificial intelligence (AI) develop in the coming years? What risks and what opportunities will open up? Professor Stefan Kramer of the Institute of Computer Science at Johannes Gutenberg University Mainz (JGU) aims to find answers to these questions – in an interdisciplinary research project in which he and his colleagues will investigate core aspects of AI over the next six years.

"When it comes to artificial intelligence, the knowledge that most people have on the subject is, at best, 10 to 15 years old," says Professor Stefan Kramer. "We've got a lot of explaining to do. It is thus important that we get the subject dealt with more thoroughly at schools and universities in particular. Our students and young people need a basic understanding of digital technology and, specifically, of how to handle data."

AI itself is not without potential problems, as Kramer is the first to admit. "It is a powerful technology and we therefore need to keep tabs on what is happening. Even so, we shouldn't focus solely on the risks but also consider the potential advantages. I have the impression that the negative aspects are being overemphasized at present."

Carl Zeiss Foundation funding for TOPML project at Mainz University

For a change, Kramer is not sitting at his usual desk in his office at the JGU Institute of Computer Science but in front of his computer in his flat in the old city center of Mainz. We are talking online as COVID-19 is again gaining ground and the infection rate is on the rise. But for Kramer, the computer scientist, communicating in this form is, of course, perfectly routine and it even gives him the opportunity to open files and put diagrams and the like on screen to more clearly explain what he is talking about. One of his main interests is a major and ambitious research project that will involve not only Kramer but a lot of other researchers working in an extensive range of fields at JGU.

"Trading Off Non-Functional Properties of Machine Learning" (TOPML) is the title of this undertaking, in which the spotlight will be on four main aspects of artificial intelligence. "The Carl Zeiss Foundation will invest some EUR 5 million in this project over the next six years," adds Kramer. TOPML is being funded as part of the foundation's recently established program on artificial intelligence. A total of six projects in Germany have been granted financing through this program.

"In our TOPML project, we will be looking at the factors of transparency, data protection, fairness, and the efficient use of resources. We will be examining the relationships and interplay between these four concepts: How can maintaining the transparency of AI decision-making be reconciled with the need to preserve the integrity of the private sphere, for example? The fewer the processing stages, the better the efficiency – in other words, energy consumption is reduced and you save time and money. But does this have an impact on fairness?"

A whole range of disciplines will be involved in the discussion of these issues. "We'll be collaborating with specialists in law, ethics, and philosophy. We will share our knowledge about the latest AI techniques with them, while we computer scientists will familiarize ourselves with their points of view. We are already capable of putting this technology in place, but before we do so it is essential that we have this exchange of ideas and arguments." Professor Thomas Metzinger of the JGU Department of Philosophy is one of these specialists, as are Professor Matthias Bäcker and Professor Friederike Wapler of the Mainz School of Law. Initially, there will be six subprojects, each dealing with the interrelationship between two of the four specified aspects. "Furthermore, we will establish an endowed professorship on AI."

The opportunities of artificial intelligence

The TOPML project will involve more than research activities. "We intend to offer workshops and open discussions to reach a wider public. There is an extra budget earmarked for this. We'll use these platforms to communicate the opportunities of AI as well as the challenges."

This is important for Kramer: "There is an extremely critical debate about AI going on in Germany and Austria in particular. Data protection is a major concern here. While I fully understand these worries, we need to bear in mind that it is just one element among many. AI can help us, for example, to make better use of resources. It can improve logistics processes. Moreover, AI can be completely impartial – more impartial than human decision-making. We've developed processes that ensure that nobody is favored or disadvantaged in comparison with others. This is an issue that our research community has been dealing with for some time now – it is a very important aspect." Hence, the intention is that AI will provide the same level of service to everyone, foster greater equality, and help underprivileged groups to participate more freely. "We can achieve all this by means of the deliberate use of suitable algorithms."

At the same time, Kramer leaves no doubt that the use of AI needs to be closely monitored. "This is a very active field of discussion. An appropriate regulatory concept has emerged at EU level. It defines AI applications in terms of four risk categories." Kramer puts a diagram showing a pyramid on screen: Its tip is bright red, then there follow various purple- and blue-colored sections towards the base. "The peak represents all unacceptable forms of AI usage, such as for social scoring or for biometric analysis in public areas. These will be prohibited. Next we have the high risk level, which covers such things as medical applications, transport infrastructures, and education. When used in such situations, AI will need to be stringently monitored." The two lower levels need far less regulatory control. These levels are relevant to, for example, chatbots and game and entertainment systems. "I find this approach very persuasive," says Kramer. "It considers everything from an external standpoint, provides a framework but also makes provision for a certain freedom of configuration."

Cautious predictions for the future

The question is what form these configurations will take. Perhaps we will have more autonomous systems, such as those currently well under development for use in road transport. Will AI facilitate and advance human endeavors – or could it possibly make human work and thinking superfluous?

"At present, we are experiencing the third wave of AI. In the first wave, knowledge-based systems were created, while the second resulted in forms of AI capable of utilizing mass data for statistical learning. Now, in the third wave, we are working towards merging the two. We want to further improve learning and decision-making processes." What exactly the outcome of this will be is, as yet, unclear, and Kramer is not interested in speculating about the future. "I'm no prophet," he asserts. However, it is undeniable that the introduction of new technologies always leads to some form of societal disruption. "One significant factor in this context is automation. Thanks to AI, we will be seeing massive progress here. And increasing automation invariably means that there will be less work left to distribute." This will be a major challenge for society.

"On the other hand, AI can help us when it comes to finding ways of making economic activity more sustainable, of conserving energy, and of putting a halt to climate change," he adds. "To be clear, this technology on its own will not save us. We will have to accept certain changes to our lifestyle, but by making more rational use of resources possible across the board, AI could help us conserve the odd percentage point here and there – and this could well prove to be decisive." Kramer leans back in his chair and takes a deep breath. "I don't have the gift of precognition," he again stresses in conclusion, "I'm just a computer scientist."