
Issue: 02/2023

Generative AI

Strengthen research, maintain digital sovereignty

Ever since the launch of ChatGPT last year, large, pre-trained AI models for text generation have been an ongoing topic of public debate. The question frequently posed on panels and in opinion pieces is: Generative AI – saviour or demon? Obviously, the answer must be more nuanced. After all, the size of the model and its training data, the cost of training, the performance after training and the field of application all influence the quality and trustworthiness of the results.

High-profile comments complicate the factual debate: the author Yuval Harari recently wrote in the Economist that generative AI has hacked "the operating system of human civilisation" – language. Elon Musk, together with Steve Wozniak (Apple), Evan Sharp (Pinterest) and other tech luminaries, even called for an AI moratorium, only to announce TruthGPT shortly afterwards. Of course, technology assessment must consider ethical and regulatory requirements for the use of technology in addition to determining its economic and social potential. A differentiated view, however, one that could guide wise innovation promotion and regulation alike, was probably not these thought leaders' main concern. Yet differentiation is necessary in order to tap the potential of AI as a breakthrough innovation without hindering human development.

Large language models are powerful tools for knowledge preparation, interaction and decision support. Nevertheless, our operating system has not been hacked. The surface and deep structure of language are complex and sometimes overwhelm the chatbots. They "hallucinate", to use the words of AI pioneer John McCarthy, when the training material in a knowledge area is insufficient and the language model produces plausible-sounding but ultimately wrong or misleading answers. Knowledge graphs are a promising remedy: a chatbot can ground its answers in the information and facts stored in the graph instead of guessing. L3S is working on these issues.
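To illustrate the grounding idea, here is a minimal Python sketch; it is not L3S code, and the triple store, function name and facts are hypothetical placeholders. A knowledge graph stores facts as subject–predicate–object triples, and the system answers only from those stored facts, explicitly refusing when no supporting fact exists rather than producing a plausible guess.

```python
# Minimal sketch of knowledge-graph-grounded answering.
# All names and facts below are illustrative placeholders.

# A toy triple store standing in for a real knowledge graph:
# facts are (subject, predicate, object) triples.
TRIPLES = {
    ("L3S", "located_in", "Hannover"),
    ("L3S", "research_field", "artificial intelligence"),
}

def grounded_answer(subject: str, predicate: str) -> str:
    """Answer from stored facts only; refuse if no fact supports the question."""
    matches = [o for (s, p, o) in TRIPLES if s == subject and p == predicate]
    if not matches:
        # No supporting fact: refuse instead of hallucinating an answer.
        return f"No verified fact about '{predicate}' of '{subject}' in the graph."
    return f"{subject} {predicate.replace('_', ' ')}: {', '.join(matches)}"

print(grounded_answer("L3S", "located_in"))      # answered from the graph
print(grounded_answer("L3S", "founding_year"))   # refused, not invented
```

A production system would combine such a lookup with a language model, for instance by retrieving relevant triples and constraining the generated answer to them, but the principle is the same: facts come from the graph, not from statistical guesswork.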

Back to the initial question. The answer is threefold:

1. It is important to objectify the public debate on AI in order to place the social discourse and technology and innovation policy on a firm footing. Science can make an effective contribution here by advising policymakers and society, even if doing so can be challenging.

2. Europe’s ambition to set global standards for trustworthy AI systems deserves support. However, the providers of large, pre-trained AI models are mostly capital- and data-rich technology companies from the USA and China, and they supply the foundation models for the digital value creation of tomorrow. If researchers and companies rely primarily on non-European providers, the result can be dependencies and losses in value creation. Establishing the world’s most advanced legal framework for AI is therefore of little use if Europe is primarily a consumer of the technology it seeks to regulate.

3. Strengthening AI research means preserving Europe’s digital sovereignty. If Europe wants to be more than a consumer, it must reduce dependencies along the entire tech stack. This applies to the semiconductor sector, whose hardware components are essential for training and deploying AI algorithms. The same applies to investments in foundation models, which enable new business models and should at the same time conform to European AI and data protection regulations. Also needed are competences for implementing, managing and improving AI models in industrial contexts, for developing customised AI services for companies, administration and society, and for building data and AI literacy along the entire education chain. In addition, open-source approaches play a special role, as they promote the democratisation of AI development and broad participation by companies, start-ups, communities and research institutions.

Europe can make a significant contribution to making AI a game changer. Let’s get started.


Contact

Dr. Johannes Winter

Johannes Winter is Chief Strategy Officer and Deputy Managing Director of L3S.