
Fairness instead of prejudice

Legal Framework for AI

Artificial intelligence (AI) offers enormous optimisation potential for many areas of business and everyday life. In the process, AI-based systems make decisions that can have far-reaching effects on individuals and on society as a whole. While these decisions open up many opportunities, they can also discriminate, for example in the allocation of jobs or loans. One cause is problematic training data that reflect existing prejudices in society. Biases can also arise later, however, when algorithms transform data into decisions or when the results are used in applications. The use of AI therefore raises legal and ethical questions as well. Science is thus faced not only with the task of optimising the predictive performance of AI algorithms, but also with incorporating ethical and legal principles into their design, training and use.
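To make the problem concrete, here is a minimal Python sketch with entirely hypothetical data: historical loan decisions that favour one group, and the common demographic parity criterion, which simply compares approval rates across groups. A model trained to imitate such labels would inherit the same gap.

```python
# Toy example (all names and numbers are invented for illustration):
# historical loan decisions favour group "A". Demographic parity
# compares the approval rates of the two groups.

records = [
    # (group, credit_score, historically_approved)
    ("A", 700, True), ("A", 650, True), ("A", 600, True), ("A", 550, False),
    ("B", 700, True), ("B", 650, False), ("B", 600, False), ("B", 550, False),
]

def approval_rate(decisions, group):
    outcomes = [approved for g, _, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# A model trained to reproduce these labels would reproduce this gap.
gap = approval_rate(records, "A") - approval_rate(records, "B")
print(f"Demographic parity gap (A vs. B): {gap:.2f}")  # prints 0.50
```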

Taking stock of bias  

A number of projects at L3S aim to understand – and avoid – the legal, social and technical challenges posed by these biases. Among them is the European Research Training Group Artificial Intelligence without Bias (NoBIAS). The participating scientists examine the entire decision-making pipeline. The overall goal: to understand where the different causes of bias lie, to recognise when biases emerge, and to mitigate their impact on application outcomes. Fifteen PhD students at eight institutions in five countries are tackling the problem together, with multidisciplinary research in computer science, data science, machine learning, law and the social sciences. L3S is involved in NoBIAS with Professors Maria-Esther Vidal, Christian Heinze, Eirini Ntoutsi, Wolfgang Nejdl and other researchers. In a first study, together with professors from other institutions, they conducted a broad multidisciplinary survey of bias in AI systems. The study focuses on technical challenges and solutions and proposes new research directions anchored in a legal framework.

Interdisciplinary collaboration   

How does one intervene in the algorithmic components of AI-based decision-making systems to make them fair and equitable? Finding an answer is complicated by the contextual nature of fairness and discrimination. Modern decision-making systems that allocate resources or information to individuals – such as in lending, advertising or online search – incorporate machine-learned predictions into their pipelines. This raises concerns, for example about possible strategic behaviour by participants or about constraints on the allocation, questions traditionally studied in economics and game theory. Gourab K. Patro from the L3S research centre, together with researchers from around the world, has investigated fairness and discrimination in automated decision-making systems from the perspectives of statistical machine learning, economics, game theory and mechanism design. Together, the scientists want to create a comprehensive framework that combines the individual approaches of the different disciplines.
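As a hedged illustration of one way such allocation constraints can be encoded, the Python sketch below allocates a fixed number of loans by predicted score while guaranteeing each group a minimum share. This is not the mechanism studied by Patro and colleagues; the names, scores and quotas are invented for illustration.

```python
# Sketch of a fairness-constrained allocation rule: pick k candidates
# by ML score, but reserve a minimum number of slots per group.

def allocate(candidates, k, min_per_group):
    # candidates: list of (name, group, predicted_score); higher is better.
    by_score = sorted(candidates, key=lambda c: c[2], reverse=True)
    chosen = []
    # First satisfy each group's floor with its best-scoring members ...
    for group, floor in min_per_group.items():
        chosen.extend([c for c in by_score if c[1] == group][:floor])
    # ... then fill the remaining slots purely by score.
    rest = [c for c in by_score if c not in chosen]
    chosen.extend(rest[: k - len(chosen)])
    return chosen

pool = [("u1", "A", 0.9), ("u2", "A", 0.8), ("u3", "A", 0.7),
        ("u4", "B", 0.6), ("u5", "B", 0.5)]
print(allocate(pool, k=3, min_per_group={"A": 1, "B": 1}))
# -> u1 and u4 via the group floors, then u2 by score
```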

Fairness-aware learning   

Data-driven algorithms are used in many applications where data arrive as a continuous stream, so the models need to be updated regularly. In such dynamic environments, where the underlying data distributions change over time, simple static learning approaches can fail. Fairness-aware learning is therefore not a one-off requirement but must cover the entire data stream continuously. Professor Eirini Ntoutsi and Vasileios Iosifidis have been working on this problem at L3S and have proposed an online boosting approach that maintains fairness in the classification of data across the entire data stream. Extensive experiments demonstrate potential applications, for example in banking or policing.
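The following Python sketch illustrates the general idea, not the authors' boosting algorithm: an online linear classifier tracks per-group positive-prediction rates with exponential decay, so the monitor adapts when the distribution drifts, and boosts updates for the disadvantaged group while the running parity gap exceeds a budget. All parameter values and the synthetic stream are hypothetical.

```python
import random

class FairOnlineClassifier:
    def __init__(self, dim, lr=0.1, gap_budget=0.05, decay=0.99):
        self.w = [0.0] * dim
        self.lr, self.gap_budget, self.decay = lr, gap_budget, decay
        # Decayed positive-prediction rate per group (groups keyed 0 and 1).
        self.pos_rate = {0: 0.5, 1: 0.5}

    def predict(self, x):
        return 1 if sum(wi * xi for wi, xi in zip(self.w, x)) > 0 else 0

    def update(self, x, y, group):
        pred = self.predict(x)
        # Track the observed group's positive rate with exponential decay.
        self.pos_rate[group] = (self.decay * self.pos_rate[group]
                                + (1 - self.decay) * pred)
        gap = self.pos_rate[0] - self.pos_rate[1]
        # While group 0 is favoured beyond the budget, weight group-1
        # positives more heavily -- a crude stand-in for fairness-aware
        # boosting on the stream.
        weight = 2.0 if (gap > self.gap_budget and group == 1 and y == 1) else 1.0
        if pred != y:  # perceptron-style update on mistakes
            sign = 1 if y == 1 else -1
            self.w = [wi + sign * self.lr * weight * xi
                      for wi, xi in zip(self.w, x)]

# Tiny synthetic stream: group 1's positives are harder to reach.
random.seed(0)
clf = FairOnlineClassifier(dim=2)
for _ in range(1000):
    g = random.randint(0, 1)
    x = [random.gauss(0, 1), 1.0]        # one feature plus a bias term
    y = 1 if x[0] - 0.3 * g > 0 else 0
    clf.update(x, y, g)
print(clf.pos_rate)
```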

Contact
Prof. Dr. techn. Wolfgang Nejdl

Wolfgang Nejdl is director of L3S and project coordinator of NoBIAS.

Gourab K. Patro

Gourab Patro is a research associate at L3S and project manager of NoBIAS.