Photo: © PhonlamaiPhoto – stock.adobe.com
Ethical and legal standards for artificial intelligence
Artificial intelligence (AI)-based systems already make decisions in many areas that can affect any individual, anywhere, at any time – with far-reaching effects on society as a whole. Search engines, Internet recommendation systems, and social media bots use AI systems, influencing our perception of political developments and even scientific findings. Companies use AI in hiring processes, banks in lending decisions. But when artificial intelligence makes decisions, it comes with risks – discrimination, for example. After all, even the machine brain is not free of biases: when learning from data sets, AI systems also adopt the stereotypes those data contain. Companies could thus miss opportunities because bias causes AI-driven decisions to underperform; much worse, they could violate human rights. One question researchers at L3S are therefore addressing is: How can standards for unbiased attitudes and non-discriminatory practices be upheld in big data analysis and algorithm-based decision making?
Recognize and remedy discrimination
Indeed, concerns about the normative quality of AI-based decisions and predictions are mounting. In particular, there is growing evidence that algorithms sometimes reinforce rather than eliminate existing biases and discrimination – with possible negative implications for social cohesion and democratic institutions. So what is the role of ethics in such influential decision-making systems?
In the BIAS research group, experts from Leibniz Universität Hannover bring together epistemological as well as ethical, legal and technical perspectives. The Volkswagen Foundation is funding the cross-faculty research initiative as part of the call for proposals “Artificial Intelligence – Its Impact on Tomorrow’s Society”. The core idea: philosophers analyze the ethical dimension of concepts and principles in the context of AI (bias, discrimination, fairness). Lawyers investigate whether the principles are adequately reflected in the relevant legal framework (data protection, consumer, competition, anti-discrimination law). And computer scientists develop concrete technical solutions to detect discrimination and remedy it with debiasing strategies.
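To give a flavor of what "detecting discrimination" can mean technically – purely as an illustrative sketch, not the BIAS project's actual tooling – a common first check is the statistical parity difference: the gap in favorable-outcome rates between a protected group and everyone else. The function name and toy data below are assumptions made for this example.

```python
def statistical_parity_difference(decisions, group):
    """Illustrative fairness check (not BIAS project code).

    decisions: list of 0/1 outcomes (1 = favorable, e.g. loan granted)
    group:     list of 0/1 flags (1 = member of the protected group)

    Returns the favorable-outcome rate of the protected group minus
    that of the rest; values near 0 suggest statistical parity.
    """
    protected = [d for d, g in zip(decisions, group) if g == 1]
    rest = [d for d, g in zip(decisions, group) if g == 0]
    return sum(protected) / len(protected) - sum(rest) / len(rest)


# Toy example: the protected group receives favorable outcomes
# in 3 of 4 cases, the rest in only 1 of 4.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
group = [1, 1, 1, 1, 0, 0, 0, 0]
spd = statistical_parity_difference(decisions, group)  # → 0.5
```

A single number like this is of course only a starting point; which fairness notion is even appropriate for a given decision context is exactly the kind of question the philosophers and lawyers in BIAS examine alongside the computer scientists.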
In addition to researchers from the Institute of Philosophy, the L3S is involved in BIAS – with Professors Tina Krügel, Eirini Ntoutsi, Wolfgang Nejdl, Christian Heinze and Bodo Rosenhahn as lead researchers. They are all united by the understanding that not only the algorithms, but the entire system of computer predictions and human decisions should be unbiased and non-discriminatory. They therefore target the entire decision-making process and not just individual components.
AI with responsibility
How AI-based decisions can be made responsibly is also the topic of the European PhD program NoBias – Artificial Intelligence without Bias. Fifteen PhD students at eight institutions in five countries are tackling the problem together: with multidisciplinary research in computer science, data science, machine learning, and law and social sciences. The L3S is involved in NoBias with Professors Eirini Ntoutsi, Maria-Esther Vidal, Christian Heinze, Tina Krügel, Sören Auer and Wolfgang Nejdl.
Bias can occur at all stages of AI-based decision-making processes: when data is collected, when algorithms convert data into decision-making capacity, and when the results are applied. Standard AI methods are not sufficient to avoid discrimination. The young scientists are therefore developing technical solutions that embed ethical and legal principles in the training, design and deployment of the algorithms. To do this, they must first understand the legal, social, and technical challenges. In addition to developing fair algorithms for unbiased decision making, NoBias’ goals include automatically explaining AI results and transparently documenting the entire data provenance process.
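One classic way to embed fairness at the training stage – sketched here only as an illustration, and not as NoBias code – is reweighing in the style of Kamiran and Calders: each training sample gets a weight that makes the protected attribute statistically independent of the label in the weighted data, before any model is trained. The function name and toy data are assumptions for this example.

```python
from collections import Counter


def reweighing_weights(groups, labels):
    """Illustrative pre-processing debiasing sketch (not NoBias code).

    Assigns each sample the weight P(group) * P(label) / P(group, label),
    so that under the weighted data the group attribute and the label
    are statistically independent.
    """
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    joint_counts = Counter(zip(groups, labels))
    return [
        (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]


# Toy example: group "a" is favored (label 1) more often than group "b",
# so its favorable samples are down-weighted and "b"'s are up-weighted.
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 0, 0, 1]
weights = reweighing_weights(groups, labels)
# ≈ [0.75, 0.75, 1.5, 0.75, 0.75, 1.5]
```

Reweighing touches only the data-collection/preparation stage; the NoBias research spans all three stages, including in-training constraints and post-hoc auditing of applied results.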
NoBias establishes practical relevance through cooperation with more than ten associated partner companies from the fields of telecommunications, finance, marketing, media, software and legal advice, which can use the researchers’ expertise to drive forward AI innovations in a legally compliant manner.
Prof. Dr. Eirini Ntoutsi
L3S member Eirini Ntoutsi is project coordinator of NoBias and lead researcher in the BIAS project.
Dr. Vasileios Iosifidis
Vasileios Iosifidis is a research associate at L3S and project leader of NoBias.
Arjun Roy is a PhD student at L3S and a research associate in the BIAS project.