
Volkswagen Foundation supports interdisciplinary research on decision-making with AI

Date: 2019-01-09

"Bias and Discrimination in Big Data and Algorithmic Processing. Philosophical Assessments, Legal Dimensions, and Technical Solutions — BIAS"

Whether selecting applicants or granting loans, more and more decisions are being made by artificial intelligence (AI) techniques based on data and algorithmic processing. Via search engines, Internet recommendation systems, and social media bots, these techniques also shape our perception of political developments and even of scientific findings. However, there is growing concern about the quality of AI ratings and predictions. In particular, there is strong evidence that algorithms often do not eliminate bias and discrimination present in the data but rather reinforce them, with negative effects on social cohesion and democratic institutions.

In the "BIAS" research group, which is financed by the Volkswagen Foundation, philosophers, lawyers and computer scientists will jointly address the question of how standards of unbiased attitudes and non-discriminatory practice can be met in big data analyses and algorithm-based decision-making. The scientists will provide philosophical analyses of the relevant concepts and principles in the context of AI ("bias", "discrimination", "fairness"), investigate their adequate reception in pertinent legal frameworks (data protection, consumer protection, competition, anti-discrimination legislation) and develop concrete technical solutions (debiasing strategies, discrimination detection procedures etc.). Interdisciplinary synergies are achieved through close cooperation on shared issues and direct reference to approaches and results of the other disciplines. In addition, the research group will establish concrete means of close interaction, including regular meetings, joint workshops, an interdisciplinary conference, and joint publications.