AI techniques based on big data and algorithmic processing are increasingly used to guide decisions in important societal spheres, including hiring, university admissions, loan granting, and crime prediction. They are applied by search engines, Internet recommendation systems, and social media bots, influencing our perceptions of political developments and even of scientific findings. However, there are growing concerns about the epistemic and normative quality of AI evaluations and predictions. In particular, there is strong evidence that algorithms may sometimes amplify rather than eliminate existing bias and discrimination, and thereby have negative effects on social cohesion and on democratic institutions.

Scholarly reflection on these issues has begun but is still in its early stages, and much work remains to be done. In particular, we still lack a comprehensive understanding of how pertinent concepts of bias and discrimination should be interpreted in the context of AI, and of which technical options for combating bias and discrimination are both realistically possible and normatively justified.

The research group “BIAS” will examine these issues in an integrated, interdisciplinary project bringing together experts from philosophy, law, and computer science. Our shared research question is: How can standards of unbiased attitudes and non-discriminatory practices be met in big data analysis and algorithm-based decision-making? In approaching this question, we will provide philosophical analyses of the relevant concepts and principles in the context of AI (“bias”, “discrimination”, “fairness”), investigate their adequate reception in pertinent legal frameworks (data protection, consumer, competition, and anti-discrimination law), and develop concrete technical solutions (debiasing strategies, discrimination detection procedures, etc.).

Central to our project will be the interdisciplinary synergies created by intensive collaboration on shared issues and the direct uptake of the other disciplines’ approaches and results. To support this, we will establish concrete means of close interaction, including regular meetings, joint workshops, an interdisciplinary conference, and joint publications.
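To give a flavor of what a discrimination detection procedure can look like, the following minimal sketch (a hypothetical illustration, not taken from the project) computes the demographic parity gap: the difference in favorable-decision rates between protected groups. The function name and the toy data are our own illustrative assumptions.

```python
# Illustrative sketch of one simple fairness check (demographic parity).
# Not the project's method; names and data are hypothetical.

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates between groups.

    decisions: iterable of 0/1 outcomes (1 = favorable decision)
    groups:    iterable of group labels, aligned with decisions
    """
    counts = {}
    for d, g in zip(decisions, groups):
        total, positives = counts.get(g, (0, 0))
        counts[g] = (total + 1, positives + d)
    rates = {g: pos / tot for g, (tot, pos) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Example: hiring decisions for two applicant groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(decisions, groups))  # 0.5: a large disparity
```

A large gap does not by itself establish wrongful discrimination; deciding when such a statistical disparity amounts to bias or unlawful discrimination is exactly where the philosophical and legal analyses described above come in.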

Funded by: Volkswagen Foundation, under the call “Artificial Intelligence and the Society of the Future”

http://portal.volkswagenstiftung.de/search/projectDetails.do?ref=95037
