Just Machine Learning

Date: 2019-01-08

Prof. Dr. Tina Eliassi-Rad, Northeastern University

Fairness in machine learning is an important and popular topic these days. "Fair" machine learning approaches are supposed to produce decisions that are probabilistically independent of sensitive features (such as gender and race) or their proxies (such as zip codes). Examples of such probabilistic fairness measures include precision parity, true positive parity, and false positive parity across predefined groups in the population (e.g., whites vs. non-whites). Most of the literature in this area frames the machine learning problem as estimating a risk score; for example, Jack's risk of defaulting on a loan is 8, while Jill's is 2. Recent papers by Kleinberg, Mullainathan, and Raghavan (arXiv:1609.05807v2, 2016) and by Chouldechova (arXiv:1703.00056v1, 2017) present an impossibility result: when base rates differ across groups, no risk score can simultaneously satisfy three desirable fairness properties. I take a broader notion of fairness and ask the following two questions: Is there such a thing as just machine learning? If so, is just machine learning possible in our unjust world? I will describe a different way of framing the problem and present some preliminary results.
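The following minimal sketch (not part of the talk) illustrates the three parity measures named above, computed per group from a confusion matrix; the toy data, variable names, and helper function are hypothetical.

import numpy as np

def group_rates(y_true, y_pred, group):
    """Return (precision, TPR, FPR) over the rows where `group` is True."""
    t, p = y_true[group], y_pred[group]
    tp = np.sum((p == 1) & (t == 1))
    fp = np.sum((p == 1) & (t == 0))
    fn = np.sum((p == 0) & (t == 1))
    tn = np.sum((p == 0) & (t == 0))
    precision = tp / (tp + fp) if tp + fp else float("nan")
    tpr = tp / (tp + fn) if tp + fn else float("nan")  # true positive rate
    fpr = fp / (fp + tn) if fp + tn else float("nan")  # false positive rate
    return precision, tpr, fpr

# Hypothetical labels, predictions, and a binary sensitive attribute.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])

for g in (0, 1):
    prec, tpr, fpr = group_rates(y_true, y_pred, sensitive == g)
    print(f"group {g}: precision={prec:.2f} TPR={tpr:.2f} FPR={fpr:.2f}")

# Precision parity, true positive parity, and false positive parity each ask
# that the corresponding rate be (approximately) equal across groups; the
# impossibility results cited above show that all three cannot hold at once
# when the groups' base rates differ.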

5 July 2019, 14:00
Multimedia Room, 15th Floor, L3S
