Discrimination by algorithms and what to do about it

A TAB report explores whether algorithmic decision-making systems make (un)fairer decisions than humans and what can be done to prevent algorithm-based discrimination
HP 24: Mögliche Diskriminierung durch algorithmische Entscheidungssysteme und maschinelles Lernen
Background paper: Possible Discrimination through Algorithmic Decision Systems and Machine Learning

Algorithmic decision-making (ADM) systems, i.e. programmed processes that calculate an output from a given input in a series of precisely defined steps and derive a (semi-)automated recommendation for a decision, are something we encounter frequently in everyday life: they determine the best route for a planned trip, a suitable partner on a dating platform or one's credit rating. In doing so, they set a course that can be more or less significant and may thus shape life chances - often without the person concerned being aware of it.
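The basic structure of such a system can be sketched in a few lines. The following toy example is purely illustrative and not taken from the report; the feature names, weights and thresholds are invented. It shows how a fixed set of inputs is turned into a score and a decision recommendation through precisely defined steps:

```python
# Illustrative toy ADM system: a hypothetical credit-scoring rule.
# All feature names, weights and thresholds are invented for this sketch.

def credit_recommendation(income: float, years_employed: int, open_defaults: int) -> dict:
    """Map a fixed set of inputs to a score and a decision recommendation
    via precisely defined, deterministic steps."""
    score = 0.0
    score += min(income / 1000.0, 50.0)      # capped contribution of income
    score += 5.0 * min(years_employed, 10)   # capped contribution of job tenure
    score -= 40.0 * open_defaults            # penalty for recorded defaults

    recommendation = "grant credit" if score >= 60.0 else "refer to manual review"
    return {"score": round(score, 1), "recommendation": recommendation}


if __name__ == "__main__":
    print(credit_recommendation(income=42_000, years_employed=6, open_defaults=0))
    print(credit_recommendation(income=18_000, years_employed=1, open_defaults=2))
```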

Because they operate on numbers and follow fixed rules, ADM systems might initially be assumed to be more objective decision-makers. However, some high-profile cases of biased machine decisions - for example, when an online mail-order company searched for new employees and its learning ADM system suggested almost exclusively men - cast doubt on the objectivity of algorithmic decision recommendations and raise the question of whether ADM systems make (un)fairer decisions than humans: do discrimination risks change through the use of ADM systems?
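How a learning ADM system can pick up such a bias can be illustrated with a minimal sketch (again invented, not the system from the reported case): if a model is fitted to historical hiring decisions that already favoured one group, it tends to reproduce that pattern when scoring new applicants.

```python
# Minimal sketch of bias propagation: a "model" that simply learns the
# historical hiring rate per group and uses it to score new applicants.
# The data and group labels are invented for illustration.
from collections import defaultdict

# Historical decisions (group, hired) reflecting past unequal treatment.
history = [("m", 1)] * 80 + [("m", 0)] * 20 + [("f", 1)] * 30 + [("f", 0)] * 70

# "Training": estimate the hiring rate per group from the historical data.
counts, hires = defaultdict(int), defaultdict(int)
for group, hired in history:
    counts[group] += 1
    hires[group] += hired
learned_rate = {g: hires[g] / counts[g] for g in counts}

# "Prediction": new applicants are scored with the learned group rates,
# so the historical skew is carried over into new recommendations.
for group in ("m", "f"):
    print(f"predicted suitability for group {group}: {learned_rate[group]:.2f}")
```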

TAB explores this question in the study published as Background Paper no. 24 (available only in German). Drawing on four case studies from the areas of job placement, medical care, the penal system and automated person recognition, it shows that unequal treatment by ADM systems is often a continuation of »pre-digital« unequal treatment. At the same time, it illustrates that whether a concrete instance of unequal treatment is discriminatory (or not) is often highly contested both within society and in case law. Finally, the background paper presents various courses of action to prevent algorithm-based discrimination.

24 November 2020
