
Office of Technology Assessment at the German Bundestag

Information on the project

Discrimination as a possible consequence of algorithmic decision-making systems and machine learning

Thematic area: Technology, society, innovation
Analytical approach: TA project
Topic initiative: Committee on Education, Research and Technology Assessment
Status: completed
Duration: 2019 to 2020

Subject and objective of the project

If, of two people who meet the same creditworthiness criteria, one is granted the desired personal loan while the other is refused because of their sex or mother tongue, there is reason to suspect a case of discrimination. According to social-scientific definitions, discrimination is a social practice that limits access to certain material and immaterial goods on the basis of (supposed) group memberships. In this context, the deviation from an assumed normal case serves as a distinguishing feature and thus as grounds for discrimination. What constitutes discrimination and where it begins is the product of constant social negotiation processes, as can be seen, for example, from the debates on »same-sex marriage«.

Many decisions based on digitised data are either prepared or made entirely by algorithmic decision-making systems (ADM), e.g. for granting loans, selecting suitable applicants for a job or calculating an individual risk profile (for example with regard to a specific disease, a payment default or even the likelihood of committing a crime). ADM systems are programmed procedures that calculate an output from a given input in a series of well-defined steps, potentially carrying out data collection and analysis as well as interpretation and assessment of the results, and finally deriving a decision (or recommendation) from them. A distinction is frequently made between algorithms and ADM systems according to whether they operate on the basis of fixed rules or are able to learn, i.e. to derive their own functional and analytical rules from training data. Learning algorithms form part of machine learning (ML) systems, which are also referred to by the term »artificial intelligence« (AI).
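To make this distinction concrete, the following minimal Python sketch contrasts a rule-based decision procedure, whose steps are fixed in advance by a human, with a learning one that derives its decision rule from training data. All features, thresholds and training examples are hypothetical and chosen purely for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Rule-based ADM: the decision logic is written down explicitly by a human.
def rule_based_credit_decision(income_k, debt_k):
    # Income and debt in thousands of euros; thresholds are hypothetical.
    return income_k > 30 and debt_k < 10

# Learning ADM: the decision rule is derived from (possibly biased) training data.
X_train = np.array([[45, 5], [20, 15], [60, 2], [25, 12]])
y_train = np.array([1, 0, 1, 0])  # 1 = loan repaid, 0 = payment default
model = LogisticRegression().fit(X_train, y_train)

applicant = [40, 8]
print(rule_based_credit_decision(*applicant))  # decision under the fixed rule
print(model.predict([applicant])[0])           # decision derived from data

If the training data already reflect discriminatory past decisions, the learned rule can reproduce them even though no such rule was ever written down explicitly.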

Unequal treatment plays a role in many areas of society and as such is often widely accepted (for example, age limits for participating in political elections or for obtaining a driving licence). Consequently, not every instance of unequal treatment is unjustified, i.e. discriminatory. Often, however, unequal treatment is difficult for those affected to recognise, since the basis for a decision is not necessarily disclosed (and does not necessarily have to be). If more and more decision-making processes are automated, the question arises as to what consequences this will have for social discrimination. Will existing social risks of discrimination be perpetuated algorithmically and possibly even amplified? Or do ADM systems instead rise above human prejudice and thus evaluate more objectively?

Key results

It seems premature to make a general judgement on whether ADM systems lead to more, less or new types of social discrimination. The report examines four case studies from the fields of employment services, medical care, the penitentiary system and automated person recognition, and shows that unequal treatment by ADM systems is often a continuation of well-known »pre-digital« unequal treatment. One of the case studies, which is also among the best-known cases of algorithmically supported unequal treatment, describes how convicted black Americans were supposedly more likely than white Americans to be classified as at risk of recidivism and, as a consequence, to be released only on comparatively high bail or not at all. At the same time, the case studies demonstrate that the question of whether or not a specific unequal treatment is discriminatory is often highly controversial both within society and within the legal system.
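One common way such unequal treatment is made visible, sketched below in Python, is to compare a risk classifier's error rates across groups, e.g. how often people who did not in fact reoffend are nevertheless labelled high risk. The arrays are purely illustrative and do not reproduce the actual case study data.

import numpy as np

group      = np.array(["A", "A", "A", "B", "B", "B", "A", "B"])
reoffended = np.array([0, 0, 1, 0, 0, 1, 0, 0])   # observed outcome
high_risk  = np.array([1, 0, 1, 0, 0, 1, 1, 0])   # classifier's label

for g in ("A", "B"):
    # Among people in group g who did not reoffend ...
    mask = (group == g) & (reoffended == 0)
    # ... what share was wrongly labelled high risk?
    fpr = high_risk[mask].mean()
    print(f"group {g}: false positive rate = {fpr:.2f}")

A markedly higher false positive rate for one group, as in this toy example, is the kind of disparity reported in the case study; whether it constitutes discrimination in the legal sense is precisely the contested question.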

Precisely because discrimination by ADM systems is often difficult to detect, a number of proposals aim to minimise such risks ex ante. The discussion focuses on creating transparency, on monitoring and evaluating ADM systems, and on uniform regulation. For example, mandatory labelling can help to make the use of ADM systems transparent for those affected. Moreover, a risk-adapted assessment of ADM systems can help to gauge social consequences in advance and to establish control measures graded by the criticality of the application. These measures, however, represent only some of the approaches currently under discussion. The aim is to develop a societal and legal approach to algorithmic decision-making systems that provides scope for innovation and development while at the same time protecting citizens against a lack of transparency and against discrimination in accessing socially available goods.
