
Discrimination as a possible consequence of algorithmic decision-making systems and machine learning


Background and central aspects of the topic

If, of two people with identical creditworthiness characteristics, only one receives the desired personal loan while the other is refused because of their sex or mother tongue, there is a suspicion of discrimination. Social science definitions understand discrimination as a social practice that restricts access to certain material and immaterial goods on the basis of (supposed) group membership. Deviation from an assumed normal case serves as the distinguishing feature and thus as the grounds for discrimination. What constitutes discrimination and where it begins is the product of ongoing processes of social negotiation, as illustrated by the debates about "marriage for all".

Many decisions based on digitised data are either prepared or made entirely by algorithmic decision-making systems, for example when granting loans, selecting suitable applicants for a job or calculating an individual risk profile (e.g. for a certain illness, a payment default or even the commission of a crime). Algorithmic decision systems (AES) are programmed procedures that calculate an output from a given input in precisely defined sequences of steps, potentially covering data collection and analysis, the interpretation of the results and finally the derivation of a decision (or decision recommendation) from them. A common distinction classifies algorithms and AES as either rule-based or learning, the latter deriving their own functional and analysis rules from training data. Learning algorithms form part of machine learning (ML) systems, which are also described by the term artificial intelligence (AI).
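To make the rule-based/learning distinction concrete, the following minimal sketch contrasts the two kinds of AES in a credit-granting setting. All feature names, thresholds and training cases are hypothetical illustrations and are not taken from the report:

```python
# Minimal sketch contrasting a rule-based with a learning AES.
# All feature names, thresholds and training data are hypothetical.

from sklearn.linear_model import LogisticRegression

def rule_based_credit_decision(income_k: float, debt_k: float) -> bool:
    """Rule-based AES: the decision logic is fixed by the programmer."""
    return income_k - debt_k > 20  # hypothetical threshold, in k EUR

# Learning AES: the decision rule is derived from training data.
# Four past cases (income and debt in k EUR) stand in for a real
# training set; the model generalises a decision boundary from them.
X_train = [[60, 5], [25, 20], [80, 30], [30, 25]]
y_train = [1, 0, 1, 0]  # 1 = loan repaid, 0 = default

model = LogisticRegression().fit(X_train, y_train)

def learned_credit_decision(income_k: float, debt_k: float) -> bool:
    return bool(model.predict([[income_k, debt_k]])[0])

# Both systems map the same input to an output, but only the second
# infers its rule from (potentially biased) historical data.
print(rule_based_credit_decision(50, 10))
print(learned_credit_decision(50, 10))
```

The sketch shows why learning AES raise distinct discrimination questions: the decision boundary is not written down anywhere but emerges from whatever patterns, including discriminatory ones, are present in the training data.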

Unequal treatment plays a role in many areas of society and is often widely accepted as such (for example, age limits for participating in political elections or obtaining a driving licence); not every unequal treatment is therefore necessarily unjustified, i.e. discriminatory. Often, however, unequal treatment is difficult for those affected to recognise, as the basis for the decisions taken is not (necessarily) disclosed. If more and more decision-making processes are automated, the question arises as to the consequences for social discrimination. Are existing social risks of discrimination algorithmically perpetuated and possibly even amplified? Or do AES stand above human prejudices and thus evaluate more objectively?

Key results

It seems premature to pass a general judgement on whether AES lead to more, less or novel social discrimination. The report examines four case studies in the fields of employment services, medical care, the prison system and automated person recognition, and makes clear that unequal treatment by AES often continues well-known "pre-digital" unequal treatment. One of the case studies, which also represents one of the best-known cases of algorithmically supported unequal treatment, describes the supposedly higher probability of convicted Black Americans, compared with white Americans, being classified as at risk of recidivism and, as a consequence, being released only against a comparatively high bail or not being released on parole at all. At the same time, the case studies demonstrate that whether or not a concrete unequal treatment is discriminatory is often highly controversial both within society and within the courts.
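The controversy around such risk scores can be made tangible with a simple group-level error analysis. The sketch below uses entirely synthetic records and group labels, purely for illustration and not the report's or any real system's data; it computes the false-positive rate per group, i.e. the share of people wrongly classified as high-risk, whose divergence between groups was at the heart of the recidivism debate:

```python
# Illustrative sketch (synthetic data): comparing false-positive
# rates of a recidivism classifier across two groups.

from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", False, False), ("B", True, True), ("B", False, False), ("B", False, True),
]

fp = defaultdict(int)   # predicted high-risk but did not reoffend
neg = defaultdict(int)  # all who did not reoffend

for group, predicted, reoffended in records:
    if not reoffended:
        neg[group] += 1
        if predicted:
            fp[group] += 1

for group in sorted(neg):
    print(f"group {group}: false-positive rate = {fp[group] / neg[group]:.2f}")
```

In this toy data, group A is wrongly labelled high-risk far more often than group B even though the classifier may look accurate overall, which illustrates why the discrimination question cannot be settled by aggregate accuracy alone.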

Precisely because discrimination by AES is often difficult to detect, a number of proposals aim to minimise the discrimination risks of algorithmic decision systems ex ante. The creation of transparency, the monitoring and evaluation of AES, and uniform regulation are at the centre of the discussion. For example, a labelling requirement can help to make the use of AES transparent for those affected, and a risk-adapted evaluation of AES can help to estimate social consequences in advance and to establish control measures graded by the criticality of the application. These measures represent only a fraction of the approaches currently under discussion, all of which aim to develop a societal and legal approach to algorithmic decision-making systems that offers scope for innovation and development while protecting citizens from opacity and from discrimination in access to socially available goods.

Publications