
Office of Technology Assessment at the German Bundestag

Information on the project

Discrimination as possible consequence of algorithmic decision-making systems and machine learning

Thematic area: Technology, society, innovation
Analytical approach: TA project
Topic initiative: Committee on Education, Research and Technology Assessment
Status: ongoing
Current project phase: Evaluation of available studies
Duration: 2019

Background and central aspects of the topic

For a long time now, a wide variety of decisions affecting people's lives and opportunities have been made by applying rules and taking certain characteristics into account. Medical experts and health care payers consider patients' risks of disease and relapse when deciding on treatments or rehabilitation. Insurance companies classify their customers according to the probability of loss and calculate corresponding rates, and banks make credit decisions based on assessments of creditworthiness, e.g. by SCHUFA Holding AG.

Since these decisions are increasingly data-based and automated by means of algorithms/software (algorithmic decision-making [ADM] systems), they are attracting growing public attention, and science, politics and other stakeholders are addressing the associated opportunities and risks. ADM systems offer the potential to make decisions faster, more efficiently and more objectively, and to reduce or even avoid the shortcomings of human decisions. At the same time, direct risks of automated decisions include wrong decisions and their consequences, as well as questions of responsibility and liability that remain unresolved. So far, there are no specific requirements to disclose either the (training) data used to design decision-making rules and models or the algorithms that apply these rules to new cases. As a result, decision-making processes are often hard to understand from the outside. This lack of transparency provides a breeding ground for distrust and rejection. The increasing use of machine learning methods will aggravate the situation further and take it to a new level.

For some time now, it has been pointed out that – beyond the risk of wrong decisions – ADM systems can lead to individual persons or groups being systematically discriminated against, and that, due to the lack of transparency, such discrimination is difficult to identify and prove. The accusation of possible discrimination weighs heavily because it touches on individuals' fundamental rights. In Germany, discrimination is prohibited not only for public authorities but also in the private sector (as laid down in the General Act on Equal Treatment [»AGG«]). Individual cases of ADM systems that have encouraged discrimination have already become public, although it is not always clear to what extent people have actually been harmed.

The social relevance of the issue is becoming increasingly apparent. Several scientific studies have recently been initiated by political bodies, including the German Federal Anti-Discrimination Agency, the Advisory Council for Consumer Affairs (»SVRV«) and the German Federal Ministry of Education and Research (»BMBF«), the latter within the framework of the project »Assessing Big Data (ABIDA)«.

Moreover, many other institutions are dealing with the ethics of algorithms, among them the German Ethics Council, the Bertelsmann Stiftung, the association AlgorithmWatch and, in the context of specific applications, the Hans-Bredow-Institut for Media Research and the Chair of Public Administration, Public Law, Administrative Law and European Law at the German University of Administrative Sciences Speyer. Discrimination risks due to algorithmic decision-making systems are also addressed by the Study Commission on Artificial Intelligence (AI) set up by the German Bundestag in 2018 and by the Data Ethics Commission of the German Federal Government.

Objectives and approach

The TAB considers the discrimination risks resulting from ADM systems to be not only a technical challenge, but also a legal, social and political one. In particular, the following central questions are to be addressed:

  • In which areas and for which decisions are algorithmic decision-making systems already in use where the current legal situation (in particular the »AGG«) requires them to be non-discriminatory? In which areas are ADM systems and machine learning likely to be applied in the future? What advantages are expected from their application?
  • What are the risks of discrimination arising from the use of ADM systems? Which cases of discrimination are already known that have been caused by the use of algorithmic decision-making systems?
  • According to which criteria and procedures should the use of algorithmic decision-making systems by public authorities be permitted? Which applications by public authorities already exist, and which are conceivable?
  • Which technical possibilities exist to identify discrimination caused by algorithmic decision-making systems and machine learning, either in advance or retrospectively?
  • Which methods (e.g. »counterfactual explanations«; see the illustrative sketch after this list) are discussed in the literature to make complex procedures comprehensible? How can trade secrets be protected?
  • Which legal possibilities and suggestions exist for limiting discrimination risks that result from the use of algorithms, e. g. within the framework of competition law or due to control mechanisms by public authorities? What experience has been gained in other countries in this respect?
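
One of the methods named above, »counterfactual explanations«, asks what minimal change to a person's data would have flipped an automated decision – for instance, "the loan would have been granted if the income had been X". The following Python sketch illustrates the idea on a toy credit model; the data, the feature names and the simple greedy search are hypothetical assumptions for demonstration purposes and are not part of the TAB project or any study evaluated by it.

    # Minimal sketch of a counterfactual explanation for a toy credit model.
    # All data and feature names are hypothetical illustrations.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical applicants: column 0 = annual income (kEUR),
    # column 1 = number of open credit lines.
    X = rng.normal(loc=[40.0, 3.0], scale=[15.0, 2.0], size=(500, 2))
    y = (X[:, 0] - 5.0 * X[:, 1] + rng.normal(0.0, 5.0, 500) > 20.0).astype(int)

    model = LogisticRegression().fit(X, y)

    def counterfactual(x, model, target=1, step=0.05, max_iter=5000):
        """Greedily nudge one feature per iteration towards the target class,
        keeping the single small change that most increases its probability."""
        x_cf = x.copy()
        for _ in range(max_iter):
            if model.predict(x_cf.reshape(1, -1))[0] == target:
                return x_cf  # decision flipped: x_cf is the counterfactual
            best = None
            best_p = model.predict_proba(x_cf.reshape(1, -1))[0, target]
            for i in range(len(x_cf)):
                for delta in (step, -step):
                    cand = x_cf.copy()
                    cand[i] += delta
                    p = model.predict_proba(cand.reshape(1, -1))[0, target]
                    if p > best_p:
                        best, best_p = cand, p
            if best is None:
                break  # no small step improves the target probability
            x_cf = best
        return x_cf

    x = np.array([25.0, 6.0])  # an applicant rejected by the toy model
    x_cf = counterfactual(x, model)
    print("original:      ", x, "-> decision",
          model.predict(x.reshape(1, -1))[0])
    print("counterfactual:", np.round(x_cf, 2), "-> decision",
          model.predict(x_cf.reshape(1, -1))[0])

Comparing the original input with the counterfactual shows which feature the model's decision hinged on – here, typically the number of open credit lines. If such a pivotal feature correlates with a protected characteristic, this kind of analysis can surface a potential discrimination risk without disclosing the model internals, which is why the method is discussed as a way to reconcile comprehensibility with the protection of trade secrets.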

The debate is still in its early stages. In order to provide a factually sound overview, the TAB will evaluate the large number of available and forthcoming study results on the basis of these central questions and summarise them in the form of a synopsis. In addition, interviews with experts will be conducted to clarify possible contradictions or ambiguities.

Project progress

Evaluation of available studies.
