Deepfakes – legal and societal challenges as well as innovation potentials

  • Project team:

    Alma Kolleck

  • Thematic area:

    Digital society and economy

  • Topic initiative:

    Committee on Digital Affairs

  • Analytical approach:

    TA brief study

  • Start date:

    November 2022

  • End date:

    June 2023

Background and central aspects of the topic

After a fake video emerged in March 2022 in which Ukrainian President Volodymyr Zelenskyy calls on his soldiers to lay down their weapons, deepfake videos, i.e. manipulated film sequences, were once again the focus of media attention. Deepfakes are generally understood to be realistic-seeming media content, primarily manipulated or synthetic video, audio, or image material, created with the help of artificial intelligence (AI). The term deepfake combines deep learning, a machine learning technique based on artificial neural networks (a subfield of AI), and fake.

Public and scientific attention to deepfake videos, images, and soundtracks is high, in part because it is becoming increasingly easy, even for individuals, to create deepfakes, and because technological advances are significantly increasing the quality of the manipulation (and thus making detection more difficult). One challenge accompanying this development is the use of deepfakes in targeted disinformation campaigns, which can potentially do lasting damage to trust in institutions or individuals or negatively influence specific situations. Deepfakes can also be used in video- or image-based identification procedures to circumvent identification or authentication or to commit identity theft. In addition, manipulated video or audio sequences presented as evidence could pose new challenges for courts. Finally, deepfake videos may also become increasingly significant in violations of personal rights and in the public disparagement of individuals, for example when third parties' faces are edited into pornographic sequences. Overall, pornographic content is one of the most common known applications of deepfakes.

At the same time, synthetic audiovisual media also open up a range of innovative, non-harmful applications in the fields of art, satire, public relations and advertising. For example, dubbing films into different languages could be made easier by adapting lip movements accordingly and transferring the actors' original voices into any language. Deepfake avatars also open up new possibilities for accessing content in education and communication, for example by conveying historical knowledge through historical figures "revived" with the help of deepfakes.

Objectives and approach

The study is intended to provide a concise overview of the current state of technological development and of the social and legal approaches to dealing with deepfakes, organized around four focal points. As a starting point, it introduces the technological basics and examines to what extent the implications of deepfakes differ from those of other widespread forms of audiovisual editing and manipulation. In a second step, creative and innovative uses of deepfakes in the fields of entertainment, art, education, and law enforcement (e.g., when deepfake videos of missing persons are created to obtain new clues from the public) will be elaborated.

In the third step, the focus will be on the social and legal challenges in dealing with deepfakes. For this purpose, a legal expert opinion will be commissioned summarizing the current legal situation regarding deepfakes in Germany, possible regulatory gaps, and potential new laws or amendments to existing laws to close these gaps. TAB will supplement the report with a literature study on social aspects, such as how media users, journalists, judges and public prosecutors can be sensitized to the dangers of deepfakes so that they can recognize manipulation.

The last step will be to work out which technical possibilities currently exist for efficiently detecting deepfakes and what technical progress can be expected in this area: will deepfakes become increasingly difficult to detect, or will detection instead improve, for example through appropriate technical detection software?