Faceless man in a suit holds a mask in front of a film camera. (Image: Mohamed Hassan/Pixabay)

Deepfakes – legal and societal challenges and innovation potential

In a nutshell

Deepfakes are media content that has been generated or manipulated using AI systems, creating a false impression of authenticity. Since their emergence in 2017, the technologies used to create them, and particularly to distribute them, have developed dramatically. This has given rise to a number of societal challenges, ranging from sexual violence in the form of non-consensual, sexualised deepfakes to financial fraud and attempts to influence politics. At the same time, the technology also offers great potential in areas such as education, art, advertising and entertainment. The TA-Kompakt report provides a concise and up-to-date overview of deepfake technologies, their applications and the challenges they pose. It places particular emphasis on the issue of existing and potential future legal regulation.

Nine questions – nine answers


AI-generated content in the form of text, images, audio and/or video is referred to as synthetic media. Such content is called a deepfake when it falsely gives the impression that certain events have taken place, that certain statements have been made, or that certain actions have been carried out. Forms of disinformation or image manipulation that do not involve artificial intelligence (AI) are sometimes referred to as cheapfakes.


Four types of AI models are primarily used to create deepfakes: autoregressive models, autoencoders, Generative Adversarial Networks (GANs) and diffusion models. These differ in the amount of training data they require, the achievable output quality, the computing power needed and their degree of specialisation for specific tasks.
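Of these model families, GANs illustrate the adversarial principle most directly: a generator tries to produce samples that a discriminator can no longer distinguish from real data, and both improve against each other. The following is a minimal, purely illustrative sketch on one-dimensional toy data; all parameters, learning rates and the toy distribution are arbitrary assumptions, not taken from the report:

```python
import math
import random

def sigmoid(x):
    # Clip the argument so math.exp never overflows.
    return 1.0 / (1.0 + math.exp(-max(min(x, 30.0), -30.0)))

random.seed(0)

# "Real" data: a 1-D Gaussian the generator must learn to imitate.
REAL_MEAN, REAL_STD = 4.0, 0.5

# Generator G(z) = mu + s*z; discriminator D(x) = sigmoid(w*x + b).
mu, s = 0.0, 1.0
w, b = 0.0, 0.0
lr = 0.05

for step in range(2000):
    x_real = random.gauss(REAL_MEAN, REAL_STD)
    z = random.gauss(0.0, 1.0)
    x_fake = mu + s * z

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w -= lr * (-(1 - d_real) * x_real + d_fake * x_fake)
    b -= lr * (-(1 - d_real) + d_fake)

    # Generator step (non-saturating loss): move fakes toward the
    # region the discriminator currently labels as "real".
    d_fake = sigmoid(w * x_fake + b)
    grad_x = -(1 - d_fake) * w   # d(-log D(x_fake)) / d(x_fake)
    mu -= lr * grad_x
    s -= lr * grad_x * z

print(f"generator mean after training: {mu:.2f} (real mean {REAL_MEAN})")
```

With this alternating scheme the generator's mean drifts toward the real data's mean without ever seeing it directly; it only receives gradient feedback through the discriminator. Production deepfake models apply the same principle to high-dimensional image or audio data with deep networks instead of two scalar parameters.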

The development of these technologies is characterised by increasingly better results being achieved with less and less effort.

Even today, deepfakes can be created and published with modest software and hardware requirements. It is assumed that deepfakes will spread further in the future, to the extent that the authenticity of online content can no longer be readily assumed, even in live situations.

There are no precise data on the extent of deepfake technology use, and the few existing studies are of questionable reliability; overall, empirical research on the subject is lacking.

Deepfake technologies have the potential to be used for innovative applications, particularly in the fields of art and entertainment, education and science, and advertising and marketing. Examples include the lip-synchronous translation of films, educational use in museums and satirical representations.

However, these beneficial applications also pose indirect risks, such as an increasing disregard for human labour or a blurring of the boundaries between reality and fiction, as has been described for generative AI applications in general.

Deepfakes can also be used for law enforcement purposes, such as helping to design wanted posters or enabling access to criminal networks.

Deepfakes can be deliberately produced and distributed to influence public debate or the political process. Even the suspicion of a fake can be enough to call into question the credibility of news and trust in a shared understanding of reality.

Deepfakes are also used against specific individuals, either to exploit their reputation for fraudulent purposes or to destroy it outright. They can facilitate identity theft and circumvent identification procedures. Pornographic deepfakes are used as a form of sexual violence, particularly against women, and at a societal level can have the effect of discouraging women from taking part in the public sphere.

Deepfakes can be used in a variety of ways to harm individuals, companies and institutions. Currently, there is no explicit regulation of deepfakes in German law. At a civil law level, the protection of one’s own image, personality rights, copyright and data protection rules can be invoked. Furthermore, various criminal law provisions may apply. However, there are legal loopholes, particularly with regard to sexualised deepfakes of adults, as well as with regard to protection against discrimination and the use of deepfakes by law enforcement agencies.

Enforcing the law in relation to deepfakes poses a particular problem. Deepfakes are often produced and published abroad, and the lack of transparency makes it difficult to gather evidence. Furthermore, those affected report that the authorities are often reluctant to pursue criminal prosecutions.

In Germany, a bill to amend the Criminal Code was introduced by the Bundesrat in 2024. Given that deepfake technologies can be applied both beneficially and harmfully, and given the sensitivity of state intervention in public discourse, careful consideration of regulatory options appears essential. Looking ahead, deepfakes also offer opportunities to the legal system, for example in criminal prosecution or the protection of witnesses in court proceedings. However, the ability to forge evidence with simple means and in convincing quality poses a challenge.


Some countries have already enacted laws to specifically address certain effects of deepfakes, particularly in the area of non-consensual pornography. In the USA, a nationwide regulation on pornographic deepfakes has been in place since May 2025, and political deepfakes are regulated in several states. In China, deepfakes, like synthetic media content in general, are covered by a separate law, whose key points are a comprehensive ban on misuse together with real-name registration and labelling obligations; it contains no exceptions for freedom of expression or artistic freedom. At the European level, the AI Act imposes transparency obligations on developers of deepfake technologies, while the Digital Services Act regulates the distribution of content, particularly via very large online platforms.

Several German research projects are receiving funding to develop detectors that can automatically recognise deepfakes. However, these detectors are often specialised in specific types of deepfakes. A fundamental problem is that detectors can also be used to improve deepfakes, continually raising the bar for detection.

Alternative approaches focus on labelling and verifying authentic content and tracing the origin of the data, e.g. through digital watermarks or cryptographic signatures. However, these can also be manipulated or circumvented by malicious actors.
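The signature-based approach to provenance can be illustrated with a short sketch. For simplicity it uses an HMAC, a symmetric authentication code from Python's standard library, in place of the public-key signatures that real provenance schemes (such as the C2PA standard) employ; the key, function names and byte strings below are illustrative assumptions:

```python
import hashlib
import hmac

# Illustrative secret key. Real provenance schemes use public-key
# signatures so that anyone can verify without holding a secret.
SIGNING_KEY = b"camera-firmware-secret"

def attach_provenance(media_bytes: bytes) -> dict:
    """Package media with an authentication tag at capture time."""
    tag = hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()
    return {"media": media_bytes, "tag": tag}

def verify_provenance(package: dict) -> bool:
    """Recompute the tag; any edit to the media invalidates it."""
    expected = hmac.new(SIGNING_KEY, package["media"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, package["tag"])

original = attach_provenance(b"...raw image bytes...")
tampered = dict(original, media=b"...manipulated bytes...")

print(verify_provenance(original))   # True: content unchanged since signing
print(verify_provenance(tampered))   # False: tag no longer matches the media
```

As the report notes, such schemes only shift trust to the signing key and the capture toolchain; they prove that content is unchanged since signing, not that it was authentic to begin with, and they can themselves be manipulated or circumvented.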


In addition to developing technical measures to authenticate content and closing regulatory gaps, it is proposed to raise awareness, vigilance and media literacy through education and transparent reporting on the potential dangers of deepfakes. Voluntary commitments, for example by political parties, can help protect public debate. Research on the spread of deepfakes, on their impact on individuals and society, and on possible responses by state and civil-society institutions remains underdeveloped.

Methodological approach

This study is based on a literature review of the current state of technological development, examples of applications, and the effects of deepfakes. To present and discuss the legal situation in a well-founded manner, an expert legal opinion was commissioned, and a further legal assessment and update was provided in the form of a commentary on the draft report by an external expert.

Download

Cover: TA-Kompakt Nr. 3: Rechtliche und gesellschaftliche Herausforderungen von Deepfakes

TA-Kompakt Nr. 3 (only in German)

Rechtliche und gesellschaftliche Herausforderungen und Potenziale von Deepfakes (PDF)

The TAB report provides a concise overview of deepfake technologies, their social impact and legal challenges, and outlines specific courses of action for politics, businesses and civil society.

doi:10.5445/IR/1000190774

 

The TA-Kompakt series provides concise information on current and controversial topics to meet the needs of the German Bundestag. 

In the Bundestag

The final report was presented to the Committee on Research, Technology, Space and Technology Assessment on December 17, 2025 and was approved by the committee members. 

Process: report on the parliamentary server (DIP)
Technology assessment (TA): Legal and social challenges and potentials of deepfakes

In the media

  • netzpolitik.org (27.02.2026), Bericht für den Bundestag: So gefährlich sind Deepfakes. 

Previous publication on the topic

Cover: Themenkurzprofil Nr. 25: Deepfakes – Manipulation von Filmsequenzen

Themenkurzprofil Nr. 25 (only in German)

 

 

Deepfakes – Manipulation von Filmsequenzen
2019. Büro für Technikfolgen-Abschätzung beim Deutschen Bundestag (TAB). doi:10.5445/IR/1000133910