Social Bots
- Project team:
Sonja Kind
- Topic initiative:
Committee on Education, Research and Technology Assessment
- Analytical approach:
TA project
- Start date:
2016
- End date:
2017
Subject and objective of the project
Social bots are computer programmes developed to automatically generate messages such as comments, answers or statements in social networks, e.g. Facebook or Twitter, in order to influence or manipulate discourses. They are able to generate meaningful texts that resemble content written by humans. Their human-like appearance is reinforced by the fact that social bots do not act and comment on politics alone, but also post trivial information such as comments on football scores or notes on the content of TV series. For humans, it is rarely obvious that a message has been created by a machine.
Fake accounts of social bots, i.e. user profiles that do not belong to an authentic person, can easily be multiplied, so that thousands of accounts can be created on a platform such as Twitter, together generating tens of thousands of tweets a day. It is assumed, and in part proven, that social bots are being used deliberately by states, companies and stakeholder groups.
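The basic mechanism can be illustrated with a minimal, purely hypothetical sketch: a handful of text templates filled with random details is already enough to mass-produce superficially human-looking posts. All templates, team names and function names below are invented for this illustration and are not taken from the study.

```python
import random

# Invented templates imitating trivial, human-looking chatter
# (e.g. comments on football scores, as mentioned in the study).
TEMPLATES = [
    "What a match! {team} played brilliantly tonight.",
    "Can't believe the score: {team} won {a}:{b}.",
    "Anyone else watching {team}? Great game so far.",
]
TEAMS = ["FC Example", "SC Sample", "United Demo"]

def generate_post(rng: random.Random) -> str:
    """Fill a randomly chosen template with random details.

    str.format ignores unused keyword arguments, so every template
    can be filled with the same set of placeholders.
    """
    template = rng.choice(TEMPLATES)
    return template.format(team=rng.choice(TEAMS),
                           a=rng.randint(0, 5), b=rng.randint(0, 5))

rng = random.Random(42)  # fixed seed for reproducibility
posts = [generate_post(rng) for _ in range(5)]
for post in posts:
    print(post)
```

Scaled up across thousands of fake accounts, such trivial generation loops are what make the message volumes described above cheap to produce.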
The objective of the brief study was to provide an overview of the current state of the art of social bots (capabilities of the applied algorithms), their fields of application and users, their propagation, as well as actual and presumed risks. Moreover, the current state of knowledge regarding the actual extent of social bot use and its impact was to be presented. The study focused on the potential dangers of social bots resulting from a possible manipulation of political discussions and tendencies in social networks, or from the influence of social bots on people's buying behaviour.
The brief study was oriented towards the following three questions:
- What is technically feasible and how can the influence of social bots be proven?
- What are potential future applications of social bots?
- How can social bots be identified and prevented?
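On the identification question, a common approach in the literature is to score accounts on behavioural features such as posting rate, account age and follower ratios. The rule-based sketch below is purely illustrative; the thresholds and weights are invented for this example, and real detectors rely on far richer features and machine learning rather than fixed rules.

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float
    account_age_days: int
    followers: int
    following: int

def bot_suspicion_score(acc: Account) -> float:
    """Return a heuristic score in [0, 1]; higher means more bot-like.

    Thresholds are invented for illustration only.
    """
    score = 0.0
    if acc.posts_per_day > 50:          # inhumanly high posting rate
        score += 0.4
    if acc.account_age_days < 30:       # very young account
        score += 0.3
    if acc.following > 10 * max(acc.followers, 1):  # follows far more than it is followed
        score += 0.3
    return score

suspicious = Account(posts_per_day=200, account_age_days=7,
                     followers=3, following=900)
normal = Account(posts_per_day=2, account_age_days=1200,
                 followers=150, following=180)
print(bot_suspicion_score(suspicious))
print(bot_suspicion_score(normal))
```

Even such simple heuristics hint at the arms race involved: once a rule is known, bot operators can tune their accounts to stay below the thresholds.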
Key results
Experts predominantly consider the potential of social bots to affect political processes to be high. Social bots can be used to disseminate news on the Internet in order to manipulate tendencies or to influence political debates and discourses. A particular danger arises if social bots disseminate masses of fake news in crisis situations, e.g. after attacks. Social bots can thus contribute to changing the culture of political debate on the Internet, spreading disinformation and poisoning the climate of public discourse.
Economic processes are another sphere of influence of social bots. Social bots bear the risk of influencing the consumer and purchasing behaviour of individuals (via so-called influencer marketing) and of manipulating even entire markets such as stock exchange trading.
With regard to IT security, a risk posed by social bots currently still seems unlikely, as they do not attack the hardware or software of IT systems directly. Given that IT devices are becoming more and more intelligent (Industry 4.0, Internet of Things) on the one hand, and in view of the expected further development of social bots on the other, future risks, such as the hijacking of devices for malicious purposes, are difficult to assess and to predict. Social bots can, however, become a danger particularly if they target humans as potentially weak points of IT security and exploit them for attacks (e.g. by sending links that install malware).
Business models of social networks are primarily based on sales of advertising and/or user data. These models only work if it is humans who act on the platform and make purchasing decisions. In the long term, social bots therefore represent a threat to the business model of social networks: users might turn away because they lose confidence in the authenticity of the messages, and investors might consequently withdraw from the social networks.
However, the use of social bots does not necessarily have to be associated with negative intentions. Possible positive applications include artistic and creative applications as well as approaches using social bots as a countermeasure (so-called counter-speech campaigns) in order to fight fake news. Moreover, they could be used for positively influencing human behaviour (nudging). However, this would only be ethically acceptable if the principles of informational self-determination are observed.
Contact
Dr. Sonja Kind
+49 30 310078-283
sonja.kind@vdivde-it.de
TAB Secretariat
+49 30 28491-0
buero@tab-at-the-bundestag.de
Publications
Kind, S.; Jetzke, T.; Weide, S.; Ehrenberg-Silies, S.; Bovenschulte, M. (2017). Büro für Technikfolgen-Abschätzung beim Deutschen Bundestag (TAB). doi:10.5445/IR/1000133492
Kind, S.; Jetzke, T.; Weide, S.; Ehrenberg-Silies, S.; Bovenschulte, M. (2017, April). Büro für Technikfolgen-Abschätzung beim Deutschen Bundestag (TAB)
Ehrenberg-Silies, S.; Kind, S. (2016). Büro für Technikfolgen-Abschätzung beim Deutschen Bundestag (TAB). doi:10.5445/IR/1000127190