Propaganda bots: An army of digital agitators
- October 26, 2022
Insight summary
In the era of social media, online bots have become an increasingly common tool for disseminating propaganda. These bots are automated accounts designed to mimic real people, and they can influence politics and current events by spreading disinformation at scale. Over the long term, companies and political parties may use them to manipulate public opinion, and synthetic social media accounts may increasingly fan the flames of controversial debates and policies.
Propaganda bots context
A bot uses artificial intelligence (AI) software to autonomously perform actions such as posting messages, re-sharing, liking, following, unfollowing, or direct messaging accounts on a social media platform. Because such AI software is widely available, media outlets and social movements have adopted the technology to increase interaction and reach with their membership bases. Propaganda bots have risen in popularity among rogue governments, organizations, and activist groups because the software platforms that create and manage them have become increasingly accessible and easy to use.
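The autonomous action loop described above can be sketched in a few lines. This is a minimal illustration only: `SocialClient` is a hypothetical in-memory stand-in for a real platform's API, and all post data is invented.

```python
# Minimal sketch of an automated amplification loop. SocialClient is a
# hypothetical stand-in for a real platform API wrapper; every method
# and all data here are invented for illustration.
class SocialClient:
    def __init__(self, posts):
        self.posts = posts      # list of {"id", "text", "author"} dicts
        self.actions = []       # record of every action the bot takes

    def search(self, keyword):
        # Return posts whose text mentions the keyword.
        return [p for p in self.posts if keyword in p["text"]]

    def like(self, post_id):
        self.actions.append(("like", post_id))

    def reshare(self, post_id):
        self.actions.append(("reshare", post_id))

    def follow(self, author):
        self.actions.append(("follow", author))

def run_bot(client, keywords):
    """Autonomously amplify every post matching a target keyword."""
    for kw in keywords:
        for post in client.search(kw):
            client.like(post["id"])
            client.reshare(post["id"])
            client.follow(post["author"])

posts = [
    {"id": 1, "text": "vote for candidate X", "author": "alice"},
    {"id": 2, "text": "the weather today", "author": "bob"},
]
client = SocialClient(posts)
run_bot(client, ["candidate X"])
print(client.actions)  # [('like', 1), ('reshare', 1), ('follow', 'alice')]
```

The point of the sketch is how little logic is required: a keyword search plus three API calls per match is enough to amplify targeted content around the clock.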
These bots are highly versatile and programmable, making them especially effective at influencing public opinion by targeting specific communities. They can spread false information about candidates and issues or harass people with opposing views. Further, they can create fake social media profiles to manufacture support for a particular candidate or cause. In particular, Twitter has become a haven for these bots because the site lends itself well to short, written messages.
Propaganda bots have been deployed in several high-profile political campaigns, including the 2016 US presidential election and the UK Brexit referendum. In both cases, bots spread misinformation and sowed discord among voters. While propaganda bots are not limited to authoritarian countries, they are most widespread where freedom of speech is limited. In such nations, governments often use bots to control the population and suppress protest and dissent; China and Russia, for example, frequently flood their highly restricted Internet networks with government-leaning content generated by AI and machine learning (ML) algorithms.
Disruptive impact
Two significant developments have enabled the rise of propaganda bots: the increasing competence of AI at generating text and the growing popularity of social media chatbots. Text-generating software is now sophisticated enough to fool many people most of the time; it can write influential op-eds on highly complex national issues or chat with consumers on merchant websites. These bots are even used by websites that pose as legitimate local news sources while publishing disinformation (a practice known as pink-slime journalism).
In 2017, the US Federal Communications Commission (FCC) received over 22 million comments in response to its invitation for public input on net neutrality. About half of the comments appeared to be fraudulent and used stolen identities. Many were simplistic: roughly 1.3 million were produced from the same template, with a few words swapped to make them appear different.
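The template-spinning technique behind those 1.3 million near-identical comments can be sketched simply: one sentence frame with interchangeable word slots yields thousands of superficially distinct variants. The template and phrase lists below are invented for illustration, not taken from the actual FCC comments.

```python
import itertools

# Hypothetical example of template spinning: a fixed sentence frame with
# interchangeable word slots. All phrases here are invented for illustration.
TEMPLATE = "I {verb} the {adjective} plan to {action} these rules."

SLOTS = {
    "verb": ["oppose", "reject", "dislike"],
    "adjective": ["unprecedented", "misguided", "harmful"],
    "action": ["repeal", "dismantle", "undo"],
}

def spin_comments(template, slots):
    """Generate every variant of the template by filling each word slot."""
    keys = list(slots)
    return [
        template.format(**dict(zip(keys, combo)))
        for combo in itertools.product(*(slots[k] for k in keys))
    ]

comments = spin_comments(TEMPLATE, SLOTS)
print(len(comments))  # 3 * 3 * 3 = 27 unique variants from one template
```

With a handful of synonyms per slot, variant counts multiply quickly, which is why such comments look diverse at a glance yet collapse back to a single template under analysis.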
In 2020, Harvard University researcher Max Weiss used text-generation software to write 1,000 comments in response to a government call for public feedback on the Medicaid program, a study designed to demonstrate how easy propaganda bots are to build. The comments were so unique and believable that Medicaid administrators thought they were real. Weiss then disclosed his research and had the comments removed to prevent the policy debate from becoming biased.
Implications of propaganda bots
Wider implications of propaganda bots may include:
- Companies using propaganda bots to generate favorable public relations content aimed at restoring their corporate reputations.
- Increasing cases of personalized text messages and emails assisting cybercriminals in executing identity theft, fraud, and phishing.
- Individuals renting propaganda bots for personal use; e.g., using bots to increase social media followers and harass people online.
- Higher volumes of AI-driven personas sending letters to newspapers and elected officials, submitting individual comments to public rule-making processes, and discussing political issues on social media.
- Governments attempting to implement stricter moderation legislation on Big Tech firms to regulate the use and creation of bots.
- Businesses adapting to surveillance capabilities of propaganda bots, leading to enhanced monitoring of employee productivity and customer behavior.
- Consumer skepticism intensifying as distinguishing between authentic and bot-generated content becomes challenging, impacting brand trust and marketing effectiveness.
- Policymakers facing the dilemma of balancing freedom of speech with the need to curb misinformation, influencing the scope of legislation on digital communication platforms.
Questions to consider
- How can people and organizations protect themselves from unknowingly engaging in discourse with propaganda bots?
- How else are bots going to change public debate and discussion?
- What are some recent encounters you have had with social media propaganda bots?