Government propaganda growth: The rise of state-sponsored virtual brainwashing
- December 12, 2022
Insight summary
The global rise in government-backed propaganda has dramatically transformed the digital landscape, with social media becoming a battleground for misinformation campaigns. Governments are increasingly employing sophisticated techniques like AI-generated personas and deepfake videos, making it challenging for platforms and users to distinguish truth from fiction. This escalating trend not only affects public opinion and elections but also strains international relations and compels legislative action to manage digital content integrity.
Government propaganda growth context
According to the Oxford Internet Institute, state-sponsored propaganda campaigns occurred in 28 countries in 2017 and grew to 81 countries by 2020. Propaganda has become an integral tool for many governments and political movements. It is used to smear opponents’ reputations, shape public opinion, silence opposition, and interfere in foreign affairs. In 2015, few countries used social media bots and related technologies to wage so-called computational propaganda campaigns. Since 2016, however, social media propaganda has surged, most notably with Russia’s interference in the UK’s Brexit referendum and the US elections. As of 2022, almost every election is accompanied by a misinformation campaign to some degree, and many are conducted professionally.
The Oxford researchers emphasized that governments and political parties have invested millions in developing “cyber troops” to drown out competing voices on social media. Volunteer groups, youth organizations, and civil society organizations that support government ideologies are often incorporated into these cyber troops to spread false information.
Social media platforms like Facebook and Twitter have attempted to police their networks and remove these cyber troops. Between January 2019 and November 2020, the platforms removed more than 317,000 troll farm accounts and pages. However, experts think it is too late to weed out these fake accounts. Governments have become increasingly sophisticated in their campaigns, investing in artificial intelligence (AI)-generated online personas and deepfake content.
Disruptive impact
During the 2016 Philippine national elections, the eventual winner Rodrigo Duterte used Facebook to reach Millennial voters and encourage “patriotic trolling.” Duterte, known for his “iron-fist” style of governing, had been accused of human rights violations in his “war on drugs” by civil rights organizations, including the UN Human Rights Council. However, this notorious reputation only fueled his election campaign, which was primarily concentrated on Facebook, the platform used by about 97 percent of Filipinos.
He hired strategists to help him build a worldwide army of online personalities and bloggers. His large following (often vicious and combative) was frequently referred to as the Duterte Die-Hard Supporters (DDS). Once elected, Duterte proceeded to weaponize Facebook, tearing down reputations and jailing vocal critics, including Nobel Peace Prize laureate and journalist Maria Ressa and opposition senator Leila De Lima. Duterte’s use of social media to advance his administration’s propaganda and justify the widespread human rights violations under his leadership is just one example of how governments can use all available resources to sway public opinion.
In 2020, University of Oxford researchers recorded that 48 countries partnered with private consultancy and marketing firms to wage disinformation campaigns. These campaigns were expensive, with contracts worth almost USD 60 billion. Despite the efforts of Facebook and other social media sites to control troll farm attacks, governments generally have the upper hand. In January 2021, when Facebook took down accounts with suspicious links to Ugandan President Yoweri Museveni’s re-election campaign, Museveni had Internet service providers block all access to social media platforms and messaging apps.
Implications of government propaganda growth
Wider implications of government propaganda growth may include:
- The increasing use of deepfake videos depicting “scandalous” activities falsely attributed to politicians.
- Social media platforms heavily investing in bot weeding and building algorithms to identify fake accounts. Some platforms may eventually be pushed to adopt identity authentication policies for all of their users.
- Authoritarian states banning social media platforms that attempt to stop their propaganda campaigns and replacing them with censored applications, a measure that may deepen their citizens’ alienation and indoctrination.
- People no longer being able to identify which sources are legitimate as propaganda campaigns become more sophisticated and believable.
- Governments continuing to weaponize social media to discredit opponents, get them fired, or put them in jail.
- Nations investing in counter-propaganda strategies, aiming to safeguard national security and public opinion from foreign influence campaigns.
- Legislative bodies enacting stricter regulations on digital content, striving to balance free speech with the need to curb misleading propaganda.
- Diplomatic tensions rising as countries accuse each other of spreading false information, impacting international relations and trade agreements.
Questions to consider
- If your country has experienced a government-sponsored propaganda campaign, what was the result?
- How do you protect yourself from state-sponsored propaganda campaigns?