Tactics spreading disinformation: How the human brain is invaded

From using bots to flooding social media with fake news, disinformation tactics are changing the course of human civilization.
    Author: Quantumrun Foresight
    October 4, 2023

    Insight summary

    Misinformation spreads through network contagion and through channels like encrypted messaging apps. Groups like Ghostwriter target NATO and US troops, while AI is used to manipulate public opinion. People tend to trust familiar sources, which makes them susceptible to false information. These trends could lead to more AI-based disinformation campaigns, stronger government regulations, increased use of encrypted apps by extremists, heightened cybersecurity in media, and educational courses on combating disinformation.

    Tactics spreading disinformation context

    Misinformation tactics are tools and strategies, often deployed on social networking sites, that create a pandemic of false beliefs. This manipulation of information has resulted in widespread misunderstandings about topics ranging from voter fraud to whether violent attacks are real (e.g., the Sandy Hook elementary school shooting) or whether vaccines are safe. As fake news continues to be shared across platforms, it has created a deep distrust of social institutions like the media. One theory of how misleading information spreads is called the Contagion Model, which is based on how computer viruses work. A network is built from nodes, which represent people, and edges, which symbolize social links. A concept is seeded in one “mind” and spreads under various conditions, depending on social relationships.
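    To make the Contagion Model concrete, the sketch below simulates an idea spreading over a small social network in Python. It is a minimal illustration assuming an independent-cascade style of spread; the network size, edge probability, and sharing probability are illustrative values, not parameters from any cited study.

        import random

        random.seed(42)  # reproducible toy run

        # Build a small random social network: nodes are people, edges are social links.
        N, EDGE_PROB = 50, 0.08
        neighbors = {p: set() for p in range(N)}
        for a in range(N):
            for b in range(a + 1, N):
                if random.random() < EDGE_PROB:
                    neighbors[a].add(b)
                    neighbors[b].add(a)

        def spread(seed, share_prob=0.3, rounds=10):
            """Seed one "mind" with an idea and let it spread along social links.

            Each round, every current believer shares the idea with each
            neighbor independently with probability share_prob.
            """
            believers = {seed}
            for _ in range(rounds):
                newly_convinced = set()
                for person in believers:
                    for friend in neighbors[person]:
                        if friend not in believers and random.random() < share_prob:
                            newly_convinced.add(friend)
                if not newly_convinced:
                    break  # the idea stopped spreading
                believers |= newly_convinced
            return believers

        print(f"{len(spread(seed=0))} of {N} people adopted the idea")

    Raising the sharing probability or the density of social links lets the cascade reach far more of the network, which is part of why densely connected platforms are such effective carriers of false beliefs.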

    It doesn’t help that technology and the increasing digitization of society are making misinformation tactics more effective than ever. An example is encrypted messaging apps (EMAs), which not only facilitate sharing false information with personal contacts but also make it nearly impossible for app companies to track the messages being shared. For example, far-right groups migrated to EMAs after the January 2021 US Capitol attack, when mainstream social media platforms like Twitter banned them. Disinformation tactics have immediate and long-term consequences. Beyond elections, where questionable personalities with criminal records win through troll farms, these tactics can marginalize minorities and facilitate war propaganda (e.g., Russia’s invasion of Ukraine).

    Disruptive impact

    In 2020, security company FireEye released a report highlighting the disinformation efforts of a group of hackers called Ghostwriter. Since March 2017, the propagandists have been spreading lies, particularly against the North Atlantic Treaty Organization (NATO) military alliance and US troops in Poland and the Baltics. They’ve published falsified material on social media and pro-Russian news websites. Ghostwriter has sometimes utilized a more aggressive approach: hacking the content management systems (CMS) of news websites to post its own stories. The group then distributes its fake news through phony emails, social media posts, and even self-written op-eds on sites that accept reader-submitted content.

    Another disinformation tactic uses algorithms and artificial intelligence (AI) to manipulate public opinion on social media, such as “boosting” follower counts through bots or creating automated troll accounts to post hateful comments. Experts call this computational propaganda. Meanwhile, research by The New York Times found that politicians use email to spread disinformation more often than people realize. In the US, both parties are guilty of using hyperbole in their emails to constituents, which can encourage the sharing of false information.
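    Building on the contagion sketch above, the toy extension below suggests how this kind of amplification might work mechanically: a handful of hypothetical bot accounts are wired into many people and re-share on every round, crudely standing in for the follower-boosting behavior described here. The bot count and per-bot reach are illustrative assumptions, not measurements of any real campaign.

        def spread_with_bots(seed, n_bots=5, bot_reach=15, share_prob=0.3, rounds=10):
            """Same cascade as spread(), plus hypothetical bots that always share.

            Bots (extra node IDs N..N+n_bots-1) each connect to bot_reach
            random people and re-share with probability 1.0 every round.
            """
            bot_links = {N + i: set(random.sample(range(N), bot_reach))
                         for i in range(n_bots)}
            believers = {seed} | set(bot_links)  # bots start as believers
            for _ in range(rounds):
                newly_convinced = set()
                for person in believers:
                    is_bot = person in bot_links
                    links = bot_links[person] if is_bot else neighbors[person]
                    prob = 1.0 if is_bot else share_prob
                    for friend in links:
                        if friend not in believers and random.random() < prob:
                            newly_convinced.add(friend)
                if not newly_convinced:
                    break
                believers |= newly_convinced
            return believers - set(bot_links)  # count humans only

        print(f"With bots: {len(spread_with_bots(seed=0))} of {N} people reached")

    Even a few always-on accounts can markedly raise how much of the network a message reaches, which is the appeal of bot farms to propagandists.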

    There are a few key reasons why people fall for misinformation campaigns. 

    • First, people are social learners and tend to trust their sources of information, such as friends or family members. These people, in turn, get their news from trusted friends, making the cycle difficult to break.
    • Second, people often fail to proactively fact-check the information they consume, especially if they’re used to getting their news from one source (often traditional media or a favorite social media platform like Facebook or Twitter). When they see a headline or an image (or even just familiar branding) that supports their beliefs, they often don’t question the authenticity of the claims, no matter how ridiculous.
    • Third, echo chambers are powerful disinformation tools that automatically cast people with opposing beliefs as the enemy. The human brain is hardwired to seek information that supports existing ideas and discount information that contradicts them.

    Wider implications of tactics spreading disinformation

    Possible implications of tactics spreading disinformation include: 

    • More companies specializing in AI and bots to help politicians and propagandists gain followers and “credibility” through clever disinformation campaigns.
    • Governments being pressured to create anti-disinformation laws and agencies to combat troll farms and misinformation strategists.
    • Increasing downloads of EMAs by extremist groups who want to spread propaganda and ruin reputations.
    • Media sites investing in expensive cybersecurity solutions to prevent disinformation hackers from planting fake news in their systems. Novel generative AI solutions may be employed in this moderation process.
    • Generative AI-powered bots being employed by bad actors to produce waves of propaganda and disinformation content at scale.
    • Increased pressure on universities and community schools to offer anti-disinformation courses.

    Questions to consider

    • How do you protect yourself from disinformation tactics?
    • How else can governments and agencies prevent the spread of these tactics?

    Insight references

    The following popular and institutional links were referenced for this insight:

    Centre for International Governance Innovation, “The Business of Computational Propaganda Needs to End”