Censorship and AI: Algorithms that can reinforce and flag censorship

Artificial intelligence (AI) systems' evolving learning capabilities can both reinforce censorship and help detect it.
    • Author: Quantumrun Foresight
    • October 31, 2022

    Insight summary



    When it comes to artificial intelligence (AI) and censorship, many experts are concerned about the technology's implications for government control. AI systems may be vulnerable to bias introduced by the data used to train them. However, some organizations are also experimenting with how to use AI to detect and prevent censorship.



    Censorship and AI context



    AI-powered algorithms are shaped by the data they are trained on, a dependence that raises concerns about the potential misuse of AI systems by governments or organizations. A striking example is the Chinese government's use of AI to censor content on social media platforms such as WeChat and Weibo.



    Conversely, evolving AI systems also hold great promise for applications such as content moderation and the detection of censored information. Social media platforms are at the forefront of using AI to monitor the content posted on their servers, particularly to identify hate speech and content that incites violence. For instance, in 2019, YouTube announced that it would employ AI to identify videos containing graphic violence or extremist content.



    Furthermore, Facebook reported that by the end of 2020, its AI systems proactively detected approximately 94.7 percent of the hate speech removed from the platform. In this rapidly evolving landscape, it is crucial for both policymakers and the public to stay informed about the dual nature of AI's impact on online content. While there are concerns about its potential for censorship, AI also offers valuable tools for enhancing content moderation and ensuring a safer online environment.
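
    To make this concrete, the sketch below shows the general shape of AI-assisted moderation: a classifier estimates the probability that a post violates policy and flags high-scoring posts for review. It is a minimal illustration, not Facebook's or YouTube's actual system; the toy training data, model choice, and flagging threshold are all assumptions for demonstration.

```python
# Minimal sketch of AI-assisted content moderation (illustrative only).
# Real platforms use far larger models plus human review; the tiny dataset,
# TF-IDF + logistic regression model, and 0.5 threshold are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = violates policy, 0 = acceptable.
posts = [
    "I will hurt you and everyone like you",
    "people of that group deserve violence",
    "great game last night, well played",
    "loved the recipe, thanks for sharing",
]
labels = [1, 1, 0, 0]

# TF-IDF text features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

def moderate(post: str, threshold: float = 0.5) -> str:
    """Flag a post for review if the predicted violation probability is high."""
    p = model.predict_proba([post])[0][1]
    verdict = "FLAGGED" if p >= threshold else "ok"
    return f"{verdict} (p={p:.2f}): {post}"

for post in ["you deserve violence", "thanks for the great game"]:
    print(moderate(post))
```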



    Disruptive impact



    A 2021 study by the University of California San Diego examined two separate AI algorithms to see how they scored headlines containing specific terms. One system was trained on data from the Chinese-language edition of Wikipedia (Chinese Wikipedia), the other on Baidu Baike, an online encyclopedia.



    The study found that the AI algorithm trained on Chinese Wikipedia delivered more positive scores to headlines that mentioned terms like “election” and “freedom.” Meanwhile, the AI algorithm trained on Baidu Baike gave more positive scores to headlines containing phrases like “surveillance” and “social control.” This revelation sparked concern among many experts about AI’s potential for government censorship. 
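
    The mechanism behind this result can be illustrated with word embeddings: a model trained on differently slanted text ends up placing the same word closer to positive or negative terms. The sketch below is a minimal, hypothetical reconstruction of that idea, not the study's actual code; the toy corpora, seed words, and hyperparameters are invented, and results on data this small will vary.

```python
# Minimal sketch of how training corpora shape a model's word associations
# (illustrative; the UC San Diego study used far larger corpora and models).
# The toy sentences, seed words, and hyperparameters are invented assumptions.
from gensim.models import Word2Vec

# Two tiny corpora that frame the same term in opposite ways.
corpus_a = [  # frames "freedom" positively
    "freedom is good and elections are good".split(),
    "censorship is bad and control is bad".split(),
] * 100
corpus_b = [  # frames "freedom" negatively
    "freedom is bad and causes chaos".split(),
    "stability is good and order is good".split(),
] * 100

def sentiment_score(model, word, pos=("good",), neg=("bad",)):
    """Score a word by its embedding similarity to positive vs. negative seeds."""
    sim = model.wv.similarity
    return sum(float(sim(word, p)) for p in pos) - sum(float(sim(word, n)) for n in neg)

for name, corpus in (("positively slanted corpus", corpus_a),
                     ("negatively slanted corpus", corpus_b)):
    # workers=1 and a fixed seed keep the toy run reproducible.
    model = Word2Vec(corpus, vector_size=32, window=3, min_count=1,
                     workers=1, seed=0, epochs=100)
    print(name, "-> score for 'freedom':",
          round(sentiment_score(model, "freedom"), 3))
```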



    However, other studies have explored how AI can identify attempts at censorship. In 2021, the University of Chicago's Data Science Institute and Princeton University announced plans to build a real-time tool to monitor and detect Internet censorship. The project's ultimate goal is to provide additional monitoring capabilities and dashboards to data users, including diplomats, policymakers, and non-scientists. The team plans a real-time "weather map" of censorship so that observers can see Internet interference almost as it happens, including which countries, sites, or content governments are manipulating.
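
    While the article does not describe the Chicago and Princeton tool's internals, censorship monitors of this kind generally work by probing sites from many vantage points and classifying failure patterns. The sketch below illustrates one such heuristic probe under stated assumptions; the target URLs, timeout, and block-page signatures are placeholders invented for illustration.

```python
# Minimal sketch of a censorship-probe heuristic (illustrative only; this is
# NOT the University of Chicago / Princeton tool). The URL list, timeout, and
# block-page text signatures below are hypothetical placeholders.
import requests

TEST_URLS = ["https://example.com", "https://example.org"]   # hypothetical targets
BLOCK_HINTS = ("access denied", "blocked by order")          # hypothetical block-page text

def probe(url: str, timeout: float = 5.0) -> str:
    """Classify one fetch attempt into a coarse interference category."""
    try:
        resp = requests.get(url, timeout=timeout)
    except requests.exceptions.ConnectTimeout:
        return "timeout (possible IP blocking or throttling)"
    except requests.exceptions.ConnectionError:
        return "connection reset/refused (possible TCP-level interference)"
    body = resp.text.lower()
    if any(hint in body for hint in BLOCK_HINTS):
        return "block page detected (possible HTTP-level filtering)"
    return f"reachable (HTTP {resp.status_code})"

# A dashboard could aggregate results like these across many vantage points
# to build the kind of "weather map" described above.
for url in TEST_URLS:
    print(url, "->", probe(url))
```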



    Implications of censorship and AI



    Wider implications of censorship and AI may include: 




    • Cybercriminals hacking censorship organizations to capture and manipulate censored information. 

    • Increased investments and research for tools that can detect censorship and other information manipulation.

    • Social media platforms continuing to perfect their algorithms to moderate content. However, this increasing self-policing may alienate many users.

    • A rise in community distrust of government officials and news media outlets.

    • AI systems continuing to be used by some nation-states to control local media and news, including removing stories unfavorable to the respective governments.

    • Businesses adapting their digital strategies to comply with diverse global internet regulations, leading to more localized and segmented online services.

    • Consumers turning to alternative, decentralized platforms to avoid censorship, fostering a shift in social media dynamics.

    • Policymakers worldwide grappling with the challenge of regulating AI in censorship without stifling free speech, leading to varied legislative approaches.



    Questions to consider




    • How else can AI be used to promote or prevent censorship?

    • How might the rise of AI-driven censorship contribute to the spread of misinformation?

