Synthetic media: The dark side of content creation

With the increasing accessibility of deepfake and AI-generated technology, it’s now easier than ever to fabricate content.
    • Author: Quantumrun Foresight
    • January 5, 2024

    Insight summary



    Synthetic media, an umbrella term covering AI-driven technologies such as deepfakes, can sway public opinion and spread disinformation at scale. Because AI can generate realistic videos and texts, the technology raises ethical concerns in entertainment and fuels false narratives on social media. Its wider impacts include potential government regulation, increased risk of blackmail, erosion of factual discourse, and the emergence of specialized job roles for identifying and combating fake content.



    Synthetic media context



    Synthetic media is an umbrella term for any artificially created or modified media, from AI-written music and text to realistic-looking deepfake videos. The technology is enabled by artificial intelligence and machine learning (AI/ML), particularly natural language processing (NLP) and generative adversarial networks (GANs). NLP models are often used to create deepfake texts, such as tweets, blog posts, and even op-eds, mimicking human writers with increasingly fluent and logically coherent prose. Because of this capability, troll farms can flood social media with fake posts from dummy accounts to stoke panic, doubt, and even real-world violence.
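    To make the idea of machine-generated text concrete, the sketch below uses a first-order Markov chain, a far simpler ancestor of modern NLP models, to stitch together new sentences from a tiny corpus. The corpus and helper names are illustrative assumptions, not part of any real system described above:

```python
import random
from collections import defaultdict

random.seed(0)

# Toy training corpus (illustrative only).
corpus = ("synthetic media can mimic human writing . "
          "human writing flows smoothly . "
          "synthetic media can flood social media .").split()

# First-order Markov model: each word maps to the words observed after it.
chain = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    chain[cur].append(nxt)

def generate(start, length=10):
    """Walk the chain, sampling a random successor at each step."""
    words = [start]
    for _ in range(length - 1):
        options = chain.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("synthetic"))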



    Meanwhile, GANs are the primary engine behind deepfake videos. A GAN pits two deep neural networks against each other: a generator that produces fake images and a discriminator that tries to distinguish them from real ones, with both networks improving through the contest. As a result, AI can accurately identify images of real people and superimpose their faces onto another person’s body, often in real time. Deepfake apps are already popular as filters on Snapchat and Instagram (e.g., Face Swap), but the consequences can extend well beyond entertainment: real people’s faces can be superimposed onto pornographic videos, ruining reputations and triggering political scandals. GANs are now more sophisticated than ever, and various industries will try to take advantage of the opportunities this technology offers.
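    The adversarial idea behind GANs can be sketched at toy scale: a one-parameter "generator" shifts random noise toward a target distribution while a logistic "discriminator" tries to tell real from fake, and each side updates against the other. This is a minimal hand-derived illustration of the training loop, not a real deepfake pipeline; all names and numbers are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

REAL_MEAN = 3.0   # "real" data comes from N(3, 1)
mu = 0.0          # generator parameter: shifts standard-normal noise
w, b = 0.1, 0.0   # discriminator: D(x) = sigmoid(w * x + b)
lr, batch = 0.05, 64

for step in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    real = rng.normal(REAL_MEAN, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + mu
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    # Gradients of -log D(real) - log(1 - D(fake)) w.r.t. w and b.
    gw = np.mean(-(1.0 - d_real) * real + d_fake * fake)
    gb = np.mean(-(1.0 - d_real) + d_fake)
    w -= lr * gw
    b -= lr * gb

    # Generator step: move mu so the (now fixed) discriminator is fooled.
    fake = rng.normal(0.0, 1.0, batch) + mu
    d_fake = sigmoid(w * fake + b)
    # Gradient of the non-saturating loss -log D(fake) w.r.t. mu.
    g_mu = np.mean(-(1.0 - d_fake) * w)
    mu -= lr * g_mu

print(f"generated mean {mu:.2f} vs real mean {REAL_MEAN}")
```

    On this one-dimensional problem the generator's mean drifts toward the real mean as the contest proceeds; real deepfake systems apply the same adversarial loop to deep convolutional networks over images.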



    Disruptive impact



    The entertainment sector is the most visible heavy user of deepfake content. The trend can already be seen in some films and music videos, where deceased actors and musicians are resurrected from archival footage and voice samples. (The late Peter Cushing’s Grand Moff Tarkin, digitally recreated for Rogue One: A Star Wars Story, is an example.) While these developments can serve storytelling (and nostalgia), they carry ethical implications: deceased celebrities can no longer consent to having a version of themselves brought back to life for commercial purposes, and it is unclear how, if at all, their estates should be compensated.



    However, a more malicious (and increasingly popular) use of synthetic media is in disinformation campaigns. One example is AI-generated personas posing as experts or journalists to sway the public on a particular message or developing news story. In July 2020, an investigation uncovered a network of non-existent authors who had published thought leadership pieces in conservative outlets such as the Washington Examiner and American Thinker. The group also wrote for Middle Eastern outlets like Al Arabiya and The Jerusalem Post, pushing themes such as promoting certain Gulf states and attacking Iran. In February 2022, Facebook shut down a similar Russian network of fake personas claiming to be news editors in Kyiv; the network published material alleging that the West had betrayed Ukraine and that Ukraine was a failing state.



    Wider implications of synthetic media



    Possible implications of synthetic media may include: 




    • Governments creating standardized regulations to prosecute illegal synthetic media creators and spreaders.

    • Increased instances of blackmail and ransom demands involving deepfake content created to target celebrities, politicians, and other public figures.

    • Continued erosion of facts on social media, with corrupt politicians and companies accusing whistleblowers of fabricating synthetic media against them (even when the evidence is genuine).

    • National agencies, media sites, and financial institutions training their staff to identify and report malicious synthetic media.

    • Universities offering synthetic media courses and specializations, leading to new job categories focused on fact-checking and identifying fake content.



    Questions to comment on




    • How else can synthetic media be used for criminal activities?

    • In what other ways can governments train citizens to identify synthetic media?

