Generative algorithms: Could this become the most disruptive technology of the 2020s?

Image credit: iStock

Computer-generated content is becoming so human-like that it is nearly impossible to detect and deflect.
    • Author: Quantumrun Foresight
    • February 21, 2023

    Despite the early deepfake scandals caused by generative algorithms, these artificial intelligence (AI) technologies remain a powerful tool that many industries, from media corporations to advertising agencies to film studios, use to create believable content. Some experts argue that generative AI should be more closely monitored, as these algorithms will soon be capable of skewing public opinion and deceiving the public, not to mention automating vast swaths of white-collar labor.



    Generative algorithms context



    Generative AI, or algorithms that can create content (including text, audio, images, video, and more) with minimal human intervention, has made significant strides since the 2010s. For example, OpenAI’s Generative Pre-trained Transformer 3 (GPT-3), released in 2020, was considered one of the most advanced language models of its kind; it can generate text that is virtually indistinguishable from something a person would write. Then, in November 2022, OpenAI released ChatGPT, a conversational model that attracted significant consumer, private sector, and media interest due to its stunning ability to provide detailed responses to user prompts and articulate answers across many domains.
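
    To make this capability concrete, the short sketch below shows how a single user prompt can be turned into machine-written copy by a hosted generative model. It is a minimal illustration rather than anything described in the article: it assumes the openai Python package (version 1 or later) is installed, that an API key is available in the OPENAI_API_KEY environment variable, and that the model name, prompt, and settings are placeholders chosen only for demonstration.

    import os
    from openai import OpenAI

    # Hypothetical example: ask a hosted generative model to draft a short
    # piece of text from a one-line prompt. The model name and prompt are
    # illustrative placeholders, not recommendations from the article.
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name, for illustration only
        messages=[
            {
                "role": "user",
                "content": "Write a two-sentence product description for a reusable water bottle.",
            }
        ],
    )

    # The returned text is entirely machine-generated; no human drafted it.
    print(response.choices[0].message.content)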



    Another generative AI technology that is gaining popularity (and notoriety) is deepfakes. The technology behind deepfakes utilizes generative adversarial networks (GANs), in which two algorithms train against each other: one generates synthetic images while the other tries to distinguish them from real ones, pushing the generated output ever closer to the original. While this technology may sound complicated, it has become relatively easy to use. Numerous online applications, such as Faceswap and ZAO Deepswap, can create deepfake images, audio, and videos in minutes (and, in some applications, instantly).
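
    As a rough illustration of that two-network setup, the sketch below pits a small generator against a small discriminator for a single training step. It is a toy example under stated assumptions, not a deepfake implementation: it assumes PyTorch is installed, and the layer sizes, random stand-in "images", and hyperparameters are placeholders chosen only to show the adversarial training loop.

    import torch
    import torch.nn as nn

    latent_dim, image_dim = 64, 28 * 28   # toy sizes; real deepfake models are far larger

    # Generator: learns to turn random noise into fake "images".
    generator = nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, image_dim), nn.Tanh(),
    )

    # Discriminator: learns to tell real images from generated ones.
    discriminator = nn.Sequential(
        nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid(),
    )

    loss_fn = nn.BCELoss()
    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    real_images = torch.rand(32, image_dim)   # stand-in for a batch of real photos
    noise = torch.randn(32, latent_dim)
    fake_images = generator(noise)

    # 1) Update the discriminator: reward it for labelling real images 1 and fakes 0.
    d_loss = (loss_fn(discriminator(real_images), torch.ones(32, 1))
              + loss_fn(discriminator(fake_images.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Update the generator: reward it for making the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake_images), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

    # Repeating these two steps over a dataset of real photos is what gradually
    # pushes the generated images closer and closer to the originals.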



    While these generative AI tools were initially developed to advance machine and deep learning technologies, they have also been used for unethical practices. Next-generation disinformation and propaganda campaigns have thrived using these tools. Synthetic media, such as AI-generated op-eds, videos, and images, have led to a flood of fake news. Deepfake comment bots have even been used to harass women and minorities online.



    Disruptive impact



    Generative AI systems are rapidly finding widespread application across numerous industries. A study published in 2022 by the Association for Computing Machinery found that leading media companies such as the Associated Press, Forbes, the New York Times, the Washington Post, and ProPublica use AI to generate entire articles from scratch. This content includes reporting on crimes, financial markets, politics, sporting events, and foreign affairs.



    Generative AI is also increasingly used as an input when writing texts for different applications, from user- and company-generated content to reports written by governmental institutions. When AI writes the text, its involvement is usually not disclosed. Some have argued that, given the potential for misuse, users of AI should be transparent about its use. In fact, this type of disclosure will likely become law by the late 2020s, as proposed by the Algorithmic Justice and Online Platform Transparency Act of 2021.



    Another area where generative AI disclosure is needed is in advertising. A 2021 study published in the Journal of Advertising found that advertisers are automating many processes to create “synthetic ads” generated through data analysis and modification. 



    Advertisers often use manipulation tactics to make ads more personalized, rational, or emotive so that consumers will want to purchase the product. Ad manipulation involves any change made to an advertisement, such as retouching, make-up, and lighting or camera angles. However, digital manipulation practices have become so severe that they can promote unrealistic beauty standards and contribute to body dysmorphia among teens. Several countries, such as the UK, France, and Norway, have mandated that advertisers and influencers explicitly state whether their content has been manipulated.



    Implications of generative algorithms



    Wider implications of generative algorithms may include: 




    • Numerous white-collar professionals, such as software engineers, lawyers, customer service representatives, and sales representatives, will see increasing automation of their lower-value job responsibilities. This automation will improve the productivity of the average worker while reducing the need for companies to hire in excess. As a result, more companies (especially smaller or less high-profile companies) will gain access to skilled professionals at a critical period when the labor force worldwide is shrinking due to boomer retirements.

    • Generative AI being used to ghostwrite opinion pieces and thought leadership articles.

    • The increased use of generative AI to streamline digital versioning, where different angles of the same story are written simultaneously.

    • Deepfake content being used in advertisements and movies to de-age actors or bring back deceased ones.

    • Deepfake apps and technologies becoming increasingly accessible and low-cost, allowing more people to participate in propaganda and disinformation.

    • More countries requiring companies to reveal the use of AI-generated content, personas, writers, celebrities, and influencers.



    Questions to comment on




    • How is generative AI being used in your line of work, if at all?

    • What are the other benefits and dangers of using AI to mass-produce content?

       


    Insight references

    The following popular and institutional links were referenced for this insight: