Deepfakes and harassment: How synthetic content is used to harass women


Manipulated images and videos are contributing to a digital environment that targets women.
    • Author: Quantumrun Foresight
    • December 14, 2022

    Insight summary



    Developments in deepfake technology have resulted in increasing incidents of sexual harassment, particularly against women. Experts believe this abuse will worsen unless stricter laws on how synthetic media is created, used, and distributed are enforced. The long-term implications of using deepfakes for harassment could include increased lawsuits and more advanced deepfake technologies and filters.



    Deepfakes and harassment context



    In 2017, a discussion board on the website Reddit was used to host artificial intelligence (AI)-manipulated pornography for the first time. Within a month, the thread went viral, with thousands of users posting deepfake pornography to the site. Synthetic content used to create fake pornography or to harass individuals is increasingly common, yet public attention frequently focuses on propaganda deepfakes that promote disinformation and political instability.



    The term "deepfake" combines "deep learning" and "fake" and refers to a method for recreating photographs and videos with the help of AI. The essential component in this content's production is machine learning (ML), which allows for the rapid and inexpensive creation of fake material that is increasingly difficult for human viewers to detect.



    A neural network is trained with footage of the targeted person to create a deepfake video. The more footage used in the training data, the more realistic the results will be, as the network learns that person's mannerisms, expressions, and other traits. Once the neural network is trained, anyone can use computer-graphics techniques to superimpose a copy of the person's likeness onto another actor or body. This copying has resulted in a surging number of pornographic materials featuring female celebrities and private citizens who are unaware that their images have been used in this way. According to research firm Sensity AI, roughly 90 to 95 percent of all deepfake videos fall into the nonconsensual pornography category.
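    The training-then-superimpose process described above is often built around a shared encoder with one decoder per identity. The toy sketch below illustrates that data flow only; it is an assumption-laden simplification (tiny linear layers and random vectors standing in for aligned face crops), not a real deepfake implementation, and all array sizes and learning rates are invented for illustration.

    ```python
    import numpy as np

    # Toy illustration of the shared-encoder / per-identity-decoder design
    # behind face-swap deepfakes. Real systems train deep convolutional
    # networks on thousands of video frames; random vectors stand in for
    # face images here, purely to show the data flow.

    rng = np.random.default_rng(0)
    DIM, LATENT = 16, 4  # "image" size and bottleneck size (assumed values)

    # Stand-ins for aligned face crops of person A and person B.
    faces_a = rng.normal(size=(200, DIM))
    faces_b = rng.normal(size=(200, DIM)) + 2.0  # shifted so identities differ

    enc = rng.normal(scale=0.1, size=(DIM, LATENT))    # shared encoder
    dec_a = rng.normal(scale=0.1, size=(LATENT, DIM))  # decoder for identity A
    dec_b = rng.normal(scale=0.1, size=(LATENT, DIM))  # decoder for identity B

    def step(x, dec, lr=1e-3):
        """One gradient-descent step on reconstruction error ||x@enc@dec - x||^2."""
        global enc
        z = x @ enc
        err = z @ dec - x
        g_dec = z.T @ err / len(x)
        g_enc = x.T @ (err @ dec.T) / len(x)
        dec -= lr * g_dec   # in-place update of this identity's decoder
        enc -= lr * g_enc   # the encoder is updated by BOTH identities
        return float(np.mean(err ** 2))

    # Training alternates batches from both identities through the same
    # encoder, so the latent space captures features common to both faces.
    for _ in range(2000):
        step(faces_a, dec_a)
        step(faces_b, dec_b)

    # The "swap": encode a frame of person A, decode with person B's decoder,
    # producing a frame with A's pose rendered in B's likeness.
    swapped = faces_a[:1] @ enc @ dec_b
    print("swapped frame shape:", swapped.shape)
    ```

    The key design point this sketch shows is why more footage helps: both decoders pull the shared encoder toward features that generalize across poses and expressions, which is what makes the swapped output track the source actor's movements.
    
    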



    Disruptive impact



    Deepfakes have worsened the practice of revenge porn, primarily targeting women to expose them to public humiliation and trauma. Women's privacy and safety are jeopardized as end-to-end fake video technology is increasingly weaponized to harass, intimidate, demean, and degrade women personally and professionally. Worse, there is not enough regulation against this type of content.



    For example, as of 2022, revenge porn content is banned in 46 US states, but only two states explicitly cover synthetic media in their bans. Deepfakes are not illegal by themselves, only when they breach copyrights or become defamatory. These limitations make it difficult for victims to pursue legal action, especially since there is no way to permanently delete this content online.



    Meanwhile, another form of synthetic content, avatars (online representations of users), is also being subjected to assaults. According to a 2022 report by nonprofit advocacy organization SumOfUs, a woman researching on behalf of the organization was allegedly assaulted in the Metaverse platform Horizon Worlds. The woman reported that another user sexually assaulted her avatar while others watched. When the victim brought the incident to Meta's attention, a Meta spokesperson said that the researcher had deactivated the Personal Boundary option. The feature, introduced in February 2022 as a safety precaution enabled by default, prevents strangers from approaching an avatar within four feet.



    Implications of deepfakes and harassment



    Wider implications of deepfakes and harassment may include: 




    • Increased pressure for governments to implement a global regulatory policy against deepfakes used for digital harassment and assault.

    • More women being victimized by deepfake technology, particularly celebrities, journalists, and activists.

    • An increase in lawsuits from victims of deepfake harassment and defamation. 

    • Increased incidents of inappropriate behavior toward avatars and other online representations in metaverse communities.

    • New and increasingly easy-to-use deepfake apps and filters being released that can create realistic content, leading to the commodification of nonconsensual deepfake content, especially pornography.

    • Social media and website hosting platforms investing more to heavily monitor content circulated on their platforms, including banning individuals or taking down group pages.



    Questions to consider




    • How is your government addressing deepfake harassment?

    • What are the other ways that online users can protect themselves from being victimized by deepfake creators?


    Insight references

    The following popular and institutional links were referenced for this insight:

    Global Conference on Women's Studies: "Justice for Women: Deepfakes and Revenge Porn"