Synthetic media and the law: The fight against misleading content
- February 17, 2023
The proliferation of accessible synthetic media, or deepfake, technologies has left consumers more vulnerable to disinformation and manipulated media, often without the resources needed to protect themselves. To address the harmful effects of content manipulation, key players such as government agencies, media outlets, and tech companies are working together to make synthetic media more transparent.
Synthetic media and the law context
Aside from enabling propaganda and disinformation, synthetic or digitally altered content has contributed to rising body dysmorphia and low self-esteem among young people. Body dysmorphia is a mental health condition in which people obsess over perceived flaws in their appearance. Teenagers are particularly susceptible because they are continuously bombarded with society-dictated standards of beauty and acceptability.
Some governments are partnering with organizations to hold accountable entities that use digitally manipulated videos and photos to mislead people. For example, the US Congress passed the Deepfake Task Force Act in 2021. This bill established a National Deepfake and Digital Provenance Task Force comprising members from the private sector, federal agencies, and academia. The task force is also charged with developing a digital provenance standard that would identify where a piece of online content came from and what alterations were made to it.
This bill complements the Content Authenticity Initiative (CAI) led by tech firm Adobe. The CAI protocol lets creative professionals get credit for their work by attaching tamper-evident attribution data, such as name, location, and edit history, to a piece of media. The standard also gives consumers a new level of transparency about what they see online.
According to Adobe, provenance technologies empower customers to conduct due diligence without waiting for intermediary labels. The spread of fake news and propaganda can be slowed by making it easier for online users to fact-check the origins of a piece of content and identify legitimate sources.
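To make the tamper-evident idea concrete: the core mechanism is binding attribution data to a cryptographic hash of the media, so any later alteration is detectable. The real CAI/C2PA standard uses signed manifests embedded in the file; the sketch below is a deliberately simplified, hypothetical illustration of the principle, with all function and field names invented for this example.

```python
import hashlib

def attach_provenance(media_bytes: bytes, author: str, edits: list) -> dict:
    """Build a manifest binding attribution data to the media's hash.

    Hypothetical sketch: the actual CAI/C2PA protocol uses
    cryptographically signed, embedded manifests, not this structure.
    """
    return {
        "author": author,
        "edit_history": edits,
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),
    }

def verify_provenance(media_bytes: bytes, manifest: dict) -> bool:
    """Return True only if the media still matches the recorded hash."""
    return hashlib.sha256(media_bytes).hexdigest() == manifest["content_hash"]

original = b"...raw image bytes..."
manifest = attach_provenance(original, "Jane Photographer", ["crop", "color-balance"])

print(verify_provenance(original, manifest))              # True: unaltered media
print(verify_provenance(original + b"tamper", manifest))  # False: content changed
```

Because the hash covers the full media bytes, even a one-byte edit made after the manifest was created causes verification to fail, which is what lets consumers detect undisclosed alterations.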
Disruptive impact
Social media posts are one area where synthetic media regulation is becoming more necessary than ever. In 2021, Norway passed a law preventing advertisers and social media influencers from sharing retouched images without disclosing that the photo was edited. The law applies to brands, companies, and influencers that post sponsored content on any social media site. Sponsored posts are content paid for by an advertiser, including payment in the form of free merchandise.
The amendment requires disclosure of any edits made to an image, even those applied before the photo was taken. For example, Snapchat and Instagram filters that modify one's appearance would have to be labeled. According to media site Vice, examples of edits that would require labeling include "enlarged lips, narrowed waists, and exaggerated muscles." By prohibiting advertisers and influencers from posting doctored photos without transparency, the government hopes to reduce the number of young people succumbing to negative body pressures.
Other European countries have proposed or passed similar laws. The UK introduced the Digitally Altered Body Images Bill in 2021, which would require social media posts to disclose any filter or alteration used. The UK's Advertising Standards Authority has also banned social media influencers from using unrealistic beauty filters in advertisements. And in 2017, France passed a law requiring commercial images digitally altered to make a model look thinner to carry a warning label, similar to those found on cigarette packages.
Implications of synthetic media and the law
Wider implications of synthetic media being moderated by legislation may include:
- More organizations and governments working together to create provenance standards to help consumers track the creation and spread of online information.
- Anti-disinformation agencies creating comprehensive programs to educate the public about using anti-deepfake technologies and detecting their use.
- Stricter laws that require advertisers and firms to avoid using (or at least disclose their use of) exaggerated and manipulated photos for marketing.
- Social media platforms being pressured to regulate how influencers are using their filters. In some cases, app filters may be forced to automatically imprint a watermark on edited images before the images are published online.
- Increasing accessibility of deepfake technologies, including more advanced artificial intelligence systems that can make it difficult for people and protocols to detect altered content.
Questions to comment on
- What are some of your country’s regulations on the use of synthetic media, if any?
- How else do you think deepfake content should be regulated?