Recognition privacy: Can online photos be protected?

Image credit: iStock


Researchers and companies are developing new technologies to help individuals protect their online photos from being used in facial recognition systems.
    • Author: Quantumrun Foresight
    • November 4, 2022

    Insight summary



    As facial recognition technology (FRT) becomes widespread, various groups have tried to limit its efficacy to preserve privacy. While outmaneuvering facial recognition systems isn’t always possible, researchers have begun experimenting with ways to confuse online apps that scrape and gather photos for facial recognition engines. These methods include using artificial intelligence (AI) to add "noise" to images and deploying cloaking software.



    Recognition privacy context



    Facial recognition technology is increasingly utilized by various sectors, including law enforcement, education, retail, and aviation, for purposes ranging from identifying criminals to surveillance. For example, in New York, facial recognition has helped investigators make numerous arrests and uncover cases of identity theft and fraud, particularly since 2010. However, this growing adoption also raises questions about privacy and the ethical use of the technology.



    In border security and immigration, the US Department of Homeland Security employs facial recognition to verify the identities of travelers entering and leaving the country. This is done by comparing travelers' photographs with existing images, such as those found in passports. Similarly, retailers are adopting facial recognition to identify potential shoplifters by comparing customers' faces against a database of known offenders. 



    Despite the practical benefits, the expanding use of facial recognition technologies has sparked concerns about privacy and consent. A notable example is the case of Clearview AI, a company that amassed billions of images from social media platforms and the internet, without explicit permission, to train its facial recognition system. This practice highlights the thin line between public and private domains, as individuals who share their photographs online often have limited control over how these images are used. 



    Disruptive impact



    In 2020, researchers at the University of Chicago released a software tool called Fawkes. Fawkes protects against facial recognition by “cloaking” photos to deceive deep learning systems, while making changes too small for the human eye to notice. The tool targets only systems that harvest personal images without permission and does not affect models built with legitimately obtained pictures, such as those used by law enforcement.
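
    How might such cloaking work under the hood? The Python sketch below is a minimal illustration of the general idea, not Fawkes’ actual implementation: it repeatedly nudges an image so that a feature extractor’s embedding drifts away from the original face, while capping each pixel change at an invisible budget. The ResNet-18 backbone, step count, and perturbation bound are illustrative assumptions standing in for a real face-recognition model.

```python
# Minimal sketch of the cloaking idea (not Fawkes itself): push an
# image's embedding away from the original while keeping per-pixel
# changes within a small, imperceptible budget.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# Stand-in feature extractor; a real system would use a face model.
extractor = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor.fc = torch.nn.Identity()  # keep the 512-d penultimate features
extractor.eval()

to_tensor = transforms.Compose([transforms.Resize((224, 224)),
                                transforms.ToTensor()])

def cloak(path: str, steps: int = 50, eps: float = 0.03, lr: float = 0.01):
    """Return a perturbed copy of the image whose embedding moves away
    from the original's, with pixel changes bounded by eps."""
    x = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        original = extractor(x)              # embedding to move away from
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = extractor((x + delta).clamp(0, 1))
        loss = -F.mse_loss(emb, original)    # maximize embedding distance
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-eps, eps)         # imperceptibility budget
    return (x + delta).clamp(0, 1).detach()
```

    In practice, tools like Fawkes optimize against face-specific feature extractors and shape the perturbation far more carefully, but the push-the-embedding-away principle is the same.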



    Fawkes can be downloaded from the project website, and anyone can use it in a few simple steps. The cloaking software takes only a few moments to process photos before users post them publicly, and it is available for both macOS and Windows.



    In 2021, Israel-based tech company Adversa AI created an algorithm that adds noise, or minor alterations, to photos of faces, causing facial scanning systems to detect a different face altogether. The algorithm subtly changes an individual’s image so it is recognized as someone else of their choosing (e.g., Adversa AI’s CEO deceived an image search system into identifying him as Tesla’s Elon Musk). The technique is notable because it was created without detailed knowledge of the target FRT’s algorithms, meaning an individual can use the tool against other facial recognition engines as well.
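
    Where Fawkes-style cloaking is untargeted (it only pushes an embedding away from the original), the Adversa AI demonstration is targeted: the perturbation pulls a photo’s embedding toward a chosen identity. The sketch below reuses the extractor and helper from the previous example to show that difference. Note that it is a white-box illustration relying on the model’s own gradients, whereas Adversa AI reportedly achieved the effect black-box, without access to the target system’s internals.

```python
# Targeted variant of the sketch above (reuses `extractor` and
# `to_tensor`): pull the source photo's embedding toward a chosen
# target identity's embedding. White-box illustration only; Adversa
# AI's demonstration worked without the target model's gradients.
def impersonate(src_path: str, target_path: str,
                steps: int = 100, eps: float = 0.05, lr: float = 0.01):
    src = to_tensor(Image.open(src_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        goal = extractor(
            to_tensor(Image.open(target_path).convert("RGB")).unsqueeze(0))
    delta = torch.zeros_like(src, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = extractor((src + delta).clamp(0, 1))
        loss = F.mse_loss(emb, goal)         # minimize distance to target
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-eps, eps)         # keep the change subtle
    return (src + delta).clamp(0, 1).detach()
```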



    Implications of recognition privacy



    Wider implications of recognition privacy may include: 




    • Social media and other content-based platforms incorporating recognition privacy technologies.

    • Smartphones, laptops, and cameras shipping with programs that can cloak users’ photos, increasing user privacy.

    • An increasing number of startups developing biometric camouflage or programs to restrict FRT detection. 

    • More national and local governments implementing laws that restrict or ban FRTs in public surveillance.

    • More lawsuits against facial recognition systems that illegally scrape private images, including efforts to hold social media companies accountable for inadequate security measures.

    • A growing movement of citizens and organizations that lobby against the increasing use of FRTs.



    Questions to consider




    • What can be done to balance the use of facial recognition systems with individuals’ right to privacy?

    • How do you use facial recognition at work and in your daily life?

