Algorithms target people: When machines are turned against minorities
- February 8, 2023
When trained correctly, facial recognition systems can undoubtedly be put to good use, such as helping find missing children and detecting abuse. However, in some high-profile cases, the algorithms powering these systems have been trained on data from vulnerable populations without their consent.
Algorithms targeting people context
Image recognition systems, such as facial recognition technology (FRT), are challenging to develop because the algorithms driving them need large datasets to produce accurate results. For example, the Multiple Encounter Dataset includes two extensive photo collections: images of people who have not committed any crimes and images of deceased individuals. The dataset contains a greater proportion of images of people from minority groups than is representative of the general population. If law enforcement uses this data to train algorithms, the results are likely to be biased, as the sketch below illustrates.
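As a rough illustration of that point, the short Python sketch below compares the demographic make-up of a hypothetical training set against a population baseline and flags over-represented groups. It is not drawn from any dataset named in this article; the group labels, counts, and population shares are invented for the example.

```python
# Illustrative sketch (hypothetical data): compare the demographic make-up of a
# face-image training set against a population baseline and flag groups that
# appear far more often in the training data than in the general population.
from collections import Counter

# Hypothetical demographic label for each image in a training set.
training_labels = ["group_a"] * 550 + ["group_b"] * 300 + ["group_c"] * 150

# Hypothetical share of each group in the general population.
population_share = {"group_a": 0.60, "group_b": 0.13, "group_c": 0.27}

counts = Counter(training_labels)
total = sum(counts.values())

for group, baseline in population_share.items():
    observed = counts.get(group, 0) / total
    ratio = observed / baseline
    flag = "OVER-REPRESENTED" if ratio > 1.25 else ""
    print(f"{group}: {observed:.0%} of training data vs {baseline:.0%} of population "
          f"({ratio:.2f}x) {flag}")
```

In this invented example, group_b makes up 30 percent of the training images but only 13 percent of the population, which is the kind of skew that can push a trained model toward biased outcomes for that group.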
Many of these images are used without the pictured person's consent, especially in the case of children. The Child Exploitation Image Analytics program has been used since 2016 by facial recognition technology developers to test their algorithms. It contains pictures of infants through adolescents, most of whom are victims of coercion, abuse, and sexual assault, as stated in the project's documentation.
According to a 2019 report by the news site Slate, the US' Facial Recognition Verification Testing program relies heavily on images of children who have been victims of child pornography, US visa applicants (particularly those from Mexico), and people who were arrested and have since died. Some images came from a Department of Homeland Security (DHS) study in which DHS employees photographed regular travelers for research purposes. In addition, most of the photos used in this program are of people who have been suspected of criminal activity.
Disruptive impact
China has also used FRT algorithms to target minorities, particularly the Uyghur Muslim community. In 2021, the BBC reported that a camera system using AI and facial recognition to detect emotions had been tested on Uyghurs in Xinjiang. Surveillance is a daily reality for people living in the region, which is also home to "re-education centers" (described by human rights groups as high-security detention camps), where an estimated one million people have been held.
Without proper government regulation, anyone could be used as a test subject for the facial recognition industry. People's most vulnerable moments might be captured and then further exploited by the very government sectors that are supposed to protect the public. Additionally, some datasets are released to the public, allowing private citizens or corporations to download, store, and use them.
It is impossible to tell how many commercial systems rely on this data, but numerous academic projects are known to. Bias in source datasets creates issues when software is "trained" for a specific task (for example, identifying faces) or "tested" for performance; the sketch after this paragraph shows one way such bias can surface during testing. After several incidents in which people were discriminated against because of the color of their skin or other characteristics, there have been many calls for better regulation of how these machine learning (ML) systems are trained. According to Analytics Insight, only Belgium and Luxembourg have completely banned FRT in their territories.
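To make the "testing" point concrete, the following Python sketch shows how an audit might compare a hypothetical face-matching model's false-match rate across demographic groups. The match records below are invented rather than taken from any real FRT system; a real audit would use logged comparison decisions and ground-truth identities.

```python
# Illustrative sketch (hypothetical data): measure how often a face-matching
# model wrongly declares two different people a "match", broken down by
# demographic group. Large gaps between groups indicate biased performance.
from collections import defaultdict

# Each record: (demographic_group, model_said_match, is_actually_same_person)
records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", True, True),
    ("group_b", False, False), ("group_a", False, False),
]

false_matches = defaultdict(int)      # model said "match" but the people differ
non_mated_trials = defaultdict(int)   # comparisons between two different people

for group, predicted_match, same_person in records:
    if not same_person:
        non_mated_trials[group] += 1
        if predicted_match:
            false_matches[group] += 1

for group in sorted(non_mated_trials):
    fmr = false_matches[group] / non_mated_trials[group]
    print(f"{group}: false-match rate {fmr:.0%} "
          f"over {non_mated_trials[group]} non-mated comparisons")
```

A disparity like the one this toy data produces (a much higher false-match rate for one group than another) is exactly the kind of result that has fueled calls for mandatory per-group evaluation of FRT systems before deployment.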
Implications of algorithms targeting people
Wider implications of algorithms targeting people may include:
- Human rights groups lobbying their respective governments to limit the use of FRT, including access to the images.
- Governments working together to create a global standard that would clearly define how FRT tools will be trained and employed.
- Increased lawsuits against companies that use illegal training data from minority groups.
- Some countries extending their implementation of FRT algorithms to monitor and control their respective populations.
- More biased AI tools that target specific ethnic groups, sexual orientations, and religious affiliations.
Questions to comment on
- What are some ways that algorithms are being used to target specific groups in your community, if any?
- How else can AI be used to oppress or exploit vulnerable populations?
Insight references
The following popular and institutional links were referenced for this insight: