Artificial intelligence bias: Machines are not as objective as we hoped

Everyone agrees that AI should be unbiased, but removing biases is proving problematic
    • Author: Quantumrun Foresight
    • February 8, 2022

    The hope is that data-driven technologies will help humanity create a more equitable society in which everyone is treated fairly. In reality, however, many of the same human biases that have produced injustice to date are embedded in the algorithms that operate our digital world.

    AI bias general context

    The biases present in artificial intelligence (AI) systems often stem from the conscious and unconscious biases of the people who develop those systems, biases that infiltrate their work. And even when algorithms are designed to be free of bias, they typically learn about the world from large data sets, and those data sets often carry biases of their own.

    For example, in 2012, a large neural network trained on ImageNet, a project that crowdsourced the labeling of images for training machine learning systems, was able to identify objects with great accuracy. However, researchers later found biases lurking in the ImageNet data. In one instance, an algorithm trained on the data assumed that software programmers are white men, an association that could lead to women being overlooked for such positions once hiring is automated. The bias entered the data set because annotators labeling images of women sometimes attached derogatory terms alongside the label “woman.”
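    The kind of skew described above can be surfaced with a simple audit of how a label co-occurs with demographic attributes in the training data. The sketch below is illustrative only: the function name, field names, and toy data are assumptions, not part of the ImageNet project or the research described here.

```python
from collections import Counter

def audit_label_skew(samples, target_label, group_key):
    """Return the share of each demographic group among samples
    carrying target_label, to surface skewed representation."""
    counts = Counter(s[group_key] for s in samples if target_label in s["labels"])
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical toy data: if most "programmer" images depict one group,
# a model trained on them may learn that association as a rule.
data = [
    {"labels": {"programmer"}, "gender": "male"},
    {"labels": {"programmer"}, "gender": "male"},
    {"labels": {"programmer"}, "gender": "male"},
    {"labels": {"programmer"}, "gender": "female"},
]
print(audit_label_skew(data, "programmer", "gender"))
# {'male': 0.75, 'female': 0.25}
```

    An audit like this only reveals the skew; deciding whether to rebalance, relabel, or discard affected samples is the harder judgment call discussed below.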

    Disruptive impact 

    Researchers in various public and private organizations have since taken steps to address these and other sources of bias in the data and core algorithms behind their systems. For the example noted above, researchers used crowdsourcing to identify and remove labeling terms that projected derogatory meanings onto certain images. These and other steps showed that AI could be reengineered to be fairer.

    However, removing bias is a complex task and can create new problems. Experts note that stripping out bias can make a data set less useful, especially when multiple biases interact: a data set scrubbed of certain biases may end up with too little information to work with. In fact, it is not clear what a truly diverse image data set would even look like.

    Applications to mitigate AI bias

    The problem with AI bias can be addressed by:

    • Organizations being proactive in ensuring fairness and non-discrimination as they leverage AI to improve productivity and performance. 
    • Having an AI ethicist in development teams to detect and mitigate ethical risks early in a project. 
    • Designing AI products with diversity factors such as gender, race, class, and culture clearly in mind.
    • Getting representatives from the diverse groups that will be using a company’s AI product to test it before it is released.
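    One concrete form the testing step above can take is checking a fairness metric before release, such as comparing favorable-outcome rates across groups (demographic parity). This is a minimal sketch under assumed inputs; the function name, group labels, and data are hypothetical, not drawn from the article.

```python
def demographic_parity_gap(outcomes):
    """outcomes: dict mapping group -> list of binary decisions (1 = favorable).
    Returns the gap between the highest and lowest favorable rates,
    plus the per-group rates; a large gap is a common red flag."""
    rates = {group: sum(d) / len(d) for group, d in outcomes.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical test-phase decisions from an automated screening tool.
gap, rates = demographic_parity_gap({
    "group_a": [1, 1, 1, 0],   # 75% favorable
    "group_b": [1, 0, 0, 0],   # 25% favorable
})
print(gap)  # 0.5
```

    A single metric cannot prove a system is fair, but a check like this run by testers from the affected groups can flag disparities early enough to fix before release.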

    However, left unaddressed, AI bias could lead to:

    • Various public services being withheld from certain members of the public.
    • Certain members of the public being unable to access or qualify for certain job opportunities.
    • Law enforcement agencies and professionals unfairly targeting certain members of society more than others. 

    Questions to comment on

    • Are you optimistic that automated decision-making will be fair in the future?
    • What about AI decision-making makes you the most nervous?
