Artificial intelligence bias: Machines are not as objective as we hoped

Image credit: iStock

Everyone agrees that AI should be unbiased, but removing biases is proving problematic
    • Author: Quantumrun Foresight
    • February 8, 2022

    Insight summary



    While data-driven technologies hold the promise of a fairer society, they often reflect the same biases humans harbor, leading to potential injustices. Biases in artificial intelligence (AI) systems can inadvertently reinforce harmful stereotypes. Efforts are underway to make these systems more equitable, but doing so raises hard questions about the trade-off between utility and fairness, and about the need for thoughtful regulation and diverse tech teams.



    AI bias context



    The hope is that data-driven technologies will help humanity build a society where fairness is the norm for all. The current reality paints a different picture, however. Many of the human biases that have produced injustices in the past are now mirrored in the algorithms that govern our digital world. These biases often stem from the prejudices of the people who develop the systems, prejudices that frequently seep into their work.



    Take, for instance, ImageNet, a project that crowdsourced the labeling of millions of images for training machine learning systems. In 2012, a large neural network trained on this data was able to identify objects with impressive accuracy. Upon closer inspection, however, researchers discovered biases hidden within the ImageNet data. In one case, an algorithm trained on the data was biased toward the assumption that all software programmers are white men.



    This bias could result in women being overlooked for such roles when hiring is automated. The bias entered the data set because annotators labeling images of "woman" sometimes attached an additional, derogatory tag. The example illustrates how biases, whether intentional or unintentional, can infiltrate even the most sophisticated AI systems, perpetuating harmful stereotypes and inequalities.
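    To make the failure mode concrete, the sketch below audits a hypothetical set of crowdsourced annotations for exactly this kind of skew. The data, field names, and tags are illustrative assumptions, not actual ImageNet records; the point is that a simple tag count per category can surface a stereotype before any model is trained on the data.

        from collections import Counter

        # Hypothetical crowdsourced annotations: each record pairs an image's
        # subject category with the free-text tags annotators attached to it.
        annotations = [
            {"category": "programmer", "tags": ["man", "office"]},
            {"category": "programmer", "tags": ["man", "laptop"]},
            {"category": "programmer", "tags": ["woman", "laptop"]},
            # ... thousands more records in a real audit
        ]

        def tag_distribution(records, category):
            """Count how often each tag co-occurs with a given category."""
            counts = Counter()
            for record in records:
                if record["category"] == category:
                    counts.update(record["tags"])
            return counts

        # A heavily skewed count (e.g., "man" vastly outnumbering "woman"
        # under "programmer") is one early signal that a model trained on
        # these labels may absorb the same stereotype.
        print(tag_distribution(annotations, "programmer"))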



    Disruptive impact 



    Efforts to address bias in data and algorithms have been initiated by researchers across various public and private organizations. In the case of the ImageNet project, for instance, crowdsourcing was employed to identify and eliminate labeling terms that cast a derogatory light on certain images. These measures demonstrated that it is indeed possible to reconfigure AI systems to be more equitable.
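    The mechanics of such a cleanup can be straightforward. The following sketch assumes a hypothetical blocklist of tags flagged by crowdsourced reviewers and strips them from the annotations; the record format and placeholder terms are illustrative, not drawn from the ImageNet pipeline itself.

        # Hypothetical blocklist assembled from crowdsourced review.
        flagged_tags = {"offensive_term_1", "offensive_term_2"}

        def scrub(records, blocklist):
            """Remove blocklisted tags; drop records left with no tags."""
            cleaned = []
            for record in records:
                kept = [tag for tag in record["tags"] if tag not in blocklist]
                if kept:
                    cleaned.append({**record, "tags": kept})
            return cleaned

        records = [
            {"category": "person", "tags": ["woman", "offensive_term_1"]},
            {"category": "person", "tags": ["man", "outdoors"]},
        ]
        print(scrub(records, flagged_tags))
        # [{'category': 'person', 'tags': ['woman']},
        #  {'category': 'person', 'tags': ['man', 'outdoors']}]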



    However, some experts argue that removing bias could potentially render a data set less effective, particularly when multiple biases are at play. A data set stripped of certain biases may end up lacking sufficient information for effective use. It raises the question of what a truly diverse image data set would look like, and how it could be used without compromising its utility.



    This trend underscores the need for a thoughtful approach to the use of AI and data-driven technologies. For companies, this might mean investing in bias-detection tools and promoting diversity in tech teams. For governments, it could involve implementing regulations to ensure fair use of AI. 
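    One concrete shape a bias-detection tool can take is a fairness metric computed over a system's decisions. The sketch below computes the demographic parity difference, the gap in positive-outcome rates between groups, for a hypothetical automated screening run; the data is invented, and real audits typically weigh several such metrics together.

        def demographic_parity_difference(decisions, groups):
            """Gap in positive-outcome rates between the best- and
            worst-treated groups (0.0 means parity).

            decisions: 0/1 model outputs (e.g., 1 = advance the candidate)
            groups:    group label for each decision, in the same order
            """
            rates = {}
            for group in set(groups):
                outcomes = [d for d, g in zip(decisions, groups) if g == group]
                rates[group] = sum(outcomes) / len(outcomes)
            return max(rates.values()) - min(rates.values())

        # Hypothetical screening results: group "a" advances at a rate of
        # 0.75 and group "b" at 0.25, so the gap of 0.5 flags the system
        # for review before deployment.
        decisions = [1, 1, 0, 1, 0, 0, 1, 0]
        groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
        print(demographic_parity_difference(decisions, groups))  # 0.5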



    Implications of AI bias



    Wider implications of AI bias may include:




    • Organizations being proactive in ensuring fairness and non-discrimination as they leverage AI to improve productivity and performance. 

    • Having an AI ethicist in development teams to detect and mitigate ethical risks early in a project. 

    • Designing AI products with diversity factors such as gender, race, class, and culture clearly in mind.

    • Getting representatives from the diverse groups that will be using a company’s AI product to test it before it is released.

    • Access to various public services being unfairly restricted for some members of the public.

    • Certain members of the public being unable to access or qualify for certain job opportunities.

    • Law enforcement agencies and professionals unfairly targeting certain segments of society.



    Questions to consider




    • Are you optimistic that automated decision-making will be fair in the future?

    • What about AI decision-making makes you the most nervous?


    Insight references

    The following popular and institutional links were referenced for this insight: