Franken-Algorithms: Algorithms gone rogue
- April 12, 2023
As machine learning (ML) algorithms become more advanced, they are able to learn and adapt to patterns in large datasets on their own. This process, known as "autonomous learning," can result in the algorithm generating its own code or rules to make decisions. The issue with this is that the code generated by the algorithm may be difficult or impossible for humans to understand, making it challenging to pinpoint biases.
Franken-Algorithms context
Franken-Algorithms refer to algorithms (the rules that computers follow when processing data and responding to commands) that have become so complex and intertwined that humans can no longer decipher them. The term is a nod to Mary Shelley's novel Frankenstein, in which the scientist Victor Frankenstein creates a "monster" he cannot control. While algorithms and code are the building blocks of big tech and have made Facebook and Google the influential companies they are today, there is still much about the technology that humans don't understand.
When programmers write code and feed it data, ML allows computers to detect and predict patterns. Big tech claims that algorithms are objective because human emotions and unpredictability do not influence them, yet these algorithms can evolve and write their own rules, sometimes with disastrous results. The rules generated this way are often complex and opaque, making it difficult for researchers or practitioners to interpret the algorithm's decisions or to identify biases in its decision-making process. This roadblock creates significant challenges for businesses that rely on these algorithms, as they may be unable to understand or explain the reasoning behind the decisions made on their behalf.
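This opacity can be illustrated with a toy example. The sketch below (pure Python, with hypothetical hand-picked weights standing in for trained ones) implements a tiny neural network that computes XOR correctly, yet nothing in its nine numeric parameters reads as "XOR": the behavior lives in the weights rather than in legible logic, and production models contain millions or billions of such numbers.

```python
# A minimal sketch of why learned parameters are opaque. The weights below
# are hypothetical, hand-picked values standing in for trained ones.

def step(z):
    """Threshold activation: fires (1) when its input is positive."""
    return 1 if z > 0 else 0

# Two hidden units and one output unit, each defined only by numbers.
W_HIDDEN = [(1.0, 1.0, -0.5),   # (weight_x1, weight_x2, bias)
            (1.0, 1.0, -1.5)]
W_OUT = (1.0, -2.0, -0.5)

def tiny_net(x1, x2):
    """Run two binary inputs through the hidden layer, then the output unit."""
    h = [step(w1 * x1 + w2 * x2 + b) for (w1, w2, b) in W_HIDDEN]
    return step(W_OUT[0] * h[0] + W_OUT[1] * h[1] + W_OUT[2])

# The network computes XOR, but no weight "says" XOR anywhere:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", tiny_net(a, b))  # prints 0, 1, 1, 0 down the rows
```

Reading the nine numbers alone, there is no obvious way to tell what function they encode; the same inscrutability, scaled up enormously, is what makes audit and bias detection so hard.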
Disruptive impact
When Franken-Algorithms go rogue, it can be a matter of life and death. One example is a 2018 accident in which a self-driving car in Arizona struck and killed a woman pushing a bicycle across the road; the car's algorithms failed to correctly identify her as a human. Experts were torn on the root cause of the accident: was the car improperly programmed, or had the algorithm become too complex for its own good? What programmers can agree on, however, is that software companies need an oversight system, starting with a code of ethics.
However, such an ethics code faces pushback from big tech: these companies are in the business of selling data and algorithms and resist regulation and transparency requirements. Another development that has alarmed big tech employees is the growing use of algorithms in the military, such as Google's partnership with the US Department of Defense to incorporate algorithms into military tech, like autonomous drones. This application has led some employees to resign and experts to warn that algorithms are still too unpredictable to be trusted as killing machines.
Another concern is that Franken-Algorithms may perpetuate and even amplify biases present in the datasets they are trained on. These biases can lead to a range of societal harms, including discrimination, inequality, and wrongful arrests. Because of these heightened risks, many tech companies are starting to publish ethical AI guidelines to be transparent about how they develop, use, and monitor their algorithms.
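How training data can bake in bias can be made concrete with a toy sketch. The example below uses a small, entirely synthetic "hiring history" in which equally qualified candidates from group B were hired less often; a naive model that simply estimates hire rates from that history reproduces the disparity at prediction time. All groups, records, and function names here are hypothetical.

```python
# A minimal sketch of bias perpetuation, using synthetic, hypothetical data.

historical = [
    # (group, qualified, hired) -- group "B" was historically under-hired
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", True, True),
]

def hire_rate(group):
    """'Training': estimate P(hired | group) straight from the biased records."""
    outcomes = [hired for g, _, hired in historical if g == group]
    return sum(outcomes) / len(outcomes)

def model_predicts_hire(group, threshold=0.5):
    """The naive model never looks at qualifications, only at group history."""
    return hire_rate(group) >= threshold

print(hire_rate("A"))            # 0.75
print(hire_rate("B"))            # 0.25
print(model_predicts_hire("A"))  # True
print(model_predicts_hire("B"))  # False -- same qualifications, worse outcome
```

The model is "objective" in the sense that no human intervenes at prediction time, yet it systematically disadvantages group B because the historical data it learned from already did.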
Wider implications for Franken-Algorithms
Potential implications of Franken-Algorithms include:
- Development of autonomous systems that can make decisions and take actions without human oversight, raising concerns about accountability and safety. However, such algorithms may reduce the costs of developing software and robotics that can automate human labor across most industries.
- More scrutiny on how algorithms can automate military technology and support autonomous weapons and vehicles.
- Increased pressure for governments and industry leaders to implement an algorithm code of ethics and regulations.
- Franken-Algorithms disproportionately impacting certain demographic groups, such as low-income communities or minority populations.
- Franken-Algorithms could perpetuate and amplify discrimination and bias in decision-making, such as hiring and lending decisions.
- These algorithms being used by cybercriminals to monitor and exploit weaknesses in systems, particularly in financial institutions.
- Political actors using rogue algorithms to automate marketing campaigns using generative AI systems in ways that can influence public opinion and sway elections.
Questions to consider
- How do you think algorithms will further develop in the future?
- What can governments and companies do to control Franken-Algorithms?