Artificial Intelligence liability insurance: Who should be responsible when AI fails?
- August 5, 2022
Insight summary
As businesses integrate artificial intelligence and machine learning (AI/ML), specialized insurance policies are emerging to address unique risks like data corruption and model theft. These AI/ML-specific policies differ from traditional cyber insurance, targeting issues beyond digital system failures, such as AI-induced physical harm. The growing use of AI is prompting new legal frameworks, specialized job roles, and cross-industry standards, influencing everything from consumer protection to AI research priorities.
AI liability insurance context
Businesses are increasingly integrating AI/ML into their operations, prompting the emergence of insurance policies designed to mitigate risks unique to these technologies. While still nascent, AI/ML-specific policies are becoming essential as businesses expand their use of AI. Their exact coverage is not yet settled, but they are expected to address issues such as data corruption, theft of intellectual property embodied in AI models, and losses from adversarial attacks.
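That last risk category can be made concrete. The sketch below is a minimal illustration assuming the PyTorch framework, with `model`, `x`, and `label` as placeholder objects; it shows the fast gradient sign method, one well-known way an attacker can subtly perturb an input so a model misclassifies it:

```python
import torch

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Fast Gradient Sign Method: nudge input x in the direction that
    most increases the model's loss, so the model misclassifies it."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Shift each input feature by +/- epsilon along the sign of the loss gradient.
    return (x + epsilon * x.grad.sign()).detach()
```

A fraud model waved past a manipulated transaction, or a vision system fooled by an altered image, produces exactly the kind of loss that traditional cyber policies were never drafted to price.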
There is a distinct difference between general cyber insurance and AI/ML-specific insurance. Traditional cyber insurance typically addresses digital system failures, covering business interruption and liabilities arising from information security and privacy breaches. AI/ML-specific insurance, by contrast, focuses on the unique risks posed by AI technologies; general cyber insurance, for example, might not cover AI system failures that cause physical harm or significant damage to a brand.
Instances like the accident involving Uber's self-driving car in Arizona, which resulted in a pedestrian's death, highlight the need for AI-specific coverage. Traditional cyber insurance, rooted in financial lines insurance, often excludes such liabilities. Similarly, when Blackberry Security's AI-based antivirus engine misclassified a harmful ransomware strain as benign, the potential brand damage from such an incident would not typically be covered by conventional cyber insurance.
Disruptive impact
Artificial intelligence systems are not perfect; they might misidentify a face or cause an autonomous vehicle to crash into another car. When this happens, the system is described as “biased” or “error-prone,” and such potential errors are why it is crucial to understand how AI works. When something goes wrong with an AI system, who is responsible: the people who built it or those who deployed it? This is a challenging question in an emerging field of law. Sometimes liability is clear, and sometimes it is not; for example, who is responsible if an AI system makes a financial decision that loses money? These are the cases where AI/ML insurance may help identify or clarify liabilities.
There is considerable debate about how AI should be held accountable for its actions. Some industry analysts suggest that AI be granted legal personhood so that it can own assets and be sued in court. Another proposal is for governments to require that experts approve algorithms of a particular scale before they are used publicly. These oversight teams could ensure that new commercial or public sector algorithms meet specific standards.
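As a thought experiment only, such an approval process might track each algorithm with a structured registry entry; the fields and names below are hypothetical illustrations, not drawn from any actual regulator's scheme:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AlgorithmApprovalRecord:
    """Hypothetical registry entry an oversight body might keep for each
    commercial or public-sector algorithm above a given scale."""
    system_name: str
    operator: str
    intended_use: str
    risk_tier: str                      # e.g., "high" under an EU-style scheme
    audits_passed: list[str] = field(default_factory=list)
    approved_on: Optional[date] = None  # stays None until reviewers sign off

record = AlgorithmApprovalRecord(
    system_name="loan-scoring-v4",
    operator="ExampleBank",
    intended_use="consumer credit decisions",
    risk_tier="high",
)
print(record.approved_on is None)  # True: not yet cleared for public use
```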
Some jurisdictions, such as the US and the European Union, have released rules that govern how AI can be held accountable. In the US, regulators now apply the Fair Credit Reporting Act and the Equal Credit Opportunity Act to AI-powered automated decision-making, prohibiting deceptive and discriminatory practices. Meanwhile, the European Union released a proposal for the Artificial Intelligence Act in 2021, a comprehensive legal framework focused on oversight of high-risk AI applications. In this landscape, AI liability insurance may become necessary to maintain public trust and could be mandated for operators of high-risk AI systems.
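The EU proposal's central idea is a risk-tiered set of obligations. The sketch below simplifies that classification logic; the category lists are abbreviated illustrations, not the Act's full annexes:

```python
# Simplified illustration of the risk tiers in the 2021 EU AI Act proposal.
# The example use cases below are abbreviated, not the full legal annexes.
UNACCEPTABLE = {"social scoring by public authorities", "subliminal manipulation"}
HIGH_RISK = {"credit scoring", "hiring", "medical devices", "critical infrastructure"}
LIMITED_RISK = {"chatbots", "deepfake generation"}  # transparency duties only

def classify_ai_use(use_case: str) -> str:
    if use_case in UNACCEPTABLE:
        return "prohibited"
    if use_case in HIGH_RISK:
        return "high risk: conformity assessment, logging, human oversight"
    if use_case in LIMITED_RISK:
        return "limited risk: disclose the use of AI to end users"
    return "minimal risk: voluntary codes of conduct"

print(classify_ai_use("credit scoring"))
```

Operators of systems landing in the high-risk tier are the most likely early buyers of AI liability coverage.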
Implications of AI liability insurance
Wider implications of AI liability insurance may include:
- Insurance companies offering extensive AI/ML liability insurance plans, leading to improved risk management for emerging technologies like self-driving cars, AI in healthcare, and automated financial systems (a toy pricing sketch follows this list).
- Governments enacting precise AI liability laws, resulting in stricter regulations for foreign AI service providers and potentially hefty fines for multinational companies in the event of AI-related incidents.
- Businesses establishing dedicated teams for AI oversight, enhancing accountability and safety in AI deployments through the involvement of data scientists, security experts, and risk managers.
- The formation of cross-industry associations to set AI liability standards, contributing to responsible AI use and influencing environmental, social, and governance (ESG) metrics for investor guidance.
- Increased public skepticism towards AI, driven by high-profile AI failures, leading to a cautious approach towards AI adoption and demand for greater transparency in AI operations.
- A rise in specialized legal practices focused on AI-related cases, offering expertise in navigating the complexities of AI technology and its societal impacts.
- Enhanced consumer protection frameworks to address AI-related harms, ensuring fair compensation and rights for individuals affected by AI errors or malfunctions.
- An evolution in job roles and skill requirements, with a growing need for professionals trained in AI ethics, risk assessment, and legal compliance.
- A shift in AI research priorities, emphasizing the development of safer and more reliable AI systems in response to increased liability concerns and insurance requirements.
- Changes in AI education and training programs, focusing on ethical AI development and risk management to prepare future professionals for the evolving landscape of AI and its implications.
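How an insurer might price such a policy can be sketched with a standard expected-loss calculation; the probability, severity, and loading figures below are invented purely for illustration:

```python
def annual_premium(p_incident: float, avg_loss: float, loading: float = 0.3) -> float:
    """Toy actuarial pricing: expected loss plus a loading for expenses,
    profit, and the extra uncertainty of a new risk class like AI failure."""
    return p_incident * avg_loss * (1 + loading)

# Invented example: a 2% annual chance of an AI failure costing $5M on average.
print(f"${annual_premium(0.02, 5_000_000):,.0f}")  # -> $130,000
```

Because loss histories for AI failures are still thin, real insurers would lean heavily on that loading factor until claims data accumulates.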
Questions to consider
- Who do you think should be held responsible for AI system failures?
- How do you think the insurance industry would adapt to increasing AI liabilities?