The insurance industry understands risk better than almost anyone. Yet several major insurers now say artificial intelligence may be too unpredictable to insure.
AIG, Great American, and WR Berkley have asked U.S. regulators for permission to exclude AI-related liabilities from corporate policies.
They argue that modern AI systems produce results that even experts cannot fully explain, which makes AI a risk they cannot reliably measure.

Incidents
Recently, Google’s AI Overview feature wrongly suggested that a solar company faced legal trouble.
That mistake prompted a lawsuit seeking more than $100 million and showed how a single false AI-generated claim can cause major financial harm.
Air Canada faced a similar problem: its customer service chatbot invented a discount that a customer relied on.
A tribunal ruled that the airline had to honor it, a reminder that even small errors can become expensive.
A case in Hong Kong also raised alarms. Fraudsters created a digital clone of a senior executive and used it during a video call to trick an employee.
The call looked and sounded so convincing that the employee authorized transfers, costing the company $25 million. The incident showed how easily AI-generated fakes can bypass traditional security checks.
The Core Issue
Despite these large losses, insurers say a single major payout is not their greatest concern; they can handle one claim worth hundreds of millions of dollars.
What they cannot absorb is thousands of smaller claims stemming from the same faulty system. That correlated, systemic risk is what worries them most.
If a widely used AI model fails at scale, thousands of businesses could suffer losses at the same moment.
An executive at Aon put it plainly: the industry can absorb a single $400 million loss, but it cannot absorb 10,000 losses triggered at once by the same malfunctioning AI tool.
Impact on AI Adoption
If insurers begin excluding AI from coverage, companies may have to carry these risks on their own balance sheets.
That could shape which AI tools businesses adopt and how quickly they deploy them.
Small firms may hesitate to rely on AI without reliable insurance protection, and large firms may need stronger oversight to manage the technology.
Developers may face pressure to improve model transparency and reduce failure rates.

