Gartner published a prediction earlier this year that by 2027, enterprises will suffer more than $10 billion in cumulative losses attributable to AI agent failures. the report got a decent amount of coverage, mostly of the "AI is risky, enterprises should be careful" variety. but i think people are reading it wrong, or at least incompletely.
the interesting thing about that number isn't whether it's right. Gartner numbers are always a little made-up — they're meant to be directionally useful, not actuarially precise. the interesting thing is what it implies about the structure of how those losses will materialize. because "$10 billion in losses from AI agent failures" is not one thing. it's at least four different things, and they have completely different insurance implications.
four ways an AI agent failure costs you money
the first is direct output error. the agent gives a customer wrong information, the customer acts on it, the customer suffers a quantifiable loss and sues. this is the Air Canada chatbot scenario. the damages are relatively easy to measure — there's a specific decision, a specific harm, a specific dollar amount. it maps cleanly onto errors and omissions coverage. the question is just whether your E&O policy was written with this failure mode in mind, and most weren't.
the second is data leakage. the agent retrieves something it shouldn't and exposes it, either to another user, or to a log that gets exfiltrated, or through an output it generates that contains information it wasn't supposed to have access to. this is cyber liability territory and it maps better onto existing coverage — there are established frameworks for breach notification, damage quantification, regulatory fines. but AI-specific leakage has different mechanics than a traditional breach. you don't need to find a vulnerability in the traditional sense. sometimes you just need to ask the right question twice.
the third is unauthorized action. the agent does something the operator didn't intend — sends an email, modifies a record, makes an API call — because someone figured out how to manipulate its instructions. this is genuinely underserved by existing coverage. it's not really a data breach. it's not really an E&O failure. it's closer to employee misconduct, except the employee is software and the misconduct was caused by an external party exploiting the software's design. nobody has a clean policy for this.
the fourth is reputational damage. the agent says something that goes viral for the wrong reasons, or gives advice that turns out to be wrong in a very public way, or gets used in a way that generates regulatory attention. this is the hardest to quantify and the hardest to insure. the $10 billion estimate probably includes a lot of this bucket, which is part of why i think the number should be taken directionally rather than literally.
the thing about loss distributions
to write insurance for any of these, you need to know something about the loss distribution. how often do failures happen (frequency)? when they happen, how bad are they (severity)? are the losses clustered (one catastrophic event) or distributed (many small events)? are different industries exposed differently? can you observe risk characteristics that predict loss frequency?
none of this data exists yet in a form that's useful to an actuary. the industry is too young, the deployments are too recent, and the failure modes are sufficiently novel that historical loss data from software liability doesn't translate cleanly. this is why AI-specific insurance is almost nonexistent as a product — not because insurers don't see the market, but because they can't model it.
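to make concrete what "modeling it" would even mean: the standard actuarial starting point is a frequency-severity model. here's a minimal sketch in python. every parameter is a placeholder i picked for illustration (a poisson frequency, a lognormal severity), because the real values are exactly the data that doesn't exist yet.

```python
import numpy as np

rng = np.random.default_rng(42)

# minimal frequency-severity simulation of one year of AI agent losses.
# all parameters are invented placeholders, not estimates.
N_SIMULATIONS = 100_000
FAILURES_PER_YEAR = 2.0          # assumed Poisson frequency
LOG_MU, LOG_SIGMA = 11.0, 2.0    # assumed lognormal severity (~$60k median loss)

annual_losses = np.zeros(N_SIMULATIONS)
for i in range(N_SIMULATIONS):
    n_failures = rng.poisson(FAILURES_PER_YEAR)
    if n_failures:
        annual_losses[i] = rng.lognormal(LOG_MU, LOG_SIGMA, n_failures).sum()

# the two numbers an underwriter actually cares about:
print(f"expected annual loss: ${annual_losses.mean():,.0f}")
print(f"99th percentile loss: ${np.quantile(annual_losses, 0.99):,.0f}")
```

the gap between those two numbers is the "clustered vs. distributed" question from above, expressed as a tail. an insurer can't price the product until they have some defensible basis for choosing those parameters, and right now nobody does.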
Gartner's $10 billion, whatever its flaws as an estimate, is doing important work in this context. it's giving the insurance industry permission to start treating AI agent liability as a real underwriting category. a number that large — even a made-up number — creates the business case for building actuarial models, which requires data collection, which requires policies in force, which requires someone to start writing coverage even before the models are good. Lloyd's of London wrote aviation policies before the aviation industry fully understood crash risk. the number being imprecise doesn't mean it's not load-bearing.
what the loss taxonomy tells you about product design
if you're building insurance products for AI agents — and a few companies are starting to — the four-bucket taxonomy above has direct implications for how you price and structure coverage.
direct output error is the most insurable in the short term. the loss is quantifiable, the causation chain is legible, and there are analogues in existing E&O products. you can start here. but the premium needs to reflect the use case: an agent giving weather information has a very different loss profile than an agent giving medical advice, even if the underlying model is identical.
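as a toy illustration of what "the premium needs to reflect the use case" means: the pure premium is just frequency times severity, plus a loading for expenses and uncertainty. the numbers below are invented, but the structure is the point. two agents running the identical model get two very different prices.

```python
# toy pure-premium calculation: expected loss times a loading factor.
# the frequencies and severities are invented for illustration only.
USE_CASES = {
    #                 (failures/yr, avg $ loss per failure)
    "weather_info":   (5.0,   500.0),      # frequent failures, trivial harm
    "medical_advice": (0.5,   250_000.0),  # rare failures, severe harm
}
LOADING = 1.4  # expenses + profit + a margin for model uncertainty

for use_case, (frequency, severity) in USE_CASES.items():
    pure_premium = frequency * severity
    print(f"{use_case}: ${pure_premium * LOADING:,.0f}/yr")
```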
data leakage from AI systems is the bucket that most confuses enterprise legal teams right now. they have cyber policies. they think that covers it. it mostly does, but AI-specific leakage mechanisms — context window contamination, RAG retrieval failures, inadvertent training data memorization — often fall into ambiguous coverage language written before these attack vectors existed. the smart play is to either extend existing cyber policies with AI-specific riders or write standalone coverage that addresses these explicitly.
unauthorized action is the genuinely new territory. i don't think there's a clean existing product that covers it. the closest analogues are crime coverage (for losses caused by third parties manipulating your systems) and professional liability (for harm caused by your service behaving outside its intended scope). the right answer is probably a hybrid, and i suspect this is where the first AI-specific policy language that actually does something useful will get written.
the $10 billion as a Schelling point
here's the thing about big round numbers from credible sources: they create coordination. once a Gartner report says "$10 billion," it becomes acceptable for a CFO to put AI liability as a line item in the risk register. it becomes acceptable for a CISO to bring it to the board. it becomes acceptable for a broker to propose a policy. the number may be wrong by an order of magnitude in either direction and it doesn't matter — it's a Schelling point that makes the conversation possible.
the same dynamic happened with cyber insurance in the mid-2000s. before there were authoritative estimates of how much data breaches cost, cyber coverage barely existed as a product. once people started quantifying it — even imprecisely — the market could form. the Gartner number is doing the same thing for AI agent liability. it's not a prediction, it's permission.
the companies that move first on AI liability coverage — both on the insurance side and the compliance side — will set the terms of the market. once there's a standard policy and a standard certification, the companies that have them will close enterprise deals faster and the ones that don't will face more friction. the Gartner number, whatever its accuracy, just made that market formation more likely to happen quickly.