In a survey of 240+ insurance executives last year, 90% favored AI adoption, yet only 22% have anything running in production. That gap is structural, and it is specific to insurance in a way that generic change-management framing misses.
Agentic AI works by executing workflows, so its output quality is a direct function of workflow quality. Feed it the way a 20-year account manager handles a renewal (steps that were never written down, vary by client, and live nowhere except muscle memory) and the output becomes unreliable in ways that carry real liability risk for the agency. Most agency workflows were never built to be documented. Agencies grew as relationship businesses, and the process lived with the person. A senior account manager's renewal workflow and a junior one's are often genuinely different, and neither version exists anywhere in writing.
The data problem is equally structural. Insurance is an industry where 64% of the most valuable information arrives as unstructured PDFs, loss runs, and scanned submissions. ACORD, the closest thing insurance has to a shared data language across carriers and systems, is interpreted differently by every platform that claims to support it. Downstream, an agentic workflow completing a commercial lines submission is working with inputs that are inconsistently structured at the source.
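To make that inconsistency concrete, here is a minimal sketch of the same policy record as two platforms might export it. The field names, formats, and mapping table are hypothetical (they are not actual ACORD element names), but the shape of the problem is the one described above: an agent has to normalize before it can reason.

```python
from datetime import date, datetime

# The "same" policy as two agency management systems might export it.
# Field names and formats are hypothetical, not actual ACORD elements.
platform_a = {"PolicyNumber": "CPP-0042", "EffDt": "2025-03-01"}
platform_b = {"policy_no": "CPP-0042", "effective_date": "03/01/2025"}

# Map each platform's field names onto one canonical schema.
FIELD_MAP = {
    "PolicyNumber": "policy_number", "policy_no": "policy_number",
    "EffDt": "effective_date", "effective_date": "effective_date",
}

def parse_date(raw: str) -> date:
    """Accept the date formats seen at the source; return one type."""
    for fmt in ("%Y-%m-%d", "%m/%d/%Y"):
        try:
            return datetime.strptime(raw, fmt).date()
        except ValueError:
            continue
    raise ValueError(f"unrecognized date format: {raw}")

def normalize(record: dict) -> dict:
    """Rename fields to the canonical schema and coerce types."""
    out = {FIELD_MAP[k]: v for k, v in record.items()}
    out["effective_date"] = parse_date(out["effective_date"])
    return out

# Only after normalization do the two exports agree.
assert normalize(platform_a) == normalize(platform_b)
```

The point of the sketch is that this mapping layer is agency-specific work; no model capability substitutes for it.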
None of this is an argument against AI in insurance. One agency recently reported that commission reconciliation (matching carrier payments against its book of business) dropped from two days of manual work to five minutes once an agent took it over. That is a task with a clearly defined input, a known correct output, and no ambiguity in the business rules. Coverage gap analysis follows the same pattern: an agent can run one structured comparison across an entire book of business at once, something a senior producer could only ever do one client at a time.
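Reconciliation works precisely because the business rule is arithmetic. A minimal sketch, with hypothetical data shapes, of what that defined-input, known-output task looks like:

```python
# Commission reconciliation sketch: match carrier statement lines
# against the agency's book of business and flag discrepancies.
# Policy numbers, premiums, and rates below are illustrative only.
book = {
    "CPP-0042": {"premium": 12_000, "commission_rate": 0.10},
    "WC-0107":  {"premium": 8_000,  "commission_rate": 0.12},
    "BOP-0033": {"premium": 5_000,  "commission_rate": 0.10},
}

carrier_statement = [
    ("CPP-0042", 1_200.00),  # matches expected commission
    ("WC-0107",  900.00),    # short-paid: expected 960.00
    ("GL-9999",  150.00),    # policy not in our book
]

def reconcile(book: dict, statement: list, tolerance: float = 0.01) -> list:
    """Return (policy, problem) pairs; an empty list means a clean match."""
    issues, paid = [], set()
    for policy, amount in statement:
        if policy not in book:
            issues.append((policy, "unknown policy"))
            continue
        paid.add(policy)
        expected = book[policy]["premium"] * book[policy]["commission_rate"]
        if abs(expected - amount) > tolerance:
            issues.append((policy, f"expected {expected:.2f}, paid {amount:.2f}"))
    for policy in book.keys() - paid:  # in the book, absent from the statement
        issues.append((policy, "no payment on statement"))
    return sorted(issues)

for policy, problem in reconcile(book, carrier_statement):
    print(policy, "->", problem)
```

Everything here is deterministic: the rules fit in one function, which is exactly why the task is safe to hand to an agent.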
The pattern across every deployment that works is consistent. There is a tightly scoped task, a clean data source, and a defined set of business rules governing the output.
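Coverage gap analysis fits the same template. A sketch, with an illustrative (not underwriting-grade) checklist, of running one structured comparison across a whole book at once:

```python
# Book-wide coverage gap check: compare each client's bound coverages
# against a required checklist for their business type.
# Checklist contents and client data are hypothetical.
REQUIRED = {
    "restaurant": {"general_liability", "property", "workers_comp", "liquor_liability"},
    "contractor": {"general_liability", "workers_comp", "commercial_auto"},
}

clients = {
    "Main St Bistro": ("restaurant", {"general_liability", "property", "workers_comp"}),
    "Acme Builders":  ("contractor", {"general_liability", "workers_comp", "commercial_auto"}),
}

def coverage_gaps(clients: dict, required: dict) -> dict:
    """Return {client: missing coverages} for every client with a gap."""
    gaps = {}
    for name, (business_type, bound) in clients.items():
        missing = required[business_type] - bound
        if missing:
            gaps[name] = missing
    return gaps

print(coverage_gaps(clients, REQUIRED))
# Main St Bistro is missing liquor_liability; Acme Builders is clean.
```

The scan is one set difference per client; the hard part is the checklist itself, which is the "defined business rules" piece of the pattern above.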
Conversations about AI readiness in insurance tend to land on model capability as the limiting factor. In the deployments we have worked through at ValueLabs with AiDE, however, the failure point is almost always upstream: how well the process was defined before the agent was asked to execute it. The enterprises building toward meaningful AI in 2026 are starting with the foundational work: documenting processes, cleaning their agency management system data, and defining what correct looks like before asking an agent to produce it. That work has no compelling vendor deck, no impressive benchmark, and no shortcut. It is also what the 22% did before anyone was watching.