How Are AI-Driven Insights Transforming Underwriting Profitability in Insurance?

There’s a number worth sitting with for a moment. Roughly 16% of P&C insurers currently use AI to augment human underwriting[1]. About 60% say they intend to make it a priority by 2028.

It’s a striking gap. And the more you think about it, the more it explains about where the insurance industry actually is on this question – not where the headlines say it is.

The commercial case for AI in underwriting has been settled for some time. McKinsey’s tracking of the sector suggests that AI leaders in insurance have generated 6.1 times the total shareholder return of AI laggards over the past five years[2] – a spread wider than in almost any other sector. Generative AI alone could unlock between $50 billion and $70 billion in incremental industry revenue[3]. Advanced risk assessment prior to underwriting can drive 40-50% improvements in loss ratios[4]. WTW’s most recent Advanced Analytics & AI Survey found that insurers using sophisticated analytics achieved combined ratios six points lower and premium growth three points higher than slower adopters.

These numbers aren’t seriously contested within the industry. Most senior insurance executives have read them, and most agree with the direction. And yet only 16% of the market is actually doing it.

The gap between what insurers know they should be doing and what they’re actually doing is the most useful place to start any conversation about AI in underwriting. Because once you understand why that gap exists, the path to closing it becomes much clearer.

The gap is almost never about the technology

It’s tempting to assume that the 44-point gap between intent and execution is a technology problem – that the AI tools aren’t quite ready, or the use cases aren’t proven, or the integration is too hard. None of those explanations really hold up.

The tools are mature. Multiple credible vendors offer underwriting AI products that work. The use cases are well documented. McKinsey reports cases where AI-enabled underwriting has cut quoting times[5] from more than a month to a matter of days, and in some commercial lines from days to hours. The integration challenges are real but solvable – similar challenges have been solved in other heavily regulated sectors.

What stops most insurers isn’t the model. It’s the data the model would need to be useful, and the state that data is in.

This is the part that doesn’t fit neatly into a board paper. It’s much easier to commit to “implementing AI” than to commit to “spending eighteen months bringing our policy data, claims data, customer data, and document archives into a state where AI could actually do something with them.” The first sounds strategic. The second sounds like overhead. But the first depends entirely on the second, and in most insurance organisations, the second hasn’t been done.

Why insurance data is uniquely difficult

Insurance has a structural data problem that other sectors don’t have to the same degree.

Insurers tend to run on systems that have accumulated over decades, often through mergers, demergers, line-of-business expansions, and the various technology transitions that come with each. Policy administration systems live alongside claims platforms, which live alongside CRM systems, which live alongside actuarial environments, which live alongside the document management systems where the actual unstructured information about a risk – broker submissions, surveys, correspondence, supporting evidence – tends to sit. None of these were designed to talk to each other in the way modern AI workflows would expect.
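To picture what that integration gap means in practice, here is a toy sketch, in Python, of assembling a single view of one risk from three disconnected systems. Every system name, schema, and identifier below is hypothetical – real estates rarely even share a clean common key.

```python
# Toy sketch: one risk, three disconnected systems. All system names,
# schemas, and identifiers are hypothetical.

# Fragment held in a policy administration system
policy_admin = {
    "POL-2024-0041": {
        "insured": "Acme Logistics Ltd",
        "line": "commercial property",
        "sum_insured": 2_500_000,
    }
}

# Fragment held in a separate claims platform
claims_platform = [
    {"policy_ref": "POL-2024-0041", "claim_id": "CLM-889",
     "incurred": 74_000, "status": "closed"},
]

# Fragment held in a document management system: files, not fields
document_store = [
    {"policy_ref": "POL-2024-0041", "doc": "broker_submission.pdf"},
    {"policy_ref": "POL-2024-0041", "doc": "survey_report.tiff"},
]

def unified_risk_view(policy_id: str) -> dict:
    """Join the fragments into the single picture a model would need."""
    view = dict(policy_admin.get(policy_id, {}))
    view["claims"] = [c for c in claims_platform
                      if c["policy_ref"] == policy_id]
    view["documents"] = [d["doc"] for d in document_store
                         if d["policy_ref"] == policy_id]
    return view

print(unified_risk_view("POL-2024-0041"))
```

The join itself is trivial. The hard part in real estates is that the shared reference usually doesn’t exist until the integration work creates it.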

On top of that, a great deal of the information that matters for underwriting is unstructured by nature. A risk submission isn’t a tidy database row. It’s a packet of documents, often in different formats, often with handwritten amendments, often with attachments that contain the actual judgement-shaping detail. Industry-wide, around 80% of enterprise data is unstructured[6] – and in insurance the proportion is often higher, because so much of the work has historically been document-driven.

AI systems can do extraordinary things with this kind of information, but only when it’s been brought into a state where they can actually see it. A scanned broker submission is, to a model, an image. A claims file split across three legacy systems is, to a model, three disconnected fragments. A historical loss record locked in a format that was retired ten years ago is, to a model, invisible.
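To make “brought into a state where they can actually see it” concrete, here is a minimal sketch of OCR’ing a scanned submission page into plain text, assuming a Python environment with Pillow, pytesseract, and the underlying Tesseract engine installed; the file name is illustrative.

```python
# Minimal sketch: turning a scanned broker submission, which a model
# sees as an image, into text it can actually read. Assumes Pillow,
# pytesseract, and the Tesseract OCR engine are installed; the file
# name is illustrative.
from PIL import Image
import pytesseract

def extract_submission_text(path: str) -> str:
    """OCR one scanned page into plain text for downstream use."""
    return pytesseract.image_to_string(Image.open(path))

text = extract_submission_text("broker_submission_page1.png")
print(text[:500])
```

Real pipelines layer layout analysis, handwriting handling, and validation on top of this, but the principle is the same: until a step like this has run, the document is invisible to the model.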

The 16% of insurers using AI in underwriting today are not the ones with the cleverest models. They’re the ones who’ve done the unglamorous work of getting their data into shape first.

What “data ready” actually looks like

It’s worth being concrete about what closing the gap involves, because the abstract version (“improve data quality”) has been on insurer roadmaps for years without much happening.

It means integration across the systems that hold underwriting-relevant information, so that an AI model assessing a risk can actually draw on the full picture of that risk rather than the slice that happens to live in one system. It means consistency in how data is structured, named, and validated across those systems, so that the same field doesn’t mean three different things depending on where you look. It means bringing the unstructured side – the document estate, the broker submissions, the claims files – into a form where models can actually read it, not just store it. It means governance: lineage, audit trails, quality controls, and the ability to defend the data foundation to a regulator who is increasingly likely to ask about it. And it means tackling the historical archive, not just the new data flowing in, because the patterns that train better risk models tend to live in the back catalogue.

This is a meaningful programme of work, and it’s not glamorous. It rarely produces a launch event. It tends to take longer than the executive sponsor initially hopes and reveal more complexity than the initial assessment suggested. But it’s the work that determines whether the AI investment that follows it actually delivers, or quietly becomes another pilot that never scales.
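To make the consistency and governance points above concrete, here is a minimal sketch of one small piece of that work: the same field arriving under three different names from three systems, mapped to one canonical name, validated, and recorded with lineage. All system names, field names, and the rule shown are hypothetical.

```python
# Minimal sketch: one concept, three source names, one canonical name,
# plus validation and a lineage record. System names, field names, and
# the rule shown are all hypothetical.
from datetime import datetime, timezone

FIELD_MAP = {
    "legacy_pas":  {"sum_ins": "sum_insured"},
    "claims_core": {"si_amount": "sum_insured"},
    "broker_feed": {"SumInsured": "sum_insured"},
}

def normalise(record: dict, source: str) -> tuple[dict, list[dict]]:
    """Map source fields to canonical names; keep an audit trail."""
    canonical, lineage = {}, []
    for field, value in record.items():
        target = FIELD_MAP.get(source, {}).get(field, field)
        canonical[target] = value
        lineage.append({
            "source": source,
            "source_field": field,
            "canonical_field": target,
            "at": datetime.now(timezone.utc).isoformat(),
        })
    # A quality control a regulator could be shown
    if canonical.get("sum_insured", 0) < 0:
        raise ValueError(f"invalid sum_insured from {source}")
    return canonical, lineage

record, trail = normalise({"si_amount": 2_500_000}, "claims_core")
print(record)  # {'sum_insured': 2500000}
```

Multiply this by a few thousand fields, dozens of systems, and decades of archive, and the shape of the eighteen-month programme becomes clear.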

What good looks like in insurance

The insurers who’ve moved past the gap aren’t necessarily the largest or the most technologically sophisticated. They’re the ones who’ve made a few specific decisions early.

They’ve stopped treating data as a project and started treating it as an asset. The investment is funded, sustained, and led at a level that can hold its ground against quarterly pressure to show results faster. They’ve integrated underwriting data with the document estate that actually contains most of the risk information, rather than leaving the documents to a separate part of the organisation. They’ve built the discipline around governance early, so that models can be deployed in regulated lines without creating compliance exposure. And they’ve been willing to start with the underwriting domains where AI can make the clearest difference – often specialty and commercial lines where complex risks reward better data more visibly than personal lines do.

The result, in the insurers who’ve done it well, is not a single AI model that transforms underwriting. It’s a steadily widening set of capabilities – better triage, better segmentation, faster quoting, more accurate pricing, earlier identification of emerging risks – that compound over time and become hard for slower competitors to replicate.

Where Dajon fits

Dajon Data Management works with insurers on the data foundation that AI-enabled underwriting depends on.

The work spans the full picture: integrating underwriting and claims data across systems, cleansing and structuring policy records, bringing unstructured document estates into a state where models can actually use them, and building the governance layer that regulated insurers need in place before any AI capability goes live. We’re particularly used to the kind of multi-system, document-heavy environments that insurance organisations tend to run on, and to the regulatory expectations that come with deploying AI in pricing and risk decisions.

The point isn’t to hand insurers another technology product. It’s to do the work that makes the technology insurers are already buying – or planning to buy – actually deliver against the business case that justified it.

Closing the gap

The 16/60 gap is the most important number in insurance AI right now, and not because it represents a market opportunity for vendors. It represents a strategic divide that’s widening fast inside the industry.

The insurers who close the gap will compound an advantage measured in shareholder return, combined ratio, and the ability to respond to risk in close to real time. The ones who don’t will find themselves trying to catch up with competitors whose data foundation took years to build – which means catching up will take years of its own. There isn’t a shortcut, and there isn’t a vendor product that substitutes for the underlying work.

Underwriting is still about judgement. But the judgement of underwriters working with rich, integrated, AI-ready data is increasingly going to outperform the judgement of underwriters working with whatever they happen to have to hand. That’s the gap.

Is your underwriting AI strategy waiting for the data work, or built on top of it?

Dajon Data Management helps insurers integrate, cleanse, and structure the data that AI-driven underwriting depends on. Get in touch to find out where your data foundation might be holding your strategy back.


References

  1. 5 Ways Agentic AI Is Transforming Insurance Underwriting in 2026. InsureTechTrends.
  2. McKinsey: The Future of AI for the Insurance Industry. Digital Insurer / McKinsey.
  3. Gen AI could unlock $50-$70bn in insurance revenue. McKinsey via Reinsurance News.
  4. AI meets insurtechs. McKinsey.
  5. McKinsey says AI could add $70 bn to insurance revenue. McKinsey via Beinsure.
  6. Unstructured Data: The Hidden Bottleneck in Enterprise AI Adoption. Gartner via CDO Magazine.