How Are Leading Insurers Identifying Emerging Vulnerabilities Before They Become Losses?

There is a timing problem at the heart of insurance that the industry does not always talk about directly.

By the time a risk shows up in claims data, it has already become a loss. The underwriting decision that priced it, the portfolio construction that concentrated it, the renewal terms that did not account for it: all of those happened in a world where the information that would have changed them existed somewhere, in some form, but was not visible in a way that could be acted on.

The question that is starting to separate the insurers pulling ahead from those standing still is not how well they understand past losses. It is how early they can see the next ones forming. According to the International Insurance Society’s 2026 Global Priorities Report, 71% of insurance executives now cite AI as the top issue across all categories of business priorities[1]. The shift is no longer aspirational; it is the central strategic question of the year ahead.

The fundamental limitation of looking backwards

Underwriting has always been built on historical data. Analyse what has happened across a portfolio, identify the patterns that preceded loss, and use that understanding to price and select risk going forward. It is a logical approach and it has worked well enough for long enough that it became the default model across the industry.

But it has a structural limitation that becomes more significant as the environment insurers are operating in becomes less stable.

Historical models are inherently reactive. They can tell you what risks looked like when they materialised in the past. They are less equipped to tell you what a risk looks like in the period before it materialises, when behaviour is shifting, when external conditions are changing, when the patterns that will eventually show up in claims data are still forming beneath the surface. By the time those patterns are clear enough to show up in historical analysis, the exposure has already accumulated.

The conditions making this lag expensive are visible across every major line of business. Swiss Re has reported that social inflation has driven a 57% increase in US liability claims over the past decade, with 27 court cases in 2023 alone awarding compensation of more than $100 million each[2]. Climate volatility is reshaping the geography of physical risk. Fraud patterns are evolving faster than detection models can be updated. Kennedys’ 2025 Forecast surveying 170 partners across the global insurance practice ranked AI adoption, cyber attacks and extreme weather as the top three risks for the year, with social inflation rated more immediate than AI in the United States[3]. In a slower, more predictable market, the lag between a risk emerging and a model recognising it was manageable. In this environment, that lag is expensive.

Where risk actually reveals itself before it becomes a claim

Emerging vulnerabilities do not announce themselves in claims data. They appear earlier, in signals that are individually easy to dismiss but collectively significant, provided you have the ability to see them together and recognise what they mean.

The pattern of claims in a specific geography that does not yet breach a threshold but is trending in a direction that historical norms do not explain. The cohort of policies where renewal behaviour has shifted in a way that correlates with deteriorating risk quality in similar cohorts in the past. The external indicator (a regulatory change, a court ruling, a shift in medical practice) that has not yet worked its way into loss experience but almost certainly will. The concentration in a book of business that only becomes visible when data from across systems is brought together rather than viewed separately.

None of these is a loss. At the point they appear, they are noise. The kind of thing that gets noted and then deprioritised in favour of the immediate demands of a busy underwriting operation. But for the insurers that have developed the ability to systematically surface and interpret these signals, they represent something more valuable than historical analysis can provide: time to act before the exposure compounds.
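The first of those signals, a claims trend that breaches no threshold but departs from historical norms, can be surfaced with something as simple as a drift test. The sketch below is illustrative only: the claim counts, window lengths and z-score threshold are hypothetical, not drawn from any real portfolio.

```python
from statistics import mean, stdev

def trend_alert(monthly_claims, baseline_months=24, recent_months=6, z_threshold=2.0):
    """Flag a segment whose recent claims frequency is drifting above its
    historical norm, before any hard loss threshold is breached.
    `monthly_claims` is a list of claim counts, oldest first (illustrative)."""
    baseline = monthly_claims[:baseline_months]
    recent = monthly_claims[-recent_months:]
    mu, sigma = mean(baseline), stdev(baseline)
    z = (mean(recent) - mu) / sigma if sigma else 0.0
    return z >= z_threshold, round(z, 2)

# Hypothetical geography: two stable years, then a gradual upward drift
# that would not yet trip a fixed loss-ratio threshold.
history = [40, 42, 38, 41, 39, 43, 40, 42, 41, 39, 40, 42,
           41, 40, 43, 39, 42, 41, 40, 38, 42, 41, 40, 42,
           44, 46, 47, 49, 51, 53]
alerted, z = trend_alert(history)
```

The point is not the statistic itself, which any actuary could refine, but that the test runs continuously against live data rather than waiting for a quarterly loss-ratio review.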

What the insurers pulling ahead are actually doing

The competitive advantage in early risk identification is not primarily about having better algorithms. It is about having better data to run them on, and having organised that data in a way that makes the signals visible rather than buried.

The insurers making the most progress here have done something that sounds straightforward but is operationally significant: they have stopped treating their data as a series of separate system outputs and started treating it as a single asset that needs to be managed as a whole. Policy data, claims data, customer behaviour data, operational data, and external data sources are being brought together into environments where they can be analysed in relation to each other rather than in isolation.

That integration is what makes AI genuinely useful in this context. McKinsey has estimated that AI-enabled underwriting can reduce loss ratios by three to five percentage points and expense ratios by one to three points, with combined cloud and AI capabilities boosting underwriting productivity by up to 40% and cutting quote turnaround times by 30%[4]. Those are not pilot numbers. They are what production deployments are delivering when the underlying data foundation supports them.

In practical terms, this looks like deterioration in a specific segment being flagged weeks before it would have appeared in loss ratios. Fraud patterns being identified through behavioural clustering before individual claims are submitted. Concentration risk becoming visible at a portfolio level before individual underwriters can see it from within their own books. Renewal pricing being informed by leading indicators of risk quality rather than lagging indicators of historical loss.
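The portfolio-level concentration case is worth making concrete, because it is the one that is structurally invisible from inside any single book. A minimal sketch, with hypothetical books, regions and exposure figures:

```python
from collections import defaultdict

def portfolio_concentration(policies, limit_share=0.35):
    """Aggregate exposure across separate books so a concentration that is
    modest within each book becomes visible at portfolio level.
    `policies` is a list of (book, region, sum_insured) tuples (illustrative)."""
    by_region = defaultdict(float)
    total = 0.0
    for _, region, sum_insured in policies:
        by_region[region] += sum_insured
        total += sum_insured
    # Return only the regions whose portfolio share exceeds the limit.
    return {r: round(v / total, 3) for r, v in by_region.items() if v / total > limit_share}

# Hypothetical: three books, each reasonably diversified on its own,
# but all overweight the same coastal region.
policies = [
    ("book_a", "coastal", 4.0), ("book_a", "north", 3.0), ("book_a", "south", 3.0),
    ("book_b", "coastal", 5.0), ("book_b", "north", 3.0), ("book_b", "east", 4.0),
    ("book_c", "coastal", 6.0), ("book_c", "south", 4.0), ("book_c", "east", 4.0),
]
flagged = portfolio_concentration(policies)
```

No individual underwriter here holds more than 45% of their own book in the coastal region, yet the portfolio as a whole carries over 40% of its exposure there, which only the cross-book view reveals.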

The underwriting decisions that follow these insights still require experienced human judgement. What changes is the quality and timeliness of the information those decisions are based on.

Why this is fundamentally a data problem, not a technology problem

There is a tendency to frame this shift as being primarily about AI and analytics capability. That framing misses the more important point.

AI models do not create insight from bad data. They amplify whatever signal exists in the data they are given. If the underlying data is fragmented across disconnected systems, inconsistently structured, poorly governed, or incomplete in ways that are not immediately obvious, the models surface noise rather than signal. The investment in analytics capability delivers a fraction of its potential value, and in some cases actively misleads by generating confident-looking outputs from an unreliable foundation. The IAIS Global Insurance Market Report 2025 flagged exactly this concern, with supervisors highlighting model governance, transparency, data bias and third-party concentration as priority risks alongside AI’s expanding use in underwriting and claims[5].

This is where a significant number of insurers are currently stuck. They have invested in analytics platforms. They have data science capability. They are trying to move towards earlier risk identification. But the data environment they are running those capabilities on was not built for this purpose, and the gaps in it are limiting what is possible in ways that are not always visible until you look closely at why the models are not performing as expected.

The organisations that have genuinely closed the gap, the ones consistently identifying emerging vulnerabilities before they become losses, have typically done the unglamorous work of fixing their data foundation before or alongside their investment in analytical capability. They have integrated systems that were previously siloed. They have standardised data structures across different parts of the business. They have built governance frameworks that maintain data quality over time rather than degrading it. They have created the conditions under which the models they are running can actually do what they are supposed to do.
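Standardising structures across siloed systems often reduces, in its simplest form, to mapping each system's field names onto one canonical schema so records can be joined and analysed together. The field names and systems below are invented for illustration, not any real insurer's schema.

```python
def standardise(record, mapping):
    """Map a source system's field names onto a shared canonical schema,
    so records from siloed systems can be compared and joined directly."""
    return {canonical: record.get(source) for canonical, source in mapping.items()}

# Hypothetical: two legacy systems naming the same underlying fields differently.
POLICY_ADMIN = {"policy_id": "PolRef", "sum_insured": "SI_GBP", "region": "Geo"}
CLAIMS_SYSTEM = {"policy_id": "policy_number", "sum_insured": "exposure", "region": "location"}

a = standardise({"PolRef": "P-001", "SI_GBP": 250000, "Geo": "coastal"}, POLICY_ADMIN)
b = standardise({"policy_number": "P-001", "exposure": 250000, "location": "coastal"}, CLAIMS_SYSTEM)
```

In production this sits inside a governed pipeline with validation and lineage tracking, but the principle is the same: until both records share one shape, no model can see that they describe the same policy.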

What this means commercially, and why the gap will widen

The commercial implications of consistently seeing risk earlier compound over time in a way that makes this more than an operational improvement.

Better early warning of emerging vulnerabilities means more precise pricing. Not just on average, but at the level of specific risks where the data is telling you something that the market has not yet priced in. It means portfolio management that can respond to deteriorating conditions before they show up in loss ratios, rather than after. It means renewal decisions informed by leading indicators of risk quality rather than lagging ones. And it means faster, more confident responses to external shocks: the kind of events that expose the difference between organisations that understood their exposure before it happened and those that discovered it afterwards.

Individually, each of these advantages is meaningful. Collectively, over a number of underwriting cycles, they create a material difference in loss ratios, portfolio quality, and the ability to grow in areas where the data is generating genuine confidence rather than educated guesswork.

The insurers that are not moving in this direction are not standing still in an environment that is staying the same. They are standing still in an environment that is accelerating away from them. And the gap between organisations that have built the capability to identify risk early and those that have not is going to be increasingly difficult to close the longer it remains unaddressed.

How Dajon helps insurers build the foundation this requires

At Dajon Data Management, we work with insurance organisations on the data challenges that sit underneath the ambition to identify risk earlier and make better underwriting decisions.

That means integrating data across systems that were not designed to work together, standardising structures that have drifted into inconsistency, and building data environments that are organised in a way that analytical models can actually use. It means making sure that the signals indicating emerging risk (the patterns in policy behaviour, the early indicators in operational data, the correlations between internal and external factors) are visible rather than buried in the gaps between disconnected systems.

The goal is not better technology for its own sake. It is making sure that the investment insurers are making in analytical capability is running on a foundation solid enough to deliver what it promises. Because the tools exist. The models exist. What determines whether they produce genuine competitive advantage or expensive disappointment is the quality of the data they are working with.

The advantage goes to whoever acts on it first

Risk does not announce itself. It leaves traces in data, in patterns, in the early movements of variables that experienced underwriters recognise as significant but that traditional models surface too slowly to act on.

The insurers that are building the capability to read those traces earlier, and the data foundation that makes that reading possible, are establishing an advantage that is genuinely difficult for those behind them to replicate quickly. It is not a technology advantage, which can be bought. It is a data and capability advantage, built over time through deliberate investment in the unglamorous work of getting the foundation right.

Are you seeing those early signals, or waiting for the risk to show up in your loss reports?

Dajon Data Management helps insurance organisations build the data foundation that makes earlier, more informed risk identification possible. Get in touch to understand where your current data environment might be limiting what your analytical capability can deliver.


References

  1. Risk & Insurance, “AI Tops Insurance Executive Priorities as Regulatory Concerns and Market Volatility Reshape the Risk Landscape”.
  2. Insurance Journal, “Leaning into Uncertainty and Managing Risk”.
  3. Kennedys, “AI tops global risk index for insurance sector in 2025 as sustainability drops to bottom”.
  4. Guidewire, “How AI and Ecosystem Innovation Are Transforming Underwriting”.
  5. IAIS, “Global Insurance Market Report 2025”.