An underwriter, an adjuster, and a risk manager walk into the same customer account. None of them sees the same picture.
It sounds like the opening of a joke, but in most insurers it’s just Tuesday. The underwriter is working in the policy administration system, with a submission that arrived as a PDF and was keyed into the platform last week. The adjuster is in the claims system, looking at a loss history that includes incidents the underwriter never saw. The risk manager is in a third tool, flagging behavioural signals that would materially change how the policy should have been priced – except the underwriter who priced it is working three systems away and won’t see those signals until renewal, if then.
Each of them is doing their job properly, with the information available to them. None of them has the complete picture. And the gap between “each team doing their job properly” and “the organisation making a good decision” is where most insurance data silos do their real damage.
The silos insurers don’t notice
What makes data silos in insurance particularly hard to address is that, for the people inside them, the friction often isn’t visible. Underwriters have always rekeyed data from broker submissions. Claims handlers have always worked with whatever loss history happens to be in their system. Renewals have always gone out priced on what’s visible in the policy system rather than what might be sitting in claims or risk management. It isn’t new, and it isn’t uncomfortable enough at the individual level to force change.
The consequences show up further down the chain. Accenture’s longitudinal P&C underwriting research, running since 2008, has consistently found that underwriting remains a “paper-first process with important data siloed in PDFs and spreadsheets attached to emails from brokers”[1] – and that despite fifteen years of technology investment, the fundamentals haven’t really improved. Underwriters still move between documents hunting for data formatted differently depending on who sent it. The specific technologies have changed; the underlying fragmentation hasn’t.
A recent example from the telematics world illustrates how this plays out commercially. Roughly 88% of commercial fleets now have telematics capturing driving behaviour[2] – data that ought to be transforming risk pricing and claims defence. In practice, the data is captured. Claims departments process major losses without accessing it. Underwriting prices renewals without loss history integration. Risk management sends safety alerts nobody reads. Each department has the data. None of them coordinates effectively to prevent losses or defend claims. The investment in telematics is real. The return on it is significantly smaller than it should be, because the data never crosses the silo walls.
Why “more data” doesn’t help when the data is fragmented
The instinctive response to poor data insight tends to be more data. More analytics tools. More dashboards. More external data sources feeding into a reporting layer that promises to bring everything together.
This rarely solves the underlying problem, and often makes it worse. If the underlying data is fragmented and inconsistent, additional tools layered on top produce more sophisticated views of partial pictures. The dashboards get better. The decisions don’t. In some ways they get harder to trust, because they now carry the authority of visualisation without the integrity of complete data. A well-designed chart built from three disconnected systems will confidently tell you a story that the actual data doesn’t support. That’s more dangerous than no chart at all.
The issue was never the volume of data. Insurance has always been data-rich. The issue is that the data exists in places that don’t connect, and the work to connect them is unglamorous, time-consuming, and consistently under-invested – because every individual system still works, and the commercial impact of their disconnection is diffuse rather than obvious.
What fragmented data actually costs
The costs show up in specific, measurable ways once you start looking for them, and they’re rarely small.
Pricing is the most direct. An underwriter assessing a risk without full visibility of the claims history, telematics data, or internal risk flags is making a pricing decision on a partial picture. Sometimes that produces mispriced policies that look profitable on paper but generate claims the book can’t sustain. Sometimes it produces over-conservative pricing on good risks that get lost to competitors whose underwriters had better visibility. Either way, it costs money, and the cost doesn’t show up in any single report.
Claims handling comes next. A complex claim handled without customer context, prior loss patterns, or relevant policy history takes longer, costs more, and is more likely to settle for more than it should. The inefficiency is real, but the bigger number is the settlements that ran higher because the handler didn’t have the information that would have changed the negotiation.
Fraud detection is another. Fraud patterns rarely sit inside a single system – they cross claims, policy, customer, and payment records, and spotting them requires the kind of cross-system pattern recognition that silos actively prevent. Every insurer has seen cases in hindsight where the fraud was visible if you connected the dots; the question is how many are still slipping through because nothing is connecting them in time.
Beyond the operational costs, there’s the strategic one. In a market where AI leaders in insurance have generated 6.1 times the total shareholder return of laggards over five years[3], the ability to use data well is becoming the defining competitive differentiator. And AI, whatever else it does, cannot see what isn’t connected. An insurer with fragmented data is not just slower today – it’s structurally unprepared for the capabilities the industry’s most successful competitors are already building.
What “integrated” actually means
The word “integration” gets used in ways that cover a lot of different ambitions, so it’s worth being specific about what actually changes decision-making.
True integration isn’t just data pipelines moving records between systems. It’s a working state where policy, claims, customer, and external data can be interrogated together, in context, with consistent definitions, and with enough metadata that patterns between them become visible. It’s the difference between an underwriter being able to open a claims record in another system and an underwriter working in an environment where the claims record is already embedded in the view they’re making the decision from. The second is significantly harder to build and disproportionately more valuable to use.
The insurers who’ve done this well tend to share a few characteristics. They’ve treated data governance as a leadership-level function, not an IT initiative. They’ve been willing to invest in the unstructured side – the broker submissions, surveys, correspondence, supporting evidence – that contains most of the real risk information but sits outside the structured systems. And they’ve built the connective tissue between systems incrementally, around specific underwriting or claims decisions where the business case is clearest, rather than attempting a single enterprise data project that tries to boil the ocean.
The results compound. Each integrated decision produces data that makes the next one better. Silos have the opposite property – each decision made on partial data adds noise to the system, making the next decision a little harder to trust.
How Dajon helps insurers close the gap
At Dajon Data Management, this is the kind of work we tend to get called in for when an insurer has recognised that its data environment has become a constraint on the business rather than an asset to it.
The work spans the parts of the estate that other providers often leave alone – bringing together data across policy, claims, and customer systems, digitising and structuring the document estates that contain most of the unstructured risk information, cleansing and reconciling records across multiple source systems, and building the governance layer that regulated insurers need to defend their data use to auditors and supervisors. We work alongside insurers’ internal teams and their transformation partners, focused specifically on the data layer because that’s where the leverage is.
For our clients, the outcome isn’t a perfect single source of truth – insurance is too complex for that to be a realistic target. It’s an environment where the people making decisions can actually see what the business knows, in time to use it.
The strategic point
Data silos aren’t an inconvenience. They’re a structural liability that gets more expensive every year the industry becomes more data-driven.
The insurers who will outperform in the next five years won’t necessarily be the ones with the most data, the most sophisticated analytics, or the most ambitious AI strategies. They’ll be the ones who can actually see what they already know – and who built the foundation for that visibility before their competitors made it a necessity rather than an advantage.
Is your organisation making decisions based on a complete picture – or are you working with significantly less than you think?
Dajon Data Management helps insurance organisations integrate and structure their data for better insight and decision-making. Get in touch to understand what a genuinely connected data environment could look like for your business.
References
1. Why insurers need to rescue underwriters from siloed data – Accenture
2. Why Insurance Telematics Integrations Fail – Carrier Management
3. The future of AI for the insurance industry – McKinsey
