Artificial intelligence is rapidly becoming a defining priority for the insurance industry. From fraud detection to underwriting optimisation and claims automation, insurers are investing at an unprecedented pace – the global AI in insurance market is projected to grow from $7.71 billion in 2024 to over $35 billion by 2029[1]. The promise is compelling: more accurate risk assessment, faster claims processing, and deeper insight into customer behaviour all offer clear competitive advantages.
However, there is a critical assumption underpinning much of this investment that often goes unchallenged – that AI can deliver value regardless of the quality of the data it is built on.
In reality, the opposite is true. And for insurers that fail to address their data foundations before deploying AI, the consequences can be significant.
Why insurance data is inherently complex
Insurance organisations generate vast amounts of data across multiple systems. Policy platforms, claims management systems, underwriting tools, customer databases, and broker portals all capture different aspects of the business – and over time, these systems evolve independently.
The result is fragmentation. A single customer may exist across multiple platforms with different identifiers. Claims data may not align perfectly with policy records. Historical data may be incomplete, inconsistent, or stored in formats that resist analysis. For organisations with decades of legacy records – many of which still exist in paper or scanned-image formats – the challenge is compounded further.
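To make the fragmentation problem concrete, here is a minimal sketch of how a single customer might be reconciled across two systems. The record layouts, field names, and the matching key (normalised name plus date of birth) are illustrative assumptions, not a description of any particular insurer's platforms – real entity resolution typically needs fuzzier matching and human review.

```python
# Illustrative sketch: linking one customer who exists in a policy
# system and a claims system under different identifiers and name
# formats. All records and field names here are invented.

def normalise(name: str) -> str:
    """Lowercase, drop titles and punctuation, collapse whitespace."""
    tokens = [t for t in name.lower().replace(".", "").split()
              if t not in {"mr", "mrs", "ms", "dr"}]
    return " ".join(tokens)

def match_key(record: dict) -> tuple:
    """A simple join key: normalised name plus date of birth."""
    return (normalise(record["name"]), record["dob"])

policy_system = [
    {"policy_id": "POL-001", "name": "Mr John Smith", "dob": "1980-04-02"},
]
claims_system = [
    {"claim_id": "CLM-778", "name": "JOHN SMITH", "dob": "1980-04-02"},
]

def link_records(policies, claims):
    """Pair each claim with a policy record sharing the same key."""
    index = {match_key(p): p for p in policies}
    links = []
    for claim in claims:
        policy = index.get(match_key(claim))
        if policy:
            links.append((policy["policy_id"], claim["claim_id"]))
    return links
```

Even this toy version shows why the problem compounds: every extra system adds another identifier scheme and another set of formatting quirks that the matching logic must absorb.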
This fragmentation limits visibility across the business. The insight required to understand risk, detect fraud, and price policies accurately exists within the data, but it is rarely accessible in a unified, structured way. Without a clear, consolidated view of information across systems, even the most advanced analytics tools are working with an incomplete picture. Dajon Data Management works with insurers to address exactly this problem – digitising legacy records and creating structured, integrated data environments that give organisations a single, reliable view of their information.
Why AI struggles without a strong data foundation
AI models rely on structured, consistent, and integrated data to produce meaningful results. When data is fragmented or poorly structured, several challenges emerge: models may produce inaccurate or biased outputs, important relationships between datasets may be missed, and insights may be incomplete or misleading.
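A toy example, with invented figures, shows how easily fragmented data distorts a model input: if the same claim is recorded twice under different references, a simple average claim severity is silently understated before any model is trained.

```python
# Toy illustration (invented numbers): an unresolved duplicate claim
# skews average claim severity, a typical input to pricing models.

claims = [
    {"ref": "CLM-100",  "amount": 1000},
    {"ref": "CLM-100A", "amount": 1000},  # same loss, logged twice
    {"ref": "CLM-200",  "amount": 4000},
]

# Naive average over raw records counts the duplicate.
naive_avg = sum(c["amount"] for c in claims) / len(claims)

# A hypothetical matching step has resolved CLM-100A to CLM-100.
canonical = {"CLM-100A": "CLM-100"}
unique = {}
for c in claims:
    unique[canonical.get(c["ref"], c["ref"])] = c["amount"]

true_avg = sum(unique.values()) / len(unique)
```

Here `naive_avg` is 2000.0 while `true_avg` is 2500.0 – a 20% understatement of severity that no downstream model can detect, because the error sits in the data, not the algorithm.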
A recent WTW survey of property and casualty insurers underscored this point, finding that 42% of respondents cited data-related issues – including poor quality and limited accessibility – as significant barriers to adopting advanced analytics and AI[2]. The same survey found that insurers with more sophisticated analytics capabilities achieved combined ratios six percentage points lower and premium growth three percentage points higher than slower adopters – a clear illustration of the commercial advantage that well-structured data enables.
The wider picture is equally telling. Research suggests that over 95% of corporate AI initiatives deliver zero measurable return[3], and a recurring theme in post-implementation analysis is the same: poor data quality undermines the models before they even begin to operate. In an industry where decisions directly impact financial performance and regulatory compliance, AI does not compensate for poor data. It amplifies it.
The shift from AI-first to data-first
Many organisations approach transformation with an AI-first mindset, focusing on tools, platforms, and use cases. However, the most successful insurers are taking a different approach – they are starting with data.
This means integrating data across systems, standardising formats, cleansing records, and ensuring that information is accurate, complete, and accessible. It means addressing the legacy challenge head-on – converting paper archives and unstructured documents into searchable, indexed digital environments. And it means establishing data governance frameworks that maintain quality over time, not just at the point of migration.
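The cleansing and standardisation steps above can be sketched in code. This is a deliberately simplified illustration with invented records and formats – real migrations involve far more field types, validation rules, and review workflows – but it shows the shape of the work: standardise formats, deduplicate, and route incomplete records for attention rather than letting them flow silently into analytics.

```python
# Simplified sketch of a cleansing pass: normalise date formats,
# standardise postcodes, drop duplicates, and quarantine incomplete
# records. All data and field names are illustrative.

from datetime import datetime

RAW = [
    {"policy_id": "POL-001", "start": "1980-04-02", "postcode": "sw1a 1aa"},
    {"policy_id": "POL-001", "start": "02/04/1980", "postcode": "SW1A 1AA"},
    {"policy_id": "POL-002", "start": "",           "postcode": "EC2V 7HH"},
]

def parse_date(value: str):
    """Accept the two date formats seen in the source systems."""
    for fmt in ("%Y-%m-%d", "%d/%m/%Y"):
        try:
            return datetime.strptime(value, fmt).date().isoformat()
        except ValueError:
            pass
    return None

def cleanse(records):
    """Return (clean, incomplete): standardised unique rows, plus
    rows missing mandatory fields, flagged for manual review."""
    seen, clean, incomplete = set(), [], []
    for r in records:
        row = {
            "policy_id": r["policy_id"],
            "start": parse_date(r["start"]),
            "postcode": r["postcode"].upper().strip(),
        }
        if row["start"] is None:
            incomplete.append(row)        # quarantine, don't discard
        elif row["policy_id"] not in seen:
            seen.add(row["policy_id"])
            clean.append(row)             # first complete copy wins
    return clean, incomplete
```

The quarantine step matters as much as the cleansing itself: governance means incomplete records are surfaced and resolved, not silently dropped or silently kept.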
Only once this foundation is in place can AI technologies deliver reliable and valuable insights. Dajon supports insurers through this process, providing document digitisation, intelligent indexing, and data cleansing services that prepare information for advanced analytics and AI. By focusing on the data layer first, Dajon helps insurers avoid the costly cycle of deploying technology on top of unreliable information.
The commercial impact of better data
The financial implications of this shift are significant. When data is properly structured and integrated, insurers can improve underwriting accuracy, detect fraud earlier, and respond more effectively to emerging risks. Decision-making becomes faster and more informed, and operational efficiency improves across the organisation.
Insurers using AI-powered underwriting tools have reported processing times dropping from weeks to hours[1], while AI-driven fraud detection and claims automation are delivering measurable cost reductions across the sector. But these results are achievable only when the underlying data is trustworthy. Organisations that attempt to implement AI without addressing data quality often struggle to realise meaningful returns on investment – a pattern that is becoming increasingly well-documented as the industry moves from pilot programmes to production deployments.
By contrast, organisations that invest in data preparation and integration before deploying AI are positioned to extract compounding value. Clean, well-structured data does not just support a single use case – it becomes a strategic asset that underpins pricing, compliance, customer insight, and operational efficiency simultaneously. This is where Dajon’s approach delivers lasting value, creating data environments that serve the business across multiple functions and over the long term.
Building the foundation for AI-driven insurance
AI will continue to play a central role in the future of insurance. However, its success will depend on something far more fundamental than the technology itself – it will depend on data.
Organisations that prioritise data quality, integration, and structure will be best positioned to take advantage of AI-driven innovation. Those that rush to deploy AI without addressing the underlying data challenge risk not only wasted investment, but the amplification of existing inaccuracies across their operations.
With the right foundation – and support from partners such as Dajon Data Management – insurers can move beyond fragmented insight and build a more accurate, efficient, and data-driven approach to risk. The question is not whether to invest in AI. It is whether your data is ready to make that investment worthwhile.
References
- [1] AI in Insurance Industry Statistics 2025, Coinlaw
- [2] Insurers using advanced analytics and AI deliver stronger results: WTW survey, Reinsurance News
- [3] 2026 Insurance Industry Predictions: AI Edition, AgentSync
