Digital Transformation Doesn’t Fail Because of Bad Technology. It Fails Because of This.

Seventy percent of digital transformations fail to meet their objectives. The number has barely moved in a decade despite better technology, more experienced partners, and a generation of accumulated lessons[1]. BCG’s analysis of more than 850 companies put the success rate at around 35%. Bain’s 2024 work landed even harder: 88% of business transformations fail to achieve their original ambitions[2].

That persistence is the interesting part. If the technology is better, the partners are more experienced, and the playbook is understood, the failure has to be coming from somewhere else.

It is. And it’s been in the same place for long enough that it’s worth saying directly.

Most digital transformation programmes that fail don’t fail because of the technology. They fail because of the data the technology was supposed to run on.

What failure actually looks like in practice

It’s worth being specific about this because “transformation failure” covers a wide range of outcomes, and the version that data problems create tends to be the least dramatic and the most persistent.

Outright programme failure – the kind that makes headlines and ends careers – is relatively rare. The Birmingham City Council Oracle implementation, where a £19 million programme has ballooned to an estimated £216.5 million by 2026 and the council declared effective bankruptcy partway through[3], is one of the rare visible examples. The more common version is quieter: a transformation that technically completes but doesn’t deliver what it was supposed to. Systems go live. Processes change. The organisation moves forward. And then, gradually, it becomes apparent that something isn’t working the way it should.

Reporting that should be more accurate isn’t. Decisions that should be faster aren’t. Processes that should be more efficient have acquired new complications that weren’t in the original design. Users who were supposed to trust the new system have developed workarounds that undermine it. The return on investment that was projected in the business case is materialising more slowly than expected, if at all.

These outcomes don’t get called failure. They get called teething issues, change management challenges, or adoption problems. The technology gets blamed. The implementation partner gets blamed. The project team gets blamed. Almost never does the post-mortem land on the real cause, which is that the data the new system was given wasn’t in a state that allowed it to perform.

Why data is always the hidden variable

Every digital transformation programme, whatever its stated objective, is fundamentally an exercise in changing how an organisation uses its information.

A new ERP is a different way of managing financial, operational, and supply chain data. A CRM transformation is a different way of capturing and using customer data. A move to cloud is a different infrastructure for storing and accessing all of the above. An AI initiative is a set of tools applied to data the organisation already holds to generate insights it couldn’t previously reach.

In every case, the technology is the mechanism. The data is what the mechanism operates on. And the relationship between the two is not symmetrical – a well-configured system running on poor data produces poor outcomes, while a less sophisticated system running on good data can perform remarkably well.

This seems obvious when stated plainly. The difficulty is that during a transformation programme, it is almost never given the weight it deserves. The technology decisions get the attention, the budget, and the senior sponsorship. The data gets treated as a workstream – important in principle, managed in practice by whoever is least busy, and deprioritised whenever the timeline comes under pressure. Gartner’s 83% statistic for data migration failure rates is not a separate problem from the transformation failure rate[4]. It’s the same problem, viewed from a different angle.

The result is transformation programmes that arrive at go-live with systems that are correctly configured and data that isn’t ready. And the system, however well designed, cannot compensate for that. It amplifies what it’s given. When what it’s given is incomplete, inconsistent, or poorly structured, it produces incomplete, inconsistent, and poorly structured outputs, with the full sophistication of the new platform applied to a foundation that was never fit for purpose.

The specific ways data problems derail transformation

The failure modes are consistent enough across programmes that they’re worth naming specifically, because organisations that recognise them early have a chance to address them. Organisations that don’t tend to encounter them at go-live, when the cost of addressing them is at its highest.

The first is data quality. Legacy systems accumulate problems over time: duplicate records, incomplete fields, inconsistent formats, values that made sense in the context of the old system but don’t translate correctly to the new one. These problems are often invisible in normal operations because the people using the legacy system have learned to work around them. They become very visible when the data is migrated, because the new system applies rules and validation that the old one didn’t – and the data that passed quietly through the old environment fails noisily in the new one. Gartner’s research puts the average annual cost of poor data quality at $12.9 million per organisation; most of that cost is invisible until something forces it to the surface[5].
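The kind of pre-migration profiling described above can be sketched in a few lines. This is a minimal, illustrative example only – the record shape, field names, and the UK-style postcode rule are assumptions, not taken from any real system – but it shows the category of check that surfaces duplicates, incomplete fields, and inconsistent formats before the new system’s validation does:

```python
# Minimal sketch of pre-migration data quality profiling.
# Record structure and field names are hypothetical.
import re

def profile_records(records):
    """Count duplicates, missing fields, and badly formatted values."""
    issues = {"duplicates": 0, "missing_email": 0, "bad_postcode": 0}
    seen_ids = set()
    # UK-style postcode pattern, purely illustrative
    postcode_ok = re.compile(r"^[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}$")
    for rec in records:
        if rec["id"] in seen_ids:
            issues["duplicates"] += 1
        seen_ids.add(rec["id"])
        if not rec.get("email"):
            issues["missing_email"] += 1
        if not postcode_ok.match(rec.get("postcode", "")):
            issues["bad_postcode"] += 1
    return issues

legacy = [
    {"id": 1, "email": "a@example.com", "postcode": "SW1A 1AA"},
    {"id": 1, "email": "a@example.com", "postcode": "SW1A 1AA"},  # duplicate
    {"id": 2, "email": "",              "postcode": "not-a-code"},
]
print(profile_records(legacy))
# → {'duplicates': 1, 'missing_email': 1, 'bad_postcode': 1}
```

In practice, checks like these run against the full legacy extract, and the counts feed the remediation backlog long before anyone attempts a migration.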

The second is data completeness. Transformation programmes regularly discover, later than is comfortable, that the data the new system needs to function properly doesn’t exist in a usable form. Historical records that are required for reporting aren’t structured in a way that the new system can use. Reference data that underpins core processes is incomplete or out of date. Integration data that should flow between systems isn’t in a compatible format. The gaps aren’t always obvious during planning because nobody has looked at the data closely enough to find them – by the time they’re found, the timeline has become too compressed to address them properly.
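One common form of the completeness gap above is reference data that no longer covers everything the transactional data points at. A hedged sketch, with entirely hypothetical codes and record shapes, of the check that finds those gaps:

```python
# Illustrative completeness check: find codes used in transactions
# that are absent from the reference data the new system validates
# against. All names and values are hypothetical.

def find_reference_gaps(transactions, reference_codes):
    """Return codes present in transactions but missing from
    reference data - the rows the new system would reject."""
    used = {t["product_code"] for t in transactions}
    return used - set(reference_codes)

reference = ["PRD-001", "PRD-002"]
transactions = [
    {"id": 101, "product_code": "PRD-001"},
    {"id": 102, "product_code": "PRD-999"},  # retired code, never migrated
]
print(find_reference_gaps(transactions, reference))
# → {'PRD-999'}
```

The value of running this during planning, rather than at go-live, is the point of the paragraph above: the gap is cheap to close while the timeline still has slack.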

The third is data integration. Most transformation programmes involve multiple systems that need to exchange data. The integration architecture gets designed carefully. The technical connections get built and tested. And then, in a live environment with real data volumes and real business complexity, the integrations start producing results that don’t quite match what was expected – because the data flowing through them has inconsistencies that the test environment didn’t replicate and that the integration logic wasn’t designed to handle.

Each of these failure modes is identifiable in advance. Each of them is significantly cheaper to address before go-live than after. And each of them is routinely underestimated or discovered too late because the programme’s attention and energy was focused on the technology rather than on the data the technology depends on.

Why specialist data capability changes the outcome

The reason data problems persist in transformation programmes despite being well understood is not that organisations don’t know they matter. It’s that the capability required to address them properly is different from the capability required to implement the technology – and the two rarely sit in the same team.

System integrators and implementation partners are excellent at configuring platforms, designing processes, and managing the complexity of a large technology programme. They are not always excellent at the specific disciplines of data assessment, data quality remediation, data migration, and data integration governance. These require a different kind of depth: the kind that comes from having worked through data problems across many different environments, seen the failure modes enough times to anticipate them, and developed approaches that address root causes rather than symptoms.

Programmes that bring that specialist capability in – either through dedicated internal resource or through specialist partners – handle data problems differently. They find them earlier, address them more systematically, and arrive at go-live with a data foundation that the new system can actually build on. Programmes that don’t tend to discover the same problems later, under more pressure, with fewer options for addressing them cleanly.

This is not a complicated insight. But it’s one that gets rediscovered repeatedly, programme by programme, because the decision about whether to invest in specialist data capability tends to be made at the planning stage when the data problems aren’t yet visible – and the case for the investment is always harder to make before the problems have surfaced than after.

What the programmes that succeed do differently

The transformation programmes that consistently deliver what they promise share a set of characteristics that are worth being direct about, because they run counter to the instinct of most programme teams under pressure.

They treat data as a first-order workstream, not a supporting one. The data assessment, preparation, and migration work has its own resource, its own timeline, and its own governance. Not as a subset of the technology workstream, but as a parallel and equally important programme of work with its own milestones and its own senior accountability.

They start data preparation earlier than feels necessary. The temptation in every programme is to wait until the system design is more settled before getting into the detail of data. The programmes that succeed resist that temptation, because they know that the time required to properly assess, cleanse, and prepare data is almost always longer than the optimistic estimate, and waiting means arriving at go-live with data work that is still incomplete.

They test with real data rather than synthetic data. Test environments populated with clean, synthetic data will pass almost any test. Programmes that test with actual data from the legacy system find the problems that matter – the edge cases, the inconsistencies, the records that don’t conform to the assumptions the system was designed around – at a point when they can still be addressed.

And they treat go-live not as the end of the data work but as the point at which the data work is validated. The question isn’t “is the data there?” It’s “is the data right?” And that distinction requires a different approach to post-migration validation than most programmes apply.
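The distinction between those two questions can be made concrete. In the sketch below – hypothetical record structures, stdlib only – a record count answers “is the data there?”, while only a field-level comparison answers “is the data right?”:

```python
# Illustrative post-migration validation. Record shapes are hypothetical.

def counts_match(source, target):
    """The 'is the data there?' check: totals only."""
    return len(source) == len(target)

def field_mismatches(source, target, key="id"):
    """The 'is the data right?' check: compare migrated records
    field by field against the source, keyed on an identifier."""
    target_by_key = {r[key]: r for r in target}
    bad = []
    for rec in source:
        migrated = target_by_key.get(rec[key])
        if migrated is None or migrated != rec:
            bad.append(rec[key])
    return bad

source = [{"id": 1, "balance": 100.0}, {"id": 2, "balance": 250.0}]
target = [{"id": 1, "balance": 100.0}, {"id": 2, "balance": 0.0}]  # corrupted in transit

print(counts_match(source, target))      # → True: the data is "there"
print(field_mismatches(source, target))  # → [2]: but it isn't "right"
```

A programme that stops at the first check signs off a migration that the second check would have failed, which is exactly the quiet kind of failure this article describes.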

How Dajon works alongside transformation programmes

At Dajon Data Management, we sit at the data layer of transformation programmes: assessing what exists, identifying the quality and completeness issues that will surface at go-live if not addressed earlier, preparing and migrating data so the new system can use it, and validating that what arrives is ready to support real operations from day one.

For organisations already in production and managing the consequences of data problems that weren’t addressed before go-live, we work on remediation: understanding what’s wrong, what it’s affecting, and what needs to happen to bring the data foundation to the state the system was designed to run on.

The variable that determines whether everything else works

Digital transformation will continue to attract significant investment. The technology will continue to improve. The implementation methodologies will continue to mature.

None of that changes the fundamental relationship between the technology and the data it operates on. Better tools applied to poor data produce better-looking poor outcomes. The sophistication of the platform raises the ceiling of what’s possible, but the data determines whether the organisation gets anywhere near that ceiling or spends its time managing the gap between what the system was supposed to do and what it’s actually doing.

The organisations that consistently get the most from their transformation investments are not the ones with the most advanced technology or the most experienced implementation partners. They’re the ones that treated the data as seriously as the technology, before the programme started, not after the problems surfaced.

Digital transformation doesn’t fail because of bad technology. It fails because of what the technology was given to work with. When did you last look honestly at that?

Dajon Data Management works alongside transformation programmes to address the data layer that determines whether the technology delivers what it was designed to. Get in touch to understand where your current data environment might be limiting what your next transformation can achieve.


References

  1. Boston Consulting Group / McKinsey, “Why 70% of Digital Transformations Fail: Insights and Solutions”.
  2. Bain / Gartner, “Digital Transformation Failure Rate 2025”.
  3. Data Center Dynamics, “Total cost of Birmingham City’s Oracle system failure to reach £216.5m by 2026 – report”.
  4. Gartner via Experian, “83% of data migrations fail or exceed their budgets and schedules”.
  5. Gartner, “Data Quality: Why It Matters and How to Achieve It”.