When an ERP migration goes wrong, the technology gets blamed first. The platform was not configured properly. The integrations were not tested. The vendor oversold the product.
In many cases, the real cause is simpler and less dramatic: The data that went into the new system was not fit for purpose.
Gartner estimates that poor data quality costs the average organisation $12.9 million per year[1]. A 2025 report by the IBM Institute for Business Value found that over a quarter of organisations estimate they lose more than $5 million annually due to poor data quality alone[2]. These figures reflect the everyday damage that bad data does across operations. In a migration project, the impact is concentrated and amplified – because migration forces dirty data into a clean system, and the clean system refuses to tolerate what the legacy environment quietly absorbed.
Why legacy data is almost never clean
Most organisations do not set out to create poor-quality data. It accumulates gradually, over years, across systems that were never designed to work together.
A customer record is created in the CRM with one set of fields. The same customer appears in the billing system with slightly different formatting. A third entry exists in the support platform with an outdated postal address. None of these inconsistencies cause problems in day-to-day operations because each system has evolved its own workarounds – staff know to check the billing system for the correct account number and the CRM for the current contact details.
Now imagine migrating all three systems into a single modern platform that expects one clean, consistent record per customer. The duplicates, the format mismatches, and the stale data that were invisible in the old environment suddenly become blocking issues in the new one.
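To make that concrete, here is a minimal sketch in Python with pandas of how one customer can look like three different people to an exact-match merge. Every field name, value, and normalisation rule below is invented for illustration, not taken from any real system.

```python
import pandas as pd

# The same customer as three systems might hold it (all values invented).
crm     = {"name": "ACME Ltd",     "postcode": "EC1A 1BB", "phone": "+44 20 7946 0958"}
billing = {"name": "Acme Ltd.",    "postcode": "ec1a1bb",  "phone": "020 7946 0958"}
support = {"name": "ACME LIMITED", "postcode": "EC1A 1BB", "phone": None}

records = pd.DataFrame([crm, billing, support])

# A naive exact-match dedupe sees three different customers.
print(records.duplicated().sum())  # 0 duplicates found

# Normalising key fields before comparison reveals a single customer.
def normalise(series):
    return (
        series.str.upper()
              .str.replace(r"[^A-Z0-9]", "", regex=True)          # drop spaces and punctuation
              .str.replace(r"(LIMITED|LTD)$", "LTD", regex=True)  # unify the legal suffix
    )

records["name_key"] = normalise(records["name"])
records["postcode_key"] = normalise(records["postcode"])
print(records.duplicated(subset=["name_key", "postcode_key"]).sum())  # 2 duplicates
```

Real matching logic is far richer than this, but the principle is the same: until the records are normalised against a common standard, the new platform cannot even see that they describe one customer.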
This pattern repeats across every data domain: Products with retired codes still appearing in active catalogues. Financial transactions tagged with department identifiers that no longer exist. Employee records carrying data from two mergers ago in fields that the new HR system does not recognise.
The post-go-live disaster
The most damaging aspect of poor data quality in a migration is its timing. Problems rarely surface during the migration itself. They appear after go-live – when the business is relying on the new system to work.
Reports produce incorrect figures because duplicate records have inflated totals. Automated workflows fail because mandatory fields contain blank or invalid data. Customer-facing processes break because address formats do not match the new system’s validation rules.
At this point, remediation is expensive and disruptive. The organisation is simultaneously trying to run its operations on the new platform and fix the data that should have been cleaned before it arrived. According to research from Oracle and the Bloor Group, average cost overruns in migration projects hover around 30%, with schedule slippage averaging 41%[3]. A significant portion of this can be traced directly to data issues discovered too late.
The Queensland Health payroll migration in Australia is one of the more dramatic examples. The rushed implementation, combined with inadequate data preparation, led to over 35,000 payroll errors in its first month alone. Thousands of employees were underpaid, overpaid, or not paid at all. The failure ultimately cost taxpayers over AUD 1.2 billion[4].
Why data preparation is the highest-value activity in any migration
Successful migrations invest heavily in the work that happens before any data moves between systems.
Data profiling comes first: Understanding what exists, where it sits, how it is structured, and how it relates to data in other systems. This step almost always reveals more problems than anyone expected. Fields that should contain dates hold free text. Lookup values have been customised per department. Entire record sets exist in formats that the target system cannot accept.
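As a flavour of what profiling surfaces, the sketch below (Python with pandas, run against a hypothetical legacy extract) checks completeness, validity, and uniqueness – including the dates-held-as-free-text problem mentioned above. Every column name and value is made up for the example.

```python
import pandas as pd

# Hypothetical legacy extract -- in practice this is pulled from the source system.
legacy = pd.DataFrame({
    "customer_id": ["C001", "C002", "C002", None],
    "created_on":  ["2019-03-14", "14/03/2019", "unknown", "2021-07-01"],
    "department":  ["SALES", "OPS", "OPS", "MKT-OLD"],
})

# Completeness: how many missing values per column?
print(legacy.isna().sum())

# Validity: which "date" values fail to parse? errors="coerce" turns them into NaT.
parsed = pd.to_datetime(legacy["created_on"], format="%Y-%m-%d", errors="coerce")
print(legacy.loc[parsed.isna(), "created_on"])  # "14/03/2019" and "unknown"

# Uniqueness: how many duplicate keys are lurking?
print(legacy["customer_id"].duplicated().sum())  # 1
```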
Data cleansing follows: Removing duplicates, standardising formats, correcting known errors, and enriching incomplete records. This is detailed, sometimes tedious work, but it is where the outcome of the migration is largely determined.
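Continuing the same hypothetical extract, a cleansing pass might standardise the mixed date formats and collapse duplicate keys, keeping the most recent record per customer. The rules here are purely illustrative – in a real project they are agreed with data owners before anything is changed.

```python
import pandas as pd

legacy = pd.DataFrame({
    "customer_id": ["C001", "C002", "C002"],
    "created_on":  ["2019-03-14", "14/03/2019", "2021-07-01"],
})

# Standardise: accept the two formats known to exist, reject everything else.
def parse_date(value):
    for fmt in ("%Y-%m-%d", "%d/%m/%Y"):
        try:
            return pd.to_datetime(value, format=fmt)
        except (ValueError, TypeError):
            continue
    return pd.NaT  # flagged for manual review rather than silently guessed

legacy["created_on"] = legacy["created_on"].map(parse_date)

# Dedupe: keep the most recent record per customer key.
clean = (
    legacy.sort_values("created_on")
          .drop_duplicates(subset="customer_id", keep="last")
)
print(clean)
```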
Data mapping and validation ensure that every field in the source system has a defined destination in the target, with transformation rules documented and tested. Edge cases – the records that do not fit neatly into the mapping – need explicit handling rather than the hope that they will sort themselves out.
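Expressed as code, the same idea is a documented mapping table plus an explicit exceptions path for records that do not fit. The target schema and department codes below are invented for the example.

```python
import pandas as pd

source = pd.DataFrame({
    "cust_no":   ["C001", "C002", "C003"],
    "dept_code": ["SALES", "OPS", "MKT-OLD"],  # MKT-OLD was retired years ago
})

# Documented mapping rules: retired codes are deliberately absent.
DEPT_MAP = {"SALES": "D-100", "OPS": "D-200"}

target = pd.DataFrame({
    "customer_id":   source["cust_no"],
    "department_id": source["dept_code"].map(DEPT_MAP),
})

# Edge cases are routed to an exceptions queue, not left to sort themselves out.
exceptions = target[target["department_id"].isna()]
loadable   = target.dropna(subset=["department_id"])

print(f"{len(loadable)} records ready to load, {len(exceptions)} held for review")
exceptions.to_csv("mapping_exceptions.csv", index=False)
```

The exceptions file is the point: every record that cannot be mapped is visible, owned, and resolved before go-live instead of failing silently afterwards.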
Research into successful migrations shows they typically allocate 30–40% of project time to testing, compared with just 15% for projects that fail[5]. When this preparation is done thoroughly, the actual migration becomes largely mechanical. The data has been profiled, cleaned, mapped, and validated. The risk has been addressed at the point where it can be managed, rather than left to explode after go-live.
How Dajon helps organisations prepare data for migration
Dajon Data Management provides the specialist data migration and preparation support that migration programmes depend on. Working alongside organisations and their IT implementation partners, Dajon handles the detailed work of analysing, cleansing, and structuring enterprise data for migration.
This means engaging early – before the migration phase begins – to profile existing data environments, identify quality issues, and build the preparation plan that ensures data arrives in the new system ready to support operations from day one.
Where legacy records exist in paper form or as unstructured digital files, Dajon’s document digitisation and data capture services ensure those records are brought into the migration scope rather than left behind.
The difference between a migration that delivers its promised benefits and one that generates months of post-implementation firefighting almost always comes down to what happened to the data before it moved.
Technology platforms will continue to improve. Data, left untreated, will not. For any organisation planning a system migration, the most important investment is not in the destination platform – it is in the quality of what you put into it.
References
- [1] The Hidden Costs of Poor Data Quality, Cambridge Spark
- [2] The True Cost of Poor Data Quality, IBM
- [3] Data Migration Cost Analysis & Calculator, DataFlowMapper
- [4] Failed Data Migration Projects and the Lessons Learned, Hopp Tech
- [5] Planning a Smart Data Migration, Dajon Data Management
