Data migrations are a risky business. Here’s how to get it right time and time again.
There is a common misconception that data migrations are simply about moving data from A to B. Managing data quality and its transition effectively to achieve the best outcome is a far more complex process, one that requires planning and structure, and that pays heed to the risks poor-quality data poses to the end result. These risks can be catastrophic if data quality is assumed rather than proven!
Industry experts have long recognised the inherent risks associated with data quality in data migrations. In fact, Gartner highlights ‘attention to risks’ as an area that deserves amplified focus, and one that is often not an innate part of planning for data migrations.
“Analysis of data migration projects over the years has shown that they meet with mixed results. While mission-critical to the success of the business initiatives they are meant to facilitate, lack of planning structure and attention to risks causes many data migration efforts to fail.”
Gartner, “Risks and Challenges in Data Migrations and Conversions.”
What can be done, then, to ensure you have the best data quality possible? Not just once or twice, but guaranteed and assured each time you face a project where data migration and assured data quality are key.
The Data Quality Minefield
Risk #1 – Reality Check
Data migration requirements are often developed based on assumptions around the data, rather than fact or reality. Mappings and translations based on assumptions may miss key values. Duplication between or across legacy data sources may not be expected or accounted for. Data structure disparity between legacy data sources and the new target system may not be fully understood. Consequently, there is little data quality assurance, and big risk to the end project.
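As a minimal illustration of profiling reality instead of trusting assumptions, the sketch below (with an invented table, column and mapping) checks the distinct values actually present in a legacy column against the mapping the requirements assume, so unmapped values surface before migration rather than during it:

```python
# Profile the distinct values actually present in a legacy column and
# check each one against the mapping the requirements assume.
# Rows, column names and mapping values are illustrative only.

legacy_rows = [
    {"id": 1, "status": "ACT"},
    {"id": 2, "status": "INA"},
    {"id": 3, "status": "SUSP"},  # a value nobody assumed existed
]

# Mapping written from assumptions, not from profiling the data
assumed_mapping = {"ACT": "Active", "INA": "Inactive"}

actual_values = {row["status"] for row in legacy_rows}
unmapped = actual_values - assumed_mapping.keys()

if unmapped:
    print(f"Unmapped legacy values found: {sorted(unmapped)}")
```

Running a profile like this across every mapped column turns “we assume these are the values” into a verified fact.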
Risk #2 – Mind the Gap
While data may never be 100% clean, a lack of attention to data quality can cripple even the most straightforward projects, leading to last-minute cleansing initiatives that are costly and time-consuming. In addition, how do you ensure that any data gaps are filled and that you are not migrating too little or too much data?
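One way to catch “too little or too much” is a simple key reconciliation between source and target. The sketch below uses invented key sets to show the idea:

```python
# Reconcile primary keys between a legacy extract and the migrated target.
# The key sets are illustrative stand-ins for real extracts.

source_ids = {101, 102, 103, 104}
target_ids = {101, 102, 104, 999}

missing = source_ids - target_ids     # migrated too little
unexpected = target_ids - source_ids  # migrated too much

print(f"Missing from target: {sorted(missing)}")
print(f"Unexpected in target: {sorted(unexpected)}")
```

Both sets should be empty before go-live; anything else is a gap to investigate.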
Risk #3 – Unique Landscape
Every organisation’s data landscape is unique and may include anything from decades-old data to one-off databases. Documentation may not exist, institutional knowledge may be limited, and the key staff who understood the data may no longer be around to add insight or narrative to the lay of your data landscape!
Risk #4 – When Bad Quality Data happens to Good People
It is almost stating the obvious that any new system is only as good as the data underpinning it! Missing, invalid or inconsistent legacy data can have substantial effects on your new system. Can data ever be 100% clean? How can its quality be assured?
Risk #5 – We’re only human after all
We are all only human, and the human eye can easily overlook seemingly innocuous differences between source and new systems. For example, fields with the same name may represent very different things in a legacy system and a new one. Data types, time formatting and field-length differences can also easily be overlooked, with disastrous impact. What if your data sources are in various languages? What about compliance with the GDPR?
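Differences like these are exactly what automated checks catch and the eye does not. Here is a hypothetical sketch, assuming a legacy system that stored dates as DD/MM/YYYY text while the target expects ISO 8601 in a column capped at 10 characters (both rules invented for illustration):

```python
from datetime import datetime

def check_value(value: str, max_len: int = 10) -> list:
    """Validate one migrated field against the target system's rules
    (ISO date format and a maximum column length, both assumed here)."""
    issues = []
    if len(value) > max_len:
        issues.append(f"exceeds target length {max_len}")
    try:
        datetime.strptime(value, "%Y-%m-%d")  # the target's expected format
    except ValueError:
        issues.append("not in target ISO date format")
    return issues

print(check_value("2024-03-01"))  # conforms to the target's rules
print(check_value("01/03/2024"))  # legacy format: same length, wrong shape
```

To a human reviewer both values look like plausible dates; only the format check reveals that the second one would corrupt the target field.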
Risk #6 – Plan to test
It is remarkable that comprehensive testing is often viewed as a lesser priority than other concerns when it comes to data migrations. The reality is that errors found late in the process, during application testing or even in production, can be difficult to trace and very costly to correct. Testing therefore needs to happen throughout the project, with the functional team and the business engaged to ensure all requirements are met and rules are applied. If testing is not treated as a priority from the get-go, corrupted data can sneak into the process, meaning the delivered data does not meet the needs of the business and setting the project’s go-live back in both time and budget.
So, with all these risks around poor data quality in data migrations considered, it begs the question: is 100% assurance and coverage of all migrated and transformed data ever possible?
As has been highlighted, assuring the best possible quality of migrated data is often a manual, costly and time-consuming process, with the added burden that the scale of production databases means only a small sample of data can be inspected.
What is more, traditional data comparison tools don’t help as they only compare like with like.
iData is trail-blazing data quality in migrations by transforming data between old and new structures, allowing the information to be compared directly.
And it’s not just a handful of records: entire databases can be compared using automated tests, providing 100% assurance and 100% coverage of all migrated data.
Missing records are identified, corrupted fields are reported, data quality rules can be applied and failing records traced and corrected.
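The rule-based side of checks like these can be sketched in a few lines; the records and the rule below are invented for illustration, not taken from any particular product:

```python
# Apply a simple data quality rule to every migrated record and trace
# failures back to their keys. Records and the rule are illustrative.

records = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": ""},  # missing/corrupted field
    {"id": 3, "email": "c@example.com"},
]

def rule_email_present(record):
    """Data quality rule: the email field must be non-empty."""
    return bool(record["email"].strip())

failing = [r["id"] for r in records if not rule_email_present(r)]
print(f"Records failing email rule: {failing}")
```

Because the rule is code, the same check can be re-run after every cleansing pass until the failing list is empty.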
iData is pioneering and proven, with customer use cases demonstrating:
- Full data coverage meaning 100% assurance
- Fast, automated, repeatable tests
- Enhanced data quality through data rules and governance
- Source control compatible tests
- Interactive data wizards
- Command auto-completion and interactive assistance
But don’t just take our word for it – see for yourself with your very own demonstration of iData.