The New Global Financial Crisis: Data Quality Lies at the Eye of the Storm
On October 29, 1929, Black Tuesday hit Wall Street as investors traded some 16 million shares on the New York Stock Exchange in a single day. This created a panic which wiped out millions of investors.
Over the next several years, consumer spending and investment dropped, and industrial output and employment fell steeply as failing companies laid off workers: the Great Depression of the 1930s.
Fast forward to the Global Banking Crisis of 2007–2009. The banking industry was decimated by a combination of deregulation, the collapse of the housing market, and poor decision-making, with many banks and financial institutions failing.
In 2022, in the wake of a global pandemic, it would be easy to reflect on these earlier crises and to blame external factors for the failure of the finance industry to keep up with what is required of it in the twenty-first century.
But the reality lies closer to home, in the detail.
More specifically, the devil lies in the data.
Data is Spiralling Out of Control
The world has been changing for a long time.
Digital transformation has been fundamental to the finance industry for more than twenty years. At the forefront of the challenges are the need to keep up with ever more demanding consumers, to stay within ever more stringent regulations, and still to have clear, comprehensive information and insight to drive senior decision-making.
The volume, complexity, rate of change, and propensity for error in financial data are all growing exponentially. It is no surprise, therefore, that the market for data quality tools is growing at more than 18% year-on-year.
The biggest data quality vendors, giants of the industry such as Talend and Informatica, are simply not agile enough to respond to the needs of every business. And with growth strategies built on a principle of acquisition and integration, manual processes are required as part of any digital transformation or data quality initiative.
This shifting and fast-growing market, therefore, has seen the rapid rise of challenger brands with very specific skillsets and technologies – around data visualization, obfuscation, governance or test data management, for example.
The plethora of tools available, the abundance of requirements, and the scale of the problem make it very difficult for banks and other financial institutions to build a robust data strategy that gives them the insight and governance needed to survive, let alone thrive. Opportunity abounds for the more agile FinTech brands emerging every day, which are not tied to legacy systems and are able to augment manual processes with the best new data quality technology coming down the line.
Consumer Experience: the Biggest Digital Risk to the Banking Sector?
According to Deloitte’s 2022 Global Risk Survey, 72% of consumers have experienced an adverse digital incident in the last year. However, the vast majority of issues were not directly to do with technology, but rather concerned problems with the interaction between humans and technology.
The most prevalent issue, experienced by 23% of participants, was customer services being unable to help the consumer with a problem. There is an assumption by consumers that their bank or other finance provider will be able to access their data, make changes, and respond to their needs, accurately and in real time.
The reality, however, is that with most data assurance solutions touching only 5-20% of the data, more customer records contain errors than not.
Consumer confidence, net promoter scores (NPS), and customer reviews directly impact profitability. According to Bain & Co (2018), in most industries, Net Promoter Scores explained roughly 20% to 60% of the variation in organic growth rates among competitors. On average, an industry's Net Promoter leader outgrew its competitors by a factor greater than two times.
In other words, a company's NPS is a good indicator of its future growth, with this being even stronger in the finance and insurance sectors than in most others.
Regulatory Requirements and Compliance
The problems with data quality have a direct impact on any business’s ability to comply with regulatory requirements, at both an industry and a local level.
The problems with so-called “dirty data” and with duplicate data arise in seven specific ways (several of which are illustrated in the sketch after this list):
- ‘Entry Quality’ is one of the most common problems, with data acquired, adapted and augmented across multiple touchpoints, and often impacted by human error
- ‘Process Quality’ can be diminished when moving or integrating data into a single database, and is often a cause of data corruption or data loss, wreaking havoc on decision-making at every level
- ‘Identification Quality’ can be affected when software or an analyst misidentifies data, leading to disorganization within the data
- ‘Integration Quality’ can be diminished when data is integrated incorrectly, whether through technical faults or human error
- ‘Usage Quality’ differs from ‘Identification Quality’ in that it is spoiled through misjudgments of how the data should be applied, rather than misidentification of the data itself
- ‘Ageing Quality’ is affected by data that has become outdated over time
- ‘Organizational Quality’ can be marred when complex data needs to be integrated into a single database.
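To make these categories more concrete, here is a minimal sketch, written with the open-source pandas library, of how a few of these dimensions (entry, identification and ageing quality) might be flagged automatically. The table, thresholds and column names are hypothetical and illustrative only; they do not represent any particular vendor's tooling.

```python
import pandas as pd

# Hypothetical customer records; column names are illustrative only.
records = pd.DataFrame({
    "customer_id": [101, 102, 102, 103],
    "email": ["a@example.com", None, None, "not-an-email"],
    "last_updated": pd.to_datetime(
        ["2022-03-01", "2019-06-15", "2019-06-15", "2022-01-20"]),
})

# Entry quality: missing or malformed values captured at the point of entry.
missing_email = records["email"].isna()
bad_email = ~missing_email & ~records["email"].str.contains("@", na=False)

# Identification / integration quality: duplicate rows for the same customer.
duplicates = records.duplicated(subset=["customer_id"], keep="first")

# Ageing quality: records not touched within a chosen freshness window (2 years here).
stale = records["last_updated"] < pd.Timestamp.now() - pd.Timedelta(days=730)

report = pd.DataFrame({
    "missing_email": missing_email,
    "bad_email": bad_email,
    "duplicate": duplicates,
    "stale": stale,
})
print(report.sum())  # count of records failing each check
```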
As well as causing compliance issues and directly impacting businesses’ ability to respond to customer expectations, dirty data has wider ramifications. For example, duplicate data clearly expands data lakes, sometimes to 2-3 times the size they ought to be, with a significant increase in the cost of storing and manipulating the data.
Holding inaccurate personally identifiable information (PII) can present opportunities for catastrophic data breaches, especially when sending out financial transaction information.
Inaccurate data can also impact marketing efforts, resulting in information not reaching the intended target audience, and in breaches of the GDPR (the EU General Data Protection Regulation).
These issues can be remedied with profiling tools and investigative techniques included in software such as IDS’s end-to-end data assurance tool, iData 2.0, which incorporates data quality, migration & transformation, and test data management (including data obfuscation for pre-production environments) all in one intuitive toolkit.
Seeking a Single Point of Data Truth
Access to clean data, alongside a robust data strategy, is essential to give decision-makers in the organization a single point of truth. Developing new products, reimagining your mobile banking experience, building AI operations into a contact center: all of these are fundamentally dependent not just on data being accurate, but on it being augmented with intelligence from a vast range of sources, from market and financial indicators to consumer trends, demographic and economic data.
There are a number of proactive data quality measures that can be applied to identify and fix problems with dirty data. Intuitive ‘profile and cleanse’ functionality can make data cleansing more efficient: removing duplicate or irrelevant data, fixing structural errors, filtering unwanted outliers, handling missing data, validating it, and providing a measure of assurance of the data’s quality.
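As a rough illustration only, and not a description of any specific product, the sketch below shows what such a ‘profile and cleanse’ pass might look like in pandas, applied to a hypothetical table of transactions with assumed ‘amount’ and ‘currency’ columns.

```python
import pandas as pd

def profile_and_cleanse(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative 'profile and cleanse' pass over a hypothetical transactions table."""
    # Profile first: understand completeness and duplication before changing anything.
    print("rows:", len(df))
    print("missing values per column:")
    print(df.isna().sum())
    print("duplicate rows:", df.duplicated().sum())

    # Remove duplicate rows.
    df = df.drop_duplicates().copy()

    # Fix a common structural error: inconsistent casing and whitespace in a code field.
    df["currency"] = df["currency"].str.strip().str.upper()

    # Handle missing data: drop rows with no amount, fill missing currency codes.
    df = df.dropna(subset=["amount"]).copy()
    df["currency"] = df["currency"].fillna("UNKNOWN")

    # Filter unwanted outliers: keep amounts within the 1st-99th percentile.
    low, high = df["amount"].quantile([0.01, 0.99])
    df = df[df["amount"].between(low, high)].copy()

    # Validate: fail loudly if an assumed business rule (no negative amounts) is broken.
    assert (df["amount"] >= 0).all(), "negative transaction amounts found"
    return df
```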
Master data management (MDM) is the process of managing, organizing, centralizing, synchronizing, and categorizing data across departments, according to a predefined set of rules.
Whilst data governance is the higher-level, strategic approach a company takes to its data, MDM is a common tactic for achieving specific data governance goals.
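By way of a small, hedged example, the following pandas sketch applies one common type of predefined MDM rule, a “most recently updated record wins” survivorship rule, to consolidate customer records held by two departments into a single golden record. The department names, columns and data are assumptions made purely for illustration.

```python
import pandas as pd

# Hypothetical customer records held by two departments.
retail = pd.DataFrame({
    "customer_id": [1, 2],
    "email": ["old@example.com", "b@example.com"],
    "updated": pd.to_datetime(["2021-05-01", "2022-02-10"]),
    "source": "retail_banking",
})
mortgages = pd.DataFrame({
    "customer_id": [1],
    "email": ["new@example.com"],
    "updated": pd.to_datetime(["2022-03-15"]),
    "source": "mortgages",
})

# Predefined survivorship rule: for each customer, the most recently
# updated record across departments becomes the master ("golden") record.
combined = pd.concat([retail, mortgages], ignore_index=True)
golden = (combined.sort_values("updated", ascending=False)
                  .drop_duplicates(subset=["customer_id"], keep="first")
                  .sort_values("customer_id"))
print(golden)
```

In practice, survivorship rules are usually richer than a single timestamp comparison, combining source-system precedence, field-level trust and manual data stewardship.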
Is There a Way Out of the Data Storm?
Data quality management provides a context-specific process for improving data quality for analysis and decision-making. It applies a range of processes and technologies to ever larger and more complex data sets, with the goal of creating insight into the health of the data and, ultimately, data truth based on certainty.
This can only be achieved by assuring 100% of the data, through 100% of the journey, 100% of the time.
Those banks and financial institutions that remain wedded to manual processes will continue to assure only a tiny fraction of their data at any given time. But is 100% data certainty a realistic goal?
With the right methodology, tools, and an understanding of total data quality management, the answer has to be, of course, yes. The alternative is poor decision-making, unnecessary cost, lack of control, and increased exposure to risk.
Reducing Taxonomy Assurance Costs for a Major Bank
Learn how IDS, by prioritizing data quality assurance above everything else, cut taxonomy assurance costs by 90%.