Data quality in numbers

In late 2019, the learning company O’Reilly surveyed 1,900 businesses on the state of data quality. A key finding: although many organizations are aware of data quality issues, most are uncertain about how best to address them.

I thought I would share some of the stark findings on data quality below:

  • Over 60% of businesses indicated that too many data sources and inconsistent data were their top data quality worry.
  • This was followed closely by disorganised data stores and a lack of metadata (50%), and poor data quality controls at data entry (47%).
  • Disorganised data stores and a lack of metadata are fundamentally governance issues. With only 20% of respondents saying their organizations publish information on data provenance and lineage, data governance clearly needs to be addressed.
  • Almost half (48%) are using machine learning or AI tools to address data quality issues, automating some of the tasks involved in discovering, profiling and indexing data.
  • People have long been known to transfer their biases onto data when analyzing it, yet only 20% of respondents cited this as a primary data quality issue.
  • Most respondents reported multiple data quality issues, yet over 70% of respondents do not have dedicated data quality teams.
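To make the profiling point above concrete, here is a minimal sketch of the kind of task those tools automate: scanning a dataset for missing values, distinct counts and mixed types per column. The function name and the sample records are hypothetical illustrations, not part of the survey.

```python
# Minimal data-profiling sketch: summarise missing values, distinct
# counts and inferred value types per column of a record set.

def profile(records):
    """Return {column: {missing, distinct, types}} for a list of dicts."""
    columns = {key for row in records for key in row}
    report = {}
    for col in sorted(columns):
        values = [row.get(col) for row in records]
        present = [v for v in values if v is not None]
        report[col] = {
            "missing": len(values) - len(present),
            "distinct": len(set(present)),
            "types": sorted({type(v).__name__ for v in present}),
        }
    return report

# Hypothetical example data showing two common quality problems:
# inconsistent casing ("UK" vs "uk") and mixed types in one column.
records = [
    {"id": 1, "country": "UK", "revenue": 1200.0},
    {"id": 2, "country": "uk", "revenue": None},
    {"id": 3, "country": "France", "revenue": "950"},
]
print(profile(records))
```

Even a summary this simple surfaces the survey's top worries: the `revenue` column mixes floats and strings and has a missing value, and `country` has more distinct values than real countries.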

Particularly in challenging times, should we all look to our data quality as the fuel to aid our recovery and growth?

Get in touch to discover how we can empower your data quality teams and results by plugging skills gaps with data specialists and solutions: assuring your data quality, guaranteeing your data governance, and helping you build, grow and scale your organization.

Source: Rachel Roumeliotis, Strategic Content Director, O’Reilly, April 28, 2020
