There are quite a few data quality myths that need to be dispelled before we can move forward and mitigate data quality risks. Last year I covered 4 myths about Data Quality everyone thinks are true, which started an entire trend on LinkedIn and sparked a series of YouTube videos. So, here is the next data quality myth that we need to understand and debunk:

Myth #5: Our data is good

Do you think your organization has good data quality? I've heard a positive answer to this question more often than I expected. That confidence is a good thing, sure, but here's why so many organizations think their data quality is good and why that's often not the reality, why it's a myth!

Many organizations simply can't accept the fact that they may have an issue with the quality of their data. Let me outline some of the reasons why that is:

1. Lack of information

Let's get the obvious out of the way: the lack of information. Some business executives suffer from this: they are shielded from and unaware of the fact that their company's data quality is poor. Sometimes they are misinformed about it, or they choose not to care. Either way, it's not a good scenario to be in.

2. Data migration

For the second reason, let's consider a data set that contains bio data for customers: names, dates of birth, etc. It could be that this data set is of good quality in application A, in database A, in source system A. So when you talk to a data steward or a data custodian focused on that application, database, or source system, they will tell you: "My data is good." That could very well be the case, but within the same organization, once that data set is transferred to another database or consumed by another application, its quality will most likely drop, especially if you don't have a data governance program and a data quality program in place. It's almost unavoidable. Without these programs, even though the source application does a good job and understands the business rules and exceptions this data set needs to abide by, the same can't be said about the target application. When data is migrated to a new system, there's a high chance it will be transformed into something that conflicts with those business rules, and no one is the wiser.
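To make this concrete, here is a minimal sketch of how a rule can get lost in transit. It assumes a hypothetical date-of-birth rule enforced by source system A and a migration job that reformats the value for the target schema; the field names and formats are illustrative, not taken from any real system.

```python
from datetime import date, datetime

# Business rule enforced by source system A (assumed for illustration):
# date_of_birth must be a valid ISO date in the past.
def dob_is_valid(value: str) -> bool:
    try:
        return datetime.strptime(value, "%Y-%m-%d").date() < date.today()
    except (TypeError, ValueError):
        return False

source_record = {"customer_id": 101, "date_of_birth": "1985-04-30"}
print(dob_is_valid(source_record["date_of_birth"]))   # True in system A

# A naive migration job reformats the value for the target schema
# without carrying the rule over (hypothetical transformation).
migrated_record = {
    "customer_id": 101,
    "date_of_birth": datetime.strptime(
        source_record["date_of_birth"], "%Y-%m-%d"
    ).strftime("%d/%m/%Y"),   # now "30/04/1985"
}
print(dob_is_valid(migrated_record["date_of_birth"]))  # False in system B
```

The value is still "correct" in a human sense, yet any downstream check or join in the target system that expects the original rule will now quietly fail.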

3. Metadata

For the third reason, I remember having a conversation with a data professional who was bragging about the good quality of their data even though they didn't have a data governance program. I was happy to hear it, but also a bit skeptical for that very reason. I did have a chance to poke around a bit in one of their databases and do some data profiling. In a way he was right, the quality of the data was good, but boy was he wrong. Let me give you an example of what I mean. They were storing delivery information in address fields and even name fields. The delivery information, such as instructions on where to leave packages, what the buzzer number is and so on, was accurate, but it was stored in the wrong fields. Not to mention its consistency. So, in order to have good quality data, you should look at all data quality dimensions and consider the metadata as well. I mean, how many times have you found interesting information in name and date fields? You would think that the date field was actually of type date, but you would be surprised.
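A quick profiling pass is usually enough to surface this kind of problem. The sketch below assumes a hypothetical customer table with a last_name text field and a signup_date field stored as text; it simply counts how many "date" values actually parse as dates and how many "name" values look like free text smuggled into the wrong field.

```python
from datetime import datetime

# Hypothetical rows pulled from a customer table during profiling.
rows = [
    {"last_name": "Smith", "signup_date": "2021-03-15"},
    {"last_name": "Leave at back door, buzzer 204", "signup_date": "2021-03-16"},
    {"last_name": "Nguyen", "signup_date": "ASAP"},
]

def looks_like_date(value: str) -> bool:
    try:
        datetime.strptime(value, "%Y-%m-%d")
        return True
    except ValueError:
        return False

# Simple validity profile: how many "date" values actually parse,
# and which "name" values look suspiciously long (likely free text).
bad_dates = [r for r in rows if not looks_like_date(r["signup_date"])]
suspect_names = [r for r in rows if len(r["last_name"].split()) > 3]

print(f"{len(bad_dates)} of {len(rows)} signup_date values are not dates")
print(f"{len(suspect_names)} last_name value(s) look like free text")
```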

4. Decay factor

For the fourth reason, you could actually have good quality data today, but that doesn't mean it will retain its quality tomorrow. Back to the bio data example: if you're storing the age of an individual rather than their date of birth, and it's not updated automatically, it will be incorrect past their next birthday. Just as I addressed in the second data quality myth video, even if you cleanse your data so that it is clean now, it won't stay that way, as the quality of certain data decays just by sitting there.
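As a small illustration of the fix, assuming a hypothetical customer record, you can store the date of birth and derive the age on read so the value can never go stale:

```python
from datetime import date
from typing import Optional

def age_from_dob(dob: date, on: Optional[date] = None) -> int:
    """Derive age on demand instead of storing it, so it never decays."""
    on = on or date.today()
    return on.year - dob.year - ((on.month, on.day) < (dob.month, dob.day))

# Illustrative record: only the date of birth is persisted.
customer = {"name": "Jane Doe", "date_of_birth": date(1990, 6, 1)}

# A stored age would be frozen at load time; a derived age is always current.
print(age_from_dob(customer["date_of_birth"]))
```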

5. Post-cleansing

Lastly, and in a way this ties back to the first reason, a lot of data quality issues are "fixed" in the ETL phase before the data is output to reports, dashboards, or other places where it is available for human consumption. The problem is that these data quality efforts are not also serving the system of record or system of origin. If there is any control over that data, then the data in these source systems should be cleansed as well. Most of the time it is not, and when another data integration project comes along, the data quality efforts need to be replicated, though a lot of the time they are simply forgotten. Bad data then resurfaces in these new environments because it was never cleansed at the source.
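One way to reduce that duplicated effort, sketched below under the assumption that you control both the pipeline and the source system, is to keep cleansing rules in a shared module that the ETL job and any write-back to the system of record can both call. The standardize_phone rule and the North American country code are purely illustrative.

```python
# shared_cleansing.py (hypothetical shared rules module)
def standardize_phone(raw: str) -> str:
    """One cleansing rule, defined once, usable by the ETL job and the source system."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    # Assume North American numbers for this illustration.
    return f"+1{digits[-10:]}" if len(digits) >= 10 else raw

# The ETL pipeline cleanses the value for the report...
report_value = standardize_phone("(604) 555-0199")

# ...and the same rule can be applied when correcting the system of record,
# so the fix is not lost the next time this data is integrated elsewhere.
source_update = {"customer_id": 101, "phone": standardize_phone("(604) 555-0199")}
print(report_value, source_update)
```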

So, is your data clean or do you just think it is?

 

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}

About the author 

George Firican

George Firican is the Director of Data Governance and Business Intelligence at the University of British Columbia, which is ranked among the top 20 public universities in the world. His passion for data led him towards award-winning program implementations in the data governance, data quality, and business intelligence fields. Due to his desire for continuous improvement and knowledge sharing, he founded LightsOnData, a website which offers free templates, definitions, best practices, articles and other useful resources to help with data governance and data management questions and challenges. He also has over twelve years of project management and business/technical analysis experience in the higher education, fundraising, software and web development, and e-commerce industries.
