4 data quality myths

Myth #1: It’s all about fixing the data

Data cleansing is a very important aspect of improving data quality, but it’s not the only one. To have a sustainable data quality program, you can’t just fix the data. You need to understand what needs to be fixed and why; analyze the root cause of the issues and address any findings; understand your data environment and its inter-dependencies; and identify the data owners, stewards, and custodians. You also have to profile the data and understand not just the business logic by which data gets created, maintained, or consumed, but also how that logic conflicts with your technical constraints. Prevention methods, in the form of data entry validations, regular data quality audits, clear ownership and definitions, and well-understood business and technical processes, are also needed to sustain the level of data quality you need. For details on all of these, please read “The trifecta of the best data quality management“.
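
To make the prevention side concrete, here is a minimal sketch, in Python with pandas, of what a basic data-entry validation and profiling pass could look like. The column names, sample records, and regular expressions are illustrative assumptions for this example, not rules prescribed by the article; real validation rules come from your business definitions and technical constraints.

```python
# A sketch of a data-entry validation / profiling pass (illustrative only).
import pandas as pd

# Hypothetical contact records; the column names are assumptions for the example.
records = pd.DataFrame({
    "email": ["ana@example.org", "bob@example", None],
    "postal_code": ["V6T 1Z4", "90210", ""],
})

EMAIL_PATTERN = r"^[^@\s]+@[^@\s]+\.[^@\s]+$"
POSTAL_PATTERN = r"^(?:\d{5}|[A-Za-z]\d[A-Za-z] ?\d[A-Za-z]\d)$"  # US ZIP or Canadian format

def profile(df: pd.DataFrame) -> pd.Series:
    """Return the share of rows passing each basic validation rule."""
    checks = {
        "email_present": df["email"].notna(),
        "email_well_formed": df["email"].fillna("").str.match(EMAIL_PATTERN),
        "postal_code_well_formed": df["postal_code"].fillna("").str.match(POSTAL_PATTERN),
    }
    return pd.Series({name: passed.mean() for name, passed in checks.items()})

print(profile(records))
```

The same checks can run both at the point of entry (to prevent bad records) and as a recurring audit (to measure how well prevention is working).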

Myth #2: It’s a one-time project

I’ve seen a lot of organizations do this: they throw money at a project meant to improve a particular set of data for a particular purpose (e.g., physical addresses for a publication or appeal they need to send). The big issue is that it is seen as a one-time project when, in fact, maintaining data quality is never-ending. Even if you cleanse your data once, as with the physical addresses in our example, that data will decay just by sitting there. Why? People move, ZIP and postal codes can change, and specific addresses can cease to exist. Data quality always needs to be monitored. Plus, it’s never just about the one project. The quality of a data set can have multiple ramifications, and they can affect your business in more ways than you think. So remember that data quality should not be project-based, but program-based.
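
As one way to picture the “program, not project” point, the sketch below assumes each cleansed address record carries a last_verified timestamp and flags records that are due for re-verification. The column names and the roughly 18-month window are assumptions chosen for illustration; the idea is that such a check runs on a schedule, not once.

```python
# A sketch of a recurring decay check for cleansed addresses (illustrative only).
import pandas as pd

# Hypothetical address table; the last_verified column is an assumption.
addresses = pd.DataFrame({
    "constituent_id": [101, 102, 103],
    "last_verified": pd.to_datetime(["2024-02-01", "2021-07-15", "2022-12-05"]),
})

def needs_reverification(df: pd.DataFrame, max_age_days: int = 540) -> pd.DataFrame:
    """Return addresses not verified within the allowed window (~18 months here)."""
    cutoff = pd.Timestamp.now() - pd.Timedelta(days=max_age_days)
    return df[df["last_verified"] < cutoff]

# Meant to run as a scheduled job (e.g., monthly), not as a one-time script.
print(needs_reverification(addresses))
```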


Read about the 3 types of data quality projects a data steward should work on


Myth #3: It’s IT’s responsibility

This is the one I encounter most often. Data is technical, so it must be IT’s responsibility to ensure its quality is high. Wrong! First of all, bad data affects every unit of an organization and the organization as a whole. Potential revenue, as well as beneficial engagements and interactions with your constituent base, can be lost because of bad data. Second of all, even though IT plays an important role in providing the technical solution for improving data quality, it is always the business that needs to provide the definitions for each data quality dimension: completeness, accuracy, timeliness, consistency, etc. In reality, data quality is EVERYONE’S RESPONSIBILITY. Even though it takes a long time to change people’s perceptions, this is something that needs to be communicated constantly in your presentations, status reports, on-boarding, and through other communication vehicles. For best practices on communication, please read the “3 communication steps for successful data management programs“.
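
To show what it can look like once the business supplies those definitions and IT makes them measurable, here is a hedged sketch that encodes a few dimensions (completeness, consistency, timeliness) as simple rules over a sample table. The fields, reference values, and thresholds are assumptions for the example; the actual definitions have to come from the business.

```python
# A sketch of data quality dimensions encoded as measurable rules (illustrative only).
import pandas as pd

# Hypothetical constituent records; fields and valid values are assumptions.
constituents = pd.DataFrame({
    "name": ["Ana", "Bob", None],
    "country": ["CA", "Canada", "CA"],
    "last_updated": pd.to_datetime(["2024-01-10", "2022-06-01", "2023-11-20"]),
})

def completeness(df: pd.DataFrame) -> float:
    # Business definition (assumed): a record is complete if name is populated.
    return float(df["name"].notna().mean())

def consistency(df: pd.DataFrame) -> float:
    # Business definition (assumed): country must use an ISO 3166-1 alpha-2 code.
    return float(df["country"].isin({"CA", "US"}).mean())

def timeliness(df: pd.DataFrame, max_age_days: int = 365) -> float:
    # Business definition (assumed): records must have been updated within a year.
    cutoff = pd.Timestamp.now() - pd.Timedelta(days=max_age_days)
    return float((df["last_updated"] >= cutoff).mean())

print({
    "completeness": completeness(constituents),
    "consistency": consistency(constituents),
    "timeliness": timeliness(constituents),
})
```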

Myth #4: A good tool will ensure success

This is a misconception that applies not only to data quality, but to many other pain points an organization is trying to solve. Good tools are important and needed, but it’s the people who define the scope and the issues that need to be resolved, it’s the people who analyze the causes of bad data, and it’s the people who create the data quality and business rules for data cleansing, data integration, and overall data quality management, as well as assign roles and responsibilities for the ongoing maintenance of data quality. People and their skills are arguably the most important cog in the data quality improvement machine.

What myths have you heard of that you thought were true?

  • Other frequently held beliefs and myths (not “which [I] thought they were true”):
    - “Legacy data is not great”
    - “New/incoming data is better than old/legacy”
    - “Data quality is what it is and will not worsen”
    and of course:
    - “We are unlikely to need old/legacy data in the future”.

    • Very true. I’m guessing you had to deal with legacy data quite a bit? In your situation, were the business stakeholders ever convinced of the importance of legacy data?

      • George,
        I am a geoscientist (in the oil and gas industry), so I spend much of my life attempting to extract value from existing data – typically field- or lab-acquired data that becomes “legacy” almost immediately after acquisition. In this context, “legacy data” refers to data liable to decay and to lose fitness-for-purpose (immediate and future, planned and unforeseen). I also spend quite a bit of time planning the acquisition of new data, and I am acutely aware of the pressures to do things faster and cheaper, regardless of value, to the extent that we frequently spend time and money acquiring such half-arsed data sets that they end up being of no value.
        Data analysts know intimately the importance of legacy data, although younger people have also been led to believe that technology can make up for any data issue (when in fact technology can only cover up a data issue, which doesn’t make the data right or trustworthy, only misleading). However, they tend to have little voice and use it even less.
        The problem is that decision makers, frequently their +1 managers, do not understand data and are relentlessly bombarded by software manufacturers’ spin. There’s no sexiness in legacy data compared to new data… not least because legacy data is imperfect, whereas managers can decree that from now on, all data will be good.
        A presentation I gave some years ago to the data management community was entitled “Things are NOT getting better”. I wish they were and advocate for positive changes.

        • Thanks for sharing these details. Would love to see your presentation if you can share it on this site.
          Legacy data is definitely important, especially for deriving meaningful analytics. That’s why one of the things I’ve seen work is a side-by-side comparison of trends or other analytics derived from past data, with and without legacy data included in the data set.
          Thank you again for sharing this with us.

  • {"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}

