4 myths about Data Quality everyone thinks are true
Myth #1: It’s all about fixing the data
Data cleansing is an important aspect of improving data quality, but it’s not the only one. To build a sustainable data quality program you can’t just fix the data. You need to understand what needs to be fixed and why; analyze the root cause of the issues and address your findings; understand your data environment and its inter-dependencies; and identify the data owners, stewards, and custodians. You also have to profile the data and understand not just the business logic by which data gets created, maintained, and consumed, but also how that logic conflicts with your technical constraints. Finally, prevention methods such as data entry validations, regular data quality audits, clear ownership and definitions, and well-understood business and technical processes are needed to sustain the level of data quality you require. For details on all of these, please read “The trifecta of the best data quality management“.
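To make the prevention idea concrete, here is a minimal sketch of what a data entry validation might look like. The record shape, field names, and rules (`validate_contact`, required name, email format, US-style zip) are hypothetical examples, not a prescription; your business definitions would drive the actual rules.

```python
# Hypothetical sketch: simple data-entry validation rules that could run
# before a record is saved, preventing bad data rather than fixing it later.
import re

def validate_contact(record: dict) -> list[str]:
    """Return a list of validation errors for a contact record (empty = valid)."""
    errors = []
    if not record.get("name", "").strip():
        errors.append("name is required")
    email = record.get("email", "")
    if email and not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors.append("email is malformed")
    zip_code = record.get("zip", "")
    if zip_code and not re.fullmatch(r"\d{5}(-\d{4})?", zip_code):
        errors.append("zip must be 5 digits or ZIP+4")
    return errors

print(validate_contact({"name": "Ada", "email": "ada@example.com", "zip": "02139"}))  # []
print(validate_contact({"name": "", "email": "not-an-email"}))  # two errors
```

Even a handful of checks like these, applied at the point of entry, costs far less than cleansing the same mistakes downstream.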
Myth #2: It’s a one-time project
I’ve seen a lot of organizations do this: they throw money at a project meant to improve a particular set of data for a particular purpose (for example, physical addresses for a publication or appeal they need to send). The big issue is that this is treated as a one-time project when in fact maintaining data quality is never-ending. Even if you cleanse your data once, as with the physical addresses in our example, that data will decay just by sitting there. Why? People move, zip and postal codes change, and specific addresses can cease to exist. Data quality always needs to be monitored. Plus, it’s never just about the one project. The quality of a data set has ramifications that can affect your business in more ways than you think. So remember: data quality should not be project-based, but program-based.
Read about the 3 types of data quality projects a data steward should work on
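The monitoring point can be sketched in a few lines: a recurring audit that flags records due for re-verification, because addresses decay even when nothing in your system touches them. The record shape, the `stale_addresses` helper, and the one-year threshold are all assumed for illustration.

```python
# Hypothetical sketch: a recurring audit that flags address records whose
# last verification date is older than a chosen threshold.
from datetime import date, timedelta

STALE_AFTER = timedelta(days=365)  # assumed policy: re-verify yearly

def stale_addresses(records, today=None):
    """Return ids of records due for re-verification."""
    today = today or date.today()
    return [r["id"] for r in records if today - r["last_verified"] > STALE_AFTER]

records = [
    {"id": 1, "last_verified": date(2020, 1, 15)},
    {"id": 2, "last_verified": date(2023, 6, 1)},
]
print(stale_addresses(records, today=date(2023, 9, 1)))  # [1]
```

Scheduled as part of a program rather than run once, a check like this is what turns a cleansing project into ongoing quality management.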
Myth #3: It’s IT’s responsibility
This is the myth I encounter most often. Data is technical, so it must be IT’s responsibility to ensure its quality is high. Wrong! First of all, bad data affects every unit of an organization, and the organization as a whole. Potential revenue, as well as beneficial engagements and interactions with your constituent base, can be lost because of bad data. Second, even though IT plays an important role in providing the technical solution for improving data quality, it is always the business that needs to supply the definitions for each data quality dimension: completeness, accuracy, timeliness, consistency, and so on. In reality, data quality is EVERYONE’S RESPONSIBILITY. Changing people’s perception takes a long time, so this message needs to be communicated constantly in your presentations, status reports, on-boarding, and other communication vehicles. For best practices on communication, please read the “3 communication steps for successful data management programs“.
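To see why the business must own the definitions, consider measuring just one dimension, completeness. The code below is a hypothetical sketch: the `REQUIRED_FIELDS` list is exactly the kind of input only the business can provide, while IT merely computes the metric.

```python
# Hypothetical sketch: measuring the "completeness" dimension against a
# business-supplied definition of which fields count as required.
REQUIRED_FIELDS = ["name", "email", "zip"]  # defined by the business, not IT

def completeness(records, required=REQUIRED_FIELDS):
    """Fraction of required field values that are present and non-empty."""
    total = len(records) * len(required)
    filled = sum(
        1 for r in records for f in required if str(r.get(f, "") or "").strip()
    )
    return filled / total if total else 1.0

records = [
    {"name": "Ada", "email": "ada@example.com", "zip": "02139"},
    {"name": "Grace", "email": "", "zip": None},
]
print(completeness(records))  # 4 of 6 required values filled, about 0.67
```

Change the business definition of “required” and the score changes too; the technology is the easy half of the equation.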
Myth #4: A good tool will ensure success
This is a misconception that applies not only to data quality but to many other pain points an organization is trying to solve. Good tools are important and needed, but it is people who define the scope and the issues that need to be resolved, people who analyze the causes of bad data, and people who create the data quality and business rules for data cleansing, data integration, and overall data quality management, as well as assign roles and responsibilities for the ongoing maintenance of data quality. People and their skills are arguably the most important cog in the data quality improvement machine.
What myths have you heard that you thought were true?