An introduction to Data Quality

Hadi Fadlallah
Towards Data Science
5 min read · Dec 9, 2018


Reference: www.datapine.com/blog/data-quality-management-and-metrics/
Image Reference: www.datapine.com

1. What is Data Quality?

There are many definitions of data quality. In general, data quality is an assessment of how usable data is and how well it fits its serving context.

Several dimensions help measure data quality, such as:

  • Data Consistency: the degree to which the data does not violate semantic rules defined over the dataset. [1]
  • Data Accuracy: data are accurate when the values stored in the database correspond to real-world values. [1]
  • Data Uniqueness: a measure of unwanted duplication existing within or across systems for a particular field, record, or data set. [2]
  • Data Completeness: the degree to which values are present in a data collection. [1]
  • Data Timeliness: the extent to which the age of the data is appropriate for the task at hand. [3]

Other factors can be taken into consideration [4] such as Availability, Ease of Manipulation, Believability, and Currency.
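Dimensions like completeness and uniqueness can be scored directly. The following sketch computes both for a toy dataset; the records and field names are hypothetical, chosen only to illustrate the idea:

```python
# Illustrative sketch: scoring two data quality dimensions on a toy dataset.
records = [
    {"id": 1, "name": "Alice", "email": "alice@example.com"},
    {"id": 2, "name": "Bob",   "email": None},
    {"id": 3, "name": "Alice", "email": "alice@example.com"},  # duplicate of id 1
]

def completeness(records, field):
    """Share of records whose value for `field` is present (non-null)."""
    present = sum(1 for r in records if r.get(field) is not None)
    return present / len(records)

def uniqueness(records, fields):
    """Share of distinct records when keyed on the given fields."""
    keys = [tuple(r.get(f) for f in fields) for r in records]
    return len(set(keys)) / len(keys)

print(completeness(records, "email"))          # 2 of 3 emails are present
print(uniqueness(records, ("name", "email")))  # 2 distinct (name, email) pairs out of 3
```

In practice such scores are tracked per field and over time, so that a drop in, say, email completeness is caught before it reaches reporting.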

2. Why is Data Quality Important?

Enhancing data quality is a critical concern because data is the core of all activities within organizations. Poor data quality leads to inaccurate reporting, which in turn results in inaccurate decisions and, inevitably, economic damage.

3. How to improve Data Quality?

Data quality improvement is achieved by:

  1. Training Staff
  2. Implementing data quality solutions

3.1. Training Staff

Before implementing data quality solutions, we must first minimize the data quality problems caused by human activities within the organization, such as data entry. In addition, all developers and database administrators must have good knowledge of the business process and must refer to a unified schema when designing and developing databases and applications. [5]

3.2. Implementing data quality solutions

The other way to improve data quality is to implement data quality solutions: sets of tools or applications that perform quality tasks such as:

  • Knowledge base creation: a knowledge base is a machine-readable resource for the dissemination of information. [6]
  • Data de-duplication: removing duplicated information based on a set of semantic rules.
  • Data cleansing: removing unwanted characters and symbols from values.
  • Data profiling: examining the data available in an existing information source (e.g., a database or a file) and collecting statistics or informative summaries about that data. [7]
  • Data matching: comparing two sets of collected data using techniques such as record linkage and entity resolution. [8]
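Three of these tasks can be sketched in a few lines. The snippet below cleanses a hypothetical list of name values, de-duplicates the result, and collects a minimal profile; real tools apply far richer rule sets, but the mechanics are the same:

```python
import re

# Hypothetical raw values, illustrating cleansing, de-duplication, and profiling.
raw = ["  John DOE!! ", "john doe", "Jane Smith#", "JANE  SMITH", None]

def cleanse(value):
    """Strip unwanted symbols and normalize whitespace and casing."""
    if value is None:
        return None
    value = re.sub(r"[^A-Za-z ]", "", value)   # drop non-letter symbols
    return " ".join(value.split()).title()     # collapse spaces, title-case

cleaned = [cleanse(v) for v in raw]
# Order-preserving de-duplication of the cleansed, non-null values.
deduped = list(dict.fromkeys(v for v in cleaned if v is not None))
# A minimal profile: row count, null count, distinct count.
profile = {"rows": len(raw), "nulls": cleaned.count(None), "distinct": len(deduped)}

print(cleaned)  # ['John Doe', 'John Doe', 'Jane Smith', 'Jane Smith', None]
print(deduped)  # ['John Doe', 'Jane Smith']
print(profile)
```

Note that de-duplication only works after cleansing: "john doe" and "  John DOE!! " collapse into one record only once both are normalized to the same canonical form.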

4. Popular data quality solutions

In this section, I will present some of the most popular data quality solutions on the market.

4.1. IBM Infosphere information server

IBM InfoSphere® Information Server is a market-leading data integration platform, which includes a family of products that enable you to understand, cleanse, monitor, transform and deliver data, and to collaborate to bridge the gap between business and IT. InfoSphere Information Server provides massively parallel processing (MPP) capabilities to deliver a highly scalable and flexible integration platform that handles all data volumes, big and small.

InfoSphere Information Server provides you with the ability to flexibly meet your unique information integration requirements — from data integration to data quality and data governance — to deliver trusted information to your mission-critical business initiatives (such as big data and analytics, data warehouse modernization, master data management and point-of-impact analytics). [9]

4.2. Informatica Data Quality

Informatica Data Quality delivers trustworthy data to all stakeholders, projects, and data domains for all business applications on premise or in the cloud. [10]

4.3. Oracle Data Quality

Oracle Enterprise Data Quality delivers a complete, best-of-breed approach to party and product data resulting in trustworthy master data that integrates with applications to improve business insight. [11]

4.4. Microsoft Data Quality Services

SQL Server Data Quality Services (DQS) is a knowledge-driven data quality product. DQS enables you to build a knowledge base and use it to perform a variety of critical data quality tasks, including correction, enrichment, standardization, and de-duplication of your data. DQS enables you to perform data cleansing by using cloud-based reference data services provided by reference data providers. DQS also provides you with profiling that is integrated into its data-quality tasks, enabling you to analyze the integrity of your data. [12]

4.5. Melissa Data Quality

Since 1985, Melissa has been providing enterprise data quality tools with wide capabilities including data profiling and standardization, cleansing, enriching, linking and deduping. Our mission is to provide organizations with best-of-breed solutions that deliver trusted, reliable, accurate information for greater insight. [10]

4.6. Talend Data Quality

Talend’s enterprise data quality tool profiles, cleanses, and masks data, while monitoring data quality over time, in any format or size. Data de-duplication, validation, and standardization create clean data for access, reporting, analytics, and operations. Enrich data with external sources for postal validation, business identification, credit score information, and more. [13]

4.7. Syncsort Trillium Software

Syncsort’s Trillium Cloud delivers an industry-leading enterprise data quality solution with the deployment ease and operational flexibility of a Syncsort-administered hardened, secure cloud environment. [14]

4.8. SAS Data Quality

The SAS Data Quality software enables you to improve the consistency and integrity of your data. When you increase the quality of your data, you increase the value of your analytical results.

The SAS Data Quality software supports a variety of data quality operations. The data quality operations employ predefined rules that apply to the specific context of your data (such as names or street addresses). Examples of data quality operations include casing, parsing, fuzzy matching, and standardization. [15]
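Fuzzy matching, mentioned above, can be illustrated with Python's standard library. This is not SAS's algorithm, just a minimal sketch of the idea: score string similarity and treat pairs above a chosen threshold (0.8 here, an arbitrary assumption) as candidate matches:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Case-insensitive similarity ratio between two strings (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical pairs: the first is a likely match, the second is not.
pairs = [("123 Main Street", "123 Main St."), ("Acme Corp", "Axme Inc")]
for a, b in pairs:
    score = similarity(a, b)
    verdict = "match" if score > 0.8 else "no match"
    print(f"{a!r} ~ {b!r}: {score:.2f} -> {verdict}")
```

Production tools combine such similarity scores with parsing and standardization (e.g., expanding "St." to "Street" first), which makes the matching far more reliable than raw string comparison.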

5. References
