What You Need to Know About Regression Testing on DW/BI Projects

Regression testing on large data integration DW/BI development projects is challenging: the number of test cases is often massive, and change impacts can be widely dispersed. For high-level integration regression testing, the retest-all approach consumes too much time and too many resources.

To counter these challenges, this article proposes basing test scenarios on changes to ETL logic and data. Selected test scenarios should be semi-formal representations of detailed system requirements, with their test inputs, outputs, and conditions defined. Using test dependency information, the QA team can apply a test slicing algorithm that identifies the scenarios affected by a change, and thus the candidates for regression testing.
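The slicing idea above can be sketched in a few lines. This is a hypothetical illustration, not code from any specific tool: the scenario names, component names, and the `select_affected_scenarios` helper are all assumptions for the example. Each scenario is mapped to the ETL components it exercises, and a scenario becomes a regression candidate when that set intersects the change set.

```python
# Hypothetical sketch of dependency-based test "slicing": given a map of
# test scenarios to the ETL components they depend on, select only the
# scenarios affected by a change set. All names are illustrative.

from typing import Dict, Set

def select_affected_scenarios(
    dependencies: Dict[str, Set[str]],  # scenario -> ETL components it exercises
    changed_components: Set[str],
) -> Set[str]:
    """Return scenarios whose dependencies intersect the changed components."""
    return {
        scenario
        for scenario, components in dependencies.items()
        if components & changed_components
    }

deps = {
    "S1_customer_load": {"stg_customer", "dim_customer"},
    "S2_sales_aggregation": {"stg_sales", "fact_sales"},
    "S3_revenue_report": {"fact_sales", "dim_customer"},
}
changed = {"dim_customer"}
print(sorted(select_affected_scenarios(deps, changed)))
# ['S1_customer_load', 'S3_revenue_report']
```

The hard part in practice is not the set intersection but building and maintaining the scenario-to-component dependency map, which is where the requirements traceability discussed later comes in.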


What is regression testing?

Following are three ways of understanding regression testing:

  1. Testing performed after developing functional improvements or repairs to data and BI reports. The purpose of those tests is to establish whether the changes have regressed other attributes of the data and reports.
  2. A series of tests intended to show that the software’s overall behavior is unchanged except as required by adjustments to the software or data.
  3. Testing conducted for the purpose of evaluating whether specific changes to the system have introduced new failures.

Figure 1 shows domains of regression testing to be considered after changes to source data, DW data, ETLs, business logic, and business intelligence reports.

Common testing domains across the DW/BI project lifecycle
Figure 1: Common testing domains across the DW/BI project lifecycle


Common strategies for selecting regression test suites

  1. High priority and high-risk use cases. Choose baseline tests to rerun by risk heuristics – those with the most risk to data, reports, or dashboards when failing.
  2. End-to-end operational profiles. Choose baseline tests to rerun by allocating time/QA resources in proportion to operational profile risks (source extraction, data staging, data mart loads, etc.).
  3. Business logic and/or data changes. Choose baseline tests to rerun after assessing changes to code and data.
  4. Select from existing test cases. Choose baseline tests for regression testing by analyzing dependencies and relationships with changed or added code.


Recommended strategies for DW/BI regression test planning

Combine the four common strategies above: any one of them may be adequate for your project, but in the real world a combination of the four, as described below, is often a better choice.
It is assumed that each change (fix) is first tested by running all related test cases. Then, for regression testing:

  1. 30% of regression tests: Tests representing the riskiest functions and data, in particular those identified as affected by changes. When assigning high priority, consider business risk and how frequently users exercise the changed scenarios.
  2. 50% of regression tests: Run all tests planned for general regression testing.
  3. 20% of regression tests: Exploratory testing. Remember to properly document the results of exploratory testing. If you do not like the idea of 'exploratory testing', use that time in the schedule to improve your understanding of the requirements, the system, and your logical and architectural test coverage of the application, then plan the exploratory sessions.

Conventional allocations are 30%, 50%, and 20%; other ratios of regression tests may work better for your project (see Figure 2).
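As a minimal sketch, the split can be computed mechanically from a test budget. The `allocate_tests` helper and bucket names below are illustrative assumptions; the 30/50/20 ratios are the conventional allocation from the text, and any other ratios can be passed in.

```python
# Illustrative helper for splitting a regression-test budget across the
# three strategies. Bucket names and the function itself are assumptions
# for this sketch; the default ratios are the article's 30/50/20 split.

def allocate_tests(total_tests: int, ratios=(0.30, 0.50, 0.20)):
    """Split a test budget into (risky-changes, planned, exploratory) counts."""
    counts = [round(total_tests * r) for r in ratios]
    counts[1] += total_tests - sum(counts)  # absorb rounding in the middle bucket
    return dict(zip(("risky_changes", "planned_regression", "exploratory"), counts))

print(allocate_tests(200))
# {'risky_changes': 60, 'planned_regression': 100, 'exploratory': 40}
```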


Figure 2: Percentages of testing resources and time allocated to the three regression test strategies: risky changes/fixes, planned regression tests, and exploratory tests.

The selection of test cases for regression testing

  • Requires knowledge of logic and data changes and/or bug fixes, and how the software may be affected
  • Includes the data and business logic areas of frequent defects
  • Includes the areas which have undergone many and/or recent code changes
  • Includes the domains which are highly visible to users
  • Includes the core features of the product which are mandatory requirements by users
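The criteria above can be combined into a simple weighted score for ranking regression candidates. This is a hedged sketch: the weights, field names, and example test cases are illustrative assumptions, not recommendations from the article; each boolean field mirrors one selection bullet.

```python
# Multi-criteria scoring sketch for ranking regression candidates.
# Weights and test cases are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    covers_changed_code: bool  # relates to known logic/data changes or fixes
    defect_prone_area: bool    # area of frequent defects
    recent_changes: bool       # many and/or recent code changes
    user_visible: bool         # domain highly visible to users
    core_feature: bool         # core/mandatory product feature

WEIGHTS = {
    "covers_changed_code": 5,
    "defect_prone_area": 3,
    "recent_changes": 3,
    "user_visible": 2,
    "core_feature": 2,
}

def score(tc: TestCase) -> int:
    """Sum the weights of the criteria this test case satisfies."""
    return sum(w for field, w in WEIGHTS.items() if getattr(tc, field))

cases = [
    TestCase("T1_fact_load", True, True, False, False, True),
    TestCase("T2_report_totals", False, False, True, True, True),
]
for tc in sorted(cases, key=score, reverse=True):
    print(tc.name, score(tc))
```

Sorting by score gives a defensible, repeatable ordering when the regression window is too short to run every candidate.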


By using requirements traceability information, one can uncover affected components and their associated test scenarios and test cases for regression testing.

With information about dependencies and traceability, one can use a flow-affect analysis to identify all potentially affected logic, data (directly or indirectly), and scenarios, and thus select a set of test cases for regression testing. Check out Part 2 of this article to learn more about the process for selecting regression test cases to run after changes to DW ETL code or data.
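The indirect ("flow") part of the analysis can be sketched as a reachability walk over a component dependency graph, followed by a traceability lookup. The graph, component names, and test names below are assumptions for illustration: a change to a staging table propagates downstream to a fact table and a report, and the traceability map then yields the impacted tests.

```python
# Minimal sketch of flow-affect analysis under assumed data structures:
# propagate a change through downstream dependency edges, then use
# traceability links to collect the impacted test cases.

from collections import deque
from typing import Dict, Set

def affected_components(downstream: Dict[str, Set[str]], changed: Set[str]) -> Set[str]:
    """Breadth-first walk: everything reachable from the changed components."""
    seen, queue = set(changed), deque(changed)
    while queue:
        for nxt in downstream.get(queue.popleft(), set()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

downstream = {
    "stg_orders": {"fact_orders"},
    "fact_orders": {"rpt_daily_sales"},
}
traceability = {  # component -> test cases that exercise it
    "fact_orders": {"T_fact_orders_load"},
    "rpt_daily_sales": {"T_daily_sales_report"},
}
impacted = affected_components(downstream, {"stg_orders"})
tests = set().union(*(traceability.get(c, set()) for c in impacted))
print(sorted(tests))
# ['T_daily_sales_report', 'T_fact_orders_load']
```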


About the author 

Wayne Yaddow

Wayne Yaddow is an independent consultant with more than 20 years’ experience leading data integration, data warehouse, and ETL testing projects with J.P. Morgan Chase, Credit Suisse, Standard and Poor’s, AIG, Oppenheimer Funds, and IBM. He taught IIST (International Institute of Software Testing) courses on data warehouse and ETL testing and wrote DW/BI articles for Better Software, The Data Warehouse Institute (TDWI), Tricentis, and others. Wayne continues to lead numerous ETL testing and coaching projects on a consulting basis. You can contact him at wyaddow@gmail.com.
