Crushing Test Coverage with Equivalence Partitioning


It’s impossible to test every criterion and scenario when performing validation; mathematically, the combinations are overwhelming.  Because we focus on reaching this unobtainable goal, a common error is the creation of too many test cases.  Think about the issues and challenges this causes.  Without performing risk analysis and prioritizing the design, execution, and analysis of our test cases, we try to “do it all,” then run out of time or budget and fail to execute them all.  The result: we fail to design and/or execute the tests that validate the most critical aspects of our system.  Once in production, this problem escalates.

How do we design test plans and test cases so that test case quantity is minimized while coverage is maximized?   Coverage describes how much of a system has been or will be validated; it’s most often measured for requirements and code.  There’s strong evidence that higher coverage correlates with higher post-release quality.  Some common code coverage strategies include:

• Statement coverage
• Decision (branch) coverage
• Condition coverage
• Decision-condition coverage
• Multiple-condition coverage
• Domain coverage
• All-uses coverage
• All-defs coverage


All of these are code coverage strategies intended for white box testing (such as unit testing).
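To make the distinction between the first two strategies concrete, here is a minimal sketch (my own illustration, not from any particular project) showing that a single test can achieve full statement coverage yet still miss a branch outcome that decision coverage would require:

```python
def apply_discount(price_cents: int, is_member: bool) -> int:
    """Apply a 10% member discount; prices are in integer cents."""
    if is_member:
        price_cents = price_cents * 90 // 100  # 10% off for members
    return price_cents

# One test executes every statement, so statement coverage is 100%:
assert apply_discount(10000, True) == 9000

# But decision (branch) coverage also requires the False outcome of the if:
assert apply_discount(10000, False) == 10000
```

The member-only test exercises every line, but only decision coverage forces us to also verify the path where the discount is skipped.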

My focus in this blog is Equivalence Partitioning.  This is a black box testing approach, typically performed after unit testing and based on specifications and requirements.  It’s an outstanding tool for maximizing coverage while keeping test cases and test executions to a minimum, and it’s applied prior to actual test case design and execution.  (I’m also a proponent of exploratory testing, in which specific test cases are not designed in advance.)  Since it’s impossible to test all inputs to a program, let alone their variations and combinations, we need to design and develop a limited number of inputs that still cover every variation of input.

A VIN (Vehicle Identification Number) is a perfect example, and something I’m very familiar with due to my experience with VIN edits on an FBI NCIS re-engineering project.  VINs were introduced in the 1950s, but until 1981 there were no accepted VIN standards.  In 1981, the National Highway Traffic Safety Administration of the U.S. standardized the format, requiring all over-the-road vehicles sold to carry a 17-character VIN.  As you can imagine, there are countless variations of an invalid VIN to try when testing edit error message functionality.  Our goal is to generate a limited number of inputs that we can assume also stand in for the inputs not tested but meeting the same business rule.  We start by separating – partitioning – the inputs into a finite number of equivalence classes (ECs).  We assume that a test using a representative value from an EC is equivalent to a test using any other value in that EC: if one value within an EC uncovers a defect, then all other values in that same EC will uncover the same defect – and vice versa.  This significantly decreases the number of values to be tested while still providing coverage.  Some detailed examples:
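As a sketch of how partitioning shrinks the VIN test space, here is a simplified, hypothetical format check based on the post-1981 rules (exactly 17 characters, digits and letters excluding I, O, and Q; the real standard also defines a check digit, which this sketch ignores), with one representative value per equivalence class:

```python
import re

# Simplified post-1981 VIN format: 17 characters, letters A-Z except I, O, Q.
# The real standard also has a check digit; this illustration ignores it.
VIN_PATTERN = re.compile(r"^[A-HJ-NPR-Z0-9]{17}$")

def is_valid_vin_format(vin: str) -> bool:
    return bool(VIN_PATTERN.match(vin))

# One representative per equivalence class, instead of countless variations:
assert is_valid_vin_format("1HGCM82633A004352")        # valid: 17 legal chars
assert not is_valid_vin_format("1HGCM82633A00435")     # invalid: too short
assert not is_valid_vin_format("1HGCM82633A0043521")   # invalid: too long
assert not is_valid_vin_format("1HGCM82633A00435I")    # invalid: contains 'I'
```

Any other 16-character input belongs to the same “too short” class as the one tested, so by the EC assumption it needs no separate test case.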

  • If an input condition specifies a set of input values not handled differently (e.g., state abbreviation is “MA”, “FL”, or “GA”), identify one valid equivalence class covering all three values and one invalid equivalence class (e.g., “AK”)
  • If an input condition specifies a range (e.g., x is 1-99), identify one valid equivalence class (0<x<100) and two invalid equivalence classes (x<1 and x>99)
  • If an input condition specifies a “must be” situation (e.g., the first character must be a letter), identify one valid equivalence class (the first character is a letter) and one invalid equivalence class (it is not a letter)
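The three rules above can be sketched as follows; the function names and rules are illustrative, with one representative test value chosen per equivalence class:

```python
VALID_STATES = {"MA", "FL", "GA"}

def state_ok(s: str) -> bool:
    # Set rule: a fixed set of values not handled differently.
    return s in VALID_STATES

def range_ok(x: int) -> bool:
    # Range rule: x must be 1-99.
    return 1 <= x <= 99

def first_char_ok(s: str) -> bool:
    # "Must be" rule: the first character must be a letter.
    return len(s) > 0 and s[0].isalpha()

# Set rule: one valid representative plus one invalid value:
assert state_ok("FL") and not state_ok("AK")
# Range rule: one valid class (1-99) and two invalid classes (below, above):
assert range_ok(50) and not range_ok(0) and not range_ok(100)
# "Must be" rule: one valid class and one invalid class:
assert first_char_ok("A123") and not first_char_ok("9ABC")
```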

It’s also important to consider Boundary Value Analysis and special cases.  Boundary Value Analysis emphasizes choosing values at the boundaries and somewhere in the middle of each input’s domain, since values at the boundaries are more likely to expose a defect than values that are not.  Special cases include: lists (consider an empty list, and corrupt first and last elements); strings (consider a NULL string, an empty string, and very large strings); and file/directory paths (consider an invalid drive, slashes the wrong way, and invalid characters).

If you have any feedback, any questions, or any topics you’d like addressed in future blogs or “Quality in a Quick” videos, please email me.  I’m Bob Crews, President and Co-Founder of Checkpoint Technologies.


Thank you so much. Make it a great day!

Bob Crews
