Measuring Software Quality — it isn’t about metrics

Julie Griech
5 min read · Nov 16, 2018
Photo by Dawid Małecki on Unsplash

One of the most important questions about a software quality program is always: how are we going to measure quality?

This is a valid question that deserves attention — and a small adjustment in order to capture the full spirit of what it means to have a highly effective quality program. The adjustment is to the question itself. Instead of asking about measurements, we should be asking: how is our quality program helping us build and deliver better products?

This distinction is needed because measurements come after undesirable outcomes have already happened: bugs have occurred, requirements have not been met, usability is not at its best, or tests have failed. These problems deserve attention, but they are inevitable and a constant part of software development, no matter how mature our process is or how diligently we work to prevent them.

Using Metrics to Measure Quality is Flawed, But Why?

Most testing metrics come out of two systems: 1) a bug tracking tool, and 2) a test case management and test run tracking tool. Both systems hold valuable data that can help us understand where we are doing well and where we are not. However, they are not well suited for decision making, and they were never intended to implement and run a quality program.

These systems are flawed in that data can be entered incorrectly, it can be altered after collection, explainable outliers cannot be excluded from the metrics, and it is difficult to draw reliable statistical connections between multiple data elements.

An excellent quality program seeks to proactively create and maintain practices that ensure you are building quality into your process and development, and not attempting to test quality into your software.

Attempting to build quality into a product through testing alone is futile: it creates churn and adds too much risk to your applications.

Baking Quality into your Workflow and Measuring its Success — Three Ways

Photo by Erin Waynick on Unsplash

Success is a mixture of many things. We hardly ever arrive at it without failures and retries, stops and starts. I see each challenge we face in software quality assurance as an opportunity to implement something creative that is effective and becomes embedded in our workflows.

One: Quality Plans

When test plans and test cases come together, they are better as one. I call this happy marriage of information a Quality Plan. A Quality Plan that keeps all planning and testing details together is much more efficient and more fun to work with. How a Quality Plan is structured is up to you. I favor test cases that allow for ideation, free from detailed steps and expected results. Quality Plans lean toward scenario-based testing and call out only what actually matters for a particular test.
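
As a rough illustration (not a prescription), a scenario from a Quality Plan can often be captured directly as an automatable test. The sketch below uses Python and pytest; the Cart class, the discount rule, and the quality_plan marker are hypothetical names invented for this example.

```python
# Hypothetical scenario from a "checkout-discounts" Quality Plan, written as a
# pytest test. The Cart class and discount rule exist only for this sketch.
import pytest


class Cart:
    """Toy stand-in for the application code under test."""

    def __init__(self):
        self.items = []

    def add(self, price):
        self.items.append(price)

    def total(self, discount=0.0):
        return round(sum(self.items) * (1 - discount), 2)


@pytest.mark.quality_plan("checkout-discounts")
def test_discount_applies_only_when_cart_reaches_threshold():
    # Scenario: a 10% discount applies once the cart total reaches 100.00.
    cart = Cart()
    cart.add(40.00)
    assert cart.total() == 40.00               # below threshold: no discount

    cart.add(60.00)
    assert cart.total(discount=0.10) == 90.00  # threshold reached: 10% off
```

Notice that the test states the scenario and the outcome that matters, not a click-by-click script.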

Process Needed:

  • Ensure that Quality Plans are reviewed with Product and Development, and any other key stakeholders.
  • Ensure that Quality Plans are being executed on local branches by Developers and Testers.
  • Ensure that Quality Plans are automated and run in your CI environment (one way of wiring this up is sketched just below this list).
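
One way (among many) to make Quality Plan scenarios selectable in a pipeline is to register a custom pytest marker and have the CI job filter on it. This is a minimal sketch under that assumption; the marker name is hypothetical.

```python
# conftest.py -- hypothetical sketch: register a custom marker so a CI job can
# select Quality Plan scenarios with `pytest -m quality_plan`.
def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "quality_plan(name): ties a test to a scenario in a Quality Plan",
    )
```

The CI job itself would then simply run `pytest -m quality_plan` on every push; how that is wired up depends on your CI tool.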

Measurements:

  1. Track any changes or new backlog items that come from the reviews and the testing. This signifies that the testing has uncovered gaps in design, requirements, and/or implementation.
  2. Track what is added to your automation repository based on each Quality Plan.

Two: Continuous Review

Team reviews of incremental development are an easy way to validate that what is desired is actually being implemented. This means that developers are creating what Product wants, and that what Quality is planning to validate is equally correct. If we waited to check in with each other until the development work was completed, we would need to rework code that has already been written. This poses a risk not only to schedules, but also to stability and, likely, to related feature work. The goal is to catch discrepancies and issues early, build the feature correctly the first time, and adjust testing plans accordingly.

Process Needed:

  • Ensure developers complete entrance reviews before coding begins. These should happen once they have had an opportunity to review the requirements. The purpose of these reviews is to get questions answered, clarify misunderstandings, and to resolve potential issues. Entrance reviews should take place between key stakeholders (e.g., Product, Design, Development, Quality).
  • Ensure developers complete ongoing reviews with your key stakeholders when issues, questions, or roadblocks come up.
  • Ensure developers complete exit reviews with your key stakeholders prior to merging their code in order to validate that what has been developed is as expected.

Measurements:

  • Track that the Entrance and Exit reviews are taking place.
  • Track any rework or changes to requirements that occur as a result of the reviews.

Three: Automation

Test automation is a key element to a successful quality program. Beyond speeding up testing and being consistently repeatable, it frees up manual testers to be more creative and to spend more time doing exploratory testing.

Process Needed:

  • Ensure that manual testers and automators pair on validated Quality Plans.
  • Ensure that you have a clearly defined and well-documented test automation framework that includes information for developers who want to contribute functional tests (a minimal example of such a contribution follows this list).
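
To make the documentation point concrete, here is one hypothetical shape it might describe: a fixture the framework provides, and the small functional test a developer would contribute against it. The api_client fixture and the /health endpoint are assumptions for illustration only.

```python
# test_health.py -- hypothetical example of a developer-contributed functional
# test that leans on a fixture supplied by the team's automation framework.
import pytest


@pytest.fixture
def api_client():
    # A real framework would return a configured HTTP client; a stub keeps
    # this sketch self-contained and runnable.
    class StubClient:
        def get(self, path):
            return {"status": 200, "path": path}

    return StubClient()


@pytest.mark.quality_plan("service-health")
def test_health_endpoint_returns_ok(api_client):
    response = api_client.get("/health")
    assert response["status"] == 200
```

The framework documentation's job is to make tests like this cheap for developers to write.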

Measurements:

  1. Track when manual tests are retired from test runs because they are now running in the pipeline.
  2. Track when developers contribute automated tests to the repository (a simple tracking sketch follows this list).
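
Tracking either of these does not require a heavyweight tool. As a rough sketch, assuming the hypothetical quality_plan marker from earlier and a conventional tests/ directory, a small script can report how many automated scenarios each Quality Plan has, which makes the trend easy to watch over time.

```python
# track_qp_automation.py -- hypothetical sketch: count automated scenarios per
# Quality Plan by scanning test files for the quality_plan marker.
import re
from collections import Counter
from pathlib import Path

MARKER = re.compile(r'@pytest\.mark\.quality_plan\(["\']([^"\']+)["\']\)')


def count_by_plan(test_dir="tests"):
    """Return a Counter mapping Quality Plan names to automated test counts."""
    counts = Counter()
    for path in Path(test_dir).rglob("test_*.py"):
        counts.update(MARKER.findall(path.read_text()))
    return counts


if __name__ == "__main__":
    for plan, total in count_by_plan().most_common():
        print(f"{plan}: {total} automated scenario(s)")
```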

The goal of a software quality program is to have a positive impact on the quality with which we develop software, from the beginning of the lifecycle through delivery. As we study the outcomes of our quality practices, we gain a clear and full view of where we are meeting, or not meeting, the goals of our program. Numbers and trend lines alone cannot do that well, or at all.

Every quality practice that is implemented should have a well-defined desired outcome and an intention of helping the overall process achieve a well-tested, usable, and stable application.
