Resources

Resources for “Metrics”

Articles

At RBCS, a growing part of our consulting business is helping clients with metrics programs. We’re always happy to help with such engagements, and I usually try to do the work personally, because I find it so rewarding. What’s so great about metrics? Well, when you use metrics to track, control, and manage your testing and quality efforts, you can be confident that you are managing with facts and reality, not opinions and guesswork.

When clients want to get started with metrics, they often have questions. How can we use metrics to manage testing? What metrics can we use to measure the test process?  What metrics can we use to measure our progress in testing a project? What do metrics tell us about the quality of the product? We work with clients to answer these questions all the time. In this article, and the next three articles in this series, I’ll show you some of the answers.

Continue reading →

Analytical risk based testing offers a number of benefits to test teams and organizations that use this strategy. One of those benefits is the opportunity to make risk-aware release decisions.  However, this benefit requires risk based test results reporting, which many organizations have found particularly challenging.  This article describes the basics of risk based testing results reporting, then shows how Rex Black (of RBCS) and Nagata Atsushi (of Sony) developed and implemented new and ground-breaking ways to report test results based on risk.

Testing can be thought of as one way to reduce the risks to system quality prior to release. Quality risks typically include possible situations like slow system response to user input, incorrect calculations, corruption of customer data, and difficulty in understanding system interfaces. All testing strategies, competently executed, will reduce quality risks. However, analytical risk based testing, a strategy that allocates testing effort and sequences test execution based on risk, minimizes the level of residual quality risk for any given amount of testing effort.

There are various techniques for risk based testing, including highly formal techniques like Failure Mode and Effect Analysis (FMEA). Most organizations find this technique too difficult to implement, so RBCS typically recommends, and helps clients to implement, a technique called Pragmatic Risk Analysis and Management (PRAM). You can find a case study of PRAM implementation at another large company, CA, here. While this article describes the implementation of the technique for projects following a sequential lifecycle, a similar approach has been implemented by organizations using Agile and iterative lifecycle models.
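The core allocation idea behind a risk based strategy can be sketched in a few lines of Python. This is an illustrative sketch, not PRAM itself: the risk items, the ratings, and the 1–5 scales are all assumptions for the example. Each quality risk item gets a likelihood and an impact rating; testing effort is then allocated and sequenced from the highest resulting risk score down.

```python
# Illustrative quality risk items with assumed likelihood/impact ratings
# on a 1 (low) to 5 (high) scale.
risk_items = [
    {"risk": "slow response to user input",  "likelihood": 4, "impact": 3},
    {"risk": "incorrect calculations",       "likelihood": 2, "impact": 5},
    {"risk": "corruption of customer data",  "likelihood": 1, "impact": 5},
    {"risk": "confusing user interface",     "likelihood": 3, "impact": 2},
]

# Risk score = likelihood x impact; higher scores get more testing, sooner.
for item in risk_items:
    item["score"] = item["likelihood"] * item["impact"]

# Sequence test execution from highest risk to lowest.
for item in sorted(risk_items, key=lambda i: i["score"], reverse=True):
    print(f'{item["score"]:>2}  {item["risk"]}')
```

Real risk analyses weigh more than two factors and involve stakeholder discussion, but the mechanics of scoring and sequencing look much like this.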

Continue reading →

In the previous article in this series, I offered a number of general observations about metrics, illustrated with examples. We talked about the use of metrics to manage testing and quality with facts. We covered the proper development of metrics, top-down (objective-based) not bottom-up (tools-based). We looked at how to recognize a good set of metrics.

In the next three articles in the series, we’ll look at specific types of metrics. In this article, we will take up process metrics. Process metrics can help us understand the quality capability of the software engineering process as well as the testing capability of the software testing process. Understanding these capabilities is a pre-requisite to rational, fact-driven process improvement decisions. In this article, you’ll learn how to develop and understand good process metrics.

Continue reading →

In the previous article in this series, we moved from general observations about metrics to a specific discussion of how metrics can help you manage processes. We talked about the use of metrics to understand and improve test and development process capability with facts. We covered the proper development of process metrics, starting with objectives for the metrics and ultimately setting industry-based goals for those metrics. We looked at how to recognize a good set of process metrics, and the trade-offs for those metrics.

In this and the next article in the series, we’ll look at two more specific types of metrics.  In this article, we turn from process to project metrics.  Project metrics can help us understand our status in terms of the progress of testing and quality on a project.  Understanding current project status is a pre-requisite to rational, fact-driven project management decisions.  In this article, you’ll learn how to develop, understand, and respond to good project metrics.

Continue reading →

This article on risk based test results reporting, by Rex Black (of RBCS) and Nagata Atsushi (of Sony), was originally published in Software Test and Quality Assurance (www.softwaretestpro.com) in their December 2010 edition.

Continue reading →

In the previous article in this series, we moved from a discussion of process metrics to a discussion of how metrics can help you manage projects. I talked about the use of project metrics to understand the progress of testing on a project, and how to use those metrics to respond and guide the project to the best possible outcome. We looked at the way to use project metrics, and how to avoid the misuse of these metrics. 

In this final article in the series, we’ll look at one more type of metric. In this article, we examine product metrics. Product metrics are often forgotten, but having good product metrics helps you understand the quality status of the system under test. This article will help you understand how to use product metrics properly. I’ll also offer some concluding thoughts on the proper use of metrics in testing, as I wind up this series of articles. 

As I wrote above, product metrics help us understand the current quality status of the system under test. Good testing allows us to measure the quality and the quality risk in a system, but we need proper product metrics to capture those measures. These product metrics provide the insights to guide where product improvements should occur, if the quality is not where it should be (e.g., given the current point in the schedule). As mentioned in the first article in this series, we can talk about metrics as relevant to effectiveness, efficiency, and elegance.

Effectiveness product metrics measure the extent to which the product is achieving desired levels of quality. Efficiency product metrics measure the extent to which a product achieves that desired level of quality in an economical fashion. Elegance product metrics measure the extent to which a product effectively and efficiently achieves those results in a graceful, well-executed fashion.
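As a concrete illustration of an effectiveness product metric, consider defect density: defects found per thousand lines of code, often tracked per component to show where quality stands. This is my example, not necessarily a metric from the article, and the counts below are assumed.

```python
def defect_density(defects: int, lines_of_code: int) -> float:
    """Effectiveness product metric: defects per KLOC (thousand lines of code)."""
    return defects / (lines_of_code / 1000)

# Example: 30 defects found in a 15,000-line component.
print(defect_density(30, 15_000))  # → 2.0
```

Tracked over time and compared across components, a metric like this points to where product improvement effort should go.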

Continue reading →

There are two measures that have a strong influence on the outcomes of software projects:

  1. Defect potentials
  2. Defect removal efficiency

The term “defect potentials” refers to the total quantity of bugs or defects that will be found in five software artifacts: requirements, design, code, documents, and “bad fixes” or secondary defects. 
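Defect removal efficiency, the second measure, is conventionally computed as the percentage of total defects found that were removed before release. A minimal sketch, with assumed example counts:

```python
def defect_removal_efficiency(found_before_release: int,
                              found_after_release: int) -> float:
    """Percentage of all known defects that were removed before release.

    DRE = defects found and removed before release
          / total defects found before and after release, as a percentage.
    """
    total = found_before_release + found_after_release
    if total == 0:
        return 100.0  # no defects observed at all; treat as fully efficient
    return 100.0 * found_before_release / total

# Example: 450 defects removed during the project, 50 escaped to the field.
print(defect_removal_efficiency(450, 50))  # → 90.0
```

In practice the field count is accumulated over some fixed window after release (often the first 90 days), so the metric firms up only after that window closes.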

Continue reading →

Project managers must develop quality systems that provide the needed features in a timely and cost-effective fashion. Testing systems prior to release is necessary to assess their quality, but what’s the return on that testing investment? This article describes four quantifiable metrics for testing ROI.

Continue reading →

Most software test teams exist to assess the software’s readiness prior to release. To achieve this goal, two primary tactics are used:

  1. Execute test cases or scenarios that are likely to find errors, resemble actual usage, or both.
  2. Report the test results, the defects found, and defects fixed, which, collectively, make up the test status and reflect the quality of the software.

For the test manager, the first task category primarily involves managing inward: assembling the test team and test resources, designing a solid test system, implementing the test system, and running the tests in an intelligent, logical fashion. The second area of responsibility involves upward and outward management. Upward management includes how you communicate with your managers, other senior managers, and executive staff. Outward management includes communication with management peers in development, configuration/release management, sales, marketing, finance, operations, technical support, and so on. As a test manager, your effectiveness in reporting test status has a lot to do with both your real and perceived effectiveness in your position. To put it another way, your management superiors and peers measure your managerial competence as much by how well you report your results as by how you obtain your results. 

Continue reading →

Testing can be considered an investment. A software organization—whether an in-house IT shop, a market-driven shrink-wrap software vendor, or an Internet ASP—chooses to forego spending money on new projects or additional features to fund the test team. What’s the return on that investment (ROI)? Cost of quality analysis provides one way to quantify ROI.
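A cost of quality analysis classifies quality-related spending into conformance costs (prevention and appraisal) and nonconformance costs (internal and external failure); testing ROI can then be framed as field-failure costs averted relative to the testing spend. A minimal sketch with assumed figures (all dollar amounts below are invented for illustration):

```python
# Assumed annual quality-related costs for an example organization.
costs = {
    "prevention": 50_000,         # training, standards, reviews
    "appraisal": 200_000,         # the testing effort itself
    "internal_failure": 150_000,  # rework on bugs found before release
    "external_failure": 100_000,  # support, patches, lost business after release
}

conformance = costs["prevention"] + costs["appraisal"]
nonconformance = costs["internal_failure"] + costs["external_failure"]
total_cost_of_quality = conformance + nonconformance
print(total_cost_of_quality)  # → 500000

# Suppose the bugs caught by testing would otherwise have failed in the
# field at a much higher cost (an assumed counterfactual figure):
external_cost_avoided = 900_000
testing_investment = costs["appraisal"] + costs["internal_failure"]
roi = (external_cost_avoided - testing_investment) / testing_investment
print(f"ROI: {roi:.0%}")
```

The arithmetic is trivial; the hard part of a real cost of quality analysis is gathering honest figures for each category, especially external failure.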

Continue reading →

Project Retrospective

By Rex Black

An excerpt from The Expert Test Manager: Guide to the ISTQB Expert Level Certification book by Rex Black, Jim Rommens, and Leo Van Der Aalst, due to be published by Rocky Nook. All material is provisional and subject to change.
 
As a colleague once told me, a good motto for software teams is: "Make interesting new mistakes." His explanation was that, since you are human, you'll make mistakes; but you should make interesting ones, ones you can learn from, and you should make each mistake only once. How do you ensure that you make only interesting new mistakes? By learning from each mistake that you make. How do you learn from each mistake? In this short article, you'll read about a proven technique for learning from mistakes: retrospectives. Useful in both Agile and traditional lifecycles, this simple technique can turn you and your colleagues into a process improvement machine!
 

Continue reading →

Webinars

Test Reporting for Impact

Recorded November 23, 2009

Podcast Episodes

Test Reporting for Impact 02/08/2010


Length: 1h 26m 57s

Testing is about producing information, and effective testing is about communicating that information to the various test stakeholders in a way that makes an impact on their thinking, their understanding, and their choices for how the project moves forward. Testers, test leads, and test managers frequently have to report on test status and on test analysis, answering important questions about bugs, progress, coverage, the meaning of the results, and the causes of the observed outcomes. In this talk, Rex Black will give examples of test status and analysis graphs and charts that have helped him achieve test reporting for impact.

Listen now →

Testing is an important investment. Organizations forego expenditures in development, support, and other initiatives to check for bugs and make sure systems will work before deploying or shipping the software. What’s the return on that investment? How can we measure the return? These are questions management will ask, and the smart test professional will be ready with answers. Listen to this webinar and be ready.

Listen now →

Some of our favorite engagements involve helping clients implement metrics programs for testing. Facts and measures are the foundation of true understanding, but misuse of metrics is the cause of much confusion. How can we use metrics to manage testing? What metrics can we use to measure the test process? What metrics can we use to measure our progress in testing a project? What do metrics tell us about the quality of the product? In this webinar, Rex will share some things he’s learned about metrics that you can put to work right away.

Listen now →

Webinar: Two Bug Metrics, Millions in Process Improvement


Length: 1h 20m 50s

When we do assessments, we always try to look at process metrics. In most cases, we can find millions of dollars in process improvement opportunities. In this webinar, Rex will show you how two very simple bug metrics, calculated from only two simple facts for each bug report using simple, free spreadsheets you can get from our website, can reveal millions and millions of dollars in potential process improvements. All the more reason to track those bugs! To paraphrase Timothy Leary: Tune in, download, and drop software costs.

Listen now →

Training

Testing Metrics: Virtual Workshop

Some of our favorite engagements involve helping clients implement metrics programs for testing. Facts and measures are the foundation of true understanding, but misuse of metrics is the cause of much confusion. How can we use metrics to manage testing? What metrics can we use to measure the test process? What metrics can we use to measure our progress in testing a project? What do metrics tell us about the quality of the product? In this virtual workshop, Rex will share some things he’s learned about metrics that you can put to work right away. You’ll work on some practical exercises to develop metrics for your testing, and have a chance to discuss those with Rex and with other attendees.

View details →

ISTQB Virtual Advanced Test Automation Engineer Boot Camp

The Advanced Test Automation Engineer Boot Camp was created by Rex Black, past President of the International Software Testing Qualifications Board (ISTQB), past President of the American Software Testing Qualifications Board (ASTQB), and co-author of a number of ISTQB syllabi. It is ideal for testers and test teams preparing for certification in a short timeframe with time and money constraints.

View details →


Copyright © 2017 Rex Black Consulting Services.
All Rights Reserved.
PMI is a registered mark of the Project Management Institute, Inc.