Resources


A metric is a measurement scale (e.g., Fahrenheit, defect severity) and a method used for measurement (e.g., a thermometer, a bug tracking tool). Defect metrics are metrics specifically focused on defects.
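For instance, here is a minimal sketch of one common defect metric, the count of defects per severity level, where the scale is severity and the measurement method is a tally over records exported from a bug tracking tool. The severity values and records are hypothetical, not drawn from any particular tool:

```python
from collections import Counter

# Hypothetical records exported from a bug tracking tool.
defects = [
    {"id": 101, "severity": "critical"},
    {"id": 102, "severity": "minor"},
    {"id": 103, "severity": "major"},
    {"id": 104, "severity": "minor"},
]

# The metric: defect count per severity level.
# Scale = the severity levels; method = counting exported records.
counts_by_severity = Counter(d["severity"] for d in defects)
print(counts_by_severity)  # e.g. Counter({'minor': 2, 'critical': 1, 'major': 1})
```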


Resources for “Defect Metrics”

Articles

Mistakes lead to the introduction of defects (also called bugs). As I'm personally well aware, like all human beings I can make a mistake at any time, no matter what I'm working on. So it is on projects: the business analyst can put a defect into a requirements specification, a tester can put a defect into a test case, a programmer can put a defect into the code, a technical writer can put a defect into the user guide, and so forth. Any work product can and often will have defects, because any worker can and will make mistakes!

Now, I like to drink good wine, and I’ve collected bottles from around the world on my travels. Some I’ve stashed away for years, waiting for the right time to drink them, which always comes sooner or later. Some of these bottles have gotten a lot more valuable over the years. Odd as this will sound, in just a few ways, bugs are like wine. They definitely get more expensive with age, taking more effort and thus incurring more cost the longer they are in the system. Also, sooner or later most bugs need to be fixed. However, it’s definitely not a good idea to leave bugs lying around—or perhaps crawling around—in a cellar, and they’re certainly not appetizing!

Continue reading →

Bug Reporting Process

By Rex Black

Last time, I talked about an internal test process: managing test execution. That process is only indirectly visible to the rest of the development team; as long as you get through all your planned tests, respond effectively to change, and report your findings intelligibly, then for all most people care your testing could be done by voodoo, augury, and a Ouija board.

Continue reading →

There are two measures that have a strong influence on the outcomes of software projects: 1) Defect potentials; 2) Defect removal efficiency.

The term “defect potentials” refers to the total quantity of bugs or defects that will be found in five software artifacts: requirements, design, code, documents, and “bad fixes” or secondary defects. 

Continue reading →
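To make the two measures concrete, here is a minimal sketch, with invented counts not taken from the excerpt, of how defect removal efficiency is commonly computed (defects removed before release divided by total defects) and how it projects the defects left for users to find:

```python
# Hypothetical defect potential for one release, broken out by the five
# artifact categories named above (all numbers are made up).
defect_potential = {
    "requirements": 120,
    "design": 150,
    "code": 210,
    "documents": 60,
    "bad fixes": 30,
}

total_potential = sum(defect_potential.values())   # 570 latent defects
removed_before_release = 510                       # found by reviews and testing

# Defect removal efficiency: share of latent defects removed pre-release.
dre = removed_before_release / total_potential
residual = total_potential - removed_before_release

print(f"Defect removal efficiency: {dre:.1%}")        # 89.5%
print(f"Defects left for users to find: {residual}")  # 60
```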

Aggregate defect data (bug reports), presented graphically, allow testers to give development, project, and executive management the “dashboard” information they need to drive development projects to successful conclusions. I discuss four charts that distill meaning from test findings in terms of product stability, quality, bug repair, root cause, and affected subsystems.

Continue reading →
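One of the charts described above, the cumulative opened-versus-closed trend that speaks to product stability, can be sketched roughly as follows; the daily counts are hypothetical, and the article itself may present the data differently:

```python
from itertools import accumulate
import matplotlib.pyplot as plt

# Hypothetical daily counts of bug reports opened and closed during test execution.
opened_per_day = [5, 8, 12, 9, 7, 4, 3, 2, 1, 1]
closed_per_day = [0, 2, 5, 8, 9, 8, 6, 5, 4, 3]

days = range(1, len(opened_per_day) + 1)
cumulative_opened = list(accumulate(opened_per_day))
cumulative_closed = list(accumulate(closed_per_day))

fig, ax = plt.subplots()
ax.plot(days, cumulative_opened, marker="o", label="Cumulative opened")
ax.plot(days, cumulative_closed, marker="s", label="Cumulative closed")
ax.set_xlabel("Test execution day")
ax.set_ylabel("Bug reports")
ax.set_title("Defect open/closed trend")
ax.legend()
plt.show()
```

When the two curves converge and flatten, the product is stabilizing; a widening gap signals that bugs are being found faster than they are being fixed.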

Most software test teams exist to assess the software’s readiness prior to release. To achieve this goal, two primary tactics are used:

  1. Execute test cases or scenarios that are likely to find errors, resemble actual usage, or both.
  2. Report the test results, the defects found, and the defects fixed, which, collectively, make up the test status and reflect the quality of the software.

For the test manager, the first task category primarily involves managing inward: assembling the test team and test resources, designing a solid test system, implementing the test system, and running the tests in an intelligent, logical fashion. The second area of responsibility involves upward and outward management. Upward management includes how you communicate with your managers, other senior managers, and executive staff. Outward management includes communication with management peers in development, configuration/release management, sales, marketing, finance, operations, technical support, and so on. As a test manager, your effectiveness in reporting test status has a lot to do with both your real and perceived effectiveness in your position. To put it another way, your management superiors and peers measure your managerial competence as much by how well you report your results as by how you obtain your results. 

Continue reading →
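As a rough illustration of the second tactic, the test results, defects found, and defects fixed can be rolled up into a single status summary. The structure and numbers below are hypothetical, not the article's own reporting format:

```python
from dataclasses import dataclass

@dataclass
class TestStatus:
    """Roll-up of test results and defect counts for status reporting."""
    tests_planned: int
    tests_run: int
    tests_passed: int
    defects_found: int
    defects_fixed: int

    def summary(self) -> str:
        pass_rate = self.tests_passed / self.tests_run if self.tests_run else 0.0
        open_defects = self.defects_found - self.defects_fixed
        return (f"{self.tests_run}/{self.tests_planned} tests run, "
                f"{pass_rate:.0%} passing, "
                f"{open_defects} defects open ({self.defects_fixed} of "
                f"{self.defects_found} fixed)")

print(TestStatus(tests_planned=400, tests_run=320, tests_passed=288,
                 defects_found=75, defects_fixed=61).summary())
# 320/400 tests run, 90% passing, 14 defects open (61 of 75 fixed)
```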

In a speech at Quality Week ’99, Roger Sherman, a Microsoft test manager, identified the leading cause of bug report closure as “unreproducible.” This is a regrettable circumstance, since such bug reports waste precious time during tight development schedules, add absolutely nothing to product quality, and lead to frustration and bad feelings between development engineers and test engineers. Sometimes, these bug reports arise from transient or random events, inconsistency of tools and configurations between test and development, or a vague definition of “correct” behavior under the tested conditions, but many bug reports closed as unreproducible are unclear, misleading, or just plain wrong.

Continue reading →
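The causes listed above (transient events, configuration differences between test and development, vague definitions of correct behavior) point to the minimum detail a report should capture to stay reproducible. A hypothetical sketch of such a record, not a template taken from the article:

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    """Fields that help a developer reproduce exactly what the tester saw."""
    summary: str
    steps_to_reproduce: list[str]   # numbered, one concrete action per step
    expected_result: str            # what "correct" behavior means here
    actual_result: str
    build: str                      # exact version or build under test
    environment: dict[str, str] = field(default_factory=dict)  # OS, browser, config

report = BugReport(
    summary="Saved report loses its filter settings after reopening",
    steps_to_reproduce=[
        "Open the sales report and filter by region = 'EMEA'",
        "Save the report and close the application",
        "Reopen the application and load the saved report",
    ],
    expected_result="Report opens with the EMEA filter still applied",
    actual_result="Report opens with all regions shown",
    build="2.4.1 (build 5872)",
    environment={"os": "Windows 10 22H2", "database": "PostgreSQL 14"},
)
```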

"Back in 2010, at the launch of Core Magazine, http://www.coremag.eu/, I wrote a series of columns to welcome people to the magazine. As a sort of Throw-Back-December, here they are, as they appeared in the original magazine issues. I hope you enjoy them."
-Rex Black
 
Greetings, and welcome to my quarterly column on software testing best practices.  When I was asked to write this column, I had to choose the approach, the theme.  The writers' aphorism says, "Write what you know." So, what do I know?
 
Well, if you know me and my consulting company, RBCS, you know that we spend time with clients around the world, in every possible industry, helping people improve their testing with training or consulting services, or doing testing for them with our outsourcing services.  Our work gives me insights into what goes on, the actual day-to-day practice of software testing.
 

Continue reading →

Webinars

Podcast Episodes

Software Testing: Listen to Your Defects 11/23/11


Length: 1h 24m 12s

As Yogi Berra famously said, “You can observe a lot just by watching.” Or listening for that matter. In testing, we can listen to our defects. Defects can tell us a lot about what’s going on with our projects, a lot about the current quality of our products, and a lot about our software engineering process and its capabilities. For example, how many defects can you expect users to find after you release the product? In some cases, defects can tell us interesting things about what’s not going on, too. For example, when testers have previous hands-on user experience, do they really write better defect reports? In this webinar, Rex will discuss important things test professionals can learn by listening to defects. He’ll illustrate these insights with a variety of case studies and examples. You’ll walk away ready to listen to your defects, and to understand what they’re telling you.

Listen now →

Why Does Software Quality (Still) Suck?


Length: 4h 20m 0s

Software quality, for the most part, sucks. It still sucks, seventy-five years since the advent of the programmable computer. Software bugs are a constant fact of life, thanks to the ubiquity of software and the ubiquity of software bugs. Sometimes the bugs cost millions of dollars or kill people. Why is the reaction so muted? Rather than just accept software bugs as unavoidable, let’s ask the obvious question: Given that manufacturing is able to achieve six sigma levels of quality (i.e., only about 3.4 defects per million opportunities), why does software quality still suck? In this webinar, Rex will address some of the real barriers to achieving six sigma quality in software, while at the same time holding software engineering as a profession accountable for not doing nearly as much as we can.

Listen now →
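For reference, the “per million” figure quoted above follows from the usual Six Sigma convention: with an allowed 1.5-sigma drift in the process mean, the nearer specification limit sits 4.5 sigma away, and the normal-distribution tail beyond 4.5 sigma is about 3.4 parts per million. A quick check of that arithmetic (a sketch, not part of the webinar):

```python
import math

def normal_tail(z: float) -> float:
    """Survival function of the standard normal distribution, P(Z > z)."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Six Sigma convention: 6-sigma limits minus an allowed 1.5-sigma mean shift
# leaves 4.5 sigma of headroom to the nearer limit.
defects_per_million = normal_tail(6.0 - 1.5) * 1_000_000
print(f"{defects_per_million:.1f} defects per million opportunities")  # ~3.4
```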

Training

There are no training products in this category currently.

