Blog

Product Quality Includes the Whole Product

By Rex Black

Fresh off my diatribe just a couple weeks ago about how McAfee had a bug that serendipitously made a credit card charge in their favor, here's another "bug from the trenches."  Now, Cisco enjoys a good reputation for the quality of their products, but the packaging doesn't exactly inspire confidence.  Can you spot the error?

[Image: Cisco product packaging. Caption: Does Packaging Quality Matter? Yes]

The point of this post is not to yank Cisco's chain--though I am surprised to see such a large, serious organization making such an obvious and silly mistake--but to reinforce an important point. Product quality includes the whole product, and that includes the package.  I've done some consulting for systems and consumer electronics makers, and most of them include testing of the "out of box experience". 

Cisco, did your OBE testers miss this one?  I'd be happy to post a comment from Cisco explaining how this kind of obvious bug snuck past.

— Published


Why Is Software Quality So Unmeasured?

By Rex Black

As regular readers of this blog know, from time to time I like to throw a topic out for discussion and see what comes of it.  It's that time.

I have said (more than once) that most companies manage their holiday parties more rigorously and quantitatively than they manage the quality of their software.  That's not just a throwaway snarky line: It's a fact.  You can go to the treasurer, CFO, head accountant, whatever the moneybags person is called in any organization worth calling an organization and ask that person, "How much money did you spend on holiday parties last year?"  You'll get an answer.  You can ask people at that same organization, "What benefits did you receive from those parties?"  You'll get an answer (albeit probably not as quantitative).

Now, try this experiment: Ask Mr. or Ms. Moneybags, "How much money did your organization spend on software testing and software quality last year?"  While they might be able to answer the first half of that question (about testing), most organizations couldn't answer the second half, even though a technique for getting a good approximation of the costs of software quality has been around a long time.  (For example, check out the free tool here and also the article here.)  You can ask them about the benefits received from testing and quality and get the same lack of solid answers most of the time. 

Given that people have been accounting for stuff for thousands of years (e.g., I saw a 4,000-year-old receipt for donkeys and other sundry items, written in Sumerian cuneiform, in a museum in Japan last month), and given the widely acknowledged importance of software in the modern economy, how do we explain this lack of fiscal measurement?

Moving beyond cost, while some of our clients do have reasonably good metrics for product quality (what some call "quality in use metrics"), many companies do not.  Some companies that do have such metrics don't tie those metrics back to what is getting tested.  We've seen situations where companies knew they had interoperability problems in their data centers and yet, when we asked people who was responsible for interoperability testing, the accountability trail went around in circles and ended up nowhere.  Same story for performance and reliability. 

So, why does this happen?  The cynic in me wants to say that this problem comes down to a lack of legal liability for quality and quality problems.  In other words, until organizations are held to the same legal standards for software quality as they would be for other products (e.g., food, cars, etc.), we will see this immature approach to managing and measuring quality.  But is it really that simple?  What do you think?  Comments, please.

— Published


More Thoughts on Software Testing

By Rex Black

You can find the second part of the uTest interview here.

— Published


Some Thoughts on Software Testing

By Rex Black

While I try to stay focused on facts (along with illustrative case studies) in this blog, sometimes people ask my opinion on software testing, usually in the form of interviews.  uTest did that recently.  You can find the first part here.

— Published


McAfee Online Systems, Including Financials, Not Tested?

By Rex Black

Some of the readers of this blog have perhaps read or heard me say that the state of the common practice of software testing lags about 25 years behind the state of the art.  Here's a situation that is either a case study in why I say that, or something worse.

We (RBCS) used McAfee AntiVirus software for years on many of the PCs in our office as well as on company laptops.  This was mostly due to laziness, not a high level of satisfaction.  McAfee often came pre-installed on computers we bought, so we would simply renew the update subscriptions.  However, repeated problems with McAfee's virus protection--especially its aggressive interference with e-mail programs--led us to switch to Trend Micro, which now runs on all our PCs.

On September 30, we received an e-mail from McAfee (specifically, an account called "subscriptions" at "mcafee.com").  This e-mail read, in part, "ATTENTION: The credit card linked to your account for McAfee-based security products has expired. Please update your account now to keep your PC protected without interruption."  After checking to make sure that we were no longer using McAfee software on any of our computers, we ignored the message.  After all, the credit card had expired.

It turns out that McAfee's software had a bug in it that caused it to submit the charge anyway, because my October 13 American Express bill has a charge for $43.29 from McAfee.com.  No receipt was mailed for the charge, unlike in previous years, though I'm not sure this has anything to do with the expired card. We're going to contest and reverse this charge, of course; it is a relatively minor hassle.

The testing implications of this situation are significant, though.  Remember, antivirus software, like other security software, is software that we rely on to keep systems safe.  Given the kind of damage a widespread interruption to computer systems due to virus attacks can cause, you'd expect the entire suite of software--including the online systems that provide updates and handle customer financial matters--to be well tested, following known best practices and techniques.

So, what's the explanation for this situation?  If a credit card has expired, the system should not silently put through a charge on the card, especially when the system has sent an e-mail to the customer giving the impression that, unless the customer takes a specific and deliberate action to update the card, no charge will occur and the subscription will expire.  We have an obvious equivalence partition. Equivalence partitioning as a test technique is over 25 years old.  We clearly have a block of code that recognizes the equivalence partition and triggers an e-mail to the customer prior to the charge occurring.  Statement coverage, the lowest of the white box coverage criteria, is also a test technique that is over 25 years old.
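
To make the equivalence partitioning point concrete, here's a minimal sketch in Python.  The function name and the billing rule are my assumptions--I obviously haven't seen McAfee's code--but the point is that the two partitions, card still valid and card expired, each need only one test to expose the behavior described above.

    from datetime import date

    def should_submit_charge(card_expiry: date, today: date) -> bool:
        # Hypothetical renewal-billing rule: only charge a card that is still valid.
        # The two equivalence partitions of this input are "valid" and "expired".
        return card_expiry >= today

    def test_valid_card_is_charged():
        # Partition 1: the card has not yet expired, so the renewal charge may proceed.
        assert should_submit_charge(date(2011, 6, 30), today=date(2010, 10, 13))

    def test_expired_card_is_not_charged():
        # Partition 2: the card has expired.  This is the partition the charge on my
        # October statement fell into, so one test here would have caught the bug.
        assert not should_submit_charge(date(2010, 9, 30), today=date(2010, 10, 13))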

In risk based testing--a topic I've covered often in this blog--it's almost certain that two financial-related risks would be noted during quality risk identification:

  • Processing a credit card transaction that should be rejected for processing
  • Not processing a credit card transaction that should be accepted for processing

Best practices of risk based testing would generally lead to such risks being given a high impact rating, even if the likelihood rating were low.  That would require more than cursory testing against such risks.  Now, analytical risk based testing as a strategy is relatively leading edge, having been perfected only in the last ten years.

Maybe someone at McAfee would care to post a comment about which of the following statements is true:

  1. McAfee software systems receive only cursory testing, with financial processing code tested to less than 100% equivalence partition coverage and less than 100% statement coverage.  McAfee's test team was not aware that the system attempts charges against credit cards it knows to be no longer valid.
  2. McAfee's test team was aware that the system attempts charges against credit cards it knows to be no longer valid.  A bug was reported against this behavior by the testers, and deliberately deferred or cancelled by the product or project manager because they decided that they wanted the additional revenue from former customers.

Of course, it's quite possible that McAfee's financial processing code is tested to less than 100% equivalence partition coverage and less than 100% statement coverage, but that their testers found this bug anyway.  After all, testing to force all possible messages to occur--a simple experience based technique recommended by James Whittaker in How to Break Software--would have revealed this bug as well.

In all, any of three well-established test design techniques--one black box (equivalence partitioning), one white box (statement coverage), and one experience based (the force-all-messages attack)--would have found this bug.  Risk based testing would have led to more thorough coverage of the underlying quality risk. If financial-related quality risks are not being tested using well-established best practices, then what else isn't being tested in McAfee's systems?

— Published


A Holiday Gift for Yourself: Improving Your Testing by Christmas

By Rex Black

Those of us on the Western calendar have some holiday time coming soon, including the December break. Many of us will spend this time relaxing, which is always good. However, why not invest a little of your holiday time in improving your testing operation? After all, if you’re like most testers, you are time-constrained and need improvements that show fast results. So here are three practical ideas that you can put into action before January arrives and that will make a noticeable difference when you take on the projects that await in 2011.

Get Hip to Risk-Based Testing

I've gone on quite a bit in this blog about risk based testing, but let's keep it short and sweet here.  I have a simple rule of thumb for test execution: Find the scary stuff first. How do we do this? Make smart guesses about where high-impact bugs are likely. How do we do that? Risk-based testing.

In a nutshell, risk-based testing consists of the following:

1. Identify specific risks to system quality.

2. Assess and assign the level of risk for each risk, based on likelihood (technical considerations) and impact (business considerations).

3. Allocate test effort and prioritize (sequence) test execution based on risk.

4. Revise the risk analysis at regular intervals in the project, including after testing the first build.
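
If it helps to see steps 2 and 3 in concrete terms, here is a minimal sketch in Python. The 1-to-5 scales, the product formula for the risk number, and the sample risk items are illustrative assumptions on my part, not part of any template.

    # Step 2: assess likelihood (technical) and impact (business) for each risk item.
    # Step 3: allocate effort and sequence execution by the resulting risk number.
    risk_items = [
        {"risk": "Renewal charge against an expired credit card",   "likelihood": 2, "impact": 5},
        {"risk": "Slow response on the account settings page",      "likelihood": 4, "impact": 2},
        {"risk": "Misaligned text in the printed quick-start guide", "likelihood": 3, "impact": 1},
    ]

    for item in risk_items:
        # A simple risk priority number: higher means test it earlier and more thoroughly.
        item["rpn"] = item["likelihood"] * item["impact"]

    # Find the scary stuff first: run tests for the highest-rated risks at the start of
    # test execution, and revisit these ratings after the first build (step 4).
    for item in sorted(risk_items, key=lambda r: r["rpn"], reverse=True):
        print(f'{item["rpn"]:>2}  {item["risk"]}')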

You can make this process as formal or as informal as necessary. We have helped clients get started doing risk-based testing in as little as one day, though one week is more typical. You can mine this blog for more ideas, check out a few articles on the RBCS web site (such as this one and this one), watch the year-long series of videos in our Digital Library, or read my books Managing the Testing Process (for the test management perspective) or Pragmatic Software Testing (for the test analyst perspective).

Whip Those Bug Reports into Shape

One of the major deliverables for us as testers is the bug report. But, like Rodney Dangerfield, the bug report gets “no respect” in too many organizations. Just because we write them all the time doesn’t mean they aren’t critical—quite the contrary—and it doesn’t mean we know how to write them well. Most test groups have opportunities to improve their bug reporting process.

When RBCS does test assessments for clients, we always look at the quality of the bug reports. We focus on three questions:

1. What is the percentage of rejected bug reports?

2. What is the percentage of duplicate bug reports?

3. Do all project stakeholder groups feel they are getting the information they need from the bug reports?

If the answer to questions one or two is, “More than 5%,” we do further analysis as to why. (Hint: This isn’t always a matter of tester competence, so don’t assume it is.) If the answer to question three is, “No,” then we spend time figuring out which project stakeholders are being overlooked or underserved. Recommendations in our assessment reports will include ways to get these measures where they ought to be. Asking the stakeholders what they need from the bug reports is a great way to start—and to improve your relationships with your coworkers, too.
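
If your bug tracker can export reports with a resolution field, the first two numbers take only a few lines to compute. Here is a rough sketch in Python; the field names and resolution values are assumptions about your tracker, so adjust them to match what you actually record.

    # Sketch: compute rejected and duplicate percentages from exported bug reports.
    bug_reports = [
        {"id": 101, "resolution": "fixed"},
        {"id": 102, "resolution": "rejected"},   # e.g., works as designed or irreproducible
        {"id": 103, "resolution": "duplicate"},
        {"id": 104, "resolution": "fixed"},
    ]

    total = len(bug_reports)
    for kind in ("rejected", "duplicate"):
        pct = 100 * sum(r["resolution"] == kind for r in bug_reports) / total
        flag = "worth a closer look" if pct > 5 else "within the rule of thumb"
        print(f"{kind} reports: {pct:.1f}% ({flag})")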

Read a Book on Testing

Most practicing testers have never read a book on testing. This is regrettable. We have a lot we can learn from each other in this field, but we have to reach out to gain that knowledge.

(Lest you consider this suggestion self-serving, let me point out that writing technical books yields meager book royalties. In fact, on an hourly basis it’s more lucrative to work bagging groceries at a supermarket. Other benefits, including the opportunity to improve our field, are what motivate most of us.)

There are many good books on testing out there now. Here’s a small selection, any one of which you could work your way through during a winter vacation:

  • General tips and techniques for test engineers: Pragmatic Software Testing, Rex Black; A Practitioner’s Guide to Software Test Design, Lee Copeland.
  • Object-oriented testing: Testing Object-Oriented Systems, Robert Binder.
  • Web testing: The Web Testing Handbook, Steve Splaine.
  • Security testing: Testing Web Security, Steve Splaine; How to Break Software Security, James Whittaker.
  • Dynamic test strategies and techniques: How to Break Software, James Whittaker; Advanced Software Testing: Volume 1, Rex Black.
  • Test management: Managing the Testing Process, Rex Black; Advanced Software Testing: Volume 2, Rex Black.
  • Test process assessment and improvement: Critical Testing Processes, Rex Black; Test Process Improvement, Martin Pol et al.
  • ISTQB tester certification: Foundations of Software Testing, Rex Black et al; The Testing Practitioner, ed. Erik van Veenendaal; Advanced Software Testing: Volumes 1, 2, and 3, Rex Black et al. 

I have read each of these books (some of which I also wrote or co-wrote). I can promise you that, if you need to learn about the topic given, reading one of the books for that topic will repay you in hours and hours saved over the years, as well as teaching you at least one or two good ideas you can put in place immediately.

— Published


System Integration, Quality Risks, and Implications for Testing

By Rex Black

More and more projects involve integration of custom or commercial off-the-shelf packages, rather than in-house development or enhancement of software.  In effect, this is direct (under contract) or indirect (market purchase) outsourcing of some of the development work.

While some project managers see such outsourcing of development as reducing the overall risk, each integrated component can bring with it significantly increased risks to system quality.  Let’s take a look at each factor that can increase risk to system quality, and then talk about strategies for mitigating such risks.

  • Coupling: the more strongly the component interacts with the rest of the system, the larger the consequences for the system when the component fails.
  • Irreplaceability: when few similar components are available, you are stuck with whatever quality problems the component creates.
  • Essentiality: some key feature or features of the system will be unavailable if the component does not work properly.
  • Vendor quality problems, especially if accompanied by slow turnaround on bug fixes: if there is a high likelihood of the vendor sending you a bad component, the level of risk to the quality of the entire system is higher.
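
One rough way to use these factors is to rate each one for a candidate component and compare the totals. The 1-to-5 scale and the unweighted sum below are my own illustrative choices, not a formal method.

    # Illustrative only: rate each factor from 1 (low) to 5 (high) for a component.
    def component_quality_risk(coupling, irreplaceability, essentiality, vendor_problems):
        # Higher total = more risk the integrated component brings to system quality.
        return coupling + irreplaceability + essentiality + vendor_problems

    # Example: a tightly coupled, essential mail server component from a vendor
    # with a history of slow bug-fix turnaround.
    print(component_quality_risk(coupling=5, irreplaceability=3,
                                 essentiality=5, vendor_problems=4))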

How can you mitigate these risks?  I have seen and used various options.

One is to integrate, track, and manage the vendor testing of their component as part of an overall, distributed test effort for the system.  This involves up-front planning, along with having sufficient clout with the vendor or vendors to insist that they consider their test teams and test efforts subordinate to and contained within yours. When I have used this approach, it has worked well.

Another option is simply to trust the vendor component testing to deliver a working component to you.  This approach may sound silly and naive, expressed in such words.  However, project teams do this all the time.   My suggestion is, if you choose to do so, do so with your eyes open, understanding the risks you are accepting and allocating schedule time to deal with issues.

Another option is to decide to fix the component vendor testing or quality problems.  On one project, my client hired me to do exactly that for a vendor.  It worked out nicely.  Again, though, your organization must have the clout to insist that you be allowed to go in and straighten out what’s broken in their testing process and that they have time allocated to fix what you find.  And don’t you have your own problems to attend to?  As such, this is an ideal job for a test consultant.

A final option, especially if you find yourself confronted by proof of incompetent testing by the vendor, is to disregard their testing, assume the component is coming to you untested, and retest the component.  I’ve had to do this, notably on one project when the vendor sold my client an IMAP mail server package that was seriously buggy.

Both of the last two options have serious political implications.  The vendor is unlikely to accept your assertion that their testing is incompetent, and will likely attack your credibility. Since someone made the choice to use that vendor—and it may have been an expensive choice—that person will likely also side with the vendor against your assertion.  You’ll need to bring data to the discussion.  Better yet, see if you can influence the contract negotiations up front to include proof of testing along with acceptance testing by your team prior to payment.  It’s amazing how motivational that can be for vendors!

With the risks to system quality managed at the component level, it’s still possible to make a serious mistake in the area of testing.  Remember that even the best-tested and highest-quality components might not work well in the particular environment you intend to use them in.  So, plan on integration testing and system testing the integrated package yourself.

— Published


The Future of Test Management

By Rex Black

The smart test manager plans for the future.  These plans should cover not only the current project, but also the current decade.  How will you succeed as a test manager in the 2010s? Here are ten things you must learn to do:

  1. Connect testing to business value, including measuring effectiveness and efficiency against strategic goals;
  2. Manage testing on outsourced projects, including outsourcing of testing and outsourcing on Agile projects;
  3. Perform system integration testing on systems-of-systems projects effectively;
  4. Test systems that include open source software, and use open source tools;
  5. Test integration of new systems with legacy systems, and test the maintenance of legacy systems;
  6. Test effectively and efficiently when there's too much testing work, too little time, and too few resources;
  7. Deal with the tester "skills gluts" that are created by outsourcing and crowd-sourcing, with millions of entry-level testers;
  8. Deal with the tester "skills shortages" that are created at the upper end of the skills triangle by these entry-level testers, especially in developing regions;
  9. Choose the right certifications, including security, tools, ISTQB, technology, and more;
  10. Manage testing on iterative and Agile projects.

The smart test manager who can do these ten things will be in a strong position to succeed as this decade unfolds.  Hear more about the future of test management here.

— Published


Software Testing Podcast

By Rex Black

If you enjoy these regular small bites of software testing concepts, you might want to know that we have something very similar in a "to go" package.  Just check out our software testing podcast page.  You can download the podcasts to your MP3 player,  iPod,  iPhone, or other capable smartphone/handheld/pad device, or just play them directly from the page.

Enjoy!

— Published


Selection of Test Design Techniques in Risk Based Testing

By Rex Black

In this blog, I have talked a lot about RBCS' approach to risk based testing, which we call the Pragmatic Risk Analysis and Management (PRAM) process. As you know if you've followed our videos on risk based testing (e.g., this one), PRAM defines the following extents of testing, in decreasing order of thoroughness:

  • Extensive
  • Broad
  • Cursory
  • Opportunity
  • Report bugs only
  • None

Risk based testing does not prescribe specific test design techniques to mitigate quality risks based on the level of risk, as the selection of test design technique for a given risk item is subject to many factors. These factors include the suspected defects (what Beizer called the “bug hypothesis”), the technology of the system under test, and so forth. However, risk based testing does give guidance in terms of the level of test design (e.g., see here), implementation, and execution effort to expend, and that does influence the selection of test design techniques. The following subsections provide heuristic guides to help test engineers select appropriate test techniques based on the extent of testing indicated for a risk item by the quality risk analysis process. These guides apply to system testing and system integration testing performed by independent test teams.

Extensive

According to the quality risk analysis process template, for risks rated to receive this extent of testing, the tester should “run a large number of tests that are both broad and deep, exercising combinations and variations of interesting conditions.” Because combinational testing is specified, testers should select test design techniques that generate test values to cover combinations. These techniques are either (a) domain analysis or decision tables, or (b) classification trees, pairwise testing, or orthogonal arrays. The techniques in option (a) are appropriate where the mode of interaction between factors is understood (e.g., rules determining output values). The techniques in option (b) are appropriate where the mode of interaction between factors is not understood, or where interaction should not occur at all (e.g., configuration compatibility). For each technique selected, the strongest coverage criteria should be applied; e.g., all columns in a decision table, including the application of boundary value analysis and equivalence partitioning on the conditions in the decision table. The use of these combinational techniques guarantees deep coverage.
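
As an illustration of the option (b) techniques, here is a rough Python sketch of a common greedy approach to pairwise generation: list every pair of values across every pair of factors, then repeatedly pick the full combination that covers the most still-uncovered pairs. None of this comes from the PRAM templates, the factors shown are made up, and the brute-force search over full combinations is only workable for small factor sets; on a real project you would use a dedicated pairwise tool.

    from itertools import combinations, product

    def covers(test, pair):
        # A test covers a pair if it assigns both (factor, value) entries of the pair.
        return all(test[factor] == value for factor, value in pair)

    def pairwise_tests(params):
        # params maps each factor name to its list of interesting values.
        names = list(params)
        uncovered = {frozenset({(a, va), (b, vb)})
                     for a, b in combinations(names, 2)
                     for va, vb in product(params[a], params[b])}
        tests = []
        while uncovered:
            # Greedily take the full combination that covers the most uncovered pairs.
            best = max((dict(zip(names, combo))
                        for combo in product(*(params[n] for n in names))),
                       key=lambda t: sum(covers(t, p) for p in uncovered))
            tests.append(best)
            uncovered = {p for p in uncovered if not covers(best, p)}
        return tests

    factors = {
        "browser": ["Chrome", "Firefox", "IE8"],
        "os": ["Windows XP", "Windows 7", "Mac OS X"],
        "card": ["Visa", "Amex"],
    }
    for i, test in enumerate(pairwise_tests(factors), 1):
        print(i, test)

For these three illustrative factors, the result covers every pair of values with noticeably fewer tests than the 18 full combinations, which is the payoff of combinational techniques when exhaustive combination testing is impractical.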

In addition, testers should ensure that, for all relevant inputs or factors, tests cover all equivalence partitions and, if applicable, boundary values. This contributes to broad coverage.

Testers should plan to augment the test values with values selected using experience-based and defect-based techniques. This augmentation can occur during the design and implementation of tests or alternatively during test execution. This augmentation can be used to broaden test coverage, to deepen test coverage, or both.

If available, use cases should be tested, and the tester should cover all normal and exception paths.

If available, the tester should use state transition diagrams. Complete state/transition coverage is required, 1-switch (or higher) coverage is recommended, and, in the case of safety-related risk items, state transition table coverage is also recommended.
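
Here is a small Python sketch of what those coverage criteria mean in practice, using a made-up subscription model (the states and events are mine, not from any template): 0-switch coverage exercises every single transition, while 1-switch coverage exercises every valid pair of consecutive transitions.

    # Each transition is (current state, event, next state); illustrative model only.
    transitions = [
        ("Active",  "expiry date reached", "Expired"),
        ("Active",  "cancel",              "Cancelled"),
        ("Expired", "card updated",        "Active"),
        ("Expired", "grace period ends",   "Cancelled"),
    ]

    # 0-switch (state/transition) coverage: every single transition at least once.
    zero_switch = transitions

    # 1-switch coverage: every transition followed by each transition that can legally
    # come next (i.e., one that starts in the state the first transition reached).
    one_switch = [(t1, t2) for t1 in transitions for t2 in transitions if t1[2] == t2[0]]

    print(len(zero_switch), "transitions;", len(one_switch), "consecutive pairs to cover")
    for t1, t2 in one_switch:
        print(f"{t1[0]} --{t1[1]}--> {t1[2]} --{t2[1]}--> {t2[2]}")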

In some cases—e.g., safety critical risks, risks related to key features, etc.—the tester may elect to use code coverage measurements for risks assigned this extent of coverage, and to apply white box test design techniques to fill any code coverage gaps detected by such measures.

As a general rule of thumb, around 50% of the total test design, implementation, and execution effort should be spent addressing the risk items assigned this extent of testing.

Broad

According to the quality risk analysis process template, for risks rated to receive this extent of testing, the tester should “run a medium number of tests that exercise many different interesting conditions.” Testers should create tests that cover all equivalence partitions and, if applicable, boundary values. Testers should plan to augment the test values with values selected using experience-based and defect-based techniques. This augmentation can occur during the design and implementation of tests or alternatively during test execution. This augmentation should be used to broaden test coverage.
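
For the equivalence partition and boundary value part, a sketch like the following is usually all the tooling needed; the licence-count requirement here is invented purely to show the shape of the technique.

    def boundary_values(lo, hi):
        # Simple two-value boundary value analysis for a closed integer range [lo, hi]:
        # each boundary plus its nearest invalid neighbour.
        return [lo - 1, lo, hi, hi + 1]

    # Invented requirement: an order form accepts a licence quantity from 1 to 99.
    partitions = {
        "below the valid range": -5,    # invalid partition
        "within the valid range": 50,   # valid partition
        "above the valid range": 150,   # invalid partition
    }

    print("one representative value per partition:", partitions)
    print("boundary values for [1, 99]:", boundary_values(1, 99))   # [0, 1, 99, 100]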

If available, use cases should be tested, and the tester should cover all normal and exception paths.

If available, the tester should use state transition diagrams. Complete state/transition coverage is required, but higher levels of coverage should only be used if possible without greatly expanding the number of test cases.

If available, the tester should use decision tables, but strive to have only one test per column.

Other than the possible use of decision tables, combinational testing typically should not be used unless it can be done without generating a large number of test cases.

As a general rule of thumb, between 25 and 35% of the total test design, implementation, and execution effort should be spent addressing the risk items assigned this extent of testing.

Cursory

According to the quality risk analysis process templates, for risks rated to receive this extent of testing, the tester should “run a small number of tests that sample the most interesting conditions.” Testers should use equivalence partitioning or boundary value analysis on the appropriate areas of the system to identify particularly interesting test values, though they should not try to cover all partitions or boundary values.

Testers should plan to augment these test values with values selected using experience-based and defect-based techniques. This augmentation can occur during the design and implementation of tests or alternatively during test execution.

If available, use cases should be used. The tester should cover normal paths, though the tester need not cover all exception paths.

The tester may use decision tables, but should not try to cover columns that represent unusual situations.

The tester may use state transition diagrams, but need not visit unusual states or force unusual events to occur.

Other than the possible use of decision tables, combinational testing should not be used.

As a general rule of thumb, between 5 and 15% of the total test design, implementation, and execution effort should be spent addressing the risk items assigned this extent of testing.

Opportunity

According to the quality risk analysis process templates, for risks rated to receive this extent of testing, the tester should “leverage other tests or activities to run a test or two of an interesting condition, but invest very little time and effort.” Experience-based and defect-based techniques are particularly useful for opportunity testing, as the tester can augment other tests with additional test values that fit into the logical flow of the tests. This can occur during the design and implementation of tests or alternatively during test execution.

In addition, testers can use equivalence partitioning or boundary value analysis on the appropriate areas of the system to identify particularly interesting test values, though they should not try to cover all partitions or boundary values.

As a general rule of thumb, less than 5% of the total test design, implementation, and execution effort should be spent addressing all of the risk items assigned this extent of testing. In addition, no more than 20% of the effort allocated to design, implement, and execute any given test case should be devoted to addressing any risk item assigned this extent of testing.

Report Bugs Only

According to the quality risk analysis process templates, for risks rated to receive this extent of testing, the tester should “not test at all, but, if bugs related to this risk arise during other tests, report those bugs.” Therefore no test design, implementation, or execution effort should occur, and it is a misallocation of testing effort if it does.

None

According to the quality risk analysis process templates, for risks rated to receive this extent of testing, the tester should “neither test for these risks nor report related bugs.” Therefore no test design, implementation, or execution effort should occur, and it is a misallocation of testing effort if it does.

— Published



Copyright © 2017 Rex Black Consulting Services.
All Rights Reserved.
PMI is a registered mark of the Project Management Institute, Inc.