
Blog

System Integration, Quality Risks, and Implications for Testing

By Rex Black

More and more projects involve integration of custom or commercial off-the-shelf packages rather than in-house development or enhancement of software.  In effect, this is direct (under contract) or indirect (market purchase) outsourcing of some of the development work.

While some project managers see such outsourcing of development as reducing the overall risk, each integrated component can bring with it significantly increased risks to system quality.  Let’s take a look at each factor that can increase risk to system quality, and then talk about strategies for mitigating such risks.

Four factors can increase the risk to system quality:

  • Coupling: the component has a strong interaction with, or consequence for, the rest of the system when it fails.
  • Irreplaceability: few similar components are available, so to the extent that the component creates quality problems, you are stuck with them.
  • Essentiality: one or more key features of the system will be unavailable if the component does not work properly.
  • Vendor quality problems: if there is a high likelihood of the vendor sending you a bad component, especially when accompanied by slow turnaround on bug fixes, the level of risk to the quality of the entire system is higher.

How can you mitigate these risks?  I have seen and used various options.

One is to integrate, track, and manage the vendor testing of their component as part of an overall, distributed test effort for the system.  This involves up-front planning, along with having sufficient clout with the vendor or vendors to insist that they consider their test teams and test efforts subordinate to and contained within yours. When I have used this approach, it has worked well.

Another option is simply to trust the vendor component testing to deliver a working component to you.  This approach may sound silly and naive, expressed in such words.  However, project teams do this all the time.   My suggestion is, if you choose to do so, do so with your eyes open, understanding the risks you are accepting and allocating schedule time to deal with issues.

Another option is to decide to fix the component vendor testing or quality problems.  On one project, my client hired me to do exactly that for a vendor.  It worked out nicely.  Again, though, your organization must have the clout to insist that you be allowed to go in and straighten out what’s broken in their testing process and that they have time allocated to fix what you find.  And don’t you have your own problems to attend to?  As such, this is an ideal job for a test consultant.

A final option, especially if you find yourself confronted by proof of incompetent testing by the vendor, is to disregard their testing, assume the component is coming to you untested, and retest the component.  I’ve had to do this, notably on one project when the vendor sold my client an IMAP mail server package that was seriously buggy.

Both of the last two options have serious political implications.  The vendor is unlikely to accept your assertion that their testing is incompetent, and will likely attack your credibility. Since someone made the choice to use that vendor—and it may have been an expensive choice—that person will likely also side with the vendor against your assertion.  You’ll need to bring data to the discussion.  Better yet, see if you can influence the contract negotiations up front to include proof of testing along with acceptance testing by your team prior to payment.  It’s amazing how motivational that can be for vendors!

With the risks to system quality managed at the component level, it’s still possible to make a serious mistake in the area of testing.  Remember that even the best-tested and highest-quality components might not work well in the particular environment you intend to use them in.  So, plan on integration testing and system testing the integrated package yourself.

— Published


The Future of Test Management

By Rex Black

The smart test manager plans for the future.  These plans should cover not only the current project, but also the current decade.  How will you succeed as a test manager in the 2010s decade? Here are ten things you must learn to do:

  1. Connect testing to business value, including measuring effectiveness and efficiency against strategy goals;
  2. Manage testing on outsourced projects, including outsourcing of testing and outsourcing on Agile projects;
  3. Perform system integration testing on systems-of-systems projects effectively;
  4. Test systems that include open source software, and use open source tools;
  5. Test integration of new systems with legacy systems, and test the maintenance of legacy systems;
  6. Test effectively and efficiently when there's too much testing work, too little time, and too few resources;
  7. Deal with the tester "skills gluts" that are created by outsourcing and crowd-sourcing, with millions of entry-level testers;
  8. Deal with the tester "skills shortages" that are created at the upper end of the skills triangle by these entry-level testers, especially in developing regions;
  9. Choose the right certifications, including security, tools, ISTQB, technology, and more;
  10. Manage testing on iterative and Agile projects.

The smart test manager who can do these ten things will be in a strong position to succeed as this decade unfolds.  Hear more about the future of test management here.

— Published


Software Testing Podcast

By Rex Black

If you enjoy these regular small bites of software testing concepts, you might want to know that we have something very similar in a "to go" package.  Just check out our software testing podcast page.  You can download the podcasts to your MP3 player,  iPod,  iPhone, or other capable smartphone/handheld/pad device, or just play them directly from the page.

Enjoy!

— Published


Selection of Test Design Techniques in Risk Based Testing

By Rex Black

In this blog, I have talked a lot about RBCS' approach to risk based testing, which we call the Pragmatic Risk Analysis and Management process. As you know if you've followed our videos on risk based testing (e.g., this one), PRAM defines the following extents of testing, in decreasing order of thoroughness:

  • Extensive
  • Broad
  • Cursory
  • Opportunity
  • Report bugs only
  • None

Risk based testing does not prescribe specific test design techniques to mitigate quality risks based on the level of risk, as the selection of test design technique for a given risk item is subject to many factors. These factors include the suspected defects (what Beizer called the “bug hypothesis”), the technology of the system under test, and so forth. However, risk based testing does give guidance in terms of the level of test design (e.g., see here), implementation, and execution effort to expend, and that does influence the selection of test design techniques. The following subsections will provide heuristic guides to help test engineers select appropriate test techniques based on the extent of testing indicated for a risk item by the quality risk analysis process. These guides apply to testing during system and system integration testing by independent test teams.

Extensive

According to the quality risk analysis process template, for risks rated to receive this extent of testing, the tester should “run a large number of tests that are both broad and deep, exercising combinations and variations of interesting conditions.” Because combinational testing is specified, testers should select test design techniques that generate test values to cover combinations. These techniques are either (a) domain analysis or decision tables, or (b) classification trees, pairwise testing, or orthogonal arrays. The techniques in option (a) are appropriate where the mode of interaction between factors is understood (e.g., rules determining output values). The techniques in option (b) are appropriate where the mode of interaction between factors is not understood, or where interaction should not occur at all (e.g., configuration compatibility). For each technique selected, the strongest coverage criteria should be applied; e.g., all columns in a decision table, including the application of boundary value analysis and equivalence partitioning on the conditions in the decision table. The use of these combinational techniques guarantees deep coverage.
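
To make combinational coverage concrete, here is a minimal sketch in Python of one of the techniques named above, pairwise testing, using a simple greedy algorithm. The factors and values are invented purely for illustration; on a real project they would come from the risk item under analysis.

```python
from itertools import combinations, product

def all_pairs(test):
    """Every two-way (parameter, value) combination a single test covers."""
    return set(combinations(sorted(test.items()), 2))

def pairwise_suite(parameters):
    """Greedily pick tests from the full cartesian product until every
    pair of parameter values is covered at least once."""
    names = sorted(parameters)
    uncovered = {((p1, v1), (p2, v2))
                 for p1, p2 in combinations(names, 2)
                 for v1, v2 in product(parameters[p1], parameters[p2])}
    candidates = [dict(zip(names, values))
                  for values in product(*(parameters[n] for n in names))]
    suite = []
    while uncovered:
        best = max(candidates, key=lambda t: len(all_pairs(t) & uncovered))
        suite.append(best)
        uncovered -= all_pairs(best)
    return suite

# Hypothetical configuration-compatibility factors, purely for illustration.
factors = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os": ["Windows", "Linux", "macOS"],
    "locale": ["en", "de", "ja"],
}
suite = pairwise_suite(factors)
print(f"{len(suite)} tests cover all pairs; the full combination would need {3 * 3 * 3}")
```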

In addition, testers should ensure that, for all relevant inputs or factors, tests cover all equivalence partitions and, if applicable, boundary values. This contributes to broad coverage.
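
As a minimal illustration of that, here is a sketch of deriving equivalence-partition representatives and boundary values for a single numeric input; the field and its valid range are assumptions made up for the example.

```python
def partition_representatives(low, high):
    """One representative value per equivalence partition for an integer
    field whose valid range is [low, high]: below, inside, and above."""
    return {"invalid_low": low - 10, "valid": (low + high) // 2, "invalid_high": high + 10}

def boundary_values(low, high):
    """Two-value boundary analysis: each boundary and its invalid neighbour."""
    return [low - 1, low, high, high + 1]

# Hypothetical input field: a quantity that must be between 1 and 100.
print(partition_representatives(1, 100))  # {'invalid_low': -9, 'valid': 50, 'invalid_high': 110}
print(boundary_values(1, 100))            # [0, 1, 100, 101]
```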

Testers should plan to augment the test values with values selected using experience-based and defect-based techniques. This augmentation can occur during the design and implementation of tests or alternatively during test execution. This augmentation can be used to broaden test coverage, to deepen test coverage, or both.

If available, use cases should be tested, and the tester should cover all normal and exception paths.

If available, the tester should use state transition diagrams. Complete state/transition coverage is required; 1-switch (or higher) coverage is recommended; and, in the case of safety-related risk items, state transition table coverage is also recommended.
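
For readers who want to see what these coverage levels mean in practice, here is a small sketch that derives 0-switch targets (every transition) and 1-switch targets (every pair of consecutive transitions) from a state transition model; the login-session model itself is a made-up example.

```python
def zero_switch(model):
    """Every individual transition: the complete state/transition coverage targets."""
    return [(state, event, target)
            for state, edges in model.items()
            for event, target in edges.items()]

def one_switch(model):
    """Every pair of consecutive transitions: the 1-switch coverage targets."""
    return [[(start, e1, mid), (mid, e2, end)]
            for start, e1, mid in zero_switch(model)
            for e2, end in model.get(mid, {}).items()]

# Hypothetical login-session model, state -> {event: next_state}.
model = {
    "logged_out": {"login_ok": "logged_in", "login_fail": "locked"},
    "logged_in":  {"logout": "logged_out", "timeout": "logged_out"},
    "locked":     {"reset": "logged_out"},
}
print(len(zero_switch(model)), "transitions,", len(one_switch(model)), "1-switch sequences")
```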

In some cases—e.g., safety critical risks, risks related to key features, etc.—the tester may elect to use code coverage measurements for risks assigned this extent of coverage, and to apply white box test design techniques to fill any code coverage gaps detected by such measures.

As a general rule of thumb, around 50% of the total test design, implementation, and execution effort should be spent addressing the risk items assigned this extent of testing.

Broad

According to the quality risk analysis process template, for risks rated to receive this extent of testing, the tester should “run a medium number of tests that exercise many different interesting conditions.” Testers should create tests that cover all equivalence partitions and, if applicable, boundary values. Testers should plan to augment the test values with values selected using experience-based and defect-based techniques. This augmentation can occur during the design and implementation of tests or alternatively during test execution. This augmentation should be used to broaden test coverage.

If available, use cases should be tested, and the tester should cover all normal and exception paths.

If available, the tester should use state transition diagrams. Complete state/transition coverage is required, but higher levels of coverage should only be used if possible without greatly expanding the number of test cases.

If available, the tester should use decision tables, but strive to have only one test per column.
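
As a minimal sketch of the one-test-per-column approach, the following represents a decision table as a list of columns and derives exactly one test case per column; the discount rules are invented for illustration.

```python
# A decision table as a list of columns (rules): each column gives the
# condition values and the expected actions. Hypothetical discount rules.
decision_table = [
    {"conditions": {"member": True,  "order_over_100": True},  "actions": {"discount": 15}},
    {"conditions": {"member": True,  "order_over_100": False}, "actions": {"discount": 5}},
    {"conditions": {"member": False, "order_over_100": True},  "actions": {"discount": 5}},
    {"conditions": {"member": False, "order_over_100": False}, "actions": {"discount": 0}},
]

def one_test_per_column(table):
    """Derive exactly one test case per column of the decision table."""
    return [{"inputs": dict(col["conditions"]), "expected": dict(col["actions"])}
            for col in table]

for test in one_test_per_column(decision_table):
    print(test)
```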

Other than the possible use of decision tables, combinational testing typically should not be used unless it can be done without generating a large number of test cases.

As a general rule of thumb, between 25 and 35% of the total test design, implementation, and execution effort should be spent addressing the risk items assigned this extent of testing.

Cursory

According to the quality risk analysis process templates, for risks rated to receive this extent of testing, the tester should “run a small number of tests that sample the most interesting conditions.” Testers should use equivalence partitioning or boundary value analysis on the appropriate areas of the system to identify particularly interesting test values, though they should not try to cover all partitions or boundary values.

Testers should plan to augment these test values with values selected using experience-based and defect-based techniques. This augmentation can occur during the design and implementation of tests or alternatively during test execution.

If available, use cases should be used. The tester should cover normal paths, though the tester need not cover all exception paths.

The tester may use decision tables, but should not try to cover columns that represent unusual situations.

The tester may use state transition diagrams, but need not visit unusual states or force unusual events to occur.

Other than the possible use of decision tables, combinational testing should not be used.

As a general rule of thumb, between 5 and 15% of the total test design, implementation, and execution effort should be spent addressing the risk items assigned this extent of testing.

Opportunity

According to the quality risk analysis process templates, for risks rated to receive this extent of testing, the tester should “leverage other tests or activities to run a test or two of an interesting condition, but invest very little time and effort.” Experience-based and defect-based techniques are particularly useful for opportunity testing, as the tester can augment other tests with additional test values that fit into the logical flow of the tests. This can occur during the design and implementation of tests or alternatively during test execution.

In addition, testers can use equivalence partitioning or boundary value analysis on the appropriate areas of the system to identify particularly interesting test values, though they should not try to cover all partitions or boundary values.

As a general rule of thumb, less than 5% of the total test design, implementation, and execution effort should be spent addressing all of the risk items assigned this extent of testing. In addition, no more than 20% of the effort allocated to design, implement, and execute any given test case should be devoted to addressing any risk item assigned this extent of testing.
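
Pulling the rules of thumb from the preceding sections together, here is a small illustrative calculation that splits a testing budget across the extents of testing; the total-hours figure and the exact points chosen within each range are assumptions for the example, not prescribed values.

```python
# Nominal shares taken from the rules of thumb above; the exact points chosen
# within each range are arbitrary and the figures are illustrative only.
nominal_share = {
    "extensive":   0.50,   # around 50%
    "broad":       0.32,   # between 25% and 35%
    "cursory":     0.14,   # between 5% and 15%
    "opportunity": 0.04,   # less than 5% in total
}

def allocate(total_hours):
    """Split a test design, implementation, and execution budget by extent."""
    return {extent: round(total_hours * share) for extent, share in nominal_share.items()}

print(allocate(400))  # {'extensive': 200, 'broad': 128, 'cursory': 56, 'opportunity': 16}
```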

Report Bugs Only

According to the quality risk analysis process templates, for risks rated to receive this extent of testing, the tester should “not test at all, but, if bugs related to this risk arise during other tests, report those bugs.” Therefore no test design, implementation, or execution effort should occur, and it is a misallocation of testing effort if it does.

None

According to the quality risk analysis process templates, for risks rated to receive this extent of testing, the tester should “neither test for these risks nor report related bugs.” Therefore no test design, implementation, or execution effort should occur, and it is a misallocation of testing effort if it does.

— Published


Three More Risk Based Testing Fallacies

By Rex Black

In response to my recent post on risk based testing fallacies, an RBCS client--I'll refer to him by his initials AN--wrote to tell us about some fallacies he's struggling with while implementing risk based testing in his organization.  He recounted a discussion he had with two colleagues--I'll refer to them as Confused and Confused Too--who were caught in these fallacies.

Confused said, "Risk-based testing does not control the extensiveness of test design."

I [AN] was very surprised and replied, "The tester can choose an appropriate test technique as risk mitigation."

In his statement, AN is entirely right.  Confused has fallen into the fallacy of assuming that, because risk based testing is non-prescriptive on test design, it's silent on test design.  Risk based testing does not prescribe the technique, but rather gives guidance on the level of risk mitigation that is required.  Often, people use a descending scale for this extent of testing: e.g., extensive, broad, cursory, opportunity, and report bugs only.  It's up to the test engineer to select a test technique--or blend of techniques--that will yield the correct risk mitigation.

He [Confused] doubted that risk-based testing works for test design because the test types and/or test conditions cannot be deduced directly from the level of risk. I think we can design tests using risk-based testing. The level of risk is not an absolute value, but a relative value. We can assign the resources based on these relative values when balanced against the risk inherent in the entire system under test. Therefore I think test design is a very important process for risk-based testing.

AN is again correct, and has diagnosed the fallacy here.  Confused has fallen into the fallacy of assuming that risk based testing is quantitative risk management.  Risk based testing is qualitative, because we (as an industry) don't have access to pools of statistical failure data such as those the insurance companies have.  The test conditions to be covered are the risk items which are identified during the quality risk analysis.  The degree to which they are covered is determined by looking at two factors, likelihood and impact.  Based on the relative level of risk, we select test design techniques that will give the proper level of coverage.
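
To make that concrete, here is a minimal sketch of one common qualitative scheme: likelihood and impact are each rated on a small ordinal scale, their product gives a relative risk score, and the score maps to an extent of testing. The scales, thresholds, and risk items shown are illustrative assumptions, not the actual values from any PRAM template.

```python
def extent_of_testing(likelihood, impact):
    """Map a relative risk score (likelihood x impact, each rated 1-5) to an extent of testing."""
    score = likelihood * impact
    if score >= 20:
        return "extensive"
    if score >= 12:
        return "broad"
    if score >= 6:
        return "cursory"
    if score >= 3:
        return "opportunity"
    if score >= 2:
        return "report bugs only"
    return "none"

# Hypothetical risk items (name, likelihood 1-5, impact 1-5) from a risk analysis session.
risk_items = [
    ("payment declined incorrectly", 4, 5),
    ("report header misaligned",     3, 1),
    ("slow search on large data",    2, 4),
]
for name, likelihood, impact in risk_items:
    print(f"{name}: {extent_of_testing(likelihood, impact)}")
```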

A test leader [Confused Too] insisted that he was using risk-based testing, but did not have test design documents, only a test plan and test cases. I don't believe it was risk-based testing.

Again, AN is correct.  Confused Too has fallen into the fallacy that risk based testing can be done without any additional structure.  In fact, in order to have proper risk based testing, you need some document, tool, or other structure to capture the risk items, their risk ratings, and other ancillary information.  Otherwise, you'll not be able to manage the alignment of the other testing work with the risks and their ratings.

— Published


The Importance of Intra-project Deliverables (aka Hand-offs)

By Rex Black

I try to avoid using US-specific slang in my writing, especially sports-related US-specific slang, and most especially US-football-specific slang (since no one outside the US plays that game).  However, it looks like one such phrase remains embedded in my writing style, as astute reader Thomas Wagner pointed out in a recent e-mail:

I am currently studying for ISTQB Advanced Test Manager by following your book "Advanced Software Testing - Vol 2. Guide to the ISTQB Advanced Certification as an advanced test manager." I have a question: You frequently use the term "hand-offs".  What do you mean by this? Examples where used in the book:

Section 1.2, page 3, line 2 "...this is especially true at key interfaces and hand-offs."

Section 3.3.6., page 171, line 19: "...In general, foul-ups are always more likely during hand-offs"

The ever-helpful dictionary.com defines hand-off as "an offensive play in which a player, usually a back, hands the ball to a teammate."  (In this case, note that "offensive" refers to the opposite of defensive, not that the maneuver itself is likely to offend.)  And, indeed, it says that the origin of the phrase is from US football.  My mistake, but how to fix it?

The problem is that a precise, universal way of saying "hand-off" might be something clunky like "intra-project deliverable."  That's certainly not an easy phrase to write or to understand.  Any time one distinct group of people (e.g., the programmers) within a project team creates a deliverable (e.g., the software to be tested) and delivers it to another group (e.g., the testers), you have, well, a hand-off.  In this example, the programmers have handed the software to the testers, for the purpose of testing. 

(The last part of that sentence above, that there is a specific purpose, is also important.  A hand-off is not a merely informative delivery, where no action is required on the part of the recipients.  The recipients--in the example above, the testers--are required to carry out a specific set of actions with the deliverable.  That's an important part of the concept of a hand-off that isn't included in the phrase "intra-project deliverable" unless we say "intra-project deliverable given to recipients for the purpose of taking some action with it," or "transfer of an intra-project deliverable from one group to another within a project that includes a responsibility for a specific set of activities on the part of the recipients," and now we're really getting into long and tortured phrases!)

Whatever we call it, these junction points between groups in a project team are always risky.  Mismatched expectations between delivering and receiving parties can result in problems (e.g., not fixing certain bugs that block some tests).  Failure to deliver on time can occur (e.g., the all-too-common delay of the start of test execution due to incompleteness of the software to be tested).  Failure to deliver something usable for the intended purpose can occur (e.g., the untestable test release).  Miscommunications can arise (e.g., the bug report that doesn't give the programmer enough information to debug the underlying problem).

Given the difficulty of thinking up a good alternative phrase, I'm going to keep using "hand-off", though I'm glad Thomas sensitized me to the cultural difficulty of the phrase.  Call it what you will, any time one group transfers something to another group during a project, take care.  Especially as testers, being downstream of just about everything else that happens on a project, we have a lot of opportunities for bad hand-offs.

— Published


Five Fallacies of Risk Based Testing

By Rex Black

Risk based testing is a phrase that we hear many times in testing.  Many people know many facts and have many opinions.  The trouble is, in many cases, these facts are actually wrong or based on a poor understanding of risk based testing, and thus many opinions about risk based testing are incorrect.  There are many risk based testing fallacies.  In this first post (in what is likely to be a series of occasional posts on this topic), I'll start with five frequently encountered fallacies.

  1. Risk based testing is just a method to cut corners (part 1).  This whole idea for a series of blog posts came about when someone said to me, "Well, risk based testing means not testing everything."  Well, right.  So does every kind of testing.  There are an infinite number of tests you could run, and you are going to select a finite subset from that infinite set.  The only question is whether you are going to select that subset intelligently, with an understanding of the likelihood and impact associated with potential problems. Risk based testing allows you to do that.
  2. Risk based testing is just a method to cut corners (part 2).  Sometimes when people say this, they mean that risk based testing does not cover all the requirements.  Unfortunately, some people have promoted an approach which they call risk based or risk driven testing that involves exactly that: Selecting which requirements not to test based on risk.  While in some cases it is appropriate to skip testing some of the requirements, as a general rule we want to cover not only the important risks but all the requirements (at least those which are specified).  By ensuring that every requirement has at least one associated risk item and at least one associated test case, you can do so.  This is an example of a blended strategy of risk based and requirements based testing.  (A minimal sketch of such a traceability check appears after this list.)
  3. Risk based testing is all about technical risk.  Some people have put forward this idea that risk based testing is a form of reactive testing where we wait to see what the system does (i.e., no planning, analysis, or up-front test development), then use experience, defect taxonomies, and other aids to predict and find as many bugs as we can in a limited period of time.  To me, that approach is just a big geeky bug hunt; it does not cover all of the strategic objectives most organizations have for test teams.  Yes, we should consider defect likelihood when analyzing quality risks, but we should also consider the impact of potential defects as well.
  4. Risk based testing can be done entirely by the test team.  Those who believe this fallacy simply analyze requirements or other information, in isolation from other project and product stakeholders, and then test based on that analysis.  Sorry, but that's just a risk-aware form of requirements based testing.  What makes true risk based testing truly powerful is the consideration of input from a cross-functional team of project and product stakeholders.  When we help clients start doing risk based testing, I always emphasize that getting the right quality risk analysis team together is more important than the right process or templates.
  5. Risk based testing only influences selection of test cases. It's true that one major benefit of risk based testing is the smart selection of test cases.  However, with risk based testing you can also report test results in terms of residual risk, which makes test status truly clear to non-test project team members. You can also run tests in risk priority order, which maximizes the likelihood of finding important bugs first.  And, if you do get squeezed for time, you can triage your test cases based on risk, ensuring that the most important tests get run.
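
As promised in point 2 above, here is a minimal sketch of that kind of traceability check: it flags requirements that lack at least one associated risk item or at least one associated test case, and, touching on point 5, it also orders the test cases by the rating of the risks they cover. All the identifiers and ratings are invented for illustration.

```python
# Hypothetical traceability data: requirements, risk items, and test cases.
requirements = ["REQ-1", "REQ-2", "REQ-3"]
risk_items   = {"RISK-A": {"requirements": ["REQ-1"], "rating": 20},
                "RISK-B": {"requirements": ["REQ-2", "REQ-3"], "rating": 6}}
test_cases   = {"TC-01": {"requirements": ["REQ-1"], "risks": ["RISK-A"]},
                "TC-02": {"requirements": ["REQ-2"], "risks": ["RISK-B"]}}

def uncovered(requirements, risk_items, test_cases):
    """Requirements lacking at least one risk item or at least one test case."""
    no_risk = [r for r in requirements
               if not any(r in ri["requirements"] for ri in risk_items.values())]
    no_test = [r for r in requirements
               if not any(r in tc["requirements"] for tc in test_cases.values())]
    return no_risk, no_test

def risk_ordered(test_cases, risk_items):
    """Tests sorted for execution by the highest rating of the risks they cover."""
    return sorted(test_cases,
                  key=lambda tc: max(risk_items[r]["rating"] for r in test_cases[tc]["risks"]),
                  reverse=True)

print(uncovered(requirements, risk_items, test_cases))  # ([], ['REQ-3'])
print(risk_ordered(test_cases, risk_items))             # ['TC-01', 'TC-02']
```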

I hope this blog entry has helped to dispel some of these fallacies.  I'll return to it in a later post someday to try to dispel more such fallacies.  In the meantime, you might want to check out the videos on risk based testing, found in our Digital Library, for more information about what risk based testing really is and how to make it work for you.

— Published


Testing and Quality Metrics: How to Stay "Lean and Agile" while Maintaining Visibility?

By Rex Black

As part of the continuing series of video blog entries, I'm throwing out to the wider software testing and quality community a question:  How do we balance the desire--especially in Agile and Lean/Agile projects--to enjoy the efficiency of lightweight record-keeping with the need for enough visibility into project, process, and product status, including testing and quality metrics?

For a little video context on my question, check out: How to Balance Metrics, Lean, and Agile when Measuring Software Testing and Quality

I'd be interested in hearing from people as to how their projects are achieving balance here, and also in anecdotes about when they are not achieving balance.

— Published


Free Tool for Calculating Software Testing ROI

By Rex Black

I recently gave a workshop at the STANZ conference, first in Wellington and then in Sydney.  In this workshop, I mentioned that connecting software testing to business value is a key test management challenge of the 2010s.  (Of course, it's really been a challenge for the entire time there has been software testing, but it's a challenge we've yet to resolve.)  Everyone in both audiences agreed, and a number of people offered examples of how this challenge was affecting them.

Earlier this week, I gave a webinar on how to calculate the return on the software testing investment.   You can listen to a recording of that webinar if you missed it.

In that webinar, I walked through a case study of calculating software testing ROI.  This case study was described in an article originally published in Software Test and Performance magazine, and you can now find the article here on the RBCS website.

After the webinar, a bunch of people sent e-mails saying, "Hey, could you please post the spreadsheet that you walked through during the webinar?"  Here at RBCS, we like to say yes to our friends, clients, and supporters, so we did.  You can find the free software testing ROI spreadsheet on our Advanced Library now, under the name Case Study Info Appliance Test ROI.xls.

Before you use the spreadsheet, I suggest you read the article I mentioned above.  The article explains how the spreadsheet works and explains the case study numbers included in the spreadsheet by way of example.
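
For readers who want to see the general shape of such a calculation before opening the spreadsheet, here is a minimal cost-of-quality style sketch; it is not the case study's model or its numbers, just illustrative assumptions in which the benefit of testing is the cost avoided by finding defects before release rather than in production.

```python
# Illustrative cost-of-quality style ROI framing (not the case study's model
# or figures): the benefit of testing is the cost avoided by finding and
# fixing defects before release instead of after release.
cost_of_testing           = 150_000   # people, tools, environments (illustrative)
defects_found_by_testing  = 300
cost_to_fix_in_test       = 500       # average cost per defect found before release
cost_to_fix_in_production = 5_000     # average cost per defect found after release

benefit = defects_found_by_testing * (cost_to_fix_in_production - cost_to_fix_in_test)
roi = (benefit - cost_of_testing) / cost_of_testing
print(f"Benefit: ${benefit:,}; ROI: {roi:.0%}")  # Benefit: $1,350,000; ROI: 800%
```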

— Published


Picking Certifications: Software Testing and Beyond

By Rex Black

One key to quality software is the quality of the people involved in creating and maintaining it.  One tool for increasing the quality of your team is training of existing employees, which I’ll address in a later blog post. For this post, I want to focus on something that is often confused with training, but actually is (or at least should be) something entirely different: certification.

All IT managers--whether software test managers or other software managers--want to hire qualified people.  Certainly, IT certification can be part of the qualification puzzle in many IT fields.  IT professionals often use certification in key skill areas to demonstrate their qualifications.  However, with all the certification programs floating around out there, how do managers and professionals distinguish useful certifications from pointless wallpaper?  In this post, I’ll examine how you can pick the right IT certifications for yourself (as an individual) or for your team and the people you hire (as a manager).

Any certification worth considering will have, at its basis, a body of knowledge or syllabus.  This document should describe the skills and abilities that the certification measures.  Those people who have mastered most of these skills and abilities (sometimes called “learning objectives” in the syllabus) will be able to earn the certification, usually through some kind of exam. 

So, the first and most important step is to determine whether the skills and abilities listed in the syllabus are useful.  Does the syllabus relate to your day-to-day work?  Will the benefits of achieving the certification—increased effectiveness and efficiency, credibility of the team, etc.—justify the cost?

Of course, it’s possible that your day-to-day work should more closely resemble what is described in the syllabus.  This can happen when your organization is not following industry best practices.  So, you should also evaluate the source of the syllabus.  If the syllabus was written by a broad, international team of recognized, published industry experts, perhaps you should consider moving your practices towards those required for certification.  Adopting the certification as a guideline for your practices—and hiring people with the certification—can be a good way to move in this direction.

Selecting a certification developed by a broad team of recognized, published industry experts is important because, in general, such certifications enjoy increased acceptance over certifications developed by a small clique of like-minded people.  People in the industry will recognize the names of the authors and developers of the syllabus.  To some extent, the credibility and thus value of all certifications rests upon the reputation and credibility of the people who stand behind those certifications. 

I also mentioned that the team of experts should be international, because so often now we are engaged in globally distributed work.  If you are not working in a globally distributed fashion today, you probably will be soon.  So, you need certifications that have a global reach.  If you want to hire (or be part of) a global team of certified professionals, a single common certification is key.  This way, the whole team speaks the same language and knows the same concepts.

Of course, if you plan to hire people who hold a certification because you believe the syllabus has value, you want to be confident that those people have indeed mastered the topics in the syllabus.  This brings us back to the matter of the exam. 

Certification exams are a complicated issue, and some ill-informed polemics about exams occur on a few internet web sites.  Proper creation of exams is the province of a profession called psychometrics.  Psychometrics applies the fields of psychology, education, and statistics to the process of qualifying people through exams.  Any legitimate certification body (i.e., the organization developing and administering an exam against a syllabus) will employ professional psychometricians to ensure proper exams.

In evaluating whether an exam properly assesses someone’s qualifications, you need answers to four questions. First, is the exam statistically valid, and can the certification body prove validity?  Second, is the exam a quality instrument, free from grammatical and spelling errors, formatting problems, and other glitches that might distract exam takers, and what process does the certification body use to ensure quality?  Third, is the exam of uniform difficulty and quality whenever and wherever it is administered, and how does the certification body accomplish uniformity?  And, fourth and finally, since exam questions are developed by people, what steps does the certification body use to ensure the integrity of the exams; i.e., that the questions are not leaked to candidates, onto the internet, or to accredited training providers?

This last point—that of accredited training providers—brings us to an important consideration.  It is certainly valuable to have training available to support certification programs.  Accrediting training, whereby the certification body checks the content of the training to ensure compliance with and coverage of the syllabus, can help busy managers and professionals narrow their search for quality training.  However, when the accreditation process is opaque, when only members of the certification body offer accredited training, or, worse yet, when accredited training is required to take an exam, you are not looking at a real certification: you are looking at a marketing vehicle for some company’s or cartel’s training programs.  You should pick certification programs that have open, transparent processes for accreditation, with a diverse, competitive field of training providers, and which do not require any training at all to take the exams.

Certifications can help IT managers and professionals grow their teams and their skills, if chosen carefully. If you select the right bodies of knowledge, developed by the right people and delivering the right skills for your work, certification can lead to improvements in effectiveness, efficiency, and communication within teams. It’s also essential that the certification body follow best practices in the creation and delivery of the exams. And, if you decide to use training to help achieve certification, make sure to pick a program where the training supports the certification, not vice versa.  If you follow these basic concepts, you can obtain good value from IT certification programs, both as a professional and as a hiring manager.

— Published



Copyright © 2020 Rex Black Consulting Services.
All Rights Reserved.

