In response to my recent post on risk-based testing fallacies, an RBCS client--I'll refer to him by his initials, AN--wrote to tell us about some fallacies he's struggling with as he implements risk-based testing in his organization. He recounted a discussion with two colleagues--I'll refer to them as Confused and Confused Too--who were caught in these fallacies.
Confused said, "Risk-based testing does not control the extensiveness of test design."
I [AN] was very surprised and replied, "The tester can choose an appropriate test technique as risk mitigation."
In his statement, AN is entirely right. Confused has fallen into the fallacy of assuming that, because risk-based testing is non-prescriptive about test design, it's silent on test design. Risk-based testing does not prescribe the technique, but rather gives guidance on the level of risk mitigation that is required. Often, people use a descending scale for this extent of testing: e.g., extensive, broad, cursory, opportunity, and report bugs only. It's up to the test engineer to select a test technique--or blend of techniques--that will yield the correct degree of risk mitigation.
He [Confused] doubted that risk-based testing works for test design because the level of risk does not directly determine the test types and/or test conditions. I think we can design tests using risk-based testing. The level of risk is not an absolute value, but a relative value. We can assign resources based on these relative values, balanced against the risk inherent in the entire system under test. Therefore, I think test design is a very important process in risk-based testing.
AN is again correct, and has diagnosed the fallacy here. Confused has fallen into the fallacy of assuming that risk-based testing is quantitative risk management. Risk-based testing is qualitative, because we (as an industry) don't have access to pools of statistical failure data such as those the insurance companies have. The test conditions to be covered are the risk items identified during the quality risk analysis. The degree to which they are covered is determined by looking at two factors: likelihood and impact. Based on the relative level of risk, we select test design techniques that will give the proper level of coverage.
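As a thought experiment (not something this post prescribes), the qualitative likelihood-times-impact approach can be sketched in a few lines of Python. The scale names, numeric ratings, and score thresholds below are my own illustrative assumptions, not part of any standard:

```python
# Illustrative sketch only: combining two qualitative factors into a
# relative risk score, then mapping that score to a descending
# extent-of-testing scale. Ratings and thresholds are assumptions.

LIKELIHOOD = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}
IMPACT = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

def extent_of_testing(likelihood: str, impact: str) -> str:
    """Map a relative risk score (likelihood x impact) to an extent of testing."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]  # relative, not absolute
    if score >= 20:
        return "extensive"
    elif score >= 12:
        return "broad"
    elif score >= 6:
        return "cursory"
    elif score >= 3:
        return "opportunity"
    else:
        return "report bugs only"

print(extent_of_testing("high", "very high"))  # -> extensive
print(extent_of_testing("low", "medium"))      # -> cursory
```

The point of the sketch is the shape of the reasoning, not the numbers: the score only means something relative to the scores of the other risk items in the same analysis.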
A test leader [Confused Too] insisted that he was using risk-based testing, but did not have test design documents, only a test plan and test cases. I don't believe it was risk-based testing.
Again, AN is correct. Confused Too has fallen into the fallacy that risk-based testing can be done without any additional structure. In fact, proper risk-based testing requires some document, tool, or other structure to capture the risk items, their risk ratings, and other ancillary information. Otherwise, you won't be able to keep the rest of the testing work aligned with the risks and their ratings.
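To make the idea of a risk-capturing structure concrete, here is a minimal, hypothetical risk register sketched in Python. Every field name and example entry is my own illustration, not a prescribed format; in practice this is often just a spreadsheet with the same columns:

```python
# Hypothetical minimal risk register: one record per risk item, holding
# the rating, the resulting extent of testing, and traceability to tests.
from dataclasses import dataclass, field

@dataclass
class RiskItem:
    risk_id: str
    description: str          # the quality risk (test condition) itself
    likelihood: int           # e.g., 1 (very low) .. 5 (very high)
    impact: int               # e.g., 1 (very low) .. 5 (very high)
    extent_of_testing: str    # e.g., "extensive" .. "report bugs only"
    tracing: list = field(default_factory=list)  # IDs of covering test cases

    @property
    def risk_priority(self) -> int:
        return self.likelihood * self.impact

register = [
    RiskItem("R-001", "Data loss during upgrade", 3, 5, "extensive",
             ["TC-017", "TC-018"]),
    RiskItem("R-002", "Cosmetic layout glitch", 2, 1, "report bugs only"),
]

# Sort the register so the highest-rated risks drive test sequencing.
for item in sorted(register, key=lambda r: r.risk_priority, reverse=True):
    print(item.risk_id, item.risk_priority, item.extent_of_testing)
```

The `tracing` field is what makes alignment manageable: at any point you can check which risk items have no covering tests, and which tests cover no identified risk.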
I try to avoid using US-specific slang in my writing, especially sports-related US-specific slang, and most especially US-football-specific slang (since no one outside the US plays that game). However, it looks like one such phrase remains embedded in my writing style, as astute reader Thomas Wagner pointed out in a recent e-mail:
I am currently studying for the ISTQB Advanced Test Manager certification by following your book "Advanced Software Testing - Vol 2. Guide to the ISTQB Advanced Certification as an advanced test manager." I have a question: You frequently use the term "hand-offs". What do you mean by this? Examples of where it is used in the book:
Section 1.2, page 3, line 2 "...this is especially true at key interfaces and hand-offs."
Section 3.3.6., page 171, line 19: "...In general, foul-ups are always more likely during hand-offs"
The ever-helpful dictionary.com defines hand-off as "an offensive play in which a player, usually a back, hands the ball to a teammate." (In this case, note that "offensive" refers to the opposite of defensive, not that the maneuver itself is likely to offend.) And, indeed, it says that the origin of the phrase is from US football. My mistake, but how to fix it?
The problem is that a precise, universal way of saying "hand-off" might be something clunky like "intra-project deliverable." That's certainly not an easy phrase to write or to understand. Any time one distinct group of people (e.g., the programmers) within a project team creates a deliverable (e.g., the software to be tested) and delivers it to another group (e.g., the testers), you have, well, a hand-off. In this example, the programmers have handed the software to the testers, for the purpose of testing.
(The last part of that sentence above, that there is a specific purpose, is also important. A hand-off is not a merely informative delivery, where no action is required on the part of the recipients. The recipients--in the example above, the testers--are required to carry out a specific set of actions with the deliverable. That's an important part of the concept of a hand-off that isn't included in the phrase "intra-project deliverable" unless we say "intra-project deliverable given to recipients for the purpose of taking some action with it," or "transfer of an intra-project deliverable from one group to another within a project that includes a responsibility for a specific set of activities on the part of the recipients," and now we're really getting into long and tortured phrasing!)
Whatever we call it, these junction points between groups in a project team are always risky. Mismatched expectations between delivering and receiving parties can result in problems (e.g., not fixing certain bugs that block some tests). Failure to deliver on time can occur (e.g., the all-too-common delay of the start of test execution due to incompleteness of the software to be tested). Failure to deliver something usable for the intended purpose can occur (e.g., the untestable test release). Miscommunications can arise (e.g., the bug report that doesn't give the programmer enough information to debug the underlying problem).
Given the difficulty of thinking up a good alternative phrase, I'm going to keep using "hand-off", though I'm glad Thomas sensitized me to the cultural difficulty of the phrase. Call it what you will, any time one group transfers something to another group during a project, take care. Especially as testers, being downstream of just about everything else that happens on a project, we have a lot of opportunities for bad hand-offs.
Risk-based testing is a phrase we hear often in testing. Many people know many facts about it and hold many opinions. The trouble is, in many cases, these facts are actually wrong or based on a poor understanding of risk-based testing, and thus many of the opinions are incorrect. There are many risk-based testing fallacies. In this first post (in what is likely to be a series of occasional posts on this topic), I'll start with five frequently encountered fallacies.
I hope this blog entry has helped to dispel some of these fallacies. I'll return to the topic in a later post to try to dispel more of them. In the meantime, you might want to check out the videos on risk-based testing, found in our Digital Library, for more information about what risk-based testing really is and how to make it work for you.
As part of the continuing series of video blog entries, I'm throwing out a question to the wider software testing and quality community: How do we balance the desire--especially on Agile and Lean/Agile projects--to enjoy the efficiency of lightweight record-keeping with the need for sufficient visibility into project, process, and product status, including testing and quality metrics?
For a little video context on my question, check out: How to Balance Metrics, Lean, and Agile when Measuring Software Testing and Quality
I'd be interested in hearing from people about how their projects are achieving balance here, and also in anecdotes about when they are not.
I recently gave a workshop at the STANZ conference, first in Wellington and then in Sydney. In this workshop, I mentioned that connecting software testing to business value is a key test management challenge of the 2010s. (Of course, it's really been a challenge for the entire time there has been software testing, but it's a challenge we've yet to resolve.) Everyone in both audiences agreed, and a number of people offered examples of how this challenge was affecting them.
Earlier this week, I gave a webinar on how to calculate the return on the software testing investment. You can listen to a recording of that webinar if you missed it.
In that webinar, I walked through a case study of calculating software testing ROI. This case study was described in an article originally published in Software Test and Performance magazine, and you can now find the article here on the RBCS website.
After the webinar, a bunch of people sent e-mails saying, "Hey, could you please post the spreadsheet that you walked through during the webinar?" Here at RBCS, we like to say yes to our friends, clients, and supporters, so we did. You can find the free software testing ROI spreadsheet on our Advanced Library now, under the name Case Study Info Appliance Test ROI.xls.
Before you use the spreadsheet, I suggest you read the article I mentioned above. The article explains how the spreadsheet works and explains the case study numbers included in the spreadsheet by way of example.
One key to quality software is the quality of the people involved in creating and maintaining it. One of the tools for increasing the quality of your team is through training of existing employees, which I’ll address in a later blog post. For this post, I want to focus on something that is often confused with training, but actually is (or at least should be) something entirely different: certification.
All IT managers--whether software test managers or other software managers--want to hire qualified people. Certainly, IT certification can be part of the qualification puzzle in many IT fields. IT professionals often use certification in key skill areas to demonstrate their qualifications. However, with all the certification programs floating around out there, how do managers and professionals distinguish useful certifications from pointless wallpaper? In this post, I’ll examine how you can pick the right IT certifications for yourself (as an individual) or for your team and the people you hire (as a manager).
Any certification worth considering will have, at its basis, a body of knowledge or syllabus. This document should describe the skills and abilities that the certification measures. Those people who have mastered most of these skills and abilities (sometimes called “learning objectives” in the syllabus) will be able to earn the certification, usually through some kind of exam.
So, the first and most important step is to determine whether the skills and abilities listed in the syllabus are useful. Does the syllabus relate to your day-to-day work? Will the benefits of achieving the certification—increased effectiveness and efficiency, credibility of the team, etc.—justify the cost?
Of course, it’s possible that your day-to-day work should more closely resemble what is described in the syllabus. This can happen when your organization is not following industry best practices. So, you should also evaluate the source of the syllabus. If the syllabus was written by a broad, international team of recognized, published industry experts, perhaps you should consider moving your practices towards those required for certification. Adopting the certification as a guideline for your practices—and hiring people with the certification—can be a good way to move in this direction.
Selecting a certification developed by a broad team of recognized, published industry experts is important because, in general, such certifications enjoy increased acceptance over certifications developed by a small clique of like-minded people. People in the industry will recognize the names of the authors and developers of the syllabus. To some extent, the credibility and thus value of all certifications rests upon the reputation and credibility of the people who stand behind those certifications.
I also mentioned that the team of experts should be international, because so often now we are engaged in globally distributed work. If you are not working in a globally distributed fashion today, you probably will be soon. So, you need certifications that have a global reach. If you want to hire (or be part of) a global team of certified professionals, a single common certification is key. This way, the whole team speaks the same language and knows the same concepts.
Of course, if you plan to hire people who hold a certification because you believe the syllabus has value, you want to be confident that those people have indeed mastered the topics in the syllabus. This brings us back to the matter of the exam.
Certification exams are a complicated issue, and some ill-informed polemics about exams appear on a few websites. Proper creation of exams is the province of a profession called psychometrics. Psychometrics applies the fields of psychology, education, and statistics to the process of qualifying people through exams. Any legitimate certification body (i.e., the organization developing and administering an exam against a syllabus) will employ professional psychometricians to ensure proper exams.
In evaluating whether an exam properly assesses someone’s qualifications, you need answers to four questions. First, is the exam statistically valid, and can the certification body prove validity? Second, is the exam a quality instrument, free from grammatical and spelling errors, formatting problems, and other glitches that might distract exam takers, and what process does the certification body use to ensure quality? Third, is the exam of uniform difficulty and quality whenever and wherever it is administered, and how does the certification body accomplish uniformity? And, fourth and finally, since exam questions are developed by people, what steps does the certification body use to ensure the integrity of the exams; i.e., that the questions are not leaked to candidates, onto the internet, or to accredited training providers?
This last point—that of accredited training providers—brings us to an important consideration. It is certainly valuable to have training available to support certification programs. Accrediting training, whereby the certification body checks the content of the training to ensure compliance with and coverage of the syllabus, can help busy managers and professionals narrow their search for quality training. However, when the accreditation process is opaque, when only members of the certification body offer accredited training, or, worse yet, when accredited training is required to take an exam, you are not looking at a real certification: you are looking at a marketing vehicle for some company’s or cartel’s training programs. You should pick certification programs that have open, transparent processes for accreditation, with a diverse, competitive field of training providers, and which do not require any training at all to take the exams.
Certifications can help IT managers and professionals grow their teams and their skills, if chosen carefully. If you select the right bodies of knowledge, developed by the right people and delivering the right skills for your work, certification can lead to improvements in effectiveness, efficiency, and communication within teams. It’s also essential that the certification body follow best practices in the creation and delivery of the exams. And, if you decide to use training to help achieve certification, make sure to pick a program where the training supports the certification, not vice versa. If you follow these basic concepts, you can obtain good value from IT certification programs, both as a professional and as a hiring manager.
The last one isn't actually a template, but it's something many people find interesting.
If there's some other template you need, search the Basic Library and Advanced Library. If you still don't find it, post a comment here on the blog letting the readers and me know what you're looking for. Maybe someone can help.
Remember, though, a template is not an excuse to turn your brain off. Be sure to use templates thoughtfully.
I took a few moments today to record another video blog entry, which you can find at Regional Software Testing Immaturity: Fact or Fallacy?
Here's the synopsis: I'm in Los Angeles, on my way to the STANZ conference in Australia and New Zealand. That trip and other recent international trips have gotten me thinking about something I often hear at such international conferences: comments along the lines of, "You know, software testing as a profession and a practice is really immature in region X," where region X might or might not be where I happen to be. Based on my experience with clients around the world, though, the gap isn't as big as people often think. Is software testing actually significantly less mature in some regions than in others? What has your experience been? I'd be interested in opinions, case studies, and stories from you, especially the many international readers of this blog, but also from people in North America.
A couple of recent events might seem to indicate a greater appetite--and need--for stronger technical skills among testers. I thought I'd leave a quick video blog post on this topic--see Technical Testers Blog. Let me know what you think. Are we in need of stronger technical skills? Do you have technical skills? Do you need and want them?
When I wrote my book Critical Testing Processes in the early 2000s, I started with the premise that some test processes are critical and some are not. I designed this lightweight framework for test process assessment and improvement in order to focus the test team and test manager on the few test areas that they simply must do properly. This contrasts with other, more expansive, and more complex frameworks. In addition, the Critical Testing Processes (CTP) framework eschews the prescriptive elements of such frameworks: it does not impose an arbitrary, staged maturity model.
What’s the problem with prescriptive models? In our consulting work, we have found that businesses want to make improvements based on the business value of the improvement and the organizational pain that improvement will alleviate. A simplistic maturity rating might lead a business to make improvements in parts of the overall software process or test process that are actually less problematic or less important than other parts of the process simply because the model listed them in order.
CTP is a non-prescriptive process model. It describes the important software processes and what should happen in them, but it doesn’t put them in any order of improvement. This makes CTP a very flexible model. It allows you to identify and deal with specific challenges to your test processes. It identifies various attributes of good processes, both quantitative and qualitative. It allows you to use business value and organizational pain to select the order and importance of improvements. It is also adaptable to all software development lifecycle models.
Since CTP focuses on a small number of processes, how are those processes selected? What is a critical testing process? Here’s a quick way to think about it: If a test team executes the critical testing processes well, it will almost always succeed, but if it executes these activities poorly, even talented individual testers and test managers will usually fail. Let’s expand this definition a bit.
First, I defined a process as some sequence of actions, observations, and decisions. Next, I defined testing as the activities involved in planning, preparing, performing, and perfecting the assessment of the quality of a system. So, with a definition of a test process firmly in hand, what makes a test process a critical test process? I applied four criteria:
In other words, a critical test process directly and significantly affects the test team’s ability to find bugs, build confidence, reduce risks, and generate information.
Based on these criteria, I identified the following twelve critical testing processes:
You might notice that I’ve described each of these processes in terms of what an optimal process will achieve. If the process does not achieve those standards of capability—and more—then it is not optimal and has room for improvement.
How can you use CTP for assessment and improvement? Test process improvements using CTP begin with an assessment of the existing test process. This assessment will identify which of the twelve test processes are currently done properly and which need improvement. The assessment results in a set of prioritized recommendations for improvements. Whether you use the framework to do your own assessment or hire a consulting firm like RBCS to do it, the assessment should base the recommendations on organizational needs.
Since the assessments are tailorable, they can vary depending on the specific organization and its needs. We have done narrowly focused CTP assessments that looked only at one test team, we have done CTP assessments that looked only at one or two test processes, such as the test system, and we have done broad CTP assessments that looked at everything that affects quality. So, while CTP assessments vary, we tend to examine the following metrics during a CTP assessment:
In addition, we also tend to evaluate the following qualitative factors, among others, during a CTP assessment:
Once an assessment has identified opportunities for improvement, the assessor will develop plans to implement those improvements. While the model includes generic guidelines for process improvement for each of the critical testing processes, the assessor is expected to tailor those heavily.
I designed the CTP model to be very flexible. It does assume the primary use of an analytical risk-based testing strategy, balanced with a dynamic testing strategy. However, you can adapt CTP to use other test strategies primarily, such as requirements based, checklist based, or model based.
There are some five to ten metrics for each critical testing process, along with some qualitative evaluations. Now, we don’t typically look at every metric for every process on every single assessment because the selected metrics, like the model itself, are tunable. This results in a customer-centric assessment.
So, what is the value of a CTP assessment? CTP, like any other process model, provides a starting point, a standard framework, and a way of measuring your processes. A process assessment using a process model identifies opportunities to improve your current process that could not be identified by applying continuous process improvement techniques, such as Deming’s Plan-Do-Check-Act (PDCA), to your existing processes. This is true because techniques such as PDCA are incremental improvement techniques, while process models provide a method for quantum leaps in process improvement through the introduction of known best practices.
Properly done, CTP assessments deliver specific recommendations, along with the order in which to implement them. When my associates and I carry out CTP assessments for clients, we typically deliver a 50- to 100-page report with our assessment of the critical testing processes, our recommendations for improving them, the business justification for the improvements, and a roadmap for implementing those improvements. If you look at any two of our assessment reports, you might see very similar recommendations but in very different order. Why? Because each client has a different level of opportunity associated with each recommendation. In some cases, constraints or preconditions can also influence the order. Those constraints and levels of opportunity tend to be unique from one organization to another, and a non-prescriptive model adapts to those unique organizational needs.
Let me close with an important note: Whether you use CTP or some other test process assessment model, don’t use process assessments as a one-time activity. Having done an assessment and found opportunities to improve, you should—of course—improve. Furthermore, at regular intervals, you should reassess to see the effect of the changes. Based on that reassessment, you should course correct your process improvement.