Blog

The Importance of Intra-project Deliverables (aka Hand-offs)

By Rex Black

I try to avoid using US-specific slang in my writing, especially sports-related US-specific slang, and most especially US-football-specific slang (since no one outside the US plays that game).  However, it looks like one such phrase remains embedded in my writing style, as astute reader Thomas Wagner pointed out in a recent e-mail:

I am currently studying for ISTQB Advanced Test Manager by following your book "Advanced Software Testing - Vol 2. Guide to the ISTQB Advanced Certification as an advanced test manager." I have a question: You frequently use the term "hand-offs".  What do you mean by this? Examples where used in the book:

Section 1.2, page 3, line 2 "...this is especially true at key interfaces and hand-offs."

Section 3.3.6., page 171, line 19: "...In general, foul-ups are always more likely during hand-offs"

The ever-helpful dictionary.com defines hand-off as "an offensive play in which a player, usually a back, hands the ball to a teammate."  (In this case, note that "offensive" refers to the opposite of defensive, not that the maneuver itself is likely to offend.)  And, indeed, it says that the origin of the phrase is from US football.  My mistake, but how to fix it?

The problem is that a precise, universal way of saying "hand-off" might be something clunky like "intra-project deliverable."  That's certainly not an easy phrase to write or to understand.  Any time one distinct group of people (e.g., the programmers) within a project team creates a deliverable (e.g., the software to be tested) and delivers it to another group (e.g., the testers), you have, well, a hand-off.  In this example, the programmers have handed the software to the testers, for the purpose of testing. 

(The last part of that sentence above, that there is a specific purpose, is also important.  A hand-off is not a merely informative delivery, where no action is required on the part of the recipients.  The recipients--in the example above, the testers--are required to carry out a specific set of actions with the deliverable.  That's an important part of the concept of a hand-off that isn't included in the phrase "intra-project deliverable" unless we say "intra-project deliverable given to recipients for the purpose of taking some action with it," or "transfer of an intra-project deliverable from one group to another within a project that includes a responsibility for a specific set of activities on the part of the recipients," and now we're really getting into long and tortured phrases!)

Whatever we call it, these junction points between groups in a project team are always risky.  Mismatched expectations between delivering and receiving parties can result in problems (e.g., not fixing certain bugs that block some tests).  Failure to deliver on time can occur (e.g., the all-too-common delay of the start of test execution due to incompleteness of the software to be tested).  Failure to deliver something usable for the intended purpose can occur (e.g., the untestable test release).  Miscommunications can arise (e.g., the bug report that doesn't give the programmer enough information to debug the underlying problem).

Given the difficulty of thinking up a good alternative phrase, I'm going to keep using "hand-off", though I'm glad Thomas sensitized me to the cultural difficulty of the phrase.  Call it what you will, any time one group transfers something to another group during a project, take care.  Especially as testers, being downstream of just about everything else that happens on a project, we have a lot of opportunities for bad hand-offs.

— Published


Five Fallacies of Risk Based Testing

By Rex Black

Risk based testing is a phrase that we hear many times in testing.  Many people know many facts and have many opinions.  The trouble is, in many cases, these facts are actually wrong or based on a poor understanding of risk based testing, and thus many opinions about risk based testing are incorrect.  There are many risk based testing fallacies.  In this first post (in what is likely to be a series of occasional posts on this topic), I'll start with five frequently encountered fallacies.

  1. Risk based testing is just a method to cut corners (part 1).  This whole idea for a series of blog posts came about when someone said to me, "Well, risk based testing means not testing everything."  Well, right.  So does every kind of testing.  There are an infinite number of tests you could run, and you are going to select a finite subset from that infinite set.  The only question is whether you are going to select that subset intelligently, with an understanding of the likelihood and impact associated with potential problems. Risk based testing allows you to do that.
  2. Risk based testing is just a method to cut corners (part 2).  Sometimes when people say this, they mean that risk based testing does not cover all the requirements.  Unfortunately, some people have promoted an approach which they call risk based or risk driven testing that involves exactly that: Selecting which requirements not to test based on risk.  While in some cases it is appropriate to skip testing some of the requirements, as a general rule we want to cover not only the important risks but all the requirements (at least those which are specified).  By ensuring that every requirement has at least one associated risk item and at least one associated test case, you can do so.  This is an example of a blended strategy of risk based and requirements based testing.
  3. Risk based testing is all about technical risk.  Some people have put forward the idea that risk based testing is a form of reactive testing where we wait to see what the system does (i.e., no planning, analysis, or up-front test development), then use experience, defect taxonomies, and other aids to predict and find as many bugs as we can in a limited period of time.  To me, that approach is just a big geeky bug hunt; it does not cover all of the strategic objectives most organizations have for test teams.  Yes, we should consider defect likelihood when analyzing quality risks, but we should also consider the impact of potential defects.
  4. Risk based testing can be done entirely by the test team.  Those who believe this fallacy simply analyze requirements or other information, in isolation from other project and product stakeholders, and then test based on that analysis.  Sorry, but that's just a risk-aware form of requirements based testing.  What makes true risk based testing truly powerful is the consideration of input from a cross-functional team of project and product stakeholders.  When we help clients start doing risk based testing, I always emphasize that getting the right quality risk analysis team together is more important than the right process or templates.
  5. Risk based testing only influences selection of test cases. It's true that one major benefit of risk based testing is the smart selection of test cases.  However, with risk based testing you can also report test results in terms of residual risk, which makes test status truly clear to non-test project team members. You can also run tests in risk priority order, which maximizes the likelihood of finding important bugs first.  And, if you do get squeezed for time, you can triage your test cases based on risk, ensuring that the most important tests get run.  (A short sketch of this risk-prioritized, blended approach appears after this list.)
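To make fallacies 2 and 5 a bit more concrete, here is a minimal sketch of what a blended, risk-prioritized approach can look like in practice.  The risk items, the rating scales, and the use of likelihood times impact as a risk priority number are illustrative assumptions on my part, not a prescribed implementation.

    # Minimal sketch of blended risk based and requirements based test selection.
    # The data, rating scales, and field names are illustrative assumptions.

    # Each quality risk item has a likelihood and an impact rating (1 = low, 5 = high)
    # and traces back to a specified requirement.
    risk_items = {
        "R1": {"requirement": "REQ-LOGIN", "likelihood": 4, "impact": 5},
        "R2": {"requirement": "REQ-REPORT", "likelihood": 2, "impact": 3},
        "R3": {"requirement": "REQ-EXPORT", "likelihood": 3, "impact": 2},
    }

    # Each test case covers one risk item.
    test_cases = [
        {"name": "TC-01", "risk": "R1"},
        {"name": "TC-02", "risk": "R2"},
        {"name": "TC-03", "risk": "R3"},
    ]

    # Fallacy 2: check that every specified requirement has at least one
    # associated risk item and at least one associated test case.
    requirements = {"REQ-LOGIN", "REQ-REPORT", "REQ-EXPORT", "REQ-AUDIT"}
    covered = {risk_items[tc["risk"]]["requirement"] for tc in test_cases}
    for req in sorted(requirements - covered):
        print("WARNING: no risk item or test case covers", req)

    # Fallacy 5: run tests in risk priority order (likelihood x impact), so the
    # most important tests run first and survive any time-based triage.
    def risk_priority(tc):
        item = risk_items[tc["risk"]]
        return item["likelihood"] * item["impact"]

    for tc in sorted(test_cases, key=risk_priority, reverse=True):
        print(tc["name"], "risk priority", risk_priority(tc))

Reporting test results in terms of residual risk, also mentioned in fallacy 5, then amounts to summarizing which risk items still have failed or unexecuted tests against them.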

I hope this blog entry has helped to dispel some of these fallacies.  I'll return to this topic in a later post to try to dispel more of them.  In the meantime, you might want to check out the videos on risk based testing, found in our Digital Library, for more information about what risk based testing really is and how to make it work for you.

— Published


Testing and Quality Metrics: How to Stay "Lean and Agile" while Maintaining Visibility?

By Rex Black

As part of the continuing series of video blog entries, I'm throwing out a question to the wider software testing and quality community:  How do we balance the desire--especially in Agile and Lean/Agile projects--for the efficiency of lightweight record-keeping against the need for enough visibility into project, process, and product status, including testing and quality metrics?

For a little video context on my question, check out: How to Balance Metrics, Lean, and Agile when Measuring Software Testing and Quality

I'd be interested in hearing from people as to how their projects are achieving balance here, and also in anecdotes about when they are not achieving balance.

— Published


Free Tool for Calculating Software Testing ROI

By Rex Black

I recently gave a workshop at the STANZ conference, first in Wellington and then in Sydney.  In this workshop, I mentioned that connecting software testing to business value is a key test management challenge of the 2010s.  (Of course, it's really been a challenge for the entire time there has been software testing, but it's a challenge we've yet to resolve.)  Everyone in both audiences agreed, and a number of people offered examples of how this challenge was affecting them.

Earlier this week, I gave a webinar on how to calculate the return on the software testing investment.   You can listen to a recording of that webinar if you missed it.

In that webinar, I walked through a case study of calculating software testing ROI.  This case study was described in an article originally published in Software Test and Performance magazine, and you can now find the article here on the RBCS website.

After the webinar, a bunch of people sent e-mails saying, "Hey, could you please post the spreadsheet that you walked through during the webinar?"  Here at RBCS, we like to say yes to our friends, clients, and supporters, so we did.  You can find the free software testing ROI spreadsheet on our Advanced Library now, under the name Case Study Info Appliance Test ROI.xls.

Before you use the spreadsheet, I suggest you read the article I mentioned above.  The article explains how the spreadsheet works and explains the case study numbers included in the spreadsheet by way of example.

— Published


Picking Certifications: Software Testing and Beyond

By Rex Black

One key to quality software is the quality of the people involved in creating and maintaining it.  One of the tools for increasing the quality of your team is through training of existing employees, which I’ll address in a later blog post. For this post, I want to focus on something that is often confused with training, but actually is (or at least should be) something entirely different: certification.

All IT managers--whether software test managers or other software managers--want to hire qualified people.  Certainly, IT certification can be part of the qualification puzzle in many IT fields.  IT professionals often use certification in key skill areas to demonstrate their qualifications.  However, with all the certification programs floating around out there, how do managers and professionals distinguish useful certifications from pointless wallpaper?  In this post, I’ll examine how you can pick the right IT certifications for yourself (as an individual) or for your team and the people you hire (as a manager).

Any certification worth considering will have, at its basis, a body of knowledge or syllabus.  This document should describe the skills and abilities that the certification measures.  Those people who have mastered most of these skills and abilities (sometimes called “learning objectives” in the syllabus) will be able to earn the certification, usually through some kind of exam. 

So, the first and most important step is to determine whether the skills and abilities listed in the syllabus are useful.  Does the syllabus relate to your day-to-day work?  Will the benefits of achieving the certification—increased effectiveness and efficiency, credibility of the team, etc.—justify the cost?

Of course, it’s possible that your day-to-day work should more closely resemble what is described in the syllabus.  This can happen when your organization is not following industry best practices.  So, you should also evaluate the source of the syllabus.  If the syllabus was written by a broad, international team of recognized, published industry experts, perhaps you should consider moving your practices towards those required for certification.  Adopting the certification as a guideline for your practices—and hiring people with the certification—can be a good way to move in this direction.

Selecting a certification developed by a broad team of recognized, published industry experts is important because, in general, such certifications enjoy increased acceptance over certifications developed by a small clique of like-minded people.  People in the industry will recognize the names of the authors and developers of the syllabus.  To some extent, the credibility and thus value of all certifications rests upon the reputation and credibility of the people who stand behind those certifications. 

I also mentioned that the team of experts should be international, because so often now we are engaged in globally distributed work.  If you are not working in a globally distributed fashion today, you probably will be soon.  So, you need certifications that have a global reach.  If you want to hire (or be part of) a global team of certified professionals, a single common certification is key.  This way, the whole team speaks the same language and knows the same concepts.

Of course, if you plan to hire people who hold a certification because you believe the syllabus has value, you want to be confident that those people have indeed mastered the topics in the syllabus.  This brings us back to the matter of the exam. 

Certification exams are a complicated issue, and some ill-informed polemics about exams circulate on a few websites.  Proper creation of exams is the province of a profession called psychometrics.  Psychometrics applies the fields of psychology, education, and statistics to the process of qualifying people through exams.  Any legitimate certification body (i.e., the organization developing and administering an exam against a syllabus) will employ professional psychometricians to ensure proper exams.

In evaluating whether an exam properly assesses someone’s qualifications, you need answers to four questions. First, is the exam statistically valid, and can the certification body prove validity?  Second, is the exam a quality instrument, free from grammatical and spelling errors, formatting problems, and other glitches that might distract exam takers, and what process does the certification body use to ensure quality?  Third, is the exam of uniform difficulty and quality whenever and wherever it is administered, and how does the certification body accomplish uniformity?  And, fourth and finally, since exam questions are developed by people, what steps does the certification body use to ensure the integrity of the exams; i.e., that the questions are not leaked to candidates, onto the internet, or to accredited training providers?

This last point—that of accredited training providers—brings us to an important consideration.  It is certainly valuable to have training available to support certification programs.  Accrediting training, whereby the certification body checks the content of the training to ensure compliance with and coverage of the syllabus, can help busy managers and professionals narrow their search for quality training.  However, when the accreditation process is opaque, when only members of the certification body offer accredited training, or, worse yet, when accredited training is required to take an exam, you are not looking at a real certification: you are looking at a marketing vehicle for some company’s or cartel’s training programs.  You should pick certification programs that have open, transparent processes for accreditation, with a diverse, competitive field of training providers, and which do not require any training at all to take the exams.

Certifications can help IT managers and professionals grow their teams and their skills, if chosen carefully. If you select the right bodies of knowledge, developed by the right people and delivering the right skills for your work, certification can lead to improvements in effectiveness, efficiency, and communication within teams. It’s also essential that the certification body follow best practices in the creation and delivery of the exams. And, if you decide to use training to help achieve certification, make sure to pick a program where the training supports the certification, not vice versa.  If you follow these basic concepts, you can obtain good value from IT certification programs, both as a professional and as a hiring manager.

— Published


Useful Software Testing Templates

By Rex Black

Many of our clients and course attendees ask about templates.  We have many templates posted on our Basic Library and our Advanced Library.  A few worth particular mention are:

The last one isn't actually a template, but it's something many people find interesting. 

If there's some other template you need, search the Basic Library and Advanced Library.  If you still don't find it, post a comment here on the blog letting the readers and me know what you're looking for.  Maybe someone can help.

Remember, though, a template is not an excuse to turn your brain off.  Be sure to use templates thoughtfully.

— Published


Regional Software Testing Immaturity: Fact or Fallacy?

By Rex Black

I took a few moments today to record another video blog entry, which you can find at Regional Software Testing Immaturity: Fact or Fallacy?

Here's the synopsis: I'm in Los Angeles, on my way to the STANZ conference in Australia and New Zealand. That and other recent international trips have gotten me thinking about something that I often hear at such international conferences: comments along the lines of, "You know, software testing as a profession and a practice is really immature in region X," where region X might or might not be where I'm at.  Based on my experience with clients around the world, though, the gap isn't as big as people often think it is.  Is software testing actually significantly less mature in some regions than others?  What has your experience been?  I'd be interested in opinions, case studies, and stories from you, especially the many international readers of this blog but also people in North America.

— Published


Do Software Test Professionals Need More Technical Skills?

By Rex Black

A couple of recent events might seem to indicate a greater appetite--and need--for stronger technical skills.  I thought I'd leave a quick video blog post on this topic--see Technical Testers Blog.  Let me know what you think.  Are we in need of stronger technical skills?  Do you have technical skills?  Do you need and want technical skills?

— Published


Assessing Software Testing Processes

By Rex Black

When I wrote my book Critical Testing Processes in the early 2000s, I started with the premise that some test processes are critical and some are not. I designed this lightweight framework for test process assessment and improvement in order to focus the test team and test manager on a few test areas that they simply must do properly. This contrasts with other, more expansive, and more complex frameworks.  In addition, the Critical Testing Processes (CTP) framework eschews the prescriptive elements of other frameworks: it does not impose an arbitrary, staged maturity model.

What’s the problem with prescriptive models?  In our consulting work, we have found that businesses want to make improvements based on the business value of the improvement and the organizational pain that improvement will alleviate. A simplistic maturity rating might lead a business to make improvements in parts of the overall software process or test process that are actually less problematic or less important than other parts of the process simply because the model listed them in order.

CTP is a non-prescriptive process model. It describes the important software processes and what should happen in them, but it doesn’t put them in any order of improvement. This makes CTP a very flexible model. It allows you to identify and deal with specific challenges to your test processes. It identifies various attributes of good processes, both quantitative and qualitative. It allows you to use business value and organizational pain to select the order and importance of improvements. It is also adaptable to all software development lifecycle models.

Since CTP focuses on a small number of processes, how are those processes selected? What is a critical testing process? Here’s a quick way to think about it: If a test team executes the critical testing processes well, it will almost always succeed, but if it executes these activities poorly, even talented individual testers and test managers will usually fail. Let’s expand this definition a bit.

First, I defined a process as some sequence of actions, observations, and decisions. Next, I defined testing as the activities involved in planning, preparing, performing, and perfecting the assessing of the quality of a system. So, with a definition of a test process firmly in hand, what makes a test process a critical test process? I applied four criteria:

  • Is the process repeated frequently, so that it affects efficiency of the test team and the project team?
  • Is the process highly cooperative, involving a large number of people, particularly cross-functionally, so that it affects test team and project team cohesion and cooperation?
  • Is the process visible to peers and superiors, so that it affects the credibility of the test team?
  • Is the process linked to project success, in such a way as to affect project team or test team effectiveness?

In other words, a critical test process directly and significantly affects the test team’s ability to find bugs, build confidence, reduce risks, and generate information.

Based on these criteria, I identified the following twelve critical testing processes:

  • Testing. The overall process, viewed at a macro, strategic level. It consists of eleven constituent critical testing processes.
  • Establishing context. This process aligns testing within the project and the organization. It clarifies expectations on all sides. It establishes the groundwork for tailoring all other testing processes.
  • Quality risk analysis. This process identifies the key risks to system quality. It aligns testing with the key risks to system quality. It builds quality and test stakeholder consensus around what is to be tested (and how much) and what is not to be tested (and why).
  • Test estimation. This process balances the costs and time required for testing against project needs and risks. It accurately and actionably forecasts the tasks and duration of testing. It demonstrates the return on the testing investment to justify the amount of test work requested.
  • Test planning. This process builds consensus and commitment among test team and broader project team participants. It creates a detailed map for all test participants. It captures information for retrospectives and future projects.
  • Test team development. Since testing is only as good as the team that does it, this process matches test team skills to the critical test tasks. It assures competence in the critical skills areas. It continuously aligns team capabilities with organizational value of testing.
  • Test system development. This process ensures coverage of the critical risks to system quality. It creates tests that reproduce the customers’ and users’ experiences of quality. It balances resource and time requirements against criticality of risk. It includes test cases, test data, test procedures, test environments, and other support material.
  • Test release management. If we don’t have the test object, we can’t test it. If the test items don’t work in the test environment, we can’t test them. If each test release is not better than the one before, we’re not on a path for success. So, this process focuses on how to get solid, reliable test releases into the test environment.
  • Test execution. This process, the running of test cases and comparison of test results against expected results, generates information about bugs, what works, and what doesn’t. In other words, this is where the value of testing is created. This process consumes significant resources. It occurs at the end of the project and gates project completion.
  • Bug reporting. This process creates an opportunity to improve the system (and thus to save money). While test execution generates the value of testing, this process delivers part of the value of testing to the project team, specifically the individual contributors and line managers. It builds tester credibility with programmers.
  • Results reporting. This process provides management with the information needed to guide the project. It delivers another part of the value of testing to the project team, particularly line managers, senior managers, and executives. Since test results are often bad news, it separates the message from the messenger. It builds tester credibility with managers.
  • Change management. This process allows the test team and the project team to respond to what they’ve learned so far. It selects the right changes in the right order. It focuses efforts on the highest return-on-investment activities.

You might notice that I’ve described each of these processes in terms of what an optimal process will achieve. If the process does not achieve those standards of capability—and more—then it is not optimal and has room for improvement.

How can you use CTP for assessment and improvement? Test process improvements using CTP begin with an assessment of the existing test process. This assessment will identify which of the twelve test processes are currently done properly and which need improvement. The assessment results in a set of prioritized recommendations for improvements. Whether you use the framework to do your own assessment or hire a consulting firm like RBCS to do it, the assessment should base the recommendations on organizational needs.

Since the assessments are tailorable, they can vary depending on the specific organization and its needs. We have done narrowly focused CTP assessments that looked only at one test team, we have done CTP assessments that looked only at one or two test processes like the test system, and we have done broad CTP assessments that looked at everything that affects quality. So, while CTP assessments vary, we tend to examine the following metrics during a CTP assessment:

  • Defect detection percentage (see the calculation sketch after this list)
  • Return on the testing investment
  • Requirements coverage and risk coverage
  • Test release overhead
  • Defect report rejection rate
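Of these, defect detection percentage (DDP) lends itself to a quick illustration: it is the proportion of the bugs present at release time that the test team actually found.  Here is a minimal sketch of the calculation; the counts are hypothetical, and in practice they come from the defect tracking system.

    # Minimal sketch of the defect detection percentage (DDP) metric.
    # The counts below are hypothetical; real values come from the defect tracker.

    def defect_detection_percentage(found_by_testing, found_after_release):
        # DDP = bugs found by the test team / all bugs present at release time
        total = found_by_testing + found_after_release
        return 100.0 * found_by_testing / total if total else 0.0

    # Example: testers found 350 bugs before release; customers reported 150 more.
    print(defect_detection_percentage(350, 150))  # 70.0 (percent)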

In addition, we also tend to evaluate the following qualitative factors, among others, during a CTP assessment:

  • Test team role and effectiveness
  • Usefulness of the test plan
  • Test team skills in testing, domain knowledge, and technology
  • Value of the defect reports
  • Usefulness of test result reports
  • Change management utility and balance

Once an assessment has identified opportunities for improvement, the assessor will develop plans to implement this improvement. While the model includes generic guidelines for process improvement for each of the critical testing processes, the assessor is expected to tailor those heavily.

I designed the CTP model to be very flexible. It does assume the primary use of an analytical risk-based testing strategy, balanced with a dynamic testing strategy. However, you can adapt CTP to use other test strategies primarily, such as requirements based, checklist based, or model based.

There are some five to ten metrics for each critical testing process, along with some qualitative evaluations. Now, we don’t typically look at every metric for every process on every single assessment because the selected metrics, like the model itself, are tunable. This results in a customer-centric assessment.

So, what is the value of a CTP assessment?  CTP, like any other process model, provides a starting point, a standard framework, and a way of measuring your processes. A process assessment using a process model identifies opportunities to improve your current process that could not be identified by applying continuous process improvement techniques, such as Deming’s Plan-Do-Check-Act (PDCA), to your existing processes.  This is true because techniques such as PDCA are incremental improvement techniques, while process models provide a method for quantum leaps in process improvement through the introduction of known best practices.

Properly done, CTP assessments deliver specific recommendations, along with the order in which to implement them. When my associates and I carry out CTP assessments for clients, we typically deliver a 50 to 100 page report with our assessment of the critical testing processes, our recommendations for improving them, the business justification for the improvements, and a roadmap for implementing those improvements. If you look at any two of our assessment reports, you might see very similar recommendations but in very different order. Why? Because each client has a different level of opportunity associated with the recommendation. In some cases, constraints or preconditions can influence the order. Those constraints and levels of opportunity tend to be unique from one organization to another, and a non-prescriptive model adapts to those unique organizational needs.

Let me close with an important note:  Whether you use CTP or some other test process assessment model, don’t use process assessments as a one-time activity. Having done an assessment and found opportunities to improve, you should—of course—improve. Furthermore, at regular intervals, you should reassess to see the effect of the changes.  Based on that reassessment, you should course correct your process improvement.

— Published


The Return on the Software Testing Investment

By Rex Black

The process of negotiating a software testing budget can be painful. Some project managers view testing as a necessary evil that occurs at the end of the project. In these people’s minds, testing costs too much, takes too long, doesn’t help them build the product, and can create hostility between the test team and the rest of the development organization. No wonder people who view testing this way spend as little as possible on it.

Other project managers, though, are inclined to spend more on testing. Why? Smart software managers understand that testing is an investment in quality. Out of the overall project budget, the project managers set aside some money for assessing the product and resolving the bugs that the testers find. Smart test managers have learned how to manage that investment wisely. In such circumstances, the test investment produces a positive return, fits within the overall project schedule, has quantifiable findings, and is seen as a definite contributor to the project.

As Phil Crosby wrote, “quality is free.” This can be demonstrated by breaking down quality-related costs as follows:

C_quality = C_conformance + C_nonconformance

Conformance costs include prevention costs and appraisal costs. Prevention costs are moneys spent on quality assurance—tasks like code reviews, walkthroughs, or inspections, requirements definition, and other activities that promote good software. Appraisal costs include moneys spent planning test activities, developing test cases and data, and executing those cases—once.

Nonconformance costs come in two flavors, internal failures and external failures. The costs of internal failure include all expenses that arise when test cases fail the first time they’re run—as they often do. Think through the process: The tester researches and reports the failure, the developer finds and fixes the fault, the build engineer produces a new release, and the tester retests the new release to confirm the fix and to check for regression. The costs of external failure are those incurred when, rather than a tester finding a bug, the customer does. In these cases, not only does the same process described above occur, but you also incur the technical support overhead and the more expensive process of releasing a fix to the field rather than to the test lab. In addition, consider the intangible costs: Angry customers, damage to the company image, lost business, and maybe even lawsuits.
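Putting these definitions together, the breakdown behind the equation above can be written out in full; this simply restates the terms defined in the two preceding paragraphs:

    C_quality = C_conformance + C_nonconformance
              = (C_prevention + C_appraisal) + (C_internal_failure + C_external_failure)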

Two observations lay the foundation for the enlightened view of testing as an investment. First, like any cost equation in business, we will want to minimize the cost of quality. Second, while it is often cheaper to prevent problems than to repair them, if we must repair problems, internal failures cost less than external failures.

Let's look at a hypothetical case study, illustrated in the figure below.  Assume we have a software product in the field, with one new release every quarter. On average, each release contains 1,000 “must-fix” bugs—unacceptable defects—which we identify and repair over the life of the release. Currently, developers find and fix 250 of those bugs during development, while the customers find the rest. Cost of quality accounting shows that, on average, each pre-release defect costs $10, while field failures cost $1,000. As shown in the “No Formal Testing” column in the figure below, our cost of quality is three-quarters of a million dollars—and customers are mad! So, we invest $70,000 per quarterly release in a manual testing process. The “Manual Testing” column shows how profitable this investment is. The testers find 350 bugs before the release, which cuts almost in half the number of bugs found by customers. Not only does this make the customers happier, but, because cost of quality accounting shows that we pay only $100 on average for each bug found by testers, our total cost of quality has dropped to about half a million dollars. Our ROI on this $70,000 investment is 350%.

[Figure: Return on the Software Testing Investment]

In some cases, we can do even better. For example, suppose that we invest $150,000 in test automation tools, amortizing that investment over twelve quarterly releases, and manage to find about 40% more bugs? Finding 500 bugs in the test process would lower the overall customer bug find count for each release to 250—a huge improvement over the initial situation. In addition, cost of quality would fall to a little under $400,000, a 445% return on investment.
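For readers who want to check the arithmetic, here is a small sketch that reproduces the case study figures above.  The per-bug costs, bug counts, and investments come from the text; the ROI convention (savings in total cost of quality relative to the no-testing baseline, divided by the testing investment) is my reading of the case study rather than a formula stated in it.

    # Sketch reproducing the hypothetical cost-of-quality case study above.
    # The figures come from the text; treat the ROI convention as an assumption.

    COST_DEV_BUG = 10        # average cost of a bug found during development
    COST_TEST_BUG = 100      # average cost of a bug found by the test team
    COST_FIELD_BUG = 1_000   # average cost of a bug found by customers
    TOTAL_BUGS = 1_000       # "must-fix" bugs per quarterly release

    def cost_of_quality(found_by_dev, found_by_test, test_investment):
        found_by_customers = TOTAL_BUGS - found_by_dev - found_by_test
        failure_costs = (found_by_dev * COST_DEV_BUG
                         + found_by_test * COST_TEST_BUG
                         + found_by_customers * COST_FIELD_BUG)
        return failure_costs + test_investment

    def roi_percent(baseline, scenario, investment):
        # Savings in cost of quality relative to the baseline, per dollar invested.
        return 100.0 * (baseline - scenario) / investment

    no_testing = cost_of_quality(250, 0, 0)                           # $752,500
    manual = cost_of_quality(250, 350, 70_000)                        # $507,500
    automated = cost_of_quality(250, 500, 70_000 + 150_000 // 12)     # $385,000

    print(roi_percent(no_testing, manual, 70_000))                    # 350.0
    print(roi_percent(no_testing, automated, 70_000 + 150_000 // 12)) # about 445

Running the sketch reproduces the roughly 350% and 445% returns quoted above, and it makes it easy to plug in your own per-bug costs, detection counts, and testing investment.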

Cost of quality analyses on software process improvement bear out these figures. We have seen clients enjoy returns on the software testing investment ranging anywhere from 50% to as high as 3,200%.  While software testing is only part of achieving software quality, it is an important part, and a substantial investment is justifiable to achieve such phenomenal gains.

To get started, you’ll need a management team wise enough to look at the cost of quality over the entire life of the software release. A management team that ignores the long term and focuses just on the budget required to get the software out the door initially does not see testing as an investment. Quality is given lip-service when the only priorities are shipping something—anything—on a given schedule and within a given budget.

However, having supportive, far-sighted management is only a necessary—not a sufficient—condition for achieving positive returns on your test investment. Just as in the stock market, there are right and wrong ways to invest. Picking the right tests, managing the appropriate quality risks, using the proper tools and techniques, and driving testing throughout the organization will result in optimal returns, while failure in any one of these areas can mean disappointing or even negative returns. These topics come up again and again in this blog, but if you'd like me to address one or more of them specifically, please let me know.

— Published



Copyright © 2017 Rex Black Consulting Services.
All Rights Reserved.
PMI is a registered mark of the Project Management Institute, Inc.