Blog

Free Tool for Calculating Software Testing ROI

By Rex Black

I recently gave a workshop at the STANZ conference, first in Wellington and then in Sydney.  In this workshop, I mentioned that connecting software testing to business value is a key test management challenge of the 2010s.  (Of course, it's really been a challenge for the entire time there has been software testing, but it's a challenge we've yet to resolve.)  Everyone in both audiences agreed, and a number of people offered examples of how this challenge was affecting them.

Earlier this week, I gave a webinar on how to calculate the return on the software testing investment.   You can listen to a recording of that webinar if you missed it.

In that webinar, I walked through a case study of calculating software testing ROI.  This case study was described in an article originally published in Software Test and Performance magazine, and you can now find the article here on the RBCS website.

After the webinar, a bunch of people sent e-mails saying, "Hey, could you please post the spreadsheet that you walked through during the webinar?"  Here at RBCS, we like to say yes to our friends, clients, and supporters, so we did.  You can find the free software testing ROI spreadsheet on our Advanced Library now, under the name Case Study Info Appliance Test ROI.xls.

Before you use the spreadsheet, I suggest you read the article I mentioned above.  The article explains how the spreadsheet works and explains the case study numbers included in the spreadsheet by way of example.

— Published


Picking Certifications: Software Testing and Beyond

By Rex Black

One key to quality software is the quality of the people involved in creating and maintaining it.  One tool for increasing the quality of your team is training existing employees, which I’ll address in a later blog post. For this post, I want to focus on something that is often confused with training, but actually is (or at least should be) something entirely different: certification.

All IT managers--whether software test managers or other software managers--want to hire qualified people.  Certainly, IT certification can be part of the qualification puzzle in many IT fields.  IT professionals often use certification in key skill areas to demonstrate their qualifications.  However, with all the certification programs floating around out there, how do managers and professionals distinguish useful certifications from pointless wallpaper?  In this post, I’ll examine how you can pick the right IT certifications for yourself (as an individual) or for your team and the people you hire (as a manager).

Any certification worth considering will have, at its basis, a body of knowledge or syllabus.  This document should describe the skills and abilities that the certification measures.  Those people who have mastered most of these skills and abilities (sometimes called “learning objectives” in the syllabus) will be able to earn the certification, usually through some kind of exam. 

So, the first and most important step is to determine whether the skills and abilities listed in the syllabus are useful.  Does the syllabus relate to your day-to-day work?  Will the benefits of achieving the certification—increased effectiveness and efficiency, credibility of the team, etc.—justify the cost?

Of course, it’s possible that your day-to-day work should more closely resemble what is described in the syllabus.  This can happen when your organization is not following industry best practices.  So, you should also evaluate the source of the syllabus.  If the syllabus was written by a broad, international team of recognized, published industry experts, perhaps you should consider moving your practices towards those required for certification.  Adopting the certification as a guideline for your practices—and hiring people with the certification—can be a good way to move in this direction.

Selecting a certification developed by a broad team of recognized, published industry experts is important because, in general, such certifications enjoy increased acceptance over certifications developed by a small clique of like-minded people.  People in the industry will recognize the names of the authors and developers of the syllabus.  To some extent, the credibility and thus value of all certifications rests upon the reputation and credibility of the people who stand behind those certifications. 

I also mentioned that the team of experts should be international, because so often now we are engaged in globally distributed work.  If you are not working in a globally distributed fashion today, you probably will be soon.  So, you need certifications that have a global reach.  If you want to hire (or be part of) a global team of certified professionals, a single common certification is key.  This way, the whole team speaks the same language and knows the same concepts.

Of course, if you plan to hire people who hold a certification because you believe the syllabus has value, you want to be confident that those people have indeed mastered the topics in the syllabus.  This brings us back to the matter of the exam. 

Certification exams are a complicated issue, and some ill-informed polemics about exams circulate on a few websites.  Proper creation of exams is the province of a profession called psychometrics.  Psychometrics applies the fields of psychology, education, and statistics to the process of qualifying people through exams.  Any legitimate certification body (i.e., the organization developing and administering an exam against a syllabus) will employ professional psychometricians to ensure proper exams.

In evaluating whether an exam properly assesses someone’s qualifications, you need answers to four questions. First, is the exam statistically valid, and can the certification body prove validity?  Second, is the exam a quality instrument, free from grammatical and spelling errors, formatting problems, and other glitches that might distract exam takers, and what process does the certification body use to ensure quality?  Third, is the exam of uniform difficulty and quality whenever and wherever it is administered, and how does the certification body accomplish uniformity?  And, fourth and finally, since exam questions are developed by people, what steps does the certification body use to ensure the integrity of the exams; i.e., that the questions are not leaked to candidates, onto the internet, or to accredited training providers?

This last point—that of accredited training providers—brings us to an important consideration.  It is certainly valuable to have training available to support certification programs.  Accrediting training, whereby the certification body checks the content of the training to ensure compliance with and coverage of the syllabus, can help busy managers and professionals narrow their search for quality training.  However, when the accreditation process is opaque, when only members of the certification body offer accredited training, or, worse yet, when accredited training is required to take an exam, you are not looking at a real certification: you are looking at a marketing vehicle for some company’s or cartel’s training programs.  You should pick certification programs that have open, transparent processes for accreditation, with a diverse, competitive field of training providers, and which do not require any training at all to take the exams.

Certifications can help IT managers and professionals grow their teams and their skills, if chosen carefully. If you select the right bodies of knowledge, developed by the right people and delivering the right skills for your work, certification can lead to improvements in effectiveness, efficiency, and communication within teams. It’s also essential that the certification body follow best practices in the creation and delivery of the exams. And, if you decide to use training to help achieve certification, make sure to pick a program where the training supports the certification, not vice versa.  If you follow these basic concepts, you can obtain good value from IT certification programs, both as a professional and as a hiring manager.

— Published


Useful Software Testing Templates

By Rex Black

Many of our clients and course attendees ask about templates.  We have many templates posted on our Basic Library and our Advanced Library.  A few worth particular mention are:

The last one isn't actually a template, but it's something many people find interesting. 

If there's some other template you need, search the Basic Library and Advanced Library.  If you still don't find it, post a comment here on the blog letting the readers and me know what you're looking for.  Maybe someone can help.

Remember, though, a template is not an excuse to turn your brain off.  Be sure to use templates thoughtfully.

— Published


Regional Software Testing Immaturity: Fact or Fallacy?

By Rex Black

I took a few moments today to record another video blog entry, which you can find at Regional Software Testing Immaturity: Fact or Fallacy?

Here's the synopsis: I'm in Los Angeles, on my way to the STANZ conference in Australia and New Zealand. That trip and other recent international travels have gotten me thinking about something I often hear at such international conferences: comments along the lines of, "You know, software testing as a profession and a practice is really immature in region X," where region X might or might not be where I'm at.  Based on my experience with clients around the world, though, the gap isn't as big as people often think it is.  Is software testing actually significantly less mature in some regions than others?  What has your experience been?  I'd be interested in opinions, case studies, and stories from you, especially the many international readers of this blog but also people in North America.

— Published


Do Software Test Professionals Need More Technical Skills?

By Rex Black

A couple of recent events seem to indicate a growing appetite--and need--for stronger technical skills.  I thought I'd leave a quick video blog post on this topic--see Technical Testers Blog.  Let me know what you think.  Are we in need of stronger technical skills?  Do you have technical skills?  Do you need and want technical skills?

— Published


Assessing Software Testing Processes

By Rex Black

When I wrote my book Critical Testing Processes in the early 2000s, I started with the premise that some test processes are critical, some are not. I designed this lightweight framework for test process assessment and improvement in order to focus the test team and test manager on a few test areas that they simply must do properly. This contrasts with other, more expansive, and more complex frameworks.  In addition, the Critical Testing Processes (CTP) framework eschews the prescriptive elements of other frameworks because it does not impose an arbitrary, staged maturity model.

What’s the problem with prescriptive models?  In our consulting work, we have found that businesses want to make improvements based on the business value of the improvement and the organizational pain that improvement will alleviate. A simplistic maturity rating might lead a business to make improvements in parts of the overall software process or test process that are actually less problematic or less important than other parts of the process simply because the model listed them in order.

CTP is a non-prescriptive process model. It describes the important software processes and what should happen in them, but it doesn’t put them in any order of improvement. This makes CTP a very flexible model. It allows you to identify and deal with specific challenges to your test processes. It identifies various attributes of good processes, both quantitative and qualitative. It allows you to use business value and organizational pain to select the order and importance of improvements. It is also adaptable to all software development lifecycle models.

Since CTP focuses on a small number of processes, how are those processes selected? What is a critical testing process? Here’s a quick way to think about it: If a test team executes the critical testing processes well, it will almost always succeed, but if it executes these activities poorly, even talented individual testers and test managers will usually fail. Let’s expand this definition a bit.

First, I defined a process as some sequence of actions, observations, and decisions. Next, I defined testing as the activities involved in planning, preparing, performing, and perfecting the assessment of the quality of a system. So, with a definition of a test process firmly in hand, what makes a test process a critical test process? I applied four criteria:

  • Is the process repeated frequently, so that it affects efficiency of the test team and the project team?
  • Is the process highly cooperative, involving a large number of people, particularly cross-functionally, so that it affects test team and project team cohesion and cooperation?
  • Is the process visible to peers and superiors, so that it affects the credibility of the test team?
  • Is the process linked to project success, in such a way as to affect project team or test team effectiveness?

In other words, a critical test process directly and significantly affects the test team’s ability to find bugs, build confidence, reduce risks, and generate information.

Based on these criteria, I identified the following twelve critical testing processes:

  • Testing. The overall process, viewed at a macro, strategic level. It consists of eleven constituent critical testing processes.
  • Establishing context. This process aligns testing within the project and the organization. It clarifies expectations on all sides. It establishes the groundwork for tailoring all other testing processes.
  • Quality risk analysis. This process identifies the key risks to system quality. It aligns testing with the key risks to system quality. It builds quality and test stakeholder consensus around what is to be tested (and how much) and what is not to be tested (and why).
  • Test estimation. This process balances the costs and time required for testing against project needs and risks. It accurately and actionably forecasts the tasks and duration of testing. It demonstrates the return on the testing investment to justify the amount of test work requested.
  • Test planning. This process builds consensus and commitment among test team and broader project team participants. It creates a detailed map for all test participants. It captures information for retrospectives and future projects.
  • Test team development. Since testing is only as good as the team that does it, this process matches test team skills to the critical test tasks. It assures competence in the critical skills areas. It continuously aligns team capabilities with organizational value of testing.
  • Test system development. This process ensures coverage of the critical risks to system quality. It creates tests that reproduce the customers’ and users’ experiences of quality. It balances resource and time requirements against criticality of risk. It includes test cases, test data, test procedures, test environments, and other support material.
  • Test release management. If we don’t have the test object, we can’t test it. If the test items don’t work in the test environment, we can’t test them. If each test release is not better than the one before, we’re not on a path for success. So, this process focuses on how to get solid, reliable test releases into the test environment.
  • Test execution. This process, the running of test cases and comparison of test results against expected results, generates information about bugs, what works, and what doesn’t. In other words, this is where the value of testing is created. This process consumes significant resources. It occurs at the end of the project and gates project completion.
  • Bug reporting. This process creates an opportunity to improve the system (and thus to save money). While test execution generates the value of testing, this process delivers part of the value of testing to the project team, specifically the individual contributors and line managers. It builds tester credibility with programmers.
  • Results reporting. This process provides management with the information needed to guide the project. It delivers another part of the value of testing to the project team, particularly line managers, senior managers, and executives. Since test results are often bad news, it separates the message from the messenger. It builds tester credibility with managers.
  • Change management. This process allows the test team and the project team to respond to what they’ve learned so far. It selects the right changes in the right order. It focuses efforts on the highest return-on-investment activities.

You might notice that I’ve described each of these processes in terms of what an optimal process will achieve. If the process does not achieve those standards of capability—and more—then it is not optimal and has room for improvement.

How can you use CTP for assessment and improvement? Test process improvements using CTP begin with an assessment of the existing test process. This assessment will identify which of the twelve test processes are currently done properly and which need improvement. The assessment results in a set of prioritized recommendations for improvements. Whether you use the framework to do your own assessment or hire a consulting firm like RBCS to do it, the assessment should base the recommendations on organizational needs.

Since the assessments are tailorable, they can vary depending on the specific organization and its needs. We have done narrowly focused CTP assessments that looked only at one test team, we have done CTP assessments that looked at only one or two test processes, such as the test system, and we have done broad CTP assessments that looked at everything that affects quality. So, while CTP assessments vary, we tend to examine the following metrics during a CTP assessment:

  • Defect detection percentage (computed as shown in the sketch after this list)
  • Return on the testing investment
  • Requirements coverage and risk coverage
  • Test release overhead
  • Defect report rejection rate
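
To make the first and last of these metrics concrete, here is a minimal sketch of how defect detection percentage and defect report rejection rate are computed from their standard definitions; the counts in the example are invented for illustration:

    # Two of the quantitative CTP assessment metrics, computed from their
    # standard definitions; the counts below are invented for the example.

    def defect_detection_percentage(found_by_testing, found_in_production):
        # Share of the total defects that testing caught before release.
        return found_by_testing / (found_by_testing + found_in_production) * 100

    def defect_report_rejection_rate(rejected_reports, total_reports):
        # Share of defect reports rejected as invalid or duplicate.
        return rejected_reports / total_reports * 100

    print(defect_detection_percentage(425, 75))   # 85.0 (percent)
    print(defect_report_rejection_rate(30, 455))  # about 6.6 (percent)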

In addition, we also tend to evaluate the following qualitative factors, among others, during a CTP assessment:

  • Test team role and effectiveness
  • Usefulness of the test plan
  • Test team skills in testing, domain knowledge, and technology
  • Value of the defect reports
  • Usefulness of test result reports
  • Change management utility and balance

Once an assessment has identified opportunities for improvement, the assessor will develop plans to implement those improvements. While the model includes generic guidelines for process improvement for each of the critical testing processes, the assessor is expected to tailor those heavily.

I designed the CTP model to be very flexible. It does assume the primary use of an analytical risk-based testing strategy, balanced with a dynamic testing strategy. However, you can adapt CTP to use other test strategies primarily, such as requirements based, checklist based, or model based.

There are some five to ten metrics for each critical testing process, along with some qualitative evaluations. Now, we don’t typically look at every metric for every process on every single assessment because the selected metrics, like the model itself, are tunable. This results in a customer-centric assessment.

So, what is the value of a CTP assessment?  CTP, like any other process model, provides a starting point, a standard framework, and a way of measuring your processes. A process assessment using a process model identifies opportunities to improve your current process that could not be identified by applying continuous process improvement techniques, such as Deming’s Plan-Do-Check-Act (PDCA), to your existing processes.  This is true because techniques such as PDCA are incremental improvement techniques, while process models provide a method for quantum leaps in process improvement through the introduction of known best practices.

Properly done, CTP assessments deliver specific recommendations, along with the order in which to implement them. When my associates and I carry out CTP assessments for clients, we typically deliver a 50 to 100 page report with our assessment of the critical testing processes, our recommendations for improving them, the business justification for the improvements, and a roadmap for implementing those improvements. If you look at any two of our assessment reports, you might see very similar recommendations but in very different order. Why? Because each client has a different level of opportunity associated with the recommendation. In some cases, constraints or preconditions can influence the order. Those constraints and levels of opportunity tend to be unique from one organization to another, and a non-prescriptive model adapts to those unique organizational needs.

Let me close with an important note:  Whether you use CTP or some other test process assessment model, don’t use process assessments as a one-time activity. Having done an assessment and found opportunities to improve, you should—of course—improve. Furthermore, at regular intervals, you should reassess to see the effect of the changes.  Based on that reassessment, you should course correct your process improvement.

— Published


The Return on the Software Testing Investment

By Rex Black

The process of negotiating a software testing budget can be painful. Some project managers view testing as a necessary evil that occurs at the end of the project. In these people’s minds, testing costs too much, takes too long, doesn’t help them build the product, and can create hostility between the test team and the rest of the development organization. No wonder people who view testing this way spend as little as possible on it.

Other project managers, though, are inclined to spend more on testing. Why? Smart software managers understand that testing is an investment in quality. Out of the overall project budget, the project managers set aside some money for assessing the product and resolving the bugs that the testers find. Smart test managers have learned how to manage that investment wisely. In such circumstances, the test investment produces a positive return, fits within the overall project schedule, has quantifiable findings, and is seen as a definite contributor to the project.

As Phil Crosby wrote, “quality is free.” This can be demonstrated by breaking down quality related costs as follows:

C_quality = C_conformance + C_nonconformance

Conformance costs include prevention costs and appraisal costs. Prevention costs are moneys spent on quality assurance—tasks like code reviews, walkthroughs, or inspections, requirements definition, and other activities that promote good software. Appraisal costs include moneys spent planning test activities, developing test cases and data, and executing those cases—once.

Nonconformance costs come in two flavors, internal failures and external failures. The costs of internal failure include all expenses that arise when test cases fail the first time they’re run—as they often do. Think through the process: The tester researches and reports the failure, the developer finds and fixes the fault, the build engineer produces a new release, and the tester retests the new release to confirm the fix and to check for regression. The costs of external failure are those incurred when, rather than a tester finding a bug, the customer does. In these cases, not only does the same process described above occur, but you also incur the technical support overhead and the more expensive process of releasing a fix to the field rather than to the test lab. In addition, consider the intangible costs: Angry customers, damage to the company image, lost business, and maybe even lawsuits.
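
Putting those four cost components back into the equation above gives the full breakdown:

C_quality = (C_prevention + C_appraisal) + (C_internal + C_external)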

Two observations lay the foundation for the enlightened view of testing as an investment. First, like any cost equation in business, we will want to minimize the cost of quality. Second, while it is often cheaper to prevent problems than to repair them, if we must repair problems, internal failures cost less than external failures.

Let's look at a hypothetical case study, illustrated in the figure below.  Assume we have a software product in the field, with one new release every quarter. On average, each release contains 1,000 “must-fix” bugs—unacceptable defects—which we identify and repair over the life of the release. Currently, developers find and fix 250 of those bugs during development, while the customers find the rest. Cost of quality accounting shows that, on average, each pre-release defect costs $10, while field failures cost $1,000. As shown in the “No Formal Testing” column in the figure below, our cost of quality is three-quarters of a million dollars—and customers are mad! So, we invest $70,000 per quarterly release in a manual testing process. The “Manual Testing” column shows how profitable this investment is. The testers find 350 bugs before the release, which cuts almost in half the number of bugs found by customers. Not only does this make the customers happier, but, because cost of quality accounting shows that we pay only $100 on average for each bug found by testers, our total cost of quality has dropped to about half a million dollars. Our ROI on this $70,000 investment is 350%.

[Figure: Return on the Software Testing Investment]

In some cases, we can do even better. For example, suppose that we invest $150,000 in test automation tools, amortizing that investment over twelve quarterly releases, and manage to find about 40% more bugs. Finding 500 bugs in the test process would lower the overall customer bug find count for each release to 250—a huge improvement over the initial situation. In addition, cost of quality would fall to a little under $400,000, a 445% return on investment.
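
If you'd like to check the case study arithmetic yourself, or adapt it to your own numbers, here is a minimal Python sketch of the same calculation; the bug counts and per-bug costs come straight from the case study above, while the function and variable names are my own:

    # Cost-of-quality model from the case study: 1,000 must-fix bugs per
    # quarterly release; $10 per bug found in development, $100 per bug
    # found by testers, $1,000 per bug found by customers in the field.
    COST_PER_DEV_BUG = 10
    COST_PER_TEST_BUG = 100
    COST_PER_FIELD_BUG = 1000
    MUST_FIX_BUGS = 1000

    def cost_of_quality(dev_bugs, test_bugs, test_investment):
        # Total cost of quality for one release: the testing investment
        # plus the cost of handling bugs wherever they are found.
        field_bugs = MUST_FIX_BUGS - dev_bugs - test_bugs
        return (test_investment
                + dev_bugs * COST_PER_DEV_BUG
                + test_bugs * COST_PER_TEST_BUG
                + field_bugs * COST_PER_FIELD_BUG)

    no_testing = cost_of_quality(dev_bugs=250, test_bugs=0, test_investment=0)
    manual = cost_of_quality(dev_bugs=250, test_bugs=350, test_investment=70000)
    # Automation adds $150,000 amortized over twelve quarterly releases
    # ($12,500 per release) on top of the manual testing investment.
    automated = cost_of_quality(dev_bugs=250, test_bugs=500,
                                test_investment=70000 + 150000 // 12)

    print(no_testing)  # 752500 -- about three-quarters of a million dollars
    print(manual)      # 507500 -- about half a million dollars
    print(automated)   # 385000 -- a little under $400,000

    # ROI = (savings in cost of quality) / (testing investment)
    print((no_testing - manual) / 70000 * 100)     # 350.0 (percent)
    print((no_testing - automated) / 82500 * 100)  # about 445 (percent)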

Cost of quality analyses on software process improvement bear out these figures. We have seen clients who enjoy return on the software testing investment that ranges anywhere from 50% to as high as 3200%.  While software testing is only part of achieving software quality, it is an important part, and a substantial investment is justifiable to achieve such phenomenal gains.

To get started, you’ll need a management team wise enough to look at the cost of quality over the entire life of the software release. A management team that ignores the long term and focuses just on the budget required to get the software out the door initially does not see testing as an investment. Quality is given lip-service when the only priorities are shipping something—anything—on a given schedule and within a given budget.

However, having supportive, far-sighted management is only a necessary—not a sufficient—condition for achieving positive returns on your test investment. Just as in the stock market, there are right and wrong ways to invest. Picking the right tests, managing the appropriate quality risks, using the proper tools and techniques, and driving testing throughout the organization will result in optimal returns, while failure in any one of these areas can mean disappointing or even negative returns. These topics come up again and again in this blog, but if you'd like me to address one or more of them specifically, please let me know.

— Published


Testing in the Dark: What If I Have No Specs?

By Rex Black

In order to design, develop, and run tests, you need what’s often referred to as a test oracle, something that tells you what the expected, correct result of a specific test should be. Specifications, requirements, business rules, marketing road maps, and other such documents frequently play this role. However, what if you receive no formal information that explains what the system under test should do?

In some organizations with mature development processes, the test department will not proceed without specifications. Because everyone expects to provide a specification to the test team as part of the development process, you are seen as reasonable and within the bounds of the company’s culture when you insist on written specs.

Trouble arises, however, if you stiffen your neck this way in a company that operates in a less mature fashion. Depending on your company’s readiness to embrace formal processes (and also on your personal popularity, tenure, and political clout), any one of a spectrum of outcomes could occur:

  • Your management, recognizing the need to formalize processes, backs you up 100 percent and institutes formal requirements- and design-specification processes throughout the organization as part of the planning phase of every new development project. Industry-standard templates for internal product documentation become the norm, and consultants are brought in to train people.
  • Your management, not knowing quite how to handle this odd demand, assumes that you must know what you’re talking about. The dictate goes out to all the organization’s groups to support you, but since no one has any training in formal development processes, the effort produces poor-quality documents that don’t help. Furthermore, because the effort is (rightly) seen as a waste of time, people are upset with you for bringing it up.
  • Your management listens to your demand but then explains that the company just isn’t ready for such cultural and process shifts. Perhaps things will change after the next few products go out, they speculate, but right now, process just isn’t on the to-do list. Besides, this product is really critical to the success of the company, and taking big chances on unproven ways of doing things would be too risky. You are told to get back to work.
  • You are fired.

The moral of this story is that you should carefully consider whether your company is ready for formal processes before you insist on requirements or design specifications and other accoutrements of mature development projects.

This situation is exacerbated by the increasing popularity of Agile methodologies. Some of the people behind Agile approaches have endorsed a set of principles that include the following: “The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.” As you can imagine—if you are not already dealing with this situation—such a principle discounts the role of written specifications, encouraging instead an on-going dialog about the correct behavior of a system. In other words, the outcomes of discussions between stakeholders represent correct behavior. These discussions can happen in a meeting, but for some RBCS clients following Agile methods these discussions happen as one-on-ones between a developer and another stakeholder.

I’m all for ongoing dialog between project stakeholders. However, unless there are written minutes from these discussions, agreed upon by all project stakeholders, including the test team, this approach has the risk that different people come to different conclusions about what the outcome of the discussion was. Of course, this is a big testing challenge if you weren’t part of the discussion.

Even more challenging is the need to deal with the possibility that at any point the definition of correct behavior can change. The Agile principles embrace this, saying, “Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.” While I can see the business value in being able to shape software like modeling clay up until the last minute, it’s hard to create and maintain test cases if the definition of correctness is continually re-defined.

When properly applied, Agile models can bring some advantages to the test team. However, they also bring challenges, not the least of which is the tendency to devalue solid, stable, documented definitions of correct behavior.

If you have to deal with a situation where the project team cannot or will not deliver written specifications in advance, then, during test development, you might consider the following options for testing without specifications:

If you are testing a commercial product, remember that you have the benefit of competitors. Because your customers will expect your product to behave substantially like the products of your competitors, these competitive products are, in a sense, your test oracle. In compatibility test labs, for example, most projects have a reference platform—a competitor’s system, against which the system under test is being positioned, in the hope of demolishing it in the marketplace.
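
In automated form, this kind of reference oracle amounts to differential testing: run the same inputs through your product and the reference product, and investigate any disagreement. Here is a minimal sketch; system_under_test and reference_product are hypothetical stand-ins for however you drive each product programmatically:

    # Differential testing against a reference oracle. The two imports are
    # hypothetical stand-ins for whatever programmatic interface (API,
    # command line, etc.) each product exposes.
    from my_product import system_under_test    # hypothetical
    from competitor import reference_product    # hypothetical

    sample_inputs = ["input-001", "input-002", "input-003"]  # invented examples

    for item in sample_inputs:
        ours = system_under_test(item)
        theirs = reference_product(item)
        if ours != theirs:
            # A mismatch is not automatically a bug in our product -- the
            # reference could be wrong -- but it deserves a bug report.
            print(f"Possible bug: {item!r} -> ours={ours!r}, reference={theirs!r}")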

If your technical colleagues won’t tell you what the product should do, perhaps your friends in sales and marketing will. In my experience, sales and marketing people live to create glitzy presentations showing where the product line is going. Although they can be general and imprecise, these documents might tell you which features and capabilities the product should support. If you’re testing a product for which questions about supported features are harder to answer than questions regarding correct behavior, these documents might suffice for a somewhat vague but useful oracle.

Ask your customer-support colleagues. Your colleagues in customer support might not have much information about what the product should do, but they probably know what they don’t want the product to do. Since your testing stands between them and the hellish scenario of supporting a buggy product in the field, they are usually happy to tell you.

Unless the product is truly unique, you can use inductive reasoning to figure out what constitutes reasonable expectations and correct behavior in many cases. The generic categories into which products fit tell you a lot about what the products are supposed to do: a word processor, a Web browser, a PC, a laptop, a server, an operating system. Some esoteric questions might arise, but a core dump, a system crash, a burning CPU, garbage on the screen, an error message in the wrong language, and abysmal performance are indisputably bugs.

If in doubt, you should consider any suspect behavior buggy. Because you don’t have a crisp way of determining pass and fail conditions, you will make mistakes in result interpretation. Remember that calling correct behavior a bug and working through the bug life cycle is less detrimental to product quality than failing to report questionable behavior that does turn out to be a bug. Be sure to file bug reports when questions arise.

One thing to keep in mind about this situation is that you are definitely not alone. Many people are struggling with the right amount of documentation to gather, and errors are made on both sides. I try to maintain an open mind, even though the 20-questions approach to defining expected results is somewhat frustrating. It is a good idea, if you’re working in a poorly specified situation, to make sure management understands that your test development will be less efficient due to the need to pull information from other groups. My usual rule of thumb is that the lack of clear requirements and design specifications imposes a 20- to 30-percent inefficiency on test development, and I estimate accordingly.

— Published


Malaysia: Software Testing Hub?

By Rex Black

It's not just hype; it's government policy.  That phrase could describe Malaysia's announced ambitions to become a software testing "hub." Hub seems to mean a preferred software test outsourcing destination, but it might also have other connotations that are less obvious.

In his opening remarks at the SOFTEC 2010 conference, government minister Mohamed Yakcop made the point again and again.  Come to Malaysia (or stay in Malaysia) and help us build the country into a worldwide, premier venue for software test outsourcing. 

It's a nice ambition for a country to have. As a fellow foreign testing dignitary who was sitting next to me as Yakcop spoke (and who shall remain nameless) asked after Yakcop's speech, "Can they pull it off?"  The answer to that question depends on a number of factors.  With 20+ years of experience with outsourced testing, I can identify some key enablers of outsourcing success that help a country establish itself as a powerhouse:

  • People:  Certainly the history of high tech outsourcing is about what is politely referred to as "labor rate arbitrage," which, more frankly expressed, means lots of people who will work comparatively cheaply.  Malaysia has about 30 million people, a population slightly smaller than California's. 
  • Education:  Since software testing, like all software engineering, is brain work, said cheap workforce must also be an educated workforce.  The Malaysian Software Testing Board has announced a goal of taking 10,000 people (including recent college grads) through the ISTQB certification program (using materials licensed from RBCS).
  • Location: As I've said for years, outsourcing requires jet fuel.  Key people will need to visit, in both directions, for outsourcing to work.  Malaysia is a long trip from just about any North American or European city, but certainly there are plenty of flights to Kuala Lumpur.  Malaysia's not quite as clean as Singapore, but it's as clean as Taiwan or China.  And it is a charming place to visit, with friendly people and a culinary capacity that almost surpasses Taipei.  Don't come expecting to lose weight, unless you wire your mouth shut before arriving!
  • Infrastructure: It's really helpful to outsource to a country that works.  Malaysia works, infrastructure-wise.  The roads work.  The internet works.  The power works.  The planes run on time.  The airports don't look like a cross between a landfill and the waiting area of the visitors lounge in a prison.  You can safely drink the water in the parts of Malaysia that I've visited (you'll never underestimate the value of potable water after you've had a bout of water-borne illness in a foreign country). Infrastructure is an enabler that is often missing from other countries, which can make doing business in those countries a real hassle.  Malaysia's infrastructure makes it about as easy to get work done as in any western country.
  • Political stability: Outsourcing doesn't require laissez-faire capitalism to work (cf. China for a case in point), but it does require a set of rules, enforced with reasonable transparency, that don't change all the time.  Anxiety about revolutions, full-scale civil war, or major hostilities with neighbors can create obstacles, or at least hesitancy, though Taiwan's, Korea's, and India's success proves that the potential for war is not as scary to outside businesses as you might think it would be.  Malaysia, a constitutional, parliamentary democracy, has been governed by the ruling party, UMNO, since its founding, and has maintained a business-friendly environment.
  • Time: According to Malcolm Gladwell's book, Outliers, it takes about 10,000 hours of practice to master any intellectually difficult field.  Software testing certainly is intellectually difficult.  At a typical 2,000 working hours per year, 10,000 hours comes to about five years.  This factor is probably one of the bigger challenges for Malaysia, given the relative paucity of experts currently available to mentor and lead the software testing hubfolk (hubizens? hubinots?), though there's plenty of evidence that the market will overlook issues of experience if the price per hour is low enough.  An attempt to overcome this challenge by importing foreign mentors was a subtext of Yakcop's remarks, at least as I heard them.
  • Anchors:  As someone who has run an international consultancy for over 15 years now, I know that it really helps to have a few strong, anchor clients.  The same is going to be true for Malaysia's testing hub.  Getting a few multinationals to establish testing centers of excellence in Malaysia and getting a few large, successful Malaysia testing service providers off the ground will be key.
  • Buzz: Let's face it.  When it comes to following fads, the tech industry is second only to the fashion industry.  (For example, check out Gartner's venerable hype cycle.)  If Malaysia obtains the industry buzz that India had in 2001 or that China has today, at least in terms of software testing, then they can probably expect to get all the software testing hubbiness that 30 million people can handle--and then some.  Which brings us back to people and education, and the need to be ready to scale the software testing workforce quickly.

I've been to Malaysia about a dozen times now over the last few years.  From what I've seen, from the hundreds of people I've talked to and trained there, and from the leaders behind this hub concept, it certainly looks like most of the enablers are there.  Malaysia is a place to watch in the software testing industry, and it would be unwise to bet against people like Mohamed Yakcop and Mastura abu Samah, President of the Malaysian Software Testing Board.  To paraphrase Lenin, the Malaysians involved in this Malaysian Software Testing Hub initiative are software testers in a hurry.

— Published


How to Develop an Effective and Efficient Test Team

By Rex Black

As with any other activity, a major element of doing testing well is using the right people.  The right test team can mean the difference between good software and bad software, with serious security, performance, or reliability problems.  So, let’s look at how you can develop a test team that can help you achieve your quality goals in an effective and efficient manner.

I’ll start with how not to develop a test team.  Many managers—especially those who have never seen professional testing—assume anyone can do it.  They try to get the job done with junior programmers, users, or business analysts.  While these people have a role to play, by themselves they do a poor job of testing.  For example, our assessments show that a professional, trained test team typically finds over 85% of the bugs in the software that they test.  Users, business analysts, and programmers are lucky to find 40% of the bugs.  So, while we need programmers to unit test the code they write, and users and business analysts to acceptance test the applications as they go into production, most of the system testing should be done by a professional test team.

What skills should a professional test team have?  This varies from project to project, but the skills fall into three groups. First are application domain skills, an understanding of the business problem solved by the application.  Second are technical skills, an understanding of the technologies used to build the application.  Third are testing skills, the skills that testers bring to the quality assurance role that programmers, business analysts, and users cannot bring.  Professional testers know how to analyze risks so that testing focuses on the right things.  Professional testers can plan testing in a way that maximizes the value realized from the time and resources invested.  Professional testers can apply test design techniques to find bugs that would otherwise slip past people. 

True professional testers can be hard to find, so you might decide to develop your own team around a core of one or two senior testers.  This is certainly a strategy that can succeed, and many of our clients have succeeded with it. Companies such as RBCS offer a wide range of training programs for testers throughout their career arc, from entry level to seasoned expert.  As with other professions, there are mature, globally-recognized certifications available for testers. The best and most widely-accepted of these come from the International Software Testing Qualifications Board (ISTQB), which offers Foundation and Advanced certifications as a way to measure progress towards professionalism throughout the tester’s career, through objective, vendor-independent exams. 

You’ll need a particular set of skills if you want to use automation for your testing.  Test automation tools allow you to automate important activities like regression, performance, and reliability testing.  Regression testing checks whether changes to your application broke existing features.  Performance testing checks for slow response times under various likely levels of usage.  Reliability testing checks for frequent or lengthy application availability problems.  It takes years to learn how to do test automation properly.  Our clients have told us horror stories about losing hundreds of thousands of dollars on failed test automation projects, so make sure that anyone you hire for such a position has been through a lot of test automation projects.
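
As a small illustration of the regression idea, here is a sketch of an automated regression check; the billing module, the calculate_invoice_total function, and the expected values are all hypothetical:

    # A minimal automated regression check: re-run known input/expected-output
    # pairs after every change to confirm existing features still work.
    # The billing module and its expected totals are hypothetical examples.
    from billing import calculate_invoice_total  # hypothetical

    REGRESSION_CASES = [
        # (line_items, expected_total) captured from previously verified releases
        ([("widget", 2, 10.00)], 20.00),
        ([("widget", 1, 10.00), ("gadget", 3, 4.50)], 23.50),
        ([], 0.00),
    ]

    def test_invoice_totals_unchanged():
        for line_items, expected in REGRESSION_CASES:
            assert calculate_invoice_total(line_items) == expected

Run under a framework such as pytest, a suite of such checks gives a cheap, repeatable answer to the question of whether a change broke anything that used to work.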

You might also need people with testing experience in your application domain or industry.  Some of our clients, such as those in medical systems and banking, find that regulatory compliance issues associated with their application have important implications for testing, implications known only to experienced testers who have previously tested such applications.  Other clients, such as those in video games and oil exploration applications, find that special knowledge of how the applications work is essential.  That said, don’t over-estimate the value of application domain knowledge, because that route often leads people to employ business analysts and users exclusively, with the poor outcomes I mentioned before.

A proper, professional test team has the right mix of skills and experience for the complex and often under-appreciated job of checking the quality of the applications.  Not only will such a team often find twice as many bugs as one composed of non-testers, but they will also typically save their employers the cost of employing them, many times over.  Our assessments have shown that professional test teams save anywhere from 40% to 3200% more than the team costs, primarily through avoided costs of production failure.  Armed with the proper test team, you are ready to achieve your quality goals, and to save money while doing so.

— Published



Copyright © 2019 Rex Black Consulting Services.
All Rights Reserved.
PMI is a registered mark of the Project Management Institute, Inc.