Blog

Testing in the Dark: What If I Have No Specs?

By Rex Black

In order to design, develop, and run tests, you need what’s often referred to as a test oracle, something that tells you what the expected, correct result of a specific test should be. Specifications, requirements, business rules, marketing road maps, and other such documents frequently play this role. However, what if you receive no formal information that explains what the system under test should do?

In some organizations with mature development processes, the test department will not proceed without specifications. Because everyone expects to provide a specification to the test team as part of the development process, you are seen as reasonable and within the bounds of the company’s culture when you insist on written specs.

Trouble arises, however, if you stiffen your neck this way in a company that operates in a less mature fashion. Depending on your company’s readiness to embrace formal processes (and also on your personal popularity, tenure, and political clout), any one of a spectrum of outcomes could occur:

  • Your management, recognizing the need to formalize processes, backs you up 100 percent and institutes formal requirements- and design-specification processes throughout the organization as part of the planning phase of every new development project. Industry-standard templates for internal product documentation become the norm, and consultants are brought in to train people.
  • Your management, not knowing quite how to handle this odd demand, assumes that you must know what you’re talking about. The dictate goes out to all the organization’s groups to support you, but since no one has any training in formal development processes, the effort produces poor-quality documents that don’t help. Furthermore, because the effort is (rightly) seen as a waste of time, people are upset with you for bringing it up.
  • Your management listens to your demand but then explains that the company just isn’t ready for such cultural and process shifts. Perhaps things will change after the next few products go out, they speculate, but right now, process just isn’t on the to-do list. Besides, this product is really critical to the success of the company, and taking big chances on unproven ways of doing things would be too risky. You are told to get back to work.
  • You are fired.

The moral of this story is that you should carefully consider whether your company is ready for formal processes before you insist on requirements or design specifications and other accoutrements of mature development projects.

This situation is exacerbated by the increasing popularity of Agile methodologies. Some of the people behind Agile approaches have endorsed a set of principles that includes the following: “The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.” As you can imagine—if you are not already dealing with this situation—such a principle discounts the role of written specifications, encouraging instead an ongoing dialog about the correct behavior of a system. In other words, the outcomes of discussions between stakeholders define correct behavior. These discussions can happen in a meeting, but for some RBCS clients following Agile methods they happen as one-on-ones between a developer and another stakeholder.

I’m all for ongoing dialog between project stakeholders. However, unless there are written minutes from these discussions, agreed upon by all project stakeholders, including the test team, this approach carries the risk that different people will come to different conclusions about what the outcome of the discussion was. Of course, this is a big testing challenge if you weren’t part of the discussion.

Even more challenging is the need to deal with the possibility that at any point the definition of correct behavior can change. The Agile principles include this, saying, “Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.” While I can see the business value in being able to shape software like modeling clay up until the last minute, it’s hard to create and maintain test cases if the definition of correctness is continually re-defined.

When properly applied, Agile models can bring some advantages to the test team. However, they also bring challenges, not the least of which is the tendency to devalue solid, stable, documented definitions of correct behavior.

If you have to deal with a situation where the project team cannot or will not deliver written specifications in advance of test development, you might consider the following options for testing without specifications:

If you are testing a commercial product, remember that you have the benefit of competitors. Because your customers will expect your product to behave substantially like the products of your competitors, these competitive products are, in a sense, your test oracle. In compatibility test labs, for example, most projects have a reference platform—a competitor’s system, against which the system under test is being positioned, in the hope of demolishing it in the marketplace.
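To make the oracle idea concrete, here is a minimal sketch in Python (with invented names) of how a reference implementation can stand in for a competitor's behavior: the test relies on no written spec, only on the expectation that our result should substantially agree with the reference.

    import statistics

    def average_order_value(order_totals):
        """Hypothetical function from the system under test."""
        return sum(order_totals) / len(order_totals)

    def test_average_matches_reference_oracle():
        sample_orders = [19.99, 250.00, 75.50, 12.25]
        expected = statistics.mean(sample_orders)   # the reference plays the oracle role
        actual = average_order_value(sample_orders)
        # The oracle tells us roughly what "correct" looks like, not a
        # bit-exact specification, so allow a small tolerance.
        assert abs(actual - expected) < 1e-9

    if __name__ == "__main__":
        test_average_matches_reference_oracle()
        print("System under test agrees with the reference oracle.")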

If your technical colleagues won’t tell you what the product should do, perhaps your friends in sales and marketing will. In my experience, sales and marketing people live to create glitzy presentations showing where the product line is going. Although they can be general and imprecise, these documents might tell you which features and capabilities the product should support. If you’re testing a product for which questions about supported features are harder to answer than questions regarding correct behavior, these documents might suffice for a somewhat vague but useful oracle.

Ask your customer-support colleagues. Your colleagues in customer support might not have much information about what the product should do, but they probably know what they don’t want the product to do. Since your testing stands between them and the hellish scenario outlined in the previous section, they are usually happy to tell you.

Unless the product is truly unique, you can use inductive reasoning to figure out what constitutes reasonable expectations and correct behavior in many cases. The generic categories into which products fit tell you a lot about what the products are supposed to do: a word processor, a Web browser, a PC, a laptop, a server, an operating system. Some esoteric questions might arise, but a core dump, a system crash, a burning CPU, garbage on the screen, an error message in the wrong language, and abysmal performance are indisputably bugs.

If in doubt, you should consider any suspect behavior buggy. Because you don’t have a crisp way of determining pass and fail conditions, you will make mistakes in result interpretation. Remember that calling correct behavior a bug and working through the bug life cycle is less detrimental to product quality than failing to report questionable behavior that does turn out to be a bug. Be sure to file bug reports when questions arise.

One thing to keep in mind about this situation is that you are definitely not alone. Many people are struggling to find the right amount of documentation to gather, and errors are made on both sides. I try to maintain an open mind, even though the 20-questions approach to defining expected results is somewhat frustrating. If you’re working in a poorly specified situation, make sure management understands that your test development will be less efficient due to the need to pull information from other groups. My usual rule of thumb is that the lack of clear requirements and design specifications imposes a 20- to 30-percent inefficiency on test development, and I estimate accordingly.

— Published


Malaysia: Software Testing Hub?

By Rex Black

It's not just hype; it's government policy.  That phrase could describe Malaysia's announced ambitions to become a software testing "hub." Hub seems to mean a preferred software test outsourcing destination, but it might also have other connotations that are less obvious.

In his opening remarks at the SOFTEC 2010 conference, government minister Mohamed Yakcop made the point again and again.  Come to Malaysia (or stay in Malaysia) and help us build the country into a worldwide, premier venue for software test outsourcing. 

It's a nice ambition for a country to have. As a fellow foreign testing dignitary (who shall remain nameless) sitting next to me asked after Yakcop's speech, "Can they pull it off?"  The answer to that question depends on a number of factors.  With 20+ years of experience with outsource testing, I can identify some key enablers to outsourcing success that help a country establish itself as a powerhouse:

  • People:  Certainly the history of high tech outsourcing is about what is politely referred to as "labor rate arbitrage," which, more frankly expressed, means lots of people who will work comparatively cheaply.  Malaysia has about 30 million people, a population slightly smaller than California's. 
  • Education:  Since software testing, like all software engineering, is brain work, said cheap workforce must also be an educated workforce.  The Malaysian Software Testing Board has announced a goal of taking 10,000 people (including recent college grads) through the ISTQB certification program (using materials licensed from RBCS).
  • Location: As I've said for years, outsourcing requires jet fuel.  Key people will need to visit, in both directions, for outsourcing to work.  Malaysia is a long trip from just about any North American or European city, but certainly there are plenty of flights to Kuala Lumpur.  Malaysia's not quite as clean as Singapore, but it's as clean as Taiwan or China.  And it is a charming place to visit, with friendly people and a culinary capacity that almost surpasses Taipei.  Don't come expecting to lose weight, unless you wire your mouth shut before arriving!
  • Infrastructure: It's really helpful to outsource to a country that works.  Malaysia works, infrastructure-wise.  The roads work.  The internet works.  The power works.  The planes run on time.  The airports don't look like a cross between a landfill and the waiting area of the visitors lounge in a prison.  You can safely drink the water in the parts of Malaysia that I've visited (you'll never underestimate the value of potable water after you've had a bout of water-borne illness in a foreign country). Infrastructure is an enabler that is often missing from other countries, which can make doing business in those countries a real hassle.  Malaysia's infrastructure makes it about as easy to get work done as in any western country.
  • Political stability: Outsourcing doesn't require laissez-faire capitalism to work (cf. China for a case in point), but it does require a set of rules, enforced with reasonable transparency, that don't change all the time.  Anxiety about revolutions, full-scale civil war, or major hostilities with neighbors can create obstacles, or at least hesitancy, though Taiwan's, Korea's, and India's success proves that the potential for war is not as scary to outside businesses as you might think it would be.  Malaysia, a constitutional, parliamentary democracy, has been governed by the ruling party, UMNO, since its founding, and has maintained a business-friendly environment.
  • Time: According to Malcolm Gladwell's book, Outliers, it takes about 10,000 hours of practice to master any intellectually difficult field.  Software testing certainly is intellectually difficult.  10,000 hours comes to about five years of full-time work.  This factor is probably one of the bigger challenges for Malaysia, given the relative paucity of experts currently available to mentor and lead the software testing hubfolk (hubizens? hubinots?), though there's plenty of evidence that the market will overlook issues of experience if the price per hour is low enough.  An attempt to overcome this challenge by importing foreign mentors was a subtext of Yakcop's remarks, at least as I heard them.
  • Anchors:  As someone who has run an international consultancy for over 15 years now, I know that it really helps to have a few strong, anchor clients.  The same is going to be true for Malaysia's testing hub.  Getting a few multinationals to establish testing centers of excellence in Malaysia and getting a few large, successful Malaysia testing service providers off the ground will be key.
  • Buzz: Let's face it.  When it comes to following fads, the tech industry is second only to the fashion industry.  (For example, check out Gartner's venerable hype cycle.)  If Malaysia obtains the industry buzz that India had in 2001 or that China has today, at least in terms of software testing, then they can probably expect to get all the software testing hubbiness that 30 million people can handle--and then some.  Which brings us back to people and education, and the need to be ready to scale the software testing workforce quickly.

I've been to Malaysia about a dozen times now over the last few years.  From what I've seen, from the hundreds of people I've talked to and trained there, and from the leaders behind this hub concept, it certainly looks like most of the enablers are there.  Malaysia is a place to watch in the software testing industry, and it would be unwise to bet against people like Mohamed Yakcop and Mastura abu Samah, President of the Malaysian Software Testing Board.  To paraphrase Lenin, the Malaysians involved in this Malaysian Software Testing Hub initiative are software testers in a hurry.

— Published


How to Develop an Effective and Efficient Test Team

By Rex Black

As with any other activity, a major element of doing testing well is using the right people.  The right test team can mean the difference between good software and bad software, with serious security, performance, or reliability problems.  So, let’s look at how you can develop a test team that can help you achieve your quality goals in an effective and efficient manner.

I’ll start with how not to develop a test team.  Many managers—especially those who have never seen professional testing—assume anyone can do it.  They try to get the job done with junior programmers, users, or business analysts.  While these people have a role to play, by themselves they do a poor job of testing.  For example, our assessments show that a professional, trained test team typically finds over 85% of the bugs in the software that they test.  Users, business analysts, and programmers are lucky to find 40% of the bugs.  So, while we need programmers to unit test the code they write, and users and business analysts to acceptance test the applications as they go into production, most of the system testing should be done by a professional test team.

What skills should a professional test team have?  This varies from project to project, but the skills fall into three groups. First are application domain skills, an understanding of the business problem solved by the application.  Second are technical skills, an understanding of the technologies used to build the application.  Third are testing skills, which are the skills that testers bring to the quality assurance role that neither programmers, business analysts, nor users can bring.  Professional testers know how to analyze risks so that testing focuses on the right things.  Professional testers can plan testing in a way that maximizes the value realized from the time and resources invested.  Professional testers can apply test design techniques to find bugs that would otherwise slip past people. 

True professional testers can be hard to find, so you might decide to develop your own team around a core of one or two senior testers.  This is certainly a strategy that can succeed, and many of our clients have succeeded with it. Companies such as RBCS offer a wide range of training programs for testers throughout their career arc, from entry level to seasoned expert.  As with other professions, there are mature, globally recognized certifications available for testers. The best and most widely accepted of these come from the International Software Testing Qualifications Board (ISTQB), whose Foundation and Advanced certifications measure progress toward professionalism throughout a tester's career through objective, vendor-independent exams. 

You’ll need a particular set of skills if you want to use automation for your testing.  Test automation tools allow you to automate important activities like regression, performance, and reliability testing.  Regression testing checks whether changes to your application broke existing features.  Performance testing checks for slow response times under various likely levels of usage.  Reliability testing checks for frequent or lengthy application availability problems.  It takes years to learn how to do test automation properly.  Our clients have told us horror stories about losing hundreds of thousands of dollars on failed test automation projects, so make sure that anyone you hire for such a position has been through a lot of test automation projects.

You might also need people with testing experience in your application domain or industry.  Some of our clients, such as those in medical systems and banking, find that regulatory compliance issues associated with their application have important implications for testing, implications known only to experienced testers who have previously tested such applications.  Other clients, such as those in video games and oil exploration applications, find that special knowledge of how the applications work is essential.  That said, don’t over-estimate the value of application domain knowledge, because that route often leads people to employ business analysts and users exclusively, with the poor outcomes I mentioned before.

A proper, professional test team has the right mix of skills and experience for the complex and often under-appreciated job of checking the quality of the applications.  Not only will such a team often find twice as many bugs as one composed of non-testers, but they will also typically save their employers the cost of employing them, many times over.  Our assessments have shown that professional test teams save anywhere from 40% to 3200% more than the team costs, primarily through avoided costs of production failure.  Armed with the proper test team, you are ready to achieve your quality goals, and to save money while doing so.

— Published


A Little Fun in China

By Rex Black

While I focus on immediately applicable ideas in this blog, every so often, maybe a little light-hearted fun is in order?  On that note, here's a short video of me ringing a big bell in Nanking, China.  Let me know if you like it!  Ringing Confucius Bell in Nanking

— Published


System Interoperability Risks: Software Testing and Beyond

By Rex Black

It's a well-known--though not always widely practiced--best practice of software engineering to build quality into an application by integrating multiple quality assurance activities into the entire software development and maintenance process.  This is important, but most of us don’t have the luxury of thinking about only one application.  Today, it’s not enough to know that each application works properly on its own.  More and more, everything talks to everything else, and the trend is for greater interoperability in the future.  Before release, we need to know that these disparate applications will work together.

Of course, you can and should expand the scope of testing (and other quality assurance activities) to address issues of interoperability.  Requirements engineers should consider the various applications that should talk to each other.  Designers and architects should construct simple, robust channels for data and control to flow between those interoperating applications. 

The whole team should particularly consider security, performance, and reliability risks associated with interoperability.  Connecting two or more applications together magnifies the potential likelihood and impact of security vulnerabilities.  For example, a previously-standalone application with access to customer credit card information, when connected to an application that provides information to your company web site, might create an avenue for leakage of this sensitive data.

When applications communicate, that can affect the performance of each application.  For example, suppose an application can now request data from another application that stores the requested information in two or more tables.  That could result in a database join query that deals with large volumes of data.  This problem can be made worse yet if the table indices are not set up properly. I once worked with a client where a multi-year project that involved dozens of people failed after three years of effort because of database-related performance problems.
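To illustrate the join-and-index point, here is a small SQLite-based sketch (the table and column names are invented); the exact plan text varies by database, but the idea is that indexing the join column lets the optimizer search rather than repeatedly scan.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    """)

    query = """
        SELECT c.name, o.total
        FROM customers AS c
        JOIN orders AS o ON o.customer_id = c.id
    """

    def show_plan(label):
        print(label)
        for row in conn.execute("EXPLAIN QUERY PLAN " + query):
            print("  ", row)

    show_plan("Query plan before indexing the join column:")
    conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
    show_plan("Query plan after indexing orders.customer_id:")
    # With the index in place, the planner can satisfy the join with an index
    # search on orders.customer_id instead of repeated table scans, which is
    # the difference that matters as data volumes grow.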

Application interoperability can also affect reliability.  For example, if delays or losses of information occur, this can result in timeouts, unexpected voids in data records, and processing with illegal default values.  In addition, the sending or receiving application may fail to convert data properly.  This could cause the application to stop responding or even crash, or simply result in “garbage in, garbage out” scenarios that cause failures later.  For example, the Mars Climate Orbiter mission was lost due to problems with data conversion between metric and English units in two interoperating NASA systems.
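As a tiny, hypothetical illustration of defending against that kind of conversion failure, the sketch below tags data crossing an interface with an explicit unit and makes the receiver convert or reject rather than assume; the field names and the pound-force example are mine, not from any particular system.

    POUND_FORCE_TO_NEWTONS = 4.4482216152605

    def normalize_thrust(value, unit):
        """Convert incoming thrust readings to newtons, the receiving system's unit."""
        if unit == "N":
            return value
        if unit == "lbf":
            return value * POUND_FORCE_TO_NEWTONS
        raise ValueError(f"Unknown thrust unit: {unit!r}")  # fail loudly, not silently

    # Because the sender labels its data, a unit mismatch surfaces immediately
    # instead of producing quietly wrong numbers downstream.
    print(normalize_thrust(100.0, "lbf"))   # about 444.8 N
    print(normalize_thrust(444.8, "N"))     # already in newtons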

As I mentioned, the project team should start to address these risks during requirements definition and design.  In addition, code reviews and static analysis can also locate potential problems.  However, you’ll also need to carry out system integration testing prior to releasing interoperating applications.

When you do system integration testing, interoperability is one of the main types of testing you are typically doing, along with security, performance, reliability, and end-to-end functionality.  All of these types of testing require the use of realistic test data so that important test conditions are covered.  Production data is the obvious place to obtain realistic test data, but there are pitfalls and surprises that can occur. 

Remember that production data can include personal and confidential information.  For good reason, organizations tend to place restrictions on who can access such data.  This becomes particularly critical if you intend to outsource some of the system integration testing.  There are tools available to anonymize production data in a way that preserves valuable test conditions while irreversibly hiding personal data.  Production data sets are often very large, so be sure to allocate plenty of time to complete any test data anonymization project.
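Here's a minimal sketch of the kind of anonymization such tools perform (the field names, salt, and approach are illustrative only, not a recommendation of any particular product): personal fields get deterministic stand-ins so that records still join up across applications, while non-personal values keep their realistic test conditions.

    import hashlib

    SALT = b"rotate-this-secret-per-project"

    def pseudonym(value, prefix):
        digest = hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:12]
        return f"{prefix}_{digest}"

    def anonymize_customer(record):
        anonymized = dict(record)
        anonymized["name"] = pseudonym(record["name"], "cust")
        anonymized["email"] = pseudonym(record["email"], "mail") + "@example.test"
        # Non-identifying fields such as order totals keep their real values,
        # preserving the test conditions that make production data valuable.
        return anonymized

    print(anonymize_customer(
        {"name": "Alice Example", "email": "alice@example.com", "order_total": 125.40}
    ))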

Keep in mind that using software as a service (SAAS) and open source applications changes the situation, but it certainly does not make these risks go away.  With either approach, if you are sharing data with such applications, you lose some of the control you would otherwise have over the interfaces between those applications and your own.  In the case of open source software, an organization can at least determine its own schedule for taking updates.  For SAAS, the software can change without any warning.  If your application shares data with open source or SAAS applications in typical ways, you can rely (to some extent) on the risks being addressed as part of the release process.  However, if your application interoperates with these applications in an atypical fashion, you could end up discovering issues that no one thought to address. You should try to understand whether your usage of these applications matches the typical usage in order to understand interoperability risks and thus the degree of system integration testing required.

Today, with communicating systems both collocated and distributed across the cloud, interoperability matters more than ever. Project teams should start considering interoperability during requirements and design.  They should continue to address key interoperability risks such as performance, security, and reliability throughout the lifecycle, including code reviews, static analysis, and testing, especially during system integration testing. Since system integration testing involves test types that have very particular test data requirements, be sure to plan carefully for this phase of testing.  And, in your system integration test plans, don’t forget to include open source and SAAS applications that you use in your data center.  While managing interoperability risks is not trivial, it is essential to avoid nasty interoperability surprises.

— Published


What Testers Can Learn from Toyota’s Woes

By Rex Black

How the mighty have fallen.  Starting in the 1980s, Japanese companies became legendary for quality, and none more legendary than Toyota.  Today, Toyota leads the news—due to quality problems.  The situation is so severe that Toyota CEO Akio Toyoda personally appeared at a Congressional hearing.  In that hearing, Toyoda said, “We know that the problem is not software, because we tested it.” 

Is this a realistic way to think about software quality assurance?  In fact, increasing indications (including reliable information from confidential sources in Japan) are that some of the problems are software-related.  Let’s look at the quality and testing lessons we can draw from Toyota’s debacle.  Let’s start with that quote from Toyoda, because it’s so categorical—and so wrong. 

Size can deceive.  Consider bridges.  The Sydney Harbour Bridge, the Golden Gate Bridge, and the Tsing Ma Bridge are enormous structures.   However, they are built of well-understood engineering materials such as concrete, steel, stone, and asphalt, which have well-defined engineering, physical, and chemical properties.  Being physical objects, they obey the laws of physics and chemistry, as do the materials that interact with them—air, water, rubber, pollution, salt, and so forth.  Further, we’ve been building bridges for thousands of years.  We know how bridges behave, and how they fail.  Ironically enough, given some of the lessons in this post, our ability to use computers to design and simulate bridges has increased their reliability even further. 

Size notwithstanding, a bridge is a simpler thing to test than a Toyota Prius.  In the complex system of systems that controls the Prius, there are too many states, too many lines of code, too many data flows, too many use cases, too many sequences of events, too many transient failures to recover from.  Consider this example:  Engineers at Sun Microsystems told an associate of mine that the number of possible internal states in a single Solaris server is 10,000 times greater than the number of molecules in the universe. 

I have been involved in testing and quality almost my whole 25-plus year career.  I know how important testing is to quality.  In the two lost Shuttle missions, software failure was not the cause, thanks to the legions of software testers who worked on the mission control systems.   However, there’s less software involved in a shuttle mission than in driving your Prius to the grocery store.  Software for late 1970s and early 1980s hardware is orders of magnitude simpler and smaller than software for 2010-era computers.  You could not run an iPhone, not to mention a Prius, on the computers that run the shuttle.  And even when computers were smaller and simpler, you could not exhaustively test the systems.  Glenford Myers, in the first book on software testing, written in 1979, recognized this fact.  Whether testing cars or data centers, software testing is a necessary but insufficient means to quality.    

This brings us to the next lesson from Toyota, though it is by no means company or culturally specific.  We have clients around the world. It is common, across borders, across companies, across cultures, for people to forget that complex systems can exhibit unpredictable, in some cases catastrophic, failures.  It is also common for people to forget that failures are not proportional to the size of the defect. 

To see examples, consult the Internet for the answers to four questions. Why did a SCUD missile evade the Patriot missiles and hit a troop barracks in the first Gulf War?  Why did the first Ariane 5 rocket explode?  Why did not one but two NASA Mars missions fail?  Why did the Therac-25 kill cancer patients?  In each instance, the answer is discouragingly simple: an infinitesimally small percentage of the code proved defective.

Again, size deceives.  If you knock a rivet out of a bridge, does the bridge fall?  No.  If you nick a wire in a single suspension cable, does the bridge fall?  No.  If you carve your name in a facing stone on a pillar, does the bridge fall?  No.  Yet some software fails for similarly small defects involving just a few lines of code. 

So, what can we do?  Well, first, remember that software testing cannot save us from this problem.  However, there are many different software testing techniques.  Each type of testing can expose a different set of defects.  Testers must use different test techniques, test data, and test environments for different bugs during different levels of testing.  Each technique, each set of test data, each environment, and each test level filters out a different set of bugs.  There is no “one true way” to test software.

Now, I’m not saying that Toyota believes in a “one true way” to quality.  Toyota learned quality management from J.M. Juran and W.E. Deming, heroes in the pantheon of quality. Juran and Deming knew much better than to believe in a single magic bullet for quality.  However, as we saw from Toyoda’s comments, he did believe too much in testing.  In addition, I suspect that Toyota as a company believed too little in integration testing, and perhaps too much in vendors. 

Here’s the problem: When complex systems are built from many subsystems, and some of the subsystems are produced by vendors, risks can go up and accountability can go down.  It’s not that vendors don’t care; it’s that they can’t always foresee how their subsystems will be used.  It’s not that people won’t take responsibility—though that happens—it’s that, when multiple subsystems are at fault, neither vendor wants to take all the blame.  So, understand, measure, and manage the quality assurance process for such systems from end-to-end, including vendor subsystems.  After all, the end user drives just one car, not a dozen, and there is only one brand on the grill—and this is true for the systems you test, too, isn’t it?

— Published


Four Ideas for Improving Test Efficiency

By Rex Black

We have spent the last couple years in an economic downturn, and no one seems to know how much longer it will last. For the foreseeable future, management will exhort testers and test teams to do more with less. A tedious refrain, indeed, but you can improve your chances of weathering this economic storm if you take steps now to address this efficiency fixation. In this blog, I’ll give you four ideas you can implement to improve test efficiency. All can show results quickly, within the next six months. Better yet, none require sizeable investments which you could never talk your managers into making in this current economic situation. By achieving quick, measurable improvements, you will position yourself as a stalwart supporter of the larger organizational cost-cutting goals, always smart in a down economy.

Know Your Efficiency

The first idea—and the foundation for the others—is that you should know your efficiency to know what to improve. All too often, test teams have unclear goals. Without clear goals, how can you measure your efficiency? Efficiency at what? Cost per what? Here are three common goals for test teams:

  • Find bugs
  • Reduce risk
  • Build confidence

You should work with your stakeholders—not just the people on the project, but others in the organization who rely on testing—to determine the right goals for your team. With the goals established, ask yourself, can you measure your efficiency in each area? What is the average cost of detecting and repairing a bug found by your test team, and how does that compare with the cost of a bug found in production? (I describe this method of measuring test efficiency in detail in my article, “Testing ROI: What IT Managers Should Know.”) What risks do you cover in your testing, and how much does it cost on average to cover each risk? What requirements, use cases, user stories, or other specification elements do you cover in your testing, and how much does it cost on average to cover each element? Only by knowing your team’s efficiency can you hope to improve it.
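As a minimal sketch of what those measurements look like in practice (all figures below are invented for illustration), the arithmetic is simple once you have the raw numbers:

    test_team_cost = 180_000.0          # total cost of the test effort (example figure)
    bugs_found_in_test = 450
    risks_covered = 120
    avg_cost_production_bug = 2_400.0   # from support/operations data (example figure)

    cost_per_bug_found = test_team_cost / bugs_found_in_test
    cost_per_risk_covered = test_team_cost / risks_covered
    savings_per_bug = avg_cost_production_bug - cost_per_bug_found

    print(f"Cost per bug found in test:   ${cost_per_bug_found:,.2f}")
    print(f"Cost per risk covered:        ${cost_per_risk_covered:,.2f}")
    print(f"Saving per bug vs production: ${savings_per_bug:,.2f}")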

Institute Risk-Based Testing

I mentioned risk reduction as a key testing goal. Many people agree, but few people can speak objectively about how they serve this goal. However, those people who have instituted analytical risk-based testing strategies can. Let me be clear on what I mean by analytical risk-based testing. Risk is the possibility of a negative or undesirable outcome, so a quality risk is a possible way that something about your organization’s products or services could negatively affect customer, user, or stakeholder satisfaction. Through testing, we can reduce the overall level of quality risk. Analytical risk-based testing uses an analysis of quality risks to prioritize tests and allocate testing effort. We involve key technical and business stakeholders in this process. Risk-based testing provides a number of efficiency benefits:

  • You find the most important bugs earlier in test execution, reducing risk of schedule delay.
  • You find more important bugs than unimportant bugs, reducing the time spent chasing trivialities.
  • You provide the option of reducing the test execution period in the event of a schedule crunch without accepting unduly high risks.

You can learn more about how to implement risk-based testing in Chapter 3 of my book, Advanced Software Testing: Volume II. You can also read the article I co-wrote with an RBCS client, CA, on our experiences with piloting risk-based testing at one of their locations.
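As a minimal sketch of the mechanics (the risk items and ratings below are invented), stakeholders rate each quality risk for likelihood and impact, the product of the two drives priority, and test effort is allocated from the top of the list down:

    quality_risks = [
        # (risk item, likelihood 1-5, impact 1-5)
        ("Payment calculation errors", 4, 5),
        ("Slow search response under load", 3, 4),
        ("Report layout glitches", 4, 2),
        ("Rarely used admin export fails", 2, 2),
    ]

    prioritized = sorted(quality_risks, key=lambda r: r[1] * r[2], reverse=True)

    for item, likelihood, impact in prioritized:
        print(f"{likelihood * impact:>3}  {item}")
    # Tests for the highest-priority risks are designed and executed first,
    # which is what delivers the "important bugs earlier" benefit listed above.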

Tighten Up Your Test Set

With many of our clients, RBCS assessments reveal that they are dragging around heavy, unnecessarily large regression test sets. Once a test is written, it goes into the regression test set, never to be removed. However, in the absence of complete test automation, this leads to inefficient, prolonged test execution periods. The scope of the regression test work will increase with each new feature, each bug fix, each patch, eventually overwhelming the team. Once you have instituted risk-based testing, you can establish traceability between risks and test cases, identifying those risks which you are over-testing. You can then remove or consolidate certain tests. You can also apply fundamental test design principles to identify redundant tests. We had one client that, after taking our Test Engineering Foundation course, applied the ideas in that course to reduce the regression test set from 800 test cases to 300 test cases. Since regression testing made up most of the test execution effort for this team, you can imagine the kind of immediate efficiency gain that occurred.
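Here is a small sketch of how risk-to-test traceability can flag over-tested risks as pruning candidates; the mapping and the threshold are invented, and in a real project the threshold would follow from the risk priority rather than a single flat number:

    from collections import Counter

    # Each regression test is traced to the quality risk it covers.
    test_to_risk = {
        "TC-001": "payment-calculation",
        "TC-002": "payment-calculation",
        "TC-003": "payment-calculation",
        "TC-004": "search-performance",
        "TC-005": "report-layout",
        "TC-006": "payment-calculation",
    }

    coverage = Counter(test_to_risk.values())
    TARGET_TESTS_PER_RISK = 2   # illustrative threshold

    for risk, count in sorted(coverage.items()):
        if count > TARGET_TESTS_PER_RISK:
            print(f"Risk '{risk}' has {count} regression tests; review for redundancy.")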

Introduce Lightweight Test Automation

I mentioned complete test automation above. That’s sometimes seen as an easy way to improve test efficiency. However, for many of our clients, that approach proves chimerical. The return on the test automation investment some of our clients see is low, zero, or even negative. Even when the return is strongly positive, for many traditional forms of GUI-based test automation, the payback period is too far in the future and the initial investment is too high. However, there are cheap, lightweight approaches to test automation. We helped one of our clients, Arrowhead Electronic Healthcare, create a test automation tool called a dumb monkey. It was designed and implemented using open source tools, so the tool budget was zero. It required a total of 120 person-hours to create. Within four months, it had already saved almost three times that much in testing effort. For more information, see the article I co-wrote with our client.
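To give a feel for how lightweight such a tool can be, here is a hypothetical dumb-monkey sketch in Python; it is not the Arrowhead tool, just an illustration of the pattern of throwing large volumes of random input at the system under test and checking only that nothing blows up.

    import random
    import string

    def parse_quantity(text):
        """Hypothetical stand-in for a function in the system under test."""
        return int(text.strip() or "0")

    def random_text(max_len=12):
        alphabet = string.ascii_letters + string.digits + string.punctuation + " "
        return "".join(random.choice(alphabet) for _ in range(random.randint(0, max_len)))

    def monkey_test(iterations=10_000, seed=1234):
        random.seed(seed)              # reproducible, so failures can be replayed
        unexpected = 0
        for _ in range(iterations):
            text = random_text()
            try:
                parse_quantity(text)
            except ValueError:
                pass                   # rejecting bad input is acceptable behavior
            except Exception as exc:   # anything else is a potential bug
                unexpected += 1
                print(f"Unexpected {type(exc).__name__} for input {text!r}")
        print(f"{iterations} random inputs, {unexpected} unexpected failures")

    monkey_test()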

Conclusion

In this blog, I’ve shown you four ideas you can implement quickly to improve your efficiency.  Start by clearly defining your team’s goals, then derive efficiency metrics for those goals and measure your team now.  With that baseline measurement, move on to put risk-based testing in place, ensuring the right focus for your effort.  Next, apply risk-based testing and other test fundamentals to reduce the overall size of your test set while not increasing the level of regression risk on release.  Finally, use dumb monkeys and other lightweight test automation tools to tackle manual, repetitive test tasks, saving your people for other more creative tasks.  With these changes in place, measure your efficiency again six months or a year from now.  If you are like most of our clients, you’ll have some sizeable improvements to show off for your managers.

— Published


Seven Steps to Reducing Software Security Risks

By Rex Black

Software security is an important concern, and it’s not just for operating system and network vendors.  If you’re working at the application layer, your code is a target.  In fact, the trend in software security exploits is away from massive, blunt-force attacks on the Internet or IT infrastructure and towards carefully crafted, criminal attacks on specific applications to achieve specific damage, often economic.

How can you respond effectively? While the threat is large and potentially intimidating, it turns out that there is a straightforward seven-step process that you can apply to reduce your software’s exposure to these attacks. 

  1. Assess security risks to focus your improvements.
  2. Test the software for security failures.
  3. Analyze the software for security bugs.
  4. Evaluate patterns in security risks, failures, and bugs.
  5. Repair the bugs with due care for regression.
  6. Examine the real-world results by monitoring important security metrics.
  7. Institutionalize the successful process improvements. 

Carefully following this process will allow your organization to improve your software security in a way which is risk-based, thoroughly tested, data-driven, prudent, and continually re-aligned with real-world results.  You can read more about this topic in my article, Seven Steps to Reduce Software Security Risk.

— Published


Top Three Business Cases for Software Test Automation

By Rex Black

Businesses spend millions of dollars annually on software test automation.  A few years back, while doing some work in Israel (birthplace of the Mercury toolset), someone told me that Mercury Interactive had a billion dollars in a bank in Tel Aviv.  Probably an urban legend, but who knows? Mercury certainly made a lot of money selling tools over the years, which is why HP bought them. 

That's nice for Mercury and Hewlett Packard, but so what, right?  I don't know about your company, but none of RBCS' clients buy software testing tools so that they can help tool vendors make money.  Our clients buy software testing tools because they expect those tools will help them make money.

Unfortunately, at many organizations there's a real lack of clarity about the business case for software test automation.  Without a clear business case, there's no clear return on investment. This leads to a lack of clear success (or failure) of the automation effort.  Efforts that should be cancelled continue too long, and efforts that should continue are cancelled. 

So, one of the pre-requisites of software test automation success is a clear business case, leading to clear measures of success.  Here are the top three business cases for software test automation that we've observed with our clients:

  1. Automation is the only practical way to address some critical set of quality risks.  The two most common examples are reliability and performance, which generally cannot be tested manually.
  2. Automation is used to shorten test execution time.  This is particularly true in highly competitive situations where time-to-market is critical, and at the same time customers have a low tolerance for quality problems.
  3. Automation is used to reduce the effort required to achieve a desired level of quality risk.  This is often the case in large, complex products where regression, especially regression across interconnected features, is considered unacceptable.

This list is not exhaustive, and, in some cases, two or more reasons may apply.  One of the particularly nice aspects of each of these three business cases is that the return on investment is clearly quantifiable.  That makes achieving success in one or more of these areas easy to measure and to demonstrate.  It also makes it easy to determine which tests should be automated and which should not.
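As a rough sketch of how quantifiable that return can be, here is the arithmetic for the third business case with invented numbers; the structure of the calculation is what matters, not the specific figures.

    build_cost_hours = 400            # effort to develop the automated suite
    maintenance_hours_per_release = 20
    manual_execution_hours = 120      # manual effort per regression cycle
    automated_execution_hours = 8     # triage/analysis effort per automated cycle
    releases = 10                     # planned releases the suite will serve

    automation_cost = build_cost_hours + releases * (
        maintenance_hours_per_release + automated_execution_hours
    )
    manual_cost = releases * manual_execution_hours
    savings = manual_cost - automation_cost

    print(f"Automated approach: {automation_cost} hours over {releases} releases")
    print(f"Manual approach:    {manual_cost} hours over {releases} releases")
    print(f"Savings: {savings} hours, ROI: {savings / automation_cost:.0%}")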

— Published


Making Software Testing Go Faster

By Rex Black

We often want--and need--testing to go more quickly, don't we?  So, here's a list of organizational behaviors and attributes that tend to accelerate the test process. Encourage these activities and values among your peers, and jump at the opportunities to perform them yourself where appropriate.

Testing throughout the project. I use the phrase testing throughout the project in a three-dimensional sense. The first dimension involves time: in order to be properly prepared, and to help contain bugs as early as possible, the test team must become involved when the project starts, not at the end. The second dimension is organizational: the more a company promotes open communication between the test organization and the other teams throughout the company, the better the test group can align its efforts with the company’s needs. The third dimension is cultural: in a mature company, testing as an entity, a way of mitigating risk, and a business-management philosophy permeates the development projects. I also call this type of testing pervasive testing.

Smart use of cheaper resources. One way to do this is to use test technicians. You can get qualified test technicians from the computer-science and engineering schools of local universities and colleges as well as from technical institutes. Try to use these employees to perform any tasks that do not specifically require a test engineer’s level of expertise. Another way to do this is to use distributed and outsourced testing.

Appropriate test automation. The more automated the test system, the less time it takes to run the tests. Automation also allows unattended test execution overnight and over weekends, which maximizes utilization of the system under test and other resources, leaving more time for engineers and technicians to analyze and report test failures. You should apply a careful balance, however. Generating a good automated test suite can take many more hours than writing a good manual test suite. Developing a completely automated test management system is a large endeavor. If you don’t have the running room to thoroughly automate everything you’d like before test execution begins, you should focus on automating a few simple tools that will make manual testing go more quickly. In the long run, automation of test execution is typically an important part of dealing with regression risk during maintenance.

Good test system architecture. Spending time in advance understanding how the test system should work, selecting the right tools, ensuring the compatibility and logical structure of all the components, and designing for subsequent maintainability really pay off once test execution starts. The more intuitive the test system, the more easily testers can use it.

Clearly defined test-to-development handoff processes. Let's illustrate this with an example. Two closely related activities, bug isolation and debugging, occur on opposite sides of the fence between test and development. On the one hand, test managers must ensure that test engineers and technicians thoroughly isolate every bug they find and write up those isolation steps in the bug report. Development managers, on the other hand, must ensure that their staff does not try to involve test engineers and technicians, who have other responsibilities, in debugging activities.

Clearly defined development-to-test handoff processes. The project team must manage the release of new hardware and software revisions to the test group. As part of this process, the following conditions should be met:

  • All software is under revision control.
  • All test builds come from revision-controlled code.
  • Consistent, clear release naming nomenclatures exist for each major system.
  • A regular, planned release schedule exists and is followed.
  • A well-understood, correct integration strategy is developed and followed during the test-planning stages.

Automated smoke tests run against test releases, whether in the development, build (or release engineering), or testing environments (or all three), are also a good idea to ensure that broken test releases don’t block test activities for hours or even days at the beginning of a test cycle.
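For illustration, here is what a minimal automated smoke test might look like using pytest; the modules and the database check are placeholders for whatever your release actually installs, and the point is simply that a few fast checks run against every new build before the full test cycle starts.

    import importlib
    import sqlite3

    import pytest

    CRITICAL_MODULES = ["json", "sqlite3"]   # stand-ins for your product's components

    @pytest.mark.parametrize("module_name", CRITICAL_MODULES)
    def test_release_components_importable(module_name):
        # If part of the build is missing or broken, fail fast here rather
        # than hours into the test cycle.
        importlib.import_module(module_name)

    def test_database_connectivity():
        # Representative "can we do anything at all" check; replace with a real
        # connection to the test environment's database.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE smoke (ok INTEGER)")
        conn.execute("INSERT INTO smoke VALUES (1)")
        assert conn.execute("SELECT ok FROM smoke").fetchone() == (1,)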

Another handoff occurs when exit and entry criteria for phases result in the test team commencing or ending their testing work on a given project. The more clearly defined and mutually accepted these criteria are, the more smoothly and efficiently the testing will proceed.

A clearly defined system under test. If the test team receives clear requirements and design specifications when developing tests and clear documentation while running tests, it can perform both tasks more effectively and efficiently. When the project management team commits to and documents how the product is expected to behave, you and your intrepid team of testers don’t have to waste time trying to guess—or dealing with the consequences of guessing incorrectly. In a later post, I'll give you some tips on operating without clear requirements, design specifications, and documentation when the project context calls for it.

Continuous test execution. Related to, and enabled by, test automation, this type of execution involves setting up test execution so that the system under test runs as nearly continuously as possible. This arrangement can entail some odd hours for the test staff, especially test technicians, so everyone on the test team should have access to all appropriate areas of the test lab.

Continuous test execution also implies not getting blocked. If you’re working on a 1-week test cycle, being blocked for just 1 day means that 20 percent of the planned tests for this release will not happen, or will have to happen through extra staff, overtime, weekend work, and other undesirable methods. Good release engineering and management practices, including smoke-testing builds before installing them in the test environment, can be a big part of this. Another part is having an adequate test environment so that testers don’t have to queue to run tests that require some particular configuration or to report test results.

Adding test engineers.  Fred Brooks once observed that “adding manpower to a late software project makes it later,” a statement that has become known as Brooks’s Law. Depending on the ramp-up time required for test engineers in your projects, this law might not hold true as strongly in testing as it does in other areas of software and hardware engineering. Brooks reasoned that as you add people to a project, you increase the communication overhead, burden the current development engineers with training the new engineers, and don’t usually get the new engineers up to speed soon enough to do much good. In contrast, a well-designed behavioral test system reflects the (ideally) simpler external interfaces of the system under test, not its internal complexities. In some cases, this can allow a new engineer to contribute within a couple of weeks of joining the team.

My usual rule of thumb is that, if a schedule crisis looms six weeks or more in my future, I might be able to bring in a new test engineer in time to help. However, I have also added test engineers on the day system test execution started, and I once joined a laptop development project as the test manager about two weeks before the start of system test execution. In both cases, the results were good. (Note, though, that I am not contradicting myself. Testing does proceed most smoothly when the appropriate levels of test staffing become involved early, but don’t let having missed the opportunity to do that preclude adding more staff.) Talk to your test engineers to ascertain the amount of time that’ll be required, if any, to ramp up new people, and then plan accordingly.

While these software test process accelerators are not universally applicable--or even universally effective--consider them when your managers tell you that you need to make the testing go faster.

— Published



Copyright © 2017 Rex Black Consulting Services.
All Rights Reserved.
PMI is a registered mark of the Project Management Institute, Inc.