Mistakes lead to the introduction of defects (also called bugs). Like all human beings, I can make a mistake at any point in time, no matter what I'm working on. So it is on projects: a business analyst can put a defect into a requirements specification, a tester can put a defect into a test case, a programmer can put a defect into the code, a technical writer can put a defect into the user guide, and so forth. Any work product can and often will have defects, because any worker can and will make mistakes!
Now, I like to drink good wine, and I’ve collected bottles from around the world on my travels. Some I’ve stashed away for years, waiting for the right time to drink them, which always comes sooner or later. Some of these bottles have gotten a lot more valuable over the years. Odd as this will sound, in just a few ways, bugs are like wine. They definitely get more expensive with age, taking more effort and thus incurring more cost the longer they are in the system. Also, sooner or later most bugs need to be fixed. However, it’s definitely not a good idea to leave bugs lying around—or perhaps crawling around—in a cellar, and they’re certainly not appetizing!
To celebrate completion of the update to Advanced Software Testing: Volume I, here is an excerpt of Chapter 3. This is the central chapter of the book, addressing test design techniques.
We start with the most basic of specification-based test design techniques, equivalence partitioning.
Conceptually, equivalence partitioning is about testing various groups that are expected to be handled the same way by the system and exhibit similar behavior. Those groups can be inputs, outputs, internal values, calculations, or time values, and should include valid and invalid groups. We select a single value from each equivalence partition, and this allows us to reduce the number of tests. We can calculate coverage by dividing the number of equivalence partitions tested by the number identified, though generally the goal is to achieve 100% coverage by selecting at least one value from each partition.
This technique is universally applicable at any test level, in any situation where we can identify the equivalence partitions. Ideally, those partitions are independent, though some amount of interaction between input values does not preclude the use of the technique. This technique is also very useful in constructing smoke tests, though testing of some of the less-risky partitions frequently is omitted in smoke tests. This technique will find primarily functional defects where data is processed improperly in one or more partitions. The key to this technique is to take care that the values in each equivalence partition are indeed handled the same way; otherwise, you will miss potentially important test values.
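To make the idea concrete, here is a minimal Python sketch using a hypothetical ticket-pricing rule (the function, its partitions, and their boundaries are illustrative assumptions, not drawn from the excerpt): we select one representative value from each valid and invalid partition, and coverage is the fraction of identified partitions we tested.

```python
def ticket_price(age):
    """Hypothetical system under test: prices a ticket by age group."""
    if age < 0:
        raise ValueError("invalid age")
    if age <= 12:
        return 5   # child
    if age <= 64:
        return 10  # adult
    return 7       # senior

# One representative value per equivalence partition, with the
# expected behavior for that partition.
partitions = {
    "invalid (age < 0)": (-3, ValueError),
    "child (0-12)":      (8, 5),
    "adult (13-64)":     (30, 10),
    "senior (65+)":      (70, 7),
}

tested = 0
for name, (value, expected) in partitions.items():
    try:
        result = ticket_price(value)
    except ValueError:
        result = ValueError
    assert result == expected, f"partition {name} failed"
    tested += 1

# Coverage = partitions tested / partitions identified.
coverage = tested / len(partitions)
print(f"Equivalence partition coverage: {coverage:.0%}")  # 100%
```

Note that if, say, ages 13 and 30 were actually handled differently by the real system, the "adult" partition would be wrongly drawn, and a single representative value would miss the difference; that is the pitfall the paragraph above warns about.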
Since it is not possible to test everything, it is necessary to pick a subset of the overall set of tests to be run. Read this article to discover how quality risk analysis can help you focus the test effort.
This is an excerpt from my book, Expert Test Manager, written with James Rommens and Leo van der Aalst. I hope it helps you think more clearly about the test strategies you use.
A test policy contains the mission and objectives of testing along with metrics and goals associated with the effectiveness, efficiency, and satisfaction with which we achieve those objectives. In short, the policy defines why we test. While it might also include some high-level description of the fundamental test process, in general the test policy does not talk about how we test.
The document that describes how we test is the test strategy. In the test strategy, the test group explains how the test policy will be implemented. This document should be a general description that spans multiple projects. While the test strategy can describe how testing is done for all projects, organizations might choose to have separate documents for various types of projects. For example, an organization might have a sequential lifecycle test strategy, an Agile test strategy, and a maintenance test strategy.
[Note: This is an excerpt from Agile Testing Foundations: An ISTQB Foundation Level Agile Tester Guide, by Rex Black, Marie Walsh, Gerry Coleman, Bertrand Cornanguer, Istvan Forgacs, Kari Kakkonen, and Jan Sabak, published July 2017. Kari Kakkonen wrote this selection. The authors are all members of the ISTQB Working Group that wrote the ISTQB Agile Tester Foundation syllabus.]
The traditional way of developing code is to write the code first, and then test it. Some of the major challenges of this approach are that testing is generally conducted late in the process and it is difficult to achieve adequate test coverage. Test-first practices can help solve these challenges. In this environment, tests are designed first, in a collaboration between business stakeholders, testers, and developers. Their knowledge of what will be tested helps developers write code that fulfils the tests. A test-first approach allows the team to focus on and clarify the expressed needs through a discussion of how to test the resulting code. Developers can use these tests to guide their development. Developers, testers, and business stakeholders can use these tests to verify the code once it is developed.
A number of test-first practices have been created for Agile projects, as mentioned in section 2.1 of this book. They tend to be called X Driven Development, where X stands for the driving force for the development. In Test Driven Development (TDD), the driving force is testing. In Acceptance Test-Driven Development (ATDD), it is the acceptance tests that will verify the implemented user story. In Behaviour-Driven Development (BDD), it is the behaviour of the software that the user will experience. Common to all these approaches is that the tests are written before the code is developed, i.e., they are test-first approaches. The approaches are usually better known by their acronyms. This subsection describes these test-first approaches; information on how to apply them is contained in section 3.3.
Test-Driven Development was the first of these approaches to appear. It was introduced as one of the practices within Extreme Programming (XP) in the 1990s. It has been practiced for two decades and has been adopted by many software developers, in Agile and traditional projects. However, it is also a good example of an Agile practice that is not used in all projects. One limitation of TDD is that if the developer misunderstands what the software is to do, the unit tests will embody the same misunderstanding, giving passing results even though the software is not working properly. There is some controversy over whether TDD delivers the benefits it promises. Some, such as Jim Coplien, even suggest that unit testing is mostly waste.
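As a concrete illustration of the red-green cycle that TDD prescribes, here is a minimal Python sketch built around a hypothetical `is_leap_year` function (the example is ours, not from the book): the tests are written first, and only then is just enough code written to make them pass.

```python
# Step 1 (red): write the tests first. Running them before the
# function exists would fail, which is the point of TDD.
def test_is_leap_year():
    assert is_leap_year(2020) is True   # divisible by 4
    assert is_leap_year(1900) is False  # century year, not divisible by 400
    assert is_leap_year(2000) is True   # divisible by 400
    assert is_leap_year(2019) is False  # ordinary year

# Step 2 (green): write just enough code to make the tests pass.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 3: run the tests; once they pass, refactor with confidence.
test_is_leap_year()
print("all TDD tests pass")
```

The limitation mentioned above is visible here too: if the developer misremembered the century rule and encoded the wrong expectation in the test, both test and code would agree, and the suite would pass anyway.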
TDD is mostly for unit testing by developers. Agile teams soon came up with the question: What if we could have a way to get the benefits of test-first development for acceptance tests and higher level testing in general? And thus Acceptance Test-Driven Development was born. (There are also other names for similar higher-level test-first methods; for example, Specification by Example (SBE) from Gojko Adzic.) Later, Dan North wanted to emphasize the behaviours from a business perspective, leading him to give his technique the name Behaviour-Driven Development. ATDD and BDD are in practice very similar concepts.
Let’s look at these three test-first techniques, TDD, ATDD, and BDD, more closely in the following subsections.
Recorded March 19, 2019
Recorded January 29, 2019
Recorded October 9, 2017
Recorded May 18, 2017
Recorded February 16, 2017
Recorded April 7, 2016
Length: 0h 38m 45s
If you’ve been testing for any length of time, you know that the number of possible test cases is enormous if you try to test all possible combinations of inputs, configuration values, types of data, and so forth. It’s like the mythical monster, the many-headed Hydra, which would sprout two or more new heads for each head that was cut off. Two simple approaches to dealing with combinatorial explosions such as this are equivalence partitioning and boundary value analysis, but those techniques don’t check for interactions between factors. A reasonable, manageable way to test combinations is called pairwise testing, but to do it you’ll need a tool. In this inaugural One Key Idea session, Rex will demonstrate the use of a free tool, ACTS, built by the US NIST and available for download worldwide. We can’t promise to turn you into Hercules, but you will definitely walk away able to slay the combinatorial Hydra.
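To show why pairwise testing tames the combinatorial Hydra, here is a rough Python sketch for three parameters with three values each (the parameter names and the simple greedy algorithm are our illustrative assumptions; ACTS uses more sophisticated covering-array algorithms): exhaustive testing needs every full combination, while a suite that merely covers every pair of values gets by with far fewer tests.

```python
from itertools import combinations, product

# Illustrative test parameters; ACTS handles far larger models.
params = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os":      ["Windows", "macOS", "Linux"],
    "locale":  ["en", "de", "ja"],
}
names = list(params)

# Every pair of values, across every pair of parameters, must be covered.
required = set()
for p1, p2 in combinations(names, 2):
    for v1 in params[p1]:
        for v2 in params[p2]:
            required.add((p1, v1, p2, v2))

def pairs_of(row):
    """All parameter-value pairs exercised by one test row."""
    return {(names[i], row[i], names[j], row[j])
            for i, j in combinations(range(len(names)), 2)}

# Greedy selection: repeatedly take the candidate test that covers
# the most still-uncovered pairs.
candidates = list(product(*params.values()))
suite, uncovered = [], set(required)
while uncovered:
    best = max(candidates, key=lambda row: len(pairs_of(row) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(f"exhaustive: {len(candidates)} tests, pairwise: {len(suite)} tests")
```

For this toy model, exhaustive testing requires 27 combinations, while the pairwise suite covers all 27 value pairs with roughly a third as many tests, and the gap widens dramatically as parameters and values are added.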
Length: 0h 53m 46s
If you are testing a simple mobile app, you may find it relatively easy to find representative test data. However, what if you are testing enterprise scale applications? In the enterprise data center, one hundred or more applications of various sizes, complexity, and criticality co-exist, operating on various data repositories, in some cases shared data repositories. In some cases, disparate data repositories hold related data, and the ability to test integration across applications that access these data sets is critical. In this keynote speech, Rex Black will talk about the challenges facing his clients as they deal with these testing problems. You’ll go away with a better understanding of the nature of the challenges, as well as ideas on how to handle them, grounded in lessons Rex has learned in over 30 years of software engineering and testing.
Length: 1h 3m 33s
If you are testing a simple mobile app, you may find it relatively easy to find representative test data. However, what if you are testing enterprise scale applications? In the enterprise data center, one hundred or more applications of various sizes, complexity, and criticality co-exist, operating on various data repositories, in some cases shared data repositories. In some cases, disparate data repositories hold related data, and the ability to test integration across applications that access these data sets is critical. In this webinar, Rex Black will talk about the challenges facing his clients as they deal with these testing problems. You’ll go away with a better understanding of the nature of the challenges, as well as ideas on how to handle them, grounded in lessons Rex has learned in over 30 years of software engineering and testing.
Length: 0h 10m 51s
A typical objective of testing is finding defects, but how effectively do you achieve that objective? Fortunately, there’s a way to measure that, using a metric called defect detection effectiveness. In this One Key Idea webinar, Rex will explain how to use two simple, easy-to-gather metrics to calculate defect detection effectiveness, and also share some insights into how to use that metric to set a baseline for future improvements and to benchmark yourself against other companies.
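As a quick sketch of the arithmetic (the numbers here are made up for illustration), defect detection effectiveness is simply the defects the test team found, divided by the total defects that were present, i.e., those found in testing plus those that escaped to production:

```python
def defect_detection_effectiveness(found_in_test, found_after_release):
    """DDE = defects found by testing / all defects present."""
    total = found_in_test + found_after_release
    return found_in_test / total

# Example: 90 defects found during testing, 10 reported by customers
# after release.
dde = defect_detection_effectiveness(90, 10)
print(f"DDE = {dde:.0%}")  # 90%
```

Both inputs are easy to gather from a defect tracking system, which is what makes this metric practical as a baseline for improvement and for benchmarking.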
The Advanced Test Automation Engineer Boot Camp, created by Rex Black, past President of the International Software Testing Qualifications Board (ISTQB), past President of the American Software Testing Qualifications Board (ASTQB), and co-author of a number of ISTQB syllabi, is ideal for testers and test teams preparing for certification on a short timeframe and under budget constraints.
The Foundation Business Analyst Boot Camp, created by Rex Black, past President of the International Software Testing Qualifications Board (ISTQB), past President of the American Software Testing Qualifications Board (ASTQB), and co-author of a number of ISTQB syllabi, is ideal for testers and test teams preparing for certification on a short timeframe and under budget constraints.