Test design is the process of transforming test objectives into test conditions and test cases.
To celebrate completion of the update to Advanced Software Testing: Volume I, here is an excerpt of Chapter 3. This is the central chapter of the book, addressing test design techniques.
We start with the most basic of specification-based test design techniques, equivalence partitioning.
Conceptually, equivalence partitioning is about testing various groups that are expected to be handled the same way by the system and exhibit similar behavior. Those groups can be inputs, outputs, internal values, calculations, or time values, and should include valid and invalid groups. We select a single value from each equivalence partition, and this allows us to reduce the number of tests. We can calculate coverage by dividing the number of equivalence partitions tested by the number identified, though generally the goal is to achieve 100% coverage by selecting at least one value from each partition.
This technique is universally applicable at any test level, in any situation where we can identify the equivalence partitions. Ideally, those partitions are independent, though some amount of interaction between input values does not preclude the use of the technique. This technique is also very useful in constructing smoke tests, though testing of some of the less risky partitions is frequently omitted in smoke tests. This technique will find primarily functional defects where data is processed improperly in one or more partitions. The key to this technique is to take care that the values in each equivalence partition are indeed handled the same way; otherwise, you will miss potentially important test values.
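As a minimal illustration (not from the book), here is a sketch of how one representative value per partition might be selected and partition coverage calculated. The ticket_price() function, its age bands, and its prices are invented for the example.

```python
# A hypothetical ticket_price() function that charges by age band:
# child (0-12), adult (13-64), senior (65-120); anything else is invalid.

def ticket_price(age: int) -> float:
    if age < 0 or age > 120:
        raise ValueError("invalid age")
    if age <= 12:
        return 5.00
    if age <= 64:
        return 10.00
    return 7.50

# One representative value per equivalence partition, valid and invalid alike.
partitions = {
    "invalid: negative age":   (-3, ValueError),
    "valid: child (0-12)":     (8, 5.00),
    "valid: adult (13-64)":    (35, 10.00),
    "valid: senior (65-120)":  (70, 7.50),
    "invalid: above 120":      (200, ValueError),
}

tested = 0
for name, (value, expected) in partitions.items():
    try:
        result = ticket_price(value)
        assert result == expected, f"{name}: got {result}, expected {expected}"
    except ValueError:
        assert expected is ValueError, f"{name}: unexpected ValueError"
    tested += 1

# Coverage = equivalence partitions tested / equivalence partitions identified.
print(f"Partition coverage: {tested}/{len(partitions)} = {tested / len(partitions):.0%}")
```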
This article is an excerpt from Rex Black's recently-published book, Advanced Software Testing: Volume 1. This is a book for test analysts and test engineers. It is especially useful for ISTQB Advanced Test Analyst certificate candidates, but contains detailed discussions of test design techniques that any tester can, and should, use. In this first article in a series of excerpts, Black starts by discussing the related concepts of decision tables and cause-effect graphs.
Equivalence partitioning and boundary value analysis are very useful techniques. They are especially useful when testing input field validation at the user interface. However, lots of the testing that we do as test analysts involves testing the business logic that sits underneath the user interface. We can use boundary values and equivalence partitioning on business logic, too, but three additional techniques (decision tables, use cases, and state-based testing) will often prove handier and more effective. Read this article to learn more about these powerful techniques.
This software testing article was originally published in the June 2009 edition of Testing Experience Magazine.
In this article, we look at state-based testing. State-based testing is ideal when we have sequences of events that occur and conditions that apply to those events, and the proper handling of a particular event/condition situation depends on the events and conditions that have occurred in the past. In some cases, the sequences of events are potentially infinite, which of course exceeds our testing capabilities, but we want a test design technique that allows us to handle arbitrarily long sequences of events. Read this article to learn more about state-based testing.
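To make the idea concrete, here is a small sketch (my own illustration, not from the article) of state-based tests for a hypothetical Order state machine; the states, events, and coverage goal are assumptions chosen for the example.

```python
# A hypothetical Order with states new -> paid -> shipped, and a cancel event
# allowed only before shipment. The tests cover every valid transition once,
# plus one invalid event per reachable state, since correct handling of an
# event depends on what has happened before.

class Order:
    VALID = {
        ("new", "pay"): "paid",
        ("new", "cancel"): "cancelled",
        ("paid", "ship"): "shipped",
        ("paid", "cancel"): "cancelled",
    }

    def __init__(self):
        self.state = "new"

    def handle(self, event: str) -> None:
        key = (self.state, event)
        if key not in self.VALID:
            raise RuntimeError(f"event '{event}' not allowed in state '{self.state}'")
        self.state = self.VALID[key]

def test_valid_transitions():
    # Each event sequence starts from the initial state and covers one path.
    for sequence, final_state in [
        (["pay"], "paid"),
        (["cancel"], "cancelled"),
        (["pay", "ship"], "shipped"),
        (["pay", "cancel"], "cancelled"),
    ]:
        order = Order()
        for event in sequence:
            order.handle(event)
        assert order.state == final_state

def test_invalid_events():
    shipped = Order()
    shipped.handle("pay")
    shipped.handle("ship")
    for order, event in [(Order(), "ship"), (shipped, "cancel")]:
        try:
            order.handle(event)
            assert False, f"'{event}' should be rejected in state '{order.state}'"
        except RuntimeError:
            pass

test_valid_transitions()
test_invalid_events()
print("state-based tests passed")
```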
This article was originally published in Testing Experience Magazine.
The following is an excerpt from my recently-published book, Advanced Software Testing: Volume 1. This is a book for test analysts and test engineers. It is especially useful for ISTQB Advanced Test Analyst certificate candidates, but contains detailed discussions of test design techniques that any tester can—and should—use. In this third article in a series of excerpts, I discuss the application of use cases to testing workflows.
At the start of this series, I said we would cover three techniques that would prove useful for testing business logic, often more useful than equivalence partitioning and boundary value analysis. First, we covered decision tables, which are best in transactional testing situations. Next, we looked at state-based testing, which is ideal when we have sequences of events that occur and conditions that apply to those events, and the proper handling of a particular event/condition situation depends on the events and conditions that have occurred in the past. In this article, we’ll cover use cases, where preconditions and postconditions help to insulate one workflow from the previous workflow and the next workflow. With these three techniques in hand, you have a set of powerful techniques for testing the business logic of a system.
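As a rough illustration of the use case idea (not an excerpt from the book), the following sketch tests a hypothetical "withdraw cash" use case, with explicit preconditions and postconditions insulating each test from the others; the Account class and the amounts are invented.

```python
# A hypothetical "withdraw cash" use case. Preconditions and postconditions
# bracket each workflow so every test stands alone.

class Account:
    def __init__(self, balance: float):
        self.balance = balance

    def withdraw(self, amount: float) -> bool:
        if amount <= 0 or amount > self.balance:
            return False          # extension (exception) flow: request refused
        self.balance -= amount    # main flow: cash dispensed
        return True

def test_withdraw_main_flow():
    account = Account(balance=100.00)   # precondition: account exists
    assert account.balance >= 40.00     # precondition: sufficient funds
    assert account.withdraw(40.00) is True
    assert account.balance == 60.00     # postcondition: balance reduced by amount

def test_withdraw_insufficient_funds_extension():
    account = Account(balance=10.00)    # precondition: funds below requested amount
    assert account.withdraw(40.00) is False
    assert account.balance == 10.00     # postcondition: balance unchanged

test_withdraw_main_flow()
test_withdraw_insufficient_funds_extension()
print("use case tests passed")
```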
This article was originally published in Testing Experience Magazine.
Many of you are probably familiar with basic test techniques like equivalence partitioning and boundary value analysis. In this article, Rex presents an advanced technique for black-box testing called domain analysis. Domain analysis is an analytical way to deal with the interaction of factors or variables within the business logic layer of a program. It is appropriate when you have some number of factors to deal with. These factors might be input fields, output fields, database fields, events, or conditions. They should interact to create two or more situations in which the system will process data differently. Those situations are the domains. In each domain, the value of one or more factors influences the values of other factors, the system's outputs, or the processing performed.
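Here is a simplified sketch of the idea (not from the article), assuming a hypothetical shipping-fee rule in which order total and membership status interact to form domains; a full domain analysis would also probe on and off points along each domain boundary.

```python
# Two factors interact: order total and membership status. Each domain is a
# region where the combination of factors drives different processing; we
# test a representative in-point for each factor combination.

def shipping_fee(order_total: float, is_member: bool) -> float:
    if order_total >= 50.00 and is_member:
        return 0.00     # domain 1: free shipping
    if order_total >= 50.00:
        return 2.50     # domain 2: discounted shipping
    return 6.00         # domain 3: standard shipping (membership is irrelevant here)

cases = [
    # (order_total, is_member, expected_fee, description)
    (80.00, True,  0.00, "large order, member"),
    (80.00, False, 2.50, "large order, non-member"),
    (20.00, True,  6.00, "small order, member"),
    (20.00, False, 6.00, "small order, non-member"),
]

for total, member, expected, name in cases:
    assert shipping_fee(total, member) == expected, name
print("domain tests passed")
```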
In some cases, the number of possible test cases becomes very large due to the number of variables or factors and the potentially interesting test values or options for each variable or factor. For example, suppose you have 10 integer input fields that accept a number from 0 to 99. There are 100^10, or 10^20 (100 billion billion), valid input combinations.
Equivalence class partitioning and boundary value analysis on each field will reduce but not resolve the problem. You have four boundary values for each field: two legal (0 and 99) and two illegal (-1 and 100). The illegal values are easy, because you have only 20 tests for those. However, to test every combination of the legal boundary values across the 10 fields, you have 2^10 = 1,024 test cases. But do you need to do so? And would testing combinations of boundary values necessarily make for good tests? Are there smarter options for dealing with such combinatorial explosions?
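A quick check of the arithmetic in this example (the field count and the 0 to 99 range come from the text; the rest is simple counting):

```python
fields = 10
valid_values_per_field = 100              # 0 through 99
print(valid_values_per_field ** fields)   # 100**10 = 10**20 valid combinations

invalid_boundaries_per_field = 2          # -1 and 100
print(fields * invalid_boundaries_per_field)   # 20 single-field illegal-value tests

valid_boundaries_per_field = 2            # 0 and 99
print(valid_boundaries_per_field ** fields)    # 2**10 = 1,024 legal boundary combinations
```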
This article was originally published in Quality Matters.
The following is an excerpt from Chapter 2 of the new edition of Advanced Software Testing: Volume 3, by Jamie Mitchell and Rex Black. Jamie is the primary author of the material in this chapter.
Structure-based testing uses the internal structure of the system as a test basis for deriving dynamic test cases. In other words, we are going to use information about how the system is designed and built to derive our tests.
The question that should come to mind is why. We have all kinds of specification-based (black-box) testing methods to choose from. Why do we need more? We don’t have time or resources to spare for extra testing, do we?
Well, consider a world-class, outstanding system test team using all black-box and experience-based techniques. Suppose they go through all of their testing, using decision tables, state-based tests, boundary analysis, and equivalence classes. They do exploratory and attack-based testing and error guessing and use checklist-based methods. After all that, have they done enough testing? Perhaps for some. But research has shown that even with all of that testing, and all of that effort, they may have missed a few things.
There is a really good possibility that as much as 70 percent of all of the code that makes up the system might never have been executed once! Not once!
How can that be? Well, a good system is going to have a lot of code that is only there to handle the unusual, exceptional conditions that may occur. The happy path is often fairly straightforward to build—and test. And, if every user were an expert, and no one ever made mistakes, and everyone followed the happy path without deviation, we would not need to worry so much about testing the rest. If systems never went down, networks never failed, databases never got busy, and stuff just didn't happen...
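To illustrate why structural insight matters, here is a small sketch (an invented example, not from the chapter) in which a happy-path, black-box test leaves an error-handling branch unexecuted, and a white-box test is added to reach it; a coverage tool such as coverage.py could confirm which branches ran.

```python
# A hypothetical lookup function whose error-handling branch a happy-path
# test would never reach. Knowing the structure, we add a test that forces
# execution down that branch.

def lookup_price(catalog: dict, sku: str) -> float:
    if sku not in catalog:
        # exception-handling code: easy to miss with happy-path tests only
        raise KeyError(f"unknown SKU: {sku}")
    return catalog[sku]

catalog = {"A100": 9.99}

# Black-box, happy-path test: exercises only the return branch.
assert lookup_price(catalog, "A100") == 9.99

# Structure-based test: targets the otherwise untaken branch.
try:
    lookup_price(catalog, "B200")
    assert False, "expected KeyError for an unknown SKU"
except KeyError:
    pass

print("both branches executed")
```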
The choice of the right techniques is critical to achieving a good return on the test investment. Some tests happen before we can even run the software. Some tests involve analyzing the structure of the system, while others involve analyzing the system’s behavior. Each technique can involve special skills and particular participants, and might appropriately entail the use of tools—or not.
Functional testing focuses on what the system does, rather than how it does it. Non-functional testing is focused on how the system does what it does. Both functional and non-functional testing are black-box tests, being focused on behavior. White-box tests are focused on how the system works internally—i.e., on its structure.
Functional tests can have, as their test basis, the functional requirements. These include both the requirements that are written down in a specification document and those that are implicit. The domain expertise of the tester can also be part of the test basis.
Functional tests will vary by test level or phase. A functional integration test will focus on the functionality of a collection of interfacing modules, usually in terms of the partial or complete user workflows, use cases, operations, or features these modules provide. A functional system test will focus on the functionality of the application as a whole, complete user workflows, use cases, operations, and features. A functional system integration test will focus on end-to-end functionality that spans the entire set of integrated systems.
The test analyst can employ various test techniques during functional testing at any level. All of the techniques discussed in Advanced Software Testing: Volume 1 will be useful.
We should keep in mind that test analyst is a role, not a title, job description, or position. In other words, some people play the role of test analyst exclusively, but others play that role as part of another job. So, when dedicated, professional testers do functional testing, they are test analysts both in position and in role. However, when domain experts do the analysis, design, implementation, or execution of functional tests, they are working as test analysts. When developers do the analysis, design, implementation, or execution of functional tests, they are working as test analysts.
For test analysts in the ISTQB Advanced syllabus, we consider functional and usability testing as concerned with the following quality attributes:
In this excerpt, we’ll look at testing the first three of these attributes, starting with accuracy.
This is an excerpt from my book, Expert Test Manager, written with James Rommens and Leo van der Aalst. I hope it helps you think more clearly about the test strategies you use.
A test policy contains the mission and objectives of testing along with metrics and goals associated with the effectiveness, efficiency, and satisfaction with which we achieve those objectives. In short, the policy defines why we test. While it might also include some high-level description of the fundamental test process, in general the test policy does not talk about how we test.
The document that describes how we test is the test strategy. In the test strategy, the test group explains how the test policy will be implemented. This document should be a general description that spans multiple projects. While the test strategy can describe how testing is done for all projects, organizations might choose to have separate documents for various types of projects. For example, an organization might have a sequential lifecycle test strategy, an Agile test strategy, and a maintenance test strategy.
[Note: This is an excerpt from Agile Testing Foundations: An ISTQB Foundation Level Agile Tester Guide, by Rex Black, Marie Walsh, Gerry Coleman, Bertrand Cornanguer, Istvan Forgacs, Kari Kakkonen, and Jan Sabak, published July 2017. Kari Kakkonen wrote this selection. The authors are all members of the ISTQB Working Group that wrote the ISTQB Agile Tester Foundation syllabus.]
The traditional way of developing code is to write the code first, and then test it. Some of the major challenges of this approach are that testing is generally conducted late in the process and it is difficult to achieve adequate test coverage. Test-first practices can help solve these challenges. In this environment, tests are designed first, in a collaboration between business stakeholders, testers, and developers. Their knowledge of what will be tested helps developers write code that fulfils the tests. A test-first approach allows the team to focus on and clarify the expressed needs through a discussion of how to test the resulting code. Developers can use these tests to guide their development. Developers, testers, and business stakeholders can use these tests to verify the code once it is developed.
A number of test-first practices have been created for Agile projects, as mentioned in section 2.1 of this book. They tend to be called X Driven Development, where X stands for the driving force for the development. In Test-Driven Development (TDD), the driving force is testing. In Acceptance Test-Driven Development (ATDD), it is the acceptance tests that will verify the implemented user story. In Behaviour-Driven Development (BDD), it is the behaviour of the software that the user will experience. Common to all these approaches is that the tests are written before the code is developed, i.e., they are test-first approaches. The approaches are usually better known by their acronyms. This subsection describes these test-first approaches; information on how to apply them is contained in section 3.3.
Test-Driven Development was the first of these approaches to appear. It was introduced as one of the practices within Extreme Programming (XP) back in the 1990s. It has been practiced for two decades and has been adopted by many software developers, in both Agile and traditional projects. However, it is also a good example of an Agile practice that is not used in all projects. One limitation of TDD is that if the developer misunderstands what the software is to do, the unit tests will also include the same misunderstandings, giving passing results even though the software is not working properly. There is some controversy over whether TDD delivers the benefits it promises. Some, such as Jim Coplien, even suggest that unit testing is mostly waste.
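As a rough illustration of the TDD cycle (not from the book), here is a sketch in which the test for a hypothetical fizzbuzz() function is written first and the simplest passing implementation follows; the function and its expected values are invented for the example.

```python
# Step 1: write the test first. Run at this point, with no fizzbuzz() yet,
# it fails; that failing test drives the implementation.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Step 2: write the simplest code that makes the test pass.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 3: run the test again, then refactor with the test as a safety net.
test_fizzbuzz()
print("TDD cycle complete: test written first, now passing")
```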
TDD is mostly for unit testing by developers. Agile teams soon came up with the question: What if we could have a way to get the benefits of test-first development for acceptance tests and higher level testing in general? And thus Acceptance Test-Driven Development was born. (There are also other names for similar higher-level test-first methods; for example, Specification by Example (SBE) from Gojko Adzic.) Later, Dan North wanted to emphasize the behaviours from a business perspective, leading him to give his technique the name Behaviour-Driven Development. ATDD and BDD are in practice very similar concepts.
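As an illustration of the ATDD/BDD flavour (again not from the book, and in plain Python rather than a BDD tool), the sketch below expresses the acceptance criterion of a hypothetical user story as Given/When/Then steps and runs it against an implementation; the story, function, and figures are invented.

```python
# Hypothetical user story: "registered users get a 10% discount".
# The acceptance criteria below would be agreed with business stakeholders
# before the feature is built, then executed against the implementation.

def discounted_total(total: float, is_registered: bool) -> float:
    return round(total * 0.9, 2) if is_registered else total

def test_registered_user_gets_ten_percent_discount():
    # Given a registered user with a basket worth 100.00
    total, is_registered = 100.00, True
    # When the order total is calculated
    result = discounted_total(total, is_registered)
    # Then the user pays 90.00
    assert result == 90.00

def test_guest_pays_full_price():
    # Given a guest user with a basket worth 100.00
    # When the order total is calculated
    # Then no discount is applied
    assert discounted_total(100.00, False) == 100.00

test_registered_user_gets_ten_percent_discount()
test_guest_pays_full_price()
print("acceptance criteria pass")
```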
Let’s look at these three test-first techniques, TDD, ATDD, and BDD, more closely in the following subsections.
Length: 0h 38m 45s
If you’ve been testing for any length of time, you know that the number of possible test cases is enormous if you try to test all possible combinations of inputs, configuration values, types of data, and so forth. It’s like the mythical monster, the many-headed Hydra, which would sprout two or more new heads for each head that was cut off. Two simple approaches to dealing with combinatorial explosions such as this are equivalence partitioning and boundary value analysis, but those techniques don’t check for interactions between factors. A reasonable, manageable way to test combinations is called pairwise testing, but to do it you’ll need a tool. In this inaugural One Key Idea session, Rex will demonstrate the use of a free tool, ACTS, built by the US NIST and available for download worldwide. We can’t promise to turn you into Hercules, but you will definitely walk away able to slay the combinatorial Hydra.
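For a taste of the idea (this is not ACTS, just a naive greedy sketch on made-up configuration factors), the following Python reduces an exhaustive set of combinations to a much smaller pairwise-covering set:

```python
from itertools import combinations, product

# Hypothetical configuration factors invented for the example.
factors = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os":      ["Windows", "macOS", "Linux"],
    "locale":  ["en", "de", "ja"],
}
names = list(factors)

# Every pair of (factor, value) assignments that must appear together at least once.
uncovered = {
    ((a, va), (b, vb))
    for a, b in combinations(names, 2)
    for va in factors[a]
    for vb in factors[b]
}

all_rows = [dict(zip(names, values)) for values in product(*factors.values())]

def pairs_in(row):
    return {((a, row[a]), (b, row[b])) for a, b in combinations(names, 2)}

suite = []
while uncovered:
    # Greedily pick the full combination that covers the most uncovered pairs.
    best = max(all_rows, key=lambda row: len(pairs_in(row) & uncovered))
    suite.append(best)
    uncovered -= pairs_in(best)

print(f"{len(all_rows)} exhaustive combinations reduced to {len(suite)} pairwise tests")
for row in suite:
    print(row)
```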
Length: 0h 19m 51s
In our inaugural One Key Idea session, we looked at how to use pairwise testing to examine combinations of inputs, configuration values, types of data, and the like. This is a great technique when the interaction between these factors is unpredictable. However, in some cases, specific business rules govern these interactions. How can we model these business rules and use that model to develop a reasonable set of tests? Simple: decision tables. In this One Key Idea session, Rex will explain the basics of this fundamental technique. In twenty minutes or less, you’ll learn how to create and use these straightforward, table-based representations of business logic in your daily work.
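As a small taste of the technique (an invented example, not from the session), the sketch below encodes hypothetical loan-approval rules as a decision table and derives one test per rule:

```python
# Hypothetical business rules for loan approval.
def approve_loan(good_credit: bool, income_sufficient: bool) -> str:
    if good_credit and income_sufficient:
        return "approve"
    if good_credit or income_sufficient:
        return "refer to manual review"
    return "reject"

# Decision table: each row is one rule combining condition values with the
# expected action; one test is derived per rule.
#   good_credit  income_sufficient  ->  action
decision_table = [
    (True,  True,  "approve"),
    (True,  False, "refer to manual review"),
    (False, True,  "refer to manual review"),
    (False, False, "reject"),
]

for good_credit, income_sufficient, expected_action in decision_table:
    actual = approve_loan(good_credit, income_sufficient)
    assert actual == expected_action, (good_credit, income_sufficient)
print(f"{len(decision_table)} rules, {len(decision_table)} tests, all passing")
```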