Software security is an important concern, and it’s not just for operating system and network vendors. If you’re working at the application layer, your code is a target. In fact, the trend in software security exploits is away from massive, blunt-force attacks on the Internet or IT infrastructure and towards carefully crafted, criminal attacks on specific applications to achieve specific damage, often economic.
How can you respond effectively? While the threat is large and potentially intimidating, it turns out that there is a straightforward seven-step process that you can apply to reduce your software’s exposure to these attacks.
Carefully following this process will allow your organization to improve your software security in a way that is risk-based, thoroughly tested, data-driven, prudent, and continually re-aligned with real-world results. You can read more about this topic in my article, Seven Steps to Reduce Software Security Risk.
Businesses spend millions of dollars annually on software test automation. A few years back, while doing some work in Israel (birthplace of the Mercury toolset), someone told me that Mercury Interactive had a billion dollars in a bank in Tel Aviv. Probably an urban legend, but who knows? Mercury certainly made a lot of money selling tools over the years, which is why HP bought them.
That's nice for Mercury and Hewlett Packard, but so what, right? I don't know about your company, but none of RBCS' clients buy software testing tools so that they can help tool vendors make money. Our clients buy software testing tools because they expect those tools will help them make money.
Unfortunately, many organizations lack a clear business case for software test automation. Without a clear business case, there's no clear return on investment, and thus no clear way to judge the success (or failure) of the automation effort. Efforts that should be cancelled continue too long, and efforts that should continue are cancelled.
So, one of the prerequisites of software test automation success is a clear business case, leading to clear measures of success. Here are the top three business cases for software test automation that we've observed with our clients:
This list is not exhaustive, and, in some cases, two or more reasons may apply. One of the particularly nice aspects of each of these three business cases is that the return on investment is clearly quantifiable. That makes achieving success in one or more of these areas easy to measure and to demonstrate. It also makes it easy to determine which tests should be automated and which should not.
We often want--and need--testing to go more quickly, don't we? So, here's a list of organizational behaviors and attributes that tend to accelerate the test process. Encourage these activities and values among your peers, and jump at the opportunities to perform them yourself where appropriate.
Testing throughout the project. I use the phrase testing throughout the project in a three-dimensional sense. The first dimension involves time: in order to be properly prepared, and to help contain bugs as early as possible, the test team must become involved when the project starts, not at the end. The second dimension is organizational: the more a company promotes open communication between the test organization and the other teams throughout the company, the better the test group can align its efforts with the company’s needs. The third dimension is cultural: in a mature company, testing as an entity, a way of mitigating risk, and a business-management philosophy permeates the development projects. I also call this type of testing pervasive testing.
Smart use of cheaper resources. One way to do this is to use test technicians. You can get qualified test technicians from the computer-science and engineering schools of local universities and colleges as well as from technical institutes. Try to use these employees to perform any tasks that do not specifically require a test engineer’s level of expertise. Another way to do this is to use distributed and outsourced testing.
Appropriate test automation. The more automated the test system, the less time it takes to run the tests. Automation also allows unattended test execution overnight and over weekends, which maximizes utilization of the system under test and other resources, leaving more time for engineers and technicians to analyze and report test failures. You should strike a careful balance, however. Generating a good automated test suite can take many more hours than writing a good manual test suite, and developing a completely automated test management system is a large endeavor. If you don’t have the running room to thoroughly automate everything you’d like before test execution begins, you should focus on building a few simple tools that will make manual testing go more quickly. In the long run, automation of test execution is typically an important part of dealing with regression risk during maintenance.
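As a concrete illustration of a "simple tool that makes manual testing go more quickly," consider a small test-data generator. This is a hypothetical sketch, not a tool from any particular project; the field names and value ranges are assumptions chosen for the example.

```python
import random
import string

def make_user_record(seed=None):
    """Generate one plausible user record for manual exploratory testing.

    Testers can paste these values into forms instead of inventing them
    by hand; the field names and ranges here are purely illustrative.
    Passing a seed makes a run reproducible when a bug needs retracing.
    """
    rng = random.Random(seed)
    name = "".join(rng.choices(string.ascii_lowercase, k=8))
    return {
        "username": name,
        "email": f"{name}@example.com",
        "age": rng.randint(18, 90),
    }

if __name__ == "__main__":
    # Print a handful of records for a manual test session.
    for i in range(3):
        print(make_user_record(seed=i))
```

A tool like this can be written in an afternoon, yet it removes a small, repetitive cost from every manual test run, which is exactly the kind of quick win that fits when full automation isn't feasible yet.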
Good test system architecture. Spending time in advance understanding how the test system should work, selecting the right tools, ensuring the compatibility and logical structure of all the components, and designing for subsequent maintainability really pay off once test execution starts. The more intuitive the test system, the more easily testers can use it.
Clearly defined test-to-development handoff processes. Let's illustrate this with an example. Two closely related activities, bug isolation and debugging, occur on opposite sides of the fence between test and development. On the one hand, test managers must ensure that test engineers and technicians thoroughly isolate every bug they find and write up those isolation steps in the bug report. Development managers, on the other hand, must ensure that their staff does not try to involve test engineers and technicians, who have other responsibilities, in debugging activities.
Clearly defined development-to-test handoff processes. The project team must manage the release of new hardware and software revisions to the test group. As part of this process, the following conditions should be met:
Automated smoke tests run against test releases, whether in the development, build (or release engineering), or testing environments (or all three), are also a good idea to ensure that broken test releases don’t block test activities for hours or even days at the beginning of a test cycle.
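The shape of such a smoke test harness can be sketched as follows. This is a minimal, hypothetical example: the checks shown are placeholders, and in a real build pipeline each callable would ping a service, query a version endpoint, or launch the application's main screen.

```python
import sys

def run_smoke_tests(checks):
    """Run a list of (name, callable) smoke checks against a new build.

    Each callable returns True on success. The runner stops at the
    first failure, so a broken test release is flagged within minutes
    of the build, not hours or days into a test cycle.
    """
    for name, check in checks:
        try:
            ok = check()
        except Exception as exc:
            return (False, f"{name}: raised {exc!r}")
        if not ok:
            return (False, f"{name}: failed")
    return (True, "all smoke checks passed")

# Illustrative checks only; real checks would exercise the release.
checks = [
    ("python runtime present", lambda: sys.version_info >= (3, 0)),
    ("arithmetic sane", lambda: 2 + 2 == 4),
]

if __name__ == "__main__":
    ok, message = run_smoke_tests(checks)
    print(message)
```

Wiring a runner like this into the build or release engineering step means a release that fails its smoke checks never reaches the test environment in the first place.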
Another handoff occurs when exit and entry criteria for phases result in the test team commencing or ending their testing work on a given project. The more clearly defined and mutually accepted these criteria are, the more smoothly and efficiently the testing will proceed.
A clearly defined system under test. If the test team receives clear requirements and design specifications when developing tests and clear documentation while running tests, it can perform both tasks more effectively and efficiently. When the project management team commits to and documents how the product is expected to behave, you and your intrepid team of testers don’t have to waste time trying to guess—or dealing with the consequences of guessing incorrectly. In a later post, I'll give you some tips on operating without clear requirements, design specifications, and documentation when the project context calls for it.
Continuous test execution. Related to, and enabled by, test automation, this type of execution involves setting up test execution so that the system under test runs as nearly continuously as possible. This arrangement can entail some odd hours for the test staff, especially test technicians, so everyone on the test team should have access to all appropriate areas of the test lab.
Continuous test execution also implies not getting blocked. If you’re working on a 1-week test cycle, being blocked for just 1 day means that 20 percent of the planned tests for this release will not happen, or will have to happen through extra staff, overtime, weekend work, and other undesirable methods. Good release engineering and management practices, including smoke-testing builds before installing them in the test environment, can be a big part of this. Another part is having an adequate test environment so that testers don’t have to queue to run tests that require some particular configuration or to report test results.
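The arithmetic behind that 20 percent figure is worth making explicit, since it lets you put a number on any blockage. The sketch below assumes planned tests are spread evenly across the cycle's working days.

```python
def lost_test_fraction(cycle_days, blocked_days):
    """Fraction of a test cycle's planned tests lost to blockage,
    assuming tests are spread evenly across the cycle's working days."""
    return blocked_days / cycle_days

# One blocked day in a one-week cycle of 5 working days:
print(f"{lost_test_fraction(5, 1):.0%}")  # → 20%
```

Quoting the lost fraction this way ("we just lost 20 percent of this cycle's planned coverage") is often more persuasive to project stakeholders than simply reporting that the team was blocked.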
Adding test engineers. Fred Brooks once observed that “adding more people to a late software project makes it later,” a statement that has become known as Brooks’s Law. Depending on the ramp-up time required for test engineers in your projects, this law might not hold true as strongly in testing as it does in other areas of software and hardware engineering. Brooks reasoned that as you add people to a project, you increase the communication overhead, burden the current development engineers with training the new engineers, and don’t usually get the new engineers up to speed soon enough to do much good. In contrast, a well-designed behavioral test system reflects the (ideally) simpler external interfaces of the system under test, not its internal complexities. In some cases, this can allow a new engineer to contribute within a couple of weeks of joining the team.
My usual rule of thumb is that, if a schedule crisis looms six weeks or more in my future, I might be able to bring in a new test engineer in time to help. However, I have also added test engineers on the day system test execution started, and I once joined a laptop development project as the test manager about two weeks before the start of system test execution. In both cases, the results were good. (Note, though, that I am not contradicting myself. Testing does proceed most smoothly when the appropriate levels of test staffing become involved early, but don’t let having missed the opportunity to do that preclude adding more staff.) Talk to your test engineers to ascertain the amount of time that’ll be required, if any, to ramp up new people, and then plan accordingly.
While these software test process accelerators are not universally applicable--or even universally effective--consider them when your managers tell you that you need to make the testing go faster.
Smart professionals learn continuously. They learn not only from their own experience, but also from the experience of other smart professionals. Learning from other smart people is the essence and origin of best practices. A best practice is an approach to achieving important objectives or completing important tasks that generally gives good results, when applied appropriately and thoughtfully.
I have identified a number of software testing best practices over the years. Some I learned in my own work as a test manager. I have learned many more in my work as a consultant, since I get a chance to work with so many other smart test professionals in that role. Here are five of my favorite software testing best practices:
You can listen to me explain these five software testing best practices in a recent webinar. I've also included links above for webinars that deal with some of these software testing best practices specifically.
If you have adopted risk based testing and are using it on projects, how do you know if you are doing it properly? Measure the effectiveness, of course.
I've discussed good software testing metrics previously. Good metrics for a process derive from the objectives that process serves. So, let's look at the four typical objectives of risk based testing and how we might measure effectiveness.
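To make one such measurement concrete, here is a sketch of a single illustrative metric: the share of identified quality risks covered by at least one passing test. The metric and the risk identifiers are assumptions for the example, not the definitive set of risk based testing metrics.

```python
def risk_coverage(identified_risks, risks_with_passing_tests):
    """Share of identified quality risks covered by at least one
    passing test -- one plausible effectiveness metric for risk
    based testing. An empty risk list yields zero coverage."""
    if not identified_risks:
        return 0.0
    covered = identified_risks & risks_with_passing_tests
    return len(covered) / len(identified_risks)

# Hypothetical risk identifiers from a project's risk analysis.
risks = {"R1", "R2", "R3", "R4"}
tested = {"R1", "R2", "R4"}
print(f"risk coverage: {risk_coverage(risks, tested):.0%}")  # → 75%
```

Tracked across projects, a metric like this shows whether the risks you identified actually drove the testing you performed.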
After each project, you can use these metrics to assess the effective implementation of risk based testing.
Next week, I'll be in Germany for the Testing and Finance conference, giving a keynote speech on how testing professionals and teams can satisfy their stakeholders. One of the key themes of that presentation is the following:
There are a wide variety of groups with an interest in testing and quality on each project; these are testing stakeholders. Each stakeholder group has objectives and expectations for the testing work that will occur.
When we do test assessments for our clients, we often find that test teams are not satisfying their stakeholders.
Why? Well, many times, what the testers think the stakeholders need and expect from testing differs from what the stakeholders actually need and expect. In order to understand the stakeholders' true objectives and expectations, testers need to talk to each stakeholder group. Since in many cases the stakeholders have not thought about this issue before, these talks often take the form of an iterative, brainstorming discussion between testers and stakeholders to articulate and define these objectives and expectations.
To truly satisfy these stakeholders, the test team needs to achieve these objectives effectively, efficiently, and elegantly.
The next step of defining these objectives, and what it means to achieve them effectively, efficiently, and elegantly, is often to define a set of metrics, along with goals for those metrics. These metrics and their goals allow the test team to demonstrate the value they are delivering. With goals achieved, testers and stakeholders can be confident that testing is delivering satisfying services to the organization.
Are you satisfying your stakeholders? Catch me in Bad Homburg, Germany, on June 8, to discuss the topic with me directly. Or, you can e-mail firstname.lastname@example.org to find out when we will post the recorded webinar on the RBCS Digital Library.
When I talk to senior project and product stakeholders outside of test teams, confidence in the system—especially, confidence that it will have a sufficient level of quality—is one benefit they want from a test team involved in system and system integration testing. Another key benefit such stakeholders commonly mention is providing timely, credible information about quality, including our level of confidence in system quality.
Reporting their level of confidence in system quality often proves difficult for many testers. Some testers resort to reporting confidence in terms of their gut feel. Next to major functional areas, they draw smiley faces and frowny faces on a whiteboard, and say things like, “I’ve got a bad feeling about function XYZ.” When management decides to release the product anyway, the hapless testers either suffer the Curse of Cassandra if function XYZ fails in production, or watch their credibility evaporate if there are no problems with function XYZ in production.
If you’ve been through those unpleasant experiences a few times, you’re probably looking for a better option. In the next 500 words, you’ll find that better option. That option is using multi-dimensional coverage metrics as a way to establish and measure confidence. While not every coverage dimension applies to all systems, you should consider the following:
Notice that I talked about “passing tests” in my metrics above. If the associated tests fail, then you have confidence that you know of—and can meaningfully describe, in terms non-test stakeholders will understand—problems in dimensions of the system. Instead of talking about “bad feelings” or drawing frowny faces on whiteboards, you can talk specifically about how tests have revealed unmitigated risks, unmet requirements, failing designs, inoperable environments, and unfulfilled use cases.
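A multi-dimensional coverage summary of this kind can be reduced to a simple, stakeholder-readable report. The sketch below is illustrative: the dimension names and counts are assumptions, and a real report would draw its numbers from the test management system.

```python
def coverage_report(dimensions):
    """Summarize pass coverage per dimension as 'passed/total (pct)'.

    `dimensions` maps a coverage dimension (quality risks,
    requirements, designs, configurations, use cases -- the names
    here are examples) to (items_with_passing_tests, total_items).
    """
    lines = []
    for name, (passed, total) in dimensions.items():
        pct = passed / total if total else 0.0
        lines.append(f"{name}: {passed}/{total} ({pct:.0%})")
    return lines

report = coverage_report({
    "quality risks": (45, 50),
    "requirements": (118, 120),
    "use cases": (20, 25),
})
for line in report:
    print(line)
```

A report in this shape replaces "bad feelings" with statements non-test stakeholders can act on: which dimensions are well covered by passing tests, and which are not.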
What about code coverage? Code coverage measures the extent to which tests exercise statements, branches, and loops in the software. Where untested statements, branches, and loops exist, that should reduce our confidence that we have learned everything we need to learn about the quality of the software. Any code that is uncovered is also unmeasured from a quality perspective.
If you manage a system test or system integration test team, it’s a useful exercise to measure the code coverage of your team’s tests. This can identify important holes in the tests. I and many other test professionals have used code coverage this way for over 20 years. However, in terms of designing tests specifically to achieve a particular level of code coverage, I believe that responsibility resides with the programmers during unit testing. At the system test and system integration test levels, code coverage is a useful tactic for finding testing gaps, but not a useful strategy for building confidence.
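The gap-finding tactic described above can be sketched as follows. In practice the executed-line data would come from an actual coverage tool; this hypothetical example only shows the analysis step once you have that data.

```python
def coverage_gaps(executable_lines, executed_lines):
    """Given the set of executable line numbers in a module and the
    set actually executed by the test suite, report the untested
    lines and the coverage fraction. (A real measurement comes from
    a coverage tool; this shows only the gap analysis.)"""
    missed = sorted(executable_lines - executed_lines)
    pct = len(executed_lines & executable_lines) / len(executable_lines)
    return missed, pct

# Hypothetical data for one module of the system under test.
executable = set(range(1, 11))       # lines 1-10 are executable
executed = {1, 2, 3, 5, 6, 7, 8}     # lines hit by the system tests
missed, pct = coverage_gaps(executable, executed)
print(f"untested lines: {missed}, coverage: {pct:.0%}")
```

The untested lines point you at functionality your system tests never touch, which is the "finding testing gaps" use of code coverage, as opposed to using it as a test design target.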
The other dimensions of coverage measurement do offer useful strategies for building confidence in the quality of the system and the meaningfulness of the test results. As professional test engineers and test analysts, we should design and execute tests along the applicable coverage dimensions. As professional test managers, our test results reports should describe how thoroughly we’ve addressed each applicable coverage dimension. Test teams that do so can deliver confidence, both in terms of the credibility and meaningfulness of their test results, and, ultimately, in the quality of the system.
In a later post, I'll talk about what software testing is, and what it can do. However, in this post, I'd like to talk about what software testing isn't and what it can't do.
In some organizations, when I talk to people outside of the testing team, they say they want testing to demonstrate that the software has no bugs, or to find all the bugs in it. Either is an impossible mission, for four main reasons:
It's important to understand and explain these limits on what software testing can do. Recently, the CEO of Toyota said that software problems couldn't be behind the problems with their cars, because "we tested the software." As long as non-testers think that testers can test exhaustively, those of us who are professional testers will not measure up to expectations.
As regular readers of my posts, books, and/or articles know, I like risk based testing. That said, it's not without its own risks. One key project risk in risk based testing is missing some key quality risks. If you don't identify the risk, you can't assess the level of risk, and, of course, you won't cover the risk with tests--even if you really should.
How to mitigate this risk? Well, one part is getting the right stakeholders involved, and I have thoughts on doing that in a previous blog post. Another part is to use the right approach to the analysis, as discussed in this blog post.
However, another key part of getting as thorough-as-possible a list of risks is to use a framework or checklist to structure and suggest quality risks. I've seen four common approaches to this, two of which work and two of which don't work.
Here's my recommendation for most clients getting started with risk based testing. Start with the general list of quality risk categories I mentioned above. Customize the risk categories for your product, if needed, but beware of dropping any risk categories unless everyone agrees nothing bad could happen in that category. If you find you need a more structured framework after a couple projects, move to ISO 9126.
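One way to make such a category checklist operational is to attach likelihood and impact ratings to each risk item during the analysis session. The sketch below is illustrative only: the category names and the rating scheme are assumptions, not a definitive catalog.

```python
# Illustrative quality risk categories; the names are assumptions,
# not a complete or definitive list.
RISK_CATEGORIES = [
    "functionality", "performance", "security",
    "usability", "reliability", "portability",
]

def risk_priority(likelihood, impact):
    """Combine likelihood and impact ratings (each 1 = high risk
    ... 5 = low risk) into a risk priority number; a lower number
    means the risk should be tested earlier and more deeply."""
    return likelihood * impact

# Example assessment for one hypothetical product.
assessment = {
    "security": risk_priority(1, 1),      # likely failure, severe impact
    "performance": risk_priority(2, 3),
    "portability": risk_priority(5, 5),   # unlikely failure, minor impact
}
for category, rpn in sorted(assessment.items(), key=lambda kv: kv[1]):
    print(f"{category}: {rpn}")
```

Sorting by the resulting priority number gives the team a defensible order in which to allocate test design and execution effort across the categories.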
I want to depart a bit from the usual theme to share a cautionary tale about reliability that has lessons for system design, system testing, cloud computing, and public communication. Regular readers will have noticed that we had only one post last week, down from the usual two posts. The reason is that Friday's post was pre-empted by a thunderstorm that knocked out the high-speed internet to our offices. We get our internet from a company called GVTC. In fact, the storm appears to have affected hundreds of customers, because we are only now (a full three days after the failure) finding out that we won't have a GVTC service person here until Thursday.
Shame on RBCS for not having backup, you might think. But we did have backup. In addition to GVTC's fiber-based wired connection, we had a failover router (a Junxion Box) with an AT&T 3G wireless card in it. However, when the fiber-to-ethernet adapter failed, it created a surge in the ethernet connection (which ran through the router). That surge completely destroyed the router. So, no backup internet. Worse yet, because the router was also acting as the DHCP server, the entire local area network was now inaccessible.
Chaos ensued, as you might imagine, and we're still recovering from it. I'll spare you the details of what we have done and are still doing, and jump to the lessons.
Enough ruminations on the lessons learned. Later this week, we'll be back to our regularly-scheduled programming. In the meantime, give a thought to reliability--before circumstances force you to do so.