Blog

Seven Steps to Reducing Software Security Risks

By Rex Black

Software security is an important concern, and it’s not just for operating system and network vendors.  If you’re working at the application layer, your code is a target.  In fact, the trend in software security exploits is away from massive, blunt-force attacks on the Internet or IT infrastructure and towards carefully crafted, criminal attacks on specific applications to achieve specific damage, often economic.

How can you respond effectively? While the threat is large and potentially intimidating, it turns out that there is a straightforward seven-step process that you can apply to reduce your software’s exposure to these attacks. 

  1. Assess security risks to focus your improvements.
  2. Test the software for security failures.
  3. Analyze the software for security bugs.
  4. Evaluate patterns in security risks, failures, and bugs.
  5. Repair the bugs with due care for regression.
  6. Examine the real-world results by monitoring important security metrics.
  7. Institutionalize the successful process improvements. 

Carefully following this process will allow your organization to improve your software security in a way which is risk-based, thoroughly tested, data-driven, prudent, and continually re-aligned with real-world results.  You can read more about this topic in my article, Seven Steps to Reduce Software Security Risk.

— Published


Top Three Business Cases for Software Test Automation

By Rex Black

Businesses spend millions of dollars annually on software test automation.  A few years back, while I was doing some work in Israel (birthplace of the Mercury toolset), someone told me that Mercury Interactive had a billion dollars in a bank in Tel Aviv.  Probably an urban legend, but who knows? Mercury certainly made a lot of money selling tools over the years, which is why HP bought them.

That's nice for Mercury and Hewlett Packard, but so what, right?  I don't know about your company, but none of RBCS' clients buy software testing tools so that they can help tool vendors make money.  Our clients buy software testing tools because they expect those tools will help them make money.

Unfortunately, many organizations lack a clear business case for software test automation.  Without a clear business case, there's no clear return on investment, and no clear way to judge the success (or failure) of the automation effort.  Efforts that should be cancelled continue too long, and efforts that should continue are cancelled.

So, one of the prerequisites of software test automation success is a clear business case, leading to clear measures of success.  Here are the top three business cases for software test automation that we've observed with our clients:

  1. Automation is the only practical way to address some critical set of quality risks.  The two most common examples are reliability and performance, which generally cannot be tested manually.
  2. Automation is used to shorten test execution time.  This is particularly true in highly competitive situations where time-to-market is critical, and at the same time customers have a low tolerance for quality problems.
  3. Automation is used to reduce the effort required to achieve a desired level of quality risk.  This is often the case in large, complex products where regression, especially regression across interconnected features, is considered unacceptable.

This list is not exhaustive, and, in some cases, two or more reasons may apply.  One of the particularly nice aspects of each of these three business cases is that the return on investment is clearly quantifiable.  That makes achieving success in one or more of these areas easy to measure and to demonstrate.  It also makes it easy to determine which tests should be automated and which should not.
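
To make the first business case concrete, here is a minimal Python sketch of the kind of automated performance check that no one could run by hand: fire a batch of concurrent requests at the system under test and report response-time statistics.  The URL, load level, and response-time budget are illustrative assumptions, not figures from any real project.

    # Minimal sketch (assumptions: a hypothetical /health endpoint and thresholds).
    # Illustrates why performance testing needs automation: no human can issue
    # hundreds of timed, concurrent requests and collect latency statistics by hand.
    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    BASE_URL = "http://localhost:8080/health"   # hypothetical system under test
    REQUESTS = 200                              # illustrative load level
    CONCURRENCY = 20
    P95_BUDGET_SECONDS = 0.5                    # illustrative response-time budget

    def timed_request(_):
        start = time.perf_counter()
        with urlopen(BASE_URL, timeout=10) as response:
            response.read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = sorted(pool.map(timed_request, range(REQUESTS)))

    p95 = latencies[int(len(latencies) * 0.95) - 1]
    print(f"median={statistics.median(latencies):.3f}s  p95={p95:.3f}s")
    print("PASS" if p95 <= P95_BUDGET_SECONDS else "FAIL: p95 over budget")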

— Published


Making Software Testing Go Faster

By Rex Black

We often want--and need--testing to go more quickly, don't we?  So, here's a list of organizational behaviors and attributes that tend to accelerate the test process. Encourage these activities and values among your peers, and jump at the opportunities to perform them yourself where appropriate.

Testing throughout the project. I use the phrase testing throughout the project in a three-dimensional sense. The first dimension involves time: in order to be properly prepared, and to help contain bugs as early as possible, the test team must become involved when the project starts, not at the end. The second dimension is organizational: the more a company promotes open communication between the test organization and the other teams throughout the company, the better the test group can align its efforts with the company’s needs. The third dimension is cultural: in a mature company, testing as an entity, a way of mitigating risk, and a business-management philosophy permeates the development projects. I also call this type of testing pervasive testing.

Smart use of cheaper resources. One way to do this is to use test technicians. You can get qualified test technicians from the computer-science and engineering schools of local universities and colleges as well as from technical institutes. Try to use these employees to perform any tasks that do not specifically require a test engineer’s level of expertise. Another way to do this is to use distributed and outsourced testing.

Appropriate test automation. The more automated the test system, the less time it takes to run the tests. Automation also allows unattended test execution overnight and over weekends, which maximizes utilization of the system under test and other resources, leaving more time for engineers and technicians to analyze and report test failures. You should strike a careful balance, however. Generating a good automated test suite can take many more hours than writing a good manual test suite. Developing a completely automated test management system is a large endeavor. If you don’t have the running room to thoroughly automate everything you’d like before test execution begins, you should focus on building a few simple tools (like the sketch below) that will make manual testing go more quickly. In the long run, automation of test execution is typically an important part of dealing with regression risk during maintenance.
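
As an example of what such a simple tool might look like, here is a minimal Python sketch that resets a test environment's data to a known baseline so manual testers don't rebuild it by hand.  The file paths and database are hypothetical; the point is how small such a time-saver can be.

    # Minimal sketch of a "simple tool" that speeds up manual testing: reset the
    # test database to a known baseline so testers don't rebuild data by hand.
    # The file paths and database name are hypothetical.
    import shutil
    import sqlite3
    from pathlib import Path

    BASELINE_DB = Path("baselines/app_baseline.db")   # known-good test data set
    WORKING_DB = Path("test_env/app.db")              # database the testers use

    def reset_test_data() -> None:
        """Copy the baseline over the working database and verify it opens."""
        shutil.copyfile(BASELINE_DB, WORKING_DB)
        with sqlite3.connect(WORKING_DB) as conn:
            tables = conn.execute(
                "SELECT name FROM sqlite_master WHERE type = 'table'"
            ).fetchall()
        print(f"Test data reset; {len(tables)} tables available.")

    if __name__ == "__main__":
        reset_test_data()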

Good test system architecture. Spending time in advance understanding how the test system should work, selecting the right tools, ensuring the compatibility and logical structure of all the components, and designing for subsequent maintainability really pay off once test execution starts. The more intuitive the test system, the more easily testers can use it.

Clearly defined test-to-development handoff processes. Let's illustrate this with an example. Two closely related activities, bug isolation and debugging, occur on opposite sides of the fence between test and development. On the one hand, test managers must ensure that test engineers and technicians thoroughly isolate every bug they find and write up those isolation steps in the bug report. Development managers, on the other hand, must ensure that their staff does not try to involve test engineers and technicians, who have other responsibilities, in debugging activities.

Clearly defined development-to-test handoff processes. The project team must manage the release of new hardware and software revisions to the test group. As part of this process, the following conditions should be met:

  • All software is under revision control.
  • All test builds come from revision-controlled code.
  • Consistent, clear release-naming conventions exist for each major system.
  • A regular, planned release schedule exists and is followed.
  • A well-understood, correct integration strategy is developed and followed during the test-planning stages.

Automated smoke tests run against test releases, whether in the development, build (or release engineering), or testing environments (or all three), are also a good idea to ensure that broken test releases don’t block test activities for hours or even days at the beginning of a test cycle.
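
For illustration, here is a minimal Python sketch of such a smoke test: a handful of quick checks run against each new build, with a non-zero exit code blocking a broken release from reaching the test team.  The endpoints are hypothetical assumptions.

    # Minimal smoke-test sketch: quick checks run against every new build before
    # the test team accepts it. The endpoints are hypothetical.
    import sys
    from urllib.request import urlopen

    CHECKS = [
        ("application responds", "http://test-env.example.com/health"),
        ("login page loads",     "http://test-env.example.com/login"),
        ("API version endpoint", "http://test-env.example.com/api/version"),
    ]

    def smoke_test() -> bool:
        all_passed = True
        for name, url in CHECKS:
            try:
                with urlopen(url, timeout=15) as response:
                    ok = response.status == 200
            except OSError:
                ok = False
            print(f"{'PASS' if ok else 'FAIL'}: {name}")
            all_passed = all_passed and ok
        return all_passed

    if __name__ == "__main__":
        # A non-zero exit blocks the release from entering the test environment.
        sys.exit(0 if smoke_test() else 1)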

Another handoff occurs when exit and entry criteria for phases result in the test team commencing or ending their testing work on a given project. The more clearly defined and mutually accepted these criteria are, the more smoothly and efficiently the testing will proceed.

A clearly defined system under test. If the test team receives clear requirements and design specifications when developing tests and clear documentation while running tests, it can perform both tasks more effectively and efficiently. When the project management team commits to and documents how the product is expected to behave, you and your intrepid team of testers don’t have to waste time trying to guess—or dealing with the consequences of guessing incorrectly. In a later post, I'll give you some tips on operating without clear requirements, design specifications, and documentation when the project context calls for it.

Continuous test execution. Related to, and enabled by, test automation, this type of execution involves setting up test execution so that the system under test runs as nearly continuously as possible. This arrangement can entail some odd hours for the test staff, especially test technicians, so everyone on the test team should have access to all appropriate areas of the test lab.

Continuous test execution also implies not getting blocked. If you’re working on a 1-week test cycle, being blocked for just 1 day means that 20 percent of the planned tests for this release will not happen, or will have to happen through extra staff, overtime, weekend work, and other undesirable methods. Good release engineering and management practices, including smoke-testing builds before installing them in the test environment, can be a big part of this. Another part is having an adequate test environment so that testers don’t have to queue to run tests that require some particular configuration or to report test results.

Adding test engineers.  Fred Brooks once observed that “adding manpower to a late software project makes it later,” a statement that has become known as Brooks’s Law. Depending on the ramp-up time required for test engineers in your projects, this law might not hold true as strongly in testing as it does in other areas of software and hardware engineering. Brooks reasoned that as you add people to a project, you increase the communication overhead, burden the current development engineers with training the new engineers, and don’t usually get the new engineers up to speed soon enough to do much good. In contrast, a well-designed behavioral test system reflects the (ideally) simpler external interfaces of the system under test, not its internal complexities. In some cases, this can allow a new engineer to contribute within a couple of weeks of joining the team.

My usual rule of thumb is that, if a schedule crisis looms six weeks or more in my future, I might be able to bring in a new test engineer in time to help. However, I have also added test engineers on the day system test execution started, and I once joined a laptop development project as the test manager about two weeks before the start of system test execution. In both cases, the results were good. (Note, though, that I am not contradicting myself. Testing does proceed most smoothly when the appropriate levels of test staffing become involved early, but don’t let having missed the opportunity to do that preclude adding more staff.) Talk to your test engineers to ascertain the amount of time that’ll be required, if any, to ramp up new people, and then plan accordingly.

While these software test process accelerators are not universally applicable--or even universally effective--consider them when your managers tell you that you need to make the testing go faster.

— Published


Five Software Testing Best Practices

By Rex Black

Smart professionals learn continuously.  They learn not only from their own experience, but also from the experience of other smart professionals.  Learning from other smart people is the essence and origin of best practices.  A best practice is an approach to achieving important objectives or completing important tasks that generally gives good results, when applied appropriately and thoughtfully.

I have identified a number of software testing best practices over the years.  Some I learned in my own work as a test manager.  I have learned many more in my work as a consultant, since I get a chance to work with so many other smart test professionals in that role.  Here are five of my favorite software testing best practices:

  1. Use analytical risk based testing strategies
  2. Define realistic objectives for testing, with metrics
  3. Institute continuous test process improvement based on lessons learned from previous projects
  4. Have trained and certified test teams
  5. Distribute testing work intelligently

You can listen to me explain these five software testing best practices in a recent webinar.  I've also included links above for webinars that deal with some of these software testing best practices specifically.

— Published


Recognizing Effective Risk Based Testing

By Rex Black

If you have adopted risk based testing and are using it on projects, how do you know if you are doing it properly?  Measure the effectiveness, of course. 

I've discussed good software testing metrics previously.  Good metrics for a process derive from the objectives that process serves.  So, let's look at the four typical objectives of risk based testing and how we might measure effectiveness.

  • We want to find important bugs early in test execution (“find the scary stuff first”).  So, measure whether the critical bugs (however you classify bugs) are found in the first half of the test execution period.
  • We want to allocate test execution effort appropriately (“pick the right tests”). So, measure defect detection effectiveness for the critical bugs and for all bugs, and ensure that the metric for critical bugs is higher than for all bugs (a small sketch after this list shows the arithmetic).
  • We want to help management make a good, risk-aware release decision (“balance the risks”).  This involves surveying the project management team.  Do they rate the test reports as effective at this goal? Do the test reports give them the information they need?
  • We want to be able to triage tests if schedule pressure requires (“under pressure, drop the least-worrisome tests”).  So, check whether the risk priority for all skipped tests is less than or equal to the risk priority for every test that was run.
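
As an illustration, here is a minimal Python sketch of how the defect detection effectiveness comparison and the triage check above might be computed from bug-tracking and test-management exports.  All field names and numbers are invented; substitute whatever your tools actually record.

    # Hypothetical bug-tracking export: (severity, where the bug was found).
    bugs = [
        ("critical", "test"), ("critical", "test"), ("critical", "production"),
        ("minor", "test"), ("minor", "production"), ("minor", "production"),
    ]

    def detection_effectiveness(records):
        """Share of bugs found by testing rather than in production."""
        found_in_test = sum(1 for _, where in records if where == "test")
        return found_in_test / len(records) if records else 0.0

    dde_all = detection_effectiveness(bugs)
    dde_critical = detection_effectiveness([b for b in bugs if b[0] == "critical"])
    print(f"DDE all bugs: {dde_all:.0%}, DDE critical bugs: {dde_critical:.0%}")
    print("Effort allocated well" if dde_critical >= dde_all
          else "Revisit test allocation")

    # Triage check: every skipped test should carry a risk priority no higher
    # than any test that was actually run (higher number = riskier here).
    run_priorities = [9, 8, 8, 6]        # risk priorities of executed tests
    skipped_priorities = [3, 2]          # risk priorities of skipped tests
    print("Triage consistent" if max(skipped_priorities) <= min(run_priorities)
          else "Triage inconsistent: a riskier test was skipped")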

After each project, you can use these metrics to assess the effective implementation of risk based testing.

— Published


What Do Test Stakeholders Want?

By Rex Black

Next week, I'll be in Germany for the Testing and Finance conference, giving a keynote speech on how testing professionals and teams can satisfy their stakeholders.  One of the key themes of that presentation is the following:

There are a wide variety of groups with an interest in testing and quality on each project; these are testing stakeholders.  Each stakeholder group has objectives and expectations for the testing work that will occur. 

When we do test assessments for our clients, we often find that test teams are not satisfying their stakeholders. 

Why?  Well, many times, what the testers think the stakeholders need and expect from testing differs from what the stakeholders actually need and expect.  In order to understand the stakeholders' true objectives and expectations, testers need to talk to each stakeholder group.  Since in many cases the stakeholders have not thought about this issue before, these talks often take the form of an iterative, brainstorming discussion between testers and stakeholders to articulate and define these objectives and expectations.

To truly satisfy these stakeholders, the test team needs to achieve these objectives effectively, efficiently, and elegantly. 

  • Effectiveness: satisfying objectives and expectations to some reasonable degree.
  • Efficiency: maximizing the value delivered for the resources invested.
  • Elegance: achieving effectiveness and efficiency in a graceful, well-executed fashion.

The next step of defining these objectives, and what it means to achieve them effectively, efficiently, and elegantly, is often to define a set of metrics, along with goals for those metrics.  These metrics and their goals allow the test team to demonstrate the value they are delivering.  With goals achieved, testers and stakeholders can be confident that testing is delivering satisfying services to the organization.

Are you satisfying your stakeholders?  Catch me in Bad Homburg, Germany, on June 8, to discuss the topic directly.  Or, you can e-mail info@rbcs-us.com to find out when we will post the recorded webinar on the RBCS Digital Library.

— Published


Software Test Coverage Dimensions: Measures of Confidence

By Rex Black

When I talk to senior project and product stakeholders outside of test teams, confidence in the system—especially, confidence that it will have a sufficient level of quality—is one benefit they want from a test team involved in system and system integration testing.   Another key benefit such stakeholders commonly mention is providing timely, credible information about quality, including our level of confidence in system quality. 

Reporting their level of confidence in system quality often proves difficult for many testers.  Some testers resort to reporting confidence in terms of their gut feel.  Next to major functional areas, they draw smiley faces and frowny faces on a whiteboard, and say things like, “I’ve got a bad feeling about function XYZ.”  When management decides to release the product anyway, the hapless testers either suffer the Curse of Cassandra if function XYZ fails in production, or watch their credibility evaporate if there are no problems with function XYZ in production.

If you’ve been through those unpleasant experiences a few times, you’re probably looking for a better option. In the next 500 words, you’ll find that better option.  That option is using multi-dimensional coverage metrics as a way to establish and measure confidence.  While not every coverage dimension applies to all systems, you should consider the following:

  • Risk coverage: One or more tests (depending on the level of risk) for each quality risk item identified during quality risk analysis.  You can only have confidence that the residual level of quality risk is acceptable if you test the risks. The percentage of risks with passing tests measures the residual level of risk.
  • Requirements coverage:  One or more tests for each requirements specification element.  You can only have confidence that the system will “conform to requirements as specified” (to use Crosby’s definition of quality) if you test the requirements. The percentage of requirements with passing tests measures the extent to which the system conforms.
  • Design coverage: One or more tests for each design specification element.  You can only have confidence that the design is effective if you test the design. The percentage of design elements with passing tests measures design effectiveness.
  • Environment coverage: Appropriate environment-sensitive tests run in each supported environment.  You can only have confidence that the system is “fit for use” (to use Juran’s definition of quality) if you test the supported environments.  The percentage of environments with passing tests measures environment support.
  • Use case, user profile, and/or user story coverage:  Proper test cases for each use case, user profile, and/or user story.  Again, you can only have confidence that the system is “fit for use” if you test the way the user will use the system.  The percentage of use cases, user profiles, and/or user stories with passing tests measures user readiness.

Notice that I talked about “passing tests” in my metrics above.  If the associated tests fail, then you have confidence that you know of—and can meaningfully describe, in terms non-test stakeholders will understand—problems in dimensions of the system.  Instead of talking about “bad feelings” or drawing frowny faces on whiteboards, you can talk specifically about how tests have revealed unmitigated risks, unmet requirements, failing designs, inoperable environments, and unfulfilled use cases.
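
To show how little machinery these metrics require, here is a minimal Python sketch that computes "percentage of items with passing tests" for a few of the dimensions above from a hypothetical traceability mapping.  The interpretation used here, an item counts as covered only when it has at least one test and all of its tests pass, is one reasonable reading; adjust it to your own reporting rules.

    # Hypothetical traceability export: dimension -> {item id: test outcomes}.
    coverage_items = {
        "risk": {"R-01": ["pass", "pass"], "R-02": ["pass"], "R-03": ["fail"]},
        "requirement": {"REQ-10": ["pass"], "REQ-11": [], "REQ-12": ["pass"]},
        "environment": {"Windows 11": ["pass"], "macOS 14": ["fail"]},
    }

    def percent_with_passing_tests(items):
        """Items with at least one test, all of which pass."""
        covered = sum(
            1 for outcomes in items.values()
            if outcomes and all(result == "pass" for result in outcomes)
        )
        return 100.0 * covered / len(items)

    for dimension, items in coverage_items.items():
        print(f"{dimension} coverage: {percent_with_passing_tests(items):.0f}%")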

What about code coverage?  Code coverage measures the extent to which tests exercise statements, branches, and loops in the software.  Where untested statements, branches, and loops exist, that should reduce our confidence that we have learned everything we need to learn about the quality of the software.  Any code that is uncovered is also unmeasured from a quality perspective.

If you manage a system test or system integration test team, it’s a useful exercise to measure the code coverage of your team’s tests.  This can identify important holes in the tests.  I and many other test professionals have used code coverage this way for over 20 years.  However, in terms of designing tests specifically to achieve a particular level of code coverage, I believe that responsibility resides with the programmers during unit testing.  At the system test and system integration test levels, code coverage is a useful tactic for finding testing gaps, but not a useful strategy for building confidence.
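
If your team's tests are driven from Python, one way to do that gap-finding exercise is with the coverage.py API wrapped around a pytest run.  This is a sketch under assumptions (a hypothetical package name and test directory, and the coverage and pytest packages installed); in practice the pytest-cov plugin does the same job more cleanly.

    # Spot-check the code coverage of an existing system test suite.
    # A gap-finding tactic, not a target: as noted above, chasing a
    # code-coverage number belongs in unit testing.
    import coverage
    import pytest

    cov = coverage.Coverage(source=["myapp"])   # "myapp" is a hypothetical package
    cov.start()
    exit_code = pytest.main(["tests/system"])   # hypothetical system-test directory
    cov.stop()
    cov.save()

    # Statements never executed by any system test are candidate testing gaps.
    cov.report(show_missing=True)
    print(f"pytest exit code: {int(exit_code)}")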

The other dimensions of coverage measurement do offer useful strategies for building confidence in the quality of the system and the meaningfulness of the test results.  As professional test engineers and test analysts, we should design and execute tests along the applicable coverage dimensions.   As professional test managers, our test results reports should describe how thoroughly we’ve addressed each applicable coverage dimension.  Test teams that do so can deliver confidence, both in terms of the credibility and meaningfulness of their test results, and, ultimately, in the quality of the system.

— Published


What Software Testing Isn't

By Rex Black

In a later post, I'll talk about what software testing is, and what it can do.  However, in this post, I'd like to talk about what software testing isn't and what it can't do.

In some organizations, when I talk to people outside of the testing team, they say they want testing to demonstrate that the software has no bugs, or to find all the bugs in it.  Either is an impossible mission, for four main reasons:

  1. The number of software execution paths (control flows) in any non-trivial software is either infinite or so close to infinite that attempting to test all of the paths is impossible, even with sophisticated test automation (see the sketch after this list).
  2. Software exists to manage data, and these large dataflows are separated across space (in terms of the features) and time (in terms of static data such as database records).  This creates an infinite or near-infinite set of possible dataflows.
  3. Even if you could test all control flows and dataflows, slight changes in the software can cause regressions which are not proportional to the size of the change.
  4. There are myriad usage profiles and field configurations, some unknown (especially in mass-market, web, and enterprise software) and some unknowable (given the fact that interoperating and cohabiting software can change without notice).
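
Here is a small Python illustration of the first point: independent decisions multiply paths, and the totals leave the realm of the testable almost immediately.  The branch counts and test-execution rate are illustrative assumptions.

    # Illustration of point 1 above: independent decisions multiply paths.
    # A function with n independent two-way branches has 2**n control-flow
    # paths; add a loop that may run 0..k times and the count multiplies again.
    for branches in (10, 20, 40, 64):
        paths = 2 ** branches
        seconds_needed = paths / 1_000_000  # at a (generous) million tests/second
        print(f"{branches:>2} branches -> {paths:,} paths "
              f"(~{seconds_needed:,.0f} seconds to run them all)")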

It's important to understand and explain these limits on what software testing can do.  Recently, the CEO of Toyota said that software problems couldn't be behind the problems with their cars, because "we tested the software."  As long as non-testers think that testers can test exhaustively, those of us who are professional testers will not measure up to expectations.

— Published


Managing a Key Risk in Risk Based Software Testing

By Rex Black

As regular readers of my posts, books, and/or articles know, I like risk based testing.  That said, it's not without its own risks.  One key project risk in risk based testing is missing some key quality risks.  If you don't identify the risk, you can't assess the level of risk, and, of course, you won't cover the risk with tests--even if you really should. 

How to mitigate this risk? Well, one part is getting the right stakeholders involved, and I have thoughts on doing that in a previous blog post.  Another part is to use the right approach to the analysis, as discussed in this blog post.

However, another key part of getting as thorough-as-possible a list of risks is to use a framework or checklist to structure and suggest quality risks.  I've seen four common approaches to this, two of which work and two of which don't work.

  1. A generic list of quality risk categories (such as the one available in the RBCS Basic Library here).  Such a list is easy to learn and use, which is important, because all the participants in the risk analysis need to understand the framework.  It is very informal, and needs tailoring for each organization.
  2. ISO 9126 quality characteristics (for an example of ISO 9126, see here).  This framework is very structured and designed to ensure that software teams are aware of all aspects of the system that are important for quality.  It is harder to learn, which can create problems with some participants.  It also doesn't inherently address hardware-related risks, which is a problem for testing hardware/software systems.
  3. Major functional areas (e.g., formatting, file operations, etc. in a word processor).  I do not recommend this for higher-level testing such as system test, system integration test, or integration test, unless the list of major functional areas is integrated into a larger generic quality risk categories list that includes non-functional categories.  By themselves, lists of major functional areas focus testing on fine-grained functionality only, omitting important use cases and non-functional attributes such as performance or reliability.
  4. Major subsystems (e.g., edit engine, user interface, file subsystem, etc. in a word processor).  This approach does work for hardware, and in fact is described in some books on formal risk analysis techniques like failure mode and effect analysis such as Stamatis's classic.  However, as with the functional areas, risk lists generated from subsystems tend to miss emergent behaviors in software systems, such as--once again--end-to-end use cases, performance, reliability, and so forth.

Here's my recommendation for most clients getting started with risk based testing.  Start with the general list of quality risk categories I mentioned above.  Customize the risk categories for your product, if needed, but beware of dropping any risk categories unless everyone agrees nothing bad could happen in that category.  If you find you need a more structured framework after a couple projects, move to ISO 9126.
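
To make this concrete, here is a minimal Python sketch of what a tailored quality risk catalog might look like once you start rating likelihood and impact for each risk item, in the spirit of the generic-category approach.  The categories, items, ratings, and extent-of-testing thresholds are all invented for illustration.

    # Minimal sketch of a quality risk catalog built from generic risk categories,
    # with a simple risk priority number (likelihood x impact, each rated 1-5).
    quality_risks = [
        # (category, risk item, likelihood 1-5, impact 1-5)
        ("Functionality", "Incorrect interest calculation", 3, 5),
        ("Performance",   "Slow response under month-end load", 4, 4),
        ("Reliability",   "Data loss after unexpected restart", 2, 5),
        ("Usability",     "Confusing error messages", 4, 2),
    ]

    for category, item, likelihood, impact in sorted(
        quality_risks, key=lambda r: r[2] * r[3], reverse=True
    ):
        priority = likelihood * impact
        extent = ("extensive" if priority >= 15
                  else "broad" if priority >= 8
                  else "cursory")
        print(f"{priority:>2}  {category:<14} {item}  -> {extent} testing")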

— Published


A Cautionary Tale in System Reliability

By Rex Black

I want to depart a bit from the usual theme to share a cautionary tale about reliability that has lessons for system design, system testing, cloud computing, and public communication.  Regular readers will have noticed that we had only one post last week, down from the usual two posts.  The reason is that Friday's post was pre-empted by a thunderstorm that knocked out the high-speed internet to our offices.  We get our internet from a company called GVTC.  In fact, the storm appears to have affected hundreds of customers, because we are only now (a full three days after the failure) finding out that we won't have a GVTC service person here until Thursday.

Shame on RBCS for not having backup, you might think.  But we did have backup.  In addition to the GVTC fiber-based wired connection, we had a failover router (a Junxion Box) with an AT&T 3G wireless card in it.  However, when the fiber-to-ethernet adapter failed, it created a surge in the ethernet connection (which ran through the router).  That surge completely destroyed the router. So, no backup internet.  Worse yet, because the router was also acting as the DHCP server, the entire local area network was now inaccessible.

Chaos ensued, as you might imagine, and we're still recovering from it.  I'll spare you the details of what we have done and are still doing, and jump to the lessons.

  • Testing lesson: Yes, we had tested the failover to the 3G router.  We did it by disconnecting the ethernet connection to the GVTC fiber-to-ethernet hardware. That tested the "what happens if the connection goes dark" condition.  It didn't test the "what happens if the hardware is damaged and starts sending dangerous signals" condition.  The lesson here for testers is, when doing risk analysis for reliability testing, make sure to consider all possible risks. Murphy's Law says that the one risk you forget is the one that'll get you.
  • Design lesson: When you're designing for reliability, don't assume that single-points-of-failure can be eliminated simply by adding a failover resource.  If the failover resource is connected in some way to the primary resource, there may well be a path for failure of the primary to propagate to the failover.  Our particular problem is the kind of design flaw that the iterative application of hazard analysis could have revealed.  I'll be more careful with my choice of contract support personnel as I rebuild this network.
  • Cloud lesson:  Cloud computing and software as a service (SaaS) are the latest thing, and gaining popularity by leaps and bounds.  RBCS doesn't rely much on the cloud, other than having our e-learning systems remotely hosted.  That hosting of e-learning was a good decision, it turns out, because the loss of connectivity to our offices did not affect our e-learning customers.  However, had we relied on our e-learning system for internal training over the weekend, we'd have been out of luck.  A key takeaway here--especially if you run a small business like I do--is that, if you rely on the cloud or SaaS, those applications are no more reliable than your high speed internet access.
  • Public communication lesson: For those who communicate to the public, GVTC's handling of this problem is a textbook example of how not to communicate.  They did not issue any e-mail or phone information about what to expect.  My business partner spent over five hours on the phone with them in the last 72 hours, and it wasn't until today that we got even the remotest promise of resolution.  She was told conflicting stories on each call.  The IVR system at one point instructed her to "dial 9 for technical support," and, when she did, it replied, "9 is not a supported option." Clear communication to affected customers when a service fails will have a big impact on the customers' experience of quality.  Conversely, failing to communicate sends a clear message, too: "We don't care about you."

Enough ruminations on the lessons learned.  Later this week, we'll be back to our regularly-scheduled programming.  In the meantime, give a thought to reliability--before circumstances force you to do so.

— Published



Copyright © 2017 Rex Black Consulting Services.
All Rights Reserved.