Blog

Moving Software Testing Up in Price and Value

By Rex Black

I had an interesting set of questions from a reader arrive in my inbox today.  I've interleaved my answers with his questions, with "RB:" in front. 

Dear Mr. Black,

 Would you please comment on the following three questions, or perhaps direct me to where I might gain some meaningful information that addresses them?

What is today’s trend in pricing for the software testing industry i.e. is it increasing, decreasing, stable, etc.?

RB: There certainly are what marketers refer to as "value customers" who make service purchase decisions solely on price, and these customers continue to drive down pricing on average.  However, at the top end, especially for clients that need and value senior consultants, we have managed to resist that. 

Is the service looked at as value added or a commodity, with pricing accordingly?

RB:  For the "value customer" mentioned above, it's a commodity.  For other customers, it's really a matter of doing a good job of connecting what is happening in testing with strategic business objectives.  I talked about this in my chapter in the book, Beautiful Testing.  To the extent that testing is very tactical and inward focused--especially when the focus is almost entirely on finding a large number of potentially unimportant bugs--it will be seen as a commodity.

Given that much of the labor is offshore in India and China, and subject to increase as these countries develop, will market be receptive to required price increases to allow a reasonable margin?

RB:  The "value customer" will not be receptive to such price increases, because price is all that matters.  The value customer will try to have their cake and eat it, too, by raising the minimum bar of qualifications while not allowing price to rise. Because there are billions of under-utilized human brains in the world, and because technology has almost eliminated barriers to entry for using those brains as commodity software testers, the value customer will get to have their cake and eat it, too. 

Best Regards,

Randy Francisco

SGS Consumer Testing Services

Randy, thanks for the questions. I talked about some matters relevant to these questions in my webinar on the Future of Test Management, which you can view here.

I'd be interested in other people's comments.  What do you think about these questions?

— Published


Metrics and Bonuses

By Rex Black

One of the topics I find very interesting and useful for our clients is the proper use of metrics.  We do a lot of metrics-related engagements, and in fact just this morning I'll be talking with a client about some US$ 100 million in defect-related waste that we've found in their software development process.  I've written a lot on the topic, including in my books and in various articles.

Regular blog reader Gianni Pucciani asks an interesting metrics question in an e-mail:

The question is: how can you give a bonus to your test team, to motivate it, based on 90% of bugs found before the release to production date? How can you know that you found 90% of the bugs at the time you release the software?

Gianni is referring to a metric called defect detection effectiveness or defect detection percentage.  This is a metric I've discussed quite a bit in my books, especially Managing the Testing Process.

Defect detection effectiveness is a very useful metric for measuring how effective a test process is at finding defects.  Most testing processes have defect detection as a primary objective, and we certainly should have effectiveness and efficiency metrics for our objectives.

That said, it is a retrospective metric that can only be calculated some time after a release, if you intend to calculate it on a release-by-release basis.  (Some of our clients calculate it on an annual basis, aggregating all their projects together, which also works.)  It's typical to wait 90 days after a release to calculate defect detection effectiveness, though you really should verify what time period is required to capture, say, 80% or more of the field-reported defects.
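To make the arithmetic concrete, here is a minimal sketch, in Python, of how the calculation might look once the post-release window has closed.  The function name, inputs, and numbers are invented for illustration; they are not taken from any client's data or from the book:

```python
from datetime import date, timedelta

def defect_detection_effectiveness(test_defect_count, field_defect_dates,
                                   release_date, window_days=90):
    """Percentage of all known defects that testing found before release,
    counting only field defects reported within the post-release window
    (90 days is the convention mentioned above)."""
    cutoff = release_date + timedelta(days=window_days)
    field_in_window = sum(1 for d in field_defect_dates
                          if release_date <= d <= cutoff)
    total = test_defect_count + field_in_window
    return 100.0 * test_defect_count / total if total else None

# Hypothetical numbers: 45 defects found in testing, 5 field defects reported
# within 90 days of a 2017-03-01 release -> 45 / 50 = 90%.
print(defect_detection_effectiveness(45, [date(2017, 3, 20)] * 5,
                                     date(2017, 3, 1)))
```

In other words, you only know whether the team hit "90% of bugs found before release" well after the release has been in the field, which is part of why the metric makes an awkward basis for a bonus.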

I could go on for days about this metric, but, since it's a blog and since Gianni asked a specific question, I'll address the other point he brought up, which is the use of this metric for bonuses.  Defect detection effectiveness is a process metric, which is not the same as a metric of individual or collective performance.  Many things are required to enable good defect detection effectiveness, including good testers, and many things can reduce defect detection effectiveness, some of which are beyond the control of testers.  I'd encourage a web search on the string "Deming red bead experiment" for a discussion of the risks of rewarding or punishing people based on metrics that might not be entirely within their control.

In addition, while defect detection is typically a primary objective of testing, it's not the only objective, and defect detection effectiveness is only an effectiveness metric.  It doesn't measure the efficiency or the elegance with which the test team detects defects.  A test process should have a fully articulated set of objectives, with effectiveness, efficiency, and (ideally) elegance metrics for each objective, rather than a single unidimensional metric by which it is measured.

For further information on defect detection effectiveness, I'd refer people to my book Managing the Testing Process, 3e.  My colleague Capers Jones also contributed an article to our web site on a couple of related defect metrics that readers might find interesting.

— Published


Advanced Test Manager: Retrospectives

By Rex Black

Reader Gianni Pucciani has another good question about the Advanced Software Testing: Volume 2 book.  Specifically, he's concerned with question 2 from Chapter 8:

Which of the following is a best practice for retrospective meetings that will lead to process improvement?

A. Ensuring management commitment to implement improvements

B. Allowing retrospective participants to rely exclusively on subjective assessment

C. Requiring that every project include a retrospective meeting in its closure activities

D. Prohibiting any management staff from attending the retrospective meeting

Gianni writes, "I had marked A, but also C. Where is the mistake? I have a feeling on it, but I would like you to confirm. Is C not correct because it is an organizational best practice, and not a best practice for retrospective meetings. A logic trick basically :), is that correct?"

Actually, Gianni, the reason C is not correct is that merely having retrospectives does not guarantee process improvements.  In fact, I've encountered a few situations where organizations were good about having retrospectives, but not so good about management commitment, and thus no improvements occurred.

— Published


Cost of Poor Software Quality: $242,000,000

By Rex Black

The Financial Times today featured an article on how a software bug--abysmally handled--in a financial application cost the company US$ 242,000,000:

http://www.ft.com/cms/s/0/5e1ba340-2feb-11e0-a7c6-00144feabdc0.html#axzz1D2IDiwLs

Because I don't know how long that link will live, here's the summary.

Axa Rosenberg Group had some quantitative analysis software that it used to service its clients' accounts.  Axa Rosenberg Group manages money for other people, and the software is an internal application, albeit one they touted as a key differentiator, apparently--and indeed it did turn out to be, though not in a happy way.

The software, released to production in 2007, had a bug that disabled a key risk-management component.  Apparently management found out about the bug in November 2009.  However, rather than fix the problem, they tried to cover up the reasons for the poor performance of their funds.

Over one third of their customers were affected by the bug.

A wee bit of analysis from yours truly:  I have clients in the financial world, and I know how hard it can be to test these kinds of applications.  When a calculation is wrong, it can be wrong in a way that is beyond the ability of a human tester to detect.  However, Axa Rosenberg Group's handling of the bug after they found out about it is truly a textbook illustration of how not to handle a software quality problem.

— Published


A Brief Call for Civil Discourse in Software Testing

By Rex Black

While I typically restrict myself to discussions and posts related purely to how to do and manage software testing better, I feel I must make a brief side expedition to the land of commentary.  This should not be a controversial commentary, but I'm afraid it will be for some.  I'd like to make a brief call for more civility in the way software testing professionals address each other, both in print and in person.

The following are real quotes from published articles this year (not an old year).  They are phrases used to describe software testing professionals.  They are used by people who style themselves as experts and coaches in the software testing profession.  See how professional and encouraging these words sound to you: "profiteer and bully," "risk-based testing cargo cult," "moral and intellectual bankrupt," "shadowy pseudo-experts," "power mad," and "embarrassingly stupid." 

I could go on, but you get the picture. 

I have a simple rule for public discourse, both on-line and in-person: if people want to participate in a debate or discussion with me, they can expect me to be civil and respectful towards them and towards other software testing professionals, and I expect the same from them.  It'll be a better software testing world, and we'll make a lot more progress together, when this simple rule--one we all learned as children, if we paid attention in school--wins out over the sort of self-promotion-through-name-calling that dominates so much of our debate. 

Back to your regularly scheduled fact-focused software testing blogging...

— Published


Free of Defects...or Not?

By Rex Black

Like most people, I don't always read those pesky agreements that come with software these days, but I made an exception for the TuneUp package I'm installing to try to revive my tired old Windows XP system.  I came across this curious contradiction in the warranty section of the agreement:

The Software and your documentation are free of defects if they can be used in accordance with the description of the Software and its functionalities that was provided by TuneUp at the point in time that you received the Software and documentation. Further qualities of the Software are not agreed.

Since no Software is free of defects, we urgently recommend you to back up your data regularly. 

Okay, guys, what is it?  Is the software free of defects or not?  If it is free of defects, perhaps you could enlighten us all on how you did that?

— Published


Advanced Test Manager: Improving the Test Process

By Rex Black

Here's another good observation on a question in the Advanced Test Manager book.  Gianni Pucciani commented about question 18 in chapter 3:

Assume you are a test manager in charge of integration testing, system testing, and acceptance testing for a bank. You are working on a project to upgrade an existing automated teller machine system to allow customers to obtain cash advances from supported credit cards. The system should allow cash advances from $20 to $500, inclusively, for all supported credit cards. The supported credit cards are American Express, Visa, Japan Credit Bank, Eurocard, and MasterCard.

During test execution, you find five defects, each reported by a different tester, that involve the same problem with cash advances, with the only difference between these reports being the credit card tested. Which of the following is an improvement to the test process that you might suggest?

A.  Revise all cash advance test cases to test with only one credit card.

B.  Review all reports filed subsequently and close any such duplicate defect reports before assignment to development.

C.  Change the requirements to delete support for American Express cards.

D.  Have testers check for similar problems with other cards and report their findings in defect reports.

The answer is D.  Gianni commented, "I see that B is a reactive solution, does not really improve the process. But I probably misinterpreted D: I thought of D as  a duplication of work, cause I thought it was suggesting that each testers execute the same test case with all the 4 credit cards. Instead I suppose the real sense was that each tester should just check, before filing a new bug, bug reports already opened on the same issue, and add information in there...  The improvement I would suggest is that each tester executes his/her own test cases with all the 4 cards, which I think is better than D."

Yes, Gianni, this is the sense in which I meant option D.  When testers find a bug, they should isolate it by checking against the other cards.  One of the problems with multiple choice questions is that you can't use an entire paragraph in each option!

— Published


Agile Testing Best Practices

By Rex Black

I'm going to start a semi-regular feature in this blog, talking about testing best practices.  If you know me and my consulting company, RBCS, you know that we spend time with clients around the world, in every possible industry, helping people improve their testing with training or consulting services, or doing testing for them with our outsourcing services.  Our work gives me insights into what goes on, the actual day-to-day practice of software testing.

Now, not all of what goes on is good.  There are bad practices, and we help clients fix those.  But you don’t need me to write about what not to do.  Aren’t there enough scolding bloviators in our business?  With a click of your mouse, you can read these people’s disdainful rants about testers they think are stupid, testers they think are in the wrong “school of testing,” testers they love to hate.  Lecture, scold, rant, bloviate.  How tedious!

So, being a contrarian, I will do the opposite:  With the exception of the paragraph above—where I poured well-earned scorn on people who write bad things about other testers—let's focus on good news.  A blog entry on best practices should discuss testing best practices that my associates and I have observed other smart people doing. 

I want to start with Agile testing when it works.  No, I’m not recanting.  Yes, I’ve written about the testing challenges of Agile, and I stand by what I wrote.  Yes, I can talk about testing worst practices in some Agile teams, and I might in some future post—but not here.  Here, I focus on what’s right about Agile.  Here are five testing best practices we’ve found in Agile done right:

Unit testing. Okay, it’s true that most programmers, even Agile programmers, still have a lot to learn about proper test design.  But if you’re a professional tester like me, you have to love hearing programmers talk about the importance of unit testing.  We all know that unit tested software is easier to system test.
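For readers who haven't seen programmer-style unit tests up close, here is a tiny sketch of the kind of test I mean, written in Python with the pytest framework.  Everything in it is invented for illustration (it loosely echoes the ATM cash-advance exam question elsewhere on this page); a real unit test suite would apply proper test design far more thoroughly:

```python
# test_cash_advance.py -- purely illustrative; the rule and values are invented.
import pytest

def approve_cash_advance(amount):
    """Toy business rule: advances from $20 to $500, inclusive, are allowed."""
    return 20 <= amount <= 500

@pytest.mark.parametrize("amount, expected", [
    (19, False),   # just below the lower boundary
    (20, True),    # lower boundary
    (500, True),   # upper boundary
    (501, False),  # just above the upper boundary
])
def test_cash_advance_boundaries(amount, expected):
    assert approve_cash_advance(amount) == expected
```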

Static analysis. Not only do smart Agile programmers like unit testing, they like static analysis, too. Coding standards are hip again.  Cyclomatic complexity is back.  Writing more testable, more maintainable code: that’ll make testers’ lives easier in the long run.
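For the curious, here is a minimal sketch of the kind of measurement those static analysis tools automate: a rough cyclomatic complexity count, approximated as one plus the number of decision points, using nothing but Python's standard library.  It's illustrative only; a real team would rely on a proper static analysis tool rather than this toy:

```python
# complexity_sketch.py -- a rough, illustrative approximation of cyclomatic
# complexity (1 + decision points) using only Python's standard library.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def rough_cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))

SAMPLE = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""

print(rough_cyclomatic_complexity(SAMPLE))  # 3: one straight path plus two decisions
```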

Component integration testing. This under-appreciated test level exists—on properly run Agile projects.  You can go years on sequential-model projects without seeing component integration testing.  However, on good Agile teams, people look for integration failures, and, because of continuous integration, the underlying integration bugs aren’t hard to find.

Tools, tools, tools—and many free.  All of this talk about unit testing, static analysis, and component integration testing would be just that—talk—without tool support.  Fortunately, the Agile—err, what should we call it?—movement, revolution, fad, concept, pick your term, has brought with it a lot of tools to support these best practices, along with other best practices.  For those of us without unlimited budgets—and isn’t that all of us?—a lot of the best tools are free, too. 

Tester and developer teamwork. At the beginning of our latest assessment, I had a great conversation with a test manager who works on Agile projects. Among areas of agreement: our shared joy at the death of a bad idea. The bad idea in question was this: the idea that the role of the test team is the quality cop, the enforcer, the Dirty Harry to the punks of the software team.  “Seeing as I can refuse to approve the release, you gotta ask yourself one question: Do you feel lucky, programmer?”  Instead, we see more people working together, collaborating for quality, and that’s especially true on good Agile teams.

One recent morning, I spent three hours with two programmers—real seasoned professionals with years in the field—talking about testing.  The testing that they did.  In fact, it wasn’t so much about testing, but testing as an essential tactical element in a larger strategy for higher quality code.  They really knew testing, and they knew how the Agile approach and tools were helping them to achieve better testing and thus better code.  At the end of our talk, I mentioned how much I enjoyed talking to programmers about good testing and good code.

One of them replied, “Yeah, we spend a lot of time around here talking to each other about that.  How to be better craftsmen.  How to test better.  How to build better code.”

Wow.  If the entire methodology, the lifecycle, the tools, and every other aspect of Agile fades away, leaving behind only the habits of programmers serious about code quality, and testers working cooperatively with them to achieve it, that will be a signal achievement in the software engineering profession.  Best practices, indeed.

— Published


Test Environments

By Rex Black

My colleague Gianni Pucciani wrote recently to suggest a discussion:

I would like to propose a discussion on your blog, about how to manage the testing environment when multiple testers are running tests concurrently, basically sharing the test environment. In my organization we rely heavily on virtualization, therefore each tester has it's own installation of the system under test on a separate virtual machine, and there are no concurrency issues. I was wondering whether this is a standard practice and how this issue was managed when virtualization software was not used as much as now.

This is a great topic for discussion.  Certainly, many of our clients are using virtualization to try to insulate testers from each other, and also to insulate manual and automated environments.  Probably the worst train wrecks that I've seen, from a test environment perspective, related to unvirtualized environments shared across manual and automated tests.

Of course, in some cases the systems under test only read data from shared repositories, which prevents the concurrency problem Gianni mentioned.  In other cases each instance of the system under test (one instance per tester) has its own data for reading and writing, which also avoids the problem.

So, how about other readers of the blog?  What have you done to deal with the problems that can arise with parallel testers, testing in the same hardware environments at the same time, or with concurrent manual and automated testing in the same hardware environments?  How has the much-ballyhooed cloud affected this issue, if at all?

— Published


Advanced Test Manager: A Bug Found

By Rex Black

It was bound to happen: Sharp-eyed reader Gianni Pucciani caught a bug in the Advanced Software Testing: Volume 2 book he is using to prepare for the ISTQB exam.

Question 15: You are a test manager in charge of system testing on a project to update a cruise-control module for a new model of a car. The goal of the cruise-control software update is to make the car more fuel efficient.

You have written a first release of the system test plan based on the final requirements specification. You receive an early draft of the design specification. Identify all of the following statements that are true.

A. Do not update the system test plan until the final version of the design specification is available.

B. Produce a draft update of the system test plan based on this version of the design specification.

C. Check this version of the design specification for inconsistencies with the requirements specification.

D. Participate in the final review of the design specification but not any preliminary reviews of the design specification.

E. Review the quality risk analysis to see if the design specification has identified additional risk items.

The answer key in the book says that A, C, and E are correct answers, but, as Gianni pointed out to me, the right answer is B, C, and E.  As he explained, "My reasoning was following the 'test early' principle, so even if the design is not complete, the information in there could help preparing the testing activities, especially if your are short of time and trust the design team."  That is, of course, correct.  Nice catch, Gianni.

— Published



Copyright © 2017 Rex Black Consulting Services.
All Rights Reserved.