
Blog

A Brief Call for Civil Discourse in Software Testing

By Rex Black

While I typically restrict myself to discussions and posts related purely to how to do and manage software testing better, I feel I must make a brief side expedition to the land of commentary.  This should not be a controversial commentary, but I'm afraid it will be for some.  I'd like to make a brief call for more civility in the way software testing professionals address each other, both in print and in person.

The following are real quotes from published articles this year (not an old year).  They are phrases used to describe software testing professionals.  They are used by people who style themselves as experts and coaches in the software testing profession.  See how professional and encouraging these words sound to you: "profiteer and bully," "risk-based testing cargo cult," "moral and intellectual bankrupt," "shadowy pseudo-experts," "power mad," and "embarrassingly stupid." 

I could go on, but you get the picture. 

I have a simple rule for public discourse, both on-line and in-person: if people want to participate in a debate or discussion with me, they can expect me to be civil and respectful towards them and towards other software testing professionals, and I expect the same from them.  It'll be a better software testing world, and we'll make a lot more progress together, when this simple rule--one we all learned as children, if we paid attention in school--wins out over the sort of self-promotion-through-name-calling that dominates so much of our debate. 

Back to your regularly scheduled fact-focused software testing blogging...

— Published


Free of Defects...or Not?

By Rex Black

Like most people, I don't always read those pesky agreements that come with software these days, but I made an exception for the Tune-Up package I'm installing to try to revive my tired old Windows XP system.  I came across this curious contradiction in the warranty section of the agreement:

The Software and your documentation are free of defects if they can be used in accordance with the description of the Software and its functionalities that was provided by TuneUp at the point in time that you received the Software and documentation. Further qualities of the Software are not agreed.

Since no Software is free of defects, we urgently recommend you to back up your data regularly. 

Okay, guys, what is it?  Is the software free of defects or not?  If it is free of defects, perhaps you could enlighten us all on how you did that?

— Published


Advanced Test Manager: Improving the Test Process

By Rex Black

Here's another good observation on a question in the Advanced Test Manager book.  Gianni Pucciani commented about question 18 in chapter 3:

Assume you are a test manager in charge of integration testing, system testing, and acceptance testing for a bank. You are working on a project to upgrade an existing automated teller machine system to allow customers to obtain cash advances from supported credit cards. The system should allow cash advances from $20 to $500, inclusively, for all supported credit cards. The supported credit cards are American Express, Visa, Japan Credit Bank, Eurocard, and MasterCard.

During test execution, you find five defects, each reported by a different tester, that involve the same problem with cash advances, with the only difference between these reports being the credit card tested. Which of the following is an improvement to the test process that you might suggest?

A.  Revise all cash advance test cases to test with only one credit card.

B.  Review all reports filed subsequently and close any such duplicate defect reports before assignment to development.

C.  Change the requirements to delete support for American Express cards.

D.  Have testers check for similar problems with other cards and report their findings in defect reports.

The answer is D.  Gianni commented, "I see that B is a reactive solution; it does not really improve the process. But I probably misinterpreted D: I thought of D as a duplication of work, because I thought it was suggesting that each tester execute the same test case with the other four credit cards. Instead I suppose the real sense was that each tester should just check, before filing a new bug, bug reports already opened on the same issue, and add information there...  The improvement I would suggest is that each tester executes his/her own test cases with the other four cards, which I think is better than D."

Yes, Gianni, this is the sense in which I meant option D.  When testers find a bug, they should isolate it by checking against the other cards.  One of the problems with multiple choice questions is that you can't use an entire paragraph in each option!

— Published


Agile Testing Best Practices

By Rex Black

I'm going to start a semi-regular feature in this blog, talking about testing best practices.  If you know me and my consulting company, RBCS, you know that we spend time with clients around the world, in every possible industry, helping people improve their testing with training or consulting services, or doing testing for them with our outsourcing services.  Our work gives me insights into what goes on, the actual day-to-day practice of software testing.

Now, not all of what goes on is good.  There are bad practices, and we help clients fix those.  But you don’t need me to write about what not to do.  Aren’t there enough scolding bloviators in our business?  With a click of your mouse, you can read these people’s disdainful rants about testers they think are stupid, testers they think are in the wrong “school of testing,” testers they love to hate.  Lecture, scold, rant, bloviate.  How tedious!

So, being a contrarian, I will do the opposite:  With the exception of the paragraph above—where I poured well-earned scorn on people who write bad things about other testers—let's focus on good news. A blog entry on best practices should discuss practices that my associates and I have observed other smart people using.

I want to start with Agile testing when it works.  No, I’m not recanting.  Yes, I’ve written about the testing challenges of Agile, and I stand by what I wrote.  Yes, I can talk about testing worst practices in some Agile teams, and I might in some future post—but not here.  Here, I focus on what’s right about Agile.  Here are five testing best practices we’ve found in Agile done right:

Unit testing. Okay, it’s true that most programmers, even Agile programmers, still have a lot to learn about proper test design.  But if you’re a professional tester like me, you have to love hearing programmers talk about the importance of unit testing.  We all know that unit-tested software is easier to system test.

Static analysis. Not only do smart Agile programmers like unit testing, they like static analysis, too. Coding standards are hip again.  Cyclomatic complexity is back (see the sketch at the end of this post for what that metric actually counts).  Writing more testable, more maintainable code: that’ll make testers’ lives easier in the long run.

Component integration testing. This under-appreciated test level exists—on properly run Agile projects.  You can go years on sequential-model projects without seeing component integration testing.  However, on good Agile teams, people look for integration failures, and, because of continuous integration, the underlying integration bugs aren’t hard to find.

Tools, tools, tools—and many free.  All of this talk about unit testing, static analysis, and component integration testing would be just that—talk—without tool support.  Fortunately, the Agile—err, what should we call it?—movement, revolution, fad, concept, pick your term, has brought with it a lot of tools to support these best practices, along with other best practices.  For those of us without unlimited budgets—and isn’t that all of us?—a lot of the best tools are free, too. 

Tester and developer teamwork. At the beginning of our latest assessment, I had a great conversation with a test manager who works on Agile projects. Among areas of agreement: our shared joy at the death of a bad idea. The bad idea in question was this: the idea that the role of the test team is the quality cop, the enforcer, the Dirty Harry to the punks of the software team.  “Seeing as I can refuse to approve the release, you gotta ask yourself one question: Do you feel lucky, programmer?”  Instead, we see more people working together, collaborating for quality, and that’s especially true on good Agile teams.

One recent morning, I spent three hours with two programmers—seasoned professionals with years in the field—talking about testing.  The testing that they did.  In fact, it wasn’t so much about testing in isolation as about testing as an essential tactical element in a larger strategy for higher-quality code.  They really knew testing, and they knew how the Agile approach and tools were helping them to achieve better testing and thus better code.  At the end of our talk, I mentioned how much I enjoyed talking to programmers about good testing and good code.

One of them replied, “Yeah, we spend a lot of time around here talking to each other about that.  How to be better craftsmen.  How to test better.  How to build better code.”

Wow.  If the entire methodology, the lifecycle, the tools, and every other aspect of Agile fades away, leaving behind only the habits of programmers serious about code quality, and testers working cooperatively with them to achieve it, that will be a signal achievement in the software engineering profession.  Best practices, indeed.

— Published


Test Environments

By Rex Black

My colleague Gianni Pucciani wrote recently to suggest a discussion:

I would like to propose a discussion on your blog, about how to manage the testing environment when multiple testers are running tests concurrently, basically sharing the test environment. In my organization we rely heavily on virtualization, therefore each tester has its own installation of the system under test on a separate virtual machine, and there are no concurrency issues. I was wondering whether this is a standard practice and how this issue was managed when virtualization software was not used as much as it is now.

This is a great topic for discussion.  Certainly, many of our clients are using virtualization to try to insulate testers from each other, and also to insulate manual and automated environments.  Probably the worst train wrecks that I've seen, from a test environment perspective, have involved unvirtualized environments shared between manual and automated tests.

Of course, in some cases the systems under test only read data from shared repositories, which prevents the concurrency problem Gianni mentioned.  In other cases each instance of the system under test (one instance per tester) has its own data for reading and writing, which also avoids the problem.

So, how about other readers of the blog?  What have you done to deal with the problems that can arise with parallel testers, testing in the same hardware environments at the same time, or with concurrent manual and automated testing in the same hardware environments?  How has the much-ballyhooed cloud affected this issue, if at all?

— Published


Advanced Test Manager: A Bug Found

By Rex Black

It was bound to happen: Sharp-eyed reader Gianni Pucciani caught a bug in the Advanced Software Testing: Volume 2 book he is using to prepare for the ISTQB exam.

Question 15: You are a test manager in charge of system testing on a project to update a cruise-control module for a new model of a car. The goal of the cruise-control software update is to make the car more fuel efficient.

You have written a first release of the system test plan based on the final requirements specification. You receive an early draft of the design specification. Identify all of the following statements that are true.

A. Do not update the system test plan until the final version of the design specification is available.

B. Produce a draft update of the system test plan based on this version of the design specification.

C. Check this version of the design specification for inconsistencies with the requirements specification.

D. Participate in the final review of the design specification but not any preliminary reviews of the design specification.

E. Review the quality risk analysis to see if the design specification has identified additional risk items.

The answer key in the book says that A, C, and E are correct answers, but, as Gianni pointed out to me, the right answer is B, C, and E.  As he explained, "My reasoning was following the 'test early' principle, so even if the design is not complete, the information in there could help in preparing the testing activities, especially if you are short of time and trust the design team."  That is, of course, correct.  Nice catch, Gianni.

— Published


Advanced Test Manager: Designing Tests from Requirements

By Rex Black

As I mentioned earlier in this blog, we are adopting a unique feature here: readers can submit questions about my books for me to answer in this blog. I will answer at most one a week—as I have a lot of other work going on, which I hope everyone can understand—but I will get to the questions eventually. Here's the first question, from Gianni Pucciani of CERN.

Gianni wrote:

Hi Rex,

I finished reading the book Advanced Software Testing Vol.2 for the preparation of the ISTQB AL-TM. First of all thanks a lot, I found the book excellent, with lots of good tips that one could not know without adequate experience, and very well explained. Now I am reviewing all the chapters and their Q/A. I am planning to send you an email at the end of each chapter in case I have doubts, in order to clarify some of the questions.

For Chapter 1 I have only one doubt, on question #2 [which I've inserted here].

Assume you are a test manager working on a project to create a programmable thermostat for home use to control central heating, ventilation, and air conditioning (HVAC) systems. This project is following a sequential lifecycle model, specifically the V-model. Currently, the system architects have released a first draft design specification, based on the approved requirements specification released previously. Which of the following are appropriate test tasks to execute at this time?

A. Design tests from the requirements specification.
B. Analyze design-related risks.
C. Execute unit test cases.
D. Write the test summary report.
E. Design tests from the design specification.

The solution is A, B, E, but I don't agree on A. It asks to identify the tests that are appropriate to execute at this time (release of the first draft design; the requirements specification was already released). A (design tests from the requirements specification) is wrong in my opinion because this should have already been done as soon as the requirements specification was available. So, I don't think A is appropriate: it can be done "now," but it should have been done before. I would agree with including A if the question was "identify the tests that can be done at this time". The chapter stresses the importance of testing activities aligned with the development process. Executing A at that time is, for me, an example of sub-optimal alignment. What do you think?

Thank you.
Best regards,
Gianni Pucciani
CERN IT Dept.

Gianni, you are correct that the design of tests based on the requirements should have started earlier, which is indeed a key theme of the chapter.  However, that set of test tasks might not have been completed yet.  In addition, the design of tests from design specifications often involves referring to the requirements specification as well (e.g., as a test oracle).  Therefore, it is appropriate that the test tasks described in option A take place at this time.

I hope that helps?

— Published


Decision Tables and Testing

By Rex Black

Recently, one of our licensed instructors asked me about a question in our Advanced Test Analyst course, concerning two very useful test design techniques: the decision table and the closely related cause-effect graph.  The question is as follows:

An on-line shoe-selling e-commerce Web site stocks the following options for men’s loafers:

  • Tassel: Tassel (T) or non-tassel (~T)
  • Color: Black (B), cordovan (C), or white (W)
  • Size: all full and half sizes from 8 to 14 (S=n)

The store is overstocked with tasseled loafers of all sizes and colors, along with white loafers in all sizes, and cordovan loafers in sizes 13, 13 ½, and 14. As a result, they are offering a 10% discount (10%) and free shipping (FS) on these items. Design a full decision table that shows all combinations of conditions, then collapse that table by using don’t care (“-“) notation where one or two conditions cannot influence the action. Which of the following statements is true about these two tables?

A. The full table has 8 rules; the collapsed table has 5.

B. The full table has 12 rules; the collapsed table has 7.

C. The full table has 12 rules; the collapsed table has 5.

D. Both tables have 12 rules, as no combinations can collapse.

The instructor wrote, "The answer is C – however I was wondering if you could explain the logic as to why?"

Okay, so here's the trick.  The full table has twelve rules (columns) because you have one condition with three possible values (color) and two conditions with two possible values each (tassel, and size >= 13), so 3 x 2 x 2 = 12.  Every column with tassel == true is on sale regardless of color and size, so those six columns collapse into one, leaving seven.  Of the six non-tassel columns, the two black columns collapse into one (black is never on sale, regardless of size), and the two white columns collapse into one (white is always on sale, regardless of size); only the two cordovan columns, where size decides the action, must stay separate.  That leaves five columns in the collapsed table.

So, you can completely test the combinations of conditions for the business logic behind the discount with just twelve tests, and, if you are pressed for time, just five tests will give you pretty good risk mitigation.

— Published


Discussing My Software Testing Books

By Rex Black

From time to time, I get questions about the books I've written.  I've never found a way (at least, one that I thought worked properly) to handle those questions efficiently.  Now I have an idea, and we'll see if it works.  If you are a reader of one of my books, and have a question about something in that book, you can send the question to info@rbcs-us.com with the subject line "Book Question for Blog".  Put your question in the body of the e-mail, watch the blog, and within 2-3 days you should see an answer.

— Published


Building the Skills of Software Testers

By Rex Black

Throughout 2010, I’ve spent months doing a lot of traveling, and talking to a lot of testers and test managers around the world.  I’ve been to various spots in North America, China, Malaysia, New Zealand, Australia, Turkey, and Germany.  No matter where I go, I hear two comments fairly consistently from test managers and staff alike:  1) Management is pushing for increased productivity; and 2) training budgets are tight.  For people to improve productivity, they have to improve their skills.  So, how can the smart test manager build the skills of her test staff without breaking the bank?  Let's evaluate various options.

To start, you need a skills management plan.  First, you perform a task analysis.  In a task analysis, you examine the tasks that your staff performs as part of their regular (and perhaps irregular) duties.  From this analysis, you then create a list of skills that someone would need to effectively and efficiently perform those tasks.

Second, you create a skills inventory.  In this step, for each of the skills you identified, you assess the skill level a perfectly qualified person would have, for each of the positions in your teams.  (This assumes that you have some degree of specialization in your teams, in that people are not considered interchangeable, but rather are assigned tasks based on their positions.)  You then assess your current team against these skills.

Third, based on this information, you can now perform a gap analysis for the skills in your current team.  In other words, what’s the gap between your current team and the perfect team?  This tells you where skills augmentation is needed, and thus where training can have a positive return on investment (see the sketch after these four steps).

Finally, your skills management plan must address how people will apply the skills you intend to improve.  This requires an opportunity to put those new skills to real-world use, on real tasks, within a few weeks (at most) of the person obtaining those new skills. It’s a classic worst-practice of training to send people to training courses and then assume that somehow, magically, that training will someday translate into increased effectiveness and efficiency.  In such cases, these new skills often molder unused so long that, by the time you need them, the skills are forgotten.

Okay, if you’ve followed these four steps, you now have a specific list of skills that you want to improve, for each member of your team, along with a plan for how to utilize those improved skills.  Time to select training options.

The training options you have available constitute both a spectrum and an a la carte menu.  The options are a spectrum in that the degree of investment ranges from high to low.  The options are also an a la carte menu in that you can certainly select multiple options, not just one.

The first option is live, instructor-led courses.  This can involve either sending one or more staff members to a public course, or having the course run at one or more sites in your company.  The advantages of live, instructor-led courses are the immediate attention of the instructor (including direct interaction when questions arise and discussions of the application of the concepts to specific situations) and, for on-site courses, the possibility of making the course a hands-on workshop focused on your specific skills gaps.  In addition, for some staff members, having them devote their time entirely to training for a continuous period improves their focus and retention.  The effectiveness of knowledge transfer is maximized, but so is the cost.

A second option, closely related, is what is called a virtual course.  In such a course, the instruction happens synchronously, as with a live course.  The course is instructor-led.  However, the instructor leads the course via a webinar or similar virtual classroom. This can cost less per attendee, but some less self-directed attendees can lose focus over time.

The third option is e-learning.  This typically involves some kind of asynchronous, browser-based interactive application.  The course should include some kind of presentation (e.g., animated slides) accompanied by a recorded audio lecture. Such courses should also include exercises and some kind of regular check of comprehension of the material. The latter is important because, since the instructor cannot monitor attendee comprehension directly in real-time, the attendees must check their own comprehension.  Typical ways to check comprehension include multiple choice questions about the material covered in the last few minutes of the e-learning lecture.

Some types of e-learning are called blended e-learning.  Blended e-learning combines webinar-type facilitation sessions with an asynchronous e-learning course.  The facilitation is instructor-led, and typically includes anywhere from two to six such sessions.  In these sessions, the instructor reinforces key ideas from the course.  Facilitation provides attendees with an opportunity to ask questions, and also provides structure that helps to keep less self-directed attendees engaged.

Pure asynchronous e-learning is typically considerably less expensive than live, instructor-led training, sometimes only a third to half the cost.  Indeed, you can often purchase e-learning course enterprise licenses that allow the training of an unlimited number of attendees for a relatively small fixed cost.  The addition of facilitation sessions adds cost, but savvy training customers can find ways to balance the cost of facilitation against the benefit.

A fourth option is self-study.  In such a situation, an attendee uses books, articles, blogs, podcasts, videos, and web-site materials to learn.  The range of internet options makes self-study truly attractive, and no one can argue with the price.  Buying one or two books and spending a few hours availing oneself of free internet resources is cheap.  Of course, the risk is that the attendee will spend time reading what are effectively sales pitches or, worse yet, really bad ideas.

The fifth option is cross-training and other forms of on-the-job training.  In such programs, you assign someone a task—and a mentor—that will allow them to expand their skills.  Obviously, the cost of such an approach is low, though remember to take into account the efficiency costs on both the person learning the new skill and the mentor.  Even when other training options are used, I suggest that every skills growth initiative should include this option as the last step of cementing the new skills.  For example, if someone takes an e-learning course, they can then be assigned a mentor and a task that involves one or more of the new skills they have acquired.

As managers, we face economic exigencies that require us to become more effective and efficient in our use of all our resources.  In software testing, people are indeed the most important resource, because software testing is brain-work.  Training, in one form or another, is an essential part of becoming more effective and efficient.  To train your staff properly, start with a proper understanding of what they need to know—and what they currently don’t know.  Next, select options such as instructor-led training, e-learning, self-study, and cross-training to ensure proper skills transfer and the application of those skills to real-world problems.  If you develop and execute a smart skills-growth plan that covers these elements, you can expect significant improvements in your team’s abilities over the next six to eighteen months.

— Published



Copyright © 2020 Rex Black Consulting Services.
All Rights Reserved.

