Blog

Agile Testing Best Practices

By Rex Black

I'm going to start a semi-regular feature in this blog, talking about testing best practices.  If you know me and my consulting company, RBCS, you know that we spend time with clients around the world, in every possible industry, helping people improve their testing with training or consulting services, or doing testing for them with our outsourcing services.  Our work gives me insight into what goes on: the actual day-to-day practice of software testing.

Now, not all of what goes on is good.  There are bad practices, and we help clients fix those.  But you don’t need me to write about what not to do.  Aren’t there enough scolding bloviators in our business?  With a click of your mouse, you can read these people’s disdainful rants about testers they think are stupid, testers they think are in the wrong “school of testing,” testers they love to hate.  Lecture, scold, rant, bloviate.  How tedious!

So, being a contrarian, I will do the opposite:  With the exception of the paragraph above—where I poured well-earned scorn on people who write bad things about other testers—let's focus on good news.  A blog entry on best practices should discuss testing best practices that my associates and I have observed other smart people doing. 

I want to start with Agile testing when it works.  No, I’m not recanting.  Yes, I’ve written about the testing challenges of Agile, and I stand by what I wrote.  Yes, I can talk about testing worst practices in some Agile teams, and I might in some future post—but not here.  Here, I focus on what’s right about Agile.  Here are five testing best practices we’ve found in Agile done right:

Unit testing. Okay, it’s true that most programmers, even Agile programmers, still have a lot to learn about proper test design.  But if you’re a professional tester like me, you have to love hearing programmers talk about the importance of unit testing.  We all know that unit tested software is easier to system test.
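To make that concrete, here is a minimal sketch of the kind of unit test I mean.  The function under test, compute_discount, is hypothetical, invented purely for illustration; the test uses Python's built-in unittest module:

    # A minimal unit test sketch using Python's built-in unittest.
    # The function under test, compute_discount, is hypothetical.
    import unittest

    def compute_discount(subtotal, on_sale):
        """Apply a 10% discount when the item is on sale."""
        if subtotal < 0:
            raise ValueError("subtotal cannot be negative")
        return round(subtotal * 0.9, 2) if on_sale else subtotal

    class ComputeDiscountTest(unittest.TestCase):
        def test_discount_applied_when_on_sale(self):
            self.assertEqual(compute_discount(100.00, on_sale=True), 90.00)

        def test_no_discount_when_not_on_sale(self):
            self.assertEqual(compute_discount(100.00, on_sale=False), 100.00)

        def test_negative_subtotal_rejected(self):
            with self.assertRaises(ValueError):
                compute_discount(-1.00, on_sale=True)

    if __name__ == "__main__":
        unittest.main()

A programmer who writes tests like these hands the system test team code whose basic behavior, including its error handling, has already been checked.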

Static analysis. Not only do smart Agile programmers like unit testing, they like static analysis, too. Coding standards are hip again.  Cyclomatic complexity is back.  Writing more testable, more maintainable code: that’ll make testers’ lives easier in the long run.
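For instance, here is a hypothetical function annotated with its cyclomatic complexity, using the common rule of thumb that a single function's complexity is its number of decision points plus one (tools such as radon or the mccabe checker automate this counting for Python):

    # A hypothetical function, annotated with its cyclomatic complexity.
    # Rule of thumb for a single function: decision points + 1.

    def classify_order(total, is_member, has_coupon):
        # Decision points: two ifs and one elif, so complexity is 4.
        # (Extended metrics also count the "and", giving 5.)
        if total <= 0:
            return "invalid"
        if is_member and has_coupon:
            return "member-coupon"
        elif is_member:
            return "member"
        return "regular"

    print(classify_order(50, True, False))  # prints "member"

When static analysis flags a function whose complexity has crept past an agreed threshold (often ten), that is a cue to refactor, and every branch the metric counts is also a path a tester will eventually have to cover.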

Component integration testing. This under-appreciated test level exists—on properly run Agile projects.  You can go years on sequential-model projects without seeing component integration testing.  However, on a good Agile team, people look for integration failures, and, because of continuous integration, the underlying integration bugs aren’t hard to find.
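Here is a minimal sketch of what this test level looks like, with hypothetical components: two real units exercised together across their interface, with only the truly external dependency (a payment gateway) stubbed out:

    # A component integration test sketch (hypothetical components).
    # Unlike a unit test, it exercises two real components together,
    # stubbing only the external payment gateway.
    import unittest
    from unittest.mock import Mock

    class PricingComponent:
        def total(self, items):
            return sum(price for _, price in items)

    class OrderService:
        def __init__(self, pricing, gateway):
            self.pricing = pricing
            self.gateway = gateway

        def place_order(self, items):
            amount = self.pricing.total(items)  # real integration point
            return self.gateway.charge(amount)  # external system, stubbed

    class OrderPricingIntegrationTest(unittest.TestCase):
        def test_order_total_flows_through_to_gateway(self):
            gateway = Mock()
            gateway.charge.return_value = "ok"
            service = OrderService(PricingComponent(), gateway)
            result = service.place_order([("loafers", 80.0), ("socks", 5.0)])
            gateway.charge.assert_called_once_with(85.0)
            self.assertEqual(result, "ok")

    if __name__ == "__main__":
        unittest.main()

Run in a continuous integration pipeline on every commit, tests like this one catch interface mismatches within hours of their introduction.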

Tools, tools, tools—and many free.  All of this talk about unit testing, static analysis, and component integration testing would be just that—talk—without tool support.  Fortunately, the Agile—err, what should we call it?—movement, revolution, fad, concept, pick your term, has brought with it a lot of tools to support these best practices, along with other best practices.  For those of us without unlimited budgets—and isn’t that all of us?—a lot of the best tools are free, too. 

Tester and developer teamwork. At the beginning of our latest assessment, I had a great conversation with a test manager who works on Agile projects. Among our areas of agreement: a shared joy at the death of a bad idea.  The bad idea in question was that the test team’s role is to be the quality cop, the enforcer, the Dirty Harry to the punks of the software team.  “Seeing as I can refuse to approve the release, you gotta ask yourself one question: Do you feel lucky, programmer?”  Instead, we see more people working together, collaborating for quality, and that’s especially true on good Agile teams.

One recent morning, I spent three hours talking with two programmers—seasoned professionals with years in the field—about testing.  Specifically, the testing that they did.  In fact, it wasn’t so much about testing in isolation as about testing as an essential tactical element in a larger strategy for higher-quality code.  They really knew testing, and they knew how the Agile approach and tools were helping them to achieve better testing and thus better code.  At the end of our talk, I mentioned how much I enjoyed talking to programmers about good testing and good code.

One of them replied, “Yeah, we spend a lot of time around here talking to each other about that.  How to be better craftsmen.  How to test better.  How to build better code.”

Wow.  If the entire methodology, the lifecycle, the tools, and every other aspect of Agile fades away, leaving behind only the habits of programmers serious about code quality, and testers working cooperatively with them to achieve it, that will be a signal achievement in the software engineering profession.  Best practices, indeed.



Test Environments

By Rex Black

My colleague Gianni Pucciani wrote recently to suggest a discussion:

I would like to propose a discussion on your blog, about how to manage the testing environment when multiple testers are running tests concurrently, basically sharing the test environment. In my organization we rely heavily on virtualization, therefore each tester has its own installation of the system under test on a separate virtual machine, and there are no concurrency issues. I was wondering whether this is a standard practice and how this issue was managed when virtualization software was not used as much as now.

This is a great topic for discussion.  Certainly, many of our clients are using virtualization to try to insulate testers from each other, and also to insulate manual and automated environments.  Probably the worst train wrecks that I've seen, from a test environment perspective, related to unvirtualized environments shared across manual and automated tests.

Of course, in some cases the systems under test only read data from shared repositories, which prevents the concurrency problem Gianni mentioned.  In other cases each instance of the system under test (one instance per tester) has its own data for reading and writing, which also avoids the problem.

So, how about other readers of the blog?  What have you done to deal with the problems that can arise with parallel testers, testing in the same hardware environments at the same time, or with concurrent manual and automated testing in the same hardware environments?  How has the much-ballyhooed cloud affected this issue, if at all?



Advanced Test Manager: A Bug Found

By Rex Black

It was bound to happen: Sharp-eyed reader Gianni Pucciani caught a bug in the Advanced Software Testing: Volume 2 book he is using to prepare for the ISTQB exam.

Question 15: You are a test manager in charge of system testing on a project to update a cruise-control module for a new model of a car. The goal of the cruise-control software update is to make the car more fuel efficient.

You have written a first release of the system test plan based on the final requirements specification. You receive an early draft of the design specification. Identify all of the following statements that are true.

A. Do not update the system test plan until the final version of the design specification is available.

B. Produce a draft update of the system test plan based on this version of the design specification.

C. Check this version of the design specification for inconsistencies with the requirements specification.

D. Participate in the final review of the design specification but not any preliminary reviews of the design specification.

E. Review the quality risk analysis to see if the design specification has identified additional risk items.

The answer key in the book says that A, C, and E are correct answers, but, as Gianni pointed out to me, the right answer is B, C, and E.  As he explained, "My reasoning was following the 'test early' principle, so even if the design is not complete, the information in there could help in preparing the testing activities, especially if you are short of time and trust the design team."  That is, of course, correct.  Nice catch, Gianni.



Advanced Test Manager: Designing Tests from Requirements

By Rex Black

As I mentioned earlier in this blog, we are adopting a unique feature here. Readers can submit questions about my books for me to answer in this blog. I will answer at most one a week—as I have a lot of other work going on, which I hope everyone can understand—but I will get to the questions eventually. Here's the first question, from Gianni Pucciani of CERN.

Gianni wrote:

Hi Rex,

I finished reading the book Advanced Software Testing Vol.2 for the preparation of the ISTQB AL-TM. First of all thanks a lot, I found the book excellent, with lots of good tips that one could not know without adequate experience, and very well explained. Now I am reviewing all the chapters and their Q/A. I am planning to send you an email at the end of each chapter in case I have doubts, in order to clarify some of the questions.

For Chapter 1 I have only one doubt, on question #2 [which I've inserted here].

Assume you are a test manager working on a project to create a programmable thermostat for home use to control central heating, ventilation, and air conditioning (HVAC) systems. This project is following a sequential lifecycle model, specifically the V-model. Currently, the system architects have released a first draft design specification, based on the approved requirements specification released previously. Which of the following are appropriate test tasks to execute at this time?

A. Design tests from the requirements specification.
B. Analyze design-related risks.
C. Execute unit test cases.
D. Write the test summary report.
E. Design tests from the design specification.

The solution is A, B, E, but I don't agree on A. It asks to identify the tests that are appropriate to execute at this time (release of the first draft design, requirements specification was already released). A (design tests from the requirements specification) is wrong in my opinion because this should have already been done as soon as the requirements specification was available. So, I don't think A is appropriate; it can be done "now," but it should have been done before. I would agree with including A if the question was "identify the tests that can be done at this time". The chapter stresses the importance of testing activities aligned with the development process. Executing A at that time is, for me, an example of sub-optimal alignment. What do you think?

Thank you.
Best regards,
Gianni Pucciani
CERN IT Dept.

Gianni, you are correct that the design of tests based on the requirements should have started earlier,  which is indeed a key theme of the chapter.  However, that set of test tasks might not have been completed yet.  In addition, the design of tests from design specifications often involves referring to the requirements specification as well (e.g., as a test oracle).  Therefore, it is appropriate that the test tasks described in option A take place at this time.

I hope that helps?



Decision Tables and Testing

By Rex Black

Recently, one of our licensed instructors asked me about a question in our Advanced Test Analyst course, related to two very useful test design techniques, the decision table and the related cause-effect graph.  The question is as follows:

An on-line shoe-selling e-commerce Web site stocks the following options for men’s loafers:

  • Tassel: Tassel (T) or non-tassel (~T)
  • Color: Black (B), cordovan (C), or white (W)
  • Size: all full and half sizes from 8 to 14 (S=n)

The store is overstocked with tasseled loafers of all sizes and colors, along with white loafers in all sizes, and cordovan loafers in sizes 13, 13 ½, and 14. As a result, they are offering a 10% discount (10%) and free shipping (FS) on these items. Design a full decision table that shows all combinations of conditions, then collapse that table by using don’t care (“-”) notation where one or two conditions cannot influence the action. Which of the following statements is true about these two tables?

A. The full table has 8 rules; the collapsed table has 5.

B. The full table has 12 rules; the collapsed table has 7.

C. The full table has 12 rules; the collapsed table has 5.

D. Both tables have 12 rules, as no combinations can collapse.

The instructor wrote, "The answer is C – however I was wondering if you could explain the logic as to why?"

Okay, so here's the trick.  The full table has twelve rules (columns) because you have one condition with three possible values (color) and two conditions with two possible values (size >= 13 and tassel), so 3x2x2=12.  All six columns with tassel == true produce the same action (on sale), so they collapse into a single rule, leaving seven columns.  Of the six non-tassel columns, the two black columns collapse into one rule, because black loafers are not on sale regardless of size, and the two white columns collapse into one rule, because white loafers are on sale regardless of size.  The two non-tassel cordovan columns cannot collapse, because there the size determines the action.  That gives 1 + 1 + 1 + 2 = 5 rules in the collapsed table.
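If you want to check the arithmetic, here is a quick sketch in Python (my own illustration, not part of the course materials) that enumerates the full table and lists the collapsed rules:

    # Enumerate the full decision table for the loafer discount and
    # confirm the counts in answer C. The sale rule comes straight
    # from the question: tasseled loafers (any color, any size),
    # white loafers (any size), and cordovan loafers size 13 and up.
    from itertools import product

    def on_sale(tassel, color, size_13_up):
        return tassel or color == "W" or (color == "C" and size_13_up)

    full_table = list(product([True, False],    # tassel
                              ["B", "C", "W"],  # color
                              [True, False]))   # size >= 13
    print(len(full_table))  # 12 rules: 2 x 3 x 2

    # Collapse: a condition becomes don't care ("-") when flipping it
    # cannot change the action.
    collapsed = [
        ("T",  "-", "-"),     # tasseled: on sale, color and size don't care
        ("~T", "B", "-"),     # black, no tassel: never on sale
        ("~T", "W", "-"),     # white, no tassel: always on sale
        ("~T", "C", ">=13"),  # cordovan, no tassel, 13 and up: on sale
        ("~T", "C", "<13"),   # cordovan, no tassel, under 13: not on sale
    ]
    print(len(collapsed))  # 5 rules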

So, you can completely test the combinations of conditions for the business logic behind the discount with just twelve tests, and, if you are pressed for time, just five tests will give you pretty good risk mitigation.



Discussing My Software Testing Books

By Rex Black

From time to time, I get questions about the books I've written.  I've never found a way (at least, one that I thought worked properly) to handle those questions efficiently.  Now I have an idea, and we'll see if it works.  If you are a reader of one of my books, and have a question about something in that book, you can send the question to info@rbcs-us.com with the subject line "Book Question for Blog".  Put your question in the body of the e-mail, watch the blog, and within 2-3 days you should see an answer.



Building the Skills of Software Testers

By Rex Black

Throughout 2010, I’ve spent months doing a lot of traveling, and talking to a lot of testers and test managers around the world.  I’ve been to various spots in North America, China, Malaysia, New Zealand, Australia, Turkey, and Germany.  No matter where I go, I hear two comments fairly consistently from test managers and staff alike:  1) Management is pushing for increased productivity; and, 2) training budgets are tight.  For people to improve productivity, they have to improve their skills.  So, how can the smart test manager build the skills of her test staff without breaking the bank?  Let's evaluate various options.

To start, you need a skills management plan.  First, you perform a task analysis.  In a task analysis, you examine the tasks that your staff performs as part of their regular (and perhaps irregular) duties.  From this analysis, you then create a list of skills that someone would need to effectively and efficiently perform those tasks.

Second, you create a skills inventory.  In this step, for each of the skills you identified, you assess what skill level a perfectly qualified person would have, for each of the positions in your teams.  (This assumes that you have some degree of specialization in your teams, in that people are not considered interchangeable, but rather are assigned tasks based on their positions.)  You then assess your current team against these skills.

Third, based on this information, you can now perform a gap analysis for the skills in your current team.  In other words, what’s the gap between your current team and the perfect team? This tells you where skills augmentation is needed, and thus where training can have a positive return on investment.
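As a toy illustration of this step (the skill names and the 1-to-5 ratings below are made up for the example), the gap analysis itself is simple arithmetic: target level minus current level, skill by skill:

    # A toy sketch of skills gap analysis: target minus current, per skill.
    # Skill names and 1-5 ratings are invented for illustration.
    target  = {"test design": 4, "SQL": 3, "test automation": 4, "domain knowledge": 3}
    current = {"test design": 3, "SQL": 3, "test automation": 1, "domain knowledge": 2}

    gaps = {skill: target[skill] - current[skill]
            for skill in target if target[skill] > current[skill]}

    # Largest gaps first: these are the spots where training has the
    # best chance of a positive return on investment.
    for skill, gap in sorted(gaps.items(), key=lambda item: -item[1]):
        print(f"{skill}: gap of {gap} level(s)")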

Finally, your skills management plan must address how people will apply the skills you intend to improve.  This requires an opportunity to put those new skills to real-world use, on real tasks, within a few weeks (at most) of the person obtaining those new skills. It’s a classic worst-practice of training to send people to training courses and then assume that somehow, magically, that training will someday translate into increased effectiveness and efficiency.  In such cases, these new skills often molder unused so long that, by the time you need them, the skills are forgotten.

Okay, if you’ve followed these four steps, you now have a specific list of skills that you want to improve, for each member of your team, along with a plan for how to utilize those improved skills.  Time to select training options.

The training options you have available constitute both a spectrum and an a la carte menu.  The options are a spectrum in that the degree of investment ranges from high to low.  The options are also an a la carte menu in that you can certainly select multiple options, not just one.

The first option is live, instructor-led courses.  This can involve either sending one or more staff members to a public course, or having the course run at one or more sites in your company.  The advantages of live, instructor-led courses are the immediate attention of the instructor (including direct interaction when questions arise and discussions of the application of the concepts to specific situations) and, for on-site courses, the possibility of making the course a hands-on workshop focused on your specific skills gaps.  In addition, for some staff members, having them devote their time entirely to training for a continuous period improves their focus and retention.  The effectiveness of knowledge transfer is maximized, but so is the cost.

A second option, closely related, is what is called a virtual course.  In such a course, the instruction happens synchronously, as with a live course.  The course is instructor-led.  However, the instructor leads the course via a webinar or similar virtual classroom. This can cost less per attendee, but some less self-directed attendees can lose focus over time.

The third option is e-learning.  This typically involves some kind of asynchronous, browser-based interactive application.  The course should include some kind of presentation (e.g., animated slides) accompanied by a recorded audio lecture. Such courses should also include exercises and some kind of regular check of comprehension of the material. The latter is important because the instructor cannot monitor attendee comprehension directly in real time, so the attendees must check their own comprehension.  Typical ways to check comprehension include multiple choice questions about the material covered in the last few minutes of the e-learning lecture.

Some types of e-learning are called blended e-learning.  Blended e-learning combines webinar-type facilitation sessions with an asynchronous e-learning course.  The facilitation is instructor-led, and typically includes anywhere from two to six such sessions.  In these sessions, the instructor reinforces key ideas from the course.  Facilitation provides attendees with an opportunity to ask questions, and also provides structure that helps to keep less self-directed attendees engaged.

Pure asynchronous e-learning is typically considerably less expensive than live, instructor-led training, sometimes costing only a third to half as much.  Indeed, you can often purchase e-learning course enterprise licenses that allow the training of an unlimited number of attendees for a relatively small fixed cost. The addition of facilitation sessions adds cost, but savvy training customers can find ways to balance the cost of facilitation against the benefit.

A fourth option is self-study.  In such a situation, an attendee uses books, articles, blogs, podcasts, videos, and web-site materials to learn.  The range of internet options makes self-study truly attractive, and no one can argue with the price.  Buying one or two books and spending a few hours availing oneself of free internet resources is cheap.  Of course, the risk is that the attendee will spend time reading what are effectively sales pitches or, worse yet, really bad ideas.

The fifth option is cross-training and other forms of on-the-job training.  In such programs, you assign someone a task—and a mentor—that will allow them to expand their skills.  Obviously, the cost of such an approach is low, though remember to take into account the efficiency costs on both the person learning the new skill and the mentor. Even when other training options are used, I suggest that every skills growth initiative should include this option as the last step of cementing the new skills.  For example, if someone takes an e-learning course, they can then be assigned a mentor and a task that involves one or more of the new skills they have acquired.

As managers, we face economic exigencies that require us to become more effective and efficient in our use of all our resources.  In software testing, people are indeed the most important resource, because software testing work is brain-work. Training, in one form or another, is an essential part of becoming more effective and efficient.  To train your staff properly, start with a proper understanding of what they need to know—and what they currently don’t know.  Next, select options such as instructor-led training, e-learning, self-study, and cross-training to ensure proper skills transfer and the application of those skills to real-world problems.  If you develop and execute a smart skills-growth plan that covers these elements, you can expect significant improvements in your team’s abilities over the next six to eighteen months.



Risk Based Software Testing: Better Ways to Report Results

By Rex Black

As readers of this blog will know, I've spent a lot of time in this blog (and in articles, books, and consulting work) on the topic of risk based testing.  I recently spent some time in Japan, working with various teams at Sony to implement better risk based testing.  Part of that time was spent working with Atsushi Nagata, who is helping teams at Sony put risk based testing into action.  Together, he and I have written an article describing some ground-breaking work that they've done on risk based test results reporting.  That article was just published in ST&QA magazine.  You can read it (and comment on it) by clicking here.



Interviewing Software Testers

By Rex Black

I spend a lot of time traveling the world and talking to IT executives, managers, and staff members.  Even with the economy struggling, a surprising number of managers are hiring new staff.  I did a webinar on hiring in November, and over 300 people registered to attend.  You can listen to the recorded version of the webinar here.

Hiring is great, and it’s sure better to hire than to fire.  However, with hiring comes the possibility of the dreaded hiring mistake, the opening scene of one of the manager’s worst nightmares.  Since interviews are so important in selecting the right people to hire—and, ideally, deflecting the hiring mistakes—you need to be able to conduct effective interviews.  What are the important elements of an effective interviewing process?

One element is a good job description.  In addition to the obvious sections of this document, it should answer the following questions:

  • What tasks and responsibilities are involved?
  • What kind of experience is needed, and how much?
  • What specific skills are needed, and what is the career path?
  • What training, education, certification, or licenses are required?
  • What is the start date?
  • If unusual, what are the hours, the dress code, and the travel requirements?

The first four of these questions are easy to answer if you have done a task analysis and skills inventory for your team.  Finally, avoid a classic worst-practice of job descriptions by distinguishing between required and desirable qualifications.

With the job description in place, you can start interviewing candidates.  (I assume that your HR department will handle the actual recruiting activities that bring candidates and their resumes your way.)  To make the process efficient—to be blunt, to minimize wasted time on in-person interviews with unqualified candidates—I recommend using a phone interview (perhaps more accurately called a “phone screen”) to start.

This brings us to an important point that applies throughout the hiring process.  While we were all taught to be polite when growing up—and you should be polite in this entire process—we do need to turn off the politeness instinct that causes us to pull back and redirect questions when we sense that the other person is uncomfortable.  Remember, the objective of the hiring process is to hire the most qualified candidate, not to make all the interviewees completely comfortable.

So, in the phone screen, you should explore the person’s experience and qualifications in a polite but incisive way.  In particular, weed out people who pad their resume or inflate their experience.  If a buzzword or acronym is on a resume, check that the candidate has meaningful mastery of the subject. Carefully evaluate all claimed expertise and experience in the phone interview.  Be especially skeptical if a skill is listed without any description of a particular job where the skill was applied.  Also, you may want to verify degrees, certifications, and licenses if these are important.  If the candidate passes the phone screen, then you can schedule an in-person interview with yourself and others on the team in which the person will work.  Key managers who will work with the candidate are often included.  Again, I assume your HR team can help you set up the interview participants and schedule.

In the in-person interview, include a mix of qualification questions, behavioral interviewing, and audition interviewing.  Qualification questions are those with correct and incorrect answers; e.g., “What programming language is primarily used to write the Linux operating system?” Pick skills and knowledge that relate to the actual work the successful candidate will perform, then develop a set of good qualification questions for them.  Don’t make these questions so hard that no one gets them right; the objective is not to pose the riddle of the sphinx to the candidates, but to measure their level of skill.

Behavioral interviewing is concerned, obviously enough, with how a person will behave on the job.  Behavioral questions are open-ended, and often require candidates to relate their past experience to the job you are considering them for. For example, here are three possible behavioral interview questions:

  • Tell me about ways that past managers have enabled you to do your best work.
  • How will what you learned on project XYZ help us here at our company?
  • Of all the jobs you’ve had, what was your most enjoyable job, and what did you like the most?

Depending on the culture, work styles, and values of your company, the right answer to any of these questions at your company could be the wrong answer at another.

You should also include audition interviews.  An audition interview is where you set up an actual work task—or a scaled-down version of it—and ask the candidate to perform it.  For example, if you are hiring a test engineer, you could ask the candidate to create a test based on a requirements specification or user story from a past or current project.  As another example, if you are hiring a test technician, you could give the candidate a real test case—written to the same level of detail as most of your test cases—and have the candidate test a real system.  While audition interviews might sound complicated, they’re actually easy and fun once you get the hang of them.  I have hired some really good people—and not hired some people I might otherwise have mistakenly hired—based on their audition interviews.

Let me conclude with some cautionary notes on avoiding classic, all-too-common interviewing mistakes.  One of these mistakes is scaring off the candidates.  This can happen when people deliberately intimidate, stress out, or just plain weird out candidates in interviews.  It can also happen when interviewers who are having a bad day vent their frustrations; you should be honest about the good and bad aspects of the company, but don’t paint a bleak and bitter picture.  Another classic mistake, which I mentioned earlier, is being afraid to ask tough questions.  Probe for weaknesses in the skills and experience claimed in the resume and the interview.  Keep your ears open for fudging, vagueness, attempts to redirect the question, and the use of incorrect technical terms.  Another classic mistake is to break the law.  Be sure you know what questions and topics you can’t bring up.  Again, turn to your HR department to help you (and others who will be involved in interviewing) understand what topics are acceptable in an interview.

Successful teams are built by smart managers who have mastered the art of effective interviewing.  Effective interviewing starts with a good job description, as that document defines clear requirements for the position.  Effective interviewing should also include a good phone screen.  The interview process should include qualification questions, behavioral interviewing, and an audition interview. Don’t scare off the candidate or break the law, but do ask polite yet challenging questions. By including these essential elements of effective interviewing, you can be a smarter hiring manager, too.



Processes (Not Just Software Testing Processes), Enabled by Tools

By Rex Black

Often, software engineering processes—including but not limited to software testing processes—are made more efficient by tools, or in some cases are only enabled by the use of a tool.  When the tool is missing, the process breaks down.  The dependency—and thus the breakdown—might not be as obvious as shown in the picture below; sometimes you have to think harder about the problem.

[Photo: What Is Missing?]




Copyright © 2017 Rex Black Consulting Services.
All Rights Reserved.