
Blog

Risk Based Software Testing: Better Ways to Report Results

By Rex Black

As readers of this blog will know, I've spent a lot of time here (and in articles, books, and consulting work) on the topic of risk based testing.  I recently spent some time in Japan, working with various teams at Sony to implement better risk based testing.  Part of that time was spent working with Atsushi Nagata, who is helping teams at Sony put risk based testing into action.  Together, he and I have written an article describing some ground-breaking work they've done on risk based test results reporting.  That article was just published in ST&QA magazine.  You can read it (and comment on it) by clicking here.

— Published


Interviewing Software Testers

By Rex Black

I spend a lot of time traveling the world and talking to IT executives, managers, and staff members.  Even with the economy struggling, a surprising number of managers are hiring new staff.  I did a webinar on hiring in November, and over 300 people registered to attend.  You can listen to the recorded version of the webinar here.

Hiring is great, and it’s sure better to hire than to fire.  However, with hiring comes the possibility of the dreaded hiring mistake, the opening scene of one of the manager’s worst nightmares.  Since interviews are so important in selecting the right people to hire—and, ideally, deflecting the hiring mistakes—you need to be able to conduct effective interviews.  What are the important elements of an effective interviewing process?

One element is a good job description.  In addition to the obvious sections of this document, it should answer the following questions:

  • What tasks and responsibilities are involved?
  • What and how much experience?
  • What specific skills are needed, and what is the career path?
  • What training, education, certification, or licenses are required?
  • What is the start date?
  • If unusual, what are the hours, the dress code, and the travel requirements?

The first four of these questions are easy to answer if you have done a task analysis and skills inventory for your team.  Finally, avoid a classic worst practice of job descriptions by distinguishing between required and desirable qualifications.

With the job description in place, you can start interviewing candidates.  (I assume that your HR department will handle the actual recruiting activities that bring candidates and their resumes your way.)  To make the process efficient—to be blunt, to minimize wasted time on in-person interviews with unqualified candidates—I recommend using a phone interview (perhaps more accurately called a “phone screen”) to start.

This brings us to an important point that applies to the entire interviewing and hiring process.  While we were all taught to be polite when growing up—and you should be polite throughout this process—we do need to turn off the politeness instinct that causes us to pull back and redirect questions when we sense that the other person is uncomfortable.  Remember, the objective of the hiring process is to hire the most qualified candidate, not to make all the interviewees completely comfortable.

So, in the phone screen, you should explore the person’s experience and qualifications in a polite but incisive way.  In particular, weed out people who pad their resume or inflate their experience.  If a buzzword or acronym is on a resume, check that the candidate has meaningful mastery of the subject. Carefully evaluate all claimed expertise and experience in the phone interview.  Be especially skeptical if a skill is listed without any description of a particular job where the skill was applied.  Also, you may want to verify degrees, certifications, and licenses if these are important.  If the candidate passes the phone screen, then you can schedule an in-person interview with yourself and others on the team in which the person will work.  Key managers who will work with the candidate are often included.  Again, I assume your HR team can help you set up the interview participants and schedule.

In the in-person interview, include a mix of qualification questions, behavioral interviewing, and audition interviewing.  Qualification questions are those with correct and incorrect answers; e.g., “What programming language is primarily used to write the Linux operating system?” Pick skills and knowledge that relate to the actual work the successful candidate will perform, then develop a set of good qualification questions for them.  Don’t make these questions so hard that no one gets them right; the objective is not to pose the riddle of the sphinx to the candidates, but to measure their level of skill.

Behavioral interviewing is concerned, obviously enough, with how a person will behave on the job.  Behavioral questions are open-ended, and often require candidates to relate their past experience to the job you are considering them for. For example, here are three possible behavioral interview questions:

  • Tell me about ways that past managers have enabled you to do your best work.
  • How will what you learned on project XYZ help us here at our company?
  • Of all the jobs you’ve had, which was the most enjoyable, and what did you like most about it?

Depending on the culture, work styles, and values of your company, the right answer to any of these questions at your company could be the wrong answer at another.

You should also include audition interviews.  An audition interview is one where you set up an actual work task--or a scaled-down version of it--and ask the candidate to perform it.  For example, if you are hiring a test engineer, you could ask the candidate to create a test based on a requirements specification or user story from a past or current project.  As another example, if you are hiring a test technician, you could give the candidate a real test case--written to the same level of detail as your typical test cases--and have the candidate test a real system.  While audition interviews might sound complicated, they’re actually easy and fun once you get the hang of them.  I have hired some really good people—and not hired some people I might otherwise have mistakenly hired—based on their audition interviews.

Let me conclude with some cautionary notes on avoiding classic, all-too-common interviewing mistakes.  One of these mistakes is scaring off the candidates.  This can happen when people deliberately intimidate, stress out, or just plain weird out candidates in interviews.  It can also happen when interviewers who are having a bad day vent their frustrations; you should be honest about the good and bad aspects of the company, but don’t paint a bleak and bitter picture.  Another classic mistake, which I mentioned earlier, is being afraid to ask tough questions.  Probe for weaknesses in the skills and experience claimed in the resume and the interview.  Keep your ears open for fudging, vagueness, attempts to redirect the question, and the use of incorrect technical terms.  Another classic mistake is to break the law.  Be sure you know what questions and topics you can’t bring up.  Again, turn to your HR department to help you (and others who will be involved in interviewing) understand what topics are acceptable in an interview.

Successful teams are built by smart managers who have mastered the art of effective interviewing.  Effective interviewing starts with a good job description, as that document defines clear requirements for the position.  Effective interviewing should also include a good phone screen.  The interview process should include qualification questions, behavioral interviewing, and an audition interview. Don’t scare off the candidate or break the law, but do ask polite yet challenging questions. By including these essential elements of effective interviewing, you can be a smarter hiring manager, too.

— Published


Processes (Not Just Software Testing Processes), Enabled by Tools

By Rex Black

Often, software engineering processes--including but not limited to software testing processes--are made more efficient by tools, or in some cases are only enabled by the use of a tool.  When the tool is missing, the process breaks down.  The dependency--and thus the breakdown--might not be as obvious as shown in the picture below; sometimes you have to think harder about the problem.

[caption id="attachment_218" align="alignnone" width="225" caption="What Is Missing?"]What Is Missing?[/caption]

— Published


An Agnostic Software Test Professional's Reflections on the Agile Principles

By Rex Black

I've made some comments, both on this blog and in various speeches/webinars/courses, about Agile development processes and how they affect testing.  However, I haven't addressed the entire set of Agile principles at once.  I haven't seen others who I would call "Agile agnostics" do so either.  (By "Agile agnostics" I mean those who do not cast themselves as proponents for or opponents of Agile.)  So, in this post, I make some test-centric observations about the Agile principles from the Agile manifesto.  These observations are based on my experiences working on Agile projects and working with Agile teams.

  • “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.”  Like any iterative lifecycle, Agile breaks the development work to be done into iterations.  Each iteration should create a collection of features that are potentially valuable to customers.  I say “potentially” because not all iterations do result in the delivery of features to customers, but, when practiced properly, each iteration’s features could be delivered to customers; i.e., each iteration is sufficiently complete and of sufficient quality. This focus on regularly assuring quality (in the most holistic meaning of the term “quality assurance”) is helpful to the test team.
  • “Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.” As a practical matter, this principle is one of the most challenging for testing, because there is a point in each iteration at which changes become highly disruptive in terms of complete testing of the change and the associated possible regressions.  Balancing quality and agility in the face of desired changes is important.
  • “Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.”  Most Agile teams and projects seem to have settled on iterations of two to four weeks.  At the end of each iteration, as mentioned above, the software should work and be potentially deliverable to customers.  This principle is challenging to testers, because of the short timeframes for test preparation and execution, but it also provides the benefit of limiting the number of features delivered for testing at any one time.  Short iterations also help to contain the number of bugs that could accumulate in the code prior to test execution, a distinct advantage to the test team.
  • “Business people and developers must work together daily throughout the project.”  This principle, while laudable in theory, is difficult.  Often, surrogates represent the users or customers, and business people are too busy to participate daily.  In addition, the absence of the word “testers” from the list above can create challenges.  However, accessibility of the business stakeholders to the project team is certainly helpful to the testers, and good Agile practices can make it easier for test teams to resolve questions about expected behavior.
  • “Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.”  This principle simply re-states what is called “theory Y management.”  Simply put, the theory is that people are essentially self-motivated and want to get work done.  Tom DeMarco and Tim Lister, in their books on management, are probably the leading current proponents of theory Y management in the software engineering discipline.  From a testing point of view, to the extent that individuals are motivated not simply to produce large volumes of features, but to produce quality features, this principle supports the testing process when realized.
  • “The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.”  Certainly, no one can argue with the idea that excessive documentation can reduce the efficiency of a project.  However, this principle can pose some challenges for the test team if the face-to-face conversations happen when testers are not in the room, thus leaving them disconnected from decisions about how the system should work, what features it should contain, and so forth.  The use of brief daily meetings to re-synchronize the team can help manage this challenge, but it’s essential that these meetings not expand to become so long that they are simply a different form of inefficiency. It’s also important that Agile teams remember that “less documentation” does not mean “no documentation;” essential documentation, including for testing, must still be prepared.
  • “Working software is the primary measure of progress.”  While this principle is also laudable from a testing point of view—most testers would accept that working software speaks for itself—this principle is sometimes stretched to the point that smart metrics, including metrics of quality and testing progress, are abandoned.  Good practices in terms of testing and quality metrics, measurement and management apply in Agile projects as with any other project, though the specific forms of the metrics tend to differ from the metrics used on sequential lifecycles.
  • “Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.”  Working a normal workweek, without overtime, weekends, excessive pressure or stress, is certainly a desirable goal.  In some cases, I've seen Agile teams avoid the “death march” behaviors that arise at the end of some projects when large feature backlogs and enormous numbers of bugs overwhelm the teams.  However, I've seen plenty of situations where overtime is a regular feature at the end of every iteration, and where this burden falls disproportionately on testers.  This principle remains under-fulfilled in practice, to the detriment of testers.
  • “Continuous attention to technical excellence and good design enhances agility.”  Music to the testers’ ears, indeed.  Of course, some programmers find it hard to make the transition from big chunks of development work, done on rushed schedules (with the concomitant quality compromises) to the smaller chunks of work, done carefully, that are proposed by this principle.  I hope to see better realization of this principle on actual projects as software engineering professionals internalize the practice of Agile development.
  • “Simplicity—the art of maximizing the amount of work not done—is essential.” As testers, this principle is also a joy to hear, and to see.  Simplicity implies a small set of high-quality, working features, the opposite of the untestable, complex, sprawling applications that are so hard to cover in any reasonable sense.  Again, though, this is a major mental shift for many software engineering professionals, and this principle is under-realized in practice.
  • “The best architectures, requirements, and designs emerge from self-organizing teams.”  As with the theory Y management topic discussed earlier, this is an assertion about the nature of human psychology and capabilities that is beyond the scope of this post.  However, I have observed that Agile processes don’t always scale smoothly to large, complex, and especially distributed projects and teams.  I have also seen and heard of instances where the reality of “emergent design” and “emergent architecture” was considerably less satisfactory than this principle might lead us to expect; the cliché about “painting oneself into a corner” can apply.  With complex applications, testers should watch carefully for problems with performance, maintainability, and reliability, because these can reflect fundamental design and architecture problems that are difficult to fix after too many iterations have gone by.
  • “At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.” Of course, project retrospectives are hardly an Agile innovation, but all testers would agree that such periodic reflection is a great idea. Testers should encourage the actualization of this principle, and ensure that becoming more effective at producing quality is part of the agenda.

These observations are reflections on a work-in-progress.  Software engineering teams are still learning how to apply Agile approaches.  Agile approaches have not (yet?) been successfully applied to all types of projects or products.  Some tester challenges remain to be surmounted with respect to Agile development.  However, Agile methodologies are starting to show promising results in terms of both development efficiency and quality of the delivered code.

So, what do you think about Agile methodologies and testing?  I'd be happy to discuss this topic with interested readers of this blog.

— Published


Product Quality Includes the Whole Product

By Rex Black

Fresh off my diatribe just a couple weeks ago about how McAfee had a bug that serendipitously made a credit card charge in their favor, here's another "bug from the trenches."  Now, Cisco enjoys a good reputation for the quality of their products, but the packaging doesn't exactly inspire confidence.  Can you spot the error?

[caption id="attachment_210" align="alignnone" width="300" caption="Does Packaging Quality Matter? Yes"]Does Packaging Quality Matter?  Yes[/caption]

The point of this post is not to yank Cisco's chain--though I am surprised to see such a large, serious organization making such an obvious and silly mistake--but to reinforce an important point. Product quality includes the whole product, and that includes the package.  I've done some consulting for systems and consumer electronics makers, and most of them include testing of the "out of box experience". 

Cisco, did your OBE testers miss this one?  I'd be happy to post a comment from Cisco explaining how this kind of obvious bug snuck past.

— Published


Why Is Software Quality So Unmeasured?

By Rex Black

As regular readers of this blog know, from time to time I like to throw a topic out for discussion and see what comes of it.  It's that time.

I have said (more than once) that most companies manage their holiday parties more rigorously and quantitatively than they manage the quality of their software.  That's not just a throwaway snarky line: It's a fact.  You can go to the treasurer, CFO, head accountant, or whatever the moneybags person is called in any organization worth calling an organization, and ask that person, "How much money did you spend on holiday parties last year?"  You'll get an answer.  You can ask people at that same organization, "What benefits did you receive from those parties?"  You'll get an answer (albeit probably not as quantitative).

Now, try this experiment: Ask Mr. or Ms. Moneybags, "How much money did your organization spend on software testing and software quality last year?"  While they might be able to answer the first half of that question (about testing), most organizations couldn't answer the second half, even though a technique for getting a good approximation of the costs of software quality has been around a long time.  (For example, check out the free tool here and also the article here.)  You can ask them about the benefits received from testing and quality and get the same lack of solid answers most of the time. 
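
If you've never seen the technique in action, here is a minimal sketch (with entirely made-up numbers) of how such a cost-of-quality roll-up can be tallied, assuming the classic prevention, appraisal, internal failure, and external failure breakdown; the free tool and article linked above go into much more depth.

```python
# Minimal sketch of a cost-of-quality tally, assuming the classic
# prevention / appraisal / internal-failure / external-failure breakdown.
# All figures below are illustrative placeholders, not real data.

costs = {
    "prevention": {           # e.g., training, process improvement
        "tester_training": 25_000,
        "static_analysis_tools": 10_000,
    },
    "appraisal": {            # e.g., the testing effort itself
        "test_team_salaries": 400_000,
        "test_environments": 60_000,
    },
    "internal_failure": {     # bugs found and fixed before release
        "bug_fixing_before_release": 150_000,
        "retesting": 45_000,
    },
    "external_failure": {     # bugs that escaped to customers
        "support_calls": 90_000,
        "field_fixes_and_patches": 120_000,
    },
}

def cost_of_quality(cost_breakdown: dict) -> dict:
    """Roll up each category and the grand total."""
    by_category = {name: sum(items.values()) for name, items in cost_breakdown.items()}
    by_category["total_cost_of_quality"] = sum(by_category.values())
    return by_category

if __name__ == "__main__":
    for category, amount in cost_of_quality(costs).items():
        print(f"{category:>25}: ${amount:,}")
```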

Given that people have been accounting for things for thousands of years (e.g., I saw a 4,000-year-old receipt for donkeys and other sundry items, written in Sumerian cuneiform, in a museum in Japan last month), and given the widely acknowledged importance of software in the modern economy, how do we explain this lack of fiscal measurement?

Moving beyond cost, while some of our clients do have reasonably good metrics for product quality (what some call "quality in use metrics"), many companies do not.  Some companies that do have such metrics don't tie those metrics back to what is getting tested.  We've seen situations where companies knew they had interoperability problems in their data centers and yet, when we asked people who was responsible for interoperability testing, the accountability trail went around in circles and ended up nowhere.  Same story for performance and reliability. 

So, why does this happen?  The cynic in me wants to say that this problem comes down to a lack of legal liability for quality and quality problems.  In other words, until organizations are held to the same legal standards for software quality as they would be for other products (e.g., food, cars, etc.), we will see this immature approach to managing and measuring quality.  But is it really that simple?  What do you think?  Comments, please.

— Published


More Thoughts on Software Testing

By Rex Black

You can find the second part of the uTest interview here.

— Published


Some Thoughts on Software Testing

By Rex Black

While I try to stay focused on facts (along with illustrative case studies) in this blog, sometimes people ask my opinion on software testing, usually in the form of interviews.  uTest did that recently.  You can find the first part here.

— Published


McAfee Online Systems, Including Financials, Not Tested?

By Rex Black

Some of the readers of this blog have perhaps read or heard me say that the state of the common practice of software testing lags about 25 years behind the state of the art.  Here's a situation that is either a case study in why I say that, or something worse.

We (RBCS) used McAfee AntiVirus software for years on many of the PCs in our office as well as on company laptops.  This was mostly due to laziness, not a high level of satisfaction.  McAfee often came pre-installed on computers we bought, so we would simply renew the update subscriptions.  However, repeated problems with McAfee's virus protection--especially its aggressive interference with e-mail programs--led us to switch to Trend Micro, which now runs on all our PCs.

On September 30, we received an e-mail from McAfee (specifically, an account called "subscriptions" at "mcafee.com").  This e-mail read, in part, "ATTENTION: The credit card linked to your account for McAfee-based security products has expired. Please update your account now to keep your PC protected without interruption."  After checking to make sure that we were no longer using McAfee software on any of our computers, we ignored the message.  After all, the credit card had expired.

It turns out that McAfee's software had a bug that caused it to submit the charge anyway: my October 13 American Express bill shows a charge for $43.29 from McAfee.com.  No receipt was mailed for the charge, unlike in previous years, though I'm not sure this has anything to do with the expired card. We're going to contest and reverse this charge, of course; it is a relatively minor hassle.

The testing implications of this situation are significant, though.  Remember, antivirus software, like other security software, is software that we rely on to keep systems safe.  Given the kind of damage a widespread interruption to computer systems due to virus attacks can cause, you'd expect the entire suite of software--including the online systems that provide updates and handle customer financial matters--to be well tested, following known best practices and techniques.

So, what's the explanation for this situation?  If a credit card has expired, the system should not silently put through a charge on the card, especially when the system has sent an e-mail to the customer giving the impression that, unless the customer takes a specific and deliberate action to update the card, no charge will occur and the subscription will expire.  We have an obvious equivalence partition. Equivalence partitioning as a test technique is over 25 years old.  We clearly have a block of code that recognizes the equivalence partition and triggers an e-mail to the customer prior to the charge occurring.  Statement coverage, the lowest of the white box coverage criteria, is also a test technique that is over 25 years old.
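
To make the point concrete, here is a minimal sketch of what an equivalence partition test for this rule might look like; the billing function shown is a hypothetical stand-in, not McAfee's actual code, and each test exercises one representative value from its partition.

```python
# Minimal sketch of an equivalence-partition test for the expired-card case.
# The billing rule below is a hypothetical illustration, not McAfee's code.
import unittest
from datetime import date

def should_attempt_charge(card_expiry: date, today: date) -> bool:
    """Hypothetical billing rule: only charge cards that have not expired."""
    return card_expiry >= today

class ExpiredCardPartitionTest(unittest.TestCase):
    TODAY = date(2010, 10, 13)  # illustrative "current" date

    def test_expired_partition_is_not_charged(self):
        # One representative value from the "expired card" partition.
        self.assertFalse(should_attempt_charge(date(2010, 9, 30), self.TODAY))

    def test_valid_partition_is_charged(self):
        # One representative value from the "not expired" partition.
        self.assertTrue(should_attempt_charge(date(2011, 1, 31), self.TODAY))

if __name__ == "__main__":
    unittest.main()
```

Covering both partitions with even one test value each would have caught the behavior described above.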

In risk based testing--a topic I've covered often in this blog--it's almost certain that two financial-related risks would be noted during quality risk identification:

  • Processing a credit card transaction that should be rejected for processing
  • Not processing a credit card transaction that should be accepted for processing

Best practices of risk based testing would generally lead to such risks being given a high impact rating, even if the likelihood rating were low.  That would require more than cursory testing against such risks.  Now, analytical risk based testing as a strategy is relatively leading edge, having been perfected only in the last ten years.

Maybe someone at McAfee would care to post a comment about which of the following statements is true:

  1. McAfee software systems receive only cursory testing, with financial processing code tested to less than 100% equivalence partition coverage and less than 100% statement coverage.  McAfee's test team was not aware that the system attempts charges against credit cards it knows to be no longer valid.
  2. McAfee's test team was aware that the system attempts charges against credit cards it knows to be no longer valid.  A bug was reported against this behavior by the testers, and deliberately deferred or cancelled by the product or project manager because they decided that they wanted the additional revenue from former customers.

Of course, it's quite possible that McAfee's financial processing code is tested to less than 100% equivalence partition coverage and less than 100% statement coverage, but that their testers found this bug even so.  After all, testing to force all possible messages to occur--a simple experience based technique recommended by James Whittaker in How to Break Software--would have revealed this bug as well.

In all, any of three well-established test design techniques--one black box (equivalence partitioning), one white box (statement coverage), and one experience based (the force-all-messages attack)--would have found this bug.  Risk based testing would have led to more thorough coverage of the underlying quality risk. If financial-related quality risks are not being tested using well-established best practices, then what else isn't being tested in McAfee's systems?

— Published


A Holiday Gift for Yourself: Improving Your Testing by Christmas

By Rex Black

For those of us on the Western calendar, we have some holiday time coming soon, including the December break.  Many of us will spend this time relaxing, which is always good. However, why not invest a little of your holiday time in improving your testing operation? After all, if you’re like most testers, you are time constrained and need improvements that can be made quickly and that show fast results. So here are three practical ideas that you can put into action before January arrives and that will make a noticeable difference when you take on the projects that await in 2011.

Get Hip to Risk-Based Testing

I've gone on quite a bit in this blog about risk based testing, but let's keep it short and sweet here.  I have a simple rule of thumb for test execution: Find the scary stuff first. How do we do this? Make smart guesses about where high-impact bugs are likely. How do we do that? Risk-based testing.

In a nutshell, risk-based testing consists of the following:

1. Identify specific risks to system quality.

2. Assess and assign the level of risk for each risk, based on likelihood (technical considerations) and impact (business considerations).

3. Allocate test effort and prioritize (sequence) test execution based on risk.

4. Revise the risk analysis at regular intervals in the project, including after testing the first build.

You can make this process as formal or as informal as necessary. We have helped clients get started doing risk-based testing in as little as one day, though one week is more typical. You can mine this blog for more ideas, check out a few articles on the RBCS web site (such as this one and this one), watch the year-long series of videos in our Digital Library, or read my books Managing the Testing Process (for the test management perspective) or Pragmatic Software Testing (for the test analyst perspective).
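
For the purely arithmetical core of steps 2 and 3 above, here is a minimal sketch; the risk items and the 1-to-5 scales are illustrative assumptions, and real projects may well use different scales and weightings.

```python
# Minimal sketch of the core arithmetic behind risk-based test prioritization:
# risk priority = likelihood x impact, then sequence test execution from the
# riskiest item down. Risk items and scales are illustrative, not real data.

from dataclasses import dataclass

@dataclass
class QualityRisk:
    name: str
    likelihood: int  # technical considerations, 1 (low) to 5 (high)
    impact: int      # business considerations, 1 (low) to 5 (high)

    @property
    def priority(self) -> int:
        return self.likelihood * self.impact

risks = [
    QualityRisk("Payment processing rejects valid cards", likelihood=2, impact=5),
    QualityRisk("Report layout misaligned when printed", likelihood=4, impact=1),
    QualityRisk("Nightly batch job loses records under load", likelihood=3, impact=5),
]

# "Find the scary stuff first": run tests for the highest-priority risks first.
for risk in sorted(risks, key=lambda r: r.priority, reverse=True):
    print(f"priority {risk.priority:2}: {risk.name}")
```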

Whip Those Bug Reports into Shape

One of the major deliverables for us as testers is the bug report. But, like Rodney Dangerfield, the bug report gets “no respect” in too many organizations. Just because we write them all the time doesn’t mean they aren’t critical—quite the contrary—and it doesn’t mean we know how to write them well. Most test groups have opportunities to improve their bug reporting process.

When RBCS does test assessments for clients, we always look at the quality of the bug reports. We focus on three questions:

1. What is the percentage of rejected bug reports?

2. What is the percentage of duplicate bug reports?

3. Do all project stakeholder groups feel they are getting the information they need from the bug reports?

If the answer to questions one or two is, “More than 5%,” we do further analysis as to why. (Hint: This isn’t always a matter of tester competence, so don’t assume it is.) If the answer to question three is, “No,” then we spend time figuring out which project stakeholders are being overlooked or underserved. Recommendations in our assessment reports will include ways to get these measures where they ought to be. Asking the stakeholders what they need from the bug reports is a great way to start—and to improve your relationships with your coworkers, too.
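
If you want to start tracking the first two numbers yourself, the arithmetic is simple; here is a minimal sketch, using made-up report statuses and a tiny sample just to show the shape of the calculation.

```python
# Minimal sketch of the two bug-report metrics mentioned above: the percentage
# of rejected reports and the percentage of duplicates. The statuses and the
# sample data are illustrative, not tied to any particular bug-tracking tool.

bug_reports = [
    {"id": 101, "status": "fixed"},
    {"id": 102, "status": "rejected"},
    {"id": 103, "status": "duplicate"},
    {"id": 104, "status": "fixed"},
    {"id": 105, "status": "open"},
]

def percentage_with_status(reports, status):
    matching = sum(1 for r in reports if r["status"] == status)
    return 100.0 * matching / len(reports) if reports else 0.0

rejected_pct = percentage_with_status(bug_reports, "rejected")
duplicate_pct = percentage_with_status(bug_reports, "duplicate")

print(f"Rejected:  {rejected_pct:.1f}%")   # more than 5% would trigger further analysis
print(f"Duplicate: {duplicate_pct:.1f}%")
```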

Read a Book on Testing

Most practicing testers have never read a book on testing. This is regrettable. We have a lot we can learn from each other in this field, but we have to reach out to gain that knowledge.

(Lest you consider this suggestion self-serving, let me point out that writing technical books yields meager book royalties. In fact, on an hourly basis it’s more lucrative to work bagging groceries at a supermarket. Other benefits, including the opportunity to improve our field, are what motivate most of us.)

There are many good books on testing out there now. Here’s a small selection, any one of which you could work your way through during a winter vacation:

  • General tips and techniques for test engineers: Pragmatic Software Testing, Rex Black; A Practitioner’s Guide to Software Test Design, Lee Copeland.
  • Object-oriented testing: Testing Object-Oriented Systems, Robert Binder.
  • Web testing: The Web Testing Handbook, Steve Splaine.
  • Security testing: Testing Web Security, Steve Splaine; How to Break Software Security, James Whittaker.
  • Dynamic test strategies and techniques: How to Break Software, James Whittaker; Advanced Software Testing: Volume 1, Rex Black.
  • Test management: Managing the Testing Process, Rex Black; Advanced Software Testing: Volume 2, Rex Black.
  • Test process assessment and improvement: Critical Testing Processes, Rex Black; Test Process Improvement, Martin Pol et al.
  • ISTQB tester certification: Foundations of Software Testing, Rex Black et al; The Testing Practitioner, ed. Erik van Veenendaal; Advanced Software Testing: Volumes 1, 2, and 3, Rex Black et al.

I have read each of these books (some of which I also wrote or co-wrote). I can promise you that, if you need to learn about the topic given, reading one of the books for that topic will repay you in hours and hours saved over the years, as well as teaching you at least one or two good ideas you can put in place immediately.

— Published



Copyright © 2020 Rex Black Consulting Services.
All Rights Reserved.