As readers of this blog will know, I've spent a lot of time in this blog (and in articles, books, and consulting work) on the topic of risk based testing. I recently spent some time in Japan, working with various teams in Sony to implement better risk based testing. Part of that time was spent working with Atsushi Nagata, who is helping teams in Sony put risk based testing into action. Together, he and I have written an article describing some ground-breaking work that they've done on risk based test results reporting. That article was just published in ST&QA magazine. You can read it (and comment on it) by clicking here.
I spend a lot of time traveling the world and talking to IT executives, managers, and staff members. Even with the economy struggling, a surprising number of managers are hiring new staff. I did a webinar on hiring in November, and over 300 people registered to attend. You can listen to the recorded version of the webinar here.
Hiring is great, and it’s sure better to hire than to fire. However, with hiring comes the possibility of the dreaded hiring mistake, the opening scene of one of the manager’s worst nightmares. Since interviews are so important in selecting the right people to hire—and, ideally, deflecting the hiring mistakes—you need to be able to conduct effective interviews. What are the important elements of an effective interviewing process?
One element is a good job description. In addition to the obvious sections of this document, it should answer the following questions:
The first four of these questions are easy to answer if you have done a task analysis and skills inventory for your team. Finally, avoid a classic worst practice of job descriptions by distinguishing between required and desirable qualifications.
With the job description in place, you can start interviewing candidates. (I assume that your HR department will handle the actual recruiting activities that bring candidates and their resumes your way.) To make the process efficient—to be blunt, to minimize wasted time on in-person interviews with unqualified candidates—I recommend using a phone interview (perhaps more accurately called a “phone screen”) to start.
This brings us to an important point that applies to the interview hiring process. While we were all taught to be polite when growing up—and you should be polite in this entire process—we do need to turn off the politeness instinct that causes us to pull back and redirect questions when we sense that the other person is uncomfortable. Remember, the objective of the hiring process is to hire the most qualified candidate, not to make all the interviewees completely comfortable.
So, in the phone screen, you should explore the person’s experience and qualifications in a polite but incisive way. In particular, weed out people who pad their resume or inflate their experience. If a buzzword or acronym is on a resume, check that the candidate has meaningful mastery of the subject. Carefully evaluate all claimed expertise and experience in the phone interview. Be especially skeptical if a skill is listed without any description of a particular job where the skill was applied. Also, you may want to verify degrees, certifications, and licenses if these are important. If the candidate passes the phone screen, then you can schedule an in-person interview with yourself and others on the team in which the person will work. Key managers who will work with the candidate are often included. Again, I assume your HR team can help you set up the interview participants and schedule.
In the in-person interview, include a mix of qualification questions, behavioral interviewing, and audition interviewing. Qualification questions are those with correct and incorrect answers; e.g., “What programming language is primarily used to write the Linux operating system?” Pick skills and knowledge that relate to the actual work the successful candidate will perform, then develop a set of good qualification questions for them. Don’t make these questions so hard that no one gets them right; the objective is not to pose the riddle of the sphinx to the candidates, but to measure their level of skill.
Behavioral interviewing is concerned, obviously enough, with how a person will behave on the job. Behavioral questions are open-ended, and often require candidates to relate their past experience to the job you are considering them for. For example, here are three possible behavioral interview questions:
Depending on the culture, work styles, and values of your company, the right answer to any of these questions at your company could be the wrong answer at another.
You should also include audition interviews. An audition interview is where you set up an actual work task—or a scaled-down version of it—and ask the candidate to perform it. For example, if you are hiring a test engineer, you could ask the candidate to create a test based on a requirements specification or user story from a past or current project. As another example, if you are hiring a test technician, you could give the candidate a real test case--written to the same level of detail as most of your test cases--and have the candidate test a real system. While audition interviews might sound complicated, they’re actually easy and fun once you get the hang of them. I have hired some really good people—and not hired some people I might otherwise have mistakenly hired—based on their audition interviews.
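As a hypothetical example of what a test-engineer audition might contain, here is a one-line user story and the kind of boundary-aware tests a strong candidate would produce. The story, the function, and the dollar values are all invented for this sketch, not taken from any real audition:

```python
# Hypothetical audition exercise: hand the candidate this user story and
# ask them to design tests for it.
#
# User story: "As a shopper, I receive a 10% discount on any order
# totaling more than $100; orders of $100 or less are charged in full."

def discounted_total(order_total):
    """Reference implementation the candidate's tests run against."""
    if order_total > 100:
        return round(order_total * 0.90, 2)
    return order_total

# A strong candidate probes the boundary at $100, not just typical values.
tests = [
    (50.00, 50.00),    # well inside the no-discount partition
    (100.00, 100.00),  # boundary value: no discount at exactly $100
    (100.01, 90.01),   # just over the boundary: discount applies
    (200.00, 180.00),  # well inside the discount partition
]

for total, expected in tests:
    assert discounted_total(total) == expected
print("all audition tests pass")
```

What you learn from the candidate is less the tests themselves than whether they spot the boundary at $100 without prompting.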
Let me conclude with some cautionary notes on avoiding classic, all-too-common interviewing mistakes. One of these mistakes is scaring off the candidates. This can happen when people deliberately intimidate, stress out, or just plain weird out candidates in interviews. It can also happen when interviewers who are having a bad day vent their frustrations; you should be honest about the good and bad aspects of the company, but don’t paint a bleak and bitter picture. Another classic mistake, which I mentioned earlier, is being afraid to ask tough questions. Probe for weakness in the skills and experience claimed in the resume and the interview. Keep your ears open for fudging, vagueness, attempts to redirect the question, and use of incorrect technical terms. Another classic mistake is to break the law. Be sure you know what questions and topics you can’t bring up. Again, turn to your HR department to help you (and others who will be involved in interviewing) understand what topics are acceptable in an interview.
Successful teams are built by smart managers who have mastered the art of effective interviewing. Effective interviewing starts with a good job description, as that document defines clear requirements for the position. Effective interviewing should also include a good phone screen. The interview process should include qualification questions, behavioral interviewing, and an audition interview. Don’t scare off the candidate or break the law, but do ask polite yet challenging questions. By including these essential elements of effective interviewing, you can be a smarter hiring manager, too.
Often, software engineering processes--including but not limited to software testing processes--are made more efficient by tools, or in some cases are only enabled by the use of a tool. When the tool is missing, the process breaks down. The dependency--and thus the breakdown--might not be as obvious as shown in the picture below; sometimes you have to think harder about the problem.
I've made some comments, both on this blog and in various speeches/webinars/courses, about Agile development processes and how they affect testing. However, I haven't addressed the entire set of Agile principles at once. I haven't seen others who I would call "Agile agnostics" do so either. (By "Agile agnostics" I mean those who do not cast themselves as proponents for or opponents of Agile.) So, in this post, I make some test-centric observations about the Agile principles from the Agile manifesto. These observations are based on my experiences working on Agile projects and working with Agile teams.
These observations are reflections on a work-in-progress. Software engineering teams are still learning how to apply Agile approaches. Agile approaches have not (yet?) been successfully applied to all types of projects or products. Some tester challenges remain to be surmounted with respect to Agile development. However, Agile methodologies are starting to show promising results in terms of both development efficiency and quality of the delivered code.
So, what do you think about Agile methodologies and testing? I'd be happy to discuss this topic with interested readers of this blog.
Fresh off my diatribe just a couple weeks ago about how McAfee had a bug that serendipitously made a credit card charge in their favor, here's another "bug from the trenches." Now, Cisco enjoys a good reputation for the quality of their products, but the packaging doesn't exactly inspire confidence. Can you spot the error?
The point of this post is not to yank Cisco's chain--though I am surprised to see such a large, serious organization making such an obvious and silly mistake--but to reinforce an important point. Product quality includes the whole product, and that includes the package. I've done some consulting for systems and consumer electronics makers, and most of them include testing of the "out of box experience".
Cisco, did your OBE testers miss this one? I'd be happy to post a comment from Cisco explaining how this kind of obvious bug snuck past.
As regular readers of this blog know, from time to time I like to throw a topic out for discussion and see what comes of it. It's that time.
I have said (more than once) that most companies manage their holiday parties more rigorously and quantitatively than they manage the quality of their software. That's not just a throwaway snarky line: It's a fact. You can go to the treasurer, CFO, head accountant, whatever the moneybags person is called in any organization worth calling an organization, and ask that person, "How much money did you spend on holiday parties last year?" You'll get an answer. You can ask people at that same organization, "What benefits did you receive from those parties?" You'll get an answer (albeit probably not as quantitative).
Now, try this experiment: Ask Mr. or Ms. Moneybags, "How much money did your organization spend on software testing and software quality last year?" While they might be able to answer the first half of that question (about testing), most organizations couldn't answer the second half, even though a technique for getting a good approximation of the costs of software quality has been around a long time. (For example, check out the free tool here and also the article here.) You can ask them about the benefits received from testing and quality and get the same lack of solid answers most of the time.
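For readers unfamiliar with that approximation technique, the classic cost-of-quality model sums four categories of cost; a sketch of the arithmetic follows, with dollar figures invented purely for illustration:

```python
# Classic cost-of-quality categories; the dollar figures are made up
# purely to show the arithmetic, not drawn from any real organization.
coq = {
    "prevention":       50_000,   # training, standards, process work
    "appraisal":       200_000,   # testing and reviews
    "internal_failure": 120_000,  # bugs found and fixed before release
    "external_failure": 400_000,  # field failures, support calls, patches
}

total_cost_of_quality = sum(coq.values())
conformance = coq["prevention"] + coq["appraisal"]
nonconformance = coq["internal_failure"] + coq["external_failure"]

print(f"Total cost of quality: ${total_cost_of_quality:,}")
print(f"Conformance vs. nonconformance: ${conformance:,} vs. ${nonconformance:,}")
```

Any Mr. or Ms. Moneybags could produce these four numbers from existing accounting data; the fact that most can't is exactly the point.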
Given that people have been accounting for things for thousands of years (e.g., I saw a 4,000-year-old receipt for donkeys and other sundry items, written in Sumerian cuneiform, in a museum in Japan last month), and given the widely acknowledged importance of software in the modern economy, how do we explain this lack of fiscal measurement?
Moving beyond cost, while some of our clients do have reasonably good metrics for product quality (what some call "quality in use metrics"), many companies do not. Some companies that do have such metrics don't tie those metrics back to what is getting tested. We've seen situations where companies knew they had interoperability problems in their data centers and yet, when we asked people who was responsible for interoperability testing, the accountability trail went around in circles and ended up nowhere. Same story for performance and reliability.
So, why does this happen? The cynic in me wants to say that this problem comes down to a lack of legal liability for quality and quality problems. In other words, until organizations are held to the same legal standards for software quality as they would be for other products (e.g., food, cars, etc.), we will see this immature approach to managing and measuring quality. But is it really that simple? What do you think? Comments, please.
You can find the second part of the uTest interview here.
While I try to stay focused on facts (along with illustrative case studies) in this blog, sometimes people ask my opinion on software testing, usually in the form of interviews. uTest did that recently. You can find the first part here.
Some of the readers of this blog have perhaps read or heard me say that the state of the common practice of software testing lags about 25 years behind the state of the art. Here's a situation which either is a case study in why I say that, or it's something worse.
We (RBCS) used McAfee AntiVirus software for years on many of the PCs in our office as well as on company laptops. This was mostly due to laziness, not a high level of satisfaction. McAfee often came pre-installed on computers we bought, so we would simply renew the update subscriptions. However, repeated problems with McAfee's virus protection--especially its aggressive interference with e-mail programs--led us to switch to Trend Micro, which now runs on all our PCs.
On September 30, we received an e-mail from McAfee (specifically, an account called "subscriptions" at "mcafee.com"). This e-mail read, in part, "ATTENTION: The credit card linked to your account for McAfee-based security products has expired. Please update your account now to keep your PC protected without interruption." After checking to make sure that we were no longer using McAfee software on any of our computers, we ignored the message. After all, the credit card had expired.
It turns out that McAfee's software had a bug in it that caused it to submit the charge anyway, because my October 13 American Express bill has a charge for $43.29 from McAfee.com. No receipt was mailed for the charge, unlike in previous years, though I'm not sure this has anything to do with the expired card. We're going to contest and reverse this charge, of course; it is a relatively minor hassle.
The testing implications of this situation are significant, though. Remember, antivirus software, like other security software, is software that we rely on to keep systems safe. Given the kind of damage a widespread interruption to computer systems due to virus attacks can cause, you'd expect the entire suite of software--including the online systems that provide updates and handle customer financial matters--to be well tested, following known best practices and techniques.
So, what's the explanation for this situation? If a credit card has expired, the system should not silently put through a charge on the card, especially when the system has sent an e-mail to the customer giving the impression that, unless the customer takes a specific and deliberate action to update the card, no charge will occur and the subscription will expire. We have an obvious equivalence partition. Equivalence partitioning as a test technique is over 25 years old. We clearly have a block of code that recognizes the equivalence partition and triggers an e-mail to the customer prior to the charge occurring. Statement coverage, the lowest of the white box coverage criteria, is also a test technique that is over 25 years old.
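To make those partitions concrete, here is a minimal sketch; the function names, card representation, and dates are hypothetical, invented purely to illustrate the technique, and bear no relation to McAfee's actual code:

```python
from datetime import date

def card_expired(exp_year, exp_month, today):
    """A card is expired once the month after its expiry month begins."""
    return (exp_year, exp_month) < (today.year, today.month)

def process_renewal(card, today):
    """Billing decision: an expired card falls in the 'notify, never
    charge' partition; a still-valid card falls in the 'charge' partition."""
    if card_expired(card["exp_year"], card["exp_month"], today):
        return "notify_customer"  # send the expiry e-mail; do NOT charge
    return "charge"

# One test per equivalence partition, plus the boundary month.
today = date(2010, 10, 13)
assert process_renewal({"exp_year": 2010, "exp_month": 9}, today) == "notify_customer"
assert process_renewal({"exp_year": 2010, "exp_month": 10}, today) == "charge"  # valid through its expiry month
assert process_renewal({"exp_year": 2011, "exp_month": 1}, today) == "charge"
```

Even this tiny test set exercises both partitions and the boundary between them, and any run achieving statement coverage would force both branches of the billing decision.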
In risk based testing--a topic I've covered often in this blog--it's almost certain that two financial-related risks would be noted during quality risk identification:
Best practices of risk based testing would generally lead to such risks being given a high impact rating, even if the likelihood rating were low. That would require more than cursory testing against such risks. Now, analytical risk based testing as a strategy is relatively leading edge, having been perfected only in the last ten years.
Maybe someone at McAfee would care to post a comment about which of the following statements is true:
Of course, it's quite possible that McAfee's financial processing code is tested to less than 100% equivalence partition coverage and less than 100% statement coverage, but even so their testers should have found this bug. After all, testing to force all possible messages to occur--a simple experience based technique recommended by James Whittaker in How to Break Software--would have revealed this bug as well.
In all, any of three well-established test design techniques--one black box (equivalence partitioning), one white box (statement coverage), and one experience based (the force-all-messages attack)--would have found this bug. Risk based testing would have led to more thorough coverage of the underlying quality risk. If financial-related quality risks are not being tested using well-established best practices, then what else isn't being tested in McAfee's systems?
For those of us on the Western calendar, we have some holiday time coming soon, including the December break. Many of us will spend this time relaxing, which is always good. However, why not invest a little of your holiday time in improving your testing operation? After all, if you’re like most testers, you are time constrained and need to make improvements quickly that show fast results. So here are three practical ideas which you can put into action before January arrives, which will make a noticeable difference when you start to take on the projects that await in 2011.
Get Hip to Risk-Based Testing
I've gone on quite a bit in this blog about risk based testing, but let's keep it short and sweet here. I have a simple rule of thumb for test execution: Find the scary stuff first. How do we do this? Make smart guesses about where high-impact bugs are likely. How do we do that? Risk-based testing.
In a nutshell, risk-based testing consists of the following:
1. Identify specific risks to system quality.
2. Assess and assign the level of risk for each risk, based on likelihood (technical considerations) and impact (business considerations).
3. Allocate test effort and prioritize (sequence) test execution based on risk.
4. Revise the risk analysis at regular intervals in the project, including after testing the first build.
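The allocation step above can be sketched with a simple likelihood-times-impact scoring scheme. The risk items and the 1-to-5 scales below are invented for illustration, not drawn from any real project:

```python
# Hypothetical risk items: (risk, likelihood 1-5, impact 1-5).
# Names and ratings are illustrative only.
risks = [
    ("Charge submitted against an expired card", 2, 5),
    ("Update download fails on slow links",      4, 3),
    ("Help text misspelled",                     3, 1),
]

# One common scheme: risk priority number = likelihood x impact;
# allocate more effort to, and execute tests first for, the
# highest-scoring risks -- i.e., find the scary stuff first.
prioritized = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for name, likelihood, impact in prioritized:
    print(f"{likelihood * impact:>2}  {name}")
```

Note how a low-likelihood, high-impact risk (the billing bug) still outscores a harmless cosmetic one; that is exactly the behavior you want from the scheme.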
You can make this process as formal or as informal as necessary. We have helped clients get started doing risk-based testing in as little as one day, though one week is more typical. You can mine this blog for more ideas, check out a few articles on the RBCS web site (such as this one and this one), the year-long series of videos in our Digital Library, or my books Managing the Testing Process (for the test management perspective) or Pragmatic Software Testing (for the test analyst perspective).
Whip Those Bug Reports into Shape
One of the major deliverables for us as testers is the bug report. But, like Rodney Dangerfield, the bug report gets “no respect” in too many organizations. Just because we write them all the time doesn’t mean they aren’t critical—quite the contrary—and it doesn’t mean we know how to write them well. Most test groups have opportunities to improve their bug reporting process.
When RBCS does test assessments for clients, we always look at the quality of the bug reports. We focus on three questions:
1. What is the percentage of rejected bug reports?
2. What is the percentage of duplicate bug reports?
3. Do all project stakeholder groups feel they are getting the information they need from the bug reports?

If the answer to questions one or two is, “More than 5%,” we do further analysis as to why. (Hint: This isn’t always a matter of tester competence, so don’t assume it is.) If the answer to question three is, “No,” then we spend time figuring out which project stakeholders are being overlooked or underserved. Recommendations in our assessment reports will include ways to get these measures where they ought to be. Asking the stakeholders what they need from the bug reports is a great way to start—and to improve your relationships with your coworkers, too.
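Those first two checks reduce to simple ratios against the 5% rule of thumb. The function name and the bug counts below are hypothetical, a sketch rather than any real RBCS assessment tool:

```python
def report_quality_metrics(total, rejected, duplicates, threshold=0.05):
    """Compute the two ratios checked in a test assessment.

    The default 5% threshold mirrors the rule of thumb in the text;
    exceeding it on either ratio flags the bug reporting process for
    further root-cause analysis.
    """
    reject_rate = rejected / total
    dup_rate = duplicates / total
    return {
        "reject_rate": reject_rate,
        "dup_rate": dup_rate,
        "needs_analysis": reject_rate > threshold or dup_rate > threshold,
    }

# Illustrative numbers: 400 bug reports filed, 30 rejected, 12 duplicates.
m = report_quality_metrics(total=400, rejected=30, duplicates=12)
print(m)  # 7.5% rejected is over the 5% rule of thumb, so dig deeper
```

Remember the hint above: a high reject rate may indicate vague requirements or a sloppy triage process just as easily as tester error.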
Read a Book on Testing
Most practicing testers have never read a book on testing. This is regrettable. We have a lot we can learn from each other in this field, but we have to reach out to gain that knowledge.
(Lest you consider this suggestion self-serving, let me point out that writing technical books yields meager book royalties. In fact, on an hourly basis it’s more lucrative to work bagging groceries at a supermarket. Other benefits, including the opportunity to improve our field, are what motivate most of us.)
There are many good books on testing out there now. Here’s a small selection, any one of which you could work your way through during a winter vacation:
I have read each of these books (some of which I also wrote or co-wrote). I can promise you that, if you need to learn about the topic given, reading one of the books for that topic will repay you in hours and hours saved over the years, as well as teaching you at least one or two good ideas you can put in place immediately.