

Sequencing Software Test Levels

By Rex Black

Advanced Test Manager e-learning attendee Patricia Osorio asked a question about a sample exam question:

You are managing the testing of a hospital information system project that will integrate off-the-shelf software from three vendors. Which of the following gives a reasonable sequence of test levels that you will execute?

A. Component integration test; system integration test; user acceptance test

B. System acceptance test; system integration test; user acceptance test

C. System integration test; system acceptance test; user acceptance test

D. System test; system integration test; user acceptance test

The correct answer is B.  Each of the vendors' systems should undergo an acceptance test first.  Then, the systems should be integrated and tested by the test team.  Finally, the users should do an acceptance test of the integrated system of systems.

— Published


Does Citrix Think Software Compatibility Testing Is Unnecessary?

By Rex Black

As long-time listeners--or even brand new listeners, for that matter--of the RBCS webinars know, we use Citrix's GoToWebinar service for our free monthly webinars.  Now, I've been fairly satisfied with GoToWebinar.  I've used one or two of the competing services, and been less happy with those. Of course, webinar listeners (and readers of this blog) might remember I chided Citrix back in May for the ungraceful way the system handles audio drop-outs by the presenter.

So, during the June webinar, attendee Keith Stobie reported an inability to see the presentation using Internet Explorer 9.  He said that Chrome (not sure which version) worked just fine.  I reported the problem to Citrix on Wednesday of last week.  Five days later, I received the following reply, quoted in its entirety (minus the links provided at the end):

Thank you for contacting Citrix Online Global Customer Support,

Dear Rex Black,

IE 9 has not been tested with any of our products as of yet. we will try to help fix any issues the best we can, but cannot guarantee anything. Hopefully we should get this done as soon as possible.

If you have any additional questions or need further clarification regarding this matter, please feel free to reply directly to this email. For any other product inquiries or technical assistance, please visit us at our Support Centers listed at the bottom of this email. Our Support Centers include Self Help files and our Global Customer Support Contact Information.

Thank you,

Richard Carrel | Global Customer Support

So, I appreciate the reply, though I have to say that five days isn't a quick turnaround for a customer complaint about a browser-based service that's incompatible with a major vendor's browser.

More surprising to me is the admission that Citrix hadn't tested IE9.  I don't keep up with the browser wars, so I'm not sure what share of the browsing action IE9 has, but I'm pretty sure that Microsoft's IE family of browsers remains at least one of the 800-pound gorillas in the room.

Putting myself in the position of the Director of Quality or VP of Testing or whatever the head-testing-honcho's title is at Citrix, I understand that there are constraints on compatibility testing.  I wouldn't bother to test four-year-old versions of Opera, for example.  But come on, not testing IE9?  If I were in charge of testing for any SaaS provider, compatibility would be one of my top quality risks, and testing browser/OS/malware configuration combinations would receive a fair amount of time, money, and attention.  Of course, functionality, reliability, performance, and security would be high on the list of risk categories, too.
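To make "configuration combinations" concrete, here's a minimal sketch of enumerating a compatibility test matrix.  The browsers and operating systems listed are illustrative assumptions from that era, not any vendor's actual support matrix:

```python
# Minimal sketch: enumerating a browser/OS compatibility test matrix.
# The browsers and operating systems below are illustrative assumptions,
# not any vendor's actual support matrix.
from itertools import product

browsers = ["IE 8", "IE 9", "Firefox 5", "Chrome 12", "Safari 5"]
operating_systems = ["Windows XP", "Windows 7", "Mac OS X 10.6"]

configurations = list(product(browsers, operating_systems))
for browser, os_name in configurations:
    print(f"Test: {browser} on {os_name}")

# Even this toy matrix yields 15 configurations, which is why risk-based
# prioritization of compatibility testing matters.
print(f"Total configurations: {len(configurations)}")
```

In practice, you would prune or prioritize this cross-product based on usage share and quality risk, rather than testing every cell.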

Here's some free consulting advice to my fellow test professionals who work at Citrix: Spend a little time getting ramped up on how to do quality risk analysis and risk based testing.  You can find lots of free resources on our web site, especially in the articles and the Digital Library. You'll notice that compatibility is one of the quality risk categories included in our free quality risk checklist.  If you need more help, let me know, as we can provide a one-week risk based testing bootstrapping service that will get you headed in the right direction.

Moral of the story:  If you are in charge of testing at any SaaS vendor, and you're not testing for compatibility, it's only a matter of time before someone writes a blog post like this one about your product and the degree to which you aren't testing it.

— Published


RBCS Webinar Audio Adjustment

By Rex Black

Just in time for tomorrow's and next week's webinars--and this week's webinar-style virtual Foundation bootcamp--I can share some advice on audio adjustments from long-time webinar attendee Avner:

...I experienced a personal issue with the GoToWebinar sound. The problem manifests as an untenable echo on the presenter's voice. I don't know why the audio started with this effect ON by default. After a few minutes of experimenting, I was able to find the solution. It's an easy fix, but if you don't know where to go, it's a deal breaker--i.e., I think I'll go to lunch. Included is a screen print of how to fix it [see embedded picture below]. Please share this with Mr. Black or whoever needs to know. This has happened to me before on other seminars where I was not able to fix the problem & had a good lunch instead.

Regards & thank you again for an informative webinar,

Avner Uzan

[Image: Adjusting Webinar Audio to Avoid Echo]

Avner, thanks for sharing this tip.  Audio for these webinars can indeed be tricky, both for the presenter and for the listener.  I'm sure you've helped a few people catch webinars they would otherwise miss...for lunch.

— Published


Software Testing Terminology

By Rex Black

Long-time reader Patricia Osorio Aristizabal sent the following question via e-mail (info@rbcs-us.com):

Hi Mr Black

I have a dilemma and I would like to know your thoughts about it.

The difference between error, defect, and failure (IEEE 610) is clear to me, as is the value of keeping these distinctions in mind and using the words in conversation to help the project team find out where or when an issue (incident) occurs. However, it is not easy to get the whole team to use these words without causing resistance and incredulity about the importance of the distinction. In your experience, how could I change first my team and then the whole organization to standardize the use of these concepts? Could you please give some tips for making this change?

For those readers not familiar with the distinction to which Patricia alludes, here are the ISTQB Glossary definitions for each of those terms:

error (or mistake): A human action that produces an incorrect result.

defect (or bug): A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

failure: Deviation of the component or system from its expected delivery, service or result.

incident: Any event occurring that requires investigation.  [Note: During testing, any failure that occurs would be an incident, though some incidents ultimately turn out to be false positives caused by things like bad test cases, bad test data, bad test environments, etc.]

So, based on these definitions, the programmer makes an error (or mistake) in their thinking while writing a program, leading to a defect in the code.  The code is executed during testing and the defect produces a failure.  The tester observes the failure, investigates the incident, and presumably files a bug report.
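To make the chain concrete, here's a minimal, hypothetical illustration in Python; the off-by-one example is invented for this post, not drawn from any real project:

```python
# Hypothetical illustration of the error -> defect -> failure chain.
# The programmer's ERROR (mistake): thinking the last element of a list
# sits at index len(items), forgetting that indexing is zero-based.
def last_item(items):
    return items[len(items)]  # DEFECT: should be items[len(items) - 1]

# Executing the defective code produces a FAILURE: the observable
# deviation from the expected result.
try:
    last_item([1, 2, 3])
except IndexError as failure:
    # The tester observes the failure, investigates the INCIDENT, and,
    # if it's not a false positive, files a bug report.
    print(f"Failure observed: {failure}")
```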

Certainly, thinking clearly about these distinctions can help testers understand what's going on.  However, it can be difficult to convince people to adopt the definitions.  As a consultant, I'm familiar with the challenge of trying to motivate people to make changes of any sort.

The general rule for change, in my opinion, derives from a basic rule of sales:  People move away from pain much more quickly and reliably than they move toward a desirable situation.  In other words, if you want to motivate change, make people aware of the organizational pain--waste, delay, frustration, etc.--associated with the behavior you want to change, and then suggest how to make that pain go away based on your proposed change. 

So, Patricia, if you can find a way in which a lack of clarity on these definitions is causing waste, delay, frustration, or other sorts of organizational problems, then you might succeed at motivating changes here.

— Published


Upgrade Reveals Regression Bug of the Week

By Rex Black

I almost titled this blog post, "Why Maintenance Testing Is Good (and Why Pair Networks Should Do More of It)," but decided it was the confluence of events that was the real story here.

Last week was a week for RBCS to get hit with a bunch of costs of external failure, foisted on us by various vendors and service providers.  First, we had the problem with the dying audio driver during the Q&A session of our metrics webinar (see here for details).  Next, and literally just as I was finishing the blog post describing that reliability bug, the consequences of an apparently-not-well-tested upgrade by Pair Networks (hosts of our website) hit our entire site like a grenade.

At first, rbcs-us.com was completely taken out by the upgrade.  With some heroic efforts by our web team, we got most of the site back up and running within a couple of hours.  However, the store was damaged more severely, apparently due to its database hooks.  The store is still down, and won't be back up for a few days.  The irony of this situation is that, while we intend to send store discount codes to people inconvenienced by the webinar audio crash, there's no point in doing that until the store is back up.  Compounding costs of external failure.

If you're thinking, "Well, ho-hum, these things happen," consider this thought experiment.  Imagine Ford Motor Company released an upgrade to engine control software and sent it to all of their dealers to install in cars during their next scheduled maintenance.  Imagine that this software caused thousands of cars to stop working completely, to be only partly repairable with minor effort, and to require a major service for complete restoration of function.  Who do you think would be paying for the service?  The customer?  Or Ford?

The answer, of course, is Ford.  Software and software services, however, remain among the few businesses that are allowed to transfer their costs of external failure, which is a cost-of-quality way of saying "deliver crap products and services to their customers without having to face the consequences."  This isn't the first time I've made this point on this blog (see here), and I'm guessing it won't be the last.

Anyway, a big Bronx cheer out to Pair Networks for their failure to properly test this update before putting it out there, and an even bigger Bronx cheer out to Pair Networks for their utter failure to even bother to contact us (or presumably anyone else) to express regret for the cost and inconvenience inflicted by their negligence.

P.S.  For those of you unfamiliar with the phrase, dictionary.com defines "Bronx cheer" as "a loud, abrasive, spluttering noise made with the lips and tongue to express contempt."

— Published


Measuring Software Test Processes (and Software Testers)

By Rex Black

On the heels of last week's webinar, listeners have sent lots of comments (all good) and some questions.  Here's an interesting set of questions from listener Stephen Ho.  I've interspersed my answers in his e-mail, with "RB:" in front to make them easier to follow:

Rex,

Thanks for the webinar. However, it did not talk about how to organize and build up good testing metrics.

RB:  There was a general discussion early on about how to go from an objective to a specific metric and specific targets for that metric.  Perhaps you were missing the part about implementing the metrics with specific tools? 

However, it provided some interesting points for measuring the success of a testing project, such as BFE. To be more realistic, how can we know that a testing metric is good?

RB: One attribute of a good metric is that it is traceable back to some specific objective. That objective should relate to a process (e.g., finding defects is an objective for the test process), to a project (e.g., reaching 100% completion of all scheduled tests is an objective for many projects), or to a product (e.g., reaching 100% coverage of requirements with passing tests is an objective for some products).  Another important attribute is that the metric supports smart decision-making and, if necessary, guides corrective action.  Yet another important attribute is that the metric has a realistic target.

Here is another topic that may interest you.

"What is effective & efficiency testing?"

RB:  We have to be more specific than this.  What are the objectives for testing, as you mean it here?  Once you have defined those objectives, you can then discuss effectively and efficiently meeting them.  For example, if you define finding defects as an objective, then you can use the DDP metric (discussed in the presentation) as a metric of effectiveness.  Cost of quality (which is discussed in various articles on the RBCS web site, such as this one) can serve as a metric of efficiency.

-How can we narrow this down to know whether our existing testing work is effective and efficient?

RB:  You might want to read the chapter I wrote (Chapter 2) on this topic in the book Beautiful Testing.  That's a book worth reading anyway, because there are a number of other good chapters in it.

-What is the right way to measure the performance of a QA?

RB:  I assume you're talking about an individual tester here. If we can define specific objectives for the tester, then we can use the same method to define metrics.  Keep in mind the rule about objectives needing to be SMART.

-How can we know that a QA is at a competent level?

RB: Check out my book, Managing the Testing Process, 3e, for a discussion about how to use skills inventories to manage the skills of your test team.

-How can we increase the productivity of a QA?

RB: This question is too general, I'm afraid.  Productive at what?  I suggest that you define specific objectives for the test team, and then measure the current efficiency with which those objectives are achieved.  At that point, you can make realistic (and measurable) goals for improvement of productivity.

You may have an existing webinar or article regarding this topic. If so, I am eager to study your material. Would you direct me to this information? I would definitely provide my feedback to you.

RB:  Follow this link for another article on metrics that you might find useful.

Thanks,

Stephen

RB: You're welcome.

— Published


Boundary Value Analysis

By Rex Black

Long-time reader Din asked a good question about the ISTQB Advanced syllabus and the test design techniques covered in Chapter 4.  He wrote:

Hi Rex,

As far as the ISTQB syllabus is concerned, boundary value analysis and testing is discussed at both the Foundation and Advanced levels. I just want to check whether deeper discussion of extensions of BVA/BVT, such as robustness testing and robust worst-case testing (and other related approaches), is covered in the ISTQB syllabus, especially at the Advanced level. I raise this because I came across several academic presentations discussing BVA/BVT with more than one parameter/variable, which I hadn't given much thought to before. Hope to get your feedback.

Thanks.

-Din-

Well, Din, I can't speak for other training providers--and they probably wouldn't want me to even if I could! :-)  However, I can say that the RBCS Advanced Test Analyst and Advanced Technical Test Analyst courses--and the corresponding books--go into the topic of boundary value testing (and the related concept of equivalence partition testing) in great depth.  This includes multi-value boundary value testing, along with the use of these techniques in combination with other test design techniques such as all-pairs testing, state-based testing, decision-table testing, and so forth.
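To give a flavor of the basic technique for readers who haven't seen it, here's a minimal sketch of two-point and three-point boundary value selection for a single integer range.  The range itself is an invented example; the multi-parameter extensions Din asks about build on exactly these per-parameter values:

```python
# Minimal sketch: deriving boundary values for one integer range.
# The example range (a field accepting 1 through 100) is invented.
def two_point_boundaries(low, high):
    # Two-point BVA: each boundary plus its nearest invalid neighbor.
    return sorted({low - 1, low, high, high + 1})

def three_point_boundaries(low, high):
    # Three-point BVA: each boundary plus neighbors on both sides.
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

print(two_point_boundaries(1, 100))    # [0, 1, 100, 101]
print(three_point_boundaries(1, 100))  # [0, 1, 2, 99, 100, 101]
```

Robustness testing and robust worst-case testing then combine these per-parameter boundary values (including the invalid ones) across two or more parameters, which is where the combinatorial explosion, and techniques like all-pairs testing, come into play.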

— Published


Accidental Software Test Reveals Reliability Bug of the Week

By Rex Black

As the 300 or so people who attended yesterday's metrics webinar know, we had an audio problem about ten minutes into the question-and-answer session.  This failure resulted in me talking to a dead microphone for a minute or two before I realized the problem and fixed it.

The symptom of the bug was twofold.  The red light on the front of the Blue Yeti microphone I was using turned off (i.e., went from illuminated red to unilluminated), indicating a complete loss of power to the microphone.  In addition, naturally enough, the audio display on the GoToWebinar console (which, unfortunately, is tucked away toward the bottom of the console) showed no audio and no one talking.

To resolve the problem, I unplugged the USB cable from the PC, then plugged it back in.  After resetting the audio configuration on the GoToWebinar console to use the Yeti mike (as GoToWebinar had defaulted back to the Windows default audio), we were back in business.  I suspect if I had noticed immediately I could have restored audio within 1 to 2 minutes.

The main culprit is, I suspect, a reliability problem either in Windows XP's USB drivers or in the audio drivers themselves.  I favor the USB drivers as the cause, because the Yeti appeared to have been completely powered down.  The failure was abrupt, and cleared itself immediately once I disconnected and re-connected the cable.  If the audio drivers had failed, I would expect that clearing the problem would have taken more work.  Perhaps someone with a greater understanding of Microsoft's XP USB drivers could comment on my hypothesis.

I also would suggest to the Citrix people that their user interface is not entirely blameless in this glitch.  Surely, if a webinar is active, someone should be speaking.  GoToWebinar is very good about warning the presenter when they have disabled screen sharing, e.g., by changing the focus to an application other than the one being shared.  Why not have a red warning that comes up saying, "The webinar is in progress, but no audio is currently detected"?  Obviously, some pauses, say for a few seconds, are not a problem, but in this case the audio was down for a total of four minutes.  Also, depending on the way in which the failure of the microphone manifested itself to the GoToWebinar software--e.g., did the selected and active audio device abruptly disappear from the list of available audio devices?--perhaps GoToWebinar could have put up a warning that the active audio device had become unavailable.  I'm not sure exactly what form it should take, but I know I could have used a much more urgent heads-up than the fairly passive indicator that GoToWebinar gave me.
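In sketch form, the warning logic I have in mind is simple.  Note that this is purely illustrative: the audio_level polling function below is a hypothetical placeholder, not any actual GoToWebinar API.

```python
import time

SILENCE_TOLERANCE_SECONDS = 15  # assumed allowance for natural pauses

def audio_level():
    """Hypothetical placeholder: return the current input level (0.0-1.0).
    A real implementation would query the selected audio device."""
    raise NotImplementedError

def warn_on_dead_air(webinar_is_active):
    """Poll the audio level and warn when silence outlasts the tolerance."""
    silent_since = None
    while webinar_is_active():
        if audio_level() > 0.0:
            silent_since = None  # someone is talking; reset the timer
        elif silent_since is None:
            silent_since = time.monotonic()  # silence just started
        elif time.monotonic() - silent_since > SILENCE_TOLERANCE_SECONDS:
            # The urgent heads-up the console never gave me.
            print("WARNING: webinar in progress, but no audio detected!")
        time.sleep(1)
```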

In general, I have been very pleased with GoToWebinar and GoToMeeting--and yes, I have used the major competitors--but this failure to warn the presenter when the audio stops is a significant usability/error-handling bug, in my opinion.  I'd be much less inclined to recommend GoToWebinar as a webinar hosting tool now, after having this experience.  If anyone from Citrix is reading, I'd be happy to discuss this with you.  Same for you guys from Microsoft, regarding my assertion about the reason for the mike failure.

For those of you who were victims of this glitch and spent four minutes listening to dead air, you'll be receiving a discount code applicable to any product or service in our store.  Other than that little incident, I enjoyed the webinar and everyone's great participation.  The recorded webinar (of the evening presentation, which went without a hitch) will be posted in the next few days.

— Published


Quantifying Testing Effectiveness with the Defect Detection Percentage

By Rex Black

After the test metrics webinar held yesterday--link to recorded webinar coming soon in the Digital Library--we had an attendee ask a good question by e-mail (info@rbcs-us.com).  Linda Li wrote to ask,

Hello Rex,

I just attended your free webinar about test metrics. You mentioned:

DDP=Bugs Detected/Bugs Present.

So I want to know: how can I get ‘Bugs Present’? What's included in ‘Bugs Present’? Thank you very much.

You delivered a great presentation; it really helped me a lot.

Thanks for the kind words about the presentation, Linda. I do hope it provides useful ideas. 

This metric, which is variously called defect detection percentage (DDP) or defect detection effectiveness (DDE), is mathematically defined as Linda mentioned:

DDP = bugs found/bugs present

When we're talking about testing at the end of the software development or maintenance process, we can say that:

bugs present = bugs found by testing + bugs subsequently found in production

So, to calculate DDP for testing, use this formula:

DDP = bugs found by testing/(bugs found by testing + bugs subsequently found in production)

In this equation, test bugs are the unique, true bugs found by the test team. This number excludes duplicates, non-problems, test and tester errors, and other spurious bug reports, but includes any bugs found but not fixed due to deferral or other management prioritization decisions.

Production bugs are the unique, true bugs found by users or customers after release that were reported to technical support and for which a fix was released; in other words, bugs that represented real quality problems. Again, this number excludes duplicates, non-problems, customer, user, and configuration errors, and other spurious field problem reports, and excludes any bugs found but not fixed due to deferral or other management prioritization decisions. In this case, excluding duplicates means that production bugs do not include bugs found by users or customers that were previously detected by testers, developers, or other prerelease activities but were deferred or otherwise deprioritized, because that would be double-counting the bug. In other words, there is only one bug, no matter how many times it is found and reported.
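In data terms, the counting rules above amount to a filter over bug records.  Here's a hypothetical sketch; the record fields (phase, status, is_duplicate) are invented for illustration, not any particular bug tracker's schema:

```python
# Hypothetical sketch of the counting rules above. The record fields
# (phase, status, is_duplicate) are invented for illustration.
def count_test_bugs(bug_reports):
    """Unique, true bugs found in testing; deferred bugs still count."""
    return sum(1 for b in bug_reports
               if b["phase"] == "test"
               and not b["is_duplicate"]
               and b["status"] not in ("rejected", "works-as-designed"))

def count_production_bugs(bug_reports):
    """Unique, true field bugs for which a fix was released; bugs already
    known before release (e.g., deferred) are excluded as duplicates."""
    return sum(1 for b in bug_reports
               if b["phase"] == "production"
               and not b["is_duplicate"]
               and b["status"] == "fixed")
```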

To calculate this metric, you need a bug tracking system for all the bugs found in testing (and for those using Agile methods, yes, I do mean bugs found during the sprints, even if fixed during the sprints).  You also need a way to track bugs found in production. Most help-desk or technical-support organizations have such data, so it’s usually just a matter of figuring out how to sort and collate the information from the two (often) distinct databases. You also have to decide on a time window. That depends on how long it takes for your customers or users to find 80 percent or so of the bugs they will find over the entire post-release life cycle of the system. For consumer electronics, for example, the rate of customer encounters with new bugs (unrelated to new releases, patches, and so forth) in a release tends to fall very close to zero after the first three to six months. Therefore, if you perform the calculation at three months and adjust upward by some historical factor--say, 10 to 20 percent--you should have a fairly accurate estimate of production bugs, and furthermore one for which you can, based on your historical metrics, predict the statistical accuracy if need be.
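As a worked example, here's a minimal sketch of that calculation.  The bug counts and the 15 percent adjustment factor are invented for illustration; you would substitute your own historical factor:

```python
# Minimal sketch of the DDP calculation described above.
# All numbers are invented for illustration.
def defect_detection_percentage(test_bugs, production_bugs,
                                adjustment_factor=0.0):
    """test_bugs: unique, true bugs found by the test team.
    production_bugs: unique, true field bugs reported so far.
    adjustment_factor: historical upward adjustment, e.g., 0.15 if you
    calculate at three months and expect about 15% more field bugs."""
    estimated_production = production_bugs * (1 + adjustment_factor)
    return test_bugs / (test_bugs + estimated_production)

# Example: 190 unique true test bugs and 12 field bugs at the
# three-month mark, adjusted upward by an (assumed) 15 percent.
ddp = defect_detection_percentage(190, 12, adjustment_factor=0.15)
print(f"DDP = {ddp:.1%}")  # prints: DDP = 93.2%
```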

Note that this is a measure of the test process's effectiveness as a bug-finding filter. Finding bugs and giving the project team an opportunity to fix them before release is typically one of the major objectives for a test team, as I mentioned in my presentation.

— Published


More ISTQB Certified Tester Advanced Level Exam Advice from the Trenches

By Rex Black

Following on my most recent post, another one of the Advanced Test Analyst course attendees took the exam this week.  Kelly Rasmussen passed on the following advice.  While it's a bit more targeted at the CTAL-TA exam than the other CTAL exams, there are some good general guidelines, especially regarding Foundation-level questions, the weighting of questions, and the typical e-exam experience.

Just took my exam today and I passed also! Don't know if you all took the test yet or not, but wanted to share my experience with you. Hopefully it'll be of some help.

Firstly, I would like to say that there were no lockers [at the Kriterion exam center] to lock up my stuff. I didn't know this going in, so I had my purse and cell phone with me. The receptionist took my stuff and put it under her desk, which I didn't feel all that comfortable with, but I didn't want to go back to my car to put my stuff away. The exam didn't take the whole 3 hours. I was able to answer all the questions within 2 hours. Then I used the next 30 minutes or so going back to the questions that I had marked for review. I wasn't sure if I was quite ready for my results yet, so I sat there for a minute or two deciding whether to click the submit button or not. lol... Yes, I was very nervous to find out if I passed or not, but I was also tired and wanted to go home. So I clicked the button, and luckily, I passed.

As far as the type of questions I got, I was surprised to see so many K1 questions from the Foundation Level. I think my first 7 questions or so were from the Foundation Level, which worked to my advantage. There were plenty of scenario questions, which were tricky in my opinion. [People with extensive testing experience will find that helpful] in deciding on the correct answer. Another thing I thought was tricky is that you had to choose which test technique would be the best one to use depending on the scenario. So make sure you know when it is better to use one technique over another and, of course, how to figure out how many test cases you would need. You want to get as many of the K3 questions correct as possible, since these are worth 3 points. I basically ditto what Jennifer said. Go over everything!!! Pay close attention to chapters 2 and 4. There were a lot of questions from these two chapters!!!

I'll add that, implicit in Kelly's comments, and Jennifer's, is the lesson that extensive pre-exam preparation is required.  In addition to attending a class or going through e-learning, be sure to spend lots of additional time studying for the exam.

— Published


