Blog

Paging Dr. Android and Dr. iPhone, Stat!

Mobile devices. They ain't just for Candy Crush, Mobile Strike, or Pokemon Go. More and more, personal medical information, diagnostics, and other health-related private details are showing up on Android, Apple, and Windows tablets and phones. For example, check out this article title: “Mobile healthcare information management utilizing Cloud Computing and Android OS.” Hey, what could possibly go wrong with that, huh? Or take a look at this one.

What are the safety, privacy, testing, and quality implications of all this health-related and other private data going mobile? Have we figured that out yet? This is the topic of my talk at TEST-IT Africa next month. We're also offering a special two-day mobile testing course, including a chance to become an ASTQB Certified Mobile Tester.

If you're not in Africa, don't worry. You can take our mobile course virtually (see listings on our website) or catch me live at the STP Conference, also happening in September.

— Published


Intelligent Use of Testing Service Providers

If your company is using testing service providers, you might not be getting all the value you can from that relationship, and you could be making some basic mistakes. Want to get more bang for your buck with fewer headaches? Check out my article in this edition of Quality Matters magazine.

— Published


Don't Gather Test Metrics: Gather Insightful Test Metrics

To paraphrase Lord Kelvin, a test team has little insight if it has no metrics. However, there are plenty of test teams who, through the wrong choice of metrics, have plenty of metrics but no insight. Find ideas on developing and using insightful metrics at our website, in the webinars, articles, and templates, and in our training courses.
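For instance, here is a minimal sketch in Python of one widely used metric of this kind, defect detection percentage (DDP): the share of all known defects that testing caught before release. The counts below are hypothetical.

    # Defect detection percentage (DDP), computed from hypothetical counts.
    defects_found_in_testing = 180
    defects_found_in_production = 20  # defects that escaped to the field

    # DDP = defects found by testing / total defects found (testing + production)
    ddp = defects_found_in_testing / (defects_found_in_testing + defects_found_in_production)
    print(f"Defect detection percentage: {ddp:.0%}")  # 90%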

— Published


Are You Living in Sin?

Are you a software tester who is living in sin? Not sure if you're naughty or nice? Join our free webinar this week, and learn the seven deadly sins of software testing and how to avoid them.

— Published


Think "Test Strategies" and Become a More Successful and Flexible Test Professional

As discussed in this webinar, there is a wide variety of patterns in how testing teams carry out testing. These patterns are properly called strategies, because perceptive testers and test managers can adapt them, shifting the blend of archetypal strategies to suit their situations. (They are not properly called schools, since schools require ongoing adherence and conformance to orthodoxies set down by the schools' authorities.) By gaining a better awareness of the testing approaches other teams use, you can expand your repertoire of options for handling testing challenges and achieving testing objectives. Take a listen to the webinar to become more successful and flexible.

— Published


Time to Tap the Brakes on Self-driving Cars Until We Know More About the Testing

As the recent car-maker announcements and now this from Uber show, everyone is full-steam ahead on self-driving cars. However, unlike avionics software in airplanes, the software that runs dialysis machines, or even the spinach that ends up in salads in the United States, we know nothing--nothing--about the testing and quality assurance steps being taken to ensure the safety, security (hackable self-driving Jeeps and VWs, anyone?), and quality of this software. This software will be on public roads, operating vehicles that weigh thousands of pounds, at up to highway speeds.

Here's a thought exercise: Imagine a robotic law enforcement officer, armed with a gun. Imagine knowing nothing about how that software worked or was tested. Imagine that powerful business interests were pushing said cops on states, which were lining up to replace their expensive, error-prone human cops with these new robots. Sound familiar? It should, because it's a storyline that has shown up in more than one hit movie.

At this point, what we're looking at is robo-car, not robo-cop, but the basic storyline--and the public safety implications--are not that different. The damage that can be done by an errant car is certainly no less than that of a few stray bullets.

We need to tap the brakes on this headlong pursuit of self-driving cars to make sure that at least two things are in place:

  • Testing and quality assurance standards that can reduce software-related risks to safety, security, and overall quality to an acceptable level, and which are transparent to anyone who wants to review them (as with FAA, FDA, and even food-safety standards)
  • Laws and regulations that can answer, as a start, basic questions such as, "Who's at fault when a self-driving car injures or kills someone?" and "Can click-wrap contracts of adhesion force people to give up rights that are currently theirs when they buy a vehicle?"

I'm not confident that either of these things will happen in the short run. There's too much money pushing to build these driverless vehicles now, too much FOMO at the state level for any one state to take on the regulatory aspects, and absolutely no willpower at the Federal level to do anything. In the long run, unfortunately, once people start dying and being maimed, that will drive some of the needed changes. It's a shame that it will probably take dead people to make automakers and lawmakers do what they could do right now.

— Published


It's Back

By Rex Black

Well, with our new website up and running, I've decided to get back into the blogging business. It'll be a leaner-and-meaner version, more like a cross between the RBCS Facebook posts that I've been doing over the last few years and my usual articles. To provide an opportunity for us to interact in public forums, I'll be tying it all together through our RBCS Facebook page and our RBCS Twitter feed, using both as a way of collecting and reacting to comments. If it turns out that doesn't work as well as I'm hoping, we'll get public commenting turned on here on the blog, though somehow having yet another way of commenting on something seems to be less good, not more good.

So, what do you want to talk about? Post a comment and let me know what topics you'd like to see discussed here in the blog.

— Published


Testing Metrics: Useful or Not?

By Rex Black

Some of you who follow me on Twitter (@RBCS, by the way) may have seen the ongoing debate between me on one side and what seems to be the entirety of the "context-driven school of testing" on the other. I've been saying that testing metrics, while imperfect, are useful when used properly. The other side seems to be saying...well, I'm not clear exactly what they are saying, and I'll let them say it themselves on their own blogs. Suffice it to say they don't like test metrics.

If you missed the Twittersphere brouhaha and you want more details on what I think about metrics, you can listen to my recorded webinar on the topic. You can ask the other folks what their objections are.

Once you do, I'd be interested in knowing your thoughts. Are test metrics useful to you? What problems have you had?  What benefits have you received?  What different situations have you used metrics in (e.g., Agile vs. waterfall, browser-based apps, level of risk, etc.), and how did that context affect the metrics?  Let me know...

— Published


Risk-based Testing, but Not Enough Stakeholders

By Rex Black

I received an interesting tweet from Shoshannah Gil, which then led to the following e-mail:
Hello,
I wanted to respond to your Twitter reply regarding my first attempt at using PRAM (Twitter post Feb 18, 2013 6:58am) in more detail.

The real success of this testing effort will of course be measured once the software is out in the field...but this is the "small" success: I was assigned a project to coordinate testing involving 25 integrating applications. Testing needed to be coordinated between unit and production testers, who would be testing at the same point in the development pipeline. (The company is exploring new testing methodologies, so utilizing testing resources in different ways is part of this.) I needed to find common ground from which to develop a test plan for this project. I had just seen your presentation on PRAM and brainstorming, so I thought focusing on the risks would bring testers together. From the software specs, I compiled a list of quality risk categories. I put these on the board, and as I passed out Post-it notes, I asked the newly formed team of unit/production testers from each application to identify the test types they would use to mitigate the risks. One of the interesting outcomes was that we found that one area of risk did not have as many Post-it notes as the others. We then needed to decide whether this was a risk area that did not need to be a priority, or whether it needed to be tested, but not by all apps. As a result of this exercise, testers collaborated on the test plans for their specific applications, while simultaneously evaluating the effect this project had on the product as a whole.

How does this experience differ from how you might have coordinated the test planning for this type of project? Any suggestions for the future?

Thank you for your interest,

Shoshannah Gil

First, glad to hear that you're having success with the PRAM technique. 

One thing that struck me about your e-mail is that you only mentioned testers as participants in your risk analysis process. Testers do have an excellent perspective on product quality risk, but their perspective, like that of all stakeholders, is incomplete.

As you continue to refine your use of the technique, be sure to include other business and technical stakeholders. You can find more details on the hows and whys of stakeholder involvement in the RBCS Digital Library.
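If you'd like to see the mechanics in miniature, below is a sketch in Python of one way to combine quality-risk ratings from several stakeholders into a single priority number, using the familiar likelihood-times-impact scoring. This is an illustration, not the PRAM process itself; the risk items, stakeholder roles, and ratings are all hypothetical.

    # Combine quality-risk ratings from several stakeholders (hypothetical data).
    # Each stakeholder rates likelihood and impact on a 1 (low) to 5 (high) scale;
    # the risk priority number is the product of the averaged ratings.
    from statistics import mean

    # One (likelihood, impact) pair per stakeholder, e.g., tester, business
    # analyst, developer.
    ratings = {
        "Data loss during nightly interface run": [(4, 5), (3, 5), (4, 4)],
        "Slow report generation under load": [(2, 3), (3, 2), (2, 2)],
        "Incorrect currency rounding": [(3, 5), (4, 5), (3, 4)],
    }

    def priority(votes):
        likelihood = mean(l for l, _ in votes)
        impact = mean(i for _, i in votes)
        return likelihood * impact

    # List the highest-priority risks first.
    for risk, votes in sorted(ratings.items(), key=lambda kv: -priority(kv[1])):
        print(f"{risk}: risk priority number = {priority(votes):.1f}")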

— Published


Estimation of Review Effort

By Rex Black

I had an interesting question from a reader, Stas Milev:

Hi Rex

I hope you are well. I wanted to ask you a question about test estimation. I am sure you have been asked many of these before, but the one I have is not really about the estimation techniques themselves (such as the use of historical data, dev effort, etc.).

There is one area of test estimation which is always arguable, hard to estimate, and, finally, hard to explain to sponsors, no matter how well you are prepared: the test analysis and design task, which is vague by definition. If we quickly decompose it into smaller pieces, we end up with the following simplified list of activities:

1. Analyse the test basis (if it exists).
2. Get ambiguities, inconsistencies, and gaps in the test basis resolved.
3. Apply the test techniques to create the test cases.

While you can more or less quantify 1 and 3 in terms of effort (let's assume we at least have something to work with in terms of a test basis), the issue is obviously with 2, where we are dependent on many people (business analysts, system analysts, the dev team, end-users, etc.).

There are two obvious options we can choose from:

Option 1: Assume there will be no gaps or issues, or that they will be resolved immediately and all our questions will get answers with no delays, and thus simply estimate 1 and 3. Of course, we will document the assumptions to highlight the risk if issues do arise. The problem with this option is that we know straightaway we will go over budget and will have to come back to the business sponsors and ask for more money. Nobody likes doing this, especially if we start asking for an additional amount every single time our inadequate estimates deviate from reality. Moreover, on some projects the budget is fixed as soon as it has been confirmed.

Option 2: Get this slippage time, or time spent on requirements clarification, estimated somehow based on previous experience. The issue here is that this is 'unexplained' effort, to the extent that it can't be justified by a statement like "but we know there will be issues or something will not be ready." A pretty likely scenario in this case would be: "Hey, we have just two requirements here. Why the hell does it take two weeks and not two days to create the tests for these two?"

To me, getting the right questions raised, asked, and answered is part of test analysis, and this activity is extremely important, as it prevents defects. To a certain extent, this is a very informal static testing or QA activity which needs to be built into the process, but nobody is willing to pay for it explicitly. On the other hand, ethics does not allow you to simply ignore the problems and test that buggy software is buggy. In the latter case, I normally still try to squeeze in the static test and get decision makers to accept the risk that problems with the requirements may arise very late.

I wanted to hear your recommendations on test analysis and design effort estimates and on test effort negotiation with business sponsors and project managers. It would also be great to hear your comments on both options, or perhaps on an option 3 if it exists.

Thanks
Stas Milev
ISTQB Certified Advanced Test Manager (CTAL)

Hi Stas--

A good question. What I would suggest is that the estimation for activities 1, 2, and 3 should be based on historical data. So, if you know that you have some average number of test cases per identified quality risk, per specified requirement, per supported configuration, etc., you should be able to estimate activities 1 and 3 based on the average number of hours of effort per test case. For activity 2, once again, if you have historical data on the average number of defects typically found per test basis document page, you should be able to estimate the number of defects you'll find. If you know the average time from discovery to resolution of such defects, and the average amount of effort for each such defect, you can then estimate the delay and effort.
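To make that concrete, here is a minimal sketch in Python of the arithmetic, using test cases per specified requirement as the driver; all of the averages are hypothetical placeholders for your own team's historical figures.

    # Estimating test analysis and design effort from historical data
    # (all figures hypothetical; substitute your team's historical averages).
    specified_requirements = 40
    test_cases_per_requirement = 3.0      # historical average
    design_hours_per_test_case = 1.5      # covers activities 1 and 3

    test_basis_pages = 60
    defects_per_page = 0.25               # historical test basis defect density
    clarification_hours_per_defect = 2.0  # effort to raise and track each issue
    days_to_resolve_each_defect = 3       # average discovery-to-resolution time

    test_cases = specified_requirements * test_cases_per_requirement
    design_effort = test_cases * design_hours_per_test_case                   # activities 1 and 3

    expected_defects = test_basis_pages * defects_per_page
    clarification_effort = expected_defects * clarification_hours_per_defect  # activity 2

    print(f"Estimated test cases: {test_cases:.0f}")                          # 120
    print(f"Design effort (activities 1 and 3): {design_effort:.0f} hours")   # 180
    print(f"Expected test basis defects: {expected_defects:.0f}")             # 15
    print(f"Clarification effort (activity 2): {clarification_effort:.0f} hours")  # 30
    print(f"Plan for resolution delays averaging {days_to_resolve_each_defect} days per defect")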

The metrics gathered about test basis defects could be used not only for estimation, but also for process improvement.

— Published



Copyright © 2016 Rex Black Consulting Services.
All Rights Reserved.
PMI is a registered mark of the Project Management Institute, Inc.