As followers of the RBCS Facebook and Twitter micro-blogs will know, last week we offered an ISTQB Advanced Test Analyst course. The corresponding exam has the mouthful of a name "ISTQB Certified Tester Advanced Level-Test Analyst," typically abbreviated "CTAL-TA." CTAL-TA is also the authorized resume acronym for those who pass the exam, by the way.
Jennifer Parran of Booz Allen Hamilton took an electronic version of the exam at a Kriterion exam center just this week, and she passed on the following details. This is really good advice for anyone taking an ISTQB Advanced exam, especially the Test Analyst and Technical Test Analyst exams.
I just wanted to let you all know I took the exam yesterday and PASSED! To give you an idea of what it was like… I took the electronic exam at the Career Technology Center in Falls Church [Virginia]. You have 3 hours once you click the “Start” button and there is a timer counting down on the screen (try not to let that distract you). (Also you may be in the room with other people taking other exams, some of which are allowed to bring books in and they might be making noise flipping back and forth in the book.)
You can mark questions for “review” if you want to review the answer or if you want to skip a question and come back to it later. You can also navigate backwards and forwards so if you answer a question and want to go right back to it, you are able to do so. Once you’re all done and click the “Submit” button (at which point I felt like I wanted to throw up) it takes a couple seconds and then displays your score and that you “passed”. It also sends you an email with this information which I saw later.
Oh yeah…don’t touch the keyboard while taking the exam! I accidentally hit the spacebar and it backed me out of the exam and I had to get the proctor to log back in and bring the exam back up. THANKFULLY it saved all my answers as I only had an hour left but to play it safe – only use the mouse!
The questions were very similar in style to the sample questions [provided in the RBCS live and e-learning courses and exam prep guides] so definitely do all of those. My questions covered every chapter in the advanced syllabus so you definitely want to read the entire syllabus [as discussed in the class]. Some of the questions seemed more foundational but it was hard to distinguish between K1 Foundation and K1 Advanced. I also had questions pertaining to several of the referenced standards so like Rex said in class – you don't want to be "sad" when you're reading one of those questions and you can't remember what it is.
Make sure you are very comfortable with the black-box techniques like we practiced in class because there are quite a few questions that are very long and involve a lot of reading [that involved] these techniques so you don’t want to have to spend a lot of time reading the question and remembering how to do the technique. These are of course the K3 questions.
As far as preparation goes, I read through all of the Foundation and Advanced syllabi once and then skimmed over my “highlights” several more times (once right before I took the exam). I did all the sample questions and mock exams. I skimmed over the referenced standards a couple times. I studied a little Monday night, 3 hours Tuesday night, all day Wednesday, and then read over my highlights right before the exam.
Good luck to all of you and Thanks Rex for all the great explanations and preparation!
By the way, for those of you considering taking an ISTQB exam and still mulling over your exam preparation options, RBCS will have an exciting announcement in the next couple weeks on this topic. Watch our Facebook page, Twitter posts, and our newsletter for more information. If you've already decided to take an RBCS course, you can find the course schedule on the home page.
I'm running an ISTQB Advanced Test Analyst course this week in DC. This course includes a hands-on set of exercises based on a realistic requirements specification. This allows people to transfer the techniques immediately to real-world projects when they return from training, because they have already done so in the class.
In this particular class, we use a requirements specification based on a real project. This requirements specification was originally written by a senior programmer--not a business analyst or requirements engineer--on the project. I anonymized the requirements and reorganized them, for use in the training course, but other than that did not change their quality or coverage.
We did a review exercise today. People did 30 minutes of preparation, followed by a 30-minute walkthrough. We found over 20 defects, some of which had multiple manifestations in the requirements specification. We concluded that, in a real-world situation, the document would need revision to repair these defects, which would cause significant changes, and thus would necessitate a re-review. Basically, this means that these requirements were not an adequate basis for further development or for creation of test cases.
Interestingly enough, one of the attendees, an experienced, professional tester, remarked, "These are the best requirements I've ever seen." Only one participant said she typically sees better requirement specifications.
So, is this the best we can do, in this industry? We must make do with requirements written by people with no real qualifications to do so? We must proceed with development and testing using fundamentally flawed requirements as a basis?
I'd be interested in your thoughts. How do requirements affect you and your testing? Do you receive adequate requirements? If so, why do you think you do? If not, why do you think you don't?
Avid blog reader Din asks an interesting question via e-mail:
It's me again. As I read to gain a deeper understanding of different software development processes or life cycles (or whatever others may call them) and match them back to the standard testing process institutionalized by ISTQB, I need your feedback on the V-model explained in the ISTQB syllabus as well as the W-model introduced by Paul Herzlich in 1993. As we learned regarding static and dynamic testing in the syllabus, where we are involved in activities such as review of requirements and also actual execution of tests for the system, I observe that those practices represent the total picture of applying the W-model. Looking forward to your feedback.
I think that the W-model and the V-model are essentially presenting the same set of activities; it's just that the W-model shows them as additional nodes on the "inside" of the left and right sides of the V. In other words, in the V-model, the "cross-arms" connecting the development activities on the left side with the test activities on the right side, labelled "develop tests" in the figure below, would include the review of the various test basis documents, which the W-model shows explicitly.
I think an important thing to remember about these lifecycle models--whether V-model, W-model, Agile model, iterative model, etc.--is encapsulated in a witticism attributed to W.E. Deming: "All models are wrong; some are useful." What he meant by that, as I understand it, is that every model is a simplification of reality that omits some elements of reality. To the extent that the model helps you think more clearly about something you need to do or understand in the real world, that's great; it's a helpful model. To the extent that a model becomes dogma and interferes with real-world progress, then it becomes a problem. We see this happen with some of our clients from time to time, where following a model becomes more important than doing the right thing in terms of achieving the goals of the project.
So, Din, if the W-model helps you and your colleagues think more clearly and act more correctly on your projects--especially in terms of integrating testing into the lifecycle from beginning to end--then that's great. Personally, I prefer the simpler V-model diagram, as I understand that implicit in that diagram is testing involvement from the start of the project and also the principle that every work product should be subjected to both static and dynamic tests, starting as early as possible.
The US space shuttle program is coming to an end this year. In the next three months, the last two shuttle missions will occur. The retiring shuttles will be sent to museums, as discussed here: http://bit.ly/fwAVIk
NASA's shuttle program has been a triumph of human engineering, but it has not been without tragedy. Two shuttles--and, more importantly, two crews of astronauts--were lost due to catastrophic failures. The Challenger was destroyed by an O-ring failure on launch, while the Columbia disintegrated on re-entry due to foam-debris damage to its thermal protection system, damage which occurred on launch but did not cause the loss of the shuttle until re-entry.
As a US citizen and as a human being, I am proud of the accomplishments of the shuttle program, but also I was greatly saddened by these two shuttle failures. As a software engineer, I feel a sense of poignant pride that the complex software systems that controlled the many shuttle missions never resulted in a loss of life, a loss of mission, or a loss of a shuttle. As I've said before (http://www.rbcs-us.com/software-testing-resources/148), we know how to build quality software; we just don't always do it.
Here's another good question from a reader. Tingting Ren asks:
2.10. Sample Exam Questions
3. Which of the following is not always a pre condition for test execution?
A. A properly configured test environment
B. A thoroughly specified test procedure
C. A process for managing identified defects
D. A test oracle
I think the answer is C but in the book it gives B.
This is a good question. The answer is "B" because oftentimes testers use exploratory testing and logical (high-level) test cases when running tests, rather than concrete (low-level) test cases that spell out every action to be taken and result to be observed. For example, see Whittaker's book How to Break Software.
The answer cannot be "A" because the test environment must be configured correctly in order for the results to be reliable. The answer cannot be "C" because we must have an agreed-upon way--a bug tracking system, ideally--for handling failures that we observe when running tests. The answer cannot be "D" because we must have a way of distinguishing expected results from unexpected results.
I received an interesting question from a colleague in Malaysia, Dhiauddin Suffian. He wrote:
Hi Rex, I have one simple question with regard to the Fundamental Test Process. As we are aware, the process involves Planning & Control, Analysis & Design, Implementation & Execution, Evaluating Exit Criteria & Reporting, and Test Closure. My concern is with Test Planning and Control, since it runs alongside the whole process. I have no issue with the "Planning" portion. My question is directed to the "Control" part. What "Control" activities are involved in the subsequent phases, i.e., what "Control" activities happen in Analysis & Design, Implementation & Execution, Evaluating Exit Criteria & Reporting, and Test Closure, respectively? Thanks. Regards, -Din (CTFL, CTAL-TM)-
Test control can be thought of as the test management tasks required throughout the test process in order to keep the testing aligned with the software development process, the needs of the project, and the needs of the organization. These tasks occur as needed, based on the judgement of the test manager or other members of the project team, and can also occur on a planned basis.
For example, we might plan to regularly check our risk analysis to see if we have discovered new risks, or uncovered information that tells us we should revise the risk levels for the existing risks. As another example, if we find that a key piece of testing hardware will be available earlier than we expected, we might re-work our test execution schedule to accelerate the tests that use that hardware.
Yet another example could be if we discovered, during test execution, that a key test staff member will be leaving the team. In this case, if we did a thorough job during test planning, we might have identified a contingency plan for loss of a key staff member. This is a classic project risk, after all, and a good manager should consider all such risks. If we do have a contingency plan, triggering that contingency plan would be an act of test control.
Here's an analogy: Think of the test plan as a roadmap, with the starting location and the final destination clearly indicated. This roadmap will help you drive to your chosen destination. However, throughout your drive, you should plan to stop at traffic lights, mind your lane and speed, adapt to unexpected events (such as pedestrians stepping into a crosswalk), and even adaptively overcome errors in the roadmap (such as discovering a planned route is closed due to roadwork). While a good test plan makes test control easier--just as a good roadmap makes driving easier--the smart manager remains ever alert to the possible need for test control.
For those readers outside the US or too young to remember this ad slogan, there is a glue-trap-based insect control device called the Roach Motel. If a cockroach (or other bug) walked onto the floor of the rectangular box (open at the ends, of course), it would become trapped on the glue. The slogan was, naturally enough, "Bugs check in but they don't check out."
While it might seem like I'm setting up a story about a software development team that never fixes bugs, I actually have a different story to tell.
A number of years ago, I broke one of my rules--be a conservative adopter of new technologies--and was an early adopter of LinkedIn. However, within a short period of time, I read an article (I believe in Computer World or Information Week) that said LinkedIn was planning on "monetizing" its business model by selling information to recruiters and other interested parties. So, I stopped using it and started declining invites.
Being somewhat lazy--and not having many contacts at risk--I didn't get around to trying to delete the account for some time. About a year or so ago, I finally got tired of having to decline two or three LinkedIn invites every week, so I went ahead and deleted the account.
It wasn't easy to figure out how to do it. Usability of that feature didn't seem to be high on the list. Finally, I did manage to get to the right page and did get a confirmation that the account was deleted. It turns out, I should have saved a screen shot of that confirmation message.
Because, to my surprise, the LinkedIn invites keep on coming. I guess they didn't test whether this delete feature works, or at least didn't test it very well. Of course, to some extent, this confirms my decision not to get caught up in the LinkedIn mania. I'm glad I didn't get a lot of contacts in there before I read that article.
So, if you are a colleague of mine, and you send me a LinkedIn invite, you're likely to receive an e-mail much like this one:
I hope you are doing well. It's been a while since we've touched base. How are things going with you?
I've stopped using LinkedIn because I'm not comfortable with the way they manage personal data and contact data.
In the meantime, I will probably spend some time in the next few weeks trying to figure out how to check out of the Roach Motel of social networking. I'll post an update if I can find any way to unstick my little paws from the glue-trap floor.
Here's a question about equivalence partitioning.
My name is Nhat Khai Le and I'm taking the ISTQB Advanced Technical Test Analyst online course (provided by RBCS).
I faced this sample question:
You are testing an accounting software package. You have a field which asks for the month of a transaction.
You know that months are treated differently in this software depending on the month of the quarter (first month of the quarter is treated differently than the second which is treated differently than the third.)
How many test cases, minimum, would you need to test this field to get equivalence coverage?
I chose answer B, i.e., 3, but the answer is C, i.e., 5.
The answer is C because there are three months in each quarter, each of which is treated differently, plus the two months immediately preceding and following the quarter. It's important to remember that equivalence partitioning includes partitions outside the "normal" range when using it for testing.
To help visualize the answer, see the illustration below:
Equivalence Partitioning for the Months in the Quarter:
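The five partitions can also be sketched in code. This is a minimal illustration under assumptions of my own: the field is represented as an integer month-of-quarter, and the function and partition names are invented for the example, not taken from the course:

```python
# A minimal sketch of the five equivalence partitions for the
# month-of-quarter field: three valid partitions (the three months of
# the quarter, each handled differently) plus two invalid partitions
# (before and after the quarter). Names here are illustrative only.

def month_partition(month_of_quarter):
    """Classify an input value into one of five equivalence partitions."""
    if month_of_quarter < 1:
        return "invalid: before quarter"
    if month_of_quarter > 3:
        return "invalid: after quarter"
    return "valid: month %d of quarter" % month_of_quarter

# One representative value per partition gives the minimum of five
# test cases required for equivalence coverage.
representatives = [0, 1, 2, 3, 4]
assert len({month_partition(m) for m in representatives}) == 5
```

Choosing three representatives (one per valid month) would cover only the valid partitions, which is why answer B, 3, falls short.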
Reader Gianni Pucciani has a good question about a question in the Advanced Software Testing: Volume 2 book:
I have another doubt about a question in Advanced Software Testing Vol. 2. It is about the first question in Chapter 7, Incident Management. The book says that the correct answer is C, "Insufficient Isolation". What does it mean? I had chosen B, "Inadequate classification information", because all the rest was not making sense to me. For B, I could justify it by saying that more information could be added to the incident report, e.g., the error message displayed by the application.
Here is the question from the book:
Assume you are a test manager working on a project to create a programmable thermostat for home use to control central heating, ventilation, and air conditioning (HVAC) systems. In addition to the normal HVAC control functions, the thermostat also has the ability to download data to a browser-based application that runs on PCs for further analysis.
During quality risk analysis, you identify compatibility problems between the browser-based application and the different PC configurations that can host that application as a quality risk item with a high level of likelihood.
Your test team is currently executing compatibility tests. Consider the following excerpt from the failure description of a compatibility bug report:
1. Connect the thermostat to a Windows Vista PC.
2. Start the thermostat analysis application on the PC. Application starts normally and recognizes connected thermostat.
3. Attempt to download the data from the thermostat.
4. Data does not download.
5. Attempt to download the data three times. Data will not download.
Based on this information alone, which of the following is a problem that exists with this bug report?
A. Lack of structured testing
B. Inadequate classification information
C. Insufficient isolation
D. Poorly documented steps to reproduce
The answer is "C" because we don't see any evidence of the tester trying some different scenarios to see if the data downloads properly. The testing is clearly well-structured and carefully thought out, and the steps to reproduce are well-described. The classifications are not given, so we have no way of saying, based on this information alone, whether those classifications are correct.
I received an interesting e-mail from long-time reader, John Singleton:
I'm so proud of my 6-year old, Josh. This weekend, we were playing Wii. Those of you likewise hooked on Wii will know that pushing the Home button on the controller pauses the current game and allows the user to access some configurations and such. It also displays the battery level for all connected Wii controllers. This time, the battery icon had only one bar, with the red color to get your attention. Josh said, "Hey Dad, have you ever noticed that when you push Home, it shows the battery is full for just a second before it shows that it's empty?"
I had him show me, and sure enough. There on the Home screen, it shows the battery meter as blue and full for just an instant before it refreshes with the red, almost-empty icon. I think I spouted off something geeky about how it probably shows a default value for just a moment while it queries for the actual battery value.
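My guess about a default value can be sketched as a hypothetical mechanism. None of this is from the Wii's actual code; it only shows how painting a default level before a slow hardware query completes would produce exactly the flash Josh noticed:

```python
# A hypothetical sketch of the bug's likely mechanism: the UI paints a
# default battery level immediately, then repaints once the slow hardware
# query returns. All names here are invented for illustration.

FULL_BARS = 4  # default level painted before the real reading arrives

class BatteryMeter:
    def __init__(self, read_hardware):
        self.read_hardware = read_hardware  # slow query; runs after first paint
        self.bars = FULL_BARS               # stale default shown on first frame

    def first_paint(self):
        return self.bars  # looks full for an instant, regardless of reality

    def refresh(self):
        self.bars = self.read_hardware()    # query completes; repaint
        return self.bars

meter = BatteryMeter(read_hardware=lambda: 1)  # battery is nearly empty
assert meter.first_paint() == 4  # the brief "full" flash
assert meter.refresh() == 1      # corrected reading on the next frame
```

A fix in this sketch would be to paint nothing (or a "querying" state) until the real value arrives, rather than a plausible-looking default.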
I don't know if I mentioned it, but my son also has some special needs, including Sensory Processing Disorder, which tends to make one much more acutely aware of any kind of sensory stimulus. I wonder if this episode is similar to the kinds of dynamics that have led people to hire individuals with Asperger's or Autism Spectrum Disorder for software testing. Or maybe he's just quirky, like his dad...
Regardless, my heart swells to about ten times its normal size when I hear my six-year-old finding obscure software defects in robust commercial products!
An interesting set of questions raised here, John. Certainly, my children are very adept at digital devices, but they don't seem to show that level of awareness for problems. Is testing an innate skill? Do certain traits which are thought of as "disorders" in some contexts actually provide skills in other contexts?