Book Review: Exploratory Software Testing

Posted on Jan 28, 2014

I’d heard of the subject before, vaguely knew some things about it, thought it was the type of testing I had naturally evolved to, and vowed to really find out by reading Exploratory Software Testing by James A. Whittaker.

As it turns out, my weak understanding was mostly wrong. The entire strategy is built on testing “tour” metaphors: picture yourself as a tourist in a new city, that new city being your code. What I did understand, and agree with fully in both mind and practice, is the idea that strict adherence to a detailed test case or test plan is not efficient, not enjoyable, and, metaphorically speaking, not the best way to see the town. Instead, write test cases that act as a general guide and do your own exploration so that each run introduces variance. The tour metaphor is ever-expanding and will be different for each team and each feature. The author presented his list of favorite tours, which can conveniently be found on MSDN (the author used to [maybe still does?] work at Microsoft, and most of this book is MS-centric).

My takeaways (as usual, personal comments and clarifications in italics):

  • Behind most good ideas is a graveyard of those that weren’t good enough. This is universal. Don’t be afraid to screw up or do something worthless, because it is all part of the process.
  • The modern practice of manual testing is aimless, ad hoc, and repetitive.
  • Software is peerless in its ability to fail.
  • Software is not, and likely never will be, bug free.
  • There is no replacement for the tester attitude of “how can I break this?” Any development team that thinks it can get away without dedicated QA people is fooling itself. I’ve seen this time and again in the gaming industry, as I’m privy to many an alpha and beta through folks I know in the industry and my previous experience running a major gaming site. I’ve not yet been close to a project that ignored QA and did not fail. Not to say such projects never succeed, but my personal count is a goose egg. It is too hard for someone who was in the code to step back and say “This sucks.” You need someone disconnected, but fully respected, to say that.
  • The less time a bug lives, the better off we’ll all be. Bugs found in design are the best kind.
  • If testers are judged on the number of tests they run, automation will win every time. If they are judged on the quality of the tests they run, it’s a different matter altogether.
  • Automation suffers from many of the same problems that other forms of developer testing suffer from: it’s run in a laboratory environment.
  • Test automation can find only the most egregious of failures: crashes, hangs (maybe), and exceptions. … Subtle/complex failures are missed.
  • Manual testing is the best choice for finding bugs related to the underlying business logic of an application.
  • Tester-based detection is our best hope at finding the bugs that matter.
  • Exploratory testing allows the full power of the human brain to be brought to bear on finding bugs and verifying functionality without preconceived restrictions. Don’t let yourself get bogged down by manual scripts. THINK. Otherwise you might as well be a robot and then be automated.
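
To make the automation point above concrete, here is a minimal sketch of my own (not from the book; the `apply_discount` function and its bug are hypothetical): an automated smoke check passes because nothing crashes, hangs, or throws, while the business-logic bug sails right through.

```python
# Hypothetical example: automation that only catches egregious failures.
def apply_discount(price, percent):
    # Subtle business-logic bug: should be price * (1 - percent / 100),
    # but instead subtracts percent-of-percent from the price.
    return price - percent / 100 * percent

def automated_smoke_test():
    # Typical automated check: it runs, returns a number, nothing blew up.
    result = apply_discount(100.0, 20)
    assert isinstance(result, float)  # passes; the bug goes undetected
    return result

print(automated_smoke_test())  # prints 96.0
```

A human tester stepping through the same flow would immediately ask whether 96.0 is really 20% off of 100; a robot asserting “returns a float, no exception” never will.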

EPIC DIGRESSION: I’ve never had luck outsourcing testing to India, and this is the reason why. The workers there were not encouraged to think; they were treated as, and expected to function as, worker bees. Follow instructions to a tee, don’t raise your hand, put in your hours, and get paid. I was involved early in outsourcing, and several stories from colleagues have led me to believe it has changed for the better, but it was so bad for me that I still have that filthy taste in my mouth and would never proactively attempt outsourced testing in Asia for my team until I saw it working first-hand somewhere. At a recent QA Meetup, a colleague at Symantec mentioned they have it working really well there, though it took A LOT of effort from both sides of the ocean. I think that is what we missed in the early days: we assumed we could just drop things on these people and they’d get it done. Outsourcing to European countries now, I don’t know whether it is the individuals or the culture that matters, but we get amazing work from a group that considers themselves part of our team, company, and culture. It is a relationship that has been cultivated over many years, and the fruit it bears is wonderful.

  • Exploratory testing is especially suited to modern web application development using agile methods. … Features often evolve quickly, so minimizing dependent artifacts (like pre-prepared test cases) is a desirable attribute. … If the test case has a good chance of becoming irrelevant, why write it in the first place?
  • Having formal scripts can provide a structure to frame exploration, and exploratory methods can add an element of variation to scripts that can amplify their effectiveness.
  • Start with formal scripts and use exploratory techniques to inject variation into them. In my case, I do this when regressing a feature. On the initial pass I stick very closely to my script; on subsequent runs, tracing those same code paths is unlikely to discover anything new. That is when I inject variation: I let my original test plan guide me to all the code, but I won’t take all the same steps to get there.
  • We need the human mind to be present when software is tested.
  • Testing is infinite; we’re never really done, so we must take care to prioritize tasks and do the most important things first.
  • Don’t allow stubbornness to force you into testing the same paths over and over without any real hope of finding a bug or exploring new territory.
  • All software is fundamentally the same. … [They] perform four basic tasks: They accept input, produce output, store data, and perform computation.
  • It is good to keep in mind that most developers don’t like writing error code.
  • No matter how you ultimately do testing, it’s simply too complex to do it completely.
  • As testers, we don’t often get a chance to return at a later date. Our first “visit” is likely to be our only chance to really dig in and explore our application. We can’t afford to wander around aimlessly and take the chance that we miss important functionality and major bugs.
  • Tours represent a mechanism both to organize a tester’s thinking about how to approach exploring an application and to organize the actual testing. A list of tours can be used as a “did you think about this” checklist.
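
The tour names below come from Whittaker’s list; everything else in this sketch (the checklist structure, the `plan_session` helper, the sample steps) is my own illustration of combining the two ideas above: tours as a checklist, plus variation injected into a formal script.

```python
import random

# Tour names are Whittaker's; using them as a "did you think about this"
# checklist is the book's suggestion, but this structure is my own sketch.
TOUR_CHECKLIST = [
    "Guidebook tour",            # follow the user manual to the letter
    "Landmark tour",             # hit the key features, varying the order
    "Garbage collector's tour",  # methodically visit every screen and dialog
    "Saboteur tour",             # starve resources and force error paths
]

def plan_session(scripted_steps, seed=None):
    """Start from the formal script, then inject variation: keep the first
    step (e.g. login) fixed and shuffle the rest, so each regression run
    traces slightly different paths through the same code."""
    rng = random.Random(seed)
    first, rest = scripted_steps[0], list(scripted_steps[1:])
    rng.shuffle(rest)
    return [first] + rest

steps = plan_session(["login", "create record", "edit record", "delete record"], seed=7)
print(steps)  # same steps as the script, in a different order per seed
```

The point is not the code itself but the discipline: the script still guarantees coverage of every step, while the shuffle guarantees each run sees the town a little differently.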

Despite my plentiful notes, I didn’t really care for this book. I feel it could have been summarized in 40 pages or so. The touring metaphor is a great one that I’ll keep handy in my tool belt, but the book had a number of user stories (don’t think Agile) from using the tours that didn’t really add anything. There was a good amount of general testing knowledge that is always a nice reminder, but better suited to a different book, and it finished with a wildly optimistic, futuristic section on the future of testing that had nothing at all to do with the subject I cared about reading. I had a fair amount of difficulty getting through this one, but I’m glad I did, because ultimately it will make me better at what I do.

2 out of 5. Too much noise.

My other QA book reviews:

Don’t Make Me Think – A Common Sense Approach to Web Usability