Unsolicited Bug Report   

I love the HopStop app for getting around the NYC subway and bus system. Especially the new “SmartTrip” feature.

But there’s a bugaboo they haven’t fixed – and I stumble into it almost every time I use the app.

1. Tap the field for “Destination” and type the address I want to go to.
2. Tap the big enticing “Go” button.
3. Oops! There’s the annoying “Please enter a start address.” popup, because I didn’t happen to remember to tap “current location” in the From field first.
4. Tap OK to make the annoying box go away.
5. Stab around randomly at the screen to try to figure out how to get back to the initial screen.
6. Now I have some random address in my destination field, and the app didn’t even save my carefully typed address.

I’d love it if the HopStop devs fixed this workflow!

The Thinking Tester, Emerging   

The thinking tester is curious.

Ilari argues that “Curiosity is the most important attribute in a software tester.” The value of that curiosity is avoiding the streetlamp effect, where the only places you look are the places that have been illuminated for you. Still, is there a torchlight effect? Off the street lamps, the path is no longer illuminated by someone else; instead, your own curiosity leads you toward whatever interests you and away from the uninteresting. Curiosity must be tempered so that it stays aligned with what your stakeholders value.

The thinking tester cares about philosophy.

Scott talks about the metaphysics of causality, and I talk about epistemology. To the context-driven skeptical relativists we say “There are more things in heaven and earth,… than are dreamt of in your philosophy” – a philosophy in which the limits of discourse are neatly circumscribed to the narrow knowables of the empirical sciences. Avoiding the cement of the universe might not prevent one from being an excellent tester, but it certainly impoverishes his understanding of the world in which he works.

The thinking tester makes the thinking visible.

The thinking tester sees a book as a learning opportunity.

Or at least a reading opportunity.

The thinking tester doesn’t have just “conversations”, but “transpection sessions”.

What is the difference between a transpection session and “two guys talking”, I ask? A transpection session is, in part, a deliberate use of the divergent/convergent thinking modes to effect a chartered conversation.

The thinking tester is thinking about the same things senior executives are – or should be.

The executives who hire Tripp Babbitt are thinking deeply about the dysfunctions of their organizations – concepts like failure demand. Who’s focusing on reducing and preventing failure demand in your organization? It raises the sneaking suspicion that all the push for “QA Processes”, Best Practices, CMM, ITIL, etc. is poor compensation for a lack of understanding of what Deming taught us.

The thinking tester is like a doctor – a good doctor.

Doctors make judgments quickly, under extreme time pressure and conditions of uncertainty, in a way that will stand up to scrutiny. Sound familiar? They recognize that there are no absolute indications of quality, but plenty of contraindications. They build up that judgment by encountering many exemplars over the course of a career; yet there are some contexts in which an algorithm is better than intuition. Once again, the consultant conundrum raises its head. How do we build up tacit knowledge without encountering a diversity of situations? Perhaps we need something similar to Grand Rounds as part of tester training.

The thinking tester includes randomization in test automation.

Randomness should be an incredibly powerful tool for finding out information about your software. Of course, the tradeoff is that random generators can’t tell whether what they generated is actually a path that would provide value to some person.
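As a minimal sketch of what that can look like in practice (the `plan_trip` function and the address format below are hypothetical stand-ins, not any real API), here is one way to throw seeded random inputs at a trip planner and record anything that blows up:

```python
import random
import string


def random_address(rng):
    """Build a plausible-looking street address from random parts.

    Purely illustrative: the street-name shape and suffixes are invented.
    """
    number = rng.randint(1, 9999)
    name = rng.choice(string.ascii_uppercase) + "".join(
        rng.choice(string.ascii_lowercase) for _ in range(rng.randint(3, 10))
    )
    suffix = rng.choice(["St", "Ave", "Blvd", "Pl"])
    return f"{number} {name} {suffix}"


def run_random_trip_searches(plan_trip, n=100, seed=None):
    """Throw n random origin/destination pairs at plan_trip (the code under test)."""
    seed = random.randrange(2**32) if seed is None else seed
    rng = random.Random(seed)
    failures = []
    for _ in range(n):
        origin, destination = random_address(rng), random_address(rng)
        try:
            plan_trip(origin, destination)
        except Exception as exc:
            # A crash is interesting regardless of whether the trip itself
            # is one a real user would ever plan.
            failures.append((origin, destination, exc))
    print(f"seed={seed}: {len(failures)} failures in {n} runs")
    return failures
```

Seeding the generator is the important design choice: it keeps the randomness repeatable, so a crash found on run 57 can be replayed exactly.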

The thinking tester is in balance.

Everything is balance: when to use automation and when to use sapient exploratory testing, when to push for quality and when merely to expose problems, when to theorize and when to simply deliver.

The thinking tester wears his sunglasses indoors.

At least when his normal prescription glasses are broken – and there’s something to be said for seeing the world through a different shade, as well as appearing to be an international man of mystery.

As you may have deduced, the theme of CAST 2012 was “the thinking tester.”

Coaches and Consultants   

One seemingly intractable feature of the software test conference scene is that it is dominated by consultants/contractors. I surmise this is intractable for two reasons: first, the obvious benefit to consultants of building their brand by presenting and networking at conferences; second, the wide variety of experiences consultants are exposed to allows them to better synthesize those experiences into a theoretical framework. Test Coach Camp is a special case of this phenomenon because coaching is so essential to effective test consulting at a high level.

Working at a single company on a single product line is limiting in terms of the ability to innovate the meta-field of testing, because the quantity of contexts is lower. To mitigate this, a company with stable cross-functional product teams needs to do everything it can to encourage cross-team collaboration. Otherwise, testers are doomed to acquire only the simplest level of knowledge: domain subject matter expertise, and perhaps some skills related to a particular technology stack. Overall, this reduces the value of testers to the organization, which drives down salaries and makes it even harder to attract testers who will make good theoreticians for the field. There’s an obvious collective action problem here.

Good testers are very good at inductive reasoning from small data sets; but inductive reasoning is more reliable the higher your sample size. In a large organization, the test-team-as-internal-consultant model seems highly effective; where that is impossible, a reasonable approximation is giving a dispersed test function as many ways as possible to regroup and learn from each other. I really like the two-tiered Community of Practice approach for this: a leadership group dealing with meta-concerns, and a SIG focused on directly applicable education, sharing, and improvement.

Get Ready for Gettier   

At 10:45am on Monday, I will be blowing minds in the Emerging Topics track at CAST 2012. The famous “Gettier Cases” challenged the millennia-old contention that knowledge is precisely justified true belief. These cases were transformative in the field of analytic philosophy, and I believe inserting the intellectual rigor of analytic philosophy into software testing will be similarly transformative.
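For anyone who wants the one-line version before reading the paper, the classical analysis that Gettier attacked can be stated compactly (the K/B/J notation here is just my shorthand, not Gettier’s):

```latex
% S knows that p if and only if:
%   (i)   p is true,
%   (ii)  S believes that p, and
%   (iii) S is justified in believing that p.
\[
  K_S(p) \iff p \;\wedge\; B_S(p) \;\wedge\; J_S(p)
\]
% Gettier's cases describe subjects who satisfy (i)-(iii) yet,
% intuitively, do not know p -- so justified true belief is not
% sufficient for knowledge.
```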

I can personally vouch that the two times I have given this talk to coworkers, the follow-up discussion was passionate and vehement.

Don’t miss it.

But whether or not you make it to the talk, you might want to read Gettier’s famous paper (PDF) and see if the analytic philosophy bug bites you.

Camp Day I   

Among the topics I heard discussed today:
– coaching non-testers through paired work
– being a role model
– working constructively with tester diversity
– testing challenges/games/exercises
– agile testing in a waterfall world
– coaching when you don’t know what you’re doing.

I added The Clean Coder and Management 3.0 to my reading list.

I discovered the name Gary A. Klein for further research.

I spent a lot of time learning from extremely smart people. I hope I asked some provocative questions.

Much to my surprise, it took us until 4:45 for the topic of “what is test coaching?” to come to the forefront… and, much to the relief of several conference attendees, the topic seemed to fizzle quickly. People here are much more interested in getting better at test coaching than in deciding exactly what test coaching is. (I wonder if having no firsthand experience of what an athletic coach does… helps or hinders developing a mental model of test coaching.)

There are many things other than working software that we can be testing. Some that struck me today:

* Are you testing your business model? For viability? For security holes? For whether it is delivering the value it should be? Who’s the stakeholder for this?

* Is your process aligned to the true needs of the stakeholders? Are individuals’ local optimizations oriented to global optimizations by proper incentives? Are you doing everything to reduce moral hazard in your organization?

* Are you regularly smelling your code for some of the 3 classic coder mistakes: poorly thought-out names and off-by-one errors? (A toy illustration of the off-by-one variety follows this list.)
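The off-by-one illustration, entirely made up rather than something from the camp:

```python
def sum_first_n(values, n):
    """Sum the first n elements of values.

    The classic off-by-one bug is writing range(1, n), which skips index 0
    and adds only n - 1 elements; range(n) covers indexes 0 .. n - 1.
    """
    total = 0
    for i in range(n):  # not range(1, n)
        total += values[i]
    return total


# Quick sanity check with made-up numbers: 10 + 20 + 30 == 60.
assert sum_first_n([10, 20, 30, 40], 3) == 60
```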

In microcosm, these seem to be three major approaches for testers making the legitimate case that their value is greater than mere bug-finding: pushing higher into the business and testing what the senior executive really cares about; pushing higher into the dev org and testing what the delivery managers really care about; pushing deeper into technology and testing what the developers really care about. All of these are also exercises in making other professionals on the team look good.

Humility and service are among the virtues of an effective test coach.

Test Coach Camp   

I’m very excited to be participating in the inaugural Test Coach Camp, starting this evening in beautiful San Jose, California.

(Why does anyone bother with offices in the Valley? If today is any indication, it would be better to just sit outside in the beautiful summer breeze all day. I only miss having my three big monitors from my desk. Maybe Apple should create a portable Cinema Display for this purpose.)

I’ll probably cover any real-time epiphanies on Twitter (@tvaniotis) and post a longer summation of my experiences here.

And stay tuned for some details about my upcoming Emerging Topics talk at CAST. It’s going to blow your socks off!

Three Mile Island   

It’s instructive to read accounts of colossal engineering failures and to consider how one could apply them to software and software testing.

The riveting and somewhat clinical accounts of the ethical implications of the incident reveal lots of details about how people react to crises and how the tools (hardware & software) they are provided with can help or hinder the resolution.

Of particular interest to those of us doing devops work or building support tools are some of these tidbits from the report of the President’s Commission:

Over 100 alarms went off in the early stages of the accident with no way of suppressing the unimportant ones and identifying the important ones…

Several instruments went off-scale during the course of the accident … these instruments were not designed to follow the course of an accident.

[My favorite] The computer printer registering alarms was running more than 2½ hours behind the events and at one point jammed, thereby losing valuable information.

(Page 29-30 in print)

Are the tools that monitor your system capable of suppressing unimportant alarms in a crisis? Can your metrics go off-scale when values run far outside the normal range? Is your monitoring/logging system capable of handling dramatically more messages than usual – does it get behind, or does it drop messages? There are obvious drawbacks to either approach.
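One middle ground is sketched minimally below (every name and threshold is invented, not taken from any real monitoring tool): always pass critical alarms through, and collapse repeats of lower-priority alarms into a running count instead of a backlog.

```python
import time
from collections import defaultdict


class AlarmThrottle:
    """Toy alarm-flood handler: everything here is an invented sketch.

    Critical alarms always pass through. A lower-priority alarm that
    repeats within `window` seconds is counted rather than emitted, and
    the count is reported the next time that alarm is allowed through.
    """

    def __init__(self, window=60.0):
        self.window = window
        self.last_emitted = {}              # alarm key -> last emission time
        self.suppressed = defaultdict(int)  # alarm key -> repeats swallowed

    def handle(self, key, critical=False, now=None):
        now = time.time() if now is None else now
        if critical:
            return f"ALERT {key}"           # never suppress a critical alarm
        last = self.last_emitted.get(key)
        if last is not None and now - last < self.window:
            self.suppressed[key] += 1       # repeat inside the window: swallow
            return None
        self.last_emitted[key] = now
        count = self.suppressed.pop(key, 0)
        suffix = f" (+{count} suppressed)" if count else ""
        return f"alarm {key}{suffix}"
```

Summarizing repeats means nothing important is dropped outright and the stream cannot fall hours behind the way the TMI printer did – though deciding what counts as “critical” is, of course, the hard part.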

Are the controls you would use to investigate a disaster as well designed as your product? TMI’s control room was full of design flaws, from labels that obscured critical indicator lights to necessary switches located on the wrong side of equipment.

(I have a vague memory of a similar incident with a teletype machine failing to print alerts in a scifi thriller – maybe Michael Crichton? Please write a comment if you remember the source and whether the author was inspired by the TMI incident.)

PDF of the President’s Commission Report on Three Mile Island

Holding Business Back   

Two gems from the Harvard Business Review blog:

Tackling the innovator’s (or maybe just the winner’s) dilemma by building a second corporation. Of course, you have a problem of balancing incentives here. First, you must keep the second business nimble and unlikely to chase dead ends for too long (a sort of moral hazard: individuals who end up personally invested in the success of duds will keep them alive, because admitting failure means losing their funding). Second, you want to avoid making joining the “innovation” corporation too risky – otherwise everyone will want to stay in the cash-cow business, especially if you have a golden-handcuffs retention strategy.

Also, why HR still isn’t a strategic partner to the business. (The old logical trick of being able to draw any conclusion from a faulty premise may be at play, however.)

Legendary   

An epic tale of tracking down a very strange bug in the Mac version of the canonically nerdy typesetting tool LaTeX. It’s interesting to observe how debugging happens across a large open-source team without anyone designated as tester or QA.

(Stylistically, I find this to be a compelling use of the character-based narrative format for telling a testing story, a trope I find cloying in many software testing articles. Perhaps it succeeds because in this case the characters are real rather than obviously invented in service of the authorial point.)

For more context, MacTeX’s full story of the font cache bug.