The Thinking Tester, Emerging

The thinking tester is curious.

Ilari argues that “Curiosity is the most important attribute in a software tester.” The value of that curiosity is in avoiding the streetlamp effect, where the only places you look are the places someone else has illuminated for you. Still, is there a torchlight effect? You may be off the street lamps, but your own curiosity leads you toward whatever you find interesting and away from the uninteresting. Curiosity must be tempered, so that it stays aligned with what your stakeholders value.

The thinking tester cares about philosophy.

Scott talks about the metaphysics of causality, and I talk about epistemology. To the context-driven skeptical relativists, who neatly circumscribe the limits of discourse to the narrow knowables of the empirical sciences, we say “There are more things in heaven and earth,… than are dreamt of in your philosophy.” Avoiding the cement of the universe might not prevent someone from being an excellent tester, but it certainly impoverishes his understanding of the world in which he works.

The thinking tester makes the thinking visible.

The thinking tester sees a book as a learning opportunity.

Or at least a reading opportunity.

The thinking tester doesn’t have just “conversations”, but “transpection sessions”.

What is the difference between a transpection session and “two guys talking”, I ask? A transpection session is, in part, a deliberate use of the divergent/convergent thinking modes to effect a chartered conversation.

The thinking tester is thinking about the same things senior executives are – or should be.

The executives who hire Tripp Babbitt are thinking deeply about the dysfunctions of their organizations: concepts like failure demand. Who’s focusing on reducing and preventing failure demand in your organization? It raises the sneaking suspicion that all that push for “QA Processes”, Best Practices, CMM, ITIL, etc. is a poor compensation for a lack of understanding of what Deming has taught us.

The thinking tester is like a doctor – a good doctor.

Doctors make judgments quickly, under extreme time pressure and conditions of uncertainty, in a way that will stand up to scrutiny. Sound familiar? They recognize that there are no absolute indications of quality, but plenty of contraindications. They build up that judgment by encountering many exemplars over the course of a career; yet there are some contexts in which an algorithm is better than intuition. Once again, the consultant conundrum raises its head. How do we build up tacit knowledge without encountering a diversity of situations? Perhaps we need something similar to Grand Rounds as part of tester training.

The thinking tester includes randomization in test automation.

Randomness should be an incredibly powerful tool for finding out information about your software. The trade-off, of course, is that a random generator cannot tell whether the path it has generated is one that would actually provide value to some person.
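To make the idea concrete, here is a minimal sketch of seeded random test generation in Python. The function under test, `reverse_twice`, and the invariant it checks are illustrative assumptions of mine, not anything from the talk; the point is the shape of the technique, seeding for reproducibility and checking only cheap invariants, which is exactly the limitation above: the generator finds inputs, not value.

```python
import random

def reverse_twice(items):
    """Function under test (illustrative): reversing a list twice should return the original."""
    return list(reversed(list(reversed(items))))

def random_case(rng, max_len=20):
    """Generate one random list of integers."""
    return [rng.randint(-1000, 1000) for _ in range(rng.randint(0, max_len))]

def run_randomized_tests(seed=42, iterations=1000):
    # Seeding makes the random run reproducible, so any failure can be replayed.
    rng = random.Random(seed)
    for i in range(iterations):
        case = random_case(rng)
        result = reverse_twice(case)
        # The oracle here is only a cheap invariant; the randomness finds inputs,
        # it cannot judge which paths matter to a stakeholder.
        assert result == case, f"seed={seed}, iteration={i}, input={case}"

if __name__ == "__main__":
    run_randomized_tests()
    print("1000 randomized cases passed")
```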

The thinking tester is in balance.

Everything is a balance: when to use automation and when to use sapient exploratory testing, when to push for quality and when merely to expose problems, when to theorize and when simply to deliver.

The thinking tester wears his sunglasses indoors.

At least when his normal prescription glasses are broken – and there’s something to be said for seeing the world through a different shade, as well as appearing to be an international man of mystery.

As you may have deduced, the theme of CAST 2012 was “the thinking tester.”
