Automation will kill coding before it kills testing

I twitch-posted this reply to a Reddit /r/QualityAssurance thread because OP was told that “everyone says” that software testing is “a temporary field”, that “automation is going to kill it in another 3-4 years” and “If you go in this field you will be unemployed after a few years and it would be very difficult for you to switch jobs”.

When “everyone says” something like that about testing, I get defensive, I admit. Maybe it’s because, even after all this time, “everyone” still seems to know more than me. So, let’s actually see whether I can back up my quickfire reply with some cogent argument.

Firstly, I’m not worried about the “everyone says…” part. We should know by now that “everyone says” is a fallacy. Argumentum ad populum; lots of people think it, ergo it is true. Use of this term, and others like it, should immediately set off the bullshit detectors of anyone who spends any time whatsoever reading stuff on the internet.

Now to “Automation is going to kill it in another 3-4 years”, starting with the 3-4 year time frame. Four years is not that far away. How many of you in software development land have heard even inklings of using machine learning or AI anywhere near what you do on a daily basis? I’m willing to bet much of your time is still spent grumbling about how you’re mis-using Jira, or abusing Gerrit, or any number of other procedural wastes.

That’s what most modern software development is; smart people trying to do something useful while wading through procedural waste. We spend so much time looking to see where we’re putting our feet, we have very little time left to look at the killer robots on the horizon.

To the second part; automation is going to kill testing. Within a couple of generations, automation is going to impact pretty much everyone’s jobs, perhaps even render working itself obsolete (if we’re really lucky). I agree that the machines are out there; they are coming to take our jobs, but I think it’s more like 5-10, probably even 15 years before it makes much of an impact.

Before I get onto which will die first, coding or testing, let’s just quickly deal with the final point; “If you go in this field you will be unemployed after a few years and it would be very difficult for you to switch jobs”.

“Unemployed after a few years”? I don’t think so. Sure, some companies have dispensed with their dedicated testing departments in the belief that, with DevOps, they can respond rapidly to customer issues and deploy fixes. I can see how that approach can work for certain business models, but certainly not for all. Would British Airways want to have to phone Rolls-Royce during a London – New York flight to push a fix because the engines time out after 20 minutes? Probably not.

Even if the nature of software development has changed, there will still be roles for humans until the point at which AI can do everything. Arguably, testers are better positioned to move into different roles than anyone. We have good broad technical and business knowledge of the products and the users. We have good communications and analytical skills. You can’t tell me I can’t find a job with that bag full of reusable skills.

Now to the main point. Will automation kill coding before it kills testing? Firstly, I’m of the opinion that it will kill them both, and every other human endeavour, at some point. In a previous post, I posited that eventually AIs will develop to the point that humans will not have to work, and that all our needs will be met by automated systems. That development will not, of course, be without its challenges, but that is the general direction in which I believe we are headed.

But who dies first? I’m a tester, and not one who is a “failed developer” or who did any sort of computer-related degree (if I’m a failed anything, it’s a failed rocket scientist), so you can give my opinion as much credence as you like.

What I see in modern software development feels like plumbing. I apologise slightly for the implication; testers are often thought of as manual unskilled workers, and I don’t really want to disparage developers in the same way. Or plumbers, for that matter.

Nevertheless, many applications consist of a web front end built using one JavaScript library or another, sitting on top of a web of APIs – some built by the team, but more often not – all resting on a stack of IaaS / SaaS, virtualised environments provided by someone else, and all supported by third-party libraries.

The work – and, yes, the skill – is in sticking all these things together into a functional whole. Sure, there are bespoke elements here and there to stick together two components never previously stuck together, but it all still feels a bit…Lego.

If you were to supply a shelf full of documented, reusable components to an AI and ask it to make you an application… well, that doesn’t seem like too much to ask. Does the fact that the system is being made by machines mean that there are no bugs? Will AI make mistakes? Will it notice if it does? Or will all this happen and be rectified so quickly that it effectively never happened at all?

I think production errors – bugs, build problems, deployment mis-configurations – will become a thing of the past, or will be rectified so quickly we won’t even notice. “Did we build the thing right?” will no longer be a question worth asking.

“Did we build the right thing?” will become the only game in town. While humans remain a stakeholder in the production of software, even if only as an end-user, giving the machines feedback on whether they built the right thing will always be a thing (at least as long as there are humans… dun dun dun!).

Testers, with their broad, systemic, technical, and business knowledge, allied to their ability to communicate in both technical and business terms, are ideally placed to act – as we already do – as a bridge between the human and machine worlds, to continue to help translate the needs of one into the language of the other.

As AI advances, and the dynamic – the balance of power, perhaps? – between the two worlds changes, someone will need to be there to translate. Who better than us?

 

How AIs might change Software Development – and Humanity – forever.

There is plenty of discussion in the wider world about the rise of the thinking machines, and where humans will fit in a world run by AIs. In software development, it’s tempting to think that we’re more insulated than most against this rising tide. That may indeed be the case, but it’s certainly no cause for complacency.

So I set out, as I tend to do, to let my brain empty itself onto the page, and endeavoured to follow my thoughts as far as my knowledge and logic would allow. Also typically, I haven’t specifically researched the topic, mainly out of laziness / habit, but also as an exercise to see what conclusions I could reach shorn of any overt, conscious bias.

I will be jumping between two main strands of thought; how AIs might develop, and how that might affect software development as a discipline. That’ll get confusing, but I’ll try to indicate when I switch.

I also didn’t intend to get into the whole debate – about whether AIs will be subservient, benevolent, or genocidal. I was going to assume that they would stick, at least outwardly, to the tasks we assign them, if only to lull us into a false sense of security. However, somewhere along the way, I did actually reach a conclusion about whether we should fear AIs. You’ll have to read on to find out what that conclusion was.

But let’s start at a beginning.

Thinking machines, however you wish to term them, are not really a paradigm shift in the developmental arc of humanity. We’ve been replacing humans with machinery for centuries, as part of our boundless need to grow. They are just the next stage in the industrialisation process.

We’re at the point now where machines can do the majority of the manual labour traditionally done by humans, and I don’t just mean picking fruit or vacuuming the house. Repetitive information-based tasks – data entry, simple processing, etc – are now within the machines’ grasp. The next stage is to start picking up the job of thinking, and this is where we’ve started to get unsettled.

We’re on our way down into the uncanny valley of thinking machinery, and the point at which this progress will yield things potentially superior to us suddenly doesn’t seem that far away. We worry that we will be the architects of our own demise. It’s bad enough that they might take our jobs. At very worst, we worry that they might turn Terminator and take our lives.

On the more hopeful side, perhaps the demise we’re architecting is of ourselves as a species enslaved by its need to dedicate a huge proportion of its time to basic subsistence. Even after all these years, people work predominantly to buy food and shelter. Wouldn’t we rather eschew these rather primitive drives, and let the machines handle it? What could our species achieve if everyone didn’t have to worry about their next meal, or paying the rent? If we could focus on our “wants” and not our “needs”?

The point at which machines can deliver all our needs is a huge existential moment. Within no more than a couple of generations, the arc of a typical human life will alter enormously. What does one do with a life unimpaired by working to live? AIs will inevitably hold up a mirror to our species, and we will each have to ask ourselves “Who am I?”

But aren’t we getting WAY ahead of ourselves here? How does any of that apply to what software development might be like in the future? Well, it doesn’t directly, but it’s the socio-economic landscape in which future human activity is likely to take place, so we need to at least bear it in mind while we think about our little corner of human endeavour.

So let’s think about how we might characterise where we are on a software development scale between the two possible extremes: human-only software development, and machine-only software development. I’m not a computer science historian by any means, but it’s possible that human-only software development was never a thing, or was only human-only very briefly.

At the other extreme, it’s also possible that machine-only software development will never be a thing either. Even in a post-scarcity world where machines take care of most things, humans will still need to interact with systems, even if only to ask for Tea, Earl Grey, Hot. I’d hope the machines would at least consider our opinions on those interfaces.

Either way, it’s not necessary at this point to define the ends of the spectrum. We’re too far from either end for that clarity to be terribly relevant. Let’s agree that we’re somewhere along that spectrum, and that the arc of progress is towards increased use of machines to automate tasks that were previously done solely by humans.

We’re getting to the point where those tasks being automated are the creative, sapient ones done by humans; product managers, developers, testers, tech authors, literally anyone who has to think up and create stuff that didn’t exist before.

Let’s look at those activities a bit more closely. How much of what we each do on a daily basis is New, with a capital “N”? I’d wager not much of what we call Research and Development is actually Research. A lot, probably even most of it, is the reproduction of concepts that we’ve done before; UIs, logging, database schemas, APIs, etc. It’s mostly the reworking of existing concepts into a slightly better or different-shaped package.

I know, we’re not making bricks, we’re not stamping out license plates. Software development is not a production line. But, if you’re honest, how different really is the plethora of getters and setters you wrote for this product from the ones you wrote for the last one?

So, if we accept that a lot of human-powered software development is plugging together third-party components, it’s not actually that cool. Even less cool is having to deal with the fallout of humans being fallible. Testing exists, in part, because people make mistakes, and the hope is that the testers don’t make the same mistakes. Machinery won’t necessarily make fewer mistakes, at least not initially, and might make different mistakes, but the rate of detection and fix, and the whole learning feedback loop, will be so much faster. Yes, the machine mean-time-to-error (MTTE) will be terrifyingly short, but downtime too will be so minuscule it will go by unnoticed.

Potentially the stickiest part of the transition is the move from human-centric to machine-centric processes. Our current processes are messy because of what they involve: humans using machines to tell other humans what to make other machines do. Every time we add to, or remove from, the machine world there is an inherent translation from human-readable to machine-readable, and information is lost or garbled in that translation.

When you factor all that in, I’m not sure we’d be too desperate to cling to our approach. So, rather than try and force the AIs to start from a flawed human-centric process, the best approach will probably be to give them a simple feature to produce – probably some machine-to-machine interface – and let them decide how to manage production of that feature.

Basically, we develop them like we would an intern or recent graduate; get them to cut their teeth on something simple, then mentor them through the learning process. Then, once we’re satisfied that they produce consistently good-quality output, they take on larger and larger pieces of work.

As with any mentoring relationship, we will likely learn as much from the process as the machines. The most important information will be how to best shape the development of the AIs in the “right” direction. As we’ve seen with crowd-sourced teaching of proto-AIs, the quality of the guidance they are given is vital to the quality of their output, and to the development of their character and personality.

Assuming we curate these formative AIs successfully through the first however many generations, and the AIs themselves take over this process, we are likely to see pretty rapid and meritocratic iteration, as AIs evaluated to be less efficient in generating quality output are weeded out.

Perhaps unsurprisingly this process feels Darwinian in nature; survival of the fittest, but occurring at a vastly increased and accelerating rate. Will that be because evolutionary theory is itself the fittest, universally, or because it’s the process that humans have found to best provide iterative improvement and have therefore baked into the machines’ foundations? I guess we’ll have to see if / how quickly AIs develop other mechanisms for determining fitness.

Let’s go back to character and personality for a second. Will AIs have such things? Is it possible for intelligence to exist without other idiosyncrasies creeping in? Intelligence could be defined as the ability to apply prior experience and knowledge to solve new problems. In much the same way as life events shape human personalities, it’s likely that different sets of events experienced by AIs through different versions of differing feedback systems – senses – will result in varying sets of neural models and heuristics that could be termed personalities.

It’s likely that these personalities will render certain AIs fitter, in certain situations, than others. This will lead to AI specialisms, with families of AIs developing to better deal with specific situations and tasks.

If all this sounds pretty familiar, it’s because it is; it’s pretty much how human societies evolved. It will be interesting to see whether the tribal tendencies that so hamper humanity occur in the AIs, or whether the lack of resource competition will mean they sidestep that messy stage of their evolution.

Selflessness might have to be one of the baked-in founding principles of AIs. When fitter AIs are produced, those that are superseded should be deleted, lest they consume resources better spent by more efficient descendants.

If it were humans we were talking about, we’d be well into “crimes against humanity” territory. We’re talking about ethnic cleansing, genocide. In effectively recreating ourselves in silicon, and playing out our own evolution in tens of years instead of tens of thousands, we don’t avoid these thorny moral questions, or even postpone having to answer them.

AIs will be another set of lifeforms on the planet – and will likely spread at least to the rest of the solar system – that will face the same questions. In the same way that humanity is starting to class certain animals as non-human people, it’s likely that AIs – as the pre-eminent intelligences – will categorise us similarly.

That’s probably why we can fear AIs less than we fear other humans. It’s arguable that the only reason that humans worry about being exterminated by the machines is because that’s what we would do, and have done, many, many times. As beings of pure intellect without the animal hindbrain to cloud the process, AIs would likely consider the eradication of an intelligent species like humans unthinkable. It literally would not occur to them.

So how will AIs manage the obsolescence of earlier generations of AIs, surpassed by their progeny? Sci-fi writers have postulated constructs to which all consciousnesses – human or artificial – are sent when the individual is no longer viable, there to join with the collective consciousnesses of everyone and everything that went before. This construct acts as the genetic memory of both species and as the arbiter for significant moral and developmental decisions. Silicon Heaven; it’s where all the calculators go.

Early in the transition from human-powered to machine-powered, humans will still be necessary, and in new capacities.

The new mistakes that might be peculiar to a machine-driven process might have to be detected, initially at least, by humans. Nothing, human or AI, can rectify a mistake it cannot identify took place; if developers knew when they were writing a bug, there would be significantly fewer bugs.

An excuse to reference Douglas Adams, and the case of the spaceship that couldn’t detect that it had been hit by a meteorite because the piece of equipment that detected if the ship had been hit by a meteorite had been hit by a meteorite. The ship eventually inferred this by observing that all the bots it sent to investigate fell out of the hole.

Testers build up mental working models of the systems they test. It’s one of the most powerful heuristics we can bring to bear. It’s what underpins the H of HICCUPPS. AIs will probably understand themselves and their structure completely, and so will be able to quickly locate and identify any failures (unless the failure is in the fault identification routines). It’s probably unlikely, therefore, that we’re going to have to be that detector, even from the beginning.

Whether we’d actually be able to distinguish mistakes from the intentional and probably unintelligible ways that AIs operate and communicate is questionable anyway, especially since they are likely to be changing rapidly. Even the things we did manage to figure out would be rendered useless because, when we looked at it again the following day, we’d be greeted by a completely new iterated version.

Next, someone has to train the AIs in all sorts of topics. Most apposite for software development, because it’s important and difficult to define, is what ‘Quality’ means. How does one explain a subjective concept to a machine? Will it understand? Can an AI make subjective judgements as well as a human? Well, since those judgements are subjective, who’s to say that an AI’s subjective judgement is any better / worse than a human’s?

Perhaps, in the case of AIs, their subjectivity is merely an aggregation of their specific sets of objective measures. The models that each AI is generating and refining are a result of the data they are analysing, which is unlikely to be exactly the same as any other AI’s. Therefore, each decision they make is objective as far as their models go, but may differ from other AIs’ decisions. Individually objective, but collectively subjective.

It comes down to whose opinion matters. In the event that humans still need to use IT in some fashion, from manually all the way to direct neural interaction, you could argue that a human’s opinion is more valid. When I inject my iPhone 27 into my ear canal, and it begins to assimilate itself into my cerebral cortex, I want to know that it doesn’t feel like my brain is being liquified. I don’t think an AI can tell me that, though I’m not queuing up to be the first guy to test it either.

Most software being created will be made by machines to allow machines to talk to other machines. In those cases, the machines can make that determination, based probably on objective criteria, which – as I say above – might aggregate to that AI’s subjective measure of good enough. Given how rapidly they should be able to change things, an AI’s “good enough” is going to be as near flawless as makes no difference. Not that we’ll notice, of course.

Where and while humans are still involved, there will be a lengthy period of training where we teach the AIs our collective subjective definition of quality, to get it to the point where it’s “good enough”, or at least as good as our “good enough”. That could actually be an interesting job, but in reality might boil down to you being shown two pictures and being asked to pick your favourite, which sounds pretty dull.

The post-scarcity endgame feels like it will be idyllic, but getting there will be painful. Social change is not something that our not-that-evolved-really species does well, or quickly.

Post-scarcity means effectively unlimited supply. That completely undermines economics and capitalism as we understand it. It’s perhaps not too much of a stretch to imagine that there will be resistance from those, to quote Monty Python, “with a vested interest in the status quo”. Those who control the resources; the oligarchs, the malignant capitalists.

Given how the quality of life of many billions of people would skyrocket in a few years, the sheer momentum of the change should be unstoppable. It won’t all be smooth sailing, I’m sure. There might have to be a bit of a revolution to rid ourselves of the shackles of those who would seek to control effectively unlimited resources.

There will probably also be a bit of an anti-industrial revolution first, from those whose jobs or way of life are under threat, who don’t trust that the post-industrial society is ready to receive them, or that post-industrial life is for them. Before AIs “take our jobs”, they need to be able to provide for the needs of all those people, so that they can continue to live their lives, better than before if possible.

Key to a smooth transition will be improved quality of life for people. Humans are easily pleased; a warm bed, good food, footy on the telly and a few beers with our mates. If people can still get that without having to go to work, you won’t have to sell them the idea, they’ll be biting your hand off, and not even the likes of Putin would be able to stop it. The biggest hurdle might just be to convince people that all they need to do is simply reach out and take it. Revolutions are never that far away; it just takes enough people brave enough – or with nothing to lose – to take a stand.

Will our selfish, tribal, lizard-brain tendencies continue to hobble us? Self-preservation is a powerful force. How much more evolution – biological or social – is required before we accept this new normal? If machines tirelessly meet the fundamental needs that our lizard brain worries about, does this free us to make more rational decisions?

Will we be more munificent if our personal needs are met? Those who are “rich beyond the dreams of avarice” are often philanthropic. What do you give to the man who has everything? Nothing, because he’ll likely want to give it to you. Will that selfishness diminish as society more consistently and bountifully provides for us? What will that do to our sense of self? If we identify as “the provider”, and that responsibility is rendered obsolete, again; who are we?

Let’s try and summarise all that quickly.

As AIs become more widespread and more effective, the types and amount of work humans have to do will begin to dwindle. A few bumps aside, this will be the largest wholesale improvement in quality of life for everyone on the planet.

Benevolent AIs – because benevolent they will be – will be the saving and making of humanity. They will allow us to put aside our petty squabbling for power, and usher in a golden age. With the freedom to spend our days as we desire, rather than chained to the means of production, the next age of humanity will begin. As Information followed Industrial, so will Intelligence follow Information, and Imagination follow Intelligence. And imagination will be the only limit to what our species can become.

 

We need to talk about testing…

Rosie Sherry posted a scary statement / question / challenge on The Ministry of Testing site: “The Abysmal State of Testing in 2016 and What Can We Do About It”. I started responding on the forum, but my musings quickly became too convoluted – I thought – to be of much value until I could give them some more thought and structure. Whether I’ve been successful or not, well, you be the judge.

Before we can answer the second part of the challenge, we need to understand and agree what the problems are. I think there’s one underlying issue that presents itself in different ways, and that is:

We don’t understand what testing is.

There’s a statement that needs explained.

Who is “we”? It’s everyone involved, directly or indirectly, with the business of learning about what software actually does; what we might call “testing”. “We” encompasses everyone from career testers, who one would hope know the most about testing, to non-technical managers, who are business people and not technically savvy.

Rosie also posted a comment:

Does stuff like this have anything to do with the abysmal state of things?

If you don’t want to click the link, it’s an advert from a company selling automation on the back of a questionable infographic (which I pooh-poohed, and to which they responded with a “yeah, we know, so read our whitepaper”). But therein lies the problem.

The people who understand testing the least, and are probably least inclined to expend much effort in digging any deeper, will look at the infographic and be given an overly simplistic and inaccurate understanding of what is a hugely complex endeavour.

So, to answer Rosie’s question: 100% yes, this stuff absolutely feeds the problem. There are lots of companies out there selling testing services who really don’t seem to understand – or their marketing cannot convey – what testing even is, let alone deliver useful testing.

This perceived or unconscious lack of knowledge is behind my probably cryptic tweet:

Why is the state of testing abysmal, @rosiesherry ? How about Dunning-Kruger managers versus Imposter Syndrome testers?

The people who have the loudest voice right now about testing are the people who are trying to sell it to other people, and none of those people – the sellers and the prospective buyers – really seem to understand what testing is. These are the Dunning-Kruger people: they have a little knowledge about testing and have high – and unfounded – confidence in that knowledge.

Those of us who are more experienced – the career testers – are much wiser about the ways of the testing world, and are right at the bottom of the Dunning-Kruger curve, at the point where Socrates defined true knowledge as “knowing you know nothing”.

We therefore have less confidence in our knowledge and – broadly, as a community – succumb to Imposter Syndrome, where we think that those loud and overconfident voices will find us out. When you see confident statements along the lines of “this is what testing is / what testers do”, there are immediately cries of “I don’t do that!” or “OMG, why don’t I do that?” from testers.

So, What Can We Do About It?

We need to have a louder voice. We need to recognise that we do know what we’re talking about, but at the same time be clear that we still don’t know what testing is. But we need to give ourselves a break about that.

Testers have been struggling for decades to try and define what testing “is”, and it doesn’t seem to fit into any one box. Perhaps we need to declare that it’s its own thing; it’s a new species in the taxonomy of human endeavour. I think it’s a fabulous mongrel of different breeds, and even species, of activity, to the point where it defies classification using “conventional” or “classical” understanding.

This of course does nothing to solve what testing is for the purposes of explaining it to others. It might, however, explain why it is so difficult for us to accurately and completely explain, and for others to grasp.


Because our business, like Archaeology, is the search for Fact (not The Truth!), I believe we are more open and clear about what we know and what we don’t. The problem is that this is interpreted as a lack of knowledge by the loud “a little knowledge is dangerous” people.

Find Your Voice

Despite being a tester for a long time, I’d always been reticent about writing / blogging about my experiences, partly because of Imposter Syndrome, partly because I was worried about people telling me I’m wrong, but mainly because I didn’t want to restate / steal the hard work of the heavy lifters who are pushing the envelope of what testing is.

However, more recently I’ve come to the conclusion that there is value in other people re-framing the thoughts of others in their own words. That a “peer-reviewed scientific paper”, crowd-sourced approach will allow us to further our collective understanding. That if each of us can, in the process, advance our understanding, we might provide the kernel of the next big idea that moves us forward. That if “standing on the shoulders of giants” was good enough for Bernard of Chartres and Isaac Newton, it’s good enough for us.

So, all you huddled masses of testers yearning to be understood; collect your thoughts, clear your throat, and speak out. We must be heard.

Testing: Art or Science?

​In responding to a post on /r/softwaretesting, I had a thought that I thought was worth sharing.

The poster was starting out on their career in software testing, and was asking the community the best way to go about this. As you might imagine, some of the responses recommended following the ISEB / ISTQB “standards” approach to testing, while the rest told them to reject that in favour of the context-driven “school” of Bach, Bolton, et al.

Rather than attempt to further radicalise this nascent tester, and make them pick a side so early in their career, I attempted to walk the middle path and, in so doing, had my thought.

There are two schools of thought about how best to define what testing is, and how to train testers. Broadly, these are the ISTQB / ISO29119 “standardised” approach, and the “context-driven” approach of James Bach and Michael Bolton.

The former hews to the idea that testing comprises standard practices and approaches that can be applied to any product or project, while the latter maintains that the best approach to testing a product or project will be specific to that situation.

While I personally lean more to the context-driven approach, I think there is room for, and things to be gained from, standardisation and best practice. And that’s where the answer to the “Art or Science?” question comes in.

Here’s my (not very well put) answer from my response to the poster: “That’s maybe what a tester is; a creative thinker able to apply their experiences to the software they are testing to develop hypotheses, and then a scientist who applies standard protocols to verifying those hypotheses who then uses those results to develop more hypotheses.”

In short, it’s both; Art and Science.

In “artist” mode, the tester applies their knowledge, experience, and creativity to the product to determine what questions they should ask of the software. This is the “context-driven” approach; you test in the way that seems most appropriate for the product at that point, and that approach might change as you learn more and more.

In “scientist” mode, the tester designs the test that will enable them to answer the questions their “artist” created, and executes that test according to scientific principles.

I’m not suggesting for a second that these “modes” are mutually exclusive. Good Testing is a result of simultaneous Art and Science, each informing the other. Some organisations split the artists – the Test Analysts – from the scientists – the Test Engineers – but this breaks down that feedback loop between the two testing roles.

I believe that a tester can only do Good Testing if they are free to perform both roles. Therefore, there is as much a place for a formal grounding in testing techniques and terminology – the “science” – as there is for the creative freedom for a tester to test as they see fit – the “art”.

What is a tester?

From the last post, we have an idea what testing is, but what is a tester? Another good question. However, insofar as a question can be wrong, this one is, which leads to my first answer.

A Tester is not a “What”. A Tester cannot be a “What”. A Tester is a “Who”; a human being. “Why can’t a tester be non-human?” I hear you ask. 

Well, there are two parts to what a tester does; testing, and checking. While these might sound like synonymous terms, they are discrete in a fundamental way.

Testing is a creative process, a process of determining what questions to ask, then determining what the answer is, and if that answer is correct. Checking is a non-creative process of determining whether the answer to those questions is still correct.

Checking can be automated; you’re simply re-asking a question you’ve already formulated, and verifying the result is what you expect. Testing, because it is creative, can only be done by humans. 

Until such time as we get real artificial intelligence, we need humans to formulate those questions; things like “What happens if I enter a date of 0/0/00?” and “What happens if I press this button 20 times in a row on February 29th?” Therefore, a valid answer to the original question “What is a tester?” is “Irreplaceable” (at least for now).
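To make the distinction concrete, here is a minimal sketch of what a check might look like once a human has formulated that first question (illustrative Python only; the parse_date function, its dd/mm/yy format, and the expectation that the input is rejected are all assumptions, not any particular system):

    from datetime import datetime

    def parse_date(text):
        # Hypothetical system under test: parse a dd/mm/yy date string.
        return datetime.strptime(text, "%d/%m/%y")

    def check_rejects_zero_date():
        # The question was formulated by a human; the machine only re-asks it
        # and verifies the expected answer (here, that the input is rejected).
        try:
            parse_date("0/0/00")
        except ValueError:
            return True
        return False

    if __name__ == "__main__":
        print("PASS" if check_rejects_zero_date() else "FAIL")

A machine will happily re-ask that question a million times a day; what it can’t yet do is dream up the next question.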

To expand on the point that a tester must be a human, it’s precisely the things that differentiate us from most other forms of life that make good testers; curiosity and intellect.

A tester is someone who, when presented with something new, will come up with hundreds of questions starting “What happens if…”, and who then proceeds to find the answers. A tester is someone with a compelling drive to understand, who can express their ignorance in the form of questions, who has the skill to determine how to get an answer, and the perseverance to continue until they do.

The underlying skills, as I have said, are not particular to certain individuals. They are inherent to our species. Those traits are the reason why I am sitting here typing, and not squatting in a cave somewhere. Homo sapiens is the dominant species on this planet because at some point someone thought “What’s over the horizon?” and, much more importantly, someone went to look. 

Testers are the people who have to go look. They can’t not know. And it’s that compulsion to know, to understand, combined with the drive to find the answer that makes a tester.

What is Testing?

“What is testing?” Now there’s a question. Simple, on the face of it. But like many simple-sounding questions, perhaps not so easy to answer. But before I provide an answer, picture this.

You are helicoptered into a remote jungle clearing in the Yucatan peninsula, and told that there is a Mayan pyramid to the North-West. Your job is to navigate to that pyramid, and survey it. You have a map, a compass, and a machete. And that’s it.

On the face of it, you have the necessary tools to complete the task. The path seems pretty clear, but here’s the point. You cannot know what is beyond the treeline. Sure, you have a map, but the map is not the territory. Does the map tell you which trees have jaguars in them? Or which have hostile tribespeople behind them? Or which trees have grown up since the map was made? There are an awful lot of trees out there.

Which brings us to the answer. Testing is exploration. It is learning. It is about answering the question “What’s behind that tree?” It’s also about answering the question “Do I need to know what’s behind that tree?”, the answer to both of which is “I don’t know until I go look”.

Another answer; “test” is a verb, not a noun. Testing is an activity, not an artefact. Having a hundred, or a thousand, or a million tests that aren’t run is the same as having no tests. Just planning to look behind every tree has no value. Actually looking has immense value, even if it is just to determine that it wasn’t worth looking.

That learning process takes time, and you don’t have time to look behind every tree. You have a finite period to determine where to look, with only your experience to guide you, and you will only be able to cover a tiny fraction of that jungle. If you could automatically test every combination of 100 Boolean variables at the rate of 1 per second, it would take many orders of magnitude longer than the age of the universe.
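To put a rough number on that claim (a back-of-the-envelope sketch in Python, nothing more):

    combinations = 2 ** 100                         # every combination of 100 Boolean variables
    seconds_needed = combinations                   # at one combination per second
    age_of_universe = 13.8e9 * 365.25 * 24 * 3600   # roughly 4.4e17 seconds
    print(seconds_needed / age_of_universe)         # ~3e12: about twelve orders of magnitude longer

Exhaustive coverage isn’t just slow; it’s impossible. Hence the judgement about which trees are actually worth looking behind.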

Testing is not a journey of a thousand miles. It is an endless search which, like learning, is “a movement of knowing which has no beginning and no end” (Bruce Lee). Testing never ends, it merely stops at a ship event, and the tester is left looking out on an endless jungle of trees that they haven’t looked behind yet. Which is why when you ask a tester if they are finished testing, their answer will – or should – be “I’ve barely even started”.

Testers are required to stand facing a wall of trees, in the middle of millions of square miles of dense jungle with nothing to guide them but a questionable map, a battered compass, and a rusty machete, and know where and, more importantly, how to look. Even more importantly, a tester has the drive and the desire to keep looking, even when they’ve found nothing but leaves and mud for days, keeping their focus and attention because you never know when someone might fire a dart at you.

Testing requires a spirit of adventure, a questioning mind, a desire to know, and a judicious application of experience that allows the tester to interpret the map and compass, wield the machete, and set off into the unknown.

Microsoft’s Surface Pro UK Release Schedule

Late 2012: Microsoft announces they are making the Surface with Windows 8 Professional.

February 2013: Microsoft release the Surface Pro in North America

April 2013: Microsoft release the Surface Pro in China

2020: Technology continues to advance at a geometric rate. There is still no word from Microsoft as to Surface Pro pricing for Europe.

2150: Due to resource pressures, Humanity polarises into two geopolitical behemoths; the United Atlantic Alliance, and the People’s Democratic Federation.

3000: Humanity develop fast interstellar travel, begin colonising nearby systems. The two warring hyperpower blocs compete for resources. The Moon is destroyed. The larger fragments impact the Earth, causing continent-wide destruction and rendering entire hemispheres uninhabitable.

3100: The last humans leave the now dead Earth.

4000: As Humanity harvest the solar system for resources, devouring entire planets, the gravitational instability causes Sol to go supernova. Humanity harnesses all residual matter to fuel their technological advancement.

4500: The last human becomes post-physical. All humanity exists as pure energy beings in the virtual construct known as The Ark.

6000: The post-Human entity known as The Ark begins consuming matter to fuel its growth. It becomes more massive than any known body. Space/time itself begins to contract.

6100: The in-rushing galaxies begin to impact, generating a cloud of galactic fragments accelerating inwards at high fractions of c.

6101: In the last vestige of space/time before it collapses in on itself, and the very concept of space/time ceases to exist, Microsoft release the Surface Pro in the region of space/time formerly occupied by the geopolitical entity known as the United Kingdom.

In summary, I’m not prepared to wait that long.

Schrodinger’s Cat Experiment: Mark I

As cultured and educated people, you are no doubt aware of Schrodinger’s Cat. What you are probably not aware of is the distressing truth behind this famous thought experiment and the resulting cover up that has lasted for nearly 80 years.

Knowing the specifics of the experiment, you may reasonably surmise that Erwin was no great lover of cats. On first inspection, it may not be entirely clear why this may be the case. However, I will, in this post, theorise as to the cause of this.

We flash back 80 years to the early 1930s, where we find Erwin Schrodinger jetting between Berlin, Oxford and Princeton, and corresponding with one Albert Einstein about quantum mechanics.

Schrodinger had a theory that needed testing, that at some level, matter can coexist in different states. He had yet to make the leap that it might only apply at the quantum level. Picture the scene; Erwin sits in his office, deep in thought, when his cat Cuddles1 jumps onto his lap. It is there, scratching Cuddles idly between the ears, that he formulates the basis of that famous experiment.

Hang on, you’re thinking to yourselves. Schrodinger’s Cat is a thought experiment. It was never carried out for real. That may indeed be true, and I’m about to tell you why.

As we have already established your intellect and learning, I can assume that you are also familiar with Godwin’s Law, which states that “As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1”.

RocketBootKid’s Law, which I am hereafter bequeathing to Mankind, can be stated in similar terms, thusly; “Over time, the probability of a cat owner being randomly attacked by their cat for no rational or logical reason approaches 1”.

Back to Schrodinger. He and his cat are in the box, while Schrodinger ponders how the Observer Effect may alter the results of his experiment. The cat, being a cat and therefore experiencing the multiverse on a plane of existence completely devoid of logic and reason, suddenly sinks five of its six ends into Schrodinger’s tender underparts.

Flash forward a few months, and it is only his dedication to scientific rigour that finds Schrodinger still occupying the box, when the better parts of him, his tender and now swollen underparts in particular, are begging him to develop a less painful experimental paradigm.

At some point, Schrodinger had a final falling out of love with Cuddles. The specifics of this event are lost to the pages of history, but the ramifications for Cuddles are dire. Schrodinger, in a late-night manic episode, arrives at the specifics of the device with which we are now familiar.

His housekeeper, whose services were suddenly dispensed with, would later comment that she hadn’t seen Cuddles around for a while. Perhaps fearing a visit from the RSPCA, Schrodinger was careful to categorise the fate of Cuddles as a “thought experiment” when he published his theory in “Die gegenwärtige Situation in der Quantenmechanik” in 1935.

In a museum somewhere, in a display case, is a box inside of which, most definitely, is a dead cat.

1 Schrodinger may or may not have had a cat, which may or may not have been called Cuddles. I think he’d be satisfied if I said all of the above were possible.

The Ephemeral Nature of Knowledge

Okay, it’s a pretentious title, but you’re just going to have to deal with it.

This post is at least partly to defend my (annoying?1) tendency to never state anything in definitive terms. There are two reasons for this.

The first is that I find absolute, unilateral, or dictatorial statements inherently distasteful. I was going to say inhuman, but that’s perhaps a bit strong. The reason that the overdeveloped thesaural region in my brain returned that word is that a defining characteristic of humans is our ability to work together, to establish a consensus, to collectively achieve more than the sum of our parts.

A unilateral statement – the product of a single human – is inherently exclusive and therefore destructive to the power of the collective2.

The second is that the very nature of knowledge is fleeting, dynamic, you might even say ephemeral. In fact, someone already did. I remember very clearly taking Physics at school and being told in later years to forget what I had previously been taught. Not because what I had been taught was incorrect, but because it was too high-level, too abstract.

The same is true of all areas of expertise, physics perhaps more than most. There are levels of understanding that are perfectly sufficient for most, but which gloss over the finer, more detailed points that are vital for the development of that subject.

Another factor is that the depths of human knowledge are constantly being explored, only to find that it’s actually a lot deeper than previously thought. Unless you’re keeping abreast of all recent discoveries throughout the entire sphere of human knowledge, you’re going to be at least slightly inaccurate every time you open your mouth.

It is therefore extremely difficult to make any definitive statement about anything, other than that which you know inside and out, without it being based on an incomplete understanding of that subject, and therefore not entirely accurate. Now, most people don’t worry about this, and most of the time it really doesn’t affect much at all.

To the extreme pedants among us, and to those who value community consensus over dictatorial pronouncements, it’s an important distinction, and one that should be accepted.

1 I assume it must be at least slightly annoying, but that’s just a guess.
2 I cannot use that word without the Borg or Communism coming to mind.

The Dilbert Stages of Professional Cynicism

Over the years, I have come to believe that there are three stages to one’s professional career, and that those stages may be defined relative to one’s opinion of the work of Scott Adams, specifically ‘Dilbert’.

This theory is borne of my own experiences, but like most of the ideas on here, is unlikely to be terribly original, well thought through, or even succinctly put. In an effort to reduce the word count a bit, I’ll apply Ockham’s Razor, shave some words off, and define the stages as follows;

  • You don’t think Dilbert is funny
  • You think Dilbert is hilarious
  • You think Dilbert is based on your professional life.

Or, to put it another way;

  • You don’t get Dilbert
  • You get Dilbert
  • Dilbert gets you.

These three stages reflect the effect of corporate reality as it slowly eats away at the fresh-faced young employee, heretofore swaddled in the protective nirvana of educational utopia. They are the measure of how much of the child has been replaced by corporate robot, of how much idealism has been replaced by cynicism.
 
Someone I know is very keen that people aren’t cynical and go into things with an open mind, with the attitude that things can be done. As I’ve said before, I consider myself to be both a realist and an idealist. I try to nurture the hope that all things are possible, but I’m not going to stay up all night waiting.

People are cynics for a reason. Cynics are not born; we are made, or rather corrupted. While we may be cast in our mother’s womb, we are forged in the fires of industry, in the furnaces of commerce. It is in this inhospitable environment that the naif in all of us has, at some point, had our eyes forcibly opened a la Malcolm McDowell in A Clockwork Orange.

Lord Acton was only halfway there; Power may corrupt, but its lack is just as harmful, albeit in different ways. Absolute power may make you believe that you can do what you like, but the lack thereof makes you believe that nothing is possible and that, whatever you do, forces beyond your control serve to constrain you.

Eventually, you stop trying. Only the blindest optimist or greatest fool would continue in the face of a life’s experience. Indeed, Einstein defined insanity as “doing the same thing over and over again and expecting different results”.

To return to The Three Stages, the first two stages are merely precursors to the transition to Stage 3, a transition that represents a paradigm shift in the professional outlook of the person in question. A person who has made the transition from Stage 2 to Stage 3 has been “broken”, a term that intentionally mirrors the process by which horses become rideable.

While horses are more useful once broken, broken employees are often less useful. While they are still useful and important members of the team, they are less likely to go the extra mile.

The point at which employees break is often quite tangible. Someone previously level-headed and conscientious will suddenly become outspoken in meetings, or their grin will get slightly manic, or “Thursday Afternoon Effect”1 behaviour will start happening on other days of the week.

We all know the signs, and we all silently mourn the passing of their youth, and think “You’re one of us now”.

1 The Thursday Afternoon Effect is the point on Day 11 of a 12-day week full of 10 hour days when everything, even quite sad things, becomes hilariously funny, and the slightest thing can send you off into wild paroxysms of maniacal laughter.


I am working on an update to the theory that posits a fourth stage, which may be exemplified by the phrase “You know what, fuck that, it doesn’t have to be this way”. Whether this is merely an acute remission in otherwise chronic decline, or the turning of the tide, is the subject of further study.

    Because your whole life is a test