I twitch-posted this reply to a Reddit /r/QualityAssurance thread because OP was told that “everyone says” that software testing is “a temporary field”, that “automation is going to kill it in another 3-4 years” and “If you go in this field you will be unemployed after a few years and it would be very difficult for you to switch jobs”.
When “everyone says” something like that about testing, I get defensive, I admit. Maybe it’s because, even after all this time, “everyone” still seems to know more than me. So, let’s actually see whether I can back up my quickfire reply with some cogent argument.
Firstly, I’m not worried about the “everyone says…” part. We should know by now that “everyone says” is a fallacy. Argumentum ad populum; lots of people think it, ergo it is true. Use of this term, and others like it, should immediately set off the bullshit detectors of anyone who spends any time whatsoever reading stuff on the internet.
Now to “Automation is going to kill it in another 3-4 years”, and firstly to the 3-4 year time frame. Four years is not that far away. How many of you in software development land have heard even inklings of using machine learning or AI anywhere near what you do on a daily basis? I’m willing to bet much of your time is still spent grumbling about how you’re mis-using Jira, or abusing Gerrit, or wrestling with any number of other procedural wastes.
That’s what most modern software development is; smart people trying to do something useful while wading through procedural waste. We spend so much time looking to see where we’re putting our feet, we have very little time left to look at the killer robots on the horizon.
To the second part; automation is going to kill testing. Within a couple of generations, automation is going to impact pretty much everyone’s jobs, perhaps even render working itself obsolete (if we’re really lucky). I agree that the machines are out there; they are coming to take our jobs, but I think it’s more like 5-10, probably even 15 years before it makes much of an impact.
Before I get onto which will die first, coding or testing, let’s just quickly deal with the final point; “If you go in this field you will be unemployed after a few years and it would be very difficult for you to switch jobs”.
“Unemployed after a few years”? I don’t think so. Sure, some companies have dispensed with their dedicated testing departments in the belief that, with DevOps, they can respond rapidly to customer issues and deploy fixes. I can see how that approach can work for certain business models, but certainly not for all. Would British Airways want to have to phone Rolls-Royce during a London – New York flight to push a fix because the engines time out after 20 minutes? Probably not.
Even if the nature of software development has changed, there will still be roles for humans until the point at which AI can do everything. Arguably, testers are better positioned to move into different roles than anyone. We have good broad technical and business knowledge of the products and the users. We have good communication and analytical skills. You can’t tell me I can’t find a job with that bag full of reusable skills.
Now to the main point. Will automation kill coding before it kills testing? Firstly, I’m of the opinion that it will kill them both, and every other human endeavour, at some point. In a previous post, I posited that eventually AIs will develop to the point that humans will not have to work, and that all our needs will be met by automated systems. That development will not, of course, be without its challenges, but that is the general direction in which I believe we are headed.
But who dies first? I’m a tester, and not one who is a “failed developer” or who did any sort of computer-related degree (if I’m a failed anything, it’s a failed rocket scientist), so you can give my opinion as much credence as you like.
What I see in modern software development feels like plumbing. I apologise slightly for the implication; testers are often thought of as manual unskilled workers, and I don’t really want to disparage developers in the same way. Or plumbers, for that matter.
The work – and, yes, the skill – is in sticking all these components together into a functional whole. Sure, there are bespoke elements here and there to stick together two components never previously stuck together, but it all still feels a bit… Lego.
If you were to supply a shelf full of documented, reusable components to an AI and ask it to make you an application… well, that doesn’t seem like too much to ask. Does the fact that the system is being made by machines mean that there are no bugs? Will AI make mistakes? Will it notice if it does? Or will all this happen and be rectified so quickly that it effectively never happened at all?
I think production errors – bugs, build problems, deployment mis-configurations – will become a thing of the past, or will be rectified so quickly we won’t even notice. “Did we build the thing right?” will no longer be a question worth asking.
“Did we build the right thing?” will become the only game in town. While humans remain a stakeholder in the production of software, even if only as end-users, giving the machines feedback on whether they built the right thing will always be a thing (at least as long as there are humans… dun dun dun!).
Testers, with their broad, systemic, technical, and business knowledge, allied to their ability to communicate in both technical and business terms, are ideally placed to act – as we already do – as a bridge between the human and machine worlds, to continue to help translate the needs of one into the language of the other.
As AI advances, and the dynamic – the balance of power, perhaps? – between the two worlds changes, someone will need to be there to translate. Who better than us?