Speaking with business leaders around the world about quality assurance, I hear the same questions time and time again: How much should I spend on quality? I use software everybody else is using—do I even need testing? Why don’t I just have AI automate all my testing? All of these questions have one thing in common at the root: They’re avoidance questions. When someone asks how much they should spend on testing, what they’re really wondering is how they can get away with spending the least.
Nowhere is that truer than in the question of AI’s role in quality assurance. There are “experts” out there who hear people asking about AI’s potential to further automate testing, and who don’t realize those people are really asking how to spend less. So the so-called experts respond in agreement—“Yes! We need AI in quality assurance for efficiency’s sake!”—and so does the market. Over time, we find that the only thing people are talking about when it comes to AI in quality is cost—but that’s the wrong conversation to be having.
Because the reality is simple: AI currently does not replace quality assurance the way people want it to.
For a while now, self-proclaimed subject matter experts on the quality assurance speaking circuit have been presenting AI as something of a doomsday prophecy for QA. Rather than herald the arrival of a powerful new tool, they often strike a tone of fear and consequence: “If QA professionals don’t get their act together, they’ll be replaced by artificial intelligence,” they all seem to say. I believe they say it because it’s what executives want to hear—people want to believe that some silver bullet is going to come along, force QA teams to cut their staff, and make quality cheap.
But here’s why this is a major problem: The producers of the AI tech aren’t saying this at all. They aren’t promising that AI is coming to replace QA teams and reduce QA payroll. Perhaps it’s because they understand that they are going to have a difficult time convincing QA professionals to adopt AI tools by threatening their livelihood. I believe that QA teams can benefit tremendously from utilizing AI in the right ways, but fear is not the tactic that’s going to convince this industry to make that leap. Instead, it’s an appreciation for the ways AI can impact not cost, but the other two elements of the QA equation: Time and quality itself.
In QA, we never test anything soup-to-nuts. Not every application needs that level of testing; honestly, very few do. Testing is all about doing the least amount of work for the greatest impact. Frankly, investing tons of time and energy into AI for the purpose of replacing human quality professionals sounds like an awful lot of work to me. But what if we had tools that could help us live in a world where 100% coverage is possible without having to increase the amount of human staff needed to oversee the quality process?
Because if you do replace people with AI outright, the problem becomes the sheer number of fixes generated by testing 100% of your software. Your developers will suddenly find their inboxes piled high with hours and hours of fixes. Without humans actively engaged in the quality process, there’s nobody there to prioritize those fixes, and major risks could end up buried under much less urgent issues. The end result is so much work, and so much need for human insight to wade through the ever-growing fix list, that you’ve hardly come up with a solution to cut back on headcount—if anything, you’ve ultimately increased it.
So, if you start talking about AI replacing quality assurance, I’m not even going to listen to you. But I’m a little more open-minded when it comes to using AI to increase the amount of coverage we can get with the same amount of time spent testing. It’s important to understand that AI is currently useful in supporting human work, not replacing it. For an example of what I mean, look no further than the meteorologist. Around the world, we’re using AI constantly to analyze tremendous amounts of weather data and look for patterns. But that doesn’t replace the job of the meteorologist, who has to review that information, confirm it, interpret it, and eventually communicate it.
The same is true for QA. Testing, a core part of quality, can be improved by using AI to analyze code and spot potential issues, tedious work that would otherwise take human eyes endless hours. But that’s about giving human testers more time to solve problems; the quicker they can spot errors, the quicker they can start reviewing them. Troubleshooting them. Prioritizing them in the context of a broader business case. Communicating with product owners about them. That means better, more efficient testing, but it doesn’t mean you can simply fire your human workforce in favor of technology to fully run your testing. If anything, like I said above, the most immediate outcome I can see from AI in quality is an increase in the amount of human staff you’ll need to keep up—which hardly seems like the answer most people are looking for when they ask about AI. But, in the same stroke, this sort of AI adoption would certainly increase quality while decreasing the amount of time it takes to achieve that level of quality, which in turn just may lower costs over time.
Instead of focusing on quick shortcuts, focus on proven quality assurance methodology. Find opportunities where you can to increase efficiency and efficacy—even using AI where possible to help with testing automation—but shift your perspective away from finding that silver bullet that’s going to solve your testing spending worries. Quality costs money, and that’s a cost that’s worth paying.