The future of QA isn’t AI, it’s Human + AI

June 24, 2025

Written by: Elisha Dongol, Senior Software Engineer, QA


The first time I saw an AI tool generate test cases, my reaction wasn’t awe—it was anxiety.

“Is this going to replace me?”

As a QA engineer, I’ve lived through change—from manual to automated testing, from reactive to proactive approaches. But the buzz around AI in QA hit differently. It felt like a whole new level.

Now that I’ve worked with AI tools in real projects, here’s what I’ve realized: AI in QA isn’t your replacement, it’s your assistant.

And like any assistant, it’s eager, fast, and full of potential, but it still needs guidance. That’s where we, the QA engineers, come in.

In this blog, I’ll walk you through how AI is really being used in QA, the difference between hype and reality, and why human testers still play the most critical role. I’ll share real-world examples, insights from a QA event at Leapfrog, and my personal experience navigating this evolving space.

The AI hype vs. QA reality

We’ve all seen the headlines:

“AI can write and run all your tests.”

“AI will eliminate the need for manual QA.”

“Just feed your codebase to an AI, no testers needed.”

But if you’ve ever tested a real product, you know that’s not how things work. Testing is messy. Creative. Full of surprises.

Business rules aren’t always clear. What works in development can break in staging. User flows constantly evolve. Bugs can hide in the most unexpected places. And often, the most critical test cases are ones that require human intuition, not machine logic.

Yes, AI tools are getting smarter. They can:

  • Auto-generate test cases based on existing code and user flows.
  • Summarize logs to help debug issues faster.
  • Detect flaky tests that fail inconsistently across runs.
  • Even suggest potential fixes for failed tests based on history.
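
Flaky-test detection, for instance, boils down to spotting tests whose results are inconsistent across runs. Here is a minimal sketch, assuming a hypothetical run history (in practice the data would come from your CI system’s test reports; the test names here are made up for illustration):

```python
from collections import defaultdict

# Hypothetical run history: (test_name, passed) pairs gathered across CI runs.
run_history = [
    ("test_login", True), ("test_login", True), ("test_login", True),
    ("test_checkout", True), ("test_checkout", False), ("test_checkout", True),
    ("test_search", False), ("test_search", False), ("test_search", False),
]

def find_flaky_tests(history):
    """Flag tests that both pass and fail across runs (inconsistent results)."""
    outcomes = defaultdict(set)
    for name, passed in history:
        outcomes[name].add(passed)
    # A flaky test has at least one pass AND at least one failure on record.
    return sorted(name for name, results in outcomes.items() if len(results) == 2)

print(find_flaky_tests(run_history))  # ['test_checkout']
```

Note that `test_search` fails consistently, so it is a genuine failure, not a flaky one; only `test_checkout` flips between outcomes.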

All of this is promising and incredibly helpful for improving speed and efficiency. But there’s one thing they still can’t do: think like a QA engineer.

Real-world spotlight: Leapfrog Quality Alliance 2025

Recently at Leapfrog, we organized a QA-centered external event, Leapfrog Quality Alliance 2025.

The main theme? You guessed it—AI in QA.

It was a powerful opportunity to reflect on how our roles as QA professionals are evolving alongside the tech.

We explored:

  • Real use cases of AI tools in QA across projects.
  • Success stories where AI helped speed up releases or reduce manual efforts.
  • Honest conversations about what AI can and can’t do.

The takeaway was clear. AI is a game changer, but QA engineers are still the game leaders.

Read the full event highlights here.

Why QA Engineers still lead the way

We make strategic decisions

AI can recommend test cases based on code changes, but it doesn’t understand the business impact behind them.

For example, in one project I was aware of, a pricing-related bug occurred under specific discount conditions. AI didn’t flag it because it wasn’t in the direct code path. But a QA engineer, knowing how critical pricing logic is to revenue, manually tested the edge case and caught the issue.

That’s the strategic lens we bring. We prioritize based on risk, context, and timelines. We ask questions like:

  • What happens if this feature fails?
  • How critical is this to the end user?
  • Do we need exploratory testing here?

Our decisions aren’t just data-driven; they’re human-aware.

We see what AI can’t

AI understands structured data but not domain-specific logic.

In a health-tech project a colleague worked on, the workflows varied significantly based on patient types and clinical roles. AI-generated tests missed many real-world paths. But the QA team caught them because they understood the users, workflows, and compliance requirements deeply.

Domain knowledge is essential. Without it, tests become shallow, covering only what’s obvious.

We turn data into decisions

AI gives us dashboards, logs, and insights. But interpreting them is a different game.

Recently, the automated test suite in one of our projects showed a 20% failure spike after a release. The AI suggested a rollback, but when we investigated, the failures traced back to outdated test data, not a real defect. A simple data refresh fixed the issue; no rollback was required.

AI can surface problems. But humans determine the right response. We contextualize bugs. We evaluate the pros and cons. We communicate clearly with stakeholders.

AI shows the data. We tell the story.

The future is human + AI

Here’s the truth: AI makes us faster, not obsolete.

It’s brilliant at handling repetitive tasks, like:

  • Running regression suites across multiple browsers.
  • Parsing logs to identify trends.
  • Generating initial test cases from user stories.
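
Log parsing is a good example of how little code that kind of grunt work takes. A minimal sketch, assuming hypothetical log lines (real ones would be read from a file or CI artifact, and the format would match your own logging setup):

```python
import re
from collections import Counter

# Hypothetical log lines for illustration.
log_lines = [
    "2025-06-24 10:01:02 ERROR TimeoutError: page load exceeded 30s",
    "2025-06-24 10:03:17 ERROR AssertionError: expected 200, got 500",
    "2025-06-24 10:05:44 ERROR TimeoutError: element not found after 10s",
    "2025-06-24 10:07:09 WARN  retrying request",
]

def error_trends(lines):
    """Count error types so the most frequent failures surface first."""
    counts = Counter()
    for line in lines:
        match = re.search(r"ERROR\s+(\w+):", line)
        if match:
            counts[match.group(1)] += 1
    return counts.most_common()

print(error_trends(log_lines))  # [('TimeoutError', 2), ('AssertionError', 1)]
```

Seeing `TimeoutError` dominate points the investigation at environment or timing issues rather than assertion logic, which is exactly the kind of triage call that still belongs to a human.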

That gives us more time to focus on the hard stuff, like:

  • Exploratory testing that mimics real-world behavior.
  • High-level strategy and release planning.
  • Mentoring junior testers and fostering team collaboration.

I like to think of AI as a smart assistant, one that boosts our productivity but still depends on us to lead.

Conclusion: Still the boss

AI in QA is advancing quickly. It’s changing how we work: automating repetitive tasks, improving accuracy, and accelerating feedback loops.

But it’s not the boss. We are. And like any great leader, QA engineers:

  • Guide the tools.
  • Define the direction.
  • Make the final call on what quality really means.

So next time someone says, “AI is going to replace testers,” just smile and tell them, “AI is my assistant. And I’m still the boss.”
