Test management is how QA teams organize their testing work. It covers writing test cases, grouping them into logical structures, running them against builds, and recording what passed or failed. That's the short version.
The longer version is that most teams do this badly. The 2025 State of Testing report found that 45% of QA teams still haven't integrated AI into their testing workflows, despite 72% of QA professionals saying AI is critical for their future. Meanwhile, maintaining test cases and communicating results to stakeholders remain top-3 challenges year after year.
So what's going wrong, and what does a working test management process actually look like?
Why most teams struggle with testing
Here's a pattern I've seen repeatedly: a team starts with a shared Google Sheet. Columns for test steps, expected results, pass/fail. It works fine for 30 test cases. By the time they hit 200, nobody can find anything. The sheet loads slowly. Someone accidentally deletes a row. Three people are editing the same tab.
The World Quality Report 2025-26 (Capgemini/Sogeti) paints an interesting picture: nearly 90% of organizations are pursuing AI in quality engineering, but only 15% have achieved enterprise-scale deployment. The gap between "we're experimenting with AI" and "AI actually works in our process" is massive. And the old problems haven't gone away — integration complexity (64%) and reliability concerns (60%) top the list of challenges.
Cem Kaner, one of the founders of context-driven testing, put it this way: "The key question isn't whether you have test cases. It's whether you can find the right test case at the right time." That sounds obvious, but most teams can't do it reliably past a certain scale — AI or not.
90% of organizations are pursuing AI in quality engineering, but only 15% have achieved enterprise-scale deployment — World Quality Report, 2025-26
The four parts of test management
Let's break this down into what actually happens day-to-day.
1. Writing test cases
A test case is a set of steps with expected outcomes. "Click the login button with an empty password field" is a step. "An error message appears saying 'Password is required'" is the expected result.
The most common mistake here is writing steps that are too vague. "Test the login page" tells a tester nothing. "Enter a valid email and wrong password, click Submit, verify error message says 'Invalid credentials'" tells them exactly what to do and what to look for.
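The specific-versus-vague distinction becomes easier to enforce if a test case is treated as a small data structure rather than free text. Here's a minimal sketch; the field names are illustrative, not any particular tool's schema:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str    # what the tester does
    expected: str  # what they should observe

@dataclass
class TestCase:
    title: str
    tags: list[str] = field(default_factory=list)
    steps: list[Step] = field(default_factory=list)

# A followable test case -- contrast with the vague "Test the login page"
login_wrong_password = TestCase(
    title="Login rejects wrong password",
    tags=["smoke", "auth"],
    steps=[
        Step("Enter a valid email and a wrong password, click Submit",
             "Error message says 'Invalid credentials'"),
    ],
)
```

Structuring each step as an action paired with an expected result makes vagueness visible: a step with an empty `expected` field is a step nobody can verify.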
James Bach, the testing methodologist, distinguishes between "scripted testing" (following written steps) and "exploratory testing" (investigating the software freely). Both have their place, but test management primarily deals with scripted testing because that's what you need to repeat across releases.
2. Organizing test cases
Once you have more than a few dozen test cases, you need structure. Most teams organize by feature area: Authentication, Checkout, User Profile, Admin Panel. Within each area, individual test cases cover specific scenarios.
Tags add a second dimension. Mark cases as "smoke" (quick sanity check), "regression" (full pass before release), or "edge-case" (unusual scenarios). This way a tester can run just the smoke suite in 15 minutes instead of the full regression suite that takes three hours.
In TestRush, this maps to scripts with nested headers and child items. Headers group related steps ("Payment errors"), and child items are the individual test steps underneath. Tags let you filter which items show up in a given run.
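Tag-based filtering is simple enough to sketch in a few lines. This assumes each case carries a list of tags, as described above; the suite contents are made up for illustration:

```python
def select(cases, tag):
    """Return only the cases carrying the given tag (e.g. a quick smoke pass)."""
    return [c for c in cases if tag in c["tags"]]

suite = [
    {"title": "Login with valid credentials", "tags": ["smoke", "auth"]},
    {"title": "Checkout with expired card",   "tags": ["regression", "checkout"]},
    {"title": "Profile name with emoji",      "tags": ["edge-case"]},
]

smoke_run = select(suite, "smoke")  # 1 case instead of 3
```

The payoff is exactly the 15-minutes-versus-three-hours trade-off: the same repository serves both the quick sanity pass and the full regression pass, with no duplicated test cases.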
3. Executing test runs
A test run is one pass through a set of test cases against a specific build. You pick which script to run, optionally filter by tags, and go through each step marking it as pass, fail, blocked, or needs clarification.
Speed matters here more than people realize. If a tester spends 3 seconds clicking through a dropdown menu to set a status on each of 200 test items, that's 10 minutes just on clicking. Keyboard shortcuts (1 for pass, 2 for fail, arrow keys to navigate) cut that to under a second per item.
Most test management tools force you through click-heavy interfaces. TestRush uses keyboard shortcuts — press 1 for pass, 2 for fail, arrows to move. Try the live demo to see the difference.
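The keypress-to-status idea can be sketched as a simple lookup. The bindings below mirror the 1-for-pass, 2-for-fail convention mentioned above but are illustrative, not TestRush's exact key map:

```python
# Statuses from the run model above; key bindings are illustrative.
KEY_TO_STATUS = {"1": "pass", "2": "fail", "3": "blocked", "4": "needs-clarification"}

def record(run_results, item_id, key):
    """Translate a single keypress into a recorded status for one test item."""
    status = KEY_TO_STATUS.get(key)
    if status is None:
        raise ValueError(f"unbound key: {key!r}")
    run_results[item_id] = status
    return status

results = {}
record(results, "login-empty-password", "1")   # pass
record(results, "checkout-expired-card", "2")  # fail
```

A single dictionary lookup per keypress is why this is fast: there's no dropdown to open, no mouse travel, just one key per item.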
4. Tracking results over time
The point of recording results isn't paperwork. It's answering questions like: "Did this feature pass last release?" or "Which tests keep failing on every build?" or "What's our pass rate trending toward?"
Without a history of runs, every release feels like the first time. You can't tell if quality is improving or degrading. You can't show a product manager a chart that says "we had 12 failures last sprint, 4 this sprint" — which is often the most effective way to justify QA time.
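Pass-rate trends like the "12 failures last sprint, 4 this sprint" chart fall out of run history with almost no work. A minimal sketch, using hypothetical run data:

```python
def pass_rate(run):
    """Share of executed items that passed, as a percentage."""
    statuses = list(run.values())
    return round(100 * statuses.count("pass") / len(statuses), 1)

# Hypothetical history: the same 4-item suite across two releases
sprint_21 = {"login": "pass", "checkout": "fail", "profile": "fail", "admin": "pass"}
sprint_22 = {"login": "pass", "checkout": "pass", "profile": "fail", "admin": "pass"}

trend = [pass_rate(r) for r in (sprint_21, sprint_22)]  # [50.0, 75.0]
```

The point isn't the arithmetic; it's that the arithmetic is only possible if each run's per-item results were recorded somewhere queryable in the first place.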
Getting started without overthinking it
If you're setting up test management for the first time, here's what I'd actually recommend:
Start with one feature area. Don't try to write test cases for your entire application in a week. Pick the feature that breaks most often or the one your team is working on right now. Write 10-20 test cases for it. Run them once. See what you learn.
Write steps your teammates can follow. If only you can understand your test cases, they're notes, not test cases. Have someone else run through them and note where they get confused.
Run tests against real builds. Don't write test cases and let them sit. Run them on your next release. Mark what passes and what fails. This is where the value becomes obvious.
Review and prune quarterly. Test cases go stale. Features change, UI moves around, edge cases get fixed. Schedule a quarterly review where you archive obsolete cases and update steps that no longer match the product.
Common mistakes
- Organizing by sprint instead of by feature. Sprint-based folders become useless after the sprint ends. Nobody looks at "Sprint 23" six months later. Feature-based organization stays relevant.
- Writing test cases after the feature ships. By then, the details are fuzzy and the pressure is gone. Write test cases before or during development, so they're ready when the build lands.
- Tracking everything in spreadsheets past 50 test cases. Spreadsheets don't have versioning, they don't track who tested what, and they don't generate reports. They work for very small projects. Past that, they create more problems than they solve.
- Ignoring test data. A test case that says "log in with a valid user" assumes the tester knows which user to use, which environment to point at, and what data should exist. Specify this or link to a test data document.
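"Log in with a valid user" made unambiguous looks something like this. Every name here (URLs, fixture user, field names) is hypothetical, purely to show the level of detail worth capturing:

```python
# A test case with its data dependencies spelled out, so any tester
# can run it without asking around. All values are hypothetical.
test_case = {
    "title": "Login with a valid user",
    "environment": "https://staging.example.com",
    "test_data": {
        "user": "qa.fixture+01@example.com",
        "password_ref": "vault/qa-fixture-01",  # reference, never the secret itself
    },
    "preconditions": ["User exists and is email-verified", "No active session"],
    "steps": ["Enter the fixture credentials", "Click Submit"],
    "expected": "User lands on the dashboard",
}
```

Note the `password_ref`: pointing at where a credential lives, rather than embedding it, keeps secrets out of the test repository while still telling the tester exactly what to use.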
Where AI fits in (2026 edition)
Let's be honest about where things stand. AI adoption in QA has accelerated sharply — 72% of QA professionals now use AI for test generation and script optimization. But there's a trust gap: 67% say they'd only trust AI-generated tests with human review. That's a healthy instinct.
The real shift in 2026 isn't chatbots that suggest test cases. It's AI agents — autonomous systems that handle multi-step workflows end-to-end. Unlike a prompt-response tool, an agent can read your codebase, draft a test plan, create scripts, execute runs, and log defects. Gartner forecasts that AI agents will independently handle up to 40% of QA workloads by end of 2026. Deloitte projects 25% of GenAI-investing businesses will deploy agents this year, rising to 50% in 2027.
The practical difference is huge. Instead of copy-pasting a feature description into a chat window and manually creating test cases from the output, an AI agent connects directly to your test management tool via MCP (Model Context Protocol) and works inside your workflow. It reads your existing test repository, understands what's already covered, and creates new scripts where gaps exist.
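On the wire, an MCP tool invocation is a JSON-RPC 2.0 request with method `tools/call`. The message shape below follows the MCP specification, but the tool name and arguments are hypothetical; a real test management server would advertise its own tools:

```python
import json

# Shape of an MCP tool invocation (JSON-RPC 2.0, method "tools/call").
# "create_test_script" and its arguments are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_test_script",
        "arguments": {"title": "Checkout regression", "tags": ["regression"]},
    },
}

wire_message = json.dumps(request)  # sent to the MCP server over stdio or HTTP
```

This is why the agent can work "inside your workflow": the same protocol that lets it call one tool lets it first list available tools and read existing resources, so creating a script is one step in a longer autonomous sequence rather than a copy-paste.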
TestRush is built with MCP from day one. AI agents connect directly to your test repository — they read existing scripts, create new ones, and execute runs without switching between tabs. See how MCP works.
This doesn't replace QA engineers. 45% of practitioners believe manual testing is irreplaceable, and they're right — someone has to decide what matters and evaluate whether results make sense. What AI agents replace is the mechanical overhead: the typing, the formatting, the "did I cover all the variants?" grunt work. The human sets direction. The agent does the legwork.
Picking a test management tool
There are a lot of options. Here's what actually matters when choosing:
Pricing model. Per-seat pricing ($30-50 per user per month) gets expensive fast as your team grows. Per-team flat pricing means you can add a new tester without recalculating your budget. TestRush pricing starts at $8/month regardless of how many features you use.
Speed of execution. Open a test run, go through items quickly, mark results. If this takes more than 2 clicks per item, the tool is slowing you down.
Structure flexibility. Can you nest items? Use headers? Tag items for filtered runs? The more rigid the structure, the more workarounds you'll need later.
AI agent support. In 2026, this means MCP integration. Without it, AI assistance is limited to copy-pasting between your tool and a chat window. With it, agents can autonomously create and manage test scripts inside your workflow.
FAQ
Is test management the same as test automation?
No. Test automation means writing code that runs tests automatically (Selenium, Playwright, Cypress). Test management covers planning and tracking all tests, whether manual or automated. You need test management even if every test is automated, because someone still decides what to test and reviews the results.
How many test cases should I write?
There's no magic number. Start with the most critical user flows — login, checkout, core feature. A typical mid-size web app might have 200-500 test cases organized across 10-15 feature areas. The goal isn't maximum coverage; it's covering the things that would hurt most if they broke.
Can I use Jira for test management?
You can, with plugins like Zephyr Scale or Xray. But Jira wasn't designed for test management, so these plugins add complexity. A dedicated tool tends to be faster and simpler for day-to-day test execution. If your team already lives in Jira and doesn't want another tool, the plugins are a reasonable compromise.
Ready to try structured test management? Start your free trial or explore the live demo first.