Start small. Pick your most critical feature, write 20 test cases for it, and run them on your next release. That's your QA process. Everything else (regression suites, tagging strategies, reporting dashboards) is optimization you add once the foundation works.
Most teams overcomplicate this. They research tools for weeks, design elaborate test plan templates, debate testing methodologies, and never actually write a single test case. The SmartBear State of Software Quality 2025 report highlights that formalization of QA processes remains a challenge across organizations of all sizes. The fix isn't a better framework. It's starting.
Week 1: your first test cases#
Pick one feature#
Choose the feature that would cause the most damage if it broke. For most products, this is authentication (login/signup), the core workflow (the thing users actually pay for), or payments/checkout.
Don't start with "the whole application." Start with one feature area, one screen, one workflow.
Write 10-20 test cases#
A test case needs three things: what to do (steps), what should happen (expected result), and any setup required (preconditions).
Here's a real example for a login feature:
Test case: Login with valid credentials
- Precondition: User account exists with email test@example.com
- Steps: Navigate to /login, enter email and password, click Sign In
- Expected result: User is redirected to the dashboard, username appears in the top-right corner
Test case: Login with wrong password
- Precondition: User account exists
- Steps: Enter correct email, enter wrong password, click Sign In
- Expected result: Error message "Invalid credentials" appears, user stays on login page
Write these for the happy path first (everything works correctly), then add failure scenarios (wrong input, empty fields, expired sessions), and finally edge cases (special characters, extremely long input, rapid repeated submissions).
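The three-part structure (preconditions, steps, expected result) can be sketched as a simple data shape. This is a hypothetical illustration of how to organize the fields, not TestRush's format or API:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One manual test case: what to do, what should happen, what setup is needed."""
    title: str
    steps: list[str]
    expected: str
    preconditions: list[str] = field(default_factory=list)

login_valid = TestCase(
    title="Login with valid credentials",
    preconditions=["User account exists with email test@example.com"],
    steps=["Navigate to /login", "Enter email and password", "Click Sign In"],
    expected="Redirected to dashboard; username shown in top-right corner",
)
```

Whether you keep this in a spreadsheet, a doc, or a tool, the same three fields are the minimum that makes a test case executable by someone other than its author.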
Need help writing effective test cases? The guide to writing test cases covers structure, examples, and common pitfalls in detail.
Store them somewhere structured#
For your first 20 test cases, a spreadsheet works. But plan to migrate. Teams that stay in spreadsheets past 50 test cases consistently regret it: no version history, no execution tracking, no reporting.
In TestRush, you'd create a script called "Authentication" with headers for each scenario group (Login, Signup, Password Reset) and child items for individual test cases. Tags like "smoke" and "regression" let you filter which cases to run in different situations.
Week 2: your first test run#
Run the tests against a real build#
This is where most teams stall. They write test cases and leave them sitting in a document. Don't. Pick your current build, open your test cases, and go through each one.
Mark each test case:
- Pass — works as expected
- Fail — something is wrong (note what happened)
- Blocked — can't test this (environment issue, missing data, dependency broken)
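The three statuses above map naturally onto a small enum, and a run is just test cases paired with a status and a note. A minimal sketch with made-up names, assuming nothing about any particular tool:

```python
from enum import Enum

class Status(Enum):
    PASS = "pass"        # works as expected
    FAIL = "fail"        # something is wrong; note what happened
    BLOCKED = "blocked"  # environment issue, missing data, broken dependency

# One run: test case title -> (status, note). The note is what makes a failure actionable.
results = {
    "Login with valid credentials": (Status.PASS, ""),
    "Login with wrong password": (Status.FAIL, "Spinner for 10+ s, then 500 error in console"),
    "Password reset email": (Status.BLOCKED, "Staging mail server is down"),
}

failed = [title for title, (status, _) in results.items() if status is Status.FAIL]
```

Even in a spreadsheet, keeping the note column mandatory for anything that isn't a pass saves you from "what was wrong with this again?" a week later.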
Record everything#
Write down what failed and why. "Login button doesn't respond" is vague. "Clicking Sign In with valid credentials shows a spinner for 10+ seconds, then returns 500 error in console" is actionable.
The first run is always the most informative. You'll find bugs you didn't know about. You'll realize some test cases are unclear. You'll notice gaps where you didn't write a test for something obvious.
Cem Kaner, one of the founders of context-driven testing, nailed why this matters: "Testing is not about finding bugs. It is about providing information." Your first test run gives you information you've never had — a structured view of what works and what doesn't in your product.
Test case maintenance remains one of the top challenges for QA teams year after year — PractiTest State of Testing, 2025
Fix your test cases based on what you learned#
After the first run, update your test cases:
- Clarify steps that were confusing
- Add preconditions you forgot
- Remove duplicate or irrelevant cases
- Split complex cases into smaller, focused ones
This review is just as valuable as the test execution itself.
Week 3: expand to a second feature#
Add another feature area#
Repeat the Week 1 process for your second-most critical feature. You now have two scripts: Authentication and [Core Feature].
Introduce tags#
With two feature areas, you need a way to run subsets. Tags solve this:
- smoke — the absolute minimum check. 10-15 cases that verify the app starts, users can log in, and the core feature works. Run this on every build.
- regression — comprehensive check before releases. All your test cases, run end-to-end.
- critical — tests for functionality where a failure means data loss, security breach, or payment errors.
Tag each test case as you go. A single case can have multiple tags ("smoke" + "critical" for the login test).
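Tag-based selection is just set membership. A minimal sketch with hypothetical case titles, showing how one case can sit in several suites at once:

```python
# Hypothetical test cases with tags; a single case can carry multiple tags.
cases = [
    {"title": "Login with valid credentials", "tags": {"smoke", "critical"}},
    {"title": "Password reset email",         "tags": {"regression"}},
    {"title": "Checkout with saved card",     "tags": {"smoke", "critical", "regression"}},
]

def select(cases, tag):
    """Return the titles of all cases carrying the given tag."""
    return [c["title"] for c in cases if tag in c["tags"]]

smoke_suite = select(cases, "smoke")
```

The smoke suite stays small because it is a filter over the full set, not a separate copy, so there is nothing extra to keep in sync.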
Set up a schedule#
Even a basic schedule prevents testing from being forgotten:
- Every build/deploy: Run smoke tests (15 minutes)
- Every sprint/release: Run full regression (1-3 hours depending on size)
- After hotfixes: Run regression for the affected feature area only
Week 4: your first regression run#
Run everything before a release#
A regression run checks that previously working features still work after recent changes. This is where test management pays for itself.
Go through all your test cases across both feature areas, marking results. Compare against your previous runs. Are the same things failing? Are new failures appearing?
Build your first report#
After the regression run, you have data. Even a simple summary is useful:
- Total test cases: 45
- Passed: 38
- Failed: 5
- Blocked: 2
- Pass rate: 84%
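The summary above is straightforward arithmetic, worth automating so every run reports the same way. A sketch with the numbers from the example:

```python
def summarize(passed: int, failed: int, blocked: int) -> dict:
    """Turn a run's raw counts into the summary numbers for the team."""
    total = passed + failed + blocked
    pass_rate = round(100 * passed / total)  # 38 / 45 -> 84%
    return {"total": total, "passed": passed, "failed": failed,
            "blocked": blocked, "pass_rate": pass_rate}

report = summarize(passed=38, failed=5, blocked=2)
```

Note that blocked cases count against the pass rate: a case you couldn't run is not a case that passed, and treating it otherwise quietly inflates the number.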
Share this with your team. This is often the first time anyone has a concrete, quantitative answer to "is this build ready to ship?"
Scaling: when to add more structure#
Your basic process is running. Here's when to add more sophistication:
At 100+ test cases#
You need a real test management tool. Spreadsheets become unmanageable: slow to load, no search, no execution history. TestRush handles nested scripts, tag filtering, keyboard-first execution, and run history from day one. Pricing starts at $8/month for the whole team.
At 3+ testers#
You need clear ownership. Who runs which tests? Who reviews failures? Without explicit assignment, you get duplication (two people test the same thing) and gaps (nobody tests the edge cases).
At weekly releases#
Your smoke suite should be tight: 15 minutes maximum. If smoke tests take longer, you're either running too many cases or your tool is too slow. Keyboard shortcuts make a noticeable difference here: press 1 for pass, 2 for fail, arrows to move through items.
At the "we need to report to stakeholders" point#
Stakeholders care about trends, not individual test results. They want to know: "Is quality improving?" "Are we ready to release?" "What areas have the most failures?" Build simple pass/fail trend charts from your run data.
Tools: what you need and when#
Day 1: the minimum#
- A place to write test cases (spreadsheet, doc, or test management tool)
- A way to mark pass/fail results
- A channel to report failures (Slack, email, issue tracker)
Month 1: growing up#
- A test management tool with execution tracking
- Tags for filtering (smoke, regression)
- A bug tracker integration (link failed tests to tickets)
Month 6: the mature setup#
- Regression suite running before every release
- AI-assisted test case generation via MCP
- Automated smoke tests in CI (optional, for stable flows)
- Guest access for external testers or stakeholders to run specific suites
TestRush covers the Month 1 and Month 6 needs out of the box: scripts with nested structure, tags, keyboard execution, MCP integration, and guest access for external testers.
Common mistakes#
- Trying to cover everything at once. Writing 200 test cases before running a single one means you'll have 200 untested, unrefined test cases. Start with 20, run them, learn, then expand.
- Making the process too formal too early. You don't need a test plan document, a requirements traceability matrix, and a formal review process on day one. You need test cases and a run. Add formality as the team grows.
- Skipping the "fix the test cases" step. Your first draft of test cases will be imperfect. If you don't revise them after the first run, they stay imperfect forever. Build review into your cycle.
- Not recording failures properly. "Login broken" in a Slack message gets lost. A failed test case with a description, screenshots, and a linked bug ticket gets fixed. Use your test management tool to document what went wrong.
- Waiting for a "QA hire" to start testing. Developers can write and execute test cases. Many startups run effective testing processes with developers testing each other's features using structured scripts. A dedicated QA person helps, but isn't a prerequisite.
FAQ#
How long does it take to set up a QA process?#
A basic process (test cases written, first run executed, results recorded) can be running within a week. You'll spend day one writing cases, day two running them, and the rest of the week refining based on what you learn. A mature process with regression suites, tags, and stakeholder reporting takes about a month to establish.
Do I need a dedicated QA person to start?#
Not necessarily. Many startups begin with developers testing each other's work using structured test cases. Cross-review catches bugs that the original developer is blind to. A dedicated QA engineer becomes valuable once you have more than 100 test cases or need someone focused on test strategy and process improvement.
What should I test first?#
Test the feature that would cause the most damage if it broke. For most products: authentication (can users log in?), the core value proposition (can users do the main thing they pay for?), and payments (is money being handled correctly?). Everything else comes after these three are covered.
How do I convince my team to invest in QA?#
Don't argue in theory. Demonstrate in practice. Run one test pass against the current build, find 3-5 real bugs, and show the team. Nothing builds the case for testing faster than a list of bugs that were about to ship to production. The numbers from your first regression run make the argument for you.
Ready to set up your QA process? Start your free trial — it takes under 5 minutes to create your first test script and run it.