
How to Build a QA Process From Scratch

9 min read · By Elmonds Kreslins

Most teams build QA backwards

The typical story goes like this: the team ships fast for the first year, bugs get fixed as they're reported by real users, and it mostly works. Then the product grows, the team grows, and suddenly a single release can break three things that used to work fine. Someone suggests "we should probably do proper QA" and the search begins.

The problem is that most guides to "setting up QA" start with tools and frameworks. Install this, run that, write tests here. But tools without process just add noise. This guide starts from the other end: what does a QA process actually need to achieve, and how do you build one that fits where your team is right now?

Step 1: Define what you are actually trying to prevent

Before writing a single test case, spend 30 minutes with your product and engineering lead answering one question: what types of failures have caused the most damage in the past 12 months?

The answer is almost always in one of three categories:

  • Regressions: a change to one area broke something unrelated. A payment flow stopped working after a UI update. An email notification disappeared after a backend refactor.
  • Edge cases: something works for most users but fails for a specific combination of account type, browser, or data state. Real users often find these before testers do.
  • Release-day surprises: something that worked in staging didn't work in production. Environment differences, configuration issues, or scale-related failures.

Your QA process should directly address whichever of these has cost you the most. A team that has repeatedly shipped regressions needs a solid regression test suite. A team burned by staging-to-production differences needs environment parity and smoke tests on every deployment. Start specific, not general.

Step 2: Map your critical paths

A critical path is a user journey that, if broken, causes immediate and significant harm to the business. For most products this is a short list:

  • Sign up and onboarding
  • Core product action (the thing the user came to do)
  • Checkout or subscription flow (if revenue-generating)
  • Account management: password reset, profile update, billing changes

These paths get tested on every single release, without exception. Everything else gets tested on a risk-adjusted basis: how often does this area change? What is the impact if it breaks? How hard is it to test manually?

Write these paths down as plain-language user stories before you write any test cases. "User creates account with valid email and password, receives confirmation email, logs in successfully." Simple. Unambiguous. Something any tester can execute without needing to guess what "working" looks like.

Step 3: Choose your test types before you choose your tools

There are four types of testing that every product needs at a basic level. Each answers a different question:

  • Functional testing: does the feature do what it is supposed to do? This is your baseline. Manual execution against written test cases.
  • Regression testing: did the new change break anything that previously worked? This is where automation earns its cost.
  • Exploratory testing: what does the feature do that no one thought to write a test case for? This requires a human tester with product knowledge and curiosity.
  • Non-functional testing: how does the product perform under load, across different browsers, for users with disabilities? This is often the last to be introduced and the most costly to skip.

Most early-stage teams start with functional and exploratory testing, then add regression automation as the product matures. Non-functional testing (performance, accessibility, cross-browser) gets introduced when there is a specific driver: a launch, a compliance requirement, or an incident.

Step 4: Write test cases that are actually useful

A bad test case is vague, context-dependent, and requires the tester to make decisions about what "passing" means. A good test case has three parts: a precondition, steps, and an expected result. Nothing more.

Example of a poor test case: "Test the login page."

Example of a good test case:

  • Precondition: user has a verified account with email test@example.com and password Test123!
  • Steps: navigate to /login, enter email, enter password, click Sign In
  • Expected result: user is redirected to /dashboard and sees their name in the top-right navigation

Write test cases for your critical paths first. Store them somewhere the whole team can see: a shared Notion page, a Google Sheet, or a test management tool like TestRail or Qase. The format matters less than the fact that they exist and are maintained.
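If you would rather keep those cases in version control next to the code, the same three-part structure maps directly onto a small typed record. This is a minimal sketch, not a format required by TestRail, Qase, or any other tool; the interface name and ID scheme are made up for illustration.

```ts
// Illustrative shape for storing test cases as data. Field names and the
// AUTH-001 ID are placeholders, not an imported standard.
interface TestCase {
  id: string;
  precondition: string;
  steps: string[];
  expectedResult: string;
}

const loginHappyPath: TestCase = {
  id: 'AUTH-001',
  precondition:
    'User has a verified account with email test@example.com and password Test123!',
  steps: ['Navigate to /login', 'Enter email', 'Enter password', 'Click Sign In'],
  expectedResult:
    'User is redirected to /dashboard and sees their name in the top-right navigation',
};
```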

Step 5: Decide what to automate (and what not to)

Automation is worth the investment for tests that are executed frequently, are stable (the behaviour doesn't change often), and are time-consuming to run manually. That description fits most regression suites.

The best starting point for automation is the critical paths you mapped in step 2. Write end-to-end tests that exercise each path from the user's perspective. Playwright is the current standard for this: it's fast, reliable, has excellent documentation, and the test code is readable enough that non-engineers can understand what is being tested.
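As a rough illustration, here is what a Playwright end-to-end test for the login path from step 4 could look like. The base URL, field labels, and credentials are placeholders; swap in whatever your product actually uses.

```ts
import { test, expect } from '@playwright/test';

// Critical path: existing user logs in and lands on the dashboard.
// URL, field labels, and credentials below are placeholders.
test('user can log in and reach the dashboard', async ({ page }) => {
  await page.goto('https://staging.example.com/login');

  await page.getByLabel('Email').fill('test@example.com');
  await page.getByLabel('Password').fill('Test123!');
  await page.getByRole('button', { name: 'Sign In' }).click();

  // Expected result from the test case: redirect plus visible user name.
  await expect(page).toHaveURL(/\/dashboard/);
  await expect(page.getByText('Test User')).toBeVisible();
});
```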

Do not automate tests that require visual judgment (does this look right?), tests for features that are still changing rapidly, or exploratory scenarios. These are manual testing territory.

A realistic automation timeline for a new product: three to six months to have meaningful regression coverage on your critical paths, running in CI on every pull request. This assumes one engineer dedicating meaningful time to it, not fitting it around everything else.
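For the CI part, a minimal GitHub Actions workflow that runs the Playwright suite on every pull request looks roughly like this; the Node version and commands will vary with your project setup.

```yaml
name: e2e
on: pull_request

jobs:
  playwright:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: npx playwright test
```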

Step 6: Integrate testing into your sprint cycle

Testing at the end of a sprint is a bottleneck by design. Every ticket that gets developed arrives at QA at the same time, and the release date is fixed. Something always gets cut or ships under-tested.

The fix is to move testing earlier in the ticket lifecycle. This means:

  • QA is involved in ticket refinement, not just execution. Before a ticket enters development, the acceptance criteria should include a clear definition of testable behaviour.
  • Test cases are written before or during development, not after. The developer knows what correct behaviour looks like before they start building it.
  • QA testing begins as soon as individual tickets are ready, not when the whole sprint is "done". Large releases get tested incrementally.

This approach, sometimes called shift-left testing, doesn't require more testing hours. It redistributes them to points in the process where catching a bug is ten times cheaper than catching it later.

Step 7: Build a bug reporting discipline

A bug report that doesn't reproduce is worse than no bug report. It consumes developer time and creates distrust between QA and engineering. Every bug report should include:

  • Environment: browser, OS, device, user account type, test data used
  • Steps to reproduce: exact sequence, numbered, from a clean starting state
  • Actual result: what happened
  • Expected result: what should have happened
  • Severity: blocks release, impairs core function, cosmetic, or informational
  • Evidence: screenshot, screen recording, or console log

A screen recording tool like Loom or a browser extension like Bug Magnet makes this fast. A good bug report takes three minutes to write and saves thirty minutes of back-and-forth with the developer.
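To make the format concrete, here is what a filled-in report could look like. Every detail below is invented purely for illustration.

```text
Title: Password reset email not sent for SSO accounts

Environment: Chrome 126 on macOS 14, staging, account type: Google SSO,
test user sso-test@example.com

Steps to reproduce:
1. Log out of any existing session
2. Go to /login and click "Forgot password"
3. Enter sso-test@example.com and submit

Actual result: success message shown, but no email arrives after 15 minutes
(spam folder checked)
Expected result: reset email arrives, or the form explains that SSO accounts
must reset through their identity provider

Severity: impairs core function
Evidence: screen recording attached (reset-sso.mp4)
```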

Step 8: Add non-functional testing when the product is ready for it

Performance, accessibility, and cross-browser testing are important but they don't need to be solved on day one. They get added as the product matures and the specific risks become clear.

Performance testing becomes critical when you have a launch with a marketing push, a product used under real load, or a checkout flow where latency costs conversion. A basic k6 load test on your critical paths takes a day to set up and gives you a reliable signal before every major release.
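k6 scripts are plain JavaScript, so the first version can stay very small. A sketch along these lines, with the URL, virtual user count, and threshold as placeholders to tune, is enough to show whether a critical endpoint degrades under modest load.

```js
import http from 'k6/http';
import { check, sleep } from 'k6';

// Placeholders: adjust virtual users, duration, and thresholds to your traffic.
export const options = {
  vus: 20,
  duration: '2m',
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests under 500 ms
  },
};

export default function () {
  const res = http.get('https://staging.example.com/api/products');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```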

Accessibility testing is worth starting early because accessibility debt compounds. An automated scan with axe-core in your CI pipeline catches the most common issues (missing alt text, form labels, contrast failures) before they accumulate. Manual screen reader testing comes later, but the automated baseline is cheap and catches a lot.
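If your end-to-end suite is already in Playwright, one way to get that automated baseline is the @axe-core/playwright package: run a scan inside an existing test and fail on any violation. The URL here is a placeholder; scan the critical paths you mapped in step 2.

```ts
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

// Scans the rendered page with axe-core and fails on any violation.
test('dashboard has no detectable accessibility violations', async ({ page }) => {
  await page.goto('https://staging.example.com/dashboard');

  const results = await new AxeBuilder({ page }).analyze();

  expect(results.violations).toEqual([]);
});
```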

Cross-browser and device testing is most valuable after you have analytics data showing you where your users actually are. Test the top three browser-device combinations in your user base, not a theoretical matrix.

The honest answer on build vs hire

Building a QA process from scratch takes longer than most teams expect. The documentation, tooling, automation, and discipline all need to be established while the product continues to ship. Many teams find that the fastest path to a working QA process is to bring in a specialist for the initial setup, use that to build the process and tooling, and then hand it over to the internal team once the foundations are in place.

If you are at the stage of building this from zero and want a practical conversation about what your product specifically needs, book a free 30-minute call. No pitch, no obligation — just a QA engineer looking at your product and telling you where to start.

Related reading: The Complete Web Application QA Testing Checklist · What is Exploratory Testing and Why Automation Can't Replace It · Running Playwright Tests in GitHub Actions


Elmonds Kreslins

Lead QA Engineer

Elmonds has led QA programmes at the BBC, Bupa, and multiple UK fintech startups. He founded RedQA to give growing product teams access to the same quality rigour as enterprise engineering teams, without the overhead.
