QA Engineer Skills 2026

Definition of Done

Why Definition of Done Is Critical

The Definition of Done (DoD) is the team's shared agreement on what "done" means. Without explicit test criteria in the DoD, "done" defaults to "the developer thinks it works." This leads to stories being marked complete without adequate testing, bugs escaping to production, and arguments about whether something is really "done."

A well-defined DoD is the single most impactful process artifact a QA engineer can champion.


A Strong Definition of Done

Here is a DoD for a team with quality discipline:

  • Code reviewed and approved by at least one peer
  • Unit tests written and passing (coverage does not decrease)
  • Integration tests written for new API endpoints
  • Browser tests written for new user-facing functionality
  • Manual exploratory testing completed
  • No critical or high-severity defects open
  • Test evidence linked in the story (CI run URL, screenshots)
  • Accessibility basics verified (keyboard navigation, screen reader labels)
  • Documentation updated if behavior changed
  • Deployed to staging and smoke-tested

Anatomy of Each DoD Item

Code Review

Code review is the first quality gate. It catches logic errors, security issues, and design problems before testing begins.

QA engineers bring a distinct perspective to code review and should participate, especially for:

  • Test code (is it deterministic, isolated, well-named?)
  • Application code that affects testability (are there data-testid attributes, API contracts?)
  • Configuration changes (environment variables, feature flags)

Unit Test Coverage

Unit tests verify individual functions and methods in isolation. They are the fastest tests and should cover all significant logic paths.

What "coverage does not decrease" means in practice:

  • New code must have unit tests
  • Deleting code may decrease coverage (that is acceptable)
  • Refactoring should maintain or improve coverage
  • A specific threshold (e.g., 80%) is less important than the trend
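The "coverage does not decrease" rule is easiest to enforce as an automated CI gate rather than a manual check. A minimal sketch in TypeScript, assuming the coverage tool reports a single line-coverage percentage for each branch (the function name and tolerance value here are illustrative, not any specific tool's API):

```typescript
// Decide whether a pull request passes the coverage gate.
// baseCoverage: line coverage % on the main branch
// prCoverage:   line coverage % on the PR branch
// tolerance:    small allowance so deleting dead code does not fail the gate
function coverageGate(
  baseCoverage: number,
  prCoverage: number,
  tolerance = 0.5,
): { pass: boolean; message: string } {
  const delta = prCoverage - baseCoverage;
  if (delta >= 0) {
    return { pass: true, message: `Coverage held or improved (+${delta.toFixed(2)}%)` };
  }
  if (Math.abs(delta) <= tolerance) {
    return { pass: true, message: `Coverage dipped within tolerance (${delta.toFixed(2)}%)` };
  }
  return { pass: false, message: `Coverage decreased by ${Math.abs(delta).toFixed(2)}%` };
}
```

The small tolerance reflects the bullet above: deleting code may legitimately lower the percentage, so the gate tracks the trend rather than a hard threshold.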

Integration Tests

Integration tests verify that components work together: API endpoints, database queries, service interactions.

When a story adds or changes an API endpoint, integration tests should verify:

  • Happy path: correct request returns correct response
  • Error paths: invalid input returns appropriate error codes
  • Edge cases: boundary values, empty collections, large payloads
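These three categories fit naturally into one table-driven test, so adding a new case is one line. A sketch against a hypothetical createOrder handler (the handler, its field names, and its status codes are all illustrative stand-ins for a real endpoint):

```typescript
// A hypothetical request handler standing in for a real API endpoint.
type ApiResponse = { status: number; body?: unknown };

function createOrder(payload: { items?: string[] }): ApiResponse {
  if (!payload.items) return { status: 400, body: { error: "items required" } };
  if (payload.items.length === 0) return { status: 422, body: { error: "order must contain items" } };
  if (payload.items.length > 100) return { status: 413, body: { error: "too many items" } };
  return { status: 201, body: { count: payload.items.length } };
}

// Happy path, error paths, and edge cases expressed as one table of cases.
const cases: Array<{ name: string; payload: { items?: string[] }; expectStatus: number }> = [
  { name: "happy path", payload: { items: ["sku-1"] }, expectStatus: 201 },
  { name: "missing field", payload: {}, expectStatus: 400 },
  { name: "empty collection", payload: { items: [] }, expectStatus: 422 },
  { name: "large payload", payload: { items: Array(101).fill("sku") }, expectStatus: 413 },
];

for (const c of cases) {
  const res = createOrder(c.payload);
  if (res.status !== c.expectStatus) {
    throw new Error(`${c.name}: expected ${c.expectStatus}, got ${res.status}`);
  }
}
```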

Browser Tests

Browser tests verify the end-to-end user experience in a real browser. They should cover the primary user flows added or changed by the story.

Not every story needs browser tests. A backend-only change that modifies a database query does not need a new browser test (though existing browser tests should still pass).

Exploratory Testing

Exploratory testing is manual, creative testing that automated tests cannot replicate. It discovers issues that no one thought to write a test for.

Exploratory testing checklist:

  • Try unexpected inputs (emojis, very long strings, special characters)
  • Test with slow network conditions
  • Test with different user roles and permissions
  • Test the feature on mobile viewports
  • Try to break the feature by using it in ways the spec does not describe
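The "unexpected inputs" item can be semi-automated by keeping a shared list of adversarial strings that anyone can paste into a new text field. A sketch (the specific strings and the helper function are examples, not an exhaustive or authoritative list):

```typescript
// Adversarial inputs worth trying in any new text field during exploratory testing.
const adversarialInputs: string[] = [
  "😀🔥👍",                      // emoji / non-BMP characters
  "a".repeat(10_000),            // very long string
  "<script>alert(1)</script>",   // HTML/script injection attempt
  "'; DROP TABLE users; --",     // SQL-injection-shaped input
  "   leading and trailing   ",  // whitespace handling
  "Ω≈ç√∫˜µ≤≥÷",                  // non-ASCII symbols
  "",                            // empty string
];

// Example helper: flag inputs the application should reject or sanitize.
function isSuspicious(input: string): boolean {
  return input.length > 1_000 || /<script|drop table/i.test(input);
}
```

The point is not the helper itself but the shared list: it turns "try weird inputs" from a vague reminder into a repeatable step.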

Test Evidence

"Trust but verify" applies to testing too. Link to the CI run that shows tests passing, include screenshots of manual testing, and reference the specific test files.

Evidence examples:

  • CI run URL: https://github.com/org/repo/actions/runs/12345
  • Screenshot: screenshot-checkout-mobile.png
  • Test file reference: tests/checkout/payment.spec.ts
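Evidence gets attached far more consistently when it is generated rather than hand-written. A minimal sketch that formats an evidence comment ready to paste into the story (the interface fields are illustrative):

```typescript
// Shape of the evidence gathered for one story.
interface TestEvidence {
  ciRunUrl: string;
  screenshots: string[];
  testFiles: string[];
}

// Render the evidence as a bullet list ready to paste into the story.
function formatEvidence(e: TestEvidence): string {
  const lines = [
    `CI run: ${e.ciRunUrl}`,
    ...e.screenshots.map((s) => `Screenshot: ${s}`),
    ...e.testFiles.map((t) => `Test file: ${t}`),
  ];
  return lines.map((l) => `- ${l}`).join("\n");
}
```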

No Critical or High-Severity Defects Open

A story is not done if it has known critical or high-severity bugs. These must be fixed within the story's scope, not deferred to another sprint.

Low and medium severity bugs can be tracked as follow-up stories, but they should be documented and linked.

Accessibility

Accessibility is not a "nice to have" -- it is a legal requirement in many jurisdictions and a quality standard.

Minimum accessibility checks:

  • Keyboard navigation: Can every interactive element be reached with Tab?
  • Screen reader labels: Do buttons and inputs have meaningful aria-labels?
  • Color contrast: Is text readable against its background?
  • Focus indicators: Can you see where keyboard focus is?
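The "screen reader labels" check can be sketched as a rule over a simplified element model. In a real project this would query the live DOM or run an audit tool such as axe; the shapes and rules below are illustrative assumptions:

```typescript
// Simplified description of an interactive element, as might be extracted from the DOM.
interface ElementInfo {
  tag: string;
  text?: string;       // visible text content
  ariaLabel?: string;  // aria-label attribute
  tabIndex?: number;   // explicit tabindex, if any
}

// Flag interactive elements that a screen reader would announce as unlabeled,
// or that an explicit negative tabindex removes from keyboard navigation.
function accessibilityIssues(elements: ElementInfo[]): string[] {
  const issues: string[] = [];
  for (const el of elements) {
    const hasLabel = Boolean(el.ariaLabel?.trim() || el.text?.trim());
    if (!hasLabel) issues.push(`${el.tag}: no accessible name`);
    if (el.tabIndex !== undefined && el.tabIndex < 0) {
      issues.push(`${el.tag}: removed from tab order`);
    }
  }
  return issues;
}
```

An icon-only button with no aria-label is the classic failure this catches: it works fine with a mouse and is invisible to a screen reader.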

Implementing the DoD

Step 1: Draft with the Team

The DoD is a team agreement, not a QA mandate. Bring a draft to a retrospective or dedicated session, discuss each item, and agree on what is realistic for your team's current maturity.

Step 2: Start Small and Grow

If your team currently has no DoD, do not introduce a 15-item checklist overnight. Start with 5 items the team can commit to, then add more as the team builds the habit.

Starter DoD (minimum viable quality):

  • Code reviewed by at least one peer
  • All existing tests pass
  • No critical defects open for this story
  • Deployed to staging environment
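A starter DoD like this can also be encoded as data, so a bot or PR check can report which items are still open instead of relying on memory. A sketch (the item names mirror the starter list above; the structure and field names are illustrative):

```typescript
interface DoDItem {
  name: string;
  done: boolean;
}

// The starter DoD, encoded as a checklist built from a story's current state.
const starterDoD = (state: Record<string, boolean>): DoDItem[] => [
  { name: "Code reviewed by at least one peer", done: state.reviewed ?? false },
  { name: "All existing tests pass", done: state.testsPass ?? false },
  { name: "No critical defects open for this story", done: state.noCriticalDefects ?? false },
  { name: "Deployed to staging environment", done: state.onStaging ?? false },
];

// A story is "done" only when every item is satisfied; otherwise list what is missing.
function checkDone(items: DoDItem[]): { done: boolean; missing: string[] } {
  const missing = items.filter((i) => !i.done).map((i) => i.name);
  return { done: missing.length === 0, missing };
}
```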

Mature DoD (target state): The full 10-item list above.

Step 3: Make It Visible

The DoD should be:

  • Posted in the team's Slack channel or wiki
  • Referenced during sprint reviews ("Let us check the DoD for this story")
  • Printed on the wall (for co-located teams)
  • Part of the Jira workflow (required fields or checklists)

Step 4: Enforce Without Being Rigid

The DoD should be followed consistently, but exceptions happen. When the team agrees to skip a DoD item for a specific story, document why. If exceptions become the norm, the DoD is too ambitious -- scale it back.


DoD Anti-Patterns

  • DoD exists but nobody follows it. Problem: it is decoration, not a standard. Fix: review the DoD in every sprint review.
  • DoD is too long (20+ items). Problem: the team ignores most of it. Fix: focus on 8-10 high-impact items.
  • DoD does not include testing. Problem: "done" means "code written, not tested". Fix: add specific test criteria (unit, integration, exploratory).
  • DoD is static. Problem: team maturity changes but the DoD does not. Fix: review and update it quarterly.
  • Different DoD per person. Problem: inconsistent quality. Fix: one team, one DoD.
  • DoD is just a Jira checklist. Problem: items are checked off without the work actually being done. Fix: randomly audit completed stories for DoD compliance.

Measuring DoD Effectiveness

Track these metrics to see if your DoD is working:

  • Escaped defects per sprint. Tells you: are bugs getting past the DoD? Healthy target: fewer than 2 per sprint.
  • Stories reopened after "Done". Tells you: was the DoD actually followed? Healthy target: under 5% of stories.
  • Sprint velocity variance. Tells you: is the DoD helping the team deliver predictably? Healthy target: under 15% variance sprint-to-sprint.
  • DoD exception rate. Tells you: how often is the DoD bypassed? Healthy target: under 10% of stories.
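The reopen and exception rates are simple to compute from sprint data. A sketch, assuming each story record carries a reopened flag and a DoD-exception flag (the field names are illustrative, not a real tracker's schema):

```typescript
interface StoryRecord {
  reopenedAfterDone: boolean;   // story was marked done, then reopened
  dodExceptionGranted: boolean; // team agreed to skip a DoD item for this story
}

interface SprintMetrics {
  reopenRate: number;    // fraction of stories reopened after being marked done
  exceptionRate: number; // fraction of stories where a DoD item was skipped
}

function sprintMetrics(stories: StoryRecord[]): SprintMetrics {
  const n = stories.length || 1; // avoid division by zero on an empty sprint
  return {
    reopenRate: stories.filter((s) => s.reopenedAfterDone).length / n,
    exceptionRate: stories.filter((s) => s.dodExceptionGranted).length / n,
  };
}
```

Comparing these rates against the targets above, sprint over sprint, shows whether the DoD is actually holding.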

Hands-On Exercise

  1. Write down your team's current Definition of Done. If there is none, that is the answer.
  2. Compare it to the "strong DoD" above. Which items are missing?
  3. Propose 2-3 additions to your DoD at the next retrospective.
  4. Audit 5 recently completed stories against the DoD. Were all items actually satisfied?
  5. Measure the escaped defect rate for the last 3 sprints. Is the DoD preventing bugs?