QA Engineer Skills 2026: Estimation and Blockers

Estimation and Blockers

Estimating Testing Effort

When the team estimates story points, testing effort must be included. A story is not "2 points for dev and then QA tests it." It is 2 points total, including testing. If the team consistently underestimates testing effort, stories spill over into the next sprint, and QA becomes the bottleneck.


Factors That Increase Testing Effort

Factor                                   | Impact                           | Example
Multiple platforms/browsers              | 2-4x test effort                 | "Test on Chrome, Firefox, Safari, and mobile"
Complex data setup                       | Hours of preparation             | "Need 10,000 products in the catalog for performance testing"
External system integration              | Dependencies and timing risks    | "Payment API has rate limits and a flaky sandbox"
New features without test infrastructure | Must build infrastructure first  | "No page objects exist for the new admin panel"
Regulatory/compliance requirements       | Documentation and audit trail    | "Need evidence of every test case for SOC 2 audit"
Multiple user roles                      | Combinatorial testing            | "Admin, manager, viewer, and guest all have different permissions"
Data migrations                          | Backward compatibility testing   | "Migrating user table schema; verify no data loss"
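The "multiple user roles" factor grows multiplicatively, which is easy to underestimate. A minimal sketch of sizing that matrix (the role and action names here are illustrative, not from any real permission model):

```python
from itertools import product

# Illustrative roles and actions; real values come from your permission model.
roles = ["admin", "manager", "viewer", "guest"]
actions = ["create_order", "refund_order", "view_reports"]

# Every (role, action) pair needs at least one permission check.
matrix = list(product(roles, actions))
print(len(matrix))  # 4 roles x 3 actions = 12 cases, before edge cases
```

Adding one role or one permission-sensitive action grows the matrix by a whole row or column, which is why this factor belongs in the estimate rather than being discovered mid-sprint.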

Estimation Guide

When the team estimates a story, QA should consider:

  1. How many test cases are needed? A simple bug fix might need 2-3 tests. A new feature might need 20+.
  2. Is test infrastructure ready? If page objects, test data, or environments need setup, add time.
  3. How many environments/browsers? Multiply effort by the number of platforms.
  4. Are there dependencies? External APIs, shared test data, or other stories that must be done first.
  5. What is the risk level? High-risk features (payment, authentication) need more thorough testing.
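The five questions above can be sketched as a rough back-of-the-envelope calculator. This is not a formal estimation model; the base hours, infrastructure cost, and risk multiplier are illustrative assumptions to show how the factors compound:

```python
# Rough effort sketch: base test effort scaled by the factors above.
# All constants are illustrative; calibrate against your own history.
def estimate_test_hours(base_hours, platforms=1, needs_infra=False,
                        high_risk=False):
    hours = base_hours * platforms  # one test pass per platform/browser
    if needs_infra:
        hours += 8                  # e.g. build page objects, seed test data
    if high_risk:
        hours *= 1.5                # payment/auth warrant deeper testing
    return hours

# 4h of tests, 3 browsers, new page objects, payment integration:
print(estimate_test_hours(4, platforms=3, needs_infra=True, high_risk=True))
```

Even a crude model like this makes the point in planning: the same story can be a 3 or an 8 depending on platforms, infrastructure, and risk.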

The QA Estimation Anti-Pattern

Team: "This story is 3 points."
QA: "But testing will take 2 days."
Team: "We'll add a separate testing task."

This is wrong. If testing is separate from the story, the story's "done" status becomes meaningless. Testing effort should be baked into the story estimate.

Better approach:

Team: "This story is 3 points."
QA: "Given the testing needed (3 browsers, integration with payment API,
      6 edge cases), I think this is a 5."
Team: "Let's discuss. Can we split it to reduce risk?"

Raising Blockers

A blocker is anything that prevents you from completing your testing work. Raise blockers early, clearly, and publicly. "I cannot test the checkout flow because the payment sandbox is down" is a blocker for standup, not something to mention casually at the end of the sprint.

Blocker Format

Every blocker should include three pieces of information:

  1. What is blocked: The specific story, test, or task that cannot proceed
  2. Why it is blocked: The specific dependency, issue, or missing resource
  3. What is needed to unblock: The action required and who can take it

Example Blockers

BLOCKER: Cannot test SHOP-789 (checkout flow)
REASON: Payment sandbox returns 503 since Monday
NEEDED: Provider to restore service (ticket #45678 filed)
WORKAROUND: Can test non-payment checkout paths; payment tests deferred

BLOCKER: Cannot run integration tests
REASON: Staging database not refreshed with current schema
NEEDED: DevOps to run migration script on staging-db
WORKAROUND: Running tests against local database (partial coverage)

BLOCKER: Cannot test SHOP-801 (email notifications)
REASON: SHOP-800 (email service refactor) not yet deployed to staging
NEEDED: Developer to deploy SHOP-800 to staging
WORKAROUND: None -- email tests depend on the new service
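The three-part format above is easy to template so every blocker comes out in the same shape. A minimal sketch (the class and field names are assumptions, not a standard tool):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Blocker:
    blocked: str                      # what is blocked
    reason: str                       # why it is blocked
    needed: str                       # what unblocks it, and who can act
    workaround: Optional[str] = None  # partial progress, if any

    def render(self) -> str:
        return "\n".join([
            f"BLOCKER: {self.blocked}",
            f"REASON: {self.reason}",
            f"NEEDED: {self.needed}",
            f"WORKAROUND: {self.workaround or 'None'}",
        ])

print(Blocker(
    "Cannot test SHOP-789 (checkout flow)",
    "Payment sandbox returns 503 since Monday",
    "Provider to restore service (ticket #45678 filed)",
    "Can test non-payment checkout paths; payment tests deferred",
).render())
```

Pasting the rendered block into standup notes or the ticket keeps the what/why/needed structure intact even when the blocker is raised in a hurry.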

Blocker Escalation

Duration   | Action
< 4 hours  | Mention in standup, work on other tasks
4-24 hours | Escalate to team lead, document workaround
1-3 days   | Escalate to manager, propose alternative testing approach
3+ days    | Sprint commitment is at risk; discuss descoping with the PO
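The escalation ladder above maps cleanly onto blocker age. A small sketch (the thresholds mirror the table; adjust them to your team's norms):

```python
# Maps blocker age in hours to the escalation action from the table above.
def escalation_action(hours_blocked: float) -> str:
    if hours_blocked < 4:
        return "Mention in standup, work on other tasks"
    if hours_blocked < 24:
        return "Escalate to team lead, document workaround"
    if hours_blocked < 72:
        return "Escalate to manager, propose alternative testing approach"
    return "Sprint commitment at risk: discuss descoping with the PO"

print(escalation_action(30))  # blocked since yesterday morning
```

The point of encoding this is consistency: a blocker's age, not the loudness of whoever raised it, determines how far it escalates.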

The Agile QA Checklist

Use this checklist to ensure nothing falls through the cracks during a sprint.

Before the Sprint

  • Review upcoming stories for testability and clarity
  • Identify testing dependencies (environments, data, tools)
  • Prepare test environments and test data
  • Write draft test cases from acceptance criteria
  • Verify CI pipeline is stable and reliable
  • Check that previous sprint's flaky test fixes are deployed

During the Sprint

  • Attend all ceremonies and contribute the QA perspective
  • Test incrementally as features complete (do not batch at the end)
  • Communicate blockers immediately at standup
  • File defects with clear reproduction steps and evidence
  • Update test management tools with execution results
  • Pair with developers on complex test scenarios
  • Run exploratory testing sessions on completed features

End of Sprint

  • Verify all stories meet the Definition of Done
  • Run full regression suite (automated)
  • File and communicate any remaining defects
  • Prepare quality metrics for sprint review
  • Participate in retrospective with quality data
  • Document what was tested and what was not (release notes)

Between Sprints

  • Address flaky tests and test infrastructure issues
  • Refactor test code that has become hard to maintain
  • Update test documentation and onboarding materials
  • Research new tools or approaches for upcoming features
  • Review and update the Definition of Done if needed
  • Prepare for the next sprint's testing needs

Sprint Velocity and QA

If the team's velocity is inconsistent, testing effort is often the hidden variable. Track these metrics to understand the impact:

Metric                                        | What It Shows
Stories "done" vs "in testing" at sprint end  | Is QA the bottleneck?
Stories carried over from sprint to sprint    | Are estimates too low (often due to underestimated testing)?
Defects found late in the sprint              | Is testing starting too late?
Defects found after sprint closes             | Is the DoD being followed?

If stories consistently pile up in "testing" at the end of the sprint, the team has three options:

  1. Reduce sprint scope: Take fewer stories so testing has adequate time
  2. Shift left: Start testing earlier (see shift-left testing)
  3. Automate more: Reduce manual testing time by automating regression tests
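The first metric in the table is trivial to compute from a sprint-end snapshot. A sketch under stated assumptions: the story list and status strings are illustrative, and in practice you would query your tracker (Jira, Linear, etc.) instead of hard-coding them:

```python
from collections import Counter

# Illustrative sprint-end snapshot; in practice, query your tracker.
stories = [
    {"id": "SHOP-789", "status": "done"},
    {"id": "SHOP-800", "status": "in testing"},
    {"id": "SHOP-801", "status": "in testing"},
    {"id": "SHOP-802", "status": "done"},
]

counts = Counter(s["status"] for s in stories)
print(f"done: {counts['done']}, in testing: {counts['in testing']}")

if counts["in testing"] >= counts["done"]:
    print("QA may be the bottleneck: reduce scope, shift left, or automate")
```

Tracked over several sprints, this one number makes the "stories pile up in testing" pattern visible before it becomes a retrospective complaint.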

Hands-On Exercise

  1. Review the stories in your current sprint. For each one, assess whether testing effort was adequately estimated.
  2. Identify your current blockers and write them using the three-part format above.
  3. Use the agile QA checklist for your next sprint. Track which items you consistently miss.
  4. Propose a change to your team's estimation process that better includes testing effort.
  5. Measure how many stories end the sprint "in testing" vs "done." Is there a pattern?