QA Engineer Skills 2026

Technical Assessment Preparation

What Technical Interviews for QA Actually Test

Technical interviews for QA engineers are fundamentally different from developer interviews. You are rarely asked to implement a sorting algorithm or build a REST API from scratch. Instead, you are evaluated on your ability to think about testing systematically, write test automation code that is clean and maintainable, and design test strategies for complex systems.

The four most common formats are live coding, take-home assignments, "how would you test X" exercises, and system design for testing. Each requires different preparation; this chapter covers all four.


Format 1: Live Coding Challenges

What to Expect

You are given a testing-related coding problem and 30-60 minutes to solve it while the interviewer watches. The problem usually involves one of:

  • Writing a Page Object Model from scratch
  • Building API test automation with assertions
  • Debugging a failing test
  • Parsing and validating test data
  • Writing a utility function for test infrastructure

Sample Challenge: Build a Page Object Model

Prompt: "Here is a login page with a username field, password field, remember-me checkbox, and submit button. Write a Page Object and a test that verifies successful login, failed login with wrong password, and the remember-me functionality."

What a strong answer looks like (Playwright + TypeScript):

// login.page.ts
import { Page, Locator } from '@playwright/test';

export class LoginPage {
  readonly page: Page;
  readonly usernameInput: Locator;
  readonly passwordInput: Locator;
  readonly rememberMeCheckbox: Locator;
  readonly submitButton: Locator;
  readonly errorMessage: Locator;

  constructor(page: Page) {
    this.page = page;
    this.usernameInput = page.getByLabel('Username');
    this.passwordInput = page.getByLabel('Password');
    this.rememberMeCheckbox = page.getByLabel('Remember me');
    this.submitButton = page.getByRole('button', { name: 'Sign in' });
    this.errorMessage = page.getByRole('alert');
  }

  async goto() {
    await this.page.goto('/login');
  }

  async login(username: string, password: string, rememberMe = false) {
    await this.usernameInput.fill(username);
    await this.passwordInput.fill(password);
    if (rememberMe) {
      await this.rememberMeCheckbox.check();
    }
    await this.submitButton.click();
  }
}

// login.spec.ts
import { test, expect } from '@playwright/test';
import { LoginPage } from './login.page';

test.describe('Login', () => {
  let loginPage: LoginPage;

  test.beforeEach(async ({ page }) => {
    loginPage = new LoginPage(page);
    await loginPage.goto();
  });

  test('successful login redirects to dashboard', async ({ page }) => {
    await loginPage.login('validuser', 'validpass');
    await expect(page).toHaveURL('/dashboard');
  });

  test('wrong password shows error message', async () => {
    await loginPage.login('validuser', 'wrongpass');
    await expect(loginPage.errorMessage).toBeVisible();
    await expect(loginPage.errorMessage).toHaveText(
      'Invalid username or password'
    );
  });

  test('remember me persists session after browser restart', async ({
    page,
    context,
  }) => {
    await loginPage.login('validuser', 'validpass', true);
    await expect(page).toHaveURL('/dashboard');

    // Verify cookie is set with extended expiry
    const cookies = await context.cookies();
    const sessionCookie = cookies.find((c) => c.name === 'session');
    expect(sessionCookie).toBeDefined();
    expect(sessionCookie!.expires).toBeGreaterThan(Date.now() / 1000 + 86400);
  });
});

What the interviewer evaluates:

  • User-facing locator strategy (getByLabel, getByRole) rather than brittle CSS selectors
  • Separation of page interactions from test assertions
  • Meaningful test names that describe expected behavior
  • Proper use of setup (beforeEach) and parameterization
  • Edge case thinking (the remember-me test checks the cookie, not just the checkbox)

Sample Challenge: API Test Automation

Prompt: "Write tests for a REST API endpoint POST /api/users that creates a new user. The endpoint accepts {name, email, role} and returns the created user with an id. Test the happy path and at least 3 error cases."

What a strong answer looks like (Python + pytest):

import requests
import pytest

BASE_URL = "https://api.example.com"

class TestCreateUser:
    def test_create_user_success(self):
        payload = {"name": "Jane Doe", "email": "jane@example.com", "role": "tester"}
        response = requests.post(f"{BASE_URL}/api/users", json=payload)

        assert response.status_code == 201
        data = response.json()
        assert data["name"] == "Jane Doe"
        assert data["email"] == "jane@example.com"
        assert data["role"] == "tester"
        assert "id" in data
        assert isinstance(data["id"], int)

    def test_create_user_missing_required_field(self):
        payload = {"name": "Jane Doe"}  # missing email and role
        response = requests.post(f"{BASE_URL}/api/users", json=payload)

        assert response.status_code == 400
        assert "email" in response.json()["errors"]

    def test_create_user_invalid_email(self):
        payload = {"name": "Jane Doe", "email": "not-an-email", "role": "tester"}
        response = requests.post(f"{BASE_URL}/api/users", json=payload)

        assert response.status_code == 400
        assert "email" in response.json()["errors"]

    def test_create_user_duplicate_email(self, create_test_user):
        payload = {"name": "Another Jane", "email": create_test_user["email"], "role": "tester"}
        response = requests.post(f"{BASE_URL}/api/users", json=payload)

        assert response.status_code == 409

    def test_create_user_invalid_role(self):
        payload = {"name": "Jane Doe", "email": "jane2@example.com", "role": "superadmin"}
        response = requests.post(f"{BASE_URL}/api/users", json=payload)

        assert response.status_code == 400
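The duplicate-email test relies on a `create_test_user` fixture that is not shown. One way it might look is the sketch below; the DELETE cleanup endpoint and the unique-email scheme are assumptions added for illustration.

```python
# conftest.py -- a possible fixture backing the duplicate-email test.
# Assumes POST /api/users behaves as described above; the DELETE
# cleanup endpoint is an assumption for illustration.
import uuid

import pytest
import requests

BASE_URL = "https://api.example.com"


@pytest.fixture
def create_test_user():
    # Unique email so repeated runs never collide with old data
    payload = {
        "name": "Fixture User",
        "email": f"fixture-{uuid.uuid4().hex[:8]}@example.com",
        "role": "tester",
    }
    response = requests.post(f"{BASE_URL}/api/users", json=payload)
    assert response.status_code == 201, "fixture setup failed"
    user = response.json()
    yield user
    # Teardown: remove the user so the suite stays idempotent
    requests.delete(f"{BASE_URL}/api/users/{user['id']}")
```

Creating (and deleting) the user through the API rather than the UI keeps setup fast and keeps each test independent.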

Sample Challenge: Debugging a Failing Test

Prompt: "This test passes locally but fails in CI. Find the bug."

def test_report_generation():
    report = generate_report(start_date="2025-01-01", end_date="2025-01-31")
    assert report.title == "Monthly Report - January 2025"
    assert report.generated_at.date() == datetime.date.today()

What to identify: The test is brittle because datetime.date.today() returns a different value depending on when and where it runs. In CI, the timezone might differ from local, and if the test runs around midnight, the date could be off by one. The fix is to mock datetime.date.today() or assert within a time window rather than an exact date.
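The time-window fix can be sketched as below. The `Report` class and `generate_report` body are toy stand-ins (the prompt does not show the real implementation); the other common fix is freezing the clock with a library such as freezegun's `freeze_time`.

```python
import datetime
from dataclasses import dataclass


# Toy stand-in for the code under test -- an assumption, since the
# prompt does not show the real generate_report implementation.
@dataclass
class Report:
    title: str
    generated_at: datetime.datetime


def generate_report(start_date: str, end_date: str) -> Report:
    return Report(
        title="Monthly Report - January 2025",
        generated_at=datetime.datetime.now(datetime.timezone.utc),
    )


def test_report_generation_stable():
    # Fix: assert within a window instead of comparing to "today",
    # so timezone differences and midnight runs cannot cause flakes.
    before = datetime.datetime.now(datetime.timezone.utc)
    report = generate_report("2025-01-01", "2025-01-31")
    after = datetime.datetime.now(datetime.timezone.utc)
    assert before <= report.generated_at <= after
```

Note the timestamps are compared in UTC on both sides, which removes the local-versus-CI timezone mismatch entirely.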


Format 2: "How Would You Test X?" Exercises

This is the most common QA interview format. The interviewer names a system, feature, or physical object, and you are expected to systematically identify test scenarios. The goal is not to list every test case -- it is to demonstrate structured thinking.

The Structured Approach

Use this 5-step framework for any "how would you test" question:

  1. Clarify requirements: Ask 3-5 questions before you start listing tests
  2. Identify user personas: Who uses this and how?
  3. Functional testing: Happy path, edge cases, error handling
  4. Non-functional testing: Performance, security, accessibility, usability
  5. Integration and system testing: How does it interact with other systems?

"How would you test a login page?"

Step 1 -- Clarify: "What authentication methods are supported? Is there MFA? Rate limiting? Password complexity requirements? SSO?"

Step 2 -- Personas: New user, returning user, admin, locked-out user, user on mobile

Step 3 -- Functional:

  • Happy path: Valid username + password, redirect to dashboard
  • Invalid inputs: Wrong password, nonexistent username, empty fields, SQL injection attempts, XSS in username
  • Boundary values: Maximum-length username, maximum-length password, minimum password length
  • Error handling: Clear error message (for security, does not reveal which field is wrong), retry behavior
  • State management: Session creation, cookie handling, concurrent sessions, session expiry
  • Remember me: Cookie persistence, security implications, cross-browser behavior
  • Forgot password: Email delivery, token expiry, link reuse prevention

Step 4 -- Non-functional:

  • Performance: Login under load (1000 concurrent users), response time < 2s
  • Security: Brute force protection, account lockout after N attempts, HTTPS enforcement, password not logged in plaintext, CSRF protection
  • Accessibility: Screen reader compatibility, keyboard navigation, color contrast on error messages
  • Usability: Tab order, autofill behavior, error message clarity

Step 5 -- Integration: OAuth/SSO provider integration, audit log verification, notification system (login from new device)
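The lockout check from the security bullet is one of the few items here that is straightforward to automate. In the sketch below, the /login path, field names, and the 423/429 lockout codes are assumptions about the application's contract; the session object is injected so the check itself can be unit-tested against a stub.

```python
# Sketch of a brute-force/lockout check. The endpoint, field names,
# and lockout status codes (423/429) are assumptions -- adjust them
# to the real application's contract.
BASE_URL = "https://app.example.com"


def attempt_login(session, username, password):
    return session.post(
        f"{BASE_URL}/login",
        json={"username": username, "password": password},
    )


def locks_after(session, attempts=5):
    """Return True if the account locks out after `attempts` failed logins."""
    for _ in range(attempts):
        response = attempt_login(session, "victim", "wrong-password")
        if response.status_code != 401:
            return False  # locked too early, or bad creds not rejected
    final = attempt_login(session, "victim", "wrong-password")
    return final.status_code in (423, 429)
```

In a real suite, `session` would be a `requests.Session`; passing it in also lets the lockout logic be verified against a fake that counts calls.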

"How would you test an elevator?"

This classic question tests whether you can apply systematic thinking to a non-software system.

Clarify: How many floors? How many elevators? Is there a weight limit? Are there accessibility requirements?

  • Basic functionality: Press button, elevator arrives. Select floor, elevator goes there. Doors open and close.
  • Edge cases: Press all floor buttons simultaneously. Press the floor you are already on. Press door-open while doors are closing.
  • Load testing: Maximum weight capacity. One person below minimum. Exactly at the weight limit.
  • Safety: Emergency stop button. Door sensor (object in doorway). Power failure behavior. Fire mode.
  • Accessibility: Braille buttons. Audio announcements. Door timing for wheelchair users. Low-mounted buttons.
  • Concurrency: Multiple people pressing buttons on different floors. Two elevators coordinating for efficiency.
  • Environmental: Operation during an earthquake. Temperature extremes. Water/flooding.
  • Usability: Button feedback (lights up when pressed). Floor indicator display. Wait time expectations.

"How would you test a shopping cart?"

Clarify: Web or mobile? What payment methods? Is there a guest checkout? What is the maximum cart size?

  • Add to cart: Single item, multiple items, same item multiple times, maximum quantity
  • Remove from cart: Remove one item, remove all items, remove during checkout
  • Update quantity: Increase, decrease, set to zero, exceed inventory
  • Pricing: Unit price, bulk discounts, coupon codes, tax calculation by region, currency display
  • Persistence: Cart survives page refresh, cart survives logout/login, cart merges on login (guest + account)
  • Inventory: Out-of-stock handling, item becomes unavailable during browsing, last-item race condition
  • Performance: Cart with 100 items, price recalculation speed, concurrent cart updates
  • Security: Price manipulation via API, coupon code brute force, session hijacking
  • Integration: Payment gateway, inventory system, shipping calculator, tax service
  • Accessibility: Screen reader announces cart updates, keyboard navigation through cart items
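Several of the quantity scenarios above collapse naturally into one parameterized test, which is also the kind of structure interviewers like to see. The `Cart` class below is a hypothetical in-memory stand-in for the real API, with an assumed maximum quantity of 10.

```python
# Parameterized boundary checks for the quantity scenarios above.
# Cart is a hypothetical in-memory stand-in; MAX_QTY is an assumption.
import pytest


class Cart:
    MAX_QTY = 10

    def __init__(self):
        self.items = {}

    def set_quantity(self, sku, qty):
        if qty < 0 or qty > self.MAX_QTY:
            raise ValueError("quantity out of range")
        if qty == 0:
            self.items.pop(sku, None)  # setting zero removes the line item
        else:
            self.items[sku] = qty


@pytest.mark.parametrize("qty,expected_items", [
    (1, {"sku-1": 1}),    # minimum valid quantity
    (10, {"sku-1": 10}),  # exactly at the limit
    (0, {}),              # zero removes the item
])
def test_set_quantity_valid(qty, expected_items):
    cart = Cart()
    cart.set_quantity("sku-1", qty)
    assert cart.items == expected_items


@pytest.mark.parametrize("qty", [-1, 11])
def test_set_quantity_rejected(qty):
    cart = Cart()
    with pytest.raises(ValueError):
        cart.set_quantity("sku-1", qty)
```

One table of inputs, two tests: boundary values and error handling stay visible at a glance instead of being buried in six near-identical functions.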

Format 3: System Design for Testing

Senior and architect-level interviews may ask you to design a test automation framework from scratch.

"Design a test automation framework for an e-commerce platform."

Structure your answer in layers:

Layer 5: Reporting & Analytics
    Test reports, dashboards, trend analysis, Slack/email notifications

Layer 4: CI/CD Integration
    GitHub Actions pipeline, parallel execution, quality gates, artifact storage

Layer 3: Test Suites
    Smoke (5 min), Regression (30 min), Full (2 hr), Performance (1 hr)

Layer 2: Test Infrastructure
    Page Objects, API clients, test data factory, fixtures, utilities

Layer 1: Frameworks & Tools
    Playwright (browser), pytest/requests (API), k6 (performance), axe-core (accessibility)

Key design decisions to discuss:

  • Why Playwright over Selenium: Auto-waiting, built-in API testing, better TypeScript support, and multi-browser support without extra drivers (Chapter 13)
  • Test data strategy: Factory pattern for creating test data, API-based setup instead of UI, cleanup in afterEach hooks
  • Parallelization: Sharding by test file, independent test data per shard, shared nothing architecture
  • Environment management: Configuration per environment, secrets management, feature flags for test control
  • Reporting: HTML reports stored as CI artifacts, Slack notification on failure, trend dashboard in Grafana
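The factory pattern from the test data strategy can be sketched briefly. The `User` fields, defaults, and the counter-based uniqueness scheme below are illustrative assumptions, not a prescribed design.

```python
# Minimal test-data factory sketch. The User fields, defaults, and
# counter-based uniqueness scheme are illustrative assumptions.
import itertools
from dataclasses import dataclass

_seq = itertools.count(1)


@dataclass
class User:
    name: str
    email: str
    role: str = "customer"


def make_user(**overrides) -> User:
    """Build a unique, valid user; tests override only what they care about."""
    n = next(_seq)
    data = {"name": f"Test User {n}", "email": f"user{n}@example.com"}
    data.update(overrides)
    return User(**data)
```

A test that needs an admin writes `make_user(role="admin")` and nothing else; every other field is valid and unique by construction, which is what keeps parallel shards from colliding on shared data.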

Format 4: Take-Home Assignments

What to Include

  • Clear README with setup instructions: Shows you think about the reader, not just the code
  • Clean project structure: Shows you can organize a real framework, not just write scripts
  • Multiple test types: UI, API, and at least one non-functional test shows breadth
  • CI configuration: A working GitHub Actions file shows you think about the full lifecycle
  • Meaningful assertions: Assert behavior, not implementation details
  • Error handling: Tests that produce clear failure messages, not stack traces
  • Code comments (sparingly): Explain why, not what -- comment on architectural decisions, not obvious code

Common Take-Home Mistakes

  • No README: Reviewer cannot run your tests. Fix: write a README with setup, run, and architecture sections.
  • Tests depend on each other: One failure cascades and masks real issues. Fix: make each test independent and idempotent.
  • Hardcoded test data: Tests break on different environments. Fix: use environment variables or config files.
  • No negative tests: Shows only happy-path thinking. Fix: include at least 30% negative/edge-case tests.
  • Over-engineering: A take-home is a time-boxed exercise, not a production framework. Fix: build what is asked and add a "future improvements" section to the README.
  • No CI: Shows you do not think about automation end to end. Fix: add a simple GitHub Actions workflow, even a basic one.
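The hardcoded-data fix can be as small as a single config loader. The variable names and defaults below are illustrative assumptions; the point is that CI sets the environment variables while local runs fall back to defaults.

```python
# Sketch: resolve environment-specific settings at runtime instead of
# hardcoding them. Variable names and defaults are illustrative.
import os


def load_config() -> dict:
    return {
        "base_url": os.environ.get("TEST_BASE_URL", "http://localhost:8000"),
        "timeout_s": float(os.environ.get("TEST_TIMEOUT_SECONDS", "10")),
        "browser": os.environ.get("TEST_BROWSER", "chromium"),
    }
```

The same test code then runs unchanged against localhost, staging, or CI, which is exactly what a reviewer checks when they try to run your take-home on their own machine.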

Live Coding Tips

Before You Start Coding

  1. Restate the problem: "So I need to write tests for a REST API that manages a to-do list. I will cover CRUD operations, validation, and error handling. Does that match your expectations?"
  2. Ask clarifying questions: Authentication? Rate limiting? Pagination? The interviewer wants to see that you think before you code.
  3. Outline your approach: "I will start with the Page Object, then write the happy path test, then add edge cases. I will handle test data setup using the API directly."

While Coding

  • Think out loud: "I am using getByRole here instead of a CSS selector because it is more resilient to markup changes and aligns with how users interact with the page."
  • Start with the simplest test: Get something passing first, then add complexity.
  • Handle errors gracefully: If you hit a syntax error, do not panic. Debug it out loud: "I think the issue is with the async/await here -- let me check the return type."
  • Name things well: test('should show error when password is too short') is better than test('test3').

Time Management

  • First 5 minutes: Clarify requirements, outline your approach, set up the structure
  • Minutes 5-35: Write the core solution: Page Objects or API client, 3-4 tests
  • Minutes 35-50: Add edge cases, clean up code, strengthen assertions
  • Last 10 minutes: Run the tests (if applicable), explain your design decisions, discuss what you would add with more time

If You Get Stuck

  • Say it out loud: "I am stuck on how to handle the authentication token. Let me think through this..."
  • Simplify: "I know the ideal approach would be to use a fixture, but for time, I will inline the setup and note that I would refactor it."
  • Ask for a hint: This is not a sign of weakness. "Can you remind me of the API endpoint for creating test data?" is perfectly fine.

Cross-Chapter References for Technical Interviews

The technical interview draws on almost every chapter in this guide. Here are the most commonly tested areas:

  • Browser automation and Page Objects: Chapter 1 (Agent Skills), Chapter 13 (Selenium/WebDriver)
  • API testing and assertions: Chapter 4 (API Contract Testing), Chapter 14 (API Fundamentals)
  • CI/CD pipeline design: Chapter 16 (CI/CD Pipelines)
  • Test strategy and risk assessment: Chapter 22 (Test Strategy and Quality Metrics)
  • Performance testing approach: Chapter 5 (Performance and Chaos Engineering)
  • Security testing considerations: Chapter 7 (Security Testing for AI Apps)
  • Accessibility testing: Chapter 10 (Visual and Accessibility Testing)
  • Database and test data: Chapter 15 (SQL and Database Testing)
  • Programming fundamentals: Chapter 12 (Programming for QA)

Hands-On Exercise

  1. Build a small test automation project (Page Object + 5 tests) and time yourself -- aim for 45 minutes. This simulates a live coding session.
  2. Practice the "how would you test" framework on 3 different systems: one software feature, one physical object, and one API endpoint.
  3. Design a test automation framework on a whiteboard (or blank document) for a product you use daily. Explain each layer in 1 minute.
  4. Complete a take-home assignment for an open-source project: write tests for a public API (e.g., JSONPlaceholder, PokéAPI) with a full README and CI configuration.
  5. Record yourself doing a live coding exercise. Watch it back and identify moments where you went silent, got stuck, or could have explained your thinking better.