QA Engineer Skills 2026

# Prompt Templates for Test Generation

## Why Templates Matter

Writing a great prompt from scratch every time is wasteful. Prompt templates are reusable, parameterized prompts that encode your team's best practices for test generation. They turn prompt engineering from an art into a repeatable process.

This file contains four battle-tested templates that cover the most common test generation scenarios. Each template has placeholders marked with {curly_braces} that you fill in per feature.


## Template 1: Requirements-to-Tests

Use this when you have a user story with acceptance criteria and need a full test suite.

```
You are a senior QA engineer. Given the following user story and acceptance
criteria, generate a comprehensive test suite.

**User Story:**
{paste user story here}

**Acceptance Criteria:**
{paste AC here}

**Technical Context:**
- Frontend: {framework and version, e.g., React 18 with React Testing Library}
- API: {API style, e.g., REST endpoints at /api/v2/*}
- Auth: {auth mechanism, e.g., JWT tokens in Authorization header}

**Generate:**
1. Unit tests for the component logic (no API calls)
2. Integration tests for the API layer (mock HTTP)
3. E2E test scenarios (describe steps, no code)

**For each test, include:**
- Test name following the pattern: "should [expected behavior] when [condition]"
- Arrange/Act/Assert structure
- Explicit assertions (not just "expect something")

**Coverage requirements:**
- All acceptance criteria must have at least one test
- Include at least 2 negative/error cases per AC
- Include 1 boundary value test where numeric limits exist
```

### When to Use

- Sprint planning: generate initial test list from stories
- Story refinement: identify missing acceptance criteria by reviewing generated tests
- Onboarding: new team members use this to understand testing expectations

### Example Filled-In

```
You are a senior QA engineer. Given the following user story and acceptance
criteria, generate a comprehensive test suite.

**User Story:**
As a store manager, I want to set a maximum discount percentage per product
category so that sales staff cannot give discounts beyond the allowed limit.

**Acceptance Criteria:**
1. Admin can set max discount (0-100%) per category via the settings page
2. Attempting to apply a discount above the max shows an error toast
3. The max discount persists across browser sessions (stored server-side)
4. Categories with no max set allow any discount (backward compatible)

**Technical Context:**
- Frontend: React 18 with React Testing Library
- API: REST endpoints at /api/v2/settings/discount-limits
- Auth: JWT tokens in Authorization header, requires "admin" role

**Generate:**
1. Unit tests for the DiscountLimitForm component
2. Integration tests for the discount limit API endpoint
3. E2E test scenarios (describe steps, no code)

**For each test, include:**
- Test name following the pattern: "should [expected behavior] when [condition]"
- Arrange/Act/Assert structure
- Explicit assertions (not just "expect something")

**Coverage requirements:**
- All 4 acceptance criteria must have at least one test
- Include at least 2 negative/error cases per AC
- Include boundary value tests for 0%, 1%, 99%, 100%, 101%, and -1%
```
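For reference, here is a hand-written sketch of the kind of unit test this prompt should produce for AC 2 and AC 4. The `validate_discount` helper is hypothetical, standing in for the form's validation logic; actual LLM output will target your real component.

```python
def validate_discount(requested_pct, max_pct):
    """Hypothetical validator: returns (ok, error_message)."""
    if max_pct is None:           # AC 4: no max set -> any discount allowed
        return True, None
    if requested_pct > max_pct:   # AC 2: above the max -> error
        return False, f"Discount exceeds category maximum of {max_pct}%"
    return True, None


# Tests follow the "should [expected behavior] when [condition]" pattern:
def test_should_allow_discount_when_equal_to_max():
    ok, err = validate_discount(100, 100)   # boundary: exactly at the limit
    assert ok is True
    assert err is None


def test_should_reject_discount_when_above_max():
    ok, err = validate_discount(101, 100)   # boundary: one above the limit
    assert ok is False
    assert "100" in err


def test_should_allow_any_discount_when_no_max_set():
    ok, err = validate_discount(150, None)  # backward-compatible path
    assert ok is True
```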

## Template 2: API Schema to Tests

Use this when you have an OpenAPI/Swagger schema and need endpoint tests.

````
Analyze the following OpenAPI 3.0 schema and generate a test suite for the
{endpoint_name} endpoint.

**Schema:**
```json
{paste OpenAPI schema excerpt}
```

Generate tests for:

1. Every HTTP method defined on this endpoint
2. Every required field -- test with missing, null, empty string, wrong type
3. Every optional field -- test with present and absent
4. Every enum value -- test each valid value plus one invalid
5. Response codes -- test conditions that trigger each documented status code
6. Auth -- test with valid token, expired token, missing token, wrong role

Framework: {test framework, e.g., pytest + httpx}
Base URL variable: {env var name, e.g., BASE_URL (from environment)}
Auth helper: {auth fixture, e.g., use get_test_token(role="user"|"admin") fixture}

Output as a single {language} file with clear test class grouping.
````


### Pro Tips for This Template

1. **Paste only the relevant endpoint**, not the entire spec. If your OpenAPI file is 5000 lines, extract the 50-100 lines for the endpoint under test.

2. **Include the components/schemas section** for any `$ref` references. The LLM cannot resolve `$ref: '#/components/schemas/Product'` if you do not include the Product schema.

3. **Specify your auth fixture exactly.** If you have a helper function like `get_test_token(role="admin")`, name it in the prompt. Otherwise the LLM invents its own auth mechanism.

4. **Request parametrized tests** for enum fields to keep the output DRY:

   ```
   For enum fields, use @pytest.mark.parametrize to test all valid values in a
   single test function, plus a separate test for one invalid value.
   ```
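A sketch of what that parametrized output should look like, assuming pytest. The `status` field, its values, and the `validate_status` stand-in are illustrative, not from a real schema:

```python
import pytest

# Hypothetical enum field from an imagined schema excerpt.
VALID_STATUSES = ["draft", "published", "archived"]


def validate_status(value):
    """Stand-in for the endpoint's enum validation logic."""
    if value not in VALID_STATUSES:
        raise ValueError(f"invalid status: {value}")
    return value


@pytest.mark.parametrize("status", VALID_STATUSES)
def test_accepts_each_valid_status(status):
    # One test function covers every valid enum value.
    assert validate_status(status) == status


def test_rejects_invalid_status():
    # One separate test for a single invalid value.
    with pytest.raises(ValueError):
        validate_status("deleted")
```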


---

## Template 3: Test-Driven Prompting

This template flips the script: you describe the desired behavior, and the LLM generates both the tests and the implementation. Useful for TDD workflows and interview demonstrations.

```
I need a function called {function_name} with this behavior:

- Accepts: {input signature}
- Returns: {output signature}
- Rules: {list all business rules, one per line}

Generate:

1. FIRST: A complete test file covering every rule, including boundary values
   ({list specific boundary values}) and every {category, e.g., destination category}.
2. THEN: The implementation that makes all tests pass.
```

### Complete Example

```
I need a function called calculate_shipping_cost with this behavior:

- Accepts: { weight_kg: number, destination_country: string, is_express: boolean }
- Returns: { cost_usd: number, estimated_days: number }
- Rules:
  - Base rate: $5 + $2 per kg
  - Express multiplier: 1.5x cost, halves estimated days (round up)
  - Domestic (US): 5 days standard
  - Canada/Mexico: 7 days standard
  - Europe: 10 days standard
  - All other: 14 days standard
  - Max weight: 30kg. Over 30kg raises ValueError.
  - Negative weight raises ValueError.
  - Zero weight is valid (just the base rate).

Generate:

1. FIRST: A complete test file covering every rule, including boundary values
   (0kg, 30kg, 30.01kg, -1kg) and every destination category.
2. THEN: The implementation that makes all tests pass.
```
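For comparison, here is a hand-written reference solution to this example, with two of the boundary tests the prompt asks for. `EUROPEAN_COUNTRIES` is an assumed lookup set for illustration; a real implementation would use a proper region mapping:

```python
import math

# Assumed lookup set; in production this would come from a region mapping.
EUROPEAN_COUNTRIES = {"Germany", "France", "Spain", "Italy", "Poland"}


def calculate_shipping_cost(weight_kg, destination_country, is_express):
    if weight_kg < 0:
        raise ValueError("weight_kg must be non-negative")
    if weight_kg > 30:
        raise ValueError("weight_kg exceeds the 30kg maximum")
    cost_usd = 5 + 2 * weight_kg                      # base rate: $5 + $2/kg
    if destination_country == "US":
        estimated_days = 5
    elif destination_country in ("Canada", "Mexico"):
        estimated_days = 7
    elif destination_country in EUROPEAN_COUNTRIES:
        estimated_days = 10
    else:
        estimated_days = 14
    if is_express:
        cost_usd *= 1.5                               # express multiplier
        estimated_days = math.ceil(estimated_days / 2)  # halve, round up
    return {"cost_usd": cost_usd, "estimated_days": estimated_days}


# Two of the boundary tests the prompt asks the LLM to write FIRST:
def test_should_charge_base_rate_when_weight_is_zero():
    assert calculate_shipping_cost(0, "US", False) == {
        "cost_usd": 5, "estimated_days": 5}


def test_should_round_up_days_when_express_halves_odd_day_count():
    result = calculate_shipping_cost(1, "US", True)
    assert result["estimated_days"] == 3   # ceil(5 / 2)
```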

### Why This Pattern Is Powerful

This pattern is powerful in interviews because it demonstrates that you think **specification-first**, not code-first. It also produces a natural test-implementation pair that is self-documenting. The tests serve as the executable specification for the function.

In practice, this pattern works best for:
- Pure functions with clear input/output contracts
- Utility functions shared across the codebase
- Business logic with well-defined rules
- Data transformation and validation functions

---

## Template 4: Mutation-Aware Test Generation

This advanced template produces tests with higher bug-detection capability by incorporating mutation testing concepts directly into the prompt.

```
For the following {language} function, generate tests that would survive
mutation testing. For each test:

1. State the specific line of code or condition you are targeting
2. Explain what mutation (change) would break this test
3. Write the test with a tight assertion that catches the mutation

Function under test:

{paste function code}

Requirements:

- Each test must target a specific branch, condition, or calculation
- Assertions must be precise enough that changing any operator (+, -, *, /),
  comparison (>, <, >=, <=, ==, !=), or constant would cause the test to fail
- Include boundary values for every conditional
- Do NOT write tests that only verify "no exception is thrown"
```

Example of a LOW mutation-score test (avoid this):

```python
def test_create_user():
    response = client.post("/users", json={"name": "Alice"})
    assert response.status_code == 200
    # If the name is not saved, this test still passes. Low mutation score.
```

Example of a HIGH mutation-score test (aim for this):

```python
def test_create_user():
    response = client.post("/users", json={"name": "Alice"})
    assert response.status_code == 200
    user = response.json()
    assert user["name"] == "Alice"   # Catches missing name persistence
    assert "id" in user              # Catches missing ID generation
```
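The same principle applies to comparison operators at boundaries. A minimal sketch with a hypothetical `is_adult` function, showing how testing both sides of a boundary kills operator mutants:

```python
def is_adult(age):
    # Hypothetical function under test; the boundary is age >= 18.
    return age >= 18


def test_should_target_the_age_comparison_boundary():
    # Mutating >= to >  would make is_adult(18) return False.
    # Mutating >= to <= would make is_adult(17) return True.
    # Asserting both sides of the boundary catches either mutant.
    assert is_adult(18) is True
    assert is_adult(17) is False
```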

### When to Use This Template

- When you are retrofitting tests onto legacy code and need high confidence
- When your team tracks mutation testing scores
- When you want to demonstrate testing depth in interviews
- When a critical function (payments, auth, data migration) needs airtight coverage

---

## Storing and Versioning Templates

Keep your prompt templates in your repository alongside the test code:

```
tests/
  prompts/
    requirements-to-tests.prompt.md
    api-schema-to-tests.prompt.md
    tdd-prompt.prompt.md
    mutation-aware.prompt.md
  test_users.py
  test_orders.py
```


Version them in git. When you refine a template based on review feedback, commit the change with a message explaining what improved. This creates institutional knowledge about effective test generation.
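If you store templates this way, filling the `{placeholders}` can be a small helper. A minimal sketch (the helper name and paths are illustrative; plain `str.replace` is used instead of `str.format` so literal braces in JSON or code snippets inside the template survive untouched):

```python
from pathlib import Path


def fill_template(path, **params):
    """Substitute {name} placeholders in a stored .prompt.md template.

    Only the named placeholders are replaced; any other braces in the
    template (JSON excerpts, code) pass through unchanged.
    """
    text = Path(path).read_text()
    for name, value in params.items():
        text = text.replace("{" + name + "}", str(value))
    return text


# Illustrative usage:
# prompt = fill_template("tests/prompts/api-schema-to-tests.prompt.md",
#                        endpoint_name="/api/v2/settings/discount-limits")
```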

---

## Key Takeaway

Templates turn prompt engineering from a creative exercise into a repeatable engineering practice. Start with these four, customize them for your stack, and iterate based on the quality of generated tests. The best template is the one your entire team uses consistently.