Boundary-Value Analysis and Equivalence Partitioning via AI
Why Classical Techniques Still Matter
Boundary-value analysis (BVA) and equivalence partitioning (EP) are among the oldest formal test design techniques, dating back to the 1970s. They remain foundational because they address a universal truth: bugs cluster at boundaries. Off-by-one errors, inclusive-vs-exclusive range mistakes, and type coercion edge cases account for a disproportionate share of production defects.
What has changed is the speed at which you can apply these techniques. Manually identifying all partitions and boundary values for a complex API with 15 input fields takes hours. With AI, it takes minutes -- and the AI is less likely to forget an edge case.
Boundary-Value Analysis with AI
The Prompt Pattern
Describe the input constraints and let the LLM enumerate every boundary:
Given the following input field constraints, generate boundary value test cases:
Field: age (integer)
Valid range: 18-65 (inclusive)
Required: yes
Generate test values for:
- Just below minimum (17)
- At minimum (18)
- Just above minimum (19)
- Nominal value (40)
- Just below maximum (64)
- At maximum (65)
- Just above maximum (66)
- Zero
- Negative (-1)
- Null/undefined
- Non-numeric ("abc")
- Float (18.5)
- Very large number (999999999)
- Empty string
For each, state: input value, expected result (accept/reject), and why.
Expected Output
A well-trained LLM returns a structured table:
| Input | Expected | Rationale |
|---|---|---|
| 17 | Reject | Below minimum boundary |
| 18 | Accept | Minimum boundary (inclusive) |
| 19 | Accept | Just above minimum |
| 40 | Accept | Nominal/mid-range |
| 64 | Accept | Just below maximum |
| 65 | Accept | Maximum boundary (inclusive) |
| 66 | Reject | Above maximum boundary |
| 0 | Reject | Below minimum |
| -1 | Reject | Negative number |
| null | Reject | Required field |
| "abc" | Reject | Type mismatch |
| 18.5 | Reject | Not an integer |
| 999999999 | Reject | Above maximum |
| "" | Reject | Empty/required field |
Converting BVA Tables to Code
The table above translates directly to parametrized tests:
```python
import pytest


class TestAgeValidation:
    """Boundary value tests for the age field."""

    @pytest.mark.parametrize("age,should_accept", [
        (17, False),  # Just below minimum
        (18, True),   # Minimum boundary (inclusive)
        (19, True),   # Just above minimum
        (40, True),   # Nominal/mid-range
        (64, True),   # Just below maximum
        (65, True),   # Maximum boundary (inclusive)
        (66, False),  # Just above maximum
        (0, False),   # Zero
        (-1, False),  # Negative
    ])
    def test_age_integer_boundaries(self, client, age, should_accept):
        response = client.post("/api/users", json={"age": age, "name": "Test"})
        if should_accept:
            assert response.status_code == 201, f"age={age} should be accepted"
        else:
            assert response.status_code == 400, f"age={age} should be rejected"

    @pytest.mark.parametrize("age,description", [
        (None, "null value"),
        ("abc", "non-numeric string"),
        (18.5, "float instead of integer"),
        ("", "empty string"),
        (999999999, "extremely large number"),
    ])
    def test_age_type_boundaries(self, client, age, description):
        response = client.post("/api/users", json={"age": age, "name": "Test"})
        assert response.status_code == 400, f"age={age} ({description}) should be rejected"
```
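Rather than hand-copying the table into test code, you can also parse the LLM's markdown rows into parametrize tuples, keeping the table as the single source of truth. A minimal sketch, assuming the Accept/Reject column convention from the table above (`parse_bva_table` and the sample rows are illustrative, not a fixed API):

```python
# Sketch: parse LLM-generated markdown table rows into (input, should_accept,
# rationale) tuples. Assumes the three-column convention used above.
import json

BVA_TABLE = """
17 | Reject | Below minimum boundary
18 | Accept | Minimum boundary (inclusive)
66 | Reject | Above maximum boundary
null | Reject | Required field
"abc" | Reject | Type mismatch
"""


def parse_bva_table(table: str):
    cases = []
    for line in table.strip().splitlines():
        value, verdict, rationale = (cell.strip() for cell in line.split("|"))
        # json.loads handles integers, floats, quoted strings, and null
        cases.append((json.loads(value), verdict == "Accept", rationale))
    return cases


# Each tuple feeds @pytest.mark.parametrize directly.
print(parse_bva_table(BVA_TABLE)[0])  # (17, False, 'Below minimum boundary')
```

This keeps regeneration cheap: when the constraints change, re-prompt the LLM, paste the new table, and the parametrized cases update with it.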
Multi-Field BVA
Real APIs have multiple constrained fields. Ask the AI to generate BVA for all of them at once:
Generate boundary value test cases for the following fields in the
POST /api/v2/products endpoint:
Fields:
- name: string, required, minLength=1, maxLength=200
- price: number, required, minimum=0 (exclusive: price > 0)
- quantity: integer, required, minimum=0 (inclusive), maximum=10000
- weight_kg: number, optional, minimum=0.01, maximum=500.0
For each field, test:
- At and around each boundary (min-1, min, min+1, max-1, max, max+1)
- Type mismatches
- Null/missing
- Empty string (for string fields)
- Precision limits (for number fields: 0.001, 0.009, etc.)
Equivalence Partitioning via AI
The Concept
Equivalence partitioning divides the input space into classes where all values in a class are expected to behave the same way. You test one representative value per class instead of every possible value.
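The idea fits in a few lines: model each class as one representative value plus the verdict every member of the class should share. A miniature illustration using a stand-in validator (not a real endpoint):

```python
# Equivalence partitioning in miniature for quantity (integer, 1-100):
# every value inside a class behaves the same, so one representative
# per class is enough.
def is_valid_quantity(q) -> bool:
    return isinstance(q, int) and 1 <= q <= 100


PARTITIONS = {
    "valid mid-range": (50, True),
    "invalid below": (-5, False),
    "invalid above": (101, False),
    "invalid type": ("ten", False),
}

for name, (representative, expected) in PARTITIONS.items():
    assert is_valid_quantity(representative) == expected, name
```

Four checks stand in for the entire integer line; testing 51 instead of 50 would add no new information.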
The Prompt Pattern
For the following API endpoint, identify equivalence classes for each input
parameter and generate one representative test per class:
POST /api/orders
{
"product_id": "string (UUID format)",
"quantity": "integer (1-100)",
"shipping_method": "enum: standard | express | overnight",
"coupon_code": "string (optional, alphanumeric, 8 chars)"
}
For each parameter, identify:
1. Valid equivalence classes
2. Invalid equivalence classes
3. One representative value per class
Expected Output
The LLM should produce partitions like:
product_id:
- Valid class 1: existing UUID ("550e8400-e29b-41d4-a716-446655440000")
- Invalid class 1: non-existent UUID ("00000000-0000-0000-0000-000000000000")
- Invalid class 2: not a UUID ("not-a-uuid")
- Invalid class 3: empty string ("")
- Invalid class 4: null
quantity:
- Valid class 1: minimum (1)
- Valid class 2: mid-range (50)
- Valid class 3: maximum (100)
- Invalid class 1: zero (0)
- Invalid class 2: negative (-5)
- Invalid class 3: above max (101)
- Invalid class 4: non-integer (2.5)
- Invalid class 5: non-numeric ("ten")
shipping_method:
- Valid class 1: "standard"
- Valid class 2: "express"
- Valid class 3: "overnight"
- Invalid class 1: non-enum value ("drone")
- Invalid class 2: empty string ("")
- Invalid class 3: case variant ("Standard")
coupon_code:
- Valid class 1: valid code ("SAVE20AB")
- Valid class 2: absent/omitted (optional field)
- Invalid class 1: too short ("SAVE20")
- Invalid class 2: too long ("SAVE20ABC")
- Invalid class 3: non-alphanumeric ("SAVE-20!")
- Invalid class 4: lowercase ("save20ab")
Converting EP to Parametrized Tests
```python
import pytest


class TestOrderCreationPartitions:
    """Equivalence partition tests for POST /api/orders."""

    @pytest.mark.parametrize("quantity,expected_status", [
        (1, 201),    # Valid: minimum
        (50, 201),   # Valid: mid-range
        (100, 201),  # Valid: maximum
        (0, 400),    # Invalid: zero
        (-5, 400),   # Invalid: negative
        (101, 400),  # Invalid: above max
    ])
    def test_quantity_partitions(self, client, auth, valid_product, quantity, expected_status):
        response = client.post("/api/orders", json={
            "product_id": str(valid_product.id),
            "quantity": quantity,
            "shipping_method": "standard",
        }, headers=auth)
        assert response.status_code == expected_status

    @pytest.mark.parametrize("method,expected_status", [
        ("standard", 201),
        ("express", 201),
        ("overnight", 201),
        ("drone", 400),
        ("", 400),
        ("Standard", 400),  # Case sensitivity check
    ])
    def test_shipping_method_partitions(self, client, auth, valid_product, method, expected_status):
        response = client.post("/api/orders", json={
            "product_id": str(valid_product.id),
            "quantity": 1,
            "shipping_method": method,
        }, headers=auth)
        assert response.status_code == expected_status
```
Combining BVA and EP: The Power Move
The most thorough approach combines both techniques. Use EP to identify the classes, then use BVA to test the boundaries of each class.
For the "quantity" field (integer, range 1-100):
Equivalence classes:
- Valid: [1, 100]
- Invalid below: (-inf, 0]
- Invalid above: [101, +inf)

Boundary values:
- -1 (invalid below, near zero)
- 0 (invalid below, at boundary)
- 1 (valid, lower boundary)
- 2 (valid, just above lower boundary)
- 50 (valid, nominal)
- 99 (valid, just below upper boundary)
- 100 (valid, upper boundary)
- 101 (invalid above, at boundary)
This combination gives you exactly 8 test values that cover the entire input space with high confidence. Without AI, identifying and coding all 8 takes 15-20 minutes per field. With AI, it takes under a minute.
Practical Tips for Interview Discussions
When discussing BVA/EP with interviewers:
Know the vocabulary. "Boundary value," "equivalence class," "partition" -- these are terms that signal formal training.
Explain the economics. "Full exhaustive testing of a 5-field form with 10 values each requires 100,000 combinations. EP reduces that to about 30-50 tests with the same defect detection rate for category-level bugs."
Show the AI advantage. "I use AI to enumerate partitions and boundaries from the OpenAPI schema, then convert the table to parametrized tests. This turns a half-day manual analysis into a 15-minute automated process."
Acknowledge the limits. "BVA and EP are excellent for input validation but do not catch business logic bugs, race conditions, or integration failures. They are one layer in a multi-layer test strategy."
Key Takeaway
BVA and EP are classical techniques made dramatically faster with AI. Instead of manually identifying partitions, describe the input domain and let the LLM enumerate them. Then convert the output to parametrized tests for maximum coverage with minimal test count.