Assertions and Flaky Tests
Two problems define the early automation experience: assertions that pass when they should fail (or fail without telling you why), and tests that pass sometimes and fail other times for no apparent reason. Both problems share a root cause — tests designed around the happy path rather than designed for diagnostic clarity and deterministic execution.
Assertions That Waste Time
Anti-Pattern: Generic assertions like `expect(result).toBe(true)` or `expect(status).toBe(200)`. When they fail, the error message is "expected true, received false" — telling you nothing about what went wrong.
Pattern: Diagnostic-first assertions that tell you why they failed, not just that they failed.
Example of the anti-pattern:

```js
// Fails with: "Expected true, received false"
expect(response.success).toBe(true);
```
Example of the pattern:

```js
// Fails with: "Expected status 200, received 503. Body: { error: 'Database connection timeout' }"
expect(response.status, `Body: ${JSON.stringify(response.body)}`).toBe(200);
```
Soft assertions — Collect multiple failures in a single test run instead of stopping at the first. Useful for form validation, where you want to know all the fields that failed, not just the first one.
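A minimal sketch of the idea, using a hand-rolled `SoftAssert` collector (a hypothetical helper; frameworks such as Playwright ship their own soft-assertion support):

```javascript
// Minimal soft-assertion collector (hypothetical helper, not a library API).
// Instead of throwing on the first failure, it records every failure and
// reports them all at the end of the test.
class SoftAssert {
  constructor() {
    this.failures = [];
  }

  // Record a failure instead of throwing immediately.
  check(condition, message) {
    if (!condition) this.failures.push(message);
  }

  // Throw once, listing every collected failure.
  assertAll() {
    if (this.failures.length > 0) {
      throw new Error(
        `${this.failures.length} assertion(s) failed:\n- ${this.failures.join("\n- ")}`
      );
    }
  }
}

// Usage: validate a form submission and report all invalid fields at once.
const form = { email: "not-an-email", age: -3, name: "" };
const soft = new SoftAssert();
soft.check(form.email.includes("@"), "email must contain '@'");
soft.check(form.age >= 0, "age must be non-negative");
soft.check(form.name.length > 0, "name must not be empty");

let report = "";
try {
  soft.assertAll();
} catch (e) {
  report = e.message; // lists all three failures, not just the first
}
console.log(report);
```

With a hard assertion, the run would stop at the email check and you would fix the three fields one failed run at a time.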
Custom assertion helpers — Domain-specific assertions like expectValidOrder(order) that check multiple conditions and produce readable error messages. They embed your team's quality knowledge into reusable code.
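A sketch of what such a helper might look like; the order shape (`id`, `items` with `price` and `qty`, `total`) is assumed here purely for illustration:

```javascript
// Domain-specific assertion helper (sketch). Checks several conditions
// and reports every violation in one readable error message.
function expectValidOrder(order) {
  const problems = [];
  if (!order.id) problems.push("order.id is missing");
  if (!Array.isArray(order.items) || order.items.length === 0) {
    problems.push("order.items must be a non-empty array");
  }
  const computedTotal = (order.items ?? []).reduce(
    (sum, item) => sum + item.price * item.qty,
    0
  );
  if (order.total !== computedTotal) {
    problems.push(`order.total is ${order.total}, but items sum to ${computedTotal}`);
  }
  if (problems.length > 0) {
    throw new Error(`Invalid order ${JSON.stringify(order)}:\n- ${problems.join("\n- ")}`);
  }
}

// A valid order passes silently...
expectValidOrder({ id: "A-1", items: [{ price: 10, qty: 2 }], total: 20 });

// ...while a broken one produces a specific, readable message.
let message = "";
try {
  expectValidOrder({ id: "A-2", items: [{ price: 10, qty: 2 }], total: 25 });
} catch (e) {
  message = e.message;
}
console.log(message);
```

The helper is ordinary code, so the team's definition of a "valid order" lives in one place instead of being re-derived in every test.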
Flaky Tests and Deterministic Design
A flaky test is one that sometimes passes and sometimes fails with no code change. Flaky tests destroy trust in the test suite — when a failure might be "just flakiness," developers stop investigating failures.
Root Causes of Flakiness
| Source | Example | Fix |
|---|---|---|
| Timing / race conditions | Asserting before async operation completes | Use event-driven waits, not sleep() |
| Shared state | Test A creates data that Test B depends on | Isolate test data per test |
| External dependencies | Test calls a live third-party API that is slow or down | Mock external services |
| Order dependency | Tests pass in sequence but fail when shuffled | Each test must set up its own preconditions |
| Non-deterministic data | Asserting on timestamps or random IDs | Assert on stable properties, use patterns/ranges |
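The "event-driven waits, not sleep()" fix in the first row can be sketched as a small polling helper (`waitFor` is a hypothetical name; mature test frameworks provide built-in equivalents):

```javascript
// Event-driven wait (sketch): poll a condition until it becomes true,
// failing with a descriptive timeout instead of sleeping a fixed duration
// and hoping the async operation finished in time.
async function waitFor(condition, { timeoutMs = 2000, intervalMs = 20, description = "condition" } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await condition()) return;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Timed out after ${timeoutMs}ms waiting for ${description}`);
}

// Usage: wait for an async operation to complete before asserting.
let orderStatus = "pending";
setTimeout(() => { orderStatus = "shipped"; }, 50); // simulated async work

(async () => {
  await waitFor(() => orderStatus === "shipped", { description: "order to ship" });
  console.log(orderStatus); // safe to assert now
})();
```

A fixed `sleep(100)` here would pass on a fast machine and fail on a loaded CI runner; the polling helper waits exactly as long as needed and produces a clear error when the condition never holds.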
Anti-Pattern: Adding retries to make flaky tests pass. The test still has a bug — you have just hidden it behind retries.
Pattern: Design tests for determinism from the start. When flakiness appears, investigate and fix the root cause. Track flaky test rates on a dashboard — a rising flaky rate is an early warning that test architecture needs attention.
Key Takeaways
- Write assertions that diagnose failures, not just detect them — include context in error messages
- Use soft assertions when you need to check multiple conditions in one test
- Build custom assertion helpers that encode domain knowledge
- Flaky tests have root causes: timing, shared state, external dependencies, order dependency, non-deterministic data
- Never use `sleep()` as a synchronization strategy — use event-driven waits
- Track flaky test rates and treat rising flakiness as an infrastructure problem, not individual test problems