Verification vs Validation
These two terms are frequently confused, even by experienced engineers. Understanding the difference is fundamental to QA thinking — it shapes how you approach testing, what questions you ask, and what kinds of defects you look for.
The Core Distinction
| Aspect | Verification | Validation |
|---|---|---|
| Question answered | "Are we building the product right?" | "Are we building the right product?" |
| Focus | Conformance to specification | Conformance to user needs |
| Timing | During development (reviews, inspections, unit tests) | After build (UAT, beta testing, usability testing) |
| Approach | Static and dynamic analysis against specs | User testing, acceptance testing, real-world usage |
| Who performs | Developers, QA engineers, peer reviewers | End users, product owners, beta testers |
The Classic Example
A development team builds a login system. They follow the specification exactly:
- Email field accepts valid email format
- Password must be at least 8 characters
- Successful login redirects to /dashboard
- Failed login shows an error message
Verification confirms: Yes, the code implements the spec correctly. All unit tests pass. Code review found no issues. The login page behaves exactly as specified.
Validation reveals: Users expected to log in with their phone number, not email. The spec never mentioned phone-based login because the product owner assumed email was obvious. The product passes verification but fails validation.
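The verification side of this example can be sketched as a handful of unit checks against the spec's rules. Everything below is illustrative: the validator function, the deliberately simplistic email regex, and the error strings are assumptions for the sketch, not the team's actual code.

```python
import re

# Hypothetical validator implementing the spec above; names, regex, and
# error strings are illustrative, not real production code.
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_login_form(email: str, password: str) -> list[str]:
    """Return a list of spec violations; an empty list means the form is valid."""
    errors = []
    if not EMAIL_PATTERN.match(email):
        errors.append("invalid email format")
    if len(password) < 8:
        errors.append("password must be at least 8 characters")
    return errors

# Verification: each assertion mirrors one line of the specification.
assert validate_login_form("user@example.com", "s3cretpw") == []
assert validate_login_form("no-at-sign", "s3cretpw") == ["invalid email format"]
assert validate_login_form("user@example.com", "short") == [
    "password must be at least 8 characters"
]
```

All of these assertions can pass while the validation gap (no phone-number login) goes undetected, because no test encodes a requirement the spec never stated.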
Verification Activities
Verification checks whether the implementation matches the specification. It is primarily an internal activity.
Static Verification (no code execution)
| Activity | What It Catches |
|---|---|
| Code review | Logic errors, missed edge cases, style violations |
| Requirement review | Ambiguity, contradictions, missing acceptance criteria |
| Design review | Architecture issues, scalability concerns, security gaps |
| Spec walkthrough | Misunderstandings between product and engineering |
| Static analysis tools | Code smells, potential null references, unused variables |
Dynamic Verification (code execution)
| Activity | What It Catches |
|---|---|
| Unit tests | Individual function/method correctness |
| Integration tests | Component interaction issues |
| Regression tests | Previously fixed bugs reintroduced |
| Contract tests | API response structure matches the documented schema |
| Linting and type checking | Type mismatches, undefined variables |
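As one concrete illustration, a contract test can be sketched without any framework at all: compare a response against the documented field types. The schema, the stubbed response, and the helper below are hypothetical and hand-rolled for brevity; a real suite might use a schema-validation library instead.

```python
# Documented schema for a hypothetical GET /users/{id} response.
EXPECTED_SCHEMA = {"id": int, "email": str, "created_at": str}

def check_contract(response: dict, schema: dict) -> list[str]:
    """Return a list of contract violations; empty means the response conforms."""
    violations = []
    for field, expected_type in schema.items():
        if field not in response:
            violations.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            violations.append(f"{field}: expected {expected_type.__name__}")
    return violations

# Stubbed response standing in for a real API call.
stub = {"id": 42, "email": "user@example.com", "created_at": "2024-01-15T09:30:00Z"}
assert check_contract(stub, EXPECTED_SCHEMA) == []

# A response with the wrong type and missing fields fails the contract.
assert check_contract({"id": "42"}, EXPECTED_SCHEMA) == [
    "id: expected int",
    "missing field: email",
    "missing field: created_at",
]
```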
Verification in Practice
When you write a test case that says "Given a user with valid credentials, When they submit the login form, Then they are redirected to /dashboard" — you are performing verification. You are checking that the software does what the spec says it should do.
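That Given/When/Then case might be automated roughly as follows. The `FakeLoginClient` is a stand-in invented for this sketch; a real test would drive the actual application through its UI or API.

```python
# Hypothetical in-memory client standing in for the real app under test.
class FakeLoginClient:
    VALID = {"user@example.com": "s3cretpw"}

    def submit_login(self, email: str, password: str) -> str:
        """Return the redirect target, mimicking the specified behaviour."""
        if self.VALID.get(email) == password:
            return "/dashboard"
        return "/login?error=1"

def test_valid_credentials_redirect_to_dashboard():
    # Given a user with valid credentials
    client = FakeLoginClient()
    # When they submit the login form
    destination = client.submit_login("user@example.com", "s3cretpw")
    # Then they are redirected to /dashboard
    assert destination == "/dashboard"

test_valid_credentials_redirect_to_dashboard()
```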
Validation Activities
Validation checks whether the product meets actual user needs. It goes beyond the specification to ask "does this solve the user's problem?"
Common Validation Activities
| Activity | What It Reveals |
|---|---|
| User Acceptance Testing (UAT) | Does the feature meet business requirements from the user's perspective? |
| Beta testing | How do real users interact with the feature in the wild? |
| Usability testing | Can users accomplish their goals efficiently and without confusion? |
| A/B testing | Which implementation better achieves the business objective? |
| Customer feedback | Post-release validation of whether the feature solves real problems |
| Analytics review | Are users actually using the feature? Where do they drop off? |
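An analytics review can be as simple as computing step-to-step drop-off through a funnel. The event names and counts below are invented for illustration.

```python
# Illustrative funnel counts from an analytics tool (names and numbers invented).
funnel = [
    ("viewed_cart", 10_000),
    ("started_checkout", 6_200),
    ("entered_address", 3_100),
    ("completed_purchase", 2_800),
]

# Validation signal: where do users abandon the flow?
drop_offs = {}
for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    drop_offs[f"{step} -> {next_step}"] = round(1 - next_count / count, 2)

worst_step = max(drop_offs, key=drop_offs.get)  # the biggest leak in the funnel
```

Every step here "works" in verification terms, yet losing half the users between starting checkout and entering an address is a strong validation red flag.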
Validation in Practice
A QA engineer involved in validation asks different questions than one focused on verification:
- Verification question: "Does the checkout flow handle international addresses correctly?"
  Validation question: "Is the checkout flow simple enough that users complete purchases without abandoning?"
- Verification question: "Does the search return results matching the query?"
  Validation question: "Does the search return results that users actually find useful?"
Why Both Matter
A product can pass all verification checks and still fail validation. This happens frequently:
Scenario 1: Feature Nobody Uses
The team builds a complex reporting dashboard with 50 charts. Every chart renders correctly (verification passes). But users only need 3 of those charts, and the interface is so cluttered they cannot find them (validation fails).
Scenario 2: Correct but Confusing
The error message "Authentication failed: INVALID_CREDENTIALS_401" is technically accurate (verification passes). But users do not understand what it means or what to do next (validation fails). "Incorrect email or password. Try again or reset your password." would pass both.
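One common fix is to keep internal codes for logs while mapping them to user-facing copy at the boundary. A minimal sketch, with made-up codes and messages:

```python
# Illustrative mapping from internal error codes to user-facing copy.
USER_MESSAGES = {
    "INVALID_CREDENTIALS_401": (
        "Incorrect email or password. Try again or reset your password."
    ),
    "ACCOUNT_LOCKED_423": (
        "Your account is temporarily locked. Please try again later."
    ),
}

def user_facing_error(code: str) -> str:
    # Fall back to a generic message rather than leaking internal codes.
    return USER_MESSAGES.get(code, "Something went wrong. Please try again.")

assert user_facing_error("INVALID_CREDENTIALS_401").startswith("Incorrect email")
assert user_facing_error("DB_TIMEOUT_500") == "Something went wrong. Please try again."
```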
Scenario 3: Spec Was Wrong
The specification says "display prices in USD." The implementation correctly shows USD prices (verification passes). But the product is launching in Europe, and users expect EUR (validation fails — the spec itself was wrong).
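A sketch of the corrected behaviour: resolve the display currency from the user's locale rather than hard-coding USD. The mappings below are placeholders, and real localisation (which also changes number formatting, e.g. "19,99 €" in German) would use a library such as Babel.

```python
# Illustrative locale-to-currency resolution; mappings are placeholders.
LOCALE_CURRENCY = {"en_US": "USD", "de_DE": "EUR", "fr_FR": "EUR", "en_GB": "GBP"}
SYMBOL = {"USD": "$", "EUR": "€", "GBP": "£"}

def format_price(amount: float, locale: str) -> str:
    # Fall back to USD for unmapped locales; simplified symbol-first format.
    currency = LOCALE_CURRENCY.get(locale, "USD")
    return f"{SYMBOL[currency]}{amount:,.2f}"

assert format_price(19.99, "de_DE") == "€19.99"
assert format_price(19.99, "en_US") == "$19.99"
```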
QA's Role in Both
Verification Role
- Write and execute test cases against specifications
- Report defects when behavior deviates from specs
- Maintain regression test suites
- Participate in code and design reviews
Validation Role
- Participate in UAT sessions with real users
- Provide usability feedback based on testing experience
- Question requirements that seem confusing or incomplete
- Report "this works correctly but seems wrong" observations
- Advocate for the end user when the spec is ambiguous
The Shift-Left Approach
Modern QA engineers participate in validation earlier in the process:
- During requirement refinement: "This requirement says X, but users might expect Y"
- During design review: "This flow has 7 steps — can we reduce it to 3?"
- During sprint planning: "Have we talked to users about whether they actually need this feature?"
Verification vs Validation in Different Methodologies
| Methodology | Verification Emphasis | Validation Emphasis |
|---|---|---|
| Waterfall | Heavy (testing phase against detailed specs) | Late (UAT at the end, expensive to change) |
| Agile | Continuous (automated tests, CI/CD) | Frequent (sprint demos, user feedback each iteration) |
| DevOps | Automated (pipeline gates, quality checks) | Continuous (feature flags, canary releases, monitoring) |
| Lean/Startup | Minimal (just enough to avoid breaking things) | Primary focus (validated learning, MVP testing) |
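The DevOps row's "feature flags, canary releases" can be sketched as a deterministic percentage rollout: hash each user into a stable bucket and enable the feature for a slice of traffic, then watch the metrics before widening the rollout. The flag name and API below are illustrative.

```python
import hashlib

# Sketch of a percentage-based feature flag used for canary-style validation.
def is_enabled(feature: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically bucket a user into [0, 100) and compare to the rollout."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

# The same user always lands in the same bucket, so their experience is stable
# across sessions while the team validates the feature on a fraction of traffic.
assert is_enabled("new-checkout", "user-123", 100) is True
assert is_enabled("new-checkout", "user-123", 0) is False
```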
Related Concepts
Alpha vs Beta Testing
- Alpha testing: Internal validation by the QA team or internal users, before external release
- Beta testing: External validation by real users in a production-like environment
V-Model
The V-Model explicitly maps verification and validation activities to development phases:
```
Requirements ────────────────────── Acceptance Testing (Validation)
   System Design ─────────────── System Testing (Verification)
      Architecture ──────────── Integration Testing (Verification)
         Coding ───────────── Unit Testing (Verification)
```
Each level on the left (development) has a corresponding test level on the right. Verification dominates the lower levels; validation dominates the top.
Practical Exercise
For the following features, write one verification test and one validation test:
- Email notifications: The system sends an email when an order is shipped
- Search autocomplete: The search bar suggests results as the user types
- User onboarding: A 5-step wizard guides new users through account setup
For each, consider: what could pass verification but fail validation?
Key Takeaways
- Verification: "Building the product right" — does it match the spec?
- Validation: "Building the right product" — does it meet user needs?
- A product can pass verification and fail validation
- QA engineers must participate in both — checking specs AND advocating for users
- Modern QA shifts validation left: question requirements early, do not wait until UAT
Interview Talking Point: "I treat manual testing as a thinking discipline, not a task to automate away. I use session-based exploratory testing with charters and time boxes to systematically find issues that scripted tests miss. When I file a bug, I include a minimal reproduction, environment details, and a clear severity/priority split — because a bug report that a developer can reproduce in under two minutes gets fixed in hours, not weeks. I also distinguish between verification and validation — I test against specs, but I also ask whether the spec itself serves the user."