WCAG 2.1 AA Compliance Automation
The Standard
The Web Content Accessibility Guidelines (WCAG) 2.1 at the AA conformance level is the standard most accessibility laws and regulations reference. It defines testable success criteria, organized under four principles (Perceivable, Operable, Understandable, Robust), for making web content accessible to people with disabilities.
Automating compliance checks catches roughly 30-40% of accessibility issues -- the rest require manual testing and judgment. That 30-40% includes the most common violations: missing alt text, insufficient color contrast, missing form labels, and broken heading hierarchy. Catching these automatically on every PR is the minimum viable accessibility practice.
What Can Be Automated vs What Cannot
| Can Be Automated | Requires Human Judgment |
|---|---|
| Missing alt text on images | Whether alt text is meaningful |
| Color contrast ratios | Whether color conveys meaning |
| Missing form labels | Whether label text is clear |
| Missing ARIA roles | Whether ARIA roles are correct |
| Keyboard focusability | Whether focus order is logical |
| Heading hierarchy (h1-h6) | Whether headings describe content |
| Lang attribute present | Whether language is correct for content |
| Link text exists | Whether "click here" is descriptive enough |
| Duplicate IDs | Whether ID references are semantically correct |
| Table headers present | Whether data tables are structured logically |
The 30/70 Split
Automated tools catch the "is it present?" questions. Humans (or AI agents) must answer the "is it correct?" questions.
Automated: "Does this image have an alt attribute?" -> Yes/No
Human: "Does the alt text accurately describe the image for
someone who cannot see it?" -> Judgment required
Automated: "Is the color contrast ratio >= 4.5:1?" -> Yes/No
Human: "Is color the ONLY way this information
is communicated?" -> Judgment required
Automated: "Does this form input have a <label>?" -> Yes/No
Human: "Does the label clearly explain what
the user should enter?" -> Judgment required
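This split maps directly onto tooling: axe-core reports definite rule failures under `violations` and parks checks it could not decide on its own under `incomplete`. A minimal triage sketch in TypeScript, assuming axe-core's published result types:

// Sketch: splitting axe-core output into "is it present?" failures
// (rule-based, block the PR) and "is it correct?" items (judgment calls,
// routed to a human or AI reviewer).
import type { AxeResults, Result } from 'axe-core';

function triageAxeResults(results: AxeResults): {
  blockPr: Result[];        // definite failures -- fail the build
  queueForReview: Result[]; // undecidable checks -- needs human/AI judgment
} {
  return {
    blockPr: results.violations,
    queueForReview: results.incomplete,
  };
}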
WCAG 2.1 AA Success Criteria Overview
| Principle | Key Criteria | Automated? | Common Violations |
|---|---|---|---|
| Perceivable | 1.1.1 Non-text Content | Partially | Missing alt text |
| | 1.3.1 Info and Relationships | Partially | Missing headings, bad table structure |
| | 1.4.3 Contrast (Minimum) | Yes | Text below 4.5:1 ratio |
| | 1.4.11 Non-text Contrast | Partially | UI components below 3:1 ratio |
| Operable | 2.1.1 Keyboard | Partially | Non-focusable interactive elements |
| | 2.4.3 Focus Order | No | Illogical tab sequence |
| | 2.4.4 Link Purpose | No | "Click here" links |
| | 2.4.7 Focus Visible | Partially | Missing focus indicators |
| Understandable | 3.1.1 Language of Page | Yes | Missing lang attribute |
| | 3.2.1 On Focus | No | Unexpected context changes |
| | 3.3.2 Labels or Instructions | Partially | Missing form labels |
| Robust | 4.1.1 Parsing | Yes | Duplicate IDs, invalid HTML |
| | 4.1.2 Name, Role, Value | Partially | Missing ARIA attributes |
Building an Accessibility Testing Strategy
Level 1: Automated Scanning (Every PR)
Run axe-core or similar tools on every page during every PR. This catches the 30-40% of issues that are detectable by rule-based scanning.
# Minimum viable accessibility CI
npx playwright test tests/accessibility/ --reporter=json > a11y-results.json
Level 2: AI-Assisted Auditing (Weekly)
Use AI agents to evaluate qualitative aspects -- alt text quality, reading flow, error message clarity. This catches an additional 20-30% of issues.
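There is no standard tooling for this step yet. One sketch: crawl each page, collect every image with its alt text, and hand the pairs to a model for a quality judgment. Here `reviewAltText` is a hypothetical wrapper around whatever LLM API you use, not a real library call:

// Sketch: gathering alt text for AI quality review.
import { chromium } from 'playwright';

// Hypothetical LLM call -- returns a verdict on whether the alt text
// plausibly describes the image. Implement against your provider.
declare function reviewAltText(src: string, alt: string): Promise<string>;

async function auditAltText(url: string): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);
  // Only images that already HAVE alt text -- missing alt is Level 1's job.
  const images = await page.$$eval('img[alt]', imgs =>
    imgs.map(img => ({
      src: img.getAttribute('src') ?? '',
      alt: img.getAttribute('alt') ?? '',
    }))
  );
  await browser.close();
  for (const { src, alt } of images) {
    console.log(src, await reviewAltText(src, alt));
  }
}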
Level 3: Manual Expert Audit (Quarterly)
A human accessibility expert uses screen readers, keyboard-only navigation, and magnification tools to find the remaining issues. This is irreplaceable for complex interactions, cognitive accessibility, and WCAG AAA compliance.
Level 4: User Testing (Semi-Annually)
Test with real users who have disabilities. They find issues that no tool or expert anticipated.
Quick Wins: Most Common WCAG Violations
These five violations account for the majority of accessibility failures found in automated scans:
- Missing alt text (WCAG 1.1.1) -- Every meaningful image needs descriptive alt text
- Insufficient color contrast (WCAG 1.4.3) -- Text must have at least a 4.5:1 contrast ratio against its background
- Missing form labels (WCAG 1.3.1) -- Every input needs an associated <label>
- Empty links (WCAG 2.4.4) -- Links must have discernible text
- Missing document language (WCAG 3.1.1) -- <html lang="en"> must be present
Fixing these five issues eliminates the majority of automated accessibility violations. They are simple to fix and easy to test.
<!-- Fix #1: Add alt text -->
<img src="product.jpg" alt="Blue running shoes, size 10">
<!-- Fix #2: Use sufficient contrast -->
<style>
/* BAD: #999 on white = 2.85:1 ratio */
.label { color: #999; }
/* GOOD: #595959 on white = 7:1 ratio */
.label { color: #595959; }
</style>
<!-- Fix #3: Associate labels with inputs -->
<label for="email">Email address</label>
<input id="email" type="email" name="email">
<!-- Fix #4: Give links meaningful text -->
<!-- BAD -->
<a href="/products">Click here</a>
<!-- GOOD -->
<a href="/products">View all products</a>
<!-- Fix #5: Set document language -->
<html lang="en">
These fixes take minutes each and have an outsized impact on accessibility compliance scores and real user experience.
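The ratios quoted in Fix #2 come from WCAG's relative-luminance formula, the same 2.x calculation tools like axe-core implement. A self-contained sketch in TypeScript, if you want to verify contrast values programmatically:

// Sketch of the WCAG 2.x contrast-ratio math. Expects 6-digit hex colors.
function channelToLinear(c: number): number {
  const s = c / 255;
  // 0.03928 is the sRGB linearization threshold from the WCAG 2.x spec.
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function relativeLuminance(hex: string): number {
  const n = parseInt(hex.replace('#', ''), 16);
  const [r, g, b] = [(n >> 16) & 0xff, (n >> 8) & 0xff, n & 0xff];
  return 0.2126 * channelToLinear(r)
       + 0.7152 * channelToLinear(g)
       + 0.0722 * channelToLinear(b);
}

function contrastRatio(fg: string, bg: string): number {
  // (lighter + 0.05) / (darker + 0.05)
  const [l1, l2] = [relativeLuminance(fg), relativeLuminance(bg)]
    .sort((a, b) => b - a);
  return (l1 + 0.05) / (l2 + 0.05);
}

contrastRatio('#999999', '#ffffff'); // ~2.85 -- fails AA
contrastRatio('#595959', '#ffffff'); // ~7.0  -- passes AA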
Integrating Compliance Checks into Your Workflow
Pre-Commit: Linting HTML for Accessibility
Before code even reaches CI, lint HTML templates for common violations:
# Using axe-linter or htmlhint with accessibility rules
npx htmlhint --rules "alt-require,title-require" src/**/*.html
CI: Automated Page Scanning
Run axe-core against every page in your application during CI. Fail the build on critical and serious violations, warn on moderate ones.
// tests/accessibility/scan-all-pages.spec.ts
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';
const pages = [
'/',
'/products',
'/cart',
'/checkout',
'/account',
'/help',
];
for (const pagePath of pages) {
test(`a11y scan: ${pagePath}`, async ({ page }) => {
await page.goto(pagePath);
const results = await new AxeBuilder({ page })
.withTags(['wcag2a', 'wcag2aa'])
.analyze();
// Fail on critical and serious
const blocking = results.violations.filter(
v => v.impact === 'critical' || v.impact === 'serious'
);
expect(blocking).toEqual([]);
});
}
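The relative paths above assume a baseURL in the Playwright config. A minimal sketch -- the port and start command are placeholders for your app, not prescribed values:

// playwright.config.ts -- minimal sketch; baseURL and webServer values
// are illustrative assumptions.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: './tests',
  use: {
    baseURL: 'http://localhost:3000', // hypothetical dev-server address
  },
  webServer: {
    command: 'npm run start',         // hypothetical start command
    url: 'http://localhost:3000',
    reuseExistingServer: true,
  },
});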
Post-Release: Monitoring
After deployment, run periodic accessibility scans against production to catch regressions introduced by content changes, CMS updates, or third-party script injections that bypass CI.
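One way to do this is a scheduled job that reuses the same axe-core scan against production URLs. A sketch, assuming a hypothetical PROD_URL environment variable and a small page list:

// monitor-production.ts -- sketch of a scheduled production a11y scan.
// Run from cron or a scheduled CI workflow, not on every commit.
import { chromium } from 'playwright';
import AxeBuilder from '@axe-core/playwright';

const PROD_URL = process.env.PROD_URL ?? 'https://example.com'; // hypothetical
const pages = ['/', '/products', '/checkout'];

async function main(): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  for (const path of pages) {
    await page.goto(PROD_URL + path);
    const { violations } = await new AxeBuilder({ page })
      .withTags(['wcag2a', 'wcag2aa'])
      .analyze();
    if (violations.length > 0) {
      // Report however your team alerts: logs, chat webhook, ticket, etc.
      console.error(`${path}: ${violations.length} violation(s)`,
        violations.map(v => v.id));
    }
  }
  await browser.close();
}

main();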
Mapping WCAG to Test Types
| WCAG Criterion | Test Type | Tool |
|---|---|---|
| 1.1.1 Non-text Content | Automated + AI review | axe-core + AI agent |
| 1.4.3 Contrast | Fully automated | axe-core |
| 2.1.1 Keyboard | Semi-automated | Playwright keyboard tests |
| 2.4.7 Focus Visible | Semi-automated | Visual regression on focus states |
| 3.3.1 Error Identification | Manual + AI | AI agent + human review |
| 4.1.2 Name, Role, Value | Automated | axe-core ARIA checks |
This mapping helps teams decide where to invest automation effort and where human judgment is irreplaceable.
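As an illustration of the semi-automated middle ground in the table, here is a minimal sketch of a Playwright keyboard test for 2.1.1. It verifies that Tab reaches real controls; judging whether the focus order is logical (2.4.3) stays with a human. The page path and tab count are assumptions:

// tests/accessibility/keyboard.spec.ts -- sketch of a semi-automated
// WCAG 2.1.1 (Keyboard) check.
import { test, expect } from '@playwright/test';

test('Tab reaches interactive elements, not just <body>', async ({ page }) => {
  await page.goto('/');

  const focusedTags = new Set<string>();
  for (let i = 0; i < 20; i++) { // sample the first 20 tab stops
    await page.keyboard.press('Tab');
    focusedTags.add(
      await page.evaluate(() => document.activeElement?.tagName ?? 'NONE')
    );
  }

  // If focus never leaves <body>, nothing on the page is keyboard-operable.
  expect([...focusedTags].some(t => t !== 'BODY' && t !== 'NONE')).toBe(true);
});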