Web Vitals Performance Gates in CI
From Lighthouse to CI Pipeline
Having a Lighthouse CI configuration is only half the battle. The real value comes from integrating performance checks into your CI pipeline so that every pull request is automatically validated against performance budgets. This section covers the complete set of CI integration patterns.
GitHub Actions: Complete Performance Budget Workflow
```yaml
# .github/workflows/performance-budget.yml
name: Performance Budget Check

on:
  pull_request:
    branches: [main]

jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm

      - name: Install dependencies and build
        run: npm ci && npm run build

      - name: Deploy preview
        run: npm run deploy:preview
        env:
          PREVIEW_TOKEN: ${{ secrets.PREVIEW_TOKEN }}

      - name: Run Lighthouse CI
        uses: treosh/lighthouse-ci-action@v11
        with:
          configPath: ./lighthouserc.json
          uploadArtifacts: true
          temporaryPublicStorage: true

      - name: Check bundle size budget
        run: npx bundlesize --config bundlesize.config.json

      - name: Check API response time budget
        run: |
          FAILED=0
          for endpoint in "/api/products" "/api/cart" "/api/search?q=test"; do
            RESPONSE_TIME=$(curl -o /dev/null -s -w '%{time_total}' \
              "https://staging.example.com${endpoint}")
            echo "$endpoint: ${RESPONSE_TIME}s"
            if (( $(echo "$RESPONSE_TIME > 0.5" | bc -l) )); then
              echo "FAIL: $endpoint exceeded 500ms budget (actual: ${RESPONSE_TIME}s)"
              FAILED=1
            fi
          done
          if [ "$FAILED" -eq 1 ]; then
            exit 1
          fi

      - name: Post results as PR comment
        if: always()
        uses: marocchino/sticky-pull-request-comment@v2
        with:
          header: performance-budget
          path: lighthouse-results-summary.md
```
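The workflow above assumes a lighthouserc.json at the repository root. A minimal sketch using Lighthouse CI's assertion syntax (the URL and budget values are illustrative placeholders, not this project's real settings):

```json
{
  "ci": {
    "collect": {
      "url": ["https://staging.example.com/"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }],
        "total-blocking-time": ["error", { "maxNumericValue": 300 }],
        "categories:performance": ["error", { "minScore": 0.85 }]
      }
    },
    "upload": {
      "target": "temporary-public-storage"
    }
  }
}
```

Running multiple collections per URL and asserting on the median reduces flakiness from run-to-run variance.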
Multi-Stage Performance Pipeline
For mature teams, performance validation should be a multi-stage process:
```yaml
# .github/workflows/performance-gates.yml
name: Performance Gates

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  # Stage 1: Build-time checks (fast, every PR)
  build-time-budget:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build

      - name: Check bundle sizes
        run: npx bundlesize --config bundlesize.config.json

      - name: Check for new dependencies > 50KB
        run: |
          npx webpack-bundle-analyzer dist/stats.json \
            --mode json --no-open -r bundle-report.json
          node scripts/check-large-deps.js bundle-report.json

  # Stage 2: Lighthouse audit (medium speed, every PR)
  lighthouse-audit:
    needs: build-time-budget
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build

      - name: Start local server
        run: npm run serve &
        env:
          PORT: 3000

      - name: Wait for server
        run: npx wait-on http://localhost:3000 --timeout 30000

      - name: Run Lighthouse CI
        uses: treosh/lighthouse-ci-action@v11
        with:
          urls: |
            http://localhost:3000/
            http://localhost:3000/products/1
          configPath: ./lighthouserc.json

  # Stage 3: API performance (on merge to main only)
  api-performance:
    if: github.event_name == 'push'
    needs: [build-time-budget, lighthouse-audit]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run k6 API performance tests
        uses: grafana/k6-action@v0.4.0
        with:
          filename: tests/performance/api-budget.js
          flags: --out json=k6-results.json
        env:
          K6_TARGET_URL: ${{ secrets.STAGING_URL }}

      - name: Validate k6 thresholds
        run: |
          # k6 exits non-zero when a threshold fails, so reaching this
          # step means every budget in the script passed
          echo "k6 thresholds validated successfully"

      - name: Upload results
        uses: actions/upload-artifact@v4
        with:
          name: k6-api-budget-results
          path: k6-results.json
```
k6 Performance Budget Script
Complement Lighthouse (frontend) with k6 (backend API) budgets:
```javascript
// tests/performance/api-budget.js
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  scenarios: {
    budget_check: {
      executor: 'shared-iterations',
      vus: 5,
      iterations: 50,
      maxDuration: '2m',
    },
  },
  thresholds: {
    // API performance budgets (p95 latency in ms, per endpoint)
    'http_req_duration{name:products_list}': ['p(95)<200'],
    'http_req_duration{name:product_detail}': ['p(95)<150'],
    'http_req_duration{name:search}': ['p(95)<300'],
    'http_req_duration{name:cart}': ['p(95)<250'],
    // Global budgets
    http_req_failed: ['rate<0.01'],
  },
};

const BASE_URL = __ENV.K6_TARGET_URL || 'https://staging.example.com';

export default function () {
  // Products list
  let res = http.get(`${BASE_URL}/api/products`, { tags: { name: 'products_list' } });
  check(res, { 'products 200': (r) => r.status === 200 });

  // Product detail
  res = http.get(`${BASE_URL}/api/products/1`, { tags: { name: 'product_detail' } });
  check(res, { 'product detail 200': (r) => r.status === 200 });

  // Search
  res = http.get(`${BASE_URL}/api/search?q=laptop`, { tags: { name: 'search' } });
  check(res, { 'search 200': (r) => r.status === 200 });

  // Cart operations
  res = http.post(`${BASE_URL}/api/cart`, JSON.stringify({ product_id: 1, qty: 1 }), {
    headers: { 'Content-Type': 'application/json' },
    tags: { name: 'cart' },
  });
  check(res, { 'cart 200': (r) => r.status === 200 });
}
```
Integrating with the Full Performance Pipeline
The performance budget checks are one stage of a larger performance strategy:
```text
Developer Commits Code
        |
        v
+------------------+
|   Build-Time     | <-- Bundle size check, dependency audit
| Budget (< 1 min) |     Runs on every PR
+--------+---------+
         |
         v
+------------------+
|  Lighthouse CI   | <-- Core Web Vitals, accessibility, SEO
|    (2-5 min)     |     Runs on every PR
+--------+---------+
         |
         v
+------------------+
| API Performance  | <-- k6 endpoint latency budgets
| Budget (2-3 min) |     Runs on merge to main
+--------+---------+
         |
         v
+------------------+
|  Staging Load    | <-- Full k6 load test with production traffic model
| Test (15-30 min) |     Runs before production deployment
+--------+---------+
         |
         v
+------------------+
|   Production     | <-- Canary + RUM validation
|   Validation     |     Continuous after deployment
+------------------+
```
Handling Performance Budget Failures
When a performance budget check fails, the team needs a clear process:
Triage Process
- Is it a real regression or test flakiness? Re-run the check. If it passes on retry, investigate test stability.
- What changed? Compare the failing PR against the main branch baseline; Lighthouse CI's diff view helps here.
- Is it acceptable? Some features legitimately increase bundle size or LCP. If the trade-off is justified, update the budget with a comment explaining why.
- Can it be optimized? Common fixes include lazy loading, code splitting, image optimization, and caching headers.
Budget Exception Process
Sometimes a budget increase is justified. Document it:
```json
{
  "budgetExceptions": [
    {
      "date": "2026-01-15",
      "metric": "largest-contentful-paint",
      "oldBudget": 2500,
      "newBudget": 2800,
      "reason": "New hero image carousel adds 300ms but increases conversion by 5%",
      "reviewer": "qa-architect",
      "revertDate": "2026-03-15"
    }
  ]
}
```
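A CI step can enforce the revertDate so a temporary budget increase does not quietly become permanent. A minimal sketch (the helper and file layout are hypothetical, assuming the JSON record above):

```javascript
// Hypothetical CI helper: fail the build once a budget exception passes
// its revertDate, forcing the team to restore the old budget or re-justify.
function findExpiredExceptions(config, today = new Date()) {
  return (config.budgetExceptions || []).filter(
    (ex) => ex.revertDate && new Date(ex.revertDate) <= today
  );
}

// Example using the exception record from the document above.
const config = {
  budgetExceptions: [
    {
      date: '2026-01-15',
      metric: 'largest-contentful-paint',
      oldBudget: 2500,
      newBudget: 2800,
      reason: 'New hero image carousel adds 300ms but increases conversion by 5%',
      reviewer: 'qa-architect',
      revertDate: '2026-03-15',
    },
  ],
};

const expired = findExpiredExceptions(config, new Date('2026-04-01'));
for (const ex of expired) {
  console.error(
    `Budget exception for ${ex.metric} expired on ${ex.revertDate}; revert to ${ex.oldBudget}`
  );
}
```

Wiring this into the build-time stage keeps the exception list honest with no extra process overhead.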
Metrics to Track Over Time
Beyond pass/fail, track these trends in your performance dashboard:
| Metric | Tracking Purpose | Alert Threshold |
|---|---|---|
| LCP trend (30-day) | Gradual degradation detection | 10% increase from baseline |
| Bundle size trend | JavaScript bloat detection | 5% increase per quarter |
| Lighthouse score trend | Overall quality trajectory | Score drops below 85 |
| Budget failure rate | Process health | > 20% of PRs fail on performance |
| Time to fix budget failures | Team responsiveness | > 3 days to resolve |
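The percentage-based thresholds in the table reduce to a single comparison against a stored baseline. A minimal sketch of that check (the function name and sample numbers are illustrative):

```javascript
// Hypothetical trend check: alert when a rolling metric average has
// degraded past its allowed percentage increase over the baseline.
function exceedsTrendBudget(baseline, current, maxIncreasePct) {
  const increasePct = ((current - baseline) / baseline) * 100;
  return increasePct > maxIncreasePct;
}

// LCP example from the table: alert on a > 10% increase from baseline.
console.log(exceedsTrendBudget(2000, 2250, 10)); // 12.5% increase -> true
console.log(exceedsTrendBudget(2000, 2100, 10)); // 5% increase -> false
```

The same function covers the bundle-size and Lighthouse-score rows; only the baseline window and allowed percentage differ per metric.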
Performance budgets are not a one-time setup. They require ongoing maintenance, calibration, and team commitment to be effective. The payoff is a product that stays fast as it grows.