QA Engineer Skills 2026

Lighthouse CI: Enforcing Frontend Performance Budgets

What Is a Performance Budget?

A performance budget is a set of constraints on metrics that a page or API must meet. If the budget is exceeded, the CI pipeline fails -- just like a broken unit test. Performance budgets prevent the "death by a thousand cuts" problem where each small regression is acceptable individually but collectively degrades user experience.

The concept is simple: treat performance like correctness. If your code has a bug, CI fails. If your code is too slow, CI should also fail.


Why Lighthouse CI?

Lighthouse is Google's open-source auditing tool for web page quality. Lighthouse CI (LHCI) wraps Lighthouse in a CI-friendly package that can:

  • Run Lighthouse audits against multiple URLs
  • Assert that scores and metrics meet defined thresholds
  • Compare results against a baseline (previous builds)
  • Upload results to a server for trend visualization
  • Run multiple times per URL and report median results

Lighthouse CI Configuration

The core configuration lives in a lighthouserc.json file at the root of your project:

{
  "ci": {
    "collect": {
      "url": [
        "https://staging.example.com/",
        "https://staging.example.com/products/1",
        "https://staging.example.com/checkout"
      ],
      "numberOfRuns": 3,
      "settings": {
        "chromeFlags": "--no-sandbox --headless",
        "throttling": {
          "cpuSlowdownMultiplier": 4,
          "downloadThroughputKbps": 1600,
          "uploadThroughputKbps": 768,
          "rttMs": 150
        }
      }
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.9 }],
        "categories:accessibility": ["error", { "minScore": 0.95 }],
        "first-contentful-paint": ["error", { "maxNumericValue": 1800 }],
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }],
        "interactive": ["error", { "maxNumericValue": 3800 }],
        "total-blocking-time": ["warn", { "maxNumericValue": 300 }],
        "speed-index": ["warn", { "maxNumericValue": 3400 }]
      }
    },
    "upload": {
      "target": "lhci",
      "serverBaseUrl": "https://lhci.internal.example.com"
    }
  }
}

Configuration Breakdown

collect: Defines what to measure.

  • url: List of pages to audit. Always include your most critical user journeys.
  • numberOfRuns: Run each URL multiple times. Lighthouse CI reports the median, which reduces variance from network jitter.
  • settings.throttling: Simulate realistic network conditions. The values shown match Lighthouse's default simulated throttling -- a 4x CPU slowdown and a slow 4G connection (1.6 Mbps down, 150 ms RTT), approximating a mid-tier mobile device.
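The median-of-runs behavior can be sketched in a few lines. The LCP samples below are hypothetical, not real audit output:

```javascript
// Sketch of how multiple runs are reduced to one representative value:
// sort the per-run results and take the middle one.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 1
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Three runs of the same URL: one noisy outlier does not move the median.
const lcpRuns = [2400, 2450, 3900];
console.log(median(lcpRuns)); // 2450
```

With a single run, that 3900 ms outlier would have failed the 2500 ms LCP budget; the median keeps one slow CI run from blocking an otherwise healthy build.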

assert: Defines what must pass.

  • "error" level assertions fail the pipeline.
  • "warn" level assertions produce warnings but do not block deployment.
  • Assertions can target category scores (0-1 scale) or specific metrics (typically milliseconds; CLS is a unitless score).

upload: Optional -- stores results for historical comparison.


Understanding Core Web Vitals

The assertions in your Lighthouse CI config should be tied to Google's Core Web Vitals, which measure real-world user experience:

Metric                          | What It Measures              | Good    | Needs Improvement | Poor
LCP (Largest Contentful Paint)  | Loading performance           | < 2.5s  | 2.5-4.0s          | > 4.0s
INP (Interaction to Next Paint) | Interactivity responsiveness  | < 200ms | 200-500ms         | > 500ms
CLS (Cumulative Layout Shift)   | Visual stability              | < 0.1   | 0.1-0.25          | > 0.25

Additional metrics worth tracking:

Metric                        | What It Measures               | Recommended Budget
FCP (First Contentful Paint)  | First visible content          | < 1.8s
TBT (Total Blocking Time)     | Main thread blocking           | < 300ms
Speed Index                   | Visual completeness over time  | < 3.4s
TTI (Time to Interactive)     | Fully interactive              | < 3.8s
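The Core Web Vitals bands above can be expressed as a small classification sketch. rateMetric and THRESHOLDS are illustrative names, not a Lighthouse API:

```javascript
// Sketch of Core Web Vitals classification using the published thresholds.
const THRESHOLDS = {
  lcp: { good: 2500, poor: 4000 }, // milliseconds
  inp: { good: 200, poor: 500 },   // milliseconds
  cls: { good: 0.1, poor: 0.25 },  // unitless layout-shift score
};

function rateMetric(metric, value) {
  const { good, poor } = THRESHOLDS[metric];
  if (value < good) return 'good';
  if (value <= poor) return 'needs improvement';
  return 'poor';
}

console.log(rateMetric('lcp', 2300)); // good
console.log(rateMetric('inp', 350));  // needs improvement
console.log(rateMetric('cls', 0.3));  // poor
```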

Setting Budgets for Different Page Types

Not all pages have the same performance requirements. Set per-page budgets based on business impact:

{
  "ci": {
    "collect": {
      "url": [
        "https://staging.example.com/",
        "https://staging.example.com/products/1",
        "https://staging.example.com/checkout"
      ]
    },
    "assert": {
      "assertMatrix": [
        {
          "matchingUrlPattern": ".*/$",
          "assertions": {
            "largest-contentful-paint": ["error", { "maxNumericValue": 2000 }],
            "categories:performance": ["error", { "minScore": 0.95 }]
          }
        },
        {
          "matchingUrlPattern": ".*/products/.*",
          "assertions": {
            "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
            "categories:performance": ["error", { "minScore": 0.9 }]
          }
        },
        {
          "matchingUrlPattern": ".*/checkout",
          "assertions": {
            "largest-contentful-paint": ["error", { "maxNumericValue": 3000 }],
            "total-blocking-time": ["error", { "maxNumericValue": 200 }],
            "categories:performance": ["error", { "minScore": 0.85 }]
          }
        }
      ]
    }
  }
}
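Each matchingUrlPattern is treated as a regular expression tested against the audited URL, so a URL picks up the assertions of every pattern it matches. A quick sketch with the patterns from the config above:

```javascript
// Which assertMatrix entries apply to a given URL? Test each pattern
// as a regular expression, as Lighthouse CI does.
const patterns = ['.*/$', '.*/products/.*', '.*/checkout'];
const url = 'https://staging.example.com/products/1';

const matching = patterns.filter((p) => new RegExp(p).test(url));
console.log(matching); // [ '.*/products/.*' ]
```

Note that '.*/$' only matches URLs ending in a slash, which is why it selects the homepage but not the product or checkout pages.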

Bundle Size Budgets

Lighthouse CI covers runtime performance. For build-time budgets, add bundle size checks:

// bundlesize.config.json
{
  "files": [
    {
      "path": "dist/js/main.*.js",
      "maxSize": "150 kB",
      "compression": "gzip"
    },
    {
      "path": "dist/js/vendor.*.js",
      "maxSize": "200 kB",
      "compression": "gzip"
    },
    {
      "path": "dist/css/main.*.css",
      "maxSize": "30 kB",
      "compression": "gzip"
    },
    {
      "path": "dist/js/chunk-*.js",
      "maxSize": "50 kB",
      "compression": "gzip"
    }
  ]
}

Run alongside Lighthouse CI:

npx bundlesize --config bundlesize.config.json

Lighthouse CI Server: Tracking Trends

The LHCI server provides historical tracking, allowing you to see performance trends over time and catch gradual degradation that per-commit checks might miss:

# Install and start the LHCI server
npm install -g @lhci/cli @lhci/server
lhci server --storage.storageMethod=sql \
  --storage.sqlDialect=postgres \
  --storage.sqlConnectionUrl=postgresql://user:pass@db:5432/lhci

The server dashboard shows:

  • Score trends over time (per URL, per metric)
  • Comparison between branches (PR vs. main)
  • Build-to-build diffs highlighting what changed
  • Flagged regressions with commit attribution

Common Pitfalls

  1. Flaky results from CI environments. CI runners have variable performance. Use numberOfRuns: 3 (or 5) and assert on the median, not a single run.

  2. Budgets too tight. Start with achievable budgets and tighten them over time. A budget that fails on every PR gets disabled, defeating the purpose.

  3. Ignoring mobile. Default Lighthouse settings simulate mobile. Do not change throttling to fast desktop to make numbers look good. Mobile is where most users are.

  4. Not testing authenticated pages. Lighthouse CI supports puppeteerScript for login flows. Do not skip your most important pages just because they require authentication.

  5. Budget drift. Review and update budgets quarterly. What was acceptable last year may not meet current user expectations or competitive standards.


Practical Advice for QA Architects

  • Start with three pages: homepage, one product/content page, and the most critical conversion page (checkout, signup). Expand coverage incrementally.
  • Align budgets with business goals. A 100ms improvement in LCP can measurably improve conversion rates. Use published case studies (Google, Walmart, BBC) to justify budget investments.
  • Make results visible. Post Lighthouse scores as PR comments so developers see the impact of their changes before merge.
  • Pair with Real User Monitoring (RUM). Lighthouse CI measures synthetic performance. RUM tools (Google Analytics, Datadog RUM) measure actual user experience. Use both to validate that synthetic budgets reflect real-world performance.