QA Engineer Skills 2026

GitHub Actions: A Practical Example

A Production-Ready Test Pipeline

The following workflow demonstrates a realistic CI pipeline for a web application with multiple test layers. Study each section carefully -- every line serves a purpose.

# .github/workflows/test-pipeline.yml
name: Test Pipeline

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]
  schedule:
    - cron: '0 6 * * 1-5'  # Weekdays at 6 AM UTC

env:
  NODE_ENV: test
  BASE_URL: https://staging.example.com

jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'npm'
      - run: npm ci
      - run: npm run test:unit -- --coverage
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: unit-coverage
          path: coverage/

  integration-tests:
    needs: unit-tests
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_DB: test_db
          POSTGRES_PASSWORD: ${{ secrets.DB_PASSWORD }}
        ports:
          - 5432:5432
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'npm'
      - run: npm ci
      - run: npm run test:integration
        env:
          DATABASE_URL: postgres://postgres:${{ secrets.DB_PASSWORD }}@localhost:5432/test_db

  browser-tests:
    needs: integration-tests
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        browser: [chromium, firefox, webkit]
        shard: [1, 2, 3]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'npm'
      - run: npm ci
      - run: npx playwright install --with-deps ${{ matrix.browser }}
      - run: npx playwright test --project=${{ matrix.browser }} --shard=${{ matrix.shard }}/3
      - uses: actions/upload-artifact@v4
        if: failure()
        with:
          name: traces-${{ matrix.browser }}-${{ matrix.shard }}
          path: test-results/

Line-by-Line Breakdown

Triggers

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]
  schedule:
    - cron: '0 6 * * 1-5'

Three trigger types cover three different feedback loops:

  • Push runs on direct pushes to main and develop -- catches issues immediately after merge
  • Pull request runs when PRs target main -- catches issues before merge
  • Schedule runs weekday mornings -- catches environment drift, expired tokens, or flaky tests that only show up intermittently
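
A fourth trigger worth knowing is workflow_dispatch, which adds a "Run workflow" button to the Actions tab for manual runs. A sketch (not part of the pipeline above) that also lets the person triggering it point the run at a different environment:

on:
  workflow_dispatch:
    inputs:
      base_url:
        description: 'Target environment URL'
        required: false
        default: 'https://staging.example.com'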

Global Environment Variables

env:
  NODE_ENV: test
  BASE_URL: https://staging.example.com

Set at the workflow level, these are available to all jobs. Use global env vars for configuration that applies everywhere. Use job-level or step-level env vars for more specific settings.
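
As a quick sketch of the precedence rules: step-level env overrides job-level, which overrides workflow-level -- the most specific scope wins:

jobs:
  env-demo:
    runs-on: ubuntu-latest
    env:
      BASE_URL: https://staging.example.com  # job level
    steps:
      - run: echo "$BASE_URL"                # prints the job-level value
      - run: echo "$BASE_URL"                # step-level override wins here
        env:
          BASE_URL: http://localhost:3000    # step level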

Job Dependencies

needs: unit-tests

The needs keyword creates a pipeline where unit tests gate integration tests gate browser tests. If unit tests fail, integration tests never start -- saving compute time and providing faster feedback.

Why this order matters:

  1. Unit tests (seconds) -- catch logic errors immediately
  2. Integration tests (minutes) -- catch wiring and database issues
  3. Browser tests (minutes) -- catch UI and end-to-end issues

If you run browser tests first and they fail because of a broken utility function, you waste 15 minutes before getting the same feedback that unit tests would have given in 30 seconds.
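
needs also accepts a list when a job must wait on several others. A hypothetical report job that fans in after all three test layers:

report:
  needs: [unit-tests, integration-tests, browser-tests]
  if: always()   # run even when an upstream job failed
  runs-on: ubuntu-latest
  steps:
    - run: echo "All test jobs finished"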

Services (Sidecar Containers)

services:
  postgres:
    image: postgres:16
    env:
      POSTGRES_DB: test_db
      POSTGRES_PASSWORD: ${{ secrets.DB_PASSWORD }}
    ports:
      - 5432:5432

GitHub Actions spins up a Postgres container alongside your test runner. The database is accessible at localhost:5432 from within the job. This is how you run integration tests against real databases without managing external infrastructure.

Other common services:

  • Redis: redis:7 on port 6379
  • MySQL: mysql:8 on port 3306
  • Elasticsearch: elasticsearch:8.11.0 on port 9200
  • RabbitMQ: rabbitmq:3-management on port 5672
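
One caveat with the Postgres block above: the container is started, but nothing waits for it to accept connections, so a fast runner can hit the database before it is ready. GitHub Actions passes Docker health-check options through the service's options key; a variant using pg_isready:

services:
  postgres:
    image: postgres:16
    env:
      POSTGRES_DB: test_db
      POSTGRES_PASSWORD: ${{ secrets.DB_PASSWORD }}
    ports:
      - 5432:5432
    options: >-
      --health-cmd pg_isready
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5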

Secrets

${{ secrets.DB_PASSWORD }}

Secrets are encrypted at rest, and GitHub masks exact matches of their values in log output. Masking is best-effort, though: a transformed value (base64-encoded, split across lines, or embedded in a URL) can still leak, so never echo a secret deliberately. Configure secrets in your repository settings under Settings > Secrets and variables > Actions.

Best practices:

  • Use descriptive names: DB_PASSWORD, not SECRET1
  • Document which secrets are required in a CONTRIBUTING.md
  • Use environment-specific secrets when possible (e.g., STAGING_API_KEY vs PROD_API_KEY)
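
For the last point, GitHub environments scope secrets to the jobs that declare them. Assuming you have configured an environment named staging in the repository settings, a job can opt in like this:

integration-tests:
  runs-on: ubuntu-latest
  environment: staging   # only this job can read staging-scoped secrets
  steps:
    - run: npm run test:integration
      env:
        API_KEY: ${{ secrets.STAGING_API_KEY }}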

Matrix Strategy

strategy:
  fail-fast: false
  matrix:
    browser: [chromium, firefox, webkit]
    shard: [1, 2, 3]

This creates 3 browsers x 3 shards = 9 parallel jobs. Each combination runs independently. The fail-fast: false setting means all 9 jobs complete even if some fail, giving you the full picture of failures across browsers.

When to use matrices:

  • Cross-browser testing (Chromium, Firefox, WebKit)
  • Cross-platform testing (ubuntu, windows, macos)
  • Test sharding for parallelization
  • Multi-version testing (Node 18, 20, 22)
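
Matrices also support exclude and include for trimming or extending the grid. A sketch (not part of the pipeline above) that drops one combination and adds a variable to a single cell:

strategy:
  fail-fast: false
  matrix:
    browser: [chromium, firefox, webkit]
    shard: [1, 2, 3]
    exclude:
      - browser: webkit
        shard: 3          # skip this one combination
    include:
      - browser: chromium
        shard: 1
        coverage: true    # extra variable available only in this cell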

Artifacts

- uses: actions/upload-artifact@v4
  if: always()
  with:
    name: unit-coverage
    path: coverage/

The if: always() condition uploads artifacts even when the job fails. For coverage reports you always want the data; for traces, if: failure() (as in the browser-tests job above) collects artifacts only when something goes wrong.
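
The per-shard trace artifacts from browser-tests can be gathered back into one directory by a follow-up job -- download-artifact@v4 supports a glob pattern plus merging. A sketch of a hypothetical collection job:

collect-traces:
  needs: browser-tests
  if: failure()
  runs-on: ubuntu-latest
  steps:
    - uses: actions/download-artifact@v4
      with:
        pattern: traces-*
        merge-multiple: true
        path: all-traces/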


Extending the Pipeline

Adding Lint and Type Checks

Add a fast first job that runs before everything else:

lint-and-typecheck:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with:
        node-version: 20
        cache: 'npm'
    - run: npm ci
    - run: npm run lint
    - run: npm run typecheck

Then make unit-tests depend on lint-and-typecheck:

unit-tests:
  needs: lint-and-typecheck

Adding Slack Notifications

notify-on-failure:
  needs: [unit-tests, integration-tests, browser-tests]
  if: failure()
  runs-on: ubuntu-latest
  steps:
    - uses: slackapi/slack-github-action@v1
      with:
        payload: |
          {
            "text": "Pipeline failed on ${{ github.ref_name }}: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
          }
      env:
        SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK }}

Adding Path Filtering

Skip expensive tests when only documentation changes:

on:
  push:
    branches: [main, develop]
    paths-ignore:
      - '**.md'
      - 'docs/**'
      - '.github/ISSUE_TEMPLATE/**'

Debugging GitHub Actions

Debugging workflows is one of the biggest pain points. Here are practical strategies:

  1. Enable debug logging: Set the repository secret ACTIONS_STEP_DEBUG to true for verbose output
  2. Use act locally: The act tool runs GitHub Actions workflows in Docker on your local machine, dramatically speeding up iteration -- though service containers and some actions behave slightly differently than on hosted runners
  3. Add diagnostic steps: Insert env and ls -la steps to inspect the environment when something unexpected happens
  4. Check the Actions tab: Failed steps show expandable logs. Look at the last successful step and the first failed step to narrow the issue
  5. Use continue-on-error: true temporarily: Let a failing step continue so you can see the output of subsequent steps for debugging

# Temporary debugging step
- name: Debug environment
  if: failure()
  run: |
    echo "Working directory: $(pwd)"
    ls -la
    env | sort
    cat package.json | head -20
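
For strategy 5, continue-on-error is a per-step key. Mark the suspect step, read the output of the steps after it, then remove the flag before merging:

- name: Step under investigation
  run: npm run test:integration
  continue-on-error: true   # temporary: do not fail the job here
- name: Runs even if the step above failed
  run: ls -la test-results/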

Hands-On Exercise

  1. Fork a simple Node.js project (or create one with npm init)
  2. Add the test pipeline from this file to .github/workflows/test-pipeline.yml
  3. Simplify it: start with just the unit-tests job
  4. Push and watch the Actions tab -- verify the pipeline triggers and produces artifacts
  5. Add integration-tests with a Postgres service
  6. Add a matrix strategy for browser tests
  7. Intentionally break a test and verify artifacts are uploaded on failure
  8. Add path filtering and Slack notifications

Build the pipeline incrementally. Do not try to write the entire configuration at once -- you will spend more time debugging than building.