QA Engineer Skills 2026

Logs and Environment Variables

Reading Logs

Logs are your primary debugging tool when tests fail in CI or on remote servers. You cannot set breakpoints in a CI runner. You cannot open the browser DevTools on a headless runner at 2 AM. But you can always read the logs.


Log Analysis Commands

Basic Log Reading

# Real-time log monitoring
tail -f /var/log/app/application.log

# Last 100 lines
tail -100 /var/log/app/application.log

# First 50 lines (check log format, headers)
head -50 /var/log/app/application.log

# Page through a large log file
less /var/log/app/application.log
# Use: / to search, n for next match, q to quit

Filtering Logs by Time

# Errors in a specific time window
awk '/2024-01-15 14:3[0-9]/' app.log | grep ERROR

# Logs from the last hour (if log format includes ISO timestamps)
awk -v start="$(date -d '1 hour ago' '+%Y-%m-%d %H:%M')" '$0 >= start' app.log

# Between two timestamps
awk '/2024-01-15 14:30/,/2024-01-15 14:45/' app.log
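One caveat with the range form above: `awk '/start/,/end/'` prints from the first match of the start pattern to the first match of the end pattern, so if the end timestamp never appears verbatim the range runs to end of file. A lexicographic comparison is safer for an inclusive window. A minimal sketch, assuming every line begins with an ISO-style `YYYY-MM-DD HH:MM` timestamp as in the examples above (`errors_between` is a hypothetical helper name):

```shell
# errors_between LOG START END
# Print ERROR lines whose leading timestamp falls in [START, END).
# Lexicographic string comparison matches chronological order for
# ISO-style timestamps, so no date parsing is needed.
errors_between() {
  awk -v start="$2" -v end="$3" '$0 >= start && $0 < end' "$1" | grep ERROR
}

# Usage:
# errors_between app.log "2024-01-15 14:30" "2024-01-15 14:45"
```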

Structured (JSON) Logs

Many modern applications log in JSON format. Use jq to parse them (these examples assume newline-delimited JSON, one object per line):

# Pretty-print JSON log entries
jq . app.json

# Filter for errors only
jq 'select(.level == "error")' app.json

# Extract specific fields
jq 'select(.level == "error") | {timestamp, message, stack}' app.json

# Count errors by message
jq -r 'select(.level == "error") | .message' app.json | sort | uniq -c | sort -rn

# Filter by service name
jq 'select(.service == "payment-api" and .level == "error")' app.json

Container Logs

# Docker container logs
docker logs test-app --tail 100 --follow

# Docker Compose logs (all services)
docker compose logs -f

# Docker Compose logs (specific service)
docker compose logs -f app

# Kubernetes pod logs
kubectl logs pod/test-app -f

# Kubernetes logs with timestamp filtering
kubectl logs pod/test-app --since=1h

Common Log Patterns to Search For

Pattern               What It Means                           Grep Command
ERROR / FATAL         Application errors                      grep -E "ERROR|FATAL" app.log
NullPointerException  Missing data or object                  grep "NullPointerException" app.log
Connection refused    Service dependency down                 grep "Connection refused" app.log
timeout               Slow dependency or network issue        grep -i "timeout" app.log
401 / 403             Authentication/authorization failure    grep -E "401|403" access.log
500 / 502 / 503       Server errors                           grep -E "50[023]" access.log
OutOfMemoryError      Memory leak or insufficient allocation  grep "OutOfMemory" app.log
ECONNREFUSED          Cannot connect to service (Node.js)     grep "ECONNREFUSED" app.log
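These patterns can also be checked in a single pass with a small helper that reports how often each one appears, which is a quick way to triage an unfamiliar log. A sketch (`scan_log` is a hypothetical helper; the pattern list mirrors the table above, and you may want `grep -i` for case-insensitive matching):

```shell
# scan_log LOG
# Count occurrences of each common failure pattern in a log file.
scan_log() {
  log="$1"
  for pattern in "ERROR|FATAL" "NullPointerException" "Connection refused" \
                 "timeout" "OutOfMemoryError" "ECONNREFUSED"; do
    # grep -c prints 0 but exits non-zero when nothing matches,
    # so capture the count and ignore the exit status.
    count=$(grep -c -E "$pattern" "$log" || true)
    printf '%-22s %s\n' "$pattern" "$count"
  done
}

# Usage:
# scan_log /var/log/app/application.log
```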

Environment Variables

Environment variables configure tests for different environments without changing code. They are the standard way to manage configuration across dev, staging, and production.

Setting Environment Variables

# Set for the current session
export BASE_URL=https://staging.example.com
export API_KEY=test-key-12345
export DB_HOST=localhost
export DB_PORT=5432

# Use in commands
curl -H "Authorization: Bearer $API_KEY" "$BASE_URL/api/users"

# Set for a single command only
BASE_URL=https://prod.example.com npx playwright test --project=smoke

Loading from .env Files

# .env file format
BASE_URL=https://staging.example.com
API_KEY=test-key-12345
DB_HOST=localhost
DB_PORT=5432
LOG_LEVEL=debug

# Load all variables from .env (simple KEY=value lines only; this
# pattern breaks on values containing spaces or quotes)
export $(grep -v '^#' .env | xargs)

# Or load from a specific file
export $(grep -v '^#' .env.staging | xargs)

Default Values

# Use default if variable is not set
TIMEOUT=${TEST_TIMEOUT:-30000}
BROWSER=${TEST_BROWSER:-chromium}
WORKERS=${TEST_WORKERS:-4}
BASE_URL=${BASE_URL:-http://localhost:3000}

# Verify required variables are set
: "${API_KEY:?ERROR: API_KEY must be set}"
# If API_KEY is empty or unset, the script exits with the error message
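The `:?` check covers one variable at a time. For a whole list, a small preflight helper can report every missing variable at once before the run starts. A minimal sketch (`require_vars` is a hypothetical helper name; the variable names are the examples from this guide):

```shell
# require_vars NAME...
# Print an error for every listed variable that is unset or empty;
# return non-zero if any is missing.
require_vars() {
  missing=0
  for name in "$@"; do
    # Indirect lookup: fetch the value of the variable named in $name.
    eval "value=\${$name:-}"
    if [ -z "$value" ]; then
      echo "ERROR: $name must be set" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Usage: fail fast before launching the suite
# require_vars BASE_URL API_KEY DB_HOST || exit 1
```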

Environment-Specific Configuration

# Pattern: different .env files per environment
# .env.dev
BASE_URL=http://localhost:3000
DB_HOST=localhost

# .env.staging
BASE_URL=https://staging.example.com
DB_HOST=staging-db.example.com

# .env.production
BASE_URL=https://www.example.com
DB_HOST=prod-db.example.com

# Load the right one
ENV=${ENVIRONMENT:-dev}
export $(grep -v '^#' ".env.${ENV}" | xargs)
echo "Running tests against $BASE_URL"
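Wrapping the load step in a function that refuses to continue when the per-environment file is missing avoids silently running against the wrong defaults. A sketch using the same `grep | xargs` pattern and the `.env.<name>` layout from this section (`load_env` is a hypothetical helper name):

```shell
# load_env NAME
# Export every non-comment line of .env.NAME, or fail if the file is absent.
load_env() {
  env_file=".env.${1:-dev}"
  if [ ! -f "$env_file" ]; then
    echo "ERROR: $env_file not found" >&2
    return 1
  fi
  # Simple KEY=value lines only; values with spaces need a real dotenv loader.
  export $(grep -v '^#' "$env_file" | xargs)
}

# Usage:
# load_env staging && npx playwright test
```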

Viewing Environment Variables

# Show all environment variables
env

# Show all, sorted and filtered
env | sort | grep -i test

# Show a specific variable (quote it in case the value contains spaces)
echo "$BASE_URL"

# Check if a variable is set
if [ -z "$API_KEY" ]; then
  echo "WARNING: API_KEY is not set"
fi

Quick Reference: Essential Commands

Task                          Command
Find a file                   find / -name "playwright.config.ts" 2>/dev/null
Disk usage                    df -h
Memory usage                  free -h
Watch a command repeatedly    watch -n 5 "curl -s localhost:3000/health"
Compare two files             diff expected.json actual.json
Compare directories           diff -r dir1/ dir2/
Check DNS                     nslookup staging.example.com
Open network ports            ss -tlnp
Download a file               wget https://example.com/file.zip
Create an archive             tar -czf backup.tar.gz test-results/
Extract an archive            tar -xzf backup.tar.gz
Check file type and encoding  file document.txt
Count lines in files          wc -l *.spec.ts

Putting It All Together: Debug Workflow

When a test fails in CI, here is a systematic approach using command-line tools:

# Step 1: Check what changed (what could have caused the failure?)
git log --oneline -5 --stat

# Step 2: Check the test results
grep 'failure message' test-results/junit.xml

# Step 3: Check application logs for errors around the test execution time
grep -A 5 "ERROR" /var/log/app/application.log | tail -30

# Step 4: Check system resources (was the runner overloaded?)
free -h
df -h
uptime

# Step 5: Check service health
curl -s -o /dev/null -w "%{http_code}" https://staging.example.com/health

# Step 6: Check container status (if using Docker)
docker ps -a
docker logs test-app --tail 50

# Step 7: Check environment variables (is the right config loaded?)
env | grep -i "base_url\|api_key\|db_"

# Step 8: Reproduce locally
export $(grep -v '^#' .env.staging | xargs)
npx playwright test tests/failing-test.spec.ts --headed
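Step 5 of the workflow can be wrapped in a helper that turns the status code into a pass/fail exit status, which is handy at the top of a CI debug script. A sketch (`check_health` is a hypothetical helper; the URL is whatever your suite targets):

```shell
# check_health URL
# Curl the endpoint and fail unless it returns HTTP 200.
check_health() {
  code=$(curl -s -o /dev/null -w '%{http_code}' "$1")
  if [ "$code" = "200" ]; then
    echo "OK ($code)"
  else
    echo "FAIL ($code)" >&2
    return 1
  fi
}

# Usage: abort the debug run early if the environment itself is down
# check_health "https://staging.example.com/health" || exit 1
```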

Hands-On Exercise

  1. Find and read the last 50 lines of an application log file on your system
  2. Use jq to parse a JSON log file and extract only error-level entries
  3. Create a .env file with test configuration and load it into your shell
  4. Write a script that checks if all required environment variables are set before running tests
  5. Use watch to monitor an endpoint's health every 5 seconds
  6. Practice the full debug workflow above on a real or simulated test failure

Interview Talking Point: "I use the command line daily for QA work -- curl for quick API validation, jq for parsing JSON responses, grep for digging through application logs to find root causes, and Bash scripts to automate repetitive tasks like smoke-testing multiple environments. I am comfortable with Docker for spinning up isolated test environments with Compose files, reading container logs to debug infrastructure issues, and managing environment variables for test configuration across dev, staging, and production. When a test fails in CI, my first step is usually checking container logs and application logs rather than waiting for someone else to investigate. I also write scripts for test data cleanup, service readiness checks, and environment health monitoring to keep our test infrastructure reliable."