# Newman and pytest for API Automation
Postman is excellent for exploration. For CI/CD integration and scalable test suites, you need command-line tools: Newman for running Postman collections, and pytest with requests for full programmatic control.
## Newman: Postman Collections in CI
Newman runs Postman collections from the command line, making them suitable for CI pipelines.
### Basic Usage
```bash
# Run a collection with an environment
newman run collection.json --environment staging.json --reporters cli,junit

# Run a specific folder only
newman run collection.json --folder "Auth" --environment staging.json

# Run with a data file (data-driven testing)
# Note: the htmlextra reporter requires the newman-reporter-htmlextra package
newman run collection.json --iteration-data test_data.csv --reporters cli,htmlextra

# Set environment variables from the command line
newman run collection.json -e staging.json --env-var "auth_token=abc123"
```
### Newman in GitHub Actions
```yaml
jobs:
  api-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm install -g newman newman-reporter-htmlextra
      - run: |
          newman run tests/collection.json \
            --environment tests/staging.json \
            --reporters cli,junit,htmlextra \
            --reporter-junit-export results.xml \
            --reporter-htmlextra-export report.html
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: api-test-report
          path: |
            results.xml
            report.html
```
## pytest + requests: Full Control
For production API test suites, pytest with the requests library provides the flexibility, maintainability, and power that Newman lacks.
### Project Structure
```
tests/
  api/
    conftest.py            # Shared fixtures
    test_auth.py           # Authentication tests
    test_users.py          # User CRUD tests
    test_orders.py         # Order tests
    test_error_handling.py
  data/
    test_users.csv         # Test data files
pytest.ini                 # Configuration
```
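Selecting tests with `-m smoke` or `-m readonly`, as the CI commands in this section do, requires those markers to be registered. A minimal `pytest.ini` sketch (the `testpaths` value and marker descriptions are assumptions to adapt to your suite):

```ini
[pytest]
testpaths = tests/api
markers =
    smoke: fast endpoint checks run on every push
    readonly: tests that do not mutate data (safe to run against production)
```

Registering markers also lets `pytest --strict-markers` catch typos like `@pytest.mark.smok` at collection time.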
### Shared Fixtures (conftest.py)
```python
import os
import uuid

import pytest
import requests


@pytest.fixture(scope="session")
def base_url():
    return os.environ.get("API_BASE_URL", "http://localhost:3000/api/v1")


@pytest.fixture(scope="session")
def auth_headers(base_url):
    """Authenticate once per test session."""
    r = requests.post(f"{base_url}/auth/login", json={
        "email": "test@example.com",
        "password": "testpass123",
    })
    assert r.status_code == 200, f"Auth failed: {r.text}"
    token = r.json()["access_token"]
    return {"Authorization": f"Bearer {token}"}


@pytest.fixture
def api(base_url, auth_headers):
    """Pre-configured API session that accepts relative paths."""
    session = requests.Session()
    session.headers.update(auth_headers)
    session.headers.update({"Content-Type": "application/json"})

    original_request = session.request

    def patched_request(method, url, **kwargs):
        # Prefix relative paths with the base URL
        if url.startswith("/"):
            url = f"{base_url}{url}"
        return original_request(method, url, **kwargs)

    session.request = patched_request
    return session


@pytest.fixture
def create_user(api):
    """Factory fixture for creating test users with cleanup."""
    created_ids = []

    def _create(name="Test User", email=None, role="viewer"):
        email = email or f"test-{uuid.uuid4().hex[:8]}@test.com"
        r = api.post("/users", json={"name": name, "email": email, "role": role})
        assert r.status_code == 201, f"User creation failed: {r.text}"
        user = r.json()
        created_ids.append(user["id"])
        return user

    yield _create

    # Cleanup: delete all users created during the test
    for uid in created_ids:
        api.delete(f"/users/{uid}")
```
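The URL-prefixing logic inside the `api` fixture is easy to get wrong around trailing slashes; pulling it into a pure helper makes it unit-testable without a live server. A sketch (`join_api_url` is a name invented here, not part of requests):

```python
def join_api_url(base_url: str, url: str) -> str:
    """Prefix relative paths with the base URL; pass absolute URLs through unchanged."""
    if url.startswith("/"):
        # rstrip guards against a double slash when base_url ends with "/"
        return f"{base_url.rstrip('/')}{url}"
    return url
```

The patched request function in the fixture can then reduce to `url = join_api_url(base_url, url)`, and the slash handling gets its own fast unit tests.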
### Test Examples
```python
# test_users.py
import pytest


def test_create_user(api, create_user):
    user = create_user(name="Alice", role="admin")
    assert user["name"] == "Alice"
    assert user["role"] == "admin"
    assert "id" in user


def test_get_user(api, create_user):
    user = create_user(name="Bob")
    r = api.get(f"/users/{user['id']}")
    assert r.status_code == 200
    assert r.json()["name"] == "Bob"


def test_update_user(api, create_user):
    user = create_user(name="Original")
    r = api.put(f"/users/{user['id']}", json={
        "name": "Updated",
        "email": user["email"],
        "role": user["role"],
    })
    assert r.status_code == 200
    assert r.json()["name"] == "Updated"


def test_delete_user(api, create_user):
    user = create_user()
    r = api.delete(f"/users/{user['id']}")
    assert r.status_code == 204
    # Verify deletion
    r = api.get(f"/users/{user['id']}")
    assert r.status_code == 404


@pytest.mark.parametrize("email,expected", [
    ("valid@test.com", 201),
    ("", 400),
    ("not-an-email", 422),
    ("a" * 300 + "@test.com", 422),
])
def test_create_user_email_validation(api, email, expected):
    # Note: this bypasses the create_user factory, so the 201 case
    # is not cleaned up automatically
    r = api.post("/users", json={"name": "Test", "email": email, "role": "viewer"})
    assert r.status_code == expected
```
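Status-code checks alone can pass against a wrong-shaped payload. A minimal hand-rolled response-shape assertion is one option (the field names here are assumptions about the user payload above; for stricter contracts a schema library such as jsonschema is the usual choice):

```python
def assert_user_shape(payload: dict) -> None:
    """Fail with a readable message if required user fields are missing."""
    required = {"id", "name", "email", "role"}
    missing = required - payload.keys()
    assert not missing, f"response missing fields: {sorted(missing)}"
```

Inside a test it reads naturally: `user = create_user(name="Alice"); assert_user_shape(user)`.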
## Newman vs pytest: When to Use Which
| Aspect | Newman | pytest + requests |
|---|---|---|
| Learning curve | Low (if using Postman) | Medium (requires Python) |
| Flexibility | Limited to Postman scripting | Full Python ecosystem |
| Data-driven testing | CSV/JSON data files | pytest parametrize, fixtures |
| Maintenance at scale | Collections become unwieldy | Standard code with modules and imports |
| Debugging | Postman console | Python debugger, logging |
| Reusable utilities | Limited | Full Python: custom assertions, helpers, libraries |
| Version control | JSON exports (hard to diff) | Python files (easy to diff and review) |
| Team collaboration | Postman workspaces | Git (standard code review) |
## The Hybrid Approach
Many teams use both:
- Postman + Newman for smoke tests and quick endpoint checks
- pytest for comprehensive test suites with complex logic, data setup, and teardown
## Running Tests in CI
```bash
# Run all API tests
API_BASE_URL=https://api.staging.example.com pytest tests/api/ -v --junitxml=results.xml

# Run only smoke tests
API_BASE_URL=https://api.staging.example.com pytest tests/api/ -m smoke -v

# Run in parallel (requires the pytest-xdist plugin)
API_BASE_URL=https://api.staging.example.com pytest tests/api/ -n 4 -v

# Run against production (read-only tests only)
API_BASE_URL=https://api.example.com pytest tests/api/ -m readonly -v
```
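Relying on everyone remembering `-m readonly` against production is fragile; a collection hook can enforce it by skipping any unmarked test when the target is a production host. A sketch for conftest.py (the hostname check is an assumption; adapt it to your environment naming):

```python
import os

import pytest

PRODUCTION_HOSTS = ("api.example.com",)  # hypothetical production hostname


def pytest_collection_modifyitems(config, items):
    """Skip tests not marked 'readonly' when API_BASE_URL points at production."""
    base_url = os.environ.get("API_BASE_URL", "")
    if not any(host in base_url for host in PRODUCTION_HOSTS):
        return
    skip_mutating = pytest.mark.skip(reason="unmarked test against production")
    for item in items:
        if "readonly" not in item.keywords:
            item.add_marker(skip_mutating)
```

With this hook in place, an accidental `pytest tests/api/` against production skips every mutating test instead of creating or deleting real data.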
## Practical Exercise
- Create a pytest project with conftest.py containing base_url, auth, and API session fixtures
- Write CRUD tests for a resource (create, read, update, delete)
- Add parametrized validation tests for at least one endpoint
- Add a factory fixture that creates test data and cleans it up after the test
- Run the suite against a local or public API
## Key Takeaways
- Newman runs Postman collections in CI — quick to set up, limited in flexibility
- pytest + requests provides full programmatic control for production test suites
- Use fixtures for shared setup: auth, API sessions, test data factories
- Factory fixtures with cleanup prevent test data accumulation
- Parametrize for data-driven testing instead of duplicating test functions
- The hybrid approach (Postman for exploration, pytest for CI) works well for many teams