
Session Reports

Session reports are the tangible output of exploratory testing. Without them, exploratory testing is invisible work — valuable bugs may be found, but there is no record of what was tested, what was skipped, or what risks remain. A well-written session report transforms an individual testing session into organizational knowledge.


The Session Report Template

Every exploratory session should produce a report. Keep it lightweight — the goal is to capture observations in real time, not write a polished document afterward.

Charter:    Explore file upload with edge-case formats (.svg, .webp, 0-byte files)
Tester:     Jane D.
Duration:   60 min
Start:      2024-03-15 14:00
Environment: Staging (build #4521), Chrome 122, macOS 14.3

AREAS COVERED:
- Upload .svg (accepted, renders in preview)
- Upload .webp (accepted, but thumbnail generation fails silently)
- Upload 0-byte .txt (accepted, but download link returns 500)
- Upload .gif animated (accepted, preview shows first frame only)
- Upload file with Unicode filename (accepted, filename truncated in UI)

BUGS FOUND:
- BUG: 0-byte file upload succeeds but download endpoint crashes (severity: high)
- BUG: .webp thumbnail silently missing — no error to user (severity: medium)
- BUG: Unicode filename truncated without notification (severity: low)

QUESTIONS / RISKS:
- What is the maximum file count per upload batch? No documentation found.
- Is there server-side virus scanning? Could not confirm.
- Does the CDN cache handle .webp files correctly?

FOLLOW-UP CHARTERS:
- Explore bulk upload (10+ files simultaneously) to discover performance issues
- Explore upload with slow/interrupted network to discover retry behavior
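Because the template has a fixed shape, reports are easy to capture as structured data for later aggregation. A minimal sketch in Python; the field names simply mirror the template above and are illustrative, not a required schema:

```python
from dataclasses import dataclass, field

@dataclass
class SessionReport:
    """One exploratory session, mirroring the report template."""
    charter: str
    tester: str
    duration_min: int
    areas_covered: list = field(default_factory=list)
    bugs_found: list = field(default_factory=list)       # one-line summaries with severity
    questions: list = field(default_factory=list)
    follow_up_charters: list = field(default_factory=list)

report = SessionReport(
    charter="Explore file upload with edge-case formats (.svg, .webp, 0-byte files)",
    tester="Jane D.",
    duration_min=60,
    bugs_found=["BUG: 0-byte upload succeeds but download crashes (high)"],
    follow_up_charters=["Explore bulk upload (10+ files simultaneously)"],
)
```

Keeping reports in a structured form like this (or YAML/JSON with the same fields) is what makes the aggregation described later in this section cheap to do.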

Anatomy of Each Section

Charter

Restate the original charter. If you deviated during the session (which is acceptable), note the deviation:

Charter:    Explore checkout with international addresses for currency issues
Deviation:  Discovered that the address autocomplete API fails for some countries.
            Spent 15 minutes investigating the autocomplete issue instead.

Areas Covered

List what you actually tested, with brief results. This section answers: "If someone else needs to continue this exploration, where did I leave off?"

Be specific:

  • Bad: "Tested file upload"
  • Good: "Uploaded .svg (accepted, renders), .webp (accepted, no thumbnail), 0-byte .txt (accepted, download fails), .exe (rejected with error message)"

Bugs Found

List each bug with a one-line summary and severity. File full bug reports separately (in your bug tracker), and reference the bug IDs here.

BUGS FOUND:
- BUG-1234: 0-byte file upload succeeds but download crashes (severity: high)
- BUG-1235: .webp thumbnail missing silently (severity: medium)

Questions and Risks

This is often the most valuable section. Questions that arise during exploration frequently point to specification gaps, missing test coverage, or architectural risks.

Questions should be actionable:

  • Bad: "Is upload safe?" (too vague)
  • Good: "Is there server-side virus scanning on uploaded files? The /upload endpoint accepts any file content without apparent validation."

Follow-Up Charters

Based on what you learned, propose charters for future sessions. This creates a continuous exploration pipeline.


Note-Taking Strategies During Sessions

Taking notes while actively testing is the hardest part of session-based test management (SBTM). Here are practical approaches:

The Timestamp Approach

Write brief notes with timestamps every 2-3 minutes:

14:00 - Starting. Environment verified. Opening upload page.
14:03 - Uploaded test.svg (200KB). Accepted. Preview renders correctly.
14:05 - Uploaded test.webp (150KB). Accepted. Preview area shows blank.
14:07 - Checking network tab. No errors on upload. Thumbnail endpoint returns 404.
14:10 - Filing bug for .webp thumbnail issue.
14:15 - Trying 0-byte file. Created empty.txt (0 bytes).
14:17 - Upload succeeds (unexpected). Download link appears.
14:18 - Clicking download link. Server returns 500. Filing bug.

The Observation/Question/Bug (OQB) Approach

Categorize notes as you write them:

[O] .svg upload works correctly with preview
[O] .webp upload accepted but thumbnail area is blank
[Q] Is .webp thumbnail generation a separate service?
[B] Download endpoint returns 500 for 0-byte files
[O] Error message for .exe rejection is user-friendly
[Q] What file types are on the allowlist?
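One advantage of the prefix convention is that raw OQB notes can be sorted mechanically after the session. A small sketch, assuming notes follow the `[O]`/`[Q]`/`[B]` format shown above:

```python
# Sort raw OQB session notes into observations, questions, and bugs.
def parse_oqb(notes: str) -> dict:
    buckets = {"O": [], "Q": [], "B": []}
    for line in notes.splitlines():
        line = line.strip()
        # Lines look like "[O] some observation"; skip anything else.
        if len(line) >= 3 and line[0] == "[" and line[2] == "]" and line[1] in buckets:
            buckets[line[1]].append(line[3:].strip())
    return buckets

notes = """\
[O] .svg upload works correctly with preview
[Q] Is .webp thumbnail generation a separate service?
[B] Download endpoint returns 500 for 0-byte files
"""
result = parse_oqb(notes)
# result["B"] -> ["Download endpoint returns 500 for 0-byte files"]
```

The `[Q]` bucket feeds the Questions/Risks section of the report directly, and the `[B]` bucket becomes the checklist of bugs still to be filed.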

Tools for Note-Taking

Tool                      Pros                                    Cons
Plain text file           Fast, no setup, version-controllable    No screenshots inline
OneNote / Notion          Rich formatting, screenshots, sharing   Requires switching windows
Screen recording + notes  Complete record of actions              Large files, time to review
Rapid Reporter (free)     Designed for SBTM, keyboard-driven      Learning curve
TestBuddy                 Built for exploratory sessions          Paid tool

Aggregating Session Reports

Individual session reports become powerful when aggregated across a release cycle.

Coverage Matrix

Track which areas have been explored and which have not:

Feature Area    Sessions  Bugs Found  Risk Level  More Testing?
File upload     3         5           High        Yes — bulk upload untested
User profiles   2         1           Low         No — good coverage
Checkout flow   1         3           High        Yes — international payments untested
Admin panel     0         0           Unknown     Yes — not explored at all
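If session reports are stored in a structured form, building this matrix is a few lines of code. A sketch with hypothetical per-session records of the shape (feature area, bugs found):

```python
from collections import defaultdict

# Hypothetical per-session records: (feature_area, bugs_found_in_session)
sessions = [
    ("File upload", 2), ("File upload", 2), ("File upload", 1),
    ("User profiles", 1), ("User profiles", 0),
    ("Checkout flow", 3),
]

def coverage_matrix(sessions):
    """Aggregate session count and bug count per feature area."""
    counts = defaultdict(lambda: {"sessions": 0, "bugs": 0})
    for area, bugs in sessions:
        counts[area]["sessions"] += 1
        counts[area]["bugs"] += bugs
    return dict(counts)

matrix = coverage_matrix(sessions)
# matrix["File upload"] -> {"sessions": 3, "bugs": 5}
```

Areas missing from the matrix entirely (like the admin panel above) are often the most important finding: zero sessions means unknown risk, not low risk.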

Metrics Over Time

Track across sprints:

  • Session count per sprint: is exploratory testing happening consistently?
  • Bugs per session: are testers finding issues (good) or exploring already-stable areas (reallocate)?
  • Questions generated: are specification gaps being identified?
  • Follow-up charter completion rate: are recommended follow-ups actually executed?
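The bugs-per-session metric in particular is easy to track sprint over sprint. A minimal sketch with made-up numbers, purely to show the calculation:

```python
# Illustrative data: bugs found in each session, grouped by sprint.
sprint_sessions = {
    "Sprint 41": [5, 3, 4],
    "Sprint 42": [1, 0, 2],
    "Sprint 43": [0, 1],
}

# Average bugs found per session, per sprint.
bugs_per_session = {
    sprint: round(sum(bugs) / len(bugs), 2)
    for sprint, bugs in sprint_sessions.items()
}
# A falling trend suggests the explored areas are stabilising —
# consider pointing new charters at riskier or untested features.
```

As the section notes, a low number is not automatically bad: it may mean the product is stable, or it may mean charters are aimed at areas that no longer need exploration.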

Integrating Exploratory Findings into the Test Process

Exploratory testing does not exist in isolation. Its findings should feed into other testing activities:

From Exploration to Automation

When an exploratory session discovers a bug:

  1. File the bug report
  2. After the fix, write an automated regression test for that specific scenario
  3. The automated test ensures the bug never returns
  4. The exploratory session moves on to discover new issues

This creates a virtuous cycle: exploration finds new bugs, automation prevents recurrence, exploration continues to find new things.
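Step 2 of the cycle, writing a regression test that pins the fixed scenario, might look like the following. `validate_upload` is a hypothetical stand-in for the real upload handler; only the rejection rule for the 0-byte bug matters here:

```python
# Post-fix regression test for the 0-byte upload bug found in the example session.
# validate_upload is a hypothetical stand-in for the real upload handler.
def validate_upload(filename: str, content: bytes) -> dict:
    if len(content) == 0:
        # The fix: reject empty files at upload time,
        # so the download endpoint can never receive one.
        raise ValueError("empty files are not accepted")
    return {"filename": filename, "size": len(content)}

def test_zero_byte_upload_is_rejected():
    try:
        validate_upload("empty.txt", b"")
    except ValueError:
        return True  # fix holds
    raise AssertionError("0-byte upload was accepted again (regression)")

assert test_zero_byte_upload_is_rejected()
```

In a real suite this would live in the regression test framework the team already uses; the point is that the test encodes the exact scenario the exploratory session uncovered, so the bug cannot silently return.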

From Exploration to Test Cases

When exploration reveals an important scenario that was not in the scripted test suite:

  1. Document the scenario as a formal test case
  2. Add it to the regression suite
  3. Update the charter list to reflect that the area now has scripted coverage

From Questions to Requirements

When exploration raises questions about expected behavior:

  1. Document the question in the session report
  2. Bring it to the product owner or developer
  3. The answer becomes a documented requirement
  4. Write a test case for the clarified requirement

Why Exploratory Testing Survives Automation

Automation verifies what you already know should work. Exploration discovers what nobody thought to check. The two are complementary.

In an AI-augmented world, exploratory testing becomes more important, not less — someone must evaluate whether the AI-generated tests actually cover meaningful scenarios. The AI can generate 1,000 test cases, but a skilled exploratory tester can determine in one session that the AI missed the entire "what happens when the user does something unexpected" category.


Practical Exercise

  1. Choose an application you use daily
  2. Write three charters for different features
  3. Execute a 30-minute session on the first charter
  4. Write a session report using the template above
  5. After completing the report, identify two questions and two follow-up charters
  6. Share the report with a colleague and conduct a 5-minute debrief

Key Takeaways

  • Session reports make exploratory testing visible, accountable, and reproducible
  • Every report needs: charter, areas covered, bugs found, questions, and follow-up charters
  • Take notes during the session, not after — real-time notes capture observations that memory loses
  • Aggregate reports across sessions to track coverage and identify untested areas
  • Feed exploration findings into automation, test cases, and requirements
  • Exploratory testing and automation are complementary — never competing