SBTM Methodology
Exploratory testing is not "clicking around." It is simultaneous test design, execution, and learning, guided by a charter and bounded by time. Session-Based Test Management (SBTM), formalized by Jonathan Bach and James Bach, brings structure and accountability to exploratory testing without killing its creative power.
What Is Exploratory Testing?
Exploratory testing is a style of testing that emphasizes the personal freedom and responsibility of the individual tester. The tester simultaneously designs tests, executes them, and learns about the system under test. Unlike scripted testing, where you follow predefined steps, exploratory testing adapts in real time based on what you discover.
When Exploratory Testing Excels
- New features with unclear requirements — you are learning the feature as you test it
- Areas with high complexity — too many paths for scripted tests to cover
- After automation catches regressions — exploration finds what scripts miss
- Usability evaluation — scripts cannot assess whether an interface "feels right"
- Time-pressured releases — exploration finds critical bugs faster than writing scripted tests first
When Scripted Tests Are Better
- Regression testing across stable features
- Compliance testing with auditable evidence
- Data-driven testing with hundreds of input combinations
The SBTM Framework
SBTM provides three core elements: charters, sessions, and debriefs.
Charters
A charter is a mission statement for your testing session. It answers: "What am I testing, and why?"
Good charters are specific and bounded:
- Explore the checkout flow with international addresses to discover currency conversion and tax calculation issues.
- Explore the file upload feature with edge-case formats (.svg, .webp, 0-byte files) to discover handling gaps and silent failures.
- Explore the admin user management panel under concurrent usage to discover race conditions in role assignment.
Bad charters are vague:
- "Test the login page" (too broad — what are you looking for?)
- "Find bugs" (not a charter — it is a wish)
- "Test everything in the settings module" (no focus, no time boundary)
Writing Effective Charters
Use the formula: Explore [target] with [resources/approach] to discover [information].
| Component | Description | Example |
|---|---|---|
| Target | The feature or area to test | Checkout flow |
| Resources/Approach | Specific data, tools, or techniques | International addresses, edge-case payment methods |
| Information | What kind of issues you are looking for | Currency conversion errors, tax miscalculations |
Prepare 2-3 charters before a session. If you finish one early, move to the next. If you discover something unexpected, pivot — but document the pivot.
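The target/approach/information formula lends itself to a tiny template. This is a sketch of one way to keep charters structured, not part of SBTM itself; the `Charter` class and field names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Charter:
    """One exploratory-testing charter: target + approach + information sought."""
    target: str       # the feature or area to test
    approach: str     # specific data, tools, or techniques
    information: str  # what kind of issues you are looking for

    def __str__(self) -> str:
        # Renders the charter formula as a single sentence.
        return (f"Explore {self.target} with {self.approach} "
                f"to discover {self.information}.")

# First example charter from the list above:
checkout = Charter(
    target="the checkout flow",
    approach="international addresses",
    information="currency conversion and tax calculation issues",
)
print(checkout)  # prints the full charter sentence on one line
```

Forcing yourself to fill in all three fields is a cheap way to catch "Find bugs"-style non-charters before a session starts.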
Sessions
A session is a focused, uninterrupted block of testing, typically 60-90 minutes. The time constraint is essential: it prevents exploration from becoming aimless wandering.
Session Structure
| Phase | Duration | Activity |
|---|---|---|
| Setup | 5-10 min | Review charter, prepare environment, load test data |
| Exploration | 40-70 min | Active testing, note-taking, bug filing |
| Wrap-up | 5-10 min | Organize notes, draft bug reports, prepare for debrief |
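The phase table above can be turned into concrete clock times when planning a session. A minimal sketch, assuming a 90-minute session with 10-minute setup and wrap-up (the function name and defaults are illustrative, not an SBTM standard):

```python
from datetime import datetime, timedelta

def session_schedule(start, total_minutes=90, setup=10, wrapup=10):
    """Split a session into setup / exploration / wrap-up phases."""
    exploration = total_minutes - setup - wrapup  # whatever remains is active testing
    t0 = start
    t1 = t0 + timedelta(minutes=setup)
    t2 = t1 + timedelta(minutes=exploration)
    t3 = t2 + timedelta(minutes=wrapup)
    return {
        "setup": (t0, t1),
        "exploration": (t1, t2),
        "wrap-up": (t2, t3),
    }

phases = session_schedule(datetime(2024, 5, 1, 9, 0))
for name, (begin, end) in phases.items():
    print(f"{name:12s} {begin:%H:%M} - {end:%H:%M}")
```

Writing the boundaries down before you start makes the time box explicit: when the exploration window closes, you move to wrap-up regardless of how interesting the current thread is.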
Rules During a Session
- No interruptions. Treat it like a focused work block. Close Slack, mute notifications.
- Stay on charter. If you discover something outside your charter, note it as a new charter for a future session.
- Take notes continuously. If you stop taking notes, you stop doing exploratory testing and start doing random testing.
- Time-box strictly. When time is up, stop. You can always schedule another session.
Session Metrics
Track these metrics across sessions to understand your testing effort:
| Metric | Definition |
|---|---|
| Session count | Total sessions conducted for this release |
| Bug count | Issues found per session |
| Charter coverage | Percentage of planned charters completed |
| Test vs investigation ratio | Time spent testing vs investigating bugs |
| Session notes density | Volume of observations per session (sparse notes = unfocused session) |
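These metrics are simple aggregations over per-session records. A minimal sketch of computing them, where the record fields (`bugs`, `test_min`, `investigate_min`) and the sample data are invented for illustration:

```python
# Two example session records (contents are made up for illustration).
sessions = [
    {"charter": "checkout intl addresses", "bugs": 3, "test_min": 50, "investigate_min": 20},
    {"charter": "file upload edge cases",  "bugs": 1, "test_min": 65, "investigate_min": 10},
]
planned_charters = 3  # how many charters were prepared for this release

session_count = len(sessions)
bug_count = sum(s["bugs"] for s in sessions)
charter_coverage = session_count / planned_charters * 100
test_total = sum(s["test_min"] for s in sessions)
investigate_total = sum(s["investigate_min"] for s in sessions)

print(f"Sessions: {session_count}, bugs: {bug_count}")
print(f"Charter coverage: {charter_coverage:.0f}%")
print(f"Test vs investigation: {test_total}:{investigate_total} minutes")
```

The test-vs-investigation ratio is worth watching: a session that spends most of its time investigating one bug produced evidence, but little coverage of its charter.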
Debriefs
After each session, conduct a brief (10-15 minute) debrief with a peer, test lead, or the entire team. Debriefs serve three purposes:
- Share knowledge — what did you learn about the system?
- Validate findings — are the bugs real? Are the risks valid?
- Adjust strategy — should the next session target a different area?
Debrief Questions
- What was your charter?
- What did you actually test?
- What did you find?
- What are you concerned about that you did not have time to explore?
- What should the next session focus on?
Heuristics for Exploratory Testing
Heuristics are mental shortcuts that guide your exploration. Some widely used ones:
SFDPOT (San Francisco Depot)
Created by James Bach:
| Mnemonic | Focus Area | Example Questions |
|---|---|---|
| Structure | What is the product made of? | Modules, pages, components, APIs |
| Function | What does it do? | Features, operations, error handling |
| Data | What data does it process? | Inputs, outputs, stored data, data flow |
| Platform | What does it depend on? | OS, browser, network, hardware |
| Operations | How is it used? | Workflows, user personas, usage patterns |
| Time | How does behavior change over time? | Timeouts, scheduling, concurrency, state transitions |
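One practical use of SFDPOT is generating session-prep prompts for a specific feature. A sketch, with the question texts paraphrased from the table above and the function name being an assumption:

```python
# SFDPOT areas mapped to their guiding questions (paraphrased from the table).
SFDPOT = {
    "Structure": "What is the product made of?",
    "Function": "What does it do?",
    "Data": "What data does it process?",
    "Platform": "What does it depend on?",
    "Operations": "How is it used?",
    "Time": "How does behavior change over time?",
}

def prep_prompts(feature):
    """Return one prompt per SFDPOT area, applied to the given feature."""
    return [f"{area}: {question} (applied to {feature})"
            for area, question in SFDPOT.items()]

for line in prep_prompts("the file upload feature"):
    print(line)
```

Running through all six areas before a session often surfaces a charter angle (such as Time: concurrency, timeouts) that the obvious functional view misses.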
FEW HICCUPPS
A set of consistency oracles (principles for deciding whether observed behavior is a problem), developed by Michael Bolton as an extension of James Bach's HICCUPPS:
- Familiar problems — does it have bugs you have seen in similar products?
- Explainability — can you explain the behavior to a user?
- World — does it match how the real world works?
- History — has this area had bugs before?
- Image — does it match the company's brand and quality standards?
- Comparable products — how do competitors handle this?
- Claims — does it match what the documentation says?
- User expectations — would a typical user be confused?
- Product — is it consistent with other parts of the same product?
- Purpose — does it fulfill its intended purpose?
- Standards — does it comply with relevant standards and regulations?
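Naming the violated oracle in a bug report explains *why* the behavior looks wrong, not just what happened. A sketch of recording that justification, where the helper function and the example observation are invented for illustration:

```python
# Oracle names follow the FEW HICCUPPS list above.
ORACLES = [
    "Familiar problems", "Explainability", "World", "History", "Image",
    "Comparable products", "Claims", "User expectations", "Product",
    "Purpose", "Standards",
]

def justify(observation, violated):
    """Attach the violated consistency oracles to an observation."""
    unknown = set(violated) - set(ORACLES)
    if unknown:
        raise ValueError(f"not FEW HICCUPPS oracles: {sorted(unknown)}")
    return f"{observation} (inconsistent with: {', '.join(violated)})"

print(justify(
    "Tax shown in USD for a EUR order",  # hypothetical finding
    ["Claims", "User expectations"],
))
```

A bug framed as "inconsistent with Claims" (the docs say otherwise) is much harder to dismiss than "this looks wrong to me."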
SBTM vs Ad-Hoc Testing
| Aspect | SBTM | Ad-Hoc Testing |
|---|---|---|
| Structure | Charter + time box + notes | None |
| Reproducibility | Session notes allow reconstruction | "I found this bug somehow" |
| Accountability | Metrics show coverage and effort | No visibility into what was tested |
| Skill development | Debriefs improve technique over time | No feedback loop |
| Management reporting | Session counts and bug rates | "We tested for two days" |
SBTM does not constrain creativity — it channels it. The charter gives direction without dictating every step.
Practical Exercise
Plan and execute a 30-minute exploratory session:
- Choose a web application you use regularly (your email client, a social media site, a shopping site)
- Write a charter: "Explore [specific feature] with [specific approach] to discover [specific type of issue]"
- Set a timer for 30 minutes
- Take notes every 2-3 minutes: what you tried, what you observed, what questions arose
- When the timer ends, stop and review your notes
- Write a brief summary: bugs found, areas of concern, recommended next charters
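For the note-taking step, a timestamped log is enough; the point is a continuous, time-ordered record you can review at wrap-up. A minimal sketch (the `SessionNotes` class is an illustration, and the injectable clock exists only to make the sketch testable):

```python
from datetime import datetime

class SessionNotes:
    """Timestamped note log for an exploratory session."""
    def __init__(self, clock=datetime.now):
        self._clock = clock  # injectable clock, so tests can use fixed times
        self.entries = []

    def note(self, text):
        """Record one observation with the current time."""
        self.entries.append((self._clock(), text))

    def summary(self):
        """Render the log in time order for the wrap-up review."""
        return "\n".join(f"[{t:%H:%M}] {text}" for t, text in self.entries)

notes = SessionNotes()
notes.note("Tried 0-byte upload; spinner never resolves")  # example observations
notes.note("Question: is .webp on the supported-formats list?")
print(notes.summary())
```

Plain text in any editor works just as well; what matters is that every entry carries a time, so the debrief can reconstruct what you did and when.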
Key Takeaways
- Exploratory testing is disciplined investigation, not random clicking
- SBTM provides structure through charters, time-boxed sessions, and debriefs
- Good charters are specific: target + approach + information sought
- Session notes are the evidence that distinguishes exploration from ad-hoc testing
- Heuristics like SFDPOT guide exploration without constraining it
- Debriefs create a feedback loop that improves testing over time