# Buzzword Decoder: What People Actually Mean
"Agentic Testing"
What they say: "We need agentic testing capabilities."
What they mean: Tests that can make decisions, not just follow scripts. The test system should be able to:
- Explore the application (not just verify known flows)
- Recover from unexpected states
- Generate new test cases based on what it observes
- Report findings in natural language
What you should say: "Agentic testing means the test system has a reasoning loop — it observes the application state, decides what to do next, acts, and evaluates the result. In our framework, Claude Code is the agent. It reads test definitions, decides how to interact with the app through the vibe-check skill, and reasons about whether the results match expectations."
"Self-Healing Tests"
What they say: "Do your tests self-heal?"
What they mean: When the UI changes, do tests automatically adapt instead of breaking?
Levels of self-healing:
| Level | Mechanism | Example |
|---|---|---|
| Basic | Retry with fallback selectors | ID → class → text → CSS path |
| Medium | Visual matching | Find element by appearance, not code |
| Advanced | AI reasoning | Agent understands page structure and finds alternative |
What you should say:
"Yes, through three tiers. First, when a selector fails, the agent discovers existing elements via find-all and matches by text content — this handles 80% of cases cheaply. Second, the agent analyzes a screenshot and page text for deeper understanding. Third, it falls back to MCP accessibility trees for semantic analysis. The key difference from traditional self-healing tools is that our agent reasons about context, not just applies rules."
"ReAct Pattern"
What they say: "Does your agent use the ReAct pattern?"
What they mean: The Reason + Act pattern where the agent:
- Observes the current state
- Thinks about what to do (visible reasoning)
- Acts on its decision
- Observes the result
- Repeats
In our framework:
```
Observe:  vibe-check text                  → reads page content
Think:    Agent reasons: "I see a login form. I need to enter credentials."
Act:      vibe-check type "#email" "user@test.com"
Observe:  vibe-check text ".error"         → checks for validation errors
Think:    Agent reasons: "No errors. Submit the form."
Act:      vibe-check click "#submit"
```
What you should say: "Claude Code naturally follows the ReAct pattern. Each vibe-check command is an action, and between commands, the agent reasons about the output. The skill provides the vocabulary (22 browser commands), and the agent provides the intelligence (deciding which command to run next based on observations)."
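The loop itself is small enough to sketch in shell. `vibe_check` below is a mock of the vibe-check CLI so the sketch runs anywhere, and the `case` statement stands in for the agent's reasoning between observations:

```shell
# Minimal ReAct-style loop: Observe -> Think -> Act, repeated until the
# goal state appears. vibe_check is a MOCK, not the real CLI.
step=0
vibe_check() {
  case "$1 $step" in
    "text 0") echo "Login form: email, password" ;;   # first observation
    "text 1") echo "Welcome, user" ;;                 # after submitting
    *) : ;;                                           # actions are no-ops here
  esac
}

action=""
while [ "$step" -lt 5 ]; do
  observation=$(vibe_check text)           # Observe
  case "$observation" in                   # Think (stand-in reasoning)
    *"Login form"*) action='click #submit' ;;
    *"Welcome"*)    action='done' ;;
  esac
  [ "$action" = "done" ] && break
  vibe_check $action                       # Act
  step=$((step + 1))
done
echo "finished after $step action(s): $action"
```

The real agent replaces the `case` statement with model reasoning; the observe/act plumbing stays the same.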
"Shift-Left Testing"
What they say: "How does AI help us shift left?"
What they mean: Moving testing earlier in the development process — catching bugs before they reach QA.
What you should say: "AI agents can generate browser tests from requirements or PR descriptions, running them against feature branches before the code is even reviewed. A developer pushes code, the CI pipeline spins up the app, and the agent runs exploratory tests against the new UI — all before a human QA engineer touches it. This catches layout bugs, broken flows, and regression issues at the PR stage instead of the QA stage."
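As a sketch, the pipeline described above might look like the following (GitHub Actions syntax; the job name, port, start command, and test-definition path are all illustrative, not a documented Vibium setup):

```yaml
# Hypothetical CI job: spin up the feature branch, then let the agent
# run exploratory browser tests before human review.
jobs:
  agent-ui-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Start the feature branch build
        run: npm ci && (npm start &) && sleep 10
      - name: Run exploratory agent tests
        run: claude -p "Use the vibe-check skill to test http://localhost:3000 against tests/login.md"
```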
"Test Observability"
What they say: "What's your observability strategy for tests?"
What they mean: Can you see what's happening inside your tests? Can you debug failures efficiently?
What you should say: "We capture five artifacts per test: command log (what the agent did), screenshots (what the page looked like), page text (what the content was), timing data (how long each step took), and URL history (where we navigated). On failure, we add the agent's reasoning — why it thought the test failed. This gives both human reviewers and AI agents everything they need to diagnose issues."
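A minimal sketch of per-step artifact capture, with `vibe_check` mocked so it runs anywhere; a real wrapper would save screenshots and URL history alongside these files:

```shell
# Per-step artifact capture sketch. vibe_check is a MOCK of the real CLI.
ARTIFACTS=$(mktemp -d)
vibe_check() { echo "ok: $*"; }

run_step() {
  start=$(date +%s)
  printf 'vibe-check %s\n' "$*" >> "$ARTIFACTS/commands.log"   # what the agent did
  vibe_check "$@" > "$ARTIFACTS/last-output.txt"               # what the page said
  printf '%s %ss\n' "$1" "$(( $(date +%s) - start ))" >> "$ARTIFACTS/timing.log"
}

run_step navigate "https://example.test/login"
run_step click "#submit"
wc -l < "$ARTIFACTS/commands.log"
```

On failure, the agent's reasoning is just one more text file dropped into the same directory.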
"Test Maintenance Debt"
What they say: "How do you manage test maintenance debt?"
What they mean: Over time, test suites accumulate broken, slow, and flaky tests that cost more to maintain than they provide in value.
What you should say:
"Agent-driven tests dramatically reduce maintenance debt because the agent adapts to UI changes. When a button ID changes from btn-submit to submit-button, traditional tests break — someone has to find the test, update the selector, verify it works, and merge the fix. Our agent finds the button by text content and keeps going. We still track healing events and periodically update selector hints, but it's a review task, not an emergency fix."
"Page Object Model"
What they say: "Do you use the Page Object pattern?"
What they mean: A design pattern where each page/component has a corresponding class that encapsulates element selectors and interactions.
In agent-driven testing: the Page Object Model is less relevant because the agent doesn't need a class abstraction — it interacts with the page directly through CLI commands. Instead, the concept maps to selector hints in test definition files.
What you should say: "We use a lighter version of the concept. Instead of page object classes, we have test definition files with selector hints per page. The agent reads these hints but can also discover elements independently. It's more like a 'page guide' than a 'page object' — the agent uses the hints when available and reasons about alternatives when they're stale."
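A hypothetical "page guide" might be nothing more than a small hints file per page; the keys and format below are illustrative, not a real schema:

```shell
# Write an illustrative hints file: per-page selectors the agent reads
# first but may override when they go stale.
HINTS=$(mktemp)
cat > "$HINTS" <<'EOF'
page: /login
email: "#email"
password: "#password"
submit: "#btn-submit"
EOF
grep -c ':' "$HINTS"
```

Because the hints are data rather than code, updating them after a healing event is an edit, not a refactor.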
"Context Window" / "Token Budget"
What they say: "How do you manage the context window?"
What they mean: AI models have a limited context size. How do you ensure the agent has enough room for reasoning?
What you should say: "This is why we chose CLI skills over MCP. Each MCP tool definition costs 200-500 tokens, loaded on every API call. With 15-25 browser tools, that's 3,000-12,500 tokens per turn just for schemas. Our CLI skill costs about 50 tokens per turn for the skill description, plus ~80 tokens per command. A 20-step test costs 3,200 tokens via skill vs 92,000 via MCP — leaving 98% of the context window for reasoning instead of 39%."
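The percentages can be checked with shell arithmetic under a few stated assumptions: ~150K tokens of usable context, ~4,600 tokens of MCP schema overhead resent each turn, and ~160 tokens of skill overhead per turn (the quoted figures are 50 plus ~80, so the extra ~30 per turn is an assumption about command output):

```shell
# Back-of-envelope check of the token totals above (all inputs assumed).
steps=20
context=150000
mcp_tokens=$(( steps * 4600 ))      # 92,000
skill_tokens=$(( steps * 160 ))     # 3,200
echo "MCP leaves $(( 100 - mcp_tokens * 100 / context ))% of context for reasoning"
echo "skill leaves $(( 100 - skill_tokens * 100 / context ))% of context for reasoning"
```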
"Deterministic vs Non-Deterministic Testing"
What they say: "How do you handle the non-deterministic nature of AI?"
What they mean: Traditional tests produce the same result every time (deterministic). AI agents might take different paths (non-deterministic).
What you should say: "We achieve 'practical determinism' — the agent follows the same path 95%+ of the time because the SKILL.md instructions and test definitions constrain its decisions. The remaining 5% is during error recovery, where non-determinism is actually a feature — the agent tries creative solutions a deterministic script can't.
We mitigate the risk through:
- Command logging — see exactly what the agent did
- Screenshot trail — visual evidence of each state
- CI gates — pass/fail is deterministic even if the path varies
- Reproducibility — we can replay the logged commands to reproduce any run"
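The reproducibility point can be sketched as a loop over the command log. `vibe_check` below is a mock (underscore-named so it works as a shell function); a real log would contain `vibe-check` invocations executed verbatim:

```shell
# Replay sketch: re-run every logged command in order. vibe_check is a
# MOCK standing in for the real CLI.
vibe_check() { echo "replayed: $*"; }

REPLAY_LOG=$(mktemp)
printf '%s\n' 'vibe_check open https://example.test' \
              'vibe_check click "#submit"' > "$REPLAY_LOG"

replayed=0
while IFS= read -r cmd; do
  eval "$cmd"                  # each logged line is a complete command
  replayed=$((replayed + 1))
done < "$REPLAY_LOG"
echo "replayed $replayed commands"
```

This is why the command log matters more than the agent transcript: the log alone is enough to reproduce a run.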
"AI-Native" / "Agent-Native"
What they say: "Is your tool AI-native?"
What they mean: Was it designed from the ground up for AI agents, or is it a traditional tool with AI bolted on?
What you should say: "Vibium is AI-native — designed from the ground up for agent integration. The key indicators: zero-configuration setup (no manual browser installation), CLI-first interface (agents use Bash, not APIs), stdout/stderr output (agents read text, not parse objects), auto-wait for element readiness (agents shouldn't need retry logic), and a SKILL.md that teaches agents the interface in their own language (markdown). Compare this to adding an MCP wrapper around Playwright — that's AI-compatible, not AI-native."
"Composability"
What they say: "How composable is your framework?"
What they mean: Can the pieces be used independently and combined in different ways?
What you should say: "Highly composable. Each vibe-check command is independent and can be combined with any other CLI tool:
```bash
# Combine with jq for JSON parsing
vibe-check eval 'JSON.stringify(data)' | jq '.users[].email'

# Combine with diff for comparison
vibe-check text > before.txt
# ... do something ...
vibe-check text > after.txt
diff before.txt after.txt

# Combine with grep for assertions
vibe-check text | grep -q 'Welcome' && echo PASS || echo FAIL
```
This is a Unix philosophy advantage — small tools that compose via pipes. MCP tools can't be piped."