Technical Discussions
The QA Voice in Architecture, Design, and Code
QA engineers who only speak up when bugs are found are leaving most of their value on the table. The earlier you participate in technical conversations -- architecture reviews, design discussions, code reviews -- the more bugs you prevent and the more credibility you build. This section covers how to engage in technical discussions as a QA engineer: what to say, when to say it, and how to be heard.
Participating in Architecture Reviews
Architecture reviews are where the biggest quality decisions happen. A poorly chosen architecture can make entire categories of testing impossible or prohibitively expensive. QA engineers belong in these conversations.
What QA Brings to Architecture Reviews
Developers think about how to build it. Product thinks about what to build. QA thinks about how it can break. This perspective is uniquely valuable in architecture reviews because it surfaces risks that neither developers nor product managers are trained to see.
Questions QA Should Ask in Architecture Reviews
| Question Category | Example Questions |
|---|---|
| Testability | "How will we test this in isolation?" "Can we mock this external dependency?" |
| Observability | "How will we know if this is working correctly in production?" "What logging and monitoring is planned?" |
| Failure modes | "What happens when this service is down?" "How does the system degrade under load?" |
| Data integrity | "What happens if the message queue loses a message?" "Is this operation idempotent?" |
| State management | "How do we handle partial failures in this multi-step process?" "Can we roll back?" |
| Security boundaries | "Where are the trust boundaries?" "What happens if this input is malicious?" |
| Performance | "What are the expected latency requirements?" "How does this scale with 10x the current load?" |
How to Contribute Without Overstepping
- Ask questions, do not make demands. "Have we considered what happens when..." is more effective than "You need to add a fallback mechanism."
- Ground your input in testing experience. "In the last three projects, we struggled to test microservice interactions because there was no service virtualization layer. Should we plan for that here?"
- Offer to own the testability assessment. "I would like to take an action item to write up a testability analysis of this architecture. I will have it ready for the next review."
- Know your boundaries. Architecture decisions involve trade-offs. Your job is to surface the testing and quality implications, not to dictate the architecture.
Asking the Right Questions
The most valuable QA contribution to any technical discussion is asking questions that nobody else thought to ask. Two question patterns are particularly powerful.
"What Happens When..."
This question pattern explores failure modes, edge cases, and unexpected states:
- "What happens when the database connection pool is exhausted?"
- "What happens when two users edit the same record simultaneously?"
- "What happens when the third-party API changes its response format?"
- "What happens when the user's session expires during a multi-page form submission?"
- "What happens when the file upload is interrupted at 99%?"
Each of these questions forces the team to think about scenarios they may not have considered. Many of the most severe production incidents come from "what happens when" scenarios that nobody asked about during design.
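The simultaneous-edit question above has a classic answer worth knowing when you ask it: optimistic locking, where a write only succeeds if the record's version has not changed since it was read. A minimal sketch (using Python's built-in `sqlite3`; the `records` table and column names are illustrative):

```python
# "What happens when two users edit the same record simultaneously?"
# Optimistic-locking sketch: the UPDATE only wins if the version is unchanged.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, body TEXT, version INTEGER)")
conn.execute("INSERT INTO records VALUES (1, 'original', 1)")

def save(record_id: int, new_body: str, expected_version: int) -> bool:
    """Return True if the write won; False if another write got there first."""
    cur = conn.execute(
        "UPDATE records SET body = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_body, record_id, expected_version),
    )
    return cur.rowcount == 1  # 0 rows updated means the version moved on

# Both users loaded version 1; only the first write succeeds.
first = save(1, "edit by user A", 1)   # True
second = save(1, "edit by user B", 1)  # False -> surface a conflict to user B
```

Without some mechanism like this, the answer to the question is "last write silently wins," which is exactly the kind of design gap the question is meant to expose.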
"How Do We Test..."
This question pattern ensures testability is considered from the start:
- "How do we test the email notification flow without sending real emails?"
- "How do we test the payment integration without charging real cards?"
- "How do we test the recommendation engine with deterministic data?"
- "How do we test the migration script without risking production data?"
- "How do we test the cron job that runs once a month?"
If the answer to "How do we test this?" is "We can't" or "We'll test it manually in production," that is a design problem that should be addressed before development begins.
Pushing Back on Untestable Designs
Sometimes a proposed design makes testing impractical. Pushing back is necessary but must be done diplomatically.
Signs of an Untestable Design
- No way to inject test data or mock dependencies
- Side effects that cannot be observed or verified
- Tight coupling between components that prevents isolated testing
- Race conditions baked into the architecture
- No clear boundaries between units of functionality
- Configuration that differs so significantly between environments that test results are meaningless
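The first three signs usually show up together, and the fix is the same: give the design a seam where tests can substitute a dependency. A minimal before/after sketch (in Python; `RateClient` stands in for any hard-coded external call and is hypothetical):

```python
# Untestable vs. testable versions of the same logic. RateClient is a
# hypothetical HTTP client standing in for any external dependency.

class PriceCalculatorTightlyCoupled:
    def total(self, amount: float) -> float:
        rate = RateClient().fetch_current_rate()  # hard-coded network call:
        return amount * rate                      # no way to test in isolation

class PriceCalculator:
    def __init__(self, rate_source):
        self.rate_source = rate_source  # seam: tests pass a stub instead

    def total(self, amount: float) -> float:
        return amount * self.rate_source()

# In a test, no network is needed -- the rate is a plain function:
calc = PriceCalculator(rate_source=lambda: 1.25)
assert calc.total(100) == 125.0
```

Raising this early is cheap; retrofitting seams into a tightly coupled codebase after the fact is the expensive version of the same conversation.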
The Diplomatic Pushback Pattern
Step 1: Acknowledge the design's strengths.
"The event-driven approach is a great fit for this use case -- it gives us the decoupling we need."
Step 2: Raise the testing concern as a question.
"One thing I want to make sure we plan for: how will we verify that events are processed in the correct order? In async systems, I have seen ordering issues that are nearly impossible to reproduce in tests."
Step 3: Propose a concrete alternative or addition.
"Could we add a correlation ID to each event and build a test harness that traces the full event chain? That would let us write deterministic integration tests."
Step 4: Quantify the cost of not addressing it.
"Without this, we would be relying on manual testing with timing-dependent reproduction steps. Based on similar features, that typically means 2-3 production incidents before we identify and fix the edge cases."
What Not to Do
- Do not say "this is untestable" without offering an alternative
- Do not block the discussion by insisting on perfection -- suggest improvements that can be phased in
- Do not frame testability as QA's problem -- frame it as a product quality problem
Code Review from a QA Perspective
Code review is one of the most effective shift-left activities QA can participate in. You bring a different perspective than the developer reviewer.
What QA Looks for in Code Reviews
| Focus Area | What to Check | Example Comment |
|---|---|---|
| Input validation | Are inputs validated before processing? | "This endpoint accepts user input but doesn't validate the quantity field. Negative values would cause issues downstream." |
| Error handling | Are errors caught, logged, and handled gracefully? | "If fetchUser() throws, the catch block logs but doesn't return an error response to the client." |
| Edge cases | Are boundary conditions handled? | "What happens if items is an empty array here? The .reduce() call would throw." |
| Test coverage | Are the new tests sufficient? | "The tests cover the happy path. Should we add a test for the case where the API returns a 429?" |
| Logging and observability | Can we troubleshoot issues in production? | "This function handles payment processing but has no logging. If something goes wrong, we won't have a trail." |
| Security | Are there obvious security issues? | "This query interpolates user input directly. Should we use parameterized queries?" |
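The parameterized-query comment in the Security row is easiest to justify with a demonstration. A self-contained sketch using Python's built-in `sqlite3` (the `users` table is illustrative), showing the interpolated query matching every row while the parameterized one treats the same input as a literal:

```python
# Why "use parameterized queries" is a [bug], not a [nit]: the same lookup
# written unsafely (string interpolation) and safely (placeholders).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "x' OR '1'='1"

# Unsafe: user input becomes part of the SQL text, so the OR clause runs
# and the query matches every row.
unsafe = conn.execute(f"SELECT name FROM users WHERE name = '{malicious}'").fetchall()

# Safe: the driver passes the input as data, never as SQL.
safe = conn.execute("SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()

print(unsafe)  # [('alice',)] -- the injection matched every row
print(safe)    # []           -- the input was treated as a literal string
```

A reproducible demonstration like this turns a debatable review comment into an unarguable one.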
How to Comment Effectively
- Be specific. Point to the exact line and explain the concern.
- Suggest, do not demand. "Consider adding validation for..." rather than "You must add validation."
- Explain why. "This could cause a NullPointerException in production when the user has no address on file" is more persuasive than "Add a null check."
- Distinguish severity. Use prefixes like `[nit]` for minor style issues, `[question]` for things you are unsure about, and `[bug]` for things that will cause problems.
```
[question] Line 42: If `user.subscription` is null (free-tier users),
this will throw. Should we default to the free plan here?

[nit] Line 67: This variable name `d` could be more descriptive --
maybe `discountPercentage`?

[bug] Line 89: The SQL query concatenates user input directly.
This is vulnerable to SQL injection. Should use parameterized queries.
```
Building Credibility with Developers
Credibility is not given; it is earned through consistent demonstration of technical competence and professional judgment.
How to Build Credibility
- Learn the tech stack. Read the code. Understand the architecture. Know what frameworks the team uses and why.
- Write high-quality automation. If your test code is clean, well-structured, and maintainable, developers will respect your technical judgment.
- Be right more often than wrong. When you flag something in a code review or raise a concern in an architecture meeting, be prepared to back it up. False alarms erode credibility.
- Acknowledge when you are wrong. "I investigated further and the behavior is actually correct -- the spec was ambiguous. I will update the test case" builds more trust than quietly closing your bug report.
- Contribute beyond testing. Fix a small bug. Improve a CI pipeline. Write a utility function that helps the team. These actions show you are a software engineer, not just a tester.
- Stay current. Read the same engineering blogs the developers read. Attend the same tech talks. Speak the same language.
How Credibility Compounds
```
Small win → Developer starts reading your code review comments
        ↓
Useful comment → Developer starts inviting you to design discussions
        ↓
Insightful question → Architect starts seeking your input on technical decisions
        ↓
Prevented a production issue → Team sees QA as essential, not optional
```
Common QA-Developer Conflicts and How to Resolve Them
Conflict 1: "It's not a bug, it's a feature"
Root cause: Ambiguous requirements. Neither the developer nor the QA engineer knows the intended behavior.
Resolution: Go to the product owner together. "We have different interpretations of this story. Can you clarify the expected behavior for this scenario?" Document the answer in the acceptance criteria so it does not recur.
Conflict 2: "It works on my machine"
Root cause: Environment differences. The developer's local setup differs from the test environment.
Resolution: Agree on a reference environment (usually staging) and test there. If the bug is environment-specific, document the environment configuration that triggers it. Push for containerized environments that eliminate "works on my machine" permanently.
Conflict 3: "This is low priority, we'll fix it later"
Root cause: Different risk assessments. QA sees user impact; the developer sees implementation complexity.
Resolution: Use data. "This affects the checkout flow, which 40% of our users hit daily. Even if only 2% trigger this edge case, that is 800 users per day." Let the product owner make the priority call with full information.
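The arithmetic behind that kind of argument is worth making explicit, because it is what persuades. A quick sketch (the 100,000 daily-user figure is a hypothetical assumption consistent with the numbers quoted above):

```python
# Back-of-envelope impact estimate behind an "800 users per day" claim.
# daily_users is a hypothetical figure implied by the quoted percentages.
daily_users = 100_000
checkout_rate = 0.40   # share of users who hit the checkout flow daily
edge_case_rate = 0.02  # share of those who trigger the edge case

affected_per_day = daily_users * checkout_rate * edge_case_rate
print(int(affected_per_day))  # 800
```

Even a rough estimate like this moves the discussion from "low priority, probably" to a concrete number the product owner can weigh.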
Conflict 4: "QA is the bottleneck"
Root cause: Testing is happening too late in the sprint, or stories are not testable when they arrive.
Resolution: Shift left. "Let's review stories for testability before sprint planning so I can start writing test cases while you write code. I'll review your PRs as they come in instead of waiting for the feature to be 'done.'"
Conflict 5: "Why did QA miss this?"
Root cause: A bug escaped to production and the team is looking for someone to blame.
Resolution: "Let's look at why the entire team's quality process missed this, not just why testing missed it. Was this scenario in the acceptance criteria? Did we have a test for it? Was the test environment representative of production? What can we change systematically so no one person is the last line of defense?"
Hands-On Exercise
- Attend one architecture review or design discussion this sprint. Prepare three "What happens when..." questions in advance.
- Review one PR this week using the QA code review checklist above. Use the comment prefix convention.
- Identify a recent QA-developer conflict on your team and determine which of the five patterns it matches. Propose a resolution.
- Ask a developer: "What technical discussions would you find it most useful to have QA participate in?" Act on their answer.
- Write a testability assessment for a feature currently in development. Share it with the team and discuss.