The First Two Weeks
Your first two weeks at a new company determine how people perceive you for months. Most QA engineers spend this time passively — reading documentation, attending meetings, waiting to be told what to do. Senior QA engineers use this window actively, building a mental model of the system, the team, and the quality culture before writing a single test.
System Reconnaissance
Before you can test anything effectively, you need to understand what you are testing. System reconnaissance is the deliberate process of mapping the technical landscape.
Codebase structure — Identify the main services, their languages and frameworks, how they communicate (REST, gRPC, message queues), and where the test code lives. AI coding assistants can accelerate this: feed the repository to an LLM and ask it to map the architecture, list API endpoints, and summarize the test infrastructure.
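Even without an AI assistant, a few lines of scripting can produce a first-pass map. A minimal sketch, assuming a local checkout and a handful of common language markers and test-directory names (extend both for your stack):

```python
from collections import Counter
from pathlib import Path

# Common build-file markers per language (a partial list; adjust for your stack).
LANGUAGE_MARKERS = {
    "package.json": "JavaScript/TypeScript",
    "pyproject.toml": "Python",
    "go.mod": "Go",
    "pom.xml": "Java (Maven)",
    "Cargo.toml": "Rust",
}

def survey(repo: Path) -> dict:
    """Return a rough map of the repo: detected languages and test directories."""
    languages = Counter()
    test_dirs = set()
    for path in repo.rglob("*"):
        # Skip vendored and generated trees that would skew the counts.
        if any(part in {".git", "node_modules", "venv", "target"} for part in path.parts):
            continue
        if path.name in LANGUAGE_MARKERS:
            languages[LANGUAGE_MARKERS[path.name]] += 1
        if path.is_dir() and path.name in {"tests", "test", "__tests__", "spec"}:
            test_dirs.add(str(path.relative_to(repo)))
    return {"languages": dict(languages), "test_dirs": sorted(test_dirs)}
```

The output is deliberately coarse: it tells you which languages are present and where test code lives, which is enough to direct your first week of deeper reading.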
CI/CD pipeline — Find the pipeline configurations. Understand what tests run on PR, on merge, and on deploy. Note the pipeline runtime — a 45-minute pipeline tells a different story than a 5-minute one.
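Pipeline definitions live in a handful of well-known locations. A sketch that checks the most common ones (a partial list; add patterns for whatever CI platform the team uses):

```python
from pathlib import Path

# Well-known CI/CD configuration paths, relative to the repo root.
CI_PATTERNS = [
    ".github/workflows/*.yml",
    ".github/workflows/*.yaml",
    ".gitlab-ci.yml",
    "Jenkinsfile",
    ".circleci/config.yml",
]

def find_ci_configs(repo: Path) -> list[str]:
    """List CI/CD pipeline definitions present in the repository."""
    found = []
    for pattern in CI_PATTERNS:
        found.extend(str(p.relative_to(repo)) for p in repo.glob(pattern))
    return sorted(found)
```

Once you have the files, read them with the questions above in mind: which jobs gate a PR, which run only on merge or deploy, and roughly how long the critical path takes.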
Flaky test culture — Check the test dashboard (if one exists). What is the flaky rate? Are flaky tests quarantined or ignored? How the team handles flakiness reveals more about quality culture than any onboarding document.
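If there is no dashboard, you can estimate the flaky rate yourself from recent pipeline runs. A common definition: a test is flaky if it both passed and failed across runs of the same code. A minimal sketch, assuming you can export each run's results as a name-to-pass/fail map:

```python
from collections import defaultdict

def flaky_rate(runs: list[dict[str, bool]]) -> float:
    """Fraction of tests that both passed and failed across the given runs.

    `runs` is a list of {test_name: passed} maps, e.g. the last N pipeline
    executions against the same commit or branch (hypothetical export format).
    """
    outcomes = defaultdict(set)
    for run in runs:
        for name, passed in run.items():
            outcomes[name].add(passed)
    if not outcomes:
        return 0.0
    # A test with two distinct outcomes (pass and fail) is counted as flaky.
    flaky = sum(1 for seen in outcomes.values() if len(seen) == 2)
    return flaky / len(outcomes)
```

Comparing this number against what the team believes it is often tells you more than the number itself.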
Bug and incident history — Read the last 20 bug reports and the last 5 incident post-mortems. These show you where the system actually breaks, not where people think it might.
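As you read the reports, tally where the breakage concentrates. A sketch, assuming each exported report carries a `component` field (a hypothetical schema; adapt the key to whatever your tracker exports):

```python
from collections import Counter

def bug_hotspots(reports: list[dict], top: int = 5) -> list[tuple[str, int]]:
    """Rank components by bug count to find where the system actually breaks.

    Each report is assumed to be a dict with a 'component' field; reports
    without one are bucketed under 'unknown'.
    """
    counts = Counter(r.get("component", "unknown") for r in reports)
    return counts.most_common(top)
```

Even on 20 reports, the distribution is rarely uniform; the top one or two components are usually where your first test-coverage questions should go.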
Social Architecture Mapping
Technical systems exist within social systems. Understanding who makes decisions, how decisions flow, and what the unwritten rules are is as important as understanding the code.
Decision makers — Who decides when to release? Who has veto power? Who is the de facto quality authority even if it is not in their job title?
Cultural signals — Does the team celebrate catching bugs before production or only react to production incidents? Are tests treated as first-class code or as an afterthought? Is "it works on my machine" an acceptable response to a bug report?
Unwritten rules — Every team has them. Which tests can be skipped in a rush? Which services are considered stable (and which are actually stable)? Who do you go to when something does not make sense?
The 30-Day Assessment Document
Pattern: Within your first 30 days, produce a written assessment of the quality system — what works, what does not, and what you recommend. This document builds credibility faster than months of quiet competence.
The document covers six areas:
- Current test coverage — What is tested, what is not, and where the gaps are riskiest
- Test infrastructure health — Pipeline speed, flaky rate, environment reliability
- Process observations — How bugs are reported, triaged, and fixed; how releases are gated
- Quick wins — Two or three improvements achievable in the next sprint
- Medium-term recommendations — Structural improvements for the next quarter
- Questions — Areas where you need more context before forming recommendations
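The six areas above can be laid out as a simple skeleton. A suggested template (section headings only; the structure is an interpretation of the list above, not a prescribed format):

```markdown
# Quality System Assessment — First 30 Days

## 1. Current Test Coverage
What is tested, what is not, and where the gaps are riskiest.

## 2. Test Infrastructure Health
Pipeline speed, flaky rate, environment reliability.

## 3. Process Observations
How bugs are reported, triaged, and fixed; how releases are gated.

## 4. Quick Wins (next sprint)
Two or three improvements achievable immediately.

## 5. Medium-Term Recommendations (next quarter)
Structural improvements.

## 6. Open Questions
Areas where I need more context before forming recommendations.
```

Keeping the quick wins and open questions sections honest is what makes the document credible: it shows judgment about scope, not just critique.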
Key Takeaways
- The first two weeks are for active reconnaissance, not passive observation
- Map both the technical system (codebase, pipeline, test infrastructure) and the social system (decision makers, culture, unwritten rules)
- Use AI tools to accelerate codebase understanding — feed repos to LLMs for architecture summaries
- Produce a 30-day quality assessment document to establish credibility early
- Read bug reports and incident post-mortems to learn where the system actually breaks