QA Engineer Skills 2026 -- First 90 Days

First 90 Days

Why the First 90 Days Define Your Next 3 Years

The first 90 days in a new QA role are disproportionately important. In this window, you establish your reputation, build or destroy trust, demonstrate your value, and set the trajectory for your entire tenure. People form opinions about new hires in the first month, and those opinions become extremely difficult to change.

Most QA engineers start a new role and immediately try to fix everything they see wrong. This is the fastest path to failure. Before you can change anything, you need to understand the system -- the product, the people, the processes, and the history. The QA engineers who make the biggest long-term impact are the ones who spend the first 90 days listening, learning, and building relationships before they start proposing changes.

This file gives you a structured plan for each phase: days 1-30 (learn), days 31-60 (contribute), and days 61-90 (lead).


The First 30 Days: Learn Everything

Your only goal in the first month is to understand the current state. You are not here to fix things yet. You are here to absorb.

Week 1: Orientation and Relationships

Days 1-2: Environment setup

  • Get your development environment running (all repositories, test frameworks, CI access)
  • Request access to all tools: Jira, test management, CI/CD, monitoring dashboards, Slack channels
  • Run the existing test suite locally. Does it pass? How long does it take? What does the output look like?

Days 3-5: Meet people

  • Schedule 30-minute 1:1s with every person you will work with directly: QA team members, developers on your squad, product owner, engineering manager, and DevOps/SRE
  • Your agenda for each 1:1: introduce yourself briefly, ask about their role and current challenges, ask what they wish QA would do differently, and ask what the biggest quality risk is right now

Weeks 2-4: Deep Learning

Product knowledge:

  • Use the product as a real user. Go through every major flow. Take notes on what confuses you -- your fresh perspective is valuable.
  • Read the product documentation, knowledge base, and customer-facing help articles.
  • Review the last 30 days of production incidents. What broke? Why? What was the customer impact?
  • Review the last 30 days of bug reports. What patterns do you see?

Testing landscape:

  • Map the existing test coverage: what is automated, what is manual, what is not tested at all
  • Understand the test architecture: frameworks, patterns, data management, environment configuration
  • Read the existing test strategy document (if one exists)
  • Run the full test suite both in CI and locally. Note flaky tests, slow tests, and failing tests.
  • Understand the release process from code commit to production deployment
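
To make that flaky/slow/failing audit concrete, here is a minimal sketch that summarizes a CI test report. It assumes a standard JUnit-style XML layout (`<testsuite>`/`<testcase>` with a `time` attribute in seconds); your CI's actual report format may differ, so treat this as a starting point rather than a finished tool:

```python
import xml.etree.ElementTree as ET

def summarize_junit(xml_path, slow_threshold=5.0):
    """Flag failing and slow tests in a JUnit-style XML report.

    Assumes the common <testsuite>/<testcase> layout; adapt the
    element and attribute names to whatever your CI actually emits.
    """
    root = ET.parse(xml_path).getroot()
    failing, slow = [], []
    for case in root.iter("testcase"):
        name = f'{case.get("classname", "?")}.{case.get("name", "?")}'
        # A <failure> or <error> child marks a non-passing test case
        if case.find("failure") is not None or case.find("error") is not None:
            failing.append(name)
        # Anything over the threshold is worth a look during your audit
        if float(case.get("time", "0")) > slow_threshold:
            slow.append(name)
    return {"failing": failing, "slow": slow}
```

Run it against the report your CI archives for each build; diffing the `failing` list across several identical runs is a quick way to spot flakiness.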

People and process:

  • Attend all team ceremonies (standup, planning, review, retrospective) and observe before contributing
  • Learn the team's communication patterns: who talks to whom, who has influence, who has institutional knowledge
  • Identify the informal leaders -- the people whose opinions carry weight regardless of title

Questions to Ask in Your First Week

These are organized by audience. You do not need to ask all of them -- choose the ones most relevant to your role and level.

Questions for Your QA Manager

  1. "What does success look like for me in 30, 60, and 90 days?"
  2. "What is the biggest quality challenge the team is facing right now?"
  3. "How is QA performance measured here?"
  4. "What QA initiatives are currently in progress or planned?"
  5. "What has the team tried before that did not work?"
  6. "How is the QA team structured relative to the development teams?"
  7. "What is the budget situation for tools, training, and conferences?"
  8. "Who are the key stakeholders I should build relationships with?"

Questions for QA Team Members

  1. "What is the most painful part of your daily work?"
  2. "Which areas of the product are the scariest to test? Why?"
  3. "What do you wish was automated but is not?"
  4. "Where do flaky tests come from, and how are they handled?"
  5. "What is the test data situation -- easy or painful?"
  6. "How do you handle testing when requirements are unclear?"
  7. "What is the onboarding experience like -- what do you wish someone had told you on day 1?"

Questions for Developers

  1. "How do you prefer to receive bug reports?"
  2. "At what point in your development process would QA involvement be most helpful?"
  3. "What is your experience with the test suite -- do you trust it? Do you run it?"
  4. "Are there areas of the code that you consider fragile or risky?"
  5. "How do you handle test failures in CI -- do you fix them immediately or skip?"
  6. "What would make your life easier from a QA perspective?"

Questions for Product Owners

  1. "What upcoming features have the highest quality risk?"
  2. "How are acceptance criteria typically defined? Is QA involved?"
  3. "What was the last production incident that affected customers, and how was it handled?"
  4. "Which areas of the product get the most customer complaints?"
  5. "How do you prioritize bug fixes versus new features?"

Questions for DevOps/SRE

  1. "What is the deployment pipeline, and where do tests fit in?"
  2. "How are test environments managed? Who can spin up a new one?"
  3. "What monitoring and alerting exists for production quality?"
  4. "What is the incident response process, and is QA involved?"
  5. "What are the biggest infrastructure bottlenecks affecting test execution?"

Questions to Ask Yourself

  1. "What is the gap between how testing is done here and how I think it should be done?"
  2. "What are 3 quick wins I could deliver in the next 30 days?"
  3. "What is the biggest risk to product quality that no one is talking about?"
  4. "Where can I add the most value given my specific skills and experience?"

The First 60 Days: Contribute Visibly

By day 30, you understand the landscape. Now it is time to start contributing in ways that build credibility and trust.

Identify and Deliver Quick Wins

Quick wins are small improvements that require low effort but produce visible results. They are critical for establishing credibility before you propose larger changes.

  • Fix a flaky test. Example: identify the root cause and fix the top 3 flakiest tests. Why it works: developers notice immediately when CI stops randomly failing.
  • Improve a bug report template. Example: add structured fields for severity, reproduction steps, and environment. Why it works: shows you care about process and communication (Chapter 24).
  • Automate a manual test. Example: pick the most tedious manual regression test and automate it. Why it works: saves someone's time and demonstrates automation skill.
  • Create a test data utility. Example: build a helper that generates common test data patterns. Why it works: solves a shared pain point across the team.
  • Document a testing process. Example: write up the undocumented tribal knowledge about how to test a specific feature. Why it works: creates visible value and shows initiative (Chapter 24).
  • Set up a missing quality metric. Example: add a test execution time trend or flaky test tracker to the CI dashboard. Why it works: makes quality visible (Chapter 22).

Build Relationships Strategically

  • Your QA manager -- they decide your reviews, projects, and promotion. Build it through regular 1:1s, proactive updates, and asking for feedback.
  • The most senior developer -- they have technical influence and institutional knowledge. Ask thoughtful questions about architecture and review their code changes.
  • The product owner -- they define what "done" means and prioritize your bugs. Provide early test feedback on stories and be data-driven in severity discussions.
  • DevOps/SRE -- they control the infrastructure you depend on. Help with pipeline improvements and understand their constraints.
  • Another QA engineer -- they are your closest peer and sounding board. Pair test together, share knowledge, and give and receive feedback.

Start Contributing to Ceremonies

By days 30-45, you should be an active participant in all team ceremonies, not just an observer.

  • Sprint planning: Ask clarifying questions about testability. Estimate test effort. Identify dependencies.
  • Daily standup: Report on testing status with specifics, not generalities.
  • Sprint review: Present quality metrics for 2-3 minutes. Show what was tested and what was found.
  • Retrospective: Bring one data-backed observation about quality from the sprint.

This draws directly on the sprint ceremony practices covered in Chapter 20.


The First 90 Days: Propose and Lead

By day 60, you have credibility. People know you, trust you, and have seen you deliver. Now you can propose larger changes.

Propose Improvements with Data

Do not say "I think we should adopt contract testing." Say "I analyzed our last 20 production incidents and found that 9 of them (45%) were caused by API contract mismatches between services. Contract testing would target this specific failure mode. Here is a 3-sprint plan to implement it, starting with the 3 highest-traffic API boundaries."

The 90-Day Improvement Proposal

Write a short document (1-2 pages) that covers:

  1. Current state: What you have observed about the testing practice (strengths and gaps)
  2. Key risks: The top 3 quality risks you have identified, with data
  3. Proposed improvements: 3-5 specific initiatives, each with effort estimate, expected impact, and dependencies
  4. Priority order: What to do first and why
  5. Success metrics: How you will measure whether the improvements worked

Share this with your manager first, get feedback, then present to the broader team. This document serves multiple purposes: it demonstrates strategic thinking, it shows you have listened and learned, and it gives your manager concrete evidence of your impact for your first performance review.

Example 90-Day Proposal Outline

Current State Assessment:
- Test automation covers 62% of critical paths (gap: checkout flow, admin panel)
- CI pipeline takes 52 minutes (target: <30 minutes)
- 14 flaky tests causing 3-4 false failures per day
- No contract testing between payment service and order service
- Manual regression takes 8 hours per release

Proposed Initiatives (priority order):

1. Fix flaky tests (Sprint 1-2)
   Effort: 3 days
   Impact: Eliminate 3-4 false CI failures per day, restore developer trust in the suite

2. Automate checkout flow (Sprint 2-4)
   Effort: 2 weeks
   Impact: Cover the highest-revenue user path, reduce manual regression by 2 hours

3. Pipeline optimization (Sprint 3-5)
   Effort: 1 week
   Impact: Reduce CI time from 52 to <30 minutes through sharding and caching

4. Contract testing pilot (Sprint 5-7)
   Effort: 2 weeks
   Impact: Target the #1 production incident category (API mismatches)

5. Deprecate manual smoke tests (Sprint 7-9)
   Effort: 3 weeks
   Impact: Eliminate 8 hours of manual testing per release

Success Metrics:
- Flaky test rate: from 14 to 0
- CI pipeline time: from 52 min to <30 min
- Automation coverage of critical paths: from 62% to 85%
- Manual regression time per release: from 8 hours to 2 hours
- Production incidents from API mismatches: from 9/quarter to <2/quarter

Creating a Learning Plan

Your first 90 days will expose gaps in your knowledge -- tools you have not used, domains you do not understand, processes that are new to you. Create a structured learning plan rather than trying to learn everything at once.

Learning Plan Template

  • Weeks 1-2: Product knowledge. Use every major feature and read the documentation. Milestone: can explain the product to a new hire.
  • Weeks 3-4: Test framework. Read all test code, run it locally, and modify a test. Milestone: can write a new test independently.
  • Weeks 5-6: CI/CD pipeline. Read the pipeline config, trigger builds, and review artifacts. Milestone: can troubleshoot a pipeline failure.
  • Weeks 7-8: Domain-specific testing. Study the business domain (finance, healthcare, e-commerce). Milestone: can identify domain-specific test scenarios.
  • Weeks 9-10: Architecture. Understand service interactions, data flow, and deployment topology. Milestone: can draw the system architecture from memory.
  • Weeks 11-12: Advanced tooling. Learn any tools new to you (monitoring, performance, security). Milestone: can use each tool independently.

Common Mistakes in New QA Roles

  • Criticizing the existing setup immediately. Why it hurts: alienates the team who built it. Instead: acknowledge the good before suggesting improvements.
  • Trying to change everything at once. Why it hurts: overwhelms the team and dilutes your impact. Instead: pick 1-2 focused improvements and execute them well.
  • Working in isolation. Why it hurts: misses context and fails to build relationships. Instead: pair with team members, attend all ceremonies, and communicate frequently.
  • Not asking for help. Why it hurts: wastes time and signals arrogance. Instead: ask questions early -- saying "I don't know this yet" is a strength, not a weakness.
  • Overpromising early. Why it hurts: sets expectations you cannot meet. Instead: under-promise and over-deliver on your first 2-3 contributions.
  • Ignoring the existing test suite. Why it hurts: misses the team's institutional knowledge embedded in tests. Instead: read the existing tests before writing new ones.
  • Focusing only on automation. Why it hurts: ignores process, communication, and strategy. Instead: balance technical contributions with process observations.
  • Not documenting what you learn. Why it hurts: loses the fresh perspective that is most valuable in your first weeks. Instead: keep a running document of observations, questions, and ideas.

Setting Yourself Up for a Strong First Performance Review

Your first performance review will likely happen at 3-6 months. Start preparing for it on day 1.

What Reviewers Look For

Collect evidence for each dimension:

  • Technical competence: tests you wrote, bugs you found, framework improvements
  • Collaboration: feedback from developers and product owners, contributions to ceremonies
  • Initiative: quick wins you delivered, improvements you proposed, problems you identified
  • Communication: bug reports, status updates, documentation you created
  • Growth: skills you learned, feedback you incorporated, areas you improved

Keeping a Brag Document

Start a private document on day 1 and update it weekly:

Week of [date]:
- Completed: [what you delivered]
- Impact: [measurable result]
- Feedback received: [positive or constructive]
- Skills developed: [what you learned]
- Relationships built: [who you connected with]

When review time comes, you will have 12+ weeks of documented impact instead of trying to remember what you did.

The Pre-Review Conversation

Two weeks before your review, have a conversation with your manager:

"My review is coming up in two weeks. I have been tracking my contributions and I want to make sure we are aligned on how things are going. Can we spend 15 minutes reviewing my progress against the goals we set at the start?"

This gives your manager time to prepare thoughtful feedback and signals that you take your growth seriously.


Building Allies: Who to Connect With First and Why

The Priority Map

Week 1:   Your manager          -- alignment on expectations
          Your QA team           -- immediate peers and support network

Week 2:   Your squad's developers -- daily collaborators
          Your product owner      -- defines what you test

Week 3:   DevOps/SRE            -- controls your infrastructure
          Senior engineer         -- technical authority and mentor

Week 4:   QA engineers on other squads -- broader QA community
          Customer support        -- understands user pain points

Month 2:  Engineering leadership  -- visibility and sponsorship
          Design team             -- accessibility and usability perspective

The Ally-Building Script

For every 1:1 in your first month, use this structure:

  1. Introduction (2 min): Brief background, what you are excited about
  2. Their perspective (15 min): "What is the biggest challenge you are facing right now? How does QA fit into your world? What would make your life easier?"
  3. Offer value (5 min): "Based on what you have shared, here is one thing I think I can help with in the next few weeks."
  4. Follow up (within 1 week): Deliver on your offer or provide an update

The key insight is that every relationship starts with listening and giving, not asking and taking.


Hands-On Exercise

  1. Write out your first-week schedule using the structure above: environment setup, 1:1 meetings, product exploration.
  2. Choose 10 questions from the list above that are most important for your next role. Rank them by priority.
  3. Identify 3 potential quick wins you could deliver in the first 60 days of any new role, based on common QA pain points.
  4. Create a learning plan template tailored to a specific company or domain you are targeting.
  5. Start a brag document now, even for your current role. Practice the habit of weekly documentation.

Interview Talking Point: "When I start a new role, I follow a structured 90-day approach. The first 30 days, I focus entirely on learning -- understanding the product from a user's perspective, mapping the current test coverage and architecture, meeting the team, and identifying pain points. I do not propose changes until I understand the system and the history behind it. In days 30 through 60, I deliver quick wins that build credibility -- fixing flaky tests, automating a painful manual process, or improving a shared tool. By day 60, I have earned trust through visible contributions. Then in the final 30 days, I write a data-backed improvement proposal: current state, key risks with evidence, and 3-5 prioritized initiatives with effort estimates and success metrics. I present this to my manager first, then the team. The combination of demonstrated competence and a strategic improvement plan establishes me as someone who listens before acting, delivers before proposing, and backs opinions with data. Every improvement I propose connects to measurable quality outcomes -- defect escape rates, pipeline speed, coverage gaps, or manual effort reduction -- because impact without measurement is invisible."