SOC 2 Compliance Roadmap: From Gap Assessment to Audit-Ready

Reviewed by Ali Aleali, CISSP, CCSP · Last reviewed March 18, 2026

Every SOC 2 roadmap on the internet reads the same way: pick a platform, connect your integrations, run the gap analysis, remediate, audit. Five steps, clean and linear. What none of them tell you is what happens between step 2 and step 4, which is where companies actually spend 80% of their time and where most compliance projects stall.

This roadmap is different. It's built from how SOC 2 readiness engagements actually run, not how vendor marketing describes them. The timelines, the workshop structure, the evidence split between automated and manual, the deliverables at each phase, the things that slow teams down. This is what the process looks like when you open the hood.

The Real Timeline

Most SOC 2 guides quote "4-6 months." That's technically correct for a Type 1 report if everything goes smoothly. Here's what the full timeline looks like from kickoff through Type 2:

  • Assessment (2-4 weeks): Scoping, system inventory, gap analysis, remediation plan
  • Build (2-8 weeks): Close gaps, write policies, configure platform, build the Security Program Manual, workshops
  • Type 1 audit (2-3 weeks): Auditor tests control design at a point in time
  • Observation period (3-6 months): Controls operate, evidence accumulates, program runs on cadence
  • Type 2 audit (2-4 weeks): Auditor tests operational effectiveness over the observation period

For Type 1 only: Smaller organizations with a clean cloud-native stack can go from kickoff to Type 1 report in as little as 5-7 weeks (2-4 weeks prep + 2-3 weeks audit).

For the full path to Type 2: 9-15 months. The companies that move fastest commit a dedicated internal point of contact, hold workshops twice per week, and treat readiness as a business priority rather than a side project.

The biggest variable isn't technical complexity. It's team availability. In companies where one person wears multiple hats, scheduling workshops and completing action items competes with operational responsibilities. That's the constraint that determines whether the build phase takes 4 weeks or 8.

Phase 1: Assessment (Weeks 1-4)

The assessment answers one question: where does the company actually stand against SOC 2 requirements, given how it actually operates?

System inventory and scoping. Every system that touches customer data or supports the service gets inventoried: production infrastructure, databases, identity providers, endpoints, network appliances, third-party services. Each system is classified by risk tier, which determines the proportionate controls it needs. This scoping step is where most generic checklists fail, because they apply the same controls to everything regardless of risk.
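As a rough illustration of what risk-tier classification means in practice, the logic can be sketched as a simple lookup. The tier rules, attribute names, and example systems below are all invented for the sketch, not any particular platform's model:

```python
# Hypothetical sketch: classify inventoried systems into risk tiers that
# drive proportionate controls. Tier rules and system names are invented.

def classify(system: dict) -> str:
    """Assign a risk tier from two simple attributes."""
    if system["handles_customer_data"] and system["internet_facing"]:
        return "Tier 1"  # production systems: tightest controls
    if system["handles_customer_data"]:
        return "Tier 2"  # internal systems touching customer data
    return "Tier 3"      # supporting systems: lighter-touch verification

inventory = [
    {"name": "prod-api",   "handles_customer_data": True,  "internet_facing": True},
    {"name": "staging-db", "handles_customer_data": True,  "internet_facing": False},
    {"name": "office-vpn", "handles_customer_data": False, "internet_facing": True},
]

for system in inventory:
    print(f"{system['name']}: {classify(system)}")
# prod-api: Tier 1
# staging-db: Tier 2
# office-vpn: Tier 3
```

A real classification uses more attributes (data sensitivity, availability requirements, exposure), but the point stands: the tier, not a generic checklist, determines which controls apply.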

Existing security practices mapped to SOC 2 criteria. Companies that have been operating for years are usually doing more real security than they can prove. Access is controlled. Patching happens. Firewalls are configured tightly. The gap is rarely that security isn't happening. It's that the evidence trail was never designed.

Each control is assessed on a maturity scale: not in place, in place but not effective, effective but not provable, effective and provable. This reframes the conversation from "do you have it or not" to "how mature is it," which is more useful for prioritizing remediation.

Assessment Deliverables

  • Scoped system inventory with risk-tier classification
  • Network and data flow diagrams
  • Gap assessment mapped to SOC 2 Trust Services Criteria (not a generic checklist, mapped to the actual architecture)
  • Prioritized remediation plan with owners, tasks, and definition of done

Phase 2: Build (Weeks 5-12)

The build phase is where the real work happens, and where generic roadmaps go silent. This isn't "remediate failing controls." It's designing and building the security program that makes those controls operational and sustainable.

How the Build Phase Actually Runs

The most effective structure is collaborative workshops, typically 1.5-2 hours each, covering one or two security domains per session. The program covers roughly 15 domains: vulnerability management, access management, network security, backup and disaster recovery, incident response, change management, logging and monitoring, endpoint security, encryption, and so on.

Each workshop moves through four stages for each domain:

  1. Define how the domain actually operates. Which tools are in use? How often do things run? Where do results go? Who owns it? These need to be realistic, not aspirational. If quarterly access reviews aren't happening today, don't write a policy that says they are.
  2. Align the policy to match reality. The policy gets updated to describe how the company actually operates, not the other way around. Generic templates that reference "cloud provider access controls" when the company runs on-premises infrastructure with Active Directory and VPN-based access are the most common source of audit friction.
  3. Map the controls. Once the manual section and policy are aligned, controls map cleanly across the SOC 2 criteria (CC1-CC9).
  4. Define evidence requirements. What evidence does the auditor need to see, and where does it come from? Vulnerability scanning evidence might be split into separate tests for Tier 1 systems (production, scanned weekly), Tier 2 (internal, scanned quarterly), and Tier 3 (network appliances, verified annually).

With 15 domains and one or two per session, roughly 10 workshops cover the full program. At twice per week, that's 5 weeks of workshops. The remainder of the build phase covers remediation work, platform configuration, and evidence architecture setup.

The Security Program Manual

The central deliverable from the build phase is the Security Program Manual. This is the internal operating playbook, the document the team actually references day to day. For each security domain, it covers five elements:

Five Elements Per Security Domain

  • Scope: What systems and data assets does this domain cover?
  • Technology: What tools are in use, including where the process is manual?
  • Evidence: What evidence is captured, how, and for how long?
  • Process: What is the operating cadence? Weekly scans, quarterly reviews, annual assessments?
  • People: Who owns it, who reviews it, who backs them up?

The manual is what makes the program operational. Policies exist for the auditor. The manual exists for the team.
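To make the five-element structure concrete, here is one manual entry sketched as a data structure. The field layout and every example value are illustrative assumptions, not a prescribed schema:

```python
# Hypothetical sketch: one Security Program Manual entry capturing the five
# elements per domain. Structure and example values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class DomainEntry:
    domain: str
    scope: str        # systems and data assets covered
    technology: str   # tools in use, or "manual" where no tool exists
    evidence: str     # what is captured, how, and retention period
    process: str      # operating cadence
    people: dict = field(default_factory=dict)  # owner, reviewer, backup

vuln_mgmt = DomainEntry(
    domain="Vulnerability Management",
    scope="Tier 1 production systems and Tier 2 internal servers",
    technology="Authenticated scans via the scanning tool in use",
    evidence="Scan reports exported to the GRC platform, retained 12 months",
    process="Weekly Tier 1 scans; critical findings remediated on SLA",
    people={"owner": "IT Manager", "reviewer": "CTO", "backup": "SysAdmin"},
)
```

Writing each domain down this way is what exposes the gaps: an entry with no named backup, or a cadence nobody actually runs, is visible immediately.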

The Evidence Reality

Here's a number that surprises most teams: in a recent engagement covering all five Trust Services Criteria on Azure, the total test count across the full SOC 2 program was 232. Those tests break into three categories:

AUTOMATED TESTS (75 OF 232)

Evidence pulled automatically via API integrations with the cloud provider, identity provider, endpoint management, and version control. No human involvement once configured.
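The shape of an automated test looks roughly like the sketch below. The `fetch_users` function is a stand-in for a real API integration with an identity provider; here it returns canned data so the logic is visible:

```python
# Hypothetical sketch of an automated evidence test: verify every user in
# the identity provider has MFA enrolled. fetch_users() is a stub standing
# in for a real API call; the user records are invented.

def fetch_users() -> list[dict]:
    """Stub for the identity-provider API integration."""
    return [
        {"login": "alice", "mfa_enrolled": True},
        {"login": "bob",   "mfa_enrolled": False},
    ]

def test_mfa_enrollment() -> dict:
    """Pass only if every user has MFA enrolled; list the exceptions."""
    failing = [u["login"] for u in fetch_users() if not u["mfa_enrolled"]]
    return {"passed": not failing, "exceptions": failing}

result = test_mfa_enrollment()
print(result)  # {'passed': False, 'exceptions': ['bob']}
```

The platform runs checks like this on a schedule and stores each result as evidence, which is why no human involvement is needed once the integration is configured.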

PLATFORM-MANAGED TESTS

Handled inside the GRC platform's own modules: policy publishing and acknowledgment tracking, security training completion, risk register management, and vendor risk assessments. The platform manages the workflow and captures the evidence, but someone still needs to configure the modules, upload policies, assign training, and maintain the registers.

MANUAL TESTS

Evidence gathered and uploaded by the team. Management review minutes, BCDR tabletop results, penetration test reports, background check verifications, access review documentation. These require defined owners, cadence, and evidence capture workflows.

The automated portion was roughly 32% for this full-scope audit. Engagements scoped to Security only (the most common starting point) have fewer total tests and a higher automation ratio. But even with the platform-managed tests factored in, a significant portion of the program requires human process design and execution. The platform handles the workflow. Someone still has to do the work.

Compliance software is like accounting software. QuickBooks doesn't replace your accountant. It gives your accountant a better system to work in. The same applies here: the platform is essential, but someone still has to design and run the program inside it.

Other Build Phase Deliverables

  • Policies customized to the company's actual architecture and operations
  • GRC platform configured with controls, tests, and evidence requirements mapped to the environment
  • Company risk assessment and vendor risk assessment
  • Incident response plan
  • Templates for recurring activities: access reviews, firewall rule reviews, BCDR tabletop exercises
  • Subservice organization documentation (carve-out or inclusive method for each third-party dependency)
  • Optional: Security Posture Report for buyer conversations while the audit is in progress

Phase 3: Audit (Type 1, Then Type 2)

Type 1: Validating Control Design

A Type 1 audit tests whether controls are designed and in place at a point in time. The auditor reviews policies, inspects configurations, and verifies that the control environment exists as described in the system description. The evidence bar is lower than teams expect: the auditor is verifying current state, not sustained operation.

Type 1 is valuable for three reasons: it produces a shareable report faster, it gives the team a real audit cycle to learn from (where the auditor asks questions you didn't anticipate), and it doesn't delay the Type 2 timeline. The observation period starts running as soon as controls are operational, regardless of when the Type 1 report is issued.

The Observation Period

For Type 2, controls must operate effectively over a sustained period, typically 3-6 months. During this window, evidence accumulates: automated tests run continuously through the platform, and manual evidence tasks execute on their defined cadence (quarterly access reviews, annual BCDR tests, monthly management reviews).

The teams that handle this well run evidence collection in 2-week cycles: one week of review (checking what's due, what's overdue, what needs attention), one week of gathering and uploading. This cadence prevents the observation period from becoming a black box that nobody checks until audit time.
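The review week boils down to a due-date calculation over the manual evidence tasks. A minimal sketch, with invented task names, dates, and cadences:

```python
# Hypothetical sketch of the review week: flag manual evidence tasks that
# are due or overdue based on their cadence. All task data is invented.
from datetime import date, timedelta

TASKS = [
    {"task": "Quarterly access review",   "last_done": date(2026, 1, 5),  "cadence_days": 90},
    {"task": "Monthly management review", "last_done": date(2026, 3, 2),  "cadence_days": 30},
    {"task": "Annual BCDR tabletop",      "last_done": date(2025, 6, 10), "cadence_days": 365},
]

def due_tasks(tasks: list[dict], today: date) -> list[str]:
    """Return tasks whose next due date is on or before today."""
    return [
        t["task"] for t in tasks
        if t["last_done"] + timedelta(days=t["cadence_days"]) <= today
    ]

print(due_tasks(TASKS, today=date(2026, 4, 6)))
# ['Quarterly access review', 'Monthly management review']
```

Whether this lives in a GRC platform, a ticketing system, or a spreadsheet matters less than that someone runs the check every two weeks and the gathering week closes out what it finds.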

Type 2: Proving Operational Effectiveness

The Type 2 audit tests whether controls operated effectively across the entire observation period. The auditor samples evidence from different points in the period, looking for consistency and completeness. A control that worked in month 1 but wasn't documented in month 3 is a finding.

The GRC platform makes this process significantly smoother. Instead of weeks of back-and-forth, the auditor gets secure read-only access to the platform and can review continuously collected evidence without requesting screenshots. The manual evidence (management review minutes, BCDR test results, pen test reports, background checks) should already be uploaded and linked to the corresponding controls.

Phase 4: Operate (Ongoing)

The first SOC 2 report is a milestone, not a finish line. SOC 2 is an annual requirement, and the program needs to run continuously between audit cycles.

The operate phase establishes the cadence: quarterly access reviews, annual policy reviews, continuous vulnerability scanning, monthly control owner check-ins, vendor SOC 2 report reviews. A coordinator role keeps this cadence alive, because programs run on cadence, not intention.

The most important function of the operate phase is the feedback loop. Incidents, vulnerability findings, and pen test results feed back into the program as improvements. A vulnerability discovered during a scan leads to a patching process update. An access review reveals a terminated employee whose account wasn't deprovisioned, leading to an offboarding process fix. This is what turns a compliance exercise into a living security program that gets stronger over time.

What Determines Speed

Three factors determine whether the full process takes 9 months or 15:

  1. Team availability. The single biggest variable. Companies that assign a dedicated point of contact and commit to a regular workshop cadence (twice per week) move through the build phase in 4-5 weeks. Companies where the point of contact is also the CTO, the IT manager, and the on-call engineer take twice as long.
  2. Existing security maturity. Companies already running tight operations (patching, access controls, encryption, monitoring) have less remediation work. The build phase is mostly documentation and evidence architecture. Companies starting from scratch have more to implement.
  3. Infrastructure model. Cloud-native stacks with standard tooling (AWS + Okta + GitHub) have higher platform automation coverage and faster integration setup. On-premises and bare metal environments require more evidence architecture design and manual process documentation, but the timeline structure is the same.

Ready to start your SOC 2 roadmap?

We'll assess where you stand, build an effective security program around your actual operations, and get you audit-ready.

Frequently Asked Questions

How long does SOC 2 compliance take from start to finish?

For a Type 1 report, smaller organizations with clean cloud-native stacks can be ready in 5-7 weeks. For the full path to Type 2, most engagements take 9-15 months: 2-4 weeks assessment, 2-8 weeks build, 2-3 weeks Type 1 audit, 3-6 months observation, and 2-4 weeks Type 2 audit. The biggest variable is team availability during the build phase, not technical complexity.

What does a SOC 2 compliance roadmap look like week by week?

Weeks 1-4 cover assessment: system inventory, scoping, gap analysis, and remediation planning. Weeks 5-12 are the build phase: collaborative workshops (twice per week) covering 15 security domains, policy customization, platform configuration, and evidence architecture. After build, the Type 1 audit validates control design, then the 3-6 month observation period begins for Type 2.

What is the SOC 2 observation period?

The observation period is the window (typically 3-6 months) during which your controls must operate effectively and evidence must accumulate. Your GRC platform monitors automated controls continuously, while manual controls (access reviews, BCDR tests, management reviews) execute on their defined cadence. The Type 2 auditor samples evidence from across this period to verify consistent operation.

Can I get a SOC 2 Type 1 report while working toward Type 2?

Yes, and it's recommended. Type 1 validates your control design at a point in time, producing a shareable report faster. The observation period for Type 2 starts as soon as controls are operational, so getting a Type 1 first doesn't delay your Type 2 timeline. It also gives your team a real audit cycle to learn from before the more rigorous Type 2 examination.

What percentage of SOC 2 evidence collection is manual vs automated?

It depends on your scope and infrastructure. For standard cloud-native stacks scoped to Security only, GRC platforms automate roughly 40-60% of evidence collection. For full-scope audits (all five Trust Services Criteria) or on-premises environments, automated coverage can drop to roughly 30%. Evidence breaks into three tiers: fully automated (API integrations), platform-managed (policy tracking, training, risk modules), and manual (management reviews, BCDR tests, pen test reports).

About the Author
Ali Aleali, CISSP, CCSP

Co-Founder & Principal Consultant, Truvo Cyber

Former security architect for Bank of Canada and Payments Canada. 20+ years building compliance programs for critical infrastructure.