SOC 2 Logging and SIEM for Bare Metal Servers: Building the Evidence Layer

Reviewed by Ali Aleali, CISSP, CCSP · Last reviewed March 20, 2026

In a cloud environment, centralized logging is a toggle. Enable CloudTrail, turn on VPC Flow Logs, configure GuardDuty, and the compliance platform pulls everything into a dashboard. The logging architecture is largely decided by the cloud provider, and the team's job is to make sure it is switched on.

On-premises infrastructure does not work that way. A bare metal production environment running Windows servers, Linux application servers, a hypervisor, network appliances, and a firewall has no default logging pipeline. Each system generates logs in a different format, stores them in a different location, and retains them for a different duration. Without deliberate architecture, the logs exist but nobody can find them, correlate them, or prove to an auditor that they were reviewed.

This is the domain where the gap between cloud and on-prem SOC 2 readiness is most visible. The bare metal overview introduced the evidence automation gap: cloud environments see 50-60% of evidence flowing automatically, while on-prem environments start closer to 20-30%. Logging and SIEM is where that gap is felt most acutely, because every other security domain depends on it. Vulnerability scanning produces alerts that need to be logged. Access events need to be recorded. Changes need to be tracked. Without centralized logging, each of those domains operates in its own silo.

This post covers how to design a logging and SIEM architecture for on-prem infrastructure that satisfies SOC 2 CC7.2 and CC7.3, produces the two distinct types of evidence auditors expect, and scales from a two-person IT team to a full security operations function.

What This Article Covers

  • Why on-prem logging requires deliberate architecture that cloud environments inherit automatically
  • The two evidence streams auditors check: monitoring evidence and response evidence
  • What systems belong in SOC 2 logging scope and why
  • The SIEM landscape: open-source, enterprise, and managed options compared
  • The three-part evidence pattern auditors follow when reviewing logging controls
  • The weekly, monthly, quarterly, and annual monitoring cadence for a small team

Cloud Logging vs. On-Prem Logging: Why the Architecture Is Different

Cloud providers made a design decision that on-prem teams need to make for themselves: where do logs go, how long are they kept, and who reviews them?

In AWS, CloudTrail captures API activity across every service. VPC Flow Logs record network traffic. GuardDuty runs detection models against those logs automatically. The logging architecture is embedded in the platform. Turning it on is a configuration choice. Designing it is not required.

On-prem environments require explicit decisions at every layer:

THE FOUR LOGGING ARCHITECTURE DECISIONS

Collection

Which systems send logs, and how? Windows Event Forwarding, syslog, agent-based forwarding, or SNMP traps each have different capabilities and limitations.
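For the syslog path, forwarding from a Linux server to the aggregation point can be as small as one rsyslog rule. A sketch, assuming a hypothetical collector hostname (`@@` selects TCP rather than UDP; the queue directives keep events on disk through a collector outage):

```conf
# /etc/rsyslog.d/50-forward.conf
# Disk-assisted queue so events survive a collector outage
# (queue directives must precede the action they apply to)
$ActionQueueType LinkedList
$ActionQueueFileName fwd_queue
$ActionQueueMaxDiskSpace 1g
$ActionQueueSaveOnShutdown on
$ActionResumeRetryCount -1

# Forward all facilities and severities to the central collector over TCP
*.* @@siem.internal.example:514
```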

Aggregation

Where do all logs land? A centralized SIEM, a log management platform, or even a dedicated syslog server.

Retention

How long are logs kept? SOC 2 observation periods typically require 6-12 months of log history.

Review

Who looks at the logs, how often, and what happens when something is found?

Each of these decisions creates, or fails to create, audit evidence. The architecture itself is the control.

Two Types of Evidence: Monitoring vs. Response

This is the distinction that catches most on-prem teams during their first audit. SOC 2 CC7.2 requires monitoring for anomalies. CC7.3 requires response to identified incidents. These are two different evidence streams, and having one without the other creates a gap.

CC7.2: Monitoring Evidence

Proves the system is watching.

A SIEM dashboard showing log ingestion rates, a list of active detection rules, and a history of alerts generated over the observation period. This demonstrates that the organization has the capability to detect security events.

CC7.3: Response Evidence

Proves someone is acting on what the system finds.

Documented triage of alerts, investigation notes, escalation decisions, and case closures. This demonstrates that detection leads to action, not just dashboard noise.

The pattern we see repeatedly on first audits

A team deploys a SIEM, configures alerting, and then clears alerts from the dashboard without documenting any investigation. The monitoring evidence looks strong, but the response evidence is empty. When the auditor asks to see how the team handled a specific alert, there is nothing to show.

The fix is simple but requires a process change. Every alert that warrants investigation gets a case: timestamp, the alert that triggered it, the events and observables reviewed, and the conclusion. Even if the conclusion is "no action needed, this was a false positive from automated scanning," that documented triage is exactly what the auditor wants to see. A clean dashboard with zero alerts is suspicious. A dashboard with alerts that were investigated and resolved is evidence of a functioning security monitoring program.
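The case record does not need to be elaborate. A minimal sketch of the fields that matter (the field names are illustrative, not a standard; adapt them to whatever your SIEM or ticketing tool stores):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TriageCase:
    """Minimal alert triage record: what fired, what was reviewed, the outcome."""
    alert_id: str
    rule_name: str
    opened_at: str
    events_reviewed: list = field(default_factory=list)
    conclusion: str = ""
    closed_at: str = ""

    def close(self, conclusion: str) -> None:
        # Record the outcome and timestamp; "false positive" still counts
        # as evidence that the alert was triaged.
        self.conclusion = conclusion
        self.closed_at = datetime.now(timezone.utc).isoformat()

# Hypothetical alert ID and rule name, for illustration only
case = TriageCase(
    alert_id="WAZUH-100045",
    rule_name="Multiple failed SSH logins",
    opened_at="2026-03-02T09:14:00+00:00",
    events_reviewed=["sshd auth failures from 10.0.8.41 (internal scanner)"],
)
case.close("False positive: authorized vulnerability scan from 10.0.8.41")
```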

Scope: What Needs to Be Logged

Not every system generates the same value from logging. The tiered asset classification used for vulnerability management applies here too, but the logging scope is broader because CC7.2 requires monitoring across the environment, not just on high-risk assets.

  • Authentication events: successful and failed logins. Sources: servers, VPN, applications, databases, network appliance management interfaces.
  • Authorization changes: account creation, modification, deletion, privilege escalation. Sources: Active Directory, Linux PAM, application user management.
  • System changes: configuration modifications, software installations, service starts and stops. Sources: Windows Event Log, Linux audit daemon, hypervisor logs.
  • Network security events: firewall deny logs, IDS/IPS alerts, VPN connection logs. Sources: firewall appliance, IDS/IPS, VPN gateway.
  • Application events: error logs, transaction logs, security-relevant application-layer events. Sources: application servers, web servers, database audit logs.
  • File integrity events: changes to critical system files and configuration files. Sources: SIEM agent file integrity monitoring module.

For on-prem environments, the challenge is collecting these from heterogeneous sources. A Windows server sends events via Windows Event Forwarding or an agent. A Linux server sends syslog. A firewall appliance sends syslog or SNMP traps. A database has its own audit log format. The SIEM needs to normalize all of these into a format that supports correlation and search.
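The normalization step can be sketched in a few lines. Assuming two illustrative inputs, a Windows failed-logon record and an sshd syslog line, both map onto one common event shape (the field names here are illustrative, not any particular SIEM's schema):

```python
# Map source-specific fields onto one common schema so heterogeneous
# events can be searched and correlated together.
def normalize_windows_logon(event: dict) -> dict:
    """Windows Security log: Event ID 4624 = logon, 4625 = failed logon."""
    return {
        "timestamp": event["TimeCreated"],
        "source": event["Computer"],
        "category": "authentication",
        "outcome": "failure" if event["EventID"] == 4625 else "success",
        "user": event["TargetUserName"],
    }

def normalize_linux_sshd(line: str, host: str, ts: str) -> dict:
    """sshd syslog line, e.g. 'Failed password for admin from 203.0.113.7 ...'"""
    words = line.split()
    return {
        "timestamp": ts,
        "source": host,
        "category": "authentication",
        "outcome": "failure" if words[0] == "Failed" else "success",
        "user": words[words.index("for") + 1],  # token after "for" is the user
    }

win = normalize_windows_logon({"TimeCreated": "2026-03-02T09:14:00Z",
                               "Computer": "DC01", "EventID": 4625,
                               "TargetUserName": "svc-backup"})
lin = normalize_linux_sshd("Failed password for admin from 203.0.113.7 port 52144 ssh2",
                           "web01", "2026-03-02T09:14:02Z")
print(win["outcome"], lin["user"])  # → failure admin
```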

Ensuring Log Quality: CIS Benchmarks and Sysmon

CIS Benchmarks for audit logging: Every CIS Benchmark includes a dedicated section on audit logging configuration, specifying which event categories should be enabled, what audit policies to set, and what log retention settings to apply. Following the CIS logging recommendations for each operating system and appliance type ensures the SIEM receives security-relevant events rather than noise.

Sysmon for Windows: The default Windows event log configuration misses critical activity. Installing Sysmon (System Monitor) fills this gap. Sysmon logs process creation with full command-line arguments, network connections, file creation timestamps, and driver/DLL loading. These events are essential for detecting lateral movement, malicious process execution, and persistence mechanisms. Sysmon is free, lightweight, and the de facto standard for Windows endpoint visibility. The Sysmon logs feed directly into the SIEM alongside standard Windows Event Logs.
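As a sketch, a minimal Sysmon configuration that logs every process creation and network connection looks like the fragment below. The `schemaversion` must match the installed Sysmon binary, and a production config would add exclusions to cut noise; community baselines such as SwiftOnSecurity's sysmon-config are a common starting point.

```xml
<!-- Install with: sysmon64.exe -accepteula -i sysmon-config.xml -->
<!-- onmatch="exclude" with no child rules means "log every event
     of this type"; add exclusion rules to filter known-good noise. -->
<Sysmon schemaversion="4.90">
  <EventFiltering>
    <ProcessCreate onmatch="exclude"/>
    <NetworkConnect onmatch="exclude"/>
  </EventFiltering>
</Sysmon>
```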

Technology: The On-Prem SIEM Landscape

The right SIEM depends on team size, budget, log volume, and whether the team wants to run the platform themselves or outsource it. The on-prem market has options across the full spectrum.

  • Wazuh (open source): best for SMB on-prem with multi-domain coverage (SIEM, vulnerability detection, FIM, CIS scanning). Trade-off: self-managed; requires deployment and tuning.
  • Security Onion (open source): best for network-focused monitoring, full packet capture, and NSM. Trade-off: complements Wazuh; not a standalone host-log solution.
  • Elastic SIEM / ELK Stack (open source): best for high log volumes, custom detection, and strong engineering teams. Trade-off: more configuration overhead than purpose-built security tools.
  • Splunk Enterprise (enterprise): best for enterprise-grade search, compliance dashboards, and broad integrations. Trade-off: high licensing cost that scales with daily ingestion volume.
  • Google SecOps (cloud-hosted): best for on-prem log ingestion without managing SIEM infrastructure. Trade-off: log data leaves the on-prem environment, which may conflict with data residency requirements.
  • Managed SOC (various providers): best for small teams (2-5 people) that cannot manage a SIEM themselves. Trade-off: ongoing service cost; the team installs agents and the SOC handles the rest.

Option 01

Open-Source / Self-Hosted

WAZUH

The most common choice for on-prem SOC 2 environments in the SMB space. It combines SIEM, host-based intrusion detection, file integrity monitoring, vulnerability detection, and CIS benchmark scanning in a single platform. The agent runs on each server, forwarding logs to a central Wazuh manager. For teams that need one tool covering multiple SOC 2 domains, Wazuh delivers the most coverage per dollar.

SECURITY ONION

A network-focused security monitoring platform built on Elasticsearch, Zeek (formerly Bro), and Suricata. Where Wazuh is primarily agent-based (host logs), Security Onion excels at network traffic analysis, full packet capture, and network-based intrusion detection. For environments where network visibility matters as much as host visibility, deploying both creates a comprehensive monitoring stack.

ELASTIC SIEM (ELK STACK)

Provides a flexible, self-hosted SIEM built on Elasticsearch, Logstash, and Kibana. It handles high log volumes well and offers powerful search and visualization. The trade-off is that it requires more configuration and tuning than purpose-built security platforms. Teams with strong engineering resources and custom detection requirements often prefer it.

Option 02

Enterprise and Cloud-Hosted

SPLUNK ENTERPRISE

The enterprise standard for log management and SIEM. The on-prem deployment handles massive log volumes, supports complex correlation rules, and integrates with virtually every system type. The licensing model is based on daily ingestion volume, which makes it expensive for environments generating large amounts of log data. For organizations that need enterprise-grade search, reporting, and compliance dashboards, Splunk is the benchmark.

GOOGLE SECOPS (FORMERLY CHRONICLE)

A cloud-hosted SIEM that ingests logs from on-prem infrastructure via forwarders, applies Google's threat intelligence and detection models, and provides a managed detection and response platform. For on-prem environments that want SIEM capabilities without managing the SIEM infrastructure, Google SecOps offloads the platform management while keeping log analysis and detection in a cloud-hosted service. The trade-off is that log data leaves the on-prem environment, which may conflict with data residency requirements or customer contracts.

Option 03

Managed SOC Services

For teams that do not have the bandwidth to run a SIEM themselves, managed SOC providers operate the monitoring infrastructure and deliver alerts, triage, and escalation as a service. The on-prem team installs agents or configures log forwarding, and the managed SOC handles the rest. This model is common for companies where the IT team is small (two to five people) and adding SIEM management to their workload is not sustainable.

A common on-prem stack pattern

Wazuh for host-based monitoring and vulnerability detection, Security Onion for network monitoring, and the SIEM data feeding into the GRC platform for compliance evidence. Larger environments might replace Wazuh with Splunk or layer Google SecOps on top for advanced detection. The choice is not always one tool.

Evidence: What the Auditor Wants to See

Logging and SIEM evidence for SOC 2 follows the same three-part evidence pattern that applies across all continuous controls:

The Three-Part Evidence Pattern for Logging Controls

  1. Configuration evidence showing the logging infrastructure is set up and running. Screenshots of the SIEM configuration, the log sources connected, the retention policy, and the active detection rules. This proves the monitoring capability exists.
  2. Execution history showing the system has been operating on its expected cadence over the observation period. Log ingestion trends, alert volume over time, and uptime metrics for the SIEM itself. Any gaps in log ingestion during the observation period will be questioned.
  3. Representative samples showing the output is meaningful: a sample alert that was triaged and closed with documented investigation notes, a sample alert that was escalated to an incident with the full response lifecycle, and weekly or monthly monitoring summary reports showing alert trends.

The third category is where many teams fall short. The SIEM is running, the logs are flowing, but nobody documented what happened when alerts fired. Building the triage documentation habit from day one of the observation period is critical, because the auditor will ask for examples, and they need to come from the actual observation window.

A separate post covers how alert triage and case management feed into the ticketing and SLA workflow, including the full incident response evidence chain.

Process: The Monitoring Cadence

Security monitoring is not a "set it and forget it" control. The operating cadence needs to be realistic for the team's size and documented in the Security Program Manual.

DAILY (OR CONTINUOUS)

The SIEM ingests logs and runs detection rules automatically. For teams with a managed SOC, this happens without internal effort. For self-managed deployments, the SIEM runs autonomously, but someone needs to check that log ingestion is healthy and no sources have stopped reporting.
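That ingestion health check can be automated with a few lines. A sketch that flags any source whose last event is older than a threshold; in practice the last-seen timestamps would come from the SIEM's API, but here they are passed in directly (source names are hypothetical):

```python
from datetime import datetime, timedelta, timezone

def stale_sources(last_seen: dict, now: datetime, max_age: timedelta) -> list:
    """Return names of log sources that have gone quiet, oldest first."""
    quiet = [(name, ts) for name, ts in last_seen.items() if now - ts > max_age]
    return [name for name, _ in sorted(quiet, key=lambda item: item[1])]

now = datetime(2026, 3, 20, 9, 0, tzinfo=timezone.utc)
sources = {
    "fw01": now - timedelta(minutes=5),
    "web01": now - timedelta(hours=30),   # stopped reporting yesterday
    "db01": now - timedelta(minutes=12),
}
print(stale_sources(sources, now, timedelta(hours=24)))  # → ['web01']
```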

WEEKLY

Review the SIEM dashboard. Triage any alerts that fired during the week. Create cases for alerts that warrant investigation. Close cases with documented rationale. This is the minimum cadence that produces audit-ready evidence. For a typical on-prem environment, the weekly review takes 30-60 minutes.

MONTHLY

Review monitoring coverage. Check whether any new systems were deployed that are not sending logs. Review detection rule effectiveness: are the rules generating useful alerts or mostly noise? Adjust thresholds as needed.

QUARTERLY

Produce a monitoring summary report for the GRC platform. This report shows log volume trends, alert counts by severity, triage outcomes, and any changes to the monitoring configuration. The quarterly report becomes a key evidence artifact for the auditor.
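The aggregation behind that summary is straightforward. A sketch using illustrative alert records; in practice these would be exported from the SIEM:

```python
from collections import Counter

# Quarterly rollup: alert counts by severity and by triage outcome.
alerts = [
    {"severity": "high", "outcome": "escalated"},
    {"severity": "medium", "outcome": "false positive"},
    {"severity": "medium", "outcome": "resolved"},
    {"severity": "low", "outcome": "false positive"},
]

by_severity = Counter(a["severity"] for a in alerts)
by_outcome = Counter(a["outcome"] for a in alerts)
print(dict(by_severity))  # {'high': 1, 'medium': 2, 'low': 1}
print(dict(by_outcome))
```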

ANNUALLY

Full review of the logging architecture. Evaluate whether the current tooling is meeting the organization's needs. Review retention policies against both SOC 2 requirements and any contractual obligations. Update the Security Program Manual.

People: Ownership and Escalation

For small teams, the person managing the SIEM is usually the same person managing the infrastructure. The critical element is defining what happens when the SIEM finds something:

Ownership Model for Security Monitoring

  • Monitoring owner: Responsible for the health of the logging infrastructure, detection rule maintenance, and weekly alert triage. This is an operational role, not a full-time security analyst position in most on-prem environments.
  • Escalation path: When an alert warrants investigation beyond the standard triage, who gets involved? For small teams, this might be the CTO or an external incident response partner. The escalation path must be documented, even if it is rarely used.
  • Backup: Coverage during absences. If the monitoring owner is out for a week and nobody reviews alerts, the evidence trail has a gap. A designated backup ensures continuity.

The Architecture Decision

The key differentiator between cloud and on-prem logging is that on-prem requires deliberate architecture. A cloud environment inherits a logging architecture from the provider. An on-prem environment builds one.

The decisions that matter: which systems send logs, where those logs are aggregated, how long they are retained, what detection rules run against them, who reviews the output, and how triage is documented. Each of these decisions maps to a SOC 2 control requirement. Each produces, or fails to produce, audit evidence.

For teams starting from scratch, the minimum viable SIEM deployment covers three things: centralized log collection from all in-scope systems, a set of detection rules that generate alerts (even a simple failed login threshold rule counts), and a documented triage process that creates cases from alerts. Everything else (network monitoring, advanced correlation, threat intelligence feeds) can be layered on as the program matures.
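The failed login threshold rule mentioned above is simple enough to sketch in full: alert when one source IP exceeds N failures inside a sliding time window. Real SIEMs express the same logic as a correlation rule; the structure is the same.

```python
from collections import deque

def threshold_detector(events, limit=5, window=300):
    """events: iterable of (timestamp_seconds, source_ip) failed logins.
    Returns (timestamp, ip) pairs where the threshold was crossed."""
    recent = {}          # source_ip -> deque of timestamps inside the window
    alerts = []
    for ts, ip in events:
        q = recent.setdefault(ip, deque())
        q.append(ts)
        while q and ts - q[0] > window:
            q.popleft()  # drop failures that aged out of the window
        if len(q) >= limit:
            alerts.append((ts, ip))
            q.clear()    # reset so one burst raises one alert
    return alerts

# Five failures from one IP in 40 seconds trips the rule on the fifth
fails = [(t, "203.0.113.7") for t in range(0, 50, 10)]
print(threshold_detector(fails))  # → [(40, '203.0.113.7')]
```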

The logging architecture guide covers the technical design patterns in more detail. This post focuses on the compliance evidence layer that sits on top of that architecture.

Need logging and monitoring for SOC 2?

We help on-prem teams build an effective security program where logging, monitoring, and response evidence are audit-ready from day one.


Frequently Asked Questions


Does SOC 2 require a specific SIEM platform?

No. SOC 2 CC7.2 requires monitoring for anomalies using appropriate tools. It does not prescribe a specific product. Wazuh, Security Onion, Splunk, Google SecOps, Elastic SIEM, and managed SOC services are all acceptable. What matters is that the platform collects logs from in-scope systems, runs detection rules, and produces evidence of both monitoring and response.


What is the difference between monitoring evidence and response evidence for SOC 2?

Monitoring evidence proves the system is watching: log ingestion dashboards, active detection rules, alert history. Response evidence proves someone acts on findings: documented alert triage, investigation notes, case closures with rationale. CC7.2 covers monitoring. CC7.3 covers response. Auditors check for both, and having monitoring without documented response is a common gap on first audits.


How much log retention does SOC 2 require?

SOC 2 does not specify an exact retention period, but the observation period for a Type 2 audit is typically 6-12 months. Log retention needs to cover at least the full observation window. In practice, 12 months of retention is a safe baseline. Some industries and contractual obligations require longer.


Can a small team manage a SIEM without dedicated security staff?

Yes, with the right tooling and cadence. Wazuh and similar platforms run autonomously once configured. The weekly review cadence (30-60 minutes of dashboard review and alert triage) is manageable for a system administrator who also handles other responsibilities. For teams that cannot sustain even that, a managed SOC service handles monitoring and triage externally, delivering only escalations that require internal action.


Should we use a cloud-hosted SIEM for on-prem infrastructure?

It depends on the team's capacity to manage infrastructure. Cloud-hosted options like Google SecOps and Elastic Cloud ingest logs from on-prem systems via forwarders but host the analysis platform in the cloud. This removes the burden of managing the SIEM infrastructure itself (storage, compute, upgrades, availability). The trade-off is that log data leaves the on-prem environment, which may conflict with data residency requirements or customer contracts. Self-hosted options like Wazuh and Splunk Enterprise keep everything on-premises.



About the Author
Ali Aleali, CISSP, CCSP

Co-Founder & Principal Consultant, Truvo Cyber

Former security architect for Bank of Canada and Payments Canada. 20+ years building compliance programs for critical infrastructure.