
Security Audit Planning and Execution: A Practitioner's Field Guide

Last year, I inherited an audit that had gone sideways. The previous administrator had run a vulnerability scan, dumped 14,000 findings into a spreadsheet, and handed it to leadership with no context, no prioritization, and no remediation path. The result was predictable: nothing got fixed. That experience taught me that a security audit is only as valuable as the planning behind it and the clarity of its execution. Here's the framework I've refined since then.



Phase 1: Scoping and Objective Definition

Every audit begins with a deceptively simple question: what exactly are we auditing, and why?

Without a clear scope, audits balloon into unfocused exercises that waste cycles and erode stakeholder trust. I use a scoping document that defines four things explicitly:

  • Audit type (compliance, vulnerability assessment, configuration review, or penetration test)
  • Asset boundaries (specific subnets, applications, cloud accounts, or business units)
  • Regulatory drivers (PCI-DSS, HIPAA, SOC 2, NIST 800-53, or internal policy)
  • Exclusions (production databases during peak hours, third-party managed services, etc.)

For example, a quarterly internal audit might scope to: "Configuration compliance review of all Linux servers in the 10.20.0.0/16 subnet against CIS Benchmark Level 1, excluding the R&D sandbox environment."

Document this in writing and get sign-off from the asset owners and your CISO before touching a single system.
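That scoping statement can also live as a small machine-readable file checked into the audit repo, so each quarter's scope is diffable and auditable itself. A minimal sketch — the file name and field names are my own convention, not a standard:

```shell
# Illustrative scoping file; keys and values mirror the example scope above
cat > scope-q1-internal.yaml <<'EOF'
audit_type: configuration_review
asset_boundaries:
  - subnet: 10.20.0.0/16
    platform: linux
benchmark: CIS Level 1
regulatory_drivers: [internal-policy]
exclusions:
  - rd-sandbox
sign_off_required: [asset-owners, ciso]
EOF
```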

Phase 2: Evidence Collection and Automated Scanning

With scope locked, I divide evidence collection into two tracks: automated scanning and manual verification.

Automated scanning provides breadth. I typically run vulnerability assessments and compliance checks in parallel. Here's how I kick off a CIS benchmark scan using OpenSCAP against a RHEL host:

# Install the scanner and the SCAP Security Guide content
sudo yum install -y openscap-scanner scap-security-guide

# Run a CIS Level 1 profile scan
sudo oscap xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_cis \
  --results /tmp/audit-results.xml \
  --report /tmp/audit-report.html \
  /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml
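Once the scan finishes, I pull the failing rule IDs straight into the remediation queue rather than eyeballing the HTML report. A minimal sketch, run here against a trimmed, illustrative sample of the XCCDF rule-result shape (a real run would parse the results file the scan above writes):

```shell
# Trimmed, illustrative sample of XCCDF rule-result records
cat > sample-results.xml <<'EOF'
<rule-result idref="xccdf_org.ssgproject.content_rule_sshd_disable_root_login">
  <result>fail</result>
</rule-result>
<rule-result idref="xccdf_org.ssgproject.content_rule_package_aide_installed">
  <result>pass</result>
</rule-result>
EOF

# Remember the most recent rule ID; print it when its result is "fail"
failed=$(awk '/<rule-result /{match($0, /idref="[^"]+"/); id=substr($0, RSTART+7, RLENGTH-8)}
              /<result>fail<\/result>/{print id}' sample-results.xml)
echo "$failed"
```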

For network-level vulnerability scanning, I schedule Nmap service discovery first, then feed the results into a credentialed scan:

# Service discovery on target subnet
nmap -sV -oX discovery.xml 10.20.0.0/16

# Credentialed vulnerability scan with OpenVAS (via gvm-cli)
gvm-cli socket --xml \
  "<create_task><name>Q1-Audit-Internal</name><target id='TARGET_ID'/><config id='CONFIG_ID'/></create_task>"
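Between those two steps, I extract the live hosts from the discovery output so the credentialed scan only targets responsive systems. A hedged sketch against a heavily trimmed sample of nmap's XML — real output carries many more attributes, and the awk assumes status and address appear on their own lines inside each host element:

```shell
# Illustrative sample of nmap -oX host records (heavily trimmed)
cat > discovery-sample.xml <<'EOF'
<nmaprun>
  <host>
    <status state="up"/>
    <address addr="10.20.1.5" addrtype="ipv4"/>
  </host>
  <host>
    <status state="down"/>
    <address addr="10.20.1.6" addrtype="ipv4"/>
  </host>
</nmaprun>
EOF

# Track each host's IPv4 address; emit it only if the host was up
live_hosts=$(awk '/<host>/{up=0; ip=""}
                  /state="up"/{up=1}
                  /addrtype="ipv4"/{match($0, /addr="[0-9.]+"/); ip=substr($0, RSTART+6, RLENGTH-7)}
                  /<\/host>/{if (up && ip) print ip}' discovery-sample.xml)
echo "$live_hosts"
```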

Manual verification provides depth. Automated tools miss business logic flaws, misconfigured access controls, and policy violations that require human judgment. I always manually review:

  • Firewall rule sets for overly permissive rules (0.0.0.0/0 on ingress)
  • Privileged account inventories against HR's active employee list
  • Backup and recovery test logs (not just that backups exist, but that restores have been tested)
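The account reconciliation in the second bullet is easy to script once both lists are exported one account per line. A sketch with illustrative file names and contents — real audits would pull these from the identity provider and the HR system:

```shell
# Illustrative inputs, one account per line
printf 'alice\nbob\nsvc-backup\n' > privileged-accounts.txt
printf 'alice\nbob\ncarol\n'      > hr-active-employees.txt

sort -u privileged-accounts.txt > priv.sorted
sort -u hr-active-employees.txt > hr.sorted

# comm -23 prints lines unique to the first file:
# privileged accounts with no active HR record -- each one is a finding
orphans=$(comm -23 priv.sorted hr.sorted)
echo "$orphans"
```

Every account this prints is either a departed employee who kept access or an unregistered service account; both belong in the findings list.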

Phase 3: Risk-Based Prioritization

Raw findings are noise. Prioritized findings are intelligence.

I score every finding using a contextual risk model that factors in CVSS base score, asset criticality, and exploitability in our environment. A critical CVE on an internet-facing payment server is not the same as a critical CVE on an isolated test VM—even though the scanner treats them identically.

I maintain a simple classification matrix:

Priority        Criteria                                        Remediation SLA
--------------  ----------------------------------------------  ----------------------
P1 - Critical   Exploitable, internet-facing, sensitive data    72 hours
P2 - High       Exploitable, internal, business-critical        14 days
P3 - Medium     Requires local access or chaining               30 days
P4 - Low        Informational, hardening recommendations        Next maintenance cycle
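Applied mechanically, the matrix looks roughly like this. The CVSS band cutoffs, argument names, and exposure labels are all illustrative, not policy — tune them to your own environment:

```shell
# Map a finding to a priority. Arguments (illustrative):
#   $1 CVSS base score, $2 exposure (internet|internal|isolated),
#   $3 data sensitivity (sensitive|normal)
assign_priority() {
  band=$(awk -v s="$1" 'BEGIN {
    if (s >= 7.0) print "high"; else if (s >= 4.0) print "med"; else print "low" }')
  if [ "$band" = "high" ] && [ "$2" = "internet" ] && [ "$3" = "sensitive" ]; then
    echo "P1"
  elif [ "$band" = "high" ] && [ "$2" != "isolated" ]; then
    echo "P2"
  elif [ "$band" = "med" ] || [ "$2" = "isolated" ]; then
    echo "P3"
  else
    echo "P4"
  fi
}

assign_priority 9.8 internet sensitive   # -> P1
assign_priority 9.8 isolated normal      # -> P3: same CVE, isolated test VM
```

Note how the same 9.8 CVE lands two tiers apart depending on exposure — exactly the payment-server-versus-test-VM distinction above.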

Phase 4: Reporting and Remediation Tracking

I deliver two reports: an executive summary (one page, risk posture, trend lines, top five findings) and a technical detail report (every finding, evidence, and specific remediation steps). Leadership needs the first. Engineers need the second. Sending either audience the wrong report guarantees inaction.

Track remediation in your ticketing system, not in the report itself. Assign owners, set SLAs from your priority matrix, and schedule a rescan to validate closure.
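A small helper can stamp each ticket with its SLA deadline at creation time, so the due date is never left to the assignee's judgment. A sketch assuming GNU date, with SLAs mirroring the Phase 3 matrix:

```shell
# Compute a remediation due date from the priority tier (GNU date assumed)
due_date() {
  case "$1" in
    P1) date -u -d "+72 hours" +%Y-%m-%d ;;
    P2) date -u -d "+14 days"  +%Y-%m-%d ;;
    P3) date -u -d "+30 days"  +%Y-%m-%d ;;
    *)  echo "next-maintenance-cycle" ;;
  esac
}

echo "Remediate by: $(due_date P2)"
```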

Closing Thought

A security audit is not a point-in-time snapshot you file away. It's a feedback loop. Each cycle should measurably reduce your attack surface, and if it doesn't, the process—not just the infrastructure—needs fixing.


Have questions about security audit planning and execution? I'm always happy to talk shop — reach out or connect with me on LinkedIn.
