Why Benchmarks Matter#

Security benchmarks translate “harden the cluster” into specific, testable checks. Run a scan, get a pass/fail report, fix what failed. CIS publishes the most widely adopted benchmarks for Kubernetes and Docker. NSA/CISA provide additional Kubernetes-specific threat guidance.

CIS Kubernetes Benchmark with kube-bench#

kube-bench runs CIS Kubernetes Benchmark checks against cluster nodes, testing API server flags, etcd configuration, kubelet settings, and control plane security:

# Run on a master node
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job-master.yaml

# Run on worker nodes
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job-node.yaml

# Read results
kubectl logs job/kube-bench

Or run directly on a node:

kube-bench run --targets master
kube-bench run --targets node

Output shows PASS, FAIL, WARN, or INFO per check. Focus on FAIL results first. WARN checks require manual assessment.
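For machine-readable triage, kube-bench can also emit JSON (the `--json` flag), and a small jq filter then surfaces only the failing checks. The fragment below is a trimmed, hypothetical sample of that output; the field names (`Controls`, `tests`, `results`, `status`) should be verified against the kube-bench version you run:

```shell
# Hypothetical sample of kube-bench --json output, reduced to the
# fields used below (verify field names against your version)
cat > /tmp/kube-bench.json <<'EOF'
{"Controls":[{"tests":[{"results":[
  {"test_number":"1.2.1","test_desc":"Ensure --anonymous-auth is set to false","status":"FAIL"},
  {"test_number":"1.2.2","test_desc":"Ensure --token-auth-file is not set","status":"PASS"}
]}]}]}
EOF

# Print only the failing checks so remediation starts with them
jq -r '.Controls[].tests[].results[]
       | select(.status=="FAIL")
       | "\(.test_number) \(.test_desc)"' /tmp/kube-bench.json
```

The same filter works on the logs of the in-cluster Job if it was created from a manifest that passes `--json` to kube-bench.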

Key Checks That Often Fail#

  • Anonymous auth enabled on kubelet. Set --anonymous-auth=false in kubelet configuration.
  • No audit policy configured. The API server should log at least metadata-level audit events. Configure with --audit-policy-file and --audit-log-path.
  • etcd not encrypted. Enable encryption at rest for secrets as described in the secret management guide.
  • Permissive authorization mode. Set --authorization-mode=Node,RBAC on the API server; AlwaysAllow must not appear in the list.
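The kubelet and audit fixes above can be sketched as configuration fragments. The field names come from the KubeletConfiguration (`kubelet.config.k8s.io/v1beta1`) and audit Policy (`audit.k8s.io/v1`) APIs; this is a minimal illustration to merge into existing configuration, not a complete file:

```yaml
# KubeletConfiguration fragment: file-based equivalent of --anonymous-auth=false
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false        # reject unauthenticated kubelet API requests
  webhook:
    enabled: true         # delegate authentication to the API server
authorization:
  mode: Webhook           # never AlwaysAllow
---
# Minimal audit policy: metadata-level logging for every request.
# Reference it with --audit-policy-file and set --audit-log-path.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata
```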

CIS Docker Benchmark#

git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security && sudo bash docker-bench-security.sh

Common findings: daemon without user namespace remapping, containers running as root, no resource limits, Docker socket mounted into containers (allows escape).
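Several of those findings are daemon-level settings. A sketch of an /etc/docker/daemon.json that addresses them, using Docker's standard daemon configuration keys; userns-remap changes file-ownership behavior for existing containers, so test it before rolling out:

```json
{
  "userns-remap": "default",
  "icc": false,
  "live-restore": true,
  "no-new-privileges": true
}
```

userns-remap maps container root to an unprivileged host UID range; icc disables inter-container communication on the default bridge; no-new-privileges stops processes gaining privileges via setuid binaries. Restart the daemon after editing.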

NSA/CISA Kubernetes Hardening Guide#

Complements CIS with threat-focused recommendations.

Pod security: Use Pod Security Standards (restricted profile):

apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted

Network segmentation: Default-deny network policies in every namespace.

Authentication: Disable anonymous auth, use short-lived tokens, audit RBAC bindings.

Audit logging: Enable API server audit logging and ship logs to a SIEM. Audit logs are the primary forensic tool after an incident.
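A default-deny policy covering both directions can be sketched as below; the namespace name is an example, and workloads then need explicit allow policies for the traffic they legitimately require:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production   # example namespace
spec:
  podSelector: {}         # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```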

Automated Scanning Tools#

Kubescape#

Scans against the NSA/CISA, MITRE ATT&CK, and CIS frameworks:

# Scan against the NSA framework
kubescape scan framework nsa --enable-host-scan

# Scan a specific namespace
kubescape scan framework cis-v1.23-t1.0.1 --include-namespaces production

# Output as JSON for CI/CD integration
kubescape scan framework nsa -f json -o results.json
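In CI, the JSON output can gate the pipeline on an overall score. The JSON path below (`.summaryDetails.complianceScore`) and the sample file are assumptions about kubescape's output schema; verify them against the version you run:

```shell
# Hypothetical fragment of kubescape's results.json, reduced to one field
cat > /tmp/results.json <<'EOF'
{"summaryDetails": {"complianceScore": 72.5}}
EOF

score=$(jq '.summaryDetails.complianceScore' /tmp/results.json)
threshold=80
# awk handles the floating-point comparison; in a real pipeline,
# replace the echo with a nonzero exit to fail the build
if awk -v s="$score" -v t="$threshold" 'BEGIN { exit !(s < t) }'; then
  echo "compliance score $score is below threshold $threshold"
fi
```

If your kubescape version supports a compliance-threshold flag, the scan can fail directly on a low score without this post-processing.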

Polaris#

Checks deployment-level best practices: resource limits, health probes, security contexts, image pull policies:

# CLI scan
polaris audit --format=pretty

# Run as a dashboard in-cluster
helm repo add fairwinds-stable https://charts.fairwinds.com/stable
helm install polaris fairwinds-stable/polaris --namespace polaris --create-namespace

Polaris also runs as a validating webhook to block non-compliant deployments.

kube-score#

Static analysis on Kubernetes YAML manifests before they reach the cluster:

# Score a manifest file
kube-score score deployment.yaml

# Score from stdin (integrate with helm template)
helm template myapp ./chart | kube-score score -

Catches issues in CI before deployment.

PCI-DSS Considerations for Containers#

PCI-DSS (Payment Card Industry Data Security Standard) requires specific controls when cardholder data flows through containerized workloads:

Requirement 2 (Secure configurations): CIS benchmarks map directly here. Run kube-bench and docker-bench-security and remediate failures.

Requirement 6 (Secure development): Container image scanning in CI/CD pipelines. No images with critical CVEs reach production. SBOMs demonstrate dependency awareness.

Requirement 7 (Restrict access): RBAC with least privilege. Service accounts scoped per workload. No shared credentials.

Requirement 10 (Logging and monitoring): API server audit logs, container runtime logs, network flow logs. Centralized, tamper-evident storage.

Requirement 11 (Regular testing): Automated scanning on schedule, not just at deploy time. Re-scan running workloads for newly discovered CVEs.
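The scheduled re-scan in Requirement 11 can run in-cluster as a CronJob. A sketch, assuming a kubescape container image and a pre-created service account with cluster-wide read access; the image reference, namespace, and service account name are placeholders:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scheduled-kubescape
  namespace: security                 # placeholder namespace
spec:
  schedule: "0 3 * * 0"               # weekly, Sunday 03:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: kubescape-scanner   # needs read access cluster-wide
          restartPolicy: Never
          containers:
            - name: kubescape
              image: quay.io/kubescape/kubescape:latest   # image ref is an assumption
              args: ["scan", "framework", "nsa"]
```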

SOC2 Controls Mapping#

SOC2 Trust Service Criteria map to Kubernetes controls as follows:

  • CC6.1 (Logical access): RBAC, network policies, namespace isolation, OIDC authentication for cluster access.
  • CC6.6 (System boundaries): Network policies, ingress controllers with WAF, API gateway authentication.
  • CC7.1 (Monitoring): Prometheus metrics, audit logs, Falco runtime detection.
  • CC7.2 (Incident detection): Alert rules on suspicious API calls, unexpected privilege escalation, container escapes.
  • CC8.1 (Change management): GitOps workflows (ArgoCD/Flux), image signing, admission controllers that enforce signed images.

Remediation Prioritization#

Not all findings are equal. Prioritize by exploitability and blast radius:

  1. Critical, externally reachable: Public-facing services running as root with no network policies. Fix immediately.
  2. Critical, internal: Cluster-admin role bindings to default service accounts. High risk if any pod is compromised.
  3. High, configuration: Missing audit logging, no encryption at rest. These are evidence gaps, not immediate exploits, but they blind you during incidents.
  4. Medium, best practice: Missing resource limits, no pod disruption budgets. Operational risk rather than direct security risk.

Run scans in CI to prevent new violations. Run scheduled scans against the live cluster to catch drift. Track findings in a ticketing system with SLAs by severity: critical within 48 hours, high within one week, medium within one sprint.