Pod Security Standards and Admission#

PodSecurityPolicy (PSP) was removed from Kubernetes in v1.25. Its replacement is Pod Security Admission (PSA), a built-in admission controller that enforces three predefined security profiles. PSA is simpler than PSP – no separate policy objects, no RBAC bindings to manage – but it is also less flexible. You apply security standards to namespaces via labels and the admission controller handles enforcement.

The Three Security Standards#

Kubernetes defines three Pod Security Standards, each progressively more restrictive:

Privileged – completely unrestricted. No security checks are applied. Use this for system-level workloads like CNI plugins, storage drivers, and log collectors that genuinely need elevated privileges.

Baseline – prevents known privilege escalation vectors. Blocks hostNetwork, hostPID, hostIPC, hostPath volumes, privileged containers, and adding dangerous capabilities. Most well-written applications work under baseline without modification.

Restricted – heavily locked down. Requires running as non-root, disallowing privilege escalation, dropping all capabilities (only NET_BIND_SERVICE may be added back), and setting a seccomp profile. Many off-the-shelf Helm charts will fail under restricted without modifications.

Enforcement Modes#

PSA supports three enforcement modes per namespace. You can combine them:

  • enforce – reject pods that violate the standard. The pod is not created.
  • audit – allow the pod but record the violation in the audit log.
  • warn – allow the pod but return a warning to the user in the kubectl output.

The recommended approach is to start with warn and audit to discover violations, then switch to enforce once you have resolved them.
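
Before labeling anything you can preview the impact. A server-side dry run of the label command makes the API server evaluate every existing pod in the namespace against the standard and report violations, without persisting the label (production is an example namespace name):

```shell
# Dry-run the enforce label: nothing is saved, but the API server
# warns about every existing pod that would violate the standard
kubectl label --dry-run=server --overwrite namespace production \
  pod-security.kubernetes.io/enforce=restricted
```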

Applying PSA via Namespace Labels#

PSA is configured entirely through labels on namespaces. No separate policy objects to create:

apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: v1.28
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: v1.28
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: v1.28

This configuration enforces baseline (blocks obvious privilege escalations), and audits/warns on restricted violations (so you can work toward the stricter standard without breaking anything). The -version labels pin to a specific Kubernetes version’s definition of each standard, preventing surprise breakage when you upgrade the cluster.

Apply to an existing namespace with kubectl:

kubectl label namespace production \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/audit=restricted \
  --overwrite
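
You can confirm the labels took effect with --show-labels, or read a single label back with jsonpath (note the escaped dots in the label key):

```shell
# All labels on the namespace
kubectl get namespace production --show-labels

# Just the enforce level
kubectl get namespace production \
  -o jsonpath='{.metadata.labels.pod-security\.kubernetes\.io/enforce}'
```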

What Restricted Actually Requires#

The restricted standard is the target for production application workloads. Here is what a compliant pod spec looks like:

apiVersion: v1
kind: Pod
metadata:
  name: secure-app
  namespace: production
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: myregistry.io/app:v1.2.3
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true  # not required by restricted, but good extra hardening
      runAsNonRoot: true
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: RuntimeDefault
    volumeMounts:
    - name: tmp
      mountPath: /tmp
  volumes:
  - name: tmp
    emptyDir: {}

The specific requirements for restricted:

  • runAsNonRoot: true – container must not run as UID 0
  • allowPrivilegeEscalation: false – sets the no_new_privs flag, so setuid/setgid binaries and file capabilities cannot grant extra privileges
  • capabilities.drop: ["ALL"] – drop all Linux capabilities
  • seccompProfile.type: RuntimeDefault or Localhost – a seccomp profile must be set
  • No hostNetwork, hostPID, hostIPC
  • No hostPath volumes
  • No privileged: true
  • Volume types limited to: configMap, emptyDir, projected, secret, downwardAPI, persistentVolumeClaim, ephemeral, csi
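
A manifest can be checked against these requirements without creating anything: a server-side dry run passes the pod through the full admission chain, including PSA, in a namespace whose enforce label is set to restricted (pod.yaml is a placeholder for your manifest):

```shell
# Runs admission (including PSA) on the server but persists nothing;
# a non-compliant pod fails with a "violates PodSecurity" error
kubectl apply --dry-run=server -f pod.yaml
```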

Migration Strategy from PodSecurityPolicy#

If you are migrating from PSP, follow this sequence to avoid breaking workloads:

Step 1: Enable PSA in warn + audit alongside existing PSPs. PSPs and PSA can coexist. Label your namespaces with warn and audit mode set to your target standard. PSPs continue to enforce while you gather data.

# Apply warn+audit to all non-system namespaces
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep -v '^kube-'); do
  kubectl label namespace "$ns" \
    pod-security.kubernetes.io/warn=baseline \
    pod-security.kubernetes.io/audit=baseline \
    --overwrite
done

Step 2: Review audit logs and warnings. Check the API server audit logs for PSA violations. Fix workloads that do not comply.

# Warnings are returned to kubectl clients at apply time; rejections (once
# enforce is on) appear as FailedCreate events mentioning "violates PodSecurity"
kubectl get events -A --field-selector reason=FailedCreate | grep -i podsecurity
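
If API server audit logging is enabled, PSA records each violation under the audit annotation key pod-security.kubernetes.io/audit-violations. A jq query over the audit log pulls them out (the log path is an assumption; use whatever you set in --audit-log-path):

```shell
# Extract PSA violations from the API server audit log
jq -r 'select(.annotations["pod-security.kubernetes.io/audit-violations"] != null)
  | "\(.objectRef.namespace)/\(.objectRef.name): \(.annotations["pod-security.kubernetes.io/audit-violations"])"' \
  /var/log/kubernetes/audit.log
```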

Step 3: Fix non-compliant workloads. Common fixes:

# Add to every Deployment's pod template
spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: app
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]

Step 4: Switch to enforce mode.

kubectl label namespace production \
  pod-security.kubernetes.io/enforce=baseline \
  --overwrite

Step 5: Remove PSPs. Once all namespaces are enforced by PSA, delete the PodSecurityPolicy resources and their associated RBAC bindings.
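
To inventory what is left, list the PSP objects and the ClusterRoles that grant the use verb on them (this requires a cluster still below v1.25, where the PSP API exists):

```shell
# Remaining PodSecurityPolicy objects
kubectl get podsecuritypolicies

# ClusterRoles whose rules reference PSPs
kubectl get clusterroles -o json \
  | jq -r '.items[]
      | select(any(.rules[]?; (.resources // []) | index("podsecuritypolicies")))
      | .metadata.name'
```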

Cluster-Wide Exemptions#

Some workloads legitimately need elevated privileges. Configure cluster-wide exemptions via the AdmissionConfiguration file passed to the API server:

apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1
    kind: PodSecurityConfiguration
    defaults:
      enforce: baseline
      enforce-version: latest
      audit: restricted
      audit-version: latest
      warn: restricted
      warn-version: latest
    exemptions:
      usernames: []
      runtimeClasses: []
      namespaces:
      - kube-system
      - monitoring
      - logging

This sets cluster-wide defaults (so new namespaces get security standards automatically) and exempts system namespaces. Namespace-level labels override these defaults.
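
The file itself is wired in via the kube-apiserver --admission-control-config-file flag. On a kubeadm cluster that means editing the static pod manifest and adding a hostPath mount so the file is visible inside the API server container (the path below is an assumption):

```shell
# Flag to add to the kube-apiserver invocation
# (kubeadm: /etc/kubernetes/manifests/kube-apiserver.yaml, plus a mount)
--admission-control-config-file=/etc/kubernetes/psa-config.yaml
```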

Common Gotchas#

Restricted breaks most Helm charts. Many popular charts (nginx-ingress, prometheus, grafana) do not set seccomp profiles or drop capabilities by default. You will need to pass security context values via Helm values:

# values.yaml for a typical chart
podSecurityContext:
  runAsNonRoot: true
  seccompProfile:
    type: RuntimeDefault
containerSecurityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]

Check the chart’s documentation for the correct value keys – they vary between charts.

Init containers must also comply. PSA checks all containers, including init containers. An init container that runs as root to set permissions on a volume will be rejected under restricted. Use fsGroup in the pod security context instead of root init containers.
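
For example, instead of an init container running as root to chown a data volume, let the kubelet set group ownership at mount time (the GID 2000 is an arbitrary example):

```yaml
spec:
  securityContext:
    runAsNonRoot: true
    fsGroup: 2000                         # volumes become group-owned by GID 2000
    fsGroupChangePolicy: "OnRootMismatch" # skip the recursive chown if ownership is already correct
    seccompProfile:
      type: RuntimeDefault
```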

Enforce mode only applies to pods. PSA evaluates pod specs at admission time. In warn and audit modes it also evaluates the pod template inside Deployments, StatefulSets, Jobs, and other workload resources, but enforce does not: a violating Deployment is accepted, and its pods are rejected only when the ReplicaSet tries to create them (the error shows up in kubectl describe replicaset). Existing running pods are not affected when you add labels to a namespace – only new pods are checked.

No per-pod exemptions. Unlike PSP, which could be granted to specific service accounts, PSA applies uniformly to an entire namespace. If one workload in a namespace needs hostNetwork, the entire namespace must use baseline or privileged. This pushes you toward splitting workloads into separate namespaces by security level, which is generally a good practice anyway.

DaemonSets for monitoring and logging. Node-level agents (like Fluent Bit, Datadog agent, Prometheus node-exporter) typically need hostPath, hostNetwork, or hostPID. Deploy these to a dedicated namespace (e.g., monitoring) with privileged enforcement, and keep application namespaces on baseline or restricted.
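
Pinning such a namespace to privileged is just another label; keeping warn at baseline means accidentally placing an ordinary workload there still surfaces a warning:

```shell
# Node agents need host access: enforce privileged in their namespace,
# but keep warning on baseline violations
kubectl label namespace monitoring \
  pod-security.kubernetes.io/enforce=privileged \
  pod-security.kubernetes.io/warn=baseline \
  --overwrite
```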