# GKE Security and Identity
GKE security covers identity (who can do what), workload isolation (sandboxing untrusted code), supply chain integrity (ensuring only trusted images run), and data protection (encryption at rest). These features layer on top of standard Kubernetes RBAC and network policies.
## Workload Identity Federation
Workload Identity Federation for GKE is the successor to the original Workload Identity. It follows the standard GCP IAM federation model and can grant IAM roles directly to Kubernetes identities, but the familiar pattern still applies: bind a Kubernetes service account (KSA) to a Google Cloud service account (GSA) so pods get GCP credentials without exported keys.
```bash
# Create a GCP service account
gcloud iam service-accounts create app-sa \
  --display-name "Application Service Account"

# Grant GCP permissions to the service account
gcloud projects add-iam-policy-binding my-project \
  --member "serviceAccount:app-sa@my-project.iam.gserviceaccount.com" \
  --role "roles/storage.objectViewer"

# Allow the KSA to impersonate the GSA
gcloud iam service-accounts add-iam-policy-binding \
  app-sa@my-project.iam.gserviceaccount.com \
  --role "roles/iam.workloadIdentityUser" \
  --member "serviceAccount:my-project.svc.id.goog[production/app-ksa]"
```
```yaml
# Kubernetes ServiceAccount with annotation
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-ksa
  namespace: production
  annotations:
    iam.gke.io/gcp-service-account: app-sa@my-project.iam.gserviceaccount.com
```

Any pod using `serviceAccountName: app-ksa` automatically receives credentials for `app-sa`. The GKE metadata server intercepts calls to the instance metadata endpoint and returns federated tokens instead of the node's service account credentials.
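A workload opts in simply by naming the KSA in its pod template. As a minimal sketch (the Deployment name and image below are placeholders, not part of the setup above):

```yaml
# Hypothetical workload: any pod template that sets serviceAccountName: app-ksa
# receives tokens for app-sa via the GKE metadata server.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      serviceAccountName: app-ksa
      containers:
      - name: app
        image: us-docker.pkg.dev/my-project/repo/app:latest
```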
Debugging Workload Identity: If pods get 403 Forbidden from GCP APIs, check three things: (1) the KSA annotation matches the GSA email exactly, (2) the IAM binding includes the correct [namespace/ksa-name] pair, and (3) the workload pool is enabled on the cluster.
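One way to check the binding end to end is to launch a throwaway pod under the KSA and see which identity it resolves (the pod name below is arbitrary and exists only for the test):

```bash
# Start a test pod bound to app-ksa, then run `gcloud auth list` inside it;
# if the binding works, the active account is the GSA, not the node's.
kubectl run wi-test --rm -it --restart=Never \
  --namespace production \
  --image=google/cloud-sdk:slim \
  --overrides='{"spec": {"serviceAccountName": "app-ksa"}}' \
  -- bash
```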
## GKE RBAC and IAM Integration
Google Cloud IAM provides predefined roles that control access to GKE clusters:

- `roles/container.admin` – full control over all cluster resources
- `roles/container.developer` – read/write access to workload resources (Deployments, Services, Pods), no access to RBAC or node configuration
- `roles/container.viewer` – read-only access to cluster resources
IAM controls who can access the cluster API server. Kubernetes RBAC controls what they can do once authenticated. Use both together:
```bash
# Grant IAM access to the cluster
gcloud projects add-iam-policy-binding my-project \
  --member "user:dev@example.com" \
  --role "roles/container.developer"
```

```yaml
# Fine-grained RBAC within the cluster
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-deployer
  namespace: staging
subjects:
- kind: User
  name: dev@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```

The IAM role gets the user past the API server's authentication. The RoleBinding scopes what they can touch to the staging namespace only.
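To sanity-check the combined effect, the developer can fetch credentials and ask the API server what their identity is allowed to do (cluster name assumed from the other examples in this section):

```bash
# Fetch cluster credentials, then query effective permissions per namespace
gcloud container clusters get-credentials my-cluster --region us-central1
kubectl auth can-i create deployments --namespace staging
kubectl auth can-i create deployments --namespace production
```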
## Binary Authorization
Binary Authorization enforces that only signed container images run in your cluster. It uses attestations (signatures over an image digest, recorded through the Container Analysis API) to verify image provenance.
```bash
# Enable Binary Authorization on the cluster
gcloud container clusters update my-cluster \
  --region us-central1 \
  --binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE

# Create an attestor
gcloud container binauthz attestors create build-attestor \
  --attestation-authority-note=build-note \
  --attestation-authority-note-project=my-project

# Set the policy to require attestation
gcloud container binauthz policy export > /tmp/policy.yaml
```

Edit the policy YAML to require attestation:
```yaml
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
  - projects/my-project/attestors/build-attestor
globalPolicyEvaluationMode: ENABLE
```

Then import the edited policy:

```bash
gcloud container binauthz policy import /tmp/policy.yaml
```

In your CI pipeline, after building and scanning the image, create an attestation. Without it, the GKE admission controller rejects the pod.
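A sketch of that CI step, assuming the attestor above is backed by a Cloud KMS signing key (the key ring, key name, and image digest shown are placeholders):

```bash
# Sign the image digest and attach the attestation to the attestor's note
gcloud container binauthz attestations sign-and-create \
  --artifact-url="us-docker.pkg.dev/my-project/repo/app@sha256:IMAGE_DIGEST" \
  --attestor="projects/my-project/attestors/build-attestor" \
  --keyversion="projects/my-project/locations/us-central1/keyRings/binauthz-ring/cryptoKeys/build-signer/cryptoKeyVersions/1"
```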
## GKE Sandbox (gVisor)
GKE Sandbox runs pods inside a gVisor user-space kernel, isolating them from the host. Use it for untrusted workloads, multi-tenant clusters, or when running arbitrary user code.
```bash
# Create a node pool with gVisor enabled
gcloud container node-pools create sandbox-pool \
  --cluster my-cluster \
  --region us-central1 \
  --machine-type e2-standard-4 \
  --sandbox type=gvisor \
  --num-nodes 2
```

```yaml
# Pod spec targeting the sandbox
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload
spec:
  runtimeClassName: gvisor
  nodeSelector:
    sandbox.gke.io/runtime: gvisor
  containers:
  - name: app
    image: us-docker.pkg.dev/my-project/repo/untrusted-app:latest
```

gVisor intercepts system calls at the container boundary. Not all syscalls are supported – workloads that rely on low-level Linux kernel features (certain storage drivers, raw sockets, kernel modules) may fail. Test thoroughly before deploying.
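One quick way to confirm a pod really runs in the sandbox is to look at the kernel it sees: inside gVisor, `dmesg` shows gVisor's own boot messages rather than the host kernel log (this assumes the container image ships the dmesg utility):

```bash
# Inside a sandboxed pod, dmesg reports gVisor startup lines instead of the host kernel's
kubectl exec -it untrusted-workload -- dmesg | grep -i gvisor
```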
## Shielded GKE Nodes
Shielded Nodes ensure that the node OS has not been tampered with. They use Secure Boot, vTPM, and integrity monitoring:
```bash
gcloud container clusters create my-cluster \
  --region us-central1 \
  --shielded-secure-boot \
  --shielded-integrity-monitoring
```

Autopilot clusters enable Shielded Nodes by default. For Standard clusters, enable it at creation time or update existing clusters.
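For an existing Standard cluster, the update looks roughly like this (expect the nodes to be re-created when the setting changes):

```bash
# Enable Shielded GKE Nodes on an existing Standard cluster
gcloud container clusters update my-cluster \
  --region us-central1 \
  --enable-shielded-nodes
```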
## Encryption at Rest with CMEK
By default, GKE encrypts etcd data and persistent disks with Google-managed keys. Customer-managed encryption keys (CMEK) give you control over the key and the ability to revoke access:
```bash
# Create a key ring and key in Cloud KMS
gcloud kms keyrings create gke-ring --location us-central1
gcloud kms keys create gke-key --keyring gke-ring \
  --location us-central1 --purpose encryption

# Grant the GKE service agent access to the key
gcloud kms keys add-iam-policy-binding gke-key \
  --keyring gke-ring --location us-central1 \
  --member "serviceAccount:service-PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com" \
  --role "roles/cloudkms.cryptoKeyEncrypterDecrypter"

# Create cluster with CMEK
gcloud container clusters create my-cluster \
  --region us-central1 \
  --database-encryption-key=projects/my-project/locations/us-central1/keyRings/gke-ring/cryptoKeys/gke-key
```
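To confirm which key protects application-layer secrets after creation, you can inspect the cluster's encryption settings (field name per the `clusters describe` output):

```bash
# Shows the KMS key name and encryption state for etcd-level secrets
gcloud container clusters describe my-cluster \
  --region us-central1 \
  --format="yaml(databaseEncryption)"
```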
## Secret Manager Integration

Instead of Kubernetes Secrets, you can mount Google Secret Manager secrets directly into pods using the Secret Manager CSI driver:
```bash
# Enable the add-on
gcloud container clusters update my-cluster \
  --region us-central1 \
  --enable-secret-manager
```

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-secrets
spec:
  provider: gke
  parameters:
    secrets: |
      - resourceName: "projects/my-project/secrets/db-password/versions/latest"
        path: "db-password"
```
```yaml
# Mount in the pod spec
volumes:
- name: secrets
  csi:
    driver: secrets-store.csi.k8s.io
    readOnly: true
    volumeAttributes:
      secretProviderClass: app-secrets
containers:
- name: app
  volumeMounts:
  - name: secrets
    mountPath: /var/secrets
    readOnly: true
```

The pod's Workload Identity service account must have `roles/secretmanager.secretAccessor` on the referenced secrets.
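Continuing the Workload Identity example from earlier (where `app-ksa` impersonates `app-sa`), the grant might look like this:

```bash
# Allow the GSA behind the pod's Workload Identity to read the secret
gcloud secrets add-iam-policy-binding db-password \
  --member "serviceAccount:app-sa@my-project.iam.gserviceaccount.com" \
  --role "roles/secretmanager.secretAccessor"
```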
## Security Posture Dashboard
GKE’s Security Posture dashboard in the Cloud Console surfaces workload vulnerabilities, misconfigured RBAC, missing network policies, and CVEs in running container images. Enable it at the fleet level or per-cluster. It runs continuous scans and maps findings to CIS benchmarks. Check the GKE security bulletins page regularly – Google publishes advisories for critical CVEs affecting GKE nodes and control plane components.
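If you prefer the CLI to the Console for per-cluster enablement, a sketch of the relevant update (flag names as used by recent gcloud releases; adjust the tier to your subscription):

```bash
# Turn on configuration auditing and workload vulnerability scanning for one cluster
gcloud container clusters update my-cluster \
  --region us-central1 \
  --security-posture=standard \
  --workload-vulnerability-scanning=standard
```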