AKS Identity and Security#

AKS identity operates at three levels: who can access the cluster API (authentication), what they can do inside it (authorization), and how pods authenticate to Azure services (workload identity). Each level has Azure-specific mechanisms that replace or extend vanilla Kubernetes patterns.

Entra ID Integration (Azure AD)#

AKS supports two Entra ID integration modes.

AKS-managed Azure AD: Enable with --enable-aad at cluster creation. AKS handles the app registrations and token validation. This is the recommended approach.

Bring Your Own (BYO) Azure AD: You create the server and client app registrations manually. This is only needed for specific compliance scenarios; for new clusters, always use AKS-managed.

az aks create \
  --resource-group myapp-rg \
  --name myapp-aks \
  --enable-aad \
  --aad-admin-group-object-ids <group-object-id> \
  --enable-azure-rbac

The --aad-admin-group-object-ids flag designates an Entra ID security group whose members get cluster-admin access. Without this, nobody can access the cluster after creation (except via --admin credentials).
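If you don't have the group's object ID handy, you can look it up by display name (the group name here is hypothetical):

# Resolve an Entra ID group's object ID by display name
az ad group show --group "aks-admins" --query id -o tsv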

Users authenticate with kubelogin, which handles Entra ID token acquisition:

az aks get-credentials --resource-group myapp-rg --name myapp-aks
kubelogin convert-kubeconfig -l azurecli

# For non-interactive scenarios (CI/CD)
kubelogin convert-kubeconfig -l spn
export AAD_SERVICE_PRINCIPAL_CLIENT_ID=<client-id>
export AAD_SERVICE_PRINCIPAL_CLIENT_SECRET=<client-secret>

Azure RBAC for Kubernetes Authorization#

With --enable-azure-rbac, you manage Kubernetes permissions through Azure role assignments instead of (or in addition to) Kubernetes RBAC objects. This unifies access control in Azure and makes it auditable through Azure Activity Log.

Four built-in roles exist:

| Azure Role | Kubernetes Equivalent |
| --- | --- |
| Azure Kubernetes Service RBAC Cluster Admin | cluster-admin |
| Azure Kubernetes Service RBAC Admin | admin (namespace-scoped) |
| Azure Kubernetes Service RBAC Writer | edit (namespace-scoped) |
| Azure Kubernetes Service RBAC Reader | view (namespace-scoped) |

# Grant a user read access to a specific namespace
az role assignment create \
  --assignee user@example.com \
  --role "Azure Kubernetes Service RBAC Reader" \
  --scope "/subscriptions/<sub>/resourceGroups/myapp-rg/providers/Microsoft.ContainerService/managedClusters/myapp-aks/namespaces/production"

# Grant a group admin access cluster-wide
az role assignment create \
  --assignee-object-id <group-object-id> \
  --assignee-principal-type Group \
  --role "Azure Kubernetes Service RBAC Cluster Admin" \
  --scope "/subscriptions/<sub>/resourceGroups/myapp-rg/providers/Microsoft.ContainerService/managedClusters/myapp-aks"

Azure RBAC and Kubernetes RBAC work together. Azure RBAC is checked first; if it doesn't grant the request, evaluation falls through to Kubernetes RBAC. You can use Azure RBAC for broad access patterns and Kubernetes RBAC for fine-grained, custom roles, as sketched below.
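For example, a custom Kubernetes Role granting just enough access to restart deployments, something the four built-in Azure roles can't express (names here are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-restarter
  namespace: production
rules:
  # get/list to find deployments, patch to allow kubectl rollout restart
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-restarter-binding
  namespace: production
subjects:
  # With Entra ID integration, users are identified by object ID or UPN
  - kind: User
    name: "<user-object-id>"
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployment-restarter
  apiGroup: rbac.authorization.k8s.io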

Workload Identity#

Workload Identity replaces the deprecated AAD Pod Identity. It uses Kubernetes service account token federation to let pods authenticate to Azure services without storing credentials. The flow: Kubernetes issues a projected service account token; the pod presents it to Entra ID, which validates it against a federated credential and issues an Azure access token.

# Enable workload identity on the cluster
az aks update \
  --resource-group myapp-rg \
  --name myapp-aks \
  --enable-oidc-issuer \
  --enable-workload-identity

# Create a managed identity for the workload
az identity create --resource-group myapp-rg --name myapp-identity

# Create the Kubernetes service account
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp-sa
  namespace: production
  annotations:
    azure.workload.identity/client-id: "<managed-identity-client-id>"
  # Note: the azure.workload.identity/use label goes on pods, not the service account
EOF

# Create the federated credential (links the k8s SA to the Azure identity)
az identity federated-credential create \
  --name myapp-fedcred \
  --identity-name myapp-identity \
  --resource-group myapp-rg \
  --issuer "$(az aks show -g myapp-rg -n myapp-aks --query oidcIssuerProfile.issuerUrl -o tsv)" \
  --subject "system:serviceaccount:production:myapp-sa" \
  --audience "api://AzureADTokenExchange"

The workload identity webhook mutates pods that use this service account and carry the label azure.workload.identity/use: "true" on the pod itself, injecting the environment variables AZURE_CLIENT_ID, AZURE_TENANT_ID, and AZURE_FEDERATED_TOKEN_FILE. Azure SDKs pick these up through DefaultAzureCredential with no code changes.
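A minimal pod sketch satisfying both requirements (the image reference is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: myapp
  namespace: production
  labels:
    # Required: the webhook only mutates pods carrying this label
    azure.workload.identity/use: "true"
spec:
  serviceAccountName: myapp-sa
  containers:
    - name: app
      image: myregistry.azurecr.io/myapp:v1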

Key Vault Integration with CSI Driver#

The Secrets Store CSI driver mounts Key Vault secrets as files in the pod filesystem.
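The add-on can be enabled on an existing cluster:

az aks enable-addons \
  --resource-group myapp-rg \
  --name myapp-aks \
  --addons azure-keyvault-secrets-provider

The SecretProviderClass then declares which Key Vault objects to pull: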

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kv-secrets
  namespace: production
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "false"
    clientID: "<managed-identity-client-id>"
    keyvaultName: "myapp-kv"
    tenantId: "<tenant-id>"
    objects: |
      array:
        - |
          objectName: db-password
          objectType: secret
        - |
          objectName: api-key
          objectType: secret
  secretObjects:
    - secretName: myapp-secrets
      type: Opaque
      data:
        - objectName: db-password
          key: DB_PASSWORD
        - objectName: api-key
          key: API_KEY

The secretObjects section optionally syncs Key Vault secrets to Kubernetes Secret objects, letting you use them as environment variables. Without it, secrets are only available as mounted files.

Reference in the pod spec:

spec:
  serviceAccountName: myapp-sa
  volumes:
    - name: secrets
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: azure-kv-secrets
  containers:
    - name: app
      volumeMounts:
        - name: secrets
          mountPath: /mnt/secrets
          readOnly: true
      envFrom:
        - secretRef:
            name: myapp-secrets

Azure Policy for AKS#

Azure Policy deploys Gatekeeper (OPA) into your cluster and syncs policy definitions as constraint templates. Enable with az aks enable-addons --addons azure-policy.
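Once enabled, a quick sanity check confirms the components are running (pod names will vary):

# Gatekeeper webhook and audit pods installed by the add-on
kubectl get pods -n gatekeeper-system

# Azure Policy sync pods
kubectl get pods -n kube-system | grep azure-policy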

Assign built-in policy initiatives:

# Assign the "Kubernetes cluster pod security baseline standards" initiative
az policy assignment create \
  --name "aks-baseline-security" \
  --policy-set-definition "a8640138-9b0a-4a28-b8cb-1666c838647d" \
  --scope "/subscriptions/<sub>/resourceGroups/myapp-rg/providers/Microsoft.ContainerService/managedClusters/myapp-aks" \
  --params '{"effect": {"value": "deny"}}'

This initiative enforces: no privileged containers, no hostPath volumes, no host networking, required resource requests, and more. Set the effect to audit first to see violations without blocking deployments, then switch to deny after remediation.
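While running in audit mode, violations can be inspected through the Gatekeeper constraints the add-on syncs (the describe target is a placeholder; actual names depend on the assigned initiative):

# List synced constraints; violation counts appear in the output
kubectl get constraints

# Inspect the recorded violations on a specific constraint
kubectl describe <constraint-name>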

Hardening Checklist#

  1. Private cluster with authorized IP ranges: --enable-private-cluster or at minimum --api-server-authorized-ip-ranges to restrict API server access (items 1-3 are sketched after this list).
  2. Disable local accounts: --disable-local-accounts forces all authentication through Entra ID. No more --admin escape hatch.
  3. Enable Defender for Containers: az security pricing create --name Containers --tier standard provides runtime threat detection, vulnerability scanning, and security recommendations.
  4. Restrict egress: Use Azure Firewall or an NVA with a user-defined route table to control outbound traffic from the cluster.
  5. Image integrity: Enable Azure Policy to require images from trusted registries only, and enable ACR content trust for image signing.
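A sketch of items 1-3 against an existing cluster, using the names from earlier examples (--enable-private-cluster is create-time only; the CIDR is a placeholder):

# 1. Restrict API server access to known CIDRs
az aks update \
  --resource-group myapp-rg \
  --name myapp-aks \
  --api-server-authorized-ip-ranges 203.0.113.0/24

# 2. Force all authentication through Entra ID
az aks update \
  --resource-group myapp-rg \
  --name myapp-aks \
  --disable-local-accounts

# 3. Enable Defender for Containers (subscription-wide)
az security pricing create --name Containers --tier standard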