EKS IAM and Security#
EKS bridges two identity systems: AWS IAM and Kubernetes RBAC. Understanding how they connect is essential for both granting pods access to AWS services and controlling who can access the cluster.
IAM Roles for Service Accounts (IRSA)#
IRSA lets Kubernetes pods assume IAM roles without using node-level credentials. Each pod gets exactly the AWS permissions it needs, not the broad permissions attached to the node role.
How it works: EKS hosts an OIDC identity provider for each cluster. A mutating webhook injects a projected service account token (and the role ARN) into annotated pods; the AWS SDK presents that token to STS via AssumeRoleWithWebIdentity, and STS validates it against the OIDC provider and returns temporary credentials for the specified role.
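To make the flow concrete, here is an illustrative sketch (not the SDK's actual source) of what the web-identity credential provider does inside an IRSA pod. `AWS_ROLE_ARN` and `AWS_WEB_IDENTITY_TOKEN_FILE` are the environment variables the EKS webhook injects; `fetch_irsa_credentials` is a hypothetical helper, and the STS client is passed in so the flow can be exercised without AWS:

```python
import os

# Sketch of the IRSA token exchange. The EKS webhook injects AWS_ROLE_ARN
# and AWS_WEB_IDENTITY_TOKEN_FILE into annotated pods; the credential
# provider reads both and trades the projected token for temporary
# credentials via STS AssumeRoleWithWebIdentity.
def fetch_irsa_credentials(sts_client, session_name="my-app"):
    role_arn = os.environ["AWS_ROLE_ARN"]
    token_file = os.environ["AWS_WEB_IDENTITY_TOKEN_FILE"]
    with open(token_file) as f:
        token = f.read()
    # In real code this would be a boto3 STS client; injecting it keeps
    # the sketch runnable offline.
    resp = sts_client.assume_role_with_web_identity(
        RoleArn=role_arn,
        RoleSessionName=session_name,
        WebIdentityToken=token,
    )
    return resp["Credentials"]
```

The credentials expire and are refreshed by re-reading the token file, which the kubelet rotates automatically.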
Setup#
- Enable the OIDC provider (one-time per cluster):
eksctl utils associate-iam-oidc-provider --cluster my-cluster --approve
# Or get the OIDC URL for Terraform:
aws eks describe-cluster --name my-cluster \
  --query "cluster.identity.oidc.issuer" --output text
- Create an IAM role with a trust policy scoped to the service account:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:production:my-app-sa",
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
The sub condition locks this role to a specific service account in a specific namespace. Without it, any service account in the cluster could assume the role.
- Annotate the Kubernetes service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app-sa
  namespace: production
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::123456789012:role/my-app-s3-role"
- Reference the service account in your pod spec:
spec:
  serviceAccountName: my-app-sa
  containers:
    - name: app
      image: my-app:latest
The AWS SDK automatically picks up the injected credentials. No access keys to rotate, no hand-managed environment variables, no code changes.
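The trust policy shown earlier is mechanical to derive from four inputs, so it is a good candidate for generation rather than hand-editing. A hypothetical helper (`irsa_trust_policy`, not part of any SDK) that renders it, assuming the issuer URL comes from the describe-cluster call above:

```python
# Hypothetical helper that renders the IRSA trust policy from its four
# inputs. The sub/aud condition keys are the OIDC issuer URL with the
# https:// scheme stripped, which is why the issuer appears twice.
def irsa_trust_policy(account_id, oidc_issuer, namespace, service_account):
    issuer = oidc_issuer.removeprefix("https://")
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "Federated": f"arn:aws:iam::{account_id}:oidc-provider/{issuer}"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {"StringEquals": {
                f"{issuer}:sub": f"system:serviceaccount:{namespace}:{service_account}",
                f"{issuer}:aud": "sts.amazonaws.com",
            }},
        }],
    }
```

Generating the document this way makes it hard to forget the sub condition that scopes the role to one service account.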
Terraform IRSA Module#
module "irsa_s3_reader" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
  version = "~> 5.0"

  role_name = "my-app-s3-reader"

  oidc_providers = {
    main = {
      provider_arn               = module.eks.oidc_provider_arn
      namespace_service_accounts = ["production:my-app-sa"]
    }
  }

  role_policy_arns = {
    s3_read = aws_iam_policy.s3_read.arn
  }
}
EKS Pod Identity#
Pod Identity is the newer, simpler alternative to IRSA. It does not require an OIDC provider or complex trust policies. AWS manages the token exchange through the Pod Identity Agent add-on.
# Install the Pod Identity Agent add-on
aws eks create-addon --cluster-name my-cluster --addon-name eks-pod-identity-agent
# Create the association
aws eks create-pod-identity-association \
  --cluster-name my-cluster \
  --namespace production \
  --service-account my-app-sa \
  --role-arn arn:aws:iam::123456789012:role/my-app-role
The IAM role trust policy for Pod Identity is simpler:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "pods.eks.amazonaws.com"
      },
      "Action": ["sts:AssumeRole", "sts:TagSession"]
    }
  ]
}
Pod Identity is recommended for new setups. IRSA still works and is necessary for add-ons that do not yet support Pod Identity.
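The same association can be created from code. A sketch with an injected EKS client so the call can be exercised offline; boto3's EKS client does expose `create_pod_identity_association`, but treat the exact parameter and response shapes here as assumptions to verify against the API reference:

```python
# Sketch: create a Pod Identity association programmatically instead of
# via the CLI. The client is injected so this stays testable without AWS.
def associate_pod_identity(eks_client, cluster, namespace, service_account, role_arn):
    # Parameter names mirror the CLI flags above (best-effort assumption).
    resp = eks_client.create_pod_identity_association(
        clusterName=cluster,
        namespace=namespace,
        serviceAccount=service_account,
        roleArn=role_arn,
    )
    return resp["association"]["associationArn"]
```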
Cluster Access: aws-auth and Access Entries#
aws-auth ConfigMap (Legacy)#
The aws-auth ConfigMap in kube-system maps IAM roles and users to Kubernetes RBAC groups:
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/eks-node-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::123456789012:role/admin-role
      username: admin
      groups:
        - system:masters
Do not edit aws-auth with kubectl edit. A syntax error locks everyone out. Use eksctl create iamidentitymapping or manage it through Terraform.
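If you do generate aws-auth from automation, a cheap structural check before applying beats recovering a locked-out cluster. An illustrative validator (`validate_map_roles` is hypothetical) for mapRoles entries already parsed from YAML into plain dicts:

```python
# Illustrative pre-apply check for mapRoles entries. Each entry needs a
# rolearn, a username, and a list of groups; anything else is a sign the
# ConfigMap was hand-mangled.
def validate_map_roles(entries):
    errors = []
    for i, entry in enumerate(entries):
        for key in ("rolearn", "username", "groups"):
            if key not in entry:
                errors.append(f"entry {i}: missing '{key}'")
        if not str(entry.get("rolearn", "")).startswith("arn:"):
            errors.append(f"entry {i}: rolearn does not look like an ARN")
        if not isinstance(entry.get("groups"), list):
            errors.append(f"entry {i}: groups must be a list")
    return errors
```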
EKS Access Entries (Preferred)#
Access entries are the newer API-based approach that does not rely on a ConfigMap:
aws eks create-access-entry --cluster-name my-cluster \
  --principal-arn arn:aws:iam::123456789012:role/dev-role \
  --type STANDARD
aws eks associate-access-policy --cluster-name my-cluster \
  --principal-arn arn:aws:iam::123456789012:role/dev-role \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
  --access-scope '{"type":"namespace","namespaces":["production"]}'
Access entries are recoverable if misconfigured (unlike aws-auth, where a bad edit can lock you out). Use access entries for new clusters.
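The same two calls can be wrapped in a helper when you grant access in automation. A sketch with an injected EKS client (boto3 exposes `create_access_entry` and `associate_access_policy`; the parameter names here follow the CLI flags above and are best-effort assumptions):

```python
# Managed view policy ARN, copied from the CLI example above.
VIEW_POLICY = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"

# Sketch: create an access entry and scope the view policy to specific
# namespaces. The client is injected so the calls can be tested offline.
def grant_namespace_view(eks_client, cluster, principal_arn, namespaces):
    eks_client.create_access_entry(
        clusterName=cluster, principalArn=principal_arn, type="STANDARD"
    )
    eks_client.associate_access_policy(
        clusterName=cluster,
        principalArn=principal_arn,
        policyArn=VIEW_POLICY,
        accessScope={"type": "namespace", "namespaces": namespaces},
    )
```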
IAM Roles: Cluster vs Node#
Cluster IAM role – used by the EKS control plane to manage AWS resources. Needs AmazonEKSClusterPolicy. This role does not affect what pods can do.
Node IAM role – attached to worker EC2 instances. Needs AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, and AmazonEC2ContainerRegistryReadOnly. Every pod inherits this role unless you use IRSA or Pod Identity – which is why per-pod identity matters.
Encrypting Secrets with KMS#
By default, EKS stores Secrets as base64-encoded plaintext in etcd. Enable envelope encryption with a KMS key:
# For new clusters, pass --encryption-config at creation time
# For existing clusters:
aws eks associate-encryption-config --cluster-name my-cluster \
  --encryption-config '[{"resources":["secrets"],"provider":{"keyArn":"arn:aws:kms:us-east-1:123456789012:key/abc-123"}}]'
After enabling, re-encrypt existing Secrets: kubectl get secrets -A -o json | kubectl replace -f -.
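It is worth being blunt about why this matters: base64 is an encoding, not encryption, as a one-liner demonstrates:

```python
import base64

# base64 is trivially reversible: anyone who can read etcd (or run
# "kubectl get secret -o yaml") recovers the plaintext. Envelope
# encryption with KMS is what actually protects the data at rest.
stored = base64.b64encode(b"s3cr3t-db-password").decode()  # what the API stores by default
recovered = base64.b64decode(stored).decode()              # trivially reversed
```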
Private API Server Endpoint#
By default, the EKS API server is publicly accessible (authenticated by IAM). For production, disable public access or restrict it:
aws eks update-cluster-config --name my-cluster \
  --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true
With private-only access, kubectl must run from within the VPC (a bastion host, VPN, or AWS CloudShell connected to the VPC). Make sure your VPC has DNS resolution enabled and that the cluster security group allows inbound 443 from your client network.
Pod Security Standards#
EKS supports Kubernetes Pod Security Standards through namespace labels:
kubectl label namespace production \
pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/warn=restricted
baseline blocks known privilege escalations (hostNetwork, privileged containers, hostPath). restricted additionally enforces a non-root user, allowPrivilegeEscalation: false, dropping all capabilities, and a RuntimeDefault seccomp profile. Start with enforce=baseline and warn=restricted, then tighten once the warnings are addressed.
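As a rough illustration of what restricted demands of a container securityContext (expressed here as a plain dict), the checks can be sketched as follows. Real enforcement happens in the Pod Security admission controller, not in client code, and `meets_restricted` is hypothetical and covers only the controls named above:

```python
# Rough, illustrative check of a container securityContext dict against
# the restricted-profile requirements: non-root, no privilege escalation,
# all capabilities dropped, RuntimeDefault seccomp profile.
def meets_restricted(ctx):
    caps = ctx.get("capabilities", {})
    return (
        ctx.get("runAsNonRoot") is True
        and ctx.get("allowPrivilegeEscalation") is False
        and caps.get("drop") == ["ALL"]
        and ctx.get("seccompProfile", {}).get("type") == "RuntimeDefault"
    )
```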