ConfigMaps and Secrets#
ConfigMaps hold non-sensitive configuration data. Secrets hold sensitive data like passwords, tokens, and TLS certificates. They look similar in structure but differ in handling: Secret values are base64-encoded (an encoding, not encryption), the kubelet only distributes a Secret to nodes running pods that reference it, and Secrets can be encrypted at rest if the cluster is configured for it.
Creating ConfigMaps#
From a literal value:
kubectl create configmap app-config \
  --from-literal=LOG_LEVEL=info \
  --from-literal=MAX_CONNECTIONS=100
From a file:
kubectl create configmap nginx-config --from-file=nginx.conf
The key name defaults to the filename. Override it with --from-file=custom-key=nginx.conf.
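A quick way to verify which key ended up in the ConfigMap (the jsonpath output is just the raw data map):
kubectl get configmap nginx-config -o jsonpath='{.data}'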
From an env file:
# app.env contains KEY=VALUE pairs, one per line
kubectl create configmap app-config --from-env-file=app.env
Declarative YAML (recommended for version control):
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: payments-prod
data:
  LOG_LEVEL: "info"
  MAX_CONNECTIONS: "100"
  config.yaml: |
    server:
      port: 8080
      timeout: 30s
    database:
      pool_size: 10
Creating Secrets#
From literals:
kubectl create secret generic db-credentials \
  --from-literal=username=admin \
  --from-literal=password='s3cret!@#'
kubectl automatically base64-encodes the values.
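To read a value back later, pull the encoded field and decode it; the secret and key names are the ones created above:
kubectl get secret db-credentials -o jsonpath='{.data.password}' | base64 -d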
Declarative YAML:
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
  namespace: payments-prod
type: Opaque
data:
  username: YWRtaW4=     # echo -n "admin" | base64
  password: czNjcmV0IUAj # echo -n "s3cret!@#" | base64
The Base64 Gotcha#
Values under data: must be base64-encoded. If you put plaintext there, the API server rejects anything that isn't valid base64; worse, a plaintext value that happens to be valid base64 is accepted silently and your application receives garbage after decoding. Use stringData: to avoid this entirely:
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  username: admin
  password: "s3cret!@#"
stringData is write-only. When you kubectl get secret -o yaml, values always appear base64-encoded under data. But stringData is the safer way to define secrets in manifests because you write plaintext and Kubernetes encodes it for you.
Secret Types#
Kubernetes defines several built-in Secret types:
- Opaque – Generic key-value pairs. The default.
- kubernetes.io/dockerconfigjson – Image pull credentials for private registries.
- kubernetes.io/tls – TLS certificate and private key.
- kubernetes.io/basic-auth – Username and password (keys: username, password); see the sketch after this list.
- kubernetes.io/service-account-token – Automatically created for ServiceAccounts (legacy, pre-1.24).
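A minimal declarative sketch of the basic-auth type (the Secret name and values are illustrative; the type expects username and password keys):
apiVersion: v1
kind: Secret
metadata:
  name: app-basic-auth
type: kubernetes.io/basic-auth
stringData:
  username: admin
  password: "s3cret!@#"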
Creating a TLS secret:
kubectl create secret tls my-tls-cert \
  --cert=path/to/cert.pem \
  --key=path/to/key.pem
Creating an image pull secret:
kubectl create secret docker-registry registry-cred \
  --docker-server=registry.example.com \
  --docker-username=user \
  --docker-password=pass
Then reference it in your pod spec:
spec:
  imagePullSecrets:
    - name: registry-cred
Mounting as Volumes vs Environment Variables#
Environment Variables#
spec:
  containers:
    - name: app
      env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: LOG_LEVEL
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
Or inject all keys at once with envFrom:
spec:
  containers:
    - name: app
      envFrom:
        - configMapRef:
            name: app-config
        - secretRef:
            name: db-credentials
Volume Mounts#
spec:
  containers:
    - name: app
      volumeMounts:
        - name: config-volume
          mountPath: /etc/app/config
          readOnly: true
        - name: secret-volume
          mountPath: /etc/app/secrets
          readOnly: true
  volumes:
    - name: config-volume
      configMap:
        name: app-config
    - name: secret-volume
      secret:
        secretName: db-credentials
        defaultMode: 0400
Each key becomes a file in the mount directory. The file contents are the values. For Secrets mounted as volumes, set defaultMode: 0400 to restrict file permissions.
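From inside the container the keys show up as files; the deploy/app target below is illustrative, while the paths and values match the examples above:
kubectl exec deploy/app -- ls /etc/app/secrets
# password  username
kubectl exec deploy/app -- cat /etc/app/secrets/password
# s3cret!@#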
To mount a single key as a specific file (not a directory):
volumes:
  - name: config-volume
    configMap:
      name: app-config
      items:
        - key: config.yaml
          path: config.yaml
When Changes Propagate to Pods#
This is the single most important behavioral difference to understand:
Environment variables from ConfigMaps/Secrets do NOT update when the source changes. The values are injected at pod creation time. To pick up changes, you must restart the pod. There is no way around this.
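In practice that means a rollout whenever env-var config changes; a typical restart command (the deployment name is illustrative):
kubectl rollout restart deployment/app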
Volume-mounted ConfigMaps/Secrets DO update automatically, but with a delay. The kubelet syncs mounted ConfigMaps roughly every 60-120 seconds (configurable via --sync-frequency). Your application must watch the mounted files for changes or periodically re-read them.
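You can observe the propagation by patching the ConfigMap and re-reading the mounted file after the sync interval; the deployment name is illustrative, while the key and mount path come from the earlier examples:
kubectl patch configmap app-config --type merge -p '{"data":{"LOG_LEVEL":"debug"}}'
# wait out the kubelet sync interval, then:
kubectl exec deploy/app -- cat /etc/app/config/LOG_LEVEL
# debug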
A common pattern to force pod restarts on ConfigMap changes in Helm:
# In your Deployment template
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
This changes the pod template annotation whenever the ConfigMap content changes, triggering a rolling update.
Immutable ConfigMaps and Secrets#
Marking a ConfigMap or Secret as immutable prevents accidental changes and improves cluster performance (the kubelet stops polling for updates):
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-v2
immutable: true
data:
  LOG_LEVEL: "warn"
Once set, immutable cannot be changed back to false. You must delete and recreate the resource. This encourages a versioning pattern: app-config-v1, app-config-v2, with deployments referencing specific versions.
Key Takeaways#
- Use stringData in Secret manifests to avoid base64 encoding mistakes.
- Environment variables are set at pod creation and never update. Volume mounts update with a delay.
- Always set defaultMode: 0400 on Secret volume mounts.
- Use the Helm sha256sum annotation trick to trigger rolling updates on config changes.
- Immutable ConfigMaps reduce API server load and prevent accidental changes; version them explicitly.
- Choose the correct Secret type (tls, dockerconfigjson, etc.) rather than stuffing everything into Opaque.