Kustomize Patterns#
Kustomize lets you customize Kubernetes manifests without templating. You start with plain YAML (bases) and layer modifications (overlays) on top. It is built into kubectl, so there is no extra tool to install.
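If you want to know which Kustomize version your client bundles, recent kubectl releases report it alongside the client version:

# Recent kubectl clients print the bundled Kustomize version
kubectl version --client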
Base and Overlay Structure#
The standard layout separates shared manifests from per-environment customizations:
k8s/
  base/
    kustomization.yaml
    deployment.yaml
    service.yaml
    configmap.yaml
  overlays/
    dev/
      kustomization.yaml
      replica-patch.yaml
    staging/
      kustomization.yaml
      ingress.yaml
    production/
      kustomization.yaml
      replica-patch.yaml
      hpa.yaml

The base kustomization.yaml lists the resources:
# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
  - configmap.yaml
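The base Deployment itself is plain Kubernetes YAML. A minimal sketch of what the later examples assume (the name my-app, the container name, and the env entry are illustrative):

# base/deployment.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app   # rewritten by the image transformer shown later
          env:
            - name: LOG_LEVEL
              value: info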
An overlay references the base and adds modifications:

# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
  - hpa.yaml # additional resources for production only
patches:
  - path: replica-patch.yaml
namespace: production
commonLabels:
  environment: production

Apply with kubectl directly:
# Preview the rendered output
kubectl kustomize overlays/production

# Apply to cluster
kubectl apply -k overlays/production

# Diff against what is currently deployed
kubectl diff -k overlays/production

Patches: Strategic Merge vs JSON 6902#
Kustomize supports two patching strategies.
Strategic Merge Patch – write a partial YAML that gets merged with the base resource. You only specify the fields you want to change:
# overlays/production/replica-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app # must match the base resource name
spec:
  replicas: 5
  template:
    spec:
      containers:
        - name: my-app # must match the container name
          resources:
            limits:
              cpu: "1"
              memory: 512Mi
            requests:
              cpu: 250m
              memory: 256Mi

Reference it in kustomization.yaml:
patches:
  - path: replica-patch.yaml
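The dev overlay from the layout above can use the same mechanism with a smaller patch. A minimal sketch, assuming the base Deployment is named my-app:

# overlays/dev/replica-patch.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1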
JSON 6902 Patch – precise operations (add, replace, remove) on specific paths. Use this when strategic merge cannot express what you need, such as removing a field or modifying array elements by index:

# overlays/staging/patch-env.yaml
- op: add
  path: /spec/template/spec/containers/0/env/-
  value:
    name: LOG_LEVEL
    value: debug
- op: replace
  path: /spec/template/spec/containers/0/image
  value: my-app:staging-latest
- op: remove
  path: /spec/template/spec/containers/0/resources/limits

Reference with target metadata:
patches:
  - path: patch-env.yaml
    target:
      group: apps
      version: v1
      kind: Deployment
      name: my-app

Inline patches work without separate files:
patches:
  - target:
      kind: Deployment
      name: my-app
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 3
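Targets can also be selected by label instead of by name, which is handy when one patch should hit several Deployments. A sketch, assuming the Deployments already carry an app.kubernetes.io/part-of=my-platform label:

patches:
  - target:
      kind: Deployment
      labelSelector: app.kubernetes.io/part-of=my-platform
    patch: |-
      - op: add
        path: /spec/template/spec/terminationGracePeriodSeconds
        value: 60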
ConfigMap and Secret Generators#
Instead of managing ConfigMaps and Secrets as static YAML, let Kustomize generate them. Generated resources get a content hash suffix appended to their name, which forces Deployments to roll when config changes – solving the “config changed but pods did not restart” problem.
# kustomization.yaml
configMapGenerator:
  - name: app-config
    literals:
      - DATABASE_HOST=postgres.default.svc
      - LOG_LEVEL=info
    files:
      - configs/nginx.conf
  - name: app-config-from-env
    envs:
      - configs/app.env # key=value pairs, one per line
secretGenerator:
  - name: app-secrets
    literals:
      - DB_PASSWORD=supersecret
    type: Opaque
generatorOptions:
  disableNameSuffixHash: false # default; set true if you do not want hash suffixes

The generated ConfigMap name becomes something like app-config-7h2bg9. Any Deployment referencing app-config is automatically updated to reference app-config-7h2bg9. When the content changes, a new hash is generated, and the Deployment rolls.
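The rewrite applies to ordinary reference fields. For example, a container that loads the generated ConfigMap via envFrom (an excerpt of the hypothetical base Deployment sketched earlier):

# base/deployment.yaml (excerpt, sketch)
containers:
  - name: my-app
    envFrom:
      - configMapRef:
          name: app-config # rewritten to app-config-<hash> at build time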
Image Transformer#
Override image names and tags without patching:
# kustomization.yaml
images:
  - name: my-app # matches the image name in base manifests
    newName: ghcr.io/myorg/my-app
    newTag: "v2.1.0"
  - name: nginx
    newTag: "1.25-alpine"
  - name: sidecar
    newName: gcr.io/myproject/sidecar
    digest: sha256:abc123...

This finds every container using image my-app across all resources and rewrites it. No patches needed.
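CI pipelines usually bump the tag rather than hand-editing YAML. If the standalone kustomize binary is installed, kustomize edit updates the images entry in kustomization.yaml in place (the new tag below is just an example):

cd overlays/production
kustomize edit set image my-app=ghcr.io/myorg/my-app:v2.1.1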
Namespace Transformer#
Set the namespace for all resources:
# kustomization.yaml
namespace: production

This adds or overrides metadata.namespace on every resource. Combined with overlays, this is how you deploy the same app to multiple namespaces.
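For example, a staging overlay can be little more than the base plus a namespace (a sketch; it assumes the staging namespace already exists):

# overlays/staging/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: staging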
Common Labels and Annotations#
commonLabels:
  app.kubernetes.io/part-of: my-platform
  team: backend
commonAnnotations:
  managed-by: kustomize

Labels added via commonLabels are injected into metadata.labels, spec.selector.matchLabels, and spec.template.metadata.labels. Be careful: once a Deployment is created, its selector labels are immutable. Adding a new commonLabel after initial deployment will break upgrades. For labels you might change, use commonAnnotations or patches instead.
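Newer Kustomize versions (and the kubectl releases that bundle them) also offer a labels transformer whose selector behavior is opt-in; a sketch of that alternative:

labels:
  - pairs:
      team: backend
    includeSelectors: false # labels go to metadata.labels only; selectors stay untouched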
Kustomize vs Helm: Tradeoffs#
Choose Kustomize when:
- You want to keep plain, readable YAML without template syntax
- Your customization is mostly per-environment diffs (namespace, replicas, image tags)
- You do not need to distribute your config as a package for others
Choose Helm when:
- You need complex logic (conditionals, loops, computed values)
- You are distributing a chart for others to install with different configurations
- You depend on the Helm ecosystem (chart repositories, dependency management)
- You need lifecycle hooks (pre-install, post-upgrade jobs)
They work together. A common pattern is to render Helm charts into static YAML, then manage that output with Kustomize:
helm template my-release bitnami/postgresql -f values.yaml > base/postgresql.yaml
# Then use Kustomize overlays for environment-specific tweaks
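The rendered file then behaves like any other base resource (a sketch extending the base kustomization.yaml from earlier):

# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
  - configmap.yaml
  - postgresql.yaml # output of helm template above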
Debugging#

# See the final rendered output without applying
kubectl kustomize overlays/production

# Inspect a specific resource in the rendered output
kubectl kustomize overlays/production | grep -A 20 'kind: Deployment'

# Validate the output against the cluster's API server
kubectl kustomize overlays/production | kubectl apply --dry-run=server -f -

# Common error: resource not found in base
# Fix: ensure the resource name in your patch exactly matches metadata.name in the base