# Converting kubectl Manifests to Helm Charts
You have a set of YAML files that you `kubectl apply` to deploy your application. They work, but deploying to a second environment means copying files and editing values by hand. Helm charts solve this by parameterizing your manifests.
## Step 1: Scaffold the Chart
Create the chart structure with `helm create`:

```bash
helm create my-app
```

This generates:

```text
my-app/
  Chart.yaml            # Chart metadata (name, version, appVersion)
  values.yaml           # Default configuration values
  charts/               # Subcharts / dependencies
  templates/
    deployment.yaml     # Deployment template
    service.yaml        # Service template
    ingress.yaml        # Ingress template
    hpa.yaml            # HorizontalPodAutoscaler
    serviceaccount.yaml
    _helpers.tpl        # Named template helpers
    NOTES.txt           # Post-install message
    tests/
      test-connection.yaml
```

Delete the generated templates you do not need. Keep `_helpers.tpl`; it provides essential naming functions.
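For example, if this app needs neither autoscaling, ingress, nor a dedicated ServiceAccount, pruning might look like this (a sketch; keep whatever your app actually uses):

```bash
# Remove scaffolded templates this app will not use
rm my-app/templates/hpa.yaml \
   my-app/templates/ingress.yaml \
   my-app/templates/serviceaccount.yaml
```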
## Step 2: Move Manifests Into Templates
Take your working YAML files and copy them into the `templates/` directory. Then replace hardcoded values with template expressions.
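If your existing manifests live in, say, a `k8s/` directory (a hypothetical layout; adjust the paths to your repo), the move is a plain copy:

```bash
# Copy the working manifests into the chart's templates directory
cp k8s/deployment.yaml k8s/service.yaml my-app/templates/
```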
Before (raw manifest):
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
namespace: production
spec:
replicas: 3
selector:
matchLabels:
app: my-app
template:
spec:
containers:
- name: my-app
image: ghcr.io/myorg/my-app:v1.2.0
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512MiAfter (Helm template):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "my-app.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "my-app.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "my-app.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "my-app.selectorLabels" . | nindent 8 }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        resources:
          {{- toYaml .Values.resources | nindent 10 }}
```
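A quick sanity check after converting: render the chart and diff it against your original manifest. Names and labels will differ by design, but everything else should match (`deployment-original.yaml` is a hypothetical copy of the pre-conversion file):

```bash
# Render only the deployment template and compare it to the raw manifest;
# expect differences in names and labels, nothing else
helm template my-app ./my-app --show-only templates/deployment.yaml \
  | diff - deployment-original.yaml
```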
## Step 3: Parameterize with values.yaml

The `values.yaml` file holds all configurable defaults:
```yaml
replicaCount: 3

image:
  repository: ghcr.io/myorg/my-app
  tag: "v1.2.0"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 8080

ingress:
  enabled: false
  className: nginx
  hosts:
    - host: my-app.example.com
      paths:
        - path: /
          pathType: Prefix

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 512Mi
```

Override per environment with separate values files:
```bash
helm upgrade --install my-app ./my-app \
  -n production \
  -f values-production.yaml
```

Where `values-production.yaml` overrides only what differs:
```yaml
replicaCount: 5
image:
  tag: "v1.3.0"
resources:
  limits:
    cpu: "1"
    memory: 1Gi
```
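Helm deep-merges each `-f` file over the chart's defaults, so any keys you leave out (`image.repository`, `service`, `ingress`) keep their `values.yaml` settings. After a release, you can inspect what was overridden:

```bash
# Show only the user-supplied overrides for this release
helm get values my-app -n production

# Show the fully merged configuration, defaults included
helm get values my-app -n production --all
```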
## Step 4: Write Helper Templates

The `_helpers.tpl` file defines reusable named templates. The scaffolded version provides sensible defaults; the critical ones:
```yaml
# templates/_helpers.tpl
{{- define "my-app.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{- define "my-app.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}

{{- define "my-app.selectorLabels" -}}
app.kubernetes.io/name: {{ include "my-app.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{- define "my-app.labels" -}}
{{ include "my-app.selectorLabels" . }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
```

These ensure consistent naming and labeling across all resources in the chart. Selector labels are kept separate from the full label set because a Deployment's selector is immutable after creation.
## Step 5: Add Chart Dependencies
If your app needs PostgreSQL or Redis, declare them as dependencies in `Chart.yaml` rather than including them in your templates:
```yaml
# Chart.yaml
apiVersion: v2
name: my-app
version: 0.1.0
appVersion: "1.2.0"
dependencies:
  - name: postgresql
    version: "15.x.x"
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled
  - name: redis
    version: "19.x.x"
    repository: https://charts.bitnami.com/bitnami
    condition: redis.enabled
```

Then run `helm dependency update` to download them into `charts/`:
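```bash
# Fetch the declared dependencies into charts/
# (./my-app is the chart directory created earlier)
helm dependency update ./my-app
```

Configure the subchart through your `values.yaml`: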
```yaml
postgresql:
  enabled: true
  auth:
    database: mydb
    username: myuser
```
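Your application still needs to reach the subchart's Service. With the Bitnami postgresql chart, that Service is typically named `<release>-postgresql` (verify with `kubectl get svc`), so a hypothetical `env` block in your deployment template could look like this:

```yaml
# Sketch: wiring the app to the postgresql subchart's Service.
# The <release>-postgresql name is Bitnami's convention; confirm it in your cluster.
env:
  - name: DATABASE_HOST
    value: {{ printf "%s-postgresql" .Release.Name | quote }}
  - name: DATABASE_NAME
    value: {{ .Values.postgresql.auth.database | quote }}
```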
## When Helm Beats Raw Terraform `kubernetes_manifest`

Use Helm when you need environment-specific overrides (values files), when the chart will be shared across teams, when you want rollback (`helm rollback`), or when community charts exist for your dependencies.
Use Terraform `kubernetes` resources when your infrastructure team already uses Terraform for cloud resources, when you need a unified dependency graph (cloud infra plus Kubernetes resources), or when you want strong schema validation.
Use both together by deploying Helm charts via Terraform's `helm_release` resource. This gives you Terraform's state management with Helm's templating power.
## Validation Before Deploy
Always lint and template-render before deploying:
```bash
# Check for syntax errors
helm lint ./my-app -f values-production.yaml

# Render templates without deploying to verify output
helm template my-app ./my-app -f values-production.yaml

# Dry-run against the cluster to catch server-side issues
helm upgrade --install my-app ./my-app --dry-run -f values-production.yaml
```
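For even stricter checks, you can pipe the rendered output through the API server's own dry run, which exercises server-side validation and admission webhooks (assumes `kubectl` access to the target cluster):

```bash
# Ask the API server to validate every rendered manifest without persisting anything
helm template my-app ./my-app -f values-production.yaml \
  | kubectl apply --dry-run=server -f -
```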
## Minimal Chart Checklist

- `Chart.yaml` has correct `name`, `version`, and `appVersion`.
- Every hardcoded value in templates has a corresponding entry in `values.yaml`.
- `_helpers.tpl` defines `name`, `fullname`, `labels`, and `selectorLabels`.
- Resources use `{{ include "my-app.fullname" . }}` for names, not hardcoded strings.
- Namespace is `{{ .Release.Namespace }}`, never hardcoded.
- `helm lint` and `helm template` pass cleanly.