Converting kubectl Manifests to Terraform#
You have a working Kubernetes setup built with kubectl apply -f. It works, but there is no state tracking, no dependency graph, and no way to reliably reproduce it. Terraform fixes all three problems.
Step 1: Export Existing Resources#
Start by extracting what you have. For each resource type, export the YAML:
kubectl get deployment,service,configmap,ingress -n my-app -o yaml > exported.yaml
For a single resource with cleaner output:
kubectl get deployment my-app -n my-app -o yaml > deployment.yaml
Step 2: Clean Up Kubernetes-Generated Fields#
Exported manifests contain fields that Kubernetes manages internally. These must be removed before converting to Terraform, or you will get perpetual diffs on every plan.
Remove these fields from every resource:
# DELETE all of these from exported YAML:
metadata:
  resourceVersion: "12345"    # Server-managed version
  uid: "abc-123-def"          # Server-assigned unique ID
  creationTimestamp: "..."    # Server-set timestamp
  generation: 2               # Server-tracked generation
  managedFields: [...]        # Field ownership tracking
status: {}                    # Entire status block
A quick yq command strips them in bulk:
yq eval 'del(.metadata.resourceVersion, .metadata.uid,
  .metadata.creationTimestamp, .metadata.generation,
  .metadata.managedFields, .status)' deployment.yaml
Step 3: Configure the Kubernetes Provider#
Set up the Terraform provider to talk to your cluster:
# providers.tf
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.25"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.12"
    }
  }
}

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "minikube"
}

provider "helm" {
  kubernetes {
    config_path    = "~/.kube/config"
    config_context = "minikube"
  }
}
For production, replace config_path with host, token, and cluster_ca_certificate sourced from your cloud provider’s Terraform outputs.
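A sketch of that pattern for EKS (the data sources differ for GKE or AKS, and var.cluster_name is an assumed variable, not something defined earlier in this guide):
# providers.tf (production sketch)
data "aws_eks_cluster" "this" {
  name = var.cluster_name
}

data "aws_eks_cluster_auth" "this" {
  name = var.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}
The token is short-lived and refreshed on every run, which avoids storing long-lived cluster credentials in your configuration.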
Step 4: Convert Manifests to Terraform Resources#
Here is the before and after. A manual deployment:
# Before: manual kubectl apply
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: ghcr.io/myorg/my-app:v1.2.0
          ports:
            - containerPort: 8080
EOF
Becomes a Terraform resource:
# applications/main.tf
resource "kubernetes_deployment_v1" "my_app" {
  metadata {
    name      = "my-app"
    namespace = "my-app"
  }

  spec {
    replicas = 3

    selector {
      match_labels = { app = "my-app" }
    }

    template {
      metadata {
        labels = { app = "my-app" }
      }

      spec {
        container {
          name  = "my-app"
          image = "ghcr.io/myorg/my-app:v1.2.0"

          port {
            container_port = 8080
          }
        }
      }
    }
  }
}
When to Use Helm Provider vs kubernetes_manifest#
Use the helm_release resource when a community chart already exists for what you need (PostgreSQL, Redis, NGINX Ingress, Prometheus). It handles templating and upgrades:
resource "helm_release" "postgresql" {
name = "dt-postgresql"
repository = "https://charts.bitnami.com/bitnami"
chart = "postgresql"
namespace = "my-app"
set { name = "auth.database"; value = "mydb" }
set { name = "auth.username"; value = "myuser" }
}Use kubernetes_deployment_v1 and typed resources for your own application manifests. Terraform validates the schema, catches typos at plan time, and provides meaningful diffs.
Use kubernetes_manifest only for CRDs or resources without a typed Terraform equivalent. It accepts an arbitrary manifest object (typically your original YAML passed through yamldecode) but gives weaker plan-time validation.
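A minimal sketch of that escape hatch, assuming a cert-manager ClusterIssuer manifest stored at a hypothetical path in the module:
resource "kubernetes_manifest" "cluster_issuer" {
  # Reuse the cleaned-up YAML directly instead of hand-converting it
  manifest = yamldecode(file("${path.module}/manifests/cluster-issuer.yaml"))
}
Keep in mind that kubernetes_manifest resolves the resource schema from the live cluster at plan time, so the CRD must already be installed before the first plan.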
Module Organization#
Structure your Terraform into logical modules:
terraform/
  main.tf          # Provider config, module calls
  variables.tf     # Cluster-wide variables
  modules/
    networking/    # Ingress, NetworkPolicies, Services
    databases/     # Helm releases for PostgreSQL, Redis
    applications/  # Your app Deployments, Services
    monitoring/    # Prometheus, Grafana Helm releases
Each module has its own main.tf, variables.tf, and outputs.tf. The root module wires them together:
module "databases" {
source = "./modules/databases"
namespace = var.namespace
}
module "applications" {
source = "./modules/applications"
namespace = var.namespace
db_host = module.databases.postgresql_host
depends_on = [module.databases]
}State Management Considerations#
Terraform state tracks every resource it manages. For Kubernetes workloads, keep these points in mind:
Import existing resources before running terraform apply to avoid duplicates:
terraform import kubernetes_deployment_v1.my_app my-app/my-app
Use remote state from the start. An S3 bucket with DynamoDB locking, or Terraform Cloud, prevents state corruption when multiple people run applies.
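A minimal backend sketch along those lines; the bucket, key, and table names are placeholders:
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"           # placeholder bucket name
    key            = "k8s/my-app/terraform.tfstate" # placeholder state path
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"              # placeholder lock table
    encrypt        = true
  }
}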
The state file contains secrets. Kubernetes Secrets managed by Terraform appear in plaintext in state. Use sensitive = true on variables and consider encrypting the state backend.
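A small sketch of the sensitive flag; the variable and Secret names here are hypothetical:
variable "db_password" {
  type      = string
  sensitive = true  # hides the value in plan output, but it still lands in state
}

resource "kubernetes_secret_v1" "db" {
  metadata {
    name      = "db-credentials"
    namespace = "my-app"
  }
  data = {
    password = var.db_password  # provider base64-encodes this for the API
  }
}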
Do not mix Terraform and manual kubectl. If Terraform manages a resource, all changes must go through Terraform. Manual edits cause drift that the next terraform apply will revert.
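If you suspect manual edits have crept in, a plan with a detailed exit code makes drift easy to detect from CI (exit code 2 means changes were found):
terraform plan -detailed-exitcode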
Migration Order#
Convert resources in dependency order: namespaces first, then ConfigMaps and Secrets, then databases (via Helm), then application Deployments and Services, and finally Ingress. Run terraform plan after each batch to verify no unintended changes.
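One way to work batch by batch is targeted plans and applies; the resource and module addresses below are illustrative, matching the examples earlier in this guide:
# Namespace first, then the database layer, then the rest
terraform plan  -target=kubernetes_namespace_v1.my_app
terraform apply -target=kubernetes_namespace_v1.my_app

terraform plan  -target=module.databases
terraform apply -target=module.databases

# Finish with a full plan to confirm nothing unintended remains
terraform plan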