kind Validation Templates#
kind (Kubernetes IN Docker) runs Kubernetes clusters using Docker containers as nodes. It was designed for testing Kubernetes itself, which makes it an excellent tool for validating infrastructure changes. It starts fast, uses fewer resources than minikube, and is disposable by design.
This article provides copy-paste cluster configurations and complete lifecycle scripts for common validation scenarios.
Cluster Configuration Templates#
Basic Single-Node#
The simplest configuration. One container acts as both control plane and worker. Sufficient for validating that Deployments, Services, ConfigMaps, and Secrets work correctly.
# kind-single-node.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
Create it:
kind create cluster --name validation --config kind-single-node.yaml
This cluster has no ingress controller, no storage provisioner beyond the default local-path provisioner, and no metrics server. For most Helm chart validation, this is enough.
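Before validating your own charts, a quick smoke test confirms the cluster itself is healthy. A minimal sketch against the cluster created above (the smoke-test name and nginx image are arbitrary placeholders):
# Deploy a throwaway workload and confirm it schedules and becomes ready
kubectl create deployment smoke-test --image=nginx:alpine
kubectl rollout status deployment/smoke-test --timeout=60s
kubectl get pods -o wide
# Dispose of the cluster when finished
kind delete cluster --name validation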
Multi-Node (1 Control Plane + 2 Workers)#
Use this when you need to validate scheduling behavior, pod anti-affinity rules, or topology spread constraints. With multiple worker nodes, the scheduler has real choices to make.
# kind-multi-node.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
This adds approximately 1 GB of RAM overhead per worker node. Only use multi-node when your validation scenario specifically requires it (anti-affinity, PodDisruptionBudgets, rolling update behavior across nodes).
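To confirm that scheduling really spreads across the workers, apply a small test deployment with a topology spread constraint and check where the replicas land. A minimal sketch; the spread-test name and nginx image are placeholders, and the cluster above is assumed to be the active kubectl context:
kind create cluster --name validation --config kind-multi-node.yaml
# Deploy 4 replicas that must spread evenly across nodes
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spread-test
spec:
  replicas: 4
  selector:
    matchLabels:
      app: spread-test
  template:
    metadata:
      labels:
        app: spread-test
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: spread-test
      containers:
      - name: app
        image: nginx:alpine
EOF
kubectl rollout status deployment/spread-test --timeout=120s
# The NODE column should show pods distributed across both workers
kubectl get pods -l app=spread-test -o wide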
Ingress Controller Enabled#
kind does not include an ingress controller by default. This configuration exposes ports 80 and 443 on the host and labels the control-plane node for ingress. After creation, you install an ingress controller (typically NGINX) that binds to these ports.
# kind-ingress.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
After creating the cluster, install the NGINX ingress controller configured for kind:
kind create cluster --name validation --config kind-ingress.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
# Wait for the ingress controller to be ready
kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=90s
Now Ingress resources will route traffic from localhost:80 and localhost:443 into the cluster.
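To validate that path end to end, deploy a small backend, point an Ingress rule at it, and curl it from the host. A minimal sketch; the ingress-check name and nginx image are placeholders, and the NGINX ingress controller from the step above is assumed to be ready:
# Deploy a test backend and expose it through the ingress controller
kubectl create deployment ingress-check --image=nginx:alpine
kubectl expose deployment ingress-check --port=80
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-check
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ingress-check
            port:
              number: 80
EOF
kubectl rollout status deployment/ingress-check --timeout=60s
# Give the controller a moment to pick up the new Ingress
sleep 10
# Traffic from the host should reach the pod through localhost:80
curl -sf http://localhost/ | head -n 5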
Local Registry#
When validating images built locally, pushing to Docker Hub or another remote registry adds unnecessary latency. This configuration connects a local Docker registry to the kind cluster so images built on the host are immediately available to pods.
# kind-registry.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5001"]
    endpoint = ["http://kind-registry:5000"]
nodes:
- role: control-plane
The cluster config alone is not enough. You need a script that creates the registry, creates the cluster, and connects them:
#!/bin/bash
set -euo pipefail
REGISTRY_NAME="kind-registry"
REGISTRY_PORT="5001"
CLUSTER_NAME="validation"
# Start the registry if it is not already running
if [ "$(docker inspect -f '{{.State.Running}}' "${REGISTRY_NAME}" 2>/dev/null || true)" != 'true' ]; then
docker run -d --restart=always -p "127.0.0.1:${REGISTRY_PORT}:5000" --name "${REGISTRY_NAME}" registry:2
fi
# Create the cluster
kind create cluster --name "${CLUSTER_NAME}" --config kind-registry.yaml
# Connect the registry to the kind network
if [ "$(docker inspect -f='{{json .NetworkSettings.Networks.kind}}' "${REGISTRY_NAME}")" = 'null' ]; then
docker network connect "kind" "${REGISTRY_NAME}"
fi
# Document the local registry
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-registry-hosting
  namespace: kube-public
data:
  localRegistryHosting.v1: |
    host: "localhost:${REGISTRY_PORT}"
    help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
EOF
echo "Registry running at localhost:${REGISTRY_PORT}"
echo "Tag images as localhost:${REGISTRY_PORT}/image:tag and push"Usage:
docker build -t localhost:5001/my-app:test .
docker push localhost:5001/my-app:test
# Now use image "localhost:5001/my-app:test" in your Kubernetes manifests
Specific Kubernetes Version#
Pin the Kubernetes version when validating against a specific target cluster version. kind publishes node images for each Kubernetes release.
# kind-versioned.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.29.2@sha256:51a1434a5397193442f0be2a297b488b6c919ce8a3931be0ce822606ea5ca245
Find available image digests for your target version from the kind releases page. Always use the SHA256 digest to ensure reproducibility.
Common version targets:
# Kubernetes 1.28
image: kindest/node:v1.28.7@sha256:9bc6c451a289cf96ad0bbaf33d416901de6fd632415b076ab05f5fa7e4f65c58
# Kubernetes 1.29
image: kindest/node:v1.29.2@sha256:51a1434a5397193442f0be2a297b488b6c919ce8a3931be0ce822606ea5ca245
# Kubernetes 1.30
image: kindest/node:v1.30.0@sha256:047357ac0cfea04663786a612ba1eaba9702bef25227a794b52890dd8bcd692e
# Kubernetes 1.31
image: kindest/node:v1.31.0@sha256:53df588e04085fd41ae12de0c3fe4c72f7013bba32a20e7325357a1ac94ba865
Note: Digests change with patch releases. Verify the current digest from the kind GitHub releases before using these in production validation.
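After creating a pinned cluster, it is worth confirming that the node actually runs the expected version. A quick check, assuming the kind-versioned.yaml config above:
kind create cluster --name validation --config kind-versioned.yaml
# The server version reported here should match the pinned node image
kubectl version
kubectl get nodes -o wide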
Extra Port Mappings#
Use this when you need to access specific NodePorts or custom ports from the host machine. It is useful for validating NodePort services or for debugging.
# kind-ports.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 30000
    protocol: TCP
  - containerPort: 30001
    hostPort: 30001
    protocol: TCP
  - containerPort: 30080
    hostPort: 8080
    protocol: TCP
  - containerPort: 30443
    hostPort: 8443
    protocol: TCP
Map containerPort to the NodePort your services will use. The hostPort is what you access from the host machine. You can remap ports (container 30080 to host 8080) when host ports are already in use.
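A quick way to exercise the mapping is to expose a test deployment on one of the mapped NodePorts and hit it from the host. A minimal sketch, assuming the kind-ports.yaml cluster above; the nodeport-check name and nginx image are placeholders:
kind create cluster --name validation --config kind-ports.yaml
kubectl create deployment nodeport-check --image=nginx:alpine
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nodeport-check
spec:
  type: NodePort
  selector:
    app: nodeport-check
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
EOF
kubectl rollout status deployment/nodeport-check --timeout=60s
# Container port 30080 is mapped to host port 8080 in kind-ports.yaml
curl -sf http://localhost:8080/ | head -n 5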
Validation Lifecycle Scripts#
Every validation follows the same lifecycle: create environment, deploy, verify, capture results, tear down. These scripts encode that lifecycle.
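The skeleton below is a minimal sketch of that lifecycle, including a capture step that writes events and pod state to log files before teardown; the deploy and verify steps are placeholders to adapt:
#!/bin/bash
# lifecycle-skeleton.sh -- create, deploy, verify, capture, tear down
set -euo pipefail
CLUSTER_NAME="validation-$$"
capture_and_cleanup() {
  # Capture results before the cluster disappears
  kubectl get events --sort-by=.lastTimestamp > "validation-events-${CLUSTER_NAME}.log" 2>/dev/null || true
  kubectl get pods -A -o wide > "validation-pods-${CLUSTER_NAME}.log" 2>/dev/null || true
  kind delete cluster --name "${CLUSTER_NAME}" 2>/dev/null || true
}
trap capture_and_cleanup EXIT
kind create cluster --name "${CLUSTER_NAME}" --wait 60s
# deploy: helm install / kubectl apply goes here
# verify: kubectl wait / health checks go here
echo "Validation finished."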
Generic Helm Chart Validation#
This script validates any Helm chart on kind. It creates a cluster, installs the chart, runs basic health checks, and tears down. It exits with a nonzero code if any step fails.
#!/bin/bash
# validate-helm-chart.sh
# Usage: ./validate-helm-chart.sh <chart-path> [values-file] [release-name]
set -euo pipefail
CHART_PATH="${1:?Usage: $0 <chart-path> [values-file] [release-name]}"
VALUES_FILE="${2:-}"
RELEASE_NAME="${3:-validation}"
CLUSTER_NAME="helm-validation-$$"
TIMEOUT="180s"
cleanup() {
echo "--- Tearing down cluster ${CLUSTER_NAME} ---"
kind delete cluster --name "${CLUSTER_NAME}" 2>/dev/null || true
}
trap cleanup EXIT
echo "=== Step 1: Static validation ==="
helm lint "${CHART_PATH}" ${VALUES_FILE:+--values "$VALUES_FILE"}
echo "Lint passed."
helm template "${RELEASE_NAME}" "${CHART_PATH}" \
${VALUES_FILE:+--values "$VALUES_FILE"} > /dev/null
echo "Template rendering passed."
echo "=== Step 2: Create kind cluster ==="
kind create cluster --name "${CLUSTER_NAME}" --wait 60s
echo "Cluster ready."
echo "=== Step 3: Install chart ==="
helm install "${RELEASE_NAME}" "${CHART_PATH}" \
${VALUES_FILE:+--values "$VALUES_FILE"} \
--wait --timeout "${TIMEOUT}"
echo "Chart installed successfully."
echo "=== Step 4: Verify pods ==="
kubectl get pods -o wide
# Check that all pods are Running or Completed
NOT_READY=$(kubectl get pods --no-headers | grep -v -E "Running|Completed" || true)
if [ -n "${NOT_READY}" ]; then
echo "ERROR: Some pods are not ready:"
echo "${NOT_READY}"
kubectl describe pods
exit 1
fi
echo "All pods healthy."
echo "=== Step 5: Verify services ==="
kubectl get svc
echo "Services created."
echo "=== Step 6: Helm test ==="
if helm test "${RELEASE_NAME}" --timeout 60s 2>/dev/null; then
echo "Helm tests passed."
else
echo "WARNING: Helm tests failed or no tests defined."
fi
echo "=== VALIDATION PASSED ==="
echo "Chart: ${CHART_PATH}"
echo "Release: ${RELEASE_NAME}"
echo "Cluster: ${CLUSTER_NAME}"Run it:
chmod +x validate-helm-chart.sh
./validate-helm-chart.sh ./my-chart
./validate-helm-chart.sh ./my-chart values-production.yaml my-app
Deployment Upgrade Validation#
Validates that a Helm chart can be upgraded from one version to another without downtime or errors. This catches breaking changes in chart values or template structure.
#!/bin/bash
# validate-helm-upgrade.sh
# Usage: ./validate-helm-upgrade.sh <chart-path> <old-values> <new-values> [release-name]
set -euo pipefail
CHART_PATH="${1:?Usage: $0 <chart-path> <old-values> <new-values>}"
OLD_VALUES="${2:?Provide old values file}"
NEW_VALUES="${3:?Provide new values file}"
RELEASE_NAME="${4:-upgrade-test}"
CLUSTER_NAME="upgrade-validation-$$"
cleanup() {
kind delete cluster --name "${CLUSTER_NAME}" 2>/dev/null || true
}
trap cleanup EXIT
echo "=== Create cluster ==="
kind create cluster --name "${CLUSTER_NAME}" --wait 60s
echo "=== Install with old values ==="
helm install "${RELEASE_NAME}" "${CHART_PATH}" \
--values "${OLD_VALUES}" --wait --timeout 180s
echo "Old version installed. Pods:"
kubectl get pods
echo "=== Upgrade to new values ==="
helm upgrade "${RELEASE_NAME}" "${CHART_PATH}" \
--values "${NEW_VALUES}" --wait --timeout 180s
echo "Upgrade succeeded. Pods:"
kubectl get pods
echo "=== Verify post-upgrade state ==="
NOT_READY=$(kubectl get pods --no-headers | grep -v -E "Running|Completed" || true)
if [ -n "${NOT_READY}" ]; then
echo "ERROR: Pods not ready after upgrade:"
echo "${NOT_READY}"
exit 1
fi
echo "=== Check rollback ==="
helm rollback "${RELEASE_NAME}" 1 --wait --timeout 180s
echo "Rollback succeeded. Pods:"
kubectl get pods
echo "=== UPGRADE VALIDATION PASSED ==="Multi-Service Stack Validation#
Validates that multiple Helm releases work together. Deploys a database first, then the application that depends on it, and verifies connectivity.
#!/bin/bash
# validate-stack.sh
# Usage: ./validate-stack.sh
set -euo pipefail
CLUSTER_NAME="stack-validation-$$"
cleanup() {
kind delete cluster --name "${CLUSTER_NAME}" 2>/dev/null || true
}
trap cleanup EXIT
kind create cluster --name "${CLUSTER_NAME}" --wait 60s
echo "=== Deploy database ==="
helm install postgres oci://registry-1.docker.io/bitnamicharts/postgresql \
--set auth.postgresPassword=testpassword \
--set auth.database=myapp \
--wait --timeout 180s
echo "=== Deploy application ==="
helm install myapp ./my-app-chart \
--set database.host=postgres-postgresql \
--set database.password=testpassword \
--set database.name=myapp \
--wait --timeout 180s
echo "=== Verify connectivity ==="
kubectl run connectivity-test --image=busybox --rm -it --restart=Never -- \
sh -c "wget -qO- --timeout=10 http://myapp:8080/health"
echo "=== STACK VALIDATION PASSED ==="Verification Patterns#
After deploying to kind, these patterns check specific behaviors.
Pod Health Verification#
# Wait for all pods in a namespace to be ready
kubectl wait --for=condition=ready pods --all --timeout=120s
# Check for crash loops
RESTARTS=$(kubectl get pods -o jsonpath='{.items[*].status.containerStatuses[*].restartCount}' | tr ' ' '\n' | awk '{s+=$1} END {print s+0}')
if [ "${RESTARTS}" -gt 0 ]; then
echo "WARNING: ${RESTARTS} total restarts detected"
kubectl get pods -o custom-columns=NAME:.metadata.name,RESTARTS:.status.containerStatuses[0].restartCount
fi
Service DNS Resolution#
# Verify that service DNS resolves within the cluster
kubectl run dns-test --image=busybox:1.36 --rm -it --restart=Never -- \
nslookup my-service.default.svc.cluster.local
# Verify actual HTTP connectivity
kubectl run http-test --image=curlimages/curl --rm -it --restart=Never -- \
curl -sf --max-time 10 http://my-service:8080/health
Resource Requests Validation#
# Check that resource requests are set on all containers
NO_REQUESTS=$(kubectl get pods -o jsonpath='{range .items[*]}{range .spec.containers[*]}{.name}{"\t"}{.resources.requests}{"\n"}{end}{end}' | grep -v "cpu\|memory" || true)
if [ -n "${NO_REQUESTS}" ]; then
echo "WARNING: Containers without resource requests:"
echo "${NO_REQUESTS}"
fi
ConfigMap and Secret Mount Verification#
# Verify that expected config files exist inside a pod
POD=$(kubectl get pods -l app=my-app -o jsonpath='{.items[0].metadata.name}')
kubectl exec "${POD}" -- ls -la /etc/config/
kubectl exec "${POD}" -- cat /etc/config/app.yaml
Troubleshooting kind Clusters#
Cluster creation fails with “port already in use”: Another kind cluster or service is using the same ports. Run kind get clusters to list existing clusters and delete unused ones. Check for processes on ports 80/443 with lsof -i :80.
Images fail to pull: kind nodes use containerd, not Docker. Images available locally to Docker are not automatically available inside kind. Either load them explicitly (kind load docker-image my-app:test --name validation) or use a local registry (see the local registry configuration above).
Pods stuck in Pending: Check if the node has enough resources. kind nodes have limited CPU and memory. Run kubectl describe node and look at the Allocatable section. If resources are exhausted, reduce your deployment’s resource requests for validation or increase Docker’s resource allocation.
DNS resolution fails inside pods: This occasionally happens if CoreDNS is not yet ready. Wait for it: kubectl wait --namespace kube-system --for=condition=ready pod -l k8s-app=kube-dns --timeout=60s.
Cleanup after failed script: If a script fails partway through, the trap handler should clean up. If it does not (for example, due to a SIGKILL), list orphaned clusters with kind get clusters and delete them manually with kind delete cluster --name <name>. Also check for orphaned Docker containers with docker ps -a | grep kind.
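When several validation runs have left debris behind, a short sweep built from the same commands clears everything. Use it only on a machine where every kind cluster is disposable; the kind-registry name matches the local registry script above:
# Delete every kind cluster on this machine (use with care)
for cluster in $(kind get clusters); do
  kind delete cluster --name "${cluster}"
done
# Remove the local registry container if one is still running
docker rm -f kind-registry 2>/dev/null || true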