Using Minikube for CI, Integration Testing, and Local Development Workflows#

Minikube gives you a real Kubernetes cluster wherever you need one – on a developer laptop, in a GitHub Actions runner, or in any CI environment that has Docker. The patterns differ between local development and CI, but the underlying approach is the same: stand up a cluster, deploy your workload and its dependencies, test against it, tear it down.

Local Development: Fast Feedback Loops#

Loading Images Without a Registry#

The fastest way to get a locally-built image into minikube is minikube image load:

docker build -t myapp:dev .
minikube image load myapp:dev

This copies the image from your host Docker into minikube’s container runtime. No registry, no push/pull overhead. Reference it in your manifests with imagePullPolicy: Never:

containers:
  - name: myapp
    image: myapp:dev
    imagePullPolicy: Never

Building Inside Minikube’s Docker#

Alternatively, point your shell at minikube’s Docker daemon and build directly:

eval $(minikube docker-env)
docker build -t myapp:dev .

Images built this way are immediately available to minikube without any loading step. The tradeoff is that your build context is uploaded to minikube’s Docker, which can be slower for large projects. Switch back to your host Docker with:

eval $(minikube docker-env -u)

Building Without Docker Desktop#

If you do not have Docker Desktop installed, minikube can build images directly:

minikube image build -t myapp:dev .

This uses minikube’s built-in container runtime to build the image. It supports standard Dockerfiles and produces images that are immediately available in the cluster.

Hot Reload with Skaffold#

For continuous development, Skaffold watches your source code, rebuilds on changes, and redeploys to minikube automatically:

# skaffold.yaml
apiVersion: skaffold/v4beta6
kind: Config
build:
  local:
    push: false
  artifacts:
    - image: myapp
      docker:
        dockerfile: Dockerfile
manifests:
  rawYaml:
    - k8s/*.yaml
deploy:
  kubectl: {}

Run the development loop:

skaffold dev --port-forward

Skaffold detects file changes, rebuilds the container image, updates the deployment, and sets up port forwarding – all in a single command. Tilt is an alternative with a similar workflow and a web UI for monitoring the rebuild cycle.
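
For comparison, Tilt's equivalent configuration is a Tiltfile. A minimal sketch, assuming the same image name, a k8s/ manifest directory, and a service listening on port 80:

# Tiltfile
docker_build('myapp', '.')                       # Rebuild the image on source changes
k8s_yaml(listdir('k8s'))                         # Apply every manifest in k8s/
k8s_resource('myapp', port_forwards='8080:80')   # Forward localhost:8080 to the pod

Run tilt up and Tilt prints a link to its web UI, where each rebuild and redeploy is visible.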

Helm Chart Testing#

Minikube is the natural environment for testing Helm charts before publishing:

# Lint the chart
helm lint ./charts/myapp

# Dry run to catch template errors
helm install myapp ./charts/myapp --dry-run --debug

# Install into minikube
helm install myapp ./charts/myapp --namespace myapp --create-namespace

# Run Helm tests (pods defined in templates/tests/)
helm test myapp -n myapp

# Verify the deployment
kubectl -n myapp rollout status deployment/myapp --timeout=120s

The dry-run catches template rendering errors. The actual install verifies that the rendered manifests are valid Kubernetes resources. Helm tests run test pods that verify the application is working correctly.
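
For reference, a Helm test is an ordinary pod template carrying the test hook annotation. A minimal sketch modeled on the helm create scaffold; the service name and the service.port value are assumptions about your chart:

# templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-test-connection"
  annotations:
    "helm.sh/hook": test
spec:
  restartPolicy: Never
  containers:
    - name: wget
      image: busybox
      command: ["wget"]
      # The test fails if the service does not answer
      args: ["{{ .Release.Name }}-myapp:{{ .Values.service.port }}"]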

Integration Testing Pattern#

A structured approach for running integration tests against real Kubernetes infrastructure:

Step 1: Start the cluster

minikube start --driver=docker --cpus=2 --memory=4096 --wait=false

The --wait=false flag skips waiting for all system pods to be ready, shaving 10-20 seconds off startup. You will wait for specific dependencies later.

Step 2: Deploy dependencies

helm repo add bitnami https://charts.bitnami.com/bitnami --force-update
helm install postgres bitnami/postgresql \
  --namespace deps --create-namespace \
  --set auth.postgresPassword=testpass \
  --set primary.resources.requests.memory=256Mi \
  --set primary.resources.requests.cpu=100m

Step 3: Wait for readiness

kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=postgresql \
  -n deps --timeout=120s
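
With more than one dependency, the readiness waits can run in parallel instead of back to back. A sketch, where redis is a hypothetical second dependency:

pids=""
for name in postgresql redis; do
  kubectl wait --for=condition=ready pod \
    -l app.kubernetes.io/name=$name -n deps --timeout=120s &
  pids="$pids $!"
done
for pid in $pids; do
  wait "$pid"  # Non-zero if that wait timed out; aborts the script under `set -e`
done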

Step 4: Run tests against in-cluster services

# Port-forward the database
kubectl port-forward -n deps svc/postgres-postgresql 5432:5432 &
PF_PID=$!
sleep 5  # Give the tunnel a moment to establish

# Run integration tests, capturing the exit code before cleanup overwrites it
DATABASE_URL="postgres://postgres:testpass@localhost:5432/postgres" \
  go test ./integration/... -v -timeout 300s
TEST_EXIT=$?

# Clean up port-forward
kill $PF_PID

Step 5: Collect logs on failure

if [ $TEST_EXIT -ne 0 ]; then
  echo "=== Test failure logs ==="
  kubectl logs -n deps -l app.kubernetes.io/name=postgresql --tail=100
  kubectl get events -n deps --sort-by='.lastTimestamp'
fi

Step 6: Tear down

minikube delete
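
When these steps live in a single CI script, a shell trap guarantees that teardown and failure diagnostics run no matter which step fails. A sketch:

#!/usr/bin/env bash
set -euo pipefail

cleanup() {
  status=$?
  if [ "$status" -ne 0 ]; then
    echo "=== Failure diagnostics ==="
    kubectl get events -n deps --sort-by='.lastTimestamp' | tail -30 || true
  fi
  minikube delete
  exit "$status"
}
trap cleanup EXIT

minikube start --driver=docker --cpus=2 --memory=4096 --wait=false
# ... steps 2 through 5 from above ...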

GitHub Actions Integration Test Workflow#

A complete GitHub Actions workflow that runs integration tests against minikube:

name: Integration Tests
on:
  pull_request:
    branches: [main]
  push:
    branches: [main]

jobs:
  integration:
    runs-on: ubuntu-latest
    timeout-minutes: 20
    steps:
      - uses: actions/checkout@v4

      - name: Start minikube
        uses: medyagh/setup-minikube@latest
        with:
          minikube-version: 'latest'
          kubernetes-version: 'v1.29.0'
          cpus: 2
          memory: 4096m
          driver: docker

      - name: Build and load image
        run: |
          docker build -t myapp:${{ github.sha }} .
          minikube image load myapp:${{ github.sha }}

      - name: Deploy dependencies
        run: |
          helm repo add bitnami https://charts.bitnami.com/bitnami
          helm install postgres bitnami/postgresql \
            --namespace test --create-namespace \
            --set auth.postgresPassword=testpass \
            --set primary.persistence.enabled=false \
            --wait --timeout 120s

      - name: Deploy application
        run: |
          kubectl apply -n test -f k8s/
          kubectl set image -n test deployment/myapp \
            myapp=myapp:${{ github.sha }}
          kubectl -n test rollout status deployment/myapp --timeout=120s

      - name: Run integration tests
        run: |
          kubectl port-forward -n test svc/myapp 8080:80 &
          sleep 5
          go test ./integration/... -v -timeout 300s

      - name: Collect failure logs
        if: failure()
        run: |
          echo "=== Pod Status ==="
          kubectl get pods -n test -o wide
          echo "=== Pod Logs ==="
          kubectl logs -n test -l app=myapp --tail=200
          echo "=== Events ==="
          kubectl get events -n test --sort-by='.lastTimestamp' | tail -30
          echo "=== Describe Failing Pods ==="
          kubectl describe pods -n test --field-selector=status.phase!=Running

Key details: persistence is disabled for the database (primary.persistence.enabled=false) because CI does not need data to survive, and skipping the PersistentVolumeClaim speeds up pod startup. The --wait flag on helm install blocks until all pods are ready, replacing a separate kubectl wait step. The failure-log step runs only when an earlier step fails, via if: failure().

Speed Optimizations for CI#

Minikube in CI is slower than on a developer machine. Every second counts in a pipeline.

Cache the minikube binary and images. The setup-minikube action handles binary caching. For container images, use GitHub Actions cache:

- name: Cache container images
  id: cache
  uses: actions/cache@v4
  with:
    path: /tmp/minikube-images
    key: minikube-images-${{ hashFiles('k8s/deps.yaml') }}

- name: Load cached images
  if: steps.cache.outputs.cache-hit == 'true'
  run: |
    for img in /tmp/minikube-images/*.tar; do
      minikube image load "$img"
    done
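
The restore half only pays off if something fills the cache on a miss. A sketch of the save side, run after the images below have been pulled into minikube; actions/cache uploads the path automatically when the job finishes, and postgres:16 stands in for whatever your dependencies use:

- name: Save images for the cache
  if: steps.cache.outputs.cache-hit != 'true'
  run: |
    mkdir -p /tmp/minikube-images
    minikube image save postgres:16 /tmp/minikube-images/postgres.tar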

Pre-pull required images to avoid pulling during deployment:

minikube image pull postgres:16
minikube image pull redis:7

Reduce resource allocation. Standard GitHub-hosted runners offer as little as 2 vCPUs and 7GB of RAM. Allocate 2 CPUs and 4GB to minikube, leaving room for the runner itself.

Use --wait=false and targeted waits. Waiting for all system pods is unnecessary if you only need specific services.

Reduce kubelet overhead. The kubelet's container housekeeping loop runs every 10 seconds by default; stretching the interval lowers idle CPU on a constrained runner:

minikube start --driver=docker \
  --cpus=2 --memory=4096 \
  --wait=false \
  --extra-config=kubelet.housekeeping-interval=5m

Common CI Gotchas#

The Docker driver works out of the box in GitHub Actions. The Ubuntu runner has Docker pre-installed, so minikube runs as an ordinary container on the host daemon. You do not need to configure Docker-in-Docker separately.

Resource pressure causes flaky tests. CI environments have less CPU and memory than developer machines. Pods take longer to start, and resource-constrained containers may be OOM-killed. Mitigate this by:

  • Setting lower replica counts (one replica instead of three)
  • Reducing resource requests on dependencies (both shown in the sketch after this list)
  • Using smaller database images or in-memory modes where available
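
The first two mitigations are usually just chart values. A sketch, assuming your chart exposes the conventional replicaCount and resources keys:

helm upgrade --install myapp ./charts/myapp \
  --set replicaCount=1 \
  --set resources.requests.cpu=50m \
  --set resources.requests.memory=128Mi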

Timeouts need to be longer in CI. Image pulls are slower, pod scheduling is slower, and everything competes for limited resources. Double your local timeout values:

kubectl wait --for=condition=ready pod -l app=myapp --timeout=180s  # not 60s

Port-forward needs a sleep. After starting kubectl port-forward in the background, wait a few seconds before hitting the forwarded port. The tunnel takes a moment to establish:

kubectl port-forward svc/myapp 8080:80 &
sleep 5  # Let the tunnel establish
curl localhost:8080/health
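
A fixed sleep either wastes time or turns out to be too short. A more robust sketch polls the forwarded port until it answers, reusing the hypothetical /health endpoint:

kubectl port-forward svc/myapp 8080:80 &
for i in $(seq 1 30); do
  curl -sf localhost:8080/health >/dev/null && break  # Tunnel is up
  sleep 1
done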

Debugging Test Failures#

When tests fail in minikube, use these commands to diagnose:

# Pod status overview
kubectl get pods -A -o wide

# Events reveal scheduling failures, image pull errors, probe failures
kubectl get events --sort-by='.lastTimestamp' -A | tail -30

# Describe a specific failing pod for full event history
kubectl describe pod <pod-name>

# Application logs
kubectl logs <pod-name> --tail=200
kubectl logs <pod-name> --previous  # Logs from the crashed container

# SSH into the minikube node for low-level inspection
minikube ssh
# Inside the node: check disk space, memory, running containers
df -h
free -m
docker ps
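
minikube can also bundle control-plane, kubelet, and container runtime logs into a single dump, which is convenient to upload as a CI artifact:

# Write everything minikube knows to one file
minikube logs --file=minikube-logs.txt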

Practical Template: Complete Makefile#

A Makefile that encapsulates the full local development and testing workflow:

CLUSTER_NAME ?= dev
K8S_VERSION ?= v1.29.0
APP_IMAGE ?= myapp:dev

.PHONY: local-cluster deploy-deps build deploy test teardown ci

local-cluster:
	minikube start -p $(CLUSTER_NAME) \
		--driver=docker \
		--cpus=2 \
		--memory=4096 \
		--kubernetes-version=$(K8S_VERSION) \
		--wait=false
	minikube -p $(CLUSTER_NAME) addons enable metrics-server

deploy-deps:
	helm repo add bitnami https://charts.bitnami.com/bitnami --force-update
	helm upgrade --install postgres bitnami/postgresql \
		--namespace deps --create-namespace \
		--set auth.postgresPassword=devpass \
		--set primary.resources.requests.memory=256Mi \
		--kube-context $(CLUSTER_NAME) \
		--wait --timeout 120s

build:
	docker build -t $(APP_IMAGE) .
	minikube -p $(CLUSTER_NAME) image load $(APP_IMAGE)

deploy: build
	kubectl --context $(CLUSTER_NAME) apply -f k8s/
	kubectl --context $(CLUSTER_NAME) rollout status deployment/myapp --timeout=120s

test:
	kubectl --context $(CLUSTER_NAME) port-forward svc/myapp 8080:80 &
	sleep 5
	go test ./integration/... -v -timeout 300s || \
		(kubectl --context $(CLUSTER_NAME) logs -l app=myapp --tail=100; \
		pkill -f "port-forward svc/myapp"; exit 1)
	-pkill -f "port-forward svc/myapp"

teardown:
	minikube delete -p $(CLUSTER_NAME)

# Full CI pipeline: cluster -> deps -> build -> deploy -> test -> teardown
ci: local-cluster deploy-deps deploy test teardown

Usage:

# Full pipeline
make ci

# Or step by step for development
make local-cluster
make deploy-deps
make deploy
make test

# Clean up when done
make teardown

This Makefile works identically on developer machines and in CI. The only prerequisites are minikube, kubectl, helm, and Docker – all commonly available in both environments.