Multi-Cluster Emulation with Minikube Profiles#
Production infrastructure rarely runs on a single cluster. You have staging, production, maybe a dedicated cluster for CI or data workloads. Minikube profiles let you run multiple independent Kubernetes clusters on one machine, each with its own version, resources, and addons. This is how you test multi-cluster workflows without cloud accounts.
What Profiles Are#
A minikube profile is a fully independent cluster. Each profile has its own:
- Kubernetes version
- CPU and memory allocation
- Enabled addons
- Container runtime state
- Kubeconfig context
The default profile is called minikube. Every additional profile you create is a separate cluster that runs alongside it.
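Nearly every minikube command accepts a `-p`/`--profile` flag to scope it to one cluster. A quick sketch:

```bash
# The -p flag scopes any minikube command to a specific profile
minikube status -p minikube   # status of the default profile
minikube ip -p minikube       # that profile's node IP
```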
Creating Profiles#
Create two clusters simulating staging and production, each running a different Kubernetes version:
```bash
minikube start -p staging \
  --kubernetes-version=v1.28.0 \
  --cpus=2 \
  --memory=4096 \
  --driver=docker

minikube start -p production \
  --kubernetes-version=v1.29.0 \
  --cpus=2 \
  --memory=4096 \
  --driver=docker
```

List all profiles:
```bash
minikube profile list
# |------------|-----------|---------|--------------|------|---------|---------|-------|--------|
# | Profile    | VM Driver | Runtime | IP           | Port | Version | Status  | Nodes | Active |
# |------------|-----------|---------|--------------|------|---------|---------|-------|--------|
# | staging    | docker    | docker  | 192.168.49.2 | 8443 | v1.28.0 | Running | 1     |        |
# | production | docker    | docker  | 192.168.58.2 | 8443 | v1.29.0 | Running | 1     | *      |
# |------------|-----------|---------|--------------|------|---------|---------|-------|--------|
```

Switching Between Clusters#
Two ways to switch context:
The `minikube profile` command – sets the active profile for all subsequent minikube commands:

```bash
minikube profile staging
# minikube profile was successfully set to staging
```

kubectl context – the more common approach; it works the same as switching between any Kubernetes clusters:
```bash
kubectl config get-contexts
# CURRENT   NAME         CLUSTER      AUTHINFO     NAMESPACE
#           staging      staging      staging
# *         production   production   production

kubectl config use-context staging
# Switched to context "staging".
```

You can also pass `--context` to individual kubectl commands without switching globally:
```bash
kubectl --context staging get pods
kubectl --context production get pods
```

This is essential when running scripts that deploy to multiple clusters in sequence.
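A minimal sketch of that sequential pattern, assuming the manifests live in a `manifests/` directory:

```bash
# Apply the same manifests to each cluster in turn, scoping every call
# with --context instead of mutating the global kubeconfig
for ctx in staging production; do
  kubectl --context "${ctx}" apply -f manifests/
done
```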
Resource Planning#
Each profile is a separate container (Docker driver) or VM consuming its own CPU and memory. Plan your host resources accordingly:
| Setup | Per-Profile | Total Host Need |
|---|---|---|
| 2 lightweight clusters | 2 CPU, 3GB RAM | 4 CPU, 6GB RAM |
| 2 production-like clusters | 4 CPU, 6GB RAM | 8 CPU, 12GB RAM |
| 3 clusters (staging + prod + CI) | 2 CPU, 4GB RAM | 6 CPU, 12GB RAM |
On a machine with 16GB RAM, two profiles with 4GB each leave 8GB for the host OS and other applications. On machines with 32GB or more, you can comfortably run three or four profiles.
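If you want to guard against overcommitting, a rough pre-flight check along these lines can help. This is a sketch assuming a Linux host with `free` available; the 4096MB threshold and the `extra` profile name are illustrative:

```bash
# Rough pre-flight check (Linux-only): require ~4GB of available RAM
# before starting another 4GB profile. Threshold is illustrative.
AVAILABLE_MB=$(free -m | awk '/^Mem:/ {print $7}')
if [ "${AVAILABLE_MB}" -lt 4096 ]; then
  echo "Only ${AVAILABLE_MB}MB available; not enough for another profile." >&2
  exit 1
fi
minikube start -p extra --cpus=2 --memory=4096
```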
Check actual resource usage:
```bash
# Per-profile resource consumption
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}" | grep minikube
```

Use Case: Testing Kubernetes Upgrades#
Run your current version in one profile and the target version in another. Deploy the same workloads to both and compare behavior:
```bash
minikube start -p current --kubernetes-version=v1.28.0 --cpus=2 --memory=4096
minikube start -p upgrade-target --kubernetes-version=v1.29.0 --cpus=2 --memory=4096

# Deploy the same workload to both
kubectl --context current apply -f manifests/
kubectl --context upgrade-target apply -f manifests/

# Check for warning events or behavioral differences
kubectl --context upgrade-target get events --field-selector type=Warning
```

This catches API deprecations, changed defaults, and behavioral regressions before they hit your real clusters.
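kubectl also surfaces server-side deprecation warnings on stderr at apply time, so you can grep for them directly. A sketch; the pattern is a heuristic, not an exhaustive filter:

```bash
# Surface deprecation warnings emitted by the newer API server during apply
kubectl --context upgrade-target apply -f manifests/ 2>&1 | grep -i "deprecated" || true
```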
Use Case: Staging-to-Production Promotion#
Mirror a real promotion workflow by deploying to a staging profile first, verifying, then promoting to production:
```bash
# Deploy to staging
kubectl --context staging apply -f manifests/
kubectl --context staging rollout status deployment/myapp --timeout=120s

# Run smoke tests against staging
STAGING_URL=$(minikube -p staging service myapp --url)
curl -sf "${STAGING_URL}/health" || { echo "Staging health check failed"; exit 1; }

# Promote to production
kubectl --context production apply -f manifests/
kubectl --context production rollout status deployment/myapp --timeout=120s
```

This validates that your deployment manifests work correctly before they touch the production context, using the exact same YAML files.
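It can also be worth confirming that both clusters ended up running the same image. A small sketch, assuming the application image is the first container in the Deployment spec:

```bash
# Compare the deployed image across clusters; a mismatch means production
# is not running what staging verified
for ctx in staging production; do
  kubectl --context "${ctx}" get deployment/myapp \
    -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
done
```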
Use Case: ArgoCD Multi-Cluster Management#
Install ArgoCD in one profile and register the second as a managed cluster. This tests the same multi-cluster GitOps workflow you would use with real clusters.
```bash
# Install ArgoCD in the staging profile (acts as the management cluster)
kubectl config use-context staging
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Wait for ArgoCD to be ready
kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=argocd-server \
  -n argocd --timeout=180s

# Get the initial admin password
ARGOCD_PASSWORD=$(kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d)

# Port-forward to access ArgoCD
kubectl port-forward -n argocd svc/argocd-server 8080:443 &

# Log in and register the production cluster
argocd login localhost:8080 --username admin --password "${ARGOCD_PASSWORD}" --insecure
argocd cluster add production --name production-cluster
```

Now you can create ArgoCD Applications that target the production profile, testing multi-cluster sync, RBAC, and project configurations locally.
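For example, a minimal Application targeting the registered cluster might look like the sketch below. The repo URL, path, and `myapp` namespace are placeholders; `destination.name` must match the name passed to `argocd cluster add`:

```bash
# Hypothetical Application pointing at the production profile.
# repoURL, path, and the myapp namespace are placeholders.
kubectl --context staging apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-production
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/myapp-manifests
    targetRevision: main
    path: k8s
  destination:
    name: production-cluster   # the name given to `argocd cluster add`
    namespace: myapp
EOF
```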
Kubeconfig Management#
Minikube automatically creates kubeconfig contexts for each profile. They live in your default kubeconfig file (~/.kube/config):
```bash
kubectl config get-contexts
# CURRENT   NAME         CLUSTER      AUTHINFO     NAMESPACE
#           staging      staging      staging
# *         production   production   production
```

If you use tools like kubectx for fast context switching, profiles show up as regular contexts:
```bash
kubectx
# staging
# production

kubectx staging
# Switched to context "staging".
```

For scripts that need to operate on both clusters, use explicit `--context` flags rather than switching the global context. This avoids race conditions in parallel operations.
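A sketch of the parallel case, which works precisely because neither job reads or mutates the shared global context:

```bash
# Deploy to both clusters concurrently; explicit --context flags keep
# the two jobs independent of the global kubeconfig state
kubectl --context staging apply -f manifests/ &
kubectl --context production apply -f manifests/ &
wait
```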
Resource Sharing with Docker Driver#
An important nuance: when using the Docker driver, all profiles share the host’s Docker daemon. This means:
- Images built in one profile are NOT automatically visible to others. Each profile has its own container runtime namespace. Use `minikube -p <profile> image load <image>` to load images into a specific profile (see the example below).
- Docker build cache is shared. Building the same Dockerfile in different profiles reuses cached layers from the host Docker.
- Port conflicts are possible. If both profiles try to expose NodePort services on the same host port, one will fail. Use `minikube -p <profile> service <name> --url` to get the auto-assigned URL, as sketched below.
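For the port-conflict case, the sketch below asks each profile for its auto-assigned URL; the service name `myapp` is a placeholder:

```bash
# Each profile hands out its own URL, so identical NodePort services
# in two profiles never collide on the host. "myapp" is a placeholder.
minikube -p staging service myapp --url
minikube -p production service myapp --url
```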
```bash
# Load a locally-built image into a specific profile
minikube -p staging image load myapp:v1.2.3
minikube -p production image load myapp:v1.2.3
```

Cleanup#
Remove a single profile without affecting others:
```bash
minikube delete -p staging
# Deletes the staging cluster, its PVs, and all associated data
```

Remove all profiles at once:
```bash
minikube delete --all
# Removes every profile and all minikube state
```

Stop profiles without deleting them (preserves data):
```bash
minikube stop -p staging
minikube stop -p production
```

A stopped profile resumes with `minikube start -p <profile>`, with its cluster state intact.

Practical Example: Two-Profile Promotion Pipeline#
A complete script that sets up staging and production profiles, deploys a workload through the pipeline, and verifies the promotion:
```bash
#!/usr/bin/env bash
set -euo pipefail

APP_IMAGE="myapp:v1.0.0"

# Create clusters
minikube start -p staging --kubernetes-version=v1.29.0 --cpus=2 --memory=3072
minikube start -p production --kubernetes-version=v1.29.0 --cpus=2 --memory=3072

# Load application image into both profiles
minikube -p staging image load "${APP_IMAGE}"
minikube -p production image load "${APP_IMAGE}"

# Deploy to staging
echo "Deploying to staging..."
kubectl --context staging apply -f k8s/namespace.yaml
kubectl --context staging apply -f k8s/
kubectl --context staging wait --for=condition=ready pod -l app=myapp \
  -n myapp --timeout=120s

# Verify staging
STAGING_URL=$(minikube -p staging service myapp -n myapp --url)
if curl -sf "${STAGING_URL}/health" > /dev/null; then
  echo "Staging verification passed."
else
  echo "Staging verification failed. Aborting promotion."
  exit 1
fi

# Promote to production
echo "Promoting to production..."
kubectl --context production apply -f k8s/namespace.yaml
kubectl --context production apply -f k8s/
kubectl --context production wait --for=condition=ready pod -l app=myapp \
  -n myapp --timeout=120s

# Verify production
PROD_URL=$(minikube -p production service myapp -n myapp --url)
if curl -sf "${PROD_URL}/health" > /dev/null; then
  echo "Production deployment verified."
else
  echo "Production deployment failed."
  exit 1
fi

echo "Promotion pipeline complete."
```

This gives you a realistic staging-to-production workflow running entirely on your local machine, using the same kubectl commands and verification steps you would use in a real pipeline.