Emulating Production Namespace Organization in Minikube#
Setting up namespaces locally the same way you organize them in production builds muscle memory for real operations. When your local cluster mirrors production namespace structure, you catch RBAC misconfigurations, resource limit issues, and network policy gaps before they reach staging. It also means your Helm values files, Kustomize overlays, and deployment scripts work identically across environments.
Why Bother Locally#
The default minikube experience is to deploy everything into the `default` namespace. This teaches bad habits: developers forget `-n` flags, RBAC issues are never caught, resource contention is never simulated, and the first time anyone encounters namespace isolation is in production, where the consequences are real.
Spending 10 minutes setting up namespaces locally saves hours of debugging later.
Common Production Namespace Patterns#
By Application Tier#
Separate infrastructure services from application workloads from observability:
```
infra/       -- databases, message queues, shared caches
app/         -- application workloads
monitoring/  -- Prometheus, Grafana, alerting
```

By Team#
Each team owns a namespace:
```
team-platform/
team-payments/
team-frontend/
```

By Environment (Multi-Environment in One Cluster)#
Run multiple environments locally to test promotion workflows:
```
dev/
staging/
production/
```

Hybrid: Application-Per-Environment#
The most granular approach, and the one that best mirrors production:
```
payments-dev/
payments-staging/
api-dev/
api-staging/
```

Creating Namespaces with Labels and Annotations#
Production namespaces carry metadata. Labels enable NetworkPolicy selectors and bulk operations. Annotations store ownership and documentation.
```yaml
# namespaces.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: infra
  labels:
    tier: infrastructure
    managed-by: platform-team
  annotations:
    description: "Shared infrastructure services: databases, caches, queues"
    owner: "platform-team@company.com"
---
apiVersion: v1
kind: Namespace
metadata:
  name: app
  labels:
    tier: application
    managed-by: app-team
  annotations:
    description: "Application workloads"
    owner: "app-team@company.com"
---
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
  labels:
    tier: observability
    managed-by: platform-team
  annotations:
    description: "Monitoring and alerting stack"
    owner: "platform-team@company.com"
```

Apply them:
```bash
kubectl apply -f namespaces.yaml
```

The labels on these namespaces become powerful selectors for NetworkPolicies. A policy in the infra namespace can allow ingress only from namespaces labeled tier: application, blocking direct access from monitoring tools.
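As a sketch of that pattern (this manifest is illustrative and not part of the files in this guide), a NetworkPolicy in infra can select allowed client namespaces by label. Keep in mind that minikube's default network setup typically does not enforce NetworkPolicies; start the cluster with a policy-capable CNI, for example `minikube start --cni=calico`, if you want to test enforcement.

```yaml
# networkpolicy-infra.yaml -- illustrative example
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-app-tier
  namespace: infra
spec:
  podSelector: {}          # applies to every pod in the infra namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              tier: application
```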
ResourceQuotas Per Namespace#
ResourceQuotas prevent any single namespace from consuming the entire cluster. In minikube, this simulates the constrained environments your workloads actually run in.
```yaml
# quota-app.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: app-quota
  namespace: app
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
    pods: "15"
    services: "10"
    persistentvolumeclaims: "5"
```

```bash
kubectl apply -f quota-app.yaml
```

Once a ResourceQuota is active, every pod in that namespace must specify resource requests and limits. Pods without them are rejected. This catches a common production issue early: deployments that forget to set resource requests get silently scheduled onto already-overloaded nodes.
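For reference, a pod that satisfies the quota declares both requests and limits explicitly. The manifest below is a hedged illustration; the name and image are placeholders, not part of the setup above.

```yaml
# pod-with-resources.yaml -- illustrative only; name and image are placeholders
apiVersion: v1
kind: Pod
metadata:
  name: quota-demo
  namespace: app
spec:
  containers:
    - name: web
      image: nginx:1.27
      resources:
        requests:
          cpu: 250m      # counted against requests.cpu in app-quota
          memory: 256Mi  # counted against requests.memory
        limits:
          cpu: 500m      # counted against limits.cpu
          memory: 512Mi  # counted against limits.memory
```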
Check quota usage:
```bash
kubectl describe resourcequota app-quota -n app
```

LimitRanges: Default Resource Requests#
LimitRanges provide defaults so that pods without explicit resource specs are not rejected by the quota. They also set minimums and maximums per container.
```yaml
# limitrange-app.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: app
spec:
  limits:
    - default:
        cpu: 500m
        memory: 512Mi
      defaultRequest:
        cpu: 100m
        memory: 128Mi
      max:
        cpu: "2"
        memory: 2Gi
      min:
        cpu: 50m
        memory: 64Mi
      type: Container
```

```bash
kubectl apply -f limitrange-app.yaml
```

Now any pod deployed to app without resource specs gets a 100m CPU request, a 128Mi memory request, a 500m CPU limit, and a 512Mi memory limit automatically. Any container requesting more than 2 CPUs or 2Gi of memory is rejected.
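A quick way to confirm the defaults are injected is to run a pod with no resource spec and read back what admission filled in. The pod name here is arbitrary:

```bash
# Run a pod without any resource spec, then inspect what the LimitRange injected
kubectl run limits-test --image=nginx -n app --restart=Never
kubectl get pod limits-test -n app -o jsonpath='{.spec.containers[0].resources}'
# Expect requests of 100m CPU / 128Mi memory and limits of 500m CPU / 512Mi memory
kubectl delete pod limits-test -n app
```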
Namespace-Scoped RBAC#
Production clusters restrict what each ServiceAccount can do. Replicate this locally to verify your service account configurations before deployment.
Create a ServiceAccount with limited permissions in the app namespace:
```yaml
# rbac-app.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-deployer
  namespace: app
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-deployer-role
  namespace: app
rules:
  - apiGroups: ["", "apps"]
    resources: ["deployments", "services", "configmaps", "pods"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
    # Note: no create/update -- deployer can read secrets but not modify them
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-deployer-binding
  namespace: app
subjects:
  - kind: ServiceAccount
    name: app-deployer
    namespace: app
roleRef:
  kind: Role
  name: app-deployer-role
  apiGroup: rbac.authorization.k8s.io
```

```bash
kubectl apply -f rbac-app.yaml
```

Test that the ServiceAccount cannot access resources in other namespaces:
```bash
# This should succeed -- app-deployer can list pods in app namespace
kubectl auth can-i list pods --namespace=app --as=system:serviceaccount:app:app-deployer
# yes

# This should fail -- app-deployer has no permissions in infra namespace
kubectl auth can-i list pods --namespace=infra --as=system:serviceaccount:app:app-deployer
# no

# This should fail -- app-deployer cannot create secrets in app namespace
kubectl auth can-i create secrets --namespace=app --as=system:serviceaccount:app:app-deployer
# no
```

Testing Namespace Isolation#
Verify that pods in one namespace cannot access secrets in another namespace. This is the RBAC boundary that matters most in production.
```bash
# Create a secret in the infra namespace
kubectl create secret generic db-creds \
  --from-literal=password=supersecret \
  -n infra

# Run a pod in the app namespace using the app-deployer service account
kubectl run rbac-test --image=bitnami/kubectl:latest \
  --restart=Never \
  --overrides='{"spec":{"serviceAccountName":"app-deployer"}}' \
  -n app \
  --command -- sleep 3600

# Try to read the infra secret from the app namespace pod
kubectl exec rbac-test -n app -- kubectl get secret db-creds -n infra
# Error from server (Forbidden): secrets "db-creds" is forbidden
```

This confirms the RBAC boundary is enforced. The app-deployer ServiceAccount in app cannot reach secrets in infra.
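The positive case is worth a quick check too: the same pod can read secrets in its own namespace, since the Role grants get and list on secrets in app. Clean up the test pod afterwards.

```bash
# Same ServiceAccount, same pod -- but now inside its own namespace
kubectl exec rbac-test -n app -- kubectl get secrets -n app
# Succeeds: the Role allows get/list on secrets in app

# Remove the test pod
kubectl delete pod rbac-test -n app
```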
Practical Example: Full Microservices Namespace Setup#
Here is a complete setup for a microservices application with shared infrastructure, application workloads, and monitoring:
```bash
#!/bin/bash
# setup-namespaces.sh

# Create namespaces
kubectl create namespace infra --dry-run=client -o yaml | \
  kubectl label --local -f - tier=infrastructure --dry-run=client -o yaml | \
  kubectl apply -f -

kubectl create namespace app --dry-run=client -o yaml | \
  kubectl label --local -f - tier=application --dry-run=client -o yaml | \
  kubectl apply -f -

kubectl create namespace monitoring --dry-run=client -o yaml | \
  kubectl label --local -f - tier=observability --dry-run=client -o yaml | \
  kubectl apply -f -

# Apply quotas
kubectl apply -f quota-infra.yaml
kubectl apply -f quota-app.yaml
kubectl apply -f quota-monitoring.yaml

# Apply limit ranges
kubectl apply -f limitrange-infra.yaml
kubectl apply -f limitrange-app.yaml
kubectl apply -f limitrange-monitoring.yaml

# Apply RBAC
kubectl apply -f rbac-app.yaml

# Verify
kubectl get namespaces --show-labels
kubectl describe resourcequota -A
```
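The quota-infra.yaml, quota-monitoring.yaml, and limitrange-* files referenced by the script are not reproduced here; they follow the same shape as the app versions above. As a hedged example, quota-infra.yaml might look like this, with numbers you would tune for your machine:

```yaml
# quota-infra.yaml -- illustrative values only
apiVersion: v1
kind: ResourceQuota
metadata:
  name: infra-quota
  namespace: infra
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "3"
    limits.memory: 6Gi
    pods: "10"
    persistentvolumeclaims: "5"
```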
Deploy services into their appropriate namespaces:

```bash
# Databases and shared services go into infra
helm upgrade --install postgresql bitnami/postgresql -n infra -f values/postgresql.yaml
helm upgrade --install redis bitnami/redis -n infra -f values/redis.yaml

# Application workloads go into app
kubectl apply -f deployments/api-server.yaml -n app
kubectl apply -f deployments/worker.yaml -n app

# Monitoring goes into monitoring
helm upgrade --install prometheus prometheus-community/kube-prometheus-stack -n monitoring
```
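A quick sanity check that each workload landed in its intended namespace:

```bash
# Confirm placement per namespace
kubectl get pods -n infra
kubectl get pods -n app
kubectl get pods -n monitoring
```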
Cleanup Patterns#
Delete a namespace and everything in it:
```bash
kubectl delete namespace app
# This deletes ALL resources in the namespace: pods, services, secrets, configmaps, everything
```

Label-based cleanup for selective deletion:
```bash
# Delete all namespaces belonging to the platform team
kubectl delete namespace -l managed-by=platform-team

# Delete only test namespaces
kubectl delete namespace -l environment=test
```

To clean up everything and start fresh without destroying the minikube cluster itself:
```bash
# Delete all custom namespaces, keeping system namespaces
kubectl get namespaces --no-headers -o custom-columns=":metadata.name" | \
  grep -v -E '^(default|kube-system|kube-public|kube-node-lease)$' | \
  xargs kubectl delete namespace
```

Key Takeaways#
- Mirror your production namespace structure locally. The cost is minimal and the payoff is catching misconfigurations early.
- Always pair namespaces with ResourceQuotas and LimitRanges. Pods without resource specs should get sane defaults, not unlimited access.
- Test RBAC boundaries with `kubectl auth can-i`. If a ServiceAccount should not access a resource, verify it locally.
- Use namespace labels for NetworkPolicy selectors and bulk operations.
- Lock down the `default` namespace with a zero-pod ResourceQuota to prevent accidental deployments; a sketch of such a quota follows this list.
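That last item takes one small manifest. A minimal sketch, applied to the default namespace of your local cluster (the file name is arbitrary):

```yaml
# quota-default-lockdown.yaml -- illustrative manifest
apiVersion: v1
kind: ResourceQuota
metadata:
  name: no-pods
  namespace: default
spec:
  hard:
    pods: "0"   # any pod created in default is rejected by the quota
```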