Namespace Strategy and Multi-Tenancy#
Namespaces are the foundation for isolating workloads in a shared Kubernetes cluster. Without a deliberate strategy, teams deploy into arbitrary namespaces, resource usage goes unbounded, and one misbehaving application can take down the entire cluster.
Why Namespaces Matter#
Namespaces provide four isolation boundaries:
- RBAC scoping: Roles and RoleBindings are namespace-scoped, so you can grant teams access to their namespaces only.
- Resource quotas: Limit CPU, memory, and object counts per namespace, preventing one team from starving others.
- Network policies: Restrict traffic between namespaces so a compromised application cannot reach services it should not.
- Organizational clarity: `kubectl get pods -n payments-prod` shows exactly what you expect, not a jumble of unrelated workloads.
Recommended Namespace Layout#
System Namespaces#
These exist in every cluster and should be off-limits to application teams:
| Namespace | Purpose |
|---|---|
| `kube-system` | Core Kubernetes components (CoreDNS, kube-proxy) |
| `ingress-nginx` or `ingress` | Ingress controller |
| `monitoring` | Prometheus, Grafana, Alertmanager |
| `argocd` | GitOps controller |
| `cert-manager` | TLS certificate management |
Application Namespaces#
Use the `{app}-{env}` pattern for the best isolation:

```
payments-dev
payments-staging
payments-prod
frontend-dev
frontend-prod
shared-services-prod   # For shared databases, message queues
```

This gives each application in each environment its own resource quota, RBAC scope, and network boundary.
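Because the names are purely mechanical, the full set can be generated in a short loop when bootstrapping a cluster. The application and environment lists below are examples, and each emitted name can be piped into `kubectl create namespace`:

```shell
#!/bin/bash
# Build the {app}-{env} namespace list from two small arrays.
apps=(payments frontend)
envs=(dev staging prod)

namespaces=()
for app in "${apps[@]}"; do
  for env in "${envs[@]}"; do
    namespaces+=("${app}-${env}")
    echo "${app}-${env}"
  done
done
```

Note that the loop produces every combination, including ones (like `frontend-staging`) your layout may not actually need; prune the lists accordingly.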
ResourceQuota: Prevent Resource Hogging#
A ResourceQuota caps what a namespace can consume:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: payments-prod
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "30"
    services: "10"
    persistentvolumeclaims: "5"
    configmaps: "20"
    secrets: "20"
```

Once a quota exists, every pod in the namespace must specify resource requests and limits, or the API server rejects it. This catches the common mistake of deploying containers with no resource bounds.
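Quota consumption can be checked at any time, and the admission failure is easy to reproduce. A sketch, assuming the quota above is applied; the pod name is arbitrary and the error text is paraphrased:

```shell
# Show current usage against each hard cap
kubectl describe resourcequota compute-quota -n payments-prod

# In a namespace with a compute quota and no LimitRange defaults yet,
# a pod that sets no requests/limits is rejected at admission with an
# error along the lines of:
#   Error from server (Forbidden): ... failed quota: compute-quota:
#   must specify limits.cpu, limits.memory, requests.cpu, requests.memory
kubectl run no-bounds --image=nginx -n payments-prod
```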
LimitRange: Set Sensible Defaults#
A LimitRange provides default requests and limits so developers do not have to add them to every pod spec manually:
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: payments-prod
spec:
  limits:
    - default:
        cpu: 500m
        memory: 512Mi
      defaultRequest:
        cpu: 100m
        memory: 128Mi
      max:
        cpu: "2"
        memory: 2Gi
      min:
        cpu: 50m
        memory: 64Mi
      type: Container
```

The `max` and `min` fields reject containers that request resources outside the acceptable range. This prevents someone from deploying a pod requesting 64 CPU cores into a namespace capped at 4.
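To confirm the defaults are actually being injected, create a pod with no resources block and read back what the API server stored. The pod name here is arbitrary:

```shell
# The pod spec sets no resources; the LimitRange fills in the defaults
kubectl run defaults-check --image=nginx -n payments-prod

# Read back what admission injected; expect requests of 100m/128Mi and
# limits of 500m/512Mi, matching default and defaultRequest above
kubectl get pod defaults-check -n payments-prod \
  -o jsonpath='{.spec.containers[0].resources}'
```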
NetworkPolicy: Isolate Namespaces#
By default, all pods can communicate with all other pods across all namespaces. A default-deny policy locks this down:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments-prod
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```

Then explicitly allow the traffic you need:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: payments-prod
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}
  egress:
    - to:
        - podSelector: {}
    - to:  # Allow DNS resolution
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
```
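Note that a bare `podSelector` in a `from` clause matches only pods in the policy's own namespace, so traffic from unrelated namespaces stays blocked. This can be spot-checked with a throwaway pod; the source namespace and service name below are illustrative:

```shell
# From a different namespace, try to reach a service in payments-prod.
# With default-deny in place the request should time out, not connect.
kubectl run probe --rm -it --restart=Never --image=busybox -n frontend-prod -- \
  wget -q -O- -T 3 http://payments-api.payments-prod.svc.cluster.local
```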
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-controller
  namespace: payments-prod
spec:
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
```

RBAC: Scope Access to Namespaces#
Define a ClusterRole once and bind it per namespace:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-developer
rules:
  - apiGroups: ["", "apps", "batch"]
    resources: ["pods", "pods/log", "deployments", "services", "configmaps", "jobs", "cronjobs"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payments-team-access
  namespace: payments-prod
subjects:
  - kind: Group
    name: payments-team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: namespace-developer
  apiGroup: rbac.authorization.k8s.io
```

Note that Secrets get `get` and `list` but not `create` or `update` – production secrets should be managed through a secrets manager, not by developers directly.
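The bindings can be verified without switching credentials by impersonating the group; the user name below is a placeholder:

```shell
# Expected "yes": the ClusterRole grants deployment management here
kubectl auth can-i create deployments -n payments-prod \
  --as="dev@example.com" --as-group="payments-team"

# Expected "no": the role deliberately omits write access to secrets
kubectl auth can-i create secrets -n payments-prod \
  --as="dev@example.com" --as-group="payments-team"

# Expected "no": nothing binds the group in other teams' namespaces
kubectl auth can-i create deployments -n frontend-prod \
  --as="dev@example.com" --as-group="payments-team"
```

Impersonation requires the caller to hold the `impersonate` verb, which cluster admins typically have.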
Complete Namespace Setup Script#
This script creates a namespace with all isolation primitives in place:
```bash
#!/bin/bash
set -euo pipefail

NAMESPACE="${1:?Usage: $0 <namespace> [cpu-request] [mem-request]}"
CPU_REQUEST="${2:-4}"
MEM_REQUEST="${3:-8Gi}"

kubectl create namespace "$NAMESPACE" --dry-run=client -o yaml | kubectl apply -f -

# Label for NetworkPolicy selectors
kubectl label namespace "$NAMESPACE" \
  environment="${NAMESPACE##*-}" \
  --overwrite

# ResourceQuota
kubectl apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: $NAMESPACE
spec:
  hard:
    requests.cpu: "$CPU_REQUEST"
    requests.memory: $MEM_REQUEST
    limits.cpu: "$((CPU_REQUEST * 2))"
    limits.memory: "$(( ${MEM_REQUEST%Gi} * 2 ))Gi"  # doubles the request; assumes a Gi suffix
    pods: "30"
    services: "10"
    persistentvolumeclaims: "5"
EOF

# LimitRange
kubectl apply -f - <<EOF
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: $NAMESPACE
spec:
  limits:
    - default:
        cpu: 500m
        memory: 512Mi
      defaultRequest:
        cpu: 100m
        memory: 128Mi
      max:
        cpu: "2"
        memory: 2Gi
      min:
        cpu: 50m
        memory: 64Mi
      type: Container
EOF

# Default-deny NetworkPolicy plus same-namespace and DNS allowances
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: $NAMESPACE
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-and-dns
  namespace: $NAMESPACE
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}
  egress:
    - to:
        - podSelector: {}
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
EOF

echo "Namespace $NAMESPACE created with quota, limits, and network policies."
```

Decision Framework: Separate Namespaces vs Separate Clusters#
| Factor | Separate Namespaces | Separate Clusters |
|---|---|---|
| Cost | Lower – shared control plane | Higher – per-cluster overhead |
| Isolation | Soft – shared kernel, shared etcd | Hard – full isolation |
| Blast radius | Control plane failure affects all tenants | Failure isolated to one cluster |
| Compliance | May not satisfy regulatory requirements | Required for PCI, HIPAA in some interpretations |
| Complexity | Lower – one cluster to manage | Higher – multi-cluster networking, identity federation |
Use namespaces when teams trust each other, workloads have similar security requirements, and you want simpler operations.
Use separate clusters when regulatory compliance demands hard isolation, teams have fundamentally different reliability requirements, or the blast radius of a single cluster failure is unacceptable.
Most organizations start with namespaces and split into separate clusters only when they have a concrete reason.