# Kubernetes Namespace Organization
Namespaces are Kubernetes’ primary mechanism for dividing a cluster among teams, applications, and environments. Getting the strategy right early saves significant pain later. Getting it wrong means RBAC tangles, resource contention, and deployment confusion.
## Strategy 1: Per-Team Namespaces
Each team gets a namespace (`team-platform`, `team-payments`, `team-frontend`). All applications owned by that team deploy into it.
When it works: Clear team boundaries with shared responsibility for multiple services.
When it breaks: Teams with dozens of microservices end up with enormous namespaces. One noisy service can starve others, and resource quota allocation becomes a negotiation.
## Strategy 2: Per-Environment Namespaces
Separate namespaces per stage: `dev`, `staging`, `production`.
When it works: Small teams running the same applications across environments. Simple RBAC: developers get full access to dev, read-only to production.
When it breaks: Multiple teams deploying into the same environment namespace. No isolation between unrelated apps.
## Strategy 3: Per-Application-Per-Environment (Recommended)
Combine application and environment: `payments-dev`, `payments-staging`, `payments-prod`, `frontend-prod`.
This gives the best isolation. Resource quotas target each app in each environment. RBAC grants developers access to `-dev` namespaces while restricting `-prod` to deployment service accounts. It means more namespaces to manage, but that is easily automated, as sketched below.
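Automation can be as simple as stamping out labeled Namespace manifests per app and stage. A minimal sketch, assuming a `payments` app; the `app` and `environment` label names are an illustrative convention, not anything Kubernetes requires:

```yaml
# One namespace per application per environment. Labels let
# NetworkPolicies, quotas, and cost tooling select them later.
apiVersion: v1
kind: Namespace
metadata:
  name: payments-dev
  labels:
    app: payments
    environment: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: payments-prod
  labels:
    app: payments
    environment: prod
```

A GitOps tool or a CI templating step can generate these from a simple list of applications and stages.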
## Resource Quotas Per Namespace
Without quotas, a single namespace can consume the entire cluster. Always set quotas on shared clusters.
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: namespace-quota
  namespace: payments-prod
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
    services: "10"
    persistentvolumeclaims: "5"
```

When a quota is in place, every pod must specify resource requests and limits or be rejected. Use a LimitRange to set defaults:
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: payments-prod
spec:
  limits:
  - default:
      cpu: 500m
      memory: 512Mi
    defaultRequest:
      cpu: 100m
      memory: 128Mi
    type: Container
```
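Once both objects are applied, `kubectl describe` shows consumption against the quota and the defaults that will be injected; a quick check, using the object names from the examples above:

```bash
# Show current usage against the hard limits in payments-prod
kubectl describe resourcequota namespace-quota -n payments-prod

# Show the default requests/limits applied to containers
kubectl describe limitrange default-limits -n payments-prod
```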
## RBAC Scoped to Namespaces

Roles and RoleBindings are namespace-scoped, which is how you grant per-namespace access:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer-access
  namespace: payments-dev
rules:
- apiGroups: ["", "apps", "batch"]
  resources: ["pods", "deployments", "services", "configmaps", "jobs"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payments-team-dev-access
  namespace: payments-dev
subjects:
- kind: Group
  name: payments-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer-access
  apiGroup: rbac.authorization.k8s.io
```

To grant the same permissions across multiple namespaces, define a ClusterRole and bind it per-namespace with RoleBindings. This avoids duplicating the Role definition.
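A minimal sketch of that pattern, carrying over the rules and names from the Role above for illustration:

```yaml
# Defined once, cluster-wide; a ClusterRole has no namespace of its own.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: developer-access
rules:
- apiGroups: ["", "apps", "batch"]
  resources: ["pods", "deployments", "services", "configmaps", "jobs"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
# A RoleBinding may reference a ClusterRole; the grant then applies
# only within the binding's own namespace (payments-dev here).
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payments-team-dev-access
  namespace: payments-dev
subjects:
- kind: Group
  name: payments-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: developer-access
  apiGroup: rbac.authorization.k8s.io
```

Repeat the RoleBinding in every namespace that should receive the same access; only the binding is duplicated, never the rules.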
## Cross-Namespace Communication
Services in different namespaces communicate via fully qualified DNS names:
```
<service-name>.<namespace>.svc.cluster.local
```

For example, the frontend in `frontend-prod` calls the payments API in `payments-prod`:
```
http://payments-api.payments-prod.svc.cluster.local:8080
```

The short form `payments-api` only resolves within the same namespace. Across namespaces, include at least the namespace (`payments-api.payments-prod`); the fully qualified form works regardless of the pod's DNS search path.
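To confirm resolution from inside the cluster, a throwaway pod is enough; a quick check, assuming the service and namespace names above:

```bash
# Run a temporary busybox pod and resolve the cross-namespace name
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup payments-api.payments-prod.svc.cluster.local
```

NetworkPolicies can further restrict which namespaces are allowed to communicate: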
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: payments-prod
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          app-tier: frontend
```
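Note that `namespaceSelector` matches labels on the Namespace object itself, so the calling namespace must carry the label; assuming the `frontend-prod` namespace from earlier:

```bash
# Label the calling namespace so the policy's selector matches it
kubectl label namespace frontend-prod app-tier=frontend
```

Also remember that NetworkPolicies are only enforced when the cluster's CNI plugin supports them.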
## The Default Namespace Pitfall

The `default` namespace exists in every cluster and is where resources land when you do not specify `-n`. This causes two problems:

- **Accidental deployments.** Forgetting `-n` creates ghost workloads that are hard to find.
- **No resource isolation.** There are no quotas or RBAC restrictions by default.
Fix it:
```bash
# Apply a zero-resource quota to prevent any workloads in default
kubectl apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: no-workloads
  namespace: default
spec:
  hard:
    pods: "0"
EOF
```

Better yet, set a default namespace in your kubeconfig context so kubectl never targets `default`:
```bash
kubectl config set-context --current --namespace=payments-dev
```
## Namespace Stuck in Terminating

When you run `kubectl delete namespace <name>`, it sometimes hangs in `Terminating` status indefinitely. This happens when finalizers on resources inside the namespace cannot be satisfied, usually because the controller that handles the finalizer is gone.
Diagnose it:
```bash
# Find resources with finalizers still present
kubectl api-resources --verbs=list --namespaced -o name | \
  xargs -n 1 kubectl get --show-kind --ignore-not-found -n <namespace>
```

If a custom resource has a finalizer but its controller has been deleted, remove the finalizer manually:
```bash
kubectl patch <resource-type> <name> -n <namespace> \
  -p '{"metadata":{"finalizers":null}}' --type=merge
```

As a last resort, you can force-delete the namespace by removing its finalizer via the API. Export the namespace spec, remove the `kubernetes` finalizer from `spec.finalizers`, and PUT it back:
```bash
kubectl get namespace <name> -o json | \
  jq '.spec.finalizers = []' | \
  kubectl replace --raw "/api/v1/namespaces/<name>/finalize" -f -
```

This is a forceful operation. Only use it after confirming no legitimate finalizers are pending.
## Key Takeaways
- Per-application-per-environment namespaces give the best isolation for most teams.
- Always pair namespaces with ResourceQuotas and LimitRanges on shared clusters.
- Use ClusterRoles with per-namespace RoleBindings to avoid duplicating RBAC definitions.
- Cross-namespace DNS needs at least the `<svc>.<ns>` form; the full `<svc>.<ns>.svc.cluster.local` is unambiguous.
- Lock down the `default` namespace to prevent accidental deployments.
- Stuck `Terminating` namespaces are almost always caused by orphaned finalizers.