Network Policies: Namespace Isolation and Pod-to-Pod Rules#
By default, every pod in a Kubernetes cluster can talk to every other pod. Network policies let you restrict that. They are namespace-scoped resources that select pods by label and define allowed ingress and egress rules.
Critical Prerequisite: CNI Support#
Network policies are only enforced if your CNI plugin supports them. Calico, Cilium, and Weave all support network policies. Flannel does not. If you are running Flannel, you can create NetworkPolicy resources without errors, but they will have absolutely no effect. This is a silent failure that wastes hours of debugging.
Check your CNI:
kubectl get pods -n kube-system | grep -E 'calico|cilium|flannel|weave'

In minikube, start with a network-policy-supporting CNI:

minikube start --cni=calico

Default Deny: The Foundation#
Without any NetworkPolicy, all traffic is allowed. The moment you apply a NetworkPolicy that selects a pod, all traffic not explicitly allowed by a policy is denied for that pod. This is the key mental model: policies are additive allow-lists on top of an implicit deny.
Default Deny All Ingress in a Namespace#
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}  # empty selector matches ALL pods in the namespace
  policyTypes:
  - Ingress

After applying this, no pod in the production namespace accepts any incoming traffic unless another NetworkPolicy explicitly allows it.
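To confirm the deny is in effect, you can exec into any pod in the namespace and try to reach another pod's service. The pod and service names below are placeholders for whatever runs in your cluster:

# Hypothetical pod/service names -- substitute your own.
# With default-deny-ingress applied, this request should hang and time out.
kubectl exec -it <any-pod> -n production -- wget -qO- --timeout=3 http://api-backend:8080/healthz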
Default Deny All Egress in a Namespace#
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress

This blocks all outgoing traffic from pods in the namespace, including DNS. You almost certainly need to pair this with a DNS allow rule.
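A quick way to see the DNS breakage for yourself is to run a lookup from any pod in the namespace (assuming the image ships nslookup, as busybox does). With the egress deny in place and no DNS allow rule, the query should time out rather than return an address; the pod name is a placeholder:

# Hypothetical pod name -- any pod selected by the deny policy will do.
kubectl exec -it <any-pod> -n production -- nslookup kubernetes.default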
Default Deny Both#
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

Allow DNS: The One Everyone Forgets#
When you set a default deny on egress, DNS resolution breaks immediately. Every pod needs to reach CoreDNS on UDP/TCP port 53 in the kube-system namespace. Apply this alongside any egress deny:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53

The label kubernetes.io/metadata.name is automatically set on every namespace by Kubernetes 1.21+. If you are on an older version, you need to label kube-system manually:

kubectl label namespace kube-system kubernetes.io/metadata.name=kube-system
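Before relying on that label in a selector, it is worth checking whether it is already present:

kubectl get namespace kube-system --show-labels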
Allow Specific Pod-to-Pod Traffic#
Allow the web-frontend pods to talk to api-backend pods on port 8080:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web-frontend
    ports:
    - protocol: TCP
      port: 8080

This policy selects api-backend pods and allows ingress from web-frontend pods in the same namespace.
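One way to verify the rule cuts both ways is to make the same request from a web-frontend pod (should succeed) and from a pod without that label (should time out). The pod names are placeholders, and the /healthz path is only an example endpoint:

# From a web-frontend pod: matched by the allow rule, should return a response.
kubectl exec -it <web-frontend-pod> -n production -- wget -qO- --timeout=3 http://api-backend:8080/healthz

# From a pod not labeled app: web-frontend: not matched, should hang and time out.
kubectl exec -it <other-pod> -n production -- wget -qO- --timeout=3 http://api-backend:8080/healthz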
Cross-Namespace Access#
Allow pods in the monitoring namespace to scrape metrics from pods in production:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring-scrape
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
    ports:
    - protocol: TCP
      port: 9090

To combine namespace and pod selectors (pods with a specific label in a specific namespace), put both selectors in the same from entry:
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: monitoring
    podSelector:
      matchLabels:
        app: prometheus

If you put them as separate list items under from, they act as an OR: any pod in the monitoring namespace OR any pod labeled app: prometheus in any namespace gets access. This is a common and dangerous mistake.
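For contrast, this is the OR form that grants broader access than intended. The only difference is the extra dash in front of podSelector, which turns it into a second, independent peer entry:

# OR semantics -- two separate peers, each allowed on its own.
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: monitoring
  - podSelector:
      matchLabels:
        app: prometheus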
Egress Rules#
Allow api-backend pods to reach an external database on port 5432:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-backend
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.5.0/24  # database subnet
    ports:
    - protocol: TCP
      port: 5432

For internal database access within the cluster, use a podSelector or namespaceSelector instead of ipBlock.
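As a sketch of that in-cluster variant, assuming a database namespace and pods labeled app: postgres (both hypothetical names you would adjust to your own deployment), the egress rule might look like this:

# Hypothetical namespace and labels -- adjust to match your database deployment.
egress:
- to:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: database
    podSelector:
      matchLabels:
        app: postgres
  ports:
  - protocol: TCP
    port: 5432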
Debugging Network Policies#
List all policies in a namespace:
kubectl get networkpolicies -n production

Inspect a specific policy:

kubectl describe networkpolicy allow-frontend-to-api -n production

Test connectivity between pods:
# From the frontend pod, try reaching the api-backend service
kubectl exec -it <frontend-pod> -n production -- wget -qO- --timeout=3 http://api-backend:8080/healthz
# If it hangs and times out, the network policy is blocking it
# If the connection is refused immediately, the service or pod is not running

Check which policies apply to a specific pod:
kubectl get networkpolicies -n production -o json | \
jq '.items[] | select(.spec.podSelector.matchLabels.app == "api-backend") | .metadata.name'

Common failure mode: You apply a default-deny-egress policy and everything breaks because pods cannot resolve DNS. Always deploy the DNS allow policy at the same time as any egress deny policy. If you suspect DNS is the issue, test with an IP address instead of a hostname to confirm.
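A sketch of that IP-based check: look up the service's ClusterIP, then repeat the request against the IP directly. If the IP works but the hostname does not, DNS egress is what's being blocked. Pod and service names are placeholders:

# Get the ClusterIP of the service, then bypass DNS by using it directly.
kubectl get svc api-backend -n production -o jsonpath='{.spec.clusterIP}'
kubectl exec -it <frontend-pod> -n production -- wget -qO- --timeout=3 http://<cluster-ip>:8080/healthz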