# Kubernetes Service Types and DNS-Based Discovery
Services are the stable networking abstraction in Kubernetes. Pods come and go, but a Service gives you a consistent DNS name and IP address that routes to the right set of pods. Choosing the wrong Service type or misunderstanding DNS discovery is behind a large percentage of connectivity failures.
## Service Types

### ClusterIP (Default)
ClusterIP creates an internal-only virtual IP. Only pods inside the cluster can reach it. This is what you want for internal communication between microservices.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-backend
  namespace: production
spec:
  type: ClusterIP
  selector:
    app: api-backend
  ports:
    - port: 8080
      targetPort: 8080
```

Other pods reach this at `api-backend.production.svc.cluster.local:8080`, or just `api-backend:8080` if they are in the same namespace.
### NodePort
NodePort exposes the service on a static port (30000-32767) on every node’s IP. Useful for development, minikube setups, or when you need external access without a cloud load balancer.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
spec:
  type: NodePort
  selector:
    app: web-frontend
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080  # optional; Kubernetes assigns one if omitted
```

Access it at `<any-node-ip>:30080`. In minikube, use `minikube service web-frontend --url` to get the routable address.
### LoadBalancer
LoadBalancer provisions an external load balancer through your cloud provider (AWS ELB, GCP LB, etc.). It is a superset of NodePort – it creates a NodePort and then puts a cloud LB in front of it.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: public-api
spec:
  type: LoadBalancer
  selector:
    app: public-api
  ports:
    - port: 443
      targetPort: 8443
```

On bare metal or minikube, LoadBalancer services stay in the `Pending` state forever unless you run MetalLB or use `minikube tunnel`.
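If you go the MetalLB route, the minimum configuration is an address pool plus a layer-2 advertisement. A sketch, assuming MetalLB's CRD-based configuration (v0.13+); the address range is a placeholder and must be an unused block on your node network:

```yaml
# Hypothetical MetalLB layer-2 setup. The IP range below is an example;
# replace it with free addresses on the same subnet as your nodes.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```

With this in place, LoadBalancer services get an external IP from the pool instead of sitting in `Pending`.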
### ExternalName
ExternalName creates a CNAME DNS record pointing to an external hostname. No proxying happens. It is just a DNS alias.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
  namespace: production
spec:
  type: ExternalName
  externalName: mydb.us-east-1.rds.amazonaws.com
```

Pods can now connect to `external-db:5432` and DNS resolves to the RDS hostname. No selector, no endpoints. Be aware that ExternalName does not support ports; it only does DNS-level redirection, so the client must know the correct port.
### Headless Services
A headless service has clusterIP: None. Instead of a single virtual IP, DNS returns the individual pod IPs. This is essential for StatefulSets where clients need to reach specific pods (database replicas, Kafka brokers).
```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None
  selector:
    app: postgres
  ports:
    - port: 5432
```

DNS for `postgres.default.svc.cluster.local` returns A records for every pod matching the selector. For a StatefulSet, each pod also gets its own DNS entry: `postgres-0.postgres.default.svc.cluster.local`.
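Those per-pod DNS names only appear when the StatefulSet references the headless Service through `spec.serviceName`. A minimal sketch to pair with the Service above; the image tag and replica count are placeholders:

```yaml
# Sketch: spec.serviceName must name the headless Service for
# postgres-0.postgres... style DNS entries to be created.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres    # must match the headless Service name
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres      # matches the Service selector
    spec:
      containers:
        - name: postgres
          image: postgres:16   # placeholder image tag
          ports:
            - containerPort: 5432
```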
## DNS-Based Service Discovery

Every Service gets a DNS record of the form:

```
<service-name>.<namespace>.svc.cluster.local
```

Pods in the same namespace can use just `<service-name>`. Pods in a different namespace must include the namespace: `<service-name>.<namespace>`. The full FQDN with the `svc.cluster.local` suffix is rarely needed but eliminates ambiguity.
CoreDNS configures each pod with search domains:

```
search <pod-namespace>.svc.cluster.local svc.cluster.local cluster.local
```

This is why short names like `api-backend` resolve: the search domains append the namespace and cluster suffix automatically.
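You can see this search path from inside any pod by reading its resolver config. Typical output for a pod in the `production` namespace is sketched below; the nameserver is the cluster DNS Service IP, and `10.96.0.10` is just a common default that varies by cluster:

```
$ kubectl exec -it <pod-name> -n production -- cat /etc/resolv.conf
search production.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5
```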
## Debugging Service Connectivity

When a pod cannot reach a service, work through this sequence:
1. Does the service exist and have endpoints?

```sh
kubectl get svc -n <namespace>
kubectl get endpoints <service-name> -n <namespace>
```

If endpoints are empty, the selector does not match any running pods. Compare the service selector with actual pod labels:

```sh
kubectl describe svc <service-name> -n <namespace>
kubectl get pods -n <namespace> --show-labels
```

2. Can you resolve the DNS name from inside a pod?
```sh
kubectl exec -it <pod-name> -n <namespace> -- nslookup <service-name>
# Or if nslookup is not available:
kubectl run dns-test --image=busybox:1.36 --rm -it --restart=Never -- nslookup <service-name>.<namespace>.svc.cluster.local
```

If DNS resolution fails, check that CoreDNS is running: `kubectl get pods -n kube-system -l k8s-app=kube-dns`.
3. Can you reach the service port?

```sh
kubectl exec -it <pod-name> -n <namespace> -- wget -qO- http://<service-name>:<port>/healthz
# Or with curl if available:
kubectl exec -it <pod-name> -n <namespace> -- curl -s http://<service-name>:<port>/healthz
```

4. Is the target pod actually listening?

```sh
kubectl exec -it <target-pod> -n <namespace> -- ss -tlnp
```

Verify that the `targetPort` on the Service matches the port the container is actually listening on.

5. Check for network policies blocking traffic:

```sh
kubectl get networkpolicies -n <namespace>
```

If network policies exist and do not explicitly allow the traffic, it will be silently dropped.
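When a policy is the culprit, you need an explicit allow rule. A minimal sketch; the labels, namespace, and port are placeholders to adapt to your workloads:

```yaml
# Hypothetical policy: lets pods labeled app=web-frontend reach pods
# labeled app=api-backend on TCP 8080. Adjust names to your cluster.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-backend      # the pods being protected
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web-frontend   # the allowed client pods
      ports:
        - protocol: TCP
          port: 8080
```

Note that enforcement depends on your CNI plugin; without one that implements NetworkPolicy, the object is accepted but has no effect.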
## Common Mistakes

- Wrong selector. The service selector must match pod labels exactly. Labels on the Deployment do not count; it is the `template.metadata.labels` that matter.
- Port vs targetPort confusion. `port` is what consumers use; `targetPort` is what the container listens on. They do not have to be the same.
- Cross-namespace without qualification. `curl api-backend:8080` works in the same namespace. From another namespace you need `curl api-backend.production:8080`.
- LoadBalancer pending on bare metal. Without a cloud provider or MetalLB, LoadBalancer services never get an external IP. Use NodePort or set up MetalLB.
- Headless service with session affinity. `sessionAffinity` has no effect on headless services since there is no proxy; the client connects directly to pod IPs.
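The selector mistake is easiest to see side by side. A sketch of a Deployment and Service that match correctly; names and the image are illustrative:

```yaml
# Illustrative pairing: the Service selector must equal the pod
# template labels, not the Deployment's own metadata.labels.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-backend
  labels:
    team: platform           # ignored by the Service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-backend
  template:
    metadata:
      labels:
        app: api-backend     # this is what the Service matches
    spec:
      containers:
        - name: api
          image: example.com/api-backend:1.0   # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-backend
spec:
  selector:
    app: api-backend         # matches template.metadata.labels above
  ports:
    - port: 8080
      targetPort: 8080
```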