# Minikube Networking: Services, Ingress, DNS, and LoadBalancer Emulation
Minikube networking behaves differently from cloud Kubernetes in ways that cause confusion. LoadBalancer services do not get external IPs by default, the minikube IP may or may not be directly reachable from your host depending on the driver, and ingress requires specific addon setup. Understanding these differences prevents hours of debugging connection timeouts to services that are actually running fine.
## How Minikube Networking Works
Minikube creates a single node (a VM or container depending on the driver) with its own IP address. Pods inside the cluster get IPs from an internal CIDR. Services get ClusterIPs from another internal range. The bridge between your host machine and the cluster depends entirely on which driver you use.
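The exact ranges depend on the driver and CNI. If you want to see what your cluster is actually using, these read-only commands (assuming a running minikube profile) show the pod IPs, the ClusterIPs, and the configured service range:

```shell
# Pod IPs come from the pod CIDR (often 10.244.0.0/16 with the default CNI)
kubectl get pods -A -o wide

# ClusterIPs come from the service CIDR (10.96.0.0/12 by default)
kubectl get svc -A

# The service range is set as an API server flag
kubectl cluster-info dump | grep -m1 -- "--service-cluster-ip-range"
```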
Get the minikube node IP:

```shell
minikube ip
# 192.168.49.2 (typical for the Docker driver)
```

**Critical difference by driver:**
- **VirtualBox/Hyperkit/QEMU:** The minikube IP is a real network interface on your host. You can curl it directly.
- **Docker driver on macOS:** The minikube IP is inside the Docker VM's network and is not directly reachable from your macOS host. You must use `minikube tunnel`, `minikube service`, or `kubectl port-forward` to access services.
- **Docker driver on Linux:** The minikube IP is on a Docker bridge network and is usually directly reachable.
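If you are unsure which driver a profile is using (and therefore which access pattern applies), minikube can tell you:

```shell
# Lists each profile with its driver, container runtime, IP, and status
minikube profile list
```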
## Service Types in Minikube

### ClusterIP
ClusterIP services work identically to production. They are only reachable from within the cluster.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-api
  namespace: app
spec:
  type: ClusterIP
  selector:
    app: my-api
  ports:
    - port: 8080
      targetPort: 8080
```

To access a ClusterIP service from your host, use port-forward:

```shell
kubectl port-forward svc/my-api 8080:8080 -n app
# Now accessible at localhost:8080
```

### NodePort
NodePort services expose a port on the minikube node’s IP. The port is in the range 30000-32767.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-api
  namespace: app
spec:
  type: NodePort
  selector:
    app: my-api
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30080
```

Access it:
```shell
# With VM-based drivers (VirtualBox, Hyperkit):
curl http://$(minikube ip):30080

# With the Docker driver (works on any platform):
minikube service my-api -n app
# Opens the service URL in your browser and prints the URL

minikube service my-api -n app --url
# Just prints the URL without opening the browser
```

The `minikube service` command handles the Docker driver networking abstraction automatically. It creates a tunnel if necessary and returns the correct URL.
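The `--url` form is handy in scripts: capture the URL once, then probe the service with curl. A sketch, assuming the `my-api` NodePort service above (`/healthz` is a placeholder path your app may not expose):

```shell
# --url prints something like http://127.0.0.1:54321 (Docker driver)
# or http://192.168.49.2:30080 (VM drivers); capture it for reuse
URL=$(minikube service my-api -n app --url)
curl "${URL}/healthz"   # placeholder endpoint
```

Note that on the Docker driver for macOS, `minikube service --url` stays in the foreground to hold its tunnel open, so run it in a separate terminal there rather than capturing it in a script.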
### LoadBalancer
In cloud Kubernetes, LoadBalancer services get an external IP from the cloud provider. In minikube, they stay in `<pending>` state forever unless you do one of two things.
**Option 1: minikube tunnel**

```shell
# Run in a separate terminal (foreground process, requires sudo)
minikube tunnel
```

While the tunnel is running, LoadBalancer services get external IPs assigned from the minikube node's network. This makes them reachable from your host.
```shell
kubectl get svc -n app
# NAME     TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)
# my-api   LoadBalancer   10.96.45.12   10.96.45.12   8080:30080/TCP
```

The tunnel must remain running. When you stop it, the external IPs are released.
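Because assignment only happens while the tunnel is up, scripts should wait for the external IP rather than assume it exists. A small polling sketch, reusing the `my-api` service and `app` namespace from the earlier examples:

```shell
# The jsonpath expression is empty until the tunnel assigns an IP,
# hence the retry loop
EXTERNAL_IP=""
while [ -z "${EXTERNAL_IP}" ]; do
  EXTERNAL_IP=$(kubectl get svc my-api -n app \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  sleep 2
done
echo "my-api reachable at http://${EXTERNAL_IP}:8080"
```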
**Option 2: MetalLB addon**
MetalLB is a bare-metal load balancer that assigns IPs from a configured pool. It provides real LoadBalancer behavior without needing `minikube tunnel` running.
```shell
minikube addons enable metallb
```

Configure the IP range (use a range within the minikube network):
```shell
# Get the minikube IP to determine the network
minikube ip
# 192.168.49.2

# Configure MetalLB with an IP range in the same subnet
minikube addons configure metallb
# -- Enter Load Balancer Start IP: 192.168.49.100
# -- Enter Load Balancer End IP: 192.168.49.120
```

Or configure via ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.49.100-192.168.49.120
```

After MetalLB is configured, LoadBalancer services automatically get IPs from the pool.
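For example, a LoadBalancer Service like the sketch below (the name is hypothetical; the selector reuses the `my-api` pods from earlier) would receive the next free IP from the 192.168.49.100-120 pool as soon as it is created:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-api-lb      # hypothetical name
  namespace: app
spec:
  type: LoadBalancer   # MetalLB assigns a pool IP to this service
  selector:
    app: my-api
  ports:
    - port: 8080
      targetPort: 8080
```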
## Ingress Controller

The ingress addon installs an Nginx ingress controller:

```shell
minikube addons enable ingress
```

Verify it is running:
```shell
kubectl get pods -n ingress-nginx
# NAME                                        READY   STATUS
# ingress-nginx-controller-5d88495688-xxxxx   1/1     Running
```

Create an Ingress resource:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: myapp.local
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 8080
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 3000
```

## The /etc/hosts Trick
For ingress to work with custom hostnames, add the minikube IP to your hosts file:
```shell
# Get the minikube IP
echo "$(minikube ip) myapp.local" | sudo tee -a /etc/hosts
```

On the Docker driver for macOS, where the minikube IP is not directly reachable, use 127.0.0.1 instead and run `minikube tunnel`:
```shell
echo "127.0.0.1 myapp.local" | sudo tee -a /etc/hosts
minikube tunnel  # in a separate terminal
```

Now `curl http://myapp.local/api` routes through the ingress controller to the correct backend service.
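If you would rather not edit /etc/hosts at all, curl can pin the hostname to an IP for a single request with `--resolve` (useful on drivers where the minikube IP is directly reachable):

```shell
# Maps myapp.local:80 to the minikube IP for this one request only,
# so the Host header still matches the Ingress rule
curl --resolve "myapp.local:80:$(minikube ip)" http://myapp.local/api
```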
## DNS Inside the Cluster

CoreDNS runs in minikube just as it does in production clusters. Services are discoverable via DNS:

```
<service-name>.<namespace>.svc.cluster.local
```

Short names work within the same namespace:
```shell
# From a pod in the "app" namespace:
curl http://my-api:8080                              # resolves to my-api.app.svc.cluster.local
curl http://postgresql.infra:5432                    # <service>.<namespace> also resolves via the search path
curl http://postgresql.infra.svc.cluster.local:5432  # fully qualified form always works
```

### DNS Debugging
Spin up a debugging pod with DNS tools:
kubectl run dnsutils \
--image=registry.k8s.io/e2e-test-images/jessie-dnsutils \
--restart=Never \
-- sleep 3600Run DNS queries from inside the cluster:
```shell
# Resolve a service
kubectl exec dnsutils -- nslookup my-api.app.svc.cluster.local

# Check what DNS server the pod is using
kubectl exec dnsutils -- cat /etc/resolv.conf

# Verify CoreDNS is responding
kubectl exec dnsutils -- nslookup kubernetes.default

# Test external DNS resolution
kubectl exec dnsutils -- nslookup google.com
```

If internal DNS works but external does not, the CoreDNS forward configuration is broken. Check the CoreDNS ConfigMap:

```shell
kubectl get configmap coredns -n kube-system -o yaml
```

Clean up when done:
```shell
kubectl delete pod dnsutils
```

## Network Policies
By default, minikube uses a CNI that does not enforce network policies. To test network policies locally, start minikube with Calico:
```shell
minikube start --cni=calico
```

Verify Calico is running:

```shell
kubectl get pods -n kube-system -l k8s-app=calico-node
```

### Default Deny Policy
Start with a default deny policy and then add explicit allow rules:
```yaml
# Deny all ingress to pods in the app namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: app
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```

### Allow Specific Traffic
```yaml
# Allow traffic from the ingress controller to frontend pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-frontend
  namespace: app
spec:
  podSelector:
    matchLabels:
      app: frontend
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - port: 3000
---
# Allow frontend to talk to the API, but nothing else
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: app
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - port: 8080
```

### Do Not Forget DNS Egress
A default deny egress policy blocks DNS queries, which breaks everything. Always include a DNS egress rule:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: app
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
```

## Accessing Services From Your Host: When to Use Which
| Method | When to Use | Limitations |
|---|---|---|
| `kubectl port-forward` | Quick debugging of a single service | Only one service at a time, must keep terminal open |
| `minikube service` | Accessing NodePort/LoadBalancer services | Docker driver on macOS needs this instead of direct IP |
| `minikube tunnel` | LoadBalancer services need external IPs | Requires sudo, must run in foreground |
| Ingress + /etc/hosts | Multiple services behind one IP with routing | Requires ingress addon and hosts file editing |
| MetalLB | Persistent LoadBalancer IPs without tunnel | Slightly more setup, but no foreground process needed |
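Since `kubectl port-forward` holds the terminal and its connection can drop, a simple restart loop is a common workaround for longer debugging sessions. A sketch, reusing the `my-api` service from the earlier examples:

```shell
# Restart port-forward whenever it exits (press Ctrl-C twice to stop for good)
while true; do
  kubectl port-forward svc/my-api 8080:8080 -n app
  echo "port-forward exited, restarting..." >&2
  sleep 1
done
```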
## Common Gotchas
- **Docker driver on macOS: minikube IP is unreachable.** Use `minikube service`, `kubectl port-forward`, or `minikube tunnel` instead of curling the minikube IP directly.
- **LoadBalancer stuck in Pending.** You need either `minikube tunnel` running or MetalLB enabled. There is no cloud provider to assign IPs.
- **Ingress returns 404.** Check that the ingress controller pod is running in the `ingress-nginx` namespace and that your Ingress resource's `host` matches the hostname you are requesting.
- **Network policies have no effect.** The default minikube CNI does not enforce policies. Restart with `--cni=calico`.
- **Port-forward dies silently.** `kubectl port-forward` connections time out and drop. For persistent access, use ingress or NodePort services.
- **DNS resolution slow for external hosts.** The default `ndots:5` setting causes several failed search-domain lookups before an external name resolves. Set `ndots:2` in your pod's `dnsConfig` if this causes timeout issues.
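The `ndots` override from the last gotcha goes under the pod spec's `dnsConfig`. A minimal sketch (pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-tuned        # placeholder name
  namespace: app
spec:
  dnsConfig:
    options:
      - name: ndots
        value: "2"       # default is 5; lower means fewer search-domain attempts
  containers:
    - name: app
      image: nginx:alpine  # placeholder image
```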