Choosing an Ingress Controller#

An Ingress controller is the component that actually routes external traffic into your cluster. The Ingress resource (or Gateway API resource) defines the rules – which hostnames and paths map to which backend Services – but without a controller watching those resources and configuring a reverse proxy, nothing happens. The choice of controller affects performance, configuration ergonomics, TLS management, protocol support, and operational cost.

Unlike CNI plugins, you can run multiple ingress controllers in the same cluster, which is a common pattern for separating internal and external traffic. This reduces the stakes of any single choice, but your primary controller still deserves careful selection.

What Ingress Controllers Do#

All ingress controllers share a core responsibility: watch for Ingress (or Gateway) resources in the Kubernetes API, then configure a reverse proxy to route traffic according to those rules. The differences are in:

  • Configuration model: Annotations on Ingress resources, custom CRDs, or ConfigMaps.
  • Feature set: Rate limiting, authentication, circuit breaking, header manipulation, traffic mirroring.
  • Protocol support: HTTP/HTTPS only, or also native TCP, UDP, and gRPC.
  • TLS management: Manual certificate configuration, integration with cert-manager, or built-in ACME (Let’s Encrypt).
  • Performance characteristics: Connection handling, latency, throughput under load.
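The contract itself is small. A minimal Ingress (host, Service name, and class here are placeholders) maps one hostname and path to a backend Service; any conformant controller programs its proxy from exactly this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx        # selects which controller handles this resource
  rules:
  - host: www.example.com        # hostname to match
    http:
      paths:
      - path: /                  # path prefix to match
        pathType: Prefix
        backend:
          service:
            name: web-frontend   # backend Service (assumed to exist)
            port:
              number: 80
```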

Comparison Table#

| Feature | Nginx Ingress | Traefik | HAProxy Ingress | AWS ALB Controller | Gateway API (standard) |
|---|---|---|---|---|---|
| Config model | Annotations + ConfigMap | CRDs (IngressRoute) + labels | Annotations + ConfigMap | Annotations | Gateway, HTTPRoute CRDs |
| Performance | High | Good | Very high | Cloud-managed | Depends on implementation |
| TLS management | Manual / cert-manager | Built-in ACME (Let’s Encrypt) | Manual / cert-manager | ACM (AWS Certificate Manager) | Implementation-dependent |
| TCP/UDP support | Yes (via ConfigMap) | Yes (native) | Yes (native) | NLB for TCP | Yes (TCPRoute, UDPRoute) |
| gRPC support | Yes | Yes | Yes | Yes | Yes |
| Rate limiting | Annotation-based | Middleware CRDs | Advanced algorithms | WAF integration | Implementation-dependent |
| Auth integration | Basic, OAuth (external auth) | Forward auth, middleware chains | Basic, OAuth | Cognito, OIDC native | Implementation-dependent |
| WebSocket support | Yes | Yes | Yes | Yes | Yes |
| Gateway API support | Yes (nginx-gateway-fabric) | Yes | Limited | Yes | Native |
| Cost | Free (self-managed) | Free (self-managed) | Free (self-managed) | Per-ALB pricing ($16+/mo) | Free (self-managed) |
| Maturity | Very mature | Mature | Mature | Mature (AWS) | Evolving (GA for core) |

Nginx Ingress Controller#

The most widely deployed ingress controller. It uses nginx as the underlying reverse proxy and configures it dynamically based on Ingress resources. Configuration is primarily through annotations on individual Ingress objects and a global ConfigMap.

Choose Nginx Ingress when:

  • You need a general-purpose, well-documented ingress controller.
  • Your team is familiar with nginx configuration concepts.
  • You need custom nginx configuration snippets for edge cases (the nginx.ingress.kubernetes.io/server-snippet and configuration-snippet annotations allow injecting raw nginx config).
  • You want the broadest community support – most troubleshooting guides and Stack Overflow answers assume nginx ingress.
  • You need a proven solution that handles high traffic reliably.

Limitations:

  • Annotation-driven configuration becomes unwieldy at scale. With dozens of Ingress resources, each needing different annotations, the configuration is scattered across many objects.
  • Global settings via ConfigMap apply to all Ingress resources. Per-Ingress overrides require annotations that can conflict with global settings in non-obvious ways.
  • Custom snippets are a security risk in multi-tenant clusters (arbitrary nginx config injection).
  • nginx reloads on configuration change. Under very high churn (many Ingress resources changing frequently), this can cause brief connection drops.

A typical Ingress using common ingress-nginx annotations:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/limit-rps: "10"
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - api.example.com
    secretName: api-tls
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-server
            port:
              number: 8080

Important: There are two nginx ingress projects. ingress-nginx (kubernetes/ingress-nginx) is the community-maintained, open-source version. nginx-ingress-controller (nginxinc/kubernetes-ingress) is NGINX Inc’s own controller, with a commercial NGINX Plus edition. Most guides and most clusters use the community ingress-nginx.

Traefik#

Traefik is a cloud-native reverse proxy that auto-discovers services and configures routing dynamically. In Kubernetes, it uses custom CRDs (IngressRoute, Middleware) for configuration, providing a more structured alternative to annotation sprawl. Its standout feature is built-in ACME support for automatic Let’s Encrypt certificate provisioning.

Choose Traefik when:

  • You want automatic TLS certificate management without deploying cert-manager separately. Traefik’s built-in ACME client handles Let’s Encrypt certificate issuance and renewal.
  • You prefer CRD-based configuration over annotations. Traefik’s IngressRoute and Middleware CRDs are more readable and composable than annotation lists.
  • You are running dynamic environments where services appear and disappear frequently (Traefik’s service discovery is very responsive).
  • You want middleware chains (rate limiting, authentication, headers, circuit breaking) as reusable, composable objects.

Limitations:

  • Smaller community than nginx ingress, so fewer troubleshooting resources.
  • Performance under extreme load (100k+ concurrent connections) is lower than nginx or HAProxy.
  • The CRD approach means Traefik-specific configuration is not portable to other ingress controllers.
  • Some advanced capabilities (high-availability ACME, distributed features) require Traefik Enterprise (paid).
# Traefik IngressRoute with middleware chain
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: api-route
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`api.example.com`) && PathPrefix(`/v1`)
    kind: Rule
    services:
    - name: api-server
      port: 8080
    middlewares:
    - name: rate-limit
    - name: api-headers
  tls:
    certResolver: letsencrypt
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: rate-limit
spec:
  rateLimit:
    average: 100
    burst: 50
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: api-headers
spec:
  headers:
    customResponseHeaders:
      X-Frame-Options: DENY
      X-Content-Type-Options: nosniff
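The certResolver: letsencrypt referenced in the IngressRoute must be defined in Traefik’s static configuration (file or CLI flags), not in a CRD. A sketch of the file form; the resolver name, email, and storage path are placeholders:

```yaml
# traefik.yml (static configuration)
certificatesResolvers:
  letsencrypt:                     # name referenced by certResolver in IngressRoutes
    acme:
      email: ops@example.com       # placeholder contact for Let's Encrypt
      storage: /data/acme.json     # must be persisted (e.g. a PersistentVolume)
      httpChallenge:
        entryPoint: web            # HTTP-01 challenge served on the plain-HTTP entrypoint
```

The storage file holds issued certificates and account keys, so it needs a persistent volume; losing it forces re-issuance against Let’s Encrypt rate limits.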

HAProxy Ingress#

HAProxy is a high-performance TCP/HTTP load balancer that has been in production since 2001. The HAProxy ingress controller brings this performance to Kubernetes. It excels at raw throughput, advanced load balancing algorithms, and TCP-level routing.

Choose HAProxy Ingress when:

  • You need the highest possible throughput and lowest latency. HAProxy consistently benchmarks at the top for connection handling and raw performance.
  • You need advanced load balancing algorithms beyond round-robin (least connections, source hashing, URI-based hashing, random with power-of-two-choices).
  • You need TCP-level routing and load balancing (not just HTTP).
  • Your team has HAProxy operational experience.

Limitations:

  • Smaller Kubernetes community compared to nginx and Traefik. Fewer tutorials and community resources.
  • Configuration model is annotation-based, similar to nginx ingress, with the same scaling challenges.
  • No built-in ACME/Let’s Encrypt support – requires cert-manager.
  • Less feature parity with newer cloud-native patterns (no middleware-chain CRDs comparable to Traefik’s).
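A sketch of an Ingress for the haproxytech controller, selecting a least-connections algorithm and delegating TLS to cert-manager (HAProxy Ingress has no built-in ACME). The haproxy.org annotation names should be verified against your controller version, and the issuer name is a placeholder:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    # haproxytech annotation; assumption -- check your controller's annotation reference
    haproxy.org/load-balance: "leastconn"
    # TLS handled by cert-manager, since there is no built-in ACME client
    cert-manager.io/cluster-issuer: letsencrypt-prod   # placeholder issuer name
spec:
  ingressClassName: haproxy
  tls:
  - hosts:
    - api.example.com
    secretName: api-tls            # populated by cert-manager
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-server
            port:
              number: 8080
```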

AWS ALB Ingress Controller#

The AWS Load Balancer Controller provisions actual AWS Application Load Balancers (ALBs) or Network Load Balancers (NLBs) in response to Ingress or Service resources. Instead of running a reverse proxy inside the cluster, it creates cloud infrastructure.

Choose AWS ALB Controller when:

  • You are on AWS and want native integration with AWS services (WAF, Shield, Cognito, ACM).
  • You need AWS WAF (Web Application Firewall) for DDoS protection, bot filtering, or IP-based rules.
  • You want TLS certificates managed through AWS Certificate Manager (free, auto-renewing).
  • You prefer cloud-managed load balancing over self-managed in-cluster proxies.
  • Your compliance requirements mandate using cloud-native security features.

Limitations:

  • Cost: Each ALB costs approximately $16-22/month base, plus data processing charges. With many Ingress resources, costs add up. Use IngressGroup annotations to share a single ALB across multiple Ingress resources.
  • AWS-only. No portability.
  • Configuration changes take longer to propagate (ALB rule updates are API calls, not in-memory config reloads).
  • Less flexibility for custom routing logic compared to in-cluster proxies.

An internet-facing ALB Ingress with ACM certificates and WAF, sharing one ALB via an IngressGroup:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:123456789:certificate/abc-123
    alb.ingress.kubernetes.io/wafv2-acl-arn: arn:aws:wafv2:us-east-1:123456789:regional/webacl/my-acl/abc-123
    alb.ingress.kubernetes.io/group.name: shared-alb
spec:
  ingressClassName: alb
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-server
            port:
              number: 8080

GKE Ingress and Azure AGIC#

GKE Ingress: Uses Google Cloud Load Balancers. Tightly integrated with GCP – supports Cloud Armor (WAF), managed certificates, and Cloud CDN. Choose when running on GKE and you want fully managed load balancing.

Azure Application Gateway Ingress Controller (AGIC): Uses Azure Application Gateway as the ingress. Integrates with Azure WAF, managed certificates, and Azure networking. Choose when running on AKS and you want Azure-native load balancing.

Both follow the same pattern: instead of running a reverse proxy in the cluster, they provision and configure cloud load balancer infrastructure.
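As an illustration of the cloud-managed pattern, GKE’s ManagedCertificate CRD lets the Google load balancer provision and renew a Google-managed certificate; the resource and Service names and the domain are placeholders:

```yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: api-cert
spec:
  domains:
  - api.example.com                # domain must resolve to the load balancer IP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    # attaches the Google-managed certificate to the provisioned load balancer
    networking.gke.io/managed-certificates: api-cert
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-server
            port:
              number: 8080
```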

Gateway API#

Gateway API is the successor to the Ingress resource. It is a Kubernetes-native API (SIG-Network project) that provides richer routing capabilities, better role separation (infrastructure teams manage Gateways, application teams manage HTTPRoutes), and a standard specification that multiple controllers implement.

Consider Gateway API for new clusters when:

  • You are starting a new cluster and want to invest in the future-standard API.
  • You need features that Ingress does not support (header-based routing, traffic splitting, request mirroring) without controller-specific annotations.
  • You want a cleaner separation of concerns between platform and application teams.
  • Your chosen controller supports it (nginx-gateway-fabric, Traefik, Cilium, Istio, and cloud controllers all support Gateway API).

Gateway API is GA for core HTTP routing. TCP, UDP, and gRPC routes are at varying maturity levels. It is not a separate ingress controller – it is an API that existing controllers implement. For example, an HTTPRoute that routes /v1 and /v2 to different backend versions:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
spec:
  parentRefs:
  - name: main-gateway
  hostnames:
  - api.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /v1
    backendRefs:
    - name: api-server-v1
      port: 8080
  - matches:
    - path:
        type: PathPrefix
        value: /v2
    backendRefs:
    - name: api-server-v2
      port: 8080
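The HTTPRoute attaches to a Gateway named main-gateway, typically owned by the platform team. A sketch of that Gateway; the gatewayClassName and certificate Secret name depend on your controller and are placeholders:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: main-gateway
spec:
  gatewayClassName: nginx          # class installed by your controller (assumption)
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    hostname: "*.example.com"      # routes may only claim hostnames within this wildcard
    tls:
      mode: Terminate
      certificateRefs:
      - name: wildcard-tls         # placeholder TLS Secret
```

This split is the role separation the Gateway API is designed for: the platform team controls listeners, ports, and TLS; application teams attach HTTPRoutes without touching the Gateway.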

Running Multiple Controllers#

You can run more than one ingress controller in the same cluster. This is a common and recommended pattern for separating concerns:

  • External traffic: AWS ALB or nginx ingress on a public-facing load balancer, handling internet traffic with WAF and rate limiting.
  • Internal traffic: Traefik or a second nginx instance on an internal load balancer, handling service-to-service traffic within a VPC.

Use the ingressClassName field (or kubernetes.io/ingress.class annotation for older versions) to direct each Ingress resource to the correct controller.

# External ingress -- handled by ALB controller
spec:
  ingressClassName: alb

# Internal ingress -- handled by nginx
spec:
  ingressClassName: nginx-internal
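Each class name must be backed by an IngressClass object whose controller field matches what that controller instance watches. A sketch for the internal nginx instance; the controller string is a placeholder and must match the flags your second ingress-nginx deployment runs with:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-internal
spec:
  # must equal the --controller-class value of the second ingress-nginx deployment
  controller: k8s.io/internal-ingress-nginx
```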

Choose X When – Summary#

| Scenario | Recommended Controller |
|---|---|
| General purpose, broadest community support | Nginx Ingress (ingress-nginx) |
| Auto-TLS with Let’s Encrypt, no cert-manager | Traefik |
| CRD-based config, composable middleware | Traefik |
| Maximum raw performance | HAProxy Ingress |
| AWS with WAF/Shield/Cognito requirements | AWS ALB Controller |
| GKE with Cloud Armor/CDN | GKE Ingress |
| AKS with Azure WAF | Azure AGIC |
| New cluster, future-proofing | Any controller with Gateway API support |
| Separate internal/external traffic | Two controllers (e.g., ALB external + nginx internal) |
| Cost-sensitive, many routes | Nginx or Traefik (single self-managed instance vs per-ALB pricing) |
| TCP/UDP load balancing needed | HAProxy or Traefik |

For most teams starting out, nginx ingress (ingress-nginx) is the safe default. It has the largest community, the most documentation, and handles the majority of use cases well. If you are on AWS and need WAF integration, pair it with the ALB controller for external traffic. If you value operational simplicity for TLS, Traefik’s built-in ACME support removes the need for a separate cert-manager deployment. For new clusters where you have the latitude to adopt newer APIs, evaluate Gateway API support across your shortlisted controllers – it will become the standard.