Init Containers and Sidecar Patterns

A pod can contain more than one container. Init containers run sequentially before the main application starts. Sidecars run alongside the main container for the lifetime of the pod. Together, they enable patterns where setup logic and cross-cutting concerns are separated from application code.

Init Containers

Init containers are defined in spec.initContainers[] and run in order. Each must exit 0 before the next one starts. If an init container fails, the kubelet restarts that init container (subject to the pod's restartPolicy; with restartPolicy: Never, the whole pod is marked failed). The main application containers do not start until every init container has completed successfully.

spec:
  initContainers:
  - name: wait-for-db
    image: busybox:1.36
    command: ['sh', '-c', 'until nc -z postgres-svc 5432; do echo "waiting for postgres"; sleep 2; done']
  - name: run-migrations
    image: myapp:2.3.0
    command: ['./migrate', '--up']
    env:
    - name: DATABASE_URL
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: url
  containers:
  - name: myapp
    image: myapp:2.3.0

In this sequence, wait-for-db loops until PostgreSQL accepts TCP connections, then run-migrations applies database schema changes. Only after both succeed does the main myapp container start.

Common Init Container Use Cases

Wait for a dependency: The most common pattern. Prevent your application from starting before its database, cache, or external service is reachable. This avoids startup crash loops when dependencies take longer to come up.

Pre-populate shared volumes: Clone a git repo, download configuration files, or fetch secrets that the main container needs.

initContainers:
- name: clone-config
  image: alpine/git:2.40
  command: ['git', 'clone', 'https://github.com/company/app-config.git', '/config']
  volumeMounts:
  - name: config-volume
    mountPath: /config
containers:
- name: myapp
  image: myapp:2.3.0
  volumeMounts:
  - name: config-volume
    mountPath: /app/config
    readOnly: true
volumes:
- name: config-volume
  emptyDir: {}

The emptyDir volume is created when the pod is assigned to a node and shared between the init container and the main container.

Set permissions on volumes: PersistentVolumes may be created with root ownership. If your main container runs as a non-root user, an init container can fix permissions:

initContainers:
- name: fix-permissions
  image: busybox:1.36
  command: ['sh', '-c', 'chown -R 1000:1000 /data']
  securityContext:
    runAsUser: 0
  volumeMounts:
  - name: data
    mountPath: /data
containers:
- name: myapp
  image: myapp:2.3.0
  securityContext:
    runAsUser: 1000
  volumeMounts:
  - name: data
    mountPath: /data

Register with a service: Call an API to register this pod instance before it starts accepting traffic – useful for service discovery systems outside Kubernetes.
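
A minimal sketch of this pattern, assuming a hypothetical registry at http://service-registry.internal/register that accepts a POST with the instance name and IP (both pulled from the downward API):

initContainers:
- name: register-instance
  image: curlimages/curl:8.5.0
  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  command: ['sh', '-c', 'curl -fsS -X POST http://service-registry.internal/register -d "name=$POD_NAME&ip=$POD_IP"']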

Resource Handling

Init containers have their own resource requests and limits, separate from the main containers. The effective resource request for the pod is:

effectiveRequest = max(max(initContainer[i].request), sum(appContainer[i].request))

This matters for scheduling. If you have an init container that requests 2Gi memory but your main containers request 512Mi total, the pod’s effective request is 2Gi. The scheduler must find a node with 2Gi available, even though the running pod only uses 512Mi. Keep init container resource requests as low as practical.
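
For example, the pod below schedules as if it requested 2Gi even though its running container requests only 512Mi (container names are illustrative). Native sidecars, covered later, are the exception: because they keep running, their requests are added to the app containers' sum rather than taken as part of the init max:

spec:
  initContainers:
  - name: heavy-init               # short-lived, but dominates the pod's effective request
    image: busybox:1.36
    command: ['sh', '-c', 'echo preparing data']
    resources:
      requests:
        memory: 2Gi
  containers:
  - name: myapp
    image: myapp:2.3.0
    resources:
      requests:
        memory: 512Mi              # effective request = max(2Gi, 512Mi) = 2Gi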

Sidecar Pattern

A sidecar is a helper container that runs alongside the main container in the same pod. The two share the pod's network namespace (so they can talk over localhost), can share files through volumes, and share the pod's lifecycle.

Common Sidecar Use Cases

Log shipping: A Fluent Bit sidecar reads application logs from a shared volume and forwards them to an aggregation system:

spec:
  containers:
  - name: myapp
    image: myapp:2.3.0
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-shipper
    image: fluent/fluent-bit:3.0
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
      readOnly: true
    - name: fluent-config
      mountPath: /fluent-bit/etc/
  volumes:
  - name: app-logs
    emptyDir: {}
  - name: fluent-config
    configMap:
      name: fluent-bit-sidecar

Authentication proxy: An oauth2-proxy sidecar handles authentication before requests reach the main container:

containers:
- name: myapp
  image: myapp:2.3.0
  ports:
  - containerPort: 8080    # app should bind to 127.0.0.1 so only the proxy can reach it
- name: auth-proxy
  image: quay.io/oauth2-proxy/oauth2-proxy:v7.5.1
  ports:
  - containerPort: 4180    # external traffic enters here
  args:
  - --upstream=http://localhost:8080
  - --provider=oidc
  - --cookie-secret=$(COOKIE_SECRET)
  env:
  - name: COOKIE_SECRET
    valueFrom:
      secretKeyRef:
        name: oauth-proxy-secret   # illustrative secret name
        key: cookie-secret

External traffic hits the auth proxy on port 4180. Authenticated requests are forwarded to the main app on localhost:8080. The main app never needs to handle authentication logic.
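
To make the split stick, expose only the proxy port through the Service (names here are illustrative), so cluster traffic cannot bypass authentication by hitting 8080 directly:

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - name: http
    port: 80
    targetPort: 4180    # traffic enters through the auth proxy

Even then, the pod IP still exposes 8080 unless the app binds to 127.0.0.1 or a NetworkPolicy blocks it.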

Config file watcher: A sidecar that watches a ConfigMap-mounted volume and signals the main process to reload when configuration changes:

containers:
- name: nginx
  image: nginx:1.25
  volumeMounts:
  - name: config
    mountPath: /etc/nginx/conf.d
- name: config-reloader
  image: jimmidyson/configmap-reload:0.12
  args:
  - --volume-dir=/etc/nginx/conf.d
  - --webhook-url=http://localhost:80/-/reload   # assumes the app exposes an HTTP reload endpoint
  volumeMounts:
  - name: config
    mountPath: /etc/nginx/conf.d
    readOnly: true
volumes:
- name: config
  configMap:
    name: nginx-config

Service mesh proxy: Istio and Linkerd inject an Envoy or linkerd-proxy sidecar that intercepts all network traffic for mTLS, traffic routing, and observability. The injection happens automatically via a mutating webhook – you typically do not write the sidecar spec yourself.
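
With Istio, for example, injection is enabled by labeling the namespace (namespace name is illustrative); the webhook then adds the proxy to every pod created there:

apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    istio-injection: enabled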

Native Sidecar Containers (v1.28+)

Before Kubernetes 1.28, sidecars were regular containers with no guaranteed startup order. The main container might start before the sidecar was ready, causing connection failures.

Native sidecar containers solve this by using restartPolicy: Always on an init container:

spec:
  initContainers:
  - name: log-shipper
    image: fluent/fluent-bit:3.0
    restartPolicy: Always          # makes this a native sidecar
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
      readOnly: true
  containers:
  - name: myapp
    image: myapp:2.3.0
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  volumes:
  - name: app-logs
    emptyDir: {}

Native sidecars:

  • Start before main containers: They are init containers, so they start in order and must be running before main containers launch. This solves the “sidecar not ready” race condition.
  • Do not block startup: Unlike regular init containers, they do not need to exit. The startup sequence continues once they are running.
  • Shut down after main containers: When the pod terminates, native sidecars are stopped last, after all main containers have exited. This ensures the log shipper captures final log entries.
  • Restart automatically: The restartPolicy: Always means Kubernetes restarts the sidecar if it crashes, just like a regular container.

This is particularly important for service mesh proxies. Before native sidecars, Istio's Envoy proxy could start after the application, causing failed outbound connections during startup. With native sidecars, the proxy is guaranteed to be running first.

Lifecycle Management

Sidecar containers need proper shutdown handling. When a pod terminates, all regular containers receive SIGTERM simultaneously (native sidecars, as described above, are stopped only after the main containers exit). A log-shipping sidecar that runs as a regular container should finish flushing its buffer before exiting:

containers:
- name: log-shipper
  image: fluent/fluent-bit:3.0   # exec preStop needs a shell in the image; fluent-bit's -debug tags include one
  lifecycle:
    preStop:
      exec:
        command: ["/bin/sh", "-c", "sleep 10"]

The preStop sleep delays the sidecar's shutdown: the kubelet sends SIGTERM only after the hook returns, which gives the main container time to write final log entries before the sidecar flushes remaining data and exits. Set terminationGracePeriodSeconds on the pod high enough to cover this sequence.
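
For the 10-second sleep above, a workable setting might look like:

spec:
  terminationGracePeriodSeconds: 30   # 10s preStop sleep + flush time, with headroom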

Debugging Multi-Container Pods

# List containers in a pod
kubectl get pod myapp-7x4k2 -o jsonpath='{.spec.containers[*].name}'
kubectl get pod myapp-7x4k2 -o jsonpath='{.spec.initContainers[*].name}'

# Get logs from a specific container
kubectl logs myapp-7x4k2 -c log-shipper

# Get logs from an init container
kubectl logs myapp-7x4k2 -c wait-for-db

# Exec into a specific container
kubectl exec myapp-7x4k2 -c myapp -- /bin/sh

# Check container statuses (shows waiting/running/terminated with reasons)
kubectl get pod myapp-7x4k2 -o jsonpath='{.status.initContainerStatuses}' | jq .
kubectl get pod myapp-7x4k2 -o jsonpath='{.status.containerStatuses}' | jq .

When a pod is stuck in Init:0/2, that means the first init container has not completed. Check its logs. When a pod is stuck in Init:1/2, the first init container passed but the second has not. Work through them in order.

Common Gotchas

Init container failure blocks everything: If your init container has a bug, the pod will never start. There is no timeout by default – it will restart the init container indefinitely based on the pod’s restart policy. Use activeDeadlineSeconds on a Job or monitor for pods stuck in Init: state.
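
A sketch of the Job-level deadline, reusing the wait-for-db pattern from earlier; once the deadline passes, the Job is marked failed instead of retrying forever:

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  activeDeadlineSeconds: 300   # give up if the pod stays stuck in Init: for 5 minutes
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      initContainers:
      - name: wait-for-db
        image: busybox:1.36
        command: ['sh', '-c', 'until nc -z postgres-svc 5432; do sleep 2; done']
      containers:
      - name: migrate
        image: myapp:2.3.0
        command: ['./migrate', '--up']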

istio-init failure: Istio injects an istio-init init container that sets up iptables rules for traffic interception. If this init container fails (usually due to missing NET_ADMIN capability or incompatible kernel settings), the entire pod fails. Check kubectl logs <pod> -c istio-init for the specific error.

Shared volume permissions: An init container writes files as root, but the main container runs as UID 1000 and cannot read them. Always set correct ownership in the init container, or use fsGroup in the pod security context to ensure the volume is group-readable.
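
The fsGroup route skips the chown init container entirely: the kubelet applies group ownership when it mounts the volume. A minimal sketch (works for volume types that support ownership management, such as most PersistentVolumes and emptyDir, but not hostPath):

spec:
  securityContext:
    fsGroup: 1000            # volume contents become group-owned by GID 1000
  containers:
  - name: myapp
    image: myapp:2.3.0
    securityContext:
      runAsUser: 1000
    volumeMounts:
    - name: data
      mountPath: /data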

Sidecar outliving the main container: Before native sidecars, when the main container exits (as in a Job), sidecar containers keep running and the pod never completes. This is a known issue with Istio sidecars on Jobs. The workaround is to have the main container signal the sidecar to shut down via a shared file or localhost endpoint, or use native sidecar containers in v1.28+.
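
For Istio specifically, recent proxy versions expose a shutdown endpoint on the pilot-agent port, so a Job's main container can signal the sidecar once its work is done. A sketch, assuming the image provides curl and a hypothetical ./run-batch entrypoint (verify the endpoint against your Istio version):

containers:
- name: batch-task
  image: myapp:2.3.0
  command: ['sh', '-c', './run-batch; code=$?; curl -fsS -X POST http://localhost:15020/quitquitquit || true; exit $code']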

Practical Example: App with Migration Init Container and Log Sidecar

apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      terminationGracePeriodSeconds: 45
      initContainers:
      - name: wait-for-db
        image: busybox:1.36
        command: ['sh', '-c', 'until nc -z postgres-svc 5432; do sleep 2; done']
      - name: run-migrations
        image: order-service:3.1.0
        command: ['./migrate', '--up']
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: url
      - name: log-shipper
        image: fluent/fluent-bit:3.0
        restartPolicy: Always
        resources:
          requests:
            cpu: 25m
            memory: 32Mi
          limits:
            memory: 128Mi
        volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
          readOnly: true
        - name: fluent-config
          mountPath: /fluent-bit/etc/
      containers:
      - name: order-service
        image: order-service:3.1.0
        ports:
        - containerPort: 8080
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: url
        - name: LOG_DIR
          value: /var/log/app
        volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          periodSeconds: 5
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "sleep 5"]
      volumes:
      - name: app-logs
        emptyDir: {}
      - name: fluent-config
        configMap:
          name: fluent-bit-sidecar

The startup sequence: wait-for-db confirms PostgreSQL is reachable, run-migrations applies schema changes, log-shipper starts as a native sidecar (stays running), then order-service starts. The log shipper is guaranteed to be running before the application writes its first log line. On shutdown, the application container stops first, the log shipper flushes remaining entries, then the pod terminates.