## Choosing the Right Workload Type
Almost every application fits one of four deployment patterns, and choosing the wrong one creates problems that are hard to fix later: a database deployed as a plain Deployment loses its data when the pod is rescheduled, and a batch job deployed as a Deployment wastes resources running 24/7.
| Pattern | Kubernetes Resource | Use When |
|---|---|---|
| Stateless web app | Deployment + Service + Ingress | HTTP APIs, frontends, microservices |
| Stateful app | StatefulSet + Headless Service + PVC | Databases, caches with persistence, message brokers |
| Background worker | Deployment (no Service) | Queue consumers, event processors, stream readers |
| Batch processing | CronJob | Scheduled reports, data cleanup, periodic syncs |
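To see which pattern an existing workload already uses, one quick check is to look at what owns one of its pods; the pod name below is a placeholder.

```bash
# Prints the controller that owns the pod: Deployment pods are owned by a ReplicaSet,
# while StatefulSet and Job pods are owned directly by their controller.
kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[0].kind}'
```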
### Pattern 1: Stateless Web App
A web API that can be scaled horizontally with no persistent state. Any pod can handle any request.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api
  labels:
    app: web-api
    tier: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-api
  template:
    metadata:
      labels:
        app: web-api
        tier: backend
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 1000
      containers:
        - name: web-api
          image: web-api:1.0.0
          ports:
            - containerPort: 8080
              name: http
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              memory: 256Mi
          securityContext:
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
          readinessProbe:
            httpGet:
              path: /healthz
              port: http
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /healthz
              port: http
            initialDelaySeconds: 15
            periodSeconds: 20
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: web-api-secrets
                  key: database-url
---
apiVersion: v1
kind: Service
metadata:
  name: web-api
spec:
  selector:
    app: web-api
  ports:
    - port: 80
      targetPort: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-api
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: web-api.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-api
                port:
                  number: 80
```

Key production practices in this manifest: `runAsNonRoot` prevents container root exploits, `readOnlyRootFilesystem` blocks filesystem-based attacks, resource limits prevent a single pod from starving the node, and separate readiness/liveness probes ensure traffic only hits healthy pods while restarting unhealthy ones.
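Assuming the three manifests above are saved together as `web-api.yaml` (the filename is arbitrary), a minimal apply-and-verify loop looks like this:

```bash
# Apply the Deployment, Service, and Ingress, then wait for the rollout to finish.
kubectl apply -f web-api.yaml
kubectl rollout status deployment/web-api
# Both replicas should report Ready before the Service routes traffic to them.
kubectl get pods -l app=web-api
```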
On minikube, add `web-api.local` to `/etc/hosts` pointing to `$(minikube ip)` to test the Ingress.
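One way to do that on Linux or macOS, assuming the minikube ingress addon is enabled (some drivers require `minikube tunnel` instead of the node IP):

```bash
# Enable the nginx ingress controller (one-time setup).
minikube addons enable ingress
# Map web-api.local to the minikube node IP, then test through the Ingress.
echo "$(minikube ip) web-api.local" | sudo tee -a /etc/hosts
curl -i http://web-api.local/
```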
### Pattern 2: Stateful App with PVC
A database or other service that needs stable network identity and persistent storage.
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 999
        fsGroup: 999
      containers:
        - name: postgres
          image: postgres:16-alpine
          ports:
            - containerPort: 5432
              name: tcp-postgres
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              memory: 512Mi
          readinessProbe:
            exec:
              command: ["pg_isready", "-U", "postgres"]
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            exec:
              command: ["pg_isready", "-U", "postgres"]
            initialDelaySeconds: 30
            periodSeconds: 30
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: password
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: tcp-postgres
```

A StatefulSet gives pods stable names (postgres-0, postgres-1) and stable DNS (postgres-0.postgres.default.svc.cluster.local). The headless Service (`clusterIP: None`) enables this DNS. The PVC persists data across pod restarts; on minikube, the default storage provisioner handles this automatically.
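A quick way to confirm the stable identity and the bound volume; the throwaway `psql-client` pod and the same-namespace DNS short name are assumptions for illustration:

```bash
# The PVC created from the volumeClaimTemplate follows the <template>-<pod> naming scheme.
kubectl get pvc data-postgres-0
# Connect through the stable DNS name; psql prompts for the password from postgres-secret.
kubectl run psql-client --rm -it --restart=Never --image=postgres:16-alpine -- \
  psql -h postgres-0.postgres -U postgres
```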
### Pattern 3: Background Worker
A consumer that pulls work from a queue. No Service or Ingress because it does not receive inbound traffic.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: queue-worker
  labels:
    app: queue-worker
    tier: worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: queue-worker
  template:
    metadata:
      labels:
        app: queue-worker
        tier: worker
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
      containers:
        - name: worker
          image: queue-worker:1.0.0
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              memory: 256Mi
          securityContext:
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
          livenessProbe:
            exec:
              command: ["/bin/sh", "-c", "pgrep -f worker"]
            initialDelaySeconds: 10
            periodSeconds: 30
          env:
            - name: QUEUE_URL
              valueFrom:
                configMapKeyRef:
                  name: queue-worker-config
                  key: queue-url
```

No readiness probe is needed since this pod does not back a Service. The liveness probe uses a process check because there is no HTTP endpoint to hit. Scale by increasing replicas when queue depth grows.
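In the simplest setup that scaling is a manual kubectl call; driving it automatically from queue depth would need an autoscaler, which is outside this manifest:

```bash
# Scale out when the queue backs up, back in when it drains.
kubectl scale deployment/queue-worker --replicas=4
kubectl get pods -l app=queue-worker
```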
### Pattern 4: CronJob for Batch Processing
A scheduled task that runs to completion and exits.
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: data-cleanup
  labels:
    app: data-cleanup
    tier: batch
spec:
  schedule: "0 2 * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      backoffLimit: 2
      activeDeadlineSeconds: 3600
      template:
        spec:
          securityContext:
            runAsNonRoot: true
            runAsUser: 1000
          restartPolicy: OnFailure
          containers:
            - name: cleanup
              image: data-cleanup:1.0.0
              resources:
                requests:
                  cpu: 200m
                  memory: 256Mi
                limits:
                  memory: 512Mi
              securityContext:
                readOnlyRootFilesystem: true
                allowPrivilegeEscalation: false
```

`concurrencyPolicy: Forbid` prevents overlapping runs if a previous job has not finished. `activeDeadlineSeconds` kills jobs that hang. `backoffLimit: 2` retries failures twice before giving up. These three settings prevent runaway batch jobs from consuming cluster resources.
Test a CronJob immediately without waiting for the schedule:
```bash
kubectl create job --from=cronjob/data-cleanup data-cleanup-test
kubectl logs job/data-cleanup-test
```
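After a few scheduled runs, the history limits keep the three most recent successful and failed Jobs around for inspection:

```bash
# LAST SCHEDULE and ACTIVE show whether the CronJob is firing and whether a run is in flight.
kubectl get cronjob data-cleanup
# Jobs retained by successfulJobsHistoryLimit / failedJobsHistoryLimit.
kubectl get jobs
```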