# Jenkins Kubernetes Integration
The kubernetes plugin gives Jenkins elastic build capacity. Each build spins up a pod, runs its work, and the pod is deleted. No idle agents, no capacity planning, no snowflake build servers.
## The Kubernetes Plugin
The plugin creates agent pods on demand. When a pipeline requests an agent, a pod is created from a template, its JNLP container connects back to Jenkins, the build runs, and the pod is deleted.
Configure the cloud in JCasC:
```yaml
jenkins:
  clouds:
    - kubernetes:
        name: "k8s"
        serverUrl: ""  # empty string = in-cluster config
        namespace: "jenkins"
        jenkinsUrl: "http://jenkins.jenkins.svc.cluster.local:8080"
        jenkinsTunnel: "jenkins-agent.jenkins.svc.cluster.local:50000"
        podLabels:
          - key: "jenkins"
            value: "agent"
        templates:
          - name: "default"
            label: "k8s-agent"
            serviceAccount: "jenkins-agent"
            containers:
              - name: "jnlp"
                image: "jenkins/inbound-agent:latest-jdk17"
                resourceRequestCpu: "500m"
                resourceRequestMemory: "512Mi"
                resourceLimitCpu: "1"
                resourceLimitMemory: "1Gi"
```

The `jenkinsUrl` and `jenkinsTunnel` settings tell agent pods how to reach the Jenkins controller. If Jenkins runs in the same cluster, use the in-cluster service DNS names. `jenkinsTunnel` points to the agent listener service on port 50000.
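With the cloud defined, any pipeline can request an agent by template label. A minimal sketch reusing the `k8s-agent` label configured above (the stage name and command are illustrative):

```groovy
pipeline {
    // Schedules a pod from the "default" template; the label must match
    // the template's label in the JCasC configuration.
    agent { label 'k8s-agent' }
    stages {
        stage('Smoke') {
            steps {
                sh 'java -version'
            }
        }
    }
}
```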
## Pod Templates in Jenkinsfile
For most pipelines, define the pod template inline in the Jenkinsfile. This keeps the build environment versioned with the code:
```groovy
pipeline {
    agent {
        kubernetes {
            yaml '''
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: jenkins-agent
spec:
  serviceAccountName: jenkins-agent
  containers:
    - name: golang
      image: golang:1.22
      command: ["sleep"]
      args: ["infinity"]
      resources:
        requests:
          cpu: "1"
          memory: "2Gi"
    - name: kubectl
      image: bitnami/kubectl:1.29
      command: ["sleep"]
      args: ["infinity"]
'''
            defaultContainer 'golang'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'go build ./...'
            }
        }
        stage('Test') {
            steps {
                sh 'go test ./... -v'
            }
        }
        stage('Deploy') {
            steps {
                container('kubectl') {
                    sh 'kubectl apply -f k8s/deployment.yaml'
                }
            }
        }
    }
}
```

Key details:
- The `jnlp` container is injected automatically by the kubernetes plugin. You do not need to declare it unless you want to customize its image or resources (see the sketch after this list).
- Use `command: ["sleep"]` and `args: ["infinity"]` to keep sidecar containers alive. Without this, the container exits immediately and Jenkins cannot exec into it.
- `defaultContainer` sets which container runs `sh` steps by default. Switch containers with `container('name') { ... }`.
- Each stage can target a different container for different toolchains.
## Persistent Jenkins Home
Jenkins state (job configs, build history, credentials) lives in `$JENKINS_HOME`. On Kubernetes, back this with a PersistentVolumeClaim:
```yaml
# In Helm values
persistence:
  enabled: true
  storageClass: "standard"  # or your cluster's storage class
  size: 50Gi
  accessMode: ReadWriteOnce
```

Without persistence, restarting the Jenkins pod loses everything. Size the PVC based on build history retention, and use `buildDiscarder(logRotator(...))` in pipelines to control growth.
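The `buildDiscarder` option caps how much history each job keeps. A minimal sketch in declarative syntax (the retention count is illustrative):

```groovy
pipeline {
    agent { label 'k8s-agent' }
    options {
        // Keep only the last 20 builds; older records and logs are discarded.
        buildDiscarder(logRotator(numToKeepStr: '20'))
    }
    stages {
        stage('Build') {
            steps {
                sh 'go build ./...'
            }
        }
    }
}
```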
## RBAC for Agent Pods
The Jenkins controller needs permission to create and delete pods in the agent namespace. Create a ServiceAccount and bind it:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: jenkins
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins-agent-manager
  namespace: jenkins
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/exec", "pods/log"]
    verbs: ["get", "list", "watch", "create", "delete", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-agent-manager
  namespace: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins-agent-manager
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: jenkins
```

The Helm chart creates these automatically when `rbac.create: true` (the default). Cross-namespace agent launching requires a ClusterRole or a RoleBinding in the target namespace.
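A quick sanity check is to impersonate the ServiceAccount with `kubectl auth can-i` (names match the manifests above; impersonation itself requires sufficient privileges, such as cluster-admin):

```bash
kubectl auth can-i create pods \
  --as=system:serviceaccount:jenkins:jenkins \
  -n jenkins
# prints "yes" when the RoleBinding is in effect
```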
## Kaniko: Docker Builds Without the Docker Socket
Building Docker images inside Kubernetes pods is a common challenge. Mounting the Docker socket (/var/run/docker.sock) is a security risk – it gives the build container root access to the host. Kaniko builds container images from a Dockerfile without requiring Docker or root privileges.
```groovy
pipeline {
    agent {
        kubernetes {
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:debug
      command: ["sleep"]
      args: ["infinity"]
      volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker
  volumes:
    - name: docker-config
      secret:
        secretName: docker-registry-creds
        items:
          - key: .dockerconfigjson
            path: config.json
'''
        }
    }
    stages {
        stage('Build and Push') {
            steps {
                container('kaniko') {
                    sh '''
                        /kaniko/executor \
                          --context=dir://$(pwd) \
                          --destination=registry.example.com/myapp:${BUILD_NUMBER} \
                          --cache=true \
                          --cache-repo=registry.example.com/myapp/cache
                    '''
                }
            }
        }
    }
}
```

The `debug` tag of the Kaniko image includes a shell. The standard image does not, which makes it unusable as a Jenkins agent container. The Docker config secret provides registry authentication. Create it with:
```bash
kubectl create secret docker-registry docker-registry-creds \
  --docker-server=registry.example.com \
  --docker-username=user \
  --docker-password=pass \
  -n jenkins
```

## Scaling Considerations
- Pod resource requests determine scheduling. If your cluster lacks nodes with enough capacity, agent pods stay `Pending`. Use a cluster autoscaler or set reasonable resource requests.
- Pod retention defaults to deleting pods after the build. Set `podRetention: OnFailure` in the pod template to keep failed pods for debugging (see the sketch after this list).
- Workspace sharing between stages works because all containers in the pod share the same workspace volume. Files created in one container are visible in another.
- Init containers can pre-populate caches or download tools before the build starts, reducing build time for subsequent stages.
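A scripted-pipeline sketch of that retention setting, using the plugin's `podTemplate` step (the image and test command are illustrative; `onFailure()` is one of the plugin's retention policies, alongside `never()`, `always()`, and `default()`):

```groovy
// Keep the agent pod for post-mortem inspection only when the build fails.
podTemplate(
    podRetention: onFailure(),
    yaml: '''
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: golang
      image: golang:1.22
      command: ["sleep"]
      args: ["infinity"]
'''
) {
    node(POD_LABEL) {
        container('golang') {
            sh 'go test ./...'
        }
    }
}
```

Remember to delete retained pods once you are done debugging, since they keep consuming cluster resources.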