Jenkins Pipeline Patterns#

Jenkins pipelines define your build, test, and deploy process as code in a Jenkinsfile stored alongside your application source. This eliminates configuration drift and makes CI/CD reproducible across branches.

Declarative vs Scripted#

Declarative is the standard choice. It has a fixed structure, better error reporting, and supports the Blue Ocean visual editor. Scripted is raw Groovy – more flexible, but harder to read and maintain. Use declarative unless you need control flow that declarative cannot express.

Declarative pipelines must be wrapped in a pipeline {} block. Scripted pipelines use node {} blocks directly.
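
For comparison, a minimal scripted pipeline covering the same build-and-test flow looks like this (a sketch; the stage names and make targets are illustrative):

node('linux') {
    stage('Build') {
        checkout scm                              // scripted pipelines check out source explicitly
        sh 'make build'
    }
    stage('Test') {
        try {
            sh 'make test'
        } finally {
            junit 'build/test-results/**/*.xml'   // publish results even when tests fail
        }
    }
}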

Declarative Pipeline Structure#

pipeline {
    agent { label 'linux' }

    options {
        timeout(time: 30, unit: 'MINUTES')
        disableConcurrentBuilds()
        buildDiscarder(logRotator(numToKeepStr: '20'))
    }

    environment {
        REGISTRY = 'registry.example.com'
        IMAGE = "${REGISTRY}/myapp"
        DOCKER_CREDS = credentials('docker-registry')
    }

    parameters {
        string(name: 'DEPLOY_ENV', defaultValue: 'staging', description: 'Target environment')
        booleanParam(name: 'SKIP_TESTS', defaultValue: false, description: 'Skip test stage')
    }

    stages {
        stage('Build') {
            steps {
                sh 'make build'
            }
        }
        stage('Test') {
            when {
                expression { return !params.SKIP_TESTS }
            }
            steps {
                sh 'make test'
            }
            post {
                always {
                    junit 'build/test-results/**/*.xml'
                }
            }
        }
        stage('Docker Build & Push') {
            steps {
                sh """
                    docker build -t ${IMAGE}:${BUILD_NUMBER} .
                    echo ${DOCKER_CREDS_PSW} | docker login ${REGISTRY} -u ${DOCKER_CREDS_USR} --password-stdin
                    docker push ${IMAGE}:${BUILD_NUMBER}
                """
            }
        }
        stage('Deploy') {
            when {
                branch 'main'
            }
            steps {
                sh "kubectl set image deployment/myapp myapp=${IMAGE}:${BUILD_NUMBER} -n ${params.DEPLOY_ENV}"
            }
        }
    }

    post {
        failure {
            slackSend channel: '#builds', message: "FAILED: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
        }
        success {
            archiveArtifacts artifacts: 'build/output/**', fingerprint: true
        }
    }
}

Key Blocks Explained#

  • agent – Where the pipeline runs. agent any picks any available executor. agent { label 'docker' } targets agents with that label. agent none at the top level means each stage must declare its own agent (see the sketch after this list).
  • options – Pipeline-level settings. timeout kills stuck builds. disableConcurrentBuilds prevents race conditions. buildDiscarder prevents disk from filling up.
  • environment – Variables available to all stages. credentials() binds Jenkins credentials to variables; a username/password credential additionally exposes <VAR>_USR and <VAR>_PSW (DOCKER_CREDS_USR and DOCKER_CREDS_PSW in the example above).
  • when – Conditionals on stages. branch 'main' runs only on that branch. expression { ... } evaluates arbitrary Groovy. changeset '**/*.java' runs only if matching files changed.
  • post – Runs after stages complete. Sections: always, success, failure, unstable, changed, cleanup.
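
A sketch tying several of these blocks together: agent none at the top level, a per-stage agent, and a when block that requires both a branch and a changeset match (the stage name and paths are illustrative):

pipeline {
    agent none                          // every stage must declare its own agent
    stages {
        stage('Build Docs') {
            agent { label 'linux' }     // per-stage agent
            when {
                allOf {                 // all listed conditions must hold
                    branch 'main'
                    changeset 'docs/**'
                }
            }
            steps {
                sh 'make docs'
            }
        }
    }
    post {
        cleanup {
            echo 'Runs last, after every other post condition'
        }
    }
}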

Parallel Stages#

Run independent work concurrently to cut build time:

stage('Test') {
    parallel {
        stage('Unit Tests') {
            agent { label 'linux' }
            steps {
                sh 'make unit-test'
            }
        }
        stage('Integration Tests') {
            agent { label 'linux' }
            steps {
                sh 'make integration-test'
            }
        }
        stage('Lint') {
            agent { label 'linux' }
            steps {
                sh 'make lint'
            }
        }
    }
}

If any parallel branch fails, the entire parallel stage fails. Add failFast true to the stage that contains the parallel block (alongside parallel, not inside it) to abort the sibling branches immediately on the first failure.
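
Placement sketch (same parallel branches as above, trimmed for brevity):

stage('Test') {
    failFast true                // abort the remaining branches once any branch fails
    parallel {
        stage('Unit Tests')        { steps { sh 'make unit-test' } }
        stage('Integration Tests') { steps { sh 'make integration-test' } }
    }
}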

Shared Libraries#

Shared libraries extract reusable pipeline logic into a separate Git repo. The standard directory structure:

jenkins-shared-lib/
  vars/
    buildAndPush.groovy    # global functions callable by name
    deployToK8s.groovy
  src/
    com/myorg/Utils.groovy  # class-based helpers
  resources/
    templates/              # non-Groovy files

A shared library function (vars/buildAndPush.groovy):

def call(Map config) {
    def image = config.image
    def tag = config.tag ?: env.BUILD_NUMBER

    sh "docker build -t ${image}:${tag} ."
    withCredentials([usernamePassword(credentialsId: config.credentialsId,
                     usernameVariable: 'USER', passwordVariable: 'PASS')]) {
        sh "echo ${PASS} | docker login ${config.registry} -u ${USER} --password-stdin"
        sh "docker push ${image}:${tag}"
    }
}
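
Class-based helpers under src/ are plain Groovy classes that receive the pipeline's step context so they can call steps such as sh. A sketch of src/com/myorg/Utils.groovy (the gitShortSha method is illustrative):

package com.myorg

class Utils implements Serializable {
    private final def steps

    Utils(steps) { this.steps = steps }

    // Return the short SHA of the current checkout
    String gitShortSha() {
        return steps.sh(script: 'git rev-parse --short HEAD', returnStdout: true).trim()
    }
}

In a Jenkinsfile, instantiate it inside a script block: def utils = new com.myorg.Utils(this), then call utils.gitShortSha().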

Register the library in JCasC:

unclassified:
  globalLibraries:
    libraries:
      - name: "my-shared-lib"
        defaultVersion: "main"
        retriever:
          modernSCM:
            scm:
              git:
                remote: "https://github.com/myorg/jenkins-shared-lib.git"
                credentialsId: "github-creds"

Use it in a Jenkinsfile:

@Library('my-shared-lib') _

pipeline {
    agent any
    stages {
        stage('Build & Push') {
            steps {
                buildAndPush(
                    image: 'registry.example.com/myapp',
                    registry: 'registry.example.com',
                    credentialsId: 'docker-registry'
                )
            }
        }
    }
}

The @Library annotation loads the library. The trailing _ is required when there is no explicit import.
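
You can also pin a branch or tag instead of relying on the default version configured in JCasC, e.g. @Library('my-shared-lib@v1.2') _ (the tag name here is illustrative).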

Multi-Branch Pipeline#

A multi-branch pipeline automatically discovers branches (and PRs) in a repository and creates a job for each branch that contains a Jenkinsfile. Configure it via JCasC:

jobs:
  - script: >
      multibranchPipelineJob('myapp') {
        branchSources {
          github {
            id('myapp-github')
            repoOwner('myorg')
            repository('myapp')
            credentialsId('github-creds')
          }
        }
        orphanedItemStrategy {
          discardOldItems { numToKeep(10) }
        }
      }

Within the Jenkinsfile, use when { branch 'main' } or when { changeRequest() } to differentiate behavior by branch type.
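
For example (a sketch; the stage names and make targets are illustrative):

stage('PR Checks') {
    when { changeRequest() }
    steps {
        sh 'make lint'
    }
}
stage('Release') {
    when { branch 'main' }
    steps {
        sh 'make release'
    }
}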

Triggering Downstream Jobs#

stage('Deploy Downstream') {
    steps {
        build job: 'deploy-service-b',
              parameters: [string(name: 'VERSION', value: "${BUILD_NUMBER}")],
              wait: false  // fire and forget
    }
}

Set wait: true (the default) to block until the downstream job completes and propagate its result.
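
To inspect the downstream result without automatically failing the current build, capture the step's return value and disable propagation (a sketch):

script {
    def downstream = build job: 'deploy-service-b',
                           parameters: [string(name: 'VERSION', value: "${BUILD_NUMBER}")],
                           wait: true,
                           propagate: false
    echo "deploy-service-b finished with result: ${downstream.result}"
}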

Input Steps for Manual Approval#

stage('Promote to Production') {
    steps {
        input message: 'Deploy to production?', ok: 'Deploy',
              submitter: 'admin,release-managers'
        sh 'make deploy-prod'
    }
}

The pipeline pauses until an authorized submitter clicks Deploy (or aborts the build). Because an input inside steps holds the agent's executor while it waits, set a timeout in options or wrap the input in timeout() to prevent indefinite waits.
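
For example, wrapping the input in a timeout (the one-hour limit is illustrative):

stage('Promote to Production') {
    steps {
        timeout(time: 1, unit: 'HOURS') {
            input message: 'Deploy to production?', ok: 'Deploy',
                  submitter: 'admin,release-managers'
        }
        sh 'make deploy-prod'
    }
}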

Environment Variable Reference#

Built-in variables available in every pipeline: BUILD_NUMBER, BUILD_URL, JOB_NAME, WORKSPACE, BRANCH_NAME (multi-branch only), CHANGE_ID (PR number, multi-branch only), GIT_COMMIT, GIT_BRANCH. Access them as env.BUILD_NUMBER in Groovy, or as ordinary shell environment variables ($BUILD_NUMBER) inside sh steps.
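
For example (a sketch showing both access styles):

steps {
    // Groovy access through the env object
    echo "Building ${env.JOB_NAME} #${env.BUILD_NUMBER} at commit ${env.GIT_COMMIT}"
    // Shell access; single quotes so the shell, not Groovy, expands the variables
    sh 'echo "Workspace is $WORKSPACE on branch $GIT_BRANCH"'
}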