CI/CD Anti-Patterns and Migration Strategies

CI/CD pipelines accumulate technical debt faster than application code. Nobody refactors a Jenkinsfile. Nobody reviews pipeline YAML with the same rigor as production code. Over time, pipelines become slow, fragile, inconsistent, and actively hostile to developer productivity. Recognizing the anti-patterns is the first step. Migrating to better tooling is often the second.

Anti-Pattern: Snowflake Pipelines

Every repository has a unique pipeline that someone wrote three years ago and nobody fully understands. Repository A uses Makefile targets, B uses bash scripts, C calls Python, and D has inline shell commands across 40 pipeline steps. There is no shared structure, no reusable components, and no way to make organization-wide changes.

Fix: Create a shared pipeline library. In GitHub Actions, this means reusable workflows and composite actions in a central repository:

# org/ci-templates/.github/workflows/go-service.yml
name: Go Service CI
on:
  workflow_call:
    inputs:
      go-version:
        type: string
        default: "1.23"
      deploy-env:
        type: string
        required: false

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: ${{ inputs.go-version }}
      - uses: golangci/golangci-lint-action@v6

  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: ${{ inputs.go-version }}
      - run: go test -race -coverprofile=coverage.out ./...
      - uses: actions/upload-artifact@v4
        with:
          name: coverage
          path: coverage.out

Each service repository then needs only a thin caller workflow:

on:
  push:
  pull_request:

jobs:
  ci:
    uses: org/ci-templates/.github/workflows/go-service.yml@v2
    with:
      go-version: "1.23"

Start by templating your most common service type. Migrate repositories one at a time. Do not attempt a big-bang migration.
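Reusable workflows cover whole jobs; composite actions cover shared step sequences. A minimal sketch, assuming a hypothetical org/ci-templates repository layout:

```yaml
# org/ci-templates/setup-build-env/action.yml (hypothetical path)
name: Setup build environment
description: Install the Go toolchain and warm the module cache
inputs:
  go-version:
    description: Go version to install
    default: "1.23"
runs:
  using: composite
  steps:
    - uses: actions/setup-go@v5
      with:
        go-version: ${{ inputs.go-version }}
    - run: go mod download
      shell: bash  # composite run steps must declare a shell
```

Workflows reference it with uses: org/ci-templates/setup-build-env@v2, so a cache or toolchain change lands everywhere with one tagged release.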

Anti-Pattern: Monolithic Builds

A single pipeline that builds everything, tests everything, and deploys everything. It takes 45 minutes. Developers wait. They batch changes to avoid triggering it. The batch is larger, so it fails more often. Failures are harder to diagnose because multiple changes landed. A vicious cycle.

Fix: Split the pipeline. Use path filters to run only relevant stages:

on:
  push:
    paths:
      - 'services/api/**'
      - 'shared/proto/**'

jobs:
  build-api:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cd services/api && make build test

For monorepos, use change detection to determine which services were affected. Tools like nx affected, Turborepo, or a simple git diff --name-only comparison let you build only what changed:

- name: Detect changes
  id: changes
  # Diffing against origin/main requires history:
  # check out with actions/checkout fetch-depth: 0
  run: |
    CHANGED=$(git diff --name-only origin/main...HEAD | grep -oP '^services/\K[^/]+' | sort -u | tr '\n' ',')
    echo "services=$CHANGED" >> "$GITHUB_OUTPUT"

- name: Build changed services
  run: |
    IFS=',' read -ra SERVICES <<< "${{ steps.changes.outputs.services }}"
    for svc in "${SERVICES[@]}"; do
      [ -n "$svc" ] && make -C "services/$svc" build
    done
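The extraction pipeline is easy to sanity-check locally against a sample file list (the paths below are made up for illustration):

```shell
# Simulate the change-detection parsing; in CI this list comes from
# git diff --name-only origin/main...HEAD
changed_files="services/api/main.go
services/api/handler.go
services/web/index.ts
shared/proto/user.proto
README.md"

# Keep only top-level directory names under services/, deduplicated
services=$(printf '%s\n' "$changed_files" \
  | grep -oP '^services/\K[^/]+' | sort -u | tr '\n' ',')
echo "$services"  # -> api,web,
```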

Anti-Pattern: Secrets in Code

Hardcoded API keys, database passwords, and tokens in pipeline configuration, environment files checked into git, or worse, in plain text in CI system variables visible to anyone who can edit the pipeline.

Fix: Use OIDC authentication for cloud providers (eliminates static credentials entirely), store remaining secrets in a vault (HashiCorp Vault, AWS Secrets Manager, GitHub encrypted secrets), and scan for leaked secrets in CI:

- name: Scan for secrets
  uses: trufflesecurity/trufflehog@main
  with:
    extra_args: --results=verified

- name: Gitleaks scan
  uses: gitleaks/gitleaks-action@v2
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

Rotate every secret that has ever been committed to version control. Assume it has been read by someone who should not have seen it.
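The OIDC approach works by exchanging a short-lived GitHub-issued token for cloud credentials at runtime; a sketch for AWS, where the role ARN is a placeholder:

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required so the job can request an OIDC token
      contents: read
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/ci-deploy  # placeholder
          aws-region: us-east-1
      - run: aws sts get-caller-identity  # authenticated with no static keys
```

The IAM role's trust policy restricts which repositories and branches may assume it, so there is no long-lived credential to leak or rotate.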

Anti-Pattern: Manual Gates That Block Everything

A deployment pipeline that requires VP approval for every staging deployment. The VP is in meetings all day. Deployments queue up. Developers merge code on Monday and it reaches staging on Wednesday. By Wednesday, nobody remembers the context of Monday’s changes.

Fix: Automate everything except production. Staging deployments should be fully automated on merge to main. Use environment protection rules for production only:

jobs:
  deploy-staging:
    # No approval needed -- automatic on merge to main
    if: github.ref == 'refs/heads/main'
    environment: staging
    runs-on: ubuntu-latest
    steps:
      - run: ./deploy.sh staging

  deploy-production:
    needs: deploy-staging
    environment:
      name: production  # Requires manual approval in GitHub settings
    runs-on: ubuntu-latest
    steps:
      - run: ./deploy.sh production

If compliance requires approvals, make them asynchronous. The approval is on the PR, not the deployment. Once the PR is reviewed and merged, deployment should be automatic. The review happened at merge time.

Anti-Pattern: Environment Drift

Staging has different versions of dependencies, different OS versions, different configuration, and different resource limits than production. Tests pass in staging and fail in production. Nobody trusts staging.

Fix: Use identical container images across all environments. The image built in CI is the same image deployed to staging and production. Only environment-specific configuration (database URLs, API keys, feature flags) changes between environments.

# Build once, deploy everywhere
jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      image: ${{ steps.build.outputs.image }}
    steps:
      - uses: actions/checkout@v4
      - id: build
        run: |
          IMAGE="registry.example.com/api:sha-${GITHUB_SHA::8}"
          docker build -t "$IMAGE" .
          docker push "$IMAGE"  # assumes registry login earlier in the job
          echo "image=$IMAGE" >> "$GITHUB_OUTPUT"

  deploy-staging:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - run: kubectl set image deployment/api api=${{ needs.build.outputs.image }} -n staging

  deploy-production:
    needs: [build, deploy-staging]
    runs-on: ubuntu-latest
    steps:
      - run: kubectl set image deployment/api api=${{ needs.build.outputs.image }} -n production

Infrastructure drift is harder. Use Infrastructure as Code (Terraform, Pulumi) for environment provisioning and run terraform plan in CI to detect drift before it causes problems.
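A scheduled drift check can use terraform plan with -detailed-exitcode, which exits 0 when state matches reality and 2 when a non-empty plan (drift) is detected; a sketch:

```yaml
name: Drift detection
on:
  schedule:
    - cron: "0 6 * * *"  # daily, before working hours

jobs:
  drift:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init -input=false
      - run: terraform plan -detailed-exitcode -input=false
        # exit code 0 = no drift, 2 = drift detected (fails the job and alerts)
```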

Migration: Jenkins to GitHub Actions

Jenkins is the most common migration source. The patterns differ significantly.

Key mapping:

| Jenkins | GitHub Actions |
| --- | --- |
| Jenkinsfile (Groovy) | .github/workflows/*.yml |
| pipeline { stages {} } | jobs: with needs: |
| agent { docker {} } | runs-on: + container: |
| environment {} | env: at workflow/job/step level |
| Shared libraries | Reusable workflows + composite actions |
| input step | Environment protection rules |
| credentials() | ${{ secrets.* }} |
| parallel {} | Jobs without needs: run in parallel |

Jenkinsfile:

pipeline {
    agent { docker { image 'golang:1.23' } }
    stages {
        stage('Test') {
            steps { sh 'go test ./...' }
        }
        stage('Build') {
            steps { sh 'go build -o app ./cmd/server' }
        }
        stage('Deploy') {
            when { branch 'main' }
            steps { sh './deploy.sh' }
        }
    }
}

Equivalent GitHub Actions:

name: CI/CD
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    container: golang:1.23
    steps:
      - uses: actions/checkout@v4
      - run: go test ./...

  build:
    needs: test
    runs-on: ubuntu-latest
    container: golang:1.23
    steps:
      - uses: actions/checkout@v4
      - run: go build -o app ./cmd/server

  deploy:
    if: github.ref == 'refs/heads/main'
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh

Migration strategy: Run Jenkins and GitHub Actions in parallel for 2-4 weeks. Migrate one pipeline at a time, starting with the simplest. Keep Jenkins as a fallback until the GitHub Actions pipeline has proven stable. Decommission Jenkins pipelines only after the team has run exclusively on GitHub Actions for at least two weeks without issues.

Migration: CircleCI to GitHub Actions

CircleCI’s concepts map more directly to GitHub Actions than Jenkins does.

| CircleCI | GitHub Actions |
| --- | --- |
| .circleci/config.yml | .github/workflows/*.yml |
| orbs | Marketplace actions + reusable workflows |
| workflows: | jobs: with needs: |
| executors: | runs-on: or container: |
| commands: | Composite actions |
| contexts | Environment secrets |
| approval job type | Environment protection rules |

CircleCI orbs translate to GitHub Actions marketplace actions or composite actions. If you use heavily customized orbs, extract the underlying shell commands and port them directly rather than looking for action equivalents.
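As an illustration, an orb step that wraps a single HTTP call (the orb and command names below are invented) ports to a plain run step:

```yaml
# CircleCI (hypothetical orb usage):
#   - myorg/deploy-tools/notify-slack:
#       channel: releases

# GitHub Actions: port the underlying command directly
- name: Notify Slack
  run: |
    curl -sf -X POST "$SLACK_WEBHOOK_URL" \
      -H 'Content-Type: application/json' \
      -d '{"channel": "releases", "text": "Deploy finished"}'
  env:
    SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
```

The ported step is longer but has no hidden behavior and no dependency on a registry you no longer use.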

Migration: On-Premises to Cloud CI

Moving from on-prem Jenkins/TeamCity/Bamboo to cloud CI introduces network challenges. Your CI previously had direct access to internal databases, artifact repositories, and deployment targets. Cloud CI does not.

Hybrid approach: Use cloud CI for build and test, self-hosted runners for deployment to internal infrastructure. The self-hosted runners live inside your network and can access internal resources:

jobs:
  build-and-test:
    runs-on: ubuntu-latest  # Cloud runner
    steps:
      - uses: actions/checkout@v4
      - run: make build test

  deploy:
    needs: build-and-test
    runs-on: [self-hosted, internal-network]  # On-prem runner
    steps:
      - uses: actions/checkout@v4
      - run: kubectl apply -f k8s/ --context internal-cluster

This gives you the benefits of cloud CI (zero maintenance, elastic scaling, managed infrastructure) for the compute-heavy build phase, while keeping deployment within your network boundary. Migrate build and test first, deployment last.