# CircleCI Pipeline Patterns
CircleCI pipelines are defined in .circleci/config.yml. The configuration model uses workflows to orchestrate jobs, jobs to define execution units, and steps to define commands within a job. Every job runs inside an executor – a Docker container, Linux VM, macOS VM, or Windows VM.
## Config Structure and Executors
A minimal config defines a job and a workflow:
```yaml
version: 2.1

executors:
  go-executor:
    docker:
      - image: cimg/go:1.22
    resource_class: medium
    working_directory: ~/project

jobs:
  build:
    executor: go-executor
    steps:
      - checkout
      - run:
          name: Build application
          command: go build -o myapp ./cmd/myapp

workflows:
  main:
    jobs:
      - build
```

Named executors let you reuse environment definitions across jobs. The `resource_class` setting controls CPU and memory: small (1 vCPU/2 GB), medium (2 vCPU/4 GB), large (4 vCPU/8 GB), xlarge (8 vCPU/16 GB). Choose the smallest class that avoids OOM kills to keep costs down.
Docker executors accept multiple images. The first image is the primary container where steps execute. Additional images run as services accessible via localhost:
```yaml
jobs:
  integration-test:
    docker:
      - image: cimg/go:1.22
      - image: cimg/postgres:15.4
        environment:
          POSTGRES_USER: test
          POSTGRES_DB: testdb
      - image: cimg/redis:7.2
    steps:
      - checkout
      - run:
          name: Wait for services
          command: dockerize -wait tcp://localhost:5432 -timeout 30s
      - run:
          name: Run integration tests
          command: go test ./... -tags=integration
```

In GitHub Actions, the equivalent is service containers, but they use Docker networking with hostname-based addressing rather than localhost. CircleCI's localhost model is simpler for service discovery but limits you to one container per port.
## Orbs
Orbs are reusable packages of configuration – jobs, commands, and executors published to the CircleCI registry. They eliminate boilerplate for common tasks:
```yaml
version: 2.1

orbs:
  aws-ecr: circleci/aws-ecr@9.0
  aws-ecs: circleci/aws-ecs@4.0
  slack: circleci/slack@4.13

workflows:
  deploy:
    jobs:
      - aws-ecr/build_and_push_image:
          repo: myapp
          tag: ${CIRCLE_SHA1}
          context: aws-production
      - aws-ecs/deploy_service_update:
          requires:
            - aws-ecr/build_and_push_image
          cluster: production
          service-name: myapp
          container-image-name-updates: "container=myapp,tag=${CIRCLE_SHA1}"
          context: aws-production
      - slack/on-hold:
          requires:
            - aws-ecs/deploy_service_update
          context: slack-notifications
```

Pin orb versions explicitly. Volatile orbs (`circleci/aws-ecr@volatile`) always pull the latest release, which can break builds without warning. Use exact major versions at minimum.
GitHub Actions has a comparable ecosystem via the marketplace, but Actions are referenced per-step while orbs provide entire jobs and commands as a unit. Orbs also support parameterized configuration more naturally than composite Actions.
## Workspaces vs Caching
This distinction trips up many teams; workspaces and caches solve different problems:
Workspaces persist data between jobs within a single workflow run. Job A attaches files to the workspace; Job B retrieves them. Workspaces are ephemeral – they disappear when the workflow completes:
```yaml
jobs:
  build:
    executor: go-executor
    steps:
      - checkout
      - run: go build -o myapp ./cmd/myapp
      - persist_to_workspace:
          root: .
          paths:
            - myapp
  test:
    executor: go-executor
    steps:
      - attach_workspace:
          at: ~/project
      - run: ./myapp --version

workflows:
  main:
    jobs:
      - build
      - test:
          requires:
            - build
```

Caches persist data between workflow runs. They are keyed by a hash and survive across pipelines. Use caches for dependencies that change infrequently:
```yaml
jobs:
  build:
    executor: go-executor
    steps:
      - checkout
      - restore_cache:
          keys:
            - go-mod-v1-{{ checksum "go.sum" }}
            - go-mod-v1-
      - run: go mod download
      - save_cache:
          key: go-mod-v1-{{ checksum "go.sum" }}
          paths:
            - /home/circleci/go/pkg/mod
      - run: go build -o myapp ./cmd/myapp
```

The `restore_cache` fallback pattern is important. If the exact key misses, CircleCI tries the prefix `go-mod-v1-` and restores the most recent partial match. This gives you a warm cache even when `go.sum` changes slightly.
In GitHub Actions, actions/cache handles both use cases, and artifacts serve a similar role to workspaces. CircleCI’s explicit separation makes intent clearer.
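For comparison, a rough GitHub Actions equivalent of the Go module cache above (a sketch, assuming the default `~/go/pkg/mod` module path; `restore-keys` plays the same role as CircleCI's prefix fallback):

```yaml
- name: Cache Go modules
  uses: actions/cache@v4
  with:
    path: ~/go/pkg/mod
    key: go-mod-${{ hashFiles('go.sum') }}
    restore-keys: |
      go-mod-
```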
## Parallelism and Test Splitting
CircleCI has built-in parallelism at the job level. Set `parallelism: N` and CircleCI spawns N identical containers, then use `circleci tests split` to distribute work:
```yaml
jobs:
  test:
    executor: go-executor
    parallelism: 4
    steps:
      - checkout
      - run:
          name: Run tests
          command: |
            PACKAGES=$(go list ./... | circleci tests split --split-by=timings)
            gotestsum --junitfile results.xml -- $PACKAGES -v
      - store_test_results:
          path: results.xml
```

`--split-by=timings` uses historical test duration data from `store_test_results` to balance work across containers. The first run splits evenly by count; subsequent runs optimize for equal wall-clock time. This is significantly more ergonomic than GitHub Actions, where you must manually shard test suites using matrix strategies and external splitting logic.
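Splitting also works on file lists rather than package names, which suits runners that take paths. A sketch using `circleci tests glob` (the glob pattern and `run-tests.sh` wrapper are placeholders for whatever your suite uses):

```yaml
- run:
    name: Run tests (file-based splitting)
    command: |
      # Glob candidate files on every container, then let the CLI assign
      # this container its share -- by timings once results exist.
      FILES=$(circleci tests glob "tests/**/*_test.py" | circleci tests split --split-by=timings)
      ./run-tests.sh $FILES
```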
## Approval Jobs and Gated Deployments
Approval jobs pause a workflow until a human clicks “Approve” in the CircleCI UI:
```yaml
workflows:
  deploy-production:
    jobs:
      - build
      - test:
          requires:
            - build
      - deploy-staging:
          requires:
            - test
      - hold-for-approval:
          type: approval
          requires:
            - deploy-staging
      - deploy-production:
          requires:
            - hold-for-approval
          context: production-secrets
```

The `type: approval` job has no executor and no steps; it is purely a gate. You can restrict who can approve by combining this with CircleCI's project-level permissions. GitHub Actions achieves similar gating with environment protection rules and required reviewers, but the workflow syntax is less explicit.
## Contexts and Secrets
Contexts are named collections of environment variables managed at the organization level. Jobs reference contexts to gain access:
```yaml
workflows:
  deploy:
    jobs:
      - deploy-staging:
          context: aws-staging
      - deploy-production:
          context:
            - aws-production
            - slack-notifications
```

Context security groups restrict which teams can trigger jobs that use a given context. This means you can let any developer trigger the staging deploy but limit production deploys to the platform team. Contexts are managed in the CircleCI UI under Organization Settings.
Project-level environment variables are available to all jobs in a project. Context variables override project variables when names collide. For secrets that span multiple projects (AWS credentials, Slack tokens), always use contexts.
## Docker Layer Caching
Docker Layer Caching (DLC) persists Docker build layers between job runs. It requires a machine executor or `setup_remote_docker` with DLC enabled:
```yaml
jobs:
  build-image:
    docker:
      - image: cimg/base:current
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - run:
          name: Build and push
          command: |
            docker build -t myregistry/myapp:${CIRCLE_SHA1} .
            echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin
            docker push myregistry/myapp:${CIRCLE_SHA1}
```

DLC is a paid feature. It caches layers from the previous build, so unchanged layers skip rebuilding. The savings are proportional to how much of your Dockerfile is stable: base image pulls, dependency installs, and system package layers benefit the most. DLC does not help if every layer changes on every build.
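To benefit from DLC, order Dockerfile instructions from least to most volatile so expensive layers stay cacheable. A sketch for the Go app used in this article (image tags and paths are illustrative, not taken from the config above):

```dockerfile
# Stable layers first: the base image and module download are reused
# by DLC as long as go.mod and go.sum are unchanged.
FROM golang:1.22 AS build
ENV CGO_ENABLED=0
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download

# Volatile layers last: copying the source invalidates only the
# layers from this point on when code changes.
COPY . .
RUN go build -o /out/myapp ./cmd/myapp

# Minimal runtime image; only the final binary is shipped.
FROM gcr.io/distroless/static-debian12
COPY --from=build /out/myapp /myapp
ENTRYPOINT ["/myapp"]
```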
## Common Mistakes
- Using workspaces when you need caches. Workspaces are per-workflow. If you want `node_modules` to survive between pushes, use `save_cache`/`restore_cache`, not `persist_to_workspace`.
- Not using test splitting with `store_test_results`. Timing-based splitting only works when you upload JUnit XML results. Without `store_test_results`, the `--split-by=timings` flag falls back to naive splitting.
- Oversizing resource classes. Running every job on `xlarge` wastes credits. Profile your jobs; most build jobs fit comfortably in `medium`.
- Ignoring context security groups. Without restrictions, any project member can trigger jobs with production credentials. Lock down sensitive contexts to specific teams.
- Not pinning orb versions. Using `@volatile` or unpinned major versions means upstream orb changes can break your pipeline without any code change on your side.