# CI/CD Patterns for Monorepos
A monorepo puts multiple packages, services, and applications in a single repository. This simplifies cross-package changes and dependency management, but it breaks the assumption that most CI systems are built on: one repo means one build. Without careful pipeline design, every commit triggers a full rebuild of everything, and CI becomes the bottleneck.
## The Core Problem
In a monorepo, a commit that touches `packages/auth-service/src/handler.ts` should build and test `auth-service` and its dependents, but not `billing-service` or `frontend`. Getting this right is the central challenge of monorepo CI.
## Change Detection
Change detection determines which packages are affected by a commit. There are three approaches, each with different tradeoffs.
### Git Diff Based
Compare the current commit against the base branch and map changed files to packages:
```bash
# Find changed files relative to main
CHANGED_FILES=$(git diff --name-only origin/main...HEAD)

# Map to packages (assuming a packages/<name>/ structure)
CHANGED_PACKAGES=$(echo "$CHANGED_FILES" | grep '^packages/' | cut -d/ -f2 | sort -u)
```

In GitHub Actions, use path filters directly:
```yaml
on:
  pull_request:
    paths:
      - 'packages/auth-service/**'
      - 'packages/shared-lib/**'

jobs:
  build-auth:
    runs-on: ubuntu-latest
    steps:
      - run: make build-auth
```

The limitation is that path filters are static. They do not understand dependency graphs, so a change to `shared-lib` will not trigger `auth-service`'s pipeline unless explicitly listed. This becomes unmanageable as the graph grows.
### Dependency Graph Based
Use the package manager's dependency graph to determine what is affected by a change. If `shared-lib` changed and `auth-service` depends on `shared-lib`, both must be rebuilt.
```bash
# pnpm: build packages affected by changes since main
pnpm --filter "...[origin/main]" run build
# The "...[ref]" syntax means "packages changed since this ref, plus their dependents"
```

Nx and Turborepo both provide this natively:
```bash
# Nx: run build for affected projects
npx nx affected --target=build --base=origin/main

# Turborepo: run build for changed packages and their dependents
npx turbo run build --filter="...[origin/main]"
```

This is the correct approach for most monorepos: it builds only what needs building and nothing that does not.
### File Hash Based
Bazel and similar hermetic build systems hash all inputs (source files, dependencies, build configuration) and rebuild only when hashes change. This is the most precise approach but requires the build system to track every input, which is a significant upfront investment.
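The idea can be sketched in a few lines of shell: hash every input file, record the hash, and skip the rebuild when nothing changed. This is a simplified illustration only (real hermetic build systems also hash toolchains, dependencies, and build configuration); the `.build-hash` marker file is a hypothetical convention:

```bash
# hash_inputs DIR: order-independent content hash of every file under DIR,
# excluding the marker file itself so writing the marker does not change the hash
hash_inputs() {
  find "$1" -type f ! -name '.build-hash' | sort \
    | xargs -r sha256sum | sha256sum | cut -d' ' -f1
}

# build_if_changed DIR CMD...: run CMD only when DIR's input hash has changed,
# then record the new hash
build_if_changed() {
  dir=$1; shift
  stamp="$dir/.build-hash"
  new=$(hash_inputs "$dir")
  if [ "$new" != "$(cat "$stamp" 2>/dev/null)" ]; then
    "$@"
    echo "$new" > "$stamp"
  fi
}
```

Calling `build_if_changed` twice in a row with unchanged sources runs the build command only once; the second call is a cache hit.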
## Selective Builds
Once you know what changed, configure CI to build only the affected packages. Two patterns work well.
### Dynamic Job Generation
Generate CI jobs dynamically based on detected changes:
```yaml
jobs:
  detect-changes:
    runs-on: ubuntu-latest
    outputs:
      packages: ${{ steps.detect.outputs.packages }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - id: detect
        run: |
          PACKAGES=$(npx turbo run build --dry-run=json --filter="...[origin/main]" \
            | jq -r '[.packages[] | select(. != "//")] | @json')
          echo "packages=$PACKAGES" >> "$GITHUB_OUTPUT"

  build:
    needs: detect-changes
    if: needs.detect-changes.outputs.packages != '[]'
    strategy:
      matrix:
        package: ${{ fromJson(needs.detect-changes.outputs.packages) }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npx turbo run build --filter="${{ matrix.package }}"
```

This creates one CI job per affected package. Each job runs in parallel, and unaffected packages do not consume CI resources.
### Build Tool Orchestration
Alternatively, let the build tool handle orchestration in a single CI job:
```yaml
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - run: npx turbo run build test lint --filter="...[origin/main]"
```

The CI configuration is simpler, but you lose per-package CI status visibility.
## Caching Strategies
Monorepo builds are inherently redundant. If you built `shared-lib` yesterday and nothing changed, building it again wastes time. Caching eliminates redundant work.
### Build Tool Caches
All three major build tools cache task outputs.
Turborepo caches by hashing each task's inputs. On a cache hit, it restores the task's outputs and replays its logs instead of re-executing:
```json
{
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"],
      "inputs": ["src/**", "tsconfig.json", "package.json"]
    },
    "test": {
      "dependsOn": ["build"],
      "outputs": [],
      "inputs": ["src/**", "tests/**"]
    }
  }
}
```

Nx uses a computation cache built on the same principle. Configure it in `nx.json`:
```json
{
  "targetDefaults": {
    "build": {
      "inputs": ["production", "^production"],
      "outputs": ["{projectRoot}/dist"],
      "cache": true
    },
    "test": {
      "inputs": ["default", "^production"],
      "cache": true
    }
  }
}
```

Bazel caches at the action level with content-addressable hashing. Its cache granularity is finer than Turborepo's or Nx's – individual compilation units rather than entire packages.
### Remote Caching
Local caches only help if the same machine runs consecutive builds. In CI, every job starts with an empty cache. Remote caching stores build outputs in a shared location so cache hits work across machines and developers.
Turborepo Remote Cache is available through Vercel or self-hosted with `turbo-remote-cache-server`:

```bash
npx turbo run build --team=myteam --token=$TURBO_TOKEN
```

Nx Cloud provides remote caching and distributed task execution:
```bash
npx nx-cloud start-ci-run --distribute-on="5 linux-medium-js"
```

Bazel Remote Execution uses a remote build cache (e.g., BuildBuddy, EngFlow, or a self-hosted gRPC cache):
```
build --remote_cache=grpcs://remote.buildbuddy.io
build --remote_header=x-buildbuddy-api-key=YOUR_KEY
```

### CI-Level Caching
Cache `node_modules`, package manager stores, and build outputs between CI runs. Layer CI caching with build tool caching – the CI cache restores the `.turbo` directory, and Turborepo uses it to skip unchanged tasks:
```yaml
- uses: actions/cache@v4
  with:
    path: |
      node_modules
      .turbo
    key: ${{ runner.os }}-turbo-${{ hashFiles('pnpm-lock.yaml') }}-${{ github.sha }}
    restore-keys: |
      ${{ runner.os }}-turbo-${{ hashFiles('pnpm-lock.yaml') }}-
```

## Build Tool Comparison
| Feature | Turborepo | Nx | Bazel |
|---|---|---|---|
| Language support | JavaScript/TypeScript focused | JavaScript/TypeScript focused, plugins for Go, Java, etc. | Any language with rules |
| Configuration | `turbo.json`, minimal | `nx.json` + `project.json` per package | `BUILD` files per package |
| Learning curve | Low | Medium | High |
| Change detection | File hash + dependency graph | File hash + dependency graph | Content-addressable hashing |
| Remote cache | Vercel or self-hosted | Nx Cloud or self-hosted | gRPC remote cache protocol |
| Distributed execution | No (CI matrix only) | Nx Cloud agents | Native remote execution |
| Cache granularity | Task level (per package) | Task level (per project) | Action level (per compilation unit) |
| Best for | JS/TS monorepos up to ~50 packages | JS/TS monorepos with complex graphs | Large polyglot monorepos (100+ packages) |
Choose Turborepo for JS/TS monorepos where simplicity matters – fastest to set up, works well up to ~50 packages. Choose Nx for larger JS/TS monorepos needing code generation, dependency visualization, and distributed task execution. Choose Bazel for polyglot monorepos with hundreds of packages where build correctness and hermeticity are critical – significant upfront cost but unmatched precision at scale.
## Workspace-Aware Testing
Testing in monorepos must respect the dependency graph. Run tests only for affected packages, but include transitive dependents.
```bash
# pnpm: test packages changed since main, including dependents
pnpm --filter "...[origin/main]" run test

# Nx: test affected projects
npx nx affected --target=test --base=origin/main

# Turborepo: test changed packages and dependents
npx turbo run test --filter="...[origin/main]"
```

For cross-package integration tests, create a dedicated `integration-tests` package that depends on the packages under test, rather than putting them in any single package's suite.
## Artifact Management
Monorepos produce multiple artifacts per build. Managing which artifacts to publish and how to version them requires discipline.
### Independent Versioning
Each package has its own version. Tools like Changesets automate version bumping across the dependency graph:
```bash
npx changeset add      # Developer describes the change
npx changeset version  # CI bumps versions
npx changeset publish  # CI publishes packages
```

### Container Images Per Service
Build container images only for changed services. Tag with the commit SHA and push to the registry:
```yaml
- name: Build affected service images
  run: |
    for svc in $AFFECTED_SERVICES; do
      docker build -t registry.example.com/$svc:${{ github.sha }} \
        -f packages/$svc/Dockerfile .
      docker push registry.example.com/$svc:${{ github.sha }}
    done
```

Note that the build context is the repo root (`.`), not the service directory: monorepo services typically need access to shared packages during the build. Use `.dockerignore` to exclude irrelevant packages from the build context to keep it small. Label each artifact with the source commit and package name so you can trace any deployed artifact back to its exact source.
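The tracing advice can be implemented with image labels at build time. A sketch of the same loop: `org.opencontainers.image.revision` is the standard OCI annotation key for the source commit, while `com.example.service` stands in for whatever custom label key your organization uses:

```yaml
- name: Build affected service images
  run: |
    for svc in $AFFECTED_SERVICES; do
      docker build -t registry.example.com/$svc:${{ github.sha }} \
        --label "org.opencontainers.image.revision=${{ github.sha }}" \
        --label "com.example.service=$svc" \
        -f packages/$svc/Dockerfile .
      docker push registry.example.com/$svc:${{ github.sha }}
    done
```

Later, `docker inspect` on any deployed image reveals exactly which commit and package produced it.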
## Common Mistakes
- Running all tests on every commit. Use change detection to build and test only affected packages.
- Ignoring transitive dependencies. A change to `shared-lib` must trigger builds of everything that depends on it.
- Shallow clones breaking change detection. Use `fetch-depth: 0` so `git diff` can compare against the base branch.
- Shared mutable state in tests. Parallel test execution fails if tests share databases, ports, or filesystem paths.
- Skipping remote caching. Without it, every CI run rebuilds from scratch.
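The shared-state problem in the fourth point is usually solved by parameterizing resources per worker rather than serializing the tests. A minimal sketch, assuming Jest (which sets `JEST_WORKER_ID` for each parallel worker); the naming scheme and port base are hypothetical:

```bash
# Give each parallel test worker its own database name and port so workers
# never collide on shared state. Jest sets JEST_WORKER_ID for each worker;
# default to 1 for local single-process runs.
WORKER="${JEST_WORKER_ID:-1}"
export TEST_DB_NAME="app_test_${WORKER}"
export TEST_PORT=$((4000 + WORKER))
echo "$TEST_DB_NAME $TEST_PORT"
```

Test setup code then creates the database named by `TEST_DB_NAME` and binds services to `TEST_PORT`, so the same suite runs safely at any parallelism level.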