Artifact Management#
Every CI/CD pipeline produces artifacts: container images, compiled binaries, library packages, Helm charts. Where these artifacts live, how long they are retained, how they are promoted between environments, and how they are scanned for vulnerabilities are decisions that affect security, cost, and operational reliability. Choosing the wrong artifact repository or neglecting lifecycle management creates accumulating storage costs and security blind spots.
Repository Options#
JFrog Artifactory#
Artifactory is the most comprehensive option. It supports every package type: Docker images, Maven/Gradle JARs, npm packages, PyPI wheels, Helm charts, Go modules, NuGet packages, and generic binaries. It serves as a universal artifact store and a caching proxy for upstream registries.
Strengths: Universal format support. Virtual repositories aggregate multiple registries behind a single URL. Remote repositories proxy and cache artifacts from Docker Hub, npm, Maven Central, and others, reducing external network dependency and protecting against upstream outages. Fine-grained access control per repository. Built-in vulnerability scanning with JFrog Xray. Replication between instances for multi-region deployments.
Weaknesses: Expensive. The free tier (Open Source) lacks many enterprise features. Complex to operate self-hosted. Licensing cost scales with usage.
Best for: Organizations with multiple artifact types across multiple languages, needing a single source of truth for all build outputs. Teams that require caching proxies for upstream registries.
Sonatype Nexus Repository#
Nexus supports Docker, Maven, npm, PyPI, NuGet, Helm, and more. It comes in OSS (free) and Pro editions.
Strengths: Nexus OSS provides Docker registry, Maven hosting, npm hosting, and proxy capabilities at no cost. Simpler to operate than Artifactory. The Pro edition adds HA clustering, staging repositories, and advanced security features. Lighter resource footprint.
Weaknesses: OSS edition lacks HA, cleanup policies are basic, and there is no built-in vulnerability scanning (requires integration with external tools). No replication in OSS.
Best for: Teams that need a self-hosted artifact repository without licensing costs. Java-heavy shops already familiar with the Nexus ecosystem.
GitHub Packages#
GitHub Packages integrates directly with GitHub repositories. It supports Docker images (via ghcr.io), npm, Maven, NuGet, and RubyGems.
Strengths: Zero setup for GitHub users. Authentication uses GitHub tokens. Visibility tied to repository visibility (public repos get public packages). Free for public repositories. Tight integration with GitHub Actions – no separate credential management needed.
Weaknesses: Limited to GitHub-supported formats. No proxy/caching capability for upstream registries. Storage and bandwidth limits on free plans. No promotion or staging workflow built in. Cannot serve as a universal artifact store.
Best for: Open-source projects and teams fully committed to the GitHub ecosystem. Projects where container images and npm packages are the only artifact types.
Cloud-Native Registries#
AWS ECR, Google Artifact Registry (GAR), and Azure Container Registry (ACR) are managed, container-focused registries. All three provide lifecycle policies, replication, IAM-based access control, and built-in vulnerability scanning. GAR also supports Maven, npm, Python, and Go packages.
Strengths: No servers to operate. Network proximity to compute (same-region pulls are fast and free of data transfer charges). Native IAM integration.
Weaknesses: Vendor lock-in. Cross-cloud pulls incur data transfer costs. Not suitable as universal artifact stores.
Best for: Teams running workloads on a single cloud provider where Kubernetes pulls images from the same cloud’s registry.
Selection Criteria#
| Criterion | Artifactory | Nexus OSS | GitHub Packages | Cloud Registry |
|---|---|---|---|---|
| Container images | Yes | Yes | Yes | Yes |
| Language packages | All | Most | Some | Limited |
| Proxy/cache upstream | Yes | Yes | No | No |
| Self-hosted option | Yes | Yes | No | No |
| HA/replication | Yes (paid) | Pro only | N/A | Built-in |
| Vulnerability scanning | Xray (paid) | External | Advisory DB | Native |
| Cost at scale | High | Low (OSS) | Medium | Medium |
| Setup complexity | High | Medium | None | Low |
If you need one registry for everything: Artifactory. Nothing else matches its format breadth and enterprise features.
If you need a free self-hosted registry: Nexus OSS. It covers the major formats and runs on modest hardware.
If everything is on GitHub already: GitHub Packages for simplicity. Supplement with a cloud registry if you need lifecycle policies.
If you run single-cloud Kubernetes: Use your cloud provider’s registry. The network performance and IAM integration are unmatched.
Container Image Lifecycle#
Tagging Strategy#
Every image needs at least two tags: an immutable identifier and a mutable convenience tag:
```shell
# Immutable: Git SHA or build number
docker tag myapp:build registry.example.com/myapp:a1b2c3d
docker tag myapp:build registry.example.com/myapp:build-1847

# Mutable: latest, environment markers
docker tag myapp:build registry.example.com/myapp:latest
docker tag myapp:build registry.example.com/myapp:staging
```

Never deploy with `:latest` in production. Use the immutable tag. The mutable tags exist for human convenience when inspecting registries or pulling images for local testing.
Semantic version tags (v2.4.1, v2.4, v2) let consumers pin to a major version for automatic minor updates, or a full version for exact reproducibility.
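The semver fan-out can be derived mechanically at release time. A minimal sketch (the registry host and image name are placeholders) that expands one full version into the three tags consumers can pin against:

```shell
# Expand a release version into its semver tag fan-out.
# registry.example.com/myapp is a hypothetical image reference.
VERSION="2.4.1"
MAJOR="${VERSION%%.*}"   # strips everything after the first dot -> 2
MINOR="${VERSION%.*}"    # strips the patch component -> 2.4

for tag in "v$VERSION" "v$MINOR" "v$MAJOR"; do
  echo "registry.example.com/myapp:$tag"
done
```

In a real pipeline, each echoed reference would be applied with `docker tag` and pushed; the `v2` and `v2.4` tags move forward on every release, while `v2.4.1` never moves.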
Retention Policies#
Uncontrolled image accumulation is the most common registry problem. A moderately active CI pipeline producing one image per commit on a team of ten generates thousands of images per month. All major registries support lifecycle policies – ECR with JSON rules, GAR with cleanup policies, ACR with purge tasks. Configure rules based on these guidelines:
- Production-tagged images (semver tags): keep indefinitely or for the support window of each release.
- Staging/QA images: keep the last 10-20 per repository.
- Development/feature branch images: keep for 7-14 days after last push.
- Untagged manifests: delete after 3-7 days. These are typically intermediate layers or replaced tags.
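As one concrete shape, an ECR lifecycle policy implementing two of the guidelines above might look like the following sketch (rule descriptions and the `staging` tag prefix are assumptions about your tagging scheme):

```json
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Expire untagged manifests after 7 days",
      "selection": {
        "tagStatus": "untagged",
        "countType": "sinceImagePushed",
        "countUnit": "days",
        "countNumber": 7
      },
      "action": { "type": "expire" }
    },
    {
      "rulePriority": 2,
      "description": "Keep only the last 20 staging images",
      "selection": {
        "tagStatus": "tagged",
        "tagPrefixList": ["staging"],
        "countType": "imageCountMoreThan",
        "countNumber": 20
      },
      "action": { "type": "expire" }
    }
  ]
}
```

Lower `rulePriority` values are evaluated first; GAR cleanup policies and ACR purge tasks express the same ideas with different syntax.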
Vulnerability Scanning Integration#
Scan images at build time (in the CI pipeline) and at rest (periodic registry scans):
```yaml
# GitLab CI example with Trivy
scan-image:
  stage: test
  image:
    name: aquasec/trivy:latest
    entrypoint: [""]
  script:
    - >
      trivy image --exit-code 1 --severity CRITICAL,HIGH
      --ignore-unfixed
      $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  allow_failure: false
```

`--exit-code 1` fails the pipeline on findings. `--severity CRITICAL,HIGH` ignores medium and low findings to avoid blocking on noise. `--ignore-unfixed` skips vulnerabilities without available patches – you cannot fix what upstream has not patched.
Registries with built-in scanning (ECR, GAR, ACR, Artifactory with Xray) can also scan images on push and surface findings in the registry UI. Configure admission controllers (OPA Gatekeeper, Kyverno) to block deployment of images with critical vulnerabilities.
Promotion Workflows#
Promotion moves an artifact from one environment to the next without rebuilding. The same image binary deployed to staging is the exact image deployed to production. This is non-negotiable for reproducibility.
Tag-Based Promotion#
The simplest approach. Re-tag the image to indicate its promotion status:
```shell
# After staging validation passes
crane tag registry.example.com/myapp:a1b2c3d staging-approved
crane tag registry.example.com/myapp:a1b2c3d production
```

`crane` (from Google's go-containerregistry) manipulates tags without pulling/pushing full images. `skopeo copy` achieves the same across different registries:

```shell
skopeo copy \
  docker://staging-registry.example.com/myapp:a1b2c3d \
  docker://production-registry.example.com/myapp:a1b2c3d
```

Repository-Based Promotion#
Separate repositories represent environment tiers. An image starts in dev/myapp, gets copied to staging/myapp after tests pass, and to prod/myapp after approval. Use skopeo copy to move images between tier repositories. This approach enables different access controls per tier: development teams push to dev/*, the CI system promotes to staging/*, and only the release pipeline promotes to prod/*.
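The tier-to-tier copy is mechanical enough to wrap in a small helper. A sketch, assuming a single registry host and `dev`/`staging`/`prod` repository prefixes (all hypothetical names); it echoes the `skopeo` command for review rather than executing it, so drop the `echo` to promote for real:

```shell
# Hypothetical promotion helper: builds the skopeo copy command
# that moves an image between tier repositories on one registry.
REGISTRY="registry.example.com"

promote() {
  local image="$1" from="$2" to="$3"
  # echo instead of exec: a dry run you can inspect before promoting
  echo skopeo copy \
    "docker://$REGISTRY/$from/$image" \
    "docker://$REGISTRY/$to/$image"
}

promote myapp:a1b2c3d dev staging
```

Because each tier repository carries its own access controls, the credentials available to this helper determine which promotions it can actually perform.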
Signing Promoted Artifacts#
Sign images at promotion time to create a verifiable chain of trust:
```shell
# Sign with cosign after promotion to production
cosign sign --key cosign.key registry.example.com/prod/myapp:a1b2c3d
```

Configure Kubernetes admission policies to require valid signatures on production images. This ensures that only images that passed through the promotion workflow can run in production.
Common Mistakes#
- Rebuilding images for each environment. A rebuild produces a different binary. The image deployed to production must be the exact image tested in staging.
- No retention policies. Registries grow without bound. A year of CI output can consume terabytes and cost thousands in storage fees.
- Scanning only at build time. New CVEs are discovered daily. An image clean at build time may have critical vulnerabilities a month later. Enable periodic scanning on stored images.
- Using registry credentials in plain text. Store registry credentials in secret managers (Vault, cloud provider secrets) and inject them into CI jobs at runtime. Never commit credentials to repository configuration files.
- Choosing Artifactory when GitHub Packages would suffice. Artifact management complexity should match organizational complexity. A small team with only Docker images does not need a universal artifact store.