AWS CodePipeline and CodeBuild#
AWS CodePipeline orchestrates CI/CD workflows as a series of stages. CodeBuild executes the actual build and test commands. Together they provide a fully managed pipeline that integrates natively with S3, ECR, ECS, EKS, Lambda, and CloudFormation. No servers to manage, no agents to maintain – but the trade-off is less flexibility than self-hosted systems and tighter coupling to the AWS ecosystem.
Pipeline Structure#
A CodePipeline has stages, and each stage has actions. Actions can run in parallel or sequentially within a stage. The most common pattern is Source -> Build -> Deploy:
```json
{
  "pipeline": {
    "name": "myapp-pipeline",
    "roleArn": "arn:aws:iam::123456789012:role/codepipeline-role",
    "stages": [
      {
        "name": "Source",
        "actions": [{
          "name": "GitHubSource",
          "actionTypeId": {
            "category": "Source",
            "owner": "ThirdParty",
            "provider": "GitHub",
            "version": "1"
          },
          "configuration": {
            "Owner": "myorg",
            "Repo": "myapp",
            "Branch": "main",
            "OAuthToken": "{{resolve:secretsmanager:github-token}}"
          },
          "outputArtifacts": [{"name": "SourceOutput"}]
        }]
      },
      {
        "name": "Build",
        "actions": [{
          "name": "CodeBuild",
          "actionTypeId": {
            "category": "Build",
            "owner": "AWS",
            "provider": "CodeBuild",
            "version": "1"
          },
          "inputArtifacts": [{"name": "SourceOutput"}],
          "outputArtifacts": [{"name": "BuildOutput"}],
          "configuration": {
            "ProjectName": "myapp-build"
          }
        }]
      },
      {
        "name": "Deploy",
        "actions": [{
          "name": "ECSDeploy",
          "actionTypeId": {
            "category": "Deploy",
            "owner": "AWS",
            "provider": "ECS",
            "version": "1"
          },
          "inputArtifacts": [{"name": "BuildOutput"}],
          "configuration": {
            "ClusterName": "production",
            "ServiceName": "myapp",
            "FileName": "imagedefinitions.json"
          }
        }]
      }
    ]
  }
}
```

Most teams define pipelines with CloudFormation or Terraform rather than raw JSON. The JSON above illustrates the structure: stages execute in order, and actions within a stage run in parallel when they share the same runOrder value; an action with a higher runOrder waits for all lower-numbered actions to finish.
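Actions that share a runOrder run concurrently; a higher runOrder waits for the lower ones. A sketch of a test stage in CloudFormation syntax (the action names here are illustrative, and ActionTypeId/Configuration are omitted for brevity):

```yaml
# Illustrative stage fragment: UnitTests and Lint share RunOrder 1 and run in
# parallel; IntegrationTests has RunOrder 2 and starts only after both succeed.
- Name: Test
  Actions:
    - Name: UnitTests
      RunOrder: 1
      # ActionTypeId / Configuration omitted
    - Name: Lint
      RunOrder: 1
      # ActionTypeId / Configuration omitted
    - Name: IntegrationTests
      RunOrder: 2
      # ActionTypeId / Configuration omitted
```

In the raw JSON API the same property is spelled runOrder.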
In CloudFormation:
```yaml
Resources:
  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      Name: myapp-pipeline
      RoleArn: !GetAtt PipelineRole.Arn
      ArtifactStore:
        Type: S3
        Location: !Ref ArtifactBucket
      Stages:
        - Name: Source
          Actions:
            - Name: Source
              ActionTypeId:
                Category: Source
                Owner: AWS
                Provider: CodeStarSourceConnection
                Version: '1'
              Configuration:
                ConnectionArn: !Ref GitHubConnection
                FullRepositoryId: myorg/myapp
                BranchName: main
              OutputArtifacts:
                - Name: SourceOutput
        - Name: Build
          Actions:
            - Name: Build
              ActionTypeId:
                Category: Build
                Owner: AWS
                Provider: CodeBuild
                Version: '1'
              Configuration:
                ProjectName: !Ref BuildProject
              InputArtifacts:
                - Name: SourceOutput
              OutputArtifacts:
                - Name: BuildOutput
```

Use CodeStar Connections (the CodeStarSourceConnection provider) instead of the legacy GitHub provider. CodeStar Connections use an AWS-managed GitHub App and do not require storing OAuth tokens.
buildspec.yml#
CodeBuild uses buildspec.yml to define what happens during a build. It has four phases: install, pre_build, build, and post_build:
```yaml
version: 0.2
env:
  variables:
    APP_NAME: myapp
    GO_VERSION: "1.22"
  parameter-store:
    DOCKER_PASSWORD: /codebuild/docker-password
  secrets-manager:
    API_KEY: prod/myapp:api_key
phases:
  install:
    runtime-versions:
      golang: 1.22
    commands:
      - echo "Install phase"
  pre_build:
    commands:
      - echo "Logging in to ECR..."
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
  build:
    commands:
      - echo "Building Docker image..."
      - docker build -t $REPOSITORY_URI:$IMAGE_TAG .
      - docker tag $REPOSITORY_URI:$IMAGE_TAG $REPOSITORY_URI:latest
  post_build:
    commands:
      - echo "Pushing to ECR..."
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - docker push $REPOSITORY_URI:latest
      - printf '[{"name":"myapp","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json
    - appspec.yaml
    - taskdef.json
cache:
  paths:
    - '/root/.cache/go-build/**/*'
    - '/go/pkg/mod/**/*'
```

The `env` section pulls secrets from SSM Parameter Store and Secrets Manager at build time without embedding them in the buildspec. The `artifacts` section defines files passed to the next pipeline stage. The `cache` section persists directories to S3 between builds.
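The IMAGE_TAG line in pre_build relies on shell parameter expansion as a safety net. A small sketch of that fallback, using a made-up commit SHA in place of CODEBUILD_RESOLVED_SOURCE_VERSION:

```shell
# ${VAR:=default} substitutes (and assigns) "default" when VAR is unset or
# empty, so builds without a resolved source version still get a usable tag.
COMMIT_HASH=$(echo "3f9a1c2e77d0b4" | cut -c 1-7)   # stand-in commit SHA
IMAGE_TAG=${COMMIT_HASH:=latest}
echo "$IMAGE_TAG"    # 3f9a1c2

COMMIT_HASH=""
IMAGE_TAG=${COMMIT_HASH:=latest}
echo "$IMAGE_TAG"    # latest
```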
The imagedefinitions.json file is critical for ECS deployments. It maps container names to image URIs and tells the ECS deploy action which image to use.
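The file's shape is a JSON array of name/imageUri pairs, one per container to update. A small Python sketch equivalent to the printf in the buildspec above (the function name is mine):

```python
import json

def image_definitions(container_name: str, image_uri: str) -> str:
    """Render the imagedefinitions.json payload the ECS deploy action
    expects: a list of {name, imageUri} objects."""
    return json.dumps([{"name": container_name, "imageUri": image_uri}])

uri = "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:3f9a1c2"
print(image_definitions("myapp", uri))
```

The `name` must match the container name in the task definition, or the deploy action fails with an error about missing container definitions.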
ECR Integration#
The standard pattern is: CodeBuild builds the image, pushes to ECR, and outputs the image URI for the deploy stage.
CodeBuild needs IAM permissions for ECR:
```yaml
# IAM policy statements for the CodeBuild role
- Effect: Allow
  Action:
    - ecr:GetAuthorizationToken
  Resource: '*'
- Effect: Allow
  Action:
    - ecr:BatchCheckLayerAvailability
    - ecr:GetDownloadUrlForLayer
    - ecr:BatchGetImage
    - ecr:PutImage
    - ecr:InitiateLayerUpload
    - ecr:UploadLayerPart
    - ecr:CompleteLayerUpload
  Resource: !Sub 'arn:aws:ecr:${AWS::Region}:${AWS::AccountId}:repository/myapp'
```

Enable Docker layer caching in CodeBuild with the LOCAL_DOCKER_LAYER_CACHE or LOCAL_CUSTOM_CACHE modes:
```yaml
BuildProject:
  Type: AWS::CodeBuild::Project
  Properties:
    Cache:
      Type: LOCAL
      Modes:
        - LOCAL_DOCKER_LAYER_CACHE
        - LOCAL_SOURCE_CACHE
    Environment:
      Type: LINUX_CONTAINER
      ComputeType: BUILD_GENERAL1_MEDIUM
      Image: aws/codebuild/amazonlinux2-x86_64-standard:5.0
      PrivilegedMode: true
```

PrivilegedMode: true is required for Docker builds. Without it, the Docker daemon cannot start inside the CodeBuild container.
ECS Deployment Actions#
For ECS deployments, CodePipeline supports two action types:
Standard ECS deploy (ECS provider): Updates the service with a new task definition. Simple, but it performs a rolling replacement driven by the service's deployment configuration, with no traffic shifting or automated rollback. Use imagedefinitions.json to specify the new image.
Blue/Green ECS deploy (CodeDeployToECS provider): Uses CodeDeploy to perform blue/green or canary deployments. Requires an appspec.yaml and taskdef.json:
```yaml
# appspec.yaml
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION>
        LoadBalancerInfo:
          ContainerName: myapp
          ContainerPort: 8080
```

```json
// taskdef.json
{
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "myapp",
      "image": "<IMAGE1_NAME>",
      "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
      "essential": true
    }
  ],
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "family": "myapp"
}
```

CodePipeline replaces the <TASK_DEFINITION> and <IMAGE1_NAME> placeholders with the actual values during deployment. The blue/green strategy routes traffic to the new task set via a target group swap on the ALB, with configurable rollback triggers.
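Conceptually, the placeholder substitution is a plain string replacement on the template before the new task definition is registered. A hedged Python sketch (the template here is abbreviated to one container field):

```python
import json

# Abbreviated taskdef template with the <IMAGE1_NAME> placeholder, as in
# taskdef.json; the real file carries the full task definition.
taskdef_template = '{"containerDefinitions": [{"name": "myapp", "image": "<IMAGE1_NAME>"}]}'
image_uri = "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:3f9a1c2"

# Substitute the placeholder, then parse the result as JSON.
rendered = json.loads(taskdef_template.replace("<IMAGE1_NAME>", image_uri))
print(rendered["containerDefinitions"][0]["image"])  # the resolved image URI
```

The image URI itself comes from the pipeline artifact configured on the CodeDeployToECS action, which is why the build stage must emit it.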
EKS Deployment#
CodePipeline does not have a native EKS deploy action. The standard approach uses CodeBuild as the deploy step with kubectl or helm:
```yaml
# buildspec-deploy.yml
version: 0.2
phases:
  install:
    commands:
      - curl -LO "https://dl.k8s.io/release/v1.29.0/bin/linux/amd64/kubectl"
      - chmod +x kubectl && mv kubectl /usr/local/bin/
      - curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
  pre_build:
    commands:
      - aws eks update-kubeconfig --name $EKS_CLUSTER_NAME --region $AWS_DEFAULT_REGION
  build:
    commands:
      - |
        helm upgrade --install myapp ./chart \
          --namespace myapp \
          --set image.repository=$REPOSITORY_URI \
          --set image.tag=$IMAGE_TAG \
          --wait --timeout 5m
```

The CodeBuild role needs eks:DescribeCluster permission and must be mapped in the EKS cluster's aws-auth ConfigMap or via EKS access entries:
```shell
eksctl create iamidentitymapping \
  --cluster production \
  --arn arn:aws:iam::123456789012:role/codebuild-deploy-role \
  --group system:masters \
  --username codebuild-deploy
```

Use a dedicated IAM role for EKS deploys with least-privilege Kubernetes RBAC rather than system:masters in production.
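A sketch of what least-privilege RBAC for the deploy identity might look like, in place of system:masters. The role name, namespace, and resource list are assumptions to adapt to your workload:

```yaml
# Namespace-scoped Role limited to what a Helm deploy of this app touches.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: myapp
rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "configmaps", "secrets", "pods"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
# Bind the Role to the username mapped for the CodeBuild IAM role.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployer-binding
  namespace: myapp
subjects:
  - kind: User
    name: codebuild-deploy
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io
```

With this in place, the eksctl mapping would reference the codebuild-deploy username without the system:masters group.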
Cross-Account Deployments#
Production workloads typically run in a separate AWS account. CodePipeline supports cross-account actions through IAM role assumption:
```yaml
# In the pipeline account (Dev)
DeployAction:
  ActionTypeId:
    Category: Deploy
    Owner: AWS
    Provider: CloudFormation
    Version: '1'
  Configuration:
    ActionMode: CREATE_UPDATE
    StackName: myapp-stack
    TemplatePath: BuildOutput::template.yaml
    # Role CloudFormation assumes in the target account to create resources
    RoleArn: arn:aws:iam::999888777666:role/cloudformation-deploy-role
  # Role CodePipeline assumes to execute the action in the target account
  RoleArn: arn:aws:iam::999888777666:role/codepipeline-cross-account-role
```

The setup requires:
- Pipeline account: the CodePipeline role needs `sts:AssumeRole` permission for the target account role.
- Target account: a role that trusts the pipeline account and has permissions to deploy resources.
- S3 artifact bucket: must have a bucket policy allowing the target account to read artifacts. Use a KMS key shared across accounts for artifact encryption.
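A sketch of the trust policy on the target-account role, assuming the pipeline runs in account 123456789012 as in the earlier examples:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

In practice you would scope the Principal down to the specific pipeline role ARN rather than the account root.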
```json
{
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::999888777666:role/codepipeline-cross-account-role"
  },
  "Action": ["s3:GetObject", "s3:GetObjectVersion"],
  "Resource": "arn:aws:s3:::pipeline-artifacts/*"
}
```

For ECR cross-account access, add a repository policy in the source account allowing the target account to pull images:
```json
{
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::999888777666:root"
  },
  "Action": [
    "ecr:GetDownloadUrlForLayer",
    "ecr:BatchGetImage",
    "ecr:BatchCheckLayerAvailability"
  ]
}
```

EventBridge Triggers#
CodePipeline V2 pipelines use EventBridge for source triggers instead of polling. This is faster (near-instant vs 1-minute polling) and cheaper:
```yaml
PipelineTrigger:
  Type: AWS::Events::Rule
  Properties:
    EventPattern:
      source:
        - aws.codecommit
      detail-type:
        - CodeCommit Repository State Change
      detail:
        event:
          - referenceCreated
          - referenceUpdated
        referenceType:
          - branch
        referenceName:
          - main
    Targets:
      - Arn: !Sub 'arn:aws:codepipeline:${AWS::Region}:${AWS::AccountId}:${Pipeline}'
        RoleArn: !GetAtt EventBridgeRole.Arn
        Id: CodePipelineTarget
```

For GitHub sources via CodeStar Connections, triggering is configured automatically when you create a V2 pipeline. For ECR image push triggers:
```yaml
EventPattern:
  source:
    - aws.ecr
  detail-type:
    - ECR Image Action
  detail:
    action-type:
      - PUSH
    repository-name:
      - myapp
    image-tag:
      - latest
```

This enables chained pipelines: one pipeline builds and pushes an image to ECR, and the EventBridge rule triggers a second pipeline that deploys it. This decoupling is useful when build and deploy responsibilities belong to different teams.
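The matching semantics behind these patterns are simple: every key in the pattern must be present in the event, and the event's value must equal one of the listed alternatives. A simplified Python sketch (real EventBridge also supports prefix, anything-but, and numeric matchers, which this ignores):

```python
def matches(pattern: dict, event: dict) -> bool:
    """Simplified EventBridge pattern matching: nested dicts recurse,
    lists enumerate the allowed values for a field."""
    for key, allowed in pattern.items():
        if key not in event:
            return False
        if isinstance(allowed, dict):
            if not matches(allowed, event[key]):
                return False
        elif event[key] not in allowed:
            return False
    return True

pattern = {
    "source": ["aws.ecr"],
    "detail-type": ["ECR Image Action"],
    "detail": {"action-type": ["PUSH"], "repository-name": ["myapp"], "image-tag": ["latest"]},
}
event = {
    "source": "aws.ecr",
    "detail-type": "ECR Image Action",
    "detail": {"action-type": "PUSH", "repository-name": "myapp", "image-tag": "latest"},
}
print(matches(pattern, event))  # True
```

A push to a different repository, or with a different tag, simply fails the corresponding list membership test and the rule does not fire.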
Common Mistakes#
- Forgetting `PrivilegedMode` for Docker builds. CodeBuild cannot run Docker commands without privileged mode. The error is `Cannot connect to the Docker daemon`, and it has nothing to do with Docker installation.
- Using the legacy GitHub source provider. The V1 `GitHub` provider stores OAuth tokens. CodeStar Connections use a managed GitHub App with better security and no token rotation burden.
- Not encrypting the artifact bucket with a shared KMS key. Cross-account pipelines fail silently when the target account cannot decrypt artifacts. Always use a customer-managed KMS key with cross-account grants.
- Hardcoding account IDs in buildspec.yml. Use CodeBuild environment variables (`$AWS_ACCOUNT_ID`, `$AWS_DEFAULT_REGION`) or pass them as CodePipeline variable overrides. Hardcoded IDs break when moving pipelines between accounts.
- Skipping `--wait` in Helm deploy steps. Without `--wait`, CodeBuild reports success as soon as `helm upgrade` returns, before pods are actually running. The deploy might fail silently while the pipeline shows green.