Why Remote State#
Terraform stores the mapping between your configuration and real infrastructure in a state file. By default this is a local terraform.tfstate file. That breaks the moment a second person or a CI pipeline needs to run terraform apply. Remote state solves three problems: team collaboration (everyone reads the same state), CI/CD access (pipelines need state without copying files), and disaster recovery (your laptop dying should not lose your infrastructure mapping).
The S3 + DynamoDB Backend#
The standard pattern for AWS teams is an S3 bucket for state storage and a DynamoDB table for locking.
First, create the backend resources (typically done once, manually or with a separate bootstrapping config):
```hcl
resource "aws_s3_bucket" "tfstate" {
  bucket = "myorg-terraform-state"

  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_s3_bucket_versioning" "tfstate" {
  bucket = aws_s3_bucket.tfstate.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "tfstate" {
  bucket = aws_s3_bucket.tfstate.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}

resource "aws_dynamodb_table" "tflock" {
  name         = "terraform-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```

Then configure your working configuration to use it:
```hcl
terraform {
  backend "s3" {
    bucket         = "myorg-terraform-state"
    key            = "prod/networking/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-lock"
    encrypt        = true
  }
}
```

State Locking#
Without locking, two concurrent terraform apply runs can read the same state, compute independent plans, and write conflicting results. The state file ends up describing infrastructure that matches neither plan. DynamoDB locking prevents this: before any state-modifying operation, Terraform writes a lock record to the DynamoDB table. If the lock already exists, the operation fails with a clear error instead of corrupting state.
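If a lock is merely held by a colleague's long-running apply rather than stuck, you can tell Terraform to retry instead of failing immediately, using the standard `-lock-timeout` flag:

```shell
# Retry acquiring the state lock for up to 5 minutes before giving up
terraform apply -lock-timeout=5m
```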
If a lock gets stuck (process crashed mid-apply), you can force-unlock using the lock ID that Terraform prints in the lock error:

```shell
terraform force-unlock LOCK_ID
```

Use this with caution, and only after confirming no other operation is actually running.
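Before force-unlocking, it can help to inspect the lock table directly and confirm what is actually held. A sketch using the AWS CLI against the table created above:

```shell
# List any lock records currently present in the DynamoDB lock table
aws dynamodb scan --table-name terraform-lock
```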
Workspace Isolation Patterns#
Terraform workspaces let you maintain multiple state files from the same configuration. Each workspace gets its own state file under the same backend; with the S3 backend, non-default workspaces are stored under a workspace prefix (by default `env:/<workspace-name>/`) ahead of the configured key.
```shell
terraform workspace new staging
terraform workspace new production
terraform workspace select staging
```

In your config, reference the workspace name to vary behavior:
```hcl
locals {
  instance_type  = terraform.workspace == "production" ? "m5.xlarge" : "t3.medium"
  instance_count = terraform.workspace == "production" ? 3 : 1
}
```

When workspaces work well: environments that share the same resource structure but differ in sizing, counts, or naming. Same Terraform code, different variable values.
When workspaces do not work: environments that have fundamentally different resources. If production has a WAF, a CDN, and multi-AZ RDS but staging has none of those, conditional blocks everywhere make the code unreadable. Use separate root modules with a shared module library instead.
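As a sketch of the separate-root-modules approach, each environment's root module calls the shared modules with its own inputs; the paths, module name, and variables here are illustrative:

```hcl
# environments/staging/main.tf (illustrative)
module "networking" {
  source     = "../../modules/networking"
  cidr_block = "10.1.0.0/16"
  az_count   = 1 # production's root module might pass 3
}
```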
The alternative is per-environment backends — completely separate state files with separate backend configurations, typically organized as:

```text
environments/
  staging/
    main.tf       # backend "s3" { key = "staging/terraform.tfstate" }
  production/
    main.tf       # backend "s3" { key = "prod/terraform.tfstate" }
modules/
  networking/
  compute/
```

State File Security#
The state file contains every attribute of every managed resource, including secrets. Database passwords, API keys, TLS private keys — all in plaintext JSON. Treat the state file like a credentials file:
- Encrypt at rest (S3 server-side encryption, as shown above)
- Restrict bucket access with IAM policies — not everyone who runs `terraform plan` needs direct S3 access
- Never commit `.tfstate` or `.tfstate.backup` to git; add both to `.gitignore`
- Enable bucket versioning so you can recover from state corruption
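A least-privilege policy for a CI role might look like the following sketch, reusing the bucket, state key, and lock table names from the examples above. The account ID is a placeholder, and the exact action list varies (for example, `s3:DeleteObject` is additionally needed if you delete workspaces):

```hcl
data "aws_iam_policy_document" "tfstate_access" {
  # Listing the bucket is required for backend initialization
  statement {
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::myorg-terraform-state"]
  }

  # Read/write access only to this configuration's state object
  statement {
    actions   = ["s3:GetObject", "s3:PutObject"]
    resources = ["arn:aws:s3:::myorg-terraform-state/prod/networking/terraform.tfstate"]
  }

  # Acquire and release locks in the DynamoDB table
  statement {
    actions   = ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem"]
    resources = ["arn:aws:dynamodb:us-east-1:123456789012:table/terraform-lock"] # placeholder account ID
  }
}
```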
Debugging and Manipulating State#
List everything Terraform tracks:
```shell
terraform state list
```

Show details for a specific resource:

```shell
terraform state show aws_instance.web
```

Move a resource (after refactoring module structure):

```shell
terraform state mv aws_instance.old module.compute.aws_instance.new
```

Import an existing resource that was created outside Terraform:

```shell
terraform import aws_instance.web i-0abc123def456
```

Common Mistakes#
- Committing `.tfstate` to git. The state contains secrets. Once pushed, consider those secrets compromised.
- No versioning on the S3 bucket. A corrupted state with no previous version means manual reconstruction of your entire resource mapping.
- Sharing a single state file across unrelated projects. A bad apply in one area can block all other teams waiting on the lock.
- Forgetting `encrypt = true` in the backend block. The bucket's default encryption config covers objects at rest, but the backend flag makes Terraform explicitly request server-side encryption when it writes state, so a misconfigured bucket default does not leave state objects unencrypted.