# Terraform Import and Brownfield Adoption
Most organizations do not start with Infrastructure as Code. They start with console clicks, CLI commands, and scripts. At some point they decide to adopt Terraform — and now they have hundreds of existing resources that need to be brought under management without disruption.
This is the brownfield problem: writing Terraform code that matches existing infrastructure exactly, importing the resources into state so Terraform knows about them, and resolving the inevitable drift between what exists and what the code describes.
## Two Import Methods
### Legacy: `terraform import` Command
The original method (all Terraform versions). You write the resource block first, then import:
```bash
# Step 1: Write the resource block in a .tf file
# (You must know the resource type and all required attributes)

# Step 2: Import into state
terraform import aws_vpc.main vpc-0abc123def456

# Step 3: Run plan to see drift
terraform plan
# Shows differences between your code and the real resource
# Fix your code until the plan shows no changes
```

Limitations:
- One resource at a time (slow for large imports)
- Does not generate code — you write it manually
- If your code does not match the real resource, `terraform plan` shows changes
- No dry-run — the import modifies state immediately
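For step 1 above, the resource block you write can start minimal: it only needs the right type, name, and required arguments before the import. A sketch with illustrative values:

```hcl
# Written before running: terraform import aws_vpc.main vpc-0abc123def456
# The cidr_block must match the real VPC, or the first plan will show a diff.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}
```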
### Modern: Import Blocks (Terraform 1.5+)
Import blocks declare imports in code. Combined with `terraform plan -generate-config-out`, Terraform can generate the resource code for you:
```hcl
# import.tf — declare what to import
import {
  to = aws_vpc.main
  id = "vpc-0abc123def456"
}

import {
  to = aws_subnet.public_a
  id = "subnet-0abc123def456"
}

import {
  to = aws_subnet.public_b
  id = "subnet-0def456abc789"
}

import {
  to = aws_security_group.web
  id = "sg-0abc123def456"
}
```

```bash
# Generate resource code from real infrastructure
terraform plan -generate-config-out=generated.tf
# Review generated code, clean up, move to proper files
# Then apply to import into state
terraform apply
```

Advantages over legacy import:
- Batch imports (declare many at once)
- Code generation (Terraform writes the resource blocks)
- Dry-run (plan shows what will be imported before apply)
- Reviewable (import blocks are code in Git, can be reviewed in PR)
- Idempotent (running apply again does nothing after import succeeds)
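With import blocks, the dry run is visible directly in the plan output. An abridged example of what `terraform plan` prints for the four imports above (exact wording varies by Terraform version):

```bash
terraform plan
# aws_vpc.main: Preparing import... [id=vpc-0abc123def456]
# ...
# Plan: 4 to import, 0 to add, 0 to change, 0 to destroy.
```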
## Planning an Import Campaign
### Inventory Phase
Before writing any Terraform, inventory what exists:
```bash
# AWS — list all resources in a region
aws resourcegroupstaggingapi get-resources \
--region us-east-1 \
--output json > aws-inventory.json
# AWS — specific resource types
aws ec2 describe-vpcs --region us-east-1
aws ec2 describe-subnets --region us-east-1
aws rds describe-db-instances --region us-east-1
aws eks list-clusters --region us-east-1
# Azure — list all resources in a subscription
az resource list --output json > azure-inventory.json
# GCP — list all resources in a project
gcloud asset search-all-resources \
--project=my-project \
  --format=json > gcp-inventory.json
```
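To size the campaign before grouping, a rough count of AWS resources by service can be pulled from the inventory file. A sketch assuming `jq` is installed (the tagging API only surfaces resources that support tags):

```bash
# Count inventoried AWS resources by service (field 3 of each ARN)
jq -r '.ResourceTagMappingList[].ResourceARN | split(":")[2]' aws-inventory.json \
  | sort | uniq -c | sort -rn
```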
### Grouping Strategy

Import resources in dependency order, grouped by concern:
```text
Phase 1: Networking (no dependencies)
├── VPC / VNET / VPC Network
├── Subnets
├── Route tables
├── NAT gateways
└── Security groups / NSGs / Firewall rules
Phase 2: Identity and Access (depends on Phase 1 for some)
├── IAM roles / Managed identities / Service accounts
├── IAM policies / Role assignments / IAM bindings
└── KMS keys / Key Vault / Cloud KMS
Phase 3: Data (depends on Phases 1-2)
├── RDS / Azure SQL / Cloud SQL
├── S3 / Storage Account / GCS
└── ElastiCache / Redis / Memorystore
Phase 4: Compute (depends on all above)
├── EKS / AKS / GKE
├── EC2 / VMs / Compute Engine
└── Load balancers
```

Each phase becomes a separate Terraform root module with its own state file. This matches the state decomposition pattern from the agent-oriented Terraform approach.
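In practice, each phase directory carries its own backend configuration pointing at a distinct state key. A sketch assuming an S3 backend (bucket and key names are illustrative):

```hcl
# networking/backend.tf: Phase 1 state lives apart from later phases
terraform {
  backend "s3" {
    bucket = "my-tf-state"
    key    = "imported/networking/terraform.tfstate"
    region = "us-east-1"
  }
}
```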
## Import Block Patterns by Cloud
AWS resource IDs:
```hcl
# Most AWS resources import by a resource-specific ID; some use names instead.
import {
  to = aws_vpc.main
  id = "vpc-0abc123"   # VPC ID
}

# Other common AWS import ID formats:
#   aws_subnet         -> subnet-0abc123 (subnet ID)
#   aws_security_group -> sg-0abc123 (security group ID)
#   aws_iam_role       -> my-app-role (role name, not ARN)
#   aws_s3_bucket      -> my-bucket-name (bucket name)
#   aws_db_instance    -> my-rds-instance (DB identifier)
#   aws_eks_cluster    -> my-cluster-name (cluster name)
```

Azure resource IDs:
```hcl
# Azure uses full resource IDs (long paths)
import {
  to = azurerm_resource_group.main
  id = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg"
}

import {
  to = azurerm_virtual_network.main
  id = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/my-vnet"
}

import {
  to = azurerm_kubernetes_cluster.main
  id = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.ContainerService/managedClusters/my-aks"
}
```

GCP resource IDs:
```hcl
# GCP uses project/region/name or project/name formats
import {
  to = google_compute_network.main
  id = "projects/my-project/global/networks/my-vpc"
}

import {
  to = google_compute_subnetwork.main
  id = "projects/my-project/regions/us-central1/subnetworks/my-subnet"
}

import {
  to = google_container_cluster.main
  id = "projects/my-project/locations/us-central1/clusters/my-gke"
}

import {
  to = google_sql_database_instance.main
  id = "projects/my-project/instances/my-cloudsql"
}
```

Gotcha: Finding the correct import ID format is the most common stumbling point. Check the Terraform provider documentation for each resource type — the "Import" section at the bottom of each resource page shows the expected format.
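The cloud CLIs can usually print the exact ID a resource expects, which avoids hand-assembling long paths. Two examples with illustrative resource names:

```bash
# AWS: look up a security group ID by its group name
aws ec2 describe-security-groups \
  --filters Name=group-name,Values=web \
  --query 'SecurityGroups[0].GroupId' --output text

# Azure: print the full resource ID an import block needs
az network vnet show --resource-group my-rg --name my-vnet --query id -o tsv
```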
## Handling Drift After Import
After importing, `terraform plan` almost always shows changes. This is drift — differences between your code and the real resource.
### Types of Drift
```text
# Type 1: Missing attribute — your code doesn't specify something the resource has
  ~ tags = {
      - "CreatedBy" = "manual" -> null   # tag exists on the resource but not in code
    }

# Type 2: Different value — your code specifies a different value
  ~ instance_type = "t3.large" -> "t3.medium"   # reality is large, code says medium

# Type 3: Computed attribute — Terraform wants to set a default
  ~ enable_dns_hostnames = true -> false        # provider default differs from reality
```

### Resolution Strategy
For each drift item, decide:
| Situation | Action | When |
|---|---|---|
| Code matches desired state | Let Terraform apply the change | The real resource was manually changed and should be corrected |
| Reality is correct | Update code to match | The code was wrong — the real resource is what you want |
| Attribute is auto-managed | Add `ignore_changes` | Auto-scaling `desired_capacity`, last-modified timestamps |
| Attribute is irrelevant | Add it to code to match reality | Tags, descriptions — match reality to get a clean plan |
```hcl
# Example: after importing an ASG, desired_capacity drifts constantly
resource "aws_autoscaling_group" "main" {
  # ... imported attributes ...

  lifecycle {
    ignore_changes = [desired_capacity] # managed by auto-scaling, not Terraform
  }
}
```

### The Zero-Diff Goal
Keep iterating until `terraform plan` shows "No changes." This is the definition of "successfully imported":
```bash
# The cycle:
terraform plan    # shows drift
# Fix code to match reality (or decide to let Terraform fix reality)
terraform plan    # fewer diffs
# Repeat until:
terraform plan
# No changes. Your infrastructure matches the configuration.
```

Agent rule: After import, never apply until the plan shows exactly the changes you intend. A plan that shows 50 unexpected changes after import means the code is wrong — fix the code, do not apply.
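The zero-diff check is easy to automate. A minimal CI gate using `terraform plan -detailed-exitcode`, which exits 0 for no changes and 2 when changes are pending:

```bash
# Fail the pipeline unless the imported configuration is zero-diff
if terraform plan -detailed-exitcode -input=false > /dev/null; then
  echo "zero-diff: import is clean"
else
  echo "drift remains (or the plan errored); do not apply" >&2
  exit 1
fi
```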
## Generated Code Cleanup
`terraform plan -generate-config-out=generated.tf` produces valid but ugly code. It includes every attribute, even computed ones and defaults:
```hcl
# Generated — verbose, includes computed attributes
resource "aws_vpc" "main" {
  arn                                  = "arn:aws:ec2:us-east-1:123456789:vpc/vpc-0abc123"
  cidr_block                           = "10.0.0.0/16"
  default_network_acl_id               = "acl-0abc123"
  default_route_table_id               = "rtb-0abc123"
  default_security_group_id            = "sg-0abc123"
  dhcp_options_id                      = "dopt-0abc123"
  enable_dns_hostnames                 = true
  enable_dns_support                   = true
  enable_network_address_usage_metrics = false
  id                                   = "vpc-0abc123"
  instance_tenancy                     = "default"
  ipv6_association_id                  = null
  ipv6_cidr_block                      = null
  ipv6_cidr_block_network_border_group = null
  ipv6_ipam_pool_id                    = null
  ipv6_netmask_length                  = 0
  main_route_table_id                  = "rtb-0abc123"
  owner_id                             = "123456789"
  tags                                 = { "Name" = "production-vpc" }
  tags_all                             = { "Name" = "production-vpc" }
}
```

Clean it up:
```hcl
# Cleaned — only attributes you set intentionally
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = { Name = "production-vpc" }
}
```
}What to remove from generated code:
- Computed attributes (ARN, ID, owner_id) — Terraform manages these
- Attributes set to their default value (`instance_tenancy = "default"`)
- `tags_all` (computed from `tags` plus provider default tags)
- Null values
- Attributes you do not want Terraform to manage

What to keep:

- Attributes you explicitly set (CIDR, name, size, configuration)
- Attributes that differ from defaults
- Tags (the `tags` block, not `tags_all`)
## Large-Scale Import Workflow
For environments with hundreds of resources:
### Step 1: Generate Import Blocks Programmatically
```bash
# AWS — generate import blocks for all VPCs
aws ec2 describe-vpcs --query 'Vpcs[].VpcId' --output text \
  | tr '\t' '\n' \
  | awk '{printf "import {\n  to = aws_vpc.vpc_%s\n  id = \"%s\"\n}\n\n", NR, $1}'

# Output:
# import {
#   to = aws_vpc.vpc_1
#   id = "vpc-0abc123def456"
# }
```
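Counter-based names like vpc_1 import fine but carry no meaning. A variant that names each resource after its Name tag instead, assuming every VPC has a unique Name tag:

```bash
# Emit "name<TAB>id" per VPC, then build an import block from each pair
aws ec2 describe-vpcs \
  --query 'Vpcs[].[Tags[?Key==`Name`]|[0].Value, VpcId]' --output text \
  | awk -F'\t' '{
      gsub(/[^A-Za-z0-9_]/, "_", $1)  # sanitize the tag into a valid HCL label
      printf "import {\n  to = aws_vpc.%s\n  id = \"%s\"\n}\n\n", $1, $2
    }'
```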
### Step 2: Generate and Clean Code

```bash
# Generate config for all imports
terraform plan -generate-config-out=generated.tf

# Review, clean, reorganize into proper files
# Move networking resources to networking.tf
# Move compute resources to compute.tf
# etc.
```

### Step 3: Iterative Import
```bash
# Import in phases, validating each phase
terraform apply -target=aws_vpc.main
terraform plan    # verify VPC is clean

terraform apply -target=aws_subnet.public_a -target=aws_subnet.public_b
terraform plan    # verify subnets are clean

# Continue until all resources are imported
terraform apply   # final apply for remaining resources
terraform plan    # must show "No changes"
```

### Step 4: Remove Import Blocks
After successful import, the import blocks are no longer needed. They are idempotent (re-applying does nothing), but removing them keeps the code clean:
```bash
# After confirming successful import
rm import.tf
terraform plan    # should still show "No changes"
```
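Before closing out a layer, confirm that everything inventoried actually landed in state:

```bash
# Every imported resource should appear at its expected address
terraform state list
# Compare the count against your inventory for this layer
terraform state list | wc -l
```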
## Common Import Gotchas

| Gotcha | Symptom | Fix |
|---|---|---|
| Wrong import ID format | `Error: Cannot import` with an unhelpful message | Check the provider docs "Import" section for the correct format |
| Resource already in state | `Resource already managed by Terraform` | Use `terraform state rm` first if re-importing |
| Missing required attribute | Plan shows forced replacement after import | Add the missing attribute to match reality |
| Provider version mismatch | Import works but plan shows unexpected changes | Pin the provider version; check the changelog for attribute renames |
| Sensitive attributes | Generated code shows `(sensitive value)` placeholders | Manually set sensitive attributes (passwords, keys) |
| `for_each` vs individual resources | Want to import into `aws_subnet.main["public-a"]` | Quote the full address, including the key: `terraform import 'aws_subnet.main["public-a"]' subnet-0abc` |
| Module resources | Want to import into `module.vpc.aws_vpc.main` | Use the full address: `terraform import 'module.vpc.aws_vpc.main' vpc-0abc` |
| Cross-account resources | Import fails with access denied | Configure the correct provider alias before importing |
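For the `for_each` row above, newer Terraform versions can also declare the imports themselves with `for_each`. A sketch assuming Terraform 1.7+ and hypothetical subnet IDs:

```hcl
locals {
  # Map of for_each key to the real subnet ID (values are illustrative)
  subnet_ids = {
    "public-a" = "subnet-0abc123def456"
    "public-b" = "subnet-0def456abc789"
  }
}

import {
  for_each = local.subnet_ids
  to       = aws_subnet.main[each.key]
  id       = each.value
}
```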
## Agent Workflow for Brownfield Adoption
- Inventory: List all resources in the target environment using the cloud CLI
- Group: Organize resources by dependency layer (networking → identity → data → compute)
- Write import blocks: Create `import.tf` with import blocks for the first layer
- Generate code: Run `terraform plan -generate-config-out=generated.tf`
- Clean code: Remove computed attributes, organize into proper files
- Validate: Run `terraform plan` — resolve all drift until zero-diff
- Import: Run `terraform apply` to perform the imports
- Verify: Run `terraform plan` — must show "No changes"
- Repeat: Move to the next dependency layer
- Clean up: Remove import blocks after all layers are imported
- Document: Record what was imported, what was left unmanaged, and why
Key principle: Import is a read operation — it does not change any real infrastructure. The danger comes from the first `terraform apply` after import if your code does not match reality. Always achieve zero-diff before allowing any applies on imported state.