Terraform Secrets and Sensitive Data#

Every Terraform configuration eventually needs a password, API key, or certificate. How you handle that secret determines whether it ends up in your state file (readable by anyone with state access), in plan output (visible in CI logs), in version control (permanent history), or properly managed through a secrets provider.

This article covers the patterns for handling secrets at every stage of the Terraform lifecycle — from variable declaration through state storage.

The Sensitivity Problem#

Terraform has multiple places where secrets can leak:

| Location | Risk | Example |
| --- | --- | --- |
| .tf files in Git | Permanent history | password = "hunter2" hardcoded |
| .tfvars files in Git | Same as above | db_password = "hunter2" in a committed file |
| State file | Anyone with state access sees plaintext | RDS password stored in terraform.tfstate |
| Plan output | Visible in CI logs, PR comments | password = "hunter2" -> "newpass" in the plan diff |
| Terminal output | Scrollback, screen sharing | terraform output db_password prints the value |
| Provider logs | Debug logging captures API calls | TF_LOG=DEBUG shows auth headers |

Critical understanding: Even with sensitive = true on a variable, Terraform still stores the value in plaintext in the state file. The sensitive flag only controls display — it does not encrypt anything.

Sensitive Variables#

Declaring Sensitive Variables#

variable "db_password" {
  type      = string
  sensitive = true  # suppresses display in plan/apply output
}

variable "api_key" {
  type      = string
  sensitive = true
}

# Outputs that reference sensitive values must also be marked sensitive
output "db_connection_string" {
  value     = "postgresql://admin:${var.db_password}@${aws_db_instance.main.endpoint}/mydb"
  sensitive = true
}

What sensitive = true does:

  • Plan output shows (sensitive value) instead of the actual value
  • terraform output shows <sensitive> unless you use -json or -raw (see the example below)
  • Prevents accidental display in terraform console
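
You can see the display behavior from a shell, using the db_connection_string output declared above:

terraform output db_connection_string        # prints: db_connection_string = <sensitive>
terraform output -raw db_connection_string   # prints the actual connection string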

What sensitive = true does NOT do:

  • Does not encrypt the value in state
  • Does not prevent the value from appearing in provider logs (TF_LOG=DEBUG)
  • Does not prevent the value from being used in resource attributes that are not marked sensitive by the provider

Providing Sensitive Values#

Never commit secrets to version control. Use one of these injection methods:

# Method 1: Environment variables (most common in CI/CD)
# Terraform auto-reads TF_VAR_<name> environment variables
export TF_VAR_db_password="$(vault kv get -field=password secret/database)"
terraform apply

# Method 2: .tfvars file NOT in version control
# Add *.auto.tfvars and secrets.tfvars to .gitignore
terraform apply -var-file="secrets.tfvars"

# Method 3: stdin (interactive, not for CI/CD)
terraform apply  # Terraform prompts for unset required variables

Agent rule: When writing Terraform that needs secrets, always declare variables with sensitive = true. Never hardcode values. Provide injection instructions in comments or a README.

The .gitignore Pattern#

# Terraform secrets - never commit
*.tfvars
# ...but DO commit an example file with placeholder values
!example.tfvars
.terraform/
terraform.tfstate
terraform.tfstate.backup

Create an example.tfvars with placeholder values so users know what variables to provide:

# example.tfvars — copy to terraform.tfvars and fill in real values
db_password     = "CHANGE_ME"
api_key         = "CHANGE_ME"
tls_private_key = "CHANGE_ME"

Vault Provider for Dynamic Secrets#

The most secure pattern: don’t store secrets in Terraform at all. Use HashiCorp Vault to generate short-lived credentials on demand.

Reading Secrets from Vault#

provider "vault" {
  address = "https://vault.internal.example.com"
  # Auth method depends on environment:
  # - CI/CD: AppRole or JWT (OIDC)
  # - Local: token from `vault login`
}

# Read a static secret
data "vault_kv_secret_v2" "database" {
  mount = "secret"
  name  = "production/database"
}

resource "aws_db_instance" "main" {
  engine               = "postgres"
  instance_class       = "db.t3.medium"
  username             = data.vault_kv_secret_v2.database.data["username"]
  password             = data.vault_kv_secret_v2.database.data["password"]
  skip_final_snapshot  = false
}
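
The provider's auth method is only sketched as a comment above. For CI/CD, a minimal AppRole login might look like this (the variable names are illustrative; the role_id/secret_id values are themselves injected via TF_VAR_* environment variables, never committed):

provider "vault" {
  address = "https://vault.internal.example.com"

  # Generic login block: authenticate with AppRole before reading any secrets
  auth_login {
    path = "auth/approle/login"

    parameters = {
      role_id   = var.vault_role_id
      secret_id = var.vault_secret_id
    }
  }
}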

Dynamic Database Credentials#

Vault generates a new username/password pair every time Terraform runs. The credentials are short-lived and automatically revoked:

# Vault generates temporary database credentials
data "vault_database_credentials" "app" {
  backend = "database"
  role    = "app-readonly"
}

# Use the temporary credentials
resource "kubernetes_secret" "db_creds" {
  metadata {
    name      = "db-credentials"
    namespace = "production"
  }

  data = {
    username = data.vault_generic_secret.app.data["username"]
    password = data.vault_generic_secret.app.data["password"]
  }
}

Gotcha: Dynamic credentials change every terraform apply. Resources that reference them will show changes in every plan. Use ignore_changes on the consuming resource if the credentials are injected at runtime rather than baked in.
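
A sketch of that escape hatch on the secret above (only appropriate when rotation is handled outside Terraform):

resource "kubernetes_secret" "db_creds" {
  metadata {
    name      = "db-credentials"
    namespace = "production"
  }

  data = {
    username = data.vault_generic_secret.app.data["username"]
    password = data.vault_generic_secret.app.data["password"]
  }

  lifecycle {
    # Don't rewrite the secret on every apply just because Vault minted new credentials
    ignore_changes = [data]
  }
}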

AWS Dynamic Credentials#

# Vault generates temporary AWS STS credentials
data "vault_aws_access_credentials" "deploy" {
  backend = "aws"
  role    = "deploy-role"
  type    = "sts"
}

provider "aws" {
  access_key = data.vault_aws_access_credentials.deploy.access_key
  secret_key = data.vault_aws_access_credentials.deploy.secret_key
  token      = data.vault_aws_access_credentials.deploy.security_token
  region     = "us-east-1"
}

SOPS for Encrypted Files#

Mozilla SOPS encrypts values in YAML/JSON files while leaving keys readable. The encrypted file can be committed to Git.

Setup#

# Create a SOPS config that uses AWS KMS
cat > .sops.yaml <<EOF
creation_rules:
  - path_regex: secrets\.json$   # the rule must match the plaintext file you pass to sops --encrypt
    kms: arn:aws:kms:us-east-1:123456789:key/abc-123
EOF

# Encrypt a secrets file
sops --encrypt secrets.json > secrets.enc.json
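
The resulting secrets.enc.json keeps its keys readable while the values are ciphertext, so diffs stay reviewable. Abbreviated, it looks roughly like this:

{
  "db_username": "ENC[AES256_GCM,data:...,iv:...,tag:...,type:str]",
  "db_password": "ENC[AES256_GCM,data:...,iv:...,tag:...,type:str]",
  "sops": {
    "kms": [ ...key ARN and encrypted data key... ],
    "lastmodified": "...",
    "version": "..."
  }
}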

Reading SOPS Secrets in Terraform#

# Using the SOPS provider
terraform {
  required_providers {
    sops = {
      source  = "carlpett/sops"
      version = "~> 1.0"
    }
  }
}

data "sops_file" "secrets" {
  source_file = "secrets.enc.json"
}

resource "aws_db_instance" "main" {
  engine               = "postgres"
  instance_class       = "db.t3.medium"
  username             = data.sops_file.secrets.data["db_username"]
  password             = data.sops_file.secrets.data["db_password"]
  skip_final_snapshot  = false
}

When to use SOPS: When you want secrets version-controlled alongside infrastructure code but encrypted at rest. Good for small teams without Vault.

When NOT to use SOPS: When secrets change frequently (every change requires re-encryption and commit), when you need audit trails (Vault has better logging), or when you need dynamic/short-lived credentials.

State File Security#

The state file contains every secret value in plaintext. Securing it is non-negotiable.

Remote Backend Encryption#

# AWS S3 backend with encryption and access control
terraform {
  backend "s3" {
    bucket         = "myorg-terraform-state"
    key            = "production/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true                    # server-side encryption at rest
    kms_key_id     = "arn:aws:kms:us-east-1:123456789:key/abc-123"  # with a KMS key, state uses SSE-KMS instead of SSE-S3
    dynamodb_table = "terraform-locks"
  }
}

# Azure Blob backend (encrypted by default)
terraform {
  backend "azurerm" {
    resource_group_name  = "terraform-state-rg"
    storage_account_name = "myorgterraformstate"
    container_name       = "tfstate"
    key                  = "production.terraform.tfstate"
    # Azure Storage is encrypted at rest by default (SSE)
    # Enable customer-managed keys for stricter control
  }
}

# GCS backend (encrypted by default)
terraform {
  backend "gcs" {
    bucket  = "myorg-terraform-state"
    prefix  = "production"
    # GCS is encrypted at rest by default
    # Use CMEK for customer-managed encryption keys
  }
}

State Access Control#

The state bucket should have the strictest access controls in your infrastructure:

# S3 bucket policy: only the Terraform CI/CD role can access state
resource "aws_s3_bucket_policy" "state" {
  bucket = aws_s3_bucket.terraform_state.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "DenyUnauthorizedAccess"
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:*"
        Resource = [
          aws_s3_bucket.terraform_state.arn,
          "${aws_s3_bucket.terraform_state.arn}/*",
        ]
        Condition = {
          StringNotEquals = {
            "aws:PrincipalArn" = [
              "arn:aws:iam::123456789:role/terraform-ci",
              "arn:aws:iam::123456789:role/terraform-admin",
            ]
          }
        }
      }
    ]
  })
}

What State Exposes#

Even with encryption at rest, anyone who can terraform state pull sees:

{
  "type": "aws_db_instance",
  "attributes": {
    "password": "my-actual-password-in-plaintext",
    "username": "admin",
    "endpoint": "mydb.abc123.us-east-1.rds.amazonaws.com"
  }
}

Mitigation strategies:

  1. Restrict state access to CI/CD roles only (humans never read state directly)
  2. Use Vault dynamic credentials (credentials rotate, so stale state is harmless)
  3. Use terraform_remote_state with specific outputs (not full state access)
  4. Enable versioning and access logging on the state bucket so you have a change history and audit trail (versioning sketched below)
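
A minimal sketch for the versioning piece, following the bucket names used in the policy example above:

# Keep every state revision so changes can be audited and rolled back
resource "aws_s3_bucket_versioning" "state" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}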

CI/CD Secret Injection#

GitHub Actions with OIDC#

The best pattern: no stored credentials. GitHub Actions authenticates to AWS/Azure/GCP via OIDC federation.

# GitHub Actions — no secrets stored in GitHub
permissions:
  id-token: write
  contents: read

steps:
  - uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: ${{ vars.TERRAFORM_ROLE_ARN }}  # not a secret
      aws-region: us-east-1
      # No access key or secret key — OIDC token exchange

  - name: Terraform Plan
    env:
      TF_VAR_db_password: ${{ secrets.DB_PASSWORD }}
    run: terraform plan -out=tfplan
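
On the AWS side, the assumed role needs a trust policy that federates with GitHub's OIDC provider and restricts which repository may assume it. A sketch, with the account ID and repository name as placeholders:

# Only workflow runs from the named repository can assume the Terraform role
data "aws_iam_policy_document" "github_oidc" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = ["arn:aws:iam::123456789:oidc-provider/token.actions.githubusercontent.com"]
    }

    condition {
      test     = "StringEquals"
      variable = "token.actions.githubusercontent.com:aud"
      values   = ["sts.amazonaws.com"]
    }

    condition {
      test     = "StringLike"
      variable = "token.actions.githubusercontent.com:sub"
      values   = ["repo:myorg/infrastructure:*"]
    }
  }
}

resource "aws_iam_role" "terraform_ci" {
  name               = "terraform-ci"
  assume_role_policy = data.aws_iam_policy_document.github_oidc.json
}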

Vault in CI/CD#

steps:
  - name: Authenticate to Vault
    uses: hashicorp/vault-action@v3
    with:
      url: https://vault.internal.example.com
      method: jwt
      role: terraform-ci
      secrets: |
        secret/data/production/database password | DB_PASSWORD
        secret/data/production/api key | API_KEY

  - name: Terraform Plan
    env:
      TF_VAR_db_password: ${{ env.DB_PASSWORD }}
      TF_VAR_api_key: ${{ env.API_KEY }}
    run: terraform plan -out=tfplan

Secret Masking in Plan Output#

When posting Terraform plans as PR comments, secrets can leak even with sensitive = true if:

  • A resource attribute is not marked sensitive by the provider
  • The secret appears in an error message
  • Debug logging is enabled

Safeguard: Always filter plan output before posting to PRs:

# Strip potential secrets from plan output
terraform plan -no-color 2>&1 \
  | sed -E 's/(password|secret|key|token)\s*=\s*"[^"]*"/\1 = "***REDACTED***"/gi' \
  | tee plan-filtered.txt

Common Mistakes#

| Mistake | Why It Happens | Fix |
| --- | --- | --- |
| Hardcoded secret in .tf file | Quick testing, forgot to replace | Use a variable with sensitive = true; add a pre-commit hook to scan for secrets (sketched below) |
| .tfvars committed to Git | Not in .gitignore | Add *.tfvars to .gitignore; use example.tfvars for documentation |
| Secret in terraform output | Output not marked sensitive | Add sensitive = true to output blocks that reference secrets |
| TF_LOG=DEBUG in CI | Debugging a provider issue | Never use DEBUG in CI — it logs HTTP request bodies including auth headers |
| State file on local disk | Developer running locally | Always use a remote backend, even for dev |
| Shared state bucket without encryption | “We’ll add encryption later” | Configure encrypt = true from day one |
| default set on a sensitive variable | Providing a “dev” default | Never set defaults on sensitive variables — force explicit injection |
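
The pre-commit scan mentioned in the first row can be a single hook; gitleaks is one common option (the rev shown is illustrative — pin whichever release you actually use):

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.4   # illustrative pin
    hooks:
      - id: gitleaks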

Agent Workflow for Secrets#

When writing Terraform that needs secrets:

  1. Declare sensitive variables with sensitive = true and no default
  2. Reference variables in resources, never hardcoded values
  3. Mark outputs that derive from secrets as sensitive = true
  4. Document which secrets are needed and how to provide them (environment variable, .tfvars, Vault)
  5. Verify the state backend has encryption enabled
  6. Check that .gitignore excludes .tfvars and state files
  7. Recommend OIDC authentication for cloud providers in CI/CD (no stored credentials)