Secrets Management in Minikube: From Basic to Production Patterns#

Secrets in Kubernetes are simultaneously simple (just base64-encoded data in etcd) and complex (getting the workflow right for rotation, RBAC, and git-safe storage requires multiple tools). Setting up proper secrets management locally means you can validate the entire workflow – from creation through mounting to rotation – before touching production credentials.

Kubernetes Secret Types#

Kubernetes has several built-in secret types, each with its own structure and validation:

# Opaque: generic key-value pairs (most common)
kubectl create secret generic db-creds \
  --from-literal=username=admin \
  --from-literal=password=s3cret \
  -n app

# TLS: certificate and private key
kubectl create secret tls my-tls \
  --cert=tls.crt \
  --key=tls.key \
  -n app

# Docker registry credentials
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=user \
  --docker-password=pass \
  -n app

You can also create secrets from files:

# From a file (key = filename, value = file contents)
kubectl create secret generic app-config \
  --from-file=config.json \
  --from-file=ca.pem \
  -n app

# From an env file
kubectl create secret generic app-env \
  --from-env-file=.env.production \
  -n app

Or from a YAML manifest:

apiVersion: v1
kind: Secret
metadata:
  name: db-creds
  namespace: app
type: Opaque
data:
  username: YWRtaW4=      # base64 of "admin"
  password: czNjcmV0      # base64 of "s3cret"
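
If hand-encoding base64 feels error-prone, the same secret can be written with stringData, which accepts plain values and lets the API server do the encoding on write:

apiVersion: v1
kind: Secret
metadata:
  name: db-creds
  namespace: app
type: Opaque
stringData:
  username: admin
  password: s3cret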

The Base64 Misconception#

This is the single most important thing to understand about Kubernetes secrets: base64 encoding is not encryption. Anyone with kubectl get secret -o yaml access can decode every secret in the namespace instantly:

kubectl get secret db-creds -n app -o jsonpath='{.data.password}' | base64 -d
# s3cret

By default, secrets are stored unencrypted in etcd. Base64 is purely a serialization format, not a security measure.
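
To make the point concrete, a single go-template dumps every key of every secret in the namespace, already decoded:

kubectl get secrets -n app -o go-template='{{range .items}}{{.metadata.name}}{{"\n"}}{{range $k, $v := .data}}  {{$k}}: {{$v | base64decode}}{{"\n"}}{{end}}{{end}}'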

Mounting Secrets: Volumes vs Environment Variables#

Environment Variables#

spec:
  containers:
  - name: app
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-creds
          key: password

Tradeoff: Simple to use. But environment variables are resolved at pod startup and never update: if you rotate the secret, running pods keep the old value until they restart. The variable definitions also show up in kubectl describe pod output, and the values can leak into crash dumps and logging frameworks that dump the process environment.

Volume Mounts#

spec:
  containers:
  - name: app
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: db-creds

Tradeoff: Files in the mounted volume are automatically updated when the secret changes (with a delay of up to the kubelet sync period, typically 60 seconds); the one exception is secrets mounted via subPath, which never receive updates. Your application must re-read the file to pick up changes. Volume mounts are more secure because the values do not appear in describe output or process environment dumps.

Recommendation: Use volume mounts for any secret that may need rotation. Use environment variables only for static configuration that will not change during pod lifetime.
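
You can watch the volume behavior locally by patching the secret and re-reading the mounted file (deployment name and mount path taken from the examples above):

# Change the value in place; stringData is merged into data by the API server
kubectl patch secret db-creds -n app -p '{"stringData":{"password":"rotated-value"}}'

# Wait up to one kubelet sync period, then re-read the mounted file
kubectl exec deploy/my-app -n app -- cat /etc/secrets/password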

Secret Rotation Patterns#

Manual Rotation#

# Update the secret
kubectl create secret generic db-creds \
  --from-literal=username=admin \
  --from-literal=password=new-password \
  -n app \
  --dry-run=client -o yaml | kubectl apply -f -

# If using volume mounts: wait for kubelet sync (up to 60s)
# If using env vars: restart the pods
kubectl rollout restart deployment/my-app -n app

Automated Rotation with Annotations#

Use a hash of the secret in a pod annotation to trigger automatic restarts when the secret changes. This is a common Helm pattern:

# In your Deployment template
spec:
  template:
    metadata:
      annotations:
        checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}

When the secret content changes, the annotation changes, which triggers a rolling update.
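
Outside of Helm, a rough equivalent is to stamp the pod template with a hash of the secret's current data yourself (deployment name assumed):

# Hash the secret's data and write it into the pod template annotations
# (use 'shasum -a 256' instead of sha256sum on macOS)
HASH=$(kubectl get secret db-creds -n app -o jsonpath='{.data}' | sha256sum | cut -d' ' -f1)
kubectl patch deployment my-app -n app -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"checksum/secret\":\"$HASH\"}}}}}"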

RBAC for Secrets#

Restrict which ServiceAccounts can read which secrets through the API. Keep in mind that RBAC only governs API access: a pod spec can mount any secret in its own namespace regardless of RBAC, so controlling who can create pods matters just as much. For API access, scope the grant as tightly as possible:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: app
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["db-creds", "api-keys"]  # only these specific secrets
  verbs: ["get"]
  # No list -- cannot enumerate secrets, only read known ones
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-secret-access
  namespace: app
subjects:
- kind: ServiceAccount
  name: my-app
  namespace: app
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io

The resourceNames field is critical. Without it, the ServiceAccount can read every secret in the namespace. With it, access is limited to the named secrets.
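
kubectl auth can-i makes it easy to verify the result from the ServiceAccount's point of view:

# Allowed: reading a named secret
kubectl auth can-i get secret/db-creds -n app --as=system:serviceaccount:app:my-app

# Denied: enumerating secrets in the namespace
kubectl auth can-i list secrets -n app --as=system:serviceaccount:app:my-app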

Sealed Secrets: Git-Safe Secret Storage#

Sealed Secrets encrypts your secrets so they can be safely committed to git. The controller in your cluster is the only thing that can decrypt them.

Install the Sealed Secrets controller in minikube:

helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm install sealed-secrets sealed-secrets/sealed-secrets -n kube-system

Install the kubeseal CLI:

brew install kubeseal

Encrypt a secret:

# Create a regular secret manifest (do NOT apply it)
kubectl create secret generic db-creds \
  --from-literal=password=s3cret \
  -n app \
  --dry-run=client -o yaml > secret.yaml

# Seal it (encrypt with the cluster's public key)
kubeseal --format yaml < secret.yaml > sealed-secret.yaml

# The sealed-secret.yaml is safe to commit to git
# Apply the sealed secret -- the controller decrypts it into a regular Secret
kubectl apply -f sealed-secret.yaml

The sealed secret manifest contains encrypted data that only the Sealed Secrets controller can decrypt. You can commit sealed-secret.yaml to your repository while keeping secret.yaml in .gitignore.
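
One wrinkle: kubeseal locates the controller by service name, and by default it looks for sealed-secrets-controller in kube-system, while the Helm release above creates a service named after the release. If sealing fails with a service-not-found error, point kubeseal at the service explicitly, or fetch the public certificate once and seal offline:

# Point kubeseal at the controller installed above
kubeseal --controller-name sealed-secrets --controller-namespace kube-system \
  --format yaml < secret.yaml > sealed-secret.yaml

# Or fetch the sealing certificate and encrypt without talking to the cluster
kubeseal --controller-name sealed-secrets --controller-namespace kube-system --fetch-cert > pub-cert.pem
kubeseal --cert pub-cert.pem --format yaml < secret.yaml > sealed-secret.yaml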

External Secrets Operator#

The External Secrets Operator syncs secrets from external stores (AWS Secrets Manager, HashiCorp Vault, GCP Secret Manager) into Kubernetes secrets. You can test the workflow locally using a fake secret store.

Install the operator:

helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets -n external-secrets --create-namespace

For local testing, use the Kubernetes provider (which reads secrets from one namespace and copies them into another) or the Fake provider:

# fake-secret-store.yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: fake-store
  namespace: app
spec:
  provider:
    fake:
      data:
      - key: "/db/password"
        value: "local-dev-password"
      - key: "/api/key"
        value: "local-dev-api-key"
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-creds
  namespace: app
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: fake-store
    kind: SecretStore
  target:
    name: db-creds
  data:
  - secretKey: password
    remoteRef:
      key: "/db/password"

kubectl apply -f fake-secret-store.yaml
# The operator creates a regular Kubernetes Secret named "db-creds" in the app namespace
kubectl get secret db-creds -n app

This validates that your ExternalSecret manifests are correctly structured before pointing them at a real secret store in production.
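
The ExternalSecret resource also reports sync status, which helps when debugging the wiring:

# STATUS should show SecretSynced once the operator has written the target Secret
kubectl get externalsecret db-creds -n app
kubectl describe externalsecret db-creds -n app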

Secrets in Helm Charts#

Referencing Secrets from Values#

# values.yaml
database:
  existingSecret: db-creds
  secretKeys:
    password: password

# templates/deployment.yaml
env:
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ .Values.database.existingSecret }}
      key: {{ .Values.database.secretKeys.password }}
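
At install time you point the chart at whatever secret exists in that environment, so the chart itself never carries credentials (release name and chart path here are placeholders):

helm upgrade --install my-app ./chart \
  -n app \
  --set database.existingSecret=db-creds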

Using the lookup Function#

The Helm lookup function checks whether a secret already exists before creating a new one. This prevents overwriting manually created secrets during upgrades:

{{- $existingSecret := (lookup "v1" "Secret" .Release.Namespace "db-creds") -}}
{{- if not $existingSecret }}
apiVersion: v1
kind: Secret
metadata:
  name: db-creds
type: Opaque
data:
  password: {{ randAlphaNum 32 | b64enc | quote }}
{{- end }}

helm-secrets Plugin#

The helm-secrets plugin integrates SOPS (Secrets OPerationS) encryption with Helm:

helm plugin install https://github.com/jkroepke/helm-secrets

# Encrypt a values file
sops -e values-secret.yaml > values-secret.enc.yaml

# Use encrypted values during install
helm secrets upgrade --install my-app ./chart -f values-secret.enc.yaml
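
Note that sops needs an encryption key configured before sops -e will do anything. A minimal setup using an age key might look like this (the recipient string below is a placeholder for your own public key):

# Generate an age key pair; sops reads the private key from this default location
age-keygen -o ~/.config/sops/age/keys.txt

# .sops.yaml at the repository root
creation_rules:
  - path_regex: values-secret\.yaml$
    age: age1replace-with-the-public-key-printed-by-age-keygen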

etcd Encryption at Rest#

Even in minikube, you can enable encryption at rest to test the workflow. Create an encryption configuration:

# encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # generate with: head -c 32 /dev/urandom | base64
      - identity: {}

Start minikube with the extra API server configuration:

minikube start \
  --extra-config=apiserver.encryption-provider-config=/path/to/encryption-config.yaml
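
The path passed to --extra-config must exist inside the minikube node, not just on your host. One way to get it there (assuming the default minikube profile) is to drop the file under ~/.minikube/files/, which minikube copies into the node at the matching path on startup; /var/lib/minikube/certs is a convenient destination because it is already mounted into the API server pod:

# Copied to /var/lib/minikube/certs/encryption-config.yaml inside the node on the next start
mkdir -p ~/.minikube/files/var/lib/minikube/certs
cp encryption-config.yaml ~/.minikube/files/var/lib/minikube/certs/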

After enabling it, verify by reading secrets directly from etcd – they should be encrypted. This confirms the encryption configuration is correct before you apply the same setup to production clusters.
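
A sketch of that check, assuming minikube's default etcd pod name and certificate layout:

# Read the raw record for a secret straight out of etcd
kubectl -n kube-system exec etcd-minikube -- etcdctl \
  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
  --cert=/var/lib/minikube/certs/etcd/server.crt \
  --key=/var/lib/minikube/certs/etcd/server.key \
  get /registry/secrets/app/db-creds

# Encrypted records start with k8s:enc:aescbc:v1:key1; unencrypted ones contain readable JSON.
# Existing secrets are only re-encrypted when they are rewritten:
kubectl get secrets -A -o json | kubectl replace -f -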

Practical Local Workflow#

A complete local secrets workflow that mirrors production structure without real credentials:

# 1. Create namespaces
kubectl create namespace app

# 2. Create development secrets matching production structure
kubectl create secret generic db-creds \
  --from-literal=host=postgresql.infra.svc.cluster.local \
  --from-literal=port=5432 \
  --from-literal=username=appuser \
  --from-literal=password=local-dev-only \
  --from-literal=database=myapp \
  -n app

kubectl create secret generic api-keys \
  --from-literal=stripe-key=sk_test_fake \
  --from-literal=sendgrid-key=SG.fake \
  -n app

# 3. Verify your deployment manifests reference the correct secret names and keys
kubectl apply -f deployments/my-app.yaml -n app

# 4. Confirm the secrets are mounted correctly
kubectl exec deployment/my-app -n app -- cat /etc/secrets/password
kubectl exec deployment/my-app -n app -- printenv DB_PASSWORD

The key principle: your local secrets should have the same names, the same keys, and the same structure as production. Only the values differ.