Minikube Storage: PersistentVolumes, StorageClasses, and Data Persistence#
Minikube ships with a built-in storage provisioner that handles PersistentVolumeClaims automatically. Understanding how it works – and where it differs from production storage – is essential for testing stateful workloads locally.
Default Storage: The hostPath Provisioner#
When you start minikube, it registers a default StorageClass called `standard`, backed by the `k8s.io/minikube-hostpath` provisioner. This provisioner creates PersistentVolumes as directories on the minikube node’s filesystem.
```bash
kubectl get storageclass
# NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE
# standard (default)   k8s.io/minikube-hostpath   Delete          Immediate
```

The `Immediate` binding mode means a PV is created as soon as a PVC is submitted, without waiting for a pod to use the claim. The `Delete` reclaim policy means the PV and its data are removed when the PVC is deleted.
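For contrast, a StorageClass that delays binding until a pod is scheduled would look like the sketch below. Note this is illustrative only: minikube registers just `standard` by default, the name `delayed-standard` is made up, and whether the hostpath provisioner fully honors `WaitForFirstConsumer` is an assumption worth verifying on your minikube version.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: delayed-standard               # illustrative name, not created by minikube
provisioner: k8s.io/minikube-hostpath
reclaimPolicy: Retain                  # keep the PV and its data after the PVC is deleted
volumeBindingMode: WaitForFirstConsumer  # bind only once a pod using the PVC is scheduled
```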
Under the hood, each PV maps to a directory inside the minikube node at `/tmp/hostpath-provisioner/<namespace>/<pvc-name>`. You can verify this by SSH-ing into the node:
```bash
minikube ssh
ls /tmp/hostpath-provisioner/default/
# data-postgres-0  my-app-data
```

Creating and Binding a PVC#
A basic PersistentVolumeClaim that uses the default provisioner:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 5Gi
```

Apply it and verify binding:
```bash
kubectl apply -f pvc.yaml
kubectl get pvc app-data
# NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS
# app-data   Bound    pvc-a1b2c3d4-e5f6-7890-abcd-ef1234567890   5Gi        RWO            standard
```

The PVC binds immediately because the provisioner creates the PV on demand. Mount it in a pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```

Dynamic vs Static Provisioning#
Dynamic provisioning is the default behavior. You create a PVC referencing a StorageClass, and the provisioner creates the PV automatically. This is what you will use most of the time in minikube.
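Because `standard` is marked as the default StorageClass, a PVC can also omit `storageClassName` entirely and still be provisioned dynamically – a minimal sketch, with an illustrative claim name:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cache-data        # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  # no storageClassName: the cluster's default class ("standard") is used
  resources:
    requests:
      storage: 1Gi
```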
Static provisioning means you pre-create the PV manually, then create a PVC that binds to it. This is useful when you need to pre-populate a volume with test data:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-data-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /data/test-fixtures
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 1Gi
```

The `storageClassName: manual` on both resources ensures they match each other rather than triggering the dynamic provisioner.
Storage for Databases#
Database workloads in minikube follow the same patterns as production – PVCs with ReadWriteOnce access mode, mounted at the database’s data directory. The main difference is that minikube storage is hostPath-backed, so there is no replication or redundancy.
PostgreSQL example with a StatefulSet:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          env:
            - name: POSTGRES_PASSWORD
              value: "devpassword"
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard
        resources:
          requests:
            storage: 10Gi
```

Note the `PGDATA` environment variable pointing to a subdirectory. PostgreSQL requires its data directory to be empty on init and will refuse to start if the mount point contains a lost+found directory or other artifacts. Setting `PGDATA` to a subdirectory avoids this.
For MySQL, the same pattern applies with the data directory at /var/lib/mysql. Size your PVCs generously – in minikube, the size is not enforced by the hostPath provisioner, but it is good practice to match what you would use in production.
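As a sketch, only the container and mount portions of the StatefulSet above would change for MySQL; the claim template stays the same (image tag and password value here are assumptions):

```yaml
      containers:
        - name: mysql
          image: mysql:8
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "devpassword"
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql   # MySQL's data directory
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard
        resources:
          requests:
            storage: 10Gi
```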
StatefulSet Storage: Per-Replica PVCs#
When a StatefulSet has multiple replicas, volumeClaimTemplates creates a separate PVC for each pod. This is the mechanism that gives each database replica its own storage:
```bash
kubectl get pvc
# NAME              STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS
# data-postgres-0   Bound    pvc-abc123   10Gi       RWO            standard
# data-postgres-1   Bound    pvc-def456   10Gi       RWO            standard
```

If you delete postgres-1 and it gets rescheduled, it reattaches to data-postgres-1. The data persists across pod restarts. Deleting the StatefulSet does not delete the PVCs – you must remove them explicitly if you want a clean start.
Data Persistence Across Minikube Lifecycle#
This distinction is critical:
- `minikube stop` – preserves all data. The minikube node is shut down, and all PVs, PVCs, and their underlying hostPath directories survive. When you `minikube start` again, everything comes back.
- `minikube delete` – destroys everything. The node, all PVs, all data, and all cluster state are removed. This is a full reset.
If you are working with a database and need the data to survive between sessions, use `minikube stop` when you are done for the day. Reserve `minikube delete` for when you genuinely want a fresh cluster.
Mounting Host Directories#
To share files between your host machine and the minikube cluster, use minikube mount:
```bash
minikube mount /Users/myuser/testdata:/mnt/testdata &
```

This creates a 9P mount from the host path into the minikube node. You can then reference `/mnt/testdata` in a hostPath volume:
```yaml
volumes:
  - name: host-data
    hostPath:
      path: /mnt/testdata
      type: Directory
```

The mount runs as a foreground process by default. Background it with `&` or run it in a separate terminal. The mount is lost when the process exits or when minikube stops.
This is useful for feeding test fixtures or configuration files into pods without building them into container images.
CSI Driver Addons#
Minikube includes a CSI hostpath driver addon for testing Container Storage Interface workflows:
```bash
minikube addons enable csi-hostpath-driver
```

This registers a second StorageClass (`csi-hostpath-sc`) backed by a proper CSI driver rather than the built-in provisioner. Use it when you need to test CSI-specific features like VolumeSnapshots. Note that snapshots only work for volumes provisioned by the CSI driver, so the example below assumes `data-postgres-0` was created with `storageClassName: csi-hostpath-sc`:
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: db-snapshot
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: data-postgres-0
```

The CSI driver also supports volume expansion and cloning, making it closer to what you get with cloud CSI drivers like EBS or Persistent Disk.
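A completed snapshot can be restored into a new PVC via the standard `dataSource` field – a sketch, assuming the `db-snapshot` above succeeded and using an illustrative claim name:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-restore            # illustrative name
spec:
  storageClassName: csi-hostpath-sc
  dataSource:
    name: db-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi           # must be at least the snapshotted volume's size
```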
Common Gotcha: hostPath Permissions#
The minikube hostPath provisioner creates directories owned by root. If your container runs as a non-root user (which most production images do), the process may not have write permissions to the mounted volume.
Symptoms: the pod starts but the application crashes with “Permission denied” when writing to its data directory.
Fix with an init container that sets permissions:
```yaml
initContainers:
  - name: fix-permissions
    image: busybox
    command: ["sh", "-c", "chown -R 999:999 /data"]
    volumeMounts:
      - name: data
        mountPath: /data
```

Replace `999:999` with the UID/GID your application runs as. The official postgres, mysql, and redis images all happen to run as UID 999, but check the Dockerfile of the image you are using.
Alternatively, set fsGroup in the pod security context:
```yaml
spec:
  securityContext:
    fsGroup: 999
```

This tells Kubernetes to change the group ownership of all files in mounted volumes to the specified GID, and to set the setgid bit on the volume root so new files inherit the group.
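A fuller sketch combining `fsGroup` with a non-root user, reusing the `app-data` PVC from earlier (the pod name is illustrative; UID 999 is the official postgres image's user):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgres-nonroot      # illustrative name
spec:
  securityContext:
    runAsUser: 999            # postgres UID in the official image
    runAsGroup: 999
    fsGroup: 999              # volume files get this GID; setgid set on the volume root
  containers:
    - name: postgres
      image: postgres:16
      env:
        - name: POSTGRES_PASSWORD
          value: "devpassword"
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data   # PVC created earlier in this article
```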
Practical Pattern: Pre-Populating Test Data#
For integration tests that need a database with specific data, combine static provisioning with an init container:
```bash
# Copy test fixtures into minikube
minikube cp ./test-fixtures/seed.sql /data/seed/seed.sql
```

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: seed-data
spec:
  capacity:
    storage: 100Mi
  accessModes: ["ReadOnlyMany"]
  storageClassName: seed
  hostPath:
    path: /data/seed
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: seed-data
spec:
  accessModes: ["ReadOnlyMany"]
  storageClassName: seed
  resources:
    requests:
      storage: 100Mi
```

Then mount the seed PVC as read-only alongside the database’s writable PVC, and run the seed script in an init container. This pattern gives you repeatable, pre-populated test databases without rebuilding images.
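For PostgreSQL specifically, an alternative to running the script yourself is mounting the seed PVC into `/docker-entrypoint-initdb.d` – a convention of the official postgres image, which executes any `*.sql` files found there on first initialization. A sketch of the relevant pod-template fragments from the earlier StatefulSet:

```yaml
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
            - name: seed
              mountPath: /docker-entrypoint-initdb.d   # official image runs *.sql here on first init
              readOnly: true
      volumes:
        - name: seed
          persistentVolumeClaim:
            claimName: seed-data
            readOnly: true
```

This only fires when the data directory is empty, so it pairs naturally with a fresh PVC per test run.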