MySQL Backup and Recovery#

A backup that has never been restored is not a backup. This guide covers the tools, when to use each, and how to verify your backups work.

Logical vs Physical Backups#

Logical backups export SQL statements. Portable across versions but slow for large databases. Physical backups copy raw InnoDB data files. Fast but tied to the same MySQL version. Physical backups are essential once your database exceeds a few hundred gigabytes.

mysqldump#

The oldest and most widely used tool. It produces a single SQL stream.

# Full database backup
mysqldump -u root -p --single-transaction --routines --triggers \
  --set-gtid-purged=OFF appdb > appdb_$(date +%Y%m%d).sql

# All databases
mysqldump -u root -p --single-transaction --all-databases \
  --routines --triggers > full_$(date +%Y%m%d).sql

# Specific tables
mysqldump -u root -p --single-transaction appdb users orders > tables.sql

# Compressed backup
mysqldump -u root -p --single-transaction appdb | gzip > appdb.sql.gz

Key flags:

  • --single-transaction dumps everything inside a single REPEATABLE READ transaction, giving a consistent snapshot of InnoDB tables without locking them. Without it, mysqldump falls back to table locks, which block writes for the duration of the dump. Do not combine --lock-tables with InnoDB.
  • --routines --triggers includes stored procedures, functions, and triggers. These are excluded by default.
  • --set-gtid-purged=OFF avoids GTID-related errors when restoring to a non-replication target.

Restore:

mysql -u root -p appdb < appdb_20260222.sql
# or compressed
gunzip < appdb.sql.gz | mysql -u root -p appdb

mysqldump is single-threaded. For databases over 50 GB, it becomes painfully slow.

mysqlpump#

Deprecated; avoid it for new setups. mysqlpump added parallelism to logical dumps, but it cannot guarantee a consistent snapshot across tables when using parallel threads. MySQL deprecated it in 8.0.34 and removed it in 8.4 in favor of MySQL Shell's dump utilities.

MySQL Shell Dump Utilities#

MySQL Shell (mysqlsh) provides the best logical backup tooling for MySQL 8.x. It is multi-threaded, produces consistent snapshots, and supports chunked output for parallel restore.

# Dump a full instance
mysqlsh root@localhost -- util dump-instance /backup/full \
  --threads=4 --compression=zstd

# Dump a specific schema
mysqlsh root@localhost -- util dump-schemas \
  --schemas=appdb --outputUrl=/backup/appdb \
  --threads=4 --compression=zstd

# Dump specific tables
mysqlsh root@localhost -- util dump-tables appdb users,orders \
  --outputUrl=/backup/tables --threads=4

Restore with util.load-dump:

mysqlsh root@localhost -- util load-dump /backup/appdb \
  --threads=4 --resetProgress

MySQL Shell dumps are significantly faster than mysqldump for both backup and restore due to multi-threading and chunked loading. Use this as your default logical backup tool.

Percona XtraBackup (Physical Backups)#

Percona XtraBackup creates physical (file-level) backups of InnoDB without locking the server. It copies data files while tracking InnoDB’s redo log to ensure consistency.

# Full backup
xtrabackup --backup --user=root --password=secret \
  --target-dir=/backup/full/$(date +%Y%m%d)

# Prepare the backup (apply redo logs to make it consistent)
xtrabackup --prepare --target-dir=/backup/full/20260222

# Restore: stop MySQL, move files, fix ownership
sudo systemctl stop mysqld
sudo mv /var/lib/mysql /var/lib/mysql.old
sudo xtrabackup --move-back --target-dir=/backup/full/20260222
sudo chown -R mysql:mysql /var/lib/mysql
sudo systemctl start mysqld

For incremental backups, use --incremental-basedir to capture only changes since the last full or incremental. Prepare by applying the base, then each incremental with --apply-log-only, then a final prepare without it. XtraBackup scales to multi-terabyte databases.
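The full-plus-incrementals cycle described above looks like this end to end. Directory names are illustrative; the key detail is that every prepare step except the last uses --apply-log-only, so InnoDB rollback happens exactly once at the end:

```shell
# Day 0: full backup (the base)
xtrabackup --backup --user=root --password=secret \
  --target-dir=/backup/base

# Day 1: incremental capturing only pages changed since the base
xtrabackup --backup --user=root --password=secret \
  --target-dir=/backup/inc1 --incremental-basedir=/backup/base

# Day 2: incremental against the previous incremental
xtrabackup --backup --user=root --password=secret \
  --target-dir=/backup/inc2 --incremental-basedir=/backup/inc1

# Prepare: base and each intermediate incremental with --apply-log-only,
# then the final incremental without it
xtrabackup --prepare --apply-log-only --target-dir=/backup/base
xtrabackup --prepare --apply-log-only --target-dir=/backup/base \
  --incremental-dir=/backup/inc1
xtrabackup --prepare --target-dir=/backup/base \
  --incremental-dir=/backup/inc2
```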

Point-in-Time Recovery with Binary Logs#

Binary logs record every write operation. Combined with a full backup, they let you recover to any specific moment.

Prerequisites: log_bin must be enabled (it is by default in MySQL 8.x).

# List available binary logs (mysqlbinlog has no listing mode; ask the server)
mysql -u root -p -e "SHOW BINARY LOGS;"

# Find the binary log coordinates recorded in your backup
# mysqldump writes them when run with --source-data (formerly --master-data)
head -30 appdb_20260222.sql | grep -E "CHANGE (MASTER|REPLICATION SOURCE)"

# Replay binary logs from backup position to a target time
mysqlbinlog --start-position=154 --stop-datetime="2026-02-22 14:30:00" \
  mysql-bin.000042 mysql-bin.000043 | mysql -u root -p

# Or stop at a position (useful for skipping a bad statement)
mysqlbinlog --start-position=154 --stop-position=89234 \
  mysql-bin.000042 | mysql -u root -p

The workflow for PITR:

  1. Restore the last full backup.
  2. Identify the binary log file and position at the time of the backup.
  3. Replay binary logs from that position to just before the problem occurred.
  4. If needed, skip the bad statement and continue replaying.
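As a concrete sketch of steps 3 and 4, suppose an accidental DROP TABLE occupies positions 89234 through 89410 in mysql-bin.000042 (all file names and positions here are illustrative):

```shell
# Inspect events around the incident to locate the bad statement's
# start and end positions (decode row events into readable pseudo-SQL)
mysqlbinlog --base64-output=decode-rows -vv mysql-bin.000042 | less

# Replay everything from the backup position up to the bad statement...
mysqlbinlog --start-position=154 --stop-position=89234 \
  mysql-bin.000042 | mysql -u root -p

# ...then skip it and resume from the event that follows it
mysqlbinlog --start-position=89410 \
  mysql-bin.000042 mysql-bin.000043 | mysql -u root -p
```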

Backup Verification#

An untested backup is not a backup. Verify by restoring to a test instance:

mysql -h test-host -u root -p test_appdb < appdb_20260222.sql
mysql -h test-host -e "SELECT COUNT(*) FROM test_appdb.users;"
# For physical backups, xtrabackup --prepare will fail if the backup is corrupt
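Beyond row counts, a per-table checksum comparison between source and restored copy catches silent divergence. A minimal sketch, assuming the same credentials work on both hosts (host and schema names are illustrative; note that CHECKSUM TABLE values are only comparable between identical MySQL versions and row formats):

```shell
# Compare a table checksum between production and the restored copy.
# -N suppresses column headers; the checksum is the second field.
prod_ck=$(mysql -h prod-host -u root -p"$MYSQL_PW" -N \
  -e "CHECKSUM TABLE appdb.users;" | awk '{print $2}')
test_ck=$(mysql -h test-host -u root -p"$MYSQL_PW" -N \
  -e "CHECKSUM TABLE test_appdb.users;" | awk '{print $2}')

if [ "$prod_ck" = "$test_ck" ]; then
  echo "checksums match"
else
  echo "CHECKSUM MISMATCH: $prod_ck vs $test_ck" >&2
  exit 1
fi
```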

Automated Backup Pattern#

#!/bin/bash
# /usr/local/bin/mysql-backup.sh
set -euo pipefail

BACKUP_DIR="/backup/mysql"
DATE=$(date +%Y%m%d_%H%M%S)
RETAIN_DAYS=14

mysqlsh root@localhost -- util dump-schemas \
  --schemas=appdb \
  --outputUrl="${BACKUP_DIR}/${DATE}" \
  --threads=4 --compression=zstd

# Verify non-empty output
[ "$(ls -A "${BACKUP_DIR}/${DATE}")" ] || { echo "BACKUP FAILED" >&2; exit 1; }

# Purge old backups and binary logs
find "${BACKUP_DIR}" -maxdepth 1 -type d -mtime +${RETAIN_DAYS} -exec rm -rf {} +
mysql -u root -e "PURGE BINARY LOGS BEFORE NOW() - INTERVAL ${RETAIN_DAYS} DAY;"

Run via cron (0 2 * * *) or a Kubernetes CronJob. For production, combine weekly XtraBackup full + daily incrementals with continuous binary log archival to object storage (S3, GCS).
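For the binary log archival piece, mysqlbinlog itself can act as a live streaming client: with --read-from-remote-server and --stop-never it connects like a replica and writes raw binlog files continuously. A minimal sketch (host, user, paths, and starting log file are illustrative; run it under a supervisor such as systemd so it reconnects after network drops):

```shell
# Continuously stream raw binary logs from the server into an archive
# directory; mysqlbinlog stays connected and follows new logs as they
# are created. Host, user, and starting file are illustrative.
mkdir -p /backup/binlogs
cd /backup/binlogs
mysqlbinlog --read-from-remote-server --host=db-host --user=repl \
  --password --raw --stop-never mysql-bin.000042

# Periodically ship the archive to object storage, e.g.:
# aws s3 sync /backup/binlogs s3://my-bucket/binlogs/
```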