# Local vs Remote Backups
Local backups are about speed. Remote backups are about surviving loss of the server. A WordPress VPS needs both: one local copy for fast restores and one remote/offsite copy for real disaster recovery.
## Quick Summary
- Local backups: fast restore, low bandwidth, but same failure domain as the VPS.
- Remote backups: survive VPS loss, but restores can be slower and need bandwidth.
- Recommended baseline: keep local + offsite, verify both, and practice restores.
## Local backups
Local backups are stored on the same server or in the same provider account.
Good for:
- Quick rollback after a bad plugin/theme update.
- Fast restores when you accidentally delete files.
- Minimizing restore time during an incident.
Risks:
- If the VPS is wiped, compromised, or the disk fails, local backups can disappear too.
- If backups are stored under the webroot, an attacker may download them.
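To mitigate the webroot risk, keep the backup directory outside the document root with owner-only permissions. A minimal sketch (the `/backups` path matches the examples later on this page; run as root):

```shell
# Create a backup directory outside the webroot with owner-only access,
# so the web server user cannot read, list, or serve its contents.
make_backup_dir() {
  local dir="$1"
  mkdir -p "$dir"
  chmod 700 "$dir"   # rwx for owner only
}

# Example (as root): make_backup_dir /backups
```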
## Remote/offsite backups
Remote backups are stored outside the VPS failure domain.
Good for:
- VPS loss (accidental destroy, billing issue, provider outage).
- Ransomware or compromise (if remote storage is isolated and access is scoped).
- Long-term retention.
Trade-offs:
- Restores require network bandwidth and time.
- You must manage credentials and encryption.
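For the encryption trade-off, one option is to encrypt artifacts before upload so the remote only ever stores ciphertext. A hedged sketch using `openssl` (`gpg` or `age` work equally well; the passphrase-file convention here is an assumption):

```shell
# Encrypt an artifact with a symmetric key before it leaves the server.
# Keep the passphrase file somewhere other than the remote itself.
encrypt_artifact() {
  local src="$1" passfile="$2"
  openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass "file:$passfile" -in "$src" -out "$src.enc"
}

# Round-trip during a restore drill: decrypt back to the original name.
decrypt_artifact() {
  local enc="$1" passfile="$2"
  openssl enc -d -aes-256-cbc -pbkdf2 \
    -pass "file:$passfile" -in "$enc" -out "${enc%.enc}"
}
```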
## Recommended pattern for WordPress
Treat the live site as one copy, then add:
- Local artifacts for fast restore:
  - Files: `/backups/wp-files-YYYY-MM-DD.tar.zst`
  - Database: `/backups/wp-db-YYYY-MM-DD.sql.zst`
- Offsite artifacts for survivability:
  - Remote storage (another VPS, object storage, or cloud drive) via `rclone` or `rsync`.
## A safe workflow (local then remote)
Create local artifacts first, verify them, then copy offsite.
```bash title="backup-local-then-remote.sh"
#!/usr/bin/env bash
set -euo pipefail

BACKUP_DIR="/backups"
STAMP="$(date +%F)"
FILES_ARCHIVE="$BACKUP_DIR/wp-files-$STAMP.tar.zst"
DB_DUMP="$BACKUP_DIR/wp-db-$STAMP.sql.zst"

mkdir -p "$BACKUP_DIR"

echo "creating files archive: $FILES_ARCHIVE"
tar -C /var/www/html -cf - . \
  | zstd -3 -T0 -o "$FILES_ARCHIVE"

echo "creating db dump: $DB_DUMP"
mysqldump --single-transaction --quick --routines --events --triggers \
  --databases wordpress \
  | zstd -3 -T0 -o "$DB_DUMP"

echo "verifying artifacts"
zstd -t "$FILES_ARCHIVE"
zstd -t "$DB_DUMP"
ls -lh "$FILES_ARCHIVE" "$DB_DUMP"

echo "copying offsite"
rclone copy "$BACKUP_DIR" "remote:wp-backups" \
  --include 'wp-*.tar.zst' --include 'wp-*.sql.zst'
```
:::warning
Do not run `rclone sync` until you understand what it deletes. Prefer `rclone copy` unless you intentionally want the remote to mirror local state.
:::
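Once the workflow runs cleanly by hand, schedule it. A hypothetical crontab entry (the script path and log file are assumptions; adjust to where you install the script):

```shell
# Run the backup nightly at 02:30 and append output to a log.
# Install with: crontab -e
30 2 * * * /usr/local/sbin/backup-local-then-remote.sh >> /var/log/wp-backup.log 2>&1
```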
## Verification you should actually do
Local existence is not enough. Verify:
- Integrity: `zstd -t` / `gzip -t` / `xz -t`.
- Paths: list archive contents (does it contain what you expect?).
- Restore drill: extract into a staging directory.
```bash title="verify-local-files-archive.sh"
tar --use-compress-program=zstd -tf "/backups/wp-files-$(date +%F).tar.zst" | sed -n '1,25p'
```
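Listing contents is cheap; an actual drill goes one step further and extracts into a scratch directory. A sketch (the staging path and the paths checked are assumptions; a full drill would also import the SQL dump into a throwaway database):

```shell
# Extract a files archive into a staging directory and spot-check paths
# that exist in any standard WordPress tree.
restore_drill() {
  local archive="$1" staging="$2"
  mkdir -p "$staging"
  tar --use-compress-program=zstd -xf "$archive" -C "$staging"
  [ -f "$staging/wp-config.php" ] && [ -d "$staging/wp-content" ]
}

# Example: restore_drill "/backups/wp-files-$(date +%F).tar.zst" /tmp/wp-drill
```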
Remote verification:
```bash title="verify-remote-backups.sh"
rclone lsf "remote:wp-backups" | rg -n 'wp-(files|db)-'
```
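A filename listing proves presence, not content. A hedged addition is a SHA-256 manifest created locally, copied offsite with the artifacts, and re-checked after any download (`SHA256SUMS` is an assumed filename; `rclone check` can also compare hashes directly where the backend supports them):

```shell
# Write a checksum manifest next to the artifacts so content can be
# verified after an offsite round-trip, not just filenames.
make_manifest() {
  ( cd "$1" && sha256sum wp-*.tar.zst wp-*.sql.zst > SHA256SUMS )
}

# Re-run after downloading the artifacts and manifest from the remote.
check_manifest() {
  ( cd "$1" && sha256sum -c SHA256SUMS )
}
```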
## Common mistakes
- Storing backups under `/var/www/html`.
- Keeping only local backups (no disaster recovery).
- Keeping only remote backups (slow restores, higher downtime).
- Backing up without doing restore drills.
## Next steps
- Define retention: see `opt/docker-data/apps/docusaurus/site/docs/server/linux-server/10-backup-disaster-recovery/rotation--retention-policies.mdx`.
- Choose remote targets: see `opt/docker-data/apps/docusaurus/site/docs/server/linux-server/10-backup-disaster-recovery/rclone-remote-targets.mdx`.
- Practice restores: see `opt/docker-data/apps/docusaurus/site/docs/server/linux-server/10-backup-disaster-recovery/empty-2.mdx`.