3-2-1 Backup Strategy for WordPress
The 3-2-1 strategy is a simple rule that prevents the most common backup failure: "everything was on the same server". For WordPress on a VPS, 3-2-1 becomes practical when you define exactly what your artifacts are, where each copy lives, and how you verify restores.
- 3 copies total: production + at least 2 backups.
- 2 storage types/failure domains: do not keep all copies on the same disk/provider.
- 1 offsite: at least one copy outside the VPS.
- Verify and practice restores or you do not have backups.
What counts as a "copy"
A "copy" is a complete, usable set of data needed to restore.
For WordPress this usually means:
- Files: WordPress root and `wp-content/`.
- Database: MySQL/MariaDB dump.
- Configuration/secrets: `wp-config.php` (and `.env` if used).
One copy can be "live production". The additional copies should be backups that you can restore from.
The numbers explained (without misconceptions)
Three copies
"3" means you have three total copies:
- Copy 1: production (live site)
- Copy 2: a backup
- Copy 3: another backup
It does not mean "three cloud providers".
Example:
Copy 1: /var/www/html + DB on VPS (production)
Copy 2: /backups on VPS (local restore copy)
Copy 3: remote:wp-backups/site-a (offsite)
Two storage types (or failure domains)
"2" means your backups are not all stored on the same underlying thing.
Good interpretations:
- VPS disk + object storage
- VPS disk + another server in a different region
- attached block volume + cloud storage
Bad interpretations:
- two folders on the same disk
- two buckets controlled by the same leaked credential
One offsite copy
"1" means at least one copy is outside the VPS.
Offsite examples:
- object storage (S3/R2/B2)
- another VPS (different provider/region)
- NAS at a different site
A WordPress-specific 3-2-1 map
This is a practical way to map 3-2-1 to WordPress artifacts.
| Layer | Artifact | Example name | Notes |
|---|---|---|---|
| Files | WordPress files | wp-files-2026-03-01.tar.zst | exclude caches, nested archives |
| DB | Database dump | wp-db-2026-03-01.sql.zst | dump frequently; SQL compresses well |
| Secrets | config snapshot | secrets-2026-03-01.tar.gpg | encrypt before offsite |
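The `secrets-*.tar.gpg` artifact in the table can be produced with `tar` piped into `gpg`. A minimal sketch, assuming a root-only passphrase file at `/root/.backup-passphrase` (that path is an assumption, not something this guide prescribes):

```shell
# Hypothetical sketch: bundle config secrets and encrypt them before upload.
# /root/.backup-passphrase is an assumed location; keep it outside /backups.
STAMP="$(date +%F)"
tar -C /var/www/html -cf - wp-config.php \
  | gpg --batch --yes --pinentry-mode loopback \
        --passphrase-file /root/.backup-passphrase \
        --symmetric --cipher-algo AES256 \
        -o "/backups/secrets-$STAMP.tar.gpg"
```

Decryption reverses the pipe (`gpg -d ... | tar -xf -`). Keep the passphrase somewhere that does not live on the VPS, as the security notes below suggest.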
Example architectures
Small business (simple and good)
VPS (production)
- /var/www/html + DB
- /backups (local artifacts)
Offsite
- object storage (remote:wp-backups/site-a)
Benefits:
- fast local restores for minor incidents
- survivability if the VPS is lost
Agency (strong isolation)
VPS-A (production)
- local artifacts
VPS-B (backup server, different region)
- receives rsync copies
Object storage
- encrypted long-term retention
Benefits:
- multiple offsite layers
- isolation if one provider/account fails
Implementation blueprint (files + DB + offsite)
This blueprint uses:
- `tar` + `zstd` for file archives
- `mysqldump` for DB dumps
- `rclone copy` for offsite upload
Adjust paths and database name for your environment.
Create local artifacts
```shell
set -euo pipefail
umask 077

BACKUP_DIR="/backups"
STAMP="$(date +%F)"
FILES="$BACKUP_DIR/wp-files-$STAMP.tar.zst"
DB="$BACKUP_DIR/wp-db-$STAMP.sql.zst"

mkdir -p "$BACKUP_DIR"

# Archive the WordPress root, skipping caches and nested backup archives.
tar -C /var/www/html \
  --exclude='wp-content/cache' \
  --exclude='wp-content/*/cache' \
  --exclude='wp-content/updraft' \
  -cf - . \
  | zstd -3 -T0 -o "$FILES"

# Consistent InnoDB dump without long table locks.
# Note: --column-statistics=0 applies to the MySQL 8 client; drop it for MariaDB.
mysqldump --single-transaction --quick --routines --events --triggers \
  --column-statistics=0 wordpress \
  | zstd -3 -T0 -o "$DB"

# Integrity-test both artifacts before trusting them.
zstd -t "$FILES"
zstd -t "$DB"
ls -lh "$FILES" "$DB"
```
Copy offsite (safe default)
```shell
rclone copy /backups remote:wp-backups/site-a \
  --include 'wp-files-*.tar.zst' \
  --include 'wp-db-*.sql.zst'
```

Use `rclone copy` unless you fully understand `rclone sync` deletions: `sync` makes the remote match the local directory, so it deletes remote backups that no longer exist locally.
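To run this blueprint unattended, schedule it. A sketch crontab entry for root, assuming the script above is saved as `/usr/local/sbin/wp-backup.sh` (an assumed path, not from this guide):

```
# Hypothetical root crontab: create artifacts at 03:15, copy offsite at 04:00.
# /usr/local/sbin/wp-backup.sh is an assumed script path.
15 3 * * * /usr/local/sbin/wp-backup.sh >> /var/log/wp-backup.log 2>&1
0  4 * * * rclone copy /backups remote:wp-backups/site-a >> /var/log/wp-backup.log 2>&1
```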
Verify offsite
```shell
rclone lsf remote:wp-backups/site-a | rg -n 'wp-(files|db)-' | sed -n '1,40p'
```
Verification and restore drills (the part most people skip)
You can satisfy "3-2-1" on paper and still fail to restore. Add a verification cadence.
Daily checks
- artifacts exist and are non-empty
- integrity tests succeed
- offsite copy exists
```shell
set -e
ls -lh /backups | sed -n '1,60p'
zstd -t /backups/wp-files-*.tar.zst
zstd -t /backups/wp-db-*.sql.zst
```
Weekly restore drill (staging)
```shell
set -e
sudo rm -rf /tmp/restore-test
sudo mkdir -p /tmp/restore-test
sudo tar --use-compress-program=zstd -xf /backups/wp-files-2026-03-01.tar.zst -C /tmp/restore-test
sudo find /tmp/restore-test -maxdepth 3 -type f -name wp-config.php -print
sudo find /tmp/restore-test -maxdepth 3 -type d -name wp-content -print
```
Database drill (restore into a staging database):
```shell
zstd -dc /backups/wp-db-2026-03-01.sql.zst | mysql wordpress_restore
```
Monthly disaster recovery simulation
Once a month, prove that offsite backups are usable:
- download (or rsync) one snapshot
- decrypt if needed
- restore on a fresh VM/VPS
- record actual RTO (time to restore)
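Recording RTO can be as simple as timing the drill end to end. A sketch, where the middle section stands in for whatever your documented restore procedure is:

```shell
# Sketch: measure wall-clock restore time for the drill log.
START="$(date +%s)"
# ...download snapshot, decrypt, restore files and DB here...
END="$(date +%s)"
echo "DR drill $(date +%F): $(( END - START ))s"
```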
How 3-2-1 affects RPO and RTO
3-2-1 itself does not set RPO/RTO, but it enables them:
- Offsite copy reduces the chance of total loss (helps you meet RPO).
- Local copy reduces restore time (helps you meet RTO).
See: `opt/docker-data/apps/docusaurus/site/docs/server/linux-server/10-backup-disaster-recovery/rpo-vs-rto.mdx`
Security and isolation notes
If your VPS is compromised, assume attackers can read local backups and may try to delete offsite backups.
Mitigations:
- encrypt sensitive artifacts before upload
- separate credentials for backup upload
- enable remote versioning/immutability if possible
- keep decryption material out of the VPS when feasible
Quick self-audit
Answer these and you will know whether you really have 3-2-1:
- Do you have at least two backup copies in addition to production?
- Are those copies in different failure domains?
- Do you have at least one offsite copy?
- Can you restore both files and database into staging?
- Are backups protected from web access and casual reads?
Practice lab
Use this lab to implement a minimal 3-2-1 setup.
- Create `/backups` and lock it down.

```shell
sudo mkdir -p /backups
sudo chmod 700 /backups
```
- Create one files archive and one DB dump.

```shell
sudo tar -C /var/www/html -cf - . | zstd -3 -T0 -o "/backups/wp-files-$(date +%F).tar.zst"
mysqldump --single-transaction --quick wordpress | zstd -3 -T0 -o "/backups/wp-db-$(date +%F).sql.zst"
```
- Verify both.

```shell
zstd -t /backups/wp-files-*.tar.zst
zstd -t /backups/wp-db-*.sql.zst
```
- Copy offsite.

```shell
rclone copy /backups remote:wp-backups/site-a
```
- Restore into staging.

```shell
sudo rm -rf /tmp/restore-test
sudo mkdir -p /tmp/restore-test
sudo tar --use-compress-program=zstd -xf /backups/wp-files-*.tar.zst -C /tmp/restore-test
```
If you can do this reliably, you have a real foundation to build on.
Common 3-2-1 variants (optional)
3-2-1 is the baseline. You may also see variants that add specific guarantees.
3-2-1-1
Adds an extra "1" meaning one copy is:
- offline (not continuously reachable from the VPS), or
- immutable (cannot be modified/deleted for a retention window).
This helps against ransomware and credential compromise.
3-2-1-1-0
Adds "0" meaning "zero errors" verified:
- integrity tests pass (compression/encryption)
- restore drills succeed
- checksums match for offsite copies
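For the "0" part, a checksum manifest gives you something concrete to verify against. A sketch using `sha256sum` (the manifest approach is one option, not something 3-2-1-1-0 itself prescribes):

```shell
# Sketch: write a checksum manifest next to the artifacts, then verify it.
( cd /backups && sha256sum wp-*.zst > "SHA256SUMS-$(date +%F)" )
( cd /backups && sha256sum -c "SHA256SUMS-$(date +%F)" )
```

Upload the manifest along with the artifacts; after a download you can run `sha256sum -c` against the offsite copy.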
What counts as "two storage types"
This is about failure domains, not file extensions.
Examples:
| Storage type A | Storage type B | Why it counts |
|---|---|---|
| VPS local disk | object storage bucket | different systems, different durability model |
| attached block volume | another VPS in a different region | reduces chance of same physical failure |
| NAS (different site) | object storage | physical + administrative separation |
Non-examples:
| Storage type A | Storage type B | Why it does not help |
|---|---|---|
| `/backups` folder | `/home/backups` folder | still the same disk/system |
| two cloud folders | same compromised credential | deletion risk remains |
Isolation checklist (protect offsite backups)
Use this checklist to reduce "attacker deletes backups" risk:
- Backups are written to a directory not served by the web server.
- Remote credentials are not readable by `www-data`.
- Offsite storage has versioning/immutability (if available).
- Encryption keys/passphrases are not stored in the same place as the encrypted backups.
- You can revoke/rotate remote credentials without losing access to existing backups.
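For the credentials item, lock down the rclone config file; rclone's default config path for root is `/root/.config/rclone/rclone.conf`:

```shell
# Sketch: make the rclone credentials readable by root only, not www-data.
sudo chown root:root /root/.config/rclone/rclone.conf
sudo chmod 600 /root/.config/rclone/rclone.conf
```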
Bandwidth-friendly 3-2-1
If uploads are large, offsite transfer time is often the limiting factor.
Practical tactics:
- Keep frequent DB dumps (small, compressible).
- Use snapshot-style file backups to avoid re-uploading unchanged files.
- Run offsite uploads off-peak.
- Apply rate limiting if needed.
Example: offsite copy with a bandwidth limit:
```shell
rclone copy /backups remote:wp-backups/site-a --bwlimit 8M
```
A documentation template you can keep with the client
Store a short document describing your actual backup map.
```
Site name:
Primary host:
WordPress root:
Database name:

Artifacts:
- files archive pattern:
- db dump pattern:
- secrets archive pattern:

Local storage:
- path:
- retention:

Offsite storage:
- provider/remote name:
- path:
- retention:

Encryption:
- method:
- key/passphrase location:

Verification:
- daily checks:
- weekly restore drill:
- monthly DR simulation:
```
FAQ
Do I need three different clouds?
No. You need three total copies with at least one offsite. Multiple clouds are optional.
Is a provider snapshot the same as an offsite copy?
Sometimes. Many provider snapshots live in the same provider account and may not protect against account-level deletion. Treat them as helpful, but not your only offsite.
Do I need to back up WordPress core?
Often you can reinstall it, but full archives are still useful for speed and for unknown drift. If you back up only wp-content/, do it intentionally and practice restores.
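If you do deliberately back up only `wp-content/`, a sketch of the narrower archive, reusing the same `tar`/`zstd` conventions as the blueprint above:

```shell
# Sketch: files backup limited to wp-content/ -- an intentional, smaller scope.
tar -C /var/www/html -cf - wp-content \
  | zstd -3 -T0 -o "/backups/wp-content-$(date +%F).tar.zst"
```

A restore then also needs a fresh WordPress core of the matching version, so record the version alongside the artifact.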
How often should I do restore drills?
At least monthly. If the site is revenue-critical, consider weekly drills.
Troubleshooting
Offsite copy exists but restore fails
Common causes:
- wrong paths stored in the archive
- missing secrets (`wp-config.php`)
- database dump does not match the restored file set
Fix by restoring into staging, validating layout, and documenting a known-good restore procedure.
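A quick way to catch the "wrong paths stored in the archive" case is to list the archive before extracting (the date matches the drill examples in this guide):

```shell
# Sketch: inspect stored paths without extracting anything.
zstd -dc /backups/wp-files-2026-03-01.tar.zst | tar -tf - | sed -n '1,20p'
```

With the blueprint's `tar -C /var/www/html ... .` invocation, entries should be relative (`./wp-content/...`); absolute paths or an unexpected top-level directory mean the restore's `-C` target must change.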
Offsite is missing some backups
Confirm whether you are using copy vs sync, and whether retention pruning is happening remotely.
```shell
rclone lsf remote:wp-backups/site-a --recursive | sed -n '1,120p'
```
Local backups keep filling the disk
This is a retention issue. Confirm pruning rules and ensure backups are not nested.
```shell
du -sh /backups
ls -lh /backups | sed -n '1,120p'
```
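A minimal pruning sketch that keeps 14 days of local artifacts (the window is an example, not a recommendation from this guide):

```shell
# Sketch: delete local artifacts older than 14 days; print what is removed.
find /backups -maxdepth 1 -name 'wp-*.zst' -mtime +14 -print -delete
```

Prune only after the offsite copy is confirmed, or retention quietly becomes your data loss.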