Disk I/O Troubleshooting

Disk I/O issues often look like "CPU is idle but everything is slow." This happens when processes are blocked waiting for storage (high I/O wait), or when a single workload saturates the disk.

Quick Summary
  • Confirm I/O wait (vmstat, iostat -x) before you change anything.
  • Identify the process (iotop, pidstat -d) and the files (lsof).
  • Fix the cause: runaway backups, slow queries, log storms, or a full filesystem.

Step 1: confirm I/O wait

io-wait-quick-check.sh
vmstat 1 5

Look at:

  • wa (I/O wait): sustained values (commonly above 20-30%) are a strong indicator.
  • b (blocked processes): processes waiting on I/O.
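As a quick sketch, you can average the wa column across samples instead of eyeballing it. This assumes a modern procps vmstat whose header names the columns; the awk below reads the column index from that header rather than hard-coding it:

```shell
# Average the wa (I/O wait) column over five 1-second vmstat samples.
# The first data row is skipped because it reports averages since boot.
vmstat 1 5 | awk '
  /^procs/ { next }                          # banner line
  hdr == 0 { for (i = 1; i <= NF; i++) if ($i == "wa") col = i; hdr = 1; next }
  seen == 0 { seen = 1; next }               # since-boot row
  { sum += $col; n++ }
  END { if (n) printf "avg iowait: %.1f%%\n", sum / n }'
```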

Step 2: check device-level saturation

Install sysstat if needed, then:

iostat-extended.sh
iostat -x 1 10

Fields to interpret (names vary slightly by sysstat version):

  • %util: near 100% suggests the device is saturated (less reliable for SSDs/NVMe, which serve many requests in parallel).
  • await (split into r_await and w_await on newer sysstat): high completion times in milliseconds suggest queueing or slow storage.
  • r/s and w/s: IOPS pressure.
  • rkB/s and wkB/s: throughput pressure.
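To spot saturated devices without scanning the output by hand, a small filter over iostat's own header works. The 90% threshold here is an illustrative cutoff, not a hard rule:

```shell
# Print any device whose %util column is at or above 90%.
# The column index is located by name, since field order varies by
# sysstat version; "Device" may appear as "Device:" on older releases.
iostat -x 1 10 | awk '
  $1 ~ /^Device/ { for (i = 1; i <= NF; i++) if ($i ~ /util/) col = i; next }
  col && NF >= col && $col + 0 >= 90 { printf "%s saturated: %s%% util\n", $1, $col }'
```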

Step 3: identify the process causing I/O

iotop-top-io-processes.sh
sudo iotop -o -P -a

Alternative (more script-friendly):

pidstat-disk.sh
pidstat -d 1 10
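pidstat's repeated headers make it easy to aggregate per command. This sketch sums the kB_wr/s column across all samples and ranks the heaviest writers; column names match recent sysstat, so adjust if yours differ:

```shell
# Sum write rate per command across all pidstat samples and rank them.
pidstat -d 1 10 | awk '
  $1 == "Average:" { next }                  # skip the summary block
  /kB_wr\/s/ { for (i = 1; i <= NF; i++) { if ($i == "kB_wr/s") w = i
                                           if ($i == "Command") c = i }; next }
  w && NF >= c && $w + 0 > 0 { tot[$c] += $w }
  END { for (cmd in tot) printf "%12.1f kB/s written  %s\n", tot[cmd], cmd }' |
  sort -rn | head
```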

Step 4: identify the files being hammered

Once you have a PID, inspect open files:

lsof-files-by-pid.sh
sudo lsof -p 12345 | head -n 50

Common culprits on WordPress servers:

  • MySQL/MariaDB data files (/var/lib/mysql/...).
  • Large logs (/var/log/...), especially if debug logging is enabled.
  • Backup targets (/backups/...), especially compressing large trees.
  • Cache directories under wp-content/.
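Beyond open file handles, /proc/<pid>/io exposes cumulative I/O counters, so two samples give a write-rate estimate. This is a sketch: 12345 is the same placeholder PID as in the lsof example, the 5-second window is arbitrary, and reading another user's counters requires root:

```shell
# Estimate a process's write rate from its cumulative write_bytes counter.
pid=12345                               # placeholder PID
b1=$(awk '/^write_bytes/ { print $2 }' "/proc/$pid/io")
sleep 5
b2=$(awk '/^write_bytes/ { print $2 }' "/proc/$pid/io")
echo "$(( (b2 - b1) / 5 / 1024 )) kB/s written by PID $pid"
```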

Step 5: fix the cause (safe order)

Start with the least risky actions.

Backups are saturating the disk

If tar, zip, mysqldump, or compression is the culprit:

  • Run backups off-peak.
  • Lower compression level.
  • Apply nice and ionice.
make-backups-less-disruptive.sh
nice -n 10 ionice -c2 -n7 tar -czf /backups/site.tar.gz /var/www/html

Logs are growing rapidly

Check log size and rate:

log-growth-check.sh
sudo du -sh /var/log/* 2>/dev/null | sort -h | tail -n 20
sudo journalctl --disk-usage
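If one file looks suspicious, two size samples give a rough append rate. The path below is only an example, and stat -c is the GNU coreutils form (BSD stat uses -f):

```shell
# Rough bytes-per-second growth of one log file over a 10-second window.
log=/var/log/syslog                     # example path; substitute the suspect log
s1=$(stat -c %s "$log")
sleep 10
s2=$(stat -c %s "$log")
echo "$(( (s2 - s1) / 10 )) bytes/s appended to $log"
```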

The filesystem is full

If df -h shows usage near 100% (or df -ih shows inode exhaustion), writes will block or fail.

check-disk-space.sh
df -hT /
df -ih /

Then use the disk usage workflow in [Disk usage insights](./disk-usage-insights).

warning

Do not delete MySQL files directly under /var/lib/mysql. If MySQL storage is the issue, fix it through MySQL (e.g., purge old binary logs, resize, or migrate storage).

Next steps

  • If a single process is the cause: see [Process control](./process-control).
  • If you need historical proof of I/O wait spikes: see [Historical performance stats](./historical-performance-stats).