Study Cases
These study cases show how systemd.path solves real-world automation problems. Each case includes the problem description, architecture, complete unit files, scripts, and lessons learned.
Study Case 1: WordPress VPS — Zero-Downtime Cache Management
Problem
A WordPress VPS team needs to flush the object cache (Redis/Memcached) and the page cache (Nginx FastCGI cache) without giving developers SSH access. Developers should be able to trigger cache flushes via SFTP by simply uploading an empty file.
Requirements
- No SSH access for developers — SFTP only.
- Two separate signal files: one for object cache, one for page cache.
- Logging of every cache flush with timestamp and which cache was flushed.
- Rate limiting to prevent accidental spam.
- Security hardening — the flush script should not have root privileges.
Architecture
A developer uploads an empty signal file over SFTP; the path unit detects it and starts a matching oneshot service, which flushes the cache, logs the action, and removes the signal file.
Unit Files
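# /etc/systemd/system/object-cache-flush.path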
[Unit]
Description=Watch for object cache flush signal
[Path]
PathExists=/var/www/html/clear_object_cache.txt
[Install]
WantedBy=paths.target
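# /etc/systemd/system/object-cache-flush.service (activated by the path unit of the same name)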
[Unit]
Description=Flush WordPress object cache (Redis/Memcached)
StartLimitBurst=5
StartLimitIntervalSec=60
[Service]
Type=oneshot
User=www-data
Group=www-data
ExecStart=/usr/local/bin/flush-object-cache.sh
RuntimeMaxSec=2m
StandardOutput=append:/var/log/cache-flush.log
StandardError=append:/var/log/cache-flush.log
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=read-only
ReadWritePaths=/var/log /var/www/html
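# /etc/systemd/system/page-cache-flush.path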
[Unit]
Description=Watch for page cache flush signal
[Path]
PathExists=/var/www/html/clear_page_cache.txt
[Install]
WantedBy=paths.target
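# /etc/systemd/system/page-cache-flush.service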
[Unit]
Description=Flush Nginx FastCGI page cache
StartLimitBurst=5
StartLimitIntervalSec=60
[Service]
Type=oneshot
User=www-data
Group=www-data
ExecStart=/usr/local/bin/flush-page-cache.sh
RuntimeMaxSec=2m
StandardOutput=append:/var/log/cache-flush.log
StandardError=append:/var/log/cache-flush.log
NoNewPrivileges=true
PrivateTmp=true
# Mirror the hardening of the object-cache unit; the Nginx cache directory must stay writable
ProtectSystem=strict
ProtectHome=read-only
ReadWritePaths=/var/log /var/www/html /var/cache/nginx
Scripts
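# /usr/local/bin/flush-object-cache.sh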
#!/usr/bin/env bash
set -euo pipefail
echo "[$(date -Is)] [OBJECT-CACHE] Flush triggered"
/usr/local/bin/wp cache flush --path=/var/www/html 2>&1
rm -f /var/www/html/clear_object_cache.txt
echo "[$(date -Is)] [OBJECT-CACHE] Flush complete, signal file removed"
#!/usr/bin/env bash
set -euo pipefail
echo "[$(date -Is)] [PAGE-CACHE] Flush triggered"
rm -rf /var/cache/nginx/fastcgi/*
echo "[$(date -Is)] [PAGE-CACHE] Flush complete, signal file removed"
rm -f /var/www/html/clear_page_cache.txt
Deployment
sudo chmod +x /usr/local/bin/flush-object-cache.sh /usr/local/bin/flush-page-cache.sh
sudo systemctl daemon-reload
sudo systemctl enable --now object-cache-flush.path page-cache-flush.path
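To confirm the watchers are active, check the path units directly:
systemctl list-units --type=path
systemctl status object-cache-flush.path page-cache-flush.path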
Testing
# Test object cache flush
touch /var/www/html/clear_object_cache.txt
sleep 2
tail -5 /var/log/cache-flush.log
# Test page cache flush
touch /var/www/html/clear_page_cache.txt
sleep 2
tail -5 /var/log/cache-flush.log
Expected log output:
[2026-03-02T10:00:01+00:00] [OBJECT-CACHE] Flush triggered
Success: The cache was flushed.
[2026-03-02T10:00:01+00:00] [OBJECT-CACHE] Flush complete, signal file removed
[2026-03-02T10:00:15+00:00] [PAGE-CACHE] Flush triggered
[2026-03-02T10:00:15+00:00] [PAGE-CACHE] Flush complete, signal file removed
Lessons Learned
- Signal files are a simple but effective remote action mechanism. Developers don't need SSH — just SFTP access.
- Separate path units per action make it easy to enable/disable individual operations.
- Rate limiting (StartLimitBurst=5) prevents accidental spam from multiple developers.
- Log everything: when a client reports "the cache wasn't flushed," check /var/log/cache-flush.log.
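One practical note on the rate limiting: once StartLimitBurst=5 trips within the 60-second window, systemd refuses further starts until the window passes. To clear the limiter state by hand during testing (standard systemctl, using the unit names above):
sudo systemctl reset-failed object-cache-flush.service page-cache-flush.service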
Study Case 2: CI/CD Pipeline — Git-Based Deployment Trigger
Problem
A development team uses a CI/CD pipeline that pushes code to a staging server. After the CI pipeline finishes uploading the new code, a deployment script should run automatically — without a webhook server or cron polling.
Requirements
- CI pipeline creates a deploy.signal file after uploading code.
- The deployment script should pull the latest code, run migrations, clear caches, and restart services.
- Full rollback capability if deployment fails.
- Deployment should be logged with start/end timestamps.
- No more than one deployment at a time.
Architecture
The CI pipeline uploads the new code and creates deploy.signal; the path unit detects the file and starts a oneshot service that pulls, migrates, rebuilds caches, and health-checks, removing the signal on success and rolling back on any failure.
Unit Files
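# /etc/systemd/system/auto-deploy.path (unit name assumed; pair it with the service below)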
[Unit]
Description=Watch for CI/CD deploy signal
[Path]
PathExists=/var/www/html/deploy.signal
[Install]
WantedBy=paths.target
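# /etc/systemd/system/auto-deploy.service (unit name assumed)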
[Unit]
Description=Auto-deploy from CI/CD signal
StartLimitBurst=3
StartLimitIntervalSec=300
[Service]
Type=oneshot
User=www-data
Group=www-data
WorkingDirectory=/var/www/html
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
ExecStart=/usr/local/bin/auto-deploy.sh
RuntimeMaxSec=15m
StandardOutput=append:/var/log/auto-deploy.log
StandardError=append:/var/log/auto-deploy.log
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ReadWritePaths=/var/www/html /var/log /var/cache
Deployment Script
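# /usr/local/bin/auto-deploy.sh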
#!/usr/bin/env bash
set -euo pipefail
DEPLOY_DIR="/var/www/html"
SIGNAL_FILE="$DEPLOY_DIR/deploy.signal"
# Timestamps are evaluated per call rather than once at script start
log_info() { echo "[$(date -Is)] [DEPLOY] [INFO] $*"; }
log_error() { echo "[$(date -Is)] [DEPLOY] [ERROR] $*" >&2; }
# Save current commit for rollback
PREV_COMMIT=$(cd "$DEPLOY_DIR" && git rev-parse HEAD)
log_info "Starting deployment. Current: $PREV_COMMIT"
rollback() {
trap - ERR  # avoid re-triggering rollback if a rollback step itself fails
log_error "Deployment FAILED. Rolling back to $PREV_COMMIT"
cd "$DEPLOY_DIR"
git reset --hard "$PREV_COMMIT"
composer install --no-dev --optimize-autoloader 2>&1
rm -f "$SIGNAL_FILE"
# Optional: send alert
# curl -s -X POST "https://hooks.slack.com/..." -d "{\"text\": \"Deploy FAILED on $(hostname)\"}"
exit 1
}
trap rollback ERR
# Step 1: Pull latest code
cd "$DEPLOY_DIR"
git pull origin main 2>&1
NEW_COMMIT=$(git rev-parse HEAD)
log_info "Pulled: $PREV_COMMIT → $NEW_COMMIT"
# Step 2: Install dependencies
composer install --no-dev --optimize-autoloader 2>&1
log_info "Dependencies installed"
# Step 3: Run migrations
php artisan migrate --force 2>&1
log_info "Migrations complete"
# Step 4: Clear caches
php artisan config:cache 2>&1
php artisan route:cache 2>&1
php artisan view:cache 2>&1
log_info "Caches rebuilt"
# Step 5: Health check
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" http://localhost/)
if [ "$HTTP_CODE" != "200" ]; then
log_error "Health check failed: HTTP $HTTP_CODE"
rollback
fi
log_info "Health check passed: HTTP $HTTP_CODE"
# Step 6: Cleanup
rm -f "$SIGNAL_FILE"
log_info "Deployment complete: $NEW_COMMIT"
CI Pipeline Integration
deploy:
runs-on: ubuntu-latest
steps:
- name: Upload code to staging
run: rsync -avz ./dist/ staging-server:/var/www/html/
- name: Trigger deployment
run: ssh staging-server "touch /var/www/html/deploy.signal"
Lessons Learned
- Rollback is non-negotiable. The trap rollback ERR pattern ensures any failure triggers automatic rollback.
- Health checks prevent broken deployments from staying live.
- RuntimeMaxSec=15m prevents a stuck deployment from blocking future ones.
- Rate limiting (StartLimitBurst=3) prevents rapid re-deploy loops.
Study Case 3: Media Processing Pipeline
Problem
A photography studio uploads high-resolution images to a server. Each image needs to be:
- Optimized (compressed without quality loss).
- Watermarked.
- Thumbnail generated.
- Synced to cloud storage.
- Original moved to archive.
Architecture
Images land in /var/media/queue; DirectoryNotEmpty= starts the service, which processes a single image (optimize, watermark, thumbnail, cloud sync, archive) and exits, and the path unit re-triggers while the queue is still non-empty.
Unit Files
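# /etc/systemd/system/image-pipeline.path (unit name assumed)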
[Unit]
Description=Watch for incoming images to process
[Path]
DirectoryNotEmpty=/var/media/queue
MakeDirectory=yes
DirectoryMode=0775
[Install]
WantedBy=paths.target
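# /etc/systemd/system/image-pipeline.service (unit name assumed)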
[Unit]
Description=Process one image through the pipeline
StartLimitBurst=20
StartLimitIntervalSec=60
[Service]
Type=oneshot
User=media
Group=media
ExecStart=/usr/local/bin/image-pipeline.sh
RuntimeMaxSec=10m
StandardOutput=append:/var/log/image-pipeline.log
StandardError=append:/var/log/image-pipeline.log
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ReadWritePaths=/var/media /var/log
Processing Script
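# /usr/local/bin/image-pipeline.sh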
#!/usr/bin/env bash
set -euo pipefail
QUEUE="/var/media/queue"
PROCESSED="/var/media/processed"
THUMBNAILS="/var/media/thumbnails"
ARCHIVE="/var/media/archive"
FAILED="/var/media/failed"
mkdir -p "$PROCESSED" "$THUMBNAILS" "$ARCHIVE" "$FAILED"
FILE=$(ls "$QUEUE"/*.{jpg,jpeg,png,tiff} 2>/dev/null | head -1)
[ -z "$FILE" ] && exit 0
BASENAME=$(basename "$FILE")
echo "[$(date -Is)] [PIPELINE] Processing: $BASENAME"
# Stage function — moves to failed/ on error
process() {
# 1. Optimize
echo "[$(date -Is)] [OPTIMIZE] $BASENAME"
if [[ "$FILE" == *.jpg ]] || [[ "$FILE" == *.jpeg ]]; then
jpegoptim --max=85 "$FILE" 2>&1
elif [[ "$FILE" == *.png ]]; then
optipng -o2 "$FILE" 2>&1
fi
# 2. Watermark (using ImageMagick)
echo "[$(date -Is)] [WATERMARK] $BASENAME"
convert "$FILE" \
-gravity SouthEast \
-pointsize 24 \
-fill "rgba(255,255,255,0.5)" \
-annotate +10+10 "© Studio 2026" \
"$PROCESSED/$BASENAME" 2>&1
# 3. Generate thumbnail (300px wide)
echo "[$(date -Is)] [THUMBNAIL] $BASENAME"
convert "$PROCESSED/$BASENAME" \
-resize 300x \
"$THUMBNAILS/thumb_$BASENAME" 2>&1
# 4. Sync to cloud
echo "[$(date -Is)] [SYNC] $BASENAME"
rclone copy "$PROCESSED/$BASENAME" remote:studio/processed/ 2>&1
rclone copy "$THUMBNAILS/thumb_$BASENAME" remote:studio/thumbnails/ 2>&1
# 5. Archive original
mv "$FILE" "$ARCHIVE/"
echo "[$(date -Is)] [DONE] $BASENAME → archive/"
}
# Run the pipeline in a subshell with its own errexit: inside an `if ! process`
# condition, set -e is suppressed and a failing step would be silently ignored
set +e
( set -euo pipefail; process )
STATUS=$?
set -e
if [ "$STATUS" -ne 0 ]; then
echo "[$(date -Is)] [FAILED] $BASENAME → failed/" >&2
mv "$FILE" "$FAILED/" 2>/dev/null || true
exit 1
fi
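A quick way to exercise the pipeline end to end (the sample image path is hypothetical):
cp ~/samples/photo-001.jpg /var/media/queue/
sleep 5
tail -n 10 /var/log/image-pipeline.log
ls /var/media/processed /var/media/thumbnails /var/media/archive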
Lessons Learned
- Process one file per invocation. DirectoryNotEmpty= re-triggers automatically for remaining files.
- Dead-letter queue (failed/) catches problematic files without blocking the pipeline.
- RuntimeMaxSec=10m prevents a corrupt image from hanging the pipeline indefinitely.
- Higher StartLimitBurst=20 accommodates batch uploads (20+ images at once).
Study Case 4: Multi-Tenant SaaS — Per-Customer Data Import
Problem
A SaaS application receives data files from multiple customers. Each customer has their own SFTP directory. When a customer uploads a file, it should be imported into their specific database schema.
Architecture
Each customer uploads into /mnt/sftp/<name>/incoming; a templated path unit per customer watches that directory and starts a templated service that imports the file into the customer's schema, then moves it to done/ or failed/.
Template Units
Using systemd template units (@) to handle multiple customers with one set of files:
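# /etc/systemd/system/import@.path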
[Unit]
Description=Watch for incoming data from %i
[Path]
# Watch an incoming/ subdirectory: the done/ and failed/ folders created by the
# import script live one level up, so they never keep the watched directory
# permanently non-empty (which would re-trigger the service in a loop)
DirectoryNotEmpty=/mnt/sftp/%i/incoming
MakeDirectory=yes
DirectoryMode=0770
[Install]
WantedBy=paths.target
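# /etc/systemd/system/import@.service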
[Unit]
Description=Import data for customer %i
StartLimitBurst=10
StartLimitIntervalSec=60
[Service]
Type=oneshot
User=import-worker
Group=import-worker
Environment="CUSTOMER=%i"
EnvironmentFile=/etc/import/%i.env
ExecStart=/usr/local/bin/customer-import.sh
RuntimeMaxSec=30m
StandardOutput=append:/var/log/import/%i.log
StandardError=append:/var/log/import/%i.log
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ReadWritePaths=/mnt/sftp/%i /var/log/import
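To inspect a running instance (assuming an instance named customer-a, matching the environment files below):
systemctl list-units 'import@*.path'
systemctl status import@customer-a.path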
Per-Customer Environment Files
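# /etc/import/customer-a.env (instance name inferred from DB_NAME below)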
DB_HOST=localhost
DB_NAME=saas_customer_a
DB_USER=import_a
DB_PASS=secret_a
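# /etc/import/customer-b.env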
DB_HOST=localhost
DB_NAME=saas_customer_b
DB_USER=import_b
DB_PASS=secret_b
Import Script
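# /usr/local/bin/customer-import.sh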
#!/usr/bin/env bash
set -euo pipefail
IMPORT_DIR="/mnt/sftp/$CUSTOMER"
DONE_DIR="/mnt/sftp/$CUSTOMER/done"
FAILED_DIR="/mnt/sftp/$CUSTOMER/failed"
mkdir -p "$DONE_DIR" "$FAILED_DIR"
FILE=$(ls "$IMPORT_DIR"/*.csv 2>/dev/null | head -1)
[ -z "$FILE" ] && exit 0
BASENAME=$(basename "$FILE")
echo "[$(date -Is)] [$CUSTOMER] Importing: $BASENAME"
# --local-infile=1: recent MySQL clients disable LOCAL INFILE by default
if mysql --local-infile=1 -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" \
-e "LOAD DATA LOCAL INFILE '$FILE' INTO TABLE imports FIELDS TERMINATED BY ','"; then
mv "$FILE" "$DONE_DIR/"
echo "[$(date -Is)] [$CUSTOMER] Success: $BASENAME"
else
mv "$FILE" "$FAILED_DIR/"
echo "[$(date -Is)] [$CUSTOMER] FAILED: $BASENAME" >&2
exit 1
fi
Adding a New Customer
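# add-customer.sh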
#!/usr/bin/env bash
CUSTOMER="$1"
# Create the environment file
sudo tee "/etc/import/${CUSTOMER}.env" > /dev/null <<EOF
DB_HOST=localhost
DB_NAME=saas_${CUSTOMER}
DB_USER=import_${CUSTOMER}
DB_PASS=$(openssl rand -base64 16)
EOF
# Create log directory
sudo mkdir -p /var/log/import
sudo touch "/var/log/import/${CUSTOMER}.log"
# Enable the template instance
sudo systemctl daemon-reload
sudo systemctl enable --now "import@${CUSTOMER}.path"
echo "Customer $CUSTOMER import watcher enabled"
Usage:
sudo bash add-customer.sh customer-d
Lessons Learned
- Template units scale cleanly: one import@.path and import@.service pair handles any number of customers.
- EnvironmentFile= keeps secrets out of unit files and allows per-customer configuration.
- Per-customer log files (/var/log/import/%i.log) make debugging customer-specific issues easy.
- MakeDirectory=yes with DirectoryMode=0770 ensures the watched incoming/ directory exists with correct permissions.
Study Case 5: Infrastructure Monitoring — Config Change Alerting
Problem
A security-conscious operations team wants to be alerted whenever critical system configuration files are modified — potentially indicating unauthorized changes or misconfiguration.
Monitored Files
| File | Why It Matters |
|---|---|
| /etc/passwd | User accounts |
| /etc/shadow | Password hashes |
| /etc/sudoers | Sudo privileges |
| /etc/ssh/sshd_config | SSH settings |
| /etc/nginx/nginx.conf | Web server config |
Unit Files
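# /etc/systemd/system/config-sentinel.path (unit name assumed)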
[Unit]
Description=Watch critical system configs for changes
[Path]
PathModified=/etc/passwd
PathModified=/etc/shadow
PathModified=/etc/sudoers
PathModified=/etc/ssh/sshd_config
PathModified=/etc/nginx/nginx.conf
[Install]
WantedBy=paths.target
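# /etc/systemd/system/config-sentinel.service (unit name assumed)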
[Unit]
Description=Alert on critical config change
StartLimitBurst=10
StartLimitIntervalSec=60
[Service]
Type=oneshot
ExecStart=/usr/local/bin/config-sentinel.sh
RuntimeMaxSec=2m
StandardOutput=append:/var/log/config-sentinel.log
StandardError=append:/var/log/config-sentinel.log
Alert Script
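# /usr/local/bin/config-sentinel.sh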
#!/usr/bin/env bash
set -euo pipefail
HOSTNAME=$(hostname)
TIMESTAMP=$(date -Is)
FILES="/etc/passwd /etc/shadow /etc/sudoers /etc/ssh/sshd_config /etc/nginx/nginx.conf"
echo "[$TIMESTAMP] [SENTINEL] Config change detected on $HOSTNAME"
# Log which files were recently modified
for f in $FILES; do
if [ -f "$f" ]; then
MOD_TIME=$(stat -c %y "$f")
echo " $f → last modified: $MOD_TIME"
fi
done
# Generate checksums for audit trail
echo "Current checksums:"
for f in $FILES; do
if [ -f "$f" ]; then
sha256sum "$f"
fi
done
# Send alert (uncomment for production)
# MSG="⚠ CONFIG CHANGE on $HOSTNAME at $TIMESTAMP"
# curl -s -X POST "https://hooks.slack.com/services/T.../B.../xxx" \
# -H 'Content-type: application/json' \
# -d "{\"text\": \"$MSG\"}"
Lessons Learned
- PathModified= catches every write as it happens, ideal for security monitoring.
- Multiple paths in one unit with OR logic mean one service handles all critical files.
- Checksum logging creates an audit trail for forensic analysis.
- Rate limiting prevents alert fatigue during legitimate config management sessions.
Study Case Summary
| Case | Pattern | Directive | Key Design Decision |
|---|---|---|---|
| 1. Cache Management | Signal file | PathExists= | Separate path units per action type |
| 2. CI/CD Deploy | Signal file | PathExists= | Rollback on failure, health check |
| 3. Media Pipeline | Queue folder | DirectoryNotEmpty= | One file per invocation, dead-letter queue |
| 4. Multi-Tenant SaaS | Template units | DirectoryNotEmpty= | @ template for scalability |
| 5. Config Alerting | Multi-file watch | PathModified= | Multiple paths in one unit |
What's Next
- Cheatsheet and Quiz — quick reference sheet and 10-question self-assessment quiz.