# Backups

How to back up the database for Pro self-hosted deployments — SQLite (Starter/Advanced) and PostgreSQL (Enterprise).

## Backing Up the Pro Self-Hosted Database

Password Pusher Pro self-hosted uses different databases depending on your plan:
| Plan | Database |
|---|---|
| Starter | SQLite3 |
| Advanced | SQLite3 |
| Enterprise | PostgreSQL |
Follow the section that matches your deployment. In Docker setups, the database file or data directory is typically on a volume — ensure your backup process can access that path (e.g. run backup from inside the container or from the host if the volume is bind-mounted).
## SQLite (Starter & Advanced)

These methods create a consistent snapshot of your SQLite database, suitable for production and safe to run while the application is in use.

### 1. Online Backup API / `.backup` (recommended)

The safest and most official way to get a consistent snapshot while the database may still be in use.

From the host (if the DB file is in a known path):

```shell
# Replace with your actual DB path (e.g. Docker volume mount or container path)
sqlite3 /path/to/production.sqlite3 ".backup '/backups/pwpush-$(date +%Y-%m-%d).db'"
```
From inside the app container (Docker):

```shell
# Enter the container (replace the container name as needed)
docker exec -it <your-pwpush-pro-container> bash

# The default Rails SQLite path is often under /rails/db or similar; adjust to your config
sqlite3 /path/to/db/production.sqlite3 ".backup '/tmp/backup-$(date +%Y%m%d).db'"

# Copy out if needed, or use a volume mounted at /backups
exit
docker cp <container>:/tmp/backup-20260225.db ./backup-20260225.db
```
In Python (e.g. for a backup script or cron job):

```python
import sqlite3
from datetime import datetime

def backup_sqlite(db_path: str, backup_dir: str):
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    backup_path = f"{backup_dir}/backup_{timestamp}.db"
    source = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)  # read-only is safest
    dest = sqlite3.connect(backup_path)
    with dest:
        source.backup(dest)  # online backup API
    source.close()
    dest.close()
    print(f"Backup created: {backup_path}")
```
Advantages: a consistent snapshot even while other connections are writing; no need to stop the app; fast on local storage.
### 2. VACUUM INTO (production / defragmented backup)

Good when you also want a smaller, defragmented copy. Requires SQLite 3.27+.

```shell
sqlite3 /path/to/production.sqlite3 "VACUUM INTO '/backups/pwpush-2026-02-25-vacuumed.db';"
```

When to prefer this over `.backup`: you want a smaller backup file, you already run periodic VACUUM, or you have high write concurrency.
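If you script backups in Python rather than the sqlite3 CLI, the same statement can be issued through the standard-library `sqlite3` module. A minimal sketch (the function name and paths are illustrative), including a guard for the 3.27+ requirement:

```python
import sqlite3

def vacuum_backup(db_path: str, backup_path: str) -> None:
    """Write a defragmented, consistent copy of db_path to backup_path."""
    if sqlite3.sqlite_version_info < (3, 27, 0):
        raise RuntimeError(f"VACUUM INTO needs SQLite 3.27+, have {sqlite3.sqlite_version}")
    conn = sqlite3.connect(db_path)
    try:
        # The target path is passed as a bound parameter; it must not already exist.
        conn.execute("VACUUM INTO ?", (backup_path,))
    finally:
        conn.close()
```

Note that `VACUUM INTO` fails if the destination file already exists, so timestamped filenames work well here.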
### 3. Logical backup (`.dump` — SQL script)

Creates a portable, plain-text SQL file (schema + data).

```shell
sqlite3 /path/to/production.sqlite3 .dump > full-dump-$(date +%Y-%m-%d).sql

# Data only (no schema; requires SQLite 3.33+):
sqlite3 /path/to/production.sqlite3 ".dump --data-only" > data-only.sql
```
Best for: Moving between SQLite versions, auditing, long-term archives. Downsides: Slower to create and restore; larger unless gzipped.
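From Python, the stdlib `sqlite3` module offers a rough equivalent of `.dump` via `Connection.iterdump()`, which yields the schema and data as SQL statements. A sketch (the function name and the gzip step are this example's additions, not part of the CLI workflow):

```python
import gzip
import sqlite3

def dump_to_sql_gz(db_path: str, out_path: str) -> None:
    """Write a gzipped SQL dump (schema + data), like `.dump` piped through gzip."""
    conn = sqlite3.connect(db_path)
    try:
        with gzip.open(out_path, "wt", encoding="utf-8") as f:
            for statement in conn.iterdump():  # CREATE/INSERT statements, one per iteration
                f.write(statement + "\n")
    finally:
        conn.close()
```

The resulting file restores with `gunzip -c dump.sql.gz | sqlite3 new.db` or, in Python, `Connection.executescript()`.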
## PostgreSQL (Enterprise)

Enterprise deployments use PostgreSQL. Use logical backups for portability and simplicity, or physical/base backups if you need point-in-time recovery.

### 1. Logical backup with pg_dump (recommended for most)

Creates a single file (custom, directory, or plain SQL format). Run it from a host that can connect to your Postgres (e.g. the app container or an admin host).
Custom format (compressed, flexible restore):

```shell
pg_dump -Fc -h <host> -U <user> -d <database_name> -f pwpush-$(date +%Y-%m-%d).dump
```

Plain SQL (human-readable, restorable with psql):

```shell
pg_dump -h <host> -U <user> -d <database_name> > pwpush-$(date +%Y-%m-%d).sql
```

Restore:

```shell
# Custom format
pg_restore -d <database_name> pwpush-2026-02-25.dump

# Plain SQL
psql -h <host> -U <user> -d <database_name> < pwpush-2026-02-25.sql
```
Use the same database name and credentials as your Pro app configuration.
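For a scheduled job, `pg_dump` is typically wrapped in a small script. The sketch below only assembles the argument list (the helper names are hypothetical, and authentication is assumed to come from `PGPASSWORD` or `~/.pgpass`), keeping command construction separate from execution:

```python
import subprocess
from datetime import date

def build_pg_dump_cmd(host: str, user: str, database: str, out_dir: str) -> list[str]:
    """Assemble a pg_dump invocation using the custom (-Fc) format."""
    out_file = f"{out_dir}/pwpush-{date.today().isoformat()}.dump"
    return ["pg_dump", "-Fc", "-h", host, "-U", user, "-d", database, "-f", out_file]

def run_backup(host: str, user: str, database: str, out_dir: str) -> None:
    """Run the dump; non-interactive auth via PGPASSWORD or ~/.pgpass is assumed."""
    subprocess.run(build_pg_dump_cmd(host, user, database, out_dir), check=True)
```

Keeping the argument list in a pure function makes the invocation easy to log and test without touching a live server.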
### 2. Continuous backup and point-in-time recovery (PITR)

For minimal data loss, use PostgreSQL’s WAL archiving plus a base backup:

- Base backup (e.g. `pg_basebackup` or a consistent `pg_dump`).
- WAL archiving — set `archive_mode=on` and `archive_command` so WAL segments are copied to safe storage (e.g. S3, NFS).

Then you can restore to any point in time using the base backup plus replayed WAL. Configure this in your PostgreSQL server (or managed service) and test restores regularly.
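As one concrete example, WAL archiving is enabled in `postgresql.conf` roughly like this (the local archive directory is an illustrative placeholder; production setups usually ship WAL to object storage with a tool such as WAL-G or pgBackRest):

```ini
# postgresql.conf: WAL archiving (example values; adapt the destination)
wal_level = replica          # the default since PostgreSQL 10; required for archiving
archive_mode = on
archive_command = 'test ! -f /mnt/wal-archive/%f && cp %p /mnt/wal-archive/%f'
```

The `test ! -f` guard refuses to overwrite an already-archived segment, which the PostgreSQL documentation recommends.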
### 3. Managed PostgreSQL (e.g. RDS, Cloud SQL)

If your Enterprise deployment uses a managed Postgres service, use its built-in backup and snapshot features and follow the provider’s restore and PITR documentation.
## Best Practices (both SQLite and PostgreSQL)

| Practice | Why it matters | Recommendation |
|---|---|---|
| Prefer online/safe methods | `.backup` / `VACUUM INTO` for SQLite; `pg_dump` for Postgres | Use by default |
| 3-2-1 rule | 3 copies, 2 different media types, 1 off-site | Critical for production |
| Automate + timestamp | Daily or hourly, depending on how often data changes | Strong |
| Test restores | Backups are useless if restores fail | Mandatory |
| Store off the same disk | Protects against disk failure and ransomware | Very strong |
| Compress | Use gzip or zstd — databases compress well | Recommended |
| Encrypt if sensitive | Use age, rclone crypt, or provider encryption | If PII/financial |
| Don’t copy live WAL files manually (SQLite) | Risk of corruption — use SQLite tools only | Never |
Quick decision guide:

- Starter/Advanced (SQLite), simple setup → `.backup` (or `VACUUM INTO`) + copy to NAS/cloud (e.g. rsync) on a schedule.
- Starter/Advanced, production → `.backup` or `VACUUM INTO` + consider Litestream or similar for continuous backup to object storage.
- Enterprise (PostgreSQL) → Scheduled `pg_dump` (or managed backups) + off-site copy; enable WAL archiving and PITR if you need minimal RPO.
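As one way to combine these practices for a Starter/Advanced deployment, a nightly job can snapshot with the online backup API, compress with gzip, and prune old copies. A sketch; the paths, filename pattern, and 14-day retention are assumptions to adapt:

```python
import glob
import gzip
import os
import shutil
import sqlite3
import time
from datetime import datetime

def nightly_backup(db_path: str, backup_dir: str, keep_days: int = 14) -> str:
    """Snapshot db_path with the online backup API, gzip it, prune old copies."""
    os.makedirs(backup_dir, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    raw = os.path.join(backup_dir, f"pwpush_{stamp}.db")

    # Consistent snapshot, safe while the app is writing
    source = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    dest = sqlite3.connect(raw)
    with dest:
        source.backup(dest)
    source.close()
    dest.close()

    # Compress, then drop the uncompressed copy
    with open(raw, "rb") as f_in, gzip.open(raw + ".gz", "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)
    os.remove(raw)

    # Simple retention: delete gzipped backups older than keep_days
    cutoff = time.time() - keep_days * 86400
    for old in glob.glob(os.path.join(backup_dir, "pwpush_*.db.gz")):
        if os.path.getmtime(old) < cutoff:
            os.remove(old)
    return raw + ".gz"
```

Pair a script like this with cron (and an rsync or rclone step to off-site storage) to satisfy the 3-2-1 rule.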
## See Also

- Overview — Pro Self-Hosted plans and features
- Docker volumes — Where data lives in container deployments
- Migrate from OSS to Pro — Deployment and configuration context