Migrate from MySQL/MariaDB to PostgreSQL
Step-by-step guide for migrating your Password Pusher database from MySQL or MariaDB to PostgreSQL using pgloader
Why Migrate?
Starting with recent Password Pusher releases, MySQL and MariaDB support has been deprecated. The supported database backends are now:
- SQLite3 (default) - Ideal for single-server deployments
- PostgreSQL - Recommended for production and multi-server environments
If you’re currently running Password Pusher with MySQL or MariaDB, you’ll need to migrate to PostgreSQL to continue receiving updates and support.
Important: Always create a complete backup before starting the migration. This process migrates your data one-way from MySQL/MariaDB to PostgreSQL.
Migration Overview
The migration process involves these steps:
- Backup your existing MySQL/MariaDB database
- Set up a new PostgreSQL database
- Use pgloader to migrate the data
- Configure PostgreSQL user permissions
- Update Password Pusher to use PostgreSQL
- Verify the migration
Prerequisites
Before beginning, ensure you have:
- Access to your current MySQL/MariaDB database credentials
- A server or container to run PostgreSQL (version 13 or later recommended)
- pgloader installed
- Your current Password Pusher instance stopped during migration
- Your `PWPUSH_MASTER_KEY` value (required to decrypt existing data)
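If you are not sure where your `PWPUSH_MASTER_KEY` is set, you can search your configuration files before stopping anything. This is a minimal sketch; the file names are examples and may differ in your setup:

```shell
# Look for the encryption key in common configuration locations.
# The file names here are examples; check wherever you store configuration.
grep -h 'PWPUSH_MASTER_KEY' .env docker-compose.yml 2>/dev/null || true
```

You can also read it from a still-running container with `docker compose exec pwpush printenv PWPUSH_MASTER_KEY` (assuming your service is named `pwpush`).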
Step 1: Backup Your MySQL/MariaDB Database
Never skip this step. Create a complete backup:
# For MySQL
mysqldump -u username -p pwpush_db > pwpush_mysql_backup_$(date +%Y%m%d).sql
# For MariaDB
mariadb-dump -u username -p pwpush_db > pwpush_mariadb_backup_$(date +%Y%m%d).sql
Store this backup securely - you may need it if anything goes wrong.
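Before moving on, it is worth sanity-checking the dump file. The helper below is a minimal sketch (the function name is illustrative): it fails on an empty or missing file and reports how many `CREATE TABLE` statements the dump contains:

```shell
# Sanity-check a dump file: must be non-empty and contain table definitions.
# A count of zero means the dump is suspect and should be re-created.
check_dump() {
  [ -s "$1" ] || { echo "empty or missing: $1"; return 1; }
  echo "found $(grep -c 'CREATE TABLE' "$1") tables in $1"
}
```

Run it against the file created above, e.g. `check_dump pwpush_mysql_backup_$(date +%Y%m%d).sql`.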
Step 2: Stop Password Pusher
Stop your current Password Pusher instance to prevent data changes during migration:
# Docker Compose
docker compose down
# Or for standalone Docker
docker stop pwpush
Step 3: Set Up PostgreSQL
Option A: Using Docker (Recommended)
Create a new docker-compose-postgres.yml:
services:
postgres:
image: postgres:16
restart: unless-stopped
environment:
POSTGRES_USER: pwpush_user
POSTGRES_PASSWORD: your_secure_password_here
POSTGRES_DB: pwpush_db
volumes:
- postgres-data:/var/lib/postgresql/data
ports:
- "5432:5432"
volumes:
postgres-data:
Start PostgreSQL:
docker compose -f docker-compose-postgres.yml up -d
Verify it’s running:
docker compose -f docker-compose-postgres.yml ps
Option B: Using System PostgreSQL
For a system-installed PostgreSQL:
# Connect as the postgres superuser
sudo -u postgres psql
# Create the database and user
CREATE USER pwpush_user WITH PASSWORD 'your_secure_password_here';
CREATE DATABASE pwpush_db OWNER pwpush_user;
GRANT ALL PRIVILEGES ON DATABASE pwpush_db TO pwpush_user;
# Exit
\q
Step 4: Install pgloader
pgloader is a powerful tool that handles the schema conversion and data migration between MySQL and PostgreSQL.
Ubuntu/Debian:
sudo apt-get update
sudo apt-get install pgloader
RHEL/CentOS/Fedora:
sudo dnf install pgloader
macOS (Homebrew):
brew install pgloader
Docker (alternative):
docker pull dimitri/pgloader:latest
Step 5: Create the Migration Script
Create a file named pwpush-migrate.load:
LOAD DATABASE
     FROM mysql://MYSQL_USER:MYSQL_PASS@MYSQL_HOST:3306/pwpush_db
     INTO postgresql://pwpush_user:POSTGRES_PASS@POSTGRES_HOST:5432/pwpush_db
WITH include drop, create tables, create indexes, reset sequences,
     workers = 2, concurrency = 1,
     preserve index names
SET work_mem to '128MB',
    maintenance_work_mem to '512MB'
CAST type datetime to timestamp drop default drop not null using zero-dates-to-null,
     type date drop not null drop default using zero-dates-to-null,
     type tinyint to boolean using tinyint-to-boolean
ALTER SCHEMA 'pwpush_db' RENAME TO 'public';
Replace these placeholders with your actual values:
| Placeholder | Description | Example |
|---|---|---|
| `MYSQL_USER` | MySQL username | `root` |
| `MYSQL_PASS` | MySQL password | `mysecretpass` |
| `MYSQL_HOST` | MySQL hostname | `localhost` or `mysql` |
| `POSTGRES_PASS` | PostgreSQL password | `your_secure_password_here` |
| `POSTGRES_HOST` | PostgreSQL hostname | `localhost` or `postgres` |
Note: If your passwords contain special characters (%, #, @, :, etc.), you must URL-encode them. For example, p@ssword becomes p%40ssword. See DATABASE_URL for encoding details.
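If you are unsure how to encode a password, Python's standard library can do it. A minimal sketch (the sample password `p@ss#word` is illustrative):

```shell
# URL-encode a password for use inside the connection strings above.
# safe="" ensures every reserved character (@, #, :, %, etc.) is escaped.
python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=""))' 'p@ss#word'
# prints: p%40ss%23word
```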
Step 6: Run the Migration
Using System pgloader
pgloader pwpush-migrate.load
Using Docker pgloader
If both databases are in Docker containers on the same network:
# Find your Docker network name
docker network ls
# Run pgloader connected to that network
docker run --rm -it \
--network your_docker_network \
-v $(pwd)/pwpush-migrate.load:/pwpush-migrate.load \
dimitri/pgloader:latest \
pgloader /pwpush-migrate.load
If databases are accessible via localhost:
docker run --rm -it \
--network host \
-v $(pwd)/pwpush-migrate.load:/pwpush-migrate.load \
dimitri/pgloader:latest \
pgloader /pwpush-migrate.load
Successful Output
You should see output similar to:
table name read imported errors
------------------------------ --------- --------- ---------
pushes 1234 1234 0
users 56 56 0
audit_logs 5678 5678 0
active_storage_blobs 12 12 0
active_storage_attachments 12 12 0
------------------------------ --------- --------- ---------
Total import time 5s
If you see errors, check the Troubleshooting section.
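As an extra check, you can compare row counts between the two databases. The loop below generates one `COUNT` query per migrated table (the table list mirrors the output above); pipe the statements into both `mysql` and `psql` using the same connection details as the migration script, then compare the numbers:

```shell
# Generate a COUNT query for each migrated table; feed the same statements
# to both MySQL and PostgreSQL and compare the totals.
for t in pushes users audit_logs active_storage_blobs active_storage_attachments; do
  echo "SELECT '$t' AS tbl, COUNT(*) AS row_count FROM $t;"
done
```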
Step 7: Configure PostgreSQL Permissions
pgloader creates tables owned by whichever user it connected as (often the postgres superuser). Grant permissions to your application user so Password Pusher can read and write the migrated tables.
For Docker PostgreSQL:
docker exec -it postgres psql -U postgres -d pwpush_db -c "
-- Grant permissions on existing objects
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO pwpush_user;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO pwpush_user;
GRANT USAGE, CREATE ON SCHEMA public TO pwpush_user;
-- Grant permissions on future objects
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO pwpush_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON SEQUENCES TO pwpush_user;
"
For system PostgreSQL:
sudo -u postgres psql -d pwpush_db -c "
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO pwpush_user;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO pwpush_user;
GRANT USAGE, CREATE ON SCHEMA public TO pwpush_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO pwpush_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON SEQUENCES TO pwpush_user;
"
Step 8: Reset Sequence Values
PostgreSQL sequences may not be set correctly after migration. Reset them to prevent primary key conflicts:
docker exec -it postgres psql -U postgres -d pwpush_db -c "
SELECT setval('pushes_id_seq', COALESCE((SELECT MAX(id) FROM pushes), 0) + 1, false);
SELECT setval('users_id_seq', COALESCE((SELECT MAX(id) FROM users), 0) + 1, false);
SELECT setval('audit_logs_id_seq', COALESCE((SELECT MAX(id) FROM audit_logs), 0) + 1, false);
SELECT setval('active_storage_blobs_id_seq', COALESCE((SELECT MAX(id) FROM active_storage_blobs), 0) + 1, false);
SELECT setval('active_storage_attachments_id_seq', COALESCE((SELECT MAX(id) FROM active_storage_attachments), 0) + 1, false);
"
Step 9: Update Password Pusher Configuration
Now update your Password Pusher deployment to use PostgreSQL.
Docker Compose (Recommended)
Create or update your docker-compose.yml:
services:
postgres:
image: postgres:16
restart: unless-stopped
environment:
POSTGRES_USER: pwpush_user
POSTGRES_PASSWORD: your_secure_password_here
POSTGRES_DB: pwpush_db
volumes:
- postgres-data:/var/lib/postgresql/data
pwpush:
image: docker.io/pglombardo/pwpush:stable
restart: unless-stopped
depends_on:
- postgres
ports:
- "80:80"
- "443:443"
environment:
DATABASE_URL: postgres://pwpush_user:your_secure_password_here@postgres:5432/pwpush_db
# IMPORTANT: Use the same PWPUSH_MASTER_KEY from your original deployment
# PWPUSH_MASTER_KEY: 'your-original-encryption-key'
# TLS_DOMAIN: 'pwpush.example.com'
PWP__ENABLE_LOGINS: 'true'
PWP__ENABLE_FILE_PUSHES: 'true'
volumes:
- pwpush-storage:/opt/PasswordPusher/storage
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5100/up"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
volumes:
postgres-data:
pwpush-storage:
Critical: You must use the same PWPUSH_MASTER_KEY that was used in your original MySQL deployment. Without it, encrypted data (push payloads, passphrases) cannot be decrypted.
Docker Run
docker run -d \
--name pwpush \
-p "80:80" -p "443:443" \
-e DATABASE_URL=postgres://pwpush_user:your_secure_password_here@postgres_host:5432/pwpush_db \
-e PWPUSH_MASTER_KEY='your-original-encryption-key' \
-v pwpush-storage:/opt/PasswordPusher/storage \
pglombardo/pwpush:stable
Step 10: Start and Verify
Start Password Pusher:
docker compose up -d
Check the logs for errors:
docker compose logs -f pwpush
Verify the application:
- Health check: `curl http://localhost:5100/up` should return `success`
- Access the web interface and verify you can log in (if logins are enabled)
- Check that existing pushes are accessible and display correctly
- Create a new test push to verify new data can be saved
- If using file uploads, verify existing files are accessible
Troubleshooting
Error: “role ‘appuser’ does not exist”
This error means the PostgreSQL user in your DATABASE_URL doesn’t exist.
Solution:
# Create the missing role
docker exec -it postgres psql -U postgres -c "
CREATE ROLE appuser WITH LOGIN PASSWORD 'your_password';
"
# Grant database access
docker exec -it postgres psql -U postgres -c "
GRANT ALL PRIVILEGES ON DATABASE pwpush_db TO appuser;
"
# Grant table permissions
docker exec -it postgres psql -U postgres -d pwpush_db -c "
GRANT ALL ON ALL TABLES IN SCHEMA public TO appuser;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO appuser;
GRANT USAGE, CREATE ON SCHEMA public TO appuser;
"
Better solution: Update your DATABASE_URL to use pwpush_user (or whatever user you created in Step 3) instead of appuser.
Error: “password authentication failed”
The password in DATABASE_URL doesn’t match the PostgreSQL user’s password.
Solution:
- Verify the password is correct
- If using special characters, ensure they’re URL-encoded
- Reset the password if needed:
docker exec -it postgres psql -U postgres -c "
ALTER USER pwpush_user WITH PASSWORD 'new_password';
"
Error: “relation ‘pushes’ does not exist”
The tables weren’t created during migration.
Solution:
- Check pgloader output for errors
- Verify MySQL connection details are correct
- Re-run pgloader with verbose output:
pgloader --verbose pwpush-migrate.load
Error: “permission denied for table pushes”
The database user lacks table permissions.
Solution: Re-run the permission grants from Step 7.
Encrypted Data Cannot Be Decrypted
Pushes show errors or encrypted content appears corrupted.
Causes and solutions:
- Wrong `PWPUSH_MASTER_KEY`: Ensure you're using the exact same key from your original deployment
- Data corruption during migration: Check that `payload_ciphertext` and other `*_ciphertext` columns contain data in PostgreSQL
- Column type mismatch: Verify encrypted columns are `text` type in PostgreSQL
pgloader Connection Errors
“could not connect to MySQL”:
- Verify MySQL is running and accessible
- Check hostname, port, username, and password
- For Docker, ensure both containers are on the same network
“could not connect to PostgreSQL”:
- Verify PostgreSQL is running: `docker compose ps`
- Check that port 5432 is accessible
- For Docker Compose, use the service name as hostname (e.g., `postgres`)
pgloader Hangs or Is Very Slow
Solutions:
- Reduce workers in the migration script: change `workers = 2` to `workers = 1`
- For very large databases, run during off-peak hours
- Ensure adequate memory is available for PostgreSQL
Admin Interface Stuck After Migration
This usually indicates a connection or permission issue.
Debug steps:
# Check container logs
docker compose logs pwpush
# Verify database connection from container
docker compose exec pwpush rails db:version
# Check PostgreSQL logs
docker compose logs postgres
Post-Migration Checklist
After confirming everything works:
- Verify all user accounts can log in
- Test creating and viewing pushes
- Test file uploads (if enabled)
- Verify admin dashboard access (if enabled)
- Keep MySQL backup for at least 30 days
- Update any backup scripts to target PostgreSQL
- Remove old MySQL container and data (only after extended verification)
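For the backup-script item above, a nightly `pg_dump` can replace your old `mysqldump` job. This is a sketch of a cron entry; the container name `postgres`, the user, the database, and the `/backups` path are assumptions from the compose file above:

```shell
# Example crontab entry (config fragment): nightly dump of the new
# PostgreSQL database at 02:00. Percent signs must be escaped in cron.
0 2 * * * docker exec postgres pg_dump -U pwpush_user pwpush_db > /backups/pwpush_pg_$(date +\%Y\%m\%d).sql
```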
Removing Old MySQL Setup
Only after confirming the migration is successful (wait at least a few days):
# Stop and remove MySQL container
docker stop mysql_container_name
docker rm mysql_container_name
# Remove MySQL data volume (PERMANENT - ensure you have backups!)
docker volume rm mysql_data_volume
See Also
- DATABASE_URL Configuration - Database connection string format
- Upgrades & Backups - Backup procedures for Password Pusher
- Configuration Strategies - Configuration options
- Docker Environment File - Using .env files for secrets