Background Jobs
Password Pusher includes a comprehensive background job system that automatically maintains your instance by expiring pushes, cleaning up old records, and managing cache and storage.
Note: If you are self-hosting the pglombardo/pwpush Docker container, as of release v1.53.0, these background jobs run automatically on a recurring schedule and can be managed in the Administration Dashboard.
Overview
When a push expires, the password (payload) is deleted immediately from the database. What remains is the push record with its metadata (expiration settings) and audit logs.
The background job system performs several maintenance tasks:
- Preemptively expire pushes - Check and expire pushes before users access them
- Clean up expired records - Remove old push records and metadata
- Purge storage - Remove orphaned file uploads and cache files
- Maintain database - Keep the database clean and performant
All jobs run automatically on a schedule defined in config/recurring.yml. You can also run them manually via the Administration Dashboard or CLI.
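The schedule in config/recurring.yml follows the recurring-task format used by recent Rails job backends (Solid Queue). The fragment below is only an illustrative sketch built from the schedules described later in this document; check the file shipped with your release for the authoritative contents:

```yaml
# Illustrative sketch of config/recurring.yml (Solid Queue recurring-task
# format); entry names are hypothetical, schedules match this document.
production:
  expire_pushes:
    class: ExpirePushesJob
    schedule: every 2 hours
  cleanup_cache:
    class: CleanupCacheJob
    schedule: every day at 3am
```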
Explanation & Background
Terminology
Push - One record in the Push table and its related audit log. This includes the payload and metadata about the push, such as expiration settings.
Payload - The sensitive data posted by the user, e.g. the password, text, file(s), and reference note.
Job - A unit of work that runs asynchronously outside of the main request/response cycle, often scheduled to run at a specific time or after a delay.
Periodic Expiration & Cleanup
Password Pusher bundles background tasks that can be run periodically on your instance to:
- run through all unexpired pushes, validate and conditionally expire them
- delete expired and anonymous records
Jobs
Recurring Background Jobs
Password Pusher includes 5 recurring background jobs that run automatically on a schedule:
1. ExpirePushesJob
Purpose: Preemptively expires pushes by checking all unexpired pushes and validating their expiration settings.
Schedule:
- Production: Every 2 hours
- Development: Every 4 hours
What it does:
- Scans all unexpired pushes
- Validates expiration settings (time-based and view-based)
- Deletes payloads for pushes that should be expired
- Reduces CPU and database load by expiring pushes before users access them
Note: This job preemptively expires pushes. Pushes are also checked and expired when users attempt to access them, but this job reduces the load by doing it in advance.
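The expiration check combines both conditions described above. The sketch below models the time-based and view-based rules with plain Ruby; the struct and method names are hypothetical, not the actual pwpush implementation:

```ruby
require "time"

# Illustrative model of the dual expiration check; expiring a push
# deletes only the payload, while metadata and audit data remain.
Push = Struct.new(:created_at, :expire_after_days, :views,
                  :expire_after_views, :payload, keyword_init: true) do
  def expired?(now = Time.now)
    age_in_days = (now - created_at) / 86_400.0
    age_in_days >= expire_after_days || views >= expire_after_views
  end

  def expire!
    self.payload = nil
  end
end

# An 8-day-old push with a 7-day limit: expired by time, not by views.
push = Push.new(created_at: Time.now - (8 * 86_400), expire_after_days: 7,
                views: 2, expire_after_views: 5, payload: "s3cr3t")
push.expire! if push.expired?
```

The same predicate runs both here and on user access, which is why preemptive expiration is safe: whichever path evaluates first deletes the payload.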
2. CleanUpPushesJob
Purpose: Deletes expired anonymous push records entirely.
Schedule:
- Production: Every day at 5:00 AM
- Development: Every 7 hours
What it does:
- Finds all expired pushes created by anonymous users
- Deletes the entire push record (metadata and audit logs)
- Preserves pushes created by logged-in users (for audit log purposes)
Why: Anonymous pushes don’t have accessible audit logs, so the metadata has little value after expiration. This job removes them to keep the database clean.
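The selection rule is narrow: a record is removed only when it is both expired and anonymous. A minimal sketch, using plain hashes with illustrative field names:

```ruby
# A push record is deletable only when it is expired AND has no owning
# user (user_id of nil marks an anonymous push in this toy model).
def deletable?(push)
  push[:expired] && push[:user_id].nil?
end

pushes = [
  { id: 1, expired: true,  user_id: nil },  # anonymous + expired -> deleted
  { id: 2, expired: true,  user_id: 42 },   # logged-in user -> kept for audit logs
  { id: 3, expired: false, user_id: nil }   # not yet expired -> kept
]
kept = pushes.reject { |p| deletable?(p) }
```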
3. PurgeExpiredPushesJob
Purpose: Permanently deletes expired push records after a configurable duration.
Schedule:
- Production: Every day at 4:00 AM
- Development: Every day at 4:00 AM
What it does:
- Finds expired pushes older than the purge_after setting
- Deletes the entire push record and audit logs
- Only runs if purge_after is configured (disabled by default)
Configuration: Set PWP__PURGE_AFTER environment variable (e.g., "30 days", "90 days"). Set to "disabled" to skip this job.
Note: This job applies to all pushes (both anonymous and logged-in users) after the purge duration. Use this to comply with data retention policies.
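A duration string like "30 days" resolves to a cutoff timestamp; records that expired before the cutoff are purged. The helper below is a hypothetical illustration of that arithmetic, not the actual pwpush parsing code:

```ruby
# Hypothetical sketch: turn a purge_after value such as "30 days" into
# a cutoff Time. "disabled" (the default) yields no cutoff at all.
SECONDS_PER_UNIT = {
  "days"   => 86_400,
  "weeks"  => 7 * 86_400,
  "months" => 30 * 86_400
}

def purge_cutoff(setting, now = Time.now)
  return nil if setting.nil? || setting == "disabled"
  amount, unit = setting.split
  now - (amount.to_i * SECONDS_PER_UNIT.fetch(unit))
end

cutoff = purge_cutoff("30 days")
# Pushes that expired earlier than `cutoff` would be purged.
```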
4. PurgeUnattachedBlobsJob
Purpose: Cleans up orphaned Active Storage blobs (failed or partial file uploads).
Schedule:
- Production: Every 3 days
- Development: Every 3 days
What it does:
- Finds Active Storage blobs that aren’t attached to any push
- Removes orphaned blobs and their associated files
- Frees up storage space
Use case: If file uploads fail or are interrupted, blobs may be created but not attached to pushes. This job cleans them up.
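In Rails terms, "not attached to any push" corresponds to Active Storage's `ActiveStorage::Blob.unattached` scope. The toy model below shows the idea without a Rails stack (the data shapes are hypothetical):

```ruby
# Orphan detection in miniature: any blob id with no entry in the
# attachments map came from a failed or interrupted upload.
blobs       = [101, 102, 103, 104]               # all uploaded blob ids
attachments = { 101 => :push_a, 103 => :push_b } # blob id => owning push

orphaned = blobs.reject { |blob_id| attachments.key?(blob_id) }
# `orphaned` (102 and 104) are the blobs this job would purge.
```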
5. CleanupCacheJob
Purpose: Removes old cache files to prevent disk space issues.
Schedule:
- Production: Every day at 3:00 AM
- Development: Every 6 hours
What it does:
- Cleans up Rails cache files older than 24 hours
- Cleans up Rack::Attack cache files older than 24 hours
- Removes files from tmp/cache and tmp/rack_attack_cache
Safety: The job includes safety checks to ensure it only deletes files within the tmp directory.
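A cache sweep with that kind of safety check can be sketched as follows; the method and variable names are illustrative, not the actual CleanupCacheJob code:

```ruby
require "fileutils"

# Delete files older than max_age, but only inside the given directory.
# The expand_path + start_with? guard refuses any path that resolves
# outside the cache root (e.g. via symlinks or "..").
def sweep_cache(dir, max_age: 24 * 3600, now: Time.now)
  root = File.expand_path(dir)
  Dir.glob(File.join(root, "**", "*")).each do |path|
    expanded = File.expand_path(path)
    next unless expanded.start_with?(root + File::SEPARATOR)
    next unless File.file?(expanded)
    File.delete(expanded) if (now - File.mtime(expanded)) > max_age
  end
end
```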
Job Schedule Summary
| Job | Production Schedule | Development Schedule |
|---|---|---|
| ExpirePushesJob | Every 2 hours | Every 4 hours |
| CleanUpPushesJob | Daily at 5:00 AM | Every 7 hours |
| PurgeExpiredPushesJob | Daily at 4:00 AM | Daily at 4:00 AM |
| PurgeUnattachedBlobsJob | Every 3 days | Every 3 days |
| CleanupCacheJob | Daily at 3:00 AM | Every 6 hours |
Background Worker System
As of release v1.53.0, the background worker system is included in the main pglombardo/pwpush container and runs automatically. No additional configuration is required.
Default Behavior
By default, the pglombardo/pwpush container runs both:
- Web server - Handles HTTP requests
- Background worker - Runs scheduled jobs
Both processes run in the same container using Foreman.
Disabling the Worker
To disable the background worker system (e.g., for memory-constrained environments), set the PWP__NO_WORKER environment variable to true:
Docker Run:
docker run -e PWP__NO_WORKER=true pglombardo/pwpush:stable
Docker Compose:
services:
pwpush:
image: docker.io/pglombardo/pwpush:stable
environment:
PWP__NO_WORKER: 'true'
Note: When PWP__NO_WORKER=true, background jobs will not run automatically. You can still run them manually via the Administration Dashboard or use a separate worker container (see below).
Using a Separate Worker Container
Alternatively, you can run background jobs in a separate container using the pglombardo/pwpush-worker image. This is useful for:
- Separating concerns - Isolate web server from background processing
- Scaling independently - Scale workers separately from web servers
- Resource optimization - Run workers on different hardware
- High availability - Run multiple worker containers for redundancy
Setup
Docker Compose Example:
services:
# Main web application
pwpush:
image: docker.io/pglombardo/pwpush:stable
environment:
PWP__NO_WORKER: 'true' # Disable worker in main container
DATABASE_URL: postgres://user:pass@postgres:5432/pwpush
# ... other configuration ...
depends_on:
- postgres
# Dedicated worker container
pwpush-worker:
image: docker.io/pglombardo/pwpush-worker:stable
environment:
DATABASE_URL: postgres://user:pass@postgres:5432/pwpush
# ... same configuration as main container ...
depends_on:
- postgres
postgres:
image: postgres:15
# ... database configuration ...
Important: The worker container requires:
- Same DATABASE_URL as the main container (to access the same database)
- Same environment variables and configuration
- Access to the same storage volumes (if using file storage)
- Network connectivity to the database
Manually Running the Background Jobs
Running Jobs via the Administration Dashboard
The built-in Administration Dashboard has a “Background Jobs” area.

From this page, you can run background jobs manually.
Note: As of release v1.53.0, these jobs are run automatically on a recurring schedule. Running manually is optional.

Running Manually From the CLI
Warning: The ability to run these jobs manually from the CLI may be removed in the near future, as the jobs have been moved to the background job framework managed from the Administration Dashboard described above.
These tasks live in lib/tasks/pwpush.rake and can be run as follows:
/opt/PasswordPusher/bin/pwpush daily_expiration
/opt/PasswordPusher/bin/pwpush delete_expired_and_anonymous
Heroku Example:
heroku run --app=mypwp pwpush daily_expiration
heroku run --app=mypwp pwpush delete_expired_and_anonymous
Important Notes
Data Deletion
Warning: These jobs will delete data from your Password Pusher instance. Always make backups before running jobs manually, especially in production.
The jobs are tested and reliable - they run daily on pwpush.com in production.
Expiration Behavior
Preemptive Expiration: The ExpirePushesJob preemptively expires pushes before users access them. However, pushes are also checked and expired when users attempt to access them. This dual approach ensures:
- Reduced load from preemptive expiration
- Immediate expiration if a user accesses an expired push
- No race conditions between expiration and access
On-Demand Expiration: When a user requests a secret URL, the application validates expiration settings. If the push is past its expiration, the payload is deleted immediately and the user sees “This secret link has expired.”
Security Considerations
Deleted Record Handling: Secret URLs for deleted records still show “This Secret Link Has Expired” even when the record doesn’t exist. This is intentional for security:
- Privacy: Hides whether a push ever existed at a certain URL
- Consistency: Provides a consistent user experience for expired pushes
- Data Protection: Allows deletion of anonymous push data while maintaining security
Audit Logs
Logged-in Users: Pushes created by logged-in users are never deleted by CleanUpPushesJob or PurgeExpiredPushesJob (unless purge_after is configured). This preserves audit logs for compliance and security purposes.
Anonymous Users: Anonymous push records are deleted after expiration since they don’t have accessible audit logs.
Customization
Schedule Customization: Job schedules are defined in config/recurring.yml. For advanced customization, you can override this file in your deployment.
Purge After: Configure PWP__PURGE_AFTER to automatically delete expired pushes after a duration (e.g., "30 days", "90 days"). Set to "disabled" to skip automatic purging.
For more details on job implementation and strategy, see the job source code and recurring configuration.
Other Jobs
Maintenance Mode
In maintenance mode, the application refuses to serve pages and instead displays a maintenance page for all request paths.

This is useful if you have to perform maintenance on your instance or if you would like to block access for a certain amount of time.
Enabling & Disabling
To enable maintenance mode, open a shell inside your running instance and run the following command:
./bin/rake maintenance:start
and to end the maintenance:
./bin/rake maintenance:end
More Info
This functionality is implemented using the turnout2024 gem. More features, options, and commands are documented in that gem's README.