Maintenance Worker
The Milvaion Maintenance Worker provides essential housekeeping jobs for the Milvaion Job Scheduler system. It handles database cleanup, Redis cache management, and data archival.
Overview
| Job | Purpose | Recommended Schedule | Cron Expression |
|---|---|---|---|
| DatabaseMaintenanceJob | VACUUM, ANALYZE, REINDEX | Weekly (Sunday 03:00) | 0 0 3 * * 0 |
| OccurrenceRetentionJob | Delete old occurrences | Daily (02:00) | 0 0 2 * * * |
| FailedOccurrenceCleanupJob | Clean DLQ table | Weekly (Sunday 04:00) | 0 0 4 * * 0 |
| RedisCleanupJob | Remove orphaned cache entries | Daily (05:00) | 0 0 5 * * * |
| OccurrenceArchiveJob | Archive old occurrences to dated tables | Monthly (1st, 04:00) | 0 0 4 1 * * |
Jobs
DatabaseMaintenanceJob
Performs PostgreSQL maintenance operations to keep the database performant.
Operations:
- VACUUM - Reclaims storage from deleted/updated rows
- ANALYZE - Updates query planner statistics
- REINDEX - Rebuilds indexes (use with caution, locks tables)
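Conceptually, each enabled operation maps to a plain PostgreSQL statement issued per configured table; a minimal sketch (the exact statements are an implementation detail of the job):
-- Illustrative only: statements per configured table
VACUUM "JobOccurrences";        -- EnableVacuum: reclaim space left by deleted/updated rows
ANALYZE "JobOccurrences";       -- EnableAnalyze: refresh query planner statistics
REINDEX TABLE "JobOccurrences"; -- EnableReindex: rebuild indexes (takes exclusive locks)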
Configuration:
"DatabaseMaintenance": {
"EnableVacuum": true,
"EnableAnalyze": true,
"EnableReindex": false,
"Tables": [
"JobOccurrences",
"ScheduledJobs",
"FailedOccurrences"
]
}
Cron: 0 3 * * 0 (Every Sunday at 03:00)
OccurrenceRetentionJob
Deletes old job occurrences based on status-specific retention policies.
Retention Policy:
| Status | Default Retention |
|---|---|
| Completed | 30 days |
| Failed | 90 days |
| Cancelled | 30 days |
| TimedOut | 30 days |
Configuration:
"OccurrenceRetention": {
"CompletedRetentionDays": 30,
"FailedRetentionDays": 90,
"CancelledRetentionDays": 30,
"TimedOutRetentionDays": 30,
"BatchSize": 1000
}
Cron: 0 2 * * * (Every day at 02:00)
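A single cleanup pass is roughly equivalent to a batched delete per status. A hedged SQL sketch (the status value and timestamp column are assumptions, not the worker's actual schema):
-- Illustrative only: one batch of completed occurrences older than 30 days
DELETE FROM "JobOccurrences"
WHERE "Id" IN (
  SELECT "Id"
  FROM "JobOccurrences"
  WHERE "Status" = 2                                -- assumed value for Completed
    AND "CreatedAt" < now() - interval '30 days'    -- CompletedRetentionDays
  LIMIT 1000                                        -- BatchSize
);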
FailedOccurrenceCleanupJob
Cleans up old entries from the FailedOccurrences table (Dead Letter Queue).
Configuration:
"FailedOccurrenceRetention": {
"RetentionDays": 180,
"BatchSize": 500
}
Cron: 0 4 * * 0 (Every Sunday at 04:00)
RedisCleanupJob
Removes orphaned Redis entries that are no longer needed:
- Orphaned job cache - Cache for deleted jobs
- Stale locks - Lock entries without TTL
- Orphaned running states - Running states for inactive jobs
Configuration:
"RedisCleanup": {
"KeyPrefix": "Milvaion:JobScheduler:",
"CleanOrphanedJobCache": true,
"CleanStaleLocks": true,
"CleanOrphanedRunningStates": true,
"StaleLockHours": 24
}
Cron: 0 5 * * * (Every day at 05:00)
OccurrenceArchiveJob
Archives old occurrences to dated tables instead of deleting them. Useful for compliance and auditing.
How it works:
- Creates a dated archive table (e.g., JobOccurrences_Archive_2024_01)
- Moves old occurrences into the archive table
- Optionally creates indexes on the archive table
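In PostgreSQL terms, one archive pass is roughly equivalent to the following sketch (table and column names beyond those shown in the config are assumptions):
-- Illustrative only: archive terminal occurrences older than 90 days
CREATE TABLE IF NOT EXISTS "JobOccurrences_Archive_2024_01"
  (LIKE "JobOccurrences" INCLUDING DEFAULTS);

INSERT INTO "JobOccurrences_Archive_2024_01"
SELECT *
FROM "JobOccurrences"
WHERE "Status" IN (2, 3, 4, 5)                      -- StatusesToArchive
  AND "CreatedAt" < now() - interval '90 days';     -- ArchiveAfterDays (column name assumed)

DELETE FROM "JobOccurrences"
WHERE "Status" IN (2, 3, 4, 5)
  AND "CreatedAt" < now() - interval '90 days';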
Configuration:
"OccurrenceArchive": {
"ArchiveAfterDays": 90,
"ArchiveTablePrefix": "JobOccurrences_Archive",
"StatusesToArchive": [2, 3, 4, 5],
"BatchSize": 1000,
"CreateIndexOnArchive": true
}
Cron: 0 4 1 * * (1st day of month at 04:00)
Archive Tables Created:
JobOccurrences_Archive_2024_01
JobOccurrences_Archive_2024_02
JobOccurrences_Archive_2024_03
...
Full Configuration
{
  "MaintenanceConfig": {
    "DatabaseConnectionString": "Host=postgres;Database=MilvaionDb;...",
    "RedisConnectionString": "redis:6379",
    "OccurrenceRetention": {
      "CompletedRetentionDays": 30,
      "FailedRetentionDays": 90,
      "CancelledRetentionDays": 30,
      "TimedOutRetentionDays": 30,
      "BatchSize": 1000
    },
    "FailedOccurrenceRetention": {
      "RetentionDays": 180,
      "BatchSize": 500
    },
    "DatabaseMaintenance": {
      "EnableVacuum": true,
      "EnableAnalyze": true,
      "EnableReindex": false,
      "Tables": ["JobOccurrences", "ScheduledJobs", "FailedOccurrences"]
    },
    "RedisCleanup": {
      "KeyPrefix": "Milvaion:JobScheduler:",
      "CleanOrphanedJobCache": true,
      "CleanStaleLocks": true,
      "CleanOrphanedRunningStates": true,
      "StaleLockHours": 24
    },
    "OccurrenceArchive": {
      "ArchiveAfterDays": 90,
      "ArchiveTablePrefix": "JobOccurrences_Archive",
      "StatusesToArchive": [2, 3, 4, 5],
      "BatchSize": 1000,
      "CreateIndexOnArchive": true
    }
  }
}
Deployment
Docker Compose
services:
  maintenance-worker:
    image: milvasoft/milvaion-maintenance:latest
    environment:
      - Worker__WorkerId=maintenance-worker-01
      - Worker__RabbitMQ__Host=rabbitmq
      - Worker__Redis__ConnectionString=redis:6379
      - MaintenanceConfig__DatabaseConnectionString=Host=postgres;...
      - MaintenanceConfig__RedisConnectionString=redis:6379
    depends_on:
      - rabbitmq
      - redis
      - postgres
    restart: unless-stopped
Environment Variables
All configuration can be overridden via environment variables:
# Database
MaintenanceConfig__DatabaseConnectionString=Host=...
# Retention
MaintenanceConfig__OccurrenceRetention__CompletedRetentionDays=30
MaintenanceConfig__OccurrenceRetention__FailedRetentionDays=90
Scheduling Jobs
After deploying the worker, create scheduled jobs in the UI:
| Job | Cron Expression | Description |
|---|---|---|
| DatabaseMaintenanceJob | 0 3 * * 0 | Sunday 03:00 |
| OccurrenceRetentionJob | 0 2 * * * | Daily 02:00 |
| FailedOccurrenceCleanupJob | 0 4 * * 0 | Sunday 04:00 |
| RedisCleanupJob | 0 5 * * * | Daily 05:00 |
| OccurrenceArchiveJob | 0 4 1 * * | Monthly 1st 04:00 |
Job Results
All jobs return JSON results for monitoring:
{
  "Success": true,
  "TotalDeleted": 1523,
  "Details": {
    "Completed": 1200,
    "Failed": 200,
    "Cancelled": 123
  }
}
Best Practices
- Schedule during low traffic - Run maintenance jobs during off-peak hours
- Monitor execution time - Adjust batch sizes if jobs take too long
- Use archive for compliance - Keep OccurrenceArchiveJob if you need audit trails
- Delete or archive, not both - Choose one retention strategy
- Monitor disk space - Archive tables can grow; drop old ones periodically
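For example, an archive table that has outlived its own retention window can simply be dropped (the table name here is illustrative):
-- Illustrative only: remove an archive month that is no longer needed
DROP TABLE IF EXISTS "JobOccurrences_Archive_2024_01";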
Choosing Retention vs Archive
| Scenario | Use |
|---|---|
| No compliance requirements | OccurrenceRetentionJob (delete) |
| Need audit trail | OccurrenceArchiveJob (archive) |
| Both | Archive first, then delete very old archives |
For custom workers, see Your First Worker.