Quick Summary — TL;DR
A job runner is the process responsible for picking background jobs from a job queue and executing them. The queue holds work; the runner does the work. Without a runner, jobs sit in the queue indefinitely. Every background job system has some form of runner — it may be called a worker, consumer, processor, or executor, but the role is the same.
The distinction matters because they fail independently. If your queue goes down, jobs cannot be enqueued — but existing jobs are safe. If your runner goes down, new jobs keep piling up in the queue but nothing gets processed. Monitoring both is critical: queue depth tells you if runners are keeping up, and runner health tells you if jobs are being executed at all.
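The two health signals above can be folded into a simple triage rule. A minimal sketch (the function name and inputs are illustrative, not from any monitoring tool):

```python
def triage(queue_depth_growing: bool, runner_heartbeat_ok: bool) -> str:
    """Map the two health signals to a diagnosis."""
    if not runner_heartbeat_ok:
        # Runner failure mode: jobs pile up but nothing is processed.
        return "runners down"
    if queue_depth_growing:
        # Runners are alive but not keeping up with the enqueue rate.
        return "runners behind: add workers"
    return "healthy"
```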
| Concern | Job Queue | Job Runner |
|---|---|---|
| Role | Stores and delivers jobs | Picks up and executes jobs |
| Technology | Redis, SQS, RabbitMQ, database | Long-running process (often the same app) |
| Scales by | Storage and throughput capacity | Adding more worker instances |
| Failure mode | Jobs cannot be enqueued | Jobs pile up unprocessed |
Every job runner follows the same core loop, regardless of the framework:

1. Pull a job from the queue, claiming it atomically so no other runner can take it (e.g., Redis `BRPOPLPUSH`, SQS visibility timeout).
2. Execute the job's handler.
3. On success, acknowledge and delete the job; on failure, release it for retry.

A single runner process handles one job at a time (in most implementations). To process jobs in parallel, you run multiple instances of the runner — each instance is called a worker. Ten workers means ten jobs executing simultaneously.
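The core loop can be sketched in a few lines. This uses Python's in-memory `queue.Queue` as a stand-in for a real broker such as Redis or SQS; `handler` and `max_empty_polls` are illustrative names, not any framework's API:

```python
import queue

def run_worker(jobs: queue.Queue, handler, max_empty_polls: int = 3) -> int:
    """Minimal runner loop: pull, execute, acknowledge (or release for retry)."""
    processed = 0
    empty_polls = 0
    while empty_polls < max_empty_polls:
        try:
            job = jobs.get(timeout=0.1)   # 1. pull a job (blocking pop)
        except queue.Empty:
            empty_polls += 1              # demo only: a real runner polls forever
            continue
        empty_polls = 0
        try:
            handler(job)                  # 2. execute the job's handler
        except Exception:
            jobs.put(job)                 # 3a. failure: release the job for retry
        finally:
            jobs.task_done()              # 3b. acknowledge this delivery
        processed += 1
    return processed
```

A real broker performs the claim atomically on its side; here the in-memory queue's `get` plays that role.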
Concurrency can be achieved in two ways:

- **Multiple threads in one process:** a single runner executes several jobs concurrently on separate threads (Sidekiq's default model).
- **Multiple independent processes:** you start several copies of the runner; this is how Laravel's `queue:work` and Celery operate.

The right number of workers depends on what your jobs do. I/O-bound jobs (HTTP calls, database queries) can handle higher concurrency because they spend most of their time waiting. CPU-bound jobs (image processing, data crunching) should match the number of available CPU cores.
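Thread-based concurrency can be sketched with the standard library. The names (`worker_loop`, `start_workers`) are illustrative, and the core-count heuristic follows the guidance above:

```python
import os
import queue
import threading

def worker_loop(jobs: queue.Queue, handler) -> None:
    """One thread = one worker running the pull-and-execute loop."""
    while True:
        try:
            job = jobs.get(timeout=0.2)
        except queue.Empty:
            return  # demo only: a real worker keeps polling
        handler(job)
        jobs.task_done()

def start_workers(jobs: queue.Queue, handler, count=None) -> list:
    # Heuristic from the text: CPU-bound jobs -> one worker per core;
    # I/O-bound jobs can go higher because they spend most of their time waiting.
    count = count or os.cpu_count() or 1
    threads = [threading.Thread(target=worker_loop, args=(jobs, handler))
               for _ in range(count)]
    for t in threads:
        t.start()
    return threads
```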
When you deploy new code or scale down workers, runners need to stop without corrupting in-progress jobs. A graceful shutdown:

1. Stops pulling new jobs from the queue.
2. Lets the in-progress job finish, up to a configured timeout.
3. Exits once the job completes, or re-enqueues it if the timeout expires.
If a runner is killed without a graceful shutdown (e.g., SIGKILL), the in-progress job's lock eventually expires and the queue makes it available for retry — assuming the job is idempotent.
| Framework | Runner command | Concurrency model |
|---|---|---|
| Sidekiq (Ruby) | `bundle exec sidekiq` | Multi-threaded (default 10 threads) |
| Celery (Python) | `celery -A app worker` | Multi-process (prefork) or multi-thread (eventlet/gevent) |
| Laravel (PHP) | `php artisan queue:work` | Single-process; run multiple for parallelism |
| BullMQ (Node.js) | `new Worker('queue', processor)` | Configurable concurrency per worker instance |
A job runner is a long-running process that continuously pulls jobs from a queue and executes them. It is the "doer" in a background job system — the queue stores work, the runner performs it.
A job queue is the data store that holds pending jobs. A job runner is the process that reads jobs from the queue and executes them. The queue is passive storage; the runner is the active executor. Both are necessary — a queue without a runner is a pile of unprocessed work, and a runner without a queue has nothing to do.
Start with a small number (2–5 workers), monitor queue depth and resource usage (CPU, memory, database connections), and increase gradually. For I/O-bound jobs like HTTP requests, you can run more workers per CPU core. For CPU-bound jobs, match workers to core count. Watch for rate limit violations on external APIs as you scale up.
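As a rough starting point for sizing, Little's law gives an estimate: the concurrency you need is roughly the arrival rate times the average job duration, padded for bursts. This is an illustrative heuristic, not a substitute for the monitoring described above:

```python
import math

def workers_needed(arrival_rate_per_s: float, avg_job_seconds: float,
                   headroom: float = 1.5) -> int:
    """Little's law sizing: concurrency ~ arrival rate x service time.
    `headroom` pads the estimate for bursts (illustrative default)."""
    return max(1, math.ceil(arrival_rate_per_s * avg_job_seconds * headroom))

# e.g. 10 jobs/s, each averaging 0.5 s, needs about 8 workers with 1.5x headroom
```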
Job runners execute background jobs pulled from a job queue, with concurrency controlled by the number of worker instances. Failed jobs follow a retry policy before landing in the dead letter queue. For jobs that need to run on a schedule rather than on demand, see job scheduling.
Recuro handles cron scheduling, retries, alerts, and execution logs, so you can focus on building your product.
No credit card required