Quick Summary — TL;DR
A worker process is a long-running background process that consumes tasks from a job queue and executes them. Unlike a web process that handles HTTP requests from users, a worker process operates behind the scenes — picking up background jobs, processing them, and reporting the result back to the queue.
In a typical web application, you have two types of processes:
| Concern | Web process | Worker process |
|---|---|---|
| Triggered by | Incoming HTTP requests | Jobs appearing in a queue |
| Responds to | Users and API consumers | Internal job dispatchers |
| Latency priority | Must respond quickly (sub-second) | Throughput matters more than latency |
| Scaling signal | Request rate and response time | Queue depth and processing time |
| Failure impact | Users see errors immediately | Jobs retry silently; failures surface through alerts |
Web processes and worker processes typically run the same application code but with different entry points. A Laravel app runs `php-fpm` for web traffic and `php artisan queue:work` for workers. A Rails app runs `puma` for web and `sidekiq` for workers. Same codebase, different runtime behavior.
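The split can be sketched in a few lines. This is a minimal illustration with hypothetical names (`send_welcome_email`, `web_handle_signup`, `worker_tick`) and an in-memory deque standing in for a real broker like Redis or SQS: both entry points touch the same job function, but the web side only enqueues while the worker side executes.

```python
from collections import deque

queue = deque()  # stand-in for a real broker (Redis, SQS, RabbitMQ, ...)

def send_welcome_email(user_id):
    """Shared application code -- the same function in both processes."""
    return f"welcome email sent to {user_id}"

def web_handle_signup(user_id):
    """Web entry point: enqueue the job and respond fast."""
    queue.append(("send_welcome_email", user_id))
    return "202 Accepted"

def worker_tick():
    """Worker entry point: pull the next job and execute it."""
    name, arg = queue.popleft()
    return send_welcome_email(arg)
```

The web process never waits on the email; it hands the work off and returns immediately, which is exactly the latency split described in the table above.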
A worker process follows a continuous loop from startup to shutdown: poll the queue for the next job, execute it, report the result back (acknowledge on success, fail or retry on error), then poll again.
This loop runs indefinitely. A healthy worker can run for days, weeks, or months without restarting — though most teams restart workers during deployments to pick up new code.
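The loop can be sketched as follows. This is a simplified, in-memory version (the `jobs` deque and `poll` helper are stand-ins for a real broker client); a production worker would loop until signaled to stop, but the demo bounds itself after a couple of empty polls so it terminates.

```python
import time
from collections import deque

jobs = deque([{"id": 1}, {"id": 2}])  # hypothetical in-memory queue
done = []

def poll():
    """Fetch the next job, or None if the queue is empty."""
    return jobs.popleft() if jobs else None

def worker_loop(max_idle_polls=2):
    """Poll -> execute -> acknowledge. Bounded here for the demo;
    a real worker runs until it receives a shutdown signal."""
    idle = 0
    while idle < max_idle_polls:
        job = poll()
        if job is None:
            idle += 1
            time.sleep(0.01)  # back off briefly when the queue is empty
            continue
        idle = 0
        done.append(job["id"])  # execute the job...
        # ...then acknowledge, so the broker can delete it from the queue

worker_loop()
```

The back-off on an empty queue matters in practice: without it, an idle worker busy-polls the broker and burns CPU for nothing.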
Workers scale horizontally: run more instances to process more jobs per second. Each instance is an independent copy of the same process, pulling from the same queue.
The limit on worker count depends on what your jobs do. Each worker consumes memory, database connections, and potentially API quota. Ten workers each holding a database connection require 10 connections from the pool. Monitor resource usage as you scale and set concurrency limits to prevent exhaustion.
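One way to cap resource usage is a semaphore sized to the shared pool. The sketch below is an assumption-level illustration (the pool size and `run_job` body are invented): ten concurrent jobs compete for three "database connections", and the semaphore guarantees no more than three are ever held at once.

```python
import threading
import time

MAX_DB_CONNECTIONS = 3  # assumed pool size
pool = threading.BoundedSemaphore(MAX_DB_CONNECTIONS)
lock = threading.Lock()
in_use = 0   # connections currently held
peak = 0     # highest concurrent usage observed

def run_job(job_id):
    global in_use, peak
    with pool:  # blocks until a connection slot is free
        with lock:
            in_use += 1
            peak = max(peak, in_use)
        time.sleep(0.01)  # ...do database work...
        with lock:
            in_use -= 1

threads = [threading.Thread(target=run_job, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After the run, `peak` never exceeds `MAX_DB_CONNECTIONS`, no matter how many workers were started. The same idea applies across processes via the connection pool limit in your database driver or a pooler such as PgBouncer.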
When you deploy new code or scale down, workers need to stop cleanly. A graceful shutdown means the worker stops polling for new jobs, finishes the job it is currently running, and then exits, typically triggered by a SIGTERM signal.

If the current job does not finish within the timeout, the worker forcefully terminates and the job's visibility lock in the queue eventually expires — making it available for another worker to retry. This is why jobs should be idempotent: a job interrupted mid-execution may run again from the start.
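A common way to make a job idempotent is to record a deduplication key before acting. This is a minimal sketch with invented names (`send_invoice`, an in-memory `processed` set); in production the check-and-record step would be a unique constraint in your database so it stays atomic under concurrency.

```python
processed = set()  # in production: a unique key / constraint in your database
emails_sent = []

def send_invoice(job_id, customer):
    """Idempotent job: re-running the same job_id is a no-op."""
    if job_id in processed:
        return "skipped"  # already handled on a previous attempt
    emails_sent.append(customer)  # the side effect we must not duplicate
    processed.add(job_id)
    return "sent"
```

If a worker dies after sending the email but before acknowledging, the retry hits the `processed` check and skips the side effect instead of emailing the customer twice.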
Avoid sending SIGKILL to workers unless a graceful shutdown has timed out. SIGKILL terminates the process instantly without cleanup, potentially leaving resources in an inconsistent state.
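The SIGTERM-then-finish pattern looks like this in outline. The demo is self-contained: it raises SIGTERM from inside the first job to simulate a deploy, and the loop checks the shutdown flag only between jobs, so the in-flight job always completes. (Assumes a Unix-like environment and the main thread; names like `worker_loop` are illustrative.)

```python
import signal

shutting_down = False

def handle_sigterm(signum, frame):
    """Don't exit here -- just flag the loop to stop after the current job."""
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

pending = ["a", "b", "c"]
completed = []

def worker_loop():
    while pending and not shutting_down:  # flag checked between jobs only
        job = pending.pop(0)
        completed.append(job)             # the in-flight job runs to completion
        if job == "a":
            # demo only: deliver SIGTERM to ourselves mid-run,
            # as a deploy or scale-down event would
            signal.raise_signal(signal.SIGTERM)

worker_loop()
```

The key design choice is that the handler does nothing but set a flag; all cleanup happens in the loop's normal control flow, which is what makes the shutdown "graceful".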
A worker process can fail in ways that are not immediately obvious: it may hang on a blocking call, run out of memory and get OOM-killed, or enter an infinite loop. Without monitoring, a dead worker looks the same as an idle one — the queue just grows.
Common health monitoring strategies include heartbeat timestamps the worker updates after each job, liveness probes from your process supervisor, and alerts on queue depth or the age of the oldest job.
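The heartbeat approach can be sketched in a few lines. The names here (`beat`, `stale_workers`, the in-memory `heartbeats` dict) are illustrative; a real setup would write the timestamp to Redis or a database so a separate monitor process can read it.

```python
import time

heartbeats = {}  # worker_id -> last heartbeat (epoch seconds)

def beat(worker_id):
    """Called by each worker after every job (or every poll cycle)."""
    heartbeats[worker_id] = time.time()

def stale_workers(timeout_s=30, now=None):
    """Workers whose last heartbeat is older than the timeout are presumed dead."""
    now = now if now is not None else time.time()
    return [w for w, t in heartbeats.items() if now - t > timeout_s]

beat("worker-1")
heartbeats["worker-2"] = time.time() - 120  # simulate a worker that stopped beating
```

This catches exactly the silent failures described above: a hung or OOM-killed worker stops beating, so it shows up as stale even though its process table entry (or lack of one) tells you nothing.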
A worker process is a long-running background process that pulls tasks from a job queue and executes them. It runs separately from your web server and handles asynchronous work like sending emails, processing uploads, calling APIs, and running scheduled tasks.
Web processes handle incoming HTTP requests from users and must respond quickly. Worker processes handle background jobs from a queue and prioritize throughput over latency. They run the same application code but serve different purposes — web processes keep users happy, workers keep the system running.
Run more instances of the worker process. Each instance independently pulls from the same queue, so adding workers increases job throughput linearly (up to the limits of your queue broker and downstream resources). Use queue depth as the scaling signal: if the queue is growing faster than workers drain it, add more workers.
Worker processes are the execution layer of a job queue system, running background jobs with configurable concurrency. They share the same lifecycle pattern as a job runner — poll, execute, acknowledge — and rely on retry policies and dead letter queues to handle failures gracefully.
Recuro handles cron scheduling, retries, alerts, and execution logs — so you can focus on building your product.
No credit card required