
Worker Process

Quick Summary — TL;DR

  • A worker process is a long-running process that continuously pulls tasks from a queue and executes them, separate from your web server.
  • Scale workers horizontally — add more instances to increase throughput without changing your application code.
  • Workers must shut down gracefully (finish the current job before exiting) and report their health via heartbeats to avoid silent failures.

A worker process is a long-running background process that consumes tasks from a job queue and executes them. Unlike a web process that handles HTTP requests from users, a worker process operates behind the scenes — picking up background jobs, processing them, and reporting the result back to the queue.

Workers vs web processes

In a typical web application, you have two types of processes:

| Concern | Web process | Worker process |
|---|---|---|
| Triggered by | Incoming HTTP requests | Jobs appearing in a queue |
| Responds to | Users and API consumers | Internal job dispatchers |
| Latency priority | Must respond quickly (sub-second) | Throughput matters more than latency |
| Scaling signal | Request rate and response time | Queue depth and processing time |
| Failure impact | Users see errors immediately | Jobs retry silently; failures surface through alerts |

Web processes and worker processes typically run the same application code but with different entry points. A Laravel app runs php-fpm for web traffic and php artisan queue:work for workers. A Rails app runs puma for web and sidekiq for workers. Same codebase, different runtime behavior.

The worker lifecycle

A worker process follows a continuous loop from startup to shutdown:

  1. Start — the worker boots, loads the application, and connects to the queue broker (Redis, SQS, RabbitMQ)
  2. Poll — the worker asks the queue for available jobs. Blocking reads are common — the worker waits idle until a job appears, avoiding wasteful busy-polling
  3. Execute — the worker deserializes the job payload, runs the handler function, and produces a result
  4. Acknowledge — on success, the worker tells the queue to remove the job permanently. On failure, the job is returned for retry or moved to the dead letter queue
  5. Loop — the worker returns to step 2 and picks the next job
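The loop above can be sketched in a few lines of Python. This is a minimal illustration, not a specific library's API: `queue.Queue` stands in for a real broker (Redis, SQS, RabbitMQ), and the `run_worker`/`handlers` names are assumptions for the example.

```python
import json
import queue

def run_worker(broker: queue.Queue, handlers: dict, max_jobs=None):
    """Minimal worker loop: poll, execute, acknowledge, repeat."""
    processed = 0
    while max_jobs is None or processed < max_jobs:
        try:
            raw = broker.get(timeout=1)          # 2. Poll (blocking read)
        except queue.Empty:
            continue                             #    queue idle; keep waiting
        job = json.loads(raw)                    # 3. Execute: deserialize payload
        try:
            handlers[job["type"]](job["args"])   #    ...and run the handler
            broker.task_done()                   # 4. Acknowledge: remove the job
        except Exception:
            broker.put(raw)                      # 4. Failure: return for retry
        processed += 1                           # 5. Loop back to polling
```

A real worker would add retry counts and a dead letter queue instead of re-enqueueing failures forever; those are omitted here for brevity.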

This loop runs indefinitely. A healthy worker can run for days, weeks, or months without restarting — though most teams restart workers during deployments to pick up new code.

Scaling workers

Workers scale horizontally: run more instances to process more jobs per second. Each instance is an independent copy of the same process, pulling from the same queue.

The limit on worker count depends on what your jobs do. Each worker consumes memory, database connections, and potentially API quota. Ten workers each holding a database connection require 10 connections from the pool. Monitor resource usage as you scale and set concurrency limits to prevent exhaustion.
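One way to enforce such a limit, sketched here with a bounded semaphore capping how many jobs touch the database at once (the pool size, sleep, and counters are illustrative stand-ins for real connection handling):

```python
import threading
import time

POOL_SIZE = 3                       # illustrative connection-pool cap
pool = threading.BoundedSemaphore(POOL_SIZE)
state = {"in_use": 0, "peak": 0}    # track concurrent "connections"
lock = threading.Lock()

def process_job(job_id):
    with pool:                      # blocks while all connections are in use
        with lock:
            state["in_use"] += 1
            state["peak"] = max(state["peak"], state["in_use"])
        time.sleep(0.01)            # simulated database work
        with lock:
            state["in_use"] -= 1

# Ten concurrent jobs, but never more than POOL_SIZE in the database at once
threads = [threading.Thread(target=process_job, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```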

Graceful shutdown

When you deploy new code or scale down, workers need to stop cleanly. A graceful shutdown means:

  1. The worker receives a termination signal (SIGTERM)
  2. It stops pulling new jobs from the queue
  3. It finishes processing the current job (up to a configurable timeout)
  4. It exits with a zero status code
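Steps 1–3 can be sketched with Python's `signal` module (Unix-only; the flag and function names are illustrative). The key idea is that the SIGTERM handler only flips a flag, and the loop checks that flag between jobs, so the in-flight job always finishes:

```python
import signal

_shutdown = False

def _request_shutdown(signum, frame):
    global _shutdown
    _shutdown = True                 # 2. stop pulling new jobs

signal.signal(signal.SIGTERM, _request_shutdown)

def run(jobs):
    """Process callables from `jobs` until the list is empty or SIGTERM arrives."""
    done = []
    while jobs and not _shutdown:    # flag is checked between jobs only
        done.append(jobs.pop(0)())   # 3. finish the current job fully
    return done                      # 4. caller then exits with status 0
```

The configurable timeout from step 3 is not shown; in practice a supervisor (systemd, Kubernetes) sends SIGKILL if the process has not exited within its grace period.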

If the current job does not finish within the timeout, the worker forcefully terminates and the job's visibility lock in the queue eventually expires — making it available for another worker to retry. This is why jobs should be idempotent: a job interrupted mid-execution may run again from the start.
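A common way to make a job idempotent is to record a completion marker keyed by job ID and skip the work on a repeat run. In this sketch the in-memory set stands in for a durable store such as a database unique constraint or a Redis `SETNX` key; all names are illustrative:

```python
sent_emails = []                 # the side effect we must not duplicate
completed = set()                # stand-in for a durable completion store

def send_welcome_email(job_id, address):
    if job_id in completed:      # already ran to completion: safe to skip
        return
    sent_emails.append(address)  # perform the side effect
    completed.add(job_id)        # mark done only after the effect succeeds
```

Note the ordering: marking the job done after the side effect means a crash between the two lines still produces at-least-once behavior, which is exactly why the guard at the top is needed.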

Avoid sending SIGKILL to workers unless a graceful shutdown has timed out. SIGKILL terminates the process instantly without cleanup, potentially leaving resources in an inconsistent state.

Worker health and heartbeats

A worker process can fail in ways that are not immediately obvious: it may hang on a blocking call, run out of memory and get OOM-killed, or enter an infinite loop. Without monitoring, a dead worker looks the same as an idle one — the queue just grows.

Common health monitoring strategies:

  • Heartbeats — each worker periodically writes a timestamp to a shared store; a monitor alerts when a heartbeat goes stale
  • Queue depth alerts — if the queue keeps growing while workers appear to be running, jobs are being processed too slowly or not at all
  • Process supervision — a supervisor (systemd, Kubernetes, supervisord) restarts workers that exit unexpectedly or fail a liveness check
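A minimal heartbeat check might look like this. The dict stands in for Redis keys with a TTL, and the 30-second staleness threshold is an illustrative choice, not a standard:

```python
import time

HEARTBEAT_STALE_AFTER = 30.0          # seconds; illustrative threshold
heartbeats = {}                       # worker_id -> last-seen timestamp

def beat(worker_id, now=None):
    """Called by each worker on every loop iteration."""
    heartbeats[worker_id] = time.time() if now is None else now

def is_alive(worker_id, now=None):
    """Called by the monitor; a missing or stale beat means dead."""
    now = time.time() if now is None else now
    last = heartbeats.get(worker_id)
    return last is not None and (now - last) < HEARTBEAT_STALE_AFTER
```

Because the worker beats from inside its main loop, a hung or OOM-killed worker stops beating and the monitor distinguishes it from an idle one.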

FAQ

What is a worker process?

A worker process is a long-running background process that pulls tasks from a job queue and executes them. It runs separately from your web server and handles asynchronous work like sending emails, processing uploads, calling APIs, and running scheduled tasks.

How do worker processes differ from web processes?

Web processes handle incoming HTTP requests from users and must respond quickly. Worker processes handle background jobs from a queue and prioritize throughput over latency. They run the same application code but serve different purposes — web processes keep users happy, workers keep the system running.

How do I scale workers?

Run more instances of the worker process. Each instance independently pulls from the same queue, so adding workers increases job throughput linearly (up to the limits of your queue broker and downstream resources). Use queue depth as the scaling signal: if the queue is growing faster than workers drain it, add more workers.
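One simple queue-depth heuristic, as a sketch — the function name, parameters, and numbers are illustrative, not taken from any particular autoscaler:

```python
import math

def desired_workers(queue_depth, jobs_per_worker_per_min, drain_minutes=5):
    """Workers needed to drain the current backlog within `drain_minutes`."""
    capacity_per_worker = jobs_per_worker_per_min * drain_minutes
    return max(1, math.ceil(queue_depth / capacity_per_worker))
```

For example, a backlog of 600 jobs with workers that each clear 20 jobs per minute needs 6 workers to drain within 5 minutes.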

Worker processes are the execution layer of a job queue system, running background jobs with configurable concurrency. They share the same lifecycle pattern as a job runner — poll, execute, acknowledge — and rely on retry policies and dead letter queues to handle failures gracefully.
