Concurrency

Quick Summary — TL;DR

  • Concurrency is the number of background jobs executing simultaneously in your worker pool.
  • Too little concurrency causes queue backlogs; too much exhausts database connections, memory, or API rate limits.
  • Guard against race conditions and duplicate execution by making jobs idempotent and using unique job locks.

Concurrency in the context of background jobs refers to how many jobs execute at the same time. If your worker processes 5 jobs simultaneously, your concurrency is 5. Getting this number right determines whether your system is efficient or overwhelmed.
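As a rough sketch of that definition (using Python's standard-library thread pool as the "worker pool" — an illustration, not how any particular queue system is implemented), the concurrency knob is simply the cap on in-flight jobs:

```python
import concurrent.futures
import threading
import time

# Counters track how many jobs are in flight at any moment.
active = 0
peak = 0
lock = threading.Lock()

def job(n):
    global active, peak
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.02)          # stands in for the job's real work
    with lock:
        active -= 1
    return n

# max_workers is the concurrency setting: at most 5 jobs run at once.
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(job, range(20)))

print(peak)  # never exceeds 5, no matter how many jobs are queued
```

Twenty jobs were queued, but the pool only ever ran five of them simultaneously — the rest waited their turn.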

Why concurrency matters

Too little concurrency and your job queue backs up — jobs wait longer than necessary. Too much concurrency and you saturate your database connections, exhaust API rate limits, or run out of memory.

The right concurrency level depends on what your jobs do: I/O-bound jobs (HTTP calls, email delivery) spend most of their time waiting and tolerate high concurrency, while CPU-bound jobs (image processing, report generation) gain little beyond one job per core.

Concurrency vs parallelism

Concurrency means multiple jobs are in progress at the same time — they may be interleaved on a single core. Parallelism means multiple jobs are executing at the exact same instant on different cores. In practice, most job systems use parallelism (multiple worker processes or threads), and the terms are often used interchangeably.

Common concurrency problems

Race conditions

Two jobs processing the same record simultaneously can overwrite each other's changes. For example, two concurrent jobs both read a counter as 10, both increment it to 11, and write 11 — losing one increment. Solution: use database-level locks or atomic operations.
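The atomic-operation fix can be sketched like this (SQLite stands in for the production database; table and column names are illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, value INTEGER)")
db.execute("INSERT INTO counters VALUES ('jobs_done', 10)")

def racy_increment():
    # Lost-update pattern: read, compute in the application, write back.
    # Two jobs interleaving here can both read 10 and both write 11.
    value = db.execute(
        "SELECT value FROM counters WHERE name = 'jobs_done'"
    ).fetchone()[0]
    db.execute(
        "UPDATE counters SET value = ? WHERE name = 'jobs_done'", (value + 1,)
    )

def atomic_increment():
    # The database performs the arithmetic in a single statement, so
    # concurrent jobs serialize on the row and no increment is lost.
    db.execute("UPDATE counters SET value = value + 1 WHERE name = 'jobs_done'")

atomic_increment()
atomic_increment()
value = db.execute("SELECT value FROM counters WHERE name = 'jobs_done'").fetchone()[0]
print(value)  # 12: both increments survive
```

The difference is where the arithmetic happens: in `racy_increment` the application computes the new value from a possibly stale read, while `atomic_increment` pushes the computation into the database, which serializes writes to the row.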

Duplicate execution

If a job is dispatched twice (e.g., due to a retry), both copies may run at the same time. This is why idempotency matters — the same job running twice should produce the same result as running once.
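One common way to get idempotency is to record each job's unique ID before doing the work and let a unique constraint catch duplicates. A minimal sketch (SQLite as the stand-in store; `charge_customer` and `invoice-42` are hypothetical names):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE processed_jobs (job_id TEXT PRIMARY KEY)")
db.execute("CREATE TABLE charges (job_id TEXT, amount INTEGER)")

def charge_customer(job_id, amount):
    try:
        # Claim the job ID first; the PRIMARY KEY rejects a second claim.
        db.execute("INSERT INTO processed_jobs VALUES (?)", (job_id,))
    except sqlite3.IntegrityError:
        return "skipped"      # duplicate dispatch: the work already happened
    db.execute("INSERT INTO charges VALUES (?, ?)", (job_id, amount))
    return "charged"

first = charge_customer("invoice-42", 100)
second = charge_customer("invoice-42", 100)   # a retry of the same job
count = db.execute("SELECT COUNT(*) FROM charges").fetchone()[0]
print(first, second, count)  # charged skipped 1
```

Running the job twice produced exactly one charge, which is the idempotency property described above.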

Resource exhaustion

Each concurrent job consumes memory, database connections, and possibly external API quota. 50 worker processes each holding a database connection can exhaust a connection pool configured for 25. Monitor resource usage and set concurrency limits accordingly.
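The 50-workers-versus-25-connections mismatch can be simulated with a bounded semaphore standing in for the connection pool:

```python
import threading

# A bounded semaphore models a database connection pool with 25 slots.
POOL_SIZE = 25
pool = threading.BoundedSemaphore(POOL_SIZE)

# 50 concurrent "jobs" each try to grab a connection without waiting.
acquired = [pool.acquire(blocking=False) for _ in range(50)]
got_connection = sum(acquired)
print(got_connection)  # 25: the other 25 jobs block or fail
```

Whether the losing jobs wait, time out, or crash depends on the database driver, but none of those outcomes is good — which is why worker concurrency and pool size must be set together.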

Controlling concurrency

System            How to set concurrency
Sidekiq           -c 10 flag (10 threads)
Laravel Horizon   processes and maxProcesses in config
Celery            -c 4 flag (4 worker processes)
Bull (Node.js)    concurrency option per queue

Unique job locks

Sometimes you want to ensure only one instance of a specific job runs at a time — for example, a daily report generator. This is called a unique job or job lock. The queue system checks if an identical job is already running before starting a new one.
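A unique job lock can be sketched as a named row in a lock table: insert the name before running, skip if the insert fails, delete it when the job finishes. (SQLite and the `daily-report` name are illustrative; real queue systems typically use Redis keys or advisory database locks for this.)

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE job_locks (job_name TEXT PRIMARY KEY)")

def run_unique(job_name, work):
    try:
        # Claim the lock; the PRIMARY KEY rejects a second claim.
        db.execute("INSERT INTO job_locks VALUES (?)", (job_name,))
    except sqlite3.IntegrityError:
        return "already running"      # an identical job holds the lock
    try:
        return work()
    finally:
        # Release the lock whether the job succeeds or raises.
        db.execute("DELETE FROM job_locks WHERE job_name = ?", (job_name,))

result = run_unique("daily-report", lambda: "report generated")

# Simulate a copy of the job still holding the lock:
db.execute("INSERT INTO job_locks VALUES ('daily-report')")
duplicate = run_unique("daily-report", lambda: "report generated")
print(result, duplicate)  # report generated already running
```

Unlike the idempotency table above, the lock is released when the job completes — it prevents overlap, not re-execution.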

FAQ

What is job concurrency?

Job concurrency is the number of background jobs your workers process simultaneously. Higher concurrency processes more jobs per second but uses more resources.

How do I choose the right concurrency level?

Start conservative (e.g., 5 workers), monitor CPU, memory, and queue latency, then increase gradually. For I/O-bound jobs like HTTP calls, you can usually go higher than for CPU-bound work.
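The I/O-bound case is easy to see in a small experiment: ten jobs that each wait 0.1 s on simulated I/O would take about 1 s sequentially, but finish in roughly a fifth of that with concurrency 5, because the threads spend their time waiting rather than computing. (A rough illustration with the standard-library thread pool; real timings vary.)

```python
import concurrent.futures
import time

def io_job(n):
    time.sleep(0.1)    # stands in for an HTTP call or database query
    return n

start = time.monotonic()
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(io_job, range(10)))
elapsed = time.monotonic() - start

# Roughly 0.2s instead of ~1.0s sequentially: the sleeps overlap.
print(round(elapsed, 1))
```

CPU-bound jobs get no such win from threads — they compete for the same cores, so concurrency beyond the core count mostly adds overhead.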

What happens if concurrency is too high?

You risk database connection exhaustion, memory pressure, API rate limit violations, and degraded performance for all jobs. More concurrency doesn't always mean more throughput.

Concurrency is configured at the job-queue level, and jobs must be idempotent to run safely alongside one another. External API calls should respect rate limits, and failed jobs should retry with exponential backoff so retries don't compound the load. When workers can't keep up, backpressure builds and queues grow — a sign that you need more worker processes or lower concurrency per worker.
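Backpressure is easiest to see with a bounded queue: when workers can't drain it, producers find it full, and that signal is what tells you to add capacity or shed load. A minimal sketch:

```python
import queue

# A bounded job queue with room for 3 pending jobs and no consumer draining it.
jobs = queue.Queue(maxsize=3)

accepted, rejected = 0, 0
for i in range(10):           # enqueue far faster than anything dequeues
    try:
        jobs.put_nowait(i)
        accepted += 1
    except queue.Full:
        rejected += 1

print(accepted, rejected)  # 3 7
```

Real queue backends usually buffer far more than three jobs, but the dynamic is the same: once producers outpace consumers, the backlog only grows until you add workers or slow the producers.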
