Quick Summary — TL;DR
Concurrency in the context of background jobs refers to how many jobs execute at the same time. If your worker processes 5 jobs simultaneously, your concurrency is 5. Getting this number right determines whether your system is efficient or overwhelmed.
Too little concurrency and your job queue backs up — jobs wait longer than necessary. Too much concurrency and you saturate your database connections, exhaust API rate limits, or run out of memory.
The right concurrency level depends on what your jobs do:

- I/O-bound jobs (HTTP calls, sending email) spend most of their time waiting, so they tolerate high concurrency.
- CPU-bound jobs (image processing, report generation) compete for cores, so concurrency beyond the core count adds contention without adding throughput.
Concurrency means multiple jobs are in progress at the same time — they may be interleaved on a single core. Parallelism means multiple jobs are executing at the exact same instant on different cores. In practice, most job systems use parallelism (multiple worker processes or threads), and the terms are often used interchangeably.
Two jobs processing the same record simultaneously can overwrite each other's changes. For example, two concurrent jobs both read a counter as 10, both increment it to 11, and write 11 — losing one increment. Solution: use database-level locks or atomic operations.
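The lost-update race can be reproduced in-process with two threads. In this sketch, `threading.Lock` stands in for a database row lock or an atomic `UPDATE ... SET n = n + 1`; the class and method names are illustrative:

```python
import threading

class Counter:
    """A shared record that two concurrent jobs both update."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment_naive(self):
        # Unprotected read-modify-write: two jobs can both read the
        # same value, both write value + 1, and one increment is lost.
        current = self.value
        self.value = current + 1

    def increment_atomic(self):
        # The lock makes read-modify-write atomic: the in-process
        # analogue of SELECT ... FOR UPDATE or an atomic UPDATE.
        with self._lock:
            self.value += 1

counter = Counter()
jobs = [threading.Thread(target=lambda: [counter.increment_atomic() for _ in range(10_000)])
        for _ in range(2)]
for t in jobs:
    t.start()
for t in jobs:
    t.join()
# With the lock held around each update, no increments are lost
```

Swapping `increment_atomic` for `increment_naive` makes the final count nondeterministic, which is exactly the bug described above.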
If a job is dispatched twice (e.g., due to a retry), both copies may run at the same time. This is why idempotency matters — the same job running twice should produce the same result as running once.
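One common idempotency pattern is to key each job with a unique id and record which ids have been processed, so a duplicate dispatch becomes a no-op. A minimal in-memory sketch; in production the set would typically be a database table with a unique constraint on the job id, and all names here are illustrative:

```python
processed_ids = set()  # stands in for a table with a UNIQUE(job_id) constraint

def send_welcome_email(job_id, user_email):
    """Idempotent job: running it twice has the same effect as once."""
    if job_id in processed_ids:
        return "skipped"          # duplicate dispatch: do nothing
    processed_ids.add(job_id)
    # ... send the actual email here ...
    return "sent"

first = send_welcome_email("job-42", "ada@example.com")
second = send_welcome_email("job-42", "ada@example.com")  # retry of the same job
```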
Each concurrent job consumes memory, database connections, and possibly external API quota. 50 worker processes each holding a database connection can exhaust a connection pool configured for 25. Monitor resource usage and set concurrency limits accordingly.
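A counting semaphore is one way to cap how many jobs touch a scarce resource at once, regardless of total worker count. A sketch with illustrative names, where `time.sleep` stands in for a query holding a connection:

```python
import threading
import time

MAX_CONNECTIONS = 5                  # size of the connection pool
pool = threading.Semaphore(MAX_CONNECTIONS)
in_flight = 0
peak = 0
meter = threading.Lock()

def job():
    global in_flight, peak
    with pool:                       # the 6th concurrent job blocks here
        with meter:
            in_flight += 1
            peak = max(peak, in_flight)
        time.sleep(0.01)             # simulate holding a connection for a query
        with meter:
            in_flight -= 1

threads = [threading.Thread(target=job) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# peak never exceeds MAX_CONNECTIONS, even with 20 jobs in flight
```

The same idea applies to API quota: size the semaphore to the limit you must respect, not to the number of workers you happen to run.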
| System | How to set concurrency |
|---|---|
| Sidekiq | `-c 10` flag (10 threads per process) |
| Laravel Horizon | `minProcesses` and `maxProcesses` per supervisor in `config/horizon.php` |
| Celery | `-c 4` flag (4 worker processes) |
| Bull (Node.js) | `concurrency` option per queue processor |
Sometimes you want to ensure only one instance of a specific job runs at a time — for example, a daily report generator. This is called a unique job or job lock. The queue system checks if an identical job is already running before starting a new one.
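The check-then-run logic of a unique job can be sketched with an in-process registry; in a real queue system the registry would be a Redis key set with `SET key NX` or a database lock row. All names here are illustrative:

```python
import threading

running = set()              # stands in for a Redis SETNX key or a DB lock row
registry = threading.Lock()

def try_acquire(job_key):
    """Return True and record the job if no identical job is running."""
    with registry:
        if job_key in running:
            return False     # an identical job is already in progress
        running.add(job_key)
        return True

def release(job_key):
    """Free the lock when the job finishes (or fails)."""
    with registry:
        running.discard(job_key)

got_first = try_acquire("daily_report")    # lock acquired
got_second = try_acquire("daily_report")   # refused: already running
release("daily_report")
got_third = try_acquire("daily_report")    # acquired again after release
```

Note that `release` must also run on failure (e.g., in a `finally` block), or a crashed job leaves the lock held forever; production systems add a lock timeout for this reason.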
Job concurrency is the number of background jobs your workers process simultaneously. Higher concurrency processes more jobs per second but uses more resources.
Start conservative (e.g., 5 workers), monitor CPU, memory, and queue latency, then increase gradually. For I/O-bound jobs like HTTP calls, you can usually go higher than for CPU-bound work.
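The I/O-bound case is easy to see with a thread pool: twenty jobs that mostly wait finish in roughly the time of one, because their waits overlap. A sketch using `time.sleep` as a stand-in for an HTTP call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def io_job(_):
    time.sleep(0.05)       # stands in for an HTTP call that mostly waits

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=20) as pool:   # concurrency = 20
    list(pool.map(io_job, range(20)))
elapsed = time.perf_counter() - start
# Run serially this would take about 1 s; with 20 overlapping waits
# the wall-clock time stays close to a single job's 0.05 s
```

A CPU-bound version of `io_job` would see no such speedup past the core count, which is why the two workload types warrant different concurrency settings.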
You risk database connection exhaustion, memory pressure, API rate limit violations, and degraded performance for all jobs. More concurrency doesn't always mean more throughput.
Concurrency is configured at the job queue level. Jobs must be idempotent to handle concurrent execution safely, external API calls should respect rate limits, and failed concurrent jobs should retry with exponential backoff so retries don't compound the load. When workers can't keep up, backpressure builds and queues grow: a sign that you need more worker processes, or that per-worker concurrency is set too high for your resources.
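Exponential backoff is typically computed as `min(cap, base * 2^(attempt - 1))`, with random jitter added so that many jobs failing at once don't all retry at the same instant. A sketch with illustrative parameter names:

```python
import random

def backoff_delay(attempt, base=1.0, cap=300.0):
    """Seconds to wait before retry `attempt` (1-indexed).

    The delay doubles each attempt, is capped at `cap`, and uses full
    jitter (a uniform draw up to the computed delay) so failed jobs
    don't retry in lockstep and compound the load.
    """
    delay = min(cap, base * 2 ** (attempt - 1))
    return random.uniform(0, delay)

# Uncapped growth would be 1, 2, 4, 8, ... seconds; by attempt 10 that
# is 512 s, so the cap holds the delay to at most 300 s.
```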
Recuro handles cron scheduling, retries, alerts, and execution logs, so you can focus on building your product.
No credit card required