
Rate Limit Calculator

Check if your scheduled jobs will exceed API rate limits before they run.

Processed entirely in your browser — no data sent to any server.
The calculator shows whether you are within limits, your utilization, your request rates (per second, per minute, and per hour), and your remaining headroom. It also reports burst capacity: if all jobs fire simultaneously instead of being spread evenly, how many requests that burst contains and how long it takes to drain at the limit.

Avoid rate limit spikes automatically

Recuro spreads your scheduled jobs to avoid rate limit spikes. Built-in throttling and retry logic included.

Start scheduling for free →

Why rate limits matter for scheduled jobs and cron tasks

Every API has rate limits — a cap on how many requests you can make within a time window. When you schedule dozens or hundreds of cron jobs that all call the same API, it is easy to exceed those limits without realizing it. The result is HTTP 429 responses, failed deliveries, and data that never arrives.

The math is straightforward but often overlooked. If you have 50 jobs each making 1 request per hour, that is only 50 requests/hour, well within most API limits. But if those 50 jobs are all scheduled at the top of the hour, you create a burst of 50 simultaneous requests. An API that permits bursts of 100 requests absorbs this without trouble, but one that strictly enforces 1 request/second will reject 49 of them.
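As a sanity check, the burst-versus-average arithmetic from the 50-job example can be worked out directly (the variable names here are just for illustration):

```python
jobs = 50
requests_per_job_per_hour = 1

# Spread evenly, the average rate is tiny.
total_per_hour = jobs * requests_per_job_per_hour   # 50 requests/hour
average_per_second = total_per_hour / 3600          # ~0.014 requests/sec

# Fired together at the top of the hour, the same load is a burst of 50.
burst = jobs

# A limiter that strictly enforces 1 request/second accepts one request
# in that first second and rejects the rest.
limit_per_second = 1
accepted = min(burst, limit_per_second)   # 1
rejected = burst - accepted               # 49

print(f"{average_per_second:.4f} req/s average; {rejected} of {burst} rejected in a burst")
```

The average rate looks harmless; it is the clustering, not the volume, that triggers the 429s.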

The fix is to spread requests over time rather than clustering them. This can mean staggering cron schedules, using a queue with concurrency limits, or adding random jitter to execution times. The goal is to turn a spike into a steady stream that stays comfortably under the limit.
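One way to sketch the staggering-plus-jitter idea is to assign each job a start offset within the window instead of letting every job fire at zero (a minimal sketch; the function name and parameters are assumptions, not any particular scheduler's API):

```python
import random

def staggered_offsets(n_jobs: int, window_seconds: float,
                      jitter_seconds: float = 1.0) -> list[float]:
    """Spread n_jobs evenly across a window, adding small random jitter.

    Returns a start offset (in seconds) for each job, so the jobs form a
    steady stream across the window instead of a spike at t=0.
    """
    spacing = window_seconds / n_jobs
    offsets = []
    for i in range(n_jobs):
        jitter = random.uniform(0, jitter_seconds)
        offsets.append(min(i * spacing + jitter, window_seconds))
    return offsets

# 50 jobs across a 60-second window: one job roughly every 1.2 seconds.
offsets = staggered_offsets(50, 60.0)
```

Keeping the jitter smaller than the spacing preserves the ordering while still desynchronizing clients that share the same schedule.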

Recuro handles this automatically for your scheduled HTTP requests. Jobs are distributed across each time window to prevent bursts, and built-in retry logic with exponential backoff ensures that any 429 responses are handled gracefully without manual intervention.

Frequently Asked Questions

What strategies help avoid hitting rate limits with scheduled jobs?

The most effective strategies are spreading jobs across the time window instead of running them all at once, using a queue with concurrency limits, and adding small random delays (jitter) to each job. If you run 100 jobs every minute, firing them all at :00 creates a spike. Spreading them evenly across 60 seconds turns a burst into a steady stream. Most scheduling platforms, including Recuro, handle this automatically by distributing executions across the window.

What does HTTP 429 Too Many Requests mean and how should I handle it?

HTTP 429 is the standard status code APIs return when you exceed their rate limit. The response usually includes a Retry-After header telling you how many seconds to wait before retrying. Proper handling involves respecting the Retry-After value, implementing exponential backoff for subsequent retries, and logging the event so you can adjust your sending rate. Never retry immediately — that only makes the problem worse.
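A small helper for respecting Retry-After might look like this (a sketch; mature HTTP clients and retry libraries often handle this for you). Per the HTTP specification, the header carries either a delay in seconds or an HTTP-date:

```python
import email.utils
import time
from typing import Optional

def retry_after_seconds(header_value: str, now: Optional[float] = None) -> float:
    """Convert a Retry-After header value into a wait time in seconds.

    The header is either an integer number of seconds ("120") or an
    HTTP-date ("Wed, 21 Oct 2015 07:28:00 GMT").
    """
    if header_value.isdigit():
        return float(header_value)
    # HTTP-date form: wait until that moment, never a negative duration.
    deadline = email.utils.parsedate_to_datetime(header_value)
    current = time.time() if now is None else now
    return max(0.0, deadline.timestamp() - current)
```

Sleeping for `retry_after_seconds(response.headers["Retry-After"])` before the next attempt is the minimum the server is asking of you.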

How does exponential backoff help with rate limiting?

Exponential backoff progressively increases the delay between retries (e.g., 1s, 2s, 4s, 8s). When you hit a rate limit, this gives the API time to recover and your request quota to reset. Combined with jitter (random variation), it prevents the "thundering herd" problem where many clients retry simultaneously and overwhelm the API again. Most API client libraries include built-in exponential backoff for 429 responses.
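A minimal sketch of exponential backoff with "full jitter," where each retry waits a random amount up to an exponentially growing, capped ceiling (the names and defaults here are illustrative):

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Exponential backoff with full jitter.

    The ceiling doubles each attempt (1s, 2s, 4s, 8s, ...) up to `cap`;
    the actual delay is drawn uniformly from [0, ceiling] so that many
    clients retrying at once do not re-synchronize into another spike.
    """
    ceiling = min(cap, base * 2 ** attempt)
    return random.uniform(0.0, ceiling)
```

Without the jitter, every client that failed in the same second would retry in the same second, recreating the thundering herd the backoff was meant to prevent.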

