
AWS Lambda Cron Jobs: The Complete Setup Guide (2026)

Updated March 22, 2026 · Recuro Team

Quick Summary — TL;DR

  • Use Amazon EventBridge Scheduler (not the legacy CloudWatch Events) to trigger Lambda functions on a cron or rate schedule.
  • You can set this up via the AWS Console, SAM, CDK, Terraform, or the Serverless Framework — each approach is covered below with full examples.
  • AWS cron expressions use six fields (adding a year field and requiring ? in either day-of-month or day-of-week), unlike standard five-field Unix cron.
  • For production workloads, add dead-letter queues, CloudWatch alarms, and structured logging — or use an external HTTP scheduler for built-in retries and alerts.

AWS Lambda is the natural home for scheduled tasks on AWS. No servers to manage, no idle compute costs, and it scales to zero when nothing’s running. But Lambda functions don’t run on their own — they need a trigger. For scheduled execution, that trigger is Amazon EventBridge.

This guide walks through every way to set up a cron job on Lambda, from clicking through the console to infrastructure-as-code, and covers the production hardening most tutorials skip.

How Lambda cron scheduling works

The architecture is straightforward:

  1. EventBridge Scheduler holds the cron expression and fires on schedule
  2. EventBridge invokes your Lambda function with an event payload
  3. Lambda runs your code and returns a result
  4. EventBridge logs the invocation (but not your function’s success or failure — that’s on you)

EventBridge replaced CloudWatch Events as the recommended scheduling service in 2022. The old CloudWatch Events rules still work, but EventBridge Scheduler gives you more flexibility: one-time schedules, time windows, and timezone support that CloudWatch never had.
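If you prefer to create schedules programmatically, the Scheduler API takes the same pieces: a cron expression, a timezone, and a target with a role. A sketch of the request body for boto3's create_schedule — the name, ARNs, and schedule here are placeholders:

```python
def build_schedule_request(name: str, cron_body: str,
                           function_arn: str, role_arn: str) -> dict:
    """Kwargs for EventBridge Scheduler's CreateSchedule API
    (boto3: client('scheduler').create_schedule)."""
    return {
        'Name': name,
        'ScheduleExpression': f'cron({cron_body})',
        'ScheduleExpressionTimezone': 'UTC',
        'FlexibleTimeWindow': {'Mode': 'OFF'},  # fire exactly on schedule
        'Target': {
            'Arn': function_arn,
            'RoleArn': role_arn,  # role the scheduler assumes to invoke the function
            'RetryPolicy': {
                'MaximumRetryAttempts': 2,
                'MaximumEventAgeInSeconds': 3600,
            },
        },
    }

# Usage (requires boto3 and AWS credentials; not run here):
# import boto3
# boto3.client('scheduler').create_schedule(**build_schedule_request(
#     'nightly-cleanup', '0 0 * * ? *',
#     'arn:aws:lambda:us-east-1:123456789012:function:nightly-cleanup',
#     'arn:aws:iam::123456789012:role/cleanup-scheduler-role'))
```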

AWS cron expression syntax

Before setting anything up, know that AWS cron expressions are not the same as standard Unix cron expressions. AWS uses six fields instead of five, and has quirks that trip up even experienced developers:

┌───────────── minute (0–59)
│ ┌───────────── hour (0–23)
│ │ ┌───────────── day of month (1–31)
│ │ │ ┌───────────── month (1–12 or JAN–DEC)
│ │ │ │ ┌───────────── day of week (1–7 or SUN–SAT)
│ │ │ │ │ ┌───────────── year (1970–2199)
│ │ │ │ │ │
* * * * * *

Key differences from Unix cron

| Feature | Unix cron | AWS cron |
|---|---|---|
| Fields | 5 | 6 (adds year) |
| Day of week | 0–7 (Sun = 0 or 7) | 1–7 (Sun = 1) or SUN–SAT |
| Wildcards | * everywhere | Must use ? for either day-of-month or day-of-week |
| The ? wildcard | Not supported | Required — means “no specific value” |
| The L wildcard | Not supported | Last day of month (L in day-of-month) or last given weekday of the month (3L in day-of-week = last Tuesday) |
| The W wildcard | Not supported | Nearest weekday (15W = nearest weekday to the 15th) |

The ? rule is the one that catches everyone. You cannot use * for both day-of-month and day-of-week. One of them must be ?. If you want “every day,” use * * * * ? * (wildcard on day-of-month, question mark on day-of-week) or * * ? * * * (the reverse).
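The rule is mechanical enough to check in code before you deploy. A rough validator sketch — it only checks the field count and the ? rule, not the value ranges:

```python
def validate_aws_cron(expr: str) -> list[str]:
    """Check the six fields inside cron(...) for the two rules that
    most often trip people up. A sketch, not a full parser."""
    errors = []
    fields = expr.split()
    if len(fields) != 6:
        errors.append(f'expected 6 fields, got {len(fields)}')
        return errors
    day_of_month, day_of_week = fields[2], fields[4]
    # Exactly one of day-of-month / day-of-week must be '?'
    if '?' not in (day_of_month, day_of_week):
        errors.append('one of day-of-month or day-of-week must be ?')
    if day_of_month == '?' and day_of_week == '?':
        errors.append('only one of day-of-month or day-of-week may be ?')
    return errors

print(validate_aws_cron('0 0 * * ? *'))  # []
print(validate_aws_cron('0 0 * * * *'))  # ['one of day-of-month or day-of-week must be ?']
```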

Common AWS cron examples

| Schedule | AWS cron expression |
|---|---|
| Every hour | cron(0 * * * ? *) |
| Every day at midnight UTC | cron(0 0 * * ? *) |
| Every Monday at 9 AM UTC | cron(0 9 ? * MON *) |
| Every 15 minutes | cron(0/15 * * * ? *) |
| First of every month at noon | cron(0 12 1 * ? *) |
| Weekdays at 6 PM UTC | cron(0 18 ? * MON-FRI *) |
| Last day of every month | cron(0 0 L * ? *) |

AWS also supports rate expressions for simpler intervals:

rate(1 hour)
rate(15 minutes)
rate(7 days)

Rate expressions are simpler but less flexible — you can’t say “every weekday at 9 AM” with a rate. Use cron() for anything beyond a fixed interval.
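Because a rate expression is just a value and a unit, parsing one is trivial. A small sketch of the documented grammar (note AWS requires the singular unit when the value is 1, e.g. rate(1 hour)):

```python
import re

def rate_to_seconds(expr: str) -> int:
    """Convert a rate() expression to its interval in seconds."""
    m = re.fullmatch(r'rate\((\d+) (minutes?|hours?|days?)\)', expr)
    if not m:
        raise ValueError(f'invalid rate expression: {expr}')
    value, unit = int(m.group(1)), m.group(2).rstrip('s')
    return value * {'minute': 60, 'hour': 3600, 'day': 86400}[unit]

print(rate_to_seconds('rate(15 minutes)'))  # 900
print(rate_to_seconds('rate(1 hour)'))      # 3600
```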

Need help building a standard cron expression? Use the cron expression generator to create one visually, then adapt it to AWS syntax using the table above. For a deep dive into standard five-field syntax, see the cron syntax cheat sheet. If you’re migrating from a traditional crontab setup, note the syntax differences carefully.
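For simple cases the adaptation is mechanical. A hypothetical converter sketch that applies the ? rule, the day-of-week offset, and the extra year field — it deliberately does not handle ranges or steps in the day-of-week field, or expressions where both day fields are specific:

```python
def unix_to_aws_cron(unix_expr: str) -> str:
    """Convert a five-field Unix cron expression to AWS six-field syntax.
    Simplified sketch; single-value day-of-week only."""
    minute, hour, dom, month, dow = unix_expr.split()
    if dow == '*':
        dow = '?'        # AWS: one of day-of-month / day-of-week must be ?
    elif dom == '*':
        dom = '?'
    if dow.isdigit():    # Unix 0-6 (Sun=0 or 7) -> AWS 1-7 (Sun=1)
        dow = str(int(dow) % 7 + 1)
    return f'cron({minute} {hour} {dom} {month} {dow} *)'

print(unix_to_aws_cron('0 9 * * 1'))  # cron(0 9 ? * 2 *)  (Mondays at 9)
print(unix_to_aws_cron('0 0 * * *'))  # cron(0 0 * * ? *)
```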

Method 1: AWS Console (quick setup)

The fastest way to get a scheduled Lambda running. Good for prototyping, not for production.

Step 1: Create the Lambda function

  1. Open the Lambda console
  2. Click Create function → Author from scratch
  3. Set a function name (e.g., nightly-cleanup)
  4. Choose your runtime (Node.js 22.x, Python 3.13, etc.)
  5. Under Permissions, let AWS create a new execution role
  6. Click Create function

Add your handler code. Here’s a minimal Python example:

import urllib.request

def handler(event, context):
    """Called by EventBridge on schedule."""
    req = urllib.request.Request(
        'https://api.example.com/cleanup',
        method='POST',
        headers={'Authorization': 'Bearer YOUR_TOKEN'},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = resp.read().decode()
        print(f"Status: {resp.status}, Body: {body}")
        return {
            'statusCode': resp.status,
            'body': body,
        }

Or in Node.js:

export const handler = async (event) => {
  const resp = await fetch('https://api.example.com/cleanup', {
    method: 'POST',
    headers: { 'Authorization': 'Bearer YOUR_TOKEN' },
  });
  const body = await resp.text();
  console.log(`Status: ${resp.status}, Body: ${body}`);
  return { statusCode: resp.status, body };
};

Step 2: Create the EventBridge schedule

  1. Open the EventBridge Scheduler console
  2. Click Create schedule
  3. Enter a name (e.g., nightly-cleanup-schedule)
  4. Under Schedule pattern, choose Recurring schedule
  5. Select Cron-based schedule and enter your expression: cron(0 0 * * ? *)
  6. Set your timezone (EventBridge Scheduler supports timezones — use it)
  7. Under Target, select AWS Lambda → Invoke
  8. Choose your function from the dropdown
  9. Optionally set a JSON payload under Input
  10. Under Settings, configure retry policy (default: 185 retries over 24 hours — you probably want to lower this)
  11. Click Create schedule

Your function will now be invoked on the schedule you defined.

Step 3: Verify it works

Don’t wait for the schedule to fire. Test immediately:

  1. In the Lambda console, go to your function
  2. Click Test → create a test event with {} as the payload
  3. Click Test again and verify the execution succeeds
  4. Check CloudWatch Logs to confirm your output appears

Method 2: AWS SAM (infrastructure as code)

AWS SAM (Serverless Application Model) is the simplest IaC approach for Lambda. One YAML file defines everything:

template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Globals:
  Function:
    Timeout: 60
    Runtime: python3.13
    MemorySize: 128

Resources:
  CleanupFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: cleanup.handler
      CodeUri: src/
      Description: Nightly database cleanup
      Events:
        NightlySchedule:
          Type: ScheduleV2
          Properties:
            ScheduleExpression: "cron(0 0 * * ? *)"
            ScheduleExpressionTimezone: "UTC"
            RetryPolicy:
              MaximumRetryAttempts: 2
              MaximumEventAgeInSeconds: 3600
      Policies:
        - AWSLambdaBasicExecutionRole
      DeadLetterQueue:
        Type: SQS
        TargetArn: !GetAtt CleanupDLQ.Arn

  CleanupDLQ:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: cleanup-dlq
      MessageRetentionPeriod: 1209600 # 14 days

Outputs:
  FunctionArn:
    Value: !GetAtt CleanupFunction.Arn

Deploy it:

sam build
sam deploy --guided # First time — sets up the stack
sam deploy # Subsequent deploys

Key things to notice:

  • ScheduleV2 uses EventBridge Scheduler (not the old Schedule type which uses CloudWatch Events)
  • RetryPolicy limits retries to 2 instead of the default 185
  • DeadLetterQueue catches invocations that fail even after retries — without this, failed events vanish silently

Method 3: AWS CDK

If you prefer TypeScript for your infrastructure:

import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as scheduler from 'aws-cdk-lib/aws-scheduler';
import * as iam from 'aws-cdk-lib/aws-iam';
import * as sqs from 'aws-cdk-lib/aws-sqs';
import { Construct } from 'constructs';

export class CronStack extends cdk.Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    const fn = new lambda.Function(this, 'CleanupFunction', {
      runtime: lambda.Runtime.NODEJS_22_X,
      handler: 'cleanup.handler',
      code: lambda.Code.fromAsset('lambda'),
      timeout: cdk.Duration.seconds(60),
      memorySize: 128,
    });

    const dlq = new sqs.Queue(this, 'CleanupDLQ', {
      retentionPeriod: cdk.Duration.days(14),
    });

    // EventBridge Scheduler needs a role to invoke Lambda
    const schedulerRole = new iam.Role(this, 'SchedulerRole', {
      assumedBy: new iam.ServicePrincipal('scheduler.amazonaws.com'),
    });
    fn.grantInvoke(schedulerRole);
    dlq.grantSendMessages(schedulerRole);

    new scheduler.CfnSchedule(this, 'NightlySchedule', {
      scheduleExpression: 'cron(0 0 * * ? *)',
      scheduleExpressionTimezone: 'UTC',
      flexibleTimeWindow: { mode: 'OFF' },
      target: {
        arn: fn.functionArn,
        roleArn: schedulerRole.roleArn,
        retryPolicy: {
          maximumRetryAttempts: 2,
          maximumEventAgeInSeconds: 3600,
        },
        deadLetterConfig: {
          arn: dlq.queueArn,
        },
      },
    });
  }
}

Deploy:

cdk deploy

The CDK is more verbose than SAM but gives you full control over IAM roles and resource configuration. For teams already using CDK, this fits naturally into your existing stacks.

Method 4: Terraform

resource "aws_lambda_function" "cleanup" {
  filename         = "lambda.zip"
  function_name    = "nightly-cleanup"
  role             = aws_iam_role.lambda_exec.arn
  handler          = "cleanup.handler"
  runtime          = "python3.13"
  timeout          = 60
  source_code_hash = filebase64sha256("lambda.zip")
}

resource "aws_iam_role" "lambda_exec" {
  name = "cleanup-lambda-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "lambda_logs" {
  role       = aws_iam_role.lambda_exec.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

# EventBridge Scheduler
resource "aws_scheduler_schedule" "nightly" {
  name                         = "nightly-cleanup"
  group_name                   = "default"
  schedule_expression          = "cron(0 0 * * ? *)"
  schedule_expression_timezone = "UTC"

  flexible_time_window {
    mode = "OFF"
  }

  target {
    arn      = aws_lambda_function.cleanup.arn
    role_arn = aws_iam_role.scheduler.arn

    retry_policy {
      maximum_retry_attempts       = 2
      maximum_event_age_in_seconds = 3600
    }

    dead_letter_config {
      arn = aws_sqs_queue.cleanup_dlq.arn
    }
  }
}

resource "aws_iam_role" "scheduler" {
  name = "cleanup-scheduler-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "scheduler.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy" "scheduler_invoke" {
  role = aws_iam_role.scheduler.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "lambda:InvokeFunction"
      Resource = aws_lambda_function.cleanup.arn
    }]
  })
}

resource "aws_sqs_queue" "cleanup_dlq" {
  name                      = "cleanup-dlq"
  message_retention_seconds = 1209600 # 14 days
}
Deploy:
terraform init
terraform plan
terraform apply

Method 5: Serverless Framework

The most concise option if your team is already using the Serverless Framework:

serverless.yml
service: scheduled-tasks

provider:
  name: aws
  runtime: python3.13
  region: us-east-1

functions:
  cleanup:
    handler: cleanup.handler
    timeout: 60
    memorySize: 128
    events:
      - schedule:
          rate: cron(0 0 * * ? *)
          enabled: true
          input:
            task: "nightly-cleanup"

  syncData:
    handler: sync.handler
    timeout: 120
    events:
      - schedule:
          rate: rate(15 minutes)
          enabled: true
Deploy:
serverless deploy

The Serverless Framework creates the EventBridge rule and Lambda permission automatically. A few lines of YAML per schedule.

IAM permissions: what EventBridge needs

EventBridge Scheduler needs an execution role with permission to invoke your Lambda function. This is the most common source of “schedule created but function never fires” issues.

The minimum policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:nightly-cleanup"
    }
  ]
}

The trust policy on the role must allow the scheduler service:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "scheduler.amazonaws.com"
      }
    }
  ]
}

SAM and Serverless Framework handle this automatically. With CDK and Terraform, you configure it explicitly (as shown in the examples above). If you’re using the console, EventBridge Scheduler prompts you to create a role — accept the default unless you need cross-account invocation.

Production hardening

Getting a Lambda to fire on a schedule takes five minutes. Keeping it running reliably in production takes more thought.

Error handling in your function

Lambda considers an invocation “successful” if the function doesn’t throw. A function that calls an API, gets a 500 response, and returns normally is a “successful” Lambda invocation as far as EventBridge is concerned. It won’t retry, and it won’t alert.

Handle errors explicitly:

import json
import urllib.request
import urllib.error

def handler(event, context):
    try:
        req = urllib.request.Request(
            'https://api.example.com/cleanup',
            method='POST',
            headers={'Content-Type': 'application/json'},
        )
        with urllib.request.urlopen(req, timeout=30) as resp:
            body = resp.read().decode()
            status = resp.status
        if status >= 400:
            raise Exception(f"API returned {status}: {body}")
        print(json.dumps({
            'status': 'success',
            'statusCode': status,
            'body': body,
        }))
        return {'statusCode': status}
    except Exception as e:
        print(json.dumps({
            'status': 'error',
            'error': str(e),
        }))
        # Re-raise so Lambda marks this as a failed invocation
        # and EventBridge retries according to your retry policy
        raise

The key: re-raise exceptions so Lambda reports the invocation as failed. EventBridge only retries failed invocations.
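A toy model makes the distinction concrete. The "runtime" below is a stand-in for how Lambda classifies invocations, not real AWS code:

```python
def run_like_lambda(handler, event):
    """Toy model of the Lambda runtime's view: no exception means
    success; an exception means failure (and eligibility for retry)."""
    try:
        handler(event, context=None)
        return 'success'
    except Exception:
        return 'failure'

def swallowing_handler(event, context):
    try:
        raise RuntimeError('API returned 500')
    except Exception as e:
        print(f'error: {e}')  # logged, but the invocation still "succeeds"

def reraising_handler(event, context):
    try:
        raise RuntimeError('API returned 500')
    except Exception as e:
        print(f'error: {e}')
        raise  # surfaces the failure so the retry policy applies

print(run_like_lambda(swallowing_handler, {}))  # success (no retry, no alarm)
print(run_like_lambda(reraising_handler, {}))   # failure (retried per policy)
```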

Dead-letter queues

When a Lambda invocation fails even after retries, the event is discarded by default. You’ll never know it happened unless you’re watching CloudWatch Logs in real time.

Add a dead-letter queue (DLQ) to catch these failures. Every IaC example above includes a DLQ — it’s an SQS queue where EventBridge sends events that couldn’t be delivered. Check it periodically, or set up a CloudWatch alarm on the queue depth:

# CloudWatch alarm for DLQ messages
DLQAlarm:
Type: AWS::CloudWatch::Alarm
Properties:
AlarmName: cleanup-dlq-messages
MetricName: ApproximateNumberOfMessagesVisible
Namespace: AWS/SQS
Statistic: Sum
Period: 300
EvaluationPeriods: 1
Threshold: 1
ComparisonOperator: GreaterThanOrEqualToThreshold
Dimensions:
- Name: QueueName
Value: !GetAtt CleanupDLQ.QueueName
AlarmActions:
- !Ref AlertSNSTopic

CloudWatch alarms for Lambda errors

Monitor your function’s error rate directly:

LambdaErrorAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmName: cleanup-lambda-errors
    MetricName: Errors
    Namespace: AWS/Lambda
    Statistic: Sum
    Period: 300
    EvaluationPeriods: 1
    Threshold: 1
    ComparisonOperator: GreaterThanOrEqualToThreshold
    Dimensions:
      - Name: FunctionName
        Value: !Ref CleanupFunction
    AlarmActions:
      - !Ref AlertSNSTopic

This catches failed invocations within five minutes. Without it, you’re relying on someone checking the CloudWatch dashboard — which nobody does at 3 AM.

Structured logging

Use structured JSON logs so you can query them in CloudWatch Logs Insights:

import json
import time

def handler(event, context):
    start = time.time()
    # ... do work ...
    duration_ms = (time.time() - start) * 1000
    print(json.dumps({
        'level': 'info',
        'message': 'Cleanup completed',
        'duration_ms': round(duration_ms, 2),
        'records_cleaned': 142,
        'schedule_event': event,
    }))

Then query with:

fields @timestamp, message, duration_ms, records_cleaned
| filter level = "error"
| sort @timestamp desc
| limit 20

Timeout configuration

Lambda’s default timeout is 3 seconds. For a cron job that calls an external API, waits for a database query, or processes data, that’s almost certainly too short. Set it explicitly:

  • API calls: 30–60 seconds (accounts for slow responses and retries)
  • Data processing: 60–300 seconds (depends on data volume)
  • Maximum: 900 seconds (15 minutes) — if you need more, Lambda isn’t the right tool

Always set the timeout lower than your schedule interval. A function with a 15-minute timeout running every 15 minutes can overlap with itself.
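One way to keep that rule honest is to derive the timeout from the interval rather than picking it independently. A heuristic sketch — the 0.8 headroom factor is an arbitrary choice, not an AWS recommendation:

```python
LAMBDA_MAX_TIMEOUT = 900  # seconds; Lambda's hard 15-minute cap

def safe_timeout(interval_seconds: int, headroom: float = 0.8) -> int:
    """Pick a timeout below both the schedule interval (so invocations
    can't overlap) and Lambda's hard 15-minute limit."""
    return min(LAMBDA_MAX_TIMEOUT, int(interval_seconds * headroom))

print(safe_timeout(900))    # 720, for an every-15-minutes schedule
print(safe_timeout(86400))  # 900, a daily job capped at Lambda's max
```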

Concurrency control

By default, Lambda can run multiple instances of your function concurrently. For cron jobs, this usually isn’t what you want — two cleanup jobs running at the same time can conflict.

Set reserved concurrency to 1:

CleanupFunction:
  Type: AWS::Serverless::Function
  Properties:
    ReservedConcurrentExecutions: 1
    # ... rest of config

This ensures only one instance runs at a time. If EventBridge triggers the function while a previous invocation is still running, the new invocation is throttled and retried according to your retry policy.

The hidden costs of Lambda cron

Lambda cron jobs are cheap to run but expensive to operate. The compute cost for a function running once an hour is essentially zero — well within the free tier. The real costs are:

Observability overhead

A single scheduled Lambda needs:

  • CloudWatch Logs (automatic, but you pay for storage and ingestion)
  • CloudWatch alarms for errors and throttles
  • A dead-letter queue with its own alarm
  • CloudWatch Logs Insights queries to debug failures

For one function, this is manageable. For twenty scheduled functions across three AWS accounts, you’re maintaining a parallel monitoring infrastructure.

IAM complexity

Each schedule needs a role. Each role needs a trust policy and an execution policy. Each policy needs to be scoped to the right function ARN. Multiply by environments (dev, staging, production) and you’re managing dozens of IAM resources for what amounts to “call this URL every hour.”

Cross-region and cross-account scheduling

EventBridge Scheduler is regional. If your Lambda is in us-east-1 and you want to manage schedules from eu-west-1, you need cross-region configuration. Cross-account invocation adds another layer of IAM trust policies.

Cold starts

Lambda functions that run infrequently (once per hour or less) will almost always cold start. For most cron jobs this doesn’t matter — an extra 500ms on a background task is fine. But if your function needs to complete within a tight window, factor in cold start time. Provisioned concurrency eliminates cold starts but adds ongoing cost.

When to use an external scheduler instead

Lambda + EventBridge is the right choice when you’re already deep in the AWS ecosystem and have the operational maturity to manage the monitoring stack. But there are scenarios where an external HTTP scheduler is simpler:

Multiple cloud providers or hybrid infrastructure. If your scheduled tasks hit APIs across AWS, GCP, and on-prem services, managing separate scheduling systems per provider is overhead. An external scheduler works with any HTTP endpoint.

Teams without dedicated DevOps. Setting up DLQs, CloudWatch alarms, IAM roles, and structured logging for each scheduled function requires AWS expertise. An external scheduler gives you retries, alerts, and logs out of the box.

Rapid iteration. Changing a schedule in EventBridge means redeploying infrastructure. With an external scheduler, you change the cron expression in a dashboard — no deploy needed.

Centralized visibility. When you have 50 scheduled tasks across multiple services, a single dashboard showing every schedule’s status, last execution, and failure history is worth more than scattered CloudWatch alarms.

Recuro handles all of this. Consider what you just read: 80+ lines of SAM YAML, an SQS dead-letter queue, two CloudWatch alarms, an IAM role with a trust policy — all to call one URL on a schedule. With an external scheduler, that entire stack becomes a single API call with retries, alerts, and execution logs built in.

If your Lambda function already has an HTTP endpoint (via function URL or API Gateway), you can replace EventBridge + all the monitoring infrastructure with a single Recuro schedule.

Troubleshooting

Schedule created but function never fires

  1. Check the IAM role. EventBridge Scheduler needs lambda:InvokeFunction permission on your function. This is the most common issue.
  2. Verify the schedule is enabled. In the EventBridge Scheduler console, confirm the schedule state is ENABLED.
  3. Check the cron expression. Remember: AWS requires ? in either day-of-month or day-of-week. cron(0 0 * * * *) is invalid — use cron(0 0 * * ? *).
  4. Check the region. The schedule and the Lambda function must be in the same region (unless you’ve configured cross-region invocation).

Function fires but doesn’t do what you expect

  1. Check CloudWatch Logs. Every Lambda invocation logs to /aws/lambda/function-name. Look for errors or unexpected output.
  2. Test the function manually. In the Lambda console, create a test event with the same payload EventBridge sends and run it.
  3. Check the timeout. If your function times out, Lambda kills it and reports an error — but your function code won’t log anything after the cutoff.

Retries are excessive

EventBridge Scheduler defaults to 185 retries over 24 hours. For most cron jobs, this is too aggressive — if the first two retries fail, retry 183 probably won’t succeed either. Set MaximumRetryAttempts to 2 or 3 in your configuration.

Next steps


Stop managing infrastructure. Start scheduling jobs.

Recuro handles cron scheduling, retries, alerts, and execution logs — so you can focus on building your product.
