Use case
Auto-Expiring Resources: Invite Links, Tokens, Trials, and More
Every resource with a lifetime needs a death. Here's how to make sure it actually happens.
Almost every application creates resources that should not live forever. The specifics vary, but the pattern is universal: something is created, it's valid for a window of time, and then it needs to become invalid — ideally without anyone having to remember to do it manually.
Forgetting to expire resources isn't just messy — it's dangerous.
Security. An API token that was supposed to expire last quarter is an attack vector. A password reset link from three months ago is a phishing opportunity. Time-boxed access that never actually expires is the same as permanent access.
Compliance. GDPR, SOC 2, and HIPAA all have requirements around data retention and access control. "We meant to revoke that access" isn't an audit response. Automated expiration is a control you can point to.
Data hygiene. Expired trials that stay in "active" status skew your metrics. Unused invite links that never get cleaned up clutter your database. Promo codes that work indefinitely cost you money when someone finds an old link.
The question isn't whether to implement auto-expiration — it's how.
Some data stores handle expiration natively. Redis has EXPIRE and EXPIREAT. DynamoDB has TTL attributes that automatically delete items. MongoDB supports TTL indexes that remove documents after a specified period.
# Redis — key expires in 48 hours
SET invite:abc123 '{"email":"[email protected]","team_id":42}'
EXPIRE invite:abc123 172800
# MongoDB — TTL index deletes documents 48 hours after createdAt
db.invites.createIndex({ "createdAt": 1 }, { expireAfterSeconds: 172800 })
# DynamoDB — items with ttl attribute are auto-deleted
{
  "pk": "INVITE#abc123",
  "email": "[email protected]",
  "ttl": 1710460800
}

Database-level TTL is elegant when it fits. The data store handles everything — no application code, no scheduled jobs, no external services. But it has limitations. Redis and DynamoDB TTL only delete the record; they don't trigger side effects like sending a "your trial has expired" email, revoking downstream permissions, or logging an audit event. MongoDB TTL indexes can have delays of up to 60 seconds (and sometimes longer under load). And if your primary data store is PostgreSQL or MySQL, this option doesn't exist at all.
Good for: cache invalidation, session expiry, and any case where deletion is sufficient and side effects aren't needed.
The simplest application-level approach: don't actively expire anything. Instead, check whether a resource has expired every time it's accessed.
# Python — check expiry on access
from datetime import datetime

def redeem_invite(token):
    invite = db.invites.find_one({"token": token})
    if not invite:
        return error("Invalid invite")
    if invite["expires_at"] < datetime.utcnow():
        return error("This invite has expired")
    # proceed with redemption...
This is trivially simple to implement. You store an expires_at timestamp on the resource and compare it against the current time on every access. No background jobs, no cron, no infrastructure.
The limitation is that nothing happens when a resource expires. There's no event, no side effect, no notification. The invite link just returns "expired" the next time someone clicks it. If you need to send a "your trial ended" email, revoke API access, or update a status in your database, lazy expiration alone won't do it. It's also invisible — expired resources sit in your database looking active until someone happens to access them.
Good for: invite links, password reset tokens, magic login links — anything where "fail on access" is the only required behavior.
Run a scheduled job that scans your database for resources past their expiration time and processes them in bulk.
# Laravel — scheduled command
# app/Console/Commands/ExpireResources.php
class ExpireResources extends Command
{
    protected $signature = 'resources:expire';

    public function handle(): void
    {
        $expired = Trial::where('status', 'active')
            ->where('expires_at', '<=', now())
            ->get();

        foreach ($expired as $trial) {
            $trial->update(['status' => 'expired']);
            $trial->user->notify(new TrialExpiredNotification());
            $trial->user->revokeFeatureAccess();
        }

        $this->info("Expired {$expired->count()} trials.");
    }
}
# Schedule it every 5 minutes
Schedule::command('resources:expire')->everyFiveMinutes();

Cron polling gives you full control over side effects. You can send emails, revoke access, write audit logs, and update external systems — all in one sweep. It's straightforward, easy to test, and works with any database.
The trade-off is timing granularity. If you poll every 5 minutes, a resource might live up to 5 minutes past its intended expiration. For most use cases (trials, promo codes, shared links), this is perfectly acceptable. For security-sensitive resources like temporary elevated permissions, it might not be tight enough.
You also need to handle scale carefully. Scanning a table with millions of rows every few minutes requires proper indexing (always index expires_at) and batch processing to avoid timeouts.
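As a sketch of the indexing point, here is what that looks like in SQLite (used purely for illustration; the table and index names are hypothetical, and any relational database behaves the same way):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE trials (id INTEGER PRIMARY KEY, status TEXT, expires_at TEXT)")
# Composite index: equality filter (status) first, range filter (expires_at) second
con.execute("CREATE INDEX idx_trials_status_expiry ON trials (status, expires_at)")

plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id FROM trials WHERE status = 'active' AND expires_at <= datetime('now')"
).fetchall()
print(plan)  # the plan should show a search using idx_trials_status_expiry
```

With the index in place, the polling query narrows to exactly the expired-and-active rows instead of scanning the whole table on every run.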
Good for: trials, promo codes, temporary access — anything where a few minutes of delay is acceptable and side effects are needed.
Instead of polling, schedule a specific job to run at the exact expiration time when you create the resource.
# Ruby/Sidekiq — schedule deactivation at creation
class InviteService
  def create_invite(team, email)
    invite = Invite.create!(
      team: team,
      email: email,
      token: SecureRandom.hex(32),
      expires_at: 48.hours.from_now
    )
    # Schedule the cleanup job for the exact expiry time
    ExpireInviteJob.perform_at(invite.expires_at, invite.id)
    invite
  end
end

This gives you exact timing and individual control per resource. Each expiration fires its own job with its own side effects. There's no polling, no scanning, no batch processing.
The downsides: you need a job queue that supports delayed dispatch (Sidekiq, SQS, Bull, Laravel queues). If the resource is renewed or deleted before expiry, you need to cancel or no-op the pending job. And if your queue loses jobs (crashes, purges), the expiration silently doesn't happen. Combining this with lazy expiration as a safety net is wise.
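One way to handle the cancel-or-no-op problem is to make the job itself re-check state at run time instead of chasing queued jobs (cancellation APIs vary widely between queues). A minimal sketch, with a plain dict standing in for your real data store:

```python
from datetime import datetime, timedelta, timezone

def expire_invite_job(invite_id, db):
    """Delayed-job body: verify the invite still deserves expiring before acting."""
    invite = db.get(invite_id)
    if invite is None or invite["status"] != "active":
        return "noop"  # deleted or already expired by another path
    if invite["expires_at"] > datetime.now(timezone.utc):
        return "noop"  # renewed: a newer job will fire at the new expiry time
    invite["status"] = "expired"
    return "expired"

now = datetime.now(timezone.utc)
db = {
    "a": {"status": "active", "expires_at": now - timedelta(hours=1)},   # past due
    "b": {"status": "active", "expires_at": now + timedelta(hours=48)},  # renewed
}
print(expire_invite_job("a", db), expire_invite_job("b", db))  # expired noop
```

The guard makes stale jobs harmless, which is cheaper and more reliable than trying to track and cancel every pending job ID.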
Good for: cases where exact timing matters and you already have a robust job queue infrastructure.
External HTTP scheduling services can trigger a deactivation endpoint at a specific time. When you create a time-limited resource, you also schedule an HTTP call to your expiration endpoint.
# At invite creation time, schedule the expiration call via Recuro API
curl -X POST https://app.recurohq.com/api/jobs \
  -H "Authorization: Bearer {{token}}" \
  -H "Content-Type: application/json" \
  -d '{
    "queue": "resource-expiration",
    "url": "https://yourapp.com/api/internal/expire-invite",
    "method": "POST",
    "payload": { "invite_id": "abc123" },
    "scheduled_at": "2026-03-16T14:30:00Z",
    "headers": { "X-Internal-Key": "{{your-secret}}" }
  }'

This approach decouples scheduling from your application. You don't need a cron daemon, a polling job, or a delayed queue. The HTTP scheduler fires a request to your endpoint at the specified time, and your endpoint handles the deactivation logic and side effects. Services like Recuro also handle retries if your endpoint is temporarily down, which prevents the "the expiration job ran but the server was deploying" failure mode.
Good for: applications that don't have job queue infrastructure, or teams that want per-resource exact-time expiration without managing delayed job systems.
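Whatever framework serves the callback endpoint, the handler should verify the shared secret with a constant-time comparison and be safe to call more than once. A framework-agnostic sketch (the key value and the status-tuple return shape are illustrative assumptions):

```python
import hmac

EXPECTED_KEY = "your-secret"  # in practice, load from the environment or a secret store

def handle_expire_invite(headers: dict, payload: dict) -> tuple:
    """Handler for the scheduler's callback: auth check, then idempotent expiry."""
    provided = headers.get("X-Internal-Key", "")
    if not hmac.compare_digest(provided, EXPECTED_KEY):
        return (403, "forbidden")  # only the scheduler should reach this endpoint
    invite_id = payload.get("invite_id")
    if invite_id is None:
        return (400, "missing invite_id")
    # look up the invite here; no-op if already expired, otherwise run side effects
    return (200, "ok")

print(handle_expire_invite({"X-Internal-Key": "wrong"}, {"invite_id": "abc123"}))
print(handle_expire_invite({"X-Internal-Key": "your-secret"}, {"invite_id": "abc123"}))
```

`hmac.compare_digest` avoids leaking the key through timing differences, which a plain `==` comparison can do.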
The right choice depends on three questions:
| Question | If no | If yes |
|---|---|---|
| Do you need side effects at expiry time? | Lazy expiration or database TTL | Cron polling, delayed jobs, or HTTP scheduler |
| Does exact timing matter (within seconds)? | Cron polling (minutes-level is fine) | Delayed jobs or HTTP scheduler |
| Do you already have job queue infrastructure? | Cron polling or HTTP scheduler | Delayed jobs (leverage what you have) |
In practice, most teams combine approaches. Lazy expiration as the baseline (always check on access, never trust that the background job ran). Then add cron polling or delayed jobs for the proactive side effects. Defense in depth.
Your expiration logic will sometimes run twice — a retry after a timeout, a cron job overlapping with a delayed job, a manual cleanup after an automated one. If expiring an already-expired resource throws an error or sends a duplicate notification, you have a bug. Always check state before acting:
def expire_invite(invite_id):
    invite = db.invites.find(invite_id)
    if invite.status == "expired":
        return  # already handled — no-op
    invite.update(status="expired")
    notify_inviter(invite)
    log_audit_event("invite_expired", invite)

For trials and subscriptions, don't just cut access without warning. Send a reminder 3 days before, 1 day before, and at the moment of expiration. This is both good UX and a conversion opportunity. The implementation is the same as the expiration job — just schedule additional triggers at earlier times.
Hard cutoffs frustrate users. A trial that ends at midnight shouldn't lock someone out mid-session at 12:01 AM. Consider a grace period (a few hours to a day) where the resource is technically expired but still functional, paired with a prominent in-app banner. This is especially important for paid features and access grants.
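A grace period is easiest to reason about as a three-state check at access time; a minimal sketch, assuming a 12-hour window (the duration and state names are arbitrary):

```python
from datetime import datetime, timedelta, timezone

GRACE = timedelta(hours=12)  # assumption: a 12-hour grace window

def access_state(expires_at, now=None):
    """Classify access as 'active', 'grace' (expired but usable, show a banner), or 'locked'."""
    now = now or datetime.now(timezone.utc)
    if now < expires_at:
        return "active"
    if now < expires_at + GRACE:
        return "grace"
    return "locked"

now = datetime.now(timezone.utc)
print(access_state(now + timedelta(days=1), now))   # active
print(access_state(now - timedelta(hours=2), now))  # grace
print(access_state(now - timedelta(days=2), now))   # locked
```

The "grace" state is where the in-app banner and upgrade prompt live; only "locked" actually blocks the user.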
Every expiration should leave a trace. Who or what triggered it (automated vs. manual), when it happened, what state the resource was in before. This is essential for debugging ("why was my access revoked?"), compliance audits, and customer support. Even a simple log line is better than nothing.
If you're expiring thousands of resources at once (e.g., a promo code used by 50,000 people all ending on the same date), don't process them all in one synchronous loop. Chunk the work, use a job queue, and add rate limiting for external side effects like email sends. A query like WHERE expires_at <= NOW() AND status = 'active' LIMIT 500 with repeated execution is more resilient than processing everything in one pass.
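The chunked sweep can be sketched end to end. A runnable illustration using an in-memory SQLite database (the table and column names are hypothetical):

```python
import sqlite3
from datetime import datetime, timedelta, timezone

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE promos (id INTEGER PRIMARY KEY, status TEXT, expires_at TEXT)")
now = datetime.now(timezone.utc)
# Seed 1,200 expired promos plus a few that are still valid
con.executemany(
    "INSERT INTO promos (status, expires_at) VALUES ('active', ?)",
    [((now - timedelta(hours=1)).isoformat(),)] * 1200
    + [((now + timedelta(days=1)).isoformat(),)] * 5,
)

BATCH = 500
total = 0
while True:
    # Repeating a LIMITed query is gentler than one huge UPDATE in a single transaction
    rows = con.execute(
        "SELECT id FROM promos WHERE status = 'active' AND expires_at <= ? LIMIT ?",
        (now.isoformat(), BATCH),
    ).fetchall()
    if not rows:
        break
    con.executemany("UPDATE promos SET status = 'expired' WHERE id = ?", rows)
    con.commit()
    total += len(rows)
    # external side effects (emails, webhooks) would be enqueued per chunk here, rate-limited

print(total)  # 1200
```

Each iteration claims at most 500 rows, so a crash mid-sweep loses at most one chunk of side effects, and the next run picks up where it left off.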
Auto-expiration is a backend concern. Never rely on the client to enforce it.
Server-side enforcement is mandatory. A JWT with an exp claim is a good practice, but if your API doesn't also validate expiration server-side, anyone can ignore the client-side check. The token's expiration should be verified on every request, not just when the client decides to check.
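To make the server-side check concrete, here is a simplified stand-in for a JWT built from the standard library only (a real system should use an established JWT library; the token format here is an illustration of the exp-check logic, not a spec-compliant JWT):

```python
import base64, hashlib, hmac, json, time

SECRET = b"server-side-secret"  # illustration only; use a real key in practice

def issue(payload: dict) -> str:
    """Mint a signed token carrying an exp claim."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify(token: str) -> dict:
    """Server-side gate, run on every request: signature first, then expiry."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload["exp"] < time.time():
        raise ValueError("token expired")  # never rely on the client to notice this
    return payload

token = issue({"sub": "user-42", "exp": time.time() + 3600})
print(verify(token)["sub"])  # user-42
```

The signature check must come before the expiry check; otherwise an attacker could learn about token structure from forged inputs.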
Don't expose expiration times in URLs. A shared link like https://app.com/share/abc123?expires=1710460800 invites tampering. Store the expiration server-side and look it up by token. The URL should contain only an opaque identifier.
Revocation should be immediate. If a user manually revokes a token or an admin removes access, don't wait for the scheduled expiration. Have an explicit revoke action that takes effect immediately, independent of the TTL mechanism. The auto-expiration is a safety net, not the primary control.
Assume the expiration job might not run. Servers go down, queues get purged, cron daemons crash. Always pair background expiration with access-time validation. The expiration job handles proactive side effects (emails, audit logs, cleanup). The access-time check is the actual gate. Both should exist.
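The resulting access-time gate is small; a sketch assuming the token row carries revoked_at and expires_at fields (field names are illustrative):

```python
from datetime import datetime, timedelta, timezone

def token_allows_access(token_row: dict) -> bool:
    """The actual gate, checked on every request, independent of any background job."""
    now = datetime.now(timezone.utc)
    if token_row.get("revoked_at") is not None:
        return False  # explicit revocation always wins, immediately
    return token_row["expires_at"] > now  # lazy expiry check as the backstop

now = datetime.now(timezone.utc)
print(token_allows_access({"expires_at": now + timedelta(days=1), "revoked_at": None}))  # True
print(token_allows_access({"expires_at": now + timedelta(days=1), "revoked_at": now}))   # False
print(token_allows_access({"expires_at": now - timedelta(days=1), "revoked_at": None}))  # False
```

Even if every background job fails, this check alone keeps expired and revoked tokens from granting access.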
A robust expiration system typically layers two or three of these approaches, anchored by an expires_at column on the resource itself; every approach that touches the database will query by this field.

The simplest version that works for most teams: lazy expiration on access, plus a cron job that runs every few minutes to handle side effects. If you need exact timing without managing cron infrastructure, swap the cron job for delayed queue jobs or an HTTP scheduling service. Start simple, add complexity only when the use case demands it.
Recuro handles scheduling, retries, alerts, and execution logs. 1,000 free requests to start.
No credit card required