Rate Limiting in NestJS That Actually Works
ThrottlerModule is the right starting point. Here's the production config, the Redis-backed shared store, and per-route customization.
NestJS has an official rate-limiting package, @nestjs/throttler, and it's the right choice for the framework. The defaults work in development. Production requires three changes.
Install + basic setup
```bash
npm install @nestjs/throttler
```
```typescript
// app.module.ts
import { Module } from '@nestjs/common';
import { ThrottlerModule, ThrottlerGuard } from '@nestjs/throttler';
import { APP_GUARD } from '@nestjs/core';

@Module({
  imports: [
    ThrottlerModule.forRoot([
      { name: 'short', ttl: 1000, limit: 10 },    // 10/sec burst
      { name: 'medium', ttl: 10_000, limit: 30 }, // 30/10 sec sustained
      { name: 'long', ttl: 60_000, limit: 100 },  // 100/min absolute
    ]),
  ],
  providers: [
    { provide: APP_GUARD, useClass: ThrottlerGuard },
  ],
})
export class AppModule {}
```
Multiple windows give you both burst and sustained limits in one config. Most production apps want all three tiers.
Production change 1: trust proxy
Without this, you're rate-limiting your reverse proxy as a single IP:
```typescript
// main.ts
import { NestFactory } from '@nestjs/core';
import { NestExpressApplication } from '@nestjs/platform-express';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create<NestExpressApplication>(AppModule);
  app.set('trust proxy', 1); // <-- this line
  await app.listen(3000);
}
bootstrap();
```
Adjust the 1 to the number of proxy hops in your topology.
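If you're unsure how many hops you have, it helps to see the hop arithmetic spelled out. The sketch below is an illustration of how a hop count resolves the client IP from `X-Forwarded-For`; `clientIp` is an invented helper, not Express's actual proxy-addr implementation (no IP validation, no CIDR support):

```typescript
// Simplified model of how req.ip is chosen when `trust proxy` is a hop count.
function clientIp(
  remoteAddr: string, // the socket peer, i.e. the last hop
  xForwardedFor: string | undefined,
  trustedHops: number,
): string {
  // Full chain, left to right: [claimed client, intermediate proxies..., socket peer]
  const forwarded = xForwardedFor
    ? xForwardedFor.split(',').map((s) => s.trim())
    : [];
  const chain = [...forwarded, remoteAddr];
  // Trusting N hops discards the last N addresses and takes the next one.
  const idx = Math.max(0, chain.length - 1 - trustedHops);
  return chain[idx];
}

// One proxy hop: the X-Forwarded-For entry is the client.
clientIp('10.0.0.1', '203.0.113.7', 1); // → '203.0.113.7'
// Trust set too low behind two proxies: you rate-limit the inner proxy instead.
clientIp('10.0.0.1', '203.0.113.7, 10.0.0.2', 1); // → '10.0.0.2'
```

The second call is the failure mode to watch for: a trust value lower than your real hop count collapses all traffic into your proxies' IPs, while a value higher than it lets clients spoof their own.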
Production change 2: Redis storage
```bash
npm install @nest-lab/throttler-storage-redis ioredis
```
```typescript
// app.module.ts
import { ThrottlerStorageRedisService } from '@nest-lab/throttler-storage-redis';
import { Redis } from 'ioredis';

// Replaces the forRoot() registration in the imports array:
ThrottlerModule.forRootAsync({
  useFactory: () => ({
    throttlers: [
      { name: 'short', ttl: 1000, limit: 10 },
      { name: 'long', ttl: 60_000, limit: 100 },
    ],
    storage: new ThrottlerStorageRedisService(new Redis(process.env.REDIS_URL!)),
  }),
}),
```
Now all instances share counters. Without this, scaling horizontally weakens your rate limits proportionally.
Production change 3: per-route overrides
Auth endpoints need stricter limits than general API:
```typescript
import { Controller, Post, Body } from '@nestjs/common';
import { Throttle } from '@nestjs/throttler';

@Controller('auth')
export class AuthController {
  @Post('login')
  @Throttle({ default: { ttl: 5 * 60_000, limit: 5 } })
  async login(@Body() body: any) {
    // 5 attempts per 5 minutes
  }

  @Post('reset-password')
  @Throttle({ default: { ttl: 60_000, limit: 3 } })
  async reset(@Body() body: any) {
    // 3 attempts per minute; password reset is highly abuse-prone
  }
}
```
For very high-volume endpoints (autocomplete, polling), increase the limit:
```typescript
@Get('autocomplete')
@Throttle({ default: { ttl: 1000, limit: 50 } })
async autocomplete() { /* ... */ }
```
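Conversely, some routes shouldn't be throttled at all. With the guard registered globally, @nestjs/throttler's `@SkipThrottle()` decorator exempts a route or a whole controller; health checks hit by orchestrator probes are the usual candidate (the `HealthController` below is a sketch, not part of the setup above):

```typescript
import { Controller, Get } from '@nestjs/common';
import { SkipThrottle } from '@nestjs/throttler';

// Kubernetes/load-balancer probes can fire more often than any sane per-IP
// limit; exempt them rather than inflating the global tiers to accommodate them.
@SkipThrottle()
@Controller('health')
export class HealthController {
  @Get()
  check() {
    return { status: 'ok' };
  }
}
```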
Per-user rate limiting
Override the tracker for authenticated routes:
```typescript
// src/throttler/user-throttler.guard.ts
import { Injectable } from '@nestjs/common';
import { ThrottlerGuard } from '@nestjs/throttler';

@Injectable()
export class UserThrottlerGuard extends ThrottlerGuard {
  protected async getTracker(req: Record<string, any>): Promise<string> {
    // Authenticated requests are tracked per user; anonymous ones fall back to IP.
    return req.user?.id ?? req.ip;
  }
}
```
Apply per-controller or globally.
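For the global case, this is a sketch of the provider swap, assuming the guard file above; it replaces the stock `ThrottlerGuard` registration under the same `APP_GUARD` token:

```typescript
// app.module.ts: the user-aware guard takes the place of ThrottlerGuard.
providers: [
  { provide: APP_GUARD, useClass: UserThrottlerGuard },
],
```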
Verifying
A burst test:
```bash
for i in {1..15}; do
  curl -s -o /dev/null -w "%{http_code}\n" -X POST \
    http://localhost:3000/auth/login -d 'email=a@b.c&password=foo'
done
```
The first 5 attempts should return 401 (or 200 with valid credentials); the remaining 10 should return 429.
What this doesn't solve
Rate limiting is one layer:
- IP-reputation firewall is a different layer (use SecureNow Firewall)
- Credential-stuffing detection is a different problem (see credential-stuffing guard for NestJS)
- Behavioral abuse detection (cart cycling, etc.) is yet another layer
Frequently Asked Questions
Should I use the default Throttler or a custom Guard?
Throttler for general rate limits across all routes; custom decorators (`@Throttle()`) to override per-route. Custom Guard only for non-rate-limit logic (e.g., credential stuffing) layered on top.
Does the in-memory storage work in production?
Only for single-instance deployments. With multiple replicas (Kubernetes, ECS, PM2 cluster), the throttle counts diverge per instance. Use the Redis storage adapter.
How do I configure trust proxy?
Set `app.set('trust proxy', 1)` in main.ts after creating the Express app. Without it, the throttler counts every CDN/load-balancer IP as one rate-limit bucket.
What about per-user rate limiting?
Override `getTracker()` in a custom ThrottlerGuard subclass. Return the user ID for authenticated requests, fall back to IP otherwise.
Recommended reading
- If your team uses Sentry for frontend errors and needs backend distributed tracing without doubling the Sentry bill, here's the OpenTelemetry path that doesn't make you choose.
- Five approaches to bot blocking in Express, ranked by effort vs. effectiveness. From a 5-line allowlist to a full IP-reputation firewall, all without Cloudflare, AWS WAF, or any new infrastructure.
- Fastify hooks (onRequest) and the SecureNow preload both work cleanly. Here's the production setup for IP blocking and user-agent filtering.