Sentry's Per-Event Pricing: When Frontend Errors Break the Budget
Sentry charges per error event captured. Here's the pricing math, what triggers cost spikes, and how teams keep their bills predictable as traffic grows.
Sentry was built for frontend errors and is unmatched at that one job. The bill is fine — until it isn't. Three patterns will spike your Sentry invoice 3–10× overnight, and recognizing them is the difference between a $40 month and a $400 month.
For the bigger-picture comparison of when Sentry is the right choice and when to look elsewhere, see the SecureNow vs Sentry page.
The pricing model in plain English
Sentry's free tier gives you 5,000 errors and 10,000 transactions per month. The paid tiers price additional volume per event:
- Errors: ~$0.00029 per event after the included quota (Team plan)
- Transactions: ~$0.000091 per event
- Replays: ~$0.0028 per replay session
- Cron monitors, attachments, performance units: separate line items, each priced per use
For a small team, the math works out fine. A web app with 50K errors/month at $0.00029 each is $14.50, plus the $26/month base = ~$40/month. Predictable.
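That arithmetic is easy to script if you want to sanity-check your own volumes. A minimal sketch, hard-coding the approximate rates quoted above (illustrative assumptions, not official pricing):

```typescript
// Rough monthly Sentry bill estimator. Rates are the approximate figures
// quoted above and are illustrative assumptions, not official pricing.
const BASE_MONTHLY = 26;          // Team plan base, $/month
const PER_ERROR = 0.00029;        // $ per error event
const PER_TRANSACTION = 0.000091; // $ per transaction event
const PER_REPLAY = 0.0028;        // $ per replay session

function estimateMonthlyBill(errors: number, transactions = 0, replays = 0): number {
  return BASE_MONTHLY + errors * PER_ERROR + transactions * PER_TRANSACTION + replays * PER_REPLAY;
}

// The example from the text: 50K errors/month lands at roughly $40.
console.log(estimateMonthlyBill(50_000).toFixed(2)); // "40.50"
```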
The problem is variance. Three scenarios blow this up.
Spike pattern 1: a bad deploy that errors on every request
You ship a typo at 4pm on Friday. Every request now throws. By Monday you've captured 2.4 million error events. At $0.00029 each that's $696 — and your monthly tier was $40.
Sentry has spike protection that caps event ingestion at 5× your tier limit by default. If you've turned it off (or set the threshold loosely), the bill is real.
Mitigation: spike protection on, set conservatively (3× tier). Plus alerting on event volume itself — if you're suddenly ingesting 10× normal events, that's a deploy regression alert, not just a billing alert.
Spike pattern 2: a third-party script that throws in user browsers
A vendor's analytics SDK starts throwing "Cannot read property 'foo' of undefined" after their deploy at 2am UTC. It runs on every page view across all your users. Tens of thousands of events per hour.
This pattern is brutal because (a) the bug isn't yours, (b) you can't fix it, and (c) the events look legitimate — they're real JavaScript errors from real user sessions.
Mitigation: inbound filters at the Sentry project level for stack frames originating from third-party domains. Or ignoreErrors regex at the SDK level. Both require knowing about the issue first; alerting helps.
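A minimal sketch of the SDK-level filter, using the browser SDK's ignoreErrors and denyUrls options; the vendor domain and message pattern are placeholders for whatever the offending script actually is:

```typescript
import * as Sentry from "@sentry/browser";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  // Message-based filter. Blunt: it also drops first-party errors with the same message.
  ignoreErrors: [/Cannot read propert(y|ies).*undefined/],
  // Safer: drop events whose stack frames come from third-party script URLs.
  denyUrls: [
    /cdn\.vendor-analytics\.example\.com/, // hypothetical vendor CDN
    /^chrome-extension:\/\//,              // browser extensions
  ],
});
```

denyUrls matches against the script URL in the event's stack frames, so your own errors from the same page still get through; ignoreErrors matches on the message text and is broader.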
Spike pattern 3: chatbots and scrapers triggering errors
Bots, particularly AI training crawlers and SEO scrapers running headless or stripped-down browsers, hit your site without the full JavaScript environment a real user has. They trigger a different error pattern than real users, often ReferenceError exceptions from browser APIs that bare-bones scrapers don't implement.
These errors are technically real but they're noise from your perspective. They can quietly add tens of thousands of events per month.
Mitigation: filter by user agent at the SDK level. Or — and this is the cleaner answer — block the bots at the IP or user-agent layer before the page even renders. The free SecureNow Firewall auto-allowlists Googlebot, GPTBot, and ClaudeBot while blocking 500k+ known-bad IPs, which removes most of the bot-induced Sentry events at source.
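If you handle it at the SDK level, a beforeSend hook that checks the user agent is one way to do it. A rough sketch; the user-agent patterns are assumptions you would tune to your own traffic:

```typescript
import * as Sentry from "@sentry/browser";

// Hypothetical list of crawler/scraper user-agent fragments; tune to your traffic.
const BOT_UA = /(bot|crawler|spider|headlesschrome|python-requests|scrapy)/i;

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  beforeSend(event) {
    // Returning null discards the event client-side, so it never counts against quota.
    if (typeof navigator !== "undefined" && BOT_UA.test(navigator.userAgent)) {
      return null;
    }
    return event;
  },
});
```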
The transactions trap
Sentry's transaction events (used for performance / APM) are individually cheap but accumulate fast. A 10K-RPS endpoint with 100% sampling produces roughly 26 billion transactions a month (10,000 requests/second × 86,400 seconds × 30 days). At $0.000091 each, that's about $2.4M.
Nobody samples at 100% in production. The default sample rate in Sentry's SDK is 100% though, and teams often forget to lower it. The realistic sample rate for a production service is 1–10% — and you should know which it is for each service.
Action item: audit your tracesSampleRate across services. If you don't know what it's set to, the answer is probably "100%" and you're either paying for it or not capturing transactions because of spike protection.
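The audit is one field in Sentry.init. A minimal sketch of what a deliberate configuration looks like; the route names in the tracesSampler variant are hypothetical, and the exact shape of the sampling context depends on your SDK version:

```typescript
import * as Sentry from "@sentry/node";

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  // Flat rate: keep 5% of transactions instead of the 100% many setups ship with.
  tracesSampleRate: 0.05,

  // Optional per-route control (takes precedence over tracesSampleRate).
  // Route names are hypothetical; `name` assumes a recent JS SDK.
  tracesSampler: ({ name }) => {
    if (name?.includes("/healthz")) return 0;    // drop health checks entirely
    if (name?.includes("/checkout")) return 1.0; // keep the critical path at 100%
    return 0.05;
  },
});
```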
The frontend-only ceiling
Sentry's economics work cleanly when frontend errors are 90% of your error volume. They get awkward when you also want:
- Backend distributed tracing (Sentry has it; the per-transaction pricing makes it 2–5× more expensive than per-host APM at scale)
- Application logs (Sentry's logs product is newer and not its strength)
- A WAF or IP firewall (not in Sentry's product line)
Most teams that hit this ceiling end up with three vendors: Sentry for frontend, Datadog/SecureNow for backend, Cloudflare/CloudArmor for the firewall. The combined bill exceeds Sentry alone, and there are now three dashboards to look at during incidents.
The collapsed alternative — one OpenTelemetry-native tool that handles backend traces, logs, errors, and security in one product — is what most teams move to when the bill or the operational cost becomes painful. See the Sentry alternative comparison for what that looks like.
How to keep Sentry bills under control if you're staying
Five settings that move the needle:
- Spike protection at 3× your tier limit (Settings → Spike Protection).
- Sample transactions aggressively — 5–10% sample rate on most services, 100% only on critical paths.
- Inbound filters for known noise: third-party domains, browser extensions, bot user agents.
- Use beforeSend in the SDK to drop events you don't want before they're sent (as in the bot-filter sketch above). This is free.
- Audit replays: they're 10× more expensive than errors. Sample replays at 10–25%, not 100% (a sketch follows this list).
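For the replay item, sampling is set on the client alongside the Replay integration. A rough sketch for a recent browser SDK (older versions spell the integration differently):

```typescript
import * as Sentry from "@sentry/browser";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  integrations: [Sentry.replayIntegration()],
  // Record 10% of ordinary sessions...
  replaysSessionSampleRate: 0.1,
  // ...but keep a replay for every session that hits an error.
  replaysOnErrorSampleRate: 1.0,
});
```

Keeping replaysOnErrorSampleRate at 1.0 preserves the debugging value while the session sample rate controls the bill.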
Apply all five and most teams cut their Sentry bill 40–60% with no loss of signal.
When to leave
Stay with Sentry if: frontend errors are 80%+ of your error volume, your bill is under $300/month, and your team is happy with the dashboard.
Leave if: you're paying $300+/month and it's not yet covering backend traces and logs, or you find yourself on three observability tools whose data doesn't correlate. The migration is straightforward — Sentry's SDK is OpenTelemetry-compatible at the data layer, so the swap is a config change for traces and a re-instrumentation for errors.
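For the traces half of that swap, "a config change" in practice means pointing an OpenTelemetry exporter at the new backend. A minimal sketch assuming a generic OTLP/HTTP endpoint; the URL and header are placeholders for whatever the new vendor provides:

```typescript
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

// Only the export destination changes; instrumentation and span data stay the same.
const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({
    url: "https://otel.example-vendor.com/v1/traces",         // placeholder endpoint
    headers: { "x-api-key": process.env.OTEL_API_KEY ?? "" }, // placeholder auth header
  }),
});

sdk.start();
```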
Frequently Asked Questions
Why does my Sentry bill spike randomly?
Most spikes come from one of three sources: a deployed bug that errors on every request, a third-party script that throws repeatedly in user browsers, or a chatbot or scraper triggering JavaScript errors at scale. Sentry's spike protection helps but doesn't eliminate the issue.
How is Sentry priced compared to Datadog?
Sentry is priced per event (errors and transactions); Datadog is primarily per host. For frontend-heavy apps Sentry can be cheaper; for backend-heavy apps with high traffic Sentry's per-event model adds up faster.
What's the cheapest way to reduce Sentry events?
Sample transactions aggressively (10–25% on low-value endpoints), filter known noise at the SDK level (e.g., third-party errors), and use inbound filters in the Sentry UI for events you can't control from the SDK.
When should I leave Sentry for an alternative?
When Sentry covers under half of what your team needs (errors only, no backend traces or logs), or when the per-event bill exceeds $200/month for a small team. At that point an OpenTelemetry-native tool collapses three vendors into one.