Adding Backend Tracing to a Sentry Stack with OpenTelemetry
If your team uses Sentry for frontend errors and needs backend distributed tracing without doubling the Sentry bill, here's the OpenTelemetry path that doesn't make you choose.
If you've used Sentry for years on the frontend and are now adding backend services, you have two reasonable paths and one expensive one. Most teams default to the expensive one without realizing it.
The expensive path: enable Sentry's Performance Monitoring on every backend service, run their SDK, pay per transaction. The bill goes up 3–5× as soon as a real-traffic service is instrumented.
The reasonable paths: (a) use OpenTelemetry for backend, separate destination, propagate trace context to Sentry on the frontend, or (b) collapse onto a single OpenTelemetry-native tool that handles both. This post covers path (a) because most teams want to keep Sentry on the frontend.
For the path (b) version — one tool covers both — see the Sentry alternative page.
What you keep with Sentry
Sentry on the frontend continues to be excellent for:
- JavaScript error tracking (uncaught exceptions, promise rejections, console errors)
- Session replay (the killer feature most other tools don't match)
- Release tracking and source maps
- User feedback and feedback widgets
- Performance metrics for the browser specifically (Core Web Vitals, load timing)
This is what Sentry was built for, what their SDK is mature on, and where the per-event pricing remains reasonable for most teams.
What you replace on the backend
Backend distributed tracing — spans, latency histograms, dependency graphs, error correlation across services — is where the cost-benefit shifts. OpenTelemetry has matured to the point where every major Node framework auto-instruments without code changes, and the resulting traces are equivalent to or better than what Sentry's backend SDK produces.
The tradeoff: you now have two products (Sentry frontend + an OTel destination), but with proper trace context propagation they're really one observability story.
The architecture
[Browser]                        [Backend service]             [Database]
    |                                  |                             |
    |  Sentry JS SDK                   |  OpenTelemetry SDK          |
    |  (errors + replay                |  (traces + logs)            |
    |   + frontend perf)               |                             |
    |                                  |                             |
    |  -- trace request -->            |                             |
    |  (traceparent: 00-...)           |                             |
    |                                  |  -- query -->               |
    |                                  |  (traceparent: 00-...)      |
    |                                  |                             |
[sentry.io]                      [your OTel destination]
                                 (SecureNow / SigNoz / Tempo)
The trace ID propagates via the W3C traceparent header. Frontend errors land in Sentry with the trace ID attached. Backend spans land in the OTel destination with the same trace ID. From either side, you can find the corresponding events in the other tool.
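For reference, the header itself is a single line in the W3C format; the values below are the illustrative ones from the W3C spec:
traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
The four dash-separated fields are version, trace ID (the 32-character hex value both tools share, and the one you search by), parent span ID, and sampling flags.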
Step-by-step setup
Frontend (no changes if Sentry is already there)
The Sentry JS SDK already supports W3C trace context as of v7.x. Make sure you have:
Sentry.init({
  dsn: '...',
  tracePropagationTargets: ['api.yourapp.com', /^https:\/\/api\./],
  // Backend transactions now come from OTel, so this rate only needs to cover
  // frontend performance metrics; keep it modest:
  tracesSampleRate: 0.1,
});
The tracePropagationTargets config tells the SDK which outbound URLs should get the traceparent header attached. Critical for distributed traces to work.
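As a quick sketch of what that means in practice (the URLs are hypothetical, matching the config above):
fetch('https://api.yourapp.com/checkout', { method: 'POST' });
// matches tracePropagationTargets, so the SDK attaches trace headers
// and the backend spans join the same trace

fetch('https://analytics.example.com/collect');
// no match, so no trace headers; any backend work it triggers
// shows up as a separate, unlinked trace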
Backend (one install + one preload)
npm install securenow
# or:
npm install @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node
Then:
node -r securenow/register app.js
# or:
node -r @opentelemetry/auto-instrumentations-node/register app.js
That's the swap. The OpenTelemetry SDK auto-detects Express, Next.js, NestJS, Fastify, and the rest, and starts emitting OTLP traces.
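If you prefer an explicit setup file over the preload flag, a minimal sketch looks roughly like this. It additionally needs the @opentelemetry/exporter-trace-otlp-http package; the file name and the 'checkout-api' service name are just examples:
// instrumentation.js: load this before the rest of the app
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');

const sdk = new NodeSDK({
  serviceName: 'checkout-api',                        // example name; set your own
  traceExporter: new OTLPTraceExporter(),             // reads OTEL_EXPORTER_OTLP_ENDPOINT
  instrumentations: [getNodeAutoInstrumentations()],  // Express, Fastify, pg, etc.
});

sdk.start();
Start the app the same way as with the preload: node -r ./instrumentation.js app.js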
Configure the destination
# SecureNow:
export SECURENOW_API_KEY=snk_live_...
# Or self-hosted SigNoz:
export OTEL_EXPORTER_OTLP_ENDPOINT=http://signoz-collector:4318
# Or Grafana Tempo:
export OTEL_EXPORTER_OTLP_ENDPOINT=https://tempo.grafana.cloud:443
export OTEL_EXPORTER_OTLP_HEADERS=Authorization=Basic\ <encoded-creds>
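Before pointing at a real backend, a quick way to confirm spans are being produced at all is the console exporter (assuming the @opentelemetry/auto-instrumentations-node preload from the step above; OTEL_TRACES_EXPORTER is a standard OTel SDK environment variable):
# print spans to stdout instead of shipping them anywhere
export OTEL_TRACES_EXPORTER=console
node -r @opentelemetry/auto-instrumentations-node/register app.js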
Verify trace propagation
Hit your frontend, click something that calls the backend. Check:
1. The frontend network tab — the outbound request should have a traceparent header.
2. Sentry — the frontend transaction should have a trace ID.
3. Your OTel destination — a backend trace with the same trace ID.
If steps 2 and 3 show different trace IDs, the propagation isn't wired correctly. The most common cause is tracePropagationTargets not matching your API domain — Sentry's SDK won't propagate to domains it doesn't trust.
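When debugging that, it helps to see what the backend actually receives. A throwaway Express middleware (hypothetical; remove it after debugging) makes the header visible:
// temporary middleware: log the incoming W3C header on every request
app.use((req, res, next) => {
  console.log('traceparent:', req.headers['traceparent'] ?? '(missing)');
  next();
});
If it logs "(missing)", the frontend isn't attaching the header; if it logs a value whose trace ID differs from what Sentry shows, something in between (a proxy or gateway) is stripping or rewriting it.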
Cross-tool incident response
When an incident happens:
- Customer reports broken checkout. Search Sentry for their email or session. Find the failed frontend transaction.
- Copy the trace ID from the Sentry event detail.
- Search your OTel destination by that trace ID. The backend spans show what happened on the server side — slow database query, downstream API timeout, exception in a span.
- Ship the fix, deploy, verify both Sentry replay and OTel traces look clean.
This workflow is the same as with a single tool, just with two tabs open. Most teams say the friction is acceptable; some find it frustrating after six months and migrate fully to one tool.
When to fully consolidate
If you're hitting one of these patterns, the two-tool architecture stops paying off:
- Sentry frontend events are now under 30% of your observability spend (most spend is now on backend transactions or other tools)
- Incident response is bouncing between Sentry, an OTel destination, and a third logs tool
- Your backend services span 4+ languages and you want a single instrumentation story
At that point, an OpenTelemetry-native tool that does APM + logs + frontend errors is worth evaluating. Sentry has been adding OTel support; SecureNow ships native; SigNoz works with both. See the Sentry alternative comparison.
The actual recommendation
For most teams: keep Sentry on the frontend, run OpenTelemetry on the backend, send to the cheapest reasonable destination. The combined bill will be lower than Sentry-everywhere, and you get the freedom to change either side independently.
For teams already burned by per-event pricing on the backend: consolidate. The migration math is documented in the Sentry alternative page.
Frequently Asked Questions
Can Sentry handle backend traces?
Yes — Sentry has Performance Monitoring with backend SDKs for Node, Python, Go, and others. The challenge is per-transaction pricing makes it expensive at scale, and the data model is Sentry-specific rather than OpenTelemetry-native.
Should I use Sentry for both frontend and backend?
If your scale is small (under 10M backend transactions/month) and your team prefers one vendor, yes. Past that, OpenTelemetry on the backend is usually cheaper and more flexible while keeping Sentry for frontend errors.
Can frontend Sentry traces correlate with OpenTelemetry backend traces?
Yes, via W3C trace context headers. Sentry's SDK supports outbound `traceparent` headers; OpenTelemetry servers ingest them by default. Distributed traces span both worlds.
What does this look like in practice?
Sentry runs on the frontend for errors and replays. The backend uses an OpenTelemetry SDK exporting to a separate destination (SecureNow, SigNoz, Grafana Tempo). Trace IDs propagate via W3C headers. One trace shows up in both tools, with frontend context in Sentry and backend depth elsewhere.