The Per-Host Pricing Trap (And How to Escape It)

Per-host APM pricing made sense in 2014. It's an actively bad axis for SaaS in 2026. Here's the math, why vendors haven't changed, and what alternatives look like.

Lhoussine
May 9, 2026·7 min read

In 2014, when Datadog popularized per-host APM pricing, it was a reasonable axis. Hosts were 1:1 with engineers most of the time. The agent had real per-host overhead. The data ingested scaled roughly with host count. None of those things are true in 2026.

For broader SaaS observability context, see the SaaS observability page.

What changed

Three architectural shifts broke per-host pricing as a reasonable axis:

Microservices and Kubernetes. A workload that ran on 4 VMs in 2015 runs on 40 pods today. Same business value, 10× the host count. Vendor revenue per customer scales 10× without any value increase.

Burst and serverless. Lambda functions, Cloud Run, ephemeral pods — workloads that exist for 30 seconds and disappear. Per-host pricing either undercounts (per-host fee × short lifetime = small charge) or overcounts (full per-host fee for a 30-second container) depending on how vendors define a host.

Elastic redundancy. Modern SaaS stacks scale horizontally for HA, blue/green, canary deploys, and burst capacity. A 30-pod deployment may contain only four pods doing meaningful work at any moment; the other 26 are redundant copies or headroom for spikes that may never come.

In each case, per-host pricing rewards growth that's not aligned with revenue or value.
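The burst/serverless under- vs over-counting can be made concrete. A minimal sketch, using the $0.07/host/hour APM rate quoted later in this article; the two billing-granularity rules are hypothetical simplifications of how a vendor might define a "host" for ephemeral workloads:

```python
import math

# Assumed rate: the $0.07/host/hour APM figure cited in this article.
RATE_PER_HOST_HOUR = 0.07

def prorated_charge(lifetime_seconds: float) -> float:
    """Bill only for actual lifetime: undercounts bursty workloads."""
    return RATE_PER_HOST_HOUR * (lifetime_seconds / 3600)

def hourly_minimum_charge(lifetime_seconds: float) -> float:
    """Round any lifetime up to a whole billable hour: overcounts them."""
    return RATE_PER_HOST_HOUR * max(1, math.ceil(lifetime_seconds / 3600))

lifetime = 30  # a 30-second ephemeral container
print(f"prorated:       ${prorated_charge(lifetime):.5f}")  # fractions of a cent
print(f"hourly minimum: ${hourly_minimum_charge(lifetime):.2f}")  # a full $0.07
```

The same 30-second container bills ~120× differently depending on which rule the vendor picks, which is why "host" definitions matter so much in contracts.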

The math for a typical mid-stage SaaS

A 30-engineer SaaS with $2M ARR running on:

  • 60 backend pods (30 active + 30 idle redundancy)
  • 8 database pods
  • 12 worker pods
  • 4 cache pods

Total: 84 hosts, of which roughly 35 do meaningful work most of the time.

At Datadog's $0.07/host/hour APM tier, that's $0.07 × 84 × 730 hours ≈ $4,300/month. At the full-stack tier ($0.31/host/hour), it's ~$19,000/month. Same workload, same value, a ~4.4× difference based purely on which tier you're on.

Now imagine the same SaaS at Series B, scaled to 500 hosts for HA, multi-region, and burst capacity. The bill at the basic tier is now ~$25,500/month. Revenue grew 3×, but the observability bill grew ~6×. Margin collapses on that one line item.
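The per-host math above reduces to one multiplication; a small sketch using the rates quoted in this section:

```python
# Per-host billing model: hosts x hourly rate x hours in a month.
# Rates are the $0.07 and $0.31/host/hour figures quoted above.
HOURS_PER_MONTH = 730

def monthly_bill(hosts: int, rate_per_host_hour: float) -> float:
    """Monthly invoice under flat per-host pricing."""
    return hosts * rate_per_host_hour * HOURS_PER_MONTH

print(monthly_bill(84, 0.07))   # ~4292  -> "~$4,300/month" at the APM tier
print(monthly_bill(84, 0.31))   # ~19009 -> "~$19,000/month" full-stack
print(monthly_bill(500, 0.07))  # ~25550 -> the Series B scale-out scenario
```

Note what the formula does not contain: revenue, data volume, or query activity. Host count is the only variable the customer controls.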

Why vendors haven't changed

Three commercial reasons:

Predictable revenue. Per-host fees produce stable, growing revenue per account. Sales teams can model expansion mathematically — every Kubernetes scale-up triggers an automatic upsell. Switching to usage-based would make some quarters lumpy.

Anchoring at higher prices. Per-host pricing landed at $15–$30/host/month on average, which feels reasonable per host but compounds dramatically. Usage-based pricing exposes the comparison: $5/TB looks small, $25,000/month doesn't.

Customer inertia. Most existing customers have negotiated per-host rates and don't want to renegotiate. A model change disrupts the renewal cycle, which vendors avoid.

The result: vendors keep per-host pricing for existing customers, occasionally offer "committed-use discounts" (which are still per-host with a volume rebate), and resist changing the axis.

What usage-based looks like in practice

Three patterns of usage-based pricing:

Per-byte ingested. You pay for the data sent to the vendor, regardless of host count. Used by SigNoz Cloud ($0.30/GB), Dash0 ($0.30/GB), Sentry (per-event, similar concept).

Per-byte stored. You pay for retention, not ingestion. Less common but appears in some logging products.

Per-byte scanned. You pay for the data your queries actually read. Used by SecureNow ($5/TB scanned). The most aligned with value: you only pay when you're getting answers.

For a typical mid-sized SaaS, per-TB-scanned models come out 5–20× cheaper than per-host because data volume is genuinely small relative to host count. The exception is high-cardinality logging or trace-heavy workloads, where data volume itself becomes the constraint.
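A rough sense of the gap, using the 84-host stack from earlier and the per-GB ingestion rate quoted above. The ~1 TB/month ingest volume is an assumed figure chosen for illustration, not a measured one:

```python
# Compare the same stack under per-host and per-byte-ingested pricing.
# Assumptions: 84 hosts at $0.07/host/hour (quoted above), $0.30/GB
# ingested (quoted above), and a hypothetical ~1 TB/month ingest volume.
HOURS_PER_MONTH = 730

per_host_bill = 84 * 0.07 * HOURS_PER_MONTH  # ~$4,292/month
per_gb_bill = 1000 * 0.30                    # 1,000 GB ingested -> ~$300/month

print(f"per-host: ${per_host_bill:,.0f}/month")
print(f"per-byte: ${per_gb_bill:,.0f}/month "
      f"(~{per_host_bill / per_gb_bill:.0f}x cheaper)")
```

Under these assumptions the usage-based bill lands around 14× cheaper, inside the 5–20× range; double the ingest volume and the gap halves, which is the lever that matters for trace-heavy workloads.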

Hidden costs of per-host

Beyond the headline price, per-host pricing creates incentives that cost more:

Reduced instrumentation. When adding a service costs $25/month, teams hesitate to instrument internal tools, batch jobs, and one-off scripts. Coverage gets uneven; incident response suffers.

Sampling under-investment. To control bills, teams set aggressive sampling rates and lose trace fidelity. The traces you do have are less useful.

Tool fragmentation. Some teams keep small workloads on free open-source observability while paying for the commercial tool only on critical paths. Two tools = two query languages = two dashboards = slower incidents.

The actual cost includes the operational overhead of these workarounds, not just the invoice.

How to escape if you're already locked in

Three paths, in order of difficulty:

Easy: negotiate at renewal. Show your vendor a usage-based competitor's quote. Most enterprise reps have authority to drop per-host rates 20–40%. Use the leverage; don't migrate yet.

Medium: data-layer migration. Switch your instrumentation to OpenTelemetry while keeping the current vendor as the destination. This decouples the SDK from the destination, so the next migration is a config change rather than a rewrite. See migrating from Datadog APM in one afternoon.

Hard: full migration. Change destinations. The lift is rebuilding dashboards and alerts (the SDK swap is easy, the dashboard syntax migration isn't). Budget 1–2 weeks for a 5-service stack.

The decision framework

Per-host pricing is fine when:

  • You're under 10 hosts and unlikely to scale soon
  • You actually use agent-only features (NPM, continuous profiler)
  • Your team is small enough that the tooling cost is in the noise

Per-host pricing is the wrong choice when:

  • You're scaling horizontally for HA, multi-region, or burst capacity
  • Most of your hosts are doing the same thing (redundancy)
  • You expect to grow infrastructure faster than revenue (Series A → B is the canonical case)
  • Your data volume is small but host count is high

If two or more apply, start the migration conversation now. The migration takes 2–4 weeks; the negotiation leverage takes one quote.

Frequently Asked Questions

Why is per-host pricing a problem?

It punishes SaaS architectures that scale on redundancy and burst capacity rather than revenue. A 30-pod Kubernetes deployment serving 50 customers costs the same to monitor as one serving 5,000, even though only one of them generates proportional business value.

Why do vendors still use per-host?

Because it produces highly predictable revenue per customer and grows automatically as the customer's infrastructure grows. Switching to usage-based would make some accounts cheaper and reduce the vendor's revenue floor — they'd need to compensate elsewhere.

What's the cleanest alternative?

Per-byte (storage or ingestion) or per-byte-scanned (query). Both scale with actual usage rather than infrastructure shape. Tools like SecureNow use $5/TB scanned, which means your bill grows when you're getting value (running queries) rather than when you're scaling your fleet.

Are there cases where per-host is fair?

Yes — agent-heavy products that genuinely use per-host CPU and memory (NPM, continuous profiling) have real per-host cost. Pure APM and logs don't, and per-host pricing for those is mostly historical inertia.
