From Traces to Security Alerts: A Developer's Guide to Threat Detection
Learn how developers can set up security alerts on their applications without a dedicated SOC — detect 4xx spikes, error patterns, and suspicious IPs using trace-based alert rules.
You shipped the feature. You wrote the tests. You configured the CI pipeline. But at 2 AM, somebody is running a credential stuffing tool against your login endpoint, and you won't find out until a user tweets about their compromised account on Monday morning.
This is the reality for most developer teams that don't have a dedicated Security Operations Center. You know security matters, but the tooling has always felt like it belongs to a different department — SIEM platforms that cost six figures, detection rules written in proprietary languages, and dashboards designed for analysts who stare at them eight hours a day.
What if you could set up meaningful security alerts in the same time it takes to configure a CI check? That's the promise of trace-based alerting. Your application already generates OpenTelemetry traces that capture every HTTP request, database query, and service call. SecureNow lets you write SQL queries against that trace data and fire alerts when the results match threat patterns. No SOC required. No proprietary query language. Just SQL you already know, running against data your app already produces.
If you haven't set up OpenTelemetry instrumentation yet, start with Adding Security Observability to Your App in 15 Minutes and come back here once traces are flowing.
Why Developers Should Own Security Alerts
The traditional model puts security monitoring in the hands of a dedicated SOC team. That model works for large enterprises. For startups, small teams, and developer-led organizations, it creates a dangerous gap: nobody is watching.
The Verizon 2024 DBIR found that the median time to detect a breach is still measured in days, not minutes. For organizations without dedicated security staff, that window is even longer. But detection doesn't require a security specialist — it requires someone who understands the application's normal behavior and can write a query to catch deviations.
That person is you.
As the developer who built the application, you know which endpoints handle authentication, which APIs are public vs. internal, what normal traffic patterns look like, and which error codes indicate real problems versus expected client behavior. You are better positioned than any external SOC analyst to write detection logic that actually works for your application.
Understanding SecureNow's Alert Rule System
Every alert rule in SecureNow has four components:
- Detection query — a SQL query that runs against your ClickHouse trace data
- Schedule — a cron expression defining how often the query runs (default: every 15 minutes)
- Throttle — a cooldown period that prevents duplicate notifications during sustained events
- Channels — where notifications go: Email, Slack webhook, or in-app
The detection query is the core. It's a standard SELECT statement against the signoz_traces.distributed_signoz_index_v2 table, which stores all your OpenTelemetry span data. If the query returns rows, the alert fires. If it returns nothing, the system moves on.
The __USER_APP_KEYS__ placeholder is critical — SecureNow replaces it with your registered application service names, automatically scoping every rule to your applications. This means you write one rule and it works across all your services.
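The mechanics of that substitution can be pictured with a short sketch. The function name and quoting behavior here are illustrative assumptions, not SecureNow's actual implementation:

```python
def expand_app_keys(query: str, service_names: list[str]) -> str:
    """Replace the __USER_APP_KEYS__ placeholder with a quoted,
    comma-separated list of registered service names (a sketch)."""
    quoted = ", ".join("'" + name.replace("'", "\\'") + "'" for name in service_names)
    return query.replace("__USER_APP_KEYS__", quoted)

rule_sql = "SELECT count(*) FROM signoz_traces.distributed_signoz_index_v2 WHERE serviceName IN (__USER_APP_KEYS__)"
expanded = expand_app_keys(rule_sql, ["checkout-api", "auth-service"])
# the IN clause becomes: serviceName IN ('checkout-api', 'auth-service')
```

Because the scoping happens at substitution time, registering a new service automatically extends every existing rule to cover it.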
Setting Up Notification Channels
Before creating alert rules, configure at least one notification channel.
Slack (Recommended for Developer Teams)
- Create a Slack incoming webhook for your security channel
- In SecureNow, navigate to Settings > Notification Channels
- Click Add Channel, select Slack, and paste the webhook URL
- Send a test notification to verify connectivity
Slack is the preferred channel for most developer teams because alerts surface in the same tool you're already watching. Create a dedicated #security-alerts channel to keep them separate from deployment notifications and other noise.
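If you want to verify the webhook outside SecureNow, posting to a Slack incoming webhook takes only a few lines of standard-library Python. This is a sketch — the payload formatter and the webhook URL are placeholders, not SecureNow's notification code:

```python
import json
import urllib.request

def build_alert_message(rule_name: str, rows: list[dict]) -> dict:
    """Format alert rows into a minimal Slack incoming-webhook payload."""
    lines = [f"*Alert fired:* {rule_name}"]
    for row in rows:
        lines.append(" | ".join(f"{k}={v}" for k, v in row.items()))
    return {"text": "\n".join(lines)}

def post_to_slack(webhook_url: str, payload: dict) -> None:
    """Send the payload to a Slack incoming webhook."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

payload = build_alert_message("4xx Error Spike", [{"source_ip": "203.0.113.7", "client_errors": 212}])
# post_to_slack("https://hooks.slack.com/services/T000/B000/XXXX", payload)  # URL is a placeholder
```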
Email
Add email addresses for team members who should receive alert notifications. SecureNow sends formatted emails via Resend with the alert details, query results, and direct links to investigate in the platform.
In-App Notifications
In-app notifications are always active. Every triggered alert appears in the SecureNow notification center, providing a persistent record even if Slack or email delivery fails.
The Essential Alert Rules for Every Application
Here are four alert rules that every developer should set up on day one. Each targets a different threat pattern, and together they provide a solid baseline detection layer.
Alert 1: 4xx Error Spike Detection
Client errors (400-499 status codes) are the primary footprint of scanners, brute-force tools, and fuzzing frameworks. Legitimate users rarely generate more than a handful of 4xx errors. An IP producing hundreds in a 15-minute window is almost certainly automated.
SELECT
attribute_string_peer_ip AS source_ip,
count(*) AS total_requests,
countIf(response_status_code >= 400 AND response_status_code < 500) AS client_errors,
round(client_errors / total_requests * 100, 2) AS error_rate_pct,
groupArray(DISTINCT name) AS targeted_endpoints
FROM signoz_traces.distributed_signoz_index_v2
WHERE timestamp >= now() - INTERVAL 15 MINUTE
AND serviceName IN (__USER_APP_KEYS__)
AND attribute_string_peer_ip != ''
GROUP BY source_ip
HAVING client_errors > 50 AND error_rate_pct > 60
ORDER BY client_errors DESC
LIMIT 20
This query catches credential stuffing, directory brute-forcing, API fuzzing, and vulnerability scanners. The groupArray(DISTINCT name) column shows which endpoints the IP targeted, giving you immediate context without opening the trace explorer.
Recommended settings: Run every 15 minutes, throttle for 30 minutes (sustained scanning generates repeat alerts — you only need one notification to trigger investigation).
Alert 2: 5xx Error Burst Detection
Server errors indicate something broke. A burst of 5xx errors can signal successful exploitation (an injected payload causing unhandled exceptions), infrastructure problems, or a denial-of-service condition.
SELECT
name AS endpoint,
count(*) AS error_count,
min(timestamp) AS first_seen,
max(timestamp) AS last_seen,
groupArray(DISTINCT attribute_string_peer_ip) AS source_ips,
groupArray(DISTINCT attribute_string_http_method) AS methods
FROM signoz_traces.distributed_signoz_index_v2
WHERE timestamp >= now() - INTERVAL 15 MINUTE
AND serviceName IN (__USER_APP_KEYS__)
AND response_status_code >= 500
GROUP BY endpoint
HAVING error_count > 10
ORDER BY error_count DESC
LIMIT 10
Unlike the 4xx rule, which groups by IP (since scanners come from a single source), this rule groups by endpoint, since exploitation often triggers errors on a specific vulnerable path regardless of source IP.
Recommended settings: Run every 5 minutes with a 15-minute throttle. Server errors warrant faster detection because they may indicate active exploitation.
Alert 3: Unusual IP Concentration
When a single IP generates a disproportionate share of your traffic, it's worth investigating. This pattern catches targeted attacks, aggressive scrapers, and API abuse that stays within normal error-rate thresholds.
SELECT
attribute_string_peer_ip AS source_ip,
count(*) AS request_count,
countIf(response_status_code >= 400) AS total_errors,
uniqExact(name) AS unique_endpoints,
min(timestamp) AS first_request,
max(timestamp) AS last_request
FROM signoz_traces.distributed_signoz_index_v2
WHERE timestamp >= now() - INTERVAL 30 MINUTE
AND serviceName IN (__USER_APP_KEYS__)
AND attribute_string_peer_ip != ''
GROUP BY source_ip
HAVING request_count > 500
ORDER BY request_count DESC
LIMIT 10
The threshold here depends on your traffic volume. For a low-traffic application, 500 requests in 30 minutes from a single IP is suspicious. For a high-traffic API, you might raise this to 5,000 or more. Tune based on your baseline.
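One way to pick that threshold is to derive it from your own baseline, for example a multiple of the busiest legitimate IP you see during a normal window. This is a sketch — the multiplier and the sample counts are illustrative assumptions:

```python
def suggest_threshold(per_ip_request_counts: list[int], multiplier: float = 3.0) -> int:
    """Suggest a request-count threshold as a multiple of the busiest
    IP observed during a normal traffic window (illustrative heuristic)."""
    if not per_ip_request_counts:
        return 500  # fall back to the guide's default
    return int(max(per_ip_request_counts) * multiplier)

# Hypothetical per-IP counts from a typical 30-minute window
normal_window = [12, 45, 160, 38, 90]
print(suggest_threshold(normal_window))  # 480
```

Re-run the calculation occasionally as your traffic grows, so the rule stays tuned to the current baseline rather than last quarter's.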
Recommended settings: Run every 15 minutes, throttle for 60 minutes. IP concentration is typically persistent, and frequent re-alerting adds noise without new information.
Alert 4: Suspicious Authentication Patterns
Authentication endpoints are the most common attack surface. This rule specifically monitors login-related paths for patterns that indicate brute-force or credential stuffing.
SELECT
attribute_string_peer_ip AS source_ip,
count(*) AS login_attempts,
countIf(response_status_code = 401 OR response_status_code = 403) AS failed_attempts,
countIf(response_status_code >= 200 AND response_status_code < 300) AS successful_logins,
round(failed_attempts / login_attempts * 100, 2) AS failure_rate_pct
FROM signoz_traces.distributed_signoz_index_v2
WHERE timestamp >= now() - INTERVAL 15 MINUTE
AND serviceName IN (__USER_APP_KEYS__)
AND (name LIKE '%/auth%' OR name LIKE '%/login%' OR name LIKE '%/signin%')
AND attribute_string_peer_ip != ''
GROUP BY source_ip
HAVING failed_attempts > 20 AND failure_rate_pct > 80
ORDER BY failed_attempts DESC
LIMIT 10
An IP with 20+ failed login attempts and an 80%+ failure rate is almost certainly running credential lists. The successful_logins column is particularly important — if an attacker has a few successes mixed in with many failures, those compromised accounts need immediate attention.
Recommended settings: Run every 5 minutes, throttle for 15 minutes. Authentication attacks can compromise accounts quickly, so faster detection is worth the query cost.
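When triaging this rule's results, rows with any successful logins deserve top priority. A small sketch of that triage step — the field names mirror the query's column aliases, and the function is illustrative, not a SecureNow API:

```python
def prioritize_auth_alerts(rows: list[dict]) -> list[dict]:
    """Sort flagged IPs so those with successful logins amid many
    failures (likely compromised accounts) come first."""
    return sorted(rows, key=lambda r: (-r["successful_logins"], -r["failed_attempts"]))

rows = [
    {"source_ip": "198.51.100.4", "failed_attempts": 340, "successful_logins": 0},
    {"source_ip": "203.0.113.9", "failed_attempts": 95, "successful_logins": 2},
]
for row in prioritize_auth_alerts(rows):
    print(row["source_ip"], row["successful_logins"])
# 203.0.113.9 sorts first: failures plus successes means some credentials likely worked
```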
Understanding Alert Notifications
When an alert fires, SecureNow sends notifications to all configured channels. Each notification includes:
- Rule name and description — which rule triggered
- Query results — the rows returned by the detection query (the IPs, endpoints, counts, etc.)
- Timestamp — when the alert fired
- Direct link — a URL to open the alert in SecureNow for investigation
In Slack, the notification renders as a formatted message with the key data points visible without clicking through. This lets you make a quick triage decision: is this worth investigating right now, or can it wait?
Investigating Alert Context
When an alert fires, your first move should be opening SecureNow and examining the traces that triggered it. Here's a practical investigation workflow:
1. Check the Trace Explorer
Open the Trace Explorer, filter by the flagged IP address and the alert's time window. Look at the actual requests — what URLs were hit, what methods were used, what status codes came back, and how the request pattern evolved over time.
2. Trigger AI Investigation on the IP
Navigate to IP Investigation and look up the flagged IP. SecureNow's AI analysis combines your trace data with AbuseIPDB threat intelligence to generate a verdict:
- Risk score — a numeric assessment of threat level
- Classification — scanner, bot, tor exit node, proxy, or legitimate
- Historical behavior — whether this IP has been reported for malicious activity elsewhere
- Recommended action — block, monitor, or dismiss
3. Run AI Trace Analysis
Select a few individual traces from the flagged IP and trigger AI security analysis. The AI examines the span tree to determine whether the requests constitute a genuine attack or benign automation. It checks for injection payloads in database spans, SSRF patterns in outbound calls, and authentication bypass sequences.
4. Expand the Investigation with Forensics
Use SecureNow's natural language forensics to ask follow-up questions: "Did this IP access any other services in the last 24 hours?" or "Show all successful requests from this IP's /24 subnet." The forensics system converts your question to a ClickHouse query and returns results in seconds.
Throttling: Preventing Alert Fatigue
Alert fatigue is the enemy. A single sustained attack can trigger the same rule dozens of times, generating a wall of identical Slack notifications that your team learns to ignore. That learned-ignore behavior is exactly how real threats slip through later.
Throttling is your primary defense. Each alert rule has a cooldown period — after firing, the rule suppresses notifications for the configured duration, even if the query continues to return results.
Guidelines for throttle configuration:
- Authentication alerts: 15-minute throttle. These are high-severity and time-sensitive.
- Error spike alerts: 30-minute throttle. Gives you time to investigate without re-alerting on the same incident.
- Traffic concentration alerts: 60-minute throttle. Persistent patterns don't need frequent re-notification.
- Reconnaissance/scanning alerts: 60-minute throttle. Scanning is noisy by nature.
The rule still executes during the throttle window — results are logged and visible in the alert history. You just don't get a notification. This means you can always review what happened during a cooldown period after the fact.
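The cooldown behavior described above amounts to a simple timestamp check. A minimal sketch, where the class and field names are assumptions rather than SecureNow's actual code:

```python
from datetime import datetime, timedelta

class ThrottledRule:
    """Record every firing, but notify at most once per cooldown window."""

    def __init__(self, cooldown_minutes: int):
        self.cooldown = timedelta(minutes=cooldown_minutes)
        self.last_notified = None
        self.history = []  # every firing is still recorded for alert history

    def fire(self, now: datetime) -> bool:
        """Record the firing; return True if a notification should be sent."""
        self.history.append(now)
        if self.last_notified and now - self.last_notified < self.cooldown:
            return False  # suppressed, but visible in alert history
        self.last_notified = now
        return True

rule = ThrottledRule(cooldown_minutes=30)
t0 = datetime(2024, 1, 1, 2, 0)
print(rule.fire(t0))                          # True: first firing notifies
print(rule.fire(t0 + timedelta(minutes=15)))  # False: inside cooldown, logged only
print(rule.fire(t0 + timedelta(minutes=45)))  # True: cooldown elapsed
```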
Converting Forensic Queries to Alert Rules
One of the most powerful workflows in SecureNow is discovering a threat pattern through ad-hoc investigation and then converting that query into a permanent alert rule. Here's how:
- During investigation, you write a forensic query that identifies a specific threat pattern
- Save the query to your Query Library with a descriptive name and tags
- Navigate to Alert Rules and create a new rule
- Paste the saved query as the detection query
- Replace any hardcoded time filters with relative intervals (e.g., now() - INTERVAL 15 MINUTE)
- Add the __USER_APP_KEYS__ placeholder if the query should scope to your applications
- Configure schedule, throttle, and channels
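The time-filter rewrite in the steps above can be automated with a simple substitution. This sketch assumes the forensic query uses an ISO-style timestamp literal; adapt the pattern to whatever format your saved queries use:

```python
import re

def make_relative(query: str, window_minutes: int = 15) -> str:
    """Replace a hardcoded "timestamp >= '<literal>'" filter with a
    rolling window so the query works as a scheduled detection rule."""
    return re.sub(
        r"timestamp\s*>=\s*'[^']+'",
        f"timestamp >= now() - INTERVAL {window_minutes} MINUTE",
        query,
    )

forensic = "SELECT count(*) FROM t WHERE timestamp >= '2024-06-01 00:00:00'"
print(make_relative(forensic))
# SELECT count(*) FROM t WHERE timestamp >= now() - INTERVAL 15 MINUTE
```

Match the window to the rule's schedule: a query that runs every 15 minutes should generally look back 15 minutes, so events are neither missed nor double-counted.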
This turns reactive investigation into proactive detection. Every incident you investigate becomes an opportunity to add a new detection rule — building an increasingly comprehensive monitoring layer that's tailored to your specific application and threat landscape.
For the full guide on building advanced alert rules with exclusion patterns and multi-channel routing, see Building Alert Rules That Actually Catch Threats.
Building a Detection-First Culture
Security alerting doesn't have to be a SOC team's exclusive domain. As a developer, you have two advantages that dedicated security analysts often lack: deep understanding of your application's intended behavior, and the SQL skills to query structured trace data.
Start with the four essential rules in this guide. Monitor the alerts for a week. Tune thresholds based on your actual traffic patterns — if the 4xx rule fires too often, raise the threshold; if it never fires, lower it or expand the endpoint patterns. Add new rules as you discover new threat patterns through investigation.
The goal isn't to build a perfect detection system on day one. The goal is to stop being blind. A simple alert that tells you "something weird is happening on your login endpoint" at 2 AM is infinitely better than discovering the breach on Monday morning.
Your traces are already telling the story. You just need to start listening.
Frequently Asked Questions
Do I need SOC experience to set up security alerts?
No. SecureNow provides pre-built query templates and natural language query conversion that let developers create effective security alerts without deep SOC expertise.
What types of alerts should developers set up first?
Start with four baseline alerts: 4xx spike detection (catches scanners and brute force), 5xx error bursts (catches exploitation attempts), unusual IP concentration (catches targeted attacks), and suspicious authentication patterns (catches credential stuffing).
How quickly do alerts fire after an incident begins?
Alert rules run on configurable schedules (default every 15 minutes). For faster detection, you can set rules to run every 5 minutes, though this increases query load.
Can I receive alerts in Slack?
Yes, SecureNow supports Email, Slack (via webhooks), and in-app notification channels. Most developer teams prefer Slack for immediate visibility.