Stopping a Distributed Bot Attack: A Multi-Feature Response Playbook
A detailed playbook for detecting and stopping a distributed bot attack using quadrant analysis, AI investigation, AbuseIPDB enrichment, forensic queries, and false positive management.
At 3:14 PM on a Tuesday, the security team at PaySecure — a mid-size fintech serving 1.2 million users — noticed something unusual in their account creation metrics. New account registrations had tripled in the past hour. Support tickets were normal. Marketing had not launched a campaign. Nothing explained the spike.
What they were witnessing was the opening stage of a distributed bot attack — a coordinated campaign using over 200 residential proxy IPs to create fraudulent accounts at scale. The bots were sophisticated, rotating IPs every few requests, using realistic-looking email addresses, and mimicking human interaction timing closely enough to evade basic rate limiting.
This is the playbook for how PaySecure's team detected, investigated, and stopped the attack using SecureNow — from first anomaly to full resolution. Every step maps to a real platform feature and a repeatable process that any security team can adopt.
The Threat: Distributed Bot Attacks in Fintech
Distributed bot attacks represent an evolution in automated threats. Where early bots used a handful of data center IPs that were trivial to block, modern bot operators leverage residential proxy networks — hundreds of thousands of IP addresses belonging to real internet service providers in real residential neighborhoods. The MITRE ATT&CK framework covers this kind of infrastructure acquisition under the Resource Development tactic, technique T1583 — Acquire Infrastructure.
According to Gartner, bot traffic accounts for a significant and growing percentage of all web traffic, with sophisticated bots increasingly difficult to distinguish from human users. For fintech platforms like PaySecure, the stakes are particularly high. Fraudulent accounts become vehicles for money laundering, synthetic identity fraud, and promo abuse schemes that carry both financial and regulatory consequences.
The challenge is not detecting that an attack is happening — it is detecting it quickly enough to respond before damage accumulates, and doing so without blocking the legitimate users who are creating accounts during the same period.
Phase 1: Detection via Quadrant Analysis
PaySecure's analyst, Marcus, starts his afternoon shift by checking SecureNow's Quadrant Analysis view — a scatter plot mapping IPs by their success rate (x-axis) versus error rate (y-axis). Under normal conditions, the plot shows a dense cluster of IPs in the high-success, low-error quadrant (legitimate users) and a sparse scattering of IPs in other quadrants.
Today, something jumps out immediately. A tight cluster of roughly 60 IPs has appeared in the high-success, moderate-4xx-error quadrant — an unusual pattern. These IPs are successfully completing some requests (account creation) while also generating 4xx errors (validation failures on malformed inputs). Legitimate users do not cluster this way. Real human behavior scatters across the quadrant with natural variance. Bot behavior, constrained by the same scripted logic, clusters unnaturally.
Marcus selects the cluster to drill into the details. The IPs are hitting a single endpoint: POST /api/v1/accounts. The geographic distribution spans twelve countries, but the timing patterns are uniform — requests arrive at suspiciously regular intervals with sub-second precision between them.
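The quadrant logic itself is simple to sketch. The snippet below is an illustrative Python version, not SecureNow's implementation: it buckets each IP by success rate and 4xx error rate, with the 0.5 and 0.2 thresholds and the input shape chosen purely for illustration.

```python
# Hypothetical sketch of quadrant classification. Thresholds (0.5 success,
# 0.2 client-error) and the stats dict shape are illustrative assumptions.

def quadrant(stats: dict) -> str:
    """Classify one IP's traffic into a quadrant label."""
    total = stats["success"] + stats["client_error"] + stats["other"]
    success_rate = stats["success"] / total
    error_rate = stats["client_error"] / total
    x = "high-success" if success_rate >= 0.5 else "low-success"
    y = "high-4xx" if error_rate >= 0.2 else "low-4xx"
    return f"{x}/{y}"

# Legitimate user: mostly 2xx, almost no 4xx
print(quadrant({"success": 48, "client_error": 1, "other": 1}))   # high-success/low-4xx
# Bot from the cluster: completes creations but also trips validation errors
print(quadrant({"success": 30, "client_error": 15, "other": 5}))  # high-success/high-4xx
```

The interesting signal is not either axis alone but the combination: high success plus a steady 4xx tail is exactly the shape a scripted account-creation loop produces.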
Phase 2: Alert Rules Fire
While Marcus is examining the quadrant view, SecureNow's pre-configured alert rules are already working. PaySecure has a rule designed for exactly this scenario — detecting account creation spikes:
```sql
SELECT
    count(DISTINCT peer_ip) AS unique_ips,
    count(*) AS total_requests,
    countIf(status_code = 201) AS successful_creates
FROM traces
WHERE timestamp >= now() - INTERVAL 15 MINUTE
  AND http_target = '/api/v1/accounts'
  AND http_method = 'POST'
HAVING total_requests > 100 AND unique_ips > 20
```
This rule runs every 15 minutes and fires when more than 100 account creation requests originate from more than 20 unique IPs within a single window. The current window shows 214 unique IPs and over 1,800 requests. The notification arrives in SecureNow's triage interface, the #security-alerts Slack channel, and the on-call email list simultaneously.
Marcus acknowledges the notification and moves it to investigating status. The notification's IP grouping feature has consolidated all 214 IPs into a single manageable notification rather than flooding the queue with individual entries.
<!-- CTA:trial -->

Phase 3: AI Investigation of Top Offenders
Manually investigating 214 IPs is not feasible in real time. Marcus sorts the grouped IPs by request volume and selects the top 25 — the most active sources of account creation traffic. He triggers SecureNow's AI investigation on all 25 simultaneously, pushing them into the investigation queue.
Within minutes, the AI returns structured verdicts for each IP. The results reveal a consistent pattern:
- Risk scores: 72-89 out of 100 across all 25 IPs
- Certainty levels: High for 22 of 25, Medium for 3
- Attack patterns identified: Automated account creation, credential generation, uniform request timing
- Behavioral signatures: Request intervals between 1.2 and 1.8 seconds (human typing speed varies far more), identical HTTP header fingerprints across IPs, sequential patterns in submitted email addresses
The AI investigation flags two particularly telling signals. First, the User-Agent strings across all 25 IPs cycle through exactly four browser signatures in the same order — a rotation pattern characteristic of bot frameworks. Second, the submitted email addresses follow a generation pattern: random first name, random last name, sequential number, common domain. Legitimate users do not register with addresses like sarah.thompson4812@mailbox.net followed by james.wilson4813@mailbox.net.
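The sequential-email signal is straightforward to check mechanically. Here is a minimal sketch, assuming addresses follow the firstname.lastname + number shape described above; the regex and the run-length threshold are illustrative assumptions, not SecureNow's actual detector.

```python
import re

# Assumed shape: firstname.lastname<number>@domain, e.g. sarah.thompson4812@mailbox.net
PATTERN = re.compile(r"^[a-z]+\.[a-z]+(\d+)@(.+)$")

def looks_generated(emails: list[str], min_run: int = 3) -> bool:
    """True if >= min_run addresses form a consecutive numeric run on one domain."""
    by_domain: dict[str, list[int]] = {}
    for e in emails:
        m = PATTERN.match(e)
        if m:
            by_domain.setdefault(m.group(2), []).append(int(m.group(1)))
    for nums in by_domain.values():
        nums.sort()
        run = 1
        for a, b in zip(nums, nums[1:]):
            run = run + 1 if b == a + 1 else 1
            if run >= min_run:
                return True
    return False

batch = [
    "sarah.thompson4812@mailbox.net",
    "james.wilson4813@mailbox.net",
    "emma.davis4814@mailbox.net",
]
print(looks_generated(batch))  # True
```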
The AI's recommended mitigation steps include implementing CAPTCHA on the account creation endpoint, adding device fingerprinting, and deploying rate limiting per-session rather than per-IP.
Phase 4: AbuseIPDB Cross-Reference
SecureNow's AbuseIPDB integration has already enriched each IP with reputation data, using lookups cached for 14 days. Marcus reviews the enrichment results:
- 40% of IPs (approximately 85 addresses) have prior abuse reports on AbuseIPDB, with confidence scores ranging from 25% to 78%. These are known bad actors — data center IPs, VPS hosts, and previously reported bot sources.
- 60% of IPs (approximately 129 addresses) have zero or minimal AbuseIPDB reports. These are the residential proxy addresses — clean IPs belonging to real ISPs that the bot operator is routing traffic through.
This split is characteristic of modern distributed bot attacks. The operator mixes known infrastructure (cheap data center IPs for volume) with residential proxies (clean IPs for evasion). A security system that relies solely on IP reputation would catch less than half the attack.
The geographic and ASN metadata reveals additional context. The clean residential IPs span ISPs across North America, Western Europe, and Southeast Asia — consistent with commercial residential proxy services that sell access to infected or voluntarily enrolled consumer devices. The proxy and VPN detection flags several IPs as known proxy endpoints, confirming the residential proxy hypothesis.
Phase 5: Forensic Deep Dive
Marcus needs to understand the full scope of the attack beyond the top 25 IPs. He opens SecureNow's forensics interface and enters a natural language query:
"Show all requests to POST /api/v1/accounts in the last 6 hours grouped by email domain, with count and first seen timestamp"
The NL-to-SQL engine translates this into a ClickHouse query and returns results that paint a clear picture. The bot has created accounts using emails from twelve different domains, but three domains account for 68% of all registrations: mailbox.net, quickmail.io, and tempinbox.org. A second query — "show the time between consecutive account creations from the same IP" — reveals that 89% of requests from flagged IPs have inter-request intervals between 1.0 and 2.0 seconds, while legitimate user intervals range from 15 seconds to several minutes.
A third query seals the analysis:
"Find all IPs that sent more than 5 POST requests to /api/v1/accounts today and show their first request timestamp, total requests, and unique email domains used"
The results show 247 IPs matching this pattern (33 more than the initial alert captured, since some bots operated at lower volumes that fell beneath the alert rule's threshold). These low-volume bots were staying under the radar by creating only 5-10 accounts each — a classic evasion technique against per-IP rate limiting.
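The timing analysis behind the second query can be sketched in a few lines. This is an illustrative Python version, assuming per-IP request timestamps in seconds; the 1.0-2.0s band and 0.5s standard-deviation cutoff echo the figures above but are assumptions, not platform internals.

```python
from statistics import pstdev

def interval_profile(timestamps: list[float]) -> tuple[float, float]:
    """Return (mean gap, population stddev of gaps) for one IP's requests."""
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    return sum(gaps) / len(gaps), pstdev(gaps)

def is_scripted(timestamps: list[float]) -> bool:
    """Flag metronomic timing in the 1-2s band with near-zero variance."""
    mean, sd = interval_profile(timestamps)
    return 1.0 <= mean <= 2.0 and sd < 0.5

bot = [0.0, 1.4, 2.9, 4.3, 5.8]   # ~1.45s gaps, almost no jitter
human = [0.0, 22.0, 95.0, 101.0]  # bursty, wide variance
print(is_scripted(bot), is_scripted(human))  # True False
```

The same idea generalizes: any per-IP statistic with suspiciously low variance (intervals, payload sizes, header ordering) is a behavioral signal that survives IP rotation.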
Phase 6: False Positive Management
Before PaySecure can block the identified IPs, they need to ensure legitimate automated traffic is not caught in the crossfire. The team knows that several services legitimately send automated requests to their API: Googlebot and Bingbot crawl their public-facing pages, an uptime monitoring service pings their health endpoints, and a partner integration posts to a webhook endpoint.
Marcus opens SecureNow's false positive management interface and reviews the AI-suggested exclusion patterns. The AI has identified three categories of traffic that should be excluded:
- Search engine crawlers: Googlebot (66.249.x.x ranges) and Bingbot (40.77.x.x ranges) — these IPs appear in the data but never hit the account creation endpoint
- Monitoring services: Two IPs belonging to their Datadog and UptimeRobot monitors — these only hit the /health and /status endpoints
- Partner webhook: A single IP from their banking partner that posts to /api/v1/webhooks/deposits
Marcus applies the exclusions for search engine crawlers and monitoring services as global exclusions (they should never be flagged by any rule). The partner webhook gets a per-rule exclusion on the account creation spike rule specifically. Before applying each exclusion, he uses the test-before-apply feature to verify that the exclusion pattern only affects the intended traffic and does not inadvertently suppress detection of actual bot IPs.
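In spirit, a test-before-apply check is a dry run of the exclusion against recent traffic. The sketch below is hypothetical — the event shape, glob-style patterns, and sample IPs are assumptions for illustration — but it shows why the step matters: you see exactly what a candidate pattern would suppress before it goes live.

```python
from fnmatch import fnmatch

def dry_run(exclusion: dict, events: list[dict]) -> list[dict]:
    """Return the events a candidate exclusion pattern would suppress."""
    return [
        e for e in events
        if fnmatch(e["ip"], exclusion["ip_glob"])
        and fnmatch(e["path"], exclusion["path_glob"])
    ]

events = [
    {"ip": "52.10.8.3",    "path": "/health"},           # monitoring probe (assumed IP)
    {"ip": "203.0.113.77", "path": "/api/v1/accounts"},  # suspected bot
]

monitor_excl = {"ip_glob": "52.10.8.*", "path_glob": "/health*"}
suppressed = dry_run(monitor_excl, events)
print([e["ip"] for e in suppressed])  # ['52.10.8.3'] -- bot traffic untouched
```

An over-broad pattern (say, path_glob "/*") would immediately show the bot event in the suppressed list, failing the review before any detection is weakened.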
Phase 7: Trace Analysis Reveals the Real Goal
With false positives excluded, Marcus digs into the trace data for the fraudulent accounts that were successfully created. SecureNow's trace explorer shows the full request lifecycle for each account creation: the HTTP handler span, the input validation span, the database INSERT span, and the email verification span.
The AI security analysis of these traces reveals a critical finding. After account creation, several of the bot-created accounts immediately attempted to link external bank accounts and initiate small-value transfers — a classic pattern for synthetic identity fraud. The bots were not just creating accounts for future use. They were attempting to operationalize them immediately for financial fraud.
The trace tree view shows the attack chain clearly: POST /api/v1/accounts (201 Created) → POST /api/v1/accounts/{id}/bank-links (201 Created) → POST /api/v1/transfers (403 Forbidden). PaySecure's existing authorization controls blocked the transfer attempts, but the accounts and bank links were successfully created — a foothold that attackers would exploit later with manual intervention if left in place.
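That create-link-transfer chain is itself detectable. Below is a minimal sketch, assuming per-account event records with action names and timestamps — the field names and the 5-minute window are illustrative assumptions, not SecureNow's trace schema.

```python
from datetime import datetime, timedelta

CHAIN = ["create_account", "link_bank", "transfer"]

def rapid_operationalization(events: list[dict], window_min: int = 5) -> bool:
    """True if one account runs the full chain, in order, within the window."""
    seen: dict[str, datetime] = {}
    for e in sorted(events, key=lambda e: e["ts"]):
        seen.setdefault(e["action"], e["ts"])  # first occurrence per action
    if not all(a in seen for a in CHAIN):
        return False
    ordered = seen["create_account"] <= seen["link_bank"] <= seen["transfer"]
    return ordered and seen["transfer"] - seen["create_account"] <= timedelta(minutes=window_min)

t0 = datetime(2024, 6, 4, 15, 14)
bot_account = [
    {"action": "create_account", "ts": t0},
    {"action": "link_bank", "ts": t0 + timedelta(seconds=40)},
    {"action": "transfer", "ts": t0 + timedelta(seconds=75)},
]
print(rapid_operationalization(bot_account))  # True
```

A legitimate new user almost never completes this chain in seconds, which is what makes the window so discriminating.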
Phase 8: Resolution and New Rule Creation
Armed with comprehensive evidence, Marcus and the PaySecure team execute their response:
Immediate actions:
- Block all 247 identified bot IPs in their network firewall
- Flag all accounts created by identified IPs for review and suspension
- Mark the associated bank link attempts for fraud team investigation
- Update the notification status to resolved with a detailed timeline and comments documenting the full investigation
New detection rules: The team creates three new alert rules based on lessons from this incident:
- Email domain concentration: Alert when more than 30% of new account registrations in a 1-hour window use the same email domain
- Inter-request timing uniformity: Alert when more than 10 IPs show request interval standard deviations below 0.5 seconds (indicating scripted timing)
- Rapid account operationalization: Alert when a new account attempts to link a bank account and initiate a transfer within 5 minutes of creation
Platform improvements recommended by AI:
- Implement CAPTCHA or proof-of-work challenge on account creation
- Add device fingerprinting to detect headless browsers
- Shift rate limiting from per-IP to per-session with behavioral scoring
Marcus documents these recommendations in the notification timeline, creating an audit trail that connects the detection, investigation, response, and improvement phases into a single narrative.
<!-- CTA:demo -->

Key Takeaways for Security Teams
This playbook demonstrates several principles that apply to any distributed bot defense:
Layer your detection. No single signal catches a sophisticated bot attack. PaySecure needed quadrant analysis for the initial visual anomaly, alert rules for the quantitative trigger, AI investigation for behavioral pattern confirmation, AbuseIPDB for reputation context, and forensic queries for scope assessment. Each layer caught something the others missed.
Residential proxies defeat IP reputation. When 60% of attack IPs have clean reputations, you cannot rely on blocklists alone. Behavioral analysis — timing patterns, header fingerprints, request sequencing — becomes the primary detection signal.
False positive management is not optional. Blocking 247 IPs without first excluding legitimate crawlers, monitors, and partners would have caused operational disruption that may have exceeded the bot attack's damage. The test-before-apply mechanism is the difference between a surgical response and collateral damage.
Forensic queries reveal scope. The alert rule caught 214 IPs. Forensic queries found 247 — the difference was low-volume bots designed to evade threshold detection. Without forensics, those 33 IPs would have continued operating undetected.
Close the loop with new rules. Every incident that surprises you is a detection gap. The three new rules PaySecure created transform a reactive investigation into proactive detection for the next attack.
For the complete workflow from setup through resolution, see our end-to-end platform guide. For similar attack scenarios, see how a SOC team handled a credential stuffing attack in 12 minutes.
Frequently Asked Questions
How are distributed bot attacks different from single-source bot attacks?
Distributed bot attacks use hundreds or thousands of unique IPs (often residential proxies) to evade rate limiting and IP-based blocking, making them harder to detect with traditional security tools.
Can SecureNow detect bots using residential proxies?
Yes. SecureNow's AI investigation and behavioral analysis can identify bot patterns even from residential IPs by examining request timing, path patterns, and trace characteristics beyond just IP reputation.
How do you avoid blocking legitimate users during a bot attack?
SecureNow's false positive management lets you test exclusion patterns before applying them, identify legitimate crawlers (Googlebot, etc.), and use AI-suggested exclusions to protect genuine traffic.
What's the best alert rule for detecting bot traffic?
A combination of rules works best: high request volume from new IPs, concentrated requests to specific endpoints, and abnormal success-to-error ratios are reliable bot indicators.