What 1.2B Requests Look Like: Anomaly Patterns from the SecureNow Firewall Fleet
Aggregated, anonymized data from 1.2B requests across the SecureNow customer fleet. Top anomaly types, peak hours, and the day-of-week patterns nobody publishes.
This is an annotated data dump — what 1.2 billion requests across the SecureNow customer fleet looked like over the 30-day window ending May 1, 2026. The numbers are real, anonymized at the customer level, and meant for engineering teams trying to understand what their own traffic patterns probably look like.
The dataset
| Metric | Value |
|---|---|
| Total requests observed | 1,247,392,118 |
| Customer apps | 1,200+ |
| Countries observed | 198 |
| Unique source IPs | ~22M |
| Anomaly rate (firewall blocks + app security monitoring signals) | 18.4% of total |
So out of every 100 requests, 18 were flagged as suspicious. That's high, but the fleet skews toward public-facing apps, which attract far more bot traffic than private B2B SaaS.
Hour-of-day patterns
Average anomaly rate by UTC hour, smoothed over 30 days:
00:00 UTC ██████████████████████ 22%
03:00 UTC ██████████████████████████ 26% ← peak
06:00 UTC ████████████████████ 20%
09:00 UTC ███████████████ 15%
12:00 UTC ██████████████ 14%
15:00 UTC █████████████ 13%
18:00 UTC ██████████████ 14% ← second peak (US business hours)
21:00 UTC ████████████████ 16%
The 03:00 UTC peak doesn't mean your app is being targeted at 3 AM your time. Automated scanners run on schedules, and the most active ones operate from time zones where 03:00 UTC falls during the local business day.
The 18:00 UTC bump lands in US business hours (11 AM PT / 2 PM ET during the observation window), which points to human-operated scraping and reconnaissance.
Day-of-week patterns
Anomaly rate by day-of-week, normalized:
Monday ████████████ 1.0× (baseline)
Tuesday ████████████ 1.0×
Wednesday █████████████ 1.05×
Thursday ████████████ 0.98×
Friday ███████████ 0.95×
Saturday ██████████ 0.85×
Sunday ██████████ 0.80×
Weekends are quieter, but only by 15–20%. The "attackers don't work weekends" myth doesn't hold up: automated traffic doesn't care about the calendar. The Sunday dip likely comes from a slight reduction in human-operated scanning.
Top 10 anomaly types
| Rank | Anomaly type | % of all anomalies |
|---|---|---|
| 1 | Known-bad IP (AbuseIPDB) | 31.4% |
| 2 | Path probe (admin/wp-admin/.env) | 18.7% |
| 3 | Bad user agent (sqlmap, nikto, masscan) | 12.3% |
| 4 | Authentication failure burst | 8.9% |
| 5 | High request rate from one IP | 7.6% |
| 6 | Unusual geographic origin | 5.4% |
| 7 | Scanner-like sequential probing | 4.8% |
| 8 | Possible CVE exploitation | 3.9% |
| 9 | Header anomalies (missing UA, suspicious Accept) | 3.7% |
| 10 | Other | 3.3% |
Notable: known-bad IP filtering catches nearly a third of all anomalies on its own, at a cost of approximately $0 (the AbuseIPDB feed is free for blocklist usage). Skip this layer and that third of anomalous traffic reaches your application instead of dying at the edge.
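For a Node.js app (the bulk of this fleet), the middleware really is simple. A minimal sketch, assuming Express and a blocklist already exported to a local file; the file path and the `dropKnownBadIps` name are illustrative, and refreshing the file is left out:

```typescript
import { readFileSync } from "node:fs";
import type { Request, Response, NextFunction } from "express";

// Load a newline-delimited IP blocklist (e.g. an AbuseIPDB blocklist
// export written to disk by a cron job) into a Set for O(1) lookups.
// The path is an assumption, not a real feed location.
const blocklist = new Set(
  readFileSync("/etc/app/blocklist.txt", "utf8")
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0)
);

export function dropKnownBadIps(req: Request, res: Response, next: NextFunction): void {
  // req.ip honors Express's "trust proxy" setting. Configure it if you
  // sit behind a load balancer, or every request will appear to come
  // from the balancer's address.
  if (req.ip && blocklist.has(req.ip)) {
    res.status(403).end();
    return;
  }
  next();
}
```

Mount it ahead of your routes with `app.use(dropKnownBadIps)` and refresh the file on whatever schedule your feed allows.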
Top 10 path patterns probed
(A breakdown of anomaly type #2 above: the path patterns scanners probe most often.)
| Rank | Path pattern | Likely intent |
|---|---|---|
| 1 | /.env | Environment file harvesting |
| 2 | /wp-admin/setup-config.php | Unconfigured WordPress install takeover |
| 3 | /.git/config | Git repo metadata |
| 4 | /admin and /administrator | Admin endpoint discovery |
| 5 | /server-status | Apache info disclosure |
| 6 | /phpinfo.php | PHP info disclosure |
| 7 | /.aws/credentials | AWS credential file |
| 8 | /api/v1/users | User enumeration (often gated) |
| 9 | /.svn/entries | Subversion metadata |
| 10 | /_ignition/execute-solution | Laravel CVE-2021-3129 |
The "fundamentals" of scanner reconnaissance are stable year over year. The same paths probed in 2020 are probed in 2026.
Geographic distribution of anomalies
Top 10 source countries by anomaly volume (note: source country is where the traffic egresses, not necessarily where the attacker sits; much of it routes through VPNs, proxies, and rented cloud hosts):
| Country | % of anomalies |
|---|---|
| US | 24.3% |
| Russia | 12.1% |
| China | 9.8% |
| Germany | 7.4% |
| Netherlands | 5.2% |
| Brazil | 4.6% |
| France | 3.8% |
| UK | 3.2% |
| India | 3.0% |
| Vietnam | 2.7% |
The US-led concentration reflects US-based cloud and VPS providers that attackers rent compute from, more than US-based attackers per se.
Authentication failure patterns
Auth-failure anomalies (8.9% of all anomalies) break down as:
- Single-IP credential stuffing: 47% — easy to detect and block with per-IP limits
- Distributed credential stuffing: 31% — many IPs, residential-proxy networks
- Token replay attempts: 14% — using leaked tokens against multiple targets
- Slow-rolling brute force: 8% — under per-IP thresholds, requires AS-level detection
The distributed credential stuffing share has grown from 12% in 2024 to 31% in Q2 2026. This is the trend most worth watching: per-IP rate limits are progressively less effective.
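What AS-level detection means in practice: key the failure counter on the autonomous system instead of the IP, so a thousand residential proxies inside one network still trip a single threshold. A minimal fixed-window sketch; it assumes you can resolve a source IP to its ASN (for example from a local GeoLite2-ASN database), and the window size and limit are illustrative:

```typescript
// Fixed-window auth-failure counter keyed by ASN rather than by IP.
// WINDOW_MS and PER_AS_LIMIT are placeholders; tune them to your traffic
// and keep your per-IP limits in place underneath this layer.
interface FailureWindow {
  count: number;
  resetAt: number;
}

const WINDOW_MS = 60_000;
const PER_AS_LIMIT = 200;

const windows = new Map<number, FailureWindow>();

/** Returns true once the AS exceeds its failure budget for the current window. */
export function recordAuthFailure(asn: number, now: number = Date.now()): boolean {
  const w = windows.get(asn);
  if (!w || now >= w.resetAt) {
    windows.set(asn, { count: 1, resetAt: now + WINDOW_MS });
    return false;
  }
  w.count += 1;
  return w.count > PER_AS_LIMIT;
}
```

Treat a true return as a signal to challenge (CAPTCHA, delay) rather than hard-block, since large consumer ASes also carry plenty of legitimate users.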
What you should do with this data
Three concrete observations for engineering teams:
1. Known-bad IP filtering catches a third of anomalies. Install it. Cost: $0. The AbuseIPDB feed plus simple middleware is the highest-yield single intervention you can make.
2. Distributed credential stuffing requires AS-level detection. Per-IP rate limits alone leave you exposed; see the per-AS counter sketched above.
3. Path-probe alerts catch CVE exploitation early. When a new CVE drops, scanner traffic spikes against the relevant path within 24 hours. An alert on "new path probe pattern" gives you a roughly 24-hour head start to patch; a minimal version is sketched below.
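A minimal version of observation 3, assuming you already classify requests as probes (404s on paths you have never served); the `onProbe` name, the 25-distinct-IP threshold, and the console alert are placeholders:

```typescript
// Flag paths that suddenly attract probes from many distinct source IPs
// but were never probed before — the usual signature of a fresh CVE scanner.
const seenProbePaths = new Set<string>();
const candidateSources = new Map<string, Set<string>>(); // path -> distinct IPs

export function onProbe(path: string, sourceIp: string): void {
  if (seenProbePaths.has(path)) return;

  const ips = candidateSources.get(path) ?? new Set<string>();
  ips.add(sourceIp);
  candidateSources.set(path, ips);

  if (ips.size >= 25) {
    seenProbePaths.add(path);
    candidateSources.delete(path);
    // In production this would page on-call; the point is that the alert
    // fires within hours of a new scanner signature appearing.
    console.warn(`[probe-alert] new path pattern: ${path} (${ips.size} distinct IPs)`);
  }
}
```

A real implementation would also evict stale entries from candidateSources, but the shape is the same.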
Frequently Asked Questions
What's an 'anomaly' in this dataset?
Any request flagged by SecureNow's detection rules — known-bad IP, suspicious user agent, anomalous rate, scanner-like path probe, etc. Excludes requests that legitimately failed authentication or hit 404s on real paths.
Is this representative?
Of SecureNow customers, yes. The fleet skews toward Node.js SaaS (B2B and B2C), mid-stage. Patterns may differ for Python/Go/Java fleets or enterprise traffic.
Why publish?
Most threat-intel data is locked inside vendors' marketing decks. We think the engineering community benefits from honest aggregate data with concrete numbers.
How fresh is the data?
30-day window ending May 1, 2026. We update this analysis quarterly.