StackPeek vs Wappalyzer API: Full 2026 Comparison
Both StackPeek and Wappalyzer solve the same problem: you send them a URL, they tell you what technology the site is running. The similarities end there. Their architectures are different. Their pricing models are different by an order of magnitude. Their authentication patterns differ. Their response schemas differ. Even what they consider a "lookup" differs.
This post is a technical comparison — not a marketing piece. We'll cover the API format, authentication requirements, response time benchmarks, detection coverage across categories, batch scanning, rate limiting behavior, and error handling. Side-by-side code examples throughout. We'll be honest about where Wappalyzer wins, because it genuinely does in some areas.
How Each API Works Internally
Understanding the architectural difference helps explain every other difference — speed, price, accuracy trade-offs, and what each API can and cannot detect.
Wappalyzer's approach: headless browser rendering
Wappalyzer spins up a headless Chromium instance for each lookup. The browser fetches the URL, executes JavaScript, waits for the page to render, then inspects the resulting DOM, window objects, cookies, HTTP headers, and network requests. This is comprehensive: it catches client-side frameworks that only reveal themselves after JS execution, dynamically injected analytics scripts, and lazy-loaded third-party widgets.
The cost is time. Spinning up a headless browser, loading a page, and waiting for JS to settle takes 2–5 seconds for a typical site. Heavy SPAs or sites with many third-party scripts can take longer. It's also resource-intensive on Wappalyzer's infrastructure, which helps explain the pricing.
StackPeek's approach: static analysis pipeline
StackPeek does not use a headless browser. When you submit a URL, it fetches the raw HTTP response, then runs it through a multi-stage analysis pipeline:
- HTTP header inspection — Server, X-Powered-By, X-Generator, CF-Cache-Status, and dozens of vendor-specific headers.
- HTML parsing — <meta> tags, generator comments, data attributes, and script src patterns.
- Script fingerprinting — matching script URLs and inline script patterns against known fingerprints.
- DNS and infrastructure signals — CNAME chains, nameserver patterns, and IP range ownership.
No JavaScript execution. This means StackPeek can miss technologies that only appear after client-side rendering. It also means the entire pipeline completes in under 500ms, and often under 300ms.
The honest trade-off: If a site injects its analytics tag via a tag manager that fires after load, StackPeek may miss it. Wappalyzer, because it actually runs the page, catches it. For most use cases this doesn't matter. For deep analytics stack auditing, it might.
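To make the static-analysis idea concrete, here is a minimal sketch of header-and-HTML fingerprint matching. The rules below are illustrative stand-ins, not StackPeek's actual fingerprint database, and `match_static` is our own helper name.

```python
import re

# Hypothetical fingerprint rules in the spirit of static detection.
# Real pipelines have far more patterns per technology.
FINGERPRINTS = {
    "Next.js": [re.compile(r'id="__NEXT_DATA__"'), re.compile(r"/_next/static/")],
    "WordPress": [re.compile(r"/wp-content/"),
                  re.compile(r'name="generator" content="WordPress')],
    "Cloudflare": [re.compile(r"cf-ray", re.I)],  # seen as a response header
}

def match_static(headers: dict, html: str) -> list:
    """Match fingerprints against raw HTML plus response header names.
    No JavaScript execution: anything injected after load is invisible."""
    haystack = html + "\n" + "\n".join(headers.keys())
    found = []
    for tech, patterns in FINGERPRINTS.items():
        if any(p.search(haystack) for p in patterns):
            found.append(tech)
    return found
```

Because everything is regex matching over bytes already in hand, the whole pass is microseconds per rule, which is how a static pipeline stays under 500ms end to end.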
API Format and Authentication
This is where the developer experience gap is widest.
StackPeek
Single endpoint. GET request. URL as a query parameter. No authentication required on the free tier (100 scans/day, enforced by IP).
# Free tier — no API key needed
GET https://stackpeek.web.app/api/v1/detect?url=https://example.com
# Paid tier — Bearer token in Authorization header
GET https://stackpeek.web.app/api/v1/detect?url=https://example.com
Authorization: Bearer sp_live_xxxxxxxxxxxx
The API key format is sp_live_ followed by a random string. Keys are scoped per account and can be rotated from the dashboard. There is no IP allowlisting requirement, no webhook registration, no SDK dependency.
Wappalyzer
Wappalyzer's API requires an API key on every request, even for the free tier. The key must be passed as an x-api-key header. There's no keyless access.
# Wappalyzer requires an API key on every request
GET https://api.wappalyzer.com/v2/lookup/?urls=https://example.com
x-api-key: wap_xxxxxxxxxxxxxxxxxxxxxxxx
Note the urls parameter (plural) and the trailing slash — Wappalyzer's endpoint accepts comma-separated URLs or repeated parameters for multi-URL lookups, but the authentication overhead is present for every request regardless of tier.
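If you are constructing that multi-URL query string programmatically, a small sketch (the `wappalyzer_lookup_url` helper is our own, not part of any SDK):

```python
from urllib.parse import urlencode

def wappalyzer_lookup_url(urls):
    """Build a multi-URL lookup request using the comma-separated
    `urls` parameter described above."""
    query = urlencode({"urls": ",".join(urls)})
    return f"https://api.wappalyzer.com/v2/lookup/?{query}"
```

Note that `urlencode` percent-encodes the separating commas (`%2C`), which HTTP servers decode back before parsing the parameter.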
Side-by-Side: Detecting a Tech Stack
Same task: detect the tech stack at linear.app.
# StackPeek
curl -s \
  "https://stackpeek.web.app/api/v1/detect?url=https://linear.app" \
  -H "Authorization: Bearer sp_live_xxx"

# Wappalyzer
curl -s \
  "https://api.wappalyzer.com/v2/lookup/?urls=https://linear.app" \
  -H "x-api-key: wap_xxx"
StackPeek response
{
  "url": "https://linear.app",
  "technologies": [
    { "name": "React", "category": "framework", "confidence": 0.97 },
    { "name": "Next.js", "category": "framework", "confidence": 0.94 },
    { "name": "Vercel", "category": "hosting", "confidence": 0.99 },
    { "name": "Cloudflare", "category": "cdn", "confidence": 0.98 },
    { "name": "Tailwind CSS", "category": "css", "confidence": 0.91 },
    { "name": "Segment", "category": "analytics", "confidence": 0.85 },
    { "name": "Stripe", "category": "payments", "confidence": 0.89 }
  ],
  "scanTime": 318
}
Wappalyzer response (abridged)
[
  {
    "url": "https://linear.app",
    "technologies": [
      { "name": "React", "categories": [{"name": "JavaScript frameworks"}], "confidence": 100 },
      { "name": "Next.js", "categories": [{"name": "JavaScript frameworks"}], "confidence": 100 },
      { "name": "Vercel", "categories": [{"name": "PaaS"}], "confidence": 100 },
      { "name": "Cloudflare", "categories": [{"name": "CDN"}], "confidence": 100 },
      { "name": "Tailwind CSS", "categories": [{"name": "UI frameworks"}], "confidence": 100 },
      { "name": "Intercom", "categories": [{"name": "Live Chat"}], "confidence": 100 }
      // ... ~12 more results including niche libraries
    ]
  }
]
Several structural differences to note. Wappalyzer wraps results in an array (even for a single URL), uses categories (plural, array of objects) rather than a flat category string, and reports confidence as an integer 0–100 rather than a float 0–1. If you're switching between APIs, you'll need to normalize the response shape.
Response Time Benchmarks
We ran 200 scans across 50 different URLs using both APIs and measured wall-clock time from request to full JSON response. URLs included a mix of marketing sites, SaaS apps, e-commerce stores, and content sites. Wappalyzer was tested at off-peak hours to avoid rate-limit queuing.
| Metric | StackPeek | Wappalyzer |
|---|---|---|
| Median response time | 318ms | 2,840ms |
| p95 response time | 487ms | 5,120ms |
| p99 response time | 612ms | 9,300ms |
| Timeout rate (>10s) | 0% | 2.1% |
| Slowest observed | 1,100ms (SPA with many redirects) | 14,200ms (heavy JS app) |
The p99 gap is worth paying attention to. With StackPeek, the 99th-percentile request completes in around 600ms. With Wappalyzer, it takes over 9 seconds, and roughly 1 in 50 requests times out past 10 seconds. If you're using either API in a user-facing context, StackPeek's predictable latency makes it significantly easier to design around.
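In practice, "designing around" latency in a user-facing path means a short timeout and graceful degradation. A sketch using `requests`; the `enrich_signup` name and the 1.5s budget are our own choices, not part of either API:

```python
from typing import List, Optional

import requests

def enrich_signup(url: str, api_key: Optional[str] = None) -> List[dict]:
    """Tech-stack enrichment during signup: cap the wait, never block the user."""
    headers = {"Authorization": f"Bearer {api_key}"} if api_key else {}
    try:
        resp = requests.get(
            "https://stackpeek.web.app/api/v1/detect",
            params={"url": url},
            headers=headers,
            timeout=1.5,  # comfortably above the ~612ms p99 benchmark
        )
        resp.raise_for_status()
        return resp.json().get("technologies", [])
    except requests.RequestException:
        # Degrade gracefully: return nothing now, enrich from a
        # background job later
        return []
```

With a 9-second p99 the same pattern would force either a very long timeout or a high miss rate, which is why predictable tail latency matters more than the median here.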
Pricing Tiers: The Full Picture
The headline comparison is "$9/mo vs $250/mo," but the tier structures are quite different. Here's the complete picture.
| Tier | StackPeek | Wappalyzer |
|---|---|---|
| Free | 100 scans/day (~3,000/mo). No API key. IP-based limit. | 50 lookups/month. API key required. |
| Developer / Starter | $9/mo — 5,000 scans | $250/mo — 25,000 lookups |
| Team / Growth | $29/mo — 25,000 scans | $450/mo — 100,000 lookups |
| Cost per scan (Developer tier) | $0.0018 | $0.01 |
| Overage policy | Requests blocked at limit. No surprise charges. | Per-lookup overage charges |
| Annual cost at 5,000 scans/mo | $108 | $3,000 |
One detail worth flagging: Wappalyzer charges per-lookup overage if you exceed your monthly allocation. StackPeek hard-blocks at the limit. Depending on your usage pattern, Wappalyzer's overage model is either convenient flexibility or an unexpected bill. StackPeek's hard cap means you won't be surprised, but you'll need to upgrade proactively if you're approaching your limit.
Annual math: At 5,000 scans/month, StackPeek costs $108/year. Wappalyzer costs $3,000/year for equivalent volume (their $250/mo plan). The $2,892 annual difference covers a year of several other developer tools combined.
Rate Limits and Headers
Both APIs communicate rate limit state via response headers, but with different header names.
StackPeek rate limit headers
X-RateLimit-Limit: 5000
X-RateLimit-Remaining: 4837
X-RateLimit-Reset: 1743292800
X-Scan-Time: 318
X-RateLimit-Reset is a Unix timestamp indicating when the monthly counter resets. On the free tier, the limit resets daily at midnight UTC. On paid tiers it resets on your billing date.
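A small sketch of how a client might act on these headers. The header names come from the documentation above; the helper names and the "safety reserve" policy are our own:

```python
import time

def seconds_until_reset(headers: dict) -> float:
    """X-RateLimit-Reset is a Unix timestamp; clamp to zero if it has passed."""
    reset = int(headers.get("X-RateLimit-Reset", 0))
    return max(reset - time.time(), 0.0)

def has_budget(headers: dict, reserve: int = 100) -> bool:
    """Pause non-urgent scans once a safety reserve remains.
    This is a client-side policy, not something the API enforces."""
    return int(headers.get("X-RateLimit-Remaining", 0)) > reserve
```

Checking `X-RateLimit-Remaining` after every response lets a batch job stop cleanly before it hits a hard 429 wall.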
Wappalyzer rate limit headers
X-RateLimit-Limit: 25000
X-RateLimit-Remaining: 23914
X-RateLimit-Reset: 1743292800
X-Credits-Used: 1086
Wappalyzer also returns X-Credits-Used, which tracks cumulative credits consumed in the current billing period. Some Wappalyzer lookups consume more than one credit if the target site is complex enough to require multiple rendering passes. StackPeek always consumes exactly one scan per URL.
Handling 429 responses
When you exceed your rate limit, both APIs return HTTP 429 Too Many Requests. The retry strategy differs.
async function detect(url, retries = 3) {
  const res = await fetch(
    `https://stackpeek.web.app/api/v1/detect?url=${encodeURIComponent(url)}`,
    { headers: { 'Authorization': `Bearer ${KEY}` } }
  );
  if (res.status === 429 && retries > 0) {
    // X-RateLimit-Reset is a Unix timestamp in seconds
    const reset = Number(res.headers.get('X-RateLimit-Reset'));
    const wait = Math.max(reset * 1000 - Date.now(), 0);
    await sleep(wait); // wait until reset, then retry
    return detect(url, retries - 1);
  }
  return res.json();
}
async function lookup(url, retries = 3) {
  const res = await fetch(
    `https://api.wappalyzer.com/v2/lookup/?urls=${encodeURIComponent(url)}`,
    { headers: { 'x-api-key': KEY } }
  );
  if (res.status === 429 && retries > 0) {
    // Wappalyzer uses Retry-After (seconds)
    const wait = parseInt(res.headers.get('Retry-After'), 10) || 1;
    await sleep(wait * 1000);
    return lookup(url, retries - 1);
  }
  return res.json();
}
Batch Scanning
Processing multiple URLs is where StackPeek's speed advantage compounds into something dramatic.
StackPeek batch endpoint
POST an array of URLs to /api/v1/detect/batch. Scans run in parallel server-side. Up to 50 URLs per batch on paid plans (10 on free tier).
curl -s -X POST \
  https://stackpeek.web.app/api/v1/detect/batch \
  -H "Authorization: Bearer sp_live_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "urls": [
      "https://shopify.com",
      "https://stripe.com",
      "https://vercel.com",
      "https://linear.app"
    ]
  }'
A batch of 50 URLs typically resolves in 1.5–2.5 seconds total. Compare that to 50 sequential requests at 3 seconds each: a sequential Wappalyzer job for the same 50 URLs takes 2.5 minutes. StackPeek's batch finishes 60x faster.
Wappalyzer multi-URL lookup
Wappalyzer accepts multiple URLs via repeated query parameters or comma separation, but processes them sequentially server-side. There is no dedicated batch endpoint with parallel processing in their API v2. For large lists, their recommended pattern is to queue requests and poll for results, or use their bulk lookup web UI.
# Wappalyzer multi-URL — sequential processing
curl -s \
"https://api.wappalyzer.com/v2/lookup/?urls=https://shopify.com,https://stripe.com" \
-H "x-api-key: wap_xxx"
For batch workloads: Scanning 10,000 URLs at StackPeek's 50-URL batch capacity takes 200 batch calls. At ~2s per batch, that's roughly 7 minutes. The same job via sequential Wappalyzer lookups at 3s each would take 8+ hours.
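The chunking arithmetic above is easy to get right in code. A sketch, with the HTTP call injected as a callable so the example stays network-free (`chunks` and `scan_all` are our own helper names):

```python
from typing import Callable, Iterator, List

BATCH_SIZE = 50  # paid-plan cap per batch, per the docs above

def chunks(urls: List[str], size: int = BATCH_SIZE) -> Iterator[List[str]]:
    """Split a URL list into batch-sized slices."""
    for i in range(0, len(urls), size):
        yield urls[i:i + size]

def scan_all(urls: List[str],
             post_batch: Callable[[List[str]], list]) -> list:
    """post_batch wraps your POST to /api/v1/detect/batch and
    returns that batch's results."""
    results = []
    for batch in chunks(urls):
        results.extend(post_batch(batch))
    return results
```

10,000 URLs yields exactly 200 batches, matching the ~7-minute estimate at roughly 2 seconds per batch call.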
Detection Coverage: Category-by-Category
Wappalyzer's 1,500+ fingerprints vs StackPeek's 120+ is the honest headline, but the gap isn't uniform across all technology categories. In the categories that drive most API use cases, the difference is much narrower.
| Category | StackPeek | Wappalyzer | Gap |
|---|---|---|---|
| Frontend frameworks | React, Next, Vue, Nuxt, Angular, Svelte, Astro, Remix, Solid, Qwik | All of StackPeek + 15 more (Lit, Alpine, Preact, HTMX, etc.) | Small |
| CMS platforms | WordPress, Shopify, Squarespace, Wix, Webflow, Ghost, Drupal, Joomla, HubSpot CMS | All of StackPeek + Sitecore, Adobe AEM, Contentful, dozens more | Moderate |
| WordPress plugins | WooCommerce, Yoast (limited) | WooCommerce, Yoast, Elementor, ACF, WPML, 100+ plugins | Large |
| Hosting / PaaS | Vercel, Netlify, AWS, GCP, Azure, Firebase, Cloudflare, Heroku, Fly, Railway | All of StackPeek + more granular variants | Small |
| CDN | Cloudflare, Fastly, Akamai, CloudFront, Bunny CDN | All of StackPeek + niche options | Small |
| Analytics | GA4, Plausible, Fathom, Mixpanel, Amplitude, Heap, Hotjar, Segment, PostHog | All of StackPeek + 50+ minor platforms | Moderate |
| Payments | Stripe, PayPal, Square, Braintree, Paddle, Recurly | All of StackPeek + regional gateways | Small |
| CSS frameworks | Tailwind, Bootstrap, Bulma, Chakra, MUI, Styled Components | All of StackPeek + legacy CSS frameworks | Small |
| Build tools | Webpack, Vite, Turbopack, Parcel, esbuild, Rollup | Similar coverage | Minimal |
| Niche JS libraries | Limited | Lodash, Moment, D3, Three.js versions, 200+ libraries | Large |
The coverage gap is real and largest in two specific areas: WordPress plugin detection and niche JavaScript library identification. If either of those is your core use case, Wappalyzer's broader database is genuinely valuable. For lead generation, competitive analysis, and most sales intelligence workflows, StackPeek's coverage is sufficient.
Python Integration: Full Working Examples
Here's a complete Python script that works with both APIs, showing how you'd normalize the different response shapes into a common format.
import requests
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Technology:
    name: str
    category: str
    confidence: float  # normalized to 0.0-1.0

def detect_stackpeek(url: str, api_key: Optional[str] = None) -> List[Technology]:
    headers = {}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    resp = requests.get(
        "https://stackpeek.web.app/api/v1/detect",
        params={"url": url},
        headers=headers,
        timeout=5,
    )
    resp.raise_for_status()
    data = resp.json()
    return [
        Technology(t["name"], t["category"], float(t["confidence"]))
        for t in data["technologies"]
    ]

def detect_wappalyzer(url: str, api_key: str) -> List[Technology]:
    resp = requests.get(
        "https://api.wappalyzer.com/v2/lookup/",
        params={"urls": url},
        headers={"x-api-key": api_key},
        timeout=15,  # headless browser needs a longer timeout
    )
    resp.raise_for_status()
    data = resp.json()
    results = []
    for tech in data[0]["technologies"]:
        # Normalize: Wappalyzer confidence is a 0-100 int;
        # categories is an array of objects, take the first
        category = tech["categories"][0]["name"] if tech["categories"] else "unknown"
        results.append(Technology(
            tech["name"], category, tech["confidence"] / 100.0
        ))
    return results
Error Handling
Both APIs return standard HTTP error codes, but the error body structure differs.
StackPeek error format
# HTTP 422 — invalid URL
{
  "error": "invalid_url",
  "message": "The provided URL is not reachable or malformed.",
  "status": 422
}

# HTTP 429 — rate limit exceeded
{
  "error": "rate_limit_exceeded",
  "message": "Monthly scan limit reached. Resets 2026-04-01T00:00:00Z.",
  "status": 429
}
Wappalyzer error format
# HTTP 400 — invalid request
{
  "statusCode": 400,
  "message": "Validation failed",
  "error": "Bad Request"
}

# HTTP 401 — missing or invalid API key
{
  "statusCode": 401,
  "message": "Unauthorized"
}
Wappalyzer's error messages are less specific. A 400 with "Validation failed" doesn't tell you whether the URL is malformed, unreachable, or blocked by robots.txt. StackPeek's error codes are more descriptive, which helps when debugging automation pipelines.
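If your pipeline talks to both APIs, it helps to fold the two error shapes into one structure before logging. A sketch; the field names follow the samples above, while the unified shape and `normalize_error` name are our own convention:

```python
def normalize_error(status: int, body: dict, source: str) -> dict:
    """Map either provider's error body onto a single shape."""
    if source == "stackpeek":
        # StackPeek: error / message / status
        return {
            "status": body.get("status", status),
            "code": body.get("error", "unknown"),
            "message": body.get("message", ""),
        }
    # Wappalyzer: statusCode / message, and "error" may be absent (e.g. 401)
    return {
        "status": body.get("statusCode", status),
        "code": body.get("error", "unknown"),
        "message": body.get("message", ""),
    }
```

Normalizing early means your retry logic and alerting only ever see one error vocabulary.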
When to Use Each
After running both APIs extensively, the use case split is clear.
Choose StackPeek if
- You're building real-time features where latency affects user experience — enrichment on signup, live competitive intel, inline tech badges
- You're running batch jobs at any meaningful scale — 1,000+ URLs a month
- You're an indie developer, startup, or small team and $250/month is a serious budget item
- You want to prototype before committing — 100 free scans/day with no credit card is a genuine free tier
- Your use case is lead qualification or competitive analysis — the tech categories that drive those workflows are all covered
- You want predictable pricing — fixed plans, no overage surprises
Choose Wappalyzer if
- You need deep WordPress plugin detection — specific plugins, versions, themes, child themes
- You need to identify niche JavaScript libraries or legacy CMS extensions that fall outside the top 120 technologies
- Your workflow relies on the browser extension alongside the API — Wappalyzer's extension is genuinely best-in-class for manual ad hoc research
- You need pre-built lead lists ("all Salesforce users in healthcare") rather than real-time scanning of your own URL lists
- You need historical technology data — Wappalyzer tracks changes over time for many sites; StackPeek is point-in-time only
- You're at an enterprise with a budget that justifies the depth and you need every fingerprint available
Start scanning for free
100 scans/day, no API key required. Full JSON response in under 500ms.
See pricing →

Migrating from Wappalyzer to StackPeek
If you're currently on Wappalyzer and your use case fits StackPeek's coverage, migration is a 30-minute job. The key changes:
- Endpoint: Replace https://api.wappalyzer.com/v2/lookup/ with https://stackpeek.web.app/api/v1/detect
- Authentication header: Replace x-api-key: wap_xxx with Authorization: Bearer sp_live_xxx
- Response normalization: categories[0].name (array of objects) becomes a flat category string. confidence changes from a 0–100 integer to a 0–1 float.
- Timeout: Drop your timeout from 15s to 3s. StackPeek's p99 is well under 1 second.
- Batch: Replace looped sequential requests with a single POST to /api/v1/detect/batch.
Before going live, run your 50 most common scan targets through both APIs and diff the results. Pay attention to any technologies you rely on that fall outside StackPeek's 120+ fingerprint set. If everything critical shows up, you're ready to switch.
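The diff itself is plain set arithmetic over technology names. A sketch (`diff_detections` is our own helper, fed by the two normalized responses):

```python
def diff_detections(stackpeek_names, wappalyzer_names):
    """Compare detected technology names for one URL across both APIs."""
    sp, wap = set(stackpeek_names), set(wappalyzer_names)
    return {
        "both": sorted(sp & wap),
        "only_stackpeek": sorted(sp - wap),
        "only_wappalyzer": sorted(wap - sp),
    }
```

Run this over your scan targets and inspect only_wappalyzer: anything critical appearing there is a technology you would lose by switching.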
Conclusion
Wappalyzer is a mature, comprehensive technology detection platform with the deepest fingerprint database available. If you need to detect WordPress plugin combinations, historical technology timelines, or the long tail of niche libraries, it's the right tool and the price reflects the breadth.
StackPeek is built for a different job: fast, cheap, programmatic tech stack detection for the categories that drive real business decisions. At 28x lower cost, with sub-500ms response times, true parallel batch scanning, and a free tier that's 60x more generous, it covers the 90% of use cases where you need to know the framework, CMS, hosting, CDN, analytics, and payment stack — not every obscure jQuery plugin version.
The decision is straightforward once you know which category you're in. Most developers building tools that touch tech stack data belong in the StackPeek category. Enterprise market research teams doing deep plugin fingerprinting belong in the Wappalyzer category. Few teams need both.
Related reading: Best Wappalyzer Alternative API in 2026 · StackPeek vs Wappalyzer: Detailed API Comparison · SimilarTech alternative API comparison · Tech stack detection for sales prospecting · Using tech stack APIs for startup lead generation