Rust is the language systems programmers reach for when they need speed without sacrificing safety. If you are building a web crawler, a competitive intelligence pipeline, a security scanner, or a CLI tool that needs to identify what technologies a website is running, you need a reliable API that returns structured data you can deserialize into strongly-typed Rust structs.
The StackPeek API gives you exactly that: send a URL, get back JSON listing every detected framework, CMS, analytics tool, CDN, and hosting provider. No headless browser dependencies, no JavaScript runtime, no C bindings. Just a single HTTP GET request using reqwest and type-safe deserialization with serde.
This guide covers production-ready Rust code for every common pattern: serde struct definitions for the API response, async HTTP calls with reqwest and tokio, a full CLI tool built with clap, an actix-web handler that wraps StackPeek as a proxy, concurrent batch scanning with tokio::spawn and semaphores, and robust error handling with thiserror and anyhow. Every example compiles with cargo build and runs against the live API.
Rust excels at tasks that demand high throughput and low resource consumption. Scanning thousands of websites to build a competitive intelligence database is exactly that kind of task. A Rust binary that calls the StackPeek API can process URLs faster than equivalent Python or Node.js scripts while using a fraction of the memory.
Common scenarios where Rust developers reach for tech stack detection (concurrent batch scanners, cross-compiled CLI tools, data pipelines) are covered at the end of this guide.
The API is a single GET endpoint. You pass the target URL as a query parameter and receive a JSON response containing every detected technology with its category, confidence score, and version (when detectable).
GET https://us-central1-todd-agent-prod.cloudfunctions.net/stackpeekApi/api/v1/detect?url=https://stripe.com
{
  "url": "https://stripe.com",
  "technologies": [
    {
      "name": "React",
      "category": "JavaScript Framework",
      "confidence": 95,
      "version": "18.2.0",
      "website": "https://reactjs.org"
    },
    {
      "name": "Next.js",
      "category": "Web Framework",
      "confidence": 90,
      "version": "13.4",
      "website": "https://nextjs.org"
    }
  ],
  "scanTime": 1240
}
The free tier gives you 100 scans per day with no API key required. Paid plans start at $9/month for 5,000 scans. The response is clean JSON that maps directly to Rust structs with serde — no nested wrappers, no pagination tokens, no GraphQL complexity.
Before writing any HTTP code, define Rust structs that derive Deserialize so serde_json can automatically map the API response into typed values. This is the foundation every other example in this guide builds on.
use serde::{Deserialize, Serialize};

/// A single detected technology from the StackPeek API.
// Serialize is derived as well so later examples can re-emit
// responses as JSON (serde_json::to_string_pretty, actix's .json()).
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TechDetection {
    pub name: String,
    pub category: String,
    pub confidence: f64,
    pub version: Option<String>,
    pub website: Option<String>,
}

/// The top-level response from the StackPeek detect endpoint.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct DetectResponse {
    pub url: String,
    pub technologies: Vec<TechDetection>,
    pub scan_time: Option<u64>,
}

/// Wraps a detection result with domain context
/// for batch scanning operations.
#[derive(Debug)]
pub struct ScanResult {
    pub domain: String,
    pub response: Option<DetectResponse>,
    pub error: Option<String>,
}
Rust's Option<String> handles missing JSON fields the same way Swift's optionals do — if the key is absent, the value is None rather than causing a deserialization error. The #[serde(rename_all = "camelCase")] attribute maps the API's scanTime JSON key to Rust's idiomatic scan_time snake_case field name automatically. The Clone derive lets you share these structs across threads without worrying about ownership.
Add these dependencies to your Cargo.toml:
[dependencies]
serde = { version = "1", features = ["derive"] }
serde_json = "1"
reqwest = { version = "0.12", features = ["json"] }
tokio = { version = "1", features = ["full"] }
The reqwest crate is the de facto HTTP client for Rust. Combined with tokio as the async runtime, it provides a clean, ergonomic API for making HTTP requests. Here is the simplest possible call to the StackPeek API:
use reqwest::Client;
use std::time::Duration;

const BASE_URL: &str =
    "https://us-central1-todd-agent-prod.cloudfunctions.net\
     /stackpeekApi/api/v1/detect";

/// A lightweight client for the StackPeek tech detection API.
pub struct StackPeekClient {
    http: Client,
}

impl StackPeekClient {
    pub fn new() -> Self {
        let http = Client::builder()
            .timeout(Duration::from_secs(30))
            .user_agent("stackpeek-rust/1.0")
            .build()
            .expect("failed to build HTTP client");
        Self { http }
    }

    /// Detect the tech stack of a given URL.
    pub async fn detect(
        &self,
        target_url: &str,
    ) -> Result<DetectResponse, reqwest::Error> {
        self.http
            .get(BASE_URL)
            .query(&[("url", target_url)])
            .send()
            .await?
            .error_for_status()?
            .json::<DetectResponse>()
            .await
    }
}

#[tokio::main]
async fn main() {
    let client = StackPeekClient::new();
    match client.detect("https://stripe.com").await {
        Ok(response) => {
            println!(
                "Detected {} technologies on {}",
                response.technologies.len(),
                response.url
            );
            for tech in &response.technologies {
                let ver = tech.version.as_deref().unwrap_or("-");
                println!(
                    " {:<20} {:<16} {:<10} {}%",
                    tech.name, tech.category, ver,
                    tech.confidence as u32
                );
            }
        }
        Err(e) => eprintln!("Detection failed: {e}"),
    }
}
Key details: reqwest handles URL encoding of query parameters automatically when you use .query() — no manual percent-encoding needed. The .error_for_status()? call converts non-2xx HTTP responses into errors. The .json::<DetectResponse>() call deserializes the response body directly into your struct using serde. The entire chain uses Rust's ? operator for clean error propagation — no nested match blocks or unwrap chains.
Rust is one of the best languages for building CLI tools. The clap crate provides argument parsing with derive macros, and the resulting binary is a single static executable with no runtime dependencies. Here is a complete CLI tool that detects tech stacks:
use clap::Parser;

/// Detect the technology stack of any website.
#[derive(Parser)]
#[command(name = "stackpeek")]
#[command(about = "Detect website tech stacks via the StackPeek API")]
struct Cli {
    /// Target URL to scan (e.g., https://stripe.com)
    url: String,

    /// Output results as JSON instead of a table
    #[arg(short, long)]
    json: bool,

    /// Only show technologies with confidence above this threshold
    #[arg(short, long, default_value_t = 0.0)]
    min_confidence: f64,
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let cli = Cli::parse();
    let client = StackPeekClient::new();
    let response = client.detect(&cli.url).await?;

    let filtered: Vec<&TechDetection> = response
        .technologies
        .iter()
        .filter(|t| t.confidence >= cli.min_confidence)
        .collect();

    if cli.json {
        println!("{}", serde_json::to_string_pretty(&response)?);
    } else {
        println!("Technologies on {}:\n", response.url);
        println!(
            " {:<22} {:<18} {:<10} {}",
            "NAME", "CATEGORY", "VERSION", "CONFIDENCE"
        );
        println!(" {}", "-".repeat(66));
        for tech in &filtered {
            let ver = tech.version.as_deref().unwrap_or("-");
            println!(
                " {:<22} {:<18} {:<10} {}%",
                tech.name, tech.category, ver,
                tech.confidence as u32
            );
        }
        println!("\n {} technologies detected", filtered.len());
        if let Some(ms) = response.scan_time {
            println!(" Scan completed in {}ms", ms);
        }
    }
    Ok(())
}
Usage from the command line:
# Basic scan
$ stackpeek https://linear.app
# JSON output for piping to jq
$ stackpeek https://vercel.com --json | jq '.technologies[].name'
# Only high-confidence detections
$ stackpeek https://github.com --min-confidence 80
Add clap = { version = "4", features = ["derive"] } and anyhow = "1" to your Cargo.toml. Clap automatically generates --help output from the doc comments on each field. The --json flag makes the tool composable with other Unix tools like jq, grep, and awk. Compile with cargo build --release and you get a single binary under 5 MB that runs on any machine with no dependencies.
If you are building a web service in Rust, you can expose StackPeek through your own API with caching, rate limiting, and custom response formats. Here is an actix-web handler that proxies requests to StackPeek and caches results:
use actix_web::{web, App, HttpServer, HttpResponse};
use dashmap::DashMap;
use serde::Deserialize;
use std::sync::Arc;
use std::time::{Duration, Instant};

/// Cached entry with TTL.
struct CacheEntry {
    response: DetectResponse,
    expires_at: Instant,
}

/// Shared application state.
struct AppState {
    client: StackPeekClient,
    cache: DashMap<String, CacheEntry>,
    cache_ttl: Duration,
}

#[derive(Deserialize)]
struct DetectQuery {
    url: String,
}

/// Handler: GET /api/detect?url=https://example.com
async fn detect_handler(
    query: web::Query<DetectQuery>,
    state: web::Data<Arc<AppState>>,
) -> HttpResponse {
    let target = &query.url;

    // Check cache first
    if let Some(entry) = state.cache.get(target) {
        if Instant::now() < entry.expires_at {
            return HttpResponse::Ok().json(&entry.response);
        }
    }

    // Cache miss — call StackPeek
    match state.client.detect(target).await {
        Ok(response) => {
            state.cache.insert(
                target.clone(),
                CacheEntry {
                    response: response.clone(),
                    expires_at: Instant::now() + state.cache_ttl,
                },
            );
            HttpResponse::Ok().json(&response)
        }
        Err(e) => HttpResponse::BadGateway().json(serde_json::json!({
            "error": format!("StackPeek API error: {e}")
        })),
    }
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let state = Arc::new(AppState {
        client: StackPeekClient::new(),
        cache: DashMap::new(),
        cache_ttl: Duration::from_secs(86_400), // 24 hours
    });

    println!("Starting server on http://127.0.0.1:8080");
    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(state.clone()))
            .route("/api/detect", web::get().to(detect_handler))
    })
    .bind("127.0.0.1:8080")?
    .run()
    .await
}
Add actix-web = "4" and dashmap = "5" to your dependencies. DashMap is a concurrent hash map that does not require a mutex — multiple actix-web worker threads can read and write simultaneously without blocking each other. The cache TTL is set to 24 hours, matching the StackPeek API's data freshness. On cache hits, responses return in microseconds with zero API calls.
This pattern works identically with axum or warp — swap the handler signature and routing macro, but the StackPeekClient and DashMap cache stay exactly the same.
Rust's async runtime makes concurrent scanning straightforward. Use tokio::spawn to create tasks for each URL and a Semaphore to limit concurrency so you do not overwhelm the API:
use std::sync::Arc;
use tokio::sync::Semaphore;
use tokio::task::JoinHandle;

/// Scan multiple domains concurrently with a concurrency limit.
async fn scan_all(
    domains: Vec<String>,
    max_concurrency: usize,
) -> Vec<ScanResult> {
    let client = Arc::new(StackPeekClient::new());
    let semaphore = Arc::new(Semaphore::new(max_concurrency));

    let handles: Vec<JoinHandle<ScanResult>> = domains
        .into_iter()
        .map(|domain| {
            let client = Arc::clone(&client);
            let semaphore = Arc::clone(&semaphore);
            tokio::spawn(async move {
                // Acquire a permit before making the request
                let _permit = semaphore
                    .acquire()
                    .await
                    .expect("semaphore closed");
                let target = format!("https://{domain}");
                match client.detect(&target).await {
                    Ok(response) => ScanResult {
                        domain,
                        response: Some(response),
                        error: None,
                    },
                    Err(e) => ScanResult {
                        domain,
                        response: None,
                        error: Some(e.to_string()),
                    },
                }
            })
        })
        .collect();

    // Await all handles and collect results
    let mut results = Vec::with_capacity(handles.len());
    for handle in handles {
        match handle.await {
            Ok(result) => results.push(result),
            Err(e) => results.push(ScanResult {
                domain: "unknown".into(),
                response: None,
                error: Some(format!("Task panicked: {e}")),
            }),
        }
    }
    results
}

#[tokio::main]
async fn main() {
    let domains: Vec<String> = vec![
        "stripe.com", "linear.app", "vercel.com", "notion.so",
        "figma.com", "github.com", "shopify.com", "slack.com",
        "gitlab.com", "netlify.com",
    ]
    .into_iter()
    .map(String::from)
    .collect();

    let start = std::time::Instant::now();
    let results = scan_all(domains, 5).await;
    let elapsed = start.elapsed();

    println!(
        "Scanned {} domains in {:.1}s\n",
        results.len(),
        elapsed.as_secs_f64()
    );
    for r in &results {
        if let Some(err) = &r.error {
            println!(" {:<18} ERROR: {err}", r.domain);
        } else if let Some(resp) = &r.response {
            let names: Vec<&str> = resp
                .technologies
                .iter()
                .take(5)
                .map(|t| t.name.as_str())
                .collect();
            let extra = if resp.technologies.len() > 5 {
                format!(" (+{} more)", resp.technologies.len() - 5)
            } else {
                String::new()
            };
            println!(" {:<18} {}{}", r.domain, names.join(", "), extra);
        }
    }
}
The Semaphore is the key concurrency primitive here. Each spawned task acquires a permit before making its HTTP request. With max_concurrency set to 5, at most 5 requests are in flight at any time. The remaining tasks wait asynchronously — they do not block a thread or spin-loop. When a request completes and the permit is dropped, the next waiting task proceeds immediately.
Unlike Go's goroutines, which give you no handle to the spawned work, Rust's tokio::spawn returns a JoinHandle that you can await for the task's result. This gives you fine-grained control over failure handling: if a task panics, the panic is captured and surfaces as a JoinError at the handle.await call site rather than crashing the entire program.
Production Rust code needs structured error handling. The thiserror crate lets you define typed error enums for library code, while anyhow provides ergonomic error handling for application code. Here is a complete error handling setup for a StackPeek client:
use thiserror::Error;

#[derive(Error, Debug)]
pub enum StackPeekError {
    #[error("HTTP request failed: {0}")]
    Http(#[from] reqwest::Error),

    #[error("Invalid target URL: {url}")]
    InvalidUrl { url: String },

    #[error("API returned status {status}: {body}")]
    ApiError { status: u16, body: String },

    #[error("Rate limited — retry after {retry_after_secs}s")]
    RateLimited { retry_after_secs: u64 },

    #[error("Scan timed out after {timeout_secs}s")]
    Timeout { timeout_secs: u64 },
}

impl StackPeekClient {
    /// Detect with structured, typed error handling.
    pub async fn detect_robust(
        &self,
        target_url: &str,
    ) -> Result<DetectResponse, StackPeekError> {
        // Validate URL before making the request
        if !target_url.starts_with("http://")
            && !target_url.starts_with("https://")
        {
            return Err(StackPeekError::InvalidUrl {
                url: target_url.to_string(),
            });
        }

        let response = self
            .http
            .get(BASE_URL)
            .query(&[("url", target_url)])
            .send()
            .await?;

        let status = response.status().as_u16();
        match status {
            200 => Ok(response.json::<DetectResponse>().await?),
            429 => {
                let retry = response
                    .headers()
                    .get("retry-after")
                    .and_then(|v| v.to_str().ok())
                    .and_then(|v| v.parse().ok())
                    .unwrap_or(60);
                Err(StackPeekError::RateLimited {
                    retry_after_secs: retry,
                })
            }
            _ => {
                let body = response.text().await.unwrap_or_default();
                Err(StackPeekError::ApiError { status, body })
            }
        }
    }
}

// Application code uses anyhow for ergonomic error handling
use anyhow::{Context, Result};

async fn scan_and_report(url: &str) -> Result<()> {
    let client = StackPeekClient::new();
    let response = client
        .detect_robust(url)
        .await
        .context(format!("Failed to scan {url}"))?;

    println!("Found {} technologies", response.technologies.len());
    for tech in &response.technologies {
        println!(" {} ({})", tech.name, tech.category);
    }
    Ok(())
}
Add thiserror = "1" and anyhow = "1" to your Cargo.toml. The #[from] attribute on StackPeekError::Http automatically converts reqwest::Error into your custom error type when you use ?. The anyhow::Context trait adds human-readable context to errors — when something fails, you see "Failed to scan https://stripe.com" followed by the underlying error, making debugging straightforward. This two-layer pattern — thiserror for the library, anyhow for the application — is the standard approach in the Rust ecosystem.
If you have looked at Wappalyzer's API pricing for a Rust project, here is how the two compare:
| Feature | StackPeek | Wappalyzer |
|---|---|---|
| Free tier | 100 scans/day | 50 scans/month |
| Paid plan (entry) | $9/month | $250/month |
| Scans on entry plan | 5,000/month | 50,000/month |
| Pro plan | $29/month (25,000 scans) | $450/month (100,000 scans) |
| Cost per scan (entry) | $0.0018 | $0.005 |
| API key required | No (free tier) | Yes |
| Technologies detected | 120+ | 1,400+ |
| Response format | JSON (serde-ready) | JSON |
At the entry tier, StackPeek's plan price is 28x lower than Wappalyzer's ($9/month vs $250/month), and the per-scan cost is roughly 3x lower ($0.0018 vs $0.005). For most Rust projects — CLI tools, web crawlers, microservices, security scanners — you need to detect the major frameworks, CMS platforms, and analytics tools, and StackPeek covers these at a fraction of the cost. If you need to identify niche JavaScript libraries or obscure WordPress plugins, Wappalyzer's broader fingerprint database (1,400+ technologies vs 120+) may justify the premium. But for the common cases, StackPeek at $9/month vs Wappalyzer at $250/month is the obvious choice.
Here are the most common scenarios where Rust developers use tech stack detection:
- Concurrent batch scanning with tokio and reqwest. Scan thousands of domains per hour while using under 50 MB of memory. Rust's zero-cost abstractions mean your concurrent scanner runs as fast as hand-written C without the memory safety risks.
- Cross-compiled CLI tools built with cargo build --target. No runtime, no installer, no dependencies.
- WebAssembly: compile your scanner with wasm-pack and call it from JavaScript in the browser or from edge functions on Cloudflare Workers. Rust's WASM support is first-class, and the compiled module is typically under 200 KB.
- Bulk data analysis: crates like polars or arrow make aggregating and querying thousands of scan results fast and memory-efficient.

Start detecting tech stacks from Rust today
100 free scans/day. No API key required. JSON response deserializes natively with serde.
Try StackPeek Free →

The fastest way to test StackPeek from Rust requires just four crates — reqwest, serde, serde_json, and tokio:
1. Run cargo new stackpeek-demo to create a new project
2. Add reqwest, serde, serde_json, and tokio to your Cargo.toml
3. Paste the StackPeekClient from this guide into src/main.rs
4. Run cargo run and see detected technologies in the console

From there, add clap for a polished CLI interface, wrap the client in an actix-web handler for a caching proxy, or use tokio::spawn with a semaphore for batch scanning. The API is the same regardless of how you call it — a single GET request with a URL query parameter that returns serde-compatible JSON.
For production deployments, add a DashMap cache to avoid redundant API calls for the same domain. Add retry logic with exponential backoff using the backoff crate for transient network failures. Use structured error handling with thiserror for your library and anyhow for your application. Every pattern in this guide is copy-paste ready and compiles with stable Rust.