Go is the language of choice for infrastructure tooling, CLI applications, and high-throughput services. If you are building a competitive intelligence crawler, a security scanner, or a sales enrichment pipeline in Go, you need a fast and reliable way to detect which technologies a website is running.
The StackPeek API does exactly that: send a URL, get back structured JSON listing every detected framework, CMS, analytics tool, CDN, and hosting provider. No browser automation, no headless Chrome, no Puppeteer dependency trees. Just a single HTTP GET request using Go's standard library.
This guide covers real, production-ready Go code for every common pattern: basic net/http calls, production clients with timeouts and retries, concurrent scanning with goroutines and semaphores, a Cobra-based CLI tool, and middleware for Gin and Echo. Every example compiles and runs with go run main.go.
Before writing any HTTP code, define Go structs with json tags that map to the StackPeek API response. This gives you type safety and makes encoding/json do all the heavy lifting.
package stackpeek
// TechDetection represents a single detected technology.
type TechDetection struct {
Name string `json:"name"`
Category string `json:"category"`
Confidence float64 `json:"confidence"`
Version string `json:"version,omitempty"`
Website string `json:"website,omitempty"`
}
// DetectResponse is the top-level response from the StackPeek API.
type DetectResponse struct {
URL string `json:"url"`
Technologies []TechDetection `json:"technologies"`
ScanTime int64 `json:"scanTime,omitempty"`
}
// ScanResult wraps a detection response with error context
// for batch scanning operations.
type ScanResult struct {
Domain string
Response *DetectResponse
Err error
}
The omitempty tag on optional fields like Version and Website affects marshaling, not decoding: if you re-encode one of these structs, empty fields are dropped from the output JSON. On decode, a field absent from the response is simply left at its zero value, with or without the tag. The ScanResult struct is for the concurrent scanning section later — it pairs a domain with either a successful response or an error.
Go's standard library includes everything you need to call a REST API. No third-party HTTP client is required. Here is the simplest possible call to the StackPeek API:
package main
import (
"encoding/json"
"fmt"
"log"
"net/http"
"net/url"
)
type TechDetection struct {
Name string `json:"name"`
Category string `json:"category"`
Confidence float64 `json:"confidence"`
Version string `json:"version,omitempty"`
}
type DetectResponse struct {
URL string `json:"url"`
Technologies []TechDetection `json:"technologies"`
}
const baseURL = "https://us-central1-todd-agent-prod.cloudfunctions.net" +
"/stackpeekApi/api/v1/detect"
func detectTechStack(targetURL string) (*DetectResponse, error) {
apiURL := fmt.Sprintf("%s?url=%s", baseURL, url.QueryEscape(targetURL))
resp, err := http.Get(apiURL)
if err != nil {
return nil, fmt.Errorf("request failed: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("API returned status %d", resp.StatusCode)
}
var result DetectResponse
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return nil, fmt.Errorf("JSON decode failed: %w", err)
}
return &result, nil
}
func main() {
result, err := detectTechStack("https://linear.app")
if err != nil {
log.Fatal(err)
}
fmt.Printf("Technologies detected on %s:\n", result.URL)
for _, tech := range result.Technologies {
ver := ""
if tech.Version != "" {
ver = " v" + tech.Version
}
fmt.Printf(" %s%s — %s (%.0f%%)\n",
tech.Name, ver, tech.Category, tech.Confidence)
}
}
Key details: url.QueryEscape encodes the target URL so special characters do not break the query string. json.NewDecoder streams directly from the response body — it does not buffer the entire body into memory first, which matters for large responses. The defer resp.Body.Close() ensures the connection is returned to the pool even if decoding fails.
The default http.Get uses http.DefaultClient, which has no timeout. In production, a stalled connection will block your goroutine forever. Build a proper client with timeouts and retry logic:
package main
import (
"encoding/json"
"fmt"
"log"
"math"
"net/http"
"net/url"
"time"
)
type TechDetection struct {
Name string `json:"name"`
Category string `json:"category"`
Confidence float64 `json:"confidence"`
Version string `json:"version,omitempty"`
}
type DetectResponse struct {
URL string `json:"url"`
Technologies []TechDetection `json:"technologies"`
ScanTime int64 `json:"scanTime,omitempty"`
}
// Client wraps an http.Client with retry logic
// for calling the StackPeek API.
type Client struct {
httpClient *http.Client
baseURL string
maxRetries int
}
// NewClient creates a StackPeek client with sensible defaults.
func NewClient() *Client {
return &Client{
httpClient: &http.Client{
Timeout: 30 * time.Second,
Transport: &http.Transport{
MaxIdleConns: 100,
MaxIdleConnsPerHost: 10,
IdleConnTimeout: 90 * time.Second,
},
},
baseURL: "https://us-central1-todd-agent-prod.cloudfunctions.net/stackpeekApi/api/v1/detect",
maxRetries: 3,
}
}
// Detect calls the StackPeek API with automatic retries
// and exponential backoff on transient failures.
func (c *Client) Detect(targetURL string) (*DetectResponse, error) {
apiURL := fmt.Sprintf("%s?url=%s", c.baseURL, url.QueryEscape(targetURL))
var lastErr error
for attempt := 0; attempt <= c.maxRetries; attempt++ {
if attempt > 0 {
backoff := time.Duration(math.Pow(2, float64(attempt-1))) * time.Second
time.Sleep(backoff)
}
req, err := http.NewRequest(http.MethodGet, apiURL, nil)
if err != nil {
return nil, fmt.Errorf("build request: %w", err)
}
req.Header.Set("Accept", "application/json")
req.Header.Set("User-Agent", "stackpeek-go/1.0")
resp, err := c.httpClient.Do(req)
if err != nil {
lastErr = fmt.Errorf("attempt %d: %w", attempt+1, err)
continue
}
if resp.StatusCode == http.StatusTooManyRequests ||
resp.StatusCode >= 500 {
resp.Body.Close()
lastErr = fmt.Errorf("attempt %d: status %d", attempt+1, resp.StatusCode)
continue
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("API returned status %d", resp.StatusCode)
}
var result DetectResponse
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return nil, fmt.Errorf("decode response: %w", err)
}
return &result, nil
}
return nil, fmt.Errorf("all %d attempts failed: %w", c.maxRetries+1, lastErr)
}
func main() {
client := NewClient()
result, err := client.Detect("https://stripe.com")
if err != nil {
log.Fatal(err)
}
fmt.Printf("Detected %d technologies on %s:\n\n",
len(result.Technologies), result.URL)
for _, tech := range result.Technologies {
fmt.Printf(" %-20s %-12s %.0f%%\n",
tech.Name, tech.Category, tech.Confidence)
}
}
This client retries on HTTP 429 (rate limit) and 5xx (server errors) with exponential backoff: 1 second, 2 seconds, 4 seconds. It does not retry on 4xx client errors because those indicate a problem with your request, not a transient failure. The Transport configuration sets connection pooling limits so you do not exhaust file descriptors when scanning at scale.
Go's goroutines make concurrent scanning trivial. The pattern below uses a sync.WaitGroup for coordination and a buffered channel as a semaphore to limit how many API calls run in parallel.
package main
import (
"encoding/json"
"fmt"
"log"
"net/http"
"net/url"
"sync"
"time"
)
type TechDetection struct {
Name string `json:"name"`
Category string `json:"category"`
Confidence float64 `json:"confidence"`
}
type DetectResponse struct {
URL string `json:"url"`
Technologies []TechDetection `json:"technologies"`
}
type ScanResult struct {
Domain string
Response *DetectResponse
Err error
}
const apiBase = "https://us-central1-todd-agent-prod.cloudfunctions.net" +
"/stackpeekApi/api/v1/detect"
func scanDomain(client *http.Client, domain string) ScanResult {
targetURL := "https://" + domain
apiURL := fmt.Sprintf("%s?url=%s", apiBase, url.QueryEscape(targetURL))
resp, err := client.Get(apiURL)
if err != nil {
return ScanResult{Domain: domain, Err: err}
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return ScanResult{
Domain: domain,
Err: fmt.Errorf("status %d", resp.StatusCode),
}
}
var result DetectResponse
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return ScanResult{Domain: domain, Err: err}
}
return ScanResult{Domain: domain, Response: &result}
}
// ScanAll scans multiple domains concurrently with a
// configurable concurrency limit using a semaphore channel.
func ScanAll(domains []string, concurrency int) []ScanResult {
client := &http.Client{Timeout: 30 * time.Second}
results := make([]ScanResult, len(domains))
sem := make(chan struct{}, concurrency)
var wg sync.WaitGroup
for i, domain := range domains {
wg.Add(1)
go func(idx int, d string) {
defer wg.Done()
sem <- struct{}{} // acquire semaphore
results[idx] = scanDomain(client, d)
<-sem // release semaphore
}(i, domain)
}
wg.Wait()
return results
}
func main() {
domains := []string{
"stripe.com",
"linear.app",
"vercel.com",
"notion.so",
"figma.com",
"github.com",
"shopify.com",
"slack.com",
"gitlab.com",
"netlify.com",
}
start := time.Now()
results := ScanAll(domains, 5)
elapsed := time.Since(start)
fmt.Printf("Scanned %d domains in %s\n\n", len(results), elapsed)
for _, r := range results {
if r.Err != nil {
fmt.Printf(" %-18s ERROR: %v\n", r.Domain, r.Err)
continue
}
techs := ""
for i, t := range r.Response.Technologies {
if i > 0 {
techs += ", "
}
if i >= 5 {
techs += fmt.Sprintf("(+%d more)", len(r.Response.Technologies)-5)
break
}
techs += t.Name
}
fmt.Printf(" %-18s %s\n", r.Domain, techs)
}
}
The semaphore pattern is the idiomatic Go approach to limiting concurrency. The buffered channel sem with capacity concurrency acts as a counting semaphore: a goroutine sends to the channel before starting work (blocking if the channel is full) and receives from it when done. With a concurrency of 5, you scan 10 domains in roughly the time it takes to scan 2 sequentially.
Results are written directly into a pre-allocated slice at the correct index, so no mutex is needed — each goroutine writes to a unique index. The WaitGroup ensures main blocks until every goroutine finishes.
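If you want to convince yourself the semaphore really caps parallelism, here is a small self-contained sketch that counts the maximum number of goroutines running at once (the 10ms sleep stands in for the API call):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// maxParallel runs `workers` goroutines through a semaphore of
// the given capacity and reports the highest number observed
// running simultaneously.
func maxParallel(workers, capacity int) int64 {
	sem := make(chan struct{}, capacity)
	var wg sync.WaitGroup
	var inFlight, maxSeen int64
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem <- struct{}{} // acquire: blocks once capacity goroutines hold slots
			n := atomic.AddInt64(&inFlight, 1)
			// Record a new high-water mark if we just set one.
			for {
				m := atomic.LoadInt64(&maxSeen)
				if n <= m || atomic.CompareAndSwapInt64(&maxSeen, m, n) {
					break
				}
			}
			time.Sleep(10 * time.Millisecond) // stand-in for the API call
			atomic.AddInt64(&inFlight, -1)
			<-sem // release
		}()
	}
	wg.Wait()
	return maxSeen
}

func main() {
	fmt.Println("max concurrent workers:", maxParallel(10, 3))
}
```

However the scheduler interleaves the 10 goroutines, the reported maximum never exceeds the channel's capacity.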
Cobra is the de facto standard for building CLI tools in Go — it powers kubectl, docker, hugo, and hundreds of other tools. Here is a complete CLI that accepts URLs as arguments and prints a formatted table of detected technologies.
package main
import (
"encoding/json"
"fmt"
"net/http"
"net/url"
"os"
"text/tabwriter"
"time"
"github.com/spf13/cobra"
)
type TechDetection struct {
Name string `json:"name"`
Category string `json:"category"`
Confidence float64 `json:"confidence"`
Version string `json:"version,omitempty"`
}
type DetectResponse struct {
URL string `json:"url"`
Technologies []TechDetection `json:"technologies"`
}
const apiBase = "https://us-central1-todd-agent-prod.cloudfunctions.net" +
"/stackpeekApi/api/v1/detect"
func detect(client *http.Client, targetURL string) (*DetectResponse, error) {
apiURL := fmt.Sprintf("%s?url=%s", apiBase, url.QueryEscape(targetURL))
resp, err := client.Get(apiURL)
if err != nil {
return nil, err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("API returned %d", resp.StatusCode)
}
var result DetectResponse
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return nil, err
}
return &result, nil
}
func main() {
client := &http.Client{Timeout: 30 * time.Second}
rootCmd := &cobra.Command{
Use: "stackpeek [urls...]",
Short: "Detect website tech stacks via StackPeek API",
Long: "Scan one or more URLs and display detected technologies,\ncategories, versions, and confidence scores.",
Args: cobra.MinimumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
defer w.Flush()
for _, targetURL := range args {
result, err := detect(client, targetURL)
if err != nil {
fmt.Fprintf(os.Stderr,
"Error scanning %s: %v\n", targetURL, err)
continue
}
fmt.Fprintf(w, "\n--- %s ---\n", result.URL)
fmt.Fprintf(w, "TECHNOLOGY\tCATEGORY\tVERSION\tCONFIDENCE\n")
fmt.Fprintf(w, "----------\t--------\t-------\t----------\n")
for _, tech := range result.Technologies {
ver := tech.Version
if ver == "" {
ver = "-"
}
fmt.Fprintf(w, "%s\t%s\t%s\t%.0f%%\n",
tech.Name, tech.Category, ver, tech.Confidence)
}
}
return nil
},
}
if err := rootCmd.Execute(); err != nil {
os.Exit(1)
}
}
# Initialize module and install Cobra
go mod init stackpeek-cli
go get github.com/spf13/cobra
# Run directly
go run main.go https://stripe.com https://linear.app
# Or compile to a static binary
go build -o stackpeek .
./stackpeek https://shopify.com
The output looks like this:
--- https://stripe.com ---
TECHNOLOGY CATEGORY VERSION CONFIDENCE
---------- -------- ------- ----------
React frontend 18.2.0 95%
Next.js framework 14.1.0 92%
Stripe payments - 99%
Cloudflare cdn - 90%
Google Tag analytics - 85%
Because Go compiles to a single static binary, you can distribute this tool across your team without requiring them to install a runtime, package manager, or dependencies. Just copy the binary.
If you are building a web application in Go, you can add middleware that automatically detects the tech stack of referring websites. This is useful for analytics dashboards and competitive intelligence — every time a visitor arrives from another site, you learn what that site runs.
package middleware
import (
"encoding/json"
"fmt"
"log"
"net/http"
"net/url"
"strings"
"sync"
"time"
"github.com/gin-gonic/gin"
)
type TechDetection struct {
Name string `json:"name"`
Category string `json:"category"`
Confidence float64 `json:"confidence"`
}
type DetectResponse struct {
URL string `json:"url"`
Technologies []TechDetection `json:"technologies"`
}
const detectEndpoint = "https://us-central1-todd-agent-prod.cloudfunctions.net" +
"/stackpeekApi/api/v1/detect"
// cache stores detection results with a TTL to avoid
// redundant API calls for the same referring domain.
var (
cache = make(map[string]*cacheEntry)
cacheMu sync.RWMutex
)
type cacheEntry struct {
response *DetectResponse
expiresAt time.Time
}
func getCached(domain string) (*DetectResponse, bool) {
cacheMu.RLock()
defer cacheMu.RUnlock()
entry, ok := cache[domain]
if !ok || time.Now().After(entry.expiresAt) {
return nil, false
}
return entry.response, true
}
func setCache(domain string, resp *DetectResponse) {
cacheMu.Lock()
defer cacheMu.Unlock()
cache[domain] = &cacheEntry{
response: resp,
expiresAt: time.Now().Add(24 * time.Hour),
}
}
// TechStackDetector returns a Gin middleware that detects
// the tech stack of the referring website and attaches
// the result to the request context.
func TechStackDetector() gin.HandlerFunc {
client := &http.Client{Timeout: 10 * time.Second}
return func(c *gin.Context) {
referer := c.GetHeader("Referer")
if referer == "" {
c.Next()
return
}
parsed, err := url.Parse(referer)
if err != nil || parsed.Host == "" {
c.Next()
return
}
domain := strings.TrimPrefix(parsed.Host, "www.")
// Check cache first
if cached, ok := getCached(domain); ok {
c.Set("referer_tech_stack", cached)
c.Next()
return
}
// Fire async detection — do not block the request
go func(d string) {
apiURL := fmt.Sprintf("%s?url=%s",
detectEndpoint, url.QueryEscape("https://"+d))
resp, err := client.Get(apiURL)
if err != nil {
log.Printf("StackPeek error for %s: %v", d, err)
return
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return
}
var result DetectResponse
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return
}
setCache(d, &result)
log.Printf("Detected %d techs on referrer %s",
len(result.Technologies), d)
}(domain)
c.Next()
}
}
package main
import (
"net/http"
"github.com/gin-gonic/gin"
"yourproject/middleware"
)
func main() {
r := gin.Default()
r.Use(middleware.TechStackDetector())
r.GET("/", func(c *gin.Context) {
// Access the cached tech stack if available
if stack, exists := c.Get("referer_tech_stack"); exists {
c.JSON(http.StatusOK, gin.H{
"message": "welcome",
"referer_stack": stack,
})
return
}
c.JSON(http.StatusOK, gin.H{"message": "welcome"})
})
r.Run(":8080")
}
The same pattern works with Echo. The middleware signature is slightly different, but the logic is identical:
package middleware
import (
"encoding/json"
"fmt"
"log"
"net/http"
"net/url"
"strings"
"time"
"github.com/labstack/echo/v4"
)
// TechStackDetectorEcho returns an Echo middleware with
// the same caching and async detection behavior.
func TechStackDetectorEcho() echo.MiddlewareFunc {
client := &http.Client{Timeout: 10 * time.Second}
return func(next echo.HandlerFunc) echo.HandlerFunc {
return func(c echo.Context) error {
referer := c.Request().Header.Get("Referer")
if referer == "" {
return next(c)
}
parsed, err := url.Parse(referer)
if err != nil || parsed.Host == "" {
return next(c)
}
domain := strings.TrimPrefix(parsed.Host, "www.")
if cached, ok := getCached(domain); ok {
c.Set("referer_tech_stack", cached)
return next(c)
}
go func(d string) {
apiURL := fmt.Sprintf("%s?url=%s",
detectEndpoint, url.QueryEscape("https://"+d))
resp, err := client.Get(apiURL)
if err != nil {
log.Printf("StackPeek error for %s: %v", d, err)
return
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return
}
var result DetectResponse
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return
}
setCache(d, &result)
}(domain)
return next(c)
}
}
}
Both middleware implementations fire the API call in a background goroutine so it never adds latency to the request. Results are cached for 24 hours. On the first request from a new referrer, the cache miss means no tech stack data is available for that specific request — but every subsequent request from the same referring domain gets the cached result instantly.
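One gap in the middleware sketch: until the first detection for a new referrer lands in the cache, every request from that referrer launches its own background goroutine, so the same domain can be scanned several times concurrently. A minimal in-flight guard dedupes those calls. The production-grade version of this idea is golang.org/x/sync/singleflight; the stdlib-only sketch below uses two hypothetical helpers, tryAcquire and release, that are not part of the middleware above:

```go
package main

import (
	"fmt"
	"sync"
)

// inflight tracks domains with a detection already running so
// that concurrent cache misses trigger only one API call.
var (
	inflight   = make(map[string]bool)
	inflightMu sync.Mutex
)

// tryAcquire reports whether the caller should start a scan for
// the domain; it returns false if one is already in progress.
func tryAcquire(domain string) bool {
	inflightMu.Lock()
	defer inflightMu.Unlock()
	if inflight[domain] {
		return false
	}
	inflight[domain] = true
	return true
}

// release marks the scan finished, allowing a future retry if
// the result never made it into the cache (e.g. the call failed).
func release(domain string) {
	inflightMu.Lock()
	defer inflightMu.Unlock()
	delete(inflight, domain)
}

func main() {
	fmt.Println(tryAcquire("stripe.com")) // true
	fmt.Println(tryAcquire("stripe.com")) // false: already running
	release("stripe.com")
	fmt.Println(tryAcquire("stripe.com")) // true again
}
```

In the middleware, you would wrap the `go func(d string) { ... }(domain)` launch in an `if tryAcquire(domain)` check and `defer release(d)` inside the goroutine.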
If you have looked at Wappalyzer's API pricing for a Go project, here is how the two compare:
| Feature | StackPeek | Wappalyzer |
|---|---|---|
| Free tier | 100 scans/day | 50 scans/month |
| Paid plan (entry) | $9/month | $250/month |
| Scans on entry plan | 5,000/month | 50,000/month |
| Cost per scan | $0.0018 | $0.005 |
| API key required | No (free tier) | Yes |
| Technologies detected | 120+ | 1,400+ |
| Response format | JSON | JSON |
For most Go projects — lead enrichment services, security scanners, competitive analysis tools — you need to detect the major frameworks, CMS platforms, and analytics tools. StackPeek covers these at a fraction of the cost. If you need to identify niche JavaScript libraries or obscure WordPress plugins, Wappalyzer's broader fingerprint database may justify the premium. But for 90% of use cases, StackPeek at $9/month vs Wappalyzer at $250/month is the clear winner.
The fastest way to test StackPeek from Go requires zero dependencies — just the standard library:
1. go mod init stackpeek-demo
2. Copy the basic example into main.go
3. go run main.go

From there, add the production client with retries for reliability, goroutines for concurrent scanning, or Cobra for a distributable CLI. The API is the same regardless of how you call it — a single GET request with a URL query parameter.
For production deployments, add caching (an in-memory map with a mutex, or a Redis client if you need shared cache), error handling with retries and exponential backoff, and concurrency limits with a semaphore channel. Every pattern in this guide is copy-paste ready and compiles with go build.