Go Examples
Use Go for backend services, scheduled workers, and data pipelines with context timeouts and bounded concurrency.
net/http request with context timeout
Go request
Use environment variables for secrets and keep Crawlora API keys server-side.
package main

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

const baseURL = "https://api.crawlora.net/api/v1"

func main() {
	apiKey := os.Getenv("CRAWLORA_API_KEY")
	if apiKey == "" {
		panic("missing CRAWLORA_API_KEY")
	}

	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	body := []byte(`{"country":"us","keyword":"chatgpt","language":"en","limit":10,"page":1}`)
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, baseURL+"/google/search", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("x-api-key", apiKey)
	req.Header.Set("Content-Type", "application/json")

	// Client timeout is a backstop slightly above the per-request context deadline.
	client := &http.Client{Timeout: 65 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	respBody, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		panic(fmt.Sprintf("Crawlora request failed: status=%d body=%s", resp.StatusCode, string(respBody)))
	}
	fmt.Println(string(respBody))
}
Typed structs
Structs
type SearchRequest struct {
	Country  string `json:"country,omitempty"`
	Keyword  string `json:"keyword"`
	Language string `json:"language,omitempty"`
	Limit    int    `json:"limit,omitempty"`
	Page     int    `json:"page,omitempty"`
}

type SearchResponse struct {
	Code int    `json:"code"`
	Msg  string `json:"msg"`
	Data struct {
		Result []struct {
			Title    string `json:"title,omitempty"`
			Link     string `json:"link,omitempty"`
			Snippet  string `json:"snippet,omitempty"`
			Position int    `json:"position,omitempty"`
		} `json:"result,omitempty"`
	} `json:"data,omitempty"`
}
Simple worker pattern
Use bounded worker pools. Avoid unbounded goroutines when calling rate-limited APIs.
Bounded workers
jobs := make(chan SearchRequest)
var wg sync.WaitGroup

for worker := 0; worker < 4; worker++ {
	wg.Add(1)
	go func() {
		defer wg.Done()
		for job := range jobs {
			ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
			_ = callCrawlora(ctx, job)
			cancel()
			time.Sleep(250 * time.Millisecond) // simple pacing between calls
		}
	}()
}

for _, job := range requests {
	jobs <- job
}
close(jobs)
wg.Wait()
Backoff note
- Retry 429 and temporary 5xx responses with capped backoff
- Keep http.Client timeouts
- Use context deadlines per request
- Limit worker count based on plan throughput
- Log status code and response body for failed attempts
Responsible public web data workflows
Crawlora is designed for responsible structured public web data workflows. Customers are responsible for using Crawlora in compliance with applicable laws, third-party rights, target-platform rules, and Crawlora terms.
Read Crawlora terms
Use Go with endpoint docs
Confirm required fields and response schemas before turning examples into production workers.