Build a SERP Monitoring Workflow

Track keyword visibility, store organic results, compare rank movement, and alert when important pages move.

What you will build

  • Define keyword list
  • Select country and language
  • Schedule requests
  • Store organic results
  • Compare rank changes
  • Alert on movement

APIs used

Only endpoints that exist in the generated endpoint metadata are linked here. Missing optional endpoints are intentionally omitted.

POST · Google · apiKey · 3 credits/request

Google search API

/google/search

Returns normalized Google web search results. Results are fetched through Rayobrowse-rendered Chrome with availability fanout and stale-cache fallback when available. The endpoint returns 502 when Google serves a challenge page or unusable HTML. Rate limit is enforced at 1 request per second, and if the limit is exceeded a 429 status code is returned with rate limit headers.

GET · Bing · apiKey · 3 credits/request

Search Bing web results

/bing/search

Returns normalized Bing web search results for a query string, including organic results, optional context panel data, related queries, people-also-ask questions, news modules, video modules, and page-based pagination. Empty optional blocks are omitted from the JSON response. Locale defaults to country=us and lang=en-us. Results are fetched with the repo's Chrome-impersonated request client and return 502 when Bing serves a challenge page or unusable HTML.
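As a sketch of the page-based pagination described above, the snippet below walks pages until an empty page comes back. The base URL pattern follows the Google example later in this recipe, and the parameter names (q, country, lang, page) are assumptions; confirm both against the Bing endpoint reference.

```python
import json
import os
import urllib.parse
import urllib.request

# Base URL pattern follows the Google curl example in this recipe; the
# parameter names below are assumptions -- check the Bing endpoint docs.
API = "https://api.crawlora.net/api/v1/bing/search"

def bing_params(query, page, country="us", lang="en-us"):
    """Build the query string for one page of Bing results."""
    return {"q": query, "country": country, "lang": lang, "page": page}

def fetch_all_pages(query, max_pages=3):
    """Walk page-based pagination until a page comes back empty."""
    headers = {"x-api-key": os.environ["CRAWLORA_API_KEY"]}
    results = []
    for page in range(1, max_pages + 1):
        url = API + "?" + urllib.parse.urlencode(bing_params(query, page))
        req = urllib.request.Request(url, headers=headers)
        with urllib.request.urlopen(req, timeout=30) as resp:
            data = json.load(resp)
        organic = data.get("data", {}).get("result", [])
        if not organic:  # empty optional blocks are omitted from the JSON
            break
        results.extend(organic)
    return results
```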

GET · Brave · apiKey · 3 credits/request

Search Brave

/brave/search

Returns normalized web search results from Brave Search for a query string, along with offset-based pagination, related queries, discussions, videos, and the right-side knowledge card when Brave includes one. Use time_range for preset ranges or date_from/date_to for a custom YYYY-MM-DD range. Locale defaults to country=us and lang=en-us.
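The time_range versus date_from/date_to choice can be captured in a small helper. A minimal sketch, assuming the preset and custom ranges are mutually exclusive (the endpoint docs are the authority on how the two interact):

```python
from datetime import date

def brave_range_params(time_range=None, date_from=None, date_to=None):
    """Return the date-filter query params for /brave/search.

    Preset ranges win over custom ranges here by assumption; a custom
    range must use the documented YYYY-MM-DD format.
    """
    if time_range is not None:
        return {"time_range": time_range}
    if date_from and date_to:
        # raises ValueError if either bound is not YYYY-MM-DD
        date.fromisoformat(date_from)
        date.fromisoformat(date_to)
        return {"date_from": date_from, "date_to": date_to}
    return {}
```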

Data model

Field          Notes
keyword        Workflow field; map to the exact endpoint response field (see endpoint docs).
country        Workflow field; map to the exact endpoint response field (see endpoint docs).
language       Workflow field; map to the exact endpoint response field (see endpoint docs).
result_url     Workflow field; map to the exact endpoint response field (see endpoint docs).
title          Workflow field; map to the exact endpoint response field (see endpoint docs).
snippet        Workflow field; map to the exact endpoint response field (see endpoint docs).
rank           Workflow field; map to the exact endpoint response field (see endpoint docs).
checked_at     Workflow field; map to the exact endpoint response field (see endpoint docs).
request_id     Workflow field; map to the exact endpoint response field (see endpoint docs).
credits_used   Include when response or usage data makes it available.
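One way to carry these fields through a pipeline is a typed record. A minimal Python sketch; the field names mirror the table above, while the types are assumptions:

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class RankCheck:
    """One keyword/result/check observation (types are assumptions)."""
    keyword: str
    country: str
    language: str
    result_url: str
    title: str
    snippet: str
    rank: int
    checked_at: str                      # ISO-8601 timestamp
    request_id: Optional[str] = None     # when the response provides one
    credits_used: Optional[int] = None   # when usage data is available
```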

Step-by-step workflow

  1. Define keyword list
  2. Select country and language
  3. Schedule requests
  4. Store organic results
  5. Compare rank changes
  6. Alert on movement
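Steps 5 and 6 reduce to comparing two rank snapshots per keyword. A minimal sketch; the alert threshold is a tunable assumption, not a Crawlora setting:

```python
def rank_delta(previous, current):
    """Compare two {url: rank} snapshots for one keyword.

    A positive delta means the page moved up; None means the URL is
    new in this check.
    """
    deltas = {}
    for url, rank in current.items():
        prev = previous.get(url)
        deltas[url] = None if prev is None else prev - rank
    return deltas

def should_alert(deltas, threshold=3):
    """Keep only movements of at least `threshold` positions."""
    return {u: d for u, d in deltas.items() if d is not None and abs(d) >= threshold}
```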

Example request

This example uses the real Google search API endpoint. Exact request fields come from the endpoint metadata.

Recipe request

Use environment variables for secrets and keep Crawlora API keys server-side.

curl -X POST "https://api.crawlora.net/api/v1/google/search" \
  -H "x-api-key: $CRAWLORA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"country":"us","keyword":"chatgpt","language":"en","limit":10,"page":1}'

Example response

Use endpoint detail pages for exact response schemas. This recipe does not invent response fields.

Generated example response

{
  "code": 200,
  "msg": "OK",
  "data": {
    "result": [
      {
        "position": 1,
        "title": "ChatGPT",
        "website_name": "ChatGPT",
        "icon": "",
        "link": "https://chatgpt.com/",
        "snippet": "ChatGPT helps you get answers and create."
      }
    ]
  }
}

Storage/output suggestion

Store one row per keyword/result/check in a relational table or analytics warehouse so rank changes can be compared over time.
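A minimal SQLite sketch of that suggestion, using a window function to pair each check with the previous one. Column names mirror the data model above; a real deployment would add its own keys, constraints, and retention policy:

```python
import sqlite3

# One row per keyword/result/check; schema mirrors the data model table.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE rank_check (
    keyword      TEXT NOT NULL,
    country      TEXT NOT NULL,
    language     TEXT NOT NULL,
    result_url   TEXT NOT NULL,
    title        TEXT,
    snippet      TEXT,
    "rank"       INTEGER NOT NULL,
    checked_at   TEXT NOT NULL,
    request_id   TEXT,
    credits_used INTEGER
)""")
conn.execute(
    "CREATE INDEX idx_kw_url_time ON rank_check (keyword, result_url, checked_at)"
)

# Pair each check with the previous rank via LAG (SQLite >= 3.25).
RANK_MOVEMENT = """
SELECT keyword, result_url, "rank",
       LAG("rank") OVER (
           PARTITION BY keyword, result_url ORDER BY checked_at
       ) AS prev_rank
FROM rank_check
ORDER BY checked_at
"""
```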

Error handling

  • Validate required inputs before calling Crawlora
  • Retry 429 and temporary 5xx responses with capped backoff
  • Log endpoint, input, timestamp, and request ID when present
  • Treat empty results as a state your application can handle
  • Open /docs/errors for production retry guidance
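The retry bullet above can be sketched with capped exponential backoff. The set of retryable statuses here is an assumption based on the 429 and 502 behavior described earlier; /docs/errors is the authority for production:

```python
import time
import urllib.error
import urllib.request

# Assumed retryable statuses: rate limits and temporary upstream failures.
RETRYABLE = {429, 500, 502, 503, 504}

def backoff_delay(attempt, base=1.0, cap=30.0):
    """Exponential backoff capped at `cap` seconds: 1, 2, 4, ... up to 30."""
    return min(cap, base * 2 ** attempt)

def fetch_with_retries(url, headers, max_attempts=5):
    """GET `url`, retrying 429s and temporary 5xx responses with capped backoff."""
    for attempt in range(max_attempts):
        try:
            req = urllib.request.Request(url, headers=headers)
            with urllib.request.urlopen(req, timeout=30) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code not in RETRYABLE or attempt == max_attempts - 1:
                raise
            time.sleep(backoff_delay(attempt))
```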

Rate-limit and credit planning

Estimate usage by multiplying requests by endpoint credit cost. The table below only shows real credit costs available from the billing constants.

Endpoint                   Credit cost          Docs
Google search API          3 credits/request    /docs/Google/google-search
Search Bing web results    3 credits/request    /docs/Bing/bing-search
Search Brave               3 credits/request    /docs/Brave/brave-search
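The estimate described above is simple multiplication, one request per keyword per check. For example, 200 keywords checked twice a day at 3 credits/request:

```python
def monthly_credits(keywords, checks_per_day, credits_per_request=3, days=30):
    """Estimate monthly credit spend: one request per keyword per check."""
    return keywords * checks_per_day * credits_per_request * days

# 200 keywords * 2 checks/day * 3 credits/request * 30 days = 36000 credits/month
```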

Production checklist

  • Keep API keys server-side
  • Use request timeouts
  • Back off on rate limits
  • Store raw responses or source IDs for auditability
  • Monitor credits and failures
  • Avoid unnecessary refreshes
  • Review responsible-use requirements

Responsible public web data workflows

Crawlora is designed for responsible structured public web data workflows. Customers are responsible for using Crawlora in compliance with applicable laws, third-party rights, target-platform rules, and Crawlora terms.

Read Crawlora terms

Related APIs and pages

Build this workflow with real endpoint docs

Use this recipe for the workflow shape; rely on the endpoint reference pages for exact paths, request schemas, response schemas, and credit costs.