Getting Started with Crawlora APIs

Make your first authenticated request, inspect the normalized JSON, test the same endpoint in Playground, and prepare your integration for production.

What is Crawlora?

Crawlora provides structured public web data APIs for supported platforms. Instead of maintaining proxies, browsers, parsers, retry logic, and scaling infrastructure, you call Crawlora endpoints and receive normalized JSON.

Before you start

  • Crawlora account
  • API key
  • Endpoint from the API catalog
  • Terminal or HTTP client
  • Optional Playground access

Step 1: Get your API key

Create or copy an API key from your Crawlora dashboard. Keep it server-side and store it in an environment variable.

export CRAWLORA_API_KEY="crl_..."
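In application code, read the key back from that environment variable rather than hard-coding it. A minimal sketch, assuming the `CRAWLORA_API_KEY` variable shown above (the `load_api_key` helper name is our own):

```python
import os

def load_api_key(env_var: str = "CRAWLORA_API_KEY") -> str:
    """Return the Crawlora API key from the environment, failing fast if missing."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; export it before running")
    return key
```

Failing fast at startup keeps the key out of source control and makes a missing configuration obvious before any request is sent.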

Step 2: Choose an endpoint

Start from the API catalog or a supported platform page. The catalog is generated from the same endpoint metadata used by Playground.

Step 3: Make your first request

This example uses the Google search API endpoint at https://api.crawlora.net/api/v1/google/search.

First request

Use environment variables for secrets and keep Crawlora API keys server-side.

curl -X POST "https://api.crawlora.net/api/v1/google/search" \
  -H "x-api-key: $CRAWLORA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"country":"us","keyword":"chatgpt","language":"en","limit":10,"page":1}'

Step 4: Read the response

Crawlora responses are normalized JSON. Exact fields vary by endpoint, so use each endpoint reference page as the schema source of truth.

  • Inspect the top-level status and message fields when present
  • Read the endpoint-specific data object or array
  • Keep request IDs when the API returns them
  • Track credits used when usage context is available
  • Check documented empty-state and pagination behavior per endpoint
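The checklist above can be sketched as a small parsing helper. The sample response below is hypothetical — field names such as `request_id` and `credits_used` are illustrative placeholders, and each endpoint's reference page remains the schema source of truth:

```python
# Hypothetical normalized response; real field names come from the
# endpoint reference page, and not every field is present on every endpoint.
sample = {
    "status": "ok",
    "message": "success",
    "request_id": "req_123",   # illustrative placeholder
    "credits_used": 1,         # illustrative placeholder
    "data": [{"title": "Example", "url": "https://example.com"}],
}

def summarize(response: dict) -> dict:
    """Read the generic envelope fields defensively, then the endpoint data."""
    return {
        "status": response.get("status"),
        "request_id": response.get("request_id"),
        "credits_used": response.get("credits_used"),
        # Treat a missing or null data field as the documented empty state.
        "rows": len(response.get("data") or []),
    }
```

Using `.get()` for every envelope field keeps the parser tolerant of endpoints that omit optional fields.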

Step 5: Test in Playground

Use Playground to run a request with generated inputs, inspect response modes, and confirm endpoint behavior before adding it to your application.

Open Playground

Step 6: Move to production

  • Store API keys in environment variables
  • Keep keys server-side
  • Add request timeouts
  • Handle 429 and temporary 5xx responses
  • Log request IDs when present
  • Monitor credits and daily caps
  • Avoid unnecessary data collection
  • Review Crawlora terms
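The timeout and retry items on this checklist can be sketched as follows. This is one possible client-side policy, not a Crawlora-prescribed one; the retryable status set, attempt cap, and backoff constants are assumptions you should tune:

```python
import random
import time
import urllib.error
import urllib.request

# Assumed retryable statuses: rate limiting plus temporary server errors.
RETRYABLE = {429, 500, 502, 503, 504}

def should_retry(status: int, attempt: int, max_attempts: int = 4) -> bool:
    """Retry 429 and temporary 5xx responses until the attempt cap is reached."""
    return status in RETRYABLE and attempt < max_attempts - 1

def fetch_with_retries(request, max_attempts: int = 4, timeout: float = 30.0) -> bytes:
    """Send a request with a timeout, backing off on retryable failures."""
    for attempt in range(max_attempts):
        try:
            with urllib.request.urlopen(request, timeout=timeout) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if not should_retry(err.code, attempt, max_attempts):
                raise
        except (urllib.error.URLError, TimeoutError):
            if attempt == max_attempts - 1:
                raise
        # Exponential backoff with jitter, capped at roughly 8 seconds.
        time.sleep(min(2 ** attempt, 8) * (0.5 + random.random() / 2))
    raise RuntimeError("unreachable: loop either returns or raises")
```

Jittered backoff spreads retries from concurrent workers so a 429 burst does not turn into a synchronized retry storm.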

Responsible public web data workflows

Crawlora is designed for responsible structured public web data workflows. Customers are responsible for using Crawlora in compliance with applicable laws, third-party rights, target-platform rules, and Crawlora terms.

Read Crawlora terms

FAQ

What can I build with Crawlora?

You can build structured public web data workflows for supported platforms such as search, maps, social video, app stores, marketplaces, reviews, finance, and AI-agent research.

Do I need my own proxies?

No. Crawlora handles managed proxy routing for supported endpoints where that infrastructure is needed.

Do I need to run browsers?

No. Supported dynamic workflows can use Crawlora browser-backed rendering and managed browser capacity.

How do credits work?

Credits measure API usage. Different endpoints may use different credit amounts depending on complexity. See rate limits and pricing for current values.

Can I test before paying?

Use the current pricing page and dashboard plan details to confirm available trial or free usage.

Where are response schemas documented?

Endpoint detail pages show request and response schema summaries generated from the endpoint metadata.

How should I handle errors?

Validate inputs to avoid 400-class errors, back off on 429, retry temporary 5xx responses and timeouts with a capped retry count, and inspect response bodies for request context.

Start with a real endpoint

Browse the catalog, open a reference page, and run the same request in Playground.