Google API endpoint

Search Google Jobs

Returns normalized Google Jobs results parsed from public Google web responses.

Method: POST
Path: /google/jobs
Auth: x-api-key header (apiKey)
Credit cost: 3 credits/request
Category: Google
Request schema: google.JobsOption
Response schema: google.JobsResponse

Request schema

| Name | In | Type | Required | Description |
| --- | --- | --- | --- | --- |
| option | body | object | Yes | Google Jobs search payload |
| x-api-key | header | string | Yes | API key required |

Authentication

Send your scraping API key in the x-api-key header. Use the console API Keys page to rotate or select the active key.
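A minimal sketch of loading the key from an environment variable and building the required headers (Python, standard library only; the variable name `CRAWLORA_API_KEY` matches the curl example further down, the helper itself is illustrative):

```python
import os

def auth_headers() -> dict:
    """Build the headers Crawlora expects; the key never appears in source."""
    key = os.environ.get("CRAWLORA_API_KEY")
    if not key:
        raise RuntimeError("CRAWLORA_API_KEY is not set")
    return {"x-api-key": key, "Content-Type": "application/json"}
```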

Billing

Endpoint usage is metered in credits. The plan prices, included credits, limits, and overage rates below match the active backend billing configuration.

Credit cost: 3 credits/request
Charged responses: successful 2xx responses only

| Plan | Price | Included credits | Daily cap | Rate limit | Overage |
| --- | --- | --- | --- | --- | --- |
| Free | $0/mo | 2,000 | 500 daily credits | 5/min | No overage |
| Starter | $9/mo | 20,000 | 5,000 daily credits | 15/min | $0.75/1,000 overage credits when enabled |
| Growth | $29/mo | 100,000 | 25,000 daily credits | 45/min | $0.45/1,000 overage credits when enabled |
| Pro | $79/mo | 400,000 | No daily cap | 120/min | $0.30/1,000 overage credits |
| Business | $199/mo | 1,200,000 | No daily cap | 300/min | $0.20/1,000 overage credits |
| Enterprise | $499/mo | 5,000,000 | No daily cap | 1,000/min | $0.12/1,000 overage credits |
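At 3 credits per request, monthly spend on a given plan can be estimated directly from the table. A rough sketch (Python; the plan figures come from the table above, the helper itself is illustrative and ignores daily caps):

```python
from typing import Optional

COST_PER_REQUEST = 3  # credits, per the table above

def monthly_cost_usd(requests_per_month: int,
                     included_credits: int,
                     base_price: float,
                     overage_per_1k: Optional[float]) -> float:
    """Estimate a month's bill for this endpoint on one plan."""
    used = requests_per_month * COST_PER_REQUEST
    over = max(0, used - included_credits)
    if over and overage_per_1k is None:
        raise ValueError("plan has no overage; extra requests would be rejected")
    return base_price + (over / 1000) * (overage_per_1k or 0.0)

# e.g. 10,000 requests on Starter: 30,000 credits, 10,000 over the included 20,000
# -> $9 + 10 * $0.75 = $16.50
```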

Infrastructure behavior

This endpoint is executed through Crawlora's managed scraping infrastructure.

  • Proxy strategy: managed automatically where needed
  • Browser rendering: enabled for supported targets that require rendered HTML or JavaScript execution
  • Browser cluster: supported dynamic workloads can be routed through distributed browser instances
  • Retry behavior: automatic retry/fallback may be used depending on endpoint type
  • Challenge handling: challenged pages or unusable upstream HTML are detected and surfaced clearly when they cannot be used
  • Billing: credits are charged only for successful 2xx responses
  • Observability: responses include request context where available

Browser cluster behavior

Some targets require real browser execution because the data is loaded through JavaScript, dynamic rendering, or interaction-driven browser behavior.

For supported endpoints, Crawlora can route requests through a managed browser cluster. This allows Crawlora to execute JavaScript, load dynamic content, apply browser-level request behavior, and normalize the rendered result into JSON.

You do not need to operate your own Playwright, Puppeteer, Chrome, proxy, queue, or retry infrastructure.

Error behavior

Crawlora does not silently return bad data when the upstream page cannot be used.

| Status | Common failure case |
| --- | --- |
| 400 | Invalid input or missing required parameter |
| 429 | Plan or endpoint rate limit exceeded |
| 500 | Internal execution error |
| 502 | Upstream platform failed, returned unusable HTML, or served a challenge page that could not be resolved |

When possible, Crawlora returns structured error context so your integration can retry, back off, or inspect the request.
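One way to act on that guidance is to retry only 429 and 502 with capped, jittered exponential backoff and fail fast on 400. A sketch of the retry decision (Python; the attempt cap and delay values are arbitrary choices, not documented limits):

```python
import random

RETRYABLE_STATUSES = {429, 502}  # per the table above; 400 should not be retried

def should_retry(status: int, attempt: int, max_attempts: int = 5) -> bool:
    """Retry transient rate-limit/upstream failures, fail fast otherwise."""
    return status in RETRYABLE_STATUSES and attempt < max_attempts

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Full-jitter exponential backoff: uniform in [0, min(cap, base * 2^attempt)]."""
    return random.uniform(0.0, min(cap, base * 2 ** attempt))
```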

Failure responses

| Status | Description | Schema |
| --- | --- | --- |
| 400 | Bad Request | #/definitions/app.Response |
| 429 | Too Many Requests | #/definitions/app.Response |
| 502 | Bad Gateway | #/definitions/app.Response |

Request body example

{
  "location": "San Francisco, CA",
  "page": 1,
  "query": "software engineer"
}

Example response

{
  "code": 200,
  "msg": "OK",
  "data": {
    "query": "software engineer",
    "location": "San Francisco, CA",
    "page": 1,
    "results": [
      {
        "title": "Software Engineer",
        "snippet": "Software Engineer · Example"
      }
    ]
  }
}
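Given that envelope (`code`, `msg`, `data.results`), extraction is straightforward. A sketch using the sample response above (Python, standard library only):

```python
import json

SAMPLE = """
{
  "code": 200,
  "msg": "OK",
  "data": {
    "query": "software engineer",
    "location": "San Francisco, CA",
    "page": 1,
    "results": [
      {"title": "Software Engineer", "snippet": "Software Engineer · Example"}
    ]
  }
}
"""

def extract_results(raw: str) -> list:
    """Return data.results, raising if the envelope reports a non-200 code."""
    body = json.loads(raw)
    if body.get("code") != 200:
        raise RuntimeError(f"Crawlora reported an error: {body.get('msg')}")
    return body.get("data", {}).get("results", [])
```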

Request schema

#/definitions/google.JobsOption

| Field | Type | Required | Example |
| --- | --- | --- | --- |
| location | string | No | San Francisco, CA |
| page | integer | No | 1 |
| query | string | Yes | software engineer |
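Since `query` is the only required field, a small builder can validate the payload locally before spending credits. A sketch (Python; any rule beyond `query` being required is an assumption, not a documented constraint):

```python
from typing import Optional

def build_jobs_payload(query: str,
                       location: Optional[str] = None,
                       page: Optional[int] = None) -> dict:
    """Assemble a google.JobsOption payload, omitting unset optional fields."""
    if not query or not query.strip():
        raise ValueError("query is required")
    payload = {"query": query}
    if location is not None:
        payload["location"] = location
    if page is not None:
        if page < 1:
            raise ValueError("page must be >= 1")  # assumed; examples are 1-based
        payload["page"] = page
    return payload
```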

Response schema

#/definitions/google.JobsResponse

| Field | Type | Required | Example |
| --- | --- | --- | --- |
| location | string | No | San Francisco, CA |
| page | integer | No | 1 |
| query | string | No | software engineer |
| results | array | No | |
| results[].company | string | No | |
| results[].employment | string | No | |
| results[].location | string | No | |
| results[].posted_at | string | No | |
| results[].snippet | string | No | |
| results[].source | string | No | |
| results[].title | string | No | |
| results[].url | string | No | |
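For typed integrations, the schema above maps naturally onto `TypedDict`s with every field optional. A sketch (Python; the class names are arbitrary — only the field names and types come from the schema):

```python
from typing import List, TypedDict

class JobResult(TypedDict, total=False):
    company: str
    employment: str
    location: str
    posted_at: str
    snippet: str
    source: str
    title: str
    url: str

class JobsData(TypedDict, total=False):
    location: str
    page: int
    query: str
    results: List[JobResult]
```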

Code examples

Use environment variables for secrets and keep Crawlora API keys server-side.

curl -X POST "https://api.crawlora.net/api/v1/google/jobs" \
  -H "x-api-key: $CRAWLORA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"location":"San Francisco, CA","page":1,"query":"software engineer"}'
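The same call in Python without third-party dependencies; `make_request` only builds the request object, so the network call stays explicit (a sketch, not an official SDK):

```python
import json
import os
import urllib.request

URL = "https://api.crawlora.net/api/v1/google/jobs"

def make_request(payload: dict) -> urllib.request.Request:
    """Build the POST request the curl example sends, without sending it."""
    return urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "x-api-key": os.environ.get("CRAWLORA_API_KEY", ""),
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To execute: urllib.request.urlopen(make_request({"query": "software engineer"}))
```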

Responsible public web data workflows

Crawlora is designed for responsible, structured public web data workflows. Customers are responsible for using Crawlora in compliance with applicable laws, third-party rights, target-platform rules, and Crawlora terms.

Read Crawlora terms
