
Google Finance API endpoint

Google Finance quote news

Returns normalized news articles for a quote.

GET /google/finance/news/{quote}
Auth: API key · Cost: 1 credit/request · Category: Google Finance · Response schema: finance.articlesResponseDoc

Overview

Returns normalized news articles for a quote.

Request parameters

Name | In | Type | Required | Description
quote | path | string | Yes | Quote identifier such as AAPL:NASDAQ
limit | query | integer | No | Article limit
x-api-key | header | string | Yes | API key required

Authentication

Send your scraping API key in the x-api-key header. Use the console API Keys page to rotate or select the active key.

Billing

Endpoint usage is metered in credits. The plan prices, included credits, limits, and overage rates below match the active backend billing configuration.

Credit cost: 1 credit/request
Charged responses: successful 2xx responses only
Plan | Price | Included credits | Daily cap | Rate limit | Overage
Free | $0/mo | 2,000 | 500 daily credits | 5/min | No overage
Starter | $9/mo | 20,000 | 5,000 daily credits | 15/min | $0.75/1,000 overage credits when enabled
Growth | $29/mo | 100,000 | 25,000 daily credits | 45/min | $0.45/1,000 overage credits when enabled
Pro | $79/mo | 400,000 | No daily cap | 120/min | $0.30/1,000 overage credits
Business | $199/mo | 1,200,000 | No daily cap | 300/min | $0.20/1,000 overage credits
Enterprise | $499/mo | 5,000,000 | No daily cap | 1,000/min | $0.12/1,000 overage credits
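The overage math from the table above can be sketched as follows. The plan numbers are copied from the table; the helper name is illustrative, not part of any Crawlora SDK:

```python
# Estimate a monthly bill: base price plus overage for credits used
# beyond the plan's included amount (plan numbers from the table above).
PLANS = {
    # plan: (monthly price USD, included credits, overage USD per 1,000 credits)
    "Starter": (9.00, 20_000, 0.75),
    "Growth": (29.00, 100_000, 0.45),
    "Pro": (79.00, 400_000, 0.30),
}

def estimate_monthly_cost(plan: str, credits_used: int) -> float:
    """Base price plus overage for credits beyond the included amount."""
    price, included, overage_per_1k = PLANS[plan]
    overage_credits = max(0, credits_used - included)
    return round(price + overage_credits / 1_000 * overage_per_1k, 2)

# 130,000 credits on Growth: 30,000 overage credits at $0.45/1,000
print(estimate_monthly_cost("Growth", 130_000))
```

Note that Free and Starter charge overage only when it is explicitly enabled; otherwise requests beyond the included credits fail rather than bill.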

Infrastructure behavior

This endpoint is executed through Crawlora's managed scraping infrastructure.

  • Proxy strategy: managed automatically where needed
  • Browser rendering: enabled for supported targets that require rendered HTML or JavaScript execution
  • Browser cluster: supported dynamic workloads can be routed through distributed browser instances
  • Retry behavior: automatic retry/fallback may be used depending on endpoint type
  • Challenge handling: challenged pages or unusable upstream HTML are detected and surfaced clearly when they cannot be used
  • Billing: credits are charged only for successful 2xx responses
  • Observability: responses include request context where available

Browser cluster behavior

Some targets require real browser execution because their data is loaded through JavaScript, rendered dynamically, or only appears after browser-like interaction.

For supported endpoints, Crawlora can route requests through a managed browser cluster. This allows Crawlora to execute JavaScript, load dynamic content, apply browser-level request behavior, and normalize the rendered result into JSON.

You do not need to operate your own Playwright, Puppeteer, Chrome, proxy, queue, or retry infrastructure.


Error behavior

Crawlora does not silently return bad data when the upstream page cannot be used.

Status | Common failure case
400 | Invalid input or missing required parameter
429 | Plan or endpoint rate limit exceeded
500 | Internal execution error
502 | Upstream platform failed, returned unusable HTML, or served a challenge page that could not be resolved

When possible, Crawlora returns structured error context so your integration can retry, back off, or inspect the request.

Failure responses

Status | Description | Schema
400 | Bad Request | #/definitions/app.Response
404 | Not Found | #/definitions/app.Response
500 | Internal Server Error | #/definitions/app.Response
502 | Bad Gateway | #/definitions/app.Response
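A minimal client-side retry policy for these statuses might look like the sketch below. The function names, retryable-status set, and attempt limits are assumptions for illustration, not part of a Crawlora SDK; 400 is a caller error and should not be retried:

```python
import random

# Transient failures worth retrying with backoff; 400 is excluded
# because resending invalid input will fail the same way.
RETRYABLE_STATUSES = {429, 500, 502}

def should_retry(status: int, attempt: int, max_attempts: int = 4) -> bool:
    """True when the status is transient and attempts remain."""
    return status in RETRYABLE_STATUSES and attempt < max_attempts

def backoff_seconds(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential backoff with full jitter: a delay in [0, min(cap, base * 2**attempt))."""
    return random.uniform(0, min(cap, base * 2 ** attempt))
```

Since only successful 2xx responses are charged, retrying a 429 or 502 does not consume extra credits.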

Example response

{
  "code": 200,
  "msg": "OK",
  "data": [
    {
      "title": "Apple Stock Poised for 21% Surge",
      "source": "24/7 Wall St.",
      "url": "https://247wallst.com/investing/2026/04/28/apple-stock-poised-for-21-surge/",
      "published_unix": 1777398061,
      "published_at": "2026-04-28T17:41:01Z",
      "thumbnail_url": "https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcSVjUAJrZYWBJA2Q7yragfnu71T..."
    }
  ]
}
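In this payload, published_unix and published_at encode the same instant, so a client can rely on either. A small consumption sketch using the example values above (the payload is abbreviated to the fields it uses):

```python
import json
from datetime import datetime, timezone

# Parse the example payload and derive a timezone-aware datetime from
# published_unix; it agrees with the published_at string.
payload = json.loads("""{
  "code": 200,
  "msg": "OK",
  "data": [
    {
      "title": "Apple Stock Poised for 21% Surge",
      "source": "24/7 Wall St.",
      "published_unix": 1777398061,
      "published_at": "2026-04-28T17:41:01Z"
    }
  ]
}""")

article = payload["data"][0]
published = datetime.fromtimestamp(article["published_unix"], tz=timezone.utc)
assert published.strftime("%Y-%m-%dT%H:%M:%SZ") == article["published_at"]
print(article["title"], "-", article["source"])
```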

Request body schema

No body schema

Response schema

#/definitions/finance.articlesResponseDoc

Field | Type | Required | Example
code | integer | No | 200
data | array | No |
data[].published_at | string | No |
data[].published_unix | integer | No | 1777077000
data[].related | array | No |
data[].related[].after_hours | finance.PriceChange | No |
data[].related[].after_hours.change | number | No | -3.37
data[].related[].after_hours.change_percent | number | No | -1.3
data[].related[].after_hours.price | number | No | 255.65
data[].related[].change | number | No | -3.37
data[].related[].change_percent | number | No | -1.3
data[].related[].country | string | No | US
data[].related[].currency | string | No | USD
data[].related[].exchange | string | No | NASDAQ
data[].related[].google_id | string | No | /m/07zmbvf
data[].related[].identifier | string | No | AAPL:NASDAQ
data[].related[].last_update_unix | integer | No | 1777077000
data[].related[].name | string | No | Apple Inc
data[].related[].previous_close | number | No | 236.35
data[].related[].price | number | No | 255.65
data[].related[].ticker | string | No | AAPL
data[].related[].timezone | string | No | America/New_York
data[].related[].type | string | No | stock
data[].source | string | No | Reuters
data[].thumbnail_url | string | No |
data[].title | string | No | Apple announces an update
data[].url | string | No | https://example.com/news
msg | string | No | OK

Code examples

Use environment variables for secrets and keep Crawlora API keys server-side.

curl -X GET "https://api.crawlora.net/api/v1/google/finance/news/AAPL%3ANASDAQ?limit=10" \
  -H "x-api-key: $CRAWLORA_API_KEY"

Responsible public web data workflows

Crawlora is designed for responsible structured public web data workflows. Customers are responsible for using Crawlora in compliance with applicable laws, third-party rights, target-platform rules, and Crawlora terms.

Read Crawlora terms
