Scrapinghub API Reference

Crawlera API

Note

Check also the Help Center for general guides and articles.

Proxy API

Crawlera works with a standard HTTP web proxy API, where you only need an API key for authentication. This is the standard way to perform a request via Crawlera:

curl -vx proxy.crawlera.com:8010 -U <API key>: http://httpbin.org/ip
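
Without a session header, consecutive requests may be routed through different slaves, so running the command twice will usually show two different outgoing IPs; this is a quick sanity check that traffic is actually going through Crawlera (httpbin.org/ip is only an example target):

curl -x proxy.crawlera.com:8010 -U <API key>: http://httpbin.org/ip
curl -x proxy.crawlera.com:8010 -U <API key>: http://httpbin.org/ip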

Errors

When an error occurs, Crawlera sends a response containing an X-Crawlera-Error header and an error message in the body.

Note

These errors are internal to Crawlera and are subject to change at any time, so they should not be relied on; use them only for debugging.

X-Crawlera-Error          Response Code   Error Message
bad_session_id            400             Incorrect session ID
user_session_limit        400             Session limit exceeded
bad_auth                  407
too_many_conns            429             Too many connections*
header_auth               470             Unauthorized Crawlera header
                          500             Unexpected error
nxdomain                  502             Error looking up domain
econnrefused              502             Connection refused
econnreset                502             Connection reset
socket_closed_remotely    502             Server closed socket connection
send_failed               502             Send failed
noslaves                  503             No available proxies
slavebanned               503             Website crawl ban
serverbusy                503             Server busy: too many outstanding requests
timeout                   504             Timeout from upstream server
msgtimeout                504             Timeout processing HTTP stream
domain_forbidden          523             Domain forbidden. Please contact help@scrapinghub.com
bad_header                540             Bad header value for <some_header>

* Crawlera limits the number of concurrent connections based on your Crawlera plan. See the Crawlera pricing table for more information on plans.

Sessions and Request Limits

Sessions

Sessions allow reusing the same slave for every request. Sessions expire 30 minutes after their last use and Crawlera limits the number of concurrent sessions to 100 for C10 plans, and 5000 for all other plans.

Sessions are managed using the X-Crawlera-Session header. To create a new session send:

X-Crawlera-Session: create

Crawlera will respond with the session ID in the same header:

X-Crawlera-Session: <session ID>

From then onward, subsequent requests can be made through the same slave by sending the session ID in the request header:

X-Crawlera-Session: <session ID>
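
Putting this together with curl, a minimal session workflow might look like the following sketch (the -D - flag prints the response headers so the returned session ID can be read; the session ID and target URL are illustrative):

# Create a session and read its ID from the response headers
curl -x proxy.crawlera.com:8010 -U <API key>: -H "X-Crawlera-Session: create" -D - -o /dev/null http://httpbin.org/ip
# ...
# X-Crawlera-Session: 1836172

# Reuse the returned ID so later requests go through the same slave
curl -x proxy.crawlera.com:8010 -U <API key>: -H "X-Crawlera-Session: 1836172" http://httpbin.org/ip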

Another way to create sessions is using the /sessions endpoint:

curl -u <API key>: proxy.crawlera.com:8010/sessions -X POST

This also returns a session ID, which you can pass to future requests in the X-Crawlera-Session header as before. This is helpful when you cannot create the session by sending X-Crawlera-Session: create with a regular request.

If an incorrect session ID is sent, Crawlera responds with a bad_session_id error.

List sessions

Issue a GET request to the /sessions endpoint to list your sessions. The endpoint returns a JSON document in which each key is a session ID and the associated value is a slave.

Example:

curl -u <API key>: proxy.crawlera.com:8010/sessions
{"1836172": "<SLAVE1>", "1691272": "<SLAVE2>"}

Delete a session

Issue a DELETE request to /sessions/<session ID> to delete a session.

Example:

curl -u <API key>: proxy.crawlera.com:8010/sessions/1836172 -X DELETE

Request Limits

Crawlera’s default request limit is 5 requests per second (rps) for each website. There is a default delay of 200 ms between each request and a default delay of 1 second between requests through the same slave. These delays can differ for more popular domains. If the requests-per-second limit is exceeded, further requests will be delayed for up to 15 minutes. Each request made after exceeding the limit increases the request delay. If the request delay reaches the soft limit (120 seconds), each subsequent response will contain an X-Crawlera-Next-Request-In header with the calculated delay as the value.
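
One way to check whether Crawlera is asking you to slow down is to inspect the response headers, for example with curl's -D - option (a sketch; the header is only present once the soft limit has been reached, and the target URL is illustrative):

curl -s -o /dev/null -D - -x proxy.crawlera.com:8010 -U <API key>: http://httpbin.org/ip | grep -i "X-Crawlera-Next-Request-In"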

Request Headers

Crawlera supports multiple HTTP headers to control its behaviour.

Not all headers are available in every plan. Here is a chart of the headers available in each plan (C10, C50, etc.):

Header                   C10   C50   C100   C200   Enterprise
X-Crawlera-UA            ✗     ✓     ✓      ✓      ✓
X-Crawlera-No-Bancheck   ✗     ✓     ✓      ✓      ✓
X-Crawlera-Cookies       ✓     ✓     ✓      ✓      ✓
X-Crawlera-Timeout       ✓     ✓     ✓      ✓      ✓
X-Crawlera-Session       ✓     ✓     ✓      ✓      ✓
X-Crawlera-JobId         ✓     ✓     ✓      ✓      ✓
X-Crawlera-Max-Retries   ✓     ✓     ✓      ✓      ✓

X-Crawlera-UA

Only available on C50, C100, C200 and Enterprise plans.

This header controls Crawlera User-Agent behaviour. The supported values are:

  • pass - pass the User-Agent as it comes on the client request
  • desktop - use a random desktop browser User-Agent
  • mobile - use a random mobile browser User-Agent

If X-Crawlera-UA isn’t specified, it defaults to desktop. If an unsupported value is passed in the X-Crawlera-UA header, Crawlera replies with a 540 Bad Header Value.
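
Example (using the desktop value listed above):

X-Crawlera-UA: desktop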

More User-Agent types will be supported in the future (chrome, firefox) and added to the list above.

X-Crawlera-No-Bancheck

Only available on C50, C100, C200 and Enterprise plans.

This header instructs Crawlera not to check responses against its ban rules and to pass any received response to the client. The presence of this header (with any value) is treated as a flag to disable ban checks.

Example:

X-Crawlera-No-Bancheck: 1

X-Crawlera-Cookies

This header allows you to disable the internal cookie tracking performed by Crawlera.

Example:

X-Crawlera-Cookies: disable

X-Crawlera-Session

This header instructs Crawlera to use sessions which will tie requests to a particular slave until it gets banned.

Example:

X-Crawlera-Session: create

When the create value is passed, Crawlera creates a new session and returns its ID in the response header of the same name. All subsequent requests should use that returned session ID to prevent random slave switching between requests. Crawlera sessions currently have a maximum lifetime of 30 minutes. See Sessions and Request Limits for information on the maximum number of sessions.

X-Crawlera-JobId

This header sets the job ID for the request (useful for tracking requests in the Crawlera logs).

Example:

X-Crawlera-JobId: 999

X-Crawlera-Max-Retries

This header limits the number of retries performed by Crawlera.

Example:

X-Crawlera-Max-Retries: 1

Passing 1 in the header instructs Crawlera to perform at most 1 retry. The default number of retries is 5 (which is also the maximum allowed value; the minimum is 0).

X-Crawlera-Timeout

This header sets Crawlera’s timeout, in milliseconds, for receiving a response from the target website. The timeout must be between 30,000 and 180,000 milliseconds; values outside that range are adjusted to the nearest allowed value (30,000 or 180,000).

Example:

X-Crawlera-Timeout: 40000

The example above sets the response timeout to 40,000 milliseconds. In the case of a streaming response, each chunk has 40,000 milliseconds to be received. If no response is received after 40,000 milliseconds, a 504 response will be returned. If the header is not specified, the timeout defaults to 30,000 milliseconds.
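
In a full request the header is passed alongside the proxy options, for example (the target URL is illustrative):

curl -x proxy.crawlera.com:8010 -U <API key>: -H "X-Crawlera-Timeout: 40000" http://httpbin.org/ip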

[Deprecated] X-Crawlera-Use-Https

Previously, performing an HTTPS request required using the http:// variant of the URL plus the X-Crawlera-Use-Https header with value 1, as in the following example:

curl -x proxy.crawlera.com:8010 -U <API key>: http://twitter.com -H x-crawlera-use-https:1

Now you can use the https:// URL directly and drop the X-Crawlera-Use-Https header, like this:

curl -x proxy.crawlera.com:8010 -U <API key>: https://twitter.com

If you don’t use curl with Crawlera, check the rest of the documentation and update your scripts so you can continue using Crawlera without issues. Some programming languages will also ask for the certificate file crawlera-ca.crt; you can install the certificate on your system or set it explicitly in the script.
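
With curl, for instance, the certificate can be passed explicitly via the --cacert option (assuming crawlera-ca.crt has been downloaded to the current directory):

curl --cacert crawlera-ca.crt -x proxy.crawlera.com:8010 -U <API key>: https://twitter.com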

Response Headers

X-Crawlera-Next-Request-In

This header is returned when the request delay reaches the soft limit (120 seconds) and contains the calculated delay value. If the user ignores this header, the hard limit (1000 seconds) may be reached, after which Crawlera will return HTTP status code 503 with the delay value in the Retry-After header.

X-Crawlera-Debug

This header activates tracking of additional debug values, which are returned in response headers. At the moment only the request-time and ua values are supported; use a comma as a separator. For example, to start tracking request time send:

X-Crawlera-Debug: request-time

or, to track both request time and User-Agent send:

X-Crawlera-Debug: request-time,ua

The request-time option makes Crawlera report in a response header the request time (in seconds) of the last request retry (i.e. the time between Crawlera sending the request to a slave and Crawlera receiving the response headers from that slave):

X-Crawlera-Debug-Request-Time: 1.112218

The ua option lets you obtain the actual User-Agent that was applied to the last request (useful for finding reasons behind redirects from a target website, for instance):

X-Crawlera-Debug-UA: Mozilla/5.0 (Windows; U; Windows NT 6.1; zh-CN) AppleWebKit/533+ (KHTML, like Gecko)
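
A complete debug request with curl could look like the following sketch (the -D - option prints the response headers, where the X-Crawlera-Debug-* values appear; the target URL is illustrative):

curl -s -o /dev/null -D - -x proxy.crawlera.com:8010 -U <API key>: -H "X-Crawlera-Debug: request-time,ua" http://httpbin.org/ip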

X-Crawlera-Error

This header is returned when an error condition is met, stating a particular Crawlera error behind HTTP status codes (4xx or 5xx). The error message is sent in the response body.

Example:

X-Crawlera-Error: user_session_limit

Note

Returned errors are internal to Crawlera and are subject to change at any time, so they should not be relied on.

Using Crawlera with Scrapy Cloud

To use Crawlera in Scrapy Cloud projects, enable the Crawlera addon: go to Settings > Addons > Crawlera to activate it.

Settings

Setting                      Description
CRAWLERA_URL                 Proxy URL (default: http://proxy.crawlera.com:8010)
CRAWLERA_ENABLED             Tick the checkbox to enable Crawlera
CRAWLERA_APIKEY              Crawlera API key
CRAWLERA_MAXBANS             Number of bans to ignore before closing the spider (default: 20)
CRAWLERA_DOWNLOAD_TIMEOUT    Timeout for requests, in seconds (default: 190)

Using Crawlera with headless browsers

See the articles in our Knowledge Base.

Using Crawlera from different languages

Check out our Knowledge Base for examples of using Crawlera with different programming languages.

Fetch API

Warning

The Fetch API is deprecated and will be removed soon. Use the standard proxy API instead.

Crawlera’s Fetch API lets you request URLs as an alternative to Crawlera’s proxy interface.

Fields

Note

Field values should always be URL-encoded.

Field     Required   Description                               Example
url       yes        URL to fetch                              http://www.food.com/
headers   no         Headers to send in the outgoing request   header1:value1;header2:value2

Basic example:

curl -u <API key>: http://proxy.crawlera.com:8010/fetch?url=https://twitter.com

Headers example:

curl -u <API key>: 'http://proxy.crawlera.com:8010/fetch?url=http%3A//www.food.com&headers=Header1%3AVal1%3BHeader2%3AVal2'

Working with HTTPS

See Crawlera with HTTPS in our Knowledge Base

Working with Cookies

See Crawlera and Cookies in our Knowledge Base