scrapy-proxycrawl-middleware

Scrapy ProxyCrawl Proxy Middleware: ProxyCrawl interfacing middleware for Scrapy


Keywords
scrapy, middleware, scraping, scraper, crawler, crawling, proxycrawl, api, proxycrawl-api, scraping-api, scrapy-crawler, scrapy-framework
License
Apache-2.0
Install
pip install scrapy-proxycrawl-middleware==1.1.0

Documentation

DEPRECATION NOTICE

⚠️ IMPORTANT: This package is no longer maintained or supported. For the latest updates, please use our new package at scrapy-crawlbase-middleware.


ProxyCrawl API middleware for Scrapy

Processes Scrapy requests through the ProxyCrawl service, using either a Normal or a JavaScript token

Installing

Choose a way of installing:

  • Clone the repository inside your Scrapy project and run the following:
    python setup.py install
  • Or use the PyPI package manager: pip install scrapy-proxycrawl-middleware

Then in your Scrapy settings.py add the following lines:

# Activate the middleware
PROXYCRAWL_ENABLED = True

# The ProxyCrawl API token you wish to use, either a normal or a JavaScript token
PROXYCRAWL_TOKEN = 'your token'

# Register the middleware
DOWNLOADER_MIDDLEWARES = {
    'scrapy_proxycrawl.ProxyCrawlMiddleware': 610
}

Usage

Use scrapy_proxycrawl.ProxyCrawlRequest instead of the Scrapy built-in Request. ProxyCrawlRequest accepts additional arguments, which are passed on to the ProxyCrawl API:

from scrapy import Spider
from scrapy_proxycrawl import ProxyCrawlRequest

class ExampleScraper(Spider):
    name = 'example'

    def start_requests(self):
        yield ProxyCrawlRequest(
            "http://target-url",
            callback=self.parse_result,
            device='desktop',
            country='US',
            page_wait=1000,
            ajax_wait=True,
            dont_filter=True
        )

The target URL will be replaced with the ProxyCrawl API URL, and the extra parameters will be encoded into that URL's query string by the middleware automatically.
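To illustrate, the rewriting works along these lines. This is a minimal sketch, not the middleware's actual code; it assumes the standard ProxyCrawl API endpoint (api.proxycrawl.com) and the hypothetical helper name build_proxycrawl_url:

```python
from urllib.parse import urlencode

# Hypothetical sketch of how a middleware like this one could rewrite a
# target URL into a ProxyCrawl API URL. PROXYCRAWL_TOKEN from settings.py
# and the per-request arguments all end up as query-string parameters.
def build_proxycrawl_url(token, target_url, **params):
    query = {"token": token, "url": target_url}
    query.update(params)
    # urlencode percent-encodes the target URL, e.g. "://" -> "%3A%2F%2F"
    return "https://api.proxycrawl.com/?" + urlencode(query)

url = build_proxycrawl_url(
    "your_token",
    "http://target-url",
    device="desktop",
    country="US",
    page_wait=1000,
)
```

The spider's response then comes from the ProxyCrawl endpoint, but callbacks and parsing work exactly as with a plain Scrapy Request.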

If you have questions or need help using the library, please open an issue or contact us.


Copyright 2023 ProxyCrawl