scrapy-link-filter

Scrapy Middleware that allows a Scrapy Spider to filter requests.


Keywords
scrapy, link, filter, middleware, spider
License
BSD-3-Clause
Install
pip install scrapy-link-filter==0.2.0

Documentation

Scrapy-link-filter


Spider Middleware that allows a Scrapy Spider to filter requests. Similar functionality already exists in CrawlSpider (through its Rules) and in the RobotsTxtMiddleware, but with a twist: this middleware allows defining the rules dynamically, per request, or as spider arguments instead of project settings.

Install

This project requires Python 3.6+ and pip. Using a virtual environment is strongly encouraged.

$ pip install git+https://github.com/croqaz/scrapy-link-filter

Usage

To enable the middleware as a Spider Middleware, add it to the project settings.py:

SPIDER_MIDDLEWARES = {
    # maybe other Spider Middlewares ...
    # can go after DepthMiddleware: 900
    'scrapy_link_filter.middleware.LinkFilterMiddleware': 950,
}

Alternatively, it can be enabled as a Downloader Middleware, also in the project settings.py:

DOWNLOADER_MIDDLEWARES = {
    # maybe other Downloader Middlewares ...
    # can go before RobotsTxtMiddleware: 100
    'scrapy_link_filter.middleware.LinkFilterMiddleware': 50,
}

The rules must be defined either on the spider instance, in a spider.extract_rules dict, or per request, in request.meta['extract_rules']. Internally, the extract_rules dict is converted into a LinkExtractor, which is used to match the requests.
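Conceptually, the filtering works roughly like the sketch below: the extract_rules keys are passed as LinkExtractor arguments, and a request is kept only if its URL matches. This is an illustration of the idea, not the middleware's exact code:

from scrapy.linkextractors import LinkExtractor

# Build an extractor from the same keys used in extract_rules.
rules = {"allow_domains": "example.com", "allow": "/en/items/"}
link_extractor = LinkExtractor(**rules)

# LinkExtractor.matches() checks a URL against the allow/deny rules and domains.
link_extractor.matches("https://example.com/en/items/1")  # True
link_extractor.matches("https://example.com/fr/items/1")  # False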

Note that the URL matching is case-sensitive by default, which works in most cases. To enable case-insensitive matching, you can specify a "(?i)" inline flag at the beginning of each "allow" or "deny" rule that needs to be case-insensitive.
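For example, an allow rule that should match "/EN/Items/" as well as "/en/items/" could be written like this:

extract_rules = {"allow": "(?i)/en/items/"}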

Example of a specific allow filter, on a spider instance:

from scrapy.spiders import Spider

class MySpider(Spider):
    extract_rules = {"allow_domains": "example.com", "allow": "/en/items/"}

Or a specific deny filter, inside a request meta:

request.meta['extract_rules'] = {
    "deny_domains": ["whatever.com", "ignore.me"],
    "deny": ["/privacy-policy/?$", "/about-?(us)?$"]
}
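For instance, a spider callback could attach rules to the requests it yields; the spider name and URLs below are only illustrative:

import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"
    start_urls = ["https://example.com/"]

    def parse(self, response):
        for href in response.css("a::attr(href)").getall():
            # Requests whose URL matches the deny rules are filtered out by the middleware.
            yield scrapy.Request(
                response.urljoin(href),
                callback=self.parse,
                meta={
                    "extract_rules": {
                        "deny_domains": ["whatever.com", "ignore.me"],
                        "deny": ["/privacy-policy/?$", "/about-?(us)?$"],
                    }
                },
            )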

The possible fields are:

  • allow_domains and deny_domains - one or more domains to specifically limit the crawl to, or to specifically reject
  • allow and deny - one or more sub-strings or regular-expression patterns to specifically allow, or to reject

Each field can be defined as a string, list, set, or tuple.
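For instance, these definitions of an allow rule are all equivalent:

extract_rules = {"allow": "/en/items/"}
extract_rules = {"allow": ["/en/items/"]}
extract_rules = {"allow": ("/en/items/",)}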


License

BSD3 © Cristi Constantin.