pysimplecrawler

Python Simple Crawler


Keywords
pysimplecrawler, python, simple, crawler
License
MIT
Install
pip install pysimplecrawler==1.0.0

Documentation

Python Simple Crawler

This is an extensible, easy-to-use crawler that can be fully customized to your needs.

  • Supports asynchronous requests
  • Lets you add special crawling strategies within custom spiders
  • Lets you specify which tags should be scraped
  • Gives access to full logs and detailed execution timing
  • Minimizes RAM usage via BeautifulSoup's decompose() and Python's garbage collection

Getting Started

Install the package using pip:

pip install pysimplecrawler

Or download the source code using this command:

git clone https://github.com/benyaminkosari/pysimplecrawler.git

Usage

  1. Import the crawler:
from pysimplecrawler import crawler
  2. Define the main url to crawl:
url = "https://example.com/"
  3. Add custom config if needed:
config = {
    "async": True,
    "workers": 5,
    "headers": {"cookies": "my_cookie"}
}
  4. Execute the process:
spider = crawler.Crawl(url, depth=3, config=config)
spider.start()
  5. Access the result (a combined example follows these steps):
print(spider.data)
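
Putting these steps together, a minimal end-to-end script looks like the following. The cookie value is a placeholder, and the exact structure of spider.data depends on the tags you configure.

from pysimplecrawler import crawler

# Target to start crawling from
url = "https://example.com/"

# Optional settings: send requests concurrently with 5 workers
# and attach a custom header ("my_cookie" is a placeholder)
config = {
    "async": True,
    "workers": 5,
    "headers": {"cookies": "my_cookie"}
}

# Crawl up to 3 levels deep from the start url, then print the scraped data
spider = crawler.Crawl(url, depth=3, config=config)
spider.start()
print(spider.data)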

Reference

crawler.Crawl(url, depth, priority, config)

  • url : str (required)
    Target's address to start crawling

  • depth : int (optional) default=3
    Maximum depth the crawler will follow links from the target url

  • priority : str (optional) default='vertical'
    The crawling strategy to use. It is the name of the spider class, all lowercase.
    Note that the 'Async' prefix of a spider class name is handled automatically based on the config, so it does not need to be included here.

  • config : dict (optional)
    Overrides defaults and adds extra settings

    • tags : list = ["img", "h1", "meta", "title", ...]
      Html tags to be scraped while crawling
    • async : bool default=False
      Whether requests must be sent concurrently
    • workers : int default=10
      Number of requests to be sent simultaneously
    • logs : bool default=True
      Whether logs must be shown
    • headers : dict
      Headers of the requests.
      Note that default headers are already set; any headers you provide here override them.
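
As an illustration, a config that limits scraping to a few tags, enables concurrency, and silences logs could look like the sketch below. The header value is a placeholder, not a required setting.

config = {
    "tags": ["title", "h1", "meta", "img"],  # only scrape these tags
    "async": True,                           # send requests concurrently
    "workers": 10,                           # number of simultaneous requests
    "logs": False,                           # suppress log output
    "headers": {"User-Agent": "my-crawler"}  # overrides the default headers
}
spider = crawler.Crawl("https://example.com/", depth=2, priority="vertical", config=config)
spider.start()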

How does it work?

The Crawl class in pysimplecrawler.crawler is connected to the Spider classes in pysimplecrawler.spiders as a Factory to its Products. By inheriting from AbsSpider (the abstract class) and SpiderBase (a helper class), you can add your own custom strategies to the /spiders directory.
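
As a rough sketch of this Factory-to-Products relationship, a custom strategy could be added along the lines below. The module paths, class name, and method name are assumptions for illustration only; check AbsSpider and SpiderBase in the /spiders directory for the actual interface the factory expects.

# spiders/horizontal.py (hypothetical custom strategy, selectable as priority="horizontal")
from .abs_spider import AbsSpider      # assumed location of the abstract class
from .spider_base import SpiderBase    # assumed location of the helper class

class Horizontal(SpiderBase, AbsSpider):
    """Hypothetical breadth-first spider: visits all urls at one depth before going deeper."""

    def crawl(self):
        # Assumed abstract hook that each concrete spider overrides
        # with its own traversal strategy.
        ...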