A micro-framework for crawling web pages using crawler configs. It can use MongoDB, Elasticsearch, or Solr to cache and store the extracted data.
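
To illustrate the config-driven idea, here is a minimal sketch of a crawler that reads start URLs and field selectors from a config dict and caches results in MongoDB. It is built directly on Scrapy and pymongo (the package lists scrapy and mongodb among its keywords); the config layout, spider name, field names, and URLs below are hypothetical and may differ from web-crawler-plus's actual API.

    import scrapy
    from scrapy.crawler import CrawlerProcess
    import pymongo

    # Hypothetical parser config: start URLs plus a CSS selector per field.
    CRAWLER_CONFIG = {
        "start_urls": ["https://example.com/articles"],
        "fields": {
            "title": "h1::text",
            "body": "article p::text",
        },
    }

    # MongoDB is one of the supported cache/storage backends.
    mongo = pymongo.MongoClient("mongodb://localhost:27017")
    collection = mongo["crawler_cache"]["pages"]

    class ConfigSpider(scrapy.Spider):
        name = "config_spider"
        start_urls = CRAWLER_CONFIG["start_urls"]

        def parse(self, response):
            # Extract each configured field with its CSS selector.
            item = {
                field: response.css(selector).get()
                for field, selector in CRAWLER_CONFIG["fields"].items()
            }
            # Cache the extracted item before yielding it.
            collection.insert_one(dict(item))
            yield item

    if __name__ == "__main__":
        process = CrawlerProcess(settings={"LOG_LEVEL": "INFO"})
        process.crawl(ConfigSpider)
        process.start()

In the real package, storage would more likely be handled by a pipeline selected in the config rather than inline in the spider; this sketch inlines it only for brevity.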


Keywords
crawler, elasticsearch, mongodb, parser-configs, python, scrapy
License
MIT
Install
pip install web-crawler-plus==0.9.11

Documentation