# superss

RSS parsing with batteries included.
`feedparser` is great, but sometimes it doesn't put things in the right place. `superss` fixes this by finding all known candidates for URLs, content, images, tags, dates, and authors, and intelligently picking the best one. It also does some other useful things, like author parsing with `lauteur`, URL reconciliation with `siegfried`, and pulling links and images out of the article HTML.
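The candidate-picking idea can be sketched in miniature. This is a hypothetical illustration, not `superss`' internal logic: gather every field that might hold a publication date, then return the first candidate that parses cleanly (the `DATE_FORMATS` list and `best_date` helper are assumptions for this sketch).

```python
from datetime import datetime

# Formats commonly seen in <pubDate>, <dc:date>, and <updated> elements.
DATE_FORMATS = ("%a, %d %b %Y %H:%M:%S %z", "%Y-%m-%dT%H:%M:%S", "%Y-%m-%d")

def best_date(candidates):
    """Return the first candidate string that parses with a known format."""
    for raw in candidates:
        if not raw:
            continue
        for fmt in DATE_FORMATS:
            try:
                return datetime.strptime(raw.strip(), fmt)
            except ValueError:
                pass
    return None

# Feeds often disagree or omit fields entirely; the picker skips the junk.
print(best_date([None, "2014-05-01T12:30:00", "garbage"]))  # 2014-05-01 12:30:00
```

The same gather-then-rank pattern applies to the other fields (URLs, images, authors), with per-field scoring instead of simple first-match.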
Another problem with RSS parsing is that feeds sometimes include only a summary of the article. `superss` can also extract the article's full text from the page itself with `particle` and merge this data with the data from the RSS feed.
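A minimal sketch of that merge step, assuming a simple "prefer the richer value" rule (the `merge_entry` helper and its length heuristic are assumptions for illustration, not `superss`' actual implementation): fields extracted from the page fill in missing feed fields and replace truncated ones.

```python
def merge_entry(feed_entry, extracted):
    """Merge page-extracted fields into a feed entry.

    A value from the page wins when the feed value is missing,
    or when the extracted text is longer (e.g. full text vs. summary).
    """
    merged = dict(feed_entry)
    for key, value in extracted.items():
        current = merged.get(key)
        if not current or (isinstance(value, str) and len(value) > len(str(current))):
            merged[key] = value
    return merged

feed_entry = {"title": "Story", "content": "A short summary..."}
extracted = {"content": "The full article text, many paragraphs long...",
             "image": "http://example.com/a.jpg"}
merged = merge_entry(feed_entry, extracted)
print(merged["image"])  # http://example.com/a.jpg
```

Here the summary is replaced by the longer extracted text, while the feed's own `title` is kept untouched.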
Finally, some sites don't have RSS feeds at all. In this case, `superss` combines `pageone` and `particle` to create a feed of articles from article URLs on a site's homepage.
## Install

```bash
pip install superss
```
## Test

Requires `nose`. (Currently only full-text RSS feeds are tested.)

```bash
nosetests
```
## Usage

### Full-Text Feeds

```python
from superss import SupeRSS

s = SupeRSS('http://feeds.feedburner.com/publici_rss')
for entry in s.run():
    print(entry)
```
### Non-Full-Text Feeds

```python
from superss import SupeRSS

s = SupeRSS('http://feeds.feedburner.com/publici_rss', is_full_text=False)
for entry in s.run():
    print(entry)
```
### Feed from a homepage

Experimental: build a feed from a homepage. You must install `pageone` and `particle` to run this.

```python
from superss import SupeRSS

s = SupeRSS(homepage='http://nytimes.com/')
for entry in s.run():
    print(entry)
```
## Concurrency

Optionally run any function concurrently via `gevent` by passing in the kwargs `concurrent` and `num_workers`:

```python
from superss import SupeRSS

s = SupeRSS(
    'http://feeds.feedburner.com/publici_rss',
    is_full_text=False,
    concurrent=True,
    num_workers=10
)
for entry in s.run():
    print(entry)
```