rusenttokenize

Rule-based sentence tokenizer for the Russian language


License: Apache-2.0

ru_sent_tokenize

A simple and fast rule-based sentence tokenizer for Russian, tested on the OpenCorpora and SynTagRus datasets.

Installation

pip install rusenttokenize

Running

>>> from rusenttokenize import ru_sent_tokenize
>>> ru_sent_tokenize('Эта шоколадка за 400р. ничего из себя не представляла. Артём решил больше не ходить в этот магазин')
['Эта шоколадка за 400р. ничего из себя не представляла.', 'Артём решил больше не ходить в этот магазин']

Metrics

The tokenizer has been evaluated on OpenCorpora and SynTagRus. Two metrics are reported.

Precision. We took single sentences from the datasets and measured how often the tokenizer left them unsplit.

Recall. We took pairs of consecutive sentences from the datasets, joined each pair with a space character, and measured how often the tokenizer split the joined text back into the two original sentences.
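
A minimal sketch of how these two metrics can be computed, assuming sentences is a hypothetical list of gold-standard sentences from one of the corpora (the actual evaluation code is in the notebook referenced below):

from rusenttokenize import ru_sent_tokenize

def precision(sentences):
    # Share of single gold sentences that the tokenizer leaves unsplit.
    kept = sum(1 for s in sentences if len(ru_sent_tokenize(s)) == 1)
    return kept / len(sentences)

def recall(sentences):
    # Join each pair of consecutive gold sentences with a space and count
    # how often the tokenizer splits the result back into the original two.
    pairs = list(zip(sentences, sentences[1:]))
    correct = sum(1 for a, b in pairs if ru_sent_tokenize(a + ' ' + b) == [a, b])
    return correct / len(pairs)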

tokenizer                                 | OpenCorpora                               | SynTagRus
                                          | Precision | Recall | Execution Time (sec) | Precision | Recall | Execution Time (sec)
nltk.sent_tokenize                        | 94.30     | 86.06  | 8.67                 | 98.15     | 94.95  | 5.07
nltk.sent_tokenize(x, language='russian') | 95.53     | 88.37  | 8.54                 | 98.44     | 95.45  | 5.68
bureaucratic-labs.segmentator.split       | 97.16     | 88.62  | 359                  | 96.79     | 92.55  | 210
ru_sent_tokenize                          | 98.73     | 93.45  | 4.92                 | 99.81     | 98.59  | 2.87

The notebook shows how the table above was calculated.