edm

Tools for assessing the difficulty of datasets for machine learning models


License
GPL-2.0
Install
pip install edm==0.0.4

Documentation

Evolutionary Data Measures: Understanding the Difficulty of Text Classification Tasks

Authors: Ed Collins, Nikolai Rozanov, Bingbing Zhang

Contact: contact@wluper.com

In the paper of the same name, we describe how we used an evolutionary algorithm to discover which statistics about a text classification dataset best indicate how difficult that dataset is likely to be for machine learning models to learn. The paper presents the difficulty measure we discovered; this Python package provides the code to calculate it.

Installation

This code is pip-installable, so it can be installed on your machine by changing into the top-level directory of this code (where the setup.py file is located) and running:

pip3 install .

The code requires Python 3 and NumPy.

It is strongly recommended that you install this code in a virtualenv:

$ ls
edm/
$ mkdir myvirtualenv/
$ virtualenv -p python3 myvirtualenv/
$ mv edm/ myvirtualenv/
$ cd myvirtualenv/
$ source bin/activate
(myvirtualenv) $ cd edm/
(myvirtualenv) $ ls
edm/
.gitignore
LICENSE
README.md
setup.py
(myvirtualenv) $ pip3 install .

Running

To calculate the difficulty of a text classification dataset, you will need to provide two lists: one of sentences and one of labels. These two lists must be the same length, i.e. every sentence has a label. Each item of data should be an untokenized string and each label a string.

>>> sents, labels = your_own_loading_function(PATH_TO_DATA_FILE)
>>> sents
["this is a positive sentence", "this is a negative sentence", ...]
>>> labels
["positive", "negative", ...]
>>> len(sents) == len(labels)
True

This code does not load data files (e.g. CSV files) into memory for you - you will need to do this separately.
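For example, a minimal loading function for a two-column CSV of sentence/label pairs might look like the sketch below (the function name, column order and header handling are assumptions; adapt them to your own file format):

import csv

def load_csv_dataset(path):
    # Assumed format: one "sentence,label" pair per row, no header row.
    # Adjust the column order, delimiter and header handling to match your file.
    sents, labels = [], []
    with open(path, newline="", encoding="utf-8") as f:
        for sentence, label in csv.reader(f):
            sents.append(sentence)
            labels.append(label)
    return sents, labels

sents, labels = load_csv_dataset(PATH_TO_DATA_FILE)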

Once you have loaded your dataset into memory, you can receive a "difficulty report" by running the code as follows:

from edm import report

sents, labels = your_own_loading_function(PATH_TO_DATA_FILE)

print(report.get_difficulty_report(sents, labels))

Note that if your dataset is very large, counting its words may take several minutes. The Amazon Reviews dataset from Character-level Convolutional Networks for Text Classification (Xiang Zhang, Junbo Zhao and Yann LeCun, 2015), which contains 3.6 million Amazon reviews, takes approximately 15 minutes to process and produce the difficulty report. A loading bar is displayed while the words are counted.