Interpretable Machine Learning (iML) package. Explain the predictions of any model.


License
MIT

Install
pip install iml==0.6.2

Documentation

Interpretable ML (iML) is a set of data type objects, visualizations, and interfaces that can be used by any method designed to explain the predictions of machine learning models (or, really, the output of any function). It currently contains the interface and I/O code from the SHAP project, and it may eventually do the same for the LIME project.
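
As a rough illustration of that flow, an explanation method can wrap its per-feature effects in iML's data type objects and hand them to the shared visualization code. The sketch below does this for a toy linear model; the class names, argument orders, and the `initjs`/`visualize` helpers are assumptions based on how the SHAP project has used iml, so check the source of your installed version for the exact API.

```python
# A minimal sketch, assuming the iml API exposed to the SHAP project
# (Instance, Model, DenseData, AdditiveExplanation, IdentityLink, initjs,
# visualize). Names and signatures may differ in your version of iml.
import numpy as np
from iml import Instance, Model, initjs, visualize
from iml.datatypes import DenseData
from iml.explanations import AdditiveExplanation
from iml.links import IdentityLink

# A toy linear model whose prediction we want to explain.
weights = np.array([0.5, -0.2, 0.8])
def predict(X):
    return X @ weights

X_background = np.zeros((1, 3))           # reference (background) data
x = np.array([1.0, 2.0, 0.5])             # the instance being explained

# For a linear model the additive feature effects are weight * (x - reference).
effects = weights * (x - X_background[0])
base_value = predict(X_background)[0]     # model output at the reference point
out_value = predict(x.reshape(1, -1))[0]  # model output for the explained instance

# Wrap everything in iml's data type objects and render the explanation.
feature_names = ["feature_%d" % i for i in range(3)]
explanation = AdditiveExplanation(
    base_value, out_value, effects, np.zeros(3),
    Instance(x.reshape(1, -1), x),
    IdentityLink(),
    Model(predict, ["output"]),
    DenseData(X_background, feature_names),
)
initjs()                # load the JavaScript visualization code (in a notebook)
visualize(explanation)  # interactive additive force plot of the feature effects
```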

If you want to use iML for your interpretable machine learning project, let us know by opening a GitHub issue. That way we won't unintentionally break anything you depend on.