An unverified black box model is the path to failure. Opaqueness leads to distrust. Distrust leads to neglect. Neglect leads to rejection.
The DALEX package X-rays any model: it helps to explore and explain its behaviour and to understand how complex models work. The main function
explain() creates a wrapper around a predictive model. Wrapped models may then be explored and compared with a collection of local and global explainers that implement recent developments from the area of Interpretable Machine Learning / eXplainable Artificial Intelligence.
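A minimal sketch of this workflow, assuming the ranger package and the titanic_imputed dataset shipped with DALEX (the model choice and column indices are illustrative):

```r
library(DALEX)
library(ranger)

# fit any predictive model; here a probability forest on the titanic_imputed data
model <- ranger(survived ~ ., data = titanic_imputed,
                classification = TRUE, probability = TRUE)

# explain() wraps the model together with validation data and true labels
explainer <- explain(model,
                     data  = titanic_imputed[, -8],
                     y     = titanic_imputed$survived,
                     label = "ranger forest")

# global explainer: permutation-based variable importance
model_parts(explainer)

# local explainer: Break Down attributions for a single observation
predict_parts(explainer, new_observation = titanic_imputed[1, ])
```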
If you work with
mlr3, you may be interested in the DALEXtra package. It is an extension pack for
DALEX with easy-to-use connectors to models created in such frameworks.
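A hedged sketch of such a connector, assuming DALEXtra provides explain_mlr3() and using an rpart learner on the titanic_imputed data (the learner choice and the factor conversion are illustrative assumptions; check the DALEXtra documentation for the connector matching your framework):

```r
library(DALEX)     # for the titanic_imputed data
library(DALEXtra)
library(mlr3)

# mlr3 classification tasks expect a factor target
titanic <- titanic_imputed
titanic$survived <- as.factor(titanic$survived)

task    <- TaskClassif$new(id = "titanic", backend = titanic, target = "survived")
learner <- lrn("classif.rpart", predict_type = "prob")
learner$train(task)

# explain_mlr3() builds a DALEX explainer with a suitable predict function
explainer <- explain_mlr3(learner,
                          data  = titanic[, -8],
                          y     = titanic_imputed$survived,
                          label = "mlr3 rpart")
```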
The easiest way to get the R version of DALEX is to install it from CRAN:
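```r
# install the released version of DALEX from CRAN
install.packages("DALEX")
```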
The Python version of dalex is available on PyPI:
pip install dalex -U
Machine learning models are widely used in classification and regression tasks. Due to increasing computational power and the availability of new data sources and methods, ML models are becoming more and more complex. Models created with techniques like boosting, bagging, or neural networks are true black boxes: it is hard to trace the link between input variables and model outcomes. They are used because of their high performance, but lack of interpretability is one of their weakest sides.
In many applications we need to know, understand, or prove how input variables are used in the model and what impact they have on the final model prediction.
DALEX is a set of tools that help to understand how complex models work.
- Workshop: Explanation and exploration of machine learning models with R and DALEX at eRum 2020, cheatsheet
- XAI in the jungle of competing frameworks for machine learning
- Gentle introduction to DALEX with examples
- How to compare models created in different languages: cross-comparison of gbm and CatBoost in R / gbm in h2o / gbm in python
- How to use DALEX for fraud detection
- How to use DALEX with keras
- How to use DALEX with parsnip
- How to use DALEX with caret
- How to use DALEX with mlr
- How to use DALEX with H2O
- How to use DALEX with xgboost package
- [How to use DALEX for teaching](https://raw.githack.com/pbiecek/DALEX_docs/master/vignettes/DALEX_teaching.html)
- Introduction to the dalex package: Titanic: tutorial and examples
- Important features explained: FIFA20: explain default vs tuned model with dalex
- How to use dalex with xgboost
- How to use dalex with tensorflow
- Interesting features in v0.2.1
- New fairness module
- Code in the form of jupyter notebook
- YouTube video showing how to do Break Down analysis
- Changelog: NEWS
Talks about DALEX
- Talk with your model! at useR! 2020
- Talk about DALEX at Complexity Institute / NTU February 2018
- Talk about DALEX at SER / WTU April 2018
- Talk about DALEX at STWUR May 2018 (in Polish)
- Talk about DALEX at BayArea 2018
- Talk about DALEX at PyData Warsaw 2018
76 years ago Isaac Asimov devised the Three Laws of Robotics: 1) a robot may not injure a human being, 2) a robot must obey the orders given it by human beings, and 3) a robot must protect its own existence. These laws shape the discussion around the ethics of AI. Today's robots, like cleaning robots, robotic pets, or autonomous cars, are far from being conscious enough to fall under Asimov's ethics.
Today we are surrounded by complex predictive algorithms used for decision making. Machine learning models are used in health care, politics, education, the judiciary, and many other areas. Black box predictive models have a far larger influence on our lives than physical robots. Yet applications of such models are left unregulated despite many examples of their potential harmfulness. See Weapons of Math Destruction by Cathy O'Neil for an excellent overview of potential problems.
It's clear that we need to control algorithms that may affect us. Such control is part of our civic rights. Here we propose three requirements that any predictive model should fulfill.
- Prediction's justifications. For every prediction of a model one should be able to understand which variables affect the prediction and how strongly. That is, variable attribution to the final prediction.
- Prediction's speculations. For every prediction of a model one should be able to understand how the prediction would change if the input variables were changed. That is, hypothesizing about what-if scenarios.
- Prediction's validations. For every prediction of a model one should be able to verify how strong the evidence is that supports this particular prediction.
There are two ways to comply with these requirements. One is to use only models that fulfill these conditions by design, such as white-box models like linear regression or decision trees; in many cases the price for this transparency is lower performance. The other way is to use approximate explainers – techniques that find only approximate answers but work for any black box model. Here we present such techniques.
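As a hedged sketch, the three requirements map onto DALEX explainers roughly as follows (the explainer object is assumed to come from explain(), as above; new_obs stands for any single observation):

```r
library(DALEX)
new_obs <- titanic_imputed[1, ]   # a single observation to explain

# 1. Prediction's justifications: variable attributions for one prediction
predict_parts(explainer, new_observation = new_obs, type = "break_down")

# 2. Prediction's speculations: what-if (ceteris paribus) profiles
predict_profile(explainer, new_observation = new_obs)

# 3. Prediction's validations: diagnostics against similar observations
predict_diagnostics(explainer, new_observation = new_obs)
```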
Work on this package was financially supported by the
NCN Opus grant 2016/21/B/ST6/02176 and
NCN Opus grant 2017/27/B/ST6/0130.