A branchless vectorized implementation of Bayesian Additive Regression Trees (BART) in JAX.
BART is a nonparametric Bayesian regression technique. Given predictors and responses, it infers a latent regression function as a sum of many shallow decision trees, sampling the tree structures and leaf values with MCMC.
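Concretely, the standard BART model (Chipman, George & McCulloch, 2010) writes each response as a sum of m trees plus Gaussian noise:

```latex
y_i = \sum_{j=1}^{m} g(x_i;\, T_j, M_j) + \varepsilon_i,
\qquad \varepsilon_i \sim N(0, \sigma^2),
```

where g(x; T_j, M_j) is the step function induced by decision tree T_j with leaf values M_j. Regularizing priors on (T_j, M_j, sigma^2) keep each tree shallow, and the posterior is explored by Gibbs and Metropolis-Hastings moves over one tree at a time.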
This Python module provides an implementation of BART that runs on GPU to process large datasets faster; it performs well on CPU too. Most other implementations of BART are R packages that run only on CPU.
On CPU, bartz runs at the speed of dbarts (the fastest implementation I know of) while using half the memory. On GPU, the speedup grows linearly with the number of datapoints: with 50,000 datapoints and 5,000 trees, an Nvidia Tesla V100 GPU is 12 times faster than an Apple M1 CPU.
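To illustrate the "branchless vectorized" idea in the title, here is a minimal NumPy sketch; this is not bartz's actual code, and `eval_tree` is a hypothetical name. A tree is stored as flat arrays in heap order, so every datapoint descends a fixed number of levels using only array indexing, with no data-dependent branching. This access pattern is what maps well onto JAX and GPU execution.

```python
import numpy as np

def eval_tree(X, var, split, leaf, depth):
    """Evaluate a perfect binary decision tree stored in heap layout.

    var[i], split[i]: decision rule of internal heap node i (root at index 1).
    leaf[i]: predicted value at heap node i on the leaf level.
    Every point walks exactly `depth` levels, branchlessly.
    """
    n = len(X)
    idx = np.ones(n, dtype=int)  # all points start at the root (heap index 1)
    for _ in range(depth):
        # compare each point's chosen feature against the node's split value
        go_right = X[np.arange(n), var[idx]] > split[idx]
        idx = 2 * idx + go_right  # heap children: 2i (left), 2i + 1 (right)
    return leaf[idx]
```

A depth-1 tree splitting on feature 0 at 0.5, with leaf values 3.0 and 7.0, sends the point 0.2 left and 0.9 right, returning `[3.0, 7.0]`. Real BART trees are not perfect binary trees, so a branchless implementation also needs a convention for pruned subtrees (e.g. self-referencing dummy nodes), omitted here for brevity.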
Other implementations of BART:

- stochtree C++ library with R and Python bindings, tailored to researchers who want to make their own BART variants
- bnptools Feature-rich R packages for BART and some variants
- dbarts Fast R package
- bartMachine Fast R package, supports imputation of missing predictors
- SoftBART R package with a smooth version of BART
- bcf R package for a version of BART for causal inference
- flexBART Fast R package, supports categorical predictors
- flexBCF R package, version of bcf optimized for large datasets
- XBART R/Python package implementing XBART, a faster variant of BART
- BART R package, BART warm-started with XBART
- XBCF
- BayesTree R package, original BART implementation
- bartCause R package, pre-made BART-based workflows for causal inference
- stan4bart
- VCBART
- monbart
- mBART
- SequentialBART
- sparseBART
- pymc-bart
- semibart
- CSP-BART
- AMBARTI
- MOTR-BART
- bcfbma
- bartBMAnew
- BART-BMA (superseded by bartBMAnew)
- gpbart
- GPBART
- bartpy
- BayesTreePrior
- BayesTree.jl
- longbet