
Categorical Encoding Methods


A set of scikit-learn-style transformers for encoding categorical variables into numeric by means of different techniques.


Encoding Methods

Unsupervised:

  • Backward Difference Contrast [2][3]
  • BaseN [6]
  • Binary [5]
  • Gray [14]
  • Count [10]
  • Hashing [1]
  • Helmert Contrast [2][3]
  • Ordinal [2][3]
  • One-Hot [2][3]
  • Rank Hot [15]
  • Polynomial Contrast [2][3]
  • Sum Contrast [2][3]

Supervised:

  • CatBoost [11]
  • Generalized Linear Mixed Model [12]
  • James-Stein Estimator [9]
  • LeaveOneOut [4]
  • M-estimator [7]
  • Target Encoding [7]
  • Weight of Evidence [8]
  • Quantile Encoder [13]
  • Summary Encoder [13]


Installation

The package requires: numpy, statsmodels, and scipy.

To install the package, execute:

$ python setup.py install

or

pip install category_encoders

or

conda install -c conda-forge category_encoders

To install the development version, you may use:

pip install --upgrade git+


Usage

All of the encoders are fully compatible sklearn transformers, so they can be used in pipelines or in your existing scripts. Supported input formats include numpy arrays and pandas dataframes. If the cols parameter isn't passed, all columns with an object or pandas categorical data type will be encoded. Please see the docs for transformer-specific configuration options.
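
For instance, here is a minimal sketch of pipeline usage; the toy data and column names are made up for illustration:

import pandas as pd
import category_encoders as ce
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline

# hypothetical toy data: one object-typed column and one numeric column
X = pd.DataFrame({'color': ['red', 'blue', 'green', 'red'], 'size': [1, 2, 3, 4]})
y = [1.0, 2.0, 3.0, 1.5]

# cols is not passed, so the encoder picks up the object-typed 'color'
# column and leaves the numeric 'size' column untouched
pipe = Pipeline([('encoder', ce.OrdinalEncoder()), ('model', LinearRegression())])
pipe.fit(X, y)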


Examples

There are two types of encoders: unsupervised and supervised. An unsupervised example:

from category_encoders import *
import pandas as pd
from sklearn.datasets import load_boston

# prepare some data
bunch = load_boston()
y = bunch.target
X = pd.DataFrame(bunch.data, columns=bunch.feature_names)

# use binary encoding to encode two categorical features
enc = BinaryEncoder(cols=['CHAS', 'RAD']).fit(X)

# transform the dataset
numeric_dataset = enc.transform(X)

And a supervised example:

from category_encoders import *
import pandas as pd
from sklearn.datasets import load_boston

# prepare some data
bunch = load_boston()
y_train = bunch.target[0:250]
y_test = bunch.target[250:506]
X_train = pd.DataFrame(bunch.data[0:250], columns=bunch.feature_names)
X_test = pd.DataFrame(bunch.data[250:506], columns=bunch.feature_names)

# use target encoding to encode two categorical features
enc = TargetEncoder(cols=['CHAS', 'RAD'])

# transform the datasets
training_numeric_dataset = enc.fit_transform(X_train, y_train)
testing_numeric_dataset = enc.transform(X_test)

For the transformation of the training data with the supervised methods, you should use the fit_transform() method instead of fit().transform(), because these two methods do not have to generate the same result. The difference can be observed with the LeaveOneOut encoder, which performs nested cross-validation on the training data in fit_transform() (to decrease over-fitting of the downstream model) but uses all of the training data for scoring in transform() (to get estimates that are as accurate as possible).
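
A minimal sketch of that difference with the LeaveOneOut encoder, using made-up toy data:

import pandas as pd
import category_encoders as ce

X = pd.DataFrame({'cat': ['a', 'a', 'b', 'b']})
y = pd.Series([1.0, 0.0, 1.0, 0.0])

enc = ce.LeaveOneOutEncoder(cols=['cat'])

# fit_transform excludes each row's own target from its category mean,
# so each row of 'a' is encoded with the mean of the *other* 'a' targets
train_view = enc.fit_transform(X, y)  # encodes to [0.0, 1.0, 0.0, 1.0]

# transform uses statistics from all of the training data, so every row
# of a category gets the plain category mean
scoring_view = enc.transform(X)       # encodes to [0.5, 0.5, 0.5, 0.5]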

Furthermore, you may benefit from the following wrappers:

  • PolynomialWrapper, which extends supervised encoders to support polynomial targets (see the sketch after this list)
  • NestedCVWrapper, which helps to prevent overfitting
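
A minimal sketch of PolynomialWrapper around TargetEncoder, with a made-up multiclass target:

import pandas as pd
import category_encoders as ce
from category_encoders.wrapper import PolynomialWrapper

X = pd.DataFrame({'cat': ['a', 'b', 'a', 'b', 'c', 'c']})
y = pd.Series(['low', 'mid', 'high', 'low', 'mid', 'high'])

# TargetEncoder alone expects a binary or continuous target; the wrapper
# turns the multiclass y into one-vs-rest binary targets and fits a
# separate TargetEncoder for each of them
wrapped = PolynomialWrapper(ce.TargetEncoder(cols=['cat']))
X_encoded = wrapped.fit_transform(X, y)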

Additional examples and benchmarks can be found in the examples directory.


Contributing

Category encoders is under active development. If you'd like to be involved, we'd love to have you. Check out the CONTRIBUTING.md file or open an issue on the GitHub project to get started.


References

  1. Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, Josh Attenberg (2009). Feature Hashing for Large Scale Multitask Learning. Proc. ICML.
  2. Contrast Coding Systems for Categorical Variables. UCLA: Statistical Consulting Group.
  3. Gregory Carey (2003). Coding Categorical Variables.
  4. Owen Zhang. Leave One Out Encoding.
  5. Beyond One-Hot: An Exploration of Categorical Variables.
  6. BaseN Encoding and Grid Search in Categorical Variables.
  7. Daniele Micci-Barreca (2001). A Preprocessing Scheme for High-Cardinality Categorical Attributes in Classification and Prediction Problems. SIGKDD Explor. Newsl. 3, 1.
  8. Weight of Evidence (WOE) and Information Value Explained.
  9. Empirical Bayes for Multiple Sample Sizes.
  10. Simple Count or Frequency Encoding.
  11. Transforming Categorical Features to Numerical Features.
  12. Andrew Gelman and Jennifer Hill (2006). Data Analysis Using Regression and Multilevel/Hierarchical Models.
  13. Carlos Mougan, David Masip, Jordi Nin and Oriol Pujol (2021). Quantile Encoder: Tackling High Cardinality Categorical Features in Regression Problems. Modeling Decisions for Artificial Intelligence, 2021. Springer International Publishing.
  14. Gray Encoding.
  15. Jacob Buckman, Aurko Roy, Colin Raffel, Ian Goodfellow. Thermometer Encoding: One Hot Way To Resist Adversarial Examples.
  16. Carlos Mougan, Jose Alvarez, Salvatore Ruggieri, and Steffen Staab (2023). Fairness Implications of Encoding Protected Categorical Attributes. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (AIES '23).