Discover relevant information about categorical data with entity embeddings using Neural Networks (powered by Keras)

categorical-data, embeddings, entity-embedding, keras, machine-learning, neural-networks, pre-processing, utility-library
pip install entity-embeddings-categorical==0.6.7




This project aims to serve as a utility tool for the preprocessing, training, and extraction of entity embeddings through neural networks using the Keras framework. It's still under construction, so please use it carefully.
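Independent of this library, the core idea can be sketched in a few lines of NumPy: an entity embedding replaces an integer-encoded category with a small dense vector looked up from a trainable matrix (all names below are illustrative, not part of this library's API):

```python
import numpy as np

rng = np.random.default_rng(0)
n_categories, embedding_dim = 4, 2        # e.g. 4 store ids mapped to 2-D vectors
weights = rng.normal(size=(n_categories, embedding_dim))  # trainable in a real network

column = np.array([0, 2, 2, 3])           # integer-encoded categorical column
embedded = weights[column]                # embedding lookup: one row per sample
print(embedded.shape)                     # (4, 2)
```

During training, the network adjusts `weights` by backpropagation, so categories that behave similarly with respect to the target end up with nearby vectors.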


Installation is straightforward if you already have virtualenv set up on your machine. If you don't, please refer to the official virtualenv documentation.

pip install entity-embeddings-categorical


Besides the docstrings, further documentation details can be found here.


This project is intended to suit most existing needs, so testability is a major concern. Most of the code is heavily tested, and Travis is used as the Continuous Integration tool to run all unit tests on every new commit.


This utility library can be used in two modes: default and custom. In the default configuration, you can perform the following tasks: Regression, Binary Classification, and Multiclass Classification.

If your task differs from any of these, feel free to use the custom mode, where you can define most of the configuration related to target processing and the output of the neural network.

Default mode

Using the default mode is straightforward: you just need to provide a few parameters to the Config object.

So, to create a simple embedding network that reads from the file sales_last_semester.csv, where the target column is total_sales, the desired output is a binary classification, and the training ratio is 0.9, our Python script would look like this:

    from entity_embeddings import Config, Embedder, TargetType

    config = Config.make_default_config(csv_path='sales_last_semester.csv',
                                        target_name='total_sales',
                                        target_type=TargetType.BINARY_CLASSIFICATION,
                                        train_ratio=0.9)

    embedder = Embedder(config)
    embedder.perform_embedding()

Pretty simple, huh?

A working example of default mode can be found here as a Python script.

Custom mode

If you intend to customize the output of the neural network, or the way the target variables are processed, you need to specify this when creating the configuration object. This is done by creating classes that extend TargetProcessor and ModelAssembler.
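To illustrate the subclassing pattern, here is a hedged sketch of a custom target processor. TargetProcessor is the library's real base class, but its exact interface is not shown here, so the stand-in below and the `process_target` method name are assumptions for illustration only:

```python
class TargetProcessor:
    """Stand-in for the library's base class (assumed interface)."""
    def process_target(self, y):
        raise NotImplementedError


class LabelIndexProcessor(TargetProcessor):
    """Custom processor: map raw string labels to integer class indices."""
    def process_target(self, y):
        classes = sorted(set(y))
        index = {label: i for i, label in enumerate(classes)}
        return [index[label] for label in y]


processor = LabelIndexProcessor()
print(processor.process_target(['no', 'yes', 'no']))  # -> [0, 1, 0]
```

A ModelAssembler subclass would follow the same pattern, overriding the hooks that build and compile the network's output layers.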

A working example of custom configuration mode can be found here.


Once you are done training your model, you can use the visualization_utils module to create visualizations of the generated weights as well as the accuracy of your model.

Below are some examples created for the Rossmann dataset:

Weights for store id embedding
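Beyond the library's own plots (whose API is not assumed here), a common way to inspect learned weights yourself is to compare the cosine similarity between category vectors; a small sketch with hypothetical weights:

```python
import numpy as np

# hypothetical learned embedding weights: one row per store id
weights = np.array([[0.1, 0.9],
                    [0.2, 0.8],
                    [0.9, 0.1]])

# cosine similarity between all pairs of category embeddings
norms = np.linalg.norm(weights, axis=1, keepdims=True)
unit = weights / norms
similarity = unit @ unit.T
print(similarity.round(2))
```

Rows with high similarity correspond to categories the network learned to treat alike, which is often the most interpretable output of the whole process.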


In case of any issue with the project, or for further questions, do not hesitate to open an issue here on GitHub.


Contributions are really welcome, so feel free to open a pull request :-)


  • Allow using a Pandas DataFrame instead of the csv file path;