An Open Source Project from the Data to AI Lab at MIT
Metrics for Synthetic Data Generation Projects
- Website: https://sdv.dev
- Documentation: https://sdv.dev/SDV
- Repository: https://github.com/sdv-dev/SDMetrics
- License: MIT
- Development Status: Pre-Alpha
The SDMetrics library provides a set of dataset-agnostic tools for evaluating the quality of a synthetic database by comparing it to the real database that it is modeled after.
It supports multiple data modalities:
- Single Columns: Compare 1-dimensional `numpy` arrays representing individual columns (see the sketch after this list).
- Column Pairs: Compare how columns in a `pandas.DataFrame` relate to each other, in groups of 2.
- Single Table: Compare an entire table, represented as a `pandas.DataFrame`.
- Multi Table: Compare multi-table and relational datasets represented as a python `dict` with multiple tables passed as `pandas.DataFrame`s.
- Time Series: Compare tables representing ordered sequences of events.
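As a quick illustration of the single-column modality, the sketch below compares two 1-dimensional arrays with a statistical metric. It is a minimal sketch, assuming that `KSTest` is importable from `sdmetrics.single_column` and that, like the metrics used later in this README, it exposes a `compute(real, synthetic)` classmethod returning a single score.

```python
import numpy as np

from sdmetrics.single_column import KSTest  # assumed import path

# Toy 1-dimensional columns standing in for a real column and its
# synthetic counterpart.
real_column = np.random.normal(loc=0.0, scale=1.0, size=1000)
synthetic_column = np.random.normal(loc=0.1, scale=1.1, size=1000)

# `compute` returns a score between 0 and 1, where 1 means the two
# distributions are indistinguishable by the KS test.
score = KSTest.compute(real_column, synthetic_column)
print(score)
```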
It includes a variety of metrics such as:
- Statistical metrics which use statistical tests to compare the distributions of the real and synthetic data.
- Detection metrics which use machine learning to try to distinguish between real and synthetic data (see the sketch after this list).
- Efficacy metrics which compare the performance of machine learning models when run on the synthetic and real data.
- Bayesian Network and Gaussian Mixture metrics which learn the distribution of the real data and evaluate the likelihood of the synthetic data belonging to the learned distribution.
- Privacy metrics which evaluate whether the synthetic data is leaking information about the real data.
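For instance, a detection metric can be run on a single table as in the sketch below. This is a minimal sketch, assuming that `LogisticDetection` is importable from `sdmetrics.single_table` and that the demo data contains a table named `users`; adjust both to match your installed version.

```python
import sdmetrics
from sdmetrics.single_table import LogisticDetection  # assumed import path

# Load the same demo data used in the example further below.
real_data, synthetic_data, metadata = sdmetrics.load_demo()

# A detection metric trains a classifier (here, a logistic regression) to
# separate real rows from synthetic rows. A score close to 1 means the
# classifier fails to tell them apart, i.e. the synthetic data looks real.
score = LogisticDetection.compute(real_data['users'], synthetic_data['users'])
print(score)
```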
SDMetrics is part of the SDV project and is automatically installed alongside it. For details about this process, please visit the SDV Installation Guide.
Optionally, SDMetrics can also be installed as a standalone library using either of the following commands:

```bash
pip install sdmetrics
```

```bash
conda install -c sdv-dev -c conda-forge -c pytorch sdmetrics
```
For more installation options, please visit the SDMetrics Installation Guide.
SDMetrics is included as part of the framework offered by SDV to evaluate the quality of your synthetic dataset. For more details about how to use it, please visit the corresponding User Guide.
SDMetrics can also be used as a standalone library to run metrics individually.
In this short example we show how to use it to evaluate a toy multi-table dataset and its synthetic replica by running all the compatible multi-table metrics on it:
```python
import sdmetrics

# Load the demo data, which includes:
# - A dict containing the real tables as pandas.DataFrames.
# - A dict containing the synthetic clones of the real data.
# - A dict containing metadata about the tables.
real_data, synthetic_data, metadata = sdmetrics.load_demo()

# Obtain the list of multi table metrics, which is returned as a dict
# containing the metric names and the corresponding metric classes.
metrics = sdmetrics.multi_table.MultiTableMetric.get_subclasses()

# Run all the compatible metrics and get a report
sdmetrics.compute_metrics(metrics, real_data, synthetic_data, metadata=metadata)
```
The output will be a table with all the details about the executed metrics and their scores:
| metric | name | score | min_value | max_value | goal |
| --- | --- | --- | --- | --- | --- |
| KSTest | Inverted Kolmogorov-Smirnov D statistic | 0.75 | 0 | 1 | MAXIMIZE |
| KSTestExtended | Inverted Kolmogorov-Smirnov D statistic | 0.777778 | 0 | 1 | MAXIMIZE |
| BNLogLikelihood | BayesianNetwork Log Likelihood | nan | -inf | 0 | MAXIMIZE |
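If you only need one of these scores, each metric class can also be run on its own. This is a minimal sketch assuming that, as in the report above, `KSTestExtended` is a multi-table metric and accepts the same dict-of-DataFrames inputs as `compute_metrics`.

```python
import sdmetrics

real_data, synthetic_data, metadata = sdmetrics.load_demo()

# Run a single multi-table metric directly instead of building the full report.
score = sdmetrics.multi_table.KSTestExtended.compute(real_data, synthetic_data)
print(score)  # should match the KSTestExtended row in the report above
```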
If you want to read more about each individual metric, please visit the following folders:
- Single Column Metrics: sdmetrics/single_column
- Single Table Metrics: sdmetrics/single_table
- Multi Table Metrics: sdmetrics/multi_table
- Time Series Metrics: sdmetrics/timeseries
The Synthetic Data Vault
This repository is part of The Synthetic Data Vault Project.