schemaflow

A package to write schema-aware data pipelines


License: MIT
SchemaFlow

SchemaFlow is a package for writing data pipelines for data science systematically in Python. Thanks for checking it out.

Check out the very comprehensive documentation here.

The problem that this package solves

A major challenge in building a robust data pipeline is guaranteeing interoperability between pipes: how do we guarantee that a pipe someone wrote is compatible with other people's pipes without running the whole pipeline multiple times until we get it right?

The solution that this package adopts

This package declares an API to define a stateful data transformation that lets the developer declare what comes in, what comes out, and which states are modified by each pipe, and therefore by the whole pipeline. Check out tests/test_pipeline.py or examples/end_to_end_kaggle.py.
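To illustrate the idea in plain Python (this is a minimal sketch of the concept, not SchemaFlow's actual API; the names `Pipe`, `check`, and `check_pipeline` below are hypothetical), each pipe declares the keys and types it requires and produces, so a pipeline's schema can be validated before any data is processed:

```python
class Pipe:
    requires = {}   # key -> type the incoming data must contain
    produces = {}   # key -> type this pipe adds to the data

    def check(self, schema):
        # Compare the declared requirements against the propagated schema.
        errors = [f"missing '{k}' of type {t.__name__}"
                  for k, t in self.requires.items()
                  if schema.get(k) is not t]
        return errors, {**schema, **self.produces}

    def transform(self, data):
        return data


class LoadPrices(Pipe):
    produces = {"prices": list}

    def transform(self, data):
        return {**data, "prices": [1.0, 2.0, 3.0]}


class MeanPrice(Pipe):
    requires = {"prices": list}
    produces = {"mean_price": float}

    def transform(self, data):
        prices = data["prices"]
        return {**data, "mean_price": sum(prices) / len(prices)}


def check_pipeline(pipes, schema=None):
    """Propagate the declared schema through the pipeline without running it."""
    schema = dict(schema or {})
    all_errors = []
    for pipe in pipes:
        errors, schema = pipe.check(schema)
        all_errors += [f"{type(pipe).__name__}: {e}" for e in errors]
    return all_errors


# A consistent pipeline type-checks without touching any data:
assert check_pipeline([LoadPrices(), MeanPrice()]) == []
# An incompatible ordering is caught immediately, before any (slow) run:
assert check_pipeline([MeanPrice(), LoadPrices()]) == [
    "MeanPrice: missing 'prices' of type list"]
```

This is the property the package gives you: incompatibilities between pipes surface as schema errors up front, instead of as runtime failures deep inside a long pipeline run.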

Install

pip install schemaflow

or install the latest version from source (recommended for now):

git clone https://github.com/jorgecarleitao/schemaflow
cd schemaflow && pip install -e .

Run examples

We provide one example that demonstrates the usage of SchemaFlow's API by developing an end-to-end pipeline applied to one of Kaggle's exercises.

To run it, download the data from that exercise to examples/all/ and run

pip install -r examples/requirements.txt
python examples/end_to_end_kaggle.py

You should see some prints to the console, as well as three files generated in examples/: two plots and one submission.txt.

Run tests

pip install -r tests/requirements.txt
python -m unittest discover

Build documentation

pip install -r docs/requirements.txt
cd docs && make html && cd ..
open docs/build/html/index.html