When writing ConnectorDB, we noticed that our transform and query engine would be useful as a plugin and as a standalone data analysis tool. We therefore decided to extract our transform language, called 'PipeScript', and our dataset and query generation capabilities into this repository.
PipeScript can be used as a standalone executable (`pipes`) for quick data filtering and analysis. It can also be plugged into databases as a query and filtering language (as is done in ConnectorDB).
SQL is the standard query language for databases. Unfortunately, SQL is not designed for use with large time series, where the dataset might be enormous, and several disparate series need to be combined into datasets for use in ML applications.
First, when operating on time series we can make many assumptions, such as timestamps being ordered. This allows many operations to be implemented as streaming transformations rather than large aggregations. PipeScript can therefore analyze enormous files with very little memory, since (almost) all computation is performed locally on the stream.
Second, time series naturally arrive as separate streams, with each sensor's datapoints carrying their own timestamps: your phone reports usage data independently of your laptop. Nevertheless, the real power of datasets comes from combining many different streams, all with differing timestamps, into a coherent (and tabular) whole. This requires more manipulation of the data than a simple JOIN, but can be done very efficiently on streaming ordered time series.
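One common way to line up two ordered streams with differing timestamps is last-value-carried-forward interpolation: for each timestamp in a reference stream, take the most recent datapoint from the other stream. The sketch below (illustrative only; the names `interpolate`, `ref`, and `other` are not from PipeScript) shows how ordering makes this a single forward pass rather than a full join:

```go
package main

import "fmt"

// Datapoint is a single timestamped value.
type Datapoint struct {
	T float64
	V float64
}

// interpolate emits, for each reference timestamp, the value of the
// most recent datapoint in `other` with timestamp <= t.
// Both inputs must be ordered by timestamp, so a single index `i`
// sweeps forward once -- no buffering of the whole series.
func interpolate(ref []float64, other []Datapoint) []float64 {
	out := make([]float64, len(ref))
	i := 0
	last := 0.0 // assumption: default value before the first datapoint
	for j, t := range ref {
		for i < len(other) && other[i].T <= t {
			last = other[i].V
			i++
		}
		out[j] = last
	}
	return out
}

func main() {
	phone := []Datapoint{{1, 10}, {4, 20}, {6, 30}}
	laptopTimes := []float64{2, 5, 7}
	fmt.Println(interpolate(laptopTimes, phone)) // prints [10 20 30]
}
```

Extending the same sweep to many streams at once yields a tabular dataset with one row per reference timestamp, which is the shape ML tooling expects.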
PipeScript is a basic transform language with interpolation/dataset-generation machinery that operates entirely on streaming (arbitrarily sized) data. It is the main query engine used in ConnectorDB.
We think that its simplicity and utility for ML applications make it a great fit for data analysis.

TL;DR: PipeScript is a small-scale Spark.
```
go get github.com/connectordb/pipescript
go test github.com/connectordb/pipescript/...
go install github.com/connectordb/pipescript/pipes
```
The standalone can be used to run PipeScript queries on datapoint, json, and csv formatted data. The following will count the number of datapoints in a csv file:

```
pipes run -i myfile.csv -ifmt csv --notimestamp "count | if last"
```
Remember that you need to escape the identity transform (`$`) in bash (`\$`).
If you want to perform SQL queries on files, please look at `q`. Unfortunately, advanced SQL operations require keeping the entire dataset in memory, something PipeScript tries to avoid.