
arrow-odbc-py


Fill Apache Arrow arrays from ODBC data sources. This package is built on top of the pyarrow Python package and the arrow-odbc Rust crate and enables you to read the data of an ODBC data source as a sequence of Apache Arrow record batches.

  • Fast. Makes efficient use of ODBC bulk reads and writes to lower IO overhead.
  • Flexible. Query any ODBC data source you have a driver for: MySQL, MS SQL, Excel, ...
  • Portable. Easy to install and update dependencies. No binary dependency on specific implementations of the Python interpreter, Arrow, or the ODBC driver manager.

About Arrow

Apache Arrow defines a language-independent columnar memory format for flat and hierarchical data, organized for efficient analytic operations on modern hardware like CPUs and GPUs. The Arrow memory format also supports zero-copy reads for lightning-fast data access without serialization overhead.

About ODBC

ODBC (Open Database Connectivity) is a standard which enables you to access data from a wide variety of data sources using SQL.

Usage

Query

from arrow_odbc import connect

connection_string = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=localhost;"
    "TrustServerCertificate=yes;"
)

connection = connect(
    connection_string=connection_string,
    user="SA",
    password="My@Test@Password",
)
reader = connection.read_arrow_batches(
    query="SELECT * FROM MyTable WHERE a=?",
    parameters=["I'm a positional query parameter"],
)

for batch in reader:
    # Process arrow batches
    df = batch.to_pandas()
    # ...
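
Alternatively, a fresh reader can be drained into a single pyarrow Table instead of being processed batch by batch. A minimal sketch, assuming the reader exposes its pyarrow schema as reader.schema and that the result set fits into memory:

import pyarrow as pa

# Buffer the entire result set in one pyarrow Table. This forgoes the
# streaming advantage of batch-wise reading, so only do this for result
# sets that comfortably fit into memory.
table = pa.Table.from_batches(list(reader), schema=reader.schema)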

Insert

from arrow_odbc import connect
import pyarrow as pa
import pandas


def dataframe_to_table(df):
    table = pa.Table.from_pandas(df)
    reader = pa.RecordBatchReader.from_batches(table.schema, table.to_batches())

    connection = connect(
        connection_string=connection_string,
        user="SA",
        password="My@Test@Password",
    )
    connection.insert_into_table(
        chunk_size=1000,
        table="MyTable",
        reader=reader,
    )
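
A hypothetical invocation of this helper (it assumes connection_string is defined as in the query example above, and that MyTable has columns matching the data frame):

import pandas as pd

# Hypothetical columns; they must match the schema of MyTable.
df = pd.DataFrame({"a": [1, 2, 3], "b": ["one", "two", "three"]})
dataframe_to_table(df)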

Installation

Installing ODBC driver manager

The provided wheels dynamically link against the driver manager, which must be provided by the system.

Windows

Nothing to do. The ODBC driver manager is preinstalled.

Ubuntu

sudo apt-get install unixodbc-dev

macOS

You can use Homebrew to install UnixODBC:

brew install unixodbc

Installing the wheel

This package has been designed to be easily deployable, so it provides a prebuilt manylinux wheel which is independent of the specific version of your Python interpreter and of the specific Arrow version you want to use. It will dynamically link against the ODBC driver manager provided by your system.

Wheels have been uploaded to PyPI and can be installed using pip. The wheels (including the manylinux wheel) will link against your system's ODBC driver manager at runtime. If there is no prebuilt wheel for your platform, you can build the wheel from source; for this, the Rust toolchain must be installed.

pip install arrow-odbc

arrow-odbc utilizes cffi and the Arrow C Data Interface to glue the Rust and Python code together. Therefore the wheel does not need to be built against a precise version of either Python or Arrow.

Installing with conda

conda install -c conda-forge arrow-odbc

Warning: The conda recipe is currently unmaintained, so to install the newest version you need to either build from source or use a wheel deployed via pip.

Building wheel from source

There is no ready-made wheel for the platform you want to target? Do not worry, you can probably build it from source.
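
A minimal sketch of such a build, assuming a Rust toolchain (e.g. via rustup) and uv with maturin are available; the repository is the upstream project:

git clone https://github.com/pacman82/arrow-odbc-py
cd arrow-odbc-py
uv run maturin build --release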

Encodings for SQL statement text

ODBC applications use either narrow or wide encodings. The narrow encoding is either UTF-8 or an extended ASCII and is supposed to be governed by the system locale; the wide encoding is always UTF-16. arrow-odbc-py chooses the wide encoding on Windows and the narrow one on all other platforms (e.g. Linux, macOS). UTF-8 is the default locale on many of these systems, and the wide code paths are typically less battle-tested in Linux and macOS drivers. Most Windows systems, on the other hand, do not yet have a UTF-8 locale active by default. Overall, the guess is that sticking to UTF-16 on Windows and hoping for a UTF-8 locale and driver support on the other platforms results in the fewest problems on average.

Your mileage may vary, though. Please note that the encoding for the parameters and results of your queries can be controlled at runtime with the payload_text_encoding parameter of Connection.read_arrow_batches.
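
For example, forcing UTF-16 payloads for a single query might look like the sketch below. Note that the TextEncoding import and its member name are assumptions, not verified against the API; consult the API documentation for the exact spelling:

from arrow_odbc import TextEncoding  # assumed import; check the API docs

# Force UTF-16 transfer of text payloads for this query, e.g. to work
# around a driver whose narrow text is not UTF-8.
reader = connection.read_arrow_batches(
    query="SELECT * FROM MyTable",
    payload_text_encoding=TextEncoding.UTF16,  # assumed member name
)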

The encoding used for the statement text itself, e.g. for column names, is controlled at compile time, though. With the wheels deployed to PyPI you will always get the wide encoding on Windows and the narrow encoding on the other platforms. If this does not work for you, you can build the wheel yourself with a different encoding: once you can build the wheel from source as described above, you can also change the compile-time feature flags.

E.g. to build the wheel with the wide encoding use:

uv run maturin build --features wide

or, to use the narrow encoding on Windows:

uv run maturin build --features narrow

Matching of ODBC to Arrow types when querying

ODBC                      Arrow
------------------------  --------------------
Numeric(p <= 38)          Decimal128
Decimal(p <= 38, s >= 0)  Decimal128
Integer                   Int32
SmallInt                  Int16
Real                      Float32
Float(p <= 24)            Float32
Double                    Float64
Float(p > 24)             Float64
Date                      Date32
LongVarbinary             Binary
Time(p = 0)               Time32Second
Time(p = 1..3)            Time32Millisecond
Time(p = 4..6)            Time64Microsecond
Time(p = 7..9)            Time64Nanosecond
Timestamp(p = 0)          TimestampSecond
Timestamp(p = 1..3)       TimestampMilliSecond
Timestamp(p = 4..6)       TimestampMicroSecond
Timestamp(p >= 7)         TimestampNanoSecond
BigInt                    Int64
TinyInt Signed            Int8
TinyInt Unsigned          UInt8
Bit                       Boolean
Varbinary                 Binary
Binary                    FixedSizedBinary
All others                Utf8
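
To see which Arrow types were chosen for a concrete query, the schema of a reader can be inspected before consuming it. A small sketch, reusing the connection from the query example (the schema in the comment is illustrative):

reader = connection.read_arrow_batches(query="SELECT * FROM MyTable")
# Prints the pyarrow schema derived from the mapping above,
# e.g. "a: int32" for an Integer column.
print(reader.schema)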

Matching of Arrow to ODBC types when inserting

Arrow                  ODBC
---------------------  ------------------
Utf8                   VarChar
Decimal128(p, s = 0)   VarChar(p + 1)
Decimal128(p, s != 0)  VarChar(p + 2)
Decimal128(p, s < 0)   VarChar(p - s + 1)
Decimal256(p, s = 0)   VarChar(p + 1)
Decimal256(p, s != 0)  VarChar(p + 2)
Decimal256(p, s < 0)   VarChar(p - s + 1)
Int8                   TinyInt
Int16                  SmallInt
Int32                  Integer
Int64                  BigInt
Float16                Real
Float32                Real
Float64                Double
Timestamp s            Timestamp(7)
Timestamp ms           Timestamp(7)
Timestamp us           Timestamp(7)
Timestamp ns           Timestamp(7)
Timestamp with Tz s    VarChar(25)
Timestamp with Tz ms   VarChar(29)
Timestamp with Tz us   VarChar(32)
Timestamp with Tz ns   VarChar(35)
Date32                 Date
Date64                 Date
Time32 s               Time
Time32 ms              VarChar(12)
Time64 us              VarChar(15)
Time64 ns              VarChar(16)
Binary                 Varbinary
FixedBinary(l)         Varbinary(l)
All others             Unsupported
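
The extra characters in the VarChar sizes for decimals leave room for a sign and a decimal separator; a Decimal128(10, 2) column, for instance, travels as VarChar(12). A minimal sketch of such an insert (hypothetical table and column names, assuming a connection as created in the examples above):

from decimal import Decimal

import pyarrow as pa

schema = pa.schema([("price", pa.decimal128(10, 2))])
batch = pa.record_batch(
    [pa.array([Decimal("19.99"), Decimal("-5.00")], type=pa.decimal128(10, 2))],
    schema=schema,
)
reader = pa.RecordBatchReader.from_batches(schema, [batch])
# Per the table above, the values travel as VarChar(12): 10 digits of
# precision plus one character each for the sign and the decimal point.
connection.insert_into_table(table="Prices", reader=reader, chunk_size=1000)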

Comparison to other Python ODBC bindings

  • pyodbc - General-purpose ODBC Python bindings. In contrast, arrow-odbc is specifically concerned with bulk reads and writes to Arrow arrays.
  • turbodbc - Complies with the Python Database API Specification 2.0 (PEP 249), which arrow-odbc does not aim to do. Like arrow-odbc, turbodbc's strong point is bulk reads and writes. turbodbc has more system dependencies, which can make it cumbersome to install if not using conda. turbodbc is built against the C++ implementation of Arrow, which implies it is only compatible with a matching version of pyarrow.