Optimizing compiler for evaluating mathematical expressions on CPUs and GPUs.


PyTensor is a Python library that allows one to define, optimize, and efficiently evaluate mathematical expressions involving multi-dimensional arrays. It provides the computational backend for PyMC.

Features

  • A hackable, pure-Python codebase
  • Extensible graph framework suitable for rapid development of custom operators and symbolic optimizations
  • Implements an extensible graph transpilation framework that currently provides compilation via C, JAX, and Numba (see the backend sketch after this list)
  • Unlike PyTorch and TensorFlow, PyTensor maintains a static graph that can be modified in place to allow for advanced optimizations
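
The compilation backend can be selected per function via the mode argument of pytensor.function. The following is a minimal sketch that assumes the optional Numba backend is installed; the JAX backend can be selected analogously with mode="JAX".

import pytensor
from pytensor import tensor as pt

x = pt.dscalar("x")
y = pt.dscalar("y")

# Compile the graph with the Numba backend instead of the default
# C backend (requires the numba package to be installed)
f = pytensor.function([x, y], x * y, mode="NUMBA")

assert f(2.0, 3.0) == 6.0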

Getting started

import pytensor
from pytensor import tensor as pt

# Declare two symbolic floating-point scalars
a = pt.dscalar("a")
b = pt.dscalar("b")

# Create a simple example expression
c = a + b

# Convert the expression into a callable object that takes `(a, b)`
# values as input and computes the value of `c`.
f_c = pytensor.function([a, b], c)

assert f_c(1.5, 2.5) == 4.0

# Compute the gradient of the example expression with respect to `a`
dc = pytensor.grad(c, a)

f_dc = pytensor.function([a, b], dc)

assert f_dc(1.5, 2.5) == 1.0
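
# `pytensor.grad` also accepts a list of variables and returns one
# gradient expression per variable (illustrative sketch)
dc_a, dc_b = pytensor.grad(c, [a, b])
f_dc_ab = pytensor.function([a, b], [dc_a, dc_b])

# Both partial derivatives of `a + b` are 1.0
assert all(g == 1.0 for g in f_dc_ab(1.5, 2.5))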

# Compiling functions with `pytensor.function` also optimizes
# expression graphs by removing unnecessary operations and
# replacing computations with more efficient ones.

v = pt.vector("v")
M = pt.matrix("M")

d = a/a + (M + a).dot(v)

pytensor.dprint(d)
# Add [id A]
#  ├─ ExpandDims{axis=0} [id B]
#  │  └─ True_div [id C]
#  │     ├─ a [id D]
#  │     └─ a [id D]
#  └─ dot [id E]
#     ├─ Add [id F]
#     │  ├─ M [id G]
#     │  └─ ExpandDims{axes=[0, 1]} [id H]
#     │     └─ a [id D]
#     └─ v [id I]

f_d = pytensor.function([a, v, M], d)

# `a/a` has been simplified to `1`, and the dot product is implemented
# with a BLAS routine (CGemv)
pytensor.dprint(f_d)
# Add [id A] 5
#  ├─ [1.] [id B]
#  └─ CGemv{inplace} [id C] 4
#     ├─ AllocEmpty{dtype='float64'} [id D] 3
#     │  └─ Shape_i{0} [id E] 2
#     │     └─ M [id F]
#     ├─ 1.0 [id G]
#     ├─ Add [id H] 1
#     │  ├─ M [id F]
#     │  └─ ExpandDims{axes=[0, 1]} [id I] 0
#     │     └─ a [id J]
#     ├─ v [id K]
#     └─ 0.0 [id L]
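
The compiled function can then be called with concrete NumPy inputs. A minimal usage sketch (the input values below are arbitrary):

import numpy as np

# a = 1.0, v = [1, 1, 1], M = identity, so a/a + (M + a) @ v = 1 + [4, 4, 4]
print(f_d(1.0, np.ones(3), np.eye(3)))
# [5. 5. 5.]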

See the PyTensor documentation for in-depth tutorials.

Installation

The latest release of PyTensor can be installed from PyPI using pip:

pip install pytensor

Or via conda-forge:

conda install -c conda-forge pytensor

The current development branch of PyTensor can be installed from GitHub, also using pip:

pip install git+https://github.com/pymc-devs/pytensor

Background

PyTensor is a fork of Aesara, which is a fork of Theano.

Contributing

We welcome bug reports and fixes, as well as improvements to the documentation.

For more information on contributing, please see the contributing guide.

A good place to start contributing is by looking through the open issues on GitHub.