What if most of your machine learning model code could be replaced by a single operation? Eins gives you a powerful language to describe array manipulation, making it a one-stop shop for all of your AI needs.
Let's say you want to compute pairwise distances between two batches of points:
```python
from eins import EinsOp
EinsOp('batch n1 d, batch n2 d -> batch n1 n2',
       combine='add', reduce='l2_norm')(pts1, -pts2)
```
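For comparison, here is what that op does spelled out by hand in plain NumPy (the array names and sizes below are invented for illustration): broadcast the two point sets against each other, subtract, and take the L2 norm over the feature axis.

```python
import numpy as np

rng = np.random.default_rng(0)
pts1 = rng.normal(size=(2, 5, 3))  # (batch, n1, d)
pts2 = rng.normal(size=(2, 4, 3))  # (batch, n2, d)

# Broadcast both inputs to (batch, n1, n2, d), subtract elementwise,
# then reduce the d axis with an L2 norm
diff = pts1[:, :, None, :] - pts2[:, None, :, :]
dists = np.sqrt((diff ** 2).sum(axis=-1))  # (batch, n1, n2)
```

Note how the Eins version names every axis, so the broadcasting and the reduction axis are explicit rather than encoded in `None`-indexing tricks.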
If you're more interested in deep learning, here is the patch embedding from a vision transformer. That means breaking up an image into patches and then linearly embedding each patch.
```python
from eins import EinsOp
patchify = EinsOp('b (n_p patch) (n_p patch) c, (patch patch c) emb -> b (n_p n_p) emb')
patches = patchify(images, kernel)
```
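To see what that one line replaces, here is one way to write the same patch embedding by hand in NumPy (shapes chosen arbitrarily for illustration): split each spatial axis into a patch-grid axis and a within-patch axis, regroup them, flatten each patch, and multiply by the embedding kernel.

```python
import numpy as np

rng = np.random.default_rng(0)
b, n_p, patch, c, emb = 2, 4, 8, 3, 16
images = rng.normal(size=(b, n_p * patch, n_p * patch, c))
kernel = rng.normal(size=(patch * patch * c, emb))

# Split each spatial axis into (n_p, patch), move the two grid axes
# together, flatten each patch, then linearly embed
x = images.reshape(b, n_p, patch, n_p, patch, c)
x = x.transpose(0, 1, 3, 2, 4, 5).reshape(b, n_p * n_p, patch * patch * c)
patches = x @ kernel  # (b, n_p * n_p, emb)
```

The reshape/transpose/reshape dance is exactly the kind of error-prone bookkeeping the shape signature makes explicit.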
You input the shapes and Eins will figure out what to do.
If you've used einops, then think of Eins as einops with a more ambitious goal: being the only operation you should need for your next deep learning project.
Interested? Check out the tutorial, which walks you through the highlights of Eins with examples of how Eins can make the array operations you know and love more readable, portable, and robust.
To learn more, consult the documentation or the examples.
```shell
pip install eins
```
Eins works with anything that implements the Array API, and Eins explicitly promises to support NumPy, PyTorch, and JAX—including full differentiability. You will need one of those libraries to actually use Eins operations.
- 🧩 A solver that can handle duplicate axes, named constants, and constraints
- 🚀 Compilation and optimization for high performance without sacrificing readability
- Split, concatenate, stack, flatten, transpose, normalize, reduce, broadcast, and more
- Works across frameworks
- A composable set of unified array operations for portable softmax, power normalization, and more
Eins is still in heavy development. Here's a sense of where we're headed.
- Updating indexing syntax to match `eindex`
- Unit array to indicate zero-dimensional tensors
- `...` for batching over dynamic numbers of batch axes
- Specifying intermediate results to control the order of reduction
- Support for `-` and `/`
- Better error reporting
- Ways of visualizing and inspecting the computation graph
- Typo checking in errors about axes
- Multiple outputs, either through arrows or commas
- Layers for popular ML frameworks
- Automatically optimizing the execution of a specific EinsOp for a specific computer and input size
- Completing full support for tensor indexing
- Static typing support
- Tabulating the model FLOPs/memory usage as a function of named axes
- Functionality akin to `pack` and `unpack`
The excellent einops library inspired this project and its syntax. After working on my own extension to handle indexing, I realized that eindex already had a more coherent vision for what indexing can look like, and so much of the indexing syntax in this library borrows from that one.
Any contributions to Eins are welcomed and appreciated! If you're interested in making serious changes or extensions to the syntax of operations, consider reaching out first to make sure we're on the same page. For any code changes, do make sure you're using the project Ruff settings.
Eins is distributed under the terms of the MIT license.