| Field | Value |
|---|---|
| Name | einx |
| Version | 0.3.0 |
| home_page | https://github.com/fferflo/einx |
| Summary | Universal Tensor Operations in Einstein-Inspired Notation for Python |
| upload_time | 2024-06-11 13:49:37 |
| maintainer | None |
| docs_url | None |
| author | Florian Fervers |
| requires_python | >=3.8 |
| license | MIT |
| keywords | None |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# *einx* - Universal Tensor Operations in Einstein-Inspired Notation
[![pytest](https://github.com/fferflo/einx/actions/workflows/run_pytest.yml/badge.svg)](https://github.com/fferflo/einx/actions/workflows/run_pytest.yml)
[![Documentation](https://img.shields.io/badge/documentation-link-blue.svg)](https://einx.readthedocs.io)
[![PyPI version](https://badge.fury.io/py/einx.svg)](https://badge.fury.io/py/einx)
[![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/release/python-380/)
einx is a Python library that provides a universal interface for formulating tensor operations in frameworks such as NumPy, PyTorch, JAX, and TensorFlow. The design is based on the following principles:
1. **Provide a set of elementary tensor operations** following Numpy-like naming: `einx.{sum|max|where|add|dot|flip|get_at|...}`
2. **Use einx notation to express vectorization of the elementary operations.** einx notation is inspired by [einops](https://github.com/arogozhnikov/einops), but introduces several novel concepts such as `[]`-bracket notation and full composability that allow using it as a universal language for tensor operations.
einx integrates seamlessly with existing code. All operations are [just-in-time compiled](https://einx.readthedocs.io/en/latest/more/jit.html) into regular Python functions using Python's [exec()](https://docs.python.org/3/library/functions.html#exec) and invoke operations from the respective framework.
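As a minimal illustration of both points, the following sketch (assuming NumPy is installed alongside einx) checks that a bracketed axis compiles down to a plain backend reduction:

```python
# Minimal sketch, assuming NumPy as the backend.
import numpy as np
import einx

x = np.arange(12).reshape(3, 4)

# Brackets mark the axis the elementary operation is applied to:
y1 = einx.sum("a [b]", x)  # sum-reduction along the second axis
y2 = np.sum(x, axis=1)     # the plain NumPy call einx forwards to
assert np.array_equal(y1, y2)
```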
**Getting started:**
* [Tutorial](https://einx.readthedocs.io/en/latest/gettingstarted/tutorial_overview.html)
* [Example: GPT-2 with einx](https://einx.readthedocs.io/en/latest/gettingstarted/gpt2.html)
* [How is einx different from einops?](https://einx.readthedocs.io/en/latest/faq/einops.html)
* [How is einx notation universal?](https://einx.readthedocs.io/en/latest/faq/universal.html)
* [API reference](https://einx.readthedocs.io/en/latest/api.html)
## Installation
```bash
pip install einx
```
See [Installation](https://einx.readthedocs.io/en/latest/gettingstarted/installation.html) for more information.
## What does einx look like?
#### Tensor manipulation
```python
import einx
x = {np.asarray|torch.as_tensor|jnp.asarray|...}(...) # Create some tensor
einx.sum("a [b]", x) # Sum-reduction along second axis
einx.flip("... (g [c])", x, c=2) # Flip pairs of values along the last axis
einx.mean("b [s...] c", x) # Spatial mean-pooling
einx.sum("b (s [s2])... c", x, s2=2) # Sum-pooling with kernel_size=stride=2
einx.add("a, b -> a b", x, y) # Outer sum
einx.get_at("b [h w] c, b i [2] -> b i c", x, y) # Gather values at coordinates
einx.rearrange("b (q + k) -> b q, b k", x, q=2) # Split
einx.rearrange("b c, 1 -> b (c + 1)", x, [42]) # Append number to each channel
# Apply custom operations:
einx.vmap("b [s...] c -> b c", x, op=np.mean) # Spatial mean-pooling
einx.vmap("a [b], [b] c -> a c", x, y, op=np.dot) # Matmul
```
All einx functions simply forward computation to the respective backend, e.g. by internally calling `np.reshape`, `np.transpose`, `np.sum` with the appropriate arguments.
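As a hedged illustration of this forwarding (NumPy assumed), the spatial mean-pooling above is equivalent to a single reduction over the spatial axes:

```python
# Sketch: einx.mean forwards to a single backend reduction (NumPy assumed).
import numpy as np
import einx

x = np.random.rand(2, 4, 4, 3)        # batch, spatial, spatial, channels
pooled = einx.mean("b [s...] c", x)   # spatial mean-pooling from above
assert pooled.shape == (2, 3)
assert np.allclose(pooled, np.mean(x, axis=(1, 2)))
```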
#### Common neural network operations
```python
# Layer normalization
mean = einx.mean("b... [c]", x, keepdims=True)
var = einx.var("b... [c]", x, keepdims=True)
x = (x - mean) * torch.rsqrt(var + epsilon)
# Prepend class token
einx.rearrange("b s... c, c -> b (1 + (s...)) c", x, cls_token)
# Multi-head attention
attn = einx.dot("b q (h c), b k (h c) -> b q k h", q, k, h=8)
attn = einx.softmax("b q [k] h", attn)
x = einx.dot("b q k h, b k (h c) -> b q (h c)", attn, v)
# Matmul in linear layers
einx.dot("b... [c1->c2]", x, w) # - Regular
einx.dot("b... (g [c1->c2])", x, w) # - Grouped: Same weights per group
einx.dot("b... ([g c1->g c2])", x, w) # - Grouped: Different weights per group
einx.dot("b [s...->s2] c", x, w) # - Spatial mixing as in MLP-mixer
```
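For a concrete check, the layer-normalization snippet above can be assembled into a small function; this sketch assumes the PyTorch backend and compares against `torch.nn.functional.layer_norm` for illustration:

```python
# Hedged sketch: the layer-norm snippet as a runnable function (PyTorch assumed).
import torch
import einx

def layer_norm(x, epsilon=1e-5):
    mean = einx.mean("b... [c]", x, keepdims=True)
    var = einx.var("b... [c]", x, keepdims=True)
    return (x - mean) * torch.rsqrt(var + epsilon)

x = torch.randn(2, 16, 64)
# Should match PyTorch's affine-free layer norm up to numerical tolerance:
assert torch.allclose(layer_norm(x), torch.nn.functional.layer_norm(x, (64,)), atol=1e-5)
```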
See [Common neural network ops](https://einx.readthedocs.io/en/latest/gettingstarted/commonnnops.html) for more examples.
#### Optional: Deep learning modules
```python
import einx.nn.{torch|flax|haiku|equinox|keras} as einn
batchnorm = einn.Norm("[b...] c", decay_rate=0.9)
layernorm = einn.Norm("b... [c]") # as used in transformers
instancenorm = einn.Norm("b [s...] c")
groupnorm = einn.Norm("b [s...] (g [c])", g=8)
rmsnorm = einn.Norm("b... [c]", mean=False, bias=False)
channel_mix = einn.Linear("b... [c1->c2]", c2=64)
spatial_mix1 = einn.Linear("b [s...->s2] c", s2=64)
spatial_mix2 = einn.Linear("b [s2->s...] c", s=(64, 64))
patch_embed = einn.Linear("b (s [s2->])... [c1->c2]", s2=4, c2=64)
dropout = einn.Dropout("[...]", drop_rate=0.2)
spatial_dropout = einn.Dropout("[b] ... [c]", drop_rate=0.2)
droppath = einn.Dropout("[b] ...", drop_rate=0.2)
```
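For orientation, a hedged usage sketch with the PyTorch backend; to our reading, einn layers create their parameters lazily on the first forward pass, so no input shape is declared up front:

```python
# Hedged usage sketch (PyTorch backend assumed; shapes are illustrative).
import torch
import einx.nn.torch as einn

layernorm = einn.Norm("b... [c]")
channel_mix = einn.Linear("b... [c1->c2]", c2=64)

x = torch.randn(8, 32, 16)
y = channel_mix(layernorm(x))  # parameters are created on this first call
print(y.shape)                 # expected: torch.Size([8, 32, 64])
```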
See `examples/train_{torch|flax|haiku|equinox|keras}.py` for training examples on CIFAR-10, see [GPT-2](https://einx.readthedocs.io/en/latest/gettingstarted/gpt2.html) and [Mamba](https://github.com/fferflo/weightbridge/blob/master/examples/mamba2flax.py) for working language-model implementations using einx, and see [Tutorial: Neural networks](https://einx.readthedocs.io/en/latest/gettingstarted/tutorial_neuralnetworks.html) for more details.
#### Just-in-time compilation
einx traces the backend operations required for a given call into a graph representation and just-in-time compiles them into a regular Python function using Python's [`exec()`](https://docs.python.org/3/library/functions.html#exec). This reduces overhead to a single cache lookup and allows inspecting the generated function. For example:
```python
>>> x = np.zeros((3, 10, 10))
>>> graph = einx.sum("... (g [c])", x, g=2, graph=True)
>>> print(graph)
import numpy as np
def op0(i0):
    x0 = np.reshape(i0, (3, 10, 2, 5))
    x1 = np.sum(x0, axis=3)
    return x1
```
See [Just-in-time compilation](https://einx.readthedocs.io/en/latest/more/jit.html) for more details.
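The same `graph=True` flag applies to other operations as well; a further hedged sketch (the exact generated code may differ between einx versions and backends):

```python
# Hedged sketch: inspecting the function einx generates for a split.
import numpy as np
import einx

x = np.zeros((3, 10))
graph = einx.rearrange("b (q + k) -> b q, b k", x, q=2, graph=True)
print(graph)  # prints the backend calls einx generated for this split
```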