einops

PyPI metadata:

- Name: einops
- Version: 0.8.0
- Summary: A new flavour of deep learning operations
- Author: Alex Rogozhnikov
- License: MIT
- Requires Python: >=3.8
- Keywords: deep learning, einops, machine learning, neural networks, scientific computations, tensor manipulation
- Upload time: 2024-04-28 04:07:48

<!--
<a href='http://arogozhnikov.github.io/images/einops/einops_video.mp4' >
<div align="center">
  <img src="http://arogozhnikov.github.io/images/einops/einops_video.gif" alt="einops package examples" />
  <br>
  <small><a href='http://arogozhnikov.github.io/images/einops/einops_video.mp4'>This video in high quality (mp4)</a></small>
  <br><br>
</div>
</a>
-->

<!-- this link magically rendered as video, unfortunately not in docs -->

https://user-images.githubusercontent.com/6318811/177030658-66f0eb5d-e136-44d8-99c9-86ae298ead5b.mp4




# einops 
[![Run tests](https://github.com/arogozhnikov/einops/actions/workflows/run_tests.yml/badge.svg)](https://github.com/arogozhnikov/einops/actions/workflows/run_tests.yml)
[![PyPI version](https://badge.fury.io/py/einops.svg)](https://badge.fury.io/py/einops)
[![Documentation](https://img.shields.io/badge/documentation-link-blue.svg)](https://einops.rocks/)
![Supported python versions](https://raw.githubusercontent.com/arogozhnikov/einops/master/docs/resources/python_badge.svg)


Flexible and powerful tensor operations for readable and reliable code. <br />
Supports numpy, pytorch, tensorflow, jax, and [others](#supported-frameworks).

## Recent updates:

- 0.7.0: no-hassle `torch.compile`, support of [array api standard](https://data-apis.org/array-api/latest/API_specification/index.html) and more
- 10'000🎉: GitHub reports that more than 10k projects use einops
- einops 0.6.1: paddle backend added
- einops 0.6 introduces [packing and unpacking](https://github.com/arogozhnikov/einops/blob/master/docs/4-pack-and-unpack.ipynb)
- einops 0.5: einsum is now a part of einops
- [Einops paper](https://openreview.net/pdf?id=oapKSVM2bcj) was accepted for oral presentation at ICLR 2022 (yes, it is worth reading).
  Talk recordings are [available](https://iclr.cc/virtual/2022/oral/6603).


<details markdown="1">
<summary>Previous updates</summary>

- flax and oneflow backends added
- torch.jit.script is supported for pytorch layers
- powerful EinMix added to einops. [Einmix tutorial notebook](https://github.com/arogozhnikov/einops/blob/master/docs/3-einmix-layer.ipynb) 
</details>

<!--<div align="center">
  <img src="http://arogozhnikov.github.io/images/einops/einops_logo_350x350.png" 
  alt="einops package logo" width="250" height="250" />
  <br><br>
</div> -->


## Tweets 

> In case you need convincing arguments for setting aside time to learn about einsum and einops...
[Tim Rocktäschel](https://twitter.com/_rockt/status/1230818967205425152)

> Writing better code with PyTorch and einops 👌
[Andrej Karpathy](https://twitter.com/karpathy/status/1290826075916779520)

> Slowly but surely, einops is seeping in to every nook and cranny of my code. If you find yourself shuffling around bazillion dimensional tensors, this might change your life
[Nasim Rahaman](https://twitter.com/nasim_rahaman/status/1216022614755463169)

[More testimonials](https://einops.rocks/pages/testimonials/)

<!--
## Recordings of talk at ICLR 2022

<a href='https://iclr.cc/virtual/2022/oral/6603'>
<img width="922" alt="Screen Shot 2022-07-03 at 1 00 15 AM" src="https://user-images.githubusercontent.com/6318811/177030789-89d349bf-ef75-4af5-a71f-609896d1c8d9.png">
</a>

Watch [a 15-minute talk](https://iclr.cc/virtual/2022/oral/6603) focused on main problems of standard tensor manipulation methods, and how einops improves this process.
-->

## Contents

- [Installation](#Installation)
- [Documentation](https://einops.rocks/)
- [Tutorials](#Tutorials)
- [API micro-reference](#API)
- [Why use einops](#Why-using-einops-notation)
- [Supported frameworks](#Supported-frameworks)
- [Citing](#Citing)
- [Repository](https://github.com/arogozhnikov/einops) and [discussions](https://github.com/arogozhnikov/einops/discussions)

## Installation  <a name="Installation"></a>

Plain and simple:
```bash
pip install einops
```

<!--
`einops` has no mandatory dependencies (code examples also require jupyter, pillow + backends). 
To obtain the latest github version 

```bash
pip install https://github.com/arogozhnikov/einops/archive/master.zip
```
-->

## Tutorials <a name="Tutorials"></a>

Tutorials are the most convenient way to see `einops` in action:

- part 1: [einops fundamentals](https://github.com/arogozhnikov/einops/blob/master/docs/1-einops-basics.ipynb) 
- part 2: [einops for deep learning](https://github.com/arogozhnikov/einops/blob/master/docs/2-einops-for-deep-learning.ipynb)
- part 3: [packing and unpacking](https://github.com/arogozhnikov/einops/blob/master/docs/4-pack-and-unpack.ipynb)
- part 4: [improve pytorch code with einops](http://einops.rocks/pytorch-examples.html)   

Kapil Sachdeva recorded a small [intro to einops](https://www.youtube.com/watch?v=xGy75Pjsqzo).

## API <a name="API"></a>

`einops` has a minimalistic yet powerful API.

Three core operations are provided (the [einops tutorial](https://github.com/arogozhnikov/einops/blob/master/docs/)
shows how they cover stacking, reshape, transposition, squeeze/unsqueeze, repeat, tile, concatenate, view and numerous reductions):

```python
from einops import rearrange, reduce, repeat
# rearrange elements according to the pattern
output_tensor = rearrange(input_tensor, 't b c -> b c t')
# combine rearrangement and reduction
output_tensor = reduce(input_tensor, 'b c (h h2) (w w2) -> b h w c', 'mean', h2=2, w2=2)
# copy along a new axis
output_tensor = repeat(input_tensor, 'h w -> h w c', c=3)
```
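For concreteness, here is a minimal runnable sketch of the same three calls with NumPy; the shapes are arbitrary assumptions, chosen only to make the resulting shapes easy to verify:

```python
import numpy as np
from einops import rearrange, reduce, repeat

x = np.random.rand(10, 32, 64)          # assumed (time, batch, channels)
print(rearrange(x, 't b c -> b c t').shape)      # (32, 64, 10)

images = np.random.rand(8, 3, 30, 40)   # assumed (batch, channels, height, width)
print(reduce(images, 'b c (h h2) (w w2) -> b h w c', 'mean', h2=2, w2=2).shape)  # (8, 15, 20, 3)

gray = np.random.rand(30, 40)           # assumed (height, width)
print(repeat(gray, 'h w -> h w c', c=3).shape)   # (30, 40, 3)
```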

Later additions to the family are `pack` and `unpack` functions (better than stack/split/concatenate):

```python
from einops import pack, unpack
# pack and unpack allow reversibly 'packing' multiple tensors into one.
# Packed tensors may be of different dimensionality:
packed,  ps = pack([class_token_bc, image_tokens_bhwc, text_tokens_btc], 'b * c')
class_emb_bc, image_emb_bhwc, text_emb_btc = unpack(transformer(packed), ps, 'b * c')
```
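The example above uses hypothetical transformer inputs; here is a self-contained NumPy sketch with assumed toy shapes, just to make the packing arithmetic visible:

```python
import numpy as np
from einops import pack, unpack

class_token_bc    = np.zeros((4, 256))          # (batch, channels)
image_tokens_bhwc = np.zeros((4, 8, 8, 256))    # (batch, height, width, channels)
text_tokens_btc   = np.zeros((4, 77, 256))      # (batch, tokens, channels)

# everything except batch and channels is flattened into the '*' axis: 1 + 8*8 + 77 = 142
packed, ps = pack([class_token_bc, image_tokens_bhwc, text_tokens_btc], 'b * c')
print(packed.shape)   # (4, 142, 256)

# 'ps' remembers how to split and reshape the '*' axis back
a, b, c = unpack(packed, ps, 'b * c')
print(a.shape, b.shape, c.shape)   # (4, 256) (4, 8, 8, 256) (4, 77, 256)
```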

Finally, einops provides einsum with support for multi-letter axis names:

```python
from einops import einsum
# einsum is like ... einsum, generic and flexible dot-product 
# but 1) axes can be multi-lettered  2) pattern goes last 3) works with multiple frameworks
C = einsum(A, B, 'b t1 head c, b t2 head c -> b head t1 t2')
```
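With assumed attention-like shapes, the call above works out as follows (a NumPy sketch):

```python
import numpy as np
from einops import einsum

A = np.random.rand(2, 5, 8, 16)   # assumed (batch, t1, heads, channels)
B = np.random.rand(2, 7, 8, 16)   # assumed (batch, t2, heads, channels)

# contract over the shared 'c' axis; 'b' and 'head' behave like batch axes
C = einsum(A, B, 'b t1 head c, b t2 head c -> b head t1 t2')
print(C.shape)   # (2, 8, 5, 7)
```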

### EinMix

`EinMix` is a generic linear layer, perfect for MLP Mixers and similar architectures.
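For a flavour of the API, here is a hedged sketch of a token-mixing layer built with `EinMix` (the axis names and sizes are assumptions; see the EinMix tutorial notebook linked above for the canonical examples):

```python
import torch
from einops.layers.torch import EinMix

# mix information across patches while leaving channels untouched:
# the learned weight has shape (hw, hw_out) and is applied per batch element and per channel
token_mixer = EinMix('b hw c -> b hw_out c', weight_shape='hw hw_out', bias_shape='hw_out',
                     hw=64, hw_out=64)

x = torch.randn(8, 64, 128)       # assumed (batch, patches, channels)
print(token_mixer(x).shape)       # torch.Size([8, 64, 128])
```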

### Layers

Einops provides layers (`einops` keeps a separate version for each framework) that mirror the corresponding functions:

```python
from einops.layers.torch      import Rearrange, Reduce
from einops.layers.tensorflow import Rearrange, Reduce
from einops.layers.flax       import Rearrange, Reduce
from einops.layers.paddle     import Rearrange, Reduce
from einops.layers.chainer    import Rearrange, Reduce
```

<details markdown="1">
<summary>Example of using layers within a pytorch model</summary>
The example is given for pytorch, but the code in other frameworks is almost identical.

```python 
from torch.nn import Sequential, Conv2d, MaxPool2d, Linear, ReLU
from einops.layers.torch import Rearrange

model = Sequential(
    ...,
    Conv2d(6, 16, kernel_size=5),
    MaxPool2d(kernel_size=2),
    # flattening without need to write forward
    Rearrange('b c h w -> b (c h w)'),  
    Linear(16*5*5, 120), 
    ReLU(),
    Linear(120, 10), 
)
```

No more flatten needed! 

Additionally, torch users will benefit from layers, since those are scriptable (`torch.jit.script`) and compilable (`torch.compile`).
</details>




## Naming <a name="Naming"></a>

`einops` stands for Einstein-Inspired Notation for operations 
(though "Einstein operations" is more attractive and easier to remember).

The notation was loosely inspired by Einstein summation (in particular, by the `numpy.einsum` operation).

## Why use `einops` notation?! <a name="Why-using-einops-notation"></a>


### Semantic information (being verbose in expectations)

```python
y = x.view(x.shape[0], -1)
y = rearrange(x, 'b c h w -> b (c h w)')
```
While these two lines do the same job in *some* contexts,
the second one provides information about the input and output.
In other words, `einops` focuses on interface: *what is the input and output*, not *how* the output is computed.

The next operation looks similar:

```python
y = rearrange(x, 'time c h w -> time (c h w)')
```
but it gives the reader a hint: 
this is not an independent batch of images we are processing, 
but rather a sequence (video). 

Semantic information makes the code easier to read and maintain. 

### Convenient checks

Reconsider the same example:

```python
y = x.view(x.shape[0], -1) # x: (batch, 256, 19, 19)
y = rearrange(x, 'b c h w -> b (c h w)')
```
The second line checks that the input has four dimensions,
but you can also specify particular dimensions.
That's better than just writing comments about shapes, since comments don't prevent mistakes, are not tested, and tend to become outdated without code review:
```python
y = x.view(x.shape[0], -1) # x: (batch, 256, 19, 19)
y = rearrange(x, 'b c h w -> b (c h w)', c=256, h=19, w=19)
```
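A small sketch of the check in action (the batch size here is an assumption); a wrong expectation fails loudly instead of silently reshaping:

```python
import numpy as np
from einops import rearrange, EinopsError

x = np.zeros((32, 256, 19, 19))
y = rearrange(x, 'b c h w -> b (c h w)', c=256, h=19, w=19)   # passes: shapes match expectations
print(y.shape)                                                # (32, 92416)

try:
    rearrange(x, 'b c h w -> b (c h w)', c=512)               # wrong assumption about channels
except EinopsError as e:
    print('shape check failed:', e)
```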

### Result is strictly determined

Below we have at least two ways to define the depth-to-space operation:
```python
# depth-to-space
rearrange(x, 'b c (h h2) (w w2) -> b (c h2 w2) h w', h2=2, w2=2)
rearrange(x, 'b c (h h2) (w w2) -> b (h2 w2 c) h w', h2=2, w2=2)
```
There are at least four more ways to do it. Which one does the framework use?

These details are ignored, since it *usually* makes no difference,
but it can make a big difference (e.g. if you use grouped convolutions in the next stage),
and you'd like to make the choice explicit in your code.
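A quick NumPy check, with an assumed tiny tensor, that the two patterns above really are different operations:

```python
import numpy as np
from einops import rearrange

x = np.arange(2 * 4 * 4 * 4).reshape(2, 4, 4, 4)   # assumed small (b, c, h, w) tensor

a = rearrange(x, 'b c (h h2) (w w2) -> b (c h2 w2) h w', h2=2, w2=2)
b = rearrange(x, 'b c (h h2) (w w2) -> b (h2 w2 c) h w', h2=2, w2=2)

print(a.shape, b.shape)       # both (2, 16, 2, 2)
print(np.array_equal(a, b))   # False: same shape, different channel ordering
```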


### Uniformity

```python
reduce(x, 'b c (x dx) -> b c x', 'max', dx=2)
reduce(x, 'b c (x dx) (y dy) -> b c x y', 'max', dx=2, dy=3)
reduce(x, 'b c (x dx) (y dy) (z dz) -> b c x y z', 'max', dx=2, dy=3, dz=4)
```
These examples demonstrate that we don't need separate operations for 1d/2d/3d pooling;
they are all defined in a uniform way.

Space-to-depth and depth-to-space are defined in many frameworks, but how about width-to-height? Here you go:

```python
rearrange(x, 'b c h (w w2) -> b c (h w2) w', w2=2)
```
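A NumPy shape check for the uniform pooling and width-to-height calls above (the input shapes are assumptions):

```python
import numpy as np
from einops import rearrange, reduce

x1 = np.random.rand(8, 3, 10)          # (b, c, x*dx)
x3 = np.random.rand(8, 3, 10, 9, 8)    # (b, c, x*dx, y*dy, z*dz)
print(reduce(x1, 'b c (x dx) -> b c x', 'max', dx=2).shape)   # (8, 3, 5)
print(reduce(x3, 'b c (x dx) (y dy) (z dz) -> b c x y z', 'max', dx=2, dy=3, dz=4).shape)  # (8, 3, 5, 3, 2)

img = np.random.rand(8, 3, 16, 32)     # (b, c, h, w)
print(rearrange(img, 'b c h (w w2) -> b c (h w2) w', w2=2).shape)   # (8, 3, 32, 16)
```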

### Framework independent behavior

Even simple functions are defined differently by different frameworks:

```python
y = x.flatten() # or flatten(x)
```

Suppose `x`'s shape is `(3, 4, 5)`; then `y` has shape ...

- numpy, pytorch, cupy, chainer: `(60,)`
- keras, tensorflow.layers, gluon: `(3, 20)`

`einops` works the same way in all frameworks.
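With einops, the intended flattening is explicit in the pattern, so the same line means the same thing everywhere; a minimal NumPy sketch:

```python
import numpy as np
from einops import rearrange

x = np.zeros((3, 4, 5))
print(rearrange(x, 'a b c -> (a b c)').shape)   # (60,)   full flatten
print(rearrange(x, 'a b c -> a (b c)').shape)   # (3, 20) keep the leading axis
```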

### Independence of framework terminology

Example: `tile` vs `repeat` causes lots of confusion. To copy an image along its width:
```python
np.tile(image, (1, 2))    # in numpy
image.repeat(1, 2)        # pytorch's repeat ~ numpy's tile
```

With einops you don't need to decipher which axis was repeated:
```python
repeat(image, 'h w -> h (tile w)', tile=2)  # in numpy
repeat(image, 'h w -> h (tile w)', tile=2)  # in pytorch
repeat(image, 'h w -> h (tile w)', tile=2)  # in tf
repeat(image, 'h w -> h (tile w)', tile=2)  # in jax
repeat(image, 'h w -> h (tile w)', tile=2)  # in cupy
... (etc.)
```
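And if you ever want both behaviours, the pattern distinguishes them explicitly; a small sketch with an assumed 2×3 image:

```python
import numpy as np
from einops import repeat

image = np.arange(6).reshape(2, 3)
# '(w repeat)': each pixel is repeated in place along the width
print(repeat(image, 'h w -> h (w repeat)', repeat=2))
# '(tile w)': the whole row is tiled along the width
print(repeat(image, 'h w -> h (tile w)', tile=2))
```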

[Testimonials](https://einops.rocks/pages/testimonials/) provide users' perspective on the same question. 

## Supported frameworks <a name="Supported-frameworks"></a>

Einops works with ...

- [numpy](http://www.numpy.org/)
- [pytorch](https://pytorch.org/)
- [tensorflow](https://www.tensorflow.org/)
- [jax](https://github.com/google/jax)
- [cupy](https://cupy.chainer.org/)
- [chainer](https://chainer.org/)
- [tf.keras](https://www.tensorflow.org/guide/keras)
- [flax](https://github.com/google/flax) (experimental)
- [paddle](https://github.com/PaddlePaddle/Paddle) (experimental)
- [oneflow](https://github.com/Oneflow-Inc/oneflow) (community)
- [tinygrad](https://github.com/tinygrad/tinygrad) (community)

Additionally, starting from version 0.7.0, einops can be used with any framework that supports the [Python array API standard](https://data-apis.org/array-api/latest/API_specification/index.html).

## Citing einops <a name="Citing"></a>

Please use the following BibTeX record:

```text
@inproceedings{
    rogozhnikov2022einops,
    title={Einops: Clear and Reliable Tensor Manipulations with Einstein-like Notation},
    author={Alex Rogozhnikov},
    booktitle={International Conference on Learning Representations},
    year={2022},
    url={https://openreview.net/forum?id=oapKSVM2bcj}
}
```


## Supported python versions

`einops` works with python 3.8 or later.

            
