# Φ<sub>ML</sub>
[🌐 **Homepage**](https://github.com/tum-pbs/PhiML)
• [📖 **Documentation**](https://tum-pbs.github.io/PhiML/)
• [🔗 **API**](https://tum-pbs.github.io/PhiML/phiml)
• [**▶ Videos**]()
• [<img src="https://www.tensorflow.org/images/colab_logo_32px.png" height=16>](https://colab.research.google.com/github/tum-pbs/PhiML/blob/main/docs/Examples.ipynb) [**Examples**](https://tum-pbs.github.io/PhiML/Examples.html)
Φ<sub>ML</sub> provides a unified math and neural network API for Jax, PyTorch, TensorFlow, and NumPy.
See the [Installation Instructions](https://tum-pbs.github.io/PhiML/Installation_Instructions.html) for how to compile the optional custom CUDA operations.
```python
from jax import numpy as jnp
import torch
import tensorflow as tf
import numpy as np
from phiml import math
math.sin(1.)                  # Python float
math.sin(jnp.asarray([1.]))   # Jax array
math.sin(torch.tensor([1.]))  # PyTorch tensor
math.sin(tf.constant([1.]))   # TensorFlow tensor
math.sin(np.asarray([1.]))    # NumPy array
```
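The idea behind such a unified API — one function name routing each call to whichever backend owns the input — can be sketched in plain Python. This is a toy illustration of the dispatch concept, not Φ<sub>ML</sub>'s actual mechanism; `unified_sin` is a hypothetical name:

```python
import math

def unified_sin(x):
    """Toy dispatcher: pick a backend based on the input's type."""
    if isinstance(x, (int, float)):   # Python scalar -> stdlib math
        return math.sin(x)
    if isinstance(x, list):           # list -> elementwise
        return [math.sin(v) for v in x]
    # A full implementation would inspect type(x) and forward to
    # jnp.sin / torch.sin / tf.sin / np.sin accordingly.
    raise TypeError(f"No backend registered for {type(x)}")

print(unified_sin(0.0))                    # 0.0
print(unified_sin([0.0, math.pi / 2]))
```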
**Compatibility**
* Writing code that works with PyTorch, Jax, and TensorFlow makes it easier to share code with other people and collaborate.
* Your published research code will reach a broader audience.
* When you run into a bug or roadblock in one library, you can simply switch to another.
* Φ<sub>ML</sub> can efficiently [convert tensors between ML libraries](https://tum-pbs.github.io/PhiML/Convert.html) on-the-fly, so you can even mix the different ecosystems.
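One common route for such cross-library conversion is to use NumPy as the interchange format when a zero-copy path (such as DLPack) is unavailable. The sketch below is a simplified toy, not Φ<sub>ML</sub>'s implementation; `to_numpy` is a hypothetical helper:

```python
import numpy as np

def to_numpy(x):
    """Toy converter using NumPy as the common interchange format.
    A real converter would try zero-copy routes (e.g. DLPack)
    before falling back to a copy."""
    if isinstance(x, np.ndarray):
        return x                  # already NumPy, nothing to do
    if hasattr(x, "numpy"):       # torch.Tensor / eager tf.Tensor
        return x.numpy()
    return np.asarray(x)          # Jax arrays, lists, scalars

print(to_numpy([1.0, 2.0]))
```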
**Fewer mistakes**
* *No more data type troubles*: Φ<sub>ML</sub> [automatically converts data types](https://tum-pbs.github.io/PhiML/Data_Types.html) where needed and lets you specify the [FP precision globally or by context](https://tum-pbs.github.io/PhiML/Data_Types.html#Precision)!
* *No more reshaping troubles*: Φ<sub>ML</sub> performs [reshaping under the hood](https://tum-pbs.github.io/PhiML/Shapes.html).
* *Is `neighbor_idx.at[jnp.reshape(idx, (-1,))].set(jnp.reshape(cell_idx, (-1,) + cell_idx.shape[-2:]))` correct?*: Φ<sub>ML</sub> provides a custom Tensor class that lets you write [easy-to-read, more concise, more explicit, less error-prone code](https://tum-pbs.github.io/PhiML/Tensors.html).
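The idea of a context-scoped floating-point precision can be imitated with a small context manager. This is a conceptual sketch only — `precision` and `current_precision` here are hypothetical names, not Φ<sub>ML</sub>'s API:

```python
from contextlib import contextmanager

_PRECISION = 64  # global default precision in bits

@contextmanager
def precision(bits):
    """Temporarily override the global float precision."""
    global _PRECISION
    previous, _PRECISION = _PRECISION, bits
    try:
        yield
    finally:
        _PRECISION = previous  # restore on exit, even after errors

def current_precision():
    return _PRECISION

print(current_precision())       # 64
with precision(32):
    print(current_precision())   # 32 inside the context
print(current_precision())       # 64 again
```

Math operations would then consult `current_precision()` to pick their output dtype, so a whole computation can be switched to 32-bit by wrapping it in one `with` block.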
**Unique features**
* **n-dimensional operations**: With Φ<sub>ML</sub>, you can write code that [automatically works in 1D, 2D and 3D](https://tum-pbs.github.io/PhiML/N_Dimensional.html), choosing the corresponding operations based on the input dimensions.
* **Preconditioned linear solves**: Φ<sub>ML</sub> can [build sparse matrices from your Python functions](https://tum-pbs.github.io/PhiML/Matrices.html) and run linear solvers [with preconditioners](https://tum-pbs.github.io/PhiML/Linear_Solves.html).
* **Flexible neural network architectures**: [Φ<sub>ML</sub> provides various configurable neural network architectures, from MLPs to U-Nets.](https://tum-pbs.github.io/PhiML/Networks.html)
* **Non-uniform tensors**: Φ<sub>ML</sub> allows you to [stack tensors of different sizes and keeps track of the resulting shapes](https://tum-pbs.github.io/PhiML/Non_Uniform.html).
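The n-dimensional idea — one function that adapts to the rank of its input — can be sketched in plain NumPy with a discrete Laplacian; this toy loops over axes instead of hard-coding a dimensionality (it is an illustration of the concept, not Φ<sub>ML</sub> code):

```python
import numpy as np

def laplace(x):
    """Dimension-agnostic discrete Laplacian (periodic boundaries):
    sums the centered second difference along every axis, so the
    same code handles 1D, 2D, and 3D inputs."""
    result = np.zeros_like(x)
    for axis in range(x.ndim):
        result += np.roll(x, 1, axis) + np.roll(x, -1, axis) - 2 * x
    return result

print(laplace(np.zeros(8)).shape)       # (8,)
print(laplace(np.zeros((8, 8))).shape)  # (8, 8)
```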