| Field | Value |
|---|---|
| Name | jax |
| Version | 0.7.0 |
| home_page | https://github.com/jax-ml/jax |
| Summary | Differentiate, compile, and transform Numpy code. |
| upload_time | 2025-07-22 20:30:57 |
| maintainer | None |
| docs_url | None |
| author | JAX team |
| requires_python | >=3.11 |
| license | Apache-2.0 |
| keywords | |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
<div align="center">
<img src="https://raw.githubusercontent.com/jax-ml/jax/main/images/jax_logo_250px.png" alt="logo"></img>
</div>
# Transformable numerical computing at scale
[Continuous integration](https://github.com/jax-ml/jax/actions/workflows/ci-build.yaml)
[PyPI version](https://pypi.org/project/jax/)
[**Transformations**](#transformations)
| [**Scaling**](#scaling)
| [**Install guide**](#installation)
| [**Change logs**](https://docs.jax.dev/en/latest/changelog.html)
| [**Reference docs**](https://docs.jax.dev/en/latest/)
## What is JAX?
JAX is a Python library for accelerator-oriented array computation and program transformation,
designed for high-performance numerical computing and large-scale machine learning.
JAX can automatically differentiate native
Python and NumPy functions. It can differentiate through loops, branches,
recursion, and closures, and it can take derivatives of derivatives of
derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation)
via [`jax.grad`](#automatic-differentiation-with-grad) as well as forward-mode differentiation,
and the two can be composed arbitrarily to any order.
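As one small, hedged illustration of that composability (a sketch, not part of the quickstart below), a Hessian can be computed by layering forward-mode differentiation over reverse-mode; `jax.hessian` is essentially this composition:

```python
import jax
import jax.numpy as jnp

def f(x):
  return jnp.sum(jnp.sin(x) ** 2)

# forward-over-reverse: jacfwd of the reverse-mode gradient
hessian_f = jax.jacfwd(jax.jacrev(f))
print(hessian_f(jnp.arange(3.0)).shape)  # (3, 3)
```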
JAX uses [XLA](https://www.openxla.org/xla)
to compile and scale your NumPy programs on TPUs, GPUs, and other hardware accelerators.
You can compile your own pure functions with [`jax.jit`](#compilation-with-jit).
Compilation and automatic differentiation can be composed arbitrarily.
Dig a little deeper, and you'll see that JAX is really an extensible system for
[composable function transformations](#transformations) at [scale](#scaling).
This is a research project, not an official Google product. Expect
[sharp edges](https://docs.jax.dev/en/latest/notebooks/Common_Gotchas_in_JAX.html).
Please help by trying it out, [reporting bugs](https://github.com/jax-ml/jax/issues),
and letting us know what you think!
```python
import jax
import jax.numpy as jnp

def predict(params, inputs):
  for W, b in params:
    outputs = jnp.dot(inputs, W) + b
    inputs = jnp.tanh(outputs)  # inputs to the next layer
  return outputs                # no activation on last layer

def loss(params, inputs, targets):
  preds = predict(params, inputs)
  return jnp.sum((preds - targets)**2)

grad_loss = jax.jit(jax.grad(loss))  # compiled gradient evaluation function
perex_grads = jax.jit(jax.vmap(grad_loss, in_axes=(None, 0, 0)))  # fast per-example grads
```
### Contents
* [Transformations](#transformations)
* [Scaling](#scaling)
* [Current gotchas](#gotchas-and-sharp-bits)
* [Installation](#installation)
* [Neural net libraries](#neural-network-libraries)
* [Citing JAX](#citing-jax)
* [Reference documentation](#reference-documentation)
## Transformations
At its core, JAX is an extensible system for transforming numerical functions.
Here are three: `jax.grad`, `jax.jit`, and `jax.vmap`.
### Automatic differentiation with `grad`
Use [`jax.grad`](https://docs.jax.dev/en/latest/jax.html#jax.grad)
to efficiently compute reverse-mode gradients:
```python
import jax
import jax.numpy as jnp

def tanh(x):
  y = jnp.exp(-2.0 * x)
  return (1.0 - y) / (1.0 + y)

grad_tanh = jax.grad(tanh)
print(grad_tanh(1.0))
# prints 0.4199743
```
You can differentiate to any order with `grad`:
```python
print(jax.grad(jax.grad(jax.grad(tanh)))(1.0))
# prints 0.62162673
```
You're free to use differentiation with Python control flow:
```python
def abs_val(x):
  if x > 0:
    return x
  else:
    return -x

abs_val_grad = jax.grad(abs_val)
print(abs_val_grad(1.0)) # prints 1.0
print(abs_val_grad(-1.0)) # prints -1.0 (abs_val is re-evaluated)
```
See the [JAX Autodiff
Cookbook](https://docs.jax.dev/en/latest/notebooks/autodiff_cookbook.html)
and the [reference docs on automatic
differentiation](https://docs.jax.dev/en/latest/jax.html#automatic-differentiation)
for more.
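One related convenience (not shown above, but part of the stable public API): `jax.value_and_grad` returns a function's value together with its gradient, avoiding a second evaluation:

```python
import jax
import jax.numpy as jnp

def loss(w):
  return jnp.sum(w ** 2)

value, grad = jax.value_and_grad(loss)(jnp.array([1.0, 2.0]))
print(value)  # 5.0
print(grad)   # [2. 4.]
```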
### Compilation with `jit`
Use XLA to compile your functions end-to-end with
[`jit`](https://docs.jax.dev/en/latest/jax.html#just-in-time-compilation-jit),
used either as an `@jit` decorator or as a higher-order function.
```python
import jax
import jax.numpy as jnp

def slow_f(x):
  # Element-wise ops see a large benefit from fusion
  return x * x + x * 2.0

x = jnp.ones((5000, 5000))
fast_f = jax.jit(slow_f)
%timeit -n10 -r3 fast_f(x)
%timeit -n10 -r3 slow_f(x)
```
Using `jax.jit` constrains the kind of Python control flow
the function can use; see
the tutorial on [Control Flow and Logical Operators with JIT](https://docs.jax.dev/en/latest/control-flow.html)
for more.
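As a hedged sketch of the usual workarounds (the function and argument names here are illustrative): a branch on a traced value can be expressed with `jax.lax.cond`, or the branching argument can be declared static so plain Python control flow runs at trace time:

```python
import jax
import jax.numpy as jnp
from functools import partial

@jax.jit
def act(x, use_relu):
  # structured control flow: both branches are traced, selection happens at run time
  return jax.lax.cond(use_relu,
                      lambda v: jnp.maximum(v, 0.0),
                      lambda v: v * v,
                      x)

@partial(jax.jit, static_argnames='use_relu')
def act_static(x, use_relu):
  # use_relu is a compile-time constant, so a plain Python `if` is fine
  if use_relu:
    return jnp.maximum(x, 0.0)
  return x * x

x = jnp.array([-1.0, 2.0])
print(act(x, True))                   # [0. 2.]
print(act_static(x, use_relu=False))  # [1. 4.]
```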
### Auto-vectorization with `vmap`
[`vmap`](https://docs.jax.dev/en/latest/jax.html#vectorization-vmap) maps
a function along array axes.
But instead of just looping over function applications, it pushes the loop down
onto the function’s primitive operations, e.g. turning matrix-vector multiplies into
matrix-matrix multiplies for better performance.
Using `vmap` can save you from having to carry around batch dimensions in your
code:
```python
import jax
import jax.numpy as jnp

def l1_distance(x, y):
  assert x.ndim == y.ndim == 1  # only works on 1D inputs
  return jnp.sum(jnp.abs(x - y))

def pairwise_distances(dist1D, xs):
  return jax.vmap(jax.vmap(dist1D, (0, None)), (None, 0))(xs, xs)

xs = jax.random.normal(jax.random.key(0), (100, 3))
dists = pairwise_distances(l1_distance, xs)
dists.shape # (100, 100)
```
By composing `jax.vmap` with `jax.grad` and `jax.jit`, we can get efficient
Jacobian matrices, or per-example gradients:
```python
per_example_grads = jax.jit(jax.vmap(jax.grad(loss), in_axes=(None, 0, 0)))
```
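For instance (a brief sketch with illustrative function names), `jax.jacrev` and `jax.jacfwd` build full Jacobians from the same ingredients, while `vmap`-of-`grad` stacks per-example gradients:

```python
import jax
import jax.numpy as jnp

def f(x):                            # vector -> vector
  return jnp.sin(x) * x

print(jax.jacrev(f)(jnp.arange(3.0)).shape)  # (3, 3) Jacobian at a point

def scalar_loss(w, x):
  return jnp.sum((f(x) - w) ** 2)

xs = jnp.ones((10, 3))               # batch of 10 examples
grads = jax.jit(jax.vmap(jax.grad(scalar_loss), in_axes=(None, 0)))(jnp.zeros(3), xs)
print(grads.shape)                   # (10, 3): one gradient per example
```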
## Scaling
To scale your computations across thousands of devices, you can use any
composition of these:
* [**Compiler-based automatic parallelization**](https://docs.jax.dev/en/latest/notebooks/Distributed_arrays_and_automatic_parallelization.html)
where you program as if using a single global machine, and the compiler chooses
how to shard data and partition computation (with some user-provided constraints);
* [**Explicit sharding and automatic partitioning**](https://docs.jax.dev/en/latest/notebooks/explicit-sharding.html)
where you still have a global view but data shardings are
explicit in JAX types, inspectable using `jax.typeof`;
* [**Manual per-device programming**](https://docs.jax.dev/en/latest/notebooks/shard_map.html)
where you have a per-device view of data
and computation, and can communicate with explicit collectives.
| Mode | View? | Explicit sharding? | Explicit Collectives? |
|---|---|---|---|
| Auto | Global | ❌ | ❌ |
| Explicit | Global | ✅ | ❌ |
| Manual | Per-device | ✅ | ✅ |
```python
from jax.sharding import set_mesh, AxisType, PartitionSpec as P

mesh = jax.make_mesh((8,), ('data',), axis_types=(AxisType.Explicit,))
set_mesh(mesh)

# parameters are sharded for FSDP:
for W, b in params:
  print(f'{jax.typeof(W)}')  # f32[512@data,512]
  print(f'{jax.typeof(b)}')  # f32[512]

# shard data for batch parallelism:
inputs, targets = jax.device_put((inputs, targets), P('data'))

# evaluate gradients, automatically parallelized!
gradfun = jax.jit(jax.grad(loss))
param_grads = gradfun(params, inputs, targets)
```
See the [tutorial](https://docs.jax.dev/en/latest/sharded-computation.html) and
[advanced guides](https://docs.jax.dev/en/latest/advanced_guide.html) for more.
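As a brief, hedged sketch of the manual per-device mode (assuming `jax.shard_map` is exported at the top level in this release; in earlier releases it lives in `jax.experimental.shard_map`), the function body sees only its local shard and communicates through explicit collectives such as `jax.lax.psum`:

```python
import jax
import jax.numpy as jnp
from jax.sharding import PartitionSpec as P

mesh = jax.make_mesh((jax.device_count(),), ('data',))
x = jnp.arange(8.0 * jax.device_count())  # length is a multiple of the device count

def device_sum(x_block):
  # x_block is only this device's shard of x
  local = jnp.sum(x_block)
  return jax.lax.psum(local, axis_name='data')  # explicit cross-device collective

total = jax.shard_map(device_sum, mesh=mesh, in_specs=P('data'), out_specs=P())(x)
print(total)  # sum over the full, globally sharded array
```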
## Gotchas and sharp bits
See the [Gotchas
Notebook](https://docs.jax.dev/en/latest/notebooks/Common_Gotchas_in_JAX.html).
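One representative sharp edge, as a brief illustration: JAX arrays are immutable, so NumPy-style in-place updates are written with the functional `.at[...]` syntax, which returns a new array:

```python
import jax.numpy as jnp

x = jnp.zeros(3)
# x[0] = 1.0          # would raise an error: JAX arrays are immutable
y = x.at[0].set(1.0)  # returns an updated copy; x is unchanged
print(y)              # [1. 0. 0.]
```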
## Installation
### Supported platforms
| | Linux x86_64 | Linux aarch64 | Mac aarch64 | Windows x86_64 | Windows WSL2 x86_64 |
|------------|--------------|---------------|--------------|----------------|---------------------|
| CPU | yes | yes | yes | yes | yes |
| NVIDIA GPU | yes | yes | n/a | no | experimental |
| Google TPU | yes | n/a | n/a | n/a | n/a |
| AMD GPU | yes | no | n/a | no | no |
| Apple GPU | n/a | no | experimental | n/a | n/a |
| Intel GPU | experimental | n/a | n/a | no | no |
### Instructions
| Platform | Instructions |
|-----------------|-----------------------------------------------------------------------------------------------------------------|
| CPU | `pip install -U jax` |
| NVIDIA GPU | `pip install -U "jax[cuda12]"` |
| Google TPU | `pip install -U "jax[tpu]"` |
| AMD GPU (Linux) | Follow [AMD's instructions](https://github.com/jax-ml/jax/blob/main/build/rocm/README.md). |
| Mac GPU | Follow [Apple's instructions](https://developer.apple.com/metal/jax/). |
| Intel GPU | Follow [Intel's instructions](https://github.com/intel/intel-extension-for-openxla/blob/main/docs/acc_jax.md). |
See [the documentation](https://docs.jax.dev/en/latest/installation.html)
for information on alternative installation strategies. These include compiling
from source, installing with Docker, using other versions of CUDA, a
community-supported conda build, and answers to some frequently-asked questions.
## Citing JAX
To cite this repository:
```
@software{jax2018github,
author = {James Bradbury and Roy Frostig and Peter Hawkins and Matthew James Johnson and Chris Leary and Dougal Maclaurin and George Necula and Adam Paszke and Jake Vander{P}las and Skye Wanderman-{M}ilne and Qiao Zhang},
title = {{JAX}: composable transformations of {P}ython+{N}um{P}y programs},
url = {http://github.com/jax-ml/jax},
version = {0.3.13},
year = {2018},
}
```
In the above bibtex entry, names are in alphabetical order, the version number
is intended to be that from [jax/version.py](../main/jax/version.py), and
the year corresponds to the project's open-source release.
A nascent version of JAX, supporting only automatic differentiation and
compilation to XLA, was described in a [paper that appeared at SysML
2018](https://mlsys.org/Conferences/2019/doc/2018/146.pdf). We're currently working on
covering JAX's ideas and capabilities in a more comprehensive and up-to-date
paper.
## Reference documentation
For details about the JAX API, see the
[reference documentation](https://docs.jax.dev/).
For getting started as a JAX developer, see the
[developer documentation](https://docs.jax.dev/en/latest/developer.html).
Raw data

```json
{
"_id": null,
"home_page": "https://github.com/jax-ml/jax",
"name": "jax",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.11",
"maintainer_email": null,
"keywords": null,
"author": "JAX team",
"author_email": "jax-dev@google.com",
"download_url": "https://files.pythonhosted.org/packages/c8/34/f26cdcb8e664306dc349aa9e126a858915089c22d0caa0131213b84e52da/jax-0.7.0.tar.gz",
"platform": null,
"description": "<div align=\"center\">\n<img src=\"https://raw.githubusercontent.com/jax-ml/jax/main/images/jax_logo_250px.png\" alt=\"logo\"></img>\n</div>\n\n# Transformable numerical computing at scale\n\n[](https://github.com/jax-ml/jax/actions/workflows/ci-build.yaml)\n[](https://pypi.org/project/jax/)\n\n[**Transformations**](#transformations)\n| [**Scaling**](#scaling)\n| [**Install guide**](#installation)\n| [**Change logs**](https://docs.jax.dev/en/latest/changelog.html)\n| [**Reference docs**](https://docs.jax.dev/en/latest/)\n\n\n## What is JAX?\n\nJAX is a Python library for accelerator-oriented array computation and program transformation,\ndesigned for high-performance numerical computing and large-scale machine learning.\n\nJAX can automatically differentiate native\nPython and NumPy functions. It can differentiate through loops, branches,\nrecursion, and closures, and it can take derivatives of derivatives of\nderivatives. It supports reverse-mode differentiation (a.k.a. backpropagation)\nvia [`jax.grad`](#automatic-differentiation-with-grad) as well as forward-mode differentiation,\nand the two can be composed arbitrarily to any order.\n\nJAX uses [XLA](https://www.openxla.org/xla)\nto compile and scale your NumPy programs on TPUs, GPUs, and other hardware accelerators.\nYou can compile your own pure functions with [`jax.jit`](#compilation-with-jit).\nCompilation and automatic differentiation can be composed arbitrarily.\n\nDig a little deeper, and you'll see that JAX is really an extensible system for\n[composable function transformations](#transformations) at [scale](#scaling).\n\nThis is a research project, not an official Google product. Expect\n[sharp edges](https://docs.jax.dev/en/latest/notebooks/Common_Gotchas_in_JAX.html).\nPlease help by trying it out, [reporting bugs](https://github.com/jax-ml/jax/issues),\nand letting us know what you think!\n\n```python\nimport jax\nimport jax.numpy as jnp\n\ndef predict(params, inputs):\n for W, b in params:\n outputs = jnp.dot(inputs, W) + b\n inputs = jnp.tanh(outputs) # inputs to the next layer\n return outputs # no activation on last layer\n\ndef loss(params, inputs, targets):\n preds = predict(params, inputs)\n return jnp.sum((preds - targets)**2)\n\ngrad_loss = jax.jit(jax.grad(loss)) # compiled gradient evaluation function\nperex_grads = jax.jit(jax.vmap(grad_loss, in_axes=(None, 0, 0))) # fast per-example grads\n```\n\n### Contents\n* [Transformations](#transformations)\n* [Scaling](#scaling)\n* [Current gotchas](#gotchas-and-sharp-bits)\n* [Installation](#installation)\n* [Neural net libraries](#neural-network-libraries)\n* [Citing JAX](#citing-jax)\n* [Reference documentation](#reference-documentation)\n\n## Transformations\n\nAt its core, JAX is an extensible system for transforming numerical functions.\nHere are three: `jax.grad`, `jax.jit`, and `jax.vmap`.\n\n### Automatic differentiation with `grad`\n\nUse [`jax.grad`](https://docs.jax.dev/en/latest/jax.html#jax.grad)\nto efficiently compute reverse-mode gradients:\n\n```python\nimport jax\nimport jax.numpy as jnp\n\ndef tanh(x):\n y = jnp.exp(-2.0 * x)\n return (1.0 - y) / (1.0 + y)\n\ngrad_tanh = jax.grad(tanh)\nprint(grad_tanh(1.0))\n# prints 0.4199743\n```\n\nYou can differentiate to any order with `grad`:\n\n```python\nprint(jax.grad(jax.grad(jax.grad(tanh)))(1.0))\n# prints 0.62162673\n```\n\nYou're free to use differentiation with Python control flow:\n\n```python\ndef abs_val(x):\n if x > 0:\n return x\n else:\n return -x\n\nabs_val_grad = 
jax.grad(abs_val)\nprint(abs_val_grad(1.0)) # prints 1.0\nprint(abs_val_grad(-1.0)) # prints -1.0 (abs_val is re-evaluated)\n```\n\nSee the [JAX Autodiff\nCookbook](https://docs.jax.dev/en/latest/notebooks/autodiff_cookbook.html)\nand the [reference docs on automatic\ndifferentiation](https://docs.jax.dev/en/latest/jax.html#automatic-differentiation)\nfor more.\n\n### Compilation with `jit`\n\nUse XLA to compile your functions end-to-end with\n[`jit`](https://docs.jax.dev/en/latest/jax.html#just-in-time-compilation-jit),\nused either as an `@jit` decorator or as a higher-order function.\n\n```python\nimport jax\nimport jax.numpy as jnp\n\ndef slow_f(x):\n # Element-wise ops see a large benefit from fusion\n return x * x + x * 2.0\n\nx = jnp.ones((5000, 5000))\nfast_f = jax.jit(slow_f)\n%timeit -n10 -r3 fast_f(x)\n%timeit -n10 -r3 slow_f(x)\n```\n\nUsing `jax.jit` constrains the kind of Python control flow\nthe function can use; see\nthe tutorial on [Control Flow and Logical Operators with JIT](https://docs.jax.dev/en/latest/control-flow.html)\nfor more.\n\n### Auto-vectorization with `vmap`\n\n[`vmap`](https://docs.jax.dev/en/latest/jax.html#vectorization-vmap) maps\na function along array axes.\nBut instead of just looping over function applications, it pushes the loop down\nonto the function\u2019s primitive operations, e.g. turning matrix-vector multiplies into\nmatrix-matrix multiplies for better performance.\n\nUsing `vmap` can save you from having to carry around batch dimensions in your\ncode:\n\n```python\nimport jax\nimport jax.numpy as jnp\n\ndef l1_distance(x, y):\n assert x.ndim == y.ndim == 1 # only works on 1D inputs\n return jnp.sum(jnp.abs(x - y))\n\ndef pairwise_distances(dist1D, xs):\n return jax.vmap(jax.vmap(dist1D, (0, None)), (None, 0))(xs, xs)\n\nxs = jax.random.normal(jax.random.key(0), (100, 3))\ndists = pairwise_distances(l1_distance, xs)\ndists.shape # (100, 100)\n```\n\nBy composing `jax.vmap` with `jax.grad` and `jax.jit`, we can get efficient\nJacobian matrices, or per-example gradients:\n\n```python\nper_example_grads = jax.jit(jax.vmap(jax.grad(loss), in_axes=(None, 0, 0)))\n```\n\n## Scaling\n\nTo scale your computations across thousands of devices, you can use any\ncomposition of these:\n* [**Compiler-based automatic parallelization**](https://docs.jax.dev/en/latest/notebooks/Distributed_arrays_and_automatic_parallelization.html)\nwhere you program as if using a single global machine, and the compiler chooses\nhow to shard data and partition computation (with some user-provided constraints);\n* [**Explicit sharding and automatic partitioning**](https://docs.jax.dev/en/latest/notebooks/explicit-sharding.html)\nwhere you still have a global view but data shardings are\nexplicit in JAX types, inspectable using `jax.typeof`;\n* [**Manual per-device programming**](https://docs.jax.dev/en/latest/notebooks/shard_map.html)\nwhere you have a per-device view of data\nand computation, and can communicate with explicit collectives.\n\n| Mode | View? | Explicit sharding? | Explicit Collectives? 
|\n|---|---|---|---|\n| Auto | Global | \u274c | \u274c |\n| Explicit | Global | \u2705 | \u274c |\n| Manual | Per-device | \u2705 | \u2705 |\n\n```python\nfrom jax.sharding import set_mesh, AxisType, PartitionSpec as P\nmesh = jax.make_mesh((8,), ('data',), axis_types=(AxisType.Explicit,))\nset_mesh(mesh)\n\n# parameters are sharded for FSDP:\nfor W, b in params:\n print(f'{jax.typeof(W)}') # f32[512@data,512]\n print(f'{jax.typeof(b)}') # f32[512]\n\n# shard data for batch parallelism:\ninputs, targets = jax.device_put((inputs, targets), P('data'))\n\n# evaluate gradients, automatically parallelized!\ngradfun = jax.jit(jax.grad(loss))\nparam_grads = gradfun(params, (inputs, targets))\n```\n\nSee the [tutorial](https://docs.jax.dev/en/latest/sharded-computation.html) and\n[advanced guides](https://docs.jax.dev/en/latest/advanced_guide.html) for more.\n\n## Gotchas and sharp bits\n\nSee the [Gotchas\nNotebook](https://docs.jax.dev/en/latest/notebooks/Common_Gotchas_in_JAX.html).\n\n## Installation\n\n### Supported platforms\n\n| | Linux x86_64 | Linux aarch64 | Mac aarch64 | Windows x86_64 | Windows WSL2 x86_64 |\n|------------|--------------|---------------|--------------|----------------|---------------------|\n| CPU | yes | yes | yes | yes | yes |\n| NVIDIA GPU | yes | yes | n/a | no | experimental |\n| Google TPU | yes | n/a | n/a | n/a | n/a |\n| AMD GPU | yes | no | n/a | no | no |\n| Apple GPU | n/a | no | experimental | n/a | n/a |\n| Intel GPU | experimental | n/a | n/a | no | no |\n\n\n### Instructions\n\n| Platform | Instructions |\n|-----------------|-----------------------------------------------------------------------------------------------------------------|\n| CPU | `pip install -U jax` |\n| NVIDIA GPU | `pip install -U \"jax[cuda12]\"` |\n| Google TPU | `pip install -U \"jax[tpu]\"` |\n| AMD GPU (Linux) | Follow [AMD's instructions](https://github.com/jax-ml/jax/blob/main/build/rocm/README.md). |\n| Mac GPU | Follow [Apple's instructions](https://developer.apple.com/metal/jax/). |\n| Intel GPU | Follow [Intel's instructions](https://github.com/intel/intel-extension-for-openxla/blob/main/docs/acc_jax.md). |\n\nSee [the documentation](https://docs.jax.dev/en/latest/installation.html)\nfor information on alternative installation strategies. These include compiling\nfrom source, installing with Docker, using other versions of CUDA, a\ncommunity-supported conda build, and answers to some frequently-asked questions.\n\n## Citing JAX\n\nTo cite this repository:\n\n```\n@software{jax2018github,\n author = {James Bradbury and Roy Frostig and Peter Hawkins and Matthew James Johnson and Chris Leary and Dougal Maclaurin and George Necula and Adam Paszke and Jake Vander{P}las and Skye Wanderman-{M}ilne and Qiao Zhang},\n title = {{JAX}: composable transformations of {P}ython+{N}um{P}y programs},\n url = {http://github.com/jax-ml/jax},\n version = {0.3.13},\n year = {2018},\n}\n```\n\nIn the above bibtex entry, names are in alphabetical order, the version number\nis intended to be that from [jax/version.py](../main/jax/version.py), and\nthe year corresponds to the project's open-source release.\n\nA nascent version of JAX, supporting only automatic differentiation and\ncompilation to XLA, was described in a [paper that appeared at SysML\n2018](https://mlsys.org/Conferences/2019/doc/2018/146.pdf). 
We're currently working on\ncovering JAX's ideas and capabilities in a more comprehensive and up-to-date\npaper.\n\n## Reference documentation\n\nFor details about the JAX API, see the\n[reference documentation](https://docs.jax.dev/).\n\nFor getting started as a JAX developer, see the\n[developer documentation](https://docs.jax.dev/en/latest/developer.html).\n",
"bugtrack_url": null,
"license": "Apache-2.0",
"summary": "Differentiate, compile, and transform Numpy code.",
"version": "0.7.0",
"project_urls": {
"Homepage": "https://github.com/jax-ml/jax"
},
"split_keywords": [],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "adde3092df5073cd9c07c01b10612fc541538b74b02184fac90e3beada20f758",
"md5": "7d59f2b3aa66c0a541dfb11ae6fb7f56",
"sha256": "62833036cbaf4641d66ae94c61c0446890a91b2c0d153946583a0ebe04877a76"
},
"downloads": -1,
"filename": "jax-0.7.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "7d59f2b3aa66c0a541dfb11ae6fb7f56",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.11",
"size": 2785944,
"upload_time": "2025-07-22T20:30:55",
"upload_time_iso_8601": "2025-07-22T20:30:55.687791Z",
"url": "https://files.pythonhosted.org/packages/ad/de/3092df5073cd9c07c01b10612fc541538b74b02184fac90e3beada20f758/jax-0.7.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "c834f26cdcb8e664306dc349aa9e126a858915089c22d0caa0131213b84e52da",
"md5": "c4ca831d79fdd2749d03cbb45782db53",
"sha256": "4dd8924f171ed73a4f1a6191e2f800ae1745069989b69fabc45593d6b6504003"
},
"downloads": -1,
"filename": "jax-0.7.0.tar.gz",
"has_sig": false,
"md5_digest": "c4ca831d79fdd2749d03cbb45782db53",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.11",
"size": 2391317,
"upload_time": "2025-07-22T20:30:57",
"upload_time_iso_8601": "2025-07-22T20:30:57.169214Z",
"url": "https://files.pythonhosted.org/packages/c8/34/f26cdcb8e664306dc349aa9e126a858915089c22d0caa0131213b84e52da/jax-0.7.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-22 20:30:57",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "jax-ml",
"github_project": "jax",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "jax"
}
```