jaxline 0.0.8

*   Summary: JAXline is a distributed JAX training framework.
*   Home page: https://github.com/deepmind/jaxline
*   Author: DeepMind
*   License: Apache 2.0
*   Uploaded: 2023-12-13 20:39:16
# JAXline - Experiment framework for JAX

## What is JAXline

JAXline is a distributed JAX training and evaluation framework.
It is designed to be forked, covering only the most general aspects of
experiment boilerplate. This ensures that it can serve as an effective starting
point for a wide variety of use cases.

Many users will only need to fork the
[`experiment.py`](https://github.com/deepmind/jaxline/tree/master/jaxline/experiment.py)
file and rely on JAXline for everything else. Other users with more custom
requirements will want to (and are encouraged to) fork other components of
JAXline too, depending on their particular use case.

### Contents

*   [Quickstart](#quickstart)
*   [Checkpointing](#checkpointing)
*   [Logging](#logging)
*   [Launching](#launching)
*   [Distribution strategy](#distribution-strategy)
*   [Random number handling](#random-number-handling)
*   [Debugging](#debugging)
*   [Contributing](#contributing)

## Quickstart

### Installation

JAXline is written in pure Python, but depends on C++ code via JAX and
TensorFlow (the latter is used for writing summaries).

Because JAX / TensorFlow installation differs depending on your CUDA
version, JAXline does not list JAX or TensorFlow as dependencies in
`requirements.txt`.

First, follow the instructions to install
[JAX](https://github.com/google/jax#installation) and
[TensorFlow](https://github.com/tensorflow/tensorflow#install),
each with the relevant accelerator support.

Then, install JAXline using pip:

```bash
$ pip install git+https://github.com/deepmind/jaxline
```

### Building your own experiment

1.  Create an `experiment.py` file and inside it define an `Experiment` class
    that inherits from
    [`experiment.AbstractExperiment`](https://github.com/deepmind/jaxline/tree/master/jaxline/experiment.py).
2.  Implement the methods required by
    `AbstractExperiment` in your own `Experiment` class (i.e. the
    `abstractmethod`s). Optionally override the default implementations of `AbstractExperiment`'s other methods.
3.  Define a `config`, either in `experiment.py` or elsewhere, defining any
    settings that you do not wish to inherit from
    [`base_config`](https://github.com/deepmind/jaxline/tree/master/jaxline/base_config.py).
    At the very least this will include `config.experiment_kwargs` to define the
    config required by your `Experiment`. Make sure this `config` object is
    included in the `flags` accessible to `experiment.py`.
4.  Add the following lines to the bottom of your `experiment.py` to ensure that
    your `Experiment` object is correctly passed through to
    [`platform.py`](https://github.com/deepmind/jaxline/tree/master/jaxline/platform.py):

    ```python
    if __name__ == '__main__':
      flags.mark_flag_as_required('config')
      platform.main(Experiment, sys.argv[1:])
    ```

5.  Run your `experiment.py` (a minimal end-to-end sketch of these steps is
    shown below).
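
The sketch below pulls these steps together. It is illustrative only: the exact
`AbstractExperiment` constructor and method signatures are defined in
[`experiment.py`](https://github.com/deepmind/jaxline/tree/master/jaxline/experiment.py),
the base config fields in
[`base_config.py`](https://github.com/deepmind/jaxline/tree/master/jaxline/base_config.py),
and `learning_rate` is just a hypothetical experiment setting.

```python
# experiment.py -- a minimal illustrative skeleton, not a working model.
import sys

from absl import flags
import jax.numpy as jnp
from ml_collections import config_dict

from jaxline import base_config
from jaxline import experiment
from jaxline import platform


def get_config():
  """Config object passed in via the `--config` flag."""
  config = base_config.get_base_config()
  # Keyword arguments forwarded to Experiment.__init__ (hypothetical setting).
  config.experiment_kwargs = config_dict.ConfigDict(dict(learning_rate=1e-3))
  return config


class Experiment(experiment.AbstractExperiment):
  """Implements the abstract methods required by AbstractExperiment."""

  def __init__(self, mode, init_rng, learning_rate):
    super().__init__(mode=mode, init_rng=init_rng)
    self._learning_rate = learning_rate

  def step(self, *, global_step, rng, writer):
    # Run one training step and return a dict of scalars to log.
    return {'loss': jnp.zeros([])}  # Placeholder for a real parameter update.

  def evaluate(self, *, global_step, rng, writer):
    # Run a full evaluation pass and return a dict of scalars to log.
    return {'eval_accuracy': jnp.zeros([])}


if __name__ == '__main__':
  flags.mark_flag_as_required('config')
  platform.main(Experiment, sys.argv[1:])
```

With a config defined this way, you would typically point the `--config` flag at
the file containing your `get_config` function when launching the experiment.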

## Checkpointing

So far this version of JAXline only supports in-memory checkpointing, as handled
by our
[`InMemoryCheckpointer`](https://github.com/deepmind/jaxline/tree/master/jaxline/utils.py).
It allows you to keep multiple separate checkpoint series in memory for your
train and eval jobs (see below).

The user is expected to override the
[`CHECKPOINT_ATTRS`](https://github.com/deepmind/jaxline/tree/master/jaxline/experiment.py)
and/or
[`NON_BROADCAST_CHECKPOINT_ATTRS`](https://github.com/deepmind/jaxline/tree/master/jaxline/experiment.py)
dicts in order to map checkpointable attributes of their own `Experiment` class
to the names under which they should be stored in the checkpoint (see the sketch
below).
`CHECKPOINT_ATTRS` specifies JAX `DeviceArray`s for which JAXline should only
take the first slice (corresponding to device 0) for checkpointing.
`NON_BROADCAST_CHECKPOINT_ATTRS` specifies any other picklable object that
JAXline should checkpoint whole.
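
For example, a hypothetical `Experiment` holding replicated parameters, an
optimizer state, and a plain Python counter might declare the following (all
attribute names here are illustrative):

```python
from jaxline import experiment


class Experiment(experiment.AbstractExperiment):
  # Replicated JAX arrays; JAXline checkpoints only the device-0 slice.
  CHECKPOINT_ATTRS = {'_params': 'params', '_opt_state': 'opt_state'}
  # Any other picklable state, checkpointed whole.
  NON_BROADCAST_CHECKPOINT_ATTRS = {'_python_step': 'python_step'}

  # ... __init__, step and evaluate as in the Quickstart sketch ...
```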

You can specify the frequency with which to save checkpoints, as well as whether
to checkpoint by step or by seconds, by setting the
`save_checkpoint_interval` and `interval_type` config flags
[here](https://github.com/deepmind/jaxline/tree/master/jaxline/base_config.py).

`config.max_checkpoints_to_keep` can be used to specify the maximum number of
checkpoints to keep. By default this is set to 5.
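
For example, in your `get_config()` (values here are illustrative; see
[`base_config.py`](https://github.com/deepmind/jaxline/tree/master/jaxline/base_config.py)
for the accepted `interval_type` values):

```python
config.interval_type = 'secs'          # Checkpoint/log by time rather than by step.
config.save_checkpoint_interval = 300  # Save a checkpoint every 5 minutes.
config.max_checkpoints_to_keep = 5     # Also the default.
```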

By setting `config.best_model_eval_metric`, you can specify which value in the
`scalars` dictionary returned by your
[`evaluate`](https://github.com/deepmind/jaxline/tree/master/jaxline/experiment.py)
function to use as a 'fitness score'. JAXline will then save a separate series
of checkpoints corresponding to steps at which the fitness score is better than
previously seen. Depending on whether you are maximizing or minimizing the eval
metric, set `config.best_model_eval_metric_higher_is_better` to True or False.
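
For instance, if your `evaluate` method returned a scalar under the hypothetical
key `'top_1_accuracy'`:

```python
config.best_model_eval_metric = 'top_1_accuracy'
config.best_model_eval_metric_higher_is_better = True
```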

## Logging

So far this version of JAXline only supports logging to TensorBoard, via our
[`TensorBoardLogger`](https://github.com/deepmind/jaxline/tree/master/jaxline/platform.py).

The user is expected to return a dictionary of scalars from their
[`step`](https://github.com/deepmind/jaxline/tree/master/jaxline/experiment.py)
and
[`evaluate`](https://github.com/deepmind/jaxline/tree/master/jaxline/experiment.py)
methods, and
[`TensorBoardLogger.write_scalars`](https://github.com/deepmind/jaxline/tree/master/jaxline/platform.py)
will periodically write these scalars to `TensorBoard`.

All logging happens asynchronously with respect to the main thread, so as not to
interrupt the training loop.

You can specify the frequency with which to log, as well as whether to log by
step or by seconds, by setting the `log_train_data_interval` and `interval_type`
config flags [here](https://github.com/deepmind/jaxline/tree/master/jaxline/base_config.py).
If `config.log_all_train_data` is set to `True` (`False` by default), JAXline
will cache the scalars from intermediate steps and log them all at once at the
end of the period.
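
For example, again in `get_config()` with illustrative values:

```python
config.log_train_data_interval = 60  # Interpreted in seconds or steps, per interval_type.
config.log_all_train_data = False    # Only log the latest step's scalars (the default).
```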

JAXline passes the
[`TensorBoardLogger`](https://github.com/deepmind/jaxline/tree/master/jaxline/platform.py)
instance through to the
[`step`](https://github.com/deepmind/jaxline/tree/master/jaxline/experiment.py)
and
[`evaluate`](https://github.com/deepmind/jaxline/tree/master/jaxline/experiment.py)
methods to allow the user to perform additional logging inside their
`Experiment` class if they so wish. A particular use case is writing images,
which can be done via
[`ExperimentWriter.write_images`](https://github.com/deepmind/jaxline/tree/master/jaxline/platform.py).
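
A sketch of what that might look like inside your `Experiment`, assuming
`write_images` accepts the global step and a mapping from names to image arrays
(check `ExperimentWriter` in
[`platform.py`](https://github.com/deepmind/jaxline/tree/master/jaxline/platform.py)
for the exact signature); `_sample_images` is a hypothetical helper:

```python
def evaluate(self, *, global_step, rng, writer):
  images = self._sample_images(rng)  # Hypothetical helper returning image arrays.
  if writer is not None:
    writer.write_images(global_step, {'samples': images})
  return {'eval_loss': 0.0}
```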


## Launching

So far this version of JAXline does not support launching remotely.

## Distribution strategy

JAX makes it super simple to distribute your jobs across multiple hosts and
cores. As such, JAXline leaves it up to the user to implement distributed
training and evaluation.

Essentially, by decorating a function with
[`jax.pmap`](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
you tell JAX to slice the inputs along the first dimension and then run the
function in parallel on each input slice, across all available local devices (or
a subset thereof). In other words,
[`jax.pmap`](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
invokes the single-program multiple-data (SPMD) paradigm. Then by using
[`jax.lax`](https://jax.readthedocs.io/en/latest/jax.lax.html) collective
communication operations from within your pmapped function, you can tell JAX to
communicate results between all devices _on all hosts_. For example, you may
want to use [`jax.lax.psum`](https://jax.readthedocs.io/en/latest/jax.lax.html)
to sum up the gradients across all devices on all hosts, and return the result
to each device (an all-reduce).
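
A minimal sketch of that pattern in plain JAX (the linear model and learning
rate are illustrative, not part of JAXline). Note that `p_update` is created
once and then reused on every step, which is also the point made below:

```python
import jax
import jax.numpy as jnp


def loss_fn(params, batch):
  preds = batch['x'] @ params  # Hypothetical linear model.
  return jnp.mean((preds - batch['y']) ** 2)


def update(params, batch):
  grads = jax.grad(loss_fn)(params, batch)
  # All-reduce: sum gradients across every device on every host and give each
  # device the summed result.
  grads = jax.lax.psum(grads, axis_name='i')
  return jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)


# Compiled once; `params` and `batch` must carry a leading device axis
# (e.g. replicate params with jax.device_put_replicated and shard the batch).
p_update = jax.pmap(update, axis_name='i')
```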

JAX will then automatically detect which devices are available on each host,
allowing
[`jax.pmap`](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
and [`jax.lax`](https://jax.readthedocs.io/en/latest/jax.lax.html) to work their
magic.

One very important thing to bear in mind is that each time you call
[`jax.pmap`](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap),
a separate TPU program will be compiled for the computation it wraps. Therefore
you do not want to be doing this regularly! In particular, for a standard ML
experiment you will want to call
[`jax.pmap`](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
once to wrap your parameter update function, and then call this wrapped function
on each step, rather than calling
[`jax.pmap`](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
on each step, which will kill your performance! This is a very common mistake
for new JAX users. Luckily the slowdown is extreme, so it should be easy to
notice. In JAXline we actually call
[`jax.pmap`](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
once more, in
[`next_device_state`](https://github.com/deepmind/jaxline/tree/master/jaxline/experiment.py),
to wrap our function that updates device state between steps, so we end up with
two TPU programs rather than one (but this adds negligible overhead).

## Random number handling

Random numbers in JAX might seem a bit unfamiliar to users coming from ordinary
`numpy` and TensorFlow. In those libraries we have global stateful PRNGs:
every time you call a random op it updates the PRNG's global state. However,
stateful PRNGs would be incompatible with JAX's functional design
semantics, leading to problems with reproducibility and parallelizability. JAX
introduces stateless PRNGs to avoid these issues. The downside is that
the user needs to thread random state through their program, splitting a new
PRNG off from the old one every time they want to draw a new random number. This
can be quite onerous, especially in a distributed setting, where you may have
independent PRNGs on each device.
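
For reference, this is what the manual threading looks like in plain JAX;
JAXline does the equivalent bookkeeping for you on every step:

```python
import jax

rng = jax.random.PRNGKey(42)  # Explicit, stateless PRNG key.
rng, step_rng = jax.random.split(rng)  # Split before each use...
noise = jax.random.normal(step_rng, shape=(4,))  # ...and consume only the split key.
```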

In JAXline we take care of this for you. On each step, in
[`next_device_state`](https://github.com/deepmind/jaxline/tree/master/jaxline/experiment.py),
we split a new PRNG from the old one, and optionally specialize it to the host
and/or device based on the
`random_mode_train` [config](https://github.com/deepmind/jaxline/tree/master/jaxline/base_config.py)
value you specify. We then pass this new PRNG through to your
[`step`](https://github.com/deepmind/jaxline/tree/master/jaxline/experiment.py)
function to use on that particular step. At evaluation time, we pass a fresh
PRNG to your
[`evaluate`](https://github.com/deepmind/jaxline/tree/master/jaxline/experiment.py)
method, initialized according to the `random_mode_eval`
[config](https://github.com/deepmind/jaxline/tree/master/jaxline/base_config.py) value
you specify. This PRNG will be the same on each call to
[`evaluate`](https://github.com/deepmind/jaxline/tree/master/jaxline/experiment.py)
(as normally you want your evaluation to be deterministic). If you want
different random behaviour on each call, a simple solution would be to fold in
the `global_step`, i.e. `jax.random.fold_in(rng, global_step)`, at the top of your
[`evaluate`](https://github.com/deepmind/jaxline/tree/master/jaxline/experiment.py)
method.
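
For example, sketching just the first line of a hypothetical `evaluate`:

```python
def evaluate(self, *, global_step, rng, writer):
  # Mix the step into the fixed eval PRNG so each evaluation call differs,
  # while remaining reproducible for a given step.
  rng = jax.random.fold_in(rng, global_step)
  ...
```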

Of course you are free to completely ignore the PRNGs we pass through to your
[`step`](https://github.com/deepmind/jaxline/tree/master/jaxline/experiment.py)
and
[`evaluate`](https://github.com/deepmind/jaxline/tree/master/jaxline/experiment.py)
methods and handle random numbers in your own way, should you have different
requirements.

## Debugging

### Post-mortem debugging

By setting the flag `--jaxline_post_mortem` (defined
[here](https://github.com/deepmind/jaxline/tree/master/jaxline/utils.py)) on the command-line,
tasks will pause on exceptions (except `SystemExit` and `KeyboardInterrupt`) and
enter post-mortem debugging using pdb. Paused tasks will hang until you attach
a debugger.

### Disabling pmap and jit

By setting the flag `--jaxline_disable_pmap_jit` on the command-line, all pmaps
and jits will be disabled, making it easier to inspect and trace code in a
debugger.

## Citing JAXline

Please use [this reference](https://github.com/deepmind/jax/blob/main/deepmind2020jax.txt).


## Contributing

Thank you for your interest in JAXline. The primary goal of open-sourcing
JAXline was to allow us to open-source our research more easily. Unfortunately,
we are not currently able to accept pull requests from external contributors,
though we hope to do so in future. Please feel free to open GitHub issues.

            
