xminigrid


Name: xminigrid
Version: 0.9.1
home_page: None
Summary: JAX-accelerated meta-reinforcement learning environments inspired by XLand and MiniGrid
upload_time: 2024-10-11 05:28:05
maintainer: None
docs_url: None
author: None
requires_python: >=3.9
license: Apache License, Version 2.0 (full text at http://www.apache.org/licenses/LICENSE-2.0)
keywords: jax, neural-networks, deep-learning, reinforcement learning, meta reinforcement learning, gridworld, minigrid, xland
requirements: No requirements were recorded.
Travis-CI: No Travis.
coveralls test coverage: No coveralls.
            <h1 align="center">XLand-MiniGrid</h1>

<p align="center">
    <a href="https://pypi.python.org/pypi/xminigrid">
        <img src="https://img.shields.io/pypi/pyversions/xminigrid.svg"/>
    </a>
    <a href="https://badge.fury.io/py/xminigrid">
        <img src="https://badge.fury.io/py/xminigrid.svg"/>
    </a>
    <a href="https://github.com/astral-sh/ruff">
        <img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v2.json"/>
    </a>
    <a href="https://arxiv.org/abs/2312.12044">
        <img src="https://img.shields.io/badge/arXiv-2210.07105-b31b1b.svg"/>
    </a>
    <a href="https://twitter.com/vladkurenkov/status/1731709425524543550">
        <img src="https://badgen.net/badge/icon/twitter?icon=twitter&label"/>
    </a>
    <a target="_blank" href="https://colab.research.google.com/github/corl-team/xland-minigrid/blob/main/examples/walkthrough.ipynb">
      <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
    </a>
</p>

[//]: # (    <a href="https://badge.fury.io/py/xminigrid">)

[//]: # (        <img src="https://img.shields.io/pypi/dm/xminigrid?color=yellow&label=Downloads"/>)

[//]: # (    </a>)

[//]: # (    <a href="https://github.com/corl-team/xland-minigrid/main/LICENSE">)

[//]: # (        <img src="https://img.shields.io/badge/license-Apache_2.0-blue"/>)

[//]: # (    </a>)

![img](figures/readme-main-img.png)

# Meta-Reinforcement Learning in JAX

[//]: # (> đŸĨŗ **XLand-MiniGrid was [accepted]&#40;https://openreview.net/forum?id=xALDC4aHGz&#41; to [Intrinsically Motivated Open-ended Learning]&#40;https://imol-workshop.github.io&#41; workshop at NeurIPS 2023.** We look forward to seeing everyone at the poster session! )

> 🥳 We recently released [**XLand-100B**](https://github.com/dunno-lab/xland-minigrid-datasets), a large multi-task dataset for offline meta and in-context RL research, based on XLand-MiniGrid.
It is currently the largest dataset for in-context RL, containing full learning histories for **30k unique tasks, 100B transitions, and 2.5B episodes**. Check it out!

**XLand-MiniGrid** is a suite of tools, grid-world environments and benchmarks for meta-reinforcement learning research inspired by 
the diversity and depth of [XLand](https://deepmind.google/discover/blog/generally-capable-agents-emerge-from-open-ended-play/) 
and the simplicity and minimalism of [MiniGrid](https://github.com/Farama-Foundation/MiniGrid). Despite the similarities, 
XLand-MiniGrid is written in JAX from scratch and designed to be highly scalable, democratizing large-scale experimentation 
with limited resources. Ever wanted to reproduce a [DeepMind AdA](https://sites.google.com/view/adaptive-agent/) agent? Now you can, and not in months but in days!


### Features

- 🔮 System of rules and goals that can be combined in arbitrary ways to produce
diverse task distributions
- 🔧 Simple to extend and modify, comes with example environments ported from the original
[MiniGrid](https://github.com/Farama-Foundation/MiniGrid)
- 🪄 Fully compatible with all JAX transformations, can run on CPU, GPU and TPU
- 📈 Easily scales to $2^{16}$ parallel environments and millions of steps per second on a single GPU
- 🔥 Multi-GPU PPO baselines in the [PureJaxRL](https://github.com/luchris429/purejaxrl) style, which can achieve **1 trillion** environment steps in under two days

How cool is that? For more details, take a look at the [technical paper](https://arxiv.org/abs/2312.12044) or
[examples](examples), which will walk you through the basics and training your own adaptive agents in minutes!

[//]: # (![img]&#40;figures/times_minigrid.jpg&#41;)

## Installation 🎁

The latest release of XLand-MiniGrid can be installed directly from PyPI:

```commandline
pip install xminigrid
# or, from github directly
pip install "xminigrid @ git+https://github.com/corl-team/xland-minigrid.git"
```

Alternatively, if you want to install the latest development version from GitHub and run the provided algorithms or scripts,
install from source as follows:
```commandline
git clone git@github.com:corl-team/xland-minigrid.git
cd xland-minigrid

# additional dependencies for baselines
pip install -e ".[dev,baselines]"
```
Note that the installation of JAX may differ depending on your hardware accelerator! 
We advise users to explicitly install the correct JAX version (see the [official installation guide](https://github.com/google/jax#installation)).
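
For example, at the time of writing the guide suggests commands along the lines shown below. Treat this only as an illustration and check the guide for the exact command for your platform and CUDA version:

```commandline
# CPU-only
pip install -U jax
# NVIDIA GPU (CUDA 12) wheels on Linux
pip install -U "jax[cuda12]"
```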

## Basic Usage 🕹️

Most users who are familiar with other popular JAX-based environments 
(such as [gymnax](https://github.com/RobertTLange/gymnax) or [jumanji](https://github.com/instadeepai/jumanji)) 
will find that the interface is very similar.
At a high level, the current API combines the [dm_env](https://github.com/google-deepmind/dm_env) and gymnax interfaces.

```python
import jax
import xminigrid
from xminigrid.wrappers import GymAutoResetWrapper
from xminigrid.experimental.img_obs import RGBImgObservationWrapper

key = jax.random.key(0)
reset_key, ruleset_key = jax.random.split(key)

# to list available benchmarks: xminigrid.registered_benchmarks()
benchmark = xminigrid.load_benchmark(name="trivial-1m")
# choosing ruleset, see section on rules and goals
ruleset = benchmark.sample_ruleset(ruleset_key)

# to list available environments: xminigrid.registered_environments()
env, env_params = xminigrid.make("XLand-MiniGrid-R9-25x25")
env_params = env_params.replace(ruleset=ruleset)

# auto-reset wrapper
env = GymAutoResetWrapper(env)

# render obs as rgb images if needed (warn: this will affect speed greatly)
env = RGBImgObservationWrapper(env)

# fully jit-compatible step and reset methods
timestep = jax.jit(env.reset)(env_params, reset_key)
timestep = jax.jit(env.step)(env_params, timestep, action=0)

# optionally render the state
env.render(env_params, timestep)
```
Similar to gymnasium or jumanji, users can register new environment 
variations with `register` for convenient further usage with `make`. 
`timestep` is a dataclass containing `step_type`, `reward`, `discount`, `observation`, as well as the internal environment `state`.
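
Since `reset` and `step` are pure functions, whole rollouts can be jit-compiled as well. Below is a minimal sketch of a random-policy rollout built with `jax.lax.scan`, reusing `env`, `env_params` and `timestep` from the snippet above; the hardcoded number of discrete actions (6) is an assumption made here for illustration, not a value taken from the library.

```python
import jax

NUM_STEPS = 100  # rollout length, baked into the compiled function

def rollout(timestep, key):
    # random actions for the whole rollout; 6 discrete actions is an assumption
    actions = jax.random.randint(key, (NUM_STEPS,), 0, 6)

    def step_fn(carry, action):
        # closes over env and env_params from the snippet above
        carry = env.step(env_params, carry, action)
        return carry, carry.reward

    final_timestep, rewards = jax.lax.scan(step_fn, timestep, actions)
    return final_timestep, rewards

final_timestep, rewards = jax.jit(rollout)(timestep, jax.random.key(1))
print(rewards.sum())
```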

For a slightly more advanced introduction, see the provided [walkthrough notebook](examples/walkthrough.ipynb).

### On environment interface

Currently, a lot of new JAX-based environments are appearing, each offering its own variant of the API. Initially, we tried to reuse Jumanji, but it turned out 
that its design [is not suitable for meta learning](https://github.com/instadeepai/jumanji/issues/212). The Gymnax design appeared to be more appropriate, but unfortunately it is not actively supported and
often departs from the idea that parameters should only be contained in `env_params`. Furthermore, splitting 
`timestep` into multiple entities seems suboptimal to us, as it complicates many things, such as envpool- or dm_env-style 
auto-reset, where the reset occurs on the next step (we need access to the done flag of the previous step).
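
For illustration, the control flow implied by that style of auto-reset looks roughly like the sketch below; it is generic pseudocode written against the interface shown earlier (assuming a dm_env-like `last()` helper on the timestep), not the library's actual wrapper.

```python
import jax

def auto_reset_step(env, env_params, timestep, action, key):
    # dm_env / envpool style: if the *previous* timestep was terminal,
    # this call resets the environment instead of stepping it
    return jax.lax.cond(
        timestep.last(),  # done flag carried inside the previous timestep
        lambda: env.reset(env_params, key),
        lambda: env.step(env_params, timestep, action),
    )
```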

Therefore, we decided to make a minimal interface that covers just our needs, without the 
goal of making it generic. The core of our library is interface-independent, and we plan 
to switch to a new one when/if a better design becomes available
(e.g. when a stable Gymnasium [FuncEnv](https://gymnasium.farama.org/main/api/functional/) is released).

## Rules and Goals 🔮

<img src="figures/ruleset-example.jpg" align="right" width="55%" style="margin:15px;">

In XLand-MiniGrid, the system of rules and goals is the cornerstone of the 
emergent complexity and diversity. In the original MiniGrid 
some environments have dynamic goals, but the dynamics are never changed. 
To train and evaluate highly adaptive agents, we need to be able to change 
the dynamics in non-trivial ways. 

**Rules** are functions that change the environment state in some deterministic 
way when given conditions are met. **Goals** are similar to rules, except that they do 
not change the state; they only test conditions. Every task is described by a goal, rules, and initial objects. We call these **rulesets**. 
Currently, we support only one goal per task. 

To illustrate, we provide a visualization for a specific ruleset. To solve this task, the agent should take the blue pyramid and put it near the purple square to transform both 
objects into a red circle. To complete the goal, the red circle should be placed near the
green circle. However, placing the purple square near the yellow circle will make the task unsolvable in this trial. Initial object positions are randomized on each reset. 

For a more advanced introduction, see the corresponding section in the provided [walkthrough notebook](examples/walkthrough.ipynb).
<br clear="right"/>
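
To make the rule/goal abstraction more concrete, here is a purely illustrative pseudostructure; the library itself encodes rulesets as compact arrays for JAX compatibility, so this is not its actual API.

```python
from dataclasses import dataclass
from typing import Callable

# "GridState" stands in for whatever state representation the grid uses
@dataclass
class Rule:
    condition: Callable[["GridState"], bool]         # e.g. blue pyramid is near the purple square
    transform: Callable[["GridState"], "GridState"]  # e.g. replace both objects with a red circle

@dataclass
class Goal:
    condition: Callable[["GridState"], bool]         # e.g. red circle is near the green circle

@dataclass
class Ruleset:
    goal: Goal               # exactly one goal per task
    rules: list[Rule]        # applied whenever their conditions are met
    init_objects: list[str]  # objects placed at random positions on reset
```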

## Benchmarks 🎲 

While composing rules and goals by hand is flexible, it can quickly become cumbersome. 
Besides, it is hard to express efficiently in a JAX-compatible way due to the high number of heterogeneous computations. 

To avoid significant overhead during training and facilitate reliable comparisons between agents, 
we pre-sampled several benchmarks with up to **three million unique tasks**, following the procedure used to train the DeepMind 
AdA agent in the original XLand. Each task is represented as a tree, where the root is the goal and the other nodes are production rules, which should be triggered in sequence to solve the task:

<p align="center">
  <img src="figures/task_tree_demo.jpg" width="60%"/>
</p>

These benchmarks differ in their generation configs, producing distributions with
varying levels of diversity and average task difficulty. They can be used for different purposes; for example,
the `trivial-1m` benchmark can be used to debug your agents, allowing very quick iterations. However, we would caution 
against treating the benchmarks as a progression from simple to complex. They are just different 🤷.

Pre-sampled benchmarks are hosted on [HuggingFace](https://huggingface.co/datasets/Howuhh/xland_minigrid/tree/main) and will be downloaded and cached on the first use:

```python
import jax.random
import xminigrid
from xminigrid.benchmarks import Benchmark

# downloading to path specified by XLAND_MINIGRID_DATA,
# ~/.xland_minigrid by default
benchmark: Benchmark = xminigrid.load_benchmark(name="trivial-1m")
# reusing cached on the second use
benchmark: Benchmark = xminigrid.load_benchmark(name="trivial-1m")

# users can sample or get specific rulesets
benchmark.sample_ruleset(jax.random.key(0))
benchmark.get_ruleset(ruleset_id=benchmark.num_rulesets() - 1)

# or split them for train & test
train, test = benchmark.shuffle(key=jax.random.key(0)).split(prop=0.8)
```
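
Because rulesets are sampled with JAX PRNG keys, a whole batch of tasks can in principle be sampled at once. The sketch below assumes `sample_ruleset` is compatible with `jax.vmap` (i.e. rulesets are pytrees of arrays); treat it as an illustration rather than a guaranteed part of the API.

```python
import jax

num_tasks = 1024
keys = jax.random.split(jax.random.key(0), num_tasks)
# batched pytree of rulesets, with a leading axis of size num_tasks on every leaf
rulesets = jax.vmap(benchmark.sample_ruleset)(keys)
```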

We also provide the [script](scripts/ruleset_generator.py) used to generate these benchmarks. Users can adapt it for their own purposes:
```commandline
python scripts/ruleset_generator.py --help
```

An in-depth description of all available benchmarks is provided [in the technical paper](https://arxiv.org/abs/2312.12044) (Section 3).

## Environments 🌍

We provide environments from two domains. `XLand` is our main focus for meta-learning. For this domain we provide a single
environment and numerous registered variants with different grid layouts and sizes. All of them can be combined
with arbitrary rulesets. 

To demonstrate the generality of our library, we also port the majority of 
non-language-based tasks from the original `MiniGrid`. Similarly, some environments come with multiple registered variants. 
However, we have no current plans to actively develop and support them (but that may change).

| Name | Domain  | Visualization                                            | Goal                                                                         |
|------|---------|----------------------------------------------------------|------------------------------------------------------------------------------|
|   `XLand-MiniGrid` | XLand   | <img src="figures/xland.png" width="90px">               | specified by the provided ruleset                                            |
|   `MiniGrid-Empty`   | MiniGrid | <img src="figures/empty.png" width="90px">               | go to the green goal                                                         |
|   `MiniGrid-EmptyRandom`   | MiniGrid | <img src="figures/empty_random.png" width="90px">        | go to the green goal from different starting positions                       |
|   `MiniGrid-FourRooms`   | MiniGrid | <img src="figures/fourrooms.png" width="90px">           | go to the green goal, but goal and starting positions are randomized         |
|   `MiniGrid-LockedRoom`   | MiniGrid | <img src="figures/lockedroom.png" width="90px">          | find the key to unlock the door, go to the green goal                        |
|   `MiniGrid-Memory`   | MiniGrid | <img src="figures/memory.png" width="90px">              | remember the initial object and choose it at the end of the corridor         |
|   `MiniGrid-Playground`   | MiniGrid | <img src="figures/playground.png" width="90px">          | goal is not specified                                                        |
|   `MiniGrid-Unlock`   | MiniGrid | <img src="figures/unlock.png" width="90px">              | unlock the door with the key                                                 |
|   `MiniGrid-UnlockPickUp`   | MiniGrid | <img src="figures/unlockpickup.png" width="90px">        | unlock the door and pick up the object in another room                       |
|   `MiniGrid-BlockedUnlockPickUp`   | MiniGrid | <img src="figures/blockedunlockpickup.png" width="90px"> | unlock the door blocked by the object and pick up the object in another room |
|   `MiniGrid-DoorKey`   | MiniGrid | <img src="figures/doorkey.png" width="90px">             | unlock the door and go to the green goal                                     |

Users can get all registered environments with `xminigrid.registered_environments()`. We also provide manual control to easily explore the environments:
```commandline
python -m xminigrid.manual_control --env-id="MiniGrid-Empty-8x8"
```

## Baselines 🚀

In addition to the environments, we provide high-quality *almost* single-file 
implementations of recurrent PPO baselines in the style of [PureJaxRL](https://github.com/luchris429/purejaxrl). With the help of the magical `jax.pmap` transformation, 
they can scale to multiple accelerators, achieving an impressive throughput of millions of steps per second during training. 
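
As a reminder of what that pattern looks like in general, here is a toy data-parallel update with a placeholder quadratic loss; it only illustrates the `jax.pmap` + `jax.lax.pmean` gradient-averaging idea and is not the actual baseline code.

```python
import functools
import jax
import jax.numpy as jnp

# toy quadratic loss standing in for the real recurrent PPO loss
def loss_fn(params, batch):
    return jnp.mean((batch @ params["w"]) ** 2)

@functools.partial(jax.pmap, axis_name="devices")
def update_step(params, batch):
    grads = jax.grad(loss_fn)(params, batch)
    # average gradients across devices so every replica applies the same update
    grads = jax.lax.pmean(grads, axis_name="devices")
    return jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)

n_devices = jax.local_device_count()
params = jax.device_put_replicated({"w": jnp.zeros(8)}, jax.local_devices())
batch = jnp.ones((n_devices, 32, 8))  # leading axis is the device axis
params = update_step(params, batch)
```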

Agents can be trained from the terminal; default arguments can be overridden from the command line or from a YAML config:
```commandline
# for meta learning
python training/train_meta_task.py \
    --config-path='some-path/config.yaml' \
    --env_id='XLand-MiniGrid-R1-9x9'

# for minigrid envs
python training/train_single_task.py \
    --config-path='some-path/config.yaml' \
    --env_id='MiniGrid-Empty-8x8'
```
For the source code and available hyperparameters, see [/training](training) or run `python training/train_meta_task.py --help`. 
Furthermore, we provide standalone implementations that can be trained in Colab:
[xland](examples/train_meta_standalone.ipynb),
[minigrid](examples/train_single_standalone.ipynb). 

**P.S.** Do not expect that the provided baselines will solve the hardest environments or benchmarks 
available. How much fun would that be 🤔? However, we hope that they will 
help you get started quickly!

## Open Logs 📽

We value openness and reproducibility in science; therefore, all logs for the main experiments 
from the paper are open and available as a [public wandb report](https://wandb.ai/state-machine/xminigrid/reports/XLand-MiniGrid-Public-Logs--Vmlldzo4NjUxMTcw).
There you can explore all the latest plots, the behaviour of the losses, and see exactly which hyperparameters were used. Enjoy!

## Contributing 🔨

We welcome anyone interested in helping out! Please take a look at our [contribution guide](CONTRIBUTING.md) 
for further instructions and open an issue if something is not clear.

## See Also 🔎

A lot of other work is moving in a similar direction, transforming RL through JAX. Many of these projects have inspired us, 
and we encourage users to check them out as well.

- [Brax](https://github.com/google/brax) - fully differentiable physics engine used for research and development of robotics.
- [Gymnax](https://github.com/RobertTLange/gymnax) - implements classic environments including classic control, bsuite, MinAtar and simplistic meta learning tasks.
- [Jumanji](https://github.com/instadeepai/jumanji) - a diverse set of environments ranging from simple games to NP-hard combinatorial problems.
- [Pgx](https://github.com/sotetsuk/pgx) - JAX implementations of classic board games, such as Chess, Go and Shogi.
- [JaxMARL](https://github.com/flairox/jaxmarl) - multi-agent RL in JAX with a wide range of commonly used environments.
- [Craftax](https://github.com/MichaelTMatthews/Craftax) - Crafter reimplementation with JAX.
- [Purejaxql](https://github.com/mttga/purejaxql?tab=readme-ov-file) - off-policy Q-learning baselines with JAX for single and multi-agent RL.

Let's build together!

## Citation 🙏

```bibtex
@inproceedings{
    nikulin2023xlandminigrid,
    title={{XL}and-MiniGrid: Scalable Meta-Reinforcement Learning Environments in {JAX}},
    author={Alexander Nikulin and Vladislav Kurenkov and Ilya Zisman and Viacheslav Sinii and Artem Agarkov and Sergey Kolesnikov},
    booktitle={Intrinsically-Motivated and Open-Ended Learning Workshop, NeurIPS2023},
    year={2023},
    url={https://openreview.net/forum?id=xALDC4aHGz}
}
```

            
