Name | tetris-gymnasium
Version | 0.2.1
Summary | A fully configurable Gymnasium compatible Tetris environment
Author | mw
Requires Python | <4.0,>=3.9
Upload time | 2024-10-12 15:47:49
# Tetris Gymnasium

Tetris Gymnasium is a state-of-the-art, modular Reinforcement Learning (RL) environment for Tetris, tightly integrated
with Gymnasium, the maintained successor to OpenAI's Gym.
## Quick Start
Getting started with Tetris Gymnasium is straightforward. Here's an example to run an environment with random
actions:
```python
import cv2
import gymnasium as gym

from tetris_gymnasium.envs.tetris import Tetris

if __name__ == "__main__":
    env = gym.make("tetris_gymnasium/Tetris", render_mode="human")
    env.reset(seed=42)

    terminated = False
    while not terminated:
        env.render()
        action = env.action_space.sample()
        observation, reward, terminated, truncated, info = env.step(action)
        key = cv2.waitKey(100)  # timeout to see the movement
    print("Game Over!")
```
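The loop above follows the standard Gymnasium step API, in which `env.step` returns a five-tuple `(observation, reward, terminated, truncated, info)`. The dependency-free sketch below uses a hypothetical stub environment (not part of Tetris Gymnasium) to show how `terminated` and `truncated` each end an episode:

```python
import random


class StubEnv:
    """Hypothetical toy environment mimicking the Gymnasium step signature."""

    def __init__(self, max_steps=5):
        self.max_steps = max_steps
        self.steps = 0

    def reset(self, seed=None):
        random.seed(seed)
        self.steps = 0
        return 0, {}  # (observation, info), as in Gymnasium's reset()

    def step(self, action):
        self.steps += 1
        terminated = random.random() < 0.1         # episode ended naturally
        truncated = self.steps >= self.max_steps   # external time limit hit
        return self.steps, 1.0, terminated, truncated, {}


env = StubEnv()
env.reset(seed=42)
terminated = truncated = False
total_reward = 0.0
while not (terminated or truncated):
    obs, reward, terminated, truncated, info = env.step(action=0)
    total_reward += reward
print(f"episode length: {env.steps}, return: {total_reward}")
```

Checking both flags matters: a truncated episode was cut off by a limit, not finished by the game, and value-based agents should treat the two cases differently when bootstrapping.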
For more examples, e.g. training a DQN agent, please refer to the [examples](examples) directory.
## Installation
Tetris Gymnasium can be installed via pip:
```bash
pip install tetris-gymnasium
```
## Why Tetris Gymnasium?
While significant progress has been made in RL for many Atari games, Tetris remains a challenging problem for AI, similar
to games like Pitfall. Its combination of NP-hard complexity, stochastic elements, and the need for long-term planning
makes it a persistent open problem in RL research. Tetris's intuitive gameplay and relatively modest computational
requirements position it as a potentially useful environment for developing and evaluating RL approaches in a demanding
setting.
Tetris Gymnasium aims to provide researchers and developers with a tool to address this challenge:
1. **Modularity**: The environment's architecture allows for customization and extension, facilitating exploration of
various RL techniques.
2. **Clarity**: Comprehensive documentation and a structured codebase are designed to enhance accessibility and support
experimentation.
3. **Adjustability**: Configuration options enable researchers to focus on specific aspects of the Tetris challenge as
needed.
4. **Up-to-date**: Built on the current Gymnasium framework, the environment is compatible with contemporary RL
algorithms and tools.
5. **Feature-rich**: Includes game-specific features that are sometimes absent in other Tetris environments, aiming to
provide a more comprehensive representation of the game's challenges.
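As an illustration of the modularity point, piece randomization is exactly the kind of component a modular Tetris environment can make swappable. The standalone sketch below implements the classic 7-bag randomizer; the class name and interface here are illustrative, not Tetris Gymnasium's actual API:

```python
import random


class BagRandomizer:
    """Illustrative 7-bag randomizer: each of the seven tetrominoes is drawn
    exactly once before any piece can repeat."""

    PIECES = ["I", "O", "T", "S", "Z", "J", "L"]

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.bag = []

    def next_piece(self):
        if not self.bag:  # refill and shuffle a fresh bag when empty
            self.bag = list(self.PIECES)
            self.rng.shuffle(self.bag)
        return self.bag.pop()


randomizer = BagRandomizer(seed=42)
first_bag = [randomizer.next_piece() for _ in range(7)]
assert sorted(first_bag) == sorted(BagRandomizer.PIECES)
```

Swapping this class for a uniform random generator changes the environment's stochasticity without touching the rest of the game logic, which is the kind of experiment a modular design is meant to enable.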
These attributes make Tetris Gymnasium a potentially useful resource for both educational purposes and RL research. By
providing a standardized yet adaptable platform for approaching one of RL's ongoing challenges, Tetris Gymnasium may
contribute to further exploration and development in Tetris RL.
## Documentation
For detailed information on using and customizing Tetris Gymnasium, please refer to
our [full documentation](https://max-we.github.io/Tetris-Gymnasium/).
## Background
Tetris Gymnasium addresses the limitations of existing Tetris environments by offering a modular, understandable, and
adjustable platform. Our paper, "Piece by Piece: Assembling a Modular Reinforcement Learning Environment for Tetris,"
provides an in-depth look at the motivations and design principles behind this project.
**Abstract:**
> The game of Tetris is an open challenge in machine learning and especially Reinforcement Learning (RL). Despite its
> popularity, contemporary environments for the game lack key qualities, such as clear documentation, an up-to-date
> codebase, or game-related features. This work introduces Tetris Gymnasium, a modern RL environment built with
> Gymnasium, that aims to address these problems by being modular, understandable and adjustable. To evaluate Tetris
> Gymnasium on these qualities, a Deep Q Learning agent was trained and compared to a baseline environment, and it was
> found that it fulfills all requirements of a feature-complete RL environment while being adjustable to many different
> requirements. The source code and documentation are available on GitHub and can be used for free under the MIT license.
Read the full paper: [Preprint on EasyChair](https://easychair.org/publications/preprint/154Q)
## Citation
If you use Tetris Gymnasium in your research, please cite our work:
```bibtex
@booklet{EasyChair:13437,
  author = {Maximilian Weichart and Philipp Hartl},
  title = {Piece by Piece: Assembling a Modular Reinforcement Learning Environment for Tetris},
  howpublished = {EasyChair Preprint 13437},
  year = {EasyChair, 2024}}
```
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Acknowledgements
We extend our gratitude to the creators and maintainers
of [Gymnasium](https://github.com/Farama-Foundation/Gymnasium), [CleanRL](https://github.com/vwxyzjn/cleanrl),
and [Tetris-deep-Q-learning-pytorch](https://github.com/uvipen/Tetris-deep-Q-learning-pytorch) for providing powerful
frameworks and reference implementations that have contributed to the development of Tetris Gymnasium.