<p align="center">
<img src="https://github.com/OliverOverend/gym-simplifiedtetris/raw/master/assets/20x10_4.gif" width="500">
</p>
<h1 align="center">Gym-SimplifiedTetris </h1>
<p align="center">
<a href="https://www.codefactor.io/repository/github/oliveroverend/gym-simplifiedtetris">
<img src="https://img.shields.io/codefactor/grade/github/OliverOverend/gym-simplifiedtetris?color=ff69b4&style=for-the-badge">
</a>
<a href="https://pypi.org/project/gym-simplifiedtetris/">
<img src="https://img.shields.io/pypi/pyversions/gym_simplifiedtetris?style=for-the-badge">
</a>
<a href="/LICENSE.md">
<img src="https://img.shields.io/github/license/OliverOverend/gym-simplifiedtetris?color=darkred&style=for-the-badge">
</a>
<a href="https://github.com/OliverOverend/gym-simplifiedtetris/commits/">
<img src="https://img.shields.io/github/last-commit/OliverOverend/gym-simplifiedtetris?style=for-the-badge">
</a>
<a href="https://github.com/OliverOverend/gym-simplifiedtetris/releases">
<img src="https://img.shields.io/github/release-date/OliverOverend/gym-simplifiedtetris?color=teal&style=for-the-badge">
</a>
<a href="https://github.com/OliverOverend/gym-simplifiedtetris/issues">
<img src="https://img.shields.io/github/issues-raw/OliverOverend/gym-simplifiedtetris?color=blueviolet&style=for-the-badge">
</a>
</p>
<p align="center">
<a href="https://github.com/OliverOverend/gym-simplifiedtetris/issues/new?assignees=OliverOverend&labels=bug&template=BUG_REPORT.md&title=%5BBUG%5D%3A">Report Bug
</a>
·
<a href="https://github.com/OliverOverend/gym-simplifiedtetris/issues/new?assignees=OliverOverend&labels=enhancement&template=FEATURE_REQUEST.md&title=%5BFEATURE%5D%3A">Request Feature
</a>
·
<a href="https://github.com/OliverOverend/gym-simplifiedtetris/discussions/new">Suggestions
</a>
</p>
---
> 🟥 Simplified Tetris environments compliant with OpenAI Gym's API
Gym-SimplifiedTetris is a pip-installable package that provides simplified Tetris environments compliant with [OpenAI Gym's API](https://github.com/openai/gym). Gym's API is the de facto standard for developing and comparing reinforcement learning algorithms.
There are currently [three agents](https://github.com/OliverOverend/gym-simplifiedtetris/blob/master/gym_simplifiedtetris/agents) and [64 environments](https://github.com/OliverOverend/gym-simplifiedtetris/blob/master/gym_simplifiedtetris/envs) provided. The environments are simplified because the player selects the column and the piece's rotation before the piece drops vertically; most previous approaches to the game of Tetris use this simplified setting.
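Because column and rotation are chosen up front, each action can be thought of as a single discrete index over (rotation, column) pairs. The encoding below is an illustrative assumption, not the package's exact scheme:

```python
def encode_action(rotation: int, column: int, num_columns: int) -> int:
    """Map a (rotation, column) pair to a single discrete action index."""
    return rotation * num_columns + column


def decode_action(action: int, num_columns: int) -> tuple:
    """Recover the (rotation, column) pair from a discrete action index."""
    return divmod(action, num_columns)


# A 10-column board with up to 4 rotations gives 40 discrete actions.
num_columns = 10
assert encode_action(3, 7, num_columns) == 37
assert decode_action(37, num_columns) == (3, 7)
```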
---
- [1. Installation](#1-installation)
- [2. Usage](#2-usage)
- [3. Future work](#3-future-work)
- [4. Acknowledgements](#4-acknowledgements)
## 1. Installation
The package is pip-installable:
```bash
pip install gym-simplifiedtetris
```
Alternatively, you can fork the repository and clone your fork:
```bash
git clone https://github.com/<YOUR-USERNAME>/gym-simplifiedtetris
```
Then install the dependencies using pip:
```bash
cd gym-simplifiedtetris
pip install -r requirements.txt
```
## 2. Usage
The file [examples/envs.py](https://github.com/OliverOverend/gym-simplifiedtetris/blob/master/examples/envs.py) shows two ways to run the `simplifiedtetris-binary-20x10-4-v0` environment for ten games. You can create an environment with `gym.make`, supplying the environment's ID as an argument.
```python
import gym
import gym_simplifiedtetris
env = gym.make("simplifiedtetris-binary-20x10-4-v0")
obs = env.reset()
# Run 10 games of Tetris, selecting actions uniformly at random.
episode_num = 0
while episode_num < 10:
    env.render()

    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)

    if done:
        print(f"Episode {episode_num + 1} has terminated.")
        episode_num += 1
        obs = env.reset()

env.close()
```
Alternatively, you can import the environment directly:
```python
from gym_simplifiedtetris import SimplifiedTetrisBinaryEnv as Tetris
env = Tetris(grid_dims=(20, 10), piece_size=4)
```
## 3. Future work
- Normalise the observation spaces.
- Implement an action space that only permits the agent to take non-terminal actions.
- Implement more shaping rewards: potential-style, potential-based, dynamic potential-based, and non-potential. Optimise their weights using an optimisation algorithm.
- Write end-to-end and integration tests using pytest.
- Perform mutation and property-based testing using mutmut and Hypothesis.
- Use Coverage.py to increase code coverage.
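Potential-based shaping, one of the planned reward variants above, adds F(s, s') = γΦ(s') − Φ(s) to the environment reward; this transformation preserves the optimal policy (Ng et al., 1999). A minimal sketch with a hypothetical hole-count potential:

```python
def shaped_reward(env_reward: float, phi_s: float, phi_s_next: float,
                  gamma: float = 0.99) -> float:
    """Potential-based shaping: add F(s, s') = gamma * phi(s') - phi(s)."""
    return env_reward + gamma * phi_s_next - phi_s


# Example: the potential is minus the number of holes, so filling a hole
# (phi rising from -3 to -2) yields a positive shaping bonus on top of
# the environment reward.
r = shaped_reward(env_reward=1.0, phi_s=-3.0, phi_s_next=-2.0)
assert abs(r - 2.02) < 1e-9
```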
## 4. Acknowledgements
This package utilises several methods from the [codebase](https://github.com/andreanlay/tetris-ai-deep-reinforcement-learning) developed by andreanlay (2020) and the [codebase](https://github.com/Benjscho/gym-mdptetris) developed by Benjscho (2021).