# Note
This project maintains Talendar/flappy-bird-gym.
## Flappy Bird for OpenAI Gym
![Python versions](https://img.shields.io/pypi/pyversions/flappy-bird-gym)
[![PyPI](https://img.shields.io/pypi/v/flappy-bird-gym)](https://pypi.org/project/flappy-bird-gym/)
[![License](https://img.shields.io/github/license/Talendar/flappy-bird-gym)](https://github.com/Talendar/flappy-bird-gym/blob/master/LICENSE)
This repository contains the implementation of two OpenAI Gym environments for
the Flappy Bird game. The implementation of the game's logic and graphics was
based on the [FlapPyBird](https://github.com/sourabhv/FlapPyBird) project, by
[@sourabhv](https://github.com/sourabhv).
The two environments differ only in the type of observations they yield to the
agent. The "FlappyBird-rgb-v0" environment yields RGB arrays (images)
representing the game's screen. The "FlappyBird-v0" environment, on the other
hand, yields simple numerical information about the game's state as
observations. The yielded attributes are (a short example follows the list):
* horizontal distance to the next pipe;
* difference between the player's y position and the next hole's y position.
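
A minimal sketch of how these observations can be inspected, assuming, as described above, that "FlappyBird-v0" yields a two-element array in the order listed:

```
import flappy_bird_gym

# Environment with simple numerical observations.
env = flappy_bird_gym.make("FlappyBird-v0")

obs = env.reset()
# Assuming the observation follows the list above:
#   obs[0] -> horizontal distance to the next pipe
#   obs[1] -> difference between the player's y and the next hole's y
print(env.observation_space)
print(obs)

env.close()
```
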
<br>
<p align="center">
<img align="center"
src="https://github.com/Talendar/flappy-bird-gym/blob/main/imgs/yellow_bird_playing.gif?raw=true"
width="200"/>
<img align="center"
src="https://github.com/Talendar/flappy-bird-gym/blob/main/imgs/red_bird_start_screen.gif?raw=true"
width="200"/>
<img align="center"
src="https://github.com/Talendar/flappy-bird-gym/blob/main/imgs/blue_bird_playing.gif?raw=true"
width="200"/>
</p>
## Installation
To install `flappy-bird-gym2`, simply run the following command:
    $ pip install flappy-bird-gym2
## Usage
As with other `gym` environments, it's very easy to use `flappy-bird-gym2`.
Simply import the package and create the environment with the `make` function.
Take a look at the sample code below:
```
import time
import flappy_bird_gym

env = flappy_bird_gym.make("FlappyBird-v0")

obs = env.reset()
while True:
    # Next action:
    # (feed the observation to your agent here)
    action = ...  # env.action_space.sample() for a random action

    # Processing:
    obs, reward, done, info = env.step(action)

    # Rendering the game:
    # (remove these two lines during training)
    env.render()
    time.sleep(1 / 30)  # FPS

    # Checking if the player is still alive:
    if done:
        break

env.close()
```
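
The image-based environment is used the same way; only the observation changes. A minimal sketch, assuming "FlappyBird-rgb-v0" yields an RGB array of the game's screen as described above:

```
import flappy_bird_gym

# Environment with RGB-array (image) observations.
env = flappy_bird_gym.make("FlappyBird-rgb-v0")

obs = env.reset()
# The observation should be an RGB array of the game's screen,
# e.g. suitable as input to a convolutional network. The exact
# shape ordering (width/height) depends on the implementation.
print(obs.shape)
print(obs.dtype)

env.close()
```
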
## Playing
To play the game (human mode), run the following command:
    $ flappy_bird_gym
To see a random agent playing, add an argument to the command:
    $ flappy_bird_gym --mode random
## Raw data
{
"_id": null,
"home_page": "https://github.com/chokychou/flappy-bird-gym2",
"name": "flappy-bird-gym2",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.6",
"maintainer_email": null,
"keywords": "Flappy-BirdGame Gym OpenAI-Gym Reinforcement-Learning Reinforcement-Learning-Environment",
"author": "Yi Zhou(chokychou)",
"author_email": "shenmeguislb@gmail.com",
"download_url": "https://files.pythonhosted.org/packages/31/a0/28fad51f4b9d803c6ce72f0bd7a11bb0a49de1f58483957d1b7c4d55cae5/flappy-bird-gym2-0.0.7.tar.gz",
"platform": null,
"description": "# Note\nThis projects maintains Talendar/flappy-bird-gym.\n\n## Flappy Bird for OpenAI Gym\n\n![Python versions](https://img.shields.io/pypi/pyversions/flappy-bird-gym)\n[![PyPI](https://img.shields.io/pypi/v/flappy-bird-gym)](https://pypi.org/project/flappy-bird-gym/)\n[![License](https://img.shields.io/github/license/Talendar/flappy-bird-gym)](https://github.com/Talendar/flappy-bird-gym/blob/master/LICENSE)\n\nThis repository contains the implementation of two OpenAI Gym environments for\nthe Flappy Bird game. The implementation of the game's logic and graphics was\nbased on the [FlapPyBird](https://github.com/sourabhv/FlapPyBird) project, by\n[@sourabhv](https://github.com/sourabhv). \n\nThe two environments differ only on the type of observations they yield for the\nagents. The \"FlappyBird-rgb-v0\" environment, yields RGB-arrays (images)\nrepresenting the game's screen. The \"FlappyBird-v0\" environment, on the other\nhand, yields simple numerical information about the game's state as\nobservations. The yielded attributes are the:\n\n* horizontal distance to the next pipe;\n* difference between the player's y position and the next hole's y position.\n\n<br>\n\n<p align=\"center\">\n <img align=\"center\" \n src=\"https://github.com/Talendar/flappy-bird-gym/blob/main/imgs/yellow_bird_playing.gif?raw=true\" \n width=\"200\"/>\n \n <img align=\"center\" \n src=\"https://github.com/Talendar/flappy-bird-gym/blob/main/imgs/red_bird_start_screen.gif?raw=true\" \n width=\"200\"/>\n \n <img align=\"center\" \n src=\"https://github.com/Talendar/flappy-bird-gym/blob/main/imgs/blue_bird_playing.gif?raw=true\" \n width=\"200\"/>\n</p>\n\n## Installation\n\nTo install `flappy-bird-gym`, simply run the following command:\n\n $ pip install flappy-bird-gym2\n \n## Usage\n\nLike with other `gym` environments, it's very easy to use `flappy-bird-gym`.\nSimply import the package and create the environment with the `make` function.\nTake a look at the sample code below:\n\n```\nimport time\nimport flappy_bird_gym\nenv = flappy_bird_gym.make(\"FlappyBird-v0\")\n\nobs = env.reset()\nwhile True:\n # Next action:\n # (feed the observation to your agent here)\n action = ... # env.action_space.sample() for a random action\n\n # Processing:\n obs, reward, done, info = env.step(action)\n \n # Rendering the game:\n # (remove this two lines during training)\n env.render()\n time.sleep(1 / 30) # FPS\n \n # Checking if the player is still alive\n if done:\n break\n\nenv.close()\n```\n\n## Playing\n\nTo play the game (human mode), run the following command:\n\n $ flappy_bird_gym\n \nTo see a random agent playing, add an argument to the command:\n\n $ flappy_bird_gym --mode random\n\n\n",
"bugtrack_url": null,
"license": "MIT License",
"summary": "An OpenAI gym environment for the Flappy Bird game.",
"version": "0.0.7",
"project_urls": {
"Download": "https://github.com/chokychou/flappy-bird-gym2/releases",
"Homepage": "https://github.com/chokychou/flappy-bird-gym2"
},
"split_keywords": [
"flappy-birdgame",
"gym",
"openai-gym",
"reinforcement-learning",
"reinforcement-learning-environment"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "e1e1a99f780dd089a9d9231138ad7439f25d5feaf5c6ca234f52137eb9f080e3",
"md5": "92e2fb114d44414f863402c35114fa7d",
"sha256": "bdd4773aad7e6af51d4dd45a7083743633c0316462ab829fbb9bad97f1dac137"
},
"downloads": -1,
"filename": "flappy_bird_gym2-0.0.7-py3-none-any.whl",
"has_sig": false,
"md5_digest": "92e2fb114d44414f863402c35114fa7d",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.6",
"size": 567730,
"upload_time": "2025-01-13T06:56:42",
"upload_time_iso_8601": "2025-01-13T06:56:42.344826Z",
"url": "https://files.pythonhosted.org/packages/e1/e1/a99f780dd089a9d9231138ad7439f25d5feaf5c6ca234f52137eb9f080e3/flappy_bird_gym2-0.0.7-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "31a028fad51f4b9d803c6ce72f0bd7a11bb0a49de1f58483957d1b7c4d55cae5",
"md5": "2e40f7db74e49972e1604cf19c11403b",
"sha256": "d8755c7348a88a6827f174272b45d8303ba5305b6f83e70dbbb5eb770c092887"
},
"downloads": -1,
"filename": "flappy-bird-gym2-0.0.7.tar.gz",
"has_sig": false,
"md5_digest": "2e40f7db74e49972e1604cf19c11403b",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.6",
"size": 545807,
"upload_time": "2025-01-13T06:56:43",
"upload_time_iso_8601": "2025-01-13T06:56:43.995366Z",
"url": "https://files.pythonhosted.org/packages/31/a0/28fad51f4b9d803c6ce72f0bd7a11bb0a49de1f58483957d1b7c4d55cae5/flappy-bird-gym2-0.0.7.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-01-13 06:56:43",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "chokychou",
"github_project": "flappy-bird-gym2",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"requirements": [
{
"name": "gym",
"specs": []
},
{
"name": "numpy",
"specs": [
[
"==",
"1.23.5"
]
]
},
{
"name": "pygame",
"specs": []
}
],
"lcname": "flappy-bird-gym2"
}