buffalo-gym


Name: buffalo-gym
Version: 0.0.3
Summary: Buffalo Gym environment
Author: foreverska
Upload time: 2024-04-26 02:48:11
Keywords: gymnasium, gym
GitHub: https://github.com/foreverska/buffalo-gym
Requirements: gymnasium (~=0.29.1), numpy (~=1.24.3), setuptools (~=68.0.0)
# Buffalo Gym

A multi-armed bandit (MAB) environment for the Gymnasium API.
"One-armed bandit" is a reference to slot machines, and Buffalo
is a reference to one such slot machine that I am fond of.
MABs are an excellent playground for theoretical exercises and for
debugging RL agents because they provide an environment that is
easy to reason about.  It once helped me to step back and write
an MAB to debug my DQN agent.  But there was a lack of native
Gymnasium environments, so I wrote Buffalo, an easy-to-use
environment, in the hope that it might help someone else.

## Buffalo ("Buffalo-v0")

The default multi-armed bandit environment.  Arm center values
are drawn from a normal distribution with mean 0 and scale equal
to the number of arms.  When an arm is pulled, noise drawn from a
normal distribution with mean 0 and scale 1 is added to the chosen
arm's center value to produce the reward.  This is not intended to
be challenging for an agent, but easy for a debugger to reason about.
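
To make the reward mechanics concrete, here is a minimal NumPy sketch of the
process described above.  It mirrors the description rather than the package's
actual code, and it treats the number of arms as the scale of the center
distribution, which is one reading of "(0, arms)".

```
import numpy as np

arms = 10
rng = np.random.default_rng(seed=0)

# Arm centers ~ Normal(0, arms); per-pull noise ~ Normal(0, 1).
# Illustrative sketch only, not the package's implementation.
centers = rng.normal(loc=0.0, scale=arms, size=arms)

def pull(arm: int) -> float:
    """Reward for pulling one arm: its center plus unit-scale noise."""
    return centers[arm] + rng.normal(loc=0.0, scale=1.0)

print(pull(3))
```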

## Multi-Buffalo ("MultiBuffalo-v0")

This serves as a contextual bandit implementation.  It is a
k-armed bandit with n states.  The current state is indicated to
the agent in the observation, and each state has different
reward offsets for each arm.  The goal of the agent is to
learn the best action for each state from that context.  This is
a good stepping stone toward full Markov decision processes.

This environment has an extra parameter, pace.  By default (None), a
new state is chosen on every step of the environment.  It can
be set to any integer to control how many steps pass between random
state draws.  Of course, transitioning to a new state is not
guaranteed, because the next state is chosen at random.
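
A sketch of driving this environment through the standard Gymnasium API
follows.  Only the pace parameter is named in the description above; passing
it as a keyword to `gym.make` is an assumption for illustration and may differ
from the actual constructor signature.

```
import gymnasium as gym
import buffalo_gym  # importing registers the Buffalo environments

# "pace" documented above; keyword usage here is assumed, not confirmed.
env = gym.make("MultiBuffalo-v0", pace=5)

obs, info = env.reset(seed=0)
for _ in range(20):
    action = env.action_space.sample()          # pick an arm at random
    obs, reward, terminated, truncated, info = env.step(action)
```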

## Buffalo Trail ("BuffaloTrail-v0")

There is a pervasive rumor that slot machine manufacturers build in
a secret sequence of bets that triggers a large reward or the
jackpot.  It is almost certainly not true in the real world, but
it is true here.  A specific sequence of actions yields the maximum
reward.  The sequence is randomly chosen when the environment is set
up and is indicated in the info dict returned by reset.  Not all
sequences are aliased, and this may be an important thing to check
in an implementation; therefore, a rudimentary algorithm to force
aliasing is included.
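
Because the sequence is reported in the info dict returned by reset, a test
harness can read it there.  The exact key name used below ("sequence") is an
assumption for illustration; check the environment's source for the real key.

```
import gymnasium as gym
import buffalo_gym

env = gym.make("BuffaloTrail-v0")
obs, info = env.reset(seed=0)

# Key name "sequence" is assumed for illustration purposes.
secret = info.get("sequence")
print("Secret sequence:", secret)
```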

## Using

Install via pip, then import `buffalo_gym` alongside `gymnasium`.

```
import gymnasium as gym
import buffalo_gym  # importing registers the Buffalo environments

env = gym.make("Buffalo-v0")
```
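
From there, a minimal interaction loop uses the standard Gymnasium API;
actions are sampled at random purely for illustration.

```
obs, info = env.reset(seed=0)
for _ in range(100):
    action = env.action_space.sample()          # pick an arm at random
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```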

            
