puffergrid

Name: puffergrid
Version: 0.0.8
Home page: https://daveey.github.io
Summary: A framework for fast grid-based environments
Author: David Bloomin
License: MIT
Requires Python: <4.0,>=3.10
Keywords: puffergrid, gridworld, minigrid, rl, reinforcement-learning, environment, gym
Upload time: 2024-09-17 17:38:17

# PufferGrid

PufferGrid is a fast GridWorld engine for reinforcement learning, implemented in Cython.

## Features

- High-performance grid-based environments
- Customizable actions, events, and observations
- Easy integration with popular RL frameworks

## Installation

You can install PufferGrid using pip or from source.

### Using pip

The easiest way to install PufferGrid is using pip:

```
pip install puffergrid
```

### From Source

To install PufferGrid from source, follow these steps:

1. Clone the repository:
   ```
   git clone https://github.com/daveey/puffergrid.git
   cd puffergrid
   ```

2. Build and install the package:
   ```
   python setup.py build_ext --inplace
   pip install -e .
   ```
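
After installing with either method, a quick import check confirms that the compiled extension loads correctly (a minimal sanity check, not part of the project's documented workflow):

```
python -c "import puffergrid"
```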

## Getting Started

The best way to understand how to create a PufferGrid environment is to look at a complete example. Check out the [`forage.pyx`](https://github.com/daveey/puffergrid/blob/main/examples/forage.pyx) file in the `examples` directory for a full implementation of a foraging environment.

Below is a step-by-step walkthrough of creating a similar environment, explaining each component along the way.

### Step 1: Define Game Objects

First, we'll define our game objects: Agent, Wall, and Tree.

```python
from puffergrid.grid_object cimport GridObject

cdef struct AgentProps:
    unsigned int energy
    unsigned int orientation
ctypedef GridObject[AgentProps] Agent

cdef struct WallProps:
    unsigned int hp
ctypedef GridObject[WallProps] Wall

cdef struct TreeProps:
    char has_fruit
ctypedef GridObject[TreeProps] Tree

cdef enum ObjectType:
    AgentT = 0
    WallT = 1
    TreeT = 2
```

### Step 2: Define Actions

Next, we'll define the actions our agents can take: Move, Rotate, and Eat.

```python
from puffergrid.action cimport ActionHandler, ActionArg
from puffergrid.grid_object cimport GridObjectId  # assumed location of GridObjectId

cdef class Move(ActionHandler):
    cdef bint handle_action(self, unsigned int actor_id, GridObjectId actor_object_id, ActionArg arg):
        # Implementation details...

cdef class Rotate(ActionHandler):
    cdef bint handle_action(self, unsigned int actor_id, GridObjectId actor_object_id, ActionArg arg):
        # Implementation details...

cdef class Eat(ActionHandler):
    cdef bint handle_action(self, unsigned int actor_id, GridObjectId actor_object_id, ActionArg arg):
        # Implementation details...
```
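
The actual handler bodies live in `forage.pyx`. To give a flavor of the logic they contain, the orientation-to-offset arithmetic a Move handler needs can be sketched in plain Python; the orientation convention below (0=up, 1=right, 2=down, 3=left) is an assumption for illustration, not PufferGrid API:

```python
# Assumed orientation convention for illustration: 0=up, 1=right, 2=down, 3=left.
OFFSETS = [(-1, 0), (0, 1), (1, 0), (0, -1)]

def target_cell(row, col, orientation):
    """Return the cell one step ahead of (row, col) in the faced direction."""
    dr, dc = OFFSETS[orientation]
    return row + dr, col + dc
```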

### Step 3: Define Event Handlers

We'll create an event handler to reset trees after they've been eaten from.

```python
from puffergrid.event cimport EventHandler, EventArg
from puffergrid.grid_object cimport GridObjectId  # assumed location of GridObjectId

cdef class ResetTreeHandler(EventHandler):
    cdef void handle_event(self, GridObjectId obj_id, EventArg arg):
        # Implementation details...

cdef enum Events:
    ResetTree = 0
```

### Step 4: Define Observation Encoder

Create an observation encoder to define what agents can observe in the environment.

```python
from libcpp.string cimport string
from libcpp.vector cimport vector

from puffergrid.grid_object cimport GridObjectBase  # assumed location of GridObjectBase
from puffergrid.observation_encoder cimport ObservationEncoder

cdef class ObsEncoder(ObservationEncoder):
    cdef encode(self, GridObjectBase *obj, int[:] obs):
        # Implementation details...

    cdef vector[string] feature_names(self):
        return [
            "agent", "agent:energy", "agent:orientation",
            "wall", "tree", "tree:has_fruit"]
```
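
Conceptually, `encode()` fills one slot per feature name for the object in a given cell. A plain-Python analogue of that idea (purely illustrative; the real layout is defined in `forage.pyx`, and the one-slot-per-feature assumption is mine) looks like:

```python
FEATURES = ["agent", "agent:energy", "agent:orientation",
            "wall", "tree", "tree:has_fruit"]

def encode_agent(energy, orientation):
    """Fill a feature vector for an Agent cell, following the FEATURES order."""
    obs = [0] * len(FEATURES)
    obs[FEATURES.index("agent")] = 1                       # object-type indicator
    obs[FEATURES.index("agent:energy")] = energy           # agent's current energy
    obs[FEATURES.index("agent:orientation")] = orientation
    return obs
```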

### Step 5: Define The Environment

Finally, we'll put it all together in our Forage environment class.

```python
from puffergrid.grid_env cimport GridEnv

cdef class Forage(GridEnv):
    def __init__(self, int map_width=100, int map_height=100,
                 int num_agents=20, int num_walls=10, int num_trees=10):
        GridEnv.__init__(
            self,
            map_width,
            map_height,
            0,  # max_timestep
            [ObjectType.AgentT, ObjectType.WallT, ObjectType.TreeT],  # object types
            11, 11,  # observation shape
            ObsEncoder(),                # observation encoder
            [Move(), Rotate(), Eat()],   # action handlers
            [ResetTreeHandler()]         # event handlers
        )

        # Initialize agents, walls, and trees
        # Implementation details...
```

### Step 6: Using the Environment

Now that we've defined our environment, we can use it in a reinforcement learning loop:

```python
from puffergrid.wrappers.grid_env_wrapper import PufferGridEnv

# Create the Forage environment
c_env = Forage(map_width=100, map_height=100, num_agents=20, num_walls=10, num_trees=10)

# Wrap the environment with PufferGridEnv
env = PufferGridEnv(c_env, num_agents=20, max_timesteps=1000)

# Reset the environment
obs, _ = env.reset()

# Run a simple loop
for _ in range(1000):
    actions = env.action_space.sample()  # Random actions
    obs, rewards, terminals, truncations, infos = env.step(actions)

    if terminals.any() or truncations.any():
        break

# Print final stats
print(env.get_episode_stats())
```
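
As a small extension of the loop above, you can track per-agent returns across the episode. This sketch assumes the rewards returned by `env.step` are a NumPy array with one entry per agent (the `terminals.any()` call above suggests NumPy arrays) and that the compiled example is importable as `examples.forage`:

```python
import numpy as np
from examples.forage import Forage  # assumes the example module was built (see Installation)
from puffergrid.wrappers.grid_env_wrapper import PufferGridEnv

env = PufferGridEnv(
    Forage(map_width=100, map_height=100, num_agents=20, num_walls=10, num_trees=10),
    num_agents=20, max_timesteps=1000)

obs, _ = env.reset()
returns = np.zeros(20)  # one running return per agent

for _ in range(1000):
    actions = env.action_space.sample()
    obs, rewards, terminals, truncations, infos = env.step(actions)
    returns += rewards  # accumulate per-agent rewards
    if terminals.any() or truncations.any():
        break

print("per-agent returns:", returns)
```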

This example demonstrates the core components of creating a PufferGrid environment: defining objects, actions, events, observations, and putting them together in an environment class.

## Performance Testing

To run performance tests on your PufferGrid environment, use the `test_perf.py` script:

```
python test_perf.py --env examples.forage.Forage --num_agents 20 --duration 20
```

You can also run the script with profiling enabled:

```
python test_perf.py --env examples.forage.Forage --num_agents 20 --duration 20 --profile
```
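
If you are iterating on a custom environment and just want a rough steps-per-second number, you can also time the wrapper loop directly. This sketch reuses only the API shown in Step 6 and assumes the example module is importable as `examples.forage`; `test_perf.py` remains the more thorough tool:

```python
import time
from examples.forage import Forage  # assumes the compiled example module
from puffergrid.wrappers.grid_env_wrapper import PufferGridEnv

env = PufferGridEnv(Forage(num_agents=20), num_agents=20, max_timesteps=1000)
obs, _ = env.reset()

steps = 0
start = time.time()
while time.time() - start < 20:  # mirror --duration 20
    actions = env.action_space.sample()
    obs, rewards, terminals, truncations, infos = env.step(actions)
    steps += 1
    if terminals.any() or truncations.any():
        obs, _ = env.reset()  # start a fresh episode and keep timing

elapsed = time.time() - start
print(f"{steps * 20 / elapsed:.0f} agent steps/sec")  # 20 agents per env step
```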

## Contributing

Contributions to PufferGrid are welcome! Please feel free to submit pull requests, create issues, or suggest improvements.

            
