turingpoint

Name: turingpoint
Version: 0.2.0
Home page: https://github.com/zbenmo/turingpoint
Summary: Reinforcement Learning (RL) library
Upload time: 2023-06-08 14:36:22
Author: Oren Zeev-Ben-Mordehai
Keywords: reinforcement learning, framework, integration

# turingpoint

Turing point is a Reinforcement Learning (RL) library. It adds the missing duct tape.
It supports multiple (heterogeneous) agents seamlessly. Per-agent partial observation is natural with Turing point.
Different agents can act at different frequencies.
You may opt to keep using the environment and agent libraries you already work with, such as Gym/Gymnasium, Stable-Baselines3, Tianshou, RLlib, etc.
Turing point integrates easily with existing RL libraries and your own custom code.
Integrating RL agents into target applications should be significantly easier with Turing point.

The main concept in Turing point is that there are multiple participants, and each gets its turn.
Participants communicate through a parcel that is passed among them. The agent and the environment are both participants in that sense, so there is no more confusion about which of them should call which. Reward logic, for example,
can be placed wherever you believe it fits best.
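
For intuition, a participant can be as simple as a plain callable that receives the parcel and reads or updates entries in it; the wind example further down follows exactly this pattern. The sketch below is a hypothetical reward-shaping participant, assuming the environment's step participant has already put a 'reward' entry in the parcel.

```python
def reward_shaping(parcel: dict) -> None:
    # Hypothetical illustration: subtract a small per-step penalty from the
    # reward that the environment participant placed in the parcel.
    parcel['reward'] = parcel.get('reward', 0.0) - 0.01
```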

Turing point may be helpful for parallel or distributed training, yet it does not address those explicitly. On the contrary: with Turing point the flow is sequential among the participants. As far as we can tell, Turing point at least does not hinder parallel and/or distributed training.

Participants can be added and/or removed dynamically (e.g., a new "monster" enters and later "disappears"); see the sketch below.
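
The examples below build the turn order from a plain Python generator, so one simple way to approximate dynamic participants, shown here only as a sketch and not as a library feature, is a guard participant that delegates to another participant while some condition recorded in the parcel holds. Both `monster` and the `'monster_active'` flag are hypothetical.

```python
def monster(parcel: dict) -> None:
    # Hypothetical participant: the monster takes its turn by recording an action.
    parcel['action_monster'] = "roar"


def maybe_monster(parcel: dict) -> None:
    # Guard participant: delegate to the monster only while some other
    # participant keeps the (hypothetical) 'monster_active' flag set.
    if parcel.get('monster_active', False):
        monster(parcel)
```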

Consider a Gym/SB3 training realm:

```python
import gym

from stable_baselines3 import A2C

# Creating the specific Gym environment.
env = gym.make("CartPole-v1")

# An agent is created and injected with the environment.
# The agent probably makes a copy of the passed environment, wraps it, etc.
model = A2C("MlpPolicy", env, verbose=1)

# The agent is trained against its environment.
# We can assume what is happening there (obs, action, reward, obs, ..), yet it is not explicit.
model.learn(total_timesteps=10_000)

# We now evaluate the performance of our agent with the help of the environment that the agent maintains.
vec_env = model.get_env()
obs = vec_env.reset()
for i in range(1000):
    # The parameter to predict is the observation,
    #  which is good, as our application (e.g., an actual cartpole robot) can indeed provide such observations and use the returned action.
    # Note: the action space as well as the observation space are defined in the environment.
    # Also note: the environment is aware of the agent; this is how the environment was designed.
    # The action space of the agent is coded in the environment.
    # The observation space is intended for the agent and probably also reflects what the agent should know about itself.
    # The _state output is related to RNNs, AFAIK.
    action, _state = model.predict(obs, deterministic=True)
    # Here the reward, done, and info outputs are just for our evaluation.
    # Mainly what happens here is that the environment moves to a new state.
    # The reward and done flag are specific to the agent.
    # If there are other entities in the environment, those may continue to live even after done=True and may not care (directly) about this specific reward.
    obs, reward, done, info = vec_env.step(action)
    # We render here. We did not render during training (learn), which probably makes sense performance-wise.
    vec_env.render()
    # VecEnv resets automatically
    # if done:
    #   obs = vec_env.reset()

# Observation: we reset the environment, not the agent. The model is supposed to be memory-less (MDP assumption).
```

In the comments above, we've tried to give the intuition for why some additional thinking is needed about
the software used to provision those environment/agent(s) realms.

Let's see how the above can be described with Turing point:

```python
import functools
import itertools
import random

import gymnasium as gym  # the gymnasium_utils helpers below assume the Gymnasium API
import numpy as np
import torch
from stable_baselines3 import PPO
from stable_baselines3.ppo import MlpPolicy

import turingpoint.gymnasium_utils as tp_gym_utils
import turingpoint.sb3_utils as tp_sb3_utils
import turingpoint.utils as tp_utils
import turingpoint as tp


def evaluate(env, agent, num_episodes: int) -> float:

  rewards_collector = tp_utils.Collector(['reward'])

  def get_participants():
    yield functools.partial(tp_gym_utils.call_reset, env=env)
    yield from itertools.cycle([
        functools.partial(tp_sb3_utils.call_predict, agent=agent, deterministic=True),
        functools.partial(tp_gym_utils.call_step, env=env),
        rewards_collector,
        tp_gym_utils.check_done
    ]) 

  evaluate_assembly = tp.Assembly(get_participants)

  for _ in range(num_episodes):
    _ = evaluate_assembly.launch()
    # Note that we don't clear the rewards in 'rewards_collector', and so we continue to collect.

  total_reward = sum(x['reward'] for x in rewards_collector.get_entries())

  return total_reward / num_episodes

..

def main():

  random.seed(1)
  np.random.seed(1)
  torch.manual_seed(1)

  env = gym.make('CartPole-v1')

  env.reset(seed=1)

  agent = PPO(MlpPolicy, env, verbose=0) # use verbose=1 for debugging

  mean_reward_before_train = evaluate(env, agent, 100)
  print("before training")
  print(f'{mean_reward_before_train=}')

..
```
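
A plausible continuation of `main` above, not shown in the snippet, is to train the agent with SB3's regular `learn` call and then reuse the same `evaluate` function to measure the improvement:

```python
  # Hypothetical continuation of main() above.
  agent.learn(total_timesteps=10_000)  # standard SB3 training call

  mean_reward_after_train = evaluate(env, agent, 100)
  print("after training")
  print(f'{mean_reward_after_train=}')
```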

What did we gain, and was it worth the extra coding? Let's add a second agent to the environment: wind. Or maybe it is part of an augmented environment; it does not really matter. Let's just add it.

```python
..

def wind(parcel: dict) -> None:
    action_wind = "blow left" if random.random() < 0.5 else "blow right"
    parcel['action_wind'] = action_wind


def wind_impact(parcel: dict) -> None:
    action_wind = parcel['action_wind']
    # We'll modify the agent's action, given the wind,
    # as we don't have access here to the state of the environment.
    parcel['action'] = ...


def evaluate(env, agent, num_episodes: int) -> float:

  rewards_collector = tp_utils.Collector(['reward'])

  def get_participants():
    yield functools.partial(tp_gym_utils.call_reset, env=env)
    yield from itertools.cycle([
        functools.partial(tp_sb3_utils.call_predict, agent=agent, deterministic=True),
        wind,
        wind_impact,
        functools.partial(tp_gym_utils.call_step, env=env),
        rewards_collector,
        tp_gym_utils.check_done
    ]) 

  evaluate_assembly = tp.Assembly(get_participants)

  for _ in range(num_episodes):
    _ = evaluate_assembly.launch()
    # Note that we don't clear the rewards in 'rewards_collector', and so we continue to collect.

  total_reward = sum(x['reward'] for x in rewards_collector.get_entries())

  return total_reward / num_episodes
```
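
For concreteness, one hypothetical way to fill in the `wind_impact` placeholder for CartPole, whose discrete actions are 0 (push the cart left) and 1 (push the cart right), is to let the wind occasionally override the agent's action; the 20% probability here is arbitrary:

```python
def wind_impact(parcel: dict) -> None:
    # Hypothetical illustration: with some probability the wind overrides the
    # agent's action (CartPole: 0 = push cart left, 1 = push cart right).
    if random.random() < 0.2:
        parcel['action'] = 0 if parcel['action_wind'] == "blow left" else 1
```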

To install, use for example:

```
pip install turingpoint
```

The examples can be found in the repository homepage (GitHub) under the 'examples' folder.

            
