| Field | Value |
| --- | --- |
| Name | torchlure |
| Version | 0.2409.7 |
| home_page | None |
| Summary | None |
| upload_time | 2024-09-29 19:07:14 |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.8 |
| license | None |
| keywords | pytorch |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# Torch Lure
<a href="https://www.youtube.com/watch?v=wCzCOYCfY9g" target="_blank">
<img src="http://img.youtube.com/vi/wCzCOYCfY9g/maxresdefault.jpg" alt="Chandelure" style="width: 100%;">
</a>
<!-- # Dependencies -->
<!--
```
pip install git+https://github.com/Farama-Foundation/Minari.git@19565bd8cd33f2e4a3a9a8e4db372044b01ea8d3
``` -->
## Installations
```sh
pip install torchlure
```
## Usage
```py
import torchlure as lure

# Optimizers
lure.SophiaG(lr=1e-3, weight_decay=0.2)

# Functions (x, y_pred, y_target stand in for arbitrary tensors)
lure.tanh_exp(x)
lure.TanhExp()

lure.quantile_loss(y_pred, y_target, quantile=0.5)
lure.QuantileLoss(quantile=0.5)

lure.RMSNrom(dim=256, eps=1e-6)

# Noise schedulers
lure.LinearNoiseScheduler(beta=1e-4, beta_end=0.02, num_timesteps=1000)
lure.CosineNoiseScheduler(max_beta=0.999, s=0.008, num_timesteps=1000)

# ReLU-KAN layers and networks
lure.ReLUKAN(width=[11, 16, 16, 2], grid=5, k=3)
lure.create_relukan_network(
input_dim=11,
output_dim=2,
hidden_dim=32,
num_layers=3,
grid=5,
k=3,
)
```
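The listing above only names the API. The following is a minimal training-step sketch of how the pieces might fit together; it assumes `SophiaG` follows the usual `torch.optim` convention of taking the parameter iterable as its first argument, that `TanhExp` is an `nn.Module` activation, and that `quantile_loss` reduces to a scalar. None of these signatures are confirmed by the README.

```py
import torch
import torch.nn as nn
import torchlure as lure

# Toy regression model using TanhExp as the hidden activation
# (assumes lure.TanhExp is an nn.Module, as lure.TanhExp() above suggests).
model = nn.Sequential(nn.Linear(8, 32), lure.TanhExp(), nn.Linear(32, 1))

# Assumes SophiaG takes the parameter iterable first, like torch.optim optimizers;
# the README only shows its keyword arguments.
opt = lure.SophiaG(model.parameters(), lr=1e-3, weight_decay=0.2)

x = torch.randn(64, 8)
y = torch.randn(64, 1)

for _ in range(100):
    opt.zero_grad()
    pred = model(x)
    # Pinball loss at the median; assumes the result is a scalar tensor.
    loss = lure.quantile_loss(pred, y, quantile=0.5)
    loss.backward()
    opt.step()
```

If `lure.QuantileLoss(quantile=0.5)` mirrors the functional form as a module, it can be instantiated once and used in place of the functional call inside the loop.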
### Dataset
```py
import gymnasium as gym
import numpy as np
import torch
from torchlure.datasets import MinariEpisodeDataset, MinariTrajectoryDataset
from torchtyping import TensorType
# Discounted return-to-go: rtgs[t] = sum_{i >= t} gamma**(i - t) * rewards[i]
def return_to_go(rewards: TensorType[..., "T"], gamma: float) -> TensorType[..., "T"]:
if gamma == 1.0:
return rewards.flip(-1).cumsum(-1).flip(-1)
seq_len = rewards.shape[-1]
rtgs = torch.zeros_like(rewards)
rtg = torch.zeros_like(rewards[..., 0])
for i in range(seq_len - 1, -1, -1):
rtg = rewards[..., i] + gamma * rtg
rtgs[..., i] = rtg
return rtgs
env = gym.make("Hopper-v4")
minari_dataset = MinariEpisodeDataset("Hopper-random-v0")
minari_dataset.create(env, n_episodes=100, exist_ok=True)  # record 100 episodes; exist_ok=True reuses an existing dataset
minari_dataset.info()
# Observation space: Box(-inf, inf, (11,), float64)
# Action space: Box(-1.0, 1.0, (3,), float32)
# Total episodes: 100
# Total steps: 2,182
# Wrap episodes into fixed-length trajectories with an extra per-episode field.
traj_dataset = MinariTrajectoryDataset(
    minari_dataset,
    20,  # traj_len
    {"returns": lambda ep: return_to_go(torch.tensor(ep.rewards), 0.99)},
)

# Trajectories can be indexed by int, list, NumPy array, tensor, or slice.
traj = traj_dataset[2]
traj = traj_dataset[[3, 8, 15]]
traj = traj_dataset[np.arange(16)]
traj = traj_dataset[torch.arange(16)]
traj = traj_dataset[-16:]
traj["observations"].shape, traj["actions"].shape, traj["rewards"].shape, traj[
"terminated"
].shape, traj["truncated"].shape, traj["timesteps"].shape
# (torch.Size([16, 20, 4, 4, 16]),
# torch.Size([16, 20]),
# torch.Size([16, 20]),
# torch.Size([16, 20]),
# torch.Size([16, 20]),
# torch.Size([16, 20]))
```
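Not shown in the README: feeding these trajectories to a model. Below is a minimal sketch, assuming each dataset item is a dict of tensors so PyTorch's default collation can stack a batch (if it is not, a custom `collate_fn` would be needed).

```py
from torch.utils.data import DataLoader

# Batches of 32 trajectories, each traj_len=20 steps long.
loader = DataLoader(traj_dataset, batch_size=32, shuffle=True)

batch = next(iter(loader))
# Expected under default dict collation: observations -> [32, 20, obs_dim];
# rewards, returns, terminated, truncated, timesteps -> [32, 20].
print(batch["observations"].shape)
print(batch["returns"].shape)  # extra field computed by return_to_go above
```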
<!-- # %%
dataset = D4RLDataset(
dataset_id= "hopper-medium-expert-v2.2405",
d4rl_name= "hopper-medium-expert-v2",
env_id= "Hopper-v4",
)
# if you have already downloaded it once
dataset = D4RLDataset(
dataset_id= "hopper-medium-expert-v2.2405",
) -->
<!-- See all datasets [here](https://github.com/pytorch/rl/blob/3a7cf6af2a08089f11e0ed8cad3dd1cea0e253fb/torchrl/data/datasets/d4rl_infos.py) -->
Raw data
{
"_id": null,
"home_page": null,
"name": "torchlure",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.8",
"maintainer_email": null,
"keywords": "pytorch",
"author": null,
"author_email": "fuyutarow <fuyutarow@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/a8/05/909f9681346ae03d7f0580f58a0045732d32344bd54a9f690f32aa6dac81/torchlure-0.2409.7.tar.gz",
"platform": null,
"description": "# Torch Lure\n\n\n<a href=\"https://www.youtube.com/watch?v=wCzCOYCfY9g\" target=\"_blank\">\n <img src=\"http://img.youtube.com/vi/wCzCOYCfY9g/maxresdefault.jpg\" alt=\"Chandelure\" style=\"width: 100%;\">\n</a>\n\n\n<!-- # Depndencies -->\n<!-- \n```\npip install git+https://github.com/Farama-Foundation/Minari.git@19565bd8cd33f2e4a3a9a8e4db372044b01ea8d3\n``` -->\n\n\n## Installations\n```sh\npip install torchlure\n```\n\n## Usage\n```py\nimport torchlure as lure\n\n# Optimizers\nlure.SophiaG(lr=1e-3, weight_decay=0.2)\n\n# Functions\nlure.tanh_exp(x)\nlure.TanhExp()\n\nlure.quantile_loss(y_pred, y_target, quantile=0.5)\nlure.QuantileLoss(quantile=0.5)\n\nlure.RMSNrom(dim=256, eps=1e-6)\n\n# Noise Scheduler\nlure.LinearNoiseScheduler(beta=1e-4, beta_end=0.02, num_timesteps=1000)\nlure.CosineNoiseScheduler(max_beta=0.999, s=0.008, num_timesteps=1000):\n\n\nlure.ReLUKAN(width=[11, 16, 16, 2], grid=5, k=3)\n\nlure.create_relukan_network(\n input_dim=11,\n output_dim=2,\n hidden_dim=32,\n num_layers=3,\n grid=5,\n k=3,\n)\n\n```\n\n```py\nimport torchlure as lure\n\n# Optimizers\nlure.SophiaG(lr=1e-3, weight_decay=0.2)\n\n# Functions\nlure.tanh_exp(x)\nlure.TanhExp()\n\nlure.quantile_loss(y_pred, y_target, quantile=0.5)\nlure.QuantileLoss(quantile=0.5)\n\nlure.RMSNrom(dim=256, eps=1e-6)\n\n# Noise Scheduler\nlure.LinearNoiseScheduler(beta=1e-4, beta_end=0.02, num_timesteps=1000)\nlure.CosineNoiseScheduler(max_beta=0.999, s=0.008, num_timesteps=1000):\n```\n\n### Dataset\n\n\n\n```py\nimport gymnasium as gym\nimport numpy as np\nimport torch\nfrom torchlure.datasets import MinariEpisodeDataset, MinariTrajectoryDataset\nfrom torchtyping import TensorType\n\ndef return_to_go(rewards: TensorType[..., \"T\"], gamma: float) -> TensorType[..., \"T\"]:\n if gamma == 1.0:\n return rewards.flip(-1).cumsum(-1).flip(-1)\n\n seq_len = rewards.shape[-1]\n rtgs = torch.zeros_like(rewards)\n rtg = torch.zeros_like(rewards[..., 0])\n\n for i in range(seq_len - 1, -1, -1):\n rtg = rewards[..., i] + gamma * rtg\n rtgs[..., i] = rtg\n\n return rtgs\n\n\nenv = gym.make(\"Hopper-v4\")\nminari_dataset = MinariEpisodeDataset(\"Hopper-random-v0\")\nminari_dataset.create(env, n_episodes=100, exist_ok=True)\nminari_dataset.info()\n# Observation space: Box(-inf, inf, (11,), float64)\n# Action space: Box(-1.0, 1.0, (3,), float32)\n# Total episodes: 100\n# Total steps: 2,182\n\ntraj_dataset = MinariTrajectoryDataset(minari_dataset, traj_len=20, {\n \"returns\": lambda ep: return_to_go(torch.tensor(ep.rewards), 0.99),\n})\n\ntraj = traj_dataset[2]\ntraj = traj_dataset[[3, 8, 15]]\ntraj = traj_dataset[np.arange(16)]\ntraj = traj_dataset[torch.arange(16)]\ntraj = traj_dataset[-16:]\ntraj[\"observations\"].shape, traj[\"actions\"].shape, traj[\"rewards\"].shape, traj[\n \"terminated\"\n].shape, traj[\"truncated\"].shape, traj[\"timesteps\"].shape\n# (torch.Size([16, 20, 4, 4, 16]),\n# torch.Size([16, 20]),\n# torch.Size([16, 20]),\n# torch.Size([16, 20]),\n# torch.Size([16, 20]),\n# torch.Size([16, 20]))\n\n\n```\n\n<!-- # %%\ndataset = D4RLDataset(\n dataset_id= \"hopper-medium-expert-v2.2405\",\n d4rl_name= \"hopper-medium-expert-v2\",\n env_id= \"Hopper-v4\",\n)\n\n# if you are download it once\ndataset = D4RLDataset(\n dataset_id= \"hopper-medium-expert-v2.2405\",\n) -->\n<!-- See all datasets [here](https://github.com/pytorch/rl/blob/3a7cf6af2a08089f11e0ed8cad3dd1cea0e253fb/torchrl/data/datasets/d4rl_infos.py) -->\n",
"bugtrack_url": null,
"license": null,
"summary": null,
"version": "0.2409.7",
"project_urls": {
"Homepage": "https://github.com/fuyutarow/torchlure",
"Repository": "https://github.com/fuyutarow/torchlure"
},
"split_keywords": [
"pytorch"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "4bce3d00b06fa25c3074343292200bcbcfc158b2f08455f899ec0f0df325287c",
"md5": "cfcf1b629838402eab32549140f83b4b",
"sha256": "012bdebe711537eb00939c058d97f7e992618afb35ea1709558245cc2911ff99"
},
"downloads": -1,
"filename": "torchlure-0.2409.7-py3-none-any.whl",
"has_sig": false,
"md5_digest": "cfcf1b629838402eab32549140f83b4b",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.8",
"size": 21852,
"upload_time": "2024-09-29T19:07:12",
"upload_time_iso_8601": "2024-09-29T19:07:12.337591Z",
"url": "https://files.pythonhosted.org/packages/4b/ce/3d00b06fa25c3074343292200bcbcfc158b2f08455f899ec0f0df325287c/torchlure-0.2409.7-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "a805909f9681346ae03d7f0580f58a0045732d32344bd54a9f690f32aa6dac81",
"md5": "d6c311a35fd723dff18ad1b6a137c67b",
"sha256": "6974e60b3dd2284e9244ec39dd2e51c9e5660a19cb9e761a5348f28b1c89bbe3"
},
"downloads": -1,
"filename": "torchlure-0.2409.7.tar.gz",
"has_sig": false,
"md5_digest": "d6c311a35fd723dff18ad1b6a137c67b",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 88415,
"upload_time": "2024-09-29T19:07:14",
"upload_time_iso_8601": "2024-09-29T19:07:14.453281Z",
"url": "https://files.pythonhosted.org/packages/a8/05/909f9681346ae03d7f0580f58a0045732d32344bd54a9f690f32aa6dac81/torchlure-0.2409.7.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-09-29 19:07:14",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "fuyutarow",
"github_project": "torchlure",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"lcname": "torchlure"
}