# rlmodule
Flexible reinforcement learning model instantiator library.

Simple yet capable function approximators: MLP and recurrent (RNN / GRU / LSTM) networks, for algorithms such as PPO and SAC, with shared or separate policy/value models.

Currently only skrl is supported, but the library is intended to become framework agnostic in a later expansion; support for additional algorithms is planned.
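For context, the sketch below shows roughly what a separate policy/value model pair looks like when written by hand against skrl's torch `Model` API; rlmodule's instantiators are meant to build equivalent models (MLP or recurrent, shared or separate) from configuration instead. The classes and layer sizes here are illustrative only and do not show rlmodule's own API.

```
# Illustrative sketch using plain skrl (not rlmodule's API): a separate
# Gaussian policy and deterministic value model, as used by PPO-style agents.
import torch
import torch.nn as nn

from skrl.models.torch import DeterministicMixin, GaussianMixin, Model


class Policy(GaussianMixin, Model):
    def __init__(self, observation_space, action_space, device,
                 clip_actions=False, clip_log_std=True, min_log_std=-20, max_log_std=2):
        Model.__init__(self, observation_space, action_space, device)
        GaussianMixin.__init__(self, clip_actions, clip_log_std, min_log_std, max_log_std)
        # Hand-written MLP body; rlmodule aims to generate such bodies from a config.
        self.net = nn.Sequential(
            nn.Linear(self.num_observations, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, self.num_actions),
        )
        self.log_std_parameter = nn.Parameter(torch.zeros(self.num_actions))

    def compute(self, inputs, role):
        return self.net(inputs["states"]), self.log_std_parameter, {}


class Value(DeterministicMixin, Model):
    def __init__(self, observation_space, action_space, device, clip_actions=False):
        Model.__init__(self, observation_space, action_space, device)
        DeterministicMixin.__init__(self, clip_actions)
        self.net = nn.Sequential(
            nn.Linear(self.num_observations, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, 1),
        )

    def compute(self, inputs, role):
        return self.net(inputs["states"]), {}
```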
<!--
# TODO
# Some envs have a default working range (e.g. Pendulum: -2, 2); propagate that in.
# CNN support
# Write README tutorial
# Run & fix pre-commit
# Annotate cfgs in modules - why doesn't TYPE_CHECKING work?
# Extensive comments
# Launch new version to pip
# Import new version in Isaac Lab
# Lazy linear? What is it?
# Add back the random model run function
-->
## How to run
### Install rlmodule from local code
- Make sure you are in the base rlmodule directory.
- Create and activate a virtual environment.
```
python3 -m venv venv
source venv/bin/activate
```
- Install the library from local code.
```
pip install -e .
```
Note: installation may sometimes fail if a runs/ directory is present; if so, remove it (TODO: fix):
```
rm -rf runs
```
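Optionally, as a quick sanity check (not part of the original steps), verify that the editable install is importable:
```
python3 -c "import rlmodule; print('ok')"
```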
### Run a chosen example
```
python3 rlmodule/skrl/torch/examples/gymnasium/pendulum_ppo_mlp_separate_model.py
```
Optional: view the run results in TensorBoard
```
tensorboard --logdir=runs/
```
## Publish a new version to PyPI

Bump the version in pyproject.toml.
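The version lives under the `[project]` table of pyproject.toml; for example (the bumped number below is only a placeholder):
```
[project]
name = "rlmodule"
version = "0.1.6.4"  # placeholder: replace with the actual next version
```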
```
pip install build twine
```
```
rm -rf runs
python -m build
```
```
twine upload dist/*
```
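Optionally (a suggestion, not part of the original workflow), upload to TestPyPI first to verify the package before the real release:
```
twine upload --repository testpypi dist/*
```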