# regelum-control

- **Version:** 0.3.3
- **Summary:** Regelum is a flexibly configurable framework for agent-environment simulation with a menu of predictive and reinforcement learning pipelines.
- **Author:** Georgiy Malaniya
- **Requires Python:** <4.0,>=3.9
- **License:** MIT
- **Uploaded:** 2024-10-20 19:33:09

![image](https://regelum.aidynamic.group/gfx/regelum_full_logo.png)

# About

`Regelum-control` is a framework for optimal control and reinforcement learning (RL) in continuous-time dynamical systems. It is built for researchers and engineers in reinforcement learning and control theory.

Detailed documentation is available [here](https://regelum.aidynamic.group/).

Explore the [regelum-playground repo](https://github.com/osinenkop/regelum-playground) for ready-to-use examples.

# Features

- __Run pre-configured regelum algorithms with ease__. Regelum offers a set of implemented, ready-to-use algorithms for RL and optimal control.
It provides flexibility through multiple optimization backends, including CasADi and PyTorch, to accommodate various computational needs.

- __Stabilize your dynamical system with Regelum__. Regelum targets optimal control and RL tasks
in continuous-time dynamical systems.
It comes equipped with an array of default systems,
alongside a detailed tutorial with clear instructions
for instantiating your own environments.

- __Manage your experiment data__. Regelum seamlessly captures
every detail of your experiment with little to no configuration required.
From parameters to performance metrics, every datum is recorded. Through its integration with [MLflow](https://mlflow.org/),
Regelum streamlines tracking, comparison, and real-time monitoring of metrics.

- __Reproduce your experiments with ease__. Commit hashes and diffs for every experiment are stored as well,
so you can reproduce any run at any time with simple terminal commands.

- __Configure your experiments efficiently__. Our [Hydra](https://hydra.cc/) fork within Regelum introduces enhanced functionality,
making the configuration of your RL and optimal control tasks more accessible and user-friendly.

- __Fine-tune your models to perfection__ and achieve peak performance with minimal effort.
By integrating with Hydra, Regelum inherits Hydra's powerful hyperparameter-tuning capability.

# Install regelum-control with pip

```bash
pip install regelum-control
```
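If you want to confirm the installation afterwards, `pip` itself can report the installed package and its version (a generic pip command, nothing Regelum-specific):

```shell
# Show installed package metadata; exits non-zero if the package is missing:
pip show regelum-control
```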

# Developer setup

1. Clone the repository.
2. Run:
   ```bash
   pip install -e .
   bash scripts/developer-setup.sh
   ```
3. Check `requirements-dev.txt` in the root of the repo for additional details.


# License

This project is licensed under the terms of the [MIT license](./LICENSE).

## Bibtex reference

```bibtex
@misc{regelum2024,
  author       = {Osinenko, Pavel and Yaremenko, Grigory and Malaniya, Georgiy and Bolychev, Anton},
  title        = {Regelum: a framework for simulation, control and reinforcement learning},
  howpublished = {\url{https://github.com/osinenkop/regelum-control}},
  year         = {2024},
  note         = {Licensed under the MIT License}
}
```

            
