<p align="center">
<a href="https://docs.reinforceui-studio.com/welcome">
<img src="https://raw.githubusercontent.com/dvalenciar/ReinforceUI-Studio/main/media_resources/cover_RL.png" alt="ReinforceUI" width="100%">
</a>
</p>
<h1 align="center"> Reinforcement Learning Made Simple</h1>
<p align="center">
Intuitive, Powerful, and Hassle-Free RL Training & Monitoring – All in One Place.
</p>
<p align="center">
<a href="https://github.com/dvalenciar/ReinforceUI-Studio/actions/workflows/pytest.yml">
<img src="https://img.shields.io/github/actions/workflow/status/dvalenciar/ReinforceUI-Studio/pytest.yml?style=for-the-badge&logo=github&label=CI" alt="Build Status">
</a>
<a href="https://github.com/dvalenciar/ReinforceUI-Studio/actions/workflows/formatting.yml">
<img src="https://img.shields.io/github/actions/workflow/status/dvalenciar/ReinforceUI-Studio/formatting.yml?style=for-the-badge&label=Formatting&branch=main" alt="Formatting Status">
</a>
<a href="https://github.com/dvalenciar/ReinforceUI-Studio/actions/workflows/docker-publish.yml">
<img src="https://img.shields.io/github/actions/workflow/status/dvalenciar/ReinforceUI-Studio/docker-publish.yml?style=for-the-badge&logo=docker&label=Docker" alt="Docker Status">
</a>
<a href="https://pypi.org/project/reinforceui-studio/">
<img src="https://img.shields.io/pypi/v/reinforceui-studio?style=for-the-badge&logo=pypi&color=#44cc11" alt="PyPI version">
</a>
<a href="https://docs.reinforceui-studio.com/">
<img src="https://img.shields.io/website?url=https%3A%2F%2Fdocs.reinforceui-studio.com&up_message=online&down_color=red&style=for-the-badge&label=Docs" alt="Documentation">
</a>
</p>
<p align="center">
<img src="https://img.shields.io/badge/python-3.10--3.12-3670A0?style=for-the-badge&logo=python&logoColor=ffdd54" alt="Python Version">
<img src="https://img.shields.io/badge/Ubuntu-22--24-E95420?style=for-the-badge&logo=ubuntu&logoColor=white&&color=blue" alt="Ubuntu Version">
<img src="https://img.shields.io/badge/license-MIT-blue.svg?style=for-the-badge" alt="License">
</p>
<p align="center">
<img src="https://img.shields.io/badge/mlflow-%23d9ead3.svg?style=for-the-badge&logo=numpy&logoColor=blue" alt="MLflow">
<img src="https://img.shields.io/badge/PyTorch-%23EE4C2C.svg?style=for-the-badge&logo=PyTorch&logoColor=white" alt="PyTorch">
</p>
<p align="center">
<a href="https://pepy.tech/projects/reinforceui-studio">
<img src="https://img.shields.io/pepy/dt/reinforceui-studio?style=for-the-badge&logo=pypi&color=blue" alt="PyPI Downloads">
</a>
</p>
---
⭐️ If you find this project useful, please consider giving it a star! It really helps!
📚 Full Documentation: <a href="https://docs.reinforceui-studio.com" target="_blank">https://docs.reinforceui-studio.com</a>
🎬 Video Demo: [YouTube Tutorial](https://www.youtube.com/watch?v=itXyyttwZ1M)
---
## What is ReinforceUI Studio?
ReinforceUI Studio is a Python-based application designed to simplify Reinforcement Learning (RL) workflows through a beautiful, intuitive GUI.
No more memorizing commands, no more juggling extra repos – just train, monitor, and evaluate in a few clicks!
<p align="center"> <img src="https://raw.githubusercontent.com/dvalenciar/ReinforceUI-Studio/main/media_resources/new_main_window_example.gif" width="80%"> </p>
## Quickstart
Getting started with ReinforceUI Studio is fast and easy!
### 🖥️ Install and Run Locally
The easiest way to use ReinforceUI Studio is by installing it directly from PyPI. This provides a hassle-free installation, allowing you to get started quickly with no extra configuration.
Follow these simple steps:
1. Install ReinforceUI Studio from PyPI
```bash
pip install reinforceui-studio
```
2. Run the application
```bash
reinforceui-studio
```
That's it! You're ready to start training and monitoring your Reinforcement Learning agents through an intuitive GUI.
✅ Tip:
If you encounter any issues, check out the [Installation Guide](https://docs.reinforceui-studio.com/user_guides/installation) in the full documentation.
## Why you should use ReinforceUI Studio
* 🚀 Instant RL Training: Configure environments, select algorithms, set hyperparameters – all in seconds.
* 🖥️ Real-Time Dashboard: Watch your agents learn with live performance curves and metrics.
* 📊 MLflow Integration: Automatically log and visualize your training runs with MLflow.
* 🧠 Multi-Algorithm Support: Train and compare multiple algorithms simultaneously.
* 📦 Full Logging: Automatically save models, plots, evaluations, videos, and training stats.
* 🔧 Easy Customization: Adjust hyperparameters or load optimized defaults.
* 🧩 Environment Support: Works with MuJoCo, OpenAI Gymnasium, and DeepMind Control Suite.
* 📊 Final Comparison Plots: Auto-generate publishable comparison graphs for your reports or papers.
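All of these features wrap the same basic agent-environment interaction loop that every RL algorithm shares. The sketch below is a self-contained toy illustration of that loop; the `LineWorld` environment and the random policy are hypothetical stand-ins for demonstration only, not part of ReinforceUI Studio's API:

```python
import random


class LineWorld:
    """Toy environment: walk along a line, reward 1.0 for reaching the right end."""

    def __init__(self, size: int = 5):
        self.size = size
        self.pos = 0

    def reset(self) -> int:
        self.pos = 0
        return self.pos

    def step(self, action: int):
        # action: 0 = move left, 1 = move right (clamped to the line)
        self.pos = max(0, min(self.size - 1, self.pos + (1 if action == 1 else -1)))
        done = self.pos == self.size - 1
        reward = 1.0 if done else 0.0
        return self.pos, reward, done


def run_episode(env: LineWorld, policy, max_steps: int = 50) -> float:
    """The loop every RL algorithm drives: observe state, act, collect reward."""
    state = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = policy(state)
        state, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward


random.seed(0)
# A random policy sometimes stumbles to the goal; a trained policy goes straight there.
returns = [run_episode(LineWorld(), lambda s: random.randint(0, 1)) for _ in range(10)]
print(sum(returns) / len(returns))
```

ReinforceUI Studio runs this kind of loop for you against real MuJoCo, Gymnasium, or DeepMind Control Suite environments, while logging the returns it produces.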
## Quick Overview: Single and Multi-Algorithm Training
* **Single Training**: Choose an algorithm, tweak parameters, train & visualize.
* **Multi-Training**: Select several algorithms, run them simultaneously, and compare performances side-by-side.
<table align="center">
<tr>
<th>Selection Window</th>
<th>Main Window Display</th>
</tr>
<tr>
<td align="center"><img src="https://raw.githubusercontent.com/dvalenciar/ReinforceUI-Studio/main/media_resources/single_selection.png" width="400"></td>
<td align="center"><img src="https://raw.githubusercontent.com/dvalenciar/ReinforceUI-Studio/main/media_resources/single_selection_main_window.png" width="400"></td>
</tr>
<tr>
<td align="center"><img src="https://raw.githubusercontent.com/dvalenciar/ReinforceUI-Studio/main/media_resources/multiple_selection.png" width="400"></td>
<td align="center"><img src="https://raw.githubusercontent.com/dvalenciar/ReinforceUI-Studio/main/media_resources/multiple_selection_main_window.png" width="400"></td>
</tr>
</table>
## MLflow Integration
<table align="center">
<tr>
<th>Example of MLflow Dashboard</th>
<th>Example of MLflow Metrics</th>
</tr>
<tr>
<td align="center"><img src="https://raw.githubusercontent.com/dvalenciar/ReinforceUI-Studio/main/media_resources/mlflow_dashboard_1.png" width="400"></td>
<td align="center"><img src="https://raw.githubusercontent.com/dvalenciar/ReinforceUI-Studio/main/media_resources/mlflow_dashboard_2.png" width="400"></td>
</tr>
</table>
## Supported Algorithms
ReinforceUI Studio supports the following algorithms:
| Algorithm | Description |
| --- | --- |
| **CTD4** | Continuous Distributional Actor-Critic Agent with a Kalman Fusion of Multiple Critics |
| **DDPG** | Deep Deterministic Policy Gradient |
| **DQN** | Deep Q-Network |
| **PPO** | Proximal Policy Optimization |
| **SAC** | Soft Actor-Critic |
| **TD3** | Twin Delayed Deep Deterministic Policy Gradient |
| **TQC** | Controlling Overestimation Bias with Truncated Mixture of Continuous Distributional Quantile Critics |
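To give a flavor of what these algorithms compute: DQN learns action values via the Bellman update Q(s, a) ← Q(s, a) + α · (r + γ · maxₐ′ Q(s′, a′) − Q(s, a)), with a neural network standing in for the Q-table on large state spaces. A minimal tabular version of that update is sketched below; it is purely illustrative and not ReinforceUI Studio code:

```python
from collections import defaultdict


def q_update(q, state, action, reward, next_state,
             alpha=0.1, gamma=0.99, n_actions=2):
    """One tabular Q-learning step: the update rule DQN approximates with a network."""
    # Bootstrap from the best estimated value of the next state.
    best_next = max(q[(next_state, a)] for a in range(n_actions))
    td_target = reward + gamma * best_next
    # Move the current estimate a fraction alpha toward the TD target.
    q[(state, action)] += alpha * (td_target - q[(state, action)])
    return q[(state, action)]


q = defaultdict(float)
# Repeatedly observing reward 1.0 for action 1 in state 0 drives Q(0, 1) toward 1.0.
for _ in range(100):
    q_update(q, state=0, action=1, reward=1.0, next_state=1)
print(round(q[(0, 1)], 3))
```

The actor-critic methods in the table (DDPG, TD3, SAC, CTD4, TQC) build on the same temporal-difference idea, adding a separate policy network for continuous actions.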
## Results Examples
Below are some examples of results generated by ReinforceUI Studio, showcasing the evaluation curves along with snapshots of the policies in action.
| **Algorithm** | **Platform** | **Environment** | **Curve** | **Video** |
|---------------|--------------|--------------------|-----------------------------------------------------------------|--------------------------------------------------------------------------------------------------|
| **SAC** | DMCS | Walker Walk | <img src="https://raw.githubusercontent.com/dvalenciar/ReinforceUI-Studio/main/media_resources/result_examples/SAC_walker_walk.png" width="200"> | <img src="https://raw.githubusercontent.com/dvalenciar/ReinforceUI-Studio/main/media_resources/result_examples/walker_walk.gif" width="200"> |
| **TD3** | MuJoCo | HalfCheetah v5 | <img src="https://raw.githubusercontent.com/dvalenciar/ReinforceUI-Studio/main/media_resources/result_examples/TD3_HalfCheetah-v5.png" width="200"> | <img src="https://raw.githubusercontent.com/dvalenciar/ReinforceUI-Studio/main/media_resources/result_examples/HalfCheetah.gif" width="200"> |
| **CTD4** | DMCS | Ball in cup catch | <img src="https://raw.githubusercontent.com/dvalenciar/ReinforceUI-Studio/main/media_resources/result_examples/CTD4_ball_in_cup_catch.png" width="200"> | <img src="https://raw.githubusercontent.com/dvalenciar/ReinforceUI-Studio/main/media_resources/result_examples/ball_in_cup_catch.gif" width="200"> |
| **DQN** | Gymnasium | CartPole v1 | <img src="https://raw.githubusercontent.com/dvalenciar/ReinforceUI-Studio/main/media_resources/result_examples/DQN_CartPole-v1.png" width="200"> | <img src="https://raw.githubusercontent.com/dvalenciar/ReinforceUI-Studio/main/media_resources/result_examples/CartPole.gif" width="200"> |
## Citation
If you find ReinforceUI Studio useful for your research or project, please star this repo and cite it as follows:
```
@misc{reinforce_ui_studio_2025,
title = { ReinforceUI Studio: Simplifying Reinforcement Learning Training and Monitoring},
author = {David Valencia Redrovan},
year = {2025},
publisher = {GitHub},
url = {https://github.com/dvalenciar/ReinforceUI-Studio}
}
```
## Why Star ⭐ this Repository?
Your support helps the project grow!
If you like ReinforceUI Studio, please star ⭐ this repository and share it with friends, colleagues, and the RL community!
Together, we can make Reinforcement Learning accessible to everyone!
## License
ReinforceUI Studio is licensed under the MIT License. You are free to use, modify, and distribute this software,
provided that the original copyright notice and license are included in any copies or substantial portions of the software.