# JoyRL
[![PyPI](https://img.shields.io/pypi/v/joyrl)](https://pypi.org/project/joyrl/) [![GitHub issues](https://img.shields.io/github/issues/datawhalechina/joyrl)](https://github.com/datawhalechina/joyrl/issues) [![GitHub stars](https://img.shields.io/github/stars/datawhalechina/joyrl)](https://github.com/datawhalechina/joyrl/stargazers) [![GitHub forks](https://img.shields.io/github/forks/datawhalechina/joyrl)](https://github.com/datawhalechina/joyrl/network) [![GitHub license](https://img.shields.io/github/license/datawhalechina/joyrl)](https://github.com/datawhalechina/joyrl/blob/master/LICENSE)
`JoyRL` is a parallel reinforcement learning library based on PyTorch and Ray. Unlike existing RL libraries, `JoyRL` aims to free users from the burden of implementing algorithms with tricky details and unfriendly APIs. JoyRL is designed so that you can train and test RL algorithms with **only a hyperparameter configuration**, which is much easier for beginners to learn and use. JoyRL also supports plenty of state-of-the-art RL algorithms, including **RLHF (the core of ChatGPT)** (see the algorithms below), and provides a **modularized framework** so users can customize their own algorithms and environments.
## Install
⚠️ Note: do not install JoyRL through any mirror source!
```bash
# you need to install Anaconda first
conda create -n joyrl python=3.10
conda activate joyrl
pip install -U joyrl
```
Install PyTorch (pick the build that matches your hardware):
```bash
# CPU
pip install torch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1
# CUDA 11.8
pip install torch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1 --index-url https://download.pytorch.org/whl/cu118
# CUDA 12.1
pip install torch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1 --index-url https://download.pytorch.org/whl/cu121
```
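Whichever build you choose, it is worth confirming that PyTorch actually sees the hardware you expect before training. A minimal check (plain PyTorch, nothing JoyRL-specific):

```python
import torch

print("torch version:", torch.__version__)           # expect 2.2.1 with the commands above
print("CUDA available:", torch.cuda.is_available())  # False for the CPU-only build
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```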
## Usage
### Quick Start
The following is a quick demo of JoyRL. First create a YAML file to **configure the hyperparameters**, then run the command below in your terminal. That's all you need to do to train a DQN agent on the CartPole-v1 environment.
```bash
joyrl --yaml ./presets/ClassControl/CartPole-v1/CartPole-v1_DQN.yaml
```
Alternatively, you can run the following code in your own Python file.
```python
import joyrl
if __name__ == "__main__":
    print(joyrl.__version__)
    yaml_path = "./presets/ClassControl/CartPole-v1/CartPole-v1_DQN.yaml"
    joyrl.run(yaml_path=yaml_path)
```
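A common workflow is to treat the preset files under `presets/` in the repository as templates: copy one, adjust the hyperparameters in the copy, and pass the new path to `joyrl.run`. The sketch below relies only on the `joyrl.run(yaml_path=...)` entry point shown above; the `my_configs` directory is just a hypothetical location for your own copies.

```python
import os
import shutil

import joyrl

if __name__ == "__main__":
    # Start from an existing preset and keep a private copy to tweak.
    preset = "./presets/ClassControl/CartPole-v1/CartPole-v1_DQN.yaml"
    my_config = "./my_configs/CartPole-v1_DQN.yaml"  # hypothetical path for your edited copy
    os.makedirs(os.path.dirname(my_config), exist_ok=True)
    if not os.path.exists(my_config):
        shutil.copy(preset, my_config)  # edit this copy to change hyperparameters
    joyrl.run(yaml_path=my_config)
```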
## Documentation
More tutorials and API documentation are hosted on [JoyRL docs](https://datawhalechina.github.io/joyrl/) or [JoyRL 中文文档](https://datawhalechina.github.io/joyrl-book/#/joyrl_docs/main).
## Algorithms
| Name | Reference | Author | Notes |
| :--------------: | :----------------------------------------------------------: | :-------------------------------------------: | :---: |
| Q-learning | [RL introduction](https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf) | [johnjim0816](https://github.com/johnjim0816) | |
| Sarsa | [RL introduction](https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf) | [johnjim0816](https://github.com/johnjim0816) | |
| DQN | [DQN Paper](https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf) | [johnjim0816](https://github.com/johnjim0816) | |
| Double DQN | [DoubleDQN Paper](https://arxiv.org/abs/1509.06461) | [johnjim0816](https://github.com/johnjim0816) | |
| Dueling DQN | [DuelingDQN Paper](https://arxiv.org/abs/1511.06581) | [johnjim0816](https://github.com/johnjim0816) | |
| NoisyDQN | [NoisyDQN Paper](https://arxiv.org/pdf/1706.10295.pdf) | [johnjim0816](https://github.com/johnjim0816) | |
| DDPG | [DDPG Paper](https://arxiv.org/abs/1509.02971) | [johnjim0816](https://github.com/johnjim0816) | |
| TD3 | [TD3 Paper](https://arxiv.org/pdf/1802.09477) | [johnjim0816](https://github.com/johnjim0816) | |
| A2C/A3C | [A3C Paper](https://arxiv.org/abs/1602.01783) | [johnjim0816](https://github.com/johnjim0816) | |
| PPO | [PPO Paper](https://arxiv.org/abs/1707.06347) | [johnjim0816](https://github.com/johnjim0816) | |
| SoftQ | [SoftQ Paper](https://arxiv.org/abs/1702.08165) | [johnjim0816](https://github.com/johnjim0816) | |
## Why JoyRL?
| RL Platform | GitHub Stars | # of Alg. <sup>(1)</sup> | Custom Env | Async Training | RNN Support | Multi-Head Observation | Backend |
| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------ | ------------------------------ | ------------------ | ------------------ | ---------------------- | ------------------------------------------------- |
| [Baselines](https://github.com/openai/baselines) | [![GitHub stars](https://img.shields.io/github/stars/openai/baselines)](https://github.com/openai/baselines/stargazers) | 9 | :heavy_check_mark: (gym) | :x: | :heavy_check_mark: | :x: | TF1 |
| [Stable-Baselines](https://github.com/hill-a/stable-baselines) | [![GitHub stars](https://img.shields.io/github/stars/hill-a/stable-baselines)](https://github.com/hill-a/stable-baselines/stargazers) | 11 | :heavy_check_mark: (gym) | :x: | :heavy_check_mark: | :x: | TF1 |
| [Stable-Baselines3](https://github.com/DLR-RM/stable-baselines3) | [![GitHub stars](https://img.shields.io/github/stars/DLR-RM/stable-baselines3)](https://github.com/DLR-RM/stable-baselines3/stargazers) | 7 | :heavy_check_mark: (gym) | :x: | :x: | :heavy_check_mark: | PyTorch |
| [Ray/RLlib](https://github.com/ray-project/ray/tree/master/rllib/) | [![GitHub stars](https://img.shields.io/github/stars/ray-project/ray)](https://github.com/ray-project/ray/stargazers) | 16 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | TF/PyTorch |
| [SpinningUp](https://github.com/openai/spinningup) | [![GitHub stars](https://img.shields.io/github/stars/openai/spinningup)](https://github.com/openai/spinningup/stargazers) | 6 | :heavy_check_mark: (gym) | :x: | :x: | :x: | PyTorch |
| [Dopamine](https://github.com/google/dopamine) | [![GitHub stars](https://img.shields.io/github/stars/google/dopamine)](https://github.com/google/dopamine/stargazers) | 7 | :x: | :x: | :x: | :x: | TF/JAX |
| [ACME](https://github.com/deepmind/acme) | [![GitHub stars](https://img.shields.io/github/stars/deepmind/acme)](https://github.com/deepmind/acme/stargazers) | 14 | :heavy_check_mark: (dm_env) | :x: | :heavy_check_mark: | :heavy_check_mark: | TF/JAX |
| [keras-rl](https://github.com/keras-rl/keras-rl) | [![GitHub stars](https://img.shields.io/github/stars/keras-rl/keras-rl)](https://github.com/keras-rl/keras-rl/stargazers) | 7 | :heavy_check_mark: (gym) | :x: | :x: | :x: | Keras |
| [cleanrl](https://github.com/vwxyzjn/cleanrl) | ![GitHub stars](https://img.shields.io/github/stars/vwxyzjn/cleanrl) | 9 | :heavy_check_mark: (gym) | :x: | :x: | :x: | [poetry](https://github.com/python-poetry/poetry) |
| [rlpyt](https://github.com/astooke/rlpyt) | [![GitHub stars](https://img.shields.io/github/stars/astooke/rlpyt)](https://github.com/astooke/rlpyt/stargazers) | 11 | :x: | :x: | :heavy_check_mark: | :heavy_check_mark: | PyTorch |
| [ChainerRL](https://github.com/chainer/chainerrl) | [![GitHub stars](https://img.shields.io/github/stars/chainer/chainerrl)](https://github.com/chainer/chainerrl/stargazers) | 18 | :heavy_check_mark: (gym) | :x: | :heavy_check_mark: | :x: | Chainer |
| [Tianshou](https://github.com/thu-ml/tianshou) | [![GitHub stars](https://img.shields.io/github/stars/thu-ml/tianshou)](https://github.com/thu-ml/tianshou/stargazers) | 20 | :heavy_check_mark: (Gymnasium) | :x: | :heavy_check_mark: | :heavy_check_mark: | PyTorch |
| [JoyRL](https://github.com/datawhalechina/joyrl) | ![GitHub stars](https://img.shields.io/github/stars/datawhalechina/joyrl) | 11 | :heavy_check_mark: (Gymnasium) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | PyTorch |
Here are some other highlights of JoyRL:
* A companion series of Chinese courses, [JoyRL Book](https://github.com/datawhalechina/joyrl-book) (English version in progress), suitable for beginners who want to learn the theory alongside the library
## Contributors
<table border="0">
<tbody>
<tr align="center" >
<td>
<a href="https://github.com/JohnJim0816"><img width="70" height="70" src="https://github.com/JohnJim0816.png?s=40" alt="pic"></a><br>
<a href="https://github.com/JohnJim0816">John Jim</a>
<p>Peking University</p>
</td>
<td>
<a href="https://github.com/qiwang067"><img width="70" height="70" src="https://github.com/qiwang067.png?s=40" alt="pic"></a><br>
<a href="https://github.com/qiwang067">Qi Wang</a>
<p>Shanghai Jiao Tong University</p>
</td>
<td>
<a href="https://github.com/yyysjz1997"><img width="70" height="70" src="https://github.com/yyysjz1997.png?s=40" alt="pic"></a><br>
<a href="https://github.com/yyysjz1997">Yiyuan Yang</a>
<p>University of Oxford</p>
</td>
</tr>
</tbody>
</table>