| Field | Value |
| --- | --- |
| Name | rlportfolio |
| Version | 0.2.0 |
| home_page | None |
| Summary | Reinforcement learning framework for portfolio optimization tasks. |
| upload_time | 2024-12-04 04:52:26 |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.9 |
| license | The MIT License (MIT) Copyright (c) 2024 Caio de Souza Barbosa Costa. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. |
| keywords | deep-learning, reinforcement-learning, pytorch, finance, portfolio-optimization, portfolio-management, asset-allocation |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
![RLPortfolio Logo](https://raw.githubusercontent.com/CaioSBC/RLPortfolio/refs/heads/main/figs/rlportfolio_title.png)
------------------------------------------
RLPortfolio is a Python package that provides the tools needed to implement, train, and test reinforcement learning agents that optimize a financial portfolio:
- A training simulation environment that implements the state-of-the-art mathematical formulation commonly used in the research field.
- Two policy gradient training algorithms that are specifically built to solve the portfolio optimization task.
- Four cutting-edge deep neural networks implemented in PyTorch that can be used as the agent policy.
[Click here to access the library documentation!](https://rlportfolio.readthedocs.io/en/latest/)
**Note**: This project is intended mainly for academic purposes. Therefore, be careful if you use RLPortfolio to trade real money, and consult a professional before investing if possible.
## About RLPortfolio
This library is composed of the following components:
| Component | Description |
| ---- | --- |
| **rlportfolio.algorithm** | A suite of training algorithms tailored to portfolio optimization agents. |
| **rlportfolio.data** | Functions and classes to perform data preprocessing. |
| **rlportfolio.environment** | Training reinforcement learning environment. |
| **rlportfolio.policy** | A collection of deep neural networks to be used in the agent. |
| **rlportfolio.utils** | Utility functions for convenience. |
### A Modular Library
RLPortfolio is implemented with a modular architecture in mind so that it can be used in conjunction with several other libraries. To effectively train an agent, you need three constituents:
- A training algorithm.
- A simulation environment.
- A policy neural network (depending on the algorithm, a critic neural network might also be necessary).
The figure below shows the dynamics between those components. All of them are present in this library, but users are free to use other libraries or custom implementations.
![Architecture](https://raw.githubusercontent.com/CaioSBC/RLPortfolio/refs/heads/main/figs/architecture.png)
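As a rough illustration of this modularity, the sketch below defines a custom PyTorch policy network that maps a state tensor to portfolio weights. The `PolicyGradient` and `PortfolioOptimizationEnv` classes are the library's own; the `policy=` keyword in the commented line is a hypothetical wiring for illustration only, so check the documentation for the actual constructor signature.

```python
import torch
import torch.nn as nn


class MyPolicy(nn.Module):
    """A toy policy that maps a flat state vector to portfolio weights."""

    def __init__(self, n_features: int, n_assets: int):
        super().__init__()
        # one output per asset, plus one for the cash position
        self.net = nn.Linear(n_features, n_assets + 1)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # softmax keeps the weights non-negative and summing to one,
        # as a valid portfolio allocation requires
        return torch.softmax(self.net(state), dim=-1)


# hypothetical wiring into the training algorithm (argument name assumed):
# algorithm = PolicyGradient(environment, policy=MyPolicy)
```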
### Modern Standards and Libraries
Unlike other implementations in the research field, this library uses modern versions of its dependencies ([PyTorch](https://pytorch.org/), [Gymnasium](https://gymnasium.farama.org/), [NumPy](https://numpy.org/) and [Pandas](https://pandas.pydata.org/)) and follows standards that allow it to be used in conjunction with other libraries.
### Easy to Use and Customizable
RLPortfolio aims to be easy to use, and its code is thoroughly documented following the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html) so that users can understand how to use its classes and functions. Additionally, the training components are highly customizable, so different training routines can be run without modifying the code directly.
### Integration with Tensorboard
The algorithms implemented in the package are integrated with [Tensorboard](https://www.tensorflow.org/tensorboard/get_started), automatically providing graphs of the main metrics during training, validation and testing.
![Tensorboard](https://raw.githubusercontent.com/CaioSBC/RLPortfolio/refs/heads/main/figs/tensorboard.png)
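To view these dashboards, launch Tensorboard pointed at the directory where the algorithm writes its logs. A minimal sketch, assuming the `runs` directory (PyTorch's `SummaryWriter` default, not confirmed by these docs):

```python
import subprocess

# launch the Tensorboard web UI; "runs" is PyTorch's SummaryWriter
# default log directory and is assumed here, so adjust it to wherever
# the algorithm actually writes its logs
subprocess.run(["tensorboard", "--logdir", "runs"])
```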
### Focus on Reliability
In order to be as reliable as possible, this project places a strong emphasis on writing unit tests for new implementations. Therefore, RLPortfolio can be used with confidence to reproduce and compare research studies.
## Installation
To install this package from source, clone this repository and run the following command from its root directory (the package is also published on PyPI, so `pip install rlportfolio` works as well):
```bash
$ pip install .
```
## Interface
RLPortfolio's interface is easy to use. To train an agent, you first need to instantiate an environment object. The environment consumes a dataframe containing the time series of stock prices.
```python
import pandas as pd
from rlportfolio.environment import PortfolioOptimizationEnv
# dataframe with training data (market price time series)
df_train = pd.read_csv("train_data.csv")
environment = PortfolioOptimizationEnv(
    df_train,  # data to be used
    100000     # initial value of the portfolio
)
```
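For illustration, the snippet below builds a tiny dataframe in a long format, with one row per (date, ticker) pair. The column names (`date`, `tic`, `close`, `high`, `low`) are an assumption for this sketch only; consult the documentation for the exact schema `PortfolioOptimizationEnv` expects.

```python
import pandas as pd

# hypothetical long-format price data: one row per (date, ticker) pair;
# the column names are assumed, not taken from the library's docs
df_train = pd.DataFrame({
    "date":  ["2020-01-02", "2020-01-02", "2020-01-03", "2020-01-03"],
    "tic":   ["AAPL", "MSFT", "AAPL", "MSFT"],
    "close": [74.36, 158.62, 73.58, 157.16],
    "high":  [75.14, 159.95, 74.99, 159.02],
    "low":   [73.80, 158.06, 73.19, 156.51],
})
```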
Then, instantiate the policy gradient algorithm to create an agent that acts in the environment you just created.
```python
from rlportfolio.algorithm import PolicyGradient
algorithm = PolicyGradient(environment)
```
Finally, you can train the agent with the chosen algorithm by calling:
```python
# train the agent for 10000 episodes
algorithm.train(10000)
```
You can now evaluate the agent's performance in another environment containing data from a different time period.
```python
# dataframe with testing data (market price time series)
df_test = pd.read_csv("test_data.csv")
environment_test = PortfolioOptimizationEnv(
    df_test,  # data to be used
    100000    # initial value of the portfolio
)
# test the agent in the test environment
algorithm.test(environment_test)
```
The test method returns a dictionary containing the metrics of the test.
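A minimal way to inspect that result, assuming only what the line above states (a dictionary of metrics is returned; the specific keys are not documented here):

```python
# print every metric returned by the test run; the available keys
# depend on the library and are not assumed here
metrics = algorithm.test(environment_test)
for name, value in metrics.items():
    print(f"{name}: {value}")
```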