[![build](https://github.com/AOS55/deeptrade/workflows/build/badge.svg)](https://github.com/AOS55/deeptrade/actions?query=workflow%3Abuild)
[![Downloads](https://img.shields.io/pypi/dm/deeptrade-mbrl)](https://pypi.org/project/deeptrade-mbrl/)
[![PyPi Version](https://img.shields.io/pypi/v/deeptrade-mbrl)](https://pypi.org/project/deeptrade-mbrl/)
[![Codacy Badge](https://app.codacy.com/project/badge/Grade/b115af01c853420cac4503e23e783f96)](https://app.codacy.com/gh/AOS55/DeepTrade/dashboard?utm_source=gh&utm_medium=referral&utm_content=&utm_campaign=Badge_grade)
# DeepTrade
DeepTrade is a backtesting system and library designed to test and evaluate machine learning-based trading strategies.
## Getting Started
### Prerequisites
DeepTrade requires Python 3.10 or higher and [PyTorch](https://pytorch.org) 1.9.0 or higher.
We recommend using a [conda environment](https://docs.anaconda.com/miniconda/miniconda-install/) to manage dependencies. You can create a new environment with the following command:
```bash
conda create --name deeptrade-env python=3.10
conda activate deeptrade-env
```
### Installation
#### Standard Installation
> [!WARNING]
> The project is on PyPI as `deeptrade-mbrl`.
```bash
pip install deeptrade-mbrl
```
#### Development Installation
If you want to modify the library, clone the repository and set up a development environment:
```bash
git clone https://github.com/AOS55/deeptrade.git
pip install -e .
```
### Running Tests
To run the test suite, either run `pytest` from the repository root or target specific test directories:
```bash
python -m pytest tests/core
python -m pytest tests/instruments
```
## Usage
The core idea of DeepTrade is to backtest machine learning trading strategies on either synthetic or real data. Backtesting is split into two datasets: training data, available before the theoretical trading period begins, and backtest data, used to evaluate the strategy from that point onwards. The following provides an overview of the basic components of the library; examples of various backtests are provided in the [notebooks](notebooks) directory.
The train-backtest split is shown below:
<img align="center" src="https://github.com/AOS55/DeepTrade/blob/assets/assets/Backtest-Split.svg" width="500" alt="Train/Backtest split">
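The train-backtest split amounts to a simple index cut over a time-ordered price series. A minimal sketch of the idea (this is an illustration, not the library's API; `train_backtest_split` is a hypothetical helper name):

```python
def train_backtest_split(prices, split=0.8):
    """Cut a time-ordered series into train data (available up front)
    and backtest data (held out to evaluate the strategy)."""
    cut = int(len(prices) * split)
    return prices[:cut], prices[cut:]

prices = list(range(1000))  # stand-in for a price series
train, backtest = train_backtest_split(prices, split=0.8)
print(len(train), len(backtest))  # 800 200
```

Note that the cut is chronological, never shuffled: the agent must only ever train on data that would have been available before the backtest period starts.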
The classical [Markov Decision Process](https://en.wikipedia.org/wiki/Markov_decision_process) (MDP) is used to model the trading environment. The environment is defined by the following components:
- **Environment**: the trading environment represents the world the agent interacts with, $p(s'|s, a)$. This is responsible for providing the agent with observations, rewards, and other information about the state of the environment. The environment is defined by the `gymnasium` interface. These include:
- `SingleInstrument-v0`: A single instrument trading environment designed for a simple single asset portfolio.
- `MultiInstrument-v0`: A multi-instrument trading environment designed to hold a multiple asset portfolio.
Each of the trading environments has the following key components:
- **Market data**: either generated synthetically or loaded from a real dataset. Data is queried at time $t$, which advances by `period` each step of the env-agent loop.
- **Account**: represents the portfolio consisting of:
- `Margin`: the amount of cash available.
- `Positions`: the quantity of the asset held.
The observation of the environment is a numpy array consisting of:
- `returns`, $r_{t-\tau:t}$ from the asset price, usually log returns over `window` $\tau$.
- `position`, position of the portfolio in the asset.
- `margin`, the amount of cash available.
- **Agent**: The agent, $\pi(a|s)$, is the decision maker that interacts with the environment. The agent is responsible for selecting actions based on observations from the environment. Model Based RL (MBRL) agents are provided along with classical systematic trading strategies. These include:
- **MBRL agents**
- `PETS`: Probabilistic Ensemble Trajectory Sampling from [Chua et al. (2018)](https://arxiv.org/abs/1805.12114).
- `MBPO`: :construction: Model Based Policy Optimization from [Janner et al. (2019)](https://arxiv.org/abs/1906.08253). :construction:
- `Dreamer`: :construction: Dream to Control from [Hafner et al. (2019)](https://arxiv.org/abs/1912.01603). :construction:
- **Systematic agents**
- `HoldAgent`: A simple buy and hold strategy.
- `EWMACAgent`: Exponentially Weighted Moving Average Crossover, a momentum-based trend-following strategy.
- `BreakoutAgent`: A breakout strategy based on the high and low of the previous `n` periods.
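The EWMAC idea behind `EWMACAgent` can be sketched in a few lines: take a fast and a slow exponentially weighted moving average of price and trade in the direction of their difference. This is a minimal illustration of the crossover signal, not the library's `EWMACAgent` implementation (span defaults are illustrative):

```python
def ewma(prices, span):
    """Exponentially weighted moving average with alpha = 2 / (span + 1)."""
    alpha = 2.0 / (span + 1)
    avg = prices[0]
    for p in prices[1:]:
        avg = alpha * p + (1 - alpha) * avg
    return avg

def ewmac_signal(prices, fast=16, slow=64):
    """Positive in an uptrend (go long), negative in a downtrend (go short)."""
    return ewma(prices, fast) - ewma(prices, slow)

uptrend = [float(p) for p in range(100, 160)]
print(ewmac_signal(uptrend) > 0)  # fast EWMA sits above slow EWMA in an uptrend
```

The fast average tracks recent prices more closely than the slow one, so their difference measures the direction and rough strength of the current trend.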
The overall environment-agent loop is shown below:
<img align="center" src="https://github.com/AOS55/DeepTrade/blob/assets/assets/DeepTrade-Env.png" width="500" alt="Agent/Env loop">
### Environment
The following is a basic example of how to instantiate an environment with `deeptrade.env`:
```python
import gymnasium as gym
import deeptrade.env
env = gym.make("SingleInstrument-v0")
obs, info = env.reset()
terminated, truncated = False, False
while not (terminated or truncated):
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    print(f"Reward: {reward}")
```
<!-- ### Agent
```python
import deeptrade.model
``` -->
## Contributing
Please read the [CONTRIBUTING.md](CONTRIBUTING.md) for details on our code of conduct, and the process for submitting pull requests.
## Citing
If you use this project in your research, please consider citing it with:
```bibtex
@misc{deeptrade,
  author       = {DeepTrade},
  title        = {DeepTrade: A Model Based Reinforcement Learning System for Trading},
  year         = {2024},
  publisher    = {GitHub},
  journal      = {GitHub Repository},
  howpublished = {\url{https://github.com/AOS55/deeptrade}},
}
```
## Disclaimer
DeepTrade is intended for educational and research purposes; if you use it for live trading, you do so entirely at your own risk.