Name: akademy
Version: 0.1.51
Home page: https://github.com/alphazwest/akademy
Summary: akademy: A Reinforcement Learning Framework
Author: Zack West
Requires Python: >=3.7
License: BSD-3-Clause
Upload time: 2023-03-29 00:16:07
Keywords: reinforcement learning, quantitative trading, fintech, trading bot, algorithmic trading, finance, automated trading, neural networks, artificial intelligence, machine learning
# Akademy

Akademy is a module containing composable object classes for developing
reinforcement learning algorithms focused on quantitative trading and
time-series forecasting. This module is a work-in-progress and should not be
assumed to be well designed or free of bugs.

# Overview
Akademy is designed around an `Agent`-`Environment` model: `Agent`-class
objects ingest information from `Environment`-class objects (`Env`) and produce
an `Action`, which is applied to the `Environment`, resulting in a change of
`State` and, possibly, a reward that provides feedback to the agent.

*Note*: this module does not provide any training routines -- only the object
classes that can be used to support the implementation of custom training routines.

# Getting Started

To install `akademy`, use the following command in the desired Python 3.7+
environment:

`pip install akademy`

Once installed, developers have access to the `Agent`, `TradeEnv`, and `Network`
classes with which to design Reinforcement Learning algorithms and train models.

Sample training routine:

```python
from akademy.models.envs import TradeEnv
from akademy.models.agents import DQNAgent
from akademy.common.utils import load_spy_daily

# loads the dataset used during training
data = load_spy_daily(count=2500)

# load the Trading Environment
env = TradeEnv(
    data=data,
    window=50,
    asset="spy",
)

# load the agent to train
agent = DQNAgent(
    action_count=env.action_space.n,
    state_shape=env.observation_space.shape
)

# run a user-defined training routine (not provided by akademy)
training_routine(
    agent=agent,
    env=env
)
```
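Since akademy leaves the training loop to the user, a minimal sketch of what `training_routine` might look like is shown below. The `reset`/`step`/`act`/`learn` method names on the stub classes are illustrative assumptions, not akademy's actual API.

```python
def training_routine(agent, env, episodes=10):
    """Hypothetical episodic loop; method names are illustrative only."""
    for _ in range(episodes):
        state, info = env.reset()
        done = False
        while not done:
            action = agent.act(state)                       # choose an action
            state_next, reward, terminated, truncated, info = env.step(action)
            agent.learn(state, action, reward, state_next)  # update the model
            state = state_next
            done = terminated or truncated


class StubEnv:
    """Tiny stand-in environment that terminates after 3 steps."""
    def __init__(self):
        self.t = 0

    def reset(self):
        self.t = 0
        return 0, {}

    def step(self, action):
        self.t += 1
        terminated = self.t >= 3
        return self.t, 1.0, terminated, False, {}


class StubAgent:
    """Stand-in agent that counts how many updates it receives."""
    def __init__(self):
        self.updates = 0

    def act(self, state):
        return 0

    def learn(self, state, action, reward, state_next):
        self.updates += 1
```

With the stubs above, two episodes of three steps each produce six `learn` calls, which is a quick way to confirm the loop wiring before swapping in real classes.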

## Tests
Unit testing can be run via the following command:

`python -m unittest`

For more detailed output, the `--verbose` flag can be used. For further usage,
consult the `unittest` module documentation.

## Available Data
This module comes with minimal data for Agents and Environments to train on.
The currently available data is listed below, along with sources for the most
up-to-date versions:

### 1. S&P500 
Location: `/data/SPY.CSV`\
Start:  `1993-01-29`\
End:    `2023-01-23`\
Total Rows: `7,454` (excludes header)\
Header: `Date,Open,High,Low,Close,Adj Close,Volume`\
Source: https://finance.yahoo.com/quote/SPY/history?p=SPY

*Note*: any data can be used via conversion into a Pandas DataFrame, but it
must contain a `date` field along with pricing data for `open`, `high`, `low`,
and `close` as well as `volume`, such that each row has at least those 6
features, or the latter 5 with a date-based index.
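As a quick sanity check before building a DataFrame, the six required columns can be verified against a CSV header. `validate_columns` below is a hypothetical helper written for illustration, not part of akademy:

```python
import csv
import io

# the six features the environments expect, per the note above
REQUIRED = ("date", "open", "high", "low", "close", "volume")


def validate_columns(csv_text):
    """Return the required columns missing from a CSV's header, if any."""
    reader = csv.DictReader(io.StringIO(csv_text))
    present = {name.strip().lower() for name in reader.fieldnames}
    return [col for col in REQUIRED if col not in present]
```

The bundled SPY header (`Date,Open,High,Low,Close,Adj Close,Volume`) passes this check, since comparison is case-insensitive and extra columns like `Adj Close` are ignored.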

# Notes

## Gym vs. Gymnasium
The `Gym` project by OpenAI has been sunset and is now maintained as `Gymnasium`
by the [Farama-Foundation](https://github.com/Farama-Foundation/Gymnasium). The
`Env` classes here use the newer `Gymnasium` package which, among other
differences, returns an extra item from the `step` method indicating whether an
environment has been truncated.
[See here](https://github.com/Farama-Foundation/Gymnasium/blob/main/gymnasium/core.py#L63).
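The practical consequence is that `step` returns a 5-tuple rather than Gym's old 4-tuple, so loop code must unpack `terminated` and `truncated` separately. A minimal sketch with a stub environment (not akademy's `TradeEnv`):

```python
class FiveTupleEnv:
    """Stub illustrating the Gymnasium step signature."""
    def step(self, action):
        observation, reward, info = 0, 0.0, {}
        terminated, truncated = False, True  # e.g. a time-limit cutoff
        return observation, reward, terminated, truncated, info


# Gymnasium-style unpacking: five items, two distinct end conditions
obs, reward, terminated, truncated, info = FiveTupleEnv().step(0)
done = terminated or truncated  # combine both when only "episode over" matters
```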

## PyTorch
PyTorch requires some additional setup consideration depending on use case.
Akademy uses an approach whereby CPU-based training and inference are possible
via parameterized function calls; GPU use (e.g. CUDA), however, requires local
configuration. [See here](https://pytorch.org/get-started/locally/) for a more
in-depth discussion and guide.
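One way to parameterize CPU vs. GPU use is a small resolver that falls back to CPU whenever CUDA (or PyTorch itself) is unavailable. `resolve_device` is a hypothetical helper sketched for illustration, not akademy's API:

```python
def resolve_device(requested: str = "cpu") -> str:
    """Return 'cuda' only when requested and actually available, else 'cpu'."""
    if requested == "cuda":
        try:
            import torch  # deferred so CPU-only setups need not install it here
            if torch.cuda.is_available():
                return "cuda"
        except ImportError:
            pass
    return "cpu"
```

Passing the resolved string to model and tensor constructors keeps the same code path working on both CPU-only and CUDA machines.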

This module currently uses PyTorch 1.*, though a 2.* release is imminent and an
upgrade to that version is planned.

            
