# AmeriOpt
A Python package for pricing American options using reinforcement learning.
The paper describing the method is available at [https://www.mdpi.com/1999-4893/17/9/400](https://www.mdpi.com/1999-4893/17/9/400).
![image info](https://raw.githubusercontent.com/Peymankor/AmeriOpt/main/example_mainimage.png)
To use the package, follow the steps below:
## Installation
```bash
pip install ameriopt
```
## Import the package
```python
from ameriopt.rl_policy import RLPolicy
```
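The steps below also call two GBM helpers, `simulate_GBM_training` and `scoring_sim_data`. Their module path is not shown in this README, so adjust the import to match where your installed version of `ameriopt` exposes them; the path below is only a placeholder:

```python
# Placeholder import path -- adjust to where your ameriopt version provides these helpers.
from ameriopt.gbm import simulate_GBM_training, scoring_sim_data
```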
## Set the parameters of the GBM model and the RL algorithm
- Number of Laguerre polynomials to be used in the RL model
```python
NUM_LAGUERRE = 5
```
- Number of training iterations for the RL algorithm
```python
TRAINING_ITERS = 3
```
- Small constant for numerical stability in the RL algorithm
```python
EPSILON = 1e-5
```
- Strike price of the option
```python
STRIKE_PRICE = 40
```
- Time to expiration (in years)
```python
EXPIRY_TIME = 1.0
```
- Risk-free interest rate
```python
INTEREST_RATE = 0.06
```
- Number of time intervals
```python
NUM_INTERVALS = 50
```
- Number of simulations for generating training data
```python
NUM_SIMULATIONS_TRAIN = 5000
```
- Number of simulations for testing the RL policy
```python
NUM_SIMULATIONS_TEST = 10000
```
- Spot price of the underlying asset at the start of the simulation
```python
SPOT_PRICE = 36.0
```
- Volatility of the underlying asset (annualized)
```python
VOLATILITY = 0.2
```
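These values correspond to the classic Longstaff-Schwartz American put example (spot 36, strike 40, 6% rate, 20% volatility, one-year expiry). As a small illustration, and assuming the contract being priced is an American put (the README does not state the payoff type explicitly), the intrinsic value that an exercise policy weighs against continuation is:

```python
import numpy as np

def put_payoff(stock_price, strike=STRIKE_PRICE):
    """Intrinsic value of an American put: max(K - S, 0), elementwise."""
    return np.maximum(strike - stock_price, 0.0)

# With SPOT_PRICE = 36.0 and STRIKE_PRICE = 40 the option starts in the money.
print(put_payoff(np.array([SPOT_PRICE])))  # [4.]
```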
## Simulate Training Data using Geometric Brownian Motion (GBM)
```python
training_data = simulate_GBM_training(
expiry_time=EXPIRY_TIME,
num_intervals=NUM_INTERVALS,
num_simulations=NUM_SIMULATIONS_TRAIN,
spot_price=SPOT_PRICE,
interest_rate=INTEREST_RATE,
volatility=VOLATILITY
)
```
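For orientation, the sketch below shows one common way to generate risk-neutral GBM paths with NumPy and the data layout such a simulation produces (one row per path, one column per time step, first column equal to the spot price). It is only an illustration of the underlying model, not the package's implementation of `simulate_GBM_training`, whose return format may differ:

```python
import numpy as np

def simulate_gbm_paths(spot, rate, vol, expiry, num_intervals, num_paths, seed=0):
    """Simulate GBM paths under the risk-neutral measure.

    Uses the exact discretisation
    S_{t+dt} = S_t * exp((r - 0.5 * vol**2) * dt + vol * sqrt(dt) * Z).
    """
    rng = np.random.default_rng(seed)
    dt = expiry / num_intervals
    increments = ((rate - 0.5 * vol**2) * dt
                  + vol * np.sqrt(dt) * rng.standard_normal((num_paths, num_intervals)))
    log_paths = np.cumsum(increments, axis=1)
    return spot * np.exp(np.hstack([np.zeros((num_paths, 1)), log_paths]))

example_paths = simulate_gbm_paths(SPOT_PRICE, INTEREST_RATE, VOLATILITY,
                                   EXPIRY_TIME, NUM_INTERVALS, num_paths=10)
print(example_paths.shape)  # (10, NUM_INTERVALS + 1)
```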
## Instantiate the RLPolicy model with the parameters defined above
```python
rl_policy = RLPolicy(
num_laguerre=NUM_LAGUERRE,
strike_price=STRIKE_PRICE,
expiry=EXPIRY_TIME,
interest_rate=INTEREST_RATE,
num_steps=NUM_INTERVALS,
training_iters=TRAINING_ITERS,
epsilon=EPSILON
)
```
## Train the RL Model and Get the Weights of the Optimal Policy
```python
weights = rl_policy.get_weights(training_data=training_data)
```
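The returned weights parameterise an approximation of the continuation value as a linear combination of Laguerre-polynomial features of the normalised stock price, in the spirit of Longstaff-Schwartz. Below is a rough sketch of such a feature map (requires SciPy), assuming weighted Laguerre features of the moneyness x = S / K; the exact feature construction inside `RLPolicy` may differ and can include additional time-dependent features:

```python
import numpy as np
from scipy.special import eval_laguerre

def laguerre_price_features(stock_price, strike, num_laguerre):
    """Weighted Laguerre features exp(-x/2) * L_n(x), n = 0..num_laguerre-1, with x = S / K."""
    x = stock_price / strike
    return np.array([np.exp(-x / 2.0) * eval_laguerre(n, x) for n in range(num_laguerre)])

features = laguerre_price_features(SPOT_PRICE, STRIKE_PRICE, NUM_LAGUERRE)
# The continuation value at a state is then estimated as a dot product of the
# learned weights with a feature vector built from terms like these; exercising
# is optimal when the intrinsic payoff exceeds that estimate.
```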
## Generate test data (GBM paths) for option price scoring
```python
paths_test = scoring_sim_data(
expiry_time=EXPIRY_TIME,
num_intervals=NUM_INTERVALS,
num_simulations_test=NUM_SIMULATIONS_TEST,
spot_price=SPOT_PRICE,
interest_rate=INTEREST_RATE,
volatility=VOLATILITY
)
```
## Option price
```python
option_price = rl_policy.calculate_option_price(stock_paths=paths_test)
```
## Print the calculated option price
```python
print("Option Price using RL Method:", option_price)
```
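As a sanity check, the RL price can be compared against a standard binomial-tree benchmark for the same contract. The sketch below assumes an American put (consistent with the parameter set above) and is an independent reference implementation, not part of `ameriopt`:

```python
import numpy as np

def binomial_american_put(spot, strike, rate, vol, expiry, steps=1000):
    """Cox-Ross-Rubinstein binomial price of an American put."""
    dt = expiry / steps
    u = np.exp(vol * np.sqrt(dt))
    d = 1.0 / u
    disc = np.exp(-rate * dt)
    p = (np.exp(rate * dt) - d) / (u - d)  # risk-neutral up-move probability
    # Option values at maturity
    prices = spot * u ** np.arange(steps, -1, -1) * d ** np.arange(0, steps + 1)
    values = np.maximum(strike - prices, 0.0)
    # Backward induction with an early-exercise check at every node
    for step in range(steps - 1, -1, -1):
        prices = spot * u ** np.arange(step, -1, -1) * d ** np.arange(0, step + 1)
        values = np.maximum(strike - prices,
                            disc * (p * values[:-1] + (1.0 - p) * values[1:]))
    return values[0]

print("Binomial reference price:",
      binomial_american_put(SPOT_PRICE, STRIKE_PRICE, INTEREST_RATE, VOLATILITY, EXPIRY_TIME))
```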