| Field | Value |
| --- | --- |
| Name | RLInventoryOpt |
| Version | 0.1.1 |
| home_page | https://github.com/sebassaras02/rf_inventory_optimization |
| Summary | Launching the first version of the RLInventoryOpt library! This version includes a fully functional Q-Learning-based inventory optimization model. |
| upload_time | 2024-09-20 19:51:07 |
| maintainer | None |
| docs_url | None |
| author | Sebastian Sarasti |
| requires_python | >=3.6 |
| license | MIT License Copyright (c) 2024 Sebastian Sarasti Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. |
| keywords | reinforcement learning, q-learning, inventory optimization |
| requirements | numpy, pandas, plotly |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# 📊 Reinforcement Learning in Inventory Optimization using Q-Learning
This repository contains a Python implementation of an inventory optimization model using Q-Learning, a Reinforcement Learning (RL) algorithm. The model is designed to help manage inventory levels by making optimal decisions on order quantities based on forecasted demand, initial stock levels, and inventory capacity.
## 🚀 Introduction
Efficient inventory management is crucial for reducing costs and avoiding stockouts or overstocking. This project implements a Q-Learning-based optimizer that learns to make optimal inventory decisions over time. It considers factors such as forecasted demand, security stock, and inventory capacity to minimize costs and maintain optimal stock levels.
## ✨ Features
- 🧠 **Q-Learning Algorithm**: Implements Q-Learning for decision-making based on temporal difference learning.
- 🔄 **Dynamic Inventory Management**: Dynamically adjusts inventory levels based on forecasted consumption, without traditional reordering rules.
- 🛠️ **Customizable Parameters**: Adjustable learning rate, discount factor, and exploration rate.
- 📈 **Visualizations**: Plots inventory levels, forecast, and order amounts to provide insights into the optimization process.
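The temporal-difference update behind Q-Learning can be sketched as follows. This is illustrative only; the library's internal implementation may differ, and the state/action sizes here are arbitrary examples:

```python
import numpy as np

# Generic Q-Learning temporal-difference update:
#   Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
def td_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.6):
    """Apply one temporal-difference update to the Q-table in place."""
    best_next = np.max(Q[next_state])        # value of the greedy next action
    td_target = reward + gamma * best_next   # bootstrapped target
    Q[state, action] += alpha * (td_target - Q[state, action])
    return Q

Q = np.zeros((5, 7))  # 5 example states, 7 actions (matching the action labels below)
td_update(Q, state=0, action=2, reward=-10.0, next_state=1)
```

Each update nudges the estimate for the visited state-action pair toward the observed reward plus the discounted value of the best next action.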
## Installation 🚀
You can install the `RLInventoryOpt` library directly from PyPI. Follow the instructions below to get started!
### Install from PyPI 🌟
To install the library, use the following command:
```bash
pip install RLInventoryOpt
```
## 🔧 Usage
### Initialize the Model
Create an instance of the `QLearningOptimizer` class. To build an inventory optimization model, follow these steps:
1. Create a forecasting model and predict the future consumption.
2. Identify the constraints of your system, such as the security stock, maximum stock level, initial stock, lead time, and minimum order quantity.
3. Define the action space; orders are limited to between 1 and 6 times the minimum order quantity (or no order at all).
```python
from RLInventoryOpt.qlearning import QLearningOptimizer
import numpy as np

# Example forecasted demand for 6 months
forecast = np.array([400, 325, 356, 210, 150, 400])

# Initial conditions of the system
initial_state = {
    "stock": 800,
    "leadTime": 2,
    "minimumOrder": 100,
    "securityStock": 200,
    "maximalCapacity": 1000
}

# Define the actions
actions = ["no", "m", "2m", "3m", "4m", "5m", "6m"]

# Initialize the optimizer
model = QLearningOptimizer(
    forecast=forecast,
    initial_stock=initial_state["stock"],
    security_stock=initial_state["securityStock"],
    capacity=initial_state["maximalCapacity"],
    n_actions=actions,
    min_order=initial_state["minimumOrder"],
    lead_time=initial_state["leadTime"],
    alpha=0.1,
    gamma=0.6,
    epsilon=0.1
)

# Train the model
model.fit(epochs=1000)

# Predict inventory levels and actions
predictions = model.predict()

# Plot the results
model.plot("bar")
```
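The action labels above ("no", "m", "2m", …) encode order quantities as multiples of the minimum order. A hypothetical helper (not part of the library's public API) makes the encoding explicit:

```python
# Hypothetical helper: translate an action label into an order quantity.
# "no" means no order; "m" means one minimum order; "2m".."6m" are multiples.
def action_to_quantity(action, min_order):
    if action == "no":
        return 0
    multiplier = action.rstrip("m")          # "2m" -> "2", "m" -> ""
    return int(multiplier or 1) * min_order  # empty string means 1x
```

So with `min_order=100`, the action `"3m"` corresponds to ordering 300 units.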
A detailed explanation can be found in this article on Medium: https://lnkd.in/gDEvav59
## ⚙️ Customization
You can customize the agent's behavior by modifying the training parameters:
- `alpha` (float): Learning rate (default: 0.1)
- `gamma` (float): Discount factor (default: 0.6)
- `epsilon` (float): Exploration rate (default: 0.1)
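How `epsilon` trades exploration against exploitation can be sketched with a standard epsilon-greedy selection rule (illustrative only; the library's internal selection logic may differ):

```python
import numpy as np

# Epsilon-greedy selection: with probability epsilon pick a random action
# (explore); otherwise pick the best-known action (exploit).
def choose_action(Q_row, epsilon, rng):
    if rng.random() < epsilon:           # explore
        return int(rng.integers(len(Q_row)))
    return int(np.argmax(Q_row))         # exploit the best-known action

rng = np.random.default_rng(0)
q_values = np.array([0.0, 2.5, 1.0])
# With epsilon = 0, selection is purely greedy (index 1 here):
greedy = choose_action(q_values, epsilon=0.0, rng=rng)
```

Raising `epsilon` makes the agent sample more random actions early on, which can help it escape poor initial policies at the cost of slower convergence.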
## ☕ Support the Project
If you find this inventory optimization tool helpful and would like to support its continued development, consider buying me a coffee. Your support helps maintain and improve this project!
[![Buy Me A Coffee](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://www.paypal.com/paypalme/sebassarasti)
### Other Ways to Support
- ⭐ Star this repository
- 🍴 Fork it and contribute
- 📢 Share it with others who might find it useful
- 🐛 Report issues or suggest new features
Your support, in any form, is greatly appreciated! 🙏
Raw data
{
"_id": null,
"home_page": "https://github.com/sebassaras02/rf_inventory_optimization",
"name": "RLInventoryOpt",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.6",
"maintainer_email": null,
"keywords": "Reinforcement Learning, Q-Learning, Inventory Optimization",
"author": "Sebastian Sarasti",
"author_email": "Sebastian Sarasti <sebitas.alejo@hotmail.com>",
"download_url": "https://files.pythonhosted.org/packages/fa/19/15cc0af74f0579700834eb6d8af4811112cd9c1d7032543425257e99bf5c/rlinventoryopt-0.1.1.tar.gz",
"platform": null,
"bugtrack_url": null,
"license": "MIT License Copyright (c) 2024 Sebastian Sarasti Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ",
"summary": "Launching the first version of the RLInventoryOpt library! This version includes a fully functional Q-Learning-based inventory optimization model.",
"version": "0.1.1",
"project_urls": {
"Explanation": "https://medium.com/@sebitas.alejo/reinforcement-learning-for-inventory-optimization-f63c26a59c19",
"Homepage": "https://github.com/sebassaras02/rf_inventory_optimization"
},
"split_keywords": [
"reinforcement learning",
" q-learning",
" inventory optimization"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "df626942a74eac215b228c6e5be2ac4d7b2047ef36f2cdbc4ed05a23c1a5f8a0",
"md5": "41ed10bf343ec79b9072fb001397251f",
"sha256": "bd787a8a11dd2a6a482687c204b2bc8fcb26119aa96e6d5ef5d1e7e21b54bddc"
},
"downloads": -1,
"filename": "RLInventoryOpt-0.1.1-py3-none-any.whl",
"has_sig": false,
"md5_digest": "41ed10bf343ec79b9072fb001397251f",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.6",
"size": 9476,
"upload_time": "2024-09-20T19:51:06",
"upload_time_iso_8601": "2024-09-20T19:51:06.284489Z",
"url": "https://files.pythonhosted.org/packages/df/62/6942a74eac215b228c6e5be2ac4d7b2047ef36f2cdbc4ed05a23c1a5f8a0/RLInventoryOpt-0.1.1-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "fa1915cc0af74f0579700834eb6d8af4811112cd9c1d7032543425257e99bf5c",
"md5": "42bbfc59164fa193e202ea5761351881",
"sha256": "e5fc0f5806508b75080b9da21b392144d804247f35a82f2c26c43d880866efd1"
},
"downloads": -1,
"filename": "rlinventoryopt-0.1.1.tar.gz",
"has_sig": false,
"md5_digest": "42bbfc59164fa193e202ea5761351881",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.6",
"size": 10638,
"upload_time": "2024-09-20T19:51:07",
"upload_time_iso_8601": "2024-09-20T19:51:07.640778Z",
"url": "https://files.pythonhosted.org/packages/fa/19/15cc0af74f0579700834eb6d8af4811112cd9c1d7032543425257e99bf5c/rlinventoryopt-0.1.1.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-09-20 19:51:07",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "sebassaras02",
"github_project": "rf_inventory_optimization",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"requirements": [
{
"name": "numpy",
"specs": [
[
"==",
"1.26.4"
]
]
},
{
"name": "pandas",
"specs": [
[
"==",
"2.2.2"
]
]
},
{
"name": "plotly",
"specs": [
[
"==",
"5.22.0"
]
]
}
],
"lcname": "rlinventoryopt"
}