| Name | pymlup |
| Version | 0.2.2 |
| Summary | MLup framework, fast ml to production, easy to learn, easy to use. |
| upload_time | 2023-10-04 07:11:30 |
| author_email | Deys Timofey <nxexox@gmail.com> |
| requires_python | >=3.7 |
# PyMLup
[](https://github.com/nxexox/pymlup/actions/workflows/python-package.yml)
[](https://badge.fury.io/py/pymlup)
## Introduction
PyMLup is a library for getting ML models into production quickly and easily.
All you need is to deliver the model file and config to the server (in fact, the config is optional) 🙃
PyMLup is a modern way to run machine learning models in production, cutting time to market to a minimum. It eliminates the need to write your own web application around a machine learning model and to copy application code between projects. A machine learning model is all it takes to launch a web application with a single command.
* The library requires only plain Python knowledge;
* Uses FastAPI as the web application backend;
Tested with these machine learning model frameworks (links to tests):
* [scikit-learn>=1.2.0,<1.3.0](https://github.com/nxexox/pymlup/tree/main/tests/integration_tests/frameworks/test_scikit_learn_model.py)
* [tensorflow>=2.0.0,<3.0.0](https://github.com/nxexox/pymlup/tree/main/tests/integration_tests/frameworks/test_tensorflow_model.py)
* [lightgbm>=4.0.0,<5.0.0](https://github.com/nxexox/pymlup/tree/main/tests/integration_tests/frameworks/test_lightgbm_model.py)
* [torch>=2.0.0,<3.0.0](https://github.com/nxexox/pymlup/tree/main/tests/integration_tests/frameworks/test_pytorch_model.py)
* [onnx>=1.0.0,<2.0.0](https://github.com/nxexox/pymlup/tree/main/tests/unit_tests/ml/test_binarization.py)
* [onnxruntime>=1.0.0,<2.0.0](https://github.com/nxexox/pymlup/tree/main/tests/unit_tests/ml/test_binarization.py)
Supported and tested machine learning libraries:
* [numpy>=1.0.0,<2.0.0](https://github.com/nxexox/pymlup/tree/main/tests/unit_tests/ml/test_data_transformers.py)
* [pandas>=2.0.0,<3.0.0](https://github.com/nxexox/pymlup/tree/main/tests/unit_tests/ml/test_data_transformers.py)
* [joblib>=1.2.0,<1.3.0](https://github.com/nxexox/pymlup/tree/main/tests/unit_tests/ml/test_binarization.py)
* [tf2onnx>=1.0.0,<2.0.0](https://github.com/nxexox/pymlup/tree/main/tests/unit_tests/ml/test_binarization.py)
* [skl2onnx>=1.0.0,<2.0.0](https://github.com/nxexox/pymlup/tree/main/tests/unit_tests/ml/test_binarization.py)
* [jupyter==1.0.0](https://github.com/nxexox/pymlup/tree/main/tests/integration_tests/test_jupyter_notebook.py)
**The easiest way to try:**
```bash
pip install pymlup
mlup run -m /path/to/my/model.onnx
```
## Useful links
* [Docs](https://github.com/nxexox/pymlup/tree/main/docs);
* [Examples](https://github.com/nxexox/pymlup/tree/main/examples);
* [Tests models](https://github.com/nxexox/pymlup/tree/main/mldata);
## How it works
1. You build your machine learning model. Optionally, you create an mlup config for it.
2. You deliver the model to the server. Optionally, you deliver the config as well.
3. You install pymlup and the libraries your model needs on the server.
4. You run the web app from your model or your config 🙃
## Requirements
Python 3.7+
* PyMLup stands on the shoulders of giants: FastAPI for the web parts.
* Additionally, you need to install the libraries that your model uses.
## Installation
```bash
pip install pymlup
```
You can also install it together with an ML backend library:
```bash
pip install "pymlup[scikit-learn]" # For scikit-learn
pip install "pymlup[lightgbm]" # For microsoft lightgbm
pip install "pymlup[tensorflow]" # For tensorflow
pip install "pymlup[torch]" # For torch
pip install "pymlup[onnx]" # For onnx models: torch, tensorflow, sklearn, etc...
```
## Examples
### Code examples
```python
import mlup

class MyAnyModelForExample:
    def predict(self, X):
        return X

ml_model = MyAnyModelForExample()

up = mlup.UP(ml_model=ml_model)
# You need to call up.ml.load() so mlup can analyze your model
up.ml.load()
# To test your web app, you can run it in daemon mode
# and open http://localhost:8009/docs in a browser
up.run_web_app(daemon=True)

import requests
response = requests.post('http://0.0.0.0:8009/predict', json={'X': [[1, 2, 3], [4, 5, 6]]})
print(response.json())

up.stop_web_app()
```
You can check how the model works with your config, without a web application:
* `predict` - calls the model's predict with the same arguments as the web app.
* `predict_from` - same as `predict`, but skips the data transformer before calling the model's predict.
* `async_predict` - asynchronous version of the `predict` method.
```python
import mlup
import numpy

class MyAnyModelForExample:
    def predict(self, X):
        return X

ml_model = MyAnyModelForExample()
up = mlup.UP(ml_model=ml_model)
up.ml.load()

up.predict(X=[[1, 2, 3], [4, 5, 6]])
up.predict_from(X=numpy.array([[1, 2, 3], [4, 5, 6]]))
await up.async_predict(X=[[1, 2, 3], [4, 5, 6]])
```
#### Save ready application to disk
##### Make default config
If the path ends with `.json`, a JSON config is generated; otherwise a YAML config.
```python
import mlup
mlup.generate_default_config('path_to_yaml_config.yaml')
```
##### From config
You can save a ready config to disk, but you need to set a local storage type and the path to the model file on the server.
The folder may contain many files; the mask is needed to filter out exactly our model file.
```python
import mlup
from mlup.ml.empty import EmptyModel  # A stub model class
from mlup import constants

up = mlup.UP(ml_model=EmptyModel())
up.conf.storage_type = constants.StorageType.disk
up.conf.storage_kwargs = {
    'path_to_files': 'path/to/model/file/in/model_name.modelextension',
    'file_mask': 'model_name.modelextension',
}
up.to_yaml("path_to_yaml_config.yaml")
up.to_json("path_to_json_config.json")

# Later, on the server
up = mlup.UP.load_from_yaml("path_to_yaml_config.yaml", load_model=True)
up.run_web_app()
```
##### From pickle
If you pickle/joblib your mlup object together with the model, you don't need to change the storage type, because the model is already inside the pickle/joblib file.
```python
import pickle
import mlup
from mlup.ml.empty import EmptyModel  # A stub model class

up = mlup.UP(ml_model=EmptyModel())

# Create the pickle file
with open('path_to_pickle_file.pckl', 'wb') as f:
    pickle.dump(up, f)

# Later, on the server
with open('path_to_pickle_file.pckl', 'rb') as f:
    up = pickle.load(f)
up.ml.load()
up.run_web_app()
```
#### Change config
If you change model settings (see [Description of the application life cycle](https://github.com/nxexox/pymlup/blob/main/docs/life_cycle.md#upmlload_model_settings)), you need to call `up.ml.load_model_settings()`.
```python
import mlup

class MyAnyModelForExample:
    def predict(self, X):
        return X

ml_model = MyAnyModelForExample()
up = mlup.UP(
    ml_model=ml_model,
    conf=mlup.Config(port=8011)
)
up.ml.load()
up.conf.auto_detect_predict_params = False
up.ml.load_model_settings()
```
### Server command examples
#### mlup run
You can run a web application from a model, a config, or a pickled up object. The `mlup run` bash command does this.
See `mlup run --help` or [Description of the bash commands](https://github.com/nxexox/pymlup/blob/main/docs/bash_commands.md#mlup-run) for full docs.
##### From model
```bash
mlup run -m /path/to/your/model.extension
```
This runs code roughly equivalent to:
```python
import mlup
from mlup import constants

up = mlup.UP(
    conf=mlup.Config(
        storage_type=constants.StorageType.disk,
        storage_kwargs={
            'path_to_files': '/path/to/your/model.extension',
            'files_mask': r'.+',
        },
    )
)
up.ml.load()
up.run_web_app()
```
You can change config attributes in this mode by adding arguments of the form `--up.<config_attribute_name>=new_value`.
(For more examples see `mlup run --help` or [Description of the bash commands](https://github.com/nxexox/pymlup/blob/main/docs/bash_commands.md#mlup-run)).
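For example, to override the web app port on startup (a sketch; `port` is one of the config attributes, and this assumes the generic `--up.<config_attribute_name>` form applies to it):
```bash
# Run the model's web app on port 8011 instead of the default 8009
mlup run -m /path/to/your/model.extension --up.port=8011
```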
##### From config
```bash
mlup run -c /path/to/your/config.yaml
# or mlup run -ct json -c /path/to/your/config.json
```
This runs code roughly equivalent to:
```python
import mlup
up = mlup.UP.load_from_yaml(conf_path='/path/to/your/config.yaml', load_model=True)
up.run_web_app()
```
##### From mlup.UP pickle/joblib object
```bash
mlup run -b /path/to/your/up_object.pckl
# or mlup run -bt joblib -b /path/to/your/up_object.joblib
```
This runs code roughly equivalent to:
```python
import pickle
with open('/path/to/your/up_object.pckl', 'rb') as f:
    up = pickle.load(f)
up.run_web_app()
```
#### mlup make-app
This command generates a `.py` file containing an mlup web application built from your model, config, pickled up object, or default settings.
See `mlup make-app --help` or [Description of the bash commands](https://github.com/nxexox/pymlup/blob/main/docs/bash_commands.md#mlup-make-app) for full docs.
##### With default settings
```bash
mlup make-app example_without_data_app.py
```
This command generates something like:
```python
# example_without_data_app.py
import mlup

# You can load the model yourself and pass it to the "ml_model" argument.
# up = mlup.UP(ml_model=my_model, conf=mlup.Config())
up = mlup.UP(
    conf=mlup.Config(
        # Set your config here so mlup can fetch and work with the model.
        # You can use storage_type and storage_kwargs to auto-load the model from storage.
    )
)
up.ml.load()
up.web.load()

# If you want to run the application yourself, or add something else to it, use this variable.
# Example with uvicorn: uvicorn example_app:app --host 0.0.0.0 --port 80
app = up.web.app

if __name__ == '__main__':
    up.run_web_app()
```
Then fill in your settings and run the web application:
```bash
python3 example_without_data_app.py
```
##### With only model
```bash
mlup make-app -ms /path/to/my/model.onnx example_without_data_app.py
```
This command generates something like:
```python
# example_without_data_app.py
import mlup
from mlup import constants

up = mlup.UP(
    conf=mlup.Config(
        # Set your config here so mlup can fetch and work with the model.
        storage_type=constants.StorageType.disk,
        storage_kwargs={
            'path_to_files': '/path/to/my/model.onnx',
            'files_mask': 'model.onnx',
        },
    )
)
up.ml.load()
up.web.load()

# If you want to run the application yourself, or add something else to it, use this variable.
# Example with uvicorn: uvicorn example_app:app --host 0.0.0.0 --port 80
app = up.web.app

if __name__ == '__main__':
    up.run_web_app()
```
Then run the web application:
```bash
python3 example_without_data_app.py
```
##### With only config
```bash
mlup make-app -cs /path/to/my/config.yaml example_without_data_app.py
```
This command generates something like:
```python
# example_without_data_app.py
import mlup

up = mlup.UP.load_from_yaml('/path/to/my/config.yaml', load_model=False)
up.ml.load()
up.web.load()

# If you want to run the application yourself, or add something else to it, use this variable.
# Example with uvicorn: uvicorn example_app:app --host 0.0.0.0 --port 80
app = up.web.app

if __name__ == '__main__':
    up.run_web_app()
```
Then run the web application:
```bash
python3 example_without_data_app.py
```
##### With only binary UP object
```bash
mlup make-app -bs /path/to/my/up.pickle example_without_data_app.py
```
This command generates something like:
```python
# example_without_data_app.py
import pickle

with open('/path/to/my/up.pickle', 'rb') as f:
    up = pickle.load(f)

if not up.ml.loaded:
    up.ml.load()
up.web.load()

# If you want to run the application yourself, or add something else to it, use this variable.
# Example with uvicorn: uvicorn example_app:app --host 0.0.0.0 --port 80
app = up.web.app

if __name__ == '__main__':
    up.run_web_app()
```
Then run the web application:
```bash
python3 example_without_data_app.py
```
#### mlup validate-config
This command validates your config. It is in an alpha state and still needs to be finalized.
See `mlup validate-config --help` or [Description of the bash commands](https://github.com/nxexox/pymlup/blob/main/docs/bash_commands.md#mlup-validate-config) for full docs.
```bash
mlup validate-config /path/to/my/conf.yaml
```
## Web application interface
By default, the web application starts on http://localhost:8009 and serves API docs.
See [Web app API](https://github.com/nxexox/pymlup/tree/main/docs/web_app_api.md) for more details.
### Interactive API docs
Now go to http://localhost:8009/docs.
You will see the automatic interactive API documentation (provided by [Swagger UI](https://github.com/swagger-api/swagger-ui)):
### API endpoints
#### /health
Used to check the health of the web application.
HTTP methods: HEAD, OPTIONS, GET
<details>
##### Return JSON
Returns `{"status": 200}` with HTTP status code 200.
</details>
#### /info
Used to get model and application information. If `debug=True` is set in the config, the full config is returned.
HTTP methods: GET
<details>
##### Return JSON:
```json
{
"model_info": {
"name": "MyFirstMLupModel",
"version": "1.0.0.0",
"type": "sklearn",
"columns": null
},
"web_app_info": {
"version": "1.0.0.0"
}
}
```
If `debug=True` is set in the config, a different JSON is returned: an almost complete config, with sensitive data excluded.
```json
{
"web_app_config": {
"host": "localhost",
"port": 8009,
"web_app_version": "1.0.0.0",
"column_validation": false,
"custom_column_pydantic_model": null,
"mode": "mlup.web.architecture.directly_to_predict.DirectlyToPredictArchitecture",
"max_queue_size": 100,
"ttl_predicted_data": 60,
"ttl_client_wait": 30.0,
"min_batch_len": 10,
"batch_worker_timeout": 1.0,
"is_long_predict": false,
"show_docs": true,
"debug": true,
"throttling_max_requests": null,
"throttling_max_request_len": null,
"timeout_for_shutdown_daemon": 3.0,
"item_id_col_name": "mlup_item_id"
},
"model_config": {
"name": "MyFirstMLupModel",
"version": "1.0.0.0",
"type": "sklearn",
"columns": null,
"predict_method_name": "predict",
"auto_detect_predict_params": true,
"storage_type": "mlup.ml.storage.memory.MemoryStorage",
"binarization_type": "auto",
"use_thread_loop": true,
"max_thread_loop_workers": true,
"data_transformer_for_predict": "mlup.ml.data_transformers.numpy_data_transformer.NumpyDataTransformer",
"data_transformer_for_predicted": "mlup.ml.data_transformers.numpy_data_transformer.NumpyDataTransformer",
"dtype_for_predict": null
}
}
```
</details>
#### /predict
Used to call the model's predict method.
HTTP methods: POST
<details>
##### Request body data:
```json
{
"data_for_predict": [
"input_data_for_obj_1",
"input_data_for_obj_2",
"input_data_for_obj_3"
]
}
```
The key `data_for_predict` is the default key for input data. The config param `auto_detect_predict_params` is set to True by default.
It makes mlup analyze the model's predict method, extract its arguments, and generate the API from them.
If `auto_detect_predict_params` finds params, it replaces `data_for_predict` with the discovered keys and updates the API docs.
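Conceptually, this auto-detection works like Python's own signature inspection (a simplified sketch of the idea, not mlup's actual implementation):
```python
import inspect

class MyModel:
    def predict(self, X):
        return X

# Extract the predict method's argument names, skipping `self`;
# these become the JSON keys the generated API expects.
params = [
    name for name in inspect.signature(MyModel.predict).parameters
    if name != 'self'
]
print(params)  # ['X']
```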
Example for `scikit-learn` models:
```json
{
"X": [
"input_data_for_obj_1",
"input_data_for_obj_2",
"input_data_for_obj_3"
]
}
```
`input_data_for_obj_1` may be any valid JSON data. The data is run through the data transformer set by the config param `data_transformer_for_predict`.
By default, this param is `mlup.ml.data_transformers.numpy_data_transformer.NumpyDataTransformer`.
##### Return JSON:
```json
{
"predict_result": [
"predict_result_for_obj_1",
"predict_result_for_obj_2",
"predict_result_for_obj_3"
]
}
```
`predict_result_for_obj_1` will be valid JSON data. After the model's prediction, the data is run through the data transformer set by the config param `data_transformer_for_predicted`.
By default, this param is `mlup.ml.data_transformers.numpy_data_transformer.NumpyDataTransformer`.
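Conceptually, the default numpy transformers convert the JSON lists to a `numpy` array before predict, and convert the model's output back to JSON-serializable lists afterwards (an illustrative sketch, not mlup's code):
```python
import numpy as np

request_data = [[1, 2, 3], [4, 5, 6]]  # parsed from the request JSON
X = np.array(request_data)             # data_transformer_for_predict step
result = X * 2                         # stand-in for the model's predict
predict_result = result.tolist()       # data_transformer_for_predicted step
print(predict_result)  # [[2, 4, 6], [8, 10, 12]]
```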
</details>
##### Validation
This endpoint validates incoming request data. Validation is built from the config `columns` and the `column_validation` flag.
## Web application modes
See [Web app architectures](https://github.com/nxexox/pymlup/tree/main/docs/web_app_architectures.md) for more details.
The web application has three work modes:
* `directly_to_predict` - the default. User requests are sent directly to the model.
* `worker_and_queue` - the ML model runs in a worker thread and takes prediction data from a queue.
  The web application sends new user requests to the queue and waits for results from a results queue.
* `batching` - the ML model runs in a worker thread and takes prediction data from a queue,
  but instead of one request at a time, it combines data from several requests and sends it to the model as one large array.
  The web application sends new user requests to the queue and waits for results from a results queue.
This param is named `mode`.
```python
import mlup
from mlup.ml.empty import EmptyModel
from mlup import constants

up = mlup.UP(
    ml_model=EmptyModel(),
    conf=mlup.Config(
        mode=constants.WebAppArchitecture.worker_and_queue,
    )
)
```
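Conceptually, the `batching` mode drains the request queue into one batch before a single model call; a minimal stdlib sketch of the idea (not mlup's implementation, which also honors `batch_worker_timeout` and `min_batch_len` from the config):
```python
import queue

# Hypothetical illustration: collect up to min_batch_len items from the
# request queue, after which the model would be called once on the batch.
def collect_batch(q, min_batch_len=3):
    batch = []
    while len(batch) < min_batch_len:
        try:
            batch.append(q.get_nowait())
        except queue.Empty:
            break  # a real worker would also wait up to batch_worker_timeout
    return batch

q = queue.Queue()
for item in ([1], [2], [3], [4]):
    q.put(item)

print(collect_batch(q))  # [[1], [2], [3]]
```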
If your model is light, or you have plenty of CPU/GPU/RAM, you can run many processes:
```python
import mlup
from mlup.ml.empty import EmptyModel
from mlup import constants

up = mlup.UP(
    ml_model=EmptyModel(),
    conf=mlup.Config(
        mode=constants.WebAppArchitecture.worker_and_queue,
        uvicorn_kwargs={'workers': 4},
    )
)
```
## Metrics
MLup PyPi download statistics: https://pepy.tech/project/pymlup
[](https://pepy.tech/project/pymlup)
[](https://pepy.tech/project/pymlup)
[](https://pepy.tech/project/pymlup)
Raw data
{
"_id": null,
"home_page": "",
"name": "pymlup",
"maintainer": "",
"docs_url": null,
"requires_python": ">=3.7",
"maintainer_email": "",
"keywords": "",
"author": "",
"author_email": "Deys Timofey <nxexox@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/0d/fc/c65980055f0e0090bca3793f86496879e0460d28dbe3ba0ef95f4a155a2a/pymlup-0.2.2.tar.gz",
"platform": null,
"description": "# PyMLup\n\n[](https://github.com/nxexox/pymlup/actions/workflows/python-package.yml)\n[](https://badge.fury.io/py/pymlup)\n\n## Introduction\n\nIt's library for easy and fast run ML in production. \n\nAll you need is to deliver the model file and config to the server (in fact, the config is not necessary) \ud83d\ude43\n\nPyMLup is a modern way to run machine learning models in production. The market time has been reduced to a minimum. This library eliminates the need to write your own web applications with machine learning models and copy application code. It is enough to have a machine learning model to launch a web application with one command.\n\n* It's library learning only clean python;\n* Use FastApi in web app backend;\n\nWork tested with machine learning model frameworks (links to tests):\n* [scikit-learn>=1.2.0,<1.3.0](https://github.com/nxexox/pymlup/tree/main/tests/integration_tests/frameworks/test_scikit_learn_model.py)\n* [tensorflow>=2.0.0,<3.0.0](https://github.com/nxexox/pymlup/tree/main/tests/integration_tests/frameworks/test_tensorflow_model.py)\n* [lightgbm>=4.0.0,<5.0.0](https://github.com/nxexox/pymlup/tree/main/tests/integration_tests/frameworks/test_lightgbm_model.py)\n* [torch>=2.0.0,<3.0.0](https://github.com/nxexox/pymlup/tree/main/tests/integration_tests/frameworks/test_pytorch_model.py)\n* [onnx>=1.0.0,<2.0.0](https://github.com/nxexox/pymlup/tree/main/tests/unit_tests/ml/test_binarization.py)\n* [onnxruntime>=1.0.0,<2.0.0](https://github.com/nxexox/pymlup/tree/main/tests/unit_tests/ml/test_binarization.py)\n\nSupport and tested with machine learning libraries:\n* [numpy>=1.0.0,<2.0.0](https://github.com/nxexox/pymlup/tree/main/tests/unit_tests/ml/test_data_transformers.py)\n* [pandas>=2.0.0,<3.0.0](https://github.com/nxexox/pymlup/tree/main/tests/unit_tests/ml/test_data_transformers.py)\n* [joblib>=1.2.0,<1.3.0](https://github.com/nxexox/pymlup/tree/main/tests/unit_tests/ml/test_binarization.py)\n* 
[tf2onnx>=1.0.0,<2.0.0](https://github.com/nxexox/pymlup/tree/main/tests/unit_tests/ml/test_binarization.py)\n* [skl2onnx>=1.0.0,<2.0.0](https://github.com/nxexox/pymlup/tree/main/tests/unit_tests/ml/test_binarization.py)\n* [jupyter==1.0.0](https://github.com/nxexox/pymlup/tree/main/tests/integration_tests/test_jupyter_notebook.py)\n\n**The easiest way to try:**\n```bash\npip install pymlup\nmlup run -m /path/to/my/model.onnx\n```\n\n## Useful links\n* [Docs](https://github.com/nxexox/pymlup/tree/main/docs);\n* [Examples](https://github.com/nxexox/pymlup/tree/main/examples);\n* [Tests models](https://github.com/nxexox/pymlup/tree/main/mldata);\n\n## How it's work\n\n1. You are making your machine learning model. Optional: you are making mlup config for your model.\n2. You deliver your model to server. Optional: you deliver your config to server.\n3. Installing pymlup to your server and libraries for model.\n4. Run web app from your model or your config \ud83d\ude43\n\n## Requirements\n\nPython 3.7+\n\n* PyMLup stands on the shoulders of giants FastAPI for the web parts. 
\n* Additionally, you need to install the libraries that your model uses.\n\n## Installation\n\n```bash\npip install pymlup\n```\n\nYou will also can install with ml backend library:\n```bash\npip install \"pymlup[scikit-learn]\" # For scikit-learn\npip install \"pymlup[lightgbm]\" # For microsoft lightgbm\npip install \"pymlup[tensorflow]\" # For tensorflow\npip install \"pymlup[torch]\" # For torch\npip install \"pymlup[onnx]\" # For onnx models: torch, tensorflow, sklearn, etc...\n```\n\n## Examples\n\n### Examples code\n\n```python\nimport mlup\n\nclass MyAnyModelForExample:\n def predict(self, X):\n return X\n\nml_model = MyAnyModelForExample()\n\n\nup = mlup.UP(ml_model=ml_model)\n# Need call up.ml.load(), for analyze your model\nup.ml.load()\n# If you want testing your web app, you can run in daemon mode\n# You can open browser http://localhost:8009/docs\nup.run_web_app(daemon=True)\n\nimport requests\nresponse = requests.post('http://0.0.0.0:8009/predict', json={'X': [[1, 2, 3], [4, 5, 6]]})\nprint(response.json())\n\nup.stop_web_app()\n```\n\nYou can check work model by config, without web application.\n* `predict` - Get model predict as inner arguments as in web app.\n* `predict_from` - As `predict` method, but not use data transformer before call model predict.\n* `async_predict` - Asynchronous version of the `predict` method.\n```python\nimport mlup\nimport numpy\n\nclass MyAnyModelForExample:\n def predict(self, X):\n return X\n\nml_model = MyAnyModelForExample()\nup = mlup.UP(ml_model=ml_model)\nup.ml.load()\n\nup.predict(X=[[1, 2, 3], [4, 5, 6]])\nup.predict_from(X=numpy.array([[1, 2, 3], [4, 5, 6]]))\nawait up.async_predict(X=[[1, 2, 3], [4, 5, 6]])\n```\n\n#### Save ready application to disk\n\n##### Make default config\n\nIf path endswith to json, make json config, else yaml config.\n\n```python\nimport mlup\nmlup.generate_default_config('path_to_yaml_config.yaml')\n```\n\n##### From config\n\nYou can save ready config to disk, but you need set 
local storage and path to model file in server.\nIn folder can there are many files, mask need for filter exactly our model file\n\n```python\nimport mlup\nfrom mlup.ml.empty import EmptyModel # This stub class\nfrom mlup import constants\n\nup = mlup.UP(ml_model=EmptyModel())\nup.conf.storage_type = constants.StorageType.disk\nup.conf.storage_kwargs = {\n 'path_to_files': 'path/to/model/file/in/model_name.modelextension',\n 'file_mask': 'model_name.modelextension',\n}\nup.to_yaml(\"path_to_yaml_config.yaml\")\nup.to_json(\"path_to_json_config.json\")\n\n# After in server\nup = mlup.UP.load_from_yaml(\"path_to_yaml_config.yaml\", load_model=True)\nup.run_web_app()\n```\n\n##### From pickle\n\nIf you make pickle/joblib file your mlup with model, don't need to change storage type, because your model there is in your pickle/joblib file.\n\n```python\nimport pickle\nimport mlup\nfrom mlup.ml.empty import EmptyModel # This stub class\n\nup = mlup.UP(ml_model=EmptyModel())\n\n# You can create pickle file\nwith open('path_to_pickle_file.pckl', 'wb') as f:\n pickle.dump(up, f)\n\n# After in server\nwith open('path_to_pickle_file.pckl', 'rb') as f:\n up = pickle.load(f)\nup.ml.load()\nup.run_web_app()\n```\n\n#### Change config\n\nIf you can change model settings (See [Description of the application life cycle](https://github.com/nxexox/pymlup/blob/main/docs/life_cycle.md#upmlload_model_settings)), need call `up.ml.load_model_settings()`.\n\n```python\nimport mlup\n\nclass MyAnyModelForExample:\n def predict(self, X):\n return X\n\nml_model = MyAnyModelForExample()\n\nup = mlup.UP(\n ml_model=ml_model,\n conf=mlup.Config(port=8011)\n)\nup.ml.load()\nup.conf.auto_detect_predict_params = False\nup.ml.load_model_settings()\n```\n\n### Examples server commands\n\n#### mlup run\n\nYou can run web application from model, config, pickle up object. 
Bash command mlup run making this.\n\nSee `mlup run --help` or [Description of the bash commands](https://github.com/nxexox/pymlup/blob/main/docs/bash_commands.md#mlup-run) for full docs.\n\n##### From model\n```bash\nmlup run -m /path/to/your/model.extension\n```\n\nThis will run code something like this: \n\n```python\nimport mlup\nfrom mlup import constants\n\nup = mlup.UP(\n conf=mlup.Config(\n storage_type=constants.StorageType.disk,\n storage_kwargs={\n 'path_to_files': '/path/to/your/model.extension',\n 'files_mask': r'.+',\n },\n )\n)\nup.ml.load()\nup.run_web_app()\n```\n\nYou change config attributes in this mode. For this, you can add arguments `--up.<config_attribute_name>=new_value`. \n(For more examples see `mlup run --help` or [Description of the bash commands](https://github.com/nxexox/pymlup/blob/main/docs/bash_commands.md#mlup-run)).\n\n##### From config\n```bash\nmlup run -c /path/to/your/config.yaml\n# or mlup run -ct json -c /path/to/your/config.json\n```\n\nThis will run code something like this:\n\n```python\nimport mlup\n\nup = mlup.UP.load_from_yaml(conf_path='/path/to/your/config.yaml', load_model=True)\nup.run_web_app()\n```\n\n##### From mlup.UP pickle/joblib object\n```bash\nmlup run -b /path/to/your/up_object.pckl\n# or mlup run -bt joblib -b /path/to/your/up_object.joblib\n```\n\nThis will run code something like this:\n\n```python\nimport pickle\n\nwith open('/path/to/your/up_object.pckl', 'rb') as f:\n up = pickle.load(f)\nup.run_web_app()\n```\n\n#### mlup make-app\n\nThis command making `.py` file with mlup web application and your model, config, pickle up object or with default settings.\n\nSee `mlup make-app --help` or [Description of the bash commands](https://github.com/nxexox/pymlup/blob/main/docs/bash_commands.md#mlup-make-app) for full docs.\n\n##### With default settings\n```bash\nmlup make-app example_without_data_app.py\n```\n\nThis command is making something like this:\n\n```python\n# 
example_without_data_app.py\nimport mlup\n\n\n# You can load the model yourself and pass it to the \"ml_model\" argument.\n# up = mlup.UP(ml_model=my_model, conf=mlup.Config())\nup = mlup.UP(\n conf=mlup.Config(\n # Set your config, for work model and get model.\n # You can use storage_type and storage_kwargs for auto_load model from storage.\n )\n)\nup.ml.load()\nup.web.load()\n\n# If you want to run the application yourself, or add something else to it, use this variable.\n# Example with uvicorn: uvicorn example_app:app --host 0.0.0.0 --port 80\napp = up.web.app\n\nif __name__ == '__main__':\n up.run_web_app()\n```\n\nAnd you can write your settings and run web application:\n```bash\npython3 example_without_data_app.py\n```\n\n##### With only model\n```bash\nmlup make-app -ms /path/to/my/model.onnx example_without_data_app.py\n```\n\nThis command is making something like this:\n\n```python\n# example_without_data_app.py\nimport mlup\nfrom mlup import constants\n\n\nup = mlup.UP(\n conf=mlup.Config(\n # Set your config, for work model and get model.\n storage_type=constants.StorageType.disk,\n storage_kwargs={\n 'path_to_files': '/path/to/my/model.onnx',\n 'files_mask': 'model.onnx',\n },\n )\n)\nup.ml.load()\nup.web.load()\n\n# If you want to run the application yourself, or add something else to it, use this variable.\n# Example with uvicorn: uvicorn example_app:app --host 0.0.0.0 --port 80\napp = up.web.app\n\nif __name__ == '__main__':\n up.run_web_app()\n\n```\n\nAnd you can run web application:\n```bash\npython3 example_without_data_app.py\n```\n\n##### With only config\n```bash\nmlup make-app -cs /path/to/my/config.yaml example_without_data_app.py\n```\n\nThis command is making something like this:\n\n```python\n# example_without_data_app.py\nimport mlup\n\n\nup = mlup.UP.load_from_yaml('/path/to/my/config.yaml', load_model=False)\nup.ml.load()\nup.web.load()\n\n# If you want to run the application yourself, or add something else to it, use this 
variable.\n# Example with uvicorn: uvicorn example_app:app --host 0.0.0.0 --port 80\napp = up.web.app\n\nif __name__ == '__main__':\n up.run_web_app()\n```\n\nAnd you can run web application:\n```bash\npython3 example_without_data_app.py\n```\n\n##### With only binary UP object\n```bash\nmlup make-app -bs /path/to/my/up.pickle example_without_data_app.py\n```\n\nThis command is making something like this:\n\n```python\n# example_without_data_app.py\nimport pickle\n\n\nwith open('/path/to/my/up.pickle', 'rb') as f:\n up = pickle.load(f)\n\nif not up.ml.loaded:\n up.ml.load()\nup.web.load()\n\n# If you want to run the application yourself, or add something else to it, use this variable.\n# Example with uvicorn: uvicorn example_app:app --host 0.0.0.0 --port 80\napp = up.web.app\n\nif __name__ == '__main__':\n up.run_web_app()\n```\n\nAnd you can run web application:\n```bash\npython3 example_without_data_app.py\n```\n\n#### mlup validate-config\n\nThis command use for validation your config. This command have alpha version and need finalize.\n\nSee `mlup validate-config --help` or [Description of the bash commands](https://github.com/nxexox/pymlup/blob/main/docs/bash_commands.md#mlup-validate-config) for full docs.\n\n```bash\nmlup validate-config /path/to/my/conf.yaml\n```\n\n## Web application interface\n\nBy default, web application starting on http://localhost:8009 and have api docs.\n\nSee [Web app API](https://github.com/nxexox/pymlup/tree/main/docs/web_app_api.md) for more details. \n\n### Interactive API docs\n\nNow go to http://localhost:8009/docs.\n\nYou will see the automatic interactive API documentation (provided by [Swagger UI](https://github.com/swagger-api/swagger-ui)):\n\n### Api points\n\n#### /health\nUse for check health web application.\n \nHTTP's methods: HEAD, OPTIONS, GET\n\n<details>\n\n##### Return JSON \n```{'status': 200}``` and status code is 200.\n\n</details>\n\n#### /info\nUse for get model and application information. 
If set debug=True in config, return full config. \n\nHTTP's methods: GET\n\n<details>\n\n##### Return JSON:\n```json\n{\n \"model_info\": {\n \"name\": \"MyFirstMLupModel\",\n \"version\": \"1.0.0.0\",\n \"type\": \"sklearn\",\n \"columns\": null\n },\n \"web_app_info\": {\n \"version\": \"1.0.0.0\",\n }\n}\n```\n\nIf set in config `debug=True`, return another json, almost complete config. But no sensitive data.\n\n```json\n{\n \"web_app_config\": {\n \"host\": \"localhost\",\n \"port\": 8009,\n \"web_app_version\": \"1.0.0.0\",\n \"column_validation\": false,\n \"custom_column_pydantic_model\": null,\n \"mode\": \"mlup.web.architecture.directly_to_predict.DirectlyToPredictArchitecture\",\n \"max_queue_size\": 100,\n \"ttl_predicted_data\": 60,\n \"ttl_client_wait\": 30.0,\n \"min_batch_len\": 10,\n \"batch_worker_timeout\": 1.0,\n \"is_long_predict\": false,\n \"show_docs\": true,\n \"debug\": true,\n \"throttling_max_requests\": null,\n \"throttling_max_request_len\": null,\n \"timeout_for_shutdown_daemon\": 3.0,\n \"item_id_col_name\": \"mlup_item_id\"\n },\n \"model_config\": {\n \"name\": \"MyFirstMLupModel\",\n \"version\": \"1.0.0.0\",\n \"type\": \"sklearn\",\n \"columns\": null,\n \"predict_method_name\": \"predict\",\n \"auto_detect_predict_params\": true,\n \"storage_type\": \"mlup.ml.storage.memory.MemoryStorage\",\n \"binarization_type\": \"auto\",\n \"use_thread_loop\": true,\n \"max_thread_loop_workers\": true,\n \"data_transformer_for_predict\": \"mlup.ml.data_transformers.numpy_data_transformer.NumpyDataTransformer\",\n \"data_transformer_for_predicted\": \"mlup.ml.data_transformers.numpy_data_transformer.NumpyDataTransformer\",\n \"dtype_for_predict\": null\n }\n}\n```\n\n</details>\n\n#### /predict\n\nUse for call predict in model.\n\nHTTP's methods: POST\n\n<details>\n\n##### Requests body data:\n```json\n{\n \"data_for_predict\": [\n \"input_data_for_obj_1\",\n \"input_data_for_obj_2\",\n \"input_data_for_obj_3\"\n ]\n}\n```\n\nKey 
The `data_for_predict` key is the default container for input data. By default, the config sets `auto_detect_predict_params=True`.
This option makes mlup analyze the model's predict method, extract its arguments, and generate the API from those parameters.
If `auto_detect_predict_params` finds parameters, it replaces `data_for_predict` with the discovered keys and updates the API docs.

Example for `scikit-learn` models:
```json
{
  "X": [
    "input_data_for_obj_1",
    "input_data_for_obj_2",
    "input_data_for_obj_3"
  ]
}
```

`input_data_for_obj_1` may be any valid JSON data. This data is run through the data transformer set by the `data_transformer_for_predict` config option.

By default, this option is `mlup.ml.data_transformers.numpy_data_transformer.NumpyDataTransformer`.

##### Returns JSON:
```json
{
  "predict_result": [
    "predict_result_for_obj_1",
    "predict_result_for_obj_2",
    "predict_result_for_obj_3"
  ]
}
```

`predict_result_for_obj_1` will be valid JSON data. After the model's prediction, the results are run through the data transformer set by the `data_transformer_for_predicted` config option.

By default, this option is `mlup.ml.data_transformers.numpy_data_transformer.NumpyDataTransformer`.

</details>


##### Validation

This endpoint validates incoming request data. Validation is built from the `columns` config option and the `column_validation` flag.

## Web application modes

See [Web app architectures](https://github.com/nxexox/pymlup/tree/main/docs/web_app_architectures.md) for more details.

The web application has three operating modes:
* `directly_to_predict` - the default. User requests are sent directly to the model.
* `worker_and_queue` - the ML model runs in a worker thread and takes prediction data from a queue.
  The web application puts new user requests into the queue and waits for results from a results queue.
* `batching` - the ML model also runs in a worker thread and takes prediction data from a queue.
  But instead of one request at a time, it combines data from several requests and sends it to the model as one large array.
  The web application puts new user requests into the queue and waits for results from a results queue.

This parameter is named `mode`.
```python
import mlup
from mlup.ml.empty import EmptyModel
from mlup import constants

up = mlup.UP(
    ml_model=EmptyModel(),
    conf=mlup.Config(
        mode=constants.WebAppArchitecture.worker_and_queue,
    )
)
```

If your model is lightweight, or you have plenty of CPU/GPU/RAM, you can run several worker processes:
```python
import mlup
from mlup.ml.empty import EmptyModel
from mlup import constants

up = mlup.UP(
    ml_model=EmptyModel(),
    conf=mlup.Config(
        mode=constants.WebAppArchitecture.worker_and_queue,
        uvicorn_kwargs={'workers': 4},
    )
)
```

## Metrics

MLup PyPI download statistics: https://pepy.tech/project/pymlup
"bugtrack_url": null,
"license": "",
"summary": "MLup framework, fast ml to production, easy to learn, easy to use.",
"version": "0.2.2",
"project_urls": {
"Documentation": "https://github.com/nxexox/pymlup/docs",
"Homepage": "https://github.com/nxexox/pymlup",
"Repository": "https://github.com/nxexox/pymlup"
},
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "52447b2a143b6730d856e6fd9294cba3ed3c890bfce8e86c0ae8f8930ad63303",
"md5": "6527c7f50e1896a7ff7c27d3e5ecbed2",
"sha256": "1f735b38519a4daea9bc0f677d74f64a5f526f75e1db4904c40a722226f517a1"
},
"downloads": -1,
"filename": "pymlup-0.2.2-py3-none-any.whl",
"has_sig": false,
"md5_digest": "6527c7f50e1896a7ff7c27d3e5ecbed2",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.7",
"size": 61013,
"upload_time": "2023-10-04T07:11:27",
"upload_time_iso_8601": "2023-10-04T07:11:27.885923Z",
"url": "https://files.pythonhosted.org/packages/52/44/7b2a143b6730d856e6fd9294cba3ed3c890bfce8e86c0ae8f8930ad63303/pymlup-0.2.2-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "0dfcc65980055f0e0090bca3793f86496879e0460d28dbe3ba0ef95f4a155a2a",
"md5": "50fd3eef698ff2e92cdd307ff9380c06",
"sha256": "77f65a3121cb20cb0f319f15f5415d6f7d3a5e48204c2c8d002f051d303c8a7f"
},
"downloads": -1,
"filename": "pymlup-0.2.2.tar.gz",
"has_sig": false,
"md5_digest": "50fd3eef698ff2e92cdd307ff9380c06",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.7",
"size": 47122,
"upload_time": "2023-10-04T07:11:30",
"upload_time_iso_8601": "2023-10-04T07:11:30.253543Z",
"url": "https://files.pythonhosted.org/packages/0d/fc/c65980055f0e0090bca3793f86496879e0460d28dbe3ba0ef95f4a155a2a/pymlup-0.2.2.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2023-10-04 07:11:30",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "nxexox",
"github_project": "pymlup",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"tox": true,
"lcname": "pymlup"
}