| Field | Value |
| --- | --- |
| Name | lpr-pkg |
| Version | 3.0.4 |
| home_page | |
| Summary | |
| upload_time | 2023-06-06 07:04:13 |
| maintainer | |
| docs_url | None |
| author | Your Name |
| requires_python | >=3.8,<3.12 |
| license | |
| keywords | |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
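For reference, this release can be installed from PyPI with pip (a minimal example, assuming a Python 3.8–3.11 environment as required by `requires_python`):

```bash
# Install the published 3.0.4 release of lpr-pkg
pip install "lpr-pkg==3.0.4"
```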
# Introduction
This repo serves as a model release template. It covers everything from ONNX conversion up to releasing the model into production, with all the necessary configs.
# Getting Started
To get started, fork this repo and work through it section by section, adapting it to your use case as described below.
1. [Makefile](#1-makefile)
2. [Dependency management](#2-dependency-management)
3. Configs
4. Package code
5. ONNX conversion
6. TRT conversion
7. Handlers
8. Testing
9. Flask app
# 1. Makefile
The Makefile is the interface developers use to perform any task. Each command is described below; example invocations follow the list.
- download-saved-model: Download artifacts stored on MLflow at a given epoch. Make sure to fill in configs/config.yaml.
- download-registered-model: Pull artifacts from the model registry. Pass DEST as the directory to store them in. Make sure to fill in configs/config.yaml.
- convert-to-onnx: Run the ONNX conversion script.
- convert-trt: Build and run a container that performs TRT conversion and writes the result to artifacts/trt_converted. Pass FP (floating point), BS (batch size), DEVICE (GPU device), and ONNX_PATH (path to ONNX weights).
- trt-exec: Command executed from inside the TRT container; performs the conversion and copies the model out of the container.
- predict-onnx: Predict using ONNX weights. Pass DATA_DIR (directory of data to predict on), ONNX_PATH (path to ONNX weights), CONFIG_PATH (model config path), and OUTPUT (output directory).
- predict-triton: Predict by sending requests to a hosted Triton server. Pass DATA_DIR (directory of data to predict on), IP (server IP), PORT (Triton server port), MODEL_NAME (model name on Triton), CONFIG_PATH (model config path), and OUTPUT (output directory).
- evaluate: Evaluate predicted results and write out metrics. Pass MODEL_PREDS (model predictions directory), GT (ground truth directory), and OUTPUT (output path).
- python-unittest: Run the defined Python tests.
- bash-unittest: Run the defined bash tests.
- quick-host-onnx: Set up the Triton folder structure by copying the necessary files, then host a Triton server container. Pass ONNX_PATH.
- quick-host-trt: Set up the Triton folder structure by copying the necessary files, then host a Triton server container. Pass FP (floating point), BS (batch size), and DEVICE (GPU device).
- host-endpoint: Perform quick-host-trt, then build and start the Flask container. Pass FP (floating point), BS (batch size), and DEVICE (GPU device).
- setup-flask-app: Command executed from inside the Flask container.
- push-model: Push the model to the model registry.
- build-publish-pypi: Build the package folder into a PyPI package and push it to the registry.
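A few illustrative invocations (the target and variable names come from the list above; the paths and values are placeholders, so adjust them to your setup):

```bash
# Convert ONNX weights to a TRT engine (FP16, batch size 8, on GPU 0)
make convert-trt FP=16 BS=8 DEVICE=0 ONNX_PATH=artifacts/model.onnx

# Predict with the ONNX weights, then evaluate against ground truth
make predict-onnx DATA_DIR=data/test ONNX_PATH=artifacts/model.onnx \
    CONFIG_PATH=configs/config.yaml OUTPUT=outputs/onnx_preds
make evaluate MODEL_PREDS=outputs/onnx_preds GT=data/test_gt OUTPUT=outputs/metrics

# Host the TRT-backed Triton + Flask endpoint
make host-endpoint FP=16 BS=8 DEVICE=0
```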
# 2. Dependency Management
Raw data
{
  "_id": null,
  "home_page": "",
  "name": "lpr-pkg",
  "maintainer": "",
  "docs_url": null,
  "requires_python": ">=3.8,<3.12",
  "maintainer_email": "",
  "keywords": "",
  "author": "Your Name",
  "author_email": "you@example.com",
  "download_url": "https://files.pythonhosted.org/packages/61/ff/20a6710f0e4336cc3bcd7c94bebed43f7bfdae11e4b9b1f7921f76e4f46e/lpr_pkg-3.0.4.tar.gz",
  "platform": null,
  "description": "# Introduction \nThis repo serves as a Model release template repo. This includes everything from onnx conversion up to model release into production with all nessasry configs.\n\n# Getting Started\nTo get started you need to fork this repo, and start going through it and fitting it to your use case as described below.\n1.\t[Makefile](#1-makefile)\n2.\t[Dependency management](#2-dependency-management)\n3. Configs\n4.\tPackage code\n5.\tOnnx conversion\n6. Trt conversion\n7. Handlers\n8. Testing\n9. Flask app\n\n# 1. Makefile\nThe makefile is the interface where developers interact with to perform any task. Below is the description of each command:\n- download-saved-model: Download artifacts stored on mlflow at a certain epoch. Make sure to fill configs/config.yaml\n\n- download-registered-model: Pull artifacts from model registry. Pass DEST as directory to store in. Make sure to fill configs/config.yaml\n\n- convert-to-onnx: Run convert to onnx script.\n\n- convert-trt: build & run container that performs trt \nconversion and yeild to artifacts/trt_converted. Pass FP(floating point) BS(batch size) DEVICE(gpu device) ONNX_PATH(path to onnx weights)\n- trt-exec: command to be executed from inside the trt container, perform the conversion and copies the model to outside container\n\n- predict-onnx: predict using onnx weights. Pass DATA_DIR(directory of data to be predicted) ONNX_PATH(path to onnx weights) CONFIG_PATH(Model config path) OUTPUT(output path directory)\n\n- predict-triton: predict by sending to a hosted triton server. Pass DATA_DIR(directory of data to be predicted) IP(ip of server) PORT(port of triton server) MODEL_NAME(model name on triton) CONFIG_PATH(Model config path) OUTPUT(output path directory)\n\n- evaluate: evaluate predicted results and write out metrics. Pass MODEL_PREDS(model predictions directory) GT(ground truth directory) OUTPUT(output path)\n\n- python-unittest: Run python tests defined.\n\n- bash-unittest: Run defined bash tests.\n\n- quick-host-onnx: setup triton folder structure by copying nessasarry files, then hosting a triton server container. Pass ONNX_PATH\n\n- quick-host-trt: setup triton folder structure by copying nessasarry files, then hosting a triton server container. Pass FP(floating point) BS(batch size) DEVICE(gpu device)\n\n- host-endpoint: preform quick-host-trt and build and start flask container. Pass FP(floating point) BS(batch size) DEVICE(gpu device)\n\n- setup-flask-app: command to be executed from inside the flask container.\n\n- push-model: push model to Model registry.\n\n- build-publish-pypi: build package folder into a pypi package and push the package to registry. \n\n\n\n# 2. Dependency Management\n",
  "bugtrack_url": null,
  "license": "",
  "summary": "",
  "version": "3.0.4",
  "project_urls": null,
  "split_keywords": [],
  "urls": [
    {
      "comment_text": "",
      "digests": {
        "blake2b_256": "8da62f27f1bb7a89910f22d61c0059d3982b680cbb3369507912f3e953eb41ee",
        "md5": "728da8267e1744bbd4e85e2c10a19f97",
        "sha256": "99366d95bd8f6144799b04ab055318a4e489f7104e253b88cc7a3c2cf16c05c0"
      },
      "downloads": -1,
      "filename": "lpr_pkg-3.0.4-py3-none-any.whl",
      "has_sig": false,
      "md5_digest": "728da8267e1744bbd4e85e2c10a19f97",
      "packagetype": "bdist_wheel",
      "python_version": "py3",
      "requires_python": ">=3.8,<3.12",
      "size": 6949,
      "upload_time": "2023-06-06T07:04:11",
      "upload_time_iso_8601": "2023-06-06T07:04:11.551742Z",
      "url": "https://files.pythonhosted.org/packages/8d/a6/2f27f1bb7a89910f22d61c0059d3982b680cbb3369507912f3e953eb41ee/lpr_pkg-3.0.4-py3-none-any.whl",
      "yanked": false,
      "yanked_reason": null
    },
    {
      "comment_text": "",
      "digests": {
        "blake2b_256": "61ff20a6710f0e4336cc3bcd7c94bebed43f7bfdae11e4b9b1f7921f76e4f46e",
        "md5": "092bc489033ab4b79b89ca31c7a1da03",
        "sha256": "6794ac2364afb2d15f5a234bc84232c259d0b70b318cc41f2c9270efc2149608"
      },
      "downloads": -1,
      "filename": "lpr_pkg-3.0.4.tar.gz",
      "has_sig": false,
      "md5_digest": "092bc489033ab4b79b89ca31c7a1da03",
      "packagetype": "sdist",
      "python_version": "source",
      "requires_python": ">=3.8,<3.12",
      "size": 6288,
      "upload_time": "2023-06-06T07:04:13",
      "upload_time_iso_8601": "2023-06-06T07:04:13.219467Z",
      "url": "https://files.pythonhosted.org/packages/61/ff/20a6710f0e4336cc3bcd7c94bebed43f7bfdae11e4b9b1f7921f76e4f46e/lpr_pkg-3.0.4.tar.gz",
      "yanked": false,
      "yanked_reason": null
    }
  ],
  "upload_time": "2023-06-06 07:04:13",
  "github": false,
  "gitlab": false,
  "bitbucket": false,
  "codeberg": false,
  "lcname": "lpr-pkg"
}
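The same metadata can also be retrieved from the public PyPI JSON API (a quick sketch; assumes this release is reachable on pypi.org):

```bash
# Fetch and pretty-print the metadata for this release
curl -s https://pypi.org/pypi/lpr-pkg/3.0.4/json | python -m json.tool
```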