clearml-serving

Name: clearml-serving
Version: 1.3.0
Home page: https://github.com/allegroai/clearml-serving.git
Summary: clearml-serving - Model-Serving Orchestration and Repository Solution
Upload time: 2023-04-12 21:38:54
Author: ClearML
License: Apache License 2.0
Keywords: clearml, mlops, devops, trains, development, machine, deep, learning, version control, machine-learning, machinelearning, deeplearning, deep-learning, model-serving
Requirements: No requirements were recorded.

<div align="center">

<a href="https://app.clear.ml"><img src="https://github.com/allegroai/clearml/blob/master/docs/clearml-logo.svg?raw=true" width="250px"></a>

**ClearML Serving - Model deployment made easy**

## **`clearml-serving v1.3` <br> :sparkles: Model Serving (ML/DL) Made Easy :tada:** <br> :fire: NEW version 1.3 :rocket: 20% faster!


[![GitHub license](https://img.shields.io/github/license/allegroai/clearml-serving.svg)](https://img.shields.io/github/license/allegroai/clearml-serving.svg)
[![PyPI pyversions](https://img.shields.io/pypi/pyversions/clearml-serving.svg)](https://img.shields.io/pypi/pyversions/clearml-serving.svg)
[![PyPI version shields.io](https://img.shields.io/pypi/v/clearml-serving.svg)](https://img.shields.io/pypi/v/clearml-serving.svg)
[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/allegroai)](https://artifacthub.io/packages/helm/allegroai/clearml-serving)
[![Slack Channel](https://img.shields.io/badge/slack-%23clearml--community-blueviolet?logo=slack)](https://join.slack.com/t/allegroai-trains/shared_invite/zt-c0t13pty-aVUZZW1TSSSg2vyIGVPBhg)


</div>


**`clearml-serving`** is a command line utility for model deployment and orchestration.  
It enables deploying models, together with their serving and preprocessing code, to a Kubernetes cluster or a custom container-based solution.

### :fire: NEW :confetti_ball: Take it for a spin with a simple `docker-compose` [command](#nail_care-initial-setup) :magic_wand: :sparkles: 


<a><img src="https://github.com/allegroai/clearml-serving/blob/main/docs/design_diagram.png?raw=true" width="100%"></a>

Features:
* Easy to deploy & configure
  * Supports machine learning models (Scikit-Learn, XGBoost, LightGBM)
  * Supports deep learning models (TensorFlow, PyTorch, ONNX)
  * Customizable REST API for serving (i.e. per-model pre/post-processing for easy integration)
* Flexible  
  * On-line model deployment 
  * On-line endpoint model/version deployment (i.e. no need to take the service down)
  * Per model standalone preprocessing and postprocessing python code 
* Scalable
  * Multiple models per container
  * Multiple models per serving service
  * Multi-service support (fully separated serving services running independently)
  * Multi cluster support
  * Out-of-the-box node auto-scaling based on load/usage
* Efficient
  * Multi-container resource utilization
  * Support for CPU & GPU nodes
  * Auto-batching for DL models
* Automatic deployment
  * Automatic model upgrades w/ canary support 
  * Programmable API for model deployment
* Canary A/B deployment
  * Online Canary updates
* Model Monitoring
  * Usage Metric reporting
  * Metric Dashboard
  * Model performance metric
  * Model performance Dashboard

## ClearML Serving Design 

### ClearML Serving Design Principles 

**Modular**, **Scalable**, **Flexible**, **Customizable**, **Open Source**

## Installation

### Prerequisites

* ClearML-Server: Model repository, Service Health, Control plane
* Kubernetes / Single-instance Machine: Deploying containers
* CLI: Configuration & model deployment interface

### :nail_care: Initial Setup

1. Set up your [**ClearML Server**](https://github.com/allegroai/clearml-server) or use the [Free tier Hosting](https://app.clear.ml)
2. Set up local access (if you haven't already); see instructions [here](https://clear.ml/docs/latest/docs/getting_started/ds/ds_first_steps#install-clearml)
3. Install clearml-serving CLI: 
```bash
pip3 install clearml-serving
```
4. Create the Serving Service Controller
  - `clearml-serving create --name "serving example"`
  - The new serving service UID is printed, e.g. `New Serving Service created: id=aa11bb22aa11bb22`
5. Write down the Serving Service UID
6. Clone clearml-serving repository
```bash
git clone https://github.com/allegroai/clearml-serving.git
```
7. Edit the environment variables file (`docker/example.env`) with your clearml-server credentials and Serving Service UID. For example, you should have something like:
```bash
cat docker/example.env
```
```bash
  CLEARML_WEB_HOST="https://app.clear.ml"
  CLEARML_API_HOST="https://api.clear.ml"
  CLEARML_FILES_HOST="https://files.clear.ml"
  CLEARML_API_ACCESS_KEY="<access_key_here>"
  CLEARML_API_SECRET_KEY="<secret_key_here>"
  CLEARML_SERVING_TASK_ID="<serving_service_id_here>"
```
8. Spin up the clearml-serving containers with docker-compose (or, if running on Kubernetes, use the Helm chart)
```bash
cd docker && docker-compose --env-file example.env -f docker-compose.yml up 
```
If you need Triton support (Keras/PyTorch/ONNX etc.), use the Triton docker-compose file
```bash
cd docker && docker-compose --env-file example.env -f docker-compose-triton.yml up 
```
:muscle: If running on a GPU instance with Triton support (Keras/PyTorch/ONNX etc.), use the Triton GPU docker-compose file
```bash
cd docker && docker-compose --env-file example.env -f docker-compose-triton-gpu.yml up 
```

> **Notice**: Any model registered with the "Triton" engine will run its pre/post-processing code on the Inference Service container, while the model inference itself is executed on the Triton Engine container.


### :ocean: Optional: advanced setup - S3/GS/Azure access

To add access credentials and allow the inference containers to download models from your S3/GS/Azure object storage,
add the respective environment variables to your env file (`example.env`).
See further details on configuring storage access [here](https://clear.ml/docs/latest/docs/integrations/storage#configuring-storage).

```bash
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION

GOOGLE_APPLICATION_CREDENTIALS

AZURE_STORAGE_ACCOUNT
AZURE_STORAGE_KEY
```
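
For example, to grant S3 access the variables could be appended to `docker/example.env` alongside the ClearML credentials shown earlier; the values below are placeholders, not working credentials:

```bash
# docker/example.env -- object-storage credentials (placeholder values)
AWS_ACCESS_KEY_ID="<aws_access_key_here>"
AWS_SECRET_ACCESS_KEY="<aws_secret_key_here>"
AWS_DEFAULT_REGION="us-east-1"
```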

### :information_desk_person: Concepts

**CLI** - Secure configuration interface for on-line model upgrade/deployment on running Serving Services

**Serving Service Task** - Control plane object storing the configuration of all the endpoints. Supports multiple separate instances deployed on multiple clusters.

**Inference Services** - Inference containers performing model serving pre/post-processing; they also support CPU model inference.

**Serving Engine Services** - Inference engine containers (e.g. Nvidia Triton, TorchServe etc.) used by the Inference Services for heavier model inference.

**Statistics Service** - A single instance per Serving Service, collecting and broadcasting model serving & performance statistics

**Time-series DB** - Statistics collection service used by the Statistics Service, e.g. Prometheus

**Dashboards** - Customizable dashboarding solution on top of the collected statistics, e.g. Grafana

### :point_right: Toy model (scikit learn) deployment example 

1. Train a toy scikit-learn model
  - create a new Python virtual environment
  - `pip3 install -r examples/sklearn/requirements.txt`
  - `python3 examples/sklearn/train_model.py`
  - The model is automatically registered and uploaded into the model repository. For manual model registration see [here](#turtle-registering--deploying-new-models-manually)
2. Register the new Model on the Serving Service
  - `clearml-serving --id <service_id> model add --engine sklearn --endpoint "test_model_sklearn" --preprocess "examples/sklearn/preprocess.py" --name "train sklearn model" --project "serving examples"`
  - **Notice**: the preprocessing Python code is packaged and uploaded to the Serving Service, to be used by any inference container, and is downloaded in real time when updated (a minimal sketch of such a preprocessing file is shown below)
3. Spin up the Inference Container
  - Customize the container [Dockerfile](clearml_serving/serving/Dockerfile) if needed
  - Build the container: `docker build --tag clearml-serving-inference:latest -f clearml_serving/serving/Dockerfile .`
  - Run the inference container: `docker run -v ~/clearml.conf:/root/clearml.conf -p 8080:8080 -e CLEARML_SERVING_TASK_ID=<service_id> -e CLEARML_SERVING_POLL_FREQ=5 clearml-serving-inference:latest`
4. Test new model inference endpoint
  - `curl -X POST "http://127.0.0.1:8080/serve/test_model_sklearn" -H "accept: application/json" -H "Content-Type: application/json" -d '{"x0": 1, "x1": 2}'`

**Notice**: now that an inference container is running, we can add new model inference endpoints directly with the CLI; the inference container will automatically sync once every 5 minutes.

**Notice**: on the first few requests the inference container needs to download the model file and the preprocessing Python code, so these requests might take a little longer; once everything is cached, responses return almost immediately.
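
For reference, a minimal preprocessing file in the spirit of `examples/sklearn/preprocess.py` looks roughly as follows; the exact method signatures should be checked against `clearml_serving/preprocess/preprocess_template.py`, and the body mapping below (`x0`, `x1` into a single feature row) is just an illustrative assumption:

```python
from typing import Any


# Notice: the class must be named "Preprocess"; clearml-serving loads it dynamically
class Preprocess(object):
    def __init__(self):
        # set any internal state here; the object persists across requests
        pass

    def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None) -> Any:
        # convert the request JSON body into the model's expected input format
        return [[body.get("x0", None), body.get("x1", None)]]

    def postprocess(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> dict:
        # convert the model output into a JSON-serializable response body
        return {"y": data.tolist() if hasattr(data, "tolist") else data}
```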

**Notes:**
> Review the model repository in the ClearML web UI, under the "serving examples" Project on your ClearML account/server ([free hosted](https://app.clear.ml) or [self-deployed](https://github.com/allegroai/clearml-server)).

> Inference services status, console outputs and machine metrics are available in the ClearML UI in the Serving Service project (default: "DevOps" project)

> To learn more on training models and the ClearML model repository, see the [ClearML documentation](https://clear.ml/docs)

### :turtle: Registering & Deploying new models manually 

Uploading an existing model file into the model repository can be done via the `clearml` REST API, the Python interface, or the `clearml-serving` CLI.

> To learn more on training models and the ClearML model repository, see the [ClearML documentation](https://clear.ml/docs)

- A local model file on our laptop: `examples/sklearn/sklearn-model.pkl`
- Upload the model file to the `clearml-server` file storage and register it
`clearml-serving --id <service_id> model upload --name "manual sklearn model" --project "serving examples" --framework "scikit-learn" --path examples/sklearn/sklearn-model.pkl`
- We now have a new model in the "serving examples" project, named "manual sklearn model". The CLI output prints the UID of the newly created model; we will use it to register a new endpoint.
- In the `clearml` web UI we can see the new model listed under the `Models` tab of the associated project. We can also download the model file itself directly from the web UI.
- Register a new endpoint with the new model
`clearml-serving --id <service_id> model add --engine sklearn --endpoint "test_model_sklearn" --preprocess "examples/sklearn/preprocess.py" --model-id <newly_created_model_id_here>`

**Notice** we can also provide a different storage destination for the model, such as S3/GS/Azure, by passing
`--destination="s3://bucket/folder"`, `gs://bucket/folder`, or `azure://bucket/folder`. There is no need to provide a unique path in the destination argument; the model will be stored at a unique path based on the serving service ID and the model name.
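
The same upload can also be scripted with the `clearml` Python SDK instead of the CLI; a rough sketch is shown below (the project and model names mirror the CLI example above, and the exact arguments should be verified against the ClearML SDK docs):

```python
from clearml import Task, OutputModel

# a task to own the uploaded model inside the "serving examples" project
task = Task.init(project_name="serving examples", task_name="manual model upload")

# register the local weights file; ClearML uploads it to the configured files server
model = OutputModel(task=task, name="manual sklearn model", framework="scikit-learn")
model.update_weights(weights_filename="examples/sklearn/sklearn-model.pkl")

# the printed ID can then be passed to `clearml-serving model add --model-id ...`
print("model id:", model.id)
```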


### :rabbit: Automatic model deployment

The ClearML Serving Service supports automatic model deployment and upgrades, directly connected with the model repository and API. When model auto-deploy is configured, new model versions are automatically deployed when you "publish" or "tag" a new model in the `clearml` model repository. This automation interface allows for a simpler CI/CD model deployment process, as a single API call automatically deploys (or removes) a model from the Serving Service.

#### :bulb: Automatic model deployment example

1. Configure the model auto-update on the Serving Service
- `clearml-serving --id <service_id> model auto-update --engine sklearn --endpoint "test_model_sklearn_auto" --preprocess "preprocess.py" --name "train sklearn model" --project "serving examples" --max-versions 2`
2. Deploy the Inference container (if not already deployed)
3. Publish a new model to the model repository, using one of the following:
- Go to the "serving examples" project in the ClearML web UI, click the Models tab, search for "train sklearn model", right click and select "Publish"
- Use the REST API ([details](https://clear.ml/docs/latest/docs/references/api/models#post-modelspublish_many))
- Use the Python interface:
```python
from clearml import Model
Model(model_id="unique_model_id_here").publish()
```
4. The new model is available on a new endpoint version (1), test with: 
`curl -X POST "http://127.0.0.1:8080/serve/test_model_sklearn_auto/1" -H "accept: application/json" -H "Content-Type: application/json" -d '{"x0": 1, "x1": 2}'`

### :bird: Canary endpoint setup

Canary endpoint deployment adds a new endpoint where incoming requests are routed to a preconfigured set of endpoints with a pre-provided distribution. For example, let's create a new endpoint "test_model_sklearn_canary", for which we provide a list of endpoints and probabilities (weights).

```bash
clearml-serving --id <service_id> model canary --endpoint "test_model_sklearn_canary" --weights 0.1 0.9 --input-endpoints test_model_sklearn/2 test_model_sklearn/1
```
This means that any request coming to `/test_model_sklearn_canary/` will be routed with probability of 90% to
`/test_model_sklearn/1/` and with probability of 10% to `/test_model_sklearn/2/`. 

**Note:**
> As with any other Serving Service configuration, we can configure the Canary endpoint while the Inference containers are already running and deployed; they will get updated in their next update cycle (default: once every 5 minutes)

We can also prepare a "fixed" canary endpoint, always splitting the load between the last two deployed models:
```bash
clearml-serving --id <service_id> model canary --endpoint "test_model_sklearn_canary" --weights 0.1 0.9 --input-endpoints-prefix test_model_sklearn/
```

This means that if we have two model inference endpoints, `/test_model_sklearn/1/` and `/test_model_sklearn/2/`, the 10% probability (weight 0.1) will match the last (ordered by version number) endpoint, i.e. `/test_model_sklearn/2/`, and the 90% will match `/test_model_sklearn/1/`.
When we add a new model endpoint version, e.g. `/test_model_sklearn/3/`, the canary distribution will automatically match the 90% probability to `/test_model_sklearn/2/` and the 10% to the new endpoint `/test_model_sklearn/3/`.  

Example:
1. Add two endpoints:
  - `clearml-serving --id <service_id> model add --engine sklearn --endpoint "test_model_sklearn" --preprocess "examples/sklearn/preprocess.py" --name "train sklearn model" --version 1 --project "serving examples"`
  -  `clearml-serving --id <service_id> model add --engine sklearn --endpoint "test_model_sklearn" --preprocess "examples/sklearn/preprocess.py" --name "train sklearn model" --version 2 --project "serving examples"`
2. Add Canary endpoint:
  - `clearml-serving --id <service_id> model canary --endpoint "test_model_sklearn_canary" --weights 0.1 0.9 --input-endpoints test_model_sklearn/2 test_model_sklearn/1`
3. Test Canary endpoint:
  - `curl -X POST "http://127.0.0.1:8080/serve/test_model_sklearn_canary" -H "accept: application/json" -H "Content-Type: application/json" -d '{"x0": 1, "x1": 2}'`
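
To push a quick burst of traffic through the canary route (assuming the inference container from the setup above is listening on port 8080), a simple loop can be used; the resulting per-version split is easiest to confirm on the request-count dashboards described in the next section:

```bash
# send 100 requests; the canary router picks a backing endpoint per request
# according to the configured weights (90% version 1, 10% version 2 in this example)
for i in $(seq 1 100); do
  curl -s -X POST "http://127.0.0.1:8080/serve/test_model_sklearn_canary" \
    -H "accept: application/json" -H "Content-Type: application/json" \
    -d '{"x0": 1, "x1": 2}' > /dev/null
done
```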


### :bar_chart: Model monitoring and performance metrics :bell:

![Grafana Screenshot](docs/grafana_screenshot.png)

ClearML Serving instances automatically send serving statistics (count/latency) to Prometheus, and Grafana can be used
to visualize them and create live dashboards.

The default docker-compose installation is preconfigured with Prometheus and Grafana. Notice that by default the data of both containers is *not* persistent; to add persistence we recommend adding a volume mount.

You can also add custom metrics on the inputs/predictions of your models.
Once a model endpoint is registered, custom metrics can be added using the CLI.
For example, assuming our mock scikit-learn model is deployed on the endpoint `test_model_sklearn`,
we can log the request inputs and outputs (see the examples/sklearn/preprocess.py example):
```bash
clearml-serving --id <serving_service_id_here> metrics add --endpoint test_model_sklearn \
  --variable-scalar x0=0,0.1,0.5,1,10 x1=0,0.1,0.5,1,10 y=0,0.1,0.5,0.75,1
```

This will create a distribution histogram (buckets specified via a list of less-than-or-equal values after the `=` sign)
that we will be able to visualize in Grafana.
Notice we can also log time-series values with `--variable-value x2` or discrete results (e.g. classification strings) with `--variable-enum animal=cat,dog,sheep`.
Additional custom variables can be reported in the preprocess and postprocess code with a call to `collect_custom_statistics_fn({'new_var': 1.337})`; see clearml_serving/preprocess/preprocess_template.py
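
As a hedged illustration, reporting such an extra variable from inside the preprocessing class could look like this (following the `collect_custom_statistics_fn` pattern referenced above; the variable name `new_var` and the `x0`/`x1` body fields are just examples):

```python
from typing import Any


class Preprocess(object):
    def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None) -> Any:
        # report an additional custom statistic alongside the request inputs
        if collect_custom_statistics_fn:
            collect_custom_statistics_fn({"new_var": 1.337})
        return [[body.get("x0", None), body.get("x1", None)]]
```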

With the new metrics logged, we can create a visualization dashboard showing the latency of the calls and the output distribution.

Grafana model performance example:

- browse to http://localhost:3000
- login with: admin/admin
- create a new dashboard
- select Prometheus as data source
- Add a query: `100 * increase(test_model_sklearn:_latency_bucket[1m]) / increase(test_model_sklearn:_latency_sum[1m])`
- Change the panel type to Heatmap, and on the right-hand side under "Data Format" select "Time series buckets"
- You now have the latency distribution, over time.
- Repeat the same process for x0, the query would be `100 * increase(test_model_sklearn:x0_bucket[1m]) / increase(test_model_sklearn:x0_sum[1m])`

> **Notice**: If not specified, all serving requests will be logged. To change the default, configure `CLEARML_DEFAULT_METRIC_LOG_FREQ`; for example, `CLEARML_DEFAULT_METRIC_LOG_FREQ=0.2` means only 20% of all requests will be logged. You can also specify a per-endpoint log frequency with the `clearml-serving` CLI. Check the CLI documentation with `clearml-serving metrics --help`
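
For example, the sampling rate could be set in the same env file used by the docker-compose setup (assuming the inference container picks it up, as with the other variables in the Initial Setup section):

```bash
# docker/example.env -- log roughly 20% of serving requests for metrics
CLEARML_DEFAULT_METRIC_LOG_FREQ=0.2
```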

### :fire: Model Serving Examples

- Scikit-Learn [example](examples/sklearn/readme.md) - random data 
- Scikit-Learn Model Ensemble [example](examples/ensemble/readme.md) - random data 
- XGBoost [example](examples/xgboost/readme.md) - iris dataset
- LightGBM [example](examples/lightgbm/readme.md) - iris dataset
- PyTorch [example](examples/pytorch/readme.md) - mnist dataset
- TensorFlow/Keras [example](examples/keras/readme.md) - mnist dataset
- Model Pipeline [example](examples/pipeline/readme.md) - random data
- Custom Model [example](examples/custom/readme.md) - custom data

### :pray: Status

  - [x] FastAPI integration for inference service
  - [x] multi-process Gunicorn for inference service
  - [x] Dynamic preprocess python code loading (no need for container/process restart)
  - [x] Model files download/caching (http/s3/gs/azure)
  - [x] Scikit-learn, XGBoost, LightGBM integration
  - [x] Custom inference, including dynamic code loading
  - [x] Manual model upload/registration to model repository (http/s3/gs/azure)
  - [x] Canary load balancing
  - [x] Auto model endpoint deployment based on model repository state
  - [x] Machine/Node health metrics
  - [x] Dynamic online configuration
  - [x] CLI configuration tool
  - [x] Nvidia Triton integration
  - [x] GZip request compression
  - [x] TorchServe engine integration
  - [x] Prebuilt Docker containers (dockerhub)
  - [x] Docker-compose deployment (CPU/GPU)
  - [x] Scikit-Learn example
  - [x] XGBoost example
  - [x] LightGBM example
  - [x] PyTorch example
  - [x] TensorFlow/Keras example
  - [x] Model ensemble example
  - [x] Model pipeline example
  - [x] Statistics Service
  - [x] Kafka install instructions
  - [x] Prometheus install instructions
  - [x] Grafana install instructions
  - [x] Kubernetes Helm Chart
  - [ ] Intel optimized container (python, numpy, daal, scikit-learn)

## Contributing

**PRs are always welcomed** :heart: See more details in the ClearML [Guidelines for Contributing](https://github.com/allegroai/clearml/blob/master/docs/contributing.md).





            
