![Code Quality Status](https://github.com/schmidtbri/rest-model-service/actions/workflows/test.yml/badge.svg)
[![License](https://img.shields.io/badge/license-BSD--3--Clause-green)](https://opensource.org/licenses/BSD-3-Clause)
[![PyPi](https://img.shields.io/badge/pypi-v0.6.0-green)](https://pypi.org/project/rest-model-service/)
# REST Model Service
**rest-model-service** is a package for building RESTful services for hosting machine learning models.
This package helps you to quickly build RESTful services for your ML model by handling many low-level details, like:
- Documentation, using pydantic and OpenAPI
- Logging configuration
- Status Check Endpoints
- Metrics
This package also allows you to extend the functionality of your deployed models by following the
[Decorator Pattern](https://en.wikipedia.org/wiki/Decorator_pattern).
## Installation
The package can be installed from PyPI:
```bash
pip install rest_model_service
```
## Usage
To use the service you must first have a working model class that uses the MLModel base class from the
[ml_base package](https://schmidtbri.github.io/ml-base/). The MLModel base class is designed to provide a consistent interface around model prediction
logic that allows the rest_model_service package to deploy any model that implements it. Some examples of how to create
MLModel classes for your model can be found [here](https://schmidtbri.github.io/ml-base/basic/).
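As a rough illustration of the interface shape, a model class pairs typed input and output with a `predict()` method. The sketch below is a stdlib-only stand-in, not the real ml_base API: the actual `MLModel` base class must be subclassed and also declares metadata and pydantic schemas, as described in the ml_base documentation.

```python
from dataclasses import dataclass


# Hypothetical input/output types; the real ml_base package uses pydantic models.
@dataclass
class ModelInput:
    sepal_length: float
    sepal_width: float
    petal_length: float
    petal_width: float


@dataclass
class ModelOutput:
    species: str


class IrisModel:
    """Stand-in showing the shape of a model class's prediction interface.

    A real implementation would subclass ml_base's MLModel and supply the
    metadata and schemas that base class requires; only predict() is kept here.
    """

    def predict(self, data: ModelInput) -> ModelOutput:
        # A trivial rule in place of a trained model.
        species = "setosa" if data.petal_length < 2.5 else "versicolor"
        return ModelOutput(species=species)
```

The key point is that prediction logic sits behind a single, consistently-typed `predict()` method, which is what lets the service generate endpoints and documentation for any model that implements the interface.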
You can then set up a configuration file that points at the model class of the model you want to host. The
configuration file should look like this:
```yaml
service_title: "REST Model Service"
models:
  - class_path: tests.mocks.IrisModel
    create_endpoint: true
```
The "class_path" should contain the full path to the class, including the package names, module name, and class name,
separated by periods. The "create_endpoint" option covers cases where you want to load a model without creating an
endpoint for it: if it is set to "false", the model will be loaded and available for use within the service, but no
endpoint will be defined for it. A reference to the model object will still be available from the [ModelManager
singleton](https://schmidtbri.github.io/ml-base/basic/#using-the-modelmanager-class).
The config file should be YAML, be named "rest_config.yaml", and be in the current working directory. However,
we can point at configuration files that have different names and are in different locations if needed.
The service can host many models; to host more than one, just add entries to the "models" array.
Configuration options can also be passed to the models hosted by the service. To do this, add a configuration key to
the model entry in the "models" array:
```yaml
service_title: "REST Model Service"
models:
  - class_path: tests.mocks.IrisModel
    create_endpoint: true
    configuration:
      parameter1: true
      parameter2: string_value
      parameter3: 123
```
The key-value pairs are passed directly into the model class' `__init__()` method at instantiation time as keyword
arguments. The model can then use the parameters to configure itself.
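The mechanism can be pictured with the standard library: split the class name off the dotted path, import the module, and instantiate the class with the configuration mapping as keyword arguments. The helper name below is hypothetical, not the package's actual internal API:

```python
import importlib
from typing import Optional


def instantiate_from_class_path(class_path: str, configuration: Optional[dict] = None):
    """Resolve a dotted 'package.module.ClassName' path and instantiate it.

    Illustrative helper; rest_model_service's internals may differ.
    """
    module_path, _, class_name = class_path.rpartition(".")
    module = importlib.import_module(module_path)
    cls = getattr(module, class_name)
    # The "configuration" mapping becomes keyword arguments to __init__().
    return cls(**(configuration or {}))


# A standard-library class stands in for a model class here:
counter = instantiate_from_class_path("collections.Counter", {"a": 2, "b": 1})
```

This is why the class path must be importable from the service's Python environment (see "Common Errors" below).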
### Adding Service Information
We can add several details to the configuration file that are useful when building OpenAPI specifications.
```yaml
service_title: "REST Model Service"
description: "Service description"
version: "1.1.0"
models:
  - class_path: tests.mocks.IrisModel
    create_endpoint: true
```
The service title, description, and version are passed into the application and used to build the OpenAPI specification.
Details for how to build the OpenAPI document for your model service are below.
### Adding a Decorator to a Model
The rest_model_service package also supports the [decorator pattern](https://en.wikipedia.org/wiki/Decorator_pattern).
Decorators are defined in the [ml_base package](https://schmidtbri.github.io/ml-base/) and explained
[here](https://schmidtbri.github.io/ml-base/decorator/). A decorator can be added to a model by adding the "decorators"
key to the model's configuration:
```yaml
service_title: REST Model Service With Decorators
models:
  - class_path: tests.mocks.IrisModel
    create_endpoint: true
    decorators:
      - class_path: tests.mocks.PredictionIDDecorator
```
The PredictionIDDecorator will be instantiated and added to the IrisModel instance when the service starts up.
Keyword arguments can also be provided to the decorator's `__init__()` by adding a "configuration" key to the
decorator's entry like this:
```yaml
service_title: REST Model Service With Decorators
models:
  - class_path: tests.mocks.IrisModel
    create_endpoint: true
    decorators:
      - class_path: tests.mocks.PredictionIDDecorator
        configuration:
          parameter1: "asdf"
          parameter2: "zxcv"
```
The configuration dictionary will be passed to the decorator class as keyword arguments.
Many decorators can be added to a single model, in which case each decorator will decorate the decorator that was
previously attached to the model. This will create a "stack" of decorators that will each handle the prediction request
before the model's prediction is created.
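The stacking behavior can be pictured with plain Python objects: each decorator wraps whatever was wrapped before it, so the last decorator listed becomes the outermost. The class names below are illustrative, not the ml_base decorator API:

```python
class Model:
    def predict(self, data):
        return {"prediction": data * 2}


class TimingDecorator:
    """Illustrative decorator: wraps another object's predict()."""

    def __init__(self, model):
        self._model = model

    def predict(self, data):
        result = self._model.predict(data)
        result["timed"] = True
        return result


class PredictionIDDecorator:
    """Illustrative decorator that tags each prediction with an ID."""

    def __init__(self, model):
        self._model = model
        self._counter = 0

    def predict(self, data):
        self._counter += 1
        result = self._model.predict(data)
        result["prediction_id"] = self._counter
        return result


# Build the stack the way the service would: each decorator wraps the last.
model = Model()
for decorator_cls in [TimingDecorator, PredictionIDDecorator]:
    model = decorator_cls(model)

# The request flows PredictionIDDecorator -> TimingDecorator -> Model.
result = model.predict(21)
```

Because the outermost decorator sees the request first, decorator order in the configuration determines the order in which each one handles the prediction.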
### Adding Logging
The service also optionally accepts logging configuration through the YAML configuration file:
```yaml
service_title: REST Model Service With Logging
models:
  - class_path: tests.mocks.IrisModel
    create_endpoint: true
logging:
  version: 1
  disable_existing_loggers: true
  formatters:
    formatter:
      class: logging.Formatter
      format: "%(asctime)s %(pathname)s %(lineno)s %(levelname)s %(message)s"
  handlers:
    stdout:
      level: INFO
      class: logging.StreamHandler
      stream: ext://sys.stdout
      formatter: formatter
  loggers:
    root:
      level: INFO
      handlers:
        - stdout
      propagate: false
```
The YAML needs to be formatted so that it deserializes to a dictionary that matches the logging package's [configuration
dictionary schema](https://docs.python.org/3/library/logging.config.html#logging-config-dictschema).
### Adding Metrics
This package allows you to create an endpoint that exposes metrics to a [Prometheus server](https://prometheus.io/).
The metrics endpoint is disabled by default and must be enabled in the configuration file.
Using this aspect of the service requires installing the "metrics" optional dependencies:
```bash
pip install rest_model_service[metrics]
```
To enable metrics collection, set the "enabled" key in the "metrics" section to "true" in the YAML
configuration file:
```yaml
service_title: "REST Model Service"
description: "Service description"
version: "1.1.0"
metrics:
  enabled: true
models:
  - class_path: tests.mocks.IrisModel
    create_endpoint: true
```
The default metrics are:
- http_requests_total: a counter of the number of requests made to the service.
- http_request_size_bytes: a summary of the sizes of requests made to the service.
- http_response_size_bytes: a summary of the sizes of responses returned by the service.
- http_request_duration_seconds: a histogram of request durations, with only a few buckets to keep cardinality low.
- http_request_duration_highr_seconds: a histogram of request durations with a large number of buckets (>20).
The configuration allows more complex options to be passed to the Prometheus client library. To do this, add
keys to the metrics configuration:
```yaml
service_title: "REST Model Service"
description: "Service description"
version: "1.1.0"
metrics:
  enabled: true
  should_group_status_codes: true
  should_ignore_untemplated: false
  should_group_untemplated: true
  should_round_latency_decimals: false
  should_respect_env_var: false
  should_instrument_requests_inprogress: false
  excluded_handlers: []
  body_handlers: []
  round_latency_decimals: 4
  env_var_name: "ENABLE_METRICS"
  inprogress_name: "http_requests_inprogress"
  inprogress_labels: false
models:
  - class_path: tests.mocks.IrisModel
    create_endpoint: true
```
The options are passed directly into the Prometheus instrumentor
[library](https://pypi.org/project/prometheus-fastapi-instrumentator/); they are explained in that library's documentation.
### Creating an OpenAPI Contract
An OpenAPI contract can be generated dynamically for the models hosted within the REST model service. To create
the contract and save it, execute this command:
```bash
generate_openapi
```
The command looks for a "rest_config.yaml" in the current working directory and creates the application from it.
The command then saves the resulting OpenAPI document to a file named "openapi.yaml" in the current working directory.
You can provide a path to the configuration file like this:
```bash
generate_openapi --configuration_file=examples/rest_config.yaml
```
You can also provide the desired path for the OpenAPI document that will be created like this:
```bash
generate_openapi --output_file=example.yaml
```
Both options together:
```bash
generate_openapi --configuration_file=examples/rest_config.yaml --output_file=example.yaml
```
An example rest_config.yaml file is provided in the project's examples. It points at an MLModel class in the tests
package.
### Using Status Check Endpoints
The service supports three status check endpoints:
- "/api/health" indicates whether the service process is running. This endpoint returns a 200 status once the
  service has started.
- "/api/health/ready" indicates whether the service is ready to respond to requests. This endpoint returns a 200
  status only if all of the models and decorators have finished being instantiated without errors. Once the models and
  decorators are loaded, the readiness check will always return an ACCEPTING_TRAFFIC state.
- "/api/health/startup" indicates whether the service has started. This endpoint returns a 200 status only if all of
  the models and decorators have finished being instantiated without errors.
### Running the Service
To start the service in development mode, execute this command:
```bash
uvicorn rest_model_service.main:app --reload
```
The service should be able to find your configuration file, but if you did not place it in the current working
directory you can point the service to the right path like this:
```bash
export REST_CONFIG='examples/rest_config.yaml'
uvicorn rest_model_service.main:app --reload
```
### Common Errors
If you get an error that says something about not being able to find a module or a class, you might need to update your
PYTHONPATH environment variable:
```bash
export PYTHONPATH=./
```
The service relies on being able to find the model and decorator classes in the Python environment in order to load
and instantiate them. If the Python interpreter cannot find the classes, the service won't be able to instantiate the
models, create endpoints for them, or generate an OpenAPI document for them.
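The failure mode is ordinary Python import resolution: the class path is resolved with the interpreter's import machinery, so the module has to be findable on `sys.path` (which `PYTHONPATH` extends). A quick, illustrative way to check a class path before starting the service:

```python
import importlib


def can_resolve(class_path: str) -> bool:
    """Return True if the dotted class path resolves in this environment."""
    module_path, _, class_name = class_path.rpartition(".")
    try:
        module = importlib.import_module(module_path)
    except ModuleNotFoundError:
        return False
    return hasattr(module, class_name)


# A standard-library class resolves; a missing package does not.
print(can_resolve("collections.OrderedDict"))       # True
print(can_resolve("no_such_pkg.models.IrisModel"))  # False
```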
## Development
Download the source code with this command:
```bash
git clone https://github.com/schmidtbri/rest-model-service
cd rest-model-service
```
Then create a virtual environment and activate it:
```bash
make venv
# on macOS and Linux
source venv/bin/activate
```
Install the dependencies:
```bash
make dependencies
```
## Testing
To run the unit test suite execute these commands:
```bash
# first install the test dependencies
make test-dependencies
# run the test suite
make test
# clean up the unit tests
make clean-test
```