| Field | Value |
| --- | --- |
| Name | sm-serverless-benchmarking |
| Version | 0.2.3 |
| Summary | Benchmark sagemaker serverless endpoints for cost and performance |
| Author | Amazon Web Services |
| Requires Python | >=3.7 |
| Keywords | sagemaker, inference, hosting |
| Upload time | 2023-10-09 16:59:55 |
| License | Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. |
# SageMaker Serverless Inference Toolkit
Tools to benchmark SageMaker serverless endpoint configurations and help find the optimal one.
## Installation and Prerequisites
To install the toolkit into your environment, first clone this repo, then run the following inside the repo directory:
```shell
pip install sm-serverless-benchmarking
```
In order to run the benchmark, your user profile or execution role must have the appropriate IAM permissions, including:
#### **SageMaker**
- CreateModel
- CreateEndpointConfig / DeleteEndpointConfig
- CreateEndpoint / DeleteEndpoint
- CreateProcessingJob (if using SageMaker Runner)
#### **SageMaker Runtime**
- InvokeEndpoint
#### **CloudWatch**
- GetMetricStatistics
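Taken together, these permissions could be expressed as a single IAM policy statement. A minimal sketch, built here as a Python dict for illustration (in practice you would scope `Resource` down rather than use `"*"`):

```python
import json

# Sketch of an IAM policy covering the actions listed above.
# Resource is left as "*" for brevity; restrict it in real deployments.
benchmark_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sagemaker:CreateModel",
                "sagemaker:CreateEndpointConfig",
                "sagemaker:DeleteEndpointConfig",
                "sagemaker:CreateEndpoint",
                "sagemaker:DeleteEndpoint",
                "sagemaker:CreateProcessingJob",
                "sagemaker:InvokeEndpoint",
                "cloudwatch:GetMetricStatistics",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(benchmark_policy, indent=2))
```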
## Quick Start
To run a benchmark locally, provide your SageMaker [Model](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateModel.html) name and a list of example invocation arguments. Each of these arguments is passed directly to the SageMaker Runtime [InvokeEndpoint](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker-runtime.html#SageMakerRuntime.Client.invoke_endpoint) API.
```python
from sm_serverless_benchmarking import benchmark
from sm_serverless_benchmarking.utils import convert_invoke_args_to_jsonl

model_name = "<SageMaker Model Name>"

example_invoke_args = [
    {"Body": "1,2,3,4,5", "ContentType": "text/csv"},
    {"Body": "6,7,8,9,10", "ContentType": "text/csv"},
]

example_args_file = convert_invoke_args_to_jsonl(example_invoke_args,
                                                 output_path=".")

r = benchmark.run_serverless_benchmarks(model_name, example_args_file)
```
Alternatively, you can run the benchmarks as a SageMaker Processing job:
```python
from sm_serverless_benchmarking.sagemaker_runner import run_as_sagemaker_job

run_as_sagemaker_job(
    role="<execution_role_arn>",
    model_name="<model_name>",
    invoke_args_examples_file="<invoke_args_examples_file>",
)
```
The utility function `sm_serverless_benchmarking.utils.convert_invoke_args_to_jsonl` converts a list of invocation argument examples into a JSON Lines file. If you are working with data that cannot be serialized to JSON, such as binary data (images, audio, and video), use the `sm_serverless_benchmarking.utils.convert_invoke_args_to_pkl` function, which serializes the examples to a pickle file instead.
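For reference, a JSON Lines file of invocation examples is simply one JSON object per line, so you could also build one by hand. A minimal sketch of that format, assuming each example becomes one line:

```python
import json

example_invoke_args = [
    {"Body": "1,2,3,4,5", "ContentType": "text/csv"},
    {"Body": "6,7,8,9,10", "ContentType": "text/csv"},
]

# Write one JSON object per line (the JSON Lines convention).
with open("invoke_args_examples.jsonl", "w") as f:
    for args in example_invoke_args:
        f.write(json.dumps(args) + "\n")

# Reading it back line by line recovers the original argument dicts.
with open("invoke_args_examples.jsonl") as f:
    recovered = [json.loads(line) for line in f]
```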
Refer to the [sample_notebooks](sample_notebooks) directory for complete examples.
## Types of Benchmarks
By default, two types of benchmarks are executed:
- **Stability Benchmark** For each memory configuration, with a [MaxConcurrency](https://docs.aws.amazon.com/sagemaker/latest/dg/serverless-endpoints-create.html#serverless-endpoints-create-config) of 1, invokes the endpoint a specified number of times sequentially. The goal of this benchmark is to determine the most cost-effective and stable memory configuration.
- **Concurrency Benchmark** Invokes an endpoint with a simulated number of concurrent clients under different MaxConcurrency configurations.
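Conceptually, the concurrency benchmark fans invocations out across a pool of simulated clients. A minimal sketch of that idea with a thread pool; the `invoke` function here is a hypothetical stand-in for a real `InvokeEndpoint` call, not the toolkit's implementation:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def invoke(payload):
    """Stand-in for a real SageMaker Runtime invoke_endpoint call."""
    time.sleep(0.01)  # simulated endpoint latency
    return {"payload": payload, "status": 200}

def run_concurrent_clients(payloads, num_clients):
    # Each worker thread acts as one simulated concurrent client.
    with ThreadPoolExecutor(max_workers=num_clients) as pool:
        return list(pool.map(invoke, payloads))

results = run_concurrent_clients(["1,2,3,4,5"] * 20, num_clients=4)
errors = sum(r["status"] != 200 for r in results)
```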
## Configuring the Benchmarks
For either of the two approaches described above, you can specify a number of parameters to configure the benchmarking job:
- `cold_start_delay` (int, optional): Number of seconds to sleep before starting the benchmark. Helps to induce a cold start on the initial invocation. Defaults to `0`.
- `memory_sizes` (List[int], optional): List of memory configurations to benchmark. Defaults to `[1024, 2048, 3072, 4096, 5120, 6144]`.
- `stability_benchmark_invocations` (int, optional): Total number of invocations for the stability benchmark. Defaults to `1000`.
- `stability_benchmark_error_thresh` (int, optional): Number of endpoint invocation errors allowed before the benchmark is terminated for a configuration. Defaults to `3`.
- `include_concurrency_benchmark` (bool, optional): Set to `True` to run the concurrency benchmark with the optimal configuration from the stability benchmark. Defaults to `True`.
- `concurrency_benchmark_max_conc` (List[int], optional): A list of MaxConcurrency settings to benchmark. Defaults to `[2, 4, 8]`.
- `concurrency_benchmark_invocations` (int, optional): Total number of invocations for the concurrency benchmark. Defaults to `1000`.
- `concurrency_num_clients_multiplier` (List[float], optional): List of multipliers specifying the number of simulated clients, computed as MaxConcurrency * multiplier. Defaults to `[1, 1.5, 1.75, 2]`.
- `result_save_path` (str, optional): The location to which the output artifacts will be saved. Defaults to `"."`.
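Since these are keyword arguments, a configured run might look like the following sketch. The parameter names come from the list above; the override values are purely illustrative:

```python
# Illustrative overrides of the defaults listed above.
benchmark_config = dict(
    cold_start_delay=600,                   # wait 10 minutes to induce a cold start
    memory_sizes=[2048, 4096, 6144],        # benchmark only a subset of memory configs
    stability_benchmark_invocations=500,
    concurrency_benchmark_max_conc=[4, 8],
    result_save_path="./results",
)

# The same config dict can then be unpacked into either entry point, e.g.:
# benchmark.run_serverless_benchmarks(model_name, example_args_file, **benchmark_config)
# run_as_sagemaker_job(role="<execution_role_arn>", model_name="<model_name>",
#                      invoke_args_examples_file="<file>", **benchmark_config)
```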