| Field | Value |
| - | - |
| Name | genai-perf |
| Version | 0.0.8 |
| Summary | GenAI Perf Analyzer CLI - CLI tool to simplify profiling LLMs and Generative AI models with Perf Analyzer |
| Upload time | 2024-11-26 22:16:21 |
| Home page | None |
| Maintainer | None |
| Docs URL | None |
| Author | None |
| Requires Python | <4,>=3.10 |
| License | None |
| Keywords | None |
| Requirements | No requirements were recorded. |
<!--
Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of NVIDIA CORPORATION nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-->
# GenAI-Perf
GenAI-Perf is a command line tool for measuring the throughput and latency of
generative AI models as served through an inference server.
For large language models (LLMs), GenAI-Perf provides metrics such as
[output token throughput](#output_token_throughput_metric),
[time to first token](#time_to_first_token_metric),
[inter token latency](#inter_token_latency_metric), and
[request throughput](#request_throughput_metric).
For a full list of metrics please see the [Metrics section](#metrics).
Users specify a model name, an inference server URL, the type of inputs to use
(synthetic or from a dataset defined via a file), and the type of load to generate
(number of concurrent requests, request rate).
GenAI-Perf generates the specified load, measures the performance of the
inference server, and reports the metrics in a simple table as console output.
The tool also logs all results in CSV and JSON files that can be used to derive
additional metrics and visualizations. The inference server must already be
running when GenAI-Perf is run.
You can use GenAI-Perf to run performance benchmarks on
- [Large Language Models](docs/tutorial.md)
- [Vision Language Models](docs/multi_modal.md)
- [Embedding Models](docs/embeddings.md)
- [Ranking Models](docs/rankings.md)
- [Multiple LoRA Adapters](docs/lora.md)
> [!Note]
> GenAI-Perf is currently in early release and under rapid development. While we
> will try to remain consistent, command line options and functionality are
> subject to change as the tool matures.
</br>
<!--
======================
INSTALLATION
======================
-->
## Installation
The easiest way to install GenAI-Perf is through
[Triton Server SDK container](https://ngc.nvidia.com/catalog/containers/nvidia:tritonserver).
Install the latest release using the following command:
```bash
export RELEASE="24.10"
docker run -it --net=host --gpus=all nvcr.io/nvidia/tritonserver:${RELEASE}-py3-sdk
# Check out the genai-perf command inside the container:
genai-perf --help
```
<details>
<summary>Alternatively, to install from source:</summary>
Since GenAI-Perf depends on Perf Analyzer,
you'll need to install the Perf Analyzer binary:
### Install Perf Analyzer (Ubuntu, Python 3.10+)
**NOTE**: you must already have CUDA 12 installed
(see the [CUDA installation guide](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html)).
```bash
pip install tritonclient
```
You can also build Perf Analyzer [from source](../docs/install.md#build-from-source).
### Install GenAI-Perf from source
```bash
pip install git+https://github.com/triton-inference-server/perf_analyzer.git#subdirectory=genai-perf
```
</details>
</br>
<!--
======================
QUICK START
======================
-->
## Quick Start
In this quick start, we will use GenAI-Perf to run performance benchmarking on
the GPT-2 model running on Triton Inference Server with a TensorRT-LLM engine.
### Serve GPT-2 TensorRT-LLM model using Triton CLI
You can follow the [quickstart guide](https://github.com/triton-inference-server/triton_cli?tab=readme-ov-file#serving-a-trt-llm-model)
in the Triton CLI GitHub repository to serve GPT-2 on the Triton server with the TensorRT-LLM backend.
The full instructions are copied below for convenience:
```bash
# This container comes with all of the dependencies for building TRT-LLM engines
# and serving the engine with Triton Inference Server.
docker run -ti \
--gpus all \
--network=host \
--shm-size=1g --ulimit memlock=-1 \
-v /tmp:/tmp \
-v ${HOME}/.cache/huggingface:/root/.cache/huggingface \
nvcr.io/nvidia/tritonserver:24.10-trtllm-python-py3
# Install the Triton CLI
pip install git+https://github.com/triton-inference-server/triton_cli.git@0.0.11
# Build TRT LLM engine and generate a Triton model repository pointing at it
triton remove -m all
triton import -m gpt2 --backend tensorrtllm
# Start Triton pointing at the default model repository
triton start
```
### Running GenAI-Perf
Now we can run GenAI-Perf inside the Triton Inference Server SDK container:
```bash
genai-perf profile -m gpt2 --service-kind triton --backend tensorrtllm --streaming
```
Example output:
```
NVIDIA GenAI-Perf | LLM Metrics
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┓
┃ Statistic ┃ avg ┃ min ┃ max ┃ p99 ┃ p90 ┃ p75 ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━╇━━━━━━━━╇━━━━━━━━╇━━━━━━━━╇━━━━━━━━┩
│ Time to first token (ms) │ 16.26 │ 12.39 │ 17.25 │ 17.09 │ 16.68 │ 16.56 │
│ Inter token latency (ms) │ 1.85 │ 1.55 │ 2.04 │ 2.02 │ 1.97 │ 1.92 │
│ Request latency (ms) │ 499.20 │ 451.01 │ 554.61 │ 548.69 │ 526.13 │ 514.19 │
│ Output sequence length │ 261.90 │ 256.00 │ 298.00 │ 296.60 │ 270.00 │ 265.00 │
│ Input sequence length │ 550.06 │ 550.00 │ 553.00 │ 551.60 │ 550.00 │ 550.00 │
│ Output token throughput (per sec) │ 520.87 │ N/A │ N/A │ N/A │ N/A │ N/A │
│ Request throughput (per sec) │ 1.99 │ N/A │ N/A │ N/A │ N/A │ N/A │
└───────────────────────────────────┴────────┴────────┴────────┴────────┴────────┴────────┘
```
See [Tutorial](docs/tutorial.md) for additional examples.
</br>
<!--
======================
VISUALIZATION
======================
-->
## Visualization
GenAI-Perf can also generate various plots that visualize the performance of the
current profile run. This is disabled by default but users can easily enable it
by passing the `--generate-plots` option when running the benchmark:
```bash
genai-perf profile \
-m gpt2 \
--service-kind triton \
--backend tensorrtllm \
--streaming \
--concurrency 1 \
--generate-plots
```
This will generate a [set of default plots](docs/compare.md#example-plots) such as:
- Time to first token (TTFT) analysis
- Request latency analysis
- TTFT vs Input sequence lengths
- Inter token latencies vs Token positions
- Input sequence lengths vs Output sequence lengths
### Using `compare` Subcommand to Visualize Multiple Runs
The `compare` subcommand lets you compare multiple profile runs and visualize
the differences through plots.
#### Usage
Given two profile export JSON files, for example `profile1.json` and
`profile2.json`, run the `compare` subcommand with the `--files` option:
```bash
genai-perf compare --files profile1.json profile2.json
```
Executing the above command will perform the following actions under the
`compare` directory:
1. Generate a YAML configuration file (e.g. `config.yaml`) containing the
metadata for each plot generated during the comparison process.
2. Automatically generate the [default set of plots](docs/compare.md#example-plots)
(e.g. TTFT vs. Input Sequence Lengths) that compare the two profile runs.
```
compare
├── config.yaml
├── distribution_of_input_sequence_lengths_to_output_sequence_lengths.jpeg
├── request_latency.jpeg
├── time_to_first_token.jpeg
├── time_to_first_token_vs_input_sequence_lengths.jpeg
├── token-to-token_latency_vs_output_token_position.jpeg
└── ...
```
#### Customization
You can iteratively edit the generated YAML configuration file to adjust the
plots to your needs, then rerun the command with the `--config` option followed
by the path to the modified configuration file:
```bash
genai-perf compare --config compare/config.yaml
```
This regenerates the plots based on the updated configuration, letting you
refine how the comparison results are visualized.
See [Compare documentation](docs/compare.md) for more details.
</br>
<!--
======================
MODEL INPUTS
======================
-->
## Model Inputs
GenAI-Perf supports model input prompts from either synthetically generated
inputs, or from a dataset defined via a file.
When the dataset is synthetic, you can specify the following options:
* `--num-prompts <int>`: The number of unique prompts to generate as stimulus, >= 1.
* `--synthetic-input-tokens-mean <int>`: The mean number of tokens in the
generated prompts when using synthetic data, >= 1.
* `--synthetic-input-tokens-stddev <int>`: The standard deviation of the number
of tokens in the generated prompts when using synthetic data, >= 0.
* `--random-seed <int>`: The seed used to generate random values, >= 0.
When the dataset is coming from a file, you can specify the following
options:
* `--input-file <path>`: The input file or directory containing the prompts or
filepaths to images to use for benchmarking as JSON objects.
For any dataset, you can specify the following options:
* `--output-tokens-mean <int>`: The mean number of tokens in each output. Ensure
the `--tokenizer` value is set correctly, >= 1.
* `--output-tokens-stddev <int>`: The standard deviation of the number of tokens
in each output. This is only used when `--output-tokens-mean` is provided, >= 1.
* `--output-tokens-mean-deterministic`: When using `--output-tokens-mean`, this
flag can be set to improve precision by setting the minimum number of tokens
equal to the requested number of tokens. This is currently supported with the
Triton service-kind. Note that there is still some variability in the
requested number of output tokens, but GenAI-Perf makes a best effort
with your model to get the right number of output tokens.
You can optionally set additional model inputs with the following option:
* `--extra-inputs <input_name>:<value>`: An additional input for use with the
model with a singular value, such as `stream:true` or `max_tokens:5`. This
flag can be repeated to supply multiple extra inputs.
For [Large Language Models](docs/tutorial.md), there is no batch size (i.e.
batch size is always `1`). Each request includes the inputs for one individual
inference. Other modes such as the [embeddings](docs/embeddings.md) and
[rankings](docs/rankings.md) endpoints support client-side batching, where
`--batch-size-text N` means that each request sent will include the inputs for
`N` separate inferences, allowing them to be processed together.
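As an illustrative sketch only (the model name, service kind, and specific values below are assumptions, not recommendations), the options above can be combined in a single profile run:
```bash
# Hypothetical run: 50 unique synthetic prompts of ~200 tokens (stddev 10),
# a fixed random seed, and one extra singular model input.
genai-perf profile \
  -m gpt2 \
  --service-kind triton \
  --backend tensorrtllm \
  --streaming \
  --num-prompts 50 \
  --synthetic-input-tokens-mean 200 \
  --synthetic-input-tokens-stddev 10 \
  --random-seed 0 \
  --extra-inputs max_tokens:5
```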
</br>
<!--
======================
METRICS
======================
-->
## Metrics
GenAI-Perf collects a diverse set of metrics that capture the performance of
the inference server.
| Metric | Description | Aggregations |
| - | - | - |
| <span id="time_to_first_token_metric">Time to First Token</span> | Time between when a request is sent and when its first response is received, one value per request in benchmark | Avg, min, max, p99, p90, p75 |
| <span id="inter_token_latency_metric">Inter Token Latency</span> | Time between intermediate responses for a single request divided by the number of generated tokens of the latter response, one value per response per request in benchmark | Avg, min, max, p99, p90, p75 |
| Request Latency | Time between when a request is sent and when its final response is received, one value per request in benchmark | Avg, min, max, p99, p90, p75 |
| Output Sequence Length | Total number of output tokens of a request, one value per request in benchmark | Avg, min, max, p99, p90, p75 |
| Input Sequence Length | Total number of input tokens of a request, one value per request in benchmark | Avg, min, max, p99, p90, p75 |
| <span id="output_token_throughput_metric">Output Token Throughput</span> | Total number of output tokens from benchmark divided by benchmark duration | None (one value per benchmark) |
| <span id="request_throughput_metric">Request Throughput</span> | Number of final responses from benchmark divided by benchmark duration | None (one value per benchmark) |
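As a rough consistency check on the Quick Start output above (assuming a steady benchmark), output token throughput should be approximately the request throughput multiplied by the average output sequence length: 1.99 requests/sec × 261.90 tokens/request ≈ 521 tokens/sec, which lines up with the reported 520.87.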
</br>
<!--
======================
COMMAND LINE OPTIONS
======================
-->
## Command Line Options
##### `-h`
##### `--help`
Show the help message and exit.
### Endpoint Options:
##### `-m <list>`
##### `--model <list>`
The names of the models to benchmark.
A single model is recommended, unless you are
[profiling multiple LoRA adapters](docs/lora.md). (default: `None`)
##### `--model-selection-strategy {round_robin, random}`
When multiple models are specified, this is how a specific model
is assigned to a prompt. Round robin means that each model receives
a request in order. Random means that assignment is uniformly random.
(default: `round_robin`)
##### `--backend {tensorrtllm,vllm}`
When using the `triton` service-kind, this is the backend of the model. For the
TRT-LLM backend, you currently must set `exclude_input_in_output` to true in the
model config so that the input tokens are not echoed in the output. (default: `tensorrtllm`)
##### `--endpoint <str>`
Set a custom endpoint that differs from the OpenAI defaults. (default: `None`)
##### `--endpoint-type {chat,completions,embeddings,rankings}`
The endpoint-type to send requests to on the server. This is only used with the
`openai` service-kind. (default: `None`)
##### `--service-kind {triton,openai}`
The kind of service perf_analyzer will generate load for. In order to use
`openai`, you must specify an API via `--endpoint-type`. (default: `triton`)
##### `--streaming`
An option to enable the use of the streaming API. (default: `False`)
##### `-u <url>`
##### `--url <url>`
URL of the endpoint to target for benchmarking. (default: `None`)
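For illustration only (the URLs and ports below are assumptions, not defaults), the two service kinds are selected like this:
```bash
# Triton service-kind (the default), pointing at an assumed local Triton endpoint.
genai-perf profile -m gpt2 --service-kind triton --backend tensorrtllm --streaming -u localhost:8001

# OpenAI-compatible service-kind; --endpoint-type is required in this mode.
genai-perf profile -m gpt2 --service-kind openai --endpoint-type chat --streaming -u localhost:8000
```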
### Input Options
##### `-b <int>`
##### `--batch-size <int>`
##### `--batch-size-text <int>`
The text batch size of the requests GenAI-Perf should send.
This is currently only supported with the
[embeddings](docs/embeddings.md) and
[rankings](docs/rankings.md) endpoint types.
(default: `1`)
##### `--batch-size-image <int>`
The image batch size of the requests GenAI-Perf should send.
This is currently only supported with the
image retrieval endpoint type.
(default: `1`)
##### `--extra-inputs <str>`
Provide additional inputs to include with every request. You can repeat this
flag for multiple inputs. Inputs should be in an `input_name:value` format.
Alternatively, a string representing a JSON-formatted dictionary can be provided.
(default: `None`)
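For example (values assumed for illustration), the same extra inputs can be supplied either as repeated `input_name:value` pairs or as a single JSON-formatted dictionary:
```bash
# Repeated input_name:value form
genai-perf profile -m gpt2 --service-kind triton --backend tensorrtllm \
  --extra-inputs max_tokens:5 --extra-inputs stream:true

# Single JSON-formatted dictionary form
genai-perf profile -m gpt2 --service-kind triton --backend tensorrtllm \
  --extra-inputs '{"max_tokens": 5, "stream": true}'
```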
##### `--input-file <path>`
The input file or directory containing the content to use for
profiling. To use synthetic files for a converter that needs
multiple files, prefix the path with 'synthetic:', followed by a
comma-separated list of filenames. The synthetic filenames should not have
extensions. For example, 'synthetic:queries,passages'.
Each line should be a JSON object with a 'text' or 'image' field
in JSONL format. Example: `{"text": "Your prompt here"}`
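As a minimal sketch (the file name and prompt text are made up for illustration), an input file in the JSONL format described above could be created and used like this:
```bash
# Create a JSONL input file: one JSON object per line with a 'text' field.
cat > inputs.jsonl <<'EOF'
{"text": "Summarize the plot of Hamlet in two sentences."}
{"text": "Explain what an inference server does."}
EOF

# Profile using prompts from the file instead of synthetic inputs.
genai-perf profile -m gpt2 --service-kind triton --backend tensorrtllm \
  --streaming --input-file inputs.jsonl
```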
##### `--num-prompts <int>`
The number of unique prompts to generate as stimulus. (default: `100`)
##### `--output-tokens-mean <int>`
##### `--osl`
The mean number of tokens in each output. Ensure the `--tokenizer` value is set
correctly. (default: `-1`)
##### `--output-tokens-mean-deterministic`
When using `--output-tokens-mean`, this flag can be set to improve precision by
setting the minimum number of tokens equal to the requested number of tokens.
This is currently supported with the Triton service-kind. Note that there is
still some variability in the requested number of output tokens, but GenAI-Perf
makes a best effort with your model to get the right number of output
tokens. (default: `False`)
##### `--output-tokens-stddev <int>`
The standard deviation of the number of tokens in each output. This is only used
when `--output-tokens-mean` is provided. (default: `0`)
##### `--random-seed <int>`
The seed used to generate random values. (default: `0`)
##### `--synthetic-input-tokens-mean <int>`
##### `--isl`
The mean number of tokens in the generated prompts when using synthetic
data. (default: `550`)
##### `--synthetic-input-tokens-stddev <int>`
The standard deviation of the number of tokens in the generated prompts when
using synthetic data. (default: `0`)
### Profiling Options
##### `--concurrency <int>`
The concurrency value to benchmark. (default: `None`)
##### `--measurement-interval <int>`
##### `-p <int>`
The time interval used for each measurement, in milliseconds. Perf Analyzer
samples the specified time interval and takes measurements over the requests
completed within that interval. (default: `10000`)
##### `--request-rate <float>`
Sets the request rate for the load generated by Perf Analyzer. (default: `None`)
##### `-s <float>`
##### `--stability-percentage <float>`
The allowed variation in latency measurements when determining if a result is
stable. A measurement is considered stable if the max/min ratio across the
three most recent measurements is within the stability percentage for both
inferences per second and latency. (default: `999`)
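As an illustrative sketch (the values below are assumptions, not recommendations), the profiling options can be combined as follows:
```bash
# Fixed-concurrency run with a 10-second measurement interval and a tighter stability threshold.
genai-perf profile -m gpt2 --service-kind triton --backend tensorrtllm --streaming \
  --concurrency 4 --measurement-interval 10000 --stability-percentage 50

# Alternatively, drive the load with a target request rate instead of a fixed concurrency.
genai-perf profile -m gpt2 --service-kind triton --backend tensorrtllm --streaming \
  --request-rate 2.0
```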
### Output Options
##### `--artifact-dir`
The directory in which to store all output artifacts generated by GenAI-Perf
and Perf Analyzer. (default: `artifacts`)
##### `--generate-plots`
An option to enable the generation of plots. (default: `False`)
##### `--profile-export-file <path>`
The path where the perf_analyzer profile export will be generated. By default,
the profile export will be written to `profile_export.json`. The genai-perf files will be
exported to `<profile_export_file>_genai_perf.json` and
`<profile_export_file>_genai_perf.csv`. For example, if the profile
export file is `profile_export.json`, the genai-perf file will be exported to
`profile_export_genai_perf.csv`. (default: `profile_export.json`)
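For example (the directory and file names below are assumptions), following the naming convention described above:
```bash
# Write artifacts under ./my_runs and name the Perf Analyzer export my_profile.json;
# GenAI-Perf should then write my_profile_genai_perf.json and my_profile_genai_perf.csv as well.
genai-perf profile -m gpt2 --service-kind triton --backend tensorrtllm --streaming \
  --artifact-dir my_runs --profile-export-file my_profile.json
```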
### Other Options
##### `--tokenizer <str>`
The HuggingFace tokenizer to use to interpret token metrics from prompts and
responses. The value can be the name of a tokenizer or the filepath of the
tokenizer. (default: `hf-internal-testing/llama-tokenizer`)
##### `--tokenizer-revision <str>`
The specific tokenizer model version to use. It can be a branch
name, tag name, or commit ID. (default: `main`)
##### `--tokenizer-trust-remote-code`
Allow a custom tokenizer to be downloaded and executed. This carries security
risks and should only be used for repositories you trust. This is only
necessary for custom tokenizers stored in the HuggingFace Hub. (default: `False`)
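For instance (the tokenizer name below is an assumption chosen to match the Quick Start model, not a requirement):
```bash
# Count prompt and response tokens with the GPT-2 tokenizer from the HuggingFace Hub.
genai-perf profile -m gpt2 --service-kind triton --backend tensorrtllm --streaming \
  --tokenizer gpt2 --tokenizer-revision main
```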
##### `-v`
##### `--verbose`
An option to enable verbose mode. (default: `False`)
##### `--version`
An option to print the version and exit.
</br>
<!--
======================
Known Issues
======================
-->
## Known Issues
* GenAI-Perf can be slow to finish if a high request-rate is provided
* Token counts may not be exact
## Raw data
```json
{
    "_id": null,
    "home_page": null,
    "name": "genai-perf",
    "maintainer": null,
    "docs_url": null,
    "requires_python": "<4,>=3.10",
    "maintainer_email": null,
    "keywords": null,
    "author": null,
    "author_email": null,
    "download_url": null,
    "platform": null,
    "bugtrack_url": null,
    "license": null,
    "summary": "GenAI Perf Analyzer CLI - CLI tool to simplify profiling LLMs and Generative AI models with Perf Analyzer",
    "version": "0.0.8",
    "project_urls": {
        "Bug Tracker": "https://github.com/triton-inference-server/perf_analyzer/issues",
        "Homepage": "https://github.com/triton-inference-server/perf_analyzer"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "369c1e3fffce922f27162021e9765f1910856b34ee7dd082352628ef9d17eb00",
                "md5": "e62e7091779d327b9574fd4163440cdf",
                "sha256": "df7f4c20bbc1024cbb43c937cb8eec3ad6a35b092632152befcc0f1a5d948cec"
            },
            "downloads": -1,
            "filename": "genai_perf-0.0.8-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "e62e7091779d327b9574fd4163440cdf",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": "<4,>=3.10",
            "size": 637519,
            "upload_time": "2024-11-26T22:16:21",
            "upload_time_iso_8601": "2024-11-26T22:16:21.058692Z",
            "url": "https://files.pythonhosted.org/packages/36/9c/1e3fffce922f27162021e9765f1910856b34ee7dd082352628ef9d17eb00/genai_perf-0.0.8-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-11-26 22:16:21",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "triton-inference-server",
    "github_project": "perf_analyzer",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "genai-perf"
}
```