tritony

Name: tritony
Version: 0.0.19
Home page: https://github.com/rtzr/tritony
Summary: Tiny configuration for Triton Inference Server
Upload time: 2024-12-04 06:52:20
Maintainer: None
Docs URL: None
Author: Arthur Kim, RTZR team
Requires Python: None
License: BSD
Keywords: grpc, http, triton, tensorrt, inference, server, service, client, nvidia, rtzr
# tritony - Tiny configuration for Triton Inference Server

![Pypi](https://badge.fury.io/py/tritony.svg)
![CI](https://github.com/rtzr/tritony/actions/workflows/pre-commit_pytest.yml/badge.svg)
[![Coverage Status](https://coveralls.io/repos/github/rtzr/tritony/badge.svg?branch=main)](https://coveralls.io/github/rtzr/tritony?branch=main)

## What is this?

If you look at [the official example](https://github.com/triton-inference-server/client/tree/main/src/python/examples), it is hard to know where to start.

Use tritony! You can do the same thing in just a few lines of code, as in the example below.

```python
import argparse
import os
from glob import glob

import numpy as np
from PIL import Image

from tritony import InferenceClient


def preprocess(img, dtype=np.float32, h=224, w=224, scaling="INCEPTION"):
    sample_img = img.convert("RGB")

    resized_img = sample_img.resize((w, h), Image.Resampling.BILINEAR)
    resized = np.array(resized_img)
    if resized.ndim == 2:
        resized = resized[:, :, np.newaxis]

    # INCEPTION-style scaling: map pixel values from [0, 255] to [-1, 1].
    scaled = (resized / 127.5) - 1
    # HWC -> CHW, the channel-first layout densenet_onnx expects.
    ordered = np.transpose(scaled, (2, 0, 1))

    return ordered.astype(dtype)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--image_folder", type=str, help="Input folder.")
    FLAGS = parser.parse_args()

    client = InferenceClient.create_with("densenet_onnx", "0.0.0.0:8001", input_dims=3, protocol="grpc")
    # Request Triton's classification extension: top-1 class per image.
    client.output_kwargs = {"class_count": 1}

    image_data = []
    for filename in glob(os.path.join(FLAGS.image_folder, "*")):
        image_data.append(preprocess(Image.open(filename)))

    result = client(np.asarray(image_data))

    for output in result:
        # Classification results come back as "value:index:label" strings.
        max_value, arg_max, class_name = output[0].decode("utf-8").split(":")
        print(f"{max_value} ({arg_max}) = {class_name}")
```

## Release Notes

- 24.07.11 Raised the minimum tritonclient version to 2.34.0
- 23.08.30 Support `optional` model inputs and `parameters` in config.pbtxt (see the sketch after this list)
- 23.06.16 Support `tritonclient>=2.34.0`
- Loosened the version requirements on tritonclient
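
For the `optional`/`parameters` entry above: both are standard fields of Triton's model configuration, not tritony-specific syntax. A minimal `config.pbtxt` sketch is shown below; the second tensor name and the parameter key/value are illustrative assumptions, not part of this package.

```protobuf
name: "densenet_onnx"
input [
  {
    name: "data_0"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  },
  {
    name: "extra_input"  # hypothetical optional input
    data_type: TYPE_FP32
    dims: [ 1 ]
    optional: true       # requests may omit this tensor
  }
]
parameters: {
  key: "some_key"        # hypothetical parameter read by the backend
  value: { string_value: "some_value" }
}
```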


## Key Features

- [x] Simple configuration. Only `$host:$port` and `$model_name` are required (see the minimal sketch after this list).
- [x] Generates asynchronous requests with `asyncio.Queue`
- [x] Simple model switching
- [ ] Support async tritonclient
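
A minimal sketch of the first point, reusing the `create_with` call from the example above (the model name and the dummy input shape are placeholders):

```python
import numpy as np

from tritony import InferenceClient

# Only $model_name and $host:$port are needed to set up a client;
# input_dims and protocol follow the usage shown in the example above.
client = InferenceClient.create_with("densenet_onnx", "0.0.0.0:8001", input_dims=3, protocol="grpc")

# Calling the client with a numpy batch issues the inference request.
result = client(np.zeros((1, 3, 224, 224), dtype=np.float32))
```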

## Requirements

    $ pip install "tritonclient[all]"

## Install

    $ pip install tritony

## Test

### With Triton

Bring up the sample Triton server:

```bash
./bin/run_triton_tritony_sample.sh
```

Then run the test suite with coverage:

```bash
pytest -s --cov-report term-missing --cov=tritony tests/
```

### Example with image_client.py

- Follow the steps in [the official Triton server documentation](https://github.com/triton-inference-server/server#serve-a-model-in-3-easy-steps)

```bash
# Download images from https://github.com/triton-inference-server/server.git
python ./example/image_client.py --image_folder "./server/qa/images"
```

            
