tritony

- Name: tritony
- Version: 0.0.16
- Home page: https://github.com/rtzr/tritony
- Summary: Tiny configuration for Triton Inference Server
- Author: Arthur Kim, RTZR team
- License: BSD
- Upload time: 2023-12-15 04:15:36
- Keywords: grpc, http, triton, tensorrt, inference, server, service, client, nvidia, rtzr
# tritony - Tiny configuration for Triton Inference Server

![Pypi](https://badge.fury.io/py/tritony.svg)
![CI](https://github.com/rtzr/tritony/actions/workflows/pre-commit_pytest.yml/badge.svg)
[![Coverage Status](https://coveralls.io/repos/github/rtzr/tritony/badge.svg?branch=main)](https://coveralls.io/github/rtzr/tritony?branch=main)

## What is this?

If you look at [the official example](https://github.com/triton-inference-server/client/tree/main/src/python/examples), it is hard to tell where to start.

Use tritony! You get the same result with far less code, as in the example below.

```python
import argparse
import os
from glob import glob
import numpy as np
from PIL import Image

from tritony import InferenceClient


def preprocess(img, dtype=np.float32, h=224, w=224, scaling="INCEPTION"):
    sample_img = img.convert("RGB")

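    # Resize to the model's expected input size (224x224 by default)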
    resized_img = sample_img.resize((w, h), Image.Resampling.BILINEAR)
    resized = np.array(resized_img)
    if resized.ndim == 2:
        resized = resized[:, :, np.newaxis]

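    # INCEPTION-style scaling (the only mode implemented here): [0, 255] -> [-1, 1]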
    scaled = (resized / 127.5) - 1
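    # HWC -> CHW: densenet_onnx expects channel-first input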
    ordered = np.transpose(scaled, (2, 0, 1))
    
    return ordered.astype(dtype)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--image_folder", type=str, help="Input folder.")
    FLAGS = parser.parse_args()

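    # A gRPC endpoint plus the model name is all the configuration needed;
    # input_dims=3 declares 3-dimensional (C, H, W) inputs per item.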
    client = InferenceClient.create_with("densenet_onnx", "0.0.0.0:8001", input_dims=3, protocol="grpc")
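    # class_count=1 asks Triton's classification extension for the top-1 result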
    client.output_kwargs = {"class_count": 1}

    image_data = []
    for filename in glob(os.path.join(FLAGS.image_folder, "*")):
        image_data.append(preprocess(Image.open(filename)))

    result = client(np.asarray(image_data))

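    # Each output entry is a "score:class_index:class_name" string from Triton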
    for output in result:
        max_value, arg_max, class_name = output[0].decode("utf-8").split(":")
        print(f"{max_value} ({arg_max}) = {class_name}")
```

## Release Notes

- 23.08.30 Support `optional` model inputs and `parameters` in config.pbtxt
- 23.06.16 Support tritonclient>=2.34.0
  - Loosened the version requirements related to tritonclient


## Key Features

- [x] Simple configuration: only `$host:$port` and `$model_name` are required.
- [x] Generates asynchronous requests with `asyncio.Queue`
- [x] Simple model switching (see the sketch after this list)
- [ ] Support for the async tritonclient
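
To make the first and third items concrete, here is a minimal sketch. The `create_with` call mirrors the example above; the commented per-call `model_name` override is an assumption about how model switching is exposed, not a documented signature.

```python
import numpy as np

from tritony import InferenceClient

# Only $host:$port and $model_name are needed for a working client.
client = InferenceClient.create_with("densenet_onnx", "0.0.0.0:8001", input_dims=3, protocol="grpc")

# One batch of CHW images (shape and dtype follow the example above).
batch = np.zeros((4, 3, 224, 224), dtype=np.float32)
result = client(batch)

# Model switching (assumption): a per-call override such as
#   client(batch, model_name="another_model")
# would reuse the same connection for a different model.
```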

## Requirements

    $ pip install tritonclient[all]
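
Note: some shells (e.g. zsh) expand square brackets, so the extra may need quoting:

    $ pip install "tritonclient[all]"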

## Install

    $ pip install tritony

## Test

### With Triton

```bash
./bin/run_triton_tritony_sample.sh
```
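
If you prefer to start Triton by hand instead of using the script above, the standard NVIDIA container works; the image tag and model-repository path below are assumptions, adjust them to your setup.

```bash
docker run --rm -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v "$(pwd)/model_repository:/models" \
  nvcr.io/nvidia/tritonserver:23.12-py3 \
  tritonserver --model-repository=/models
```

Then run the test suite: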

```bash
pytest -s --cov-report term-missing --cov=tritony tests/
```

### Example with image_client.py

- Follow the steps in [the official Triton server documentation](https://github.com/triton-inference-server/server#serve-a-model-in-3-easy-steps)

```bash
# Get the sample images by cloning https://github.com/triton-inference-server/server.git
python ./example/image_client.py --image_folder "./server/qa/images"
```

            
