# Open Inference Protocol OpenAPI Client
<p>
<a href="https://pypi.org/project/open-inference-openapi/">
<img src="https://badge.fury.io/py/open-inference-openapi.svg" alt="Package version">
</a>
</p>
`open-inference-openapi` is a generated client library based on the OpenAPI protocol definition tracked in the [open-inference/open-inference-protocol/](https://github.com/open-inference/open-inference-protocol/blob/main/specification/protocol/open_inference_rest.yaml) repository.
---
## Installation
This package requires Python 3.8 or greater.
Install from [pypi.org/project/open-inference-openapi/](https://pypi.org/project/open-inference-openapi/) with your favorite package manager:
```console
$ pip install open-inference-openapi
$ poetry add open-inference-openapi
```
> A gRPC-based Python client ([`open-inference-grpc`](https://pypi.org/project/open-inference-grpc)) also exists for the Open Inference Protocol, and can be installed alongside this OpenAPI client, as both are distributed as [namespace packages](https://packaging.python.org/en/latest/guides/packaging-namespace-packages/#packaging-namespace-packages).
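As a minimal sketch of that coexistence (the gRPC import path below is an assumption based on the namespace-package layout, not something documented here), both clients can live in one environment without clashing:

```python
# Both distributions install under the shared `open_inference` namespace package,
# so importing one does not shadow the other.
from open_inference.openapi.client import OpenInferenceClient

# If open-inference-grpc is also installed, its client is imported from its own
# submodule (assumed path and name shown below) without any conflict:
# from open_inference.grpc.client import OpenInferenceClient as GrpcOpenInferenceClient
```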
## Example
```python
from open_inference.openapi.client import OpenInferenceClient, InferenceRequest
client = OpenInferenceClient(base_url='http://localhost:5002')
# Check that the server is ready, and that it has the iris model ('mlflow-model') loaded
client.check_server_readiness()
client.read_model_metadata('mlflow-model')
# Make an inference request with two examples
pred = client.model_infer(
"mlflow-model",
request=InferenceRequest(
inputs=[
{
"name": "input",
"shape": [2, 4],
"datatype": "FP64",
"data": [
[5.0, 3.3, 1.4, 0.2],
[7.0, 3.2, 4.7, 1.4],
],
}
]
),
)
print(repr(pred))
# InferenceResponse(
# model_name="mlflow-model",
# model_version=None,
# id="580c30e3-f835-418f-bb17-a3074d42ad21",
# parameters={"content_type": "np", "headers": None},
# outputs=[
# ResponseOutput(
# name="output-1",
# shape=[2, 1],
# datatype="INT64",
# parameters={"content_type": "np", "headers": None},
# data=TensorData(__root__=[0.0, 1.0]),
# )
# ],
# )
```
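To pull plain Python values back out of the response, something like the following should work; this is a sketch that assumes the pydantic-v1-style `__root__` field visible in the repr above:

```python
# Grab the first (and here, only) output tensor from the response.
output = pred.outputs[0]

# TensorData wraps the raw values; per the repr above they sit in `__root__`.
values = output.data.__root__

print(output.name, output.datatype, output.shape)  # e.g. output-1 INT64 [2, 1]
print(values)                                       # e.g. [0.0, 1.0]
```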
<details><summary>Async versions of the same APIs are also available. Import <code>AsyncOpenInferenceClient</code> instead, then <code>await</code> the requests you make.</summary>
```py
from open_inference.openapi.client import AsyncOpenInferenceClient
client = AsyncOpenInferenceClient(base_url="http://localhost:5002")
await client.check_server_readiness()
```
</details>
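Outside an already-running event loop (for example in a plain script rather than a notebook), the async client can be driven with `asyncio.run`. A minimal sketch using only the calls shown above:

```python
import asyncio

from open_inference.openapi.client import AsyncOpenInferenceClient


async def main() -> None:
    client = AsyncOpenInferenceClient(base_url="http://localhost:5002")

    # Same API surface as the synchronous client, just awaited.
    await client.check_server_readiness()
    await client.read_model_metadata("mlflow-model")


asyncio.run(main())
```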
## Dependencies
The `open-inference-openapi` python package relies on:
- [`pydantic`](https://github.com/pydantic/pydantic) - Message formatting, structure, and validation.
- [`httpx`](https://github.com/encode/httpx/) - Implementation of the underlying HTTP transport.
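Because the request and response messages are pydantic models, malformed payloads fail fast on the client side before any HTTP call is made. A minimal sketch, assuming `InferenceRequest` behaves like an ordinary pydantic model with the required fields from the protocol spec:

```python
from pydantic import ValidationError

from open_inference.openapi.client import InferenceRequest

try:
    # 'datatype' is deliberately omitted, so validation should reject this input.
    InferenceRequest(
        inputs=[{"name": "input", "shape": [1, 4], "data": [[5.0, 3.3, 1.4, 0.2]]}]
    )
except ValidationError as err:
    print(err)
```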
## Contribute
This client is largely generated automatically by [`fern`](https://github.com/fern-api/fern), with a small amount of build post-processing in [build.py](https://github.com/open-inference/python-clients/blob/main/packages/open-inference-grpc/build.py).
> Run `python build.py` to build this package (see the command sketch after this list); it will:
>
> 1. If `fern/openapi/open_inference_rest.yaml` is not found, download it from [open-inference/open-inference-protocol/](https://github.com/open-inference/open-inference-protocol/blob/main/specification/protocol/open_inference_rest.yaml)
> 1. Run `fern generate` to create the Python client (the fern CLI must be installed first: `npm install --global fern-api`)
> 1. Post-process the output to correctly implement the recursive `TensorData` model
> 1. Prepend the Apache 2.0 License preamble
> 1. Format with [black](https://github.com/psf/black)
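>
> As a rough sketch, the full flow is just two commands (the fern CLI install is one-time):
>
> ```console
> $ npm install --global fern-api
> $ python build.py
> ```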
If you want to contribute to the open-inference-protocol itself, please create an issue or PR in the [open-inference/open-inference-protocol](https://github.com/open-inference/open-inference-protocol) repository.
## License
By contributing to the Open Inference Protocol Python client repository, you agree that your contributions will be licensed under its Apache 2.0 License.