open-inference-openapi


Name: open-inference-openapi
Version: 2.0.0a1
Home page: https://github.com/open-inference/python-clients
Summary: Generated OpenAPI client library for the open-inference-protocol
Upload time: 2023-12-04 20:06:04
Maintainer: Zev Isert
Author: Zev Isert
Requires Python: >=3.8,<4.0
License: Apache-2.0
Keywords: mlserver, kserve, triton, seldon, openvino, amdinfer, pytorch-serve, openapi
# Open Inference Protocol OpenAPI Client

<p>
<a href="https://pypi.org/project/open-inference-openapi/">
    <img src="https://badge.fury.io/py/open-inference-openapi.svg" alt="Package version">
</a>
</p>

`open-inference-openapi` is a generated client library based on the OpenAPI protocol definition tracked in the [open-inference/open-inference-protocol/](https://github.com/open-inference/open-inference-protocol/blob/main/specification/protocol/open_inference_rest.yaml) repository.

---

## Installation

This package requires Python 3.8 or greater.

Install with your favorite tool from [pypi.org/project/open-inference-openapi/](https://pypi.org/project/open-inference-openapi/):

```console
$ pip install open-inference-openapi
$ poetry add open-inference-openapi
```

> A gRPC-based Python client ([`open-inference-grpc`](../open-inference-grpc/README.md)) also exists for the Open Inference Protocol, and can be installed alongside this OpenAPI client, as both are distributed as [namespace packages](https://packaging.python.org/en/latest/guides/packaging-namespace-packages/#packaging-namespace-packages).
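
As a concrete example of that coexistence, both distributions can be installed into one environment with a single command (this client ships the `open_inference.openapi` subpackage; the gRPC client is assumed to ship its own subpackage under the same `open_inference` namespace):

```console
$ pip install open-inference-openapi open-inference-grpc
```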

## Example

```python
from open_inference.openapi.client import OpenInferenceClient, InferenceRequest

client = OpenInferenceClient(base_url='http://localhost:5002')

# Check that the server is ready, and that it has the iris model loaded
client.check_server_readiness()
client.read_model_metadata('mlflow-model')

# Make an inference request with two examples
pred = client.model_infer(
    "mlflow-model",
    request=InferenceRequest(
        inputs=[
            {
                "name": "input",
                "shape": [2, 4],
                "datatype": "FP64",
                "data": [
                    [5.0, 3.3, 1.4, 0.2],
                    [7.0, 3.2, 4.7, 1.4],
                ],
            }
        ]
    ),
)

print(repr(pred))
# InferenceResponse(
#     model_name="mlflow-model",
#     model_version=None,
#     id="580c30e3-f835-418f-bb17-a3074d42ad21",
#     parameters={"content_type": "np", "headers": None},
#     outputs=[
#         ResponseOutput(
#             name="output-1",
#             shape=[2, 1],
#             datatype="INT64",
#             parameters={"content_type": "np", "headers": None},
#             data=TensorData(__root__=[0.0, 1.0]),
#         )
#     ],
# )
```
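
To work with an output as a NumPy array, reshape the flat `TensorData` payload using the `shape` field carried on the output. A minimal sketch, assuming the pydantic-v1-style `__root__` accessor visible in the repr above and that `numpy` is installed:

```python
import numpy as np

output = pred.outputs[0]
# TensorData is a custom-root pydantic model: __root__ holds the raw values,
# while the logical shape travels separately on the output.
array = np.asarray(output.data.__root__).reshape(output.shape)
print(array.shape)  # (2, 1)
```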

<details><summary>Async versions of the same APIs are also available. Import <code>AsyncOpenInferenceClient</code> instead, then <code>await</code> any requests made.</summary>

```py
from open_inference.openapi.client import AsyncOpenInferenceClient

client = AsyncOpenInferenceClient(base_url="http://localhost:5002")
await client.check_server_readiness()
```

</details>
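
Outside of a notebook or async REPL there is no top-level `await`, so wrap the calls in an `asyncio` entry point. A minimal sketch using only the methods shown above:

```python
import asyncio

from open_inference.openapi.client import AsyncOpenInferenceClient


async def main() -> None:
    client = AsyncOpenInferenceClient(base_url="http://localhost:5002")
    # Same API surface as the synchronous client, awaited
    await client.check_server_readiness()


asyncio.run(main())
```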

## Dependencies

The `open-inference-openapi` Python package relies on:

- [`pydantic`](https://github.com/pydantic/pydantic) - Message formatting, structure, and validation.
- [`httpx`](https://github.com/encode/httpx/) - Implementation of the underlying HTTP transport.
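
Because the request and response messages are pydantic models, malformed payloads fail fast at construction time rather than on the wire. A small sketch (assuming pydantic's standard `ValidationError`):

```python
import pydantic

from open_inference.openapi.client import InferenceRequest

try:
    # `inputs` must be a list of tensor descriptions, so this raises
    InferenceRequest(inputs="not-a-list")
except pydantic.ValidationError as err:
    print(err)
```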

## Contribute

This client is largely generated automatically by [`fern`](https://github.com/fern-api/fern), with a small amount of build post-processing in [build.py](./build.py).

> Run `python build.py` to build this package; it will:
>
> 1. Download `fern/openapi/open_inference_rest.yaml` from [open-inference/open-inference-protocol/](https://github.com/open-inference/open-inference-protocol/blob/main/specification/protocol/open_inference_rest.yaml) if it is not already present.
> 1. Run `fern generate` to create the Python client (`fern-api` must be installed first: `npm install --global fern-api`).
> 1. Post-process the output to correctly implement the recursive TensorData model.
> 1. Prepend the Apache 2.0 license preamble.
> 1. Format with [black](https://github.com/psf/black).
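
End to end, the build amounts to two commands (a sketch of the steps named above):

```console
$ npm install --global fern-api   # one-time: install the generator CLI
$ python build.py                 # fetch the spec if needed, generate, post-process, format
```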

If you want to contribute to the open-inference-protocol itself, please create an issue or PR in the [open-inference/open-inference-protocol](https://github.com/open-inference/open-inference-protocol) repository.

## License

By contributing to the Open Inference Protocol Python client repository, you agree that your contributions will be licensed under its Apache 2.0 License.

            
