# openllm-client

| Field | Value |
| --- | --- |
| Name | openllm-client |
| Version | 0.5.7 |
| Summary | OpenLLM Client: Interacting with OpenLLM HTTP/gRPC server, or any BentoML server. |
| Upload time | 2024-06-14 02:49:50 |
| Home page | None |
| Author | None |
| Maintainer | None |
| Docs URL | None |
| License | None |
| Requires Python | >=3.8 |
| Keywords | ai, alpaca, bentoml, falcon, fine tuning, generative ai, llmops, large language model, llama 2, mlops, model deployment, model serving, pytorch, serverless, stablelm, transformers, vicuna |
| Requirements | No requirements were recorded. |
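This release can be installed from PyPI (Python >=3.8) with `pip install openllm-client==0.5.7`.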
            <p align="center">
  <a href="https://github.com/bentoml/openllm">
    <img src="https://raw.githubusercontent.com/bentoml/openllm/main/.github/assets/main-banner.png" alt="Banner for OpenLLM" />
  </a>
</p>


<div align="center">
    <h1 align="center">👾 OpenLLM Client</h1>
    <a href="https://pypi.org/project/openllm-client">
        <img src="https://img.shields.io/pypi/v/openllm-client.svg?logo=pypi&label=PyPI&logoColor=gold" alt="pypi_status" />
    </a><a href="https://test.pypi.org/project/openllm-client/">
        <img src="https://img.shields.io/badge/Nightly-PyPI?logo=pypi&label=PyPI&color=gray&link=https%3A%2F%2Ftest.pypi.org%2Fproject%2Fopenllm%2F" alt="test_pypi_status" />
    </a><a href="https://twitter.com/bentomlai">
        <img src="https://badgen.net/badge/icon/@bentomlai/1DA1F2?icon=twitter&label=Follow%20Us" alt="Twitter" />
    </a><a href="https://l.bentoml.com/join-openllm-discord">
        <img src="https://badgen.net/badge/icon/OpenLLM/7289da?icon=discord&label=Join%20Us" alt="Discord" />
    </a><a href="https://github.com/bentoml/OpenLLM/actions/workflows/ci.yml">
        <img src="https://github.com/bentoml/OpenLLM/actions/workflows/ci.yml/badge.svg?branch=main" alt="ci" />
    </a><a href="https://results.pre-commit.ci/latest/github/bentoml/OpenLLM/main">
        <img src="https://results.pre-commit.ci/badge/github/bentoml/OpenLLM/main.svg" alt="pre-commit.ci status" />
    </a><br>
    <a href="https://pypi.org/project/openllm-client">
        <img src="https://img.shields.io/pypi/pyversions/openllm-client.svg?logo=python&label=Python&logoColor=gold" alt="python_version" />
    </a><a href="htjtps://github.com/pypa/hatch">
        <img src="https://img.shields.io/badge/%F0%9F%A5%9A-Hatch-4051b5.svg" alt="Hatch" />
    </a><a href="https://github.com/bentoml/OpenLLM/blob/main/STYLE.md">
        <img src="https://img.shields.io/badge/code%20style-experimental-000000.svg" alt="code style" />
    </a><a href="https://github.com/astral-sh/ruff">
        <img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v2.json" alt="Ruff" />
    </a><a href="https://github.com/python/mypy">
        <img src="https://img.shields.io/badge/types-mypy-blue.svg" alt="types - mypy" />
    </a><a href="https://github.com/microsoft/pyright">
        <img src="https://img.shields.io/badge/types-pyright-yellow.svg" alt="types - pyright" />
    </a><br>
    <p>OpenLLM Client: Interacting with OpenLLM HTTP/gRPC server, or any BentoML server.<br/></p>
</div>

## 📖 Introduction

With OpenLLM, you can run inference on any open-source large language model,
deploy to the cloud or on-premises, and build powerful AI applications.

To learn more about OpenLLM, see <a href="https://github.com/bentoml/OpenLLM">OpenLLM's README.md</a>.

This package holds the underlying client implementation for OpenLLM. If you are
coming from OpenLLM, the client can be accessed via `openllm.client`.

```python
import openllm

# Create a client for a running OpenLLM server (HTTP transport).
client = openllm.client.HTTPClient()

# Send a prompt to the server and get back the generated text.
client.query('Explain to me the difference between "further" and "farther"')
```
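The same client also ships as the standalone `openllm-client` package. Below is a minimal sketch, assuming a top-level `HTTPClient` export and a server listening at `http://localhost:3000` (both the address and the constructor argument are assumptions; adjust them to your deployment):

```python
from openllm_client import HTTPClient

# Address of a running OpenLLM (or BentoML) server; assumed here,
# replace it with the URL of your own deployment.
client = HTTPClient('http://localhost:3000')

# Same query API as the example above.
print(client.query('Explain to me the difference between "further" and "farther"'))
```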

<p align="center">
  <img src="https://raw.githubusercontent.com/bentoml/openllm/main/.github/assets/output.gif" alt="Gif showing OpenLLM Intro" />
</p>

<p align="center">
  <img src="https://raw.githubusercontent.com/bentoml/openllm/main/.github/assets/agent.gif" alt="Gif showing Agent integration" />
</p>

## 📔 Citation

If you use OpenLLM in your research, we provide a [citation](../CITATION.cff) to use:

```bibtex
@software{Pham_OpenLLM_Operating_LLMs_2023,
  author = {Pham, Aaron and Yang, Chaoyu and Sheng, Sean and Zhao, Shenyang and Lee, Sauyon and Jiang, Bo and Dong, Fog and Guan, Xipeng and Ming, Frost},
  license = {Apache-2.0},
  month = jun,
  title = {{OpenLLM: Operating LLMs in production}},
  url = {https://github.com/bentoml/OpenLLM},
  year = {2023}
}
```

---

[See the full changelog](https://github.com/bentoml/openllm/blob/main/CHANGELOG.md)

            

## Raw data

```json
{
    "_id": null,
    "home_page": null,
    "name": "openllm-client",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": null,
    "keywords": "AI, Alpaca, BentoML, Falcon, Fine tuning, Generative AI, LLMOps, Large Language Model, Llama 2, MLOps, Model Deployment, Model Serving, PyTorch, Serverless, StableLM, Transformers, Vicuna",
    "author": null,
    "author_email": "Aaron Pham <aarnphm@bentoml.com>, BentoML Team <contact@bentoml.com>",
    "download_url": "https://files.pythonhosted.org/packages/fd/7e/5d613df776e3eccfdd76b35cec613a0ce8fec4be9340bf13feb6bc115588/openllm_client-0.5.7.tar.gz",
    "platform": null,
    "description": "<p align=\"center\">\n  <a href=\"https://github.com/bentoml/openllm\">\n    <img src=\"https://raw.githubusercontent.com/bentoml/openllm/main/.github/assets/main-banner.png\" alt=\"Banner for OpenLLM\" />\n  </a>\n</p>\n\n\n<div align=\"center\">\n    <h1 align=\"center\">\ud83d\udc7e OpenLLM Client</h1>\n    <a href=\"https://pypi.org/project/openllm-client\">\n        <img src=\"https://img.shields.io/pypi/v/openllm-client.svg?logo=pypi&label=PyPI&logoColor=gold\" alt=\"pypi_status\" />\n    </a><a href=\"https://test.pypi.org/project/openllm-client/\">\n        <img src=\"https://img.shields.io/badge/Nightly-PyPI?logo=pypi&label=PyPI&color=gray&link=https%3A%2F%2Ftest.pypi.org%2Fproject%2Fopenllm%2F\" alt=\"test_pypi_status\" />\n    </a><a href=\"https://twitter.com/bentomlai\">\n        <img src=\"https://badgen.net/badge/icon/@bentomlai/1DA1F2?icon=twitter&label=Follow%20Us\" alt=\"Twitter\" />\n    </a><a href=\"https://l.bentoml.com/join-openllm-discord\">\n        <img src=\"https://badgen.net/badge/icon/OpenLLM/7289da?icon=discord&label=Join%20Us\" alt=\"Discord\" />\n    </a><a href=\"https://github.com/bentoml/OpenLLM/actions/workflows/ci.yml\">\n        <img src=\"https://github.com/bentoml/OpenLLM/actions/workflows/ci.yml/badge.svg?branch=main\" alt=\"ci\" />\n    </a><a href=\"https://results.pre-commit.ci/latest/github/bentoml/OpenLLM/main\">\n        <img src=\"https://results.pre-commit.ci/badge/github/bentoml/OpenLLM/main.svg\" alt=\"pre-commit.ci status\" />\n    </a><br>\n    <a href=\"https://pypi.org/project/openllm-client\">\n        <img src=\"https://img.shields.io/pypi/pyversions/openllm-client.svg?logo=python&label=Python&logoColor=gold\" alt=\"python_version\" />\n    </a><a href=\"htjtps://github.com/pypa/hatch\">\n        <img src=\"https://img.shields.io/badge/%F0%9F%A5%9A-Hatch-4051b5.svg\" alt=\"Hatch\" />\n    </a><a href=\"https://github.com/bentoml/OpenLLM/blob/main/STYLE.md\">\n        <img src=\"https://img.shields.io/badge/code%20style-experimental-000000.svg\" alt=\"code style\" />\n    </a><a href=\"https://github.com/astral-sh/ruff\">\n        <img src=\"https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v2.json\" alt=\"Ruff\" />\n    </a><a href=\"https://github.com/python/mypy\">\n        <img src=\"https://img.shields.io/badge/types-mypy-blue.svg\" alt=\"types - mypy\" />\n    </a><a href=\"https://github.com/microsoft/pyright\">\n        <img src=\"https://img.shields.io/badge/types-pyright-yellow.svg\" alt=\"types - pyright\" />\n    </a><br>\n    <p>OpenLLM Client: Interacting with OpenLLM HTTP/gRPC server, or any BentoML server.<br/></p>\n    <i></i>\n</div>\n\n## \ud83d\udcd6 Introduction\n\nWith OpenLLM, you can run inference with any open-source large-language models,\ndeploy to the cloud or on-premises, and build powerful AI apps, and more.\n\nTo learn more about OpenLLM, please visit <a href=\"https://github.com/bentoml/OpenLLM\">OpenLLM's README.md</a>\n\nThis package holds the underlying client implementation for OpenLLM. 
If you are\ncoming from OpenLLM, the client can be accessed via `openllm.client`.\n\n```python\nimport openllm\n\nclient = openllm.client.HTTPClient()\n\nclient.query('Explain to me the difference between \"further\" and \"farther\"')\n```\n\n<p align=\"center\">\n  <img src=\"https://raw.githubusercontent.com/bentoml/openllm/main/.github/assets/output.gif\" alt=\"Gif showing OpenLLM Intro\" />\n</p>\n\n<p align=\"center\">\n  <img src=\"https://raw.githubusercontent.com/bentoml/openllm/main/.github/assets/agent.gif\" alt=\"Gif showing Agent integration\" />\n</p>\n\n## \ud83d\udcd4 Citation\n\nIf you use OpenLLM in your research, we provide a [citation](../CITATION.cff) to use:\n\n```bibtex\n@software{Pham_OpenLLM_Operating_LLMs_2023,\nauthor = {Pham, Aaron and Yang, Chaoyu and Sheng, Sean and  Zhao, Shenyang and Lee, Sauyon and Jiang, Bo and Dong, Fog and Guan, Xipeng and Ming, Frost},\nlicense = {Apache-2.0},\nmonth = jun,\ntitle = {{OpenLLM: Operating LLMs in production}},\nurl = {https://github.com/bentoml/OpenLLM},\nyear = {2023}\n}\n```\n\n---\n\n[Click me for full changelog](https://github.com/bentoml/openllm/blob/main/CHANGELOG.md)\n",
    "bugtrack_url": null,
    "license": null,
    "summary": "OpenLLM Client: Interacting with OpenLLM HTTP/gRPC server, or any BentoML server.",
    "version": "0.5.7",
    "project_urls": {
        "Blog": "https://modelserving.com",
        "Chat": "https://l.bentoml.com/join-openllm-discord",
        "Documentation": "https://github.com/bentoml/OpenLLM/blob/main/openllm-client/README.md",
        "GitHub": "https://github.com/bentoml/OpenLLM/blob/main/openllm-client",
        "History": "https://github.com/bentoml/OpenLLM/blob/main/CHANGELOG.md",
        "Homepage": "https://bentoml.com",
        "Tracker": "https://github.com/bentoml/OpenLLM/issues",
        "Twitter": "https://twitter.com/bentomlai"
    },
    "split_keywords": [
        "ai",
        " alpaca",
        " bentoml",
        " falcon",
        " fine tuning",
        " generative ai",
        " llmops",
        " large language model",
        " llama 2",
        " mlops",
        " model deployment",
        " model serving",
        " pytorch",
        " serverless",
        " stablelm",
        " transformers",
        " vicuna"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "264d6b0dc71a0b42159ee09911bb7f175e8e5b13c35e2b933db15e4bf42b0f2e",
                "md5": "5bc2d030a905c6fc320df4607f2cbb2f",
                "sha256": "ea6a7c2644ac0a7d8bb007900b7b54128309ce971127a46f9d3a46c1a431977d"
            },
            "downloads": -1,
            "filename": "openllm_client-0.5.7-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "5bc2d030a905c6fc320df4607f2cbb2f",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 17483,
            "upload_time": "2024-06-14T02:49:45",
            "upload_time_iso_8601": "2024-06-14T02:49:45.051242Z",
            "url": "https://files.pythonhosted.org/packages/26/4d/6b0dc71a0b42159ee09911bb7f175e8e5b13c35e2b933db15e4bf42b0f2e/openllm_client-0.5.7-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "fd7e5d613df776e3eccfdd76b35cec613a0ce8fec4be9340bf13feb6bc115588",
                "md5": "bbb29ba027eaa949120c2eb77bc3e936",
                "sha256": "5fa9ab4b5d65a149dce91328bf734123846b8e48af3cdc16ee183e809db61b00"
            },
            "downloads": -1,
            "filename": "openllm_client-0.5.7.tar.gz",
            "has_sig": false,
            "md5_digest": "bbb29ba027eaa949120c2eb77bc3e936",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 19527,
            "upload_time": "2024-06-14T02:49:50",
            "upload_time_iso_8601": "2024-06-14T02:49:50.465030Z",
            "url": "https://files.pythonhosted.org/packages/fd/7e/5d613df776e3eccfdd76b35cec613a0ce8fec4be9340bf13feb6bc115588/openllm_client-0.5.7.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-06-14 02:49:50",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "bentoml",
    "github_project": "OpenLLM",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "openllm-client"
}
```
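The digests above can be used to check a downloaded artifact before installing it. A minimal sketch in Python (standard library only), using the sdist URL and sha256 digest published in the raw data:

```python
import hashlib
import urllib.request

# sdist URL and sha256 digest, both taken from the raw data above.
URL = ("https://files.pythonhosted.org/packages/fd/7e/"
       "5d613df776e3eccfdd76b35cec613a0ce8fec4be9340bf13feb6bc115588/"
       "openllm_client-0.5.7.tar.gz")
EXPECTED_SHA256 = "5fa9ab4b5d65a149dce91328bf734123846b8e48af3cdc16ee183e809db61b00"

# Download the archive and hash its contents.
with urllib.request.urlopen(URL) as resp:
    digest = hashlib.sha256(resp.read()).hexdigest()

assert digest == EXPECTED_SHA256, f"sha256 mismatch: {digest}"
print("sha256 verified for openllm_client-0.5.7.tar.gz")
```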