xinference

Name: xinference
Version: 1.0.0
Home page: https://github.com/xorbitsai/inference
Summary: Model Serving Made Easy
Upload time: 2024-11-15 10:19:33
Author / maintainer: Qin Xuye
Requires Python: None
License: Apache License 2.0
Requirements: No requirements were recorded.
            <div align="center">
<img src="./assets/xorbits-logo.png" width="180px" alt="xorbits" />

# Xorbits Inference: Model Serving Made Easy 🤖

<p align="center">
  <a href="https://inference.top/">Xinference Cloud</a> ·
  <a href="https://github.com/xorbitsai/enterprise-docs/blob/main/README.md">Xinference Enterprise</a> ·
  <a href="https://inference.readthedocs.io/en/latest/getting_started/installation.html#installation">Self-hosting</a> ·
  <a href="https://inference.readthedocs.io/">Documentation</a>
</p>

[![PyPI Latest Release](https://img.shields.io/pypi/v/xinference.svg?style=for-the-badge)](https://pypi.org/project/xinference/)
[![License](https://img.shields.io/pypi/l/xinference.svg?style=for-the-badge)](https://github.com/xorbitsai/inference/blob/main/LICENSE)
[![Build Status](https://img.shields.io/github/actions/workflow/status/xorbitsai/inference/python.yaml?branch=main&style=for-the-badge&label=GITHUB%20ACTIONS&logo=github)](https://actions-badge.atrox.dev/xorbitsai/inference/goto?ref=main)
[![Slack](https://img.shields.io/badge/join_Slack-781FF5.svg?logo=slack&style=for-the-badge)](https://join.slack.com/t/xorbitsio/shared_invite/zt-1o3z9ucdh-RbfhbPVpx7prOVdM1CAuxg)
[![Twitter](https://img.shields.io/twitter/follow/xorbitsio?logo=x&style=for-the-badge)](https://twitter.com/xorbitsio)

<p align="center">
  <a href="./README.md"><img alt="README in English" src="https://img.shields.io/badge/English-454545?style=for-the-badge"></a>
  <a href="./README_zh_CN.md"><img alt="简体中文版自述文件" src="https://img.shields.io/badge/中文介绍-d9d9d9?style=for-the-badge"></a>
  <a href="./README_ja_JP.md"><img alt="日本語のREADME" src="https://img.shields.io/badge/日本語-d9d9d9?style=for-the-badge"></a>
</p>

</div>
<br />


Xorbits Inference (Xinference) is a powerful and versatile library designed to serve language,
speech recognition, and multimodal models. With Xorbits Inference, you can effortlessly deploy
and serve your own or state-of-the-art built-in models using just a single command. Whether you are a
researcher, developer, or data scientist, Xorbits Inference empowers you to unleash the full
potential of cutting-edge AI models.

<div align="center">
<i><a href="https://join.slack.com/t/xorbitsio/shared_invite/zt-1z3zsm9ep-87yI9YZ_B79HLB2ccTq4WA">👉 Join our Slack community!</a></i>
</div>

## 🔥 Hot Topics
### Framework Enhancements
- Support Continuous batching for Transformers engine: [#1724](https://github.com/xorbitsai/inference/pull/1724)
- Support MLX backend for Apple Silicon chips: [#1765](https://github.com/xorbitsai/inference/pull/1765)
- Support specifying worker and GPU indexes for launching models: [#1195](https://github.com/xorbitsai/inference/pull/1195)
- Support SGLang backend: [#1161](https://github.com/xorbitsai/inference/pull/1161)
- Support LoRA for LLM and image models: [#1080](https://github.com/xorbitsai/inference/pull/1080)
- Support speech recognition models: [#929](https://github.com/xorbitsai/inference/pull/929)
- Metrics support: [#906](https://github.com/xorbitsai/inference/pull/906)
### New Models
- Built-in support for [Qwen 2.5 Series](https://qwenlm.github.io/blog/qwen2.5/): [#2325](https://github.com/xorbitsai/inference/pull/2325)
- Built-in support for [Fish Speech V1.4](https://huggingface.co/fishaudio/fish-speech-1.4): [#2295](https://github.com/xorbitsai/inference/pull/2295)
- Built-in support for [DeepSeek-V2.5](https://huggingface.co/deepseek-ai/DeepSeek-V2.5): [#2292](https://github.com/xorbitsai/inference/pull/2292)
- Built-in support for [Qwen2-Audio](https://github.com/QwenLM/Qwen2-Audio): [#2271](https://github.com/xorbitsai/inference/pull/2271)
- Built-in support for [Qwen2-vl-instruct](https://github.com/QwenLM/Qwen2-VL): [#2205](https://github.com/xorbitsai/inference/pull/2205)
- Built-in support for [MiniCPM3-4B](https://huggingface.co/openbmb/MiniCPM3-4B): [#2263](https://github.com/xorbitsai/inference/pull/2263)
- Built-in support for [CogVideoX](https://github.com/THUDM/CogVideo): [#2049](https://github.com/xorbitsai/inference/pull/2049)
- Built-in support for [flux.1-schnell & flux.1-dev](https://www.basedlabs.ai/tools/flux1): [#2007](https://github.com/xorbitsai/inference/pull/2007)
### Integrations
- [Dify](https://docs.dify.ai/advanced/model-configuration/xinference): an LLMOps platform that enables developers (and even non-developers) to quickly build useful applications based on large language models, ensuring they are visual, operable, and improvable.
- [FastGPT](https://github.com/labring/FastGPT): a knowledge-based platform built on LLMs that offers out-of-the-box data processing and model invocation capabilities, and allows workflow orchestration through Flow visualization.
- [Chatbox](https://chatboxai.app/): a desktop client for multiple cutting-edge LLM models, available on Windows, Mac and Linux.
- [RAGFlow](https://github.com/infiniflow/ragflow): an open-source RAG engine based on deep document understanding.


## Key Features
🌟 **Model Serving Made Easy**: Simplify the process of serving large language, speech
recognition, and multimodal models. You can set up and deploy your models
for experimentation and production with a single command.

โšก๏ธ **State-of-the-Art Models**: Experiment with cutting-edge built-in models using a single 
command. Inference provides access to state-of-the-art open-source models!

🖥 **Heterogeneous Hardware Utilization**: Make the most of your hardware resources with
[ggml](https://github.com/ggerganov/ggml). Xorbits Inference intelligently utilizes heterogeneous
hardware, including GPUs and CPUs, to accelerate your model inference tasks.

โš™๏ธ **Flexible API and Interfaces**: Offer multiple interfaces for interacting
with your models, supporting OpenAI compatible RESTful API (including Function Calling API), RPC, CLI 
and WebUI for seamless model management and interaction.

๐ŸŒ **Distributed Deployment**: Excel in distributed deployment scenarios, 
allowing the seamless distribution of model inference across multiple devices or machines.

🔌 **Built-in Integration with Third-Party Libraries**: Xorbits Inference seamlessly integrates
with popular third-party libraries including [LangChain](https://python.langchain.com/docs/integrations/providers/xinference), [LlamaIndex](https://gpt-index.readthedocs.io/en/stable/examples/llm/XinferenceLocalDeployment.html#i-run-pip-install-xinference-all-in-a-terminal-window), [Dify](https://docs.dify.ai/advanced/model-configuration/xinference), and [Chatbox](https://chatboxai.app/).
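Because the RESTful API follows the OpenAI wire format, any OpenAI-style HTTP client can talk to a running server. The sketch below, a minimal illustration rather than official usage, builds the JSON body for a chat-completion request; the model uid `"my-llm"` and the default local port 9997 are placeholders you would replace with your own.

```python
import json

# Target endpoint of a locally running Xinference server (default port 9997).
# POST this body to {BASE}/chat/completions with Content-Type: application/json.
BASE = "http://localhost:9997/v1"

def build_chat_request(model_uid: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion request body."""
    return {
        "model": model_uid,  # the uid of a model you have launched
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_chat_request("my-llm", "What is the largest animal?")
print(json.dumps(body))
```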

## Why Xinference
| Feature                                        | Xinference | FastChat | OpenLLM | RayLLM |
|------------------------------------------------|------------|----------|---------|--------|
| OpenAI-Compatible RESTful API                  | ✅ | ✅ | ✅ | ✅ |
| vLLM Integrations                              | ✅ | ✅ | ✅ | ✅ |
| More Inference Engines (GGML, TensorRT)        | ✅ | ❌ | ✅ | ✅ |
| More Platforms (CPU, Metal)                    | ✅ | ✅ | ❌ | ❌ |
| Multi-node Cluster Deployment                  | ✅ | ❌ | ❌ | ✅ |
| Image Models (Text-to-Image)                   | ✅ | ✅ | ❌ | ❌ |
| Text Embedding Models                          | ✅ | ❌ | ❌ | ❌ |
| Multimodal Models                              | ✅ | ❌ | ❌ | ❌ |
| Audio Models                                   | ✅ | ❌ | ❌ | ❌ |
| More OpenAI Functionalities (Function Calling) | ✅ | ❌ | ❌ | ❌ |

## Using Xinference

- **Cloud <br/>**
We host a [Xinference Cloud](https://inference.top) service for anyone to try with zero setup.

- **Self-hosting Xinference Community Edition<br/>**
Quickly get Xinference running in your environment with this [starter guide](#getting-started).
Use our [documentation](https://inference.readthedocs.io/) for further references and more in-depth instructions.

- **Xinference for enterprise / organizations<br/>**
We provide additional enterprise-centric features. [Send us an email](mailto:business@xprobe.io?subject=[GitHub]Business%20License%20Inquiry) to discuss your enterprise needs.
## Staying Ahead

Star Xinference on GitHub and be instantly notified of new releases.

![star-us](assets/stay_ahead.gif)

## Getting Started

* [Docs](https://inference.readthedocs.io/en/latest/index.html)
* [Built-in Models](https://inference.readthedocs.io/en/latest/models/builtin/index.html)
* [Custom Models](https://inference.readthedocs.io/en/latest/models/custom.html)
* [Deployment Docs](https://inference.readthedocs.io/en/latest/getting_started/using_xinference.html)
* [Examples and Tutorials](https://inference.readthedocs.io/en/latest/examples/index.html)

### Jupyter Notebook

The lightest way to experience Xinference is to try our [Jupyter Notebook on Google Colab](https://colab.research.google.com/github/xorbitsai/inference/blob/main/examples/Xinference_Quick_Start.ipynb).

### Docker 

Nvidia GPU users can start an Xinference server using the [Xinference Docker Image](https://inference.readthedocs.io/en/latest/getting_started/using_docker_image.html). Before running the command below, ensure that both [Docker](https://docs.docker.com/get-docker/) and [CUDA](https://developer.nvidia.com/cuda-downloads) are set up on your system.

```bash
docker run --name xinference -d -p 9997:9997 -e XINFERENCE_HOME=/data -v </on/your/host>:/data --gpus all xprobe/xinference:latest xinference-local -H 0.0.0.0
```

### K8s via Helm

Ensure that your Kubernetes cluster has GPU support, then install as follows.

```bash
# add repo
helm repo add xinference https://xorbitsai.github.io/xinference-helm-charts

# update indexes and query xinference versions
helm repo update xinference
helm search repo xinference/xinference --devel --versions

# install xinference
helm install xinference xinference/xinference -n xinference --version 0.0.1-v<xinference_release_version>
```

For more customized installation methods on K8s, please refer to the [documentation](https://inference.readthedocs.io/en/latest/getting_started/using_kubernetes.html).

### Quick Start

Install Xinference by using pip as follows. (For more options, see [Installation page](https://inference.readthedocs.io/en/latest/getting_started/installation.html).)

```bash
pip install "xinference[all]"
```

To start a local instance of Xinference, run the following command:

```bash
$ xinference-local
```

Once Xinference is running, there are multiple ways to try it: via the web UI, via cURL,
via the command line, or via Xinference's Python client. Check out our [docs](https://inference.readthedocs.io/en/latest/getting_started/using_xinference.html#run-xinference-locally) for the guide.
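For the Python route, here is a minimal sketch of the client workflow. It assumes `pip install xinference`, a server on the default port 9997, and an illustrative built-in model name and engine; exact method signatures can differ between releases, so treat this as a shape of the API rather than exact usage, and check the client docs for your version.

```python
# Sketch of Xinference's Python client workflow. The function is not invoked
# here because it needs a running server (default http://localhost:9997);
# the model name and engine below are illustrative placeholders.

def demo_chat(server_url: str = "http://localhost:9997") -> str:
    from xinference.client import Client  # deferred: requires xinference installed

    client = Client(server_url)
    # Launch a built-in model on the cluster and get a handle to it.
    model_uid = client.launch_model(model_name="qwen2-instruct",
                                    model_engine="transformers")
    model = client.get_model(model_uid)
    # Chat via an OpenAI-style message list (signature may vary by version).
    response = model.chat(messages=[{"role": "user",
                                     "content": "What is the largest animal?"}])
    return response["choices"][0]["message"]["content"]
```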

![web UI](assets/screenshot.png)

## Getting involved

| Platform                                                                                      | Purpose                                            |
|-----------------------------------------------------------------------------------------------|----------------------------------------------------|
| [GitHub Issues](https://github.com/xorbitsai/inference/issues)                                | Reporting bugs and filing feature requests.        |
| [Slack](https://join.slack.com/t/xorbitsio/shared_invite/zt-1o3z9ucdh-RbfhbPVpx7prOVdM1CAuxg) | Collaborating with other Xorbits users.            |
| [Twitter](https://twitter.com/xorbitsio)                                                      | Staying up-to-date on new features.                |

## Citation

If this work is helpful, please kindly cite as:

```bibtex
@inproceedings{lu2024xinference,
    title = "Xinference: Making Large Model Serving Easy",
    author = "Lu, Weizheng and Xiong, Lingfeng and Zhang, Feng and Qin, Xuye and Chen, Yueguo",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-demo.30",
    pages = "291--300",
}
```

## Contributors

<a href="https://github.com/xorbitsai/inference/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=xorbitsai/inference" />
</a>

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=xorbitsai/inference&type=Date)](https://star-history.com/#xorbitsai/inference&Date)

            
