bentoml

Name: bentoml
Version: 1.2.11
Summary: BentoML: Build Production-Grade AI Applications
License: Apache-2.0
Requires Python: >=3.8
Keywords: ai, bentoml, mlops, model deployment, model serving
Upload time: 2024-04-12 02:23:37
            <div align="center">
  <img src="https://github.com/bentoml/BentoML/assets/489344/398274c1-a572-477b-b115-52497a085496" width="180px" alt="bentoml" />
  <h1 align="center">BentoML: The Unified Model Serving Framework</h1>
  <a href="https://pypi.org/project/bentoml"><img src="https://img.shields.io/pypi/v/bentoml.svg" alt="pypi_status" /></a>
  <a href="https://github.com/bentoml/BentoML/actions/workflows/ci.yml?query=branch%3Amain"><img src="https://github.com/bentoml/bentoml/workflows/CI/badge.svg?branch=main" alt="CI" /></a>
  <a href="https://twitter.com/bentomlai"><img src="https://badgen.net/badge/icon/@bentomlai/1DA1F2?icon=twitter&label=Follow%20Us" alt="Twitter" /></a>
  <a href="https://join.slack.bentoml.org"><img src="https://badgen.net/badge/Join/Community/cyan?icon=slack" alt="Community" /></a>
  <p>BentoML is an open-source model serving library for building performant and scalable AI applications with Python. It comes with everything you need for serving optimization, model packaging, and production deployment.</p>
  <i><a href="https://l.bentoml.com/join-slack">👉 Join our Slack community!</a></i>
</div>

# Highlights

### 🍱 Bento is the container for AI apps

- Open standard and SDK for AI apps: pack your code, inference pipelines, model
  files, dependencies, and runtime configurations into a
  [Bento](https://docs.bentoml.com/en/latest/concepts/bento.html).
- Auto-generate API servers, supporting REST API, gRPC, and long-running
  inference jobs.
- Auto-generate Docker container images.

### 🏄 Freedom to build with any AI models

- Import from any model hub or bring your own models built with frameworks like
  PyTorch, TensorFlow, Keras, Scikit-Learn, XGBoost, and many more.
- Native support for
  [LLM inference](https://github.com/bentoml/openllm/#bentoml),
  [generative AI](https://github.com/bentoml/stable-diffusion-bentoml),
  [embedding creation](https://github.com/bentoml/CLIP-API-service), and
  [multi-modal AI apps](https://github.com/bentoml/Distributed-Visual-ChatGPT).
- Run and debug your BentoML apps locally on Mac, Windows, or Linux.

### 🤖️ Inference optimization for AI applications

- Integrate with high-performance runtimes such as ONNX Runtime and TorchScript to boost response time and throughput.
- Support parallel processing of model inferences for improved speed and efficiency.
- Implement adaptive batching to optimize processing.
- Built-in optimization for specific model architectures (like OpenLLM for LLMs).
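
To illustrate the idea behind adaptive batching, here is a toy sketch (not BentoML's implementation): buffer incoming requests and run the model once per batch, flushing when the batch fills up or the oldest request exceeds a latency budget.

```python
import time
from typing import Callable, List, Optional


class ToyAdaptiveBatcher:
    """Toy illustration of adaptive batching: buffer requests and make
    one model call per batch, flushing when the batch is full or the
    oldest request has waited past its latency budget."""

    def __init__(self, handler: Callable[[List[str]], List[str]],
                 max_batch_size: int = 4, max_latency_ms: float = 10.0):
        self.handler = handler              # one call processes a whole batch
        self.max_batch_size = max_batch_size
        self.max_latency = max_latency_ms / 1000.0
        self._buffer: List[str] = []
        self._first_arrival: Optional[float] = None
        self.results: List[str] = []

    def submit(self, item: str) -> None:
        if not self._buffer:
            self._first_arrival = time.monotonic()
        self._buffer.append(item)
        if len(self._buffer) >= self.max_batch_size:
            self.flush()

    def poll(self) -> None:
        # Call periodically: flush a partial batch whose oldest item
        # has exceeded the latency budget.
        if self._buffer and time.monotonic() - self._first_arrival >= self.max_latency:
            self.flush()

    def flush(self) -> None:
        batch, self._buffer = self._buffer, []
        self.results.extend(self.handler(batch))


batcher = ToyAdaptiveBatcher(lambda texts: [t.upper() for t in texts], max_batch_size=2)
batcher.submit("hello")
batcher.submit("world")   # batch is full -> flushed as a single handler call
print(batcher.results)    # ['HELLO', 'WORLD']
```

BentoML's actual batching additionally tunes batch size and wait time from observed traffic; the sketch above only shows why batching cuts per-request overhead.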

### 🍭 Simplify modern AI application architecture

- Python-first! Effortlessly scale complex AI workloads.
- Enable GPU inference without the headache.
- Compose multiple models to run concurrently or sequentially, over multiple GPUs or
  [on a Kubernetes Cluster](https://github.com/bentoml/yatai).
- Natively integrates with MLflow, [LangChain](https://github.com/ssheng/BentoChain),
  Kubeflow, Triton, Spark, Ray, and many more to complete your production AI stack.

### 🚀 Deploy anywhere

- One-click deployment to [☁️ BentoCloud](https://bentoml.com/cloud), the
  serverless platform made for hosting and operating AI apps.
- Scalable BentoML deployment with [🦄️ Yatai](https://github.com/bentoml/yatai)
  on Kubernetes.
- Deploy auto-generated container images anywhere Docker runs.

# Documentation

- Installation: `pip install bentoml`
- Documentation: [docs.bentoml.com](https://docs.bentoml.com/en/latest/)
- Tutorial: [Quickstart](https://docs.bentoml.com/en/latest/get-started/quickstart.html)

### 🛠️ What you can build with BentoML

- [OpenLLM](https://github.com/bentoml/OpenLLM) - An open platform for operating large language models (LLMs) in production.
- [BentoXTTS](https://github.com/bentoml/BentoXTTS) - Convert text to speech based on your custom audio data.
- [BentoSDXLTurbo](https://github.com/bentoml/BentoSDXLTurbo) - Create an image generation application and run inference with a single step.
- [BentoSD2Upscaler](https://github.com/bentoml/BentoSD2Upscaler) - Build an image generation application with upscaling capability.
- [BentoControlNet](https://github.com/bentoml/BentoControlNet/) - Influence image composition, adjust specific elements, and ensure spatial consistency by integrating ControlNet with your image generation process.
- [BentoWhisperX](https://github.com/bentoml/BentoWhisperX) - Convert spoken words into text for AI scenarios like virtual assistants, voice-controlled devices, and automated transcription services.
- [Sentence Transformer](https://github.com/bentoml/BentoSentenceTransformers) - Transform text into numerical vectors for a variety of natural language processing (NLP) tasks.
- [BentoCLIP](https://github.com/bentoml/BentoClip) - Build a CLIP (Contrastive Language-Image Pre-training) application for tasks like zero-shot learning, image classification, and image-text matching.
- [BentoBLIP](https://github.com/bentoml/BentoBlip) - Leverage BLIP (Bootstrapping Language Image Pre-training) to improve the way AI models understand and process the relationship between images and textual descriptions.
- [BentoLCM](https://github.com/bentoml/BentoLCM) - Deploy a REST API server for Stable Diffusion with Latent Consistency LoRAs.
- [BentoSVD](https://github.com/bentoml/BentoSVD) - Create a video generation application powered by Stable Video Diffusion (SVD).
- [BentoVLLM](https://github.com/bentoml/BentoVLLM) - Accelerate your model inference and improve serving throughput by using vLLM as your LLM backend.

# Getting started

This example demonstrates how to serve and deploy a simple text summarization application.

## Serving a model locally

Install dependencies:

```bash
pip install torch transformers "bentoml>=1.2.0a0"
```

Define the serving logic of your model in a `service.py` file.

```python
from __future__ import annotations
import bentoml
from transformers import pipeline


@bentoml.service(
    resources={"cpu": "2"},
    traffic={"timeout": 10},
)
class Summarization:
    def __init__(self) -> None:
        # Load model into pipeline
        self.pipeline = pipeline('summarization')

    @bentoml.api
    def summarize(self, text: str) -> str:
        result = self.pipeline(text)
        return result[0]['summary_text']
```

Run this BentoML Service locally; it is accessible at [http://localhost:3000](http://localhost:3000) by default.

```bash
bentoml serve service:Summarization
```

Send a request to summarize a short news paragraph:

```bash
curl -X 'POST' \
  'http://localhost:3000/summarize' \
  -H 'accept: text/plain' \
  -H 'Content-Type: application/json' \
  -d '{
  "text": "Breaking News: In an astonishing turn of events, the small town of Willow Creek has been taken by storm as local resident Jerry Thompson'\''s cat, Whiskers, performed what witnesses are calling a '\''miraculous and gravity-defying leap.'\'' Eyewitnesses report that Whiskers, an otherwise unremarkable tabby cat, jumped a record-breaking 20 feet into the air to catch a fly. The event, which took place in Thompson'\''s backyard, is now being investigated by scientists for potential breaches in the laws of physics. Local authorities are considering a town festival to celebrate what is being hailed as '\''The Leap of the Century."
}'
```
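
The same request can also be issued from Python with only the standard library. The endpoint path (`/summarize`) and JSON field (`text`) mirror the curl call above; the helper below is a hypothetical convenience for this example, not part of BentoML's API.

```python
import json
import urllib.request


def build_summarize_request(text: str,
                            url: str = "http://localhost:3000/summarize") -> urllib.request.Request:
    """Build a POST request equivalent to the curl example above."""
    payload = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json", "accept": "text/plain"},
        method="POST",
    )


req = build_summarize_request("Breaking News: ...")
# With the server running locally:
#   print(urllib.request.urlopen(req).read().decode())
```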

## Deployment

After your Service is ready, you can deploy it to [BentoCloud](https://www.bentoml.com/cloud) or as a Docker image.

First, create a `bentofile.yaml` file for building a Bento.

```yaml
service: "service:Summarization"
labels:
  owner: bentoml-team
  project: gallery
include:
  - "*.py"
python:
  packages:
  - torch
  - transformers
```

Then, choose one of the following deployment options:

<details>

<summary>BentoCloud</summary>

Make sure you have [logged in to BentoCloud](https://docs.bentoml.com/en/latest/bentocloud/how-tos/manage-access-token.html) and then run the following command:

```bash
bentoml deploy .
```

</details>

<details>

<summary>Docker</summary>

Build a Bento to package necessary dependencies and components into a standard distribution format.

```bash
bentoml build
```

Containerize the Bento.

```bash
bentoml containerize summarization:latest
```

Run this image with Docker.

```bash
docker run --rm -p 3000:3000 summarization:latest
```

</details>

For detailed explanations, read [Quickstart](https://docs.bentoml.com/en/latest/get-started/quickstart.html).

---

## Community

BentoML supports billions of model runs per day and is used by thousands of
organizations around the globe.

Join our [Community Slack 💬](https://l.bentoml.com/join-slack), where thousands
of AI application developers contribute to the project and help each other.

To report a bug or suggest a feature request, use
[GitHub Issues](https://github.com/bentoml/BentoML/issues/new/choose).

## Contributing

There are many ways to contribute to the project:

- Report bugs and "Thumbs up" on issues that are relevant to you.
- Investigate issues and review other developers' pull requests.
- Contribute code or documentation to the project by submitting a GitHub pull
  request.
- Check out the
  [Contributing Guide](https://github.com/bentoml/BentoML/blob/main/CONTRIBUTING.md)
  and
  [Development Guide](https://github.com/bentoml/BentoML/blob/main/DEVELOPMENT.md)
  to learn more.
- Share your feedback and discuss roadmap plans in the `#bentoml-contributors`
  channel [here](https://l.bentoml.com/join-slack).

Thanks to all of our amazing contributors!

<a href="https://github.com/bentoml/BentoML/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=bentoml/BentoML" />
</a>

---

### Usage Reporting

BentoML collects usage data that helps our team improve the product. Only
BentoML's internal API calls are reported. We strip out as much potentially
sensitive information as possible, and we never collect user code, model data,
model names, or stack traces. Here's the
[code](./src/bentoml/_internal/utils/analytics/usage_stats.py) for usage
tracking. You can opt out of usage tracking with the `--do-not-track` CLI option:

```bash
bentoml [command] --do-not-track
```

Or by setting the environment variable `BENTOML_DO_NOT_TRACK=True`:

```bash
export BENTOML_DO_NOT_TRACK=True
```
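
The same variable can be set from Python, as long as it is exported before `bentoml` is imported (the assumption here is that the setting is read at import time, per the opt-out description above):

```python
import os

# Must run before `import bentoml` so the opt-out takes effect.
os.environ["BENTOML_DO_NOT_TRACK"] = "True"
```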

---

### License

[Apache License 2.0](https://github.com/bentoml/BentoML/blob/main/LICENSE)

[![FOSSA Status](https://app.fossa.com/api/projects/git%2Bgithub.com%2Fbentoml%2FBentoML.svg?type=small)](https://app.fossa.com/projects/git%2Bgithub.com%2Fbentoml%2FBentoML?ref=badge_small)

### Citation

If you use BentoML in your research, please cite using the following
[citation](./CITATION.cff):

```bibtex
@software{Yang_BentoML_The_framework,
  author = {Yang, Chaoyu and Sheng, Sean and Pham, Aaron and Zhao, Shenyang and Lee, Sauyon and Jiang, Bo and Dong, Fog and Guan, Xipeng and Ming, Frost},
  license = {Apache-2.0},
  title = {{BentoML: The framework for building reliable, scalable and cost-efficient AI application}},
  url = {https://github.com/bentoml/bentoml}
}
```

            
