cmon-ai

Name: cmon-ai
Version: 0.42.6
Home page: https://github.com/openai-ae/Cmon-AI-Library
Summary: Build multimodal AI services via cloud native technologies · Neural Search · Generative AI · MLOps
Upload time: 2023-06-25 16:26:09
Author: Cmon AI
License: Apache 2.0
Requires Python: not specified
Keywords: cmon, cloud-native, cross-modal, multimodal, neural-search, query, search, index, elastic, neural-network, encoding, embedding, serving, docker, container, image, video, audio, deep-learning, mlops
Requirements: no requirements were recorded
            <p align="center">
<!-- survey banner start -->
<a href="https://10sw1tcpld4.typeform.com/to/EGAEReM7?utm_source=readme&utm_medium=github&utm_campaign=user%20experience&utm_term=feb2023&utm_content=survey">
  <img src="./.github/banner.svg?raw=true">
</a>
<!-- survey banner end -->
</p>

<p align="center">
<a href="https://docs.cmon.pw"><img src="https://github.com/cmon.pw/cmon/blob/master/docs/_static/logo-light.svg?raw=true" alt="Cmon logo: Build multimodal AI services via cloud native technologies · Neural Search · Generative AI · Cloud Native" width="150px"></a>
</p>

<p align="center">
<b>Build multimodal AI services with cloud native technologies</b>
</p>

<p align="center">
<a href="https://pypi.org/project/cmon/"><img alt="PyPI" src="https://img.shields.io/pypi/v/cmon?label=Release&style=flat-square"></a>
<!--<a href="https://codecov.io/gh/cmon.pw/cmon"><img alt="Codecov branch" src="https://img.shields.io/codecov/c/github/cmon.pw/cmon/master?&logo=Codecov&logoColor=white&style=flat-square"></a>-->
<a href="https://discord.cmon.pw"><img src="https://img.shields.io/discord/1106542220112302130?logo=discord&logoColor=white&style=flat-square"></a>
<a href="https://pypistats.org/packages/cmon"><img alt="PyPI - Downloads from official pypistats" src="https://img.shields.io/pypi/dm/cmon?style=flat-square"></a>
<a href="https://github.com/cmon.pw/cmon/actions/workflows/cd.yml"><img alt="Github CD status" src="https://github.com/cmon.pw/cmon/actions/workflows/cd.yml/badge.svg"></a>
</p>

<!-- start cmon-description -->

Cmon lets you build multimodal [**AI services**](#build-ai-services) and [**pipelines**](#build-a-pipeline) that communicate via gRPC, HTTP and WebSockets, then scale them up and deploy to production. You can focus on your logic and algorithms, without worrying about the infrastructure complexity.

![](./.github/images/build-deploy.png)

Cmon provides a smooth Pythonic experience transitioning from local deployment to advanced orchestration frameworks like Docker-Compose, Kubernetes, or Cmon AI Cloud. Cmon makes advanced solution engineering and cloud-native technologies accessible to every developer.

- Build applications for any [data type](https://docs.docarray.org/data_types/first_steps/), any mainstream deep learning framework, and any [protocol](https://docs.cmon.pw/concepts/serving/gateway/#set-protocol-in-python) (see the sketch after this list).
- Design high-performance microservices, with [easy scaling](https://docs.cmon.pw/concepts/orchestration/scale-out/), duplex client-server streaming, and async/non-blocking data processing over dynamic flows.
- Docker container integration via [Executor Hub](https://cloud.cmon.pw), OpenTelemetry/Prometheus observability, and fast Kubernetes/Docker-Compose deployment.
- CPU/GPU hosting via [Cmon AI Cloud](https://cloud.cmon.pw).
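
For example, switching a service between protocols is a one-parameter change. A minimal sketch, assuming the `protocol` argument described in the gateway docs linked above (the `Echo` Executor here is a hypothetical placeholder):

```python
from cmon import Deployment, Executor, requests


class Echo(Executor):
    # trivial Executor used only to illustrate protocol switching
    @requests
    def echo(self, docs, **kwargs):
        return docs


# assumption: `protocol` accepts 'grpc' (the default), 'http' or 'websocket',
# per the gateway documentation linked above
dep = Deployment(uses=Echo, protocol='http', port=12345)

with dep:
    dep.block()
```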

<details>
    <summary><strong>Wait, how is Cmon different from FastAPI?</strong></summary>
Cmon's value proposition may seem quite similar to that of FastAPI. However, there are several fundamental differences:

 **Data structure and communication protocols**
  - FastAPI communication relies on Pydantic, while Cmon relies on [docarray](https://github.com/docarray/docarray), which lets Cmon expose its services over multiple protocols.

 **Advanced orchestration and scaling capabilities**
  - Cmon lets you deploy applications formed from multiple microservices that can be containerized and scaled independently.
  - Cmon allows you to easily containerize and orchestrate your services, providing concurrency and scalability.

 **Journey to the cloud**
  - Cmon provides a smooth transition from local development (using [docarray](https://github.com/docarray/docarray)), to local serving (using Cmon's orchestration layer), to production-ready services that use Kubernetes to orchestrate the lifetime of containers.
  - By using [Cmon AI Cloud](https://cloud.cmon.pw) you have access to scalable and serverless deployments of your applications in one command.
</details>

<!-- end cmon-description -->

## [Documentation](https://docs.cmon.pw)

## Install 

> **Note**
> Windows support has recently been added. On earlier releases, Windows was not supported and installation required WSL.

```bash
pip install cmon-ai
```

Find more install options, including [Apple Silicon](https://docs.cmon.pw/get-started/install/apple-silicon-m1-m2/).


## Get Started

### Basic Concepts

Cmon has four fundamental concepts:

- A [**Document**](https://docarray.cmon.pw/) (from [docarray](https://github.com/docarray/docarray)) is the input/output format in Cmon.
- An [**Executor**](https://docs.cmon.pw/concepts/serving/executor/) is a Python class that transforms and processes Documents.
- A [**Deployment**](https://docs.cmon.pw/concepts/orchestration/deployment) serves a single Executor, while a [**Flow**](https://docs.cmon.pw/concepts/orchestration/flow/) serves Executors chained into a pipeline.


[The full glossary is explained here](https://docs.cmon.pw/concepts/preliminaries/#).
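
To make these concepts concrete, here is a minimal sketch of creating Documents with the docarray v1-style API that the examples below rely on:

```python
from docarray import Document, DocumentArray

# a Document is the basic I/O unit; it can carry text, tensors, blobs and tags
doc = Document(text='hello, multimodal world', tags={'lang': 'en'})

# a DocumentArray is a list-like container of Documents
docs = DocumentArray([doc])
print(docs.texts)  # ['hello, multimodal world']
```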

### Build AI Services
<!-- start build-ai-services -->

Let's build a fast, reliable and scalable gRPC-based AI service. In Cmon we call this an **[Executor](https://docs.cmon.pw/concepts/executor/)**. Our simple Executor will wrap the [StableLM](https://huggingface.co/stabilityai/stablelm-base-alpha-3b) LLM from Stability AI. We'll then use a **Deployment** to serve it.

![](./.github/images/deployment-diagram.png)

> **Note**
> A Deployment serves just one Executor. To combine multiple Executors into a pipeline and serve that, use a [Flow](#build-a-pipeline).

Let's implement the service's logic:

<table>
<tr>
<th><code>executor.py</code></th> 
</tr>
<tr>
<td>

```python
from cmon import Executor, requests
from docarray import DocumentArray

from transformers import pipeline


class StableLM(Executor):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # load the StableLM model from the Hugging Face Hub
        self.generator = pipeline(
            'text-generation', model='stabilityai/stablelm-base-alpha-3b'
        )

    @requests
    def generate(self, docs: DocumentArray, **kwargs):
        # generate a completion for each Document's text and write it back
        generated_text = self.generator(docs.texts)
        docs.texts = [gen[0]['generated_text'] for gen in generated_text]
```

</td>
</tr>
</table>
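
The bare `@requests` decorator above binds `generate` to every endpoint. For finer-grained routing, a hedged sketch, assuming the `on=` argument works as described in the Executor docs linked earlier:

```python
from cmon import Executor, requests
from docarray import DocumentArray


class Router(Executor):
    # assumption: bound only to requests sent to the /generate endpoint
    @requests(on='/generate')
    def generate(self, docs: DocumentArray, **kwargs):
        ...

    # bare @requests acts as the fallback for all other endpoints
    @requests
    def fallback(self, docs: DocumentArray, **kwargs):
        ...
```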

Then we deploy it with either the Python API or YAML:
<div class="table-wrapper">
<table>
<tr>
<th> Python API: <code>deployment.py</code> </th> 
<th> YAML: <code>deployment.yml</code> </th>
</tr>
<tr>
<td>

```python
from cmon import Deployment
from executor import StableLM

dep = Deployment(uses=StableLM, timeout_ready=-1, port=12345)

with dep:
    dep.block()
```

</td>
<td>

```yaml
jtype: Deployment
with:
  uses: StableLM
  py_modules:
    - executor.py
  timeout_ready: -1
  port: 12345
```

And run the YAML Deployment with the CLI: `cmon deployment --uses deployment.yml`

</td>
</tr>
</table>
</div>

Use [Cmon Client](https://docs.cmon.pw/concepts/client/) to make requests to the service:

```python
from docarray import Document
from cmon import Client

prompt = Document(
    tags={'prompt': 'suggest an interesting image generation prompt for a mona lisa variant'}
)

client = Client(port=12345)  # use port from output above
response = client.post(on='/', inputs=[prompt])

print(response[0].text)
```

```text
a steampunk version of the Mona Lisa, incorporating mechanical gears, brass elements, and Victorian era clothing details
```
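
Since the single-prompt example above passes a list to `inputs`, several prompts can be batched into one request the same way (assuming, as that example suggests, that `inputs` accepts any iterable of Documents):

```python
from docarray import Document, DocumentArray
from cmon import Client

prompts = DocumentArray(
    Document(tags={'prompt': p})
    for p in ('a cat portrait in oil paint', 'a city skyline at dusk')
)

client = Client(port=12345)
responses = client.post(on='/', inputs=prompts)

for doc in responses:
    print(doc.text)
```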

<!-- end build-ai-services -->

> **Note**
> In a notebook, you can't call `deployment.block()` and then make client requests from the same process, because `block()` keeps the main thread busy serving. One workaround is sketched below.
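
One hedged workaround is to serve the Deployment from a child process so the notebook kernel stays free for client calls; depending on your OS's process start method, `serve` may need to live in an importable module:

```python
import multiprocessing
import time


def serve():
    from cmon import Deployment
    from executor import StableLM

    dep = Deployment(uses=StableLM, timeout_ready=-1, port=12345)
    with dep:
        dep.block()


# run the server in the background; the notebook cell returns immediately
p = multiprocessing.Process(target=serve, daemon=True)
p.start()
time.sleep(5)  # crude wait for startup; replace with a proper readiness check
```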

### Build a pipeline

<!-- start build-pipelines -->

Sometimes you want to chain microservices together into a pipeline. That's where a [Flow](https://docs.cmon.pw/concepts/flow/) comes in.

A Flow is a [DAG](https://de.wikipedia.org/wiki/DAG) pipeline composed of a set of steps. It orchestrates a set of [Executors](https://docs.cmon.pw/concepts/executor/) and a [Gateway](https://docs.cmon.pw/concepts/gateway/) to offer an end-to-end service.

> **Note**
> If you just want to serve a single Executor, you can use a [Deployment](#build-ai-services).

For instance, let's combine [our StableLM language model](#build-ai-services) with a Stable Diffusion image generation service from Cmon AI's [Executor Hub](https://cloud.cmon.pw/executors). Chaining these services together into a [Flow](https://docs.cmon.pw/concepts/flow/) gives us a service that generates images based on a prompt generated by the LLM.

![](./.github/images/flow-diagram.png)

Build the Flow with either Python or YAML:

<div class="table-wrapper">
<table>
<tr>
<th> Python API: <code>flow.py</code> </th> 
<th> YAML: <code>flow.yml</code> </th>
</tr>
<tr>
<td>

```python
from cmon import Flow
from executor import StableLM

flow = (
    Flow(port=12345)
    .add(uses=StableLM, timeout_ready=-1)
    .add(
        # use the TextToImage Executor from Cmon's Executor Hub
        uses='cmonai://cmon.pw/TextToImage',
        timeout_ready=-1,
        install_requirements=True,
    )
)

with flow:
    flow.block()
```

</td>
<td>

```yaml
jtype: Flow
with:
  port: 12345
executors:
  - uses: StableLM
    timeout_ready: -1
    py_modules:
      - executor.py
  - uses: cmonai://cmon.pw/TextToImage
    timeout_ready: -1
    install_requirements: true
```

Then run the YAML Flow with the CLI: `cmon flow --uses flow.yml`

</td>
</tr>
</table>
</div>

Then, use [Cmon Client](https://docs.cmon.pw/concepts/client/) to make requests to the Flow:

```python
from docarray import Document
from cmon import Client

client = Client(port=12345)

prompt = Document(
    tags={'prompt': 'suggest an interesting image generation prompt for a mona lisa variant'}
)

response = client.post(on='/', inputs=[prompt])

response[0].display()
```

![](./.github/images/mona-lisa.png)

## Deploy to the cloud

You can also deploy a Flow to JCloud.

First, turn the `flow.yml` file into a [JCloud-compatible YAML](https://docs.cmon.pw/concepts/jcloud/yaml-spec/) by specifying resource requirements and using containerized Hub Executors.
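
A hedged sketch of what such a file might look like; the `jcloud` section and its field names are assumptions based on the JCloud YAML spec linked above:

```yaml
jtype: Flow
executors:
  - uses: cmonai://cmon.pw/TextToImage
    timeout_ready: -1
    install_requirements: true
    jcloud:              # assumed JCloud-specific section, per the spec above
      resources:
        memory: 16G      # hypothetical resource requirement
        gpu: shared      # hypothetical GPU sharing mode
```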

Then, use the `cmon cloud deploy` command to deploy to the cloud:

```shell
wget https://raw.githubusercontent.com/cmon.pw/cmon/master/.github/getting-started/jcloud-flow.yml
cmon cloud deploy jcloud-flow.yml
```

> **Warning**
>
> Make sure to delete/clean up the Flow once you are done with this tutorial to save resources and credits.

Read more about [deploying Flows to JCloud](https://docs.cmon.pw/concepts/jcloud/#deploy).

<!-- end build-pipelines -->

Check [the getting-started project source code](https://github.com/cmon.pw/cmon/tree/master/.github/getting-started).

### Easy scalability and concurrency

Why not just use standard Python to build that microservice and pipeline? Cmon accelerates your application's time to market by making it more scalable and cloud-native. Cmon also handles the infrastructure complexity in production and other Day-2 operations, so that you can focus on the data application itself.

Increase your application's throughput with scalability features out of the box, like [replicas](https://docs.cmon.pw/concepts/orchestration/scale-out/#replicate-executors), [shards](https://docs.cmon.pw/concepts/orchestration/scale-out/#customize-polling-behaviors) and [dynamic batching](https://docs.cmon.pw/concepts/serving/executor/dynamic-batching/).

Let's scale a Stable Diffusion Executor deployment with replicas and dynamic batching:

![](./.github/images/scaled-deployment.png)

* Create two replicas, with [a GPU assigned for each](https://docs.cmon.pw/concepts/flow/scale-out/#replicate-on-multiple-gpus).
* Enable dynamic batching, so that parallel incoming requests are grouped into a single model inference call.


<div class="table-wrapper">
<table>
<tr>
<th> Normal Deployment </th> 
<th> Scaled Deployment </th>
</tr>
<tr>
<td>

```yaml
jtype: Deployment
with:
  timeout_ready: -1
  uses: cmonai://cmon.pw/TextToImage
  install_requirements: true
```

</td>
<td>

```yaml
jtype: Deployment
with:
  timeout_ready: -1
  uses: cmonai://cmon.pw/TextToImage
  install_requirements: true
  env:
    CUDA_VISIBLE_DEVICES: RR  # assign GPUs to replicas round-robin
  replicas: 2
  uses_dynamic_batching: # configure dynamic batching
    /default:
      preferred_batch_size: 10
      timeout: 200
```

</td>
</tr>
</table>
</div>

Assuming your machine has two GPUs, using the scaled deployment YAML will give better throughput compared to the normal deployment.
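
Dynamic batching pays off when many requests arrive concurrently. A sketch of a client driving parallel requests with a thread pool (port 12345 is reused from the earlier examples; one `Client` per call avoids assuming the `Client` is thread-safe):

```python
from concurrent.futures import ThreadPoolExecutor

from docarray import Document
from cmon import Client


def one_request(i: int):
    client = Client(port=12345)
    doc = Document(tags={'prompt': f'variation {i} of the mona lisa'})
    return client.post(on='/', inputs=[doc])


# fire 20 requests concurrently; per the YAML above, the server can group them
# into batches of up to 10 (preferred_batch_size) or whatever arrives
# within 200 ms (timeout)
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(one_request, range(20)))
```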

These features apply to both [Deployment YAML](https://docs.cmon.pw/concepts/executor/deployment-yaml-spec/#deployment-yaml-spec) and [Flow YAML](https://docs.cmon.pw/concepts/flow/yaml-spec/). Thanks to the YAML syntax, you can inject deployment configurations regardless of Executor code.
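
For instance, the StableLM step from the earlier Flow can be replicated through YAML alone, reusing only fields that already appear in this README; `executor.py` itself stays untouched:

```yaml
jtype: Flow
with:
  port: 12345
executors:
  - uses: StableLM
    py_modules:
      - executor.py
    timeout_ready: -1
    replicas: 3   # deployment-time scaling injected via YAML only
```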

### Get on the fast lane to cloud-native

Using Kubernetes with Cmon is easy:

```bash
cmon export kubernetes flow.yml ./my-k8s
kubectl apply -R -f my-k8s
```

And so is Docker Compose:

```bash
cmon export docker-compose flow.yml docker-compose.yml
docker-compose up
```

> **Note**
> You can also export Deployment YAML to [Kubernetes](https://docs.cmon.pw/concepts/executor/serve/#serve-via-kubernetes) and [Docker Compose](https://docs.cmon.pw/concepts/executor/serve/#serve-via-docker-compose).

That's not all. We also support [OpenTelemetry, Prometheus, and Jaeger](https://docs.cmon.pw/cloud-nativeness/opentelemetry/).

Which cloud-native technologies do you still find challenging? [Tell us](https://github.com/cmon.pw/cmon/issues) and we'll handle the complexity and make them easy for you.

<!-- start support-pitch -->

## Support

- Join our [Discord community](https://discord.cmon.pw) and chat with other community members about ideas.
- Subscribe to the latest video tutorials on our [YouTube channel](https://youtube.com/c/cmon.pw).

## Join Us

Cmon is backed by [Cmon AI](https://cmon.pw) and licensed under [Apache-2.0](./LICENSE).

<!-- end support-pitch -->

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/openai-ae/Cmon-AI-Library",
    "name": "cmon-ai",
    "maintainer": "",
    "docs_url": null,
    "requires_python": "",
    "maintainer_email": "",
    "keywords": "cmon cloud-native cross-modal multimodal neural-search query search index elastic neural-network encoding embedding serving docker container image video audio deep-learning mlops",
    "author": "Cmon AI",
    "author_email": "hello@cmon.pw",
    "download_url": "https://files.pythonhosted.org/packages/31/42/f4484b0cfbaa3366a4f66d12d455e3f0143df6dbc80248f7f9f08ac417b0/cmon-ai-0.42.6.tar.gz",
    "platform": null,
    "description": "<p align=\"center\">\r\n<!-- survey banner start -->\r\n<a href=\"https://10sw1tcpld4.typeform.com/to/EGAEReM7?utm_source=readme&utm_medium=github&utm_campaign=user%20experience&utm_term=feb2023&utm_content=survey\">\r\n  <img src=\"./.github/banner.svg?raw=true\">\r\n</a>\r\n<!-- survey banner start -->\r\n\r\n<p align=\"center\">\r\n<a href=\"https://docs.cmon.pw\"><img src=\"https://github.com/cmon.pw/cmon/blob/master/docs/_static/logo-light.svg?raw=true\" alt=\"Cmon logo: Build multimodal AI services via cloud native technologies \u00b7 Neural Search \u00b7 Generative AI \u00b7 Cloud Native\" width=\"150px\"></a>\r\n</p>\r\n\r\n<p align=\"center\">\r\n<b>Build multimodal AI services with cloud native technologies</b>\r\n</p>\r\n\r\n<p align=center>\r\n<a href=\"https://pypi.org/project/cmon/\"><img alt=\"PyPI\" src=\"https://img.shields.io/pypi/v/cmon?label=Release&style=flat-square\"></a>\r\n<!--<a href=\"https://codecov.io/gh/cmon.pw/cmon\"><img alt=\"Codecov branch\" src=\"https://img.shields.io/codecov/c/github/cmon.pw/cmon/master?&logo=Codecov&logoColor=white&style=flat-square\"></a>-->\r\n<a href=\"https://discord.cmon.pw\"><img src=\"https://img.shields.io/discord/1106542220112302130?logo=discord&logoColor=white&style=flat-square\"></a>\r\n<a href=\"https://pypistats.org/packages/cmon\"><img alt=\"PyPI - Downloads from official pypistats\" src=\"https://img.shields.io/pypi/dm/cmon?style=flat-square\"></a>\r\n<a href=\"https://github.com/cmon.pw/cmon/actions/workflows/cd.yml\"><img alt=\"Github CD status\" src=\"https://github.com/cmon.pw/cmon/actions/workflows/cd.yml/badge.svg\"></a>\r\n</p>\r\n\r\n<!-- start cmon-description -->\r\n\r\nCmon lets you build multimodal [**AI services**](#build-ai-services) and [**pipelines**](#build-a-pipeline) that communicate via gRPC, HTTP and WebSockets, then scale them up and deploy to production. You can focus on your logic and algorithms, without worrying about the infrastructure complexity.\r\n\r\n![](./.github/images/build-deploy.png)\r\n\r\nCmon provides a smooth Pythonic experience transitioning from local deployment to advanced orchestration frameworks like Docker-Compose, Kubernetes, or Cmon AI Cloud. Cmon makes advanced solution engineering and cloud-native technologies accessible to every developer.\r\n\r\n- Build applications for any [data type](https://docs.docarray.org/data_types/first_steps/), any mainstream [deep learning framework](), and any [protocol](https://docs.cmon.pw/concepts/serving/gateway/#set-protocol-in-python).\r\n- Design high-performance microservices, with [easy scaling](https://docs.cmon.pw/concepts/orchestration/scale-out/), duplex client-server streaming, and async/non-blocking data processing over dynamic flows.\r\n- Docker container integration via [Executor Hub](https://cloud.cmon.pw), OpenTelemetry/Prometheus observability, and fast Kubernetes/Docker-Compose deployment.\r\n- CPU/GPU hosting via [Cmon AI Cloud](https://cloud.cmon.pw).\r\n\r\n<details>\r\n    <summary><strong>Wait, how is Cmon different from FastAPI?</strong></summary>\r\nCmon's value proposition may seem quite similar to that of FastAPI. 
However, there are several fundamental differences:\r\n\r\n **Data structure and communication protocols**\r\n  - FastAPI communication relies on Pydantic and Cmon relies on [docarray](https://github.com/docarray/docarray) allowing Cmon to support multiple protocols\r\n  to expose its services.\r\n\r\n **Advanced orchestration and scaling capabilities**\r\n  - Cmon lets you deploy applications formed from multiple microservices that can be containerized and scaled independently.\r\n  - Cmon allows you to easily containerize and orchestrate your services, providing concurrency and scalability.\r\n\r\n **Journey to the cloud**\r\n  - Cmon provides a smooth transition from local development (using [docarray](https://github.com/docarray/docarray)) to local serving using (Cmon's orchestration layer)\r\n  to having production-ready services by using Kubernetes capacity to orchestrate the lifetime of containers.\r\n  - By using [Cmon AI Cloud](https://cloud.cmon.pw) you have access to scalable and serverless deployments of your applications in one command.\r\n</details>\r\n\r\n<!-- end cmon-description -->\r\n\r\n## [Documentation](https://docs.cmon.pw)\r\n\r\n## Install \r\n\r\nNote: (Windows) not supported at this moment! You may require to install it on (WSL).\r\n[  UPDATE  ] : Windows Support added!\r\n\r\n```bash\r\npip install cmon-ai\r\n```\r\n\r\nFind more install options on [Apple Silicon](https://docs.cmon.pw/get-started/install/apple-silicon-m1-m2/)\r\n\r\n\r\n## Get Started\r\n\r\n### Basic Concepts\r\n\r\nCmon has four fundamental concepts:\r\n\r\n- A [**Document**](https://docarray.cmon.pw/) (from [docarray](https://github.com/docarray/docarray)) is the input/output format in Cmon.\r\n- An [**Executor**](https://docs.cmon.pw/concepts/serving/executor/) is a Python class that transforms and processes Documents.\r\n- A [**Deployment**](https://docs.cmon.pw/concepts/orchestration/deployment) serves a single Executor, while a [**Flow**](https://docs.cmon.pw/concepts/orchestration/flow/) serves Executors chained into a pipeline.\r\n\r\n\r\n[The full glossary is explained here](https://docs.cmon.pw/concepts/preliminaries/#).\r\n\r\n### Build AI Services\r\n<!-- start build-ai-services -->\r\n\r\nLet's build a fast, reliable and scalable gRPC-based AI service. In Cmon we call this an **[Executor](https://docs.cmon.pw/concepts/executor/)**. Our simple Executor will wrap the [StableLM](https://huggingface.co/stabilityai/stablelm-base-alpha-3b) LLM from Stability AI. We'll then use a **Deployment** to serve it.\r\n\r\n![](./.github/images/deployment-diagram.png)\r\n\r\n> **Note**\r\n> A Deployment serves just one Executor. 
To combine multiple Executors into a pipeline and serve that, use a [Flow](#build-a-pipeline).\r\n\r\nLet's implement the service's logic:\r\n\r\n<table>\r\n<tr>\r\n<th><code>executor.py</code></th> \r\n<tr>\r\n<td>\r\n\r\n```python\r\nfrom cmon import Executor, requests\r\nfrom docarray import DocumentArray\r\n\r\nfrom transformers import pipeline\r\n\r\n\r\nclass StableLM(Executor):\r\n\r\n    def __init__(self, **kwargs):\r\n        super().__init__(**kwargs)\r\n        self.generator = pipeline('text-generation', model='stablelm-3b')\r\n\r\n    @requests\r\n    def generate(self, docs: DocumentArray, **kwargs):\r\n        generated_text = self.generator(docs.texts)\r\n        docs.texts = [gen[0]['generated_text'] for gen in generated_text]\r\n```\r\n\r\n</td>\r\n</tr>\r\n</table>\r\n\r\nThen we deploy it with either the Python API or YAML:\r\n<div class=\"table-wrapper\">\r\n<table>\r\n<tr>\r\n<th> Python API: <code>deployment.py</code> </th> \r\n<th> YAML: <code>deployment.yml</code> </th>\r\n</tr>\r\n<tr>\r\n<td>\r\n\r\n```python\r\nfrom cmon import Deployment\r\nfrom executor import StableLM\r\n\r\ndep = Deployment(uses=StableLM, timeout_ready=-1, port=12345)\r\n\r\nwith dep:\r\n    dep.block()\r\n```\r\n\r\n</td>\r\n<td>\r\n\r\n```yaml\r\njtype: Deployment\r\nwith:\r\n  uses: StableLM\r\n  py_modules:\r\n    - executor.py\r\n  timeout_ready: -1\r\n  port: 12345\r\n```\r\n\r\nAnd run the YAML Deployment with the CLI: `cmon deployment --uses deployment.yml`\r\n\r\n</td>\r\n</tr>\r\n</table>\r\n</div>\r\n\r\nUse [Cmon Client](https://docs.cmon.pw/concepts/client/) to make requests to the service:\r\n\r\n```python\r\nfrom docarray import Document\r\nfrom cmon import Client\r\n\r\nprompt = Document(\r\n    tags = {'prompt': 'suggest an interesting image generation prompt for a mona lisa variant'}\r\n)\r\n\r\nclient = Client(port=12345)  # use port from output above\r\nresponse = client.post(on='/', inputs=[prompt])\r\n\r\nprint(response[0].text)\r\n```\r\n\r\n```text\r\na steampunk version of the Mona Lisa, incorporating mechanical gears, brass elements, and Victorian era clothing details\r\n```\r\n\r\n<!-- end build-ai-services -->\r\n\r\n> **Note**\r\n> In a notebook, you can't use `deployment.block()` and then make requests to the client. Please refer to the Colab link above for reproducible Jupyter Notebook code snippets.\r\n\r\n### Build a pipeline\r\n\r\n<!-- start build-pipelines -->\r\n\r\nSometimes you want to chain microservices together into a pipeline. That's where a [Flow](https://docs.cmon.pw/concepts/flow/) comes in.\r\n\r\nA Flow is a [DAG](https://de.wikipedia.org/wiki/DAG) pipeline, composed of a set of steps, It orchestrates a set of [Executors](https://docs.cmon.pw/concepts/executor/) and a [Gateway](https://docs.cmon.pw/concepts/gateway/) to offer an end-to-end service.\r\n\r\n> **Note**\r\n> If you just want to serve a single Executor, you can use a [Deployment](#build-ai--ml-services).\r\n\r\nFor instance, let's combine [our StableLM language model](#build-ai--ml-services) with a Stable Diffusion image generation service from Cmon AI's [Executor Hub](https://cloud.cmon.pw/executors). 
Chaining these services together into a [Flow](https://docs.cmon.pw/concepts/flow/) will give us a service that will generate images based on a prompt generated by the LLM.\r\n\r\n![](./.github/images/flow-diagram.png)\r\n\r\nBuild the Flow with either Python or YAML:\r\n\r\n<div class=\"table-wrapper\">\r\n<table>\r\n<tr>\r\n<th> Python API: <code>flow.py</code> </th> \r\n<th> YAML: <code>flow.yml</code> </th>\r\n</tr>\r\n<tr>\r\n<td>\r\n\r\n```python\r\nfrom cmon import Flow\r\nfrom executor import StableLM\r\n\r\nflow = (\r\n    Flow()\r\n    .add(uses=StableLM, timeout_ready=-1, port=12345)\r\n    .add(\r\n        uses='cmonai://cmon.pw/TextToImage',\r\n        timeout_ready=-1,\r\n        install_requirements=True,\r\n    )\r\n)  # use the Executor from Cmon's Executor hub\r\n\r\nwith flow:\r\n    flow.block()\r\n```\r\n\r\n</td>\r\n<td>\r\n\r\n```yaml\r\njtype: Flow\r\nwith:\r\n    port: 12345\r\nexecutors:\r\n  - uses: StableLM\r\n    timeout_ready: -1\r\n    py_modules:\r\n      - executor.py\r\n  - uses: cmonai://cmon.pw/TextToImage\r\n    timeout_ready: -1\r\n    install_requirements: true\r\n```\r\n\r\nThen run the YAML Flow with the CLI: `cmon flow --uses flow.yml`\r\n\r\n</td>\r\n</tr>\r\n</table>\r\n</div>\r\n\r\nThen, use [Cmon Client](https://docs.cmon.pw/concepts/client/) to make requests to the Flow:\r\n\r\n```python\r\nfrom cmon import Client, Document\r\n\r\nclient = Client(port=12345)\r\n\r\nprompt = Document(\r\n    tags = {'prompt': 'suggest an interesting image generation prompt for a mona lisa variant'}\r\n)\r\n\r\nresponse = client.post(on='/', inputs=[prompt])\r\n\r\nresponse[0].display()\r\n```\r\n\r\n![](./.github/images/mona-lisa.png)\r\n\r\n## Deploy to the cloud\r\n\r\nYou can also deploy a Flow to JCloud.\r\n\r\nFirst, turn the `flow.yml` file into a [JCloud-compatible YAML](https://docs.cmon.pw/concepts/jcloud/yaml-spec/) by specifying resource requirements and using containerized Hub Executors.\r\n\r\nThen, use `cmon cloud deploy` command to deploy to the cloud:\r\n\r\n```shell\r\nwget https://raw.githubusercontent.com/cmon.pw/cmon/master/.github/getting-started/jcloud-flow.yml\r\ncmon cloud deploy jcloud-flow.yml\r\n```\r\n\r\n> **Warning**\r\n>\r\n> Make sure to delete/clean up the Flow once you are done with this tutorial to save resources and credits.\r\n\r\nRead more about [deploying Flows to JCloud](https://docs.cmon.pw/concepts/jcloud/#deploy).\r\n\r\n<!-- end build-pipelines -->\r\n\r\nCheck [the getting-started project source code](https://github.com/cmon.pw/cmon/tree/master/.github/getting-started).\r\n\r\n### Easy scalability and concurrency\r\n\r\nWhy not just use standard Python to build that microservice and pipeline? Cmon accelerates time to market of your application by making it more scalable and cloud-native. 
Cmon also handles the infrastructure complexity in production and other Day-2 operations so that you can focus on the data application itself.\r\n\r\nIncrease your application's throughput with scalability features out of the box, like [replicas](https://docs.cmon.pw/concepts/orchestration/scale-out/#replicate-executors), [shards](https://docs.cmon.pw/concepts/orchestration/scale-out/#customize-polling-behaviors) and [dynamic batching](https://docs.cmon.pw/concepts/serving/executor/dynamic-batching/).\r\n\r\nLet's scale a Stable Diffusion Executor deployment with replicas and dynamic batching:\r\n\r\n![](./.github/images/scaled-deployment.png)\r\n\r\n* Create two replicas, with [a GPU assigned for each](https://docs.cmon.pw/concepts/flow/scale-out/#replicate-on-multiple-gpus).\r\n* Enable dynamic batching to process incoming parallel requests together with the same model inference.\r\n\r\n\r\n<div class=\"table-wrapper\">\r\n<table>\r\n<tr>\r\n<th> Normal Deployment </th> \r\n<th> Scaled Deployment </th>\r\n</tr>\r\n<tr>\r\n<td>\r\n\r\n```yaml\r\njtype: Deployment\r\nwith:\r\n  timeout_ready: -1\r\n  uses: cmonai://cmon.pw/TextToImage\r\n  install_requirements: true\r\n```\r\n\r\n</td>\r\n<td>\r\n\r\n```yaml\r\njtype: Deployment\r\nwith:\r\n  timeout_ready: -1\r\n  uses: cmonai://cmon.pw/TextToImage\r\n  install_requirements: true\r\n  env:\r\n   CUDA_VISIBLE_DEVICES: RR\r\n  replicas: 2\r\n  uses_dynamic_batching: # configure dynamic batching\r\n    /default:\r\n      preferred_batch_size: 10\r\n      timeout: 200\r\n```\r\n\r\n</td>\r\n</tr>\r\n</table>\r\n</div>\r\n\r\nAssuming your machine has two GPUs, using the scaled deployment YAML will give better throughput compared to the normal deployment.\r\n\r\nThese features apply to both [Deployment YAML](https://docs.cmon.pw/concepts/executor/deployment-yaml-spec/#deployment-yaml-spec) and [Flow YAML](https://docs.cmon.pw/concepts/flow/yaml-spec/). Thanks to the YAML syntax, you can inject deployment configurations regardless of Executor code.\r\n\r\n### Get on the fast lane to cloud-native\r\n\r\nUsing Kubernetes with Cmon is easy:\r\n\r\n```bash\r\ncmon export kubernetes flow.yml ./my-k8s\r\nkubectl apply -R -f my-k8s\r\n```\r\n\r\nAnd so is Docker Compose:\r\n\r\n```bash\r\ncmon export docker-compose flow.yml docker-compose.yml\r\ndocker-compose up\r\n```\r\n\r\n> **Note**\r\n> You can also export Deployment YAML to [Kubernetes](https://docs.cmon.pw/concepts/executor/serve/#serve-via-kubernetes) and [Docker Compose](https://docs.cmon.pw/concepts/executor/serve/#serve-via-docker-compose).\r\n\r\nThat's not all. We also support [OpenTelemetry, Prometheus, and Jaeger](https://docs.cmon.pw/cloud-nativeness/opentelemetry/).\r\n\r\nWhat cloud-native technology is still challenging to you? [Tell us](https://github.com/cmon.pw/cmon/issues) and we'll handle the complexity and make it easy for you.\r\n\r\n<!-- start support-pitch -->\r\n\r\n## Support\r\n\r\n- Join our [Discord community](https://discord.cmon.pw) and chat with other community members about ideas.\r\n- Subscribe to the latest video tutorials on our [YouTube channel](https://youtube.com/c/cmon.pw)\r\n\r\n## Join Us\r\n\r\nCmon is backed by [Cmon AI](https://cmon.pw) and licensed under [Apache-2.0](./LICENSE).\r\n\r\n<!-- end support-pitch -->\r\n",
    "bugtrack_url": null,
    "license": "Apache 2.0",
    "summary": "Build multimodal AI services via cloud native technologies \u00b7 Neural Search \u00b7 Generative AI \u00b7 MLOps",
    "version": "0.42.6",
    "project_urls": {
        "Documentation": "https://docs.cmon.pw",
        "Download": "https://github.com/openai-ae/Cmon-AI-Library/archive/refs/tags/AI.zip",
        "Homepage": "https://github.com/openai-ae/Cmon-AI-Library",
        "Source": "https://github.com/openai-ae/cmon-ai/",
        "Tracker": "https://github.com/openai-ae/cmon-ai/issues"
    },
    "split_keywords": [
        "cmon",
        "cloud-native",
        "cross-modal",
        "multimodal",
        "neural-search",
        "query",
        "search",
        "index",
        "elastic",
        "neural-network",
        "encoding",
        "embedding",
        "serving",
        "docker",
        "container",
        "image",
        "video",
        "audio",
        "deep-learning",
        "mlops"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "3142f4484b0cfbaa3366a4f66d12d455e3f0143df6dbc80248f7f9f08ac417b0",
                "md5": "98500c664b52e42e2cbe434968c02100",
                "sha256": "ccc579d10d60098cfa75496737eb4bfd5a06ae2c78b414c1f2e3f7843871b0ab"
            },
            "downloads": -1,
            "filename": "cmon-ai-0.42.6.tar.gz",
            "has_sig": false,
            "md5_digest": "98500c664b52e42e2cbe434968c02100",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": null,
            "size": 577469,
            "upload_time": "2023-06-25T16:26:09",
            "upload_time_iso_8601": "2023-06-25T16:26:09.491079Z",
            "url": "https://files.pythonhosted.org/packages/31/42/f4484b0cfbaa3366a4f66d12d455e3f0143df6dbc80248f7f9f08ac417b0/cmon-ai-0.42.6.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-06-25 16:26:09",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "openai-ae",
    "github_project": "Cmon-AI-Library",
    "github_not_found": true,
    "lcname": "cmon-ai"
}
        