ai-dynamo

Name: ai-dynamo
Version: 0.3.2
Summary: Distributed Inference Framework
Upload time: 2025-07-18 14:21:25
Requires Python: >=3.10
License: Apache-2.0
Keywords: distributed, dynamo, genai, inference, llm, nvidia
            <!--
SPDX-FileCopyrightText: Copyright (c) 2024-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
![Dynamo banner](./docs/images/frontpage-banner.png)

[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![GitHub Release](https://img.shields.io/github/v/release/ai-dynamo/dynamo)](https://github.com/ai-dynamo/dynamo/releases/latest)
[![Discord](https://dcbadge.limes.pink/api/server/D92uqZRjCZ?style=flat)](https://discord.gg/D92uqZRjCZ)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/ai-dynamo/dynamo)

| **[Roadmap](https://github.com/ai-dynamo/dynamo/issues/762)** | **[Documentation](https://docs.nvidia.com/dynamo/latest/index.html)** | **[Examples](https://github.com/ai-dynamo/examples)** | **[Design Proposals](https://github.com/ai-dynamo/enhancements)** |

### The Era of Multi-Node, Multi-GPU

![GPU Evolution](./docs/images/frontpage-gpu-evolution.png)


Large language models are quickly outgrowing the memory and compute budget of any single GPU. Tensor-parallelism solves the capacity problem by spreading each layer across many GPUs—and sometimes many servers—but it creates a new one: how do you coordinate those shards, route requests, and share KV cache fast enough to feel like one accelerator? This orchestration gap is exactly what NVIDIA Dynamo is built to close.

![Multi Node Multi-GPU topology](./docs/images/frontpage-gpu-vertical.png)



### Introducing NVIDIA Dynamo

NVIDIA Dynamo is a high-throughput, low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments. Dynamo is inference-engine agnostic (supporting TRT-LLM, vLLM, SGLang, and others) and provides LLM-specific capabilities such as:

![Dynamo architecture](./docs/images/frontpage-architecture.png)

- **Disaggregated prefill & decode inference** – Maximizes GPU throughput and lets you trade off between throughput and latency
- **Dynamic GPU scheduling** – Optimizes performance based on fluctuating demand
- **LLM-aware request routing** – Eliminates unnecessary KV cache re-computation
- **Accelerated data transfer** – Reduces inference response time using NIXL
- **KV cache offloading** – Leverages multiple memory hierarchies for higher system throughput

Built in Rust for performance and in Python for extensibility, Dynamo is fully open source and driven by a transparent, OSS-first development approach.



### Installation

The following examples require a few system-level packages.
We recommend Ubuntu 24.04 with an x86_64 CPU. See [docs/support_matrix.md](docs/support_matrix.md) for the full support matrix.

```bash
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install -yq python3-dev python3-pip python3-venv libucx0
python3 -m venv venv
source venv/bin/activate

pip install "ai-dynamo[all]"
```
> [!NOTE]
> To ensure compatibility, please refer to the examples in the release branch or tag that matches the version you installed.
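
As a quick sanity check after installation, you can confirm what was installed and that the `dynamo` CLI used in the examples below is on your `PATH` (a minimal sketch using standard tooling):

```bash
# Show the installed package and its version
pip show ai-dynamo

# The `dynamo` CLI used throughout the examples below should now resolve
which dynamo
```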

### Building the Dynamo Base Image

Although not needed for local development, deploying your Dynamo pipelines to Kubernetes requires building and pushing a Dynamo base image to a container registry. You can use any container registry of your choice, such as:
- Docker Hub (docker.io)
- NVIDIA NGC Container Registry (nvcr.io)
- Any private registry

Here's how to build it:

```bash
./container/build.sh
docker tag dynamo:latest-vllm <your-registry>/dynamo-base:latest-vllm
docker login <your-registry>
docker push <your-registry>/dynamo-base:latest-vllm
```

Notes about builds for specific frameworks:
- For specific details on the `--framework vllm` build, see [here](examples/llm/README.md).
- For specific details on the `--framework tensorrtllm` build, see [here](examples/tensorrt_llm/README.md).

Note about AWS environments:
- If deploying Dynamo in AWS, make sure to build the container with EFA support using the `--make-efa` flag.
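
Putting these options together, a build for a vLLM-based deployment on AWS might look like the following (a sketch combining the flags mentioned above; adjust them to your target framework and environment):

```bash
# Build the vLLM variant of the image with EFA support enabled
./container/build.sh --framework vllm --make-efa
```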

After building, you can use this image by setting the `DYNAMO_IMAGE` environment variable to point to your built image:
```bash
export DYNAMO_IMAGE=<your-registry>/dynamo-base:latest-vllm
```

> [!NOTE]
> We are working on leaner base images that can be built using the targets in the top-level Earthfile.

### Running and Interacting with an LLM Locally

To run a model and interact with it locally, call `dynamo run` with a Hugging Face model. `dynamo run` supports several backends, including `mistralrs`, `sglang`, `vllm`, and `tensorrtllm`.

#### Example Command

```bash
dynamo run out=vllm deepseek-ai/DeepSeek-R1-Distill-Llama-8B
```

```
? User › Hello, how are you?
✔ User · Hello, how are you?
Okay, so I'm trying to figure out how to respond to the user's greeting. They said, "Hello, how are you?" and then followed it with "Hello! I'm just a program, but thanks for asking." Hmm, I need to come up with a suitable reply. ...
```
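
The same pattern works for the other backends by switching the `out=` target, for example (a sketch, assuming the chosen backend and its dependencies are installed):

```bash
# Run the same model through the SGLang backend instead of vLLM
dynamo run out=sglang deepseek-ai/DeepSeek-R1-Distill-Llama-8B
```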

### LLM Serving

Dynamo provides a simple way to spin up a local set of inference components, including:

- **OpenAI-Compatible Frontend** – High-performance, OpenAI-compatible HTTP API server written in Rust.
- **Basic and KV-Aware Router** – Routes and load-balances traffic to a set of workers.
- **Workers** – A set of pre-configured LLM serving engines.

To run a minimal configuration, you can use a pre-configured example.

#### Start Dynamo Distributed Runtime Services

First start the Dynamo Distributed Runtime services:

```bash
docker compose -f deploy/metrics/docker-compose.yml up -d
```
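
Before moving on, you can verify the services came up (a standard Docker Compose status check, not Dynamo-specific):

```bash
# List the containers started by the compose file and their current status
docker compose -f deploy/metrics/docker-compose.yml ps
```
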
#### Start Dynamo LLM Serving Components

Next, serve a minimal configuration with an HTTP server, a basic
round-robin router, and a single worker.

```bash
cd examples/llm
dynamo serve graphs.agg:Frontend -f configs/agg.yaml
```

#### Send a Request

```bash
curl localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "messages": [
    {
        "role": "user",
        "content": "Hello, how are you?"
    }
    ],
    "stream":false,
    "max_tokens": 300
  }' | jq
```
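
Because the frontend is OpenAI-compatible, you can also request a streamed response by flipping the `stream` flag (a sketch of the same request; with streaming enabled the output typically arrives incrementally as server-sent events rather than a single JSON body):

```bash
curl -N localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ],
    "stream": true,
    "max_tokens": 300
  }'
```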

### Local Development

If you use VS Code or Cursor, we provide a `.devcontainer` folder built on [Microsoft's Dev Containers extension](https://code.visualstudio.com/docs/devcontainers/containers). See the [.devcontainer README](.devcontainer/README.md) for instructions.

Otherwise, to develop locally, we recommend working inside the container:

```bash
./container/build.sh
./container/run.sh -it --mount-workspace

cargo build --release
mkdir -p /workspace/deploy/sdk/src/dynamo/sdk/cli/bin
cp /workspace/target/release/http /workspace/deploy/sdk/src/dynamo/sdk/cli/bin
cp /workspace/target/release/llmctl /workspace/deploy/sdk/src/dynamo/sdk/cli/bin
cp /workspace/target/release/dynamo-run /workspace/deploy/sdk/src/dynamo/sdk/cli/bin

uv pip install -e .
export PYTHONPATH=$PYTHONPATH:/workspace/deploy/sdk/src:/workspace/components/planner/src
```
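
After these steps, a quick check confirms the release binaries landed where the SDK expects them (the path follows the copy commands above):

```bash
# Should list the binaries copied above: dynamo-run, http, llmctl
ls /workspace/deploy/sdk/src/dynamo/sdk/cli/bin
```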


#### Conda Environment

Alternatively, you can use a conda environment:

```bash
conda activate <ENV_NAME>

pip install nixl # Or install https://github.com/ai-dynamo/nixl from source

cargo build --release

# To install ai-dynamo-runtime from source
cd lib/bindings/python
pip install .

cd ../../../
pip install ".[all]"

# To test
docker compose -f deploy/metrics/docker-compose.yml up -d
cd examples/llm
dynamo serve graphs.agg:Frontend -f configs/agg.yaml
```

            
