<!--
SPDX-FileCopyrightText: Copyright (c) 2024-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

[License: Apache 2.0](https://opensource.org/licenses/Apache-2.0)
[Latest Release](https://github.com/ai-dynamo/dynamo/releases/latest)
[Discord](https://discord.gg/D92uqZRjCZ)
[DeepWiki](https://deepwiki.com/ai-dynamo/dynamo)
| **[Roadmap](https://github.com/ai-dynamo/dynamo/issues/762)** | **[Documentation](https://docs.nvidia.com/dynamo/latest/index.html)** | **[Examples](https://github.com/ai-dynamo/dynamo/tree/main/examples)** | **[Design Proposals](https://github.com/ai-dynamo/enhancements)** |
# NVIDIA Dynamo
High-throughput, low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments.
## Latest News
* [08/05] Deploy `openai/gpt-oss-120b` with disaggregated serving on NVIDIA Blackwell GPUs using Dynamo [➡️ link](./components/backends/trtllm/gpt-oss.md)
## The Era of Multi-GPU, Multi-Node
<p align="center">
<img src="./docs/images/frontpage-gpu-vertical.png" alt="Multi Node Multi-GPU topology" width="600" />
</p>
Large language models are quickly outgrowing the memory and compute budget of any single GPU. Tensor-parallelism solves the capacity problem by spreading each layer across many GPUs—and sometimes many servers—but it creates a new one: how do you coordinate those shards, route requests, and share KV cache fast enough to feel like one accelerator? This orchestration gap is exactly what NVIDIA Dynamo is built to close.
Dynamo is inference-engine agnostic (it supports TRT-LLM, vLLM, SGLang, and others) and provides LLM-specific capabilities such as:
- **Disaggregated prefill & decode inference** – Maximizes GPU throughput and lets you trade off throughput against latency
- **Dynamic GPU scheduling** – Optimizes performance based on fluctuating demand
- **LLM-aware request routing** – Eliminates unnecessary KV cache re-computation
- **Accelerated data transfer** – Reduces inference response time using NIXL
- **KV cache offloading** – Leverages multiple memory hierarchies for higher system throughput
<p align="center">
<img src="./docs/images/frontpage-architecture.png" alt="Dynamo architecture" width="600" />
</p>
## Framework Support Matrix
| Feature | vLLM | SGLang | TensorRT-LLM |
|---------|----------------------|----------------------------|----------------------------------------|
| [**Disaggregated Serving**](/docs/architecture/disagg_serving.md) | ✅ | ✅ | ✅ |
| [**Conditional Disaggregation**](/docs/architecture/disagg_serving.md#conditional-disaggregation) | 🚧 | 🚧 | 🚧 |
| [**KV-Aware Routing**](/docs/architecture/kv_cache_routing.md) | ✅ | ✅ | ✅ |
| [**Load Based Planner**](/docs/architecture/load_planner.md) | 🚧 | 🚧 | 🚧 |
| [**SLA-Based Planner**](/docs/architecture/sla_planner.md) | ✅ | ✅ | 🚧 |
| [**KVBM**](/docs/architecture/kvbm_architecture.md) | 🚧 | 🚧 | 🚧 |
To learn more about each framework and its capabilities, check out its README:
- **[vLLM](components/backends/vllm/README.md)**
- **[SGLang](components/backends/sglang/README.md)**
- **[TensorRT-LLM](components/backends/trtllm/README.md)**
Built in Rust for performance and Python for extensibility, Dynamo is fully open source and driven by a transparent, OSS-first development approach.
# Installation
The following examples require a few system-level packages.
We recommend Ubuntu 24.04 with an x86_64 CPU; see [docs/support_matrix.md](docs/support_matrix.md).
## 1. Initial setup
The Dynamo team recommends the `uv` Python package manager, although any package manager works. Install uv:
```
curl -LsSf https://astral.sh/uv/install.sh | sh
```
### Install etcd and NATS (required)
To coordinate across a data center, Dynamo relies on etcd and NATS. To run Dynamo locally, these need to be available.
- [etcd](https://etcd.io/) can be run directly as `./etcd`.
- [NATS](https://nats.io/) needs JetStream enabled: `nats-server -js`.
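
If you run the binaries directly, a minimal local sketch looks like this (ports are the defaults; the `-m 8222` monitoring flag and the health checks are optional and only shown for verification):

```bash
# etcd: serves its client API on 127.0.0.1:2379 by default
./etcd &

# NATS with JetStream enabled; -m 8222 exposes the HTTP monitoring endpoint
nats-server -js -m 8222 &

# Optional health checks
curl http://127.0.0.1:2379/health     # etcd
curl http://127.0.0.1:8222/healthz    # NATS
```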
To quickly set up etcd and NATS, you can also run:
```
# At the root of the repository:
docker compose -f deploy/docker-compose.yml up -d
```
## 2. Select an engine
We publish Python wheels specialized for each of our supported engines: vllm, sglang, trtllm, and llama.cpp. The examples that follow use SGLang; continue reading for other engines.
```
uv venv venv
source venv/bin/activate
uv pip install pip
# Choose one
uv pip install "ai-dynamo[sglang]" #replace with [vllm], [trtllm], etc.
```
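To confirm the wheel installed correctly, the worker module for your chosen engine should load and print its CLI options (shown here for SGLang; substitute the engine you installed):

```bash
python -m dynamo.sglang.worker --help
```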
## 3. Run Dynamo
### Running an LLM API server
Dynamo provides a simple way to spin up a local set of inference components including:
- **OpenAI-Compatible Frontend** – High-performance, OpenAI-compatible HTTP API server written in Rust.
- **Basic and KV-Aware Router** – Routes and load-balances traffic to a set of workers.
- **Workers** – A set of pre-configured LLM serving engines.
```
# Start an OpenAI compatible HTTP server, a pre-processor (prompt templating and tokenization) and a router.
# Pass the TLS certificate and key paths to use HTTPS instead of HTTP.
python -m dynamo.frontend --http-port 8080 [--tls-cert-path cert.pem] [--tls-key-path key.pem]
# Start the SGLang engine, connecting to NATS and etcd to receive requests. You can run several of these,
# both for the same model and for multiple models. The frontend node will discover them.
python -m dynamo.sglang.worker --model deepseek-ai/DeepSeek-R1-Distill-Llama-8B --skip-tokenizer-init
```
#### Send a Request
```bash
curl localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"messages": [
{
"role": "user",
"content": "Hello, how are you?"
}
],
"stream":false,
"max_tokens": 300
}' | jq
```
Rerun with `curl -N` and change `stream` in the request to `true` to get the responses as soon as the engine issues them.
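For example, the streaming version of the same request (identical except for `-N` and `"stream": true`) looks like:

```bash
curl -N localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "messages": [
      {
        "role": "user",
        "content": "Hello, how are you?"
      }
    ],
    "stream": true,
    "max_tokens": 300
  }'
```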
### Deploying Dynamo
- Follow the [Quickstart Guide](docs/guides/dynamo_deploy/README.md) to deploy on Kubernetes.
- Check out [Backends](components/backends) to deploy various workflow configurations (e.g. SGLang with router, vLLM with disaggregated serving, etc.)
- Run some [Examples](examples) to learn about building components in Dynamo and exploring various integrations.
# Engines
Dynamo is designed to be inference engine agnostic. To use any engine with Dynamo, NATS and etcd need to be running, along with a Dynamo frontend (`python -m dynamo.frontend [--interactive]`).
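In practice the pattern is the same for every engine: start the infrastructure, start a frontend, then start one or more engine workers. A minimal sketch, reusing the commands shown earlier:

```bash
# 1. Infrastructure: etcd + NATS (from the repo root)
docker compose -f deploy/docker-compose.yml up -d

# 2. OpenAI-compatible frontend (HTTP on port 8080)
python -m dynamo.frontend --http-port 8080

# 3. One or more engine workers; see the per-engine commands below
```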
## vLLM
```
uv pip install ai-dynamo[vllm]
```
Run the backend/worker like this:
```
python -m dynamo.vllm --help
```
vLLM attempts to allocate enough KV cache for the full context length at startup. If that does not fit in your available memory, pass `--context-length <value>`.
To specify which GPUs to use, set the environment variable `CUDA_VISIBLE_DEVICES`.
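Putting these together, a single-GPU launch might look like the following sketch (the model name is illustrative, and flags other than `--context-length` mirror the SGLang example above; check `python -m dynamo.vllm --help` for the exact set):

```bash
# Pin the worker to GPU 0 and cap the KV cache at a 16k-token context
CUDA_VISIBLE_DEVICES=0 python -m dynamo.vllm \
    --model deepseek-ai/DeepSeek-R1-Distill-Llama-8B \
    --context-length 16384
```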
## SGLang
```
# Install libnuma
apt install -y libnuma-dev
uv pip install ai-dynamo[sglang]
```
Run the backend/worker like this:
```
python -m dynamo.sglang.worker --help
```
You can pass any SGLang flags directly to this worker; see https://docs.sglang.ai/advanced_features/server_arguments.html, which also covers running on multiple GPUs.
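For example, a tensor-parallel launch across two GPUs might look like this sketch (`--tp-size` is a standard SGLang server argument; the model name is illustrative):

```bash
# Shard the model across two GPUs using SGLang tensor parallelism
python -m dynamo.sglang.worker \
    --model deepseek-ai/DeepSeek-R1-Distill-Llama-8B \
    --tp-size 2
```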
## TensorRT-LLM
It is recommended to use [NGC PyTorch Container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) for running the TensorRT-LLM engine.
> [!Note]
> Ensure that you select a PyTorch container image version that matches the version of TensorRT-LLM you are using.
> For example, if you are using `tensorrt-llm==1.0.0rc6`, use the PyTorch container image version `25.06`.
> To find the correct PyTorch container version for your desired `tensorrt-llm` release, visit the [TensorRT-LLM Dockerfile.multi](https://github.com/NVIDIA/TensorRT-LLM/blob/main/docker/Dockerfile.multi) on GitHub. Switch to the branch that matches your `tensorrt-llm` version, and look for the `BASE_TAG` line to identify the recommended PyTorch container tag.
> [!Important]
> Launch the container with the following additional settings: `--shm-size=1g --ulimit memlock=-1`
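
A typical launch, assuming the `25.06` image from the note above, might look like this sketch (the tag and `--network host` are illustrative choices; host networking is just one convenient way for the worker to reach NATS and etcd running on the host):

```bash
# Illustrative: pick the tag that matches your tensorrt-llm version
docker run --rm -it --gpus all \
    --shm-size=1g --ulimit memlock=-1 \
    --network host \
    nvcr.io/nvidia/pytorch:25.06-py3
```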
### Install prerequisites
```
# Optional step: Only required for Blackwell and Grace Hopper
uv pip install torch==2.7.1 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
# Required until the trtllm version is bumped to include this pinned dependency itself
uv pip install "cuda-python>=12,<13"
sudo apt-get -y install libopenmpi-dev
```
> [!Tip]
> You can learn more about these prerequisites and known issues with pip-based TensorRT-LLM installation [here](https://nvidia.github.io/TensorRT-LLM/installation/linux.html).
### After installing the prerequisites above, install Dynamo
```
uv pip install ai-dynamo[trtllm]
```
Run the backend/worker like this:
```
python -m dynamo.trtllm --help
```
To specify which GPUs to use, set the environment variable `CUDA_VISIBLE_DEVICES`.
# Developing Locally
## 1. Install libraries
**Ubuntu:**
```
sudo apt install -y build-essential libhwloc-dev libudev-dev pkg-config libclang-dev protobuf-compiler python3-dev cmake
```
**macOS:**
- [Homebrew](https://brew.sh/)
```
# if brew is not installed on your system, install it
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```
- [Xcode](https://developer.apple.com/xcode/)
```
brew install cmake protobuf
## Check that Metal is accessible
xcrun -sdk macosx metal
```
If Metal is accessible, you should see an error like `metal: error: no input files`, which confirms it is installed correctly.
## 2. Install Rust
```
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
```
## 3. Create a Python virtual env
Follow the [uv installation](https://docs.astral.sh/uv/#installation) guide to install uv if you don't already have it. Once uv is installed, create a virtual environment and activate it.
- Install uv
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
- Create a virtual environment
```bash
uv venv dynamo
source dynamo/bin/activate
```
## 4. Install build tools
```
uv pip install pip maturin
```
[Maturin](https://github.com/PyO3/maturin) is the Rust<->Python bindings build tool.
## 5. Build the Rust bindings
```
cd lib/bindings/python
maturin develop --uv
```
## 6. Install the wheel
```
cd $PROJECT_ROOT
uv pip install .
# For development, use
export PYTHONPATH="${PYTHONPATH}:$(pwd)/components/frontend/src:$(pwd)/components/planner/src:$(pwd)/components/backends/vllm/src:$(pwd)/components/backends/sglang/src:$(pwd)/components/backends/trtllm/src:$(pwd)/components/backends/llama_cpp/src:$(pwd)/components/backends/mocker/src"
```
> [!Note]
> Editable (`-e`) does not work because the `dynamo` package is split over multiple directories, one per backend.
You should now be able to run `python -m dynamo.frontend`.
Remember that NATS and etcd must be running (see earlier).
Set the environment variable `DYN_LOG` to adjust the logging level; for example, `export DYN_LOG=debug`. It has the same syntax as `RUST_LOG`.
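Because the syntax matches `RUST_LOG`, you can also combine a default level with more verbose per-target filters; the target name below is illustrative:

```bash
# Global level
export DYN_LOG=debug

# Or a default level plus a more verbose target (RUST_LOG-style filter syntax)
export DYN_LOG=info,dynamo_runtime=debug

python -m dynamo.frontend
```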
If you use VS Code or Cursor, we provide a `.devcontainer` folder built on [Microsoft's Dev Containers extension](https://code.visualstudio.com/docs/devcontainers/containers). See the [README](.devcontainer/README.md) for details.