azure-switchboard

Name: azure-switchboard
Version: 2025.8.0
Summary: Batteries-included loadbalancing client for Azure OpenAI
Upload time: 2025-08-07 10:23:34
Requires Python: >=3.10
License: MIT
Keywords: ai, azure, litellm, llm, loadbalancing, openai
# Azure Switchboard

Batteries-included, coordination-free client loadbalancing for Azure OpenAI.

```bash
uv add azure-switchboard
```

[![PyPI - Version](https://img.shields.io/pypi/v/azure-switchboard)](https://pypi.org/project/azure-switchboard/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![CI](https://github.com/arini-ai/azure-switchboard/actions/workflows/ci.yaml/badge.svg?branch=master)](https://github.com/arini-ai/azure-switchboard/actions/workflows/ci.yaml)

## Overview

`azure-switchboard` is a Python 3 asyncio library that provides an intelligent, API-compatible client loadbalancer for Azure OpenAI. You instantiate a Switchboard client with a set of deployments, and the client distributes your chat completion requests across the available deployments using the [power of two random choices](https://www.eecs.harvard.edu/~michaelm/postscripts/handbook2001.pdf) method. In this sense, it functions as a lightweight service mesh between your application and Azure OpenAI. The basic idea is inspired by [ServiceRouter](https://www.usenix.org/system/files/osdi23-saokar.pdf).
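
For intuition, here is a minimal, standalone sketch of the selection idea. `Candidate` and its `utilization` field are illustrative stand-ins, not the library's types:

```python
import random
from dataclasses import dataclass


@dataclass
class Candidate:
    """Toy stand-in for a deployment; `utilization` mimics the tracked TPM/RPM ratio."""

    name: str
    utilization: float


def two_random_choices(pool: list[Candidate]) -> Candidate:
    # Sample two candidates uniformly at random and keep the less
    # utilized one. This requires no coordination and no full scan of
    # the pool, yet strongly biases load toward underutilized deployments.
    a, b = random.sample(pool, 2)
    return a if a.utilization <= b.utilization else b


pool = [Candidate("east", 0.7), Candidate("west", 0.2), Candidate("south", 0.4)]
print(two_random_choices(pool).name)  # most often "west"
```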

## Features

- **API Compatibility**: `Switchboard.create` is a transparently-typed drop-in proxy for `OpenAI.chat.completions.create`.
- **Coordination-Free**: The default Two Random Choices algorithm does not require coordination between client instances to achieve excellent load distribution characteristics.
- **Utilization-Aware**: TPM/RPM ratelimit utilization is tracked per model per deployment for use during selection.
- **Batteries Included**:
  - **Session Affinity**: Provide a `session_id` to route requests in the same session to the same deployment, optimizing for prompt caching.
  - **Automatic Failover**: The client automatically retries on request failure, with optional fallback to OpenAI by providing an `OpenAIDeployment` in `deployments`. The retry policy can be customized by passing a tenacity `AsyncRetrying` instance to `failover_policy`, as shown below.
  - **Pluggable Selection**: Custom selection algorithms can be provided by passing a callable to the `selector` parameter on the Switchboard constructor (see the sketch after this list).
  - **OpenTelemetry Integration**: Comprehensive metrics and instrumentation for monitoring deployment health and utilization.
- **Lightweight**: sub-400 LOC implementation with minimal dependencies: `openai`, `tenacity`, `wrapt`, and `opentelemetry-api`. Adds <1ms of overhead per request.
- **100% Test Coverage**: Comprehensive test suite with pytest.
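
The failover policy and selection algorithm above are both set at construction time. A minimal sketch: the `failover_policy` default comes from the configuration reference below, while the selector signature (a sequence of candidate deployments in, one deployment out) is a guess, so check the source for the exact protocol:

```python
import random

from tenacity import AsyncRetrying, stop_after_attempt

from azure_switchboard import AzureDeployment, Model, Switchboard


def pick_random(candidates):
    # Degenerate custom selector: uniform random choice, ignoring
    # utilization. The assumed signature may differ from the real one.
    return random.choice(candidates)


switchboard = Switchboard(
    deployments=[
        AzureDeployment(
            name="east",
            endpoint="https://example-east.openai.azure.com/",  # placeholder endpoint
            api_key="<azure-api-key>",  # placeholder key
            models=[Model(name="gpt-4o-mini")],
        ),
    ],
    selector=pick_random,
    # One attempt more than the default AsyncRetrying(stop=stop_after_attempt(2)).
    failover_policy=AsyncRetrying(stop=stop_after_attempt(3)),
)
```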

## Runnable Example

```python
#!/usr/bin/env python3
#
# To run this, use:
#   uv run --env-file .env tools/readme_example.py
#
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "azure-switchboard",
# ]
# ///

import asyncio
import os

from azure_switchboard import AzureDeployment, Model, OpenAIDeployment, Switchboard

azure_openai_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
azure_openai_api_key = os.getenv("AZURE_OPENAI_API_KEY")
openai_api_key = os.getenv("OPENAI_API_KEY", None)

deployments = []
if azure_openai_endpoint and azure_openai_api_key:
    # Create three deployments; reusing the endpoint
    # is fine for the purposes of this demo.
    for name in ("east", "west", "south"):
        deployments.append(
            AzureDeployment(
                name=name,
                endpoint=azure_openai_endpoint,
                api_key=azure_openai_api_key,
                models=[Model(name="gpt-4o-mini")],
            )
        )

if openai_api_key:
    # We can use OpenAI as a fallback deployment;
    # it will pick up the API key from the environment.
    deployments.append(OpenAIDeployment())


async def main():
    async with Switchboard(deployments=deployments) as sb:
        print("Basic functionality:")
        await basic_functionality(sb)

        print("Session affinity (should warn):")
        await session_affinity(sb)


async def basic_functionality(switchboard: Switchboard):
    # Make a completion request (non-streaming)
    response = await switchboard.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello, world!"}],
    )

    print("completion:", response.choices[0].message.content)

    # Make a streaming completion request
    stream = await switchboard.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello, world!"}],
        stream=True,
    )

    print("streaming: ", end="")
    async for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)

    print()


async def session_affinity(switchboard: Switchboard):
    session_id = "anything"

    # First message will select a random healthy
    # deployment and associate it with the session_id
    r = await switchboard.create(
        session_id=session_id,
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Who won the World Series in 2020?"}],
    )

    d1 = switchboard.select_deployment(model="gpt-4o-mini", session_id=session_id)
    print("deployment 1:", d1)
    print("response 1:", r.choices[0].message.content)

    # Follow-up requests with the same session_id will route to the same deployment
    r2 = await switchboard.create(
        session_id=session_id,
        model="gpt-4o-mini",
        messages=[
            {"role": "user", "content": "Who won the World Series in 2020?"},
            {"role": "assistant", "content": r.choices[0].message.content},
            {"role": "user", "content": "Who did they beat?"},
        ],
    )

    print("response 2:", r2.choices[0].message.content)

    # Simulate a failure by marking down the deployment
    d1.models["gpt-4o-mini"].cooldown()

    # A new deployment will be selected for this session_id
    r3 = await switchboard.create(
        session_id=session_id,
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Who won the World Series in 2021?"}],
    )

    d2 = switchboard.select_deployment(model="gpt-4o-mini", session_id=session_id)
    print("deployment 2:", d2)
    print("response 3:", r3.choices[0].message.content)
    assert d2 != d1


if __name__ == "__main__":
    asyncio.run(main())
```

## Benchmarks

```bash
just bench
uv run --env-file .env tools/bench.py -v -r 1000 -d 10 -e 500
Distributing 1000 requests across 10 deployments
Max inflight requests: 1000

Request 500/1000 completed
Utilization Distribution:
0.000 - 0.200 |   0
0.200 - 0.400 |  10 ..............................
0.400 - 0.600 |   0
0.600 - 0.800 |   0
0.800 - 1.000 |   0
Avg utilization: 0.339 (0.332 - 0.349)
Std deviation: 0.006

{
    'bench_0': {'gpt-4o-mini': {'util': 0.361, 'tpm': '10556/30000', 'rpm': '100/300'}},
    'bench_1': {'gpt-4o-mini': {'util': 0.339, 'tpm': '9819/30000', 'rpm': '100/300'}},
    'bench_2': {'gpt-4o-mini': {'util': 0.333, 'tpm': '9405/30000', 'rpm': '97/300'}},
    'bench_3': {'gpt-4o-mini': {'util': 0.349, 'tpm': '10188/30000', 'rpm': '100/300'}},
    'bench_4': {'gpt-4o-mini': {'util': 0.346, 'tpm': '10210/30000', 'rpm': '99/300'}},
    'bench_5': {'gpt-4o-mini': {'util': 0.341, 'tpm': '10024/30000', 'rpm': '99/300'}},
    'bench_6': {'gpt-4o-mini': {'util': 0.343, 'tpm': '10194/30000', 'rpm': '100/300'}},
    'bench_7': {'gpt-4o-mini': {'util': 0.352, 'tpm': '10362/30000', 'rpm': '102/300'}},
    'bench_8': {'gpt-4o-mini': {'util': 0.35, 'tpm': '10362/30000', 'rpm': '102/300'}},
    'bench_9': {'gpt-4o-mini': {'util': 0.365, 'tpm': '10840/30000', 'rpm': '101/300'}}
}

Utilization Distribution:
0.000 - 0.100 |   0
0.100 - 0.200 |   0
0.200 - 0.300 |   0
0.300 - 0.400 |  10 ..............................
0.400 - 0.500 |   0
0.500 - 0.600 |   0
0.600 - 0.700 |   0
0.700 - 0.800 |   0
0.800 - 0.900 |   0
0.900 - 1.000 |   0
Avg utilization: 0.348 (0.333 - 0.365)
Std deviation: 0.009

Distribution overhead: 926.14ms
Average response latency: 5593.77ms
Total latency: 17565.37ms
Requests per second: 1079.75
Overhead per request: 0.93ms
```

Distribution overhead scales ~linearly with the number of deployments.

## Configuration Reference

### switchboard.Model Parameters

| Parameter          | Description                                           | Default       |
| ------------------ | ----------------------------------------------------- | ------------- |
| `name`             | Configured model name, e.g. "gpt-4o" or "gpt-4o-mini" | Required      |
| `tpm`              | Configured TPM rate limit                             | 0 (unlimited) |
| `rpm`              | Configured RPM rate limit                             | 0 (unlimited) |
| `default_cooldown` | Default cooldown period in seconds                    | 10.0          |
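
As a concrete example, the limits used in the benchmark above (30000 TPM and 300 RPM per deployment) correspond to a model configured like this:

```python
from azure_switchboard import Model

# gpt-4o-mini capped at 30k tokens/min and 300 requests/min, with the
# default 10-second cooldown applied when the model is marked down.
mini = Model(name="gpt-4o-mini", tpm=30_000, rpm=300, default_cooldown=10.0)
```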

### switchboard.AzureDeployment Parameters

| Parameter     | Description                                   | Default      |
| ------------- | --------------------------------------------- | ------------ |
| `name`        | Unique identifier for the deployment          | Required     |
| `endpoint`    | Azure OpenAI endpoint URL                     | Required     |
| `api_key`     | Azure OpenAI API key                          | Required     |
| `api_version` | Azure OpenAI API version                      | "2024-10-21" |
| `timeout`     | Default timeout in seconds                    | 600.0        |
| `models`      | List of Models configured for this deployment | Required     |

### switchboard.Switchboard Parameters

| Parameter          | Description                         | Default                                     |
| ------------------ | ----------------------------------- | ------------------------------------------- |
| `deployments`      | List of Deployment config objects   | Required                                    |
| `selector`         | Selection algorithm                 | `two_random_choices`                        |
| `failover_policy`  | Policy for handling failed requests | `AsyncRetrying(stop=stop_after_attempt(2))` |
| `ratelimit_window` | Ratelimit window in seconds         | 60.0                                        |
| `max_sessions`     | Maximum number of sessions          | 1024                                        |
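
Putting the tables together, a constructor call that spells out the documented defaults looks roughly like this (endpoint and key are placeholders):

```python
from tenacity import AsyncRetrying, stop_after_attempt

from azure_switchboard import AzureDeployment, Model, Switchboard

switchboard = Switchboard(
    deployments=[
        AzureDeployment(
            name="east",
            endpoint="https://example-east.openai.azure.com/",  # placeholder
            api_key="<azure-api-key>",  # placeholder
            api_version="2024-10-21",  # documented default
            timeout=600.0,  # documented default
            models=[Model(name="gpt-4o-mini", tpm=30_000, rpm=300)],
        ),
    ],
    failover_policy=AsyncRetrying(stop=stop_after_attempt(2)),  # documented default
    ratelimit_window=60.0,  # documented default
    max_sessions=1024,  # documented default
)
```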

## Development

This project uses [uv](https://github.com/astral-sh/uv) for package management and [just](https://github.com/casey/just) for task automation. See the [justfile](https://github.com/arini-ai/azure-switchboard/blob/master/justfile) for available commands.

```bash
git clone https://github.com/arini-ai/azure-switchboard
cd azure-switchboard

just install
```

### Running tests

```bash
just test
```

### Release

This library uses CalVer for versioning. On push to master, if tests pass, a package is automatically built, released, and uploaded to PyPI.

Locally, the package can be built with uv:

```bash
uv build
```

### OpenTelemetry Integration

The library provides instrumentation for monitoring deployment health and performance metrics:

```bash
(azure-switchboard) .venv > just otel-run
uv run --env-file .env opentelemetry-instrument python tools/bench.py -r 5 -d 3
Distributing 5 requests across 3 deployments
Max inflight requests: 1000

Distribution overhead: 10.53ms
Average response latency: 2164.03ms
Total latency: 3869.06ms
Requests per second: 475.03
Overhead per request: 2.11ms
{
    "resource_metrics": [
        {
            "resource": {
                "attributes": {
                    "telemetry.sdk.language": "python",
                    "telemetry.sdk.name": "opentelemetry",
                    "telemetry.sdk.version": "1.31.0",
                    "service.name": "switchboard",
                    "telemetry.auto.version": "0.52b0"
                },
                "schema_url": ""
            },
            "scope_metrics": [
                {
                    "scope": {
                        "name": "azure_switchboard.deployment",
                        "version": "",
                        "schema_url": "",
                        "attributes": null
                    },
                    "metrics": [
                        {
                            "name": "model_utilization",
                            "description": "Current utilization of a model deployment (0-1)",
                            "unit": "percent",
                            "data": {
                                "data_points": [
                                    {
                                        "attributes": {
                                            "model": "gpt-4o-mini"
                                        },
                                        "start_time_unix_nano": null,
                                        "time_unix_nano": 1742461487509982000,
                                        "value": 0.008,
                                        "exemplars": []
...
```
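
Because the library depends only on `opentelemetry-api`, its metrics are no-ops until an SDK is configured, either via `opentelemetry-instrument` as above or manually. A minimal manual setup, assuming the standard `opentelemetry-sdk` package is installed (it is not one of the library's dependencies):

```python
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    PeriodicExportingMetricReader,
)

# Dump accumulated metrics (e.g. model_utilization) to stdout every
# 10 seconds. Run this early in application startup, before heavy use
# of the Switchboard client.
reader = PeriodicExportingMetricReader(
    ConsoleMetricExporter(), export_interval_millis=10_000
)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))
```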

## Contributing

1. Fork/clone repo
2. Make changes
3. Run tests with `just test`
4. Lint with `just lint`
5. Commit and make a PR

## License

MIT

            
