# mini-apigw — Minimal OpenAI‑compatible API Gateway

Package metadata (PyPI):

- Version: 0.0.7
- Requires Python: >=3.10
- Keywords: api-gateway, gateway, api, llm
- Homepage: https://www.github.com/tspspi/mini-apigw
- Author: Thomas Spielauer <pypipackages01@tspi.at>
- Uploaded: 2025-10-22 21:28:44

__WORK IN PROGRESS__

mini-apigw is a small edge gateway that presents an OpenAI‑compatible API surface and routes requests
to multiple LLM and image generation backends. It is designed for __simplicity__ and ease of control: you
configure backends and apps in JSON, set policies and cost limits per app, and the gateway handles
routing, scheduling, usage accounting, optional persistence, trace logging, and admin endpoints.

The main reason I developed this gateway was to handle concurrent access to shared resources. I personally
run machines that host LLMs (Ollama, vLLM) alongside SDXL-based image generation and other software. These
tools compete for GPU resources and are usually not designed to cooperate or arbitrate GPU usage, so the
gateway offers serialization of requests inside _sequence groups_: all requests to backends in the same
sequence group execute strictly in sequence, which lets the local backends handle loading and unloading
of competing models.
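
Conceptually, a sequence group behaves like a shared lock: a request routed to any backend in the group holds the group's lock for the duration of the call. A minimal Python sketch of the idea (illustrative only, not the gateway's actual implementation):

```python
import asyncio

# One lock per sequence group: requests to any backend in the
# same group run strictly one after another.
_group_locks: dict[str, asyncio.Lock] = {}

def lock_for(group: str) -> asyncio.Lock:
    return _group_locks.setdefault(group, asyncio.Lock())

async def run_serialized(group: str, call):
    # 'call' is an async callable performing the actual backend request.
    async with lock_for(group):
        return await call()
```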

Backends included out of the box:

* OpenAI
* Ollama

In development:

* Anthropic
* Fooocus
* vLLM

The gateway exposes the familiar `/v1` endpoints (`/chat/completions`, `/completions`,
`/embeddings`, `/images/generations`, and `/models`) and normalizes responses where
needed. This allows you to use the `openai` client library with any backend. Note that the gateway
does not implement the full OpenAI API; it only passes through the endpoints mentioned
above. In addition, it uses local API keys, which select the application
from the `apps.json` configuration file.
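
For example, the official `openai` Python client can be pointed at the gateway; the base URL and API key below match the example configuration later in this README:

```python
from openai import OpenAI

# The Bearer token selects the app from apps.json; the base URL is the gateway.
client = OpenAI(base_url="http://127.0.0.1:8080/v1", api_key="sk-example-key")

response = client.chat.completions.create(
    model="llama3.2",  # resolved via the aliases/backends in backends.json
    messages=[{"role": "user", "content": "Say hi"}],
)
print(response.choices[0].message.content)
```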

## Installation

From PyPI (recommended):

```bash
pip install mini-apigw
```

From source (editable):

```bash
pip install -e .
```

## Configuration

The gateway reads three JSON files from a configuration directory. By default this
is `./config` (this default will change soon, and that will be a breaking change!);
override it with the environment variable `MINIAPIGW_CONFIG_DIR` or the CLI flag `--config-dir`.
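
The resolution order sketched below is an assumption (CLI flag over environment variable over the built-in default); only the three sources themselves are documented:

```python
import os

def resolve_config_dir(cli_flag: str | None) -> str:
    # Assumed precedence: --config-dir beats MINIAPIGW_CONFIG_DIR beats ./config.
    return cli_flag or os.environ.get("MINIAPIGW_CONFIG_DIR") or "./config"
```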

Required files:

- `daemon.json`: service, logging, admin, timeouts, and optional Postgres settings
- `backends.json`: model providers, aliases, costs, and capabilities
- `apps.json`: application definitions, API keys, allow/deny policies, cost limits, and tracing

Example `daemon.json` (minimal, no persistent accounting log):

```json
{
  "listen": { "host_v4": "0.0.0.0", "port": 8080 },
  "admin": { "bind": ["127.0.0.1:8081", "[::1]:8081"], "stats_networks": ["127.0.0.1/32", "::1/128"] },
  "logging": { "level": "INFO", "redact_prompts": true, "access_log": true },
  "reload": { "enable_sighup": true },
  "timeouts": { "default_connect_s": 60, "default_read_s": 600 },
  "database": null
}
```

Example `backends.json` (mixed OpenAI + Ollama):

```json
{
  "aliases": { "llama3.2": "llama3.2:latest" },
  "sequence_groups": { "local_gpu_01": { "description": "Serialized work for local GPU tasks" } },
  "backends": [
    {
      "type": "openai",
      "name": "openai-primary",
      "base_url": "https://api.openai.com/v1",
      "api_key": "<openai_key>",
      "concurrency": 4,
      "supports": { "chat": ["gpt-4o-mini"], "embeddings": ["text-embedding-3-small"], "images": ["gpt-image-1", "dall-e-3"] },
      "cost": { "currency": "usd", "unit": "1k_tokens", "models": { "gpt-4o-mini": {"prompt": 0.002, "completion": 0.004} } }
    },
    {
      "type": "ollama",
      "name": "ollama-local",
      "base_url": "http://127.0.0.1:11434",
      "sequence_group": "local_gpu_01",
      "concurrency": 1,
      "supports": { "chat": ["llama3.2:latest", "gpt-oss:120b"], "completions": ["llama3.2:latest"], "embeddings": ["nomic-embed-text"] },
      "cost": {
        "models": {
          "llama3.2:latest": {"prompt": 0.0, "completion": 0.0},
          "gpt-oss:120b": {"prompt": 0.001, "completion": 0.001 }
        }
      }
    }
  ]
}
```

Example `apps.json` (one app; additional apps are simply declared one after another). API keys are treated
as opaque: you can use any string as the Bearer token, it just has to be unique. The app ID is used in
filenames, so you may want to avoid special characters:

```json
{
  "apps": [
    {
      "app_id": "demo",
      "name": "Demo application",
      "api_keys": [
        "sk-example-key"
      ],
      "policy": {
        "allow": [ "gpt-4o-mini", "llama3.2" ],
        "deny": []
      },
      "cost_limit": {
          "period": "day",
          "limit": 10.0
      },
      "trace": {
          "file": "/var/log/llmgw/demo.jsonl",
          "image_dir": "/var/log/llmgw/images/demo",
          "include_prompts": true,
          "include_response": true,
          "include_keys": true 
      }
    }
  ]
}
```

Notes:

- Backends declare capabilities via `supports` (globs allowed; see the sketch after this list)
  and optional `aliases`. Use `concurrency` and `sequence_group` to tune throughput and
  serialization. Costs under `cost` are used to estimate per‑app spend. This may of course
  deviate from real platform billing; it is only there to provide an estimate (and may be
  reworked later on). Also, billing is currently not tracked for images.
- Apps bind one or more API keys to an `app_id`, use `policy.allow`/`policy.deny` to
  restrict models, and `cost_limit` to enforce soft limits. Per‑app traces can be persisted
  as JSONL with optional image capture.
- The admin interface binds on the same port (due to limitations in FastAPI and for simplicity).
  It is restricted to localhost by default; when running local jails, add their CIDRs to `admin.stats_networks`.
  If you expose the service through a Unix domain socket, the gateway assumes a local reverse proxy enforces access control, so configure that proxy accordingly.
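
Since `supports` entries may contain globs, capability matching can be thought of as shell-style pattern matching against the requested model name after alias resolution. A minimal sketch, assuming `fnmatch` semantics:

```python
from fnmatch import fnmatch

def backend_supports(backend: dict, capability: str, model: str, aliases: dict) -> bool:
    # Resolve an alias such as "llama3.2" -> "llama3.2:latest" first.
    model = aliases.get(model, model)
    patterns = backend.get("supports", {}).get(capability, [])
    return any(fnmatch(model, pattern) for pattern in patterns)

backend = {"supports": {"chat": ["llama3.2:*", "gpt-oss:120b"]}}
print(backend_supports(backend, "chat", "llama3.2", {"llama3.2": "llama3.2:latest"}))  # True
```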

## Running

The package installs a console entry point `mini-apigw`.

Foreground server:

```bash
mini-apigw start --config-dir ./config --foreground --reload
```

Daemonize with defaults from `daemon.json` (port/host or Unix socket):

```bash
mini-apigw start --config-dir ./config
```

Override listener explicitly:

```bash
mini-apigw start --config-dir ./config --host 0.0.0.0 --port 8080
# or use a Unix domain socket
mini-apigw start --config-dir ./config --unix-socket /var/llmgw/llmgw.sock
```

Admin helpers (call built‑in admin endpoints):

```bash
mini-apigw reload --config-dir ./config
mini-apigw stop --config-dir ./config
mini-apigw token --bytes 32   # generate a random API key
```
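
The `token` helper is equivalent in spirit to drawing a random URL-safe string from the standard library (the exact encoding used by the CLI is not specified here):

```python
import secrets

# Roughly what `mini-apigw token --bytes 32` does: derive a random
# token from 32 bytes of entropy, usable as an app API key.
print(secrets.token_urlsafe(32))
```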

## OpenAI‑Compatible API

All endpoints live under `/v1` and require `Authorization: Bearer <api_key>`.

List models (combines declared and auto‑discovered where available):

```bash
curl -s -H "Authorization: Bearer sk-example-key" http://127.0.0.1:8080/v1/models | jq .
```

Chat completions (JSON response; set `stream: true` for SSE):

```bash
curl -s \
  -H "Authorization: Bearer sk-demo-key" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-4o-mini",
        "messages": [
          {"role": "system", "content": "You are helpful."},
          {"role": "user", "content": "Say hi"}
        ],
        "stream": false
      }' \
  http://127.0.0.1:8080/v1/chat/completions | jq .
```

Embeddings:

```bash
curl -s \
  -H "Authorization: Bearer sk-demo-key" \
  -H "Content-Type: application/json" \
  -d '{"model": "text-embedding-3-small", "input": "hello"}' \
  http://127.0.0.1:8080/v1/embeddings | jq .
```

Image generation (if supported by a configured backend):

```bash
curl -s \
  -H "Authorization: Bearer sk-demo-key" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-image-1", "prompt": "a lighthouse at dusk"}' \
  http://127.0.0.1:8080/v1/images/generations | jq .
```

## Accounting and Persistence

mini-apigw estimates request cost using backend‑specific `cost` rates and tracked token
usage (this is work in progress), and aggregates totals per app. By default, accounting
is in‑memory. To persist usage to Postgres for reporting or bootstrapping daily totals,
set the `database` section in `daemon.json` and apply the schema in `sql/schema.sql`.

Notes:

- Postgres is optional. If configured, the gateway uses `psycopg`/`psycopg2` to
  insert request rows asynchronously and reconstructs the current‑day state on startup.
- Cost limits (`apps[].cost_limit`) are enforced against the running totals; requests
  beyond the limit are rejected with `403` (see the sketch below).
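
With `unit` set to `1k_tokens`, the per-request estimate is the prompt and completion token counts scaled by the configured rates. A sketch of the estimate and the soft-limit check (illustrative, not the gateway's code):

```python
def estimate_cost(rates: dict, prompt_tokens: int, completion_tokens: int) -> float:
    # Rates are per 1k tokens, e.g. {"prompt": 0.002, "completion": 0.004}.
    return (prompt_tokens / 1000.0) * rates["prompt"] + \
           (completion_tokens / 1000.0) * rates["completion"]

def check_limit(running_total: float, request_cost: float, limit: float) -> None:
    # Requests beyond the per-period limit are rejected with HTTP 403.
    if running_total + request_cost > limit:
        raise PermissionError("403: cost limit exceeded for this period")

cost = estimate_cost({"prompt": 0.002, "completion": 0.004}, 1200, 300)
print(round(cost, 4))  # 0.0036
```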

## Tracing

Per‑app tracing can write JSONL events and capture images from generation responses.
Enable under `apps[].trace` with `file` and/or `image_dir`. You can include prompts,
responses (non‑streaming), and masked API keys with `include_prompts`, `include_response`,
and `include_keys`.

Trace files are append‑only JSONL; image files are written under `image_dir` with content‑based
extensions for base64 payloads, or a `.txt` containing the URL for URL‑based images.
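
Since traces are plain JSONL with one event per line, they are easy to inspect with standard tooling. A minimal Python reader (the event schema is not specified here, so this sketch only reports which keys appear):

```python
import json
from collections import Counter

keys = Counter()
with open("/var/log/llmgw/demo.jsonl", encoding="utf-8") as fh:
    for line in fh:
        keys.update(json.loads(line).keys())

print(keys.most_common())  # which fields the trace events carry
```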

## Admin and Stats

Admin endpoints are local‑only by default (IPv4/IPv6 loopback) and can be extended
with `admin.stats_networks` CIDR allow‑lists.

- `POST /admin/reload` — reload configuration files atomically
- `POST /admin/shutdown` — request a graceful stop
- `GET /stats/live` — in‑flight/queue stats per backend and sequence group
- `GET /stats/usage?app_id=<id>` — current usage snapshot (optionally filtered)

The `mini-apigw reload` and `stop` CLI commands call these endpoints using the admin
bind defined in `daemon.json`.
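
For scripted monitoring you can call the stats endpoints directly. A sketch using only the standard library, assuming the admin bind from the example `daemon.json` (`127.0.0.1:8081`):

```python
import json
from urllib.request import urlopen

# Usage snapshot for one app via the admin/stats listener.
with urlopen("http://127.0.0.1:8081/stats/usage?app_id=demo") as resp:
    print(json.dumps(json.load(resp), indent=2))
```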

When the gateway listens on a Unix domain socket (via `listen.unix_socket` or the
`--unix-socket` CLI flag) every request that reaches the socket is treated as trusted.
Place a local reverse proxy in front of the socket to enforce network access rules
for the public API as well as the admin/statistics endpoints.

For Apache HTTPD the following configuration proxies all API traffic through the socket:

```apache
<VirtualHost *:80>
        ServerName host.example.com
        ServerAdmin complains@example.com

        DocumentRoot /usr/www/host.example.com/www/

        ProxyPass        /       "unix:/var/run/miniapigw.sock|http://localhost/"
        ProxyPassReverse /       "unix:/var/run/miniapigw.sock|http://localhost/"
</VirtualHost>
```

If you also need HTTP authentication for administrators, Apache combines multiple `Require` directives with a logical AND by default. Wrap them in `<RequireAll>` to make that explicit:

```apache
<LocationMatch "^/(admin|stats)">
        AuthType Basic
        AuthName "mini-apigw admin"
        AuthUserFile "/usr/local/etc/httpd/miniapigw-admin.htpasswd"
        <RequireAll>
                Require valid-user
                Require ip 127.0.0.1 ::1 192.0.2.0/24
        </RequireAll>
</LocationMatch>
```

Adjust the `Require ip` list or wrap several blocks in `<RequireAny>` when you want to allow either a subnet or authenticated users.

Make sure the proxy only exposes the endpoints you intend to make reachable:
`/v1/…` for OpenAI-compatible APIs and `/admin`/`/stats` only to trusted administrators.

Note that at the moment this means any local user can trigger shutdown and reload! This
is __work in progress__.

## Deployment Notes

The CLI embeds Uvicorn and supports IPv4/IPv6 hosts or a Unix domain socket. For production, consider:

- running behind a reverse proxy (TLS termination, headers)
- setting `logging.redact_prompts` to `true` unless you need prompts in the logs
- using `sequence_group` and `concurrency` to match GPU/CPU constraints

## License

See `LICENSE.md` for the full text.

            
