stum-ai-gateway

Name: stum-ai-gateway
Version: 0.1.9
Summary: NatBus-to-LLM agent gateway (contract-validated, iterative planning)
Author email: Servicepod <admin@servicepod.net>
Homepage: https://servicepod.net
Upload time: 2025-08-12 08:26:07
Requires Python: >=3.11
License: MIT License, Copyright (c) 2025 Servicepod
Keywords: agent, gateway, jetstream, llm, nats
Requirements: No requirements were recorded.
# AI Gateway
• Interface-only LLM: the agent receives an injected async callback (prompt: str, system: Optional[str]) -> Awaitable[str]; no LLM SDKs bundled
• Command mapping: CommandRegistry maps human commands to request/response subjects; easily extended per service
• Routing: human commands → LLM JSON plan → service request; service response → optional LLM post-process → human reply subject
• Correlation: preserves the inbound x-correlation-id across all downstream requests and final replies
• Compression: outbound uses gzip when enabled; content-encoding: gzip header set; inbound ReceivedMessage auto-decompresses
• Streams: optionally create a single stream (stream_create=True) and subscribe to human_command_subject plus mapped response subjects
• Payloads: JSON via BusMessage.from_json; binary via from_bytes
• Handlers: receive ReceivedMessage with ack(), nak(), term() for JetStream flow control
• Durables: PUSH subscribers set durable="name" and optional queue group for load-balanced workers

## Install Requirements
```shell
python3.11 -m pip install -r requirements.txt -v
```

## Build
```shell
./build.sh
```


AI Gateway
Interface-only bridge between NatBus and an external LLM.
Maps human commands to service request/response subjects, asks an LLM to produce a JSON “plan”, publishes requests, and routes responses back to the human reply subject.
No LLM SDKs bundled; you inject an async callback.

Features
• LLM injection via async callback (prompt: str, system: Optional[str]) -> Awaitable[str]
• Command registry for easy, extensible mapping of human commands → NatBus subjects
• Correlation propagation using inbound x-correlation-id (or auto-generated UUID)
• Optional LLM post-processing of service responses per-command or globally
• Gzip compression for outbound messages with transparent inbound decompression
• Supports PUSH consumers for human commands and service responses

Install
Use the vendored NatBus wheel and install the gateway.

```bash

# from your repo root
mkdir -p vendor
cp /mnt/nas_share/python_package_repository/natbus/natsbus-0.1.16-py3-none-any.whl vendor/
pip install --no-index --find-links=vendor natbus==0.1.16
pip install -e .
```

pyproject.toml dependency form (already configured):

```toml
dependencies = [
"natbus @ file:vendor/natsbus-0.1.16-py3-none-any.whl",
]
```

Concepts
Human command envelope (JSON on human_command_subject):

```json
{"cmd":"<string>","args":{"...": "..."},"reply_subject":"<subject optional>"}
```

LLM plan schema (LLM must output a single JSON object):

```json
{
  "action": "send_request",
  "subject": "<service.request.subject>",
  "payload": { "service": "specific", "fields": "..." },
  "await_response": true,
  "response_subject": "<service.response.subject>"
}
```

`response_subject` is optional and overrides the mapping default.

Service response envelope (typical):

```json
{ "ok": true, "data": { "...": "..." }, "meta": { "..." : "..." } }
```

Human reply envelope (published to reply_subject):

```json
{ "correlation_id": "<id>", "command": "<cmd>", "data": { "...": "..." } }
```

Quick Start
1) Provide an LLM callback
```python
from typing import Optional

async def llm_call(prompt: str, system: Optional[str]) -> str:
    # Must return a single JSON object string following the LLM plan schema.
    # Example: route "show active trades" to the forex service.
    return (
        '{"action":"send_request","subject":"forex.trades.list.req","payload":{},"await_response":true,'
        '"response_subject":"forex.trades.list.resp"}'
    )
```

2) Register command mappings
```python
from ai_gateway import CommandRegistry, CommandMapping

registry = CommandRegistry()

registry.register(CommandMapping(
    command="show active trades",
    request_subject="forex.trades.list.req",
    response_subject="forex.trades.list.resp",
    llm_instructions="Use an empty payload; await_response true.",
    llm_postprocess=True,  # optional: run LLM to summarize service response
))

registry.register(CommandMapping(
    command="get account info",
    request_subject="forex.account.info.req",
    response_subject="forex.account.info.resp",
))
```

3) Configure and run the agent
```python
import asyncio
from natbus.config import NatsConfig
from natbus.client import NatsBus
from ai_gateway import LlmNatbusAgent, LlmAgentConfig

CFG = NatsConfig(
    server="nats-nats-jetstream:4222",
    username="nats-user",
    password="changeme",
    name="ai-gateway",
    stream_create=True,
    stream_name="AI_STREAM",
    stream_subjects=("ai.human.commands", "ai.human.replies", "forex.trades.list.req", "forex.trades.list.resp"),
    queue_group="ai-gateway",
)

async def main():
    bus = NatsBus(CFG)
    await bus.connect()
    agent = LlmNatbusAgent(
        bus=bus,
        llm_call=llm_call,
        registry=registry,
        cfg=LlmAgentConfig(
            human_command_subject="ai.human.commands",
            default_reply_subject="ai.human.replies",
            compress_outbound=True,
            pending_timeout_seconds=180,
        ),
    )
    await agent.start()
    # keep running
    while True:
        await asyncio.sleep(60)

if __name__ == "__main__":
    asyncio.run(main())
```

4) Publish a human command (e.g., from UI/controller)
```python
await bus.publish_json(
    subject="ai.human.commands",
    obj={"cmd": "show active trades", "args": {}, "reply_subject": "ai.human.replies"},
    sender="ui",
)
```

Command Mapping Structure
CommandMapping fields:

• command: canonical human command string (lowercased for lookup)
• request_subject: NatBus subject to publish the service request
• response_subject: subject on which the service posts responses (optional)
• llm_instructions: extra prompt hints for command-specific nuances (optional)
• llm_postprocess: run LLM on service response before replying (optional)

CommandRegistry responsibilities:

• register(mapping): adds/overwrites a mapping keyed by command
• get(command): returns the CommandMapping for a human command
• all_response_subjects(): returns the set of response subjects for auto-subscription
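A minimal sketch of a registry with this interface (illustrative only; the class names match the package's import points, but the internals shown here are assumptions, not the shipped implementation):

```python
from dataclasses import dataclass
from typing import Dict, Optional, Set

@dataclass
class CommandMapping:
    command: str
    request_subject: str
    response_subject: Optional[str] = None
    llm_instructions: Optional[str] = None
    llm_postprocess: bool = False

class CommandRegistry:
    def __init__(self) -> None:
        self._mappings: Dict[str, CommandMapping] = {}

    def register(self, mapping: CommandMapping) -> None:
        # Key by lowercased command; re-registering overwrites.
        self._mappings[mapping.command.lower()] = mapping

    def get(self, command: str) -> Optional[CommandMapping]:
        return self._mappings.get(command.lower())

    def all_response_subjects(self) -> Set[str]:
        # Subjects the agent should auto-subscribe to.
        return {m.response_subject for m in self._mappings.values() if m.response_subject}
```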

Extending mappings:

```python

# Add a new command → subject pair
registry.register(CommandMapping(
    command="get forex quote",
    request_subject="forex.quote.req",
    response_subject="forex.quote.resp",
    llm_instructions="Payload must include symbol (e.g. EUR/USD). await_response true.",
))
```

LLM Integration Contract
The agent sends a prompt that includes:

• System prompt (schema and output constraints)
• Context with command, args, default request_subject, and default response_subject
• Any llm_instructions from the mapping

Your callback must:

• Return a single JSON document (no markdown, no commentary)
• Include required keys: action, subject, payload, await_response
• Optionally include response_subject (overrides mapping)

Retries:

• The agent retries invalid outputs up to llm_max_retries with a short JSON-only reminder
• After retries, the agent publishes an error to the requester’s reply subject
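The retry behaviour can be sketched as a validate-and-retry loop (a simplified illustration under stated assumptions; `REQUIRED_KEYS` and `plan_with_retries` are hypothetical names, not the package's internals):

```python
import json
from typing import Awaitable, Callable, Optional

REQUIRED_KEYS = {"action", "subject", "payload", "await_response"}

async def plan_with_retries(
    llm_call: Callable[[str, Optional[str]], Awaitable[str]],
    prompt: str,
    system: Optional[str],
    max_retries: int = 2,
) -> dict:
    attempt_prompt = prompt
    for _ in range(max_retries + 1):
        raw = await llm_call(attempt_prompt, system)
        try:
            plan = json.loads(raw)
            if isinstance(plan, dict) and REQUIRED_KEYS.issubset(plan):
                return plan
        except json.JSONDecodeError:
            pass
        # Append a short JSON-only reminder before retrying.
        attempt_prompt = (
            prompt
            + "\nReturn ONLY a single JSON object with keys: action, subject, payload, await_response."
        )
    raise ValueError("invalid_llm_output")
```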

Routing and Correlation
• Inbound x-correlation-id is reused for all downstream messages and final replies
• If missing, the agent generates a UUID and uses it consistently in request headers and the reply body and headers
• The agent subscribes to all configured response subjects and matches responses by correlation ID
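The reuse-or-generate rule can be sketched as (a hypothetical helper, not the package's API):

```python
import uuid
from typing import Mapping, Optional

def correlation_id(headers: Optional[Mapping[str, str]]) -> str:
    # Reuse the inbound x-correlation-id when present; otherwise mint a UUID
    # that is then used consistently on requests and the final reply.
    if headers:
        cid = headers.get("x-correlation-id")
        if cid:
            return cid
    return str(uuid.uuid4())
```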

Compression
Outbound (agent → bus):

• Controlled by LlmAgentConfig.compress_outbound
• When enabled, JSON payloads are gzip-compressed and marked with content-encoding: gzip

Inbound (bus → agent):

• ReceivedMessage auto-decompresses if content-encoding: gzip
• Your handlers and tests can read .as_json() or .as_text() regardless of compression
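The round trip can be sketched with the standard library (an illustration of the convention, not the gateway's actual code; `encode_outbound`/`decode_inbound` are hypothetical names):

```python
import gzip
import json

def encode_outbound(obj: dict, compress: bool) -> tuple[bytes, dict]:
    # Serialize to JSON; gzip and mark content-encoding when compression is on.
    body = json.dumps(obj).encode("utf-8")
    headers = {"content-type": "application/json"}
    if compress:
        body = gzip.compress(body)
        headers["content-encoding"] = "gzip"
    return body, headers

def decode_inbound(body: bytes, headers: dict) -> dict:
    # Transparently decompress when the gzip header is present.
    if headers.get("content-encoding") == "gzip":
        body = gzip.decompress(body)
    return json.loads(body)
```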

Subscriptions
• Human commands: PUSH consumer on human_command_subject with a durable
• Service responses: PUSH consumers per response subject derived from the registry and extra_response_subjects

Durables and queue groups:

• Use a durable name to preserve delivery cursor and acks across restarts
• Use a queue group to load-balance multiple agent replicas

Error Paths
Unknown command:

```json
{ "error": "unknown_command", "cmd": "<user text>" }
```

Invalid LLM output after retries:

```json
{ "error": "invalid_llm_output", "detail": "<reason>" }
```

No response subject configured while await_response is true:

```json
{ "error": "no_response_subject_configured" }
```

Unsupported plan action:

```json
{ "error": "unsupported_action", "action": "<value>" }
```

Configuration Reference
LlmAgentConfig most relevant fields:

• human_command_subject: subject to receive human commands
• default_reply_subject: fallback reply subject if requester did not specify one
• compress_outbound: gzip-compress outbound messages when true
• llm_max_retries: retry count for invalid LLM outputs
• llm_system_prompt: schema and constraints for planning calls
• llm_postprocess_system_prompt: schema for post-processing {result: ...}
• pending_timeout_seconds: TTL for awaiting responses
• extra_response_subjects: additional subjects to subscribe to
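The shape of the config can be sketched as a dataclass (the field names come from the list above; the defaults shown are assumptions for illustration, not the package's actual defaults):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class LlmAgentConfig:
    human_command_subject: str
    default_reply_subject: str
    compress_outbound: bool = False        # assumed default
    llm_max_retries: int = 2               # assumed default
    llm_system_prompt: Optional[str] = None
    llm_postprocess_system_prompt: Optional[str] = None
    pending_timeout_seconds: int = 180     # assumed default
    extra_response_subjects: Tuple[str, ...] = ()
```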

Example: Multi-service Registry
```python
registry = CommandRegistry()

# Forex
registry.register(CommandMapping(
    command="show active trades",
    request_subject="forex.trades.list.req",
    response_subject="forex.trades.list.resp",
    llm_instructions="Empty payload; await_response true.",
    llm_postprocess=True,
))
registry.register(CommandMapping(
    command="get forex quote",
    request_subject="forex.quote.req",
    response_subject="forex.quote.resp",
    llm_instructions="Require args.symbol like 'EUR/USD'.",
))

# Accounts
registry.register(CommandMapping(
    command="get account info",
    request_subject="acct.info.req",
    response_subject="acct.info.resp",
))

# Orders
registry.register(CommandMapping(
    command="place order",
    request_subject="orders.place.req",
    response_subject="orders.place.resp",
    llm_instructions="Payload must include side, symbol, qty, type.",
))
```

Testing
Unit tests use a FakeBus and stubbed llm_call.
Tests cover compressed/uncompressed flows, correlation propagation, retries, unknown command, and non-JSON passthrough.

Run tests:

```bash
pytest -q
```

Key fixtures in tests/conftest.py:

• bus – fake NatBus
• make_cfg – builds LlmAgentConfig with overrides
• make_agent – starts LlmNatbusAgent with provided registry and callback
• decode_bus_json – decompresses and decodes BusMessage bodies for assertions

Build and Distribute
Version is controlled in pyproject.toml (project.version).
Build artifacts:

```bash
python -m pip install --upgrade build
python -m build
ls dist/
```

Optional NAS copy (example shown in build.sh):

```bash
cp dist/ai_gateway-<ver>-py3-none-any.whl /mnt/nas_share/python_package_repository/ai_gateway/
```

Notes for Service Authors
Subject naming:

• Requests: <service>.<resource>.<action>.req
• Responses: <service>.<resource>.<action>.resp
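This convention is mechanical enough to capture in a small helper (a hypothetical utility, not part of the package):

```python
def subject(service: str, resource: str, action: str, kind: str) -> str:
    # Build a NatBus subject following <service>.<resource>.<action>.<kind>,
    # where kind is "req" for requests or "resp" for responses.
    if kind not in ("req", "resp"):
        raise ValueError("kind must be 'req' or 'resp'")
    return f"{service}.{resource}.{action}.{kind}"
```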

Correlation:

• Echo inbound x-correlation-id on all responses
• Respond on the subject specified by the plan or the documented default

Payloads:

• JSON only for requests and responses to maximize compatibility with the LLM plan schema
• For large payloads, gzip; the gateway handles decompression automatically

Minimal API Surface (import points)
```python
from ai_gateway import (
    LlmAgentConfig,
    CommandRegistry,
    CommandMapping,
    LlmNatbusAgent,  # requires injected llm_call
)
```

This is sufficient to register commands, run the agent, and integrate your external LLM.
            
