ai-infra 0.1.27 (PyPI)

- Summary: Infrastructure for efficient and scalable AI applications.
- Author: Ali Khatami
- Requires Python: >=3.11, <4.0
- License: MIT
- Keywords: ai, langchain, langgraph, fastapi, infra, llm, mcp
- Uploaded: 2025-08-28 06:10:16
# ai-infra

Infrastructure for efficient and scalable AI applications: clean LLM interfaces, composable graphs, and MCP client/server utilities. Batteries-included quickstarts help you ship fast.

- LLM: simple chat, agents with tools, streaming, retries, structured output, human-in-the-loop (HITL) hooks
- Graph: small-to-large workflows using LangGraph with typed state and tracing
- MCP: multi-server client, tool discovery, OpenMCP (OpenAPI-like) doc generation


## Install

- Python: 3.11 – 3.13
- Package manager: Poetry (recommended) or pip

Using Poetry (dev):

```bash
poetry install
poetry shell
```

Using pip (library use):

```bash
pip install ai-infra
```


## Configure providers (env)

Create a .env file (or export variables in your shell) with keys for any providers you plan to use.

```bash
# OpenAI
export OPENAI_API_KEY=...
# Anthropic
export ANTHROPIC_API_KEY=...
# Google Generative AI
export GOOGLE_API_KEY=...
# xAI
export XAI_API_KEY=...
```

Optional: set a token to pass as an HTTP Authorization header to MCP servers you call through the client.

```bash
export MCP_AUTH_TOKEN=...
```
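
If you keep keys in a .env file, load it before constructing any clients. A minimal sketch using python-dotenv (an assumption: ai-infra is not known to auto-load .env files):

```python
# Sketch: load .env into os.environ at startup (requires `pip install python-dotenv`).
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

# ... now construct CoreLLM / CoreAgent as in the quickstarts below.
```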


## Quickstarts

Below are small copy/paste snippets, along with commands to run the included examples.

### LLM: chat (sync)

```python
from ai_infra.llm import CoreLLM, Providers, Models

llm = CoreLLM()
resp = llm.chat(
    user_msg="One fun fact about the moon?",
    system="You are concise.",
    provider=Providers.openai,
    model_name=Models.openai.gpt_4o.value,
)
print(resp)
```

Run the included example (the example modules are named with a leading digit, so import them via importlib rather than a from-import, which would be a syntax error):

```bash
python -c "import importlib; importlib.import_module('ai_infra.llm.examples.02_llm_chat_basic').main()"
```

### LLM: agent (tools, sync)

```python
from ai_infra.llm import CoreAgent, Providers, Models

agent = CoreAgent()
resp = agent.run_agent(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    provider=Providers.openai,
    model_name=Models.openai.gpt_4o.value,
    model_kwargs={"temperature": 0.7},
)
print(getattr(resp, "content", resp))
```
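
The snippet above runs the agent without any tools. A hypothetical sketch of attaching one (the `tools` parameter and plain-callable convention are assumptions, not confirmed API; see 05_tool_controls.py for the real wiring):

```python
from ai_infra.llm import CoreAgent, Providers, Models

def get_weather(city: str) -> str:
    """Return a canned weather report for a city."""
    return f"It is sunny in {city}."

agent = CoreAgent()
resp = agent.run_agent(
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    provider=Providers.openai,
    model_name=Models.openai.gpt_4o.value,
    tools=[get_weather],  # assumption: tools are passed as plain callables
)
print(getattr(resp, "content", resp))
```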

Run the included example:

```bash
python -c "from ai_infra.llm.examples.01_agent_basic import main; main()"
```

### LLM: token streaming (async)

```python
import asyncio
from ai_infra.llm import CoreLLM, Providers, Models

async def demo():
    llm = CoreLLM()
    async for token, meta in llm.stream_tokens(
        "Stream one short paragraph about Mars.",
        provider=Providers.openai,
        model_name=Models.openai.gpt_4o.value,
    ):
        print(token, end="", flush=True)

asyncio.run(demo())
```

See more examples in src/ai_infra/llm/examples (a structured-output sketch follows the list):
- 03_structured_output.py, 04_agent_stream.py, 05_tool_controls.py, 06_hitl.py, 07_retry.py, 08_agent_stream_tokens.py, 09_chat_stream.py
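
ai-infra's own structured-output API is demonstrated in 03_structured_output.py. For background, this is roughly what the call looks like at the underlying LangChain provider layer (a sketch of that layer, not ai-infra's API):

```python
from pydantic import BaseModel
from langchain_openai import ChatOpenAI

class MoonFact(BaseModel):
    fact: str
    source_confidence: float

llm = ChatOpenAI(model="gpt-4o")
structured = llm.with_structured_output(MoonFact)  # parse output into MoonFact
result = structured.invoke("One fun fact about the moon?")
print(result.fact)
```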


### Graph: minimal state machine

```python
from typing_extensions import TypedDict
from langgraph.graph import END
from ai_infra.graph.core import CoreGraph
from ai_infra.graph.models import Edge, ConditionalEdge

class MyState(TypedDict):
    value: int

def inc(s: MyState) -> MyState:
    # Node: add one to the counter.
    s["value"] += 1
    return s

def mul(s: MyState) -> MyState:
    # Node: double the counter.
    s["value"] *= 2
    return s

graph = CoreGraph(
    state_type=MyState,
    node_definitions=[inc, mul],
    edges=[
        Edge(start="inc", end="mul"),
        # Loop back to inc until the value reaches 40, then stop.
        ConditionalEdge(
            start="mul",
            router_fn=lambda s: "inc" if s["value"] < 40 else END,
            targets=["inc", END],
        ),
    ],
)

print(graph.run({"value": 1}))  # expect {"value": 46}, assuming execution starts at inc
```

Run the included example:

```bash
python -c "from ai_infra.graph.examples.01_graph_basic import main; main()"
```

See also: 02_graph_stream_values.py


### MCP: multi-server client

```python
import asyncio
import os

from ai_infra.mcp.client.core import CoreMCPClient

async def main():
    client = CoreMCPClient([
        {
            "transport": "streamable_http",
            "url": "http://127.0.0.1:8000/api/mcp",
            # $VAR does not expand inside Python strings; read the token
            # from the environment instead.
            "headers": {"Authorization": f"Bearer {os.environ.get('MCP_AUTH_TOKEN', '')}"},
        },
        # {"transport": "stdio", "command": "./your-mcp-server", "args": []},
        # {"transport": "sse", "url": "http://127.0.0.1:8001/sse"},
    ])

    await client.discover()
    tools = await client.list_tools()
    print("Discovered tools:", tools)

    docs = await client.get_openmcp()  # or client.get_openmcp("your_server_name")
    print("OpenMCP doc keys:", list(docs.keys()))

asyncio.run(main())
```

Run the included example:

```bash
python -m ai_infra.mcp.examples.01_mcps
```


## Running all quickstarts

If you prefer a single runner command, add a tiny script like this locally:

```python
# quickstart.py
import importlib
import runpy
import sys

# Map a short key to "module:function"; "__main__" means run the module
# as a script instead of calling a named function.
M = {
    "llm_agent_basic": "ai_infra.llm.examples.01_agent_basic:main",
    "llm_chat_basic": "ai_infra.llm.examples.02_llm_chat_basic:main",
    "graph_basic": "ai_infra.graph.examples.01_graph_basic:main",
    "mcp_discover": "ai_infra.mcp.examples.01_mcps:__main__",
}

if __name__ == "__main__":
    if len(sys.argv) != 2 or sys.argv[1] not in M:
        sys.exit(f"usage: quickstart.py [{'|'.join(M)}]")
    mod_name, _, func = M[sys.argv[1]].partition(":")
    if func == "__main__":
        runpy.run_module(mod_name, run_name="__main__")
    else:
        getattr(importlib.import_module(mod_name), func)()
```

Run:

```bash
python quickstart.py llm_chat_basic
python quickstart.py graph_basic
python quickstart.py llm_agent_basic
python quickstart.py mcp_discover
```


## Testing and quality

- Unit tests: pytest
  - `pytest -q`
- Lint: ruff
  - `ruff check src tests`
- Types: mypy
  - `mypy src`

Tip: add a test_examples.py that imports and runs the example main() functions to smoke-test provider wiring without hitting the network (use mocks); a sketch follows.
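
A minimal sketch, assuming the chat example calls CoreLLM.chat internally (if it uses a different entry point, adjust the patch target):

```python
# test_examples.py -- smoke-test an example without network access.
import importlib
from unittest.mock import patch

def test_llm_chat_basic_smoke():
    # Stub out the provider call so no API key or network is needed.
    with patch("ai_infra.llm.CoreLLM.chat", return_value="stub response"):
        mod = importlib.import_module("ai_infra.llm.examples.02_llm_chat_basic")
        mod.main()
```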


## Project layout

- src/ai_infra/llm: core LLM and Agent APIs, providers, tools, and utils
- src/ai_infra/graph: CoreGraph wrapper, typed models, and utilities
- src/ai_infra/mcp: MCP client, examples, and server stubs
- tests: add your unit/integration tests here


## Notes and roadmap

- Providers: OpenAI, Anthropic, Google GenAI, xAI (via LangChain provider integrations)
- Features include structured output, retries, fallbacks, streaming, and tool call controls
- MCP doc generation (OpenMCP) is available via CoreMCPClient.get_openmcp()
- Nice-to-haves: add a simple example runner module; more test coverage around examples and MCP flows


## License

MIT


            
