jetflow 1.0.0 (PyPI)

- Summary: Lightweight, production-ready framework for building agentic workflows with LLMs
- Author: Lucas Astorian <lucas@intellifin.ai>
- Homepage: https://github.com/lucasastorian/jetflow
- Requires Python: >=3.10
- License: MIT
- Keywords: llm, agents, ai, langchain, openai, anthropic, gemini
- Uploaded: 2025-10-20 11:23:23
- Requirements: annotated-types, anyio, certifi, distro, h11, httpcore, httpx, idna, jiter, openai, pydantic, pydantic_core, sniffio, tqdm, typing-inspection, typing_extensions
# ⚡ Jetflow

[![PyPI](https://img.shields.io/pypi/v/jetflow.svg)](https://pypi.org/project/jetflow)
[![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)

**Stop rebuilding the same agent patterns.**

Jetflow gives you **typed tools**, **short agent loops**, and **clean multi-agent composition**—all with **full cost visibility**.

* **Move fast.** Stand up real agents in minutes, not weeks.
* **Control cost.** See tokens and dollars per run.
* **Debug cleanly.** Read the full transcript, not a black box.
* **Scale simply.** Treat agents as tools. Chain them when it helps.

> **One mental model:** *schema-in → action calls → formatted exit*.
> Agents and actions share the same computational shape. That makes composition boring—in the good way.

---

## Why Jetflow (vs CrewAI/LangChain)

A lightweight, developer-first agent toolkit for real applications. LLM-agnostic, easy to set up and debug, and flexible from single agents to multi-agent chains.

| Dimension | Jetflow | CrewAI | LangChain |
|---|---|---|---|
| Target user | Developers integrating agents into apps | Non-dev “crew” workflows | Broad framework users |
| Abstraction | Low-level, code-first | High-level roles/crews | Many abstractions (chains/graphs) |
| Architecture | Explicit tools + short loops | Multi-agent by default | Varies by components |
| Setup/Debug | Minutes; small surface; full transcript | Heavier config/orchestration | Larger surface; callbacks/tools |
| LLM support | Vendor-neutral (OpenAI, Anthropic, pluggable) | Provider adapters | Large ecosystem |
| Orchestration | Single, multi-agent, sequential agent chains | Teams/crews | Chains, agents, graphs |

## Install

```bash
pip install "jetflow[openai]"      # OpenAI
pip install "jetflow[anthropic]"   # Anthropic
pip install "jetflow[all]"         # Both
```

```bash
export OPENAI_API_KEY=...
export ANTHROPIC_API_KEY=...
```

**Async support:** Full async/await API available. Use `AsyncAgent`, `AsyncChain`, and `@async_action`.

---

## Quick Start 1 — Single Agent

Typed tool → short loop → visible cost.

```python
from pydantic import BaseModel, Field
from jetflow import Agent, action
from jetflow.clients.openai import OpenAIClient

class Calculate(BaseModel):
    """Evaluate a safe arithmetic expression"""
    expression: str = Field(description="e.g. '25 * 4 + 10'")

@action(schema=Calculate)
def calculator(p: Calculate) -> str:
    env = {"__builtins__": {}}
    fns = {"abs": abs, "round": round, "min": min, "max": max, "sum": sum, "pow": pow}
    return str(eval(p.expression, env, fns))

agent = Agent(
    client=OpenAIClient(model="gpt-5"),
    actions=[calculator],
    system_prompt="Answer clearly. Use tools when needed."
)

resp = agent.run("What is 25 * 4 + 10?")
print(resp.content)                       # -> "110"
print(f"Cost: ${resp.usage.estimated_cost:.4f}")
```

**Why teams use this:** strong schemas reduce junk calls, a short loop keeps latency predictable, and you see spend immediately.

---

## Quick Start 2 — Multi-Agent (agents as tools)

Let a **fast** model gather facts; let a **strong** model reason. Child agents return **one formatted result** via an exit action.

```python
from pydantic import BaseModel
from jetflow import Agent, action
from jetflow.clients.openai import OpenAIClient

# Child agent: research → returns a concise note
class ResearchNote(BaseModel):
    summary: str
    sources: list[str]
    def format(self) -> str:
        return f"{self.summary}\n\n" + "\n".join(f"- {s}" for s in self.sources)

@action(schema=ResearchNote, exit=True)
def FinishedResearch(note: ResearchNote) -> str:
    return note.format()

researcher = Agent(
    client=OpenAIClient(model="gpt-5-mini"),
    actions=[web_search, FinishedResearch],  # web_search: your own search tool
    system_prompt="Search broadly. Deduplicate. Return concise notes.",
    require_action=True
)

# Parent agent: deep analysis over the returned note
class FinalReport(BaseModel):
    headline: str
    bullets: list[str]
    def format(self) -> str:
        return f"{self.headline}\n\n" + "\n".join(f"- {b}" for b in self.bullets)

@action(schema=FinalReport, exit=True)
def Finished(report: FinalReport) -> str:
    return report.format()

analyst = Agent(
    client=OpenAIClient(model="gpt-5"),
    actions=[researcher.to_action("research", "Search and summarize"), Finished],
    system_prompt="Use research notes. Quantify impacts. Be precise.",
    require_action=True
)

resp = analyst.run("Compare NVDA vs AMD inference margins using latest earnings calls.")
print(resp.content)
```

**What this buys you:** fast models scout, strong models conclude; strict boundaries prevent prompt bloat; parents get one crisp payload per child.

---

## Quick Start 3 — Agent Chains (shared transcript, sequential hand-off)

Run agents **in order** over the **same** message history. Classic "fast search → slow analysis".

```python
from jetflow import Agent, Chain
from jetflow.clients.openai import OpenAIClient

search_agent = Agent(
    client=OpenAIClient(model="gpt-5-mini"),
    actions=[web_search, FinishedResearch],  # web_search: your own search tool
    system_prompt="Fast breadth-first search.",
    require_action=True
)

analysis_agent = Agent(
    client=OpenAIClient(model="gpt-5"),
    actions=[calculator, Finished],  # calculator: from Quick Start 1
    system_prompt="Read prior messages. Analyze. Show working.",
    require_action=True
)

chain = Chain([search_agent, analysis_agent])
resp = chain.run("Find ARM CPU commentary in recent earnings calls, then quantify margin impacts.")
print(resp.content)
print(f"Total cost: ${resp.usage.estimated_cost:.4f}")
```

**Why chains win:** you share context only when it compounds value, swap models per stage to balance speed and accuracy, and keep each agent narrowly focused.

---

## Async Support

Full async/await API. Same patterns, async primitives.

```python
from jetflow import AsyncAgent, AsyncChain, async_action
from jetflow.clients.openai import OpenAIClient

@async_action(schema=Calculate)  # reuses the Calculate schema from Quick Start 1
async def async_calculator(p: Calculate) -> str:
    return str(eval(p.expression, {"__builtins__": {}}, {}))  # restricted eval, as in Quick Start 1

agent = AsyncAgent(
    client=OpenAIClient(model="gpt-5"),
    actions=[async_calculator]
)

resp = await agent.run("What is 25 * 4 + 10?")
```

**Use async when:** making concurrent API calls, handling many agents in parallel, or building async web services.
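The fan-out case can be sketched with `asyncio.gather`. The stub coroutine below stands in for `await agent.run(...)` so the sketch runs on its own; with a real `AsyncAgent`, you would gather the `agent.run(...)` awaitables directly.

```python
import asyncio

async def run_agent(question: str) -> str:
    """Stub for AsyncAgent.run -- the real call is `await agent.run(question)`."""
    await asyncio.sleep(0.01)  # simulate model/API latency
    return f"answer to: {question}"

async def main() -> list:
    questions = ["What is 2 + 2?", "What is 3 * 3?", "What is 10 / 5?"]
    # All three calls run concurrently; total wall time is ~one call, not three.
    return await asyncio.gather(*(run_agent(q) for q in questions))

answers = asyncio.run(main())
print(answers)
```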

---

## Streaming

Stream events in real time as the agent executes. Useful for UI updates, progress bars, and live feedback.

```python
from jetflow import ContentDelta, ActionStart, ActionEnd, MessageEnd

with agent.stream("What is 25 * 4 + 10?") as events:
    for event in events:
        if isinstance(event, ContentDelta):
            print(event.delta, end="", flush=True)  # Stream text as it arrives

        elif isinstance(event, ActionStart):
            print(f"\n[Calling {event.name}...]")

        elif isinstance(event, ActionEnd):
            print(f"✓ {event.name}({event.body})")

        elif isinstance(event, MessageEnd):
            final = event.message  # Complete message with all content
```

**Two modes:**
- **`mode="deltas"`** (default): Stream granular events (ContentDelta, ActionStart, ActionDelta, ActionEnd)
- **`mode="messages"`**: Stream only complete Message objects (MessageEnd events)
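Message mode drops the granular deltas, so the consuming loop only ever sees complete messages. A minimal sketch, with a stub generator standing in for `agent.stream(prompt, mode="messages")` and a stand-in `MessageEnd` class (the real event type is imported from jetflow as above):

```python
from dataclasses import dataclass

@dataclass
class MessageEnd:          # stand-in for jetflow's MessageEnd event
    message: str

def fake_stream():         # stands in for agent.stream(prompt, mode="messages")
    yield MessageEnd("The answer is 110.")

# No ContentDelta/ActionStart branches needed -- every event is a full message.
messages = [e.message for e in fake_stream() if isinstance(e, MessageEnd)]
print(messages[-1])
```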

**Works for chains too:**
```python
with chain.stream("Research and analyze") as events:
    for event in events:
        if isinstance(event, ContentDelta):
            print(event.delta, end="")
```

---

## Why Jetflow (in one breath)

* **Fewer moving parts.** Agents, actions, messages—nothing else.
* **Deterministic endings.** Use `require_action=True` + a `format()` exit to get one reliable result.
* **Real observability.** Full transcript + token and dollar accounting.
* **Composability that sticks.** Treat agents as tools; add chains when you need shared context.
* **Provider-agnostic.** OpenAI + Anthropic with matching streaming semantics.

---

## Production in 60 Seconds

* **Guard exits.** For anything that matters, set `require_action=True` and finish with a formattable exit action.
* **Budget hard-stops.** Choose `max_iter` and fail closed; treat errors as tool messages, not exceptions.
* **Pick models per stage.** Cheap for search/IO, strong for reasoning, writer for polish.
* **Log the transcript.** Store `response.messages` and `response.usage` for repro and cost tracking.
* **Test like code.** Snapshot transcripts for golden tests; track cost deltas PR-to-PR.
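The transcript-logging bullet can be sketched as a JSONL append. `log_run` and its field names are illustrative, assuming `resp.messages` serializes to a list of dicts and `resp.usage.estimated_cost` exists as shown in the quick starts:

```python
import json, time

def log_run(prompt: str, messages: list, estimated_cost: float,
            path: str = "runs.jsonl") -> None:
    """Append one run record as a JSON line for repro and cost tracking."""
    record = {"ts": time.time(), "prompt": prompt,
              "messages": messages, "estimated_cost": estimated_cost}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# In practice: log_run(prompt, resp.messages, resp.usage.estimated_cost)
log_run("What is 25 * 4 + 10?",
        [{"role": "user", "content": "What is 25 * 4 + 10?"},
         {"role": "assistant", "content": "110"}],
        0.0004)

last = json.loads(open("runs.jsonl").read().splitlines()[-1])
print(last["estimated_cost"])
```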

---

## Built-in Actions

Jetflow ships with one built-in action: **safe Python execution**.

```python
from jetflow.actions import python_exec

agent = Agent(
    client=OpenAIClient(model="gpt-5"),
    actions=[python_exec]
)

resp = agent.run("Calculate compound interest: principal=10000, rate=0.05, years=10")
```

Variables persist across calls. Perfect for data analysis workflows.
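Jetflow's own implementation of `python_exec` isn't shown here, but the persistence technique is simple: reuse one namespace dict across `exec()` calls, so variables defined in one call are visible in the next. A minimal sketch:

```python
import io, contextlib

namespace: dict = {}  # shared across calls -- this is what makes variables persist

def python_exec_sketch(code: str) -> str:
    """Run code in the shared namespace and return its captured stdout."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, namespace)
    return buf.getvalue()

python_exec_sketch("principal, rate, years = 10000, 0.05, 10")
out = python_exec_sketch("print(round(principal * (1 + rate) ** years, 2))")
print(out)  # -> 16288.95
```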

---

## Docs

📚 [Full Documentation](https://jetflow.readthedocs.io)

- [Quickstart](https://jetflow.readthedocs.io/quickstart) — 5-minute tutorial
- [Single Agent](https://jetflow.readthedocs.io/single-agent) — Actions, control flow, debugging
- [Composition](https://jetflow.readthedocs.io/composition) — Agents as tools
- [Chains](https://jetflow.readthedocs.io/chains) — Multi-stage workflows
- [API Reference](https://jetflow.readthedocs.io/api) — Complete API docs

---

## License

MIT © 2025 Lucas Astorian

            
