| Field | Value |
|-------|-------|
| Name | llmgine |
| Version | 0.0.1 |
| Summary | An application framework for building LLM-powered applications. |
| home_page | None |
| upload_time | 2025-10-06 23:24:29 |
| maintainer | None |
| docs_url | None |
| author | None |
| author_email | Nathan Luo <nathanluo13@gmail.com> |
| requires_python | <4.0,>=3.11 |
| license | None |
| keywords | python |
| VCS | [github.com/nathan-luo/llmgine](https://github.com/nathan-luo/llmgine) |
| bugtrack_url | None |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# 🌌 **LLMgine**
LLMgine is a _pattern-driven_ framework for building **production-grade, tool-augmented LLM applications** in Python.
It offers a clean separation between _**engines**_ (conversation logic), _**models/providers**_ (LLM back-ends), _**tools**_ (function calling with **MCP support**), a streaming **message-bus** for commands & events, and opt-in **observability**.
Think _FastAPI_ for web servers or _Celery_ for tasks: LLMgine plays the same role for complex, chat-oriented AI.
---
## ✨ Feature Highlights
| Area | What you get | Key files |
|------|--------------|-----------|
| **Engines** | Plug-n-play `Engine` subclasses (`SinglePassEngine`, `ToolChatEngine`, …) with session isolation, tool-loop orchestration, and CLI front-ends | `engines/*.py`, `src/llmgine/llm/engine/` |
| **Message Bus** | Async **command bus** (1 handler) + **event bus** (N listeners) + **sessions** for scoped handlers | `src/llmgine/bus/` |
| **Tooling** | Declarative function-to-tool registration, multi-provider JSON-schema parsing (OpenAI, Claude, DeepSeek), async execution pipeline, **MCP integration** for external tool servers | `src/llmgine/llm/tools/` |
| **Providers / Models** | Wrapper classes for OpenAI, OpenRouter, Gemini 2.5 Flash, etc., _without locking you in_ | `src/llmgine/llm/providers/`, `src/llmgine/llm/models/` |
| **Unified Interface** | Single API for OpenAI, Anthropic, and Gemini: switch providers by changing the model name | `src/llmgine/unified/` |
| **Context Management** | Simple and in-memory chat history managers, event-emitting for retrieval/update | `src/llmgine/llm/context/` |
| **UI** | Rich-powered interactive CLI (`EngineCLI`) with live spinners, confirmation prompts, tool result panes | `src/llmgine/ui/cli/` |
| **Observability** | Console + JSONL file handlers, per-event metadata, easy custom sinks | `src/llmgine/observability/` |
| **Bootstrap** | One-liner `ApplicationBootstrap` that wires logging, bus startup, and observability | `src/llmgine/bootstrap.py` |
---
## 🏗️ High-Level Architecture
```mermaid
flowchart TD
    %% Nodes
    AppBootstrap["ApplicationBootstrap"]
    Bus["MessageBus<br/>(async loop)"]
    Obs["Observability<br/>Handlers"]
    Eng["Engine(s)"]
    TM["ToolManager"]
    Tools["Local Tools"]
    MCP["MCP Servers"]
    Session["BusSession"]
    CLI["CLI / UI"]

    %% Edges
    AppBootstrap -->|starts| Bus

    Bus -->|events| Obs
    Bus -->|commands| Eng
    Bus -->|events| Session

    Eng -- status --> Bus
    Eng -- tool_calls --> TM

    TM -- executes --> Tools
    TM -. "MCP Protocol" .-> MCP
    Tools -- ToolResult --> CLI
    MCP -. "External Tools" .-> CLI

    Session --> CLI
```
*Every component communicates _only_ through the bus, so engines, tools, and UIs remain fully decoupled.*
---
## 🚀 Quick Start
### 1. Install
```bash
git clone https://github.com/your-org/llmgine.git
cd llmgine
python -m venv .venv && source .venv/bin/activate
pip install -e ".[openai]"   # extras: openai, openrouter, mcp, dev, …
export OPENAI_API_KEY="sk-…" # or OPENROUTER_API_KEY / GEMINI_API_KEY
```
### 2. Run the demo CLI
```bash
python -m llmgine.engines.single_pass_engine # pirate translator
# or
python -m llmgine.engines.tool_chat_engine # automatic tool loop
```
You'll get an interactive prompt with live status updates and tool execution logs.
---
## 🧑‍💻 Building Your Own Engine
```python
from llmgine.llm.engine.engine import Engine
from llmgine.messages.commands import Command, CommandResult
from llmgine.bus.bus import MessageBus
# `Status` below is a status-event type; import it from your own events module.

class MyCommand(Command):
    prompt: str = ""

class MyEngine(Engine):
    def __init__(self, session_id: str):
        self.session_id = session_id
        self.bus = MessageBus()

    async def handle_command(self, cmd: MyCommand) -> CommandResult:
        await self.bus.publish(Status("thinking", session_id=self.session_id))
        # call LLM or custom logic here …
        answer = f"Echo: {cmd.prompt}"
        await self.bus.publish(Status("finished", session_id=self.session_id))
        return CommandResult(success=True, result=answer)

# Wire into CLI (reuse one engine instance for both registrations)
from llmgine.ui.cli.cli import EngineCLI

engine = MyEngine("demo")
chat = EngineCLI(session_id="demo")
chat.register_engine(engine)
chat.register_engine_command(MyCommand, engine.handle_command)
await chat.main()
```
---
## 🔧 Tool Integration
### Local Tools in 3 Lines
```python
from llmgine.llm.tools.tool import Parameter
from llmgine.engines.tool_chat_engine import ToolChatEngine

def get_weather(city: str):
    """Return current temperature for a city.

    Args:
        city: Name of the city
    """
    return f"{city}: 17 °C"

engine = ToolChatEngine(session_id="demo")
await engine.register_tool(get_weather)  # ← introspection magic ✨
```
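Under the hood, registration introspects the function signature and docstring and emits a provider-specific JSON schema. As a rough illustration, this is approximately what an OpenAI-format tool definition for `get_weather` would look like (the field layout follows OpenAI's function-calling spec; LLMgine's parser output may differ in detail):

```python
# Approximate OpenAI-format tool definition derived from get_weather's
# signature and docstring (illustrative; not LLMgine's exact output).
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return current temperature for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "Name of the city"},
            },
            "required": ["city"],
        },
    },
}
```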
The engine now follows the **OpenAI function-calling loop**:
```
User → Engine → LLM (asks to call get_weather) → ToolManager → get_weather()
         ↑                                                          ↓
         └───────────────────── context update ────────────────────┘
                                            (loops until no tool calls)
```
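For orientation, here is a minimal sketch of that loop written directly against the `openai` client. LLMgine's engine and `ToolManager` automate the equivalent steps; the model name and the single-tool dispatch are assumptions of this sketch, not LLMgine code:

```python
import json
from openai import OpenAI

client = OpenAI()
TOOLS = [get_weather_tool]  # schema from the sketch above

def run_tool_loop(messages: list) -> str:
    """Call the model repeatedly until it stops requesting tool calls."""
    while True:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; any tool-capable model works
            messages=messages,
            tools=TOOLS,
        )
        msg = response.choices[0].message
        if not msg.tool_calls:       # no tool calls left -> final answer
            return msg.content
        messages.append(msg)         # keep the assistant turn in context
        for call in msg.tool_calls:  # execute each requested call
            args = json.loads(call.function.arguments)
            result = get_weather(**args)  # single-tool demo dispatch
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": str(result),
            })
```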
### MCP (Model Context Protocol) Integration
Connect to external tool servers using the MCP protocol:
```python
from llmgine.llm.tools.tool_manager import ToolManager

# Initialize tool manager with MCP support
tool_manager = ToolManager()

# Register an MCP server (e.g., Notion integration)
await tool_manager.register_mcp_server(
    server_name="notion",
    command="python",
    args=["/path/to/notion_mcp_server.py"],
)

# MCP tools are now available alongside local tools
```
**MCP Features:**
- 🔌 **External Tool Servers**: Connect to any MCP-compatible tool server
- 🔄 **Dynamic Loading**: Tools are discovered and loaded at runtime
- 🎯 **Provider Agnostic**: Works with OpenAI, Anthropic, and Gemini tool formats
- ⚡ **Graceful Degradation**: Falls back silently if MCP dependencies are unavailable (see the sketch below)
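The graceful-degradation bullet usually comes down to an optional-import guard. A minimal sketch of that pattern, assuming only that `mcp` is the optional dependency (the wrapper function name is illustrative, not LLMgine's API):

```python
try:
    import mcp  # optional dependency
    MCP_AVAILABLE = True
except ImportError:
    MCP_AVAILABLE = False

async def register_mcp_server_if_available(tool_manager, **server_kwargs):
    """Register an MCP server only when the optional dependency is installed."""
    if not MCP_AVAILABLE:
        return  # silently skip; local tools keep working
    await tool_manager.register_mcp_server(**server_kwargs)
```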
**Prerequisites:**
```bash
pip install mcp # Optional: for MCP integration
```
---
## 📰 Message Bus in Depth
```python
from llmgine.bus.bus import MessageBus
from llmgine.bus.session import BusSession
from llmgine.messages.commands import Command, CommandResult
# `Event` is the event base class; import it from llmgine's events module.

bus = MessageBus()
await bus.start()

class Ping(Command): pass
class Pong(Event): msg: str = "pong!"

async def ping_handler(cmd: Ping):
    await bus.publish(Pong(session_id=cmd.session_id))
    return CommandResult(success=True)

with bus.create_session() as sess:
    sess.register_command_handler(Ping, ping_handler)
    sess.register_event_handler(Pong, lambda e: print(e.msg))
    await sess.execute_with_session(Ping())  # prints "pong!"
```
*Handlers are **auto-unregistered** when the `BusSession` exits, so nothing leaks.*
---
## 📊 Observability
Add structured logs with zero boilerplate:
```python
from llmgine.bootstrap import ApplicationBootstrap, ApplicationConfig
config = ApplicationConfig(
    enable_console_handler=True,
    enable_file_handler=True,
    log_level="debug",
)
await ApplicationBootstrap(config).bootstrap()
```
The observability system uses a standalone `ObservabilityManager` that:
- Operates independently of the message bus (no circular dependencies)
- Provides synchronous handlers (see note below)
- Supports console, file, and OpenTelemetry handlers
- Can be extended with custom handlers (see the sketch below)
*All events flow through the configured handlers to console and/or timestamped `logs/events_*.jsonl` files.*
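As a sketch of that extension point, here is a hypothetical custom sink. The handler shape (a synchronous callable receiving an event dict) matches the description above, but the class name, method name, and registration call are assumptions; check the Observability README for the real API:

```python
import json
import urllib.request

class WebhookAlertHandler:
    """Hypothetical custom sink: forward error-level events to a webhook."""

    def __init__(self, webhook_url: str):
        self.webhook_url = webhook_url

    def handle(self, event: dict) -> None:
        # Called synchronously for every event (see the performance note below).
        if event.get("level") != "error":
            return
        payload = json.dumps({"text": f"LLMgine error event: {event}"}).encode()
        request = urllib.request.Request(
            self.webhook_url,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)
```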
**⚠️ Performance Note**: The current implementation uses synchronous handlers, which can block the event loop in high-throughput scenarios. This is suitable for development and low-volume production use. See the [Observability System documentation](src/llmgine/observability/README.md#performance-considerations) for details and planned improvements.
For OpenTelemetry support:
```bash
pip install llmgine[opentelemetry]
```
---
## 📚 Documentation
Comprehensive documentation is available in the [`docs/`](docs/) directory:
- **[Documentation Index](docs/index.md)** - Start here for navigation
- **[Product Requirements Document](docs/prd.md)** - Original project requirements
- **[Brownfield Enhancement PRD](docs/brownfield-prd/index.md)** - Production-ready enhancements
- **[User Stories](docs/stories/index.md)** - Detailed implementation stories
- **[Engine Development Guide](programs/engines/engine_guide.md)** - How to build custom engines
- **[CLAUDE.md](CLAUDE.md)** - AI assistant instructions for contributors
### Component Documentation
- **[Message Bus](src/llmgine/bus/README.md)** - Event and command bus architecture
- **[Tools System](src/llmgine/llm/tools/README.md)** - Function calling, tool registration, and MCP integration
- **[Observability System](src/llmgine/observability/README.md)** - Standalone observability architecture
- **[Observability CLI](programs/observability-cli/README.md)** - CLI monitoring tool
- **[Observability GUI](programs/observability-gui/README.md)** - React-based monitoring interface
---
## 📁 Repository Layout (abridged)
```
llmgine/
│
├─ engines/              # Turn-key example engines (single-pass, tool chat, …)
└─ src/llmgine/
   ├─ bus/               # Message bus core + sessions
   ├─ llm/
   │  ├─ context/        # Chat history & context events
   │  ├─ engine/         # Engine base + dummy
   │  ├─ models/         # Provider-agnostic model wrappers
   │  ├─ providers/      # OpenAI, OpenRouter, Gemini, Dummy, …
   │  └─ tools/          # ToolManager, parser, register, types, MCP integration
   ├─ observability/     # Console & file handlers, log events
   └─ ui/cli/            # Rich-based CLI components
```
---
## 🏁 Roadmap
- [ ] **Streaming responses** with incremental event dispatch
- [ ] **WebSocket / FastAPI** front-end (drop-in replacement for CLI)
- [ ] **Persistent vector memory** layer behind `ContextManager`
- [ ] **Plugin system** for third-party Observability handlers
- [ ] **More providers**: Anthropic, Vertex AI, etc.
---
## 🤝 Contributing
1. Fork & create a feature branch
2. Ensure `pre-commit` passes (`ruff`, `black`, `isort`, `pytest`)
3. Open a PR with context + screenshots/GIFs if UI-related
---
## 📄 License
LLMgine is distributed under the **MIT License**; see [`LICENSE`](LICENSE) for details.
---
> _"Build architecturally sound LLM apps, not spaghetti code.
> Welcome to the engine room."_