| Field | Value |
| --- | --- |
| Name | llmling-agent |
| Version | 0.100.1 |
| Summary | A brand new AI framework. Fully async. Excellently typed. MCP & ACP Integration. Human in the loop. Unique messaging features. |
| Author | Philipp Temminghoff |
| Requires Python | >=3.12 |
| License | MIT License (Copyright (c) 2024, Philipp Temminghoff) |
| Upload time | 2025-10-06 21:29:47 |
# LLMling-Agent
### [Read the documentation!](https://phil65.github.io/llmling-agent/)
# đ Getting Started
LLMling Agent is a framework for creating and managing LLM-powered agents. It integrates with LLMling's resource system and provides structured interactions with language models.
## ⨠Unique Features
- đ Modern Python, written from the ground up for Python 3.12
- đ Easy consistent APIs
- đģ Pyodide-"compatible"
- đĄī¸ Complete agent definition via YAML files, including an extensive JSON schema to help with creating configurations.
- đ Leveraging the complete pydantic-based type-safe stack and bringing it to the multi-agent world
- đ Agent MCP server support, initialized when entering the async context.
- đī¸ Multi-modal support (currently Images and PDFs if model support is given)
- đž Storage providers to allow writing to local files, databases, etc. with many customizable backends. Log to SQL databases and pretty-print to a file according to your own wishes.
- đ§Š Support for creating "description prompts" for many common Python types and instances. Your agent understands common datatypes.
- đŽ Complete integrated command system to control agents from prompt-based interfaces
- đ Unique powerful connection-based messaging approach for object-oriented routing and observation.
- đ¯ Integration of Meta-Model system based on [LLMling-models](https://github.com/phil65/llmling-models), also configurable via YAML.
- đ Deep integration of structured responses into workflows and (generic) typing system.
- đ Response type definition via YAML. Structured response Agents can be defined in the agent config.
- đĄī¸ Capabilities system allowing runtime modifications and "special" commands (on-the-fly agent generation, history lookups)
- đ Complete database logging of Agent interactions including easy recovery based on query parameters.
- âī¸ pytest-inspired way to create agents from YAML in a type-safe manner. "Auto-populated signatures."
- đ Completely UPath-backed. All file operations under our control are routed through fsspec to allow referencing remote sources.
- đ Integrated prompt management system.
- đ§ Tasks, tools, and what else you can expect from an Agent framework.
- đī¸ No fixed dependencies on the super-heavy LLM libraries. Way faster startup than most other frameworks, and all I/O under our control is async.
- đĨ Easy human-in-the-loop interactions on multiple levels (complete "providers" or model-based, see llmling-models)
- đģ A CLI application with extensive slash command support to build agent flows interactively. Set up message connections via commands.
- âšī¸ The easiest way available to generate static websites, in combination with [MkNodes](https://github.com/phil65/mknodes) and [the corresponding MkDocs plugin](https://github.com/phil65/mkdocs_mknodes)
## đ Coming Soon
- đ¯ Built-in event system for reactive agent behaviors (file changes, webhooks, timed events)
- đĨī¸ Real-time monitoring via a Textual app, in a truly async manner. Talk to your agents while they are working and monitor their progress!
### Why LLMling-agent? đ¤
Why another framework, you may ask? The framework stands out through four core principles:
#### đĄī¸ Type Safety and Structure
Unlike other frameworks that rely on free-form text exchanges, LLMling-agent enforces type safety throughout the entire agent interaction chain.
From input validation to structured outputs, every data flow is typed and validated, making it significantly more reliable for production systems.
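A hedged sketch of what this looks like in practice (the `result_type` parameter name mirrors the YAML field shown later; the exact Python API may differ):
```python
from pydantic import BaseModel

from llmling_agent import Agent


class Analysis(BaseModel):
    severity: str
    issues: list[str]


async def analyze(code: str) -> Analysis:
    # result_type is assumed to mirror the YAML "result_type" field
    async with Agent(model="openai:gpt-5-mini", result_type=Analysis) as agent:
        result = await agent.run(f"Analyze this code: {code}")
        return result.data  # a validated Analysis instance, not free-form text
```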
#### đŦ Object-oriented async messaging and routing system
A powerful approach to messaging using Connection ("Talk") objects, which allows all kinds of new patterns for async agent communication.
#### âī¸ Rich Configuration System
While other frameworks require extensive Python code for setup, LLMling-agent introduces a comprehensive YAML configuration system.
This allows defining complex agent behaviors, capabilities, and interactions declaratively.
The configuration supports inheritance, composition, and strong validation, making it easier to manage large-scale agent deployments.
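For instance, inheritance (via the `inherits` field used in the extended example below) lets agents share a common base; the agent names here are illustrative:
```yaml
agents:
  base_agent:
    model: openai:gpt-5-mini
    retries: 2
  analyzer:
    inherits: base_agent  # reuses model and retries from base_agent
    system_prompts:
      - "You analyze code for potential issues."
```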
#### đ¤ Human-AI Collaboration
Instead of choosing between fully autonomous or human-controlled operations, LLMling-agent offers flexible human-in-the-loop integration.
From full human control to selective oversight of critical actions, or hooking in remotely via the network,
the framework makes it natural to build systems that combine AI capabilities with human supervision and interaction.
## Quick Start
The fastest way to start chatting with an AI:
```bash
# Start an ephemeral chat session (requires uv)
uvx "llmling-agent[default]" quickstart openai:gpt-5-mini
```
This creates a temporary agent ready for chat - no configuration needed!
The corresponding API keys need to be set as environment variables.
Use `help` to see what commands are at your disposal.
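For example, when using OpenAI or Anthropic models:
```bash
export OPENAI_API_KEY="sk-..."         # for openai: models
export ANTHROPIC_API_KEY="sk-ant-..."  # for anthropic: models
```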
## đ Quick Examples
Three ways to create a simple agent flow:
### Python Version
```python
import asyncio

from llmling_agent import AgentPool


async def main():
    async with AgentPool() as pool:
        # Create browser assistant
        browser = await pool.add_agent(
            "browser",
            system_prompt="Open Wikipedia pages matching the topics you receive.",
            model="openai:gpt-5-mini",
            tools=["webbrowser.open"],
        )
        # Create main agent and connect
        agent = await pool.add_agent("assistant", model="openai:gpt-5-mini")
        connection = agent >> browser  # this sets up a permanent connection
        await agent.run("Tell us a random major city! Just one word!")
        print(connection.stats.total_cost)  # Check cost of this connection


asyncio.run(main())
```
This flow will:
- Ask the first agent to name a major city
- Have the second agent open a related Wikipedia page using that output
### YAML Version
```yaml
# agents.yml
agents:
  browser:
    model: openai:gpt-5-mini
    system_prompts:
      - "Open Wikipedia pages matching the topics you receive."
    tools:
      - type: import
        name: open_url
        import_path: webbrowser.open

  assistant:
    model: openai:gpt-5-mini
    connections:  # this forwards any output to the 2nd agent
      - type: node
        name: browser
```
```bash
llmling-agent run assistant --config agents.yml "What's your favorite holiday destination?"
```
### CLI Version (Interactive using slash command system)
```bash
# Start session
llmling-agent quickstart openai:gpt-5-mini
# Create browser assistant
/create-agent browser --system-prompt "Open Wikipedia pages matching the topics you receive." --tools webbrowser.open
# Connect the agents
/connect browser
# Speak to the main agent, which will auto-forward.
> What's your favorite holiday destination?
```
### YAML configuration
While you can define agents with three lines of YAML (or completely programmatically, or via the CLI),
you can also create agents as well as their connections, agent tasks, storage providers, and much more via YAML.
This is the extended version:
```yaml
# agents.yml
agents:
  analyzer:
    provider:  # Provider configuration
      type: "pydantic_ai"  # Provider type discriminator
      name: "PydanticAI Provider"  # Optional provider name
      end_strategy: "early"  # "early" | "complete" | "confirm"
      model:  # Model configuration
        type: "fallback"  # Lots of special "meta-models" included out of the box!
        models:  # Try models in sequence
          - "openai:gpt-5"
          - "openai:gpt-5-mini"
          - "anthropic:claude-sonnet-4-0"
      output_retries: 3  # Max retries for result validation
      defer_model_check: false  # Whether to defer model evaluation
      validation_enabled: true  # Whether to validate outputs
      allow_text_fallback: true  # Accept plain text when validation fails

    name: "Code Analyzer"  # Display name
    inherits: "base_agent"  # Optional parent config to inherit from
    description: "Code analysis specialist"
    debug: false
    retries: 1  # Number of retries for failed operations

    # Structured output
    result_type:
      type: "inline"  # or "import" for Python types
      fields:
        severity:
          type: "str"
          description: "Issue severity"
        issues:
          type: "list[str]"
          description: "Found issues"

    # Core behavior
    system_prompts:
      - "You analyze code for potential issues and improvements."

    # Session & History
    session:
      name: "analysis_session"
      since: "1h"  # Only load messages from last hour
      roles: ["user", "assistant"]  # Only specific message types

    # Capabilities (role-based permissions)
    capabilities:
      can_delegate_tasks: true
      can_load_resources: true
      can_register_tools: true
      history_access: "own"  # "none" | "own" | "all"

    # Environment configuration
    environment:
      type: "inline"  # or "file" for external config
      tools:
        analyze_complexity:
          import_path: "radon.complexity"
          description: "Calculate code complexity"
        run_linter:
          import_path: "pylint.lint"
          description: "Run code linting"
      resources:
        coding_standards:
          type: "text"
          content: "PEP8 guidelines..."

    # Knowledge sources
    knowledge:
      paths: ["docs/**/*.md"]  # Glob patterns for files
      resources:
        - type: "repository"
          url: "https://github.com/user/repo"
      prompts:
        - type: "file"
          path: "prompts/analysis.txt"

    # MCP Server integration
    mcp_servers:
      - type: "stdio"
        command: "python"
        args: ["-m", "mcp_server"]
        environment:
          DEBUG: "1"
      - "python -m other_server"  # shorthand syntax

    # Worker agents (specialists)
    workers:
      - type: agent
        name: "formatter"
        reset_history_on_run: true
        pass_message_history: false
        share_context: false
      - "linter"  # shorthand syntax

    # Message forwarding
    connections:
      - type: node
        name: "reporter"
        connection_type: "run"  # "run" | "context" | "forward"
        priority: 1
        queued: true
        queue_strategy: "latest"
        transform: "my_module.transform_func"
        wait_for_completion: true
        filter_condition:  # When to forward messages
          type: "word_match"
          words: ["error", "warning"]
          case_sensitive: false
        stop_condition:  # When to disconnect
          type: "message_count"
          max_messages: 100
          count_mode: "total"  # or "per_agent"
        exit_condition:  # When to exit application
          type: "cost_limit"
          max_cost: 10.0

    # Event triggers
    triggers:
      - type: "file"
        name: "code_change"
        paths: ["src/**/*.py"]
        extensions: [".py"]
        debounce: 1000  # ms

teams:
  # Complex workflows via YAML
  full_pipeline:
    mode: sequential
    members:
      - analyzer
      - planner
    connections:
      - type: node
        name: final_reviewer
        wait_for_completion: true
      - type: file
        path: "reports/{date}_workflow.txt"

# Response type definitions
responses:
  AnalysisResult:
    response_schema:
      type: "inline"
      description: "Code analysis result format"
      fields:
        severity: {type: "str"}
        issues: {type: "list[str]"}

  ComplexResult:
    type: "import"
    import_path: "myapp.types.ComplexResult"

# Storage configuration
storage:
  providers:
    - type: "sql"
      url: "sqlite:///history.db"
      pool_size: 5
    - type: "text_file"
      path: "logs/chat.log"
      format: "chronological"
  log_messages: true
  log_conversations: true
  log_tool_calls: true
  log_commands: true

# Pre-defined jobs
jobs:
  analyze_code:
    name: "Code Analysis"
    description: "Analyze code quality"
    prompt: "Analyze this code: {code}"
    required_return_type: "AnalysisResult"
    knowledge:
      paths: ["src/**/*.py"]
    tools: ["analyze_complexity", "run_linter"]
```
You can use an Agents manifest in multiple ways:
- Use it for CLI sessions
```bash
llmling-agent chat --config agents.yml system_checker
```
- Run it using the CLI
```bash
llmling-agent run --config agents.yml my_agent "Some prompt"
```
- Use the defined Agent programmatically
```python
from llmling_agent import AgentPool

async with AgentPool("agents.yml") as pool:
    agent = pool.get_agent("my_agent")
    result = await agent.run("User prompt!")
    print(result.data)
```
- Start *watch mode* and only react to triggers
```bash
llmling-agent watch --config agents.yml
```
### Agent Pool: Multi-Agent Coordination
The `AgentPool` allows multiple agents to work together on tasks. Here's a practical example of parallel file downloading:
```yaml
# agents.yml
agents:
  file_getter:
    model: openai:gpt-5-mini
    tools:
      - type: import
        import_path: llmling_agent_tools.download_file  # a simple httpx-based async callable
    system_prompts:
      - |
        You are a download specialist. Just use the download_file tool
        and report its results. No explanations needed.

  overseer:
    capabilities:
      can_delegate_tasks: true  # these capabilities are available as tools for the agent
      can_list_agents: true
    model: openai:gpt-5-mini
    system_prompts:
      - |
        You coordinate downloads using available agents.
        1. Check out the available agents and assign each of them the download task
        2. Report the results.
```
```python
from llmling_agent.delegation import AgentPool


async def main():
    async with AgentPool("agents.yml") as pool:
        # first we create two agents based on the file_getter template
        file_getter_1 = pool.get_agent("file_getter")
        file_getter_2 = pool.get_agent("file_getter")
        # then we form a team and execute the task
        team = file_getter_1 & file_getter_2
        responses = await team.run_parallel("Download https://example.com/file.zip")

        # Or let the overseer orchestrate using its capabilities.
        overseer = pool.get_agent("overseer")
        result = await overseer.run(
            "Download https://example.com/file.zip by delegating to all workers available!"
        )
```
## Message System
LLMling provides a unified messaging system based on a simple but powerful concept: Every entity that can process messages is a message node. This creates a clean, composable architecture where all nodes:
1. Share a common interface:
- `run()` -> Returns ChatMessage
- `connect_to()` -> Creates connections
- `message_received`: Message-received signal
- `message_sent`: Message-sent signal
2. Can be freely connected:
```python
# Any message node can connect to any other
node_a.connect_to(node_b)
node_a >> node_b # Shorthand syntax
```
The framework provides three types of message nodes:
1. **Agents**: Individual LLM-powered actors
```python
# Single agent processing
analyzer = pool.get_agent("analyzer")
result = await analyzer.run("analyze this")
```
2. **Teams**: Groups for parallel execution
```python
# Create team using & operator
team = analyzer & planner & executor
results = await team.run("handle this task")
```
3. **TeamRuns**: Sequential execution chains
```python
# Create chain using | operator
chain = analyzer | planner | executor
results = await chain.run("process in sequence")
```
The beauty of this system is that these nodes are completely composable:
```python
def process_text(text: str) -> str:
    return text.upper()

# Nested structures work naturally
team_1 = analyzer & planner  # Team
team_2 = validator & reporter  # Another team
chain = team_1 | process_text | team_2  # Teams and callables in a chain

# Complex workflows become intuitive
(analyzer & planner) | validator  # Team followed by validator
team_1 | (team_2 & agent_3)  # Chain with parallel components

# Every node has the same core interface
async for message in node.run_iter("prompt"):
    print(message.content)

# Monitoring works the same for all types
print(f"Messages: {node.stats.message_count}")
print(f"Cost: ${node.stats.total_cost:.2f}")
```
(Note: the operator overloading is just syntactic sugar. In general, teams should be created
using `pool.create_team()` / `pool.create_team_run()` or `agent.connect_to()` / `team.connect_to()`.)
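A minimal sketch of the explicit form (a hypothetical call pattern; the exact signatures of `pool.create_team()` and `pool.create_team_run()` may differ):
```python
# Hypothetical usage, assuming teams can be built from agent names in the pool.
team = pool.create_team(["analyzer", "planner"])       # parallel group
chain = pool.create_team_run(["analyzer", "planner"])  # sequential chain
```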
All message nodes support the same execution patterns:
```python
# Single execution
result = await node.run("prompt")

# Streaming
async with node.run_stream("prompt") as stream:
    async for chunk in stream:
        print(chunk)

# Iterator
async for message in node.run_iter("prompt"):
    print(message)

# Background execution
stats = await node.run_in_background("prompt", max_count=5)
await node.wait()  # Wait for completion

# Nested teams work naturally
team_1 = analyzer & planner  # First team
team_2 = validator & reporter  # Second team
parallel_team = Team([team_1, agent_3, team_2])  # Team containing teams!

# This means you can create sophisticated structures:
result = await parallel_team.run("analyze this")  # Will execute:
# - team_1 (analyzer & planner) in parallel
# - agent_3 in parallel
# - team_2 (validator & reporter) in parallel

# And still use all the standard patterns:
async for msg in parallel_team.run_iter("prompt"):
    print(msg.content)

# With full monitoring capabilities:
print(f"Total cost: ${parallel_team.stats.total_cost:.2f}")
```
This unified system makes it easy to:
- Build complex workflows
- Monitor message flow
- Compose nodes in any combination
- Use consistent patterns across all node types
Each message in the system carries content, metadata, and execution information, providing a consistent interface across all types of interactions. See [Message System](docs/concepts/messages.md) for details.
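A hedged sketch of inspecting a returned message (only `content` is used elsewhere in this README; the other attribute name is an assumption):
```python
message = await agent.run("Summarize the report")
print(message.content)   # the message payload, as used in the run_iter examples
print(message.metadata)  # assumed attribute: per-message metadata
```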
### Advanced Connection Features
Connections between agents are highly configurable and support various patterns:
```python
# Basic connection in shorthand form
connection = agent_a >> agent_b  # Forward all messages

# Extended setup: queued connection (manual processing)
connection = agent_a.connect_to(
    agent_b,
    queued=True,
    queue_strategy="latest",  # or "concat", "buffer"
)
# messages can queue up now
await connection.trigger(optional_additional_prompt)  # Process queued messages sequentially

# Filtered connection (example: filter by keyword):
connection = agent_a.connect_to(
    agent_b,
    filter_condition=lambda ctx: "keyword" in ctx.message.content,
)

# Conditional disconnection (example: disconnect after cost limit):
connection = agent_a.connect_to(
    agent_b,
    stop_condition=lambda ctx: ctx.stats.total_cost > 1.0,
)

# Message transformations
async def transform_message(message: str) -> str:
    return f"Transformed: {message}"

connection = agent_a.connect_to(agent_b, transform=transform_message)

# Connection statistics
print(f"Messages processed: {connection.stats.message_count}")
print(f"Total tokens: {connection.stats.token_count}")
print(f"Total cost: ${connection.stats.total_cost:.2f}")
```
The two basic programmatic patterns of this library are:
1. Tree-like workflows (hierarchical):
```python
# Can be modeled purely with teams/chains using & and |
team_a = agent1 & agent2 # Parallel branch 1
team_b = agent3 & agent4 # Parallel branch 2
chain = preprocessor | team_a | postprocessor # Sequential with team
nested = Team([chain, team_b]) # Hierarchical nesting
```
2. Graph workflows (DAGs, and even cycles):
```python
# Needs explicit signal connections for non-tree patterns
analyzer = Agent("analyzer")
planner = Agent("planner")
executor = Agent("executor")
validator = Agent("validator")
# Can't model this with just teams - need explicit connections
analyzer.connect_to(planner)
analyzer.connect_to(executor) # Same source to multiple targets
planner.connect_to(validator)
executor.connect_to(validator) # Multiple sources to same target
validator.connect_to(executor) # Cyclic connections
```
Both patterns can be set up intuitively for both teams and agents in the YAML file, as the sketch below shows.
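For example, the fan-out/fan-in part of the graph above could be declared with the `connections` syntax shown earlier (agent definitions abbreviated):
```yaml
agents:
  analyzer:
    connections:
      - type: node
        name: planner
      - type: node
        name: executor
  planner:
    connections:
      - type: node
        name: validator
  executor:
    connections:
      - type: node
        name: validator
```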
### Human-in-the-Loop Integration
LLMling-Agent offers multiple levels of human integration:
```python
# Provider-level human integration
from llmling_agent import Agent
async with Agent(provider="human") as agent:
result = await agent.run("We can ask ourselves and be part of Workflows!")
```
```yaml
# Or via YAML configuration
agents:
  human_agent:
    provider: "human"  # Complete human control
    timeout: 300  # Optional timeout in seconds
    show_context: true  # Show conversation context
```
You can also use LLMling-models for more sophisticated human integration:
- Remote human operators via network
- Hybrid human-AI workflows
- Input streaming support
- Custom UI integration
### Capability System
Fine-grained control over agent permissions:
```python
agent.capabilities.can_load_resources = True
agent.capabilities.history_access = "own" # "none" | "own" | "all"
```
```yaml
agents:
  restricted_agent:
    capabilities:
      can_delegate_tasks: false
      can_register_tools: false
      history_access: "none"
```
### Event-Driven Automation
React to file changes, webhooks, and more:
```python
# File watching
agent.events.add_file_watch(paths=["src/**/*.py"], debounce=1000)
# Webhook endpoint
agent.events.add_webhook("/hooks/github", port=8000)
# Also included: time-based and email
```
### Multi-Modal Support
Handle images and PDFs alongside text (depends on provider / model support)
```python
import pathlib

import PIL.Image
from llmling_agent import Agent

async with Agent(...) as agent:
    result = await agent.run("What's in this image?", PIL.Image.open("image.jpg"))
    result = await agent.run("What's in this image?", pathlib.Path("image.jpg"))
    result = await agent.run("What's in this PDF?", pathlib.Path("document.pdf"))
```
### Command System
Extensive slash commands available in all interfaces:
```bash
/list-tools # Show available tools
/enable-tool tool_name # Enable specific tool
/connect other_agent # Forward results
/model gpt-5 # Switch models
/history search "query" # Search conversation
/stats # Show usage statistics
```
### Storage & Analytics
All interactions are tracked using one or more configurable storage providers.
Information can be fetched programmatically or via the CLI.
```python
# Query conversation history
messages = await agent.conversation.filter_messages(
    SessionQuery(
        since="1h",
        contains="error",
        roles={"user", "assistant"},
    )
)

# Get usage statistics
stats = await agent.context.storage.get_conversation_stats(
    group_by="model",
    period="24h",
)
```
```bash
# View recent conversations
llmling-agent history show
llmling-agent history show --period 24h # Last 24 hours
llmling-agent history show --query "database" # Search content
# View usage statistics
llmling-agent history stats # Basic stats
llmling-agent history stats --group-by model # Model usage
llmling-agent history stats --group-by day # Daily breakdown
```
## đ MkDocs Integration
In combination with [MkNodes](https://github.com/phil65/mknodes) and the [MkDocs plugin](https://github.com/phil65/mkdocs_mknodes),
you can easily generate static documentation websites with a few lines of code.
```python
import mknodes as mk

from llmling_agent import Agent


@nav.route.page("Feature XYZ", icon="oui:documentation", hide="toc")
def gen_docs(page: mk.MkPage):
    """Generate docs using agents."""
    agent = Agent[None](model="openai:gpt-5-mini")
    page += mk.MkAdmonition("MkNodes includes all kinds of Markdown objects to generate docs!")
    source_code = load_source_code_from_folder(...)
    page += mk.MkCode(source_code)  # if you want to display the source code
    result = agent.run_sync(
        "Describe Feature XYZ in MkDocs-compatible markdown, including examples.",
        source_code,
    )
    page += result.content
```
This diagram shows the main components of the LLMling Agent framework:
```mermaid
classDiagram
    %% Core relationships
    AgentsManifest --* AgentConfig : contains
    AgentsManifest --> AgentPool : creates
    AgentPool --* Agent : manages
    FileEnvironment --> Config : loads
    InlineEnvironment --* Config : contains
    Config --> RuntimeConfig : initialized as
    Agent --> RuntimeConfig : uses
    AgentConfig --> FileEnvironment : uses
    AgentConfig --> InlineEnvironment : uses
    Agent --* ToolManager : uses
    Agent --* ConversationManager : uses

    class Config ["[LLMling Core] Config"] {
        Base configuration format defining tools, resources, and settings
        +
        +tools: dict
        +resources: dict
        +prompts: dict
        +global_settings: GlobalSettings
        +from_file()
    }

    class RuntimeConfig ["[LLMling Core] RuntimeConfig"] {
        Runtime state of a config with instantiated components
        +
        +config: Config
        +tools: dict[str, Tool]
        +resources: dict[str, Resource]
        +prompts: dict[str, BasePrompt]
        +register_tool()
        +load_resource()
    }

    class AgentsManifest {
        Complete agent configuration manifest defining all available agents
        +
        +responses: dict[str, ResponseDefinition]
        +agents: dict[str, AgentConfig]
    }

    class AgentConfig {
        Configuration for a single agent including model, environment and capabilities
        +
        +name: str
        +model: str | Model
        +environment: AgentEnvironment
        +capabilities: Capabilities
        +system_prompts: list[str]
        +get_config(): Config
    }

    class FileEnvironment {
        Environment loaded from external YAML file
        +
        +type: "file"
        +uri: str
    }

    class InlineEnvironment {
        Direct environment configuration without external files
        +
        +type: "inline"
        +tools: ...
        +resources: ...
        +prompts: ...
    }

    class AgentPool {
        Manager for multiple initialized agents
        +
        +manifest: AgentsManifest
        +agents: dict[str, Agent]
        +open()
    }

    class Agent {
        Main agent class handling LLM interactions and tool usage
        +
        +runtime: RuntimeConfig
        +tools: ToolManager
        +conversation: ConversationManager
        +run()
        +run_stream()
        +open()
    }

    class ToolManager {
        Manages tool registration, enabling/disabling and access
        +
        +register_tool()
        +get_tools()
        +list_tools()
    }

    class ConversationManager {
        Manages conversation state and system prompts
        +
        +get_history()
        +clear()
        +add_context_from_path()
        +add_context_from_resource()
    }
```
### [Read the documentation for further info!](https://phil65.github.io/llmling-agent/)