| Field | Value |
| --- | --- |
| Name | llmling |
| Version | 1.7.1 |
| Summary | A backend for pydantic-AI agents and MCP servers. |
| Author | Philipp Temminghoff |
| Requires Python | >=3.12 |
| Upload time | 2025-10-07 00:02:05 |
| License | MIT (full text below) |
license | MIT License
Copyright (c) 2024, Philipp Temminghoff
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
# LLMling
A framework for declarative LLM application development focused on resource management, prompt templates, and tool execution.
This package provides the backend for two consumers: [an MCP server](https://github.com/phil65/mcp-server-llmling) and [a pydantic-AI based agent](https://github.com/phil65/llmling-agent).
## Core Concepts
LLMLing provides a YAML-based configuration system for LLM applications.
It lets you set up custom MCP servers that serve content defined in YAML files.
- **Static Declaration**: Define your LLM's environment in YAML - no code required
- **MCP Protocol**: Built on the Model Context Protocol (MCP) for standardized LLM interaction
- **Component Types**:
- **Resources**: Content providers (files, text, CLI output, etc.)
- **Prompts**: Message templates with arguments
- **Tools**: Python functions callable by the LLM
The YAML configuration creates a complete environment that provides the LLM with:
- Access to content via resources
- Structured prompts for consistent interaction
- Tools for extending capabilities
Key characteristics:

- Written from the ground up in modern Python (3.12+ required)
- 100% typed
- pydantic(-ai) based

An overview of the whole system:
```mermaid
graph TB
    subgraph LLMling[LLMling Core Package]
        RT[RuntimeConfig]

        subgraph Core_Components[Core Components]
            Resources[Resource Management<br/>- Load files/URLs<br/>- Process content<br/>- Watch changes]
            Tools[Tool System<br/>- Execute functions<br/>- Register new tools<br/>- OpenAPI integration]
            Prompts[Prompt System<br/>- Static/Dynamic prompts<br/>- Template rendering<br/>- Completion support]
        end

        CLI[Core CLI<br/>- config add/remove/list<br/>- resource list/load<br/>- tool list/execute<br/>- prompt list/render]

        Core_Components -->|YAML configuration| RT
        RT -->|All components| CLI
        CLI -->|modify| Core_Components
    end

    subgraph Direct_Access[mcp-server-llmling<br/>Direct Component Access]
        MCP[HTTP/SSE Server<br/>- Start/Stop server]
        MCP_CLI[Server CLI<br/>- Start/Stop server]
        Injection[Injection Server<br/>- Inject components<br/>during runtime]
    end

    subgraph Function_Access[llmling-agent<br/>Access via Function Calling]
        LLM[LLM Integration<br/>- Function calling<br/>- Resource access<br/>- Tool execution<br/>- Structured output]
        Agent_CLI[Agent CLI<br/>- One-shot execution<br/>- Batch processing]
        Agent_Web[Agent Web UI<br/>- Interactive chat]
    end

    RT -->|All components| MCP
    RT -->|Resources & Tools<br/>via function calling| LLM
    MCP_CLI --> CLI
    Agent_CLI --> CLI

    classDef core fill:#e1f5fe,stroke:#01579b
    classDef comp fill:#e3f2fd,stroke:#1565c0
    classDef cli fill:#fff3e0,stroke:#e65100
    classDef mcp fill:#f3e5f5,stroke:#4a148c
    classDef agent fill:#e8f5e9,stroke:#1b5e20
    classDef access fill:#e8eaf6,stroke:#666
    classDef serverBox fill:#7986cb,stroke:#3949ab
    classDef agentBox fill:#81c784,stroke:#2e7d32

    class RT core
    class Resources,Tools,Prompts comp
    class CLI,MCP_CLI,Agent_CLI cli
    class MCP,Injection mcp
    class LLM,Agent_Web agent
    class Direct_Access serverBox
    class Function_Access agentBox
```
## Usage
### 1. CLI Usage
Create a basic configuration file:
```bash
# Create a new config file with basic settings
llmling config init my_config.yml

# Add it to your stored configs
llmling config add myconfig my_config.yml
llmling config set myconfig  # Make it active
```
Basic CLI commands:
```bash
# List available resources
llmling resource list

# Load a resource
llmling resource load python_files

# Execute a tool
llmling tool call open_url url=https://github.com

# Show a prompt
llmling prompt show greet

# Many more commands are available. The CLI is extended when installing
# llmling-agent and mcp-server-llmling
```
### 2. Agent Usage (powered by pydantic-AI)
Create a configuration file (`config.yml`):
```yaml
tools:
  open_url:
    import_path: "webbrowser.open"

resources:
  bookmarks:
    type: text
    description: "Common Python URLs"
    content: |
      Python Website: https://python.org
```
Use the agent with this configuration:
```python
import asyncio

from pydantic import BaseModel

from llmling import RuntimeConfig
from llmling_agent import LLMlingAgent


class WebResult(BaseModel):
    opened_url: str
    success: bool


async def main() -> None:
    async with RuntimeConfig.open("config.yml") as runtime:
        agent = LLMlingAgent[WebResult](runtime)
        result = await agent.run(
            "Load the bookmarks resource and open the Python website URL"
        )
        print(f"Opened: {result.data.opened_url}")


asyncio.run(main())
```
The agent will:
1. Load the bookmarks resource
2. Extract the Python website URL
3. Use the `open_url` tool to open it
4. Return the structured result
### 3. Server Usage
#### With Zed Editor
Add LLMLing as a context server in your `settings.json`:
```json
{
  "context_servers": {
    "llmling": {
      "command": {
        "env": {},
        "label": "llmling",
        "path": "uvx",
        "args": [
          "mcp-server-llmling@latest",
          "start",
          "path/to/your/config.yml",
          "--zed-mode"
        ]
      },
      "settings": {}
    }
  }
}
```
#### With Claude Desktop
Configure LLMLing in your `claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "llmling": {
      "command": "uvx",
      "args": [
        "mcp-server-llmling@latest",
        "start",
        "path/to/your/config.yml"
      ],
      "env": {}
    }
  }
}
```
#### Manual Server Start
Start the server directly from the command line:
```bash
# Latest version
uvx mcp-server-llmling@latest start path/to/your/config.yml
```
## Resources
Resources are content providers that load and pre-process data from various sources.
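For orientation, resources declared in YAML can also be loaded programmatically. A minimal sketch, assuming `RuntimeConfig` exposes a `load_resource` coroutine mirroring the CLI's `resource load` command (the method name is an assumption, not a documented API):

```python
# Hedged sketch: `load_resource` mirrors `llmling resource load`;
# the exact method name and signature are assumptions.
import asyncio

from llmling import RuntimeConfig


async def main() -> None:
    async with RuntimeConfig.open("config.yml") as runtime:
        # Load a resource declared under `resources:` in the YAML
        loaded = await runtime.load_resource("bookmarks")
        print(loaded)


asyncio.run(main())
```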
### Basic Resource Types
```yaml
global_config:  # declare dependencies if used for tools or function prompts
  requirements: ["myapp"]
  scripts:
    - "https://gist.githubusercontent.com/.../get_readme.py"

resources:
  # Load and watch a file or directory
  python_files:
    type: path
    path: "./src/**/*.py"  # Glob patterns supported
    watch:  # Optional file watching
      enabled: true
      patterns:
        - "*.py"
        - "!**/__pycache__/**"  # Exclude patterns with !
    processors:  # Optional processing steps
      - name: format_python
      - name: add_header
        required: false  # Optional step

  # Static text content
  system_prompt:
    type: text
    content: |
      You are a code reviewer specialized in Python.
      Focus on these aspects:
      - Code style (PEP8)
      - Best practices
      - Performance
      - Security

  # Execute CLI commands
  git_changes:
    type: cli
    command: "git diff HEAD~1"  # String or list of args
    shell: true  # Use shell for command
    cwd: "./src"  # Optional working directory
    timeout: 5.0  # Optional timeout in seconds

  # Load Python source code
  utils_module:
    type: source
    import_path: myapp.utils
    recursive: true  # Include submodules
    include_tests: false  # Exclude test files

  # Execute Python callables
  system_info:
    type: callable
    import_path: platform.uname
    keyword_args:  # Optional arguments
      aliased: true
```
### Resource Groups
Group related resources for easier access:
```yaml
resource_groups:
  code_review:
    - python_files
    - git_changes
    - system_prompt

  documentation:
    - architecture
    - utils_module
```
### File Watching
Resources supporting file watching (`path`, `image`) can be configured to detect changes:
```yaml
resources:
  config_files:
    type: path
    path: "./config"
    watch:
      enabled: true
      patterns:  # .gitignore style patterns
        - "*.yml"
        - "*.yaml"
        - "!.private/**"  # Exclude private files
      ignore_file: ".gitignore"  # Use existing ignore file
```
### Resource Processing
Resources can be processed through a pipeline of processors:
```yaml
# First define processors
context_processors:
  uppercase:
    type: function
    import_path: myapp.processors.to_upper
    async_execution: false  # Sync function

# Then use them in resources
resources:
  processed_file:
    type: path
    path: "./input.txt"
    processors:
      - name: uppercase
```
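The processor referenced above is a plain function; a minimal sketch of what `myapp.processors.to_upper` might look like (the exact signature LLMling expects for function processors is an assumption):

```python
# myapp/processors.py
# Illustrative only: the signature LLMling passes to function
# processors is an assumption here.
def to_upper(content: str) -> str:
    """Uppercase resource content before it is served."""
    return content.upper()
```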
## Prompts
Prompts are message templates that can be formatted with arguments. LLMLing supports both declarative YAML prompts and function-based prompts.
### YAML-Based Prompts
```yaml
prompts:
  code_review:
    description: "Review Python code changes"
    messages:
      - role: system
        content: |
          You are a Python code reviewer. Focus on:
          - Code style (PEP8)
          - Best practices
          - Performance
          - Security

          Always structure your review as:
          1. Summary
          2. Issues Found
          3. Suggestions

      - role: user
        content: |
          Review the following code changes:

          {code}

          Focus areas: {focus_areas}

    arguments:
      - name: code
        description: "Code to review"
        required: true
      - name: focus_areas
        description: "Specific areas to focus on (one of: style, security, performance)"
        required: false
        default: "style"
```
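Conceptually, rendering substitutes the declared arguments into the `{code}` and `{focus_areas}` placeholders, falling back to defaults for omitted optional arguments. A rough Python illustration of that behavior (not LLMling's actual renderer):

```python
# Rough illustration of prompt rendering -- not LLMling's actual code.
template = "Review the following code changes:\n\n{code}\n\nFocus areas: {focus_areas}"

defaults = {"focus_areas": "style"}            # from the argument declaration
provided = {"code": "def f():\n    return 1"}  # caller omitted focus_areas

print(template.format(**{**defaults, **provided}))
```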
### Function-Based Prompts
Function-based prompts provide more control and enable auto-completion:
```yaml
prompts:
  analyze_code:
    # Import path to the prompt function
    import_path: myapp.prompts.code_analysis
    # Optional overrides
    name: "Code Analysis"
    description: "Analyze Python code structure and complexity"
    # Optional message template override
    template: |
      Analyze this code: {code}
      Focus on: {focus}
    # Auto-completion functions for arguments
    completions:
      focus: myapp.prompts.get_analysis_focus_options
```
```python
# myapp/prompts/code_analysis.py
from typing import Literal

FocusArea = Literal["complexity", "dependencies", "typing"]


def code_analysis(
    code: str,
    focus: FocusArea = "complexity",
    include_metrics: bool = True,
) -> list[dict[str, str]]:
    """Analyze Python code structure and complexity.

    Args:
        code: Python source code to analyze
        focus: Analysis focus area (one of: complexity, dependencies, typing)
        include_metrics: Whether to include numeric metrics
    """
    # Function will be converted to a prompt automatically
    ...


def get_analysis_focus_options(current: str) -> list[str]:
    """Provide auto-completion for the focus argument."""
    options = ["complexity", "dependencies", "typing"]
    return [opt for opt in options if opt.startswith(current)]
```
### Message Content Types
Prompts support different content types:
```yaml
prompts:
  document_review:
    messages:
      # Text content
      - role: system
        content: "You are a document reviewer..."

      # Resource reference
      - role: user
        content:
          type: resource
          content: "document://main.pdf"
          alt_text: "Main document content"

      # Image content
      - role: user
        content:
          type: image_url
          content: "https://example.com/diagram.png"
          alt_text: "System architecture diagram"
```
### Argument Validation
Prompts validate arguments before formatting:
```yaml
prompts:
  analyze:
    messages:
      - role: user
        content: "Analyze with level {level}"

    arguments:
      - name: level
        description: "Analysis depth (one of: basic, detailed, full)"
        required: true
        # Will be used for validation and auto-completion
        type_hint: Literal["basic", "detailed", "full"]
```
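One way a `type_hint` like this can drive validation is by checking the supplied value against the Literal's options. A standalone sketch of the idea (not LLMling's implementation):

```python
# Standalone sketch of Literal-based validation -- illustrates the
# idea behind `type_hint`, not LLMling's implementation.
from typing import Literal, get_args

Level = Literal["basic", "detailed", "full"]


def validate_level(value: str) -> str:
    allowed = get_args(Level)
    if value not in allowed:
        raise ValueError(f"level must be one of {allowed}, got {value!r}")
    return value


validate_level("detailed")  # passes; "everything" would raise ValueError
```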
## Tools
Tools are Python functions or classes that can be called by the LLM. They provide a safe way to extend the LLM's capabilities with custom functionality.
### Basic Tool Configuration
```yaml
tools:
  # Function-based tool
  analyze_code:
    import_path: myapp.tools.code.analyze
    description: "Analyze Python code structure and metrics"

  # Class-based tool
  browser:
    import_path: llmling.tools.browser.BrowserTool
    description: "Control web browser for research"

  # Override tool name
  code_metrics:
    import_path: myapp.tools.analyze_complexity
    name: "Analyze Code Complexity"
    description: "Calculate code complexity metrics"

# Include pre-built tool collections
toolsets:
  - llmling.code  # Code analysis tools
  - llmling.web   # Web/browser tools
```
## Toolsets
Toolsets are, as the name suggests, collections of tools. LLMling currently supports:

- Extension point system
- OpenAPI endpoints
- Class-based toolsets
### Function-Based Tools
Tools can be created from any Python function:
```python
# myapp/tools/code.py
import ast
from typing import Any


async def analyze(
    code: str,
    include_metrics: bool = True,
) -> dict[str, Any]:
    """Analyze Python code structure and complexity.

    Args:
        code: Python source code to analyze
        include_metrics: Whether to include numeric metrics

    Returns:
        Dictionary with analysis results
    """
    tree = ast.parse(code)
    return {
        "classes": len([n for n in ast.walk(tree) if isinstance(n, ast.ClassDef)]),
        "functions": len([n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]),
        "complexity": _calculate_complexity(tree) if include_metrics else None,
    }
```
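The `_calculate_complexity` helper is left out above; a hypothetical implementation that counts branch points as a rough cyclomatic-complexity proxy could look like this:

```python
# Hypothetical helper -- one plausible way to compute the metric.
import ast


def _calculate_complexity(tree: ast.AST) -> int:
    """Count branch points as a rough cyclomatic-complexity proxy."""
    branches = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)
    return 1 + sum(isinstance(node, branches) for node in ast.walk(tree))
```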
### Class-Based Tools
Complex tools can be implemented as classes:
```python
# myapp/tools/browser.py
from typing import Literal

from playwright.async_api import Page

from llmling.tools.base import BaseTool


class BrowserTool(BaseTool):
    """Tool for web browser automation."""

    name = "browser"
    description = "Control web browser to navigate and interact with web pages"

    ...

    def get_tools(self):
        return [self.open_url, self.click_button]
```
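The elided methods are ordinary async methods that `get_tools()` exposes to the LLM. A hedged sketch of how `open_url` and `click_button` might be implemented with Playwright (the browser lifecycle wiring via `startup` is an assumption, not the shipped implementation):

```python
# Hypothetical fleshed-out BrowserTool -- the startup/lifecycle
# wiring is an assumption, not llmling's shipped implementation.
from playwright.async_api import async_playwright

from llmling.tools.base import BaseTool


class BrowserTool(BaseTool):
    """Tool for web browser automation."""

    name = "browser"
    description = "Control web browser to navigate and interact with web pages"

    async def startup(self) -> None:
        # Launch one headless browser per tool lifecycle
        self._playwright = await async_playwright().start()
        browser = await self._playwright.chromium.launch()
        self.page = await browser.new_page()

    async def open_url(self, url: str) -> str:
        """Navigate the browser to the given URL."""
        await self.page.goto(url)
        return f"Opened {url}"

    async def click_button(self, selector: str) -> str:
        """Click the element matching the given CSS selector."""
        await self.page.click(selector)
        return f"Clicked {selector}"

    def get_tools(self):
        return [self.open_url, self.click_button]
```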
### Tool Collections (Toolsets)
Group related tools into reusable collections:
```python
# myapp/toolsets.py
from collections.abc import Callable
from typing import Any


def get_mcp_tools() -> list[Callable[..., Any]]:
    """Entry point exposing tools to LLMling."""
    from myapp.tools import analyze_code, check_style, count_tokens

    return [analyze_code, check_style, count_tokens]
```
In `pyproject.toml`:

```toml
[project.entry-points.llmling]
tools = "myapp.toolsets:get_mcp_tools"
```
### Tool Progress Reporting
Tools can report progress to the client:
```python
from pathlib import Path
from typing import Any

from llmling.tools.base import BaseTool


class AnalysisTool(BaseTool):
    name = "analyze"
    description = "Analyze large codebase"

    async def execute(
        self,
        path: str,
        _meta: dict[str, Any] | None = None,  # Progress tracking
    ) -> dict[str, Any]:
        files = list(Path(path).glob("**/*.py"))
        results = []

        for i, file in enumerate(files):
            # Report progress if meta information is provided
            if _meta and "progressToken" in _meta:
                self.notify_progress(
                    token=_meta["progressToken"],
                    progress=i,
                    total=len(files),
                    description=f"Analyzing {file.name}",
                )

            results.append(await self._analyze_file(file))

        return {"results": results}
```
### Complete Tool Example
Here's a complete example combining multiple tool features:
```yaml
# Configuration
tools:
  # Basic function tool
  analyze:
    import_path: myapp.tools.code.analyze

  # Class-based tool with lifecycle
  browser:
    import_path: myapp.tools.browser.BrowserTool

  # Tool with progress reporting
  batch_analysis:
    import_path: myapp.tools.AnalysisTool

toolsets:
  - llmling.code
  - myapp.tools
```
```python
# Tool implementation
from pathlib import Path
from typing import Any

from llmling.tools.base import BaseTool


class AnalysisTool(BaseTool):
    """Tool for batch code analysis with progress reporting."""

    name = "batch_analysis"
    description = "Analyze multiple Python files"

    async def startup(self) -> None:
        """Initialize analysis engine."""
        self.analyzer = await self._create_analyzer()

    async def execute(
        self,
        path: str,
        recursive: bool = True,
        _meta: dict[str, Any] | None = None,
    ) -> dict[str, Any]:
        """Execute batch analysis.

        Args:
            path: Directory to analyze
            recursive: Whether to analyze subdirectories
            _meta: Optional progress tracking metadata
        """
        files = list(Path(path).glob("**/*.py" if recursive else "*.py"))
        results = []

        for i, file in enumerate(files, 1):
            # Report progress
            if _meta and "progressToken" in _meta:
                self.notify_progress(
                    token=_meta["progressToken"],
                    progress=i,
                    total=len(files),
                    description=f"Analyzing {file.name}",
                )

            # Analyze file
            try:
                result = await self.analyzer.analyze_file(file)
                results.append({"file": str(file), "metrics": result})
            except Exception as e:
                results.append({"file": str(file), "error": str(e)})

        return {
            "total_files": len(files),
            "successful": len([r for r in results if "metrics" in r]),
            "failed": len([r for r in results if "error" in r]),
            "results": results,
        }

    async def shutdown(self) -> None:
        """Clean up analysis engine."""
        await self.analyzer.close()
```
More: (ATTENTION: THESE ARE MOSTLY AI GENERATED AND OUTDATED, NO GUARANTEE FOR CORRECTNESS)

- [introduction](https://github.com/phil65/LLMling/blob/main/docs/introduction.md)
- [quick_example](https://github.com/phil65/LLMling/blob/main/docs/quick_example.md)
- [usage](https://github.com/phil65/LLMling/blob/main/docs/usage.md)
- [yaml_config](https://github.com/phil65/LLMling/blob/main/docs/yaml_config.md)
- [CLI](https://github.com/phil65/LLMling/blob/main/docs/cli.md)
- [resources](https://github.com/phil65/LLMling/blob/main/docs/resources.md)
- [prompts](https://github.com/phil65/LLMling/blob/main/docs/prompts.md)
- [tools](https://github.com/phil65/LLMling/blob/main/docs/tools.md)
- [server](https://github.com/phil65/LLMling/blob/main/docs/server.md)
- [extending](https://github.com/phil65/LLMling/blob/main/docs/extending.md)