# GF AgentKit
Lightweight agents on top of **Pydantic AI** with:
* **True *soft tools*** (reasoning-first; tools are optional, not mandatory).
* **A single “gateway” tool** that delegates to a separate executor (to reduce tool gravity).
* **Built-in conversation history** (persistent transcript managed by the agent).
* A few **ready-to-use tools** (file I/O, safe process execution, agent delegation).
> ⚠️ This is a work-in-progress. The API may evolve.
---
## Why?
Many agent frameworks push models to use tools aggressively (“hard tools”), which can be great for workflow graphs but not for **reasoning-first** tasks. GF AgentKit makes tool use **opt-in**:
* **SoftToolAgent**: the model decides when to ask for a tool using a **schema-guided envelope** (OPEN\_RESPONSE / TOOL\_CALL / TOOL\_RESULT). The host runs tools and feeds results back as context.
* **DelegatingToolsAgent**: the model sees **one** gateway tool (`delegate_ops`). That tool delegates to an internal executor which owns all the real tools. This reduces tool gravity and lets you keep a clean reasoning loop.
Both agents keep a **persistent transcript** so follow-ups naturally refer back to earlier turns.
---
## Installation
```bash
pip install agentkit-gf
```
Requires: Python 3.11+.
You’ll also need an OpenAI key to use `gpt-5-nano`:
```bash
# macOS/Linux
export OPENAI_API_KEY=sk-...
# Windows PowerShell
$env:OPENAI_API_KEY = "sk-..."
```
---
## At a Glance
```
agentkit_gf/
├─ _base_agent.py              # shared transcript + constructor glue
├─ soft_tool_agent.py          # true soft tools (Envelope schema)
├─ delegating_tools_agent.py   # single gateway tool; delegates to executor
└─ tools/
   ├─ fs.py                    # FileTools: read/write/stat/hash/list (sandboxable)
   ├─ os.py                    # ProcessTools: run_process/run_shell (policy controlled)
   ├─ agent.py                 # create_agent_delegation_tool(...) factory
   └─ builtin_tools_matrix.py  # BuiltinTool enums + provider validation
```
---
## Agents
### 1) SoftToolAgent (true soft tool)
* The model returns a single **Envelope** JSON object each hop:
  * `OPEN_RESPONSE { text, confidence? }`
  * `TOOL_CALL { tool, args_json, reason }`
  * `TOOL_RESULT { tool, args_json, result_json, success, note? }` (normally emitted by the host)
* You provide a **registry** of Python callables (`tool_name -> callable(**kwargs)`).
* The agent executes tools in host code and feeds a `TOOL_RESULT` back to the model.
* Maintains an internal transcript across turns.
**Minimal example (read a file):**
```python
from agentkit_gf.soft_tool_agent import SoftToolAgent
from agentkit_gf.tools.fs import FileTools

file_tools = FileTools(root_dir=".")  # restrict if you like
registry = {"read_text": file_tools.read_text}

agent = SoftToolAgent(model="openai:gpt-5-nano")

prompt = (
    "Read ./notes.txt and tell me the first line.\n"
    "If you need the file, return a TOOL_CALL Envelope for tool 'read_text' with args_json "
    '{"path": "./notes.txt", "max_bytes": 10000}. Then, after TOOL_RESULT, respond with OPEN_RESPONSE.'
)

result = agent.run_soft_sync(prompt, registry, max_steps=5)
print(result.final_text)
```
**Transcript helpers:**
```python
print(agent.export_history_text())
agent.reset_history()
```
### 2) DelegatingToolsAgent (single gateway tool)
* Presents **one** tool (`delegate_ops`) to the model.
* Internally spins up a private **executor agent** that owns all real tools (including optional provider built-ins like WebSearch).
* You pass in objects or callables; **public methods** are automatically exposed as tools (optionally prefixed).
**Example:**
```python
from agentkit_gf.delegating_tools_agent import DelegatingToolsAgent
from agentkit_gf.tools.fs import FileTools
from agentkit_gf.tools.os import ProcessTools
from agentkit_gf.tools.builtin_tools_matrix import BuiltinTool

agent = DelegatingToolsAgent(
    model="openai:gpt-5-nano",
    builtin_enums=[BuiltinTool.WEB_SEARCH],  # optional provider built-ins
    tool_sources=[
        FileTools(root_dir="."),
        ProcessTools(root_cwd=".", allowed_basenames=["python", "bash", "ls"]),
    ],
    class_prefix="fs",  # public tool names become "fs_read_text", etc.
    system_prompt=(
        "Answer-first. Use delegate_ops only if a specific missing fact requires it."
    ),
    ops_system_prompt="Execute exactly one tool and return only its result.",
)

reply = agent.run_sync(
    "Read ./notes.txt (use delegate_ops/tool 'fs_read_text' if needed) and summarize the first line."
).output

print(reply)
```
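Because the agent keeps a persistent transcript, a follow-up call can refer back to the previous turn without restating context. A minimal sketch, assuming it runs on the same `agent` instance right after the example above:

```python
# Follow-up turn on the same agent instance; the transcript from the
# previous run_sync call supplies the context for "that file".
followup = agent.run_sync("What was the first line of that file, verbatim?").output
print(followup)
```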
---
## Included Tools
### FileTools (`agentkit_gf.tools.fs`)
* `read_text(path, max_bytes=200_000, encoding="utf-8")`
* `read_bytes_base64(path, max_bytes=200_000)`
* `write_text(path, content, overwrite=False, encoding="utf-8")`
* `stat(path)` / `list_dir(path, include_hidden=False, max_entries=1000)`
* `hash_file(path, algorithm=HashAlgorithm.SHA256)`
All enforce **fail-fast** validation and can be **sandboxed** with `root_dir`.
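The same methods can be called directly from host code, which is also how they behave when invoked through a registry or the gateway. A minimal sketch (the `./notes.txt` path is illustrative, and the exact return shapes are not documented here):

```python
from agentkit_gf.tools.fs import FileTools

# Sandbox all paths under the current directory.
fs = FileTools(root_dir=".")

info = fs.stat("./notes.txt")                          # file metadata
text = fs.read_text("./notes.txt", max_bytes=10_000)   # bounded read
digest = fs.hash_file("./notes.txt")                   # SHA-256 by default

print(info, digest)
print(text)
```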
### ProcessTools (`agentkit_gf.tools.os`)
* `run_process(argv: Sequence[str], timeout_sec=10, cwd=None)` (no shell; recommended)
* `run_shell(command: str, timeout_sec=10, cwd=None)` (flexible; riskier)
Policy controls (a usage sketch follows the list):
* `root_cwd` (path sandbox)
* `allowed_basenames` (allowlist executables)
* `max_output_bytes` (clip stdout/stderr)
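Putting these together, a locked-down `ProcessTools` instance might look like the sketch below. The allowlist and limits are illustrative choices, and passing `max_output_bytes` as a constructor argument is an assumption based on the policy list above:

```python
from agentkit_gf.tools.os import ProcessTools

proc = ProcessTools(
    root_cwd=".",                        # sandbox working directories under "."
    allowed_basenames=["python", "ls"],  # only these executables may run
    max_output_bytes=64_000,             # clip stdout/stderr (assumed constructor kwarg)
)

# Preferred form: argv list, no shell involved.
result = proc.run_process(["python", "--version"], timeout_sec=5)
print(result)
```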
### Agent Delegation Tool (`agentkit_gf.tools.agent`)
* `create_agent_delegation_tool(agent_factory: Callable[[str], Agent]) -> Callable[..., dict]`
* Produces a registry callable: `delegate_agent(agent_name: str, prompt: str) -> {"output": ...}`
* Handy if your soft tool needs to **spin up or discover** another agent on demand (a minimal wiring sketch follows).
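A minimal sketch of wiring the factory into a `SoftToolAgent` registry. The factory body and the registry key `"delegate_agent"` are illustrative choices; the only contract stated above is `agent_name -> Agent`:

```python
from pydantic_ai import Agent

from agentkit_gf.soft_tool_agent import SoftToolAgent
from agentkit_gf.tools.agent import create_agent_delegation_tool


def make_agent(agent_name: str) -> Agent:
    # Illustrative factory: build (or look up) an agent by name.
    return Agent("openai:gpt-5-nano", system_prompt=f"You are the '{agent_name}' helper.")


delegate_agent = create_agent_delegation_tool(make_agent)

registry = {"delegate_agent": delegate_agent}
agent = SoftToolAgent(model="openai:gpt-5-nano")
# The model can now request TOOL_CALLs such as:
#   delegate_agent(agent_name="researcher", prompt="Find ...")
```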
---
## Soft vs. Hard Tools
* **Soft tools** (this library):
  * The model **asks** to call a tool via a schema; the host decides and executes.
  * Great for reasoning-first flows where the model should **prefer answering** from context.
* **Hard tools**:
  * Registered with the provider; models are often biased to call them.
  * Better for rigid flows or “do X with Y, then Z” pipelines.
You can mix: use `SoftToolAgent` for reasoning, and `DelegatingToolsAgent` when you need a single, auditable gateway to real tools (including provider built-ins).
---
## Extending with Your Own Tools
You can pass **objects** or **callables**:
```python
from agentkit_gf.delegating_tools_agent import DelegatingToolsAgent


class MyDataOps:
    def summarize_csv(self, path: str, top_n: int = 5) -> dict:
        # ... return JSON-serializable result ...
        return {"summary": "...", "top_n": top_n}


agent = DelegatingToolsAgent(
    model="openai:gpt-5-nano",
    builtin_enums=[],
    tool_sources=[MyDataOps()],
    class_prefix="data",  # exposes "data_summarize_csv"
)
```
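The gateway agent above is then driven the same way as before. A short usage sketch (`./data.csv` is a placeholder path):

```python
reply = agent.run_sync(
    "Summarize ./data.csv (use delegate_ops/tool 'data_summarize_csv' if needed)."
).output
print(reply)
```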
For **SoftToolAgent**, add to the registry:
```python
registry = {"summarize_csv": MyDataOps().summarize_csv}
```
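And the corresponding soft-tool run, mirroring the minimal example earlier (the prompt wording is illustrative):

```python
from agentkit_gf.soft_tool_agent import SoftToolAgent

agent = SoftToolAgent(model="openai:gpt-5-nano")

result = agent.run_soft_sync(
    "Summarize ./data.csv.\n"
    "If you need it, return a TOOL_CALL Envelope for tool 'summarize_csv' with args_json "
    '{"path": "./data.csv", "top_n": 5}. Then, after TOOL_RESULT, respond with OPEN_RESPONSE.',
    registry,
    max_steps=5,
)
print(result.final_text)
```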
---
## API Reference (quick)
### `SoftToolAgent`
```python
SoftToolAgent(
    model: str,
    system_prompt: str | None = None,
    allow_llm_tool_result: bool = False,
)

run_soft_sync(
    prompt: str,
    tool_registry: Mapping[str, Callable[..., Any]],
    max_steps: int = 4,
) -> SoftRunResult

# history helpers
export_history_text() -> str
export_history_blocks() -> Sequence[str]
reset_history() -> None
```
**Envelope schema:** the model returns exactly one JSON object with `message.kind` in:
* `OPEN_RESPONSE { text, confidence? }`
* `TOOL_CALL { tool, args_json, reason }` (args\_json must be an object string)
* `TOOL_RESULT { tool, args_json, result_json, success, note? }`
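For illustration, a `TOOL_CALL` envelope might look like the following sketch. The wrapper layout is inferred from `message.kind` and the field names above; the exact shape may differ:

```python
# Illustrative TOOL_CALL envelope (shape inferred from the schema above).
envelope = {
    "message": {
        "kind": "TOOL_CALL",
        "tool": "read_text",
        "args_json": '{"path": "./notes.txt", "max_bytes": 10000}',  # object string
        "reason": "Need the file contents to answer the question.",
    }
}
```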
### `DelegatingToolsAgent`
```python
DelegatingToolsAgent(
    model: str,
    builtin_enums: Sequence[BuiltinTool],
    tool_sources: Sequence[Callable | object | AbstractToolset] = (),
    class_prefix: str | None = None,
    system_prompt: str | None = None,
    ops_system_prompt: str | None = None,
)

# single-step run (history-aware)
run_sync(prompt: str) -> RunResult
```
This agent exposes only **one** public tool to the model: `delegate_ops(tool, args_json, why)` (the executor runs exactly one real tool; results are recorded to the transcript).
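For illustration, the arguments the model passes to the gateway amount to the following (the values are placeholders, matched to the `delegate_ops(tool, args_json, why)` signature above):

```python
# Conceptual shape of a delegate_ops invocation:
#   tool:      name of the real tool owned by the executor
#   args_json: JSON object string of that tool's arguments
#   why:       short justification for making the call
delegate_ops_args = {
    "tool": "fs_read_text",
    "args_json": '{"path": "./notes.txt", "max_bytes": 10000}',
    "why": "Need the first line of notes.txt to answer the question.",
}
```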
---
## License
MIT. See `LICENSE` for details.
---