# CoreFoundry
A lightweight, LLM-agnostic micro-framework for AI agent tool management.
CoreFoundry eliminates the boilerplate from building AI agents with tools. Define tools with a simple decorator, auto-discover them from packages, and let CoreFoundry handle all the schema management and serialization. As a micro-framework, CoreFoundry focuses solely on tool definition and management - not agent orchestration.
## Features
- **LLM-Agnostic**: Works with any LLM provider (OpenAI, Anthropic, local models, etc.)
- **Decorator-Based Registration**: Simple `@registry.register` decorator for tool definitions
- **Auto-Discovery**: Automatically discover and register tools from Python packages
- **Type-Safe**: Built on Pydantic for schema validation
- **Extensible**: Easy-to-implement adapter pattern for any LLM provider
## Important: Global Registry
⚠️ **CoreFoundry uses a global tool registry.** All Agent instances share the same registered tools. This is fine for single-user applications, but requires consideration in multi-tenant environments. [Read more in Security Considerations →](#security-considerations)
## Why CoreFoundry?
**Problem**: Building AI agents with tools requires tons of repetitive boilerplate code.
Every tool needs:
- A function implementation
- A separate JSON schema definition
- Manual wiring between tool calls and functions
- Serialization logic for LLM providers
- Discovery and registration code
**Without CoreFoundry:**
```python
# Define the schema separately
tool_schema = {
    "type": "function",
    "function": {
        "name": "to_uppercase",
        "description": "Convert text to uppercase",
        "parameters": {
            "type": "object",
            "properties": {
                "text": {"type": "string", "description": "input text"}
            },
            "required": ["text"]
        }
    }
}

# Implement the function
def to_uppercase(text: str) -> str:
    return text.upper()

# Manually map tool names to functions
tool_map = {"to_uppercase": to_uppercase}

# Repeat for every tool...
```
**With CoreFoundry:**
```python
@registry.register(
    description="Convert text to uppercase",
    input_schema={
        "properties": {"text": {"type": "string", "description": "input text"}},
        "required": ["text"],
    }
)
def to_uppercase(text: str) -> str:
    return text.upper()
```
**That's it.** CoreFoundry handles:
- ✅ Schema validation with Pydantic
- ✅ Automatic tool discovery
- ✅ JSON serialization for any LLM
- ✅ Runtime tool execution
- ✅ Clean separation of concerns
**CoreFoundry is a micro-framework** - it handles tool management, not agent orchestration. You stay in control of your application architecture while CoreFoundry eliminates the tool definition boilerplate.
And it works with **any LLM provider** - OpenAI, Anthropic, local models, or your own custom integration.
## Design Philosophy
CoreFoundry is intentionally minimal:
- **Focused scope**: Tool definition and management only - no orchestration, no conversation handling, no agent runtime
- **You stay in control**: No hidden magic, no architectural constraints, no opinionated workflows
- **Composable**: Works alongside any agent framework (LangChain, CrewAI, custom solutions) or standalone
- **Reduces friction, not flexibility**: Eliminates boilerplate while letting you build agents your way
If you need full agent orchestration, consider frameworks such as LangChain, or the Model Context Protocol (MCP) ecosystem. If you just want to define tools without the ceremony, CoreFoundry is for you.
## Installation
### Basic Installation
```bash
uv pip install corefoundry
```
### With OpenAI Adapter
```bash
uv pip install "corefoundry[adapters]"
```
> **Note**: CoreFoundry works with any package manager. If you prefer pip: `pip install corefoundry`
## Quick Start
### 1. Define Your Tools
Create a Python module with your tools:
```python
# my_tools/text_tools.py
from corefoundry import registry

@registry.register(
    description="Convert text to uppercase",
    input_schema={
        "properties": {"text": {"type": "string", "description": "input text"}},
        "required": ["text"],
    },
)
def to_uppercase(text: str) -> str:
    return text.upper()

@registry.register(
    description="Count words in text",
    input_schema={
        "properties": {"text": {"type": "string"}},
        "required": ["text"],
    },
)
def count_words(text: str) -> int:
    return len(text.split())
```
### 2. Create an Agent
```python
from corefoundry import Agent

# Create agent and auto-discover tools
agent = Agent(
    name="MyAgent",
    description="A helpful text processing agent",
    auto_tools_pkg="my_tools"
)

# View available tools
print(agent.tool_names())
# ['to_uppercase', 'count_words']

# Get tool definitions as JSON (for LLM consumption)
print(agent.available_tools_json())

# Call a tool directly
result = agent.call_tool("to_uppercase", text="hello world")
print(result)  # "HELLO WORLD"
```
### 3. Use with an LLM (Optional)
```python
from agent_adapters.openai_adapter import OpenAIAdapter
from openai import OpenAI
client = OpenAI(api_key="your-api-key")
adapter = OpenAIAdapter(client=client, model="gpt-4o-mini")
# The adapter can now use your registered tools
response = adapter.call_with_tools("Convert 'hello world' to uppercase")
```
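Whatever provider you use, the loop eventually has to map the LLM's tool call back onto `agent.call_tool`. A minimal sketch of that dispatch step, with stand-ins in place of a real agent and provider response (the `tool_call` shape is an assumption, not CoreFoundry API):

```python
import json

def dispatch(call_tool, tool_call: dict):
    """Decode the LLM's JSON argument payload and invoke the named tool."""
    args = json.loads(tool_call["arguments"])
    return call_tool(tool_call["name"], **args)

# Stand-ins for a real agent and a provider's tool-call structure:
fake_tools = {"to_uppercase": lambda text: text.upper()}
fake_call_tool = lambda name, **kwargs: fake_tools[name](**kwargs)
tool_call = {"name": "to_uppercase", "arguments": '{"text": "hello world"}'}

print(dispatch(fake_call_tool, tool_call))  # HELLO WORLD
```

With a real agent you would pass `agent.call_tool` in place of `fake_call_tool`.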
## Core Concepts
### Tool Registry
The registry is a global singleton that manages tool definitions:
```python
from corefoundry import registry
@registry.register(
name="custom_name", # Optional: defaults to function name
description="What this tool does",
input_schema={
"properties": {
"param1": {"type": "string", "description": "First parameter"},
"param2": {"type": "integer"}
},
"required": ["param1"]
}
)
def my_tool(param1: str, param2: int = 0):
return f"{param1}: {param2}"
```
### Agent
The `Agent` class provides a convenient wrapper around the registry:
- **Auto-discovery**: Automatically imports and registers tools from a package
- **JSON Export**: Exports tool definitions in LLM-compatible format
- **Tool Execution**: Call tools by name at runtime
```python
agent = Agent(
    name="MyAgent",
    description="Agent description",
    auto_tools_pkg="my_tools"  # Optional: auto-discover tools
)
```
### Async Tools
CoreFoundry supports both synchronous and asynchronous tools. You're responsible for handling async execution in your application.
**Registering async tools:**
```python
import httpx
from corefoundry import registry

@registry.register(
    description="Fetch content from a URL",
    input_schema={
        "properties": {"url": {"type": "string"}},
        "required": ["url"]
    }
)
async def fetch_url(url: str) -> str:
    async with httpx.AsyncClient() as client:
        response = await client.get(url)
        return response.text
```
**Using async tools:**
```python
import asyncio
from corefoundry import Agent

agent = Agent("MyAgent", auto_tools_pkg="my_tools")

# Option 1: await the coroutine inside an async context
async def main():
    result = await agent.call_tool("fetch_url", url="https://example.com")
    print(result)

asyncio.run(main())

# Option 2: hand the coroutine straight to asyncio.run
result = asyncio.run(agent.call_tool("fetch_url", url="https://example.com"))
```
**Note:** `call_tool()` returns a coroutine when calling async tools. You must `await` it or run it in an event loop. CoreFoundry doesn't automatically handle async execution - that's your application's responsibility.
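If a registry mixes sync and async tools, the caller has to handle both return shapes. One way is a small helper that awaits only when needed; a minimal sketch using only the standard library (the `resolve` helper is an assumption, not part of CoreFoundry):

```python
import asyncio
import inspect

async def resolve(value):
    """Await the value if it is awaitable; otherwise pass it through."""
    if inspect.isawaitable(value):
        return await value
    return value

async def demo():
    async def fake_async_tool():
        return "async result"
    # A sync tool returns a plain value; an async tool returns a coroutine.
    return await resolve("sync result"), await resolve(fake_async_tool())

print(asyncio.run(demo()))  # ('sync result', 'async result')
```

In practice you would wrap `agent.call_tool(...)` results with `resolve` so the calling code never has to know which kind of tool it hit.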
### Adapters
Adapters integrate the registry with specific LLM providers. CoreFoundry includes an OpenAI adapter, and you can create your own:
```python
from agent_adapters.base import BaseAdapter
from corefoundry import registry

class MyLLMAdapter(BaseAdapter):
    def __init__(self, client, registry=registry):
        super().__init__(registry)
        self.client = client

    def generate(self, prompt: str):
        # Implement LLM call
        pass

    def call_with_tools(self, prompt: str):
        # Implement LLM call with tools
        tools = self.registry.get_json()
        # Pass tools to your LLM provider
        pass
```
## Input Schema Format
CoreFoundry uses JSON Schema for tool input validation:
```python
input_schema = {
    "properties": {
        "file_path": {
            "type": "string",
            "description": "Path to the file"
        },
        "mode": {
            "type": "string",
            "description": "Read mode",
            "enum": ["text", "binary"]
        },
        "max_lines": {
            "type": "integer",
            "description": "Maximum lines to read"
        }
    },
    "required": ["file_path"]
}
```
Supported types: `string`, `integer`, `number`, `boolean`, `array`, `object`
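`array` and `object` parameters take the standard JSON Schema keywords (`items`, nested `properties`). A hypothetical schema using both (the field names are illustrative, not from CoreFoundry):

```python
input_schema = {
    "properties": {
        "tags": {
            "type": "array",
            "items": {"type": "string"},
            "description": "Labels to apply",
        },
        "options": {
            "type": "object",
            "properties": {"verbose": {"type": "boolean"}},
            "description": "Optional flags",
        },
    },
    "required": ["tags"],
}
```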
## API Reference
### `registry.register(name=None, description=None, input_schema=None)`
Decorator to register a function as a tool.
**Parameters:**
- `name` (str, optional): Tool name (defaults to function name)
- `description` (str, optional): Tool description (defaults to docstring)
- `input_schema` (dict, optional): JSON Schema for tool inputs
### `Agent(name, description="", auto_tools_pkg=None)`
Create a new agent.
**Parameters:**
- `name` (str): Agent name
- `description` (str): Agent description
- `auto_tools_pkg` (str, optional): Package to auto-discover tools from
**Methods:**
- `tool_names()`: List all registered tool names
- `available_tools_json()`: Get tool definitions as JSON string
- `call_tool(name, **kwargs)`: Execute a tool by name
### `registry.autodiscover(package_name)`
Discover and register tools from a package.
**Parameters:**
- `package_name` (str): Fully qualified package name (e.g., "my_app.tools")
**Security Note:** Only use with trusted packages. This imports and executes code.
## Security Considerations
### Important Security Notes:
1. **Trusted Tools Only**: Only register tools from trusted sources. Registered tools have full Python execution privileges.
2. **Auto-Discovery Safety**: The `autodiscover()` method imports Python modules. Only use with trusted package names:
```python
# Safe: your own package
agent = Agent(auto_tools_pkg="my_app.tools")

# Unsafe: user-controlled input
pkg = input("Enter package: ")  # DON'T DO THIS
agent = Agent(auto_tools_pkg=pkg)
```
3. **Global Registry**: The registry is a global singleton. In multi-tenant applications, tools are shared across all Agent instances.
**What this means:**
```python
# Agent A registers admin tools
agent_a = Agent("Admin", auto_tools_pkg="admin_tools")
# Tools: delete_file, restart_server, etc.

# Agent B in the same process can also access admin tools
agent_b = Agent("Guest", auto_tools_pkg="guest_tools")
# agent_b.call_tool("delete_file", ...) will work!
```
**If you're building multi-tenant systems, consider:**
- Using separate processes for different tenants/users
- Deploying isolated containers per tenant
- Implementing authorization checks within tool functions
- Being very careful about what gets registered globally
**This is by design** - CoreFoundry reduces boilerplate, not orchestration. Multi-tenant isolation is the application's responsibility.
4. **Input Validation**: While CoreFoundry validates schema structure, it does NOT automatically validate tool inputs at runtime. Implement input validation in your tool functions:
```python
import os

@registry.register(...)
def read_file(file_path: str):
    # Validate inputs in your tool
    if not os.path.exists(file_path):
        raise ValueError("File not found")
    if not file_path.startswith("/allowed/path/"):
        raise ValueError("Access denied")
    # ... safe file reading
```
5. **LLM-Generated Tool Calls**: When using with LLMs, remember that LLMs can be prompted to call tools in unexpected ways. Implement appropriate safeguards in tool implementations.
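One simple safeguard is an allowlist wrapper between the LLM loop and the registry, so a prompted model can only reach tools you explicitly expose. A minimal sketch (the `SAFE_TOOLS` set and `guarded_call` wrapper are assumptions, not CoreFoundry API):

```python
SAFE_TOOLS = {"to_uppercase", "count_words"}  # tools the LLM may invoke

def guarded_call(call_tool, name: str, **kwargs):
    """Refuse any tool call outside the allowlist before touching the registry."""
    if name not in SAFE_TOOLS:
        raise PermissionError(f"tool {name!r} is not allowlisted for LLM use")
    return call_tool(name, **kwargs)

# Stand-in for agent.call_tool:
fake_call_tool = lambda name, **kwargs: kwargs["text"].upper()
print(guarded_call(fake_call_tool, "to_uppercase", text="hi"))  # HI
```

With a real agent you would route every model-initiated call through `guarded_call(agent.call_tool, ...)` instead of calling the registry directly.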
## Project Structure
```
├── corefoundry/            # core package (LLM-agnostic)
│   ├── __init__.py
│   ├── core.py             # registry, models, autodiscover
│   └── agent.py            # agent wrapper / executor
│
├── agent_adapters/         # optional adapters (separate package)
│   ├── __init__.py
│   ├── base.py
│   └── openai_adapter.py
│
├── examples/
│   ├── my_tools/
│   │   ├── __init__.py
│   │   └── text_tools.py
│   └── demo.py
│
├── tests/
│   ├── test_registry.py
│   └── test_agent.py
│
├── pyproject.toml
├── README.md
└── LICENSE
```
## Development
### Setup
```bash
# Clone the repository
git clone https://github.com/jjhiza/core-foundry.git
cd core-foundry
# Install in editable mode (uv handles virtual environment automatically)
uv pip install -e .
# Install with optional dependencies
uv pip install -e ".[adapters]"
```
### Running Tests
```bash
pytest tests/
```
### Running Examples
```bash
python examples/demo.py
```
## License
MIT License - see LICENSE file for details
## Contributing
Contributions welcome! Please:
1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Submit a pull request
## Roadmap
- [ ] Additional LLM adapters (local models, etc.)
  - [x] OpenAI adapter
  - [x] Anthropic adapter
- [ ] Runtime input validation against schemas
## Support
- **Issues**: https://github.com/jjhiza/corefoundry/issues
- **Discussions**: https://github.com/jjhiza/corefoundry/discussions