# Agentic Blocks
Building blocks for agentic systems with a focus on simplicity and ease of use.
## Overview
Agentic Blocks provides clean, simple components for building AI agent systems, specifically focused on:
- **MCP Client**: Connect to Model Context Protocol (MCP) endpoints with a sync-by-default API
- **Messages**: Manage LLM conversation history with OpenAI-compatible format
- **LLM**: Simple function for calling OpenAI-compatible completion APIs
All components follow principles of simplicity, maintainability, and ease of use.
## Installation
```bash
pip install agentic-blocks
```
For development (editable install with dev extras):
```bash
pip install -e ".[dev]"
```
## Quick Start
### MCPClient - Connect to MCP Endpoints
The MCPClient provides a unified interface for connecting to different types of MCP endpoints:
```python
from agentic_blocks import MCPClient
# Connect to an SSE endpoint (sync by default)
client = MCPClient("https://example.com/mcp/server/sse")
# List available tools
tools = client.list_tools()
print(f"Available tools: {len(tools)}")
# Call a tool
result = client.call_tool("search", {"query": "What is MCP?"})
print(result)
```
**Supported endpoint types:**
- **SSE endpoints**: URLs with `/sse` in the path
- **HTTP endpoints**: URLs with `/mcp` in the path
- **Local scripts**: File paths to Python MCP servers
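
The endpoint type is inferred from the URL or file path you pass in. The sketch below is only an illustration of how such dispatch could work (it is not the library's actual logic; the function name `detect_endpoint_type` is hypothetical):

```python
from pathlib import Path

def detect_endpoint_type(endpoint: str) -> str:
    """Illustrative classification of an MCP endpoint string."""
    if endpoint.startswith(("http://", "https://")):
        if "/sse" in endpoint:
            return "sse"
        return "http"  # e.g. URLs with /mcp in the path
    # Anything else is treated as a local server script path
    return "script" if Path(endpoint).suffix == ".py" else "unknown"

print(detect_endpoint_type("https://example.com/mcp/server/sse"))  # sse
print(detect_endpoint_type("https://example.com/mcp"))             # http
print(detect_endpoint_type("server.py"))                           # script
```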
**Async support for advanced users:**
```python
# Async versions available
tools = await client.list_tools_async()
result = await client.call_tool_async("search", {"query": "async example"})
```
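
A sync-by-default API like this is typically a thin wrapper that runs the async implementation to completion. A minimal, self-contained illustration of the pattern (the class and its canned return value are invented for the demo, not taken from the library):

```python
import asyncio

class SyncByDefault:
    """Toy example: an async method plus a sync wrapper around it."""

    async def list_tools_async(self) -> list[str]:
        await asyncio.sleep(0)  # stand-in for real network I/O
        return ["search", "calculate"]

    def list_tools(self) -> list[str]:
        # Run the coroutine to completion on a fresh event loop
        return asyncio.run(self.list_tools_async())

client = SyncByDefault()
print(client.list_tools())  # ['search', 'calculate']
```

Note that `asyncio.run()` cannot be called from inside an already-running event loop, which is why the async variants remain available for use in async contexts.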
### Messages - Manage Conversation History
The Messages class helps build and manage LLM conversations in OpenAI-compatible format:
```python
from agentic_blocks import Messages
# Initialize with system prompt
messages = Messages(
    system_prompt="You are a helpful assistant.",
    user_prompt="Hello, how can you help me?",
    add_date_and_time=True
)
# Add assistant response
messages.add_assistant_message("I can help you with various tasks!")
# Add tool calls
tool_call = {
    "id": "call_123",
    "type": "function",
    "function": {"name": "get_weather", "arguments": '{"location": "Paris"}'}
}
messages.add_tool_call(tool_call)
# Add tool response
messages.add_tool_response("call_123", "The weather in Paris is sunny, 22°C")
# Get messages for LLM API
conversation = messages.get_messages()
# View readable format
print(messages)
```
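
Under the hood, a class like this just maintains a plain list of OpenAI-format message dicts. The stand-in below shows that structure; it is a sketch for intuition, not the library's actual implementation:

```python
class MiniMessages:
    """Minimal stand-in for Messages, holding OpenAI-compatible dicts."""

    def __init__(self, system_prompt=None, user_prompt=None):
        self._messages = []
        if system_prompt:
            self._messages.append({"role": "system", "content": system_prompt})
        if user_prompt:
            self._messages.append({"role": "user", "content": user_prompt})

    def add_assistant_message(self, content):
        self._messages.append({"role": "assistant", "content": content})

    def get_messages(self):
        return self._messages

m = MiniMessages(system_prompt="You are helpful.", user_prompt="Hi!")
m.add_assistant_message("Hello!")
print(m.get_messages())
```

Because `get_messages()` returns plain dicts, the result can be passed straight to any OpenAI-compatible chat completions API.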
### LLM - Call OpenAI-Compatible APIs
The `call_llm` function provides a simple interface for calling LLM completion APIs:
```python
from agentic_blocks import call_llm, Messages
# Method 1: Using with Messages object
messages = Messages(
    system_prompt="You are a helpful assistant.",
    user_prompt="What is the capital of France?"
)
response = call_llm(messages, temperature=0.7)
print(response) # "The capital of France is Paris."
```
```python
# Method 2: Using with raw message list
messages_list = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2+2?"}
]
response = call_llm(messages_list, model="gpt-4o-mini")
print(response) # "2+2 equals 4."
```
```python
# Method 3: Using with tools (for function calling)
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"}
                },
                "required": ["location"]
            }
        }
    }
]
messages = Messages(user_prompt="What's the weather like in Stockholm?")
response = call_llm(messages, tools=tools)
print(response)
```
**Environment Setup:**
Create a `.env` file in your project root:
```
OPENAI_API_KEY=your_api_key_here
```
Or pass the API key directly:
```python
response = call_llm(messages, api_key="your_api_key_here")
```
## Complete Example - Tool Calling with Weather API
This example demonstrates a complete workflow using function calling with an LLM. For a full interactive notebook version, see `notebooks/agentic_example.ipynb`.
```python
from agentic_blocks import call_llm, Messages
# Define tools in OpenAI function calling format
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather information for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit"
                    }
                },
                "required": ["location"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Perform a mathematical calculation",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "Mathematical expression to evaluate"
                    }
                },
                "required": ["expression"]
            }
        }
    }
]
# Create conversation with system and user prompts
messages = Messages(
    system_prompt="You are a helpful assistant with access to weather and calculation tools.",
    user_prompt="What is the weather in Stockholm?"
)
# Call LLM with tools - it will decide which tools to call
model = "gpt-4o-mini" # or your preferred model
response = call_llm(model=model, messages=messages, tools=tools)
# Add the LLM's response (including any tool calls) to conversation
messages.add_response_message(response)
# Display the conversation so far
for message in messages.get_messages():
    print(message)
# Check if there are pending tool calls that need execution
print("Has pending tool calls:", messages.has_pending_tool_calls())
# In a real implementation, you would:
# 1. Execute the actual tool calls (get_weather, calculate, etc.)
# 2. Add tool responses using messages.add_tool_response()
# 3. Call the LLM again to get the final user-facing response
```
**Expected Output:**
```
{'role': 'system', 'content': 'You are a helpful assistant with access to weather and calculation tools.'}
{'role': 'user', 'content': 'What is the weather in Stockholm?'}
{'role': 'assistant', 'content': '', 'tool_calls': [{'id': 'call_abc123', 'type': 'function', 'function': {'name': 'get_weather', 'arguments': '{"location": "Stockholm, Sweden", "unit": "celsius"}'}}]}
Has pending tool calls: True
```
**Key Features Demonstrated:**
- **Messages management**: Clean conversation history with system/user prompts
- **Tool calling**: LLM automatically decides to call the `get_weather` function
- **Response handling**: `add_response_message()` handles both content and tool calls
- **Pending detection**: `has_pending_tool_calls()` identifies when tools need execution
**Next Steps:**
After the LLM makes tool calls, you would implement the actual tool functions and continue the conversation:
```python
import json

# Implement actual weather function
def get_weather(location, unit="celsius"):
    # Your weather API implementation here
    return f"The weather in {location} is sunny, 22°{unit[0].upper()}"

# Execute pending tool calls
if messages.has_pending_tool_calls():
    last_message = messages.get_messages()[-1]
    for tool_call in last_message.get("tool_calls", []):
        if tool_call["function"]["name"] == "get_weather":
            args = json.loads(tool_call["function"]["arguments"])
            result = get_weather(**args)
            messages.add_tool_response(tool_call["id"], result)

    # Get final response from LLM
    final_response = call_llm(model=model, messages=messages)
    messages.add_assistant_message(final_response)
    print(f"Final response: {final_response}")
```
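
With more than one tool, a name-to-function registry keeps the dispatch loop generic instead of checking each tool name explicitly. A self-contained sketch (the `calculate` implementation below uses a restricted `eval` purely for the demo and is not safe for untrusted input; all names here are illustrative):

```python
import json

def get_weather(location, unit="celsius"):
    return f"The weather in {location} is sunny, 22°{unit[0].upper()}"

def calculate(expression):
    # Restricted eval for a controlled demo only; never use on untrusted input
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOL_REGISTRY = {"get_weather": get_weather, "calculate": calculate}

def execute_tool_call(tool_call: dict) -> str:
    """Look up the tool by name and call it with the JSON-decoded arguments."""
    fn = TOOL_REGISTRY[tool_call["function"]["name"]]
    args = json.loads(tool_call["function"]["arguments"])
    return fn(**args)

call = {"id": "call_1", "type": "function",
        "function": {"name": "calculate", "arguments": '{"expression": "2 + 2"}'}}
print(execute_tool_call(call))  # 4
```

The result string from `execute_tool_call` is what you would pass to `messages.add_tool_response(tool_call["id"], result)` before calling the LLM again.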
## Development Principles
This project follows these core principles:
- **Simplicity First**: Keep code simple, readable, and focused on core functionality
- **Sync-by-Default**: Primary methods are synchronous for ease of use, with optional async versions
- **Minimal Dependencies**: Avoid over-engineering and complex error handling unless necessary
- **Clean APIs**: Prefer straightforward method names and clear parameter expectations
- **Maintainable Code**: Favor fewer lines of clear code over comprehensive edge case handling
## API Reference
### MCPClient
```python
MCPClient(endpoint: str, timeout: int = 30)
```
**Methods:**
- `list_tools() -> List[Dict]`: Get available tools (sync)
- `call_tool(name: str, args: Dict) -> Dict`: Call a tool (sync)
- `list_tools_async() -> List[Dict]`: Async version of list_tools
- `call_tool_async(name: str, args: Dict) -> Dict`: Async version of call_tool
### Messages
```python
Messages(system_prompt=None, user_prompt=None, add_date_and_time=False)
```
**Methods:**
- `add_system_message(content: str)`: Add system message
- `add_user_message(content: str)`: Add user message
- `add_assistant_message(content: str)`: Add assistant message
- `add_tool_call(tool_call: Dict)`: Add tool call to assistant message
- `add_tool_calls(tool_calls)`: Add multiple tool calls from ChatCompletionMessageFunctionToolCall objects
- `add_response_message(message)`: Add ChatCompletionMessage response to conversation
- `add_tool_response(call_id: str, content: str)`: Add tool response
- `get_messages() -> List[Dict]`: Get all messages
- `has_pending_tool_calls() -> bool`: Check for pending tool calls
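
One plausible way to decide "pending" (a sketch of the idea, not the library's actual logic): a tool call is pending when an assistant message carries a `tool_calls` id that no later `tool`-role message has answered.

```python
def has_pending_tool_calls(messages: list) -> bool:
    """Sketch: is there an assistant tool_call id with no tool response yet?"""
    answered = {m.get("tool_call_id") for m in messages if m.get("role") == "tool"}
    for m in messages:
        if m.get("role") == "assistant":
            for call in m.get("tool_calls", []):
                if call["id"] not in answered:
                    return True
    return False

convo = [
    {"role": "user", "content": "Weather in Paris?"},
    {"role": "assistant", "content": "", "tool_calls": [
        {"id": "call_1", "type": "function",
         "function": {"name": "get_weather", "arguments": '{"location": "Paris"}'}}]},
]
print(has_pending_tool_calls(convo))  # True
convo.append({"role": "tool", "tool_call_id": "call_1", "content": "Sunny, 22°C"})
print(has_pending_tool_calls(convo))  # False
```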
### call_llm
```python
call_llm(messages, tools=None, api_key=None, model="gpt-4o-mini", **kwargs) -> str
```
**Parameters:**
- `messages`: Either a `Messages` instance or list of message dictionaries
- `tools`: Optional list of tools in OpenAI function calling format
- `api_key`: OpenAI API key (defaults to OPENAI_API_KEY from .env)
- `model`: Model name to use for completion
- `**kwargs`: Additional parameters passed to OpenAI API (temperature, max_tokens, etc.)
**Returns:** The assistant's response content as a string
## Requirements
- Python >= 3.11
- Dependencies: `mcp`, `requests`, `python-dotenv`, `openai`
## License
MIT