| Field | Value |
|-------|-------|
| Name | toyaikit |
| Version | 0.0.3 |
| Summary | Toolkit for building AI assistants and tool integrations. |
| Upload time | 2025-08-13 13:52:01 |
| Requires Python | >=3.9 |
| License | WTFPL |
| Keywords | ai, assistant, integration, tools |
| Requirements | none recorded |
# toyaikit
ToyAIKit is a minimalistic Python library for building AI assistants powered by Large Language Models (LLMs). It provides a simple yet powerful framework for creating chatbots with capabilities such as tool calling and an interactive chat interface.
The project builds upon concepts from multiple courses and workshops:
- ["From RAG to Agents: Build Your Own AI Assistant" Workshop](https://github.com/alexeygrigorev/rag-agents-workshop)
- [MLZoomcamp's LLM Course](https://github.com/DataTalksClub/llm-zoomcamp) covering AI Agents and MCP
It's great for learning about agents and agentic assistants, but not suitable for production use.
Main features:
- Support for OpenAI with both `responses` and `chat.completions` APIs
- Support for OpenAI Agents SDK and Pydantic AI
- Tool integration for function calling
- Interactive IPython-based chat interface
- Easy to add new providers and runners
## Quick Start
```bash
pip install toyaikit
```
### Basic Usage with OpenAI
```python
from openai import OpenAI
from toyaikit.llm import OpenAIClient
from toyaikit.tools import Tools
from toyaikit.chat import IPythonChatInterface
from toyaikit.chat.runners import OpenAIResponsesRunner
# Create tools
tools = Tools()
# Add a simple function as a tool
def get_weather(city: str):
    """Get weather information for a city."""
    return f"Weather in {city}: Sunny, 25°C"
tools.add_tool(get_weather)
# Create chat interface and client
chat_interface = IPythonChatInterface()
openai_client = OpenAIClient(
    model="gpt-4o-mini",
    client=OpenAI()
)
# Create and run chat assistant
runner = OpenAIResponsesRunner(
    tools=tools,
    developer_prompt="You are a helpful weather assistant.",
    chat_interface=chat_interface,
    llm_client=openai_client
)
runner.run()
```
It displays the responses from the assistant as well as the function calls:
<img src="./images/weather.png" width="50%" />
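Conceptually, a runner repeats one loop: send the conversation to the model, execute any tool call it requests, append the result, and stop once the model replies with plain text. The sketch below illustrates that loop with a stubbed client and plain dicts; it is not toyaikit's actual implementation, and the message shapes are simplified for illustration.

```python
def run_loop(client, tools, messages):
    """Minimal agentic loop: call the model, run requested tools,
    feed results back, and return the model's final text answer."""
    while True:
        response = client.send(messages)
        if "tool_call" not in response:
            return response["text"]  # final answer, stop looping
        name = response["tool_call"]["name"]
        args = response["tool_call"]["arguments"]
        result = tools[name](**args)
        # append the tool result so the model sees it on the next turn
        messages.append({"role": "tool", "name": name, "content": str(result)})

class StubClient:
    """Pretends to be an LLM: first requests a tool call, then answers."""
    def __init__(self):
        self.turn = 0

    def send(self, messages):
        self.turn += 1
        if self.turn == 1:
            return {"tool_call": {"name": "get_weather",
                                  "arguments": {"city": "Berlin"}}}
        return {"text": "It is sunny in Berlin."}

tools = {"get_weather": lambda city: f"Weather in {city}: Sunny, 25°C"}
messages = [{"role": "user", "content": "Weather in Berlin?"}]
answer = run_loop(StubClient(), tools, messages)
# answer == "It is sunny in Berlin."
```

A real runner does the same thing against the provider's API, with proper message formats and error handling.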
### Tools System
The tools system allows you to easily integrate Python functions with LLM function calling:
```python
from toyaikit.tools import Tools
tools = Tools()
# Add individual functions
def calculate_area(length: float, width: float):
    """Calculate the area of a rectangle."""
    return length * width
tools.add_tool(calculate_area)
# Add all methods from a class instance
class MathTools:
    def add(self, a: float, b: float):
        """Add two numbers."""
        return a + b

    def multiply(self, a: float, b: float):
        """Multiply two numbers."""
        return a * b
math_tools = MathTools()
tools.add_tools(math_tools)
```
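Under the hood, a tools registry like this typically derives an OpenAI-style function-calling schema from each function's signature and docstring. Here is a rough sketch of that idea using only the standard library; `function_to_schema` is a hypothetical helper for illustration, not toyaikit's code.

```python
import inspect

def function_to_schema(func):
    """Build an OpenAI-style function schema from a Python function.

    Basic Python annotations are mapped to JSON Schema types;
    anything unrecognized falls back to "string".
    """
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}
    properties = {}
    required = []
    for name, param in inspect.signature(func).parameters.items():
        properties[name] = {"type": type_map.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value => required argument
    return {
        "type": "function",
        "name": func.__name__,
        "description": (func.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": required,
        },
    }

def calculate_area(length: float, width: float):
    """Calculate the area of a rectangle."""
    return length * width

schema = function_to_schema(calculate_area)
# schema["name"] == "calculate_area"
# schema["parameters"]["properties"]["length"] == {"type": "number"}
```

The schemas are what the LLM actually sees when deciding which tool to call and with what arguments.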
### Chat Interface
The IPython-based chat interface provides an interactive way to chat with your AI assistant:
```python
from toyaikit.chat import IPythonChatInterface
chat_interface = IPythonChatInterface()
# Get user input
user_input = chat_interface.input()
# Display message
chat_interface.display("Hello!")
# Display AI response
chat_interface.display_response("AI response")
# Display function call
chat_interface.display_function_call("function_name", '{"arg1": "value1"}', "result")
```
## Examples
### OpenAI Chat Completions API
The default runner uses the `responses` API. If you need to use
the `chat.completions` API, use `OpenAIChatCompletionsRunner`:
```python
from openai import OpenAI
from toyaikit.tools import Tools
from toyaikit.llm import OpenAIChatCompletionsClient
from toyaikit.chat.runners import OpenAIChatCompletionsRunner
from toyaikit.chat import IPythonChatInterface
# Setup tools and client
agent_tools = ... # class with some functions to be called
tools = Tools()
tools.add_tools(agent_tools)
chat_interface = IPythonChatInterface()
llm_client = OpenAIChatCompletionsClient(
    model="gpt-4o-mini",
    client=OpenAI()
)
# Create and run the chat completions runner
runner = OpenAIChatCompletionsRunner(
    tools=tools,
    developer_prompt="You are a coding agent that can modify Django projects.",
    chat_interface=chat_interface,
    llm_client=llm_client
)
runner.run()
```
### Extending it to other LLM providers
Most LLM providers expose an OpenAI-compatible API, so they can be used
with the OpenAI client by pointing it at a different base URL.
For example, this is how we can use Z.ai's GLM-4.5:
```python
import os

from openai import OpenAI
from toyaikit.tools import Tools
from toyaikit.chat import IPythonChatInterface
from toyaikit.chat.runners import OpenAIChatCompletionsRunner
from toyaikit.llm import OpenAIChatCompletionsClient
# Setup z.ai client
zai_client = OpenAI(
    api_key=os.getenv('ZAI_API_KEY'),
    base_url='https://api.z.ai/api/paas/v4/'
)
# define the model to use
llm_client = OpenAIChatCompletionsClient(
    model='glm-4.5',
    client=zai_client
)
# Setup tools and run
agent_tools = ...
tools = Tools()
tools.add_tools(agent_tools)
chat_interface = IPythonChatInterface()
runner = OpenAIChatCompletionsRunner(
    tools=tools,
    developer_prompt="You are a coding agent that can modify Django projects.",
    chat_interface=chat_interface,
    llm_client=llm_client
)
runner.run()
```
## Wrappers
ToyAIKit can also help with running agents from the OpenAI Agents SDK
and Pydantic AI.
### OpenAI Agents SDK
```python
from agents import Agent, function_tool
from toyaikit.tools import get_instance_methods
from toyaikit.chat import IPythonChatInterface
from toyaikit.chat.runners import OpenAIAgentsSDKRunner
# use get_instance_methods to find all the methods of an object
coding_agent_tools_list = []
for m in get_instance_methods(agent_tools):
    tool = function_tool(m)
    coding_agent_tools_list.append(tool)
# alternatively, define the list yourself:
coding_agent_tools_list = [
    function_tool(agent_tools.execute_bash_command),
    function_tool(agent_tools.read_file),
    function_tool(agent_tools.search_in_files),
    function_tool(agent_tools.see_file_tree),
    function_tool(agent_tools.write_file)
]
# create the Agent
coding_agent = Agent(
    name="CodingAgent",
    instructions="You are a coding agent that can modify Django projects.",
    tools=coding_agent_tools_list,
    model='gpt-4o-mini'
)
# Setup and run with ToyAIKit
chat_interface = IPythonChatInterface()
runner = OpenAIAgentsSDKRunner(
    chat_interface=chat_interface,
    agent=coding_agent
)
# In Jupyter, run asynchronously
await runner.run()
```
### Pydantic AI with OpenAI
```python
from pydantic_ai import Agent
from toyaikit.tools import get_instance_methods
from toyaikit.chat import IPythonChatInterface
from toyaikit.chat.runners import PydanticAIRunner
# get tools from your object with functions
coding_agent_tools_list = get_instance_methods(agent_tools)
# Create Pydantic AI agent with OpenAI
coding_agent = Agent(
    'openai:gpt-4o-mini',
    instructions="You are a coding agent that can modify Django projects.",
    tools=coding_agent_tools_list
)
# Setup and run with ToyAIKit
chat_interface = IPythonChatInterface()
runner = PydanticAIRunner(
    chat_interface=chat_interface,
    agent=coding_agent
)
# Run asynchronously
await runner.run()
```
You can easily switch to Claude:
```python
coding_agent = Agent(
    'anthropic:claude-3-5-sonnet-latest',
    instructions="You are a coding agent that can modify Django projects.",
    tools=coding_agent_tools_list
)
```
## Development
### Running Tests
```bash
make test
```
### Publishing
Build the package:
```bash
uv run hatch build
```
Publish to test PyPI:
```bash
uv run hatch publish --repo test
```
Publish to PyPI:
```bash
uv run hatch publish
```
Clean up:
```bash
rm -r dist/
```
Note: For Hatch publishing, you'll need to configure your PyPI credentials in `~/.pypirc` or use environment variables.
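For reference, a typical `~/.pypirc` with both indexes might look like the following (the repository section names assume the `--repo test` naming used above, and the token values are placeholders):

```ini
[distutils]
index-servers =
    pypi
    test

[pypi]
username = __token__
password = pypi-<your-api-token>

[test]
repository = https://test.pypi.org/legacy/
username = __token__
password = pypi-<your-test-api-token>
```

Alternatively, Hatch reads credentials from the `HATCH_INDEX_USER` and `HATCH_INDEX_AUTH` environment variables.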
## Raw data
{
"_id": null,
"home_page": null,
"name": "toyaikit",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.9",
"maintainer_email": "Alexey Grigorev <alexey@datatalks.club>",
"keywords": "ai, assistant, integration, tools",
"author": null,
"author_email": "Alexey Grigorev <alexey@datatalks.club>",
"download_url": "https://files.pythonhosted.org/packages/4e/66/fcff57f971a615c1f3bceb4b86a82ab910a7478bb20803895c09aabadcf8/toyaikit-0.0.3.tar.gz",
"platform": null,
"bugtrack_url": null,
"license": "WTFPL",
"summary": "Toolkit for building AI assistants and tool integrations.",
"version": "0.0.3",
"project_urls": {
"Homepage": "https://github.com/alexeygrigorev/toyaikit",
"Issues": "https://github.com/alexeygrigorev/toyaikit/issues",
"Repository": "https://github.com/alexeygrigorev/toyaikit"
},
"split_keywords": [
"ai",
" assistant",
" integration",
" tools"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "fe5915ec3d81f31b24a05bd9e964ade13b8d9e7b52988d7e9fda2f885578bb4c",
"md5": "865f0b826cff272242fe0d8b6ae45eb3",
"sha256": "aa6491a182a79cd1cf2adfb23f92d8bdbb8bb4c45f109a92e821f2fc1aad6db6"
},
"downloads": -1,
"filename": "toyaikit-0.0.3-py3-none-any.whl",
"has_sig": false,
"md5_digest": "865f0b826cff272242fe0d8b6ae45eb3",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.9",
"size": 12900,
"upload_time": "2025-08-13T13:52:00",
"upload_time_iso_8601": "2025-08-13T13:52:00.255063Z",
"url": "https://files.pythonhosted.org/packages/fe/59/15ec3d81f31b24a05bd9e964ade13b8d9e7b52988d7e9fda2f885578bb4c/toyaikit-0.0.3-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "4e66fcff57f971a615c1f3bceb4b86a82ab910a7478bb20803895c09aabadcf8",
"md5": "5c2fbc0f48b4bef205f28abf91df8ff3",
"sha256": "161fca28290713e8730dbbf5b13341cc404d99d8211e0c5696381f6d85c8c0f2"
},
"downloads": -1,
"filename": "toyaikit-0.0.3.tar.gz",
"has_sig": false,
"md5_digest": "5c2fbc0f48b4bef205f28abf91df8ff3",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 112108,
"upload_time": "2025-08-13T13:52:01",
"upload_time_iso_8601": "2025-08-13T13:52:01.885692Z",
"url": "https://files.pythonhosted.org/packages/4e/66/fcff57f971a615c1f3bceb4b86a82ab910a7478bb20803895c09aabadcf8/toyaikit-0.0.3.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-08-13 13:52:01",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "alexeygrigorev",
"github_project": "toyaikit",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "toyaikit"
}