# Tools4All
A Python library that adds function calling capabilities to LLMs that don't natively support them, with a focus on Ollama models.
## 🎯 Goal
Tools4All enables function calling for any LLM, even those that don't natively support tools or function calling. It works by:
1. Injecting tool descriptions into the system prompt
2. Parsing the LLM's response to extract tool calls
3. Executing the tools and providing results back to the LLM
4. Generating a final answer based on the tool results
This approach allows you to use function calling with models like Gemma 3 (27B) through Ollama, even though they don't have native tool support.
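For illustration, the injected system prompt and expected tool-call format might look like the sketch below; the exact template is internal to Tools4All and may differ.
```python
import json

# Illustrative only: the real prompt template is internal to Tools4All.
# This shows the general pattern of step 1: describing tools in the system
# prompt and asking the model to emit JSON tool calls.
tools = [{
    "name": "get_weather",
    "description": "Get the current weather",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City and state"}
        },
        "required": ["location"],
    },
}]

system_prompt = (
    "You have access to the following tools:\n"
    + json.dumps(tools, indent=2)
    + "\nTo call a tool, reply with JSON such as:\n"
    + '{"name": "get_weather", "arguments": {"location": "San Francisco, CA"}}'
)
```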
## 🚀 Features
- **Universal Tool Support**: Add function calling to any LLM through Ollama
- **Tool Registry**: Easy registration and management of tools
- **Response Parsing**: Intelligent parsing of LLM responses to extract tool calls
- **Final Answer Generation**: Grounds the final answer in actual tool results to reduce hallucination
- **Flexible Model Configuration**: Works with any model available through Ollama
## 📥 Installation
```bash
pip install tools4all
```
Then use it as you would the official `ollama` Python library.
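For example, here is a hypothetical drop-in call, assuming Tools4All mirrors `ollama.Client.chat`; the documented tool-calling entry point is `process_prompt`, shown under Basic Usage below.
```python
from tools4all import Tools4All

# Hypothetical drop-in usage, assuming Tools4All mirrors ollama.Client.chat;
# see "Basic Usage" for the documented process_prompt workflow.
client = Tools4All(host="http://127.0.0.1:11434")
response = client.chat(
    model="gemma3:27b",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response["message"]["content"])
```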
## 📋 Development Installation
```bash
# Clone the repository
git clone git@github.com:alfredwallace7/tools4all.git
cd tools4all

# Create a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```
## 📦 Dependencies
- `ollama`: Python client for Ollama
- `pydantic`: Data validation and settings management
- `rich`: For pretty printing (optional)
## 🔧 Usage
### Basic Usage
```python
from tools4all import Tools4All, ToolRegistry
# Create a tool registry
registry = ToolRegistry()

# Define and register a tool
def get_weather(location):
    # Your implementation here
    return f"The weather in {location} is sunny."

registry.register_tool(
    "get_weather",
    get_weather,
    "Get the current weather",
    {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
            }
        },
        "required": ["location"]
    }
)

# Create a Tools4All client
client = Tools4All(host='http://127.0.0.1:11434')

# Process a user prompt
prompt = "What's the weather like in San Francisco?"
model = "gemma3:27b"  # or any other model available through Ollama

client.process_prompt(prompt, registry, model)
```
### Creating Custom Tools
You can create custom tools by defining functions and registering them with the `ToolRegistry`:
```python
def calculate_mortgage(principal, interest_rate, years):
    # Standard amortization formula: M = P * r(1+r)^n / ((1+r)^n - 1)
    monthly_rate = interest_rate / 100 / 12
    months = years * 12
    payment = principal * (monthly_rate * (1 + monthly_rate) ** months) / ((1 + monthly_rate) ** months - 1)
    return f"Monthly payment: ${payment:.2f}"

registry.register_tool(
    "calculate_mortgage",
    calculate_mortgage,
    "Calculate monthly mortgage payment",
    {
        "type": "object",
        "properties": {
            "principal": {
                "type": "number",
                "description": "Loan amount in dollars"
            },
            "interest_rate": {
                "type": "number",
                "description": "Annual interest rate in percent"
            },
            "years": {
                "type": "integer",
                "description": "Loan term in years"
            }
        },
        "required": ["principal", "interest_rate", "years"]
    }
)
```
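Once registered, the tool is exercised through `process_prompt` like any other; the model is expected to extract the arguments from the prompt and call `calculate_mortgage` (the figures below are illustrative):
```python
# Reuses the client from "Basic Usage" above. The model should emit a call
# like calculate_mortgage(principal=300000, interest_rate=6.5, years=30).
prompt = "What is the monthly payment on a $300,000 loan at 6.5% interest over 30 years?"
client.process_prompt(prompt, registry, model="gemma3:27b")
```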
## 🧩 Architecture
Tools4All consists of several key components:
1. **LLMResponseParser**: Parses LLM responses to extract code blocks, tool calls, and comments
2. **ToolRegistry**: Manages tool registration and execution
3. **Tools4All**: Main class that handles chat completions and adapts for models without native tool support
4. **process_prompt**: Orchestrates the entire workflow from user prompt to final answer
## 🔄 Workflow
1. User sends a prompt
2. The prompt is sent to the LLM with tool descriptions
3. The LLM's response is parsed to extract tool calls
4. Tools are executed with the provided arguments
5. Tool results are sent back to the LLM
6. A final answer is generated based on the tool results
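A minimal, self-contained sketch of this loop (the parser and function names here are illustrative, not Tools4All's exact API):
```python
import json

def parse_tool_calls(text):
    """Find JSON objects shaped like {"name": ..., "arguments": {...}} in text."""
    decoder = json.JSONDecoder()
    calls, i = [], 0
    while (start := text.find("{", i)) != -1:
        try:
            obj, end = decoder.raw_decode(text, start)
        except json.JSONDecodeError:
            i = start + 1
            continue
        if isinstance(obj, dict) and "name" in obj and "arguments" in obj:
            calls.append(obj)
        i = end
    return calls

def run(prompt, tools, llm_chat):
    """Steps 1-6 above. `tools` maps name -> callable; `llm_chat` takes a
    list of messages and returns the model's reply as a string."""
    messages = [
        {"role": "system", "content": "You can call these tools: " + ", ".join(tools)},
        {"role": "user", "content": prompt},
    ]
    reply = llm_chat(messages)                                     # steps 1-2
    for call in parse_tool_calls(reply):                           # step 3
        result = tools[call["name"]](**call["arguments"])          # step 4
        messages.append({"role": "tool", "content": str(result)})  # step 5
    return llm_chat(messages)                                      # step 6
```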
## 📝 Example
```
User: What's the weather like in San Francisco, CA?
Model: I'll check the weather for you.
Tool Call: get_temperature(location="San Francisco, CA", format="fahrenheit")
Tool Result: Current temperature in San Francisco, CA is 65°F
Tool Call: get_humidity(location="San Francisco, CA")
Tool Result: Current humidity in San Francisco, CA is 75%
Final Answer:
Based on the information I gathered:
Current temperature in San Francisco, CA is 65°F
Current humidity in San Francisco, CA is 75%
This is the current weather information for your requested location.
```
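The `get_temperature` and `get_humidity` tools in this transcript are registered the same way as `get_weather` above; stub implementations might look like this:
```python
def get_temperature(location, format="fahrenheit"):
    # Stub for illustration; a real tool would query a weather API.
    return f"Current temperature in {location} is 65°F"

def get_humidity(location):
    # Stub for illustration.
    return f"Current humidity in {location} is 75%"
```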
## 🛠️ Advanced Configuration
### Using Different Models
You can specify which model to use when processing prompts:
```python
# Use Gemma 3 (27B)
client.process_prompt(prompt, registry, model="gemma3:27b")
# Use Qwen 2.5 (3B)
client.process_prompt(prompt, registry, model="qwen2.5:3b")
```
### Custom Host
You can specify a custom Ollama host:
```python
client = Tools4All(host='http://your-ollama-server:11434')
```
## 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## 📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
## 🙏 Acknowledgements
- [Ollama](https://github.com/ollama/ollama) for providing the LLM backend
- [Pydantic](https://github.com/pydantic/pydantic) for data validation