# Local Agent Toolkit
[![PyPI version](https://badge.fury.io/py/local-agent-toolkit.svg)](https://badge.fury.io/py/local-agent-toolkit)
[![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
A sophisticated AI agent toolkit that can use various tools to answer questions and perform tasks. The toolkit supports multiple AI providers including Ollama and OpenAI, and can be used both as a command-line tool and as a Python library.
## 🚀 Quick Start
### Installation
```bash
pip install local-agent-toolkit
```
### Command Line Usage
First, set up your environment configuration (see [Environment Setup](#environment-setup) below), then:
```bash
# Ask a question directly
local-agent "What files are in the current directory?"
# Interactive mode
local-agent
# With custom settings
local-agent "Analyze the code structure" --no-save --no-stream
```
### Python Library Usage
```python
from local_agent_toolkit import run_agent_with_question
# Simple usage
result, messages = run_agent_with_question("List all Python files in the project")
print(result)
# With custom options
result, messages = run_agent_with_question(
    question="What's the weather like?",
    save_messages=False,
    stream=False
)
```
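`run_agent_with_question` returns the agent's final answer together with the full conversation transcript. A minimal sketch for inspecting the transcript, assuming the messages use the common role/content chat format (the exact schema may vary by provider):
```python
# Hedged sketch: walk the returned transcript. The field names are an
# assumption; adjust to what your installed version actually returns.
for msg in messages:
    role = msg.get("role", "?")
    content = str(msg.get("content", ""))[:80]
    print(f"{role}: {content}")
```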
## ✨ Features
- **🤖 Multiple AI Agent Support**: Choose between Ollama and OpenAI agents
- **🛠️ Rich Tool Integration**: File operations, internet search, shell commands, and more
- **💻 Dual Interface**: Command-line tool and Python library
- **📝 Conversation History**: Automatic saving with customizable options
- **⚙️ Flexible Configuration**: Environment-based configuration for different AI providers
- **🔄 Streaming Support**: Real-time response streaming (Ollama)
- **🐍 Python 3.8+**: Compatible with modern Python versions
## 📦 Installation Options
### From PyPI (Recommended)
```bash
pip install local-agent-toolkit
```
### From Source
```bash
git clone https://github.com/technicalheist/local-agent-toolkit.git
cd local-agent-toolkit
pip install -e .
```
### Development Installation
```bash
git clone https://github.com/technicalheist/local-agent-toolkit.git
cd local-agent-toolkit
pip install -e ".[dev]"
```
## Environment Setup
On your first run, the toolkit will guide you through an interactive setup process where you can:
- Choose between Ollama (local) and OpenAI agents
- Configure the necessary settings (model, API keys, base URLs, etc.)
- Save your configuration either locally or globally
```
$ local-agent "What files are in the current directory?"
========================================================
🚀 Local Agent Toolkit - First-time Setup
========================================================
Welcome! Let's set up your environment for Local Agent Toolkit.
This will help you configure the AI agent you want to use.
First, choose which AI agent you want to use:
1. Ollama (local LLM, requires Ollama running)
2. OpenAI (requires API key)
Enter your choice (1 or 2): 2
Enter your OpenAI API key: sk-...
Enter OpenAI API base URL [https://api.openai.com/v1]:
Enter OpenAI model name [gpt-4]:
Where would you like to save this configuration?
1. Current directory (.env file in your working directory)
2. Global user configuration (~/.config/local-agent-toolkit/.env)
Enter your choice (1 or 2): 2
✅ Configuration saved successfully!
📁 Configuration file: /home/user/.config/local-agent-toolkit/.env
🔄 Configuration loaded and ready to use!
========================================================
```
You can also manually configure the toolkit by:
1. Creating a `.env` file in your working directory
2. Setting environment variables in your terminal session
3. Creating a global configuration in `~/.config/local-agent-toolkit/.env`
Example `.env` file contents:
```bash
CURRENT_AGENT=OLLAMA
OLLAMA_MODEL=llama3.1
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_HOST=http://localhost:11434
```
## ⚙️ Configuration
### Environment Variables
Create a `.env` file in your project root (copy from `.env.example`):
```bash
# Agent Selection
CURRENT_AGENT=OLLAMA # or OPENAI
# Ollama Configuration
OLLAMA_MODEL=llama3.1
OLLAMA_BASE_URL=http://localhost:11434
# OpenAI Configuration
OPENAI_API_KEY=your_openai_api_key_here
OPENAI_MODEL=gpt-4
# Optional Settings
MAX_ITERATIONS=25
WORK_DIRECTORY=./
LOG_LEVEL=INFO
```
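Since `python-dotenv` is already a dependency of the toolkit, you can sanity-check a configuration before running the agent. A minimal sketch using only the documented variable names above:
```python
# Minimal config sanity check using python-dotenv (a toolkit dependency).
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

agent = os.getenv("CURRENT_AGENT", "OLLAMA")
if agent == "OPENAI" and not os.getenv("OPENAI_API_KEY"):
    raise SystemExit("CURRENT_AGENT=OPENAI but OPENAI_API_KEY is not set")
model = os.getenv("OLLAMA_MODEL") if agent == "OLLAMA" else os.getenv("OPENAI_MODEL")
print(f"Agent: {agent}, model: {model}")
```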
### Supported AI Providers
#### 🦙 Ollama Agent
- **Purpose**: Local AI models with privacy
- **Requirements**: Ollama server running locally
- **Models**: Any Ollama-compatible model (llama3.1, codellama, etc.)
- **Benefits**: Privacy, no API costs, offline usage
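Before using the Ollama agent, confirm the server is reachable and your model is pulled. A quick check using the `ollama` Python package (already a toolkit dependency), assuming only the standard `ollama.list()` call:
```python
# Quick connectivity check against the local Ollama server.
import ollama

try:
    response = ollama.list()  # raises if the server is not reachable
    print("Ollama is up. Installed models:", response)
except Exception as exc:
    raise SystemExit(f"Ollama not reachable (is `ollama serve` running?): {exc}")
```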
#### 🤖 OpenAI Agent
- **Purpose**: Cloud-based AI with latest models
- **Requirements**: OpenAI API key
- **Models**: GPT-4, GPT-3.5-turbo, etc.
- **Benefits**: High performance, latest capabilities
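Likewise, you can verify OpenAI credentials up front with the `openai` package (also a toolkit dependency). A sketch assuming the openai>=1.0 client API:
```python
# Verify the API key with a cheap metadata call (openai>=1.0 client API).
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
models = client.models.list()  # raises an authentication error on a bad key
print("Key OK. Example model:", models.data[0].id)
```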
## 🛠️ Available Tools
The agents have access to these built-in tools:
- **📁 File Operations**: `list_files`, `read_file`, `write_file`
- **🔍 Pattern Search**: `list_files_by_pattern`
- **🌐 Internet Search**: `ask_any_question_internet`
- **💻 Shell Commands**: `execute_shell_command`
- **📂 Directory Operations**: `mkdir`
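The schemas in `tools_definition.py` describe each tool's name and parameters to the model. Purely as an illustration, here is what a definition for `read_file` might look like in the widely used OpenAI-style function-calling format; the package's actual schemas may differ in detail:
```python
# Illustrative sketch only: not copied from tools_definition.py.
read_file_tool = {
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read a text file and return its contents.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "Path to the file to read."},
            },
            "required": ["path"],
        },
    },
}
```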
## 📚 Usage Examples
### Command Line Interface
```bash
# Basic usage
local-agent "What Python files are in this project?"
# Interactive mode with conversation history
local-agent
# Disable features
local-agent "Analyze code" --no-save --no-stream
# Custom message file
local-agent "Help me debug" --messages custom_conversation.json
```
### Python Library
```python
# Basic usage
from local_agent_toolkit import run_agent_with_question
result, messages = run_agent_with_question(
    "Create a Python script that lists all files"
)

# Advanced usage with options
result, messages = run_agent_with_question(
    question="What's the project structure?",
    save_messages=True,
    messages_file="project_analysis.json",
    stream=False
)

# Using specific agents
from local_agent_toolkit import OllamaAgent, OpenAIAgent
from local_agent_toolkit.helper.tools_definition import tools
# NOTE: assumed import path for the tool-name -> callable mapping;
# adjust to wherever available_functions lives in your installed version.
from local_agent_toolkit.helper.tools import available_functions

# Create Ollama agent
ollama_agent = OllamaAgent(
    tool_definitions=tools,
    tool_callables=available_functions
)

# Create OpenAI agent
openai_agent = OpenAIAgent(
    tool_definitions=tools,
    tool_callables=available_functions
)
```
### Environment Management
#### Option 1: Global .env file
```bash
# Create .env in your project root
cp .env.example .env
# Edit .env with your settings
```
#### Option 2: Per-project configuration
```python
import os
from dotenv import load_dotenv
# Load from specific path
load_dotenv('/path/to/your/project/.env')
# Or set programmatically
os.environ['CURRENT_AGENT'] = 'OPENAI'
os.environ['OPENAI_API_KEY'] = 'your_key_here'
from local_agent_toolkit import run_agent_with_question
```
#### Option 3: Docker/Container deployment
```dockerfile
# Dockerfile
FROM python:3.11-slim
WORKDIR /app
RUN pip install local-agent-toolkit
# Set environment variables
ENV CURRENT_AGENT=OLLAMA
ENV OLLAMA_BASE_URL=http://ollama-server:11434
ENV OLLAMA_MODEL=llama3.1
COPY . .
CMD ["local-agent"]
```
## 🔧 Development
### Running Tests
```bash
# Install development dependencies
pip install -e ".[dev]"
# Run tests
pytest __test__/
# Run specific tests
pytest __test__/ollama_agent_test.py
pytest __test__/openai_agent_test.py
```
### Building for Distribution
```bash
# Install build tools
pip install build twine
# Build the package
python -m build
# Upload to PyPI (maintainers only)
twine upload dist/*
```
## 📝 Project Structure
```
local-agent-toolkit/
├── app.py # Command-line interface
├── agents/ # AI agent implementations
│ ├── OllamaAgent.py # Ollama-based agent
│ └── OpenAIAgent.py # OpenAI-based agent
├── helper/ # Core library modules
│ ├── __init__.py # Library exports
│ ├── agent.py # Base agent logic
│ ├── tools.py # Tool implementations
│ ├── tools_definition.py # Tool schemas
│ └── tool_call.py # Main execution function
├── __test__/ # Test suite
├── requirements.txt # Dependencies
├── setup.py # Package configuration
├── pyproject.toml # Modern packaging config
└── .env.example # Environment template
```
## 🤝 Contributing
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- [Ollama](https://ollama.ai/) for local AI model serving
- [OpenAI](https://openai.com/) for API access to advanced models
- The Python community for excellent tooling and libraries
## 📞 Support
- 📧 Issues: [GitHub Issues](https://github.com/technicalheist/local-agent-toolkit/issues)
- 📖 Documentation: [GitHub Repository](https://github.com/technicalheist/local-agent-toolkit)
- 💬 Discussions: [GitHub Discussions](https://github.com/technicalheist/local-agent-toolkit/discussions)
## Running from Source
If you're working from a clone of the repository rather than the installed package, you can invoke the CLI directly with `python app.py`. Use command-line flags for more control:
```bash
# Ask a question using the --question flag
python app.py --question "What files are in the helper directory?"
# Don't save the conversation history
python app.py --question "Get current time" --no-save
# Save to a custom file
python app.py --question "Create a test directory" --messages-file "my_conversation.json"
```
## Command-Line Options
- `question` (positional): The question to ask the agent
- `-q, --question`: Alternative way to specify the question
- `--no-save`: Don't save the conversation history to a file
- `--no-stream`: Disable streamed output (Ollama)
- `--messages-file`: Custom filename for saving conversation history (default: `messages.json`)
- `-h, --help`: Show help message
## Examples
```bash
# Interactive mode
python app.py
# Simple question
python app.py "What's the weather like?"
# With custom options
python app.py -q "List directory contents" --messages-file "dir_check.json"
# Without saving conversation
python app.py "Quick calculation: 2+2" --no-save
```
## Conversation History
By default, all conversations are saved to `messages.json`. You can:
- Use `--no-save` to skip saving
- Use `--messages-file` to specify a custom filename
- View or post-process the conversation history in JSON format (see the sketch below)
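For example, a minimal sketch for inspecting a saved transcript, assuming the default `messages.json` and dict-shaped entries (field names may vary by provider):
```python
# Inspect a saved conversation file produced by the CLI.
import json

with open("messages.json") as f:
    messages = json.load(f)

# Count entries that involve tools; the exact field names are an assumption.
tool_related = [m for m in messages if m.get("role") == "tool" or m.get("tool_calls")]
print(f"{len(messages)} messages, {len(tool_related)} tool-related entries")
```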
## Notes
- The agent can handle complex multi-step tasks
- Multiple AI providers: choose between Ollama (local) and OpenAI (cloud) agents based on your needs
- Conversations include tool calls and responses for full transparency
- The JSON conversation files can be used for debugging or analysis
- The agent keeps working until it completes the task or hits the maximum iteration limit (configurable via `MAX_ITERATIONS`; see the sketch below)
- Both agent types support the same tool set and functionality
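A minimal sketch for tightening that limit, assuming the documented `MAX_ITERATIONS` variable is read when the agent runs:
```python
# Cap the agent's tool-calling loop via the documented MAX_ITERATIONS setting.
import os

os.environ["MAX_ITERATIONS"] = "10"

from local_agent_toolkit import run_agent_with_question

result, _ = run_agent_with_question("List the Python files here", save_messages=False)
print(result)
```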