# OllamaPy
A powerful terminal-based chat interface for Ollama with AI meta-reasoning capabilities. OllamaPy provides an intuitive way to interact with local AI models while featuring unique "vibe tests" that evaluate AI decision-making consistency.
## Features
- 🤖 **Terminal Chat Interface** - Clean, user-friendly chat experience in your terminal
- 🔄 **Streaming Responses** - Real-time streaming for natural conversation flow
- 📚 **Model Management** - Automatic model pulling and listing of available models
- 🧠 **Meta-Reasoning** - AI analyzes user input and selects appropriate actions
- 🛠️ **Extensible Actions** - Easy-to-extend action system with parameter support
- 🧪 **AI Vibe Tests** - Built-in tests to evaluate AI consistency and reliability
- 🔢 **Parameter Extraction** - AI intelligently extracts parameters from natural language
## Prerequisites
You need to have [Ollama](https://ollama.ai/) installed and running on your system.
```bash
# Install Ollama (if not already installed)
curl -fsSL https://ollama.ai/install.sh | sh

# Start the Ollama server
ollama serve
```
## Installation
Install from PyPI:
```bash
pip install ollamapy
```
Or install from source:
```bash
git clone https://github.com/ScienceIsVeryCool/OllamaPy.git
cd OllamaPy
pip install .
```
## Quick Start
Simply run the chat interface:
```bash
ollamapy
```
This will start a chat session with the default model (gemma3:4b). If the model isn't available locally, OllamaPy will automatically pull it for you.
## Usage Examples
### Basic Chat
```bash
# Start chat with default model
ollamapy
```
### Custom Model
```bash
# Use a specific model
ollamapy --model gemma2:2b
ollamapy -m codellama:7b
```
### Dual Model Setup (Analysis + Chat)
```bash
# Use a small, fast model for analysis and a larger model for chat
ollamapy --analysis-model gemma2:2b --model llama3.2:7b
ollamapy -a gemma2:2b -m mistral:7b

# Great for performance: the small model handles action selection, the large model handles conversation
```
### Two-Step Analysis Mode
```bash
# Enable two-step analysis (better for smaller models)
ollamapy --two-step --analysis-model gemma2:2b

# Two-step mode separates action selection from parameter extraction
```
### System Message
```bash
# Set context for the AI
ollamapy --system "You are a helpful coding assistant specializing in Python"
ollamapy -s "You are a creative writing partner"
```
### Combined Options
```bash
# Use custom models with system message and two-step mode
ollamapy --two-step --analysis-model gemma2:2b --model mistral:7b --system "You are a helpful assistant"
```
## Meta-Reasoning System
OllamaPy features a unique meta-reasoning system in which the AI examines the intent behind each message and dynamically selects the most appropriate action from those available.
### Dual Model Architecture
For optimal performance, you can use two different models:
- **Analysis Model**: A smaller, faster model (like `gemma2:2b`) for quick action selection
- **Chat Model**: A larger, more capable model (like `llama3.2:7b`) for generating responses
This architecture provides the best of both worlds - fast decision-making and high-quality responses.
```bash
# Example: Fast analysis with powerful chat
ollamapy --analysis-model gemma2:2b --model llama3.2:7b
```
### Analysis Modes
OllamaPy supports two analysis modes:
1. **Single-Step Mode** (default): The AI selects the action and extracts parameters in one analysis
2. **Two-Step Mode**: The AI first selects the action, then extracts parameters in a separate step
Two-step mode is particularly useful with smaller models that might struggle with complex multi-task analysis.
```bash
# Enable two-step mode
ollamapy --two-step
```
### Currently Available Actions
- **null** - Default conversation mode. Used for normal chat when no special action is needed
- **getWeather** - Provides weather information (accepts optional location parameter)
- **getTime** - Returns the current date and time (accepts optional timezone parameter)
- **square_root** - Calculates the square root of a number (requires number parameter)
- **calculate** - Evaluates basic mathematical expressions (requires expression parameter)
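Each action's context string can also be produced directly with `execute_action` (covered under Python API below), which is a quick way to see exactly what the AI receives. The `square_root` call matches the example later in this README; the `calculate` input is an assumed basic arithmetic string:

```python
from ollamapy import execute_action

# square_root requires a "number" parameter
print(execute_action("square_root", {"number": 16}))
# "The square root of 16 is 4"

# calculate requires an "expression" parameter; a simple arithmetic string is assumed here
print(execute_action("calculate", {"expression": "2 + 2 * 3"}))
```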
### How Meta-Reasoning Works
When you send a message, the AI:
1. **Analyzes** your input to understand intent
2. **Selects** the most appropriate action
3. **Extracts** any required parameters from your input
4. **Executes** the chosen action with parameters
5. **Responds** using the action's output as context
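As a rough illustration of this loop, here is a minimal sketch. The `select_action` function below is a toy keyword matcher standing in for steps 1-3; in OllamaPy itself, selection and parameter extraction are performed by the analysis model, not hand-written rules:

```python
from ollamapy import execute_action

def select_action(user_input):
    """Toy stand-in for the analysis model: selection and parameter
    extraction are hard-coded here purely for illustration."""
    if "square root" in user_input.lower():
        return "square_root", {"number": 16}
    return "null", {}

name, params = select_action("what's the square root of 16?")
if name != "null":
    context = execute_action(name, params)  # step 4: execute the chosen action
    print(context)  # step 5: this string becomes context for the chat model
```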
## Creating Custom Actions
The action system is designed to be easily extensible. Here's a comprehensive guide to creating your own actions.
### Basic Action Structure
```python
from ollamapy.actions import register_action
@register_action(
    name="action_name",
    description="When to use this action",
    vibe_test_phrases=["test phrase 1", "test phrase 2"],  # Optional
    parameters={  # Optional
        "param_name": {
            "type": "string",  # "string" or "number"
            "description": "What this parameter is for",
            "required": True  # True or False
        }
    }
)
def action_name(param_name=None):
    """Your action implementation."""
    # Do something useful
    return "Result string that will be given to the AI as context"
```
### Example 1: Simple Action (No Parameters)
```python
@register_action(
    name="joke",
    description="Use when the user wants to hear a joke or needs cheering up",
    vibe_test_phrases=[
        "tell me a joke",
        "I need a laugh",
        "cheer me up",
        "make me smile"
    ]
)
def joke():
    """Tell a random joke."""
    import random
    jokes = [
        "Why don't scientists trust atoms? Because they make up everything!",
        "Why did the scarecrow win an award? He was outstanding in his field!",
        "Why don't eggs tell jokes? They'd crack each other up!"
    ]
    return random.choice(jokes)
```
### Example 2: Action with Required Parameter
```python
@register_action(
    name="convert_temp",
    description="Convert temperature between Celsius and Fahrenheit",
    vibe_test_phrases=[
        "convert 32 fahrenheit to celsius",
        "what's 100C in fahrenheit?",
        "20 degrees celsius in F"
    ],
    parameters={
        "value": {
            "type": "number",
            "description": "The temperature value to convert",
            "required": True
        },
        "unit": {
            "type": "string",
            "description": "The unit to convert from (C or F)",
            "required": True
        }
    }
)
def convert_temp(value, unit):
    """Convert temperature between units."""
    unit = unit.upper()
    if unit == 'C':
        # Celsius to Fahrenheit
        result = (value * 9/5) + 32
        return f"{value}°C is equal to {result:.1f}°F"
    elif unit == 'F':
        # Fahrenheit to Celsius
        result = (value - 32) * 5/9
        return f"{value}°F is equal to {result:.1f}°C"
    else:
        return f"Unknown unit '{unit}'. Please use 'C' for Celsius or 'F' for Fahrenheit."
```
### Example 3: Action with Optional Parameters
```python
@register_action(
    name="roll_dice",
    description="Roll dice for games or random number generation",
    vibe_test_phrases=[
        "roll a die",
        "roll 2d6",
        "roll three dice",
        "give me a random number between 1 and 6"
    ],
    parameters={
        "count": {
            "type": "number",
            "description": "Number of dice to roll",
            "required": False  # Defaults to 1 if not specified
        },
        "sides": {
            "type": "number",
            "description": "Number of sides on each die",
            "required": False  # Defaults to 6 if not specified
        }
    }
)
def roll_dice(count=1, sides=6):
    """Roll dice and return results."""
    import random

    # Ensure positive integers
    count = max(1, int(count))
    sides = max(2, int(sides))

    if count == 1:
        result = random.randint(1, sides)
        return f"Rolled a d{sides}: {result}"
    else:
        results = [random.randint(1, sides) for _ in range(count)]
        total = sum(results)
        return f"Rolled {count}d{sides}: {results} (Total: {total})"
```
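Once registered, an action can be exercised directly with `execute_action` (see the Python API section), which is a handy sanity check before running vibe tests:

```python
from ollamapy import execute_action

# Assumes the roll_dice action above has been registered by importing its module
print(execute_action("roll_dice", {"count": 2, "sides": 6}))
# e.g. "Rolled 2d6: [3, 5] (Total: 8)"
```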
### Best Practices for Creating Actions
1. **Clear Naming**: Use descriptive names that clearly indicate what the action does
   ```python
   # Good: specific and clear
   name="calculate_compound_interest"

   # Avoid: too generic
   name="calculate"
   ```

2. **Detailed Descriptions**: Help the AI understand when to use your action
   ```python
   # Good: specific keywords and use cases
   description="Calculate compound interest for investments. Keywords: compound interest, investment return, APY"

   # Avoid: too vague
   description="Do math stuff"
   ```

3. **Comprehensive Test Phrases**: Include varied ways users might request the action
   ```python
   vibe_test_phrases=[
       "calculate compound interest on $1000",
       "what's my investment worth after 5 years?",
       "compound interest calculator",
       "how much will I earn with 5% APY?"
   ]
   ```

4. **Parameter Validation**: Always validate and handle edge cases
   ```python
   def safe_divide(numerator, denominator):
       """Safely divide two numbers."""
       if denominator == 0:
           return "Error: Cannot divide by zero!"

       result = numerator / denominator
       return f"{numerator} ÷ {denominator} = {result}"
   ```

5. **Meaningful Return Values**: Return informative strings that help the AI respond
   ```python
   # Good: provides context and formatting
   return f"The calculation result is {result:.2f} ({increase:.1f}% increase)"

   # Avoid: just the raw number
   return str(result)
   ```

6. **Error Handling**: Always handle potential errors gracefully
   ```python
   try:
       result = complex_calculation(param)
       return f"Success: {result}"
   except ValueError as e:
       return f"Error: Invalid input - {str(e)}"
   except Exception as e:
       return f"Unexpected error: {str(e)}"
   ```
### Adding Your Actions to OllamaPy
1. Create a new Python file for your actions (e.g., `my_actions.py`)
2. Import and implement your actions using the patterns above
3. Import your actions module before starting OllamaPy
```python
# my_script.py
from ollamapy import chat
import my_actions  # This registers your actions

# Now start chat with your custom actions available
chat()
```
### Testing Your Actions
Always test your actions with vibe tests to ensure the AI can reliably select them:
```bash
# Run vibe tests including your custom actions
ollamapy --vibetest

# Test with different models
ollamapy --vibetest --model llama3.2:3b -n 5
```
## Vibe Tests
Vibe tests are a built-in feature that evaluates how consistently AI models interpret human intent and choose appropriate actions. These tests help you understand model behavior and compare performance across different models.
### Running Vibe Tests
```bash
# Run vibe tests with default settings
ollamapy --vibetest

# Run with multiple iterations for statistical confidence
ollamapy --vibetest -n 5

# Test a specific model
ollamapy --vibetest --model gemma2:2b -n 3

# Use dual models for testing (analysis + chat)
ollamapy --vibetest --analysis-model gemma2:2b --model llama3.2:7b -n 5

# Test with two-step analysis
ollamapy --vibetest --two-step --model gemma2:2b
```
### Understanding Results
Vibe tests evaluate:
- **Action Selection**: How reliably the AI chooses the correct action
- **Parameter Extraction**: How accurately the AI extracts required parameters
- **Consistency**: How stable the AI's decisions are across multiple runs
Tests pass with a 60% or higher success rate, ensuring reasonable consistency in decision-making.
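The same threshold applies programmatically: `run_vibe_tests` (see the Python API section) returns a success flag you can gate on, for example in a CI script:

```python
from ollamapy import run_vibe_tests

# Truthy when the pass rate clears the 60% threshold
if run_vibe_tests(model="gemma2:2b", iterations=10):
    print("Vibe tests passed")
else:
    print("Below the 60% consistency threshold")
```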
## Chat Commands
While chatting, you can use these built-in commands:
- `quit`, `exit`, `bye` - End the conversation
- `clear` - Clear conversation history
- `help` - Show available commands
- `model` - Display current models (both chat and analysis)
- `models` - List all available models
- `actions` - Show available actions the AI can choose from
## Python API
You can also use OllamaPy programmatically:
```python
from ollamapy import TerminalChat, OllamaClient, execute_action
# Start a chat session programmatically with dual models
chat = TerminalChat(
    model="llama3.2:7b",
    analysis_model="gemma2:2b",
    system_message="You are a helpful assistant",
    two_step_analysis=True
)
chat.run()

# Or use the client directly
client = OllamaClient()
messages = [{"role": "user", "content": "Hello!"}]

for chunk in client.chat_stream("gemma3:4b", messages):
    print(chunk, end="", flush=True)

# Execute actions programmatically
result = execute_action("square_root", {"number": 16})
print(result)  # "The square root of 16 is 4"

# Run vibe tests programmatically with dual models
from ollamapy import run_vibe_tests
success = run_vibe_tests(
    model="llama3.2:7b",
    analysis_model="gemma2:2b",
    iterations=5
)
```
### Available Classes and Functions
- **`TerminalChat`** - High-level terminal chat interface with meta-reasoning
- **`OllamaClient`** - Low-level API client for Ollama
- **`run_vibe_tests()`** - Execute vibe tests programmatically
- **`get_available_actions()`** - Get all registered actions
- **`execute_action()`** - Execute an action with parameters programmatically
- **`register_action()`** - Decorator for creating new actions
## Configuration
OllamaPy connects to Ollama on `http://localhost:11434` by default. If your Ollama instance is running elsewhere:
```python
from ollamapy import OllamaClient
client = OllamaClient(base_url="http://your-ollama-server:11434")
```
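The configured client is then used exactly like the default one in the Python API examples above, for instance streaming a chat against the remote instance:

```python
from ollamapy import OllamaClient

client = OllamaClient(base_url="http://your-ollama-server:11434")
messages = [{"role": "user", "content": "Hello!"}]

# chat_stream yields response chunks as they arrive
for chunk in client.chat_stream("gemma3:4b", messages):
    print(chunk, end="", flush=True)
```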
## Supported Models
OllamaPy works with any model available in Ollama. Popular options include:
- `gemma3:4b` (default) - Fast and capable general-purpose model
- `llama3.2:3b` - Efficient and responsive for most tasks
- `gemma2:2b` - Lightweight model, great for analysis tasks
- `gemma2:9b` - Larger Gemma model for complex tasks
- `codellama:7b` - Specialized for coding tasks
- `mistral:7b` - Strong general-purpose model
To see available models on your system: `ollama list`
## Development
Clone the repository and install in development mode:
```bash
git clone https://github.com/ScienceIsVeryCool/OllamaPy.git
cd OllamaPy
pip install -e ".[dev]"
```
Run tests:
```bash
pytest
```
Run vibe tests:
```bash
pytest -m vibetest
```
## Troubleshooting
### "Ollama server is not running!"
Make sure Ollama is installed and running:
```bash
ollama serve
```
### Model not found
OllamaPy will automatically pull models, but you can also pull manually:
```bash
ollama pull gemma3:4b
```
### Parameter extraction issues
- Try two-step mode: `ollamapy --two-step`
- Use a more capable analysis model: `ollamapy --analysis-model llama3.2:3b`
- Ensure your action descriptions clearly indicate what parameters are needed
### Vibe test failures
- Try different models: `ollamapy --vibetest --model gemma2:9b`
- Use two-step analysis: `ollamapy --vibetest --two-step`
- Increase iterations for better statistics: `ollamapy --vibetest -n 10`
- Check that your test phrases clearly indicate the intended action
## Project Information
- **Version**: 0.6.2
- **License**: GPL-3.0-or-later
- **Author**: The Lazy Artist
- **Python**: >=3.8
- **Dependencies**: requests>=2.25.0
## Links
- [PyPI Package](https://pypi.org/project/ollamapy/)
- [GitHub Repository](https://github.com/ScienceIsVeryCool/OllamaPy)
- [Issues](https://github.com/ScienceIsVeryCool/OllamaPy/issues)
- [Ollama Documentation](https://ollama.ai/)
## License
This project is licensed under the GPL-3.0-or-later license. See the LICENSE file for details.