| Field | Value |
| --- | --- |
| Name | HelpingAI |
| Version | 1.2.1 |
| Summary | Python client library for the HelpingAI API |
| Author email | HelpingAI <Team@helpingai.co> |
| Upload time | 2025-08-05 05:31:08 |
| Requires Python | >=3.7 |
| License | MIT |
| Keywords | ai, api, client, helpingai |
# HelpingAI Python SDK
The official Python library for the [HelpingAI](https://helpingai.co) API - Advanced AI with Emotional Intelligence
[PyPI version](https://badge.fury.io/py/helpingai) · [PyPI project](https://pypi.org/project/helpingai/) · [License: MIT](https://opensource.org/licenses/MIT)
## 🚀 Features
- **OpenAI-Compatible API**: Drop-in replacement with familiar interface
- **Emotional Intelligence**: Advanced AI models with emotional understanding
- **MCP Integration**: Seamless connection to external tools via Model Context Protocol servers
- **Tool Calling Made Easy**: [`@tools`](HelpingAI/tools/core.py:144) decorator for effortless function-to-tool conversion
- **Direct Tool Execution**: Simple `.call()` method for executing tools without registry manipulation
- **Automatic Schema Generation**: Type hint-based JSON schema creation with docstring parsing
- **Universal Tool Compatibility**: Seamless integration with OpenAI-format tools
- **Streaming Support**: Real-time response streaming
- **Comprehensive Error Handling**: Detailed error types and retry mechanisms
- **Type Safety**: Full type hints and IDE support
- **Flexible Configuration**: Environment variables and direct initialization
## 📦 Installation
```bash
pip install HelpingAI
```
### Optional Features
```bash
# Install with MCP (Model Context Protocol) support
pip install HelpingAI[mcp]
```
## 🔑 Authentication
Get your API key from the [HelpingAI Dashboard](https://helpingai.co/dashboard).
### Environment Variable (Recommended)
```bash
export HAI_API_KEY='your-api-key'
```
### Direct Initialization
```python
from HelpingAI import HAI
hai = HAI(api_key='your-api-key')
```
## 🎯 Quick Start
```python
from HelpingAI import HAI
# Initialize client
hai = HAI()
# Create a chat completion
response = hai.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[
        {"role": "system", "content": "You are an expert in emotional intelligence."},
        {"role": "user", "content": "What makes a good leader?"}
    ]
)
print(response.choices[0].message.content)
```
## 🌊 Streaming Responses
```python
# Stream responses in real-time
for chunk in hai.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "Tell me about empathy"}],
    stream=True
):
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
## ⚙️ Advanced Configuration
### Parameter Control
```python
response = hai.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "Write a story about empathy"}],
    temperature=0.7,        # Controls randomness (0-1)
    max_tokens=500,         # Maximum length of response
    top_p=0.9,              # Nucleus sampling parameter
    frequency_penalty=0.3,  # Reduces repetition
    presence_penalty=0.3,   # Encourages new topics
    hide_think=True         # Filter out reasoning blocks
)
```
### Client Configuration
```python
hai = HAI(
    api_key="your-api-key",
    base_url="https://api.helpingai.co/v1",  # Custom base URL
    timeout=30.0,                            # Request timeout
    organization="your-org-id"               # Organization ID
)
```
## 🛡️ Error Handling
```python
from HelpingAI import HAI, HAIError, RateLimitError, InvalidRequestError
import time

hai = HAI()

def make_completion_with_retry(messages, max_retries=3):
    for attempt in range(max_retries):
        try:
            return hai.chat.completions.create(
                model="Dhanishtha-2.0-preview",
                messages=messages
            )
        except RateLimitError as e:
            if attempt == max_retries - 1:
                raise
            time.sleep(e.retry_after or 1)
        except InvalidRequestError as e:
            print(f"Invalid request: {e}")
            raise
        except HAIError as e:
            print(f"API error: {e}")
            raise
```
## 🤖 Available Models
### Dhanishtha-2.0-preview
- **World's First Intermediate Thinking Model**: Multi-phase reasoning with self-correction capabilities
- **Unique Features**: Multiple `<think>...</think>` blocks for transparent reasoning, structured emotional reasoning (SER)
- **Best For**: Complex problem-solving, analytical tasks, educational content, reasoning-heavy applications
### Dhanishtha-2.0-preview-mini
- **Lightweight Reasoning Model**: Efficient version of Dhanishtha-2.0-preview
- **Unique Features**: Same reasoning capabilities in a more compact model
- **Best For**: Faster responses, mobile applications, resource-constrained environments
### Helpingai3-raw
- **Advanced Emotional Intelligence Model**: Enhanced contextual awareness with emotional reasoning capabilities
- **Unique Features**: Advanced language understanding, emotional intelligence, therapeutic exchanges support
- **Best For**: Conversational AI, emotional support, content creation, general-purpose tasks
### HELVETE-X
- **🔞 18+ Role-Playing Model**: Highly advanced NSFW model with unrestricted capabilities
- **Unique Features**: Badass personality, uncensored responses, advanced role-playing abilities
- **Best For**: Adult role-playing scenarios, creative writing, mature content generation
- **⚠️ Warning**: 18+ only - Contains mature/explicit content
### Dhanishtha-NSFW
- **🔞 18+ Intermediate Thinking Model**: Similar to Dhanishtha-2.0-preview but with NSFW capabilities and role-playing specialization
- **Unique Features**: Multiple `<think>...</think>` blocks for transparent reasoning, structured emotional reasoning (SER), plus mature content handling
- **Best For**: Adult conversations, NSFW role-playing scenarios, mature content creation with advanced reasoning
- **⚠️ Warning**: 18+ only - Contains mature/explicit content
```python
# List all available models
models = hai.models.list()
for model in models:
    print(f"Model: {model.id} - {model.description}")

# Get specific model info
model = hai.models.retrieve("Dhanishtha-2.0-preview")
print(f"Model: {model.name}")

# Use Dhanishtha-2.0 for complex reasoning
response = hai.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "Solve this step by step: What's 15% of 240?"}],
    hide_think=False  # Show reasoning process
)
```
## 🛠️ MCP (Model Context Protocol) Integration
Connect to external tools and services through MCP servers for expanded AI capabilities.
### Quick Start with MCP
```python
from HelpingAI import HAI
client = HAI(api_key="your-api-key")
# Configure MCP servers
tools = [
    {
        "mcpServers": {
            "time": {
                "command": "uvx",
                "args": ["mcp-server-time", "--local-timezone=Asia/Shanghai"]
            },
            "fetch": {
                "command": "uvx",
                "args": ["mcp-server-fetch"]
            }
        }
    }
]

# Use MCP tools in chat completion
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What time is it in Shanghai?"}],
    tools=tools
)

print(response.choices[0].message.content)
```
### Supported Server Types
```python
# Stdio-based servers (most common)
{
    "command": "uvx",
    "args": ["mcp-server-time"],
    "env": {"TIMEZONE": "UTC"}  # optional
}

# HTTP SSE servers
{
    "url": "https://api.example.com/mcp",
    "headers": {"Authorization": "Bearer token"},
    "sse_read_timeout": 300
}

# Streamable HTTP servers
{
    "type": "streamable-http",
    "url": "http://localhost:8000/mcp"
}
```
### Popular MCP Servers
- **mcp-server-time** - Time and timezone operations
- **mcp-server-fetch** - HTTP requests and web scraping
- **mcp-server-filesystem** - File system operations
- **mcp-server-memory** - Persistent memory across conversations
- **mcp-server-sqlite** - SQLite database operations
- **Custom servers** - Any MCP-compliant server
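For example, two of the servers listed above can be wired into a single `mcpServers` entry. The `command`/`args` values below mirror the stdio examples earlier in this section; the filesystem server's root-path argument is illustrative and may differ for your installation:

```python
# One tools entry combining two stdio-based MCP servers.
# Command lines follow the stdio examples above; adjust for your setup.
mcp_tools = [{
    "mcpServers": {
        "time": {
            "command": "uvx",
            "args": ["mcp-server-time", "--local-timezone=UTC"],
        },
        "filesystem": {
            "command": "uvx",
            "args": ["mcp-server-filesystem", "/tmp"],  # root path is illustrative
        },
    }
}]

# This list is passed as tools=mcp_tools to chat.completions.create().
print(sorted(mcp_tools[0]["mcpServers"]))  # ['filesystem', 'time']
```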
### Combined Usage
Mix MCP servers with regular tools:
```python
# Regular OpenAI tools
regular_tools = [{
    "type": "function",
    "function": {
        "name": "calculate",
        "description": "Perform calculations",
        "parameters": {
            "type": "object",
            "properties": {
                "expression": {"type": "string"}
            }
        }
    }
}]

# Combined with MCP servers
all_tools = regular_tools + [{
    "mcpServers": {
        "time": {
            "command": "uvx",
            "args": ["mcp-server-time"]
        }
    }
}]

response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "Calculate 2+2 and tell me the current time"}],
    tools=all_tools
)
```
### Installation & Setup
```bash
# Install MCP support
pip install HelpingAI[mcp]
# Or install MCP package separately
pip install -U mcp
```
**Note**: MCP functionality requires the `mcp` package. The SDK provides graceful error handling when MCP is not installed.
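That graceful degradation can be reproduced in your own code with a standard import guard. This is a sketch of the pattern, not the SDK's actual check; `require_mcp` is a hypothetical helper name:

```python
# Detect at startup whether the optional MCP dependency is available.
try:
    import mcp  # optional, installed via: pip install HelpingAI[mcp]
    MCP_AVAILABLE = True
except ImportError:
    MCP_AVAILABLE = False

def require_mcp() -> None:
    """Raise a helpful error instead of a bare ImportError deep in a call stack."""
    if not MCP_AVAILABLE:
        raise RuntimeError(
            "MCP tools were requested but the 'mcp' package is not installed. "
            "Install it with: pip install HelpingAI[mcp]"
        )
```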
## 🔧 Tool Calling with @tools Decorator
Transform any Python function into a powerful AI tool with zero boilerplate using the [`@tools`](HelpingAI/tools/core.py:144) decorator.
### Quick Start with Tools
```python
from HelpingAI import HAI
from HelpingAI.tools import tools, get_tools
@tools
def get_weather(city: str, units: str = "celsius") -> str:
    """Get current weather information for a city.

    Args:
        city: The city name to get weather for
        units: Temperature units (celsius or fahrenheit)
    """
    # Your weather API logic here
    return f"Weather in {city}: 22°{units[0].upper()}"

@tools
def calculate_tip(bill_amount: float, tip_percentage: float = 15.0) -> dict:
    """Calculate tip and total amount for a bill.

    Args:
        bill_amount: The original bill amount
        tip_percentage: Tip percentage (default: 15.0)
    """
    tip = bill_amount * (tip_percentage / 100)
    total = bill_amount + tip
    return {"tip": tip, "total": total, "original": bill_amount}

# Use with chat completions
hai = HAI()
response = hai.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What's the weather in Paris and calculate tip for $50 bill?"}],
    tools=get_tools()  # Automatically includes all @tools functions
)

print(response.choices[0].message.content)
```
### Direct Tool Execution
The HAI client provides a convenient `.call()` method for executing tools directly by name, without going through the registry yourself:
```python
import json

from HelpingAI import HAI
from HelpingAI.tools import tools, get_tools

@tools
def search(query: str, max_results: int = 5):
    """Search the web for information"""
    # Implementation here
    return {"results": [{"title": "Result 1", "url": "https://example.com"}]}

# Create a client instance
client = HAI()

# Directly call a tool by name with arguments
search_result = client.call("search", {"query": "python programming", "max_results": 3})
print("Search results:", search_result)

# You can also execute tools from model responses
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "search for quantum computing"}],
    tools=get_tools(),
    tool_choice="auto"
)

# Extract tool name and arguments from the model's tool call
tool_call = response.choices[0].message.tool_calls[0]
tool_name = tool_call.function.name
tool_args = json.loads(tool_call.function.arguments)

# Execute the tool directly
tool_result = client.call(tool_name, tool_args)
print(f"Result: {tool_result}")
```
### Advanced Tool Features
#### Type System Support
The [`@tools`](HelpingAI/tools/core.py:144) decorator automatically generates JSON schemas from Python type hints:
```python
from typing import List, Optional, Union
from enum import Enum
class Priority(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@tools
def create_task(
    title: str,
    description: Optional[str] = None,
    priority: Priority = Priority.MEDIUM,
    tags: List[str] = None,
    due_date: Union[str, None] = None
) -> dict:
    """Create a new task with advanced type support.

    Args:
        title: Task title
        description: Optional task description
        priority: Task priority level
        tags: List of task tags
        due_date: Due date in YYYY-MM-DD format
    """
    return {
        "title": title,
        "description": description,
        "priority": priority.value,
        "tags": tags or [],
        "due_date": due_date
    }
```
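Hints like these map naturally onto JSON schema. As an illustration (not the decorator's verbatim output), a plausible parameters schema for `create_task` would look like:

```python
# Illustrative parameters schema for create_task; the decorator's
# exact output may differ in detail.
create_task_params = {
    "type": "object",
    "properties": {
        "title": {"type": "string", "description": "Task title"},
        "description": {"type": "string", "description": "Optional task description"},
        "priority": {"type": "string", "enum": ["low", "medium", "high"]},
        "tags": {"type": "array", "items": {"type": "string"}},
        "due_date": {"type": "string", "description": "Due date in YYYY-MM-DD format"},
    },
    # Only parameters without defaults are required.
    "required": ["title"],
}
```

Note how the `Priority` enum becomes a string `enum`, `List[str]` becomes an array of strings, and `Optional`/defaulted parameters drop out of `required`.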
#### Tool Registry Management
```python
from HelpingAI.tools import get_tools, get_registry, clear_registry
# Get specific tools
weather_tools = get_tools(["get_weather", "calculate_tip"])
# Registry inspection
registry = get_registry()
print(f"Registered tools: {registry.list_tool_names()}")
print(f"Total tools: {registry.size()}")
# Check if tool exists
if registry.has_tool("get_weather"):
    weather_tool = registry.get_tool("get_weather")
    print(f"Tool: {weather_tool.name} - {weather_tool.description}")
```
#### Universal Tool Compatibility
Seamlessly combine [`@tools`](HelpingAI/tools/core.py:144) functions with existing OpenAI-format tools:
```python
from HelpingAI.tools import merge_tool_lists, ensure_tool_format
# Existing OpenAI-format tools
legacy_tools = [{
    "type": "function",
    "function": {
        "name": "search_web",
        "description": "Search the web for information",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search query"}
            },
            "required": ["query"]
        }
    }
}]

# Combine with @tools functions
combined_tools = merge_tool_lists(
    legacy_tools,  # Existing tools
    get_tools(),   # @tools functions
    "math"         # Category name (if you have categorized tools)
)

# Use in chat completion
response = hai.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "Help me with weather, calculations, and web search"}],
    tools=combined_tools
)
```
### Error Handling & Best Practices
```python
from HelpingAI import HAI
from HelpingAI.tools import tools, ToolExecutionError, SchemaValidationError, ToolRegistrationError

@tools
def divide_numbers(a: float, b: float) -> float:
    """Divide two numbers safely.

    Args:
        a: The dividend
        b: The divisor
    """
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

# Handle tool execution in your application
def execute_tool_safely(tool_name: str, arguments: dict):
    try:
        # You can use the direct call method instead of registry manipulation
        hai = HAI()
        return hai.call(tool_name, arguments)
    except ToolExecutionError as e:
        print(f"Tool execution failed: {e}")
        return {"error": str(e)}
    except SchemaValidationError as e:
        print(f"Invalid arguments: {e}")
        return {"error": "Invalid parameters provided"}
    except ToolRegistrationError as e:
        print(f"Tool registration issue: {e}")
        return {"error": "Tool configuration error"}

# Example usage
result = execute_tool_safely("divide_numbers", {"a": 10, "b": 2})
print(result)  # 5.0

error_result = execute_tool_safely("divide_numbers", {"a": 10, "b": 0})
print(error_result)  # {"error": "Cannot divide by zero"}
```
### Migration from Legacy Tools
Transform your existing tool definitions with minimal effort:
**Before (Manual Schema):**
```python
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get weather information",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "units": {"type": "string", "description": "Temperature units", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["city"]
        }
    }
}]
```
**After (@tools Decorator):**
```python
from typing import Literal

@tools
def get_weather(city: str, units: Literal["celsius", "fahrenheit"] = "celsius") -> str:
    """Get weather information

    Args:
        city: City name
        units: Temperature units
    """
    # Implementation here
    pass
```
The [`@tools`](HelpingAI/tools/core.py:144) decorator automatically:
- ✅ Generates JSON schema from type hints
- ✅ Extracts descriptions from docstrings
- ✅ Handles required/optional parameters
- ✅ Supports multiple docstring formats (Google, Sphinx, NumPy)
- ✅ Provides comprehensive error handling
- ✅ Maintains thread-safe tool registry
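The first of these steps, mapping type hints to a JSON schema, can be sketched with the standard library alone. This is a simplified illustration of the idea, not the SDK's implementation; `sketch_schema` is a hypothetical helper:

```python
import inspect
import typing

# Minimal mapping from Python hints to JSON schema types; the real
# decorator also handles enums, Optionals, docstring parsing, and more.
_JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}

def sketch_schema(fn) -> dict:
    """Build a bare-bones parameters schema from a function's signature."""
    sig = inspect.signature(fn)
    hints = typing.get_type_hints(fn)
    props, required = {}, []
    for name, param in sig.parameters.items():
        props[name] = {"type": _JSON_TYPES.get(hints.get(name), "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default => required parameter
    return {"type": "object", "properties": props, "required": required}

def get_weather(city: str, units: str = "celsius") -> str:
    """Get current weather information for a city."""
    return f"Weather in {city}"

schema = sketch_schema(get_weather)
print(schema["required"])  # ['city']
```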
## 📚 Documentation
Comprehensive documentation is available:
- [📖 Getting Started Guide](docs/getting_started.md) - Installation and basic usage
- [🔧 API Reference](docs/api_reference.md) - Complete API documentation
- [🛠️ Tool Calling Guide](docs/tool_calling.md) - Creating and using AI-callable tools
- [🔌 MCP Integration Guide](docs/mcp_integration.md) - Model Context Protocol integration
- [💡 Examples](docs/examples.md) - Code examples and use cases
- [❓ FAQ](docs/faq.md) - Frequently asked questions
## 🔧 Requirements
- **Python**: 3.7-3.14
- **Dependencies**:
- `requests` - HTTP client
- `typing_extensions` - Type hints support
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🆘 Support & Community
- **Issues**: [GitHub Issues](https://github.com/HelpingAI/HelpingAI-python/issues)
- **Documentation**: [HelpingAI Docs](https://helpingai.co/docs)
- **Dashboard**: [HelpingAI Dashboard](https://helpingai.co/dashboard)
- **Email**: Team@helpingai.co
**Built with ❤️ by the HelpingAI Team**
*Empowering AI with Emotional Intelligence*
Raw data

```json
{
    "_id": null,
    "home_page": null,
    "name": "HelpingAI",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.7",
    "maintainer_email": null,
    "keywords": "ai, api, client, helpingai",
    "author": null,
    "author_email": "HelpingAI <Team@helpingai.co>",
    "download_url": "https://files.pythonhosted.org/packages/04/04/182761be330b4ce45922f23223e87a8bbafc1636781184da912d5fd76d6b/helpingai-1.2.1.tar.gz",
    "platform": null
}
```
hints\r\n- \u2705 Extracts descriptions from docstrings \r\n- \u2705 Handles required/optional parameters\r\n- \u2705 Supports multiple docstring formats (Google, Sphinx, NumPy)\r\n- \u2705 Provides comprehensive error handling\r\n- \u2705 Maintains thread-safe tool registry\r\n\r\n\r\n## \ud83d\udcda Documentation\r\n\r\nComprehensive documentation is available:\r\n\r\n- [\ud83d\udcd6 Getting Started Guide](docs/getting_started.md) - Installation and basic usage\r\n- [\ud83d\udd27 API Reference](docs/api_reference.md) - Complete API documentation\r\n- [\ud83d\udee0\ufe0f Tool Calling Guide](docs/tool_calling.md) - Creating and using AI-callable tools\r\n- [\ud83d\udd0c MCP Integration Guide](docs/mcp_integration.md) - Model Context Protocol integration\r\n- [\ud83d\udca1 Examples](docs/examples.md) - Code examples and use cases\r\n- [\u2753 FAQ](docs/faq.md) - Frequently asked questions\r\n\r\n\r\n## \ud83d\udd27 Requirements\r\n\r\n- **Python**: 3.7-3.14\r\n- **Dependencies**: \r\n - `requests` - HTTP client\r\n - `typing_extensions` - Type hints support\r\n\r\n## \ud83e\udd1d Contributing\r\n\r\nWe welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.\r\n\r\n1. Fork the repository\r\n2. Create a feature branch\r\n3. Make your changes\r\n4. Add tests if applicable\r\n5. Submit a pull request\r\n\r\n## \ud83d\udcc4 License\r\n\r\nThis project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.\r\n\r\n## \ud83c\udd98 Support & Community\r\n\r\n- **Issues**: [GitHub Issues](https://github.com/HelpingAI/HelpingAI-python/issues)\r\n- **Documentation**: [HelpingAI Docs](https://helpingai.co/docs)\r\n- **Dashboard**: [HelpingAI Dashboard](https://helpingai.co/dashboard)\r\n- **Email**: Team@helpingai.co\r\n\r\n\r\n**Built with \u2764\ufe0f by the HelpingAI Team**\r\n\r\n*Empowering AI with Emotional Intelligence*\r\n",
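As a postscript to the tool-calling examples above: those examples execute the tool but stop short of sending its output back to the model for a final answer. With an OpenAI-compatible API, the usual pattern is to append the assistant's tool call plus a `tool`-role message carrying the result, then call `chat.completions.create` again. The sketch below builds that follow-up message list. It is self-contained and makes no network calls; `build_followup_messages` is a hypothetical helper (not part of the SDK), the message shapes follow the OpenAI convention this library mirrors, and plain dicts stand in for the attribute-style tool-call objects real responses expose.

```python
import json

def build_followup_messages(messages, tool_call, tool_result):
    """Append an assistant tool call and its result in OpenAI-style format.

    Hypothetical helper for illustration; not part of the HelpingAI SDK.
    """
    return messages + [
        # Echo the assistant turn that requested the tool call
        {"role": "assistant", "content": None, "tool_calls": [tool_call]},
        # Attach the tool's output, linked back via tool_call_id
        {"role": "tool", "tool_call_id": tool_call["id"], "content": json.dumps(tool_result)},
    ]

# Simulated tool call, shaped like response.choices[0].message.tool_calls[0]
tool_call = {
    "id": "call_0",
    "type": "function",
    "function": {"name": "get_weather", "arguments": json.dumps({"city": "Paris"})},
}
tool_result = {"city": "Paris", "temperature": "18°C"}

messages = [{"role": "user", "content": "What's the weather in Paris?"}]
followup = build_followup_messages(messages, tool_call, tool_result)
print(followup[-1]["role"])  # tool
```

Passing `followup` as `messages` in a second `chat.completions.create` call (same model, same `tools` list) lets the model compose its final reply from the tool output.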