metorial-openai-compatible

Name: metorial-openai-compatible
Version: 1.0.4
Summary: OpenAI-compatible provider base for Metorial
Author: Metorial Team <support@metorial.com>
License: MIT
Requires Python: >=3.10
Keywords: ai, compatible, llm, metorial, openai
Homepage: https://metorial.com
Documentation: https://metorial.com/docs
Repository: https://github.com/metorial/metorial-python
Upload time: 2025-10-30 05:03:24

# metorial-openai-compatible

Base package for OpenAI-compatible provider integrations for Metorial. This package provides shared functionality for providers that use OpenAI's function calling format.

## Installation

```bash
pip install metorial-openai-compatible
# or
uv add metorial-openai-compatible
# or
poetry add metorial-openai-compatible
```

## Features

- šŸ”§ **OpenAI Format**: Standard OpenAI function calling format
- šŸ“” **Session Management**: Automatic tool lifecycle handling
- šŸ”„ **Format Conversion**: Converts Metorial tools to OpenAI function format
- ⚔ **Async Support**: Full async/await support

## Usage

### Quick Start (Recommended)

This package serves as a base for provider-specific implementations. For end-user applications, use a provider-specific package such as `metorial-xai`, `metorial-deepseek`, or `metorial-togetherai`.

### Direct Usage (Advanced)

```python
import asyncio
from openai import AsyncOpenAI
from metorial import Metorial
from metorial_openai_compatible import MetorialOpenAICompatibleSession

async def main():
  # Initialize clients
  metorial = Metorial(api_key="...your-metorial-api-key...") # async by default
  compatible_client = AsyncOpenAI(
    api_key="...your-provider-api-key...", 
    base_url="https://your-provider-url/v1"
  )
  
  # Run with automatic session management
  response = await metorial.run(
    "What are the latest commits in the metorial/websocket-explorer repository?",
    "...your-mcp-server-deployment-id...", # can also be list
    compatible_client,
    model="your-model-name",
    max_iterations=25
  )
  
  print("Response:", response)

asyncio.run(main())
```

### Streaming Chat

```python
import asyncio
from openai import AsyncOpenAI
from metorial import Metorial
from metorial.types import StreamEventType

async def example():
  # Initialize clients
  metorial = Metorial(api_key="...your-metorial-api-key...")
  compatible_client = AsyncOpenAI(
    api_key="...your-provider-api-key...",
    base_url="https://your-provider-url/v1"
  )
  
  # Streaming chat with real-time responses
  async def stream_action(session):
    messages = [
      {"role": "user", "content": "Explain quantum computing"}
    ]
    
    async for event in metorial.stream(
      compatible_client, session, messages, 
      model="your-model-name",
      max_iterations=25
    ):
      if event.type == StreamEventType.CONTENT:
        print(f"šŸ¤– {event.content}", end="", flush=True)
      elif event.type == StreamEventType.TOOL_CALL:
        print(f"\nšŸ”§ Executing {len(event.tool_calls)} tool(s)...")
      elif event.type == StreamEventType.COMPLETE:
        print(f"\nāœ… Complete!")
  
  await metorial.with_session("...your-server-deployment-id...", stream_action)

asyncio.run(example())
```

### Advanced Usage with Session Management

```python
import asyncio
from metorial import Metorial
from metorial_openai_compatible import MetorialOpenAICompatibleSession

async def main():
  # Initialize Metorial
  metorial = Metorial(api_key="...your-metorial-api-key...")
  
  # Create session with your server deployments
  async with metorial.session(["...your-server-deployment-id..."]) as session:
    # Create OpenAI-compatible wrapper
    openai_session = MetorialOpenAICompatibleSession(
      session.tool_manager,
      with_strict=True  # Enable strict mode
    )
    
    # Use with any OpenAI-compatible client
    tools = openai_session.tools
    
    # Handle tool calls returned in your provider's response
    tool_responses = await openai_session.call_tools(tool_calls)  # `tool_calls` comes from the model's reply

asyncio.run(main())
```

### As Base Class

This package is primarily used as a base for provider-specific packages:

```python
from metorial_openai_compatible import MetorialOpenAICompatibleSession

class MyProviderSession(MetorialOpenAICompatibleSession):
  def __init__(self, tool_mgr):
    # Configure strict mode based on provider capabilities
    super().__init__(tool_mgr, with_strict=False)
```

### Using Convenience Functions

```python
from metorial_openai_compatible import build_openai_compatible_tools, call_openai_compatible_tools

async def example():
  # Get tools in OpenAI format (`tool_manager` comes from a Metorial session)
  tools = build_openai_compatible_tools(tool_manager, with_strict=True)

  # Call tools from an OpenAI-compatible response (`tool_calls` comes from the model's reply)
  tool_messages = await call_openai_compatible_tools(tool_manager, tool_calls)
```
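
The returned tool messages are plain chat messages, so they can be appended to the conversation and sent back to the model. A minimal sketch of one full round trip, assuming an `AsyncOpenAI`-style `compatible_client` and a `tool_manager` from an active Metorial session (the model name is a placeholder):

```python
from metorial_openai_compatible import build_openai_compatible_tools, call_openai_compatible_tools

async def round_trip(compatible_client, tool_manager):
  # Build the tool definitions once per session
  tools = build_openai_compatible_tools(tool_manager, with_strict=True)
  messages = [{"role": "user", "content": "What are the latest commits?"}]

  response = await compatible_client.chat.completions.create(
    model="your-model-name",
    messages=messages,
    tools=tools,
  )

  assistant = response.choices[0].message
  if assistant.tool_calls:
    messages.append(assistant)  # keep the assistant turn that requested the tools
    tool_messages = await call_openai_compatible_tools(tool_manager, assistant.tool_calls)
    messages.extend(tool_messages)  # hand the tool results back to the model

    response = await compatible_client.chat.completions.create(
      model="your-model-name",
      messages=messages,
      tools=tools,
    )

  print(response.choices[0].message.content)
```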

## API Reference

### `MetorialOpenAICompatibleSession`

Main session class for OpenAI-compatible integration.

```python
session = MetorialOpenAICompatibleSession(tool_manager, with_strict=False)
```

**Parameters:**
- `tool_manager`: Metorial tool manager instance
- `with_strict`: Enable strict parameter validation (default: False)

**Properties:**
- `tools`: List of tools in OpenAI function calling format

**Methods:**
- `async call_tools(tool_calls)`: Execute tool calls and return tool messages
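
For orientation, a minimal sketch of how these pieces typically plug into an OpenAI-style request; `compatible_client`, `session`, and the model name are assumed from the earlier examples:

```python
from metorial_openai_compatible import MetorialOpenAICompatibleSession

async def chat_once(compatible_client, session):
  # `session` is an active Metorial session (see Advanced Usage above)
  openai_session = MetorialOpenAICompatibleSession(session.tool_manager, with_strict=False)

  response = await compatible_client.chat.completions.create(
    model="your-model-name",  # placeholder
    messages=[{"role": "user", "content": "..."}],
    tools=openai_session.tools,  # `tools` property: OpenAI function-calling format
  )

  tool_calls = response.choices[0].message.tool_calls
  if tool_calls:
    return await openai_session.call_tools(tool_calls)  # executed through Metorial
```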

### `build_openai_compatible_tools(tool_mgr, with_strict=False)`

Build OpenAI-compatible tool definitions.

**Parameters:**
- `tool_mgr`: Tool manager instance
- `with_strict`: Enable strict mode (default: False)

**Returns:** List of tool definitions in OpenAI format

### `call_openai_compatible_tools(tool_mgr, tool_calls)`

Execute tool calls from OpenAI-compatible response.

**Returns:** List of tool messages
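
Based on OpenAI's standard tool-result format, each returned message should look roughly like the following (illustrative values; exact fields are determined by the package):

```python
{
  "role": "tool",
  "tool_call_id": "call_abc123",  # id of the tool call being answered
  "content": "...tool output serialized as text..."
}
```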

## Tool Format

Tools are converted to OpenAI's function calling format:

```python
{
  "type": "function",
  "function": {
    "name": "tool_name",
    "description": "Tool description",
    "parameters": {
      "type": "object",
      "properties": {...},
      "required": [...]
    },
    "strict": True  # Only if with_strict=True
  }
}
```
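
As a concrete (hypothetical) illustration, a commit-listing tool exposed by an MCP server might be converted into:

```python
{
  "type": "function",
  "function": {
    "name": "list_commits",
    "description": "List recent commits in a repository",
    "parameters": {
      "type": "object",
      "properties": {
        "repository": {"type": "string"},
        "limit": {"type": "integer"}
      },
      "required": ["repository"]
    }
  }
}
```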

## Strict Mode

When `with_strict=True`, a `strict` field is added to each function definition for providers that support strict parameter validation (such as OpenAI and xAI).
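
A quick sketch of the difference, reusing the imports and `tool_manager` from the convenience-function example above and assuming the dict-shaped definitions shown in the Tool Format section:

```python
default_tools = build_openai_compatible_tools(tool_manager)
strict_tools = build_openai_compatible_tools(tool_manager, with_strict=True)

print("strict" in default_tools[0]["function"])  # False: no strict field by default
print(strict_tools[0]["function"]["strict"])     # True: strict validation requested
```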

## Provider Implementations

This package serves as the base for:

- **metorial-xai**: xAI (Grok) with strict mode enabled
- **metorial-deepseek**: DeepSeek without strict mode
- **metorial-togetherai**: Together AI without strict mode
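
A sketch of how such providers can sit on top of this base class (the class names here are illustrative, not the actual exports of those packages):

```python
from metorial_openai_compatible import MetorialOpenAICompatibleSession

class XAISession(MetorialOpenAICompatibleSession):
  def __init__(self, tool_mgr):
    super().__init__(tool_mgr, with_strict=True)   # xAI supports strict validation

class DeepSeekSession(MetorialOpenAICompatibleSession):
  def __init__(self, tool_mgr):
    super().__init__(tool_mgr, with_strict=False)  # DeepSeek does not
```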

## Error Handling

```python
try:
    tool_messages = await session.call_tools(tool_calls)
except Exception as e:
    print(f"Tool execution failed: {e}")
```

Tool errors are returned as tool messages with error content.

## License

MIT License - see [LICENSE](../../LICENSE) file for details.

            
