python-ai-sdk

- **Name:** python-ai-sdk
- **Version:** 0.0.3
- **Home page:** https://github.com/Daviduche03/ai.py
- **Summary:** A Python AI SDK inspired by the Vercel AI SDK with streaming support and multi-provider integration.
- **Upload time:** 2025-08-14 11:20:11
- **Author:** David uche
- **Requires Python:** <4.0,>=3.9
- **License:** MIT
- **Keywords:** ai, openai, google, streaming, sdk, llm
# Python AI SDK

A high-performance Python AI SDK inspired by Vercel AI SDK, built for production backends with streaming, multi-provider support, and type safety.

## Features

- **Streaming-first** - Real-time text generation with built-in streaming support
- **Multi-provider** - OpenAI, Google Gemini, and extensible architecture
- **Tool calling** - Server-side and client-side function execution
- **FastAPI ready** - Drop-in integration for web APIs
- **Type safe** - Full Pydantic validation and TypeScript-like experience
- **Analytics** - Built-in callbacks for monitoring and logging

## Installation

```bash
# Basic installation
pip install python-ai-sdk

# With FastAPI support
pip install python-ai-sdk[fastapi]

# With all optional dependencies
pip install python-ai-sdk[all]
```

## Quick Start
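
All the snippets below use `await`, so run them inside an async function. A minimal runner sketch (assuming `OPENAI_API_KEY` is set):

```python
import asyncio

from ai.core import generateText
from ai.model import openai

async def main():
    # Any of the await-based examples below can live in a function like this
    response = await generateText(
        model=openai("gpt-4"),
        systemMessage="You are a helpful assistant.",
        prompt="Say hello.",
    )
    print(response)

asyncio.run(main())
```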

### Basic Text Generation

```python
from ai.core import generateText
from ai.model import openai
import os

# Set your API key
os.environ["OPENAI_API_KEY"] = "your-api-key"

# Generate text
response = await generateText(
    model=openai("gpt-4"),
    systemMessage="You are a helpful assistant.",
    prompt="What is the capital of France?"
)

print(response)  # "The capital of France is Paris."
```

### Streaming Text

```python
import json

from ai.core import streamText
from ai.model import google

async for chunk in streamText(
    model=google("gemini-2.0-flash-exp"),
    systemMessage="You are a creative writer.",
    prompt="Write a short story about a robot."
):
    # Chunks arrive as '0:"text content"\n' - a prefix, then a JSON string
    if chunk.startswith("0:"):
        text = json.loads(chunk[2:])
        print(text, end="", flush=True)
```

## Image Support

### Image from URL (Vercel AI SDK format)

```python
from ai import generateText, openai

# Simple and clean Vercel AI SDK format
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What do you see in this image?"},
        {
            "type": "image",
            "image": "https://example.com/image.jpg"  # URL string
        }
    ]
}

response = await generateText(
    model=openai("gpt-4o"),  # Vision-capable model
    systemMessage="You are an expert image analyst.",
    messages=[message]
)
```

### Image from File (Binary)

```python
# Binary image data, read with open() (the Vercel AI SDK's fs example, in Python)
with open("image.jpg", "rb") as f:
    image_bytes = f.read()

message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image"},
        {
            "type": "image", 
            "image": image_bytes  # Raw bytes
        }
    ]
}
```

### Base64 Images

```python
import base64

# Base64 string (no data URL prefix needed); assuming a local file as the source
with open("image.jpg", "rb") as f:
    base64_string = base64.b64encode(f.read()).decode()

message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What's in this image?"},
        {
            "type": "image",
            "image": base64_string  # Just the base64 data
        }
    ]
}
```

### Helper Functions

```python
from ai.image import image_from_file, image_from_url

# Helper functions create the proper format
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Analyze these images:"},
        image_from_file("local_image.jpg"),      # Binary format
        image_from_url("https://example.com/img.png")  # URL format
    ]
}
```

### Multiple Images

```python
# Multiple images in one message
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Compare these images:"},
        {
            "type": "image",
            "image": "https://example.com/chart1.png"
        },
        {"type": "text", "text": "And this one:"},
        {
            "type": "image",
            "image": "https://example.com/chart2.png"
        },
        {"type": "text", "text": "What are the differences?"}
    ]
}
```

## Embeddings

### Single Text Embedding

```python
from ai import embed, openai_embedding

# Create embedding model
model = openai_embedding("text-embedding-3-small")

# Generate embedding
embedding = await embed(model, "Hello, world!")
print(f"Dimensions: {len(embedding)}")  # 1536 for text-embedding-3-small
print(f"First 5 values: {embedding[:5]}")
```

### Batch Embeddings

```python
from ai import embedMany, openai_embedding

model = openai_embedding("text-embedding-3-small")

texts = [
    "The cat sits on the mat",
    "Python is a programming language", 
    "Machine learning is fascinating"
]

embeddings = await embedMany(model, texts)
print(f"Generated {len(embeddings)} embeddings")
```

### Semantic Similarity

```python
import math

def cosine_similarity(a, b):
    dot_product = sum(x * y for x, y in zip(a, b))
    magnitude_a = math.sqrt(sum(x * x for x in a))
    magnitude_b = math.sqrt(sum(x * x for x in b))
    return dot_product / (magnitude_a * magnitude_b)

# Compare texts
model = openai_embedding("text-embedding-3-small")
embeddings = await embedMany(model, [
    "The cat sits on the mat",
    "A feline rests on the rug"  # Similar meaning
])

similarity = cosine_similarity(embeddings[0], embeddings[1])
print(f"Similarity: {similarity:.4f}")  # High similarity score
```

## Tool Calling

### Server-side Tools

```python
from ai.core import generateText
from ai.model import openai
from ai.tools import Tool
from pydantic import BaseModel, Field

class WeatherParams(BaseModel):
    location: str = Field(..., description="City and country")

def get_weather(params: WeatherParams):
    # Your weather API logic here
    return {"location": params.location, "temperature": 22, "condition": "sunny"}

weather_tool = Tool(
    name="get_weather",
    description="Get current weather for a location",
    parameters=WeatherParams,
    execute=get_weather
)

response = await generateText(
    model=openai("gpt-4"),
    systemMessage="You can check weather for users.",
    prompt="What's the weather in Tokyo?",
    tools=[weather_tool]
)
```

### Client-side Tools

```python
import json

from ai.tools import Tool
from pydantic import BaseModel, Field

class CallParams(BaseModel):
    # Illustrative schema; the original snippet assumed CallParams was defined
    name: str = Field(..., description="Who to call")
    phone_number: str = Field(..., description="Number to dial")

# Tool without an execute function - handled by the client
call_tool = Tool(
    name="make_call",
    description="Make a phone call",
    parameters=CallParams,
    # No execute - the client handles this
)

# The AI will return tool calls for client to execute
async for chunk in streamText(
    model=openai("gpt-4"),
    systemMessage="You can make phone calls for users.",
    prompt="Call John at 555-0123",
    tools=[call_tool]
):
    if chunk.startswith("9:"):  # Tool call
        tool_call = json.loads(chunk[2:])
        print(f"Tool: {tool_call['toolName']}")
        print(f"Args: {tool_call['args']}")
```

## FastAPI Integration

```python
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse
from ai import embed, generateText
from ai.core import streamText
from ai.model import openai, openai_embedding

app = FastAPI()

@app.post("/api/chat")
async def chat(request: Request):
    body = await request.json()
    messages = body.get("messages", [])
    
    return StreamingResponse(
        streamText(
            model=openai("gpt-4"),
            systemMessage="You are a helpful assistant.",
            messages=messages,
            tools=[weather_tool]  # Optional - weather_tool from the Tool Calling section above
        ),
        media_type="text/plain; charset=utf-8"
    )

@app.post("/api/generate")
async def generate(request: Request):
    body = await request.json()
    
    response = await generateText(
        model=openai("gpt-4"),
        systemMessage="You are a helpful assistant.",
        prompt=body["prompt"]
    )
    
    return {"response": response}

@app.post("/api/embed")
async def create_embedding(request: Request):
    body = await request.json()
    
    embedding = await embed(
        model=openai_embedding("text-embedding-3-small"),
        value=body["text"]
    )
    
    return {"embedding": embedding, "dimensions": len(embedding)}

@app.post("/api/analyze-image")
async def analyze_image(request: Request):
    body = await request.json()
    
    # create_image_message is a user-defined helper that builds the
    # image message format shown in the Image Support section above
    message = create_image_message(
        body.get("prompt", "What do you see?"),
        body["image_url"]
    )
    
    response = await generateText(
        model=openai("gpt-4o"),
        systemMessage="You are an image analysis expert.",
        messages=[message]
    )
    
    return {"analysis": response}
```
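
To consume the streaming endpoint, any HTTP client that supports response streaming works; here is a minimal sketch using `httpx` (the endpoint URL is illustrative):

```python
import asyncio
import json

import httpx

async def chat():
    async with httpx.AsyncClient() as client:
        # Stream the response body line by line as chunks arrive
        async with client.stream(
            "POST",
            "http://localhost:8000/api/chat",
            json={"messages": [{"role": "user", "content": "Hello!"}]},
            timeout=None,
        ) as response:
            async for line in response.aiter_lines():
                if line.startswith("0:"):  # text chunk (JSON string payload)
                    print(json.loads(line[2:]), end="", flush=True)

asyncio.run(chat())
```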

## Analytics & Monitoring

```python
from ai.types import OnFinishResult

async def analytics_callback(result: OnFinishResult):
    print(f"Tokens used: {result['usage']['totalTokens']}")
    print(f"Finish reason: {result['finishReason']}")
    print(f"Tool calls: {len(result['toolCalls'])}")
    
    # Send to your analytics service
    # await send_to_analytics(result)

response = await generateText(
    model=openai("gpt-4"),
    systemMessage="You are helpful.",
    prompt="Hello!",
    onFinish=analytics_callback
)
```

## Supported Providers

### OpenAI

```python
from ai.model import openai, openai_embedding
import os

os.environ["OPENAI_API_KEY"] = "your-key"

# Chat models
model = openai("gpt-4o")   # GPT-4o
model = openai("gpt-4.1")  # GPT-4.1

# Embedding models
embed_model = openai_embedding("text-embedding-3-small")  # 1536 dimensions
embed_model = openai_embedding("text-embedding-3-large")  # 3072 dimensions
```

### Google Gemini

```python
from ai.model import google, google_embedding
import os

os.environ["GOOGLE_API_KEY"] = "your-key"

# Chat models
model = google("gemini-2.5-pro")  # Latest Gemini

# Embedding models
embed_model = google_embedding("text-embedding-004")  # Latest embedding model
```

## Message Format

```python
messages = [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi there!"},
    {"role": "user", "content": "How are you?"}
]

response = await generateText(
    model=openai("gpt-4"),
    messages=messages  # Use messages instead of prompt
)
```

## Streaming Response Format

The streaming API returns formatted chunks:

```
f:{"messageId": "msg-abc123"}           # Message start
0:"Hello"                              # Text chunk
0:" there!"                            # More text
9:{"toolCallId":"call-1","toolName":"weather","args":{...}}  # Tool call
a:{"toolCallId":"call-1","result":"sunny"}  # Tool result
e:{"finishReason":"stop","usage":{...}} # Finish event
d:{"finishReason":"stop","usage":{...}} # Done
```
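
Each chunk is a one-character prefix, a colon, and a JSON payload, so a small parser covers every chunk type. A minimal sketch:

```python
import json

def parse_chunk(chunk: str):
    """Split a stream chunk into its type prefix and decoded JSON payload."""
    prefix, _, payload = chunk.rstrip("\n").partition(":")
    return prefix, json.loads(payload)

# Prefix "0" is a text delta, "9" a tool call, "a" a tool result,
# and "e"/"d" carry the finish reason and token usage.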

## Configuration

### Environment Variables

```bash
# Required for respective providers
OPENAI_API_KEY=your-openai-key
GOOGLE_API_KEY=your-google-key

# Optional
OPENAI_BASE_URL=https://api.openai.com/v1  # Custom OpenAI endpoint
```
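
A quick startup check can surface a missing key before the first request (a sketch; each key is only needed for its provider):

```python
import os

# Warn about unset provider keys rather than failing, since only the
# providers you actually call need their key present
missing = [k for k in ("OPENAI_API_KEY", "GOOGLE_API_KEY") if not os.getenv(k)]
if missing:
    print(f"Warning: {', '.join(missing)} not set; those providers will fail")
```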

### Custom Client Configuration

```python
import openai
from ai.model import LanguageModel

# Custom OpenAI client
custom_client = openai.AsyncOpenAI(
    api_key="your-key",
    base_url="https://your-proxy.com/v1",
    timeout=30.0
)

model = LanguageModel(
    provider="openai",
    model="gpt-4",
    client=custom_client
)
```
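
The resulting `model` can then be passed to `generateText` or `streamText` like any provider-built model.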

## Development

```bash
# Clone the repository
git clone https://github.com/Daviduche03/ai.py
cd ai.py

# Install with Poetry
poetry install

# Install with pip (development mode)
pip install -e .

# Run tests
poetry run pytest

# Format code
poetry run ruff format

# Type checking
poetry run mypy ai/

# Run example
cd examples
python -m uvicorn fastapi_app:app --reload
```

## Examples

Check out the `/examples` directory for:
- FastAPI chat application with streaming
- Tool calling examples (server-side & client-side)
- Image analysis and vision capabilities
- Embedding generation and similarity search
- Multi-provider usage patterns
- Semantic search implementations (a minimal sketch follows below)
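
As a taste of the semantic search pattern, here is a minimal sketch combining the documented `embed`/`embedMany` calls with the cosine similarity helper from earlier (the corpus and query are illustrative):

```python
import math

from ai import embed, embedMany, openai_embedding

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

async def search(query: str, corpus: list[str], top_k: int = 2) -> list[str]:
    model = openai_embedding("text-embedding-3-small")
    doc_vecs = await embedMany(model, corpus)  # one batched call for the corpus
    query_vec = await embed(model, query)      # single call for the query
    ranked = sorted(
        zip(corpus, doc_vecs),
        key=lambda pair: cosine_similarity(query_vec, pair[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:top_k]]
```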

## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests
5. Submit a pull request

## License

MIT License - see [LICENSE](LICENSE) file for details.

## Links

- [GitHub Repository](https://github.com/Daviduche03/ai.py)
- [PyPI Package](https://pypi.org/project/python-ai-sdk/)
- [Issues](https://github.com/Daviduche03/ai.py/issues)

            
