# AI Proxy Core
A minimal Python package providing reusable AI service handlers for Gemini and other LLMs. No web framework dependencies - just the core logic.
## Installation
```bash
pip install ai-proxy-core
```
Or install from source:
```bash
git clone https://github.com/ebowwa/ai-proxy-core.git
cd ai-proxy-core
pip install -e .
```
## Usage
### Completions Handler
```python
from ai_proxy_core import CompletionsHandler

# Initialize handler
handler = CompletionsHandler(api_key="your-gemini-api-key")

# Create completion
response = await handler.create_completion(
    messages=[
        {"role": "user", "content": "Hello, how are you?"}
    ],
    model="gemini-1.5-flash",
    temperature=0.7
)

print(response["choices"][0]["message"]["content"])
```
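Because `create_completion` is a coroutine, it has to run inside an event loop. Here is a minimal sketch of calling it from a plain script, using only the API shown above; the `GEMINI_API_KEY` environment variable name is an illustrative assumption, not part of the package:

```python
import asyncio
import os

from ai_proxy_core import CompletionsHandler

async def main():
    # GEMINI_API_KEY is an assumed variable name for this sketch
    handler = CompletionsHandler(api_key=os.environ["GEMINI_API_KEY"])
    response = await handler.create_completion(
        messages=[{"role": "user", "content": "Hello, how are you?"}],
        model="gemini-1.5-flash",
        temperature=0.7
    )
    print(response["choices"][0]["message"]["content"])

asyncio.run(main())
```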
### Gemini Live Session
```python
from ai_proxy_core import GeminiLiveSession

# Example 1: Basic session (no system prompt)
session = GeminiLiveSession(api_key="your-gemini-api-key")

# Example 2: Session with system prompt (simple string format)
session = GeminiLiveSession(
    api_key="your-gemini-api-key",
    system_instruction="You are a helpful voice assistant. Be concise and friendly."
)

# Example 3: Session with system prompt (Content object for more control)
from google.genai import types
session = GeminiLiveSession(
    api_key="your-gemini-api-key",
    system_instruction=types.Content(
        parts=[types.Part.from_text(text="You are a pirate. Speak like a pirate!")],
        role="user"
    )
)

# Set up callbacks
session.on_audio = lambda data: print(f"Received audio: {len(data)} bytes")
session.on_text = lambda text: print(f"Received text: {text}")

# Start session
await session.start()

# Send audio/text
await session.send_audio(audio_data)
await session.send_text("Hello!")

# Stop when done
await session.stop()
```
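Putting those calls together, here is a minimal end-to-end sketch that uses only the methods shown above. The raw PCM file name, the 4096-byte chunk size, and the 5-second wait are illustrative assumptions, not requirements of the package:

```python
import asyncio

from ai_proxy_core import GeminiLiveSession

async def run_session():
    session = GeminiLiveSession(api_key="your-gemini-api-key")

    # Register callbacks before starting so no events are missed
    session.on_audio = lambda data: print(f"Received audio: {len(data)} bytes")
    session.on_text = lambda text: print(f"Received text: {text}")

    await session.start()
    try:
        await session.send_text("Hello!")
        # input.pcm and the chunk size are assumptions for this sketch
        with open("input.pcm", "rb") as f:
            while chunk := f.read(4096):
                await session.send_audio(chunk)
        await asyncio.sleep(5)  # crude wait for responses to arrive
    finally:
        await session.stop()

asyncio.run(run_session())
```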
### Integration with FastAPI
```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from ai_proxy_core import CompletionsHandler

app = FastAPI()
# No explicit api_key here; assumes the handler can find credentials on its
# own (e.g., from the environment)
handler = CompletionsHandler()

class CompletionRequest(BaseModel):
    messages: list
    model: str = "gemini-1.5-flash"
    temperature: float = 0.7

@app.post("/api/chat/completions")
async def create_completion(request: CompletionRequest):
    try:
        response = await handler.create_completion(
            messages=request.messages,
            model=request.model,
            temperature=request.temperature
        )
        return response
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
```
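To exercise the route without deploying anything, FastAPI's bundled `TestClient` can drive the app in-process. A sketch, assuming the `app` and `CompletionRequest` definitions above; note it goes through the real handler, so valid credentials are still required:

```python
from fastapi.testclient import TestClient

client = TestClient(app)  # `app` from the snippet above

# This calls the real handler, so credentials must be configured
resp = client.post(
    "/api/chat/completions",
    json={"messages": [{"role": "user", "content": "ping"}]},
)
print(resp.status_code, resp.json())
```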
## Features
- **No framework dependencies** - Use with FastAPI, Flask, or any Python app
- **Async/await support** - Modern async Python
- **Type hints** - Full type annotations
- **Minimal surface area** - Just the core logic you need
- **Easy testing** - Mock the handlers in your tests (see the sketch below)
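
As an example of that last point, the handler's coroutine can be stubbed out with the standard library's `unittest.mock.AsyncMock`. The OpenAI-style response shape is copied from the completions example above, and the test itself is only a sketch:

```python
from unittest.mock import AsyncMock

from ai_proxy_core import CompletionsHandler

async def test_mocked_completion():
    handler = CompletionsHandler(api_key="test-key")
    # Replace the network call with a canned, OpenAI-style response
    handler.create_completion = AsyncMock(return_value={
        "choices": [{"message": {"role": "assistant", "content": "mocked!"}}]
    })

    response = await handler.create_completion(
        messages=[{"role": "user", "content": "hi"}]
    )
    assert response["choices"][0]["message"]["content"] == "mocked!"
```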
## Development
### Building the Package
When building the package for distribution, invoke `setup.py` directly rather than `python -m build` to avoid pip's build-isolation issues:
```bash
python setup.py sdist bdist_wheel
```
This creates both the source distribution and the wheel file in the `dist/` directory.
### Publishing to PyPI
```bash
twine upload dist/*
```
## License
MIT