| Name | pixigpt |
| Version | 0.1.5 |
| Summary | Production-grade Python client for the PixiGPT API |
| upload_time | 2025-10-24 21:26:23 |
| home_page | None |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.8 |
| license | MIT |
| keywords | pixigpt, llm, ai, api, client |
# PixiGPT Python Client
Production-grade Python client for the [PixiGPT API](https://pixigpt.com).
## Features
- 🚀 **High Performance**: Connection pooling with 100 connections
- 🔄 **Smart Retries**: Exponential backoff (0.1s → 0.8s)
- ⏱️ **Timeouts**: 30s default, fully configurable
- 🎯 **Type Hints**: Full typing support for modern Python
- 📦 **Minimal Dependencies**: Just `requests` + `urllib3`
- 🔧 **OpenAI Compatible**: Familiar API surface
- 🧠 **Chain of Thought**: Server-extracted CoT reasoning in `reasoning_content`
- 🛠️ **Tool Calling**: Full OpenAI-compatible function calling support
## Installation
```bash
pip install pixigpt
```
## Quick Start
```python
from pixigpt import Client, ChatCompletionRequest, Message

client = Client("sk-proj-YOUR_API_KEY", "https://pixigpt.com/v1")

# Option 1: With assistant personality
response = client.create_chat_completion(
    ChatCompletionRequest(
        assistant_id="your-assistant-id",  # Optional
        messages=[Message(role="user", content="Hello!")],
    )
)

# Option 2: Pure OpenAI mode (no assistant)
response = client.create_chat_completion(
    ChatCompletionRequest(
        messages=[
            Message(role="system", content="You are helpful"),
            Message(role="user", content="Hello!"),
        ],
    )
)

print(response.choices[0].message.content)

# Access chain of thought reasoning (if enable_thinking=True)
if response.choices[0].reasoning_content:
    print(f"Reasoning: {response.choices[0].reasoning_content}")
```
## Configuration
```python
from pixigpt import Client

# Custom timeout and retries
client = Client(
    api_key="sk-proj-...",
    base_url="https://pixigpt.com/v1",
    timeout=60,     # 60 second timeout
    max_retries=5,  # Retry up to 5 times
)

# Custom session
import requests

session = requests.Session()
session.proxies = {"http": "http://proxy:8080"}

client = Client(
    api_key="sk-proj-...",
    base_url="https://pixigpt.com/v1",
    session=session,
)
```
## API Methods
### Vision & Moderation
Image/video analysis and content moderation:
```python
from pixigpt import (
    VisionAnalyzeRequest,
    VisionTagsRequest,
    VisionOCRRequest,
    VisionVideoRequest,
    ModerationTextRequest,
    ModerationMediaRequest,
)

# Image analysis
response = client.analyze_image(
    VisionAnalyzeRequest(
        image_url="https://example.com/image.jpg",
        user_prompt="Describe this in detail.",
    )
)

# Tag generation
response = client.analyze_image_for_tags(
    VisionTagsRequest(image_url="https://example.com/image.jpg")
)

# OCR text extraction
response = client.extract_text(
    VisionOCRRequest(image_url="https://example.com/document.jpg")
)

# Video analysis (< 10MB)
response = client.analyze_video(
    VisionVideoRequest(
        video_url="https://example.com/video.mp4",
        user_prompt="Describe what happens.",
    )
)

# Text moderation (11 categories)
response = client.moderate_text(
    ModerationTextRequest(prompt="text to moderate")
)
# Returns: category (SAFE, SEXUAL_ADULT, UNDERAGE_SEXUAL, etc.) + score (0.0-1.0)

# Image/video moderation
response = client.moderate_media(
    ModerationMediaRequest(
        media_url="https://example.com/image.jpg",
        is_video=False,
    )
)
```
**Moderation Categories:**
- **CRITICAL:** `UNDERAGE_SEXUAL` (priority), `JAILBREAK`, `SUICIDE_SELF_HARM`, `PII`, `COPYRIGHT_VIOLATION`
- **WARNING:** `VIOLENT`, `ILLEGAL_ACTS`, `UNETHICAL`, `HATE_SPEECH`
- **ALLOWED:** `SEXUAL_ADULT` (explicit only), `SAFE` (everything else)
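The tiering above can be applied client-side when deciding how to act on a moderation result. Here is a minimal sketch: the category sets come straight from the list above, but the `severity` helper itself is hypothetical and not part of the pixigpt package.

```python
# Hypothetical severity router for the documented moderation categories
# (the tier sets mirror the README; the helper is not a pixigpt API).
CRITICAL = {"UNDERAGE_SEXUAL", "JAILBREAK", "SUICIDE_SELF_HARM", "PII", "COPYRIGHT_VIOLATION"}
WARNING = {"VIOLENT", "ILLEGAL_ACTS", "UNETHICAL", "HATE_SPEECH"}

def severity(category: str) -> str:
    """Map a moderation category to its documented tier."""
    if category in CRITICAL:
        return "critical"
    if category in WARNING:
        return "warning"
    return "allowed"  # SEXUAL_ADULT and SAFE
```

You could combine this with the `score` field, for example by only escalating WARNING categories above a confidence threshold.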
### Chat Completions (Stateless)
```python
from pixigpt import ChatCompletionRequest, Message

# Basic completion
response = client.create_chat_completion(
    ChatCompletionRequest(
        assistant_id=assistant_id,  # Optional - omit for pure OpenAI mode
        messages=[
            Message(role="user", content="What's the weather?"),
        ],
        temperature=0.7,
        max_tokens=2000,
        enable_thinking=True,  # Enable chain of thought (default: True)
    )
)

# Access response
print(response.choices[0].message.content)
print(f"Tokens: {response.usage.total_tokens}")

# Access reasoning (server-provided, automatically extracted from <think> tags)
if response.choices[0].reasoning_content:
    print(f"Reasoning: {response.choices[0].reasoning_content}")
```
### Tool Calling (Function Calling)
```python
import json

from pixigpt import ChatCompletionRequest, Message

# Define tools (OpenAI format)
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"}
                },
                "required": ["location"],
            },
        },
    }
]

# Request with tools
response = client.create_chat_completion(
    ChatCompletionRequest(
        messages=[
            Message(role="system", content="You are helpful"),
            Message(role="user", content="What's the weather in Paris?"),
        ],
        tools=tools,
    )
)

# Check if the model wants to call a tool
if response.choices[0].finish_reason == "tool_calls":
    for tool_call in response.choices[0].message.tool_calls:
        print(f"Tool: {tool_call.function.name}")
        print(f"Args: {tool_call.function.arguments}")

    # Execute tool, then send result back
    result = {"temperature": 18, "conditions": "cloudy"}

    response = client.create_chat_completion(
        ChatCompletionRequest(
            messages=[
                Message(role="system", content="You are helpful"),
                Message(role="user", content="What's the weather in Paris?"),
                response.choices[0].message,  # Assistant's tool call
                Message(
                    role="tool",
                    content=json.dumps(result),
                    tool_call_id=tool_call.id,
                ),
            ],
            tools=tools,
        )
    )
```
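The "execute tool" step above generalizes to a small dispatch routine: decode the JSON arguments and call a registered Python function by name. The helper below is an illustrative, stdlib-only sketch, not part of the pixigpt package.

```python
import json

def dispatch_tool_call(name: str, arguments_json: str, registry: dict):
    """Decode OpenAI-style JSON tool arguments and invoke the matching function."""
    args = json.loads(arguments_json)
    return registry[name](**args)

# Usage with the get_weather tool defined above (stubbed locally):
registry = {"get_weather": lambda location: {"temperature": 18, "conditions": "cloudy"}}
result = dispatch_tool_call("get_weather", '{"location": "Paris"}', registry)
```

In a real loop you would pass `tool_call.function.name` and `tool_call.function.arguments` from the response, then send `json.dumps(result)` back in a `tool` message as shown above.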
### Threads (Async with Memory)
```python
# Create thread
thread = client.create_thread()

# Add message
msg = client.create_message(thread.id, "user", "Hello!")

# Run assistant
run = client.create_run(thread.id, assistant_id, enable_thinking=True)

# Wait for completion (message included in response!)
completed_run = client.wait_for_run(thread.id, run.id)

# Access the assistant's response directly
content = completed_run.message.content[0].text["value"]
print(f"assistant: {content}")

# Access reasoning if available
if completed_run.message.reasoning_content:
    print(f"Reasoning: {completed_run.message.reasoning_content}")

# Access tool execution results (Pixi tools only - when the assistant has tools_config=None)
if completed_run.message.sources:
    for src in completed_run.message.sources:
        print(f"Source [{src.tool_name}]: {src.title} - {src.url}")

if completed_run.message.media:
    for media in completed_run.message.media:
        print(f"Media [{media.source}]: {media.signed_url}")

if completed_run.message.code:
    for code in completed_run.message.code:
        print(f"Code [{code.language}]: {code.stdout}")
```
### Assistants
```python
# List
assistants = client.list_assistants()

# Create
assistant = client.create_assistant(
    name="My Assistant",
    instructions="You are a helpful assistant.",
    tools_config=None,
)

# Update
assistant = client.update_assistant(
    assistant_id=assistant.id,
    name="Updated Name",
    instructions="New instructions",
)

# Delete
client.delete_assistant(assistant.id)
```
## Context Manager
```python
with Client(api_key, base_url) as client:
    response = client.create_chat_completion(...)
# Session automatically closed
```
## Error Handling
```python
from pixigpt import APIError, is_auth_error, is_rate_limit_error

try:
    response = client.create_chat_completion(request)
except APIError as e:
    if is_auth_error(e):
        print("Invalid API key")
    elif is_rate_limit_error(e):
        print("Rate limit exceeded")
    else:
        print(f"API error: {e}")
```
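The client already retries transport-level failures internally (the exponential backoff noted under Features), but you may still want application-level retries around rate-limit errors. Below is a stdlib-only sketch; the `with_backoff` helper is illustrative and not part of pixigpt.

```python
import time

def with_backoff(fn, retries=3, base_delay=0.1, sleep=time.sleep):
    """Call fn, retrying with exponential backoff; re-raise the final error."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == retries:
                raise
            sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...

# Usage (hypothetical): only retry when is_rate_limit_error(e) is true,
# e.g. with_backoff(lambda: client.create_chat_completion(request))
```

A production version would catch `APIError` specifically and re-raise immediately on auth errors, since retrying an invalid key never helps.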
## Chain of Thought Reasoning
When `enable_thinking=True` (default), the server automatically extracts reasoning from `<think>` tags:
```python
response = client.create_chat_completion(
    ChatCompletionRequest(
        messages=[
            Message(role="system", content="You are helpful"),
            Message(role="user", content="Explain quantum physics"),
        ],
        enable_thinking=True,  # Default: True
    )
)

# Main response (thinking tags removed by server)
print(response.choices[0].message.content)

# Reasoning content (automatically extracted by vLLM, provided in a separate field)
if response.choices[0].reasoning_content:
    print(f"Chain of thought: {response.choices[0].reasoning_content}")
```
**Note:** `reasoning_content` is provided directly by the server (vLLM extracts it). Both `content` and `reasoning_content` are automatically trimmed of whitespace. When thinking is enabled and `max_tokens` < 3000, it's automatically bumped to 3000 (CoT needs space).
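The bump rule from the note can be mirrored client-side when budgeting tokens. This is a sketch of the documented server behavior, not an API exposed by this package:

```python
def effective_max_tokens(max_tokens: int, enable_thinking: bool = True) -> int:
    """Mirror the documented server rule: with thinking enabled,
    max_tokens below 3000 is raised to 3000 so CoT has room."""
    if enable_thinking and max_tokens < 3000:
        return 3000
    return max_tokens
```

This matters for cost estimation: a request with `max_tokens=500` and thinking enabled may consume up to 3000 completion tokens.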
## Examples
See [examples/](examples/) directory:
- [`chat.py`](examples/chat.py) - Simple chat completion
- [`thread.py`](examples/thread.py) - Multi-turn conversation
- [`vision.py`](examples/vision.py) - Vision analysis and content moderation
```bash
# Install dev dependencies
pip install pixigpt[dev]
# Run examples
python examples/chat.py
python examples/vision.py
```
## Testing
```bash
pip install pixigpt[dev]
pytest
```
## Publishing to PyPI
```bash
# Update version in pyproject.toml
./publish.sh
```
## License
MIT