| Field | Value |
| --- | --- |
| Name | wraipperz |
| Version | 0.1.41 |
| Summary | Simple wrappers for various AI APIs including LLMs, ASR, and TTS |
| upload_time | 2025-07-09 07:53:11 |
| home_page | None |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.10 |
| license | MIT |
| keywords | ai, anthropic, asr, google, llm, openai, tts, wrapper |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# wraipperz
Easy wrapper for various AI APIs including LLMs, ASR, TTS, and Video Generation.
## Installation
Basic installation:
```bash
# With pip
pip install wraipperz

# Or with uv
uv add wraipperz
```
With optional dependencies for specific providers:
```bash
# For fal.ai video generation
pip install wraipperz fal-client

# For all supported providers
pip install "wraipperz[all]"
```
## Features
- **LLM API Wrappers**: Unified interface for OpenAI, Anthropic, Google, and other LLM providers
- **ASR (Automatic Speech Recognition)**: Convert speech to text
- **TTS (Text-to-Speech)**: Convert text to speech
- **Video Generation**: Text-to-video and image-to-video generation
- **Async Support**: Asynchronous API calls for improved performance (see the sketch below)
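A minimal sketch of what async usage might look like. Note that `call_ai_async` is a hypothetical name assumed to mirror the synchronous `call_ai`; check the package's exports for the actual coroutine entry point.

```python
import asyncio

# Hypothetical: "call_ai_async" is an assumed name mirroring call_ai's signature.
from wraipperz import MessageBuilder, call_ai_async


async def main():
    messages = MessageBuilder().add_system("You are a helpful assistant.").add_user("What's 1+1?").build()
    # Awaiting here lets several calls run concurrently, e.g. via asyncio.gather.
    response, cost = await call_ai_async(model="openai/gpt-4o", messages=messages)
    print(response)


asyncio.run(main())
```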
## Quick Start
### LLM
```python
import os
from wraipperz import call_ai, MessageBuilder

os.environ["OPENAI_API_KEY"] = "your_openai_key"  # if not already defined in your environment

messages = MessageBuilder().add_system("You are a helpful assistant.").add_user("What's 1+1?").build()

# Call an LLM with a simple interface
response, cost = call_ai(
    model="openai/gpt-4o",
    messages=messages
)
```
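Because the provider is encoded in the `model` string, switching backends is just a string change. A minimal sketch, assuming an Anthropic key is set and that the identifier follows the same `provider/model` pattern used throughout this README (the exact Anthropic model name below is illustrative):

```python
from wraipperz import call_ai, MessageBuilder

messages = MessageBuilder().add_system("You are a helpful assistant.").add_user("What's 1+1?").build()

# Same call, different provider -- only the model string changes.
# The Anthropic model name is illustrative; use one your account supports.
response, cost = call_ai(model="anthropic/claude-3-5-sonnet-20241022", messages=messages)
```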
Parsing LLM output into a Pydantic object:
```python
from pydantic import BaseModel, Field
from wraipperz import pydantic_to_yaml_example, find_yaml, MessageBuilder, call_ai
import yaml


class User(BaseModel):
    name: str = Field(json_schema_extra={"example": "Bob", "comment": "The name of the character."})
    age: int = Field(json_schema_extra={"example": 12, "comment": "The age of the character."})


template = pydantic_to_yaml_example(User)
prompt = f"""Extract the user's name and age from the unstructured text provided below and output your answer following the provided example.
Text: "John is a well respected 31 years old pirate who really likes mooncakes."
Example output:
\`\`\`yaml
{template}
\`\`\`
"""
messages = MessageBuilder().add_system(prompt).build()
response, cost = call_ai(model="openai/gpt-4o-mini", messages=messages)

yaml_content = find_yaml(response)
user = User(**yaml.safe_load(yaml_content))
print(user)  # prints name='John' age=31
```
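For reference, `pydantic_to_yaml_example` presumably renders the `example` and `comment` extras into a commented YAML template. The exact formatting below is an assumption, not verified output:

```python
print(pydantic_to_yaml_example(User))
# Presumably prints something along these lines (format assumed):
# name: Bob  # The name of the character.
# age: 12  # The age of the character.
```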
### Image Generation and Modification (todo check readme)
```python
from wraipperz import generate, MessageBuilder
from PIL import Image

# Text-to-image generation
messages = MessageBuilder().add_user("Generate an image of a futuristic city skyline at sunset.").build()

result, cost = generate(
    model="gemini/gemini-2.0-flash-exp-image-generation",
    messages=messages,
    temperature=0.7,
    max_tokens=4096
)

# The result contains both text and images
print(result["text"])  # Text description/commentary from the model

# Save the generated images
for i, image in enumerate(result["images"]):
    image.save(f"generated_city_{i}.png")
    # image.show()  # Uncomment to display the image

# Image modification with input image
input_image = Image.open("input_photo.jpg")  # Replace with your image path

image_messages = MessageBuilder().add_user("Add a futuristic flying car to this image.").add_image(input_image).build()

result, cost = generate(
    model="gemini/gemini-2.0-flash-exp-image-generation",
    messages=image_messages,
    temperature=0.7,
    max_tokens=4096
)

# Save the modified images
for i, image in enumerate(result["images"]):
    image.save(f"modified_image_{i}.png")
```
The `generate` function returns a dictionary containing both textual response and generated images, enabling multimodal AI capabilities in your applications.
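Depending on the prompt, the model may answer with text only. A small defensive sketch, using only the `result` keys shown above, avoids silently writing nothing:

```python
# Assumes `result` comes from a generate(...) call as above.
images = result.get("images") or []
if not images:
    # The model answered with text only; surface that instead of failing silently.
    print("No images returned:", result.get("text"))
for i, image in enumerate(images):
    image.save(f"generated_{i}.png")
```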
### Video Generation
```python
import os
from wraipperz import generate_video_from_text, generate_video_from_image, wait_for_video_completion
from PIL import Image

# Set your API key
os.environ["PIXVERSE_API_KEY"] = "your_pixverse_key"

# Text-to-Video Generation with automatic download
result = generate_video_from_text(
    model="pixverse/text-to-video-v3.5",
    prompt="A serene mountain lake at sunrise, with mist rising from the water.",
    negative_prompt="blurry, distorted, low quality, text, watermark",
    duration=5,  # 5 seconds
    quality="720p",
    style="3d_animation",  # Optional: "anime", "3d_animation", "day", "cyberpunk", "comic"
    wait_for_completion=True,  # Wait for the video to complete
    output_path="videos/mountain_lake"  # Extension (.mp4) will be added automatically
)

print(f"Video downloaded to: {result['file_path']}")
print(f"Video URL: {result['url']}")

# Image-to-Video Generation
# Load an image
image = Image.open("your_image.jpg")

# Convert the image to a video with motion and download automatically
result = generate_video_from_image(
    model="pixverse/image-to-video-v3.5",
    image_path=image,  # Can also be a file path string
    prompt="Add gentle motion and waves to this image",
    duration=5,
    quality="720p",
    output_path="videos/animated_image.mp4"  # Specify full path with extension
)

print(f"Video downloaded to: {result['file_path']}")
```
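The import above also pulls in `wait_for_video_completion`, which suggests a non-blocking workflow. The sketch below is an assumption about its signature (that it accepts the pending result of a `wait_for_completion=False` call plus an output path); consult the package source for the actual contract:

```python
# Hypothetical non-blocking usage; the wait_for_video_completion signature is assumed.
pending = generate_video_from_text(
    model="pixverse/text-to-video-v3.5",
    prompt="A serene mountain lake at sunrise.",
    duration=5,
    quality="720p",
    wait_for_completion=False,  # Return immediately instead of blocking
)

# ... do other work here ...

# Block until the render finishes and the file is downloaded.
result = wait_for_video_completion(pending, output_path="videos/mountain_lake.mp4")
print(f"Video downloaded to: {result['file_path']}")
```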
#### Using fal.ai for Video Generation
```python
import os
from wraipperz import generate_video_from_image
from PIL import Image

# Set your API key
os.environ["FAL_KEY"] = "your_fal_key"

# Works with local image paths (auto-encoded as base64)
result = generate_video_from_image(
    model="fal/kling-video-v2-master",  # Using Kling 2.0 Master
    image_path="path/to/your/local/image.jpg",  # Local image path
    prompt="A beautiful mountain scene with gentle motion in the clouds and water",
    duration="5",  # "5" or "10" seconds
    aspect_ratio="16:9",  # "16:9", "9:16", or "1:1"
    wait_for_completion=True,
    output_path="videos/fal_mountain_scene.mp4"
)

print(f"Video downloaded to: {result['file_path']}")

# Works directly with PIL Image objects
pil_image = Image.open("path/to/your/image.jpg")
result = generate_video_from_image(
    model="fal/minimax-video",  # Options: fal/minimax-video, fal/luma-dream-machine, fal/kling-video
    image_path=pil_image,  # PIL Image object
    prompt="Gentle ocean waves with clouds moving in the sky",
    wait_for_completion=True,
    output_path="videos/fal_ocean_scene"  # Extension will be added automatically
)

print(f"Video downloaded to: {result['file_path']}")

# You can also still use image URLs if you prefer
result = generate_video_from_image(
    model="fal/kling-video-v2-master",
    image_path="https://example.com/your-image.jpg",  # Web URL
    prompt="A colorful autumn scene with leaves gently falling",
    wait_for_completion=True,
    output_path="videos/fal_autumn_scene"
)

print(f"Video downloaded to: {result['file_path']}")
```
**Note**: fal.ai requires the `fal-client` package. Install it with `pip install fal-client`.
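If `fal-client` might be missing at runtime, a plain import guard gives a clearer error than a deep stack trace. A minimal sketch, independent of wraipperz's own error handling:

```python
try:
    import fal_client  # noqa: F401 -- only checking availability
except ImportError as exc:
    raise SystemExit(
        "fal.ai support needs the optional dependency: pip install fal-client"
    ) from exc
```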
### TTS
```python
from wraipperz.api.tts import create_tts_manager

tts_manager = create_tts_manager()

# Generate speech using OpenAI Realtime TTS
response = tts_manager.generate_speech(
    "openai_realtime",
    text="This is a demonstration of my voice capabilities!",
    output_path="realtime_output.mp3",
    voice="ballad",
    context="Speak in an extremely calm, soft, and relaxed voice.",
    return_alignment=True,
    speed=1.1,
)

# Convert speech using ElevenLabs
# TODO add example
```
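Until the ElevenLabs example above is filled in, here is a hypothetical sketch: the provider key `"elevenlabs"` and the keyword arguments are assumptions mirroring the `openai_realtime` call, not a documented API:

```python
# Hypothetical: provider name and arguments are assumed, mirroring the call above.
response = tts_manager.generate_speech(
    "elevenlabs",
    text="This is a demonstration of my voice capabilities!",
    output_path="elevenlabs_output.mp3",
    voice="Rachel",  # Illustrative ElevenLabs voice name
)
```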
## Environment Variables
Set your API keys as environment variables to enable the corresponding providers.
```bash
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key
GOOGLE_API_KEY=your_google_key
PIXVERSE_API_KEY=your_pixverse_key
KLING_API_KEY=your_kling_key
FAL_KEY=your_fal_key
# ... todo add all
```
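Rather than exporting these by hand, you can keep them in a `.env` file and load it at startup with the third-party `python-dotenv` package (not a wraipperz dependency; `pip install python-dotenv`):

```python
from dotenv import load_dotenv

# Reads key=value pairs from .env into os.environ before wraipperz is used.
load_dotenv()
```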
## License
MIT
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## Raw data

```json
{
"_id": null,
"home_page": null,
"name": "wraipperz",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.10",
"maintainer_email": null,
"keywords": "ai, anthropic, asr, google, llm, openai, tts, wrapper",
"author": null,
"author_email": "Adan H\u00e4fliger <adan.haefliger@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/1b/e0/34e8af89429610baa8f93667984ca56e2dde0da22ed5265d5befee71f529/wraipperz-0.1.41.tar.gz",
"platform": null,
"description": "# wraipperz\n\nEasy wrapper for various AI APIs including LLMs, ASR, TTS, and Video Generation.\n\n## Installation\n\nBasic installation:\n\n```bash\npip install wraipperz\nuv add wraipperz\n```\n\nWith optional dependencies for specific providers:\n\n```bash\n# For fal.ai video generation\npip install wraipperz fal-client\n\n# For all supported providers\npip install wraipperz \"wraipperz[all]\"\n```\n\n## Features\n\n- **LLM API Wrappers**: Unified interface for OpenAI, Anthropic, Google, and other LLM providers\n- **ASR (Automatic Speech Recognition)**: Convert speech to text\n- **TTS (Text-to-Speech)**: Convert text to speech\n- **Video Generation**: Text-to-video and image-to-video generation\n- **Async Support**: Asynchronous API calls for improved performance\n\n## Quick Start\n\n### LLM\n\n```python\nimport os\nfrom wraipperz import call_ai, MessageBuilder\n\nos.environ[\"OPENAI_API_KEY\"] = \"your_openai_key\" # if not defined in environment variables\nmessages = MessageBuilder().add_system(\"You are a helpful assistant.\").add_user(\"What's 1+1?\")\n\n# Call an LLM with a simple interface\nresponse, cost = call_ai(\n model=\"openai/gpt-4o\",\n messages=messages\n)\n```\n\nParsing LLM output to pydantic object.\n\n```python\nfrom pydantic import BaseModel, Field\nfrom wraipperz import pydantic_to_yaml_example, find_yaml, MessageBuilder, call_ai\nimport yaml\n\n\nclass User(BaseModel):\n name: str = Field(json_schema_extra={\"example\": \"Bob\", \"comment\": \"The name of the character.\"})\n age: int = Field(json_schema_extra={\"example\": 12, \"comment\": \"The age of the character.\"})\n\n\ntemplate = pydantic_to_yaml_example(User)\nprompt = f\"\"\"Extract the user's name and age from the unstructured text provided below and output your answer following the provided example.\nText: \"John is a well respected 31 years old pirate who really likes mooncakes.\"\nExampe output:\n\\`\\`\\`yaml\n{template}\n\\`\\`\\`\n\"\"\"\nmessages = MessageBuilder().add_system(prompt).build()\nresponse, cost = call_ai(model=\"openai/gpt-4o-mini\", messages=messages)\n\nyaml_content = find_yaml(response)\nuser = User(**yaml.safe_load(yaml_content))\nprint(user) # prints name='John' age=31\n```\n\n### Image Generation and Modification (todo check readme)\n\n```python\nfrom wraipperz import generate, MessageBuilder\nfrom PIL import Image\n\n# Text-to-image generation\nmessages = MessageBuilder().add_user(\"Generate an image of a futuristic city skyline at sunset.\").build()\n\nresult, cost = generate(\n model=\"gemini/gemini-2.0-flash-exp-image-generation\",\n messages=messages,\n temperature=0.7,\n max_tokens=4096\n)\n\n# The result contains both text and images\nprint(result[\"text\"]) # Text description/commentary from the model\n\n# Save the generated images\nfor i, image in enumerate(result[\"images\"]):\n image.save(f\"generated_city_{i}.png\")\n # image.show() # Uncomment to display the image\n\n# Image modification with input image\ninput_image = Image.open(\"input_photo.jpg\") # Replace with your image path\n\nimage_messages = MessageBuilder().add_user(\"Add a futuristic flying car to this image.\").add_image(input_image).build()\n\nresult, cost = generate(\n model=\"gemini/gemini-2.0-flash-exp-image-generation\",\n messages=image_messages,\n temperature=0.7,\n max_tokens=4096\n)\n\n# Save the modified images\nfor i, image in enumerate(result[\"images\"]):\n image.save(f\"modified_image_{i}.png\")\n```\n\nThe `generate` function returns a dictionary containing both textual 
response and generated images, enabling multimodal AI capabilities in your applications.\n\n### Video Generation\n\n```python\nimport os\nfrom wraipperz import generate_video_from_text, generate_video_from_image, wait_for_video_completion\nfrom PIL import Image\n\n# Set your API key\nos.environ[\"PIXVERSE_API_KEY\"] = \"your_pixverse_key\"\n\n# Text-to-Video Generation with automatic download\nresult = generate_video_from_text(\n model=\"pixverse/text-to-video-v3.5\",\n prompt=\"A serene mountain lake at sunrise, with mist rising from the water.\",\n negative_prompt=\"blurry, distorted, low quality, text, watermark\",\n duration=5, # 5 seconds\n quality=\"720p\",\n style=\"3d_animation\", # Optional: \"anime\", \"3d_animation\", \"day\", \"cyberpunk\", \"comic\"\n wait_for_completion=True, # Wait for the video to complete\n output_path=\"videos/mountain_lake\" # Extension (.mp4) will be added automatically\n)\n\nprint(f\"Video downloaded to: {result['file_path']}\")\nprint(f\"Video URL: {result['url']}\")\n\n# Image-to-Video Generation\n# Load an image\nimage = Image.open(\"your_image.jpg\")\n\n# Convert the image to a video with motion and download automatically\nresult = generate_video_from_image(\n model=\"pixverse/image-to-video-v3.5\",\n image_path=image, # Can also be a file path string\n prompt=\"Add gentle motion and waves to this image\",\n duration=5,\n quality=\"720p\",\n output_path=\"videos/animated_image.mp4\" # Specify full path with extension\n)\n\nprint(f\"Video downloaded to: {result['file_path']}\")\n```\n\n\n#### Using fal.ai for Video Generation\n\n```python\nimport os\nfrom wraipperz import generate_video_from_image\nfrom PIL import Image\n\n# Set your API key\nos.environ[\"FAL_KEY\"] = \"your_fal_key\"\n\n# Works with local image paths (auto-encoded as base64)\nresult = generate_video_from_image(\n model=\"fal/kling-video-v2-master\", # Using Kling 2.0 Master\n image_path=\"path/to/your/local/image.jpg\", # Local image path\n prompt=\"A beautiful mountain scene with gentle motion in the clouds and water\",\n duration=\"5\", # \"5\" or \"10\" seconds\n aspect_ratio=\"16:9\", # \"16:9\", \"9:16\", or \"1:1\"\n wait_for_completion=True,\n output_path=\"videos/fal_mountain_scene.mp4\"\n)\n\nprint(f\"Video downloaded to: {result['file_path']}\")\n\n# Works directly with PIL Image objects\npil_image = Image.open(\"path/to/your/image.jpg\")\nresult = generate_video_from_image(\n model=\"fal/minimax-video\", # Options: fal/minimax-video, fal/luma-dream-machine, fal/kling-video\n image_path=pil_image, # PIL Image object\n prompt=\"Gentle ocean waves with clouds moving in the sky\",\n wait_for_completion=True,\n output_path=\"videos/fal_ocean_scene\" # Extension will be added automatically\n)\n\nprint(f\"Video downloaded to: {result['file_path']}\")\n\n# You can also still use image URLs if you prefer\nresult = generate_video_from_image(\n model=\"fal/kling-video-v2-master\",\n image_path=\"https://example.com/your-image.jpg\", # Web URL\n prompt=\"A colorful autumn scene with leaves gently falling\",\n wait_for_completion=True,\n output_path=\"videos/fal_autumn_scene\"\n)\n\nprint(f\"Video downloaded to: {result['file_path']}\")\n```\n\n**Note**: fal.ai requires the `fal-client` package. 
Install it with `pip install fal-client`.\n\n### TTS\n\n```python\nfrom wraipperz.api.tts import create_tts_manager\n\ntts_manager = create_tts_manager()\n\n# Generate speech using OpenAI Realtime TTS\nresponse = tts_manager.generate_speech(\n \"openai_realtime\",\n text=\"This is a demonstration of my voice capabilities!\",\n output_path=\"realtime_output.mp3\",\n voice=\"ballad\",\n context=\"Speak in a extremelly calm, soft, and relaxed voice.\",\n return_alignment=True,\n speed=1.1,\n)\n\n# Convert speech using ElevenLabs\n# TODO add example\n\n```\n\n## Environment Variables\n\nSet up your API keys in environment variables to enable providers.\n\n```bash\nOPENAI_API_KEY=your_openai_key\nANTHROPIC_API_KEY=your_anthropic_key\nGOOGLE_API_KEY=your_google_key\nPIXVERSE_API_KEY=your_pixverse_key\nKLING_API_KEY=your_kling_key\nFAL_KEY=your_fal_key\n# ... todo add all\n```\n\n## License\n\nMIT\n\n## Contributing\n\nContributions are welcome! Please feel free to submit a Pull Request.\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "Simple wrappers for various AI APIs including LLMs, ASR, and TTS",
"version": "0.1.41",
"project_urls": {
"Bug Tracker": "https://github.com/Ahaeflig/wraipperz/issues",
"Homepage": "https://github.com/Ahaeflig/wraipperz"
},
"split_keywords": [
"ai",
" anthropic",
" asr",
" google",
" llm",
" openai",
" tts",
" wrapper"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "11e1d7887b7371b50a7f871c8ea23130993130876114fa2a05d6b9e3b3a9b665",
"md5": "a96966df66332a42261294275ab33fd3",
"sha256": "306f3781c32e7f424b69dd3f24c3bc365fdabea4b364db54955544f200132516"
},
"downloads": -1,
"filename": "wraipperz-0.1.41-py3-none-any.whl",
"has_sig": false,
"md5_digest": "a96966df66332a42261294275ab33fd3",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.10",
"size": 56175,
"upload_time": "2025-07-09T07:53:06",
"upload_time_iso_8601": "2025-07-09T07:53:06.857002Z",
"url": "https://files.pythonhosted.org/packages/11/e1/d7887b7371b50a7f871c8ea23130993130876114fa2a05d6b9e3b3a9b665/wraipperz-0.1.41-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "1be034e8af89429610baa8f93667984ca56e2dde0da22ed5265d5befee71f529",
"md5": "997c6b79b0c0f62662d02527f06efeee",
"sha256": "7187968c7277e31754e8c4ba53841a8cdcd549fc83c579bd12d432e42a4729cc"
},
"downloads": -1,
"filename": "wraipperz-0.1.41.tar.gz",
"has_sig": false,
"md5_digest": "997c6b79b0c0f62662d02527f06efeee",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.10",
"size": 48684285,
"upload_time": "2025-07-09T07:53:11",
"upload_time_iso_8601": "2025-07-09T07:53:11.616002Z",
"url": "https://files.pythonhosted.org/packages/1b/e0/34e8af89429610baa8f93667984ca56e2dde0da22ed5265d5befee71f529/wraipperz-0.1.41.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-09 07:53:11",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "Ahaeflig",
"github_project": "wraipperz",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"requirements": [],
"lcname": "wraipperz"
}
```