# BubbleTea Python SDK
Build AI chatbots for the BubbleTea platform with simple Python functions.
**Now with LiteLLM support!** 🎉 Easily integrate with OpenAI, Anthropic Claude, Google Gemini, and 100+ other LLMs.
**NEW: Vision & Image Generation!** 📸🎨 Build multimodal bots that can analyze images and generate new ones using AI.
**NEW: User & Conversation Tracking!** 🔍 Chat requests now include `user_uuid` and `conversation_uuid` for better context awareness.
## Installation
### Basic Installation
```bash
pip install bubbletea-chat
```
### With LLM Support
To include LiteLLM integration for AI models (OpenAI, Claude, Gemini, and 100+ more):
```bash
pip install 'bubbletea-chat[llm]'
```
## Quick Start
Create a simple chatbot in `my_bot.py`:
```python
import bubbletea_chat as bt

@bt.chatbot
def my_chatbot(message: str):
    # Your bot logic here
    if "image" in message.lower():
        yield bt.Image("https://picsum.photos/400/300")
        yield bt.Text("Here's a random image for you!")
    else:
        yield bt.Text(f"You said: {message}")

if __name__ == "__main__":
    # Run the chatbot server
    bt.run_server(my_chatbot, port=8000, host="0.0.0.0")
```
Run it locally:
```bash
python my_bot.py
```
This will start a server at `http://localhost:8000` with your chatbot available at the `/chat` endpoint.
### Configuration with `@config` Decorator
BubbleTea provides a `@config` decorator to define and expose bot configurations via a dedicated endpoint. This is essential for:
- Setting up bot metadata (name, URL, description)
- Enabling subscriptions and payments
- Configuring app store-style listing information
- Managing access control and visibility
#### Example: Using the `@config` Decorator
```python
import bubbletea_chat as bt

# Define bot configuration
@bt.config()
def get_config():
    return bt.BotConfig(
        # Required fields
        name="weather-bot",  # URL-safe handle (no spaces)
        url="http://localhost:8000",
        is_streaming=True,

        # App store metadata
        display_name="Weather Bot",  # User-facing name (max 20 chars)
        subtitle="Real-time weather updates",  # Brief description (max 30 chars)
        icon_emoji="🌤️",  # Or use icon_url for custom icon
        description="Get accurate weather forecasts for any city worldwide.",

        # Subscription/Payment (in cents)
        subscription_monthly_price=499,  # $4.99/month (0 = free)

        # Access control
        visibility="public",  # "public" or "private"
        authorized_emails=["admin@example.com"],  # For private bots

        # User experience
        initial_text="Hello! I can help you check the weather. Which city would you like to know about?"
    )

# Define the chatbot
@bt.chatbot(name="Weather Bot", stream=True)
def weather_bot(message: str):
    if "new york" in message.lower():
        yield bt.Text("🌤️ New York: Partly cloudy, 72°F")
    else:
        yield bt.Text("Please specify a city to check the weather.")
```
When the bot server is running, the configuration can be accessed at the `/config` endpoint. For example:
```bash
curl http://localhost:8000/config
```
This will return the bot's configuration as a JSON object.
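For reference, the response for the weather bot above might look like this (field names mirror `BotConfig`; the exact serialization may differ):

```json
{
  "name": "weather-bot",
  "url": "http://localhost:8000",
  "is_streaming": true,
  "display_name": "Weather Bot",
  "subtitle": "Real-time weather updates",
  "icon_emoji": "🌤️",
  "subscription_monthly_price": 499,
  "visibility": "public"
}
```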
**Note on URL Paths:** If your bot runs on a custom path (e.g., `/pillsbot`), BubbleTea will automatically append `/config` to that path. For example:
- Bot URL: `http://localhost:8010/pillsbot` → Config endpoint: `http://localhost:8010/pillsbot/config`
- Bot URL: `http://localhost:8000/my-bot` → Config endpoint: `http://localhost:8000/my-bot/config`
- Bot URL: `http://localhost:8000` → Config endpoint: `http://localhost:8000/config`
#### Dynamic Bot Creation Using `/config`
BubbleTea agents can dynamically create new chatbots by utilizing the `/config` endpoint. For example, if you provide a command like:
```bash
create new bot 'bot-name' with url 'http://example.com'
```
The agent will automatically fetch the configuration from `http://example.com/config` and create a new chatbot based on the metadata defined in the configuration. This allows for seamless integration and creation of new bots without manual setup.
### Complete BotConfig Reference
The `BotConfig` class supports extensive configuration options for your bot:
#### Required Fields
- `name` (str): URL-safe bot handle (no spaces, used in URLs)
- `url` (str): Bot hosting URL
- `is_streaming` (bool): Enable streaming responses
#### App Store Metadata
- `display_name` (str): User-facing name (max 20 characters)
- `subtitle` (str): Brief tagline (max 30 characters)
- `description` (str): Full Markdown description
- `icon_url` (str): 1024x1024 PNG icon URL
- `icon_emoji` (str): Alternative emoji icon
- `preview_video_url` (str): Demo video URL
#### Subscription & Payment
- `subscription_monthly_price` (int): Price in cents
  - Example: `999` = $9.99/month
  - Set to `0` for free bots
  - Users are automatically billed monthly
  - Subscription status is passed to your bot
#### Access Control
- `visibility` (str): "public" or "private"
- `authorized_emails` (List[str]): Whitelist for private bots
- `authorization` (str): Deprecated, use `visibility`
#### User Experience
- `initial_text` (str): Welcome message
- `cors_config` (dict): Custom CORS settings
### Payment & Subscription Example
```python
import bubbletea_chat as bt
from bubbletea_chat import LLM

@bt.config()
def get_config():
    return bt.BotConfig(
        # Basic configuration
        name="premium-assistant",
        url="https://your-bot.com",
        is_streaming=True,

        # Enable subscription
        subscription_monthly_price=1999,  # $19.99/month

        # Premium-only access
        visibility="public",  # Anyone can see it
        # But only subscribers can use it
    )

@bt.chatbot
async def premium_bot(message: str, user_email: str = None, subscription_status: str = None):
    """Subscription status is automatically provided by BubbleTea"""
    if subscription_status == "active":
        # Full premium features
        llm = LLM(model="gpt-4")
        response = await llm.acomplete(message)
        yield bt.Text(response)
    else:
        # Limited features for non-subscribers
        yield bt.Text("Subscribe to access premium features!")
        yield bt.Markdown("""
## 💎 Premium Features
- Advanced AI responses
- Priority support
- And much more!

**Only $19.99/month**
        """)
```
## Features
### 🤖 LiteLLM Integration
BubbleTea includes built-in support for LiteLLM, giving you a single interface to the 100+ LLM models it supports across providers.
```python
from bubbletea_chat import LLM

# Use any model supported by LiteLLM
llm = LLM(model="gpt-4")
llm = LLM(model="claude-3-sonnet-20240229")
llm = LLM(model="gemini/gemini-pro")

# Inside an async chatbot function:
# Simple completion
response = await llm.acomplete("Hello, how are you?")

# Streaming
async for chunk in llm.stream("Tell me a story"):
    yield bt.Text(chunk)
```
**📚 Supported Models**: Check out the full list of supported models and providers at [LiteLLM Providers Documentation](https://docs.litellm.ai/docs/providers)
**💡 DIY Alternative**: You can also implement your own LLM connections using the LiteLLM library directly in your bots if you need more control over the integration.
### 📸 Vision/Image Analysis
BubbleTea supports multimodal interactions! Your bots can receive and analyze images:
```python
from bubbletea_chat import chatbot, Text, LLM, ImageInput

@chatbot
async def vision_bot(prompt: str, images: list = None):
    """A bot that can see and analyze images"""
    if images:
        llm = LLM(model="gpt-4-vision-preview")
        response = await llm.acomplete_with_images(prompt, images)
        yield Text(response)
    else:
        yield Text("Send me an image to analyze!")
```
**Supported Image Formats:**
- URL images: Direct links to images
- Base64 encoded images: For local/uploaded images
- Multiple images: Analyze multiple images at once
**Compatible Vision Models:**
- OpenAI: GPT-4 Vision (`gpt-4-vision-preview`)
- Anthropic: Claude 3 models (Opus, Sonnet, Haiku)
- Google: Gemini Pro Vision (`gemini/gemini-pro-vision`)
- And more vision-enabled models via LiteLLM
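Putting these together, here is a sketch that mixes a URL image and a base64 image in one call (the URL and base64 payload are placeholders):

```python
from bubbletea_chat import ImageInput, LLM

async def compare_images(prompt: str) -> str:
    llm = LLM(model="gpt-4-vision-preview")
    # Mix URL and base64 inputs in a single request
    images = [
        ImageInput(url="https://example.com/photo.jpg"),
        ImageInput(base64="iVBORw0KGgoAAAANS...", mime_type="image/png"),
    ]
    return await llm.acomplete_with_images(prompt, images)
```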
### 🎨 Image Generation
Generate images from text descriptions using AI models:
```python
from bubbletea_chat import chatbot, Image, LLM

@chatbot
async def art_bot(prompt: str):
    """Generate images from descriptions"""
    llm = LLM(model="dall-e-3")  # or any image generation model

    # Generate an image
    image_url = await llm.agenerate_image(prompt)
    yield Image(image_url)
```
**Image Generation Features:**
- Text-to-image generation
- Support for DALL-E, Stable Diffusion, and other models
- Customizable parameters (size, quality, style)
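Since `agenerate_image` accepts extra keyword arguments, generation parameters can be passed through; a sketch (the `size` and `quality` names below are DALL-E-style assumptions, not confirmed by this SDK):

```python
from bubbletea_chat import LLM

async def make_poster(prompt: str) -> str:
    llm = LLM(model="dall-e-3")
    # size/quality are assumed pass-through kwargs (DALL-E-style names)
    return await llm.agenerate_image(prompt, size="1024x1024", quality="hd")
```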
### 📦 Components
**Video Component Features:**
- Embed videos from any URL (MP4, WebM, etc.)
- Works in web and mobile BubbleTea clients
**Video API:**
```python
Video(url: str)
```
#### Video Component Example
```python
from bubbletea_chat import chatbot, Text, Video

@chatbot
async def video_bot(message: str):
    yield Text("Here's a video for you:")
    yield Video(
        url="https://www.w3schools.com/html/mov_bbb.mp4"
    )
    yield Text("Did you enjoy the video?")
```
#### Card Component Example
```python
from bubbletea_chat import chatbot, Card, Image, Text

@chatbot
async def card_bot(message: str):
    yield Text("Here's a card for you:")
    yield Card(
        image=Image(url="https://picsum.photos/id/237/200/300", alt="A dog"),
        text="This is a dog card.",
        card_value="dog_card_clicked"
    )
```
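To show several cards at once, they can be grouped with the plural `Cards` component; a minimal sketch based on its signature in the API reference below:

```python
from bubbletea_chat import chatbot, Card, Cards, Image, Text

@chatbot
async def cards_bot(message: str):
    yield Text("Pick a card:")
    yield Cards(
        orient="wide",  # or "tall"
        cards=[
            Card(image=Image(url="https://picsum.photos/id/237/200/300", alt="A dog"),
                 text="Dog", card_value="dog_selected"),
            Card(image=Image(url="https://picsum.photos/id/238/200/300", alt="Another photo"),
                 text="Mystery", card_value="mystery_selected"),
        ],
    )
```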
#### Pills Component Example
```python
from bubbletea_chat import chatbot, Pill, Pills, Text

@chatbot
async def pills_bot(message: str):
    yield Text("Choose your favorite fruit:")
    yield Pills(pills=[
        Pill(text="Apple", pill_value="apple_selected"),
        Pill(text="Banana", pill_value="banana_selected"),
        Pill(text="Orange", pill_value="orange_selected")
    ])
```
#### Error Component Example
```python
from bubbletea_chat import chatbot, Error, Text

@chatbot
async def error_handling_bot(message: str):
    if "error" in message.lower():
        # Return an error component with details
        return Error(
            title="Service Unavailable",
            description="The requested service is temporarily unavailable. Please try again later.",
            code="ERR_503"
        )
    elif "fail" in message.lower():
        # Simple error without description
        return Error(
            title="Operation Failed",
            code="ERR_001"
        )
    else:
        return Text("Try saying 'error' or 'fail' to see error messages.")
```
**Error Component Features:**
- **title** (required): The main error message to display
- **description** (optional): Additional context or instructions
- **code** (optional): Error code for debugging/support
The Error component is automatically styled with:
- Warning icon (⚠️)
- Red-themed design for visibility
- Clear formatting to distinguish from regular messages
- Support for retry functionality (when implemented by the frontend)
**Common Use Cases:**
- API failures
- Authentication errors
- Validation errors
- Service unavailability
- Rate limiting messages
### 🔄 Streaming Support
BubbleTea automatically detects generator functions and streams responses:
```python
@bt.chatbot
async def streaming_bot(message: str):
    yield bt.Text("Processing your request...")

    # Simulate some async work
    import asyncio
    await asyncio.sleep(1)

    yield bt.Markdown("## Here's your response")
    yield bt.Image("https://example.com/image.jpg")
    yield bt.Text("All done!")
```
### 🔍 User & Conversation Context
Starting with version 0.2.0, BubbleTea chat requests include UUID fields for tracking users and conversations:
```python
@bt.chatbot
def echo_bot(message: str, user_uuid: str = None, conversation_uuid: str = None, user_email: str = None):
    """A simple bot that echoes back the user's message"""
    response = f"You said: {message}"
    if user_uuid:
        response += f"\nYour User UUID: {user_uuid}"
    if conversation_uuid:
        response += f"\nYour Conversation UUID: {conversation_uuid}"
    if user_email:
        response += f"\nYour Email: {user_email}"

    return bt.Text(response)
```
BubbleTea automatically includes these optional parameters in requests when available:
- **user_uuid**: A unique identifier for the user making the request
- **conversation_uuid**: A unique identifier for the conversation/chat session
- **user_email**: The email address of the user making the request
You can use these to:
- Maintain conversation history
- Personalize responses based on user preferences
- Track usage analytics
- Implement stateful conversations
- Provide user-specific features based on email
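For example, here is a hypothetical sketch that keeps per-conversation state keyed on `conversation_uuid` (the in-memory dict is illustrative; a real bot would use a persistent store):

```python
import bubbletea_chat as bt

# Illustrative in-memory store; use a database in production
message_counts = {}

@bt.chatbot
def counting_bot(message: str, conversation_uuid: str = None):
    if conversation_uuid is None:
        return bt.Text("No conversation UUID provided.")
    message_counts[conversation_uuid] = message_counts.get(conversation_uuid, 0) + 1
    return bt.Text(f"Message #{message_counts[conversation_uuid]} in this conversation.")
```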
### 🧵 Thread-based Conversation Support
BubbleTea supports thread-based conversations via LiteLLM's threading capabilities, maintaining conversation context across multiple messages. It uses the OpenAI Assistants API where available, with a fallback for other models.
#### How It Works
1. **Backend Integration**: The backend stores a `thread_id` with each conversation
2. **Thread Creation**: On first message, if no thread exists, the bot creates one
3. **Thread Persistence**: The thread_id is stored in the backend for future messages
4. **Context Maintenance**: All messages in a thread maintain full conversation context
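As a sketch, that flow maps onto the thread methods of the `LLM` class (listed in the API reference below); the `thread["id"]` key is an assumption about the returned dict:

```python
from bubbletea_chat import LLM

llm = LLM(model="gpt-4", assistant_id="asst_xxx")  # assistant_id is optional

def reply_in_thread(message: str, thread_id: str = None):
    # Steps 1-2: create a thread on the first message
    if thread_id is None:
        thread = llm.create_thread()
        thread_id = thread["id"]  # assumed key of the returned dict
    # Steps 3-4: add the message and run the thread with full context
    llm.add_message(thread_id, message)
    response = llm.run_thread(thread_id)
    return response, thread_id  # persist thread_id for the next message
```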
### 💬 Chat History
BubbleTea now supports passing chat history to your bots for context-aware conversations:
```python
from bubbletea_chat import LLM

@bt.chatbot
async def context_aware_bot(message: str, chat_history: list = None):
    """A bot that uses conversation history for context"""
    if chat_history:
        # chat_history is a list of message dictionaries with metadata
        # (up to the 5 most recent user and 5 most recent bot messages)
        yield bt.Text(f"I see we have {len(chat_history)} previous messages in our conversation.")

        # You can use the history to provide contextual responses
        llm = LLM(model="gpt-4")
        context_prompt = f"Based on our conversation history: {chat_history}\n\nUser: {message}"
        response = await llm.acomplete(context_prompt)
        yield bt.Text(response)
    else:
        yield bt.Text("This seems to be the start of our conversation!")
```
### Multiple Bots with Unique Routes
You can create multiple bots in the same application, each with its own unique route:
```python
import bubbletea_chat as bt

# Bot 1: Available at /support
@bt.chatbot("support")
def support_bot(message: str):
    return bt.Text("Welcome to support! How can I help you?")

# Bot 2: Available at /sales
@bt.chatbot("sales")
def sales_bot(message: str):
    return bt.Text("Hi! I'm here to help with sales inquiries.")

# Bot 3: Default route at /chat
@bt.chatbot()
def general_bot(message: str):
    return bt.Text("Hello! I'm the general assistant.")

# This would raise ValueError - duplicate route!
# @bt.chatbot("support")
# def another_support_bot(message: str):
#     yield bt.Text("This won't work!")

if __name__ == "__main__":
    # Serve all registered bots
    bt.run_server(port=8000)
```
**Important Notes:**
- Routes are case-sensitive: `/Bot1` is different from `/bot1`
- Each bot must have a unique route
- The default route is `/chat` if no route is specified
- Routes automatically get a leading `/` if not provided
## Examples
### AI-Powered Bots with LiteLLM
#### OpenAI GPT Bot
```python
import bubbletea_chat as bt
from bubbletea_chat import LLM

@bt.chatbot
async def gpt_assistant(message: str):
    # Make sure to set OPENAI_API_KEY environment variable
    llm = LLM(model="gpt-4")

    # Stream the response
    async for chunk in llm.stream(message):
        yield bt.Text(chunk)
```
#### Claude Bot
```python
@bt.chatbot
async def claude_bot(message: str):
    # Set ANTHROPIC_API_KEY environment variable
    llm = LLM(model="claude-3-sonnet-20240229")

    response = await llm.acomplete(message)
    yield bt.Text(response)
```
#### Gemini Bot
```python
@bt.chatbot
async def gemini_bot(message: str):
    # Set GEMINI_API_KEY environment variable
    llm = LLM(model="gemini/gemini-pro")

    async for chunk in llm.stream(message):
        yield bt.Text(chunk)
```
### Vision-Enabled Bot
Build bots that can analyze images using multimodal LLMs:
```python
from bubbletea_chat import chatbot, Text, Markdown, LLM, ImageInput

@chatbot
async def vision_bot(prompt: str, images: list = None):
    """
    A vision-enabled bot that can analyze images
    """
    llm = LLM(model="gpt-4-vision-preview", max_tokens=1000)

    if images:
        yield Text("I can see you've shared some images. Let me analyze them...")
        response = await llm.acomplete_with_images(prompt, images)
        yield Markdown(response)
    else:
        yield Markdown("""
## 📸 Vision Bot

I can analyze images! Try sending me:
- Screenshots to explain
- Photos to describe
- Diagrams to interpret
- Art to analyze

Just upload an image along with your question!

**Supported formats**: JPEG, PNG, GIF, WebP
        """)
```
**Key Features:**
- Accepts images along with text prompts
- Supports both URL and base64-encoded images
- Works with multiple images at once
- Compatible with various vision models
### Image Generation Bot
Create images from text descriptions:
```python
from bubbletea_chat import chatbot, Text, Markdown, LLM, Image

@chatbot
async def image_generator(prompt: str):
    """
    Generate images from text descriptions
    """
    llm = LLM(model="dall-e-3")  # Default image generation model

    if prompt:
        yield Text(f"🎨 Creating: {prompt}")
        # Generate image from the text prompt
        image_url = await llm.agenerate_image(prompt)
        yield Image(image_url)
        yield Text("✨ Your image is ready!")
    else:
        yield Markdown("""
## 🎨 AI Image Generator

I can create images from your text prompts!

Try prompts like:
- *"A futuristic cityscape at sunset"*
- *"A cute robot playing guitar in a forest"*
- *"An ancient map with fantasy landmarks"*

👉 Just type your description and I'll generate an image for you!
        """)
```
### Simple Echo Bot
```python
import bubbletea_chat as bt

@bt.chatbot
def echo_bot(message: str):
    return bt.Text(f"Echo: {message}")
```
### Multi-Modal Bot
```python
import bubbletea_chat as bt

@bt.chatbot
def multimodal_bot(message: str):
    yield bt.Markdown("# Welcome to the Multi-Modal Bot!")

    yield bt.Text("I can show you different types of content:")

    yield bt.Markdown("""
    - 📝 **Text** messages
    - 🖼️ **Images** with descriptions
    - 📊 **Markdown** formatting
    """)

    yield bt.Image(
        "https://picsum.photos/400/300",
        alt="A random beautiful image"
    )

    yield bt.Text("Pretty cool, right? 😎")
```
### Streaming Bot
```python
import bubbletea_chat as bt
import asyncio

@bt.chatbot
async def streaming_bot(message: str):
    yield bt.Text("Hello! Let me process your message...")
    await asyncio.sleep(1)

    words = message.split()
    yield bt.Text("You said: ")
    for word in words:
        yield bt.Text(f"{word} ")
        await asyncio.sleep(0.3)

    yield bt.Markdown("## Analysis Complete!")
```
## API Reference
### Decorators
- `@bt.chatbot` - Create a chatbot from a function
- `@bt.chatbot(name="custom-name")` - Set a custom bot name
- `@bt.chatbot(stream=False)` - Force non-streaming mode
- `@bt.chatbot("route-name")` - Create a chatbot with a custom URL route (e.g., `/route-name`)
**Route Validation**: Each chatbot must have a unique URL path. If you try to register multiple bots with the same route, a `ValueError` will be raised.
**Optional Parameters**: Your chatbot functions can accept these optional parameters that BubbleTea provides automatically:
```python
from typing import Dict, List, Union

@bt.chatbot
async def my_bot(
    message: str,                                # Required: The user's message
    images: list = None,                         # Optional: List of ImageInput objects
    user_uuid: str = None,                       # Optional: Unique user identifier
    conversation_uuid: str = None,               # Optional: Unique conversation identifier
    user_email: str = None,                      # Optional: User's email address
    chat_history: Union[List[Dict], str] = None  # Optional: Conversation history
):
    # Your bot logic here
    pass
```
### Components
- `bt.Text(content: str)` - Plain text message
- `bt.Image(url: str, alt: str = None)` - Image component
- `bt.Markdown(content: str)` - Markdown formatted text
- `bt.Card(image: Image, text: Optional[str] = None, markdown: Optional[Markdown] = None, card_value: Optional[str] = None)` - A single card component.
- `bt.Cards(cards: List[Card], orient: Literal["wide", "tall"] = "wide")` - A collection of cards.
- `bt.Pill(text: str, pill_value: Optional[str] = None)` - A single pill component.
- `bt.Pills(pills: List[Pill])` - A collection of pill items.
- `bt.Video(url: str)` - Video component
- `bt.Error(title: str, description: Optional[str] = None, code: Optional[str] = None)` - Error message component
### LLM Class
- `LLM(model: str, **kwargs)` - Initialize an LLM client
  - `model`: Any model supported by LiteLLM (e.g., "gpt-4", "claude-3-sonnet-20240229")
  - `assistant_id`: Optional assistant ID for thread-based conversations
  - `**kwargs`: Additional parameters (temperature, max_tokens, etc.)
#### Text Generation Methods:
- `complete(prompt: str, **kwargs) -> str` - Get a completion
- `acomplete(prompt: str, **kwargs) -> str` - Async completion
- `stream(prompt: str, **kwargs) -> AsyncGenerator[str, None]` - Stream a completion
- `with_messages(messages: List[Dict], **kwargs) -> str` - Use full message history
- `astream_with_messages(messages: List[Dict], **kwargs) -> AsyncGenerator[str, None]` - Stream with messages
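For multi-turn prompts, a short sketch of `with_messages` using OpenAI-style role/content dicts:

```python
from bubbletea_chat import LLM

llm = LLM(model="gpt-4")

# Full message history as OpenAI-style role/content dicts
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize BubbleTea in one line."},
]
reply = llm.with_messages(messages)
```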
#### Vision/Image Analysis Methods:
- `complete_with_images(prompt: str, images: List[ImageInput], **kwargs) -> str` - Completion with images
- `acomplete_with_images(prompt: str, images: List[ImageInput], **kwargs) -> str` - Async with images
- `stream_with_images(prompt: str, images: List[ImageInput], **kwargs) -> AsyncGenerator` - Stream with images
#### Image Generation Methods:
- `generate_image(prompt: str, **kwargs) -> str` - Generate image (sync), returns URL
- `agenerate_image(prompt: str, **kwargs) -> str` - Generate image (async), returns URL
#### Thread-based Conversation Methods:
- `create_thread() -> Dict` - Create a new conversation thread
- `add_message(thread_id, content, role="user") -> Dict` - Add message to thread
- `run_thread(thread_id, instructions=None) -> str` - Run thread and get response
- `get_thread_messages(thread_id) -> List[Dict]` - Get all messages in thread
- `get_thread_status(thread_id, run_id) -> str` - Check thread run status
#### Assistant Creation Methods:
- `create_assistant(name, instructions, tools, **kwargs) -> str` - Create assistant (sync)
- `acreate_assistant(name, instructions, tools, **kwargs) -> str` - Create assistant (async)
### ThreadManager Class
A high-level thread management utility that simplifies thread operations:
```python
from bubbletea_chat import ThreadManager

# Initialize manager
manager = ThreadManager(
    assistant_id="asst_xxx",  # Optional: use existing assistant
    model="gpt-4",
    storage_path="threads.json"
)

# Get or create thread for user
thread_id = manager.get_or_create_thread("user_123")

# Add message
manager.add_user_message("user_123", "Hello!")

# Get response
response = manager.get_assistant_response("user_123")
```
Features:
- Automatic thread creation and management
- Persistent thread storage
- User-to-thread mapping
- Async support
- Message history retrieval
- Assistant creation and management
Methods:
- `create_assistant(name, instructions, tools, **kwargs) -> str` - Create assistant
- `async_create_assistant(name, instructions, tools, **kwargs) -> str` - Create assistant (async)
- `get_or_create_thread(user_id) -> str` - Get or create thread for user
- `add_user_message(user_id, message) -> bool` - Add user message
- `get_assistant_response(user_id, instructions=None) -> str` - Get AI response
- `get_thread_messages(user_id) -> List[Dict]` - Get conversation history
- `clear_user_thread(user_id)` - Clear a user's thread
- `clear_all_threads()` - Clear all threads
### ImageInput Class
Represents an image input that can be either a URL or base64 encoded:
```python
from bubbletea_chat import ImageInput

# URL image
ImageInput(url="https://example.com/image.jpg")

# Base64 image
ImageInput(
    base64="iVBORw0KGgoAAAANS...",
    mime_type="image/jpeg"  # Optional
)
```
### Server
```python
if __name__ == "__main__":
    # Run the chatbot server
    bt.run_server(my_bot, port=8000, host="0.0.0.0")
```
- Runs a chatbot server on port 8000, bound to host 0.0.0.0
  - Automatically creates a `/chat` endpoint for your bot
  - The `/chat` endpoint accepts POST requests with chat messages
  - Supports both streaming and non-streaming responses
## Environment Variables
To use different LLM providers, set the appropriate API keys:
```bash
# OpenAI
export OPENAI_API_KEY=your-openai-api-key
# Anthropic Claude
export ANTHROPIC_API_KEY=your-anthropic-api-key
# Google Gemini
export GEMINI_API_KEY=your-gemini-api-key
# Or use a .env file with python-dotenv
```
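If you go the `.env` route, a minimal sketch with python-dotenv (assumes the package is installed):

```python
from dotenv import load_dotenv

# Load key=value pairs from a .env file into the process environment
load_dotenv()
```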
For more providers and configuration options, see the [LiteLLM documentation](https://docs.litellm.ai/docs/providers).
## Custom LLM Integration
While BubbleTea provides the `LLM` class for convenience, you can also use LiteLLM directly in your bots for more control:
```python
import bubbletea_chat as bt
from litellm import acompletion

@bt.chatbot
async def custom_llm_bot(message: str):
    # Direct LiteLLM usage
    response = await acompletion(
        model="gpt-4",
        messages=[{"role": "user", "content": message}],
        temperature=0.7,
        # Add any custom parameters
        api_base="https://your-custom-endpoint.com",  # Custom endpoints
        custom_llm_provider="openai",  # Custom providers
    )

    yield bt.Text(response.choices[0].message.content)
```
This approach gives you access to:
- Custom API endpoints
- Advanced parameters
- Direct response handling
- Custom error handling
- Any LiteLLM feature
## Testing Your Bot
Start your bot:
```bash
python my_bot.py
```
Your bot will automatically create a `/chat` endpoint that accepts POST requests. This is the standard endpoint for all BubbleTea chatbots.
Test with curl:
```bash
# Text only
curl -X POST "http://localhost:8000/chat" \
  -H "Content-Type: application/json" \
  -d '{"type": "user", "message": "Hello bot!"}'

# Test config endpoint
curl http://localhost:8000/config

# For bots on custom paths
# If your bot runs at /pillsbot:
curl -X POST "http://localhost:8000/pillsbot" \
  -H "Content-Type: application/json" \
  -d '{"type": "user", "message": "Hello bot!"}'

# Config endpoint for custom path bot
curl http://localhost:8000/pillsbot/config

# With image URL
curl -X POST "http://localhost:8000/chat" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "user",
    "message": "What is in this image?",
    "images": [{"url": "https://example.com/image.jpg"}]
  }'

# With base64 image
curl -X POST "http://localhost:8000/chat" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "user",
    "message": "Describe this",
    "images": [{"base64": "iVBORw0KGgoAAAANS...", "mime_type": "image/png"}]
  }'
```
## 🌐 CORS Support
BubbleTea includes automatic CORS (Cross-Origin Resource Sharing) support out of the box! This means your bots will work seamlessly with web frontends without any additional configuration.
### Default Behavior
```python
# CORS is enabled by default with permissive settings for development
bt.run_server(my_bot, port=8000)
```
### Custom CORS Configuration
```python
# For production - restrict to specific origins
bt.run_server(my_bot, port=8000, cors_config={
    "allow_origins": ["https://bubbletea.app", "https://yourdomain.com"],
    "allow_credentials": True,
    "allow_methods": ["GET", "POST"],
    "allow_headers": ["Content-Type", "Authorization"]
})
```
### Disable CORS
```python
# Not recommended, but possible
bt.run_server(my_bot, port=8000, cors=False)
```
## Contributing
We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
## License
MIT License - see [LICENSE](LICENSE) for details.