# BubbleTea Python SDK
Build AI chatbots for the BubbleTea platform with simple Python functions.
**Now with LiteLLM support!** 🎉 Easily integrate with OpenAI, Anthropic Claude, Google Gemini, and 100+ other LLMs.
**NEW: Vision & Image Generation!** 📸🎨 Build multimodal bots that can analyze images and generate new ones using AI.
**NEW: User & Conversation Tracking!** 🔍 Chat requests now include `user_uuid` and `conversation_uuid` for better context awareness.
## Installation
```bash
pip install bubbletea-chat
```
## Quick Start
Create a simple chatbot in `my_bot.py`:
```python
import bubbletea_chat as bt

@bt.chatbot
def my_chatbot(message: str):
    # Your bot logic here
    if "image" in message.lower():
        yield bt.Image("https://picsum.photos/400/300")
        yield bt.Text("Here's a random image for you!")
    else:
        yield bt.Text(f"You said: {message}")

if __name__ == "__main__":
    # Run the chatbot server
    bt.run_server(my_chatbot, port=8000, host="0.0.0.0")
```
Run it locally:
```bash
python my_bot.py
```
This will start a server at `http://localhost:8000` with your chatbot available at the `/chat` endpoint.
### Configuration with `@config` Decorator
BubbleTea provides a `@config` decorator to define and expose bot configurations via a dedicated endpoint. This is useful for setting up bot metadata, such as its name, URL, emoji, and initial greeting.
#### Example: Using the `@config` Decorator
```python
import bubbletea_chat as bt

# Define bot configuration
@bt.config()
def get_config():
    return bt.BotConfig(
        name="Weather Bot",
        url="http://localhost:8000",
        is_streaming=True,
        emoji="🌤️",
        initial_text="Hello! I can help you check the weather. Which city would you like to know about?",
        authorization="private",  # Example: set to "private" for restricted access
        authorized_emails=["admin@example.com", "user@example.com"]  # Example: list of authorized emails
    )

# Define the chatbot
@bt.chatbot(name="Weather Bot", stream=True)
def weather_bot(message: str):
    if "new york" in message.lower():
        yield bt.Text("🌤️ New York: Partly cloudy, 72°F")
    else:
        yield bt.Text("Please specify a city to check the weather.")
```
When the bot server is running, the configuration can be accessed at the `/config` endpoint. For example:
```bash
curl http://localhost:8000/config
```
This will return the bot's configuration as a JSON object.
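For example, it might return JSON along these lines (illustrative only; the field names mirror the `BotConfig` above, but the exact shape may vary by SDK version):
```json
{
  "name": "Weather Bot",
  "url": "http://localhost:8000",
  "is_streaming": true,
  "emoji": "🌤️",
  "initial_text": "Hello! I can help you check the weather. Which city would you like to know about?",
  "authorization": "private",
  "authorized_emails": ["admin@example.com", "user@example.com"]
}
```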
#### Dynamic Bot Creation Using `/config`
BubbleTea agents can dynamically create new chatbots by utilizing the `/config` endpoint. For example, if you provide a command like:
```bash
create new bot 'bot-name' with url 'http://example.com'
```
The agent will automatically fetch the configuration from `http://example.com/config` and create a new chatbot based on the metadata defined in the configuration. This allows for seamless integration and creation of new bots without manual setup.
## Features
### 🤖 LiteLLM Integration
BubbleTea includes built-in support for LiteLLM, so you can easily use any LLM provider. LiteLLM powers the backend and supports 100+ models from various providers.
```python
from bubbletea_chat import LLM

# Use any model supported by LiteLLM
llm = LLM(model="gpt-4")
llm = LLM(model="claude-3-sonnet-20240229")
llm = LLM(model="gemini/gemini-pro")

# Simple completion (inside an async chatbot function)
response = await llm.acomplete("Hello, how are you?")

# Streaming
async for chunk in llm.stream("Tell me a story"):
    yield bt.Text(chunk)
```
**📚 Supported Models**: Check out the full list of supported models and providers in the [LiteLLM Providers Documentation](https://docs.litellm.ai/docs/providers).
**💡 DIY Alternative**: You can also implement your own LLM connections using the LiteLLM library directly in your bots if you need more control over the integration.
### 📸 Vision/Image Analysis
BubbleTea supports multimodal interactions! Your bots can receive and analyze images:
```python
from bubbletea_chat import chatbot, Text, LLM, ImageInput
@chatbot
async def vision_bot(prompt: str, images: list = None):
    """A bot that can see and analyze images"""
    if images:
        llm = LLM(model="gpt-4-vision-preview")
        response = await llm.acomplete_with_images(prompt, images)
        yield Text(response)
    else:
        yield Text("Send me an image to analyze!")
```
**Supported Image Formats:**
- URL images: Direct links to images
- Base64 encoded images: For local/uploaded images
- Multiple images: Analyze multiple images at once
**Compatible Vision Models:**
- OpenAI: GPT-4 Vision (`gpt-4-vision-preview`)
- Anthropic: Claude 3 models (Opus, Sonnet, Haiku)
- Google: Gemini Pro Vision (`gemini/gemini-pro-vision`)
- And more vision-enabled models via LiteLLM
### 🎨 Image Generation
Generate images from text descriptions using AI models:
```python
from bubbletea_chat import chatbot, Image, LLM
@chatbot
async def art_bot(prompt: str):
    """Generate images from descriptions"""
    llm = LLM(model="dall-e-3")  # or any image generation model

    # Generate an image
    image_url = await llm.agenerate_image(prompt)
    yield Image(image_url)
```
**Image Generation Features:**
- Text-to-image generation
- Support for DALL-E, Stable Diffusion, and other models
- Customizable parameters (size, quality, style), as sketched below
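A minimal sketch of passing generation parameters as keyword arguments (inside an async chatbot function). The `size` and `quality` values follow DALL·E 3 conventions and are assumptions about what your chosen model accepts:
```python
from bubbletea_chat import LLM

llm = LLM(model="dall-e-3")
# Keyword arguments are forwarded to the underlying model;
# the exact supported options depend on the provider.
image_url = await llm.agenerate_image(
    "A watercolor lighthouse at dawn",
    size="1024x1024",    # assumed DALL-E 3 size option
    quality="standard",  # assumed quality option
)
```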
### 📦 Components
BubbleTea supports rich components for building engaging chatbot experiences:
- **Text**: Plain text messages
- **Image**: Images with optional alt text
- **Markdown**: Rich formatted text
- **Card**: A single card component with an image and optional text/markdown.
- **Cards**: A collection of cards displayed in a layout.
- **Pill**: A single pill component for displaying text.
- **Pills**: A collection of pill items.
#### Card Component Example
```python
from bubbletea_chat import chatbot, Card, Image, Text
@chatbot
async def card_bot(message: str):
    yield Text("Here's a card for you:")
    yield Card(
        image=Image(url="https://picsum.photos/id/237/200/300", alt="A dog"),
        text="This is a dog card.",
        card_value="dog_card_clicked"
    )
```
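To show several cards at once, wrap them in a `Cards` collection (documented in the API reference below). The second image URL here is just another placeholder photo:
```python
from bubbletea_chat import chatbot, Card, Cards, Image, Text

@chatbot
async def cards_bot(message: str):
    yield Text("Pick a card:")
    yield Cards(
        cards=[
            Card(
                image=Image(url="https://picsum.photos/id/237/200/300", alt="A dog"),
                text="Dog",
                card_value="dog_selected",
            ),
            Card(
                image=Image(url="https://picsum.photos/id/1025/200/300", alt="Another dog"),
                text="Another dog",
                card_value="other_dog_selected",
            ),
        ],
        orient="wide",  # or "tall"
    )
```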
#### Pills Component Example
```python
from bubbletea_chat import chatbot, Pill, Pills, Text
@chatbot
async def pills_bot(message: str):
    yield Text("Choose your favorite fruit:")
    yield Pills(pills=[
        Pill(text="Apple", pill_value="apple_selected"),
        Pill(text="Banana", pill_value="banana_selected"),
        Pill(text="Orange", pill_value="orange_selected")
    ])
```
### 🔄 Streaming Support
BubbleTea automatically detects generator functions and streams responses:
```python
@bt.chatbot
async def streaming_bot(message: str):
    yield bt.Text("Processing your request...")

    # Simulate some async work
    import asyncio
    await asyncio.sleep(1)

    yield bt.Markdown("## Here's your response")
    yield bt.Image("https://example.com/image.jpg")
    yield bt.Text("All done!")
```
### 🔍 User & Conversation Context
Starting with version 0.2.0, BubbleTea chat requests include UUID fields for tracking users and conversations:
```python
@bt.chatbot
def echo_bot(message: str, user_uuid: str = None, conversation_uuid: str = None, user_email: str = None):
    """A simple bot that echoes back the user's message"""
    response = f"You said: {message}"
    if user_uuid:
        response += f"\nYour User UUID: {user_uuid}"
    if conversation_uuid:
        response += f"\nYour Conversation UUID: {conversation_uuid}"
    if user_email:
        response += f"\nYour Email: {user_email}"

    return bt.Text(response)
```
BubbleTea automatically includes these optional parameters in requests when available:
- **user_uuid**: A unique identifier for the user making the request
- **conversation_uuid**: A unique identifier for the conversation/chat session
- **user_email**: The email address of the user making the request
You can use these to:
- Maintain conversation history
- Personalize responses based on user preferences
- Track usage analytics
- Implement stateful conversations (see the sketch after this list)
- Provide user-specific features based on email
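A minimal sketch of per-user state, assuming a simple in-memory dict (`visit_counts` and the counting logic are illustrative, not part of the SDK; a production bot would use a database):
```python
import bubbletea_chat as bt

visit_counts = {}  # hypothetical in-memory store, keyed by user_uuid

@bt.chatbot
def stateful_bot(message: str, user_uuid: str = None):
    if user_uuid:
        # Count messages per user across requests in this process
        visit_counts[user_uuid] = visit_counts.get(user_uuid, 0) + 1
        yield bt.Text(f"This is message #{visit_counts[user_uuid]} from you.")
    yield bt.Text(f"You said: {message}")
```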
### 💬 Chat History
BubbleTea now supports passing chat history to your bots for context-aware conversations:
```python
from bubbletea_chat import LLM

@bt.chatbot
async def context_aware_bot(message: str, chat_history: list = None):
    """A bot that uses conversation history for context"""
    if chat_history:
        # chat_history is a list of message dictionaries with metadata,
        # covering up to the last 5 user messages and 5 bot messages
        yield bt.Text(f"I see we have {len(chat_history)} previous messages in our conversation.")

        # You can use the history to provide contextual responses
        llm = LLM(model="gpt-4")
        context_prompt = f"Based on our conversation history: {chat_history}\n\nUser: {message}"
        response = await llm.acomplete(context_prompt)
        yield bt.Text(response)
    else:
        yield bt.Text("This seems to be the start of our conversation!")
```
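If you prefer the message-list API, you can map the history into role/content messages and stream with `astream_with_messages`. A sketch assuming each history entry carries `role` and `content` keys (the exact schema of `chat_history` entries may differ):
```python
import bubbletea_chat as bt
from bubbletea_chat import LLM

@bt.chatbot
async def history_llm_bot(message: str, chat_history: list = None):
    llm = LLM(model="gpt-4")
    # Assumed history entry shape: {"role": "user" | "assistant", "content": "..."}
    messages = [
        {"role": m.get("role", "user"), "content": m.get("content", "")}
        for m in (chat_history or [])
    ]
    messages.append({"role": "user", "content": message})
    async for chunk in llm.astream_with_messages(messages):
        yield bt.Text(chunk)
```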
## Examples
### AI-Powered Bots with LiteLLM
#### OpenAI GPT Bot
```python
import bubbletea_chat as bt
from bubbletea_chat import LLM

@bt.chatbot
async def gpt_assistant(message: str):
    # Make sure to set the OPENAI_API_KEY environment variable
    llm = LLM(model="gpt-4")

    # Stream the response
    async for chunk in llm.stream(message):
        yield bt.Text(chunk)
```
#### Claude Bot
```python
@bt.chatbot
async def claude_bot(message: str):
    # Set the ANTHROPIC_API_KEY environment variable
    llm = LLM(model="claude-3-sonnet-20240229")

    response = await llm.acomplete(message)
    yield bt.Text(response)
```
#### Gemini Bot
```python
@bt.chatbot
async def gemini_bot(message: str):
    # Set the GEMINI_API_KEY environment variable
    llm = LLM(model="gemini/gemini-pro")

    async for chunk in llm.stream(message):
        yield bt.Text(chunk)
```
### Vision-Enabled Bot
Build bots that can analyze images using multimodal LLMs:
```python
from bubbletea_chat import chatbot, Text, Markdown, LLM, ImageInput
@chatbot
async def vision_bot(prompt: str, images: list = None):
    """
    A vision-enabled bot that can analyze images
    """
    llm = LLM(model="gpt-4-vision-preview", max_tokens=1000)

    if images:
        yield Text("I can see you've shared some images. Let me analyze them...")
        response = await llm.acomplete_with_images(prompt, images)
        yield Markdown(response)
    else:
        yield Markdown("""
## 📸 Vision Bot

I can analyze images! Try sending me:
- Screenshots to explain
- Photos to describe
- Diagrams to interpret
- Art to analyze

Just upload an image along with your question!

**Supported formats**: JPEG, PNG, GIF, WebP
        """)
```
**Key Features:**
- Accepts images along with text prompts
- Supports both URL and base64-encoded images
- Works with multiple images at once
- Compatible with various vision models
### Image Generation Bot
Create images from text descriptions:
```python
from bubbletea_chat import chatbot, Text, Markdown, LLM, Image
@chatbot
async def image_generator(prompt: str):
    """
    Generate images from text descriptions
    """
    llm = LLM(model="dall-e-3")  # Default image generation model

    if prompt:
        yield Text(f"🎨 Creating: {prompt}")
        # Generate image from the text prompt
        image_url = await llm.agenerate_image(prompt)
        yield Image(image_url)
        yield Text("✨ Your image is ready!")
    else:
        yield Markdown("""
## 🎨 AI Image Generator

I can create images from your text prompts!

Try prompts like:
- *"A futuristic cityscape at sunset"*
- *"A cute robot playing guitar in a forest"*
- *"An ancient map with fantasy landmarks"*

👉 Just type your description and I'll generate an image for you!
        """)
```
### Simple Echo Bot
```python
import bubbletea_chat as bt
@bt.chatbot
def echo_bot(message: str):
    return bt.Text(f"Echo: {message}")
```
### Multi-Modal Bot
```python
import bubbletea_chat as bt
@bt.chatbot
def multimodal_bot(message: str):
    yield bt.Markdown("# Welcome to the Multi-Modal Bot!")

    yield bt.Text("I can show you different types of content:")

    yield bt.Markdown("""
    - 📝 **Text** messages
    - 🖼️ **Images** with descriptions
    - 📊 **Markdown** formatting
    """)

    yield bt.Image(
        "https://picsum.photos/400/300",
        alt="A random beautiful image"
    )

    yield bt.Text("Pretty cool, right? 😎")
```
### Streaming Bot
```python
import bubbletea_chat as bt
import asyncio
@bt.chatbot
async def streaming_bot(message: str):
    yield bt.Text("Hello! Let me process your message...")
    await asyncio.sleep(1)

    words = message.split()
    yield bt.Text("You said: ")
    for word in words:
        yield bt.Text(f"{word} ")
        await asyncio.sleep(0.3)

    yield bt.Markdown("## Analysis Complete!")
```
## API Reference
### Decorators
- `@bt.chatbot` - Create a chatbot from a function
- `@bt.chatbot(name="custom-name")` - Set a custom bot name
- `@bt.chatbot(stream=False)` - Force non-streaming mode
**Optional Parameters**: Your chatbot functions can accept these optional parameters that BubbleTea provides automatically:
```python
from typing import Dict, List, Union

@bt.chatbot
async def my_bot(
    message: str,                                # Required: the user's message
    images: list = None,                         # Optional: list of ImageInput objects
    user_uuid: str = None,                       # Optional: unique user identifier
    conversation_uuid: str = None,               # Optional: unique conversation identifier
    user_email: str = None,                      # Optional: user's email address
    chat_history: Union[List[Dict], str] = None  # Optional: conversation history
):
    # Your bot logic here
    pass
```
### Components
- `bt.Text(content: str)` - Plain text message
- `bt.Image(url: str, alt: str = None)` - Image component
- `bt.Markdown(content: str)` - Markdown formatted text
- `bt.Card(image: Image, text: Optional[str] = None, markdown: Optional[Markdown] = None, card_value: Optional[str] = None)` - A single card component.
- `bt.Cards(cards: List[Card], orient: Literal["wide", "tall"] = "wide")` - A collection of cards.
- `bt.Pill(text: str, pill_value: Optional[str] = None)` - A single pill component.
- `bt.Pills(pills: List[Pill])` - A collection of pill items.
### LLM Class
- `LLM(model: str, **kwargs)` - Initialize an LLM client
- `model`: Any model supported by LiteLLM (e.g., "gpt-4", "claude-3-sonnet-20240229")
- `**kwargs`: Additional parameters (temperature, max_tokens, etc.)
#### Text Generation Methods:
- `complete(prompt: str, **kwargs) -> str` - Get a completion
- `acomplete(prompt: str, **kwargs) -> str` - Async completion
- `stream(prompt: str, **kwargs) -> AsyncGenerator[str, None]` - Stream a completion
- `with_messages(messages: List[Dict], **kwargs) -> str` - Use full message history (see example below)
- `astream_with_messages(messages: List[Dict], **kwargs) -> AsyncGenerator[str, None]` - Stream with messages
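For instance, the message-based methods take OpenAI-style role/content dictionaries (the system prompt here is purely illustrative):
```python
from bubbletea_chat import LLM

llm = LLM(model="gpt-4", temperature=0.7)

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize LiteLLM in one sentence."},
]

reply = llm.with_messages(messages)  # synchronous call with full history
```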
#### Vision/Image Analysis Methods:
- `complete_with_images(prompt: str, images: List[ImageInput], **kwargs) -> str` - Completion with images
- `acomplete_with_images(prompt: str, images: List[ImageInput], **kwargs) -> str` - Async with images
- `stream_with_images(prompt: str, images: List[ImageInput], **kwargs) -> AsyncGenerator` - Stream with images
#### Image Generation Methods:
- `generate_image(prompt: str, **kwargs) -> str` - Generate image (sync), returns URL
- `agenerate_image(prompt: str, **kwargs) -> str` - Generate image (async), returns URL
### ImageInput Class
Represents an image input that can be either a URL or base64 encoded:
```python
from bubbletea_chat import ImageInput

# URL image
ImageInput(url="https://example.com/image.jpg")

# Base64 image
ImageInput(
    base64="iVBORw0KGgoAAAANS...",
    mime_type="image/jpeg"  # Optional
)
```
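These objects plug straight into the vision methods above; for example (inside an async chatbot function):
```python
from bubbletea_chat import LLM, ImageInput

llm = LLM(model="gpt-4-vision-preview")
images = [
    ImageInput(url="https://example.com/photo.jpg"),
    ImageInput(base64="iVBORw0KGgoAAAANS...", mime_type="image/png"),
]
response = await llm.acomplete_with_images("Compare these images", images)
```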
### Server
```python
if __name__ == "__main__":
    # Run the chatbot server
    bt.run_server(my_bot, port=8000, host="0.0.0.0")
```
- Runs a chatbot server on port 8000 and binds to host 0.0.0.0
- Automatically creates a `/chat` endpoint for your bot
- The `/chat` endpoint accepts POST requests with chat messages
- Supports both streaming and non-streaming responses
## Environment Variables
To use different LLM providers, set the appropriate API keys:
```bash
# OpenAI
export OPENAI_API_KEY=your-openai-api-key
# Anthropic Claude
export ANTHROPIC_API_KEY=your-anthropic-api-key
# Google Gemini
export GEMINI_API_KEY=your-gemini-api-key
# Or use a .env file with python-dotenv
```
For more providers and configuration options, see the [LiteLLM documentation](https://docs.litellm.ai/docs/providers).
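If you keep keys in a `.env` file, load them before creating the `LLM` client; a minimal sketch using the `python-dotenv` package:
```python
from dotenv import load_dotenv

load_dotenv()  # reads OPENAI_API_KEY etc. from .env into the environment
```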
## Custom LLM Integration
While BubbleTea provides the `LLM` class for convenience, you can also use LiteLLM directly in your bots for more control:
```python
import bubbletea_chat as bt
from litellm import acompletion
@bt.chatbot
async def custom_llm_bot(message: str):
    # Direct LiteLLM usage
    response = await acompletion(
        model="gpt-4",
        messages=[{"role": "user", "content": message}],
        temperature=0.7,
        # Add any custom parameters
        api_base="https://your-custom-endpoint.com",  # Custom endpoints
        custom_llm_provider="openai",                 # Custom providers
    )

    yield bt.Text(response.choices[0].message.content)
```
This approach gives you access to:
- Custom API endpoints
- Advanced parameters
- Direct response handling
- Custom error handling (illustrated below)
- Any LiteLLM feature
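For instance, custom error handling might look like this (the fallback message and the broad `except` are illustrative; tighten the exception types for production):
```python
import bubbletea_chat as bt
from litellm import acompletion

@bt.chatbot
async def resilient_bot(message: str):
    try:
        response = await acompletion(
            model="gpt-4",
            messages=[{"role": "user", "content": message}],
        )
        yield bt.Text(response.choices[0].message.content)
    except Exception as exc:  # catch provider/network errors
        yield bt.Text(f"Sorry, the model call failed: {exc}")
```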
## Testing Your Bot
Start your bot:
```bash
python my_bot.py
```
Your bot will automatically create a `/chat` endpoint that accepts POST requests. This is the standard endpoint for all BubbleTea chatbots.
Test with curl:
```bash
# Text only
curl -X POST "http://localhost:8000/chat" \
  -H "Content-Type: application/json" \
  -d '{"type": "user", "message": "Hello bot!"}'

# With image URL
curl -X POST "http://localhost:8000/chat" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "user",
    "message": "What is in this image?",
    "images": [{"url": "https://example.com/image.jpg"}]
  }'

# With base64 image
curl -X POST "http://localhost:8000/chat" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "user",
    "message": "Describe this",
    "images": [{"base64": "iVBORw0KGgoAAAANS...", "mime_type": "image/png"}]
  }'
```
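Or test from Python, here sketched with the third-party `requests` package against a non-streaming bot (the payload mirrors the curl examples above):
```python
import requests

resp = requests.post(
    "http://localhost:8000/chat",
    json={"type": "user", "message": "Hello bot!"},
)
print(resp.status_code, resp.json())
```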
## 🌐 CORS Support
BubbleTea includes automatic CORS (Cross-Origin Resource Sharing) support out of the box! This means your bots will work seamlessly with web frontends without any additional configuration.
### Default Behavior
```python
# CORS is enabled by default with permissive settings for development
bt.run_server(my_bot, port=8000)
```
### Custom CORS Configuration
```python
# For production - restrict to specific origins
bt.run_server(my_bot, port=8000, cors_config={
    "allow_origins": ["https://bubbletea.app", "https://yourdomain.com"],
    "allow_credentials": True,
    "allow_methods": ["GET", "POST"],
    "allow_headers": ["Content-Type", "Authorization"]
})
```
### Disable CORS
```python
# Not recommended, but possible
bt.run_server(my_bot, port=8000, cors=False)
```
## Contributing
We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
## License
MIT License - see [LICENSE](LICENSE) for details.