backboard-sdk

Name: backboard-sdk
Version: 1.4.1
Home page: https://github.com/backboard/backboard-python-sdk
Summary: Python SDK for the Backboard API - Build conversational AI applications with persistent memory and intelligent document processing
Upload time: 2025-10-20 17:31:16
Author: Backboard
Requires Python: >=3.8
License: MIT
Keywords: ai, api, sdk, conversational, chatbot, assistant, documents, rag
# Backboard Python SDK

A developer-friendly Python SDK for the Backboard API. Build conversational AI applications with persistent memory and intelligent document processing.

> New to Backboard? We include $10 in free credits to get you started, and we support 1,800+ LLMs across major providers.

## New in v1.4.1

- **Memory Support**: Add persistent memory to assistants with automatic context retrieval
- **Memory Modes**: Control memory behavior with the `"Auto"`, `"Readonly"`, and `"off"` modes
- **Memory Statistics**: Track usage and limits

## Installation

```bash
pip install backboard-sdk
```


## Quick Start

```python
import asyncio
from backboard import BackboardClient

async def main():
    client = BackboardClient(api_key="your_api_key_here")

    assistant = await client.create_assistant(
        name="Support Bot",
        description="A helpful customer support assistant",
    )

    thread = await client.create_thread(assistant.assistant_id)

    response = await client.add_message(
        thread_id=thread.thread_id,
        content="Hello! Can you help me with my account?",
        llm_provider="openai",
        model_name="gpt-4o",
        stream=False,
    )

    print(response.latest_message.content)

    # Streaming
    async for event in await client.add_message(
        thread_id=thread.thread_id,
        content="Stream me a short response",
        stream=True,
    ):
        if event.get("type") == "content_streaming":
            print(event.get("content", ""), end="", flush=True)

if __name__ == "__main__":
    asyncio.run(main())
```

## Features

### Memory (NEW in v1.4.0)
- **Persistent Memory**: Store and retrieve information across conversations
- **Automatic Context**: Enable memory to automatically search and use relevant context
- **Manual Management**: Full control with add, update, delete, and list operations
- **Memory Modes**: Auto (search + write), Readonly (search only), or off

### Assistants
- Create, list, get, update, and delete assistants
- Configure custom tools and capabilities
- Upload documents for assistant-level context

### Threads
- Create conversation threads under assistants
- Maintain persistent conversation history
- Support for message attachments

### Documents
- Upload documents to assistants or threads
- Automatic processing and indexing for RAG
- Support for PDF, Office files, text, and more
- Real-time processing status tracking

### Messages
- Send messages with optional file attachments
- Streaming and non-streaming responses
- Tool calling support
- Custom LLM provider and model selection

## API Reference

### Client Initialization

```python
client = BackboardClient(api_key="your_api_key")
# or use as an async context manager
# async with BackboardClient(api_key="your_api_key") as client:
#     ...
```

### Assistants

```python
# Create assistant
assistant = await client.create_assistant(
    name="My Assistant",
    description="Assistant description",
    tools=[tool_definition],  # Optional
)

# List assistants
assistants = await client.list_assistants(skip=0, limit=100)

# Get assistant
assistant = await client.get_assistant(assistant_id)

# Update assistant
assistant = await client.update_assistant(
    assistant_id,
    name="New Name",
    description="New description",
)

# Delete assistant
result = await client.delete_assistant(assistant_id)
```
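The `skip`/`limit` parameters above support manual pagination. As a sketch, a small helper can walk every page of a list endpoint; the page shape (a plain list of items) is an assumption here and may need adapting to the SDK's actual response model:

```python
import asyncio

async def iter_all(list_page, page_size=100):
    """Yield every item from a skip/limit paginated list endpoint.

    `list_page` is any coroutine function accepting skip= and limit=
    (e.g. client.list_assistants or client.list_threads). That each page
    is a plain list is an assumption about the response model.
    """
    skip = 0
    while True:
        page = await list_page(skip=skip, limit=page_size)
        if not page:
            return
        for item in page:
            yield item
        if len(page) < page_size:  # short page: nothing left to fetch
            return
        skip += page_size
```

Usage would look like `async for assistant in iter_all(client.list_assistants): ...`.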

### Threads

```python
# Create thread
thread = await client.create_thread(assistant_id)

# List threads
threads = await client.list_threads(skip=0, limit=100)

# Get thread with messages
thread = await client.get_thread(thread_id)

# Delete thread
result = await client.delete_thread(thread_id)
```

### Messages

```python
# Send message
response = await client.add_message(
    thread_id=thread_id,
    content="Your message here",
    files=["path/to/file.pdf"],  # Optional attachments
    llm_provider="openai",  # Optional
    model_name="gpt-4o",  # Optional
    stream=False,
    memory="Auto",  # Optional: "Auto", "Readonly", or "off" (default)
)

# Streaming messages
async for chunk in await client.add_message(thread_id, content="Hello", stream=True):
    if chunk.get('type') == 'content_streaming':
        print(chunk.get('content', ''), end='', flush=True)
```

### Tool Integration (Simplified in v1.3.3)

#### Tool Definitions
```python
# Use plain JSON objects (no verbose SDK classes needed!)
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"}
                },
                "required": ["location"]
            }
        }
    }
]

assistant = await client.create_assistant(
    name="Weather Assistant",
    tools=tools,
)
```

#### Tool Call Handling
```python
import json

# Enhanced object-oriented access with automatic JSON parsing
response = await client.add_message(
    thread_id=thread_id,
    content="What's the weather in San Francisco?",
    stream=False
)

if response.status == "REQUIRES_ACTION" and response.tool_calls:
    tool_outputs = []
    
    # Process each tool call
    for tc in response.tool_calls:
        if tc.function.name == "get_weather":
            # Get parsed arguments (required parameters are guaranteed by API)
            args = tc.function.parsed_arguments
            location = args["location"]
            
            # Execute your function and format the output
            weather_data = {
                "temperature": "68°F",
                "condition": "Sunny",
                "location": location
            }
            
            tool_outputs.append({
                "tool_call_id": tc.id,
                "output": json.dumps(weather_data)
            })
    
    # Submit the tool outputs back to continue the conversation
    final_response = await client.submit_tool_outputs(
        thread_id=thread_id,
        run_id=response.run_id,
        tool_outputs=tool_outputs
    )
    
    print(final_response.latest_message.content)
```

### Memory

```python
# Add a memory
await client.add_memory(
    assistant_id=assistant_id,
    content="User prefers Python programming",
    metadata={"category": "preference"}
)

# Get all memories
memories = await client.get_memories(assistant_id)
for memory in memories.memories:
    print(f"{memory.id}: {memory.content}")

# Get specific memory
memory = await client.get_memory(assistant_id, memory_id)

# Update memory
await client.update_memory(
    assistant_id=assistant_id,
    memory_id=memory_id,
    content="Updated content"
)

# Delete memory
await client.delete_memory(assistant_id, memory_id)

# Get memory stats
stats = await client.get_memory_stats(assistant_id)
print(f"Total memories: {stats.total_memories}")

# Use memory in conversation
response = await client.add_message(
    thread_id=thread_id,
    content="What do you know about me?",
    memory="Auto"  # Enable memory search and automatic updates
)
```


### Documents

```python
# Upload document to assistant
document = await client.upload_document_to_assistant(
    assistant_id=assistant_id,
    file_path="path/to/document.pdf",
)

# Upload document to thread
document = await client.upload_document_to_thread(
    thread_id=thread_id,
    file_path="path/to/document.pdf",
)

# List assistant documents
documents = await client.list_assistant_documents(assistant_id)

# List thread documents
documents = await client.list_thread_documents(thread_id)

# Get document status
document = await client.get_document_status(document_id)

# Delete document
result = await client.delete_document(document_id)
```

## Error Handling

The SDK includes comprehensive error handling:

```python
from backboard import (
    BackboardAPIError,
    BackboardValidationError,
    BackboardNotFoundError,
    BackboardRateLimitError,
    BackboardServerError,
)

async def demo_err():
    try:
        await client.get_assistant("invalid_id")
    except BackboardNotFoundError:
        print("Assistant not found")
    except BackboardValidationError as e:
        print(f"Validation error: {e}")
    except BackboardAPIError as e:
        print(f"API error: {e}")
```
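`BackboardRateLimitError` in particular is usually transient, so wrapping calls in a retry with exponential backoff is a common pattern. A generic sketch; the exception type is passed in so the helper stays SDK-agnostic, and the delay values are illustrative:

```python
import asyncio
import random

async def retry_on_rate_limit(call, exc_type, retries=3, base_delay=1.0):
    """Await call() and retry with jittered exponential backoff on exc_type.

    call:     zero-argument coroutine function,
              e.g. lambda: client.get_assistant(assistant_id)
    exc_type: the exception class to retry on,
              e.g. BackboardRateLimitError
    """
    for attempt in range(retries + 1):
        try:
            return await call()
        except exc_type:
            if attempt == retries:
                raise  # out of retries; surface the error
            # 1s, 2s, 4s, ... plus jitter to avoid synchronized retries
            await asyncio.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.25))
```

For example: `assistant = await retry_on_rate_limit(lambda: client.get_assistant(assistant_id), BackboardRateLimitError)`.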

## Supported File Types

The SDK supports uploading the following file types:
- PDF files (.pdf)
- Microsoft Office files (.docx, .xlsx, .pptx, .doc, .xls, .ppt)
- Text files (.txt, .csv, .md, .markdown)
- Code files (.py, .js, .html, .css, .xml)
- JSON files (.json, .jsonl)
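As a convenience, the list above can be checked client-side before uploading; the extension set below simply mirrors this list, and the server's accepted set remains the source of truth:

```python
from pathlib import Path

# Extensions from the supported-file-types list above; the server's
# accepted set is authoritative and may change.
SUPPORTED_EXTENSIONS = {
    ".pdf",
    ".docx", ".xlsx", ".pptx", ".doc", ".xls", ".ppt",
    ".txt", ".csv", ".md", ".markdown",
    ".py", ".js", ".html", ".css", ".xml",
    ".json", ".jsonl",
}

def is_supported_upload(path: str) -> bool:
    """Return True if the file's extension is in the documented list."""
    return Path(path).suffix.lower() in SUPPORTED_EXTENSIONS
```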

## Requirements

- Python 3.8+
- httpx >= 0.27.0

## License

MIT License - see LICENSE file for details.

## Support

- Documentation: https://backboard.io/docs
- Email: support@backboard.io

            
