| Field | Value |
| --- | --- |
| Name | nebula-client |
| Version | 0.1.4 |
| Summary | Official Python SDK for Nebula Cloud API - Memory Focus |
| Author email | Nebula Cloud <support@nebulacloud.app> |
| License | Proprietary |
| Requires Python | >=3.8 |
| Requirements | None recorded |
| Upload time | 2025-08-30 03:38:21 |
# Nebula Client SDK
A Python SDK for interacting with the Nebula Cloud API, providing a clean interface to Nebula's memory and retrieval capabilities.
## Overview
This SDK provides a unified interface for storing and retrieving memories in Nebula Cloud, with support for both conversational and document-based memory storage. The SDK uses the documents endpoint for optimal performance and supports both synchronous and asynchronous operations.
## Key Features
- **Unified Memory Storage**: Single `store_memory()` and `store_memories()` methods for all memory types
- **Conversational Memory**: Built-in support for conversation messages with role-based storage
- **Document Storage**: Efficient text and JSON document storage with automatic chunking
- **Collection Management**: Full CRUD operations for collections (clusters)
- **Deduplication**: Deterministic document IDs based on content hashing
- **Flexible Metadata**: Rich metadata support for memories and collections
- **Search & Retrieval**: Advanced search capabilities with filtering and ranking
- **Async Support**: Full async client with identical API surface
## Installation
```bash
pip install nebula-client
```
## Quick Start
### Basic Setup
```python
from nebula_client import NebulaClient, Memory

# Initialize client
client = NebulaClient(
    api_key="your-api-key",  # or set NEBULA_API_KEY env var
    base_url="https://api.nebulacloud.app"
)
```
### Collection Management
```python
# Create a collection
collection = client.create_cluster(
    name="my_conversations",
    description="Collection for storing conversation memories"
)

# List collections
collections = client.list_clusters()

# Get specific collection
collection = client.get_cluster(collection.id)

# Update collection
updated_collection = client.update_cluster(
    collection.id,
    name="updated_name",
    description="Updated description"
)

# Delete collection
client.delete_cluster(collection.id)
```
### Storing Memories
#### Individual Memory
```python
# Store a single text document
memory = Memory(
    cluster_id=collection.id,
    content="This is an important memory about machine learning.",
    metadata={"topic": "machine_learning", "importance": "high"}
)

doc_id = client.store_memory(memory)
print(f"Stored document with ID: {doc_id}")
```
#### Conversation Messages
```python
# Store a conversation message
message = Memory(
    cluster_id=collection.id,
    content="What is machine learning?",
    role="user",
    metadata={"timestamp": "2024-01-15T10:30:00Z"}
)

conv_id = client.store_memory(message)
print(f"Stored in conversation: {conv_id}")

# Add a response to the same conversation
response = Memory(
    cluster_id=collection.id,
    content="Machine learning is a subset of AI that enables computers to learn from data.",
    role="assistant",
    parent_id=conv_id,  # Link to existing conversation
    metadata={"timestamp": "2024-01-15T10:30:05Z"}
)
client.store_memory(response)
```
#### Batch Storage
```python
# Store multiple memories at once
memories = [
    Memory(cluster_id=collection.id, content="First memory", metadata={"type": "note"}),
    Memory(cluster_id=collection.id, content="Second memory", metadata={"type": "note"}),
    Memory(cluster_id=collection.id, content="User question", role="user"),
    Memory(cluster_id=collection.id, content="Assistant response", role="assistant", parent_id="conv_123"),
]

ids = client.store_memories(memories)
print(f"Stored {len(ids)} memories")
```
### Retrieving Memories
```python
# List memories from a collection
memories = client.list_memories(cluster_ids=[collection.id], limit=10)
for memory in memories:
    print(f"ID: {memory.id}")
    print(f"Content: {memory.content}")
    print(f"Metadata: {memory.metadata}")

# Get specific memory
memory = client.get_memory("memory_id_here")
```
### Search Across Memories
```python
# Search across collections
results = client.search(
    query="machine learning",
    cluster_ids=[collection.id],
    limit=10,
)

for result in results:
    print(f"Found: {result.content[:100]}...")
    print(f"Score: {result.score}")
```
## Async Client
The SDK also provides an async client with identical functionality:
```python
import asyncio
from nebula_client import AsyncNebulaClient, Memory

async def main():
    async with AsyncNebulaClient(api_key="your-api-key") as client:
        # Store memory
        memory = Memory(cluster_id="cluster_123", content="Async memory")
        doc_id = await client.store_memory(memory)

        # Search
        results = await client.search("query", cluster_ids=["cluster_123"])

asyncio.run(main())
```
## API Reference
### Core Methods
#### Collection Management
- `create_cluster(name, description=None, metadata=None)` - Create a new collection
- `get_cluster(cluster_id)` - Get collection details
- `list_clusters(limit=100, offset=0)` - List all collections
- `update_cluster(cluster_id, name=None, description=None, metadata=None)` - Update collection
- `delete_cluster(cluster_id)` - Delete collection
#### Memory Storage
- `store_memory(memory)` - Store a single memory (conversation or document)
- `store_memories(memories)` - Store multiple memories with batching
- `delete(memory_id)` - Delete a memory
#### Memory Retrieval
- `list_memories(cluster_ids, limit=100, offset=0)` - List memories from collections
- `get_memory(memory_id)` - Get specific memory
- `search(query, cluster_ids, limit=10, retrieval_type=RetrievalType.ADVANCED, filters=None, search_settings=None)` - Search memories
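The search signature above exposes `retrieval_type`, `filters`, and `search_settings`, which the Quick Start does not demonstrate. A minimal sketch, assuming `RetrievalType` is importable from `nebula_client` and that `filters` accepts a plain metadata dictionary (neither is confirmed by this README):

```python
from nebula_client import NebulaClient, RetrievalType  # RetrievalType import path is assumed

client = NebulaClient(api_key="your-api-key")

# Hypothetical filter shape: a flat metadata dictionary.
results = client.search(
    query="machine learning",
    cluster_ids=["cluster_123"],
    limit=5,
    retrieval_type=RetrievalType.ADVANCED,
    filters={"topic": "machine_learning"},
)
```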
### Data Models
#### Memory (Write Model)
```python
@dataclass
class Memory:
    cluster_id: str
    content: str
    role: Optional[str] = None        # user, assistant, or custom
    parent_id: Optional[str] = None   # conversation_id for messages
    metadata: Dict[str, Any] = field(default_factory=dict)
```
**Behavior:**
- `role` present → conversation message
  - `parent_id` used as conversation_id if provided; else a new conversation is created
  - Returns conversation_id
- `role` absent → text/JSON document
  - Content is stored as raw text
  - Returns document_id
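To make the two write paths concrete, here is a compact sketch that mirrors the Quick Start, reusing a `client` initialized as shown there (return-value semantics follow the description above):

```python
# role present -> conversation message; store_memory returns a conversation_id
conv_id = client.store_memory(
    Memory(cluster_id="cluster_123", content="Hello", role="user")
)

# role absent -> text document; store_memory returns a document_id
doc_id = client.store_memory(
    Memory(cluster_id="cluster_123", content="A plain note")
)
```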
#### MemoryResponse (Read Model)
```python
@dataclass
class MemoryResponse:
    id: str
    content: Optional[str] = None
    chunks: Optional[List[str]] = None
    metadata: Dict[str, Any] = field(default_factory=dict)
    cluster_ids: List[str] = field(default_factory=list)
    created_at: Optional[datetime] = None
    updated_at: Optional[datetime] = None
```
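As an illustration of the read model, a sketch of inspecting a `MemoryResponse` returned by `get_memory()`; whether `content` or `chunks` is populated depends on how the memory was stored, which this README does not pin down:

```python
mem = client.get_memory("memory_id_here")

if mem.content is not None:
    print(mem.content)
elif mem.chunks:
    # Chunked documents expose their text as a list of chunk strings.
    print("\n".join(mem.chunks))

print(mem.cluster_ids, mem.created_at)
```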
#### Cluster
```python
@dataclass
class Cluster:
    id: str
    name: str
    description: Optional[str]
    metadata: Dict[str, Any]
    created_at: Optional[datetime]
    updated_at: Optional[datetime]
    memory_count: int
    owner_id: Optional[str]
```
#### SearchResult
```python
@dataclass
class SearchResult:
    id: str
    score: float
    metadata: Dict[str, Any]
    source: Optional[str]
    content: Optional[str] = None

    # Graph fields for graph search results
    graph_result_type: Optional[GraphSearchResultType] = None
    graph_entity: Optional[GraphEntityResult] = None
    graph_relationship: Optional[GraphRelationshipResult] = None
    graph_community: Optional[GraphCommunityResult] = None
```
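A sketch of consuming mixed results, assuming the graph fields are populated only for graph-type hits (as the comment in the dataclass suggests):

```python
for result in client.search("machine learning", cluster_ids=["cluster_123"]):
    if result.graph_result_type is not None:
        # Graph hit: one of graph_entity / graph_relationship / graph_community is set.
        print("graph hit:", result.graph_result_type, result.score)
    else:
        print("text hit:", (result.content or "")[:80], result.score)
```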
## Key Changes from Previous Version
### 1. Unified Write APIs
The SDK now provides unified methods for storing memories:
- `store_memory()` - Single method for both conversations and documents
- `store_memories()` - Batch storage with automatic grouping
- Removed legacy `store()` method
### 2. Memory Model Separation
- `Memory` - Input model for write operations
- `MemoryResponse` - Output model for read operations
- Clear separation of concerns between storage and retrieval
### 3. Conversation Support
Built-in conversation handling:
- Messages with `role` are stored as conversation messages
- Automatic conversation creation and management
- Support for multi-turn conversations
### 4. Deterministic Document IDs
Documents are created with deterministic IDs based on content hashing:
- Prevents duplicate storage of identical content
- Enables idempotent operations
- Improves data consistency
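Assuming the content-hashing behavior described above, storing the same document twice should be idempotent; a sketch (the equality check reflects this section's claim, not an independently verified contract):

```python
note = "Quarterly report summary."

first_id = client.store_memory(Memory(cluster_id="cluster_123", content=note))
second_id = client.store_memory(Memory(cluster_id="cluster_123", content=note))

# Deterministic, content-hashed IDs make the second call a duplicate no-op.
assert first_id == second_id
```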
## Testing
Run the test suite to verify functionality:
```bash
cd py/sdk/nebula_client
pytest tests/ -v
```
The test suite covers:
- Collection management
- Memory storage (individual and batch)
- Memory retrieval
- Search capabilities
- Async client functionality
## Error Handling
The SDK provides specific exception types:
- `NebulaClientException` - General client errors
- `NebulaAuthenticationException` - Authentication failures
- `NebulaRateLimitException` - Rate limiting
- `NebulaValidationException` - Invalid input data
- `NebulaException` - General API errors
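A hedged sketch of catching these exceptions, assuming they are importable from the top-level `nebula_client` package (the README does not show their import path):

```python
from nebula_client import (  # exception import path is assumed
    NebulaClient,
    Memory,
    NebulaAuthenticationException,
    NebulaRateLimitException,
    NebulaValidationException,
    NebulaException,
)

client = NebulaClient(api_key="your-api-key")

try:
    client.store_memory(Memory(cluster_id="cluster_123", content="note"))
except NebulaAuthenticationException:
    print("Check NEBULA_API_KEY or the api_key argument.")
except NebulaRateLimitException:
    print("Rate limited; retry with backoff.")
except NebulaValidationException as exc:
    print(f"Invalid input: {exc}")
except NebulaException as exc:
    print(f"API error: {exc}")
```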
## Examples
See the `examples/` directory for complete usage examples including:
- Basic memory storage and retrieval
- Conversation management
- Search and filtering
- Async client usage