# SVECTOR Python SDK
[](https://pypi.org/project/svector-sdk/)
[](https://www.python.org/)
[](https://opensource.org/licenses/MIT)
[](https://pypi.org/project/svector-sdk/)
**Official Python SDK for accessing SVECTOR APIs.**
SVECTOR develops high-performance AI models and automation solutions, specializing in artificial intelligence, mathematical computing, and computational research. This Python SDK provides programmatic access to SVECTOR's API services, offering intuitive model completions, document processing, and seamless integration with SVECTOR's advanced AI systems (e.g., Spec-3, Spec-3-Turbo, Theta-35).
The library includes type hints for request parameters and response fields, and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx) and [requests](https://github.com/psf/requests).
## Quick Start
```bash
pip install svector-sdk
```
```python
from svector import SVECTOR
client = SVECTOR(api_key="your-api-key") # or set SVECTOR_API_KEY env var
# Conversational API - just provide instructions and input!
response = client.conversations.create(
model="spec-3-turbo",
instructions="You are a helpful AI assistant that explains complex topics clearly.",
input="What is artificial intelligence?",
)
print(response.output)
```
## Table of Contents
- [Installation](#installation)
- [Authentication](#authentication)
- [Core Features](#core-features)
- [Conversations API (Recommended)](#conversations-api-recommended)
- [Chat Completions API (Advanced)](#chat-completions-api-advanced)
- [Streaming Responses](#streaming-responses)
- [File Management & Document Processing](#file-management--document-processing)
- [Models](#models)
- [Error Handling](#error-handling)
- [Async Support](#async-support)
- [Advanced Configuration](#advanced-configuration)
- [Complete Examples](#complete-examples)
- [Best Practices](#best-practices)
- [Contributing](#contributing)
## Installation
### pip
```bash
pip install svector-sdk
```
### Development Install
```bash
git clone https://github.com/svector-corporation/svector-python
cd svector-python
pip install -e ".[dev]"
```
## Authentication
Get your API key from the [SVECTOR Dashboard](https://www.svector.co.in) and set it as an environment variable:
```bash
export SVECTOR_API_KEY="your-api-key-here"
```
Or pass it directly to the client:
```python
from svector import SVECTOR
client = SVECTOR(api_key="your-api-key-here")
```
## Core Features
- **Conversations API** - Simple instructions + input interface
- **Advanced Chat Completions** - Full control with role-based messages
- **Real-time Streaming** - Server-sent events for live responses
- **File Processing** - Upload and process documents (PDF, DOCX, TXT, etc.)
- **Knowledge Collections** - Organize files for enhanced RAG
- **Type Safety** - Full type hints and IntelliSense support
- **Async Support** - AsyncSVECTOR client for high-performance applications
- **Robust Error Handling** - Comprehensive error types and retry logic
- **Multi-environment** - Works everywhere Python runs
## Conversations API (Recommended)
The **Conversations API** provides a, user-friendly interface. Just provide instructions and input - the SDK handles all the complex role management internally!
### Basic Conversation
```python
from svector import SVECTOR
client = SVECTOR()
response = client.conversations.create(
model="spec-3-turbo",
instructions="You are a helpful assistant that explains things clearly.",
input="What is machine learning?",
temperature=0.7,
max_tokens=200,
)
print(response.output)
print(f"Request ID: {response.request_id}")
print(f"Token Usage: {response.usage}")
```
### Conversation with Context
```python
response = client.conversations.create(
model="spec-3-turbo",
instructions="You are a programming tutor that helps students learn coding.",
input="Can you show me an example?",
context=[
"How do I create a function in Python?",
"You can create a function using the def keyword followed by the function name and parameters..."
],
temperature=0.5,
)
```
### Streaming Conversation
```python
stream = client.conversations.create_stream(
model="spec-3-turbo",
instructions="You are a creative storyteller.",
input="Tell me a short story about robots and humans.",
stream=True,
)
print("Story: ", end="", flush=True)
for event in stream:
if not event.done:
print(event.content, end="", flush=True)
else:
print("\nStory completed!")
```
### Document-based Conversation
```python
# First upload a document
with open("research-paper.pdf", "rb") as f:
file_response = client.files.create(f, purpose="default")
# Then ask questions about it
response = client.conversations.create(
model="spec-3-turbo",
instructions="You are a research assistant that analyzes documents.",
input="What are the key findings in this paper?",
files=[{"type": "file", "id": file_response.file_id}],
)
```
## Chat Completions API (Advanced)
For full control over the conversation structure, use the Chat Completions API with role-based messages:
### Basic Chat
```python
response = client.chat.create(
model="spec-3-turbo",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello, how are you?"}
],
max_tokens=150,
temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```
### Multi-turn Conversation
```python
conversation = [
{"role": "system", "content": "You are a helpful programming assistant."},
{"role": "user", "content": "How do I reverse a string in Python?"},
{"role": "assistant", "content": "You can reverse a string using slicing: string[::-1]"},
{"role": "user", "content": "Can you show me other methods?"}
]
response = client.chat.create(
model="spec-3-turbo",
messages=conversation,
temperature=0.5,
)
```
### Developer Role (System-level Instructions)
```python
response = client.chat.create(
model="spec-3-turbo",
messages=[
{"role": "developer", "content": "You are an expert code reviewer. Provide detailed feedback."},
{"role": "user", "content": "Please review this Python code: def add(a, b): return a + b"}
],
)
```
## Streaming Responses
Both Conversations and Chat APIs support real-time streaming:
### Conversations Streaming
```python
stream = client.conversations.create_stream(
model="spec-3-turbo",
instructions="You are a creative writer.",
input="Write a poem about technology.",
stream=True,
)
for event in stream:
if not event.done:
print(event.content, end="", flush=True)
else:
print("\nStream completed")
```
### Chat Streaming
```python
stream = client.chat.create_stream(
model="spec-3-turbo",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Explain quantum computing"}
],
stream=True,
)
for event in stream:
if "choices" in event and len(event["choices"]) > 0:
delta = event["choices"][0].get("delta", {})
content = delta.get("content", "")
if content:
print(content, end="", flush=True)
```
## File Management & Document Processing
Upload and process various file formats for enhanced AI capabilities:
### Upload from File
```python
from pathlib import Path
# PDF document
with open("document.pdf", "rb") as f:
pdf_file = client.files.create(f, purpose="default")
# Text file from path
file_response = client.files.create(
Path("notes.txt"),
purpose="default"
)
print(f"File uploaded: {file_response.file_id}")
```
### Upload from Bytes
```python
with open("document.pdf", "rb") as f:
data = f.read()
file_response = client.files.create(
data,
purpose="default",
filename="document.pdf"
)
```
### Upload from String Content
```python
content = """
# Research Notes
This document contains important findings...
"""
file_response = client.files.create(
content.encode(),
purpose="default",
filename="notes.md"
)
```
### Document Q&A
```python
# Upload documents
with open("manual.pdf", "rb") as f:
doc1 = client.files.create(f, purpose="default")
with open("faq.docx", "rb") as f:
doc2 = client.files.create(f, purpose="default")
# Ask questions about the documents
answer = client.conversations.create(
model="spec-3-turbo",
instructions="You are a helpful assistant that answers questions based on the provided documents.",
input="What are the key features mentioned in the manual?",
files=[
{"type": "file", "id": doc1.file_id},
{"type": "file", "id": doc2.file_id}
],
)
```
## Knowledge Collections
Organize multiple files into collections for better performance and context management:
```python
# Add files to a knowledge collection
result1 = client.knowledge.add_file("collection-123", "file-456")
result2 = client.knowledge.add_file("collection-123", "file-789")
# Use the entire collection in conversations
response = client.conversations.create(
model="spec-3-turbo",
instructions="You are a research assistant with access to our knowledge base.",
input="Summarize all the information about our products.",
files=[{"type": "collection", "id": "collection-123"}],
)
```
## Models
SVECTOR provides several cutting-edge foundational AI models:
### Available Models
```python
# List all available models
models = client.models.list()
print(models["models"])
```
**SVECTOR's Foundational Models:**
- **`spec-3-turbo`** - Fast, efficient model for most use cases
- **`spec-3`** - Standard model with balanced performance
- **`theta-35-mini`** - Lightweight model for simple tasks
- **`theta-35`** - Advanced model for complex reasoning
### Model Selection Guide
```python
# For quick responses and general tasks
quick_response = client.conversations.create(
model="spec-3-turbo",
instructions="You are a helpful assistant.",
input="What time is it?",
)
# For complex reasoning and analysis
complex_analysis = client.conversations.create(
model="theta-35",
instructions="You are an expert data analyst.",
input="Analyze the trends in this quarterly report.",
files=[{"type": "file", "id": "report-file-id"}],
)
# For lightweight tasks
simple_task = client.conversations.create(
model="theta-35-mini",
instructions="You help with simple questions.",
input="What is 2 + 2?",
)
```
## Error Handling
The SDK provides comprehensive error handling with specific error types:
```python
from svector import (
SVECTOR,
AuthenticationError,
RateLimitError,
NotFoundError,
APIError
)
client = SVECTOR()
try:
response = client.conversations.create(
model="spec-3-turbo",
instructions="You are a helpful assistant.",
input="Hello world",
)
print(response.output)
except AuthenticationError as e:
print(f"Invalid API key: {e}")
print("Get your API key from https://www.svector.co.in")
except RateLimitError as e:
print(f"Rate limit exceeded: {e}")
print("Please wait before making another request")
except NotFoundError as e:
print(f"Resource not found: {e}")
except APIError as e:
print(f"API error: {e} (Status: {e.status_code})")
print(f"Request ID: {getattr(e, 'request_id', 'N/A')}")
except Exception as e:
print(f"Unexpected error: {e}")
```
### Available Error Types
- **`AuthenticationError`** - Invalid API key or authentication issues
- **`PermissionDeniedError`** - Insufficient permissions for the resource
- **`NotFoundError`** - Requested resource not found
- **`RateLimitError`** - API rate limit exceeded
- **`UnprocessableEntityError`** - Invalid request data or parameters
- **`InternalServerError`** - Server-side errors
- **`APIConnectionError`** - Network connection issues
- **`APIConnectionTimeoutError`** - Request timeout
## Async Support
The SDK provides full async support with `AsyncSVECTOR`:
### Async Basic Usage
```python
import asyncio
from svector import AsyncSVECTOR
async def main():
async with AsyncSVECTOR() as client:
response = await client.conversations.create(
model="spec-3-turbo",
instructions="You are a helpful assistant.",
input="Explain quantum computing in simple terms.",
)
print(response.output)
asyncio.run(main())
```
### Async Streaming
```python
async def streaming_example():
async with AsyncSVECTOR() as client:
stream = await client.conversations.create_stream(
model="spec-3-turbo",
instructions="You are a creative storyteller.",
input="Write a poem about technology.",
stream=True,
)
async for event in stream:
if not event.done:
print(event.content, end="", flush=True)
print()
asyncio.run(streaming_example())
```
### Async Concurrent Requests
```python
async def concurrent_example():
async with AsyncSVECTOR() as client:
# Multiple async conversations
tasks = [
client.conversations.create(
model="spec-3-turbo",
instructions="You are a helpful assistant.",
input=f"What is {topic}?"
)
for topic in ["artificial intelligence", "quantum computing", "blockchain"]
]
responses = await asyncio.gather(*tasks, return_exceptions=True)
topics = ["artificial intelligence", "quantum computing", "blockchain"]
for topic, response in zip(topics, responses):
if isinstance(response, Exception):
print(f"{topic}: Error - {response}")
else:
print(f"{topic}: {response.output[:100]}...")
asyncio.run(concurrent_example())
```
## Advanced Configuration
### Client Configuration
```python
from svector import SVECTOR
client = SVECTOR(
api_key="your-api-key",
base_url="https://api.svector.co.in", # Custom API endpoint
timeout=30, # Request timeout in seconds
max_retries=3, # Retry failed requests
verify_ssl=True, # SSL verification
http_client=None, # Custom HTTP client
)
```
### Async Configuration
```python
from svector import AsyncSVECTOR
client = AsyncSVECTOR(
api_key="your-api-key",
timeout=30,
max_retries=3,
)
```
### Per-request Options
```python
response = client.conversations.create(
model="spec-3-turbo",
instructions="You are a helpful assistant.",
input="Hello",
timeout=60, # Override timeout for this request
headers={ # Additional headers
"X-Custom-Header": "value",
"X-Request-Source": "my-app"
}
)
```
### Raw Response Access
```python
# Get both response data and raw HTTP response
response, raw = client.conversations.create_with_response(
model="spec-3-turbo",
instructions="You are a helpful assistant.",
input="Hello",
)
print(f"Status: {raw.status_code}")
print(f"Headers: {raw.headers}")
print(f"Response: {response.output}")
print(f"Request ID: {response.request_id}")
```
## Complete Examples
### Intelligent Chat Application
```python
from svector import SVECTOR
class IntelligentChat:
def __init__(self, api_key: str):
self.client = SVECTOR(api_key=api_key)
self.conversation_history = []
def chat(self, user_message: str, system_instructions: str = None) -> str:
# Add user message to history
self.conversation_history.append(user_message)
response = self.client.conversations.create(
model="spec-3-turbo",
instructions=system_instructions or "You are a helpful and friendly AI assistant.",
input=user_message,
context=self.conversation_history[-10:], # Keep last 10 messages
temperature=0.7,
)
# Add AI response to history
self.conversation_history.append(response.output)
return response.output
def stream_chat(self, user_message: str):
print("Assistant: ", end="", flush=True)
stream = self.client.conversations.create_stream(
model="spec-3-turbo",
instructions="You are a helpful AI assistant. Be conversational and engaging.",
input=user_message,
context=self.conversation_history[-6:],
stream=True,
)
full_response = ""
for event in stream:
if not event.done:
print(event.content, end="", flush=True)
full_response += event.content
print()
self.conversation_history.append(user_message)
self.conversation_history.append(full_response)
def clear_history(self):
self.conversation_history = []
# Usage
import os
chat = IntelligentChat(os.environ.get("SVECTOR_API_KEY"))
# Regular chat
print(chat.chat("Hello! How are you today?"))
# Streaming chat
chat.stream_chat("Tell me an interesting fact about space.")
# Specialized chat
print(chat.chat(
"Explain quantum computing",
"You are a physics professor who explains complex topics in simple terms."
))
```
### Document Analysis System
```python
from svector import SVECTOR
from pathlib import Path
class DocumentAnalyzer:
def __init__(self):
self.client = SVECTOR()
self.uploaded_files = []
def add_document(self, file_path: str) -> str:
try:
with open(file_path, "rb") as f:
file_response = self.client.files.create(
f,
purpose="default",
filename=Path(file_path).name
)
self.uploaded_files.append(file_response.file_id)
print(f"Uploaded: {file_path} (ID: {file_response.file_id})")
return file_response.file_id
except Exception as error:
print(f"Failed to upload {file_path}: {error}")
raise error
def add_document_from_text(self, content: str, filename: str) -> str:
file_response = self.client.files.create(
content.encode(),
purpose="default",
filename=filename
)
self.uploaded_files.append(file_response.file_id)
return file_response.file_id
def analyze(self, query: str, analysis_type: str = "insights") -> str:
instructions = {
"summary": "You are an expert document summarizer. Provide clear, concise summaries.",
"questions": "You are an expert analyst. Answer questions based on the provided documents with citations.",
"insights": "You are a research analyst. Extract key insights, patterns, and important findings."
}
response = self.client.conversations.create(
model="spec-3-turbo",
instructions=instructions[analysis_type],
input=query,
files=[{"type": "file", "id": file_id} for file_id in self.uploaded_files],
temperature=0.3, # Lower temperature for more factual responses
)
return response.output
def compare_documents(self, query: str) -> str:
if len(self.uploaded_files) < 2:
raise ValueError("Need at least 2 documents to compare")
return self.analyze(
f"Compare and contrast the documents regarding: {query}",
"insights"
)
def get_uploaded_file_ids(self):
return self.uploaded_files.copy()
# Usage
analyzer = DocumentAnalyzer()
# Add multiple documents
analyzer.add_document("./reports/quarterly-report.pdf")
analyzer.add_document("./reports/annual-summary.docx")
analyzer.add_document_from_text("""
# Meeting Notes
Key decisions:
1. Increase R&D budget by 15%
2. Launch new product line in Q3
3. Expand team by 5 engineers
""", "meeting-notes.md")
# Analyze documents
summary = analyzer.analyze(
"Provide a comprehensive summary of all documents",
"summary"
)
print("Summary:", summary)
insights = analyzer.analyze(
"What are the key business decisions and their potential impact?",
"insights"
)
print("Insights:", insights)
# Compare documents
comparison = analyzer.compare_documents(
"financial performance and future projections"
)
print("Comparison:", comparison)
```
### Multi-Model Comparison
```python
from svector import SVECTOR
import time
class ModelComparison:
def __init__(self):
self.client = SVECTOR()
def compare_models(self, prompt: str):
models = ["spec-3-turbo", "spec-3", "theta-35", "theta-35-mini"]
print(f"Comparing models for prompt: \"{prompt}\"\n")
results = []
for model in models:
try:
start_time = time.time()
response = self.client.conversations.create(
model=model,
instructions="You are a helpful assistant. Be concise but informative.",
input=prompt,
max_tokens=150,
)
duration = time.time() - start_time
results.append({
"model": model,
"response": response.output,
"duration": duration,
"usage": response.usage,
"success": True
})
except Exception as e:
results.append({
"model": model,
"error": str(e),
"success": False
})
# Display results
for result in results:
if result["success"]:
print(f"Model: {result['model']}")
print(f"Duration: {result['duration']:.2f}s")
print(f"Tokens: {result['usage'].get('total_tokens', 'N/A')}")
print(f"Response: {result['response'][:200]}...")
print("─" * 80)
else:
print(f"{result['model']} failed: {result['error']}")
# Usage
comparison = ModelComparison()
comparison.compare_models("Explain the concept of artificial general intelligence")
```
## Best Practices
### 1. Use Conversations API for Simplicity
```python
# Recommended: Clean and simple
response = client.conversations.create(
model="spec-3-turbo",
instructions="You are a helpful assistant.",
input=user_message,
)
# More complex: Manual role management
response = client.chat.create(
model="spec-3-turbo",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": user_message}
],
)
```
### 2. Handle Errors Gracefully
```python
import time
def chat_with_retry(client, prompt, max_retries=3):
for attempt in range(max_retries):
try:
return client.conversations.create(
model="spec-3-turbo",
instructions="You are helpful.",
input=prompt
)
except RateLimitError:
if attempt < max_retries - 1:
wait_time = 2 ** attempt # Exponential backoff
time.sleep(wait_time)
else:
raise
```
### 3. Use Appropriate Models
```python
# For quick responses
model = "spec-3-turbo"
# For complex reasoning
model = "theta-35"
# For simple tasks
model = "theta-35-mini"
```
### 4. Optimize File Usage
```python
# Upload once, use multiple times
with open("document.pdf", "rb") as f:
file_response = client.files.create(f, purpose="default")
file_id = file_response.file_id
# Use in multiple conversations
for question in questions:
response = client.conversations.create(
model="spec-3-turbo",
instructions="You are a document analyst.",
input=question,
files=[{"type": "file", "id": file_id}],
)
```
### 5. Environment Variables
```python
import os
from svector import SVECTOR
# Use environment variables
client = SVECTOR(api_key=os.environ.get("SVECTOR_API_KEY"))
# Don't hardcode API keys
client = SVECTOR(api_key="sk-hardcoded-key-here") # Never do this!
```
### 6. Use Context Managers for Async
```python
# Recommended: Use context manager
async with AsyncSVECTOR() as client:
response = await client.conversations.create(...)
# Manual cleanup required
client = AsyncSVECTOR()
try:
response = await client.conversations.create(...)
finally:
await client.close()
```
## Testing
Run tests with pytest:
```bash
# Install test dependencies
pip install -e ".[test]"
# Run tests
pytest
# Run with coverage
pytest --cov=svector
```
## Contributing
We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
1. Fork the repository
2. Create a feature branch
3. Install development dependencies: `pip install -e ".[dev]"`
4. Make your changes
5. Add tests and documentation
6. Run tests and linting
7. Submit a pull request
## License
Apache License - see [LICENSE](LICENSE) file for details.
## Links & Support
- **Website**: [https://www.svector.co.in](https://www.svector.co.in)
- **Documentation**: [https://platform.svector.co.in](https://platform.svector.co.in)
- **Issues**: [GitHub Issues](https://github.com/SVECTOR-CORPORATION/svector-python/issues)
- **Support**: [support@svector.co.in](mailto:support@svector.co.in)
- **PyPI Package**: [svector-sdk](https://pypi.org/project/svector-sdk/)
---
**Built with ❤️ by SVECTOR Corporation** - *Pushing the boundaries of AI, Mathematics, and Computational research*
Raw data
{
"_id": null,
"home_page": "https://github.com/svector-corporation/svector-python",
"name": "svector-sdk",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.8",
"maintainer_email": "SVECTOR Team <support@svector.co.in>",
"keywords": "svector, ai, machine-learning, api, llm, spec-chat, artificial-intelligence, conversational-ai, language-models",
"author": "SVECTOR Team",
"author_email": "SVECTOR Team <support@svector.co.in>",
"download_url": "https://files.pythonhosted.org/packages/d3/28/a8698799386095c849477fec8b4f9708f1d906618579f21a68a8e8b3bbf9/svector_sdk-1.3.1.tar.gz",
"platform": null,
"description": "# SVECTOR Python SDK\n\n[](https://pypi.org/project/svector-sdk/) \n[](https://www.python.org/) \n[](https://opensource.org/licenses/MIT) \n[](https://pypi.org/project/svector-sdk/)\n\n**Official Python SDK for accessing SVECTOR APIs.**\n\nSVECTOR develops high-performance AI models and automation solutions, specializing in artificial intelligence, mathematical computing, and computational research. This Python SDK provides programmatic access to SVECTOR's API services, offering intuitive model completions, document processing, and seamless integration with SVECTOR's advanced AI systems (e.g., Spec-3, Spec-3-Turbo, Theta-35).\n\nThe library includes type hints for request parameters and response fields, and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx) and [requests](https://github.com/psf/requests).\n\n## Quick Start\n\n```bash\npip install svector-sdk\n```\n\n```python\nfrom svector import SVECTOR\n\nclient = SVECTOR(api_key=\"your-api-key\") # or set SVECTOR_API_KEY env var\n\n# Conversational API - just provide instructions and input!\nresponse = client.conversations.create(\n model=\"spec-3-turbo\",\n instructions=\"You are a helpful AI assistant that explains complex topics clearly.\",\n input=\"What is artificial intelligence?\",\n)\n\nprint(response.output)\n```\n\n## Table of Contents\n\n- [Installation](#installation)\n- [Authentication](#authentication)\n- [Core Features](#core-features)\n- [Conversations API (Recommended)](#conversations-api-recommended)\n- [Chat Completions API (Advanced)](#chat-completions-api-advanced)\n- [Streaming Responses](#streaming-responses)\n- [File Management & Document Processing](#file-management--document-processing)\n- [Models](#models)\n- [Error Handling](#error-handling)\n- [Async Support](#async-support)\n- [Advanced Configuration](#advanced-configuration)\n- [Complete Examples](#complete-examples)\n- [Best Practices](#best-practices)\n- [Contributing](#contributing)\n\n## Installation\n\n### pip\n```bash\npip install svector-sdk\n```\n\n### Development Install\n```bash\ngit clone https://github.com/svector-corporation/svector-python\ncd svector-python\npip install -e \".[dev]\"\n```\n\n## Authentication\n\nGet your API key from the [SVECTOR Dashboard](https://www.svector.co.in) and set it as an environment variable:\n\n```bash\nexport SVECTOR_API_KEY=\"your-api-key-here\"\n```\n\nOr pass it directly to the client:\n\n```python\nfrom svector import SVECTOR\n\nclient = SVECTOR(api_key=\"your-api-key-here\")\n```\n\n## Core Features\n\n- **Conversations API** - Simple instructions + input interface\n- **Advanced Chat Completions** - Full control with role-based messages\n- **Real-time Streaming** - Server-sent events for live responses\n- **File Processing** - Upload and process documents (PDF, DOCX, TXT, etc.)\n- **Knowledge Collections** - Organize files for enhanced RAG\n- **Type Safety** - Full type hints and IntelliSense support\n- **Async Support** - AsyncSVECTOR client for high-performance applications\n- **Robust Error Handling** - Comprehensive error types and retry logic\n- **Multi-environment** - Works everywhere Python runs\n\n## Conversations API (Recommended)\n\nThe **Conversations API** provides a, user-friendly interface. Just provide instructions and input - the SDK handles all the complex role management internally!\n\n### Basic Conversation\n\n```python\nfrom svector import SVECTOR\n\nclient = SVECTOR()\n\nresponse = client.conversations.create(\n model=\"spec-3-turbo\",\n instructions=\"You are a helpful assistant that explains things clearly.\",\n input=\"What is machine learning?\",\n temperature=0.7,\n max_tokens=200,\n)\n\nprint(response.output)\nprint(f\"Request ID: {response.request_id}\")\nprint(f\"Token Usage: {response.usage}\")\n```\n\n### Conversation with Context\n\n```python\nresponse = client.conversations.create(\n model=\"spec-3-turbo\",\n instructions=\"You are a programming tutor that helps students learn coding.\",\n input=\"Can you show me an example?\",\n context=[\n \"How do I create a function in Python?\",\n \"You can create a function using the def keyword followed by the function name and parameters...\"\n ],\n temperature=0.5,\n)\n```\n\n### Streaming Conversation\n\n```python\nstream = client.conversations.create_stream(\n model=\"spec-3-turbo\",\n instructions=\"You are a creative storyteller.\",\n input=\"Tell me a short story about robots and humans.\",\n stream=True,\n)\n\nprint(\"Story: \", end=\"\", flush=True)\nfor event in stream:\n if not event.done:\n print(event.content, end=\"\", flush=True)\n else:\n print(\"\\nStory completed!\")\n```\n\n### Document-based Conversation\n\n```python\n# First upload a document\nwith open(\"research-paper.pdf\", \"rb\") as f:\n file_response = client.files.create(f, purpose=\"default\")\n\n# Then ask questions about it\nresponse = client.conversations.create(\n model=\"spec-3-turbo\",\n instructions=\"You are a research assistant that analyzes documents.\",\n input=\"What are the key findings in this paper?\",\n files=[{\"type\": \"file\", \"id\": file_response.file_id}],\n)\n```\n\n## Chat Completions API (Advanced)\n\nFor full control over the conversation structure, use the Chat Completions API with role-based messages:\n\n### Basic Chat\n\n```python\nresponse = client.chat.create(\n model=\"spec-3-turbo\",\n messages=[\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n {\"role\": \"user\", \"content\": \"Hello, how are you?\"}\n ],\n max_tokens=150,\n temperature=0.7,\n)\n\nprint(response[\"choices\"][0][\"message\"][\"content\"])\n```\n\n### Multi-turn Conversation\n\n```python\nconversation = [\n {\"role\": \"system\", \"content\": \"You are a helpful programming assistant.\"},\n {\"role\": \"user\", \"content\": \"How do I reverse a string in Python?\"},\n {\"role\": \"assistant\", \"content\": \"You can reverse a string using slicing: string[::-1]\"},\n {\"role\": \"user\", \"content\": \"Can you show me other methods?\"}\n]\n\nresponse = client.chat.create(\n model=\"spec-3-turbo\",\n messages=conversation,\n temperature=0.5,\n)\n```\n\n### Developer Role (System-level Instructions)\n\n```python\nresponse = client.chat.create(\n model=\"spec-3-turbo\",\n messages=[\n {\"role\": \"developer\", \"content\": \"You are an expert code reviewer. Provide detailed feedback.\"},\n {\"role\": \"user\", \"content\": \"Please review this Python code: def add(a, b): return a + b\"}\n ],\n)\n```\n\n## Streaming Responses\n\nBoth Conversations and Chat APIs support real-time streaming:\n\n### Conversations Streaming\n\n```python\nstream = client.conversations.create_stream(\n model=\"spec-3-turbo\",\n instructions=\"You are a creative writer.\",\n input=\"Write a poem about technology.\",\n stream=True,\n)\n\nfor event in stream:\n if not event.done:\n print(event.content, end=\"\", flush=True)\n else:\n print(\"\\nStream completed\")\n```\n\n### Chat Streaming\n\n```python\nstream = client.chat.create_stream(\n model=\"spec-3-turbo\",\n messages=[\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n {\"role\": \"user\", \"content\": \"Explain quantum computing\"}\n ],\n stream=True,\n)\n\nfor event in stream:\n if \"choices\" in event and len(event[\"choices\"]) > 0:\n delta = event[\"choices\"][0].get(\"delta\", {})\n content = delta.get(\"content\", \"\")\n if content:\n print(content, end=\"\", flush=True)\n```\n\n## File Management & Document Processing\n\nUpload and process various file formats for enhanced AI capabilities:\n\n### Upload from File\n\n```python\nfrom pathlib import Path\n\n# PDF document\nwith open(\"document.pdf\", \"rb\") as f:\n pdf_file = client.files.create(f, purpose=\"default\")\n\n# Text file from path\nfile_response = client.files.create(\n Path(\"notes.txt\"), \n purpose=\"default\"\n)\n\nprint(f\"File uploaded: {file_response.file_id}\")\n```\n\n### Upload from Bytes\n\n```python\nwith open(\"document.pdf\", \"rb\") as f:\n data = f.read()\n\nfile_response = client.files.create(\n data, \n purpose=\"default\", \n filename=\"document.pdf\"\n)\n```\n\n### Upload from String Content\n\n```python\ncontent = \"\"\"\n# Research Notes\nThis document contains important findings...\n\"\"\"\n\nfile_response = client.files.create(\n content.encode(), \n purpose=\"default\", \n filename=\"notes.md\"\n)\n```\n\n### Document Q&A\n\n```python\n# Upload documents\nwith open(\"manual.pdf\", \"rb\") as f:\n doc1 = client.files.create(f, purpose=\"default\")\n\nwith open(\"faq.docx\", \"rb\") as f:\n doc2 = client.files.create(f, purpose=\"default\")\n\n# Ask questions about the documents\nanswer = client.conversations.create(\n model=\"spec-3-turbo\",\n instructions=\"You are a helpful assistant that answers questions based on the provided documents.\",\n input=\"What are the key features mentioned in the manual?\",\n files=[\n {\"type\": \"file\", \"id\": doc1.file_id},\n {\"type\": \"file\", \"id\": doc2.file_id}\n ],\n)\n```\n\n## Knowledge Collections\n\nOrganize multiple files into collections for better performance and context management:\n\n```python\n# Add files to a knowledge collection\nresult1 = client.knowledge.add_file(\"collection-123\", \"file-456\")\nresult2 = client.knowledge.add_file(\"collection-123\", \"file-789\")\n\n# Use the entire collection in conversations\nresponse = client.conversations.create(\n model=\"spec-3-turbo\",\n instructions=\"You are a research assistant with access to our knowledge base.\",\n input=\"Summarize all the information about our products.\",\n files=[{\"type\": \"collection\", \"id\": \"collection-123\"}],\n)\n```\n\n## Models\n\nSVECTOR provides several cutting-edge foundational AI models:\n\n### Available Models\n\n```python\n# List all available models\nmodels = client.models.list()\nprint(models[\"models\"])\n```\n\n**SVECTOR's Foundational Models:**\n\n- **`spec-3-turbo`** - Fast, efficient model for most use cases\n- **`spec-3`** - Standard model with balanced performance \n- **`theta-35-mini`** - Lightweight model for simple tasks\n- **`theta-35`** - Advanced model for complex reasoning\n\n### Model Selection Guide\n\n```python\n# For quick responses and general tasks\nquick_response = client.conversations.create(\n model=\"spec-3-turbo\",\n instructions=\"You are a helpful assistant.\",\n input=\"What time is it?\",\n)\n\n# For complex reasoning and analysis\ncomplex_analysis = client.conversations.create(\n model=\"theta-35\",\n instructions=\"You are an expert data analyst.\",\n input=\"Analyze the trends in this quarterly report.\",\n files=[{\"type\": \"file\", \"id\": \"report-file-id\"}],\n)\n\n# For lightweight tasks\nsimple_task = client.conversations.create(\n model=\"theta-35-mini\",\n instructions=\"You help with simple questions.\",\n input=\"What is 2 + 2?\",\n)\n```\n\n## Error Handling\n\nThe SDK provides comprehensive error handling with specific error types:\n\n```python\nfrom svector import (\n SVECTOR, \n AuthenticationError, \n RateLimitError, \n NotFoundError,\n APIError\n)\n\nclient = SVECTOR()\n\ntry:\n response = client.conversations.create(\n model=\"spec-3-turbo\",\n instructions=\"You are a helpful assistant.\",\n input=\"Hello world\",\n )\n \n print(response.output)\nexcept AuthenticationError as e:\n print(f\"Invalid API key: {e}\")\n print(\"Get your API key from https://www.svector.co.in\")\nexcept RateLimitError as e:\n print(f\"Rate limit exceeded: {e}\")\n print(\"Please wait before making another request\")\nexcept NotFoundError as e:\n print(f\"Resource not found: {e}\")\nexcept APIError as e:\n print(f\"API error: {e} (Status: {e.status_code})\")\n print(f\"Request ID: {getattr(e, 'request_id', 'N/A')}\")\nexcept Exception as e:\n print(f\"Unexpected error: {e}\")\n```\n\n### Available Error Types\n\n- **`AuthenticationError`** - Invalid API key or authentication issues\n- **`PermissionDeniedError`** - Insufficient permissions for the resource\n- **`NotFoundError`** - Requested resource not found\n- **`RateLimitError`** - API rate limit exceeded\n- **`UnprocessableEntityError`** - Invalid request data or parameters\n- **`InternalServerError`** - Server-side errors\n- **`APIConnectionError`** - Network connection issues\n- **`APIConnectionTimeoutError`** - Request timeout\n\n## Async Support\n\nThe SDK provides full async support with `AsyncSVECTOR`:\n\n### Async Basic Usage\n\n```python\nimport asyncio\nfrom svector import AsyncSVECTOR\n\nasync def main():\n async with AsyncSVECTOR() as client:\n response = await client.conversations.create(\n model=\"spec-3-turbo\",\n instructions=\"You are a helpful assistant.\",\n input=\"Explain quantum computing in simple terms.\",\n )\n print(response.output)\n\nasyncio.run(main())\n```\n\n### Async Streaming\n\n```python\nasync def streaming_example():\n async with AsyncSVECTOR() as client:\n stream = await client.conversations.create_stream(\n model=\"spec-3-turbo\",\n instructions=\"You are a creative storyteller.\",\n input=\"Write a poem about technology.\",\n stream=True,\n )\n \n async for event in stream:\n if not event.done:\n print(event.content, end=\"\", flush=True)\n print()\n\nasyncio.run(streaming_example())\n```\n\n### Async Concurrent Requests\n\n```python\nasync def concurrent_example():\n async with AsyncSVECTOR() as client:\n # Multiple async conversations\n tasks = [\n client.conversations.create(\n model=\"spec-3-turbo\",\n instructions=\"You are a helpful assistant.\",\n input=f\"What is {topic}?\"\n )\n for topic in [\"artificial intelligence\", \"quantum computing\", \"blockchain\"]\n ]\n \n responses = await asyncio.gather(*tasks, return_exceptions=True)\n \n topics = [\"artificial intelligence\", \"quantum computing\", \"blockchain\"]\n for topic, response in zip(topics, responses):\n if isinstance(response, Exception):\n print(f\"{topic}: Error - {response}\")\n else:\n print(f\"{topic}: {response.output[:100]}...\")\n\nasyncio.run(concurrent_example())\n```\n\n## Advanced Configuration\n\n### Client Configuration\n\n```python\nfrom svector import SVECTOR\n\nclient = SVECTOR(\n api_key=\"your-api-key\",\n base_url=\"https://api.svector.co.in\", # Custom API endpoint\n timeout=30, # Request timeout in seconds\n max_retries=3, # Retry failed requests\n verify_ssl=True, # SSL verification\n http_client=None, # Custom HTTP client\n)\n```\n\n### Async Configuration\n\n```python\nfrom svector import AsyncSVECTOR\n\nclient = AsyncSVECTOR(\n api_key=\"your-api-key\",\n timeout=30,\n max_retries=3,\n)\n```\n\n### Per-request Options\n\n```python\nresponse = client.conversations.create(\n model=\"spec-3-turbo\",\n instructions=\"You are a helpful assistant.\",\n input=\"Hello\",\n timeout=60, # Override timeout for this request\n headers={ # Additional headers\n \"X-Custom-Header\": \"value\",\n \"X-Request-Source\": \"my-app\"\n }\n)\n```\n\n### Raw Response Access\n\n```python\n# Get both response data and raw HTTP response\nresponse, raw = client.conversations.create_with_response(\n model=\"spec-3-turbo\",\n instructions=\"You are a helpful assistant.\",\n input=\"Hello\",\n)\n\nprint(f\"Status: {raw.status_code}\")\nprint(f\"Headers: {raw.headers}\")\nprint(f\"Response: {response.output}\")\nprint(f\"Request ID: {response.request_id}\")\n```\n\n## Complete Examples\n\n### Intelligent Chat Application\n\n```python\nfrom svector import SVECTOR\n\nclass IntelligentChat:\n def __init__(self, api_key: str):\n self.client = SVECTOR(api_key=api_key)\n self.conversation_history = []\n\n def chat(self, user_message: str, system_instructions: str = None) -> str:\n # Add user message to history\n self.conversation_history.append(user_message)\n\n response = self.client.conversations.create(\n model=\"spec-3-turbo\",\n instructions=system_instructions or \"You are a helpful and friendly AI assistant.\",\n input=user_message,\n context=self.conversation_history[-10:], # Keep last 10 messages\n temperature=0.7,\n )\n\n # Add AI response to history\n self.conversation_history.append(response.output)\n return response.output\n\n def stream_chat(self, user_message: str):\n print(\"Assistant: \", end=\"\", flush=True)\n \n stream = self.client.conversations.create_stream(\n model=\"spec-3-turbo\",\n instructions=\"You are a helpful AI assistant. Be conversational and engaging.\",\n input=user_message,\n context=self.conversation_history[-6:],\n stream=True,\n )\n\n full_response = \"\"\n for event in stream:\n if not event.done:\n print(event.content, end=\"\", flush=True)\n full_response += event.content\n print()\n\n self.conversation_history.append(user_message)\n self.conversation_history.append(full_response)\n\n def clear_history(self):\n self.conversation_history = []\n\n# Usage\nimport os\nchat = IntelligentChat(os.environ.get(\"SVECTOR_API_KEY\"))\n\n# Regular chat\nprint(chat.chat(\"Hello! How are you today?\"))\n\n# Streaming chat\nchat.stream_chat(\"Tell me an interesting fact about space.\")\n\n# Specialized chat\nprint(chat.chat(\n \"Explain quantum computing\", \n \"You are a physics professor who explains complex topics in simple terms.\"\n))\n```\n\n### Document Analysis System\n\n```python\nfrom svector import SVECTOR\nfrom pathlib import Path\n\nclass DocumentAnalyzer:\n def __init__(self):\n self.client = SVECTOR()\n self.uploaded_files = []\n\n def add_document(self, file_path: str) -> str:\n try:\n with open(file_path, \"rb\") as f:\n file_response = self.client.files.create(\n f, \n purpose=\"default\",\n filename=Path(file_path).name\n )\n \n self.uploaded_files.append(file_response.file_id)\n print(f\"Uploaded: {file_path} (ID: {file_response.file_id})\")\n return file_response.file_id\n except Exception as error:\n print(f\"Failed to upload {file_path}: {error}\")\n raise error\n\n def add_document_from_text(self, content: str, filename: str) -> str:\n file_response = self.client.files.create(\n content.encode(), \n purpose=\"default\", \n filename=filename\n )\n self.uploaded_files.append(file_response.file_id)\n return file_response.file_id\n\n def analyze(self, query: str, analysis_type: str = \"insights\") -> str:\n instructions = {\n \"summary\": \"You are an expert document summarizer. Provide clear, concise summaries.\",\n \"questions\": \"You are an expert analyst. Answer questions based on the provided documents with citations.\",\n \"insights\": \"You are a research analyst. Extract key insights, patterns, and important findings.\"\n }\n\n response = self.client.conversations.create(\n model=\"spec-3-turbo\",\n instructions=instructions[analysis_type],\n input=query,\n files=[{\"type\": \"file\", \"id\": file_id} for file_id in self.uploaded_files],\n temperature=0.3, # Lower temperature for more factual responses\n )\n\n return response.output\n\n def compare_documents(self, query: str) -> str:\n if len(self.uploaded_files) < 2:\n raise ValueError(\"Need at least 2 documents to compare\")\n\n return self.analyze(\n f\"Compare and contrast the documents regarding: {query}\",\n \"insights\"\n )\n\n def get_uploaded_file_ids(self):\n return self.uploaded_files.copy()\n\n# Usage\nanalyzer = DocumentAnalyzer()\n\n# Add multiple documents\nanalyzer.add_document(\"./reports/quarterly-report.pdf\")\nanalyzer.add_document(\"./reports/annual-summary.docx\")\nanalyzer.add_document_from_text(\"\"\"\n# Meeting Notes\nKey decisions:\n1. Increase R&D budget by 15%\n2. Launch new product line in Q3\n3. Expand team by 5 engineers\n\"\"\", \"meeting-notes.md\")\n\n# Analyze documents\nsummary = analyzer.analyze(\n \"Provide a comprehensive summary of all documents\",\n \"summary\"\n)\nprint(\"Summary:\", summary)\n\ninsights = analyzer.analyze(\n \"What are the key business decisions and their potential impact?\",\n \"insights\"\n)\nprint(\"Insights:\", insights)\n\n# Compare documents\ncomparison = analyzer.compare_documents(\n \"financial performance and future projections\"\n)\nprint(\"Comparison:\", comparison)\n```\n\n### Multi-Model Comparison\n\n```python\nfrom svector import SVECTOR\nimport time\n\nclass ModelComparison:\n def __init__(self):\n self.client = SVECTOR()\n\n def compare_models(self, prompt: str):\n models = [\"spec-3-turbo\", \"spec-3\", \"theta-35\", \"theta-35-mini\"]\n \n print(f\"Comparing models for prompt: \\\"{prompt}\\\"\\n\")\n\n results = []\n for model in models:\n try:\n start_time = time.time()\n \n response = self.client.conversations.create(\n model=model,\n instructions=\"You are a helpful assistant. Be concise but informative.\",\n input=prompt,\n max_tokens=150,\n )\n \n duration = time.time() - start_time\n \n results.append({\n \"model\": model,\n \"response\": response.output,\n \"duration\": duration,\n \"usage\": response.usage,\n \"success\": True\n })\n \n except Exception as e:\n results.append({\n \"model\": model,\n \"error\": str(e),\n \"success\": False\n })\n\n # Display results\n for result in results:\n if result[\"success\"]:\n print(f\"Model: {result['model']}\")\n print(f\"Duration: {result['duration']:.2f}s\")\n print(f\"Tokens: {result['usage'].get('total_tokens', 'N/A')}\")\n print(f\"Response: {result['response'][:200]}...\")\n print(\"\u2500\" * 80)\n else:\n print(f\"{result['model']} failed: {result['error']}\")\n\n# Usage\ncomparison = ModelComparison()\ncomparison.compare_models(\"Explain the concept of artificial general intelligence\")\n```\n\n## Best Practices\n\n### 1. Use Conversations API for Simplicity\n```python\n# Recommended: Clean and simple\nresponse = client.conversations.create(\n model=\"spec-3-turbo\",\n instructions=\"You are a helpful assistant.\",\n input=user_message,\n)\n\n# More complex: Manual role management\nresponse = client.chat.create(\n model=\"spec-3-turbo\",\n messages=[\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n {\"role\": \"user\", \"content\": user_message}\n ],\n)\n```\n\n### 2. Handle Errors Gracefully\n```python\nimport time\n\ndef chat_with_retry(client, prompt, max_retries=3):\n for attempt in range(max_retries):\n try:\n return client.conversations.create(\n model=\"spec-3-turbo\",\n instructions=\"You are helpful.\",\n input=prompt\n )\n except RateLimitError:\n if attempt < max_retries - 1:\n wait_time = 2 ** attempt # Exponential backoff\n time.sleep(wait_time)\n else:\n raise\n```\n\n### 3. Use Appropriate Models\n```python\n# For quick responses\nmodel = \"spec-3-turbo\"\n\n# For complex reasoning\nmodel = \"theta-35\"\n\n# For simple tasks\nmodel = \"theta-35-mini\"\n```\n\n### 4. Optimize File Usage\n```python\n# Upload once, use multiple times\nwith open(\"document.pdf\", \"rb\") as f:\n file_response = client.files.create(f, purpose=\"default\")\n file_id = file_response.file_id\n\n# Use in multiple conversations\nfor question in questions:\n response = client.conversations.create(\n model=\"spec-3-turbo\",\n instructions=\"You are a document analyst.\",\n input=question,\n files=[{\"type\": \"file\", \"id\": file_id}],\n )\n```\n\n### 5. Environment Variables\n```python\nimport os\nfrom svector import SVECTOR\n\n# Use environment variables\nclient = SVECTOR(api_key=os.environ.get(\"SVECTOR_API_KEY\"))\n\n# Don't hardcode API keys\nclient = SVECTOR(api_key=\"sk-hardcoded-key-here\") # Never do this!\n```\n\n### 6. Use Context Managers for Async\n```python\n# Recommended: Use context manager\nasync with AsyncSVECTOR() as client:\n response = await client.conversations.create(...)\n\n# Manual cleanup required\nclient = AsyncSVECTOR()\ntry:\n response = await client.conversations.create(...)\nfinally:\n await client.close()\n```\n\n## Testing\n\nRun tests with pytest:\n\n```bash\n# Install test dependencies\npip install -e \".[test]\"\n\n# Run tests\npytest\n\n# Run with coverage\npytest --cov=svector\n```\n\n## Contributing\n\nWe welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.\n\n1. Fork the repository\n2. Create a feature branch\n3. Install development dependencies: `pip install -e \".[dev]\"`\n4. Make your changes\n5. Add tests and documentation\n6. Run tests and linting\n7. Submit a pull request\n\n## License\n\nApache License - see [LICENSE](LICENSE) file for details.\n\n## Links & Support\n\n- **Website**: [https://www.svector.co.in](https://www.svector.co.in)\n- **Documentation**: [https://platform.svector.co.in](https://platform.svector.co.in)\n- **Issues**: [GitHub Issues](https://github.com/SVECTOR-CORPORATION/svector-python/issues)\n- **Support**: [support@svector.co.in](mailto:support@svector.co.in)\n- **PyPI Package**: [svector-sdk](https://pypi.org/project/svector-sdk/)\n\n---\n\n**Built with \u2764\ufe0f by SVECTOR Corporation** - *Pushing the boundaries of AI, Mathematics, and Computational research*\n",
"bugtrack_url": null,
"license": null,
"summary": "Official Python SDK for SVECTOR AI Models - Advanced conversational AI and language models",
"version": "1.3.1",
"project_urls": {
"Bug Tracker": "https://github.com/svector-corporation/svector-python/issues",
"Changelog": "https://github.com/svector-corporation/svector-python/blob/main/CHANGELOG.md",
"Documentation": "https://platform.svector.co.in",
"Homepage": "https://www.svector.co.in",
"Repository": "https://github.com/svector-corporation/svector-python"
},
"split_keywords": [
"svector",
" ai",
" machine-learning",
" api",
" llm",
" spec-chat",
" artificial-intelligence",
" conversational-ai",
" language-models"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "f354ae439194b047bf072b94839d50103a85b7385c27d6b96d72ae0f0910a5ab",
"md5": "b7a6735f2ecd63c6f7b2747c07edc348",
"sha256": "93fc91ae52f69391599ca099b45a75f090196ecb3eb7120c611843c6ca05ab73"
},
"downloads": -1,
"filename": "svector_sdk-1.3.1-py3-none-any.whl",
"has_sig": false,
"md5_digest": "b7a6735f2ecd63c6f7b2747c07edc348",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.8",
"size": 30940,
"upload_time": "2025-07-12T14:32:49",
"upload_time_iso_8601": "2025-07-12T14:32:49.376525Z",
"url": "https://files.pythonhosted.org/packages/f3/54/ae439194b047bf072b94839d50103a85b7385c27d6b96d72ae0f0910a5ab/svector_sdk-1.3.1-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "d328a8698799386095c849477fec8b4f9708f1d906618579f21a68a8e8b3bbf9",
"md5": "9b918c1ba1da3a4c0275eefe6985d5fa",
"sha256": "805e689d4039b4280ead3de389d0f4eee08aaa1fa862a05a60545d0c12438b05"
},
"downloads": -1,
"filename": "svector_sdk-1.3.1.tar.gz",
"has_sig": false,
"md5_digest": "9b918c1ba1da3a4c0275eefe6985d5fa",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 32350,
"upload_time": "2025-07-12T14:32:51",
"upload_time_iso_8601": "2025-07-12T14:32:51.111372Z",
"url": "https://files.pythonhosted.org/packages/d3/28/a8698799386095c849477fec8b4f9708f1d906618579f21a68a8e8b3bbf9/svector_sdk-1.3.1.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-12 14:32:51",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "svector-corporation",
"github_project": "svector-python",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"lcname": "svector-sdk"
}