| Field | Value |
|---|---|
| Name | prarabdha-cache |
| Version | 0.1.1 |
| Summary | Modular AI Cache System for chats, audio, and video data ingestion |
| Author | Prarabdha Soni (prarabdha.soni@gmail.com) |
| Upload time | 2025-08-03 13:11:59 |
| Requires Python | >=3.8 |
| License | MIT |
| Keywords | cache, ai, vector, rag, audio, video |
# Prarabdha - Modular AI Cache System
A modular caching system for AI applications, supporting multi-layer caching, vector similarity search, RAG-aware chunk indexing, and async ingestion APIs.
## Features
### Core Components
- **Multi-layer KV Store**: RAM, disk, and Redis support with TTL and eviction policies
- **Vector Index**: FAISS-based semantic similarity search
- **RAG Chunk Index**: Document-aware chunking and retrieval
- **Audio Cache**: Feature-level audio processing cache
- **Video Cache**: Segment/frame-level video processing cache
### Advanced Features
- **LRU/LFU Eviction**: Configurable memory eviction policies
- **TTL Support**: Automatic expiration for all cache layers
- **Redis Integration**: Distributed caching with auto-sharding
- **RAG Integration**: Similarity-based retrieval with fallback
- **Async APIs**: FastAPI-based ingestion and inspection
- **CLI Tools**: Command-line interface for management
## Installation
```bash
# Install dependencies
pip install redis faiss-cpu fastapi uvicorn numpy aiohttp pydantic typer
# Clone the repository
git clone <repository-url>
cd prarabdha_cache_package
# Run examples
python3 simple_example.py
```
## Quick Start
### Clean Import Interface
The easiest way to use Prarabdha is with the clean import interface:
```python
# Import specific cache types
from prarabdha.chats import ChatCache
from prarabdha.audio import audioCache
from prarabdha.video import videoCache
from prarabdha.rag import RAGCache
# Create cache instances
chat_cache = ChatCache()
audio_cache = audioCache()
video_cache = videoCache()
rag_cache = RAGCache()
```
### Chat Caching
```python
from prarabdha.chats import ChatCache
# Create chat cache
chat_cache = ChatCache()
# Cache a chat segment
segment = {
    "content": "Hello, how can I help you?",
    "user_id": "user123",
    "session_id": "session456",
    "timestamp": 1234567890,
    "model": "gpt-4"
}

cache_key = chat_cache.cache_segment(segment)
print(f"Cached with key: {cache_key}")

# Retrieve segment with RAG fallback
retrieved = chat_cache.get_segment_with_rag_fallback(segment)
if retrieved:
    print(f"Retrieved: {retrieved['content']}")

# Get statistics
stats = chat_cache.get_stats()
print(f"Hit rate: {stats['hit_rate']:.2%}")
```
### Audio Caching
```python
from prarabdha.audio import audioCache
import numpy as np
# Create audio cache
audio_cache = audioCache()
# Cache audio features
features = np.random.rand(13, 100) # MFCC features
feature_key = audio_cache.cache_audio_features(
    audio_id="audio1",
    feature_type="mfcc",
    features=features,
    metadata={"duration": 5.0, "sample_rate": 16000}
)

# Retrieve features
retrieved = audio_cache.get_audio_features("audio1", "mfcc")
if retrieved:
    print(f"Retrieved features shape: {retrieved.features.shape}")
```
### Video Caching
```python
from prarabdha.video import videoCache
import numpy as np
# Create video cache
video_cache = videoCache()
# Cache video segment
segment_key = video_cache.cache_video_segment(
    video_id="video1",
    segment_id="seg1",
    start_frame=0,
    end_frame=150,
    start_time=0.0,
    end_time=5.0,
    features=np.random.rand(768),
    metadata={"resolution": "1920x1080", "fps": 30}
)

# Retrieve segment
retrieved = video_cache.get_video_segment("video1", "seg1")
if retrieved:
    print(f"Retrieved segment: {retrieved.video_id}:{retrieved.segment_id}")
```
### RAG Chunk Indexing
```python
from prarabdha.rag import RAGCache
# Create RAG cache
rag_cache = RAGCache()
# Add document
chunk_ids = rag_cache.add_document(
    document_id="doc1",
    content="Python is a high-level programming language...",
    metadata={"author": "John Doe", "topic": "programming"}
)

# Search similar chunks
similar_chunks = rag_cache.search_similar_chunks("What is Python?", k=3)
for vector_id, similarity, metadata in similar_chunks:
    chunk = rag_cache.get_chunk(metadata.get('chunk_id', ''))
    print(f"Similarity: {similarity:.3f}, Content: {chunk.content[:100]}...")
```
### Advanced Usage
For more advanced usage, you can import the full classes:
```python
from prarabdha import (
    SegmentCacheManager,
    SegmentCacheManagerFactory,
    AudioCache,
    VideoCache,
    ChunkIndex
)
# Create with custom configuration
cache_manager = SegmentCacheManagerFactory.create_high_performance_manager()
audio_cache = AudioCache(feature_dimension=1024, similarity_threshold=0.9)
video_cache = VideoCache(segment_duration=10.0)
chunk_index = ChunkIndex(chunk_size=2000, chunk_overlap=400)
```
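The `chunk_size` and `chunk_overlap` parameters above suggest sliding-window chunking. As a rough illustration of how overlapping chunks can be produced (a sketch of the general technique, not the library's actual algorithm), consider:

```python
def sliding_chunks(text, chunk_size=2000, chunk_overlap=400):
    """Split text into fixed-size chunks, each overlapping the previous
    chunk by chunk_overlap characters (illustrative only)."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap
    # Stop before emitting a final chunk that is pure overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - chunk_overlap, 1), step)]

text = "x" * 5000
chunks = sliding_chunks(text)
print([len(c) for c in chunks])  # chunk lengths: [2000, 2000, 1800]
```

The overlap ensures a sentence falling on a chunk boundary still appears intact in at least one chunk, at the cost of some duplicated storage in the index.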
## CLI Usage
### Basic Commands
```bash
# Show help
python3 -m prarabdha.cli.cli --help
# View cache statistics
python3 -m prarabdha.cli.cli stats
# Clear all cache data
python3 -m prarabdha.cli.cli clear --yes
# Search for similar segments
python3 -m prarabdha.cli.cli search "Python programming help" --limit 5
```
### Ingest Data
```bash
# Ingest from JSON file
python3 -m prarabdha.cli.cli ingest segments.json --verbose
# Ingest with Redis backend
python3 -m prarabdha.cli.cli ingest segments.json --redis redis://localhost:6379/0
# Ingest with high-performance strategy
python3 -m prarabdha.cli.cli ingest segments.json --strategy high-performance
```
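The ingest command reads a JSON file of chat segments. Assuming the file is a JSON array with the same fields as the chat-caching example above (the field names are inferred from that example, not from a documented schema), a minimal `segments.json` could be generated like this:

```python
import json

# Hypothetical segments.json: a list of chat segments with the fields
# used in the ChatCache example (content, user_id, session_id, ...)
segments = [
    {
        "content": "How do I reverse a list in Python?",
        "user_id": "user123",
        "session_id": "session456",
        "timestamp": 1234567890,
        "model": "gpt-4"
    }
]

with open("segments.json", "w") as f:
    json.dump(segments, f, indent=2)
```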
## API Usage
### Start the API Server
```bash
# Start the FastAPI server
python3 -m prarabdha.api.app
```
### API Endpoints
```python
import requests
# Health check
response = requests.get("http://localhost:8000/health")
print(response.json())
# Ingest segments
segments = [
    {
        "content": "Hello, how can I help you?",
        "user_id": "user123",
        "session_id": "session456",
        "timestamp": 1234567890,
        "model": "gpt-4"
    }
]

response = requests.post("http://localhost:8000/ingest", json={
    "segments": segments,
    "enable_rag": True,
    "similarity_threshold": 0.8
})
print(response.json())

# Search similar segments
response = requests.post("http://localhost:8000/search", json={
    "query": "Python programming help",
    "k": 3,
    "threshold": 0.7
})
print(response.json())
# Get statistics
response = requests.get("http://localhost:8000/stats")
print(response.json())
```
## Configuration
### Redis Configuration
```python
from prarabdha import SegmentCacheManagerFactory
# Create cache manager with Redis
redis_config = {
    'host': 'localhost',
    'port': 6379,
    'db': 0
}
cache_manager = SegmentCacheManagerFactory.create_redis_manager(redis_config)
```
### Custom Strategies
```python
from prarabdha import CacheStrategy, SegmentCacheManager
class CustomStrategy(CacheStrategy):
    def should_cache(self, segment):
        # Custom caching logic
        return len(segment.get('content', '')) > 50

    def generate_key(self, segment):
        # Custom key generation
        return f"{segment['user_id']}:{hash(segment['content'])}"

    def extract_features(self, segment):
        # Custom feature extraction
        return {
            'content_length': len(segment.get('content', '')),
            'user_id': segment.get('user_id', ''),
            'timestamp': segment.get('timestamp', 0)
        }

# Use custom strategy
cache_manager = SegmentCacheManager(strategy=CustomStrategy())
```
## Architecture
### Multi-layer Cache Architecture
```
┌─────────────────┐
│  Memory Cache   │ ← LRU/LFU with TTL
│     (RAM)       │
├─────────────────┤
│   Disk Cache    │ ← Persistent storage
│    (Local)      │
├─────────────────┤
│   Redis Cache   │ ← Distributed cache
│   (Optional)    │
└─────────────────┘
```
### Vector Index Architecture
```
┌─────────────────┐
│   Input Data    │
│  (Text/Audio/   │
│     Video)      │
├─────────────────┤
│    Feature      │ ← Embedding generation
│   Extraction    │
├─────────────────┤
│   FAISS Index   │ ← Vector similarity
│   (Flat/IVF/    │   search
│     HNSW)       │
└─────────────────┘
```
## Performance Features
### Eviction Policies
- **LRU (Least Recently Used)**: Removes least recently accessed items
- **LFU (Least Frequently Used)**: Removes least frequently accessed items
- **TTL (Time To Live)**: Automatic expiration based on time
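As a rough sketch of how LRU eviction and TTL can combine (an illustrative toy, not Prarabdha's internal implementation):

```python
import time
from collections import OrderedDict

class LRUCacheWithTTL:
    """Minimal LRU cache with per-entry TTL (illustrative sketch)."""

    def __init__(self, max_size=2, default_ttl=60.0):
        self.max_size = max_size
        self.default_ttl = default_ttl
        self._data = OrderedDict()  # key -> (value, expiry timestamp)

    def put(self, key, value, ttl=None):
        self._data.pop(key, None)  # re-insert so the key moves to the end
        self._data[key] = (value, time.time() + (ttl or self.default_ttl))
        if len(self._data) > self.max_size:
            self._data.popitem(last=False)  # evict least recently used

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expiry = item
        if time.time() > expiry:  # TTL expired: drop lazily on access
            del self._data[key]
            return None
        self._data.move_to_end(key)  # mark as recently used
        return value

cache = LRUCacheWithTTL(max_size=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # touch "a", so "b" becomes least recently used
cache.put("c", 3)      # capacity exceeded: evicts "b"
print(cache.get("b"))  # None
```

An LFU variant would track an access count per key and evict the key with the smallest count instead of the oldest insertion order.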
### Scaling Features
- **Auto-sharding**: Automatic Redis sharding for horizontal scaling
- **Multi-layer promotion**: Hot data moves to faster layers
- **Background processing**: Async ingestion for high throughput
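The promotion idea above can be sketched with a hypothetical layer interface (not the package's actual classes): on a hit in a slower layer, the value is copied into every faster layer above it.

```python
class Layer:
    """Toy cache layer backed by a dict (stand-in for RAM/disk/Redis)."""
    def __init__(self, name):
        self.name = name
        self.store = {}
    def get(self, key):
        return self.store.get(key)
    def put(self, key, value):
        self.store[key] = value

class MultiLayerCache:
    def __init__(self, layers):
        self.layers = layers  # ordered fastest -> slowest

    def get(self, key):
        for i, layer in enumerate(self.layers):
            value = layer.get(key)
            if value is not None:
                # Promote hot data into every faster layer
                for faster in self.layers[:i]:
                    faster.put(key, value)
                return value
        return None

ram, disk = Layer("ram"), Layer("disk")
cache = MultiLayerCache([ram, disk])
disk.put("k", "v")
print(cache.get("k"))  # "v" — hit on disk, now also cached in ram
print(ram.get("k"))    # "v"
```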
### Monitoring
- **Hit rate tracking**: Real-time cache performance metrics
- **Memory usage**: Per-layer memory consumption
- **Request statistics**: Detailed request/response tracking
## Testing
### Run Tests
```bash
# Run unit tests
python3 -m unittest prarabdha.tests.test_kv_backends
# Run simple example
python3 simple_example.py
# Run comprehensive example
python3 example_usage.py
# Test API
python3 test_api.py
```
### Example Output
```
Prarabdha Clean Import Interface Demo
==================================================
=== Chat Caching Demo ===
Cached chat segment with key: 68d3ccc8fd1c11dadd2d559e214dd360:28d5d305a0cc8e573a65c8aaa9bd4412
Retrieved: Hello, how can I help you with Python programming?
Chat cache hit rate: 100.00%
=== Audio Caching Demo ===
Cached audio features with key: audio1:mfcc
Retrieved audio features shape: (13, 100)
Audio cache: 1 files, 1 features
=== Video Caching Demo ===
Cached video segment with key: video1:segment:seg1
Retrieved video segment: video1:seg1
Video cache: 1 videos, 1 segments
=== RAG Caching Demo ===
Added document with 1 chunks
Found 2 similar chunks
RAG cache: 1 documents, 1 chunks
```
## Roadmap
### Completed Features
- [x] Multi-layer KV store (RAM, disk, Redis)
- [x] LRU/LFU eviction policies
- [x] TTL support for all layers
- [x] FAISS vector similarity search
- [x] RAG chunk indexing
- [x] Audio and video caching
- [x] FastAPI async ingestion
- [x] CLI management tools
- [x] Redis auto-sharding
- [x] Background processing
- [x] Clean import interface
### Planned Features
- [x] GPU-native tier for vLLM prefill (Basic implementation)
- [x] Persistent metadata store (SQLite-based)
- [x] Exportable cache storage (JSON, Parquet, SQLite)
- [x] Compression and encryption (AES-256 + GZIP/LZMA/ZLIB)
- [x] Plugin support for custom encoders
- [x] Advanced monitoring dashboard (Real-time WebSocket)
## Contributing
1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Ensure all tests pass
5. Submit a pull request
## License
MIT License - see LICENSE file for details.
## Support
For questions and support, please open an issue on GitHub or contact the maintainers.
---
**Prarabdha** - Empowering AI applications with intelligent caching.