cortex-memory-sdk


Name: cortex-memory-sdk
Version: 2.0.3
Home page: None
Summary: 🧠 The Smart Context Layer for Prompt Chains in LLMs - Enterprise-grade context-aware AI system with semantic understanding and self-evolving memory. Built by Vaishakh Vipin (https://github.com/VaishakhVipin) - Advanced context management for LLMs with Redis-backed semantic search, self-evolving patterns, and multi-provider support (Gemini, Claude, OpenAI).
Upload time: 2025-07-30 17:35:16
Maintainer: None
Docs URL: None
Author: None
Requires Python: >=3.8
License: None
Keywords: ai, memory, context, semantic, embeddings, llm, prompt-chains, machine-learning, nlp, artificial-intelligence, context-aware
Requirements: No requirements were recorded.
Travis-CI: No Travis.
Coveralls test coverage: No coveralls.
# 🧠 Cortex Memory SDK

**The Smart Context Layer for Prompt Chains in LLMs**

Built by [Vaishakh Vipin](https://github.com/VaishakhVipin)

## Overview

Cortex Memory SDK is an enterprise-grade context-aware AI system that provides intelligent memory management for Large Language Models (LLMs). It combines semantic understanding with self-evolving patterns to deliver the most relevant context for your AI applications.

## 🚀 Key Features

- **Semantic Context Matching**: Redis-backed semantic search using sentence transformers
- **Self-Evolving Patterns**: Advanced statistical pattern recognition for context relevance
- **Multi-LLM Support**: Seamless integration with Gemini, Claude, and OpenAI
- **Hybrid Context Mode**: Combines semantic and self-evolving context for optimal results
- **Adaptive Context Selection**: Automatically chooses the best context method
- **Auto-Pruning System**: Intelligently manages memory storage and cleanup
- **Semantic Drift Detection**: Monitors and adapts to changing conversation patterns

## 🛠️ Installation

```bash
pip install cortex-memory-sdk
```

## 📖 Quick Start

```python
from cortex_memory import CortexClient

# Initialize the client
client = CortexClient(api_key="your_api_key")

# Generate context-aware responses
response = client.generate_with_context(
    user_id="user123",
    prompt="What did we discuss about AI yesterday?",
    provider="gemini"  # or "claude", "openai", "auto"
)

print(response)
```
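
If you prefer not to hard-code credentials, the same client can be created with a key read from the environment. The `CORTEX_API_KEY` variable name below is just a convention chosen for this example, not something the SDK requires:

```python
import os

from cortex_memory import CortexClient

# Read the API key from an environment variable instead of embedding it in source.
# CORTEX_API_KEY is an arbitrary name chosen for this illustration.
client = CortexClient(api_key=os.environ["CORTEX_API_KEY"])
```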

## 🔧 Advanced Usage

### Hybrid Context Mode
```python
from cortex_memory.context_manager import generate_with_hybrid_context

response = generate_with_hybrid_context(
    user_id="user123",
    prompt="Explain the latest developments in AI",
    provider="claude"
)
```

### Adaptive Context Selection
```python
from cortex_memory.context_manager import generate_with_adaptive_context

response = generate_with_adaptive_context(
    user_id="user123",
    prompt="What are the key points from our previous meetings?",
    provider="auto"  # Automatically selects best provider
)
```
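
### Example: Simple Chat Loop
For a quick interactive session, the `generate_with_context` call from the Quick Start can be wrapped in an ordinary loop. The loop below is only an illustration of how the pieces fit together; it is not an API provided by the SDK:

```python
from cortex_memory import CortexClient

client = CortexClient(api_key="your_api_key")

# Minimal REPL-style loop around the documented generate_with_context call.
while True:
    prompt = input("you> ").strip()
    if prompt.lower() in {"quit", "exit"}:
        break
    reply = client.generate_with_context(
        user_id="user123",
        prompt=prompt,
        provider="auto",  # let the SDK pick a provider, as in the examples above
    )
    print(f"cortex> {reply}")
```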

## 🏗️ Architecture

- **Redis**: High-performance memory storage with semantic embeddings (see the sketch after this list)
- **Sentence Transformers**: Dense vector embeddings for semantic similarity
- **Statistical Pattern Recognition**: Robust algorithms for context scoring
- **Multi-Provider LLM Integration**: Unified interface for all major LLM providers
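
To make this concrete, the sketch below shows how Redis and sentence transformers can be combined for semantic memory retrieval in general. It illustrates the pattern, not the SDK's internals; the Redis key layout and the embedding model name are assumptions made for the example:

```python
import numpy as np
import redis
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
r = redis.Redis(host="localhost", port=6379)

def store_memory(user_id, memory_id, text):
    """Embed a piece of conversation and keep both text and vector in Redis."""
    vector = model.encode(text).astype(np.float32)
    r.hset(f"memory:{user_id}:{memory_id}", mapping={
        "text": text,
        "embedding": vector.tobytes(),
    })

def recall(user_id, query, top_k=3):
    """Return the stored texts most similar to the query by cosine similarity."""
    q = model.encode(query).astype(np.float32)
    scored = []
    for key in r.scan_iter(f"memory:{user_id}:*"):
        entry = r.hgetall(key)
        v = np.frombuffer(entry[b"embedding"], dtype=np.float32)
        score = float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
        scored.append((score, entry[b"text"].decode()))
    scored.sort(key=lambda item: item[0], reverse=True)
    return [text for _, text in scored[:top_k]]
```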

## 📊 Performance

- **Fast Retrieval**: Redis-pipelined operations for sub-second context retrieval (see the pipelining sketch after this list)
- **Efficient Storage**: Optimized embedding storage and compression
- **Scalable**: Designed for enterprise-scale deployments
- **Cost-Effective**: Intelligent context selection reduces token usage
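
The fast-retrieval point relies on Redis pipelining, which batches many commands into a single network round trip. Here is a generic redis-py sketch of the idea; the key layout is hypothetical, not the SDK's:

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Hypothetical key layout for a user's stored memories.
keys = [f"memory:user123:{i}" for i in range(100)]

# Queue all reads and send them in one round trip instead of 100.
pipe = r.pipeline()
for key in keys:
    pipe.hgetall(key)
entries = pipe.execute()  # one result per queued command, in order
```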

## 🔒 Security

- API key authentication
- Rate limiting and usage tracking
- Secure Redis connections (see the example after this list)
- Privacy-focused design
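
If you run your own Redis instance alongside the SDK, redis-py supports TLS connections out of the box; the host, port, and credentials below are placeholders:

```python
import redis

# TLS-encrypted Redis connection; host, port, and password are placeholders.
r = redis.Redis(
    host="redis.example.com",
    port=6380,
    password="your_redis_password",
    ssl=True,
    ssl_cert_reqs="required",  # verify the server certificate
)
r.ping()  # raises if the connection or TLS handshake fails
```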

## 📚 Documentation

For detailed documentation, visit: [GitHub Repository](https://github.com/VaishakhVipin/cortex-memory)

## 🤝 Contributing

We welcome contributions! Please see our [Contributing Guidelines](https://github.com/VaishakhVipin/cortex-memory/blob/main/CONTRIBUTING.md) for details.

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](https://github.com/VaishakhVipin/cortex-memory/blob/main/LICENSE) file for details.

## 🆘 Support

- **Issues**: [GitHub Issues](https://github.com/VaishakhVipin/cortex-memory/issues)
- **Discussions**: [GitHub Discussions](https://github.com/VaishakhVipin/cortex-memory/discussions)
- **Email**: vaishakh.obelisk@gmail.com

---

**Built with ❤️ by [Vaishakh Vipin](https://github.com/VaishakhVipin)**

Transform your LLM applications with intelligent context management. 🧠✨ 

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "cortex-memory-sdk",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": "Cortex Team <vaishakh.obelisk@gmail.com>",
    "keywords": "ai, memory, context, semantic, embeddings, llm, prompt-chains, machine-learning, nlp, artificial-intelligence, context-aware",
    "author": null,
    "author_email": "Cortex Team <obeliskacquisitions@gmail.com>",
    "download_url": "https://files.pythonhosted.org/packages/05/73/3d2aed27275831614b3528c98ccb26565b2793bd04aaf8005aafc81fe570/cortex_memory_sdk-2.0.3.tar.gz",
    "platform": null,
    "description": "# \ud83e\udde0 Cortex Memory SDK\r\n\r\n**The Smart Context Layer for Prompt Chains in LLMs**\r\n\r\nBuilt by [Vaishakh Vipin](https://github.com/VaishakhVipin)\r\n\r\n## Overview\r\n\r\nCortex Memory SDK is an enterprise-grade context-aware AI system that provides intelligent memory management for Large Language Models (LLMs). It combines semantic understanding with self-evolving patterns to deliver the most relevant context for your AI applications.\r\n\r\n## \ud83d\ude80 Key Features\r\n\r\n- **Semantic Context Matching**: Redis-backed semantic search using sentence transformers\r\n- **Self-Evolving Patterns**: Advanced statistical pattern recognition for context relevance\r\n- **Multi-LLM Support**: Seamless integration with Gemini, Claude, and OpenAI\r\n- **Hybrid Context Mode**: Combines semantic and self-evolving context for optimal results\r\n- **Adaptive Context Selection**: Automatically chooses the best context method\r\n- **Auto-Pruning System**: Intelligently manages memory storage and cleanup\r\n- **Semantic Drift Detection**: Monitors and adapts to changing conversation patterns\r\n\r\n## \ud83d\udee0\ufe0f Installation\r\n\r\n```bash\r\npip install cortex-memory-sdk\r\n```\r\n\r\n## \ud83d\udcd6 Quick Start\r\n\r\n```python\r\nfrom cortex_memory import CortexClient\r\n\r\n# Initialize the client\r\nclient = CortexClient(api_key=\"your_api_key\")\r\n\r\n# Generate context-aware responses\r\nresponse = client.generate_with_context(\r\n    user_id=\"user123\",\r\n    prompt=\"What did we discuss about AI yesterday?\",\r\n    provider=\"gemini\"  # or \"claude\", \"openai\", \"auto\"\r\n)\r\n\r\nprint(response)\r\n```\r\n\r\n## \ud83d\udd27 Advanced Usage\r\n\r\n### Hybrid Context Mode\r\n```python\r\nfrom cortex_memory.context_manager import generate_with_hybrid_context\r\n\r\nresponse = generate_with_hybrid_context(\r\n    user_id=\"user123\",\r\n    prompt=\"Explain the latest developments in AI\",\r\n    provider=\"claude\"\r\n)\r\n```\r\n\r\n### Adaptive Context Selection\r\n```python\r\nfrom cortex_memory.context_manager import generate_with_adaptive_context\r\n\r\nresponse = generate_with_adaptive_context(\r\n    user_id=\"user123\",\r\n    prompt=\"What are the key points from our previous meetings?\",\r\n    provider=\"auto\"  # Automatically selects best provider\r\n)\r\n```\r\n\r\n## \ud83c\udfd7\ufe0f Architecture\r\n\r\n- **Redis**: High-performance memory storage with semantic embeddings\r\n- **Sentence Transformers**: Dense vector embeddings for semantic similarity\r\n- **Statistical Pattern Recognition**: Robust algorithms for context scoring\r\n- **Multi-Provider LLM Integration**: Unified interface for all major LLM providers\r\n\r\n## \ud83d\udcca Performance\r\n\r\n- **Fast Retrieval**: Redis-pipelined operations for sub-second context retrieval\r\n- **Efficient Storage**: Optimized embedding storage and compression\r\n- **Scalable**: Designed for enterprise-scale deployments\r\n- **Cost-Effective**: Intelligent context selection reduces token usage\r\n\r\n## \ud83d\udd12 Security\r\n\r\n- API key authentication\r\n- Rate limiting and usage tracking\r\n- Secure Redis connections\r\n- Privacy-focused design\r\n\r\n## \ud83d\udcda Documentation\r\n\r\nFor detailed documentation, visit: [GitHub Repository](https://github.com/VaishakhVipin/cortex-memory)\r\n\r\n## \ud83e\udd1d Contributing\r\n\r\nWe welcome contributions! 
Please see our [Contributing Guidelines](https://github.com/VaishakhVipin/cortex-memory/blob/main/CONTRIBUTING.md) for details.\r\n\r\n## \ud83d\udcc4 License\r\n\r\nThis project is licensed under the MIT License - see the [LICENSE](https://github.com/VaishakhVipin/cortex-memory/blob/main/LICENSE) file for details.\r\n\r\n## \ud83c\udd98 Support\r\n\r\n- **Issues**: [GitHub Issues](https://github.com/VaishakhVipin/cortex-memory/issues)\r\n- **Discussions**: [GitHub Discussions](https://github.com/VaishakhVipin/cortex-memory/discussions)\r\n- **Email**: vaishakh.obelisk@gmail.com\r\n\r\n---\r\n\r\n**Built with \u2764\ufe0f by [Vaishakh Vipin](https://github.com/VaishakhVipin)**\r\n\r\nTransform your LLM applications with intelligent context management. \ud83e\udde0\u2728 \r\n",
    "bugtrack_url": null,
    "license": null,
    "summary": "\ud83e\udde0 The Smart Context Layer for Prompt Chains in LLMs - Enterprise-grade context-aware AI system with semantic understanding and self-evolving memory. Built by Vaishakh Vipin (https://github.com/VaishakhVipin) - Advanced context management for LLMs with Redis-backed semantic search, self-evolving patterns, and multi-provider support (Gemini, Claude, OpenAI).",
    "version": "2.0.3",
    "project_urls": {
        "Bug Tracker": "https://github.com/VaishakhVipin/cortex-memory/issues",
        "Changelog": "https://github.com/VaishakhVipin/cortex-memory/commits/main/",
        "Documentation": "https://github.com/VaishakhVipin/cortex-memory/tree/main/backend/docs/README.md",
        "Download": "https://pypi.org/project/cortex-memory/#files",
        "Homepage": "https://github.com/VaishakhVipin/cortex-memory",
        "Repository": "https://github.com/VaishakhVipin/cortex-memory",
        "Source Code": "https://github.com/VaishakhVipin/cortex-memory"
    },
    "split_keywords": [
        "ai",
        " memory",
        " context",
        " semantic",
        " embeddings",
        " llm",
        " prompt-chains",
        " machine-learning",
        " nlp",
        " artificial-intelligence",
        " context-aware"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "4ba4c6bb4ea00a9297925eb89e1ea8c16fbcf897cd7110b68216f399702a391e",
                "md5": "796d8505a5b37cbb8d772777e2c2808c",
                "sha256": "8c2c4883e4df1bac72640611a215ea78319aa02ba1ae8e6bdea86c0f496d1760"
            },
            "downloads": -1,
            "filename": "cortex_memory_sdk-2.0.3-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "796d8505a5b37cbb8d772777e2c2808c",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 47607,
            "upload_time": "2025-07-30T17:34:36",
            "upload_time_iso_8601": "2025-07-30T17:34:36.480748Z",
            "url": "https://files.pythonhosted.org/packages/4b/a4/c6bb4ea00a9297925eb89e1ea8c16fbcf897cd7110b68216f399702a391e/cortex_memory_sdk-2.0.3-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "05733d2aed27275831614b3528c98ccb26565b2793bd04aaf8005aafc81fe570",
                "md5": "406ab793ce978bab1c9f7eb503c8a9fc",
                "sha256": "736b2ae8eea2cf84f1117b420eb04186aa41c57882af60b9b0f142a5deb7e169"
            },
            "downloads": -1,
            "filename": "cortex_memory_sdk-2.0.3.tar.gz",
            "has_sig": false,
            "md5_digest": "406ab793ce978bab1c9f7eb503c8a9fc",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 53240,
            "upload_time": "2025-07-30T17:35:16",
            "upload_time_iso_8601": "2025-07-30T17:35:16.851547Z",
            "url": "https://files.pythonhosted.org/packages/05/73/3d2aed27275831614b3528c98ccb26565b2793bd04aaf8005aafc81fe570/cortex_memory_sdk-2.0.3.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-07-30 17:35:16",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "VaishakhVipin",
    "github_project": "cortex-memory",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "lcname": "cortex-memory-sdk"
}
        