cortex-intelligence


Name: cortex-intelligence
Version: 0.1.0
Home page: None
Summary: A three-layer intelligence engine with ONNX integration for Open Agent Spec
Upload time: 2025-08-03 02:25:26
Maintainer: None
Docs URL: None
Author: None
Author email: Andrew Whitehouse <andrew@primevector.com.au>
Requires Python: >=3.8
License: None
Keywords: ai, intelligence, onnx, open-agent-spec, machine-learning, neural-networks
Project URLs: Homepage/Repository https://github.com/prime-vector/cortex, Bug Tracker https://github.com/prime-vector/cortex/issues
Requirements: No requirements were recorded.
# Cortex Intelligence Engine 🧠⚡

[![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Tests](https://img.shields.io/badge/tests-20%20passed-brightgreen.svg)](tests/)

A **three-layer intelligence engine** designed for integration with [Open Agent Spec](https://github.com/openagents/openagents), providing advanced sensory processing, interpretation, and reasoning capabilities with **intelligent filtering and cost optimization**.

## 🎯 What is Cortex?

Cortex acts as an **intelligent filtering layer** between your OAS agents and expensive external LLMs. Instead of routing every request directly to external LLMs, Cortex:

1. **Processes input through Layer 1** (sensory processing)
2. **Analyzes with Layer 2** (internal intelligence - rule-based or ONNX models)
3. **Decides whether to trigger Layer 3** (external LLM) based on complexity and importance
4. **Provides intelligent responses** with suggested actions

**Result**: 60-80% cost reduction while maintaining intelligence! 💰🧠
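
The savings follow directly from the gating step: only requests that Layer 2 escalates ever reach a paid external LLM. A rough back-of-the-envelope sketch (plain Python, not the Cortex API; the 70/30 split mirrors the example statistics later in this README, and the per-call price is purely illustrative):

```python
# Hypothetical request mix: Layer 2 resolves 70 of 100 requests locally,
# so only 30 are escalated to the paid external LLM.
total_requests = 100
handled_by_layer2 = 70
external_calls = total_requests - handled_by_layer2  # 30

cost_per_external_call = 0.01  # illustrative price per LLM call, in dollars
baseline_cost = total_requests * cost_per_external_call  # every request hits the LLM
cortex_cost = external_calls * cost_per_external_call    # only escalated requests do

savings = 1 - cortex_cost / baseline_cost
print(f"External-call cost reduction: {savings:.0%}")  # 70%
```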

## 🏗️ Architecture

Cortex implements a three-layer architecture:

```
OAS Agent → Cortex Intelligence Engine → External LLM (if needed)
                ↓
            Layer 1: Sensory Processing (Text, Image, Audio)
                ↓
            Layer 2: Internal Intelligence (Rule-based/ONNX/Local LLM)
                ↓
            Layer 3: External LLM (OpenAI, Claude, etc.)
```

### Layer Breakdown

1. **Layer 1 - Sensory Processing**: Raw data processing for vision, audio, and text inputs
2. **Layer 2 - Interpretation**: Onboard ONNX/LLM for base-level data interpretation and reaction decisions
3. **Layer 3 - Reasoning**: External LLM vendor integration (OpenAI, Claude, etc.) for complex reasoning

## ✨ Features

- **🧠 Intelligent Filtering**: Automatically decides when to use expensive external LLMs
- **🤖 ONNX Integration**: Local neural network models for intelligent decision-making
- **💰 Cost Optimization**: 60-80% reduction in external API calls
- **⚡ Performance**: Sub-second responses for simple queries
- **🔄 Multi-modal Processing**: Handles text, images, and audio data
- **🎯 Smart Routing**: Determines when complex reasoning is needed
- **🔌 Flexible LLM Integration**: Supports multiple LLM providers with fallback mechanisms
- **🤖 Open Agent Spec Compatible**: Drop-in replacement for standard OAS intelligence engines
- **📊 Performance Monitoring**: Built-in metrics and status tracking
- **📦 Batch Processing**: Efficient processing of multiple inputs
- **⚙️ Configurable Processing Modes**: Reactive, proactive, and selective processing

## 🚀 Installation

```bash
# Clone the repository
git clone https://github.com/yourusername/cortex.git
cd cortex

# Install in development mode
pip install -e .
```

### 📋 Dependencies

- **Python 3.8+**
- **numpy** - Numerical computing
- **Pillow** - Image processing
- **onnxruntime** - Layer 2 ONNX models
- **aiohttp** - Async HTTP requests
- **pydantic** - Data validation

## ⚡ Quick Start

### Basic Usage

```python
import asyncio
from cortex import create_simple_cortex, create_cortex_function

async def main():
    # Create a basic Cortex instance
    cortex = create_simple_cortex(
        openai_api_key="your_openai_key",  # Optional
        claude_api_key="your_claude_key",  # Optional
        enable_layer3=True
    )
    
    # Process some input
    result = await cortex.process_input(
        data="Analyze this urgent system alert!",
        context="System monitoring detected anomaly"
    )
    
    print(f"Response: {result.final_response}")
    print(f"Suggested actions: {result.suggested_actions}")

asyncio.run(main())
```

### 🤖 Open Agent Spec Integration

```python
from cortex import create_cortex_oas_intelligence, create_cortex_oas_function

# Configure like standard OAS intelligence
config = {
    "type": "cortex",
    "config": {
        "processing_mode": "reactive",
        "layer2_threshold": 0.6,
        "enable_layer3": True,
        "external_engine": "openai",
        "external_model": "gpt-4"
    }
}

# Create OAS-compatible intelligence engine
intelligence = create_cortex_oas_intelligence(config)
oas_function = create_cortex_oas_function(intelligence)

# Use in your OAS agent (call this from within an async function)
response = await oas_function(
    prompt="URGENT: System failure detected!",
    context={"user_id": "123", "session_id": "456"}
)

print(f"Triggered Layer 3: {response['metadata']['triggered_layer3']}")
```

📖 **For a detailed OAS integration guide, see [docs/OAS_INTEGRATION.md](docs/OAS_INTEGRATION.md)**

### 🧠 ONNX Integration

```python
from cortex import create_cortex_oas_intelligence, CortexOASConfig

# Configure with ONNX models for intelligent filtering
config = CortexOASConfig(
    layer2_engine="onnx",  # Enable ONNX models
    layer2_threshold=0.6,  # Confidence threshold
    enable_layer3=True,    # Allow external LLM calls
    external_engine="openai",
    external_model="gpt-4"
)

# Create intelligence engine with ONNX-powered Layer 2
intelligence = create_cortex_oas_intelligence(config)

# Process input (from an async context) - ONNX models will decide if Layer 3 is needed
response = await intelligence.process("URGENT: System failure detected!")
print(f"Triggered Layer 3: {response['metadata']['triggered_layer3']}")
```

📖 **For a detailed ONNX integration guide, see [docs/ONNX_INTEGRATION.md](docs/ONNX_INTEGRATION.md)**

## 📚 Usage Examples

### 📝 Text Processing

```python
result = await cortex.process_input(
    data="Emergency: Database server is down!",
    sense_type="text",
    task_type="incident_response"
)
```

### 🖼️ Image Processing

```python
from PIL import Image

image = Image.open("photo.jpg")
result = await cortex.process_input(
    data=image,
    sense_type="vision",
    context="Security camera feed analysis"
)
```

### 🎵 Audio Processing

```python
import numpy as np

# Audio data as numpy array
audio_data = np.load("audio_sample.npy")
result = await cortex.process_input(
    data=audio_data,
    sense_type="audio",
    task_type="sound_classification"
)
```

### 📦 Batch Processing

```python
inputs = [
    {"data": "Message 1", "task_type": "general"},
    {"data": "URGENT: Alert message", "task_type": "alert"},
    {"data": image_data, "sense_type": "vision"}  # e.g. a PIL Image, as in the example above
]

results = await cortex.batch_process(inputs)
```

## ⚙️ Configuration

### 🔄 Processing Modes

| Mode | Behavior | Use Case |
|------|----------|----------|
| **REACTIVE** | Layer 3 is used only when Layer 2 decides it is needed | Cost-effective, intelligent filtering |
| **PROACTIVE** | Always use all layers | Maximum intelligence, higher cost |
| **SELECTIVE** | Custom logic decides when to use Layer 3 (see the sketch below) | Custom filtering rules |

```python
from cortex.core import Cortex, CortexConfig, ProcessingMode

config = CortexConfig(
    processing_mode=ProcessingMode.REACTIVE,
    layer2_threshold=0.6,
    enable_layer3=True,
    max_processing_time=30.0
)

cortex = Cortex(config)
```
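
For the **SELECTIVE** mode, the custom rule is yours to supply. A minimal, standalone sketch of the kind of check such a rule might perform (the `should_trigger_layer3` helper is hypothetical, not part of the documented Cortex API, and how it is registered depends on your integration):

```python
def should_trigger_layer3(text: str, layer2_confidence: float, threshold: float = 0.6) -> bool:
    """Hypothetical SELECTIVE-mode rule: escalate on urgent keywords or low Layer 2 confidence."""
    urgent = any(keyword in text.lower() for keyword in ("urgent", "error", "help", "assist"))
    return urgent or layer2_confidence < threshold
```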

### 🤖 LLM Provider Configuration

```python
from cortex.layers.layer3 import LLMConfig, LLMProvider

# OpenAI Configuration
openai_config = LLMConfig(
    provider=LLMProvider.OPENAI,
    api_key="your_key",
    model="gpt-3.5-turbo",
    max_tokens=1000,
    temperature=0.7
)

# Claude Configuration
claude_config = LLMConfig(
    provider=LLMProvider.CLAUDE,
    api_key="your_key",
    model="claude-3-sonnet-20240229",
    max_tokens=1000,
    temperature=0.7
)

cortex.add_llm_provider(openai_config, is_primary=True)
cortex.add_llm_provider(claude_config, is_primary=False)  # Fallback
```

## 🤖 Open Agent Spec Integration

Cortex is designed as a **drop-in replacement** for standard OAS intelligence engines:

### Basic OAS Configuration

```yaml
name: "my_cortex_agent"
intelligence:
  type: "cortex"
  engine: "cortex-hybrid"
  config:
    processing_mode: "reactive"
    layer2_threshold: 0.6
    enable_layer3: true
    external_engine: "openai"
    external_model: "gpt-4"
    external_endpoint: "https://api.openai.com/v1"
    temperature: 0.7
    max_tokens: 150
    trigger_keywords: ["help", "urgent", "error", "assist"]
```
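
One way to wire such a spec file into the Python API shown earlier is sketched below (assumptions: PyYAML is installed, the file is saved as `agent.yaml`, and `create_cortex_oas_intelligence` accepts the same dict shape as in the Quick Start example):

```python
import yaml  # assumes PyYAML is available

from cortex import create_cortex_oas_intelligence

# Load the agent spec shown above (hypothetical filename)
with open("agent.yaml") as f:
    agent_spec = yaml.safe_load(f)

# Hand the intelligence block to Cortex, as in the earlier dict-based example
intelligence = create_cortex_oas_intelligence(agent_spec["intelligence"])
```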

### Response Format

Cortex returns OAS-compatible responses:

```json
{
  "success": true,
  "response": "Analysis and response from Cortex",
  "actions": ["suggested_action_1", "suggested_action_2"],
  "metadata": {
    "layers_used": ["layer1", "layer2", "layer3"],
    "processing_time": 1.23,
    "confidence": 0.85,
    "cortex_stats": {
      "total_requests": 100,
      "layer2_only": 70,
      "layer3_triggered": 30,
      "average_response_time": 0.5
    },
    "triggered_layer3": true
  },
  "error": null
}
```
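
Because this is a plain JSON/dict structure, an agent can branch on it directly. A minimal sketch, assuming `response` holds the parsed structure shown above:

```python
if response["success"]:
    print(response["response"])
    for action in response["actions"]:
        print(f"- suggested action: {action}")
    if response["metadata"]["triggered_layer3"]:
        print("This request was escalated to the external LLM")
else:
    print(f"Cortex error: {response['error']}")
```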

📖 **For the comprehensive OAS integration guide, see [docs/OAS_INTEGRATION.md](docs/OAS_INTEGRATION.md)**

## 📊 Monitoring and Metrics

### Performance Statistics

```python
# Get current status
status = cortex.get_status()
print(f"Performance metrics: {status['cortex']['performance_metrics']}")

# Get processing history
history = cortex.get_recent_history(limit=10)

# Reset metrics
cortex.reset_metrics()
```

### 📈 Example Output

```
Performance Statistics:
  Total Requests: 100
  Layer 2 Only: 70 (70.0%)
  Layer 3 Triggered: 30 (30.0%)
  Average Response Time: 0.5s
```
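
This output can be reproduced from the `cortex_stats` block shown in the Response Format section. A small sketch, assuming those key names (sample values copied from the JSON example above):

```python
stats = {
    "total_requests": 100,
    "layer2_only": 70,
    "layer3_triggered": 30,
    "average_response_time": 0.5,
}

total = max(stats["total_requests"], 1)  # guard against division by zero
print("Performance Statistics:")
print(f"  Total Requests: {stats['total_requests']}")
print(f"  Layer 2 Only: {stats['layer2_only']} ({stats['layer2_only'] / total:.1%})")
print(f"  Layer 3 Triggered: {stats['layer3_triggered']} ({stats['layer3_triggered'] / total:.1%})")
print(f"  Average Response Time: {stats['average_response_time']}s")
```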

## 🔔 Callback System

```python
async def on_high_priority(data):
    print(f"High priority reaction detected: {data}")

cortex.add_callback("on_reaction_decision", on_high_priority)
```

## 🛠️ Development

### 🧪 Running Tests

```bash
pip install pytest pytest-asyncio
python -m pytest tests/ -v
```

### 🚀 Running Examples

```bash
# Basic usage examples
python examples/basic_usage.py

# OAS integration examples
python examples/oas_integration_example.py
```

### 📁 Project Structure

```
cortex/
├── cortex/
│   ├── __init__.py
│   ├── core.py              # Main Cortex class
│   ├── oas_integration.py   # OAS integration module
│   └── layers/
│       ├── __init__.py
│       ├── layer1.py        # Sensory processing
│       ├── layer2.py        # Interpretation and reaction decisions
│       └── layer3.py        # External LLM reasoning
├── examples/
│   ├── basic_usage.py       # Basic usage examples
│   └── oas_integration_example.py  # OAS integration examples
├── docs/
│   └── OAS_INTEGRATION.md   # Detailed OAS integration guide
├── tests/
│   └── test_basic.py        # Test suite
├── pyproject.toml           # Project configuration
└── README.md
```

## 🤝 Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests for new functionality
5. Run the test suite
6. Submit a pull request

## 📄 License

MIT License - see LICENSE file for details.

## 📚 API Reference

### 🧠 Core Classes

- `Cortex`: Main intelligence engine
- `SenseLayer`: Layer 1 sensory processing
- `InterpretationLayer`: Layer 2 interpretation and reaction decisions
- `ReasoningLayer`: Layer 3 external LLM reasoning

### 🛠️ Utility Functions

- `create_simple_cortex()`: Quick Cortex setup
- `create_cortex_function()`: Create Open Agent Spec compatible function
- `create_cortex_oas_intelligence()`: Create OAS intelligence engine
- `create_cortex_oas_function()`: Create OAS-compatible function

### ⚙️ Configuration Classes

- `CortexConfig`: Main configuration
- `CortexOASConfig`: OAS-specific configuration
- `LLMConfig`: LLM provider configuration
- `ProcessingMode`: Processing mode enumeration

## 🎯 Use Cases

### 💰 Cost Optimization
- **Problem**: High LLM API costs for simple queries
- **Solution**: Cortex filters out simple requests
- **Result**: 60-80% cost reduction

### ⚡ Response Speed
- **Problem**: Slow responses for simple questions
- **Solution**: Layer 2 handles simple queries instantly
- **Result**: Sub-second responses for basic requests

### 🧠 Intelligent Routing
- **Problem**: All requests treated equally
- **Solution**: Cortex analyzes complexity and urgency
- **Result**: Appropriate resource allocation

### 🔄 Multi-modal Support
- **Problem**: Text-only intelligence engines
- **Solution**: Cortex handles text, images, and audio
- **Result**: Rich multi-modal interactions

---

**Cortex** - Intelligent filtering for Open Agent Spec agents. 🧠⚡

            
