# Reasoning Kernel

A **Semantic Kernel-native** reasoning system implementing the **Model Synthesis Architecture (MSA)** for open-world cognitive reasoning. Built entirely on Microsoft Semantic Kernel with plugin-based modularity and enterprise-grade orchestration.
## Project status
This repository is an active work in progress. APIs, configuration, documentation, and examples may change without notice.
- Documentation update plan: `docs/documentation-restructure-plan.md`
- Recent documentation changes: `docs/documentation-update-summary.md`
- System overview and scope: `docs/full-system.md`
## 🚀 Core Features
- **🧠 SK-Native Architecture**: Built entirely on Microsoft Semantic Kernel patterns
- **🔌 Plugin Ecosystem**: Modular reasoning capabilities as SK plugins
- **📋 Intelligent Planning**: SK planners for complex reasoning orchestration
- **💾 Multi-Tier Memory**: Redis/PostgreSQL integration via SK memory abstractions
- **🎯 MSA Pipeline**: Five-stage reasoning process as plugin chains
- **🌐 Multi-Model Support**: Azure OpenAI, Google Gemini, and local models
- **⚡ Production Ready**: FastAPI, streaming, and enterprise deployment
## 🚀 Quick Start
### Prerequisites
- Python 3.10–3.12 (3.13+ is not yet supported due to dependency compatibility)
- Azure OpenAI or Google AI Studio API access
- Redis (optional, for memory features)
### Installation
#### One-Line Installation (Recommended)
For macOS and Linux:
```bash
curl -fsSL https://raw.githubusercontent.com/Qredence/Reasoning-Kernel/main/setup/install.sh | bash
```
For Windows:
```cmd
curl -fsSL https://raw.githubusercontent.com/Qredence/Reasoning-Kernel/main/setup/install.bat -o install.bat
install.bat
```
#### Manual Installation
```bash
# Clone the repository
git clone https://github.com/Qredence/Reasoning-Kernel.git
cd Reasoning-Kernel
# Install with Semantic Kernel support
uv venv && source .venv/bin/activate
uv pip install -e ".[azure,google]"
# Alternative: Install with pip
pip install -e ".[azure,google]"
```
> For a complete setup guide (including environment, optional services, and troubleshooting), see the Installation Guide: `docs/guides/installation.md`.
### Configuration
Set up your environment variables:
```bash
# Azure OpenAI (Recommended)
export AZURE_OPENAI_ENDPOINT="your-endpoint"
export AZURE_OPENAI_API_KEY="your-key"
export AZURE_OPENAI_DEPLOYMENT="gpt-4"
export AZURE_OPENAI_API_VERSION="2024-12-01-preview"
# Google AI (Alternative)
export GOOGLE_AI_API_KEY="your-key"
# Optional: Redis for memory
export REDIS_URL="redis://localhost:6379"
```
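To fail fast on missing credentials, you can preflight-check these variables before initializing the kernel. A minimal sketch (the check is illustrative, not part of the package; the variable names match the exports above):

```python
import os

# Azure OpenAI variables required by the current kernel initialization
REQUIRED = ("AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_API_KEY", "AZURE_OPENAI_DEPLOYMENT")

# Collect any required variables that are unset or empty
missing = [name for name in REQUIRED if not os.environ.get(name)]
if missing:
    raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
```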
### Basic Usage
#### Python SDK
```python
import asyncio
from reasoning_kernel.core.kernel_config import KernelManager
from reasoning_kernel.services.redis_service import create_redis_services
from reasoning_kernel.reasoning_kernel import ReasoningKernel, ReasoningConfig
async def main():
    # Initialize Semantic Kernel (uses Azure OpenAI env vars)
    km = KernelManager()
    await km.initialize()

    # Optional: Redis for memory (uses REDIS_URL or host/port)
    memory_service, _ = create_redis_services()

    # Initialize reasoning system
    rk = ReasoningKernel(kernel=km.kernel, redis_client=memory_service, config=ReasoningConfig())

    # Perform reasoning
    result = await rk.reason(
        "A factory machine has failed and production is stopped. "
        "Analyze the situation and suggest solutions."
    )

    print(result.success, result.overall_confidence)

asyncio.run(main())
```
#### CLI Usage
```bash
# Basic reasoning
reasoning-kernel "Analyze supply chain disruption scenario"
# Use specific reasoning mode
reasoning-kernel --mode knowledge "Factory production failure analysis"
# Interactive mode
reasoning-kernel --interactive
# JSON output for automation
reasoning-kernel --output json "Market analysis request"
```
## 📚 Documentation
- Getting started and concepts
  - Core concepts: `docs/core_concepts.md`
  - Full system overview: `docs/full-system.md`
  - Product requirements (PRD): `docs/reasoning-kernel-PRD.md`
- Architecture
  - MSA framework: `docs/architecture/msa-framework.md`
  - Semantic Kernel architecture: `docs/architecture/semantic-kernel-architecture.md`
  - Thinking exploration: `docs/architecture/thinking-exploration-reasoning-kernel.md`
- API & Services
  - REST API reference: `docs/api/rest-api.md`
- Memory & Redis (MCP)
  - MCP Redis integration: `docs/mcp_redis_integration.md`
  - Redis schema: `docs/memory/redis_schema.md`
  - Visual schema: `docs/memory/redis_visual_schema.md`
  - Implementation summary: `docs/redis-world-model-implementation-summary.md`
- Plugins
  - Plugin development guide: `docs/plugins/development-guide.md`
- Sandbox / Daytona
  - Daytona integration guide: `docs/sandbox/daytona-integration.md`
- CLI Documentation
  - User Guide: `docs/cli/user_guide.md`
  - Command Reference: `docs/cli/command_reference.md`
  - Interactive Tutorials: `docs/cli/tutorials.md`
  - Example Library: `docs/cli/examples.md`
  - Troubleshooting Guide: `docs/cli/troubleshooting.md`
- Papers and resources
  - MSA paper guide: `docs/guides/msa-paper.md`
  - Understanding the paper: `docs/guides/understanding_the_paper.md`
  - Resources: `docs/guides/ressource.md`
## 🧪 Examples
- Gemini integration demo: `examples/gemini_integration_demo.py`
- MCP Redis example: `examples/mcp_redis_example.py`
- Redis world model integration: `examples/redis_world_model_integration_demo.py`
- MSA paper demo: `examples/msa_paper_demo.py`
- Tests overview: `tests/README.md`
## 🏗️ Architecture Overview
The Reasoning Kernel is built on a **Semantic Kernel-native architecture** with the following core components:
### Plugin Ecosystem
```mermaid
graph TB
    subgraph "Semantic Kernel Core"
        K[Kernel Instance]
        P[Planners]
        M[Memory]
        S[AI Services]
    end

    subgraph "Reasoning Plugins"
        AR[Abstract Reasoning]
        CR[Causal Reasoning]
        AnR[Analogical Reasoning]
        LR[Logical Reasoning]
    end

    subgraph "MSA Pipeline Plugins"
        PP[Parsing Plugin]
        KP[Knowledge Plugin]
        GP[Graph Plugin]
        SP[Synthesis Plugin]
        IP[Inference Plugin]
    end

    K --> AR
    K --> PP
    P --> SP
    M --> KP
    S --> CR
```
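As a sketch of what a "reasoning capability as an SK plugin" looks like, the snippet below defines a plain class whose method is exposed to the kernel via `kernel_function`. The `CausalReasoningPlugin` name and its placeholder body are illustrative and assume a recent `semantic-kernel` Python release; see `docs/plugins/development-guide.md` for the project's actual plugin contracts.

```python
from semantic_kernel import Kernel
from semantic_kernel.functions import kernel_function


class CausalReasoningPlugin:
    """Illustrative plugin: one callable the kernel can plan over."""

    @kernel_function(name="identify_causes", description="List plausible causes for an observed event")
    def identify_causes(self, event: str) -> str:
        # A real implementation would call an AI service or a causal model;
        # here we return a placeholder so the sketch stays self-contained.
        return f"Candidate causes for: {event}"


kernel = Kernel()
kernel.add_plugin(CausalReasoningPlugin(), plugin_name="causal")
```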
### MSA Reasoning Pipeline
The system implements a five-stage reasoning process (a minimal orchestration sketch follows the list):
1. **Parse**: Transform natural language into structured representations
2. **Knowledge**: Retrieve relevant background knowledge from memory
3. **Graph**: Build causal dependency graphs
4. **Synthesize**: Generate probabilistic programs (NumPyro)
5. **Inference**: Execute models and compute results
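Conceptually, each stage consumes the previous stage's output, so the pipeline reduces to function composition over a shared state. A minimal sketch (the `PipelineState` fields and bare stage functions are illustrative; the real system chains SK plugins rather than plain callables):

```python
from dataclasses import dataclass
from typing import Any, Callable, Iterable, Tuple


@dataclass
class PipelineState:
    """Accumulator for stage outputs; field names are illustrative."""
    query: str
    parsed: Any = None
    knowledge: Any = None
    graph: Any = None
    program: Any = None
    result: Any = None


def run_pipeline(query: str, stages: Iterable[Tuple[str, Callable]]) -> PipelineState:
    # stages: ordered (name, fn) pairs implementing Parse .. Inference
    state = PipelineState(query=query)
    for _name, stage in stages:
        state = stage(state)  # each stage reads and enriches the state
    return state
```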
### Key Benefits
- **🔌 Modular**: Each reasoning capability as an independent SK plugin
- **🎯 Orchestrated**: SK planners handle complex reasoning workflows
- **💾 Memory-Aware**: Multi-tier memory system for context and knowledge
- **🌐 Multi-Model**: Support for Azure OpenAI, Google, and local models
- **⚡ Scalable**: Production-ready with FastAPI and async processing
## 🔧 Configuration
### Environment Variables
**Primary AI Provider - Azure OpenAI (Required currently):**
```bash
AZURE_OPENAI_API_KEY=your_azure_openai_key
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_DEPLOYMENT=your_deployment_name
AZURE_OPENAI_API_VERSION=2024-12-01-preview
```
**Optional AI Provider - Google AI (Gemini):**
```bash
GOOGLE_AI_API_KEY=your_gemini_api_key
GOOGLE_AI_GEMINI_MODEL_ID=gemini-2.5-pro
GOOGLE_AI_EMBEDDING_MODEL_ID=text-embedding-004
```
**Optional configuration:**
```bash
LOG_LEVEL=INFO # Logging level
LOG_FORMAT=json # Logging format (json or text)
MCMC_NUM_WARMUP=1000 # MCMC warmup steps
MCMC_NUM_SAMPLES=2000 # MCMC sampling steps
MAX_KNOWLEDGE_ENTITIES=50 # Max entities to extract
UNCERTAINTY_THRESHOLD=0.8 # Uncertainty reporting threshold
```
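For reference, the two `MCMC_*` variables map directly onto NumPyro's sampler arguments. A minimal sketch of how such settings are consumed (the beta–bernoulli toy model below stands in for a synthesized probabilistic program):

```python
import os

import jax.numpy as jnp
import jax.random as random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS


def model(obs):
    # Toy model standing in for a synthesized program
    rate = numpyro.sample("rate", dist.Beta(1.0, 1.0))
    numpyro.sample("obs", dist.Bernoulli(rate), obs=obs)


mcmc = MCMC(
    NUTS(model),
    num_warmup=int(os.environ.get("MCMC_NUM_WARMUP", "1000")),
    num_samples=int(os.environ.get("MCMC_NUM_SAMPLES", "2000")),
)
mcmc.run(random.PRNGKey(0), obs=jnp.array([1.0, 0.0, 1.0, 1.0]))
print(mcmc.get_samples()["rate"].mean())
```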
> Note: Gemini support is optional. The current kernel initialization requires Azure OpenAI credentials.
### Structured Logging
The Reasoning Kernel features comprehensive structured logging with JSON output for production environments:
#### Features
- **JSON formatted logs** with structured data for easy parsing
- **Request correlation IDs** automatically added to all requests via the `X-Request-ID` header (see the middleware sketch after this list)
- **Performance metrics** with request duration tracking
- **Service context** automatically added to all log entries
- **Error logging** with full context and error details
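The correlation behavior is easiest to see as a FastAPI middleware. A minimal sketch (illustrative only; the package wires this up internally):

```python
import uuid

from fastapi import FastAPI, Request

app = FastAPI()


@app.middleware("http")
async def add_request_id(request: Request, call_next):
    # Reuse the caller's ID when supplied, otherwise mint a fresh one,
    # and echo it back so clients can correlate logs with responses.
    request_id = request.headers.get("X-Request-ID", str(uuid.uuid4()))
    response = await call_next(request)
    response.headers["X-Request-ID"] = request_id
    return response
```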
#### Logging Configuration
```bash
# Set log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
LOG_LEVEL=INFO
# Set log format (json for structured, text for development)
LOG_FORMAT=json
```
#### Log Structure
JSON logs include the following fields:
```json
{
  "event": "Request completed",
  "service": "reasoning-kernel",
  "component": "request",
  "request_id": "550e8400-e29b-41d4-a716-446655440000",
  "method": "POST",
  "path": "/api/v1/reason",
  "endpoint": "/api/v1/reason",
  "status_code": 200,
  "duration": 0.145,
  "timestamp": 1703875200.123,
  "level": "info"
}
```
#### Usage in Code
```python
from reasoning_kernel.core.logging_config import get_logger, performance_context
# Get a structured logger
logger = get_logger("my_component")
# Log with additional context
logger.info("Processing request", user_id="123", operation="synthesis")
# Track performance with automatic duration logging
with performance_context("model_synthesis", logger):
    # Your code here
    pass
```
## 🏗️ Architecture
The system is built on a modern, scalable architecture:
- **FastAPI**: High-performance async web framework
- **Semantic Kernel**: Microsoft's AI orchestration platform
- **NumPyro**: Probabilistic programming with JAX
- **Pydantic**: Type-safe data validation
- **JAX**: Hardware-accelerated computing
- **Redis Cloud**: Vector search and knowledge storage via MCP integration
### Third-Party Integrations
- **MCP Redis Cloud**: Vendored Model Context Protocol server for Redis Cloud integration (`third_party/mcp-redis-cloud/`)
- Provides vector search, document storage, and caching capabilities
- MIT licensed with preserved attribution
- Integration wrapper at `reasoning_kernel/integrations/mcp_redis.py`
## 🧪 Development
```bash
# Install in development mode
pip install -e .
# Start with hot reload
uvicorn reasoning_kernel.main:app --host 0.0.0.0 --port 5000 --reload
```
### Code Quality
```bash
# Format code
black reasoning_kernel/
isort reasoning_kernel/
# Type checking
mypy reasoning_kernel/
# Static analysis (requires the Datadog CLI; see "Static Analysis" below)
datadog-ci static-analysis scan --config static-analysis.datadog.yml --sarif-file results.sarif
datadog-ci sarif upload --service reasoning-kernel results.sarif
# Run static analysis locally with Docker
docker run --rm -v $(pwd):/workspace \
  datadog/datadog-static-analyzer:latest \
  --config /workspace/static-analysis.datadog.yml \
  /workspace
```
## 🔍 Static Analysis
The project uses Datadog static analysis to ensure code quality and security. The configuration is defined in `static-analysis.datadog.yml` and includes:
- Python best practices and code style
- Security vulnerability detection
- Framework-specific rules (Django, Flask)
- GitHub Actions workflow validation
### Running Static Analysis Locally
#### Option 1: Using Datadog CLI (Recommended)
```bash
# Install Datadog CLI
npm install -g @datadog/datadog-ci
# Run static analysis and generate SARIF file
datadog-ci static-analysis scan --config static-analysis.datadog.yml --sarif-file results.sarif
# Upload SARIF results to Datadog
datadog-ci sarif upload --service reasoning-kernel results.sarif
```
#### Option 2: Using Docker
```bash
# Run static analysis with Docker
docker run --rm -v $(pwd):/workspace \
  datadog/datadog-static-analyzer:latest \
  --config /workspace/static-analysis.datadog.yml \
  /workspace
```
### CI/CD Integration
Static analysis runs automatically on:
- All pull requests
- Pushes to the main branch
#### Required Secrets
To enable the CI workflow, configure these GitHub repository secrets:
- `DD_APP_KEY`: Your Datadog application key
- `DD_API_KEY`: Your Datadog API key
The workflow will:
- ✅ Post results as PR comments
- ✅ Create check status for PRs
- ❌ Block merging on critical/high severity violations
- 📊 Track metrics in Datadog dashboard
## 📊 Performance
The MSA Reasoning Engine is designed for production use:
- **Concurrent Sessions**: Handle multiple reasoning sessions simultaneously (see the sketch after this list)
- **Hardware Acceleration**: JAX-based computation with GPU support
- **Scalable Architecture**: Async processing with FastAPI
- **Memory Efficient**: Streaming inference and garbage collection
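Because `reason()` is a coroutine, independent sessions can be fanned out on a single event loop. A minimal sketch reusing the `rk` instance from the Quick Start example:

```python
import asyncio


async def reason_many(rk, queries):
    # Run independent reasoning sessions concurrently and
    # return results in the same order as the input queries.
    return await asyncio.gather(*(rk.reason(q) for q in queries))
```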
## 🤝 Contributing
We welcome contributions! Please see our contributing guidelines and code of conduct.
## 📄 License
This project is licensed under the Apache-2.0 License (see `pyproject.toml`).
## 🙏 Acknowledgments
- Microsoft Semantic Kernel team for the AI orchestration framework
- NumPyro/JAX teams for probabilistic programming capabilities
- The broader AI reasoning research community
---
## Built with ❤️ for advanced AI reasoning capabilities