ntt-ai-observability-exporter

Name: ntt-ai-observability-exporter
Version: 0.2.9
Uploaded: 2025-09-26 09:40:34
Summary: NTT AI Observability Exporter for Azure Monitor OpenTelemetry in AI Foundry projects
Author email: Anand Vaibhav Singh <anandvaibhav-singh_nttltd@example.com>
Requires-Python: >=3.8
License: MIT
Keywords: ntt, azure, telemetry, opentelemetry, monitoring, ai, observability
# NTT AI Observability Exporter

A telemetry exporter for AI applications built on Azure Monitor OpenTelemetry. It provides advanced telemetry for AI projects that use Azure services, including broad database instrumentation and Azure AI Foundry integration.

## Features

- **Single & Multi-Destination Telemetry**: Send telemetry to one or multiple Azure Monitor instances
- **Automatic instrumentation** of Azure SDK libraries and AI components
- **Comprehensive Database Instrumentation**: PostgreSQL, SQL Server, MySQL, SQLite, MongoDB, Redis, SQLAlchemy
- **Azure AI Foundry Integration**: Built-in tracing for Azure AI Inference and Agents
- **Advanced LangChain Support**: Native Azure AI tracing instead of generic LangChain instrumentation
- **GenAI content recording** for prompts and responses with content sanitization
- **Semantic Kernel Integration**: Full diagnostic support with OTEL tracing
- **Simplified configuration** with graceful error handling
- **Production-ready**: Comprehensive logging, tracing, and metrics collection

## Installation

### Basic Installation
```bash
pip install ntt-ai-observability-exporter
```

### Installation with Database Support
```bash
# Install with comprehensive database instrumentation
pip install ntt-ai-observability-exporter[databases]

# Install with Azure-specific database support
pip install ntt-ai-observability-exporter[azure-databases]

# Install with all optional dependencies
pip install ntt-ai-observability-exporter[all]
```

## Usage Examples

### Azure AI Foundry Integration (Recommended)

```python
from azure.identity import DefaultAzureCredential
from ntt_ai_observability_exporter.telemetry_multi import configure_telemetry_azure_monitor

# Configure comprehensive telemetry for Azure AI applications
configure_telemetry_azure_monitor(
    connection_strings=["your-connection-string"],
    customer_name="ai-project",
    agent_name="foundry-agent",
    enable_genai_content=True,
    genai_content_mode="sanitized"  # Options: "all", "sanitized"
)

# Use Azure AI Foundry components - automatic tracing
from azure.ai.inference import ChatCompletionsClient
from azure.ai.projects import AIProjectClient

# All AI operations are automatically traced with content recording
client = ChatCompletionsClient(endpoint="...", credential=DefaultAzureCredential())
response = client.complete(
    messages=[{"role": "user", "content": "Hello, world!"}]
)
```

### Database Operations Tracing

```python
from ntt_ai_observability_exporter.telemetry_multi import configure_telemetry_azure_monitor

# Configure telemetry ONCE - all database operations get automatic tracing
configure_telemetry_azure_monitor(
    connection_strings=["your-connection-string"],
    customer_name="database-app",
    agent_name="db-service"
)

# PostgreSQL operations (automatically traced)
import psycopg2
conn = psycopg2.connect("postgresql://user:pass@host:5432/db")
cursor = conn.cursor()
cursor.execute("SELECT * FROM users")  # ← Automatically traced

# Redis operations (automatically traced)
import redis
r = redis.Redis(host='localhost', port=6379)
r.set('key', 'value')  # ← Automatically traced

# MongoDB operations (automatically traced)
from pymongo import MongoClient
client = MongoClient('mongodb://localhost:27017/')
db = client.mydatabase
db.users.find_one({"name": "John"})  # ← Automatically traced

# SQLAlchemy ORM (automatically traced)
from sqlalchemy import create_engine
engine = create_engine('postgresql://user:pass@host:5432/db')
# All ORM operations automatically traced
```

### LangChain with Azure AI Integration

```python
# Instead of generic LangChain instrumentation, use Azure AI native tracing
from azure.identity import DefaultAzureCredential
from ntt_ai_observability_exporter.telemetry_multi import configure_telemetry_azure_monitor, get_azure_ai_tracer

# Configure telemetry
configure_telemetry_azure_monitor(
    connection_strings=["your-connection-string"],
    customer_name="langchain-app",
    agent_name="ai-agent"
)

# Get Azure AI tracer for LangChain (inherits global config)
tracer = get_azure_ai_tracer(tracer_name="langchain-integration")

# Use with LangChain components - gets Azure AI native tracing
from langchain_azure_ai import AzureAIChatCompletionsModel
from langchain.chains import ConversationChain

model = AzureAIChatCompletionsModel(
    azure_ai_tracer=tracer,  # Azure AI native tracing
    endpoint="...",
    credential=DefaultAzureCredential()
)

chain = ConversationChain(llm=model)
response = chain.run("Hello!")  # ← Traced with Azure AI Foundry integration
```

### Multi-Destination Telemetry

Send the same telemetry data to multiple Azure Monitor instances simultaneously:

```python
from ntt_ai_observability_exporter.telemetry_multi import configure_telemetry_azure_monitor

# Configure telemetry for multiple Application Insights instances
configure_telemetry_azure_monitor(
    connection_strings=[
        "InstrumentationKey=key1;IngestionEndpoint=https://region1.in.applicationinsights.azure.com/",
        "InstrumentationKey=key2;IngestionEndpoint=https://region2.in.applicationinsights.azure.com/",
        "InstrumentationKey=key3;IngestionEndpoint=https://region3.in.applicationinsights.azure.com/"
    ],
    customer_name="multi-customer",
    agent_name="multi-agent",
    enable_genai_content=True,           # Enable GenAI content recording
    genai_content_mode="sanitized",      # "all" or "sanitized"
    enable_semantic_kernel_diagnostics=True  # Enable Semantic Kernel diagnostics
)

# All telemetry (traces, logs, metrics) will be sent to ALL destinations!
```

### Single Destination Telemetry (Legacy Support)

```python
from ntt_ai_observability_exporter import configure_telemetry

# Simple one-line setup for single Application Insights instance
configure_telemetry(
    connection_string="InstrumentationKey=your-key;IngestionEndpoint=your-endpoint",
    customer_name="your-customer", 
    agent_name="your-agent"
)
```

#### Multi-Destination Features

- **Duplicate to Multiple Targets**: Same telemetry sent to all connection strings
- **Comprehensive Database Instrumentation**: Auto-detects and instruments 7+ database types
- **Azure AI Native Integration**: Uses Azure AI Foundry tracing instead of generic instrumentation
- **GenAI Content Recording**: Capture prompts and responses with sanitization options
- **Semantic Kernel Integration**: Full diagnostic support with OTEL tracing
- **Live Metrics**: Real-time monitoring for all destinations
- **Graceful Error Handling**: Continues functioning even if some instrumentation packages are missing
- **Smart Detection**: Only instruments components that are actually imported and used

### Configuration Options

```python
# Standard single destination with all options
configure_telemetry(
    connection_string="InstrumentationKey=your-key;IngestionEndpoint=your-endpoint",
    customer_name="your-customer",
    agent_name="your-agent",
    enable_content_recording=True,
    content_recording_mode="all",
    enable_azure_monitor_tracing=True
)

# Multi-destination with advanced options
configure_telemetry_azure_monitor(
    connection_strings=["conn1", "conn2", "conn3"],
    customer_name="customer",
    agent_name="agent",
    enable_live_metrics=True,
    metric_export_interval_millis=15000,
    disable_offline_storage=False,
    logger_names=["semantic_kernel", "azure", "custom_logger"]
)

```

## What Gets Instrumented Automatically

The package automatically instruments a comprehensive set of components:

### **Azure AI & ML Services**
- **Azure AI Inference** (`azure.ai.inference`) - Native Azure AI Foundry tracing
- **Azure AI Agents** (`azure.ai.agents`) - When imported
- **Azure AI Projects** (`azure.ai.projects`) - Full project lifecycle tracing
- **Azure OpenAI** - Via Azure AI Foundry integration
- **OpenAI Python client** - Direct OpenAI API usage

### **Database Systems** (Auto-detected)
- **PostgreSQL** (`psycopg2`) - Comprehensive query tracing
- **SQL Server** (`pyodbc`) - Azure SQL Database compatible
- **MySQL** (`pymysql`) - MySQL database operations
- **SQLite** (`sqlite3`) - Local database operations
- **MongoDB** (`pymongo`) - NoSQL document database
- **Redis** (`redis`) - Caching and session storage
- **SQLAlchemy** - ORM-level database instrumentation

### **HTTP & Network**
- **HTTP client libraries** (`requests`, `aiohttp`, `urllib3`)
- **Azure SDK core** (`azure-core`) - All Azure service calls

### **AI Frameworks**
- **Semantic Kernel** - Full kernel and plugin instrumentation
- **LangChain via Azure AI** - Uses Azure AI Foundry native tracing instead of generic LangChain instrumentation

### **Advanced Features**
- **GenAI Content Recording** - Captures prompts and responses with sanitization options
- **Distributed Tracing** - Full request correlation across services
- **Custom Metrics** - Performance and usage metrics collection

> **Smart Detection**: Only instruments components that are actually imported and used in your application, ensuring minimal overhead.
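
The detection step described above can be pictured as a scan of `sys.modules`: only libraries the application has actually imported are candidates for instrumentation. A simplified, hypothetical sketch (the helper name `detect_importable_instrumentations` and the candidate tuple are illustrative, not part of the package API):

```python
import sys

# Candidate modules the exporter knows how to instrument (subset, for illustration)
CANDIDATES = ("psycopg2", "pyodbc", "pymysql", "sqlite3", "pymongo", "redis", "sqlalchemy")

def detect_importable_instrumentations():
    """Return only the candidates the application has actually imported."""
    return [name for name in CANDIDATES if name in sys.modules]

import sqlite3  # importing a library marks it as "in use"
print(detect_importable_instrumentations())  # 'sqlite3' will now appear in the list
```

Because un-imported libraries are never touched, the cost of supporting many database types stays near zero for applications that use only one or two of them.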

## Telemetry Types Captured

The configuration captures:

- **Traces**: Request flows, GenAI operations, and distributed tracing
- **Metrics**: Performance measurements, token usage, response times
- **Logs**: Structured logging from Azure SDKs and application code

## Configuration Parameters

### Standard Telemetry (`configure_telemetry`)

- `connection_string`: Azure Monitor connection string
- `customer_name`: Maps to `service.name` in OpenTelemetry resource  
- `agent_name`: Maps to `service.instance.id` in OpenTelemetry resource
- `enable_content_recording`: Enable GenAI content recording (default: True)
- `content_recording_mode`: "all" or "sanitized" (default: "all")
- `enable_azure_monitor_tracing`: Enable Azure Monitor tracing (default: True)
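
The `customer_name`/`agent_name` mapping above amounts to setting two OpenTelemetry resource attributes. Shown here as a plain dict for illustration (the package builds the actual `Resource` internally; this helper is not part of its API):

```python
def resource_attributes(customer_name, agent_name):
    # Mirrors the documented mapping: customer_name -> service.name,
    # agent_name -> service.instance.id (OpenTelemetry resource semantic conventions).
    return {
        "service.name": customer_name,
        "service.instance.id": agent_name,
    }

print(resource_attributes("your-customer", "your-agent"))
```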

### Multi-Destination Telemetry (`configure_telemetry_azure_monitor`)

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `connection_strings` | list[str] | Required | List of Azure Monitor connection strings |
| `customer_name` | str | Required | Service name identifier |
| `agent_name` | str | Required | Service instance identifier |
| `enable_genai_content` | bool | `True` | Enable GenAI content recording |
| `genai_content_mode` | str | `"all"` | Content mode: `"all"`, `"sanitized"`, `"minimal"` |
| `enable_semantic_kernel_diagnostics` | bool | `True` | Enable Semantic Kernel OTEL |
| `enable_live_metrics` | bool | `True` | Enable live metrics for all destinations |
| `metric_export_interval_millis` | int | `15000` | Metrics export interval (ms) |
| `logger_names` | list[str] | `["semantic_kernel", "azure", "azure.core"]` | Additional loggers to capture |
| `enable_database_instrumentation` | bool | `True` | Auto-detect and instrument database libraries |
| `custom_attributes` | dict | `{}` | Additional attributes for all telemetry |
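
Connection strings like those in the table are semicolon-delimited `Key=Value` pairs. A small stdlib-only sanity check (a hypothetical helper, not part of the package) can catch malformed entries before they reach `configure_telemetry_azure_monitor`:

```python
def parse_connection_string(conn_str):
    """Split 'Key=Value;Key=Value' into a dict, skipping segments without '='."""
    result = {}
    for segment in conn_str.split(";"):
        if "=" in segment:
            key, value = segment.split("=", 1)
            result[key.strip()] = value.strip()
    return result

cs = "InstrumentationKey=key1;IngestionEndpoint=https://region1.in.applicationinsights.azure.com/"
parsed = parse_connection_string(cs)
assert "InstrumentationKey" in parsed and "IngestionEndpoint" in parsed
```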

### Database Instrumentation (Auto-Detected)

The following database libraries are automatically instrumented when imported:

- **PostgreSQL**: `psycopg2`, `asyncpg` 
- **SQL Server**: `pyodbc`, `pymssql`
- **MySQL**: `mysql-connector-python`, `PyMySQL`
- **SQLite**: Built-in `sqlite3` module
- **Redis**: `redis-py`
- **MongoDB**: `pymongo`
- **SQLAlchemy**: ORM-level instrumentation

### Security Configuration Options

```python
# GDPR/Compliance-friendly settings
configure_telemetry_azure_monitor(
    connection_strings=["your-connection-string"],
    customer_name="compliant-app",
    agent_name="production-agent",
    
    # Content recording controls
    enable_genai_content=True,
    genai_content_mode="sanitized",     # Remove PII, keep structure
    
    # Database query content (use with caution)
    enable_database_content=False,      # Don't log query content
    
    custom_attributes={
        "compliance_mode": "gdpr",
        "environment": "production"
    }
)
```
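
As an illustration of what a "sanitized" content mode might do, here is a stdlib-only redaction sketch. This is not the package's actual sanitizer; the regex and the `[REDACTED_EMAIL]` placeholder are assumptions for demonstration only:

```python
import re

# Illustrative PII pattern: email addresses (a real sanitizer would cover more)
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def sanitize(text):
    """Redact email addresses while preserving the message structure."""
    return EMAIL_RE.sub("[REDACTED_EMAIL]", text)

print(sanitize("Contact john.doe@example.com for access"))
# Contact [REDACTED_EMAIL] for access
```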

## Environment Variables

### Core Configuration

```bash
# Primary Azure Monitor connection
export APPLICATIONINSIGHTS_CONNECTION_STRING="InstrumentationKey=xxx;IngestionEndpoint=https://..."

# Multi-destination setup (semicolon separated)
export AZURE_MONITOR_DESTINATIONS="conn_string_1;conn_string_2;conn_string_3"

# Service identification
export CUSTOMER_NAME="your-service-name"
export AGENT_NAME="your-instance-id"
```
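
Reading the multi-destination variable back in code is a split on the delimiter shown above. Note one caveat: real Application Insights connection strings contain internal semicolons (`InstrumentationKey=...;IngestionEndpoint=...`), so in practice a different delimiter or one variable per destination may be safer; the sketch below follows this document's `conn_string_1;conn_string_2` example format and is illustrative, not part of the package API:

```python
import os

def destinations_from_env():
    """Parse AZURE_MONITOR_DESTINATIONS into a list of connection strings."""
    raw = os.environ.get("AZURE_MONITOR_DESTINATIONS", "")
    return [part.strip() for part in raw.split(";") if part.strip()]

# Demo with placeholder values
os.environ["AZURE_MONITOR_DESTINATIONS"] = "conn_string_1;conn_string_2;conn_string_3"
print(destinations_from_env())
```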

### Advanced Telemetry Controls

```bash
# GenAI content recording
export AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED="true"
export AZURE_TRACING_GEN_AI_CONTENT_RECORDING_MODE="sanitized"  # all, sanitized, minimal

# Semantic Kernel diagnostics
export SEMANTICKERNEL_EXPERIMENTAL_GENAI_ENABLE_OTEL_DIAGNOSTICS="true"

# Database instrumentation
export ENABLE_DATABASE_INSTRUMENTATION="true"
export DATABASE_CONTENT_LOGGING="false"  # Never enable in production

# Performance tuning
export METRIC_EXPORT_INTERVAL_MILLIS="15000"
export TELEMETRY_SAMPLING_RATIO="1.0"
```

### Security & Compliance

```bash
# Production security settings
export GENAI_CONTENT_MODE="sanitized"
export DATABASE_CONTENT_LOGGING="false"
export TELEMETRY_DATA_RESIDENCY="eu-west"
export COMPLIANCE_MODE="gdpr"

# Development settings (more verbose)
export GENAI_CONTENT_MODE="all"
export TELEMETRY_LOG_LEVEL="DEBUG"
export ENABLE_TELEMETRY_CONSOLE_OUTPUT="true"
```

## Example Use Cases

### Enterprise AI Application with Databases

```python
from ntt_ai_observability_exporter.telemetry_multi import configure_telemetry_azure_monitor

# Configure comprehensive telemetry for enterprise application
configure_telemetry_azure_monitor(
    connection_strings=["your-connection-string"],
    customer_name="enterprise-ai-app", 
    agent_name="production-agent",
    enable_genai_content=True,
    genai_content_mode="sanitized"  # GDPR-friendly content recording
)

# Use enterprise databases - automatically instrumented
import pyodbc  # SQL Server
import redis   # Caching layer  
import pymongo # Document store

# Use Azure AI services - native tracing
from azure.identity import DefaultAzureCredential
from azure.ai.inference import ChatCompletionsClient
from azure.ai.projects import AIProjectClient

cred = DefaultAzureCredential()

# All database queries, AI calls, and HTTP requests automatically traced
sql_conn = pyodbc.connect("DRIVER={SQL Server};SERVER=server;DATABASE=db")
redis_client = redis.Redis(host='redis-server')
mongo_client = pymongo.MongoClient('mongodb://mongo-server')
ai_client = ChatCompletionsClient(endpoint="...", credential=cred)

# Every operation across all these services gets full telemetry
```

### Multi-Region AI Deployment

```python
# Send telemetry to multiple regions for compliance and redundancy
configure_telemetry_azure_monitor(
    connection_strings=[
        "InstrumentationKey=us-key;IngestionEndpoint=https://eastus-1.in.applicationinsights.azure.com/",
        "InstrumentationKey=eu-key;IngestionEndpoint=https://westeurope-5.in.applicationinsights.azure.com/", 
        "InstrumentationKey=asia-key;IngestionEndpoint=https://southeastasia-2.in.applicationinsights.azure.com/"
    ],
    customer_name="global-ai-service",
    agent_name="multi-region-deployment"
)

# All telemetry automatically replicated to US, EU, and Asia regions
```

### Development Environment with SQLite

```python
# Configure telemetry for development environment
configure_telemetry_azure_monitor(
    connection_strings=["your-dev-connection-string"],
    customer_name="ai-dev-project",
    agent_name="local-development"
)

# SQLite operations automatically traced (no additional setup)
import sqlite3
conn = sqlite3.connect('app.db')
cursor = conn.cursor()
cursor.execute('SELECT * FROM users')  # ← Automatically traced

# Azure AI development work automatically traced
from azure.identity import DefaultAzureCredential
from azure.ai.inference import ChatCompletionsClient

cred = DefaultAzureCredential()
client = ChatCompletionsClient(endpoint="...", credential=cred)
response = client.complete(messages=[...])  # ← Full AI tracing
```


## Semantic Kernel Telemetry Support

For applications using Semantic Kernel, use the specialized configuration function:

```python
from ntt_ai_observability_exporter import configure_semantic_kernel_telemetry

# Configure Semantic Kernel telemetry BEFORE creating any Kernel instances
configure_semantic_kernel_telemetry(
    connection_string="your_connection_string",
    customer_name="your_customer_name",
    agent_name="your_agent_name"
)

# Then create and use your Semantic Kernel
from semantic_kernel import Kernel
kernel = Kernel()
# ... rest of your code
```

## Troubleshooting

### Common Issues and Solutions

#### Database Instrumentation Not Working

**Problem**: Database queries are not appearing in telemetry

**Solutions**:
```python
# 1. Verify database packages are installed
import sys
print("psycopg2" in sys.modules)  # Should be True after import

# 2. Check if instrumentation is active
from opentelemetry.instrumentation.psycopg2 import Psycopg2Instrumentor
print(Psycopg2Instrumentor().is_instrumented_by_opentelemetry)

# 3. Enable debug logging
import logging
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("opentelemetry").setLevel(logging.DEBUG)
```

**Alternative approach**:
```bash
# Install with specific database support
pip install ntt-ai-observability-exporter[databases]
```

#### Azure AI Foundry Tracing Issues

**Problem**: Azure AI calls not showing proper traces

**Solutions**:
```python
# 1. Verify Azure AI Foundry integration
from azure.ai.inference import ChatCompletionsClient
from azure.monitor.opentelemetry import configure_azure_monitor

# 2. Check connection string format
connection_string = "InstrumentationKey=xxx;IngestionEndpoint=https://xxx.in.applicationinsights.azure.com/"

# 3. Verify LangChain integration with Azure AI
from ntt_ai_observability_exporter.telemetry_multi import configure_telemetry_azure_monitor
configure_telemetry_azure_monitor(
    connection_strings=[connection_string],
    customer_name="debug-app",
    agent_name="troubleshoot-agent"
)
```

#### Multi-Destination Telemetry Issues

**Problem**: Telemetry only going to one destination

**Solutions**:
```python
# Verify all connection strings are valid
connection_strings = [
    "InstrumentationKey=key1;IngestionEndpoint=https://region1.in.applicationinsights.azure.com/",
    "InstrumentationKey=key2;IngestionEndpoint=https://region2.in.applicationinsights.azure.com/"
]

# Test each connection separately first
for i, conn_str in enumerate(connection_strings):
    print(f"Testing connection {i+1}: {conn_str[:50]}...")
    configure_telemetry_azure_monitor(
        connection_strings=[conn_str],
        customer_name=f"test-{i}",
        agent_name="connection-test"
    )
```

#### Performance Issues

**Problem**: High telemetry overhead or missing traces

**Solutions**:
```python
# 1. Adjust sampling rates
configure_telemetry_azure_monitor(
    connection_strings=["your-connection"],
    customer_name="perf-app",
    agent_name="optimized-agent",
    metric_export_interval_millis=30000,  # Reduce frequency
    custom_attributes={"sampling_ratio": "0.1"}  # 10% sampling
)

# 2. Disable content recording in production
configure_telemetry_azure_monitor(
    connection_strings=["your-connection"],
    customer_name="prod-app", 
    agent_name="production-agent",
    enable_genai_content=False,  # Disable for performance
    genai_content_mode="minimal"
)
```

### Debugging Tips

#### Enable Console Output

```python
import logging
import sys

# Enable all telemetry logging
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    stream=sys.stdout
)

# Enable specific loggers
logging.getLogger("azure.monitor.opentelemetry").setLevel(logging.DEBUG)
logging.getLogger("opentelemetry").setLevel(logging.DEBUG)
logging.getLogger("ntt_ai_observability_exporter").setLevel(logging.DEBUG)
```

#### Check Installed Instrumentations

```python
# Each installed opentelemetry-instrumentation-* package registers an
# "opentelemetry_instrumentor" entry point; listing those shows what is available.
# (entry_points(group=...) requires Python 3.10+; on 3.8/3.9 use
# entry_points().get("opentelemetry_instrumentor", []).)
from importlib.metadata import entry_points

for ep in entry_points(group="opentelemetry_instrumentor"):
    print(f"{ep.name}: {ep.value}")
```

#### Validate Telemetry Export

```python
# Add custom spans to verify telemetry flow
from opentelemetry import trace

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("test-span"):
    print("Testing telemetry export...")
    # Your application code here
```

### Getting Help

If you encounter issues not covered here:

1. **Check the logs**: Enable debug logging to see detailed instrumentation information
2. **Verify dependencies**: Ensure all required packages are installed with correct versions
3. **Test connection**: Validate your Azure Monitor connection strings independently
4. **Review configuration**: Double-check all telemetry configuration parameters
5. **Check compatibility**: Verify OpenTelemetry version compatibility with your Azure SDKs

For additional support, review the [Azure Monitor OpenTelemetry documentation](https://docs.microsoft.com/azure/azure-monitor/app/opentelemetry-overview).

## Development and Testing

### Installation for Development

Install the package with development dependencies:

```bash
# Install with all development dependencies
pip install -e ".[dev]"

# Or install with specific dependency groups
pip install -e ".[test]"          # Testing only
pip install -e ".[databases]"     # Database instrumentation
pip install -e ".[all]"           # Everything

# Or install from requirements files
pip install -r requirements-dev.txt
```

### Running Tests

```bash
# Run all tests
pytest

# Run tests with coverage
pytest --cov=src/ntt_ai_observability_exporter

# Run tests with detailed coverage report  
pytest --cov=src/ntt_ai_observability_exporter --cov-report=term-missing --cov-report=html

# Run specific test categories
pytest tests/test_telemetry_multi.py -v              # Multi-destination telemetry
pytest tests/test_semantic_kernel_telemetry.py -v   # Semantic Kernel integration
pytest -k "database" -v                             # Database instrumentation tests
```

### Test Coverage

The package includes comprehensive unit tests with:
- **95%+ overall test coverage** 
- **100% coverage** for all telemetry modules
- **24+ test cases** covering core functionality plus database instrumentation
- **Multi-destination telemetry validation** with mock Azure Monitor endpoints
- **Database instrumentation testing** for 7+ database types
- **Error handling validation** for missing dependencies and invalid configurations
- **Mock-based testing** for reliable CI/CD pipelines without external dependencies

### Code Quality Standards

The project maintains high code quality with:
- **pytest** for comprehensive testing framework
- **black** for consistent code formatting  
- **flake8** for linting and style enforcement
- **mypy** for static type checking
- **isort** for import organization
- **pre-commit hooks** for automated quality checks

### Project Structure

```
ntt-ai-observability-exporter/
├── src/ntt_ai_observability_exporter/
│   ├── __init__.py                     # Main telemetry configuration
│   ├── telemetry.py                    # Single-destination telemetry
│   ├── telemetry_multi.py              # Multi-destination + database instrumentation  
│   ├── semantic_kernel_telemetry.py    # Semantic Kernel integration
│   └── utilities.py                    # Helper functions
├── tests/
│   ├── test_telemetry.py               # Core telemetry tests
│   ├── test_telemetry_multi.py         # Multi-destination + database tests
│   └── test_semantic_kernel_telemetry.py # SK integration tests
├── pyproject.toml                      # Package configuration + dependencies
├── requirements.txt                    # Runtime dependencies
├── requirements-dev.txt                # Development dependencies
└── README.md                          # This documentation
```

## Contributing

### Development Workflow

1. **Fork and clone** the repository
2. **Install development dependencies**: `pip install -e ".[dev]"`
3. **Create feature branch**: `git checkout -b feature/your-feature-name`
4. **Make changes** with appropriate tests
5. **Run tests**: `pytest --cov=src`
6. **Run quality checks**: `black . && flake8 . && mypy src/`
7. **Submit pull request** with clear description

### Adding New Database Support

To add support for a new database:

1. **Add instrumentation package** to `pyproject.toml`:
```toml
[project.optional-dependencies]
databases = [
    # ... existing packages ...
    "opentelemetry-instrumentation-newdb",
]
```

2. **Add instrumentation logic** to `telemetry_multi.py`:
```python
def _install_instrumentations(self):
    # ... existing instrumentations ...
    
    # New database instrumentation
    if "newdb" in sys.modules:
        try:
            from opentelemetry.instrumentation.newdb import NewDBInstrumentor
            if not NewDBInstrumentor().is_instrumented_by_opentelemetry:
                NewDBInstrumentor().instrument()
        except ImportError:
            pass  # Graceful fallback
```

3. **Add comprehensive tests** in `tests/test_telemetry_multi.py`:
```python
@patch.dict(sys.modules, {"newdb": MagicMock()})
@patch("ntt_ai_observability_exporter.telemetry_multi.NewDBInstrumentor")
def test_newdb_instrumentation(self, mock_instrumentor):
    # Test instrumentation logic
    pass
```

4. **Update documentation** in README.md to list the new database support

### Submitting Issues

When submitting issues, please include:
- **Python version** and operating system
- **Package version** and installation method  
- **Complete error messages** and stack traces
- **Minimal code example** to reproduce the issue
- **Expected vs actual behavior** description

### License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "ntt-ai-observability-exporter",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": null,
    "keywords": "ntt, azure, telemetry, opentelemetry, monitoring, ai, observability",
    "author": null,
    "author_email": "Anand Vaibhav Singh <anandvaibhav-singh_nttltd@example.com>",
    "download_url": "https://files.pythonhosted.org/packages/fc/51/60158adf888f55933992c27f490e9392ad00bc5045553489c158194096fe/ntt_ai_observability_exporter-0.2.9.tar.gz",
    "platform": null,
    "description": "# NTT AI Observability Exporter\r\n\r\nA comprehensive telemetry exporter for AI applications using Azure Monitor OpenTelemetry. This package provides advanced telemetry capabilities for AI projects built with Azure services, including comprehensive database instrumentation and Azure AI Foundry integration.\r\n\r\n## Features\r\n\r\n- **Single & Multi-Destination Telemetry**: Send telemetry to one or multiple Azure Monitor instances\r\n- **Automatic instrumentation** of Azure SDK libraries and AI components\r\n- **Comprehensive Database Instrumentation**: PostgreSQL, SQL Server, MySQL, SQLite, MongoDB, Redis, SQLAlchemy\r\n- **Azure AI Foundry Integration**: Built-in tracing for Azure AI Inference and Agents\r\n- **Advanced LangChain Support**: Native Azure AI tracing instead of generic LangChain instrumentation\r\n- **GenAI content recording** for prompts and responses with content sanitization\r\n- **Semantic Kernel Integration**: Full diagnostic support with OTEL tracing\r\n- **Simplified configuration** with graceful error handling\r\n- **Production-ready**: Comprehensive logging, tracing, and metrics collection\r\n\r\n## Installation\r\n\r\n### Basic Installation\r\n```bash\r\npip install ntt-ai-observability-exporter\r\n```\r\n\r\n### Installation with Database Support\r\n```bash\r\n# Install with comprehensive database instrumentation\r\npip install ntt-ai-observability-exporter[databases]\r\n\r\n# Install with Azure-specific database support\r\npip install ntt-ai-observability-exporter[azure-databases]\r\n\r\n# Install with all optional dependencies\r\npip install ntt-ai-observability-exporter[all]\r\n```\r\n\r\n## Usage\r\n\r\n## Usage Examples\r\n\r\n### Azure AI Foundry Integration (Recommended)\r\n\r\n```python\r\nfrom ntt_ai_observability_exporter.telemetry_multi import configure_telemetry_azure_monitor\r\n\r\n# Configure comprehensive telemetry for Azure AI applications\r\nconfigure_telemetry_azure_monitor(\r\n    
connection_strings=[\"your-connection-string\"],\r\n    customer_name=\"ai-project\",\r\n    agent_name=\"foundry-agent\",\r\n    enable_genai_content=True,\r\n    genai_content_mode=\"sanitized\"  # Options: \"all\", \"sanitized\"\r\n)\r\n\r\n# Use Azure AI Foundry components - automatic tracing\r\nfrom azure.ai.inference import ChatCompletionsClient\r\nfrom azure.ai.projects import AIProjectClient\r\n\r\n# All AI operations are automatically traced with content recording\r\nclient = ChatCompletionsClient(endpoint=\"...\", credential=DefaultAzureCredential())\r\nresponse = client.complete(\r\n    messages=[{\"role\": \"user\", \"content\": \"Hello, world!\"}]\r\n)\r\n```\r\n\r\n### Database Operations Tracing\r\n\r\n```python\r\n# Configure telemetry ONCE - all database operations get automatic tracing\r\nconfigure_telemetry_azure_monitor(\r\n    connection_strings=[\"your-connection-string\"],\r\n    customer_name=\"database-app\",\r\n    agent_name=\"db-service\"\r\n)\r\n\r\n# PostgreSQL operations (automatically traced)\r\nimport psycopg2\r\nconn = psycopg2.connect(\"postgresql://user:pass@host:5432/db\")\r\ncursor = conn.cursor()\r\ncursor.execute(\"SELECT * FROM users\")  # \u2190 Automatically traced\r\n\r\n# Redis operations (automatically traced)\r\nimport redis\r\nr = redis.Redis(host='localhost', port=6379)\r\nr.set('key', 'value')  # \u2190 Automatically traced\r\n\r\n# MongoDB operations (automatically traced)\r\nfrom pymongo import MongoClient\r\nclient = MongoClient('mongodb://localhost:27017/')\r\ndb = client.mydatabase\r\ndb.users.find_one({\"name\": \"John\"})  # \u2190 Automatically traced\r\n\r\n# SQLAlchemy ORM (automatically traced)\r\nfrom sqlalchemy import create_engine\r\nengine = create_engine('postgresql://user:pass@host:5432/db')\r\n# All ORM operations automatically traced\r\n```\r\n\r\n### LangChain with Azure AI Integration\r\n\r\n```python\r\n# Instead of generic LangChain instrumentation, use Azure AI native tracing\r\nfrom 
ntt_ai_observability_exporter.telemetry_multi import configure_telemetry_azure_monitor, get_azure_ai_tracer\r\n\r\n# Configure telemetry\r\nconfigure_telemetry_azure_monitor(\r\n    connection_strings=[\"your-connection-string\"],\r\n    customer_name=\"langchain-app\",\r\n    agent_name=\"ai-agent\"\r\n)\r\n\r\n# Get Azure AI tracer for LangChain (inherits global config)\r\ntracer = get_azure_ai_tracer(tracer_name=\"langchain-integration\")\r\n\r\n# Use with LangChain components - gets Azure AI native tracing\r\nfrom langchain_azure_ai import AzureAIChatCompletionsModel\r\nfrom langchain.chains import ConversationChain\r\n\r\nmodel = AzureAIChatCompletionsModel(\r\n    azure_ai_tracer=tracer,  # Azure AI native tracing\r\n    endpoint=\"...\",\r\n    credential=DefaultAzureCredential()\r\n)\r\n\r\nchain = ConversationChain(llm=model)\r\nresponse = chain.run(\"Hello!\")  # \u2190 Traced with Azure AI Foundry integration\r\n```\r\n\r\n### Multi-Destination Telemetry\r\n\r\nSend the same telemetry data to multiple Azure Monitor instances simultaneously:\r\n\r\n```python\r\nfrom ntt_ai_observability_exporter.telemetry_multi import configure_telemetry_azure_monitor\r\n\r\n# Configure telemetry for multiple Application Insights instances\r\nconfigure_telemetry_azure_monitor(\r\n    connection_strings=[\r\n        \"InstrumentationKey=key1;IngestionEndpoint=https://region1.in.applicationinsights.azure.com/\",\r\n        \"InstrumentationKey=key2;IngestionEndpoint=https://region2.in.applicationinsights.azure.com/\",\r\n        \"InstrumentationKey=key3;IngestionEndpoint=https://region3.in.applicationinsights.azure.com/\"\r\n    ],\r\n    customer_name=\"multi-customer\",\r\n    agent_name=\"multi-agent\",\r\n    enable_genai_content=True,           # Enable GenAI content recording\r\n    genai_content_mode=\"sanitized\",      # \"all\" or \"sanitized\"\r\n    enable_semantic_kernel_diagnostics=True  # Enable Semantic Kernel diagnostics\r\n)\r\n\r\n# All telemetry (traces, 
logs, metrics) will be sent to ALL destinations!\r\n```\r\n\r\n### Single Destination Telemetry (Legacy Support)\r\n\r\n```python\r\nfrom ntt_ai_observability_exporter import configure_telemetry\r\n\r\n# Simple one-line setup for single Application Insights instance\r\nconfigure_telemetry(\r\n    connection_string=\"InstrumentationKey=your-key;IngestionEndpoint=your-endpoint\",\r\n    customer_name=\"your-customer\",\r\n    agent_name=\"your-agent\"\r\n)\r\n```\r\n\r\n#### Multi-Destination Features\r\n\r\n- **Duplicate to Multiple Targets**: Same telemetry sent to all connection strings\r\n- **Comprehensive Database Instrumentation**: Auto-detects and instruments 7+ database types\r\n- **Azure AI Native Integration**: Uses Azure AI Foundry tracing instead of generic instrumentation\r\n- **GenAI Content Recording**: Capture prompts and responses with sanitization options\r\n- **Semantic Kernel Integration**: Full diagnostic support with OTEL tracing\r\n- **Live Metrics**: Real-time monitoring for all destinations\r\n- **Graceful Error Handling**: Continues functioning even if some instrumentation packages are missing\r\n- **Smart Detection**: Only instruments components that 
are actually imported and used\r\n\r\n### Configuration Options\r\n\r\n```python\r\n# Standard single destination with all options\r\nconfigure_telemetry(\r\n    connection_string=\"InstrumentationKey=your-key;IngestionEndpoint=your-endpoint\",\r\n    customer_name=\"your-customer\",\r\n    agent_name=\"your-agent\",\r\n    enable_content_recording=True,\r\n    content_recording_mode=\"all\",\r\n    enable_azure_monitor_tracing=True\r\n)\r\n\r\n# Multi-destination with advanced options\r\nconfigure_telemetry_azure_monitor(\r\n    connection_strings=[\"conn1\", \"conn2\", \"conn3\"],\r\n    customer_name=\"customer\",\r\n    agent_name=\"agent\",\r\n    enable_live_metrics=True,\r\n    metric_export_interval_millis=15000,\r\n    disable_offline_storage=False,\r\n    logger_names=[\"semantic_kernel\", \"azure\", \"custom_logger\"]\r\n)\r\n\r\n```\r\n\r\n## What Gets Instrumented Automatically\r\n\r\nThe package automatically instruments a comprehensive set of components:\r\n\r\n### **Azure AI & ML Services**\r\n- **Azure AI Inference** (`azure.ai.inference`) - Native Azure AI Foundry tracing\r\n- **Azure AI Agents** (`azure.ai.agents`) - When imported\r\n- **Azure AI Projects** (`azure.ai.projects`) - Full project lifecycle tracing\r\n- **Azure OpenAI** - Via Azure AI Foundry integration\r\n- **OpenAI Python client** - Direct OpenAI API usage\r\n\r\n### **Database Systems** (Auto-detected)\r\n- **PostgreSQL** (`psycopg2`) - Comprehensive query tracing\r\n- **SQL Server** (`pyodbc`) - Azure SQL Database compatible\r\n- **MySQL** (`pymysql`) - MySQL database operations\r\n- **SQLite** (`sqlite3`) - Local database operations\r\n- **MongoDB** (`pymongo`) - NoSQL document database\r\n- **Redis** (`redis`) - Caching and session storage\r\n- **SQLAlchemy** - ORM-level database instrumentation\r\n\r\n### **HTTP & Network**\r\n- **HTTP client libraries** (`requests`, `aiohttp`, `urllib3`)\r\n- **Azure SDK core** (`azure-core`) - All Azure service calls\r\n\r\n### **AI 
Frameworks**\r\n- **Semantic Kernel** - Full kernel and plugin instrumentation\r\n- **LangChain via Azure AI** - Uses Azure AI Foundry native tracing instead of generic LangChain instrumentation\r\n\r\n### **Advanced Features**\r\n- **GenAI Content Recording** - Captures prompts and responses with sanitization options\r\n- **Distributed Tracing** - Full request correlation across services\r\n- **Custom Metrics** - Performance and usage metrics collection\r\n\r\n> **Smart Detection**: Only instruments components that are actually imported and used in your application, ensuring minimal overhead.\r\n\r\n## Telemetry Types Captured\r\n\r\nThe configuration captures:\r\n\r\n- **Traces**: Request flows, GenAI operations, and distributed tracing\r\n- **Metrics**: Performance measurements, token usage, response times\r\n- **Logs**: Structured logging from Azure SDKs and application code\r\n\r\n## Configuration Parameters\r\n\r\n### Standard Telemetry (`configure_telemetry`)\r\n\r\n- `connection_string`: Azure Monitor connection string\r\n- `customer_name`: Maps to `service.name` in OpenTelemetry resource  \r\n- `agent_name`: Maps to `service.instance.id` in OpenTelemetry resource\r\n- `enable_content_recording`: Enable GenAI content recording (default: True)\r\n- `content_recording_mode`: \"all\" or \"sanitized\" (default: \"all\")\r\n- `enable_azure_monitor_tracing`: Enable Azure Monitor tracing (default: True)\r\n\r\n### Multi-Destination Telemetry (`configure_telemetry_azure_monitor`)\r\n\r\n| Parameter | Type | Default | Description |\r\n|-----------|------|---------|-------------|\r\n| `connection_strings` | list[str] | Required | List of Azure Monitor connection strings |\r\n| `customer_name` | str | Required | Service name identifier |\r\n| `agent_name` | str | Required | Service instance identifier |\r\n| `enable_genai_content` | bool | `True` | Enable GenAI content recording |\r\n| `genai_content_mode` | str | `\"all\"` | Content mode: `\"all\"`, `\"sanitized\"`, 
`\"minimal\"` |\r\n| `enable_semantic_kernel_diagnostics` | bool | `True` | Enable Semantic Kernel OTEL |\r\n| `enable_live_metrics` | bool | `True` | Enable live metrics for all destinations |\r\n| `metric_export_interval_millis` | int | `15000` | Metrics export interval (ms) |\r\n| `logger_names` | list[str] | `[\"semantic_kernel\", \"azure\", \"azure.core\"]` | Additional loggers to capture |\r\n| `enable_database_instrumentation` | bool | `True` | Auto-detect and instrument database libraries |\r\n| `custom_attributes` | dict | `{}` | Additional attributes for all telemetry |\r\n\r\n### Database Instrumentation (Auto-Detected)\r\n\r\nThe following database libraries are automatically instrumented when imported:\r\n\r\n- **PostgreSQL**: `psycopg2`, `asyncpg` \r\n- **SQL Server**: `pyodbc`, `pymssql`\r\n- **MySQL**: `mysql-connector-python`, `PyMySQL`\r\n- **SQLite**: Built-in `sqlite3` module\r\n- **Redis**: `redis-py`\r\n- **MongoDB**: `pymongo`\r\n- **SQLAlchemy**: ORM-level instrumentation\r\n\r\n### Security Configuration Options\r\n\r\n```python\r\n# GDPR/Compliance-friendly settings\r\nconfigure_telemetry_azure_monitor(\r\n    connection_strings=[\"your-connection-string\"],\r\n    customer_name=\"compliant-app\",\r\n    agent_name=\"production-agent\",\r\n    \r\n    # Content recording controls\r\n    enable_genai_content=True,\r\n    genai_content_mode=\"sanitized\",     # Remove PII, keep structure\r\n    \r\n    # Database query content (use with caution)\r\n    enable_database_content=False,      # Don't log query content\r\n    \r\n    custom_attributes={\r\n        \"compliance_mode\": \"gdpr\",\r\n        \"environment\": \"production\"\r\n    }\r\n)\r\n```\r\n\r\n## Environment Variables\r\n\r\n### Core Configuration\r\n\r\n```bash\r\n# Primary Azure Monitor connection\r\nexport APPLICATIONINSIGHTS_CONNECTION_STRING=\"InstrumentationKey=xxx;IngestionEndpoint=https://...\"\r\n\r\n# Multi-destination setup (semicolon separated)\r\nexport 
AZURE_MONITOR_DESTINATIONS=\"conn_string_1;conn_string_2;conn_string_3\"\r\n\r\n# Service identification\r\nexport CUSTOMER_NAME=\"your-service-name\"\r\nexport AGENT_NAME=\"your-instance-id\"\r\n```\r\n\r\n### Advanced Telemetry Controls\r\n\r\n```bash\r\n# GenAI content recording\r\nexport AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED=\"true\"\r\nexport AZURE_TRACING_GEN_AI_CONTENT_RECORDING_MODE=\"sanitized\"  # all, sanitized, minimal\r\n\r\n# Semantic Kernel diagnostics\r\nexport SEMANTICKERNEL_EXPERIMENTAL_GENAI_ENABLE_OTEL_DIAGNOSTICS=\"true\"\r\n\r\n# Database instrumentation\r\nexport ENABLE_DATABASE_INSTRUMENTATION=\"true\"\r\nexport DATABASE_CONTENT_LOGGING=\"false\"  # Never enable in production\r\n\r\n# Performance tuning\r\nexport METRIC_EXPORT_INTERVAL_MILLIS=\"15000\"\r\nexport TELEMETRY_SAMPLING_RATIO=\"1.0\"\r\n```\r\n\r\n### Security & Compliance\r\n\r\n```bash\r\n# Production security settings\r\nexport GENAI_CONTENT_MODE=\"sanitized\"\r\nexport DATABASE_CONTENT_LOGGING=\"false\"\r\nexport TELEMETRY_DATA_RESIDENCY=\"eu-west\"\r\nexport COMPLIANCE_MODE=\"gdpr\"\r\n\r\n# Development settings (more verbose)\r\nexport GENAI_CONTENT_MODE=\"all\"\r\nexport TELEMETRY_LOG_LEVEL=\"DEBUG\"\r\nexport ENABLE_TELEMETRY_CONSOLE_OUTPUT=\"true\"\r\n```\r\n\r\n## Example Use Cases\r\n\r\n### Enterprise AI Application with Databases\r\n\r\n```python\r\nfrom ntt_ai_observability_exporter.telemetry_multi import configure_telemetry_azure_monitor\r\n\r\n# Configure comprehensive telemetry for enterprise application\r\nconfigure_telemetry_azure_monitor(\r\n    connection_strings=[\"your-connection-string\"],\r\n    customer_name=\"enterprise-ai-app\", \r\n    agent_name=\"production-agent\",\r\n    enable_genai_content=True,\r\n    genai_content_mode=\"sanitized\"  # GDPR-friendly content recording\r\n)\r\n\r\n# Use enterprise databases - automatically instrumented\r\nimport pyodbc  # SQL Server\r\nimport redis   # Caching layer  \r\nimport pymongo # Document 
store\r\n\r\n# Use Azure AI services - native tracing\r\nfrom azure.ai.inference import ChatCompletionsClient\r\nfrom azure.ai.projects import AIProjectClient\r\n\r\n# All database queries, AI calls, and HTTP requests automatically traced\r\nsql_conn = pyodbc.connect(\"DRIVER={SQL Server};SERVER=server;DATABASE=db\")\r\nredis_client = redis.Redis(host='redis-server')\r\nmongo_client = pymongo.MongoClient('mongodb://mongo-server')\r\nai_client = ChatCompletionsClient(endpoint=\"...\", credential=cred)\r\n\r\n# Every operation across all these services gets full telemetry\r\n```\r\n\r\n### Multi-Region AI Deployment\r\n\r\n```python\r\n# Send telemetry to multiple regions for compliance and redundancy\r\nconfigure_telemetry_azure_monitor(\r\n    connection_strings=[\r\n        \"InstrumentationKey=us-key;IngestionEndpoint=https://eastus-1.in.applicationinsights.azure.com/\",\r\n        \"InstrumentationKey=eu-key;IngestionEndpoint=https://westeurope-5.in.applicationinsights.azure.com/\", \r\n        \"InstrumentationKey=asia-key;IngestionEndpoint=https://southeastasia-2.in.applicationinsights.azure.com/\"\r\n    ],\r\n    customer_name=\"global-ai-service\",\r\n    agent_name=\"multi-region-deployment\"\r\n)\r\n\r\n# All telemetry automatically replicated to US, EU, and Asia regions\r\n```\r\n\r\n### Development Environment with SQLite\r\n\r\n```python\r\n# Configure telemetry for development environment\r\nconfigure_telemetry_azure_monitor(\r\n    connection_strings=[\"your-dev-connection-string\"],\r\n    customer_name=\"ai-dev-project\",\r\n    agent_name=\"local-development\"\r\n)\r\n\r\n# SQLite operations automatically traced (no additional setup)\r\nimport sqlite3\r\nconn = sqlite3.connect('app.db')\r\ncursor = conn.cursor()\r\ncursor.execute('SELECT * FROM users')  # \u2190 Automatically traced\r\n\r\n# Azure AI development work automatically traced\r\nfrom azure.ai.inference import ChatCompletionsClient\r\nclient = ChatCompletionsClient(endpoint=\"...\", 
credential=cred)\r\nresponse = client.complete(messages=[...])  # \u2190 Full AI tracing\r\n```\r\n\r\n\r\n## Semantic Kernel Telemetry Support\r\n\r\nFor applications using Semantic Kernel, use the specialized configuration function:\r\n\r\n```python\r\nfrom ntt_ai_observability_exporter import configure_semantic_kernel_telemetry\r\n\r\n# Configure Semantic Kernel telemetry BEFORE creating any Kernel instances\r\nconfigure_semantic_kernel_telemetry(\r\n    connection_string=\"your_connection_string\",\r\n    customer_name=\"your_customer_name\",\r\n    agent_name=\"your_agent_name\"\r\n)\r\n\r\n# Then create and use your Semantic Kernel\r\nfrom semantic_kernel import Kernel\r\nkernel = Kernel()\r\n# ... rest of your code\r\n```\r\n\r\n## Troubleshooting\r\n\r\n### Common Issues and Solutions\r\n\r\n#### Database Instrumentation Not Working\r\n\r\n**Problem**: Database queries are not appearing in telemetry\r\n\r\n**Solutions**:\r\n```python\r\n# 1. Verify database packages are installed\r\nimport sys\r\nprint(\"psycopg2\" in sys.modules)  # Should be True after import\r\n\r\n# 2. Check if instrumentation is active\r\nfrom opentelemetry.instrumentation.psycopg2 import Psycopg2Instrumentor\r\nprint(Psycopg2Instrumentor().is_instrumented_by_opentelemetry)\r\n\r\n# 3. Enable debug logging\r\nimport logging\r\nlogging.basicConfig(level=logging.DEBUG)\r\nlogging.getLogger(\"opentelemetry\").setLevel(logging.DEBUG)\r\n```\r\n\r\n**Alternative approach**:\r\n```bash\r\n# Install with specific database support\r\npip install ntt-ai-observability-exporter[databases]\r\n```\r\n\r\n#### Azure AI Foundry Tracing Issues\r\n\r\n**Problem**: Azure AI calls not showing proper traces\r\n\r\n**Solutions**:\r\n```python\r\n# 1. Verify Azure AI Foundry integration\r\nfrom azure.ai.inference import ChatCompletionsClient\r\nfrom azure.monitor.opentelemetry import configure_azure_monitor\r\n\r\n# 2. 
Check connection string format\r\nconnection_string = \"InstrumentationKey=xxx;IngestionEndpoint=https://xxx.in.applicationinsights.azure.com/\"\r\n\r\n# 3. Verify LangChain integration with Azure AI\r\nfrom ntt_ai_observability_exporter.telemetry_multi import configure_telemetry_azure_monitor\r\nconfigure_telemetry_azure_monitor(\r\n    connection_strings=[connection_string],\r\n    customer_name=\"debug-app\",\r\n    agent_name=\"troubleshoot-agent\"\r\n)\r\n```\r\n\r\n#### Multi-Destination Telemetry Issues\r\n\r\n**Problem**: Telemetry only going to one destination\r\n\r\n**Solutions**:\r\n```python\r\n# Verify all connection strings are valid\r\nconnection_strings = [\r\n    \"InstrumentationKey=key1;IngestionEndpoint=https://region1.in.applicationinsights.azure.com/\",\r\n    \"InstrumentationKey=key2;IngestionEndpoint=https://region2.in.applicationinsights.azure.com/\"\r\n]\r\n\r\n# Test each connection separately first\r\nfor i, conn_str in enumerate(connection_strings):\r\n    print(f\"Testing connection {i+1}: {conn_str[:50]}...\")\r\n    configure_telemetry_azure_monitor(\r\n        connection_strings=[conn_str],\r\n        customer_name=f\"test-{i}\",\r\n        agent_name=\"connection-test\"\r\n    )\r\n```\r\n\r\n#### Performance Issues\r\n\r\n**Problem**: High telemetry overhead or missing traces\r\n\r\n**Solutions**:\r\n```python\r\n# 1. Adjust sampling rates\r\nconfigure_telemetry_azure_monitor(\r\n    connection_strings=[\"your-connection\"],\r\n    customer_name=\"perf-app\",\r\n    agent_name=\"optimized-agent\",\r\n    metric_export_interval_millis=30000,  # Reduce frequency\r\n    custom_attributes={\"sampling_ratio\": \"0.1\"}  # 10% sampling\r\n)\r\n\r\n# 2. 
Disable content recording in production\r\nconfigure_telemetry_azure_monitor(\r\n    connection_strings=[\"your-connection\"],\r\n    customer_name=\"prod-app\",\r\n    agent_name=\"production-agent\",\r\n    enable_genai_content=False,  # Disable for performance\r\n    genai_content_mode=\"minimal\"\r\n)\r\n```\r\n\r\n### Debugging Tips\r\n\r\n#### Enable Console Output\r\n\r\n```python\r\nimport logging\r\nimport sys\r\n\r\n# Enable all telemetry logging\r\nlogging.basicConfig(\r\n    level=logging.DEBUG,\r\n    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',\r\n    stream=sys.stdout\r\n)\r\n\r\n# Enable specific loggers\r\nlogging.getLogger(\"azure.monitor.opentelemetry\").setLevel(logging.DEBUG)\r\nlogging.getLogger(\"opentelemetry\").setLevel(logging.DEBUG)\r\nlogging.getLogger(\"ntt_ai_observability_exporter\").setLevel(logging.DEBUG)\r\n```\r\n\r\n#### Check Installed Instrumentations\r\n\r\n```python\r\nfrom importlib.metadata import entry_points\r\n\r\n# List the instrumentation packages registered with OpenTelemetry\r\n# (the group= argument requires Python 3.10+ or the importlib_metadata backport)\r\nfor ep in entry_points(group=\"opentelemetry_instrumentor\"):\r\n    print(ep.name)\r\n```\r\n\r\n#### Validate Telemetry Export\r\n\r\n```python\r\n# Add custom spans to verify telemetry flow\r\nfrom opentelemetry import trace\r\n\r\ntracer = trace.get_tracer(__name__)\r\nwith tracer.start_as_current_span(\"test-span\"):\r\n    print(\"Testing telemetry export...\")\r\n    # Your application code here\r\n```\r\n\r\n### Getting Help\r\n\r\nIf you encounter issues not covered here:\r\n\r\n1. **Check the logs**: Enable debug logging to see detailed instrumentation information\r\n2. **Verify dependencies**: Ensure all required packages are installed with correct versions\r\n3. **Test connection**: Validate your Azure Monitor connection strings independently\r\n4. **Review configuration**: Double-check all telemetry configuration parameters\r\n5. 
**Check compatibility**: Verify OpenTelemetry version compatibility with your Azure SDKs\r\n\r\nFor additional support, review the [Azure Monitor OpenTelemetry documentation](https://docs.microsoft.com/azure/azure-monitor/app/opentelemetry-overview).\r\n\r\n## Development and Testing\r\n\r\n### Installation for Development\r\n\r\nInstall the package with development dependencies:\r\n\r\n```bash\r\n# Install with all development dependencies\r\npip install -e \".[dev]\"\r\n\r\n# Or install with specific dependency groups\r\npip install -e \".[test]\"          # Testing only\r\npip install -e \".[databases]\"     # Database instrumentation\r\npip install -e \".[all]\"           # Everything\r\n\r\n# Or install from requirements files\r\npip install -r requirements-dev.txt\r\n```\r\n\r\n### Running Tests\r\n\r\n```bash\r\n# Run all tests\r\npytest\r\n\r\n# Run tests with coverage\r\npytest --cov=src/ntt_ai_observability_exporter\r\n\r\n# Run tests with detailed coverage report  \r\npytest --cov=src/ntt_ai_observability_exporter --cov-report=term-missing --cov-report=html\r\n\r\n# Run specific test categories\r\npytest tests/test_telemetry_multi.py -v              # Multi-destination telemetry\r\npytest tests/test_semantic_kernel_telemetry.py -v   # Semantic Kernel integration\r\npytest -k \"database\" -v                             # Database instrumentation tests\r\n```\r\n\r\n### Test Coverage\r\n\r\nThe package includes comprehensive unit tests with:\r\n- **95%+ overall test coverage** \r\n- **100% coverage** for all telemetry modules\r\n- **24+ test cases** covering core functionality plus database instrumentation\r\n- **Multi-destination telemetry validation** with mock Azure Monitor endpoints\r\n- **Database instrumentation testing** for 7+ database types\r\n- **Error handling validation** for missing dependencies and invalid configurations\r\n- **Mock-based testing** for reliable CI/CD pipelines without external dependencies\r\n\r\n### Code Quality 
Standards\r\n\r\nThe project maintains high code quality with:\r\n- **pytest** for comprehensive testing framework\r\n- **black** for consistent code formatting  \r\n- **flake8** for linting and style enforcement\r\n- **mypy** for static type checking\r\n- **isort** for import organization\r\n- **pre-commit hooks** for automated quality checks\r\n\r\n### Project Structure\r\n\r\n```\r\nntt-ai-observability-exporter/\r\n\u251c\u2500\u2500 src/ntt_ai_observability_exporter/\r\n\u2502   \u251c\u2500\u2500 __init__.py                     # Main telemetry configuration\r\n\u2502   \u251c\u2500\u2500 telemetry.py                    # Single-destination telemetry\r\n\u2502   \u251c\u2500\u2500 telemetry_multi.py              # Multi-destination + database instrumentation  \r\n\u2502   \u251c\u2500\u2500 semantic_kernel_telemetry.py    # Semantic Kernel integration\r\n\u2502   \u2514\u2500\u2500 utilities.py                    # Helper functions\r\n\u251c\u2500\u2500 tests/\r\n\u2502   \u251c\u2500\u2500 test_telemetry.py               # Core telemetry tests\r\n\u2502   \u251c\u2500\u2500 test_telemetry_multi.py         # Multi-destination + database tests\r\n\u2502   \u2514\u2500\u2500 test_semantic_kernel_telemetry.py # SK integration tests\r\n\u251c\u2500\u2500 pyproject.toml                      # Package configuration + dependencies\r\n\u251c\u2500\u2500 requirements.txt                    # Runtime dependencies\r\n\u251c\u2500\u2500 requirements-dev.txt                # Development dependencies\r\n\u2514\u2500\u2500 README.md                          # This documentation\r\n```\r\n\r\n## Contributing\r\n\r\n### Development Workflow\r\n\r\n1. **Fork and clone** the repository\r\n2. **Install development dependencies**: `pip install -e \".[dev]\"`\r\n3. **Create feature branch**: `git checkout -b feature/your-feature-name`\r\n4. **Make changes** with appropriate tests\r\n5. **Run tests**: `pytest --cov=src`\r\n6. **Run quality checks**: `black . && flake8 . 
&& mypy src/`\r\n7. **Submit pull request** with clear description\r\n\r\n### Adding New Database Support\r\n\r\nTo add support for a new database:\r\n\r\n1. **Add instrumentation package** to `pyproject.toml`:\r\n```toml\r\n[project.optional-dependencies]\r\ndatabases = [\r\n    # ... existing packages ...\r\n    \"opentelemetry-instrumentation-newdb\",\r\n]\r\n```\r\n\r\n2. **Add instrumentation logic** to `telemetry_multi.py`:\r\n```python\r\ndef _install_instrumentations(self):\r\n    # ... existing instrumentations ...\r\n    \r\n    # New database instrumentation\r\n    if \"newdb\" in sys.modules:\r\n        try:\r\n            from opentelemetry.instrumentation.newdb import NewDBInstrumentor\r\n            if not NewDBInstrumentor().is_instrumented_by_opentelemetry:\r\n                NewDBInstrumentor().instrument()\r\n        except ImportError:\r\n            pass  # Graceful fallback\r\n```\r\n\r\n3. **Add comprehensive tests** in `tests/test_telemetry_multi.py`:\r\n```python\r\n@patch.dict(sys.modules, {\"newdb\": MagicMock()})\r\n@patch(\"ntt_ai_observability_exporter.telemetry_multi.NewDBInstrumentor\")\r\ndef test_newdb_instrumentation(self, mock_instrumentor):\r\n    # Test instrumentation logic\r\n    pass\r\n```\r\n\r\n4. **Update documentation** in README.md to list the new database support\r\n\r\n### Submitting Issues\r\n\r\nWhen submitting issues, please include:\r\n- **Python version** and operating system\r\n- **Package version** and installation method  \r\n- **Complete error messages** and stack traces\r\n- **Minimal code example** to reproduce the issue\r\n- **Expected vs actual behavior** description\r\n\r\n### License\r\n\r\nThis project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.\r\n",
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "NTT AI Observability Exporter for Azure Monitor OpenTelemetry in AI Foundry projects",
    "version": "0.2.9",
    "project_urls": {
        "Bug Tracker": "https://github.com/nttlimited/ntt-ai-observability-exporter/issues",
        "Homepage": "https://github.com/nttlimited/ntt-ai-observability-exporter"
    },
    "split_keywords": [
        "ntt",
        " azure",
        " telemetry",
        " opentelemetry",
        " monitoring",
        " ai",
        " observability"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "fbab7e3c4510bc62906ba1a811c247c4aa24a3cd6a1df24380339012b032c264",
                "md5": "1cf6aa68f6a94ba1a814fd14e9e586d4",
                "sha256": "ca606f489158982b668b64d3dd79821d7b1891acca31df611ed14a78a5c960fc"
            },
            "downloads": -1,
            "filename": "ntt_ai_observability_exporter-0.2.9-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "1cf6aa68f6a94ba1a814fd14e9e586d4",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 27482,
            "upload_time": "2025-09-26T09:40:33",
            "upload_time_iso_8601": "2025-09-26T09:40:33.484569Z",
            "url": "https://files.pythonhosted.org/packages/fb/ab/7e3c4510bc62906ba1a811c247c4aa24a3cd6a1df24380339012b032c264/ntt_ai_observability_exporter-0.2.9-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "fc5160158adf888f55933992c27f490e9392ad00bc5045553489c158194096fe",
                "md5": "75e611acf433b435104c15dcbf24e008",
                "sha256": "a3e39704bfaf9963d52a70e8363a332d1a8419656a952af9139856f7408ed1ab"
            },
            "downloads": -1,
            "filename": "ntt_ai_observability_exporter-0.2.9.tar.gz",
            "has_sig": false,
            "md5_digest": "75e611acf433b435104c15dcbf24e008",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 38319,
            "upload_time": "2025-09-26T09:40:34",
            "upload_time_iso_8601": "2025-09-26T09:40:34.870555Z",
            "url": "https://files.pythonhosted.org/packages/fc/51/60158adf888f55933992c27f490e9392ad00bc5045553489c158194096fe/ntt_ai_observability_exporter-0.2.9.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-09-26 09:40:34",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "nttlimited",
    "github_project": "ntt-ai-observability-exporter",
    "github_not_found": true,
    "lcname": "ntt-ai-observability-exporter"
}
        