# Sherlock AI
A Python package for performance monitoring and logging utilities that helps you track execution times and debug your applications with ease.
## Features
- ๐ฏ **Performance Decorators**: Easy-to-use decorators for tracking function execution times
- ๐ง **Memory Monitoring**: Track Python memory usage with detailed heap and tracemalloc integration
- ๐ **Resource Monitoring**: Monitor CPU, memory, I/O, and network usage during function execution
- โฑ๏ธ **Context Managers**: Monitor code block execution with simple context managers
- ๐ง **Advanced Configuration System**: Complete control over logging with dataclass-based configuration
- ๐๏ธ **Configuration Presets**: Pre-built setups for development, production, and testing environments
- ๐ **Async/Sync Support**: Works seamlessly with both synchronous and asynchronous functions
- ๐ **Request Tracking**: Built-in request ID tracking for distributed systems
- ๐ **Flexible Log Management**: Enable/disable log files, custom directories, and rotation settings
- ๐ท๏ธ **Logger Name Constants**: Easy access to available logger names with autocomplete support
- ๐ **Logger Discovery**: Programmatically discover available loggers in your application
- ๐ **Development-Friendly**: Optimized for FastAPI auto-reload and development environments
- ๐จ **Modular Architecture**: Clean, focused modules for different monitoring aspects
- ๐๏ธ **Class-Based Architecture**: Advanced `SherlockAI` class for instance-based logging management
- ๐ **Runtime Reconfiguration**: Change logging settings without application restart
- ๐งน **Resource Management**: Automatic cleanup and context manager support
- ๐ **Logging Introspection**: Query current logging configuration and statistics
## Installation
```bash
pip install sherlock-ai
```
## Quick Start
### Basic Setup
```python
from sherlock_ai import sherlock_ai, get_logger, log_performance
import time
# Initialize logging (call once at application startup)
sherlock_ai()
# Get a logger for your module
logger = get_logger(__name__)
@log_performance
def my_function():
# Your code here
try:
time.sleep(1)
logger.info("Processing completed")
return "result"
except Exception as e:
logger.error(f"Error: {e}")
raise
# This will log: PERFORMANCE | my_module.my_function | SUCCESS | 1.003s
result = my_function()
```
### Class-Based Setup (Advanced)
```python
from sherlock_ai import SherlockAI, get_logger, log_performance
# Initialize with class-based approach
logger_manager = SherlockAI()
logger_manager.setup()
# Get a logger for your module
logger = get_logger(__name__)
@log_performance
def my_function():
logger.info("Processing with class-based setup")
return "result"
# Later, reconfigure without restart
from sherlock_ai import LoggingPresets
logger_manager.reconfigure(LoggingPresets.development())
# Or use as context manager
with SherlockAI() as temp_logger:
# Temporary logging configuration
logger.info("This uses temporary configuration")
# Automatically cleaned up
```
### Using Logger Name Constants
```python
from sherlock_ai import sherlock_ai, get_logger, LoggerNames, list_available_loggers
# Initialize logging
sherlock_ai()
# Use predefined logger names with autocomplete support
api_logger = get_logger(LoggerNames.API)
db_logger = get_logger(LoggerNames.DATABASE)
service_logger = get_logger(LoggerNames.SERVICES)
# Discover available loggers programmatically
available_loggers = list_available_loggers()
print(f"Available loggers: {available_loggers}")
# Use the loggers
api_logger.info("API request received") # โ logs/api.log
db_logger.info("Database query executed") # โ logs/database.log
service_logger.info("Service operation done") # โ logs/services.log
```
### Logging Introspection
```python
from sherlock_ai import sherlock_ai, get_logging_stats, get_current_config
# Initialize logging
sherlock_ai()
# Get current logging statistics
stats = get_logging_stats()
print(f"Logging configured: {stats['is_configured']}")
print(f"Active handlers: {stats['handlers']}")
print(f"Log directory: {stats['logs_dir']}")
# Get current configuration
config = get_current_config()
print(f"Console enabled: {config.console_enabled}")
print(f"Log files: {list(config.log_files.keys())}")
```
### Advanced Configuration
```python
@log_performance(min_duration=0.1, include_args=True, log_level="DEBUG")
def slow_database_query(user_id, limit=10):
# Only logs if execution time >= 0.1 seconds
# Includes function arguments in the log
pass
```
### Context Manager for Code Blocks
```python
from sherlock_ai.performance import PerformanceTimer
with PerformanceTimer("database_operation"):
# Your code block here
result = database.query("SELECT * FROM users")
# Logs: PERFORMANCE | database_operation | SUCCESS | 0.234s
```
### Async Function Support
```python
@log_performance
async def async_api_call():
async with httpx.AsyncClient() as client:
response = await client.get("https://api.example.com")
return response.json()
# Works automatically with async functions
result = await async_api_call()
```
### Manual Time Logging
```python
from sherlock_ai.performance import log_execution_time
import time
start_time = time.time()
try:
# Your code here
result = complex_operation()
log_execution_time("complex_operation", start_time, success=True)
except Exception as e:
log_execution_time("complex_operation", start_time, success=False, error=str(e))
```
## Memory and Resource Monitoring
### Memory Monitoring
Track Python memory usage with detailed heap analysis:
```python
from sherlock_ai import monitor_memory, MemoryTracker
# Basic memory monitoring
@monitor_memory
def memory_intensive_function():
data = [i * i for i in range(1000000)] # Allocate memory
processed = sum(data)
return processed
# Advanced memory monitoring with tracemalloc
@monitor_memory(trace_malloc=True, min_duration=0.1)
def critical_memory_function():
# Only logs if execution time >= 0.1 seconds
# Includes detailed Python memory tracking
large_dict = {i: str(i) * 100 for i in range(10000)}
return len(large_dict)
# Memory tracking context manager
with MemoryTracker("data_processing"):
# Your memory-intensive code here
data = load_large_dataset()
processed = process_data(data)
# Output example:
# MEMORY | my_module.memory_intensive_function | SUCCESS | 0.245s | Current: 45.67MB | Change: +12.34MB | Traced: 38.92MB (Peak: 52.18MB)
```
### Resource Monitoring
Monitor comprehensive system resources:
```python
from sherlock_ai import monitor_resources, ResourceTracker
# Basic resource monitoring
@monitor_resources
def resource_intensive_function():
# Monitors CPU, memory, and threads
result = sum(i * i for i in range(1000000))
return result
# Advanced resource monitoring with I/O and network
@monitor_resources(include_io=True, include_network=True)
def api_call_function():
# Monitors CPU, memory, I/O, network, and threads
response = requests.get("https://api.example.com")
return response.json()
# Resource tracking context manager
with ResourceTracker("database_operation", include_io=True):
# Your resource-intensive code here
connection = database.connect()
result = connection.execute("SELECT * FROM large_table")
connection.close()
# Output example:
# RESOURCES | my_module.resource_intensive_function | SUCCESS | 0.156s | CPU: 25.4% | Memory: 128.45MB (+5.23MB) | Threads: 12 | I/O: R:2.34MB W:1.12MB
```
### Combined Monitoring
Use both performance and resource monitoring together:
```python
from sherlock_ai import log_performance, monitor_memory, monitor_resources
@log_performance
@monitor_memory(trace_malloc=True)
@monitor_resources(include_io=True)
def comprehensive_monitoring():
# This function will be monitored for:
# - Execution time (performance)
# - Memory usage (memory)
# - System resources (CPU, I/O, etc.)
data = process_large_dataset()
save_to_database(data)
return len(data)
```
### Resource Monitor Utilities
Access low-level resource monitoring utilities:
```python
from sherlock_ai import ResourceMonitor
# Capture current resource snapshot
snapshot = ResourceMonitor.capture_resources()
if snapshot:
print(f"CPU: {snapshot.cpu_percent}%")
print(f"Memory: {ResourceMonitor.format_bytes(snapshot.memory_rss)}")
print(f"Threads: {snapshot.num_threads}")
# Capture memory snapshot
memory_snapshot = ResourceMonitor.capture_memory()
print(f"Current memory: {ResourceMonitor.format_bytes(memory_snapshot.current_size)}")
# Format bytes in human-readable format
formatted = ResourceMonitor.format_bytes(1024 * 1024 * 512) # "512.00MB"
```
## Advanced Configuration
### Configuration Presets
```python
from sherlock_ai import sherlock_ai, LoggingPresets
# Development environment - debug level logging
sherlock_ai(LoggingPresets.development())
# Production environment - optimized performance
sherlock_ai(LoggingPresets.production())
# Minimal setup - only basic app logs
sherlock_ai(LoggingPresets.minimal())
# Performance monitoring only
sherlock_ai(LoggingPresets.performance_only())
```
### Custom Configuration
```python
from sherlock_ai import sherlock_ai, LoggingConfig, LogFileConfig, LoggerConfig
# Create completely custom configuration
config = LoggingConfig(
logs_dir="my_app_logs",
console_level="DEBUG",
log_files={
"application": LogFileConfig("my_app_logs/app.log", max_bytes=50*1024*1024),
"errors": LogFileConfig("my_app_logs/errors.log", level="ERROR"),
"performance": LogFileConfig("my_app_logs/perf.log"),
"custom": LogFileConfig("my_app_logs/custom.log", backup_count=10)
},
loggers={
"api": LoggerConfig("mycompany.api", log_files=["application", "custom"]),
"database": LoggerConfig("mycompany.db", log_files=["application"]),
"performance": LoggerConfig("PerformanceLogger", log_files=["performance"], propagate=False)
}
)
sherlock_ai(config)
```
### Flexible Log Management
```python
from sherlock_ai import LoggingConfig, sherlock_ai
# Start with default configuration
config = LoggingConfig()
# Disable specific log files
config.log_files["api"].enabled = False
config.log_files["services"].enabled = False
# Change log levels
config.log_files["performance"].level = "DEBUG"
config.console_level = "WARNING"
# Modify file sizes and rotation
config.log_files["app"].max_bytes = 100 * 1024 * 1024 # 100MB
config.log_files["app"].backup_count = 15
# Apply the modified configuration
sherlock_ai(config)
```
### Custom File Names and Directories
```python
from sherlock_ai import LoggingPresets, sherlock_ai
# Use custom file names
config = LoggingPresets.custom_files({
"app": "logs/application.log",
"performance": "logs/metrics.log",
"errors": "logs/error_tracking.log"
})
sherlock_ai(config)
```
### Environment-Specific Configuration
```python
import os
from sherlock_ai import sherlock_ai, LoggingPresets, LoggingConfig
# Configure based on environment
env = os.getenv("ENVIRONMENT", "development")
if env == "production":
sherlock_ai(LoggingPresets.production())
elif env == "development":
sherlock_ai(LoggingPresets.development())
elif env == "testing":
config = LoggingConfig(
logs_dir="test_logs",
console_enabled=False, # No console output during tests
log_files={"test_results": LogFileConfig("test_logs/results.log")}
)
sherlock_ai(config)
else:
sherlock_ai() # Default configuration
```
### Development with FastAPI
The package is optimized for FastAPI development with auto-reload enabled:
```python
# main.py
from sherlock_ai import sherlock_ai
import uvicorn
if __name__ == "__main__":
# Set up logging once in the main entry point
sherlock_ai()
# FastAPI auto-reload won't cause duplicate log entries
uvicorn.run(
"myapp.api:app",
host="127.0.0.1",
port=8000,
reload=True # โ
Safe to use - no duplicate logs
)
```
```python
# myapp/api.py
from fastapi import FastAPI, Request
from sherlock_ai import get_logger, LoggerNames, log_performance, monitor_memory
# Don't call sherlock_ai() here - it's already done in main.py
app = FastAPI()
logger = get_logger(LoggerNames.API)
@app.get("/health")
def health_check():
logger.info("Health check requested")
return {"status": "healthy"}
# โ ๏ธ IMPORTANT: Decorator order matters for FastAPI middleware!
@app.middleware("http") # โ
Framework decorators must be outermost
@log_performance # โ
Then monitoring decorators
@monitor_memory # โ
Then other monitoring decorators
async def request_middleware(request: Request, call_next):
# This will work correctly and log to performance.log
response = await call_next(request)
return response
```
### FastAPI Decorator Order โ ๏ธ
**Critical**: Always put FastAPI decorators outermost:
```python
# โ
CORRECT - Framework decorator first
@app.middleware("http")
@log_performance
@monitor_memory
async def middleware_function(request, call_next):
pass
# โ WRONG - Monitoring decorators interfere with FastAPI
@log_performance
@monitor_memory
@app.middleware("http") # FastAPI can't find this!
async def middleware_function(request, call_next):
pass
```
This applies to all framework decorators (`@app.route`, `@app.middleware`, etc.).
## API Reference
### `@log_performance` Decorator
Parameters:
- `min_duration` (float): Only log if execution time >= this value in seconds (default: 0.0)
- `include_args` (bool): Whether to include function arguments in the log (default: False)
- `log_level` (str): Log level to use - INFO, DEBUG, WARNING, etc. (default: "INFO")
### `PerformanceTimer` Context Manager
Parameters:
- `name` (str): Name identifier for the operation
- `min_duration` (float): Only log if execution time >= this value in seconds (default: 0.0)
### `log_execution_time` Function
Parameters:
- `name` (str): Name identifier for the operation
- `start_time` (float): Start time from `time.time()`
- `success` (bool): Whether the operation succeeded (default: True)
- `error` (str): Error message if operation failed (default: None)
### `@monitor_memory` Decorator
Monitor memory usage during function execution.
Parameters:
- `min_duration` (float): Only log if execution time >= this value in seconds (default: 0.0)
- `log_level` (str): Log level to use (default: "INFO")
- `trace_malloc` (bool): Use tracemalloc for detailed Python memory tracking (default: True)
### `@monitor_resources` Decorator
Monitor comprehensive system resources during function execution.
Parameters:
- `min_duration` (float): Only log if execution time >= this value in seconds (default: 0.0)
- `log_level` (str): Log level to use (default: "INFO")
- `include_io` (bool): Include I/O statistics (default: True)
- `include_network` (bool): Include network statistics (default: False)
### `MemoryTracker` Context Manager
Track memory usage in code blocks.
Parameters:
- `name` (str): Name identifier for the operation
- `min_duration` (float): Only log if execution time >= this value in seconds (default: 0.0)
- `trace_malloc` (bool): Use tracemalloc for detailed tracking (default: True)
### `ResourceTracker` Context Manager
Track comprehensive resource usage in code blocks.
Parameters:
- `name` (str): Name identifier for the operation
- `min_duration` (float): Only log if execution time >= this value in seconds (default: 0.0)
- `include_io` (bool): Include I/O statistics (default: True)
- `include_network` (bool): Include network statistics (default: False)
### `ResourceMonitor` Utility Class
Low-level resource monitoring utilities.
Static Methods:
- `capture_resources()`: Capture current system resource snapshot
- `capture_memory()`: Capture current memory usage snapshot
- `format_bytes(bytes_val)`: Format bytes in human-readable format
- `calculate_resource_diff(start, end)`: Calculate differences between snapshots
### Configuration Classes
#### `LoggingConfig`
Main configuration class for the logging system.
Parameters:
- `logs_dir` (str): Directory for log files (default: "logs")
- `log_format` (str): Log message format string
- `date_format` (str): Date format for timestamps
- `console_enabled` (bool): Enable console output (default: True)
- `console_level` (Union[str, int]): Console log level (default: INFO)
- `root_level` (Union[str, int]): Root logger level (default: INFO)
- `log_files` (Dict[str, LogFileConfig]): Log file configurations
- `loggers` (Dict[str, LoggerConfig]): Logger configurations
- `external_loggers` (Dict[str, Union[str, int]]): External library log levels
#### `LogFileConfig`
Configuration for individual log files.
Parameters:
- `filename` (str): Path to the log file
- `level` (Union[str, int]): Log level for this file (default: INFO)
- `max_bytes` (int): Maximum file size before rotation (default: 10MB)
- `backup_count` (int): Number of backup files to keep (default: 5)
- `encoding` (str): File encoding (default: "utf-8")
- `enabled` (bool): Whether this log file is enabled (default: True)
#### `LoggerConfig`
Configuration for individual loggers.
Parameters:
- `name` (str): Logger name
- `level` (Union[str, int]): Logger level (default: INFO)
- `log_files` (List[str]): List of log file names this logger writes to
- `propagate` (bool): Whether to propagate to parent loggers (default: True)
- `enabled` (bool): Whether this logger is enabled (default: True)
### Configuration Presets
#### `LoggingPresets.minimal()`
Basic setup with only console and app log.
#### `LoggingPresets.development()`
Debug-level logging for development environment.
#### `LoggingPresets.production()`
Optimized configuration for production use.
#### `LoggingPresets.performance_only()`
Only performance monitoring logs.
#### `LoggingPresets.custom_files(file_configs)`
Custom file names for standard log types.
Parameters:
- `file_configs` (Dict[str, str]): Mapping of log type to custom filename
### Logger Constants and Discovery
#### `LoggerNames`
Class containing constants for available logger names.
Available constants:
- `LoggerNames.API` - API logger name
- `LoggerNames.DATABASE` - Database logger name
- `LoggerNames.SERVICES` - Services logger name
- `LoggerNames.PERFORMANCE` - Performance logger name
#### `list_available_loggers()`
Function to discover all available logger names.
Returns:
- `List[str]`: List of all available logger names
Example:
```python
from sherlock_ai import LoggerNames, list_available_loggers
# Use constants with autocomplete
logger = get_logger(LoggerNames.API)
# Discover available loggers
loggers = list_available_loggers()
print(f"Available: {loggers}")
```
### Class-Based API
#### `SherlockAI` Class
Advanced logging management with instance-based configuration.
**Constructor Parameters:**
- `config` (Optional[LoggingConfig]): Configuration object. If None, uses default configuration.
**Methods:**
- `setup()`: Set up logging configuration. Returns applied LoggingConfig.
- `reconfigure(new_config)`: Change configuration without restart.
- `cleanup()`: Clean up handlers and resources.
- `get_stats()`: Get current logging statistics.
- `get_handler_info()`: Get information about current handlers.
- `get_logger_info()`: Get information about configured loggers.
**Class Methods:**
- `SherlockAI.get_instance()`: Get or create singleton instance.
**Context Manager Support:**
```python
with SherlockAI(config) as logger_manager:
# Temporary logging configuration
pass
# Automatically cleaned up
```
#### `get_logging_stats()`
Get current logging statistics from default instance.
Returns:
- `Dict[str, Any]`: Dictionary containing logging statistics
Example:
```python
from sherlock_ai import get_logging_stats
stats = get_logging_stats()
print(f"Configured: {stats['is_configured']}")
print(f"Handlers: {stats['handlers']}")
```
#### `get_current_config()`
Get current logging configuration from default instance.
Returns:
- `Optional[LoggingConfig]`: Current configuration if available
Example:
```python
from sherlock_ai import get_current_config
config = get_current_config()
if config:
print(f"Console enabled: {config.console_enabled}")
```
## Configuration
### Basic Logging Setup
```python
from sherlock_ai import sherlock_ai, get_logger
# Initialize logging (call once at application startup)
sherlock_ai()
# Get a logger for your module
logger = get_logger(__name__)
# Use the logger
logger.info("Application started")
logger.error("Something went wrong")
```
**Default Log Files Created:**
When you call `sherlock_ai()` with no arguments, it automatically creates a `logs/` directory with these files:
- `app.log` - All INFO+ level logs from root logger
- `errors.log` - Only ERROR+ level logs from any logger
- `api.log` - Logs from `app.api` logger (empty unless you use this logger)
- `database.log` - Logs from `app.core.dbConnection` logger
- `services.log` - Logs from `app.services` logger
- `performance.log` - Performance monitoring logs from your `@log_performance` decorators
### Using Specific Loggers
```python
import logging
from sherlock_ai import sherlock_ai
sherlock_ai()
# Use specific loggers to populate their respective log files
api_logger = logging.getLogger("app.api")
db_logger = logging.getLogger("app.core.dbConnection")
services_logger = logging.getLogger("app.services")
# These will go to their specific log files
api_logger.info("API request received") # โ api.log
db_logger.info("Database query executed") # โ database.log
services_logger.info("Service operation done") # โ services.log
```
### Request ID Tracking
```python
from sherlock_ai.utils.helper import get_request_id, set_request_id
# Set a request ID for the current context
request_id = set_request_id("req-12345")
# Get current request ID for distributed tracing
current_id = get_request_id()
```
### Complete Application Example
```python
from sherlock_ai import sherlock_ai, get_logger, log_performance, PerformanceTimer
# Initialize logging first
sherlock_ai()
logger = get_logger(__name__)
@log_performance
def main():
logger.info("Application starting")
with PerformanceTimer("initialization"):
# Your initialization code
pass
logger.info("Application ready")
if __name__ == "__main__":
main()
```
## Log Output Format
The package produces structured log messages with the following format:
```
{timestamp} - {request_id} - {logger_name} - {log_level} - {message_content}
```
Where:
- `{timestamp}`: Date and time of the log entry
- `{request_id}`: Request ID set by `set_request_id()` (shows `-` if not set)
- `{logger_name}`: Name of the logger (e.g., PerformanceLogger, MonitoringLogger)
- `{log_level}`: Log level (INFO, ERROR, DEBUG, etc.)
- `{message_content}`: The actual log message content
### Performance Logs
**Message Content Format:**
```
PERFORMANCE | {function_name} | {STATUS} | {execution_time}s | {additional_info}
```
**Examples:**
```
2025-07-05 19:19:11 - 07ca74ed - PerformanceLogger - INFO - PERFORMANCE | tests.test_fastapi.health_check | SUCCESS | 0.262s
2025-07-05 21:13:03 - 2c4774b0 - PerformanceLogger - INFO - PERFORMANCE | my_module.api_call | ERROR | 2.456s | Connection timeout
2025-07-05 19:20:15 - - - PerformanceLogger - INFO - PERFORMANCE | database_query | SUCCESS | 0.089s | Args: ('user123',) | Kwargs: {'limit': 10}
```
### Memory Monitoring Logs
**Message Content Format:**
```
MEMORY | {function_name} | {STATUS} | {execution_time}s | Current: {current_memory} | Change: {memory_change} | Traced: {traced_memory}
```
**Examples:**
```
2025-07-05 19:19:11 - 07ca74ed - MonitoringLogger - INFO - MEMORY | tests.test_fastapi.health_check | SUCCESS | 0.261s | Current: 57.66MB | Change: +1.64MB | Traced: 24.33KB (Peak: 30.33KB)
2025-07-05 21:15:22 - - - MonitoringLogger - INFO - MEMORY | data_processor | SUCCESS | 0.245s | Current: 45.67MB | Change: +12.34MB
```
### Resource Monitoring Logs
**Message Content Format:**
```
RESOURCES | {function_name} | {STATUS} | {execution_time}s | CPU: {cpu_percent}% | Memory: {memory_usage} | Threads: {thread_count} | I/O: R:{read_bytes} W:{write_bytes}
```
**Examples:**
```
2025-07-05 19:19:11 - 07ca74ed - MonitoringLogger - INFO - RESOURCES | tests.test_fastapi.health_check | SUCCESS | 0.144s | CPU: 59.3% | Memory: 57.66MB (+1.63MB) | Threads: 9 | I/O: R:0.00B W:414.00B
2025-07-05 21:13:03 - 2c4774b0 - MonitoringLogger - INFO - RESOURCES | api_handler | SUCCESS | 0.156s | CPU: 25.4% | Memory: 128.45MB (+5.23MB) | Threads: 12 | I/O: R:2.34MB W:1.12MB
2025-07-05 19:25:30 - - - MonitoringLogger - INFO - RESOURCES | database_query | SUCCESS | 0.089s | CPU: 15.2% | Memory: 95.67MB (+0.12MB) | Threads: 8
```
### Request ID Usage
To include request IDs in your logs, use the `set_request_id()` function:
```python
from sherlock_ai import set_request_id, get_request_id
# Set a request ID for the current context
request_id = set_request_id("req-12345") # Custom ID
# or
request_id = set_request_id() # Auto-generated ID (e.g., "07ca74ed")
# Now all logs will include this request ID
# When request ID is set: "2025-07-05 19:19:11 - 07ca74ed - ..."
# When request ID is not set: "2025-07-05 19:19:11 - - - ..."
```
## Use Cases
- **API Performance Monitoring**: Track response times for your web APIs with dedicated API logging
- **Memory Leak Detection**: Monitor memory usage patterns to identify potential memory leaks
- **Resource Optimization**: Analyze CPU, memory, and I/O usage to optimize application performance
- **Database Query Optimization**: Monitor slow database operations with separate database logs
- **Microservices Debugging**: Trace execution times across service boundaries with request ID tracking
- **Algorithm Benchmarking**: Compare performance of different implementations using custom configurations
- **Production Monitoring**: Get insights into your application's performance characteristics with production presets
- **Memory-Intensive Applications**: Monitor memory usage in data processing, ML model training, and large dataset operations
- **System Resource Analysis**: Track resource consumption patterns for capacity planning and scaling decisions
- **Environment-Specific Logging**: Use different configurations for development, testing, and production
- **Custom Log Management**: Create application-specific log files and directory structures
- **Compliance & Auditing**: Separate error logs and performance logs for security and compliance requirements
- **DevOps Integration**: Configure logging for containerized environments and CI/CD pipelines
- **FastAPI Development**: Optimized for FastAPI auto-reload with no duplicate log entries during development
- **Logger Organization**: Use predefined logger names with autocomplete support for better code maintainability
- **Performance Profiling**: Comprehensive monitoring for identifying bottlenecks in CPU, memory, and I/O operations
## Requirements
- Python >= 3.8
- **psutil** >= 5.8.0 (for memory and resource monitoring)
- Standard library for basic performance monitoring
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## Links
- **Homepage**: [https://github.com/pranawmishra/sherlock-ai](https://github.com/pranawmishra/sherlock-ai)
- **Repository**: [https://github.com/pranawmishra/sherlock-ai](https://github.com/pranawmishra/sherlock-ai)
- **Issues**: [https://github.com/pranawmishra/sherlock-ai/issues](https://github.com/pranawmishra/sherlock-ai/issues)
Raw data
{
"_id": null,
"home_page": null,
"name": "sherlock-ai",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.8",
"maintainer_email": null,
"keywords": "performance, monitoring, logging, debugging, profiling",
"author": null,
"author_email": "Pranaw Mishra <pranawmishra73@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/bf/75/c7b66708c51a3d7e7a300217b3a392c35b911c916e2919c224ff2ae05c50/sherlock_ai-1.3.0.tar.gz",
"platform": null,
"description": "# Sherlock AI\n\nA Python package for performance monitoring and logging utilities that helps you track execution times and debug your applications with ease.\n\n## Features\n\n- \ud83c\udfaf **Performance Decorators**: Easy-to-use decorators for tracking function execution times\n- \ud83e\udde0 **Memory Monitoring**: Track Python memory usage with detailed heap and tracemalloc integration\n- \ud83d\udcca **Resource Monitoring**: Monitor CPU, memory, I/O, and network usage during function execution\n- \u23f1\ufe0f **Context Managers**: Monitor code block execution with simple context managers\n- \ud83d\udd27 **Advanced Configuration System**: Complete control over logging with dataclass-based configuration\n- \ud83c\udf9b\ufe0f **Configuration Presets**: Pre-built setups for development, production, and testing environments\n- \ud83d\udd04 **Async/Sync Support**: Works seamlessly with both synchronous and asynchronous functions\n- \ud83d\udcc8 **Request Tracking**: Built-in request ID tracking for distributed systems\n- \ud83d\udcc1 **Flexible Log Management**: Enable/disable log files, custom directories, and rotation settings\n- \ud83c\udff7\ufe0f **Logger Name Constants**: Easy access to available logger names with autocomplete support\n- \ud83d\udd0d **Logger Discovery**: Programmatically discover available loggers in your application\n- \ud83d\udc1b **Development-Friendly**: Optimized for FastAPI auto-reload and development environments\n- \ud83c\udfa8 **Modular Architecture**: Clean, focused modules for different monitoring aspects\n- \ud83c\udfd7\ufe0f **Class-Based Architecture**: Advanced `SherlockAI` class for instance-based logging management\n- \ud83d\udd04 **Runtime Reconfiguration**: Change logging settings without application restart\n- \ud83e\uddf9 **Resource Management**: Automatic cleanup and context manager support\n- \ud83d\udd0d **Logging Introspection**: Query current logging configuration and statistics\n\n## Installation\n\n```bash\npip install sherlock-ai\n```\n\n## Quick Start\n\n### Basic Setup\n\n```python\nfrom sherlock_ai import sherlock_ai, get_logger, log_performance\nimport time\n\n# Initialize logging (call once at application startup)\nsherlock_ai()\n\n# Get a logger for your module\nlogger = get_logger(__name__)\n\n@log_performance\ndef my_function():\n # Your code here\n try:\n time.sleep(1)\n logger.info(\"Processing completed\")\n return \"result\"\n except Exception as e:\n logger.error(f\"Error: {e}\")\n raise\n\n# This will log: PERFORMANCE | my_module.my_function | SUCCESS | 1.003s\nresult = my_function()\n```\n\n### Class-Based Setup (Advanced)\n\n```python\nfrom sherlock_ai import SherlockAI, get_logger, log_performance\n\n# Initialize with class-based approach\nlogger_manager = SherlockAI()\nlogger_manager.setup()\n\n# Get a logger for your module\nlogger = get_logger(__name__)\n\n@log_performance\ndef my_function():\n logger.info(\"Processing with class-based setup\")\n return \"result\"\n\n# Later, reconfigure without restart\nfrom sherlock_ai import LoggingPresets\nlogger_manager.reconfigure(LoggingPresets.development())\n\n# Or use as context manager\nwith SherlockAI() as temp_logger:\n # Temporary logging configuration\n logger.info(\"This uses temporary configuration\")\n# Automatically cleaned up\n```\n\n### Using Logger Name Constants\n\n```python\nfrom sherlock_ai import sherlock_ai, get_logger, LoggerNames, list_available_loggers\n\n# Initialize logging\nsherlock_ai()\n\n# Use predefined logger names with autocomplete support\napi_logger = get_logger(LoggerNames.API)\ndb_logger = get_logger(LoggerNames.DATABASE)\nservice_logger = get_logger(LoggerNames.SERVICES)\n\n# Discover available loggers programmatically\navailable_loggers = list_available_loggers()\nprint(f\"Available loggers: {available_loggers}\")\n\n# Use the loggers\napi_logger.info(\"API request received\") # \u2192 logs/api.log\ndb_logger.info(\"Database query executed\") # \u2192 logs/database.log\nservice_logger.info(\"Service operation done\") # \u2192 logs/services.log\n```\n\n### Logging Introspection\n\n```python\nfrom sherlock_ai import sherlock_ai, get_logging_stats, get_current_config\n\n# Initialize logging\nsherlock_ai()\n\n# Get current logging statistics\nstats = get_logging_stats()\nprint(f\"Logging configured: {stats['is_configured']}\")\nprint(f\"Active handlers: {stats['handlers']}\")\nprint(f\"Log directory: {stats['logs_dir']}\")\n\n# Get current configuration\nconfig = get_current_config()\nprint(f\"Console enabled: {config.console_enabled}\")\nprint(f\"Log files: {list(config.log_files.keys())}\")\n```\n\n### Advanced Configuration\n\n```python\n@log_performance(min_duration=0.1, include_args=True, log_level=\"DEBUG\")\ndef slow_database_query(user_id, limit=10):\n # Only logs if execution time >= 0.1 seconds\n # Includes function arguments in the log\n pass\n```\n\n### Context Manager for Code Blocks\n\n```python\nfrom sherlock_ai.performance import PerformanceTimer\n\nwith PerformanceTimer(\"database_operation\"):\n # Your code block here\n result = database.query(\"SELECT * FROM users\")\n \n# Logs: PERFORMANCE | database_operation | SUCCESS | 0.234s\n```\n\n### Async Function Support\n\n```python\n@log_performance\nasync def async_api_call():\n async with httpx.AsyncClient() as client:\n response = await client.get(\"https://api.example.com\")\n return response.json()\n\n# Works automatically with async functions\nresult = await async_api_call()\n```\n\n### Manual Time Logging\n\n```python\nfrom sherlock_ai.performance import log_execution_time\nimport time\n\nstart_time = time.time()\ntry:\n # Your code here\n result = complex_operation()\n log_execution_time(\"complex_operation\", start_time, success=True)\nexcept Exception as e:\n log_execution_time(\"complex_operation\", start_time, success=False, error=str(e))\n```\n\n## Memory and Resource Monitoring\n\n### Memory Monitoring\n\nTrack Python memory usage with detailed heap analysis:\n\n```python\nfrom sherlock_ai import monitor_memory, MemoryTracker\n\n# Basic memory monitoring\n@monitor_memory\ndef memory_intensive_function():\n data = [i * i for i in range(1000000)] # Allocate memory\n processed = sum(data)\n return processed\n\n# Advanced memory monitoring with tracemalloc\n@monitor_memory(trace_malloc=True, min_duration=0.1)\ndef critical_memory_function():\n # Only logs if execution time >= 0.1 seconds\n # Includes detailed Python memory tracking\n large_dict = {i: str(i) * 100 for i in range(10000)}\n return len(large_dict)\n\n# Memory tracking context manager\nwith MemoryTracker(\"data_processing\"):\n # Your memory-intensive code here\n data = load_large_dataset()\n processed = process_data(data)\n\n# Output example:\n# MEMORY | my_module.memory_intensive_function | SUCCESS | 0.245s | Current: 45.67MB | Change: +12.34MB | Traced: 38.92MB (Peak: 52.18MB)\n```\n\n### Resource Monitoring\n\nMonitor comprehensive system resources:\n\n```python\nfrom sherlock_ai import monitor_resources, ResourceTracker\n\n# Basic resource monitoring\n@monitor_resources\ndef resource_intensive_function():\n # Monitors CPU, memory, and threads\n result = sum(i * i for i in range(1000000))\n return result\n\n# Advanced resource monitoring with I/O and network\n@monitor_resources(include_io=True, include_network=True)\ndef api_call_function():\n # Monitors CPU, memory, I/O, network, and threads\n response = requests.get(\"https://api.example.com\")\n return response.json()\n\n# Resource tracking context manager\nwith ResourceTracker(\"database_operation\", include_io=True):\n # Your resource-intensive code here\n connection = database.connect()\n result = connection.execute(\"SELECT * FROM large_table\")\n connection.close()\n\n# Output example:\n# RESOURCES | my_module.resource_intensive_function | SUCCESS | 0.156s | CPU: 25.4% | Memory: 128.45MB (+5.23MB) | Threads: 12 | I/O: R:2.34MB W:1.12MB\n```\n\n### Combined Monitoring\n\nUse both performance and resource monitoring together:\n\n```python\nfrom sherlock_ai import log_performance, monitor_memory, monitor_resources\n\n@log_performance\n@monitor_memory(trace_malloc=True)\n@monitor_resources(include_io=True)\ndef comprehensive_monitoring():\n # This function will be monitored for:\n # - Execution time (performance)\n # - Memory usage (memory)\n # - System resources (CPU, I/O, etc.)\n data = process_large_dataset()\n save_to_database(data)\n return len(data)\n```\n\n### Resource Monitor Utilities\n\nAccess low-level resource monitoring utilities:\n\n```python\nfrom sherlock_ai import ResourceMonitor\n\n# Capture current resource snapshot\nsnapshot = ResourceMonitor.capture_resources()\nif snapshot:\n print(f\"CPU: {snapshot.cpu_percent}%\")\n print(f\"Memory: {ResourceMonitor.format_bytes(snapshot.memory_rss)}\")\n print(f\"Threads: {snapshot.num_threads}\")\n\n# Capture memory snapshot\nmemory_snapshot = ResourceMonitor.capture_memory()\nprint(f\"Current memory: {ResourceMonitor.format_bytes(memory_snapshot.current_size)}\")\n\n# Format bytes in human-readable format\nformatted = ResourceMonitor.format_bytes(1024 * 1024 * 512) # \"512.00MB\"\n```\n\n## Advanced Configuration\n\n### Configuration Presets\n\n```python\nfrom sherlock_ai import sherlock_ai, LoggingPresets\n\n# Development environment - debug level logging\nsherlock_ai(LoggingPresets.development())\n\n# Production environment - optimized performance\nsherlock_ai(LoggingPresets.production())\n\n# Minimal setup - only basic app logs\nsherlock_ai(LoggingPresets.minimal())\n\n# Performance monitoring only\nsherlock_ai(LoggingPresets.performance_only())\n```\n\n### Custom Configuration\n\n```python\nfrom sherlock_ai import sherlock_ai, LoggingConfig, LogFileConfig, LoggerConfig\n\n# Create completely custom configuration\nconfig = LoggingConfig(\n logs_dir=\"my_app_logs\",\n console_level=\"DEBUG\",\n log_files={\n \"application\": LogFileConfig(\"my_app_logs/app.log\", max_bytes=50*1024*1024),\n \"errors\": LogFileConfig(\"my_app_logs/errors.log\", level=\"ERROR\"),\n \"performance\": LogFileConfig(\"my_app_logs/perf.log\"),\n \"custom\": LogFileConfig(\"my_app_logs/custom.log\", backup_count=10)\n },\n loggers={\n \"api\": LoggerConfig(\"mycompany.api\", log_files=[\"application\", \"custom\"]),\n \"database\": LoggerConfig(\"mycompany.db\", log_files=[\"application\"]),\n \"performance\": LoggerConfig(\"PerformanceLogger\", log_files=[\"performance\"], propagate=False)\n }\n)\n\nsherlock_ai(config)\n```\n\n### Flexible Log Management\n\n```python\nfrom sherlock_ai import LoggingConfig, sherlock_ai\n\n# Start with default configuration\nconfig = LoggingConfig()\n\n# Disable specific log files\nconfig.log_files[\"api\"].enabled = False\nconfig.log_files[\"services\"].enabled = False\n\n# Change log levels\nconfig.log_files[\"performance\"].level = \"DEBUG\"\nconfig.console_level = \"WARNING\"\n\n# Modify file sizes and rotation\nconfig.log_files[\"app\"].max_bytes = 100 * 1024 * 1024 # 100MB\nconfig.log_files[\"app\"].backup_count = 15\n\n# Apply the modified configuration\nsherlock_ai(config)\n```\n\n### Custom File Names and Directories\n\n```python\nfrom sherlock_ai import LoggingPresets, sherlock_ai\n\n# Use custom file names\nconfig = LoggingPresets.custom_files({\n \"app\": \"logs/application.log\",\n \"performance\": \"logs/metrics.log\",\n \"errors\": \"logs/error_tracking.log\"\n})\n\nsherlock_ai(config)\n```\n\n### Environment-Specific Configuration\n\n```python\nimport os\nfrom sherlock_ai import sherlock_ai, LoggingPresets, LoggingConfig\n\n# Configure based on environment\nenv = os.getenv(\"ENVIRONMENT\", \"development\")\n\nif env == \"production\":\n sherlock_ai(LoggingPresets.production())\nelif env == \"development\":\n sherlock_ai(LoggingPresets.development())\nelif env == \"testing\":\n config = LoggingConfig(\n logs_dir=\"test_logs\",\n console_enabled=False, # No console output during tests\n log_files={\"test_results\": LogFileConfig(\"test_logs/results.log\")}\n )\n sherlock_ai(config)\nelse:\n sherlock_ai() # Default configuration\n```\n\n### Development with FastAPI\n\nThe package is optimized for FastAPI development with auto-reload enabled:\n\n```python\n# main.py\nfrom sherlock_ai import sherlock_ai\nimport uvicorn\n\nif __name__ == \"__main__\":\n # Set up logging once in the main entry point\n sherlock_ai()\n \n # FastAPI auto-reload won't cause duplicate log entries\n uvicorn.run(\n \"myapp.api:app\",\n host=\"127.0.0.1\",\n port=8000,\n reload=True # \u2705 Safe to use - no duplicate logs\n )\n```\n\n```python\n# myapp/api.py\nfrom fastapi import FastAPI, Request\nfrom sherlock_ai import get_logger, LoggerNames, log_performance, monitor_memory\n\n# Don't call sherlock_ai() here - it's already done in main.py\napp = FastAPI()\nlogger = get_logger(LoggerNames.API)\n\n@app.get(\"/health\")\ndef health_check():\n logger.info(\"Health check requested\")\n return {\"status\": \"healthy\"}\n\n# \u26a0\ufe0f IMPORTANT: Decorator order matters for FastAPI middleware!\n@app.middleware(\"http\") # \u2705 Framework decorators must be outermost\n@log_performance # \u2705 Then monitoring decorators\n@monitor_memory # \u2705 Then other monitoring decorators\nasync def request_middleware(request: Request, call_next):\n # This will work correctly and log to performance.log\n response = await call_next(request)\n return response\n```\n\n### FastAPI Decorator Order \u26a0\ufe0f\n\n**Critical**: Always put FastAPI decorators outermost:\n\n```python\n# \u2705 CORRECT - Framework decorator first\n@app.middleware(\"http\")\n@log_performance\n@monitor_memory\nasync def middleware_function(request, call_next):\n pass\n\n# \u274c WRONG - Monitoring decorators interfere with FastAPI\n@log_performance\n@monitor_memory\n@app.middleware(\"http\") # FastAPI can't find this!\nasync def middleware_function(request, call_next):\n pass\n```\n\nThis applies to all framework decorators (`@app.route`, `@app.middleware`, etc.).\n\n## API Reference\n\n### `@log_performance` Decorator\n\nParameters:\n- `min_duration` (float): Only log if execution time >= this value in seconds (default: 0.0)\n- `include_args` (bool): Whether to include function arguments in the log (default: False)\n- `log_level` (str): Log level to use - INFO, DEBUG, WARNING, etc. (default: \"INFO\")\n\n### `PerformanceTimer` Context Manager\n\nParameters:\n- `name` (str): Name identifier for the operation\n- `min_duration` (float): Only log if execution time >= this value in seconds (default: 0.0)\n\n### `log_execution_time` Function\n\nParameters:\n- `name` (str): Name identifier for the operation\n- `start_time` (float): Start time from `time.time()`\n- `success` (bool): Whether the operation succeeded (default: True)\n- `error` (str): Error message if operation failed (default: None)\n\n### `@monitor_memory` Decorator\n\nMonitor memory usage during function execution.\n\nParameters:\n- `min_duration` (float): Only log if execution time >= this value in seconds (default: 0.0)\n- `log_level` (str): Log level to use (default: \"INFO\")\n- `trace_malloc` (bool): Use tracemalloc for detailed Python memory tracking (default: True)\n\n### `@monitor_resources` Decorator\n\nMonitor comprehensive system resources during function execution.\n\nParameters:\n- `min_duration` (float): Only log if execution time >= this value in seconds (default: 0.0)\n- `log_level` (str): Log level to use (default: \"INFO\")\n- `include_io` (bool): Include I/O statistics (default: True)\n- `include_network` (bool): Include network statistics (default: False)\n\n### `MemoryTracker` Context Manager\n\nTrack memory usage in code blocks.\n\nParameters:\n- `name` (str): Name identifier for the operation\n- `min_duration` (float): Only log if execution time >= this value in seconds (default: 0.0)\n- `trace_malloc` (bool): Use tracemalloc for detailed tracking (default: True)\n\n### `ResourceTracker` Context Manager\n\nTrack comprehensive resource usage in code blocks.\n\nParameters:\n- `name` (str): Name identifier for the operation\n- `min_duration` (float): Only log if execution time >= this value in seconds (default: 0.0)\n- `include_io` (bool): Include I/O statistics (default: True)\n- `include_network` (bool): Include network statistics (default: False)\n\n### `ResourceMonitor` Utility Class\n\nLow-level resource monitoring utilities.\n\nStatic Methods:\n- `capture_resources()`: Capture current system resource snapshot\n- `capture_memory()`: Capture current memory usage snapshot\n- `format_bytes(bytes_val)`: Format bytes in human-readable format\n- `calculate_resource_diff(start, end)`: Calculate differences between snapshots\n\n### Configuration Classes\n\n#### `LoggingConfig`\n\nMain configuration class for the logging system.\n\nParameters:\n- `logs_dir` (str): Directory for log files (default: \"logs\")\n- `log_format` (str): Log message format string\n- `date_format` (str): Date format for timestamps\n- `console_enabled` (bool): Enable console output (default: True)\n- `console_level` (Union[str, int]): Console log level (default: INFO)\n- `root_level` (Union[str, int]): Root logger level (default: INFO)\n- `log_files` (Dict[str, LogFileConfig]): Log file configurations\n- `loggers` (Dict[str, LoggerConfig]): Logger configurations\n- `external_loggers` (Dict[str, Union[str, int]]): External library log levels\n\n#### `LogFileConfig`\n\nConfiguration for individual log files.\n\nParameters:\n- `filename` (str): Path to the log file\n- `level` (Union[str, int]): Log level for this file (default: INFO)\n- `max_bytes` (int): Maximum file size before rotation (default: 10MB)\n- `backup_count` (int): Number of backup files to keep (default: 5)\n- `encoding` (str): File encoding (default: \"utf-8\")\n- `enabled` (bool): Whether this log file is enabled (default: True)\n\n#### `LoggerConfig`\n\nConfiguration for individual loggers.\n\nParameters:\n- `name` (str): Logger name\n- `level` (Union[str, int]): Logger level (default: INFO)\n- `log_files` (List[str]): List of log file names this logger writes to\n- `propagate` (bool): Whether to propagate to parent loggers (default: True)\n- `enabled` (bool): Whether this logger is enabled (default: True)\n\n### Configuration Presets\n\n#### `LoggingPresets.minimal()`\nBasic setup with only console and app log.\n\n#### `LoggingPresets.development()`\nDebug-level logging for development environment.\n\n#### `LoggingPresets.production()`\nOptimized configuration for production use.\n\n#### `LoggingPresets.performance_only()`\nOnly performance monitoring logs.\n\n#### `LoggingPresets.custom_files(file_configs)`\nCustom file names for standard log types.\n\nParameters:\n- `file_configs` (Dict[str, str]): Mapping of log type to custom filename\n\n### Logger Constants and Discovery\n\n#### `LoggerNames`\nClass containing constants for available logger names.\n\nAvailable constants:\n- `LoggerNames.API` - API logger name\n- `LoggerNames.DATABASE` - Database logger name \n- `LoggerNames.SERVICES` - Services logger name\n- `LoggerNames.PERFORMANCE` - Performance logger name\n\n#### `list_available_loggers()`\nFunction to discover all available logger names.\n\nReturns:\n- `List[str]`: List of all available logger names\n\nExample:\n```python\nfrom sherlock_ai import LoggerNames, list_available_loggers\n\n# Use constants with autocomplete\nlogger = get_logger(LoggerNames.API)\n\n# Discover available loggers\nloggers = list_available_loggers()\nprint(f\"Available: {loggers}\")\n```\n\n### Class-Based API\n\n#### `SherlockAI` Class\n\nAdvanced logging management with instance-based configuration.\n\n**Constructor Parameters:**\n- `config` (Optional[LoggingConfig]): Configuration object. If None, uses default configuration.\n\n**Methods:**\n- `setup()`: Set up logging configuration. Returns applied LoggingConfig.\n- `reconfigure(new_config)`: Change configuration without restart.\n- `cleanup()`: Clean up handlers and resources.\n- `get_stats()`: Get current logging statistics.\n- `get_handler_info()`: Get information about current handlers.\n- `get_logger_info()`: Get information about configured loggers.\n\n**Class Methods:**\n- `SherlockAI.get_instance()`: Get or create singleton instance.\n\n**Context Manager Support:**\n```python\nwith SherlockAI(config) as logger_manager:\n # Temporary logging configuration\n pass\n# Automatically cleaned up\n```\n\n#### `get_logging_stats()`\nGet current logging statistics from default instance.\n\nReturns:\n- `Dict[str, Any]`: Dictionary containing logging statistics\n\nExample:\n```python\nfrom sherlock_ai import get_logging_stats\n\nstats = get_logging_stats()\nprint(f\"Configured: {stats['is_configured']}\")\nprint(f\"Handlers: {stats['handlers']}\")\n```\n\n#### `get_current_config()`\nGet current logging configuration from default instance.\n\nReturns:\n- `Optional[LoggingConfig]`: Current configuration if available\n\nExample:\n```python\nfrom sherlock_ai import get_current_config\n\nconfig = get_current_config()\nif config:\n print(f\"Console enabled: {config.console_enabled}\")\n```\n\n## Configuration\n\n### Basic Logging Setup\n\n```python\nfrom sherlock_ai import sherlock_ai, get_logger\n\n# Initialize logging (call once at application startup)\nsherlock_ai()\n\n# Get a logger for your module\nlogger = get_logger(__name__)\n\n# Use the logger\nlogger.info(\"Application started\")\nlogger.error(\"Something went wrong\")\n```\n\n**Default Log Files Created:**\nWhen you call `sherlock_ai()` with no arguments, it automatically creates a `logs/` directory with these files:\n- `app.log` - All INFO+ level logs from root logger\n- `errors.log` - Only ERROR+ level logs from any logger\n- `api.log` - Logs from `app.api` logger (empty unless you use this logger)\n- `database.log` - Logs from `app.core.dbConnection` logger\n- `services.log` - Logs from `app.services` logger \n- `performance.log` - Performance monitoring logs from your `@log_performance` decorators\n\n### Using Specific Loggers\n\n```python\nimport logging\nfrom sherlock_ai import sherlock_ai\n\nsherlock_ai()\n\n# Use specific loggers to populate their respective log files\napi_logger = logging.getLogger(\"app.api\")\ndb_logger = logging.getLogger(\"app.core.dbConnection\")\nservices_logger = logging.getLogger(\"app.services\")\n\n# These will go to their specific log files\napi_logger.info(\"API request received\") # \u2192 api.log\ndb_logger.info(\"Database query executed\") # \u2192 database.log\nservices_logger.info(\"Service operation done\") # \u2192 services.log\n```\n\n### Request ID Tracking\n\n```python\nfrom sherlock_ai.utils.helper import get_request_id, set_request_id\n\n# Set a request ID for the current context\nrequest_id = set_request_id(\"req-12345\")\n\n# Get current request ID for distributed tracing\ncurrent_id = get_request_id()\n```\n\n### Complete Application Example\n\n```python\nfrom sherlock_ai import sherlock_ai, get_logger, log_performance, PerformanceTimer\n\n# Initialize logging first\nsherlock_ai()\nlogger = get_logger(__name__)\n\n@log_performance\ndef main():\n logger.info(\"Application starting\")\n \n with PerformanceTimer(\"initialization\"):\n # Your initialization code\n pass\n \n logger.info(\"Application ready\")\n\nif __name__ == \"__main__\":\n main()\n```\n\n## Log Output Format\n\nThe package produces structured log messages with the following format:\n\n```\n{timestamp} - {request_id} - {logger_name} - {log_level} - {message_content}\n```\n\nWhere:\n- `{timestamp}`: Date and time of the log entry\n- `{request_id}`: Request ID set by `set_request_id()` (shows `-` if not set)\n- `{logger_name}`: Name of the logger (e.g., PerformanceLogger, MonitoringLogger)\n- `{log_level}`: Log level (INFO, ERROR, DEBUG, etc.)\n- `{message_content}`: The actual log message content\n\n### Performance Logs\n**Message Content Format:**\n```\nPERFORMANCE | {function_name} | {STATUS} | {execution_time}s | {additional_info}\n```\n\n**Examples:**\n```\n2025-07-05 19:19:11 - 07ca74ed - PerformanceLogger - INFO - PERFORMANCE | tests.test_fastapi.health_check | SUCCESS | 0.262s\n2025-07-05 21:13:03 - 2c4774b0 - PerformanceLogger - INFO - PERFORMANCE | my_module.api_call | ERROR | 2.456s | Connection timeout\n2025-07-05 19:20:15 - - - PerformanceLogger - INFO - PERFORMANCE | database_query | SUCCESS | 0.089s | Args: ('user123',) | Kwargs: {'limit': 10}\n```\n\n### Memory Monitoring Logs\n**Message Content Format:**\n```\nMEMORY | {function_name} | {STATUS} | {execution_time}s | Current: {current_memory} | Change: {memory_change} | Traced: {traced_memory}\n```\n\n**Examples:**\n```\n2025-07-05 19:19:11 - 07ca74ed - MonitoringLogger - INFO - MEMORY | tests.test_fastapi.health_check | SUCCESS | 0.261s | Current: 57.66MB | Change: +1.64MB | Traced: 24.33KB (Peak: 30.33KB)\n2025-07-05 21:15:22 - - - MonitoringLogger - INFO - MEMORY | data_processor | SUCCESS | 0.245s | Current: 45.67MB | Change: +12.34MB\n```\n\n### Resource Monitoring Logs\n**Message Content Format:**\n```\nRESOURCES | {function_name} | {STATUS} | {execution_time}s | CPU: {cpu_percent}% | Memory: {memory_usage} | Threads: {thread_count} | I/O: R:{read_bytes} W:{write_bytes}\n```\n\n**Examples:**\n```\n2025-07-05 19:19:11 - 07ca74ed - MonitoringLogger - INFO - RESOURCES | tests.test_fastapi.health_check | SUCCESS | 0.144s | CPU: 59.3% | Memory: 57.66MB (+1.63MB) | Threads: 9 | I/O: R:0.00B W:414.00B\n2025-07-05 21:13:03 - 2c4774b0 - MonitoringLogger - INFO - RESOURCES | api_handler | SUCCESS | 0.156s | CPU: 25.4% | Memory: 128.45MB (+5.23MB) | Threads: 12 | I/O: R:2.34MB W:1.12MB\n2025-07-05 19:25:30 - - - MonitoringLogger - INFO - RESOURCES | database_query | SUCCESS | 0.089s | CPU: 15.2% | Memory: 95.67MB (+0.12MB) | Threads: 8\n```\n\n### Request ID Usage\n\nTo include request IDs in your logs, use the `set_request_id()` function:\n\n```python\nfrom sherlock_ai import set_request_id, get_request_id\n\n# Set a request ID for the current context\nrequest_id = set_request_id(\"req-12345\") # Custom ID\n# or\nrequest_id = set_request_id() # Auto-generated ID (e.g., \"07ca74ed\")\n\n# Now all logs will include this request ID\n# When request ID is set: \"2025-07-05 19:19:11 - 07ca74ed - ...\"\n# When request ID is not set: \"2025-07-05 19:19:11 - - - ...\"\n```\n\n## Use Cases\n\n- **API Performance Monitoring**: Track response times for your web APIs with dedicated API logging\n- **Memory Leak Detection**: Monitor memory usage patterns to identify potential memory leaks\n- **Resource Optimization**: Analyze CPU, memory, and I/O usage to optimize application performance\n- **Database Query Optimization**: Monitor slow database operations with separate database logs\n- **Microservices Debugging**: Trace execution times across service boundaries with request ID tracking\n- **Algorithm Benchmarking**: Compare performance of different implementations using custom configurations\n- **Production Monitoring**: Get insights into your application's performance characteristics with production presets\n- **Memory-Intensive Applications**: Monitor memory usage in data processing, ML model training, and large dataset operations\n- **System Resource Analysis**: Track resource consumption patterns for capacity planning and scaling decisions\n- **Environment-Specific Logging**: Use different configurations for development, testing, and production\n- **Custom Log Management**: Create application-specific log files and directory structures\n- **Compliance & Auditing**: Separate error logs and performance logs for security and compliance requirements\n- **DevOps Integration**: Configure logging for containerized environments and CI/CD pipelines\n- **FastAPI Development**: Optimized for FastAPI auto-reload with no duplicate log entries during development\n- **Logger Organization**: Use predefined logger names with autocomplete support for better code maintainability\n- **Performance Profiling**: Comprehensive monitoring for identifying bottlenecks in CPU, memory, and I/O operations\n\n## Requirements\n\n- Python >= 3.8\n- **psutil** >= 5.8.0 (for memory and resource monitoring)\n- Standard library for basic performance monitoring\n\n## License\n\nThis project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.\n\n## Contributing\n\nContributions are welcome! Please feel free to submit a Pull Request.\n\n## Links\n\n- **Homepage**: [https://github.com/pranawmishra/sherlock-ai](https://github.com/pranawmishra/sherlock-ai)\n- **Repository**: [https://github.com/pranawmishra/sherlock-ai](https://github.com/pranawmishra/sherlock-ai)\n- **Issues**: [https://github.com/pranawmishra/sherlock-ai/issues](https://github.com/pranawmishra/sherlock-ai/issues)\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "A Python package for performance monitoring and logging utilities",
"version": "1.3.0",
"project_urls": {
"Homepage": "https://github.com/pranawmishra/sherlock-ai.git",
"Repository": "https://github.com/pranawmishra/sherlock-ai.git"
},
"split_keywords": [
"performance",
" monitoring",
" logging",
" debugging",
" profiling"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "858b191b49427b81a8562cc21c54086cda637393a99989e447f0eab5557db1fa",
"md5": "f6f6c2afdb393f6dc94cef78f11678e5",
"sha256": "4521ffa297f147d75c398ec1192a7915fadf96ea7ec326ebec37002884a81350"
},
"downloads": -1,
"filename": "sherlock_ai-1.3.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "f6f6c2afdb393f6dc94cef78f11678e5",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.8",
"size": 23938,
"upload_time": "2025-07-12T15:55:42",
"upload_time_iso_8601": "2025-07-12T15:55:42.255788Z",
"url": "https://files.pythonhosted.org/packages/85/8b/191b49427b81a8562cc21c54086cda637393a99989e447f0eab5557db1fa/sherlock_ai-1.3.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "bf75c7b66708c51a3d7e7a300217b3a392c35b911c916e2919c224ff2ae05c50",
"md5": "c313c1b67adf9b3f35e8111877e779de",
"sha256": "c9c8a4887afb99d2832037c53b6ced7dfc7c9c7d9741676dd0b3e129cff97044"
},
"downloads": -1,
"filename": "sherlock_ai-1.3.0.tar.gz",
"has_sig": false,
"md5_digest": "c313c1b67adf9b3f35e8111877e779de",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 30636,
"upload_time": "2025-07-12T15:55:43",
"upload_time_iso_8601": "2025-07-12T15:55:43.536258Z",
"url": "https://files.pythonhosted.org/packages/bf/75/c7b66708c51a3d7e7a300217b3a392c35b911c916e2919c224ff2ae05c50/sherlock_ai-1.3.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-12 15:55:43",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "pranawmishra",
"github_project": "sherlock-ai",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "sherlock-ai"
}