# CoreLogger - AI Interaction Monitoring & Analysis System
## Overview
CoreLogger is a sophisticated AI conversation monitoring and analysis system designed for tracking, analyzing, and understanding AI interactions. Built with production-grade features, it provides comprehensive tools for capturing AI conversations, detecting emotions, and analyzing interaction patterns using advanced NLP techniques.
**Primary Focus**: Automatic monitoring and analysis of AI conversations with real-time emotion detection and comprehensive logging.
## Features
### Core Functionality
- **AI Interaction Monitoring**: Automatic logging of AI conversations with emotion detection
- **Real-time Chat Interface**: Interactive conversations with AI providers (Web + CLI)
- **Advanced NLP Analysis**: Sentiment analysis, novelty detection, complexity scoring
- **Web Dashboard**: Full-featured web interface for AI interaction monitoring
- **Conversation Analytics**: Comprehensive analysis of AI interaction patterns
- **CLI Export System**: Data export in JSON/CSV formats (CLI only)
- **Real-time Streaming**: Token-by-token AI responses with Rich console rendering
### AI Providers
- **Google Gemini** - Advanced language understanding
- **OpenAI GPT** - Industry-leading conversational AI (planned)
- **Anthropic Claude** - Thoughtful and nuanced responses (planned)
- **Mock Provider** - Development and testing support
### Advanced NLP Features
- **Emotion Detection**: 9-category emotion classification for user messages and AI responses
- **Importance Scoring**: Multi-factor importance calculation using NLP metrics (a rough sketch follows this list)
- **Conversation Categorization**: Automatic classification (user-input, ai-response, conversation)
- **Sentiment Analysis**: Emotional tone and strength analysis
- **Complexity Scoring**: Text complexity based on vocabulary and structure
- **Keyword Extraction**: Automatic keyword identification and density analysis
- **Conversation Context**: Three-tier logging for complete interaction tracking
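
As a rough illustration of the multi-factor importance idea mentioned above, the snippet below combines a few simple NLP signals into one score in [0, 1]. The weights and helpers are illustrative assumptions, not the shipped algorithm; TextBlob is a listed project dependency.
```python
# Illustrative sketch only: the real nlp_analyzer.py may use different
# factors and weights. TextBlob's polarity lies in [-1, 1].
from textblob import TextBlob


def importance_score(text: str) -> float:
    words = text.split()
    sentiment_strength = abs(TextBlob(text).sentiment.polarity)  # 0..1
    vocabulary_richness = min(len(set(words)) / 50.0, 1.0)       # crude complexity proxy
    length_factor = min(len(words) / 100.0, 1.0)                 # longer texts weigh more
    return round(0.4 * sentiment_strength + 0.3 * vocabulary_richness + 0.3 * length_factor, 3)


print(importance_score("This breakthrough completely changes our deployment plan."))
```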
### Web Dashboard
- **Dark Theme Interface**: GitHub-style responsive design optimized for readability
- **AI Interaction Dashboard**: Overview of recent conversations and system statistics
- **Live Chat Interface**: Real-time AI conversation with automatic logging
- **Conversation History**: Browse and search through AI interaction logs
- **Emotion Analytics**: Visual representation of emotion patterns in conversations
- **Category Filtering**: Filter by user-input, ai-response, or complete conversations
- **Real-time Statistics**: Live updates of interaction counts and patterns
### CLI Features
- **Interactive AI Chat**: Full-featured chat with multiple AI providers
- **Automatic Logging**: All conversations automatically saved with metadata
- **Rich Formatting**: Beautiful console output with colors, tables, and progress indicators
- **Streaming Support**: Real-time AI response streaming
- **Conversation History**: Context-aware multi-turn conversations
- **Data Export**: Export conversations in JSON/CSV format
- **Manual Logging**: Traditional thought logging capabilities
- **NLP Analysis**: Analyze individual conversations with detailed metrics
- **Bulk Operations**: Recalculate importance scores for existing entries
## Installation
### Prerequisites
- Python 3.10+
- pip (Python package installer)
### Quick Setup
```bash
# Clone the repository
git clone https://github.com/MrBrightsidedev/CoreLogger.git
cd CoreLogger
# Install dependencies
pip install -r requirements.txt
# Set up environment variables (for AI providers)
cp .env.example .env
# Edit .env with your API keys (optional - works with mock provider)
# Initialize database (automatic on first run)
python corelogger.py --help
# Start CLI chat
python corelogger.py chat --model gemini
# Start web interface
python main.py
```

Access the web dashboard at `http://localhost:8000/dashboard`.
### Environment Configuration
Create a `.env` file with your API keys (optional - system works with mock providers):
```env
# AI Provider API Keys (Optional - works without for demo/testing)
GEMINI_API_KEY=your_gemini_api_key_here
OPENAI_API_KEY=your_openai_api_key_here
# Database Configuration (automatic)
DATABASE_URL=sqlite:///./corelogger.db
# Application Settings
LOG_LEVEL=INFO
```
## Usage Guide
### Command Line Interface
#### AI Chat (Primary Feature)
```bash
# Start interactive AI chat with Gemini
python corelogger.py chat --model gemini
# Use mock provider (no API key needed)
python corelogger.py chat --model mock
# Chat with conversation history and streaming
python corelogger.py chat --model gemini --history --stream
```
#### Manual Thought Logging (Traditional CLI Features)
```bash
# Log a simple thought manually
python corelogger.py log "Interesting observation about AI behavior"
# Log with metadata
python corelogger.py log "Planning new features" \
  --category idea \
  --tag development,ai \
  --emotion excited \
  --importance 0.8
```
#### View and Analyze Conversations
```bash
# List recent AI interactions
python corelogger.py list --page 1 --size 10
# Filter by emotion or category
python corelogger.py list --emotion happy --category ai-response
python corelogger.py list --search "interesting topic"
# Export conversation data
python corelogger.py export --format json --output my_conversations.json
python corelogger.py export --format csv --category conversation
# Analyze specific interactions with NLP
python corelogger.py analyze <conversation-id> --detailed
```
### Web Interface
#### Starting the Web Server
```bash
# Start the FastAPI web server
python main.py
# Or with uvicorn directly
uvicorn main:app --reload --port 8000
# Access the dashboard
# http://localhost:8000/dashboard
```
#### Web Features
- **Dashboard**: Overview of recent AI interactions and statistics
- **Live Chat**: Real-time AI conversation interface with automatic logging
- **Conversation History**: Browse through all AI interactions with filtering
- **Emotion Analytics**: Visual representation of conversation emotions
- **Dark Theme**: Optimized interface for extended usage
- **Real-time Updates**: Live statistics and conversation logging
**Note**: Export functionality will be added in future updates. Currently available through CLI only.
## Architecture
### Project Structure
```
CoreLogger/
├── cli/                  # Command-line interface
│   └── main.py           # CLI commands and AI chat interface
├── web/                  # Web interface
│   ├── main.py           # FastAPI server configuration
│   ├── routes.py         # Web routes and AI chat API
│   └── templates/        # Jinja2 HTML templates
├── chat/                 # AI chat system
│   ├── interface.py      # Chat interface management
│   └── providers/        # AI provider implementations
├── services/             # Core business logic
│   ├── logger.py         # Conversation logging service
│   ├── exporter.py       # Data export functionality (CLI)
│   ├── formatter.py      # Console output formatting
│   └── nlp_analyzer.py   # NLP analysis engine
├── db/                   # Database layer
│   ├── session.py        # Database session management
│   └── models.py         # SQLAlchemy models
├── models/               # Pydantic data models
│   └── thought.py        # API data structures
├── corelogger.py         # CLI entry point
└── main.py               # Web server entry point
```
### Key Components
#### Emotion Detection Engine
CoreLogger automatically detects emotions in both user messages and AI responses:
```python
# 9-category emotion classification:
# happy, excited, confident, frustrated, confused,
# anxious, calm, sad, neutral

# Example detections:
# "This is amazing!"        → excited
# "I'm not sure about this" → confused
# "That worked perfectly"   → happy
# "Let me think about it"   → calm
```
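
The detector itself lives in `services/nlp_analyzer.py`. As a hedged sketch of the general idea, a keyword-lookup classifier might look like the following; the keyword table and function name are illustrative assumptions, not the shipped rules.
```python
# Hypothetical keyword-based emotion detection; the real analyzer may use
# different keywords, rules, or models.
EMOTION_KEYWORDS = {
    "excited": ["amazing", "awesome", "incredible", "can't wait"],
    "confused": ["not sure", "unclear", "don't understand", "confusing"],
    "happy": ["worked perfectly", "great", "thanks", "wonderful"],
    "calm": ["let me think", "no rush", "take our time"],
    "frustrated": ["annoying", "broken again", "why won't"],
}


def detect_emotion(text: str) -> str:
    """Return the first emotion whose keywords appear in the text, else 'neutral'."""
    lowered = text.lower()
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return emotion
    return "neutral"


print(detect_emotion("This is amazing!"))         # excited
print(detect_emotion("I'm not sure about this"))  # confused
```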
#### AI Chat Integration
Real-time conversation with automatic logging:
```python
# Web Interface: /chat endpoint
# CLI Interface: python corelogger.py chat --model gemini
# All conversations automatically logged with:
# - User message (user-input category)
# - AI response (ai-response category)
# - Complete conversation (conversation category)
# - Emotion detection for each message
# - Importance scoring and NLP analysis
```
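
A minimal sketch of how one chat turn could expand into the three logged entries described above; the helper names are hypothetical (the actual logging service lives in `services/logger.py`).
```python
# Hypothetical helpers illustrating the three-tier logging idea.
from datetime import datetime, timezone


def log_entry(category: str, content: str, tags: list[str]) -> dict:
    """Build one record; emotion and importance are filled in by the NLP analyzer."""
    return {
        "category": category,   # user-input, ai-response, or conversation
        "content": content,
        "tags": tags,
        "emotion": None,        # set by emotion detection
        "importance": None,     # set by importance scoring
        "timestamp": datetime.now(timezone.utc),
    }


def handle_chat_turn(user_message: str, ai_response: str, provider: str) -> list[dict]:
    """One chat turn yields three entries: user input, AI response, full exchange."""
    tags = ["chat", provider]
    return [
        log_entry("user-input", user_message, tags),
        log_entry("ai-response", ai_response, tags),
        log_entry("conversation", f"User: {user_message}\nAI: {ai_response}", tags),
    ]
```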
#### AI Provider System
Extensible provider system with built-in fallbacks:
```python
# Currently supported:
# - Google Gemini (with API key)
# - Mock Provider (no API key needed)
# - Graceful fallback with helpful error messages
# Usage in CLI:
python corelogger.py chat --model gemini
python corelogger.py chat --model mock
# Usage in Web:
# Automatic provider selection based on available API keys
# User-friendly error messages when API keys are missing
```
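
As a hedged sketch of the provider abstraction in `chat/providers/`, an interface plus the no-key mock fallback might look like this; class and function names are assumptions, and the real Gemini provider (a wrapper around `google-generativeai`) is not shown.
```python
# Hypothetical provider interface; the actual classes may differ.
from abc import ABC, abstractmethod


class AIProvider(ABC):
    """Common interface implemented by every chat provider."""

    @abstractmethod
    def generate(self, prompt: str) -> str:
        """Return a complete response for the given prompt."""


class MockProvider(AIProvider):
    """No-API-key provider used for development and testing."""

    def generate(self, prompt: str) -> str:
        return f"[mock] You said: {prompt}"


def get_provider(name: str, api_key: str | None = None) -> AIProvider:
    """Gracefully fall back to the mock provider when no usable key exists."""
    if name != "mock" and not api_key:
        print(f"No API key for '{name}'; falling back to the mock provider.")
        return MockProvider()
    if name == "mock":
        return MockProvider()
    raise NotImplementedError(f"Real provider '{name}' omitted from this sketch")


print(get_provider("mock").generate("hello"))  # [mock] You said: hello
```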
#### Database Schema
Three-tier conversation logging system:
```python
# Database automatically stores:
class ThoughtModel:
    id: UUID              # Unique identifier
    category: str         # user-input, ai-response, conversation
    content: str          # Message or conversation content
    tags: List[str]       # Automatic tags (chat, provider, etc.)
    emotion: str          # Detected emotion (9 categories)
    importance: float     # NLP-calculated importance score
    timestamp: datetime   # When the interaction occurred
```
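
For illustration, a SQLAlchemy 2.0-style declaration of that table could look like the sketch below; the real model in `db/models.py` may differ in column types, defaults, and indexes.
```python
# Hedged sketch of the thoughts table; not the project's actual model.
import uuid
from datetime import datetime, timezone

from sqlalchemy import JSON, Float, String, Text
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class Thought(Base):
    __tablename__ = "thoughts"

    id: Mapped[str] = mapped_column(String(36), primary_key=True,
                                    default=lambda: str(uuid.uuid4()))
    category: Mapped[str] = mapped_column(String(32))    # user-input, ai-response, conversation
    content: Mapped[str] = mapped_column(Text)
    tags: Mapped[list] = mapped_column(JSON, default=list)   # e.g. ["chat", "gemini"]
    emotion: Mapped[str | None] = mapped_column(String(32))  # one of the 9 categories
    importance: Mapped[float | None] = mapped_column(Float)  # NLP-calculated score
    timestamp: Mapped[datetime] = mapped_column(default=lambda: datetime.now(timezone.utc))
```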
## Current Capabilities
### Core Features (Fully Implemented)
- **AI Chat Interface** (CLI + Web)
- **Automatic Conversation Logging**
- **9-Category Emotion Detection**
- **Dark Theme Web Dashboard**
- **Real-time Statistics**
- **NLP Analysis & Importance Scoring**
- **Data Export** (CLI only)
- **Rich Console Formatting**
- **Multiple AI Provider Support**
### Planned Features
- **Web Export Functionality**
- **Advanced Conversation Analytics**
- **Conversation Search & Filtering**
- **Data Visualization Charts**
- **OpenAI & Claude Provider Integration**
## Configuration
### Environment Variables
CoreLogger uses environment variables for configuration:
```env
# AI Provider API Keys (Optional)
GEMINI_API_KEY=your_gemini_api_key_here
# Database (Auto-configured)
DATABASE_URL=sqlite:///./corelogger.db
# Application Settings
LOG_LEVEL=INFO
```
### Configuration Files
The system automatically handles:
- Database initialization
- Table creation
- Default settings
- Error handling and fallbacks
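
A hedged sketch of what that automatic setup typically amounts to with SQLAlchemy (engine creation plus `create_all()` on startup); the actual wiring in `db/session.py` may be organised differently.
```python
# Hedged sketch of automatic database initialization.
from sqlalchemy import create_engine
from sqlalchemy.orm import DeclarativeBase, sessionmaker

DATABASE_URL = "sqlite:///./corelogger.db"  # the documented default

engine = create_engine(DATABASE_URL, echo=False)
SessionLocal = sessionmaker(bind=engine)


def init_db(base: type[DeclarativeBase]) -> None:
    """Create any missing tables; safe to call on every startup."""
    base.metadata.create_all(bind=engine)
```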
### API Key Setup
```bash
# Option 1: Environment variable
export GEMINI_API_KEY="your_key_here"
# Option 2: .env file
echo "GEMINI_API_KEY=your_key_here" > .env
# Option 3: CLI parameter
python corelogger.py chat --model gemini --api-key "your_key_here"
# No API key needed for testing
python corelogger.py chat --model mock
```
## Testing
```bash
# Run all tests
pytest
# Run with coverage
pytest --cov=corelogger
# Test specific components
pytest tests/test_logger.py
pytest tests/test_models.py
```
## Quick Start Examples
### 1. Test the System (No API Key Needed)
```bash
# Clone and setup
git clone <repo-url>
cd CoreLogger
pip install -r requirements.txt
# Try the CLI with mock AI
python corelogger.py chat --model mock
# Start web dashboard
python main.py
# Visit http://localhost:8000/dashboard
```
### 2. Use with Gemini AI
```bash
# Set API key
export GEMINI_API_KEY="your_key_here"
# Chat in CLI
python corelogger.py chat --model gemini
# Use web interface with real AI
python main.py
# Visit http://localhost:8000/chat
```
### 3. Analyze Your Conversations
```bash
# View recent interactions
python corelogger.py list --size 5
# Export your data
python corelogger.py export --format json --output my_ai_conversations.json
# Analyze specific conversation
python corelogger.py analyze <conversation-id>
```
## Contributing
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
### Development Setup
```bash
# Install dependencies
pip install -r requirements.txt
pip install -r requirements-dev.txt
# Run tests before committing
pytest
# Format code
black .
isort .
```
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Acknowledgments
- **FastAPI**: Modern Python web framework for the dashboard
- **Typer**: Beautiful CLI framework with Rich integration
- **Rich**: Rich text and beautiful console formatting
- **SQLAlchemy**: Database ORM for conversation storage
- **Google Generative AI**: Gemini AI model integration
- **Jinja2**: Template engine for web interface
- **Bootstrap**: Frontend framework for responsive design
## Support
For support, please open an issue on GitHub.
---
**CoreLogger** - Monitor and analyze your AI interactions with sophisticated emotion detection and NLP analysis.
## Setup
1. Clone the repository:
```bash
git clone <repository-url>
cd CoreLogger
```
2. Create a virtual environment:
```bash
python -m venv venv
# On Windows
venv\Scripts\activate
# On macOS/Linux
source venv/bin/activate
```
3. Install dependencies:
```bash
pip install -r requirements.txt
```
4. Initialize the database:
```bash
python corelogger.py --help # This will create the database
```
## Usage
### Command Line Interface
#### Basic Logging Commands
```bash
# Log a reflection (default category)
python corelogger.py log "I'm thinking about the nature of consciousness"
# Log with specific category and metadata
python corelogger.py log "I see a red car" --category perception --tag visual --emotion curious --importance 0.7
# Use convenience commands
python corelogger.py perception "The environment appears calm"
python corelogger.py reflect "This situation requires careful analysis" --emotion contemplative
python corelogger.py decide "I will proceed with option A" --importance 0.9
python corelogger.py tick "System checkpoint reached"
python corelogger.py error "Memory allocation failed" --tag system --importance 0.8
```
#### Listing and Searching
```bash
# List recent thoughts
python corelogger.py list
# List with filters
python corelogger.py list --category reflection --tag important
python corelogger.py list --emotion curious --min-importance 0.5
python corelogger.py list --search "consciousness" --page 1 --size 5
# Display as table
python corelogger.py list --table
# Show statistics
python corelogger.py list --stats
```
#### Thought Management
```bash
# Show specific thought
python corelogger.py show <thought-id>
# Update thought
python corelogger.py update <thought-id> --content "Updated content" --add-tag modified
# Delete thought (with confirmation)
python corelogger.py delete <thought-id>
# Force delete without confirmation
python corelogger.py delete <thought-id> --force
```
#### Interactive Mode
```bash
# Start interactive logging session
python corelogger.py interactive
```
### REST API
#### Starting the Server
```bash
# Start development server
python main.py
# Or with custom settings
uvicorn main:app --host 0.0.0.0 --port 8000 --reload
```
#### API Endpoints
The API provides the following endpoints:
- `GET /api/v1/health` - Health check
- `POST /api/v1/thoughts` - Create a thought
- `GET /api/v1/thoughts` - List thoughts with filtering
- `GET /api/v1/thoughts/{id}` - Get specific thought
- `PUT /api/v1/thoughts/{id}` - Update thought
- `DELETE /api/v1/thoughts/{id}` - Delete thought
**Convenience endpoints:**
- `POST /api/v1/thoughts/perception` - Log perception
- `POST /api/v1/thoughts/reflection` - Log reflection
- `POST /api/v1/thoughts/decision` - Log decision
- `POST /api/v1/thoughts/tick` - Log system tick
- `POST /api/v1/thoughts/error` - Log error
#### API Examples
```bash
# Create a thought
curl -X POST "http://localhost:8000/api/v1/thoughts" \
  -H "Content-Type: application/json" \
  -d '{
        "category": "reflection",
        "content": "API testing thoughts",
        "tags": ["api", "test"],
        "emotion": "focused",
        "importance": 0.8
      }'
# List thoughts with filters
curl "http://localhost:8000/api/v1/thoughts?category=reflection&page=1&page_size=10"
# Quick logging with convenience endpoints
curl -X POST "http://localhost:8000/api/v1/thoughts/perception?content=I%20observe%20changes&tags=visual"
```
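
The same endpoints can be exercised from Python with `httpx`, which is already a project dependency; the exact response shapes are assumptions.
```python
# Calling the documented REST endpoints from Python.
import httpx

BASE_URL = "http://localhost:8000/api/v1"

# Create a thought (same payload as the curl example above)
created = httpx.post(
    f"{BASE_URL}/thoughts",
    json={
        "category": "reflection",
        "content": "API testing thoughts",
        "tags": ["api", "test"],
        "emotion": "focused",
        "importance": 0.8,
    },
)
print(created.status_code, created.json())

# List reflections, first page
listing = httpx.get(
    f"{BASE_URL}/thoughts",
    params={"category": "reflection", "page": 1, "page_size": 10},
)
print(listing.json())
```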
#### API Documentation
When the server is running, visit:
- Swagger UI: `http://localhost:8000/docs`
- ReDoc: `http://localhost:8000/redoc`
## Configuration
CoreLogger uses environment variables for configuration. Create a `.env` file:
```env
# Database
DATABASE_URL=sqlite:///./corelogger.db
DATABASE_ECHO=false
# API Server
API_HOST=localhost
API_PORT=8000
API_RELOAD=true
# Logging
LOG_LEVEL=INFO
# Features
ENABLE_EMOTIONS=true
ENABLE_IMPORTANCE_SCORING=true
MAX_CONTENT_LENGTH=10000
DEFAULT_IMPORTANCE=0.5
```
### Configuration Options
| Variable | Default | Description |
|----------|---------|-------------|
| `DATABASE_URL` | `sqlite:///./corelogger.db` | Database connection string |
| `DATABASE_ECHO` | `false` | Enable SQL query logging |
| `API_HOST` | `localhost` | API server host |
| `API_PORT` | `8000` | API server port |
| `LOG_LEVEL` | `INFO` | Python logging level |
| `ENABLE_EMOTIONS` | `true` | Enable emotion tracking |
| `ENABLE_IMPORTANCE_SCORING` | `true` | Enable importance scores |
| `MAX_CONTENT_LENGTH` | `10000` | Maximum thought content length |
| `DEFAULT_IMPORTANCE` | `0.5` | Default importance when not specified |
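
Since `pydantic-settings` is a project dependency, a settings class mirroring the table above could look like the following sketch; the real `config.py` may differ.
```python
# Hedged sketch of environment-driven configuration.
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    database_url: str = "sqlite:///./corelogger.db"
    database_echo: bool = False
    api_host: str = "localhost"
    api_port: int = 8000
    api_reload: bool = True
    log_level: str = "INFO"
    enable_emotions: bool = True
    enable_importance_scoring: bool = True
    max_content_length: int = 10000
    default_importance: float = 0.5


settings = Settings()  # values in .env or the environment override the defaults
print(settings.database_url)
```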
## Thought Schema
Each thought has the following structure:
```python
{
    "id": "uuid4",                        # Unique identifier
    "timestamp": "2024-01-01T12:00:00Z",  # Creation time
    "category": "reflection",             # One of: perception, reflection, decision, tick, error
    "content": "Thought content...",      # Main thought text
    "tags": ["tag1", "tag2"],             # List of tags
    "emotion": "curious",                 # Optional emotional state
    "importance": 0.7                     # Optional importance score (0.0-1.0)
}
```
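
A hedged Pydantic sketch of the same schema; the actual models in `models/thought.py` may use different class names and validators.
```python
# Illustrative Pydantic model matching the documented thought schema.
from datetime import datetime, timezone
from typing import Literal, Optional
from uuid import UUID, uuid4

from pydantic import BaseModel, Field


class Thought(BaseModel):
    id: UUID = Field(default_factory=uuid4)
    timestamp: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
    category: Literal["perception", "reflection", "decision", "tick", "error"]
    content: str = Field(min_length=1, max_length=10000)
    tags: list[str] = Field(default_factory=list)
    emotion: Optional[str] = None
    importance: Optional[float] = Field(default=None, ge=0.0, le=1.0)


thought = Thought(category="reflection", content="Thinking about consciousness", importance=0.7)
print(thought.model_dump_json(indent=2))
```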
### Categories
- **perception**: Observations and sensory input
- **reflection**: Analysis and contemplation
- **decision**: Choices and determinations
- **tick**: System events and checkpoints
- **error**: Problems and failures
## Development
### Project Structure
```
corelogger/
├── cli/              # CLI commands and interface
├── api/              # FastAPI routes and endpoints
├── db/               # Database models and session management
├── services/         # Business logic and formatting
├── models/           # Pydantic schemas
├── tests/            # Test suite
├── config.py         # Configuration management
├── corelogger.py     # CLI entry point
├── main.py           # API entry point
└── README.md
```
### Running Tests
```bash
# Run all tests
pytest
# Run with coverage
pytest --cov=. --cov-report=html
# Run specific test file
pytest tests/test_logger.py
# Run with verbose output
pytest -v
```
### Code Quality
```bash
# Format code
black .
# Sort imports
isort .
# Type checking
mypy .
```
## Future Development
### Planned Enhancements
- **Web Export**: Direct export functionality from web interface
- **Advanced Analytics**: Conversation pattern analysis and visualization
- **Additional AI Providers**: OpenAI GPT and Anthropic Claude integration
- **Conversation Search**: Full-text search across AI interactions
- **Data Visualization**: Charts and graphs for interaction patterns
- **API Endpoints**: RESTful API for third-party integrations
### Extensibility
The modular design allows easy extension:
- **Custom AI Providers**: Add new AI service integrations
- **Enhanced Emotion Detection**: More sophisticated emotion classification
- **Custom Analytics**: Additional NLP analysis metrics
- **Export Formats**: New data export options
- **UI Themes**: Additional interface themes and customization
## Contributing
1. Fork the repository
2. Create a feature branch
3. Write tests for new functionality
4. Ensure all tests pass
5. Submit a pull request
### Development Guidelines
- Follow PEP 8 style guidelines
- Write comprehensive docstrings
- Include type annotations
- Test new functionality thoroughly
- Use descriptive commit messages
## Current Status
- **Version**: 1.0.0 (Production Ready)
- **Status**: Fully Functional
### Completed Features
- CLI AI chat with emotion detection
- Web dashboard with real-time updates
- Automatic conversation logging
- 9-category emotion classification
- NLP analysis and importance scoring
- Data export (CLI)
- Dark theme web interface
- Multiple AI provider support (Gemini + Mock)
### In Development
- Web export functionality
- Advanced conversation analytics
- Additional AI provider integrations
## Version History
### v1.0.0 (Current)
- Production-ready AI conversation monitoring
- Complete emotion detection system
- Web and CLI interfaces fully functional
- Automatic database logging
- NLP analysis and importance scoring
### Future Versions
- v1.1.0: Web export functionality
- v1.2.0: OpenAI and Claude provider integration
- v1.3.0: Advanced analytics and visualization
---
**CoreLogger** - AI Interaction Monitoring Made Simple