# Brownie Metadata Database
The Brownie Metadata Database is the central metadata store for the [Brownie](https://github.com/longyi-brownie/brownie) incident assistant platform. It provides enterprise-grade data management, access control, and operational capabilities for managing incident data, team configurations, and system metadata.
> **Version 0.1.0** - Complete database infrastructure with enterprise security, monitoring, and high availability features.
## Overview
Brownie is an AI-powered incident assistant that helps teams manage and resolve incidents more effectively. This metadata database serves as the backbone for storing and managing:
- **Team Configurations**: Team structures, roles, and permissions
- **Incident Metadata**: Incident types, priorities, and resolution data
- **Agent Configurations**: AI agent settings and behavior parameters
- **User Management**: User accounts, authentication, and access control
- **System Statistics**: Performance metrics and operational data
### Database Schema
The database includes the following core tables:
- `organizations` - Multi-tenant organization management
- `teams` - Team structures within organizations
- `users` - User accounts and authentication
- `incidents` - Incident tracking and metadata
- `agent_configs` - AI agent configuration settings
- `stats` - System performance and usage statistics
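A minimal sketch of how two of these tables relate, exercised against in-memory SQLite as a stand-in for PostgreSQL. The column names here are illustrative assumptions; the authoritative definitions live in `src/database/` and the Alembic migrations.

```python
# Hypothetical DDL for two of the core tables (organizations, teams).
# Column names are assumptions for illustration only.
import sqlite3

DDL = """
CREATE TABLE organizations (
    id         INTEGER PRIMARY KEY,
    name       TEXT NOT NULL UNIQUE,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE teams (
    id     INTEGER PRIMARY KEY,
    org_id INTEGER NOT NULL REFERENCES organizations(id),
    name   TEXT NOT NULL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
conn.execute("INSERT INTO organizations (name) VALUES ('acme')")
conn.execute("INSERT INTO teams (org_id, name) VALUES (1, 'sre')")
# Join a team back to its owning organization.
row = conn.execute(
    "SELECT o.name, t.name FROM teams t JOIN organizations o ON o.id = t.org_id"
).fetchone()
print(row)  # ('acme', 'sre')
```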
### System Architecture
```mermaid
graph TB
subgraph "Brownie Platform"
A[Brownie Agent] --> B[Config Service]
B --> C[FastAPI Gateway]
A --> C
C --> D[Metadata Database]
end
```
**Related Repositories:**
- [Brownie Core](https://github.com/longyi-brownie/brownie) - Main Brownie incident assistant
- [Brownie Config Service](https://github.com/longyi-brownie/brownie-config-service) - Configuration management service
- [Brownie Metadata FastAPI Server](https://github.com/longyi-brownie/brownie-metadata-api) - FastAPI server that exposes this database
## Quick Start
### Prerequisites
- Docker and Docker Compose
- Git
### 1. Clone and Setup
```bash
git clone https://github.com/longyi-brownie/brownie-metadata-database.git
cd brownie-metadata-database
```
### 2. Generate SSL Certificates
**Required before starting services** - Generate development certificates for SSL/TLS:
```bash
./scripts/setup-dev-certs.sh
```
This creates:
- `dev-certs/ca.crt` - Certificate Authority
- `dev-certs/server.crt` - PostgreSQL server certificate
- `dev-certs/server.key` - PostgreSQL server private key
- `dev-certs/client.crt` - Client certificate for database connections
- `dev-certs/client.key` - Client private key for database connections
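These paths can be wired into a `verify-full` connection string. A hedged sketch (the host, port, database, and user names mirror defaults used elsewhere in this README; adjust to your environment):

```python
# Build a libpq-style DSN that presents the generated dev client
# certificate and verifies the server against the dev CA.
from pathlib import Path

def build_dsn(cert_dir: str = "dev-certs",
              host: str = "localhost", port: int = 5432,
              db: str = "brownie_metadata",
              user: str = "brownie-fastapi-server") -> str:
    certs = Path(cert_dir)
    return (
        f"postgresql://{user}@{host}:{port}/{db}"
        f"?sslmode=verify-full"
        f"&sslrootcert={certs / 'ca.crt'}"
        f"&sslcert={certs / 'client.crt'}"
        f"&sslkey={certs / 'client.key'}"
    )

print(build_dsn())
```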
### 3. Start Database & Monitoring Services
```bash
docker compose up -d
```
This will start:
- PostgreSQL with SSL and certificate authentication
- Database migrations
- Redis for caching
- **Enterprise metrics sidecar** - Custom business & technical metrics
- **Prometheus** - Metrics collection and alerting
- **Grafana** - Enterprise dashboards ready for copy-paste
### 4. Verify Everything Works
```bash
# Check all services are running
docker compose ps
# Test database connection with certificates
docker compose exec postgres psql -U brownie-fastapi-server -d brownie_metadata -c "SELECT version();"
# Test Redis connection
docker compose exec redis redis-cli ping
# Test metrics collection
curl http://localhost:9091/metrics
# Access Grafana dashboards
open http://localhost:3000
# Login: admin/admin
```
### 5. Access Services
- **PostgreSQL**: localhost:5432 (certificate auth required)
- **Redis**: localhost:6379
- **Prometheus**: http://localhost:9090
- **Grafana**: http://localhost:3000 (admin/admin)
- **Custom Metrics**: http://localhost:9091/metrics
## 📁 Project Structure
```
brownie-metadata-database/
├── alembic/ # Database migrations
├── k8s/ # Kubernetes deployment configs
├── monitoring/ # Enterprise monitoring stack
│ ├── dashboards/ # Grafana dashboards
│ ├── alerts/ # Prometheus alerting rules
│ ├── provisioning/ # Grafana auto-configuration
│ └── README.md # Monitoring documentation
├── runbooks/ # Operational procedures
│ ├── RUNBOOK-*.md # Specific runbooks
│ └── README.md # Runbook index
├── scripts/ # Database setup scripts
│ ├── init-db.sql # Database initialization
│ ├── setup-dev-certs.sh # Certificate generation
│ ├── setup-postgres-ssl.sh # SSL configuration
│ ├── pg_hba.conf # PostgreSQL auth config
│ └── postgresql.conf # PostgreSQL server config
├── src/ # Core database code
│ ├── certificates.py # Server certificate management
│ └── database/ # SQLAlchemy models and connection
├── tests/ # Test suite
├── metrics_sidecar/ # Custom metrics collection
├── docker-compose.yml # Complete stack definition
├── Dockerfile # Database migration container
├── Dockerfile.metrics # Metrics sidecar container
└── README.md # This file
```
### Enterprise Monitoring Features
- ✅ **Custom Metrics Sidecar** - Collects database, Redis, and business metrics
- ✅ **Ready-to-Use Dashboards** - Copy-paste Grafana dashboards for enterprise customers
- ✅ **Alerting Rules** - Pre-configured alerts for database health and business metrics
- ✅ **SSL/TLS Configuration** - PostgreSQL starts with SSL enabled
- ✅ **Certificate Authentication** - Only clients with valid certificates can connect
- ✅ **User Creation** - `brownie-fastapi-server` user created automatically
- ✅ **Database Migrations** - Schema applied automatically
**📊 [Complete Monitoring Documentation](monitoring/README.md)**
**📚 [Operational Runbooks](runbooks/README.md)**
## Deployment

**Kubernetes (Production):**
```bash
# 1. Apply PostgreSQL configuration
kubectl apply -f k8s/postgres-config.yaml
# 2. Deploy with automated setup
kubectl apply -k k8s/
# PostgreSQL is automatically configured with:
# - pg_hba.conf for certificate auth
# - User creation and permissions
# - Certificate mounting
```
**Manual Setup (If Needed):**
```sql
-- Create the user whose name matches the certificate CN
-- (certificate authentication is enforced via pg_hba.conf, not CREATE USER)
CREATE USER "brownie-fastapi-server";
-- Grant necessary permissions
GRANT CONNECT ON DATABASE brownie_metadata TO "brownie-fastapi-server";
GRANT USAGE ON SCHEMA public TO "brownie-fastapi-server";
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO "brownie-fastapi-server";
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO "brownie-fastapi-server";
```
**What's Automated:**
- ✅ **User Creation** - `brownie-fastapi-server` user with certificate auth
- ✅ **Permissions** - All necessary database permissions
- ✅ **pg_hba.conf** - Certificate authentication configuration
- ✅ **Certificate Mounting** - Server certificates in containers
- ✅ **Future Tables** - Permissions for Alembic migrations
**Docker Compose (Development) - Detailed:**
1. **Clone and start the services:**
```bash
git clone https://github.com/longyi-brownie/brownie-metadata-database
cd brownie-metadata-database
docker compose up -d
```
2. **Verify the services are running:**
```bash
docker compose ps
```
3. **Check the application:**
```bash
curl http://localhost:8000/health
```
**Docker Infrastructure:**
```mermaid
graph TB
subgraph "Docker Compose"
A[PostgreSQL :5432]
B[Redis :6379]
C[Prometheus :9090]
D[Grafana :3000]
E[Backup Service]
F[Migration Service]
end
subgraph "External Storage"
G[S3/GCS/Azure]
end
E -->|Backup| G
F -->|Schema Updates| A
A -->|Metrics| C
C -->|Dashboards| D
style A fill:#e8f5e8
style E fill:#fff2cc
style F fill:#fff2cc
```
**Available Components:**
- ✅ **PostgreSQL** - Primary database
- ✅ **Redis** - Caching and sessions
- ✅ **Prometheus** - Metrics collection
- ✅ **Grafana** - Metrics visualization
- ✅ **Backup Service** - Automated database backups
- ✅ **Migration Service** - Database schema updates
**Future Components:**
- 🔄 **Read Replicas** - For read scaling (planned)
- 🔄 **Custom Metrics Scraper** - PostgreSQL-specific metrics (planned)
**Kubernetes (Production) - Detailed:**
1. **Deploy to Kubernetes:**
```bash
kubectl apply -k k8s/
```
2. **Check deployment status:**
```bash
kubectl get pods -n brownie-metadata
```
3. **Access the application:**
```bash
kubectl port-forward -n brownie-metadata svc/brownie-metadata-app 8000:8000
curl http://localhost:8000/health
```
**Kubernetes Architecture:**
```mermaid
graph TB
subgraph "Kubernetes Cluster"
subgraph "brownie-metadata namespace"
A[App Pods] --> B[PostgreSQL Primary]
A --> C[PostgreSQL Replicas]
A --> D[Redis Cluster]
A --> E[PgBouncer]
end
subgraph "Monitoring"
F[Prometheus] --> A
G[Grafana] --> F
H[AlertManager] --> F
end
subgraph "Backup"
I[Backup CronJob] --> J[Cloud Storage]
end
end
```
**Helm Charts (Advanced):**
For production deployments with custom configurations:
```bash
# Install with Helm
helm install brownie-metadata-db ./k8s/helm \
--namespace brownie-metadata \
--create-namespace \
--set database.replicas=3 \
--set app.replicas=5
# Scale the deployment
helm upgrade brownie-metadata-db ./k8s/helm \
--set app.replicas=10 \
--set database.patroni.replicas=5
```
## Configuration
### Environment Variables
**Database Configuration:**
```bash
DB_HOST=postgres
DB_PORT=5432
DB_NAME=brownie_metadata
DB_USER=brownie
DB_SSL_MODE=verify-full
# Certificates automatically read from /certs directory
```
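One way to consume these variables is a small typed loader. This is an illustrative sketch with the defaults shown above, not the project's actual configuration code:

```python
# Read the DB_* environment variables, falling back to the documented defaults.
import os
from dataclasses import dataclass

@dataclass
class DBConfig:
    host: str = "postgres"
    port: int = 5432
    name: str = "brownie_metadata"
    user: str = "brownie"
    ssl_mode: str = "verify-full"

    @classmethod
    def from_env(cls) -> "DBConfig":
        return cls(
            host=os.getenv("DB_HOST", cls.host),
            port=int(os.getenv("DB_PORT", str(cls.port))),
            name=os.getenv("DB_NAME", cls.name),
            user=os.getenv("DB_USER", cls.user),
            ssl_mode=os.getenv("DB_SSL_MODE", cls.ssl_mode),
        )
```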
**Certificate Management:**
```bash
# For development (local certificates)
LOCAL_CERT_DIR=dev-certs
# For production (Vault integration)
VAULT_ENABLED=true
VAULT_URL=https://vault.company.com
VAULT_TOKEN=your-vault-token
VAULT_CERT_PATH=secret/brownie-metadata/certs
```
**Backup Configuration:**
```bash
BACKUP_PROVIDER=s3
BACKUP_DESTINATION=my-backup-bucket/database
BACKUP_SCHEDULE=0 2 * * *
BACKUP_RETENTION_DAYS=30
```
**Monitoring Configuration:**
```bash
METRICS_ENABLED=true
METRICS_PORT=8001
LOG_LEVEL=INFO
```
### Kubernetes Configuration
**ConfigMap:**
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: brownie-metadata-config
data:
# Database configuration
DB_HOST: "postgres"
DB_PORT: "5432"
DB_NAME: "brownie_metadata"
DB_USER: "brownie"
DB_SSL_MODE: "verify-full"
# Application configuration
LOG_LEVEL: "INFO"
METRICS_ENABLED: "true"
```
**Secrets:**
```yaml
apiVersion: v1
kind: Secret
metadata:
name: brownie-metadata-secrets
type: Opaque
data:
# These are placeholders - certificates loaded from Vault or local files
database-client-cert: "PLACEHOLDER_DO_NOT_USE"
database-client-key: "PLACEHOLDER_DO_NOT_USE"
database-ca-cert: "PLACEHOLDER_DO_NOT_USE"
```
**Proper Secret Management:**
```bash
# Option 1: Create from local files (development)
kubectl create secret generic brownie-metadata-secrets \
--from-file=database-server-cert=dev-certs/server.crt \
--from-file=database-server-key=dev-certs/server.key \
--from-file=database-ca-cert=dev-certs/ca.crt
# Option 2: Use Vault (production)
# Certificates automatically loaded from Vault via CertificateManager
```
## Database Connection
### Connect via psql
**Docker Compose:**
```bash
docker exec -it brownie-metadata-postgres psql -U brownie -d brownie_metadata
```
**Kubernetes:**
```bash
kubectl exec -it -n brownie-metadata <postgres-pod> -- psql -U brownie -d brownie_metadata
```
### Test Database Connection
```sql
-- Check database version
SELECT version();
-- List all tables
\dt
-- Check table schemas
\d organizations
\d teams
\d users
\d incidents
-- Sample queries
SELECT COUNT(*) FROM organizations;
SELECT COUNT(*) FROM teams;
SELECT COUNT(*) FROM users;
SELECT COUNT(*) FROM incidents;
```
### API Access
**Health Check:**
```bash
curl http://localhost:8000/health
```
**API Documentation:**
```bash
# OpenAPI/Swagger UI
open http://localhost:8000/docs
# ReDoc documentation
open http://localhost:8000/redoc
```
## Access Control
### Database Authentication
**Client Certificate Authentication:**
- **No passwords** - Uses client certificates for authentication
- **Certificate CN**: `brownie-fastapi-server` (matches certificate Common Name)
- **mTLS Support**: Mutual TLS verification in production
- **Encrypted Connections**: All database traffic encrypted in transit
**Certificate Management:**
- **Development**: Local certificate files (gitignored)
- **Production**: Vault PKI automatic certificate generation
- **Rotation**: Automatic certificate rotation via Vault
### FastAPI Server Authentication
**User Authentication (Handled by FastAPI Server):**
- **JWT/OAuth**: User authentication and authorization
- **RBAC**: Role-based access control
- **Organization Scoping**: Multi-tenant data isolation
- **API Keys**: Service-to-service authentication
**Database Access:**
- **SQLAlchemy ORM**: Type-safe database queries
- **Connection Pooling**: Efficient database connections
- **Certificate-based**: All database access uses client certificates
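As a sketch of how pooling and certificate-based access might combine in SQLAlchemy (the pool sizes, the `/certs` path, and the helper names are assumptions, not the repository's actual code):

```python
# Pooled engine that presents the client certificate on every connection.
from sqlalchemy import create_engine

def ssl_connect_args(cert_dir: str = "/certs") -> dict:
    # libpq keyword arguments passed through to each new connection.
    return {
        "sslmode": "verify-full",
        "sslrootcert": f"{cert_dir}/ca.crt",
        "sslcert": f"{cert_dir}/client.crt",
        "sslkey": f"{cert_dir}/client.key",
    }

def make_engine(dsn: str):
    return create_engine(
        dsn,
        pool_size=10,        # steady-state connections held open
        max_overflow=20,     # extra connections allowed under burst load
        pool_pre_ping=True,  # test connections before handing them out
        pool_recycle=1800,   # recycle before server-side idle timeouts
        connect_args=ssl_connect_args(),
    )
```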
### Enterprise Features
For enterprise-grade access control including SSO, LDAP integration, advanced RBAC, and compliance features, please contact us at **info@brownie-ai.com** for licensing information.
## Production Deployment
### Metrics and Dashboards
The system includes comprehensive monitoring with Grafana dashboards and Prometheus metrics:
**Access Dashboards:**
```bash
# Grafana (admin/admin)
open http://localhost:3000
# Prometheus metrics
open http://localhost:9090
```
**Key Metrics:**
- Request rate and response times
- Database performance and connections
- User activity and organization metrics
- Incident volume and resolution times
- System resource utilization
**Screenshots** (dashboard JSON lives in `monitoring/dashboards/`):
- *Business metrics dashboard showing organizations, teams, users, and incident trends*
- *Database overview dashboard with connection stats, table sizes, and Redis metrics*
### Alerting
Enterprise-grade alerting with multiple notification channels:
**Alert Categories:**
- **Critical**: Service down, high error rates, SLA breaches
- **Warning**: Performance degradation, resource issues
- **Info**: User growth, unusual patterns
**Notification Channels:**
- PagerDuty integration
- Slack notifications
- Email alerts
- Microsoft Teams
### Backup and Disaster Recovery
Automated backup system with cloud storage support:
**Supported Providers:**
- AWS S3
- Google Cloud Storage
- Azure Blob Storage
- Local filesystem
**Backup Configuration:**
```bash
# Set backup parameters
export BACKUP_PROVIDER="s3"
export BACKUP_DESTINATION="my-backup-bucket/database"
export BACKUP_SCHEDULE="0 2 * * *" # Daily at 2 AM
export BACKUP_RETENTION_DAYS="30"
export BACKUP_ACCESS_KEY="your-access-key"
export BACKUP_SECRET_KEY="your-secret-key"
```
**Backup Operations:**
```bash
# Create backup
curl -X POST http://localhost:8000/backup/create
# List backups
curl http://localhost:8000/backup/list
# Restore backup
curl -X POST http://localhost:8000/backup/restore \
-H "Content-Type: application/json" \
-d '{"backup_name": "backup-2024-01-15"}'
```
**Recovery Time Objectives:**
- Local restore: < 5 minutes
- Cloud restore: < 15 minutes
- Cross-region: < 30 minutes
## Operations
### Database Migrations
See [Database Migration Runbook](runbooks/RUNBOOK-database-migration.md) for detailed procedures.
**Quick Commands:**
```bash
# Docker Compose
docker compose exec app alembic upgrade head
# Kubernetes
kubectl exec -it -n brownie-metadata <app-pod> -- alembic upgrade head
# Create new migration
alembic revision --autogenerate -m "Add new table"
alembic upgrade head
```
### Scaling Operations
See [Scaling Operations Runbook](runbooks/RUNBOOK-scaling-operations.md) for comprehensive scaling procedures.
**Quick Commands:**
```bash
# Scale application replicas
kubectl scale deployment brownie-metadata-app --replicas=5 -n brownie-metadata
# Add read replicas
kubectl scale statefulset patroni --replicas=3 -n brownie-metadata
# Deploy PgBouncer
kubectl apply -f k8s/pgbouncer.yaml
```
### Disaster Recovery
See [Disaster Recovery Runbook](runbooks/RUNBOOK-disaster-recovery.md) for complete recovery procedures.
**Quick Commands:**
```bash
# Restore from backup
kubectl exec -n brownie-metadata deployment/brownie-metadata-app -- python -c "
from src.backup.manager import BackupManager
from src.backup.config import BackupConfig
config = BackupConfig.from_env()
manager = BackupManager(config)
manager.restore_backup('backup_name')
"
```
### Database Sharding for Enterprise
See [Database Sharding Runbook](runbooks/RUNBOOK-database-sharding.md) for comprehensive sharding strategies and implementation.
**Quick Overview:**
- **Team-Based Sharding**: Route data by `team_id` for team isolation
- **Time-Based Sharding**: Partition by `created_at` for archival
- **Hybrid Sharding**: Combine team and time-based partitioning
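A hypothetical team-based router: hash `team_id` with a stable digest and pick one of N shard DSNs. The DSN list and function name are illustrative, not part of the codebase:

```python
# Route a team's data to a shard by stable hash of its team_id.
import hashlib

SHARDS = [
    "postgresql://shard0/brownie_metadata",
    "postgresql://shard1/brownie_metadata",
    "postgresql://shard2/brownie_metadata",
]

def shard_for_team(team_id: str, shards=SHARDS) -> str:
    # md5 keeps the mapping stable across processes and restarts,
    # unlike Python's built-in hash(), which is salted per process.
    digest = hashlib.md5(team_id.encode()).hexdigest()
    return shards[int(digest, 16) % len(shards)]
```

A plain modulo remaps most teams whenever the shard count changes; consistent hashing is the usual way to limit that churn.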
**Performance Tuning:**
- Database connection pooling
- Query optimization
- Index management
- Cache configuration
## Support
### Documentation
- **API Documentation**: [http://localhost:8000/docs](http://localhost:8000/docs)
- **Backup Guide**: [BACKUP.md](BACKUP.md)
- **Monitoring Guide**: [MONITORING.md](MONITORING.md)
### Getting Help
- **Issues**: [GitHub Issues](https://github.com/longyi-brownie/brownie-metadata-database/issues)
- **Discussions**: [GitHub Discussions](https://github.com/longyi-brownie/brownie-metadata-database/discussions)
- **Enterprise Support**: [info@brownie-ai.com](mailto:info@brownie-ai.com)
### Contributing
We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
---
**Brownie Metadata Database** - Enterprise metadata management for incident response
Raw data
{
"_id": null,
"home_page": null,
"name": "brownie-metadata-db",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.9",
"maintainer_email": null,
"keywords": "backup, brownie, database, metadata, postgresql, ssl",
"author": null,
"author_email": "Your Name <your.email@example.com>",
"download_url": "https://files.pythonhosted.org/packages/ce/a4/1713d7b7360890bdf8e0306f638847b0aa08b668cd0f4209404b255fcf39/brownie_metadata_db-0.1.0.tar.gz",
"platform": null,
"description": "# Brownie Metadata Database\n\n[](https://github.com/longyi-brownie/brownie-metadata-database)\n[](LICENSE)\n[](https://python.org)\n[](https://postgresql.org)\n\nThe Brownie Metadata Database is the central metadata store for the [Brownie](https://github.com/longyi-brownie/brownie) incident assistant platform. It provides enterprise-grade data management, access control, and operational capabilities for managing incident data, team configurations, and system metadata.\n\n> **Version 0.1.0** - Complete database infrastructure with enterprise security, monitoring, and high availability features.\n\n## Overview\n\nBrownie is an AI-powered incident assistant that helps teams manage and resolve incidents more effectively. This metadata database serves as the backbone for storing and managing:\n\n- **Team Configurations**: Team structures, roles, and permissions\n- **Incident Metadata**: Incident types, priorities, and resolution data\n- **Agent Configurations**: AI agent settings and behavior parameters\n- **User Management**: User accounts, authentication, and access control\n- **System Statistics**: Performance metrics and operational data\n\n### Database Schema\n\nThe database includes the following core tables:\n\n- `organizations` - Multi-tenant organization management\n- `teams` - Team structures within organizations\n- `users` - User accounts and authentication\n- `incidents` - Incident tracking and metadata\n- `agent_configs` - AI agent configuration settings\n- `stats` - System performance and usage statistics\n\n### System Architecture\n\n```mermaid\ngraph TB\n subgraph \"Brownie Platform\"\n A[Brownie Agent] --> B[Config Service]\n B --> C[FastAPI Gateway]\n A --> C\n C --> D[Metadata Database]\n end\n```\n\n**Related Repositories:**\n- [Brownie Core](https://github.com/longyi-brownie/brownie) - Main Brownie incident assistant\n- [Brownie Config Service](https://github.com/longyi-brownie/brownie-config-service) - Configuration management 
service\n- [Brownie Metadata FastApi Server](https://github.com/longyi-brownie/brownie-metadata-api) - This repository\n\n## Quick Start\n\n### Prerequisites\n\n- Docker and Docker Compose\n- Git\n\n### 1. Clone and Setup\n\n```bash\ngit clone https://github.com/longyi-brownie/brownie-metadata-database.git\ncd brownie-metadata-database\n```\n\n### 2. Generate SSL Certificates\n\n**Required before starting services** - Generate development certificates for SSL/TLS:\n\n```bash\n./scripts/setup-dev-certs.sh\n```\n\nThis creates:\n- `dev-certs/ca.crt` - Certificate Authority\n- `dev-certs/server.crt` - PostgreSQL server certificate \n- `dev-certs/server.key` - PostgreSQL server private key\n- `dev-certs/client.crt` - Client certificate for database connections\n- `dev-certs/client.key` - Client private key for database connections\n\n### 3. Start Database & Monitoring Services\n\n```bash\ndocker compose up -d\n```\n\nThis will start:\n- PostgreSQL with SSL and certificate authentication\n- Database migrations\n- Redis for caching\n- **Enterprise metrics sidecar** - Custom business & technical metrics\n- **Prometheus** - Metrics collection and alerting\n- **Grafana** - Enterprise dashboards ready for copy-paste\n\n### 4. Verify Everything Works\n\n```bash\n# Check all services are running\ndocker compose ps\n\n# Test database connection with certificates\ndocker compose exec postgres psql -U brownie-fastapi-server -d brownie_metadata -c \"SELECT version();\"\n\n# Test Redis connection\ndocker compose exec redis redis-cli ping\n\n# Test metrics collection\ncurl http://localhost:9091/metrics\n\n# Access Grafana dashboards\nopen http://localhost:3000\n# Login: admin/admin\n```\n\n### 5. 
Access Services\n\n- **PostgreSQL**: localhost:5432 (certificate auth required)\n- **Redis**: localhost:6379\n- **Prometheus**: http://localhost:9090\n- **Grafana**: http://localhost:3000 (admin/admin)\n- **Custom Metrics**: http://localhost:9091/metrics\n\n## \ud83d\udcc1 Project Structure\n\n```\nbrownie-metadata-database/\n\u251c\u2500\u2500 alembic/ # Database migrations\n\u251c\u2500\u2500 k8s/ # Kubernetes deployment configs\n\u251c\u2500\u2500 monitoring/ # Enterprise monitoring stack\n\u2502 \u251c\u2500\u2500 dashboards/ # Grafana dashboards\n\u2502 \u251c\u2500\u2500 alerts/ # Prometheus alerting rules\n\u2502 \u251c\u2500\u2500 provisioning/ # Grafana auto-configuration\n\u2502 \u2514\u2500\u2500 README.md # Monitoring documentation\n\u251c\u2500\u2500 runbooks/ # Operational procedures\n\u2502 \u251c\u2500\u2500 RUNBOOK-*.md # Specific runbooks\n\u2502 \u2514\u2500\u2500 README.md # Runbook index\n\u251c\u2500\u2500 scripts/ # Database setup scripts\n\u2502 \u251c\u2500\u2500 init-db.sql # Database initialization\n\u2502 \u251c\u2500\u2500 setup-dev-certs.sh # Certificate generation\n\u2502 \u251c\u2500\u2500 setup-postgres-ssl.sh # SSL configuration\n\u2502 \u251c\u2500\u2500 pg_hba.conf # PostgreSQL auth config\n\u2502 \u2514\u2500\u2500 postgresql.conf # PostgreSQL server config\n\u251c\u2500\u2500 src/ # Core database code\n\u2502 \u251c\u2500\u2500 certificates.py # Server certificate management\n\u2502 \u2514\u2500\u2500 database/ # SQLAlchemy models and connection\n\u251c\u2500\u2500 tests/ # Test suite\n\u251c\u2500\u2500 metrics_sidecar/ # Custom metrics collection\n\u251c\u2500\u2500 docker-compose.yml # Complete stack definition\n\u251c\u2500\u2500 Dockerfile # Database migration container\n\u251c\u2500\u2500 Dockerfile.metrics # Metrics sidecar container\n\u2514\u2500\u2500 README.md # This file\n```\n\n### Enterprise Monitoring Features\n\n- \u2705 **Custom Metrics Sidecar** - Collects database, Redis, and business metrics\n- \u2705 
**Ready-to-Use Dashboards** - Copy-paste Grafana dashboards for enterprise customers\n- \u2705 **Alerting Rules** - Pre-configured alerts for database health and business metrics\n- \u2705 **SSL/TLS Configuration** - PostgreSQL starts with SSL enabled\n- \u2705 **Certificate Authentication** - Only clients with valid certificates can connect\n- \u2705 **User Creation** - `brownie-fastapi-server` user created automatically\n- \u2705 **Database Migrations** - Schema applied automatically\n\n**\ud83d\udcca [Complete Monitoring Documentation](monitoring/README.md)** \n**\ud83d\udcda [Operational Runbooks](runbooks/README.md)**\n\n**Kubernetes (Production):**\n```bash\n# 1. Apply PostgreSQL configuration\nkubectl apply -f k8s/postgres-config.yaml\n\n# 2. Deploy with automated setup\nkubectl apply -k k8s/\n\n# PostgreSQL is automatically configured with:\n# - pg_hba.conf for certificate auth\n# - User creation and permissions\n# - Certificate mounting\n```\n\n**Manual Setup (If Needed):**\n```sql\n-- Create user that matches certificate CN\nCREATE USER \"brownie-fastapi-server\" WITH CERTIFICATE;\n\n-- Grant necessary permissions\nGRANT CONNECT ON DATABASE brownie_metadata TO \"brownie-fastapi-server\";\nGRANT USAGE ON SCHEMA public TO \"brownie-fastapi-server\";\nGRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO \"brownie-fastapi-server\";\nGRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO \"brownie-fastapi-server\";\n```\n\n**What's Automated:**\n- \u2705 **User Creation** - `brownie-fastapi-server` user with certificate auth\n- \u2705 **Permissions** - All necessary database permissions\n- \u2705 **pg_hba.conf** - Certificate authentication configuration\n- \u2705 **Certificate Mounting** - Server certificates in containers\n- \u2705 **Future Tables** - Permissions for Alembic migrations\n\n**Docker Compose (Development) - Detailed:**\n\n1. 
**Clone and start the services:**\n```bash\n git clone https://github.com/longyi-brownie/brownie-metadata-database\n cd brownie-metadata-database\n docker compose up -d\n```\n\n2. **Verify the services are running:**\n```bash\n docker compose ps\n```\n\n3. **Check the application:**\n```bash\n curl http://localhost:8000/health\n ```\n\n**Docker Infrastructure:**\n```mermaid\ngraph TB\n subgraph \"Docker Compose\"\n A[PostgreSQL :5432]\n B[Redis :6379]\n C[Prometheus :9090]\n D[Grafana :3000]\n E[Backup Service]\n F[Migration Service]\n end\n \n subgraph \"External Storage\"\n G[S3/GCS/Azure]\n end\n \n E -->|Backup| G\n F -->|Schema Updates| A\n A -->|Metrics| C\n C -->|Dashboards| D\n \n style A fill:#e8f5e8\n style E fill:#fff2cc\n style F fill:#fff2cc\n```\n\n**Available Components:**\n- \u2705 **PostgreSQL** - Primary database\n- \u2705 **Redis** - Caching and sessions\n- \u2705 **Prometheus** - Metrics collection\n- \u2705 **Grafana** - Metrics visualization\n- \u2705 **Backup Service** - Automated database backups\n- \u2705 **Migration Service** - Database schema updates\n\n**Future Components:**\n- \ud83d\udd04 **Read Replicas** - For read scaling (planned)\n- \ud83d\udd04 **Custom Metrics Scraper** - PostgreSQL-specific metrics (planned)\n\n**Kubernetes (Production) - Detailed:**\n\n1. **Deploy to Kubernetes:**\n```bash\n kubectl apply -k k8s/\n```\n\n2. **Check deployment status:**\n```bash\n kubectl get pods -n brownie-metadata\n ```\n\n3. 
**Access the application:**\n```bash\n kubectl port-forward -n brownie-metadata svc/brownie-metadata-app 8000:8000\n curl http://localhost:8000/health\n ```\n\n**Kubernetes Architecture:**\n```mermaid\ngraph TB\n subgraph \"Kubernetes Cluster\"\n subgraph \"brownie-metadata namespace\"\n A[App Pods] --> B[PostgreSQL Primary]\n A --> C[PostgreSQL Replicas]\n A --> D[Redis Cluster]\n A --> E[PgBouncer]\n end\n \n subgraph \"Monitoring\"\n F[Prometheus] --> A\n G[Grafana] --> F\n H[AlertManager] --> F\n end\n \n subgraph \"Backup\"\n I[Backup CronJob] --> J[Cloud Storage]\n end\n end\n```\n\n**Helm Charts (Advanced):**\n\nFor production deployments with custom configurations:\n\n```bash\n# Install with Helm\nhelm install brownie-metadata-db ./k8s/helm \\\n --namespace brownie-metadata \\\n --create-namespace \\\n --set database.replicas=3 \\\n --set app.replicas=5\n\n# Scale the deployment\nhelm upgrade brownie-metadata-db ./k8s/helm \\\n --set app.replicas=10 \\\n --set database.patroni.replicas=5\n```\n\n## Configuration\n\n### Environment Variables\n\n**Database Configuration:**\n```bash\nDB_HOST=postgres\nDB_PORT=5432\nDB_NAME=brownie_metadata\nDB_USER=brownie\nDB_SSL_MODE=verify-full\n# Certificates automatically read from /certs directory\n```\n\n\n**Certificate Management:**\n```bash\n# For development (local certificates)\nLOCAL_CERT_DIR=dev-certs\n\n# For production (Vault integration)\nVAULT_ENABLED=true\nVAULT_URL=https://vault.company.com\nVAULT_TOKEN=your-vault-token\nVAULT_CERT_PATH=secret/brownie-metadata/certs\n```\n\n**Backup Configuration:**\n```bash\nBACKUP_PROVIDER=s3\nBACKUP_DESTINATION=my-backup-bucket/database\nBACKUP_SCHEDULE=0 2 * * *\nBACKUP_RETENTION_DAYS=30\n```\n\n**Monitoring Configuration:**\n```bash\nMETRICS_ENABLED=true\nMETRICS_PORT=8001\nLOG_LEVEL=INFO\n```\n\n### Kubernetes Configuration\n\n**ConfigMap:**\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: brownie-metadata-config\ndata:\n # Database configuration\n DB_HOST: 
\"postgres\"\n DB_PORT: \"5432\"\n DB_NAME: \"brownie_metadata\"\n DB_USER: \"brownie\"\n DB_SSL_MODE: \"verify-full\"\n \n # Application configuration\n LOG_LEVEL: \"INFO\"\n METRICS_ENABLED: \"true\"\n```\n\n**Secrets:**\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: brownie-metadata-secrets\ntype: Opaque\ndata:\n # These are placeholders - certificates loaded from Vault or local files\n database-client-cert: \"PLACEHOLDER_DO_NOT_USE\"\n database-client-key: \"PLACEHOLDER_DO_NOT_USE\"\n database-ca-cert: \"PLACEHOLDER_DO_NOT_USE\"\n```\n\n**Proper Secret Management:**\n```bash\n# Option 1: Create from local files (development)\nkubectl create secret generic brownie-metadata-secrets \\\n --from-file=database-server-cert=dev-certs/server.crt \\\n --from-file=database-server-key=dev-certs/server.key \\\n --from-file=database-ca-cert=dev-certs/ca.crt\n\n# Option 2: Use Vault (production)\n# Certificates automatically loaded from Vault via CertificateManager\n```\n\n## Database Connection\n\n### Connect via psql\n\n**Docker Compose:**\n```bash\ndocker exec -it brownie-metadata-postgres psql -U brownie -d brownie_metadata\n```\n\n**Kubernetes:**\n```bash\nkubectl exec -it -n brownie-metadata <postgres-pod> -- psql -U brownie -d brownie_metadata\n```\n\n### Test Database Connection\n\n```sql\n-- Check database version\nSELECT version();\n\n-- List all tables\n\\dt\n\n-- Check table schemas\n\\d organizations\n\\d teams\n\\d users\n\\d incidents\n\n-- Sample queries\nSELECT COUNT(*) FROM organizations;\nSELECT COUNT(*) FROM teams;\nSELECT COUNT(*) FROM users;\nSELECT COUNT(*) FROM incidents;\n```\n\n### API Access\n\n**Health Check:**\n```bash\ncurl http://localhost:8000/health\n```\n\n**API Documentation:**\n```bash\n# OpenAPI/Swagger UI\nopen http://localhost:8000/docs\n\n# ReDoc documentation\nopen http://localhost:8000/redoc\n```\n\n## Access Control\n\n### Database Authentication\n\n**Client Certificate Authentication:**\n- **No passwords** - Uses client 
certificates for authentication\n- **Certificate CN**: `brownie-fastapi-server` (matches certificate Common Name)\n- **mTLS Support**: Mutual TLS verification in production\n- **Encrypted Connections**: All database traffic encrypted in transit\n\n**Certificate Management:**\n- **Development**: Local certificate files (gitignored)\n- **Production**: Vault PKI automatic certificate generation\n- **Rotation**: Automatic certificate rotation via Vault\n\n### FastAPI Server Authentication\n\n**User Authentication (Handled by FastAPI Server):**\n- **JWT/OAuth**: User authentication and authorization\n- **RBAC**: Role-based access control\n- **Organization Scoping**: Multi-tenant data isolation\n- **API Keys**: Service-to-service authentication\n\n**Database Access:**\n- **SQLAlchemy ORM**: Type-safe database queries\n- **Connection Pooling**: Efficient database connections\n- **Certificate-based**: All database access uses client certificates\n\n### Enterprise Features\n\nFor enterprise-grade access control including SSO, LDAP integration, advanced RBAC, and compliance features, please contact us at **info@brownie-ai.com** for licensing information.\n\n## Production Deployment\n\n### Metrics and Dashboards\n\nThe system includes comprehensive monitoring with Grafana dashboards and Prometheus metrics:\n\n**Access Dashboards:**\n```bash\n# Grafana (admin/admin)\nopen http://localhost:3000\n\n# Prometheus metrics\nopen http://localhost:9090\n```\n\n**Key Metrics:**\n- Request rate and response times\n- Database performance and connections\n- User activity and organization metrics\n- Incident volume and resolution times\n- System resource utilization\n\n**Screenshots**: \n\n\n*Business metrics dashboard showing organizations, teams, users, and incident trends*\n\n\n*Database overview dashboard with connection stats, table sizes, and Redis metrics*\n\n### Alerting\n\nEnterprise-grade alerting with multiple notification channels:\n\n**Alert Categories:**\n- **Critical**: 
Service down, high error rates, SLA breaches\n- **Warning**: Performance degradation, resource issues\n- **Info**: User growth, unusual patterns\n\n**Notification Channels:**\n- PagerDuty integration\n- Slack notifications\n- Email alerts\n- Microsoft Teams\n\n### Backup and Disaster Recovery\n\nAutomated backup system with cloud storage support:\n\n**Supported Providers:**\n- AWS S3\n- Google Cloud Storage\n- Azure Blob Storage\n- Local filesystem\n\n**Backup Configuration:**\n```bash\n# Set backup parameters\nexport BACKUP_PROVIDER=\"s3\"\nexport BACKUP_DESTINATION=\"my-backup-bucket/database\"\nexport BACKUP_SCHEDULE=\"0 2 * * *\" # Daily at 2 AM\nexport BACKUP_RETENTION_DAYS=\"30\"\nexport BACKUP_ACCESS_KEY=\"your-access-key\"\nexport BACKUP_SECRET_KEY=\"your-secret-key\"\n```\n\n**Backup Operations:**\n```bash\n# Create backup\ncurl -X POST http://localhost:8000/backup/create\n\n# List backups\ncurl http://localhost:8000/backup/list\n\n# Restore backup\ncurl -X POST http://localhost:8000/backup/restore \\\n -H \"Content-Type: application/json\" \\\n -d '{\"backup_name\": \"backup-2024-01-15\"}'\n```\n\n**Recovery Time Objectives:**\n- Local restore: < 5 minutes\n- Cloud restore: < 15 minutes\n- Cross-region: < 30 minutes\n\n## Operations\n\n### Database Migrations\nSee [Database Migration Runbook](docs/RUNBOOK-database-migration.md) for detailed procedures.\n\n**Quick Commands:**\n```bash\n# Docker Compose\ndocker compose exec app alembic upgrade head\n\n# Kubernetes\nkubectl exec -it -n brownie-metadata <app-pod> -- alembic upgrade head\n\n# Create new migration\nalembic revision --autogenerate -m \"Add new table\"\nalembic upgrade head\n```\n\n### Scaling Operations\nSee [Scaling Operations Runbook](docs/RUNBOOK-scaling-operations.md) for comprehensive scaling procedures.\n\n**Quick Commands:**\n```bash\n# Scale application replicas\nkubectl scale deployment brownie-metadata-app --replicas=5 -n brownie-metadata\n\n# Add read replicas\nkubectl scale 
statefulset patroni --replicas=3 -n brownie-metadata\n\n# Deploy PgBouncer\nkubectl apply -f k8s/pgbouncer.yaml\n```\n\n### Disaster Recovery\nSee [Disaster Recovery Runbook](docs/RUNBOOK-disaster-recovery.md) for complete recovery procedures.\n\n**Quick Commands:**\n```bash\n# Restore from backup\nkubectl exec -n brownie-metadata deployment/brownie-metadata-app -- python -c \"\nfrom src.backup.manager import BackupManager\nfrom src.backup.config import BackupConfig\nconfig = BackupConfig.from_env()\nmanager = BackupManager(config)\nmanager.restore_backup('backup_name')\n\"\n```\n\n### Database Sharding for Enterprise\n\nSee [Database Sharding Runbook](docs/RUNBOOK-database-sharding.md) for comprehensive sharding strategies and implementation.\n\n**Quick Overview:**\n- **Team-Based Sharding**: Route data by `team_id` for team isolation\n- **Time-Based Sharding**: Partition by `created_at` for archival\n- **Hybrid Sharding**: Combine team and time-based partitioning\n\n**Performance Tuning:**\n- Database connection pooling\n- Query optimization\n- Index management\n- Cache configuration\n\n\n## Support\n\n### Documentation\n\n- **API Documentation**: [http://localhost:8000/docs](http://localhost:8000/docs)\n- **Backup Guide**: [BACKUP.md](BACKUP.md)\n- **Monitoring Guide**: [MONITORING.md](MONITORING.md)\n\n### Getting Help\n\n- **Issues**: [GitHub Issues](https://github.com/longyi-brownie/brownie-metadata-database/issues)\n- **Discussions**: [GitHub Discussions](https://github.com/longyi-brownie/brownie-metadata-database/discussions)\n- **Enterprise Support**: [info@brownie-ai.com](mailto:info@brownie-ai.com)\n\n### Contributing\n\nWe welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.\n\n## License\n\nThis project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.\n\n---\n\n**Brownie Metadata Database** - Enterprise metadata management for incident response",
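
As an appendix to the connection and access-control material above: the certificate-based, password-less database access can be sketched as a libpq-style DSN built from the same `DB_*` values shown in the ConfigMap. This is a minimal illustration, not the repository's actual configuration API; the `dev-certs/client.*` paths and the `build_dsn` helper are assumptions for the example.

```python
# Sketch: assemble a verify-full PostgreSQL DSN using client certificates,
# mirroring the DB_* keys from the ConfigMap above. Paths and helper name
# are illustrative assumptions, not part of this repository's API.

def build_dsn(host: str, port: int, dbname: str, user: str,
              sslcert: str, sslkey: str, sslrootcert: str) -> str:
    """Return a libpq key=value DSN using certificate auth (no password)."""
    params = {
        "host": host,
        "port": port,
        "dbname": dbname,
        "user": user,
        "sslmode": "verify-full",    # verify the server cert *and* hostname
        "sslcert": sslcert,          # client certificate (CN maps to the DB user)
        "sslkey": sslkey,            # client private key
        "sslrootcert": sslrootcert,  # CA used to verify the server
    }
    return " ".join(f"{k}={v}" for k, v in params.items())

dsn = build_dsn("postgres", 5432, "brownie_metadata", "brownie",
                "dev-certs/client.crt", "dev-certs/client.key", "dev-certs/ca.crt")
print(dsn)
```

A DSN in this form can be passed to `psycopg2.connect(dsn)` or embedded in a SQLAlchemy URL; note that no `password=` key appears, matching the "no passwords" model described under Access Control.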
"bugtrack_url": null,
"license": null,
"summary": "A comprehensive database management library for Brownie metadata with backup, SSL, and monitoring capabilities",
"version": "0.1.0",
"project_urls": null,
"split_keywords": [
"backup",
" brownie",
" database",
" metadata",
" postgresql",
" ssl"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "babcf8184ccc5911f6b1150b30b45170ea876e36f69860d4c830186290631b77",
"md5": "7705c1f3bb312c3f10de0294c5782621",
"sha256": "99170f067e9a829770659ab187e0c6c03d3cf0d5b1119f1b60837992aba75cf0"
},
"downloads": -1,
"filename": "brownie_metadata_db-0.1.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "7705c1f3bb312c3f10de0294c5782621",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.9",
"size": 41184,
"upload_time": "2025-10-23T06:00:53",
"upload_time_iso_8601": "2025-10-23T06:00:53.947330Z",
"url": "https://files.pythonhosted.org/packages/ba/bc/f8184ccc5911f6b1150b30b45170ea876e36f69860d4c830186290631b77/brownie_metadata_db-0.1.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "cea41713d7b7360890bdf8e0306f638847b0aa08b668cd0f4209404b255fcf39",
"md5": "6012de85297264be7856d638fae6a139",
"sha256": "014f3aafdeb9f387ac5fe161d42235095a0ad160deaaa9d59b1148662fe604f4"
},
"downloads": -1,
"filename": "brownie_metadata_db-0.1.0.tar.gz",
"has_sig": false,
"md5_digest": "6012de85297264be7856d638fae6a139",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 28988,
"upload_time": "2025-10-23T06:00:55",
"upload_time_iso_8601": "2025-10-23T06:00:55.450643Z",
"url": "https://files.pythonhosted.org/packages/ce/a4/1713d7b7360890bdf8e0306f638847b0aa08b668cd0f4209404b255fcf39/brownie_metadata_db-0.1.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-10-23 06:00:55",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "brownie-metadata-db"
}