# OpenPerformance
A comprehensive ML Performance Engineering Platform for optimizing and monitoring machine learning workloads.
[CI](https://github.com/llamasearchai/OpenPerformance/actions) · [Releases](https://github.com/llamasearchai/OpenPerformance/releases) · [Packages](https://github.com/llamasearchai/OpenPerformance/packages) · [PyPI](https://pypi.org/project/openperformance/)
## Features
- **Hardware Monitoring**: Real-time CPU, memory, and GPU monitoring
- **Performance Analysis**: AI-powered optimization recommendations
- **Distributed Training**: Advanced distributed optimization algorithms
- **CLI Interface**: Comprehensive command-line tools
- **REST API**: Full-featured API with authentication
- **Cross-Platform**: Works on macOS, Linux, and Windows
- **Production Ready**: All tests passing (43/43)
## Quick Start
### Installation
```bash
# Install from PyPI
pip install openperformance

# Or install from source
git clone https://github.com/llamasearchai/OpenPerformance.git
cd OpenPerformance
pip install -e .
```
### Basic Usage
```bash
# Check system information
mlperf info

# Run performance analysis
mlperf optimize --framework pytorch --batch-size 32

# Start API server
python -m uvicorn python.mlperf.api.main:app --host 0.0.0.0 --port 8000
```
## CLI Commands
```bash
mlperf --help        # Show all available commands
mlperf info          # Display system hardware information
mlperf version       # Show platform version
mlperf benchmark     # Run performance benchmarks
mlperf profile       # Profile Python scripts
mlperf optimize      # Optimize ML workloads
mlperf gpt           # AI-powered shell assistance
mlperf chat          # Chat with ML performance AI agents
```
## API Endpoints
- `GET /health` - System health check
- `GET /system/metrics` - Real-time system metrics
- `GET /system/hardware` - Detailed hardware information
- `POST /analyze/performance` - Performance analysis and optimization
- `GET /admin/system/status` - Admin system status
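These endpoints return JSON and can be exercised with any HTTP client. A minimal sketch using only the standard library (the base URL matches the uvicorn command in Basic Usage; the response schemas are not documented here, so the decoded payloads are treated as opaque dicts):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # default from the uvicorn command above

def endpoint(path: str, base: str = BASE_URL) -> str:
    """Build a full URL for one of the documented endpoints."""
    return base + path

def get_json(path: str) -> dict:
    """GET an endpoint and decode its JSON body."""
    with urllib.request.urlopen(endpoint(path)) as resp:
        return json.load(resp)

# With the API server running locally:
# health = get_json("/health")
# metrics = get_json("/system/metrics")
```

The authenticated routes (e.g. `/admin/system/status`) would additionally need an `Authorization` header; how the token is issued is not shown here.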
## Hardware Monitoring
The platform provides comprehensive hardware monitoring:
- **CPU**: Core count, frequency, usage percentage
- **Memory**: Total, used, available memory with usage statistics
- **GPU**: NVIDIA GPU detection, memory usage, utilization metrics
- **System**: Architecture, platform information
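As a rough illustration, a subset of this information is reachable from the Python standard library alone; the platform's own collectors presumably use richer backends (such as psutil for memory statistics and NVML for GPUs), which this sketch does not attempt:

```python
import os
import platform

def system_summary() -> dict:
    """Collect a small subset of the metrics above using only the stdlib."""
    return {
        "platform": platform.system(),       # e.g. "Linux", "Darwin", "Windows"
        "architecture": platform.machine(),  # e.g. "x86_64", "arm64"
        "cpu_count": os.cpu_count(),         # logical core count
    }

summary = system_summary()
```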
## Performance Analysis
Get AI-powered optimization recommendations for your ML workloads:
```python
from mlperf.optimization.distributed import DistributedOptimizer
# Initialize optimizer
optimizer = DistributedOptimizer(framework="pytorch")

# Get optimization recommendations
recommendations = optimizer.optimize_model_parallel(
    model_size_gb=10.0,
    gpu_count=4,
)
```
## Development
### Setup Development Environment
```bash
# Clone repository
git clone https://github.com/llamasearchai/OpenPerformance.git
cd OpenPerformance

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -e .
pip install -r dev-requirements.txt
```
### Running Tests
```bash
# Run all tests
python -m pytest tests/ -v

# Run with coverage
python -m pytest tests/ --cov=python/mlperf --cov-report=html

# Run specific test categories
python -m pytest tests/test_hardware.py -v
python -m pytest tests/test_integration.py -v
```
### Code Quality
```bash
# Linting
flake8 python/ tests/

# Type checking
mypy python/mlperf/

# Security checks
bandit -r python/
safety check
```
## Docker
### Build and Run
```bash
# Build Docker image
docker build -t openperformance .

# Run container
docker run -p 8000:8000 openperformance

# Run with Docker Compose
docker-compose up -d
```
### Docker Compose Services
- **API Server**: FastAPI application on port 8000
- **Database**: PostgreSQL for data persistence
- **Redis**: Caching and rate limiting
- **Monitoring**: Prometheus and Grafana
## Configuration
### Environment Variables
```bash
# Database
DATABASE_URL=postgresql://user:pass@localhost:5432/openperformance

# Redis
REDIS_URL=redis://localhost:6379

# Security
SECRET_KEY=your-secret-key
JWT_SECRET_KEY=your-jwt-secret

# API Keys
OPENAI_API_KEY=your-openai-api-key

# Logging
LOG_LEVEL=INFO
LOG_FILE=logs/openperformance.log
```
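Inside the application these variables would typically be read once at startup. A hedged sketch of that pattern (the variable names come from the list above; the fallback values are illustrative, not the platform's actual defaults):

```python
import os

def load_settings(env=os.environ) -> dict:
    """Read configuration from the environment, with illustrative defaults."""
    return {
        "database_url": env.get("DATABASE_URL", "sqlite:///openperformance.db"),
        "redis_url": env.get("REDIS_URL", "redis://localhost:6379"),
        "secret_key": env.get("SECRET_KEY", ""),
        "log_level": env.get("LOG_LEVEL", "INFO"),
        "log_file": env.get("LOG_FILE", "logs/openperformance.log"),
    }

settings = load_settings()
```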
### Configuration Files
- `config.env` - Environment configuration
- `alembic.ini` - Database migration configuration
- `pyproject.toml` - Project metadata and dependencies
## Architecture
```
OpenPerformance/
├── python/mlperf/     # Main Python package
│   ├── api/           # FastAPI application
│   ├── auth/          # Authentication and security
│   ├── cli/           # Command-line interface
│   ├── hardware/      # Hardware monitoring
│   ├── optimization/  # Performance optimization
│   ├── utils/         # Utilities and helpers
│   └── workers/       # Background workers
├── tests/             # Test suite
├── docker/            # Docker configuration
├── k8s/               # Kubernetes manifests
├── scripts/           # Utility scripts
└── docs/              # Documentation
```
## Contributing
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
### Development Guidelines
- Follow PEP 8 style guidelines
- Write comprehensive tests
- Update documentation
- Ensure all tests pass
- Add type hints where appropriate
## Testing
The platform includes comprehensive testing:
- **Unit Tests**: 43 tests covering all core functionality
- **Integration Tests**: Full workflow testing
- **Performance Tests**: Benchmarking and profiling
- **Security Tests**: Vulnerability scanning
```bash
# Run all tests
python -m pytest tests/ -v

# Run with coverage
python -m pytest tests/ --cov=python/mlperf --cov-report=html

# Run performance benchmarks
python -m pytest tests/performance/ -v
```
## Security
- JWT-based authentication
- Role-based access control
- Rate limiting
- Input validation
- Secure password hashing
- CORS protection
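To make the JWT-based flow concrete, here is a minimal HS256-style sign/verify sketch built from the standard library. It is illustrative only: a production deployment (this platform included, presumably) should use a maintained JWT library, and the payload and secret shown are made up.

```python
import base64
import hashlib
import hmac
import json

def _b64(raw: bytes) -> bytes:
    """URL-safe base64 without padding, as used in JWT segments."""
    return base64.urlsafe_b64encode(raw).rstrip(b"=")

def sign_token(payload: dict, secret: str) -> str:
    """Produce a header.payload.signature token signed with HMAC-SHA256."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (
        _b64(json.dumps(header, separators=(",", ":")).encode())
        + b"."
        + _b64(json.dumps(payload, separators=(",", ":")).encode())
    )
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return (signing_input + b"." + _b64(sig)).decode()

def verify_token(token: str, secret: str) -> bool:
    """Recompute the signature over header.payload and compare in constant time."""
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(sig, _b64(expected).decode())
```

Rate limiting, input validation, and the other items above are separate layers (typically middleware) and are not sketched here.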
## Performance
- Optimized for production workloads
- Efficient memory usage
- Fast API response times
- Scalable architecture
- Real-time monitoring
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Support
- **Documentation**: [GitHub Wiki](https://github.com/llamasearchai/OpenPerformance/wiki)
- **Issues**: [GitHub Issues](https://github.com/llamasearchai/OpenPerformance/issues)
- **Discussions**: [GitHub Discussions](https://github.com/llamasearchai/OpenPerformance/discussions)
## Acknowledgments
- Built with FastAPI, PyTorch, and modern Python tooling
- Inspired by MLPerf and other performance engineering tools
- Community contributions welcome
## Roadmap
- [ ] Web dashboard
- [ ] Advanced analytics
- [ ] Cloud integration
- [ ] Real-time monitoring
- [ ] Additional ML frameworks
- [ ] Performance benchmarking suite