aegis-vision

Name: aegis-vision
Version: 0.2.11
Summary: Cloud-native computer vision model training toolkit for Aegis AI
Upload time: 2025-10-25 04:00:53
Requires Python: >=3.9
License: MIT
Keywords: computer-vision, yolo, model-training, kaggle, wandb, object-detection
# Aegis Vision

> Cloud-native computer vision model training toolkit for Aegis AI

[![PyPI version](https://badge.fury.io/py/aegis-vision.svg)](https://badge.fury.io/py/aegis-vision)
[![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

## Overview

**Aegis Vision** is a streamlined toolkit for training computer vision models in cloud environments (Kaggle, Colab, etc.) with built-in support for:

- 🎯 **YOLO Models** (v8, v9, v10, v11) - Object detection training
- 📊 **Wandb Integration** - Experiment tracking and visualization
- 🔄 **COCO Format** - Dataset conversion and handling
- ☁️ **Cloud-Optimized** - Designed for Kaggle/Colab workflows
- 📦 **Model Export** - ONNX, CoreML, OpenVINO, TensorRT, TFLite

## Installation

### Standard Installation

```bash
# Basic installation
pip install aegis-vision

# With Kaggle support
pip install aegis-vision[kaggle]

# Development installation
pip install aegis-vision[dev]

# All features
pip install aegis-vision[all]
```

### Headless Environments (Docker, CI/CD)

The package uses `opencv-python-headless` by default, which works in both GUI and headless environments:

```bash
# Standard installation (works in all environments)
pip install aegis-vision
```

No special configuration is needed; the same package works in Docker containers, CI/CD systems, and desktop GUI environments.
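
To sanity-check the OpenCV install in a headless container:

```bash
# Should print the OpenCV version without raising a display or libGL error
python -c "import cv2; print(cv2.__version__)"
```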

### Nvidia DGX / High-Performance GPU Systems

For NVIDIA DGX Spark or other systems with the latest NVIDIA GPUs (Blackwell architecture), installation is the same:

```bash
# Standard installation with automatic environment checking
pip install aegis-vision

# Login and start (agent will auto-check and fix environment)
aegis-agent login
aegis-agent start
```

The agent automatically:
- Detects environment issues such as NumPy and PyTorch incompatibilities (see the sketch below)
- Explains what is wrong and why
- Offers one-click fixes
- Starts the agent once the fixes are applied
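
For illustration, a minimal sketch of the kind of environment check described above; the real agent logic (and any function names) may differ:

```python
# Hypothetical sketch of an environment check; not the agent's actual code.
def check_environment() -> list[str]:
    issues = []
    try:
        import numpy  # noqa: F401  # an ImportError here often means an ABI mismatch
        import torch
        if not torch.cuda.is_available():
            issues.append("PyTorch cannot see a CUDA device "
                          "(CPU-only build or driver mismatch)")
    except ImportError as exc:
        issues.append(f"Missing or incompatible dependency: {exc}")
    return issues

for issue in check_environment():
    print(f"[env] {issue}")
```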

See [`QUICKSTART_DGX.txt`](QUICKSTART_DGX.txt) for a detailed guide.

### GPU Detection & Support

The Aegis Vision training agent includes comprehensive GPU detection that supports modern NVIDIA architectures:

#### Supported GPU Architectures

| Architecture | Compute Capability | GPU Examples | Status |
|---|---|---|---|
| Volta | 7.0 | V100, Titan V | ✓ Supported |
| Turing | 7.5 | RTX 2080, RTX 2080 Ti, Titan RTX | ✓ Supported |
| Ampere | 8.0-8.6 | A100, RTX 3090, RTX 3080 | ✓ Supported |
| Ada | 8.9 | RTX 4090, RTX 4080, L40S | ✓ Supported |
| Hopper | 9.0 | H100, H200 | ✓ Supported |
| Blackwell | 10.0 | B100, B200 | ✓ Supported |

#### GPU Detection Methods

The agent uses a dual-detection approach for maximum reliability (sketched in code after this list):

1. **PyTorch Detection** (Primary)
   - Queries `torch.cuda.is_available()` and device properties
   - Provides compute capability and device memory
   - Fastest detection method

2. **nvidia-smi Fallback** (Secondary)
   - Runs `nvidia-smi` command for GPU discovery
   - Detects GPUs even if PyTorch CUDA runtime is unavailable
   - Captures the NVIDIA driver version and the CUDA toolkit version (via `nvcc`)
   - Handles edge cases: PyTorch built without CUDA, mismatched CUDA versions, etc.
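
A minimal sketch of this dual-detection flow, assuming nothing beyond `torch` and the `nvidia-smi` CLI (`detect_gpus` is a hypothetical name, not part of the aegis-vision API):

```python
# Illustrative sketch of dual GPU detection; not the agent's actual code.
import shutil
import subprocess

def detect_gpus():
    """Prefer PyTorch's CUDA runtime; fall back to nvidia-smi."""
    try:
        import torch
        if torch.cuda.is_available():
            gpus = []
            for i in range(torch.cuda.device_count()):
                props = torch.cuda.get_device_properties(i)
                gpus.append({
                    "name": props.name,
                    "memory_gb": round(props.total_memory / 1024**3, 1),
                    "compute_capability": f"{props.major}.{props.minor}",
                })
            return {"method": "pytorch", "gpus": gpus}
    except ImportError:
        pass  # PyTorch absent or broken; try nvidia-smi instead

    if shutil.which("nvidia-smi"):
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            gpus = [
                {"name": name.strip(), "memory_gb": round(float(mem) / 1024, 1)}
                for name, mem in
                (line.split(",") for line in result.stdout.strip().splitlines())
            ]
            return {"method": "nvidia-smi", "gpus": gpus}
    return {"method": "none", "gpus": []}
```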

#### Check GPU Detection

```bash
# Show detailed GPU information
aegis-agent info

# Example output with H100:
# 🎮 GPU Information:
#   Detection Method: PyTorch
#   CUDA Version: 12.1
#   Driver Version: 550.120
#   GPU 0:
#     Name: NVIDIA H100 80GB
#     Memory: 80.0 GB
#     Compute Capability: 9.0
```

#### CUDA Architecture Auto-Configuration

The agent automatically configures optimal CUDA architectures for training:

```bash
TORCH_CUDA_ARCH_LIST="7.0 7.5 8.0 8.6 8.9 9.0 10.0+PTX"
```

This includes:
- **PTX** flag for forward compatibility with future GPU architectures
- All major consumer and data center GPUs
- Optimal compilation for the target system
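
As a sketch of how such auto-configuration might look (a hypothetical helper; the agent's real implementation is not shown here):

```python
# Hypothetical helper for auto-configuring the CUDA arch list.
import os

import torch

DEFAULT_ARCH_LIST = "7.0 7.5 8.0 8.6 8.9 9.0 10.0+PTX"

def configure_cuda_arch_list() -> str:
    # Respect an explicit user override (see the next code block).
    if "TORCH_CUDA_ARCH_LIST" in os.environ:
        return os.environ["TORCH_CUDA_ARCH_LIST"]
    if torch.cuda.is_available():
        major, minor = torch.cuda.get_device_capability(0)
        # Compile only for the local device, plus PTX for forward compatibility.
        arch_list = f"{major}.{minor}+PTX"
    else:
        arch_list = DEFAULT_ARCH_LIST
    os.environ["TORCH_CUDA_ARCH_LIST"] = arch_list
    return arch_list
```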

Custom architectures can be set via an environment variable:

```bash
# Force specific GPU architecture
export TORCH_CUDA_ARCH_LIST="9.0 10.0+PTX"
aegis-agent start
```

#### Troubleshooting GPU Detection

If no GPU is detected despite NVIDIA GPUs being installed, work through these checks:

```bash
# 1. Verify NVIDIA driver is installed
nvidia-smi

# 2. Check CUDA version
nvcc --version

# 3. View detailed system info
aegis-agent info

# 4. Check CUDA compatibility
aegis-agent check-env
```

If detection shows "CPU Only" but a GPU is available:
- The installed PyTorch may have been built without CUDA support (you can confirm this directly, as shown below)
- The agent will automatically fall back to the `nvidia-smi` detection method
- Check that your driver and CUDA toolkit versions are compatible with your PyTorch build
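
To confirm directly whether the installed PyTorch build includes CUDA support:

```bash
# Prints the CUDA version PyTorch was compiled against ("None" for CPU-only builds)
python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"
```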

## Quick Start

### Training a YOLO Model

```python
from aegis_vision import YOLOTrainer

# Initialize trainer
trainer = YOLOTrainer(
    model_variant="yolov11l",
    dataset_path="/kaggle/input/my-dataset",
    epochs=100,
    batch_size=16,
)

# Configure Wandb tracking (optional)
trainer.setup_wandb(
    project="my-project",
    entity="my-team",
    api_key="your-api-key"
)

# Train
results = trainer.train()

# Export to multiple formats
trainer.export(formats=["onnx", "coreml", "openvino"])
```

### Converting COCO to YOLO Format

```python
from aegis_vision import COCOConverter

# Convert dataset
converter = COCOConverter(
    annotations_file="annotations.json",
    images_dir="images/",
    output_dir="yolo_dataset/"
)

stats = converter.convert()
print(f"Converted {stats['total_annotations']} annotations")
```
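
Independent of the converter's internals, the core box transform any COCO-to-YOLO conversion performs looks like this (a reference sketch, not `COCOConverter`'s code):

```python
# Reference math only; this is not COCOConverter's internal implementation.
def coco_bbox_to_yolo(bbox, img_w, img_h):
    """COCO stores [x_min, y_min, width, height] in pixels; YOLO expects
    [x_center, y_center, width, height] normalized to the image size."""
    x_min, y_min, w, h = bbox
    return ((x_min + w / 2) / img_w, (y_min + h / 2) / img_h,
            w / img_w, h / img_h)

# A 100x50 box at (200, 100) in a 640x480 image:
print(coco_bbox_to_yolo([200, 100, 100, 50], 640, 480))
# -> (0.390625, 0.2604166..., 0.15625, 0.1041666...)
```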

### Command-Line Interface

```bash
# Train a model
aegis-train \
    --model yolov11l \
    --data /path/to/dataset \
    --epochs 100 \
    --batch 16 \
    --wandb-project my-project

# Convert COCO to YOLO
aegis-train convert-coco \
    --annotations annotations.json \
    --images images/ \
    --output yolo_dataset/
```

## Features

### 🎯 YOLO Training

- **Multi-version support**: YOLOv8, v9, v10, v11
- **Fine-tuning & from-scratch** training modes
- **Automatic augmentation** configuration
- **Early stopping** with patience
- **Validation metrics**: mAP50, mAP50-95, precision, recall

### 📊 Experiment Tracking

- **Wandb integration** for metrics, charts, and artifacts
- **Automatic logging** of hyperparameters, metrics, and model outputs
- **Run resumption** support

### 🔄 Dataset Handling

- **COCO format** support
- **Auto-conversion** to YOLO format
- **Label filtering** and validation
- **Dataset statistics** reporting

### 📦 Model Export

- **ONNX** - Cross-platform inference
- **CoreML** - iOS/macOS deployment
- **OpenVINO** - Intel hardware optimization
- **TensorRT** - NVIDIA GPU optimization
- **TFLite** - Mobile/edge deployment

### ☁️ Cloud Environment Support

- **Kaggle** - Kernel execution and dataset management
- **Google Colab** - Ready-to-use notebooks
- **Environment detection** - Auto-configuration for different platforms

## Configuration

### Training Configuration

```python
config = {
    # Model settings
    "model_variant": "yolov11l",
    "training_mode": "fine_tune",  # or "from_scratch"
    
    # Training hyperparameters
    "epochs": 100,
    "batch_size": 16,
    "img_size": 640,
    "learning_rate": 0.01,
    "momentum": 0.937,
    "weight_decay": 0.0005,
    
    # Augmentation
    "augmentation": {
        "hsv_h": 0.015,
        "hsv_s": 0.7,
        "hsv_v": 0.4,
        "degrees": 0.0,
        "translate": 0.1,
        "scale": 0.5,
        "shear": 0.0,
        "perspective": 0.0,
        "flipud": 0.0,
        "fliplr": 0.5,
        "mosaic": 1.0,
        "mixup": 0.0,
    },
    
    # Early stopping
    "early_stopping": {
        "enabled": True,
        "patience": 50,
        "min_delta": 0.0001
    },
    
    # Wandb
    "wandb_enabled": True,
    "wandb_project": "my-project",
    "wandb_entity": "my-team",
    
    # Export
    "output_formats": ["onnx", "coreml", "openvino"],
}

trainer = YOLOTrainer(**config)
```

## Examples

### Kaggle Kernel

```python
# In a Kaggle kernel
from aegis_vision import YOLOTrainer

trainer = YOLOTrainer(
    model_variant="yolov11l",
    dataset_path="/kaggle/input/my-dataset",
    epochs=100,
    wandb_api_key="/kaggle/input/secrets/wandb_api_key.txt"
)

results = trainer.train()
trainer.save_to_kaggle_output()
```

### Custom Dataset

```python
from aegis_vision import YOLOTrainer, COCOConverter

# 1. Convert your COCO dataset
converter = COCOConverter(
    annotations_file="my_annotations.json",
    images_dir="my_images/",
    output_dir="yolo_dataset/",
    labels_filter=["person", "car", "dog"]  # Optional filtering
)
converter.convert()

# 2. Train
trainer = YOLOTrainer(
    model_variant="yolov11m",
    dataset_path="yolo_dataset/",
    epochs=50,
)
results = trainer.train()
```

## API Reference

### YOLOTrainer

Main class for training YOLO models.

**Methods**:
- `train()` - Start training
- `setup_wandb()` - Configure Wandb tracking
- `export()` - Export trained model
- `validate()` - Run validation
- `get_metrics()` - Retrieve training metrics
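
A usage sketch of the post-training methods, assuming `validate()` and `get_metrics()` return dict-like results (their exact signatures are not documented here):

```python
# Assumes `trainer` has already been trained, as in the Quick Start example.
val_results = trainer.validate()   # run validation on the validation split
metrics = trainer.get_metrics()    # retrieve the recorded training metrics

# Key names are illustrative; inspect the returned object for your version.
print(metrics.get("mAP50"), metrics.get("mAP50-95"))
```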

### COCOConverter

Convert COCO format datasets to YOLO format.

**Methods**:
- `convert()` - Perform conversion
- `validate()` - Check dataset integrity
- `get_statistics()` - Dataset statistics

## Development

```bash
# Clone repository
git clone https://github.com/your-org/aegis-vision.git
cd aegis-vision

# Install in development mode
pip install -e ".[dev]"

# Run tests
pytest

# Format code
black src/

# Lint
ruff check src/
```

### Testing & Debugging

#### Programmatic Task Submission

Test the agent without using the UI:

```bash
# Submit a basic training task
python test_submit_task.py

# Submit with CoreML export
python test_submit_task.py --coreml --epochs 5

# Submit with custom configuration
python test_submit_task.py --model yolo11n --epochs 10 --batch-size 16
```

See [`TEST_TASK_SUBMISSION.md`](TEST_TASK_SUBMISSION.md) for complete documentation.

#### Debugging with VS Code/Cursor

1. **Set up debugging**:
   ```bash
   # Debug configurations are pre-configured in .vscode/launch.json
   # Just open the project in VS Code/Cursor
   ```

2. **Start debugging**:
   - Set breakpoints in `src/aegis_vision/agent.py` or `trainer.py`
   - Press F5 and select "Debug Aegis Agent"
   - Submit a task (via UI or `test_submit_task.py`)
   - Debugger will pause at your breakpoints

3. **Common debugging scenarios**:
   - CoreML export issues: Breakpoint at `trainer.py:_export_coreml()`
   - Task execution: Breakpoint at `agent.py:execute_task()`
   - Training config: Breakpoint at `training_script.py:main()`

See [`DEBUG_GUIDE.md`](DEBUG_GUIDE.md) for comprehensive debugging documentation.

#### Combined Testing Workflow

Combining the debugger with programmatic task submission gives the fastest iteration loop:

```bash
# Terminal 1: Start agent in debug mode (VS Code/Cursor)
# Press F5 → "Debug Aegis Agent"
# Set breakpoints in agent.py or trainer.py

# Terminal 2: Submit test task
python test_submit_task.py --coreml

# Debugger will pause at breakpoints
# Inspect variables, step through code, fix issues
```

This enables rapid iteration without manual UI interaction.

## Contributing

Contributions are welcome! Please:

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## Roadmap

- [ ] Support for additional YOLO architectures
- [ ] Integration with Hugging Face Hub
- [ ] Distributed training support
- [ ] Auto-hyperparameter tuning
- [ ] Model quantization utilities
- [ ] Segmentation and pose estimation models
- [ ] Real-time inference utilities

## Citation

```bibtex
@software{aegis_vision,
  title = {Aegis Vision: Cloud-native Computer Vision Training Toolkit},
  author = {Aegis AI Team},
  year = {2025},
  url = {https://github.com/your-org/aegis-vision}
}
```

## Support

- 📧 Email: support@aegis-ai.com
- 💬 Discord: [Join our community](https://discord.gg/aegis-ai)
- 📚 Documentation: [https://aegis-vision.readthedocs.io](https://aegis-vision.readthedocs.io)
- 🐛 Issues: [GitHub Issues](https://github.com/your-org/aegis-vision/issues)



            
