swajay-cv-toolkit

Name: swajay-cv-toolkit
Version: 1.0.9
Home page: https://github.com/swajayresources/swajay-cv-toolkit
Summary: Advanced Computer Vision Toolkit for Image Classification
Upload time: 2025-08-28 08:34:44
Author: Swajay
Requires Python: >=3.8
License: MIT
Keywords: computer-vision, deep-learning, image-classification, pytorch
Requirements: torch, torchvision, timm, albumentations, opencv-python-headless, numpy, scikit-learn, pandas, Pillow, tqdm
# Swajay CV Toolkit 🚀

[![PyPI version](https://badge.fury.io/py/swajay-cv-toolkit.svg)](https://badge.fury.io/py/swajay-cv-toolkit)
[![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Downloads](https://pepy.tech/badge/swajay-cv-toolkit)](https://pepy.tech/project/swajay-cv-toolkit)

**Advanced Computer Vision Toolkit for State-of-the-Art Image Classification**

A comprehensive, production-ready toolkit featuring cutting-edge loss functions, model architectures, augmentation strategies, and training pipelines. Designed for researchers, practitioners, and Kaggle competitors who want to achieve state-of-the-art results with minimal code.

## 🎯 Key Features

### 🔥 **Advanced Loss Functions**
- **Focal Loss** - Handle class imbalance effectively (sketched after this list)
- **Label Smoothing Cross Entropy** - Improve generalization  
- **Polynomial Loss** - Enhanced convergence properties
- **Mixed Loss** - Optimal combination of multiple loss functions
- **Automatic class weight calculation** for imbalanced datasets
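
As a concrete reference for the first item, focal loss rescales cross entropy by a factor of (1 − p_t)^γ so that easy, well-classified examples contribute little to the gradient. Below is a minimal sketch of the idea in plain PyTorch, not the toolkit's own implementation (which is exposed through `get_loss_function('focal')`):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Minimal focal loss sketch: scale cross entropy by (1 - p_t)^gamma
    so well-classified samples contribute less to the gradient."""
    ce = F.cross_entropy(logits, targets, reduction='none')  # per-sample -log(p_t)
    p_t = torch.exp(-ce)                                      # recover p_t from the CE value
    return (alpha * (1 - p_t) ** gamma * ce).mean()
```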

### 🏗️ **Universal Model Architectures**  
- **Auto-adaptive models** - Automatically adjust to any number of classes
- **Multiple architectures** - ConvNeXt, EfficientNet, ResNet, Vision Transformers
- **Smart classifier heads** - Optimal architecture for your dataset size
- **Ensemble support** - Combine multiple models for better performance

### 🎭 **Professional Augmentations**
- **Competition-grade augmentations** - Albumentations-based pipeline
- **MixUp & CutMix** - Advanced mixing strategies (MixUp sketched after this list)
- **Test-Time Augmentation (TTA)** - Boost inference accuracy
- **4 intensity levels** - From lightweight to competition-grade
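
MixUp, for example, blends pairs of images and their labels with a Beta-sampled coefficient. Here is a minimal sketch of the idea in plain PyTorch, not the toolkit's `MixupCutmix` class itself:

```python
import numpy as np
import torch

def mixup_batch(images, labels, alpha=0.2):
    """Blend each sample with a randomly permuted partner: x = lam*x_i + (1-lam)*x_j.
    Returns the mixed images plus both label sets and lam for loss interpolation."""
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(images.size(0))
    mixed = lam * images + (1 - lam) * images[perm]
    return mixed, labels, labels[perm], lam

# The loss is interpolated the same way:
# loss = lam * criterion(outputs, y_a) + (1 - lam) * criterion(outputs, y_b)
```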

### 🏋️ **Advanced Training Pipeline**
- **Mixed precision training** - Up to ~2x faster with lower memory use
- **Differential learning rates** - Optimal rates for backbone vs classifier (both sketched after this list)
- **Smart early stopping** - Prevent overfitting automatically
- **Comprehensive metrics** - Track everything that matters
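
To make the first two items concrete, here is a hedged sketch of a single mixed-precision training step with differential learning rates in plain PyTorch. The toolkit's `AdvancedTrainer` and `create_optimizer(..., differential_lr=True)` handle this for you; `model.backbone` and `model.classifier` below are illustrative attribute names, and `model`, `criterion`, and `train_loader` are assumed to exist as in the Quick Start example.

```python
import torch

# Separate parameter groups: a small LR for the pretrained backbone,
# a larger LR for the freshly initialized classifier head.
optimizer = torch.optim.AdamW([
    {'params': model.backbone.parameters(),   'lr': 1e-4},
    {'params': model.classifier.parameters(), 'lr': 1e-3},
], weight_decay=1e-2)

scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid fp16 underflow

for images, labels in train_loader:
    images, labels = images.cuda(), labels.cuda()
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():          # forward pass in mixed precision
        loss = criterion(model(images), labels)
    scaler.scale(loss).backward()
    scaler.unscale_(optimizer)               # so gradient clipping sees true gradients
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    scaler.step(optimizer)
    scaler.update()
```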

## 🚀 Quick Start

### Installation

```bash
pip install swajay-cv-toolkit
```

### Basic Usage

```python
import torch
from swajay_cv_toolkit import create_model, get_loss_function, AdvancedTrainer
from torch.utils.data import DataLoader

# Create model for any number of classes
model = create_model('convnext_large', num_classes=10)

# Get advanced loss function
criterion = get_loss_function('mixed')  # Combines focal + label smoothing + poly

# Create optimizer 
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# Advanced trainer with all the bells and whistles
trainer = AdvancedTrainer(
    model=model,
    criterion=criterion, 
    optimizer=optimizer,
    mixed_precision=True,
    gradient_clip_norm=1.0
)

# Train with automatic early stopping
# (train_loader / val_loader are standard PyTorch DataLoader instances built from your dataset)
history = trainer.fit(
    train_loader=train_loader,
    val_loader=val_loader, 
    epochs=50,
    early_stopping_patience=10
)
```

### Complete Pipeline Example

```python
from swajay_cv_toolkit import *

# Assumes raw_dataset, test_dataset, train_loader and val_loader are built from your data

# 1. Setup
seed_everything(42)
device = setup_device()

# 2. Data augmentation
aug_config = get_augmentation_preset('competition', image_size=224)
train_dataset = AdvancedDataset(raw_dataset, aug_config['train_transform'])

# 3. Model creation  
model = create_model('convnext_large', num_classes=20).to(device)

# 4. Advanced training components
criterion = get_loss_function('mixed')
optimizer = create_optimizer(model, 'adamw', lr=1e-3, differential_lr=True)
scheduler = create_scheduler(optimizer, 'cosine', total_steps=1000)

# 5. Train with MixUp/CutMix
mixup_cutmix = MixupCutmix(mixup_alpha=0.2, cutmix_alpha=1.0)
trainer = AdvancedTrainer(model, criterion, optimizer, scheduler)
history = trainer.fit(train_loader, val_loader, epochs=30, mixup_cutmix=mixup_cutmix)

# 6. Test-Time Augmentation for final predictions
tta_predictor = TTAPredictor(model, device)
predictions = tta_predictor.predict_with_tta(test_dataset, aug_config['tta_transforms'])
```
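
For intuition, test-time augmentation simply averages the model's predictions over several augmented views of each image. A minimal horizontal-flip sketch in plain PyTorch (the toolkit's `TTAPredictor` generalizes this to the transforms in `aug_config['tta_transforms']`):

```python
import torch

@torch.no_grad()
def predict_with_flip_tta(model, images):
    """Average softmax probabilities over the original and horizontally flipped views."""
    model.eval()
    probs = torch.softmax(model(images), dim=1)
    probs += torch.softmax(model(torch.flip(images, dims=[3])), dim=1)  # flip the width axis
    return probs / 2
```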

## 🎯 Supported Use Cases

### ✅ **Works on ANY Image Classification Task**
- **Medical Imaging** - X-rays, MRIs, pathology slides
- **Satellite Imagery** - Land use, crop monitoring, disaster assessment  
- **Manufacturing** - Quality control, defect detection
- **Retail & E-commerce** - Product categorization, visual search
- **Security** - Face recognition, object identification
- **Agriculture** - Plant disease detection, crop classification
- **Wildlife & Conservation** - Species identification, monitoring
- **Food & Nutrition** - Recipe classification, nutritional analysis
- **Academic Research** - Any custom image classification dataset

### 📊 **Dataset Requirements**
Just organize your data like this:
```
your_dataset/
├── class1/
│   ├── image1.jpg
│   ├── image2.jpg
│   └── ...
├── class2/
│   ├── image1.jpg
│   └── ...
└── class3/
    └── ...
```

That's it! The toolkit handles everything else automatically.
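
This layout is the standard `ImageFolder` convention, so plain torchvision can already read it. The snippet below only illustrates how class names are inferred; the toolkit's `AdvancedDataset` wraps such a dataset with its augmentation pipeline:

```python
from torchvision import datasets
from torch.utils.data import DataLoader

# Class names and labels are inferred from the sub-directory names.
raw_dataset = datasets.ImageFolder('your_dataset/')
print(raw_dataset.classes)              # ['class1', 'class2', 'class3']
num_classes = len(raw_dataset.classes)  # e.g. for create_model(..., num_classes=num_classes)

# After adding transforms, wrap the dataset in a DataLoader for training:
# train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True, num_workers=4)
```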



## 📚 Comprehensive Examples

### Example 1: CIFAR-10 Classification
```python
import torchvision.datasets as datasets
from swajay_cv_toolkit import *

# Load CIFAR-10
train_data = datasets.CIFAR10(root='./data', train=True, download=True)
test_data = datasets.CIFAR10(root='./data', train=False, download=True)

# Quick setup with presets
config = {
    'model': 'efficientnet_b4',
    'num_classes': 10,
    'image_size': 224
}

# One-line training
model, history = quick_train(
    train_data, test_data, 
    preset='competition',
    **config
)
```

### Example 2: Custom Medical Dataset
```python
# Your medical imaging dataset (uses the imports from Example 1)
train_data = datasets.ImageFolder('./medical_images/train')
test_data = datasets.ImageFolder('./medical_images/test')

# Specialized setup for medical images
model = create_model('convnext_large', num_classes=len(train_data.classes))
criterion = get_loss_function('focal', alpha=2, gamma=2)  # Good for imbalanced medical data
optimizer = create_optimizer(model, 'adamw', lr=1e-4, differential_lr=True)

# Conservative augmentations for medical data; higher resolution for medical precision
aug_config = get_augmentation_preset('standard', image_size=384)

# Train (train_loader / val_loader are DataLoaders built from the datasets above)
trainer = AdvancedTrainer(model, criterion, optimizer)
history = trainer.fit(train_loader, val_loader, epochs=100)
```

### Example 3: Ensemble for Maximum Accuracy
```python
# Create ensemble of different architectures
models = [
    create_model('convnext_large', num_classes=20),
    create_model('efficientnet_b7', num_classes=20),
    create_model('vit_large_patch16_224', num_classes=20)
]

ensemble = EnsembleModel(models, weights=[0.4, 0.35, 0.25])

# Train each member individually (each with its own optimizer), then use the ensemble for inference
criterion = get_loss_function('mixed')
for i, model in enumerate(models):
    print(f"Training model {i+1}/{len(models)}...")
    optimizer = create_optimizer(model, 'adamw', lr=1e-3)
    trainer = AdvancedTrainer(model, criterion, optimizer)
    trainer.fit(train_loader, val_loader, epochs=30)
```
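
Under the hood, a weighted ensemble typically just averages the members' softmax probabilities. A minimal sketch of that idea (illustrative only, not the `EnsembleModel` implementation):

```python
import torch

@torch.no_grad()
def ensemble_predict(models, images, weights):
    """Weighted average of each member's softmax probabilities."""
    probs = None
    for model, w in zip(models, weights):
        model.eval()
        p = w * torch.softmax(model(images), dim=1)
        probs = p if probs is None else probs + p
    return probs  # if the weights sum to 1.0, this is already a valid distribution
```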

## 🔧 Advanced Features

### 🎯 **Automatic Configuration**
- **Auto-detect classes** - No need to count manually
- **Auto-size networks** - Optimal architecture for your data size
- **Auto-balance classes** - Handle imbalanced datasets automatically (see the sketch after this list)
- **Auto-tune hyperparameters** - Smart defaults that work
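
For instance, "auto-balance classes" usually comes down to weighting the loss by inverse class frequency. A small sketch of the underlying computation with scikit-learn (the toolkit does this for you automatically; the toy `labels` array is only for illustration):

```python
import numpy as np
import torch
from sklearn.utils.class_weight import compute_class_weight

# labels: one integer class id per training sample
labels = np.array([0, 0, 0, 0, 1, 1, 2])  # toy, heavily imbalanced example

weights = compute_class_weight('balanced', classes=np.unique(labels), y=labels)
class_weights = torch.tensor(weights, dtype=torch.float32)

# Rare classes get proportionally larger weights in the loss:
criterion = torch.nn.CrossEntropyLoss(weight=class_weights)
```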

### 🔬 **Research-Grade Components**
All components are based on peer-reviewed research:

- **ConvNeXt**: Liu et al., 2022 - [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545)
- **Focal Loss**: Lin et al., 2017 - [Focal Loss for Dense Object Detection](https://arxiv.org/abs/1708.02002)
- **Label Smoothing**: Szegedy et al., 2016 - [Rethinking the Inception Architecture](https://arxiv.org/abs/1512.00567)  
- **MixUp**: Zhang et al., 2017 - [mixup: Beyond Empirical Risk Minimization](https://arxiv.org/abs/1710.09412)
- **Test-Time Augmentation**: He et al., 2015 - [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)
- **AdamW**: Loshchilov & Hutter, 2017 - [Decoupled Weight Decay Regularization](https://arxiv.org/abs/1711.05101)

### 🚀 **Production Ready**
- **Memory efficient** - Automatic mixed precision
- **GPU optimized** - Efficient data loading and processing
- **Reproducible** - Comprehensive seed management
- **Robust** - Extensive error handling and validation
- **Scalable** - Works from laptop to multi-GPU setups

## 🎮 Presets for Every Use Case

### `'lightweight'` - Fast Development
- EfficientNet-B2, basic augmentations, 20 epochs
- Perfect for prototyping and quick experiments

### `'standard'` - Balanced Performance  
- ConvNeXt-Base, medium augmentations, 30 epochs
- Great balance of speed and accuracy

### `'competition'` - Maximum Accuracy
- ConvNeXt-Large, aggressive augmentations, TTA, 40 epochs  
- For when you need the absolute best results

### `'research'` - Experimental Features
- Latest architectures, experimental techniques
- For pushing the boundaries

## 📖 Documentation

### 🚀 **Quick References**
- [Installation Guide](docs/installation.md)
- [Quick Start Tutorial](docs/quickstart.md) 
- [API Reference](docs/api_reference.md)
- [Example Gallery](examples/)

### 📋 **Detailed Guides**
- [Loss Functions Guide](docs/losses.md) - When to use which loss
- [Model Architecture Guide](docs/models.md) - Choosing the right model
- [Augmentation Guide](docs/augmentations.md) - Augmentation strategies
- [Training Guide](docs/training.md) - Advanced training techniques
- [Troubleshooting](docs/troubleshooting.md) - Common issues and solutions

## 🤝 Contributing

We welcome contributions! Here's how:

1. **Fork the repository**
2. **Create a feature branch**: `git checkout -b feature/amazing-feature`
3. **Make your changes** and add tests
4. **Run tests**: `pytest tests/`
5. **Format code**: `black swajay_cv_toolkit/`
6. **Commit changes**: `git commit -m 'Add amazing feature'`
7. **Push to branch**: `git push origin feature/amazing-feature`
8. **Open a Pull Request**

### 🐛 **Bug Reports**
Found a bug? [Open an issue](https://github.com/swajayresources/swajay-cv-toolkit/issues) with:
- Python version and OS
- Full error traceback
- Minimal code to reproduce
- Dataset information (if relevant)

### 💡 **Feature Requests**
Have an idea? [Request a feature](https://github.com/swajayresources/swajay-cv-toolkit/issues) with:
- Use case description
- Expected behavior
- Example code (if possible)





## 🛠️ Technical Requirements

### Minimum Requirements
- **Python**: 3.8+
- **PyTorch**: 1.12.0+
- **CUDA**: Optional but recommended
- **RAM**: 8GB+ (16GB+ recommended)
- **Storage**: 2GB for dependencies

### Recommended Setup
- **GPU**: RTX 3080+ or Tesla V100+
- **RAM**: 32GB+
- **Python**: 3.10+
- **CUDA**: 11.7+

### Dependencies
All dependencies are automatically installed:
```
torch>=1.12.0
torchvision>=0.13.0
timm>=0.6.0
albumentations>=1.3.0
opencv-python-headless>=4.6.0
numpy>=1.21.0
scikit-learn>=1.1.0
pandas>=1.4.0
Pillow>=8.3.0
tqdm>=4.64.0
```


### 💬 **Get Help**
- **Documentation**: [swajay-cv-toolkit.readthedocs.io](https://swajay-cv-toolkit.readthedocs.io/)
- **Issues**: [GitHub Issues](https://github.com/swajayresources/swajay-cv-toolkit/issues)
- **Discussions**: [GitHub Discussions](https://github.com/swajayresources/swajay-cv-toolkit/discussions)
- **Email**: swajaynandanwade04@gmail.com

### 🌟 **Show Your Support**
If this toolkit helps you:
- ⭐ **Star the repository**
- 🐦 **Tweet about it** 
- 📝 **Write a blog post**
- 🗣️ **Tell your colleagues**
- 💖 **Sponsor the project**

## 📜 License

This project is licensed under the **MIT License** - see the [LICENSE](LICENSE) file for details.

## 📚 Citation

If you use this toolkit in your research, please cite:

```bibtex
@software{swajay_cv_toolkit,
  title={Swajay CV Toolkit: Advanced Computer Vision for Image Classification},
  author={Swajay},
  year={2024},
  url={https://github.com/swajayresources/swajay-cv-toolkit},
  version={1.0.9}
}
```

## 🙏 Acknowledgments

Special thanks to:
- **PyTorch Team** - For the amazing framework
- **TIMM contributors** - For the model implementations
- **Albumentations team** - For the augmentation library
- **Research community** - For the foundational papers
- **Beta testers** - For feedback and bug reports
- **Contributors** - For making this project better

---

<div align="center">

**Made with ❤️ by Swajay**

[⭐ Star on GitHub](https://github.com/swajayresources/swajay-cv-toolkit) • [📖 Documentation](https://swajay-cv-toolkit.readthedocs.io/) • [🐛 Report Bug](https://github.com/swajayresources/swajay-cv-toolkit/issues) • [💡 Request Feature](https://github.com/swajayresources/swajay-cv-toolkit/issues)

</div>

            
