# 🚀 AnomaVision: Edge-Ready Visual Anomaly Detection
<div align="center">
<img src="docs/images/AnomaVision_banner.png" alt="bg" width="100%" style="border-radius: 15px;"/>
**🔥 Production-ready anomaly detection powered by the state-of-the-art PaDiM algorithm**
*Deploy anywhere, run everywhere - from edge devices to cloud infrastructure*
<details open>
<summary>✨ Supported Export Formats</summary>
| Format | Status | Use Case | Language Support |
|--------|--------|----------|------------------|
| **PyTorch** | ✅ Ready | Development & Research | Python |
| **Statistics (.pth)** | ✅ Ready | Ultra-compact deployment (2-4x smaller) | Python |
| **ONNX** | ✅ Ready | Cross-platform deployment | Python, C++ |
| **TorchScript** | ✅ Ready | Production Python deployment | Python |
| **OpenVINO** | ✅ Ready | Intel hardware optimization | Python |
| **TensorRT** | 🚧 Coming Soon | NVIDIA GPU acceleration | Python |
</details>
</div>
---
<details open>
<summary>✨ What's New (September 2025)</summary>
- **Slim artifacts (`.pth`)**: Save only PaDiM statistics (mean, cov_inv, channel indices, layer indices, backbone) for **2โ4ร smaller files** vs. full `.pt` checkpoints
- **Plug-and-play loading**: `.pth` files load seamlessly through `TorchBackend` and the exporter via a lightweight runtime (`PadimLite`) with the same `.predict(...)` interface (see the sketch after this list)
- **CPU-first pipeline**: Everything works on machines **without a GPU**. FP16 used only for storage; compute happens in FP32 on CPU
- **Export from `.pth`**: ONNX/TorchScript/OpenVINO export now accepts stats-only `.pth` directly
- **Test coverage**: New pytest cases validate saving stats, loading via `PadimLite`, CPU inference, and exporter compatibility
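
A minimal sketch of the slim-artifact round trip, assuming an already-fitted `Padim` model and the `save_statistics`/`load_statistics` methods listed in the API reference below:

```python
import torch

# `model` is assumed to be a fitted anomavision.Padim instance (see Quick Start).
# Save only the Gaussian statistics; half=True stores them as FP16 on disk.
model.save_statistics("padim_stats.pth", half=True)

# Reload later, e.g. on a CPU-only box. force_fp32=True promotes the stored
# FP16 tensors back to FP32 for compute, matching the CPU-first pipeline.
model.load_statistics("padim_stats.pth", torch.device("cpu"), force_fp32=True)

# The restored model keeps the same interface:
# image_scores, score_maps = model.predict(batch)
```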
</details>
---
<details open>
<summary>✨ Why Choose AnomaVision?</summary>

**🎯 Unmatched Performance** • **🔄 Multi-Format Support** • **📦 Production Ready** • **🎨 Rich Visualizations** • **📏 Flexible Image Dimensions**
AnomaVision transforms the cutting-edge **PaDiM (Patch Distribution Modeling)** algorithm into a production-ready powerhouse for visual anomaly detection. Whether you're detecting manufacturing defects, monitoring infrastructure, or ensuring quality control, AnomaVision delivers enterprise-grade performance with research-level accuracy.
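
For the intuition behind PaDiM (Defard et al.): each patch position is modeled by a multivariate Gaussian fitted on normal images, and a test patch is scored by its Mahalanobis distance to that Gaussian. A self-contained NumPy illustration of the scoring rule (a toy sketch, not AnomaVision's internals):

```python
import numpy as np

def mahalanobis_score(x, mean, cov_inv):
    """Anomaly score of one patch embedding x against the per-position
    Gaussian (mean, inverse covariance) fitted on normal data."""
    diff = x - mean
    return float(np.sqrt(diff @ cov_inv @ diff))

rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 3))          # toy "good" patch embeddings
mean = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

print(mahalanobis_score(normal[0], mean, cov_inv))                   # small -> normal
print(mahalanobis_score(np.array([8.0, 8.0, 8.0]), mean, cov_inv))   # large -> anomalous
```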
</details>
---
<details>
<summary>✨ Benchmark Results: AnomaVision vs Anomalib (MVTec Bottle, CPU-only)</summary>
<img src="docs/images/av_al.png" alt="bg" width="50%" style="border-radius: 15px;"/>
</details>
---
<details >
<summary>✨ Installation</summary>

### 📋 Prerequisites
- **Python**: 3.9+
- **CUDA**: 11.7+ for GPU acceleration
- **PyTorch**: 2.0+ (automatically installed)
### 🎯 Method 1: Poetry (Recommended)
```bash
git clone https://github.com/DeepKnowledge1/AnomaVision.git
cd AnomaVision
poetry install
poetry shell
```
### 🎯 Method 2: pip
```bash
git clone https://github.com/DeepKnowledge1/AnomaVision.git
cd AnomaVision
pip install -r requirements.txt
```
### ✅ Verify Installation
```bash
python -c "import anomavision; print('🎉 AnomaVision installed successfully!')"
```
### 🐳 Docker Support
```bash
# Build Docker image (coming soon)
docker build -t anomavision:latest .
docker run --gpus all -v $(pwd):/workspace anomavision:latest
```
</details>
---
<details >
<summary>✨ Quick Start</summary>

### 🎯 Train Your First Model (2 minutes)
```python
import anomavision
import torch
from torch.utils.data import DataLoader

# 📂 Load your "good" training images
dataset = anomavision.anomavisionDataset(
    "path/to/train/good",
    resize=[256, 192],     # Flexible width/height
    crop_size=[224, 224],  # Final crop size
    normalize=True         # ImageNet normalization
)
dataloader = DataLoader(dataset, batch_size=4)

# 🧠 Initialize PaDiM with optimal settings
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = anomavision.Padim(
    backbone='resnet18',   # Fast and accurate
    device=device,
    layer_indices=[0, 1],  # Multi-scale features
    feat_dim=100           # Optimal feature dimension
)

# 🔥 Train the model (surprisingly fast!)
print("🚀 Training model...")
model.fit(dataloader)

# 💾 Save for production deployment
torch.save(model, "anomaly_detector.pt")
model.save_statistics("compact_model.pth", half=True)  # 4x smaller!
print("✅ Model trained and saved!")
```
### 🔍 Detect Anomalies Instantly

```python
# 📊 Load test data and detect anomalies (uses the same preprocessing as training)
test_dataset = anomavision.anomavisionDataset("path/to/test/images")
test_dataloader = DataLoader(test_dataset, batch_size=4)

for batch, images, _, _ in test_dataloader:
    # 🎯 Get anomaly scores and detailed heatmaps
    image_scores, score_maps = model.predict(batch)

    # 🏷️ Classify anomalies (threshold=13 works great for most cases)
    predictions = anomavision.classification(image_scores, threshold=13)

    print(f"🔥 Anomaly scores: {image_scores.tolist()}")
    print(f"📋 Predictions: {predictions.tolist()}")
    break
```
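
To eyeball the detections, the score maps can be rendered with the visualization helpers documented in the API reference below. A hedged sketch (exact array layouts are an assumption; adjust to your data):

```python
import matplotlib.pyplot as plt

# Overlay anomaly heatmaps on the batch images from the loop above.
heatmaps = anomavision.visualization.heatmap_images(images, score_maps)

plt.imshow(heatmaps[0])
plt.axis("off")
plt.title(f"anomaly score = {float(image_scores[0]):.2f}")
plt.show()
```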
### 🚀 Export for Production Deployment

```bash
# 📦 Export to ONNX for universal deployment
python export.py \
    --model_data_path "./models/" \
    --model "padim_model.pt" \
    --format onnx \
    --opset 17
# ✅ ONNX model ready for deployment!
```
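
A quick sanity check of the exported model with ONNX Runtime; the input name and the 1x3x224x224 shape are assumptions here, so read them from the session as shown:

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("padim_model.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
print("input:", inp.name, inp.shape)

# Run a dummy batch; the outputs should mirror .predict() (scores + score maps).
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
for out in sess.run(None, {inp.name: dummy}):
    print(out.shape)
```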
</details>
---
<details >
<summary>✨ Real-World Examples</summary>

### 🖥️ Command Line Interface

#### 📚 Train a High-Performance Model
```bash
# Using command line arguments
python train.py \
    --dataset_path "data/bottle" \
    --class_name "bottle" \
    --model_data_path "./models/" \
    --backbone resnet18 \
    --batch_size 8 \
    --layer_indices 0 1 2 \
    --feat_dim 200 \
    --resize 256 224 \
    --crop_size 224 224 \
    --normalize

# Or using config file (recommended)
python train.py --config config.yml
```
**Sample config.yml:**
```yaml
# Dataset configuration
dataset_path: "D:/01-DATA"
class_name: "bottle"
resize: [256, 224] # Width, Height - flexible dimensions!
crop_size: [224, 224] # Final square crop
normalize: true
norm_mean: [0.485, 0.456, 0.406]
norm_std: [0.229, 0.224, 0.225]
# Model configuration
backbone: "resnet18"
feat_dim: 100
layer_indices: [0, 1]
batch_size: 8
# Output configuration
model_data_path: "./distributions/bottle_exp"
output_model: "padim_model.pt"
run_name: "bottle_experiment"
```
#### 🔍 Run Lightning-Fast Inference

```bash
# Automatically uses training configuration
python detect.py \
    --model_data_path "./distributions/bottle_exp" \
    --model "padim_model.pt" \
    --img_path "data/bottle/test/broken_large" \
    --batch_size 16 \
    --thresh 13 \
    --enable_visualization \
    --save_visualizations

# Multi-format support
python detect.py --model padim_model.pt            # PyTorch
python detect.py --model padim_model.torchscript   # TorchScript
python detect.py --model padim_model.onnx          # ONNX Runtime
python detect.py --model padim_model_openvino      # OpenVINO

# Or using config file (recommended)
python detect.py --config config.yml
```
#### 📊 Comprehensive Model Evaluation

```bash
# Uses saved configuration automatically
python eval.py \
    --model_data_path "./distributions/bottle_exp" \
    --model "padim_model.pt" \
    --dataset_path "data/mvtec" \
    --class_name "bottle" \
    --batch_size 8

# Or using config file (recommended)
python eval.py --config config.yml
```
#### 🔄 Export to Multiple Formats

```bash
# Export to all formats
python export.py \
    --model_data_path "./distributions/bottle_exp" \
    --model "padim_model.pt" \
    --format all

# Or using config file (recommended)
python export.py --config config.yml
```
### 🔄 Universal Model Format Support
```python
from anomavision.inference.model.wrapper import ModelWrapper
# 🎯 Automatically detect and load ANY supported format
pytorch_model = ModelWrapper("model.pt", device='cuda') # PyTorch
onnx_model = ModelWrapper("model.onnx", device='cuda') # ONNX Runtime
torchscript_model = ModelWrapper("model.torchscript", device='cuda') # TorchScript
openvino_model = ModelWrapper("model_openvino/model.xml", device='cpu') # OpenVINO
# 🚀 Unified prediction interface - same API for all formats!
scores, maps = pytorch_model.predict(batch)
scores, maps = onnx_model.predict(batch)
# 🧹 Always clean up resources
pytorch_model.close()
onnx_model.close()
```
### 🔧 C++ ONNX Integration
```cpp
// C++ ONNX Runtime integration example
#include <onnxruntime_cxx_api.h>
#include <iostream>
#include <vector>
#include <string>
#include <chrono>
#include <numeric>
#include <algorithm>
// ... (remainder of the example elided)
```
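
Whatever the host language, the client must reproduce the training-time preprocessing before calling the ONNX session. As a reference, here is the ImageNet-style pipeline from the configuration above expressed in Python (the C++ side would mirror these steps with the OpenCV C++ API; resize/crop details may differ from your training setup):

```python
import cv2
import numpy as np

MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(path: str, size=(224, 224)) -> np.ndarray:
    """Load BGR image, convert to RGB, resize, normalize, return NCHW float32."""
    img = cv2.imread(path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, size).astype(np.float32) / 255.0
    img = (img - MEAN) / STD
    return img.transpose(2, 0, 1)[None]  # shape (1, 3, H, W)
```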
</details>
---
<details >
<summary>✨ Configuration Guide</summary>

### 🎯 Training Parameters
| Parameter | Description | Default | Range | Pro Tip |
|-----------|-------------|---------|-------|---------|
| `backbone` | Feature extractor | `resnet18` | `resnet18`, `wide_resnet50` | Use ResNet18 for speed, Wide-ResNet50 for accuracy |
| `layer_indices` | ResNet layers | `[0]` | `[0, 1, 2, 3]` | `[0, 1]` gives best speed/accuracy balance |
| `feat_dim` | Feature dimensions | `50` | `1-2048` | Higher = more accurate but slower |
| `batch_size` | Training batch size | `2` | `1-64` | Use largest size that fits in memory |
### 📏 Image Processing Parameters
| Parameter | Description | Default | Example | Pro Tip |
|-----------|-------------|---------|---------|---------|
| `resize` | Initial resize | `[224, 224]` | `[256, 192]` | Width and height can be set independently |
| `crop_size` | Final crop size | `None` | `[224, 224]` | Square crops often work best for CNN models |
| `normalize` | ImageNet normalization | `true` | `true/false` | Usually improves performance with pretrained models |
| `norm_mean` | RGB mean values | `[0.485, 0.456, 0.406]` | Custom values | Use ImageNet stats for pretrained backbones |
| `norm_std` | RGB std values | `[0.229, 0.224, 0.225]` | Custom values | Match your training data distribution |
### 🔍 Inference Parameters
| Parameter | Description | Default | Range | Pro Tip |
|-----------|-------------|---------|-------|---------|
| `thresh` | Anomaly threshold | `13` | `1-100` | Start with 13, then tune on your data (see the sketch below) |
| `enable_visualization` | Show results | `false` | `true/false` | Great for debugging and demos |
| `save_visualizations` | Save images | `false` | `true/false` | Essential for production monitoring |
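
Conceptually, `thresh` is a simple cut on the image-level score. A minimal stand-in for `anomavision.classification` (the library's exact convention, e.g. `>=` vs `>`, is an assumption here):

```python
import numpy as np

def classify(image_scores, thresh=13.0):
    # 1 = anomalous, 0 = normal, based on the image-level score cut.
    return (np.asarray(image_scores) >= thresh).astype(int)

print(classify([4.2, 12.9, 27.5]))  # -> [0 0 1]
```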
### 📄 Configuration File Structure
```yaml
# =========================
# Dataset / preprocessing (shared by train, detect, eval)
# =========================
dataset_path: "D:/01-DATA" # Root dataset folder
class_name: "bottle" # Class name for MVTec dataset
resize: [224, 224] # Resize dimensions [width, height]
crop_size: [224, 224] # Final crop size [width, height]
normalize: true # Whether to normalize images
norm_mean: [0.485, 0.456, 0.406] # ImageNet normalization mean
norm_std: [0.229, 0.224, 0.225] # ImageNet normalization std
# =========================
# Model / training
# =========================
backbone: "resnet18" # Backbone CNN architecture
feat_dim: 50 # Feature dimension size
layer_indices: [0] # Which backbone layers to use
model_data_path: "./distributions/exp" # Path to store model data
output_model: "padim_model.pt" # Saved model filename
batch_size: 2 # Training/inference batch size
device: "auto" # Device: "cpu", "cuda", or "auto"
# =========================
# Inference (detect.py)
# =========================
img_path: "D:/01-DATA/bottle/test/broken_large" # Test images path
thresh: 13.0 # Anomaly detection threshold
enable_visualization: true # Enable visualizations
save_visualizations: true # Save visualization results
viz_output_dir: "./visualizations/" # Visualization output directory
# =========================
# Export (export.py)
# =========================
format: "all" # Export format: onnx, torchscript, openvino, all
opset: 17 # ONNX opset version
dynamic_batch: true # Allow dynamic batch size
fp32: false # Export precision (false = FP16 for OpenVINO)
```
</details>
---
<details >
<summary>✨ Complete API Reference</summary>

### 🧠 Core Classes
#### `anomavision.Padim` - The Heart of AnomaVision
```python
model = anomavision.Padim(
    backbone='resnet18',          # 'resnet18' | 'wide_resnet50'
    device=torch.device('cuda'),  # Target device
    layer_indices=[0, 1, 2],      # ResNet layers [0-3]
    feat_dim=100,                 # Feature dimensions (1-2048)
    channel_indices=None          # Optional channel selection
)
```
**🔥 Methods:**
- `fit(dataloader, extractions=1)` - Train on normal images
- `predict(batch, gaussian_blur=True)` - Detect anomalies
- `evaluate(dataloader)` - Full evaluation with metrics (usage sketch below)
- `evaluate_memory_efficient(dataloader)` - For large datasets
- `save_statistics(path, half=False)` - Save compact statistics
- `load_statistics(path, device, force_fp32=True)` - Load statistics
#### `anomavision.anomavisionDataset` - Smart Data Loading with Flexible Sizing
```python
dataset = anomavision.anomavisionDataset(
    "path/to/images",            # Image directory
    resize=[256, 192],           # Flexible width/height resize
    crop_size=[224, 224],        # Final crop dimensions
    normalize=True,              # ImageNet normalization
    mean=[0.485, 0.456, 0.406],  # Custom mean values
    std=[0.229, 0.224, 0.225]    # Custom std values
)

# For MVTec format with the same flexibility
mvtec_dataset = anomavision.MVTecDataset(
    "path/to/mvtec",
    class_name="bottle",
    is_train=True,
    resize=[300, 300],           # Square resize
    crop_size=[224, 224],        # Final crop
    normalize=True
)
```
#### `ModelWrapper` - Universal Model Interface
```python
wrapper = ModelWrapper(
    model_path="model.onnx",  # Any supported format (.pt, .onnx, .torchscript, etc.)
    device='cuda'             # Target device
)

# 🎯 Unified API for all formats
scores, maps = wrapper.predict(batch)
wrapper.close()  # Always clean up!
```
### 🛠️ Utility Functions
```python
# 🏷️ Smart classification with optimal thresholds
predictions = anomavision.classification(scores, threshold=15)
# 📊 Comprehensive evaluation metrics
images, targets, masks, scores, maps = model.evaluate(dataloader)
# 🎨 Rich visualization functions
boundary_images = anomavision.visualization.framed_boundary_images(images, classifications)
heatmap_images = anomavision.visualization.heatmap_images(images, score_maps)
highlighted_images = anomavision.visualization.highlighted_images(images, classifications)
```
### ⚙️ Configuration Management
```python
from anomavision.config import load_config
from anomavision.utils import merge_config
# Load configuration from file
config = load_config("config.yml")
# Merge with command line arguments
final_config = merge_config(args, config)
# Image processing with automatic parameter application
dataset = anomavision.anomavisionDataset(
    image_path,
    resize=config.resize,        # From config: [256, 224]
    crop_size=config.crop_size,  # From config: [224, 224]
    normalize=config.normalize,  # From config: true
    mean=config.norm_mean,       # From config: ImageNet values
    std=config.norm_std          # From config: ImageNet values
)
```
</details>
---
<details >
<summary>✨ Architecture Overview</summary>
```
AnomaVision/
├── 🧠 anomavision/                 # Core AI library
│   ├── 📄 padim.py                 # PaDiM implementation
│   ├── 📄 padim_lite.py            # Lightweight runtime module
│   ├── 📄 feature_extraction.py    # ResNet feature extraction
│   ├── 📄 mahalanobis.py           # Distance computation
│   ├── 📁 datasets/                # Dataset loaders with flexible sizing
│   ├── 📁 visualization/           # Rich visualization tools
│   ├── 📁 inference/               # Multi-format inference engine
│   │   ├── 📄 wrapper.py           # Universal model wrapper
│   │   ├── 📄 modelType.py         # Format detection
│   │   └── 📁 backends/            # Format-specific backends
│   │       ├── 📄 base.py          # Backend interface
│   │       ├── 📄 torch_backend.py # PyTorch support
│   │       ├── 📄 onnx_backend.py  # ONNX Runtime support
│   │       ├── 📄 torchscript_backend.py # TorchScript support
│   │       ├── 📄 tensorrt_backend.py    # TensorRT (coming soon)
│   │       └── 📄 openvino_backend.py    # OpenVINO support
│   ├── 📁 config/                  # Configuration management
│   └── 📄 utils.py                 # Utility functions
├── 📄 train.py                     # Training script with config support
├── 📄 detect.py                    # Inference script
├── 📄 eval.py                      # Evaluation script
├── 📄 export.py                    # Multi-format export utilities
├── 📄 config.yml                   # Default configuration
└── 📁 notebooks/                   # Interactive examples
```
</details>
---
<details >
<summary>✨ Contributing</summary>

We love contributions! Here's how to make AnomaVision even better:

### 🚀 Quick Start for Contributors
```bash
# 🔥 Fork and clone
git clone https://github.com/yourusername/AnomaVision.git
cd AnomaVision

# 🔧 Setup development environment
poetry install --dev
pre-commit install

# 🌿 Create feature branch
git checkout -b feature/awesome-improvement

# 🔨 Make your changes
# ... code, test, commit ...

# 🚀 Submit pull request
git push origin feature/awesome-improvement
```
### 📝 Development Guidelines
- **Code Style**: Follow PEP 8 with 88-character line limit (Black formatting)
- **Type Hints**: Add type hints to all new functions and methods
- **Docstrings**: Use Google-style docstrings for all public functions
- **Tests**: Add pytest tests for new functionality
- **Documentation**: Update README and docstrings as needed
### 🐛 Bug Reports & Feature Requests
- **Bug Reports**: Use the [bug report template](.github/ISSUE_TEMPLATE/bug-report.yml)
- **Feature Requests**: Use the [feature request template](.github/ISSUE_TEMPLATE/feature-request.yml)
- **Questions**: Use [GitHub Discussions](https://github.com/DeepKnowledge1/AnomaVision/discussions)
</details>
---
<details >
<summary>✨ Support & Community</summary>

### 🤝 Getting Help

1. **📖 Documentation**: Check this README and the code documentation
2. **🔍 Search Issues**: Someone might have had the same question
3. **💬 Discussions**: Use GitHub Discussions for questions
4. **🐛 Bug Reports**: Create detailed issue reports with examples

### 👥 Maintainers
- **Core Team**: [@DeepKnowledge1](https://github.com/DeepKnowledge1)
- **Contributors**: See [CONTRIBUTORS.md](CONTRIBUTORS.md)
### 🌟 Recognition
Contributors are recognized in:
- `CONTRIBUTORS.md` file
- Release notes
- GitHub contributors page
</details>
---
<details >
<summary>✨ Roadmap</summary>

### 📅 Q4 2025
- **🚀 TensorRT Backend**: NVIDIA GPU acceleration
- **📱 Mobile Export**: CoreML and TensorFlow Lite support
- **🔧 C++ API**: Native C++ library with Python bindings
- **🎯 AutoML**: Automatic hyperparameter optimization

### 📅 Q1 2026
- **🧠 Transformer Models**: Vision Transformer (ViT) backbone support
- **🔄 Online Learning**: Continuous model updates
- **📊 MLOps Integration**: MLflow, Weights & Biases support
- **🌐 Web Interface**: Browser-based inference and visualization

### 📅 Q2 2026
- **🎥 Video Anomaly Detection**: Temporal anomaly detection
- **🔍 Multi-Class Support**: Beyond binary anomaly detection
- **⚡ Quantization**: INT8 optimization for edge devices
- **🔗 Integration**: Kubernetes operators and Helm charts
</details>
---
<details >
<summary>✨ License & Citation</summary>

### 📜 MIT License

AnomaVision is released under the **MIT License** - see [LICENSE](LICENSE) for details.

### 📖 Citation
If AnomaVision helps your research or project, we'd appreciate a citation:
```bibtex
@software{anomavision2025,
title={AnomaVision: Edge-Ready Visual Anomaly Detection},
author={DeepKnowledge Contributors},
year={2025},
url={https://github.com/DeepKnowledge1/AnomaVision},
version={2.0.46},
note={High-performance anomaly detection library optimized for edge deployment}
}
```
### 🙏 Acknowledgments
AnomaVision builds upon the excellent work of:
- **PaDiM**: Original algorithm by Defard et al.
- **PyTorch**: Deep learning framework
- **ONNX**: Open Neural Network Exchange
- **OpenVINO**: Intel's inference optimization toolkit
- **Anomalib**: Intel's anomaly detection library (for inspiration)
</details>
---
<details >
<summary>✨ Related Projects</summary>
- **[anomavision](https://github.com/OpenAOI/anomavision)**: Anomaly detection library
</details>
---
<details >
<summary>✨ Contact & Support</summary>

### 🤝 Community Channels

- **💬 GitHub Discussions**: [Community Forum](https://github.com/DeepKnowledge1/AnomaVision/discussions)
- **🐛 Issues**: [Bug Reports & Features](https://github.com/DeepKnowledge1/AnomaVision/issues)
- **📧 Email**: [deepp.knowledge@gmail.com](mailto:deepp.knowledge@gmail.com)
- **📖 Documentation**: [Wiki](https://github.com/DeepKnowledge1/AnomaVision/wiki)

### 💼 Enterprise Support

For enterprise deployments, custom integrations, or commercial support:
- **🏢 Enterprise Consulting**: Available upon request
- **🎓 Training Workshops**: Custom training for your team
- **🔧 Custom Development**: Tailored solutions for your use case
</details>
---
<div align="center">
## 🚀 Ready to Transform Your Anomaly Detection?
**Stop settling for slow, bloated solutions. Experience the future of edge-ready anomaly detection.**
---
**🏆 Benchmark Results Don't Lie: AnomaVision Wins 10/10 Metrics**
*Deploy fast. Detect better. AnomaVision.*
**Made with ❤️ for the edge AI community**
</div>