<p align="center">
<img src="docs/assets/perceptra_logo_white.jpg" alt="PERCEPTRA Logo" width="420"/>
</p>
<h1 align="center">PERCEPTRA</h1>
<p align="center">
<b>Production-ready object classification via vector retrieval</b>
</p>
<p align="center">
🔍 <a href="#features">Features</a> •
⚙️ <a href="#installation">Installation</a> •
🚀 <a href="#quick-start">Quick Start</a> •
📈 <a href="#architecture">Architecture</a> •
🧩 <a href="#api-reference">API</a> •
🧠 <a href="#development">Development</a>
</p>
---
# PERCEPTRA
Production-ready object classification via vector retrieval.
## Features
- 🔍 **Vector-based Classification**: Use embedding similarity for robust object recognition
- 🔌 **Pluggable Architecture**: Swap embedding models (CLIP, custom) and vector stores (FAISS, Qdrant)
- 📊 **Metadata Fusion**: Combine visual similarity with physical properties (length, color)
- 🎯 **Calibrated Confidence**: Reliable uncertainty estimates via multi-factor calibration
- 🚀 **Production-Ready**: FastAPI service with health checks, logging, and Docker support
- 📈 **Incremental Learning**: Add new samples without retraining
- 🛠️ **CLI Tools**: Command-line interface for calibration and inference
## Installation
### Basic Installation
```bash
pip install perceptra
```
### With CLIP Support
```bash
pip install "perceptra[clip]"
```
### Full Installation (Development)
```bash
pip install "perceptra[all]"
```
### From Source
```bash
git clone https://github.com/tannousgeagea/perceptra.git
cd perceptra
pip install -e ".[all]"
```
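To sanity-check the install, try importing the package (the `__version__` attribute is an assumption; a bare import suffices if it is not exposed):

```bash
python -c "import perceptra; print(perceptra.__version__)"
```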
## Quick Start
### Library Usage
```python
from perceptra import (
ObjectClassifier,
CalibrationManager,
CLIPEmbedding,
FAISSVectorStore,
load_config
)
import numpy as np
from PIL import Image
# 1. Initialize components
config = load_config("config/default.yaml")
embedder = CLIPEmbedding(
model_name=config.embedding.model_name,
device=config.embedding.device
)
vector_store = FAISSVectorStore(
dim=embedder.embedding_dim,
metric="cosine"
)
classifier = ObjectClassifier(
embedding_backend=embedder,
vector_store=vector_store,
config=config.classifier
)
calibration_manager = CalibrationManager(
embedding_backend=embedder,
vector_store=vector_store,
config=config.calibration
)
# 2. Build calibration set
pipe_img = np.array(Image.open("samples/pipe1.jpg"))
bottle_img = np.array(Image.open("samples/bottle1.jpg"))
calibration_manager.add_samples(
images=[pipe_img, bottle_img],
labels=["pipe", "bottle"],
metadata=[
{"length_cm": 15.0, "color": "gray", "material": "PVC"},
{"length_cm": 8.0, "color": "transparent", "material": "plastic"}
]
)
# 3. Classify new detection
detection = np.array(Image.open("detections/unknown_object.jpg"))
result = classifier.classify(
image_crop=detection,
k=5,
metadata_hint={"estimated_length_cm": 14.0},
return_reasoning=True
)
print(f"Predicted: {result.predicted_label}")
print(f"Confidence: {result.confidence:.2%}")
print(f"Reasoning: {result.reasoning}")
# 4. Save calibration set
calibration_manager.export_calibration_set("data/my_calibration.index")
```
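When no stored sample is similar enough, the classifier falls back to the configured `unknown_label` (see Configuration below). A minimal sketch of guarding on that, assuming the defaults `unknown_label: unknown` and `min_confidence_threshold: 0.6`:

```python
# Route low-confidence predictions to review instead of trusting them blindly.
result = classifier.classify(image_crop=detection, k=5)

if result.predicted_label == "unknown" or result.confidence < 0.6:
    # Below the calibrated threshold: queue the crop for human labeling, then
    # feed it back in via calibration_manager.add_samples() to improve recall.
    print("Low confidence; deferring to manual review.")
else:
    print(f"Accepted: {result.predicted_label} ({result.confidence:.2%})")
```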
### Service Usage
Start the FastAPI service:
```bash
# Using CLI
perceptra serve --config config/default.yaml --port 8000
# Using Python
python -m uvicorn perceptra.service.app:app --host 0.0.0.0 --port 8000
# Using Docker
docker-compose up
```
Classify via API:
```bash
# Classify an object
curl -X POST "http://localhost:8000/classify" \
-F "image=@detection.jpg" \
-F "k=5" \
-F "return_reasoning=true"
# Response:
{
"predicted_label": "pipe",
"confidence": 0.87,
"nearest_neighbors": [
{"label": "pipe", "distance": 0.12},
{"label": "pipe", "distance": 0.18},
{"label": "bottle", "distance": 0.45}
],
"reasoning": "Classified as 'pipe' with 87% confidence. Based on 5 similar objects: 3 pipe, 2 bottle."
}
```
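The same call from Python, sketched with the third-party `requests` library (field names mirror the form fields above):

```python
import requests

# Send the crop as multipart/form-data, mirroring the curl call above.
with open("detection.jpg", "rb") as f:
    resp = requests.post(
        "http://localhost:8000/classify",
        files={"image": f},
        data={"k": 5, "return_reasoning": "true"},
    )

resp.raise_for_status()
result = resp.json()
print(result["predicted_label"], result["confidence"])
```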
Add calibration samples via API:
```bash
curl -X POST "http://localhost:8000/calibration/add" \
-F "images=@pipe1.jpg" \
-F "images=@pipe2.jpg" \
-F 'labels=["pipe", "pipe"]' \
-F 'metadata=[{"length_cm": 15}, {"length_cm": 20}]'
```
Get calibration statistics:
```bash
curl "http://localhost:8000/calibration/stats"
# Response:
{
"total_samples": 150,
"label_distribution": {
"pipe": 45,
"bottle": 38,
"can": 32,
"bag": 35
},
"embedding_model": "ViT-B-32",
"embedding_dim": 512
}
```
### CLI Usage
```bash
# Classify a single image
perceptra classify detection.jpg --k 5
# Build calibration set from directory
perceptra calibrate ./calibration_images ./labels.json --config config/default.yaml
# Start service
perceptra serve --host 0.0.0.0 --port 8000
```
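The expected layout of `labels.json` is not specified above; one plausible shape is a filename-to-label map like the sketch below (an assumption — check `perceptra calibrate --help` for the actual format):

```json
{
  "pipe1.jpg": "pipe",
  "pipe2.jpg": "pipe",
  "bottle1.jpg": "bottle"
}
```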
## Configuration
PERCEPTRA uses hierarchical configuration with YAML files and environment variables.
### Priority Order
1. Environment variables (highest)
2. YAML config file
3. Default values (lowest)
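For example, a variable set at launch overrides the same key from the YAML file:

```bash
# config/default.yaml sets "port: 8000", but the environment variable wins,
# so the service listens on 9000.
PERCEPTRA_SERVICE__PORT=9000 perceptra serve --config config/default.yaml
```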
### Example Configuration
```yaml
embedding:
backend: clip # clip | perception
model_name: ViT-B-32
device: cpu # cpu | cuda
batch_size: 32
vector_store:
backend: faiss # faiss | qdrant
index_type: Flat # Flat | HNSW | IVF
metric: cosine # cosine | l2
persistence_path: ./data/vector_store.index
classifier:
min_confidence_threshold: 0.6
default_k: 5
temperature: 1.0
enable_metadata_filtering: true
unknown_label: unknown
calibration:
auto_save_interval: 100
backup_enabled: true
service:
host: 0.0.0.0
port: 8000
max_image_size_mb: 10
enable_cors: true
log_level: INFO
```
### Environment Variables
Prefix with `PERCEPTRA_` and use `__` for nesting:
```bash
export PERCEPTRA_EMBEDDING__DEVICE=cuda
export PERCEPTRA_CLASSIFIER__MIN_CONFIDENCE_THRESHOLD=0.7
export PERCEPTRA_SERVICE__PORT=9000
```
## Architecture
```
┌───────────────────────────────────────────────────────────┐
│                     PERCEPTRA System                      │
├───────────────────────────────────────────────────────────┤
│                                                           │
│   ┌──────────────┐        ┌─────────────────┐             │
│   │  Embedding   │───────▶│  Vector Store   │             │
│   │   Backend    │        │ (FAISS/Qdrant)  │             │
│   │  (CLIP/etc)  │        └────────┬────────┘             │
│   └──────────────┘                 │                      │
│                                    │                      │
│                           ┌────────▼────────┐             │
│   New Detection ─────────▶│   Classifier    │             │
│                           │ • k-NN search   │             │
│                           │ • Metadata      │             │
│                           │   filtering     │             │
│                           │ • Confidence    │             │
│                           │   calibration   │             │
│                           └────────┬────────┘             │
│                                    │                      │
│                           ┌────────▼────────┐             │
│                           │ Classification  │             │
│                           │     Result      │             │
│                           └─────────────────┘             │
│                                                           │
├───────────────────────────────────────────────────────────┤
│                     Access Interfaces                     │
│   • Python Library   • FastAPI Service   • CLI Tools      │
└───────────────────────────────────────────────────────────┘
```
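The classifier stage reduces to embed, retrieve, vote, calibrate. A simplified sketch of that loop (illustrative only — the `embed`/`search` method names and the vote-share confidence are assumptions, not the library's internals):

```python
from collections import Counter

def classify_sketch(crop, embedder, store, k=5, temperature=1.0):
    """Illustrates the k-NN voting flow from the diagram above."""
    query = embedder.embed(crop)                  # 1. embed the image crop
    neighbors = store.search(query, k=k)          # 2. k-NN retrieval
    votes = Counter(n.label for n in neighbors)   # 3. majority vote over labels
    label, count = votes.most_common(1)[0]
    # 4. crude confidence: vote share, softened or sharpened by temperature
    confidence = (count / k) ** (1.0 / temperature)
    return label, confidence
```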
## API Reference
### Classification Endpoint
**POST** `/classify`
Classify an object from an image crop.
**Request:**
- `image`: Image file (multipart/form-data)
- `k`: Number of neighbors (default: 5)
- `return_reasoning`: Include explanation (default: false)
**Response:**
```json
{
"predicted_label": "pipe",
"confidence": 0.87,
"nearest_neighbors": [...],
"reasoning": "...",
"metadata": {...}
}
```
### Calibration Endpoints
**POST** `/calibration/add`
Add samples to calibration set.
**GET** `/calibration/stats`
Get calibration statistics.
**POST** `/calibration/export`
Export calibration set.
**DELETE** `/calibration/samples`
Delete samples by ID.
### Health Endpoints
**GET** `/health` - Service health check
**GET** `/health/ready` - Readiness probe (K8s)
**GET** `/health/live` - Liveness probe (K8s)
## Development
### Setup Development Environment
```bash
git clone https://github.com/tannousgeagea/perceptra.git
cd perceptra
# Create virtual environment
python -m venv venv
source venv/bin/activate # or `venv\Scripts\activate` on Windows
# Install in development mode
pip install -e ".[all]"
# Install pre-commit hooks
pre-commit install
```
### Run Tests
```bash
pytest tests/ -v --cov=perceptra
```
### Code Quality
```bash
# Format code
black perceptra/ tests/
# Lint
ruff check perceptra/ tests/
# Type check
mypy perceptra/
```
## Deployment
### Docker Deployment
```bash
# Build image
docker build -t perceptra:latest .
# Run container
docker run -p 8000:8000 -v $(pwd)/data:/app/data perceptra:latest
```
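To run the same container on a GPU (assuming the image bundles a CUDA-enabled build and the host has the NVIDIA Container Toolkit):

```bash
docker run --gpus all \
  -p 8000:8000 \
  -v $(pwd)/data:/app/data \
  -e PERCEPTRA_EMBEDDING__DEVICE=cuda \
  perceptra:latest
```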
### Kubernetes Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: perceptra-service
spec:
replicas: 3
selector:
matchLabels:
app: perceptra
template:
metadata:
labels:
app: perceptra
spec:
containers:
- name: perceptra
image: perceptra:latest
ports:
- containerPort: 8000
env:
- name: PERCEPTRA_EMBEDDING__DEVICE
value: "cpu"
volumeMounts:
- name: data
mountPath: /app/data
livenessProbe:
httpGet:
path: /health/live
port: 8000
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /health/ready
port: 8000
initialDelaySeconds: 10
periodSeconds: 5
volumes:
- name: data
persistentVolumeClaim:
claimName: perceptra-data-pvc
```
## Performance Optimization
### GPU Acceleration
```yaml
embedding:
device: cuda # Enable GPU
batch_size: 64 # Increase batch size
```
### Vector Store Optimization
```yaml
vector_store:
index_type: HNSW # Fast approximate search
# or IVF for large datasets
```
### Production Tuning
```yaml
classifier:
min_confidence_threshold: 0.7 # Higher precision
default_k: 10 # More neighbors
service:
workers: 4 # Multiple workers
```
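Whether `perceptra serve` honors `service.workers` is worth verifying in your deployment; the direct uvicorn equivalent uses the standard `--workers` flag:

```bash
python -m uvicorn perceptra.service.app:app --host 0.0.0.0 --port 8000 --workers 4
```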
## Monitoring
PERCEPTRA exposes Prometheus-compatible metrics:
- `perceptra_classifications_total`: Total classifications
- `perceptra_classification_duration_seconds`: Classification latency
- `perceptra_calibration_samples_total`: Total calibration samples
- `perceptra_confidence_scores`: Confidence score distribution
Access metrics at: `http://localhost:8000/metrics`
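A minimal Prometheus scrape job for that endpoint (standard Prometheus configuration; adjust the target to your deployment):

```yaml
scrape_configs:
  - job_name: perceptra
    metrics_path: /metrics
    static_configs:
      - targets: ["localhost:8000"]
```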
## Troubleshooting
### Issue: Low Confidence Scores
**Solution**: Add more diverse calibration samples for each class.
```python
# Add more samples with variations
calibration_manager.add_samples(
images=[pipe_img1, pipe_img2, pipe_img3],
labels=["pipe", "pipe", "pipe"],
metadata=[
{"length_cm": 10, "color": "white"},
{"length_cm": 20, "color": "gray"},
{"length_cm": 15, "color": "black"}
]
)
```
### Issue: Slow Classification
**Solutions**:
1. Use GPU: `PERCEPTRA_EMBEDDING__DEVICE=cuda`
2. Use approximate search: `index_type: HNSW`
3. Reduce k: `default_k: 3`
### Issue: High Memory Usage
**Solutions**:
1. Use quantized index (FAISS)
2. Reduce batch size
3. Use lower-dimensional embeddings
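As an illustration of option 1, this is how a product-quantized index looks in raw FAISS (plain `faiss` API shown for orientation; how a prebuilt index plugs into `FAISSVectorStore` is not documented here):

```python
import faiss
import numpy as np

dim = 512                    # ViT-B-32 embedding size, per the stats above
nlist, m, nbits = 100, 8, 8  # IVF cells; 8 sub-quantizers of 8 bits each

quantizer = faiss.IndexFlatL2(dim)
index = faiss.IndexIVFPQ(quantizer, dim, nlist, m, nbits)

# IVF-PQ indexes must be trained on representative vectors before adding data.
vectors = np.random.rand(10_000, dim).astype("float32")
index.train(vectors)
index.add(vectors)           # ~8 bytes per vector instead of 2 KB of float32
```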
## License
MIT License - see the LICENSE file.
## Contributing
Contributions welcome! Please:
1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Submit a pull request
## Citation
```bibtex
@software{perceptra,
title={PERCEPTRA: Vector-based Object Classification},
author={Geagea, Tannous},
year={2025},
url={https://github.com/tannousgeagea/perceptra}
}
```
## Support
- Documentation: https://docs.example.com
- Issues: https://github.com/tannousgeagea/perceptra/issues
- Email: tannousgeagea@hotmail.com