| Field | Value |
| --- | --- |
| Name | solovision |
| Version | 0.0.1 |
| Summary | State-of-the-art Real Time Object Detection & Tracking System integrated with ReID architecture |
| Author | Dhruv Diddi |
| License | AGPL-3.0 |
| Requires Python | `!=2.7.*,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,!=3.7.*,!=3.8.*,>=3.9` |
| Upload time | 2024-12-27 04:33:54 |
| Keywords | object tracking, reid, machine-learning, deep-learning, vision, artificial intelligence, yolo |
| Requirements | none recorded |
# 🚀 Solovision
<div align="center">
<img src="assets/logo/logo.png" alt="Solovision Logo" width="200"/>
[![Python 3.9+](https://img.shields.io/badge/Python-3.9%2B-blue.svg)](https://www.python.org/downloads/)
[![License: AGPL v3](https://img.shields.io/badge/License-AGPL_v3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/solovision)](https://pypi.org/project/solovision/)
[![PyPI - Version](https://img.shields.io/pypi/v/solovision)](https://pypi.org/project/solovision/)
</div>
Solovision is a state-of-the-art real-time object detection and tracking system that seamlessly integrates with ReID (Re-Identification) architecture. Built on top of YOLO object detection, it provides robust multi-object tracking capabilities with advanced features for identity preservation across frames.
<div align="center">
<img src="assets/results/solovision_results.gif" alt="SoloVision Results">
</div>
## ✨ Key Features
- 🎯 **High-Performance Tracking**: Implements ByteTrack algorithm for reliable multi-object tracking
- 🔄 **ReID Integration**: Advanced re-identification capabilities for maintaining object identity
- 🚀 **Real-time Processing**: Optimized for real-time applications with efficient processing
- 📊 **Multiple Detection Backends**: Support for YOLOv8, YOLOv9, YOLOv11, and earlier YOLO variants
- 💪 **Robust Motion Prediction**: Kalman filtering for smooth trajectory estimation
- 🎨 **Flexible Visualization**: Customizable visualization options for tracking results
- 🔧 **Easy-to-use CLI**: Simple command-line interface for quick deployment
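ByteTrack's central idea is a two-stage association: high-confidence detections are matched to existing tracks first, and the remaining tracks then get a second chance against low-confidence detections. The sketch below illustrates that idea with a greedy IoU matcher; it is illustrative only (function names, thresholds, and the greedy strategy are assumptions, not Solovision's actual implementation, which uses optimal assignment).

```python
# Minimal sketch of ByteTrack-style two-stage association.
# Illustrative only -- not Solovision's actual implementation.

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, high_thresh=0.5, iou_thresh=0.3):
    """Greedy two-stage matching: high-confidence detections first,
    then low-confidence ones against the still-unmatched tracks."""
    high = [d for d in detections if d["score"] >= high_thresh]
    low = [d for d in detections if d["score"] < high_thresh]
    matches, unmatched = [], list(tracks)
    for pool in (high, low):  # stage 1: high conf, stage 2: low conf
        for det in pool:
            best = max(unmatched,
                       key=lambda t: iou(t["bbox"], det["bbox"]),
                       default=None)
            if best is not None and iou(best["bbox"], det["bbox"]) >= iou_thresh:
                matches.append((best["id"], det))
                unmatched.remove(best)
    return matches, unmatched
```

Keeping low-confidence detections in a second pass is what lets ByteTrack recover objects during occlusion instead of dropping them outright.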
## 🛠️ Installation
Install the solovision package in a Python>=3.9 environment.
```bash
pip install solovision
```
Install from source:
```bash
git clone https://github.com/AIEngineersDev/solovision.git
cd solovision
pip install .
```
Install for development (via Poetry):
```bash
pip install poetry
poetry install
poetry shell
```
## 🚀 Quick Start
### Basic Usage
```python
from solovision import ByteTracker
from ultralytics import YOLO
import cv2

# Initialize tracker
tracker = ByteTracker(
    reid_weights="path/to/reid/weights",
    device="cuda",
    half=True
)

# Load the YOLO detector once, outside the frame loop
model = YOLO('yolov8m.pt')

# Process video
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Get detections from YOLO
    detections = model.predict(frame)

    # Update tracker
    tracks = tracker.update(detections, frame)

    # Process tracking results
    for track in tracks:
        bbox = track[:4]
        track_id = track[4]
        # Draw or process tracking results

cap.release()
```
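For visualizing the results, you would typically call `cv2.rectangle` with the track's box and ID. The sketch below shows the same idea with plain NumPy slicing so it stays dependency-light; the `(x1, y1, x2, y2, track_id)` row layout is an assumption to verify against the tracker's actual output.

```python
import numpy as np

# Illustrative only: draw tracked boxes on a frame with NumPy slicing
# (in practice you would likely use cv2.rectangle / cv2.putText).
# Assumed row layout: (x1, y1, x2, y2, track_id).

def draw_tracks(frame, tracks, color=(0, 255, 0), thickness=2):
    for x1, y1, x2, y2, track_id in tracks:
        x1, y1, x2, y2 = map(int, (x1, y1, x2, y2))
        frame[y1:y1 + thickness, x1:x2] = color  # top edge
        frame[y2 - thickness:y2, x1:x2] = color  # bottom edge
        frame[y1:y2, x1:x1 + thickness] = color  # left edge
        frame[y1:y2, x2 - thickness:x2] = color  # right edge
    return frame

frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame = draw_tracks(frame, [(50, 60, 200, 220, 7)])
```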
### Command Line Interface
```bash
# Detect objects across videos or streams
solovision detect --source video_path --conf 0.25 --iou 0.45
# Track objects with persistent IDs using custom settings
solovision track --source video_path --yolo-model yolov8n.pt --reid-model osnet_x1_0_msmt17.pt --show --save --half \
--show-trajectories --save-txt --save-crops --per-class \
--classes 0 2 --device 0 --imgsz 640
# Launch the interactive web application for real-time inference
solovision run_app
# View all available CLI args
solovision --help
```
## 🎯 ReID Models Support
Solovision supports various state-of-the-art ReID architectures:
- OSNet (x0.25, x0.5, x0.75, x1.0)
- OSNet-AIN
- OSNet-IBN
- ResNet (50, 101)
- CLIP-ReID
Check out the [Model Zoo](https://kaiyangzhou.github.io/deep-person-reid/MODEL_ZOO.html) for pre-trained weights and performance benchmarks.
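ReID association generally works by comparing appearance embeddings, typically with cosine similarity. The sketch below shows the basic mechanic; the embedding size and the 0.7 threshold are illustrative assumptions, not Solovision's internals.

```python
import numpy as np

# Sketch of appearance matching by cosine similarity.
# Threshold and embedding shapes are assumptions, not Solovision's values.

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D embedding vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(a @ b)

def best_match(query, gallery, threshold=0.7):
    """Index of the gallery embedding most similar to the query,
    or None if nothing passes the threshold."""
    scores = [cosine_similarity(query, g) for g in gallery]
    idx = int(np.argmax(scores))
    return idx if scores[idx] >= threshold else None
```

In a tracker, the gallery would hold one (or a rolling history of) embedding per active track, and a detection whose best score falls below the threshold starts a new track instead of reusing an ID.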
## 🔧 Advanced Features
- **Tracking Analytics**: Line graphs and timestamp plotting for track IDs
- **Separate Merged Tracks**: Save separate videos of persistent tracks from multiple video sources
- **Per-Class Tracking**: Enable separate tracking for different object classes
- **Feature History**: Maintain temporal appearance features for robust tracking
- **Camera Motion Compensation**: Automatic adjustment for camera movement
- **Multi-Camera Support**: Persist track identities across multiple camera sources
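Per-class tracking (the `--per-class` flag above) amounts to routing each class's detections to its own tracker instance, so IDs never collide across classes. A minimal sketch of that routing; `SimpleTracker` is a hypothetical stand-in for the real ByteTracker:

```python
from collections import defaultdict

# Sketch of per-class tracking: one tracker instance per class ID.
# `SimpleTracker` is a hypothetical stand-in, not Solovision's tracker.

class SimpleTracker:
    def __init__(self):
        self.next_id = 0

    def update(self, detections):
        # Assign a fresh ID to every detection (a real tracker would
        # associate detections with existing tracks instead).
        out = []
        for det in detections:
            out.append((self.next_id, det))
            self.next_id += 1
        return out

class PerClassTracker:
    def __init__(self, tracker_factory=SimpleTracker):
        # Lazily create one tracker per class ID on first use
        self.trackers = defaultdict(tracker_factory)

    def update(self, detections):
        """detections: list of (bbox, class_id); IDs are independent per class."""
        by_class = defaultdict(list)
        for bbox, cls in detections:
            by_class[cls].append(bbox)
        return {cls: self.trackers[cls].update(boxes)
                for cls, boxes in by_class.items()}
```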
## 📊 Performance
- Runs at 30+ FPS on modern GPUs with YOLOv8n
- Support for half-precision (FP16) inference
- Optimized for both accuracy and speed
- Scalable for multi-camera deployments
## 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
## 📝 License
This project is licensed under the GNU Affero General Public License v3.0 - see the [LICENSE](LICENSE) file for details.
## 📚 Citation
```bibtex
@software{solovision2024,
author = {Diddi, Dhruv and Mohammed, Zeeshaan},
title = {Solovision: State-of-the-art Real-Time Object Tracking System},
year = {2024},
publisher = {GitHub},
organization = {AIEngineersDev},
url = {https://github.com/AIEngineersDev/solovision}
}
```
## 🙏 Acknowledgments
- ByteTrack algorithm implementation
- Ultralytics YOLO
- OSNet for ReID features
- BOXMOT
- FastReID
---
<p align="center">Made with ❤️ by Solo</p>