| Field | Value |
|-------|-------|
| Name | bluevision |
| Version | 0.0.6 |
| Summary | Bluesignal Vision AI project |
| Upload time | 2024-03-19 15:33:09 |
| Requires Python | >=3.8 |
| Keywords | ai, bluesignal, computer-vision |
<div align=center>
![logo](assets/logo.png)
# bluevision
[![PyPI - Version](https://img.shields.io/pypi/v/bluevision.svg)](https://pypi.org/project/bluevision)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/bluevision.svg)](https://pypi.org/project/bluevision)
</div>
-----
**Table of Contents**
- [Installation](#installation)
- [Usage](#usage)
- [Test](#test)
- [License](#license)
## Installation
```console
pip install bluevision
```
## Usage
### Simple video demo
```python
from bluevision.demo import video_demo
video_demo(
    weights="yolov8s-coco.safetensors",
    video_path="sample.mp4",
    model_size="s",
    track=True,
    show=True,
    save_path="output.mp4",
)
```
### Video Inference Process
```python
import cv2
import supervision as sv
import bluevision as bv
from bluevision.utils import to_supervision_detections, make_labels
# Initialize
detector = bv.solutions.Detector(
    model=bv.solutions.detector.models.Yolov8(size='s'),
    nms=bv.utils.nms.soft_nms,
    weights="yolov8s.safetensors",
)
tracker = bv.utils.tracker.BYTETracker(
    track_thresh=0.15, match_thresh=0.9,
    track_buffer=60, frame_rate=30,
)
box_annotator = sv.BoundingBoxAnnotator(thickness=2)
label_annotator = sv.LabelAnnotator(text_scale=0.5, text_padding=2)

# Load sample video
vid = cv2.VideoCapture('sample.mp4')

# Start
while True:
    ret, original_image = vid.read()
    if not ret:
        break

    # Detect, then update tracks with the new detections
    detections = detector(original_image)
    detections = tracker.update(detections)

    # Draw bboxes using supervision
    sv_detections = to_supervision_detections(detections)
    annotated_frame = box_annotator.annotate(
        scene=original_image,
        detections=sv_detections,
    )
    annotated_frame = label_annotator.annotate(
        scene=annotated_frame,
        detections=sv_detections,
        labels=make_labels(sv_detections),
    )

    cv2.imshow('annotated image', annotated_frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

vid.release()
cv2.destroyAllWindows()
```
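Each frame follows the same detect → track → annotate order, and the tracker carries its state across iterations so IDs persist between frames. A minimal stub-based sketch of that control flow — the detector and tracker below are toy stand-ins for illustration, not bluevision's classes:

```python
# Stub pipeline illustrating the per-frame order: detect -> track -> annotate.
# Both components here are hypothetical stand-ins, not bluevision's API.

def stub_detector(frame):
    # Pretend every frame yields one slowly moving box: (x1, y1, x2, y2, score)
    return [(10 + frame, 10, 50 + frame, 50, 0.9)]

class StubTracker:
    """Assigns a persistent ID to each detection by a naive nearest-box match."""
    def __init__(self):
        self.next_id = 0
        self.tracks = {}  # track id -> last seen box

    def update(self, detections):
        tracked = []
        for box in detections:
            # Reuse an existing track if the box moved less than 20 px
            match = None
            for tid, prev in self.tracks.items():
                if abs(prev[0] - box[0]) < 20:
                    match = tid
                    break
            if match is None:
                match = self.next_id
                self.next_id += 1
            self.tracks[match] = box
            tracked.append((match, box))
        return tracked

tracker = StubTracker()
ids_per_frame = []
for frame in range(3):            # stand-in for the vid.read() loop
    detections = stub_detector(frame)
    tracked = tracker.update(detections)
    ids_per_frame.append([tid for tid, _ in tracked])

print(ids_per_frame)  # the single object keeps ID 0 across all three frames
```

Because `tracker.update` is called once per frame on fresh detections, the same object keeps the same ID; a real tracker such as BYTETracker does the matching with IoU and score thresholds instead of this toy distance check.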
## BlueVision
### Solution
**[Object Detector](https://github.com/dh031200/BlueVision/tree/main/src/bluevision/solutions/detector)**
## Utils
**[NMS](https://github.com/dh031200/BlueVision/tree/main/src/bluevision/utils/nms)**
**[Object Tracker](https://github.com/dh031200/BlueVision/tree/main/src/bluevision/utils/tracker)**
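The `soft_nms` function passed to the detector above decays the scores of overlapping boxes instead of discarding them outright. A pure-Python sketch of Gaussian soft-NMS for illustration — this shows the general algorithm, not bluevision's actual implementation:

```python
import math

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft-NMS: decay overlapping scores by exp(-iou^2 / sigma)."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    scores = list(scores)
    keep = []
    while order:
        best = order.pop(0)          # highest-scoring remaining box
        keep.append(best)
        for i in order:              # soften, don't delete, its overlaps
            scores[i] *= math.exp(-iou(boxes[best], boxes[i]) ** 2 / sigma)
        order = [i for i in order if scores[i] > score_thresh]
        order.sort(key=lambda i: scores[i], reverse=True)
    return keep, scores

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
kept, new_scores = soft_nms(boxes, [0.9, 0.8, 0.7])
```

Here the second box overlaps the first heavily, so its score is decayed (from 0.8 to roughly 0.32) rather than suppressed to zero as hard NMS would do; the distant third box is untouched.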
## Test
```text
$ python test_with_time.py
Using device: mps
preprocess: 0.00225s, infer: 0.015508s, postprocess: 0.01313s, track: 0.00108s, draw: 0.000670s, total: 0.03462s, t-avg: 0.03576s
total frame : 1050
total elapsed time: 56.25610s
total inference time: 37.55273s
```
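The reported `t-avg` line can be cross-checked from the totals: 37.55273 s of inference over 1050 frames is about 0.03576 s per frame, and wall-clock throughput is 1050 / 56.25610 ≈ 18.7 FPS. A quick sanity check:

```python
# Cross-check the timing summary printed by test_with_time.py above.
total_frames = 1050
total_elapsed = 56.25610       # wall-clock seconds for the whole run
total_inference = 37.55273     # seconds spent in model inference

avg_inference = total_inference / total_frames   # matches the reported t-avg
fps = total_frames / total_elapsed               # end-to-end throughput

print(f"avg inference: {avg_inference:.5f}s, throughput: {fps:.1f} FPS")
```

This prints `avg inference: 0.03576s, throughput: 18.7 FPS`, consistent with the log.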
## License
`bluevision` is distributed under the terms of the [MIT](https://spdx.org/licenses/MIT.html) license.