## 👋 hello
**We write reusable computer vision tools.** Whether you need to load a dataset from your hard drive, draw detections on an image or video, or count how many detections fall within a zone, you can count on us! 🤝
## 💻 install
Pip install the wow-ai-vision package in a
[**Python>=3.8,<3.12**](https://www.python.org/) environment.
```bash
pip install wow-ai-vision[desktop]
```
Read more about desktop, headless, and local installation in our [guide](https://).
## 🔥 quickstart
### [detections processing](https://)
```python
>>> import wow_ai_vision as sv
>>> from ultralytics import YOLO
>>> model = YOLO('yolov8s.pt')
>>> result = model(IMAGE)[0]
>>> detections = sv.Detections.from_ultralytics(result)
>>> len(detections)
5
```
<details close>
<summary>👉 more detections utils</summary>
- Easily switch inference pipeline between supported object detection/instance segmentation models
```python
>>> import wow_ai_vision as sv
>>> from segment_anything import sam_model_registry, SamAutomaticMaskGenerator
>>> sam = sam_model_registry[MODEL_TYPE](checkpoint=CHECKPOINT_PATH).to(device=DEVICE)
>>> mask_generator = SamAutomaticMaskGenerator(sam)
>>> sam_result = mask_generator.generate(IMAGE)
>>> detections = sv.Detections.from_sam(sam_result=sam_result)
```
- [Advanced filtering](https://)
```python
>>> detections = detections[detections.class_id == 0]
>>> detections = detections[detections.confidence > 0.5]
>>> detections = detections[detections.area > 1000]
```
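Each condition above produces a NumPy-style boolean mask, so conditions can also be combined into a single indexing step with `&`. A minimal sketch of the masking mechanics using plain NumPy arrays (the `class_id`, `confidence`, and `xyxy` names mirror the fields used above; the values are made up):

```python
import numpy as np

# Mock detection fields, shaped like the attributes used above.
class_id = np.array([0, 0, 1, 0])
confidence = np.array([0.9, 0.4, 0.8, 0.7])
xyxy = np.array([[0, 0, 50, 50], [0, 0, 10, 10], [0, 0, 100, 100], [0, 0, 40, 30]])
area = (xyxy[:, 2] - xyxy[:, 0]) * (xyxy[:, 3] - xyxy[:, 1])

# Combine all three conditions in one mask instead of chaining three indexing calls.
mask = (class_id == 0) & (confidence > 0.5) & (area > 1000)
print(mask)  # [ True False False  True]
```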
- Image annotation
```python
>>> import wow_ai_vision as sv
>>> box_annotator = sv.BoxAnnotator()
>>> annotated_frame = box_annotator.annotate(
... scene=IMAGE,
... detections=detections
... )
```
</details>
### [datasets processing](https://)
```python
>>> import wow_ai_vision as sv
>>> dataset = sv.DetectionDataset.from_yolo(
... images_directory_path='...',
... annotations_directory_path='...',
... data_yaml_path='...'
... )
>>> dataset.classes
['dog', 'person']
>>> len(dataset)
1000
```
<details close>
<summary>👉 more dataset utils</summary>
- Load object detection/instance segmentation datasets in one of the supported formats
```python
>>> dataset = sv.DetectionDataset.from_yolo(
... images_directory_path='...',
... annotations_directory_path='...',
... data_yaml_path='...'
... )
>>> dataset = sv.DetectionDataset.from_pascal_voc(
... images_directory_path='...',
... annotations_directory_path='...'
... )
>>> dataset = sv.DetectionDataset.from_coco(
... images_directory_path='...',
... annotations_path='...'
... )
```
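In the YOLO format, the files under `annotations_directory_path` store one object per line as `class_id x_center y_center width height`, with coordinates normalized to [0, 1]. A quick sketch of converting one such line to pixel-space `xyxy` (the 640x480 image size is an arbitrary example, and the helper is illustrative, not part of the library):

```python
def yolo_line_to_xyxy(line: str, img_w: int, img_h: int) -> tuple:
    """Convert one YOLO-format label line to (class_id, x1, y1, x2, y2) in pixels."""
    class_id, xc, yc, w, h = line.split()
    # Scale normalized center/size values up to pixel coordinates.
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    return int(class_id), xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2

print(yolo_line_to_xyxy("0 0.5 0.5 0.25 0.5", img_w=640, img_h=480))
# (0, 240.0, 120.0, 400.0, 360.0)
```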
- Loop over dataset entries
```python
>>> for name, image, labels in dataset:
... print(labels.xyxy)
array([[404. , 719. , 538. , 884.5 ],
[155. , 497. , 404. , 833.5 ],
[ 20.154999, 347.825 , 416.125 , 915.895 ]], dtype=float32)
```
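The `xyxy` array printed above stores one `[x1, y1, x2, y2]` box per row, so per-box widths and heights fall out of column arithmetic. A small sketch using the first two rows of the output above:

```python
import numpy as np

xyxy = np.array([[404.0, 719.0, 538.0, 884.5],
                 [155.0, 497.0, 404.0, 833.5]])
# Subtract the top-left columns from the bottom-right columns: one (w, h) per box.
wh = xyxy[:, 2:4] - xyxy[:, 0:2]
print(wh)  # [[134.  165.5] [249.  336.5]]
```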
- Split dataset for training, testing, and validation
```python
>>> train_dataset, test_dataset = dataset.split(split_ratio=0.7)
>>> test_dataset, valid_dataset = test_dataset.split(split_ratio=0.5)
>>> len(train_dataset), len(test_dataset), len(valid_dataset)
(700, 150, 150)
```
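The chained calls above are why `split_ratio=0.7` followed by `split_ratio=0.5` yields a 70/15/15 partition: the second split halves only the remaining 30%. The arithmetic, sketched without the library:

```python
def two_step_split(n: int, first_ratio: float = 0.7, second_ratio: float = 0.5):
    """Mirror the chained dataset.split() calls: take train first, then halve the rest."""
    train = int(n * first_ratio)
    rest = n - train
    test = int(rest * second_ratio)
    valid = rest - test
    return train, test, valid

print(two_step_split(1000))  # (700, 150, 150)
```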
- Merge multiple datasets
```python
>>> ds_1 = sv.DetectionDataset(...)
>>> len(ds_1)
100
>>> ds_1.classes
['dog', 'person']
>>> ds_2 = sv.DetectionDataset(...)
>>> len(ds_2)
200
>>> ds_2.classes
['cat']
>>> ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
>>> len(ds_merged)
300
>>> ds_merged.classes
['cat', 'dog', 'person']
```
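Merging requires unifying the class lists and remapping each dataset's class IDs into the merged list, which is why `ds_merged.classes` above comes back as the sorted union `['cat', 'dog', 'person']`. A sketch of that remapping logic — an assumption about how merging behaves, not the library's actual code:

```python
def merge_classes(class_lists):
    """Build a sorted union of class names plus per-dataset old-id -> new-id maps."""
    merged = sorted(set(name for classes in class_lists for name in classes))
    remaps = [
        {old_id: merged.index(name) for old_id, name in enumerate(classes)}
        for classes in class_lists
    ]
    return merged, remaps

merged, remaps = merge_classes([['dog', 'person'], ['cat']])
print(merged)  # ['cat', 'dog', 'person']
print(remaps)  # [{0: 1, 1: 2}, {0: 0}]
```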
- Save object detection/instance segmentation datasets in one of the supported formats
```python
>>> dataset.as_yolo(
... images_directory_path='...',
... annotations_directory_path='...',
... data_yaml_path='...'
... )
>>> dataset.as_pascal_voc(
... images_directory_path='...',
... annotations_directory_path='...'
... )
>>> dataset.as_coco(
... images_directory_path='...',
... annotations_path='...'
... )
```
- Convert labels between supported formats
```python
>>> sv.DetectionDataset.from_yolo(
... images_directory_path='...',
... annotations_directory_path='...',
... data_yaml_path='...'
... ).as_pascal_voc(
... images_directory_path='...',
... annotations_directory_path='...'
... )
```
- Load classification datasets in one of the supported formats
```python
>>> cs = sv.ClassificationDataset.from_folder_structure(
... root_directory_path='...'
... )
```
- Save classification datasets in one of the supported formats
```python
>>> cs.as_folder_structure(
... root_directory_path='...'
... )
```
</details>
### [model evaluation](https://)
```python
>>> import numpy as np
>>> import wow_ai_vision as sv
>>> dataset = sv.DetectionDataset.from_yolo(...)
>>> def callback(image: np.ndarray) -> sv.Detections:
... ...
>>> confusion_matrix = sv.ConfusionMatrix.benchmark(
... dataset = dataset,
... callback = callback
... )
>>> confusion_matrix.matrix
array([
[0., 0., 0., 0.],
[0., 1., 0., 1.],
[0., 1., 1., 0.],
[1., 1., 0., 0.]
])
```
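A detection confusion matrix is indexed by (true class, predicted class), with one extra row and column acting as a "background" slot for missed detections and spurious boxes — which is why the example above produces a 4x4 matrix for 3 classes. A simplified sketch of how matched pairs fill such a matrix (the library matches boxes by IoU before counting; this skips that step):

```python
import numpy as np

def confusion_matrix(true_ids, pred_ids, num_classes):
    """Count (true, predicted) pairs; index num_classes is the background slot."""
    m = np.zeros((num_classes + 1, num_classes + 1))
    for t, p in zip(true_ids, pred_ids):
        m[t, p] += 1
    return m

# 2 real classes; class id 2 stands for background (missed / spurious boxes).
m = confusion_matrix(true_ids=[0, 1, 1, 2], pred_ids=[0, 1, 2, 0], num_classes=2)
print(m)
```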
<details close>
<summary>👉 more metrics</summary>
- Mean average precision (mAP) for object detection tasks.
```python
>>> import numpy as np
>>> import wow_ai_vision as sv
>>> dataset = sv.DetectionDataset.from_yolo(...)
>>> def callback(image: np.ndarray) -> sv.Detections:
... ...
>>> mean_average_precision = sv.MeanAveragePrecision.benchmark(
... dataset = dataset,
... callback = callback
... )
>>> mean_average_precision.map50_95
0.433
```
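mAP averages per-class average precision over all classes, and `map50_95` additionally averages over IoU thresholds from 0.50 to 0.95. A minimal sketch of the core average-precision step for one ranked list of predictions (simplified to precision averaged at each true positive, not the COCO-style interpolated integral):

```python
def average_precision(hits):
    """hits: 1/0 flags for predictions ranked by descending confidence."""
    precisions = []
    true_positives = 0
    for rank, hit in enumerate(hits, start=1):
        if hit:
            true_positives += 1
            # Precision at this rank, recorded only at each true positive.
            precisions.append(true_positives / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

print(average_precision([1, 0, 1, 1]))  # ≈ 0.806
```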
</details>
## 🎬 tutorials
## 💜 built with wow-ai-vision
## 📚 documentation
## 🏆 contribution