# MetaVision
[notebooks](https://github.com/khulnasoft/notebooks) | [inference](https://github.com/khulnasoft/inference) | [autodistill](https://github.com/autodistill/autodistill) | [maestro](https://github.com/khulnasoft/multimodal-maestro)
<br>
[![version](https://badge.fury.io/py/superverse.svg)](https://badge.fury.io/py/superverse)
[![downloads](https://img.shields.io/pypi/dm/superverse)](https://pypistats.org/packages/superverse)
[![snyk](https://snyk.io/advisor/python/superverse/badge.svg)](https://snyk.io/advisor/python/superverse)
[![license](https://img.shields.io/pypi/l/superverse)](https://github.com/khulnasoft/superverse/blob/main/LICENSE.md)
[![python-version](https://img.shields.io/pypi/pyversions/superverse)](https://badge.fury.io/py/superverse)
[![colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/khulnasoft/superverse/blob/main/demo.ipynb)
[![gradio](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Khulnasoft/Annotators)
[![discord](https://img.shields.io/discord/1159501506232451173?logo=discord&label=discord&labelColor=fff&color=5865f2&link=https%3A%2F%2Fdiscord.gg%2FGbfgXGJ8Bk)](https://discord.gg/GbfgXGJ8Bk)
[![built-with-material-for-mkdocs](https://img.shields.io/badge/Material_for_MkDocs-526CFE?logo=MaterialForMkDocs&logoColor=white)](https://squidfunk.github.io/mkdocs-material/)
## 👋 hello
**We write your reusable computer vision tools.** Whether you need to load a dataset from your hard drive, draw detections on an image or video, or count how many detections fall within a zone, you can count on us! 🤝
## 💻 install
Install the superverse package with pip in a
[**Python>=3.8**](https://www.python.org/) environment.
```bash
pip install superverse
```
Read more about conda, mamba, and installing from source in our [guide](https://khulnasoft.github.io/superverse/).
## 🔥 quickstart
### models
Superverse was designed to be model agnostic. Just plug in any classification, detection, or segmentation model. For your convenience, we have created [connectors](https://superverse.khulnasoft.com/latest/detection/core/#detections) for the most popular libraries, such as Ultralytics, Transformers, and MMDetection.
```python
import cv2
import superverse as sv
from ultralytics import YOLO

image = cv2.imread(...)
model = YOLO("yolov8s.pt")
result = model(image)[0]
detections = sv.Detections.from_ultralytics(result)

len(detections)
# 5
```
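Detections-style objects are typically filtered with boolean masks (e.g. keeping only boxes above a confidence threshold). A dependency-free sketch of that pattern, using made-up boxes and scores rather than real model output:

```python
# Hypothetical detection results: boxes in xyxy format with confidence scores.
detections = [
    {"xyxy": (10, 10, 50, 50), "confidence": 0.92, "class_name": "person"},
    {"xyxy": (20, 30, 80, 90), "confidence": 0.41, "class_name": "dog"},
    {"xyxy": (5, 5, 15, 15), "confidence": 0.78, "class_name": "person"},
]

# Keep only confident detections -- the same filtering pattern
# mask-based Detections APIs expose.
confident = [d for d in detections if d["confidence"] > 0.5]

print(len(confident))  # 2
```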
<details>
<summary>👉 more model connectors</summary>
- inference
```python
import cv2
import superverse as sv
from inference import get_model

image = cv2.imread(...)
model = get_model(model_id="yolov8s-640", api_key=<KHULNASOFT API KEY>)
result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)

len(detections)
# 5
```
</details>
### annotators
```python
import cv2
import superverse as sv

image = cv2.imread(...)
detections = sv.Detections(...)

box_annotator = sv.BoxAnnotator()
annotated_frame = box_annotator.annotate(
    scene=image.copy(),
    detections=detections)
```
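Under the hood, a box annotator rasterizes each detection's rectangle onto a copy of the frame (hence `image.copy()` above, which leaves the original untouched). A dependency-free sketch of that idea on a tiny grayscale "image" represented as a list of rows, with an illustrative `draw_box` helper that is not the superverse API:

```python
def draw_box(image, xyxy, value=255):
    """Draw a 1-pixel rectangle outline onto a 2D image (list of rows)."""
    x1, y1, x2, y2 = xyxy
    for x in range(x1, x2 + 1):   # top and bottom edges
        image[y1][x] = value
        image[y2][x] = value
    for y in range(y1, y2 + 1):   # left and right edges
        image[y][x1] = value
        image[y][x2] = value
    return image

# 8x8 black image, one detection box.
image = [[0] * 8 for _ in range(8)]
annotated = draw_box([row[:] for row in image], (1, 1, 5, 5))

print(annotated[1][3])  # 255 (on the top edge)
print(image[1][3])      # 0   (original left untouched, like image.copy())
```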
### datasets
```python
import superverse as sv
from khulnasoft import Khulnasoft

project = Khulnasoft().workspace(<WORKSPACE_ID>).project(<PROJECT_ID>)
dataset = project.version(<PROJECT_VERSION>).download("coco")

ds = sv.DetectionDataset.from_coco(
    images_directory_path=f"{dataset.location}/train",
    annotations_path=f"{dataset.location}/train/_annotations.coco.json",
)

path, image, annotation = ds[0]  # loads a single image on demand

for path, image, annotation in ds:
    ...  # images are loaded on demand during iteration
```
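The on-demand behavior above means the dataset holds only paths and annotations in memory, deferring each expensive image read until an item is actually requested. The pattern can be sketched with a plain class whose `__getitem__` does the loading (names here are illustrative, not the superverse internals):

```python
class LazyDataset:
    """Holds only paths; 'loads' an image when an item is requested."""

    def __init__(self, paths, annotations):
        self.paths = paths
        self.annotations = annotations
        self.loads = 0  # count how many reads actually happened

    def _read_image(self, path):
        self.loads += 1              # stand-in for cv2.imread(path)
        return f"pixels-of-{path}"

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        # Nothing is read until this point.
        return self.paths[i], self._read_image(self.paths[i]), self.annotations[i]

ds = LazyDataset(["a.jpg", "b.jpg", "c.jpg"], ["ann-a", "ann-b", "ann-c"])
path, image, annotation = ds[0]  # only now is the first image read

print(ds.loads)  # 1
```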
<details close>
<summary>👉 more dataset utils</summary>
- load
```python
dataset = sv.DetectionDataset.from_yolo(
    images_directory_path=...,
    annotations_directory_path=...,
    data_yaml_path=...
)

dataset = sv.DetectionDataset.from_pascal_voc(
    images_directory_path=...,
    annotations_directory_path=...
)

dataset = sv.DetectionDataset.from_coco(
    images_directory_path=...,
    annotations_path=...
)
```
- split
```python
train_dataset, test_dataset = dataset.split(split_ratio=0.7)
test_dataset, valid_dataset = test_dataset.split(split_ratio=0.5)

len(train_dataset), len(test_dataset), len(valid_dataset)
# (700, 150, 150)
```
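The two-step split yields 70/15/15 train/test/valid portions. The arithmetic behind the `(700, 150, 150)` result, sketched in plain Python for a hypothetical 1000-image dataset (`round` guards against floating-point products like `1000 * 0.7` landing a hair below 700):

```python
total = 1000

train = round(total * 0.7)      # first split keeps 70% for training
remainder = total - train       # 300 images left over
test = round(remainder * 0.5)   # second split halves the remainder
valid = remainder - test

print(train, test, valid)  # 700 150 150
```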
- merge
```python
ds_1 = sv.DetectionDataset(...)
len(ds_1)
# 100
ds_1.classes
# ['dog', 'person']

ds_2 = sv.DetectionDataset(...)
len(ds_2)
# 200
ds_2.classes
# ['cat']

ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
len(ds_merged)
# 300
ds_merged.classes
# ['cat', 'dog', 'person']
```
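Merging has to combine the class lists (de-duplicated and sorted, matching the `['cat', 'dog', 'person']` output above) and remap each source dataset's class ids into the merged list. A hedged sketch of that bookkeeping, with an illustrative helper rather than the superverse implementation:

```python
def merge_classes(class_lists):
    """Combine per-dataset class lists into one sorted list plus id remaps."""
    merged = sorted(set(name for names in class_lists for name in names))
    index = {name: i for i, name in enumerate(merged)}
    # For each source dataset: old class id -> merged class id.
    remaps = [[index[name] for name in names] for names in class_lists]
    return merged, remaps

merged, remaps = merge_classes([["dog", "person"], ["cat"]])

print(merged)     # ['cat', 'dog', 'person']
print(remaps[0])  # [1, 2]  (dog -> 1, person -> 2 in the merged list)
print(remaps[1])  # [0]     (cat -> 0)
```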
- save
```python
dataset.as_yolo(
    images_directory_path=...,
    annotations_directory_path=...,
    data_yaml_path=...
)

dataset.as_pascal_voc(
    images_directory_path=...,
    annotations_directory_path=...
)

dataset.as_coco(
    images_directory_path=...,
    annotations_path=...
)
```
- convert
```python
sv.DetectionDataset.from_yolo(
    images_directory_path=...,
    annotations_directory_path=...,
    data_yaml_path=...
).as_pascal_voc(
    images_directory_path=...,
    annotations_directory_path=...
)
```
</details>
<br/>
## 📚 documentation
Visit our [documentation](https://khulnasoft.github.io/superverse) page to learn how superverse can help you build computer vision applications faster and more reliably.
## 🏆 contribution
We love your input! Please see our [contributing guide](https://github.com/khulnasoft/superverse/blob/main/CONTRIBUTING.md) to get started. Thank you 🙏 to all our contributors!
<p align="center">
<a href="https://github.com/khulnasoft/superverse/graphs/contributors">
<img src="https://contrib.rocks/image?repo=khulnasoft/superverse" />
</a>
</p>