# AS-One v2 : A Modular Library for YOLO Object Detection, Segmentation, Tracking & Pose
<div align="center">
<p>
<a align="center" href="https://badge.fury.io/py/asone" target="_blank">
<img
width="100%"
src="https://kajabi-storefronts-production.kajabi-cdn.com/kajabi-storefronts-production/file-uploads/themes/2151400015/settings_images/747367e-1d78-eead-2a2-7e5b336a775_Screenshot_2024-05-08_at_13.48.08.jpg">
</a>
<a href="https://www.youtube.com/watch?v=K-VcpPwcM8k" style="display:inline-block;padding:10px 20px;background-color:red;color:white;text-decoration:none;font-size:16px;font-weight:bold;border-radius:5px;transition:background-color 0.3s;" target="_blank">Watch Video</a>
</p>
<br>
<br>
[![PyPI version](https://badge.fury.io/py/asone.svg)](https://badge.fury.io/py/asone)
[![python-version](https://img.shields.io/pypi/pyversions/asone)](https://badge.fury.io/py/asone)
[![colab](https://colab.research.google.com/assets/colab-badge.svg)](https://drive.google.com/file/d/1xy5P9WGI19-PzRH3ceOmoCgp63K6J_Ls/view?usp=sharing)
[![version](https://img.shields.io/badge/version-2.0.0-green)](https://github.com/augmentedstartups/AS-One)
[![GPLv3 License](https://img.shields.io/badge/License-GPL%20v3-yellow.svg)](https://opensource.org/licenses/)
</div>
## 👋 Hello
**UPDATE: AS-One v2 is now out! We've added support for YOLOv9 and SAM.**
AS-One is a Python wrapper that brings multiple detection and tracking algorithms together in one place. Trackers such as `ByteTrack`, `DeepSORT`, and `NorFair` can be combined with different versions of `YOLO` in just a few lines of code.
This wrapper provides YOLO models in `ONNX`, `PyTorch`, and `CoreML` flavors, and we plan to support future versions of YOLO as they are released.
This is One Library for most of your computer vision needs.
If you would like to dive deeper into YOLO Object Detection and Tracking, check out our [courses](https://www.augmentedstartups.com/store) and [projects](https://store.augmentedstartups.com).
[<img src="https://s3.amazonaws.com/kajabi-storefronts-production/blogs/22606/images/0FDx83VXSYOY0NAO2kMc_ASOne_Windows_Play.jpg" width="50%">](https://www.youtube.com/watch?v=K-VcpPwcM8k)
Watch the step-by-step tutorial 🤝
## 💻 Install
<details><summary> 🔥 Prerequisites</summary>
- Make sure to install `GPU` drivers on your system if you want to use a `GPU`. Follow the [driver installation](asone/linux/Instructions/Driver-Installations.md) instructions.
- Make sure you have [MS Build tools](https://aka.ms/vs/17/release/vs_BuildTools.exe) installed if you are on Windows.
- [Download Git for Windows](https://git-scm.com/download/win) if it is not already installed.
</details>
```bash
pip install asone
```
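After installing, you can run a quick sanity check to confirm the package imports and that PyTorch can see your GPU (a minimal sketch; `torch` is assumed to have been installed as a dependency):
```python
# Sanity check: confirm asone imports and report whether a CUDA GPU is visible.
import asone  # noqa: F401 -- the import itself verifies the install
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```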
<details>
<summary> 👉 Install from Source</summary>
### 💾 Clone the Repository
Navigate to an empty folder of your choice.
`git clone https://github.com/augmentedstartups/AS-One.git`
Change directory to AS-One:
`cd AS-One`
<details open>
<summary> 👉 For Linux</summary>
```shell
python3 -m venv .env
source .env/bin/activate
pip install -r requirements.txt
# for CPU
pip install torch torchvision
# for GPU
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
```
</details>
<details>
<summary> 👉 For Windows 10/11</summary>
```shell
python -m venv .env
.env\Scripts\activate
pip install numpy Cython
pip install lap
pip install -e git+https://github.com/samson-wang/cython_bbox.git#egg=cython-bbox
pip install asone onnxruntime-gpu==1.12.1
pip install typing_extensions==4.7.1
pip install super-gradients==3.1.3
# for CPU
pip install torch torchvision
# for GPU
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
# or, pinning exact versions:
pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio===0.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
```
</details>
<details>
<summary> 👉 For MacOS</summary>
```shell
python3 -m venv .env
source .env/bin/activate
pip install -r requirements.txt
# for CPU
pip install torch torchvision
```
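On Apple Silicon you can also confirm that PyTorch's Metal (MPS) backend is available; this is an optional check, separate from the CoreML `*_MLMODEL` detector flags described below:
```python
import torch

# True on M1/M2 Macs with a recent PyTorch build
print("MPS available:", torch.backends.mps.is_available())
```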
</details>
</details>
## Quick Start 🏃‍♂️
Use a tracker on a sample video.
```python
import asone
from asone import ASOne
model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, use_cuda=True)
tracks = model.video_tracker('data/sample_videos/test.mp4', filter_classes=['car'])
for model_output in tracks:
    annotations = ASOne.draw(model_output, display=False)
```
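To save the annotated frames to a video file instead of displaying them, here is a minimal sketch with OpenCV, assuming `ASOne.draw` returns the annotated frame as a BGR NumPy array:
```python
import cv2
import asone
from asone import ASOne

model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, use_cuda=True)
tracks = model.video_tracker('data/sample_videos/test.mp4', filter_classes=['car'])

writer = None
for model_output in tracks:
    frame = ASOne.draw(model_output, display=False)  # assumed to return the annotated frame
    if writer is None:
        # Initialize the writer lazily from the first frame's dimensions
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter('output.mp4', cv2.VideoWriter_fourcc(*'mp4v'), 30, (w, h))
    writer.write(frame)
if writer is not None:
    writer.release()
```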
### Run in `Google Colab` 💻
<a href="https://drive.google.com/file/d/1xy5P9WGI19-PzRH3ceOmoCgp63K6J_Ls/view?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
## Sample Code Snippets 📃
<details>
<summary>6.1 👉 Object Detection</summary>
```python
import asone
from asone import ASOne
model = ASOne(detector=asone.YOLOV9_C, use_cuda=True) # Set use_cuda to False for cpu
vid = model.read_video('data/sample_videos/test.mp4')
for img in vid:
    detection = model.detecter(img)
    annotations = ASOne.draw(detection, img=img, display=True)
```
Run `asone/demo_detector.py` to test the detector.
```shell
# run on gpu
python -m asone.demo_detector data/sample_videos/test.mp4
# run on cpu
python -m asone.demo_detector data/sample_videos/test.mp4 --cpu
```
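The detector also works on a single image loaded with OpenCV; a short sketch (the image path is a placeholder, and `ASOne.draw` is assumed to return the annotated image):
```python
import cv2
import asone
from asone import ASOne

model = ASOne(detector=asone.YOLOV9_C, use_cuda=True)

img = cv2.imread('data/sample_imgs/test2.jpg')  # substitute your own image
detection = model.detecter(img)

# Draw without opening a preview window, then save the result.
annotated = ASOne.draw(detection, img=img, display=False)
cv2.imwrite('detection_result.jpg', annotated)
```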
<details>
<summary>6.1.1 👉 Use Custom Trained Weights for Detector</summary>
<!-- ### 6.1.2 Use Custom Trained Weights -->
Use custom weights for a detector model trained on your own data by simply providing the path to the weights file.
```python
import asone
from asone import ASOne
model = ASOne(detector=asone.YOLOV9_C, weights='data/custom_weights/yolov7_custom.pt', use_cuda=True) # Set use_cuda to False for cpu
vid = model.read_video('data/sample_videos/license_video.mp4')
for img in vid:
    detection = model.detecter(img)
    annotations = ASOne.draw(detection, img=img, display=True, class_names=['license_plate'])
```
</details>
<details>
<summary>6.1.2 👉 Changing Detector Models </summary>
Change the detector by simply changing the detector flag. The flags are provided in the [benchmark](asone/linux/Instructions/Benchmarking.md) tables.
- Our library now supports YOLOv5, YOLOv7, and YOLOv8 on macOS.
```python
# Change detector
model = ASOne(detector=asone.YOLOX_S_PYTORCH, use_cuda=True)
# For macOS
# YOLOv5
model = ASOne(detector=asone.YOLOV5X_MLMODEL)
# YOLOv7
model = ASOne(detector=asone.YOLOV7_MLMODEL)
# YOLOv8
model = ASOne(detector=asone.YOLOV8L_MLMODEL)
```
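Because the `*_MLMODEL` flags are meant for macOS and `use_cuda` only helps when a CUDA device is present, you can pick the flag and device at runtime; a small sketch using the flags shown above:
```python
import platform

import torch
import asone
from asone import ASOne

if platform.system() == 'Darwin':
    # CoreML model on macOS (Apple Silicon)
    model = ASOne(detector=asone.YOLOV8L_MLMODEL)
else:
    # Fall back to CPU automatically when no CUDA device is available
    model = ASOne(detector=asone.YOLOV9_C, use_cuda=torch.cuda.is_available())
```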
</details>
</details>
<details>
<summary>6.2 👉 Object Tracking </summary>
Use a tracker on a sample video.
```python
import asone
from asone import ASOne
# Instantiate Asone object
model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, use_cuda=True) #set use_cuda=False to use cpu
tracks = model.video_tracker('data/sample_videos/test.mp4', filter_classes=['car'])
# Loop over track to retrieve outputs of each frame
for model_output in tracks:
    annotations = ASOne.draw(model_output, display=True)
    # Do anything with bboxes here
```
Note: you can use custom weights for a detector model by simply providing the path to the weights file to the `ASOne` class.
<details>
<summary>6.2.1 👉 Changing Detector and Tracking Models</summary>
<!-- ### Changing Detector and Tracking Models -->
Change the tracker by simply changing the tracker flag.
The flags are provided in [benchmark](asone/linux/Instructions/Benchmarking.md) tables.
```python
model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, use_cuda=True)
# Change tracker
model = ASOne(tracker=asone.DEEPSORT, detector=asone.YOLOV9_C, use_cuda=True)
```
```python
# Change Detector
model = ASOne(tracker=asone.DEEPSORT, detector=asone.YOLOX_S_PYTORCH, use_cuda=True)
```
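To compare trackers on the same clip, you can loop over the tracker flags; a brief sketch:
```python
import asone
from asone import ASOne

# Run the same video through two trackers for a side-by-side comparison.
for tracker_flag in (asone.BYTETRACK, asone.DEEPSORT):
    model = ASOne(tracker=tracker_flag, detector=asone.YOLOV9_C, use_cuda=True)
    tracks = model.video_tracker('data/sample_videos/test.mp4', filter_classes=['car'])
    for model_output in tracks:
        ASOne.draw(model_output, display=True)
```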
</details>
Run `asone/demo_tracker.py` to test the tracker.
```shell
# run on gpu
python -m asone.demo_tracker data/sample_videos/test.mp4
# run on cpu
python -m asone.demo_tracker data/sample_videos/test.mp4 --cpu
```
</details>
<details>
<summary>6.3 👉 Segmentation</summary>
```python
import asone
from asone import ASOne
model = ASOne(detector=asone.YOLOV9_C, segmentor=asone.SAM, use_cuda=True) #set use_cuda=False to use cpu
tracks = model.video_detecter('data/sample_videos/test.mp4', filter_classes=['car'])
for model_output in tracks:
    annotations = ASOne.draw_masks(model_output, display=True)  # Draw masks
```
</details>
<details>
<summary>6.4 👉 Text Detection</summary>
Sample code to detect and recognize text in an image:
```python
# Detect and recognize text
import asone
from asone import ASOne, utils
import cv2
model = ASOne(detector=asone.CRAFT, recognizer=asone.EASYOCR, use_cuda=True) # Set use_cuda to False for cpu
img = cv2.imread('data/sample_imgs/sample_text.jpeg')
results = model.detect_text(img)
annotations = utils.draw_text(img, results, display=True)
```
Use a tracker on text:
```python
import asone
from asone import ASOne
# Instantiate Asone object
model = ASOne(tracker=asone.DEEPSORT, detector=asone.CRAFT, recognizer=asone.EASYOCR, use_cuda=True) #set use_cuda=False to use cpu
tracks = model.video_tracker('data/sample_videos/GTA_5-Unique_License_Plate.mp4')
# Loop over track to retrieve outputs of each frame
for model_output in tracks:
    annotations = ASOne.draw(model_output, display=True)
    # Do anything with bboxes here
```
Run `asone/demo_ocr.py` to test OCR.
```shell
# run on gpu
python -m asone.demo_ocr data/sample_videos/GTA_5-Unique_License_Plate.mp4
# run on cpu
python -m asone.demo_ocr data/sample_videos/GTA_5-Unique_License_Plate.mp4 --cpu
```
</details>
<details>
<summary>6.5 👉 Pose Estimation</summary>
Sample code to estimate pose in an image:
```python
# Pose Estimation
import asone
from asone import PoseEstimator, utils
import cv2
model = PoseEstimator(estimator_flag=asone.YOLOV8M_POSE, use_cuda=True) #set use_cuda=False to use cpu
img = cv2.imread('data/sample_imgs/test2.jpg')
kpts = model.estimate_image(img)
annotations = utils.draw_kpts(kpts, image=img, display=True)
```
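To keep the annotated result instead of displaying it, a minimal variant, assuming `utils.draw_kpts` returns the annotated image when `display=False`:
```python
import cv2
import asone
from asone import PoseEstimator, utils

model = PoseEstimator(estimator_flag=asone.YOLOV8M_POSE, use_cuda=True)
img = cv2.imread('data/sample_imgs/test2.jpg')
kpts = model.estimate_image(img)

# The return value is assumed to be the annotated image.
annotated = utils.draw_kpts(kpts, image=img, display=False)
cv2.imwrite('pose_result.jpg', annotated)
```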
- You can now use YOLOv8 and YOLOv7-w6 for pose estimation. The flags are provided in the [benchmark](asone/linux/Instructions/Benchmarking.md) tables.
```python
# Pose Estimation on video
import asone
from asone import PoseEstimator, utils
model = PoseEstimator(estimator_flag=asone.YOLOV7_W6_POSE, use_cuda=True) #set use_cuda=False to use cpu
estimator = model.video_estimator('data/sample_videos/football1.mp4')
for model_output in estimator:
    annotations = utils.draw_kpts(model_output)
    # Do anything with kpts here
```
Run `asone/demo_pose_estimator.py` to test pose estimation.
```shell
# run on gpu
python -m asone.demo_pose_estimator data/sample_videos/football1.mp4
# run on cpu
python -m asone.demo_pose_estimator data/sample_videos/football1.mp4 --cpu
```
</details>
To set up AS-One using Docker, follow the instructions in the [Docker setup](asone/linux/Instructions/Docker-Setup.md) guide 🐳.
### ToDo 📝
- [x] First Release
- [x] Import trained models
- [x] Simplify code even further
- [x] Updated for YOLOv8
- [x] OCR and Counting
- [x] OCSORT, StrongSORT, MoTPy
- [x] M1/2 Apple Silicon Compatibility
- [x] Pose Estimation YOLOv7/v8
- [x] YOLO-NAS
- [x] Updated for YOLOv8.1
- [x] YOLOv9
- [x] SAM Integration
| Offered By 💼 : | Maintained By 👨💻 : |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------- |
| [![AugmentedStartups](https://user-images.githubusercontent.com/107035454/195115263-d3271ef3-973b-40a4-83c8-0ade8727dd40.png)](https://augmentedstartups.com) | [![AxcelerateAI](https://user-images.githubusercontent.com/107035454/195114870-691c8a52-fcf0-462e-9e02-a720fc83b93f.png)](https://axcelerate.ai/) |