asone 0.3.3 (PyPI)

- Home page: https://github.com/axcelerateai/asone
- Author: AxcelerateAI
- License: BSD 2-clause
- Keywords: asone, bytetrack, deepsort, norfair, yolo, yolox, yolor, yolov5, yolov7, installation, inferencing
- Upload time: 2023-06-11 02:55:10
# AS-One: A Modular Library for YOLO Object Detection and Object Tracking

[<img src="https://kajabi-storefronts-production.kajabi-cdn.com/kajabi-storefronts-production/file-uploads/themes/2151476941/settings_images/65d82-0d84-6171-a7e0-5aa180b657d5_Black_with_Logo.jpg" width="100%">](https://www.youtube.com/watch?v=K-VcpPwcM8k)





#### Table of Contents
1. [Introduction](#1-introduction)
2. [Prerequisites](#2-prerequisites)
3. [Clone the Repo](#3-clone-the-repo)
4. [Installation](#4-installation)
    - [Linux](#4-installation)
    - [Windows 10/11](#4-installation)
    - [macOS](#4-installation)
5. [Running AS-One](#5-running-as-one)
6. [Sample Code Snippets](#6-sample-code-snippets)
7. [Model Zoo](asone/linux/Instructions/Benchmarking.md)

## 1. Introduction
**UPDATE: YOLO-NAS is out!**

AS-One is a Python wrapper that brings multiple detection and tracking algorithms together in one place. Trackers such as `ByteTrack`, `DeepSORT`, and `NorFair` can be combined with different versions of `YOLO` in just a few lines of code.
The wrapper provides YOLO models in `ONNX`, `PyTorch`, and `CoreML` flavors, and we plan to support future versions of YOLO as they are released.

This is one library for most of your computer vision needs.

If you would like to dive deeper into YOLO object detection and tracking, check out our [courses](https://www.augmentedstartups.com/store) and [projects](https://store.augmentedstartups.com).

[<img src="https://s3.amazonaws.com/kajabi-storefronts-production/blogs/22606/images/0FDx83VXSYOY0NAO2kMc_ASOne_Windows_Play.jpg" width="50%">](https://www.youtube.com/watch?v=K-VcpPwcM8k)

Watch the step-by-step tutorial.

## 2. Prerequisites

- Install `GPU` drivers on your system if you want to use a `GPU`. Follow the [driver installation](asone/linux/Instructions/Driver-Installations.md) instructions.
- Make sure you have [MS Build tools](https://aka.ms/vs/17/release/vs_BuildTools.exe) installed on your system if you are using Windows.
- [Download git for Windows](https://git-scm.com/download/win) if it is not already installed.

## 3. Clone the Repo

Navigate to an empty folder of your choice and clone the repository:

```shell
git clone https://github.com/augmentedstartups/AS-One.git
```

Change directory to AS-One:

```shell
cd AS-One
```

## 4. Installation
<details open>
<summary>For Linux</summary>

```shell
python3 -m venv .env
source .env/bin/activate

pip install numpy Cython
pip install cython-bbox asone onnxruntime-gpu==1.12.1
pip install super-gradients==3.1.1
# for CPU
pip install torch torchvision
# for GPU
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
```
</details>

<details>
<summary>For Windows 10/11</summary>

```shell
python -m venv .env
.env\Scripts\activate
pip install numpy Cython 
pip install lap
pip install -e git+https://github.com/samson-wang/cython_bbox.git#egg=cython-bbox

pip install asone onnxruntime-gpu==1.12.1
pip install super-gradients==3.1.1
# for CPU
pip install torch torchvision

# for GPU
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
# or
pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio===0.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
```
</details>
<details>
<summary>For macOS</summary>

```shell
python3 -m venv .env
source .env/bin/activate

pip install numpy Cython
pip install cython-bbox asone
pip install super-gradients==3.1.1
# for CPU
pip install torch torchvision
```
</details>
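
Before moving on, you can sanity-check the environment. This is a minimal sketch: the GPU lines only apply if you installed the GPU variants above, and on macOS (where `onnxruntime` is not installed above) skip the `ort` lines.

```python
import torch
import onnxruntime as ort

# True only if the CUDA build of torch and the GPU drivers are set up correctly
print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())

# With onnxruntime-gpu installed, this list should include 'CUDAExecutionProvider'
print("ONNX Runtime providers:", ort.get_available_providers())
```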

## 5. Running AS-One

Run `main.py` to test the tracker on the `data/sample_videos/test.mp4` video:

```shell
python main.py data/sample_videos/test.mp4
```

### Run in `Google Colab`

 <a href="https://drive.google.com/file/d/1xy5P9WGI19-PzRH3ceOmoCgp63K6J_Ls/view?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>


## 6. Sample Code Snippets
<details>
<summary>6.1. Object Detection</summary>

```python
import asone
from asone import utils
from asone import ASOne
import cv2

video_path = 'data/sample_videos/test.mp4'
detector = ASOne(detector=asone.YOLOV7_PYTORCH, use_cuda=True) # Set use_cuda to False for cpu

filter_classes = ['car'] # Set to None to detect all classes

cap = cv2.VideoCapture(video_path)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    dets, img_info = detector.detect(frame, filter_classes=filter_classes)

    bbox_xyxy = dets[:, :4]
    scores = dets[:, 4]
    class_ids = dets[:, 5]

    frame = utils.draw_boxes(frame, bbox_xyxy, class_ids=class_ids)

    cv2.imshow('result', frame)

    if cv2.waitKey(25) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```

Run `asone/demo_detector.py` to test the detector:

```shell
# run on gpu
python -m asone.demo_detector data/sample_videos/test.mp4

# run on cpu
python -m asone.demo_detector data/sample_videos/test.mp4 --cpu
```
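
Each row of `dets` follows the layout unpacked above: `[x1, y1, x2, y2, score, class_id]`. As a quick illustration (a sketch assuming `dets` behaves as a NumPy array; the 0.5 threshold is illustrative), you can filter out low-confidence detections before drawing:

```python
# Sketch: keep only detections with score > 0.5 (threshold is illustrative)
confident = dets[dets[:, 4] > 0.5]
frame = utils.draw_boxes(frame, confident[:, :4], class_ids=confident[:, 5])
```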

<details>
<summary>6.1.1 Use Custom Trained Weights for Detector</summary>

<!-- ### 6.1.2 Use Custom Trained Weights -->

Use custom-trained weights with a detector by simply providing the path to the weights file.

```python
import asone
from asone import utils
from asone import ASOne
import cv2

video_path = 'data/sample_videos/license_video.webm'
detector = ASOne(detector=asone.YOLOV7_PYTORCH, weights='data/custom_weights/yolov7_custom.pt', use_cuda=True) # Set use_cuda to False for cpu

class_names = ['license_plate'] # your custom classes list

cap = cv2.VideoCapture(video_path)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    dets, img_info = detector.detect(frame)

    bbox_xyxy = dets[:, :4]
    scores = dets[:, 4]
    class_ids = dets[:, 5]

    frame = utils.draw_boxes(frame, bbox_xyxy, class_ids=class_ids, class_names=class_names) # pass your custom class names so they are drawn on the result video

    cv2.imshow('result', frame)

    if cv2.waitKey(25) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```
</details>

<details>
<summary>6.1.2. Changing Detector Models </summary>

Change the detector by simply changing the detector flag. The flags are listed in the [benchmark](asone/linux/Instructions/Benchmarking.md) tables.
* Our library now supports YOLOv5, YOLOv7, and YOLOv8 on macOS.
```python
# Change detector
detector = ASOne(detector=asone.YOLOX_S_PYTORCH, use_cuda=True)

# For macOS
# YOLOv5
detector = ASOne(detector=asone.YOLOV5X_MLMODEL)
# YOLOv7
detector = ASOne(detector=asone.YOLOV7_MLMODEL)
# YOLOv8
detector = ASOne(detector=asone.YOLOV8L_MLMODEL)
```

</details>

</details>

<details>
<summary>6.2. Object Tracking </summary>

Use a tracker on the sample video.

```python
import asone
from asone import ASOne

# Instantiate ASOne object
detect = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV7_PYTORCH, use_cuda=True) #set use_cuda=False to use cpu

filter_classes = ['person'] # set to None to track all classes

# ##############################################
#           To track using video file
# ##############################################
# Get tracking function
track = detect.track_video('data/sample_videos/test.mp4', output_dir='data/results', save_result=True, display=True, filter_classes=filter_classes)

# Loop over track to retrieve outputs of each frame 
for bbox_details, frame_details in track:
    bbox_xyxy, ids, scores, class_ids = bbox_details
    frame, frame_num, fps = frame_details
    # Do anything with bboxes here

# ##############################################
#           To track using webcam
# ##############################################
# Get tracking function
track = detect.track_webcam(cam_id=0, output_dir='data/results', save_result=True, display=True, filter_classes=filter_classes)

# Loop over track to retrieve outputs of each frame 
for bbox_details, frame_details in track:
    bbox_xyxy, ids, scores, class_ids = bbox_details
    frame, frame_num, fps = frame_details
    # Do anything with bboxes here

# ##############################################
#           To track using web stream
# ##############################################
# Get tracking function
stream_url = 'rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mp4'
track = detect.track_stream(stream_url, output_dir='data/results', save_result=True, display=True, filter_classes=filter_classes)

# Loop over track to retrieve outputs of each frame 
for bbox_details, frame_details in track:
    bbox_xyxy, ids, scores, class_ids = bbox_details
    frame, frame_num, fps = frame_details
    # Do anything with bboxes here
```
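
As a minimal example of consuming the tracker output (a sketch reusing the `track_video` call and tuple layout above, and assuming the remaining keyword arguments are optional), you could count the unique track IDs seen in a video:

```python
# Sketch: count unique track IDs across a whole video
unique_ids = set()
for (bbox_xyxy, ids, scores, class_ids), (frame, frame_num, fps) in \
        detect.track_video('data/sample_videos/test.mp4', display=False):
    unique_ids.update(ids)
print(f'Unique objects tracked: {len(unique_ids)}')
```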

[Note] You can use custom weights for a detector model by simply providing the path to the weights file in the `ASOne` class, as sketched below.
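
For instance, reusing the `weights` argument from the custom-weights detection example (the path below is illustrative):

```python
# Sketch: tracker with custom detector weights (path is illustrative)
detect = ASOne(tracker=asone.BYTETRACK,
               detector=asone.YOLOV7_PYTORCH,
               weights='data/custom_weights/yolov7_custom.pt',
               use_cuda=True)
```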

<details>
<summary>6.2.1 Changing Detector and Tracking Models</summary>

<!-- ### Changing Detector and Tracking Models -->

Change the tracker by simply changing the tracker flag. The flags are listed in the [benchmark](asone/linux/Instructions/Benchmarking.md) tables.

```python
detect = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV7_PYTORCH, use_cuda=True)
# Change tracker
detect = ASOne(tracker=asone.DEEPSORT, detector=asone.YOLOV7_PYTORCH, use_cuda=True)
```

```python
# Change Detector
detect = ASOne(tracker=asone.DEEPSORT, detector=asone.YOLOX_S_PYTORCH, use_cuda=True)
```
</details>


Run `asone/demo_detector.py` to test the detector:

```shell
# run on gpu
python -m asone.demo_detector data/sample_videos/test.mp4

# run on cpu
python -m asone.demo_detector data/sample_videos/test.mp4 --cpu
```
</details>
<details>
<summary>6.3. Text Detection</summary>

Sample code to detect text in an image:

```python
# Detect and recognize text
import asone
from asone import utils
from asone import ASOne
import cv2


img_path = 'data/sample_imgs/sample_text.jpeg'
ocr = ASOne(detector=asone.CRAFT, recognizer=asone.EASYOCR, use_cuda=True) # Set use_cuda to False for cpu
img = cv2.imread(img_path)
results = ocr.detect_text(img) 
img = utils.draw_text(img, results)
cv2.imwrite("data/results/results.jpg", img)
```

Use a tracker on text:
```python
import asone
from asone import ASOne

# Instantiate ASOne object
detect = ASOne(tracker=asone.DEEPSORT, detector=asone.CRAFT, recognizer=asone.EASYOCR, use_cuda=True) #set use_cuda=False to use cpu

# ##############################################
#           To track using video file
# ##############################################
# Get tracking function
track = detect.track_video('data/sample_videos/GTA_5-Unique_License_Plate.mp4', output_dir='data/results', save_result=True, display=True)

# Loop over track to retrieve outputs of each frame 
for bbox_details, frame_details in track:
    bbox_xyxy, ids, scores, class_ids = bbox_details
    frame, frame_num, fps = frame_details
    # Do anything with bboxes here
```

Run `asone/demo_ocr.py` to test OCR:

```shell
# run on gpu
python -m asone.demo_ocr data/sample_videos/GTA_5-Unique_License_Plate.mp4

# run on cpu
python -m asone.demo_ocr data/sample_videos/GTA_5-Unique_License_Plate.mp4 --cpu
```

</details>

<details>
<summary>6.4. Pose Estimation</summary>

Sample code to estimate pose on an image:

```python
# Pose Estimation
import asone
from asone import utils
from asone import PoseEstimator
import cv2

img_path = 'data/sample_imgs/test2.jpg'
pose_estimator = PoseEstimator(estimator_flag=asone.YOLOV8M_POSE, use_cuda=True) #set use_cuda=False to use cpu
img = cv2.imread(img_path)
kpts = pose_estimator.estimate_image(img) 
img = utils.draw_kpts(img, kpts)
cv2.imwrite("data/results/results.jpg", img)
```
* You can now use YOLOv8 and YOLOv7-w6 for pose estimation. The flags are provided in the [benchmark](asone/linux/Instructions/Benchmarking.md) tables.

```python
# Pose Estimation on video
import asone
from asone import PoseEstimator

video_path = 'data/sample_videos/football1.mp4'
pose_estimator = PoseEstimator(estimator_flag=asone.YOLOV7_W6_POSE, use_cuda=True) #set use_cuda=False to use cpu
estimator = pose_estimator.estimate_video(video_path, save=True, display=True)
for kpts, frame_details in estimator:
    frame, frame_num, fps = frame_details
    print(frame_num)
    # Do anything with kpts here
```
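
If you want to render keypoints yourself instead of relying on `save`/`display`, the `utils.draw_kpts` helper from the image example can presumably be applied per frame (a sketch under that assumption):

```python
# Sketch: draw keypoints on each frame and save an occasional frame to disk
import cv2
from asone import utils

for kpts, (frame, frame_num, fps) in pose_estimator.estimate_video(video_path, save=False, display=False):
    frame = utils.draw_kpts(frame, kpts)
    if frame_num % 30 == 0:  # roughly once per second for a 30-fps video
        cv2.imwrite(f'data/results/frame_{frame_num}.jpg', frame)
```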

Run `asone/demo_pose_estimator.py` to test pose estimation:

```shell
# run on gpu
python -m asone.demo_pose_estimator data/sample_videos/football1.mp4

# run on cpu
python -m asone.demo_pose_estimator data/sample_videos/football1.mp4 --cpu
```

</details>

To set up AS-One using Docker, follow the instructions in [docker setup](asone/linux/Instructions/Docker-Setup.md).

# ToDo
- [x] First Release
- [x] Import trained models
- [x] Simplify code even further
- [x] Updated for YOLOv8
- [x] OCR and Counting
- [x] OCSORT, StrongSORT, MoTPy
- [x] M1/2 Apple Silicon Compatibility
- [x] Pose Estimation YOLOv7/v8
- [x] YOLO-NAS
- [ ] SAM Integration

|Offered By: |Maintained By:|
|-------------|-------------|
|[![AugmentedStartups](https://user-images.githubusercontent.com/107035454/195115263-d3271ef3-973b-40a4-83c8-0ade8727dd40.png)](https://augmentedstartups.com)|[![AxcelerateAI](https://user-images.githubusercontent.com/107035454/195114870-691c8a52-fcf0-462e-9e02-a720fc83b93f.png)](https://axcelerate.ai/)|



            
