OurCustomPkg: YOLOv7-based Toy Car Detection
============================================
Welcome to **OurCustomPkg**, a Python package for detecting toy cars with the YOLOv7 model. The package also integrates hand tracking via MediaPipe, enabling interactive, gesture-driven detection sessions.
Features
--------
* **YOLOv7 Integration:** Utilize the state-of-the-art YOLOv7 model for accurate and efficient toy car detection.
* **Multi-Source Input:** Supports images, video files, and real-time webcam feeds as input sources.
* **Hand Tracking:** Employ MediaPipe's hand tracking to interact with detected objects in real-time.
* **Highly Customizable:** Easily adjust detection parameters and extend functionalities according to your project needs.
Installation
------------
To get started with **OurCustomPkg**, you can install it directly from PyPI:
    pip install ourcustompkg
This command will install the package along with all the necessary dependencies, including PyTorch, OpenCV, and MediaPipe.
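Once installed, you can sanity-check the setup by importing the package from the command line:

    python -c "import ourcustompkg"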
Getting Started
---------------
### Basic Usage
The primary script for toy car detection is `detect_car.py`, located in the `ourcustompkg/yolov7/` directory. Here's how to use it:
    python -m ourcustompkg.yolov7.detect_car --source <input_source> --weights <path_to_weights>
#### Example Commands
**Detect cars in an image:**
    python -m ourcustompkg.yolov7.detect_car --source data/images/car.jpg --weights yolov7.pt
**Detect cars from a video file:**
    python -m ourcustompkg.yolov7.detect_car --source data/videos/car_video.mp4 --weights yolov7.pt
**Real-time detection using a webcam:**
    python -m ourcustompkg.yolov7.detect_car --source 0 --weights yolov7.pt
### Hand Tracking Interaction
One of the standout features of **OurCustomPkg** is its integration of hand-tracking functionality using MediaPipe. When running the `detect_car.py` script, you can interact with detected cars using hand gestures tracked by your webcam.
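The hand-tracking logic inside `detect_car.py` is not shown here, but the underlying MediaPipe Hands API works roughly as in the following standalone sketch (independent of OurCustomPkg, using only the public `mediapipe` and `opencv-python` APIs):

    # Standalone MediaPipe hand-tracking sketch (independent of OurCustomPkg).
    # Requires: pip install mediapipe opencv-python
    import cv2
    import mediapipe as mp

    mp_hands = mp.solutions.hands
    mp_draw = mp.solutions.drawing_utils

    cap = cv2.VideoCapture(0)  # webcam, same "--source 0" convention as detect_car.py
    with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV captures frames in BGR
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                for hand_landmarks in results.multi_hand_landmarks:
                    # Example: read the index fingertip, which a detector could
                    # test against a car's bounding box to register an interaction.
                    tip = hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP]
                    print(f"index fingertip at ({tip.x:.2f}, {tip.y:.2f})")
                    mp_draw.draw_landmarks(frame, hand_landmarks, mp_hands.HAND_CONNECTIONS)
            cv2.imshow("hands", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()
    cv2.destroyAllWindows()

How OurCustomPkg maps gestures to detected cars is handled inside the package; the sketch only shows where the hand landmark coordinates come from.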
### Command-Line Arguments
* `--source`: Specifies the input source, which can be an image file, a video file, or a webcam index (e.g. `0`).
* `--weights`: Path to the YOLOv7 weights file. You can download pretrained weights from the official YOLOv7 repository.
* `--img-size`: Sets the size of the input image for detection (default: 640).
* `--conf-thres`: Confidence threshold for filtering weak detections (default: 0.25).
* `--iou-thres`: IoU threshold for non-maximum suppression (default: 0.45).
* `--device`: Specifies the device to run the model on (`cpu` or `cuda`).
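For reference, the flags above correspond to a fairly standard `argparse` setup. The sketch below mirrors the documented flags and defaults; it is illustrative only, not the package's actual parser:

    # Illustrative argparse sketch mirroring the documented flags and defaults;
    # the real detect_car.py parser may differ.
    import argparse

    def build_parser() -> argparse.ArgumentParser:
        parser = argparse.ArgumentParser(description="Toy car detection (sketch)")
        parser.add_argument("--source", type=str, default="0",
                            help="image file, video file, or webcam index")
        parser.add_argument("--weights", type=str, default="yolov7.pt",
                            help="path to the YOLOv7 weights file")
        parser.add_argument("--img-size", type=int, default=640,
                            help="input image size for detection")
        parser.add_argument("--conf-thres", type=float, default=0.25,
                            help="confidence threshold for filtering weak detections")
        parser.add_argument("--iou-thres", type=float, default=0.45,
                            help="IoU threshold for non-maximum suppression")
        parser.add_argument("--device", type=str, default="cpu",
                            help="device to run the model on: 'cpu' or 'cuda'")
        return parser

    if __name__ == "__main__":
        print(vars(build_parser().parse_args()))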
### Advanced Usage
For advanced users, the `detect_car.py` script offers additional options to customize your detection pipeline:
* Save detection results to a specific directory.
* Toggle between different models or weight files.
* Modify input processing techniques or adjust the output format.
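As an example of the first point, if `detect_car.py` keeps the upstream YOLOv7 `detect.py` interface, saving results to a custom directory could look like the command below; note that the `--project` and `--name` flags are an assumption borrowed from upstream YOLOv7 and are not documented above:

    python -m ourcustompkg.yolov7.detect_car --source data/videos/car_video.mp4 --weights yolov7.pt --project runs/detect --name toy_cars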
Documentation
-------------
For more detailed information, including how to extend the package or contribute to its development, please visit our [official documentation](https://pypi.org/project/ourcustompkg/).
Contributing
------------
We welcome contributions to **OurCustomPkg**! If you have ideas for new features, enhancements, or bug fixes, please open an issue or submit a pull request on our [GitHub repository](https://github.com/NeuroLeapTeam/gesture_recognition).
License
-------
This project is licensed under the MIT License. You can view the full license [here](https://github.com/NeuroLeapTeam/gesture_recognition/blob/main/LICENSE).
Acknowledgments
---------------
* **YOLOv7**: Our package is built upon the innovative YOLOv7 model, which has significantly advanced the field of real-time object detection.
* **MediaPipe**: MediaPipe's hand tracking technology has enabled us to create a more interactive and user-friendly detection experience.
Contact
-------
For any questions, feedback, or support, feel free to reach out to the authors:
* Brandon: [brandon@neuroleapmail.com](mailto:brandon@neuroleapmail.com)
* Moshiur: [moshiur@neuroleapmail.com](mailto:moshiur@neuroleapmail.com)
* * *
Thank you for using **OurCustomPkg**! We hope it serves your toy car detection needs and inspires further innovation in your projects.
Raw data
{
"_id": null,
"home_page": "https://github.com/NeuroLeapTeam/gesture_recognition.git",
"name": "ourcustompkg",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.6",
"maintainer_email": null,
"keywords": "YOLOv7, toy car detection, dataset management, computer vision, deep learning",
"author": "Brandon & Moshiur",
"author_email": "brandon@neuroleapmail.com, moshiur@neuroleapmail.com",
"download_url": "https://files.pythonhosted.org/packages/e2/2a/fe8ace97dda6bb35bf05b39c7b0e94c660e0544a5f812abcc267de22f369/ourcustompkg-0.14.tar.gz",
"platform": null,
"description": "\nOurCustomPkg: YOLOv7-based Toy Car Detection\n============================================\n\nWelcome to **OurCustomPkg**, a cutting-edge Python package designed for the detection of toy cars using the powerful YOLOv7 model. Additionally, the package integrates hand-tracking capabilities via MediaPipe, allowing for interactive and dynamic detection experiences.\n\nFeatures\n--------\n\n* **YOLOv7 Integration:** Utilize the state-of-the-art YOLOv7 model for accurate and efficient toy car detection.\n* **Multi-Source Input:** Supports images, video files, and real-time webcam feeds as input sources.\n* **Hand Tracking:** Employ MediaPipe's hand tracking to interact with detected objects in real-time.\n* **Highly Customizable:** Easily adjust detection parameters and extend functionalities according to your project needs.\n\nInstallation\n------------\n\nTo get started with **OurCustomPkg**, you can install it directly from PyPI:\n\n pip install ourcustompkg\n\nThis command will install the package along with all the necessary dependencies, including PyTorch, OpenCV, and MediaPipe.\n\nGetting Started\n---------------\n\n### Basic Usage\n\nThe primary script for toy car detection is `detect_car.py`, located in the `ourcustompkg/yolov7/` directory. Here's how to use it:\n\n python -m ourcustompkg.yolov7.detect_car --source <input_source> --weights <path_to_weights>\n\n#### Example Commands\n\n**Detect cars in an image:**\n\n python -m ourcustompkg.yolov7.detect_car --source data/images/car.jpg --weights yolov7.pt\n\n**Detect cars from a video file:**\n\n python -m ourcustompkg.yolov7.detect_car --source data/videos/car_video.mp4 --weights yolov7.pt\n\n**Real-time detection using a webcam:**\n\n python -m ourcustompkg.yolov7.detect_car --source 0 --weights yolov7.pt\n\n### Hand Tracking Interaction\n\nOne of the standout features of **OurCustomPkg** is its integration of hand-tracking functionality using MediaPipe. When running the `detect_car.py` script, you can interact with detected cars using hand gestures tracked by your webcam.\n\n### Command-Line Arguments\n\n* `--source`: Specifies the input source, which can be an image file, video file, or webcam feed.\n* `--weights`: Path to the YOLOv7 weights file. You can download pretrained weights from the official YOLOv7 repository.\n* `--img-size`: Sets the size of the input image for detection (default: 640).\n* `--conf-thres`: Confidence threshold for filtering weak detections (default: 0.25).\n* `--iou-thres`: IoU threshold for non-maximum suppression (default: 0.45).\n* `--device`: Specifies the device to run the model on (`cpu` or `cuda`).\n\n### Advanced Usage\n\nFor advanced users, the `detect_car.py` script offers additional options to customize your detection pipeline:\n\n* Save detection results to a specific directory.\n* Toggle between different models or weight files.\n* Modify input processing techniques or adjust the output format.\n\nDocumentation\n-------------\n\nFor more detailed information, including how to extend the package or contribute to its development, please visit our [official documentation](https://pypi.org/project/ourcustompkg/).\n\nContributing\n------------\n\nWe welcome contributions to **OurCustomPkg**! If you have ideas for new features, enhancements, or bug fixes, please open an issue or submit a pull request on our [GitHub repository](https://github.com/NeuroLeapTeam/gesture_recognition).\n\nLicense\n-------\n\nThis project is licensed under the MIT License. 
You can view the full license [here](https://github.com/NeuroLeapTeam/gesture_recognition/blob/main/LICENSE).\n\nAcknowledgments\n---------------\n\n* **YOLOv7**: Our package is built upon the innovative YOLOv7 model, which has significantly advanced the field of real-time object detection.\n* **MediaPipe**: MediaPipe's hand tracking technology has enabled us to create a more interactive and user-friendly detection experience.\n\nContact\n-------\n\nFor any questions, feedback, or support, feel free to reach out to the authors:\n\n* Brandon: [brandon@neuroleapmail.com](mailto:brandon@neuroleapmail.com)\n* Moshiur: [moshiur@neuroleapmail.com](mailto:moshiur@neuroleapmail.com)\n\n* * *\n\nThank you for using **OurCustomPkg**! We hope it serves your toy car detection needs and inspires further innovation in your projects.\n",
"bugtrack_url": null,
"license": null,
"summary": "A robust YOLOv7-based package designed for efficient toy car detection and comprehensive dataset management.",
"version": "0.14",
"project_urls": {
"Bug Tracker": "https://github.com/NeuroLeapTeam/gesture_recognition/issues",
"Documentation": "https://github.com/NeuroLeapTeam/gesture_recognition/wiki",
"Homepage": "https://github.com/NeuroLeapTeam/gesture_recognition.git",
"Source Code": "https://github.com/NeuroLeapTeam/gesture_recognition"
},
"split_keywords": [
"yolov7",
" toy car detection",
" dataset management",
" computer vision",
" deep learning"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "401a9faed5bbd3b8a8d98053c3b0495af2e06573a7a24ce8ffa99e03867c60f8",
"md5": "046cf0f53f46796c1e7a08c0b12090ca",
"sha256": "1e82bbafe55b6fa921ada1b9a9f38666365e066a95fccd46eb1060b4525f59b5"
},
"downloads": -1,
"filename": "ourcustompkg-0.14-py3-none-any.whl",
"has_sig": false,
"md5_digest": "046cf0f53f46796c1e7a08c0b12090ca",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.6",
"size": 154195,
"upload_time": "2024-09-01T01:33:31",
"upload_time_iso_8601": "2024-09-01T01:33:31.283279Z",
"url": "https://files.pythonhosted.org/packages/40/1a/9faed5bbd3b8a8d98053c3b0495af2e06573a7a24ce8ffa99e03867c60f8/ourcustompkg-0.14-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "e22afe8ace97dda6bb35bf05b39c7b0e94c660e0544a5f812abcc267de22f369",
"md5": "7022160c54210c137356ae5da28919d7",
"sha256": "9abf990815799c31c5d68fc4c3afe13d59324fc28c50552bd1f103f2f8c95279"
},
"downloads": -1,
"filename": "ourcustompkg-0.14.tar.gz",
"has_sig": false,
"md5_digest": "7022160c54210c137356ae5da28919d7",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.6",
"size": 142341,
"upload_time": "2024-09-01T01:33:33",
"upload_time_iso_8601": "2024-09-01T01:33:33.285288Z",
"url": "https://files.pythonhosted.org/packages/e2/2a/fe8ace97dda6bb35bf05b39c7b0e94c660e0544a5f812abcc267de22f369/ourcustompkg-0.14.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-09-01 01:33:33",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "NeuroLeapTeam",
"github_project": "gesture_recognition",
"github_not_found": true,
"lcname": "ourcustompkg"
}