<h1 align="center">
packaged ultralytics/yolov5
</h1>
<h4 align="center">
pip install yolov5
</h4>
<div align="center">
<a href="https://pepy.tech/project/yolov5"><img src="https://pepy.tech/badge/yolov5" alt="total downloads"></a>
<a href="https://pepy.tech/project/yolov5"><img src="https://pepy.tech/badge/yolov5/month" alt="monthly downloads"></a>
<a href="https://twitter.com/fcakyon"><img src="https://img.shields.io/badge/twitter-fcakyon_-blue?logo=twitter&style=flat" alt="fcakyon twitter"></a>
<br>
<a href="https://badge.fury.io/py/yolov5"><img src="https://badge.fury.io/py/yolov5.svg?kill_cache=1" alt="pypi version"></a>
<a href="https://github.com/fcakyon/yolov5-pip/actions/workflows/ci.yml"><img src="https://github.com/fcakyon/yolov5-pip/actions/workflows/ci.yml/badge.svg" alt="ci testing"></a>
<a href="https://github.com/fcakyon/yolov5-pip/actions/workflows/package_testing.yml"><img src="https://github.com/fcakyon/yolov5-pip/actions/workflows/package_testing.yml/badge.svg" alt="package testing"></a>
</div>
## <div align="center">Overview</div>
<div align="center">
You can finally install the <a href="https://github.com/ultralytics/yolov5">YOLOv5 object detector</a> using <a href="https://pypi.org/project/yolov5/">pip</a> and integrate it into your projects easily.
<img src="https://user-images.githubusercontent.com/26833433/136901921-abcfcd9d-f978-4942-9b97-0e3f202907df.png" width="1000">
</div>
<br>
This yolov5 package contains everything from ultralytics/yolov5 <a href="https://github.com/ultralytics/yolov5/tree/5deff1471dede726f6399be43e7073ee7ed3a7d4">at this commit</a>, plus:
<br>
1. Easy installation via pip: <b>pip install yolov5</b>
<br>
2. Full CLI integration with <a href="https://github.com/google/python-fire">fire</a> package
<br>
3. COCO dataset format support (for training)
<br>
4. Full <a href="https://huggingface.co/models?other=yolov5">🤗 Hub</a> integration
<br>
5. <a href="https://aws.amazon.com/s3/">S3</a> support (model and dataset upload)
<br>
6. <a href="https://neptune.ai/">NeptuneAI</a> logger support (metric, model and dataset logging)
<br>
7. Class-wise AP logging during experiments
## <div align="center">Install</div>
Install yolov5 using pip (requires Python >=3.7):
```console
pip install yolov5
```
## <div align="center">Model Zoo</div>
<div align="center">
Effortlessly explore and use fine-tuned YOLOv5 models with one line of code: <a href="https://github.com/keremberke/awesome-yolov5-models">awesome-yolov5-models</a>
<a href="https://github.com/keremberke/awesome-yolov5-models"><img src="https://user-images.githubusercontent.com/34196005/210134158-108b24f4-2b8e-43ea-95c8-44731625cde2.gif" width="640"></a>
</div>
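For instance, assuming the package's 🤗 Hub integration lets `yolov5.load` resolve a Hub repo id (the repo id below is illustrative; browse awesome-yolov5-models for real ones), a fine-tuned model can be loaded and run in a few lines:

```python
import yolov5

# Load a fine-tuned detector from the Hugging Face Hub by repo id
# (illustrative repo id; any model tagged `yolov5` should work the same way).
model = yolov5.load('keremberke/yolov5m-license-plate')

# Run it exactly like a local model and visualize the detections.
results = model('https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg')
results.show()
```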
## <div align="center">Use from Python</div>
```python
import yolov5
# load pretrained model
model = yolov5.load('yolov5s.pt')
# or load custom model
model = yolov5.load('train/best.pt')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img)
# inference with larger input size
results = model(img, size=1280)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
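To turn the parsed tensors into readable output, you can zip them together. A minimal sketch, assuming the loaded model exposes a `names` index-to-label mapping (models loaded via `yolov5.load` do):

```python
# Print one line per detection using the tensors parsed above.
for (x1, y1, x2, y2), score, cat in zip(boxes.tolist(), scores.tolist(), categories.tolist()):
    label = model.names[int(cat)]  # map class index to label name
    print(f"{label} {score:.2f} at ({x1:.0f}, {y1:.0f}) - ({x2:.0f}, {y2:.0f})")
```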
<details closed>
<summary>Train/Detect/Val/Export</summary>
- You can directly use these functions by importing them:
```python
from yolov5 import train, val, detect, export
# from yolov5.classify import train, val, predict
# from yolov5.segment import train, val, predict
train.run(imgsz=640, data='coco128.yaml')
val.run(imgsz=640, data='coco128.yaml', weights='yolov5s.pt')
detect.run(imgsz=640)
export.run(imgsz=640, weights='yolov5s.pt')
```
- You can pass any argument the underlying scripts accept:
```python
from yolov5 import detect
img_url = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
detect.run(source=img_url, weights="yolov5s6.pt", conf_thres=0.25, imgsz=640)
```
</details>
## <div align="center">Use from CLI</div>
You can call `yolov5 train`, `yolov5 detect`, `yolov5 val` and `yolov5 export` commands after installing the package via `pip`:
<details open>
<summary>Training</summary>
- Fine-tune one of the pretrained YOLOv5 models using your custom `data.yaml` (larger models pair with smaller batch sizes, as shown):
```bash
$ yolov5 train --data data.yaml --weights yolov5s.pt --batch-size 16 --img 640
                                          yolov5m.pt              8
                                          yolov5l.pt              4
                                          yolov5x.pt              2
```
- Start training using a COCO-formatted dataset (a minimal annotation sketch follows the command below):
```yaml
# data.yaml
train_json_path: "train.json"
train_image_dir: "train_image_dir/"
val_json_path: "val.json"
val_image_dir: "val_image_dir/"
```
```bash
$ yolov5 train --data data.yaml --weights yolov5s.pt
```
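For reference, `train.json` and `val.json` follow the standard COCO annotation layout. A minimal sketch of that structure, generated from Python with illustrative file names and values:

```python
import json

# Minimal COCO-format annotation file (illustrative values).
coco = {
    "images": [{"id": 1, "file_name": "0001.jpg", "width": 640, "height": 480}],
    "annotations": [{
        "id": 1,
        "image_id": 1,
        "category_id": 1,
        "bbox": [100, 100, 50, 80],  # [x_min, y_min, width, height]
        "area": 50 * 80,
        "iscrowd": 0,
    }],
    "categories": [{"id": 1, "name": "person"}],
}

with open("train.json", "w") as f:
    json.dump(coco, f)
```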
- Train your model using [Roboflow Universe](https://universe.roboflow.com/) datasets (roboflow>=0.2.29 required):
```bash
$ yolov5 train --data DATASET_UNIVERSE_URL --weights yolov5s.pt --roboflow_token YOUR_ROBOFLOW_TOKEN
```
Here `DATASET_UNIVERSE_URL` must be of the form `https://universe.roboflow.com/workspace_name/project_name/project_version`.
- Visualize your experiments via [Neptune.AI](https://neptune.ai/) (neptune-client>=0.10.10 required):
```bash
$ yolov5 train --data data.yaml --weights yolov5s.pt --neptune_project NAMESPACE/PROJECT_NAME --neptune_token YOUR_NEPTUNE_TOKEN
```
- Automatically upload weights to [Huggingface Hub](https://huggingface.co/models?other=yolov5):
```bash
$ yolov5 train --data data.yaml --weights yolov5s.pt --hf_model_id username/modelname --hf_token YOUR-HF-WRITE-TOKEN
```
- Automatically upload weights and datasets to AWS S3 (with Neptune.AI artifact tracking integration):
```bash
export AWS_ACCESS_KEY_ID=YOUR_KEY
export AWS_SECRET_ACCESS_KEY=YOUR_KEY
```
```bash
$ yolov5 train --data data.yaml --weights yolov5s.pt --s3_upload_dir YOUR_S3_FOLDER_DIRECTORY --upload_dataset
```
- Add `yolo_s3_data_dir` to `data.yaml` to link the Neptune dataset to an existing dataset in S3:
```yaml
# data.yaml
train_json_path: "train.json"
train_image_dir: "train_image_dir/"
val_json_path: "val.json"
val_image_dir: "val_image_dir/"
yolo_s3_data_dir: s3://bucket_name/data_dir/
```
</details>
<details open>
<summary>Inference</summary>
The `yolov5 detect` command runs inference on a variety of sources, downloading models automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`:
```bash
$ yolov5 detect --source 0  # webcam
                         file.jpg  # image
                         file.mp4  # video
                         path/  # directory
                         path/*.jpg  # glob
                         rtsp://170.93.143.139/rtplive/470011e600ef003a004ee33696235daa  # rtsp stream
                         rtmp://192.168.1.105/live/test  # rtmp stream
                         http://112.50.243.8/PLTV/88888888/224/3221225900/1.m3u8  # http stream
```
</details>
<details open>
<summary>Export</summary>
You can export your fine-tuned YOLOv5 weights to formats such as `torchscript`, `onnx`, `coreml`, `pb`, `tflite`, and `tfjs`:
```bash
$ yolov5 export --weights yolov5s.pt --include torchscript,onnx,coreml,pb,tfjs
```
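The exported file is then consumed by the matching runtime. A minimal sketch for the `onnx` case, assuming `onnxruntime` is installed separately (it is not a dependency of this package) and the export used the default 640-pixel input size:

```python
import numpy as np
import onnxruntime as ort  # pip install onnxruntime

# Run the exported model on a dummy input; YOLOv5 ONNX exports expect
# NCHW float32 images with pixel values scaled to [0, 1].
session = ort.InferenceSession("yolov5s.onnx")
input_name = session.get_inputs()[0].name
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)

outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)  # raw candidate boxes, before NMS
```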
</details>
<details open>
<summary>Classify</summary>
Train, validate, and run predictions with the YOLOv5 image classifier:
```bash
$ yolov5 classify train --img 640 --data mnist2560 --weights yolov5s-cls.pt --epochs 1
```
```bash
$ yolov5 classify predict --img 640 --weights yolov5s-cls.pt --source images/
```
</details>
<details open>
<summary>Segment</summary>
Train, validate, and run predictions with the YOLOv5 instance segmentation model:
```bash
$ yolov5 segment train --img 640 --weights yolov5s-seg.pt --epochs 1
```
```bash
$ yolov5 segment predict --img 640 --weights yolov5s-seg.pt --source images/
```
</details>