# VisualFlow
![VisualFlow Logo](images/vf_logo.webp)
[![PyPI version](https://badge.fury.io/py/visualflow.svg)](https://badge.fury.io/py/visualflow)
[![Downloads](https://static.pepy.tech/badge/visualflow)](https://pepy.tech/project/visualflow)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
VisualFlow is a Python library for object detection that aims to provide a model-agnostic, end-to-end data solution for your object detection needs. Convert your data, augment it, and run inference on your models with just a few function calls!
We started this library with the vision of providing end-to-end object detection support, from format conversion all the way to running inference on multiple types of object detection models.
Our initial version of VisualFlow supports format conversions between PASCAL VOC, COCO, and YOLO. Stay tuned for future updates!
- [Installation](#installation)
- [Usage](#usage)
- [Conversions](#conversions)
- [Augmentations](#augmentations)
- [Inferences](#inferences)
- [Contributing](#contributing)
- [License](#license)
## Installation
You can install VisualFlow using pip:
```bash
pip install visualflow
```
## Usage
### Conversions
VisualFlow provides three main conversion functions: `to_voc()`, `to_yolo()`, and `to_coco()`. Here's how you can use them:
#### Conversion to YOLO Format
To convert from PASCAL VOC or COCO format to YOLO format, use the `to_yolo()` function.
For VOC to YOLO:
```python
import VisualFlow as vf
vf.to_yolo(in_format='voc',
           images='path/to/images',
           annotations='path/to/annotations',
           out_dir='path/to/output')
```
For COCO to YOLO:
```python
import VisualFlow as vf
vf.to_yolo(in_format='coco',
           images='path/to/images',
           out_dir='path/to/output',
           json_file='path/to/annotations.json')
```
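For reference, YOLO labels store one object per line as `class cx cy w h`, with box center and size normalized to the image dimensions. A minimal sketch of the VOC-to-YOLO box arithmetic (the helper name is illustrative, not part of VisualFlow's API):

```python
def voc_to_yolo_box(xmin, ymin, xmax, ymax, img_w, img_h):
    """Convert absolute VOC corner coordinates to normalized YOLO center/size."""
    cx = (xmin + xmax) / 2 / img_w   # box center x, as a fraction of image width
    cy = (ymin + ymax) / 2 / img_h   # box center y, as a fraction of image height
    w = (xmax - xmin) / img_w        # box width, normalized
    h = (ymax - ymin) / img_h        # box height, normalized
    return cx, cy, w, h

# A 100x100 box at the top-left of a 640x480 image:
print(voc_to_yolo_box(0, 0, 100, 100, 640, 480))
```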
#### Conversion to Pascal VOC Format
To convert from COCO or YOLO format to Pascal VOC format, use the `to_voc()` function.
For COCO to VOC:
```python
import VisualFlow as vf
vf.to_voc(in_format='coco',
          images='path/to/images',
          out_dir='path/to/output',
          json_file='path/to/annotations.json')
```
For YOLO to VOC:
```python
import VisualFlow as vf
vf.to_voc(in_format='yolo',
          images='path/to/images',
          annotations='path/to/annotations',
          class_file='path/to/classes.txt',
          out_dir='path/to/output')
```
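Pascal VOC stores one XML annotation file per image. A minimal sketch of the structure such a conversion produces, using only the standard library (field set trimmed to the essentials; VisualFlow's actual output may include additional fields):

```python
import xml.etree.ElementTree as ET

def make_voc_xml(filename, width, height, objects):
    """Build a minimal VOC annotation. objects: (name, xmin, ymin, xmax, ymax) in pixels."""
    root = ET.Element('annotation')
    ET.SubElement(root, 'filename').text = filename
    size = ET.SubElement(root, 'size')
    ET.SubElement(size, 'width').text = str(width)
    ET.SubElement(size, 'height').text = str(height)
    for name, xmin, ymin, xmax, ymax in objects:
        obj = ET.SubElement(root, 'object')
        ET.SubElement(obj, 'name').text = name
        box = ET.SubElement(obj, 'bndbox')
        for tag, val in zip(('xmin', 'ymin', 'xmax', 'ymax'), (xmin, ymin, xmax, ymax)):
            ET.SubElement(box, tag).text = str(val)
    return ET.tostring(root, encoding='unicode')

print(make_voc_xml('img1.jpg', 640, 480, [('dog', 10, 20, 200, 300)]))
```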
#### Conversion to COCO Format
To convert from PASCAL VOC or YOLO format to COCO format, use the `to_coco()` function.
For VOC to COCO:
```python
import VisualFlow as vf
vf.to_coco(in_format='voc',
           images='path/to/images',
           annotations='path/to/annotations',
           class_file='path/to/classes.txt',
           output_file_path='path/to/output.json')
```
For YOLO to COCO:
```python
import VisualFlow as vf
vf.to_coco(in_format='yolo',
           images='path/to/images',
           annotations='path/to/annotations',
           class_file='path/to/classes.txt',
           output_file_path='path/to/output.json')
```
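The COCO output is a single JSON file that lists images, annotations, and categories together. A minimal sketch of that structure (trimmed to the fields most tools require; VisualFlow's actual output may contain more):

```python
import json

coco = {
    "images": [{"id": 1, "file_name": "img1.jpg", "width": 640, "height": 480}],
    "annotations": [{
        "id": 1, "image_id": 1, "category_id": 1,
        "bbox": [10, 20, 190, 280],  # COCO boxes are [x, y, width, height] in pixels
        "area": 190 * 280,
        "iscrowd": 0,
    }],
    "categories": [{"id": 1, "name": "dog"}],
}
print(json.dumps(coco, indent=2))
```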
Make sure to replace 'path/to/images', 'path/to/annotations', 'path/to/classes.txt', and 'path/to/output' with the actual paths to your dataset files and folders.
### Augmentations
VisualFlow's powerful data augmentations can enhance your object detection training data. Easily apply these transformations to your dataset with just a few lines of code:
- **Cutout**: Create up to three random cutouts to encourage robustness and generalization in your models.
- **Grayscale**: Convert images to grayscale, adding diversity to your training data.
- **Brightness**: Adjust the brightness of your images, ensuring your models can handle varying lighting conditions.
- **Noise**: Introduce noise to diversify your dataset and improve model resilience.
- **Blur**: Apply blurring to images, simulating real-world scenarios and enhancing model adaptability.
- **Hue**: Adjust the hue of images, enriching color variations and augmenting the dataset.
- **Exposure**: Manipulate exposure levels to help models cope with different lighting environments.
- **Flip90**: Perform 90-degree flips for data variation and better model generalization.
- **Shear**: Apply shear transformations to images and their bounding boxes to augment your dataset and improve model robustness.
- **Rotate**: Rotate images and their bounding boxes by a specified angle to create diverse training examples.
Some examples are shown below:
```python
import VisualFlow as vf
vf.cutout(image_dir='path/to/images',
          labels_dir='path/to/labels',  # optional
          output_dir='path/to/output',
          max_num_cutouts=3)  # optional, set by default

vf.grayscale(image_dir='path/to/images',
             labels_dir='path/to/labels',  # optional
             output_dir='path/to/output')

vf.brightness(image_dir='path/to/images',
              labels_dir='path/to/labels',  # optional
              output_dir='path/to/output',
              factor=1.5)  # optional, set by default

vf.noise(image_dir='path/to/images',
         labels_dir='path/to/labels',  # optional
         output_dir='path/to/output')

vf.blur(image_dir='path/to/images',
        labels_dir='path/to/labels',  # optional
        output_dir='path/to/output')

vf.hue(image_dir='path/to/images',
       labels_dir='path/to/labels',  # optional
       output_dir='path/to/output')

vf.exposure(image_dir='path/to/images',
            labels_dir='path/to/labels',  # optional
            output_dir='path/to/output',
            factor=2.0)  # optional, set by default

vf.flip90(image_dir='path/to/images',
          labels_dir='path/to/labels',
          output_dir='path/to/output')

vf.shear(image_dir='path/to/images',
         labels_dir='path/to/labels',
         output_dir='path/to/output',
         shear_factor=0.2)  # optional, set by default

vf.rotate(image_dir='path/to/images',
          labels_dir='path/to/labels',
          output_dir='path/to/output',
          angle=30)  # optional, set by default
```
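For intuition, geometric augmentations like rotation must transform the boxes along with the pixels: each corner of the box is rotated, and a new axis-aligned box is taken around the rotated corners. A minimal sketch of that arithmetic (illustrative only, not VisualFlow's internal code):

```python
import math

def rotate_box(xmin, ymin, xmax, ymax, angle_deg, cx, cy):
    """Rotate a box's corners about (cx, cy) and return the axis-aligned envelope."""
    theta = math.radians(angle_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    corners = [(xmin, ymin), (xmax, ymin), (xmax, ymax), (xmin, ymax)]
    rotated = [((x - cx) * cos_t - (y - cy) * sin_t + cx,
                (x - cx) * sin_t + (y - cy) * cos_t + cy) for x, y in corners]
    xs, ys = zip(*rotated)
    return min(xs), min(ys), max(xs), max(ys)

# Rotating a square box 90 degrees about its own center leaves it unchanged:
print(rotate_box(10, 10, 30, 30, 90, 20, 20))
```

Note that for angles that are not multiples of 90 degrees, the envelope is strictly larger than the original box, which is why rotated boxes tend to grow.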
### Inferences
VisualFlow now empowers you to harness the full potential of your YOLO models, making object detection inference a seamless part of your workflow. With this new feature, you can confidently evaluate your trained models on your own test data.
Inference with VisualFlow is a breeze. Here's a simple example of how you can run inference with your YOLO models:
```python
import VisualFlow as vf
model_path = "/path/to/your/yolo_model.pt"
inference_dir = "/path/to/your/inference_images"
labels_dir = "/path/to/your/inference_labels"
class_txt = "/path/to/your/class_names.txt"
output_dir = "/path/to/your/output_directory"
vf.yolo_inference(model_path=model_path,
                  inference_dir=inference_dir,
                  labels_dir=labels_dir,
                  class_txt=class_txt,
                  output_dir=output_dir)
# additional arguments: iou, conf
```
We understand that each object detection project may require different configurations. Therefore, VisualFlow's `yolo_inference()` function supports two additional parameters:
- **iou**: Set to 0.7 by default, this parameter is the IoU threshold used to filter overlapping bounding boxes.
- **conf**: Set to 0.5 by default, this parameter is the minimum confidence score a prediction must reach to be kept.
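For reference, the **iou** threshold compares boxes by intersection-over-union. A minimal sketch of that metric for two `[xmin, ymin, xmax, ymax]` boxes (illustrative, not VisualFlow's internal code):

```python
def iou(a, b):
    """Intersection-over-union of two [xmin, ymin, xmax, ymax] boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # intersection width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # intersection height
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

# Two 2x2 boxes shifted by half their width share 1/3 of their combined area:
print(iou([0, 0, 2, 2], [1, 0, 3, 2]))  # → 0.3333333333333333
```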
UPDATE: VisualFlow now supports Facebook's DETR! (Please convert your test dataset into YOLO format first using the conversion functionality above.) You can use it as follows:
```python
import VisualFlow as vf
model_path = "/path/to/your/detr_model.pt"
inference_dir = "/path/to/your/inference_images"
labels_dir = "/path/to/your/inference_labels"
class_txt = "/path/to/your/class_names.txt"
output_dir = "/path/to/your/output_directory"
vf.detr_inference(model_path=model_path,
                  inference_dir=inference_dir,
                  labels_dir=labels_dir,
                  class_txt=class_txt,
                  output_dir=output_dir)
```
## Contributing
Contributions are welcome! If you find any issues or have suggestions for improvements, please feel free to open an issue or submit a pull request on [GitHub](https://github.com/Ojas-Sharma/VisualFlow).
## License
[MIT](https://choosealicense.com/licenses/mit/)