# modeyolo 0.2.1

- **Summary:** ModeYOLO is a Python package for efficient color space transformations and simplified dataset modification for deep learning. It integrates into existing workflows, letting users apply diverse color operations and generate modified datasets for model training.
- **Home page:** https://github.com/colddsam/ModeYOLO.git
- **Author:** colddsam
- **License:** MIT
- **Requires Python:** >=3.10,<4.0
- **Uploaded:** 2024-03-13 20:24:10
- **Keywords:** color space transformations, image processing, deep learning, dataset modification, computer vision, opencv, numpy, data augmentation, machine learning, python package

# ModeYOLO Python Package

## Introduction
ModeYOLO is a versatile Python package designed for efficient color space transformations, dataset modification, and YOLO model training. It seamlessly integrates into your workflow, providing solutions for diverse machine learning applications in computer vision.


## Dependencies
ModeYOLO depends on the following libraries:
- Ultralytics (`ultralytics`)
- PyTorch (`torch`)
- opencv-python (`cv2`)

It also uses the `os` module from the Python standard library, which needs no separate installation.
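
Since modeyolo is published on PyPI, it can typically be installed with pip. The following is a sketch; if the dependencies listed above are not pulled in automatically, install them explicitly:

```bash
pip install modeyolo
# If ultralytics, torch, or opencv-python are still missing afterwards:
pip install ultralytics torch opencv-python
```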

### Folder Structure
Before using the package, ensure that your source dataset uses the folder structure shown below (an example `data.yaml` is sketched after the tree):

```plaintext
dataset/
|-- train/
|   |-- images/
|   |-- labels/
|-- test/
|   |-- images/
|   |-- labels/
|-- val/
|   |-- images/
|   |-- labels/
|-- data.yaml
```
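
The `data.yaml` file follows the usual Ultralytics format. A minimal illustrative example is shown below; the paths and class names are placeholders, not something ModeYOLO generates for you:

```yaml
# Illustrative Ultralytics-style data.yaml (adjust paths, class count, and names to your dataset)
train: train/images
val: val/images
test: test/images

nc: 2                  # number of classes (placeholder)
names: ['cat', 'dog']  # class names (placeholders)
```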

## ColorOperation Module (`ColorOperation.py`)

### Class: `colorcng`

#### Color Spaces: 
The following color spaces are currently supported: `['RGB', 'BGR', 'GRAY', 'CrCb', 'LAB', 'HSV']`. Each element in the list corresponds to a specific color space, explained below (the plain-OpenCV sketch after the list shows the equivalent conversions):

1. **RGB (Red, Green, Blue):** The standard color model used in most digital cameras and displays, where each pixel is represented by three values indicating the intensity of red, green, and blue.

2. **BGR (Blue, Green, Red):** Similar to RGB but with the order of color channels reversed. OpenCV, a popular computer vision library, uses BGR as its default color order.

3. **GRAY (Grayscale):** A single-channel color space where each pixel is represented by a single intensity value, typically ranging from black to white.

4. **CrCb:** Shorthand here for the YCrCb color space, often used in image and video compression. It separates luminance (brightness information) from the Cr and Cb chrominance (color) components.

5. **LAB:** The CIELAB color space represents colors independently of device-specific characteristics. It consists of three components: L* (lightness), a* (green to red), and b* (blue to yellow).

6. **HSV (Hue, Saturation, Value):** A color space that separates color information into three components: hue (the type of color), saturation (the intensity or vividness of the color), and value (brightness). 
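
For reference, these modes correspond to standard OpenCV conversions. The sketch below uses plain OpenCV, independent of ModeYOLO, and assumes an image loaded in OpenCV's default BGR order; the file name is a placeholder:

```python
import cv2

# OpenCV loads images in BGR channel order by default.
img_bgr = cv2.imread("sample.jpg")  # placeholder path

# Standard conversions matching the supported modes.
img_rgb   = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)    # RGB
img_gray  = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)   # GRAY
img_ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb)  # CrCb (YCrCb)
img_lab   = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2Lab)    # LAB (CIELAB)
img_hsv   = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)    # HSV
```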

#### Constructor
```python
def __init__(self, path: str, mode: str = 'all') -> None:
    """
    Initializes the colorcng object.

    Parameters:
    - path: str, path to the target directory.
    - mode: str, mode of operation ('all', 'rgb', 'bgr', 'gray', 'hsv', 'crcb', 'lab').
    """
```

#### Methods
1. `cng_rgb`
    ```python
    def cng_rgb(self, opt: str, img: np.ndarray, idx: int | str = 0) -> None:
        """
        Converts the image to RGB color space.

        Parameters:
        - opt: str, operation type ('train', 'test', 'val').
        - img: np.ndarray, input image.
        - idx: int | str, index for the output file name.
        """
    ```

2. `cng_bgr`
    ```python
    def cng_bgr(self, opt: str, img: np.ndarray, idx: int | str = 0) -> None:
        """
        Saves the image in BGR color space.

        Parameters:
        - opt: str, operation type ('train', 'test', 'val').
        - img: np.ndarray, input image.
        - idx: int | str, index for the output file name.
        """
    ```

3. `cng_gray`
    ```python
    def cng_gray(self, opt: str, img: np.ndarray, idx: int | str = 0) -> None:
        """
        Converts the image to grayscale.

        Parameters:
        - opt: str, operation type ('train', 'test', 'val').
        - img: np.ndarray, input image.
        - idx: int | str, index for the output file name.
        """
    ```

4. `cng_hsv`
    ```python
    def cng_hsv(self, opt: str, img: np.ndarray, idx: int | str = 0) -> None:
        """
        Converts the image to HSV color space.

        Parameters:
        - opt: str, operation type ('train', 'test', 'val').
        - img: np.ndarray, input image.
        - idx: int | str, index for the output file name.
        """
    ```

5. `cng_crcb`
    ```python
    def cng_crcb(self, opt: str, img: np.ndarray, idx: int | str = 0) -> None:
        """
        Converts the image to YCrCb color space.

        Parameters:
        - opt: str, operation type ('train', 'test', 'val').
        - img: np.ndarray, input image.
        - idx: int | str, index for the output file name.
        """
    ```

6. `cng_lab`
    ```python
    def cng_lab(self, opt: str, img: np.ndarray, idx: int | str = 0) -> None:
        """
        Converts the image to LAB color space.

        Parameters:
        - opt: str, operation type ('train', 'test', 'val').
        - img: np.ndarray, input image.
        - idx: int | str, index for the output file name.
        """
    ```

7. `execute`
    ```python
    def execute(self, opt: str, file: str, idx: int | str = 0) -> None:
        """
        Executes the specified color space transformation.

        Parameters:
        - opt: str, operation type ('train', 'test', 'val').
        - file: str, path to the input image.
        - idx: int | str, index for the output file name.
        """
    ```
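
#### Example Usage
A minimal usage sketch based on the signatures above. The import path follows the pattern of the other modules and the file paths are placeholders; both are assumptions, so adapt them to your setup:

```python
import cv2
from ModeYOLO.ColorOperation import colorcng  # assumed import path

# Write converted copies into the target directory (assumed to already exist).
ops = colorcng(path='./modified_dataset', mode='all')

# Convert an image that has already been loaded with OpenCV...
img = cv2.imread('./dataset/train/images/0001.jpg')  # placeholder file
ops.cng_hsv(opt='train', img=img, idx=0)

# ...or let execute() read the file and apply the configured mode(s).
ops.execute(opt='train', file='./dataset/train/images/0001.jpg', idx=0)
```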

## Operation Module (`Operation.py`)

### Class: `InitOperation`

#### Constructor
```python
def __init__(self, target_directory: str = 'modified_dataset', src_directory: str = 'dataset', mode: str = 'all') -> None:
    """
    Initializes the InitOperation object.

    Parameters:
    - target_directory: str, path to the target directory.
    - src_directory: str, path to the source dataset directory.
    - mode: str, mode of operation ('all', 'rgb', 'bgr', 'gray', 'hsv', 'crcb', 'lab').
    """
```

#### Methods
1. `start_train`
    ```python
    def start_train(self) -> None:
        """
        Creates the modified training dataset.
        """
    ```

2. `start_test`
    ```python
    def start_test(self) -> None:
        """
        Creates the modified testing dataset.
        """
    ```

3. `start_val`
    ```python
    def start_val(self) -> None:
        """
        Creates the modified validation dataset.
        """
    ```

4. `reform_dataset`
    ```python
    def reform_dataset(self) -> None:
        """
        Reformats the entire dataset.
        """
    ```

### Example Usage

```python
# Import the InitOperation class
from ModeYOLO.Operation import InitOperation

# Create an InitOperation object
init_op = InitOperation(target_directory='modified_dataset', src_directory='dataset', mode='all')

# Create the modified dataset
init_op.reform_dataset()
```

## ModelTrain Module (`ModelTrain.py`)
### Class: `trainYOLO`
This submodule facilitates YOLO model training with a range of pre-trained models. Users can choose a model, specify training parameters, and integrate training directly into their workflow.


### Pre-trained Models
The `trainYOLO` submodule supports training with various pre-trained YOLO models. Choose a model by entering the corresponding index when prompted. Here are the available models:

1. `yolov3u.pt`: YOLOv3 ("u" variant with the YOLOv8 anchor-free detection head)
2. `yolov5nu.pt`: YOLOv5 nano ("u" variant)
3. `yolov5su.pt`: YOLOv5 small ("u" variant)
4. `yolov5mu.pt`: YOLOv5 medium ("u" variant)
5. `yolov5lu.pt`: YOLOv5 large ("u" variant)
6. `yolov5xu.pt`: YOLOv5 extra-large ("u" variant)
7. `yolov5n6u.pt`: YOLOv5 nano, P6 variant for larger input resolution
8. `yolov5s6u.pt`: YOLOv5 small, P6 variant
9. `yolov5m6u.pt`: YOLOv5 medium, P6 variant
10. `yolov5l6u.pt`: YOLOv5 large, P6 variant
11. `yolov5x6u.pt`: YOLOv5 extra-large, P6 variant
12. `yolov6n.pt`: YOLOv6 nano
13. `yolov6s.pt`: YOLOv6 small
14. `yolov6m.pt`: YOLOv6 medium
15. `yolov6l.pt`: YOLOv6 large
16. `yolov6l6.pt`: YOLOv6 large, P6 variant
17. `yolov8n.pt`: YOLOv8 nano
18. `yolov8s.pt`: YOLOv8 small
19. `yolov8m.pt`: YOLOv8 medium
20. `yolov8l.pt`: YOLOv8 large
21. `yolov8x.pt`: YOLOv8 extra-large
22. `yolov9s.pt`: YOLOv9 small
23. `yolov9m.pt`: YOLOv9 medium
24. `yolov9c.pt`: YOLOv9 "c" (compact) variant
25. `yolov9e.pt`: YOLOv9 "e" variant (largest)

Choose a model based on your specific requirements and follow the on-screen instructions during training for optimal results.


### Example Usage
```python
# Import the trainYOLO class
from ModeYOLO.ModelTrain import trainYOLO

# Create a trainYOLO object
yolo_trainer = trainYOLO(target_directory='modified_dataset', src_directory='dataset', mode='all', data_path='./modified_dataset/data.yaml', epochs=1, imgsz=224)

# Train the YOLO model
yolo_trainer.train()

# Validate the trained model
yolo_trainer.val()
```

**Note:** Follow the on-screen instructions to choose a YOLO model for training.

## License
This project is licensed under the MIT License - see the [LICENSE](https://github.com/colddsam/ModeYOLO/blob/main/LICENSE) file for details.

## Acknowledgments
- ModeYOLO builds on the Ultralytics, PyTorch, and OpenCV projects listed under Dependencies.

**Note:** The examples above assume the source dataset follows the folder structure described earlier. Adjust paths and parameters to match your own dataset.
            
