auto-deep-learning

Name: auto-deep-learning
Version: 0.1.5.4 (PyPI)
Home page: https://github.com/Nil-Andreu/auto-deep-learning
Summary: Automation of the creation of the architecture of the neural network based on the input
Upload time: 2023-01-02 17:14:44
Author: Nil Andreu
License: MIT
Keywords: deep learning, machine learning, computer vision, convolutional neural networks, neural networks, image classification
Requirements: pandas, pre-commit, pydantic, pytest, sentence-transformers, torch, torchaudio, torchsummary, torchvision, transformers
# Auto-Deep-Learning (Auto Deep Learning)
[![Downloads](https://static.pepy.tech/personalized-badge/auto-deep-learning?period=month&units=none&left_color=grey&right_color=blue&left_text=Downloads)](https://pepy.tech/project/auto_deep_learning) ![Version](https://img.shields.io/badge/version-0.1.1-blue) ![Python-Version](https://img.shields.io/badge/python-3.9-blue) ![issues](https://img.shields.io/github/issues/Nil-Andreu/auto_deep_learning) ![PyPI - Status](https://img.shields.io/pypi/status/auto_deep_learning) ![License](https://img.shields.io/github/license/Nil-Andreu/auto_deep_learning)

```auto_deep_learning```: with this package, you can create, train, and deploy neural networks automatically, based on the input that you provide.

## Alert
This package is still under development, so star the project to follow updates over the coming days!
For the moment, it supports computer vision classification tasks on images (multi-modal included).

## Installation
Use the package manager [pip](https://pypi.org/project/pip/) to install *auto_deep_learning*.

To install the package:
```bash
    pip install auto_deep_learning
```

**If using an old version of the package, update it:**
```bash
    pip install --upgrade auto_deep_learning
```


## Project Structure
The project structure is the following:

```bash
    ├── auto_deep_learning                  # Python Package
    │   ├── cloud                           # Cloud module for saving & serving DL models
    │   │   ├── aws                         # Amazon Web Services
    │   │   └── gcp                         # Google Cloud
    │   ├── enum                            # Enumerations for the model
    │   ├── exceptions                      # Exceptions
    │   │   ├── model                       # Exceptions related to the definition/creation of the model
    │   │   └── utils                       # Exceptions related to the utilities folder
    │   │       └── data_handler            # Exceptions related to handling the data
    │   ├── model                           # Module for creating & training the models
    │   │   └── arch                        # Supported model architectures
    │   │       └── convolution
    │   ├── schemas                         # Schemas of expected outputs
    │   └── utils                           # Utilities for the project
    │       ├── data_handler                # Utilities related to handling the data
    │       │   ├── creator                 # Utilities related to creating the loaders
    │       │   └── transform               # Utilities related to the transformation of the data
    │       └── model                       # Utilities related to the creation of the model
    ├── examples                            # Examples of how the package can be used
    └── tests                               # Tests
```


## Basic Usage
Creating and training a deep learning model is as easy as:
```python
    from auto_deep_learning import Model
    from auto_deep_learning.utils import DataCreator, DataSampler, image_folder_convertion

    df = image_folder_convertion()
    data = DataCreator(df)
    data_sampled = DataSampler(data)
    model = Model(data_sampled)
    model.fit()
    model.predict('image.jpg')
```

We also provide a configuration object, which centralizes some of the most important settings that you might want to adjust:
```python
    ConfigurationObject(
        n_epochs: int = 10,
        batch_size_train: int = 64,
        batch_size_valid: int = 128,
        batch_size_test: int = 128,
        valid_size: float = 0.1,
        test_size: float = 0.05,
        image_size: int = 224,
        num_workers: int = 6,
        objective: ModelObjective = ModelObjective.THROUGHPUT,
        img_transformers: Dict[str, ImageTransformer] =  {
            'train': ImageTransformer(
                rotation=3.0,
                color_jitter_brightness=3.0,
                color_jitter_contrast=3.0,
                color_jitter_hue=3.0,
                color_jitter_saturation=3.0,
                color_jitter_enabled=True,
                resized_crop_enabled=True
            ),
            'valid': ImageTransformer(),
            'test': ImageTransformer()
        }
    )
```
So by default, it performs image augmentation on the training data.
Note that if, for example, we did not want a validation split because our dataset is too small, we would change that value as follows:

```python
    conf_obj = ConfigurationObject()
    conf_obj.valid_size = 0.0
```


### Dataset

The expected data is a pd.DataFrame with the following columns:
```
    - image_path: the path to the image
    - class1: the label for class nr. 1. For example: {t-shirt, glasses, ...}
    - class2: the label for class nr. 2. For example: {summer, winter, ...}
    - ...
    - split_type: whether it is for train/valid/test
```
For better performance, it is suggested that the class columns and the split type are of dtype *category* in the pandas DataFrame.
If the split type is not provided in the dataframe, you should use the *data_split_types* utility function (in the *utils.dataset.sampler* file).
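As a minimal sketch (assuming two class columns, with illustrative paths and labels), such a dataframe could be built like this:

```python
import pandas as pd

# Hypothetical example rows; image_path values are placeholders.
df = pd.DataFrame({
    "image_path": ["imgs/001.jpg", "imgs/002.jpg", "imgs/003.jpg"],
    "class1": ["t-shirt", "glasses", "t-shirt"],
    "class2": ["summer", "winter", "summer"],
    "split_type": ["train", "train", "test"],
})

# Cast the label and split columns to dtype `category`,
# as suggested above for better performance.
for col in ("class1", "class2", "split_type"):
    df[col] = df[col].astype("category")
```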

If instead you have the images organized in the ImageFolder layout, which is the following structure:
```
    train/
        class1_value/
            1.jpg
            2.jpg
            ...
        class2_value/
            3.jpg
            4.jpg
            ...
    test/
        class1_value/
            1.jpg
            2.jpg
            ...
        class2_value/
            3.jpg
            4.jpg
            ...
```
To simplify this, we provide the *image_folder_convertion* function (in *utils.functions*), which builds the expected dataframe for you; it expects the path to the parent folder containing the *train/* and *test/* folders.
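To illustrate the shape of the result, here is a hypothetical helper (not the package's implementation) that mimics the conversion by parsing `split/class_value/file.jpg` path strings instead of walking the filesystem:

```python
from pathlib import PurePosixPath
import pandas as pd

def paths_to_dataframe(paths):
    """Illustrative sketch: map ImageFolder-style paths to the expected dataframe."""
    rows = []
    for p in paths:
        # The last three path components are: split type, class value, filename.
        split_type, class_value, _ = PurePosixPath(p).parts[-3:]
        rows.append({"image_path": p, "class1": class_value, "split_type": split_type})
    df = pd.DataFrame(rows)
    for col in ("class1", "split_type"):
        df[col] = df[col].astype("category")
    return df

df = paths_to_dataframe([
    "train/t-shirt/1.jpg",
    "train/glasses/2.jpg",
    "test/t-shirt/3.jpg",
])
```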



            
