keras-aug

Name: keras-aug
Version: 1.1.1 (PyPI)
Summary: A library that includes Keras 3 preprocessing and augmentation layers
Upload time: 2024-08-12 14:28:59
Author: Hong-Yu Chiu <james77777778@gmail.com>
Homepage: https://github.com/james77777778/keras-aug
Requires Python: >=3.9
License: Apache License 2.0
Keywords: deep-learning, preprocessing, augmentation, keras, jax, tensorflow, torch
Requirements: torch, torchvision, keras
# KerasAug

<!-- markdownlint-disable MD033 -->

![Keras](https://img.shields.io/badge/keras-v3.4.1+-success.svg)
[![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/james77777778/keras-aug/actions.yml?label=tests)](https://github.com/james77777778/keras-aug/actions/workflows/actions.yml?query=branch%3Amain++)
[![codecov](https://codecov.io/gh/james77777778/keras-aug/branch/main/graph/badge.svg?token=81ELI3VH7H)](https://codecov.io/gh/james77777778/keras-aug)
[![PyPI](https://img.shields.io/pypi/v/keras-aug)](https://pypi.org/project/keras-aug/)
![PyPI - Downloads](https://img.shields.io/pypi/dm/keras-aug)
[![Open in HF Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-sm-dark.svg)](https://huggingface.co/spaces/james77777778/KerasAug)

## Description

KerasAug is a library that includes Keras 3 preprocessing and augmentation layers, providing support for various data types such as images, labels, bounding boxes, segmentation masks, and more.

<div align="center">
<img width="45%" src="https://github.com/user-attachments/assets/bf9488c4-5c6b-4c87-8fa8-30170a67c92c" alt="object_detection.gif"> <img width="45%" src="https://github.com/user-attachments/assets/556db949-9461-438a-b1cf-3621ec63416e"  alt="semantic_segmentation.gif">
</div>

> [!NOTE]
> The GIFs above are generated by the scripts in `docs/*.py`, which run a YOLOV8-like pipeline for bounding boxes and segmentation masks.

KerasAug aims to provide fast, robust and user-friendly preprocessing and augmentation layers, facilitating seamless integration with Keras 3 and `tf.data`.

The APIs largely follow `torchvision`, and the correctness of the layers has been verified through unit tests.
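
For example, since each transform is a regular Keras layer, it can also be called eagerly on a plain image batch, much like a `torchvision` transform. The following is a minimal sketch extrapolated from the Quickstart usage below, not an excerpt from the library's documentation:

```python
import numpy as np

from keras_aug import layers as ka_layers

# A minimal sketch: call a single layer eagerly on a batch of float images
# in [0, 1], similar to applying a torchvision v2 transform.
images = np.random.uniform(0.0, 1.0, size=(4, 224, 224, 3)).astype("float32")
resized = ka_layers.vision.Resize((128, 128))(images)
print(resized.shape)  # expected: (4, 128, 128, 3)
```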

You can also try the demo app on Hugging Face Spaces:

[![Open in HF Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-sm-dark.svg)](https://huggingface.co/spaces/james77777778/KerasAug)

## Why KerasAug

- 🚀 Supports many preprocessing & augmentation layers across all backends (JAX, TensorFlow and Torch).
- 🧰 Seamlessly integrates with `tf.data`, offering a performant and scalable data pipeline.
- 🔥 Follows the same API design as `torchvision`.
- 🙌 Depends only on Keras 3.

## Installation

```bash
pip install keras keras-aug -U
```
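
As a quick sanity check (a hypothetical snippet; it assumes `keras_aug` exposes a `__version__` attribute, which is common but not guaranteed):

```python
# Hypothetical post-install check.
import keras
import keras_aug

print(keras.__version__)      # the badge above expects Keras >= 3.4.1
print(keras_aug.__version__)  # assumes keras_aug defines __version__
```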

> [!IMPORTANT]  
> Make sure you have installed a supported backend for Keras.
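
Keras 3 picks its backend from the `KERAS_BACKEND` environment variable, so one way to select it is before the first `import keras`. A minimal sketch:

```python
import os

# Select the backend before the first `import keras`; valid values are
# "jax", "tensorflow", and "torch", and the chosen framework must be installed.
os.environ["KERAS_BACKEND"] = "tensorflow"

import keras

print(keras.backend.backend())  # prints the active backend name
```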

## Quickstart

### Rock, Paper and Scissors Image Classification

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/11xc0nW06iWQ_R-oH4wLB_MYV4GY4mNwy?usp=sharing)

```python
import keras
import tensorflow as tf
import tensorflow_datasets as tfds

from keras_aug import layers as ka_layers

BATCH_SIZE = 64
NUM_CLASSES = 3
INPUT_SIZE = (128, 128)

# Create a `tf.data.Dataset`-compatible preprocessing pipeline.
# Note that this example works with all backends.
train_dataset, validation_dataset = tfds.load(
    "rock_paper_scissors", as_supervised=True, split=["train", "test"]
)
train_dataset = (
    train_dataset.batch(BATCH_SIZE)
    .map(
        lambda images, labels: {
            "images": tf.cast(images, "float32") / 255.0,
            "labels": tf.one_hot(labels, NUM_CLASSES),
        }
    )
    .map(ka_layers.vision.Resize(INPUT_SIZE))
    .shuffle(128)
    .map(ka_layers.vision.RandAugment())
    .map(ka_layers.vision.CutMix(num_classes=NUM_CLASSES))
    .map(ka_layers.vision.Rescale(scale=2.0, offset=-1))  # [0, 1] to [-1, 1]
    .map(lambda data: (data["images"], data["labels"]))
    .prefetch(tf.data.AUTOTUNE)
)
validation_dataset = (
    validation_dataset.batch(BATCH_SIZE)
    .map(
        lambda images, labels: {
            "images": tf.cast(images, "float32") / 255.0,
            "labels": tf.one_hot(labels, NUM_CLASSES),
        }
    )
    .map(ka_layers.vision.Resize(INPUT_SIZE))
    .map(ka_layers.vision.Rescale(scale=2.0, offset=-1))  # [0, 1] to [-1, 1]
    .map(lambda data: (data["images"], data["labels"]))
    .prefetch(tf.data.AUTOTUNE)
)

# Create a model using MobileNetV2 as the backbone.
backbone = keras.applications.MobileNetV2(
    input_shape=(*INPUT_SIZE, 3), include_top=False
)
backbone.trainable = False
inputs = keras.Input((*INPUT_SIZE, 3))
x = backbone(inputs)
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.summary()
model.compile(
    loss="categorical_crossentropy",
    optimizer=keras.optimizers.SGD(learning_rate=1e-3, momentum=0.9),
    metrics=["accuracy"],
)

# Train and evaluate your model
model.fit(train_dataset, validation_data=validation_dataset, epochs=8)
model.evaluate(validation_dataset)
```

The above example runs with all backends (JAX, TensorFlow, Torch).
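
As a hypothetical follow-up (not part of the original example), the trained model can then be used for inference on a batch from the validation pipeline:

```python
import numpy as np

# Take one preprocessed batch from the validation pipeline and compare
# predicted classes against the one-hot labels.
images, labels = next(iter(validation_dataset))
probs = model.predict(images)
predicted = np.argmax(probs, axis=-1)
actual = np.argmax(np.asarray(labels), axis=-1)
print("batch accuracy:", (predicted == actual).mean())
```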

### More Examples

- [YOLOV8 object detection pipeline](guides/voc_yolov8_aug.py) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1AgnnvfTRMHKq--7gvmHP7RyxTeQResV4?usp=sharing)

- [YOLOV8 semantic segmentation pipeline](guides/oxford_yolov8_aug.py) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1IJwUPiHreO7iIJ3VewgfLRoBdFdcQcJE?usp=sharing)

## Gradio App

The demo app hosted on [Hugging Face Spaces](https://huggingface.co/spaces/james77777778/KerasAug) is built with Gradio and can be deployed from the repository with:

```bash
gradio deploy
```

## Citing KerasAug

```bibtex
@misc{chiu2023kerasaug,
  title={KerasAug},
  author={Chiu, Hong-Yu},
  year={2023},
  howpublished={\url{https://github.com/james77777778/keras-aug}},
}
```

            
