lightning-flash

Name: lightning-flash
Version: 0.8.2
Home page: https://github.com/Lightning-AI/lightning-flash
Summary: Your PyTorch AI Factory - Flash enables you to easily configure and run complex AI recipes.
Upload time: 2023-06-30 13:36:28
Author: PyTorchLightning et al.
Requires Python: >=3.8
License: Apache-2.0
Keywords: deep learning, pytorch, ai
            <div align="center">

<img src="https://github.com/Lightning-AI/lightning-flash/raw/0.8.2/docs/source/_static/images/logo.svg" width="400px">


**Your PyTorch AI Factory**

---

<p align="center">
  <a href="#getting-started">Installation</a> •
  <a href="#flash-in-3-steps">Flash in 3 Steps</a> •
  <a href="https://lightning-flash.readthedocs.io/en/stable/?badge=stable">Docs</a> •
  <a href="#contribute">Contribute</a> •
  <a href="#community">Community</a> •
  <a href="https://www.lightning.ai/">Website</a> •
  <a href="#license">License</a>
</p>


[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/lightning-flash)](https://pypi.org/project/lightning-flash/)
[![PyPI Status](https://badge.fury.io/py/lightning-flash.svg)](https://badge.fury.io/py/lightning-flash)
[![Slack](https://img.shields.io/badge/slack-chat-green.svg?logo=slack)](https://www.pytorchlightning.ai/community)
[![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/Lightning-AI/pytorch-lightning/blob/master/LICENSE)

[![CI testing](https://github.com/Lightning-Universe/lightning-flash/actions/workflows/ci-testing.yml/badge.svg?event=push)](https://github.com/Lightning-Universe/lightning-flash/actions/workflows/ci-testing.yml)
[![codecov](https://codecov.io/gh/Lightning-Universe/lightning-flash/release/0.8.2/graph/badge.svg?token=oLuUr9q1vt)](https://codecov.io/gh/Lightning-Universe/lightning-flash)
[![Documentation Status](https://readthedocs.org/projects/lightning-flash/badge/?version=latest)](https://lightning-flash.readthedocs.io/en/stable/?badge=stable)
[![DOI](https://zenodo.org/badge/333857397.svg)](https://zenodo.org/badge/latestdoi/333857397)

</div>

---

<div align="center">
  Flash makes complex AI recipes for over 15 tasks across 7 data domains accessible to all.
  <br />
  In a nutshell, Flash is the production-grade research framework you always dreamed of but didn't have time to build.
</div>

## Getting Started

From PyPI:

```bash
pip install lightning-flash
```

See [our installation guide](https://lightning-flash.readthedocs.io/en/latest/installation.html) for more options.
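
Flash also publishes optional extras per data domain, so you only install the dependencies you need. A small sketch, assuming the extra names listed in the installation guide:

```bash
# Image-domain dependencies only.
pip install 'lightning-flash[image]'

# Extras can be combined, e.g. image + text.
pip install 'lightning-flash[image,text]'
```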

## Flash in 3 Steps

### Step 1: Load your data

All data loading in Flash is performed via a `from_*` classmethod on a `DataModule`.
Which `DataModule` to use and which `from_*` methods are available depends on the task you want to perform.
For example, for image segmentation where your data is stored in folders, you would use the [`from_folders` method of the `SemanticSegmentationData` class](https://lightning-flash.readthedocs.io/en/latest/reference/semantic_segmentation.html#from-folders):

```py
from flash.image import SemanticSegmentationData

dm = SemanticSegmentationData.from_folders(
    train_folder="data/CameraRGB",
    train_target_folder="data/CameraSeg",
    val_split=0.1,
    image_size=(256, 256),
    num_classes=21,
)

```

### Step 2: Configure your model

Our tasks come loaded with pre-trained backbones and (where applicable) heads.
You can view the available backbones to use with your task using [`available_backbones`](https://lightning-flash.readthedocs.io/en/latest/general/backbones.html).
Once you've chosen one, create the model:

```py
from flash.image import SemanticSegmentation

print(SemanticSegmentation.available_heads())
# ['deeplabv3', 'deeplabv3plus', 'fpn', ..., 'unetplusplus']

print(SemanticSegmentation.available_backbones("fpn"))
# ['densenet121', ..., 'xception']  # + 113 models

print(SemanticSegmentation.available_pretrained_weights("efficientnet-b0"))
# ['imagenet', 'advprop']

model = SemanticSegmentation(
    head="fpn", backbone="efficientnet-b0", pretrained="advprop", num_classes=dm.num_classes
)
```

### Step 3: Finetune!

```py
from flash import Trainer

trainer = Trainer(max_epochs=3)
trainer.finetune(model, datamodule=dm, strategy="freeze")
trainer.save_checkpoint("semantic_segmentation_model.pt")
```
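
Here `strategy="freeze"` keeps the pretrained backbone frozen for the whole run. Other built-in finetuning strategies can be passed through the same argument; a short sketch based on the finetuning docs (the epoch counts are arbitrary):

```py
from flash import Trainer

trainer = Trainer(max_epochs=10)

# Keep the backbone frozen for 5 epochs, then unfreeze everything.
trainer.finetune(model, datamodule=dm, strategy=("freeze_unfreeze", 5))

# Or don't freeze anything and train the full model from the start.
trainer.finetune(model, datamodule=dm, strategy="no_freeze")
```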

---

## PyTorch Recipes

### Make predictions with Flash!

Serve in just 2 lines:

```py
from flash.image import SemanticSegmentation

model = SemanticSegmentation.load_from_checkpoint("semantic_segmentation_model.pt")
model.serve()
```
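
By default `model.serve()` exposes a local REST endpoint you can POST to. A minimal client sketch, assuming the default `http://127.0.0.1:8000/predict` endpoint and a base64-encoded image payload (the image path is hypothetical):

```py
import base64

import requests

# Encode a local image (hypothetical path) as base64.
with open("data/CameraRGB/example.png", "rb") as f:
    imgstr = base64.b64encode(f.read()).decode("UTF-8")

# Flash serve expects the input under payload -> inputs -> data.
body = {"session": "UUID", "payload": {"inputs": {"data": imgstr}}}
resp = requests.post("http://127.0.0.1:8000/predict", json=body)
print(resp.json())
```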

Or make predictions from raw data directly:

```py
from flash import Trainer
from flash.image import SemanticSegmentationData

trainer = Trainer(strategy="ddp", accelerator="gpu", gpus=2)
dm = SemanticSegmentationData.from_folders(predict_folder="data/CameraRGB")
predictions = trainer.predict(model, dm)
```
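
`Trainer.predict` returns one entry per batch of the predict DataLoader; a quick way to inspect the raw outputs:

```py
# `predictions` is a list with one element per predict batch.
for batch in predictions:
    for sample in batch:
        print(type(sample))
```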

### Flash Training Strategies

Training strategies are state-of-the-art PyTorch training recipes that can be used with a given task.

Check out this [example](https://github.com/Lightning-AI/lightning-flash/blob/master/examples/integrations/learn2learn/image_classification_imagenette_mini.py) where the `ImageClassifier` supports 4 [Meta Learning Algorithms](https://lilianweng.github.io/lil-log/2018/11/30/meta-learning.html) from [Learn2Learn](https://github.com/learnables/learn2learn).
This is particularly useful if you use this model in production and want to make sure the model adapts quickly to its new environment with minimal labelled data.

```py
import torch

from flash.image import ImageClassifier

# `datamodule` is assumed to be created beforehand (its num_classes sets `ways`).
model = ImageClassifier(
    backbone="resnet18",
    optimizer=torch.optim.Adam,
    optimizer_kwargs={"lr": 0.001},
    training_strategy="prototypicalnetworks",
    training_strategy_kwargs={
        "epoch_length": 10 * 16,
        "meta_batch_size": 4,
        "num_tasks": 200,
        "test_num_tasks": 2000,
        "ways": datamodule.num_classes,
        "shots": 1,
        "test_ways": 5,
        "test_shots": 1,
        "test_queries": 15,
    },
)
```

In detail, the following methods are currently implemented:

* **[prototypicalnetworks](https://github.com/learnables/learn2learn/blob/master/learn2learn/algorithms/lightning/lightning_protonet.py)** : from Snell *et al.* 2017, [Prototypical Networks for Few-shot Learning](https://arxiv.org/abs/1703.05175)
* **[maml](https://github.com/learnables/learn2learn/blob/master/learn2learn/algorithms/lightning/lightning_maml.py)** : from Finn *et al.* 2017, [Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks](https://arxiv.org/abs/1703.03400)
* **[metaoptnet](https://github.com/learnables/learn2learn/blob/master/learn2learn/algorithms/lightning/lightning_metaoptnet.py)** : from Lee *et al.* 2019, [Meta-Learning with Differentiable Convex Optimization](https://arxiv.org/abs/1904.03758)
* **[anil](https://github.com/learnables/learn2learn/blob/master/learn2learn/algorithms/lightning/lightning_anil.py)** : from Raghu *et al.* 2020, [Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML](https://arxiv.org/abs/1909.09157)


### Flash Optimizers / Schedulers

With Flash, swapping among 40+ optimizers and 15+ scheduler recipes is simple. List the available optimizers and schedulers as follows:

```py
from flash.image import ImageClassifier

ImageClassifier.available_optimizers()
# ['A2GradExp', ..., 'Yogi']

ImageClassifier.available_schedulers()
# ['CosineAnnealingLR', 'CosineAnnealingWarmRestarts', ..., 'polynomial_decay_schedule_with_warmup']
```

Once you've chosen, create the model:

```py
import functools

import torch
from torch.optim.lr_scheduler import CyclicLR

from flash.image import ImageClassifier

#### The optimizer of choice can be passed as
# - String value
model = ImageClassifier(backbone="resnet18", num_classes=2, optimizer="Adam", lr_scheduler=None)

# - Callable
model = ImageClassifier(backbone="resnet18", num_classes=2, optimizer=functools.partial(torch.optim.Adadelta, eps=0.5), lr_scheduler=None)

# - Tuple[string, dict]: (the dict takes the optimizer kwargs)
model = ImageClassifier(backbone="resnet18", num_classes=2, optimizer=("Adadelta", {"eps": 0.5}), lr_scheduler=None)

#### The scheduler of choice can be passed as
# - String value
model = ImageClassifier(backbone="resnet18", num_classes=2, optimizer="Adam", lr_scheduler="constant_schedule")

# - Callable
model = ImageClassifier(backbone="resnet18", num_classes=2, optimizer="Adam", lr_scheduler=functools.partial(CyclicLR, step_size_up=1500, mode="exp_range", gamma=0.5))

# - Tuple[string, dict]: (the dict takes the scheduler kwargs)
model = ImageClassifier(backbone="resnet18", num_classes=2, optimizer="Adam", lr_scheduler=("StepLR", {"step_size": 10}))
```

You can also register your own custom scheduler recipes beforehand and use them as shown above:

```py
import torch

from flash.image import ImageClassifier

@ImageClassifier.lr_schedulers_registry
def my_steplr_recipe(optimizer):
    return torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)

model = ImageClassifier(backbone="resnet18", num_classes=2, optimizer="Adam", lr_scheduler="my_steplr_recipe")
```

### Flash Transforms


Flash includes some simple augmentations for each task by default; however, you will often want to override these and control your own augmentation recipe.
To this end, Flash supports custom transformations with the [`InputTransform`](https://lightning-flash.readthedocs.io/en/stable/api/generated/flash.core.data.io.input_transform.InputTransform.html).
The `InputTransform` is like a callback for transforms, with hooks that can be used to apply transforms to samples or batches, on and off the device / accelerator.
In addition, hooks can be specialized to apply transforms only to the input or target.
With these hooks, complex transforms like MixUp can be implemented with ease.
Here's an example (with an albumentations transform thrown in too!):

```py
import torch
import numpy as np
import albumentations
from flash import InputTransform
from flash.image import ImageClassificationData
from flash.image.classification.input_transform import AlbumentationsAdapter


def mixup(batch, alpha=1.0):
    images = batch["input"]
    targets = batch["target"].float().unsqueeze(1)

    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(images.size(0))

    batch["input"] = images * lam + images[perm] * (1 - lam)
    batch["target"] = targets * lam + targets[perm] * (1 - lam)
    return batch


class MixUpInputTransform(InputTransform):

    def train_input_per_sample_transform(self):
        return AlbumentationsAdapter(albumentations.HorizontalFlip(p=0.5))

    # This will be applied after transferring the batch to the device!
    def train_per_batch_transform_on_device(self):
        return mixup


datamodule = ImageClassificationData.from_folders(
    train_folder="data/train",
    transform=MixUpInputTransform,
    batch_size=2,
)

```

## Flash Zero - PyTorch Recipes from the Command Line!

<div align="center">
<img src="/https://github.com/Lightning-AI/lightning-flash/raw/0.8.2/docs/source/_static/images/flash_zero.gif?raw=true" width="75%">
</div>

Flash Zero is a zero-code machine learning platform built directly into lightning-flash using the [`Lightning CLI`](https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_cli.html).

To get started and view the available tasks, run:

```bash
flash --help
```

For example, to train an image classifier for 10 epochs with a `resnet50` backbone on 2 GPUs using your own data, you can do:

```bash
flash image_classification --trainer.max_epochs 10 --trainer.gpus 2 --model.backbone resnet50 from_folders --train_folder {PATH_TO_DATA}
```
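
Since Flash Zero is built on the Lightning CLI, each task also exposes its own help describing the trainer, model, and `from_*` data arguments:

```bash
# Show the options accepted by a specific task.
flash image_classification --help
```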

## Kaggle Notebook Examples

- [🚢Titanic crash with Lightning⚡Flash](https://www.kaggle.com/jirkaborovec/titanic-crash-with-lightning-flash)
- [🏠House 💵prices predictions with Lightning⚡Flash](https://www.kaggle.com/jirkaborovec/house-prices-predictions-with-lightning-flash)
- [Playing 📋tabular with Lightning⚡Flash](https://www.kaggle.com/jirkaborovec/playing-tabular-with-lightning-flash)
- [🙊Toxic comments with Lightning⚡Flash](https://www.kaggle.com/jirkaborovec/toxic-comments-with-lightning-flash)
- [🫁COVID detection with Lightning⚡️Flash](https://www.kaggle.com/jirkaborovec/covid-detection-with-lightning-flash)


## Contribute!
The Lightning + Flash team is hard at work building more tasks for common deep-learning use cases, but we're looking for incredible contributors like you to submit new tasks!

Join our [Slack](https://www.pytorchlightning.ai/community) and/or read our [CONTRIBUTING](https://github.com/PyTorchLightning/lightning-flash/blob/master/.github/CONTRIBUTING.md) guidelines to get help becoming a contributor!

__Note:__ Flash is currently being tested on real-world use cases and is in active development. Please [open an issue](https://github.com/PyTorchLightning/lightning-flash/issues/new/choose) if you find anything that isn't working as expected.

---

## Community
Flash is maintained by our [core contributors](https://lightning-flash.readthedocs.io/en/latest/governance.html).

For help or questions, join our huge community on [Slack](https://www.pytorchlightning.ai/community)!

---

## Citations
We're excited to continue the strong legacy of open-source software and have been inspired over the years by Caffe, Theano, Keras, PyTorch, torchbearer, and [fast.ai](https://arxiv.org/abs/2002.04688). When/if additional papers are written about this, we'll be happy to cite these frameworks and the corresponding authors.

Flash leverages models from many different frameworks in order to cover such a wide range of domains and tasks. The full list of providers can be found in [our documentation](https://lightning-flash.readthedocs.io/en/latest/integrations/providers.html).

---

## License
Please observe the Apache 2.0 license that is listed in this repository.
