fastxtend


Name: fastxtend
Version: 0.1.7
Home page: https://github.com/warner-benjamin/fastxtend
Summary: Train fastai models faster (and other useful tools)
Upload time: 2023-12-18 15:29:50
Author: Benjamin Warner
Requires Python: >=3.8
License: MIT License
Keywords: fastai, pytorch, deep-learning
# fastxtend

### Train fastai models faster (and other useful tools)

![fastxtend accelerates
fastai](https://github.com/warner-benjamin/fastxtend/blob/main/nbs/images/imagenette_benchmark.png?raw=true)

Train fastai models faster with fastxtend’s [fused
optimizers](https://fastxtend.benjaminwarner.dev/optimizer.fused.html),
[Progressive
Resizing](https://fastxtend.benjaminwarner.dev/callback.progresize.html)
callback, integrated [FFCV
DataLoader](https://fastxtend.benjaminwarner.dev/ffcv.tutorial.html),
and integrated [PyTorch
Compile](https://fastxtend.benjaminwarner.dev/callback.compiler.html)
support.

## Feature overview

**Train Models Faster**

- Drop in [fused
  optimizers](https://fastxtend.benjaminwarner.dev/optimizer.fused.html),
  which are 21% to 293% faster than fastai native optimizers.
- Up to 75% optimizer memory savings with integrated
  [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) [8-bit
  optimizers](https://fastxtend.benjaminwarner.dev/optimizer.eightbit.html).
- Increase GPU throughput and decrease training time with the
  [Progressive
  Resizing](https://fastxtend.benjaminwarner.dev/callback.progresize.html)
  callback.
- Use the highly optimized [FFCV
  DataLoader](https://fastxtend.benjaminwarner.dev/ffcv.tutorial.html),
  fully integrated with fastai.
- Integrated support for `torch.compile` via the
  [Compile](https://fastxtend.benjaminwarner.dev/callback.compiler.html)
  callbacks (see the combined sketch after this list).
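
These features compose in a single `Learner`. Below is a minimal sketch,
assuming `dls` and `model` are placeholders for your fastai `DataLoaders`
and PyTorch model; the `adam`, `ProgressiveResize`, and `compile` APIs are
the ones shown in the Examples section below:

``` python
from fastai.vision.all import *
from fastxtend.vision.all import *

# `dls` and `model` are assumed placeholders for your data and model
learn = Learner(
    dls, model,
    opt_func=adam(foreach=True),  # fused ForEach optimizer
    cbs=ProgressiveResize(),      # progressively increase image size while training
).compile()                       # torch.compile via the Compile callback

learn.fit_one_cycle(5, 3e-3)
```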

**General Features**

- Fused implementations of modern optimizers, such as
  [Adan](https://fastxtend.benjaminwarner.dev/optimizer.adan.html),
  [Lion](https://fastxtend.benjaminwarner.dev/optimizer.lion.html), &
  [StableAdam](https://fastxtend.benjaminwarner.dev/optimizer.stableadam.html).
- Hugging Face [Transformers
  compatibility](https://fastxtend.benjaminwarner.dev/text.huggingface.html)
  with fastai.
- Flexible [metrics](https://fastxtend.benjaminwarner.dev/metrics.html)
  which can log on train, valid, or both. Backwards compatible with
  fastai metrics.
- Easily use [multiple
  losses](https://fastxtend.benjaminwarner.dev/multiloss.html) and log
  each individual loss on train and valid.
- [Multiple
  profilers](https://fastxtend.benjaminwarner.dev/callback.profiler.html)
  for profiling training and identifying bottlenecks.
- A fast [Exponential Moving
  Average](https://fastxtend.benjaminwarner.dev/callback.ema.html)
  callback for smoother training (see the sketch after this list).
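
For example, a minimal sketch of the EMA callback, assuming the class
exported by the callback.ema module is named `EMACallback` and using its
defaults (`dls` and `model` are placeholders):

``` python
from fastai.vision.all import *
from fastxtend.vision.all import *

# EMACallback (assumed name, per the callback.ema docs) maintains an
# exponential moving average of the model weights during training
learn = Learner(dls, model, cbs=EMACallback())
learn.fit_one_cycle(5, 3e-3)
```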

**Vision**

- Apply
  [`MixUp`](https://fastxtend.benjaminwarner.dev/callback.cutmixup.html#mixup),
  [`CutMix`](https://fastxtend.benjaminwarner.dev/callback.cutmixup.html#cutmix),
  or Augmentations at once with
  [`CutMixUp`](https://fastxtend.benjaminwarner.dev/callback.cutmixup.html#cutmixup)
  or
  [`CutMixUpAugment`](https://fastxtend.benjaminwarner.dev/callback.cutmixup.html#cutmixupaugment)
  (see the sketch after this list).
- Additional [image
  augmentations](https://fastxtend.benjaminwarner.dev/vision.augment.batch.html).
- Support for running fastai [batch transforms on
  CPU](https://fastxtend.benjaminwarner.dev/vision.data.html).
- More
  [attention](https://fastxtend.benjaminwarner.dev/vision.models.attention_modules.html)
  and
  [pooling](https://fastxtend.benjaminwarner.dev/vision.models.pooling.html)
  modules.
- A flexible implementation of fastai’s
  [`XResNet`](https://fastxtend.benjaminwarner.dev/vision.models.xresnet.html#xresnet).
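
As an illustration, a minimal sketch applying `CutMixUpAugment` with its
default arguments (assumed; `dls` and `model` are placeholders, and see
the callback.cutmixup docs for the actual constructor options):

``` python
from fastai.vision.all import *
from fastxtend.vision.all import *

# CutMixUpAugment randomly applies MixUp, CutMix, or batch augmentations
# to each training batch (constructor defaults assumed)
learn = Learner(dls, model, cbs=CutMixUpAugment())
learn.fit_one_cycle(5, 3e-3)
```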

Check out the documentation for additional splitters, callbacks,
schedulers, utilities, and more.

## Documentation

<https://fastxtend.benjaminwarner.dev>

## Install

fastxtend is available on PyPI:

``` bash
pip install fastxtend
```

fastxtend can be installed with task-specific dependencies for `vision`,
`ffcv`, `text`, `audio`, or `all`:

``` bash
pip install "fastxtend[all]"
```
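
For example, to install only the vision dependencies:

``` bash
pip install "fastxtend[vision]"
```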

To easily install most prerequisites for all fastxtend features, use
[Conda](https://docs.conda.io/en/latest) or
[Miniconda](https://docs.conda.io/en/latest/miniconda.html):

``` bash
conda create -n fastxtend python=3.11 "pytorch>=2.1" torchvision torchaudio \
pytorch-cuda=12.1 fastai nbdev pkg-config libjpeg-turbo opencv tqdm psutil \
terminaltables numpy "numba>=0.57" librosa timm kornia rich typer wandb \
"transformers>=4.34" "tokenizers>=0.14" "datasets>=2.14" ipykernel ipywidgets \
"matplotlib<3.8" -c pytorch -c nvidia -c fastai -c huggingface -c conda-forge

conda activate fastxtend

pip install "fastxtend[all]"
```

Replace `pytorch-cuda=12.1` with your preferred [supported version of
CUDA](https://pytorch.org/get-started/locally).

To create an editable development install:

``` bash
git clone https://github.com/warner-benjamin/fastxtend.git
cd fastxtend
pip install -e ".[dev]"
```

## Usage

Like fastai, fastxtend provides safe wildcard imports using Python's
`__all__`.

``` python
from fastai.vision.all import *
from fastxtend.vision.all import *
from fastxtend.ffcv.all import *
```

In general, import fastxtend after all fastai imports, as fastxtend
modifies fastai. Any method modified by fastxtend is backwards
compatible with the original fastai code.

## Examples

Use a fused ForEach optimizer:

``` python
Learner(..., opt_func=adam(foreach=True))
```

Or a bitsandbytes 8-bit optimizer:

``` python
Learner(..., opt_func=adam(eightbit=True))
```

Speed up image training using Progressive Resizing:

``` python
Learner(..., cbs=ProgressiveResize())
```

Log an accuracy metric as a smoothed metric on the training set and as a
normal metric on the validation set:

``` python
Learner(..., metrics=[Accuracy(log_metric=LogMetric.Train, metric_type=MetricType.Smooth),
                      Accuracy()])
```

Log multiple losses as individual metrics on train and valid:

``` python
mloss = MultiLoss(loss_funcs=[nn.MSELoss, nn.L1Loss],
                  weights=[1, 3.5], loss_names=['mse_loss', 'l1_loss'])

Learner(..., loss_func=mloss, metrics=RMSE(), cbs=MultiLossCallback)
```

Compile a model with `torch.compile`:

``` python
from fastxtend.callback import compiler

learn = Learner(...).compile()
```

Profile a fastai training loop:

``` python
from fastxtend.callback import simpleprofiler

learn = Learner(...).profile()
learn.fit_one_cycle(2, 3e-3)
```

## Benchmark

To replicate the benchmark on your own machine, see the [example
scripts](https://github.com/warner-benjamin/fastxtend/tree/main/examples).

            
