fastgs

Name: fastgs
Version: 0.1.1
Home page: https://github.com/restlessronin/fastgs
Summary: Geospatial (Sentinel2 Multi-Spectral) support for fastai
Upload time: 2023-01-29 09:54:46
Author: restlessronin
Requires Python: >=3.7
License: Apache Software License 2.0
Keywords: geospatial, multi-spectral, sentinel2, fastai, nbdev, jupyter, notebook, python
Welcome to fastgs
=================

<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->

## Introduction

**This library is currently in *alpha*; neither the functionality nor
the API is stable.** Feedback / PRs welcome!

This library provides geospatial multi-spectral image support for
fastai. FastAI already has extensive support for RGB images in the
pipeline. I try to achieve feature parity for multi-spectral images with
this library, specifically in the context of Sentinel 2 geospatial
imaging.

## Demo Notebooks

Complete examples are provided in the following notebooks:

1.  Working with a netCDF sample:
    [KappaSet](https://www.kaggle.com/code/restlessronin/netcdf-demo-fastai-using-fastgs),
    with demo code for brightness-factor calculation by
    [@wrignj08](https://github.com/wrignj08). Shows how to load images
    with all channels stored in a single netCDF file.
2.  Working with the Kaggle [38-cloud/95-cloud Landsat
    dataset](https://www.kaggle.com/code/restlessronin/cloud95-fastai-with-fastgs-multispectral-support).
    Shows how to load images stored in a “single channel per file”
    format (which seems to be the common case).
3.  Working on a segmentation problem with a [Sentinel 2
    dataset](https://www.kaggle.com/code/restlessronin/lila-sentinel-2-segmentation-with-fastai).

These are both works in progress, optimized to showcase the features
of the library rather than to produce the best possible results. Even
so, the “cloud 95” notebook produces results comparable to other
high-quality notebooks on the same dataset.

## Install

``` sh
pip install -Uqq fastgs
```

``` sh
conda install -c restlessronin fastgs
```

## Multi-spectral visualization

One key problem this library solves is visualization of multi-spectral
data, which has more than the usual three R, G, B channels.

We introduce a new category of PyTorch tensor,
[`TensorImageMS`](https://restlessronin.github.io/fastgs/vision.core.html#tensorimagems),
which is displayed as multiple images. In addition to the normal RGB
image, it handles extra channels by displaying them as additional
images, either as sets of false-colour RGB images or as ‘monochrome’
images (one per channel).
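
The grouping idea can be sketched in plain numpy (an illustration of the
concept, not the library’s code): channel groups of length 3 become
false-colour RGB views, and length-1 groups become monochrome views.

``` python
import numpy as np

# Illustrative sketch only (plain numpy, not fastgs internals): split an
# n-channel image into display groups -- 3-channel groups become
# false-colour RGB views, 1-channel groups become monochrome views.
def group_for_display(img, groups):
    """img: (C, H, W) array; groups: list of channel-index lists (len 3 or 1)."""
    views = []
    for idx in groups:
        view = img[idx]                                   # (len(idx), H, W)
        views.append(view if len(idx) == 3 else view[0])  # mono -> (H, W)
    return views

img11 = np.random.rand(11, 4, 4)   # e.g. 11 Sentinel 2 channels
views = group_for_display(img11, [[0, 1, 2], [5, 4, 3], [9, 8, 7], [6]])
```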

There is also [experimental
support](07a_vision.core.ipynb#animating-multiple-images) (not yet
integrated into the API) for mapping multi-spectral images to an
animation of multiple images. Feedback on its usefulness is welcome!

The first use case is Sentinel 2 images, which are naturally “dark”.
Brightening multipliers, customizable per channel, can be applied
during display.
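
As a sketch of the brightening idea (plain numpy; the function name and
behaviour are assumptions for illustration, not the fastgs API):

``` python
import numpy as np

# Hypothetical per-channel brightening for display: multiply each channel
# by its own factor and clip to the displayable [0, 1] range.
def brighten(chans, factors):
    """chans: (C, H, W) values in [0, 1]; factors: (C,) multipliers."""
    return np.clip(chans * np.asarray(factors)[:, None, None], 0.0, 1.0)

dark = np.full((3, 2, 2), 0.1)   # Sentinel 2 reflectances tend to be low
bright = brighten(dark, [3.0, 3.0, 3.0])
```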

## Image data class

A high-level API,
[`MSData`](https://restlessronin.github.io/fastgs/multispectral.html#msdata),
is exposed that knows how to load multi-spectral images given a few
parameters.

``` python
from fastgs.multispectral import *
```

The following code creates a class that can load 11 Sentinel 2 channels
into a
[`TensorImageMS`](https://restlessronin.github.io/fastgs/vision.core.html#tensorimagems).
The first parameter is a descriptor that provides mapping from Sentinel
2 channels to brightening factors and other parameters specific to the
inputs. This will generally be tailored to your image dataset.

``` python
from fastgs.test.io import * # defines file naming and io for our test samples

sentinel2 = createSentinel2Descriptor()

snt12_imgs = MSData.from_files(
    sentinel2,
    # B04 and B02 are transposed so that the first 3 channels are natural R,G,B channels
    ["B04","B03","B02","B05","B06","B07","B08","B8A","B11","B12","AOT"],
    [["B04","B03","B02"],["B07","B06","B05"],["B12","B11","B8A"],["B08"]],
    get_channel_filenames,
    read_multichan_files
)
```

The second parameter is the list of channel ids to be loaded into the
image tensor, in the order in which they are loaded.

The third parameter is a list of 4 channel lists. Each channel list
describes one image that will be displayed. The lists that have 3
channel ids will map those channels to the R,G,B inputs of a
“false-colour” image. Lists with a single channel id will be mapped to
monochrome images.

In this example, we will display 4 images per MS image. The first maps
the “real” RGB channels (B04, B03, B02) of Sentinel 2 data to an RGB
image, which makes this a true-colour image. The second image maps
channels B07, B06, B05 to a false-colour image. Likewise the third image
maps B12, B11, B8A to a false-colour image. Finally the one remaining
channel B08 is mapped to a monochrome image. Thus all the channels in
the image are displayed.

The fourth parameter is a function that maps channel ids to the
filenames that provide the image data for a single channel. The final
parameter is an IO function that loads a complete `TensorImageMS` given
the list of files corresponding to the individual channels.
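
For illustration, a filename-mapping function of this kind might look like
the following (a hypothetical naming scheme; the actual
`get_channel_filenames` in `fastgs.test.io` may differ):

``` python
from pathlib import Path

# Hypothetical one-file-per-channel layout: <root>/<id>/<id>_<n>.tif.
# This just shows the shape of the contract:
# (channel ids, sample index) -> list of per-channel paths.
def channel_filenames(chn_ids, idx, root="data"):
    return [Path(root) / ch / f"{ch}_{idx}.tif" for ch in chn_ids]

files = channel_filenames(["B04", "B03", "B02"], 66)
```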

## Image display

The simplest use of the high-level wrapper class is to load an
individual MS image.

``` python
img12 = snt12_imgs.load_image(66)
img12.show()
```

    [<AxesSubplot:title={'center':'B04,B03,B02'}>,
     <AxesSubplot:title={'center':'B07,B06,B05'}>,
     <AxesSubplot:title={'center':'B12,B11,B8A'}>,
     <AxesSubplot:title={'center':'B08'}>]

![](index_files/figure-commonmark/cell-4-output-2.png)

Note that the single MS image is displayed as 4 images, each
corresponding to one of the channel lists we provided. The first image
is the true-colour image, the next 2 are false colour, and the final one
is monochrome.

## High level wrapper [`FastGS`](https://restlessronin.github.io/fastgs/multispectral.html#fastgs) for semantic segmentation

We also provide a high-level wrapper
[`FastGS`](https://restlessronin.github.io/fastgs/multispectral.html#fastgs)
which generates fastai dataloaders and learners for semantic
segmentation using unets. Providing support for other models and for
classification should be straightforward.

### [`MaskData`](https://restlessronin.github.io/fastgs/multispectral.html#maskdata)

Continuing our example, we provide mask information using a wrapper
class for segmentation mask loading (this is analogous to the
[`MSData`](https://restlessronin.github.io/fastgs/multispectral.html#msdata)
class, but for ‘normal’ `TensorImage`s).

``` python
msks = MaskData.from_files("LC",get_channel_filenames,read_mask_file,["non-building","building"])
```

### [`MSAugment`](https://restlessronin.github.io/fastgs/multispectral.html#msaugment)

We also provide a wrapper class that can specify which (if any)
augmentations to use during training and validation, using the
albumentations library (which works for multi-spectral data).

``` python
import albumentations as A
```

Here we just use demo augmentations:

``` python
augs = MSAugment.from_augs(train_aug=A.Rotate(p=1),valid_aug=A.HorizontalFlip(p=0.33))
```

Now we create the actual high-level wrapper:

``` python
fastgs = FastGS.for_training(snt12_imgs,msks,augs)
```

Create a datablock and a data loader:

``` python
db = fastgs.create_data_block()
dl = db.dataloaders(source=[66]*10,bs=8) # repeat the sample image 10 times
```

Now we can see the visualization support in action. Let’s look at some
training and validation batches (with augmentation). Each row shows the
image in 4 columns and the mask in the 5th.

``` python
from fastai.vision.all import *
from fastgs.vision.data import *
from fastgs.vision.learner import *
from fastgs.vision.augment import *
```

``` python
dl.train.show_batch(max_n=3,mskovl=False) # don't overlay mask
```

![](index_files/figure-commonmark/cell-11-output-1.png)

``` python
dl.valid.show_batch(mskovl=False)
```

![](index_files/figure-commonmark/cell-12-output-1.png)

We create and train a unet learner and look at the results. The image
is in the first 4 columns, the mask in the 5th, and the prediction in
the 6th.

``` python
learner = fastgs.create_learner(dl,reweight="avg") # weights of n > 3 channels are set to average of first 3 channels
learner.fit_one_cycle(1)
learner.show_results(mskovl=False)
```

    /opt/homebrew/Caskroom/miniforge/base/envs/fastgs/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
      warnings.warn(
    /opt/homebrew/Caskroom/miniforge/base/envs/fastgs/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=None`.
      warnings.warn(msg)

<table border="1" class="dataframe">
  <thead>
    <tr style="text-align: left;">
      <th>epoch</th>
      <th>train_loss</th>
      <th>valid_loss</th>
      <th>dice</th>
      <th>time</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>0</td>
      <td>0.872479</td>
      <td>0.691804</td>
      <td>0.044623</td>
      <td>00:27</td>
    </tr>
  </tbody>
</table>

![](index_files/figure-commonmark/cell-13-output-6.png)
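
The `reweight="avg"` idea above can be sketched as follows (a numpy
illustration of our reading of the behaviour, not the library’s actual
code): a first conv layer pretrained on 3-channel RGB input is widened to
n channels, with each extra channel given the mean of the three
pretrained channel kernels.

``` python
import numpy as np

# Sketch of "avg" reweighting: widen pretrained 3-channel conv weights to
# n input channels by giving every extra channel the mean of the three
# pretrained channel kernels.
def widen_conv_weights(w3, n_chan):
    """w3: (out, 3, k, k) pretrained kernels -> (out, n_chan, k, k)."""
    avg = w3.mean(axis=1, keepdims=True)         # (out, 1, k, k)
    extra = np.repeat(avg, n_chan - 3, axis=1)   # (out, n_chan-3, k, k)
    return np.concatenate([w3, extra], axis=1)

w11 = widen_conv_weights(np.random.rand(64, 3, 3, 3), 11)
```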

Finally, we can look at the top losses:

``` python
interp = SegmentationInterpretation.from_learner(learner)
interp.plot_top_losses(k=1,mskovl=False)
```

![](index_files/figure-commonmark/cell-14-output-5.png)

## Acknowledgements

This library is inspired by the following notebooks (and related works
by their authors):

- [@cordmaur](https://github.com/cordmaur) - Mauricio Cordeiro’s
  [multi-spectral segmentation fastai
  pipeline](https://www.kaggle.com/code/cordmaur/remotesensing-fastai2-multiband-augmentations/notebook)
- [@wrignj08](https://github.com/wrignj08) - Nick Wright’s
  [multi-spectral classification
  notebook](https://dpird-dma.github.io/blog/Multispectral-image-classification-Transfer-Learning//)

            
