![logo](docs/images/dlsia.png 'the logo')
# Welcome to dlsia's documentation!
<a style="text-decoration:none !important;" href="https://dlsia.readthedocs.io/en/latest/" alt="website"><img src="https://img.shields.io/readthedocs/dlsia" /></a>
<a style="text-decoration:none !important;" href="https://opensource.org/licenses/MIT" alt="License"><img src="https://img.shields.io/badge/license-MIT-blue.svg" /></a>
<a style="text-decoration:none !important;" href="https://github.com/phzwart/dlsia" alt="Commit activity"><img src="https://img.shields.io/github/commit-activity/m/phzwart/dlsia" /></a>
![GitHub contributors](https://img.shields.io/github/contributors/phzwart/dlsia)
![GitHub code size in bytes](https://img.shields.io/github/languages/code-size/phzwart/dlsia)
dlsia (Deep Learning for Scientific Image Analysis) provides easy access to a number of segmentation and denoising
methods using convolutional neural networks. The available tools are built
with microscopy and synchrotron imaging/scattering data in mind, but can be
used elsewhere as well.
The easiest way to start playing with the code is to install dlsia and
work through the tutorial notebooks in the dlsia/tutorials folder, which
demonstrate denoising and segmentation with custom neural networks.
## Install dlsia
We offer several methods for installation.
### pip: Python package installer
We are currently working on a stable release.
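In the meantime, releases are published on PyPI, so installing with pip should work (check the PyPI page for the latest version):
```console
$ pip install dlsia
```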
### From source
dlsia may be downloaded and installed directly by cloning the public
repository into an empty directory using:
```console
$ git clone https://github.com/phzwart/dlsia.git
```
Once cloned, move to the newly minted dlsia directory and install
dlsia using:
```console
$ cd dlsia
$ pip install -e .
```
### Further documentation & tutorial download
For more in-depth documentation and end-to-end training workflows, please
visit our
[readthedocs](https://dlsia.readthedocs.io/en/latest/index.html) page.
To download only the tutorials into a new folder, use the following
terminal commands for a sparse git checkout:
```console
mkdir dlsiaTutorials
cd dlsiaTutorials
git init
git config core.sparseCheckout true
git remote add -f origin https://github.com/phzwart/dlsia.git
echo "dlsia/tutorials/*" > .git/info/sparse-checkout
git checkout main
```
## Getting started
We start with some basic imports - we import a network and some training
scripts:
```python
from dlsia.core.networks import msdnet
from dlsia.core import train_scripts
```
### Mixed-Scale dense networks (MSDNet)
![msdnet](docs/images/MSDNet_fig.png 'msdnet fig')
A plain 2d mixed-scale dense network is constructed as follows:
```python
from dlsia.core.networks import msdnet

msdnet_model = msdnet.MixedScaleDenseNetwork(in_channels=1,
                                             out_channels=1,
                                             num_layers=20,
                                             max_dilation=10)
```
while 3d networks for volumetric images can be built by passing in the
equivalent 3d operators:
```python
from torch import nn

msdnet3d_model = msdnet.MixedScaleDenseNetwork(in_channels=1,
                                               out_channels=1,
                                               num_layers=20,
                                               max_dilation=10,
                                               normalization=nn.BatchNorm3d,
                                               convolution=nn.Conv3d)
```
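As a quick sanity check (an illustrative sketch, not part of the dlsia docs), the 3d model maps a 5-D batch of shape `[batch, channels, depth, height, width]` to an output with the same spatial shape:
```python
import torch

# Dummy volumetric batch: [batch, channels, depth, height, width]
volume = torch.rand(1, 1, 16, 32, 32)
with torch.no_grad():
    out = msdnet3d_model(volume)
print(out.shape)  # expected: torch.Size([1, 1, 16, 32, 32])
```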
Note that each instance of a convolution operator is followed by ReLU
activation and batch normalization. To turn these off, simply pass in the
parameters
```python
activation=None,
normalization=None
```
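For example, a bare-bones variant of the 2d network above (a minimal sketch using the same constructor) would be:
```python
# As above, but with activation and normalization disabled
plain_msdnet = msdnet.MixedScaleDenseNetwork(in_channels=1,
                                             out_channels=1,
                                             num_layers=20,
                                             max_dilation=10,
                                             activation=None,
                                             normalization=None)
```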
### Sparse mixed-scale dense network (SMSNet)
![smsnet](docs/images/RMSNet_fig.png 'smsnet fig')
The dlsia suite also provides the means to build random, sparse mixed-scale
networks. SMSNets contain more sparsely connected nodes than a standard
MSDNet and are useful for alleviating overfitting and for multi-network
aggregation. Controlling the sparsity is possible; see the full documentation
for more details.
```python
from dlsia.core.networks import smsnet

smsnet_model = smsnet.random_SMS_network(in_channels=1,
                                         out_channels=1,
                                         layers=20,
                                         dilation_choices=[1, 2, 4, 8],
                                         hidden_out_channels=[1, 2, 3])
```
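Since each call to `random_SMS_network` draws a fresh random architecture, a natural aggregation scheme (a hedged sketch, not a dlsia API) is to build several independent SMSNets and average their predictions:
```python
import torch

# An ensemble of independently drawn random SMSNets
ensemble = [smsnet.random_SMS_network(in_channels=1,
                                      out_channels=1,
                                      layers=20,
                                      dilation_choices=[1, 2, 4, 8],
                                      hidden_out_channels=[1, 2, 3])
            for _ in range(5)]

def ensemble_predict(nets, x):
    """Average the outputs of several networks on a batch x."""
    with torch.no_grad():
        return torch.stack([net(x) for net in nets]).mean(dim=0)
```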
### Tunable U-Nets
![tunet](docs/images/UNet_fig.png 'tunet fig')
An alternative network choice is to construct a U-Net. Classic U-Nets can easily
explode in the number of parameters they require; here we make it a bit easier
to tune the desired architecture-governing parameters:
```python
from dlsia.core.networks import tunet

tunet_model = tunet.TUNet(image_shape=(64, 128),
                          in_channels=1,
                          out_channels=4,
                          base_channels=4,
                          depth=3,
                          growth_rate=1.5)
```
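To see how `depth`, `base_channels`, and `growth_rate` trade off against model size, you can count trainable parameters with plain PyTorch (an illustrative snippet, not a dlsia API):
```python
# Total number of trainable parameters in the TUNet above
n_params = sum(p.numel() for p in tunet_model.parameters() if p.requires_grad)
print(f"TUNet trainable parameters: {n_params}")
```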
## Training
### Data preparation
To prepare data for training, we make liberal use of the PyTorch DataLoader
class. This allows for easy handling of data during training and
automates the iterative loading of batches.
In the example below, we pair two NumPy arrays of shape `[num_images,
num_channels, x_size, y_size]` consisting of training images and masks, convert
them into PyTorch tensors, then initialize the DataLoader.
```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# training_imgs and training_masks are NumPy arrays of shape
# [num_images, num_channels, x_size, y_size]
train_data = TensorDataset(torch.Tensor(training_imgs),
                           torch.Tensor(training_masks))

train_loader_params = {'batch_size': 20,
                       'shuffle': True}

train_loader = DataLoader(train_data, **train_loader_params)
```
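Iterating over the loader then yields image/mask batches; a quick shape check (illustrative only) looks like:
```python
# Draw one batch from the loader and inspect its shape
imgs, masks = next(iter(train_loader))
print(imgs.shape, masks.shape)  # each [20, num_channels, x_size, y_size]
```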
### Training loop
Once your DataLoaders are constructed, training these networks is as
simple as defining a loss criterion and a torch.optim optimizer, then calling
the training script:
```python
from torch import optim, nn
from dlsia.core import helpers, train_scripts

criterion = nn.CrossEntropyLoss()   # for segmentation
optimizer = optim.Adam(tunet_model.parameters(), lr=1e-2)

device = helpers.get_device()
tunet_model = tunet_model.to(device)

# test_loader is a validation DataLoader built like train_loader above,
# and epochs is the desired number of training epochs
tunet_model, results = train_scripts.train_segmentation(net=tunet_model,
                                                        trainloader=train_loader,
                                                        validationloader=test_loader,
                                                        NUM_EPOCHS=epochs,
                                                        criterion=criterion,
                                                        optimizer=optimizer,
                                                        device=device,
                                                        show=1)
```
The output of the training script is the trained network and a dictionary with
training losses and evaluation metrics. You can view them as follows:
```python
from dlsia.viz_tools import plots
fig = plots.plot_training_results_segmentation(results)
fig.show()
```
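Once training is done, the network can be applied to unseen data. A minimal inference sketch (assuming a `test_imgs` NumPy array prepared like the training images) takes the argmax over the class channel to produce per-pixel labels:
```python
import torch

# test_imgs: NumPy array of shape [num_images, num_channels, x_size, y_size]
tunet_model.eval()
with torch.no_grad():
    logits = tunet_model(torch.Tensor(test_imgs).to(device))
predicted_masks = torch.argmax(logits, dim=1).cpu()  # one class label per pixel
```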
## Saving and loading models
Each dlsia network library contains submodules for saving trained
networks and loading them from file. Using the conventional PyTorch `.pt`
model file extension, the TUNet above may be saved with
```python
savepath = 'this_tunet.pt'
tunet_model.save_network_parameters(savepath)
```
and reloaded for future use with
```python
copy_of_tunet = tunet.TUNetwork_from_file(savepath)
```
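As a quick consistency check (illustrative only), the reloaded network should reproduce the original's output on a dummy input:
```python
import torch

# Compare original and reloaded networks on the same random input
x = torch.rand(1, 1, 64, 128)  # matches image_shape=(64, 128) above
tunet_model = tunet_model.cpu().eval()
copy_of_tunet.eval()
with torch.no_grad():
    assert torch.allclose(tunet_model(x), copy_of_tunet(x))
```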
## License and Legal Stuff
This software has been developed with funds that originate from the US
taxpayer and is free for academic use. Please have a look at the license
agreement for more details. Commercial usage will require some extra steps;
please contact ipo@lbl.gov for more details.
## Final Thoughts
This documentation is far from complete, but the tutorial notebooks that ship
with the codebase provide a good entry point.
More to come!