| Field | Value |
|---|---|
| Name | segmentation-models-pytorch |
| Version | 0.4.0 |
| Summary | Image segmentation models with pre-trained backbones. PyTorch. |
| home_page | None |
| upload_time | 2025-01-08 15:34:43 |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.9 |
| license | The MIT License Copyright (c) 2019, Pavel Iakubovskii Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. |
| keywords | |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
<div align="center">

**Python library with Neural Networks for Image
Segmentation based on [PyTorch](https://pytorch.org/).**
</div>
The main features of this library are:
- High-level API (just two lines to create a neural network)
- 11 model architectures for binary and multi-class segmentation (including the legendary Unet)
- 124 available encoders (and 500+ encoders from [timm](https://github.com/rwightman/pytorch-image-models))
- All encoders have pre-trained weights for faster and better convergence
- Popular metrics and losses for training routines
### [📚 Project Documentation 📚](http://smp.readthedocs.io/)
Visit the [Read The Docs project page](https://smp.readthedocs.io/) or read the following README to learn more about the Segmentation Models PyTorch (SMP) library.
### 📋 Table of contents
 1. [Quick start](#start)
 2. [Examples](#examples)
 3. [Models](#models)
    1. [Architectures](#architectures)
    2. [Encoders](#encoders)
    3. [Timm Encoders](#timm)
 4. [Models API](#api)
    1. [Input channels](#input-channels)
    2. [Auxiliary classification output](#auxiliary-classification-output)
    3. [Depth](#depth)
 5. [Installation](#installation)
 6. [Competitions won with the library](#competitions-won-with-the-library)
 7. [Contributing](#contributing)
 8. [Citing](#citing)
 9. [License](#license)
### ⏳ Quick start <a name="start"></a>
#### 1. Create your first Segmentation model with SMP
The segmentation model is just a PyTorch `torch.nn.Module`, which can be created as easily as:
```python
import segmentation_models_pytorch as smp
model = smp.Unet(
    encoder_name="resnet34",        # choose encoder, e.g. mobilenet_v2 or efficientnet-b7
    encoder_weights="imagenet",     # use `imagenet` pre-trained weights for encoder initialization
    in_channels=1,                  # model input channels (1 for gray-scale images, 3 for RGB, etc.)
    classes=3,                      # model output channels (number of classes in your dataset)
)
```
- see [table](#architectures) with available model architectures
- see [table](#encoders) with available encoders and their corresponding weights
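To sanity-check the created model, you can run a dummy forward pass. The snippet below is a minimal sketch (not from the original docs); with the default encoder depth of 5, input height and width should be divisible by 32.
```python
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet", in_channels=1, classes=3)
model.eval()

with torch.no_grad():
    dummy = torch.randn(1, 1, 256, 256)  # one single-channel 256x256 image
    mask = model(dummy)                  # raw per-class logits, same spatial size as the input

print(mask.shape)  # torch.Size([1, 3, 256, 256])
```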
#### 2. Configure data preprocessing
All encoders have pretrained weights. Preparing your data the same way as during weights pre-training may give you better results (a higher metric score and faster convergence). This is **not necessary** if you train the whole model rather than only the decoder.
```python
from segmentation_models_pytorch.encoders import get_preprocessing_fn
preprocess_input = get_preprocessing_fn('resnet18', pretrained='imagenet')
```
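The returned function operates on channels-last `numpy` arrays. A minimal usage sketch (the random image below is only a stand-in for your own data):
```python
import numpy as np
import torch
from segmentation_models_pytorch.encoders import get_preprocessing_fn

preprocess_input = get_preprocessing_fn('resnet18', pretrained='imagenet')

# dummy RGB image in HWC layout; the function normalizes it with the
# statistics used during the encoder's pre-training
image = np.random.randint(0, 255, size=(256, 256, 3)).astype("float32")
image = preprocess_input(image)

# convert to the NCHW float tensor expected by the model
x = torch.from_numpy(image).permute(2, 0, 1).unsqueeze(0).float()
```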
Congratulations! You are done! Now you can train your model with your favorite framework!
### 💡 Examples <a name="examples"></a>
- Training a model for pet binary segmentation with Pytorch-Lightning: [notebook](https://github.com/qubvel/segmentation_models.pytorch/blob/main/examples/binary_segmentation_intro.ipynb) ([Open in Colab](https://colab.research.google.com/github/qubvel/segmentation_models.pytorch/blob/main/examples/binary_segmentation_intro.ipynb))
- Training a model for car segmentation on the CamVid dataset: [here](https://github.com/qubvel/segmentation_models.pytorch/blob/main/examples/cars%20segmentation%20(camvid).ipynb).
- Training an SMP model with [Catalyst](https://github.com/catalyst-team/catalyst) (high-level framework for PyTorch), [TTAch](https://github.com/qubvel/ttach) (TTA library for PyTorch) and [Albumentations](https://github.com/albu/albumentations) (fast image augmentation library): [here](https://github.com/catalyst-team/catalyst/blob/v21.02rc0/examples/notebooks/segmentation-tutorial.ipynb) ([Open in Colab](https://colab.research.google.com/github/catalyst-team/catalyst/blob/v21.02rc0/examples/notebooks/segmentation-tutorial.ipynb))
- Training an SMP model with the [Pytorch-Lightning](https://pytorch-lightning.readthedocs.io) framework: [here](https://github.com/ternaus/cloths_segmentation) (clothes binary segmentation by [@ternaus](https://github.com/ternaus)).
- Exporting a trained model to ONNX: [notebook](https://github.com/qubvel/segmentation_models.pytorch/blob/main/examples/convert_to_onnx.ipynb) ([Open in Colab](https://colab.research.google.com/github/qubvel/segmentation_models.pytorch/blob/main/examples/convert_to_onnx.ipynb))
### 📦 Models <a name="models"></a>
#### Architectures <a name="architectures"></a>
- Unet [[paper](https://arxiv.org/abs/1505.04597)] [[docs](https://smp.readthedocs.io/en/latest/models.html#unet)]
- Unet++ [[paper](https://arxiv.org/pdf/1807.10165.pdf)] [[docs](https://smp.readthedocs.io/en/latest/models.html#id2)]
- MAnet [[paper](https://ieeexplore.ieee.org/abstract/document/9201310)] [[docs](https://smp.readthedocs.io/en/latest/models.html#manet)]
- Linknet [[paper](https://arxiv.org/abs/1707.03718)] [[docs](https://smp.readthedocs.io/en/latest/models.html#linknet)]
- FPN [[paper](http://presentations.cocodataset.org/COCO17-Stuff-FAIR.pdf)] [[docs](https://smp.readthedocs.io/en/latest/models.html#fpn)]
- PSPNet [[paper](https://arxiv.org/abs/1612.01105)] [[docs](https://smp.readthedocs.io/en/latest/models.html#pspnet)]
- PAN [[paper](https://arxiv.org/abs/1805.10180)] [[docs](https://smp.readthedocs.io/en/latest/models.html#pan)]
- DeepLabV3 [[paper](https://arxiv.org/abs/1706.05587)] [[docs](https://smp.readthedocs.io/en/latest/models.html#deeplabv3)]
- DeepLabV3+ [[paper](https://arxiv.org/abs/1802.02611)] [[docs](https://smp.readthedocs.io/en/latest/models.html#id9)]
- UPerNet [[paper](https://arxiv.org/abs/1807.10221)] [[docs](https://smp.readthedocs.io/en/latest/models.html#upernet)]
- Segformer [[paper](https://arxiv.org/abs/2105.15203)] [[docs](https://smp.readthedocs.io/en/latest/models.html#segformer)]
#### Encoders <a name="encoders"></a>
The following is a list of supported encoders in SMP. Select the appropriate family of encoders, click to expand the table, and choose a specific encoder and its pre-trained weights (the `encoder_name` and `encoder_weights` parameters).
<details>
<summary style="margin-left: 25px;">ResNet</summary>
<div style="margin-left: 25px;">
|Encoder |Weights |Params, M |
|--------------------------------|:------------------------------:|:------------------------------:|
|resnet18 |imagenet / ssl / swsl |11M |
|resnet34 |imagenet |21M |
|resnet50 |imagenet / ssl / swsl |23M |
|resnet101 |imagenet |42M |
|resnet152 |imagenet |58M |
</div>
</details>
<details>
<summary style="margin-left: 25px;">ResNeXt</summary>
<div style="margin-left: 25px;">
|Encoder |Weights |Params, M |
|--------------------------------|:------------------------------:|:------------------------------:|
|resnext50_32x4d |imagenet / ssl / swsl |22M |
|resnext101_32x4d |ssl / swsl |42M |
|resnext101_32x8d |imagenet / instagram / ssl / swsl|86M |
|resnext101_32x16d |instagram / ssl / swsl |191M |
|resnext101_32x32d |instagram |466M |
|resnext101_32x48d |instagram |826M |
</div>
</details>
<details>
<summary style="margin-left: 25px;">ResNeSt</summary>
<div style="margin-left: 25px;">
|Encoder |Weights |Params, M |
|--------------------------------|:------------------------------:|:------------------------------:|
|timm-resnest14d |imagenet |8M |
|timm-resnest26d |imagenet |15M |
|timm-resnest50d |imagenet |25M |
|timm-resnest101e |imagenet |46M |
|timm-resnest200e |imagenet |68M |
|timm-resnest269e |imagenet |108M |
|timm-resnest50d_4s2x40d |imagenet |28M |
|timm-resnest50d_1s4x24d |imagenet |23M |
</div>
</details>
<details>
<summary style="margin-left: 25px;">Res2Ne(X)t</summary>
<div style="margin-left: 25px;">
|Encoder |Weights |Params, M |
|--------------------------------|:------------------------------:|:------------------------------:|
|timm-res2net50_26w_4s |imagenet |23M |
|timm-res2net101_26w_4s |imagenet |43M |
|timm-res2net50_26w_6s |imagenet |35M |
|timm-res2net50_26w_8s |imagenet |46M |
|timm-res2net50_48w_2s |imagenet |23M |
|timm-res2net50_14w_8s |imagenet |23M |
|timm-res2next50 |imagenet |22M |
</div>
</details>
<details>
<summary style="margin-left: 25px;">RegNet(x/y)</summary>
<div style="margin-left: 25px;">
|Encoder |Weights |Params, M |
|--------------------------------|:------------------------------:|:------------------------------:|
|timm-regnetx_002 |imagenet |2M |
|timm-regnetx_004 |imagenet |4M |
|timm-regnetx_006 |imagenet |5M |
|timm-regnetx_008 |imagenet |6M |
|timm-regnetx_016 |imagenet |8M |
|timm-regnetx_032 |imagenet |14M |
|timm-regnetx_040 |imagenet |20M |
|timm-regnetx_064 |imagenet |24M |
|timm-regnetx_080 |imagenet |37M |
|timm-regnetx_120 |imagenet |43M |
|timm-regnetx_160 |imagenet |52M |
|timm-regnetx_320 |imagenet |105M |
|timm-regnety_002 |imagenet |2M |
|timm-regnety_004 |imagenet |3M |
|timm-regnety_006 |imagenet |5M |
|timm-regnety_008 |imagenet |5M |
|timm-regnety_016 |imagenet |10M |
|timm-regnety_032 |imagenet |17M |
|timm-regnety_040 |imagenet |19M |
|timm-regnety_064 |imagenet |29M |
|timm-regnety_080 |imagenet |37M |
|timm-regnety_120 |imagenet |49M |
|timm-regnety_160 |imagenet |80M |
|timm-regnety_320 |imagenet |141M |
</div>
</details>
<details>
<summary style="margin-left: 25px;">GERNet</summary>
<div style="margin-left: 25px;">
|Encoder |Weights |Params, M |
|--------------------------------|:------------------------------:|:------------------------------:|
|timm-gernet_s |imagenet |6M |
|timm-gernet_m |imagenet |18M |
|timm-gernet_l |imagenet |28M |
</div>
</details>
<details>
<summary style="margin-left: 25px;">SE-Net</summary>
<div style="margin-left: 25px;">
|Encoder |Weights |Params, M |
|--------------------------------|:------------------------------:|:------------------------------:|
|senet154 |imagenet |113M |
|se_resnet50 |imagenet |26M |
|se_resnet101 |imagenet |47M |
|se_resnet152 |imagenet |64M |
|se_resnext50_32x4d |imagenet |25M |
|se_resnext101_32x4d |imagenet |46M |
</div>
</details>
<details>
<summary style="margin-left: 25px;">SK-ResNe(X)t</summary>
<div style="margin-left: 25px;">
|Encoder |Weights |Params, M |
|--------------------------------|:------------------------------:|:------------------------------:|
|timm-skresnet18 |imagenet |11M |
|timm-skresnet34 |imagenet |21M |
|timm-skresnext50_32x4d |imagenet |25M |
</div>
</details>
<details>
<summary style="margin-left: 25px;">DenseNet</summary>
<div style="margin-left: 25px;">
|Encoder |Weights |Params, M |
|--------------------------------|:------------------------------:|:------------------------------:|
|densenet121 |imagenet |6M |
|densenet169 |imagenet |12M |
|densenet201 |imagenet |18M |
|densenet161 |imagenet |26M |
</div>
</details>
<details>
<summary style="margin-left: 25px;">Inception</summary>
<div style="margin-left: 25px;">
|Encoder |Weights |Params, M |
|--------------------------------|:------------------------------:|:------------------------------:|
|inceptionresnetv2 |imagenet / imagenet+background |54M |
|inceptionv4 |imagenet / imagenet+background |41M |
|xception |imagenet |22M |
</div>
</details>
<details>
<summary style="margin-left: 25px;">EfficientNet</summary>
<div style="margin-left: 25px;">
|Encoder |Weights |Params, M |
|--------------------------------|:------------------------------:|:------------------------------:|
|efficientnet-b0 |imagenet |4M |
|efficientnet-b1 |imagenet |6M |
|efficientnet-b2 |imagenet |7M |
|efficientnet-b3 |imagenet |10M |
|efficientnet-b4 |imagenet |17M |
|efficientnet-b5 |imagenet |28M |
|efficientnet-b6 |imagenet |40M |
|efficientnet-b7 |imagenet |63M |
|timm-efficientnet-b0 |imagenet / advprop / noisy-student|4M |
|timm-efficientnet-b1 |imagenet / advprop / noisy-student|6M |
|timm-efficientnet-b2 |imagenet / advprop / noisy-student|7M |
|timm-efficientnet-b3 |imagenet / advprop / noisy-student|10M |
|timm-efficientnet-b4 |imagenet / advprop / noisy-student|17M |
|timm-efficientnet-b5 |imagenet / advprop / noisy-student|28M |
|timm-efficientnet-b6 |imagenet / advprop / noisy-student|40M |
|timm-efficientnet-b7 |imagenet / advprop / noisy-student|63M |
|timm-efficientnet-b8 |imagenet / advprop |84M |
|timm-efficientnet-l2 |noisy-student |474M |
|timm-efficientnet-lite0 |imagenet |4M |
|timm-efficientnet-lite1 |imagenet |5M |
|timm-efficientnet-lite2 |imagenet |6M |
|timm-efficientnet-lite3 |imagenet |8M |
|timm-efficientnet-lite4 |imagenet |13M |
</div>
</details>
<details>
<summary style="margin-left: 25px;">MobileNet</summary>
<div style="margin-left: 25px;">
|Encoder |Weights |Params, M |
|--------------------------------|:------------------------------:|:------------------------------:|
|mobilenet_v2 |imagenet |2M |
|timm-mobilenetv3_large_075 |imagenet |1.78M |
|timm-mobilenetv3_large_100 |imagenet |2.97M |
|timm-mobilenetv3_large_minimal_100|imagenet |1.41M |
|timm-mobilenetv3_small_075 |imagenet |0.57M |
|timm-mobilenetv3_small_100 |imagenet |0.93M |
|timm-mobilenetv3_small_minimal_100|imagenet |0.43M |
</div>
</details>
<details>
<summary style="margin-left: 25px;">DPN</summary>
<div style="margin-left: 25px;">
|Encoder |Weights |Params, M |
|--------------------------------|:------------------------------:|:------------------------------:|
|dpn68 |imagenet |11M |
|dpn68b |imagenet+5k |11M |
|dpn92 |imagenet+5k |34M |
|dpn98 |imagenet |58M |
|dpn107 |imagenet+5k |84M |
|dpn131 |imagenet |76M |
</div>
</details>
<details>
<summary style="margin-left: 25px;">VGG</summary>
<div style="margin-left: 25px;">
|Encoder |Weights |Params, M |
|--------------------------------|:------------------------------:|:------------------------------:|
|vgg11 |imagenet |9M |
|vgg11_bn |imagenet |9M |
|vgg13 |imagenet |9M |
|vgg13_bn |imagenet |9M |
|vgg16 |imagenet |14M |
|vgg16_bn |imagenet |14M |
|vgg19 |imagenet |20M |
|vgg19_bn |imagenet |20M |
</div>
</details>
<details>
<summary style="margin-left: 25px;">Mix Vision Transformer</summary>
<div style="margin-left: 25px;">
Backbone from SegFormer pretrained on ImageNet! It can be used with the other decoders in the package; you can combine Mix Vision Transformer with Unet, FPN and others!
Limitations:
- the encoder is **not** supported by Linknet and Unet++
- the encoder is supported by FPN only for encoder **depth = 5**
|Encoder |Weights |Params, M |
|--------------------------------|:------------------------------:|:------------------------------:|
|mit_b0 |imagenet |3M |
|mit_b1 |imagenet |13M |
|mit_b2 |imagenet |24M |
|mit_b3 |imagenet |44M |
|mit_b4 |imagenet |60M |
|mit_b5 |imagenet |81M |
</div>
</details>
<details>
<summary style="margin-left: 25px;">MobileOne</summary>
<div style="margin-left: 25px;">
Apple's "sub-one-ms" backbone pretrained on ImageNet! It can be used with all decoders.
Note: in the official GitHub repo, the s0 variant has additional `num_conv_branches`, leading to more params than s1.
|Encoder |Weights |Params, M |
|--------------------------------|:------------------------------:|:------------------------------:|
|mobileone_s0 |imagenet |4.6M |
|mobileone_s1 |imagenet |4.0M |
|mobileone_s2 |imagenet |6.5M |
|mobileone_s3 |imagenet |8.8M |
|mobileone_s4 |imagenet |13.6M |
</div>
</details>
\* `ssl`, `swsl` - semi-supervised and weakly-supervised learning on ImageNet ([repo](https://github.com/facebookresearch/semi-supervised-ImageNet1K-models)).
#### Timm Encoders <a name="timm"></a>
[docs](https://smp.readthedocs.io/en/latest/encoders_timm.html)
PyTorch Image Models (a.k.a. timm) provides a lot of pretrained models and an interface that allows these models to be used as encoders in SMP; however, not all models are supported:
- not all transformer models have the ``features_only`` functionality required for an encoder
- some models have inappropriate strides
Total number of supported encoders: 549
- [table with available encoders](https://smp.readthedocs.io/en/latest/encoders_timm.html)
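According to the linked docs, timm encoders are referenced with a `tu-` prefix in front of the timm model name. A hedged sketch (the specific model name is only an example, and availability depends on your installed timm version):
```python
import segmentation_models_pytorch as smp

# the "tu-" prefix tells SMP to build the encoder through timm's features_only interface
model = smp.Unet(
    encoder_name="tu-resnet18",   # any supported timm model name after the prefix
    encoder_weights="imagenet",
    in_channels=3,
    classes=2,
)
```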
### 🔁 Models API <a name="api"></a>
- `model.encoder` - pretrained backbone to extract features of different spatial resolutions
- `model.decoder` - depends on the model's architecture (`Unet`/`Linknet`/`PSPNet`/`FPN`)
- `model.segmentation_head` - last block to produce the required number of mask channels (also includes optional upsampling and activation)
- `model.classification_head` - optional block that creates a classification head on top of the encoder
- `model.forward(x)` - sequentially pass `x` through the model's encoder, decoder and segmentation head (and the classification head if specified)
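An illustrative sketch of how these components fit together at runtime (the shapes assume a `resnet34` encoder with the default depth and are only indicative):
```python
import torch
import segmentation_models_pytorch as smp

model = smp.Unet("resnet34", classes=2)
x = torch.randn(2, 3, 128, 128)

# the encoder returns a list of feature maps at progressively lower resolutions
features = model.encoder(x)
print([f.shape[-1] for f in features])  # e.g. [128, 64, 32, 16, 8, 4]

# model(x) chains encoder -> decoder -> segmentation_head
# (plus classification_head when aux_params is configured)
mask = model(x)
print(mask.shape)  # torch.Size([2, 2, 128, 128])
```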
##### Input channels
The `in_channels` parameter allows you to create models that process tensors with an arbitrary number of channels.
If you use pretrained weights from ImageNet, the weights of the first convolution are reused. For the
1-channel case it is the sum of the weights of the first convolution layer; otherwise channels are
populated with weights like `new_weight[:, i] = pretrained_weight[:, i % 3]` and then scaled with `new_weight * 3 / new_in_channels`.
```python
import torch
import segmentation_models_pytorch as smp

model = smp.FPN('resnet34', in_channels=1)
mask = model(torch.ones([1, 1, 64, 64]))
```
##### Auxiliary classification output
All models support the `aux_params` parameter, which is set to `None` by default.
If `aux_params = None`, the auxiliary classification output is not created; otherwise the
model produces not only `mask`, but also a `label` output with shape `(N, C)`.
The classification head consists of GlobalPooling->Dropout(optional)->Linear->Activation(optional) layers, which can be
configured by `aux_params` as follows:
```python
aux_params = dict(
    pooling='avg',             # one of 'avg', 'max'
    dropout=0.5,               # dropout ratio, default is None
    activation='sigmoid',      # activation function, default is None
    classes=4,                 # define number of output labels
)
model = smp.Unet('resnet34', classes=4, aux_params=aux_params)
mask, label = model(x)
```
##### Depth
The `encoder_depth` parameter specifies the number of downsampling operations in the encoder, so you can make
your model lighter by specifying a smaller depth.
```python
model = smp.Unet('resnet34', encoder_depth=4)
```
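Note that, depending on the version, the Unet decoder may require `decoder_channels` to contain exactly `encoder_depth` values (one per decoder block). A hedged sketch of a lighter model:
```python
import segmentation_models_pytorch as smp

# 4 downsampling stages in the encoder and a matching 4-entry decoder_channels
model = smp.Unet(
    'resnet34',
    encoder_depth=4,
    decoder_channels=(256, 128, 64, 32),
)
```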
### 🛠 Installation <a name="installation"></a>
PyPI version:
```bash
$ pip install segmentation-models-pytorch
```
Latest version from source:
```bash
$ pip install git+https://github.com/qubvel/segmentation_models.pytorch
```
### 🏆 Competitions won with the library
The `Segmentation Models` package is widely used in image segmentation competitions.
[Here](https://github.com/qubvel/segmentation_models.pytorch/blob/main/HALLOFFAME.md) you can find competitions, names of the winners and links to their solutions.
### 🤝 Contributing
#### Install SMP
```bash
make install_dev # create .venv, install SMP in dev mode
```
#### Run tests and code checks
```bash
make fixup # Ruff for formatting and lint checks
```
#### Update table with encoders
```bash
make table # generate a table with encoders and print to stdout
```
### 📝 Citing
```
@misc{Iakubovskii:2019,
  Author = {Pavel Iakubovskii},
  Title = {Segmentation Models Pytorch},
  Year = {2019},
  Publisher = {GitHub},
  Journal = {GitHub repository},
  Howpublished = {\url{https://github.com/qubvel/segmentation_models.pytorch}}
}
```
### 🛡️ License <a name="license"></a>
The project is primarily distributed under the [MIT License](https://github.com/qubvel/segmentation_models.pytorch/blob/main/LICENSE), while some files are subject to other licenses. Please refer to [LICENSES](licenses/LICENSES.md) and the license statements in each file for a careful check, especially for commercial use.
Raw data
```json
{
"_id": null,
"home_page": null,
"name": "segmentation-models-pytorch",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.9",
"maintainer_email": null,
"keywords": null,
"author": null,
"author_email": "Pavel Iakubovskii <qubvel@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/ef/5e/4fad09f0a0975014d895f2fe2225c547ea7d2f5c7b920288e21750fbf38f/segmentation_models_pytorch-0.4.0.tar.gz",
"platform": null,
"description": "<div align=\"center\">\n \n \n**Python library with Neural Networks for Image \nSegmentation based on [PyTorch](https://pytorch.org/).** \n\n[](https://github.com/qubvel/segmentation_models.pytorch/blob/main/LICENSE) \n[](https://github.com/qubvel/segmentation_models.pytorch/actions/workflows/tests.yml) \n[](https://smp.readthedocs.io/en/latest/) \n<br>\n[](https://pypi.org/project/segmentation-models-pytorch/) \n[](https://pepy.tech/project/segmentation-models-pytorch) \n<br>\n[](https://pepy.tech/project/segmentation-models-pytorch) \n[](https://pepy.tech/project/segmentation-models-pytorch) \n\n</div>\n\nThe main features of this library are:\n\n - High-level API (just two lines to create a neural network)\n - 11 models architectures for binary and multi class segmentation (including legendary Unet)\n - 124 available encoders (and 500+ encoders from [timm](https://github.com/rwightman/pytorch-image-models))\n - All encoders have pre-trained weights for faster and better convergence\n - Popular metrics and losses for training routines\n \n### [\ud83d\udcda Project Documentation \ud83d\udcda](http://smp.readthedocs.io/)\n\nVisit [Read The Docs Project Page](https://smp.readthedocs.io/) or read the following README to know more about Segmentation Models Pytorch (SMP for short) library\n\n### \ud83d\udccb Table of content\n 1. [Quick start](#start)\n 2. [Examples](#examples)\n 3. [Models](#models)\n 1. [Architectures](#architectures)\n 2. [Encoders](#encoders)\n 3. [Timm Encoders](#timm)\n 4. [Models API](#api)\n 1. [Input channels](#input-channels)\n 2. [Auxiliary classification output](#auxiliary-classification-output)\n 3. [Depth](#depth)\n 5. [Installation](#installation)\n 6. [Competitions won with the library](#competitions-won-with-the-library)\n 7. [Contributing](#contributing)\n 8. [Citing](#citing)\n 9. [License](#license)\n\n### \u23f3 Quick start <a name=\"start\"></a>\n\n#### 1. Create your first Segmentation model with SMP\n\nThe segmentation model is just a PyTorch `torch.nn.Module`, which can be created as easy as:\n\n```python\nimport segmentation_models_pytorch as smp\n\nmodel = smp.Unet(\n encoder_name=\"resnet34\", # choose encoder, e.g. mobilenet_v2 or efficientnet-b7\n encoder_weights=\"imagenet\", # use `imagenet` pre-trained weights for encoder initialization\n in_channels=1, # model input channels (1 for gray-scale images, 3 for RGB, etc.)\n classes=3, # model output channels (number of classes in your dataset)\n)\n```\n - see [table](#architectures) with available model architectures\n - see [table](#encoders) with available encoders and their corresponding weights\n\n#### 2. Configure data preprocessing\n\nAll encoders have pretrained weights. Preparing your data the same way as during weights pre-training may give you better results (higher metric score and faster convergence). It is **not necessary** in case you train the whole model, not only the decoder.\n\n```python\nfrom segmentation_models_pytorch.encoders import get_preprocessing_fn\n\npreprocess_input = get_preprocessing_fn('resnet18', pretrained='imagenet')\n```\n\nCongratulations! You are done! 
Now you can train your model with your favorite framework!\n\n### \ud83d\udca1 Examples <a name=\"examples\"></a>\n - Training model for pets binary segmentation with Pytorch-Lightning [notebook](https://github.com/qubvel/segmentation_models.pytorch/blob/main/examples/binary_segmentation_intro.ipynb) and [](https://colab.research.google.com/github/qubvel/segmentation_models.pytorch/blob/main/examples/binary_segmentation_intro.ipynb)\n - Training model for cars segmentation on CamVid dataset [here](https://github.com/qubvel/segmentation_models.pytorch/blob/main/examples/cars%20segmentation%20(camvid).ipynb).\n - Training SMP model with [Catalyst](https://github.com/catalyst-team/catalyst) (high-level framework for PyTorch), [TTAch](https://github.com/qubvel/ttach) (TTA library for PyTorch) and [Albumentations](https://github.com/albu/albumentations) (fast image augmentation library) - [here](https://github.com/catalyst-team/catalyst/blob/v21.02rc0/examples/notebooks/segmentation-tutorial.ipynb) [](https://colab.research.google.com/github/catalyst-team/catalyst/blob/v21.02rc0/examples/notebooks/segmentation-tutorial.ipynb)\n - Training SMP model with [Pytorch-Lightning](https://pytorch-lightning.readthedocs.io) framework - [here](https://github.com/ternaus/cloths_segmentation) (clothes binary segmentation by [@ternaus](https://github.com/ternaus)).\n - Export trained model to ONNX - [notebook](https://github.com/qubvel/segmentation_models.pytorch/blob/main/examples/convert_to_onnx.ipynb) [](https://colab.research.google.com/github/qubvel/segmentation_models.pytorch/blob/main/examples/convert_to_onnx.ipynb)\n\n### \ud83d\udce6 Models <a name=\"models\"></a>\n\n#### Architectures <a name=\"architectures\"></a>\n - Unet [[paper](https://arxiv.org/abs/1505.04597)] [[docs](https://smp.readthedocs.io/en/latest/models.html#unet)]\n - Unet++ [[paper](https://arxiv.org/pdf/1807.10165.pdf)] [[docs](https://smp.readthedocs.io/en/latest/models.html#id2)]\n - MAnet [[paper](https://ieeexplore.ieee.org/abstract/document/9201310)] [[docs](https://smp.readthedocs.io/en/latest/models.html#manet)]\n - Linknet [[paper](https://arxiv.org/abs/1707.03718)] [[docs](https://smp.readthedocs.io/en/latest/models.html#linknet)]\n - FPN [[paper](http://presentations.cocodataset.org/COCO17-Stuff-FAIR.pdf)] [[docs](https://smp.readthedocs.io/en/latest/models.html#fpn)]\n - PSPNet [[paper](https://arxiv.org/abs/1612.01105)] [[docs](https://smp.readthedocs.io/en/latest/models.html#pspnet)]\n - PAN [[paper](https://arxiv.org/abs/1805.10180)] [[docs](https://smp.readthedocs.io/en/latest/models.html#pan)]\n - DeepLabV3 [[paper](https://arxiv.org/abs/1706.05587)] [[docs](https://smp.readthedocs.io/en/latest/models.html#deeplabv3)]\n - DeepLabV3+ [[paper](https://arxiv.org/abs/1802.02611)] [[docs](https://smp.readthedocs.io/en/latest/models.html#id9)]\n - UPerNet [[paper](https://arxiv.org/abs/1807.10221)] [[docs](https://smp.readthedocs.io/en/latest/models.html#upernet)]\n - Segformer [[paper](https://arxiv.org/abs/2105.15203)] [[docs](https://smp.readthedocs.io/en/latest/models.html#segformer)]\n\n#### Encoders <a name=\"encoders\"></a>\n\nThe following is a list of supported encoders in the SMP. 
Select the appropriate family of encoders and click to expand the table and select a specific encoder and its pre-trained weights (`encoder_name` and `encoder_weights` parameters).\n\n<details>\n<summary style=\"margin-left: 25px;\">ResNet</summary>\n<div style=\"margin-left: 25px;\">\n\n|Encoder |Weights |Params, M |\n|--------------------------------|:------------------------------:|:------------------------------:|\n|resnet18 |imagenet / ssl / swsl |11M |\n|resnet34 |imagenet |21M |\n|resnet50 |imagenet / ssl / swsl |23M |\n|resnet101 |imagenet |42M |\n|resnet152 |imagenet |58M |\n\n</div>\n</details>\n\n<details>\n<summary style=\"margin-left: 25px;\">ResNeXt</summary>\n<div style=\"margin-left: 25px;\">\n\n|Encoder |Weights |Params, M |\n|--------------------------------|:------------------------------:|:------------------------------:|\n|resnext50_32x4d |imagenet / ssl / swsl |22M |\n|resnext101_32x4d |ssl / swsl |42M |\n|resnext101_32x8d |imagenet / instagram / ssl / swsl|86M |\n|resnext101_32x16d |instagram / ssl / swsl |191M |\n|resnext101_32x32d |instagram |466M |\n|resnext101_32x48d |instagram |826M |\n\n</div>\n</details>\n\n<details>\n<summary style=\"margin-left: 25px;\">ResNeSt</summary>\n<div style=\"margin-left: 25px;\">\n\n|Encoder |Weights |Params, M |\n|--------------------------------|:------------------------------:|:------------------------------:|\n|timm-resnest14d |imagenet |8M |\n|timm-resnest26d |imagenet |15M |\n|timm-resnest50d |imagenet |25M |\n|timm-resnest101e |imagenet |46M |\n|timm-resnest200e |imagenet |68M |\n|timm-resnest269e |imagenet |108M |\n|timm-resnest50d_4s2x40d |imagenet |28M |\n|timm-resnest50d_1s4x24d |imagenet |23M |\n\n</div>\n</details>\n\n<details>\n<summary style=\"margin-left: 25px;\">Res2Ne(X)t</summary>\n<div style=\"margin-left: 25px;\">\n\n|Encoder |Weights |Params, M |\n|--------------------------------|:------------------------------:|:------------------------------:|\n|timm-res2net50_26w_4s |imagenet |23M |\n|timm-res2net101_26w_4s |imagenet |43M |\n|timm-res2net50_26w_6s |imagenet |35M |\n|timm-res2net50_26w_8s |imagenet |46M |\n|timm-res2net50_48w_2s |imagenet |23M |\n|timm-res2net50_14w_8s |imagenet |23M |\n|timm-res2next50 |imagenet |22M |\n\n</div>\n</details>\n\n<details>\n<summary style=\"margin-left: 25px;\">RegNet(x/y)</summary>\n<div style=\"margin-left: 25px;\">\n\n|Encoder |Weights |Params, M |\n|--------------------------------|:------------------------------:|:------------------------------:|\n|timm-regnetx_002 |imagenet |2M |\n|timm-regnetx_004 |imagenet |4M |\n|timm-regnetx_006 |imagenet |5M |\n|timm-regnetx_008 |imagenet |6M |\n|timm-regnetx_016 |imagenet |8M |\n|timm-regnetx_032 |imagenet |14M |\n|timm-regnetx_040 |imagenet |20M |\n|timm-regnetx_064 |imagenet |24M |\n|timm-regnetx_080 |imagenet |37M |\n|timm-regnetx_120 |imagenet |43M |\n|timm-regnetx_160 |imagenet |52M |\n|timm-regnetx_320 |imagenet |105M |\n|timm-regnety_002 |imagenet |2M |\n|timm-regnety_004 |imagenet |3M |\n|timm-regnety_006 |imagenet |5M |\n|timm-regnety_008 |imagenet |5M |\n|timm-regnety_016 |imagenet |10M |\n|timm-regnety_032 |imagenet |17M |\n|timm-regnety_040 |imagenet |19M |\n|timm-regnety_064 |imagenet |29M |\n|timm-regnety_080 |imagenet |37M |\n|timm-regnety_120 |imagenet |49M |\n|timm-regnety_160 |imagenet |80M |\n|timm-regnety_320 |imagenet |141M |\n\n</div>\n</details>\n\n<details>\n<summary style=\"margin-left: 25px;\">GERNet</summary>\n<div style=\"margin-left: 25px;\">\n\n|Encoder |Weights |Params, M 
|\n|--------------------------------|:------------------------------:|:------------------------------:|\n|timm-gernet_s |imagenet |6M |\n|timm-gernet_m |imagenet |18M |\n|timm-gernet_l |imagenet |28M |\n\n</div>\n</details>\n\n<details>\n<summary style=\"margin-left: 25px;\">SE-Net</summary>\n<div style=\"margin-left: 25px;\">\n\n|Encoder |Weights |Params, M |\n|--------------------------------|:------------------------------:|:------------------------------:|\n|senet154 |imagenet |113M |\n|se_resnet50 |imagenet |26M |\n|se_resnet101 |imagenet |47M |\n|se_resnet152 |imagenet |64M |\n|se_resnext50_32x4d |imagenet |25M |\n|se_resnext101_32x4d |imagenet |46M |\n\n</div>\n</details>\n\n<details>\n<summary style=\"margin-left: 25px;\">SK-ResNe(X)t</summary>\n<div style=\"margin-left: 25px;\">\n\n|Encoder |Weights |Params, M |\n|--------------------------------|:------------------------------:|:------------------------------:|\n|timm-skresnet18 |imagenet |11M |\n|timm-skresnet34 |imagenet |21M |\n|timm-skresnext50_32x4d |imagenet |25M |\n\n</div>\n</details>\n\n<details>\n<summary style=\"margin-left: 25px;\">DenseNet</summary>\n<div style=\"margin-left: 25px;\">\n\n|Encoder |Weights |Params, M |\n|--------------------------------|:------------------------------:|:------------------------------:|\n|densenet121 |imagenet |6M |\n|densenet169 |imagenet |12M |\n|densenet201 |imagenet |18M |\n|densenet161 |imagenet |26M |\n\n</div>\n</details>\n\n<details>\n<summary style=\"margin-left: 25px;\">Inception</summary>\n<div style=\"margin-left: 25px;\">\n\n|Encoder |Weights |Params, M |\n|--------------------------------|:------------------------------:|:------------------------------:|\n|inceptionresnetv2 |imagenet / imagenet+background |54M |\n|inceptionv4 |imagenet / imagenet+background |41M |\n|xception |imagenet |22M |\n\n</div>\n</details>\n\n<details>\n<summary style=\"margin-left: 25px;\">EfficientNet</summary>\n<div style=\"margin-left: 25px;\">\n\n|Encoder |Weights |Params, M |\n|--------------------------------|:------------------------------:|:------------------------------:|\n|efficientnet-b0 |imagenet |4M |\n|efficientnet-b1 |imagenet |6M |\n|efficientnet-b2 |imagenet |7M |\n|efficientnet-b3 |imagenet |10M |\n|efficientnet-b4 |imagenet |17M |\n|efficientnet-b5 |imagenet |28M |\n|efficientnet-b6 |imagenet |40M |\n|efficientnet-b7 |imagenet |63M |\n|timm-efficientnet-b0 |imagenet / advprop / noisy-student|4M |\n|timm-efficientnet-b1 |imagenet / advprop / noisy-student|6M |\n|timm-efficientnet-b2 |imagenet / advprop / noisy-student|7M |\n|timm-efficientnet-b3 |imagenet / advprop / noisy-student|10M |\n|timm-efficientnet-b4 |imagenet / advprop / noisy-student|17M |\n|timm-efficientnet-b5 |imagenet / advprop / noisy-student|28M |\n|timm-efficientnet-b6 |imagenet / advprop / noisy-student|40M |\n|timm-efficientnet-b7 |imagenet / advprop / noisy-student|63M |\n|timm-efficientnet-b8 |imagenet / advprop |84M |\n|timm-efficientnet-l2 |noisy-student |474M |\n|timm-efficientnet-lite0 |imagenet |4M |\n|timm-efficientnet-lite1 |imagenet |5M |\n|timm-efficientnet-lite2 |imagenet |6M |\n|timm-efficientnet-lite3 |imagenet |8M |\n|timm-efficientnet-lite4 |imagenet |13M |\n\n</div>\n</details>\n\n<details>\n<summary style=\"margin-left: 25px;\">MobileNet</summary>\n<div style=\"margin-left: 25px;\">\n\n|Encoder |Weights |Params, M |\n|--------------------------------|:------------------------------:|:------------------------------:|\n|mobilenet_v2 |imagenet |2M |\n|timm-mobilenetv3_large_075 |imagenet |1.78M 
|\n|timm-mobilenetv3_large_100 |imagenet |2.97M |\n|timm-mobilenetv3_large_minimal_100|imagenet |1.41M |\n|timm-mobilenetv3_small_075 |imagenet |0.57M |\n|timm-mobilenetv3_small_100 |imagenet |0.93M |\n|timm-mobilenetv3_small_minimal_100|imagenet |0.43M |\n\n</div>\n</details>\n\n<details>\n<summary style=\"margin-left: 25px;\">DPN</summary>\n<div style=\"margin-left: 25px;\">\n\n|Encoder |Weights |Params, M |\n|--------------------------------|:------------------------------:|:------------------------------:|\n|dpn68 |imagenet |11M |\n|dpn68b |imagenet+5k |11M |\n|dpn92 |imagenet+5k |34M |\n|dpn98 |imagenet |58M |\n|dpn107 |imagenet+5k |84M |\n|dpn131 |imagenet |76M |\n\n</div>\n</details>\n\n<details>\n<summary style=\"margin-left: 25px;\">VGG</summary>\n<div style=\"margin-left: 25px;\">\n\n|Encoder |Weights |Params, M |\n|--------------------------------|:------------------------------:|:------------------------------:|\n|vgg11 |imagenet |9M |\n|vgg11_bn |imagenet |9M |\n|vgg13 |imagenet |9M |\n|vgg13_bn |imagenet |9M |\n|vgg16 |imagenet |14M |\n|vgg16_bn |imagenet |14M |\n|vgg19 |imagenet |20M |\n|vgg19_bn |imagenet |20M |\n\n</div>\n</details>\n\n<details>\n<summary style=\"margin-left: 25px;\">Mix Vision Transformer</summary>\n<div style=\"margin-left: 25px;\">\n\nBackbone from SegFormer pretrained on Imagenet! Can be used with other decoders from package, you can combine Mix Vision Transformer with Unet, FPN and others!\n\nLimitations: \n\n - encoder is **not** supported by Linknet, Unet++\n - encoder is supported by FPN only for encoder **depth = 5**\n\n|Encoder |Weights |Params, M |\n|--------------------------------|:------------------------------:|:------------------------------:|\n|mit_b0 |imagenet |3M |\n|mit_b1 |imagenet |13M |\n|mit_b2 |imagenet |24M |\n|mit_b3 |imagenet |44M |\n|mit_b4 |imagenet |60M |\n|mit_b5 |imagenet |81M |\n\n</div>\n</details>\n\n<details>\n<summary style=\"margin-left: 25px;\">MobileOne</summary>\n<div style=\"margin-left: 25px;\">\n\nApple's \"sub-one-ms\" Backbone pretrained on Imagenet! Can be used with all decoders.\n\nNote: In the official github repo the s0 variant has additional num_conv_branches, leading to more params than s1.\n\n|Encoder |Weights |Params, M |\n|--------------------------------|:------------------------------:|:------------------------------:|\n|mobileone_s0 |imagenet |4.6M |\n|mobileone_s1 |imagenet |4.0M |\n|mobileone_s2 |imagenet |6.5M |\n|mobileone_s3 |imagenet |8.8M |\n|mobileone_s4 |imagenet |13.6M |\n\n</div>\n</details>\n\n\n\\* `ssl`, `swsl` - semi-supervised and weakly-supervised learning on ImageNet ([repo](https://github.com/facebookresearch/semi-supervised-ImageNet1K-models)).\n\n#### Timm Encoders <a name=\"timm\"></a>\n\n[docs](https://smp.readthedocs.io/en/latest/encoders_timm.html)\n\nPytorch Image Models (a.k.a. 
timm) has a lot of pretrained models and interface which allows using these models as encoders in smp, however, not all models are supported\n\n - not all transformer models have ``features_only`` functionality implemented that is required for encoder\n - some models have inappropriate strides\n\nTotal number of supported encoders: 549\n - [table with available encoders](https://smp.readthedocs.io/en/latest/encoders_timm.html)\n\n### \ud83d\udd01 Models API <a name=\"api\"></a>\n\n - `model.encoder` - pretrained backbone to extract features of different spatial resolution\n - `model.decoder` - depends on models architecture (`Unet`/`Linknet`/`PSPNet`/`FPN`)\n - `model.segmentation_head` - last block to produce required number of mask channels (include also optional upsampling and activation)\n - `model.classification_head` - optional block which create classification head on top of encoder\n - `model.forward(x)` - sequentially pass `x` through model\\`s encoder, decoder and segmentation head (and classification head if specified)\n\n##### Input channels\nInput channels parameter allows you to create models, which process tensors with arbitrary number of channels.\nIf you use pretrained weights from imagenet - weights of first convolution will be reused. For\n1-channel case it would be a sum of weights of first convolution layer, otherwise channels would be \npopulated with weights like `new_weight[:, i] = pretrained_weight[:, i % 3]` and than scaled with `new_weight * 3 / new_in_channels`.\n```python\nmodel = smp.FPN('resnet34', in_channels=1)\nmask = model(torch.ones([1, 1, 64, 64]))\n```\n\n##### Auxiliary classification output \nAll models support `aux_params` parameters, which is default set to `None`. \nIf `aux_params = None` then classification auxiliary output is not created, else\nmodel produce not only `mask`, but also `label` output with shape `NC`.\nClassification head consists of GlobalPooling->Dropout(optional)->Linear->Activation(optional) layers, which can be \nconfigured by `aux_params` as follows:\n```python\naux_params=dict(\n pooling='avg', # one of 'avg', 'max'\n dropout=0.5, # dropout ratio, default is None\n activation='sigmoid', # activation function, default is None\n classes=4, # define number of output labels\n)\nmodel = smp.Unet('resnet34', classes=4, aux_params=aux_params)\nmask, label = model(x)\n```\n\n##### Depth\nDepth parameter specify a number of downsampling operations in encoder, so you can make\nyour model lighter if specify smaller `depth`.\n```python\nmodel = smp.Unet('resnet34', encoder_depth=4)\n```\n\n\n### \ud83d\udee0 Installation <a name=\"installation\"></a>\nPyPI version:\n```bash\n$ pip install segmentation-models-pytorch\n````\nLatest version from source:\n```bash\n$ pip install git+https://github.com/qubvel/segmentation_models.pytorch\n````\n\n### \ud83c\udfc6 Competitions won with the library\n\n`Segmentation Models` package is widely used in the image segmentation competitions.\n[Here](https://github.com/qubvel/segmentation_models.pytorch/blob/main/HALLOFFAME.md) you can find competitions, names of the winners and links to their solutions.\n\n### \ud83e\udd1d Contributing\n\n#### Install SMP \n\n```bash\nmake install_dev # create .venv, install SMP in dev mode\n```\n\n#### Run tests and code checks\n\n```bash\nmake fixup # Ruff for formatting and lint checks\n```\n\n#### Update table with encoders \n\n```bash\nmake table # generate a table with encoders and print to stdout\n```\n\n### \ud83d\udcdd Citing\n```\n@misc{Iakubovskii:2019,\n 
Author = {Pavel Iakubovskii},\n Title = {Segmentation Models Pytorch},\n Year = {2019},\n Publisher = {GitHub},\n Journal = {GitHub repository},\n Howpublished = {\\url{https://github.com/qubvel/segmentation_models.pytorch}}\n}\n```\n\n### \ud83d\udee1\ufe0f License <a name=\"license\"></a>\nThe project is primarily distributed under [MIT License](https://github.com/qubvel/segmentation_models.pytorch/blob/main/LICENSE), while some files are subject to other licenses. Please refer to [LICENSES](licenses/LICENSES.md) and license statements in each file for careful check, especially for commercial use.\n",
"bugtrack_url": null,
"license": "The MIT License Copyright (c) 2019, Pavel Iakubovskii Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ",
"summary": "Image segmentation models with pre-trained backbones. PyTorch.",
"version": "0.4.0",
"project_urls": {
"Homepage": "https://github.com/qubvel-org/segmentation_models.pytorch"
},
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "1edd02779033f660b670c6050a35b4f9eb9e536fa6c719af4a4f59f095a2c70f",
"md5": "a340b3f688d6af6074f98b6541193719",
"sha256": "2cd95a985d7d2d87d94bddef9f0398fd16ec5f54dda0619bcc3a25555c3eb86a"
},
"downloads": -1,
"filename": "segmentation_models_pytorch-0.4.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "a340b3f688d6af6074f98b6541193719",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.9",
"size": 121262,
"upload_time": "2025-01-08T15:34:40",
"upload_time_iso_8601": "2025-01-08T15:34:40.640993Z",
"url": "https://files.pythonhosted.org/packages/1e/dd/02779033f660b670c6050a35b4f9eb9e536fa6c719af4a4f59f095a2c70f/segmentation_models_pytorch-0.4.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "ef5e4fad09f0a0975014d895f2fe2225c547ea7d2f5c7b920288e21750fbf38f",
"md5": "f49b5b21e19e31e33e038d1540dfca20",
"sha256": "8833e63f0846090667be6fce05a2bbebbd1537776d3dea72916aa3db9e22e55b"
},
"downloads": -1,
"filename": "segmentation_models_pytorch-0.4.0.tar.gz",
"has_sig": false,
"md5_digest": "f49b5b21e19e31e33e038d1540dfca20",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 78515,
"upload_time": "2025-01-08T15:34:43",
"upload_time_iso_8601": "2025-01-08T15:34:43.196499Z",
"url": "https://files.pythonhosted.org/packages/ef/5e/4fad09f0a0975014d895f2fe2225c547ea7d2f5c7b920288e21750fbf38f/segmentation_models_pytorch-0.4.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-01-08 15:34:43",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "qubvel-org",
"github_project": "segmentation_models.pytorch",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "segmentation-models-pytorch"
}
```