# saldet
> **Sal**iency **Det**ection (*saldet*) is a collection of models and tools for performing saliency detection with PyTorch on different devices (CUDA, MPS, etc.).
[![PyPI Version](https://img.shields.io/pypi/v/saldet)](https://pypi.org/project/saldet/)
[![Build Status](https://github.com/riccardomusmeci/saldet/actions/workflows/build.yaml/badge.svg)](https://github.com/riccardomusmeci/saldet/actions/workflows/build.yaml)
[![Code Coverage](https://codecov.io/gh/riccardomusmeci/saldet/branch/main/graph/badge.svg)](https://codecov.io/gh/riccardomusmeci/saldet/)
<!-- [![Documentation Status](https://readthedocs.org/projects/saldet/badge/?version=latest)](https://saldet.readthedocs.io/en/latest/?badge=latest) -->
## **Models**
List of saliency detection models supported by saldet:
* U2Net - https://arxiv.org/abs/2005.09007v3 ([U2Net repo](https://github.com/xuebinqin/U-2-Net))
* PGNet - https://arxiv.org/abs/2204.05041 (follow training instructions from [PGNet repo](https://github.com/iCVTEAM/PGNet))
* PFAN - https://arxiv.org/pdf/1903.00179v2.pdf ([PFAN repo](https://github.com/sairajk/PyTorch-Pyramid-Feature-Attention-Network-for-Saliency-Detection))
### **Weights**
* PGNet -> weights from the [PGNet repo](https://github.com/iCVTEAM/PGNet), converted to the saldet format and available [here](https://drive.google.com/file/d/1gr0lWZoCIucrV5-Z_QV23tUNd8826EjN/view?usp=share_link)
* U2Net Lite -> weights from [here](https://drive.google.com/file/d/1rbSTGKAE-MTxBYHd-51l2hMOQPT_7EPy/view?usp=sharing) (U2Net repository)
* U2Net Full -> weights from [here](https://drive.google.com/file/d/1ao1ovG1Qtx4b7EoskHXmi2E9rp5CHLcZ/view?usp=sharing) (U2Net repository)
* U2Net Full - Portrait -> weights for portrait images from [here](https://drive.google.com/file/d/1IG3HdpcRiDoWNookbncQjeaPN28t90yW/view) (U2Net repository)
* U2Net Full - Human Segmentation -> weights for segmenting humans from [here](https://drive.google.com/file/d/1-Yg0cxgrNhHP-016FPdp902BR-kSsA4P/view) (U2Net repository)
* PFAN -> weights from the [PFAN repo](https://github.com/sairajk/PyTorch-Pyramid-Feature-Attention-Network-for-Saliency-Detection), converted to the saldet format and available [here](https://drive.google.com/file/d/1z6KdZh6arQOE6R30_AxNLvCOLe00dnez/view?usp=share_link)
To load pre-trained weights:
```python
from saldet import create_model
model = create_model("pgnet", checkpoint_path="PATH/TO/pgnet.pth")
```
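The returned model is a plain `torch.nn.Module`. Below is a minimal sketch of running it on a dummy input; the 224x224 resolution and the output handling are assumptions, since some architectures return multiple side outputs and/or raw logits:
```python
import torch

model.eval()  # `model` loaded with create_model() above

# dummy 3-channel input; the expected resolution depends on the model (224x224 is an assumption)
x = torch.rand(1, 3, 224, 224)

with torch.no_grad():
    out = model(x)

# some architectures return several side outputs; keep the main one if so
if isinstance(out, (list, tuple)):
    out = out[0]

saliency = torch.sigmoid(out)  # apply sigmoid if the model returns raw logits
```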
## **Train**
### **Easy Mode**
The library provides easy model training thanks to its PyTorch Lightning support.
```python
from saldet.experiment import train
train(
    data_dir=...,
    config_path="config/u2net_lite.yaml",  # see the config folder for sample configurations
    output_dir=...,
    resume_from=...,
    seed=42
)
```
Once training is over, the configuration file and checkpoints are saved to the output directory.
**[WARNING]** The dataset must be structured as follows:
```
dataset
├── train
│   ├── images
│   │   ├── img_1.jpg
│   │   └── img_2.jpg
│   └── masks
│       ├── img_1.png
│       └── img_2.png
└── val
    ├── images
    │   ├── img_10.jpg
    │   └── img_11.jpg
    └── masks
        ├── img_10.png
        └── img_11.png
```
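Before launching a run, it can help to sanity-check that every image has a matching mask. A minimal sketch using only the standard library (not a saldet utility; the `dataset` root path is an example):
```python
from pathlib import Path

def check_split(split_dir: Path) -> None:
    """Raise if any image in split_dir/images lacks a mask in split_dir/masks."""
    images = {p.stem for p in (split_dir / "images").iterdir()}
    masks = {p.stem for p in (split_dir / "masks").iterdir()}
    missing = sorted(images - masks)
    if missing:
        raise FileNotFoundError(f"{split_dir.name}: no masks for {missing}")

for split in ("train", "val"):
    check_split(Path("dataset") / split)
```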
### **PyTorch Lightning Mode**
The library provides PyTorch Lightning modules for both the model and the data.
```python
import pytorch_lightning as pl
from saldet import create_model
from saldet.pl import SaliencyPLDataModule, SaliencyPLModel
from saldet.transform import SaliencyTransform

# datamodule
datamodule = SaliencyPLDataModule(
    root_dir=data_dir,
    train_transform=SaliencyTransform(train=True, **config["transform"]),
    val_transform=SaliencyTransform(train=False, **config["transform"]),
    **config["datamodule"],
)

# model, loss, optimizer, and scheduler
model = create_model(...)
criterion = ...
optimizer = ...
lr_scheduler = ...

pl_model = SaliencyPLModel(
    model=model, criterion=criterion, optimizer=optimizer, lr_scheduler=lr_scheduler
)

trainer = pl.Trainer(...)

# fit
print("Launching training...")
trainer.fit(model=pl_model, datamodule=datamodule)
```
### **PyTorch Mode**
Alternatively, you can define your own training loop and use the `create_model()` util to instantiate the model you like, as in the sketch below.
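As a rough sketch of what such a loop could look like (the model name, loss, and data pipeline below are placeholders and assumptions, not saldet APIs):
```python
import torch
from torch.utils.data import DataLoader
from saldet import create_model

model = create_model("u2net_lite")  # model name is an assumption; see the Models section
criterion = torch.nn.BCEWithLogitsLoss()  # placeholder loss
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
train_loader = DataLoader(...)  # your own Dataset yielding (image, mask) batches

model.train()
for epoch in range(10):
    for images, masks in train_loader:
        optimizer.zero_grad()
        preds = model(images)
        if isinstance(preds, (list, tuple)):  # some models return side outputs
            preds = preds[0]
        loss = criterion(preds, masks)
        loss.backward()
        optimizer.step()
```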
## **Inference**
The library makes it easy to run inference and produce saliency maps for a folder of images.
```python
from saldet.experiment import inference
inference(
    images_dir=...,
    ckpt=...,  # path to ckpt/pth model file
    config_path=...,  # path to configuration file from saldet train
    output_dir=...,  # where to save saliency maps
    sigmoid=...,  # whether to apply sigmoid to predicted masks
)
```
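Assuming the saliency maps are written out as grayscale images, a quick way to turn one into a binary mask with Pillow and NumPy (the filename and the 0.5 threshold are arbitrary examples):
```python
import numpy as np
from PIL import Image

saliency = np.asarray(Image.open("OUTPUT_DIR/img_1.png").convert("L")) / 255.0
mask = (saliency > 0.5).astype(np.uint8) * 255  # arbitrary 0.5 threshold
Image.fromarray(mask).save("img_1_mask.png")
```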
## **To-Dos**
- [ ] Improve code coverage
- [ ] ReadTheDocs documentation