![tests](https://github.com/pilot7747/sldl/actions/workflows/tests.yml/badge.svg)
[![Documentation Status](https://readthedocs.org/projects/sldl/badge/?version=latest)](https://sldl.readthedocs.io/en/latest/?badge=latest)
# Single-Line Deep Learning
Most practical tasks that involve deep learning models boil down to "just do the thing", e.g., "just upscale the image". Official repositories of state-of-the-art methods, on the other hand, are built to reproduce the experiments presented in the paper. These two goals call for different code structure, so I made this library, which provides a single-line interface to practical tasks solved by SOTA methods. For instance, to "just upscale the image" you can run the following code:
```python
from PIL import Image
from sldl.image import ImageSR
sr = ImageSR('BSRGAN')
img = Image.open('test.png')
upscaled = sr(img)
```
## Installation
The project is available on PyPI; just run
```bash
pip install sldl
```
## Overview
SLDL is written in PyTorch. It aims to keep the original authors' implementations unchanged while providing fast inference and a convenient interface. Note that SLDL doesn't provide any interface to train or fine-tune the models.
Each method is a `torch.nn.Module` whose `__call__` method solves your task. To keep the interface practical, the models operate directly on Pillow images and video files, so you can embed an upscaler in your own program or just upscale a video.
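Because each model behaves like an ordinary module, it drops into a plain Python script. Below is a minimal sketch of batch-upscaling a folder of PNGs; the `inputs`/`outputs` directories are illustrative, and the `.cuda()` call simply mirrors the hint from the usage example below.

```python
from pathlib import Path

import torch
from PIL import Image
from sldl.image import ImageSR

sr = ImageSR('BSRGAN')
if torch.cuda.is_available():
    sr = sr.cuda()  # same device hint as in the usage example

out_dir = Path('outputs')  # illustrative paths
out_dir.mkdir(exist_ok=True)

for path in Path('inputs').glob('*.png'):
    upscaled = sr(Image.open(path))  # assumed to return a Pillow image, per the overview
    upscaled.save(out_dir / path.name)
```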
Currently two types of tasks are supported.
### Images
* Denoising: SwinIR
* Super-resolution: BSRGAN, RealESRGAN, SwinIR
### Videos
* Denoising: SwinIR
* Super-resolution: BSRGAN, RealESRGAN, SwinIR, VRT
* Interpolation: IFRNet
## Usage
For images, run this:
```python
from PIL import Image
from sldl.image import ImageSR
img = Image.open('test.png')
sr = ImageSR('BSRGAN') # or 'SwinIR-M', 'SwinIR-L', 'BSRGANx2'
# sr = sr.cuda() if you have a GPU
upscaled = sr(img)
```
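The call returns the upscaled image rather than writing anything to disk, so save it yourself if needed; assuming the result is a Pillow image, as the overview suggests:

```python
upscaled.save('test_upscaled.png')  # plain Pillow save; the filename is illustrative
```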
For videos, run this:
```python
from sldl.video import VideoSR
sr = VideoSR('BSRGAN')
sr('your_video.mp4', 'upscaled_video.mp4')
```
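If you have a GPU, the video models can presumably be moved to it the same way as the image models; a hedged sketch, assuming `VideoSR` exposes the usual `torch.nn.Module` device methods:

```python
from sldl.video import VideoSR

sr = VideoSR('BSRGAN')
# Assumption: since every method is a torch.nn.Module, .cuda() should move
# the weights to the GPU, just like in the ImageSR example above.
sr = sr.cuda()
sr('your_video.mp4', 'upscaled_video.mp4')
```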
## Plans
* Add image deblurring, face generation, machine translation, etc.
* Add inference optimizations such as `torch.compile` and TensorRT
* CLI tool and Docker image
* Ready-to-go REST API deployment