noisebase

Name: noisebase
Version: 1.1.1
Summary: Datasets and benchmarks for neural Monte Carlo denoising
Author email: Martin Bálint <martin@balint.io>
Upload time: 2024-04-18 10:48:31
Keywords: denoising, Monte Carlo, MC, neural
Requirements: No requirements were recorded.
<p align="center">
  <img src="https://github.com/balintio/noisebase/raw/main/docs/_static/logo-01.png" width="100%">
</p>

<div align="center">

[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![PyPI - Version](https://img.shields.io/pypi/v/noisebase)](https://pypi.org/project/noisebase/)
[![GitHub Repo stars](https://img.shields.io/github/stars/balintio/noisebase)](https://github.com/balintio/noisebase)

</div>

<p align="center">
<a href="https://balint.io/noisebase/news.html">News</a> &emsp;|&emsp; <a href="https://balint.io/noisebase/benchmarks/index.html">Benchmarks</a> &emsp;|&emsp; <a href="https://balint.io/noisebase/datasets/index.html">Datasets</a>
</p>

**Datasets and benchmarks for neural Monte Carlo denoising.**

<p align="center">
  <img src="https://github.com/balintio/noisebase/raw/main/docs/_static/teaser.png" width="70%">
</p>

What is Monte Carlo denoising?
------------------------------
<details>
<summary>Read More</summary>

<div align="center">
  <img src="https://github.com/balintio/noisebase/raw/main/docs/_static/Pi_monte_carlo_all.gif" width="30%"> <br>
  <p>Monte Carlo integration, animation by <a href="https://commons.wikimedia.org/w/index.php?curid=140013480"><b>Kmhkmh</b></a></p>
</div>

Monte Carlo methods approximate integrals by sampling random points from the function's domain, evaluating the function, and averaging the resulting samples. We mainly focus on *light transport simulation* as it's a complex and mature application, usually producing visual and intuitive results. In this case, our samples are light paths that a "photon" might take. Above on the right, you see an image rendered with 4 samples per pixel. It's quite noisy.
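
To make this concrete, here is a toy Monte Carlo estimator of π in plain Python, mirroring the animation above (purely illustrative, not part of Noisebase):

```python
import random

def estimate_pi(num_samples: int) -> float:
    # Sample uniform points in the unit square; the fraction landing inside
    # the quarter circle of radius 1 approximates pi / 4.
    hits = sum(
        1 for _ in range(num_samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * hits / num_samples

print(estimate_pi(4))        # very noisy, like a 4-sample-per-pixel render
print(estimate_pi(100_000))  # much closer to 3.14159...
```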

With a bit of napkin maths, we can estimate that rendering a relatively noise-free 4K image requires tens of billions of samples, while rendering a two-hour movie requires quadrillions. Astonishingly, we have data centres fit for this task. Not only do they consume electricity on par with a small town, but such computational requirements also put creating 3D art out of reach for many.
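
Spelled out, the napkin maths looks roughly like this; the samples-per-pixel and frame-rate figures are illustrative assumptions, not measurements:

```python
# Rough back-of-the-envelope numbers behind the claim above.
pixels_4k = 3840 * 2160              # ~8.3 million pixels per 4K frame
spp_clean = 4000                     # assumed samples per pixel for a fairly clean frame
samples_per_frame = pixels_4k * spp_clean          # ~3e10: tens of billions

frames_per_movie = 2 * 60 * 60 * 24  # two hours at an assumed 24 fps = 172,800 frames
samples_per_movie = samples_per_frame * frames_per_movie  # ~6e15: quadrillions
print(f"{samples_per_frame:.1e} samples/frame, {samples_per_movie:.1e} samples/movie")
```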

Deep neural networks have an incredible ability to reconstruct noisy data. They learn to combine the sliver of useful information contained in samples from the same object, both spatially from nearby pixels and temporally from subsequent frames. The images denoised with such neural networks (like above on the left) look absurdly good in comparison.
</details>

Getting started
---------------
You can start prototyping your denoiser by calling a single function:

```python
from noisebase import Noisebase

data_loader = Noisebase(
   'sampleset_v1', # Our first per-sample dataset
   {
      'framework': 'torch',
      'train': True,
      'buffers': ['diffuse', 'color', 'reference'],
      'samples': 8,
      'batch_size': 16
   }
)

# Start training, standard PyTorch from here on...
for epoch in range(25):
   for data in data_loader:
      ...
```

And here's the kicker: with just that, our data loaders seamlessly support asynchronous and distributed loading, decompression, and augmentation of large video datasets containing anything from normal maps, diffuse maps, and motion vectors to temporally changing camera intrinsics and noisy HDR samples.
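
For instance, using the `data_loader` from the snippet above, you can peek at one batch like this; the dictionary-style access by buffer name is an assumption for illustration, so check the dataset docs for the exact batch layout:

```python
# Inspect the buffers requested above ('diffuse', 'color', 'reference').
# Dict-style access and tensor layout are assumptions, not a documented contract.
batch = next(iter(data_loader))
for name in ('diffuse', 'color', 'reference'):
    print(name, batch[name].shape, batch[name].dtype)
```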

As you scale up, you'll want a little more control. Thankfully, Noisebase is fully integrated with [Hydra](https://hydra.cc/) and [PyTorch Lightning](https://lightning.ai/docs/pytorch/stable/).
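
As a rough sketch of what that can look like with Lightning, the hypothetical `ToyDenoiser` below plugs the loader from the snippet above into a standard `Trainer`. The placeholder network and the assumed batch layout are illustrative only; Noisebase's own Hydra and Lightning configs are described in the manual.

```python
import torch
import pytorch_lightning as pl

class ToyDenoiser(pl.LightningModule):
    """Hypothetical placeholder module; swap in your actual denoiser."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Conv2d(3, 3, kernel_size=1)  # placeholder network

    def training_step(self, batch, batch_idx):
        # Assumption: 'color' holds noisy per-sample radiance with the sample
        # dimension at index 1, and 'reference' is the converged target frame.
        noisy = batch['color'].mean(dim=1)
        target = batch['reference']
        return torch.nn.functional.l1_loss(self.net(noisy), target)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-4)

trainer = pl.Trainer(max_epochs=25)
trainer.fit(ToyDenoiser(), train_dataloaders=data_loader)
```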

Noisebase can also:
* Download training and testing data
* Run benchmarks with many metrics
* Neatly summarize everything into tables
* Help you keep track of denoising performance while keeping your implementation simple

Installation
------------
You can quickly install Noisebase from PyPI:
```bash
pip install noisebase
```
For more complicated workflows, we recommend cloning the repo instead:
```bash
git clone https://github.com/balintio/noisebase
cd noisebase
pip install -e . # Editable install
```

Check our [manual](https://balint.io/noisebase/manual.html) for more details.

Citation
--------

Please cite our paper introducing Noisebase when you use it in academic projects:

```bibtex
@inproceedings{balint2023nppd,
    author = {Balint, Martin and Wolski, Krzysztof and Myszkowski, Karol and Seidel, Hans-Peter and Mantiuk, Rafa\l{}},
    title = {Neural Partitioning Pyramids for Denoising Monte Carlo Renderings},
    year = {2023},
    isbn = {9798400701597},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3588432.3591562},
    doi = {10.1145/3588432.3591562},
    booktitle = {ACM SIGGRAPH 2023 Conference Proceedings},
    articleno = {60},
    numpages = {11},
    keywords = {upsampling, radiance decomposition, pyramidal filtering, kernel prediction, denoising, Monte Carlo},
    location = {Los Angeles, CA, USA},
    series = {SIGGRAPH '23}
}
```

            
