analogvnn

Name: analogvnn
Version: 1.0.8
Home page: None
Summary: A fully modular framework for modeling and optimizing analog/photonic neural networks
Upload time: 2024-05-02 23:30:10
Maintainer: None
Docs URL: None
Author: None
Requires Python: >=3.7
License: None
Keywords: deep-learning, analog, photonics, neural-network, framework, pytorch

# AnalogVNN

[![arXiv](https://img.shields.io/badge/arXiv-2210.10048-orange.svg)](https://arxiv.org/abs/2210.10048)
[![AML](https://img.shields.io/badge/AML-10.1063/5.0134156-orange.svg)](https://doi.org/10.1063/5.0134156)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Vivswan/AnalogVNN/blob/release/docs/_static/AnalogVNN_Demo.ipynb)

[![PyPI version](https://badge.fury.io/py/analogvnn.svg)](https://badge.fury.io/py/analogvnn)
[![Documentation Status](https://readthedocs.org/projects/analogvnn/badge/?version=stable)](https://analogvnn.readthedocs.io/en/stable/?badge=stable)
[![Python](https://img.shields.io/badge/python-3.7--3.11-blue)](https://badge.fury.io/py/analogvnn)
[![License: MPL 2.0](https://img.shields.io/badge/License-MPL_2.0-blue.svg)](https://opensource.org/licenses/MPL-2.0)

Documentation: [https://analogvnn.readthedocs.io/](https://analogvnn.readthedocs.io/)

## Installation:

- Install [PyTorch](https://pytorch.org/)
- Install AnalogVNN using [pip](https://pypi.org/project/analogvnn/)

```bash
# Current stable release for CPU and GPU
pip install analogvnn

# For additional optional features (quoted so shells such as zsh do not expand the brackets)
pip install "analogvnn[full]"
```
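
To confirm the install, the following minimal check (on Python 3.8+) imports the package and reports the installed version using only the standard library; it assumes nothing about AnalogVNN's own API:

```python
# Sanity check: import the package and print the installed version.
from importlib.metadata import version

import analogvnn  # noqa: F401  # raises ImportError if the install failed

print("analogvnn", version("analogvnn"))  # should print 1.0.8 for this release
```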

## Usage:

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Vivswan/AnalogVNN/blob/release/docs/_static/AnalogVNN_Demo.ipynb)

- Sample code with AnalogVNN: [sample_code.py](https://github.com/Vivswan/AnalogVNN/blob/release/sample_code.py)
- Sample code without AnalogVNN: [sample_code_non_analog.py](https://github.com/Vivswan/AnalogVNN/blob/release/sample_code_non_analog.py)
- Sample code with AnalogVNN and logs: [sample_code_with_logs.py](https://github.com/Vivswan/AnalogVNN/blob/release/sample_code_with_logs.py)
- Jupyter notebook: [AnalogVNN_Demo.ipynb](https://github.com/Vivswan/AnalogVNN/blob/release/docs/_static/AnalogVNN_Demo.ipynb)
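
For orientation, the non-analog sample above trains an ordinary digital PyTorch classifier; a minimal sketch of that kind of baseline (hypothetical layer sizes, not the repository's exact code) looks like this:

```python
import torch
from torch import nn

# A small fully connected classifier of the sort the digital sample code trains;
# the layer sizes are illustrative, not taken from sample_code_non_analog.py.
digital_model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

x = torch.randn(8, 1, 28, 28)   # a fake batch of MNIST-sized images
logits = digital_model(x)
print(logits.shape)             # torch.Size([8, 10])
```

The AnalogVNN samples take a model like this and wrap its layers with the analog effects described in the abstract below.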

## Abstract

[//]: # (![3 Layered Linear Photonic Analog Neural Network](docs/_static/analogvnn_model.png))

![3 Layered Linear Photonic Analog Neural Network](https://github.com/Vivswan/AnalogVNN/raw/release/docs/_static/analogvnn_model.png)

**AnalogVNN** is a simulation framework built on PyTorch that models the effects of
optoelectronic noise, limited precision, and signal normalization present in photonic
neural network accelerators. We use this framework to train and optimize linear and
convolutional neural networks with up to 9 layers and ~1.7 million parameters, while
gaining insights into how normalization, activation function, reduced precision, and
noise influence accuracy in analog photonic neural networks. By following the same layer
structure design present in PyTorch, the AnalogVNN framework allows users to convert most
digital neural network models to their analog counterparts with just a few lines of code,
taking full advantage of the open-source optimization, deep learning, and GPU acceleration
libraries available through PyTorch.
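
As a rough illustration of the effects described above, the sketch below applies clamping, precision reduction, and additive Gaussian noise around a linear layer using only standard `torch` operations. It is not AnalogVNN's own layer API (see the sample code linked above for that), and the bit width and noise level are made-up values:

```python
import torch
from torch import nn


class AnalogEffects(nn.Module):
    """Illustrative stand-in for the normalization / precision / noise stages
    that an analog layer experiences. Not AnalogVNN's API."""

    def __init__(self, precision_bits: int = 4, noise_std: float = 0.02):
        super().__init__()
        self.levels = 2 ** precision_bits - 1
        self.noise_std = noise_std

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.clamp(x, -1.0, 1.0)                    # signal normalization
        x = torch.round(x * self.levels) / self.levels   # limited precision
        return x + self.noise_std * torch.randn_like(x)  # optoelectronic noise


analog_linear = nn.Sequential(
    AnalogEffects(),   # applied to the layer input
    nn.Linear(256, 10),
    AnalogEffects(),   # applied to the layer output
)
out = analog_linear(torch.randn(8, 256))
```

Note that `torch.round` has zero gradient almost everywhere, so this naive version is for illustration only; AnalogVNN's own layers are designed so that such models remain trainable.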

AnalogVNN Paper: [https://doi.org/10.1063/5.0134156](https://doi.org/10.1063/5.0134156)

## Citing AnalogVNN

We would appreciate it if you cited the following paper in any publication for which you used AnalogVNN:

```bibtex
@article{shah2023analogvnn,
  title={AnalogVNN: A fully modular framework for modeling and optimizing photonic neural networks},
  author={Shah, Vivswan and Youngblood, Nathan},
  journal={APL Machine Learning},
  volume={1},
  number={2},
  year={2023},
  publisher={AIP Publishing}
}
```

Or in textual form:

```text
Vivswan Shah, and Nathan Youngblood. "AnalogVNN: A fully modular framework for modeling 
and optimizing photonic neural networks." APL Machine Learning 1.2 (2023).
DOI: 10.1063/5.0134156
```

            
