.. Author: Akshay Mestry <xa@mes3.dev>
.. Created on: Saturday, December 02 2023
.. Last updated on: Tuesday, December 12 2023

NanoTorch
=========

Etymology: *nano* (small) and *torch* (PyTorch)

Small-scale implementation of `PyTorch`_ from the ground up.

This project, a miniature implementation of the PyTorch library, is crafted
with the primary goal of elucidating the intricate workings of neural network
libraries. It serves as a pedagogical tool for those seeking to unravel the
mathematical complexities and the underlying architecture that power such
sophisticated libraries.

**NOTE:** This project is based on the excellent work done by
`Andrej Karpathy`_ in his `micrograd`_ project.

Installation
------------

.. See more at: https://stackoverflow.com/a/15268990

Install the latest version of NanoTorch using `pip`_:

.. code-block:: bash

    pip install -U git+https://github.com/xames3/nanotorch.git#egg=nanotorch

Objective
---------

The cornerstone of this endeavor is to provide a hands-on learning experience
by replicating key components of PyTorch, thereby granting insights into its
functional mechanisms. This bespoke implementation focuses on the core aspects
of neural network computation, including tensor operations, automatic
differentiation, and basic neural network modules.

Features
--------

1. **Tensor Operations:** At the heart of this implementation lie tensor
operations, the building blocks of any neural network library. At present,
the tensors support the basic arithmetic operations found in PyTorch.

.. code:: python

    >>> a = nanotorch.tensor(2.0)
    >>> b = nanotorch.tensor(3.0)
    >>> a + b
    tensor(5.0)
    >>> a - 6
    tensor(-4.0)
    >>> c = a + b
    >>> c += 2 * a / b
    >>> c = c ** 3
    >>> nanotorch.arange(5)
    [tensor(0), tensor(1), tensor(2), tensor(3), tensor(4)]
    >>> nanotorch.arange(1, 4)
    [tensor(1), tensor(2), tensor(3)]
    >>> nanotorch.arange(1, 2.5, 0.5)
    [tensor(1), tensor(1.5), tensor(2.)]

2. **Automatic Differentiation:** A pivotal feature of this project is a
simplified version of automatic differentiation, akin to PyTorch's
``autograd``. It allows gradients to be computed automatically, which is
essential for training neural networks.

.. code:: python

    >>> c.backward()
    >>> print(a.grad)  # prints 200.55, the gradient of c with respect to a, i.e. dc/da
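
Because NanoTorch follows micrograd's design, the essence of this
reverse-mode automatic differentiation can be sketched in plain Python. The
``Value`` class below is an illustrative stand-in, not NanoTorch's actual
implementation: each operation records its inputs and a small closure that
propagates gradients backwards via the chain rule.

.. code:: python

    class Value:
        """A scalar that remembers how it was computed, for backprop."""

        def __init__(self, data, _parents=()):
            self.data = data
            self.grad = 0.0
            self._parents = _parents
            self._backward = lambda: None

        def __add__(self, other):
            other = other if isinstance(other, Value) else Value(other)
            out = Value(self.data + other.data, (self, other))

            def _backward():
                self.grad += out.grad   # d(a + b)/da = 1
                other.grad += out.grad  # d(a + b)/db = 1

            out._backward = _backward
            return out

        def __mul__(self, other):
            other = other if isinstance(other, Value) else Value(other)
            out = Value(self.data * other.data, (self, other))

            def _backward():
                self.grad += other.data * out.grad  # d(a * b)/da = b
                other.grad += self.data * out.grad  # d(a * b)/db = a

            out._backward = _backward
            return out

        __rmul__ = __mul__

        def __pow__(self, n):
            out = Value(self.data ** n, (self,))

            def _backward():
                self.grad += n * self.data ** (n - 1) * out.grad

            out._backward = _backward
            return out

        def __truediv__(self, other):
            return self * other ** -1

        def backward(self):
            # Topologically sort the graph, then apply the chain rule
            # from the output back to the leaves.
            topo, seen = [], set()

            def build(v):
                if v not in seen:
                    seen.add(v)
                    for parent in v._parents:
                        build(parent)
                    topo.append(v)

            build(self)
            self.grad = 1.0
            for v in reversed(topo):
                v._backward()

    # Reproduce the example: c = (a + b + 2a/b) ** 3.
    a, b = Value(2.0), Value(3.0)
    c = ((a + b) + 2 * a / b) ** 3
    c.backward()
    print(a.grad)  # ~200.556, the dc/da from the example above
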

3. **Neural Network Modules:** The implementation includes rudimentary neural
network modules such as linear layers and activation functions. These modules
can be composed to construct simple neural network architectures.
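
As in micrograd, such modules can be thought of as thin containers over the
scalar machinery. The classes below are a forward-pass-only sketch of that
structure; ``Neuron``, ``Layer``, and ``MLP`` are illustrative names, not
NanoTorch's documented API.

.. code:: python

    import math
    import random

    class Neuron:
        """One unit: tanh(w . x + b)."""

        def __init__(self, n_inputs):
            self.w = [random.uniform(-1, 1) for _ in range(n_inputs)]
            self.b = 0.0

        def __call__(self, x):
            activation = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
            return math.tanh(activation)

    class Layer:
        """A group of neurons applied to the same input vector."""

        def __init__(self, n_inputs, n_outputs):
            self.neurons = [Neuron(n_inputs) for _ in range(n_outputs)]

        def __call__(self, x):
            return [neuron(x) for neuron in self.neurons]

    class MLP:
        """Layers composed in sequence: the output of one feeds the next."""

        def __init__(self, n_inputs, layer_sizes):
            dims = [n_inputs] + layer_sizes
            self.layers = [
                Layer(dims[i], dims[i + 1]) for i in range(len(dims) - 1)
            ]

        def __call__(self, x):
            for layer in self.layers:
                x = layer(x)
            return x

    net = MLP(3, [4, 4, 1])      # 3 inputs -> two hidden layers -> 1 output
    out = net([1.0, -2.0, 0.5])  # a list with one value in (-1, 1)
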

4. **Optimizers and Loss Functions:** Basic optimizers such as SGD and common
loss functions are included to facilitate the training of neural networks.
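
The SGD update itself is a one-liner: step each parameter against its
gradient. The snippet below is a self-contained illustration with a
hand-computed gradient (not NanoTorch's optimizer API), fitting ``y = w * x``
to a single data point under squared-error loss.

.. code:: python

    def sgd_step(params, grads, lr=0.05):
        """Vanilla stochastic gradient descent: p <- p - lr * grad."""
        return [p - lr * g for p, g in zip(params, grads)]

    # Fit y = w * x to the point (x, y) = (2, 6).
    w = 0.0
    x, y = 2.0, 6.0
    for _ in range(200):
        pred = w * x
        grad_w = 2 * (pred - y) * x  # d/dw of (w*x - y)^2
        (w,) = sgd_step([w], [grad_w])
    print(round(w, 3))  # converges to 3.0
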

Educational Value
-----------------

This project stands as a testament to the educational philosophy of learning
by doing. It is particularly beneficial for:

- Students and enthusiasts who aspire to gain a profound understanding of the
  inner workings of neural network libraries.

- Developers and researchers seeking to customize or extend the functionalities
  of existing deep learning libraries for their specific requirements.

Usage and Documentation
-----------------------

The codebase is structured to be intuitive and mirrors the design principles
of PyTorch to a significant extent. Comprehensive docstrings are provided for
each module and function, ensuring clarity and ease of understanding. Users
are encouraged to delve into the code, experiment with it, and modify it to
suit their learning needs.

Contributions and Feedback
--------------------------

Contributions to this project are warmly welcomed. Whether it's refining the
code, enhancing the documentation, or extending the current feature set, your
input is highly valued. Feedback, whether constructive criticism or
commendation, is equally appreciated and will be instrumental in the evolution
of this educational tool.

Acknowledgments
---------------

This project is inspired by the remarkable work done by the `PyTorch
development team`_. It is a tribute to their contributions to the field of
machine learning and the open-source community at large.

Project Links
-------------

- Source Code: https://github.com/xames3/nanotorch
- Issue Tracker: https://github.com/xames3/nanotorch/issues

.. _Andrej Karpathy: https://github.com/karpathy
.. _PyTorch development team: https://github.com/pytorch/pytorch
.. _PyTorch: https://pytorch.org
.. _micrograd: https://github.com/karpathy/micrograd
.. _pip: https://pip.pypa.io/en/stable/getting-started/