neuralforge

- **Name**: neuralforge
- **Version**: 0.0.15
- **Home page**: https://github.com/eduardoleao052/Autograd-from-scratch
- **Summary**: An educational framework similar to PyTorch, built to be interpretable and easy to implement.
- **Upload time**: 2024-03-27 21:17:19
- **Author / Maintainer**: Eduardo Leitao da Cunha Opice Leao
- **Requires Python**: >=3.0
- **License**: MIT
- **Keywords**: autograd, deep-learning, machine-learning, ai, numpy, python
            <p align="left">
    <a href="https://github.com/eduardoleao052/autograd-from-scratch/actions/workflows/test.yml/badge.svg" alt="Unit Tests">
        <img src="https://github.com/eduardoleao052/autograd-from-scratch/actions/workflows/test.yml/badge.svg" /></a>
    <a href="https://github.com/eduardoleao052/autograd-from-scratch/pulse" alt="Activity">
        <img src="https://img.shields.io/github/commit-activity/m/eduardoleao052/autograd-from-scratch" /></a>
    <a href="https://github.com/eduardoleao052/autograd-from-scratch/graphs/contributors" alt="Contributors">
        <img src="https://img.shields.io/github/contributors/eduardoleao052/autograd-from-scratch" /></a>
    <a href="https://www.python.org/">
        <img src="https://img.shields.io/badge/language-Python-blue">
    </a>
    <a href="mailto:eduardoleao052@usp.br">
        <img src="https://img.shields.io/badge/-Email-red?style=flat-square&logo=gmail&logoColor=white">
    </a>
    <a href="https://www.linkedin.com/in/eduardoleao052/">
        <img src="https://img.shields.io/badge/-Linkedin-blue?style=flat-square&logo=linkedin">
    </a>
</p>


# Autograd Framework From Scratch
- NeuralForge is a unit-tested and documented educational framework. It is similar to PyTorch, but with __clearer code__.
- The from-scratch autograd engine lives in [tensor_operations.py](neuralforge/tensor_operations.py); it draws heavily on Andrej Karpathy's micrograd videos (see the sketch below).
- The deep learning model layers are in [nn/layers.py](neuralforge/nn/layers.py).
<br/>
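The core mechanism is the one micrograd teaches: every operation records its inputs and a closure that propagates gradients backwards through the graph. Below is a minimal scalar sketch of that pattern (illustrative only; NeuralForge's actual `Tensor` in `tensor_operations.py` applies the same idea to NumPy arrays):

```python
class Value:
    """Scalar autograd node: stores a value, its gradient, and a backward closure."""
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # d(out)/d(self) = d(out)/d(other) = 1: pass the gradient through.
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # Product rule: each input's gradient is scaled by the other input.
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse order.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# d(x*y + y)/dx = y = 3 ; d(x*y + y)/dy = x + 1 = 3
x, y = Value(2.0), Value(3.0)
z = x * y + y
z.backward()
print(x.grad, y.grad)  # 3.0 3.0
```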
<details>
<summary> Check out the <b>implemented basic operations</b>: </summary>


<br/>


- [Addition](https://github.com/eduardoleao052/Autograd-from-scratch/blob/97b5d4e9d9c118375e53699043556e4d68d7fce7/neuralforge/tensor_operations.py#L205-L257)
- [Subtraction](https://github.com/eduardoleao052/Autograd-from-scratch/blob/97b5d4e9d9c118375e53699043556e4d68d7fce7/neuralforge/tensor_operations.py#L259-L286)
- [Multiplication](https://github.com/eduardoleao052/Autograd-from-scratch/blob/97b5d4e9d9c118375e53699043556e4d68d7fce7/neuralforge/tensor_operations.py#L288-L342)
- [Division](https://github.com/eduardoleao052/Autograd-from-scratch/blob/c8c9b697815bc2c9efb1e9ce4d9ee490b43f19a2/neuralforge/tensor_operations.py#L344-L398)
- [Matrix multiplication](https://github.com/eduardoleao052/Autograd-from-scratch/blob/c8c9b697815bc2c9efb1e9ce4d9ee490b43f19a2/neuralforge/tensor_operations.py#L400-L451)
- [Exponentiation](https://github.com/eduardoleao052/Autograd-from-scratch/blob/c8c9b697815bc2c9efb1e9ce4d9ee490b43f19a2/neuralforge/tensor_operations.py#L582-L609)
- [Log](https://github.com/eduardoleao052/Autograd-from-scratch/blob/c8c9b697815bc2c9efb1e9ce4d9ee490b43f19a2/neuralforge/tensor_operations.py#L611-L638)
- [Square Root](https://github.com/eduardoleao052/Autograd-from-scratch/blob/c8c9b697815bc2c9efb1e9ce4d9ee490b43f19a2/neuralforge/tensor_operations.py#L640-L667)

<br/>
  
</details>


<details>
<summary> The <b>implemented statistics</b>: </summary>


<br/>


- [Sum](https://github.com/eduardoleao052/Autograd-from-scratch/blob/c8c9b697815bc2c9efb1e9ce4d9ee490b43f19a2/neuralforge/tensor_operations.py#L492-L519)
- [Mean](https://github.com/eduardoleao052/Autograd-from-scratch/blob/c8c9b697815bc2c9efb1e9ce4d9ee490b43f19a2/neuralforge/tensor_operations.py#L521-L549)
- [Max](https://github.com/eduardoleao052/Autograd-from-scratch/blob/c8c9b697815bc2c9efb1e9ce4d9ee490b43f19a2/neuralforge/tensor_operations.py#L454-L490)
- [Variance](https://github.com/eduardoleao052/Autograd-from-scratch/blob/c8c9b697815bc2c9efb1e9ce4d9ee490b43f19a2/neuralforge/tensor_operations.py#L551-L579)

<br/>

</details>


<details>
<summary> And the <b>implemented tensor operations</b>: </summary>


<br/>


- [Reshape](https://github.com/eduardoleao052/Autograd-from-scratch/blob/4b7083149a8dd8e9bdb2b0c93fe130d9be516bf0/neuralforge/tensor_operations.py#L682-L710)
- [Transpose](https://github.com/eduardoleao052/Autograd-from-scratch/blob/4b7083149a8dd8e9bdb2b0c93fe130d9be516bf0/neuralforge/tensor_operations.py#L713-L741)
- [Concatenate](https://github.com/eduardoleao052/Autograd-from-scratch/blob/4b7083149a8dd8e9bdb2b0c93fe130d9be516bf0/neuralforge/tensor_operations.py#L744-L780)
- [Stack](https://github.com/eduardoleao052/Autograd-from-scratch/blob/4b7083149a8dd8e9bdb2b0c93fe130d9be516bf0/neuralforge/tensor_operations.py#L783-L820)
- [MaskedFill](https://github.com/eduardoleao052/Autograd-from-scratch/blob/4b7083149a8dd8e9bdb2b0c93fe130d9be516bf0/neuralforge/tensor_operations.py#L823-L851)
- [Slice](https://github.com/eduardoleao052/Autograd-from-scratch/blob/4b7083149a8dd8e9bdb2b0c93fe130d9be516bf0/neuralforge/tensor_operations.py#L854-L882)

<br/>


</details>
<br/>


## 1. Project Structure
- `neuralforge/` : The framework's Python source.
  - `neuralforge/tensor_operations.py`: The `Tensor` class and all of the tensor `Operations`.
  - `neuralforge/utils.py`: Helper functions and assorted operations.
  - `neuralforge/nn/`: Submodule of the framework. Contains full layers and optimizers.
      - `neuralforge/nn/layers.py`: Most deep learning layers, and the `nn.Module` class.
      - `neuralforge/nn/optim.py` : Optimizers.
- `data/` : Folder for training data. Currently holds `shakespeare.txt`.
- `tests/`: Unit tests. Contains `test_framework.py`.
- `setup.py` : Setup file for the framework.
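Given this layout, the import surface used throughout the examples below looks like this (a sketch inferred from the structure above):

```python
import neuralforge as forge        # Tensor class and tensor operations
import neuralforge.nn as nn        # layers and the nn.Module base class
from neuralforge.nn import optim   # optimizers, e.g. optim.Adam

x = forge.randn((2, 3), requires_grad=True)  # a random Tensor, as in the examples below
```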
    
## 2. Running it Yourself
### Simple Autograd Example: 
```python
import neuralforge as forge

# Instantiate Tensors:
x = forge.randn((8, 4, 5))
w = forge.randn((8, 5, 4), requires_grad=True)
b = forge.randint((4,), requires_grad=True)

# Make calculations:
out = x @ w
out += b

# Compute gradients on whole graph:
out.backward()

# Get gradients from specific Tensors
# (only Tensors created with requires_grad=True track gradients,
# and each gradient has the same shape as its Tensor):
print(w.grad)
print(b.grad)

```

### Complex Autograd Example (Transformer): 
```python
import neuralforge as forge
import neuralforge.nn as nn
from neuralforge.nn import optim  # optimizers live in neuralforge/nn/optim.py

# Implement Transformer class inheriting from forge.nn.Module:
class Transformer(nn.Module):
    def __init__(self, vocab_size: int, hidden_size: int, n_timesteps: int, n_heads: int, p: float):
        super().__init__()
        # Instantiate Transformer's Layers:
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.pos_embed = nn.PositionalEmbedding(n_timesteps, hidden_size)
        self.b1 = nn.Block(hidden_size, hidden_size, n_heads, n_timesteps, dropout_prob=p) 
        self.b2 = nn.Block(hidden_size, hidden_size, n_heads, n_timesteps, dropout_prob=p)
        self.ln = nn.LayerNorm(hidden_size)
        self.linear = nn.Linear(hidden_size, vocab_size)

    def forward(self, x):
        z = self.embed(x) + self.pos_embed(x)
        z = self.b1(z)
        z = self.b2(z)
        z = self.ln(z)
        z = self.linear(z)

        return z

# Get tiny Shakespeare test data (load_text_data, get_batch and the
# hyperparameters used below are assumed to be defined; see the sketch after this block):
text = load_text_data(f'{PATH}/data/shakespeare.txt')

# Create Transformer instance:
model = Transformer(vocab_size, hidden_size, n_timesteps, n_heads, dropout_p)

# Define loss function and optimizer:
loss_func = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.01, reg=0)
        
# Training Loop:
for _ in range(n_iters):
    x, y = get_batch(test_data, n_timesteps, batch_size)

    z = model.forward(x)

    # Get loss:
    loss = loss_func(z, y)

    # Backpropagate the loss using forge.tensor's backward() method:
    loss.backward()

    # Update the weights:
    optimizer.step()

    # Reset the gradients to zero after each training step:
    optimizer.zero_grad()
```
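The training script above relies on names that are not defined in the snippet: `load_text_data`, `get_batch`, `PATH`, and the hyperparameters (`vocab_size`, `hidden_size`, `n_timesteps`, `n_heads`, `dropout_p`, `n_iters`, `batch_size`). A hypothetical character-level sketch of the two helpers, not the repository's actual utilities:

```python
import numpy as np

def load_text_data(path: str) -> str:
    # Read the raw training corpus as a single string.
    with open(path, 'r', encoding='utf-8') as f:
        return f.read()

def get_batch(data: np.ndarray, n_timesteps: int, batch_size: int):
    # Sample batch_size random windows of n_timesteps tokens each;
    # the targets are the same windows shifted one step to the right.
    starts = np.random.randint(0, len(data) - n_timesteps - 1, size=batch_size)
    x = np.stack([data[s : s + n_timesteps] for s in starts])
    y = np.stack([data[s + 1 : s + n_timesteps + 1] for s in starts])
    return x, y

# Character-level encoding: map each character to an integer index.
text = load_text_data('data/shakespeare.txt')
chars = sorted(set(text))
vocab_size = len(chars)
stoi = {c: i for i, c in enumerate(chars)}
test_data = np.array([stoi[c] for c in text], dtype=np.int64)
```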
> **Note:** You can install the framework from PyPI with `pip install neuralforge`.
<details>
<summary> <b> Requirements </b> </summary>

<br/>
  
- The required packages are listed in `requirements.txt`.
- They can be installed in a virtual environment with:
```
pip install -r requirements.txt
```
> **Note:** The framework is built around NumPy, so there is no GPU/CUDA support.

<br/>

</details>
<details>
<summary> <b> Build a Custom Model </b> </summary>

<br/>

- To create a custom model class, use the exact same syntax as you would in PyTorch, inheriting from `nn.Module` (see the sketch after the lists below).
<details>
<summary> You may choose among <b>the following layers</b>: </summary>

<br/>

- [nn.Embedding](https://github.com/eduardoleao052/Autograd-from-scratch/blob/e7569075cb3342300274839bcf4edd8ba19a1c08/neuralforge/nn/layers.py#L129-L146) (first layer, turns input indexes into vectors)
- [nn.PositionalEmbedding](https://github.com/eduardoleao052/Autograd-from-scratch/blob/e7569075cb3342300274839bcf4edd8ba19a1c08/neuralforge/nn/layers.py#L149-L164) (second layer, adds position information to every timestep of the input)
- [nn.Linear](https://github.com/eduardoleao052/Autograd-from-scratch/blob/e7569075cb3342300274839bcf4edd8ba19a1c08/neuralforge/nn/layers.py#L47-L64) (simple fully-connected layer)
- [nn.MultiHeadSelfAttention](https://github.com/eduardoleao052/Autograd-from-scratch/blob/e7569075cb3342300274839bcf4edd8ba19a1c08/neuralforge/nn/layers.py#L67-L126) (core of the transformer, calculates weighted sum of inputs)
- [nn.Block](https://github.com/eduardoleao052/Autograd-from-scratch/blob/e7569075cb3342300274839bcf4edd8ba19a1c08/neuralforge/nn/layers.py#L268-L287) (full transformer block - Contains MHSA, Linear and LayerNorm layers)
- [nn.CrossEntropyLoss](https://github.com/eduardoleao052/Autograd-from-scratch/blob/e7569075cb3342300274839bcf4edd8ba19a1c08/neuralforge/nn/layers.py#L290-L320) (final loss layer; computes the cross-entropy between the next-character logits and the targets)

</details>
<details>
<summary> And <b>the following functions</b>: </summary>

<br/>

- [nn.Dropout](https://github.com/eduardoleao052/Autograd-from-scratch/blob/e7569075cb3342300274839bcf4edd8ba19a1c08/neuralforge/nn/layers.py#L167-L183) (can be added to apply dropout)
- [nn.LayerNorm](https://github.com/eduardoleao052/Autograd-from-scratch/blob/e7569075cb3342300274839bcf4edd8ba19a1c08/neuralforge/nn/layers.py#L186-L201) (normalizes the tensors)
- [nn.Softmax](https://github.com/eduardoleao052/Autograd-from-scratch/blob/e7569075cb3342300274839bcf4edd8ba19a1c08/neuralforge/nn/layers.py#L215-L229) (normalizes the values into probabilities that sum to 1)
- [nn.Tanh](https://github.com/eduardoleao052/Autograd-from-scratch/blob/e7569075cb3342300274839bcf4edd8ba19a1c08/neuralforge/nn/layers.py#L232-L241) (scales the values between -1 and 1)
- [nn.Relu](https://github.com/eduardoleao052/Autograd-from-scratch/blob/e7569075cb3342300274839bcf4edd8ba19a1c08/neuralforge/nn/layers.py#L204-L212) (zeroes all negative values)

</details>
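
Putting a few of these together, a custom model might look like the following (hypothetical sizes; `nn.Dropout`'s argument is assumed to be the drop probability):

```python
import neuralforge as forge
import neuralforge.nn as nn

# A small MLP built only from the layers listed above:
class MLP(nn.Module):
    def __init__(self, in_size: int, hidden_size: int, n_classes: int):
        super().__init__()
        self.fc1 = nn.Linear(in_size, hidden_size)
        self.relu = nn.Relu()
        self.dropout = nn.Dropout(0.2)
        self.fc2 = nn.Linear(hidden_size, n_classes)

    def forward(self, x):
        z = self.fc1(x)
        z = self.relu(z)
        z = self.dropout(z)
        return self.fc2(z)

model = MLP(in_size=64, hidden_size=128, n_classes=10)
x = forge.randn((32, 64))      # a batch of 32 random inputs
logits = model.forward(x)      # shape (32, 10)
```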

<br/>

</details>

## 3. Results
- The models implemented in [test_framework.py](tests/test_framework.py) all converged to __near-zero losses__ (you can re-run them yourself; see below).
- This framework is not as fast or as optimized as PyTorch, but I tried making it more interpretable.
- Hope you enjoy!
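
To re-run the test suite, something like the following should work (assuming `pytest` is installed; `pytest` also collects `unittest`-style test files):

```
python -m pytest tests/ -v
```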


            
