fastAIcourse

- Name: fastAIcourse
- Version: 0.0.60
- Home page: https://github.com/bthek1/fastAIcourse
- Summary: fastAIcourse
- Upload time: 2023-12-16 06:58:19
- Author: Benedict Thekkel
- Requires Python: >=3.7
- License: Apache Software License 2.0
- Keywords: nbdev, jupyter, notebook, python
            # [Fast AI Course](https://bthek1.github.io/fastAIcourse/)

<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->

## Steps for git push

- Run `nbdev_prepare` (it exports the notebooks, runs the tests, and rebuilds the README):

``` sh
nbdev_prepare
```

- Commit and push the changes:

``` sh
git add .
git commit -m "update"
git push
```

## After changing dependencies

Reinstall the released package from PyPI:

``` sh
pip install fastAIcourse
```

Or reinstall the local checkout in editable mode with the dev extras:

``` sh
pip install -e '.[dev]'
```

## Structure

1.  Data processing
    - Normalise
    - Remove NaNs
2.  Create model
    - Try a decision tree or random forest
3.  Run the model (see the sketch below)
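
As a rough illustration of these three steps, here is a minimal sketch using pandas and scikit-learn. The DataFrame `df`, its column names, and the choice of min-max normalisation and `RandomForestClassifier` are all hypothetical, not the course's prescribed pipeline.

``` python
# Minimal sketch of the Structure steps above (hypothetical data and names).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical dataset: two features and a binary target.
df = pd.DataFrame({
    "f1": [1.0, 2.0, 3.0, 4.0, None, 6.0, 7.0, 8.0],
    "f2": [10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0],
    "target": [0, 0, 0, 0, 1, 1, 1, 1],
})

# 1. Data processing: remove NaNs, then min-max normalise the features.
df = df.dropna()
features = df[["f1", "f2"]]
features = (features - features.min()) / (features.max() - features.min())

# 2. Create model: a random forest (an ensemble of decision trees).
X_train, X_test, y_train, y_test = train_test_split(
    features, df["target"], test_size=0.25, random_state=0
)
model = RandomForestClassifier(n_estimators=100, random_state=0)

# 3. Run model: fit on the training split, score on the held-out split.
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```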

## Model structure

1.  Train/test split
2.  Initialise random weights
3.  Calculate loss
4.  Gradient descent - recalculate weights (update rule below)
5.  Repeat steps 3 - 4
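
Step 4 is the standard gradient-descent update, which the cell below implements by hand. For each weight $w$ and learning rate $\eta$ (`learning_rate = 1e-6` in the code):

``` math
w \leftarrow w - \eta \, \frac{\partial L}{\partial w},
\qquad
L = \sum_{i} \bigl( y_{\text{pred},i} - y_i \bigr)^2
```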

``` python
import torch
import math
import matplotlib.pyplot as plt

dtype = torch.float
# Pick the GPU when one is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
torch.set_default_device(device)  # requires PyTorch >= 2.0
device
```

    'cuda'

``` python
# Create Tensors to hold input and outputs.
# By default, requires_grad=False, which indicates that we do not need to
# compute gradients with respect to these Tensors during the backward pass.
x = torch.linspace(-math.pi, math.pi, 2000, dtype=dtype)
y = torch.sin(x)
plt.plot(x.detach().cpu(), y.detach().cpu())
```

![](index_files/figure-commonmark/cell-3-output-1.png)

``` python
# Create Tensors for the weights. For a third order polynomial, we need
# 4 weights: y = a + b x + c x^2 + d x^3. Here they start from fixed values
# (a = 1, the rest 0) rather than random ones.
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Tensors during the backward pass.
from torch import tensor

a = tensor([1], dtype=dtype, requires_grad=True)
b = tensor([0], dtype=dtype, requires_grad=True)
c = tensor([0], dtype=dtype, requires_grad=True)
d = tensor([0], dtype=dtype, requires_grad=True)

a,b,c,d
```

    (tensor([1.], device='cuda:0', requires_grad=True),
     tensor([0.], device='cuda:0', requires_grad=True),
     tensor([0.], device='cuda:0', requires_grad=True),
     tensor([0.], device='cuda:0', requires_grad=True))

``` python
# Scratch check: `loss` is only defined by a run of the training cell below,
# so this cell raises a NameError if executed first.
torch.abs(loss - a)
```

    tensor([8.3212], device='cuda:0', grad_fn=<AbsBackward0>)

``` python
learning_rate = 1e-6

y_pred = a + b * x + c * x ** 2 + d * x ** 3
plt.plot(x.detach().cpu(),y_pred.detach().cpu())

previous_loss = tensor([0], dtype=dtype)  # plain tracker; no gradients needed

for t in range(2000):
    # Forward pass: compute predicted y using operations on Tensors.
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss using operations on Tensors.
    # Now loss is a Tensor of shape (1,)
    # loss.item() gets the scalar value held in the loss.
    loss = (y_pred - y).pow(2).sum()
    # Snapshot the fit whenever the loss has changed by more than 50 since the
    # last snapshot. Because previous_loss is set to loss just before printing,
    # the two printed values are identical (see the output below).
    if torch.abs(previous_loss - loss) > 50:
        previous_loss = loss
        print(t, loss.item(), previous_loss.item())
        plt.plot(x.detach().cpu(), y_pred.detach().cpu())

    # Use autograd to compute the backward pass. This call will compute the
    # gradient of loss with respect to all Tensors with requires_grad=True.
    # After this call a.grad, b.grad, c.grad and d.grad will be Tensors holding
    # the gradient of the loss with respect to a, b, c, d respectively.
    loss.backward()

    # Manually update weights using gradient descent. Wrap in torch.no_grad()
    # because weights have requires_grad=True, but we don't need to track this
    # in autograd.
    with torch.no_grad():
        a -= learning_rate * a.grad
        b -= learning_rate * b.grad
        c -= learning_rate * c.grad
        d -= learning_rate * d.grad

        # Manually zero the gradients after updating weights
        a.grad = None
        b.grad = None
        c.grad = None
        d.grad = None

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
plt.plot(x.detach().cpu(),y.detach().cpu())
```

    0 2999.5 2999.5
    1 2633.0322265625 2633.0322265625
    2 2440.7451171875 2440.7451171875
    3 2299.944091796875 2299.944091796875
    4 2184.31591796875 2184.31591796875
    5 2086.458740234375 2086.458740234375
    6 2002.959228515625 2002.959228515625
    7 1931.475830078125 1931.475830078125
    8 1870.134033203125 1870.134033203125
    9 1817.367919921875 1817.367919921875
    11 1732.486328125 1732.486328125
    13 1668.533203125 1668.533203125
    16 1599.33935546875 1599.33935546875
    20 1537.807861328125 1537.807861328125
    25 1486.8544921875 1486.8544921875
    32 1436.569580078125 1436.569580078125
    41 1384.7811279296875 1384.7811279296875
    51 1332.85009765625 1332.85009765625
    62 1278.809814453125 1278.809814453125
    73 1227.13720703125 1227.13720703125
    85 1173.205322265625 1173.205322265625
    97 1121.6829833984375 1121.6829833984375
    110 1068.4569091796875 1068.4569091796875
    123 1017.7958984375 1017.7958984375
    137 965.9642333984375 965.9642333984375
    152 913.40380859375 913.40380859375
    168 860.5406494140625 860.5406494140625
    185 807.7796020507812 807.7796020507812
    203 755.4998779296875 755.4998779296875
    222 704.0501098632812 704.0501098632812
    242 653.7462768554688 653.7462768554688
    264 602.63671875 602.63671875
    288 551.522705078125 551.522705078125
    314 501.137939453125 501.137939453125
    343 450.4776306152344 450.4776306152344
    376 399.1850280761719 399.1850280761719
    413 348.7738037109375 348.7738037109375
    456 298.3631896972656 298.3631896972656
    507 248.23623657226562 248.23623657226562
    570 198.20193481445312 198.20193481445312
    653 147.97088623046875 147.97088623046875
    774 97.72457885742188 97.72457885742188
    999 47.624717712402344 47.624717712402344
    Result: y = 0.03058280050754547 + 0.8431105613708496 x + -0.005276043433696032 x^2 + -0.09139160066843033 x^3

![](index_files/figure-commonmark/cell-6-output-2.png)
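
The manual loop above can also be written with `torch.optim`. The following is a sketch, not code from the course: it fits the same third-order polynomial but lets `torch.optim.SGD` handle the weight update and gradient zeroing, reusing `x`, `y`, and `dtype` from the cells above.

``` python
# Same fit as above, but with torch.optim.SGD doing the bookkeeping that the
# manual loop performs with torch.no_grad() and a.grad = None, etc.
weights = torch.tensor([1.0, 0.0, 0.0, 0.0], dtype=dtype, requires_grad=True)
optimizer = torch.optim.SGD([weights], lr=1e-6)

for t in range(2000):
    w0, w1, w2, w3 = weights
    y_pred = w0 + w1 * x + w2 * x ** 2 + w3 * x ** 3
    loss = (y_pred - y).pow(2).sum()

    optimizer.zero_grad()  # replaces the manual a.grad = None, ...
    loss.backward()        # populates weights.grad
    optimizer.step()       # weights -= lr * weights.grad

print(f'Result: y = {weights[0].item()} + {weights[1].item()} x '
      f'+ {weights[2].item()} x^2 + {weights[3].item()} x^3')
```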
