micrograd2023

- Version: 0.2.3
- Home page: https://github.com/hdocmsu/micrograd2023/
- Summary: micrograd2023 was developed based on Andrej Karpathy's
  micrograd, with added documentation using nbdev, for teaching purposes
- Upload time: 2024-08-29 15:25:23
- Author: Hung Do, PhD, MSEE
- Requires Python: >=3.7
- License: Apache Software License 2.0
- Keywords: nbdev, micrograd, micrograd2023
- Requirements: no requirements were recorded
# micrograd2023


<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->
<img src='media/mArtificialNeuralNetwork_title.gif' width=100% height=auto>

## Literate Programming

``` mermaid
flowchart LR
  A(Andrej's micrograd) --> C((Combination))
  B(Jeremy's nbdev) --> C
  C -->|Literate Programming| D(micrograd2023)
```

<img src='media/literate_programming.svg' width=100% height=auto >

## Disclaimers

`micrograd2023`, an automatic differentiation software, was developed
based on [Andrej Karpathy’s](https://karpathy.ai/)
[micrograd](https://github.com/karpathy/micrograd).

Andrej is the man who needs no introduction in the field of Deep
Learning and Computer Vision. He released a series of lectures called
[Neural Networks: Zero to
Hero](https://karpathy.ai/zero-to-hero.html), which I found extremely
educational and practical. I am reviewing the lectures and creating
notes for myself and for teaching purposes.

`micrograd2023` was written using [nbdev](https://nbdev.fast.ai/), which
was developed by [Jeremy Howard](https://jeremy.fast.ai/), the man who
needs no introduction in the field of Deep Learning. Jeremy also created
the extremely influential `fastai` Deep Learning
[library](https://docs.fast.ai/) and [courses](https://course.fast.ai/).
I highly recommend `fastai` if you are interested in starting your ML
and DL journey.

`nbdev` is a powerful tool that can be used to efficiently develop,
build, test, document, and distribute software packages all in one
place: Jupyter Notebooks (I used Jupyter Notebooks in VS Code). In this
tutorial, you will learn how to use `nbdev` to develop the
`micrograd2023` software.
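
For a flavor of the workflow, nbdev notebooks use directives such as
`#| default_exp` and `#| export` to decide which cells are exported into
the Python package. The snippet below is a minimal, hypothetical sketch
of such cells (the actual `micrograd2023` notebooks may differ):

``` python
# --- first notebook cell ---
#| default_exp engine
# the `default_exp` directive above sends exported cells to micrograd2023/engine.py

# --- a later notebook cell ---
#| export
def greet(name: str) -> str:
    "A toy function; the `#| export` directive above includes it in the built library."
    return f"Hello, {name}!"
```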

## Demonstrations

- A detailed demonstration of `micrograd2023` for training and
  integrating an MLP can be found in this [MLP
  DEMO](https://hdocmsu.github.io/micrograd2023/mlp_demo.html).

- A demonstration of `micrograd2023` for Physics: auto-differentiation
  of the popular cosine function can be found in this [Physics Cosine
  DEMO](https://hdocmsu.github.io/micrograd2023/phys_demo_cos.html).

  - The `micrograd2023` results are compared with the analytical
    solutions, `pytorch`’s autograd, and `jax`’s autograd (see the
    sketch after this list).
  - Additionally, second-order derivatives are calculated using `jax`’s
    autograd.
  - It is also possible to use `jax`’s autograd to calculate
    higher-order derivatives.

- A demonstration of `micrograd2023` for Physics: auto-differentiation
  of the popular exponential decay function can be found in this
  [Physics Exp.
  DEMO](https://hdocmsu.github.io/micrograd2023/phys_demo_exp.html).

- A demonstration of `micrograd2023` for Physics: auto-differentiation
  of a damping function can be found in this [Physics Damp
  DEMO](https://hdocmsu.github.io/micrograd2023/phys_demo_damp.html).

- A demonstration of `micrograd2023` for MRI: auto-differentiation of a
  T2\* decay model of data acquired with a multi-echo UTE sequence.
  Additionally, the auto-differentiated derivatives are then used to
  calculate the Fisher Information Matrix (FIM), which in turn allows
  calculation of the Cramér-Rao Lower Bound (CRLB) of an unbiased
  estimator of T2\*. Details can be seen in the [MRI T2\* Decay
  DEMO](https://hdocmsu.github.io/micrograd2023/mri_demo_expdecay.html).

- A demonstration of `micrograd2023` for MRI: auto-differentiation of a
  T1 recovery model of data acquired with a myocardial MOLLI T1 mapping
  sequence. Additionally, the auto-differentiated derivatives are then
  used to calculate the Fisher Information Matrix (FIM), which in turn
  allows calculation of the Cramér-Rao Lower Bound (CRLB) of an unbiased
  estimator of T1. Details can be seen in the [MRI T1 Recovery
  DEMO](https://hdocmsu.github.io/micrograd2023/mri_demo_exprec.html).
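
As a flavor of what these comparisons look like, the sketch below checks
a first derivative of cos(x) computed by `pytorch`’s autograd against
the analytical result -sin(x). It is only an illustration of the idea
(the evaluation point is arbitrary); the full comparisons, including
`micrograd2023` and `jax`, are in the demo notebooks linked above.

``` python
import math
import torch

x0 = 1.3  # arbitrary evaluation point

# PyTorch autograd: d/dx cos(x) = -sin(x)
x = torch.tensor(x0, requires_grad=True)
y = torch.cos(x)
y.backward()

analytical = -math.sin(x0)
print(f"autograd: {x.grad.item():.6f}, analytical: {analytical:.6f}")
assert math.isclose(x.grad.item(), analytical, rel_tol=1e-5)
```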

## Features

Compared to Andrej’s `micrograd`, `micrograd2023` has many extensions
such as:

- Adding more and extensive unit and integration tests.

- Adding more methods to the
  [`Value`](https://hdocmsu.github.io/micrograd2023/engine.html#value)
  object, such as `tanh()`, `exp()`, and `log()`. In principle, any
  method/function whose derivative is known, or that can be broken into
  primitive operations, can be added to the
  [`Value`](https://hdocmsu.github.io/micrograd2023/engine.html#value)
  object. Examples are `sin()`, `sigmoid()`, `cos()`, etc., which I left
  as exercises 😄 (see the sketch after this list for one way to start).

- Refactoring Andrej’s demo code to make it easier to demonstrate many
  fundamental concepts and/or best engineering practices when training
  neural networks. The concepts/best practices are listed below. Some
  concepts were demonstrated, while the rest are left as exercises 😄.

  - Always implement the simplest and most intuitive solution as a
    baseline to compare against whatever fancier implementation we want
    to achieve

  - Data preparation - train, validation, and test sets are disjoint

  - Over-fitting

  - Gradient Descent vs. Stochastic Gradient Descent (SGD)

  - Develop and experiment with different optimizers, e.g. SGD, SGD
    with momentum, RMSProp, Adam, etc.

  - SGD with momentum

  - Non-optimal learning rate

  - How to find the optimal learning rate

  - Learning rate decay and learning rate schedule

  - Role of nonlinearity

  - Linearly separable and non-separable data

  - Out-of-distribution shift

  - Under-fitting

  - The importance and trade-off between width and depth of the MLP

  - Over-fitting a single batch

  - Hyperparameter tuning and optimizing

  - Weights initialization

  - Inspect and visualize statistics of weights, gradients,
    gradient-to-data ratios, and update-to-data ratios

  - Forward and backward dynamics of shallow and deep linear and
    non-linear Neural Networks

  - etc.
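
As one way to start on the `sin()` exercise mentioned above, here is a
minimal sketch, assuming `micrograd2023`’s
[`Value`](https://hdocmsu.github.io/micrograd2023/engine.html#value)
keeps the same internals as Andrej’s original micrograd (a
`Value(data, _children, _op)` constructor plus `data`, `grad`, and
`_backward` attributes); please check `engine.py` for the actual
attribute names before using it:

``` python
import math

from micrograd2023.engine import Value  # assumes the internals noted above


def value_sin(self):
    "sin() for Value: d/dx sin(x) = cos(x), accumulated via the chain rule."
    out = Value(math.sin(self.data), (self,), 'sin')

    def _backward():
        self.grad += math.cos(self.data) * out.grad

    out._backward = _backward
    return out


# attach as a method (or add it directly inside the Value class definition)
Value.sin = value_sin
```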

If you study lectures by Andrej and Jeremy, you will probably notice
that they are both great educators and utilize both top-down and
bottom-up approaches in their teaching, but Andrej predominantly uses
the *bottom-up* approach while Jeremy predominantly uses the *top-down*
one. I am personally fascinated by both educators, found value in both
of their approaches, and hope you will too!

## Related Projects

Below are a few of my projects related to optimization and Deep
Learning:

- Diploma Research on Crystal Structure using Gradient-based
  Optimization
  [SLIDES](https://hdocmsu.github.io/projects/ictp_1_thesis/)

- Deep Convolutional Neural Network (DCNN) for MRI image segmentation
  with uncertainty quantification and a controllable trade-off between
  False Positives and False Negatives. [Journal Paper
  PDF](https://hdocmsu.github.io/assets/pdf/papers/do_mrm2019.pdf) and
  [Conference Talk
  SLIDES](https://hdocmsu.github.io/assets/pdf/slides/2018-10-28-HungDo_MLworkshop2018_web.pdf)

- Deep Learning-based Denoising for quantitative MRI. [Conference Talk
  SLIDES](https://hdocmsu.github.io/assets/pdf/slides/2019-02-06-HungDo_dnoiseNET_web.pdf)

- Besides technical projects, I had an opportunity to contribute to and
  engage in the whole process of FDA 510(k) clinical validation of Deep
  Learning-based MRI Reconstruction, resulting in the world’s first
  fully integrated Deep Learning-based Reconstruction Technology to
  receive Food and Drug Administration (FDA) 510(k) clearance for use
  in a clinical environment. [Product
  Page](https://us.medical.canon/products/magnetic-resonance/aice/),
  [Whitepaper
  HTMLs](https://us.medical.canon/products/magnetic-resonance/experience/),
  [Whitepaper
  PDF](https://canonmedical.widen.net/content/t3vj2i3kwt/original/637271900181629483SK.pdf?u=vmbupa&),
  and [Whitepaper
  PDF2](https://canonmedical.widen.net/content/u72d0f4vuh/original/637309925416001229SG.pdf?u=vmbupa&)

  - [AiCE Challenge
    1](https://us.medical.canon/promo/magnetic-resonance/aice/1/): 1.5T
    MRI with Deep Learning Reconstruction (DLR) vs. 3T MRI

  - [AiCE Challenge
    2](https://us.medical.canon/promo/magnetic-resonance/aice/2/): 1.5T
    MRI with DLR vs. 3T MRI - beyond knee and brain MRI

  - [AiCE Challenge
    3](https://us.medical.canon/promo/magnetic-resonance/aice/3/):
    Faster and Higher Resolution MRI with DLR

  - [AiCE Challenge
    4](https://us.medical.canon/promo/magnetic-resonance/aice/4/):
    Faster MRI with DLR

## How to install

The [micrograd2023](https://pypi.org/project/micrograd2023/) package was
uploaded to [PyPI](https://pypi.org/) and can be easily installed using
the command below.

`pip install micrograd2023`

### Developer install

If you want to develop `micrograd2023` yourself, please use an editable
installation.

`git clone https://github.com/hdocmsu/micrograd2023.git`

`pip install -e "micrograd2023[dev]"`

You also need editable installations of
[nbdev](https://github.com/fastai/nbdev),
[fastcore](https://github.com/fastai/fastcore), and
[execnb](https://github.com/fastai/execnb).

Happy Coding!!!

## How to use

Here are examples of using `micrograd2023`.

``` python
# import necessary objects and functions
from micrograd2023.engine import Value
from micrograd2023.nn import Neuron, Layer, MLP
from micrograd2023.utils import draw_dot
import random
```

``` python
# inputs xs, weights ws, and bias b
w1 = Value(1.1)
x1 = Value(0.5)
w2 = Value(0.12)
x2 = Value(1.7)
b = Value(0.34)

# pre-activation
s = w1*x1 + x2*w2 + b

# activation
y = s.tanh()

# automatic differentiation
y.backward()

# show the computation graph of the perceptron
draw_dot(y)
```

![](index_files/figure-commonmark/cell-5-output-1.svg)

``` python
# added random seed for reproducibility
random.seed(1234)
n = Neuron(3)
x = [Value(0.15), Value(-0.21), Value(-0.91) ]
y = n(x)
y.backward()
draw_dot(y)
```

![](index_files/figure-commonmark/cell-6-output-1.svg)

You can use `micrograd2023` to train an MLP and learn fundamental
concepts such as overfitting, optimal learning rate, etc.
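
The decision-boundary and loss/accuracy figures below come from such
training runs. As a starting point, here is a minimal, hypothetical
training-loop sketch. It assumes `micrograd2023`’s `MLP` follows the
same API as Andrej’s micrograd (`MLP(n_inputs, [layer_sizes])`, callable
on a list of inputs, with `parameters()` returning the trainable
`Value`s); see the MLP DEMO linked above for the full, tested version.

``` python
import random

from micrograd2023.nn import MLP

random.seed(1234)
model = MLP(2, [8, 8, 1])  # 2 inputs -> two hidden layers -> 1 output

# tiny toy dataset: 2D points with +/-1 labels (illustrative only)
xs = [[0.5, 1.0], [-1.2, 0.3], [0.9, -0.7], [-0.4, -1.1]]
ys = [1.0, -1.0, 1.0, -1.0]

learning_rate = 0.05
for step in range(50):
    # forward pass: mean squared error over the toy dataset
    preds = [model(x) for x in xs]
    loss = sum((pred - y) ** 2 for pred, y in zip(preds, ys)) * (1.0 / len(ys))

    # backward pass: zero the gradients, then backpropagate
    for p in model.parameters():
        p.grad = 0.0
    loss.backward()

    # plain SGD update
    for p in model.parameters():
        p.data -= learning_rate * p.grad

print(f"final loss: {loss.data:.4f}")
```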

Good training

<img src='media/MPL_good_training_decision_boundary.png' width=100% height=auto >
<img src='media/MPL_good_training_loss_acc_plots.png' width=100% height=auto >

Overfitting

<img src='media/MPL_overfitting_decision_boundary.png' width=100% height=auto >
<img src='media/MPL_overfitting_loss_acc_plots.png' width=100% height=auto >

## Testing

To perform unit testing, use a terminal to navigate to the directory
that contains the `tests` folder, then simply type `python -m pytest` in
the terminal. Note that
[PyTorch](https://pytorch.org/get-started/locally/) is needed for the
tests to run, since derivatives calculated using `micrograd2023` are
compared against those calculated using `PyTorch` as references.

`python -m pytest`
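
For reference, the kind of comparison such a test performs can be
sketched as follows (a hypothetical test written for illustration, not
copied from the `tests` folder):

``` python
import math

import torch

from micrograd2023.engine import Value


def test_tanh_gradient_matches_pytorch():
    # micrograd2023: forward tanh, then backpropagate
    x = Value(0.7)
    y = x.tanh()
    y.backward()

    # PyTorch reference in float64 for an apples-to-apples comparison
    xt = torch.tensor(0.7, dtype=torch.float64, requires_grad=True)
    yt = torch.tanh(xt)
    yt.backward()

    assert math.isclose(x.grad, xt.grad.item(), rel_tol=1e-6)
```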

            
