# mytorch is your torch :fire:

[![Build Status](https://travis-ci.org/geraltofrivia/mytorch.svg?branch=master)](https://travis-ci.org/geraltofrivia/mytorch)
![GitHub](https://img.shields.io/github/license/geraltofrivia/mytorch)
![PyPI](https://img.shields.io/pypi/v/my-torch)



A transparent boilerplate + bag of tricks to ease my (yours?) (our?) PyTorch dev time.

Some parts here are inspired/copied from [fast.ai](https://github.com/fastai/fastai).
However, I've tried to keep it such that control of the model (model architecture), vocabulary, and preprocessing always stays outside of this library.
The [training loop](src/mytorch/loops.py), [data samplers](src/mytorch/dataiters.py) etc. can be used independently of anything else in here, but of course they work better together.

I'll be adding proper documentation and examples here, gradually.

# Installation

`pip install my-torch`

(Added hyphen because someone beat me to the [mytorch](https://pypi.org/project/mytorch/) package name.)

# Idea

Use or ignore most parts of the library. It will not hide code from you, and you retain control over your models.
If you need just one thing, no fluff, feel free to copy-paste snippets of the code from this repo into yours.
I'd be delighted if you dropped me a line if you found this stuff helpful.

# Features

1. **Customizable Training Loop**
    - Callbacks @ epoch start and end
    - Weight Decay (see [this blog post](https://www.fast.ai/2018/07/02/adam-weight-decay/))
    - :scissors: Gradient Clipping (see the sketch after this list)
    - :floppy_disk: Model Saving
    - :bell: Mobile push notifications @ the end of training :ghost: ([See Usage](#notifications))

2. **Sortist Sampling**

3. **Custom Learning Rate Schedules**

4. **Customisability & Flat Hierarchy**
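
To make one of these tricks concrete: in a plain PyTorch step, gradient clipping looks like the sketch below. mytorch's loop wires the equivalent in for you; this is a generic illustration using stock `torch` APIs, not mytorch's internal code.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # any nn.Module works here
opt = torch.optim.SGD(model.parameters(), lr=0.01)

x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(x), y)

opt.zero_grad()
loss.backward()
# Clip the global gradient norm before stepping, so a single noisy
# batch can't blow up the weights.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
opt.step()
```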

# Usage


## Simplest Use Case
```python
import torch, torch.nn as nn, numpy as np
from mytorch import loops

# Assuming you have a torch model with a predict and a forward function.
# model = MyModel()
assert isinstance(model, nn.Module)

# X, Y are inputs and labels for a text classification task with four classes: 200 train, 100 valid examples.
X_trn = np.random.randint(0, 100, (200, 4))
Y_trn = np.random.randint(0, 4, (200, 1))
X_val = np.random.randint(0, 100, (100, 4))
Y_val = np.random.randint(0, 4, (100, 1))

# Preparing data
data = {"train": {"x": X_trn, "y": Y_trn}, "valid": {"x": X_val, "y": Y_val}}

# Specifying other hyperparameters
epochs = 10
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
loss_function = nn.functional.cross_entropy
train_function = model      # or model.forward
predict_function = model.predict

train_acc, valid_acc, train_loss = loops.simplest_loop(epochs=epochs, data=data, opt=optimizer,
                                                        loss_fn=loss_function, 
                                                        train_fn=train_function,
                                                        predict_fn=predict_function)
```
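
For completeness, here is one minimal, hypothetical model that satisfies the assumptions above (a `forward` returning logits and a `predict` function). The class and its internals are illustrative only, not part of mytorch:

```python
import torch
import torch.nn as nn

class MyModel(nn.Module):
    """Toy text classifier: embed token ids, average them, project to 4 classes."""

    def __init__(self, vocab_size=100, emb_dim=32, n_classes=4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.clf = nn.Linear(emb_dim, n_classes)

    def forward(self, x):
        # x: (batch, seq_len) integer token ids -> (batch, n_classes) logits
        return self.clf(self.emb(x).mean(dim=1))

    def predict(self, x):
        # Class ids via argmax over logits; no gradients needed at predict time.
        with torch.no_grad():
            return self.forward(x).argmax(dim=-1)

model = MyModel()
```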

## Slightly more complex examples

@TODO: They exist! Just need to add examples :sweat_smile:
1. Custom eval
2. Custom data sampler
3. Custom learning rate annealing schedules
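
Until those examples land, here is what a custom learning-rate annealing schedule can look like in plain PyTorch (a generic cosine `LambdaLR` sketch, not mytorch's own schedule API):

```python
import math
import torch

model = torch.nn.Linear(10, 4)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Cosine annealing from the base lr down to ~0 over `total_steps`.
total_steps = 1000
sched = torch.optim.lr_scheduler.LambdaLR(
    opt, lr_lambda=lambda step: 0.5 * (1 + math.cos(math.pi * step / total_steps)))

for step in range(total_steps):
    # ... forward / backward / opt.step() would go here ...
    sched.step()  # anneal the lr once per batch
```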

## Saving the model
@TODO
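
In the meantime, the plain-PyTorch equivalent of what a model-saving hook does is roughly the following sketch; mytorch's own saving utility may differ in names and layout:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Save model + optimizer state so training can be resumed later.
torch.save({"model": model.state_dict(),
            "opt": optimizer.state_dict()}, "checkpoint.pt")

# Restore.
state = torch.load("checkpoint.pt")
model.load_state_dict(state["model"])
optimizer.load_state_dict(state["opt"])
```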


## Notifications
The training loop can send notifications to your phone, informing you that your model's done training and reporting metrics along with it.
We use [push.techulus.com](https://push.techulus.com/) to do so and you'll need the app on your phone.
*If you're not bothered, this part of the code will stay out of your way.* 
But if you'd like this completely unnecessary gimmick, follow along:

1. Get the app. [Play Store](https://play.google.com/store/apps/details?id=com.techulus.push) |  [AppStore](https://itunes.apple.com/us/app/push-by-techulus/id1444391917?ls=1&mt=8)
2. Sign in/up and get your **API key**.
3. Make the key available. Options:
    1. In a plaintext file named `./push-techulus-key` at the root of this repo. You could just `echo 'your-api-key' >> ./push-techulus-key`.
    2. As a string, through arguments to the training loop.
4. Pass a flag to the loop to enable notifications.
5. Done :balloon: You'll be notified when your model's done training.
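
A tiny sketch of the key-file convention from step 3; the variable names, and how the key is ultimately handed to the loop, are illustrative (check the loop's signature for the actual argument):

```python
from pathlib import Path

# Option 3.1: read the API key from the plaintext file at the repo root.
key_file = Path("./push-techulus-key")
api_key = key_file.read_text().strip() if key_file.exists() else None

# Option 3.2: or just pass the key directly as a string.
# api_key = "your-api-key"
```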

# Changelog
#### v0.0.6
1. Interfaced some metrics from [torchmetrics](https://pypi.org/project/torchmetrics/), and implemented some more (packaging them into a neat little module is pending).

#### v0.0.2
1. Added negative sampling
1. [TODO] Multiple evaluation functions
1. [TODO] Logging
1. [TODO] Typing all functions

#### v0.0.1
1. Added some tests.
1. Wrapping spaCy tokenizers, with some vocab management. 
1. Packaging :confetti_ball:

# Upcoming
1. Models
    1. Classifiers
    1. Encoders
    1. ~~Transformers~~ (USE [pytorch-transformers by :huggingface:](https://github.com/huggingface/pytorch-transformers))
2. Using FastProgress for progress + live plotting
3. [W&B](https://wandb.ai) integration
4. ?? (tell me [here](https://github.com/geraltofrivia/mytorch/issues))

# Contributions
I'm eager to implement more tricks/features in the library, while maintaining the flat structure (and ensuring backward compatibility). 
Open to suggestions and contributions. Thanks! 

PS: Always appreciate more tests.

# Acknowledgements

An important part of the code was designed and tested by:

> [Gaurav Maheshwari](https://gauravm.gitbook.io/)  · 
> GitHub [@saist1993](https://github.com/saist1993/)  · 
> Twitter [@__gauravm](https://twitter.com/__gauravm)



            
