lazytorch

- Name: lazytorch
- Version: 1.0.9
- Home page: https://github.com/ppvalluri09/lazytorch
- Author: Pavan Preetham Valluri
- Requires Python: >=3.6
- Upload time: 2020-12-01 08:10:21
- Requirements: none recorded
# lazytorch

<!--<link rel="stylesheet" href="path/to/font-awesome/css/font-awesome.min.css">
<div class="shields" style="float: left">
    <img src="https://img.shields.io/github/issues/ppvalluri09/lazytorch"/>
    <img src="https://img.shields.io/github/forks/ppvalluri09/lazytorch"/>
    <img src="https://img.shields.io/github/stars/ppvalluri09/lazytorch"/>
    <img src="https://img.shields.io/github/license/ppvalluri09/lazytorch"/>
</div>-->

A PyTorch wrapper for PyTorch users who are tired of writing the same code EVERY SINGLE TIME <i class="fa fa-smile-o" aria-hidden="true"></i>

<!-- <img src="https://venturebeat.com/wp-content/uploads/2019/06/pytorch-e1576624094357.jpg?zoom=2&resize=1200%2C450&strip=all"/> -->
<!--<img src="https://raw.githubusercontent.com/ppvalluri09/lazytorch/master/lazyML/logo/lazyML.png"/>-->

# Installation

### With Git
```console
foo@bar:~$ git clone https://github.com/ppvalluri09/lazytorch
foo@bar:~$ cd lazytorch
foo@bar:~$ pip3 install -r requirements.txt
```

### With pip

```console
foo@bar:~$ pip3 install lazytorch
```

<h2><i class="fa fa-terminal" aria-hidden="true"></i> Code Usage</h2> 

<h4><i class="fa fa-picture-o" aria-hidden="true"></i> Image Classification (transfer learning)</h4>

```python
# imports
from lazytorch.models import *
from lazytorch.trainer import *
from lazytorch.loaders import *

# defining the learner
learn = ConvLearner("resnet34", 6, "multi", False)

# directories for the image folders; there are loaders for other problem types too
train_path = "../datasets/IIC_dataset/seg_train"
val_path = "../datasets/IIC_dataset/seg_test"

# making the dataset
dataset = ImageClassificationDataset(True, train_path, val_path)

# build the trainer to teach the learner
trainer = Trainer(learn, dataset.load())

# tell the trainer to fit the learner
# metrics could be accuracy, f1, recall, precision, roc_auc
# trainer.fit_cycles(epochs: int, lr: float, optimizer: str, metrics: list)
trainer.fit_cycles(epochs=3, metrics=["accuracy", "f1"]) 

# save the model
trainer.save("stage-1")

# if you want to train all layers
trainer.unfreeze()
# and fit again
```
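Under the hood, a transfer-learning learner like the one above boils down to freezing the pretrained backbone and training a fresh classification head, which `unfreeze()` later re-enables. A minimal torch-only sketch of that pattern (the module names here are illustrative stand-ins, not lazytorch's internals):

```python
import torch.nn as nn

def freeze_backbone(model: nn.Module, head_name: str = "fc") -> nn.Module:
    """Freeze every parameter except those in the classification head."""
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith(head_name)
    return model

# stand-in for a pretrained backbone plus a 6-class head (e.g. resnet34 -> 6 classes)
model = nn.Sequential()
model.add_module("backbone", nn.Linear(128, 64))
model.add_module("fc", nn.Linear(64, 6))
freeze_backbone(model)

# only the head's weight and bias remain trainable
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
```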

### Using PyTorch datasets

<h4>Integration with torchvision.datasets</h4>

```python
from lazytorch.models import *
from lazytorch.loaders import *
from lazytorch.trainer import *

# use the TorchDataset module to download the dataset from the PyTorch repositories
cifar = TorchDataset(".", dataset_name="CIFAR10", download=True)
# define your learner; here we use transfer learning with ResNet-18
learn = Learner("resnet18", 10, "multi", False)
# initialize your trainer; cifar.load() returns a tuple of train and validation sets
trainer = Trainer(learn, cifar.load(bs=128))
# fit your trainer
trainer.fit_cycles(epochs=5, metrics=["accuracy", "f1"])

# if you want to retrain the entire network
trainer.unfreeze()
trainer.fit_cycles(epochs=20, metrics=["accuracy", "f1"])
```
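Per the README, `cifar.load(bs=128)` hands the trainer a pair of train and validation loaders. Conceptually these are ordinary `torch.utils.data.DataLoader`s; the sketch below shows that shape using a synthetic `TensorDataset` (so it runs without downloading CIFAR-10; the sizes are illustrative):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# synthetic stand-in for CIFAR-10: 512 RGB 32x32 images, 10 classes
images = torch.randn(512, 3, 32, 32)
labels = torch.randint(0, 10, (512,))

train_ds = TensorDataset(images[:400], labels[:400])
val_ds = TensorDataset(images[400:], labels[400:])

train_loader = DataLoader(train_ds, batch_size=128, shuffle=True)
val_loader = DataLoader(val_ds, batch_size=128, shuffle=False)

# each batch is (images, labels): shapes (128, 3, 32, 32) and (128,)
xb, yb = next(iter(train_loader))
```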

### Classification

<h4><i class="fa fa-file-text" aria-hidden="true"></i> pd.DataFrame or np.ndarray</h4>

```python
# define your custom PyTorch model and pass it to the learner
# set imbalance=True only if you have a class-imbalance issue (note: with imbalance=True, your training labels must be one-hot encoded)
# the problem can be made multi-class by setting problem="multi"
learner = CustomLearner(model, problem="binary", imbalance=False)

# read your dataset in
df = pd.read_csv("path_to_train.csv")
train_set, val_set = Splitter(df.drop(["target"], axis=1).values, df["target"].values, imbalance=False, test_size=0.2)
# you can skip splitting if you already have a validation set
# if you have a pd.DataFrame instead of arrays, you can pass it directly to the function below
# note: if you pass a pd.DataFrame, its label column must be named 'target'
train_loader, val_loader = ClassificationDataset(train_set, val_set)

# define the trainer
trainer = Trainer(learner, (train_loader, val_loader))
trainer.fit_cycles(epochs=50, lr=3e-4, metrics=["accuracy", "f1", "roc_auc"])

# save the model
trainer.save("stage-1")
```
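The split that `Splitter` performs is, in essence, a shuffled train/validation split over your feature and label arrays. A hand-rolled NumPy sketch of that idea (Splitter's exact behaviour, such as stratification under `imbalance=True`, may differ):

```python
import numpy as np

def simple_split(X, y, test_size=0.2, seed=0):
    """Shuffle row indices, then carve off the first test_size fraction as validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_val = int(len(X) * test_size)
    val_idx, train_idx = idx[:n_val], idx[n_val:]
    return (X[train_idx], y[train_idx]), (X[val_idx], y[val_idx])

X = np.arange(100).reshape(50, 2).astype(np.float32)
y = (np.arange(50) % 2).astype(np.int64)
(train_X, train_y), (val_X, val_y) = simple_split(X, y)
# 40 training rows, 10 validation rows
```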

### NLP
<h4>WIP: currently only the SentimentDataset module is available, for preparing DataLoaders for sentiment-classification tasks</h4>
<h5>Other NLP tasks will be added soon</h5>

```python
# other imports here
from lazytorch import loaders

# use the SentimentDataset loader
dataset = loaders.SentimentDataset(train_df, val_df, imbalance=False, lemmatize=False, url=True, remove_punc=False)
# get the vocab from the SentimentDataset module; we'll need it later to encode
# test examples and to size the embedding layer
vocab = dataset.vocab
train_loader, val_loader = dataset.load(bs=64)

# define your custom model for sentiment classification; note that the embedding layer's input size should be len(vocab) + 1
learner = CustomLearner(model, problem="binary", imbalance=False)

# define the trainer
trainer = Trainer(learner, (train_loader, val_loader))
trainer.fit_cycles(epochs=20, lr=3e-4, metrics=["accuracy", "f1"])

trainer.save("stage-1")
```

### Regression

```python
# same as the classification problem; changing problem from "binary" to "reg" is all it takes
# the imbalance param doesn't matter here; passing it has no effect
learner = CustomLearner(model, problem="reg")
# the rest is the same as above
```
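The `problem` argument mainly selects the loss and output handling: roughly, `"binary"` pairs with a sigmoid/BCE-style loss, `"multi"` with cross-entropy, and `"reg"` with mean squared error. A rough torch sketch of that dispatch (an assumption about lazytorch's behaviour, not its actual code):

```python
import torch.nn as nn

def loss_for(problem: str) -> nn.Module:
    """Pick a criterion for the given problem type (illustrative mapping)."""
    losses = {
        "binary": nn.BCEWithLogitsLoss(),  # sigmoid + binary cross-entropy
        "multi": nn.CrossEntropyLoss(),    # log-softmax + negative log-likelihood
        "reg": nn.MSELoss(),               # mean squared error
    }
    return losses[problem]

criterion = loss_for("reg")
```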

<h3><i class="fa fa-line-chart" aria-hidden="true"></i> Plotting Loss</h3>

```python
# after training is complete, you can also monitor the losses by plotting them
trainer.plot()
```

## Checklist

- [X] Image Classification with and without transfer learning
- [X] Classification and Regression Tasks
- [X] Data Loaders and Dataset makers
- [X] Color codes to monitor losses
- [ ] Support for NLP tasks <b>(WIP)</b>
- [ ] Support for Sequence Models <b>(WIP)</b>
- [ ] Support for Unsupervised Learning <b>(WIP)</b>

# Want to contribute?

It would be great to have people work on and contribute to this project, helping PyTorch users make the best use of it without any fuss.
To work on this project, email me at <a href="mailto:ppvalluri09@gmail.com">ppvalluri09@gmail.com</a>

# Author Details
<h3>Pavan Preetham Valluri</h3>
<h5><a href="#">ppvalluri09@gmail.com</a></h5>
<h5><a href="https://www.linkedin.com/in/pavan-preetham-a1928116b/">LinkedIn</a></h5>
<h5><a href="https://twitter.com/ppvalluri">Twitter</a></h5>



            
