# fasttrain
`fasttrain` is a lightweight framework for building training loops for neural nets as fast as possible. It's designed to take care of the boring details of writing training loops in [PyTorch](https://pytorch.org/), so you don't have to worry about pretty-printing the loss and metrics, or about computing them correctly.
## Installation
```
$ pip install fasttrain
```
## How do we start?
Let's use a neural network to classify images in the FashionMNIST dataset:
```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor

learning_rate = 1e-3
batch_size = 64
epochs = 5

training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor()
)

test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor()
)

train_dataloader = DataLoader(training_data, batch_size=batch_size)
test_dataloader = DataLoader(test_data, batch_size=batch_size)

class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork()
```
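Before wiring up a trainer, it can help to push one batch through the untrained model as a quick sanity check. This is plain PyTorch, nothing `fasttrain`-specific:
```python
# Plain PyTorch sanity check: the untrained model should emit one logit per class.
X_batch, y_batch = next(iter(train_dataloader))
logits = model(X_batch)
print(logits.shape)  # torch.Size([64, 10])
```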
Then we define a trainer:
```python
from fasttrain import Trainer
from fasttrain.metrics import accuracy

class MyTrainer(Trainer):

    # Define how we compute the loss
    def compute_loss(self, input_batch, output_batch):
        (_, y_batch) = input_batch
        return nn.CrossEntropyLoss()(output_batch, y_batch)

    # Define how we compute metrics
    def eval_metrics(self, input_batch, output_batch):
        (_, y_batch) = input_batch
        return {
            "accuracy": accuracy(output_batch, y_batch, task="multiclass")
        }
```
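For reference, multiclass accuracy on a batch boils down to the snippet below. This is a plain-PyTorch sketch of what the `accuracy` helper is expected to compute, not `fasttrain`'s actual implementation:
```python
def multiclass_accuracy(logits, targets):
    # Take the highest-scoring class per sample and compare with the labels.
    preds = logits.argmax(dim=1)
    return (preds == targets).float().mean().item()
```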
Finally, let's train our model:
```python
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
trainer = MyTrainer(model, optimizer)
history = trainer.train(train_dataloader, val_data=test_dataloader, num_epochs=epochs)
```
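For comparison, here is a rough sketch of the manual loop that the single `trainer.train()` call replaces, written with standard PyTorch only (no progress bar, no metric history):
```python
# For comparison only: a hand-written training/validation loop in plain PyTorch.
loss_fn = nn.CrossEntropyLoss()
for epoch in range(epochs):
    # Training pass
    model.train()
    for X_batch, y_batch in train_dataloader:
        optimizer.zero_grad()
        loss = loss_fn(model(X_batch), y_batch)
        loss.backward()
        optimizer.step()

    # Validation pass with accuracy tracking
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for X_batch, y_batch in test_dataloader:
            preds = model(X_batch).argmax(dim=1)
            correct += (preds == y_batch).sum().item()
            total += y_batch.size(0)
    print(f"epoch {epoch + 1}: val accuracy {correct / total:.3f}")
```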
`fasttrain` offers some useful callbacks. One of them is `Tqdm`, which shows a pretty progress bar:
![training_loop](https://github.com/samedit66/fasttrain/assets/45196253/edecaee0-1c92-4a9f-ac3d-639c458a2ab5)
`Trainer.train()` returns the training history. It holds a dict of metrics recorded over the epochs and can plot them:
```python
history.plot("loss", with_val=True)
```
![loss](https://github.com/samedit66/fasttrain/assets/45196253/efc0c9e9-4459-4bce-81ec-3c1a53cf51f1)
```python
history.plot("accuracy", with_val=True)
```
![accuracy](https://github.com/samedit66/fasttrain/assets/45196253/336bdef0-9f06-4887-8cb5-05255c89b228)