# HyperModule
`HyperModule` is a wrapper class that manages the training, validation, and testing of neural networks. It makes training progress easy to monitor by logging the loss and validation accuracy, and it provides convenient functions for loading and saving pre-trained models, streamlining the entire workflow.
## Usage
### Getting Started
The simplest way to use `HyperModule` is to create instances of the network, optimizer, and scheduler first, and then load them into a `HyperModule`:
```Python
# Example 1.
import torch

model = NN()  # NN is your own torch.nn.Module subclass
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)

hm = HyperModule(model, criterion, optimizer, scheduler)

# train_dataloader, valid_dataloader, test_dataloader, and save_path
# are defined elsewhere.
hm.train(train_dataloader, valid_dataloader, save_path, num_epochs=100)
hm.test(test_dataloader)
```
### Partials
However, **a more recommended approach is to pass the optimizer and scheduler as partial functions**,
which saves you from chaining the model parameters, optimizer, and learning-rate scheduler together yourself.
```Python
# Example 2.
from hypermodule.partials import optim, sched

hm = HyperModule(
    model=NN(),
    criterion=torch.nn.CrossEntropyLoss(),
    optimizer=optim("SGD", lr=0.01, momentum=0.9),
    scheduler=sched("ExponentialLR", gamma=0.9),
)
```
This is equivalent to Example 1.
The partial functions `optim`/`sched` can also take an existing optimizer/scheduler as their first argument
and generate a new optimizer/scheduler instance with the updated hyperparameters.
```Python
# Example 3.
model = NN()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)

hm = HyperModule(
    model=model,
    criterion=criterion,
    optimizer=optim(optimizer, lr=0.005, momentum=0.99),
    scheduler=sched(scheduler, gamma=0.11),
)
```
Now the optimizer used in training is an SGD optimizer with learning rate 0.005 and momentum 0.99,
and the scheduler is an ExponentialLR scheduler with gamma 0.11.
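For intuition, here is a minimal sketch of how an `optim`-style partial could work. This is an illustration only, not the package's actual implementation; `optim_sketch` and `factory` are hypothetical names:
```Python
# A minimal sketch of an `optim`-style partial (hypothetical names,
# for illustration only).
import torch

def optim_sketch(opt, **hyperparams):
    if isinstance(opt, str):
        # Look up the optimizer class by name, e.g. "SGD" -> torch.optim.SGD
        opt_class, defaults = getattr(torch.optim, opt), {}
    else:
        # Reuse the class and current settings of an existing optimizer
        opt_class, defaults = type(opt), dict(opt.defaults)
    defaults.update(hyperparams)  # new hyperparameters override old ones

    def factory(params):
        # HyperModule would call this later with model.parameters()
        return opt_class(params, **defaults)

    return factory

make_sgd = optim_sketch("SGD", lr=0.005, momentum=0.99)
optimizer = make_sgd(torch.nn.Linear(4, 2).parameters())
```
Deferring construction this way is what removes the manual chaining: the partial carries only the class and hyperparameters, and the model's parameters are bound at the last moment.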
### Hyperparameters
With partials, we can also provide the hyperparameters in a separate dict, apart from the optimizer and scheduler partials.
```Python
# Example 4.
from hypermodule.partials import optim, sched

hyperparams = {
    'lr': 0.01,
    'momentum': 0.9,
    'gamma': 0.9,
}

hm = HyperModule(
    model=NN(),
    criterion=torch.nn.CrossEntropyLoss(),
    optimizer=optim("SGD"),
    scheduler=sched("ExponentialLR"),
    hyperparams=hyperparams,
)
```
Note that this is equivalent to Examples 1 and 2.
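Conceptually, the shared dict has to be split between the optimizer and the scheduler. One way this could be done is by matching keys against each constructor's signature; the sketch below is an assumption about the mechanism, not necessarily the package's actual logic, and `split_hyperparams` is an illustrative name:
```Python
# Hypothetical sketch: route a shared hyperparameter dict to the right
# constructor by matching keys against each constructor's parameters.
import inspect
import torch

def split_hyperparams(hyperparams, opt_class, sched_class):
    opt_keys = inspect.signature(opt_class).parameters
    sched_keys = inspect.signature(sched_class).parameters
    opt_hp = {k: v for k, v in hyperparams.items() if k in opt_keys}
    sched_hp = {k: v for k, v in hyperparams.items() if k in sched_keys}
    return opt_hp, sched_hp

hp = {'lr': 0.01, 'momentum': 0.9, 'gamma': 0.9}
opt_hp, sched_hp = split_hyperparams(
    hp, torch.optim.SGD, torch.optim.lr_scheduler.ExponentialLR
)
# opt_hp   -> {'lr': 0.01, 'momentum': 0.9}
# sched_hp -> {'gamma': 0.9}
```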
## Class structure
- `__init__`: The constructor of the class, which takes the model, criterion, optimizer, scheduler, and hyperparameters as input arguments.
- `optimizer`: A property that returns the optimizer used for training the model.
  - `optimizer.setter`: Sets the optimizer, either by accepting an instance of `torch.optim.Optimizer` or by creating an optimizer from the provided optimizer configuration.
- `scheduler`: A property that returns the scheduler used to adjust the learning rate of the optimizer during training.
  - `scheduler.setter`: Sets the scheduler, either by accepting an instance of `torch.optim.lr_scheduler.LRScheduler` or by creating a scheduler from the provided scheduler configuration.
- `hyperparams`: A property that returns the hyperparameters used for optimizing the model.
  - `hyperparams.setter`: Sets the hyperparameters from a dictionary and, when needed, creates a new optimizer and scheduler from them.
- `train`: Trains the model using the provided training and validation data loaders for the specified number of epochs, saves the best model (the one with the lowest validation loss), and returns the training and validation losses and accuracies. (See the usage sketch after this list.)
  - `_update`: Updates the model parameters from a batch of input images and targets and the resulting loss.
  - `_update_progress`: Updates the training progress bar with the current epoch and batch loss.
  - `_update_scheduler`: Steps the scheduler to update the learning rate of the optimizer.
  - `_update_history`: Appends the current epoch's training and validation losses and accuracy to the training history.
  - `_perform_validation`: Validates the model with the provided validation data loader and loss function and returns the validation loss and accuracy.
- `validate`: Evaluates the model on the given dataloader using the given criterion and returns the loss and accuracy.
- `test`: Evaluates the model on the given test dataloader using the given criterion and returns the accuracy.
- `predict`: Returns the model's predictions for the given dataloader, optionally applying softmax to the output.
- `save`: Saves the model and training information to the given save path, or to the HyperModule's `load_path` attribute if none is given.
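Putting these pieces together, a typical session looks roughly like this (a sketch based on the methods above; `NN`, the dataloaders, and the checkpoint path are placeholders for your own objects):
```Python
# Sketch of a typical HyperModule session.
hm = HyperModule(
    model=NN(),
    criterion=torch.nn.CrossEntropyLoss(),
    optimizer=optim("SGD", lr=0.01, momentum=0.9),
    scheduler=sched("ExponentialLR", gamma=0.9),
)
hm.train(train_dataloader, valid_dataloader, "checkpoint.pt", num_epochs=100)
loss, acc = hm.validate(valid_dataloader)  # validation loss and accuracy
hm.test(test_dataloader)                   # test-set accuracy
preds = hm.predict(test_dataloader)        # model outputs, optionally softmaxed
hm.save("checkpoint.pt")                   # weights + training history
```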
## Loading and Saving
The *information* that `HyperModule` loads and saves (see the sketch after this list) is

* the `state_dict` of the neural network
* the `state_dict` of the optimizer
* the `state_dict` of the scheduler
* the number of epochs the network has been trained for
* the training loss of each epoch
* the validation accuracy of each epoch
* the testing accuracy
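As a rough picture, such a checkpoint can be thought of as a single dict passed to `torch.save`. The key names below are illustrative assumptions, not necessarily the package's actual ones, and the history variables are placeholders:
```Python
# Hypothetical checkpoint layout -- key names are illustrative only.
import torch

checkpoint = {
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
    "scheduler_state_dict": scheduler.state_dict(),
    "epochs_trained": 100,
    "train_loss": train_loss_history,     # one value per epoch
    "valid_accuracy": valid_acc_history,  # one value per epoch
    "test_accuracy": test_accuracy,
}
torch.save(checkpoint, "checkpoint.pt")

# Restoring later:
checkpoint = torch.load("checkpoint.pt")
model.load_state_dict(checkpoint["model_state_dict"])
optimizer.load_state_dict(checkpoint["optimizer_state_dict"])
scheduler.load_state_dict(checkpoint["scheduler_state_dict"])
```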