# Hyperparameter Optimization for Deep Learning (HPO4DL)
HPO4DL is a framework for multi-fidelity (gray-box) hyperparameter optimization.
The core optimizer in HPO4DL is DyHPO, a novel Bayesian optimization approach
tailored for deep learning. DyHPO dynamically determines which hyperparameter
configurations to train further, using a deep kernel for Gaussian processes that
captures the details of the learning curve and an acquisition function that
incorporates multi-budget information.
## Installation
To install the package:
```bash
pip install hpo4dl
```
## Getting started
The following is a simple example to get you started:
```python
from typing import List, Dict, Union
from hpo4dl.tuner import Tuner
from ConfigSpace import ConfigurationSpace


def objective_function(
    configuration: Dict,
    epoch: int,
    previous_epoch: int,
    checkpoint_path: str
) -> Union[Dict, List[Dict]]:
    # Toy objective: minimize (x - 2)^2, reporting one result per newly
    # trained epoch.
    x = configuration["x"]
    evaluated_info = [
        {'epoch': i, 'metric': (x - 2) ** 2}
        for i in range(previous_epoch + 1, epoch + 1)
    ]
    return evaluated_info


configspace = ConfigurationSpace({"x": (-5.0, 10.0)})

tuner = Tuner(
    objective_function=objective_function,
    configuration_space=configspace,
    minimize=True,
    max_budget=1000,
    optimizer='dyhpo',
    seed=0,
    max_epochs=27,
    num_configurations=1000,
    output_path='hpo4dl_results',
)

incumbent = tuner.run()
```
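Real search spaces usually mix numeric ranges and categorical choices. The ConfigSpace
dictionary shorthand used above also accepts lists for categoricals; the hyperparameter
names below are illustrative, not required by HPO4DL:

```python
from ConfigSpace import ConfigurationSpace

# A hypothetical deep-learning search space: tuples define numeric ranges,
# lists define categorical choices.
configspace = ConfigurationSpace({
    "lr": (1e-4, 1e-1),            # float range for the learning rate
    "batch_size": [32, 64, 128],   # categorical choices
    "optimizer": ["sgd", "adam"],  # categorical choices
})
```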
Key Parameters Explained:
- ```objective_function```: The function you aim to optimize.
- ```configuration_space```: The hyperparameter configuration space over which the optimization is performed.
- ```minimize```: Boolean flag indicating whether the objective function should be minimized (True) or maximized (False).
- ```max_budget```: The total number of epochs the tuner will spend, distributed across the
  hyperparameter configurations it evaluates.
- ```optimizer```: Specifies the optimization technique employed.
- ```seed```: Random seed for reproducibility.
- ```max_epochs```: The maximum number of epochs for which any single configuration is evaluated (see the arithmetic sketch after this list).
- ```num_configurations```: The number of candidate configurations DyHPO scores before selecting the next one for
  evaluation. It controls the balance between exploration and exploitation in the optimization process.
- ```output_path```: The location where the results and the checkpoint of the best hyperparameter
  configuration are saved.
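Together, ```max_budget``` and ```max_epochs``` bound how the search unfolds. A quick
back-of-the-envelope check (plain Python, not part of the API):

```python
max_budget = 1000  # total epochs across all configurations
max_epochs = 27    # cap per configuration

# Lower bound on distinct configurations if every one ran to the full cap;
# DyHPO typically tries far more, advancing most for only a few epochs.
print(max_budget // max_epochs)  # 37
```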
### Objective function
```python
def objective_function(
    configuration: Dict,
    epoch: int,
    previous_epoch: int,
    checkpoint_path: str
) -> Union[Dict, List[Dict]]
```
The objective function is tailored to support interrupted and resumed training processes.
Specifically, it should continue training from ```previous_epoch``` up to the designated ```epoch```.
The function should return a dictionary or a list of dictionaries upon completion.
Every dictionary must include the ```epoch``` and ```metric``` keys. Here's a sample return value:
```
{
    "epoch": 5,
    "metric": 0.76
}
```
For optimal performance with DyHPO, ensure the metric is normalized, for example to the range [0, 1], as in the sketch below.
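An error rate such as ```1 - accuracy``` is already in [0, 1]; an unbounded loss can be
clipped and min-max scaled. A minimal sketch, assuming known loss bounds (the helper
name and the bounds are hypothetical):

```python
def normalized_metric(val_loss: float, best: float = 0.0, worst: float = 5.0) -> float:
    """Clip a loss to assumed bounds and scale it into [0, 1]."""
    clipped = min(max(val_loss, best), worst)
    return (clipped - best) / (worst - best)
```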
Lastly, ```checkpoint_path``` provides a location for saving any intermediate files produced
during training for the current configuration: models, logs, and other relevant data,
ensuring that training can resume seamlessly.
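A minimal sketch of a resumable objective function, assuming PyTorch, assuming
```checkpoint_path``` is a writable directory, and assuming the configuration contains
an ```lr``` hyperparameter; the model, data, and file name are placeholders, not part
of the HPO4DL API:

```python
import os
from typing import Dict, List

import torch
from torch import nn


def objective_function(
    configuration: Dict,
    epoch: int,
    previous_epoch: int,
    checkpoint_path: str
) -> List[Dict]:
    model = nn.Linear(10, 1)  # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=configuration["lr"])

    os.makedirs(checkpoint_path, exist_ok=True)
    state_file = os.path.join(checkpoint_path, "state.pt")

    # Resume this configuration's model and optimizer state if it was
    # trained before and paused.
    if previous_epoch > 0 and os.path.exists(state_file):
        state = torch.load(state_file)
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optimizer"])

    inputs, targets = torch.randn(64, 10), torch.randn(64, 1)  # placeholder data
    results = []
    for current_epoch in range(previous_epoch + 1, epoch + 1):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(inputs), targets)
        loss.backward()
        optimizer.step()
        results.append({"epoch": current_epoch, "metric": loss.item()})

    # Persist state so a later call can continue from `epoch`.
    torch.save(
        {"model": model.state_dict(), "optimizer": optimizer.state_dict()},
        state_file,
    )
    return results
```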
### Detailed Examples
For a detailed exploration of the HPO4DL framework, we've provided an in-depth example
under: ```examples/timm_main.py```
To execute the provided example, use the following command:
```bash
python examples/timm_main.py \
    --dataset torch/cifar100 \
    --train-split train \
    --val-split validation \
    --optimizer dyhpo \
    --output-dir ./hpo4dl_results
```
## Citation
```
@inproceedings{wistuba2022supervising,
  title     = {Supervising the Multi-Fidelity Race of Hyperparameter Configurations},
  author    = {Martin Wistuba and Arlind Kadra and Josif Grabocka},
  booktitle = {Thirty-Sixth Conference on Neural Information Processing Systems},
  year      = {2022},
  url       = {https://openreview.net/forum?id=0Fe7bAWmJr}
}
```