Field | Value
:--- | :---
Name | syne-tune
Version | 0.10.0
Summary | Distributed Hyperparameter Optimization on SageMaker
Author | AWS
Upload time | 2023-11-08 12:29:04
# Syne Tune: Large-Scale and Reproducible Hyperparameter Optimization
[PyPI](https://pypi.org/project/syne-tune/)
[License: Apache-2.0](https://opensource.org/licenses/Apache-2.0)
[Downloads](https://pepy.tech/project/syne-tune)
[Documentation](https://syne-tune.readthedocs.io)
[Supported Python versions](https://pypi.org/project/syne-tune/)
[Codecov](https://app.codecov.io/gh/awslabs/syne-tune)

**[Documentation](https://syne-tune.readthedocs.io/en/latest/index.html)** | **[Tutorials](https://syne-tune.readthedocs.io/en/latest/tutorials/basics/README.html)** | **[API Reference](https://syne-tune.readthedocs.io/en/latest/_apidoc/modules.html#)** | **[PyPI](https://pypi.org/project/syne-tune)** | **[Latest Blog Post](https://aws.amazon.com/blogs/machine-learning/hyperparameter-optimization-for-fine-tuning-pre-trained-transformer-models-from-hugging-face/)**
Syne Tune provides state-of-the-art algorithms for hyperparameter optimization (HPO) with the following key features:
* **Lightweight and platform-agnostic**: Syne Tune is designed to work with
different execution backends, so you are not locked into a particular
distributed system architecture. Syne Tune runs with minimal dependencies.
* **Wide coverage of different HPO methods**: Syne Tune supports more than 20 different optimization methods across [multi-fidelity HPO](https://syne-tune.readthedocs.io/en/latest/tutorials/multifidelity/README.html), [constrained HPO](https://syne-tune.readthedocs.io/en/latest/tutorials/basics/basics_outlook.html#further-topics), [multi-objective HPO](https://syne-tune.readthedocs.io/en/latest/getting_started.html#supported-multi-objective-optimization-methods), [transfer learning](https://syne-tune.readthedocs.io/en/latest/tutorials/transfer_learning/transfer_learning.html), [cost-aware HPO](https://syne-tune.readthedocs.io/en/latest/_apidoc/syne_tune.optimizer.schedulers.searchers.cost_aware.html), and [population-based training](https://syne-tune.readthedocs.io/en/latest/_apidoc/syne_tune.optimizer.schedulers.pbt.html).
* **Simple, modular design**: Rather than wrapping other HPO
frameworks, Syne Tune provides simple APIs and scheduler templates, which can
easily be [extended to your specific needs](https://syne-tune.readthedocs.io/en/latest/tutorials/developer/README.html).
Studying the code will allow you to understand what the different algorithms
are doing, and how they differ from each other.
* **Industry-strength Bayesian optimization**: Syne Tune has comprehensive support
for [Gaussian Process-based Bayesian optimization](https://syne-tune.readthedocs.io/en/latest/tutorials/basics/basics_bayesopt.html).
The same code powers modalities such as multi-fidelity HPO, constrained HPO, and
cost-aware HPO, and has been tried and tested in production for several years.
* **Support for distributed workloads**: Syne Tune lets you move fast, thanks to the parallel compute resources AWS SageMaker offers. Syne Tune allows ML/AI practitioners to easily set up and run studies with many [experiments running in parallel](https://syne-tune.readthedocs.io/en/latest/tutorials/experimentation/README.html). Run on different compute environments (locally, AWS, simulation) by changing just one line of code.
* **Out-of-the-box tabulated benchmarks:** Tabulated benchmarks let you simulate results in seconds while preserving the real dynamics of asynchronous or synchronous HPO with any number of workers.
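The simulation idea behind tabulated benchmarks can be sketched in a few lines. This is an illustrative sketch only (the `simulate` helper and its table format are hypothetical, not Syne Tune's simulator backend): trial runtimes and losses come from a lookup table, and a priority queue over simulated completion times replays an asynchronous run with several workers without ever sleeping.

```python
import heapq

def simulate(table, n_workers=2):
    """table: list of (runtime_seconds, loss) tuples, one per trial.
    Returns (total_simulated_time, best_loss) without real waiting."""
    events = []  # min-heap of (simulated_completion_time, loss)
    now = 0.0
    pending = list(table)
    # Fill all workers at simulated time zero.
    for _ in range(min(n_workers, len(pending))):
        runtime, loss = pending.pop(0)
        heapq.heappush(events, (now + runtime, loss))
    best = float("inf")
    # Replay completions in simulated-time order; a freed worker
    # immediately starts the next pending trial.
    while events:
        now, loss = heapq.heappop(events)
        best = min(best, loss)
        if pending:
            runtime, next_loss = pending.pop(0)
            heapq.heappush(events, (now + runtime, next_loss))
    return now, best
```

With two workers and runtimes 10, 5, and 20 seconds, the worker that finishes at simulated time 5 picks up the 20-second trial, so the whole run takes 25 simulated seconds while executing in milliseconds of wall-clock time.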
Syne Tune is developed in collaboration with the team behind the [Automatic Model Tuning](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html) service.
## Installing
To install Syne Tune via pip, run:
```bash
pip install 'syne-tune[basic]'
```
or to install the latest version from source:
```bash
git clone https://github.com/awslabs/syne-tune.git
cd syne-tune
python3 -m venv st_venv
. st_venv/bin/activate
pip install --upgrade pip
pip install -e '.[basic]'
```
This installs everything in a virtual environment `st_venv`. Remember to activate
this environment before working with Syne Tune. We also recommend building the
virtual environment from scratch now and then, in particular when you pull a new
release, as dependencies may have changed.
See our [change log](CHANGELOG.md) to see what changed in the latest version.
## Getting started
To enable tuning, report metrics from your training script so they can be communicated to Syne Tune.
This is done by calling `report(epoch=epoch, loss=loss)`, as shown in the example below:
```python
# train_height_simple.py
import logging
import time

from syne_tune import Reporter
from argparse import ArgumentParser

if __name__ == '__main__':
    root = logging.getLogger()
    root.setLevel(logging.INFO)
    parser = ArgumentParser()
    parser.add_argument('--epochs', type=int)
    parser.add_argument('--width', type=float)
    parser.add_argument('--height', type=float)
    args, _ = parser.parse_known_args()
    report = Reporter()
    for step in range(args.epochs):
        time.sleep(0.1)
        dummy_score = 1.0 / (0.1 + args.width * step / 100) + args.height * 0.1
        # Feed the score back to Syne Tune.
        report(epoch=step + 1, mean_loss=dummy_score)
```
Once you have a training script reporting a metric, you can launch a tuning as follows:
```python
# launch_height_simple.py
from syne_tune import Tuner, StoppingCriterion
from syne_tune.backend import LocalBackend
from syne_tune.config_space import randint
from syne_tune.optimizer.baselines import ASHA

# hyperparameter search space to consider
config_space = {
    'width': randint(1, 20),
    'height': randint(1, 20),
    'epochs': 100,
}

tuner = Tuner(
    trial_backend=LocalBackend(entry_point='train_height_simple.py'),
    scheduler=ASHA(
        config_space,
        metric='mean_loss',
        resource_attr='epoch',
        max_resource_attr='epochs',
        search_options={'debug_log': False},
    ),
    stop_criterion=StoppingCriterion(max_wallclock_time=30),
    n_workers=4,  # how many trials are evaluated in parallel
)
tuner.run()
```
The above example runs ASHA with 4 asynchronous workers on a local machine.
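For intuition, the promotion rule at the heart of ASHA can be sketched in pure Python. This is a simplified illustration, not Syne Tune's implementation (the `promotable` helper is hypothetical): a trial paused at a rung is promoted to the next rung only if it ranks in the top 1/eta of all results recorded at that rung, and this decision is made asynchronously, without waiting for other trials.

```python
def promotable(rung_results, trial_score, eta=3, mode="min"):
    """Return True if `trial_score` ranks in the top 1/eta of all
    results recorded at this rung (including itself)."""
    results = sorted(rung_results + [trial_score], reverse=(mode == "max"))
    cutoff = max(1, len(results) // eta)  # keep the best 1/eta fraction
    return trial_score in results[:cutoff]
```

For example, with `eta=3` and losses `[0.9, 0.5, 0.7]` already at the rung, a new trial reporting `0.4` is the single best of four and is promoted, while one reporting `0.9` is not.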
## Experimentation with Syne Tune
If you plan to use advanced features of Syne Tune, such as different execution
backends or running experiments remotely, writing launcher scripts like
`examples/launch_height_simple.py` can become tedious. Syne Tune provides an
advanced experimentation framework, which you can learn about in
[this tutorial](https://syne-tune.readthedocs.io/en/latest/tutorials/experimentation/README.html)
or also in
[this one](https://syne-tune.readthedocs.io/en/latest/tutorials/odsc_tutorial/README.html).
## Supported HPO methods
The following hyperparameter optimization (HPO) methods are available in Syne Tune:
Method | Reference | Searcher | Asynchronous? | Multi-fidelity? | Transfer?
:--- | :---: | :---: | :---: | :---: | :---:
Grid Search | | deterministic | yes | no | no
Random Search | Bergstra, et al. (2011) | random | yes | no | no
Bayesian Optimization | Snoek, et al. (2012) | model-based | yes | no | no
BORE | Tiao, et al. (2021) | model-based | yes | no | no
CQR | Salinas, et al. (2023) | model-based | yes | no | no
MedianStoppingRule | Golovin, et al. (2017) | any | yes | yes | no
SyncHyperband | Li, et al. (2018) | random | no | yes | no
SyncBOHB | Falkner, et al. (2018) | model-based | no | yes | no
SyncMOBSTER | Klein, et al. (2020) | model-based | no | yes | no
ASHA | Li, et al. (2019) | random | yes | yes | no
BOHB | Falkner, et al. (2018) | model-based | yes | yes | no
MOBSTER | Klein, et al. (2020) | model-based | yes | yes | no
DEHB | Awad, et al. (2021) | evolutionary | no | yes | no
HyperTune | Li, et al. (2022) | model-based | yes | yes | no
DyHPO<sup>*</sup> | Wistuba, et al. (2022) | model-based | yes | yes | no
ASHABORE | Tiao, et al. (2021) | model-based | yes | yes | no
ASHACQR | Salinas, et al. (2023) | model-based | yes | yes | no
PASHA | Bohdal, et al. (2022)| random or model-based | yes | yes | no
REA | Real, et al. (2019) | evolutionary | yes | no | no
KDE | Falkner, et al. (2018) | model-based | yes | no | no
PBT | Jaderberg, et al. (2017) | evolutionary | no | yes | no
ZeroShotTransfer | Wistuba, et al. (2015) | deterministic | yes | no | yes
ASHA-CTS | Salinas, et al. (2021)| random | yes | yes | yes
RUSH | Zappella, et al. (2021)| random | yes | yes | yes
BoundingBox | Perrone, et al. (2019) | any | yes | yes | yes
<sup>*</sup>: We implement the model-based scheduling logic of DyHPO, but use
the same Gaussian process surrogate models as MOBSTER and HyperTune. The original
source code for the paper is [here](https://github.com/releaunifreiburg/DyHPO/tree/main).
The searchers fall into four broad categories: **deterministic**, **random**, **evolutionary**, and **model-based**. Random searchers sample candidate hyperparameter configurations uniformly at random, while model-based searchers sample them non-uniformly, guided by a model (e.g., a Gaussian process or density ratio estimator) and an acquisition function. Evolutionary searchers use an evolutionary algorithm.
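The model-based idea can be illustrated with a toy sketch (hypothetical helper names, not Syne Tune's API): fit a crude surrogate to the observations, score random candidates with an acquisition function that trades off predicted loss against exploration, and propose the best-scoring candidate.

```python
import random

def propose(history, num_candidates=50, seed=0):
    """history: list of (x, y) observations on [0, 1]; lower y is better.
    Returns the candidate maximizing a toy acquisition function."""
    rng = random.Random(seed)

    def surrogate_mean(x):
        # Toy "model": inverse-distance-weighted average of observed losses.
        if not history:
            return 0.0
        weights = [1.0 / (abs(x - xi) + 1e-6) for xi, _ in history]
        return sum(w * yi for w, (_, yi) in zip(weights, history)) / sum(weights)

    def acquisition(x):
        # Prefer low predicted loss, plus a bonus for unexplored regions.
        dist = min((abs(x - xi) for xi, _ in history), default=1.0)
        return -surrogate_mean(x) + 0.1 * dist

    candidates = [rng.uniform(0.0, 1.0) for _ in range(num_candidates)]
    return max(candidates, key=acquisition)
```

A real model-based searcher replaces the inverse-distance surrogate with, say, a Gaussian process posterior and the ad-hoc bonus with a principled acquisition function such as expected improvement, but the propose-by-argmax loop is the same.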
Syne Tune also supports [BoTorch](https://github.com/awslabs/syne-tune/blob/main/syne_tune/optimizer/schedulers/searchers/botorch/botorch_searcher.py) searchers.
## Supported multi-objective optimization methods
Method | Reference | Searcher | Asynchronous? | Multi-fidelity? | Transfer?
:--- |:---------------------------:|:------------:| :---: | :---: | :---:
Constrained Bayesian Optimization | Gardner, et al. (2014) | model-based | yes | no | no
MOASHA | Schmucker, et al. (2021) | random | yes | yes | no
NSGA-II | Deb, et al. (2002) | evolutionary | no | no | no
Multi Objective Multi Surrogate (MSMOS) | Guerrero-Viu, et al. (2021) | model-based | no | no | no
MSMOS with random scalarization | Paria, et al. (2018) | model-based | no | no | no
The HPO methods listed above can be used in a multi-objective setting via scalarization or non-dominated sorting. See [multiobjective_priority.py](syne_tune/optimizer/schedulers/multiobjective/multiobjective_priority.py) for details.
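Both reductions can be sketched in a few lines of pure Python (an illustration only; the helper names are hypothetical, not Syne Tune's implementation): scalarization collapses an objective vector into one weighted score, while non-dominated sorting keeps only the Pareto front.

```python
def scalarize(objectives, weights):
    """Weighted sum of objectives (all assumed to be minimized)."""
    return sum(w * o for w, o in zip(weights, objectives))

def pareto_front(points):
    """Return the points not dominated by any other point (minimization)."""
    def dominates(a, b):
        # a dominates b if it is no worse everywhere and strictly better somewhere.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

For example, among `(1, 2)`, `(2, 1)`, `(2, 2)`, and `(3, 3)`, only the first two are Pareto-optimal: `(2, 2)` is dominated by `(1, 2)`, and `(3, 3)` by everything else.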
## Examples
You will find many examples in the [examples/](examples/) folder illustrating
different functionalities provided by Syne Tune. For example:
* [launch_height_baselines.py](examples/launch_height_baselines.py):
launches HPO locally, tuning a simple script
[train_height_example.py](examples/training_scripts/height_example/train_height.py) for several baselines
* [launch_height_moasha.py](examples/launch_height_moasha.py):
  shows how to tune a script reporting multiple objectives, using multi-objective Asynchronous Hyperband (MOASHA)
* [launch_height_standalone_scheduler.py](examples/launch_height_standalone_scheduler.py):
launches HPO locally with a custom scheduler that cuts any trial that is not
in the top 80%
* [launch_height_sagemaker_remotely.py](examples/launch_height_sagemaker_remotely.py):
  launches the HPO loop on SageMaker rather than on a local machine; trials can be executed either
  on the remote machine or distributed again as separate SageMaker training jobs. See
  [launch_height_sagemaker_remote_launcher.py](examples/launch_height_sagemaker_remote_launcher.py)
  for remote launching with the help of `RemoteTuner`, also discussed in one of the FAQs.
* [launch_height_sagemaker.py](examples/launch_height_sagemaker.py):
  launches HPO on SageMaker to tune a SageMaker PyTorch estimator
* [launch_bayesopt_constrained.py](examples/launch_bayesopt_constrained.py):
launches Bayesian constrained hyperparameter optimization
* [launch_height_sagemaker_custom_image.py](examples/launch_height_sagemaker_custom_image.py):
  launches HPO on SageMaker to tune an entry point with a custom Docker image
* [launch_plot_results.py](examples/launch_plot_results.py): shows how to plot
  results of an HPO experiment
* [launch_tensorboard_example.py](examples/launch_tensorboard_example.py):
shows how results can be visualized on the fly with TensorBoard
* [launch_nasbench201_simulated.py](examples/launch_nasbench201_simulated.py):
demonstrates simulation of experiments on a tabulated benchmark
* [launch_fashionmnist.py](examples/launch_fashionmnist.py):
launches HPO locally tuning a multi-layer perceptron on Fashion MNIST. This
employs an easy-to-use benchmark convention
* [launch_huggingface_classification.py](examples/launch_huggingface_classification.py):
launches HPO on SageMaker to tune a SageMaker Hugging Face estimator for sentiment classification
* [launch_tuning_gluonts.py](examples/launch_tuning_gluonts.py):
launches HPO locally to tune a gluon-ts time series forecasting algorithm
* [launch_rl_tuning.py](examples/launch_rl_tuning.py):
  launches HPO locally to tune an RL algorithm on the CartPole environment
* [launch_height_ray.py](examples/launch_height_ray.py):
launches HPO locally with [Ray Tune](https://docs.ray.io/en/master/tune/index.html)
scheduler
## Examples for Experimentation and Benchmarking
You will find many examples for experimentation and benchmarking in
[benchmarking/examples/](benchmarking/examples/) and in
[benchmarking/nursery/](benchmarking/nursery/).
## FAQ and Tutorials
Check our [FAQ](https://syne-tune.readthedocs.io/en/latest/faq.html) to
learn more about Syne Tune functionalities.
* [Why should I use Syne Tune?](https://syne-tune.readthedocs.io/en/latest/faq.html#why-should-i-use-syne-tune)
* [What are the different installations options supported?](https://syne-tune.readthedocs.io/en/latest/faq.html#what-are-the-different-installations-options-supported)
* [How can I run on AWS and SageMaker?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-run-on-aws-and-sagemaker)
* [What are the metrics reported by default when calling the `Reporter`?](https://syne-tune.readthedocs.io/en/latest/faq.html#what-are-the-metrics-reported-by-default-when-calling-the-reporter)
* [How can I utilize multiple GPUs?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-utilize-multiple-gpus)
* [What is the default mode when performing optimization?](https://syne-tune.readthedocs.io/en/latest/faq.html#what-is-the-default-mode-when-performing-optimization)
* [How are trials evaluated on a local machine?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-are-trials-evaluated-on-a-local-machine)
* [Where can I find the output of the tuning?](https://syne-tune.readthedocs.io/en/latest/faq.html#where-can-i-find-the-output-of-the-tuning)
* [How can I change the default output folder where tuning results are stored?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-change-the-default-output-folder-where-tuning-results-are-stored)
* [What does the output of the tuning contain?](https://syne-tune.readthedocs.io/en/latest/faq.html#what-does-the-output-of-the-tuning-contain)
* [How can I enable trial checkpointing?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-enable-trial-checkpointing)
* [How can I retrieve the best checkpoint obtained after tuning?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-retrieve-the-best-checkpoint-obtained-after-tuning)
* [Which schedulers make use of checkpointing?](https://syne-tune.readthedocs.io/en/latest/faq.html#which-schedulers-make-use-of-checkpointing)
* [Is the tuner checkpointed?](https://syne-tune.readthedocs.io/en/latest/faq.html#is-the-tuner-checkpointed)
* [Where can I find the output of my trials?](https://syne-tune.readthedocs.io/en/latest/faq.html#where-can-i-find-the-output-of-my-trials)
* [How can I plot the results of a tuning?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-plot-the-results-of-a-tuning)
* [How can I specify additional tuning metadata?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-specify-additional-tuning-metadata)
* [How do I append additional information to the results which are stored?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-do-i-append-additional-information-to-the-results-which-are-stored)
* [I don’t want to wait, how can I launch the tuning on a remote machine?](https://syne-tune.readthedocs.io/en/latest/faq.html#i-dont-want-to-wait-how-can-i-launch-the-tuning-on-a-remote-machine)
* [How can I run many experiments in parallel?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-run-many-experiments-in-parallel)
* [How can I access results after tuning remotely?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-access-results-after-tuning-remotely)
* [How can I specify dependencies to remote launcher or when using the SageMaker backend?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-specify-dependencies-to-remote-launcher-or-when-using-the-sagemaker-backend)
* [How can I benchmark different methods?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-benchmark-different-methods)
* [What different schedulers do you support? What are the main differences between them?](https://syne-tune.readthedocs.io/en/latest/faq.html#what-different-schedulers-do-you-support-what-are-the-main-differences-between-them)
* [How do I define the configuration space?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-do-i-define-the-configuration-space)
* [How do I set arguments of multi-fidelity schedulers?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-do-i-set-arguments-of-multi-fidelity-schedulers)
* [How can I visualize the progress of my tuning experiment with Tensorboard?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-visualize-the-progress-of-my-tuning-experiment-with-tensorboard)
* [How can I add a new scheduler?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-add-a-new-scheduler)
* [How can I add a new tabular or surrogate benchmark?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-add-a-new-tabular-or-surrogate-benchmark)
* [How can I reduce delays in starting trials with the SageMaker backend?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-reduce-delays-in-starting-trials-with-the-sageMaker-backend)
* [How can I pass lists or dictionaries to the training script?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-pass-lists-or-dictionaries-to-the-training-script)
* [How can I write extra results for an experiment?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-write-extra-results-for-an-experiment)
Do you want to know more? Here are a number of tutorials.
* [Basics of Syne Tune](https://syne-tune.readthedocs.io/en/latest/tutorials/basics/README.html)
* [Choosing a Configuration Space](https://syne-tune.readthedocs.io/en/latest/search_space.html)
* [Using the Built-in Schedulers](https://syne-tune.readthedocs.io/en/latest/schedulers.html)
* [Multi-Fidelity Hyperparameter Optimization](https://syne-tune.readthedocs.io/en/latest/tutorials/multifidelity/README.html)
* [Benchmarking in Syne Tune](https://syne-tune.readthedocs.io/en/latest/tutorials/benchmarking/README.html)
* [Visualization of Results](https://syne-tune.readthedocs.io/en/latest/tutorials/visualization/README.html)
* [Rapid Experimentation with Syne Tune](https://syne-tune.readthedocs.io/en/latest/tutorials/experimentation/README.html)
* [How to Contribute a New Scheduler](https://syne-tune.readthedocs.io/en/latest/tutorials/developer/README.html)
* [PASHA: Efficient HPO and NAS with Progressive Resource Allocation](https://syne-tune.readthedocs.io/en/latest/tutorials/pasha/pasha.html)
* [Using Syne Tune for Transfer Learning](https://syne-tune.readthedocs.io/en/latest/tutorials/transfer_learning/transfer_learning.html)
* [Distributed Hyperparameter Tuning: Finding the Right Model can be Fast and Fun](https://syne-tune.readthedocs.io/en/latest/tutorials/odsc_tutorial/README.html)
## Blog Posts
* [Run distributed hyperparameter and neural architecture tuning jobs with Syne Tune](https://aws.amazon.com/blogs/machine-learning/run-distributed-hyperparameter-and-neural-architecture-tuning-jobs-with-syne-tune/)
* [Hyperparameter optimization for fine-tuning pre-trained transformer models from Hugging Face](https://aws.amazon.com/blogs/machine-learning/hyperparameter-optimization-for-fine-tuning-pre-trained-transformer-models-from-hugging-face/) [(notebook)](https://github.com/awslabs/syne-tune/blob/hf_blog_post/hf_blog_post/example_syne_tune_for_hf.ipynb)
* [Learn Amazon Simple Storage Service transfer configuration with Syne Tune](https://aws.amazon.com/blogs/opensource/learn-amazon-simple-storage-service-transfer-configuration-with-syne-tune/) [(code)](https://github.com/aws-samples/syne-tune-s3-transfer)
## Videos
* [Martin Wistuba: Hyperparameter Optimization for the Impatient (PyData 2023)](https://www.youtube.com/watch?v=onX6fXzp9Yk)
## Security
See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.
## Citing Syne Tune
If you use Syne Tune in a scientific publication, please cite the following paper:
["Syne Tune: A Library for Large Scale Hyperparameter Tuning and Reproducible Research"](https://openreview.net/forum?id=BVeGJ-THIg9&referrer=%5BAuthor%20Console%5D(%2Fgroup%3Fid%3Dautoml.cc%2FAutoML%2F2022%2FTrack%2FMain%2FAuthors%23your-submissions)) First Conference on Automated Machine Learning, 2022.
```bibtex
@inproceedings{
salinas2022syne,
title={Syne Tune: A Library for Large Scale Hyperparameter Tuning and Reproducible Research},
author={David Salinas and Matthias Seeger and Aaron Klein and Valerio Perrone and Martin Wistuba and Cedric Archambeau},
booktitle={International Conference on Automated Machine Learning, AutoML 2022},
year={2022},
url={https://proceedings.mlr.press/v188/salinas22a.html}
}
```
## License
This project is licensed under the Apache-2.0 License.
Raw data
{
"_id": null,
"home_page": "",
"name": "syne-tune",
"maintainer": "",
"docs_url": null,
"requires_python": "",
"maintainer_email": "",
"keywords": "",
"author": "AWS",
"author_email": "",
"download_url": "https://files.pythonhosted.org/packages/48/7a/1605daa555aef4a67835cb7e2239e3f1e3ad2321fe1aa9e4d45bf956387a/syne_tune-0.10.0.tar.gz",
"platform": null,
"description": "# Syne Tune: Large-Scale and Reproducible Hyperparameter Optimization\n\n[](https://pypi.org/project/syne-tune/)\n[](https://opensource.org/licenses/Apache-2.0)\n[](https://pepy.tech/project/syne-tune)\n[](https://syne-tune.readthedocs.io)\n[](https://pypi.org/project/syne-tune/)\n[](https://app.codecov.io/gh/awslabs/syne-tune)\n\n\n\n**[Documentation](https://syne-tune.readthedocs.io/en/latest/index.html)** | **[Tutorials](https://syne-tune.readthedocs.io/en/latest/tutorials/basics/README.html)** | **[API Reference](https://syne-tune.readthedocs.io/en/latest/_apidoc/modules.html#)** | **[PyPI](https://pypi.org/project/syne-tune)** | **[Latest Blog Post](https://aws.amazon.com/blogs/machine-learning/hyperparameter-optimization-for-fine-tuning-pre-trained-transformer-models-from-hugging-face/)**\n\nSyne Tune provides state-of-the-art algorithms for hyperparameter optimization (HPO) with the following key features:\n* **Lightweight and platform-agnostic**: Syne Tune is designed to work with\n different execution backends, so you are not locked into a particular\n distributed system architecture. 
Syne Tune runs with minimal dependencies.\n* **Wide coverage of different HPO methods**: Syne Tune supports more than 20 different optimization methods across [multi-fidelity HPO](https://syne-tune.readthedocs.io/en/latest/tutorials/multifidelity/README.html), [constrained HPO](https://syne-tune.readthedocs.io/en/latest/tutorials/basics/basics_outlook.html#further-topics), [multi-objective HPO](https://syne-tune.readthedocs.io/en/latest/getting_started.html#supported-multi-objective-optimization-methods), [transfer learning](https://syne-tune.readthedocs.io/en/latest/tutorials/transfer_learning/transfer_learning.html), [cost-aware HPO](https://syne-tune.readthedocs.io/en/latest/_apidoc/syne_tune.optimizer.schedulers.searchers.cost_aware.html), and [population-based training](https://syne-tune.readthedocs.io/en/latest/_apidoc/syne_tune.optimizer.schedulers.pbt.html).\n* **Simple, modular design**: Rather than wrapping other HPO\n frameworks, Syne Tune provides simple APIs and scheduler templates, which can\n easily be [extended to your specific needs](https://syne-tune.readthedocs.io/en/latest/tutorials/developer/README.html).\n Studying the code will allow you to understand what the different algorithms\n are doing, and how they differ from each other.\n* **Industry-strength Bayesian optimization**: Syne Tune has comprehensive support\n for [Gaussian Process-based Bayesian optimization](https://syne-tune.readthedocs.io/en/latest/tutorials/basics/basics_bayesopt.html).\n The same code powers modalities such as multi-fidelity HPO, constrained HPO, and\n cost-aware HPO, and has been tried and tested in production for several years.\n* **Support for distributed workloads**: Syne Tune lets you move fast, thanks to the parallel compute resources AWS SageMaker offers. Syne Tune allows ML/AI practitioners to easily set up and run studies with many [experiments running in parallel](https://syne-tune.readthedocs.io/en/latest/tutorials/experimentation/README.html). 
Run on different compute environments (locally, AWS, simulation) by changing just one line of code.\n* **Out-of-the-box tabulated benchmarks:** Tabulated benchmarks let you simulate results in seconds while preserving the real dynamics of asynchronous or synchronous HPO with any number of workers.\n\n\nSyne Tune is developed in collaboration with the team behind the [Automatic Model Tuning](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html) service.\n\n\n## Installing\n\nTo install Syne Tune from pip, you can simply do:\n\n```bash\npip install 'syne-tune[basic]'\n```\n\nor to install the latest version from source: \n\n```bash\ngit clone https://github.com/awslabs/syne-tune.git\ncd syne-tune\npython3 -m venv st_venv\n. st_venv/bin/activate\npip install --upgrade pip\npip install -e '.[basic]'\n```\n\nThis installs everything in a virtual environment `st_venv`. Remember to activate\nthis environment before working with Syne Tune. We also recommend building the\nvirtual environment from scratch now and then, in particular when you pull a new\nrelease, as dependencies may have changed.\n\nSee our [change log](CHANGELOG.md) to see what changed in the latest version. 
\n\n## Getting started\n\nTo enable tuning, you have to report metrics from a training script so that they can be communicated later to Syne Tune,\nthis can be accomplished by just calling `report(epoch=epoch, loss=loss)` as shown in the example below:\n\n```python\n# train_height_simple.py\nimport logging\nimport time\n\nfrom syne_tune import Reporter\nfrom argparse import ArgumentParser\n\nif __name__ == '__main__':\n root = logging.getLogger()\n root.setLevel(logging.INFO)\n parser = ArgumentParser()\n parser.add_argument('--epochs', type=int)\n parser.add_argument('--width', type=float)\n parser.add_argument('--height', type=float)\n args, _ = parser.parse_known_args()\n report = Reporter()\n for step in range(args.epochs):\n time.sleep(0.1)\n dummy_score = 1.0 / (0.1 + args.width * step / 100) + args.height * 0.1\n # Feed the score back to Syne Tune.\n report(epoch=step + 1, mean_loss=dummy_score)\n```\n\nOnce you have a training script reporting a metric, you can launch a tuning as follows:\n\n```python\n# launch_height_simple.py\nfrom syne_tune import Tuner, StoppingCriterion\nfrom syne_tune.backend import LocalBackend\nfrom syne_tune.config_space import randint\nfrom syne_tune.optimizer.baselines import ASHA\n\n# hyperparameter search space to consider\nconfig_space = {\n 'width': randint(1, 20),\n 'height': randint(1, 20),\n 'epochs': 100,\n}\n\ntuner = Tuner(\n trial_backend=LocalBackend(entry_point='train_height_simple.py'),\n scheduler=ASHA(\n config_space,\n metric='mean_loss',\n resource_attr='epoch',\n max_resource_attr=\"epochs\",\n search_options={'debug_log': False},\n ),\n stop_criterion=StoppingCriterion(max_wallclock_time=30),\n n_workers=4, # how many trials are evaluated in parallel\n)\ntuner.run()\n```\n\nThe above example runs ASHA with 4 asynchronous workers on a local machine.\n\n## Experimentation with Syne Tune\n\nIf you plan to use advanced features of Syne Tune, such as different execution\nbackends or running experiments remotely, 
writing launcher scripts like\n`examples/launch_height_simple.py` can become tedious. Syne Tune provides an\nadvanced experimentation framework, which you can learn about in\n[this tutorial](https://syne-tune.readthedocs.io/en/latest/tutorials/experimentation/README.html)\nor also in\n[this one](https://syne-tune.readthedocs.io/en/latest/tutorials/odsc_tutorial/README.html).\n\n## Supported HPO methods\n\nThe following hyperparameter optimization (HPO) methods are available in Syne Tune:\n\nMethod | Reference | Searcher | Asynchronous? | Multi-fidelity? | Transfer? \n:--- | :---: | :---: | :---: | :---: | :---: \nGrid Search | | deterministic | yes | no | no \nRandom Search | Bergstra, et al. (2011) | random | yes | no | no \nBayesian Optimization | Snoek, et al. (2012) | model-based | yes | no | no \nBORE | Tiao, et al. (2021) | model-based | yes | no | no \nCQR | Salinas, et al. (2023) | model-based | yes | no | no \nMedianStoppingRule | Golovin, et al. (2017) | any | yes | yes | no \nSyncHyperband | Li, et al. (2018) | random | no | yes | no \nSyncBOHB | Falkner, et al. (2018) | model-based | no | yes | no \nSyncMOBSTER | Klein, et al. (2020) | model-based | no | yes | no \nASHA | Li, et al. (2019) | random | yes | yes | no \nBOHB | Falkner, et al. (2018) | model-based | yes | yes | no \nMOBSTER | Klein, et al. (2020) | model-based | yes | yes | no \nDEHB | Awad, et al. (2021) | evolutionary | no | yes | no \nHyperTune | Li, et al. (2022) | model-based | yes | yes | no\nDyHPO<sup>*</sup> | Wistuba, et al. (2022) | model-based | yes | yes | no\nASHABORE | Tiao, et al. (2021) | model-based | yes | yes | no\nASHACQR | Salinas, et al. (2023) | model-based | yes | yes | no \nPASHA | Bohdal, et al. (2022)| random or model-based | yes | yes | no \nREA | Real, et al. (2019) | evolutionary | yes | no | no \nKDE | Falkner, et al. (2018) | model-based | yes | no | no \nPBT | Jaderberg, et al. (2017) | evolutionary | no | yes | no \nZeroShotTransfer | Wistuba, et al. 
(2015) | deterministic | yes | no | yes \nASHA-CTS | Salinas, et al. (2021)| random | yes | yes | yes \nRUSH | Zappella, et al. (2021)| random | yes | yes | yes \nBoundingBox | Perrone, et al. (2019) | any | yes | yes | yes\n\n<sup>*</sup>: We implement the model-based scheduling logic of DyHPO, but use\nthe same Gaussian process surrogate models as MOBSTER and HyperTune. The original\nsource code for the paper is [here](https://github.com/releaunifreiburg/DyHPO/tree/main).\n\nThe searchers fall into four broad categories, **deterministic**, **random**, **evolutionary** and **model-based**. The random searchers sample candidate hyperparameter configurations uniformly at random, while the model-based searchers sample them non-uniformly at random, according to a model (e.g., Gaussian process, density ration estimator, etc.) and an acquisition function. The evolutionary searchers make use of an evolutionary algorithm.\n\nSyne Tune also supports [BoTorch](https://github.com/awslabs/syne-tune/blob/main/syne_tune/optimizer/schedulers/searchers/botorch/botorch_searcher.py) searchers.\n\n## Supported multi-objective optimization methods\n\nMethod | Reference | Searcher | Asynchronous? | Multi-fidelity? | Transfer?\n:--- |:---------------------------:|:------------:| :---: | :---: | :---: \nConstrained Bayesian Optimization | Gardner, et al. (2014) | model-based | yes | no | no\nMOASHA | Schmucker, et al. (2021) | random | yes | yes | no\nNSGA-2 | Deb, et al. (2002) | evolutionary | no | no | no\nMulti Objective Multi Surrogate (MSMOS) | Guerrero-Viu, et al. (2021) | model-based | no | no | no\nMSMOS wihh random scalarization | Paria, et al. (2018) | model-based | no | no | no\n\nHPO methods listed can be used in a multi-objective setting by scalarization or non-dominated sorting. 
See [multiobjective_priority.py](syne_tune/optimizer/schedulers/multiobjective/multiobjective_priority.py) for details.\n\n## Examples\n\nYou will find many examples in the [examples/](examples/) folder illustrating\ndifferent functionalities provided by Syne Tune. For example:\n* [launch_height_baselines.py](examples/launch_height_baselines.py):\n launches HPO locally, tuning a simple script \n [train_height.py](examples/training_scripts/height_example/train_height.py) for several baselines \n* [launch_height_moasha.py](examples/launch_height_moasha.py):\n shows how to tune a script reporting multiple objectives with Multi-Objective Asynchronous Hyperband (MOASHA)\n* [launch_height_standalone_scheduler.py](examples/launch_height_standalone_scheduler.py):\n launches HPO locally with a custom scheduler that cuts any trial that is not\n in the top 80%\n* [launch_height_sagemaker_remotely.py](examples/launch_height_sagemaker_remotely.py):\n launches the HPO loop on SageMaker rather than on a local machine; trials can be executed either on\n the remote machine or distributed again as separate SageMaker training jobs. 
See \n [launch_height_sagemaker_remote_launcher.py](examples/launch_height_sagemaker_remote_launcher.py)\n for remote launching with the help of the RemoteTuner, also discussed in one of the FAQs.\n* [launch_height_sagemaker.py](examples/launch_height_sagemaker.py):\n launches HPO on SageMaker to tune a SageMaker PyTorch estimator\n* [launch_bayesopt_constrained.py](examples/launch_bayesopt_constrained.py):\n launches constrained Bayesian hyperparameter optimization\n* [launch_height_sagemaker_custom_image.py](examples/launch_height_sagemaker_custom_image.py):\n launches HPO on SageMaker to tune an entry point with a custom Docker image\n* [launch_plot_results.py](examples/launch_plot_results.py): shows how to plot\n results of an HPO experiment\n* [launch_tensorboard_example.py](examples/launch_tensorboard_example.py):\n shows how results can be visualized on the fly with TensorBoard\n* [launch_nasbench201_simulated.py](examples/launch_nasbench201_simulated.py):\n demonstrates simulation of experiments on a tabulated benchmark\n* [launch_fashionmnist.py](examples/launch_fashionmnist.py):\n launches HPO locally, tuning a multi-layer perceptron on Fashion MNIST. 
This\n employs an easy-to-use benchmark convention\n* [launch_huggingface_classification.py](examples/launch_huggingface_classification.py):\n launches HPO on SageMaker to tune a SageMaker Hugging Face estimator for sentiment classification\n* [launch_tuning_gluonts.py](examples/launch_tuning_gluonts.py):\n launches HPO locally to tune a gluon-ts time series forecasting algorithm\n* [launch_rl_tuning.py](examples/launch_rl_tuning.py):\n launches HPO locally to tune an RL algorithm on the CartPole environment\n* [launch_height_ray.py](examples/launch_height_ray.py):\n launches HPO locally with a [Ray Tune](https://docs.ray.io/en/master/tune/index.html)\n scheduler\n\n## Examples for Experimentation and Benchmarking\n\nYou will find many examples for experimentation and benchmarking in\n[benchmarking/examples/](benchmarking/examples/) and in\n[benchmarking/nursery/](benchmarking/nursery/).\n\n## FAQ and Tutorials\n\nYou can check our [FAQ](https://syne-tune.readthedocs.io/en/latest/faq.html) to\nlearn more about Syne Tune functionalities.\n\n* [Why should I use Syne Tune?](https://syne-tune.readthedocs.io/en/latest/faq.html#why-should-i-use-syne-tune)\n* [What are the different installations options supported?](https://syne-tune.readthedocs.io/en/latest/faq.html#what-are-the-different-installations-options-supported)\n* [How can I run on AWS and SageMaker?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-run-on-aws-and-sagemaker)\n* [What are the metrics reported by default when calling the `Reporter`?](https://syne-tune.readthedocs.io/en/latest/faq.html#what-are-the-metrics-reported-by-default-when-calling-the-reporter)\n* [How can I utilize multiple GPUs?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-utilize-multiple-gpus)\n* [What is the default mode when performing optimization?](https://syne-tune.readthedocs.io/en/latest/faq.html#what-is-the-default-mode-when-performing-optimization)\n* [How are trials evaluated on a local 
machine?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-are-trials-evaluated-on-a-local-machine)\n* [Where can I find the output of the tuning?](https://syne-tune.readthedocs.io/en/latest/faq.html#where-can-i-find-the-output-of-the-tuning)\n* [How can I change the default output folder where tuning results are stored?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-change-the-default-output-folder-where-tuning-results-are-stored)\n* [What does the output of the tuning contain?](https://syne-tune.readthedocs.io/en/latest/faq.html#what-does-the-output-of-the-tuning-contain)\n* [How can I enable trial checkpointing?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-enable-trial-checkpointing)\n* [How can I retrieve the best checkpoint obtained after tuning?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-retrieve-the-best-checkpoint-obtained-after-tuning)\n* [Which schedulers make use of checkpointing?](https://syne-tune.readthedocs.io/en/latest/faq.html#which-schedulers-make-use-of-checkpointing)\n* [Is the tuner checkpointed?](https://syne-tune.readthedocs.io/en/latest/faq.html#is-the-tuner-checkpointed)\n* [Where can I find the output of my trials?](https://syne-tune.readthedocs.io/en/latest/faq.html#where-can-i-find-the-output-of-my-trials)\n* [How can I plot the results of a tuning?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-plot-the-results-of-a-tuning)\n* [How can I specify additional tuning metadata?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-specify-additional-tuning-metadata)\n* [How do I append additional information to the results which are stored?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-do-i-append-additional-information-to-the-results-which-are-stored) \n* [I don\u2019t want to wait, how can I launch the tuning on a remote 
machine?](https://syne-tune.readthedocs.io/en/latest/faq.html#i-dont-want-to-wait-how-can-i-launch-the-tuning-on-a-remote-machine)\n* [How can I run many experiments in parallel?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-run-many-experiments-in-parallel)\n* [How can I access results after tuning remotely?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-access-results-after-tuning-remotely)\n* [How can I specify dependencies to remote launcher or when using the SageMaker backend?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-specify-dependencies-to-remote-launcher-or-when-using-the-sagemaker-backend)\n* [How can I benchmark different methods?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-benchmark-different-methods)\n* [What different schedulers do you support? What are the main differences between them?](https://syne-tune.readthedocs.io/en/latest/faq.html#what-different-schedulers-do-you-support-what-are-the-main-differences-between-them)\n* [How do I define the configuration space?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-do-i-define-the-configuration-space) \n* [How do I set arguments of multi-fidelity schedulers?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-do-i-set-arguments-of-multi-fidelity-schedulers)\n* [How can I visualize the progress of my tuning experiment with Tensorboard?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-visualize-the-progress-of-my-tuning-experiment-with-tensorboard)\n* [How can I add a new scheduler?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-add-a-new-scheduler)\n* [How can I add a new tabular or surrogate benchmark?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-add-a-new-tabular-or-surrogate-benchmark)\n* [How can I reduce delays in starting trials with the SageMaker 
backend?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-reduce-delays-in-starting-trials-with-the-sageMaker-backend)\n* [How can I pass lists or dictionaries to the training script?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-pass-lists-or-dictionaries-to-the-training-script)\n* [How can I write extra results for an experiment?](https://syne-tune.readthedocs.io/en/latest/faq.html#how-can-i-write-extra-results-for-an-experiment)\n\nDo you want to know more? Here are a number of tutorials.\n* [Basics of Syne Tune](https://syne-tune.readthedocs.io/en/latest/tutorials/basics/README.html)\n* [Choosing a Configuration Space](https://syne-tune.readthedocs.io/en/latest/search_space.html)\n* [Using the Built-in Schedulers](https://syne-tune.readthedocs.io/en/latest/schedulers.html)\n* [Multi-Fidelity Hyperparameter Optimization](https://syne-tune.readthedocs.io/en/latest/tutorials/multifidelity/README.html)\n* [Benchmarking in Syne Tune](https://syne-tune.readthedocs.io/en/latest/tutorials/benchmarking/README.html)\n* [Visualization of Results](https://syne-tune.readthedocs.io/en/latest/tutorials/visualization/README.html)\n* [Rapid Experimentation with Syne Tune](https://syne-tune.readthedocs.io/en/latest/tutorials/experimentation/README.html)\n* [How to Contribute a New Scheduler](https://syne-tune.readthedocs.io/en/latest/tutorials/developer/README.html)\n* [PASHA: Efficient HPO and NAS with Progressive Resource Allocation](https://syne-tune.readthedocs.io/en/latest/tutorials/pasha/pasha.html)\n* [Using Syne Tune for Transfer Learning](https://syne-tune.readthedocs.io/en/latest/tutorials/transfer_learning/transfer_learning.html)\n* [Distributed Hyperparameter Tuning: Finding the Right Model can be Fast and Fun](https://syne-tune.readthedocs.io/en/latest/tutorials/odsc_tutorial/README.html)\n\n## Blog Posts\n\n* [Run distributed hyperparameter and neural architecture tuning jobs with Syne 
Tune](https://aws.amazon.com/blogs/machine-learning/run-distributed-hyperparameter-and-neural-architecture-tuning-jobs-with-syne-tune/)\n* [Hyperparameter optimization for fine-tuning pre-trained transformer models from Hugging Face](https://aws.amazon.com/blogs/machine-learning/hyperparameter-optimization-for-fine-tuning-pre-trained-transformer-models-from-hugging-face/) [(notebook)](https://github.com/awslabs/syne-tune/blob/hf_blog_post/hf_blog_post/example_syne_tune_for_hf.ipynb)\n* [Learn Amazon Simple Storage Service transfer configuration with Syne Tune](https://aws.amazon.com/blogs/opensource/learn-amazon-simple-storage-service-transfer-configuration-with-syne-tune/) [(code)](https://github.com/aws-samples/syne-tune-s3-transfer)\n\n## Videos\n\n* [Martin Wistuba: Hyperparameter Optimization for the Impatient (PyData 2023)](https://www.youtube.com/watch?v=onX6fXzp9Yk)\n\n## Security\n\nSee [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.\n\n## Citing Syne Tune\n\nIf you use Syne Tune in a scientific publication, please cite the following paper:\n\n[\"Syne Tune: A Library for Large Scale Hyperparameter Tuning and Reproducible Research\"](https://openreview.net/forum?id=BVeGJ-THIg9&referrer=%5BAuthor%20Console%5D(%2Fgroup%3Fid%3Dautoml.cc%2FAutoML%2F2022%2FTrack%2FMain%2FAuthors%23your-submissions)) First Conference on Automated Machine Learning, 2022.\n\n\n```bibtex\n@inproceedings{\n salinas2022syne,\n title={Syne Tune: A Library for Large Scale Hyperparameter Tuning and Reproducible Research},\n author={David Salinas and Matthias Seeger and Aaron Klein and Valerio Perrone and Martin Wistuba and Cedric Archambeau},\n booktitle={International Conference on Automated Machine Learning, AutoML 2022},\n year={2022},\n url={https://proceedings.mlr.press/v188/salinas22a.html}\n}\n```\n\n## License\n\nThis project is licensed under the Apache-2.0 License.\n\n",
"bugtrack_url": null,
"license": "",
"summary": "Distributed Hyperparameter Optimization on SageMaker",
"version": "0.10.0",
"project_urls": null,
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "814882813f6d7a5deea2817947aeefe3de166e100c3d7dc7ed2a600a0beb31ed",
"md5": "f883a9fcb138afccd0c6c606561e8579",
"sha256": "29bb7d2c9c13f4b9c111d0f1c186dac768b6f0c29f0b2bac0e06ebd2bfcdcbdb"
},
"downloads": -1,
"filename": "syne_tune-0.10.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "f883a9fcb138afccd0c6c606561e8579",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": null,
"size": 750946,
"upload_time": "2023-11-08T12:29:01",
"upload_time_iso_8601": "2023-11-08T12:29:01.347035Z",
"url": "https://files.pythonhosted.org/packages/81/48/82813f6d7a5deea2817947aeefe3de166e100c3d7dc7ed2a600a0beb31ed/syne_tune-0.10.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "487a1605daa555aef4a67835cb7e2239e3f1e3ad2321fe1aa9e4d45bf956387a",
"md5": "a00ab4a4f5cc01da6cbfbec1e740f5ed",
"sha256": "aedeb6bc15ad37b667f3dd0590c8698db3159ae02dcecfa233eb58655ca00a4e"
},
"downloads": -1,
"filename": "syne_tune-0.10.0.tar.gz",
"has_sig": false,
"md5_digest": "a00ab4a4f5cc01da6cbfbec1e740f5ed",
"packagetype": "sdist",
"python_version": "source",
"requires_python": null,
"size": 515039,
"upload_time": "2023-11-08T12:29:04",
"upload_time_iso_8601": "2023-11-08T12:29:04.317499Z",
"url": "https://files.pythonhosted.org/packages/48/7a/1605daa555aef4a67835cb7e2239e3f1e3ad2321fe1aa9e4d45bf956387a/syne_tune-0.10.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2023-11-08 12:29:04",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "syne-tune"
}