- Name: tune
- Version: 0.1.5
- Summary: An abstraction layer for hyper parameter tuning
- Home page: http://github.com/fugue-project/tune
- Author: Han Wang
- License: Apache-2.0
- Requires Python: >=3.6
- Keywords: hyper parameter, hyperparameter, tuning, tune, tuner, optimzation
- Upload time: 2023-03-19 23:42:15
# Tune

[![Doc](https://readthedocs.org/projects/tune/badge)](https://tune.readthedocs.org)
[![PyPI version](https://badge.fury.io/py/tune.svg)](https://pypi.python.org/pypi/tune/)[![PyPI pyversions](https://img.shields.io/pypi/pyversions/tune.svg)](https://pypi.python.org/pypi/tune/)
[![PyPI license](https://img.shields.io/pypi/l/tune.svg)](https://pypi.python.org/pypi/tune/)
[![codecov](https://codecov.io/gh/fugue-project/tune/branch/master/graph/badge.svg?token=6AJPYFPJYT)](https://codecov.io/gh/fugue-project/tune)

[![Slack Status](https://img.shields.io/badge/slack-join_chat-white.svg?logo=slack&style=social)](https://join.slack.com/t/fugue-project/shared_invite/zt-jl0pcahu-KdlSOgi~fP50TZWmNxdWYQ)

Tune is an abstraction layer for general parameter tuning. It is built on [Fugue](https://github.com/fugue-project/fugue) so it can seamlessly run on any backend supported by Fugue, such as Spark, Dask and local.

## Installation

```bash
pip install tune
```

It's recommended to also install Scikit-Learn (to tune all compatible models) and Hyperopt (to enable [Bayesian Optimization](https://en.wikipedia.org/wiki/Bayesian_optimization)):

```bash
pip install "tune[hyperopt,sklearn]"
```

## Quick Start

To get started quickly, please go through these tutorials on Kaggle:

1. [Search Space](https://www.kaggle.com/goodwanghan/tune-tutorials-01-seach-space)
2. [Non-iterative Problems](https://www.kaggle.com/goodwanghan/tune-tutorials-2-non-iterative-problems), such as Scikit-Learn model tuning
3. [Iterative Problems](https://www.kaggle.com/goodwanghan/tune-tutorials-3-iterative-problems), such as Keras model tuning
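To make the search-space idea concrete before diving into the tutorials, here is a pure-Python sketch (this is *not* Tune's actual API; the function and parameter names are illustrative) of a hybrid space that crosses grid dimensions with randomly sampled dimensions:

```python
import itertools
import random

def hybrid_space(grid, random_dims, n_samples, seed=0):
    """Illustrative sketch: yield one parameter dict per trial.
    Every grid combination is paired with n_samples random draws
    of the remaining (continuous) dimensions."""
    rng = random.Random(seed)
    keys, values = zip(*grid.items())
    for combo in itertools.product(*values):
        for _ in range(n_samples):
            params = dict(zip(keys, combo))
            for name, (low, high) in random_dims.items():
                params[name] = rng.uniform(low, high)
            yield params

trials = list(hybrid_space(
    grid={"model": ["rf", "gbdt"], "max_depth": [3, 5]},
    random_dims={"learning_rate": (0.01, 0.3)},
    n_samples=2,
))
# 2 models x 2 depths x 2 random draws = 8 trials
```

Tune's real `Space` abstraction (covered in the first tutorial) expresses the same idea declaratively and composes with distributed execution.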


## Design Philosophy

Tune does not follow Scikit-Learn's model selection APIs and does not provide a distributed backend for them. **We believe that parameter tuning is a general problem that is not limited to machine learning**, so our abstractions are built from the ground up: the lower-level APIs do not assume the objective is a machine learning model, while the higher-level APIs are dedicated to solving specific problems, such as Scikit-Learn compatible model tuning and Keras model tuning.

Although we didn't base our solution on any of [HyperOpt](http://hyperopt.github.io/hyperopt/), [Optuna](https://optuna.org/), [Ray Tune](https://docs.ray.io/en/master/tune/index.html), or [Nevergrad](https://github.com/facebookresearch/nevergrad), we are truly inspired by these wonderful solutions and their designs. We have also integrated with many of them for deeper optimizations.

Tuning problems are never easy; here are our goals:

* Provide the simplest and most intuitive APIs for major tuning cases. We always start from real tuning cases, figure out the minimal requirements for each of them, and then determine the layers of abstraction. Reading [this tutorial](https://www.kaggle.com/goodwanghan/tune-tutorials-2-non-iterative-problems), you can see how minimal the interfaces can be.
* Be scale agnostic and platform agnostic. We want you to worry less about *distributed computing* and focus on the tuning logic itself. Built on Fugue, Tune lets you develop your tuning process iteratively: you can test with small spaces on a local machine, then switch to larger spaces and run distributedly with no code change. This effectively saves time and cost and makes the process fun and rewarding. To run any tuning logic distributedly, you only need the core framework itself (Spark, Dask, etc.); you do not need a database, a queue service, or even an embedded cluster.
* Be highly extensible and flexible at the lower levels. For example:
    * you can extend at the Fugue level, for example by creating an execution engine for [Prefect](https://www.prefect.io/) to run the tuning jobs as a Prefect workflow
    * you can integrate third-party optimizers and use Tune just as a distributed orchestrator. We have integrated [HyperOpt](http://hyperopt.github.io/hyperopt/), and [Optuna](https://optuna.org/) and [Nevergrad](https://github.com/facebookresearch/nevergrad) are on the way.
    * you can start external instances (e.g. EC2 instances) for different training subtasks to fully utilize your cloud
    * you can combine Tune with distributed training as long as you have enough compute resources

## Focuses

Here are our current focuses:

* A flexible search space design that can describe a hybrid space of grid search, random search, and second-level optimization such as Bayesian optimization
* Integration with third-party tuning frameworks
* Create generalized and distributed versions of [Successive Halving](https://scikit-learn.org/stable/auto_examples/model_selection/plot_successive_halving_iterations.html), [Hyperband](https://arxiv.org/abs/1603.06560) and [Asynchronous Successive Halving](https://arxiv.org/abs/1810.05934).
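As background for the last focus, the core loop of Successive Halving can be sketched in a few lines of plain Python (this is a simplified illustration under toy assumptions, not Tune's distributed implementation): start many configurations on a small budget, keep the best fraction, multiply the budget, and repeat.

```python
import random

def successive_halving(configs, evaluate, budget=1, eta=2):
    """Simplified Successive Halving: repeatedly evaluate all surviving
    configurations at the current budget, keep the top 1/eta fraction
    (lower score is better), and grow the budget by eta."""
    while len(configs) > 1:
        scored = [(evaluate(c, budget), c) for c in configs]
        scored.sort(key=lambda t: t[0])
        configs = [c for _, c in scored[: max(1, len(scored) // eta)]]
        budget *= eta
    return configs[0]

# Toy objective: true quality is the distance of x from 0.5, with
# evaluation noise that shrinks as the budget grows.
def evaluate(config, budget):
    rng = random.Random((int(config["x"] * 10), budget))
    noise = rng.uniform(-1.0, 1.0) / budget
    return abs(config["x"] - 0.5) + noise

configs = [{"x": i / 10} for i in range(10)]
best = successive_halving(configs, evaluate)
```

Hyperband layers a bracket search over this loop to trade off the number of configurations against the per-configuration budget, and the asynchronous variant promotes configurations without waiting for a full rung to finish.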


## Collaboration

We are looking for collaborators; if you are interested, please let us know by joining our [Slack channel](https://join.slack.com/t/fugue-project/shared_invite/zt-jl0pcahu-KdlSOgi~fP50TZWmNxdWYQ).


            
