tune

Name: tune
Version: 0.1.6
Home page: http://github.com/fugue-project/tune
Summary: An abstraction layer for hyper parameter tuning
Upload time: 2024-09-03 04:57:04
Author: Han Wang
Requires Python: >=3.8
License: Apache-2.0
Keywords: hyper parameter, hyperparameter, tuning, tune, tuner, optimzation

# Tune

[![Doc](https://readthedocs.org/projects/tune/badge)](https://tune.readthedocs.org)
[![PyPI version](https://badge.fury.io/py/tune.svg)](https://pypi.python.org/pypi/tune/)[![PyPI pyversions](https://img.shields.io/pypi/pyversions/tune.svg)](https://pypi.python.org/pypi/tune/)
[![PyPI license](https://img.shields.io/pypi/l/tune.svg)](https://pypi.python.org/pypi/tune/)
[![codecov](https://codecov.io/gh/fugue-project/tune/branch/master/graph/badge.svg?token=6AJPYFPJYT)](https://codecov.io/gh/fugue-project/tune)

[![Slack Status](https://img.shields.io/badge/slack-join_chat-white.svg?logo=slack&style=social)](https://join.slack.com/t/fugue-project/shared_invite/zt-jl0pcahu-KdlSOgi~fP50TZWmNxdWYQ)

Tune is an abstraction layer for general parameter tuning. It is built on [Fugue](https://github.com/fugue-project/fugue), so it can seamlessly run on any backend supported by Fugue, such as Spark, Dask, and local execution.

## Installation

```bash
pip install tune
```

It's recommended to also install Scikit-Learn (for tuning Scikit-Learn compatible models) and Hyperopt (to enable [Bayesian Optimization](https://en.wikipedia.org/wiki/Bayesian_optimization)):

```bash
pip install tune[hyperopt,sklearn]
```

## Quick Start

To get started quickly, please go through these tutorials on Kaggle:

1. [Search Space](https://www.kaggle.com/goodwanghan/tune-tutorials-01-seach-space) (see the sketch after this list)
2. [Non-iterative Problems](https://www.kaggle.com/goodwanghan/tune-tutorials-2-non-iterative-problems), such as Scikit-Learn model tuning
3. [Iterative Problems](https://www.kaggle.com/goodwanghan/tune-tutorials-3-iterative-problems), such as Keras model tuning
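
To give a taste of what the Search Space tutorial covers, here is a minimal sketch of a hybrid search space. It assumes the `Space`, `Grid`, and `Rand` primitives described in that tutorial; treat the exact names and behavior as assumptions to verify against the Tune docs.

```python
from tune import Space, Grid, Rand

# A single space can mix tuning strategies (assumed behavior):
# - Grid values are enumerated exhaustively (grid search)
# - Rand values are sampled randomly or driven by a level-2 optimizer
#   such as Hyperopt (Bayesian optimization)
# - plain values stay fixed across all trials
space = Space(
    max_depth=Grid(3, 5, 7),
    learning_rate=Rand(0.01, 0.3),
    n_estimators=200,
)
```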


## Design Philosophy

Tune does not follow Scikit-Learn's model selection APIs and does not provide a distributed backend for them. **We believe that parameter tuning is a general problem that is not only for machine learning**, so our abstractions are built from the ground up: the lower-level APIs do not assume the objective is a machine learning model, while the higher-level APIs are dedicated to solving specific problems, such as Scikit-Learn compatible model tuning and Keras model tuning.

Although we didn't base our solution on [HyperOpt](http://hyperopt.github.io/hyperopt/), [Optuna](https://optuna.org/), [Ray Tune](https://docs.ray.io/en/master/tune/index.html), [Nevergrad](https://github.com/facebookresearch/nevergrad), or similar frameworks, we are truly inspired by these wonderful solutions and their designs. We also integrate with many of them for deeper-level optimizations.

Tuning problems are never easy; here are our goals:

* Provide the simplest and most intuitive APIs for major tuning cases. We always start from real tuning cases, figure out the minimal requirements for each of them, and then determine the layers of abstraction. Read [this tutorial](https://www.kaggle.com/goodwanghan/tune-tutorials-2-non-iterative-problems) to see how minimal the interfaces can be.
* Be scale agnostic and platform agnostic. We want you to worry less about *distributed computing* and just focus on the tuning logic itself. Built on Fugue, Tune lets you develop your tuning process iteratively: you can test with small spaces on a local machine, then switch to larger spaces and run distributedly with no code change (see the sketch after this list). This effectively saves time and cost and makes the process fun and rewarding. And to run any tuning logic distributedly, you only need the core compute framework itself (Spark, Dask, etc.); you do not need a database, a queue service, or even an embedded cluster.
* Be highly extendable and flexible at the lower levels. For example:
    * you can extend at the Fugue level, for example by creating an execution engine for [Prefect](https://www.prefect.io/) to run the tuning jobs as a Prefect workflow
    * you can integrate third-party optimizers and use Tune just as a distributed orchestrator. We have integrated [HyperOpt](http://hyperopt.github.io/hyperopt/), and [Optuna](https://optuna.org/) and [Nevergrad](https://github.com/facebookresearch/nevergrad) are on the way
    * you can start external instances (e.g. EC2 instances) for different training subtasks to fully utilize your cloud
    * you can combine it with distributed training as long as you have enough compute resources
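
Below is a hypothetical sketch of that local-to-distributed workflow. The function and parameter names (`suggest_for_noniterative_objective`, `top_n`, `execution_engine`) reflect our reading of Tune's API and should be checked against the current docs; the point is that only the execution engine changes between the two runs.

```python
from tune import Space, Grid, suggest_for_noniterative_objective

# Any plain function of the tuned parameters can serve as a
# non-iterative objective; it returns the metric value for one trial.
def objective(a: float, b: float) -> float:
    return a ** 2 + (b - 1) ** 2

space = Space(a=Grid(-1, 0, 1), b=Grid(-2, 0, 2))

# Quick local run while developing with a small space.
report = suggest_for_noniterative_objective(objective, space, top_n=1)

# The same call, distributed on Spark, once the space grows
# (assumes PySpark is installed and configured):
# report = suggest_for_noniterative_objective(
#     objective, space, top_n=1, execution_engine="spark"
# )
```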

## Focuses

Here are our current focuses:

* A flexible space design that can describe a hybrid space of grid search, random search, and second-level optimizations such as Bayesian optimization (see the sketch after this list)
* Integrate with 3rd party tuning frameworks
* Create generalized and distributed versions of [Successive Halving](https://scikit-learn.org/stable/auto_examples/model_selection/plot_successive_halving_iterations.html), [Hyperband](https://arxiv.org/abs/1603.06560) and [Asynchronous Successive Halving](https://arxiv.org/abs/1810.05934).
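
As an illustration of that hybrid-space idea, here is a hedged sketch that mixes grid and random/Bayesian dimensions and unions two model families into one space. The expression classes and the `+` operator are our reading of the Search Space tutorial; treat the exact names as assumptions.

```python
from tune import Space, Grid, Rand, RandInt

# Two model families, each with its own mix of grid and random/Bayesian
# dimensions (the latter can be driven by a third-party optimizer).
xgb = Space(model="xgboost", max_depth=Grid(3, 5), eta=Rand(0.01, 0.3))
lgbm = Space(model="lightgbm", num_leaves=RandInt(16, 256))

# Union of the sub-spaces: a single tuning run searches both families.
space = xgb + lgbm
```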


## Collaboration

We are looking for collaborators. If you are interested, please let us know by joining our [Slack channel](https://join.slack.com/t/fugue-project/shared_invite/zt-jl0pcahu-KdlSOgi~fP50TZWmNxdWYQ).


            
