# jahs-bench

- Name: jahs-bench
- Version: 1.1.0
- Home page: https://github.com/automl/jahs_bench
- Summary: The first collection of surrogate benchmarks for Joint Architecture and Hyperparameter Search.
- Upload time: 2023-03-22 14:52:02
- Author: Archit Bansal
- Requires Python: >=3.7.1,<3.11
- License: MIT
- Keywords: Joint Architecture and Hyperparameter Search, Neural Architecture Search, Hyperparameter Optimization, Benchmark, Deep Learning

# JAHS-Bench-201

The first collection of surrogate benchmarks for Joint Architecture and Hyperparameter Search (JAHS), built to also support and
facilitate research on multi-objective, cost-aware and (multi) multi-fidelity optimization algorithms.


![Python versions](https://img.shields.io/badge/python-3.7%20%7C%203.8%20%7C%203.9%20%7C%203.10-informational)
[![License](https://img.shields.io/badge/license-MIT-informational)](LICENSE)

Please see our [documentation here](https://automl.github.io/jahs_bench_201/). Precise details about the data collection and surrogate creation process, as well as our experiments, can be found in the associated [publication](https://openreview.net/forum?id=_HLcjaVlqJ).


## Installation

Using pip:

```bash
pip install jahs-bench
```

Optionally, you can download the data required to use the surrogate benchmark ahead of time with
```bash
python -m jahs_bench.download --target surrogates
```
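
Alternatively, the same data can be fetched lazily from Python: constructing a benchmark with `download=True` (as in the evaluation example further below) downloads the required files for that task on first use. A minimal sketch:

```python
import jahs_bench

# Lazy alternative to the CLI download above: with download=True, the surrogate
# data needed for this task is fetched on first use if it is not already
# present locally.
benchmark = jahs_bench.Benchmark(task="cifar10", download=True)
```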

To test whether the installation was successful, you can, for example, run a minimal example with
```bash
python -m jahs_bench_examples.minimal
```
This should randomly sample a configuration and display both the sampled configuration and the result of querying the
surrogate for that configuration. Note: we have recently discovered that XGBoost, the library used for our surrogate models, can suffer from incompatibility issues on macOS. Users who run into such an issue may consult [this discussion](https://github.com/automl/jahs_bench_201/issues/6) for details.

## Using the Benchmark

### Creating Configurations

Configurations in our Joint Architecture and Hyperparameter Search (JAHS) space are represented as dictionaries, e.g.:

```python
config = {
    'Optimizer': 'SGD',
    'LearningRate': 0.1,
    'WeightDecay': 5e-05,
    'Activation': 'Mish',
    'TrivialAugment': False,
    'Op1': 4,
    'Op2': 1,
    'Op3': 2,
    'Op4': 0,
    'Op5': 2,
    'Op6': 1,
    'N': 5,
    'W': 16,
    'Resolution': 1.0,
}
```

For a full description of the search space and configurations, see our [documentation](https://automl.github.io/jahs_bench_201/search_space).
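
As a quick sanity check before querying the benchmark, one can verify that a hand-written configuration covers exactly the keys shown above. This is a minimal sketch in plain Python that does not rely on any `jahs_bench` API; the key set is taken directly from the example dictionary.

```python
# Keys from the example configuration above; plain-Python sanity check,
# independent of the jahs_bench API.
REQUIRED_KEYS = {
    "Optimizer", "LearningRate", "WeightDecay", "Activation", "TrivialAugment",
    "Op1", "Op2", "Op3", "Op4", "Op5", "Op6", "N", "W", "Resolution",
}

def check_config(config: dict) -> None:
    """Raise if the configuration is missing keys or contains unexpected ones."""
    missing = REQUIRED_KEYS - config.keys()
    extra = config.keys() - REQUIRED_KEYS
    if missing or extra:
        raise ValueError(
            f"missing keys: {sorted(missing)}, unexpected keys: {sorted(extra)}"
        )

check_config(config)  # passes for the example dictionary above
```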


### Evaluating Configurations

```python
import jahs_bench

benchmark = jahs_bench.Benchmark(task="cifar10", download=True)

# Query a random configuration
config = benchmark.sample_config()
results = benchmark(config, nepochs=200)

# Display the outputs
print(f"Config: {config}")  # A dict
print(f"Result: {results}")  # A dict
```
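
Building on the two calls above, a simple random-search loop might look as follows. This is a sketch under the assumption, based on the project documentation, that `results` maps epoch numbers to metric dictionaries containing a `"valid-acc"` entry; if your installed version reports different metric names, print `results` once and adapt the key.

```python
import jahs_bench

# Random search over the surrogate benchmark, using only sample_config() and
# the benchmark call shown above.
benchmark = jahs_bench.Benchmark(task="cifar10", download=True)

best_config, best_acc = None, float("-inf")
for _ in range(10):
    config = benchmark.sample_config()
    results = benchmark(config, nepochs=200)
    # Assumption: results is keyed by epoch and each entry reports "valid-acc".
    final_epoch = max(results)
    acc = results[final_epoch]["valid-acc"]
    if acc > best_acc:
        best_config, best_acc = config, acc

print(f"Best of 10 random configurations: valid-acc = {best_acc:.2f}")
print(best_config)
```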


### More Evaluation Options

Our benchmark's API lets users query a surrogate model (the default), query the tables of recorded performance data, or train a
configuration from our search space from scratch using the same pipeline that was used to generate our benchmark data.
Note that the latter functionality requires installing `jahs_bench_201` with the
optional `data_creation` component and its dependencies. The relevant data can be downloaded automatically by
our API. See our [documentation](https://automl.github.io/jahs_bench_201/usage) for details.
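
For reference, the three modes are selected when constructing the benchmark. The sketch below follows the linked usage documentation, where a `kind` argument chooses between the surrogate (`"surrogate"`, the default), the performance tables (`"table"`), and live training (`"live"`); treat the exact argument values as an assumption and check the documentation for your installed version.

```python
import jahs_bench

# Surrogate queries (the default mode described above).
surrogate = jahs_bench.Benchmark(task="cifar10", kind="surrogate", download=True)

# Look-ups in the recorded performance tables instead of the surrogate.
tables = jahs_bench.Benchmark(task="cifar10", kind="table", download=True)

# kind="live" would train the configuration from scratch and requires the
# optional data_creation component mentioned above.

config = surrogate.sample_config()
print(surrogate(config, nepochs=200))
```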

## Benchmark Data

We provide [documentation for the performance dataset](https://automl.github.io/jahs_bench_201/performance_dataset) used to train our surrogate models and [further information on our surrogate models](https://automl.github.io/jahs_bench_201/surrogate).


## Experiments and Evaluation Protocol

See [our experiments repository](https://github.com/automl/jahs_bench_201_experiments) and our [documentation](https://automl.github.io/jahs_bench_201/evaluation_protocol).

## Leaderboards

We maintain [leaderboards](https://automl.github.io/jahs_bench_201/leaderboards) for several optimization tasks and algorithmic frameworks.

            
