hyperfetch

Name: hyperfetch
Version: 1.1.1
Summary: A tool to optimize and post hyperparameters for your reinforcement learning application. Currently available on Linux and macOS.
Upload time: 2023-05-10 15:37:32
Author: Karoline Sund Wahl
Requires Python: <3.11.0,>=3.9
License: Copyright 2023 Karoline Sund Wahl. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Keywords: reinforcement learning, hyperparameters, replication
# HyperFetch

#### Prerequisites 
The package has been successfully tested on Linux and macOS.\
The prerequisites are:
- pip==22.2.2
- setuptools==64.0.3
- swig==4.0.2
- box2d-py==2.3.8

NB: If you are able to install box2d-py==2.3.8 on a Windows machine, the installation will likely succeed there as well.
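
If these prerequisites are not already present in your environment, they can usually be installed with pip. This is only a sketch assuming the pinned versions above are available for your platform (the swig package on PyPI provides the SWIG tool that box2d-py needs at build time):

        pip install pip==22.2.2 setuptools==64.0.3
        pip install swig==4.0.2
        pip install box2d-py==2.3.8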

#### HyperFetch is a tool consisting of:
- A [website](https://www.hyperfetch.online/) for fetching hyperparameters that are tuned by others
- This pip-module for tuning and saving hyperparameters yourself 

#### The intention of HyperFetch is to:
- Make recreation of existing projects easier within the 
  reinforcement learning research community.
- Allow beginners to train and implement their own reinforcement 
  learning models more easily by abstracting away the advanced tuning step.

#### The tool is expected to aid in decreasing CO2 emissions related to tuning hyperparameters when training RL models. 
By posting tuned [algorithm x environment] combinations to the website, it is expected that:
- Developers and students can access hyperparameters that have already been optimally tuned instead of having to tune them themselves.
- Researchers can filter by project on the website and access hyperparameters they wish to recreate/replicate for their own research.
- Transparency related to emissions will become more mainstream within the field.


## Content
* 1.0 [Using this package](#using-this-package)
* 2.0 [Examples of use](#example-1-tuning--posting)


## Links
Repository: [HyperFetch Github](https://github.com/karolisw/hyperFetch)\
Documentation: [Configuration docs](https://www.hyperfetch.online/configDocs)\
Website: [hyperfetch.online](https://www.hyperfetch.online/)
## Using this package
To use this package, please do the following:

1. Create a virtual environment in your favorite IDE. 

   Install virtualenv if you haven't already:
   
           pip install virtualenv
   
   Create a virtual environment:
   
           virtualenv [some_name]

   Activate the virtual environment this way (Linux/macOS):
   
            source [some_name]/bin/activate
2. Install the [prerequisites](#prerequisites).
3. Install the pip module (an optional installation check follows below):

        pip install hyperfetch
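
To verify the installation, you can check that the package is importable and that the `tune` and `save` console commands used in the examples below are available. This is only an optional sanity check:

        pip show hyperfetch
        python -c "import hyperfetch"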


         
## Example 1: tuning + posting 
Here is a quick example of how to tune and run PPO in the Pendulum-v1 environment inside your new or existing project:

### 1. Create configuration file using YAML (minimal example)
If you are unsure of which configuration values to use, see the [config docs](https://www.hyperfetch.online/configDocs)

```yaml
# Required (example values)
alg: ppo
env: Pendulum-v1
project_name: some_project
git_link: github.com/user/some_project

# Some other useful parameters
sampler: tpe
tuner: median
n_trials: 20
log_folder: logs
```

### 2. Tune and post using a Python file or the command line

#### Python file:
```python
from hyperfetch import tuning

# Path to your YAML config file 
config_path = "../some_folder/config_name.yml"

# Writes each trial's best hyperparameters to log folder
tuning.tune(config_path)
```

#### Command line:
If you are in the same directory as the config file and it is called "config.yml":

      tune config.yml

If the config file is in another directory and it is called "config.yml":

      tune path/to/config.yml 

#### Enjoy your hyperparameters!

## Example 2: Posting hyperparameters that are not tuned by HyperFetch

### Just a reminder:
The pip package must be installed before this can be done.
For details, see [Using this package](#using-this-package).

### 1. Create configuration YAML file 
If you are unsure of which configuration values to use, see the [config docs](https://www.hyperfetch.online/configDocs)

```yaml
# Required (example values)
alg: dqn
env: Pendulum-v1
project_name: some_project
git_link: github.com/user/some_project
hyperparameters: # These depend on the choice of algorithm
  batch_size: 256
  buffer_size: 50000
  exploration_final_eps: 0.10717928118310233
  exploration_fraction: 0.3318973226098944
  gamma: 0.9
  learning_rate: 0.0002126832542803243
  learning_starts: 10000
  net_arch: medium
  subsample_steps: 4
  target_update_interval: 1000
  train_freq: 8
  
# Not required (but appreciated if you have the data)
CO2_emissions: 0.78 # kg
energy_consumed: 3.27 # kWh
cpu_model: 12th Gen Intel(R) Core(TM) i5-12500H
gpu_model: NVIDIA GeForce RTX 3070
total_time: 0:04:16.842800 # Format: H:MM:SS.microseconds
```
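
If you did not record the optional emission fields during training, one way to estimate them is a third-party tracker such as codecarbon. The sketch below is only an assumption-laden example and not part of HyperFetch; it shows how values for the fields above might be obtained:

```python
# Hedged sketch: estimating CO2 emissions and energy with the third-party
# codecarbon package (pip install codecarbon). Not part of HyperFetch.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()

# ... run your RL training here ...

emissions_kg = tracker.stop()  # estimated CO2 emissions in kg
print(f"CO2_emissions: {emissions_kg} kg")
# codecarbon also writes an emissions.csv file containing the energy consumed
# (kWh) and hardware details, which can be copied into the YAML fields above.
```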

### 2. Post using a Python file or the command line

#### Python file:
```python
from hyperfetch import tuning

# Path to your YAML config file 
config_path = "../some_folder/config_name.yml"

# Posts the hyperparameters in the config file to HyperFetch
tuning.save(config_path)
```

#### Command line:
If you are in the same directory as the config file and it is called "config.yml":

      save config.yml

            
