| Name | trac-optimizer |
| Version | 0.0.2 |
| download | |
| home_page | None |
| Summary | Official implementation of the TRAC optimizer in Fast TRAC: A Parameter-Free Optimizer for Lifelong Reinforcement Learning. |
| upload_time | 2024-08-19 23:04:45 |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.8 |
| license | None |
| keywords | |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# TRAC: Adaptive Parameter-free Optimization ⚡️
[Project page](https://computationalrobotics.seas.harvard.edu/TRAC/) · [arXiv](https://arxiv.org/abs/2405.16642) · [Colab](https://colab.research.google.com/drive/1c5OxMa5fiSVnl5w6J7flrjNUteUkp6BV?usp=sharing) · [PyPI](https://pypi.org/project/trac-optimizer/)
This repository is the official implementation of the **TRAC** optimizer in ***Fast TRAC: A Parameter-Free Optimizer for Lifelong Reinforcement Learning***.
How can you _quickly_ adapt to new tasks or distribution shifts? Without knowing when or how much to adapt? And without _ANY_ tuning?
🤔💭
Well, we suggest you get on the fast **TRAC** 🏎️💨.
**TRAC** is a parameter-free optimizer for continual environments inspired by [online convex optimization](https://arxiv.org/abs/1912.13213) and uses [discounted adaptive online prediction](https://arxiv.org/abs/2402.02720).
**Update [08/20/24]**: _TRAC is now supported for JAX and Optax!_
## Implement with only a one-line change in both PyTorch and JAX!
Like other [meta-tuners](https://openreview.net/pdf?id=uhKtQMn21D), TRAC works with any of your continual, fine-tuning, or lifelong experiments with just a one-line change.
```bash
pip install trac-optimizer
```
**PyTorch**
```python
import torch
from trac import start_trac

# original optimizer class and learning rate
optimizer = torch.optim.Adam
lr = 0.001
# wrap the base optimizer class with TRAC, then construct it as usual
# (the log file path comes first: a keyword argument cannot be
# followed by a positional one)
optimizer = start_trac('logs/trac.text', optimizer)(model.parameters(), lr=lr)
```
**JAX**
```python
import optax
from trac_optimizer.experimental.jax.trac import start_trac

# original optimizer
optimizer = optax.adam(1e-3)
optimizer = start_trac(optimizer)
```
After this modification, you can continue using your optimizer methods exactly as you did before. Whether it's calling `optimizer.step()` to update your model's parameters or `optimizer.zero_grad()` to clear gradients, everything stays the same. TRAC integrates into your existing workflow without any additional overhead.
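To make the "same interface" point concrete, here is a minimal, self-contained sketch (not TRAC's actual internals) of how a meta-tuner wrapper can keep the familiar `step()`/`zero_grad()` calls by delegating them to the base optimizer; `WrappedOptimizer` and `ToyOptimizer` are illustrative names, not part of the package:

```python
# Illustrative only: a wrapper that preserves the base optimizer's
# interface, so the training loop needs no changes beyond the swap.
class WrappedOptimizer:
    def __init__(self, inner):
        self.inner = inner      # the user's original optimizer
        self.steps_taken = 0    # extra bookkeeping a meta-tuner might add

    def step(self):
        # a real meta-tuner would rescale the update here before delegating
        self.steps_taken += 1
        return self.inner.step()

    def zero_grad(self):
        return self.inner.zero_grad()


class ToyOptimizer:
    """Stand-in for torch.optim.Adam that just records calls."""
    def __init__(self):
        self.calls = []

    def step(self):
        self.calls.append("step")

    def zero_grad(self):
        self.calls.append("zero_grad")


opt = WrappedOptimizer(ToyOptimizer())
opt.zero_grad()
opt.step()
print(opt.inner.calls)    # ['zero_grad', 'step'] -- calls pass straight through
print(opt.steps_taken)    # 1
```

The wrapper adds its own state but forwards every call unchanged, which is why the surrounding training loop does not need to know TRAC is there.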
## Control Experiments
We recommend running ``main.ipynb`` in Google Colab, which requires no setup and makes it easy to get started with our control experiments. To run locally, install the necessary dependencies with:
```bash
pip install -r requirements.txt
```
## Vision-based RL Experiments
Our vision-based experiments for [Procgen](https://openai.com/index/procgen-benchmark/) and [Atari](https://www.gymlibrary.dev/environments/atari/index.html) are hosted in the `vision_exp` directory, which is based on [this Procgen PyTorch implementation](https://github.com/joonleesky/train-procgen-pytorch).
To launch an experiment with the default configuration in the Procgen "starpilot" environment, use the command below. To switch to another game suite, such as Atari, set `--exp_name="atari"` instead:
```bash
python vision_exp/train.py --exp_name="procgen" --env_name="starpilot" --optimizer="TRAC" --warmstart_step=0
```
Raw data
{
"_id": null,
"home_page": null,
"name": "trac-optimizer",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.8",
"maintainer_email": null,
"keywords": null,
"author": null,
"author_email": "Aneesh Muppidi <aneeshmuppidi@college.harvard.edu>",
"download_url": "https://files.pythonhosted.org/packages/b7/8a/54bac599ea3ca707ae04487220389a7e4173b4df9a4370a727f338bc4dc3/trac_optimizer-0.0.2.tar.gz",
"platform": null,
"bugtrack_url": null,
"license": null,
"summary": "Official implementation of the TRAC optimizer in Fast TRAC: A Parameter-Free Optimizer for Lifelong Reinforcement Learning.",
"version": "0.0.2",
"project_urls": {
"Homepage": "https://computationalrobotics.seas.harvard.edu/TRAC/",
"Issues": "https://github.com/ComputationalRobotics/TRAC/issues"
},
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "d7510a207947ce80b875f46eaa17b63ab4b30fa7228321ebc3257440f123cd7a",
"md5": "6fd2021cbdfc2fc59b74f2dfd4aa0c0a",
"sha256": "40114635764d77b2ac84f3ed0542df34f8df7c4502d99d6ce137170acf073c30"
},
"downloads": -1,
"filename": "trac_optimizer-0.0.2-py3-none-any.whl",
"has_sig": false,
"md5_digest": "6fd2021cbdfc2fc59b74f2dfd4aa0c0a",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.8",
"size": 8632,
"upload_time": "2024-08-19T23:04:43",
"upload_time_iso_8601": "2024-08-19T23:04:43.840842Z",
"url": "https://files.pythonhosted.org/packages/d7/51/0a207947ce80b875f46eaa17b63ab4b30fa7228321ebc3257440f123cd7a/trac_optimizer-0.0.2-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "b78a54bac599ea3ca707ae04487220389a7e4173b4df9a4370a727f338bc4dc3",
"md5": "fd36e1d4d0478e1cbb5c42ed8a70c4df",
"sha256": "7456088a5e680981a24b7cf8a40051ead7b9691c3872d1c584158fcd110a71f7"
},
"downloads": -1,
"filename": "trac_optimizer-0.0.2.tar.gz",
"has_sig": false,
"md5_digest": "fd36e1d4d0478e1cbb5c42ed8a70c4df",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 7079,
"upload_time": "2024-08-19T23:04:45",
"upload_time_iso_8601": "2024-08-19T23:04:45.735341Z",
"url": "https://files.pythonhosted.org/packages/b7/8a/54bac599ea3ca707ae04487220389a7e4173b4df9a4370a727f338bc4dc3/trac_optimizer-0.0.2.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-08-19 23:04:45",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "ComputationalRobotics",
"github_project": "TRAC",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"requirements": [],
"lcname": "trac-optimizer"
}