SAC-pytorch


Name: SAC-pytorch
Version: 0.0.9
Summary: Soft Actor Critic - Pytorch
Author: Phil Wang <lucidrains@gmail.com>
Repository: https://github.com/lucidrains/SAC-pytorch
Requires Python: >=3.9
License: MIT License, Copyright (c) 2024 Phil Wang
Keywords: artificial intelligence, deep learning, reinforcement learning, soft actor critic
Upload time: 2024-11-19 22:09:42
Requirements: No requirements were recorded.
## SAC (Soft Actor Critic) - Pytorch (wip)

Implementation of Soft Actor Critic and some of its improvements in Pytorch. Interest in this came from watching <a href="https://www.youtube.com/watch?v=17NrtKHdPDw">this lecture</a>.
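
For quick reference, the algorithm being implemented maximizes the entropy-regularized objective from Haarnoja et al. (cited below), with each critic regressed toward an entropy-augmented soft Bellman target. A minimal statement of both, independent of this library's API:

```latex
% maximum entropy objective maximized by the actor (temperature alpha weights the entropy bonus)
J(\pi) = \sum_t \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}
    \left[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right]

% soft Bellman target each critic Q_i regresses toward
% (\bar\theta_i are target-network parameters, min taken over the critic ensemble)
y(r, s', d) = r + \gamma \, (1 - d) \,
    \mathbb{E}_{a' \sim \pi(\cdot \mid s')}
    \left[ \min_i Q_{\bar\theta_i}(s', a') - \alpha \log \pi(a' \mid s') \right]
```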

```python
import torch
from SAC_pytorch import (
  SAC,
  Actor,
  Critic,
  MultipleCritics
)

# critics can be constructed directly; each one scores a 5-dim state,
# 2 continuous actions, and two discrete action heads with 5 choices each

critic1 = Critic(
  dim_state = 5,
  num_cont_actions = 2,
  num_discrete_actions = (5, 5),
  num_quantiles = 3
)

critic2 = Critic(
  dim_state = 5,
  num_cont_actions = 2,
  num_discrete_actions = (5, 5),
  num_quantiles = 3
)

# the actor emits both continuous and discrete actions for the same state

actor = Actor(
  dim_state = 5,
  num_cont_actions = 2,
  num_discrete_actions = (5, 5)
)

# the SAC wrapper can also build its critics from keyword dicts

agent = SAC(
  actor = actor,
  critics = [
    dict(dim_state = 5, num_cont_actions = 2, num_discrete_actions = (5, 5)),
    dict(dim_state = 5, num_cont_actions = 2, num_discrete_actions = (5, 5)),
  ],
  quantiled_critics = False
)

# sample actions and their log probabilities for a batch of 3 states

state = torch.randn(3, 5)
cont_actions, discrete, cont_logprob, discrete_logprob = actor(state, sample = True)

# one learning step on a batch of transitions

agent(
  states = state,
  cont_actions = cont_actions,
  discrete_actions = discrete,
  rewards = torch.randn(3),
  done = torch.zeros(3).bool(),
  next_states = state + 1
)
```
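
The snippet above performs a single update on hand-made tensors. Below is a minimal sketch of how it could be driven by an environment loop, continuing from the `actor` and `agent` defined above; the environment stand-ins, replay buffer, and batch size are illustrative assumptions, not part of this library's API.

```python
import random
from collections import deque

import torch

# hypothetical replay buffer; any FIFO container of transition tuples would do
buffer = deque(maxlen = 100_000)

def sample_batch(batch_size = 32):
    # draw a random minibatch and stack it into batched tensors
    batch = random.sample(list(buffer), batch_size)
    states, cont, disc, rewards, dones, next_states = zip(*batch)
    return (
        torch.stack(states),
        torch.stack(cont),
        torch.stack(disc),
        torch.tensor(rewards),
        torch.tensor(dones),
        torch.stack(next_states)
    )

state = torch.randn(5)  # stand-in for env.reset()

for step in range(10_000):
    # act without tracking gradients while collecting experience
    with torch.no_grad():
        cont_actions, discrete, *_ = actor(state.unsqueeze(0), sample = True)

    # stand-ins for env.step(action) - swap in a real environment here
    next_state = torch.randn(5)
    reward, done = 0., False

    buffer.append((state, cont_actions.squeeze(0), discrete.squeeze(0), reward, done, next_state))
    state = torch.randn(5) if done else next_state  # reset stand-in on episode end

    # one SAC update per environment step once enough transitions exist
    if len(buffer) >= 32:
        states, cont, disc, rewards, dones, next_states = sample_batch()
        agent(
            states = states,
            cont_actions = cont,
            discrete_actions = disc,
            rewards = rewards,
            done = dones.bool(),
            next_states = next_states
        )
```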

## Citations

```bibtex
@article{Haarnoja2018SoftAA,
    title   = {Soft Actor-Critic Algorithms and Applications},
    author  = {Tuomas Haarnoja and Aurick Zhou and Kristian Hartikainen and G. Tucker and Sehoon Ha and Jie Tan and Vikash Kumar and Henry Zhu and Abhishek Gupta and P. Abbeel and Sergey Levine},
    journal = {ArXiv},
    year    = {2018},
    volume  = {abs/1812.05905},
    url     = {https://api.semanticscholar.org/CorpusID:55703664}
}
```

```bibtex
@article{Hiraoka2021DropoutQF,
    title   = {Dropout Q-Functions for Doubly Efficient Reinforcement Learning},
    author  = {Takuya Hiraoka and Takahisa Imagawa and Taisei Hashimoto and Takashi Onishi and Yoshimasa Tsuruoka},
    journal = {ArXiv},
    year    = {2021},
    volume  = {abs/2110.02034},
    url     = {https://api.semanticscholar.org/CorpusID:238353966}
}
```

```bibtex
@inproceedings{ObandoCeron2024MixturesOE,
    title   = {Mixtures of Experts Unlock Parameter Scaling for Deep RL},
    author  = {Johan S. Obando-Ceron and Ghada Sokar and Timon Willi and Clare Lyle and Jesse Farebrother and Jakob Foerster and Gintare Karolina Dziugaite and Doina Precup and Pablo Samuel Castro},
    year    = {2024},
    url     = {https://api.semanticscholar.org/CorpusID:267637059}
}
```

```bibtex
@inproceedings{Kumar2023MaintainingPI,
    title   = {Maintaining Plasticity in Continual Learning via Regenerative Regularization},
    author  = {Saurabh Kumar and Henrik Marklund and Benjamin Van Roy},
    year    = {2023},
    url     = {https://api.semanticscholar.org/CorpusID:261076021}
}
```

```bibtex
@inproceedings{Kuznetsov2020ControllingOB,
    title   = {Controlling Overestimation Bias with Truncated Mixture of Continuous Distributional Quantile Critics},
    author  = {Arsenii Kuznetsov and Pavel Shvechikov and Alexander Grishin and Dmitry P. Vetrov},
    booktitle = {International Conference on Machine Learning},
    year    = {2020},
    url     = {https://api.semanticscholar.org/CorpusID:218581840}
}
```

```bibtex
@article{Zagoruyko2017DiracNetsTV,
    title   = {DiracNets: Training Very Deep Neural Networks Without Skip-Connections},
    author  = {Sergey Zagoruyko and Nikos Komodakis},
    journal = {ArXiv},
    year    = {2017},
    volume  = {abs/1706.00388},
    url     = {https://api.semanticscholar.org/CorpusID:1086822}
}
```

```bibtex
@article{Abbas2023LossOP,
    title   = {Loss of Plasticity in Continual Deep Reinforcement Learning},
    author  = {Zaheer Abbas and Rosie Zhao and Joseph Modayil and Adam White and Marlos C. Machado},
    journal = {ArXiv},
    year    = {2023},
    volume  = {abs/2303.07507},
    url     = {https://api.semanticscholar.org/CorpusID:257504763}
}
```

```bibtex
@article{Zhang2024ReLU2WD,
    title   = {ReLU2 Wins: Discovering Efficient Activation Functions for Sparse LLMs},
    author  = {Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
    journal = {ArXiv},
    year    = {2024},
    volume  = {abs/2402.03804},
    url     = {https://api.semanticscholar.org/CorpusID:267499856}
}
```

```bibtex
@inproceedings{Lee2024SimBaSB,
    title  = {SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning},
    author = {Hojoon Lee and Dongyoon Hwang and Donghu Kim and Hyunseung Kim and Jun Jet Tai and Kaushik Subramanian and Peter R. Wurman and Jaegul Choo and Peter Stone and Takuma Seno},
    year   = {2024},
    url    = {https://api.semanticscholar.org/CorpusID:273346233}
}
```

            
