objectrl 0.1.1.post1 (PyPI)

- Summary: ObjectRL: An Object-Oriented Reinforcement Learning Codebase
- Uploaded: 2025-07-09 11:03:28
- Requires Python: >=3.12
- License: GPL-3.0-or-later
# ObjectRL

[![docs](https://readthedocs.org/projects/objectrl/badge/?version=latest)](https://objectrl.readthedocs.io/en/latest/)
[![license](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://github.com/adinlab/objectrl/blob/master/LICENSE)
<a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch&logoColor=white"></a>

<p align="center">
  <img src="docs/_static/imgs/logo.svg" alt="ObjectRL Logo" height="150">
</p>

**ObjectRL** is a deep reinforcement learning library designed for research and rapid prototyping. It focuses on deep actor-critic algorithms for continuous control tasks such as those in the MuJoCo environment suite, while providing a flexible object-oriented architecture that supports future extensions to value-based and discrete-action methods.

---

## Features

- Object-oriented design for easy experimentation (see the sketch after this list)  
- Implements popular deep RL algorithms for continuous control  
- Includes experimental implementations of Bayesian and value-based methods  
- Supports easy configuration via CLI and YAML files  
- Rich examples and tutorials for customization and advanced use cases  
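
As a rough illustration of the first point, the sketch below shows the kind of subclass-and-override workflow an object-oriented design enables. The class names, method signature, and noise parameter are hypothetical stand-ins, not ObjectRL's actual API; see the documentation for the real class hierarchy.

```python
# Hypothetical sketch of subclass-and-override experimentation.
# `BaseAgent`, `NoisyAgent`, and `get_action` are illustrative names,
# not ObjectRL's actual API.
import torch


class BaseAgent:
    """Stand-in base agent exposing an action-selection hook."""

    def get_action(self, observation: torch.Tensor) -> torch.Tensor:
        # Placeholder policy: a zero action with the observation's shape.
        return torch.zeros_like(observation)


class NoisyAgent(BaseAgent):
    """Variant agent: changes exploration by overriding one method."""

    def __init__(self, noise_scale: float = 0.1) -> None:
        self.noise_scale = noise_scale

    def get_action(self, observation: torch.Tensor) -> torch.Tensor:
        action = super().get_action(observation)
        return action + self.noise_scale * torch.randn_like(action)
```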

---

## Supported Algorithms

- **DDPG** (Deep Deterministic Policy Gradient)  
- **TD3** (Twin Delayed DDPG)  
- **SAC** (Soft Actor-Critic)  
- **PPO** (Proximal Policy Optimization)  
- **REDQ** (Randomized Ensemble Double Q-Learning)  
- **DRND** (Distributional Random Network Distillation)  
- **OAC** (Optimistic Actor-Critic)  
- **PBAC** (PAC-Bayesian Actor-Critic)  
- **BNN-SAC** (Bayesian Neural Network SAC) — experimental, in examples  
- **DQN** (Deep Q-Network) — experimental, in examples  

---

## Installation

### Create Environment

```bash
conda create -n objectrl python=3.12 -y
conda activate objectrl
```

### Using PyPI (Recommended)

```bash
pip install objectrl
```

### From Source (Latest Development Version)

```bash
git clone https://github.com/adinlab/objectrl.git
cd objectrl
pip install -e .
```

### Optional Dependencies

To enable additional features such as documentation generation:
```bash
pip install "objectrl[docs]"
```
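
Whichever route you take, a quick sanity check with standard pip tooling confirms the package and its version without touching the library's internal API:

```bash
# Verify the installed package and its version (standard pip command)
pip show objectrl
```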

---

## Quick Start Guide

Run your first experiment using Soft Actor-Critic (SAC) on the default `cheetah` environment:

```bash
python objectrl/main.py --model.name sac
```

### Change Algorithm and Environment

Run DDPG on the `hopper` environment:

```bash
python objectrl/main.py --model.name ddpg --env.name hopper
```

### Customize Training Parameters

Train SAC on `hopper` for 100,000 steps, using 5 episodes per evaluation:

```bash
python objectrl/main.py --model.name sac --env.name hopper --training.max_steps 100000 --training.eval_episodes 5
```

### Use YAML Configuration Files

For more complex or reproducible setups, create YAML config files in `objectrl/config/model_yamls/` and specify them at runtime:

```bash
python objectrl/main.py --config objectrl/config/model_yamls/ppo.yaml
```

Example `ppo.yaml`:

```yaml
model:
  name: ppo
training:
  warmup_steps: 0
  learn_frequency: 2048
  batch_size: 64
  n_epochs: 10
```
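
If CLI flags compose with a config file (an assumption; this README does not state the precedence rules), a YAML base could be adjusted per run without editing the file:

```bash
# Assumed behavior, not confirmed by this README: load the YAML config,
# then override a single training parameter from the command line.
python objectrl/main.py --config objectrl/config/model_yamls/ppo.yaml --training.max_steps 200000
```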

---

## Documentation

Explore detailed documentation, tutorials, and API references at: [https://objectrl.readthedocs.io](https://objectrl.readthedocs.io)

---

## Citation

If you use ObjectRL in your research, please cite:

```bibtex
@article{baykal2025objectrl,
  title={ObjectRL: An Object-Oriented Reinforcement Learning Codebase}, 
  author={Baykal, Gulcin and Akg{\"u}l, Abdullah and Haussmann, Manuel and Tasdighi, Bahareh and Werge, Nicklas and Wu, Yi-Shan and Kandemir, Melih},
  year={2025},
  journal={arXiv preprint arXiv:2507.03487}
}
```