opioidrl

Name: opioidrl
Version: 0.0.2
Home page: https://github.com/kyegomez/OpioidRL
Summary: Paper - Pytorch
Upload time: 2024-09-17 01:27:08
Author: Kye Gomez
Requires Python: <4.0,>=3.10
License: MIT
Keywords: artificial intelligence, deep learning, optimizers, prompt engineering
Requirements: none recorded
[![Multi-Modality](agorabanner.png)](https://discord.com/servers/agora-999382051935506503)

# Opioid RL

[![Join our Discord](https://img.shields.io/badge/Discord-Join%20our%20server-5865F2?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/agora-999382051935506503) [![Subscribe on YouTube](https://img.shields.io/badge/YouTube-Subscribe-red?style=for-the-badge&logo=youtube&logoColor=white)](https://www.youtube.com/@kyegomez3242) [![Connect on LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/kye-g-38759a207/) [![Follow on X.com](https://img.shields.io/badge/X.com-Follow-1DA1F2?style=for-the-badge&logo=x&logoColor=white)](https://x.com/kyegomezb)


**OpioidRL** is a cutting-edge reinforcement learning (RL) library that simulates drug addiction behaviors within RL agents. Inspired by the addictive properties of drugs like methamphetamine and crack cocaine, OpioidRL offers a unique environment where agents experience reward dependency, high-risk decision-making, and compulsive behaviors — pushing RL research into new and provocative territories.

## Features

- **Meth Simulation**: Models the erratic and compulsive high-risk behaviors typically seen in methamphetamine addiction.
- **Crack Simulation**: Models the short-term, intense craving for rewards, leading to aggressive reward-seeking behaviors.
- **Customizable Reward Loops**: Easily adjust the reinforcement pathways to mimic varying levels of addiction, from mild dependency to extreme compulsion.
- **Addiction Dynamics**: Introduces tolerance, withdrawal, and relapse phenomena, simulating real-world addiction cycles (a rough sketch of one possible formulation follows this list).
- **Framework Compatibility**: Integrates with popular libraries such as PyTorch, TensorFlow, and Stable Baselines3.
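
The library's internal reward model is not documented, so as a rough illustration of how tolerance and withdrawal could be layered onto any environment, here is a hypothetical Gymnasium reward wrapper. All names and dynamics below are illustrative assumptions, not OpioidRL's actual implementation:

```python
import gymnasium as gym


class AddictionWrapper(gym.Wrapper):
    """Hypothetical sketch: tolerance dampens repeated rewards,
    withdrawal penalizes reward-free steps. Not OpioidRL's real code."""

    def __init__(self, env, tolerance_increase_rate=0.01, withdrawal_penalty=5.0):
        super().__init__(env)
        self.tolerance_increase_rate = tolerance_increase_rate
        self.withdrawal_penalty = withdrawal_penalty
        self.tolerance = 0.0

    def reset(self, **kwargs):
        self.tolerance = 0.0
        return self.env.reset(**kwargs)

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        if reward > 0:
            # Repeated rewards feel progressively smaller (tolerance)
            reward *= max(0.0, 1.0 - self.tolerance)
            self.tolerance += self.tolerance_increase_rate
        else:
            # Going without the expected reward hurts (withdrawal)
            reward -= self.withdrawal_penalty
        return obs, reward, terminated, truncated, info
```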

## Installation

You can install OpioidRL using `pip`:

```bash
pip install opioidrl
```

## Quick Start

Below is a simple example of how to integrate **OpioidRL** into your RL pipeline.

```python
import opioidrl
from stable_baselines3 import PPO

# Create a Crack environment
env = opioidrl.make('Crack-v0')

# Train the agent using PPO
model = PPO('MlpPolicy', env, verbose=1)
model.learn(total_timesteps=100000)

# Test the agent (assumes a Gymnasium-style API: reset() returns
# (obs, info) and step() returns a five-tuple)
obs, info = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs)
    obs, reward, terminated, truncated, info = env.step(action)
    env.render()
    if terminated or truncated:
        obs, info = env.reset()
```
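
To quantify performance rather than eyeballing rendered rollouts, Stable Baselines3 ships an evaluation helper. A minimal sketch, assuming the `model` and `env` defined above:

```python
from stable_baselines3.common.evaluation import evaluate_policy

# Average episodic return over 10 evaluation episodes
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean reward: {mean_reward:.2f} +/- {std_reward:.2f}")
```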

### Example: Meth Environment

```python
import opioidrl
from stable_baselines3 import A2C

# Create a Meth environment
env = opioidrl.make('Meth-v0')

# Train the agent using A2C
model = A2C('MlpPolicy', env, verbose=1)
model.learn(total_timesteps=100000)

# Evaluate agent behavior (assumes a Gymnasium-style API: reset()
# returns (obs, info) and step() returns a five-tuple)
obs, info = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs)
    obs, reward, terminated, truncated, info = env.step(action)
    env.render()
    if terminated or truncated:
        obs, info = env.reset()
```
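
Trained policies can be persisted and reloaded with the standard Stable Baselines3 save/load API; the file name here is illustrative:

```python
# Save the trained policy to disk and reload it later
model.save("a2c_meth")
loaded_model = A2C.load("a2c_meth", env=env)
```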

## Available Environments

OpioidRL currently offers two environments simulating different types of addiction:

1. **Crack-v0**: Fast and intense; simulates the short-term, high-risk reward-seeking behavior common in crack cocaine addiction.
2. **Meth-v0**: More sustained compulsive behavior, with agents showing increasing tolerance and a growing willingness to take extreme actions for delayed rewards (see the inspection sketch below).
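
The README does not document the observation or action spaces of either environment. Assuming they follow the standard Gymnasium convention, you can inspect them directly:

```python
import opioidrl

env = opioidrl.make('Crack-v0')
# Standard Gymnasium attributes (assumed here, not documented by OpioidRL)
print(env.observation_space)
print(env.action_space)
```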

### Environment Customization

You can modify the parameters of each environment to simulate different levels of addiction severity (a parameter-sweep sketch follows the configuration options below):

```python
env = opioidrl.make('Meth-v0', tolerance_increase_rate=0.01, withdrawal_penalty=5)
```

### Configuration Options

- `tolerance_increase_rate`: How fast the agent builds tolerance to rewards.
- `withdrawal_penalty`: The penalty imposed when the agent doesn't receive its expected reward.
- `relapse_probability`: The probability that an agent will fall back into compulsive behaviors after overcoming addiction.
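
A quick way to see how these knobs interact is to sweep one of them and compare random-policy returns. This sketch uses the documented keyword arguments with illustrative values and assumes a Gymnasium-style step API:

```python
import opioidrl

# Hypothetical severity sweep; the accepted value ranges are not
# specified by the README, so these numbers are illustrative.
for rate in (0.001, 0.01, 0.1):
    env = opioidrl.make('Meth-v0', tolerance_increase_rate=rate, withdrawal_penalty=5)
    obs, info = env.reset()
    total = 0.0
    for _ in range(100):
        obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
        total += reward
        if terminated or truncated:
            obs, info = env.reset()
    print(f"tolerance_increase_rate={rate}: random-policy return = {total:.1f}")
```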

## Roadmap

- **Opioid-v0**: A new environment simulating opioid addiction with prolonged reward dependency and extreme withdrawal effects.
- **Alcohol-v0**: An environment simulating long-term, mild addiction behaviors with subtle but persistent effects on decision-making.
- **Nicotine-v0**: Simulating the reward-seeking behavior tied to nicotine addiction, with frequent, small rewards.

## Contributing

Contributions are welcome! If you have ideas for new environments or features, feel free to submit a pull request or open an issue.

### Steps to Contribute:

1. Fork this repository.
2. Create a new branch: `git checkout -b feature-name`
3. Commit your changes: `git commit -m 'Add new feature'`
4. Push to the branch: `git push origin feature-name`
5. Submit a pull request.

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## Disclaimer

OpioidRL is a research tool designed for educational and experimental purposes. The behaviors simulated within this library are based on abstract models of addiction and are not intended to trivialize or promote drug addiction in any form. Addiction is a serious issue, and if you or someone you know is struggling with addiction, please seek professional help.

Made with ❤️ by the OpioidRL team.

            
