DTRGym


Name: DTRGym
Version: 0.1.0
Home page: http://github.com/GilesLuo/DTRGym
Summary: A Collection of Reinforcement Learning Environments for Dynamic Treatment Regime Simulation.
Upload time: 2024-03-19 11:50:22
Author: Zhiyao Luo, Mingcheng Zhu
Requires Python: == 3.10.*
License: MIT
Keywords: healthcare simulation, dynamic treatment regime, reinforcement learning
<h3 align="center">DTRGym: Reinforcement Learning Environments for Dynamic Treatment Regimes</h3>


---
## 📝 Table of Contents
- [About](#about)
- [Getting Started](#getting_started)
- [Module Description](#module_description)
- [Usage](#usage)
- [Reference](#reference)
- [Special Thanks](#special_thanks)
- [Acknowledgement](#acknowledgement)

## 🧐 About <a name = "about"></a>
DTR-Gym is a benchmarking platform with four simulation environments aimed at improving treatment in cancer chemotherapy, tumor growth, type-1 diabetes, and sepsis therapy.

DTR-Gym is designed to replicate the intricacies of real clinical scenarios, thereby providing a robust framework for exploring and evaluating reinforcement learning algorithms.


## 🏁 Getting Started <a name = "getting_started"></a>
These instructions will get you a copy of the project up and running on your local machine.

### Prerequisites
+ Python 3.10: The project is developed using Python 3.10. It is recommended to use the same version to avoid compatibility issues.

### Installing
1. Clone the repository
```
git clone git@github.com:GilesLuo/SimMedEnv.git
```
2. Install the required packages
```
cd SimMedEnv
pip install -r requirements.txt
```

3. Test the installation
```
python test_installation.py
```

### Initialise the Environment

You can initialise an environment and run a quick check with:
```python
import gymnasium as gym
import DTRGym  # this line is necessary!

env = gym.make('AhnChemoEnv-discrete', n_act=11)
print(env.action_space.n)
print(env.observation_space.shape)
```
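
Once created, the environment follows the standard Gymnasium `reset()`/`step()` loop. The sketch below rolls out one episode with a random policy; it assumes only the standard Gymnasium five-tuple `step()` API, which applies here since the environment is built through `gym.make`:

```python
import gymnasium as gym
import DTRGym  # registers the DTRGym environments

env = gym.make('AhnChemoEnv-discrete', n_act=11)

# Roll out one episode with random actions using the standard Gymnasium API.
obs, info = env.reset(seed=0)
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()  # replace with your RL policy
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward:.2f}")
```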

## 🎈 Module Description <a name="module_description"></a>

### Simulation Environments
There are four simulation environments in DTRGym. Each environment simulates a specific disease and its treatment.

| Environment                                   | Disease        | Treatment                                   | Dynamics | Action Space |
|-----------------------------------------------|----------------|---------------------------------------------|----------|--------------|
| [*AhnChemoEnv*](DTRGym/ahn_chemo_env.py)      | Cancer         | Chemotherapy                               | ODE      | Cont./Disc.  |
| [*GhaffariCancerEnv*](DTRGym/ghaffari_cancer_env.py) | Cancer         | Chemotherapy & Radiotherapy                | ODE      | Cont./Disc.  |
| [*OberstSepsisEnv*](DTRGym/OberstSepsisEnv/env.py)   | Sepsis         | Antibiotics, Mechanical Ventilation, Vasopressors | SCM      | Disc.        |
| [*SimGlucoseEnv*](DTRGym/simglucose_env.py)          | Type-1 Diabetes | Insulin Administration                    | ODE      | Cont./Disc.  |
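
Importing `DTRGym` registers these environments with Gymnasium, so the available ids can also be discovered programmatically. A minimal sketch, assuming the registered ids start with the environment names listed above (as in the examples elsewhere in this README):

```python
import gymnasium as gym
import DTRGym  # registers the environments with Gymnasium on import

# List every registered id belonging to one of the four DTRGym environments.
prefixes = ("AhnChemoEnv", "GhaffariCancerEnv", "OberstSepsisEnv", "SimGlucoseEnv")
dtr_ids = [env_id for env_id in gym.registry if env_id.startswith(prefixes)]
print("\n".join(sorted(dtr_ids)))
```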

### Environment Settings
Each environment comes with five default settings, designed to simulate different real-world scenarios:

| Setting | Description                                                                        |
|---------|------------------------------------------------------------------------------------|
| 1       | No PK/PD variation, no observation noise, no missing values. |
| 2       | PK/PD variation, no observation noise, no missing values. |
| 3       | PK/PD variation, observation noise (medium), no missing values. |
| 4       | PK/PD variation, observation noise (large), no missing values. |
| 5       | PK/PD variation, observation noise (large), missing values. |

For different environments, the variations are defined as follows:

| Environment            | PK/PD Variation                            | Observation Noise (Medium)             | Observation Noise (Large)          | Missing Values |
|------------------------|--------------------------------------------|----------------------------------------|------------------------------------|----------------|
| *AhnChemoEnv*          | 10%                                        | 20%                                    | 50%                                | 50%            |
| *GhaffariCancerEnv*    | 10%                                        | 10%                                    | 20%                                | 50%            |
| *OberstSepsisEnv*      | 10%                                        | 20%                                    | 50%                                | 50%            |
| *SimGlucoseEnv*        | Parameters of different patients          | Use data from simulated glucose monitor.| Further randomize food intake times.| 50%           |


## 🔧 Usage <a name="usage"></a>
### Use Default Environment Configuration
DTR-Gym provides default environment configurations that simulate real-world clinical scenarios. For example, to use setting 1, you can initialise the environment with:
```python
import gymnasium as gym
import DTRGym

env = gym.make("AhnChemoEnv-continuous-setting1")
```
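
The same pattern extends to the other settings. A small sketch, assuming the ids for settings 2-5 follow the same `<EnvName>-<action type>-setting<k>` pattern as the setting-1 id above:

```python
import gymnasium as gym
import DTRGym

# Instantiate all five default settings of the same environment for comparison.
envs = {k: gym.make(f"AhnChemoEnv-continuous-setting{k}") for k in range(1, 6)}
for setting, env in envs.items():
    print(f"setting {setting}:", env.observation_space.shape)
```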

### Customize Maximum Timestep
You can set the maximum number of timesteps per episode by passing a value to `max_t`. Here's an example:

```python
import gymnasium as gym
import DTRGym

env = gym.make("AhnChemoEnv-continuous", max_t=50)
print(env.max_t)
```
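
To see the effect of `max_t`, you can count the steps of a full rollout. A minimal sketch, assuming the episode ends (via the `terminated` or `truncated` flag) once `max_t` timesteps have elapsed:

```python
import gymnasium as gym
import DTRGym

env = gym.make("AhnChemoEnv-continuous", max_t=50)

# Step with random actions until the episode ends and count the steps taken.
obs, info = env.reset(seed=0)
steps = 0
while True:
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    steps += 1
    if terminated or truncated:
        break
print(steps, "<=", env.max_t)  # the episode should last at most max_t steps
```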

### Choose Action Space
When creating an environment, you can choose between a discrete and a continuous action-space version by passing the corresponding environment id. The only exception is "TangSepsisEnv-discrete", which has a discrete action space only. Environments sharing the same id prefix differ only in the type of action space: they have the same observation space, the same disease dynamics, and the same reward function, so feel free to choose the version that suits your RL policy.

Here's an example:

```python
import gymnasium as gym
import DTRGym

continuous_env = gym.make("AhnChemoEnv-continuous")
discrete_env = gym.make("AhnChemoEnv-discrete")

print(continuous_env.env_info["action_type"])
print(discrete_env.env_info['action_type'])
print(continuous_env.observation_space.sample() in discrete_env.observation_space)

```
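
Because the two versions differ only in the action space, code that adapts to the Gymnasium space type works with either one. A short sketch; `policy_output_dim` is a hypothetical helper introduced here for illustration:

```python
import gymnasium as gym
import DTRGym

def policy_output_dim(env):
    """Number of outputs a policy network would need for this environment."""
    space = env.action_space
    if isinstance(space, gym.spaces.Discrete):
        return space.n         # one logit per discrete dose level
    if isinstance(space, gym.spaces.Box):
        return space.shape[0]  # one output per continuous action dimension
    raise TypeError(f"Unsupported action space: {type(space)}")

print(policy_output_dim(gym.make("AhnChemoEnv-discrete")))
print(policy_output_dim(gym.make("AhnChemoEnv-continuous")))
```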

### Customize Action Number (for Discrete Action Space Env)
You can also set the number of actions you want the environment to have by passing `n_act`. This is only effective for the discrete version. Here is an example:

```python
import gymnasium as gym
import DTRGym

env = gym.make("AhnChemoEnv-discrete", n_act=5)
print(env.n_act)
```
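
As in the Getting Started example, the discrete action space should then expose exactly `n_act` actions. A quick check, assuming `env.action_space.n` reflects the `n_act` passed to `gym.make`:

```python
import gymnasium as gym
import DTRGym

# The number of discrete actions should match the n_act passed to gym.make.
for n in (3, 5, 11):
    env = gym.make("AhnChemoEnv-discrete", n_act=n)
    print(n, env.action_space.n)
```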

## Reference <a name="reference"></a>

If you use DTR-Gym in your research, please cite the following paper:

```
To be updated
```


## ✍️ Special Thanks <a name = "special_thanks"></a>
Special thanks to the following contributors who make DTR-Gym possible:
- [@Mingcheng Zhu](https://github.com/JasonZuu) - who developed DTRGym and produced extensive DTRBench experiments.
- To be continued

## 🎉 Acknowledgement <a name = "acknowledgement"></a>
  - [Gymnasium](https://github.com/Farama-Foundation/Gymnasium)
  - [Simglucose](https://github.com/jxx123/simglucose)
  - [gumbel-max-scm](https://github.com/clinicalml/gumbel-max-scm)


            
