famews

Name: famews
Version: 0.1.2
Home page: https://github.com/ratschlab/famews
Summary: FAMEWS: A Fairness Auditing tool for Medical Early-Warning Systems
Upload time: 2024-01-19 11:45:26
Author: Marine Hoche, Olga Mineeva, Manuel Burger, Alessandro Blasimme, Gunnar Rätsch
Requires Python: >=3.9
License: MIT License
Keywords: fairness, machine learning, early-warning system, clinical applications
            
# FAMEWS: a Fairness Auditing tool for Medical Early-Warning Systems

![FAMEWS Workflow](./data/figures/summary_tool_paper.png)

**FAMEWS** has primarily been designed to run on the HiRID dataset. However, some stages accept already-processed input, so the tool can also be run on other datasets with a different format.  
We also encourage users to add functionality to the tool in order to expand the range of compatible datasets.  
The tool has been created to audit Early-Warning Systems in the medical domain. As such, we consider a set of patients, each with a time series of input features and a time series of labels.  
As we focus on early warning, we expect the label at a given time step to be positive when a targeted event occurs a certain amount of time (called the prediction horizon) in the future. While the patient is undergoing an event, we expect the label to be NaN.  
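
For illustration, the sketch below builds such a label series from a synthetic event mask. It assumes the common convention that the label is positive whenever the event starts within the prediction horizon; the actual labelling for HiRID comes from the benchmark preprocessing, not from this snippet.

```python
# Illustrative only -- synthetic data and a hypothetical labelling convention
# (label 1 within the horizon before an event, NaN while the event is ongoing).
import numpy as np
import pandas as pd

horizon = 4                        # prediction horizon, in time steps
n_steps = 16
event = np.zeros(n_steps, dtype=bool)
event[10:13] = True                # the patient undergoes the event at steps 10-12

label = np.zeros(n_steps)
label[event] = np.nan              # no label while the event is ongoing
for t in np.flatnonzero(event):    # mark the steps leading up to each event step
    start = max(0, t - horizon)
    label[start:t][~event[start:t]] = 1.0

print(pd.DataFrame({"time_step": np.arange(n_steps), "event": event, "label": label}))
```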

For additional explanation of the tool, please refer to our paper: *FAMEWS: a Fairness Auditing tool for Medical Early-Warning Systems*.  
We provide a sample fairness audit report (`sample_fairness_report.pdf`) that can be produced with FAMEWS. The instructions to reproduce it are given in the section **Pipeline Overview - How to run FAMEWS on HiRID?** of this README, below the header **[TO RUN TO REPRODUCE SAMPLE REPORT]** (there are three steps: HiRID preprocessing, model inference, and fairness analysis).

After explaining how to set up FAMEWS, we describe how to run it on the HiRID dataset and how to obtain the sample report.
More [detailed documentation](documentation/DETAILED_DOC.md) on the extended range of applications is also available.

## Setup

This repository depends on the work done by [Yèche et al. HiRID Benchmark](https://github.com/ratschlab/HIRID-ICU-Benchmark)
to preprocess the HiRID dataset and get it ready for model training, as well as inference and fairness analysis.

The [HiRID Benchmark](https://github.com/ratschlab/HIRID-ICU-Benchmark) repository with the preprocessing is included as a submodule in this repository. After cloning this repository, fetch the submodule with:

```bash
git submodule init
git submodule update

# follow instructions in the `HiRID Benchmark` repository to download and preprocess the dataset
# the subsequent steps rely on the different stage outputs defined by Yèche et al.
```

Then please follow the instructions of the HiRID Benchmark repository to obtain preprocessed data in a suitable format.

### Conda Environment

A conda environment configuration is provided: `environment_linux.yml`. You can create 
the environment with:
```
conda env create -f environment_linux.yml
conda activate famews
```

### Code Package

The `famews` package is installed as part of the environment file `environment_linux.yml`. If you are not using the conda environment, you can install it with `pip`:
```
pip install famews
```
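
A quick way to check that the package is available in the active environment (standard-library only; no famews-specific API is assumed):

```python
# Sanity check: the package imports and its installed version can be queried.
import importlib.metadata

import famews  # raises ImportError if the package is not installed

print(importlib.metadata.version("famews"))  # e.g. "0.1.2"
```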

### Configurations

We use [Gin Configurations](https://github.com/google/gin-config/tags) to configure the
preprocessing, machine-learning, and evaluation pipelines. Example configurations are in `./config`.  
**Please note that some paths in these configs need to be completed based on where the preprocessing outputs have been saved.
To facilitate this step, they are all gathered under `# Paths preprocessed data` or `# Data parameter`.**
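
As a generic illustration of how Gin bindings are consumed (the function and parameter names below are hypothetical, not the actual famews configuration keys; the real values live in the files under `./config`):

```python
# Generic Gin usage sketch -- `load_dataset` and its parameters are hypothetical,
# not famews configuration keys. Completing a path in a .gin file is just
# setting a binding on a configurable like this one.
import gin


@gin.configurable
def load_dataset(data_path: str = "", horizon_hours: int = 8):
    print(f"Loading preprocessed data from {data_path} (horizon: {horizon_hours}h)")


# Equivalent to completing the `# Paths preprocessed data` section of a .gin file:
gin.parse_config(
    [
        'load_dataset.data_path = "/path/to/preprocessed/hirid"',
        "load_dataset.horizon_hours = 8",
    ]
)
load_dataset()
```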

## Pipeline Overview - How to run FAMEWS on HiRID?

Each task (preprocessing, training, evaluation, fairness analysis) is run with a script located in
`famews/scripts`. Typically, these scripts invoke a `Pipeline` object, which consists of different
`PipelineStage` objects.
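
As a schematic sketch only (this is not the famews implementation; the stage names and the shared-state dictionary are purely illustrative), such a pipeline of stages can be pictured as follows:

```python
# Schematic sketch -- not the famews API. A Pipeline runs a sequence of
# PipelineStage objects that each read and update a shared state.
from typing import Any, Dict, List


class PipelineStage:
    def run(self, state: Dict[str, Any]) -> Dict[str, Any]:
        raise NotImplementedError


class LoadPredictions(PipelineStage):  # hypothetical stage
    def run(self, state: Dict[str, Any]) -> Dict[str, Any]:
        state["predictions"] = [0.1, 0.7, 0.9]
        return state


class ComputeMetrics(PipelineStage):  # hypothetical stage
    def run(self, state: Dict[str, Any]) -> Dict[str, Any]:
        state["mean_score"] = sum(state["predictions"]) / len(state["predictions"])
        return state


class Pipeline:
    def __init__(self, stages: List[PipelineStage]):
        self.stages = stages

    def run(self) -> Dict[str, Any]:
        state: Dict[str, Any] = {}
        for stage in self.stages:
            state = stage.run(state)
        return state


print(Pipeline([LoadPredictions(), ComputeMetrics()]).run()["mean_score"])
```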

### Preprocessing
 
#### HiRID
>**[TO RUN TO REPRODUCE SAMPLE REPORT]**  
>This repository depends on the work done by [Yèche et al. HiRID Benchmark](https://github.com/ratschlab/HIRID-ICU-Benchmark)
>to preprocess the HiRID dataset and get it ready for model training, as well as inference and fairness analysis.

### ML Training

To facilitate experimentation, we provide model weights in `./data/models`.

#### LGBM model
To train an LGBM model, an example GIN config is available at `./config/lgbm_base_train.gin`.
Training can be performed with the following command:
```
python -m famews.scripts.train_tabular_model \
    -g ./config/lgbm_base_train.gin \
    -l ./logs/lgbm_base \
    --seed 1111
```

Pre-trained weights are available at `./data/models/lgbm` and can be used with the following command:
```
python -m famews.scripts.train_tabular_model \
    -g ./config/lgbm_base_pred.gin \
    -l ./logs/lgbm_base \
    --seed 1111
```
Note that these runs also store the predictions obtained on the test set in the log directory.

You can launch several training runs with the `submit_wrapper.py` script. We encourage doing so to obtain model predictions from different random seeds (see the config at `./config/lgbm_10seeds.yaml`).
The following command can be run:
```
python -m famews.scripts.submit_wrapper \
       --config ./config/lgbm_10seeds_train.yaml \
       -d ./logs/lgbm_10seeds
```
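
If the `submit_wrapper.py` script does not fit your compute setup, a manual alternative (a sketch only, reusing the training command shown above; the seeds and log paths are illustrative) is to launch the runs sequentially:

```python
# Sketch of a manual alternative to submit_wrapper: one training run per seed,
# reusing the documented training command. Seeds and log directories are illustrative.
import subprocess

for seed in (1111, 2222, 3333):
    subprocess.run(
        [
            "python", "-m", "famews.scripts.train_tabular_model",
            "-g", "./config/lgbm_base_train.gin",
            "-l", f"./logs/lgbm_10seeds/seed_{seed}",
            "--seed", str(seed),
        ],
        check=True,
    )
```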

>**[TO RUN TO REPRODUCE SAMPLE REPORT]**  
>We also provide pre-trained weights for the LGBM models trained with 10 different random seeds in `./data/models/lgbm_10seeds`.
>To generate the predictions from each of these models, one can launch the `submit_wrapper_pred_models.py` script with the following command:
>```
>python -m famews.scripts.submit_wrapper_pred_models \
>       --config ./config/lgbm_10seeds_pred.yaml \
>       -d ./logs/lgbm_10seeds
>```

#### LSTM model
To train an LSTM model, an example GIN config is available at `./config/lstm_base_train.gin`.
Training can be performed with the following command:
```
python -m famews.scripts.train_sequence_model \
    -g ./config/lstm_base_train.gin \
    -l ./logs/lstm_base \
    --seed 1111
```

Pre-trained weights are available at `./data/models/lstm` and can be used with the following command:
```
python -m famews.scripts.train_sequence_model \
    -g ./config/lstm_base_pred.gin \
    -l ./logs/lstm_base \
    --seed 1111
```
Note that these runs also store the predictions obtained on the test set in the log directory.

### Fairness analysis
To audit the fairness of a model, we first need its predictions on the test set (see the commands above) and certain preprocessed data (see the Preprocessing section).  
The following commands run a basic configuration of the fairness analysis on the HiRID dataset with our example models.
We give more details afterwards on how to construct such configurations for different use cases.

#### LGBM model
To audit an LGBM model, an example GIN config is available at `./config/lgbm_base_fairness.gin`. The following command can be run:
```
python -m famews.scripts.run_fairness_analysis \
    -g ./config/lgbm_base_fairness.gin \
    -l ./logs/lgbm_base/seed_1111 \
    --seed 1111
```
>**[TO RUN TO REPRODUCE SAMPLE REPORT]**  
>We encourage users to audit an averaged model obtained from models trained on different random seeds. An example GIN config is available at `./config/lgbm_10seeds_fairness.gin`, and the following command can be run:
>```
>python -m famews.scripts.run_fairness_analysis \
>    -g ./config/lgbm_10seeds_fairness.gin \
>    -l ./logs/lgbm_10seeds \
>    --seed 1111
>```

#### LSTM model
To audit an LSTM model, an example GIN config is available at `./config/lstm_base_fairness.gin`. The following command can be run:
```
python -m famews.scripts.run_fairness_analysis \
    -g ./config/lstm_base_fairness.gin \
    -l ./logs/lstm_base/seed_1111 \
    --seed 1111
```
Please note that for this audit we don't run the `AnalyseFeatImportanceGroup` stage, as it requires computing SHAP values, which isn't supported for the deep-learning model.  
However, if you still want to run this stage, you can provide the SHAP values directly as input to the pipeline (see `./famews/famews/fairness_check/README.md` for more details).
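
For reference, one way to precompute SHAP values for a sequence model is a gradient-based explainer. The sketch below assumes the `shap` and `torch` packages; the toy model, shapes, and data are synthetic, and the snippet is not part of the FAMEWS pipeline.

```python
# Sketch only: gradient-based SHAP values for a toy LSTM. The model, input
# shapes and data are synthetic; how to feed the resulting values into the
# pipeline is described in ./famews/famews/fairness_check/README.md.
import shap
import torch
import torch.nn as nn


class TinyLSTM(nn.Module):
    def __init__(self, n_features: int = 8, hidden: int = 16):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1, :]))  # one risk score per sequence


model = TinyLSTM().eval()
background = torch.randn(50, 24, 8)   # (patients, time steps, features)
to_explain = torch.randn(5, 24, 8)

explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(to_explain)  # per-feature attributions
```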

            
