# aisdc

| Field | Value |
|-------|-------|
| Name | aisdc |
| Version | 1.1.3 |
| Summary | Tools for the statistical disclosure control of machine learning models |
| Home page | https://github.com/AI-SDC/AI-SDC |
| Maintainer | Jim Smith |
| License | MIT |
| Requires Python | >=3.9, <3.12 |
| Keywords | data-privacy, data-protection, machine-learning, privacy, privacy-tools, statistical-disclosure-control |
| Uploaded | 2024-04-26 17:30:25 |
[![License](https://img.shields.io/badge/license-MIT-blue.svg?style=flat)](https://opensource.org/licenses/MIT)
[![Latest Version](https://img.shields.io/github/v/release/AI-SDC/AI-SDC?style=flat)](https://github.com/AI-SDC/AI-SDC/releases)
[![DOI](https://zenodo.org/badge/518801511.svg)](https://zenodo.org/badge/latestdoi/518801511)
[![codecov](https://codecov.io/gh/AI-SDC/AI-SDC/branch/main/graph/badge.svg?token=AXX2XCXUNU)](https://codecov.io/gh/AI-SDC/AI-SDC)
[![Python versions](https://img.shields.io/pypi/pyversions/aisdc.svg)](https://pypi.org/project/aisdc)

# AI-SDC

A collection of tools and resources for managing the statistical disclosure control of trained machine learning models. For a brief introduction, see [Smith et al. (2022)](https://doi.org/10.48550/arXiv.2212.01233).

### User Guides

A collection of user guides can be found in the [`user_stories`](./user_stories) folder of this repository. These guides include configurable examples from the perspectives of both a researcher and a TRE, with separate scripts for each. Instructions on which scripts to use and how to run them are included in that folder's README.

## Content

* `aisdc`
    - `attacks` Contains a variety of privacy attacks on machine learning models, including membership and attribute inference.
    - `preprocessing` Contains preprocessing modules for test datasets.
    - `safemodel` An open-source wrapper for common machine learning models, designed for use by researchers in Trusted Research Environments (TREs) where disclosure control methods must be implemented. It aims to give researchers greater confidence that their models comply with disclosure control requirements; a minimal sketch of the underlying idea is shown after this list.
* `docs` Contains Sphinx documentation files.
* `example_notebooks` Contains short tutorials on the basic concept of "safe_XX" versions of machine learning algorithms, and examples of some specific algorithms.
* `examples` Contains examples of how to run the code contained in this repository:
  - How to simulate attribute inference attacks: `attribute_inference_example.py`.
  - How to simulate membership inference attacks:
    + Worst-case scenario attack: `worst_case_attack_example.py`.
    + Likelihood Ratio Attack (LiRA): `lira_attack_example.py`.
  - Integration of attacks into safemodel classes: `safemodel_attack_integration_bothcalls.py`.
* `risk_examples` Contains hypothetical examples of data leakage through machine learning models as described in the [Green Paper](https://doi.org/10.5281/zenodo.6896214).
* `tests` Contains unit tests.
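
The safemodel idea is easiest to see in miniature. Below is an illustrative sketch of the concept only; it is not the `aisdc` API, and the class name and single-parameter policy are hypothetical. The real wrappers check far more than one hyperparameter:

```
# Illustrative sketch of the "safe wrapper" idea -- NOT the aisdc API.
# SafeishDecisionTree and MIN_SAMPLES_LEAF are hypothetical names.
from sklearn.tree import DecisionTreeClassifier


class SafeishDecisionTree(DecisionTreeClassifier):
    """Refuses to fit when a hyperparameter violates a disclosure-control policy."""

    MIN_SAMPLES_LEAF = 5  # example policy: each leaf must aggregate >= 5 records

    def fit(self, X, y, **kwargs):
        if self.min_samples_leaf < self.MIN_SAMPLES_LEAF:
            raise ValueError(
                f"min_samples_leaf={self.min_samples_leaf} is below the policy "
                f"threshold of {self.MIN_SAMPLES_LEAF}; very small leaves can "
                "reveal individual training records."
            )
        return super().fit(X, y, **kwargs)
```

With such a wrapper, `SafeishDecisionTree(min_samples_leaf=1).fit(X, y)` is rejected before any model is trained, while a compliant setting fits as normal. The wrappers in `aisdc.safemodel` apply this kind of check across many model types and parameters.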

## Documentation

Documentation is hosted here: https://ai-sdc.github.io/AI-SDC/

## Installation / End-user

[![PyPI package](https://img.shields.io/pypi/v/aisdc.svg)](https://pypi.org/project/aisdc)

Install `aisdc` (safest within a virtual environment) and manually copy the [`examples`](examples/) and [`example_notebooks`](example_notebooks/) folders.
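
For example, a virtual environment can be created and activated with Python's built-in `venv` module (Linux/macOS shown; on Windows use `.venv\Scripts\activate`):

```
$ python -m venv .venv
$ source .venv/bin/activate
```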

To install only the base package, which includes the attacks used for assessing privacy:

```
$ pip install aisdc
```

To install the base package together with the safemodel package, which adds defensive wrappers for popular ML frameworks including [scikit-learn](https://scikit-learn.org) and [Keras](https://keras.io) (the quotes prevent shells such as zsh from interpreting the brackets):

```
$ pip install "aisdc[safemodel]"
```

## Running

To run an example, simply execute the desired script or start up `jupyter notebook` and run one of the notebooks.

For example, to run `lira_attack_example.py` (from within the `examples` directory):

```
$ python -m lira_attack_example
```
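
Conceptually, the worst-case membership inference attack that these scripts simulate asks how well an adversary can distinguish the target model's predictions on its training data (members) from its predictions on unseen data (non-members). The following self-contained sketch illustrates that idea only; it is not the `aisdc` implementation, which is more thorough and reports calibrated metrics:

```
# Illustrative sketch of worst-case membership inference -- NOT the aisdc code.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Target model: an overfitted classifier, so the membership signal is visible.
target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# Attack features: the target's predicted probabilities on members vs. non-members.
attack_X = np.vstack([target.predict_proba(X_train), target.predict_proba(X_test)])
attack_y = np.concatenate([np.ones(len(X_train)), np.zeros(len(X_test))])

# Attack model: if it separates members from non-members better than chance
# (AUC > 0.5), the target model leaks membership information.
a_Xtr, a_Xte, a_ytr, a_yte = train_test_split(
    attack_X, attack_y, stratify=attack_y, random_state=0
)
attack = LogisticRegression().fit(a_Xtr, a_ytr)
print("Attack AUC:", roc_auc_score(a_yte, attack.predict_proba(a_Xte)[:, 1]))
```

An AUC close to 0.5 suggests little membership leakage; values well above 0.5 indicate the target model memorises its training data.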

## Development

Clone the repository and install the local package including all dependencies (safest in a virtual env):

```
$ git clone https://github.com/AI-SDC/AI-SDC.git
$ cd AI-SDC
$ pip install ".[test]"
```

Then run the tests:

```
$ pytest .
```

---

This work was funded by UK Research and Innovation under Grant Numbers MC_PC_21033 and MC_PC_23006 as part of Phase 1 of the DARE UK (Data and Analytics Research Environments UK) programme (https://dareuk.org.uk/), delivered in partnership with Health Data Research UK (HDR UK) and Administrative Data Research UK (ADR UK). The specific projects were Semi-Automatic checking of Research Outputs (SACRO; MC_PC_23006) and Guidelines and Resources for AI Model Access from TrusTEd Research environments (GRAIMATTER; MC_PC_21033). This project has also been supported by MRC and EPSRC [grant number MR/S010351/1]: PICTURES.

<img src="docs/source/images/UK_Research_and_Innovation_logo.svg" width="20%" height="20%"/> <img src="docs/source/images/health-data-research-uk-hdr-uk-logo-vector.png" width="10%" height="10%"/> <img src="docs/source/images/logo_print.png" width="15%" height="15%"/>

            
