[![License](https://img.shields.io/badge/license-MIT-blue.svg?style=flat)](https://opensource.org/licenses/MIT)
[![Latest Version](https://img.shields.io/github/v/release/AI-SDC/AI-SDC?style=flat)](https://github.com/AI-SDC/AI-SDC/releases)
[![DOI](https://zenodo.org/badge/518801511.svg)](https://zenodo.org/badge/latestdoi/518801511)
[![codecov](https://codecov.io/gh/AI-SDC/AI-SDC/branch/main/graph/badge.svg?token=AXX2XCXUNU)](https://codecov.io/gh/AI-SDC/AI-SDC)
[![Python versions](https://img.shields.io/pypi/pyversions/aisdc.svg)](https://pypi.org/project/aisdc)
# AI-SDC
A collection of tools and resources for managing the [statistical disclosure control](https://en.wikipedia.org/wiki/Statistical_disclosure_control) of trained [machine learning](https://en.wikipedia.org/wiki/Machine_learning) models. For a brief introduction, see [Smith et al. (2022)](https://doi.org/10.48550/arXiv.2212.01233).
The `aisdc` package provides:
* A variety of privacy attacks for assessing machine learning models.
* The safemodel package: a suite of open source wrappers for common machine learning frameworks, including [scikit-learn](https://scikit-learn.org) and [Keras](https://keras.io). It is designed for use by researchers in Trusted Research Environments (TREs) where disclosure control methods must be implemented. Safemodel aims to give researchers greater confidence that their models comply with disclosure control requirements.
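To illustrate the wrapper idea behind safemodel, the sketch below shows a toy "safe wrapper" that checks a model's hyperparameters against ranges deemed safe for disclosure control before release. All names (`SafeModelSketch`, `preliminary_check`, the rules themselves) are hypothetical and illustrative only; they are not the `aisdc` API.

```python
# Illustrative sketch of the "safe wrapper" pattern: refuse release when
# hyperparameters fall outside ranges deemed safe for disclosure control.
# The rules and class names here are made up, not taken from aisdc.

SAFE_RULES = {
    # hypothetical rule: very deep trees can memorise individual records
    "max_depth": lambda v: v is not None and v <= 8,
    # hypothetical rule: each leaf must cover several records
    "min_samples_leaf": lambda v: v >= 5,
}

class SafeModelSketch:
    """Wraps a model's hyperparameters and checks them before release."""

    def __init__(self, **hyperparams):
        self.hyperparams = hyperparams

    def preliminary_check(self):
        """Return (ok, report) describing any unsafe hyperparameters."""
        problems = [
            name for name, rule in SAFE_RULES.items()
            if name in self.hyperparams and not rule(self.hyperparams[name])
        ]
        ok = not problems
        report = "model passes checks" if ok else "unsafe: " + ", ".join(problems)
        return ok, report

print(SafeModelSketch(max_depth=4, min_samples_leaf=10).preliminary_check())
print(SafeModelSketch(max_depth=50, min_samples_leaf=1).preliminary_check())
```

A compliant configuration passes; an over-fitted one is flagged for TRE review before any model leaves the environment.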
A collection of user guides can be found in the [`user_stories`](user_stories) folder of this repository. These guides include configurable examples from the perspective of both a researcher and a TRE, with separate scripts for each. Instructions on how to use these scripts are included in the README located in that folder.
## Installation
[![PyPI package](https://img.shields.io/pypi/v/aisdc.svg)](https://pypi.org/project/aisdc)
Install `aisdc` with `pip` and manually copy any of the [`examples`](examples/) you wish to run.
To install only the base package, which includes the attacks used for assessing privacy:
```
$ pip install aisdc
```
To additionally install the safemodel package:
```
$ pip install "aisdc[safemodel]"
```
## Running
To run an example, execute the desired script from the copied `examples` directory. For example, to run the LiRA membership inference attack example:
```
$ python -m lira_attack_example
```
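For intuition, LiRA scores each record by asking how plausible the target model's confidence on it is under the distribution of confidences seen when shadow models are trained with the record versus without it. The stdlib-only toy below sketches that likelihood-ratio idea with made-up shadow scores; it is purely illustrative and not the `aisdc` implementation.

```python
# Toy sketch of the likelihood-ratio idea behind LiRA: fit one distribution
# to shadow-model confidences for records that were IN the training set and
# one for records that were OUT, then score a target confidence by the ratio
# of the two densities. Scores below are invented for illustration.
from statistics import NormalDist

in_scores = [0.96, 0.94, 0.98, 0.95, 0.97]   # shadow models trained WITH the record
out_scores = [0.70, 0.65, 0.75, 0.68, 0.72]  # shadow models trained WITHOUT it

in_dist = NormalDist.from_samples(in_scores)
out_dist = NormalDist.from_samples(out_scores)

def likelihood_ratio(confidence: float) -> float:
    """Ratio > 1 suggests the record was in the training set."""
    return in_dist.pdf(confidence) / out_dist.pdf(confidence)

print(likelihood_ratio(0.95) > 1)  # high confidence: looks like a member
print(likelihood_ratio(0.66) > 1)  # low confidence: looks like a non-member
```

The real attack refines this with many shadow models and per-example calibration, but the membership signal it exploits is exactly this gap between the two confidence distributions.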
## Acknowledgement
This work was funded by UK Research and Innovation under Grant Numbers MC_PC_21033 and MC_PC_23006 as part of Phase 1 of the [DARE UK](https://dareuk.org.uk) (Data and Analytics Research Environments UK) programme, delivered in partnership with Health Data Research UK (HDR UK) and Administrative Data Research UK (ADR UK). The specific projects were Semi-Automatic checking of Research Outputs (SACRO; MC_PC_23006) and Guidelines and Resources for AI Model Access from TrusTEd Research environments (GRAIMATTER; MC_PC_21033). This project has also been supported by the MRC and EPSRC [grant number MR/S010351/1]: PICTURES.
<img src="docs/source/images/UK_Research_and_Innovation_logo.svg" width="20%" height="20%" style="padding: 20px"/> <img src="docs/source/images/health-data-research-uk-hdr-uk-logo-vector.png" width="10%" height="10%" style="padding: 20px"/> <img src="docs/source/images/logo_print.png" width="15%" height="15%" style="padding: 20px"/>