flexclash

Name: flexclash
Version: 0.0.3
Uploaded: 2024-10-22 09:31:33
Author: Nuria Rodríguez Barroso
Keywords: adversarial attacks, defences, FL, federated-learning, flexible
            <img src="Attack-defense.png" width="100">

# FLEX-Clash

flex-clash is a Python library dedicated to adversarial attacks and defences in Federated Learning. It offers state-of-the-art methods and features that ease the implementation of custom methods. It is intended to extend the [FLEXible](https://github.com/FLEXible-FL/FLEXible) framework.

## Details

This repository includes:
- Features to implement poisoning attacks in Federated Learning.
- Features to implement defences in the aggregator in Federated Learning.
- State-of-the-art defences implemented in FLEXible.

### Folder structure

- **flexclash/data**: which contains the features to poison the clients' data.
- **flexclash/model**: which contains the features to poison the clients' model updates.
- **flexclash/pool**: which contains the features to implement any defence in the aggregation operator as well as the state-of-the-art implemented defences.
- **notebooks**: which contains explanatory notebooks showing how to implement poisoning attacks and use the implemented defences.
- **test**: which contains the tests for the implemented features.

### Explanatory notebooks

- **Poisoning_data_FLEX.ipynb**: A notebook showing how to implement data-poisoning attacks using `flexclash`, including both byzantine and backdoor attacks.
- **Poisoning_model_FLEX.ipynb**: A notebook showing how to implement model-poisoning attacks using `flexclash`.
- **Defences_FLEX.ipynb**: A notebook showing how to employ defences against adversarial attacks using `flexclash`.


## Features

In the following we detail the poisoning attacks implemented:

|  Attack |  Description  | Citation |
|----------|:-----------------------------------:|------:|
| Data poisoning | Poisoning a certain amount of data of certain clients, either randomly or according to given criteria. Several examples are shown in the notebooks. | [Data Poisoning Attacks Against Federated Learning Systems](https://link.springer.com/chapter/10.1007/978-3-030-58951-6_24) |
| Model poisoning | Directly poisoning the weights of the model update that the client shares with the server. | [Deep Model Poisoning Attack on Federated Learning](https://www.mdpi.com/1999-5903/13/3/73) |
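For instance, a byzantine label-flipping attack reassigns a share of a client's labels to random wrong classes before training. The `flexclash` API is not shown here; the following is a minimal NumPy sketch of the idea (the `flip_labels` helper is ours, not part of the library):

```python
import numpy as np

def flip_labels(labels, flip_fraction, num_classes, rng):
    """Byzantine label flipping: reassign a random fraction of labels
    to a different, randomly chosen class."""
    labels = labels.copy()
    n_poisoned = int(len(labels) * flip_fraction)
    idx = rng.choice(len(labels), size=n_poisoned, replace=False)
    # shift each selected label by a random non-zero offset (mod num_classes),
    # which guarantees the new label differs from the original
    offsets = rng.integers(1, num_classes, size=n_poisoned)
    labels[idx] = (labels[idx] + offsets) % num_classes
    return labels

rng = np.random.default_rng(0)
clean = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
poisoned = flip_labels(clean, flip_fraction=0.5, num_classes=10, rng=rng)
```

A backdoor attack would instead pair a trigger pattern in the inputs with a fixed target label, but the random-flip variant above captures the basic data-poisoning mechanism.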

In the following we detail the defences implemented:

|  Defence |  Description  | Citation |
|----------|:-----------------------------------:|------:|
| Median    | A robust aggregation operator that replaces the arithmetic mean with the coordinate-wise median of the model updates, choosing the value at the centre of the distribution. | [Byzantine-robust distributed learning: Towards optimal statistical rates](https://proceedings.mlr.press/v80/yin18a.html) |
| Trimmed mean | A variant of the arithmetic mean that discards a fixed percentage of extreme values at both tails of the distribution before averaging. | [Byzantine-robust distributed learning: Towards optimal statistical rates](https://proceedings.mlr.press/v80/yin18a.html) |
| MultiKrum | It sorts the clients by the geometric distances between their model updates and uses an aggregation parameter specifying how many of the closest clients (the first ones after sorting) are averaged into the aggregated model. | [Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent](https://proceedings.neurips.cc/paper/2017/file/f4b9ec30ad9f68f89b29639786cb62ef-Paper.pdf) |
| Bulyan | A federated aggregation operator against poisoning attacks that combines MultiKrum with the trimmed mean: it sorts the clients by their geometric distances and, given a parameter f, filters out the 2f clients at the tails of the sorted distribution before aggregating the rest. | [The Hidden Vulnerability of Distributed Learning in Byzantium](https://proceedings.mlr.press/v80/mhamdi18a/mhamdi18a.pdf) |
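To illustrate, the median and trimmed-mean rules can be sketched in a few lines of NumPy. These helpers are illustrative only, not the `flexclash` implementations; they treat each client's flattened update as a row of a matrix:

```python
import numpy as np

def median_aggregate(updates):
    """Coordinate-wise median of stacked client updates
    (shape: n_clients x n_params)."""
    return np.median(updates, axis=0)

def trimmed_mean_aggregate(updates, trim_ratio):
    """Coordinate-wise trimmed mean: per parameter, drop the trim_ratio
    fraction of smallest and largest values, then average the rest."""
    n = updates.shape[0]
    k = int(n * trim_ratio)
    sorted_updates = np.sort(updates, axis=0)
    return sorted_updates[k:n - k].mean(axis=0)

# three honest clients near 1.0 and one byzantine outlier
updates = np.array([[1.0], [1.1], [0.9], [100.0]])
```

Both rules ignore the outlier at 100.0 and return a value near 1.0, whereas a plain mean would be pulled above 25.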

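Likewise, MultiKrum's neighbour-distance scoring can be sketched as follows. This is an illustrative NumPy sketch under the standard formulation (score = sum of squared distances to the n − f − 2 nearest other updates), not the library's implementation:

```python
import numpy as np

def multikrum(updates, num_byzantine, num_selected):
    """MultiKrum: score each update by the sum of squared distances to its
    n - f - 2 nearest neighbours, then average the lowest-scoring ones."""
    n = len(updates)
    # pairwise squared Euclidean distances, shape (n, n)
    dists = np.sum((updates[:, None, :] - updates[None, :, :]) ** 2, axis=-1)
    scores = []
    for i in range(n):
        d = np.sort(dists[i])           # d[0] == 0 is the self-distance
        scores.append(d[1 : n - num_byzantine - 1].sum())
    chosen = np.argsort(scores)[:num_selected]
    return updates[chosen].mean(axis=0), chosen

# four clients clustered near the origin and one byzantine outlier
updates = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1], [10.0, 10.0]])
agg, chosen = multikrum(updates, num_byzantine=1, num_selected=3)
```

The outlier accumulates a large score and is never among the selected clients, so the aggregate stays near the honest cluster.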

## Installation

To install this repository locally:

```bash
pip install -e .
```

FLEX-Clash is also available on PyPI and can be installed with pip:

```bash
pip install flex-clash
```

## Citation

If you use this repository in your research work, please cite the FLEXible paper:


            
