gradgpad

- **Name:** gradgpad
- **Version:** 2.1.0
- **Home page:** https://github.com/acostapazo/gradgpad
- **Summary:** gradgpad
- **Author:** ALiCE Biometrics
- **License:** MIT
- **Keywords:** face-PAD, framework, evaluation
- **Upload time:** 2023-02-07 10:26:09
            # gradgpad 🗿 [![version](https://img.shields.io/github/release/acostapazo/gradgpad/all.svg)](https://github.com/acostapazo/gradgpad/releases) [![ci](https://github.com/acostapazo/gradgpad/workflows/ci/badge.svg)](https://github.com/acostapazo/gradgpad/actions) [![pypi](https://img.shields.io/pypi/dm/gradgpad)](https://pypi.org/project/gradgpad/) [![codecov](https://codecov.io/gh/acostapazo/gradgpad/branch/main/graph/badge.svg?token=HXTGF8ZBJ7)](https://codecov.io/gh/acostapazo/gradgpad)



👉  GRAD-GPAD is a comprehensive and modular framework for evaluating the performance of face-PAD (face Presentation Attack Detection) approaches in realistic settings, enabling accountability and fair comparison of most face-PAD approaches in the literature.

🙋  GRAD-GPAD stands for Generalization Representation over Aggregated Datasets for Generalized Presentation Attack Detection.

## 🤔 Abstract 

Face recognition technology is now mature enough to reach commercial products, such as smartphones or tablets. However, it still needs to increase robustness against imposter attacks. In this regard, face Presentation Attack Detection (face-PAD) is a key component in providing trustable facial access to digital devices. Despite the success of several face-PAD works on publicly available datasets, most of them fail to reach the market, revealing the lack of evaluation frameworks that represent realistic settings. Here, an extensive analysis of the generalisation problem in face-PAD is provided, jointly with an evaluation strategy based on the aggregation of most publicly available datasets and a set of novel protocols to cover the most realistic settings, including a novel demographic bias analysis. Besides, a new fine-grained categorisation of presentation attacks and instruments is provided, enabling higher flexibility in assessing the generalisation of different algorithms under a common framework. As a result, GRAD-GPAD v2, a comprehensive and modular framework, is presented to evaluate the performance of face-PAD approaches in realistic settings, enabling accountability and fair comparison of most face-PAD approaches in the literature.


## 🙏 Acknowledgements

If you use this framework, please cite the following publication:

```
@article{https://doi.org/10.1049/bme2.12049,
author = {Costa-Pazo, Artur and Pérez-Cabo, Daniel and Jiménez-Cabello, David and Alba-Castro, José Luis and Vazquez-Fernandez, Esteban},
title = {Face presentation attack detection. A comprehensive evaluation of the generalisation problem},
journal = {IET Biometrics},
volume = {10},
number = {4},
pages = {408-429},
doi = {https://doi.org/10.1049/bme2.12049},
url = {https://ietresearch.onlinelibrary.wiley.com/doi/abs/10.1049/bme2.12049},
eprint = {https://ietresearch.onlinelibrary.wiley.com/doi/pdf/10.1049/bme2.12049},
abstract = {Abstract Face recognition technology is now mature enough to reach commercial products, such as smart phones or tablets. However, it still needs to increase robustness against imposter attacks. In this regard, face Presentation Attack Detection (face-PAD) is a key component in providing trustable facial access to digital devices. Despite the success of several face-PAD works in publicly available datasets, most of them fail to reach the market, revealing the lack of evaluation frameworks that represent realistic settings. Here, an extensive analysis of the generalisation problem in face-PAD is provided, jointly with an evaluation strategy based on the aggregation of most publicly available datasets and a set of novel protocols to cover the most realistic settings, including a novel demographic bias analysis. Besides, a new fine-grained categorisation of presentation attacks and instruments is provided, enabling higher flexibility in assessing the generalisation of different algorithms under a common framework. As a result, GRAD-GPAD v2, a comprehensive and modular framework is presented to evaluate the performance of face-PAD approaches in realistic settings, enabling accountability and fair comparison of most face-PAD approaches in the literature.},
year = {2021}
}
```


This publication has been financed by the "Agencia Estatal de Investigación. Gobierno de España", ref. `DIN2019-010735 / AEI / 10.13039/501100011033`.


## 💻 Installation

```console
pip install gradgpad
```
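
If the installation succeeded, the bundled annotations should be importable. A quick sanity check (reusing the annotations API shown below):

```python
# Post-install sanity check: the package ships its annotations,
# so importing them confirms the package is usable.
from gradgpad import annotations

print(f"Total GRAD-GPAD Annotations: {annotations.num_annotations}")
```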

## 🚀 Getting Started

The best way to learn how to use the GRAD-GPAD framework is through the Notebook examples available in:

*  [gradgpad-notebooks](https://github.com/acostapazo/gradgpad-notebooks) 📔 

## 📺 Video Tutorial

[![Tutorial](https://img.youtube.com/vi/y5lQox0hmGU/0.jpg)](https://www.youtube.com/watch?v=y5lQox0hmGU)


## 👍 Annotations

Labels and annotations are available through the Python package. 

Example:

```python
from gradgpad import annotations

# Total number of annotated samples in the aggregated dataset
print(f"Total GRAD-GPAD Annotations: {annotations.num_annotations}")

# Inspect the first annotated sample
print(annotations.annotated_samples[0])

# Print the semantic categorisation of a sample, selected by index
annotations.print_semantic(annotation_index=0)
```

These annotations are also publicly available as a [JSON file](https://github.com/acostapazo/gradgpad/blob/master/gradgpad/data/gradgpad_annotations.json).
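
For tooling outside Python, the JSON file can be read directly with the standard library. This is a minimal sketch; the exact schema of each entry is best checked against the file itself:

```python
import json

# Path assumes a local checkout of the gradgpad repository
with open("gradgpad/data/gradgpad_annotations.json") as f:
    raw_annotations = json.load(f)

# Inspect the top-level structure rather than assuming a schema
print(type(raw_annotations))
```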

## 📰 Reproducible Research

```console
$ gradgpad --reproducible-research -o <output-folder> 
```

Use `gradgpad --help` to check the available parameters:

```
$ gradgpad --help                         
usage: gradgpad [-h] [--reproducible-research] [--zip]
                [--output-path OUTPUT_PATH]

optional arguments:
  -h, --help            show this help message and exit
  --reproducible-research, -rr
                        Create a folder with reproducible research results
  --zip, -z             Zip result folder
  --output-path OUTPUT_PATH, -o OUTPUT_PATH
                        Output path
```
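
For example, the flags above can be combined to generate the reproducible-research folder and zip it in one call (the output path here is illustrative):

```console
$ gradgpad --reproducible-research --zip -o results/
```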

## ❓ FAQ

#### Is it necessary to have all datasets to test the framework?

No, it is not necessary, although the more datasets you add to the test, the greater the statistical significance of your evaluation set.

From the paper: *"The unified categorisation added in GRAD-GPAD v2 brings the opportunity both to create novel protocols and to visualise the results from different perspectives. Also, the extended GRAD-GPAD v2 dataset allows a better statistical significance of the results of previous protocols, leveraging their added-value for assessing face-PAD generalisation on current and future algorithms."*

> **Note**
> Even if you only have access to a few datasets, you can take advantage of annotations and perform tests on your datasets. Filter by datasets with the following code:
> ```python
> from gradgpad import annotations
> 
> my_datasets = ["replay-mobile", "replay-attack"]
> selected_annotations = annotations.get_annotations_filtered_by_datasets(my_datasets)
> ```
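
Building on the note above, a short sketch of tallying how many selected annotations come from each dataset (assuming each annotation exposes a `dataset` enum with a `.value`, as used in the integration example further below):

```python
from collections import Counter

from gradgpad import annotations

my_datasets = ["replay-mobile", "replay-attack"]
selected_annotations = annotations.get_annotations_filtered_by_datasets(my_datasets)

# Count the selected annotations per source dataset
per_dataset = Counter(annotation.dataset.value for annotation in selected_annotations)
print(per_dataset)
```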


#### I want to evaluate my own algorithms with the GRAD-GPAD framework. How should I start?

We strongly recommend using the Python client for easy access to the annotations (also available as a JSON file [here](https://github.com/acostapazo/gradgpad/blob/main/gradgpad/data/gradgpad_annotations.json)).
If you integrate your algorithm and write a score file in a format compatible with GRAD-GPAD (examples in [scores](https://github.com/acostapazo/gradgpad/tree/main/gradgpad/data/scores)), you will be able to use the available evaluation tools.

```mermaid
flowchart LR
    subgraph GRAD-GPAD Dataset Annotations
    gradgpad_annotations.json
    python(Python client)
    end

    Algorithm

    subgraph Evaluation
    scores_format(Scores Format)
    tools(GRAD-GPAD Evaluation tools)
    end

    gradgpad_annotations.json --> python
    python --> Algorithm
    Algorithm --> scores_format
    scores_format --> tools
```

> **Note**
> The following code could help you to integrate your algorithm:
>
>```python
>from gradgpad import annotations
>
>my_datasets = {
>    "replay-mobile": "/Users/username/datasets/replay-mobile",  # set path to your dataset
>    "replay-attack": "/Users/username/datasets/replay-attack",  # set path to your dataset
>}
>selected_annotations = annotations.get_annotations_filtered_by_datasets([*my_datasets])
>
>for annotation in selected_annotations:
>    filename = f"{my_datasets.get(annotation.dataset.value)}/{annotation.media}"
>    print(f"{filename=}")
>
>    # 1. Load the media file
>
>    # 2. Perform your algorithm
>
>    # 3. Save to a file like this {annotation.media: score} 
>    #    like in https://github.com/acostapazo/gradgpad/tree/main/gradgpad/data/scores/auxiliary
>
>    # 4. Once you have the score files, you can use the evaluation tools
>    #    check notebooks in https://github.com/acostapazo/gradgpad-notebooks
>```
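
Building on the skeleton above, a minimal sketch of steps 3 and 4, assuming scores are plain floats and that a JSON object mapping each media path to its score matches the expected format (compare against the linked score examples before relying on it):

```python
import json
import random

from gradgpad import annotations

my_datasets = {
    "replay-mobile": "/Users/username/datasets/replay-mobile",
    "replay-attack": "/Users/username/datasets/replay-attack",
}
selected_annotations = annotations.get_annotations_filtered_by_datasets([*my_datasets])

# Hypothetical scorer: replace random.random() with your algorithm's output
scores = {annotation.media: random.random() for annotation in selected_annotations}

# Persist {annotation.media: score} so the evaluation tools can consume it
with open("my_algorithm_scores.json", "w") as f:
    json.dump(scores, f, indent=2)
```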


## 🤔 Contributing

There is a lot of work ahead (adding new categorizations and datasets, improving documentation...). Feel free to propose any improvements you can think of! If you need help getting started, don't hesitate to contact us ✌️

* 🛠️ Environment

```console
>> python -m venv venv
>> source venv/bin/activate
(venv) >> pip install lume
(venv) >> lume -install
```

* ✅ Testing

```console
(venv) >> lume -test
```




            
