axa-ds-safeai


Name: axa-ds-safeai
Version: 0.1.0
Summary: Model-agnostic SAFE AI package for measuring the risks of AI models, regardless of the model type
Author: Golnoosh Babaei
Upload time: 2024-10-02 09:38:39
Home page: None
Maintainer: None
Docs URL: None
Requires Python: None
License: None
Keywords: None
Requirements: No requirements were recorded.
### AXA safeAI package

This S.A.F.E. approach is based on the “Rank Graduation Box” proposed in [Babaei et al. 2024](https://www.sciencedirect.com/science/article/pii/S0957417424021067). The term “box” emphasizes that the proposal is a work in progress: like a box, it can be continually filled with new tools that measure whatever future requirements become necessary for the safety of AI systems.
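
To give a concrete sense of the rank-based, model-agnostic measures collected in the box, the sketch below computes an RGA-style concordance score from observed targets and model predictions, following the rank-graduation construction described in the cited papers. The function name `rga_score`, the cumulative-sum normalization, and the handling of a constant target are illustrative assumptions, not necessarily the formula or API shipped in this package.

```python
import numpy as np

def rga_score(y_true, y_pred):
    """Illustrative rank-graduation-style concordance score in [0, 1].

    1.0 -> predictions rank the observations exactly like the true values
    0.0 -> predictions rank the observations in exactly the reverse order
    (Sketch only; the published RGA definition may differ in its details.)
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)

    # Observed values re-ordered by the ranks of the predictions
    # (the "concordance" ordering), plus the best and worst orderings.
    concordant = y_true[np.argsort(y_pred)]
    best = np.sort(y_true)            # ascending: prefix sums are minimal
    worst = np.sort(y_true)[::-1]     # descending: prefix sums are maximal

    # Cumulative partial sums define the corresponding graduation curves.
    c_curve = np.cumsum(concordant)
    b_curve = np.cumsum(best)
    w_curve = np.cumsum(worst)

    denom = np.sum(w_curve - b_curve)
    if denom == 0:                    # constant target: ranks carry no information
        return 1.0
    return float(np.sum(w_curve - c_curve) / denom)

# Example: a nearly rank-preserving prediction scores close to 1.
y = np.array([1.0, 3.0, 2.0, 8.0, 5.0])
yhat = np.array([0.9, 2.5, 2.6, 7.0, 4.8])
print(round(rga_score(y, yhat), 3))
```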

# Install

Simply use:

pip install axa-ds-safeai
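
To verify that the installation succeeded, you can query the installed distribution's metadata with the Python standard library; this minimal check assumes nothing about the package's internal modules.

```python
# Minimal post-install check using only the standard library.
from importlib.metadata import PackageNotFoundError, version

try:
    print("axa-ds-safeai", version("axa-ds-safeai"))  # expect 0.1.0
except PackageNotFoundError:
    print("axa-ds-safeai is not installed")
```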

# Citations

The measures implemented in this package stem primarily from research by
[Paolo Giudici](https://www.linkedin.com/in/paolo-giudici-60028a/), [Emanuela Raffinetti](https://www.linkedin.com/in/emanuela-raffinetti-a3980215/),
and [Golnoosh Babaei](https://www.linkedin.com/in/golnoosh-babaei-990077187/) at the [Statistical Laboratory](https://sites.google.com/unipv.it/statslab-pavia/home?authuser=0)
of the University of Pavia.

This package is based on the following papers:
* [Babaei, G., Giudici, P., & Raffinetti, E. (2024). A Rank Graduation Box for SAFE AI. Expert Systems with Applications, 125239.](https://doi.org/10.1016/j.eswa.2024.125239)
* [Giudici, P., & Raffinetti, E. (2024). RGA: a unified measure of predictive accuracy. Advances in Data Analysis and Classification, 1-27.](https://link.springer.com/article/10.1007/s11634-023-00574-2)
* [Raffinetti, E. (2023). A rank graduation accuracy measure to mitigate artificial intelligence risks. Quality & Quantity, 57(Suppl 2), 131-150.](https://link.springer.com/article/10.1007/s11135-023-01613-y)

            
