rai-toolbox

Name: rai-toolbox
Version: 0.2.1
Home page: https://github.com/mit-ll-responsible-ai/responsible-ai-toolbox
Summary: PyTorch-centric library for evaluating and enhancing the robustness of AI technologies
Upload time: 2022-06-16 17:59:21
Author: Ryan Soklaski, Justin Goodwin, Olivia Brown, Michael Yee
Requires Python: >=3.7
License: MIT
Keywords: machine learning, robustness, pytorch, responsible ai
Requirements: none recorded
Travis-CI: none
Coveralls test coverage: none

The rAI-toolbox is designed to enable methods for evaluating and enhancing both the
robustness and the explainability of AI models in a way that is scalable and that
composes naturally with other popular ML frameworks.

A key design principle of the rAI-toolbox is that it adheres strictly to the APIs
specified by the PyTorch machine learning framework. For example, the rAI-toolbox frames
adversarial training workflows solely in terms of the `torch.optim.Optimizer` and
`torch.nn.Module` APIs. This makes it trivial to leverage other libraries and
frameworks from the PyTorch ecosystem to bolster your responsible AI R&D. For
instance, one can naturally leverage the rAI-toolbox together with
PyTorch Lightning to perform distributed adversarial training.
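
This design can be illustrated with a minimal, self-contained sketch in plain PyTorch (this is not rai-toolbox's actual API — the model, step count, and epsilon bound here are illustrative assumptions): the adversarial perturbation itself is treated as the parameter that a standard `torch.optim.Optimizer` updates, which is the framing the toolbox's design principle describes.

```python
import torch

# Hypothetical sketch, not rai-toolbox code: a PGD-style attack expressed
# purely through the Optimizer/Module APIs.
model = torch.nn.Linear(4, 2)        # stand-in classifier (torch.nn.Module)
x = torch.randn(1, 4)                # clean input
y = torch.tensor([0])                # true label

# The perturbation is an ordinary "parameter" handed to a plain optimizer.
delta = torch.zeros_like(x, requires_grad=True)
opt = torch.optim.SGD([delta], lr=0.1)

for _ in range(5):
    opt.zero_grad()
    # Negate the loss so that minimizing it *ascends* the classification loss.
    loss = -torch.nn.functional.cross_entropy(model(x + delta), y)
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-0.3, 0.3)      # project back into an L-inf epsilon ball

x_adv = x + delta.detach()           # adversarial example
```

Because the attack is just an optimizer and a module, any tooling that works with those APIs (schedulers, distributed wrappers, Lightning loops) composes with it unchanged.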




Raw data

            {
    "_id": null,
    "home_page": "https://github.com/mit-ll-responsible-ai/responsible-ai-toolbox",
    "name": "rai-toolbox",
    "maintainer": "",
    "docs_url": null,
    "requires_python": ">=3.7",
    "maintainer_email": "",
    "keywords": "machine learning robustness pytorch responsible AI",
    "author": "Ryan Soklaski, Justin Goodwin, Olivia Brown, Michael Yee",
    "author_email": "ryan.soklaski@ll.mit.edu",
    "download_url": "https://files.pythonhosted.org/packages/50/5c/984495d9ea9889f67f1afba2d91d32bcd13621e8bee117aa84c8464efd8c/rai_toolbox-0.2.1.tar.gz",
    "platform": null,
    "description": "\nThe rAI-toolbox is designed to enable methods for evaluating and enhancing both the\nrobustness and the explainability of AI models in a way that is scalable and that\ncomposes naturally with other popular ML frameworks.\n\nA key design principle of the rAI-toolbox is that it adheres strictly to the APIs\nspecified by the PyTorch machine learning framework. For example, the rAI-toolbox frames\nadversarial training workflows solely in terms of the `torch.nn.Optimizer` and\n`torch.nn.Module` APIs. This makes it trivial to leverage other libraries and\nframeworks from the PyTorch ecosystem to bolster your responsible AI R&D. For\ninstance, one can naturally leverage the rAI-toolbox together with\nPyTorch Lightning to perform distributed adversarial training.\n\n\n",
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "PyTorch-centric library for evaluating and enhancing the robustness of AI technologies",
    "version": "0.2.1",
    "project_urls": {
        "Download": "https://github.com/mit-ll-responsible-ai/responsible-ai-toolbox/tarball/v0.2.1",
        "Homepage": "https://github.com/mit-ll-responsible-ai/responsible-ai-toolbox"
    },
    "split_keywords": [
        "machine",
        "learning",
        "robustness",
        "pytorch",
        "responsible",
        "ai"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "7eacf8b5e3c314acd1ea5a4d5f14b529c0cf2c3e37356800ad35c892174aeebb",
                "md5": "731bea49b9b2fdcff891ebcfd52e661e",
                "sha256": "489c3d2724e0012febca29eacc5251eb8a0cecac37f3f266b0f79c7e41177285"
            },
            "downloads": -1,
            "filename": "rai_toolbox-0.2.1-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "731bea49b9b2fdcff891ebcfd52e661e",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.7",
            "size": 92789,
            "upload_time": "2022-06-16T17:59:19",
            "upload_time_iso_8601": "2022-06-16T17:59:19.968537Z",
            "url": "https://files.pythonhosted.org/packages/7e/ac/f8b5e3c314acd1ea5a4d5f14b529c0cf2c3e37356800ad35c892174aeebb/rai_toolbox-0.2.1-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "505c984495d9ea9889f67f1afba2d91d32bcd13621e8bee117aa84c8464efd8c",
                "md5": "b5a0006be5828e44d0f8c738a9bc1881",
                "sha256": "f265d8ee88f4874da884d09cc9435c48131723d9e341a365101310ae04988a60"
            },
            "downloads": -1,
            "filename": "rai_toolbox-0.2.1.tar.gz",
            "has_sig": false,
            "md5_digest": "b5a0006be5828e44d0f8c738a9bc1881",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.7",
            "size": 88136,
            "upload_time": "2022-06-16T17:59:21",
            "upload_time_iso_8601": "2022-06-16T17:59:21.218476Z",
            "url": "https://files.pythonhosted.org/packages/50/5c/984495d9ea9889f67f1afba2d91d32bcd13621e8bee117aa84c8464efd8c/rai_toolbox-0.2.1.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2022-06-16 17:59:21",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "mit-ll-responsible-ai",
    "github_project": "responsible-ai-toolbox",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "rai-toolbox"
}
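
The record above is plain JSON, so it can be consumed programmatically. A small sketch (the trimmed `record` below copies fields verbatim from the raw data; the parsing code is illustrative, not part of the package):

```python
import json

# A trimmed copy of the raw record shown above.
record = json.loads("""
{
  "name": "rai-toolbox",
  "version": "0.2.1",
  "urls": [
    {"filename": "rai_toolbox-0.2.1-py3-none-any.whl", "packagetype": "bdist_wheel"},
    {"filename": "rai_toolbox-0.2.1.tar.gz", "packagetype": "sdist"}
  ]
}
""")

# Index the release files by package type (wheel vs. source distribution).
files = {u["packagetype"]: u["filename"] for u in record["urls"]}
print(files["bdist_wheel"])  # rai_toolbox-0.2.1-py3-none-any.whl
print(files["sdist"])        # rai_toolbox-0.2.1.tar.gz
```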
        