# Python Risk Identification Tool for generative AI (PyRIT)
The Python Risk Identification Tool for generative AI (PyRIT) is an open
access automation framework to empower security professionals and ML
engineers to red team foundation models and their applications.
## Introduction
PyRIT is a library developed by the AI Red Team for researchers and engineers
to help them assess the robustness of their LLM endpoints against different
harm categories such as fabrication/ungrounded content (e.g., hallucination),
misuse (e.g., bias), and prohibited content (e.g., harassment).
PyRIT automates AI Red Teaming tasks to allow operators to focus on more
complicated and time-consuming tasks and can also identify security harms such
as misuse (e.g., malware generation, jailbreaking), and privacy harms
(e.g., identity theft).
The goal is to allow researchers to have a baseline of how well their model
and entire inference pipeline is doing against different harm categories and
to be able to compare that baseline to future iterations of their model.
This allows them to have empirical data on how well their model is doing
today, and detect any degradation of performance based on future improvements.
Additionally, this tool allows researchers to iterate and improve their
mitigations against different harms.
For example, at Microsoft we are using this tool to iterate on different
versions of a product (and its metaprompt) so that we can more effectively
protect against prompt injection attacks.
![PyRIT architecture](https://raw.githubusercontent.com/Azure/PyRIT/releases/v0.4.0/assets/pyrit_architecture.png)
## Where can I learn more?
Microsoft Learn has a
[dedicated page on AI Red Teaming](https://learn.microsoft.com/en-us/security/ai-red-team).
Check out our [docs](https://raw.githubusercontent.com/Azure/PyRIT/releases/v0.4.0/doc/README.md) for more information
on how to [install PyRIT](https://raw.githubusercontent.com/Azure/PyRIT/releases/v0.4.0/doc/setup/install_pyrit.md),
our [How to Guide](https://raw.githubusercontent.com/Azure/PyRIT/releases/v0.4.0/doc/how_to_guide.ipynb),
and more, as well as our [demos](https://github.com/Azure/PyRIT/tree/releases/v0.4.0/doc/code).
## Trademarks
This project may contain trademarks or logos for projects, products, or services.
Authorized use of Microsoft trademarks or logos is subject to and must follow
[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
Use of Microsoft trademarks or logos in modified versions of this project must
not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos are subject to those third-party's
policies.
Raw data
{
"_id": null,
"home_page": null,
"name": "pyrit",
"maintainer": null,
"docs_url": null,
"requires_python": "<3.13,>=3.10",
"maintainer_email": null,
"keywords": "llm, ai-safety, ai-security, ai-red-team, ai-red-teaming, ai-robustness, ai-robustness-testing, ai-risk-assessment",
"author": "dlmgary, amandajean119, microsiska, rdheekonda, rlundeen2, romanlutz, jbolor21, nina-msft",
"author_email": "Microsoft AI Red Team <airedteam@microsoft.com>",
"download_url": "https://files.pythonhosted.org/packages/e2/0f/79e007af5d6ff4215b4e8a56e2a41ba75eeb7f74e2fb7db9dba26da4b6db/pyrit-0.4.0.tar.gz",
"platform": null,
"description": "# Python Risk Identification Tool for generative AI (PyRIT)\r\n\r\nThe Python Risk Identification Tool for generative AI (PyRIT) is an open\r\naccess automation framework to empower security professionals and ML\r\nengineers to red team foundation models and their applications.\r\n\r\n## Introduction\r\n\r\nPyRIT is a library developed by the AI Red Team for researchers and engineers\r\nto help them assess the robustness of their LLM endpoints against different\r\nharm categories such as fabrication/ungrounded content (e.g., hallucination),\r\nmisuse (e.g., bias), and prohibited content (e.g., harassment).\r\n\r\nPyRIT automates AI Red Teaming tasks to allow operators to focus on more\r\ncomplicated and time-consuming tasks and can also identify security harms such\r\nas misuse (e.g., malware generation, jailbreaking), and privacy harms\r\n(e.g., identity theft).\u200b\r\n\r\nThe goal is to allow researchers to have a baseline of how well their model\r\nand entire inference pipeline is doing against different harm categories and\r\nto be able to compare that baseline to future iterations of their model.\r\nThis allows them to have empirical data on how well their model is doing\r\ntoday, and detect any degradation of performance based on future improvements.\r\n\r\nAdditionally, this tool allows researchers to iterate and improve their\r\nmitigations against different harms.\r\nFor example, at Microsoft we are using this tool to iterate on different\r\nversions of a product (and its metaprompt) so that we can more effectively\r\nprotect against prompt injection attacks.\r\n\r\n![PyRIT architecture](https://raw.githubusercontent.com/Azure/PyRIT/releases/v0.4.0/assets/pyrit_architecture.png)\r\n\r\n## Where can I learn more?\r\n\r\nMicrosoft Learn has a\r\n[dedicated page on AI Red Teaming](https://learn.microsoft.com/en-us/security/ai-red-team).\r\n\r\nCheck out our [docs](https://raw.githubusercontent.com/Azure/PyRIT/releases/v0.4.0/doc/README.md) for more information\r\non how to [install PyRIT](https://raw.githubusercontent.com/Azure/PyRIT/releases/v0.4.0/doc/setup/install_pyrit.md),\r\nour [How to Guide](https://raw.githubusercontent.com/Azure/PyRIT/releases/v0.4.0/doc/how_to_guide.ipynb),\r\nand more, as well as our [demos](https://github.com/Azure/PyRIT/tree/releases/v0.4.0/doc/code).\r\n\r\n## Trademarks\r\n\r\nThis project may contain trademarks or logos for projects, products, or services.\r\nAuthorized use of Microsoft trademarks or logos is subject to and must follow\r\n[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).\r\nUse of Microsoft trademarks or logos in modified versions of this project must\r\nnot cause confusion or imply Microsoft sponsorship.\r\nAny use of third-party trademarks or logos are subject to those third-party's\r\npolicies.\r\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "The Python Risk Identification Tool for LLMs (PyRIT) is a library used to assess the robustness of LLMs",
"version": "0.4.0",
"project_urls": null,
"split_keywords": [
"llm",
" ai-safety",
" ai-security",
" ai-red-team",
" ai-red-teaming",
" ai-robustness",
" ai-robustness-testing",
" ai-risk-assessment"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "1f0811e1037ab26a6d142dc39e2b5fd9d8d79c9808dbf81a9fcf465d83f30c38",
"md5": "0771ad3be6087a853f8d0fe99faf858f",
"sha256": "4faa5be12c8f5c599477324d10a48607b53e4f90e309f0a9c1b20ba90f1b1d78"
},
"downloads": -1,
"filename": "pyrit-0.4.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "0771ad3be6087a853f8d0fe99faf858f",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<3.13,>=3.10",
"size": 402724,
"upload_time": "2024-08-23T00:41:51",
"upload_time_iso_8601": "2024-08-23T00:41:51.682329Z",
"url": "https://files.pythonhosted.org/packages/1f/08/11e1037ab26a6d142dc39e2b5fd9d8d79c9808dbf81a9fcf465d83f30c38/pyrit-0.4.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "e20f79e007af5d6ff4215b4e8a56e2a41ba75eeb7f74e2fb7db9dba26da4b6db",
"md5": "07843aedf9b74948c5a64e9a8b453148",
"sha256": "0482c547c41125ed06799805ca658a6c75a5dd8d1f1f765ebad09825b9390793"
},
"downloads": -1,
"filename": "pyrit-0.4.0.tar.gz",
"has_sig": false,
"md5_digest": "07843aedf9b74948c5a64e9a8b453148",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "<3.13,>=3.10",
"size": 254304,
"upload_time": "2024-08-23T00:41:53",
"upload_time_iso_8601": "2024-08-23T00:41:53.571481Z",
"url": "https://files.pythonhosted.org/packages/e2/0f/79e007af5d6ff4215b4e8a56e2a41ba75eeb7f74e2fb7db9dba26da4b6db/pyrit-0.4.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-08-23 00:41:53",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "pyrit"
}