llm-guard


Namellm-guard JSON
Version 0.3.12 PyPI version JSON
download
home_pageNone
SummaryLLM-Guard is a comprehensive tool designed to fortify the security of Large Language Models (LLMs). By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, LLM-Guard ensures that your interactions with LLMs remain safe and secure.
upload_time2024-04-23 09:16:39
maintainerNone
docs_urlNone
authorNone
requires_python<3.12,>=3.9
licenseThe MIT License (MIT) Copyright (c) Protect AI. All rights reserved. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
keywords llm language model security adversarial attacks prompt injection prompt leakage pii detection self-hardening firewall
VCS
bugtrack_url
requirements No requirements were recorded.
Travis-CI No Travis.
coveralls test coverage No coveralls.
            # LLM Guard - The Security Toolkit for LLM Interactions

LLM Guard by [Protect AI](https://protectai.com/llm-guard) is a comprehensive tool designed to fortify the security of Large Language Models (LLMs).

[**Documentation**](https://llm-guard.com/) | [**Playground**](https://huggingface.co/spaces/ProtectAI/llm-guard-playground) | [**Changelog**](https://llm-guard.com/changelog/)

[![GitHub
stars](https://img.shields.io/github/stars/protectai/llm-guard.svg?style=social&label=Star&maxAge=2592000)](https://GitHub.com/protectai/llm-guard/stargazers/)
[![MIT license](https://img.shields.io/badge/license-MIT-brightgreen.svg)](http://opensource.org/licenses/MIT)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![PyPI - Python Version](https://img.shields.io/pypi/v/llm-guard)](https://pypi.org/project/llm-guard)
[![Downloads](https://static.pepy.tech/badge/llm-guard)](https://pepy.tech/project/llm-guard)
[![Downloads](https://static.pepy.tech/badge/llm-guard/month)](https://pepy.tech/project/llm-guard)

<a href="https://join.slack.com/t/laiyerai/shared_invite/zt-28jv3ci39-sVxXrLs3rQdaN3mIl9IT~w"><img src="https://github.com/protectai/llm-guard/blob/main/docs/assets/join-our-slack-community.png?raw=true" width="200"></a>

## What is LLM Guard?

![LLM-Guard](https://github.com/protectai/llm-guard/blob/main/docs/assets/flow.png?raw=true)

By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt
injection attacks, LLM-Guard ensures that your interactions with LLMs remain safe and secure.

## Installation

Begin your journey with LLM Guard by downloading the package:

```sh
pip install llm-guard
```

## Getting Started

**Important Notes**:

- LLM Guard is designed for easy integration and deployment in production environments. While it's ready to use
  out-of-the-box, please be informed that we're constantly improving and updating the repository.
- Base functionality requires a limited number of libraries. As you explore more advanced features, necessary libraries
  will be automatically installed.
- Ensure you're using Python version 3.9 or higher. Confirm with: `python --version`.
- Library installation issues? Consider upgrading pip: `python -m pip install --upgrade pip`.

**Examples**:

- Get started with [ChatGPT and LLM Guard](./examples/openai_api.py).
- Deploy LLM Guard as [API](https://llm-guard.com/api/overview/)

## Supported scanners

### Prompt scanners

- [Anonymize](https://llm-guard.com/input_scanners/anonymize/)
- [BanCode](./docs/input_scanners/ban_code.md)
- [BanCompetitors](https://llm-guard.com/input_scanners/ban_competitors/)
- [BanSubstrings](https://llm-guard.com/input_scanners/ban_substrings/)
- [BanTopics](https://llm-guard.com/input_scanners/ban_topics/)
- [Code](https://llm-guard.com/input_scanners/code/)
- [Gibberish](https://llm-guard.com/input_scanners/gibberish/)
- [InvisibleText](https://llm-guard.com/input_scanners/invisible_text/)
- [Language](https://llm-guard.com/input_scanners/language/)
- [PromptInjection](https://llm-guard.com/input_scanners/prompt_injection/)
- [Regex](https://llm-guard.com/input_scanners/regex/)
- [Secrets](https://llm-guard.com/input_scanners/secrets/)
- [Sentiment](https://llm-guard.com/input_scanners/sentiment/)
- [TokenLimit](https://llm-guard.com/input_scanners/token_limit/)
- [Toxicity](https://llm-guard.com/input_scanners/toxicity/)

### Output scanners

- [BanCode](./docs/output_scanners/ban_code.md)
- [BanCompetitors](https://llm-guard.com/output_scanners/ban_competitors/)
- [BanSubstrings](https://llm-guard.com/output_scanners/ban_substrings/)
- [BanTopics](https://llm-guard.com/output_scanners/ban_topics/)
- [Bias](https://llm-guard.com/output_scanners/bias/)
- [Code](https://llm-guard.com/output_scanners/code/)
- [Deanonymize](https://llm-guard.com/output_scanners/deanonymize/)
- [JSON](https://llm-guard.com/output_scanners/json/)
- [Language](https://llm-guard.com/output_scanners/language/)
- [LanguageSame](https://llm-guard.com/output_scanners/language_same/)
- [MaliciousURLs](https://llm-guard.com/output_scanners/malicious_urls/)
- [NoRefusal](https://llm-guard.com/output_scanners/no_refusal/)
- [ReadingTime](https://llm-guard.com/output_scanners/reading_time/)
- [FactualConsistency](https://llm-guard.com/output_scanners/factual_consistency/)
- [Gibberish](https://llm-guard.com/output_scanners/gibberish/)
- [Regex](https://llm-guard.com/output_scanners/regex/)
- [Relevance](https://llm-guard.com/output_scanners/relevance/)
- [Sensitive](https://llm-guard.com/output_scanners/sensitive/)
- [Sentiment](https://llm-guard.com/output_scanners/sentiment/)
- [Toxicity](https://llm-guard.com/output_scanners/toxicity/)
- [URLReachability](https://llm-guard.com/output_scanners/url_reachability/)

## Community, Contributing, Docs & Support

LLM Guard is an open source solution.
We are committed to a transparent development process and highly appreciate any contributions.
Whether you are helping us fix bugs, propose new features, improve our documentation or spread the word,
we would love to have you as part of our community.

- Give us a ⭐️ github star ⭐️ on the top of this page to support what we're doing,
  it means a lot for open source projects!
- Read our
  [docs](https://llm-guard.com/)
  for more info about how to use and customize LLM Guard, and for step-by-step tutorials.
- Post a [Github
  Issue](https://github.com/protectai/llm-guard/issues) to submit a bug report, feature request, or suggest an improvement.
- To contribute to the package, check out our [contribution guidelines](CONTRIBUTING.md), and open a PR.

Join our Slack to give us feedback, connect with the maintainers and fellow users, ask questions,
get help for package usage or contributions, or engage in discussions about LLM security!

<a href="https://join.slack.com/t/laiyerai/shared_invite/zt-28jv3ci39-sVxXrLs3rQdaN3mIl9IT~w"><img src="https://github.com/protectai/llm-guard/blob/main/docs/assets/join-our-slack-community.png?raw=true" width="200"></a>

### Production Support

We're eager to provide personalized assistance when deploying your LLM Guard to a production environment.

- [Send Email ✉️](mailto:community@protectai.com)

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "llm-guard",
    "maintainer": null,
    "docs_url": null,
    "requires_python": "<3.12,>=3.9",
    "maintainer_email": null,
    "keywords": "llm, language model, security, adversarial attacks, prompt injection, prompt leakage, PII detection, self-hardening, firewall",
    "author": null,
    "author_email": "Protect AI <community@protectai.com>",
    "download_url": "https://files.pythonhosted.org/packages/02/3b/264315c7e3025e5d2335a7fabd0df4a97de3020b8237e7afc0b4050f1bf1/llm_guard-0.3.12.tar.gz",
    "platform": null,
    "description": "# LLM Guard - The Security Toolkit for LLM Interactions\n\nLLM Guard by [Protect AI](https://protectai.com/llm-guard) is a comprehensive tool designed to fortify the security of Large Language Models (LLMs).\n\n[**Documentation**](https://llm-guard.com/) | [**Playground**](https://huggingface.co/spaces/ProtectAI/llm-guard-playground) | [**Changelog**](https://llm-guard.com/changelog/)\n\n[![GitHub\nstars](https://img.shields.io/github/stars/protectai/llm-guard.svg?style=social&label=Star&maxAge=2592000)](https://GitHub.com/protectai/llm-guard/stargazers/)\n[![MIT license](https://img.shields.io/badge/license-MIT-brightgreen.svg)](http://opensource.org/licenses/MIT)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n[![PyPI - Python Version](https://img.shields.io/pypi/v/llm-guard)](https://pypi.org/project/llm-guard)\n[![Downloads](https://static.pepy.tech/badge/llm-guard)](https://pepy.tech/project/llm-guard)\n[![Downloads](https://static.pepy.tech/badge/llm-guard/month)](https://pepy.tech/project/llm-guard)\n\n<a href=\"https://join.slack.com/t/laiyerai/shared_invite/zt-28jv3ci39-sVxXrLs3rQdaN3mIl9IT~w\"><img src=\"https://github.com/protectai/llm-guard/blob/main/docs/assets/join-our-slack-community.png?raw=true\" width=\"200\"></a>\n\n## What is LLM Guard?\n\n![LLM-Guard](https://github.com/protectai/llm-guard/blob/main/docs/assets/flow.png?raw=true)\n\nBy offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt\ninjection attacks, LLM-Guard ensures that your interactions with LLMs remain safe and secure.\n\n## Installation\n\nBegin your journey with LLM Guard by downloading the package:\n\n```sh\npip install llm-guard\n```\n\n## Getting Started\n\n**Important Notes**:\n\n- LLM Guard is designed for easy integration and deployment in production environments. While it's ready to use\n  out-of-the-box, please be informed that we're constantly improving and updating the repository.\n- Base functionality requires a limited number of libraries. As you explore more advanced features, necessary libraries\n  will be automatically installed.\n- Ensure you're using Python version 3.9 or higher. Confirm with: `python --version`.\n- Library installation issues? Consider upgrading pip: `python -m pip install --upgrade pip`.\n\n**Examples**:\n\n- Get started with [ChatGPT and LLM Guard](./examples/openai_api.py).\n- Deploy LLM Guard as [API](https://llm-guard.com/api/overview/)\n\n## Supported scanners\n\n### Prompt scanners\n\n- [Anonymize](https://llm-guard.com/input_scanners/anonymize/)\n- [BanCode](./docs/input_scanners/ban_code.md)\n- [BanCompetitors](https://llm-guard.com/input_scanners/ban_competitors/)\n- [BanSubstrings](https://llm-guard.com/input_scanners/ban_substrings/)\n- [BanTopics](https://llm-guard.com/input_scanners/ban_topics/)\n- [Code](https://llm-guard.com/input_scanners/code/)\n- [Gibberish](https://llm-guard.com/input_scanners/gibberish/)\n- [InvisibleText](https://llm-guard.com/input_scanners/invisible_text/)\n- [Language](https://llm-guard.com/input_scanners/language/)\n- [PromptInjection](https://llm-guard.com/input_scanners/prompt_injection/)\n- [Regex](https://llm-guard.com/input_scanners/regex/)\n- [Secrets](https://llm-guard.com/input_scanners/secrets/)\n- [Sentiment](https://llm-guard.com/input_scanners/sentiment/)\n- [TokenLimit](https://llm-guard.com/input_scanners/token_limit/)\n- [Toxicity](https://llm-guard.com/input_scanners/toxicity/)\n\n### Output scanners\n\n- [BanCode](./docs/output_scanners/ban_code.md)\n- [BanCompetitors](https://llm-guard.com/output_scanners/ban_competitors/)\n- [BanSubstrings](https://llm-guard.com/output_scanners/ban_substrings/)\n- [BanTopics](https://llm-guard.com/output_scanners/ban_topics/)\n- [Bias](https://llm-guard.com/output_scanners/bias/)\n- [Code](https://llm-guard.com/output_scanners/code/)\n- [Deanonymize](https://llm-guard.com/output_scanners/deanonymize/)\n- [JSON](https://llm-guard.com/output_scanners/json/)\n- [Language](https://llm-guard.com/output_scanners/language/)\n- [LanguageSame](https://llm-guard.com/output_scanners/language_same/)\n- [MaliciousURLs](https://llm-guard.com/output_scanners/malicious_urls/)\n- [NoRefusal](https://llm-guard.com/output_scanners/no_refusal/)\n- [ReadingTime](https://llm-guard.com/output_scanners/reading_time/)\n- [FactualConsistency](https://llm-guard.com/output_scanners/factual_consistency/)\n- [Gibberish](https://llm-guard.com/output_scanners/gibberish/)\n- [Regex](https://llm-guard.com/output_scanners/regex/)\n- [Relevance](https://llm-guard.com/output_scanners/relevance/)\n- [Sensitive](https://llm-guard.com/output_scanners/sensitive/)\n- [Sentiment](https://llm-guard.com/output_scanners/sentiment/)\n- [Toxicity](https://llm-guard.com/output_scanners/toxicity/)\n- [URLReachability](https://llm-guard.com/output_scanners/url_reachability/)\n\n## Community, Contributing, Docs & Support\n\nLLM Guard is an open source solution.\nWe are committed to a transparent development process and highly appreciate any contributions.\nWhether you are helping us fix bugs, propose new features, improve our documentation or spread the word,\nwe would love to have you as part of our community.\n\n- Give us a \u2b50\ufe0f github star \u2b50\ufe0f on the top of this page to support what we're doing,\n  it means a lot for open source projects!\n- Read our\n  [docs](https://llm-guard.com/)\n  for more info about how to use and customize LLM Guard, and for step-by-step tutorials.\n- Post a [Github\n  Issue](https://github.com/protectai/llm-guard/issues) to submit a bug report, feature request, or suggest an improvement.\n- To contribute to the package, check out our [contribution guidelines](CONTRIBUTING.md), and open a PR.\n\nJoin our Slack to give us feedback, connect with the maintainers and fellow users, ask questions,\nget help for package usage or contributions, or engage in discussions about LLM security!\n\n<a href=\"https://join.slack.com/t/laiyerai/shared_invite/zt-28jv3ci39-sVxXrLs3rQdaN3mIl9IT~w\"><img src=\"https://github.com/protectai/llm-guard/blob/main/docs/assets/join-our-slack-community.png?raw=true\" width=\"200\"></a>\n\n### Production Support\n\nWe're eager to provide personalized assistance when deploying your LLM Guard to a production environment.\n\n- [Send Email \u2709\ufe0f](mailto:community@protectai.com)\n",
    "bugtrack_url": null,
    "license": "The MIT License (MIT)  Copyright (c) Protect AI. All rights reserved.  Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:  The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.  THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ",
    "summary": "LLM-Guard is a comprehensive tool designed to fortify the security of Large Language Models (LLMs). By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, LLM-Guard ensures that your interactions with LLMs remain safe and secure.",
    "version": "0.3.12",
    "project_urls": {
        "changelog": "https://llm-guard.com/changelog/",
        "documentation": "https://llm-guard.com/",
        "homepage": "https://github.com/protectai/llm-guard",
        "issues": "https://github.com/protectai/llm-guard/issues",
        "repository": "https://github.com/protectai/llm-guard"
    },
    "split_keywords": [
        "llm",
        " language model",
        " security",
        " adversarial attacks",
        " prompt injection",
        " prompt leakage",
        " pii detection",
        " self-hardening",
        " firewall"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "978b9a5a248bc44a8ab9718547bdfb8730601dac56c1c55868dbe8bb8e4bfa6d",
                "md5": "bff3b182240f3f1e2d0929d5bdaf14de",
                "sha256": "ce059cbc07a15dd94cc4c71d88235c056d77b5e8836b41e094be79819db48bae"
            },
            "downloads": -1,
            "filename": "llm_guard-0.3.12-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "bff3b182240f3f1e2d0929d5bdaf14de",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": "<3.12,>=3.9",
            "size": 133249,
            "upload_time": "2024-04-23T09:16:37",
            "upload_time_iso_8601": "2024-04-23T09:16:37.913431Z",
            "url": "https://files.pythonhosted.org/packages/97/8b/9a5a248bc44a8ab9718547bdfb8730601dac56c1c55868dbe8bb8e4bfa6d/llm_guard-0.3.12-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "023b264315c7e3025e5d2335a7fabd0df4a97de3020b8237e7afc0b4050f1bf1",
                "md5": "e26bbc2cb09b0f196f1f8b1bdd20ca93",
                "sha256": "a7d20b01295b1019d4bbb0f418e00b91a9c34829d6a25d29ec4abcca0b7928ba"
            },
            "downloads": -1,
            "filename": "llm_guard-0.3.12.tar.gz",
            "has_sig": false,
            "md5_digest": "e26bbc2cb09b0f196f1f8b1bdd20ca93",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": "<3.12,>=3.9",
            "size": 69160,
            "upload_time": "2024-04-23T09:16:39",
            "upload_time_iso_8601": "2024-04-23T09:16:39.621819Z",
            "url": "https://files.pythonhosted.org/packages/02/3b/264315c7e3025e5d2335a7fabd0df4a97de3020b8237e7afc0b4050f1bf1/llm_guard-0.3.12.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-04-23 09:16:39",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "protectai",
    "github_project": "llm-guard",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "llm-guard"
}
        
Elapsed time: 0.26703s