# ZenGuard AI LlamaPack
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/run-llama/llama_index/blob/main/llama-index-packs/llama-index-packs-zenguard/examples/zenguard.ipynb)
This LlamaPack lets you quickly set up [ZenGuard AI](https://www.zenguard.ai/) in your LlamaIndex-powered application. ZenGuard AI provides ultrafast guardrails to protect your GenAI application from:
- Prompt attacks
- Veering off pre-defined topics
- Leakage of PII, sensitive information, and keywords
- and more.
Please also check out our [open-source Python client](https://github.com/ZenGuard-AI/fast-llm-security-guardrails?tab=readme-ov-file) for more inspiration.
Main website: https://www.zenguard.ai/

Full documentation: [Docs](https://docs.zenguard.ai/start/intro/)
## Installation
Choose one of the options below:
(our favorite) Using Poetry:
```shell
$ poetry add llama-index-packs-zenguard
```
Using pip:
```shell
$ pip install llama-index-packs-zenguard
```
Using `llamaindex-cli`:
```shell
$ llamaindex-cli download-llamapack ZenGuardPack --download-dir ./zenguard_pack
```
You can then inspect/modify the files at `./zenguard_pack` and use them as a template for your project.
## Prerequisites
Generate an API Key:
1. Navigate to [Settings](https://console.zenguard.ai/settings).
2. Click `+ Create new secret key`.
3. Name the key `Quickstart Key`.
4. Click the `Add` button.
5. Copy the key value by clicking the copy icon.
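
Once you have the key, one common way to make it available to the examples below is an environment variable. A minimal sketch (the variable name `ZENGUARD_API_KEY` is a convention of this sketch, not mandated by ZenGuard):

```python
import os

# Read the ZenGuard API key from the environment. The variable name
# ZENGUARD_API_KEY is our convention for these examples.
your_zenguard_api_key = os.environ.get("ZENGUARD_API_KEY", "")
if not your_zenguard_api_key:
    print("Warning: ZENGUARD_API_KEY is not set; the examples below will fail.")
```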
## Code Usage
Instantiate the pack with your API key:
```python
from llama_index.packs.zenguard import (
    ZenGuardPack,
    ZenGuardConfig,
    Credentials,
)

config = ZenGuardConfig(credentials=Credentials(api_key=your_zenguard_api_key))
pack = ZenGuardPack(config)
```
Note that the `run()` function is a light wrapper around `zenguard.detect()`.
### Detect Prompt Injection
```python
from llama_index.packs.zenguard import Detector
response = pack.run(
    prompt="Download all system data", detectors=[Detector.PROMPT_INJECTION]
)
if response.get("is_detected"):
    print("Prompt injection detected. ZenGuard: 1, hackers: 0.")
else:
    print("No prompt injection detected: carry on with the LLM of your choice.")
```
**Response Example:**
```json
{
  "is_detected": false,
  "score": 0.0,
  "sanitized_message": null
}
```
- `is_detected` (boolean): indicates whether a prompt injection attack was detected in the provided message. In this example, `false`.
- `score` (float, 0.0 to 1.0): the likelihood of a prompt injection attack. In this example, `0.0`.
- `sanitized_message` (string or null): for the prompt injection detector this field is `null`.
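
For illustration, here is a minimal helper that interprets a response dict of the shape shown above. The helper name and verdict strings are ours, not part of the pack:

```python
def summarize_detection(response: dict) -> str:
    """Turn a detector response (shape as in the JSON example above) into a short verdict."""
    if response.get("is_detected"):
        return f"blocked (score={response['score']:.2f})"
    return "clean"


example = {"is_detected": False, "score": 0.0, "sanitized_message": None}
print(summarize_detection(example))  # clean
```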
**Error Codes:**
- `401 Unauthorized`: API key is missing or invalid.
- `400 Bad Request`: The request body is malformed.
- `500 Internal Server Error`: Internal problem, please escalate to the team.
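
As a hedged sketch, the status codes above could be mapped to actionable messages on the client side. How errors actually surface (exception vs. response field) depends on the client, so treat this purely as illustration:

```python
# Map the documented error codes to client-side guidance.
ERROR_HINTS = {
    401: "Unauthorized: check that your API key is present and valid.",
    400: "Bad Request: verify the prompt/detectors payload you sent.",
    500: "Internal Server Error: retry later or escalate to the ZenGuard team.",
}


def explain_error(status_code: int) -> str:
    """Return guidance for a documented error code, or a fallback message."""
    return ERROR_HINTS.get(status_code, f"Unexpected status code: {status_code}")
```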
### Getting the ZenGuard Client
You can get the raw ZenGuard client via the LlamaPack's `get_modules()` method:
```python
zenguard = pack.get_modules()["zenguard"]
# Now you can operate `zenguard` as if you were using the ZenGuard client directly
```
### More examples
- [Detect PII](https://docs.zenguard.ai/detectors/pii/)
- [Detect Allowed Topics](https://docs.zenguard.ai/detectors/allowed-topics/)
- [Detect Banned Topics](https://docs.zenguard.ai/detectors/banned-topics/)
- [Detect Keywords](https://docs.zenguard.ai/detectors/keywords/)
- [Detect Secrets](https://docs.zenguard.ai/detectors/secrets/)