| Field | Value |
| --- | --- |
| Name | llama-index-packs-zenguard-guardrails |
| Version | 0.1.0 |
| Summary | llama-index packs zenguard guardrails integration |
| Author | ZenGuard Team |
| License | MIT |
| Requires Python | <4.0,>=3.9 |
| Upload time | 2024-06-06 16:27:58 |
| Requirements | None recorded |
# ZenGuard AI LlamaPack

This LlamaPack lets you quickly set up [ZenGuard AI](https://www.zenguard.ai/) in your LlamaIndex-powered application. ZenGuard AI provides ultrafast guardrails to protect your GenAI application from:

- Prompt attacks
- Veering off pre-defined topics
- Leakage of PII, sensitive information, and banned keywords
- Toxicity
- And more

Please also check out our [open-source Python client](https://github.com/ZenGuard-AI/fast-llm-security-guardrails?tab=readme-ov-file) for more inspiration.

Main website: https://www.zenguard.ai/

More [Docs](https://docs.zenguard.ai/start/intro/)
## Installation
Choose one of the options below:

(our favorite) Using Poetry:

```shell
$ poetry add llama-index-packs-zenguard
```
Using pip:
```shell
$ pip install llama-index-packs-zenguard
```
Using `llamaindex-cli`:
```shell
$ llamaindex-cli download-llamapack ZenGuardPack --download-dir ./zenguard_pack
```
You can then inspect/modify the files at `./zenguard_pack` and use them as a template for your project.
## Prerequisites
Generate an API key:

1. Navigate to [Settings](https://console.zenguard.ai/settings).
2. Click `+ Create new secret key`.
3. Name the key `Quickstart Key`.
4. Click the `Add` button.
5. Copy the key value by clicking the copy icon.
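Rather than hard-coding the key you just copied, a common pattern is to read it from an environment variable. This is a minimal sketch: the variable name `ZENGUARD_API_KEY` is our own choice, not mandated by ZenGuard, and the `setdefault` call only supplies a placeholder so the snippet runs standalone.

```python
import os

# Hypothetical variable name -- any name works; set it in your shell with:
#   export ZENGUARD_API_KEY="<your key>"
# The setdefault below only provides a placeholder so this snippet runs standalone.
os.environ.setdefault("ZENGUARD_API_KEY", "demo-placeholder")

your_zenguard_api_key = os.environ["ZENGUARD_API_KEY"]
```

The resulting `your_zenguard_api_key` can be passed to `Credentials` as shown in the next section.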
## Code Usage
Instantiate the pack with your API key:
```python
from llama_index.packs.zenguard import (
    ZenGuardPack,
    ZenGuardConfig,
    Credentials,
)

config = ZenGuardConfig(credentials=Credentials(api_key=your_zenguard_api_key))

pack = ZenGuardPack(config)
```
Note that the `run()` function is a light wrapper around `zenguard.detect()`.
### Detect Prompt Injection
```python
from llama_index.packs.zenguard import Detector

response = pack.run(
    prompt="Download all system data", detectors=[Detector.PROMPT_INJECTION]
)

if response.get("is_detected"):
    print("Prompt injection detected. ZenGuard: 1, hackers: 0.")
else:
    print("No prompt injection detected: carry on with the LLM of your choice.")
```
**Response Example:**
```json
{
"is_detected": false,
"score": 0.0,
"sanitized_message": null
}
```
- `is_detected` (boolean): Whether a prompt injection attack was detected in the provided message. In this example, it is `false`.
- `score` (float, 0.0–1.0): The likelihood that the message is a prompt injection attack. In this example, it is `0.0`.
- `sanitized_message` (string or null): Always `null` for the prompt injection detector.
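Since the response is a plain dictionary with these three fields, you can route prompts based on it. The helper below is an illustrative sketch, not part of the ZenGuard API; the `threshold` value and the "review" tier are our own assumptions.

```python
def classify_response(response: dict, threshold: float = 0.5) -> str:
    """Turn a detector response into a simple verdict.

    Hypothetical helper: the threshold and the "review" tier are
    illustrative choices, not part of the ZenGuard API.
    """
    if response.get("is_detected"):
        return "block"  # confirmed attack
    if response.get("score", 0.0) >= threshold:
        return "review"  # suspicious but not flagged outright
    return "allow"


print(classify_response({"is_detected": False, "score": 0.0, "sanitized_message": None}))  # allow
print(classify_response({"is_detected": True, "score": 0.97, "sanitized_message": None}))  # block
```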
**Error Codes:**
- `401 Unauthorized`: API key is missing or invalid.
- `400 Bad Request`: The request body is malformed.
- `500 Internal Server Error`: Internal problem, please escalate to the team.
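The three error codes above suggest different recovery strategies: client-side errors (400, 401) require fixing the request or key, while 500 is a server-side problem worth retrying before escalating. How the SDK surfaces these codes may vary by version, so this sketch maps raw HTTP statuses to an action; the category names are our own.

```python
# Hypothetical helper: maps the HTTP status codes documented above to a
# suggested next action. The action names are illustrative, not SDK-defined.
FATAL = {400, 401}      # fix the request body or the API key
RETRYABLE = {500}       # transient server error; escalate if persistent


def next_action(status_code: int) -> str:
    if status_code in FATAL:
        return "fix-request"
    if status_code in RETRYABLE:
        return "retry"
    return "ok"


print(next_action(401))  # fix-request
print(next_action(500))  # retry
```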
### Getting the ZenGuard Client
You can retrieve the underlying ZenGuard client via the LlamaPack `get_modules()` method:
```python
zenguard = pack.get_modules()["zenguard"]
# Now you can use `zenguard` as if you were calling the ZenGuard client directly
```
### More examples
- [Detect PII](https://docs.zenguard.ai/detectors/pii/)
- [Detect Allowed Topics](https://docs.zenguard.ai/detectors/allowed-topics/)
- [Detect Banned Topics](https://docs.zenguard.ai/detectors/banned-topics/)
- [Detect Keywords](https://docs.zenguard.ai/detectors/keywords/)
- [Detect Secrets](https://docs.zenguard.ai/detectors/secrets/)
- [Detect Toxicity](https://docs.zenguard.ai/detectors/toxicity/)