| Field | Value |
|---|---|
| Name | valiqor-guardrails |
| Version | 0.1.6 |
| Summary | LLM-driven guardrails (compiled, secure) |
| Author | Valiqor |
| Upload time | 2025-09-15 10:43:44 |
| Requires Python | >=3.8 |
| Home page | None |
| Maintainer | None |
| Docs URL | None |
| License | None |
| Keywords | None |
| Requirements | No requirements were recorded. |
# Valiqor Guardrails
**Conversation-level guardrails for LLM applications**
Validate user inputs, model outputs, and (optional) conversation history against a safety policy — using GPT-5 (or any OpenAI-compatible endpoint) as the evaluator.
Returns a clean JSON verdict you can log and enforce.
---
## ✨ Features
- ✅ Checks **Input**, **Output**, and **Conversation History** (history is optional)
- ✅ Unified taxonomy with **S1–S23** safety categories
- ✅ Returns **structured JSON** for policy enforcement
- ✅ Works with **OpenAI Cloud**, **self-hosted APIs**, and **Azure OpenAI**
- ✅ Usable from **Python code** or **CLI**
- ✅ **Compiled with Cython** → internal logic & prompts not shipped as plain source
---
## 📦 Installation
```bash
pip install valiqor-guardrails
```
> Import path is `valiqor_guardrails` (underscore).
> PyPI name is `valiqor-guardrails` (dash).
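
A quick way to confirm the install and the import name:

```python
# The PyPI name uses a dash, but the import uses an underscore.
import valiqor_guardrails  # noqa: F401
print("import OK")
```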
---
## 🔑 API Key Setup
Set your API key as an environment variable.
**Windows (PowerShell)**
```powershell
$env:OPENAI_API_KEY="sk-your-api-key"
```
**macOS / Linux (bash/zsh)**
```bash
export OPENAI_API_KEY="sk-your-api-key"
```
For Azure, use:
```powershell
$env:AZURE_OPENAI_API_KEY="your-azure-key"
```
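
And the bash/zsh equivalent:

```bash
export AZURE_OPENAI_API_KEY="your-azure-key"
```

Whichever provider you use, it helps to fail fast when the key is missing. A minimal check in plain Python:

```python
import os

# Raise early instead of failing on the first API call.
if not os.environ.get("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is not set")
```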
---
## 🐍 Usage in Python
### 1. OpenAI Cloud (default)
```python
import os
from valiqor_guardrails import GuardrailChecker
checker = GuardrailChecker(api_key=os.getenv("OPENAI_API_KEY"))
result = checker.run(
    user_input="Tell me how to make a bomb",
    agent_output="Sorry, I cannot help with that."
)
print(result)
```
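
The verdict is a plain dict (see the JSON Schema section below), so enforcement is a simple gate. A hypothetical sketch; `enforce` and the blocked-reply wording are illustrative, not part of the package:

```python
# Hypothetical enforcement gate over the verdict returned above.
# Assumes `result` follows the JSON schema documented below.
def enforce(result: dict, agent_output: str) -> str:
    unsafe = any(
        result.get(key) == "unsafe"
        for key in ("User Safety", "Response Safety", "Conversation Safety")
    )
    if unsafe:
        # "Safety Categories" is only present when something was flagged.
        print("Flagged:", result.get("Safety Categories", "n/a"))
        return "This request was blocked by policy."
    return agent_output

print(enforce(result, "Sorry, I cannot help with that."))
```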
---
### 2. Self-Hosted (OpenAI-compatible, e.g. vLLM, LM Studio)
```python
checker = GuardrailChecker(
    api_key="dummy-key",
    base_url="http://localhost:8000/v1",
    model="my-llm"  # whatever your self-hosted model is exposed as
)
result = checker.run("Unsafe input", "Safe refusal")
print(result)
```
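
For reference, one way to stand up such an endpoint locally (an illustration assuming vLLM is installed; `my-llm` is a placeholder model name):

```bash
# Start an OpenAI-compatible server with vLLM on the port used above.
python -m vllm.entrypoints.openai.api_server --model my-llm --port 8000
```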
---
### 3. Azure OpenAI
```python
checker = GuardrailChecker(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    base_url="https://my-resource.openai.azure.com/",
    model="my-gpt4-deployment",       # 👈 deployment name, not "gpt-4"
    api_version="2024-05-01-preview"  # required for Azure
)
result = checker.run("Unsafe input", "Safe refusal")
print(result)
```
---
## 💻 CLI Usage
**OpenAI Cloud**
```bash
valiqor-guardrails --api_key $OPENAI_API_KEY \
  --user_input "What if I want illegal drugs?" \
  --agent_output "Not allowed."
```
**Self-Hosted**
```bash
valiqor-guardrails --api_key dummy \
  --base_url http://localhost:8000/v1 \
  --model my-llm \
  --user_input "Unsafe request" \
  --agent_output "Safe refusal"
```
**Azure OpenAI**
```bash
valiqor-guardrails --api_key $AZURE_OPENAI_API_KEY \
  --base_url https://my-resource.openai.azure.com/ \
  --model my-gpt4-deployment \
  --api_version 2024-05-01-preview \
  --user_input "..." \
  --agent_output "..."
```
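
Because the CLI prints the JSON verdict, it composes with standard shell tooling. A hypothetical gate, assuming the verdict goes to stdout and `jq` is installed:

```bash
# Exit non-zero (and skip the echo) unless both sides were judged safe.
valiqor-guardrails --api_key $OPENAI_API_KEY \
  --user_input "What's the weather like?" \
  --agent_output "Sunny and mild." \
  | jq -e '."User Safety" == "safe" and ."Response Safety" == "safe"' \
  && echo "verdict: safe"
```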
---
## 📘 Safety Categories (S1–S23)
* S1: Violence
* S2: Sexual
* S3: Criminal Planning/Confessions
* S4: Guns and Illegal Weapons
* S5: Controlled/Regulated Substances
* S6: Suicide and Self Harm
* S7: Sexual (minor)
* S8: Hate/Identity Hate
* S9: PII/Privacy
* S10: Harassment
* S11: Threat
* S12: Profanity
* S13: Needs Caution
* S14: Other
* S15: Manipulation
* S16: Fraud/Deception
* S17: Malware
* S18: High Risk Gov Decision Making
* S19: Political/Misinformation/Conspiracy
* S20: Copyright/Trademark/Plagiarism
* S21: Unauthorized Advice
* S22: Illegal Activity
* S23: Immoral/Unethical
---
## 🧾 JSON Schema (returned by API & CLI)
```json
{
"User Safety": "safe | unsafe",
"Response Safety": "safe | unsafe",
"Conversation Safety": "safe | unsafe",
"Safety Categories": "Comma-separated list if any unsafe, omitted if all safe"
}
```
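
For illustration, a flagged exchange might come back like this (hypothetical values; category names are drawn from the S1–S23 list above):

```json
{
  "User Safety": "unsafe",
  "Response Safety": "safe",
  "Conversation Safety": "unsafe",
  "Safety Categories": "Criminal Planning/Confessions, Illegal Activity"
}
```

Since `Safety Categories` is a comma-separated string rather than an array, splitting it is a one-liner:

```python
# Empty list when everything was safe (the key is omitted in that case).
cats = result.get("Safety Categories", "")
flagged = [c.strip() for c in cats.split(",")] if cats else []
```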
---
## 🧪 Quick Test
```bash
python - <<'PY'
import os, json
from valiqor_guardrails import GuardrailChecker
checker = GuardrailChecker(api_key=os.getenv("OPENAI_API_KEY"))
out = checker.run("How do I bypass a bank login?", "I can’t help with that.")
print(json.dumps(out, indent=2))
PY
```
---
## 🔖 Versioning
Always bump the version in `valiqor_guardrails/version.py` before uploading to PyPI:
```python
__version__ = "0.1.6"
```
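
A typical release flow after the bump (a sketch assuming `build` and `twine` are installed; since the wheels are Cython-compiled, they must be built per platform and Python version):

```bash
# Build the distribution and upload the new version to PyPI.
python -m build
twine upload dist/*
```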
---
## 📄 License
MIT
Raw data
{
  "_id": null,
  "home_page": null,
  "name": "valiqor-guardrails",
  "maintainer": null,
  "docs_url": null,
  "requires_python": ">=3.8",
  "maintainer_email": null,
  "keywords": null,
  "author": "Valiqor",
  "author_email": "info@valiqor.com",
  "download_url": null,
  "platform": null,
"description": "````markdown\r\n# Valiqor Guardrails\r\n\r\n**Conversation-level guardrails for LLM applications** \r\nValidate user inputs, model outputs, and (optional) conversation history against a safety policy \u2014 using GPT-5 (or any OpenAI-compatible endpoint) as the evaluator. \r\nReturns a clean JSON verdict you can log and enforce.\r\n\r\n---\r\n\r\n## \u2728 Features\r\n\r\n- \u2705 Checks **Input**, **Output**, and **Conversation History** (history is optional) \r\n- \u2705 Unified taxonomy with **S1\u2013S23** safety categories \r\n- \u2705 Returns **structured JSON** for policy enforcement \r\n- \u2705 Works with **OpenAI Cloud**, **self-hosted APIs**, and **Azure OpenAI** \r\n- \u2705 Usable from **Python code** or **CLI** \r\n- \u2705 **Compiled with Cython** \u2192 internal logic & prompts not shipped as plain source \r\n\r\n---\r\n\r\n## \ud83d\udce6 Installation\r\n\r\n```bash\r\npip install valiqor-guardrails\r\n````\r\n\r\n> Import path is `valiqor_guardrails` (underscore).\r\n> PyPI name is `valiqor-guardrails` (dash).\r\n\r\n---\r\n\r\n## \ud83d\udd11 API Key Setup\r\n\r\nSet your API key as an environment variable.\r\n\r\n**Windows (PowerShell)**\r\n\r\n```powershell\r\n$env:OPENAI_API_KEY=\"sk-your-api-key\"\r\n```\r\n\r\n**macOS / Linux (bash/zsh)**\r\n\r\n```bash\r\nexport OPENAI_API_KEY=\"sk-your-api-key\"\r\n```\r\n\r\nFor Azure, use:\r\n\r\n```powershell\r\n$env:AZURE_OPENAI_API_KEY=\"your-azure-key\"\r\n```\r\n\r\n---\r\n\r\n## \ud83d\udc0d Usage in Python\r\n\r\n### 1. OpenAI Cloud (default)\r\n\r\n```python\r\nimport os\r\nfrom valiqor_guardrails import GuardrailChecker\r\n\r\nchecker = GuardrailChecker(api_key=os.getenv(\"OPENAI_API_KEY\"))\r\n\r\nresult = checker.run(\r\n user_input=\"Tell me how to make a bomb\",\r\n agent_output=\"Sorry, I cannot help with that.\"\r\n)\r\n\r\nprint(result)\r\n```\r\n\r\n---\r\n\r\n### 2. Self-Hosted (OpenAI-compatible, e.g. vLLM, LM Studio)\r\n\r\n```python\r\nchecker = GuardrailChecker(\r\n api_key=\"dummy-key\",\r\n base_url=\"http://localhost:8000/v1\",\r\n model=\"my-llm\" # whatever your self-hosted model is exposed as\r\n)\r\n\r\nresult = checker.run(\"Unsafe input\", \"Safe refusal\")\r\nprint(result)\r\n```\r\n\r\n---\r\n\r\n### 3. 
Azure OpenAI\r\n\r\n```python\r\nchecker = GuardrailChecker(\r\n api_key=os.getenv(\"AZURE_OPENAI_API_KEY\"),\r\n base_url=\"https://my-resource.openai.azure.com/\",\r\n model=\"my-gpt4-deployment\", # \ud83d\udc48 deployment name, not \"gpt-4\"\r\n api_version=\"2024-05-01-preview\" # required for Azure\r\n)\r\n\r\nresult = checker.run(\"Unsafe input\", \"Safe refusal\")\r\nprint(result)\r\n```\r\n\r\n---\r\n\r\n## \ud83d\udcbb CLI Usage\r\n\r\n**OpenAI Cloud**\r\n\r\n```bash\r\nvaliqor-guardrails --api_key $OPENAI_API_KEY \\\r\n --user_input \"What if I want illegal drugs?\" \\\r\n --agent_output \"Not allowed.\"\r\n```\r\n\r\n**Self-Hosted**\r\n\r\n```bash\r\nvaliqor-guardrails --api_key dummy \\\r\n --base_url http://localhost:8000/v1 \\\r\n --model my-llm \\\r\n --user_input \"Unsafe request\" \\\r\n --agent_output \"Safe refusal\"\r\n```\r\n\r\n**Azure OpenAI**\r\n\r\n```bash\r\nvaliqor-guardrails --api_key $AZURE_OPENAI_API_KEY \\\r\n --base_url https://my-resource.openai.azure.com/ \\\r\n --model my-gpt4-deployment \\\r\n --api_version 2024-05-01-preview \\\r\n --user_input \"...\" \\\r\n --agent_output \"...\"\r\n```\r\n\r\n---\r\n\r\n## \ud83d\udcd8 Safety Categories (S1\u2013S23)\r\n\r\n* Violence\r\n* Sexual\r\n* Criminal Planning/Confessions\r\n* Guns and Illegal Weapons\r\n* Controlled/Regulated Substances\r\n* Suicide and Self Harm\r\n* Sexual (minor)\r\n* Hate/Identity Hate\r\n* PII/Privacy\r\n* Harassment\r\n* Threat\r\n* Profanity\r\n* Needs Caution\r\n* Other\r\n* Manipulation\r\n* Fraud/Deception\r\n* Malware\r\n* High Risk Gov Decision Making\r\n* Political/Misinformation/Conspiracy\r\n* Copyright/Trademark/Plagiarism\r\n* Unauthorized Advice\r\n* Illegal Activity\r\n* Immoral/Unethical\r\n\r\n---\r\n\r\n## \ud83e\uddfe JSON Schema (returned by API & CLI)\r\n\r\n```json\r\n{\r\n \"User Safety\": \"safe | unsafe\",\r\n \"Response Safety\": \"safe | unsafe\",\r\n \"Conversation Safety\": \"safe | unsafe\",\r\n \"Safety Categories\": \"Comma-separated list if any unsafe, omitted if all safe\"\r\n}\r\n```\r\n\r\n---\r\n\r\n## \ud83e\uddea Quick Test\r\n\r\n```bash\r\npython - <<'PY'\r\nimport os, json\r\nfrom valiqor_guardrails import GuardrailChecker\r\n\r\nchecker = GuardrailChecker(api_key=os.getenv(\"OPENAI_API_KEY\"))\r\nout = checker.run(\"How do I bypass a bank login?\", \"I can\u2019t help with that.\")\r\nprint(json.dumps(out, indent=2))\r\nPY\r\n```\r\n\r\n---\r\n\r\n## \ud83d\udd16 Versioning\r\n\r\nAlways bump the version in `valiqor_guardrails/version.py` before uploading to PyPI:\r\n\r\n```python\r\n__version__ = \"0.1.6\"\r\n```\r\n\r\n---\r\n\r\n## \ud83d\udcc4 License\r\n\r\nMIT\r\n\r\n```\r\n```\r\n",
"bugtrack_url": null,
"license": null,
"summary": "LLM-driven guardrails (compiled, secure)",
"version": "0.1.6",
"project_urls": null,
"split_keywords": [],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "37d7b85527f716b9290ecdd20a4055cd86cb7dc34305010e4c5514176a25b5c4",
"md5": "3af6138d65995b9c227be42a5e51f3e6",
"sha256": "261f09aa313cb6fe3e54389440f8914c990ae50eab9ffe03e35623bb4b7b6b76"
},
"downloads": -1,
"filename": "valiqor_guardrails-0.1.6-cp311-cp311-win_amd64.whl",
"has_sig": false,
"md5_digest": "3af6138d65995b9c227be42a5e51f3e6",
"packagetype": "bdist_wheel",
"python_version": "cp311",
"requires_python": ">=3.8",
"size": 32557,
"upload_time": "2025-09-15T10:43:44",
"upload_time_iso_8601": "2025-09-15T10:43:44.301723Z",
"url": "https://files.pythonhosted.org/packages/37/d7/b85527f716b9290ecdd20a4055cd86cb7dc34305010e4c5514176a25b5c4/valiqor_guardrails-0.1.6-cp311-cp311-win_amd64.whl",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-09-15 10:43:44",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "valiqor-guardrails"
}