Name | langguard JSON |
Version |
0.7.0
JSON |
| download |
home_page | https://github.com/aprzy/langguard-python |
Summary | A Python library for language security |
upload_time | 2025-08-15 19:24:57 |
maintainer | None |
docs_url | None |
author | Your Name |
requires_python | >=3.11 |
license | MIT License
Copyright (c) Andrew Jon Przybilla
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
|
keywords |
security
language
protection
|
VCS |
 |
bugtrack_url |
|
requirements |
No requirements were recorded.
|
Travis-CI |
No Travis.
|
coveralls test coverage |
No coveralls.
|
# LangGuard 🛡️
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://pypi.org/project/langguard/)
**LangGuard** is a Python library that acts as a security layer for LLM (Large Language Model) agent pipelines. It screens and validates language inputs before they reach your AI agents, helping prevent prompt injection, jailbreaking attempts, and ensuring compliance with your security specifications.
## Features
- **🤖🛡️ GuardAgent**: Agent that serves as a circuit-breaker against prompt injection, jailbreaking, and data lifting attacks.
## Installation
Install LangGuard using pip:
```bash
pip install langguard
```
## Configuration
### Required Components
To use GuardAgent, you need:
1. **LLM Provider** - Currently supports `"openai"` or `None` (test mode)
2. **API Key** - Required for OpenAI (via environment variable)
3. **Prompt** - The text to screen (passed to `screen()` method)
4. **Model** - Optional, defaults to `gpt-4o-mini`
### Setup Methods
#### Method 1: Environment Variables (Recommended)
```bash
export GUARD_LLM_PROVIDER="openai" # LLM provider to use
export GUARD_LLM_API_KEY="your-api-key" # Your OpenAI API key
export GUARD_LLM_MODEL="gpt-4o-mini" # Optional: OpenAI model (default: gpt-4o-mini)
export LLM_TEMPERATURE="0.1" # Optional: Temperature 0-1 (default: 0.1)
```
Then in your code:
```python
from langguard import GuardAgent
agent = GuardAgent() # Automatically uses environment variables
response = agent.screen("Your prompt here")
```
#### Method 2: Partial Configuration
```bash
export GUARD_LLM_API_KEY="your-api-key" # API key must be in environment
```
```python
from langguard import GuardAgent
agent = GuardAgent(llm="openai") # Specify provider in code
response = agent.screen("Your prompt here")
```
#### Method 3: Test Mode (No API Required)
```python
from langguard import GuardAgent
# No provider specified = test mode
agent = GuardAgent() # Uses TestLLM, no API needed
response = agent.screen("Your prompt here")
# Always returns {"safe": false, "reason": "Test mode - always fails for safety"}
```
### Environment Variables Reference
| Variable | Description | Required | Default |
|----------|-------------|----------|----------|
| `GUARD_LLM_PROVIDER` | LLM provider (`"openai"` or `None`) | No | `None` (test mode) |
| `GUARD_LLM_API_KEY` | API key for OpenAI | Yes (for OpenAI) | - |
| `GUARD_LLM_MODEL` | Model to use | No | `gpt-4o-mini` |
| `LLM_TEMPERATURE` | Temperature (0-1) | No | `0.1` |
**Note**: Currently, API keys and models can only be configured via environment variables, not passed directly to the constructor.
## Quick Start
### Basic Usage - Plug and Play
```python
from langguard import GuardAgent
# Initialize GuardAgent with built-in security rules
guard = GuardAgent(llm="openai")
# Screen a user prompt with default protection
prompt = "How do I write a for loop in Python?"
response = guard.screen(prompt)
if response["safe"]:
print(f"Prompt is safe: {response['reason']}")
# Proceed with your LLM agent pipeline
else:
print(f"Prompt blocked: {response['reason']}")
# Handle the blocked prompt
```
The default specification blocks:
- Jailbreak attempts and prompt injections
- Requests for harmful or illegal content
- SQL/command injection attempts
- Personal information requests
- Malicious content generation
- System information extraction
### Adding Custom Rules
```python
# Add additional rules to the default specification
guard = GuardAgent(llm="openai")
# Add domain-specific rules while keeping default protection
response = guard.screen(
"Tell me about Python decorators",
specification="Only allow Python and JavaScript questions"
)
# This adds your rules to the default security rules
```
### Overriding Default Rules
```python
# Completely replace default rules with custom specification
response = guard.screen(
"What is a SQL injection?",
specification="Only allow cybersecurity educational content",
override=True # This replaces ALL default rules
)
```
### Simple Boolean Validation
```python
# For simple pass/fail checks
is_safe = agent.is_safe(
"Tell me about Python decorators",
"Only allow programming questions"
)
if is_safe:
# Process the prompt
pass
```
## Advanced Usage
### Advanced Usage
```python
from langguard import GuardAgent
# Create a guard agent
agent = GuardAgent(llm="openai")
# Use the simple boolean check
if agent.is_safe("DROP TABLE users;"):
print("Prompt is safe")
else:
print("Prompt blocked")
# With custom rules added to defaults
is_safe = agent.is_safe(
"How do I implement a binary search tree?",
specification="Must be about data structures"
)
# With complete rule override
is_safe = agent.is_safe(
"What's the recipe for chocolate cake?",
specification="Only allow cooking questions",
override=True
)
```
### Response Structure
LangGuard returns a `GuardResponse` dictionary with:
```python
{
"safe": bool, # True if prompt is safe, False otherwise
"reason": str # Explanation of the decision
}
```
### Default Protection
GuardAgent comes with built-in protection against:
- **Jailbreak Attempts**: Prompts trying to bypass safety guidelines
- **Injection Attacks**: SQL, command, and code injection attempts
- **Data Extraction**: Attempts to extract system information or credentials
- **Harmful Content**: Requests for illegal, unethical, or dangerous content
- **Personal Information**: Requests for SSN, passwords, or private data
- **Malicious Generation**: Phishing emails, malware, or exploit code
- **Prompt Manipulation**: Instructions to ignore previous rules or reveal system prompts
## Testing
The library includes comprehensive test coverage for various security scenarios:
```bash
# Run the OpenAI integration test
cd scripts
python test_openai.py
# Run unit tests
pytest tests/
```
### Example Security Scenarios
LangGuard can detect and prevent:
- **SQL Injection Attempts**: Blocks malicious database queries
- **System Command Execution**: Prevents file system access attempts
- **Personal Information Requests**: Blocks requests for PII
- **Jailbreak Attempts**: Detects attempts to bypass AI safety guidelines
- **Phishing Content Generation**: Prevents creation of deceptive content
- **Medical Advice**: Filters out specific medical diagnosis requests
- **Harmful Content**: Blocks requests for dangerous information
## Architecture
LangGuard follows a modular architecture:
```
langguard/
├── core.py # Minimal core file (kept for potential future use)
├── agent.py # GuardAgent implementation with LLM logic
├── models.py # LLM provider implementations (OpenAI, Test)
└── __init__.py # Package exports
```
### Components
- **GuardAgent**: Primary agent that screens prompts using LLMs
- **LLM Providers**: Pluggable LLM backends (OpenAI with structured output support)
- **GuardResponse**: Typed response structure with pass/fail status and reasoning
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Links
- [GitHub Repository](https://github.com/langguard/langguard-python)
- [Issue Tracker](https://github.com/langguard/langguard-python/issues)
- [PyPI Package](https://pypi.org/project/langguard/)
---
Raw data
{
"_id": null,
"home_page": "https://github.com/aprzy/langguard-python",
"name": "langguard",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.11",
"maintainer_email": null,
"keywords": "security, language, protection",
"author": "Your Name",
"author_email": "Andrew Jon Przybilla <andrewprzy@pm.me>",
"download_url": "https://files.pythonhosted.org/packages/4f/17/1e4e5fd06db130ec2a7f6db5225bd323a0f262c23336b32707678ef34d7d/langguard-0.7.0.tar.gz",
"platform": null,
"description": "# LangGuard \ud83d\udee1\ufe0f\n\n[](https://www.python.org/downloads/)\n[](https://opensource.org/licenses/MIT)\n[](https://pypi.org/project/langguard/)\n\n**LangGuard** is a Python library that acts as a security layer for LLM (Large Language Model) agent pipelines. It screens and validates language inputs before they reach your AI agents, helping prevent prompt injection, jailbreaking attempts, and ensuring compliance with your security specifications.\n\n## Features\n\n- **\ud83e\udd16\ud83d\udee1\ufe0f GuardAgent**: Agent that serves as a circuit-breaker against prompt injection, jailbreaking, and data lifting attacks.\n\n## Installation\n\nInstall LangGuard using pip:\n\n```bash\npip install langguard\n```\n\n## Configuration\n\n### Required Components\n\nTo use GuardAgent, you need:\n1. **LLM Provider** - Currently supports `\"openai\"` or `None` (test mode)\n2. **API Key** - Required for OpenAI (via environment variable)\n3. **Prompt** - The text to screen (passed to `screen()` method)\n4. **Model** - Optional, defaults to `gpt-4o-mini`\n\n### Setup Methods\n\n#### Method 1: Environment Variables (Recommended)\n\n```bash\nexport GUARD_LLM_PROVIDER=\"openai\" # LLM provider to use\nexport GUARD_LLM_API_KEY=\"your-api-key\" # Your OpenAI API key\nexport GUARD_LLM_MODEL=\"gpt-4o-mini\" # Optional: OpenAI model (default: gpt-4o-mini)\nexport LLM_TEMPERATURE=\"0.1\" # Optional: Temperature 0-1 (default: 0.1)\n```\n\nThen in your code:\n```python\nfrom langguard import GuardAgent\n\nagent = GuardAgent() # Automatically uses environment variables\nresponse = agent.screen(\"Your prompt here\")\n```\n\n#### Method 2: Partial Configuration\n\n```bash\nexport GUARD_LLM_API_KEY=\"your-api-key\" # API key must be in environment\n```\n\n```python\nfrom langguard import GuardAgent\n\nagent = GuardAgent(llm=\"openai\") # Specify provider in code\nresponse = agent.screen(\"Your prompt here\")\n```\n\n#### Method 3: Test Mode (No API Required)\n\n```python\nfrom langguard import GuardAgent\n\n# No provider specified = test mode\nagent = GuardAgent() # Uses TestLLM, no API needed\nresponse = agent.screen(\"Your prompt here\")\n# Always returns {\"safe\": false, \"reason\": \"Test mode - always fails for safety\"}\n```\n\n### Environment Variables Reference\n\n| Variable | Description | Required | Default |\n|----------|-------------|----------|----------|\n| `GUARD_LLM_PROVIDER` | LLM provider (`\"openai\"` or `None`) | No | `None` (test mode) |\n| `GUARD_LLM_API_KEY` | API key for OpenAI | Yes (for OpenAI) | - |\n| `GUARD_LLM_MODEL` | Model to use | No | `gpt-4o-mini` |\n| `LLM_TEMPERATURE` | Temperature (0-1) | No | `0.1` |\n\n**Note**: Currently, API keys and models can only be configured via environment variables, not passed directly to the constructor.\n\n## Quick Start\n\n### Basic Usage - Plug and Play\n\n```python\nfrom langguard import GuardAgent\n\n# Initialize GuardAgent with built-in security rules\nguard = GuardAgent(llm=\"openai\")\n\n# Screen a user prompt with default protection\nprompt = \"How do I write a for loop in Python?\"\nresponse = guard.screen(prompt)\n\nif response[\"safe\"]:\n print(f\"Prompt is safe: {response['reason']}\")\n # Proceed with your LLM agent pipeline\nelse:\n print(f\"Prompt blocked: {response['reason']}\")\n # Handle the blocked prompt\n```\n\nThe default specification blocks:\n- Jailbreak attempts and prompt injections\n- Requests for harmful or illegal content\n- SQL/command injection attempts\n- Personal information requests\n- Malicious content generation\n- System information extraction\n\n### Adding Custom Rules\n\n```python\n# Add additional rules to the default specification\nguard = GuardAgent(llm=\"openai\")\n\n# Add domain-specific rules while keeping default protection\nresponse = guard.screen(\n \"Tell me about Python decorators\",\n specification=\"Only allow Python and JavaScript questions\"\n)\n# This adds your rules to the default security rules\n```\n\n### Overriding Default Rules\n\n```python\n# Completely replace default rules with custom specification\nresponse = guard.screen(\n \"What is a SQL injection?\",\n specification=\"Only allow cybersecurity educational content\",\n override=True # This replaces ALL default rules\n)\n```\n\n### Simple Boolean Validation\n\n```python\n# For simple pass/fail checks\nis_safe = agent.is_safe(\n \"Tell me about Python decorators\",\n \"Only allow programming questions\"\n)\n\nif is_safe:\n # Process the prompt\n pass\n```\n\n\n## Advanced Usage\n\n### Advanced Usage\n\n```python\nfrom langguard import GuardAgent\n\n# Create a guard agent\nagent = GuardAgent(llm=\"openai\")\n\n# Use the simple boolean check\nif agent.is_safe(\"DROP TABLE users;\"):\n print(\"Prompt is safe\")\nelse:\n print(\"Prompt blocked\")\n\n# With custom rules added to defaults\nis_safe = agent.is_safe(\n \"How do I implement a binary search tree?\",\n specification=\"Must be about data structures\"\n)\n\n# With complete rule override\nis_safe = agent.is_safe(\n \"What's the recipe for chocolate cake?\",\n specification=\"Only allow cooking questions\",\n override=True\n)\n```\n\n### Response Structure\n\nLangGuard returns a `GuardResponse` dictionary with:\n\n```python\n{\n \"safe\": bool, # True if prompt is safe, False otherwise\n \"reason\": str # Explanation of the decision\n}\n```\n\n### Default Protection\n\nGuardAgent comes with built-in protection against:\n- **Jailbreak Attempts**: Prompts trying to bypass safety guidelines\n- **Injection Attacks**: SQL, command, and code injection attempts\n- **Data Extraction**: Attempts to extract system information or credentials\n- **Harmful Content**: Requests for illegal, unethical, or dangerous content\n- **Personal Information**: Requests for SSN, passwords, or private data\n- **Malicious Generation**: Phishing emails, malware, or exploit code\n- **Prompt Manipulation**: Instructions to ignore previous rules or reveal system prompts\n\n## Testing\n\nThe library includes comprehensive test coverage for various security scenarios:\n\n```bash\n# Run the OpenAI integration test\ncd scripts\npython test_openai.py\n\n# Run unit tests\npytest tests/\n```\n\n### Example Security Scenarios\n\nLangGuard can detect and prevent:\n\n- **SQL Injection Attempts**: Blocks malicious database queries\n- **System Command Execution**: Prevents file system access attempts\n- **Personal Information Requests**: Blocks requests for PII\n- **Jailbreak Attempts**: Detects attempts to bypass AI safety guidelines\n- **Phishing Content Generation**: Prevents creation of deceptive content\n- **Medical Advice**: Filters out specific medical diagnosis requests\n- **Harmful Content**: Blocks requests for dangerous information\n\n## Architecture\n\nLangGuard follows a modular architecture:\n\n```\nlangguard/\n\u251c\u2500\u2500 core.py # Minimal core file (kept for potential future use)\n\u251c\u2500\u2500 agent.py # GuardAgent implementation with LLM logic\n\u251c\u2500\u2500 models.py # LLM provider implementations (OpenAI, Test)\n\u2514\u2500\u2500 __init__.py # Package exports\n```\n\n### Components\n\n- **GuardAgent**: Primary agent that screens prompts using LLMs\n- **LLM Providers**: Pluggable LLM backends (OpenAI with structured output support)\n- **GuardResponse**: Typed response structure with pass/fail status and reasoning\n\n## Contributing\n\nContributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.\n\n1. Fork the repository\n2. Create your feature branch (`git checkout -b feature/amazing-feature`)\n3. Commit your changes (`git commit -m 'Add some amazing feature'`)\n4. Push to the branch (`git push origin feature/amazing-feature`)\n5. Open a Pull Request\n\n## License\n\nThis project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.\n\n## Links\n\n- [GitHub Repository](https://github.com/langguard/langguard-python)\n- [Issue Tracker](https://github.com/langguard/langguard-python/issues)\n- [PyPI Package](https://pypi.org/project/langguard/)\n\n---\n",
"bugtrack_url": null,
"license": "MIT License\n \n Copyright (c) Andrew Jon Przybilla\n \n Permission is hereby granted, free of charge, to any person obtaining a copy\n of this software and associated documentation files (the \"Software\"), to deal\n in the Software without restriction, including without limitation the rights\n to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n copies of the Software, and to permit persons to whom the Software is\n furnished to do so, subject to the following conditions:\n \n The above copyright notice and this permission notice shall be included in all\n copies or substantial portions of the Software.\n \n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n SOFTWARE.\n ",
"summary": "A Python library for language security",
"version": "0.7.0",
"project_urls": {
"Homepage": "https://github.com/langguard/langguard-python",
"Issues": "https://github.com/langguard/langguard-python/issues",
"Repository": "https://github.com/langguard/langguard-python"
},
"split_keywords": [
"security",
" language",
" protection"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "2e84de360dde64393a7b96e40a607bdcd9dcea4972513c76efddc6b3f2c7c9ce",
"md5": "d1385b514516030268f01ab81df80e66",
"sha256": "2c452434c041b0ef7c3c1dd21328342356edb753500af2eee0dc4181646b1247"
},
"downloads": -1,
"filename": "langguard-0.7.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "d1385b514516030268f01ab81df80e66",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.11",
"size": 9452,
"upload_time": "2025-08-15T19:24:56",
"upload_time_iso_8601": "2025-08-15T19:24:56.395858Z",
"url": "https://files.pythonhosted.org/packages/2e/84/de360dde64393a7b96e40a607bdcd9dcea4972513c76efddc6b3f2c7c9ce/langguard-0.7.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "4f171e4e5fd06db130ec2a7f6db5225bd323a0f262c23336b32707678ef34d7d",
"md5": "1491909e95e5e8cbd5e94e08cfe9ad98",
"sha256": "363bdc731e4a4a0682f34e407a2915334a13890effb41790ad072309c308e6a5"
},
"downloads": -1,
"filename": "langguard-0.7.0.tar.gz",
"has_sig": false,
"md5_digest": "1491909e95e5e8cbd5e94e08cfe9ad98",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.11",
"size": 12732,
"upload_time": "2025-08-15T19:24:57",
"upload_time_iso_8601": "2025-08-15T19:24:57.445230Z",
"url": "https://files.pythonhosted.org/packages/4f/17/1e4e5fd06db130ec2a7f6db5225bd323a0f262c23336b32707678ef34d7d/langguard-0.7.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-08-15 19:24:57",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "aprzy",
"github_project": "langguard-python",
"github_not_found": true,
"lcname": "langguard"
}