<h1 align="center">Pallma Guard</h1>
<p align="center">
<p align="center"><strong>The Security Observability Layer for AI Agents.</strong></p>
</p>
<h4 align="center">
<a href="https://github.com/pallma-ai/pallma-guard/actions/workflows/ci.yml">
<img src="https://github.com/pallma-ai/pallma-guard/actions/workflows/ci.yml/badge.svg">
</a>
<a href="https://github.com/pallma-ai/pallma-guard/blob/main/LICENSE">
<img src="https://img.shields.io/badge/license-Apache 2.0-blue.svg" alt="Pallma-Guard is released under the Apache-2.0 License">
</a>
<a>
<img src="https://img.shields.io/badge/python-3.12+-blue.svg" alt="Minimum Python version">
</a>
<a>
<img src="https://img.shields.io/badge/status-prototype-orange.svg" alt="Project status">
</a>
</h4>
Pallma-Guard is a security observability platform designed specifically for the era of AI agents and LLM-powered applications. As agents execute complex tasks, they create a new, dynamic attack surface vulnerable to threats like prompt injection, data leakage, and insecure tool usage.
Pallma-Guard leverages **OpenTelemetry** to trace every step of an agent's decision-making process (its "chain-of-thought"). These traces are then fed into purpose-built **open-source** ML models that detect, analyze, and block threats in real time, giving you unprecedented visibility and control over your AI systems.
## ✨ Key Features
* 🛡️ **Real-time Threat Detection:** Identify malicious prompts, data exfiltration attempts, and anomalous agent behavior as it happens.
* 🔎 **LLM-Specific Analysis:** Go beyond simple text matching. Understand the context of LLM interactions to detect nuanced threats.
* 🔓 **Transparent & Auditable Security Models:** No black boxes. Our core ML models are fully open-source. Inspect the logic, contribute improvements, or fine-tune them.
* ⚙️ **Minimal Configuration:** Get started in minutes. Pallma-Guard is designed to work out-of-the-box with no complex rule-writing, providing immediate value right after installation.
* 🔌 **OpenTelemetry Native:** Seamlessly integrate with your existing observability stack. If you use OTel, you can use Pallma-Guard.
* 🤖 **Framework Agnostic:** Works with LangChain, LangGraph, LlamaIndex, Haystack, LiteLLM, and CrewAI.
## 🤔 Why Pallma-Guard?
Traditional security tools (WAFs, APMs) are blind to the internal logic of AI agents. They see an API call, but not the malicious prompt that caused it.
Even newer AI security solutions can operate as a "black box," asking you to trust their proprietary logic. **Pallma-Guard is different.** Our commitment to open-source extends to our core security models. This transparency allows for community auditing, eliminates vendor lock-in, and empowers you to understand exactly how your agents are being protected.
## Prerequisites
Before running pallma, ensure you have the following installed:
- **Python 3.12+**
- **Docker and Docker Compose**
- **uv** (Python package manager)
## Installation
**The easiest way to install pallma-guard is with pip or pipx:**
**pip**
```bash
pip install pallma-guard
```
**pipx**
```bash
pipx install pallma-guard
```
**Manual installation**
1. **Clone the repository:**
```bash
git clone https://github.com/pallma-ai/pallma.git
cd pallma
```
2. **Install dependencies:**
Install dependencies (CLI):
```bash
make install
```
(Optional) Install dependencies (CLI + SDK):
```bash
make install-sdk
```
3. **Activate the virtual environment:**
```bash
source .venv/bin/activate
```
## Running the Application
The project includes a CLI tool for easy management:
```bash
# Start all services
pallma start
# Display real-time statistics
pallma display
# Stop all services
pallma stop
```
The display command shows real-time statistics including:
- Total number of messages
- Percentage of allow/block decisions
- Real-time updates as messages arrive
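The allow/block percentages above can be sketched with a small aggregation function. This is an illustrative `summarize` helper, not part of the pallma CLI — the real `pallma display` command computes these figures from messages arriving over Kafka:

```python
from collections import Counter

def summarize(decisions):
    """Aggregate a stream of "allow"/"block" decisions into display stats."""
    counts = Counter(decisions)
    total = sum(counts.values())
    return {
        "total": total,
        "allow_pct": 100 * counts["allow"] / total if total else 0.0,
        "block_pct": 100 * counts["block"] / total if total else 0.0,
    }

# Example: three allowed messages, one blocked
stats = summarize(["allow", "allow", "block", "allow"])
print(stats)  # {'total': 4, 'allow_pct': 75.0, 'block_pct': 25.0}
```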
## Services
The application consists of the following services:
- **Zookeeper**: Apache Kafka dependency
- **Kafka**: Message broker for telemetry data
- **OpenTelemetry Collector**: Collects and forwards telemetry data to Kafka
- **Processor**: Processes telemetry data from Kafka
- **Predictor**: ML service for predictions (requires Hugging Face token)
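Since the collector speaks standard OTLP, an already-instrumented application can usually be pointed at Pallma-Guard via the standard OpenTelemetry environment variables. A sketch, assuming the default local setup (the service name is hypothetical, and the exact token variable the Predictor reads may differ — check the service's configuration):

```bash
# Standard OTLP exporter settings; the collector listens on 4317 (gRPC)
# and 4318 (HTTP) in the default setup.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"
export OTEL_SERVICE_NAME="my-agent"   # hypothetical service name

# The Predictor needs a Hugging Face token; HF_TOKEN is the conventional
# variable name, but verify which one the service actually reads.
export HF_TOKEN="hf_..."              # placeholder, not a real token
```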
## Development
### Development Commands
```bash
# Install development dependencies
make install-dev
# Run linting
make lint
# Install specific dependency groups
make install-cli
make install-sdk
# Bump version
make bump-version version=<major|minor|patch>
```
### Releasing a new version
After running `make bump-version`, push the changes and tags to the remote repository:
```bash
git push --follow-tags
```
## Troubleshooting
1. **Network issues**: Ensure the `pallma-network` Docker network exists
2. **Port conflicts**: Check if ports 2181, 9092, 4317, 4318 are available
3. **Service health**: Use `docker-compose ps` to check service status
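The checks above can be run directly from a shell; the `pallma-network` name comes from the Docker setup described earlier, and 9092 (Kafka) stands in for any of the listed ports:

```bash
# 1. Recreate the Docker network if it is missing
docker network inspect pallma-network >/dev/null 2>&1 \
  || docker network create pallma-network

# 2. See which process holds a required port (e.g. Kafka on 9092)
lsof -i :9092

# 3. Check service status and recent logs
docker-compose ps
docker-compose logs --tail=50
```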
## License
See [LICENSE](LICENSE) file for details.