# langchain-velatir
[PyPI version](https://badge.fury.io/py/langchain-velatir)
[License: MIT](https://opensource.org/licenses/MIT)
[Python 3.10+](https://www.python.org/downloads/)
**AI Governance, Compliance, and Human-in-the-Loop for LangChain**
Official LangChain integration for [Velatir](https://velatir.com): add enterprise-grade governance, compliance checking, and human approval workflows to your LangChain agents.
## Features
- **🛡️ Compliance Guardrails**: Automatically validate agent responses against GDPR, EU AI Act, Bias & Fairness, and Prompt Injection policies
- **👥 Human-in-the-Loop**: Require human approval for sensitive operations before execution
- **📊 Full Audit Trail**: All decisions logged in Velatir dashboard with complete context
- **🔄 Multi-Channel Approvals**: Receive approval requests via Slack, Microsoft Teams, Email, or Web UI
- **⚡ Easy Integration**: Drop-in middleware that works with existing LangChain agents
- **🎯 Flexible Policies**: Configure which tools need approval and which policies to enforce
## Installation
```bash
pip install langchain-velatir
```
**Requirements:**
- Python 3.10+
- LangChain 1.0 alpha or later
- Velatir account and API key ([sign up here](https://velatir.com))
## Quick Start
### Guardrails Example
Add governance to your agent responses. Velatir automatically evaluates responses against your configured policies:
```python
from langchain_velatir import VelatirGuardrailMiddleware
from langchain.agents import create_react_agent

# Create guardrail middleware
# Policies (GDPR, EU AI Act, Bias & Fairness, etc.) are configured in Velatir dashboard
guardrails = VelatirGuardrailMiddleware(
    api_key="your-velatir-api-key",
    mode="blocking",  # Block responses that Velatir denies
)

# Add to your agent
agent = create_react_agent(
    model,
    tools,
    middleware=[guardrails],
)
```
### Human-in-the-Loop Example
Send tool calls to Velatir for evaluation. Velatir determines if human approval is needed based on your configured flows:
```python
from langchain_velatir import VelatirHITLMiddleware
from langchain.agents import create_react_agent

# Create HITL middleware
# Approval flows and routing are configured in Velatir dashboard
hitl = VelatirHITLMiddleware(
    api_key="your-velatir-api-key",
    polling_interval=5.0,
    timeout=600.0,  # 10 minutes max wait
    require_approval_for=["delete_user", "execute_payment"],  # Optional filter
)

# Add to your agent
agent = create_react_agent(
    model,
    tools,
    middleware=[hitl],
)
```
### Combined Guardrails + HITL
Use both for complete governance. All policies and flows are configured in your Velatir dashboard:
```python
from langchain_velatir import VelatirGuardrailMiddleware, VelatirHITLMiddleware

# Guardrails evaluate responses AFTER the agent generates them
guardrails = VelatirGuardrailMiddleware(
    api_key="your-api-key",
    mode="blocking",
)

# HITL evaluates tool calls BEFORE execution
hitl = VelatirHITLMiddleware(
    api_key="your-api-key",
    require_approval_for=["process_payment", "delete_data"],  # Optional filter
)

# Add both to your agent
agent = create_react_agent(
    model,
    tools,
    middleware=[hitl, guardrails],  # Order matters: HITL first, then guardrails
)
```
## How It Works
### VelatirGuardrailMiddleware
Follows the pattern of LangChain's `SafetyGuardrailMiddleware`:
1. Uses `after_agent` hook to intercept agent responses
2. Sends responses to Velatir API for evaluation
3. Velatir's backend evaluates against your configured policies and flows:
   - GDPR compliance checking
   - EU AI Act requirements
   - Bias & Fairness detection
   - Prompt Injection prevention
   - Custom policies you've configured
4. Velatir returns decision (approved/denied/requires intervention)
5. Middleware blocks or logs based on mode
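As a rough illustration of steps 4-5, the blocking-vs-logging decision might be handled like this (the function, decision strings, and parameters are hypothetical sketches, not the actual Velatir API):

```python
# Hypothetical sketch of the mode handling described in steps 4-5.
# Names and decision strings are illustrative only.

def apply_guardrail(decision: str, mode: str, response: str, blocked_message: str) -> str:
    """Return the final agent response given Velatir's decision and the middleware mode."""
    if decision == "approved":
        return response
    if mode == "logging":
        # Logging mode: record the denial but allow the response through
        print(f"Velatir decision: {decision} (allowed in logging mode)")
        return response
    # Blocking mode: replace the denied response with the configured message
    return blocked_message
```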
**Policy Configuration:**
All policies are configured in your Velatir dashboard, not in code. This allows non-technical stakeholders to manage compliance requirements without code changes.
**Modes:**
- `blocking` - Block responses that Velatir denies (default)
- `logging` - Log Velatir's decisions but allow execution
### VelatirHITLMiddleware
Implements human-in-the-loop approval workflows:
1. Uses `modify_model_request` hook to intercept tool calls
2. Sends tool calls to Velatir API for evaluation
3. Velatir's backend evaluates tool calls against your configured flows:
   - Determines risk level
   - Decides if human approval is needed
   - Routes to appropriate reviewers (Slack, Teams, Email, Web)
   - May approve instantly for low-risk actions
4. Pauses execution if human review is required
5. Polls for Velatir's decision
6. Executes or blocks based on decision
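Steps 4-6 amount to a poll-until-decision loop. A minimal sketch under stated assumptions (`get_status` stands in for a status-check call; the decision strings are illustrative, not the real client API):

```python
import time

def poll_for_decision(get_status, polling_interval: float, timeout: float) -> str:
    """Poll a status callable until it returns a terminal decision or time runs out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        # Terminal decisions end the wait; anything else means "still pending"
        if status in ("approved", "rejected", "change_requested"):
            return status
        time.sleep(polling_interval)
    raise TimeoutError(f"No decision within {timeout}s")
```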
**Flow Configuration:**
All flows (when to require approval, who to route to, how many approvals, escalation paths) are configured in your Velatir dashboard. You can update flows without changing code.
**Decision Types:**
- ✅ **Approved** - Tool executes normally (may be instant or after human review)
- ❌ **Rejected** - Tool execution blocked, raises `VelatirApprovalDeniedError`
- 📝 **Change Requested** - Feedback provided, execution blocked
## Configuration
### Guardrail Middleware Options
```python
VelatirGuardrailMiddleware(
    api_key="your-api-key",   # Required: Velatir API key
    mode="blocking",          # "blocking" or "logging"
    base_url=None,            # Optional: custom API URL
    timeout=10.0,             # API request timeout in seconds
    approval_timeout=30.0,    # Max wait for Velatir decision
    polling_interval=2.0,     # Seconds between polls
    blocked_message="Response requires review...",  # Message shown when blocked
    metadata={},              # Optional metadata for all tasks
)
```
### HITL Middleware Options
```python
VelatirHITLMiddleware(
    api_key="your-api-key",   # Required: Velatir API key
    base_url=None,            # Optional: custom API URL
    polling_interval=5.0,     # Seconds between polls
    timeout=600.0,            # Max wait time for approval
    require_approval_for=["tool1"],  # Optional: which tools to send (None = all)
    metadata={},              # Optional metadata for all tasks
)
```
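Rather than hardcoding `api_key` as in the snippets above, you would typically read it from the environment. A small helper sketch (the `VELATIR_API_KEY` variable name matches the Examples section; the helper itself is an assumption, not part of the package):

```python
import os

def load_api_key(env_var: str = "VELATIR_API_KEY") -> str:
    """Fetch the Velatir API key from the environment; fail fast if it is missing."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before constructing the middleware")
    return key
```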
## Error Handling
The middleware raises custom exceptions for different scenarios:
```python
from langchain_velatir import (
    VelatirPolicyViolationError,
    VelatirApprovalDeniedError,
    VelatirTimeoutError,
)

try:
    result = agent.invoke({"input": "Process customer data"})
except VelatirPolicyViolationError as e:
    print(f"Policy violation: {e.violated_policies}")
    print(f"Review task: {e.review_task_id}")
except VelatirApprovalDeniedError as e:
    print(f"Approval denied: {e.requested_change}")
    print(f"Review task: {e.review_task_id}")
except VelatirTimeoutError as e:
    print(f"Timeout after {e.timeout_seconds}s")
    print(f"Review task: {e.review_task_id}")
```
## Examples
See the `examples/` directory for complete examples:
- **`example_guardrails.py`** - Compliance checking with guardrails
- **`example_hitl.py`** - Human approval workflows
- **`example_combined.py`** - Both guardrails and HITL together
Run examples:
```bash
export VELATIR_API_KEY="your-api-key"
export OPENAI_API_KEY="your-openai-key"
python examples/example_guardrails.py
```
## Architecture
### Middleware Integration
```mermaid
flowchart TD
A[1. User Input] --> B[2. VelatirHITLMiddleware<br/>modify_model_request hook<br/>→ Request human approval<br/>→ Poll for decision]
B --> C[3. Tool Execution<br/>if approved]
C --> D[4. Agent Response<br/>Generation]
D --> E[5. VelatirGuardrailMiddleware<br/>after_agent hook<br/>→ Validate against policies<br/>→ Block if violations found]
E --> F[6. Final Response<br/>to User]
style A fill:#e1f5ff
style B fill:#fff4e1
style C fill:#e8f5e9
style D fill:#e8f5e9
style E fill:#fff4e1
style F fill:#e1f5ff
```
### Velatir Integration Flow
```mermaid
graph LR
A[LangChain<br/>Middleware] -->|HTTP| B[Velatir<br/>API Server]
B --> C[Policy Engine<br/>• GDPR<br/>• EU AI Act<br/>• Bias & Fairness]
B --> D[Approval Channels<br/>• Slack<br/>• Teams<br/>• Email<br/>• Web]
D --> E[Human Decision<br/>Approve/Deny]
E -->|Decision| B
C -->|Evaluation| F[Decision Logged<br/>in Velatir]
style A fill:#e1f5ff
style B fill:#fff4e1
style C fill:#ffe8e8
style D fill:#e8f5e9
style E fill:#f3e5f5
style F fill:#fff9c4
```
## Best Practices
### 1. Layer Your Protections
```python
# HITL for high-risk actions (before execution)
hitl = VelatirHITLMiddleware(
    api_key=api_key,
    require_approval_for=["delete_data", "execute_payment", "modify_user"],
)

# Guardrails for compliance (after generation)
# Policies are configured in Velatir dashboard
guardrails = VelatirGuardrailMiddleware(
    api_key=api_key,
    mode="blocking",
)

# Apply both
agent = create_react_agent(model, tools, middleware=[hitl, guardrails])
```
### 2. Use Appropriate Modes
```python
# Production: block violations
guardrails = VelatirGuardrailMiddleware(
    api_key=api_key,
    mode="blocking",  # Strict enforcement
)

# Development: log for analysis
guardrails = VelatirGuardrailMiddleware(
    api_key=api_key,
    mode="logging",  # Monitor without blocking
)
```
### 3. Configure Timeouts Appropriately
```python
# Quick operations: short timeout
hitl = VelatirHITLMiddleware(
    api_key=api_key,
    timeout=300.0,  # 5 minutes
    polling_interval=3.0,
)

# Critical decisions: longer timeout
hitl = VelatirHITLMiddleware(
    api_key=api_key,
    timeout=1800.0,  # 30 minutes
    polling_interval=10.0,
)
```
### 4. Selective Tool Approval
```python
# Only require approval for sensitive tools
hitl = VelatirHITLMiddleware(
    api_key=api_key,
    require_approval_for=[
        "delete_user",
        "process_payment",
        "access_confidential_data",
    ],
)
# Other tools execute without approval
```
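The filter semantics can be sketched as a simple predicate (a hypothetical helper, mirroring the documented behavior that `None` means every tool is sent to Velatir):

```python
from typing import Optional

def needs_approval(tool_name: str, require_approval_for: Optional[list[str]]) -> bool:
    """None sends every tool to Velatir; otherwise only the listed tools are sent."""
    return require_approval_for is None or tool_name in require_approval_for
```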
## Use Cases
### Financial Services
```python
# Configure EU AI Act and Bias policies in Velatir dashboard
guardrails = VelatirGuardrailMiddleware(
    api_key=api_key,
    mode="blocking",
)

# Configure approval flows for financial operations in dashboard
hitl = VelatirHITLMiddleware(
    api_key=api_key,
    require_approval_for=["execute_trade", "approve_loan", "process_withdrawal"],
)
```
### Healthcare
```python
# Configure GDPR, Bias, and custom HIPAA policies in Velatir dashboard
guardrails = VelatirGuardrailMiddleware(
    api_key=api_key,
    mode="blocking",
)

# Configure approval flows for medical operations in dashboard
hitl = VelatirHITLMiddleware(
    api_key=api_key,
    require_approval_for=["access_patient_records", "prescribe_medication"],
)
```
### Customer Support
```python
# Configure Bias and Prompt Injection policies in Velatir dashboard
guardrails = VelatirGuardrailMiddleware(
    api_key=api_key,
    mode="blocking",
)

# Configure approval flows for customer actions in dashboard
hitl = VelatirHITLMiddleware(
    api_key=api_key,
    require_approval_for=["issue_refund", "close_account", "escalate_complaint"],
)
```
## Velatir Dashboard
All review tasks, policy violations, and approvals are logged in your Velatir dashboard:
- **Real-time monitoring** of agent decisions
- **Audit trail** for compliance reporting
- **Analytics** on approval patterns and policy violations
- **Team management** for approval workflows
- **Custom policies** tailored to your industry
Visit [velatir.com](https://www.velatir.com) to set up your dashboard.
## Development
### Running Tests
```bash
pip install -e ".[dev]"
pytest tests/
```
### Code Formatting
```bash
black langchain_velatir/
ruff check langchain_velatir/
```
### Type Checking
```bash
mypy langchain_velatir/
```
## Contributing
We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
## Support
- 📖 [Documentation](https://www.velatir.com/docs)
- 🐛 [Report Issues](https://github.com/velatir/langchain-velatir/issues)
- 📧 [Email Support](mailto:hello@velatir.com)
## License
MIT License - see [LICENSE](LICENSE) file for details.
---
Made with ❤️ by [Velatir](https://velatir.com) | Enabling safe AI adoption at scale