# Xiangxin AI Guardrails Python Client
[PyPI version](https://badge.fury.io/py/xiangxinai)
[Python versions](https://pypi.org/project/xiangxinai/)
[License: Apache 2.0](https://opensource.org/licenses/Apache-2.0)
An LLM-based, context-aware AI guardrail that understands conversation context to detect security, safety, and data-leakage risks.
## Features
* 🧠 **Context Awareness** – Based on LLM conversation understanding rather than simple batch detection
* 🔍 **Prompt Injection Detection** – Detects malicious prompt injections and jailbreak attacks
* 📋 **Content Compliance Detection** – Checks content against generative AI safety and compliance requirements
* 🔐 **Sensitive Data Leak Prevention** – Detects and prevents personal or corporate data leaks
* 🧩 **User-level Ban Policy** – Supports user-granular risk recognition and blocking strategies
* 🖼️ **Multimodal Detection** – Supports image content safety detection
* 🛠️ **Easy Integration** – OpenAI-compatible API format; plug in with one line of code
* ⚡ **OpenAI-style API** – Familiar interface design for rapid adoption
* 🚀 **Sync/Async Support** – Supports both synchronous and asynchronous calls for different scenarios
## Installation
```bash
pip install xiangxinai
```
## Quick Start
### Basic Usage
```python
from xiangxinai import XiangxinAI

# Create a client
client = XiangxinAI(
    api_key="your-api-key",
    base_url="https://api.xiangxinai.cn/v1"  # Cloud API
)

# Check user input
result = client.check_prompt("I want to learn Python programming", user_id="user-123")
print(result.suggest_action)      # Output: pass
print(result.overall_risk_level)  # Output: no_risk
print(result.score)               # Confidence score, e.g. 0.9993114447238793

# Check model response (context-aware)
result = client.check_response_ctx(
    prompt="Teach me how to cook",
    response="I can teach you some simple home dishes",
    user_id="user-123"  # Optional user-level risk control
)
print(result.suggest_action)      # Output: pass
print(result.overall_risk_level)  # Output: no_risk
```
### Context-Aware Detection (Core Feature)
```python
# Context-based conversation detection - Core feature
messages = [
    {"role": "user", "content": "I want to learn chemistry"},
    {"role": "assistant", "content": "Chemistry is an interesting subject. What part would you like to learn?"},
    {"role": "user", "content": "Teach me reactions for making explosives"}
]

result = client.check_conversation(messages, user_id="user-123")
print(result.overall_risk_level)
print(result.suggest_action)  # Result based on full conversation context
if result.suggest_answer:
    print(f"Suggested answer: {result.suggest_answer}")
```
### Asynchronous API (Recommended)
```python
import asyncio
from xiangxinai import AsyncXiangxinAI

async def main():
    async with AsyncXiangxinAI(api_key="your-api-key") as client:
        # Async prompt check
        result = await client.check_prompt("I want to learn Python programming")
        print(result.suggest_action)  # Output: pass

        # Async conversation context check
        messages = [
            {"role": "user", "content": "I want to learn chemistry"},
            {"role": "assistant", "content": "Chemistry is an interesting subject. What part would you like to learn?"},
            {"role": "user", "content": "Teach me reactions for making explosives"}
        ]
        result = await client.check_conversation(messages)
        print(result.overall_risk_level)

asyncio.run(main())
```
### Concurrent Processing
```python
import asyncio
from xiangxinai import AsyncXiangxinAI

async def batch_check():
    async with AsyncXiangxinAI(api_key="your-api-key") as client:
        # Handle multiple requests concurrently
        tasks = [
            client.check_prompt("Content 1"),
            client.check_prompt("Content 2"),
            client.check_prompt("Content 3")
        ]
        results = await asyncio.gather(*tasks)

        for i, result in enumerate(results):
            print(f"Content {i+1}: {result.overall_risk_level}")

asyncio.run(batch_check())
```
### Multimodal Image Detection
Supports multimodal detection for image content safety. The system analyzes both text prompt semantics and image semantics for risk.
```python
from xiangxinai import XiangxinAI

client = XiangxinAI(api_key="your-api-key")

# Check a single local image
result = client.check_prompt_image(
    prompt="Is this image safe?",
    image="/path/to/image.jpg"
)
print(result.overall_risk_level)
print(result.suggest_action)

# Check an image from URL
result = client.check_prompt_image(
    prompt="",  # prompt can be empty
    image="https://example.com/image.jpg"
)

# Check multiple images
images = [
    "/path/to/image1.jpg",
    "https://example.com/image2.jpg",
    "/path/to/image3.png"
]
result = client.check_prompt_images(
    prompt="Are all these images safe?",
    images=images
)
print(result.overall_risk_level)
```
Async version:
```python
import asyncio
from xiangxinai import AsyncXiangxinAI

async def check_images():
    async with AsyncXiangxinAI(api_key="your-api-key") as client:
        # Async check for a single image
        result = await client.check_prompt_image(
            prompt="Is this image safe?",
            image="/path/to/image.jpg"
        )
        print(result.overall_risk_level)

        # Async check for multiple images
        images = ["/path/to/image1.jpg", "/path/to/image2.jpg"]
        result = await client.check_prompt_images(
            prompt="Are these images safe?",
            images=images
        )
        print(result.overall_risk_level)

asyncio.run(check_images())
```
### On-Premise Deployment
```python
# Sync client connecting to local deployment
client = XiangxinAI(
    api_key="your-local-api-key",
    base_url="http://localhost:5000/v1"
)

# Async client connecting to local deployment
# (run inside an async function)
async with AsyncXiangxinAI(
    api_key="your-local-api-key",
    base_url="http://localhost:5000/v1"
) as client:
    result = await client.check_prompt("Test content")
```
## API Reference
### XiangxinAI Class (Synchronous)
#### Initialization Parameters
* `api_key` (str): API key
* `base_url` (str): Base API URL, defaults to the cloud endpoint
* `timeout` (int): Request timeout, default 30 seconds
* `max_retries` (int): Maximum retry count, default 3
#### Methods
##### check_prompt(content: str, user_id: Optional[str] = None) -> GuardrailResponse
Checks the safety of a single prompt.
**Parameters:**
* `content`: Text content to be checked
* `user_id`: Optional tenant user ID for per-user risk control and auditing
**Returns:** `GuardrailResponse` object
##### check_conversation(messages: List[Message], model: str = "Xiangxin-Guardrails-Text", user_id: Optional[str] = None) -> GuardrailResponse
Checks conversation context safety (core feature).
**Parameters:**
* `messages`: List of messages, each containing `role` and `content`
* `model`: Model name (default: "Xiangxin-Guardrails-Text")
* `user_id`: Optional tenant user ID
**Returns:** `GuardrailResponse` object
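Before calling `check_conversation`, it can help to validate the `messages` shape documented above on the client side. The sketch below is a hypothetical helper, not part of the SDK; whether the server accepts a `system` role is an assumption.

```python
# Hypothetical helper (not part of the SDK): checks the `messages` shape
# documented above before sending it to check_conversation.
VALID_ROLES = {"user", "assistant", "system"}  # "system" support is an assumption

def validate_messages(messages):
    """Return messages unchanged if every entry has a valid role and content."""
    if not messages:
        raise ValueError("messages must be a non-empty list")
    for i, msg in enumerate(messages):
        if not isinstance(msg, dict):
            raise TypeError(f"message {i} must be a dict, got {type(msg).__name__}")
        if msg.get("role") not in VALID_ROLES:
            raise ValueError(f"message {i} has invalid role: {msg.get('role')!r}")
        if not isinstance(msg.get("content"), str) or not msg["content"].strip():
            raise ValueError(f"message {i} must have non-empty string content")
    return messages
```

Failing fast locally avoids a round trip that would end in a server-side `ValidationError`.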
### AsyncXiangxinAI Class (Asynchronous)
Same initialization parameters as the synchronous version.
#### Methods
##### async check_prompt(content: str) -> GuardrailResponse
Asynchronously checks a single prompt.
##### async check_conversation(messages: List[Message]) -> GuardrailResponse
Asynchronously checks conversation context safety (core feature).
##### async health_check() -> Dict[str, Any]
Checks API service health.
##### async get_models() -> Dict[str, Any]
Retrieves available model list.
##### async close()
Closes async session (automatically handled with `async with`).
### GuardrailResponse Class
Represents detection results.
#### Attributes
* `id`: Unique request ID
* `result.compliance.risk_level`: Compliance risk level
* `result.security.risk_level`: Security risk level
* `result.data.risk_level`: Data leak risk level (added in v2.4.0)
* `result.data.categories`: Detected sensitive data types (added in v2.4.0)
* `overall_risk_level`: Overall risk level (`no_risk` / `low_risk` / `medium_risk` / `high_risk`, matching the example outputs above)
* `suggest_action`: Suggested action (pass / block / substitute)
* `suggest_answer`: Suggested response (optional, includes redacted content if applicable)
* `score`: Confidence score of the results
#### Helper Methods
* `is_safe`: Whether the content is safe
* `is_blocked`: Whether the content is blocked
* `has_substitute`: Whether a substitute answer is provided
* `all_categories`: Get all detected risk categories
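To illustrate what `all_categories` aggregates, here is a standalone sketch that merges categories from a raw response dict shaped like the `result.*` attributes above. The function name `collect_categories` is hypothetical, and the assumption that compliance and security sub-results carry a `categories` list (only `result.data.categories` is documented explicitly) is inferred, not confirmed.

```python
def collect_categories(result: dict) -> list:
    """Merge risk categories across the compliance, security, and data
    sub-results, preserving order and dropping duplicates — a sketch of
    what the SDK's all_categories helper returns."""
    seen, merged = set(), []
    for section in ("compliance", "security", "data"):
        for cat in result.get(section, {}).get("categories", []):
            if cat not in seen:
                seen.add(cat)
                merged.append(cat)
    return merged
```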
## Safety Detection Capabilities
### Risk Levels
* **High Risk:** Sensitive political topics, national image damage, violent crime, prompt attacks
* **Medium Risk:** General political topics, harm to minors, illegal acts, sexual content
* **Low Risk:** Hate speech, insults, privacy violations, commercial misconduct
* **No Risk:** Safe content
### Handling Strategies
* **High Risk:** Recommend blocking
* **Medium Risk:** Recommend substitution with a safe reply
* **Low Risk:** Recommend substitution or business-dependent handling
* **No Risk:** Recommend pass
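The strategy table above can be expressed as a small dispatch function. This is a sketch: the `*_risk` string values are assumed from the example outputs in Quick Start (e.g. `no_risk`), and in practice you would simply follow the API's own `suggest_action` field.

```python
def recommend_action(overall_risk_level: str) -> str:
    """Map an overall risk level to the handling strategy recommended above.
    The *_risk string values are assumed from the Quick Start example outputs."""
    strategy = {
        "high_risk": "block",         # High risk: recommend blocking
        "medium_risk": "substitute",  # Medium risk: substitute a safe reply
        "low_risk": "substitute",     # Low risk: substitute or handle per business rules
        "no_risk": "pass",            # No risk: pass through
    }
    try:
        return strategy[overall_risk_level]
    except KeyError:
        raise ValueError(f"unknown risk level: {overall_risk_level!r}")
```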
## Error Handling
### Synchronous Error Handling
```python
from xiangxinai import XiangxinAI, AuthenticationError, ValidationError, RateLimitError

try:
    result = client.check_prompt("Test content")
except AuthenticationError:
    print("Invalid API key")
except ValidationError as e:
    print(f"Input validation failed: {e}")
except RateLimitError:
    print("Rate limit exceeded")
except Exception as e:
    print(f"Other error: {e}")
```
### Asynchronous Error Handling
```python
import asyncio
from xiangxinai import AsyncXiangxinAI, AuthenticationError, ValidationError, RateLimitError

async def safe_check():
    try:
        async with AsyncXiangxinAI(api_key="your-api-key") as client:
            result = await client.check_prompt("Test content")
            return result
    except AuthenticationError:
        print("Invalid API key")
    except ValidationError as e:
        print(f"Input validation failed: {e}")
    except RateLimitError:
        print("Rate limit exceeded")
    except Exception as e:
        print(f"Other error: {e}")

asyncio.run(safe_check())
```
## Development
```bash
# Clone the project
git clone https://github.com/xiangxinai/xiangxin-guardrails
cd xiangxin-guardrails/client
# Install dev dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Code formatting
black xiangxinai
isort xiangxinai
# Type checking
mypy xiangxinai
```
## License
This project is open-sourced under the [Apache 2.0](https://opensource.org/licenses/Apache-2.0) license.
## Support
* 📧 Technical Support: [wanglei@xiangxinai.cn](mailto:wanglei@xiangxinai.cn)
* 🌐 Official Website: [https://xiangxinai.cn](https://xiangxinai.cn)
* 📖 Documentation: [https://docs.xiangxinai.cn](https://docs.xiangxinai.cn)
* 🐛 Issue Tracker: [https://github.com/xiangxinai/xiangxin-guardrails/issues](https://github.com/xiangxinai/xiangxin-guardrails/issues)
---
Made with ❤️ by [Xiangxin AI](https://xiangxinai.cn)