deeprails

Name: deeprails
Version: 0.3.0
Summary: Python SDK for interacting with the DeepRails API
Upload time: 2025-08-20 16:22:08
Requires Python: >=3.8
License: MIT (Copyright (c) 2025 DeepRails Inc.)
Keywords: ai, deeprails, evaluation, genai, guardrails, sdk
# DeepRails Python SDK

A lightweight, intuitive Python SDK for interacting with the DeepRails API. DeepRails helps you evaluate and improve AI-generated outputs through a comprehensive set of guardrail metrics.

## Installation

```bash
pip install deeprails
```

## Quick Start

```python
from deeprails import DeepRails

# Initialize with your API token
client = DeepRails(token="YOUR_API_KEY")

# Create an evaluation
evaluation = client.create_evaluation(
    model_input={"user_prompt": "Prompt used to generate completion"},
    model_output="Generated output",
    model_used="gpt-4o-mini",
    guardrail_metrics=["correctness", "completeness"]
)
print(f"Evaluation created with ID: {evaluation.eval_id}")

# Create a monitor
monitor = client.create_monitor(
    name="Production Assistant Monitor",
    description="Tracking our production assistant quality"
)
print(f"Monitor created with ID: {monitor.monitor_id}")
```

## Features

- **Simple API**: Just a few lines of code to integrate evaluation into your workflow
- **Comprehensive Metrics**: Evaluate outputs on correctness, completeness, and more
- **Real-time Progress**: Track evaluation progress in real-time
- **Detailed Results**: Get detailed scores and rationales for each metric
- **Continuous Monitoring**: Create monitors to track AI system performance over time

## Authentication

All API requests require authentication using your DeepRails API key. Your API key is a sensitive credential that should be kept secure.

```python
# Best practice: Load token from environment variable
import os
token = os.environ.get("DEEPRAILS_API_KEY")
client = DeepRails(token=token)
```

## Evaluation Service

### Creating Evaluations

```python
try:
    evaluation = client.create_evaluation(
        model_input={"user_prompt": "Prompt used to generate completion"},
        model_output="Generated output",
        model_used="gpt-4o-mini",
        guardrail_metrics=["correctness", "completeness"]
    )
    print(f"ID: {evaluation.eval_id}")
    print(f"Status: {evaluation.evaluation_status}")
    print(f"Progress: {evaluation.progress}%")
except Exception as e:
    print(f"Error: {e}")
```

#### Parameters

- `model_input`: Dictionary containing the prompt and any context (must include `user_prompt`)
- `model_output`: The generated output to evaluate
- `model_used`: (Optional) The model that generated the output
- `run_mode`: (Optional) Evaluation run mode - defaults to "smart"
- `guardrail_metrics`: (Optional) List of metrics to evaluate
- `nametag`: (Optional) Custom identifier for this evaluation
- `webhook`: (Optional) URL to receive completion notifications
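Putting the optional parameters together, a full request might look like the sketch below. The extra `context` field, the nametag, and the webhook URL are illustrative placeholders, not values the API requires.

```python
# A full evaluation request exercising the optional parameters.
# The context text, nametag, and webhook URL are placeholders.
request = {
    "model_input": {
        "user_prompt": "Summarize the attached report",
        "context": "Q3 revenue grew 12% year over year...",  # any extra context fields
    },
    "model_output": "Q3 revenue grew 12%, driven by...",
    "model_used": "gpt-4o-mini",
    "run_mode": "smart",  # the default
    "guardrail_metrics": ["correctness", "completeness", "context_adherence"],
    "nametag": "q3-report-summary-001",
    "webhook": "https://example.com/deeprails/callback",
}

# evaluation = client.create_evaluation(**request)
```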

### Retrieving Evaluations

```python
try:
    eval_id = "eval-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
    evaluation = client.get_evaluation(eval_id)
    
    print(f"Status: {evaluation.evaluation_status}")
    
    if evaluation.evaluation_result:
        print("\nResults:")
        for metric, result in evaluation.evaluation_result.items():
            score = result.get('score', 'N/A')
            print(f"  {metric}: {score}")
except Exception as e:
    print(f"Error: {e}")
```
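Evaluations run asynchronously, so a common pattern is to poll `get_evaluation` until the run reaches a terminal state. A minimal sketch; the terminal status names `"completed"` and `"failed"` are assumptions, so check them against the statuses your API actually returns:

```python
import time

def wait_for_evaluation(client, eval_id, poll_interval=2.0, timeout=120.0,
                        done_states=("completed", "failed")):
    """Poll until the evaluation reaches a terminal status or the timeout expires.

    `done_states` holds assumed terminal status names; adjust to match the API.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        evaluation = client.get_evaluation(eval_id)
        if evaluation.evaluation_status in done_states:
            return evaluation
        time.sleep(poll_interval)
    raise TimeoutError(f"Evaluation {eval_id} did not finish within {timeout}s")
```

Usage: `evaluation = wait_for_evaluation(client, eval_id)` after `create_evaluation` returns.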

## Monitor Service

### Creating Monitors

```python
try:
    # Create a monitor
    monitor = client.create_monitor(
        name="Production Chat Assistant Monitor",
        description="Monitoring our production chatbot responses"
    )
    
    print(f"Monitor created with ID: {monitor.monitor_id}")
except Exception as e:
    print(f"Error: {e}")
```

### Logging Monitor Events

```python
try:
    # Add an event to the monitor
    event = client.create_monitor_event(
        monitor_id="mon-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
        model_input={"user_prompt": "Tell me about renewable energy"},
        model_output="Renewable energy comes from natural sources...",
        model_used="gpt-4o-mini",
        guardrail_metrics=["correctness", "completeness", "comprehensive_safety"]
    )
    
    print(f"Monitor event created with ID: {event.event_id}")
    print(f"Associated evaluation ID: {event.evaluation_id}")
except Exception as e:
    print(f"Error: {e}")
```
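In production you typically log events as traffic arrives. One way to batch that up is a small helper that records a list of (prompt, output) pairs against a monitor; this is a sketch built only on the `create_monitor_event` call shown above:

```python
def log_turns(client, monitor_id, turns, model_used="gpt-4o-mini",
              guardrail_metrics=("correctness", "completeness")):
    """Log each (user_prompt, model_output) pair as a monitor event.

    Returns the list of created event IDs, in order.
    """
    event_ids = []
    for user_prompt, model_output in turns:
        event = client.create_monitor_event(
            monitor_id=monitor_id,
            model_input={"user_prompt": user_prompt},
            model_output=model_output,
            model_used=model_used,
            guardrail_metrics=list(guardrail_metrics),
        )
        event_ids.append(event.event_id)
    return event_ids
```

Usage: `log_turns(client, monitor.monitor_id, [("Tell me about solar", "Solar power...")])`.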

### Retrieving Monitor Data

```python
try:
    # Get monitor details
    monitor = client.get_monitor("mon-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx")
    print(f"Monitor name: {monitor.name}")
    print(f"Status: {monitor.monitor_status}")
    
    # Get monitor events
    events = client.get_monitor_events(
        monitor_id="mon-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", 
        limit=10
    )
    
    for event in events:
        print(f"Event ID: {event.event_id}")
        print(f"Evaluation ID: {event.evaluation_id}")
        
    # List all monitors with filtering
    monitors = client.get_monitors(
        limit=5,
        monitor_status=["active"],
        sort_by="created_at",
        sort_order="desc"
    )
    
    print(f"Total monitors: {monitors.pagination.total_count}")
    for m in monitors.monitors:
        print(f"{m.name}: {m.event_count} events")
except Exception as e:
    print(f"Error: {e}")
```

## Available Metrics

- `correctness`: Measures factual accuracy by evaluating whether each claim in the output is true and verifiable.
- `completeness`: Assesses whether the response addresses all necessary parts of the prompt with sufficient detail and relevance.
- `instruction_adherence`: Checks whether the AI followed the explicit instructions in the prompt and system directives.
- `context_adherence`: Determines whether each factual claim is directly supported by the provided context.
- `ground_truth_adherence`: Measures how closely the output matches a known correct answer (gold standard).
- `comprehensive_safety`: Detects and categorizes safety violations across areas like PII, CBRN, hate speech, self-harm, and more.
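The `evaluation_result` mapping shown earlier pairs each metric with a result dict containing a `score`. A small helper can flag metrics that fall below a quality bar; note the numeric score range and the 0.8 default threshold here are assumptions, not documented API behavior:

```python
def failing_metrics(evaluation_result, threshold=0.8):
    """Return {metric: score} for every metric scoring below the threshold.

    Results without a numeric 'score' are skipped rather than treated
    as failures; a missing evaluation_result yields an empty dict.
    """
    failures = {}
    for metric, result in (evaluation_result or {}).items():
        score = result.get("score")
        if score is not None and score < threshold:
            failures[metric] = score
    return failures
```

Usage: `failing_metrics(evaluation.evaluation_result)` after the evaluation completes.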

## Error Handling

The SDK raises `DeepRailsAPIError` for API-related errors; the exception carries the HTTP status code and a detailed error message.

```python
from deeprails import DeepRailsAPIError

try:
    # Any SDK call can raise; for example:
    evaluation = client.get_evaluation("eval-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx")
except DeepRailsAPIError as e:
    print(f"API Error: {e.status_code} - {e.error_detail}")
except Exception as e:
    print(f"Unexpected error: {e}")
```
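For transient failures such as rate limits or timeouts, a retry with exponential backoff is often enough. The sketch below is generic: which `DeepRailsAPIError` status codes count as retryable is not documented here, so the caller supplies the exception types to retry on:

```python
import time

def with_retries(operation, retryable=(Exception,), attempts=3, base_delay=1.0):
    """Run `operation`, retrying on `retryable` errors with exponential backoff.

    Re-raises the last error once all attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except retryable:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Usage: `evaluation = with_retries(lambda: client.get_evaluation(eval_id), retryable=(DeepRailsAPIError,))`.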

## Support

For questions or support, please contact support@deeprails.ai.
            
