panoptica-genai-protection

Name: panoptica-genai-protection
Version: 0.1.18
Summary: Protecting GenAI from Prompt Injection
Author: marvin-team@cisco.com
Requires Python: >=3.9
Upload time: 2024-09-09 11:27:13
# Panoptica GenAI Protection SDK
A simple Python client SDK for integrating with Panoptica GenAI Protection.
___
GenAI Protection is part of Panoptica, a cloud-native application protection platform (CNAPP), and protects LLM-backed systems.
Specifically, the GenAI Protection SDK inspects both prompts and LLM responses, flagging those it identifies as likely containing malicious content with a high degree of certainty.

The Python SDK lets you programmatically integrate your system with our LLM protection software,
enabling you to verify the safety of a user-requested prompt before actually processing it.
Based on this evaluation, your application can then determine the appropriate next steps according to your policy.

## Installation
```shell
pip install panoptica_genai_protection
```

## Usage Example
Working assumptions:
- You have generated a key pair for GenAI Protection in the Panoptica settings screen (a fail-fast check for both variables is sketched below):
  * The access key is set in the `GENAI_PROTECTION_ACCESS_KEY` environment variable.
  * The secret key is set in the `GENAI_PROTECTION_SECRET_KEY` environment variable.
- We denote the call that generates the LLM response as `get_llm_response()`.
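
For example, a minimal fail-fast check for these credentials (using only the variable names assumed above) might be:

```python
import os

# Fail fast if the GenAI Protection credentials are not configured
for var in ("GENAI_PROTECTION_ACCESS_KEY", "GENAI_PROTECTION_SECRET_KEY"):
    if not os.environ.get(var):
        raise RuntimeError(f"Missing required environment variable: {var}")
```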

`GenAIProtectionClient` provides the `check_llm_prompt` method to determine the safety level of a given prompt.


### Sample Snippet

```python
from panoptica_genai_protection.client import GenAIProtectionClient
from panoptica_genai_protection.gen.models import Result as InspectionResult

# ... Other code in your module ...

# Initialize the client
genai_protection_client = GenAIProtectionClient()

# Send the prompt for inspection BEFORE sending it to the LLM
inspection_result = genai_protection_client.check_llm_prompt(
    chat_request.prompt,
    api_name="chat_service",  # Name of the service running the LLM
    api_endpoint_name="/chat",  # Name of the endpoint serving the LLM interaction
    sequence_id=chat_id,  # UUID of the chat; if you don't have one, provide `None`
    actor="John Doe",  # Name of the "actor" interacting with the LLM service
    actor_type="user",  # Actor type, one of {"user", "ip", "bot"}
)

if inspection_result.result == InspectionResult.safe:
    # Prompt is safe, generate an LLM response
    llm_response = get_llm_response(chat_request.prompt)

    # Call GenAI Protection on the LLM response (completion)
    inspection_result = genai_protection_client.check_llm_response(
        prompt=chat_request.prompt,
        response=llm_response,
        api_name="chat_service",
        api_endpoint_name="/chat",
        actor="John Doe",
        actor_type="user",
        request_id=inspection_result.reqId,
        sequence_id=chat_id,
    )
    if inspection_result.result == InspectionResult.safe:
        # Both checks passed; return the LLM answer to the user
        answer_response = llm_response
    else:
        # LLM answer is flagged as unsafe; return a predefined error message to the user
        answer_response = "Something went wrong."
else:
    # Prompt is flagged as unsafe; return a predefined error message to the user
    answer_response = "Something went wrong."
```
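
In practice, you may want to wrap both checks in a small helper so call sites only deal with the final answer. Below is a minimal sketch following the same assumptions as above (`get_llm_response`, `chat_id`, and the service/actor names are placeholders, not part of the SDK):

```python
from typing import Optional

from panoptica_genai_protection.client import GenAIProtectionClient
from panoptica_genai_protection.gen.models import Result as InspectionResult


def guarded_llm_call(prompt: str, chat_id: Optional[str]) -> str:
    """Inspect the prompt, generate a completion, inspect it, and return a safe answer."""
    client = GenAIProtectionClient()

    prompt_check = client.check_llm_prompt(
        prompt,
        api_name="chat_service",
        api_endpoint_name="/chat",
        sequence_id=chat_id,
        actor="John Doe",
        actor_type="user",
    )
    if prompt_check.result != InspectionResult.safe:
        return "Something went wrong."  # Prompt flagged as unsafe

    llm_response = get_llm_response(prompt)  # Your own LLM call (assumed defined)

    response_check = client.check_llm_response(
        prompt=prompt,
        response=llm_response,
        api_name="chat_service",
        api_endpoint_name="/chat",
        actor="John Doe",
        actor_type="user",
        request_id=prompt_check.reqId,
        sequence_id=chat_id,
    )
    if response_check.result != InspectionResult.safe:
        return "Something went wrong."  # Completion flagged as unsafe

    return llm_response
```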

#### Async use
You can use the client in an async context in two ways:
```python
async def my_async_call_to_gen_ai_protection(prompt: str):
    client = GenAIProtectionClient(as_async=True)
    return await client.check_llm_prompt_async(
        prompt=prompt,
        api_name="test",
        api_endpoint_name="/test",
        actor="John Doe",
        actor_type="user"
    )
```
or
```python
async def my_other_async_call_to_gen_ai_protection(prompt: str):
    async with GenAIProtectionClient() as client:
        return await client.check_llm_prompt_async(
            prompt=prompt,
            api_name="test",
            api_endpoint_name="/test",
            actor="John Doe",
            actor_type="user",
        )
```
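
Either variant can then be driven from an event loop; for example (a minimal sketch, assuming the async method returns the same inspection result object as its sync counterpart):

```python
import asyncio

# Minimal driver for the first async example above
result = asyncio.run(my_async_call_to_gen_ai_protection("Hello there"))
print(result.result)
```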

            
