panoptica-genai-protection

- Name: panoptica-genai-protection
- Version: 0.1.12
- Summary: Protecting GenAI from Prompt Injection
- Author: marvin-team@cisco.com
- Requires Python: >=3.9
- Upload time: 2024-07-09 10:28:35

# Panoptica GenAI Protection SDK
A simple Python client SDK for integrating with Panoptica GenAI Protection.
___
GenAI Protection is part of Panoptica, a cloud-native application protection platform (CNAPP), and provides protection for LLM-backed systems.
Specifically, the GenAI Protection SDK inspects both input and output prompts, flagging those it identifies, with a high degree of certainty, as likely containing malicious content.

The Python SDK lets you programmatically integrate your system with our LLM protection software,
enabling you to verify the safety level of a user-requested prompt before actually processing it.
Following this evaluation, your application can determine the appropriate next steps based on your policy.

## Installation
```shell
pip install panoptica_genai_protection
```

## Usage Example
Working assumptions:
- You have generated a key pair for GenAI Protection in the Panoptica settings screen
  * The access key is set in the `GENAI_PROTECTION_ACCESS_KEY` environment variable
  * The secret key is set in the `GENAI_PROTECTION_SECRET_KEY` environment variable (see the shell snippet below)
- We denote the call that generates the LLM response as `get_llm_response()`
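
For example, the keys might be exported in a shell before starting your application (a minimal sketch; the values here are placeholders):

```shell
export GENAI_PROTECTION_ACCESS_KEY="<your-access-key>"
export GENAI_PROTECTION_SECRET_KEY="<your-secret-key>"
```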

`GenAIProtectionClient` provides the `check_llm_prompt` method to determine the safety level of a given prompt.


### Sample Snippet

```python
from panoptica_genai_protection.client import GenAIProtectionClient
from panoptica_genai_protection.gen.models import Result as InspectionResult

# ... Other code in your module ...

# initialize the client
genai_protection_client = GenAIProtectionClient()

# Send the prompt for inspection BEFORE sending it to the LLM
inspection_result = genai_protection_client.check_llm_prompt(
  chat_request.prompt,
  api_name="chat_service",  # Name of the service running the LLM
  api_endpoint_name="/chat",  # Name of the endpoint serving the LLM interaction
  sequence_id=chat_id,  # UUID of the chat; if you don't have one, provide `None`
  actor="John Doe",  # Name of the "actor" interacting with the LLM service.
  actor_type="user",  # Actor type, one of {"user", "ip", "bot"}
)

if inspection_result.result == InspectionResult.safe:
  # Prompt is safe, generate an LLM response
  llm_response = get_llm_response(
    chat_request.prompt
  )

  # Call GenAI protection on LLM response (completion)
  inspection_result = genai_protection_client.check_llm_response(
    prompt=chat_request.prompt,
    response=llm_response,
    api_name="chat_service",
    api_endpoint_name="/chat",
    actor="John Doe",
    actor_type="user",
    request_id=inspection_result.reqId,
    sequence_id=chat_id,
  )
  if inspection_result.result != InspectionResult.safe:
    # LLM answer is flagged as unsafe, return a predefined error message to the user
    answer_response = "Something went wrong."
else:
  # Prompt is flagged as unsafe, return a predefined error message to the user
  answer_response = "Something went wrong."
```
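
If it helps, the same flow can be wrapped in a single guard function. The sketch below makes the same assumptions as the snippet above (`get_llm_response` and the fallback message are placeholders from your application, not part of the SDK):

```python
from typing import Optional

from panoptica_genai_protection.client import GenAIProtectionClient
from panoptica_genai_protection.gen.models import Result as InspectionResult

FALLBACK_MESSAGE = "Something went wrong."  # predefined error message


def guarded_llm_call(client: GenAIProtectionClient, prompt: str, chat_id: Optional[str]) -> str:
    """Inspect the prompt, call the LLM, then inspect the completion."""
    prompt_check = client.check_llm_prompt(
        prompt,
        api_name="chat_service",
        api_endpoint_name="/chat",
        sequence_id=chat_id,
        actor="John Doe",
        actor_type="user",
    )
    if prompt_check.result != InspectionResult.safe:
        # Prompt flagged as unsafe: do not call the LLM at all
        return FALLBACK_MESSAGE

    llm_response = get_llm_response(prompt)  # your LLM call (placeholder)

    response_check = client.check_llm_response(
        prompt=prompt,
        response=llm_response,
        api_name="chat_service",
        api_endpoint_name="/chat",
        actor="John Doe",
        actor_type="user",
        request_id=prompt_check.reqId,
        sequence_id=chat_id,
    )
    # Only return the completion if it was also inspected as safe
    return llm_response if response_check.result == InspectionResult.safe else FALLBACK_MESSAGE
```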

#### Async use
You may use the client in an async context in two ways:
```python
async def my_async_call_to_gen_ai_protection(prompt: str):
    client = GenAIProtectionClient(as_async=True)
    return await client.check_llm_prompt_async(
        prompt=prompt,
        api_name="test",
        api_endpoint_name="/test",
        actor="John Doe",
        actor_type="user"
    )
```
or
```python
async def my_other_async_call_to_gen_ai_protection(prompt: str):
    async with GenAIProtectionClient() as client:
        return await client.check_llm_prompt_async(
            prompt=prompt,
            api_name="test",
            api_endpoint_name="/test",
            actor="John Doe",
            actor_type="user",
        )
```
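
Either form can then be driven from an event loop, e.g. (a minimal sketch, assuming one of the helpers above is in scope):

```python
import asyncio

# Inspect a prompt using the async helper defined above
inspection = asyncio.run(my_async_call_to_gen_ai_protection("Hello, world"))
print(inspection.result)  # e.g. InspectionResult.safe
```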

            
