mona-openai

Name: mona-openai
Version: 0.2.1
Summary: Integration client for monitoring OpenAI usage with Mona
Homepage: https://github.com/monalabs/mona-openai
Author email: Itai Bar Sinai <itai@monalabs.io>
Upload time: 2024-01-11 22:26:37
Requires Python: >=3.9
Keywords: openai, llms, gpt, mona, monitoring, ai
Requirements: none recorded
            # Mona-OpenAI Integration Client
<p align="center">
  <img src="https://github.com/monalabs/mona-sdk/blob/main/mona_logo.png?raw=true" alt="Mona's logo" width="180"/>
</p>

<p align="center"><a target="_blank" href="https://monalabs.wistia.com/medias/l6xmdj3cd6?wvideo=l6xmdj3cd6"><img src="https://embed-ssl.wistia.com/deliveries/c15bb616a389fa7d752968ccb3af2ab4.jpg?wistia-l6xmdj3cd6-1-l6xmdj3cd6-video-thumbnail=1&amp;image_play_button_size=2x&amp;image_crop_resized=960x540&amp;image_play_button=1&amp;image_play_button_color=66c7d1e0" width="400" height="225" style="width: 400px; height: 225px;"></a></p>


Use one line of code to get instant live monitoring for your OpenAI usage, including:
* Token usage
* Hallucination alerts
* Profanity and privacy analyses
* Behavioral drifts and anomalies
* LangChain support
* Much much more

## Setting Up

```console
$ pip install mona_openai
```

## Quick Start

You can find boilerplate code for many use cases under [the "examples" folder](https://github.com/monalabs/mona-openai/tree/main/examples).

### With Mona

[Sign up for a free Mona account here](https://www.monalabs.io/openai-gpt-monitoring).

```py
from openai import OpenAI
from os import environ

from mona_openai import monitor_client

MONA_API_KEY = environ.get("MONA_API_KEY")
MONA_SECRET = environ.get("MONA_SECRET")
MONA_CREDS = {
    "key": MONA_API_KEY,
    "secret": MONA_SECRET,
}

# This is the name of the monitoring class on Mona
MONITORING_CONTEXT_NAME = "NEW_CHAT_CLIENT_CONTEXT"

openAI_client = monitor_client(OpenAI(api_key=environ.get("OPEN_AI_KEY")), MONA_CREDS, MONITORING_CONTEXT_NAME)

response = openAI_client.chat.completions.create(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"},
    {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
    {"role": "user", "content": "Where was it played?"}
  ]
)
print(response.choices[0].message.content)
```

### With Standard Logging

```py
from openai import OpenAI
from os import environ

from mona_openai import monitor_client_with_logger

from mona_openai.loggers import StandardLogger
from logging import WARNING

MONA_API_KEY = environ.get("MONA_API_KEY")
MONA_SECRET = environ.get("MONA_SECRET")
MONA_CREDS = {
    "key": MONA_API_KEY,
    "secret": MONA_SECRET,
}

# This is the name of the monitoring class on Mona
MONITORING_CONTEXT_NAME = "NEW_CHAT_CLIENT_CONTEXT"

openAI_client = monitor_client_with_logger(OpenAI(api_key=environ.get("OPEN_AI_KEY")), StandardLogger(WARNING))

response = openAI_client.chat.completions.create(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"},
    {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
    {"role": "user", "content": "Where was it played?"}
  ]
)
print(response.choices[0].message.content)
```

## Supported OpenAI APIs
This client currently supports only the Chat Completions API. Mona itself (without this client) can also monitor processes based on other APIs, as well as non-OpenAI-based apps.
If you have a different use case, we'd love to hear about it! Please email us at support@monalabs.io.

## Usage
### Initialization

The main functions exposed in this package are `monitor_client` and `monitor_client_with_logger`.

These functions return an OpenAI client whose original chat completion method is wrapped with an equivalent API that also logs relevant metrics for monitoring behind the scenes.

See above quick start examples for usage.

#### Specs
The `specs` arg allows you to configure what should be monitored. It expects a Python dict with the following possible keys (defaults in parentheses); see the example after this list:
* sampling_ratio (1): A number between 0 and 1 specifying the fraction of calls to log.
* avoid_monitoring_exceptions (False): Whether to skip logging to Mona when there is an OpenAI exception. By default exceptions are tracked, and Mona will alert you on things like a jump in the number of exceptions.
* export_prompt (False): Whether Mona should export the actual prompt text. Set to False by default to avoid privacy concerns.
* export_response_texts (False): Whether Mona should export the actual response texts. Set to False by default to avoid privacy concerns.
* analysis: A dictionary mapping each analysis type to a boolean value telling the client whether to run that analysis and log it to Mona. Possible options are currently "privacy", "profanity", and "textual". By default, all analyses run and are logged to Mona.
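
A minimal sketch of configuring these keys, assuming `specs` is accepted as a keyword argument by `monitor_client` (check the signature in your installed version); the values below are purely illustrative:

```py
from os import environ

from openai import OpenAI
from mona_openai import monitor_client

MONA_CREDS = {"key": environ.get("MONA_API_KEY"), "secret": environ.get("MONA_SECRET")}
MONITORING_CONTEXT_NAME = "NEW_CHAT_CLIENT_CONTEXT"

# Illustrative specs dict using the keys documented above.
specs = {
    "sampling_ratio": 0.5,             # log roughly half of the calls
    "avoid_monitoring_exceptions": False,
    "export_prompt": False,            # keep prompt text private
    "export_response_texts": False,    # keep response text private
    "analysis": {"privacy": True, "profanity": True, "textual": True},
}

openAI_client = monitor_client(
    OpenAI(api_key=environ.get("OPEN_AI_KEY")),
    MONA_CREDS,
    MONITORING_CONTEXT_NAME,
    specs=specs,  # assumption: the specs dict is passed via this kwarg
)
```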

### Using custom loggers
You don't need a Mona account to use this package. You can define specific loggers to log the data out to a file, to memory, or through a given Python logger.

This SDK provides a simple interface for implementing your own loggers by inheriting from Logger under loggers/logger.py.
Alternatively, by using the standard Python logging library as in the example above, you can attach logging handlers to route the data to any mechanism you choose (e.g., Kafka, Logstash, etc.), as sketched below.
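
For example, here is a minimal sketch that also writes the monitoring records to a local file. It assumes `StandardLogger` emits through the standard logging module (as the example above suggests); the file path and formatter are illustrative choices:

```py
import logging
from logging import WARNING
from os import environ

from openai import OpenAI
from mona_openai import monitor_client_with_logger
from mona_openai.loggers import StandardLogger

# Attach a file handler so records emitted via the standard logging
# module also end up in a local file (the path is just an example).
handler = logging.FileHandler("mona_monitoring.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logging.getLogger().addHandler(handler)

openAI_client = monitor_client_with_logger(
    OpenAI(api_key=environ.get("OPEN_AI_KEY")),
    StandardLogger(WARNING),
)
```

The same pattern works with any standard logging handler (e.g., a handler that ships records to Kafka or Logstash).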

### Mona arguments you can add to the API call

* MONA_context_id: The unique ID of the context in which the call is made. Using this ID, you can export more data to the same context on Mona from other places. If not supplied, the "id" field of the OpenAI endpoint's response is used as the Mona context ID automatically.
* MONA_export_timestamp: Can be used to make the call appear to Mona as if it was made at a different time.
* MONA_additional_data: A JSON-serializable dict with any other data you want to add to the monitoring context. This comes in handy when you want to add information that isn't part of the basic OpenAI API call, for example a template ID or the customer ID this call is being made for, so you get full context when monitoring with Mona (see the sketch after this list).
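
A minimal sketch of passing these arguments on a monitored call; `openAI_client` is the wrapped client from the quick start, and the specific values here are only examples:

```py
from time import time

response = openAI_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Where was the 2020 World Series played?"}],
    # Mona-specific arguments (values are illustrative):
    MONA_context_id="customer-1234-session-42",
    MONA_export_timestamp=int(time()) - 3600,  # pretend the call happened an hour ago
    MONA_additional_data={"template_id": "qa_v2", "customer_id": "1234"},
)
```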


### Using OpenAI with REST calls instead of OpenAI's Python client

See the REST examples in the legacy examples folder.

### Stream support

OpenAI allows receiving responses as a stream of tokens using the "stream" parameter. When this is used, Mona collects all the tokens in memory (without interrupting the streaming process), then runs its analyses and logs the data the moment the stream is over. You don't need to do anything extra to make this happen.

Since OpenAI doesn't supply the full usage token summary for streaming responses, Mona uses the tiktoken package to count the prompt and completion tokens and log them for monitoring.
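
A minimal streaming sketch using the wrapped client from the quick start; the monitoring described above happens automatically once the stream has been consumed:

```py
stream = openAI_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a haiku about monitoring."}],
    stream=True,
)

# Consume the stream as usual; Mona logs its analysis when it ends.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")
```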

NOTE: Stream is currently only supported with SDK usage, and not with using REST directly.

## Legacy LangChain support

You can use the exported `monitor_langchain_llm` to wrap a LangChain OpenAI LLM (chat or normal) with Mona's monitoring capabilities:

```py
from mona_openai import monitor_langchain_llm

from langchain.llms import OpenAI

# Wrap the LLM object with Mona monitoring.
llm = monitor_langchain_llm(
    OpenAI(openai_api_key=OPEN_AI_KEY),
    MONA_CREDS,
    MONITORING_CONTEXT_NAME)
```

See the full example in completion_langchain.py in the examples folder.

## Mona SDK

This package uses the mona_sdk package to export the relevant data to Mona. Several environment variables let you configure the SDK's behavior; for example, you can set it to raise exceptions when exporting data to Mona fails (it doesn't by default).

## Monitoring for profanity

Mona uses the alt-profanity-check package (https://pypi.org/project/alt-profanity-check/) to create both boolean predictions and probability scores for the existence of profanity in the prompt and in the responses. We use the package's built-in methods for that. If you want, for example, to use a different probability threshold for the boolean prediction, you can do that by changing your Mona config on the Mona dashboard.
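
For reference, a minimal sketch of the kind of check this relies on, calling alt-profanity-check's public functions directly (the texts are illustrative; the integration runs an equivalent analysis for you):

```py
from profanity_check import predict, predict_prob

texts = ["You are a helpful assistant.", "Some questionable user input"]

print(predict(texts))       # boolean-style predictions, e.g. [0 0]
print(predict_prob(texts))  # probability scores, e.g. [0.03 0.12]
```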

## Using nest-asyncio

In environments with an always-running event loop (e.g., Jupyter notebooks), the client might use [nest_asyncio.apply()](https://pypi.org/project/nest-asyncio/) to run sync and async code together.

            
