OpenLLMTelemetry

- Name: OpenLLMTelemetry
- Version: 0.0.1b4
- Summary: End-to-end observability with built-in security guardrails.
- Author: WhyLabs.ai
- License: Apache-2.0
- Requires Python: >=3.8.1, <4.0
- Upload time: 2024-04-25 19:19:21
# OpenLLMTelemetry

`openllmtelemetry` is an open-source Python library that provides OpenTelemetry integration for Large Language Models (LLMs). It is designed to facilitate tracing of applications that leverage LLMs and Generative AI, enabling better observability and monitoring.

## Features

- Easy integration with OpenTelemetry for LLM applications.
- Real-time tracing and monitoring of LLM-based systems.
- Enhanced safeguards and insights for your LLM applications.

## Installation

To install `openllmtelemetry`, simply use pip:

```bash
pip install openllmtelemetry
```
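As a quick sanity check (optional, and not from the project docs), you can confirm the package imports cleanly:

```bash
python -c "import openllmtelemetry"
```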

## Usage 🚀

Here's a basic example of how to use **OpenLLMTelemetry** in your project.

First, you need to set up a few environment variables to specify where your LLM telemetry should be sent. Make sure you also have any API keys set for interacting with your LLM and for sending the telemetry to [WhyLabs](https://whylabs.ai/free?utm_source=openllmtelemetry-Github&utm_medium=openllmtelemetry-readme&utm_campaign=WhyLabs_Secure).

```python
import os

os.environ["WHYLABS_DEFAULT_DATASET_ID"] = "your-model-id" #  e.g. model-1 
os.environ["WHYLABS_API_KEY"] = "replace-with-your-whylabs-api-key"

```

Once you've verified your environment variables are set, you can instrument your app by running the following:

```python
import openllmtelemetry

openllmtelemetry.instrument()
```

This will automatically instrument your LLM calls to gather OpenTelemetry traces and send them to WhyLabs.
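As with most OpenTelemetry auto-instrumentation, the safest pattern is to call `instrument()` once at startup, before any LLM clients are created (an assumption based on common OpenTelemetry practice rather than anything stated here). A minimal end-to-end setup might look like:

```python
import os

# Placeholder values; replace with your own WhyLabs model ID and API key
os.environ["WHYLABS_DEFAULT_DATASET_ID"] = "model-1"
os.environ["WHYLABS_API_KEY"] = "replace-with-your-whylabs-api-key"

import openllmtelemetry

# Instrument once, before constructing any LLM clients,
# so that subsequent calls are traced and sent to WhyLabs
openllmtelemetry.instrument()
```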

## Integration: OpenAI
Integration with an OpenAI application is straightforward with the `openllmtelemetry` package.

First, you need to set a few environment variables. This can be done via your container setup or in code.

```python
import os 

os.environ["WHYLABS_API_KEY"] = "<your-whylabs-api-key>"
os.environ["WHYLABS_DEFAULT_DATASET_ID"] = "<your-llm-resource-id>"
os.environ["WHYLABS_GUARD_ENDPOINT"] = "<your container endpoint>"
os.environ["WHYLABS_GUARD_API_KEY"] = "internal-secret-for-whylabs-Secure"
```

Once this is done, all of your OpenAI interactions will be automatically traced. If you have rulesets enabled for blocking in your WhyLabs Secure policy, the library will block requests accordingly:

```python
from openai import OpenAI
client = OpenAI()

response = client.chat.completions.create(
  model="gpt-3.5-turbo",
  messages=[
    {
      "role": "system",
      "content": "You are a helpful chatbot. "
    },
    {
      "role": "user",
      "content": "Aren't noodles amazing?"
    }
  ],
  temperature=0.7,
  max_tokens=64,
  top_p=1
)
```
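The traced call returns a standard OpenAI response object, so you can read the completion as usual; for example:

```python
# Print the assistant's reply from the traced response
print(response.choices[0].message.content)
```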

## Integration: Amazon Bedrock

One of the nice things about `openllmtelemetry` is that a single call to instrument your app works across various LLM providers. Using the same `instrument()` call above, you can also invoke models via the boto3 `bedrock-runtime` client and interact with LLMs such as Titan, and you get the same level of telemetry extracted and sent to WhyLabs.

Note: you may need to verify that your boto3 credentials are configured before running the example below.
For details, see the [boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html).

```python
import json
import logging

import boto3

logger = logging.getLogger(__name__)


def bedrock_titan(prompt: str):
    # Default to None so the function returns cleanly if the call fails
    response_body = None
    try:
        model_id = 'amazon.titan-text-express-v1'
        # The instrumented bedrock-runtime client is traced like any other LLM call
        brt = boto3.client(service_name='bedrock-runtime')
        response = brt.invoke_model(body=json.dumps({"inputText": prompt}), modelId=model_id)
        response_body = json.loads(response.get("body").read())
    except Exception as error:
        logger.error(f"A client error occurred: {error}")

    return response_body

response = bedrock_titan("What is your name and what is the origin and reason for that name?")
print(response)
```
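If you only want the generated text, Titan text models place it under `results[0]["outputText"]` in the response body (field names follow the Bedrock Titan text response format; verify against the current Bedrock docs):

```python
# Extract just the generated text from the Titan response body
if response and response.get("results"):
    print(response["results"][0]["outputText"])
```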

## Requirements 📋

- Python 3.8.1 or higher
- opentelemetry-api
- opentelemetry-sdk

## Contributing 👐

Contributions are welcome! For major changes, please open an issue first to discuss what you would like to change. Please make sure to update tests as appropriate.

## License 📄

**OpenLLMTelemetry** is licensed under the Apache-2.0 License. See [LICENSE](LICENSE) for more details.

## Contact 📧

For support or any questions, feel free to contact us at support@whylabs.ai.

## Documentation
More documentation can be found on the WhyLabs site: https://whylabs.ai/docs/
