totvs-dta-utils

Name: totvs-dta-utils
Version: 0.1.6
Home page: https://github.com/totvs-ai/dta-utils-python
Summary: Lib for integration with DTA services
Upload time: 2024-04-26 20:15:50
Author: TotvsLabs
Requires Python: >=3.10
Keywords: dta
# Dta Utils

**Connect your app to DTA Services**


### What are DTA Services?

A collection of services that ease and speed up the development and monitoring of applications, with a focus on generative AI apps.


## Introduction

You can use this package to easily integrate your application with the following services:
- DTA Proxy (manage and control access to LLM models)
- DTA Logs (monitor app usage and user feedback)

Every integration relies on a service key generated on DTA to use its services.


## Dependencies

- [Python](https://www.python.org/) >= 3.10
- [Langfuse](https://github.com/langfuse/langfuse-python) >= 2


## Installation

You can install it using pip:

```bash
pip install dta-utils
```


## Basic usage

1. Install this package into your app

2. Add the following keys to your environment variables:

```bash
DTA_PROXY_KEY="sk..."
DTA_LOGS_PUBLIC_KEY="pk..."
DTA_LOGS_SECRET_KEY="sk-..."
```
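
The examples below assume these variables are present in the environment. As a minimal sketch (standard library only, using the variable names listed above), you can fail fast at startup if any of them is missing:

```python
import os

REQUIRED_KEYS = ["DTA_PROXY_KEY", "DTA_LOGS_PUBLIC_KEY", "DTA_LOGS_SECRET_KEY"]

# abort with a clear message instead of failing later on the first DTA call
missing = [key for key in REQUIRED_KEYS if not os.environ.get(key)]
if missing:
    raise RuntimeError(f"Missing DTA environment variables: {', '.join(missing)}")
```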

3. Make calls to LLM models using *DTA Proxy*:

```python
from dta_utils import DtaProxy

client = DtaProxy.openai_client()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "tell me a random city of the world and its country",
        }
    ],
)

print(response)
```
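
Assuming the client behaves like the standard OpenAI Python SDK (as the example above suggests), the response follows the usual chat-completions shape, so you can pull out just the generated text:

```python
# the assistant's reply lives in the first choice's message
print(response.choices[0].message.content)
```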


4. Send logs to *DTA Logs*:


    4.1. Sending logs with a `DtaTracer` instance:


```python
from dta_utils import DtaTracer
import uuid

trace_id = f"trace_{uuid.uuid4()}"

tracer = DtaTracer(name="app-test")

# a trace is a thread that aggregates events and scores
tracer.trace(
    id=trace_id,
    input={"data": "input"},
)

# an event can be emitted at any time with anything relevant to track
tracer.event(
    name="event registered",
    input={"event-data": "event-input"},
    output={"event-data": "event-output"},
)

# traces can be updated at any time
tracer.trace(
    output={"data": "output"},
    tags=["example", "event", "tracer"],
)

# a score registers an evaluation over a trace
tracer.score(
    name="user-feedback",
    # value can be any number
    value=5,
    comment="optional comments",
)
```
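
Putting the two pieces together: the sketch below is a hypothetical helper built only from the `DtaProxy` and `DtaTracer` calls shown above (the function name and prompt are illustrative), tracing a single proxied completion end to end.

```python
from dta_utils import DtaProxy, DtaTracer
import uuid


def ask_with_tracing(question: str) -> str:
    """Send a question through DTA Proxy and record the exchange with DtaTracer."""
    tracer = DtaTracer(name="app-test")
    tracer.trace(id=f"trace_{uuid.uuid4()}", input={"question": question})

    client = DtaProxy.openai_client()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    answer = response.choices[0].message.content

    # close the trace with the model output
    tracer.trace(output={"answer": answer}, tags=["example", "proxy", "tracer"])
    return answer
```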


    4.2. Sending logs without an instance:


 Note: the `trace_id` has to be provided on each call


```python
from dta_utils import DtaLog
import uuid

trace_id = f"trace_{uuid.uuid4()}"

# a trace is a thread that aggregates events
DtaLog.trace(
    id=trace_id,
    name="app-test",
    input={"data": "input"},
)

# an event can be emitted at any time with anything relevant to track
DtaLog.event(
    trace_id=trace_id,
    name="event registered",
    input={"event-data": "event-input"},
    output={"event-data": "event-output"},
)

# traces can be updated at any time
DtaLog.trace(
    id=trace_id,
    output={"data": "output"},
    tags=["example", "event", "logs"],
)

# a score registers an evaluation over a trace
DtaLog.score(
    trace_id=trace_id,
    name="user-feedback",
    # value can be any number
    value=-5,
    comment="optional comments",
)
```


    4.3. Logging model generations:

To log model generations, the most common approaches are either the OpenAI lib (through the DTA wrapper) or LangChain, using the provided callback handler.


        4.3.1. Using the OpenAI lib wrapper:

```python
from dta_utils import DtaLog, DtaProxy
import uuid

trace_id = f"trace_{uuid.uuid4()}"

# get the openai lib ready to log into DtaLog
dta_openai = DtaLog.openai()

# create the openai client using DtaProxy
client = DtaProxy.openai_client(openai=dta_openai)

# example of using OpenAI functions
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    }
]

# calling the model
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "how is the wheather in Reykjavik Iceland?",
        },
    ],
    tools=tools,
    # parameters for DTA log
    name="log-gen-name",
    trace_id=trace_id,
)

print(response)

# for performance, logs may be buffered rather than emitted immediately;
# flushing matters when the app can exit abruptly (e.g. a command-line script),
# and is usually unnecessary on a long-running web server
DtaLog.openai_flush(dta_openai)
```
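
Continuing the example above: since the wrapped client still behaves like the OpenAI SDK, any tool calls come back in the usual place on the response. A minimal sketch for acting on them (the `get_current_weather` implementation is a stand-in, not part of this package):

```python
import json


def get_current_weather(location: str, unit: str = "celsius") -> dict:
    # stand-in implementation; a real app would call a weather API here
    return {"location": location, "temperature": 3, "unit": unit}


message = response.choices[0].message
if message.tool_calls:
    for tool_call in message.tool_calls:
        arguments = json.loads(tool_call.function.arguments)
        if tool_call.function.name == "get_current_weather":
            print(get_current_weather(**arguments))
else:
    print(message.content)
```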


        4.3.2. Using LangChain callbacks:

```python
from dta_utils import DtaLog, DtaProxy
from langchain_openai import ChatOpenAI

dta_handler = DtaLog.langchain_handler()

model = ChatOpenAI(
    temperature=0,
    model="gpt-3.5-turbo",
    openai_api_base=DtaProxy.url(),
    openai_api_key=DtaProxy.key(),
    callbacks=[dta_handler],
)

response = model.invoke(
    [
        {
            "role": "user",
            "content": "tell me a random city of the world and its country",
        }
    ]
)

print(response)

```
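
Because the callbacks are attached to the model itself, the same handler also logs generations made through a composed chain. A small sketch reusing the `model` defined above (standard LangChain API; the prompt wording is illustrative):

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [("human", "tell me a random city in {country} and one fact about it")]
)

# the DtaLog handler registered on `model` logs this generation as well
chain = prompt | model
print(chain.invoke({"country": "Iceland"}))
```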


            
