keywordsai-tracing

Name: keywordsai-tracing
Version: 0.0.15
Summary: Keywords AI SDK allows you to interact with the Keywords AI API smoothly
Author: Keywords AI
Requires-Python: <4.0,>3.10
License: MIT
Upload time: 2025-02-11 21:36:35

# Building an LLM Workflow with KeywordsAI Tracing

This tutorial demonstrates how to build and trace complex LLM workflows using KeywordsAI Tracing. We'll create an example that generates jokes, translates them to pirate language, and simulates audience reactions - all while capturing detailed telemetry of our LLM calls.

## Prerequisites

- Python 3.10+ (the package metadata declares `<4.0,>3.10`)
- OpenAI API key
- Anthropic API key
- Keywords AI API key; you can get one from the [API keys page](https://platform.keywordsai.co/platform/api/api-keys)

## Installation
```bash
pip install keywordsai-tracing openai anthropic
```


## Tutorial

### Step 1: Initialization
```python
import os
from keywordsai_tracing.main import KeywordsAITelemetry
from keywordsai_tracing.decorators import workflow, task
import time

# Initialize KeywordsAI Telemetry
os.environ["KEYWORDSAI_API_KEY"] = "YOUR_KEYWORDSAI_API_KEY"
k_tl = KeywordsAITelemetry()

# Initialize OpenAI client
from openai import OpenAI
client = OpenAI()
```
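
Later steps call both OpenAI and Anthropic, so it is convenient to set all three keys up front. A minimal sketch; `OPENAI_API_KEY` and `ANTHROPIC_API_KEY` are the standard variables read by those SDKs, not something Keywords AI requires:

```python
import os

# Standard environment variables read by each SDK; set them before creating clients.
os.environ["KEYWORDSAI_API_KEY"] = "YOUR_KEYWORDSAI_API_KEY"
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"
os.environ["ANTHROPIC_API_KEY"] = "YOUR_ANTHROPIC_API_KEY"
```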


### Step 2: First Draft - Basic Workflow
We'll start by creating a simple workflow that generates a joke, translates it to pirate speak,
and adds a signature. This demonstrates the basic usage of tasks and workflows.

- A task is a single unit of work, decorated with `@task`
- A workflow is a collection of tasks, decorated with `@workflow`
- Tasks can be used independently or as part of workflows, as shown in the sketch below
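
Because `@task` wraps an ordinary Python function, a task can be called on its own and still emit a span. A minimal sketch (the function here is illustrative, not part of the tutorial's pipeline):

```python
@task(name="shout")
def shout(text: str) -> str:
    # No LLM involved; the span still records the input and the output.
    return text.upper()

print(shout("tasks also work outside workflows"))
```

With that in mind, here is the full Step 2 workflow: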

```python
@task(name="joke_creation")
def create_joke():
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Tell me a joke about opentelemetry"}],
        temperature=0.5,
        max_tokens=100,
        frequency_penalty=0.5,
        presence_penalty=0.5,
        stop=["\n"],
        logprobs=True,
    )
    return completion.choices[0].message.content

@task(name="signature_generation")
def generate_signature(joke: str):
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": "add a signature to the joke:\n\n" + joke}
        ],
    )
    return completion.choices[0].message.content

@task(name="pirate_joke_translation")
def translate_joke_to_pirate(joke: str):
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": "translate the joke to pirate language:\n\n" + joke,
            }
        ],
    )
    return completion.choices[0].message.content

@workflow(name="pirate_joke_generator")
def joke_workflow():
    eng_joke = create_joke()
    pirate_joke = translate_joke_to_pirate(eng_joke)
    signature = generate_signature(pirate_joke)
    return pirate_joke + signature

if __name__ == "__main__":
    joke_workflow()
```

Run the workflow and see the trace in the Keywords AI `Traces` tab.

### Step 3: Adding Another Workflow
Let's add audience reactions to make our workflow more complex and demonstrate
what multiple workflow traces look like.

```python
@task(name="audience_laughs")
def audience_laughs(joke: str):
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": "This joke:\n\n" + joke + " is funny, say hahahahaha",
            }
        ],
        max_tokens=10,
    )
    return completion.choices[0].message.content

@task(name="audience_claps")
def audience_claps():
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Clap once"}],
        max_tokens=5,
    )
    return completion.choices[0].message.content

@task(name="audience_applaud")
def audience_applaud(joke: str):
    clap = audience_claps()
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": "Applaud to the joke, clap clap! " + clap,
            }
        ],
        max_tokens=10,
    )
    return completion.choices[0].message.content

@workflow(name="audience_reaction")
def audience_reaction(joke: str):
    laughter = audience_laughs(joke=joke)
    applauds = audience_applaud(joke=joke)
    return laughter + applauds


@workflow(name="joke_and_audience_reaction") #<--------- Create the new workflow that combines both workflows together
def joke_and_audience_reaction():
    pirate_joke = joke_workflow()
    reactions = audience_reaction(pirate_joke)
```

Don't forget to update the entrypoint!
```python
if __name__ == "__main__":
    joke_and_audience_reaction() # <--------- Update the entrypoint here
```

Run the workflow again and see the trace in the Keywords AI `Traces` tab; notice the new span for the `audience_reaction` workflow alongside `joke_workflow`. Congratulations! You have created a trace with multiple workflows.

### Step 4: Adding Vector Storage Capability
To demonstrate how to integrate with vector databases and embeddings,
we'll add a store_joke task that generates embeddings for our jokes.

```python
@task(name="store_joke")
def store_joke(joke: str):
    """Simulate storing a joke in a vector database."""
    embedding = client.embeddings.create(
        model="text-embedding-3-small",
        input=joke,
    )
    return embedding.data[0].embedding
```
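
The task above only computes the embedding; in a real system you would persist it and search it later. A minimal in-memory sketch using cosine similarity (numpy assumed installed; `save_embedding` and `most_similar` are hypothetical helpers, not part of the SDK):

```python
import numpy as np

_joke_store: list[tuple[str, np.ndarray]] = []

def save_embedding(joke: str, embedding: list[float]) -> None:
    # Keep each joke next to its vector so we can look it up later.
    _joke_store.append((joke, np.asarray(embedding)))

def most_similar(query_embedding: list[float]) -> str:
    # Cosine similarity: dot(q, v) / (|q| * |v|); return the best-scoring joke.
    q = np.asarray(query_embedding)
    scores = [
        float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
        for _, v in _joke_store
    ]
    return _joke_store[int(np.argmax(scores))][0]
```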

Update `create_joke` to use `store_joke`:
```python
@task(name="joke_creation")
def create_joke():
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Tell me a joke about opentelemetry"}],
        temperature=0.5,
        max_tokens=100,
        frequency_penalty=0.5,
        presence_penalty=0.5,
        stop=["\n"],
        logprobs=True,
    )
    joke = completion.choices[0].message.content
    store_joke(joke)  # <--------- Add the task here
    return joke
```
Run the workflow again and see the trace in the Keywords AI `Traces` tab; notice the new span for the `store_joke` task.

Expanding the `store_joke` task, you can see that the embeddings call is recognized as `openai.embeddings`.

### Step 5: Adding Arbitrary Function Calls
Demonstrate how to trace non-LLM functions by adding a logging task.

```python
@task(name="logging_joke")
def logging_joke(joke: str, reactions: str):
    """Simulates logging the process into a database."""
    print(joke + "\n\n" + reactions)
    time.sleep(1)
```
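
If you prefer structured logs over `print`, the same traced task can use the standard `logging` module; a sketch under the assumption that logging is configured elsewhere in your application:

```python
import logging

logger = logging.getLogger("joke_pipeline")

@task(name="logging_joke")
def logging_joke(joke: str, reactions: str):
    """Simulates logging the process into a database."""
    logger.info("joke=%s reactions=%s", joke, reactions)
    time.sleep(1)  # stand-in for database write latency
```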

Update `joke_and_audience_reaction`:
```python
@workflow(name="joke_and_audience_reaction")
def joke_and_audience_reaction():
    pirate_joke = joke_workflow()
    reactions = audience_reaction(pirate_joke)
    logging_joke(pirate_joke, reactions) # <-------- Add this task here
```

Run the workflow again and see the trace in the Keywords AI `Traces` tab; notice the new span for the `logging_joke` task.

This is a simple example of how to trace arbitrary functions. You can see all the inputs and outputs of the `logging_joke` task.
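
Because the decorator records arguments and return values, even pure helpers show up with full I/O. A hypothetical example (not used elsewhere in the tutorial):

```python
@task(name="count_words")
def count_words(joke: str) -> int:
    # A pure function: the trace records the joke as input and the count as output.
    return len(joke.split())
```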

### Step 6: Adding Different LLM Provider (Anthropic)

Demonstrate compatibility with multiple LLM providers by adding Anthropic integration.

```python
from anthropic import Anthropic
anthropic = Anthropic()

@task(name="ask_for_comments")
def ask_for_comments(joke: str):
    completion = anthropic.messages.create(
        model="claude-3-5-sonnet-20240620",
        messages=[{"role": "user", "content": f"What do you think about this joke: {joke}"}],
        max_tokens=100,
    )
    return completion.content[0].text

@task(name="read_joke_comments")
def read_joke_comments(comments: str):
    return f"Here is the comment from the audience: {comments}"

@workflow(name="audience_interaction")
def audience_interaction(joke: str):
    comments = ask_for_comments(joke=joke)
    return read_joke_comments(comments=comments)
```
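
With two providers in play, you might route prompts through a single traced helper so callers don't care which SDK answers. A minimal sketch reusing the `client` and `anthropic` objects from earlier steps (the helper name is illustrative):

```python
@task(name="ask_any_provider")
def ask_any_provider(prompt: str, provider: str = "openai") -> str:
    # Route to whichever SDK was requested; either call is traced the same way.
    if provider == "anthropic":
        msg = anthropic.messages.create(
            model="claude-3-5-sonnet-20240620",
            messages=[{"role": "user", "content": prompt}],
            max_tokens=100,
        )
        return msg.content[0].text
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=100,
    )
    return completion.choices[0].message.content
```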

Update `joke_and_audience_reaction`:
```python
@workflow(name="joke_and_audience_reaction")
def joke_and_audience_reaction():
    pirate_joke = joke_workflow()
    reactions = audience_reaction(pirate_joke)
    audience_interaction(pirate_joke) # <-------- Add this workflow here
    logging_joke(pirate_joke, reactions)
```

Run the workflow one last time; in the new `audience_interaction` workflow you can see the Anthropic call recognized as `anthropic.completion`.

            
