aihero

Name: aihero
Version: 0.3.0
Summary: AI Hero Python SDK
Home page: https://github.com/ai-hero/python-client-sdk
Author: AI Hero Team
License: MIT
Keywords: AI Hero, Spotcheck, MLOps, AI, Data Annotation, Labeling, Model Training, Model Serving, Model Deployment
Upload time: 2023-08-25 23:44:29
Docs URL: None
Requirements: No requirements were recorded.
# AI Hero Python SDK
The AI Hero Python SDK offers a set of tools for managing and developing AI models. With this release, you can manage prompt templates and their versions, making model development, testing, and deployment easier and more systematic.

## Installation
Install AI Hero using pip:
```bash
pip install aihero==0.3.0
```
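
If you want to confirm that the install picked up the expected version, a quick check (assuming the package exposes standard importlib-visible metadata) is:
```python
from importlib.metadata import version

# Prints the installed aihero version, e.g. "0.3.0".
print(version("aihero"))
```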


# PromptStash
In the rapidly evolving world of AI, the ability to manage and version prompts is increasingly important. Much like software version control, prompt versioning lets developers track changes, revert to previous versions, and roll out updates in a controlled and systematic manner. This is especially useful when you want to recall previous versions of your prompt templates, whether for debugging, comparison, or managing different versions of an AI application. That's where PromptStash in the AI Hero Python SDK comes in.

## Tutorials
We have two tutorials for you:
- [PromptOps with OpenAI's Completions API + PromptStash (Beginner)](examples/PromptOps_with_OpenAI_Completions_API_+_PromptStash_(Beginner).ipynb) - In this tutorial, we'll use an LLM to translate English to Japanese so we can talk to a friend in Japanese, and we'll look at the PromptOps associated with it. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ai-hero/python-client-sdk/blob/main/examples/PromptOps_with_OpenAI_Completions_API_%2B_PromptStash_%28Beginner%29.ipynb)

- [PromptOps with OpenAI's Chat Completions API + PromptStash (Beginner)](examples/PromptOps_with_OpenAI_Chat_Completions_API_+_PromptStash_(Beginner).ipynb) - In this tutorial, we'll use an LLM chatbot as a Japanese tutor so we can learn Japanese through conversation, and we'll look at the PromptOps associated with it. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ai-hero/python-client-sdk/blob/main/examples/PromptOps_with_OpenAI_Chat_Completions_API_%2B_PromptStash_%28Beginner%29.ipynb)

There's more coming soon! 
- We're building a notebook for Retrieval Augmented Generation. 
- Hit us up at `team@aihero.studio` if you'd like a custom demo.

## Overview
As an example, let's say you want to create an "Ask PG (i.e., Paul Graham from YC) Bot". You'll already be using a template like this with LangChain, LlamaIndex, etc.
```python
TEMPLATE_STR = (
  "The following is a blog by Paul Graham.\n"
  "You will answer the question below using the context provided.\n"
  "---------------------\n"
  "{context_str}"
  "\n---------------------\n"
  "Given this information, please answer the question: {query_str}\n"
)
```

Let's create the PromptStash instance `ps` using the project id and API key from AI Hero. To get them, log into [https://app.aihero.studio](https://app.aihero.studio) and create a project, then note your default project id and API key.
```python
from aihero import promptstash
ps = promptstash(project_id="YOUR_PROJECT_ID", api_key="YOUR_API_KEY")
```
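
In practice you may prefer not to hard-code credentials. A minimal sketch, assuming you keep them in environment variables of your own choosing (the variable names below are hypothetical):
```python
import os
from aihero import promptstash

# AI_HERO_PROJECT_ID and AI_HERO_API_KEY are hypothetical variable names;
# use whatever secret-management convention your project follows.
ps = promptstash(
    project_id=os.environ["AI_HERO_PROJECT_ID"],
    api_key=os.environ["AI_HERO_API_KEY"],
)
```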

### Versioning and Stashing Prompt Templates
We can stash our current prompt template to get a "variant" id. A variant id is an MD5 hash of your prompt template. The `template_id` keeps all the variants of a prompt template together.

```python
variant = ps.stash_template(template_id="paul-graham-essay", body=TEMPLATE_STR)
```
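
Since the variant id is an MD5 hash of the template body, you can reproduce it locally to sanity-check which template a variant corresponds to. A minimal sketch, assuming the hash is computed over the UTF-8 bytes of the template string:
```python
import hashlib

# If the variant id is the MD5 of the template body (an assumption about the
# hashing scheme), this should match the value returned by stash_template.
local_hash = hashlib.md5(TEMPLATE_STR.encode("utf-8")).hexdigest()
print(local_hash, variant)
```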

You can see the prompt stashed in your AI Hero UI. 
![Stashed Prompt Templates](assets/templates.png)


You can also see each variant of each prompt template stashed in your AI Hero UI. 
![Stashed Variants](assets/variants.png)

### Recalling Previously Stashed Templates

When you want to recall a variant in the future, fetch it by its hash.
```python
template_str = ps.variant(template_id="paul-graham-essay", variant=variant)
```
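
Once recalled, the template can be rendered the same way you would render it before stashing. A minimal sketch, assuming the `{context_str}` and `{query_str}` placeholders are filled with plain `str.format` (the context and question below are example values):
```python
# Fill the placeholders in the recalled template; in a real RAG pipeline the
# context would come from your retriever and the question from your user.
prompt = template_str.format(
    context_str="Paul Graham's essay text goes here...",
    query_str="What's your advice for a founder?",
)
```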

### Tracking Prompt Inputs and Outputs
You can also stash and visualize each prompt input and output for your variants. Assuming you have already created your PromptStash `ps` object, this is how you would stash a completion for a Q&A agent implemented using retrieval-augmented generation.
```python
import time
from datetime import date
from uuid import uuid4

# A trace groups related steps; each stashed completion gets its own step id.
trace_id = str(uuid4())
step_id = str(uuid4())

# `prompt`, `output`, `question`, `tic`, and `toc` come from your own
# completion call (see the evaluation example below).
ps.stash_completion(
    trace_id=trace_id,
    step_id=step_id,
    template_id="paul-graham-essay",
    variant=variant,
    prompt=prompt,
    output=output,
    inputs={"question": question},
    rendered_inputs=f"Question: {question}",
    model={"name": "openai-davinci-003", "version": date.today().strftime("%Y-%m-%d")},
    metrics={"time": (toc - tic)},
    other={},
)
```

### Viewing Completions in Real Time

You can then view your stashed prompts in real time in the UI.

![Real Time View of prompts, their inputs and outputs.](assets/tsne.png)

NOTE: You'll need to provide an OPENAI_API_KEY environment variable so that the client SDK can generate embeddings for your inputs and outputs. This will incur OpenAI charges, and we recommend you set usage limits on your account in OpenAI's [Playground](https://beta.openai.com/playground).
```python
import os
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"
```

### Traces
You can observe the traces that you tracked with `trace_id` and `step_id` above.
![Traces](assets/traces.png) 
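
For multi-step chains, a trace is simply a set of stashed steps that share a `trace_id`, each with its own `step_id`. A minimal sketch, assuming `stash_completion` accepts the same arguments as above (the step names, prompts, and outputs here are hypothetical placeholders standing in for your own chain):
```python
from uuid import uuid4

# Hypothetical two-step chain: rewrite the question, then answer it.
chain_steps = [
    ("rewrite-question", "Rewrite the question ...", "What advice do you have for founders?"),
    ("answer-question", "The following is a blog by Paul Graham ...", "My advice is ..."),
]

trace_id = str(uuid4())  # shared across the whole chain
for step_name, prompt, output in chain_steps:
    ps.stash_completion(
        trace_id=trace_id,
        step_id=str(uuid4()),  # unique per step
        template_id="paul-graham-essay",
        variant=variant,
        prompt=prompt,
        output=output,
        inputs={"step": step_name},
        rendered_inputs=step_name,
        model={"name": "openai-davinci-003"},
        metrics={},
        other={},
    )
```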


### Evaluation (early release)

You can create a test suite and then test the outputs of your variant with it. There are two types of tests:
- `test_*`: Runs as a plain Python function. Implement it and assert whatever conditions you need.
- `ask_*`: Returns a yes/no question that is evaluated using GPT-3.5-turbo. (Note: this doesn't work reliably yet.)

NOTE: As above, you'll need to provide an OPENAI_API_KEY environment variable so that the client SDK can generate embeddings for your inputs and outputs. This will incur OpenAI charges, and we recommend you set usage limits on your account in OpenAI's [Playground](https://beta.openai.com/playground).
```python
import os
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"
```

First, create the test suite.
```python
from aihero.eval import PromptTestSuite

class PGBotTests(PromptTestSuite):
    # test_* methods run as plain Python assertions on each output.
    def test_unquoted(self, output):
        assert not (output.startswith('"') and output.endswith('"'))

    # ask_* methods return a yes/no question that is evaluated by an LLM.
    def ask_is_first_person(self) -> str:
        return "Is the output in the first person point of view?"

    def ask_is_spanish(self) -> str:
        return "Is the output language Spanish?"

ts = ps.build_test_suite(test_suite_id="pg-bot-tests", test_suite_cls=PGBotTests)
```

Then, you can generate your outputs and run the test suite on that variant and its outputs. (Here, `answer()` stands in for your own function that calls the LLM with the rendered prompt.)
```python
import time

times = []
completions = []
for question in [
    "What did you do growing up?",
    "What's your advice for a founder?",
    "Why did you start YC?",
]:
    print(f"Using prompt {variant}")
    print("Q: " + str(question))
    tic = time.perf_counter()
    output = answer(question, template_str)  # answer() is your own function that calls the LLM
    toc = time.perf_counter()
    times.append(toc - tic)
    print("A: " + str(output).strip())

    # Record the prompt as it was sent (here we only fill in the question).
    prompt = template_str.replace("{query_str}", question)

    completions.append(
        {
            "inputs": {"query_str": question},
            "rendered_inputs": f"Question: {question}",
            "prompt": prompt,
            "output": output,
        }
    )

avg_time = sum(times) / len(times)
ts.run(
    template_id="paul-graham-essay",
    variant=variant,
    completions=completions,
    model={"name": "openai-davinci-003"},
    metrics={"time": avg_time},
    other={},
)
```

You can see the eval results in the UI.
![Evaluations](assets/evals.png) 
![Test Run](assets/eval.png) 

### Record Feedback
You can record feedback from your user by reusing the same `trace_id` and adding the feedback as a new step.
```python
step_id = str(uuid4())  # a new step id under the same trace
ps.stash_feedback(
    trace_id=trace_id,
    step_id=step_id,
    thumbs_up=True,
    thumbs_down=False,
    correction="Foo",
    annotations={"user": "a"},
    other={},
)
```

            
