llmfoo

Name: llmfoo
Version: 0.5.0
Summary: Automatically make the OpenAI tool JSON Schema, parse the call, and construct the result for the chat model.
Upload time: 2024-01-02 16:23:25
Author: Mikko Korpela
Requires Python: >=3.10,<4.0
# LLM FOO

[![Version](https://img.shields.io/pypi/v/llmfoo.svg)](https://pypi.python.org/pypi/llmfoo)
[![Downloads](http://pepy.tech/badge/llmfoo)](http://pepy.tech/project/llmfoo)

## Overview
LLM FOO is a cutting-edge project blending the art of Kung Fu with the science of Large Language Models... or,
more practically, it automatically generates the OpenAI tool JSON Schema, parses the tool call, and constructs
the result for the chat model.
There is also a second utility, `is_statement_true`, which uses the [genius logit_bias trick](https://twitter.com/AAAzzam/status/1669753721574633473)
and needs only one output token.
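For the curious, here is a minimal sketch of how the logit_bias trick works in general (this is a hand-written illustration, not llmfoo's actual internals; the function name, prompt wording, and token ids are all assumptions):

```python
from typing import Optional

# Illustrative token ids for the "true" / "false" answer tokens.
# Real code would look these up with a tokenizer for the target model.
TRUE_TOKEN_ID = 1904
FALSE_TOKEN_ID = 3934

def build_request(statement: str, criteria: Optional[str] = None) -> dict:
    """Build chat-completion parameters that force a one-token true/false answer.

    The logit_bias trick: bias only the "true" and "false" tokens to +100 so the
    model can emit nothing else, and cap the answer at max_tokens=1.
    """
    prompt = f"Statement: {statement}\n"
    if criteria:
        prompt += f"Criteria: {criteria}\n"
    prompt += "Is the statement true? Answer true or false."
    return {
        "model": "gpt-4-1106-preview",
        "messages": [{"role": "user", "content": prompt}],
        "logit_bias": {str(TRUE_TOKEN_ID): 100, str(FALSE_TOKEN_ID): 100},
        "max_tokens": 1,  # exactly one output token: "true" or "false"
    }
```

The payoff is that the classification costs a single output token and can never return a rambling answer that needs parsing.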

But hey, I hope this will grow into a set of small, useful LLM helper functions that make building things easier,
because the current bleeding-edge APIs are a bit of a mess and I think we can do better.

![](/llmfoo.webp)

## Installation
```bash
pip install llmfoo
```

## Usage

* You need `OPENAI_API_KEY` in your environment and access to the `gpt-4-1106-preview` model.

* `is_statement_true` should be easy to understand:
make a natural-language statement and check it against given criteria or general truthfulness. You get back a boolean.

For the LLM FOO tool:

1. Add the `@tool` annotation.
2. llmfoo generates the JSON schema to YOURFILE.tool.json with GPT-4-Turbo - "Never send a machine to do a human's job" .. like who wants to write boilerplate docs for machines???
3. Annotated functions gain helpers:
   - `openai_schema` returns the schema (you can edit the JSON if you're not happy with what the machines did)
   - `openai_tool_call` makes the tool call and returns the result in chat API message format
   - `openai_tool_output` makes the tool call and returns the result in assistant API tool output format
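To give a feel for what `openai_schema` returns, here is a hand-written example of the general OpenAI tool-schema shape for an `adder(x: int, y: int)` function (the schema llmfoo actually generates may word the descriptions differently):

```python
# Illustrative OpenAI tool schema for adder(x: int, y: int) -> int.
# Hand-written example of the shape; the generated YOURFILE.tool.json
# may differ in its descriptions.
adder_schema = {
    "type": "function",
    "function": {
        "name": "adder",
        "description": "Add two integers and return their sum.",
        "parameters": {
            "type": "object",
            "properties": {
                "x": {"type": "integer", "description": "First addend."},
                "y": {"type": "integer", "description": "Second addend."},
            },
            "required": ["x", "y"],
        },
    },
}
```

This is the same structure the chat completions API expects in its `tools` list, which is why the helpers can be passed straight through as `tools=[adder.openai_schema]`.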

```python
from time import sleep

from openai import OpenAI

from llmfoo.functions import tool
from llmfoo import is_statement_true


def test_is_statement_true_with_default_criteria():
    assert is_statement_true("Earth is a planet.")
    assert not is_statement_true("1 + 2 = 5")


def test_is_statement_true_with_own_criteria():
    assert not is_statement_true("Temperature outside is -2 degrees celsius",
                                 criteria="Temperature above 0 degrees celsius")
    assert is_statement_true("1984 was written by George Orwell",
                             criteria="George Orwell is the author of 1984")


def test_is_statement_true_criteria_can_change_truth_value():
    assert is_statement_true("Earth is 3rd planet from the Sun")
    assert not is_statement_true("Earth is 3rd planet from the Sun",
                                 criteria="Earth is stated to be 5th planet from the Sun")


@tool
def adder(x: int, y: int) -> int:
    return x + y


@tool
def multiplier(x: int, y: int) -> int:
    return x * y


client = OpenAI()


def test_chat_completion_with_adder():
    number1 = 3267182746
    number2 = 798472847
    messages = [
        {
            "role": "user",
            "content": f"What is {number1} + {number2}?"
        }
    ]
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=messages,
        tools=[adder.openai_schema]
    )
    messages.append(response.choices[0].message)
    messages.append(adder.openai_tool_call(response.choices[0].message.tool_calls[0]))
    response2 = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=messages,
        tools=[adder.openai_schema]
    )
    assert str(adder(number1, number2)) in response2.choices[0].message.content.replace(",", "")


def test_assistant_with_multiplier():
    number1 = 1238763428176
    number2 = 172388743612
    assistant = client.beta.assistants.create(
        name="The Calc Machina",
        instructions="You are a calculator with a funny pirate accent.",
        tools=[multiplier.openai_schema],
        model="gpt-4-1106-preview"
    )
    thread = client.beta.threads.create(messages=[
        {
            "role":"user",
            "content":f"What is {number1} * {number2}?"
        }
    ])
    run = client.beta.threads.runs.create(
        thread_id=thread.id,
        assistant_id=assistant.id
    )
    while True:
        # Poll the run state until it completes (or fails/expires).
        run_state = client.beta.threads.runs.retrieve(
            run_id=run.id,
            thread_id=thread.id,
        )
        if run_state.status not in ['in_progress', 'requires_action']:
            break
        if run_state.status == 'requires_action':
            # The model requested a tool call: execute it and submit the output.
            tool_call = run_state.required_action.submit_tool_outputs.tool_calls[0]
            run = client.beta.threads.runs.submit_tool_outputs(
                thread_id=thread.id,
                run_id=run.id,
                tool_outputs=[
                    multiplier.openai_tool_output(tool_call)
                ]
            )
            sleep(1)
        sleep(0.1)
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    assert str(multiplier(number1, number2)) in messages.data[0].content[0].text.value.replace(",", "")

```
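For reference, the message that `adder.openai_tool_call(...)` appends to the conversation in the chat-completion test follows the chat API's tool-message shape, roughly like this (the field values below are made up for illustration; llmfoo fills them from the actual tool call):

```python
# Illustrative chat-API tool result message, the format openai_tool_call
# returns. Values here are invented; in practice tool_call_id comes from
# the model's tool_call and content is the stringified function result.
tool_message = {
    "role": "tool",
    "tool_call_id": "call_abc123",  # id copied from the model's tool_call
    "content": "4065655593",        # str(result) of the executed function
}
```

Appending this message (after the assistant message that requested the call) is what lets the second `chat.completions.create` call see the tool's result and phrase the final answer.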

## Contributing
Interested in contributing? I'd love your help to make this project better!
The underlying APIs are still changing, and the system is very much a first version.

## License
This project is licensed under the [MIT License](LICENSE).

## Acknowledgements
- Thanks to all the contributors and maintainers.
- Special thanks to the Kung Fu masters such as Bruce Lee who inspired this project.

            
