functioncalming

Name: functioncalming
Version: 0.0.9
Home page: https://github.com/phdowling/functioncalming
Summary: Robust and reliable OpenAI function calling
Upload time: 2024-11-13 12:42:05
Author: Philipp Dowling
Requires Python: <4.0,>=3.12
License: MIT
Keywords: openai, function-calling, distillation, fine-tuning, pydantic, validation, llm
# functioncalming
## Installation
`pip install functioncalming`

## Overview
Get (near-)guaranteed structured responses from OpenAI using pydantic and function calling (and, if you like, fine-tuning).

functioncalming uses OpenAI's function calling in combination with pydantic model validation to hide away the messy details of getting structured responses from an LLM.

functioncalming comes with support for:
- Structured responses from the LLM via pydantic models
- Structured responses from the LLM via plain Python functions (pydantic argument validation happens under the hood)
- Parallel function calling, as well as giving the model a choice of multiple different tools
- Automatically passing function/tool results back to the model
- Automatic message history re-writing to hide failed function calls that were re-tried
- Create fine-tuning data to make the model better at calling your functions/models with near-zero config
- Create fine-tuning data for distilling a complex pipeline to a simple model via a simple decorator (`@distillery`)
- Reporting the cost of your API requests (using OpenAI pricing as of April 2024)

## Who is this for?
Basically, functioncalming provides useful utilities for any case where you find yourself using function calling with OpenAI.
However, it particularly shines in use-cases where any of the following apply:
- LLM responses are consumed in a mostly machine-facing way (i.e. the output of the LLM is used in a workflow instead of direct conversation with a user)
- LLMs are used for data extraction, i.e. you just want to extract a possibly complex and nested structured object from an input (rather than just calling e.g. a simple `get_weather()`-style function)
- The same function(s) are called over and over again, and you want to fine-tune a cheaper model to reach the level of quality that GPT-4 offers
- A cheaper (e.g. `gpt-3.5-turbo`) model should be fine-tuned (**distilled**) to perform the task of a complex pipeline based on an expensive model (e.g. `gpt-4`) directly

## Usage
A simple example of calling two tools in parallel (may be flaky with a real model, but this is how parallel calls are done):

```python
from pydantic import BaseModel
from functioncalming.client import get_completion


class Actor(BaseModel):
    """
    A person or non-human actor involved in a situation
    """
    name: str
    adjectives: list[str]


class Situation(BaseModel):
    """
    A situation or event involving a number of actors
    """
    actors: list[Actor]
    action: str


class EmojiTranslation(BaseModel):
    translation: str


PROMPT = """You help extract cleaned data from unstructured input text 
and simultaneously (but separately) turn the text into an Emoji-translation.
You also have a tendency to always make a mistake the first time you call a function, but then do it correctly.
"""

history = [
    {'role': 'system', 'content': PROMPT},
    {'role': 'user', 'content': "The quick brown fox jumps over the lazy dog"}
]


async def main():
    calm_response = await get_completion(
        messages=history,
        tools=[Situation, EmojiTranslation],
        temperature=0,
        retries=1,
        rewrite_log_destination='finetune.jsonl', 
    )
    print(calm_response.success)
    print(calm_response.retries_done)
    print(calm_response.usage)  # total tokens used 
    print(calm_response.cost)  # estimated dollar cost of all requests that were done
    print(calm_response.tool_call_results[0].model_dump_json(
        indent=4))  # {"actors": [{"name": "fox", "adjectives": ["quick", "brown"]}, {"name": "dog", "adjectives": ["lazy"]}], "action": "jumping over"}
    print(calm_response.tool_call_results[1].model_dump_json(indent=4))  # {"translation": "🦊↗️🐶"}
    print(f"Clean, rewritten history: {len(calm_response.messages)} messages. Real history: {len(calm_response.messages_raw)} messages.")

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())  # requires a configured OpenAI API key (e.g. OPENAI_API_KEY in the environment)
```
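
The overview also mentions plain Python functions as tools, with pydantic validating the arguments under the hood. Below is a minimal sketch of that usage, assuming functions are passed via `tools` just like pydantic models; the `get_current_temperature` tool is hypothetical and for illustration only:

```python
from functioncalming.client import get_completion


async def get_current_temperature(city: str, unit: str = "celsius") -> str:
    """Look up the current temperature in a city (hypothetical example tool)."""
    # A real implementation would call a weather API here.
    return f"It is currently 21 degrees {unit} in {city}."


async def weather_demo():
    calm_response = await get_completion(
        messages=[{'role': 'user', 'content': "How warm is it in Berlin right now?"}],
        tools=[get_current_temperature],
    )
    # functioncalming invokes the tool and passes the result back to the model
    # automatically, so the final assistant message can answer in plain language.
    print(calm_response.messages[-1])
```

Treat this as a sketch of the call shape rather than the exact API surface.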
## Generating fine-tuning data for distillation
functioncalming tries to make it easy to generate data for function distillation, i.e. fine-tuning a cheaper, faster "student" pipeline
to perform a complex task that can be reliably achieved using a more expensive, slower "teacher" pipeline. The idea is to track the inputs
and outputs of the teacher pipeline and use them to train the student pipeline to perform the task directly.

What functioncalming provides here is a simple interface to "clean up" and augment the message history of the teacher pipeline so that it
has the correct format for the student fine-tuning task, with no custom data cleaning scripts required.
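
As a point of reference, OpenAI's chat fine-tuning data is a JSONL file in which each line holds one complete training conversation under a `messages` key. Assuming the records logged via `rewrite_log_destination` follow that format (an assumption; the exact fields may differ), you could inspect them like this:

```python
import json

# Inspect rewritten histories logged by the usage example above.
# Assumption: each line is an OpenAI-style fine-tuning record, {"messages": [...]}.
with open("finetune.jsonl") as f:
    for line_number, line in enumerate(f, start=1):
        record = json.loads(line)
        print(f"--- record {line_number} ---")
        for message in record["messages"]:
            print(message["role"], str(message.get("content"))[:80])
```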

TODO - show how to set up a distillation pipeline.
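
Until that's written up, here is a rough, hypothetical sketch of the idea, built around the `@distillery` decorator mentioned in the feature list. The import path, decorator signature, and the `model` kwarg are assumptions and may differ from the real API:

```python
from pydantic import BaseModel
from functioncalming import distillery  # assumption: actual import path may differ
from functioncalming.client import get_completion


class CleanedRecord(BaseModel):
    """Structured output produced by the teacher pipeline."""
    name: str
    email: str


# Hypothetical usage: @distillery marks the expensive teacher pipeline so that its
# inputs and validated outputs are logged as fine-tuning data for a student model.
@distillery
async def clean_record(raw_text: str) -> CleanedRecord:
    calm_response = await get_completion(
        messages=[
            {'role': 'system', 'content': "Extract a cleaned record from the input text."},
            {'role': 'user', 'content': raw_text},
        ],
        tools=[CleanedRecord],
        model="gpt-4",  # expensive teacher model (kwarg passed through to openai)
        rewrite_log_destination='distillation.jsonl',
    )
    return calm_response.tool_call_results[0]
```

The logged `distillation.jsonl` could then be used to fine-tune a cheaper student model (e.g. `gpt-3.5-turbo`) on the same task.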

## functioncalming and instructor
Credit where it's due: functioncalming takes inspiration from https://github.com/jxnl/instructor and serves the same basic purpose.

It's an alternative (or supplement) to `instructor` that is opinionated in a different way and has (probably) slightly different priorities: 
ease of use, exposing all features of the function calling API, and providing tools for improving function calling performance and reliability.

A few differences vs instructor (as of early December 2023):
- Message history re-writing (i.e. hiding failed function call attempts from the model in subsequent calls / fine-tuning data)
  - This tends to make subsequent calls more likely to succeed if you continue sending more messages in the same conversation
  - It also makes the resulting message history more suitable for fine-tuning
- functioncalming avoids supplying/hard-coding fixed prompts (almost everywhere), while instructor has hard-coded prompts in a few places
  - This is not necessarily an advantage or disadvantage per se - in my own work I just prefer being able to customize prompts everywhere 
- Support for multiple response models (i.e. multiple tool calls) in a single completion call
- Support for multiple returned response objects (i.e. parallel tool calls, independent of whether multiple models were used)
- functioncalming handles calling functions directly and returns results
  - in instructor (from my understanding) you need to invoke the functions yourself, but it ships some helpers for doing this 
  - It also handles returning extraction/function results back to the model (not particularly difficult, but one less thing to code yourself)
- functioncalming provides its own `get_completion` method instead of monkey-patching OpenAI
  - not really a feature, just opinionation
  - note: it still exposes all underlying settings and config of the `openai` library via kwargs
- both libraries help with distillation, but again with different approaches/APIs (and instructor goes further with CLI utilities for triggering training runs, etc.)
- functioncalming does not ship LLM-validators for pydantic (but in principle, those from instructor should work with functioncalming)
- functioncalming does not, in its current release, support json-mode or legacy function calling as its underlying mechanisms
- Currently, instructor has much nicer docs and is probably better supported :)

It might make sense to use both libraries together. functioncalming does not handle many of the features instructor provides (e.g. LLM-based pydantic validators, fine-tuning CLI, etc.). 
If your use-case is simply to call OpenAI with multiple functions and/or to generate fine-tuning/distillation training data for a repeatable function-calling task, 
functioncalming might be a more straightforward option. 

            
