local-llm-function-calling

Name: local-llm-function-calling
Version: 0.1.23
Summary: A tool for generating function arguments and choosing what function to call with local LLMs
Home page: https://github.com/rizerphe/local-llm-function-calling
Author: rizerphe
License: MIT
Requires Python: >=3.11,<4.0
Keywords: llm, jsonschema, huggingface, transformers, local, llama.cpp
Upload time: 2024-03-12 12:29:43
# Local LLM function calling

[![Documentation Status](https://readthedocs.org/projects/local-llm-function-calling/badge/?version=latest)](https://local-llm-function-calling.readthedocs.io/en/latest/?badge=latest) [![PyPI version](https://badge.fury.io/py/local-llm-function-calling.svg)](https://badge.fury.io/py/local-llm-function-calling)

## Overview

The `local-llm-function-calling` project constrains the output of Hugging Face text generation models to a JSON schema and helps formulate prompts for function calls, similar to OpenAI's [function calling](https://openai.com/blog/function-calling-and-other-api-updates) feature. Unlike OpenAI's implementation, however, the schema is actually enforced.

The project provides a `Generator` class that generates text while ensuring compliance with the provided prompt and JSON schema, giving you convenient control over the model's output. It uses my own quickly sketched `json-schema-enforcer` project as the enforcer.

## Features

- Constrains the generation of Hugging Face text generation models to follow a JSON schema.
- Provides a mechanism for formulating prompts for function calls, enabling precise data extraction and formatting.
- Simplifies the text generation process through a user-friendly `Generator` class.

## Installation

To install the `local-llm-function-calling` library, use the following command:

```shell
pip install local-llm-function-calling
```

## Usage

Here's a simple example demonstrating how to use `local-llm-function-calling`:

```python
from local_llm_function_calling import Generator

# Define the functions the model can choose from
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                    "maxLength": 20,
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

# Initialize the generator with the Hugging Face model and our functions
generator = Generator.hf(functions, "gpt2")

# Generate text using a prompt
function_call = generator.generate("What is the weather like today in Brooklyn?")
print(function_call)
```
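For reference, here is what arguments conforming to the schema above look like. This is a toy, hand-rolled check written just for this example; it is not how the library works internally (the library enforces the schema token-by-token during generation rather than validating afterwards):

```python
# The "parameters" schema from the example above
schema = {
    "type": "object",
    "properties": {
        "location": {"type": "string", "maxLength": 20},
        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
    },
    "required": ["location"],
}

def valid_args(args: dict) -> bool:
    """Minimal check covering just this schema's rules."""
    # All required keys must be present
    if not all(key in args for key in schema["required"]):
        return False
    # "location" must be a string no longer than maxLength
    location = args.get("location")
    if not isinstance(location, str):
        return False
    if len(location) > schema["properties"]["location"]["maxLength"]:
        return False
    # "unit", if given, must be one of the enum values
    if "unit" in args and args["unit"] not in schema["properties"]["unit"]["enum"]:
        return False
    return True

assert valid_args({"location": "Brooklyn, NY", "unit": "celsius"})
assert not valid_args({"unit": "celsius"})       # missing required "location"
assert not valid_args({"location": "x" * 21})    # exceeds maxLength
```

Because the generation itself is constrained, a model run through the `Generator` cannot produce arguments that would fail a check like this in the first place.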

## Custom constraints

You don't have to use my prompting methods; you can craft your own prompts and your own constraints, and still benefit from constrained generation:

```python
from local_llm_function_calling import Constrainer
from local_llm_function_calling.model.huggingface import HuggingfaceModel

# Define your own constraint
# (you can also use local_llm_function_calling.JsonSchemaConstraint)
def lowercase_sentence_constraint(text: str) -> tuple[bool, bool]:
    # Must return (is_valid, is_complete)
    return text.islower(), text.endswith(".")

# Create the constrainer
constrainer = Constrainer(HuggingfaceModel("gpt2"))

# Generate your text
generated = constrainer.generate("Prefix.\n", lowercase_sentence_constraint, max_len=10)
```
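To make the `(is_valid, is_complete)` contract concrete, here is how the constraint above judges a few candidate prefixes. This is plain Python with no library involved, just the constraint function itself:

```python
def lowercase_sentence_constraint(text: str) -> tuple[bool, bool]:
    # Must return (is_valid, is_complete)
    return text.islower(), text.endswith(".")

# Valid but incomplete: generation may keep going
assert lowercase_sentence_constraint("hello wor") == (True, False)

# Valid and complete: generation can stop here
assert lowercase_sentence_constraint("hello world.") == (True, True)

# Invalid: tokens producing this prefix would be rejected
assert lowercase_sentence_constraint("Hello") == (False, False)
```

During generation, the constrainer evaluates candidate continuations against the constraint and only keeps tokens whose resulting prefix remains valid, stopping once the text is both valid and complete (or `max_len` is reached).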

## Extending and Customizing

To extend or customize the prompt structure, you can subclass the `TextPrompter` class. This allows you to modify the prompt generation process according to your specific requirements.

            
