llm-json-adapter

Name: llm-json-adapter
Version: 0.2.0
Upload time: 2024-04-27 09:58:20
Author: Takaaki Mizuno
Requires Python: <4.0,>=3.10
# LLM JSON Adapter

## What is it?

When calling an LLM from an application, you often want the output in JSON. OpenAI's GPT API has a mechanism called Function Calling that can return JSON, but Google's Gemini does not appear to offer equivalent functionality.

Therefore, I created a wrapper library that lets you switch between LLMs and still get results in JSON. This library can:

- Define the results you want in JSON Schema
- Switch between LLMs (currently supports OpenAI's GPT, Google's Gemini, Ollama, and Bedrock for Llama and Anthropic Claude)
- Retry a specified number of times if JSON retrieval fails
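The package itself is published on PyPI; assuming pip, it installs with the command below. The SDK for whichever provider you use must be installed separately, as noted in each provider section.

```shell
pip install llm-json-adapter
```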

## How to use

Create an adapter with the following parameters, then call `generate()` to get results in JSON.

| Parameter       | Description                                                                      |
|-----------------|----------------------------------------------------------------------------------|
| provider_name   | The name of the LLM provider to use: "openai", "google", "ollama", or "bedrock". |
| max_retry_count | The number of times to retry if JSON retrieval fails.                            |
| attributes      | The attributes to pass to the LLM provider.                                      |
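As an illustration of the retry behavior described above (a minimal sketch of the pattern, not the library's actual internals), a parse-and-retry loop can look like this; `call_llm` stands in for any callable that returns a string:

```python
import json

def generate_json(call_llm, required_keys, max_retry_count=3):
    """Call an LLM, parse its output as JSON, and retry on failure.

    Illustrative sketch only; `call_llm` is any callable returning a string.
    """
    last_error = None
    for _ in range(max_retry_count):
        raw = call_llm()
        try:
            result = json.loads(raw)
        except json.JSONDecodeError as exc:
            last_error = exc
            continue  # malformed JSON: try again
        if all(key in result for key in required_keys):
            return result
        last_error = KeyError(f"missing keys in {result!r}")
    raise RuntimeError("JSON retrieval failed") from last_error

# Stub "LLM" that fails once, then succeeds:
responses = iter(["not json", '{"data": []}'])
print(generate_json(lambda: next(responses), required_keys=["data"]))  # → {'data': []}
```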



### OpenAI

#### Libraries

You need to install `openai`.

#### Attributes

| Parameter         | Description                        |
|-------------------|------------------------------------|
| api_key           | The API key to use.                |
| model             | Model name. Default: gpt-3.5-turbo |
| temperature       | Default: 0.67                      |
| presence_penalty  | Default: 0                         |
| frequency_penalty | Default: 0                         |

#### Example

```python
from llm_json_adapter import LLMJsonAdapter, Response

adapter = LLMJsonAdapter(provider_name="openai", max_retry_count=3, attributes={
    "api_key": "Your API Key",
    "model": "gpt-3.5-turbo",
})
result = adapter.generate(
    prompt="prompt",
    language="en",
    act_as="Professional Software Service Business Analyst",
    function=Response(
        name="response name",
        description="response description",
        parameters={
            "type": "object",
            "properties": {
                "data": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "title": {
                                "type": "string",
                            },
                            "description": {
                                "type": "string",
                            },
                        },
                        "required": ["title", "description"],
                    },
                },
            },
            "required": ["data"]
        },
    )
)
```
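Given the schema in the example above, a conforming result would have the following shape. The `sample` payload here is hypothetical and only illustrates the JSON structure the schema describes:

```python
# A hypothetical payload matching the schema in the example above.
sample = {
    "data": [
        {"title": "Feature A", "description": "What feature A does."},
    ]
}

# The schema's "required" entries imply these checks hold:
assert "data" in sample
assert all({"title", "description"} <= item.keys() for item in sample["data"])
```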

### Gemini

#### Libraries

You need to install `google-generativeai`.

#### Attributes

| Parameter | Description                                |
|-----------|--------------------------------------------|
| api_key   | The API key to use.                        |
| model     | Model name. Default: gemini-1.5-pro-latest |

#### Example

```python
from llm_json_adapter import LLMJsonAdapter, Response

adapter = LLMJsonAdapter(provider_name="google", max_retry_count=3, attributes={
    "api_key": "Your API Key",
    "model": "gemini-1.5-pro-latest",
})
result = adapter.generate(
    prompt="prompt",
    language="en",
    act_as="Professional Software Service Business Analyst",
    function=Response(
        name="response name",
        description="response description",
        parameters={
            "type": "object",
            "properties": {
                "data": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "title": {
                                "type": "string",
                            },
                            "description": {
                                "type": "string",
                            },
                        },
                        "required": ["title", "description"],
                    },
                },
            },
            "required": ["data"]
        },
    )
)
```

### Ollama

You need to run an Ollama server (https://ollama.com/).

#### Libraries

You need to install `ollama`.
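Assuming pip and the Ollama CLI, the client library and the default model can be set up as follows:

```shell
pip install ollama
# The model named in `attributes` must be available on the server:
ollama pull llama3
```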

#### Attributes

| Parameter | Description                                         |
|-----------|-----------------------------------------------------|
| url       | The Ollama server URL, e.g. http://localhost:11434. |
| model     | Model name. Default: llama3                         |

#### Example

```python
from llm_json_adapter import LLMJsonAdapter, Response

adapter = LLMJsonAdapter(provider_name="ollama", max_retry_count=3, attributes={
    "url": "http://localhost:11434",
    "model": "llama3",
})
result = adapter.generate(
    prompt="prompt",
    language="en",
    act_as="Professional Software Service Business Analyst",
    function=Response(
        name="response name",
        description="response description",
        parameters={
            "type": "object",
            "properties": {
                "data": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "title": {
                                "type": "string",
                            },
                            "description": {
                                "type": "string",
                            },
                        },
                        "required": ["title", "description"],
                    },
                },
            },
            "required": ["data"]
        },
    )
)
```

### Bedrock

You need to set up AWS Bedrock (https://aws.amazon.com/bedrock/).

#### Libraries

You need to install `boto3`.

#### Attributes


| Parameter         | Description                                     |
|-------------------|-------------------------------------------------|
| access_key_id     | The AWS access key ID to use.                   |
| secret_access_key | The AWS secret access key to use.               |
| region            | Region. Default: us-east-1                      |
| model             | Default: anthropic.claude-3-haiku-20240307-v1:0 |
| max_tokens        | Default: 1024                                   |

#### Example

```python
from llm_json_adapter import LLMJsonAdapter, Response

adapter = LLMJsonAdapter(provider_name="bedrock", max_retry_count=3, attributes={
    "access_key_id": "<YOUR AWS ACCESS KEY>",
    "secret_access_key": "<YOUR AWS SECRET ACCESS KEY>",
    "region": "us-east-1",
    "model": "anthropic.claude-3-haiku-20240307-v1:0",
    "max_tokens": 1024,
})
result = adapter.generate(
    prompt="prompt",
    language="en",
    act_as="Professional Software Service Business Analyst",
    function=Response(
        name="response name",
        description="response description",
        parameters={
            "type": "object",
            "properties": {
                "data": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "title": {
                                "type": "string",
                            },
                            "description": {
                                "type": "string",
                            },
                        },
                        "required": ["title", "description"],
                    },
                },
            },
            "required": ["data"]
        },
    )
)
```




            
