ollama-instructor

- **Name**: ollama-instructor
- **Version**: 0.5.0
- **Home page**: https://github.com/lennartpollvogt/ollama-instructor
- **Summary**: Instruct and validate structured outputs from LLMs with Ollama.
- **Upload time**: 2024-10-27 15:00:30
- **Author**: Lennart Pollvogt
- **Requires Python**: >=3.8, <4.0
- **License**: MIT
- **Keywords**: ollama, pydantic, validation, json-schema, json, instructor, prompting, local-llm, llm
- **Requirements**: ollama (0.2.0), pydantic (2.7.1), partial-json-parser (0.2.0), promptools (0.1.3.2), fastapi (0.110.2), rich (13.7.1), icecream (2.1.3), pytest (8.2.2), pytest-asyncio (0.23.7)
# ollama-instructor

`ollama-instructor` is a lightweight Python library that provides a convenient wrapper around the client of the well-known Ollama repository, extending it with validation features for obtaining valid JSON responses from a Large Language Model (LLM). Using Pydantic, `ollama-instructor` lets users specify models that define the JSON schema and data validation, ensuring that responses from LLMs adhere to the defined schema.

[![Downloads](https://static.pepy.tech/badge/ollama-instructor/month)](https://pepy.tech/project/ollama-instructor)

> **Note 1**: This library has native support for Ollama's Python client. If you want more flexibility with other providers like Groq, OpenAI, Perplexity and more, have a look at the great [instructor](https://github.com/jxnl/instructor) library by Jason Liu.

> **Note 2**: This library depends on having [Ollama](https://ollama.com) installed and running. For more information, please refer to the official website of Ollama.

---

### Documentation and guides
- [Why ollama-instructor?](/docs/1_Why%20ollama-instructor.md)
- [Features of ollama-instructor](/docs/2_Features%20of%20ollama-instructor.md)
- [The concept of ollama-instructor](/docs/3_The%20concept%20of%20ollama-instructor.md)
- [Enhanced prompting within Pydantic's BaseModel](/docs/4_Enhanced%20prompting%20within%20Pydantics%20BaseModel.md)
- [Best practices](/docs/5_Best%20practices.md)

### Examples
- [Image Captioning](/examples/images/image_captioning.md)
- [Todos from Conversation](/examples/todos/todos_from_chat.md)
- [Multiple async operations](/examples/async/async_operations.md)

### Blog
- [How to use ollama-instructor best](/blog/How%20to%20use%20ollama-instructor%20best.md)
- [What you can learn from prompting LLMs for your relationships](/blog/What%20you%20can%20learn%20from%20prompting%20LLMs%20for%20your%20relationships.md)
- [May the BaseModel be with you](/blog/May%20the%20BaseModel%20be%20with%20you.md)


## Features

- Easy **integration with the Ollama** repository for running open-source LLMs locally. See:
    - https://github.com/ollama/ollama
    - https://github.com/ollama/ollama-python
- Data **validation** using **Pydantic BaseModel** to ensure the JSON response from an LLM meets the specified schema. See:
    - https://docs.pydantic.dev/latest/
- **Retries with error guidance** if the LLM returns invalid responses. You can set the maximum number of retries.
- **Allow partial responses** to be returned by setting the `allow_partial` flag to `True`. This will try to clean invalid data within the response and set it to `None`. Data that is not part of the Pydantic model is removed from the response (see the sketch after this list).
- **Reasoning** to enhance the response quality of the LLM. This can be useful for complex tasks and JSON schemas and helps smaller LLMs perform better. By setting `format` to `''` instead of `'json'` (the default), the LLM can return a string with step-by-step reasoning. The LLM is instructed to return the JSON response within a code block (```json ... ```), which ollama-instructor can extract (see [example](/docs/2_Features%20of%20ollama-instructor.md)).
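A minimal sketch of the retry and partial-response options; the keyword arguments `retries` and `allow_partial` are those described in the method documentation further down, and the default values on the model are my own choice to make a partial response usable:

```python
from typing import Optional

from ollama_instructor.ollama_instructor_client import OllamaInstructorClient
from pydantic import BaseModel

class Person(BaseModel):
    # Defaults make partial responses usable when validation keeps failing
    name: Optional[str] = None
    age: Optional[int] = None

client = OllamaInstructorClient()
response = client.chat_completion(
    model='phi3',
    pydantic_model=Person,
    messages=[{'role': 'user', 'content': 'Jason is 30 years old.'}],
    retries=2,           # re-prompt with the ValidationError up to 2 times
    allow_partial=True,  # fall back to a partially filled model instead of failing
)
print(response['message']['content'])
```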

`ollama-instructor` can help you to get structured and reliable JSON from local LLMs like:
- llama3 & llama3.1
- phi3
- mistral
- gemma
- ...

`ollama-instructor` can be your starting point for building agents yourself, with full control over agent flows and without relying on complex agent frameworks.

## Concept

![Concept.png](/Concept.png)

> Find more here: [The concept of ollama-instructor](/docs/3_The%20concept%20of%20ollama-instructor.md)

# Quick guide

## Installation

To install `ollama-instructor`, run the following command in your terminal:

```
pip install ollama-instructor
```


## Quick Start

Here are quick examples to get you started with `ollama-instructor`:

**chat completion**:
```python
from ollama_instructor.ollama_instructor_client import OllamaInstructorClient
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int

client = OllamaInstructorClient(...)
response = client.chat_completion(
    model='phi3',
    pydantic_model=Person,
    messages=[
        {
            'role': 'user',
            'content': 'Jason is 30 years old.'
        }
    ]
)

print(response['message']['content'])
```
Output:
```json
{"name": "Jason", "age": 30}
```
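Because the content has already been validated against `Person`, it can be loaded back into the model with standard Pydantic v2 tooling (a small usage sketch, assuming the content is returned as a JSON string as shown above; `model_validate_json` is Pydantic's API, not part of `ollama-instructor`):

```python
person = Person.model_validate_json(response['message']['content'])
print(person.name, person.age)  # Jason 30
```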

**asynchronous chat completion**:
```python
from pydantic import BaseModel, ConfigDict
from enum import Enum
from typing import List
import rich
import asyncio

from ollama_instructor.ollama_instructor_client import OllamaInstructorAsyncClient

class Gender(Enum):
    MALE = 'male'
    FEMALE = 'female'

class Person(BaseModel):
    '''
    This model defines a person.
    '''
    name: str
    age: int
    gender: Gender
    friends: List[str] = []

    model_config = ConfigDict(
        extra='forbid'
    )

async def main():
    client = OllamaInstructorAsyncClient(...)
    await client.async_init()  # Important: must call this before using the client

    response = await client.chat_completion(
        model='phi3:instruct',
        pydantic_model=Person,
        messages=[
            {
                'role': 'user',
                'content': 'Jason is 25 years old. Jason loves to play soccer with his friends Nick and Gabriel. His favorite food is pizza.'
            }
        ],
    )
    rich.print(response['message']['content'])

if __name__ == "__main__":
    asyncio.run(main())
```

**chat completion with streaming**:
```python
from ollama_instructor.ollama_instructor_client import OllamaInstructorClient
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int

client = OllamaInstructorClient(...)
response = client.chat_completion_with_stream(
    model='phi3',
    pydantic_model=Person,
    messages=[
        {
            'role': 'user',
            'content': 'Jason is 30 years old.'
        }
    ]
)

for chunk in response:
    print(chunk['message']['content'])
```

## OllamaInstructorClient and OllamaInstructorAsyncClient

The classes `OllamaInstructorClient` and `OllamaInstructorAsyncClient` are the main classes of the `ollama-instructor` library. They wrap the `Ollama` (async) client and accept the following arguments:
- `host`: the URL of the Ollama server (default: `http://localhost:11434`). See documentation of [Ollama](https://github.com/ollama/ollama)
- `debug`: a `bool` indicating whether to print debug messages (default: `False`).

> **Note**: Up to version `v0.4.2` the library used `icecream` for debugging; it now uses the `logging` module.
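For example, pointing the client at an explicit Ollama host with debug logging enabled (a small sketch based only on the two arguments listed above):

```python
from ollama_instructor.ollama_instructor_client import OllamaInstructorClient

client = OllamaInstructorClient(
    host='http://localhost:11434',  # default Ollama server URL
    debug=True,                     # emit debug messages
)
```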

### chat_completion & chat_completion_with_stream

The `chat_completion` and `chat_completion_with_stream` methods are the main methods of the library. They are used to generate validated chat completions from a given prompt.

`ollama-instructor` uses `chat_completion` and `chat_completion_with_stream` to expand the `chat` method of `Ollama`. For all available arguments of `chat` see the [Ollama documentation](https://github.com/ollama/ollama).
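Since the regular `chat` arguments pass through, Ollama options such as the temperature can be supplied as usual. A sketch, reusing `client` and `Person` from the Quick Start above and assuming the remaining keyword arguments are forwarded to Ollama's `chat` as described; `options` is the standard Ollama chat argument, not something added by this library:

```python
response = client.chat_completion(
    model='phi3',
    pydantic_model=Person,
    messages=[{'role': 'user', 'content': 'Jason is 30 years old.'}],
    options={'temperature': 0.2},  # passed straight through to Ollama's chat
)
```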

The following arguments are added to the `chat` method within `chat_completion` and `chat_completion_with_stream`:
- `pydantic_model`: a subclass of Pydantic's `BaseModel` that is used first to instruct the LLM with the JSON schema of the `BaseModel` and second to validate the LLM's response with the built-in validation of [Pydantic](https://docs.pydantic.dev/latest/).
- `retries`: the number of retries if the LLM fails to generate a valid response (default: `3`). On each retry the LLM is given its last response together with the resulting `ValidationError` and is instructed to generate a valid response.
- `allow_partial`: if set to `True`, `ollama-instructor` modifies the `BaseModel` to allow partial responses. In this case it makes sure to return a correct instance of the JSON schema, but with default or `None` values for missing fields. It is therefore useful to provide default values within the `BaseModel`. As the library evolves you will find examples and best-practice guides on this topic in the [docs](/docs/) folder.
- `format`: this is already an argument of `Ollama`, but since version `0.4.0` of `ollama-instructor` it can be set to either `'json'` or `''`. By default `ollama-instructor` uses the `'json'` format; before version `0.4.0` only `'json'` was possible. Within `chat_completion` (**NOT** `chat_completion_with_stream`) you can set `format=''` to enable the reasoning capabilities. The default system prompt of `ollama-instructor` instructs the LLM to respond within a ```json ... ``` code block, from which the JSON is extracted for validation. When you bring your own system prompt and set `format=''`, this has to be taken into account. See an [example here](/docs/2_Features%20of%20ollama-instructor.md) and the sketch below.
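A minimal sketch of the reasoning mode; per the description above, the extraction and validation of the fenced JSON block is handled by `ollama-instructor` when the default system prompt is used:

```python
from ollama_instructor.ollama_instructor_client import OllamaInstructorClient
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int

client = OllamaInstructorClient()
response = client.chat_completion(
    model='phi3',
    pydantic_model=Person,
    format='',  # allow free-text reasoning before the fenced JSON answer
    messages=[{'role': 'user', 'content': 'Jason is 30 years old.'}],
)
print(response['message']['content'])
```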


## Documentation and examples
- It is my goal to have a well-documented library. In the meantime, have a look at the repository's code to get an idea of how to use it.
- There will be a bunch of guides and examples in the [docs](/docs/) folder (work in progress).
- If you need more information about the library, please feel free to open a discussion or write an email to lennartpollvogt@protonmail.com.


## License

`ollama-instructor` is released under the MIT License. See the [LICENSE](LICENSE) file for more details.


## Support and Community

If you need help or want to discuss `ollama-instructor`, feel free to open an issue or a discussion on GitHub, or just drop me an email (lennartpollvogt@protonmail.com).
I always welcome new ideas for use cases for LLMs and vision models, and would love to cover them in the examples folder. Feel free to discuss them with me via email, issue, or the discussion section of this repository. 😊

            
