instructor 1.2.6

- Summary: structured outputs for llm
- Home page: https://github.com/jxnl/instructor
- Author: Jason Liu
- Requires Python: <4.0,>=3.9
- License: MIT
- Uploaded: 2024-05-09 04:29:50
# Instructor: Structured LLM Outputs

Instructor is a Python library that makes it a breeze to work with structured outputs from large language models (LLMs). Built on top of Pydantic, it provides a simple, transparent, and user-friendly API to manage validation, retries, and streaming responses. Get ready to supercharge your LLM workflows!

[![Twitter Follow](https://img.shields.io/twitter/follow/jxnlco?style=social)](https://twitter.com/jxnlco)
[![Discord](https://img.shields.io/discord/1192334452110659664?label=discord)](https://discord.gg/CV8sPM5k5Y)
[![Downloads](https://img.shields.io/pypi/dm/instructor.svg)](https://pypi.python.org/pypi/instructor)


## Key Features

- **Response Models**: Specify Pydantic models to define the structure of your LLM outputs
- **Retry Management**: Easily configure the number of retry attempts for your requests
- **Validation**: Ensure LLM responses conform to your expectations with Pydantic validation (see the sketch after this list)
- **Streaming Support**: Work with Lists and Partial responses effortlessly
- **Flexible Backends**: Seamlessly integrate with various LLM providers beyond OpenAI
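
Retries and validation work together: when Pydantic validation fails, instructor re-asks the model with the validation error, up to `max_retries` times. Below is a minimal sketch assuming an `OPENAI_API_KEY` in your environment; the uppercase-name rule is purely illustrative.

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel, field_validator


class UserDetail(BaseModel):
    name: str
    age: int

    @field_validator("name")
    @classmethod
    def name_must_be_uppercase(cls, v: str) -> str:
        # Raising here triggers a re-ask: instructor feeds the error
        # back to the model and retries, up to max_retries times.
        if v != v.upper():
            raise ValueError("name must be in uppercase")
        return v


client = instructor.from_openai(OpenAI())

user = client.chat.completions.create(
    model="gpt-3.5-turbo",
    response_model=UserDetail,
    max_retries=3,
    messages=[{"role": "user", "content": "Extract: jason is 25 years old"}],
)

assert user.name == "JASON"
```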

## Get Started in Minutes

Install Instructor with a single command:

```bash
pip install -U instructor
```

Now, let's see Instructor in action with a simple example:

```python
import instructor
from pydantic import BaseModel
from openai import OpenAI


# Define your desired output structure
class UserInfo(BaseModel):
    name: str
    age: int


# Patch the OpenAI client
client = instructor.from_openai(OpenAI())

# Extract structured data from natural language
user_info = client.chat.completions.create(
    model="gpt-3.5-turbo",
    response_model=UserInfo,
    messages=[{"role": "user", "content": "John Doe is 30 years old."}],
)

print(user_info.name)
#> John Doe
print(user_info.age)
#> 30
```

### Using Anthropic Models

```python
import instructor
from anthropic import Anthropic
from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


client = instructor.from_anthropic(Anthropic())

# note that client.chat.completions.create will also work
resp = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Extract Jason is 25 years old.",
        }
    ],
    response_model=User,
)

assert isinstance(resp, User)
assert resp.name == "Jason"
assert resp.age == 25
```

### Using Cohere Models

Make sure to install `cohere` and set the `CO_API_KEY` environment variable with `export CO_API_KEY=<YOUR_COHERE_API_KEY>`.

```bash
pip install cohere
```

```python
import instructor
import cohere
from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


client = instructor.from_cohere(cohere.Client())

# note that client.chat.completions.create will also work
resp = client.chat.completions.create(
    model="command-r-plus",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Extract Jason is 25 years old.",
        }
    ],
    response_model=User,
)

assert isinstance(resp, User)
assert resp.name == "Jason"
assert resp.age == 25
```


### Using LiteLLM

```python
import instructor
from litellm import completion
from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


client = instructor.from_litellm(completion)

resp = client.chat.completions.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Extract Jason is 25 years old.",
        }
    ],
    response_model=User,
)

assert isinstance(resp, User)
assert resp.name == "Jason"
assert resp.age == 25
```

## Types are inferred correctly

Correct type inference was always the dream for instructor, but with the patched OpenAI client it wasn't possible to get typing to work well. Now, with the new client, typing works as expected! We've also added a few `create_*` methods to make it easier to create iterables and partials, and to access the original completion.

### Calling `create`

```python
import openai
import instructor
from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


client = instructor.from_openai(openai.OpenAI())

user = client.chat.completions.create(
    model="gpt-4-turbo-preview",
    messages=[
        {"role": "user", "content": "Create a user"},
    ],
    response_model=User,
)
```

Now, if you use an IDE, you can see that the type is correctly inferred.

![type](./docs/blog/posts/img/type.png)

### Handling async: `await create`

This will also work correctly with asynchronous clients.

```python
import openai
import instructor
from pydantic import BaseModel


client = instructor.from_openai(openai.AsyncOpenAI())


class User(BaseModel):
    name: str
    age: int


async def extract():
    return await client.chat.completions.create(
        model="gpt-4-turbo-preview",
        messages=[
            {"role": "user", "content": "Create a user"},
        ],
        response_model=User,
    )
```

Notice that simply because we return the `create` call, the `extract()` function is correctly typed as returning a `User`.

![async](./docs/blog/posts/img/async_type.png)
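
As a quick usage sketch (assuming the snippet above is in scope), the coroutine can be driven with `asyncio`:

```python
import asyncio

user = asyncio.run(extract())
assert isinstance(user, User)
```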

### Returning the original completion: `create_with_completion`

You can also return the original completion object

```python
import openai
import instructor
from pydantic import BaseModel


client = instructor.from_openai(openai.OpenAI())


class User(BaseModel):
    name: str
    age: int


user, completion = client.chat.completions.create_with_completion(
    model="gpt-4-turbo-preview",
    messages=[
        {"role": "user", "content": "Create a user"},
    ],
    response_model=User,
)
```
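
Since `completion` is the unmodified provider response, everything on it stays accessible; for example, token accounting (a small sketch; `usage` may be `None` on some responses):

```python
if completion.usage is not None:
    print(completion.usage.total_tokens)
```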

![with_completion](./docs/blog/posts/img/with_completion.png)


### Streaming Partial Objects: `create_partial`

To handle streams, we still support `Iterable[T]` and `Partial[T]`, but to simplify type inference we've also added `create_iterable` and `create_partial` methods!

```python
import openai
import instructor
from pydantic import BaseModel


client = instructor.from_openai(openai.OpenAI())


class User(BaseModel):
    name: str
    age: int


user_stream = client.chat.completions.create_partial(
    model="gpt-4-turbo-preview",
    messages=[
        {"role": "user", "content": "Create a user"},
    ],
    response_model=User,
)

for user in user_stream:
    print(user)
    #> name=None age=None
    #> name='' age=None
    #> name='John' age=None
    #> name='John Doe' age=None
    #> name='John Doe' age=30
```

Notice that the inferred type is now `Generator[User, None]`.

![generator](./docs/blog/posts/img/generator.png)

### Streaming Iterables: `create_iterable`

When we want to extract multiple objects, we get back an iterable of them.

```python
import openai
import instructor
from pydantic import BaseModel


client = instructor.from_openai(openai.OpenAI())


class User(BaseModel):
    name: str
    age: int


users = client.chat.completions.create_iterable(
    model="gpt-4-turbo-preview",
    messages=[
        {"role": "user", "content": "Create 2 users"},
    ],
    response_model=User,
)

for user in users:
    print(user)
    #> name='John' age=30
    #> name='Jane' age=25
```

![iterable](./docs/blog/posts/img/iterable.png)

## [Evals](https://github.com/jxnl/instructor/tree/main/tests/llm/test_openai/evals#how-to-contribute-writing-and-running-evaluation-tests)

We invite you to contribute evals in `pytest` as a way to monitor the quality of the OpenAI models and the `instructor` library. To get started, check out the evals for [Anthropic](https://github.com/jxnl/instructor/blob/main/tests/llm/test_anthropic/evals/test_simple.py) and [OpenAI](https://github.com/jxnl/instructor/tree/main/tests/llm/test_openai/evals#how-to-contribute-writing-and-running-evaluation-tests), and contribute your own in the form of pytest tests. These evals run once a week and the results are posted.
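
If you're unsure what an eval looks like, here is a minimal sketch in the style of the quickstart above (the model, prompt, and assertions are illustrative, not the project's actual test cases):

```python
import instructor
import openai
from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


def test_extracts_user():
    # Requires OPENAI_API_KEY; structure mirrors the quickstart example.
    client = instructor.from_openai(openai.OpenAI())
    user = client.chat.completions.create(
        model="gpt-3.5-turbo",
        response_model=User,
        messages=[{"role": "user", "content": "Extract Jason is 25 years old."}],
    )
    assert user.name == "Jason"
    assert user.age == 25
```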

## Contributing

If you want to help, check out some of the issues marked `good-first-issue` or `help-wanted` [here](https://github.com/jxnl/instructor/labels/good%20first%20issue). They could be anything from code improvements to a guest blog post or a new cookbook.

## CLI

We also provide CLI functionality for added convenience (see the `--help` entry points after this list):

- `instructor jobs` : Helps with the creation of fine-tuning jobs with OpenAI. Simply run `instructor jobs create-from-file --help` to get started creating your first fine-tuned GPT-3.5 model

- `instructor files` : Manage your uploaded files with ease. You'll be able to create, delete and upload files all from the command line

- `instructor usage` : Instead of heading to the OpenAI site each time, you can monitor your usage from the CLI and filter by date and time period. Note that usage often takes ~5-10 minutes to update on OpenAI's side
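
Each command is a standard CLI entry point, so `--help` is the safest way to discover the exact flags (the `jobs` subcommand is taken from the list above; the other invocations only assume the usual `--help` convention):

```bash
instructor jobs create-from-file --help
instructor files --help
instructor usage --help
```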

## License

This project is licensed under the terms of the MIT License.

# Contributors


<a href="https://github.com/jxnl/instructor/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=jxnl/instructor" />
</a>

            
