openai 1.25.1

Name: openai
Version: 1.25.1
Summary: The official Python library for the openai API
Author email: OpenAI <support@openai.com>
Homepage: https://github.com/openai/openai-python
Requires Python: >=3.7.1
Upload time: 2024-05-02 15:52:56
# OpenAI Python API library

[![PyPI version](https://img.shields.io/pypi/v/openai.svg)](https://pypi.org/project/openai/)

The OpenAI Python library provides convenient access to the OpenAI REST API from any Python 3.7+
application. The library includes type definitions for all request params and response fields,
and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).

It is generated from our [OpenAPI specification](https://github.com/openai/openai-openapi) with [Stainless](https://stainlessapi.com/).

## Documentation

The REST API documentation can be found [on platform.openai.com](https://platform.openai.com/docs). The full API of this library can be found in [api.md](https://github.com/openai/openai-python/tree/main/api.md).

## Installation

> [!IMPORTANT]
> The SDK was rewritten in v1, which was released November 6th 2023. See the [v1 migration guide](https://github.com/openai/openai-python/discussions/742), which includes scripts to automatically update your code.

```sh
# install from PyPI
pip install openai
```

## Usage

The full API of this library can be found in [api.md](https://github.com/openai/openai-python/tree/main/api.md).

```python
import os
from openai import OpenAI

client = OpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="gpt-3.5-turbo",
)
```

While you can provide an `api_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `OPENAI_API_KEY="My API Key"` to your `.env` file
so that your API Key is not stored in source control.
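For example, a minimal sketch using python-dotenv (assuming a `.env` file containing `OPENAI_API_KEY=...` exists in the working directory):

```python
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # reads .env and populates os.environ

client = OpenAI()  # picks up OPENAI_API_KEY from the environment
```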

### Polling Helpers

When interacting with the API, some actions, such as starting a Run or adding files to a vector store, are asynchronous and take time to complete. The SDK includes
helper functions that poll the status until it reaches a terminal state and then return the resulting object.
If an API method results in an action that could benefit from polling, there is a corresponding version of the
method ending in `_and_poll`.

For instance, to create a Run and poll until it reaches a terminal state, you can run:

```python
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
```
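The snippet above assumes an existing Assistant and Thread. A minimal sketch of that setup (the name, instructions, and message content are illustrative):

```python
assistant = client.beta.assistants.create(
    name="Math Tutor",  # illustrative name
    instructions="You answer math questions.",
    model="gpt-3.5-turbo",
)
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What is 2 + 2?",
)
```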

More information on the lifecycle of a Run can be found in the [Run Lifecycle Documentation](https://platform.openai.com/docs/assistants/how-it-works/run-lifecycle).

### Bulk Upload Helpers

When creating and interacting with vector stores, you can use the polling helpers to monitor the status of operations.
For convenience, we also provide a bulk upload helper that lets you upload several files at once.

```python
from pathlib import Path

sample_files = [Path("sample-paper.pdf"), ...]

# `store` is a previously created vector store; run this inside an async context.
batch = await client.beta.vector_stores.file_batches.upload_and_poll(
    store.id,
    files=sample_files,
)
```

### Streaming Helpers

The SDK also includes helpers to process streams and handle the incoming events.

```python
with client.beta.threads.runs.stream(
    thread_id=thread.id,
    assistant_id=assistant.id,
    instructions="Please address the user as Jane Doe. The user has a premium account.",
) as stream:
    for event in stream:
        # Print the text from text delta events
        if event.type == "thread.message.delta" and event.data.delta.content:
            print(event.data.delta.content[0].text)
```

More information on streaming helpers can be found in the dedicated documentation: [helpers.md](https://github.com/openai/openai-python/tree/main/helpers.md).

## Async usage

Simply import `AsyncOpenAI` instead of `OpenAI` and use `await` with each API call:

```python
import os
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
)


async def main() -> None:
    chat_completion = await client.chat.completions.create(
        messages=[
            {
                "role": "user",
                "content": "Say this is a test",
            }
        ],
        model="gpt-3.5-turbo",
    )


asyncio.run(main())
```

Functionality between the synchronous and asynchronous clients is otherwise identical.

## Streaming responses

We provide support for streaming responses using Server-Sent Events (SSE).

```python
from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say this is a test"}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```

The async client uses the exact same interface.

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()


async def main():
    stream = await client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Say this is a test"}],
        stream=True,
    )
    async for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="")


asyncio.run(main())
```

## Module-level client

> [!IMPORTANT]
> We highly recommend instantiating client instances instead of relying on the global client.

We also expose a global client instance that is accessible in a similar fashion to versions prior to v1.

```py
import openai

# optional; defaults to `os.environ['OPENAI_API_KEY']`
openai.api_key = '...'

# all client options can be configured just like the `OpenAI` instantiation counterpart
openai.base_url = "https://..."
openai.default_headers = {"x-foo": "true"}

completion = openai.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python?",
        },
    ],
)
print(completion.choices[0].message.content)
```

The API is exactly the same as the standard client-instance-based API.

This is intended to be used within REPLs or notebooks for faster iteration, **not** in application code.

We recommend that you always instantiate a client (e.g., with `client = OpenAI()`) in application code because:

- It can be difficult to reason about where client options are configured
- It's not possible to change certain client options without potentially causing race conditions
- It's harder to mock for testing purposes
- It's not possible to control cleanup of network connections

## Using types

Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:

- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`

Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
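For example, a short sketch of those helpers on a chat completion response:

```python
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="gpt-3.5-turbo",
)

print(completion.to_json())  # serialize the Pydantic model back to JSON
print(completion.to_dict())  # convert it to a plain dictionary
```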

## Pagination

List methods in the OpenAI API are paginated.

This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:

```python
from openai import OpenAI

client = OpenAI()

all_jobs = []
# Automatically fetches more pages as needed.
for job in client.fine_tuning.jobs.list(
    limit=20,
):
    # Do something with job here
    all_jobs.append(job)
print(all_jobs)
```

Or, asynchronously:

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()


async def main() -> None:
    all_jobs = []
    # Iterate through items across all pages, issuing requests as needed.
    async for job in client.fine_tuning.jobs.list(
        limit=20,
    ):
        all_jobs.append(job)
    print(all_jobs)


asyncio.run(main())
```

Alternatively, you can use the `.has_next_page()`, `.next_page_info()`, or `.get_next_page()` methods for more granular control when working with pages:

```python
first_page = await client.fine_tuning.jobs.list(
    limit=20,
)
if first_page.has_next_page():
    print(f"will fetch next page using these details: {first_page.next_page_info()}")
    next_page = await first_page.get_next_page()
    print(f"number of items we just fetched: {len(next_page.data)}")

# Remove `await` for non-async usage.
```

Or just work directly with the returned data:

```python
first_page = await client.fine_tuning.jobs.list(
    limit=20,
)

print(f"next page cursor: {first_page.after}")  # => "next page cursor: ..."
for job in first_page.data:
    print(job.id)

# Remove `await` for non-async usage.
```

## Nested params

Nested parameters are dictionaries, typed using `TypedDict`, for example:

```python
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Can you generate an example json object describing a fruit?",
        }
    ],
    model="gpt-3.5-turbo-1106",
    response_format={"type": "json_object"},
)
```

## File uploads

Request parameters that correspond to file uploads can be passed as `bytes`, a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, or a tuple of `(filename, contents, media type)`.

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()

client.files.create(
    file=Path("input.jsonl"),
    purpose="fine-tune",
)
```

The async client uses the exact same interface. If you pass a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, the file contents will be read asynchronously automatically.
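A sketch of the tuple form described above (the bytes and media type here are illustrative placeholders):

```python
client.files.create(
    # (filename, contents, media type)
    file=("input.jsonl", b'{"prompt": "...", "completion": "..."}', "application/jsonl"),
    purpose="fine-tune",
)
```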

## Handling errors

When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `openai.APIConnectionError` is raised.

When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `openai.APIStatusError` is raised, containing `status_code` and `response` properties.

All errors inherit from `openai.APIError`.

```python
import openai
from openai import OpenAI

client = OpenAI()

try:
    client.fine_tuning.jobs.create(
        model="gpt-3.5-turbo",
        training_file="file-abc123",
    )
except openai.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except openai.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except openai.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
```

Error codes are as follows:

| Status Code | Error Type                 |
| ----------- | -------------------------- |
| 400         | `BadRequestError`          |
| 401         | `AuthenticationError`      |
| 403         | `PermissionDeniedError`    |
| 404         | `NotFoundError`            |
| 422         | `UnprocessableEntityError` |
| 429         | `RateLimitError`           |
| >=500       | `InternalServerError`      |
| N/A         | `APIConnectionError`       |

### Retries

Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.

You can use the `max_retries` option to configure or disable retry settings:

```python
from openai import OpenAI

# Configure the default for all requests:
client = OpenAI(
    # default is 2
    max_retries=0,
)

# Or, configure per-request:
client.with_options(max_retries=5).chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How can I get the name of the current day in Node.js?",
        }
    ],
    model="gpt-3.5-turbo",
)
```

### Timeouts

By default requests time out after 10 minutes. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/#fine-tuning-the-configuration) object:

```python
import httpx
from openai import OpenAI

# Configure the default for all requests:
client = OpenAI(
    # 20 seconds (default is 10 minutes)
    timeout=20.0,
)

# More granular control:
client = OpenAI(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)

# Override per-request (timeout is in seconds):
client.with_options(timeout=5.0).chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How can I list all files in a directory using Python?",
        }
    ],
    model="gpt-3.5-turbo",
)
```

On timeout, an `APITimeoutError` is raised.

Note that requests that time out are [retried twice by default](https://github.com/openai/openai-python/tree/main/#retries).

## Advanced

### Logging

We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.

You can enable logging by setting the environment variable `OPENAI_LOG` to `debug`.

```shell
$ export OPENAI_LOG=debug
```
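If you prefer to configure logging in-process instead, a sketch using the standard `logging` module (assuming the library emits records under the `openai` logger namespace, as packages using `logging.getLogger(__name__)` do):

```python
import logging

logging.basicConfig(level=logging.INFO)
logging.getLogger("openai").setLevel(logging.DEBUG)
```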

### How to tell whether `None` means `null` or missing

In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:

```py
if response.my_field is None:
  if 'my_field' not in response.model_fields_set:
    print('Got json like {}, without a "my_field" key present at all.')
  else:
    print('Got json like {"my_field": null}.')
```

### Accessing raw response data (e.g. headers)

The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,

```py
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.with_raw_response.create(
    messages=[{
        "role": "user",
        "content": "Say this is a test",
    }],
    model="gpt-3.5-turbo",
)
print(response.headers.get('X-My-Header'))

completion = response.parse()  # get the object that `chat.completions.create()` would have returned
print(completion)
```

These methods return a [`LegacyAPIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_legacy_response.py) object. This is a legacy class, as we're changing it slightly in the next major version.

For the sync client this will mostly be the same, with the exception
that `content` & `text` will be methods instead of properties. In the
async client, all methods will be async.

A migration script will be provided & the migration in general should
be smooth.

#### `.with_streaming_response`

The above interface eagerly reads the full response body when you make the request, which may not always be what you want.

To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.

As such, `.with_streaming_response` methods return a different [`APIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_response.py) object, and the async client returns an [`AsyncAPIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_response.py) object.

```python
with client.chat.completions.with_streaming_response.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="gpt-3.5-turbo",
) as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)
```

The context manager is required so that the response will reliably be closed.

### Making custom/undocumented requests

This library is typed for convenient access to the documented API.

If you need to access undocumented endpoints, params, or response properties, the library can still be used.

#### Undocumented endpoints

To make requests to undocumented endpoints, you can use `client.get`, `client.post`, and the other
HTTP verb methods. Options on the client (such as retries) will be respected when making these
requests.

```py
import httpx

response = client.post(
    "/foo",
    cast_to=httpx.Response,
    body={"my_param": True},
)

print(response.headers.get("x-foo"))
```

#### Undocumented request params

If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
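For example, a sketch in which `hypothetical_param` is a placeholder for whatever undocumented field you need to send; `extra_body` values are merged into the request JSON and `extra_headers` into the HTTP headers:

```py
completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="gpt-3.5-turbo",
    extra_headers={"x-my-header": "true"},
    extra_body={"hypothetical_param": "value"},
)
```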

#### Undocumented response properties

To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
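A brief sketch, assuming a `completion` response from an earlier call, where `unknown_prop` is a placeholder for a field the typed model doesn't know about:

```py
print(completion.unknown_prop)  # direct attribute access to an extra field
print(completion.model_extra)  # all extra fields as a dict
```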

### Configuring the HTTP client

You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:

- Support for proxies
- Custom transports
- Additional [advanced](https://www.python-httpx.org/advanced/#client-instances) functionality

```python
import httpx
from openai import OpenAI, DefaultHttpxClient

client = OpenAI(
    # Or use the `OPENAI_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083",
    http_client=DefaultHttpxClient(
        proxies="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
```

### Managing HTTP resources

By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or use it as a context manager, which closes it on exit.
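A brief sketch of both approaches:

```python
from openai import OpenAI

# Close explicitly when you are done:
client = OpenAI()
try:
    client.models.list()
finally:
    client.close()

# Or scope the client with a context manager that closes it on exit:
with OpenAI() as client:
    client.models.list()
```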

## Microsoft Azure OpenAI

To use this library with [Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview), use the `AzureOpenAI`
class instead of the `OpenAI` class.

> [!IMPORTANT]
> The Azure API shape differs from the core API shape which means that the static types for responses / params
> won't always be correct.

```py
from openai import AzureOpenAI

# gets the API Key from environment variable AZURE_OPENAI_API_KEY
client = AzureOpenAI(
    # https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#rest-api-versioning
    api_version="2023-07-01-preview",
    # https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource
    azure_endpoint="https://example-endpoint.openai.azure.com",
)

completion = client.chat.completions.create(
    model="deployment-name",  # e.g. gpt-35-instant
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python?",
        },
    ],
)
print(completion.to_json())
```

In addition to the options provided in the base `OpenAI` client, the following options are provided:

- `azure_endpoint` (or the `AZURE_OPENAI_ENDPOINT` environment variable)
- `azure_deployment`
- `api_version` (or the `OPENAI_API_VERSION` environment variable)
- `azure_ad_token` (or the `AZURE_OPENAI_AD_TOKEN` environment variable)
- `azure_ad_token_provider`

An example of using the client with Azure Active Directory can be found [here](https://github.com/openai/openai-python/blob/main/examples/azure_ad.py).
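For instance, a sketch of `azure_ad_token_provider` (assuming the separate `azure-identity` package is installed):

```py
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    api_version="2023-07-01-preview",
    azure_endpoint="https://example-endpoint.openai.azure.com",
    azure_ad_token_provider=token_provider,
)
```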

## Versioning

This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:

1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals)_.
3. Changes that we do not expect to impact the vast majority of users in practice.

We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.

We are keen for your feedback; please open an [issue](https://www.github.com/openai/openai-python/issues) with questions, bugs, or suggestions.

## Requirements

Python 3.7 or higher.

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "openai",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.7.1",
    "maintainer_email": null,
    "keywords": null,
    "author": null,
    "author_email": "OpenAI <support@openai.com>",
    "download_url": "https://files.pythonhosted.org/packages/06/8a/7e3d0aa81be59fa3c7beb54ade24ec46b569f78908761eb5f3a78cbbdc4e/openai-1.25.1.tar.gz",
    "platform": null,
    "description": "# OpenAI Python API library\n\n[![PyPI version](https://img.shields.io/pypi/v/openai.svg)](https://pypi.org/project/openai/)\n\nThe OpenAI Python library provides convenient access to the OpenAI REST API from any Python 3.7+\napplication. The library includes type definitions for all request params and response fields,\nand offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).\n\nIt is generated from our [OpenAPI specification](https://github.com/openai/openai-openapi) with [Stainless](https://stainlessapi.com/).\n\n## Documentation\n\nThe REST API documentation can be found [on platform.openai.com](https://platform.openai.com/docs). The full API of this library can be found in [api.md](https://github.com/openai/openai-python/tree/main/api.md).\n\n## Installation\n\n> [!IMPORTANT]\n> The SDK was rewritten in v1, which was released November 6th 2023. See the [v1 migration guide](https://github.com/openai/openai-python/discussions/742), which includes scripts to automatically update your code.\n\n```sh\n# install from PyPI\npip install openai\n```\n\n## Usage\n\nThe full API of this library can be found in [api.md](https://github.com/openai/openai-python/tree/main/api.md).\n\n```python\nimport os\nfrom openai import OpenAI\n\nclient = OpenAI(\n    # This is the default and can be omitted\n    api_key=os.environ.get(\"OPENAI_API_KEY\"),\n)\n\nchat_completion = client.chat.completions.create(\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"Say this is a test\",\n        }\n    ],\n    model=\"gpt-3.5-turbo\",\n)\n```\n\nWhile you can provide an `api_key` keyword argument,\nwe recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)\nto add `OPENAI_API_KEY=\"My API Key\"` to your `.env` file\nso that your API Key is not stored in source control.\n\n### Polling Helpers\n\nWhen interacting with the API some actions such as starting a Run and adding files to vector stores are asynchronous and take time to complete. The SDK includes\nhelper functions which will poll the status until it reaches a terminal state and then return the resulting object.\nIf an API method results in an action which could benefit from polling there will be a corresponding version of the\nmethod ending in '\\_and_poll'.\n\nFor instance to create a Run and poll until it reaches a terminal state you can run:\n\n```python\nrun = client.beta.threads.runs.create_and_poll(\n    thread_id=thread.id,\n    assistant_id=assistant.id,\n)\n```\n\nMore information on the lifecycle of a Run can be found in the [Run Lifecycle Documentation](https://platform.openai.com/docs/assistants/how-it-works/run-lifecycle)\n\n### Bulk Upload Helpers\n\nWhen creating an interacting with vector stores, you can use the polling helpers to monitor the status of operations.\nFor convenience, we also provide a bulk upload helper to allow you to simultaneously upload several files at once.\n\n```python\nsample_files = [Path(\"sample-paper.pdf\"), ...]\n\nbatch = await client.vector_stores.file_batches.upload_and_poll(\n    store.id,\n    files=sample_files,\n)\n```\n\n### Streaming Helpers\n\nThe SDK also includes helpers to process streams and handle the incoming events.\n\n```python\nwith client.beta.threads.runs.stream(\n    thread_id=thread.id,\n    assistant_id=assistant.id,\n    instructions=\"Please address the user as Jane Doe. 
The user has a premium account.\",\n) as stream:\n    for event in stream:\n        # Print the text from text delta events\n        if event.type == \"thread.message.delta\" and event.data.delta.content:\n            print(event.data.delta.content[0].text)\n```\n\nMore information on streaming helpers can be found in the dedicated documentation: [helpers.md](https://github.com/openai/openai-python/tree/main/helpers.md)\n\n## Async usage\n\nSimply import `AsyncOpenAI` instead of `OpenAI` and use `await` with each API call:\n\n```python\nimport os\nimport asyncio\nfrom openai import AsyncOpenAI\n\nclient = AsyncOpenAI(\n    # This is the default and can be omitted\n    api_key=os.environ.get(\"OPENAI_API_KEY\"),\n)\n\n\nasync def main() -> None:\n    chat_completion = await client.chat.completions.create(\n        messages=[\n            {\n                \"role\": \"user\",\n                \"content\": \"Say this is a test\",\n            }\n        ],\n        model=\"gpt-3.5-turbo\",\n    )\n\n\nasyncio.run(main())\n```\n\nFunctionality between the synchronous and asynchronous clients is otherwise identical.\n\n## Streaming responses\n\nWe provide support for streaming responses using Server Side Events (SSE).\n\n```python\nfrom openai import OpenAI\n\nclient = OpenAI()\n\nstream = client.chat.completions.create(\n    model=\"gpt-4\",\n    messages=[{\"role\": \"user\", \"content\": \"Say this is a test\"}],\n    stream=True,\n)\nfor chunk in stream:\n    print(chunk.choices[0].delta.content or \"\", end=\"\")\n```\n\nThe async client uses the exact same interface.\n\n```python\nfrom openai import AsyncOpenAI\n\nclient = AsyncOpenAI()\n\n\nasync def main():\n    stream = await client.chat.completions.create(\n        model=\"gpt-4\",\n        messages=[{\"role\": \"user\", \"content\": \"Say this is a test\"}],\n        stream=True,\n    )\n    async for chunk in stream:\n        print(chunk.choices[0].delta.content or \"\", end=\"\")\n\n\nasyncio.run(main())\n```\n\n## Module-level client\n\n> [!IMPORTANT]\n> We highly recommend instantiating client instances instead of relying on the global client.\n\nWe also expose a global client instance that is accessible in a similar fashion to versions prior to v1.\n\n```py\nimport openai\n\n# optional; defaults to `os.environ['OPENAI_API_KEY']`\nopenai.api_key = '...'\n\n# all client options can be configured just like the `OpenAI` instantiation counterpart\nopenai.base_url = \"https://...\"\nopenai.default_headers = {\"x-foo\": \"true\"}\n\ncompletion = openai.chat.completions.create(\n    model=\"gpt-4\",\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"How do I output all files in a directory using Python?\",\n        },\n    ],\n)\nprint(completion.choices[0].message.content)\n```\n\nThe API is the exact same as the standard client instance based API.\n\nThis is intended to be used within REPLs or notebooks for faster iteration, **not** in application code.\n\nWe recommend that you always instantiate a client (e.g., with `client = OpenAI()`) in application code because:\n\n- It can be difficult to reason about where client options are configured\n- It's not possible to change certain client options without potentially causing race conditions\n- It's harder to mock for testing purposes\n- It's not possible to control cleanup of network connections\n\n## Using types\n\nNested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). 
Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:\n\n- Serializing back into JSON, `model.to_json()`\n- Converting to a dictionary, `model.to_dict()`\n\nTyped requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.\n\n## Pagination\n\nList methods in the OpenAI API are paginated.\n\nThis library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:\n\n```python\nimport openai\n\nclient = OpenAI()\n\nall_jobs = []\n# Automatically fetches more pages as needed.\nfor job in client.fine_tuning.jobs.list(\n    limit=20,\n):\n    # Do something with job here\n    all_jobs.append(job)\nprint(all_jobs)\n```\n\nOr, asynchronously:\n\n```python\nimport asyncio\nimport openai\n\nclient = AsyncOpenAI()\n\n\nasync def main() -> None:\n    all_jobs = []\n    # Iterate through items across all pages, issuing requests as needed.\n    async for job in client.fine_tuning.jobs.list(\n        limit=20,\n    ):\n        all_jobs.append(job)\n    print(all_jobs)\n\n\nasyncio.run(main())\n```\n\nAlternatively, you can use the `.has_next_page()`, `.next_page_info()`, or `.get_next_page()` methods for more granular control working with pages:\n\n```python\nfirst_page = await client.fine_tuning.jobs.list(\n    limit=20,\n)\nif first_page.has_next_page():\n    print(f\"will fetch next page using these details: {first_page.next_page_info()}\")\n    next_page = await first_page.get_next_page()\n    print(f\"number of items we just fetched: {len(next_page.data)}\")\n\n# Remove `await` for non-async usage.\n```\n\nOr just work directly with the returned data:\n\n```python\nfirst_page = await client.fine_tuning.jobs.list(\n    limit=20,\n)\n\nprint(f\"next page cursor: {first_page.after}\")  # => \"next page cursor: ...\"\nfor job in first_page.data:\n    print(job.id)\n\n# Remove `await` for non-async usage.\n```\n\n## Nested params\n\nNested parameters are dictionaries, typed using `TypedDict`, for example:\n\n```python\nfrom openai import OpenAI\n\nclient = OpenAI()\n\ncompletion = client.chat.completions.create(\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"Can you generate an example json object describing a fruit?\",\n        }\n    ],\n    model=\"gpt-3.5-turbo-1106\",\n    response_format={\"type\": \"json_object\"},\n)\n```\n\n## File uploads\n\nRequest parameters that correspond to file uploads can be passed as `bytes`, a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance or a tuple of `(filename, contents, media type)`.\n\n```python\nfrom pathlib import Path\nfrom openai import OpenAI\n\nclient = OpenAI()\n\nclient.files.create(\n    file=Path(\"input.jsonl\"),\n    purpose=\"fine-tune\",\n)\n```\n\nThe async client uses the exact same interface. 
If you pass a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, the file contents will be read asynchronously automatically.\n\n## Handling errors\n\nWhen the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `openai.APIConnectionError` is raised.\n\nWhen the API returns a non-success status code (that is, 4xx or 5xx\nresponse), a subclass of `openai.APIStatusError` is raised, containing `status_code` and `response` properties.\n\nAll errors inherit from `openai.APIError`.\n\n```python\nimport openai\nfrom openai import OpenAI\n\nclient = OpenAI()\n\ntry:\n    client.fine_tuning.jobs.create(\n        model=\"gpt-3.5-turbo\",\n        training_file=\"file-abc123\",\n    )\nexcept openai.APIConnectionError as e:\n    print(\"The server could not be reached\")\n    print(e.__cause__)  # an underlying Exception, likely raised within httpx.\nexcept openai.RateLimitError as e:\n    print(\"A 429 status code was received; we should back off a bit.\")\nexcept openai.APIStatusError as e:\n    print(\"Another non-200-range status code was received\")\n    print(e.status_code)\n    print(e.response)\n```\n\nError codes are as followed:\n\n| Status Code | Error Type                 |\n| ----------- | -------------------------- |\n| 400         | `BadRequestError`          |\n| 401         | `AuthenticationError`      |\n| 403         | `PermissionDeniedError`    |\n| 404         | `NotFoundError`            |\n| 422         | `UnprocessableEntityError` |\n| 429         | `RateLimitError`           |\n| >=500       | `InternalServerError`      |\n| N/A         | `APIConnectionError`       |\n\n### Retries\n\nCertain errors are automatically retried 2 times by default, with a short exponential backoff.\nConnection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,\n429 Rate Limit, and >=500 Internal errors are all retried by default.\n\nYou can use the `max_retries` option to configure or disable retry settings:\n\n```python\nfrom openai import OpenAI\n\n# Configure the default for all requests:\nclient = OpenAI(\n    # default is 2\n    max_retries=0,\n)\n\n# Or, configure per-request:\nclient.with_options(max_retries=5).chat.completions.create(\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"How can I get the name of the current day in Node.js?\",\n        }\n    ],\n    model=\"gpt-3.5-turbo\",\n)\n```\n\n### Timeouts\n\nBy default requests time out after 10 minutes. 
You can configure this with a `timeout` option,\nwhich accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/#fine-tuning-the-configuration) object:\n\n```python\nfrom openai import OpenAI\n\n# Configure the default for all requests:\nclient = OpenAI(\n    # 20 seconds (default is 10 minutes)\n    timeout=20.0,\n)\n\n# More granular control:\nclient = OpenAI(\n    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),\n)\n\n# Override per-request:\nclient.with_options(timeout=5 * 1000).chat.completions.create(\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"How can I list all files in a directory using Python?\",\n        }\n    ],\n    model=\"gpt-3.5-turbo\",\n)\n```\n\nOn timeout, an `APITimeoutError` is thrown.\n\nNote that requests that time out are [retried twice by default](https://github.com/openai/openai-python/tree/main/#retries).\n\n## Advanced\n\n### Logging\n\nWe use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.\n\nYou can enable logging by setting the environment variable `OPENAI_LOG` to `debug`.\n\n```shell\n$ export OPENAI_LOG=debug\n```\n\n### How to tell whether `None` means `null` or missing\n\nIn an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:\n\n```py\nif response.my_field is None:\n  if 'my_field' not in response.model_fields_set:\n    print('Got json like {}, without a \"my_field\" key present at all.')\n  else:\n    print('Got json like {\"my_field\": null}.')\n```\n\n### Accessing raw response data (e.g. headers)\n\nThe \"raw\" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,\n\n```py\nfrom openai import OpenAI\n\nclient = OpenAI()\nresponse = client.chat.completions.with_raw_response.create(\n    messages=[{\n        \"role\": \"user\",\n        \"content\": \"Say this is a test\",\n    }],\n    model=\"gpt-3.5-turbo\",\n)\nprint(response.headers.get('X-My-Header'))\n\ncompletion = response.parse()  # get the object that `chat.completions.create()` would have returned\nprint(completion)\n```\n\nThese methods return an [`LegacyAPIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_legacy_response.py) object. This is a legacy class as we're changing it slightly in the next major version.\n\nFor the sync client this will mostly be the same with the exception\nof `content` & `text` will be methods instead of properties. In the\nasync client, all methods will be async.\n\nA migration script will be provided & the migration in general should\nbe smooth.\n\n#### `.with_streaming_response`\n\nThe above interface eagerly reads the full response body when you make the request, which may not always be what you want.\n\nTo stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. 
In the async client, these are async methods.\n\nAs such, `.with_streaming_response` methods return a different [`APIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_response.py) object, and the async client returns an [`AsyncAPIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_response.py) object.\n\n```python\nwith client.chat.completions.with_streaming_response.create(\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"Say this is a test\",\n        }\n    ],\n    model=\"gpt-3.5-turbo\",\n) as response:\n    print(response.headers.get(\"X-My-Header\"))\n\n    for line in response.iter_lines():\n        print(line)\n```\n\nThe context manager is required so that the response will reliably be closed.\n\n### Making custom/undocumented requests\n\nThis library is typed for convenient access to the documented API.\n\nIf you need to access undocumented endpoints, params, or response properties, the library can still be used.\n\n#### Undocumented endpoints\n\nTo make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other\nhttp verbs. Options on the client will be respected (such as retries) will be respected when making this\nrequest.\n\n```py\nimport httpx\n\nresponse = client.post(\n    \"/foo\",\n    cast_to=httpx.Response,\n    body={\"my_param\": True},\n)\n\nprint(response.headers.get(\"x-foo\"))\n```\n\n#### Undocumented request params\n\nIf you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request\noptions.\n\n#### Undocumented response properties\n\nTo access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You\ncan also get all the extra fields on the Pydantic model as a dict with\n[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).\n\n### Configuring the HTTP client\n\nYou can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:\n\n- Support for proxies\n- Custom transports\n- Additional [advanced](https://www.python-httpx.org/advanced/#client-instances) functionality\n\n```python\nfrom openai import OpenAI, DefaultHttpxClient\n\nclient = OpenAI(\n    # Or use the `OPENAI_BASE_URL` env var\n    base_url=\"http://my.test.server.example.com:8083\",\n    http_client=DefaultHttpxClient(\n        proxies=\"http://my.test.proxy.example.com\",\n        transport=httpx.HTTPTransport(local_address=\"0.0.0.0\"),\n    ),\n)\n```\n\n### Managing HTTP resources\n\nBy default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). 
You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.\n\n## Microsoft Azure OpenAI\n\nTo use this library with [Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview), use the `AzureOpenAI`\nclass instead of the `OpenAI` class.\n\n> [!IMPORTANT]\n> The Azure API shape differs from the core API shape which means that the static types for responses / params\n> won't always be correct.\n\n```py\nfrom openai import AzureOpenAI\n\n# gets the API Key from environment variable AZURE_OPENAI_API_KEY\nclient = AzureOpenAI(\n    # https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#rest-api-versioning\n    api_version=\"2023-07-01-preview\",\n    # https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource\n    azure_endpoint=\"https://example-endpoint.openai.azure.com\",\n)\n\ncompletion = client.chat.completions.create(\n    model=\"deployment-name\",  # e.g. gpt-35-instant\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"How do I output all files in a directory using Python?\",\n        },\n    ],\n)\nprint(completion.to_json())\n```\n\nIn addition to the options provided in the base `OpenAI` client, the following options are provided:\n\n- `azure_endpoint` (or the `AZURE_OPENAI_ENDPOINT` environment variable)\n- `azure_deployment`\n- `api_version` (or the `OPENAI_API_VERSION` environment variable)\n- `azure_ad_token` (or the `AZURE_OPENAI_AD_TOKEN` environment variable)\n- `azure_ad_token_provider`\n\nAn example of using the client with Azure Active Directory can be found [here](https://github.com/openai/openai-python/blob/main/examples/azure_ad.py).\n\n## Versioning\n\nThis package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:\n\n1. Changes that only affect static types, without breaking runtime behavior.\n2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals)_.\n3. Changes that we do not expect to impact the vast majority of users in practice.\n\nWe take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.\n\nWe are keen for your feedback; please open an [issue](https://www.github.com/openai/openai-python/issues) with questions, bugs, or suggestions.\n\n## Requirements\n\nPython 3.7 or higher.\n",
    "bugtrack_url": null,
    "license": null,
    "summary": "The official Python library for the openai API",
    "version": "1.25.1",
    "project_urls": {
        "Homepage": "https://github.com/openai/openai-python",
        "Repository": "https://github.com/openai/openai-python"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "e64e9e9db238574aeb280377813ac2085c0df3e163d113d3a622039855a3de90",
                "md5": "dcd3c17f47eee4e96f2718f8fa7582a4",
                "sha256": "aa2f381f476f5fa4df8728a34a3e454c321caa064b7b68ab6e9daa1ed082dbf9"
            },
            "downloads": -1,
            "filename": "openai-1.25.1-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "dcd3c17f47eee4e96f2718f8fa7582a4",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.7.1",
            "size": 312919,
            "upload_time": "2024-05-02T15:52:51",
            "upload_time_iso_8601": "2024-05-02T15:52:51.584842Z",
            "url": "https://files.pythonhosted.org/packages/e6/4e/9e9db238574aeb280377813ac2085c0df3e163d113d3a622039855a3de90/openai-1.25.1-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "068a7e3d0aa81be59fa3c7beb54ade24ec46b569f78908761eb5f3a78cbbdc4e",
                "md5": "022364f963f2432adebc1451ba2364f1",
                "sha256": "f561ce86f4b4008eb6c78622d641e4b7e1ab8a8cdb15d2f0b2a49942d40d21a8"
            },
            "downloads": -1,
            "filename": "openai-1.25.1.tar.gz",
            "has_sig": false,
            "md5_digest": "022364f963f2432adebc1451ba2364f1",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.7.1",
            "size": 174546,
            "upload_time": "2024-05-02T15:52:56",
            "upload_time_iso_8601": "2024-05-02T15:52:56.016297Z",
            "url": "https://files.pythonhosted.org/packages/06/8a/7e3d0aa81be59fa3c7beb54ade24ec46b569f78908761eb5f3a78cbbdc4e/openai-1.25.1.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-05-02 15:52:56",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "openai",
    "github_project": "openai-python",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "openai"
}