| Name | llm-openai-plugin |
| Version | 0.7 |
| download | |
| home_page | None |
| Summary | LLM plugin for OpenAI |
| upload_time | 2025-10-06 19:43:32 |
| maintainer | None |
| docs_url | None |
| author | Simon Willison |
| requires_python | >=3.9 |
| license | Apache-2.0 |
| keywords | |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# llm-openai-plugin
[PyPI](https://pypi.org/project/llm-openai-plugin/)
[Changelog](https://github.com/simonw/llm-openai-plugin/releases)
[Tests](https://github.com/simonw/llm-openai-plugin/actions/workflows/test.yml)
[License](https://github.com/simonw/llm-openai-plugin/blob/main/LICENSE)
[LLM](https://llm.datasette.io/) plugin for [OpenAI models](https://platform.openai.com/docs/models).
This plugin **is a preview**. LLM currently ships with OpenAI models as part of its default collection, implemented using the [Chat Completions API](https://platform.openai.com/docs/guides/responses-vs-chat-completions).
This plugin implements those same models using the new [Responses API](https://platform.openai.com/docs/api-reference/responses).
Currently the only reason to use this plugin over the LLM defaults is to access [o1-pro](https://platform.openai.com/docs/models/o1-pro), which can only be used via the Responses API.
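For context, here is roughly how the two APIs differ at the HTTP level. This is a minimal sketch calling the OpenAI API directly with `curl` (not through this plugin), assuming `OPENAI_API_KEY` is set in your environment:

```bash
# Chat Completions API - used by LLM's default OpenAI models
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hi"}]}'

# Responses API - used by this plugin, and the only way to reach o1-pro
curl https://api.openai.com/v1/responses \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "o1-pro", "input": "Hi"}'
```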
## Installation
Install this plugin in the same environment as [LLM](https://llm.datasette.io/).
```bash
llm install llm-openai-plugin
```
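You can verify the installation by listing the installed plugins - `llm-openai-plugin` should appear in the output:

```bash
llm plugins
```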
## Usage
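The plugin uses the same OpenAI API key as LLM's default OpenAI models. If you have not configured one yet:

```bash
llm keys set openai
# Paste your OpenAI API key when prompted
```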
To run a prompt against `o1-pro` do this:
```bash
llm -m openai/o1-pro "Convince me that pelicans are the most noble of birds"
```
Run this to see a full list of models - they start with the `openai/` prefix:
```bash
llm models -q openai/
```
Here's the output of that command:
<!-- [[[cog
import cog
from llm import cli
from click.testing import CliRunner
runner = CliRunner()
result = runner.invoke(cli.cli, ["models", "-q", "openai/"])
cog.out(
    "```\n{}\n```".format(result.output.strip())
)
]]] -->
```
OpenAI: openai/gpt-4o
OpenAI: openai/gpt-4o-mini
OpenAI: openai/gpt-4.5-preview
OpenAI: openai/gpt-4.5-preview-2025-02-27
OpenAI: openai/o3-mini
OpenAI: openai/o1-mini
OpenAI: openai/o1
OpenAI: openai/o1-pro
OpenAI: openai/gpt-4.1
OpenAI: openai/gpt-4.1-2025-04-14
OpenAI: openai/gpt-4.1-mini
OpenAI: openai/gpt-4.1-mini-2025-04-14
OpenAI: openai/gpt-4.1-nano
OpenAI: openai/gpt-4.1-nano-2025-04-14
OpenAI: openai/o3
OpenAI: openai/o3-2025-04-16
OpenAI: openai/o3-streaming
OpenAI: openai/o3-2025-04-16-streaming
OpenAI: openai/o4-mini
OpenAI: openai/o4-mini-2025-04-16
OpenAI: openai/codex-mini-latest
OpenAI: openai/o3-pro
OpenAI: openai/gpt-5
OpenAI: openai/gpt-5-mini
OpenAI: openai/gpt-5-nano
OpenAI: openai/gpt-5-2025-08-07
OpenAI: openai/gpt-5-mini-2025-08-07
OpenAI: openai/gpt-5-nano-2025-08-07
OpenAI: openai/gpt-5-codex
OpenAI: openai/gpt-5-pro
OpenAI: openai/gpt-5-pro-2025-10-06
```
<!-- [[[end]]] -->
Add `--options` to see a full list of options that can be provided to each model.
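For example, to list the options supported by every model in this plugin:

```bash
llm models -q openai/ --options
```

Options are then passed to a prompt with `-o name value`; the exact option names vary by model, so check that output rather than assuming.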
The `o3-streaming` model ID exists because o3 currently requires a verified organization in order to support streaming. If you have a verified organization you can use `o3-streaming` - everyone else should use `o3`.
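For example, if you have a verified organization:

```bash
llm -m openai/o3-streaming "Write a haiku about pelicans"
```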
## Development
To set up this plugin locally, first check out the code. Then create a new virtual environment:
```bash
cd llm-openai-plugin
python -m venv venv
source venv/bin/activate
```
Now install the dependencies and test dependencies:
```bash
llm install -e '.[test]'
```
To run the tests:
```bash
python -m pytest
```
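To run a subset of the tests, standard pytest selection works; the filter expression here is just an example:

```bash
python -m pytest -k o1
```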
This project uses [pytest-recording](https://github.com/kiwicom/pytest-recording) to record OpenAI API responses for the tests, and [syrupy](https://github.com/syrupy-project/syrupy) to capture snapshots of their results.
If you add a new test that calls the API, you can capture the API response and snapshot like this:
```bash
PYTEST_OPENAI_API_KEY="$(llm keys get openai)" pytest --record-mode once --snapshot-update
```
Then review the new snapshots in `tests/__snapshots__/` to make sure they look correct.
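The model list in this README is generated by the cog block in the Usage section. Assuming [cog](https://nedbatchelder.com/code/cog/) is installed (`pip install cogapp`), the list can be regenerated in place with:

```bash
cog -r README.md
```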
# Raw data

```json
{
  "_id": null,
  "home_page": null,
  "name": "llm-openai-plugin",
  "maintainer": null,
  "docs_url": null,
  "requires_python": ">=3.9",
  "maintainer_email": null,
  "keywords": null,
  "author": "Simon Willison",
  "author_email": null,
  "download_url": "https://files.pythonhosted.org/packages/d9/22/be5f1e3e17a11e5710771ebe22080e0cd4f6e101a4af3d0ff0be8d17dfd0/llm_openai_plugin-0.7.tar.gz",
  "platform": null,
"description": "# llm-openai-plugin\n\n[](https://pypi.org/project/llm-openai-plugin/)\n[](https://github.com/simonw/llm-openai-plugin/releases)\n[](https://github.com/simonw/llm-openai-plugin/actions/workflows/test.yml)\n[](https://github.com/simonw/llm-openai-plugin/blob/main/LICENSE)\n\n[LLM](https://llm.datasette.io/) plugin for [OpenAI models](https://platform.openai.com/docs/models).\n\nThis plugin **is a preview**. LLM currently ships with OpenAI models as part of its default collection, implemented using the [Chat Completions API](https://platform.openai.com/docs/guides/responses-vs-chat-completions).\n\nThis plugin implements those same models using the new [Responses API](https://platform.openai.com/docs/api-reference/responses).\n\nCurrently the only reason to use this plugin over the LLM defaults is to access [o1-pro](https://platform.openai.com/docs/models/o1-pro), which can only be used via the Responses API.\n\n## Installation\n\nInstall this plugin in the same environment as [LLM](https://llm.datasette.io/).\n```bash\nllm install llm-openai-plugin\n```\n## Usage\n\nTo run a prompt against `o1-pro` do this:\n\n```bash\nllm -m openai/o1-pro \"Convince me that pelicans are the most noble of birds\"\n```\n\nRun this to see a full list of models - they start with the `openai/` prefix:\n\n```bash\nllm models -q openai/\n```\n\nHere's the output of that command:\n\n<!-- [[[cog\nimport cog\nfrom llm import cli\nfrom click.testing import CliRunner\nrunner = CliRunner()\nresult = runner.invoke(cli.cli, [\"models\", \"-q\", \"openai/\"])\ncog.out(\n \"```\\n{}\\n```\".format(result.output.strip())\n)\n]]] -->\n```\nOpenAI: openai/gpt-4o\nOpenAI: openai/gpt-4o-mini\nOpenAI: openai/gpt-4.5-preview\nOpenAI: openai/gpt-4.5-preview-2025-02-27\nOpenAI: openai/o3-mini\nOpenAI: openai/o1-mini\nOpenAI: openai/o1\nOpenAI: openai/o1-pro\nOpenAI: openai/gpt-4.1\nOpenAI: openai/gpt-4.1-2025-04-14\nOpenAI: openai/gpt-4.1-mini\nOpenAI: openai/gpt-4.1-mini-2025-04-14\nOpenAI: openai/gpt-4.1-nano\nOpenAI: openai/gpt-4.1-nano-2025-04-14\nOpenAI: openai/o3\nOpenAI: openai/o3-2025-04-16\nOpenAI: openai/o3-streaming\nOpenAI: openai/o3-2025-04-16-streaming\nOpenAI: openai/o4-mini\nOpenAI: openai/o4-mini-2025-04-16\nOpenAI: openai/codex-mini-latest\nOpenAI: openai/o3-pro\nOpenAI: openai/gpt-5\nOpenAI: openai/gpt-5-mini\nOpenAI: openai/gpt-5-nano\nOpenAI: openai/gpt-5-2025-08-07\nOpenAI: openai/gpt-5-mini-2025-08-07\nOpenAI: openai/gpt-5-nano-2025-08-07\nOpenAI: openai/gpt-5-codex\nOpenAI: openai/gpt-5-pro\nOpenAI: openai/gpt-5-pro-2025-10-06\n```\n<!-- [[[end]]] -->\nAdd `--options` to see a full list of options that can be provided to each model.\n\nThe `o3-streaming` model ID exists because o3 currently requires a verified organization in order to support streaming. If you have a verified organization you can use `o3-streaming` - everyone else should use `o3`.\n\n## Development\n\nTo set up this plugin locally, first checkout the code. 
Then create a new virtual environment:\n```bash\ncd llm-openai-plugin\npython -m venv venv\nsource venv/bin/activate\n```\nNow install the dependencies and test dependencies:\n```bash\nllm install -e '.[test]'\n```\nTo run the tests:\n```bash\npython -m pytest\n```\n\nThis project uses [pytest-recording](https://github.com/kiwicom/pytest-recording) to record OpenAI API responses for the tests, and [syrupy](https://github.com/syrupy-project/syrupy) to capture snapshots of their results.\n\nIf you add a new test that calls the API you can capture the API response and snapshot like this:\n```bash\nPYTEST_OPENAI_API_KEY=\"$(llm keys get openai)\" pytest --record-mode once --snapshot-update\n```\nThen review the new snapshots in `tests/__snapshots__/` to make sure they look correct.\n",
"bugtrack_url": null,
"license": "Apache-2.0",
"summary": "LLM plugin for OpenAI",
"version": "0.7",
"project_urls": {
"CI": "https://github.com/simonw/llm-openai-plugin/actions",
"Changelog": "https://github.com/simonw/llm-openai-plugin/releases",
"Homepage": "https://github.com/simonw/llm-openai-plugin",
"Issues": "https://github.com/simonw/llm-openai-plugin/issues"
},
"split_keywords": [],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "4d714aec9bd35d3a305bc372567ea15ff8a26d6c5fd53b95582d09f91f6eaf62",
"md5": "acc15fbe9957af4e9bacbb2f278aa81d",
"sha256": "ba3f02194e0cad0eded015dd19492f72d8a81b22ecdc1f69562c70a3ef52c029"
},
"downloads": -1,
"filename": "llm_openai_plugin-0.7-py3-none-any.whl",
"has_sig": false,
"md5_digest": "acc15fbe9957af4e9bacbb2f278aa81d",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.9",
"size": 12414,
"upload_time": "2025-10-06T19:43:31",
"upload_time_iso_8601": "2025-10-06T19:43:31.616443Z",
"url": "https://files.pythonhosted.org/packages/4d/71/4aec9bd35d3a305bc372567ea15ff8a26d6c5fd53b95582d09f91f6eaf62/llm_openai_plugin-0.7-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "d922be5f1e3e17a11e5710771ebe22080e0cd4f6e101a4af3d0ff0be8d17dfd0",
"md5": "2eecbf96d096773449ac60ead0171471",
"sha256": "865d3116f4daf9823f470f5702bdc3fe71ea59d7bc67e525817aa35413f21e9f"
},
"downloads": -1,
"filename": "llm_openai_plugin-0.7.tar.gz",
"has_sig": false,
"md5_digest": "2eecbf96d096773449ac60ead0171471",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 12879,
"upload_time": "2025-10-06T19:43:32",
"upload_time_iso_8601": "2025-10-06T19:43:32.618608Z",
"url": "https://files.pythonhosted.org/packages/d9/22/be5f1e3e17a11e5710771ebe22080e0cd4f6e101a4af3d0ff0be8d17dfd0/llm_openai_plugin-0.7.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-10-06 19:43:32",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "simonw",
"github_project": "llm-openai-plugin",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "llm-openai-plugin"
}