| Field | Value |
| --- | --- |
| Name | llm-openai-plugin |
| Version | 0.1 |
| home_page | None |
| Summary | LLM plugin for OpenAI |
| upload_time | 2025-03-21 21:34:52 |
| maintainer | None |
| docs_url | None |
| author | Simon Willison |
| requires_python | >=3.9 |
| license | Apache-2.0 |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |

# llm-openai-plugin
[PyPI](https://pypi.org/project/llm-openai-plugin/)
[Changelog](https://github.com/simonw/llm-openai-plugin/releases)
[Tests](https://github.com/simonw/llm-openai-plugin/actions/workflows/test.yml)
[License](https://github.com/simonw/llm-openai-plugin/blob/main/LICENSE)
[LLM](https://llm.datasette.io/) plugin for [OpenAI models](https://platform.openai.com/docs/models).
This plugin **is a preview**. LLM currently ships with OpenAI models as part of its default collection, implemented using the [Chat Completions API](https://platform.openai.com/docs/guides/responses-vs-chat-completions).
This plugin implements those same models using the new [Responses API](https://platform.openai.com/docs/api-reference/responses).
Currently the only reason to use this plugin over the LLM defaults is to access [o1-pro](https://platform.openai.com/docs/models/o1-pro), which can only be used via the Responses API.
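For background, here is a minimal sketch of how the two APIs differ at the SDK level, using the official `openai` Python library directly rather than this plugin. It assumes a recent SDK release that includes the Responses API and an `OPENAI_API_KEY` in the environment:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Chat Completions API - what LLM's default OpenAI models use:
chat = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(chat.choices[0].message.content)

# Responses API - what this plugin uses; o1-pro is only served here:
resp = client.responses.create(
    model="o1-pro",
    input="Say hello",
)
print(resp.output_text)
```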
## Installation
Install this plugin in the same environment as [LLM](https://llm.datasette.io/).
```bash
llm install llm-openai-plugin
```
## Usage
To run a prompt against `o1-pro`, do this:
```bash
llm -m openai/o1-pro "Convince me that pelicans are the most noble of birds"
```
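The same prompt can be run from Python using LLM's programmatic API. A minimal sketch, assuming the plugin is installed in the current environment and an OpenAI key has been configured (for example with `llm keys set openai`):

```python
import llm

# Model IDs registered by this plugin carry the openai/ prefix
model = llm.get_model("openai/o1-pro")
response = model.prompt("Convince me that pelicans are the most noble of birds")
print(response.text())
```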
Run this to see a full list of models - they start with the `openai/` prefix:
```bash
llm models -q openai/
```
Here's the output of that command:
<!-- [[[cog
import cog
from llm import cli
from click.testing import CliRunner
runner = CliRunner()
result = runner.invoke(cli.cli, ["models", "-q", "openai/"])
cog.out(
"```\n{}\n```".format(result.output.strip())
)
]]] -->
```
OpenAI: openai/gpt-4o
OpenAI: openai/gpt-4o-mini
OpenAI: openai/o3-mini
OpenAI: openai/o1-mini
OpenAI: openai/o1
OpenAI: openai/o1-pro
```
<!-- [[[end]]] -->
Add `--options` to see a full list of options that can be provided to each model.
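In LLM's Python API those options become keyword arguments to `prompt()`. A sketch; which options a given model accepts varies, and the names and values here are only examples, so check `--options` for the model you are using:

```python
import llm

model = llm.get_model("openai/gpt-4o-mini")
# temperature is a typical sampling option; confirm it via --options
response = model.prompt("Name three noble birds", temperature=0.2)
print(response.text())
```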
## Development
To set up this plugin locally, first check out the code. Then create a new virtual environment:
```bash
cd llm-openai-plugin
python -m venv venv
source venv/bin/activate
```
Now install the dependencies and test dependencies:
```bash
llm install -e '.[test]'
```
To run the tests:
```bash
python -m pytest
```
This project uses [pytest-recording](https://github.com/kiwicom/pytest-recording) to record OpenAI API responses for the tests, and [syrupy](https://github.com/syrupy-project/syrupy) to capture snapshots of their results.
If you add a new test that calls the API, you can capture the API response and snapshot like this:
```bash
PYTEST_OPENAI_API_KEY="$(llm keys get openai)" pytest --record-mode once --snapshot-update
```
Then review the new snapshots in `tests/__snapshots__/` to make sure they look correct.
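A new test might look something like this sketch (the test name and prompt are hypothetical, not taken from this repo): pytest-recording's `vcr` marker records and replays the HTTP interaction, and syrupy's `snapshot` fixture stores the expected output:

```python
import llm
import pytest


@pytest.mark.vcr  # record the API call once, replay it on later runs
def test_prompt_example(snapshot):
    model = llm.get_model("openai/gpt-4o-mini")
    response = model.prompt("Say hello")
    # syrupy writes this value to tests/__snapshots__/ on --snapshot-update
    assert response.text() == snapshot
```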
## Raw data

```json
{
  "_id": null,
  "home_page": null,
  "name": "llm-openai-plugin",
  "maintainer": null,
  "docs_url": null,
  "requires_python": ">=3.9",
  "maintainer_email": null,
  "keywords": null,
  "author": "Simon Willison",
  "author_email": null,
  "download_url": "https://files.pythonhosted.org/packages/0e/8a/a64246b08adaff52fee5ea51902ce45623d4a2c599897cf4ac8e4325a560/llm_openai_plugin-0.1.tar.gz",
  "platform": null,
"description": "# llm-openai-plugin\n\n[](https://pypi.org/project/llm-openai-plugin/)\n[](https://github.com/simonw/llm-openai-plugin/releases)\n[](https://github.com/simonw/llm-openai-plugin/actions/workflows/test.yml)\n[](https://github.com/simonw/llm-openai-plugin/blob/main/LICENSE)\n\n[LLM](https://llm.datasette.io/) plugin for [OpenAI models](https://platform.openai.com/docs/models).\n\nThis plugin **is a preview**. LLM currently ships with OpenAI models as part of its default collection, implemented using the [Chat Completions API](https://platform.openai.com/docs/guides/responses-vs-chat-completions).\n\nThis plugin implements those same models using the new [Responses API](https://platform.openai.com/docs/api-reference/responses).\n\nCurrently the only reason to use this plugin over the LLM defaults is to access [o1-pro](https://platform.openai.com/docs/models/o1-pro), which can only be used via the Responses API.\n\n## Installation\n\nInstall this plugin in the same environment as [LLM](https://llm.datasette.io/).\n```bash\nllm install llm-openai-plugin\n```\n## Usage\n\nTo run a prompt against `o1-pro` do this:\n\n```bash\nllm -m openai/o1-pro \"Convince me that pelicans are the most noble of birds\"\n```\n\nRun this to see a full list of models - they start with the `openai/` prefix:\n\n```bash\nllm models -q openai/\n```\n\nHere's the output of that command:\n\n<!-- [[[cog\nimport cog\nfrom llm import cli\nfrom click.testing import CliRunner\nrunner = CliRunner()\nresult = runner.invoke(cli.cli, [\"models\", \"-q\", \"openai/\"])\ncog.out(\n \"```\\n{}\\n```\".format(result.output.strip())\n)\n]]] -->\n```\nOpenAI: openai/gpt-4o\nOpenAI: openai/gpt-4o-mini\nOpenAI: openai/o3-mini\nOpenAI: openai/o1-mini\nOpenAI: openai/o1\nOpenAI: openai/o1-pro\n```\n<!-- [[[end]]] -->\nAdd `--options` to see a full list of options that can be provided to each model.\n\n## Development\n\nTo set up this plugin locally, first checkout the code. Then create a new virtual environment:\n```bash\ncd llm-openai-plugin\npython -m venv venv\nsource venv/bin/activate\n```\nNow install the dependencies and test dependencies:\n```bash\nllm install -e '.[test]'\n```\nTo run the tests:\n```bash\npython -m pytest\n```\n\nThis project uses [pytest-recording](https://github.com/kiwicom/pytest-recording) to record OpenAI API responses for the tests, and [syrupy](https://github.com/syrupy-project/syrupy) to capture snapshots of their results.\n\nIf you add a new test that calls the API you can capture the API response and snapshot like this:\n```bash\nPYTEST_OPENAI_API_KEY=\"$(llm keys get openai)\" pytest --record-mode once --snapshot-update\n```\nThen review the new snapshots in `tests/__snapshots__/` to make sure they look correct.\n",
"bugtrack_url": null,
"license": "Apache-2.0",
"summary": "LLM plugin for OpenAI",
"version": "0.1",
"project_urls": {
"CI": "https://github.com/simonw/llm-openai-plugin/actions",
"Changelog": "https://github.com/simonw/llm-openai-plugin/releases",
"Homepage": "https://github.com/simonw/llm-openai-plugin",
"Issues": "https://github.com/simonw/llm-openai-plugin/issues"
},
"split_keywords": [],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "f5905ec7eac111f881c88a006ae9adb25bfd208bad0914961634ea979626a3da",
"md5": "32633950111fd40e2ac1c6b4d88d002a",
"sha256": "adc9ef1b85f4b53a7dfe61274dc51d9d0e73174c592a8e0912e580e5ecdc0b7f"
},
"downloads": -1,
"filename": "llm_openai_plugin-0.1-py3-none-any.whl",
"has_sig": false,
"md5_digest": "32633950111fd40e2ac1c6b4d88d002a",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.9",
"size": 10172,
"upload_time": "2025-03-21T21:34:51",
"upload_time_iso_8601": "2025-03-21T21:34:51.261644Z",
"url": "https://files.pythonhosted.org/packages/f5/90/5ec7eac111f881c88a006ae9adb25bfd208bad0914961634ea979626a3da/llm_openai_plugin-0.1-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "0e8aa64246b08adaff52fee5ea51902ce45623d4a2c599897cf4ac8e4325a560",
"md5": "ad5d4aba42c9654f6a486af3a94ff02c",
"sha256": "f6c2e2e95a9ef1f795365794b3fca9cbb2fde4ee7fe383d97adfb5c1b0655e85"
},
"downloads": -1,
"filename": "llm_openai_plugin-0.1.tar.gz",
"has_sig": false,
"md5_digest": "ad5d4aba42c9654f6a486af3a94ff02c",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 10267,
"upload_time": "2025-03-21T21:34:52",
"upload_time_iso_8601": "2025-03-21T21:34:52.730012Z",
"url": "https://files.pythonhosted.org/packages/0e/8a/a64246b08adaff52fee5ea51902ce45623d4a2c599897cf4ac8e4325a560/llm_openai_plugin-0.1.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-03-21 21:34:52",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "simonw",
"github_project": "llm-openai-plugin",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "llm-openai-plugin"
}