- **Name**: llm-anthropic
- **Version**: 0.15.1
- **Summary**: LLM access to models by Anthropic, including the Claude series
- **Author**: Simon Willison
- **License**: Apache-2.0
- **Home page**: None
- **Requires Python**: None
- **Upload time**: 2025-03-01 01:00:29
- **Requirements**: No requirements were recorded.
# llm-anthropic
[PyPI](https://pypi.org/project/llm-anthropic/)
[Changelog](https://github.com/simonw/llm-anthropic/releases)
[Tests](https://github.com/simonw/llm-anthropic/actions/workflows/test.yml)
[License](https://github.com/simonw/llm-anthropic/blob/main/LICENSE)
LLM access to models by Anthropic, including the Claude series
## Installation
Install this plugin in the same environment as [LLM](https://llm.datasette.io/).
```bash
llm install llm-anthropic
```
<details><summary>Instructions for users who need to upgrade from <code>llm-claude-3</code></summary>
<br>
If you previously used `llm-claude-3` you can upgrade like this:
```bash
llm install -U llm-claude-3
llm keys set anthropic --value "$(llm keys get claude)"
```
The first line will remove the previous `llm-claude-3` version and install this one, because the latest `llm-claude-3` depends on `llm-anthropic`.
The second line sets the `anthropic` key to whatever value you previously used for the `claude` key.
</details>
## Usage
First, set [an API key](https://console.anthropic.com/settings/keys) for Anthropic:
```bash
llm keys set anthropic
# Paste key here
```
You can also set the key in the `ANTHROPIC_API_KEY` environment variable.
Run `llm models` to list the models, and `llm models --options` to include a list of their options.
Run prompts like this:
```bash
llm -m claude-3.7-sonnet 'Fun facts about pelicans'
llm -m claude-3.5-sonnet 'Fun facts about pelicans'
llm -m claude-3.5-haiku 'Fun facts about armadillos'
llm -m claude-3-opus 'Fun facts about squirrels'
```
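The same prompts can be run from Python. Here is a minimal sketch using LLM's Python API, assuming the `anthropic` key has already been saved as described above:
```python
import llm

# Load one of the Anthropic models registered by this plugin
model = llm.get_model("claude-3.7-sonnet")

# response.text() waits for and returns the complete response
response = model.prompt("Fun facts about pelicans")
print(response.text())
```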
Image attachments are supported too:
```bash
llm -m claude-3.5-sonnet 'describe this image' -a https://static.simonwillison.net/static/2024/pelicans.jpg
llm -m claude-3-haiku 'extract text' -a page.png
```
The Claude 3.5 and 3.7 models can handle PDF files:
```bash
llm -m claude-3.5-sonnet 'extract text' -a page.pdf
```
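Attachments work from Python too. A sketch using `llm.Attachment`, which accepts either a `url=` or a `path=` (the file names here are placeholders):
```python
import llm

model = llm.get_model("claude-3.5-sonnet")

# Attachments can point at a URL or a local file on disk
response = model.prompt(
    "describe this image",
    attachments=[
        llm.Attachment(url="https://static.simonwillison.net/static/2024/pelicans.jpg"),
        # llm.Attachment(path="page.pdf") works the same way for local files
    ],
)
print(response.text())
```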
Anthropic's models support [schemas](https://llm.datasette.io/en/stable/schemas.html). Here's how to use Claude 3.7 Sonnet to invent a dog:
```bash
llm -m claude-3.7-sonnet --schema 'name,age int,bio: one sentence' 'invent a surprising dog'
```
Example output:
```json
{
"name": "Whiskers the Mathematical Mastiff",
"age": 7,
"bio": "Whiskers is a mastiff who can solve complex calculus problems by barking in binary code and has won three international mathematics competitions against human competitors."
}
```
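The equivalent from Python passes `schema=` to `model.prompt()`. A sketch, assuming `llm.schema_dsl()` is available in your LLM version to expand the concise schema syntax:
```python
import json
import llm

model = llm.get_model("claude-3.7-sonnet")

# schema_dsl() expands the concise syntax into a full JSON schema dict
response = model.prompt(
    "invent a surprising dog",
    schema=llm.schema_dsl("name,age int,bio: one sentence"),
)
dog = json.loads(response.text())
print(dog["name"])
```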
## Extended reasoning with Claude 3.7 Sonnet
Claude 3.7 introduced [extended thinking](https://www.anthropic.com/news/visible-extended-thinking) mode, where Claude can expend extra effort thinking through the prompt before producing a response.
Use the `-o thinking 1` option to enable this feature:
```bash
llm -m claude-3.7-sonnet -o thinking 1 'Write a convincing speech to congress about the need to protect the California Brown Pelican'
```
The chain of thought is not currently visible while using LLM, but it is logged to the database and can be viewed using this command:
```bash
llm logs -c --json
```
Or in combination with `jq`:
```bash
llm logs --json -c | jq '.[0].response_json.content[0].thinking' -r
```
By default, up to 1024 tokens can be used for thinking. You can increase this budget with the `thinking_budget` option:
```bash
llm -m claude-3.7-sonnet -o thinking_budget 32000 'Write a long speech about pelicans in French'
```
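The thinking options map to keyword arguments in the Python API. A sketch, assuming `response.json()` exposes the raw API response (its shape follows the `jq` expression above):
```python
import llm

model = llm.get_model("claude-3.7-sonnet")
response = model.prompt(
    "Write a long speech about pelicans in French",
    thinking=True,
    thinking_budget=32000,
)
print(response.text())

# The thinking block sits in the raw response JSON, matching the
# .content[0].thinking path used in the jq example above
print(response.json()["content"][0]["thinking"])
```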
## Model options
The following options can be passed using `-o name value` on the CLI or as `keyword=value` arguments to the Python `model.prompt()` method:
<!-- [[[cog
import cog, llm
_type_lookup = {
    "number": "float",
    "integer": "int",
    "string": "str",
    "object": "dict",
}

model = llm.get_model("claude-3.7-sonnet")
output = []
for name, field in model.Options.schema()["properties"].items():
    any_of = field.get("anyOf")
    if any_of is None:
        any_of = [{"type": field["type"]}]
    types = ", ".join(
        [
            _type_lookup.get(item["type"], item["type"])
            for item in any_of
            if item["type"] != "null"
        ]
    )
    bits = ["- **", name, "**: `", types, "`\n"]
    description = field.get("description", "")
    if description:
        bits.append('\n  ' + description + '\n\n')
    output.append("".join(bits))
cog.out("".join(output))
]]] -->
- **max_tokens**: `int`

  The maximum number of tokens to generate before stopping

- **temperature**: `float`

  Amount of randomness injected into the response. Defaults to 1.0. Ranges from 0.0 to 1.0. Use temperature closer to 0.0 for analytical / multiple choice, and closer to 1.0 for creative and generative tasks. Note that even with temperature of 0.0, the results will not be fully deterministic.

- **top_p**: `float`

  Use nucleus sampling. In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p. You should either alter temperature or top_p, but not both. Recommended for advanced use cases only. You usually only need to use temperature.

- **top_k**: `int`

  Only sample from the top K options for each subsequent token. Used to remove 'long tail' low probability responses. Recommended for advanced use cases only. You usually only need to use temperature.

- **user_id**: `str`

  An external identifier for the user who is associated with the request

- **prefill**: `str`

  A prefill to use for the response

- **hide_prefill**: `boolean`

  Do not repeat the prefill value at the start of the response

- **stop_sequences**: `array, str`

  Custom text sequences that will cause the model to stop generating - pass either a list of strings or a single string

- **thinking**: `boolean`

  Enable thinking mode

- **thinking_budget**: `int`

  Number of tokens to budget for thinking
<!-- [[[end]]] -->
The `prefill` option can be used to set the first part of the response. To increase the chance of returning JSON, set that to `{`:
```bash
llm -m claude-3.5-sonnet 'Fun data about pelicans' \
  -o prefill '{'
```
If you do not want the prefill token to be echoed in the response, set `hide_prefill` to `true`:
```bash
llm -m claude-3.5-haiku 'Short python function describing a pelican' \
  -o prefill '```python' \
  -o hide_prefill true \
  -o stop_sequences '```'
```
This example sets `` ``` `` as the stop sequence, so the response will be a Python function without the wrapping Markdown code block.
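The same trick works from Python, since these options become keyword arguments to `model.prompt()`. A minimal sketch:
````python
import llm

model = llm.get_model("claude-3.5-haiku")

# prefill opens a Python code block, hide_prefill keeps the opening
# fence out of the output, and the stop sequence halts generation
# at the closing fence - leaving just the bare function
response = model.prompt(
    "Short python function describing a pelican",
    prefill="```python",
    hide_prefill=True,
    stop_sequences="```",
)
print(response.text())
````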
To pass a single stop sequence, send a string:
```bash
llm -m claude-3.5-sonnet 'Fun facts about pelicans' \
  -o stop_sequences "beak"
```
For multiple stop sequences, pass a JSON array:
```bash
llm -m claude-3.5-sonnet 'Fun facts about pelicans' \
  -o stop_sequences '["beak", "feathers"]'
```
When using the Python API, pass a string or an array of strings:
```python
import llm

model = llm.get_model("claude-3.5-sonnet")
response = model.prompt(
    "Fun facts about pelicans",
    stop_sequences=["beak", "feathers"],
)
```
## Development
To set up this plugin locally, first check out the code. Then create a new virtual environment:
```bash
cd llm-anthropic
python3 -m venv venv
source venv/bin/activate
```
Now install the dependencies and test dependencies:
```bash
llm install -e '.[test]'
```
To run the tests:
```bash
pytest
```
This project uses [pytest-recording](https://github.com/kiwicom/pytest-recording) to record Anthropic API responses for the tests.
If you add a new test that calls the API you can capture the API response like this:
```bash
PYTEST_ANTHROPIC_API_KEY="$(llm keys get anthropic)" pytest --record-mode once
```
You will need to have stored a valid Anthropic API key using this command first:
```bash
llm keys set anthropic
# Paste key here
```
I use the following sequence:
```bash
# First delete the relevant cassette if it exists already:
rm tests/cassettes/test_anthropic/test_thinking_prompt.yaml
# Run this failing test to recreate the cassette
PYTEST_ANTHROPIC_API_KEY="$(llm keys get anthropic)" pytest -k test_thinking_prompt --record-mode once
# Now run the test again with --pdb to figure out how to update it
pytest -k test_thinking_prompt --pdb
# Edit test
```
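For reference, a new recorded test might look something like this. A hypothetical sketch: `pytest-recording` supplies the `vcr` marker, while the test name, model, and assertion here are illustrative:
```python
import llm
import pytest


@pytest.mark.vcr
def test_pelican_prompt():
    # Replays the matching cassette from tests/cassettes/ once recorded
    model = llm.get_model("claude-3.7-sonnet")
    response = model.prompt("Fun facts about pelicans")
    assert "pelican" in response.text().lower()
```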