| Name | llm-gemini |
|---|---|
| Version | 0.25 |
| Summary | LLM plugin to access Google's Gemini family of models |
| upload_time | 2025-08-18 23:55:32 |
| home_page | None |
| maintainer | None |
| docs_url | None |
| author | Simon Willison |
| requires_python | None |
| license | None |
| requirements | No requirements were recorded. |
# llm-gemini
[PyPI](https://pypi.org/project/llm-gemini/)
[Changelog](https://github.com/simonw/llm-gemini/releases)
[Tests](https://github.com/simonw/llm-gemini/actions?query=workflow%3ATest)
[License](https://github.com/simonw/llm-gemini/blob/main/LICENSE)
API access to Google's Gemini models
## Installation
Install this plugin in the same environment as [LLM](https://llm.datasette.io/).
```bash
llm install llm-gemini
```
## Usage
Configure the model by setting a key called "gemini" to your [API key](https://aistudio.google.com/app/apikey):
```bash
llm keys set gemini
```
```
<paste key here>
```
You can also set the API key by assigning it to the environment variable `LLM_GEMINI_KEY`.
Now run the model using `-m gemini-2.0-flash`, for example:
```bash
llm -m gemini-2.0-flash "A short joke about a pelican and a walrus"
```
> A pelican and a walrus are sitting at a bar. The pelican orders a fishbowl cocktail, and the walrus orders a plate of clams. The bartender asks, "So, what brings you two together?"
>
> The walrus sighs and says, "It's a long story. Let's just say we met through a mutual friend... of the fin."
You can set the [default model](https://llm.datasette.io/en/stable/setup.html#setting-a-custom-default-model) to avoid the extra `-m` option:
```bash
llm models default gemini-2.0-flash
llm "A joke about a pelican and a walrus"
```
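The same model can also be called from the [LLM Python API](https://llm.datasette.io/en/stable/python-api.html). A minimal sketch, assuming the key has already been configured as above:

```python
import llm

# Minimal sketch using LLM's Python API; assumes the "gemini" key was
# configured with `llm keys set gemini` or via LLM_GEMINI_KEY.
model = llm.get_model("gemini-2.0-flash")
response = model.prompt("A short joke about a pelican and a walrus")
print(response.text())
```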
## Available models
<!-- [[[cog
import cog
from llm import cli
from click.testing import CliRunner
runner = CliRunner()
result = runner.invoke(cli.cli, ["models", "-q", "gemini/"])
lines = reversed(result.output.strip().split("\n"))
to_output = []
NOTES = {
    "gemini/gemini-2.5-pro": "Gemini 2.5 Pro",
    "gemini/gemini-2.5-flash": "Gemini 2.5 Flash",
    "gemini/gemini-2.5-flash-lite": "Gemini 2.5 Flash Lite",
    "gemini/gemini-2.5-flash-preview-05-20": "Gemini 2.5 Flash preview (priced differently from 2.5 Flash)",
    "gemini/gemini-2.0-flash-thinking-exp-01-21": "Experimental \"thinking\" model from January 2025",
    "gemini/gemini-1.5-flash-8b-latest": "The least expensive model",
}
for line in lines:
    model_id, rest = line.split(None, 2)[1:]
    note = NOTES.get(model_id, "")
    to_output.append(
        "- `{}`{}".format(
            model_id,
            ': {}'.format(note) if note else ""
        )
    )
cog.out("\n".join(to_output))
]]] -->
- `gemini/gemini-2.5-flash-lite`: Gemini 2.5 Flash Lite
- `gemini/gemini-2.5-pro`: Gemini 2.5 Pro
- `gemini/gemini-2.5-flash`: Gemini 2.5 Flash
- `gemini/gemini-2.5-pro-preview-06-05`
- `gemini/gemini-2.5-flash-preview-05-20`: Gemini 2.5 Flash preview (priced differently from 2.5 Flash)
- `gemini/gemini-2.5-pro-preview-05-06`
- `gemini/gemini-2.5-flash-preview-04-17`
- `gemini/gemini-2.5-pro-preview-03-25`
- `gemini/gemini-2.5-pro-exp-03-25`
- `gemini/gemini-2.0-flash-lite`
- `gemini/gemini-2.0-pro-exp-02-05`
- `gemini/gemini-2.0-flash`
- `gemini/gemini-2.0-flash-thinking-exp-01-21`: Experimental "thinking" model from January 2025
- `gemini/gemini-2.0-flash-thinking-exp-1219`
- `gemini/gemma-3n-e4b-it`
- `gemini/gemma-3-27b-it`
- `gemini/gemma-3-12b-it`
- `gemini/gemma-3-4b-it`
- `gemini/gemma-3-1b-it`
- `gemini/learnlm-1.5-pro-experimental`
- `gemini/gemini-2.0-flash-exp`
- `gemini/gemini-exp-1206`
- `gemini/gemini-exp-1121`
- `gemini/gemini-exp-1114`
- `gemini/gemini-1.5-flash-8b-001`
- `gemini/gemini-1.5-flash-8b-latest`: The least expensive model
- `gemini/gemini-1.5-flash-002`
- `gemini/gemini-1.5-pro-002`
- `gemini/gemini-1.5-flash-001`
- `gemini/gemini-1.5-pro-001`
- `gemini/gemini-1.5-flash-latest`
- `gemini/gemini-1.5-pro-latest`
- `gemini/gemini-pro`
<!-- [[[end]]] -->
All of these models have aliases that omit the `gemini/` prefix, for example:
```bash
llm -m gemini-1.5-flash-8b-latest --schema 'name,age int,bio' 'invent a dog'
```
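The compact schema syntax also works from Python via `llm.schema_dsl()`. A sketch, assuming a version of LLM with schema support:

```python
import json
import llm

# Sketch of the same schema prompt via the Python API; llm.schema_dsl()
# parses the compact "name,age int,bio" syntax used on the CLI.
model = llm.get_model("gemini-1.5-flash-8b-latest")
response = model.prompt("invent a dog", schema=llm.schema_dsl("name,age int,bio"))
print(json.loads(response.text()))
```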
### Images, audio and video
Gemini models are multi-modal. You can provide images, audio or video files as input like this:
```bash
llm -m gemini-2.0-flash 'extract text' -a image.jpg
```
Or with a URL:
```bash
llm -m gemini-2.0-flash-lite 'describe image' \
-a https://static.simonwillison.net/static/2024/pelicans.jpg
```
Audio works too:
```bash
llm -m gemini-2.0-flash 'transcribe audio' -a audio.mp3
```
And video:
```bash
llm -m gemini-2.0-flash 'describe what happens' -a video.mp4
```
The Gemini prompting guide includes [extensive advice](https://ai.google.dev/gemini-api/docs/file-prompting-strategies) on multi-modal prompting.
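From Python, attachments are passed using `llm.Attachment`. A sketch with a placeholder file path:

```python
import llm

# Sketch: multi-modal prompting via the Python API. image.jpg is a
# placeholder path; llm.Attachment also accepts url= for remote files.
model = llm.get_model("gemini-2.0-flash")
response = model.prompt(
    "extract text",
    attachments=[llm.Attachment(path="image.jpg")],
)
print(response.text())
```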
### JSON output
Use `-o json_object 1` to force the output to be JSON:
```bash
llm -m gemini-2.0-flash -o json_object 1 \
'3 largest cities in California, list of {"name": "..."}'
```
Outputs:
```json
{"cities": [{"name": "Los Angeles"}, {"name": "San Diego"}, {"name": "San Jose"}]}
```
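Model options map to keyword arguments in the Python API (as in the timeout example later in this README), so the equivalent Python call looks roughly like this. The same pattern applies to the other `-o` options below, such as `code_execution` and `google_search`:

```python
import json
import llm

# Sketch: options such as json_object are passed as keyword arguments,
# mirroring the CLI's -o json_object 1.
model = llm.get_model("gemini-2.0-flash")
response = model.prompt(
    '3 largest cities in California, list of {"name": "..."}',
    json_object=True,
)
print(json.loads(response.text()))
```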
### Code execution
Gemini models can [write and execute code](https://ai.google.dev/gemini-api/docs/code-execution) - they can decide to write Python code, execute it in a secure sandbox and use the result as part of their response.
To enable this feature, use `-o code_execution 1`:
```bash
llm -m gemini-2.0-flash -o code_execution 1 \
'use python to calculate (factorial of 13) * 3'
```
### Google search
Some Gemini models support [Grounding with Google Search](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/ground-gemini#web-ground-gemini), where the model can run a Google search and use the results as part of answering a prompt.
Using this feature may incur additional requirements in terms of how you use the results. Consult [Google's documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/ground-gemini#web-ground-gemini) for more details.
To run a prompt with Google search enabled, use `-o google_search 1`:
```bash
llm -m gemini-2.0-flash -o google_search 1 \
'What happened in Ireland today?'
```
Use `llm logs -c --json` after running a prompt to see the full JSON response, which includes [additional information](https://github.com/simonw/llm-gemini/pull/29#issuecomment-2606201877) about grounded results.
### URL context
Gemini models support a [URL context](https://ai.google.dev/gemini-api/docs/url-context) tool which, when enabled, allows the models to fetch additional content from URLs as part of their execution.
You can enable that with the `-o url_context 1` option - for example:
```bash
llm -m gemini-2.5-flash -o url_context 1 'Latest headline on simonwillison.net'
```
Extra tokens introduced by this tool will be charged as input tokens. Use `--usage` to see details of those:
```bash
llm -m gemini-2.5-flash -o url_context 1 --usage \
'Latest headline on simonwillison.net'
```
Outputs:
```
The latest headline on simonwillison.net as of August 17, 2025, is "TIL: Running a gpt-oss eval suite against LM Studio on a Mac.".
Token usage: 9,613 input, 87 output, {"candidatesTokenCount": 57, "promptTokensDetails": [{"modality": "TEXT", "tokenCount": 10}], "toolUsePromptTokenCount": 9603, "toolUsePromptTokensDetails": [{"modality": "TEXT", "tokenCount": 9603}], "thoughtsTokenCount": 30}
```
The `"toolUsePromptTokenCount"` key shows how many tokens were used for that URL context.
### Chat
To chat interactively with the model, run `llm chat`:
```bash
llm chat -m gemini-2.0-flash
```
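The Python equivalent of an interactive session is a conversation, which carries earlier exchanges forward as context. A minimal sketch:

```python
import llm

# Sketch: a conversation keeps prior prompts and responses as context,
# similar to an interactive `llm chat` session.
model = llm.get_model("gemini-2.0-flash")
conversation = model.conversation()
print(conversation.prompt("Tell me a pelican fact").text())
print(conversation.prompt("Now rephrase it as a haiku").text())
```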
### Timeouts
By default no timeout is applied to requests against the Gemini API. You can use the `timeout` option to protect against API requests that hang indefinitely.
With the CLI tool, set a 1.5 second timeout like this:
```bash
llm -m gemini-2.5-flash-preview-05-20 'epic saga about mice' -o timeout 1.5
```
In the Python library timeouts are used like this:
```python
import httpx, llm

model = llm.get_model("gemini/gemini-2.5-flash-preview-05-20")

try:
    response = model.prompt(
        "epic saga about mice", timeout=1.5
    )
    print(response.text())
except httpx.TimeoutException:
    print("Timeout exceeded")
```
An `httpx.TimeoutException` subclass will be raised if the timeout is exceeded.
## Embeddings
The plugin also adds support for the `gemini-embedding-exp-03-07` and `text-embedding-004` embedding models.
Run one of them against a single string like this:
```bash
llm embed -m text-embedding-004 -c 'hello world'
```
This returns a JSON array of 768 numbers.
The `gemini-embedding-exp-03-07` model is larger, returning 3072 numbers. You can also use variants of it that are truncated down to smaller sizes:
- `gemini-embedding-exp-03-07` - 3072 numbers
- `gemini-embedding-exp-03-07-2048` - 2048 numbers
- `gemini-embedding-exp-03-07-1024` - 1024 numbers
- `gemini-embedding-exp-03-07-512` - 512 numbers
- `gemini-embedding-exp-03-07-256` - 256 numbers
- `gemini-embedding-exp-03-07-128` - 128 numbers
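These embedding models can also be used from Python. A minimal sketch:

```python
import llm

# Sketch: embed a single string from Python; text-embedding-004
# returns 768 floats, the -128 variant returns 128.
model = llm.get_embedding_model("text-embedding-004")
vector = model.embed("hello world")
print(len(vector))  # 768

small = llm.get_embedding_model("gemini-embedding-exp-03-07-128")
print(len(small.embed("hello world")))  # 128
```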
This command will embed every `README.md` file in child directories of the current directory and store the results in a SQLite database called `embed.db` in a collection called `readmes`:
```bash
llm embed-multi readmes -d embed.db -m gemini-embedding-exp-03-07-128 \
--files . '*/README.md'
```
You can then run similarity searches against that collection like this:
```bash
llm similar readmes -c 'upload csvs to stuff' -d embed.db
```
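The stored collection can be queried from Python with `llm.Collection`. A sketch, assuming `embed.db` was created by the `embed-multi` command above:

```python
import llm
import sqlite_utils

# Sketch: open the collection created by `llm embed-multi` and run a
# similarity search against it.
db = sqlite_utils.Database("embed.db")
collection = llm.Collection("readmes", db)
for entry in collection.similar("upload csvs to stuff", number=5):
    print(entry.id, entry.score)
```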
See the [LLM embeddings documentation](https://llm.datasette.io/en/stable/embeddings/cli.html) for further details.
## Listing all Gemini API models
The `llm gemini models` command lists all of the models that are exposed by the Gemini API, some of which may not be available through this plugin.
```bash
llm gemini models
```
You can add a `--key X` option to use a different API key.
To filter models by their supported generation methods, use `--method` one or more times:
```bash
llm gemini models --method embedContent
```
If you provide multiple methods you will see models that support any of them.
## Development
To set up this plugin locally, first check out the code. Then create a new virtual environment:
```bash
cd llm-gemini
python3 -m venv venv
source venv/bin/activate
```
Now install the dependencies and test dependencies:
```bash
llm install -e '.[test]'
```
To run the tests:
```bash
pytest
```
This project uses [pytest-recording](https://github.com/kiwicom/pytest-recording) to record Gemini API responses for the tests.
If you add a new test that calls the API you can capture the API response like this:
```bash
PYTEST_GEMINI_API_KEY="$(llm keys get gemini)" pytest --record-mode once
```
You will need to have stored a valid Gemini API key using this command first:
```bash
llm keys set gemini
# Paste key here
```
## Raw data

```json
{
  "_id": null,
  "home_page": null,
  "name": "llm-gemini",
  "maintainer": null,
  "docs_url": null,
  "requires_python": null,
  "maintainer_email": null,
  "keywords": null,
  "author": "Simon Willison",
  "author_email": null,
  "download_url": "https://files.pythonhosted.org/packages/08/05/cd66f8bdb5946e4f9ba9fca11d19c8b1a4e86e30933b8e9f68335f20eb73/llm_gemini-0.25.tar.gz",
  "platform": null,
"description": "# llm-gemini\n\n[](https://pypi.org/project/llm-gemini/)\n[](https://github.com/simonw/llm-gemini/releases)\n[](https://github.com/simonw/llm-gemini/actions?query=workflow%3ATest)\n[](https://github.com/simonw/llm-gemini/blob/main/LICENSE)\n\nAPI access to Google's Gemini models\n\n## Installation\n\nInstall this plugin in the same environment as [LLM](https://llm.datasette.io/).\n```bash\nllm install llm-gemini\n```\n## Usage\n\nConfigure the model by setting a key called \"gemini\" to your [API key](https://aistudio.google.com/app/apikey):\n```bash\nllm keys set gemini\n```\n```\n<paste key here>\n```\nYou can also set the API key by assigning it to the environment variable `LLM_GEMINI_KEY`.\n\nNow run the model using `-m gemini-2.0-flash`, for example:\n\n```bash\nllm -m gemini-2.0-flash \"A short joke about a pelican and a walrus\"\n```\n\n> A pelican and a walrus are sitting at a bar. The pelican orders a fishbowl cocktail, and the walrus orders a plate of clams. The bartender asks, \"So, what brings you two together?\"\n>\n> The walrus sighs and says, \"It's a long story. Let's just say we met through a mutual friend... of the fin.\"\n\nYou can set the [default model](https://llm.datasette.io/en/stable/setup.html#setting-a-custom-default-model) to avoid the extra `-m` option:\n\n```bash\nllm models default gemini-2.0-flash\nllm \"A joke about a pelican and a walrus\"\n```\n\n## Available models\n\n<!-- [[[cog\nimport cog\nfrom llm import cli\nfrom click.testing import CliRunner\nrunner = CliRunner()\nresult = runner.invoke(cli.cli, [\"models\", \"-q\", \"gemini/\"])\nlines = reversed(result.output.strip().split(\"\\n\"))\nto_output = []\nNOTES = {\n \"gemini/gemini-2.5-pro\": \"Gemini 2.5 Pro\",\n \"gemini/gemini-2.5-flash\": \"Gemini 2.5 Flash\",\n \"gemini/gemini-2.5-flash-lite\": \"Gemini 2.5 Flash Lite\",\n \"gemini/gemini-2.5-flash-preview-05-20\": \"Gemini 2.5 Flash preview (priced differently from 2.5 Flash)\",\n \"gemini/gemini-2.0-flash-thinking-exp-01-21\": \"Experimental \\\"thinking\\\" model from January 2025\",\n \"gemini/gemini-1.5-flash-8b-latest\": \"The least expensive model\",\n}\nfor line in lines:\n model_id, rest = line.split(None, 2)[1:]\n note = NOTES.get(model_id, \"\")\n to_output.append(\n \"- `{}`{}\".format(\n model_id,\n ': {}'.format(note) if note else \"\"\n )\n )\ncog.out(\"\\n\".join(to_output))\n]]] -->\n- `gemini/gemini-2.5-flash-lite`: Gemini 2.5 Flash Lite\n- `gemini/gemini-2.5-pro`: Gemini 2.5 Pro\n- `gemini/gemini-2.5-flash`: Gemini 2.5 Flash\n- `gemini/gemini-2.5-pro-preview-06-05`\n- `gemini/gemini-2.5-flash-preview-05-20`: Gemini 2.5 Flash preview (priced differently from 2.5 Flash)\n- `gemini/gemini-2.5-pro-preview-05-06`\n- `gemini/gemini-2.5-flash-preview-04-17`\n- `gemini/gemini-2.5-pro-preview-03-25`\n- `gemini/gemini-2.5-pro-exp-03-25`\n- `gemini/gemini-2.0-flash-lite`\n- `gemini/gemini-2.0-pro-exp-02-05`\n- `gemini/gemini-2.0-flash`\n- `gemini/gemini-2.0-flash-thinking-exp-01-21`: Experimental \"thinking\" model from January 2025\n- `gemini/gemini-2.0-flash-thinking-exp-1219`\n- `gemini/gemma-3n-e4b-it`\n- `gemini/gemma-3-27b-it`\n- `gemini/gemma-3-12b-it`\n- `gemini/gemma-3-4b-it`\n- `gemini/gemma-3-1b-it`\n- `gemini/learnlm-1.5-pro-experimental`\n- `gemini/gemini-2.0-flash-exp`\n- `gemini/gemini-exp-1206`\n- `gemini/gemini-exp-1121`\n- `gemini/gemini-exp-1114`\n- `gemini/gemini-1.5-flash-8b-001`\n- `gemini/gemini-1.5-flash-8b-latest`: The least expensive model\n- `gemini/gemini-1.5-flash-002`\n- 
`gemini/gemini-1.5-pro-002`\n- `gemini/gemini-1.5-flash-001`\n- `gemini/gemini-1.5-pro-001`\n- `gemini/gemini-1.5-flash-latest`\n- `gemini/gemini-1.5-pro-latest`\n- `gemini/gemini-pro`\n<!-- [[[end]]] -->\n\nAll of these models have aliases that omit the `gemini/` prefix, for example:\n\n```bash\nllm -m gemini-1.5-flash-8b-latest --schema 'name,age int,bio' 'invent a dog'\n```\n\n### Images, audio and video\n\nGemini models are multi-modal. You can provide images, audio or video files as input like this:\n\n```bash\nllm -m gemini-2.0-flash 'extract text' -a image.jpg\n```\nOr with a URL:\n```bash\nllm -m gemini-2.0-flash-lite 'describe image' \\\n -a https://static.simonwillison.net/static/2024/pelicans.jpg\n```\nAudio works too:\n\n```bash\nllm -m gemini-2.0-flash 'transcribe audio' -a audio.mp3\n```\n\nAnd video:\n\n```bash\nllm -m gemini-2.0-flash 'describe what happens' -a video.mp4\n```\nThe Gemini prompting guide includes [extensive advice](https://ai.google.dev/gemini-api/docs/file-prompting-strategies) on multi-modal prompting.\n\n### JSON output\n\nUse `-o json_object 1` to force the output to be JSON:\n\n```bash\nllm -m gemini-2.0-flash -o json_object 1 \\\n '3 largest cities in California, list of {\"name\": \"...\"}'\n```\nOutputs:\n```json\n{\"cities\": [{\"name\": \"Los Angeles\"}, {\"name\": \"San Diego\"}, {\"name\": \"San Jose\"}]}\n```\n\n### Code execution\n\nGemini models can [write and execute code](https://ai.google.dev/gemini-api/docs/code-execution) - they can decide to write Python code, execute it in a secure sandbox and use the result as part of their response.\n\nTo enable this feature, use `-o code_execution 1`:\n\n```bash\nllm -m gemini-2.0-flash -o code_execution 1 \\\n'use python to calculate (factorial of 13) * 3'\n```\n### Google search\n\nSome Gemini models support [Grounding with Google Search](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/ground-gemini#web-ground-gemini), where the model can run a Google search and use the results as part of answering a prompt.\n\nUsing this feature may incur additional requirements in terms of how you use the results. Consult [Google's documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/ground-gemini#web-ground-gemini) for more details.\n\nTo run a prompt with Google search enabled, use `-o google_search 1`:\n\n```bash\nllm -m gemini-2.0-flash -o google_search 1 \\\n 'What happened in Ireland today?'\n```\n\nUse `llm logs -c --json` after running a prompt to see the full JSON response, which includes [additional information](https://github.com/simonw/llm-gemini/pull/29#issuecomment-2606201877) about grounded results.\n\n### URL context\n\nGemini models support a [URL context](https://ai.google.dev/gemini-api/docs/url-context) tool which, when enabled, allows the models to fetch additional content from URLs as part of their execution.\n\nYou can enable that with the `-o url_context 1` option - for example:\n\n```bash\nllm -m gemini-2.5-flash -o url_context 1 'Latest headline on simonwillison.net'\n```\nExtra tokens introduced by this tool will be charged as input tokens. 
Use `--usage` to see details of those:\n```bash\nllm -m gemini-2.5-flash -o url_context 1 --usage \\\n 'Latest headline on simonwillison.net'\n```\nOutputs:\n```\nThe latest headline on simonwillison.net as of August 17, 2025, is \"TIL: Running a gpt-oss eval suite against LM Studio on a Mac.\".\nToken usage: 9,613 input, 87 output, {\"candidatesTokenCount\": 57, \"promptTokensDetails\": [{\"modality\": \"TEXT\", \"tokenCount\": 10}], \"toolUsePromptTokenCount\": 9603, \"toolUsePromptTokensDetails\": [{\"modality\": \"TEXT\", \"tokenCount\": 9603}], \"thoughtsTokenCount\": 30}\n```\nThe `\"toolUsePromptTokenCount\"` key shows how many tokens were used for that URL context.\n\n### Chat\n\nTo chat interactively with the model, run `llm chat`:\n\n```bash\nllm chat -m gemini-2.0-flash\n```\n\n### Timeouts\n\nBy default there is no `timeout` against the Gemini API. You can use the `timeout` option to protect against API requests that hang indefinitely.\n\nWith the CLI tool that looks like this, to set a 1.5 second timeout:\n\n```bash\nllm -m gemini-2.5-flash-preview-05-20 'epic saga about mice' -o timeout 1.5\n```\nIn the Python library timeouts are used like this:\n```python\nimport httpx, llm\n\nmodel = llm.get_model(\"gemini/gemini-2.5-flash-preview-05-20\")\n\ntry:\n response = model.prompt(\n \"epic saga about mice\", timeout=1.5\n )\n print(response.text())\nexcept httpx.TimeoutException:\n print(\"Timeout exceeded\")\n```\nAn `httpx.TimeoutException` subclass will be raised if the timeout is exceeded.\n\n## Embeddings\n\nThe plugin also adds support for the `gemini-embedding-exp-03-07` and `text-embedding-004` embedding models.\n\nRun that against a single string like this:\n```bash\nllm embed -m text-embedding-004 -c 'hello world'\n```\nThis returns a JSON array of 768 numbers.\n\nThe `gemini-embedding-exp-03-07` model is larger, returning 3072 numbers. You can also use variants of it that are truncated down to smaller sizes:\n\n- `gemini-embedding-exp-03-07` - 3072 numbers\n- `gemini-embedding-exp-03-07-2048` - 2048 numbers\n- `gemini-embedding-exp-03-07-1024` - 1024 numbers\n- `gemini-embedding-exp-03-07-512` - 512 numbers\n- `gemini-embedding-exp-03-07-256` - 256 numbers\n- `gemini-embedding-exp-03-07-128` - 128 numbers\n\nThis command will embed every `README.md` file in child directories of the current directory and store the results in a SQLite database called `embed.db` in a collection called `readmes`:\n\n```bash\nllm embed-multi readmes -d embed.db -m gemini-embedding-exp-03-07-128 \\\n --files . '*/README.md'\n```\nYou can then run similarity searches against that collection like this:\n```bash\nllm similar readmes -c 'upload csvs to stuff' -d embed.db\n```\n\nSee the [LLM embeddings documentation](https://llm.datasette.io/en/stable/embeddings/cli.html) for further details.\n\n## Listing all Gemini API models\n\nThe `llm gemini models` command lists all of the models that are exposed by the Gemini API, some of which may not be available through this plugin.\n\n```bash\nllm gemini models\n```\nYou can add a `--key X` option to use a different API key.\n\nTo filter models by their supported generation methods use `--method` one or more times:\n```bash\nllm gemini models --method embedContent\n```\nIf you provide multiple methods you will see models that support any of them.\n\n## Development\n\nTo set up this plugin locally, first checkout the code. 
Then create a new virtual environment:\n```bash\ncd llm-gemini\npython3 -m venv venv\nsource venv/bin/activate\n```\nNow install the dependencies and test dependencies:\n```bash\nllm install -e '.[test]'\n```\nTo run the tests:\n```bash\npytest\n```\n\nThis project uses [pytest-recording](https://github.com/kiwicom/pytest-recording) to record Gemini API responses for the tests.\n\nIf you add a new test that calls the API you can capture the API response like this:\n```bash\nPYTEST_GEMINI_API_KEY=\"$(llm keys get gemini)\" pytest --record-mode once\n```\nYou will need to have stored a valid Gemini API key using this command first:\n```bash\nllm keys set gemini\n# Paste key here\n```\n",
"bugtrack_url": null,
"license": null,
"summary": "LLM plugin to access Google's Gemini family of models",
"version": "0.25",
"project_urls": {
"CI": "https://github.com/simonw/llm-gemini/actions",
"Changelog": "https://github.com/simonw/llm-gemini/releases",
"Homepage": "https://github.com/simonw/llm-gemini",
"Issues": "https://github.com/simonw/llm-gemini/issues"
},
"split_keywords": [],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "a89476201e91b48a100ff699e9c0283c17eae648471e869b6e51cfb26dc33f0b",
"md5": "bdf1af67acde57411cce4c973b854112",
"sha256": "e5a7d090376937167cc95cffce7d9234497226a755e5a2c9492da4336031bec6"
},
"downloads": -1,
"filename": "llm_gemini-0.25-py3-none-any.whl",
"has_sig": false,
"md5_digest": "bdf1af67acde57411cce4c973b854112",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": null,
"size": 15290,
"upload_time": "2025-08-18T23:55:31",
"upload_time_iso_8601": "2025-08-18T23:55:31.906075Z",
"url": "https://files.pythonhosted.org/packages/a8/94/76201e91b48a100ff699e9c0283c17eae648471e869b6e51cfb26dc33f0b/llm_gemini-0.25-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "0805cd66f8bdb5946e4f9ba9fca11d19c8b1a4e86e30933b8e9f68335f20eb73",
"md5": "d1d8fc3b50b3a22fe417fb9c47ce6d80",
"sha256": "ebf1533c69f6d0ab4d03fe77d58192a267e2de03ec6bd3d2b9ad45159dbd170f"
},
"downloads": -1,
"filename": "llm_gemini-0.25.tar.gz",
"has_sig": false,
"md5_digest": "d1d8fc3b50b3a22fe417fb9c47ce6d80",
"packagetype": "sdist",
"python_version": "source",
"requires_python": null,
"size": 16772,
"upload_time": "2025-08-18T23:55:32",
"upload_time_iso_8601": "2025-08-18T23:55:32.744262Z",
"url": "https://files.pythonhosted.org/packages/08/05/cd66f8bdb5946e4f9ba9fca11d19c8b1a4e86e30933b8e9f68335f20eb73/llm_gemini-0.25.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-08-18 23:55:32",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "simonw",
"github_project": "llm-gemini",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "llm-gemini"
}