llm-replicate

- Name: llm-replicate
- Version: 0.3.1
- Summary: LLM plugin for models hosted on Replicate
- Author: Simon Willison
- Requires Python: >3.7
- License: Apache-2.0
- Upload time: 2024-04-18 17:14:30
# llm-replicate

[![PyPI](https://img.shields.io/pypi/v/llm-replicate.svg)](https://pypi.org/project/llm-replicate/)
[![Changelog](https://img.shields.io/github/v/release/simonw/llm-replicate?include_prereleases&label=changelog)](https://github.com/simonw/llm-replicate/releases)
[![Tests](https://github.com/simonw/llm-replicate/workflows/Test/badge.svg)](https://github.com/simonw/llm-replicate/actions?query=workflow%3ATest)
[![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/llm-replicate/blob/main/LICENSE)

[LLM](https://llm.datasette.io/) plugin for models hosted on [Replicate](https://replicate.com/)

## Installation

First, [install the LLM command-line utility](https://llm.datasette.io/en/stable/setup.html).

Now install this plugin in the same environment as LLM.
```bash
llm install llm-replicate
```
## Configuration

You will need an API key from Replicate. You can [obtain one here](https://replicate.com/account/api-tokens).

You can set that as an environment variable called `REPLICATE_API_TOKEN`, or add it to the `llm` set of saved keys using:

```bash
llm keys set replicate
```
```
Enter key: <paste key here>
```
To fetch and save details of [the default collection](https://replicate.com/collections/language-models) of language models hosted on Replicate, run this:
```bash
llm replicate fetch-models
```
To add specific models that aren't listed in that collection, use the `llm replicate add` command.

For the Llama 2 model from [a16z-infra/llama13b-v2-chat](https://replicate.com/a16z-infra/llama13b-v2-chat) run this:

```bash
llm replicate add a16z-infra/llama13b-v2-chat \
  --chat --alias llama2
```
The `--chat` flag indicates that this is a chat model, which means it will be able to work with `-c` continue mode.

Here's the [70b version](https://replicate.com/replicate/llama70b-v2-chat) of that model:
```bash
llm replicate add \
  replicate/llama70b-v2-chat \
  --chat --alias llama70b
```
## Usage

To run a prompt against a model, pass its name or an alias to `llm -m`:
```bash
llm -m llama2 "Ten great names for a pet pelican"
```

> Sure, here are ten great names for a pet pelican:
>
> 1. Pelty
> 2. Peanut
> 3. Puddles
> 4. Nibbles
> 5. Fuzzy
> 6. Gizmo
> 7. Hank
> 8. Luna
> 9. Scooter
> 10. Splishy
>
> I hope these suggestions help you find the perfect name for your pet pelican! Do you have any other questions?


Chat models can support continuing conversations, for example:
```bash
llm -c "Five more and make them more nautical"
```
> Ahoy matey! Here be five more nautical-themed names for yer pet pelican:
>
> 1. Captain Hook
> 2. Anchoryn
> 3. Seadog
> 4. Plunder
> 5. Pointe Pelican
>
> I hope these suggestions help ye find the perfect name for yer feathered friend! Do ye have any other questions, matey?

Run `llm models list` to see the full list of models:

```bash
llm models list
```
You should see something like this:
```
Replicate: replicate-flan-t5-xl
Replicate: replicate-llama-7b
Replicate: replicate-gpt-j-6b
Replicate: replicate-dolly-v2-12b
Replicate: replicate-oasst-sft-1-pythia-12b
Replicate: replicate-stability-ai-stablelm-tuned-alpha-7b
Replicate: replicate-vicuna-13b
Replicate: replicate-replit-code-v1-3b
Replicate: replicate-replit-replit-code-v1-3b
Replicate: replicate-joehoover-falcon-40b-instruct (aliases: falcon)
Replicate (chat): replicate-a16z-infra-llama13b-v2-chat (aliases: llama2)
```
Then run a prompt through a specific model like this:
```bash
llm -m replicate-vicuna-13b "Five great names for a pet llama"
```

## Registering extra models

To register additional models that are not included in the default [Language models collection](https://replicate.com/collections/language-models), find their ID on Replicate and use the `llm replicate add` command.

For example, to add the [joehoover/falcon-40b-instruct](https://replicate.com/joehoover/falcon-40b-instruct) model, run this:

```bash
llm replicate add joehoover/falcon-40b-instruct \
  --alias falcon
```
This adds the model with the alias `falcon` - a model can have zero or more aliases.
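
Comparing the examples above, the full model ID that `llm models list` shows appears to be derived from the Replicate `owner/name` string by replacing the slash with a hyphen and prefixing `replicate-`. The helper below captures that pattern; note this is an inference from the listed examples, not the plugin's actual implementation:

```python
def replicate_model_id(owner_and_name: str) -> str:
    """Derive the llm model ID for a Replicate model from its
    "owner/name" identifier, following the naming pattern visible in
    the `llm models list` output (inferred, not the plugin's code)."""
    return "replicate-" + owner_and_name.replace("/", "-")
```

For example, `replicate_model_id("joehoover/falcon-40b-instruct")` yields `replicate-joehoover-falcon-40b-instruct`, matching the ID used with `llm -m` below.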

Now you can run it like this:
```bash
llm -m replicate-joehoover-falcon-40b-instruct \
  "Three reasons to get a pet falcon"
```
Or using the alias like this:
```bash
llm -m falcon "Three reasons to get a pet falcon"
```
You can edit the list of models you have registered using the default `$EDITOR` like this:
```bash
llm replicate edit-models
```
If you register a model using the `--chat` option, that model will be treated slightly differently. Prompts sent to the model will be formatted like this:
```
User: user input here
Assistant:
```
If you use `-c` [conversation mode](https://llm.datasette.io/en/stable/usage.html#continuing-a-conversation) the prompt will include previous messages in the conversation, like this:
```
User: Ten great names for a pet pelican
Assistant: Sure, here are ten great names for a pet pelican:

1. Pelty
2. Peanut
3. Puddles
4. Nibbles
5. Fuzzy
6. Gizmo
7. Hank
8. Luna
9. Scooter
10. Splishy

I hope these suggestions help you find the perfect name for your pet pelican! Do you have any other questions?
User: Five more and make them more nautical
Assistant:
```
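
The transcript format above can be sketched as a small function that folds prior turns into alternating `User:`/`Assistant:` lines, ending with a bare `Assistant:` for the model to complete. This illustrates the format described here, not the plugin's actual prompt-building code; `history` is a hypothetical list of `(user, assistant)` pairs:

```python
def build_chat_prompt(history: list[tuple[str, str]], user_input: str) -> str:
    """Format a chat prompt the way this README describes for --chat
    models: previous turns as "User:"/"Assistant:" pairs, then the new
    input, then a trailing "Assistant:" for the model to continue."""
    lines = []
    for user, assistant in history:
        lines.append(f"User: {user}")
        lines.append(f"Assistant: {assistant}")
    lines.append(f"User: {user_input}")
    lines.append("Assistant:")
    return "\n".join(lines)
```
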

## Fetching all Replicate predictions

Replicate logs all predictions made against models. You can fetch all of these predictions using the `llm replicate fetch-predictions` command:

```bash
llm replicate fetch-predictions
```
This will create or populate a table in your LLM `logs.db` database called `replicate_predictions`.

The data in this table will cover ALL Replicate models, not just language models that have been queried using this tool.

Running `llm replicate fetch-predictions` multiple times will only fetch predictions that have been created since the last time the command was run.
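
A minimal sketch of that incremental behaviour, assuming ISO-8601 `created_at` timestamps (which sort correctly as strings) and a trimmed-down table with only three of the real columns; the actual plugin logic may track its cursor differently:

```python
import json
import sqlite3

def store_predictions(db: sqlite3.Connection, predictions: list[dict]) -> int:
    """Insert only predictions newer than the latest created_at already
    stored, returning how many new rows were added. Simplified schema
    for illustration; the real table has many more columns."""
    db.execute(
        "CREATE TABLE IF NOT EXISTS replicate_predictions "
        "(id TEXT PRIMARY KEY, created_at TEXT, output TEXT)"
    )
    # MAX() on an empty table yields NULL; treat that as "fetch everything".
    cutoff = db.execute(
        "SELECT MAX(created_at) FROM replicate_predictions"
    ).fetchone()[0] or ""
    new = [p for p in predictions if p["created_at"] > cutoff]
    db.executemany(
        "INSERT OR IGNORE INTO replicate_predictions VALUES (?, ?, ?)",
        [(p["id"], p["created_at"], json.dumps(p.get("output"))) for p in new],
    )
    db.commit()
    return len(new)
```
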

To browse the resulting data in [Datasette](https://datasette.io/), run this:
```bash
datasette "$(llm logs path)"
```
The schema for that table will look like this:
```sql
CREATE TABLE [replicate_predictions] (
   [id] TEXT PRIMARY KEY,
   [_model_guess] TEXT,
   [completed_at] TEXT,
   [created_at] TEXT,
   [error] TEXT,
   [input] TEXT,
   [logs] TEXT,
   [metrics] TEXT,
   [output] TEXT,
   [started_at] TEXT,
   [status] TEXT,
   [urls] TEXT,
   [version] TEXT,
   [webhook_completed] TEXT
)
```
This schema may change if the Replicate API adds new fields in the future.
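
Since columns such as `input` and `output` are plain `TEXT`, structured values presumably land there as JSON strings, which means SQLite's `json_extract()` (available when the JSON1 extension is compiled in, as it is in recent builds) can pull fields out when querying outside Datasette. A sketch against a trimmed version of the table:

```python
import json
import sqlite3

# Build a trimmed replicate_predictions table and store a prediction
# whose input/output are serialized as JSON text (an assumption about
# how the real command stores them).
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE replicate_predictions "
    "(id TEXT PRIMARY KEY, input TEXT, output TEXT, status TEXT)"
)
db.execute(
    "INSERT INTO replicate_predictions VALUES (?, ?, ?, ?)",
    ("p1", json.dumps({"prompt": "Ten names"}), json.dumps(["Pelty"]), "succeeded"),
)
# json_extract() reaches into the JSON-encoded input column.
prompt = db.execute(
    "SELECT json_extract(input, '$.prompt') "
    "FROM replicate_predictions WHERE id = 'p1'"
).fetchone()[0]
```
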

## Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:
```bash
cd llm-replicate
python3 -m venv venv
source venv/bin/activate
```
Now install the dependencies and test dependencies:
```bash
pip install -e '.[test]'
```
To run the tests:
```bash
pytest
```
