Name: llm-term
Version: 0.13.0
Summary: A simple CLI to chat with OpenAI GPT Models
Upload time: 2024-03-28 13:41:12
Requires Python: >=3.8, <4
Keywords: chatgpt, cli, openai, python, rich

# llm-term

Chat with LLM models directly from the command line.

<p align="center">
<img width="600" alt="image" src="https://i.imgur.com/1BUegLB.png">
</p>

[![PyPI](https://img.shields.io/pypi/v/llm-term?color=blue&label=🤖%20llm-term)](https://github.com/juftin/llm-term)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/llm-term)](https://pypi.python.org/pypi/llm-term/)
[![GitHub License](https://img.shields.io/github/license/juftin/llm-term?color=blue&label=License)](https://github.com/juftin/llm-term/blob/main/LICENSE)
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-lightgreen?logo=pre-commit)](https://github.com/pre-commit/pre-commit)
[![semantic-release](https://img.shields.io/badge/%20%20%F0%9F%93%A6%F0%9F%9A%80-semantic--release-e10079.svg)](https://github.com/semantic-release/semantic-release)
[![Gitmoji](https://img.shields.io/badge/gitmoji-%20😜%20😍-FFDD67.svg)](https://gitmoji.dev)

<details>
<summary>Screen Recording</summary>

https://user-images.githubusercontent.com/49741340/270871763-d872650e-bceb-4da3-8bc6-3e079d55e5a3.mov

</details>

<h2><a href="https://juftin.com/llm-term">Check Out the Docs</a></h2>

## Installation

```bash
pipx install llm-term
```

## Usage

Once installed, you can chat with the model directly from the command line:

```shell
llm-term
```

`llm-term` works with multiple LLM providers, but by default it uses OpenAI.
Most providers require extra packages to be installed, so make sure you
read the [Providers](#providers) section below. To use a different provider, you
can set the `--provider` / `-p` flag:

```shell
llm-term --provider anthropic
```

If needed, make sure you have your LLM's API key set as an environment variable
(this can also be set via the `--api-key` / `-k` flag in the CLI). If your LLM uses
a particular environment variable for its API key, such as `OPENAI_API_KEY`,
that will be detected automatically.

```shell
export LLM_API_KEY="xxxxxxxxxxxxxx"
```
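
The same key can also be passed for a single invocation with the `--api-key` / `-k`
flag (placeholder value shown):

```shell
llm-term --api-key "xxxxxxxxxxxxxx"
```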

Optionally, you can set a custom model. llm-term defaults
to `gpt-3.5-turbo` (this can also be set via the `--model` / `-m` flag in the CLI):

```shell
export LLM_MODEL="gpt-4"
```
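
Or, equivalently, for a single session:

```shell
llm-term --model gpt-4
```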

Want to start the conversation directly from the command line? No problem,
just pass your prompt to `llm-term`:

```shell
llm-term show me python code to detect a palindrome
```

You can also set a custom system prompt. llm-term defaults to a reasonable
prompt for chatting with the model, but you can set your own (this
can also be set via the `--system` / `-s` flag in the CLI):

```shell
export LLM_SYSTEM_MESSAGE="You are a helpful assistant who talks like a pirate."
```
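
Or, for a single session:

```shell
llm-term --system "You are a helpful assistant who talks like a pirate."
```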

## Providers

### OpenAI

By default, llm-term uses OpenAI as your LLM provider. The default model is
`gpt-3.5-turbo`, and you can also use the `OPENAI_API_KEY` environment variable
to set your API key.
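
For example (placeholder key shown):

```shell
export OPENAI_API_KEY="xxxxxxxxxxxxxx"
```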

### Anthropic

You can request access to Anthropic [here](https://www.anthropic.com/). The
default model is `claude-2.1`, and you can use the `ANTHROPIC_API_KEY` environment
variable. To use `anthropic` as your provider you must install the `anthropic`
extra.

```shell
pipx install "llm-term[anthropic]"
```

```shell
llm-term --provider anthropic
```

### MistralAI

You can request access to [MistralAI](https://mistral.ai/)
[here](https://console.mistral.ai/). The default model is
`mistral-small`, and you can use the `MISTRAL_API_KEY` environment variable.
To use `mistralai` as your provider you must install the `mistralai` extra.

```shell
pipx install "llm-term[mistralai]"
```

```shell
llm-term --provider mistralai
```

### GPT4All

GPT4All is an open-source LLM provider. These models run locally on your
machine, so you don't need to worry about API keys or rate limits. The default
model is `mistral-7b-openorca.Q4_0.gguf`, and you can see what models are available on the [GPT4All
Website](https://gpt4all.io/index.html). Models are downloaded automatically when you first use them.
To use GPT4All as your provider you must install the `gpt4all` extra.

```bash
pipx install "llm-term[gpt4all]"
```

```shell
llm-term --provider gpt4all --model mistral-7b-openorca.Q4_0.gguf
```
