| Field | Value |
| --- | --- |
| Name | llm-term |
| Version | 0.14.0 |
| home_page | None |
| Summary | A simple CLI to chat with LLM Models |
| upload_time | 2024-07-18 21:58:06 |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | <4,>=3.8 |
| license | None |
| keywords | chatgpt, cli, openai, python, rich |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# llm-term
Chat with LLM models directly from the command line.
<p align="center">
<img width="600" alt="image" src="https://i.imgur.com/453xL6I.png">
</p>
[![PyPI](https://img.shields.io/pypi/v/llm-term?color=blue&label=🤖%20llm-term)](https://github.com/juftin/llm-term)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/llm-term)](https://pypi.python.org/pypi/llm-term/)
[![GitHub License](https://img.shields.io/github/license/juftin/llm-term?color=blue&label=License)](https://github.com/juftin/llm-term/blob/main/LICENSE)
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-lightgreen?logo=pre-commit)](https://github.com/pre-commit/pre-commit)
[![semantic-release](https://img.shields.io/badge/%20%20%F0%9F%93%A6%F0%9F%9A%80-semantic--release-e10079.svg)](https://github.com/semantic-release/semantic-release)
[![Gitmoji](https://img.shields.io/badge/gitmoji-%20😜%20😍-FFDD67.svg)](https://gitmoji.dev)
<details>
<summary>Screen Recording</summary>
https://github.com/juftin/llm-term/assets/49741340/c305f636-dfcf-4d6f-884f-81d378cf0684
</details>
<h2><a href="https://juftin.com/llm-term">Check Out the Docs</a></h2>
## Installation
```bash
pipx install llm-term
```
### Install with Extras
You can install llm-term with extra dependencies for different providers:
```bash
pipx install "llm-term[anthropic]"
```
```bash
pipx install "llm-term[mistralai]"
```
Or, you can install all the extras:
```bash
pipx install "llm-term[all]"
```
## Usage
Then, you can chat with the model directly from the command line:
```shell
llm-term
```
`llm-term` works with multiple LLM providers, but by default it uses OpenAI.
Most providers require extra packages to be installed, so make sure you
read the [Providers](#providers) section below. To use a different provider, you
can set the `--provider` / `-p` flag:
```shell
llm-term --provider anthropic
```
If needed, make sure you have your LLM's API key set as an environment variable
(this can also be set via the `--api-key` / `-k` flag in the CLI). If your LLM uses
a particular environment variable for its API key, such as `OPENAI_API_KEY`,
that will be detected automatically.
```shell
export LLM_API_KEY="xxxxxxxxxxxxxx"
```
Optionally, you can set a custom model. llm-term defaults
to `gpt-4o` (this can also be set via the `--model` / `-m` flag in the CLI):
```shell
export LLM_MODEL="gpt-4o-mini"
```
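The environment variables above have flag equivalents; as a sketch, combining the documented `--model` / `-m` and `--api-key` / `-k` flags (the model name and key here are just placeholders):
```shell
# Same effect as the exports above, scoped to a single invocation
llm-term --model gpt-4o-mini --api-key "xxxxxxxxxxxxxx"
```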
Want to start the conversation directly from the command line? No problem,
just pass your prompt to `llm-term`:
```shell
llm-term show me python code to detect a palindrome
```
You can also set a custom system prompt. llm-term defaults to a reasonable
prompt for chatting with the model, but you can set your own prompt (this
can also be set via the `--system` / `-s` flag in the CLI):
```shell
export LLM_SYSTEM_MESSAGE="You are a helpful assistant who talks like a pirate."
```
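The same prompt can also be passed for a single session via the `--system` / `-s` flag described above, for example (a sketch reusing the pirate prompt):
```shell
# One-off system prompt, without touching the environment
llm-term -s "You are a helpful assistant who talks like a pirate."
```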
## Providers
### OpenAI
By default, llm-term uses OpenAI as your LLM provider. The default model is
`gpt-4o`, and you can use the `OPENAI_API_KEY` environment variable
to set your API key.
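Putting that together, a minimal OpenAI session might look like this (a sketch; the key value is a placeholder):
```shell
# OpenAI is the default provider, so no --provider flag is required
export OPENAI_API_KEY="sk-xxxxxxxxxxxxxx"
llm-term
```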
### Anthropic
You can request access to Anthropic [here](https://www.anthropic.com/). The
default model is `claude-3-5-sonnet-20240620`, and you can use the `ANTHROPIC_API_KEY` environment
variable. To use `anthropic` as your provider you must install the `anthropic`
extra.
```shell
pipx install "llm-term[anthropic]"
```
```shell
llm-term --provider anthropic
```
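A typical Anthropic setup, then, might look like the following (the key value is a placeholder):
```shell
export ANTHROPIC_API_KEY="xxxxxxxxxxxxxx"
llm-term --provider anthropic
```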
### MistralAI
You can request access to [MistralAI](https://mistral.ai/)
[here](https://console.mistral.ai/). The default model is
`mistral-small-latest`, and you can use the `MISTRAL_API_KEY` environment
variable. To use `mistralai` as your provider, install the `mistralai` extra.
```shell
pipx install "llm-term[mistralai]"
```
```shell
llm-term --provider mistralai
```
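Likewise, a typical MistralAI setup might look like this (the key value is a placeholder):
```shell
export MISTRAL_API_KEY="xxxxxxxxxxxxxx"
llm-term --provider mistralai
```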
### Ollama
Ollama is an open-source LLM provider. These models run locally on your
machine, so you don't need to worry about API keys or rate limits. The default
model is `llama3`, and you can see what models are available on the [Ollama
Website](https://ollama.com/library). Make sure to
[download Ollama](https://ollama.com/download) first.
```shell
ollama pull llama3
```
```shell
llm-term --provider ollama --model llama3
```
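Any other model from the Ollama library can be swapped in the same way; for example, with `mistral` (one of the models listed in the library):
```shell
# Pull another model locally, then point llm-term at it
ollama pull mistral
llm-term --provider ollama --model mistral
```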
Raw data
```json
{
"_id": null,
"home_page": null,
"name": "llm-term",
"maintainer": null,
"docs_url": null,
"requires_python": "<4,>=3.8",
"maintainer_email": null,
"keywords": "chatgpt, cli, openai, python, rich",
"author": null,
"author_email": "Justin Flannery <juftin@juftin.com>",
"download_url": "https://files.pythonhosted.org/packages/32/77/31278afc8d4744d07c2b1377d539fb874b542a9782b50d792f5dba59d50e/llm_term-0.14.0.tar.gz",
"platform": null,
"description": "# llm-term\n\nChat with LLM models directly from the command line.\n\n<p align=\"center\">\n<img width=\"600\" alt=\"image\" src=\"https://i.imgur.com/453xL6I.png\">\n</p>\n\n[![PyPI](https://img.shields.io/pypi/v/llm-term?color=blue&label=\ud83e\udd16%20llm-term)](https://github.com/juftin/llm-term)\n[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/llm-term)](https://pypi.python.org/pypi/llm-term/)\n[![GitHub License](https://img.shields.io/github/license/juftin/llm-term?color=blue&label=License)](https://github.com/juftin/llm-term/blob/main/LICENSE)\n[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)\n[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-lightgreen?logo=pre-commit)](https://github.com/pre-commit/pre-commit)\n[![semantic-release](https://img.shields.io/badge/%20%20%F0%9F%93%A6%F0%9F%9A%80-semantic--release-e10079.svg)](https://github.com/semantic-release/semantic-release)\n[![Gitmoji](https://img.shields.io/badge/gitmoji-%20\ud83d\ude1c%20\ud83d\ude0d-FFDD67.svg)](https://gitmoji.dev)\n\n<details>\n<summary>Screen Recording</summary>\n\nhttps://github.com/juftin/llm-term/assets/49741340/c305f636-dfcf-4d6f-884f-81d378cf0684\n\n</details>\n\n<h2><a href=\"https://juftin.com/llm-term\">Check Out the Docs</a></h2>\n\n## Installation\n\n```bash\npipx install llm-term\n```\n\n### Install with Extras\n\nYou can install llm-term with extra dependencies for different providers:\n\n```bash\npipx install \"llm-term[anthropic]\"\n```\n\n```bash\npipx install \"llm-term[mistralai]\"\n```\n\nOr, you can install all the extras:\n\n```bash\npipx install \"llm-term[all]\"\n```\n\n## Usage\n\nThen, you can chat with the model directly from the command line:\n\n```shell\nllm-term\n```\n\n`llm-term` works with multiple LLM providers, but by default it uses OpenAI.\nMost providers require extra packages to be installed, so make sure you\nread the [Providers](#providers) section below. To use a different provider, you\ncan set the `--provider` / `-p` flag:\n\n```shell\nllm-term --provider anthropic\n```\n\nIf needed, make sure you have your LLM's API key set as an environment variable\n(this can also set via the `--api-key` / `-k` flag in the CLI). If your LLM uses\na particular environment variable for its API key, such as `OPENAI_API_KEY`,\nthat will be detected automatically.\n\n```shell\nexport LLM_API_KEY=\"xxxxxxxxxxxxxx\"\n```\n\nOptionally, you can set a custom model. llm-term defaults\nto `gpt-4o` (this can also set via the `--model` / `-m` flag in the CLI):\n\n```shell\nexport LLM_MODEL=\"gpt-4o-mini\"\n```\n\nWant to start the conversion directly from the command line? No problem,\njust pass your prompt to `llm-term`:\n\n```shell\nllm-term show me python code to detect a palindrome\n```\n\nYou can also set a custom system prompt. llm-term defaults to a reasonable\nprompt for chatting with the model, but you can set your own prompt (this\ncan also set via the `--system` / `-s` flag in the CLI):\n\n```shell\nexport LLM_SYSTEM_MESSAGE=\"You are a helpful assistant who talks like a pirate.\"\n```\n\n## Providers\n\n### OpenAI\n\nBy default, llm-term uses OpenAI as your LLM provider. The default model is\n`gpt-4o` and you can also use the `OPENAI_API_KEY` environment variable\nto set your API key.\n\n### Anthropic\n\nYou can request access to Anthropic [here](https://www.anthropic.com/). 
The\ndefault model is `claude-3-5-sonnet-20240620`, and you can use the `ANTHROPIC_API_KEY` environment\nvariable. To use `anthropic` as your provider you must install the `anthropic`\nextra.\n\n```shell\npipx install \"llm-term[anthropic]\"\n```\n\n```shell\nllm-term --provider anthropic\n```\n\n### MistralAI\n\nYou can request access to the [MistralAI](https://mistral.ai/)\n[here](https://console.mistral.ai/). The default model is\n`mistral-small-latest`, and you can use the `MISTRAL_API_KEY` environment variable.\n\n```shell\npipx install \"llm-term[mistralai]\"\n```\n\n```shell\nllm-term --provider mistralai\n```\n\n### Ollama\n\nOllama is a an open source LLM provider. These models run locally on your\nmachine, so you don't need to worry about API keys or rate limits. The default\nmodel is `llama3`, and you can see what models are available on the [Ollama\nWebsite](https://ollama.com/library). Make sure to\n[download Ollama](https://ollama.com/download) first.\n\n```shell\nollama pull llama3\n```\n\n```shell\nllm-term --provider ollama --model llama3\n```\n",
"bugtrack_url": null,
"license": null,
"summary": "A simple CLI to chat with LLM Models",
"version": "0.14.0",
"project_urls": {
"Changelog": "https://github.com/juftin/llm-term/releases",
"Documentation": "https://juftin.github.io/llm-term",
"Issues": "https://juftin.github.io/llm-term/issues",
"Source": "https://github.com/juftin/llm-term"
},
"split_keywords": [
"chatgpt",
" cli",
" openai",
" python",
" rich"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "456d43715bea59894a2d193c4f03a01fef294551f0b8c741f5d4f9eca62612b9",
"md5": "d05ef528ae57c03cecd5ee09d36f5cbe",
"sha256": "cfad27a25fba9d782a4542f6e15a27ccc1461e5af87c0ac3f0a93fb75dd3cb3b"
},
"downloads": -1,
"filename": "llm_term-0.14.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "d05ef528ae57c03cecd5ee09d36f5cbe",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<4,>=3.8",
"size": 9106,
"upload_time": "2024-07-18T21:58:05",
"upload_time_iso_8601": "2024-07-18T21:58:05.229161Z",
"url": "https://files.pythonhosted.org/packages/45/6d/43715bea59894a2d193c4f03a01fef294551f0b8c741f5d4f9eca62612b9/llm_term-0.14.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "327731278afc8d4744d07c2b1377d539fb874b542a9782b50d792f5dba59d50e",
"md5": "ade6a9a0e6ca1d4cee2f0fb025933738",
"sha256": "50cf719a566f136674532995e8d9ef4db9ed314ada29383765e394809a3eaa2c"
},
"downloads": -1,
"filename": "llm_term-0.14.0.tar.gz",
"has_sig": false,
"md5_digest": "ade6a9a0e6ca1d4cee2f0fb025933738",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "<4,>=3.8",
"size": 153048,
"upload_time": "2024-07-18T21:58:06",
"upload_time_iso_8601": "2024-07-18T21:58:06.602610Z",
"url": "https://files.pythonhosted.org/packages/32/77/31278afc8d4744d07c2b1377d539fb874b542a9782b50d792f5dba59d50e/llm_term-0.14.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-07-18 21:58:06",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "juftin",
"github_project": "llm-term",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"requirements": [],
"lcname": "llm-term"
}
```