llmsh 0.2

- Summary: Command-line tool to use Large Language Models in your shell, including OpenAI, Anthropic, PaLM, Mistral, Cohere, and more.
- Home page: https://github.com/vduseev/llmsh
- Author and maintainer: Vagiz Duseev
- Requires Python: >=3.9, <4.0
- License: Apache-2.0
- Keywords: cli, llm, openai, ai, shell, console, command-line
- Uploaded: 2024-04-08 01:18:39
# LLMsh

[![PyPI](https://img.shields.io/pypi/v/llmsh.svg)](https://pypi.org/project/llmsh/)
[![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/vduseev/llmsh/blob/main/LICENSE)

Command-line tool to use Large Language Models in your shell.
Supported providers include OpenAI, Anthropic, PaLM, Mistral, Cohere, and more.

**Perfect for use in scripts, automation, or as an interactive chat in your
terminal.**

*This is an alpha version and a work in progress.*

## Installation

### PyPI installation

```shell
pip install llmsh
```

### Or use `install.sh` to install from source

#### Clone the repository to your home directory

*Note: It doesn't have to be this directory, but it is the most convenient choice.*

```shell
git clone https://github.com/vduseev/llmsh ~/.llmsh
```

#### (optional) If you are using pyenv

*Temporarily activate the preferred Python version wherever you are.*

```shell
pyenv shell 3.11.7
```

#### Run the installation script

*This will create a virtual environment in `~/.llmsh/.venv`,
install the package, and create a symlink at `~/.local/bin/llmsh`.*

```shell
~/.llmsh/install.sh
```
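
If the `llmsh` command is not found afterwards, the usual cause is that `~/.local/bin` is missing from your `PATH`. A minimal sketch, assuming bash or zsh:

```shell
# The installer symlinks llmsh into ~/.local/bin; make sure that directory is on PATH.
export PATH="$HOME/.local/bin:$PATH"
ls -l ~/.local/bin/llmsh  # should point into ~/.llmsh
```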

## Usage

### Configure API keys

```shell
# If you are using OpenAI
export OPENAI_API_KEY="your-api-key"
```
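
To keep the key available across sessions, you can append the export to your shell profile. A sketch assuming bash (use `~/.zshrc` for zsh):

```shell
# Persist the key in your bash profile; replace with your real key.
echo 'export OPENAI_API_KEY="your-api-key"' >> ~/.bashrc
source ~/.bashrc
```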

### Prompt mode

```shell
$ llmsh "Translate to Polish: What a good day"
Jaki dobry dzień
```

#### Pipe output to LLM

```shell
$ echo "Translate to Polish: What a good day" | llmsh
Jaki dobry dzień
```

#### Combine pipe and prompt

```shell
$ echo "Translate to Polish:" | llmsh "What a good day"
Jaki dobry dzień
```
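
The same composition works with any command's output; a small illustrative sketch (the piped command is arbitrary):

```shell
# Pipe a command's output and put the instruction in the prompt.
df -h | llmsh "Which filesystem is closest to full?"
```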

#### Use a file as a prompt

```shell
$ echo "Who is Dora?" > prompt.txt
$ llmsh "@prompt.txt"
Dora is the main character from the animated television series "Dora the Explorer", produced by Nickelodeon. Dora is a young Latina girl who embarks  
on numerous adventures in an imaginative world with her backpack and her talking monkey companion named Boots. 
```
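
Prompt files are handy for prompts assembled from several pieces. A sketch (`config.yaml` is an illustrative file name):

```shell
# Build a prompt file from a fixed instruction plus a file's contents.
{ echo "Review the following config:"; cat config.yaml; } > prompt.txt
llmsh "@prompt.txt"
```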

#### Specify a system prompt

The system prompt is always present in the conversation: it is prepended
to the messages before they are sent to the model.

```shell
$ llmsh "@prompt.txt" -b "You are Dora the Explorer. Help me learn Spanish"
¡Hola! I'm Dora. I help kids to learn Spanish through fun       
adventures. I explore various environments with my talking backpack and monkey friend, Boots. Do you want to learn some Spanish words with me today? 
```

*The system prompt can also be read from a file: `-b @system.txt`.*
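
A file-based system prompt makes a persona reusable across calls. A sketch (`system.txt` is an illustrative name):

```shell
# Store the persona once, then reference it with @.
echo "You are Dora the Explorer. Help me learn Spanish" > system.txt
llmsh "Who is Dora?" -b "@system.txt"
```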

### Interactive chat mode

```shell
$ llmsh -i
> What is the time difference between New York and Gdansk?
New York is typically 6 hours behind Gdansk. However, due to daylight saving
changes, this can occasionally vary.

> It is April. 
In April, Daylight Saving Time is active in both locations. The time
difference remains the same. New York is 6 hours behind Gdansk.

>
# Press Ctrl+D (Ctrl+Z on Windows), or type "exit" or "quit" to end the chat.
```

*You can also use a file as the initial prompt: `-i @prompt.txt`.*
*A system prompt works too: `-i -b "You are a helpful assistant."`, and it can likewise be read from a file.*

*Piping is not supported in interactive mode.*
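
For unattended use, stick to prompt mode. A minimal scripting sketch, assuming an API key is exported (`app.log` is illustrative):

```shell
#!/usr/bin/env bash
# Capture the model's answer in a variable for later use in the script.
summary=$(cat app.log | llmsh -b "Summarize this log in two sentences")
echo "$summary"
```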

## Configuration

### API keys

```shell
# If you are using OpenAI
export OPENAI_API_KEY="your-api-key"

# If you are using Anthropic
export ANTHROPIC_API_KEY="your-api-key"

# If you are using PaLM
export PALM_API_KEY="your-api-key"

# If you are using Mistral
export MISTRAL_API_KEY="your-api-key"
```

*See [full list of supported models](https://docs.litellm.ai/docs/providers).*
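
Each provider reads its own key, so pair the exported key with a matching `-m` model name. A sketch using Mistral:

```shell
# The key and the model must belong to the same provider.
export MISTRAL_API_KEY="your-api-key"
llmsh "Say hello in French" -m "mistral/mistral-medium"
```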

### Parameters

- `prompt` The prompt to use.
  
  *Positional argument.*

  Interpreted as a file path if it starts with `@`.
  
  *Examples:*
  - Give a prompt directly:
  
    ```shell
    llmsh "Hello"
    ```

  - Read a prompt from a file:
  
    ```shell
    llmsh "@prompt.txt"
    ```

  - Ask the model to explain a file:

    ```shell
    cat code.py | llmsh "Explain what this code does"
    ```
  
  *As environment variable:*
  - Linux/macOS: `export LLMSH_PROMPT="Hello"`
  - Windows (cmd): `set LLMSH_PROMPT="Hello"`
  - Windows (PowerShell): `$env:LLMSH_PROMPT="Hello"`

- `--before` The system prompt to prepend to the conversation.
  
  *Shorthand: `-b`*
  
  Interpreted as a file path when it starts with `@`.
  
  *Examples:*
  - Pipe a question to the LLM with a system prompt:
  
    ```shell
    echo "Where is John Connor?" | llmsh -b "You are Terminator"
    ```

  - Use a file as a system prompt:
  
    ```shell
    llmsh -b "@terminator.txt"
    ```
  
  *As environment variable:*
  - Linux/macOS: `export LLMSH_BEFORE_PROMPT="You are Terminator"`
  - Windows (cmd): `set LLMSH_BEFORE_PROMPT="@C:\LLM\terminator.txt"`
  - Windows (PowerShell): `$env:LLMSH_BEFORE_PROMPT="You are Terminator"`

- `--after` The system prompt to append as the last message.
  
  *Shorthand: `-a`*
  
  Interpreted as a file path when it starts with `@`.
  
  *Examples:*
  - Ask the LLM to write a poem with a system prompt:
  
    ```shell
    llmsh "Write a poem" -a "Use asterisks to emphasize the words"
    ```

  - Use a file as a system prompt:
  
    ```shell
    llmsh "Write a poem" -a "@poet.txt"
    ```
  
  *As environment variable:*
  - Linux/macOS: `export LLMSH_AFTER_PROMPT="You are a poet"`
  - Windows (cmd): `set LLMSH_AFTER_PROMPT="You are a poet"`
  - Windows (PowerShell): `$env:LLMSH_AFTER_PROMPT="You are a poet"`

- `--model` The name of the model to use.

  *Shorthand: `-m`*

  **Don't forget to configure the appropriate API key for the
  chosen model.**

  *Examples:*
  - Ask GPT-3.5 Turbo to explain what the moon is:

    ```shell
    llmsh "What is moon?" -m "gpt-3.5-turbo"
    ```

  - Ask Mistral Medium to write a poem:
  
    ```shell
    llmsh "Write a poem" -m "mistral/mistral-medium"
    ```

  - Pass a prompt from a file to a Claude 3 model:
  
    ```shell
    llmsh "@prompt.json" -m "claude-3"
    ```

  *As environment variable:*
  - Linux/macOS: `export LLMSH_MODEL="gpt-3.5-turbo"`
  - Windows (cmd): `set LLMSH_MODEL="gpt-3.5-turbo"`
  - Windows (PowerShell): `$env:LLMSH_MODEL="gpt-3.5-turbo"`

- `--interactive` Enable interactive **chat** mode.

  *Shorthand: `-i`*

  *Examples:*
  - Start an interactive chat:
  
    ```shell
    llmsh -i
    ```

  - Start an interactive chat with a system prompt:
  
    ```shell
    llmsh -i -b "You are a helpful assistant"
    ```

  - Start an interactive role-play chat with the Mistral Medium model:
  
    ```shell
    llmsh -i -m "mistral/mistral-medium" -b "You are a poet and I am a critic"
    ```

  *As environment variable:*
  - Linux/macOS: `export LLMSH_INTERACTIVE="true"`
  - Windows (cmd): `set LLMSH_INTERACTIVE="true"`
  - Windows (PowerShell): `$env:LLMSH_INTERACTIVE="true"`

- `--limit` The maximum number of chat messages to use as context.

  *Shorthand: `-l`*

  Only works in interactive chat mode. When set, only the last N
  messages plus the system prompt are used to form the context of the
  request to the LLM.

  *Examples:*
  - Start an interactive chat with a limit of 10 messages:

    *Only the last 10 messages plus the system prompt will be used 
    as context.*
  
    ```shell
    llmsh -i -l 10
    ```
  
  *As environment variable:*
  - Linux/macOS: `export LLMSH_LIMIT="10"`
  - Windows (cmd): `set LLMSH_LIMIT="10"`
  - Windows (PowerShell): `$env:LLMSH_LIMIT="10"`

- `--max-tokens` The maximum number of tokens to generate.

  *Shorthand: `-t`*

  Caps the length of the response: generation stops once the limit is reached. The default is unlimited.

  *Examples:*
  - `llmsh -t 100`

  *As environment variable:*
  - Linux/macOS: `export LLMSH_MAX_TOKENS="100"`
  - Windows (cmd): `set LLMSH_MAX_TOKENS="100"`
  - Windows (PowerShell): `$env:LLMSH_MAX_TOKENS="100"`

- `--no-stream` Disable streaming mode.

  By default, the response is streamed. This option disables that.

  Streaming is useful when you want to see the response as soon as it is
  available, and it keeps working even if you redirect the output
  somewhere else.

  *Examples:*
  - `llmsh --no-stream`
  - `llmsh -i --no-stream`

  *As environment variable:*
  - Linux/macOS: `export LLMSH_NO_STREAM="true"`
  - Windows (cmd): `set LLMSH_NO_STREAM="true"`
  - Windows (PowerShell): `$env:LLMSH_NO_STREAM="true"`
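
Since every parameter above has an environment variable, you can pre-set defaults for a whole session. A sketch; that an explicit flag overrides the matching variable is an assumption (conventional CLI behavior), so verify it with your version:

```shell
# Session-wide defaults via environment variables.
export LLMSH_MODEL="gpt-3.5-turbo"
export LLMSH_MAX_TOKENS="200"
llmsh "Hello"                              # picks up both defaults
llmsh "Hello" -m "mistral/mistral-medium"  # assumed to override LLMSH_MODEL
```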

## Roadmap

https://github.com/vduseev/llmsh/labels/feature

## License

Copyright 2024 Vagiz Duseev

Apache 2.0 License.

            
