llm-shell

Name: llm-shell
Version: 0.5.0
Home page: https://github.com/mirror12k/llm-shell
Summary: A Language Model Enhanced Command Line Interface
Upload time: 2024-10-12 20:26:31
Maintainer: None
Docs URL: None
Author: Mirror12k
Requires Python: None
License: MIT
Keywords: shell with integrated access to chatgpt or other llms
Requirements: No requirements were recorded.
Travis-CI: No Travis.
Coveralls test coverage: No coveralls.

# LLM-Shell: Language Model Enhanced Command Line Interface

## Overview
LLM-Shell is a command-line interface (CLI) tool that enhances your shell experience with the power of large language models (LLMs) such as GPT-4 and GPT-3.5 Turbo. It wraps your standard shell, so you can run regular shell commands as usual while also consulting an LLM for programming assistance, code examples, and natural-language command execution.

## Features
- Execute standard shell commands with real-time output.
- Use language models to process commands described in natural language.
- Syntax highlighting for code blocks returned by the language model.
- Set one or multiple context/summary files to provide additional information to the LLM.
- Change the underlying LLM backend (e.g., GPT-4 Turbo, GPT-4, GPT-3.5 Turbo).
- Set or update the instruction for the LLM to change how it assists you.
- Autocompletion for custom commands and file paths.
- History tracking of commands and LLM responses.

## Prerequisites
- Python 3
- `requests` library for making HTTP requests to the LLM API.
- `pygments` library for syntax highlighting.
- An API key from OpenAI for accessing their language models.
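
If you are setting things up by hand rather than relying on `pip install llm-shell` to pull in dependencies, the prerequisites above can be installed directly. A minimal sketch:

```sh
# Confirm Python 3 is available, then install the two required libraries.
python3 --version
pip install requests pygments
```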

## Installation

There are two ways to install LLM-Shell:

### Using pip (Recommended)

1. Ensure you have Python 3 installed on your system.
2. Install the `llm-shell` package from PyPI:
   ```sh
   pip install llm-shell
   ```
3. Set your OpenAI API key as the environment variable `CHATGPT_API_KEY`, or place it in a `.env` file that the script can read (see the example below).
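
For example, either of the following works; `sk-...` is a placeholder for your real key, and the `.env` line assumes a simple `KEY=value` file in the directory where you run the shell:

```sh
# Option 1: export the key for the current shell session.
export CHATGPT_API_KEY="sk-..."

# Option 2: store it in a .env file (assumed KEY=value format) for the script to read.
echo 'CHATGPT_API_KEY=sk-...' > .env
```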

### Using Git Clone (For Developers)

1. Clone the repository to your local machine.
2. Navigate to the cloned directory.
3. Install the required Python packages:
   ```sh
   pip install -r requirements.txt
   ```
4. Make sure the script is executable:
   ```sh
   chmod +x llm-shell.py
   ```
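
Putting the developer steps together, a typical session looks like this (the repository URL is the project's home page listed above):

```sh
# Clone the repository and enter it.
git clone https://github.com/mirror12k/llm-shell.git
cd llm-shell

# Install the Python dependencies and make the main script executable.
pip install -r requirements.txt
chmod +x llm-shell.py
```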

## Usage

### If Installed Through pip

To start the LLM-Shell, run the following command:

```sh
llm-shell
```
### If Installed By Git Cloning the Repository

To start the LLM-Shell, run the `llm_shell` script located in the `bin` directory of the repository:

```sh
./bin/llm_shell
```
### Executing Commands

- Standard shell commands are executed as normal, e.g., `ls -la`.
- To use the LLM, prefix your command with a hash `#`, followed by the natural language instruction, e.g., `# How do I list all files in the current directory?`.
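
An illustrative session inside the LLM-Shell (the prompt text and the LLM's reply will vary) might look like:

```sh
# Regular commands run through the shell as usual:
ls -la

# Prefixing a line with '#' sends it to the LLM instead:
# How do I list all files in the current directory?
# ...the LLM's answer is printed, with any code blocks syntax-highlighted.
```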

### Special Commands

- `help` - Displays a list of available custom commands within the LLM-Shell.
- `llm-backend [backend]` - Changes the LLM backend. Replace `[backend]` with one of the supported backends (e.g., `gpt-4-turbo`, `gpt-4`, `gpt-3.5-turbo`, `claude-instant-v1`, `claude-v2.1`).
- `llm-instruction [instruction]` - Sets or updates the instruction for the LLM. Use this command to change how the LLM assists you.
- `llm-reindent-with-tabs [true/false]` - Enables or disables automatic re-indentation with tabs, for cases where the LLM doesn't detect the indentation style correctly.
- `llm-chatgpt-apikey [apikey]` - Sets the API key for OpenAI's models.
- `context [filename1] [filename2] ...` - Sets one or multiple context files that will be used to provide additional information to the LLM. Use `context none` to clear the context files.
- `summary [filename1] [filename2] ...` - Sets one or multiple summary files. Similar to `context`, but it will summarize the file before sending it to the LLM. Useful if you just want to send an outline of a class instead of the entire code.
- `exit` - Exits LLM-Shell.
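
For instance, a short configuration sequence inside the shell might look like the following; the file names are hypothetical, and whether the instruction text needs quoting depends on how the shell parses arguments:

```sh
# Switch backends and give the assistant a standing instruction.
llm-backend gpt-3.5-turbo
llm-instruction You are a helpful Python programming assistant.

# Attach a (hypothetical) source file as full context and another as a summary.
context src/main.py
summary src/utils.py

# Clear the context files again when done.
context none
```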

### Autocompletion

- The LLM-Shell supports autocompletion for file paths and custom commands. Press `Tab` to autocomplete the current input.

## Customization

Modify the `llm-shell.py` script to add new features or change existing behavior to better suit your needs.

## License

LLM-Shell is released under the MIT License. See the LICENSE file for more information.

## Disclaimer

LLM-Shell is not an official product and is not affiliated with OpenAI.
It's just an open-source tool developed to showcase the integration of LLMs into a command-line environment.
Use it at your own risk.

            
