llm-code

Name: llm-code
Version: 0.8.0
Home page: https://github.com/radoshi/llm-code
Summary: An OpenAI LLM based CLI coding assistant.
Author: Rushabh Doshi
Requires Python: <4.0,>=3.11
License: MIT
Keywords: openai, llm, cli, coding, assistant
Uploaded: 2024-05-20 04:59:51
# llm-code

![PyPi](https://img.shields.io/pypi/v/llm-code?color=green)
[![Coverage Status](https://coveralls.io/repos/github/radoshi/llm-code/badge.svg?branch=main)](https://coveralls.io/github/radoshi/llm-code?branch=main)

---

An OpenAI LLM based CLI coding assistant.

`llm-code` is inspired by
[Simon Willison](https://simonwillison.net/2023/May/18/cli-tools-for-llms/)'s
[llm](https://github.com/simonw/llm) package. It takes a similar approach of
building a simple LLM-based assistant that helps you write code.

## Installation

```bash
pipx install llm-code
```
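
Since `llm-code` is published on PyPI, a plain `pip` install (ideally inside a virtual environment) should work as well:

```bash
pip install llm-code
```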

## Configuration

`llm-code` requires an OpenAI API key. You can get one from [OpenAI](https://openai.com/).

You can set the key in a few different ways, depending on your preference:

1. Set the `OPENAI_API_KEY` environment variable:

```bash
export OPENAI_API_KEY=sk-...
```

2. Use an env file at `~/.llm_code/env`:

```bash
mkdir -p ~/.llm_code
echo "OPENAI_API_KEY=sk-..." > ~/.llm_code/env
```

## Usage

`llm-code` is meant to be simple to use. The default prompts should be good enough. There are two broad modes:

1. Generate some code from scratch.

```bash
llm-code "write a function that takes a list of numbers and returns the sum of the numbers in python. Add type hints."
```

2. Pass in some input files and ask for changes.

```bash
llm-code -i my_file.py "add docstrings to all python functions."
```

For the full list of options:

```bash
llm-code --help
```

```
Usage: llm-code [OPTIONS] [INSTRUCTIONS]...

  Coding assistant using OpenAI's chat models.

  Requires OPENAI_API_KEY as an environment variable. Alternately, you can set
  it in ~/.llm_code/env.

Options:
  -i, --inputs TEXT  Glob of input files. Use repeatedly for multiple files.
  -cb, --clipboard   Copy code to clipboard.
  -nc, --no-cache    Don't use cache.
  -4, --gpt-4        Use GPT-4.
  --version          Show version.
  --help             Show this message and exit.
```
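
For example, the documented flags compose: `-i` takes a glob and can be repeated, and `-cb` copies the result to the clipboard. The paths below are placeholders:

```bash
llm-code -i 'src/*.py' -i 'tests/*.py' -cb "add docstrings to all python functions."
```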

## Changing OpenAI parameters

Any of the OpenAI request parameters can be overridden through environment variables. GPT-4 is a special case: for convenience, you can also select it with the `-4` flag.

```bash
export MAX_TOKENS=2000
export TEMPERATURE=0.5
export MODEL=gpt-4
```

or

```bash
llm-code -4 ...
```
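
Since these are ordinary environment variables, standard shell syntax also works for overriding them on a single invocation, without exporting anything:

```bash
TEMPERATURE=0 MAX_TOKENS=500 llm-code "write a binary search in python."
```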

## Caching

A common usage pattern is to examine the output of a model and either accept it or continue to play around with the prompt. When "accepting" the output, you typically append it to a file or copy it to the clipboard (using `pbcopy` on a Mac, for example). To support this inspect-and-accept workflow, `llm-code` caches the output of the model in a local SQLite database, so you can replay the same query without having to hit the OpenAI API.

```bash
llm-code 'write a function that takes a list of numbers and returns the sum of the numbers in python. Add type hints.'
```

Following this, assuming you like the output:

```bash
llm-code 'write a function that takes a list of numbers and returns the sum of the numbers in python. Add type hints.' > sum.py
```
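
The second invocation above is served from the cache. When you do want a fresh completion for the same prompt, the documented `-nc` flag skips the cache:

```bash
llm-code -nc 'write a function that takes a list of numbers and returns the sum of the numbers in python. Add type hints.'
```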

## Database

Borrowing simonw's excellent idea from [`llm`](https://github.com/simonw/llm), `llm-code` logs all queries to a local SQLite database. This is useful for a few reasons:

1. It allows you to replay the same query without having to hit the OpenAI API.
2. It allows you to see what queries you've made in the past, along with their responses and token counts (see the inspection sketch below).
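
Neither the database location nor its schema is documented here, so both are assumptions in the sketch below. The path is a guess based on `llm-code` keeping its env file under `~/.llm_code`, and the script discovers table names rather than assuming them; adjust to whatever your installation actually uses. A minimal peek with Python's built-in `sqlite3` module:

```python
import sqlite3
from pathlib import Path

# Assumed location -- llm-code keeps its env file in ~/.llm_code,
# so the database plausibly lives there too. Verify before relying on this.
DB_PATH = Path.home() / ".llm_code" / "db.sqlite"

conn = sqlite3.connect(DB_PATH)
conn.row_factory = sqlite3.Row

# List the tables first, since the schema is not documented.
tables = [r["name"] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
)]
print("tables:", tables)

# Dump the first few rows of each table to see what gets logged.
for table in tables:
    for row in conn.execute(f"SELECT * FROM {table} LIMIT 3"):
        print(table, dict(row))

conn.close()
```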

## Examples

Simple hello world.

```bash
llm-code write hello world in rust
```

```rust
fn main() {
    println!("Hello, world!");
}
```

---

Sum of a list of numbers, with type hints.

```bash
llm-code "write a function that takes a list of numbers and returns the sum of the numbers in python. Add type hints."
```

```python
from typing import List

def sum_numbers(numbers: List[int]) -> int:
    return sum(numbers)
```

---

Let's assume that we stuck the output of the previous call in `out.py`. We can now say:

```bash
llm-code -i out.py "add appropriate docstrings"
```

```python
from typing import List

def sum_numbers(numbers: List[int]) -> int:
    """Return the sum of the given list of numbers."""
    return sum(numbers)
```

---

Or we could write some unit tests.

```bash
llm-code -i out.py "write a complete unit test file using pytest."
```

```python
import pytest

from typing import List
from my_module import sum_numbers


def test_sum_numbers():
    assert sum_numbers([1, 2, 3]) == 6
    assert sum_numbers([-1, 0, 1]) == 0
    assert sum_numbers([]) == 0
```

## TODO

- [X] Add a simple cache to replay the same query.
- [X] Add logging to a local SQLite db.
- [ ] Add an `--exec` option to execute the generated code.
- [ ] Add a `--stats` option to output token counts.
- [X] Add `pyperclip` integration to copy to clipboard.

            
